
Category: Development

Articles about refactoring, code purity and new technologies.

Scala as backend language. Tips, tricks and pain

I inherited a legacy service written in Scala. The stack was: Play2, Scala, Slick, Postgres.

Below I describe why this technology stack is not the best option, what you can do to make it work better with less effort, and how to avoid the hidden pitfalls.

For the impatient:
If you have a choice – don’t use Slick.
If you have more freedom – don’t use Play.
And finally – try to avoid Scala on the back-end. It might be good for Spark applications, but not for back-ends.

Data layer

Every backend with persistent data needs a data layer.

From my experience, the best way to organize this code is the repository pattern. You have your entity (DAO) and a repository, which you access whenever you need to manipulate the data. Modern ORMs are your friends here: they do a lot of things for you.
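As a minimal sketch of the pattern (the names are hypothetical, for illustration only): the entity is a plain case class, and all data access goes through a repository interface.

case class User(id: Option[Long], firstName: String, lastName: String)

trait UserRepository {
  def find(id: Long): Option[User]   // read
  def save(user: User): User         // create / update
  def delete(id: Long): Unit         // remove
}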

Slick – back in 2010

That was my first thought when I started using it. In Java you can use Spring Data, which generates a repository implementation for you. All you need is to annotate your entity with JPA and write a repository interface.

Slick is a different story. It can work in two ways.

Manual definition

You define your entity as a case class, mentioning all needed fields and their types:

case class User(
    id: Option[Long],
    firstName: String,
    lastName: String
)

And then you manually repeat all the fields and their types while defining the schema:

class UserTable(tag: Tag) extends Table[User](tag, "user") {
    def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def firstName = column[String]("first_name")
    def lastName = column[String]("last_name")

    def * = (id.?, firstName, lastName) <> (User.tupled, User.unapply)
}

Nice. Like in ancient times. Forget about @Column automapping. When you have a DTO and need to add a field, you must remember to add it in three places: DTO, DAO and schema.

And have you seen the insert method implementation?

def create(name: String, age: Int): Future[Person] = db.run {
  (people.map(p => (p.name, p.age))
    returning people.map(_.id)
    into ((nameAge, id) => Person(id, nameAge._1, nameAge._2))
  ) += (name, age)
}

I am used to having the save method defined only once, somewhere in an abstract repository, as a one-liner: something like myFavouriteOrm.insert(new User(name, age)).

The full example is here: https://github.com/playframework/play-scala-slick-example

I don’t understand why Play’s authors say ORMs “will quickly become counter-productive“. Writing manual mappings in a real project becomes a pain much faster than any abstract “ORM counter-productivity“.

Code generation

The second approach is code generation. It scans your DB and generates the code based on it, like a reversed migration. I didn’t like this approach at all (it was used in the legacy code I inherited).

First, to make it work you need DB access at compile time, which is not always possible.

Second, if the backend owns the data, it should be responsible for the schema. It means the schema should come from the code, or code changes plus a migration with the schema changes should live in the same repository.

Third, have you seen the generated code? Lots of unnecessary classes, no formatting (400-600 characters per line), no way to modify these classes by adding some logic or extending an interface. I had to create my own data layer around this generated data layer 🙁

Ebean and some efforts to make it work

So, after fighting with Slick, I decided to remove it together with its data layer completely and use another technology. I selected Ebean, as it is the official ORM for Play2 + Java. It looks like the Play developers don’t like Hibernate for some reason.

An important thing to notice: it is a Java ORM, and Scala is not supported officially (its support was dropped a few years ago). So you need to put in some effort to make it work.

First of all, add the jaxb libraries to your dependencies. They are no longer shipped by default since Java 9, so on Java 9+ your app will crash at runtime without them.

libraryDependencies ++= Seq(
  "javax.xml.bind" % "jaxb-api" % "2.2.11",
  "com.sun.xml.bind" % "jaxb-core" % "2.2.11",
  "com.sun.xml.bind" % "jaxb-impl" % "2.2.11",
  "javax.activation" % "activation" % "1.1.1"
)

Next, do not forget to add the jdbc library and the driver library for your database.

After that you are ready to set up your data layer.

Entity

Write your entities as normal Java entities:

@Table(name = "master")
@Entity
class Master {
  @Id
  @GeneratedValue(strategy = GenerationType.AUTO)
  @Column(name = "master_id")
  var masterId: Int = _

  @Column(name = "master_name")
  var masterName: String = _

  @OneToMany(cascade = Array(CascadeType.MERGE))
  var pets: util.List[Pet] = new util.ArrayList[Pet]()
}

Basic Scala types are supported, but with several limitations:

  • You have to use java.util.List for one-to-many and many-to-many relationships. Scala’s ListBuffer is not supported, as Ebean doesn’t know how to de/serialize it. Neither is Scala’s List, as it is immutable and Ebean can’t populate it.
  • Primitives like Int or Double must not be nullable in the database. If a column is nullable, use java.lang.Double (/ Integer), or you will get an exception as soon as you try to load such an object from the database, because Scala’s Double is compiled to the double primitive, which can’t be null.
    Scala’s Option[Double] won’t work either, as the ORM will return null instead of an Option.
  • Relations are supported, including the bridge table, which is also created automatically. But, because of a bug, @JoinColumn can’t be specified.
  • Ebean uses Java lists, so you need scala.collection.JavaConverters every time you use lists in a query (like where.in) and every time you return a list (like findList) – see the sketch below.
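To illustrate that last point, here is a minimal sketch (a hypothetical finder on the Master entity from above) of the conversion in both directions:

import scala.collection.JavaConverters._

def findByNames(names: List[String]): List[Master] = {
  ebeanServer.find(classOf[Master])
    .where().in("master_name", names.asJavaCollection) // Scala -> Java for the query
    .findList()                                        // returns java.util.List
    .asScala.toList                                    // Java -> Scala for the caller
}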
Repository

Here is the (only) nice thing in Scala that is useful here: a trait can extend an abstract class. It means you can create your abstract CRUD repository and reuse it in business repositories – like you get out of the box in Spring Data 🙂

1. Create your abstract repository:

// imports this snippet relies on
import javax.inject.Inject

import io.ebean.{Ebean, EbeanServer}
import play.db.ebean.EbeanConfig

import scala.collection.JavaConverters._
import scala.reflect.{ClassTag, classTag}

class AbstractRepository[T: ClassTag] {
  var ebeanServer: EbeanServer = _

  @Inject()
  def setEbeanServer(ebeanConfig: EbeanConfig): Unit = {
    ebeanServer = Ebean.getServer(ebeanConfig.defaultServer())
  }

  def insert(item: T): T = {
    ebeanServer.insert(item)
    item
  }

  def update(item: T): T = {
    ebeanServer.update(item)
    item
  }

  def saveAll(items: List[T]): Unit = {
    ebeanServer.insertAll(items.asJavaCollection)
  }

  def listAll(): List[T] = {
    ebeanServer.find(classTag[T].runtimeClass.asInstanceOf[Class[T]])
      .where().findList().asScala.toList
  }

  def find(id: Any): Option[T] = {
    Option(ebeanServer.find(classTag[T].runtimeClass.asInstanceOf[Class[T]], id))
  }
}

You need classTag here to determine the class of the entity at runtime.

2. Create your business repository trait, extending this abstract repository:

@ImplementedBy(classOf[MasterRepositoryImpl])
trait MasterRepository extends AbstractRepository[Master] {
}

Here you can also declare special methods which will be used only in this repository.

In the implementation you only need to define the methods from MasterRepository. If there are none – just leave it empty. The methods from AbstractRepository will be accessible anyway.

@Singleton
class MasterRepositoryImpl extends MasterRepository {
}
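Usage is then trivial. A hypothetical service (MasterService and rename are my illustration, not project code) gets the whole CRUD for free:

@Singleton
class MasterService @Inject()(masterRepository: MasterRepository) {

  // find and update come from AbstractRepository, nothing was written by hand
  def rename(id: Long, newName: String): Option[Master] =
    masterRepository.find(id).map { master =>
      master.masterName = newName
      masterRepository.update(master)
    }
}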

After the data layer refactoring, ~70% of the code was removed. The main point here: functional stuff (FRM and other “modern” things) can be useful only when you don’t have business objects. E.g. you are creating a telecom back-end whose main purpose is to parse network packets, do something with their data and fire them to the next point of your data pipeline. In all other cases, when your business logic touches the real world, you need object-oriented design.

DTO layer

Another part of the system which disappointed me. This layer’s purpose is to receive messages from the outside (usually REST) and run actions based on the message type. Usually it means you get a message, parse it (usually from JSON) and pass it to the service layer, then take the service layer’s return value and send it outside as an encoded answer. Encoding and decoding messages (DTOs) is the main job here.

For some reason, working with JSON is unfriendly in Scala. And super unfriendly in Play2.

JSON serialization – not automated anymore

In normal frameworks, specifying the type of the object to be parsed is all you need to do. You specify the root object, and the request body is parsed and deserialized into it, including all sub-objects. E.g. build(@RequestBody RepositoryDTO body), taken from one of my open-source projects.

In Play you need to set up an implicit reader for every sub-object used in your DTO. If your MasterDTO contains a PetDTO, which contains a RoleDTO, you have to set up readers for all of them:

def createMaster: Action[AnyContent] = Action.async { request =>
    implicit val formatRole: OFormat[RoleDTO] = Json.format[RoleDTO]
    implicit val formatPet: OFormat[PetDTO] = Json.format[PetDTO]
    implicit val format: OFormat[MasterDTO] = Json.format[MasterDTO]
    val parsed = Json.fromJson(request.body.asJson.get)(format)
    val body: MasterDTO = parsed.getOrElse(null)
    // …
}

Maybe there is some automated way, but I haven’t found it. All approaches end up with getting the request’s body as JSON and parsing it manually.

JSON validation – more boilerplate for the god of boilerplate!

Play has its own modern functional way of data validation. In three steps only:

  1. Forget about javax.validation.
  2. Define your DTO as a case class. Here you write your field names and their types.
  3. Manually write the Form mapping, mentioning all the DTO’s field names and writing their types once again.

After Slick’s manual schema definition I expected something shitty. But it exceeded my expectations.

The example:

case class SomeDTO(id: Int, text: String, option: Option[Double])

def validationForm: Form[SomeDTO] = {
  import play.api.data.Forms._
  Form(
    mapping(
      "id" -> number,
      "text" -> nonEmptyText,
      "option" -> optional(of(doubleFormat))
    )(SomeDTO.apply)(SomeDTO.unapply)
  )
}

It is used like this:

    def failure(badForm: Form[_]) = {
      BadRequest(badForm.errorsAsJson(messagesProvider))
    }

    def success(input: SomeDTO) = {
      // your business logic here 
    }

    validationForm.bindFromRequest()(request).fold(failure, success)

JSON deserialization – forget about heterogeneity

This was the main problem with Play’s JSON implementation and the main reason I decided to get rid of it. Unfortunately, I didn’t find a quick way to remove it completely (it looks like it is hardcoded), so I use Play JSON for validation & DTO decoding and json4s for DTO encoding.

Why?

All my DTOs implement my JsonSerializable trait, and I have a few services which work with generic objects. Imagine DogDTO and CatDTO: they are different business entities, but some actions are common. To avoid code duplication I just pass them via the Pet trait to those services (like FeedPetService). They do their job and return a list of JsonSerializable objects (either Cat or Dog DTOs, based on the input type).

It turned out that Play can’t serialize a trait unless it is sealed – it requires an implicit writer to be set up explicitly. So after googling a bit I switched to json4s.

Now I have two lines of implementation for any DTO:

def toJson(elements: List[JsonSerializable]): String = {
  implicit val formats: AnyRef with Formats = Serialization.formats(NoTypeHints)
  Serialization.write(elements)
}

It is defined in a trait. Every companion object which extends this trait gets JSON serialization for its class’s objects out of the box.
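My reconstruction of that trait and a companion object using it (the exact names are assumptions based on the description above):

import org.json4s.{Formats, NoTypeHints}
import org.json4s.native.Serialization

trait JsonSerialization {
  def toJson(elements: List[JsonSerializable]): String = {
    implicit val formats: Formats = Serialization.formats(NoTypeHints)
    Serialization.write(elements)
  }
}

case class DogDTO(name: String) extends JsonSerializable

// DogDTO.toJson(dogs) now works out of the box
object DogDTO extends JsonSerialization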

Summing up

  • Slick’s creators call Slick a “Functional Relational Mapper” (FRM) and claim it has minimal-configuration advantages. As far as I can see, it is yet another unsuccessful attempt to create something with the “functional” buzzword. Out of my 10 years of experience, I spent around 4 in functional programming (Erlang) and saw a lot of dead projects which started as a “New Innovative Functional Approach”
  • Scala’s implicits are something magical that breaks the KISS principle and makes the code messy. Here is a very good thread about Scala implicits + Slick
  • Working with JSON in Play2 is pain.
Python & Graphql. Tips, tricks and performance improvements.


Recently I finished another GraphQL back-end, this time in Python. In this article I would like to tell you about all the difficulties I faced and the bottlenecks which can affect performance.

The technology stack: graphene + flask and sqlalchemy integration. Here is a piece of requirements.txt:

graphene
graphene_sqlalchemy
flask
flask-graphql
flask-sqlalchemy
flask-cors
injector
flask-injector

This allows me to map my database entities directly to GraphQL.

It looks like this:

The model:

class Color(db.Model):
  """color table"""
  __tablename__ = 'colors'

  color_id = Column(BigInteger().with_variant(sqlite.INTEGER(), 'sqlite'), primary_key=True)
  color_name = Column(String(50), nullable=False)
  color_r = Column(SmallInteger)
  color_g = Column(SmallInteger)
  color_b = Column(SmallInteger)

The node:

class ColorNode(SQLAlchemyObjectType):
  class Meta:
    model = colours.Color
    interfaces = (relay.Node,)

  color_id = graphene.Field(BigInt)

Everything is simple and nice.

But what are the problems?

Flask context

At the time of writing this article, I was unable to pass my context to GraphQL:

app.add_url_rule('/graphql',
                 view_func=GraphQLView.as_view('graphql',
                 schema=schema.schema,
                 graphiql=True,
                 context_value={'session': db.session})
                 )

This didn’t work for me: in the flask-graphql integration, the view’s context was replaced by the flask request.

Maybe this is fixed now, but I had to subclass GraphQLView to save the context:

class ContexedView(GraphQLView):
  context_value = None

  def get_context(self):
    context = super().get_context()
    if self.context_value:
      for k, v in self.context_value.items():
        setattr(context, k, v)
    return context

CORS support

It is the thing I always forget to add 🙂

For Python Flask, just add flask-cors to your requirements and set it up in your create_app method via CORS(app). That’s all.
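For reference, a minimal create_app with this wiring might look like this (a sketch, assuming a standard application factory):

from flask import Flask
from flask_cors import CORS

def create_app() -> Flask:
  app = Flask(__name__)
  CORS(app)  # enables cross-origin requests, including the OPTIONS preflight
  return app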

Bigint type

I had to create my own BigInt type, as I use bigint in the database as the primary key in some columns, and there were graphene errors when I tried to use the int type.

from graphene import Scalar
from graphql.language import ast  # AST nodes from graphql-core

class BigInt(Scalar):
  @staticmethod
  def serialize(num):
    return num

  @staticmethod
  def parse_literal(node):
    if isinstance(node, ast.StringValue) or isinstance(node, ast.IntValue):
      return int(node.value)

  @staticmethod
  def parse_value(value):
    return int(value)

Compound primary key

Also, graphene_sqlalchemy doesn’t support compound primary keys out of the box. I had one table with an (Int, Int, Date) primary key. To make it resolvable by id via Relay’s Node interface, I had to override the get_node method:

@classmethod
def get_node(cls, info, id):
  import datetime
  return super().get_node(info, eval(id))

The datetime import and eval are very important here: the node id is the string representation of the compound key tuple, and without them the date field would be just a string and nothing would work when querying the database.

Mutations with authorization

It was really easy to implement authorization for queries: all I needed was to add a Viewer object and write get_token and get_by_token methods, as I did many times in Java before.

But mutations are called bypassing the Viewer, and that’s natural for GraphQL.

I didn’t want to add authorization code to every mutation’s header, as it leads to code duplication and is a little dangerous: I might create a backdoor by simply forgetting to add this code.

So I subclassed mutation and reimplemented its mutate_and_get_payload like this:

from abc import abstractmethod

from graphene import relay

class AuthorizedMutation(relay.ClientIDMutation):
  class Meta:
    abstract = True

  @classmethod
  @abstractmethod
  def mutate_authorized(cls, root, info, **kwargs):
    pass

  @classmethod
  def mutate_and_get_payload(cls, root, info, **kwargs):
    # authorize user using info.context.headers.get('Authorization')
    return cls.mutate_authorized(root, info, **kwargs)

All my mutations subclass AuthorizedMutation and just implement their business logic in mutate_authorized. It is called only if the user was authorized.
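A concrete mutation then carries only business logic. A hypothetical example (CreateColor reuses the Color model and ColorNode from above; the field names are my assumptions):

import graphene

class CreateColor(AuthorizedMutation):
  class Input:
    color_name = graphene.String(required=True)

  color = graphene.Field(ColorNode)

  @classmethod
  def mutate_authorized(cls, root, info, color_name, **kwargs):
    # runs only for authorized users; the auth check lives in the base class
    color = Color(color_name=color_name)
    db.session.add(color)
    db.session.commit()
    return CreateColor(color=color)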

Sortable and Filterable connections

To have my data automatically sorted via the query in a connection (with sort options added to the schema), I had to subclass relay’s Connection and implement the get_query method (it is called by graphene_sqlalchemy).

from graphene_sqlalchemy import SQLAlchemyConnectionField

class SortedRelayConnection(relay.Connection):
  class Meta:
    abstract = True

  @classmethod
  def get_query(cls, info, **kwargs):
    return SQLAlchemyConnectionField.get_query(cls._meta.node._meta.model, info, **kwargs)

Then I decided to add dynamic filtering over every field, also extending the schema.

Out of the box graphene can’t do it, so I had to submit a PR https://github.com/graphql-python/graphene-sqlalchemy/pull/164 and subclass the connection once again:

class FilteredRelayConnection(relay.Connection):
  class Meta:
    abstract = True

  @classmethod
  def get_query(cls, info, **kwargs):
    return FilterableConnectionField.get_query(cls._meta.node._meta.model, info, **kwargs)

Where FilterableConnectionField was introduced in the PR.

Sentry middleware

We use Sentry as our error notification system, and it was hard to make it work with graphene. Sentry has good flask integration, but the problem with graphene is that it swallows exceptions, returning them as errors in the response.

I had to use my own middleware:

class SentryMiddleware(object):

  def __init__(self, sentry) -> None:
    self.sentry = sentry

  def resolve(self, next, root, info, **args):
    promise = next(root, info, **args)
    if promise.is_rejected:
      promise.catch(self.log_and_return)
    return promise

  def log_and_return(self, e):
    try:
      raise e
    except Exception:
      traceback.print_exc()
      if self.sentry.is_configured:
        if not issubclass(type(e), NotImportantUserError):
          self.sentry.captureException()
    return e

It is registered when creating the GraphQL route:

app.add_url_rule('/graphql',
                 view_func=ContexedView.as_view('graphql',
                                                schema=schema.schema,
                                                graphiql=True,
                                                context_value={'session': db.session},
                                                middleware=[SentryMiddleware(sentry)]
                                                )
                 )

Low performance with relations

Everything was fine, tests were green and I was happy, till my application hit the dev environment with real amounts of data. Everything was super slow.

The problem was in sqlalchemy’s relations. They are lazy by default: https://docs.sqlalchemy.org/en/latest/orm/loading_relationships.html

It means that if you have a graph with 3 relations, Master -> Pet -> Food, and query them all, the first query will fetch all masters (select * from masters). Say you received 20. Then for each master there will be a query (select * from pets where master_id = ?) – 20 more queries. And finally N food queries, based on what the pet queries return.

My advice here: if you have complex relations and lots of data (I was writing a back-end for the big data world), make your relations eager. The query itself will be heavier, but there will be only one, reducing the response time dramatically.
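With sqlalchemy you can flip a relation to eager right on the model. A sketch (the Master -> Pet relation here is hypothetical, matching the example above):

from sqlalchemy.orm import relationship

class Master(db.Model):
  __tablename__ = 'masters'

  master_id = Column(BigInteger, primary_key=True)
  # lazy='joined' fetches pets in the same query via LEFT OUTER JOIN,
  # instead of issuing one extra SELECT per master (the N+1 problem)
  pets = relationship('Pet', lazy='joined')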

Performance improvement with custom queries

After I made my critical relations eager (not all of them – I had to study the front-end app to understand what it queries and how), everything worked faster, but not fast enough. I looked at the generated queries and was a bit frightened – they were monstrous! I had to write my own, optimized queries for some nodes.

E.g. suppose I have a PlanMonthly entity with several OrderColorDistributions, each of them having one Order.

I can use subqueries to limit the data (remember, I am writing a back-end for big data) and populate the relations with data I already have (it was in the query anyway, so there was no need for the eager joins generated by the ORM). This lightens the request.

Steps:

  1. Mark subqueries with with_labels=True
  2. Use the root entity (for this request) as the return type:
    Order.query \
      .filter(<low level filtering here>) \
      .join(<join another table, which you can use later>) \
      .join(ocr_query, Order.order_id == ocr_query.c.order_color_distribution_order_id) \
      .join(date_limit_query,
            and_(ocr_query.c.order_color_distribution_color_id == date_limit_query.c.plans_monthly_color_id,
                 ocr_query.c.order_color_distribution_date == date_limit_query.c.plans_monthly_date,
                 <another table joined previously> == date_limit_query.c.plans_monthly_group_id))
  3. Use contains_eager on all first-level relations.
    query = query.options(contains_eager(Order.color_distributions, alias=ocr_query))
  4. If you have a second level of relations (Order -> OrderColorDistribution -> PlanMonthly), chain contains_eager:
    query = query.options(contains_eager(Order.color_distributions, alias=ocr_query)
                 .contains_eager(OrderColorDistribution.plan, alias=date_limit_query))

Reducing number of calls to the database

Besides the data rendering level, I have my service layer, which knows nothing about GraphQL. And I am not going to introduce it there, as I don’t like high coupling.

But each service needs the fetched months data. To fetch this data only once and have it available in all services, I use injector with @request scope. Remember this scope – it is your friend in GraphQL.

It works like a singleton, but only within one request to /graphql. In my connection I just populate it with the plans found via the GraphQL query (including all custom filters and ranges from the front-end):

app.injector.get(FutureMonthCache).set_months(found)
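The binding itself can be configured via flask-injector. A sketch (the wiring below is my assumption; the request scope is real flask-injector API):

from flask_injector import FlaskInjector, request

def configure(binder):
  # one FutureMonthCache instance per request to /graphql
  binder.bind(FutureMonthCache, scope=request)

FlaskInjector(app=app, modules=[configure])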

Then, in all services which need to access this data, I just use this cache:

@inject
def __init__(self,
             prediction_service: PredictionService,
             price_calculator: PriceCalculator,
             future_month_cache: FutureMonthCache) -> None:
  super().__init__(future_month_cache)
  self._prediction_service = prediction_service
  self._price_calculator = price_calculator

Another nice thing: all my services which manipulate data and form the response also have @request scope, so I don’t need to calculate predictions for every month – I take them all from the cache, do one query and store the results. Moreover, one service can rely on another service’s calculated data. Request scope helps a lot here, as it allows me to calculate all the data only once.

On the node side I call my request-scoped services via a resolver:

def resolve_predicted_pieces(self, _info):
  return app.injector.get(PredictionCalculator).get_original_future_value(self)

It allows me to run the heavy calculations only if predicted_pieces was requested in the GraphQL query.

Summing up

Those are all the difficulties I’ve faced. I haven’t tried websocket subscriptions, but from what I’ve learned I can say that Python’s GraphQL is more flexible than Java’s, because of Python’s own flexibility. But if I were building a high-load back-end, I would prefer not to use GraphQL, as it is harder to optimize.

GraphQL with Spring: Query and Pagination

In this article I’ll show you how to use GraphQL with Spring with this library. The full example is available here.

Why annotations?

From my point of view, the schema should not be written manually, simply because it is easy to make a mistake. The schema should be generated from the code instead. And your IDE can help you here, checking types and typos in names.

Nearly always, the GraphQL schema has the same structure as the back-end data models, because the back-end is closer to the data. So it is much easier to annotate your data models and keep the schema in sync than to write the schema manually (maybe on the front-end side) and then create bridges between that schema and the existing data models.

Add library and create core beans

The first thing to do is to add the library to your Spring Boot project. I assume you’ve already added web, so just add graphql to your build.gradle:

compile('io.github.graphql-java:graphql-java-annotations:5.2')

The GraphQL object is the entry point for executing GraphQL queries. To build it, we need to provide a schema and an execution strategy.
Let’s create a schema bean in your Configuration class:

@Bean
public GraphQLSchema schema() {
    GraphQLAnnotations.register(new ZonedDateTimeTypeFunction());
    return newSchema()
            .query(GraphQLAnnotations.object(QueryDto.class))
            .mutation(GraphQLAnnotations.object(MutationDto.class))
            .subscription(GraphQLAnnotations.object(SubscriptionDto.class))
            .build();
}

Here we register a custom ZonedDateTime type function to convert a Java ZonedDateTime to a string with the format yyyy-MM-dd'T'HH:mm:ss.SSSZ and back.

Then we use a builder to create a new schema with query, mutation and subscription. This tutorial covers the query only.

Building a schema is not so cheap, so it should be done only once. GraphQLAnnotations will scan your source tree, starting from QueryDto and going through its properties and methods, building the schema for you.

After the schema is ready, you can create a GraphQL bean:

@Bean
public GraphQL graphQL(GraphQLSchema schema) {
    return GraphQL.newGraphQL(schema)
            .queryExecutionStrategy(new EnhancedExecutionStrategy())
            .build();
}

According to the documentation, building a GraphQL object is cheap and can be done per request, if required. I didn’t need that, but you can put the prototype scope on it.

I’ve used EnhancedExecutionStrategy to have the ClientMutationId inserted automatically, to support Relay mutations.

Create controller with CORS support

You will receive your graphql request as an ordinary POST request on /graphql:

@CrossOrigin
@RequestMapping(path = "/graphql", method = RequestMethod.POST)
public CompletableFuture<ResponseEntity<?>> getTransaction(@RequestBody String query) {
    CompletableFuture<?> respond = graphqlService.executeQuery(query);
    return respond.thenApply(r -> new ResponseEntity<>(r, HttpStatus.OK));
}

It should always return the ExecutionResult and HTTP OK, even if there is an error!
It is also very important to support the OPTIONS request – some front-end GraphQL frameworks send it before sending the POST with data.
In Spring, all you need is to add the @CrossOrigin annotation.

Execute graphql with spring application context

You can get your query in two formats: JSON with variables:

{"query":"query SomeQuery($pagination: InputPagination) { viewer { someMethod(pagination: $pagination) { data { inner data } } } }","variables":{"pagination":{"pageSize":50,"currentPage":1}}}

or plain GraphQL query:

query SomeQuery {
 viewer {
   someMethod(pagination: {pageSize:50, currentPage:1}) {
     data { inner data }
   }
 }
}

The best way to convert both formats into one is to use this inner class:

private class InputQuery {
    String query;
    Map<String, Object> variables = new HashMap<>();
    InputQuery(String query) {
        ObjectMapper mapper = new ObjectMapper();
        try {
            Map<String, Object> jsonBody = mapper.readValue(query, new TypeReference<Map<String, Object>>() {
            });
            this.query = (String) jsonBody.get("query");
            this.variables = (Map<String, Object>) jsonBody.get("variables");
        } catch (IOException ignored) {
            this.query = query;
        }
    }
}

Here we parse the JSON first. If it parses, we take the query together with its variables. If not, we assume the input string is a plain query.
To execute your query, construct a GraphQL execution input and pass it to the execute method of your GraphQL object.

@Async
@Transactional
public CompletableFuture<ExecutionResult> executeQuery(String query) {
    InputQuery queryObject = new InputQuery(query);
    ExecutionInput executionInput = ExecutionInput.newExecutionInput()
            .query(queryObject.query)
            .context(appContext)
            .variables(queryObject.variables)
            .root(mutationDto)
            .build();
    return CompletableFuture.completedFuture(graphQL.execute(executionInput));
}

Where:

@Autowired
private ApplicationContext appContext;
@Autowired
private GraphQL graphQL;
@Autowired
private MutationDto mutationDto;

appContext is the Spring application context. It is used as the execution input context in order to access Spring beans from GraphQL objects.
GraphQL is your bean, created earlier.
MutationDto is your mutation. I’ll cover it in another tutorial.

The query

The query is the entry point for your GraphQL request.

@GraphQLName("Query")
public class QueryDto

I use the Dto suffix for all GraphQL objects to separate them from data objects. However, this suffix is redundant in the schema, so the @GraphQLName annotation is used.

@GraphQLField
public static TableDto getFreeTable(DataFetchingEnvironment environment) {
    ApplicationContext context = environment.getContext();
    DeskRepositoryService repositoryService = context.getBean(DeskRepositoryService.class);
    return repositoryService.getRandomFreeDesk().map(TableDto::new).orElse(null);
}

Every public static method in QueryDto annotated with @GraphQLField will be available in a query:

query {
 getFreeTable {
   tableId name
 }
}

GraphQL Objects

Your query returns TableDto, which is your GraphQL object.

The difference between QueryDto and a normal TableDto is that the first one is always static, while objects are instantiated. In the listing above it is created from a Desk.

To make the fields and methods of a created object visible to the query, you should make them public and annotate them with @GraphQLField.

In the case of properties, you can leave them private. The GraphQL library will access them anyway:

@GraphQLNonNull
@GraphQLField
private Long tableId;
@GraphQLNonNull
@GraphQLField
private String name;

@GraphQLField
public String getWaiterName(DataFetchingEnvironment environment) {
     //TODO use context to retrieve waiter.
    return "default";
}

DataFetchingEnvironment will be automatically filled in by the GraphQL Annotations library if added to the function’s arguments. You can skip it if it is not needed:

@GraphQLField
public String getWaiterName() {
    return "default";
}

You can also use any other arguments including objects:

@GraphQLField
public String getWaiterName(DataFetchingEnvironment environment, String string, MealDto meal) {
     //TODO use context to retrieve waiter.
    return "default";
}

You can use @GraphQLNonNull to make any argument required.

Relay compatibility

Every object should implement the Node interface, which has a non-null id:

@GraphQLTypeResolver(ClassTypeResolver.class)
public interface Node {
    @GraphQLField
    @GraphQLNonNull
    String id();
}

ClassTypeResolver allows GraphQL to include the interface in your schema.
I usually use the class name + the class id as the Node id. Here is the AbstractId every object extends.

Then in the TableDto constructor I will use: super(TableDto.class, desk.getDeskId().toString());
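A minimal reconstruction of AbstractId (its real shape may differ; the encoding is my assumption, chosen to match the decodeId usage below):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public abstract class AbstractId implements Node {
    private final String id;

    protected AbstractId(Class<?> clazz, String entityId) {
        // the Node id packs the class name and the entity id together
        this.id = Base64.getEncoder()
                .encodeToString((clazz.getName() + ":" + entityId)
                        .getBytes(StandardCharsets.UTF_8));
    }

    @Override
    public String id() {
        return id;
    }
}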

To be able to get a Table by its Node id, let’s use this:

public static TableDto getById(DataFetchingEnvironment environment, String id) {
    ApplicationContext context = environment.getContext();
    DeskRepositoryService repositoryService = context.getBean(DeskRepositoryService.class);
    return repositoryService.findById(Long.valueOf(id)).map(TableDto::new).orElse(null);
}

It is called from QueryDto:

@GraphQLField
public static Node node(DataFetchingEnvironment environment, @GraphQLNonNull String id) {
    String[] decoded = decodeId(id);
    if (decoded[0].equals(TableDto.class.getName()))
        return TableDto.getById(environment, decoded[1]);
    if (decoded[0].equals(ReservationDto.class.getName()))
        return ReservationDto.getById(environment, decoded[1]);
    log.error("Don't know how to get {}", decoded[0]);
    throw new RuntimeException("Don't know how to get " + decoded[0]);
}

by this query: query {node(id: "unique_graphql_id") {... on Table { reservations {edges {node {guest from to}} }}}}

The pagination

To support pagination, your method should return PaginatedData<YourClass> and have the additional annotation @GraphQLConnection:

@GraphQLField
@GraphQLConnection
@GraphQLName("allTables")
public static PaginatedData<TableDto> getAllTables(DataFetchingEnvironment environment) {
    ApplicationContext context = environment.getContext();
    DeskRepositoryService repositoryService = context.getBean(DeskRepositoryService.class);
    Page page = new Page(environment);
    List<Desk> allDesks;
    if(page.applyPagination()) {
        allDesks = repositoryService.findAll(); // TODO apply pagination!
    } else {
        allDesks = repositoryService.findAll();
    }
    List<TableDto> tables = allDesks.stream().map(TableDto::new).collect(Collectors.toList());
    return new AbstractPaginatedData<TableDto>(false, false, tables) {
        @Override
        public String getCursor(TableDto entity) {
            return entity.id();
        }
    };
}

Here I create a Page using the default GraphQL pagination variables. But you can use any other input for pagination you like.

To implement pagination you have two options:

  • In-memory pagination. You retrieve all objects from your repository and then paginate, filter and sort them in memory. With this solution it is much better to create your own implementation of PaginatedData and pass the environment, as well as the pagination/sorting/filtering input, into it.
  • Actions in the repository. This is much better, as you won’t load lots of objects into memory, but it requires you to generate complex queries in the repository (a sketch follows this list).
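A sketch of the repository option, assuming a Spring Data style repository (deskRepository and the Page getters are hypothetical wiring, not part of the GraphQL library):

import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;

// the database applies LIMIT/OFFSET, so only one page is loaded into memory
Pageable pageable = PageRequest.of(page.getPageNumber(), page.getPageSize());
List<TableDto> tables = deskRepository.findAll(pageable)
        .map(TableDto::new)
        .getContent();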

All GraphQL objects can have methods with pagination. In TableDto you can retrieve a paginated list of all reservations for the current table with:

@GraphQLField
@GraphQLConnection
public PaginatedData<ReservationDto> reservations(DataFetchingEnvironment environment) {

Next steps

In the next article I will cover mutations and subscriptions.

Create Erlang service with Enot

This is a step-by-step guide on how to create your Erlang service with the powerful Enot build system. Ensure you have Enot and Erlang installed locally.

As a result you will build an http service which listens on port 8080 and responds to every request with JSON containing the service’s version and the MAGIC_GREETING operating system environment variable, or a simple “hello world”.
The complete source code is available on GitHub.

Read More

Dynamic environment configuration (Erlang)

Most Erlang applications require configuration variables: database credentials, metrics report hosts, some system limits and so on. Such variables can be set in the application resource configuration file and accessed with application:get_env. You should prefer dynamic configuration over a hardcoded one. But what if the properties you need depend on the environment?
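For example, a minimal sketch for a hypothetical my_service application:

%% sys.config – static configuration
[{my_service, [
  {db_host, "localhost"},
  {db_password, "secret"}
]}].

%% somewhere in the code:
%% {ok, Host} = application:get_env(my_service, db_host).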

Assume you have a compound service which uses a database and some microservices, and also sends push notifications to mobile phones. All these configuration properties depend on the environment: you have different microservice hosts/ports, auth tokens, certificates and database passwords in production, staging and testing. You don’t use the same credentials everywhere, do you? How do you deal with this and handle configuration changes gracefully? (Quick hint: the coolest stuff is at the end of the article, so if you don’t like reading comparisons of different approaches – just skip to it.) Let’s explore all the options.

Read More