Scala as backend language. Tips, tricks and pain

I inherited a legacy service written in Scala. The stack was: Play2, Scala, Slick, Postgres.

Below I describe why this technology stack is not the best option, what you can do to make it work better with less effort, and how to avoid the hidden pitfalls.

For the impatient:
If you have a choice – don’t use Slick.
If you have more freedom – don’t use Play.
And finally – try to avoid Scala on the back-end. It might be good for Spark applications, but not for backends.

Data layer

Every backend with persistent data needs a data layer.

In my experience the best way to organize this code is the repository pattern: you have your entity (DAO) and a repository, which you access whenever you need to manipulate data. Modern ORMs are your friends here – they do a lot of things for you.

Slick – back in 2010

That was my first thought when I started using it. In Java you can use Spring Data, which generates a repository implementation for you. All you need is to annotate your entity with JPA annotations and write a repository interface.

Slick is another thing. It can work in two ways.

Manual definition

You define your entity as a case class, mentioning all needed fields and their types:

case class User(
    id: Option[Long],
    firstName: String,
    lastName: String
)

And then you manually repeat all the fields and their types when defining the schema:

class UserTable(tag: Tag) extends Table[User](tag, "user") {
    def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def firstName = column[String]("first_name")
    def lastName = column[String]("last_name")

    def * = (id.?, firstName, lastName) <> (User.tupled, User.unapply)
}

Nice – like in ancient times. Forget about @Column auto-mapping. If you have a DTO and need to add a field, you have to remember to update it in three places: the DTO, the DAO and the schema.

And have you seen the insert method implementation?

def create(name: String, age: Int): Future[Person] = db.run {
  (people.map(p => (p.name, p.age))
    returning people.map(_.id)
    into ((nameAge, id) => Person(id, nameAge._1, nameAge._2))
  ) += (name, age)
}

I am used to having a save method defined only once, somewhere in an abstract repository, as a one-liner – something like myFavouriteOrm.insert(new User(name, age)).

Full example is here: https://github.com/playframework/play-scala-slick-example

I don’t understand why Play’s authors say ORMs “will quickly become counter-productive“. Writing manual mappings on real projects becomes a pain much faster than any abstract “ORM counter-productivity“.

Code generation

The second approach is code generation. It scans your DB and generates code based on it – like a reversed migration. I didn’t like this approach at all (it was used in the legacy code I inherited).

First, to make it work you need DB access at compile time, which is not always possible.

Second, if the backend owns the data, it should be responsible for the schema. That means the schema should come from code, or code changes plus a migration with schema changes should live in the same repository.

Third, have you seen the generated code? Lots of unnecessary classes, no formatting (400-600 characters per line), no ability to modify these classes by adding some logic or extending an interface. I had to create my own data layer around this generated data layer 🙁

Ebean and some efforts to make it work

So, after fighting with Slick, I decided to remove it, together with the data layer, completely and use another technology. I selected Ebean, as it is the official ORM for Play2 + Java. It looks like the Play developers don’t like Hibernate for some reason.

An important thing to notice: it is a Java ORM, and Scala is not supported officially (its support was dropped a few years ago), so you need to apply some effort to make it work.

First of all – add the JAXB libraries to your dependencies. They were removed from the JDK in Java 9, so on Java 9+ your app will crash at runtime without them.

libraryDependencies ++= Seq(
  "javax.xml.bind" % "jaxb-api" % "2.2.11",
  "com.sun.xml.bind" % "jaxb-core" % "2.2.11",
  "com.sun.xml.bind" % "jaxb-impl" % "2.2.11",
  "javax.activation" % "activation" % "1.1.1"
)

Next – do not forget to add the jdbc library and the driver library for your database.
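
A hedged sketch of what that could look like in build.sbt, assuming Play’s jdbc component and Postgres (the driver version is illustrative):

libraryDependencies ++= Seq(
  jdbc, // Play's JDBC component, from play.sbt.PlayImport
  "org.postgresql" % "postgresql" % "42.2.5" // your database's driver
)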

After that you are ready to set up your data layer.

Entity

Write your entities as normal Java JPA entities:

@Table(name = "master")
@Entity
class Master {
  @Id
  @GeneratedValue(strategy = GenerationType.AUTO)
  @Column(name = "master_id")
  var masterId: Int = _

  @Column(name = "master_name")
  var masterName: String = _

  @OneToMany(cascade = Array(CascadeType.MERGE))
  var pets: util.List[Pet] = new util.ArrayList[Pet]()
}

Basic Scala types are supported, but with several limitations:

  • You have to use java.util.List in case of a one/many-to-many relationship. Scala’s ListBuffer is not supported, as Ebean doesn’t know how to de/serialize it; Scala’s List isn’t either, as it is immutable and Ebean can’t populate it.
  • Primitives like Int or Double must not be nullable in the database. If a column is nullable, use java.lang.Double (or java.lang.Integer), or you will get an exception as soon as you try to load such an object from the database, because Scala’s Double compiles to the double primitive, which can’t be null. Scala’s Option[Double] won’t work either, as the ORM would return null instead of an Option.
  • Relations are supported, including the bridge table, which is also created automatically. But, because of a bug, @JoinColumn can’t be specified.
  • Ebean uses Java lists, so you need scala.collection.JavaConverters every time you use a list in a query (like where.in) and every time you return a list (like findList) – see the sketch below.
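
A minimal sketch (not from the original project) of those conversions, assuming the Master entity above and an io.ebean.EbeanServer in scope:

import scala.collection.JavaConverters._

def findByNames(ebeanServer: EbeanServer, names: List[String]): List[Master] = {
  ebeanServer.find(classOf[Master])
    .where().in("master_name", names.asJava) // Scala -> Java for the query
    .findList().asScala.toList               // Java -> Scala for the result
}
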
Repository

Here is one (maybe the only) nice thing in Scala which is useful here: a trait can extend an abstract class. It means you can create your abstract CRUD repository and reuse it in business repositories – like you get out of the box in Spring Data 🙂

1. Create your abstract repository:

// Imports the snippet needs (paths assume a recent play-ebean with io.ebean;
// older versions live under com.avaje.ebean).
import javax.inject.Inject
import io.ebean.{Ebean, EbeanServer}
import play.db.ebean.EbeanConfig
import scala.collection.JavaConverters._
import scala.reflect.{ClassTag, classTag}

class AbstractRepository[T: ClassTag] {
  var ebeanServer: EbeanServer = _

  @Inject()
  def setEbeanServer(ebeanConfig: EbeanConfig): Unit = {
    ebeanServer = Ebean.getServer(ebeanConfig.defaultServer())
  }

  def insert(item: T): T = {
    ebeanServer.insert(item)
    item
  }

  def update(item: T): T = {
    ebeanServer.update(item)
    item
  }

  def saveAll(items: List[T]): Unit = {
    ebeanServer.insertAll(items.asJavaCollection)
  }

  def listAll(): List[T] = {
    ebeanServer.find(classTag[T].runtimeClass.asInstanceOf[Class[T]])
      .where().findList().asScala.toList
  }

  def find(id: Any): Option[T] = {
    Option(ebeanServer.find(classTag[T].runtimeClass.asInstanceOf[Class[T]], id))
  }
}

You need classTag here to determine the entity’s class at runtime.

2. Create your business repository trait, extending this abstract repository:

@ImplementedBy(classOf[MasterRepositoryImpl])
trait MasterRepository extends AbstractRepository[Master] {
}

Here you can also declare special methods that will be used only in this repository.

In the implementation you need to define only the methods from MasterRepository. If there are none, just leave it empty – the methods from AbstractRepository will be accessible anyway.

@Singleton
class MasterRepositoryImpl extends MasterRepository {
}
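
A hypothetical usage sketch (MasterService is my name, not from the project): Guice resolves MasterRepository to MasterRepositoryImpl via the @ImplementedBy annotation, so the abstract CRUD methods are available immediately.

import javax.inject.{Inject, Singleton}

@Singleton
class MasterService @Inject()(masterRepository: MasterRepository) {
  // listAll() comes from AbstractRepository, for free.
  def allMasters(): List[Master] = masterRepository.listAll()
}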

After the data layer refactoring, ~70% of the code was removed. The main point here: functional stuff (FRM and other “modern” things) can be useful only when you don’t have business objects. E.g. you are creating a telecom back-end whose main intent is to parse network packets, do something with their data and fire them to the next point of your data pipeline. In all other cases, when your business logic touches the real world, you need object-oriented design.

Bugs and workarounds

I’ve recently faced two bugs which I would like to mention. Both are connected with the Ebean-Play integration.

First: sometimes the application fails to start because it can’t find an Ebean class. It is connected with logback.xml, but I am not sure how. My breaking change was adding Sentry‘s logback integration.

There are two solutions:

  • Some people fix it just by playing with logback.xml – removing or changing appenders. That doesn’t look very reliable.
  • Another workaround is to inject EbeanDynamicEvolutions into your repository (AbstractRepository is the best place); you don’t need to actually use it – see the sketch below. I think the issue is connected with Play’s attempts to run evolutions on start; the connection to logback remains unclear.
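
A sketch of the second workaround, assuming play-ebean’s play.db.ebean.EbeanDynamicEvolutions; the injected value is never used, it only forces Play to finish the Ebean evolutions before the repository is constructed:

import javax.inject.Inject
import play.db.ebean.EbeanDynamicEvolutions

class AbstractRepository[T: ClassTag] {
  var dynamicEvolutions: EbeanDynamicEvolutions = _

  @Inject()
  def setDynamicEvolutions(evolutions: EbeanDynamicEvolutions): Unit = {
    dynamicEvolutions = evolutions
  }

  // ... the rest of the repository stays as above
}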

Second: a connection pool leak in Ebean. It looks very serious, but it is not – don’t worry, it appears only in tests. For some reason, during a Play integration test Ebean starts two connection pools but stops only one. According to the logs it is also connected with evolutions: Ebean starts a pool to run evolutions before the test, then starts the application (with another pool), checks evolutions once again (why?), runs the test case and closes the application’s pool on teardown. The first pool remains connected.

If you are using Postgres, you won’t be able to connect to it after ~10 tests. I’ve created an issue, but for now it is in the backlog 🙁

There is only one workaround: increase the number of connections at the database level. For Postgres, add the -N 500 flag.

DTO layer

Another part of the system that disappointed me. This layer’s intent is to receive messages from outside (usually REST) and run actions based on the message type. Usually it means you get a message, parse it (usually from JSON) and pass it to the service layer; then you take the service layer’s return value and send it outside as an encoded answer. Encoding and decoding messages (DTOs) is the main job here.

For some reason working with JSON is unfriendly in Scala. And super unfriendly in Play2.

JSON deserialization – not automated anymore

In normal frameworks, specifying the type of the object to be parsed is all you need to do: you specify the root object, and the request body is parsed and deserialized into it, including all sub-objects. E.g. build(@RequestBody RepositoryDTO body), taken from one of my open-source projects.

In Play you need to set up an implicit reader for every sub-object used in your DTO. If your MasterDTO contains a PetDTO, which contains a RoleDTO, you have to set up readers for all of them:

def createMaster: Action[AnyContent] = Action.async { request =>
    implicit val formatRole: OFormat[RoleDTO] = Json.format[RoleDTO]
    implicit val formatPet: OFormat[PetDTO] = Json.format[PetDTO]
    implicit val format: OFormat[MasterDTO] = Json.format[MasterDTO]
    val parsed = Json.fromJson(request.body.asJson.get)(format)
    val body: MasterDTO = parsed.getOrElse(null)
    // …
}

Maybe there is some automated way, but I haven’t found it. All the approaches I tried end up with getting the request’s body as JSON and parsing it manually.

Finally I ended up with json4s, parsing objects like this:

JsonMethods.parse(request.body.asJson.get.toString()).extract[MasterDTO]
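
The one-liner assumes a json4s Formats in implicit scope. A minimal sketch of that setup (json4s-jackson flavour assumed):

import org.json4s.DefaultFormats
import org.json4s.jackson.JsonMethods

implicit val formats: DefaultFormats.type = DefaultFormats // required by extract[T]

val body: MasterDTO =
  JsonMethods.parse(request.body.asJson.get.toString()).extract[MasterDTO]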

What I still don’t like here: you have to get the body as JSON, convert it to a string and parse it one more time. I am lucky this project is not realtime – if yours is, think twice before doing so.

JSON validation – more boilerplate for the god of boilerplate!

Play has its own modern functional way of data validation. In three steps only:

  1. Forget about javax.validation.
  2. Define your DTO as a case class, writing the field names and their types.
  3. Manually write a Form mapping, mentioning all the DTO’s field names and writing their types once again.

After Slick’s manual schema definition I expected something shitty. But this exceeded my expectations.

The example:

case class SomeDTO(id: Int, text: String, option: Option[Double])

def validationForm: Form[SomeDTO] = {
  import play.api.data.Forms._
  import play.api.data.format.Formats._ // provides doubleFormat
  Form(
    mapping(
      "id" -> number,
      "text" -> nonEmptyText,
      "option" -> optional(of(doubleFormat))
    )(SomeDTO.apply)(SomeDTO.unapply)
  )
}

It is used like this:

def failure(badForm: Form[_]) = {
  BadRequest(badForm.errorsAsJson(messagesProvider))
}

def success(input: SomeDTO) = {
  // your business logic here
}

validationForm.bindFromRequest()(request).fold(failure, success)

JSON serialization – forget about heterogeneity

This was the main problem with Play’s JSON implementation and the main reason I decided to get rid of it. Unfortunately, I haven’t found a quick way to remove it completely (it looks like it is hardcoded) and replace it with json4s.

All my DTOs implement my JsonSerializable trait, and I have a few services which work with generic objects. Imagine DogDTO and CatDTO: they are different business entities, but some actions are common. To avoid code duplication I just pass them as the Pet trait to those services (like FeedPetService). They do their job and return a List of JsonSerializable objects (either Cat or Dog DTOs, based on the input type).

It turned out that Play can’t serialize a trait if it is not sealed – it requires an implicit writer to be set up explicitly. So after googling a bit I switched to json4s.

Now I have a two-line implementation for any DTO:

def toJson(elements: List[JsonSerializable]): String = {
  implicit val formats: AnyRef with Formats = Serialization.formats(NoTypeHints)
  Serialization.write(elements)
}

It is defined in a trait; every companion object which extends this trait gets JSON serialization for its class’s objects out of the box. A sketch of the wiring follows.
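
A sketch of how that wiring might look (JsonSupport and DogDTO are illustrative names, not from the project):

import org.json4s.{Formats, NoTypeHints}
import org.json4s.jackson.Serialization

trait JsonSerializable

trait JsonSupport {
  def toJson(elements: List[JsonSerializable]): String = {
    implicit val formats: Formats = Serialization.formats(NoTypeHints)
    Serialization.write(elements)
  }
}

case class DogDTO(name: String) extends JsonSerializable
object DogDTO extends JsonSupport // DogDTO.toJson(...) now works out of the box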

Summing up

  • Slick’s creators call Slick a “Functional Relational Mapper” (FRM) and claim it has minimal-configuration advantages. As far as I can see, it is yet another unsuccessful attempt to create something around the “Functional” buzzword. Out of my 10 years of experience I spent around 4 in functional programming (Erlang) and saw a lot of dead projects which started as a “New Innovative Functional Approach”.
  • Scala’s implicits are something magical that breaks the KISS principle and makes the code messy. There is a very good thread about Scala implicits + Slick.
  • Working with JSON in Play2 is pain.
Code Structural Patterns (Erlang)

    With this article I would like to start a series on “Writing optimal and effective code in Erlang”. These articles describe my point of view on the topic, based on my experience in real-world software development. I was inspired by the marvelous book “Refactoring” by Martin Fowler and co-authors – if you haven’t read it yet, you definitely should; put it on top of your reading list, right after this article. In that book the authors describe a number of methods for simplifying and optimizing object-oriented code. Of course, if you have wide experience, most of the methods will be obvious, but it made me think: why don’t we have something similar for functional programming, applied to Erlang?
And here we are: in this first article I describe some insights on Code Structural Patterns in Erlang which speed up development and help to create effective code.

Code Structural Pattern in Erlang

    A Code Structural Pattern is a general, reusable solution to a commonly occurring situation/entity (don’t confuse it with an OOP entity) within a given context in code structure. It is not necessarily a process or behaviour implementation, but it can be recognized as repeated pieces of code. Don’t confuse these with OOP Design Patterns either, since we don’t have objects in Erlang. The main rule is: one general structural pattern = one .erl module. Using correct naming for the different structural patterns makes your program easier to understand for other people and for you. I single out 6 main Structural Patterns in Erlang.

I. Application

    The main module, which implements the application behaviour. Execution starts at the start/2 callback when application:start is called, or when your project is included in some other program’s .app.src.
It is the root of your program: it starts the top supervisor and often does some useful init work:

%% the start/2 callback of the application behaviour
start(_Type, _Args) ->
    case your_top_sup:start_link() of
        {ok, _} = Res ->
            ok = metrics_mngr:init(),
            ok = http_handler_mngr:init(),
            Res;
        Error ->
            log:err(Error),
            Error
    end.

    As it is the main entry point of your program, and since it runs only once, it is the perfect place to initialize external dependencies and to run processes holding the global config, plus those which rely on it, like node discovery and joining a cluster.

II. OTP process

    OTP processes are very important. An OTP process is a generic process defined by Erlang/OTP, extended with your business logic. It can be a gen_server, gen_fsm, gen_statem or gen_event; each process type describes its own behaviour. Refer to the Erlang documentation to learn more about them – that is out of the scope of this article.
An OTP process as a separate module should be used in several cases:

  • you need a long-living process (e.g. a daemon, or a job with a periodic timer);
  • your process is not so long-living, but needs to keep data in its state (e.g. a connected client with client data saved in the process state);
  • your process handles different messages (e.g. a driver for a database, which gets different message types from and to it).

An OTP process is well known to every Erlang programmer, and if you prefer OTP processes your programs are easier to understand. Try to limit the use of non-OTP processes: leave them for short-living tasks only, like processing something in parallel. A minimal sketch of an OTP process module follows.
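
A minimal sketch of such a module (module and message names are illustrative): a gen_server that keeps a connected client’s data in its state, matching the second case above.

-module(client_session).
-behaviour(gen_server).

-export([start_link/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link(ClientData) ->
    gen_server:start_link(?MODULE, ClientData, []).

%% The connected client's data becomes the process state.
init(ClientData) ->
    {ok, ClientData}.

handle_call(get_client_data, _From, State) ->
    {reply, State, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.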

III. Supervisor

    A supervisor is a special process from the OTP scope, but here I single it out as a separate structural pattern. It is an OTP process which can itself start various OTP processes; its main function is to supervise the started processes. It is an important part of the Erlang process tree: if a child process crashes, the supervisor takes the actions that were set up in its policy at startup. Supervisors can be dynamic or static. Static strategies are one_for_one, one_for_all and rest_for_one; such supervisors start all specified children right after themselves, unlike dynamic ones, which start processes on demand, one process per child specification.

A dynamic supervisor is simple_one_for_one – it doesn’t start any children when started and can have only one type of child in its spec, but it can spawn multiple processes from that specification. The supervisor tree reflects the architecture of your project – other people will analyze it starting from the top supervisor and going down to its children, so you should keep your tree as simple as possible. Avoid dynamically adding children to static supervisors, as it adds complexity to understanding. Static supervisors are often used to start long-running background jobs, daemons and other supervisors, while dynamic supervisors are used for creating pools. A sketch of a static supervisor follows.
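
A minimal sketch of a static one_for_one supervisor (module and child names are illustrative, using the map-based child specs of modern OTP):

-module(foo_top_sup).
-behaviour(supervisor).

-export([start_link/0]).
-export([init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% Both children are started right after the supervisor itself.
init([]) ->
    SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
    Children = [#{id => metrics_mngr,
                  start => {metrics_mngr, start_link, []}},
                #{id => http_handler_mngr,
                  start => {http_handler_mngr, start_link, []}}],
    {ok, {SupFlags, Children}}.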

IV. Interface

The interface module, like the application module, doesn’t have its own process. It just describes common behaviour with callbacks; any other module can implement it. The interface doesn’t contain any code besides the callback definitions and calls into implementations:

-callback handle_common_stuff(Args :: type()) -> Return :: type().

handle_common_stuff(Module, Args) ->
    Module:handle_common_stuff(Args).

It is better to move interfaces and their implementations to a separate directory. Then the program becomes clearer – you can easily see all the implementations.

Here sn_auth_intrf.erl is an interface module, and sn_auth_facebook.erl, sn_auth_msisdn.erl and sn_auth_plain.erl are its implementations.
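
A sketch of what one implementation might look like (the module name comes from the example above; the body is illustrative):

-module(sn_auth_plain).
-behaviour(sn_auth_intrf).

-export([handle_common_stuff/1]).

%% Plain-password flavour of the common behaviour.
handle_common_stuff(Args) ->
    {ok, Args}.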

V. Manager

The Manager pattern is also just a module with code. It contains the API for the part of the program logic it is responsible for. You can separate logically tied code into another directory and treat it like a small library: the manager module plays the role of the API, and all other code is internal to the outside scope. In other words: to ask any department to do something, you first ask the department’s manager, and he calls a suitable worker for you.
Using managers makes your code more organized. It will also help you to easily move logically connected code into a separate library if you decide to split your codebase later.

In this picture you can see a piece of a social network’s code responsible for friendship and friends-related stuff. This code is separated into its own directory, as there are lots of other modules. When entering this directory, the first thing you should notice is its manager module: it tells you which API is exposed outside. As in real life, everything is simple: if you are in an unknown department, just ask its manager what this department is responsible for.
The old and bulky way to separate logically tied code was to only add a prefix to the module name, without dividing the code into separate directories. It is as if, in an open space, all teams and managers were mixed together and it were unclear who works with whom; a newcomer would have to find and ask each manager which department he belongs to and what they are responsible for. This is clearly not the most productive way of working.

Here you can see sn_delete_logic – it contains all the code describing deleting friends. Its exports shouldn’t be used outside this package; for external usage they are re-exported through sn_friends_mngr, the manager of this “department”. sn_notification_logic contains all the rules about notifying friends and sending events to them. Its exported functions are also included in sn_friends_mngr as part of the API.
Invitation logic is more complex, so it is moved to another subpackage with its own manager – sn_invite_mngr. In this situation, all calls from the friends package to the invitation package’s functions should go through sn_invite_mngr.
A manager always exposes other modules’ exports, or a lower-level manager’s exports. You can see the same code represented as a tree; a sketch of the re-export pattern follows.
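
A sketch of the re-export pattern (the function names are assumed, only the shape matters): the manager exposes the logic modules’ functions as the package API.

-module(sn_friends_mngr).

-export([delete_friend/2, notify_friends/2]).

%% Re-exported from the internal logic modules of this package.
delete_friend(UserId, FriendId) ->
    sn_delete_logic:delete_friend(UserId, FriendId).

notify_friends(UserId, Event) ->
    sn_notification_logic:notify_friends(UserId, Event).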

VI. Logic

The logic module is just the opposite of the manager: it contains all the internal code, which can be used only inside the package where the logic module lives. It can be used neither up nor down in subpackages.

Use a manager to expose the current logic module’s functions to the rest of the program. As you saw in the previous example, there are 2 logic modules in the invitation subpackage. They express the exact program logic: users can become friends via mutual following (described in sn_follow_logic) or if their phone numbers mutually exist in each other’s address books (described in sn_contacts_logic).

By separating your code this way you make testing it much easier:

  • no need to export internal functions especially for tests if they are already exported for a manager;
  • functions exported from logic modules can be easily mocked.

The second way to use logic modules is to store internal common code which is used by multiple modules of the same package, which reduces code duplication dramatically. Here you can see an auth package with three implementations responsible for authentication via Facebook, SMS and plain passwords. They have their own codebases but also use sn_auth_logic common functions for common stuff such as checking credentials, handling log_in and log_out events and some others.
Separating code this way helps you change tool-specific code without touching business logic.

Naming Convention

Now that we have a brief understanding of the main Code Structural Patterns, let us take a closer look at the naming convention.
There are two ways entity modules can be named: suffix and prefix naming. It doesn’t matter which one seems more convenient to you or your team – you just need to be consistent.
Suffix naming is the default in Erlang; you can see the _app and _sup suffixes in nearly every application. Interface modules get the _intf suffix, managers – _mngr, logic – _logic. OTP process modules don’t have a special suffix, although they stand out precisely because of that.

This way has some cons:

  • it is too bulky;
  • you have to read the whole name to determine the type;
  • entities don’t stand out from the rest of the code.

There is another way of naming modules – prefix naming.
In this case, modules get these prefixes at the start of the name, so names are compact and easily spotted among other modules. They are also always on top when sorting names alphabetically. Check the following table to compare the two naming conventions, where foo stands for a namespace:

Type         Prefix   Prefix example       Suffix   Suffix example
Application  __a      foo__a_social_net    _app     foo_social_net_app
Supervisor   __s      foo__s_top           _sup     foo_top_sup
Interface    __i      foo__i_handler       _intf    foo_handler_intf
Manager      __m      foo__m_friends       _mngr    foo_friends_mngr
Logic        __l      foo__l_export        _logic   foo_export_logic

As a short conclusion, I hope that by reading this article you got a general understanding of Code Structural Patterns in Erlang and will apply them in practice. My bits of advice would be:

  1. Don’t throw all .erl modules into one src directory, like into a rubbish bin – divide logically tied code into separate sub-directories;
  2. Use structural code patterns; inside sub-directories introduce:
    • a Manager as the API holder for external use;
    • an Interface for internal general callbacks;
    • Logic modules for shared implementation code or for logic called by managers;
  3. Apply a correct and consistent naming convention.

As you can see, using this technique makes your code more ordered, easier to test, and easier to understand and modify.

For arguing and/or asking questions, welcome to the comments below. The full sample project code can be found in git.
Have a nice day :).