
End-to-end from front-end to back-end with Catcher

Today Catcher’s external modules 5.1.0 were finally released. This is great news, as it enables the Selenium step for front-end testing!

What should a proper e2e test look like?

Imagine you have a user service with a nice UI which allows you to get information about users registered in your system. Deep in the back-end you also have an audit log which records all actions.

Before 5.1.0 you could use HTTP calls to mimic front-end behavior and trigger actions on the back-end side.

Your test probably looked like this:
– call the http endpoint to search for a user
– check that the search event was saved to the database
– compare the found user with the search event saved in the database
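
A back-end-only test of that kind could be sketched in Catcher roughly like this (just a sketch: the /search endpoint, the postgres configuration and the search_log table are assumed names, not part of the article’s project):

- http:  # hypothetical back-end search endpoint
    get:
      url: 'http://127.0.0.1:8080/search?name={{ username }}'
    register: {found_user: '{{ OUTPUT }}'}
- postgres:  # check the search event was saved to the audit log
    request:
      conf: '{{ postgres_conf }}'
      query: "select * from search_log where text = '{{ username }}'"
    register: {search_event: '{{ OUTPUT }}'}
- check:  # field names here are schematic
    equals: {the: '{{ found_user.name }}', is: '{{ search_event.name }}'}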

This test checks 100% of the back-end functionality. But most likely the front-end is part of your system too! So a proper end-to-end test should start at the front-end application and end up in the back-end.

Without touching the front-end you could get false-positive results in your e2e tests. For example: a user has some special symbols in his name. All back-end tests pass and you deploy your application to production. After the deploy your users start to complain that the front-end part of the application crashes. The reason: the front-end can’t handle the back-end’s response when rendering user details with special symbols in the name.

With the new Catcher version you can include the front-end in your test. So instead of calling http you can use the selenium step.

The test

Let’s write a test which will search for a user and check that our search attempt was logged.

Every test starts with variables. To rule out false positives we need to save multiple users and then check that only the correct one is returned. Let’s compose our users. Every user will have a random email and a random name thanks to the random built-in function.

variables:
    users:
        - name: '{{ random("name") }}'
          email:  '{{ random("email") }}'
        - name: '{{ random("name") }}'
          email:  '{{ random("email") }}'
        - name: '{{ random("name") }}'
          email:  '{{ random("email") }}'

Now we are ready to write our steps.

Populate the data

The first thing we need to do is populate the data with the prepare step.

Let’s prepare a users_table.sql which will create all back-end tables (in case of a clean run we don’t have them yet).

 CREATE TABLE if not exists users_table( 
                     email varchar(36) primary key,
                     name varchar(36) NOT NULL 
                     );

Next we need to fill our table with test data. users.csv will use our users variable to prepare the data for this step.

email,name
{%- for user in users -%}
{{ user.email }},{{ user.name }}
{%- endfor -%}

The step itself will take users_table.sql and create the database tables if needed. Then it will populate them using users.csv, which is based on the users variable.

steps:
  - prepare:
      populate:
          postgres:
              conf: '{{ postgres }}'
              schema: users_table.sql
              data:
                  users: users.csv
      name: Populate postgres with {{ users|length }} users

Select a user to search for

The next (small) step is to select a user for our search. The echo step will randomly select a user from the users variable and register its email as a new variable.

- echo: 
    from: '{{ random_choice(users).email }}'
    register: {search_for: '{{ OUTPUT }}'}
    name: 'Select {{ search_for }} for search'

Search front-end for our user

With the selenium step we can use our front-end to search for the user. The selenium step runs a JS/Java/Jar/Python script from the resources directory.

It passes Catcher’s variables as environment variables to the script, so you can access them within Selenium. It also captures the script’s output, so you can access everything in Catcher’s next steps.

- selenium:
        test:
            file: register_user.js
            driver: '/usr/lib/geckodriver'
        register: {title: '{{ OUTPUT.title  }}'}

The step will run register_user.js, which searches for our selected user, and will register the page’s title.

Check the search log

After the search we need to check that it was logged. Imagine our back-end uses MongoDB, so we’ll use the mongo step.

- mongo:
      request:
            conf: '{{ mongo }}'
            collection: 'search_log'
            find: {'text': '{{ search_for }}'}
      register: {search_log: '{{ OUTPUT }}'}

This step searches the MongoDB search_log collection for any search attempts with our user in the text field.

Compare results

The final steps deal with comparing the results. First, we’ll use echo again to transform our users so that we can look them up by email.

- echo:
        from: '{{ users|groupby("email")|asdict }}'
        register: {users_kv: '{{ OUTPUT }}'}

Second, we will compare the front-end page title we got from selenium with the MongoDB search log and the user’s name.

 - check:
        and:
            - equals: {the: '{{ users_kv[search_for][0].name }}', is: '{{ title }}'}
            - equals: {the: '{{ title }}', is: '{{ search_log.name }}'}

The selenium resource

Let’s add the Selenium test resource. It will go to your site and search for your user. If everything is OK, the page title will be the result of this step.

Javascript

The selenium step supports Java, JS, Python and Jar archives. In this article I’ll show you all of them (except Jar, which is the same as Java, but without compilation). Let’s start with JavaScript.

const {Builder, By, Key, until} = require('selenium-webdriver');
async function basicExample(){
    let driver = await new Builder().forBrowser('firefox').build();
    try{
        await driver.get(process.env.site_url);
        await driver.findElement(By.name('q')).sendKeys(process.env.search_for, Key.RETURN);
        await driver.wait(until.titleContains(process.env.search_for), 1000);
        await driver.getTitle().then(function(title) {
                    console.log('{\"title\":\"' + title + '\"}')
            });
        driver.quit();
    }
    catch(err) {
        console.error(err);
        process.exitCode = 1;
        driver.quit();
      }
}
basicExample();

Catcher passes all its variables as environment variables, so you can access them from JS/Java/Python. In this example process.env.site_url takes site_url from Catcher’s variables and process.env.search_for takes the user email to search for.

Everything you write to STDOUT is caught by Catcher. In the case of JSON it will be returned as a dictionary. For example, with the console.log('{\"title\":\"' + title + '\"}') statement OUTPUT.title will be available on Catcher’s side. If Catcher can’t parse the JSON, it will return the text as OUTPUT.

Python

Here is the Python implementation of the same resource. It should also be placed in the resources directory. To use it instead of the JavaScript implementation you only need to change the file parameter in the selenium step.

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import os
from selenium.webdriver.firefox.options import Options

options = Options()
options.headless = True
driver = webdriver.Firefox(options=options)
try:
    driver.get(os.environ['site_url'])
    assert "Python" in driver.title
    elem = driver.find_element_by_name("q")
    elem.clear()
    elem.send_keys(os.environ['search_for'])
    elem.send_keys(Keys.RETURN)
    assert "No results found." not in driver.page_source
    print('{"title":"' + driver.title + '"}')
finally:
    driver.close() 

Java

Java is a bit more complex, as (if you are not using an already compiled Jar) Catcher has to compile the Java source before running it. For this you need to have Java and the Selenium libraries installed on your system.

Luckily, Catcher comes with a Docker image where the libraries (JS, Java, Python), Selenium drivers (Firefox, Chrome, Opera) and tools (NodeJS, JDK, Python) are already installed.

package selenium;

import org.openqa.selenium.By; 
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxBinary;
import org.openqa.selenium.firefox.FirefoxOptions;

public class MySeleniumTest {

    public static void main(String[] args) {
        FirefoxBinary firefoxBinary = new FirefoxBinary();
        FirefoxOptions options = new FirefoxOptions();
        options.setBinary(firefoxBinary);
        options.setHeadless(true);
        WebDriver driver = new FirefoxDriver(options);
        try {
            driver.get(System.getenv("site_url"));
            WebElement element = driver.findElement(By.name("q"));
            element.sendKeys(System.getenv("search_for"));
            element.submit();
            System.out.println("{\"title\":\""+driver.getTitle() + "\"}");
        } finally {
            driver.quit();
        }
    }
} 

Conclusion

Catcher’s 5.1.0 update unites front-end and back-end testing, allowing both to exist in one testcase. It improves the coverage and makes the tests truly end-to-end.

Testing Airflow data pipelines with Catcher end to end

This article is about writing an end-to-end test for a data pipeline. It covers Airflow, as one of the most popular data pipeline schedulers nowadays and one of the most complicated to test. For the impatient – here is the repository with everything set up.

What is a data pipeline and why is it important to test it?

A data pipeline is a development pattern where we take data from one or several data sources, process it (not always) and move it to another place. It can be real-time or batch. It can be built with different frameworks and tools (like Airflow, Spark, Spring Batch or something hand-made).
But every pipeline has one thing in common:

Any data pipeline is extremely hard to test, as you always need a fully deployed system, a data set prepared in advance and mocks of external services.

Imagine you have a standard business case: your back-end writes results into Postgres and you need to update the merchant’s status in Salesforce, so that your customer support agents can answer customers’ questions on the fly.

To test it you’ll have to go through these complex steps:

  • download data from production & depersonalize it;
  • set up sandboxes or use mocks directly in your code;

What is Catcher and how can it help you?

Catcher is an end-to-end testing tool, specially designed to test systems consisting of many components. Initially developed as an end-to-end microservice testing tool, it fits the needs of data pipeline testing perfectly.

Catcher’s main features are:

  • modular architecture. It has lots of modules for different requirements – from Kafka to S3;
  • templates. It fully supports Jinja2 templates, which allows you to generate data sets easily;
  • different inventory files for different environments. Write your tests locally and run them in a cloud environment by just changing the inventory file;

You can read about it here.

If you are new to Catcher you can find this article useful.

The pipeline

Imagine you have a back-end which handles user registration. All newly registered users are stored in MySQL. You also have another back-end which handles GDPR and unsubscriptions. The business requirement is that the second back-end should somehow know about newly created users, as it needs this information to match unsubscription events properly. And the final constraint: your back-end developers don’t know about Kafka/Rabbit, so the only way to do it is to write a pipeline which uploads data from MySQL to Postgres.

The pipeline will:

  1. take data from MySQL and load it to S3
  2. take data from S3 and put it into Postgres

start >> mysql_to_s3 >> s3_to_psql >> end

In the real world the second and third steps of this pipeline would most likely be joined into a custom MySQLtoPostgresViaS3Operator. But here we split them to show an example of a pipeline with more than one actual step :).

Both ‘start’ and ‘end’ are dummy operators. I added them because they are a good place to hook in custom notifications to Slack and the like.
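
For reference, the surrounding DAG definition could look roughly like this (a sketch under assumptions: the start_date and schedule are made up, only the dag_id matches the test below):

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

dag = DAG(dag_id='simple_example_pipeline',
          start_date=datetime(2020, 1, 1),
          schedule_interval=None)  # triggered on demand (by Catcher in our test)

# dummy boundary tasks - a convenient place to hook in slack notifications
start = DummyOperator(task_id='start', dag=dag)
end = DummyOperator(task_id='end', dag=dag)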
mysql_to_s3 is a python operator:

mysql_to_s3 = PythonOperator(
    task_id='mysql_to_s3',
    python_callable=mysql_to_s3,
    retries=0,
    dag=dag,
    provide_context=True
)

It just calls the mysql_to_s3 function:

def mysql_to_s3(**context):
    mysql_hook = MySqlHook(mysql_conn_id=mysql_conn_id)
    s3_hook = S3Hook(aws_conn_id=aws_conn_id)
    sql = f'Select * from {mysql_tbl_name} order by email'
    df: DataFrame = mysql_hook.get_pandas_df(sql=sql) 
    with NamedTemporaryFile(newline='', mode='w+') as f:
        key_file = f"data/{mysql_tbl_name}/year={datetime.date.today().year}/" \
                   f"month={datetime.date.today().strftime('%m')}/" \
                   f"day={datetime.date.today().strftime('%d')}/" \
                   f"{mysql_tbl_name}.csv"
        df.to_csv(path_or_buf=f,
                  sep=",",
                  columns=df.columns.values.tolist(),
                  index=False
                  )
        f.flush()
        s3_hook.load_file(filename=f.name,
                          key=key_file,
                          bucket_name=bucket_name)
        context["ti"].xcom_push(key=key_str, value=key_file)
        f.close()

In this function we use the MySQL hook to retrieve a Pandas DataFrame for the given SQL query (be mindful: make sure you don’t read too much data with this query and overload the memory, otherwise read in chunks) and store this DataFrame as a CSV file on S3.

After the S3 file is loaded, the next task, s3_to_psql, is called:

s3_to_psql = PythonOperator(
    task_id='s3_to_psql',
    python_callable=s3_to_psql,
    retries=0,
    dag=dag,
    provide_context=True
)

It is also a python operator which calls s3_to_psql function:

def s3_to_psql(**context):
    ti = context["ti"]
    key_file = ti.xcom_pull(dag_id='simple_example_pipeline',
                            task_ids='mysql_to_s3',
                            key=key_str)
    psql_hook = PostgresHook(postgres_conn_id=postgres_conn_id)
    s3_hook = S3Hook(aws_conn_id=aws_conn_id)
    lines = s3_hook.read_key(key=key_file, bucket_name=bucket_name).split("\n")
    lines = [tuple(line.split(',')) for line in lines if line != '']
    df = DataFrame.from_records(data=lines[1:], columns=lines[0])
    df.to_sql(name=psql_tbl_name,
              con=psql_hook.get_sqlalchemy_engine(),
              if_exists="replace",
              index=False
              )

In this function we read the file from S3 into the worker’s memory, build a Pandas DataFrame out of it and store it into Postgres.

All the Airflow connection ids are hard-coded at the beginning of the file:

postgres_conn_id = 'psql_conf'
mysql_conn_id = 'mysql_conf'
aws_conn_id = 's3_config'

You do not need to bother adding them to the Airflow test environment – Catcher will handle it for you during the test run.

That’s all. Now it is time to test it.

The test itself

Let’s start with defining Catcher’s test-local variables:

variables:
  users:
    - uuid: '{{ random("uuid4") }}'
      email: 'bar@test.com'
    - uuid: '{{ random("uuid4") }}'
      email: 'baz@test.com'
    - uuid: '{{ random("uuid4") }}'
      email: 'foo@test.com'
  pipeline: 'simple_example_pipeline'
  mysql_tbl_name: 'my_table'

We set up the Airflow pipeline name, the MySQL table name and 3 users which will be exported.

We provide two different inventories here: one for a local run and one for a docker run. Which inventories to provide is up to you and depends on the particular case.
The local inventory is:

mysql_conf: 'root:test@localhost:3307/test'
psql_conf: 'postgres:postgres@localhost:5432/postgres'
airflow_db: 'airflow:airflow@localhost:5433/airflow'
airflow_web: 'http://127.0.0.1:8080'
s3_config:
    url: http://127.0.0.1:9001
    key_id: minio
    secret_key: minio123

If you already have a stage or dev environment set up, you can add an inventory for it the same way as we did for local, specifying DNS names instead of localhost IP addresses.

Docker inventory is the same, but with domain names instead of localhost.

mysql_conf: 'mysql://root:test@mysql:3306/test'
psql_conf: 'postgresql://postgres:postgres@custom_postgres_1:5432/postgres'
airflow_db: 'airflow:airflow@postgres:5432/airflow'
airflow_web: 'http://webserver:8080'
airflow_fernet: 'zp8kV516l9tKzqq9pJ2Y6cXbM3bgEWIapGwzQs6jio4='
s3_config:
    url: http://minio:9000
    key_id: minio
    secret_key: minio123

Steps

The first step should populate the test data: it creates the MySQL and Postgres tables and generates the data. This lets you avoid monkey labor and simplifies your life as a Data Engineer. Forget about building test datasets manually or exporting production data into csv and copying it to the test environment, along with all the issues related to that: data anonymization, regexps in sql and generators.

The prepare step fits best for preparing your test data:

prepare:
    populate:
      mysql:
        conf: '{{ mysql_conf }}'
        schema: my_table.sql
        data:
          my_table: my_table.csv
      postgres:
        conf: '{{ psql_conf }}'
        schema: psql_tbl.sql

As you can see, inside prepare we have defined populate for both the MySQL and the Postgres data sources. For both data sources this step follows the same logic: provide the configuration and run the DDL code from the specified SQL files. Both mysql_conf and psql_conf values are taken from the current inventory file (the one you are running the test with).

The only difference is that for MySQL we specify the input data which will be used to fill my_table. We do not specify input data for Postgres, as it should be filled in by our Airflow pipeline during the execution. Let’s dive deeper into how the MySQL populate statements are defined.

my_table.sql is a SQL file containing the create table statement. In the real world you may also have grant statements, index creation and so on here:

CREATE TABLE if not exists test.my_table(
                    user_id      varchar(36)    primary key,
                    email        varchar(36)    NOT NULL
                );

my_table.csv is a data file; the main difference from the usual testing approach is that we don’t specify the actual data here. We apply a Jinja2 template to generate the csv file based on our users variable from the very beginning. This is one of Catcher’s coolest features: it supports Jinja2 templates everywhere.

user_id,email
{%- for user in users -%}
{{ user.uuid }},{{ user.email }}
{%- endfor -%}

psql_tbl.sql is almost the same as my_table.sql but with another table name.

When all data is prepared we should trigger our pipeline. It is the second step:

- airflow:
    run:
      config:
        db_conf: '{{ airflow_db }}'
        url: '{{ airflow_web }}'
        populate_connections: true
        fernet_key: '{{ airflow_fernet }}'
      dag_id: '{{ pipeline }}'
      sync: true
      wait_timeout: 150

It will run the airflow pipeline simple_example_pipeline and wait for it to finish (or fail after 150 seconds). It will also create airflow connections based on your Catcher inventory file.

One important thing here: Catcher will create the connections in Airflow and name them as they are named in the inventory file.

For psql_conf: 'postgres:postgres@localhost:5432/postgres' from the inventory it will create the connection psql_conf in Airflow. So, in order to have a working test, the connection id in your pipeline should be the same as the connection id in the inventory file: postgres_conn_id = 'psql_conf'. The name itself does not matter.

The third step is to check that the S3 file was created and to download it:

- s3:
    get:
      config: '{{ s3_config }}'
      path: 'my_awesome_bucket/data/{{ mysql_tbl_name }}/year={{ now()[:4] }}/month={{ now()[5:7] }}/day={{ now()[8:10] }}/my_table.csv'
    register: {s3_csv: '{{ OUTPUT }}'}

As mentioned above, Catcher can apply Jinja templates everywhere; here you see an example of how to compose the path to our S3 resource. The path in the original pipeline is built dynamically, based on execution_date, so we use the built-in function now(), which returns the current datetime as a string, and apply some python string slicing, like [5:7], to retrieve only part of it. We pull the resource and register the step’s output as a new variable s3_csv.

The next two steps load the content of the resource file and compare it with s3_csv (the result of the final step of the original airflow pipeline):

- echo: {from_file: 'my_table.csv', register: {expect_csv: '{{ OUTPUT }}'}}
- check:
    equals: {the: '{{ s3_csv.strip() }}', is: '{{ expect_csv.strip() }}'}
    name: 'Check data in s3 expected'

The echo step can also be used to write to or read from a file. Here we read the same resource my_table.csv which was used to populate MySQL, and save the step’s output to the variable expect_csv. The echo step will also run the Jinja2 template and generate the proper content.

The check equals step is used to compare the values of the expect_csv and s3_csv variables. As their content is a string, we use python’s strip() method to remove trailing spaces and empty lines.

The last step is to check what was actually written into Postgres. The expect step fits best here:

- expect:
    compare:
      postgres:
        conf: '{{ psql_conf }}'
        data:
          psql_tbl: 'my_table.csv'
        strict: true
    name: 'Postgres data match expected'

Final cleanup

We need to add a cleanup after the test to remove any side effects, so that other tests can run in a clean environment.

So we add a finally block to the test’s root:

finally:
  - mysql:
      request:
        conf: '{{ mysql_conf }}'
        query: 'drop table my_table'
      name: 'Clean up mysql'
      ignore_errors: true
  - postgres:
      request:
        conf: '{{ psql_conf }}'
        query: 'drop table psql_tbl'
      name: 'Clean up postgres'
      ignore_errors: true
  - s3:
      delete:
        config: '{{ s3_config }}'
        path: 'my_awesome_bucket/data/{{ mysql_tbl_name }}/year={{ now()[:4] }}/month={{ now()[5:7] }}/day={{ now()[8:10] }}/my_table.csv'
      name: 'Clean up s3'
      ignore_errors: true

We remove all data from MySQL and Postgres, as well as the file from S3. ignore_errors means that we don’t care if the action fails (if there is no such file on S3 or no such table in the database). By the way, a good approach here would be to move the S3 file path into a Catcher variable and reuse it in the s3 get step (step #3) and inside the delete step, to reduce code duplication.
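
A minimal sketch of that refactoring (s3_path is a name I made up; it assumes test-local variables can reference each other via templates, the same way steps do):

variables:
  s3_path: 'my_awesome_bucket/data/{{ mysql_tbl_name }}/year={{ now()[:4] }}/month={{ now()[5:7] }}/day={{ now()[8:10] }}/my_table.csv'

# step 3 then shrinks to:
- s3:
    get:
      config: '{{ s3_config }}'
      path: '{{ s3_path }}'
    register: {s3_csv: '{{ OUTPUT }}'}

# and the s3 cleanup step to:
- s3:
    delete:
      config: '{{ s3_config }}'
      path: '{{ s3_path }}'
    name: 'Clean up s3'
    ignore_errors: true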

How to run

Locally in docker

If you don’t have any environment, you can use this docker-compose to start one locally. It is based on the puckel-airflow docker repository.

docker-compose up -d

After you have started docker-compose, you need to run the catcher docker image in the same network, mounting your tests, resources and inventory and specifying the inventory file:

docker run -it --volume=$(pwd)/test:/opt/catcher/test \
               --volume=$(pwd)/resources:/opt/catcher/resources \
               --volume=$(pwd)/inventory:/opt/catcher/inventory \
               --network catcherairflowexample_default \
           comtihon/catcher -i inventory/docker.yml test

The network is very important: if you run catcher locally, you would probably use the local inventory with all service hosts set to 127.0.0.1. Both you and Catcher would be able to access them, but Catcher would populate the Airflow connections with 127.0.0.1 from your local inventory, so your pipeline would fail, because Airflow inside docker can’t access the database/minio via 127.0.0.1.

See more docker run instructions here.

Remotely in the environment

This is a good place to automate your tests. You can make your CI run catcher after every deployment to every environment. Both Catcher-in-docker and the Catcher cli can be used from CI agents. Just use the proper inventory.

The output

If you run your test you’ll see a nice colorful output. It is split into two parts.

The first is the step-by-step run summary. It helps you understand how the test is running:

INFO:catcher:Step Create table and populate initial data OK
INFO:catcher:Step Trigger pipeline simple_example_pipeline OK
INFO:catcher:Step Get file from s3 OK
INFO:catcher:user_id,email
ea1d710b-0a7b-45f6-a1c4-52a5f7d91bce,bar@test.com
cf0a3043-6958-412d-a7b0-924092e7e95b,baz@test.com
e7a53958-f4aa-45e7-892e-69514990c992,foo@test.com
INFO:catcher:Step echo OK
INFO:catcher:Step Check data in s3 expected OK
INFO:catcher:Step Postgres data match expected OK
INFO:catcher:Test test/test.yml passed.

And the cleanup part that follows it:

INFO:catcher:Step Clean up mysql OK
INFO:catcher:Step Clean up postgres OK
INFO:catcher:Step Clean up s3 OK
INFO:catcher:Test test/test.yml [cleanup]  passed.

In case of multiple tests they will follow each other.

The second part is the run summary, one for all tests. It shows you statistics and the status of each test run. In case of a failure it will show the step number. In our case:

INFO:catcher:Test run 1. Success: 1, Fail: 0. Total: 100%
Test test: pass

Conclusion

When developers hear about end-to-end tests, they usually think about complex BDD frameworks and tons of code which they need to write in order to make everything work. That is not the case with Catcher. You don’t need to know any programming language to create a test.

When QA engineers hear about end-to-end tests, they usually think about lots of manual actions, where it is easy to miss something or make an error while testing. That is not the case with Catcher. Do your manual actions once and put them into a Catcher script, to be repeated every time you deploy.

When Data Engineers hear about end-to-end tests, they usually think how cool it would be to have a framework for testing data pipelines. And here we finally have one.

Happy testing!

Catcher e2e tests tool for beginners

What is Catcher?

It is an end-to-end testing tool, specially designed to test systems consisting of many components. Initially developed as an end-to-end microservice testing tool, it fits the needs of data pipeline testing perfectly. Here we describe the basic Catcher concepts.

Catcher is a modular system. It has core modules which are shipped with Catcher itself and additional modules which can be installed separately on demand.

Steps

In Catcher terminology, module and step mean the same thing.
Every Catcher test can contain the following types of steps:

  • built-in core Catcher steps. They include basic steps like checks, sh, loops, http, wait and others;
  • extended Catcher steps. They have more complex implementation and sometimes require dependencies or drivers to be installed in the system. For example: kafka, elastic, docker, mongo, s3 and others.
  • your custom steps, written in Python, Java, sh or any other language you prefer.

A simple example of what a test can look like:

steps:
  - http:  # load answers via post body
      post:
        url: 'http://127.0.0.1:8080/load'
        body_from_file: "data/answers.json"
  - elastic:  # check in the logs that answers were loaded
      search:
        url: 'http://127.0.0.1:9200'
        index: test
        query:
          match: {payload : "answers loaded successfully"}

Variables

Steps with hard-coded values are not that useful. Variables are one of Catcher’s key features. Jinja2 templates are fully supported. Try to use as many variables as you can to avoid code duplication and keep things flexible.

Test-local variables

Every step can use existing variables, defined in the variables section. They can be either static (like token) or computed (like user_email).

variables:
  user_email: '{{ random("email") }}'
  token: 'my_secret_token'
steps:
  - http:
      post:
        headers: {Content-Type: 'application/json', Authorization: '{{ token }}'}
        url: 'http://127.0.0.1:8080?user={{ user_email }}'
        body: {'foo': bar}

Every step can also register its output (or part of it) as a new variable:

variables: 
  user_email: '{{ random("email") }}'
  token: 'my_secret_token'
steps:
  - mongo:  # search MongoDb for user 
      request:
        conf:
            database: test
            username: test
            password: test
            host: localhost
            port: 27017
        collection: 'users'
        find_one: {'user_id': '{{ user_email }}'}
      register: {found_user: '{{ OUTPUT }}'}
  - check:  # check token was saved
      equals: {the: '{{ found_user["token"] }}', is: "{{ token }}"}

Let’s take a closer look at this line: register: {found_user: '{{ OUTPUT }}'}. Here the OUTPUT system variable stores the mongo step’s output. We register it as a new variable found_user.

OUTPUT is the system variable which is being used to store every step’s output.

System variables

Catcher can also access your system environment variables. For example, run export MY_ENV_HOST=localhost && catcher my_test.yml and your environment variable will be picked up. You can disable this behavior by running Catcher with the -s flag: catcher -s false my_test.yml.

steps:
  - http:  # load answers via post body
      post:
        url: 'http://{{ MY_ENV_HOST }}/load'

To override variables, use the -e param when running tests:

catcher -e var=value -e other_var=other_value tests/

Override priority is:

  1. command line variables override everything below
  2. newly registered variables override everything below
  3. test-local variables override everything below
  4. inventory variables override everything below
  5. environment variables override nothing

Inventories

As you may have noticed, hard-coding service configuration either in the test or in the test’s variables is not that flexible, as it differs for every environment. Catcher uses inventory files for environment-specific configuration. Inventories are stored in the inventory folder. They also support templates.

You can create an inventory file local.yml and set variables there:

airflow_web: 'http://127.0.0.1:8080'
s3_config:
    url: http://127.0.0.1:9001
    key_id: minio
    secret_key: minio123

And create develop_aws.yml with variables:

airflow_web: 'http://my_airflow:8080'
s3_config:
    key_id: my_real_aws_key_id
    secret_key: my_real_aws_secrey

When you run Catcher you can specify inventory via -i param:

catcher -i inventory/local.yml test

Reports

Sometimes, when you write your test, you need to see what is going on in your system after each step. At the moment Catcher supports only json output. Run it with -p json option:

catcher -p json my_test.yml

Then check the reports directory for a json step-by-step report. It will contain all input and output variables for every step.

Installation

Docker

For a quick start you can use Catcher Docker image. It includes catcher-core, all extended modules, drivers and dependencies. It comes ready to use – all you need to do is to mount your local directories with your tests, inventories, etc.
The full command to run it is:

docker run -v $(pwd)/inventory:/opt/catcher/inventory
           -v $(pwd)/resources:/opt/catcher/resources
           -v $(pwd)/test:/opt/catcher/tests
           -v $(pwd)/steps:/opt/catcher/steps
           -v $(pwd)/reports:/opt/catcher/reports
            comtihon/catcher -i inventory/my_inventory.yml tests

Catcher in Docker is usually used in CI, while developers prefer a local Catcher installation for writing tests.

Local

For a local installation ensure you have Python 3.5+. You can use miniconda if you don’t have it.

Run pip install catcher to install catcher-core. It will install Catcher and the basic steps. If you need any extended steps, you have to install them separately. For example, for kafka and postgres you have to run pip install catcher_modules[kafka,postgres]

Some extended steps also require drivers or system packages to be installed first, e.g. the libclntsh.dylib library for the Oracle database.

Microservices and continuous delivery

Microservices and continuous delivery

Imagine a typical situation – yesterday your devops engineer was eaten by a tiger. You are very sad, because he didn’t finish the release system for your project. It contains 4 repositories: 2 back-end, 1 front-end, 1 data pipeline.

And now it is you who has to set up a deploy pipeline for your project by tomorrow.

In this article you’ll learn how to set up Jenkins, Ansible and Catcher to build a multi-environment, production-ready CI/CD with e2e tests and minimum effort.

Individual pipeline

The first thing to do is to set up an individual pipeline for every service. I assume that you are a good developer and have a separate git repository for each service.

All you need to do here is write a Jenkins pipeline and feed it to Jenkins via the organization plugin, manually or automatically. The pipeline will be triggered on every commit. It will run the tests for every branch. In case of an environment branch (develop, stage or master) it will also build a docker image and deploy it to the right environment.


Set up an agent

The agent is the starting point of every Jenkins pipeline. The most common choice is agent any, unless you need something special.

Set up triggers

Your pipeline should be triggered on every commit. If your Jenkins is not accessible from the external network, use pollSCM.
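
A minimal declarative sketch of the agent and trigger blocks (the polling schedule here is an arbitrary example, not taken from a real project):

pipeline {
    agent any
    triggers {
        pollSCM('H/5 * * * *')  // poll the repository roughly every 5 minutes
    }
    // environment, stages, etc. go below
}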

Set up environment variables

They make your life much easier, as they allow you to copy-paste your Jenkinsfile with minimum changes.
The environment should include the docker image names.

environment {
    IMAGE_NAME = "<your_docker_registry_url:port>/<your_project>:${env.BUILD_NUMBER}-${env.BRANCH_NAME}"
    LATEST_IMAGE_NAME = "<your_docker_registry_url:port>/<your_project>:latest-${env.BRANCH_NAME}"
}

Set up common steps

Common steps are steps that should be called on every branch, even if it is a feature branch.

steps {
    sh "make test"
}


Remember that keeping to a standard is a wise decision (or you will be eaten by a tiger too). So make sure you have a Makefile in your repository. It is your friend here, as it allows you to build a language-agnostic pipeline. Even if your new devops engineers don’t know your programming language or build system, they will understand that calling `make test` will test your project.

It is also the right place for notifications. Use slackSend to send a notification to your project’s Slack channel.

slackSend color: "warning", message: "Started: ${env.JOB_NAME} - ${env.BUILD_NUMBER} (<${env.BUILD_URL}|Open>)"

Set up special build steps

Special steps are steps that should run only when changes are made to a specific branch. Jenkins allows you to use a when condition:

stage('Build') {
   when {
     expression {
        return env.BRANCH_NAME == 'master' || env.BRANCH_NAME == 'develop' || env.BRANCH_NAME == 'stage'
     }
   }
   steps {
      sh "docker build -t ${env.IMAGE_NAME} ."
      sh "docker push ${env.IMAGE_NAME}"
      sh "docker tag ${env.IMAGE_NAME} ${env.LATEST_IMAGE_NAME}"
      sh "docker push ${env.LATEST_IMAGE_NAME}"
   }
}

Set up environment-specific deploy

Besides the when condition, you should also select the proper image or configuration to deploy to the right environment. I use Marathon, and my dev/stage/prod use different CPU limits, secrets and other configuration, stored in marathon/marathon_<env>.json. So before the deploy you should select the proper configuration file. Use a script block for this:

stage('Deploy_api') {
  when {
    expression {
       return env.BRANCH_NAME == 'master' || env.BRANCH_NAME == 'develop' || env.BRANCH_NAME == 'stage'
    }
  }
  steps {
    script {
        if (env.BRANCH_NAME == 'master') {
            env.MARATHON = "marathon/marathon_prod.json"
        } else if (env.BRANCH_NAME == 'stage') {
            env.MARATHON = "marathon/marathon_stage.json"
        } else {
            env.MARATHON = "marathon/marathon_dev.json"
        }
    }
    marathon(
      url: 'http://leader.mesos:8080',
      docker: "${env.IMAGE_NAME}",
      filename: "${env.MARATHON}"
    )
  }
}

Ansible promote role

The easiest way to set up a promotion from one environment to another is to trigger the individual pipeline, configured previously.

In the previous article I showed that it is much better to use Jenkins together with Ansible. There is no exception here (just imagine that the tiger also ate your Jenkins machine).

We will use a python script wrapped in an Ansible role. For those who haven’t read my previous article – a groovy Jenkins shared library can be used instead, but it is not recommended, as:

  • it is difficult to develop and debug such libraries, because of the different versions of Jenkins, the Jenkins groovy plugin and the groovy installed locally.
  • it makes your release depend heavily on your Jenkins, which is OK until you decide to move to another CI, or your Jenkins is down and you need to do a release.

Python script

To trigger the promotion from develop to stage you should merge develop into stage and push it. That’s all. After the push the service’s internal pipeline will be triggered.

The python script itself:

  1. Clone the repository
  2. Checkout to the branch you are going to promote
  3. Merge the previous environment’s branch
  4. Push it!

That looks very easy, although here are some tips.

Prefer your system’s git to the python library. This way you can use your own keys when running locally.

import subprocess

def call_with_output(cmd: str, directory='.'):
    output = subprocess.Popen(cmd.split(' '),
                              stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT,
                              cwd=directory)
    stdout, stderr = output.communicate()
    if stderr is None:
        return stdout
    raise Exception(stderr)

If your repository is not public, you should clone it with a token. Note that git_user, git_token and company are ansible variables. They don’t change too often, so I store them in the role’s default variables.

call_with_output(f'git clone https://{{ git_user }}:{{ git_token }}@github.com/{{ company }}/{ repo }.git')

It is better not to call push if there are no changes. But not all git versions have the same output: ‘up-to-date’ differs from ‘up to date’. It took me a while to notice this.

changes = call_with_output(f"git merge { from_branch }", repo).decode("utf-8").strip()
if changes != "Already up to date." and changes != "Already up-to-date.":
    call_with_output(f"git push origin HEAD:{ to_branch }", repo)

Sending a slack notification directly to your project’s channel is also a good idea. You can do it via a slack webhook.

import json
import requests

def notify_slack(callback, message):
    response = requests.post(callback, data=json.dumps({'text': message}),
                             headers={'Content-Type': 'application/json'})
    if response.status_code != 200:
        raise ValueError('Request to slack returned an error %s, the response is:\n%s'
                         % (response.status_code, response.text))

Jenkins shared pipeline

Now you have your Ansible promote role. It’s time to create a Jenkins pipeline for the whole project, which will call Ansible for you. This pipeline can be triggered manually by you or automatically by any of the project’s services.

Start with adding a parameter:

parameters {
    choice(choices: 'develop\nstage\nmaster', description: 'Which environment should I check?', name: 'environment')
}

The deploy step:

stage('Promote dev to stage') {
    when {
        expression {
            return params.environment == 'develop'
        }
    }
    steps {
        deploy_all('develop', 'stage')
    }

}

Here deploy_all downloads your ansible repository with the role you’ve created and calls it for every service of the project being deployed.

def deploy_all(from, to) {
    git branch: 'master',
        credentialsId: "${env.GIT_USER_ID}",
        url: "https://github.com/<your_company>/<your_ansible_repo>"
    deploy('repo_1', from, to)
    deploy('repo_2', from, to)
    deploy('repo_3', from, to)
}


def deploy(repo, from, to) {
    ansiblePlaybook(
        playbook: "${env.PLAYBOOK_ROOT}/deploy_service.yaml",
        inventory: "inventories/dev/hosts.ini",
        credentialsId: "${env.SSH_USER_ID}",
        extras: '-e "to=' + "${to}" + ' from=' +"${from}" + ' repo=' + "${repo}" + ' slack=' + "${env.SLACK_CALLBACK}" + '" -vvv')
}

Now you have a deploy pipeline for all the services and can call it manually. It is 3x faster than manually triggering the pipeline of each of the 3 projects. But this is not our goal yet.

We need this pipeline to be triggered by any of our internal pipelines.

Add this step to all 3 Jenkinsfiles of your services:

stage('Trigger promotion pipeline') {
  when {
    expression {
      return env.BRANCH_NAME == 'master' || env.BRANCH_NAME == 'develop' || env.BRANCH_NAME == 'stage'
    }
  }
  steps {
    build job: "../<jenkins_promote_project_pipeline_name>/master",
          wait: false,
          parameters: [
            string(name: 'environment', value: String.valueOf(env.BRANCH_NAME))
          ]
  }
}

The automation part is done now. After you merge your feature branch, the service’s local tests are run and the service is deployed to the develop environment. Right after that, the pipeline triggers the promotion pipeline for the whole project. All services which were changed will be deployed to the next environment.

Add end-to-end test

Automatic promotion is good, but what is the point of it? It just moves your changes from environment to environment without any high-level acceptance tests.

In the Catcher article I already mentioned that green service-level tests don’t give you certainty that your services can interact with each other properly. To ensure that the whole system is working, you need to add end-to-end tests to your promotion pipeline.

To add Catcher end-to-end tests, just create the inventory and the tests in your Jenkins shared pipeline’s repository (I assume that you have a separate git repository where you store the pipeline, a readme with the deployment description, etc).

In the inventory you should mention all the project’s services, for every environment. For example, for develop:

backend1: "http://service1.dev:8000"
frontend: "http://service2.dev:8080"
backend2: "http://service3.dev:9000"
database: "http://service4.dev:5432"

In the tests you should put your end-to-end tests. The simplest thing is to check their healthchecks. It will show you that they are at least up and running.

---
steps:
  - http:
      name: 'Check frontend is up'
      get:
        url: '{{ frontend }}'
  - http:
      name: 'Check backend1 is up'
      post:
        url: '{{ backend1 }}/graphql'
        body: '''
        {
           __schema {
              types {
                name
              }
           }
        }'''
        headers:
          Content-Type: "application/graphql"
  - http:
      name: 'Check backend2 is up'
      get:
        url: '{{ backend2 }}/healthcheck'
  - postgres:
      conf: '{{ database }}'
      query: 'select 1'

Add a test stage to your jenkins pipeline just before the deploy.
Do not forget to create a Makefile.

stage('Prepare') {
     steps {
       sh "make conda"
       sh "make requirements"
     }
    }

Make sure you’ve selected the proper environment. You should always test the same environment which is specified in params.environment.

stage('Test') {
     steps {
        script {
            if (params.environment == 'develop') {
                env.INVENTORY = "dev.yml"
            } else {
                env.INVENTORY = "stage.yml"
            }
        }
        sh "make test INVENTORY=${env.INVENTORY}"
     }
    }

A piece of the Makefile:

CONDA_ENV_NAME ?= my_e2e_env
ACTIVATE_ENV = source activate ./$(CONDA_ENV_NAME)

.PHONY: conda
conda: $(CONDA_ENV_NAME)
$(CONDA_ENV_NAME):
	conda create -p $(CONDA_ENV_NAME) --copy -y python=$(PY_VERSION)
	$(ACTIVATE_ENV) && python -s -m pip install -r requirements.txt

.PHONY: requirements
requirements:
	$(ACTIVATE_ENV) && python -s -m pip install -r requirements.txt

.PHONY: test
test:
	$(ACTIVATE_ENV) && catcher script/tests -i inventory/${INVENTORY}

Disable automatic prod promotion

An end-to-end test is good, but not perfect. You shouldn’t let every change be deployed to prod in real time. Unless you like working at night.

Add an input to the ‘promote stage to prod’ step of the pipeline. If nobody presses this input, it will simply be ignored.

stage('Promote stage to prod') {
     when {
        expression {
             return params.environment == 'stage'
        }
     }
     steps {
        script {
          def userInput = false
          try {
            timeout(time: 60, unit: 'SECONDS') {
                userInput = input(id: 'userInput',
                                  message: 'Promote current stage to prod?',
                                  parameters: [
                                      [$class: 'BooleanParameterDefinition', defaultValue: false, description: '', name: 'Promote']
                                  ])
            }
          } catch(err) {

          }
          if (userInput) {
            print('Deploying prod')
            deploy_all('stage', 'master')
          } else {
            print('Skip deploy')
          }
        }
     }
    }

In this case prod will be deployed only after stage’s e2e tests are successful and a user decides that the changes are ready to be promoted.

Conclusion

Such pipeline allows you to deploy a bunch of microservices at once with minimal changes to an existing infrastructure, as we re-use each service’s internal deploy pipeline, which you probably already have.

It is not perfect, as it doesn’t take into consideration a broken build or red service-level tests. But it allows you to save time during the deploy and removes the human error factor by keeping all dependent services in one place.

In my next article I’ll show you the example of a rollback pipeline for a set of microservices.

End-to-end microservices testing with Catcher

I would like to introduce a new tool for end-to-end testing – Catcher.

What is an e2e test?

An end-to-end test usually answers questions like: “Was this user really created, or did the service just return 200 without doing anything?”.

In comparison with project-level tests (unit/functional/integration), e2e tests run against the whole system. They can call your back-end’s http endpoints, check values written to the database or message queue, ask other services about changes and even emulate external service behaviour.

E2E tests are the highest-level tests. They are usually intended to verify that a system meets the requirements and that all components can interact with each other.

Why do we need e2e tests?

Why do we need to write these tests? Even M. Fowler recommends avoiding them in favor of simpler ones.

However, the higher the abstraction level at which tests are written, the less rewriting will be needed. In case of refactoring, unit tests are usually rewritten completely. You also spend a lot of time on functional tests during code changes. But end-to-end tests check your business logic, which is unlikely to change very often.

Besides that, even full coverage of every microservice doesn’t guarantee their correct interaction with each other. Developers may implement the protocol incorrectly (naming or data type errors). Or develop new features relying on the data schema from the documentation. Either way you can get a surprise in the prod environment when the schemas mismatch: a mess in the data, or someone forgot to update the schema.

And each service’s own tests would still be green.

Why do we need to automate tests?

Indeed. In my previous company it was decided not to spend effort on setting up automated tests, because it takes time. Our system wasn’t big at that time (10-15 microservices with a common Kafka). The CTO said that “tests are not important, the main thing is that the system should work”. So we were doing manual tests on multiple environments.

This is how it looked:

  1. Discuss with the owners of the other microservices what should be deployed to test a new feature.
  2. Deploy all the services.
  3. Connect to the remote kafka (double ssh via a gateway).
  4. Connect to the k8s logs.
  5. Manually compose and send a kafka message (thank god it was plain json).
  6. Check the logs in an attempt to understand whether it worked or not.

And now let’s add a fly to this ointment: the majority of tests require fresh users to be created, because it was difficult to reuse existing ones.

This is how user sign-up looked:

  1. Enter various data (name, email, etc).
  2. Enter personal data (address, phone, various tax data).
  3. Enter bank data.
  4. Answer 20-40 questions.
  5. Pass IdNow (there was a mock for dev, but stage took 5+ minutes, because their sandbox was sometimes overloaded).
  6. This step requires opening a bank account, which you can’t do via the front-end. You have to go to kafka via ssh and act as a mock service (send a message that the account was opened).
  7. Go to the moderator’s account on another front-end and approve the user you’ve just created.

Super, the user has just been created! Now let’s add another fly: some tests require more than one user. And when a test fails, you have to start over by registering new users.

How do new features pass the business team’s checks? The same actions need to be done in the next environment.

After some time you start feeling like a monkey, clicking these numerous buttons, registering users and performing manual steps. Also, some developers had problems with the kafka connection, or didn’t know about tmux and ran into the bug with the default terminal and its 80-character limit.

Pros:

  • No need to do any setup. Just test on the existing environment.
  • No high qualification needed. It can be done by cheap specialists.

Cons:

  • Takes a lot of time (the further you go, the more).
  • Usually only new features are tested (without ensuring that all previously tested features still work).
  • Usually manual testing is performed by qualified developers (expensive developers are utilized for a cheap job).

How to automate?

If you’ve read up to this point and are still sure that manual testing is ok and that everything was done right in this company, then the rest of my article won’t be interesting to you.

Developers have two ways to automate repeating actions. Which one you get depends on the type of programmer who had enough time:

  • A standalone back-end service which lives in your environment. The tests are hardcoded inside and are triggered via endpoints. It may be partly automated with CI.
  • A script with hardcoded tests. It differs only in the way it is run: you need to connect somewhere (probably via ssh) and call the script. It can be put into a Docker image and may also be automated with CI.

Sounds good. Any problems?

Yes. Such tests are usually created using technologies that the author knows. Usually it is a scripting language such as python or ruby, which allows you to write a test quickly and easily.

However, sometimes you can stumble upon a bunch of bash scripts, C or something more exotic. Once I spent a week rewriting a hand-rolled pile of bash scripts in python, because those scripts were no longer extensible and no one really knew how they worked or what they tested. An example of such a self-made end-to-end test is here.

Pros:

  • They are automated!

Cons:

  • There are additional requirements on the developer’s qualification (e.g. the main language is Java, but the tests were written in Python).
  • You write code to test code (who will test the tests?).

Is there anything out of the box?

Of course. Just look at BDD. There is Cucumber or Gauge.

In short – the developer describes the business scenario in a special language and writes the implementation later. This language is usually human readable. It is assumed that it will be read/written not only by developers, but also by project managers.

Together with the implementation, the scenario is stored in a standalone project and is run by a third-party tool (Cucumber, Gauge…).

The scenario:

Customer sign-up
================

* Go to sign up page

Customer sign-up
----------------
tags: sign-up, customer

* Sign up a new customer with name "John" email "jdoe@test.de" and "password"
* Check if the sign up was successful

The implementation:

@Step("Sign up as <customer> with email <test@example.com> and <password>")
    public void signUp(String customer, String email, String password) {
        WebDriver webDriver = Driver.webDriver;
        WebElement form = webDriver.findElement(By.id("new_user"));
        form.findElement(By.name("user[username]")).sendKeys(customer);
        form.findElement(By.name("user[email]")).sendKeys(email);
        form.findElement(By.name("user[password]")).sendKeys(password);
        form.findElement(By.name("user[password_confirmation]")).sendKeys(password);
        form.findElement(By.name("commit")).click();
    }

    @Step("Check if the sign up was successful")
    public void checkSignUpSuccessful() {
        WebDriver webDriver = Driver.webDriver;
        WebElement message = webDriver.findElement(By.className("message"));
        assertThat(message.getText(), is("You have been signed up successfully!"));
    }

The full project can be found here.

Pros:

  • Business logic is described in human readable language and is stored in one place (can be used as documentation).
  • Existing solutions are used. Developers only need to know how to use them.

Cons:

  • Managers won’t read/write these specs.
  • You have to maintain both specifications and implementations.

Why do we need Catcher?

Of course, to simplify the process.

The developer just writes test scenarios in json or yaml and catcher executes them. A scenario is just a set of consecutive steps, e.g.:

steps:
    - http:
        post:
          url: '127.0.0.1/save_data'
          body: {key: '1', data: 'foo'}
    - postgres:
        request:
          conf: 'dbname=test user=test host=localhost password=test'
          query: 'select * from test where id=1'

Catcher supports Jinja2 templates, so you can use variables instead of hardcoded values. You can also store global variables in inventory files (as in ansible), fetch them from the environment or register new ones.

variables:
  bonus: 5000
  initial_value: 1000
steps:
- http:
        post:
          url: '{{ user_service }}/sign_up'
          body: {username: 'test_user_{{ RANDOM_INT }}', data: 'stub'}
        register: {user_id: '{{ OUTPUT.uuid }}'}
- kafka:
        consume:
            server: '{{ kafka }}'
            topic: '{{ new_users_topic }}'
            where:
                equals: {the: '{{ MESSAGE.uuid }}', is: '{{ user_id }}'}
        register: {balance: '{{ OUTPUT.initial_balance }}'}

Additionally, you can run verification steps:

- check: # check user’s initial balance
    equals: {the: '{{ balance }}', is: '{{ initial_value + bonus }}'}

You can also run one test from another, which allows you to reuse code and keep it logically separated.

include:
    file: register_user.yaml
    as: sign_up
steps:
    # .... some steps
    - run:
        include: sign_up
    # .... some steps

Catcher also has a tag system – you can run only specific steps from an included test.

Besides the built-in steps and the additional repository, it is possible to write your own modules in python (simply by inheriting ExternalStep) or in any other language:

#!/bin/bash
one=$(echo ${1} | jq -r '.add.the')
two=$(echo ${1} | jq -r '.add.to')
echo $((${one} + ${two}))

And executing it:

---
variables:
  one: 1
  two: 2
steps:
    - math:
        add: {the: '{{ one }}', to: '{{ two }}'}
        register: {sum: '{{ OUTPUT }}'}

It is recommended to place the tests in docker and run them via CI.

The Docker image can also be used in Marathon/K8s to test an existing environment. At the moment I am working on a back-end (an analogue of AnsibleTower) to make the testing process even easier and more convenient.

The example of e2e test for a group of microservices is here.
Working example of e2e test with Travis integration is here.

Pros:

  • No need to write any code (only in case of custom modules).
  • Switching environments via inventory files (like in ansible).
  • Easily extendable with custom modules (in any language).
  • Ready-to-use modules.

Cons:

  • The developer has to learn a DSL which is not very human-readable (in comparison with other BDD tools).

Instead of conclusion

You can use standard technologies or write something of your own. But I am talking about microservices here. They are characterized by a wide variety of technologies and a big number of teams. If junit + testcontainers is an excellent choice for a JVM team, an Erlang team will pick common test. Once your department grows, all e2e tests will be handed over to a dedicated team – infrastructure or qa. Imagine how happy they will be with this zoo.

When I was writing this tool, I just wanted to reduce the time I usually spend on tests. In every new company I usually have to write (or rewrite) such a test system.

However, this tool turned out to be more flexible than I expected. For example, Catcher can also be used for organizing centralized migrations and updates of microservice systems, or for data pipeline integration testing.