Technology Insights

Introduction to the Workflow Microservice

By Pascal Chouinard / April 4, 2018

At the beginning of this year, the Engineering team at AppDirect rolled out the first deployment of the Workflow Service (WFS) in production for a large European telecom customer.

In this post, we'll explain what the Workflow Service is and why we built it. We'll also share the challenges we encountered and the lessons we learned along the way. Finally, we'll discuss our thoughts and vision for the evolution of the microservice.

What the Workflow Service Is

WFS is a new microservice that allows you to "script" business workflows by orchestrating a series of task endpoints. The endpoints can live in the primary core codebase or, potentially, in any other microservice in our infrastructure. Since WFS is a microservice, workflow execution happens outside the primary core codebase. A benefit of the workflow being scripted is that the tasks in a workflow are more loosely coupled.

For example, if we wish to change where a task endpoint is hosted, we simply update the workflow definition, and that's it. Likewise, if we want to add an extra step to a workflow after it's deployed, we can simply update the workflow definition and the other parts of the workflow are unaffected.

Why We Built the WFS

Over time, we ended up with a lot of duplicate workflow systems inside the primary core codebase. Guidance on which system to use, and when, was very unclear, and documentation was scarce. Given that our customers are increasingly asking for custom workflows, and that we want to keep our system loosely coupled and avoid ugly customization code, we determined that we needed a common, reusable service to deploy those workflows.

During our brainstorming and analysis, we established the high-level requirements of the ideal workflow engine we would like to develop. However, building this engine from scratch was a massive undertaking, and we had very limited time to deliver two custom workflows that customers had already signed for. Given this, we searched for existing solutions, ideally an open source framework with a license compatible with our needs.

Fortunately, we came across a framework named Conductor that Netflix open sourced in December 2016. As we looked into it, we realized its core functionality covered pretty much everything we needed, so we decided to use it as a foundation to rapidly put together our workflow service. The microservice we created essentially wraps Conductor and adds the access control and per-partner configuration that were missing for our use cases. Conductor also comes with a UI, which is exposed through the WFS. This UI allows you to monitor workflow executions, view the execution graph, and more.

How the WFS Works

In its first version, the WFS allows you to invoke task endpoints in the primary core codebase as well as in any other existing REST API (internal or external). For added security, we also support OAuth 1.0a for mutual authentication between the primary core codebase and the WFS.
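As a rough sketch of what two-legged OAuth 1.0a signing involves: the client builds a signature base string from the request and signs it with HMAC-SHA1. This is illustrative only, not our actual client; production code should use a vetted OAuth library such as oauthlib.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse
import uuid


def percent_encode(s):
    # RFC 5849 percent-encoding: only unreserved characters (A-Z a-z 0-9 - . _ ~)
    # are left as-is; everything else, including "/", is encoded.
    return urllib.parse.quote(str(s), safe="~-._")


def sign_request(method, url, params, consumer_key, consumer_secret):
    """Produce OAuth 1.0a protocol parameters for a two-legged request
    (no access token). Sketch under simplifying assumptions: all values
    are strings and there are no repeated parameter names."""
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    # Normalize request parameters: merge, sort, and percent-encode.
    all_params = {**params, **oauth}
    normalized = "&".join(
        f"{percent_encode(k)}={percent_encode(v)}"
        for k, v in sorted(all_params.items())
    )
    base_string = "&".join(
        [method.upper(), percent_encode(url), percent_encode(normalized)]
    )
    # Signing key is consumer secret + "&" (empty token secret in two-legged flow).
    key = percent_encode(consumer_secret) + "&"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    oauth["oauth_signature"] = base64.b64encode(digest).decode()
    return oauth
```

The resulting parameters would typically be sent in an `Authorization: OAuth ...` header, and the server repeats the same computation to verify the caller.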

API endpoints must also accept and return JSON as the data-exchange format to be usable with Conductor. To deploy a new workflow on the WFS, developers have to create a workflow definition.

A workflow definition is simply a JSON script that follows the Conductor DSL for describing the tasks in a workflow and the connections between them. The DSL supports a convenient JSON path binding language, which allows you, for example, to bind the output of one task to the input of another.

Example workflow definition:

 {
   "name": "encode_and_deploy",
   "description": "Encodes a file and deploys to CDN",
   "version": 1,
   "tasks": [
     {
       "name": "encode",
       "taskReferenceName": "encode",
       "type": "SIMPLE",
       "inputParameters": {
         "fileLocation": "${workflow.input.fileLocation}"
       }
     },
     {
       "name": "deploy",
       "taskReferenceName": "d1",
       "type": "SIMPLE",
       "inputParameters": {
         "fileLocation": "${encode.output.encodeLocation}"
       }
     }
   ],
   "outputParameters": {
     "cdn_url": "${d1.output.location}"
   },
   "schemaVersion": 2
 }
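To illustrate how the `${...}` bindings work, here is a hypothetical resolver that substitutes placeholders from a nested context of task outputs. This is an illustration of the idea only; Conductor's actual implementation uses JSONPath and is far more general.

```python
import re


def resolve(expr, context):
    """Replace ${a.b.c} placeholders in expr with values looked up
    in a nested dict. Hypothetical resolver for illustration."""
    def lookup(path):
        node = context
        for key in path.split("."):
            node = node[key]
        return node

    return re.sub(r"\$\{([^}]+)\}", lambda m: str(lookup(m.group(1))), expr)


# The output of the "encode" task becomes the input of the "deploy" task:
context = {"encode": {"output": {"encodeLocation": "s3://bucket/encoded.mp4"}}}
print(resolve("${encode.output.encodeLocation}", context))
# s3://bucket/encoded.mp4
```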

Workflow definitions are stored in the WFS's database and kept in sync with Conductor. We wanted workflow definitions to be easy to deploy as code and to submit as pull requests, so they can be code reviewed.

We followed an approach very similar to the one used by our reporting service: workflow definitions are submitted as pull requests containing the database migrations. This ensures that new workflows are properly code reviewed and relevant.
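Migrations aside, a definition ultimately reaches Conductor through its REST metadata API. A minimal sketch of building such a registration request (the base URL is hypothetical; the endpoint path follows Conductor's REST API):

```python
import json
import urllib.request


def build_registration_request(conductor_url, definition):
    """Build the HTTP request that registers a workflow definition
    with Conductor's metadata API. Sketch only: no auth headers,
    and the Conductor base URL is an assumption."""
    return urllib.request.Request(
        f"{conductor_url}/api/metadata/workflow",
        data=json.dumps(definition).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


definition = {"name": "encode_and_deploy", "version": 1, "tasks": [], "schemaVersion": 2}
req = build_registration_request("http://localhost:8080", definition)
# urllib.request.urlopen(req) would submit it to a running Conductor server.
```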

Contributing to the Open Source Project

One of the big challenges we encountered came when we started looking at deploying the workflow service in the customer's on-premise environment: Conductor supported only DynamoDB or Redis as a storage backend. Since the Conductor project is open source, we ended up submitting a pull request adding support for a MySQL storage backend. The owners of the Conductor project reviewed and merged our PR after we addressed all the issues they raised.

This is great, because we wanted to keep using the community version of the project rather than maintaining our own fork and losing future improvements from Netflix and other contributors. In fact, since we started using Conductor and interacting with the project owners, we have already contributed 12 PRs to the project.

Kubernetes Deployment in a Single Pod with Conductor

Configuring the workflow microservice proved to be quite complex, because it doesn't follow the usual pattern in which a single container connects to a MySQL database.

In Kubernetes, a Pod is used to configure a group of containers that share the same storage and network. In our typical deployments, all the dependencies of a microservice are deployed together in the same Pod. Since the workflow microservice has multiple dependencies (the Conductor server, the Conductor UI, Elasticsearch, and a storage engine such as MySQL), it was more complex to arrive at an optimal Kubernetes configuration.

It took a lot of fine-tuning and debugging, but we finally arrived at a good working configuration. The microservice is now also available in our on-demand environment provisioner via the Kubernetes services icon.

Visit the Conductor project on GitHub to learn more about the framework.

Future Development and Vision

One vision we had for the workflow service is to reduce customization work inside the primary core codebase. We also envisioned that it could be a great fit for reducing the recurring DBCMs performed for support requests, since we often already have the APIs and simply need a way to "wire" them together.

We are currently working on a management console: a portal accessible from the Marketplace Manager solution. There, users will be able to start the workflow executions they have been given access to and monitor existing executions on a self-serve basis.

We initially designed the WFS to facilitate the implementation of customization requests from partners, but it's not limited to that. It can be used to orchestrate any set of endpoints into a single business workflow. That said, it is best suited to backend tasks, since execution runs asynchronously in the background.

Another thing we would like to look at is an integration with the PubSub service, so that events could trigger workflows and workflows could, in turn, publish other events.
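As a sketch of what the event side of such an integration might look like, a thin routing layer could map incoming PubSub event types to workflow names before calling Conductor's start-workflow endpoint. The event types and workflow names below are hypothetical.

```python
# Hypothetical mapping from PubSub event types to workflow names.
EVENT_WORKFLOWS = {
    "subscription.created": "provision_subscription",
    "subscription.cancelled": "deprovision_subscription",
}


def route_event(event):
    """Translate a PubSub event into a (workflow name, workflow input)
    pair, or None for event types that don't trigger a workflow.
    A real handler would then POST the input to Conductor's
    start-workflow endpoint and record the returned execution id."""
    name = EVENT_WORKFLOWS.get(event["type"])
    if name is None:
        return None
    return name, event.get("payload", {})
```

Keeping the routing table as data (rather than code) would let the same per-partner configuration mechanism the WFS already has control which events fire which workflows.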

Pascal Chouinard is a Senior Software Developer at AppDirect.