Developing in a Microservice Environment: Part 4

October 6th, 2015

In the first, second, and third posts of this series on guidelines for building effective microservice architectures, I described the importance of effective documentation, the challenge of maintaining consistency across distributed development teams, and some strategies for effective communication. In this post, I will look at how to set up a development environment for success.

  1. Lack of documentation will kill your team’s productivity, but selecting the correct way to document is just as important.
  2. Consistent use of best practices is necessary to maintain developers’ sanity.
  3. Effective communication is extremely important, but too much communication is a detriment.
  4. Setting up a good development environment is as important as (or even more important than) the development itself.

Setting up your environment

Before writing the first line of code in a microservice project, the team has to set up an environment that facilitates an effective development process. The distributed nature of microservice architecture makes setting up such an environment non-trivial. This complexity is reflected in testing, continuous integration, logging, and other facets of the development and deployment process. In this post, I will discuss some of the tools and techniques that can alleviate some of the pain.

Testing

Integration Testing

In order to test a component that depends on downstream services, the developer must understand how to recreate a production environment in which all of those services are available. Fortunately, if you’re using Java or Groovy, there is a great tool that makes the development of such tests extremely easy: Moco. It allows you to run a mock server on a specific port on your local system while stubbing out the results of predefined API calls. Even if your system depends on five or six downstream services (I’ve had components that depended on as many as eight), Moco can handle it without a problem. Moco provides a very flexible interface that returns predefined data for a specific type of API call, and the match definition can get as granular as you’d like, down to the headers sent along with the call. It also allows you to send a different response each time an endpoint is invoked. Another great feature of Moco is the ability to imitate failure scenarios like timeouts or specific error statuses.
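To give a rough idea of what this looks like in practice, here is a minimal sketch using Moco’s Java API inside a JUnit test. The service name, port, endpoints, and payloads are made up for illustration:

```java
import static com.github.dreamhead.moco.Moco.*;
import static com.github.dreamhead.moco.Runner.running;

import com.github.dreamhead.moco.HttpServer;
import org.junit.Test;

public class InventoryClientTest {

    @Test
    public void looksUpProductsFromTheDownstreamInventoryService() throws Exception {
        // Stand-in for a downstream "inventory" service on a local port
        HttpServer inventory = httpServer(12306);

        // Stub a specific API call; matching can be as granular as the request headers
        inventory.request(and(by(uri("/products/42")),
                              eq(header("Accept"), "application/json")))
                 .response(with(text("{\"id\": 42, \"stock\": 17}")),
                           header("Content-Type", "application/json"));

        // Imitate a failure scenario on another endpoint
        inventory.request(by(uri("/products/99"))).response(status(503));

        running(inventory, () -> {
            // Exercise the component under test against http://localhost:12306 here
        });
    }
}
```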

Performance Testing

One of the justifications for a microservice architecture is the scale at which it can perform. To understand whether what you’re building meets its SLAs (service-level agreements), the system needs continuous performance testing. The tool we used for this is JMeter. It might have an old-school interface, but don’t let that fool you: it is a powerful performance testing tool that lets you set up reusable load tests that drive the system with multiple threads of requests. It also allows you to define variables like ‘host’ and ‘port’ to avoid repetition, and it has plenty of exporting and charting options so you can communicate the results to stakeholders.
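As a small illustration of how those variables come together, a saved test plan can read them back with JMeter’s `${__P(host,localhost)}` property function and be run headlessly in CI. The plan file name and values below are placeholders:

```
# Run the plan in non-GUI mode, overriding host/port properties per environment
jmeter -n -t order-service-load.jmx -Jhost=staging.example.com -Jport=8080 -l results.jtl
```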

Continuous Integration

The difficulty with integration and functional testing dovetails with the use of continuous integration (CI). Our preferred CI tool is Jenkins, a free and open-source solution with tons of useful integrations and plugins. Automating a continuous integration environment to run through the build/test/deploy cycle with microservices can be tricky, since multiple test phases can run at the same time and you may have issues with port clashing. This is something we ran across while testing our services during an engagement with a major retail client. In order to parallelize the test runs effectively, you have to randomly select the ports on which mocked downstream services will run. Luckily, this can be done programmatically with Moco by choosing the port number at test run-time from a range of allowable ports.
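A minimal sketch of that idea, picking a port from an allowable range at test run-time and handing it to Moco. The range and helper class are illustrative, not prescriptive:

```java
import static com.github.dreamhead.moco.Moco.httpServer;

import com.github.dreamhead.moco.HttpServer;
import java.util.concurrent.ThreadLocalRandom;

public class MockPorts {

    // Pick a port from a range reserved for test doubles so parallel builds don't clash
    static int randomPort() {
        return ThreadLocalRandom.current().nextInt(20000, 30000);
    }

    static HttpServer downstreamStub() {
        int port = randomPort();
        // The component under test is then pointed at http://localhost:<port>
        return httpServer(port);
    }
}
```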

As microservice projects get more complicated, the Jenkins configurations start to get out of hand, and the team needs more control over them. You might want to share build or deployment settings between projects, for example. The solution to this is the Job DSL Plugin, which allows you to define Jenkins configurations using a Groovy-based domain-specific language. Three key benefits are the ability to reuse any portion of the configuration between projects, a view into the history of a configuration, and the ability to completely recreate your Jenkins environment from scratch if you’re switching your CI hardware. This simple and elegant solution saved an enormous amount of time during our year-long engagement at the retail client.
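To give a flavor of the DSL, here is a hedged sketch of a job definition as it might appear in a seed script (the job name, repository URL, and build steps are made up for illustration):

```groovy
// Job DSL script, typically run from a Jenkins "seed" job
job('my-service-build') {
    scm {
        git('https://github.com/example/my-service.git', 'master')
    }
    triggers {
        scm('H/5 * * * *')   // poll the repository every five minutes
    }
    steps {
        gradle('clean test build')
    }
    publishers {
        archiveJunit('build/test-results/**/*.xml')
    }
}
```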

There is an enormous wealth of other plugins available for Jenkins as well.

Central Logging System

Logging is one of the most important tools for development, but it becomes dramatically more complicated in a microservice environment. There is a wealth of solutions out there that centralize distributed log feeds, including the popular Splunk and the open-source Graylog. Graylog is built on top of Elasticsearch (which can make setup difficult), and in my opinion it is vastly superior to Splunk in terms of both performance and interface. It provides an easy way to filter the log stream by service and node.

Another solution to centralized logging that has been gaining a lot of popularity is the ELK Stack. I haven’t had a chance to use this on a project yet, but I look forward to editing this post to include a comparison later on.

Metrics

Effective instrumentation is the key to understanding where and what to optimize in your system. When a performance issue is detected, it’s time to look at the metrics generated by each of the services in the environment. We’ve used Dropwizard Metrics on numerous engagements with much success. It provides a simple interface, integrates well with common metrics-gathering platforms like Graphite, and offers a variety of metric types, including gauges, timers, counters, and health checks.
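As an example of the sort of wiring involved, here is a minimal sketch with Dropwizard Metrics timing a piece of work and reporting to Graphite. The Graphite host, metric name, and reporting interval are placeholders:

```java
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;
import com.codahale.metrics.graphite.Graphite;
import com.codahale.metrics.graphite.GraphiteReporter;

import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;

public class OrderServiceMetrics {

    private static final MetricRegistry registry = new MetricRegistry();
    private static final Timer lookups = registry.timer("orders.lookup");

    public static void main(String[] args) {
        // Ship everything in the registry to Graphite once a minute
        Graphite graphite = new Graphite(new InetSocketAddress("graphite.example.com", 2003));
        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build(graphite);
        reporter.start(1, TimeUnit.MINUTES);

        // Time a unit of work; the Timer tracks call rate and latency percentiles
        try (Timer.Context ignored = lookups.time()) {
            handleLookup();
        }
    }

    private static void handleLookup() {
        // ... service logic would go here ...
    }
}
```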

For metrics analysis, we use Grafana on top of Graphite. Grafana’s interface takes a bit of getting used to, but once you get the hang of it, you can build elaborate visualization dashboards over the necessary gauges within your system.

Conclusion

We put together this four-part blog series in order to help technical managers and architects get started developing in a microservice environment. We covered subjects like documentation, best practices, communication, and tooling.

At SVDS, we are constantly researching new ways to make developing microservices, and managing microservice teams, easier for ourselves and our clients. We are also constantly looking for feedback from industry experts in order to improve our understanding of this approach to programming. If you have any comments, questions, or suggestions about these posts, please let us know below.