A bit less than three years ago, a team of hackers rather fond of - at the time - seemingly niche technologies decided to ditch their monolithic PHP application and move to a more maintainable and resilient architecture.

Microservices were not as popular back then as they are today - some of the most successful startups of the time went with this new-old approach, but the whole topic stayed off most hackers' radar.

Yet this tech- and craft-beer-loving crew decided to rebuild Hailo in Go, using microservices. The idea of h2 was born.

You can have a look at the project on GitHub.

Microservices? What? Why? How?

There are a lot of resources on microservices nowadays, but the idea can be (incompletely) summarized like this:

A service is an interface - a black box. One can only interact with a service through this interface. At Hailo, database tables are namespaced - each service only has access to its own - and this clear separation lets us change the implementation of a service without worrying about breaking anything else. As long as the interface stays the same, no other component of the system is affected.
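
To make the idea concrete, here is a minimal Go sketch of a service as an interface - the names are made up for illustration and are not taken from the h2 codebase. Callers depend only on the interface, so the backing implementation can be swapped freely:

```go
package main

import "fmt"

// CustomerService is the only thing callers ever see - a black box.
// The names here are hypothetical, purely for illustration.
type CustomerService interface {
	Read(id string) (Customer, error)
}

type Customer struct {
	ID   string
	Name string
}

// cassandraCustomers is one possible implementation; it could be swapped
// for another storage backend without touching any caller.
type cassandraCustomers struct{}

func (cassandraCustomers) Read(id string) (Customer, error) {
	// ... query the service's own, namespaced tables here ...
	return Customer{ID: id, Name: "example"}, nil
}

func main() {
	var svc CustomerService = cassandraCustomers{}
	c, err := svc.Read("cust-123")
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Name)
}
```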

Deployment becomes a breeze - no coordination required. Deployments of different components can happen at the same time.

This also enables service writers to pick any language they want - if they can talk to the service interface through a given protocol, the implementation language of the service is irrelevant. We chose to keep the codebase in a single language, however, because - being a small team - we value that anybody can dive into the code of any service.

After picking a language, we had to decide what tools to use to build h2.

Internals and infrastructure choices

Service to service communication

RabbitMQ is used for the interservice communication. A custom exchange (h2o) handles the routing (i.e. traffic weighting, label-based routing, etc.), while a direct exchange delivers responses to the callers. Although this is asynchronous under the hood, to the service writer it all looks like run-of-the-mill, synchronous request-response.
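
The trick that makes an asynchronous broker feel synchronous is the classic correlation-ID pattern: publish the request tagged with a unique ID, then block until a reply carrying the same ID shows up on the response queue. The sketch below illustrates just that pattern with plain Go channels standing in for RabbitMQ - it is not the actual h2 client code:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type message struct {
	correlationID string
	body          string
}

// rpcClient fakes the broker with Go channels purely to illustrate the
// correlation-ID pattern; the real thing would publish to RabbitMQ.
type rpcClient struct {
	mu      sync.Mutex
	pending map[string]chan message // correlation ID -> waiting caller
	out     chan message            // stands in for the request exchange
}

// Call publishes a request and blocks until the matching reply arrives,
// so it feels like an ordinary synchronous function call.
func (c *rpcClient) Call(id, body string, timeout time.Duration) (string, error) {
	reply := make(chan message, 1)
	c.mu.Lock()
	c.pending[id] = reply
	c.mu.Unlock()

	c.out <- message{correlationID: id, body: body}

	select {
	case m := <-reply:
		return m.body, nil
	case <-time.After(timeout):
		return "", fmt.Errorf("call %s timed out", id)
	}
}

// handleReply is what the consumer on the reply queue would do: route the
// response back to whichever caller is waiting on that correlation ID.
func (c *rpcClient) handleReply(m message) {
	c.mu.Lock()
	reply, ok := c.pending[m.correlationID]
	delete(c.pending, m.correlationID)
	c.mu.Unlock()
	if ok {
		reply <- m
	}
}

func main() {
	c := &rpcClient{pending: map[string]chan message{}, out: make(chan message, 1)}

	// A fake "service" consuming requests and publishing replies.
	go func() {
		req := <-c.out
		c.handleReply(message{correlationID: req.correlationID, body: "pong"})
	}()

	resp, err := c.Call("req-1", "ping", time.Second)
	fmt.Println(resp, err)
}
```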

Calling a service from the outside

There is a single service that translates HTTP requests into internal service calls; see the http2rpc container.
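
Conceptually, such an edge layer does something like the following - a hypothetical sketch of the pattern rather than the actual http2rpc code: derive the target service and endpoint from the URL, forward the body as an internal call, and write the result back as JSON.

```go
package main

import (
	"io"
	"log"
	"net/http"
	"strings"
)

// callService stands in for the internal RPC client; the real thing would
// publish to RabbitMQ and wait for the correlated reply.
func callService(service, endpoint string, payload []byte) ([]byte, error) {
	return []byte(`{"ok":true}`), nil
}

func main() {
	// e.g. POST /customer/read -> service "customer", endpoint "read"
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		parts := strings.SplitN(strings.Trim(r.URL.Path, "/"), "/", 2)
		if len(parts) != 2 {
			http.Error(w, "expected /<service>/<endpoint>", http.StatusBadRequest)
			return
		}
		body, _ := io.ReadAll(r.Body)
		resp, err := callService(parts[0], parts[1], body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		w.Write(resp)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```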

Persistence

All the core services use Cassandra exclusively to persist data, because it’s a very fine piece of linearly scaling technology. It does not allow for overly complex querying - that simplicity is exactly what enables Cassandra to handle massive datasets - but unless someone is building fancy user-facing functionality, Cassandra provides more than enough to express ourselves.

We love Cassa so much that we wrote a Go client for it, gocassa.
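
To give a flavour of what working with it looks like, here is a small example following the pattern in the gocassa README (the exact API may have changed since this was written): a service maps a struct onto one of its own tables and reads and writes rows through it.

```go
package main

import (
	"fmt"
	"time"

	"github.com/hailocab/gocassa"
)

type Sale struct {
	Id         string
	CustomerId string
	Price      int
	Created    time.Time
}

func main() {
	// Connect to the service's own keyspace (tables are namespaced per service).
	keySpace, err := gocassa.ConnectToKeySpace("sales", []string{"127.0.0.1"}, "", "")
	if err != nil {
		panic(err)
	}

	// A simple key-value style table keyed by the Id field.
	sales := keySpace.MapTable("sale", "Id", &Sale{})

	if err := sales.Set(Sale{
		Id:         "sale-1",
		CustomerId: "customer-1",
		Price:      42,
		Created:    time.Now(),
	}).Run(); err != nil {
		panic(err)
	}

	var result Sale
	if err := sales.Read("sale-1", &result).Run(); err != nil {
		panic(err)
	}
	fmt.Println(result.CustomerId, result.Price)
}
```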

NSQ

NSQ is used for the more traditional, async messaging. “NSQ scales horizontally, without any centralized brokers” - nsq.io.
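
A typical producer/consumer pair with the go-nsq client looks roughly like this - a minimal sketch that assumes a local nsqd on the default port and is not taken from the h2 services:

```go
package main

import (
	"log"
	"time"

	"github.com/nsqio/go-nsq"
)

func main() {
	cfg := nsq.NewConfig()

	// Consumer: everyone in the "payments" channel shares the work
	// published to the "job" topic.
	consumer, err := nsq.NewConsumer("job", "payments", cfg)
	if err != nil {
		log.Fatal(err)
	}
	consumer.AddHandler(nsq.HandlerFunc(func(m *nsq.Message) error {
		log.Printf("got message: %s", m.Body)
		return nil // returning nil marks the message as finished
	}))
	if err := consumer.ConnectToNSQD("127.0.0.1:4150"); err != nil {
		log.Fatal(err)
	}

	// Producer: publish a message to the "job" topic.
	producer, err := nsq.NewProducer("127.0.0.1:4150", cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := producer.Publish("job", []byte("hello")); err != nil {
		log.Fatal(err)
	}

	time.Sleep(time.Second) // give the consumer a moment before exiting
	consumer.Stop()
	producer.Stop()
}
```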

ZooKeeper

ZooKeeper is used for distributed locking.
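
With the commonly used samuel/go-zookeeper client, taking a distributed lock looks roughly like this - a sketch assuming a local ZooKeeper, not the actual h2 wrapper:

```go
package main

import (
	"log"
	"time"

	"github.com/samuel/go-zookeeper/zk"
)

func main() {
	// Connect to ZooKeeper; the session timeout governs how long a crashed
	// client keeps holding its ephemeral lock nodes.
	conn, _, err := zk.Connect([]string{"127.0.0.1:2181"}, 5*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// All contenders for the same lock use the same path.
	lock := zk.NewLock(conn, "/locks/daily-settlement", zk.WorldACL(zk.PermAll))

	if err := lock.Lock(); err != nil {
		log.Fatal(err)
	}
	defer lock.Unlock()

	// Only one instance across the whole cluster runs this at a time.
	log.Println("doing the critical, run-exactly-once work")
}
```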

Go Services

(the following information is likely to get outdated as the project progresses)

The version that’s available under said URL contains the libraries that enable someone to build a service, and a bunch of precompiled service binaries in Docker containers. The dashboards are all served from a single nginx container called ‘dashboards’.

At the moment, this is enough to experiment with the approach, even to write services, but it definitely takes a brave person to run the whole thing in production.

In the repo, you can find two Go libraries:

The one under h2/go is used to run and call services. It has an interface with a rather minimalistic feature set, but since the internal package contains most of the code, one can change the package and export more - ideally, though, we should have a stable API. The legacy API exports too much and we only use a subset of it - this is the perfect opportunity to clean it up. Help is greatly appreciated.

The other library, h2/go/i (i stands for ‘infrastructure’), is a bunch of wrappers around infrastructure component drivers. These drivers are integrated with the config service, providing hassle-free access.

The services themselves are shipped as binaries because we can’t open source them just yet - since we didn’t build them with open sourcing in mind, there might be distracting Hailo-specific bits here and there. Usually we try to keep things tidy, but sometimes this thing called ‘life’ gets in the way. This is the largest task before all of it can be open sourced.

Provisioning is another big topic. It is likely that we will open source our resource management system that runs the containers.

But that is a story for another day.

The future

Open sourcing the services is a must - and it is on the roadmap. Hopefully, even in its current state, this project can inspire people interested in microservices, even if using it straight out of the box takes some effort.

Separating out the internal package into a daemon and having a really thin library could be another improvement - it would allow us to have the same functionality in multiple languages.

GitHub