Over the last three years, the technology platform at Motus has undergone a radical transformation. Three years ago, we had three physical servers in a rack somewhere in Texas hosting two monolithic applications and one database. Every developer and tester had their own full Linux VM that required constant care and feeding to keep running and up to date. Our production deployment process involved two people, three wiki pages of manual steps, and a lot of crossed fingers. We didn’t dare touch our systems for a quarter of every month lest something go wrong from which we could not recover.
Today, the story is different. Over the next few posts, we’d like to share with you the choices we made, the tools we’ve used (and built), and how those decisions have held up to the realities of production. This post will talk about our internal dev and QA systems, which were the first to undergo a change.
In the summer of 2014, we had just begun our shift from a monolithic application platform to a more service-oriented one. We decided to move our authentication and authorization functions to this new architecture, and it was getting complicated. Our developers and QA testers each had a VM that hosted their copy of a database, our legacy PHP application, and our Java mileage entry application. All of a sudden, we were asking them to manage not only those three components but three more Java service and frontend applications as well. It became very clear very quickly that this was a) very time consuming, and b) a royal pain in the rear. Every production deployment meant sweeping Puppet changes to configure every service in each environment. All that complexity made it very easy to make mistakes and very hard for certain key people to go on vacation.
Also in the summer of 2014, a technology called Docker was really starting to take hold. It was developed by a hosting company to make Linux containers more usable and portable, and it looked really interesting. Here was a way to package up not only your application but also your server components, configuration bits, and dependencies, and ship the whole thing around your various environments. We had been playing with it for a few months internally, mainly as a way to make our dev databases easier to manage. Then it occurred to us that if we packaged our apps as Docker containers bundling Java, Tomcat, and the application itself, we could make deploying the correct versions of everything much easier. Thus was born DynEnv.
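To make the idea concrete, here is a rough sketch of how one of those images might get built and tagged. The image name, branch, and build directory below are hypothetical, not our actual build scripts, but the resulting branch-plus-build-number tag is exactly the kind of thing the manifests described below refer to.
import subprocess

# Hypothetical build step: bake the JVM, Tomcat, and the application into one
# image, then tag it with the branch and build number for later deployment.
branch, build_number = "dev_branch", "2"
tag = f"motus/userservice:{branch}-{build_number}"
subprocess.run(["docker", "build", "-t", tag, "services/userservice"], check=True)
subprocess.run(["docker", "push", tag], check=True)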
Motus DynEnv is a mechanism for our internal users to manage suites of application versions, all wired together with a single domain name. It starts with the user creating a manifest that looks something like this:
USERSERVICE_BUILD_NAME=motus/userservice:dev_branch-latest
LEGACY_BUILD_NAME=motus/legacy:dev_branch-2
...
This defines the list of services the user wants to deploy, along with the code branch and build number (or latest build) for each. The user goes to a web portal, specifies the manifest they want, and deploys it. Once the deployment succeeds, they are presented with a URL like https://myenv.dynenv that they can use. This radically simplifies managing software builds and deployments, because the versioning, configuration, and installation all happen behind the scenes.
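Behind the portal, the core loop is simple enough to sketch. The following is purely illustrative; the parsing, container naming, and DYNENV_BASE_URL variable are assumptions, not the actual DynEnv code. The idea: read the manifest, pull each image, and start it with the environment's identity passed in.
import subprocess

def parse_manifest(path):
    # Read KEY=image:tag lines into a dict, skipping blanks and comments.
    services = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                key, image = line.split("=", 1)
                services[key] = image
    return services

def deploy(env_name, manifest_path):
    # Pull and start every image listed in the manifest for this environment.
    for key, image in parse_manifest(manifest_path).items():
        name = f"{env_name}-{key.lower().replace('_build_name', '')}"
        subprocess.run(["docker", "pull", image], check=True)
        subprocess.run(["docker", "run", "-d", "--name", name,
                        "-e", f"DYNENV_BASE_URL=https://{env_name}.dynenv",
                        image], check=True)

deploy("myenv", "manifest.env")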
How does it work? We used a few key technologies.
One of the biggest challenges in moving to a services architecture is service discovery. Since all of our services talk to each other over HTTP, we decided early on that DNS would be our discovery mechanism. That proved to be a fortunate decision, because in the summer of 2014 the Docker/microservices/service discovery landscape was still very much in flux; sticking with DNS meant we didn’t have to bet on anything too bleeding edge. It also made wiring together our dynamic environments pretty easy once we discovered Hipache, a proxy that routes HTTP requests by hostname. Each service is configured through environment variables, so when a dynamic environment is created we simply pass unique service URLs to all the clients; when the services come up, they can talk to each other.
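Hipache keeps its routing table in Redis: for each hostname it reads a list whose first element is an identifier and whose remaining elements are backend URLs. Registering a fresh environment under its https://myenv.dynenv name therefore looks roughly like this; the helper, the Redis location, and the backend address are hypothetical, though the key format follows Hipache's documented scheme.
import redis

r = redis.StrictRedis(host="localhost", port=6379)

def register(hostname, backends):
    # Point a DynEnv hostname at the container(s) that serve it.
    key = f"frontend:{hostname}"
    r.delete(key)               # start the environment's routing entry fresh
    r.rpush(key, hostname)      # first element is just an identifier
    for backend in backends:
        r.rpush(key, backend)   # each remaining element is a backend URL

register("myenv.dynenv", ["http://172.17.0.12:8080"])
A wildcard DNS record sending *.dynenv to the Hipache hosts would then be enough for any environment name to resolve and get routed to the right containers, though the exact DNS setup here is our assumption.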
DynEnv wasn’t automatically a bucket of sunshine and roses; we learned quite a few lessons along the way.
Despite all of the above, this mechanism is the only way we have been able to scale our service development. We currently have north of 30 services, and they would be impossible to manage without this kind of system.
Stay tuned for stories from our production environment!