The Story Behind Meteor's Next Big Move


by David Woody

In 2011, Marc Andreessen, co-founder of the VC firm Andreessen Horowitz (AH), wrote an essay for The Wall Street Journal titled “Why Software Is Eating the World”. In it, he articulated how services catering to software developers were creating fertile ground for rapid innovation. In July 2012, less than a year after that essay was published, AH invested the bulk of an $11.2 million Series A round in a company called Meteor Development Group (MDG). At that point, MDG had ideas for a commercial product, but it was focused entirely on building Meteor’s open source framework and community first. Fast forward to May 2015, and AH again participated in MDG’s $20M Series B round. This time, all the focus was on the commercial product in development: Galaxy. Everyone in the Meteor community is excited for the release of Galaxy, but some may be unaware of the technology innovations enabling it. The story starts back in 2006 with two tinkering engineers at Google.

The Birth of CGroups

In 2006, Paul Menage and Rohit Seth set out to solve a problem. Google services consume an enormous amount of computing power, making resource usage a prime concern. The standard method at the time for isolating processes was full virtualization, where each virtual machine runs its own operating system. Menage and Seth, however, were investigating ways to limit and isolate resource usage so that a given set of processes could run more efficiently on a single machine.

Their work was the start of a Linux kernel feature called control groups, or cgroups, which would lay the groundwork for a key element in the cloud computing revolution: containerization.
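To make that concrete, here is a minimal sketch of how a set of processes can be placed under a resource limit using the cgroup filesystem. It assumes the cgroup v1 memory controller is mounted at /sys/fs/cgroup/memory; exact paths vary by distribution, and newer kernels use cgroup v2.

```sh
# Run as root. Create a new control group named "demo".
mkdir /sys/fs/cgroup/memory/demo

# Cap the group's total memory usage at 256 MB.
echo $((256 * 1024 * 1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes

# Move the current shell (and any processes it starts) into the group.
echo $$ > /sys/fs/cgroup/memory/demo/cgroup.procs
```

Every process in the group is accounted for and limited together, without needing a separate operating system per workload. That isolation property is what containers are built on.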

Why Are Containers Better?

As Menage and Seth knew, the standard method for running an instance of a virtualized application required not only the application itself, but also a separate operating system for every instance. The application may be 10MB, while the operating system could be 10GB (1,000x larger than the application itself). If you wanted to run 10 instances of this application, you would need to run 10 separate operating systems, one for each virtual machine. The innovation of the container model is that isolated sets of processes can run securely while sharing the same operating system kernel.

Virtual Machines Vs. Containerization

This operating system feature is the foundation for the open source project called Docker. What Docker provides is a standard way to package an application and set up one of these containers for it to run inside of. This radically simplifies the creation of highly distributed systems by allowing multiple applications (or other processes) to run autonomously on a single physical machine, or across a range of virtual machines. However, another problem emerged: when scaling up to hundreds or even thousands of containers, managing those containers quickly becomes a challenge in itself.
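To give a rough sense of what that packaging looks like, a container image for a small Node.js service might be described with a Dockerfile along these lines (the base image, port, and start command here are hypothetical, not anything specific to Meteor or Galaxy):

```dockerfile
# Start from a public base image that provides the Node.js runtime.
FROM node:0.10

# Copy the application source into the image and install its dependencies.
COPY . /app
WORKDIR /app
RUN npm install

# Document the port the service listens on and define the start command.
EXPOSE 3000
CMD ["node", "server.js"]
```

Building this file produces an image that `docker run` can start on any machine with a Docker daemon, which is exactly why containers multiply so quickly and why orchestrating them becomes the next challenge.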

Commanding the Cloud

Google had to tackle this problem internally when scaling out its highly distributed services, such as Gmail and Search. The teams at Google devised an internal cluster management system called Borg, and in June 2014 Google announced Kubernetes, an open source system built on the lessons learned from Borg.

Kubernetes captures the decade of experience Google has in running massive distributed systems. It takes a large group of containers and figures out how best to run those containers on a set of machines. Few teams have more experience than Google’s with the challenges of operating large distributed applications.
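In practice, you describe the state you want and Kubernetes works to keep the cluster in that state. A minimal sketch of a Deployment manifest might look like this (the names and container image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3                  # run three identical copies of the container
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: myregistry/my-service:1.0   # hypothetical container image
        ports:
        - containerPort: 3000
```

Applied with `kubectl apply -f`, this asks the cluster to keep three copies of the container running; Kubernetes decides which machines they land on and replaces any copy that dies.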

The Power of Kubernetes

The DevOps Disconnect

The rise of key technologies such as Node.js, in combination with the advent of mobile, has driven the ever-increasing adoption of JavaScript. Not only is JavaScript the most popular language on GitHub, it is arguably the most versatile. From server code to native mobile applications, it is possible to write an entire multi-platform application in JavaScript, and in recent years a whole host of JavaScript frameworks has emerged, from Angular to Ember to React. However, even though these frameworks have improved the development cycle for applications, none of them were created to address the increasing complexities of operating a successful software product. There is still a gaping hole in the broader DevOps equation.

Traditional DevOps Flow

Teams of people have been trying to solve this problem. Amazon Web Services (AWS) was initially created to support Amazon.com, the largest online retailer, and when Amazon productized its own internal system as AWS, the cloud computing craze was launched. The Docker project addresses how to set up containers, and Kubernetes addresses how to orchestrate groups of containers, but putting all of these open source technologies to work is still a massive challenge. Ideally, there would be a simple yet powerful system that packages up and delivers these technologies as one unified platform, bridging the gap in the DevOps flow.

Ideal DevOps Flow

The Future of the Galaxy

The $20M Series B round signals that MDG is working on something big. The Series A round was used to build out the open source Meteor API, which is now one of the most-starred repositories on GitHub. MDG is now focused on working closely with the teams at Google to port and maintain Kubernetes support for AWS. Harnessing the power of Docker containerization and the orchestration capabilities of Kubernetes on top of AWS, the largest and most advanced cloud infrastructure, would be game-changing. I believe MDG is going to succeed in doing just that for the JavaScript community. For the first time, a team of JavaScript developers will be able to easily develop and operate an application using the same powerful technologies used at Google and Amazon.

It’s an exciting time to be in the Meteor community.

Thanks to Josh Kissel for the visualization and Josh Gerding for assisting with research. Discuss on Crater.io

