The Container Revolution: Is OpenStack Now Obsolete?

Robert Starmer, CTO, Kumulus Technologies

Containers are all anyone wants to talk about these days. They represent a paradigm shift in how applications will be written from this point forward, but they are not the only piece of the puzzle, and understanding the shift in the landscape will be critical to understanding how Containers, Container-supporting services, and the rest of the cloud systems infrastructure will come together to enable the IT services department of the future. One key piece of that integration is the interaction between Containers and OpenStack services. OpenStack was the previous answer to everything in the IT department and, in my view, is still a critical component of any ongoing IT strategy. Yet even that toolset has not been left unscathed by the shift of everyone's focus towards Containers.

At the Newton OpenStack summit in Austin, Texas in April of 2016, nearly half of the general conference sessions mentioned Containers or Container technologies in some form. In fact, there was an entire section of the conference dedicated to Container-focused companies and technologies. This groundswell of interest has changed the face of OpenStack for the near future, and has likely changed it for the better. One key case in point is the potential simplification of upgrading OpenStack service components by containerizing the application services themselves. Historically, this process was fraught with dependency interactions and a nearly impossible-to-manage rollback process (or rather, a rebuild-the-previous-version process).
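To make the idea concrete: once a service's dependencies are baked into an image, an upgrade or rollback is just a matter of swapping the image a container runs from. The following is a minimal sketch using the Docker SDK for Python; the registry, image names, and tags are hypothetical stand-ins for whatever images a distribution such as Kolla produces.

```python
# A sketch of upgrade/rollback for one containerized OpenStack service,
# using the Docker SDK for Python. The registry and image tags below are
# hypothetical stand-ins for whatever images your distribution produces.
import docker

client = docker.from_env()

SERVICE = "glance-api"
OLD_IMAGE = "registry.example.com/openstack/glance-api:newton"  # assumed tag
NEW_IMAGE = "registry.example.com/openstack/glance-api:ocata"   # assumed tag


def run_service(image):
    """Replace any running copy of the service with one built from `image`."""
    try:
        current = client.containers.get(SERVICE)
        current.stop()
        current.remove()
    except docker.errors.NotFound:
        pass  # first deployment: nothing to replace
    return client.containers.run(
        image,
        name=SERVICE,
        detach=True,
        network_mode="host",
        restart_policy={"Name": "always"},
    )


# Upgrading means starting the service from the new image...
run_service(NEW_IMAGE)
# ...and rollback is the same operation with the previous tag:
# run_service(OLD_IMAGE)
```

A real upgrade still involves steps like database schema migrations, but isolating each service's dependencies inside its own image is what removes the dependency-interaction problem described above.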

Containers can both simplify the operations of OpenStack environments, and OpenStack can simplify the creation of per-project Container management environments

While this one particular aspect of Containers points the way to how distributed applications will be deployed in general, it also highlights a new chicken-and-egg problem, and the other side of the Container interaction with OpenStack: in order to deploy OpenStack as a Container application, a Container infrastructure is needed. This is really just a twist on a previous issue, which led to the OpenStack-on-OpenStack, or TripleO, project, in which a small (often virtual) OpenStack system manages the tooling that deploys a bare metal OpenStack environment (using the OpenStack Ironic bare metal management project). In much the same way, there is a project called Magnum that, at its core, intends to support the deployment of Container management platforms like Kubernetes, Docker Swarm, or even the DC/OS project, all of which can then be used to support the deployment of a Container-based OpenStack service.

One of the biggest advantages of putting a Container infrastructure in place for a management application as distributed and broad as OpenStack is that the tools for enabling resiliency (server load balancing) and rolling upgrades are effectively built into these toolsets. As long as either a new configuration or an updated container with embedded configuration is available, it is possible in many cases to roll out a new version of an OpenStack service component without impacting the rest of the running system. This makes IT staff far more effective: less time is spent manually orchestrating these processes, and more time can be spent integrating new features and capabilities, something most IT staff do not have the bandwidth to even contemplate given the overhead of just keeping an OpenStack system running and stable.
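For example, if the OpenStack services were deployed as Kubernetes Deployments, a rolling upgrade reduces to patching the image reference and letting Kubernetes replace pods one at a time behind the service's load balancer. A minimal sketch with the official Kubernetes Python client follows; the deployment, namespace, container, and image names are hypothetical.

```python
# A sketch of a rolling upgrade, assuming the OpenStack services run as
# Kubernetes Deployments. Deployment, container, and image names are
# hypothetical; only the image reference is patched, and Kubernetes then
# replaces pods one at a time so the API service stays available.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {
                        "name": "keystone-api",  # assumed container name
                        "image": "registry.example.com/openstack/keystone:ocata",
                    }
                ]
            }
        }
    }
}

apps.patch_namespaced_deployment(name="keystone", namespace="openstack", body=patch)
# Rolling back is the same patch with the previous image tag.
```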

Therefore, a project stack that still looks to OpenStack to get the system running, and then layers Containers beneath the OpenStack control plane, will eventually produce a running OpenStack management environment that can talk to and manage the scale-out hardware of a modern datacenter. Yet this does not address the pressure most IT departments face to provide end-user Container deployment solutions. Part of the problem is deciding where to draw the management and control boundary of the Container control plane. If we take as a starting point that the container manager currently running under an OpenStack environment will not be used for anything but that OpenStack service, then the question becomes: does one deploy a single additional container manager and allow end users to access it directly, or does one deploy a container manager environment per tenant, giving each tenant what appears to be a dedicated container environment just for their application(s)?

Both approaches have merit, and with a tool like the OpenStack Magnum project it might be easy enough to deploy a per-tenant Container manager, and even have that manager differ on a tenant-by-tenant basis. Magnum is functional in its current state, but it does not have quite the same level of backing as some of the other OpenStack projects, and so is a bit more difficult to get up and running in a manageable, production sense; still, it provides a management model that appears to map well to end-user requirements for enabling container services. There are other similar approaches, such as using OpenStack Heat templates directly, or using the Murano application catalog to implement the same sort of per-project Container manager deployments. In the end, though, it is still useful to have a tool like OpenStack under the end-user Container manager, even if it is only there to deploy the system once and is then effectively not involved in managing the rest of the application stack.
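As a sketch of the per-tenant model, the openstacksdk library can drive Magnum to build a dedicated Kubernetes cluster for a single tenant. The clouds.yaml entry, template name, and node counts below are hypothetical, and it assumes an operator has already registered a suitable cluster template.

```python
# A sketch of per-tenant Container manager creation through Magnum, using
# the openstacksdk library. Cloud entry, template name, and sizes are
# hypothetical; an operator is assumed to have registered a Kubernetes
# cluster template already.
import openstack

# Authenticate with the tenant's credentials (clouds.yaml entry "tenant-a").
conn = openstack.connect(cloud="tenant-a")

# Look up the pre-registered Kubernetes cluster template.
template = conn.container_infrastructure_management.find_cluster_template(
    "k8s-template"  # assumed template name
)

# Ask Magnum to build a Container manager dedicated to this tenant.
cluster = conn.container_infrastructure_management.create_cluster(
    name="tenant-a-k8s",
    cluster_template_id=template.id,
    master_count=1,
    node_count=3,
)
print("cluster {} is building".format(cluster.id))
```

The same per-project outcome could be reached with a Heat template or a Murano package; what matters is that OpenStack hands each tenant its own cluster and then steps out of the way.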

Containers can both simplify the operations of OpenStack environments, and OpenStack can simplify the creation of per-project Container management environments. The Container revolution may have eclipsed the perceived value of OpenStack in the eyes of the greater IT community, but it appears clear that OpenStack as a core management application will be here for the long term, and that Containers will become another part of the compute management services, working alongside the bare metal and virtual models that are well supported today. Get ready for Containers to become one of the many services supported in the current Cloud datacenter, both as a tool and as a service.
