OpenStack - Overview, Getting Started, and Avoiding Pitfalls

Anand Krishnan, EVP Cloud, Canonical

For private organizations and public service providers alike, OpenStack is one of the hottest technology topics of 2015, and for good reason. From a strategic perspective, though, determining the potential value of OpenStack for your organization requires a closer look at what it is, how it is designed, and the problems it is intended to solve.

The most important thing to understand about OpenStack is that it is not a single product or piece of software. OpenStack is a collection of open source software projects that are designed to interoperate to provide core cloud services.

In general, each of the core OpenStack projects provides a piece of base functionality. That lets you put them together in ways that build the cloud you need, and then amend and change that cloud as requirements grow and evolve. This is done through additional projects, project updates, and a plugin mechanism.

There are tremendous advantages to this approach: you install only the components you need, and you can extend and tailor functionality through a wide range of plugins.

Plugins can come from the open source community or from enterprise hardware and software vendors. For example, when OpenStack launches a new workload, by default it creates a virtual machine for that workload. You could choose for that virtual machine to be based on KVM, or you could add an alternative. Canonical, for instance, provides a plugin that lets workloads deploy as LXD machine containers, a newer technology that offers the manageability of a virtual machine with the efficiency of a container. Both then become options for your users. That is only one example; there are many others.
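To make the plugin idea concrete, here is a minimal conceptual sketch, not actual OpenStack code, of how a pluggable compute-driver model lets an operator offer KVM and LXD side by side. All names here (`DRIVERS`, `launch_workload`) are hypothetical illustrations:

```python
# Conceptual sketch (hypothetical names, not OpenStack's real API):
# a registry of compute drivers that plugins can extend.

class KVMDriver:
    def spawn(self, name):
        return f"{name}: KVM virtual machine"

class LXDDriver:
    def spawn(self, name):
        return f"{name}: LXD machine container"

# Installing a plugin amounts to registering a new driver here.
DRIVERS = {"kvm": KVMDriver(), "lxd": LXDDriver()}

def launch_workload(name, driver="kvm"):
    """Launch a workload with whichever driver the cloud is configured for."""
    return DRIVERS[driver].spawn(name)

print(launch_workload("web-1"))                 # default: KVM
print(launch_workload("web-2", driver="lxd"))   # plugin-provided alternative
```

The point of the pattern is that the rest of the cloud never changes; only the registry grows as plugins are added.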

Enterprises are deploying OpenStack in growing numbers because it is open, scalable, and solves problems that incumbent solutions cannot address. As an open platform, there is no lock-in to expensive or proprietary technology, and the solutions you develop on OpenStack are future-proofed and protected against costly reengineering.

The scale-out design offers both technological and financial planning benefits when compared to traditional scale-up designs. Scale-up solutions often require downtime for system upgrades, or total replacement of large systems to increase capacity, and the costs can be unpredictable. In a scale-out scenario, additional capacity is achieved by adding more nodes (servers) with a known configuration and predictable costs.

Existing systems have served the enterprise well. But new demands on IT, such as complex business analytics for finance and marketing, company-wide self-service systems with SaaS-style portal functionality, and new development models for developers, are all examples of workloads ideally suited to OpenStack.

High level considerations

In general, here are some things to keep in mind as you decide to build and run a cloud.

For starters, the solution you are building needs to be economically competitive with public cloud alternatives. One of the core tenets here is usage-based pricing to your customers, so ideally your infrastructure cost needs to align with that. Everyone will offer you traditional pricing models like per-node. In a cloud world, look for alternatives for your infrastructure stack, such as zone-based and usage-based pricing, that give you more flexibility.
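A quick illustrative sketch, with entirely made-up numbers, shows why aligning infrastructure pricing with usage matters. If your customers pay per use but you pay per node, idle capacity comes straight out of your margin:

```python
# Illustrative arithmetic only; prices and utilisation are invented.

def per_node_cost(nodes, price_per_node=1000):
    """Traditional model: you pay for every node, busy or idle."""
    return nodes * price_per_node

def usage_based_cost(node_hours_used, price_per_node_hour=1000 / 730):
    """Usage-based model: cost tracks actual consumption."""
    return node_hours_used * price_per_node_hour

# A 100-node cloud at 40% average utilisation over a ~730-hour month:
flat = per_node_cost(100)                       # 100,000
metered = usage_based_cost(100 * 730 * 0.40)    # 40,000
assert metered < flat  # infrastructure cost stays aligned with revenue
```

The gap between the two figures is exactly the idle capacity you would otherwise be paying for under a per-node model.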

You should expect your cloud to scale and grow in complexity, but you cannot expect to scale your operational headcount and cost in step with it. Automation is a necessity—the tooling you use to stand up and operate your cloud needs to be designed for scale and to minimize human intervention. When choosing your flavor of OpenStack, look for a tooling suite that offers that step-function improvement in automation.

What does that look like? Here’s an example: need to set up, configure, and add thousands of bare-metal servers of varying types to your cloud? Canonical makes MAAS (Metal as a Service), which does exactly that, at cloud scale. Similarly, Canonical’s OpenStack Autopilot is a graphical installation tool that lets you select the subset of OpenStack components you need and then deploys your cloud in one click against whatever hardware you point it at.

We also see a lot of customers that want to get started on OpenStack but need a little assistance, which is why we provide BootStack (Build, Operate and Optionally Transfer). BootStack is a fully managed OpenStack, on-site or hosted, at the size you need, and it can be up and running for test or production in a matter of days.

Working directly with the OpenStack development community is just as important. We participate in project steering committees, write and contribute open source code to many of the projects, and even fund work directly in the projects our customers rely on in production. These sorts of activities are a critical bridge between the OpenStack community and organizations that want to benefit from OpenStack’s capabilities.

Increase success with service delivery

But all of the above simply stands up cloud infrastructure. What your users want is their apps and services up and running fast. For example, are your most commonly-used services and solutions available as a one-click deployment experience? Will running solutions scale seamlessly? How would you modify deployed services?

To offer real differentiation from business-as-usual, traditional application delivery, you need to leverage OpenStack’s application and service delivery model.

Application Service Models

Modern software is moving away from large, monolithic solutions. The current trend is toward microservices-based architectures. OpenStack itself is designed around this premise, and the way it handles workloads is best suited for modern, microservices-based solutions.

Traditional, monolithic applications generally had rigid, static configurations. For example, your team would design a new solution, install an app server, database, and web servers, and manually configure the relationships between all the software and infrastructure; the solution would then largely remain static. The design works, but it potentially limits development options and scalability.

A microservices architecture means running dozens or hundreds of processes that function together as a complete solution. For example, a traditional application might have a catalog, accounts management, analytics, ordering, checkout, and so on, all built into it. As microservices, those components are decoupled. Each microservice becomes independently scalable and highly available, development can be done in different languages, and developers can align with the business rather than the technology.
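The decoupling described above can be sketched in a few lines. This is a deliberately minimal, hypothetical model (the service names and `scale` helper are invented for illustration) showing why independent services can scale and evolve separately:

```python
# Hypothetical sketch: the same storefront modelled as decoupled
# microservices, each with its own implementation language and unit count.

services = {
    "catalog":  {"language": "python", "units": 2},
    "accounts": {"language": "java",   "units": 1},
    "ordering": {"language": "go",     "units": 3},
}

def scale(name, units):
    """Scale one service without touching any of the others."""
    services[name]["units"] = units

# A traffic spike hits product browsing: only the catalog tier grows.
scale("catalog", 6)
assert services["catalog"]["units"] == 6
assert services["ordering"]["units"] == 3  # unaffected
```

In the monolithic equivalent, the only option would be to scale the entire application, ordering, accounts, and all, to handle load on one component.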

Manually configuring and defining the relationships and communication paths between microservices doesn’t scale. Configuration management tools help alleviate the burden, but they don’t address the dynamic aspect: changing what the solution does, or how the solution works, without script editing or additional manual input.

Automation and scalability make it vital to have an application abstraction layer between the applications themselves and the definition of their relationships and data paths.

Service modeling tools (tooling again!) like Juju coordinate the deployment of multiple services. They automate the definition and creation of application relationships, so that as a service changes or scales, there is no need for an administrator to manually edit configuration files or complex deployment scripts. IT can stay focused on providing the workloads users need, and on being agile about it.
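The idea behind service modeling can be illustrated with a small sketch. This is inspired by tools like Juju but is not Juju’s actual API; the model structure, `add_unit`, and `peers_of` are all hypothetical. Relations are declared once in the model, so scaling a service requires no manual edits anywhere else:

```python
# Hypothetical service model: services, their units, and declared relations.
model = {
    "services":  {"webapp": ["webapp-0"], "database": ["database-0"]},
    "relations": [("webapp", "database")],
}

def add_unit(service):
    """Add capacity; related services discover the new unit via the model."""
    units = model["services"][service]
    units.append(f"{service}-{len(units)}")

def peers_of(service):
    """Units a service should connect to, derived from declared relations."""
    related = {b for a, b in model["relations"] if a == service}
    related |= {a for a, b in model["relations"] if b == service}
    return sorted(u for s in related for u in model["services"][s])

add_unit("database")       # scale out: no config files or scripts edited
print(peers_of("webapp"))  # the webapp's view updates automatically
```

Because connection information is derived from the model rather than hard-coded, growing or shrinking a service is a single operation instead of a round of configuration edits.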

As you grow

As your OpenStack cloud grows, it is important that the base principles behind the initial design are not compromised. The economics need to work, and the model should be sustainable. The focus should be on the services provided, not the infrastructure, and everything you do should be a repeatable, automated process.

A successful conclusion

OpenStack enables the cloud you need. When evaluating OpenStack options, look for a partner that gives you options: a rich partner ecosystem and interoperability labs for testing at scale. Their offering should provide value-added tools that can manage microservices; bundle, coordinate, and relate applications; and provide a rich set of command-line, graphical, and programmatic interfaces. The tools and resources included will keep your teams focused on your business objectives, not the infrastructure details.

If your organization doesn’t have the in-house expertise but could benefit from an OpenStack cloud, a managed solution can reduce the complexity and ease the transition to microservices-based solutions. To help with cost analysis, check out the TCO calculator.

The tangible benefits are there: cost savings, ease and speed of development, and accelerated time to market, to name a few. But they do not come without focus and planning. Organizations that remain focused on their business objectives, and on the role an OpenStack cloud plays in meeting them, are poised for the greatest success.
