When OpenStack was launched five years ago, public and private clouds were equally important. The first two users were NASA and Rackspace, representing the private and public use cases, and AWS API compatibility was an important feature. People were starting to (mis)use the H word, referring to “hybrid clouds” when they really meant “hybrid applications”, and OpenStack held out the promise of hybrid solutions based on public and private OpenStack clouds. A year later, this was one of the reasons advanced by Rackspace and others for deprecating AWS compatibility in favor of a “more advanced” OpenStack API model.
Fast forward to last week, when the OpenStack Operators’ Midcycle meeting took place in Palo Alto. I borrowed 15 minutes from the session of the Large-scale Deployment Working Group to argue for the creation of a group — full-blown WG or SIG — focused on Public Clouds. I based this on the fact that there are a number of important use cases and functional requirements which are specific to public clouds, and which are not represented in any other working group. Here are four examples:
- Legal tenancy. In a public cloud, the tenants are generally independent legal entities from the cloud service provider (CSP). What is the contractual model for tenancy — who has what rights over what resources? What happens if a tenant is a “bad actor”, if their activities attract the attention of law enforcement or other agencies? What does the CSP have to do about lawful intercept, digital forensics, sequestration, or other actions? And even if OpenStack isn’t going to implement such things, do we need to (e.g.) extend the life cycle models for instances, volumes, and other resources to support them?
- Multitenancy. Most public cloud customers want to run multiple applications in the cloud, sharing resources between them, and to do so in a way that is completely hidden from other customers. There is also growing interest in cloud service resale and brokerage, and in the use of federation to support multiregion and hybrid deployments. The hierarchical multitenancy (HMT) work in Keystone looks on the surface to be ideal for this purpose, supporting multiple projects per domain. Unfortunately the work is incomplete — the resource namespaces don’t support arbitrary hierarchies, and administrative delegation is broken — and none of the other OpenStack projects have incorporated public cloud style HMT in their plans.
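To make the hierarchy requirement concrete, here is a minimal illustrative sketch — plain Python, not Keystone code — of what an arbitrary project hierarchy with scoped namespaces would look like. The names (`csp`, `reseller-a`, and so on) are hypothetical; the point is that two resellers can each have a project called `dev` without collision, and that delegating administration of a node implies its whole subtree.

```python
# Illustrative sketch (not Keystone code): hierarchically named projects
# whose fully qualified names form a namespace, so one reseller's "dev"
# project does not collide with another reseller's "dev" project.

class Project:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def path(self):
        """Fully qualified name, e.g. 'csp/reseller-a/customer-1/dev'."""
        if self.parent is None:
            return self.name
        return self.parent.path() + "/" + self.name

    def subtree(self):
        """All projects an admin delegated at this node would control."""
        yield self
        for child in self.children:
            yield from child.subtree()

# A CSP domain with two resellers, each with a same-named "dev" project.
root = Project("csp")
a = Project("reseller-a", root)
b = Project("reseller-b", root)
dev_a = Project("dev", Project("customer-1", a))
dev_b = Project("dev", Project("customer-9", b))

print(dev_a.path())  # csp/reseller-a/customer-1/dev
print(dev_b.path())  # csp/reseller-b/customer-9/dev
```

This is exactly the shape that resale and brokerage need: arbitrary depth, name scoping per branch, and delegation by subtree — the parts of HMT that are still incomplete.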
- Service assurance. OpenStack supports a variety of test and certification frameworks, from Rally and Tempest to RefStack. These are great for acceptance testing, but none of them is suitable for the kind of continuous service assurance needed for a public cloud. Rather than reinventing things from scratch, it would be very useful if existing tests could be integrated into a framework that could be run continuously, from a tenant’s perspective (i.e. outside the firewall), providing real-time information on service availability and latency for both CSPs and users.
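The core of such a framework is small: run a black-box check repeatedly, record latency and pass/fail for each run, and aggregate into availability. Here is a minimal sketch, assuming the real `check` callable would be an existing test (boot an instance, list volumes) executed from a tenant vantage point; the demonstration below uses a stub check instead of a live cloud.

```python
# Minimal sketch of a tenant-side assurance probe. The `check` callable
# is a stand-in for a real black-box test (e.g. boot an instance, list
# volumes) run from outside the CSP's firewall.
import time

def run_probe(check, runs, interval=0.0):
    """Execute `check` `runs` times; record per-run latency and status."""
    samples = []
    for _ in range(runs):
        start = time.monotonic()
        try:
            check()
            ok = True
        except Exception:
            ok = False
        samples.append({"ok": ok, "latency": time.monotonic() - start})
        time.sleep(interval)
    return samples

def availability(samples):
    """Fraction of runs that succeeded."""
    return sum(1 for s in samples if s["ok"]) / len(samples)

# Demonstration with a stub check that fails on the third of five runs.
calls = iter([True, True, False, True, True])
samples = run_probe(lambda: None if next(calls) else 1 / 0, runs=5)
print("availability:", availability(samples))  # availability: 0.8
```

Wrapping existing Tempest or Rally scenarios as the `check` step would reuse the community's test investment while adding the continuous, outside-in measurement that acceptance frameworks don't provide.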
- Billing. When Ceilometer was introduced, it promised the ability to capture both billing data (resource consumption metrics) and near-real-time behavioral data (for use in elastic provisioning, load balancing, and application monitoring). Unfortunately, this “converged” approach overlooked the significantly different requirements for each. For billing data, we need to emphasize completeness and accuracy, together with long term storage supporting audits with non-repudiation. Behavioral data is latency sensitive, bursty, and ephemeral. The highest priority is to route the data to the control system which consumes it, so that the system can respond quickly. Late data is useless. These requirements are sufficiently different that no single system can adequately support them both, particularly at scale. Ceilometer might be sufficient in a private cloud, where “billing” is based on best-effort chargebacks using virtual money, but in a public cloud we’re dealing with real money and legal contracts.
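The "two pipelines" argument can be stated in a few lines of code. This is an illustrative sketch, not Ceilometer code, and the names are hypothetical: the billing path is append-only and never drops an event, while the behavioral path is bounded and discards data that arrives too late to act on.

```python
# Sketch of routing one stream of metering events to two pipelines with
# different guarantees. Illustrative only; names are hypothetical.
import collections
import time

BILLING_LEDGER = []                              # append-only, complete
BEHAVIOR_QUEUE = collections.deque(maxlen=100)   # bounded, recent-only
MAX_AGE = 1.0                                    # seconds; late data is useless

def route(event, now=None):
    """Send every event to billing; send only fresh events to behavior."""
    now = time.monotonic() if now is None else now
    BILLING_LEDGER.append(event)                 # completeness first
    if now - event["ts"] <= MAX_AGE:             # behavioral path drops late data
        BEHAVIOR_QUEUE.append(event)

t0 = time.monotonic()
route({"ts": t0, "meter": "cpu", "value": 3}, now=t0)        # fresh: both paths
route({"ts": t0 - 5.0, "meter": "cpu", "value": 7}, now=t0)  # late: billing only

print(len(BILLING_LEDGER), len(BEHAVIOR_QUEUE))  # 2 1
```

A real billing ledger would of course add durable storage and non-repudiation (signing, audit trails), and a real behavioral path would push to the consuming control system rather than a queue — but no single retention and delivery policy satisfies both sides, which is the point.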
It’s important to remember that all of the CSPs based on OpenStack have already done a lot of work to address these issues, as well as many others. Unfortunately they’ve all had to solve them in isolation, often by wrapping OpenStack mechanisms in proprietary software systems or by forking OpenStack. There has been little attention paid to these issues in the developer community. This is not a unique situation; it has historically been difficult for the “voice of the customer” to get a hearing in the OpenStack community. But things are changing. The User Committee and its associated working groups are becoming more active, and the Product Working Group (of which I’m a member) is making progress in building a roadmap by capturing requirements and transforming them into blueprints and resource commitments. It’s tempting to see this as a “pivot” from a developer-centric community to one that includes all of the stakeholders; we’ll see how it goes.
In any case, the reaction to my proposal for the creation of a Public Cloud group was uniformly positive, and most people recommended that we structure it as a full-blown “WG” under the User Committee. So I’m inviting everyone to discuss this over the next two months, so that we can submit a proposal to the User Committee at the Tokyo Summit.
Today, there is a widespread view that the future of cloud computing is hybrid: distributed applications, most of them developed using PaaS frameworks, incorporating various SaaS services, and continuously deployed using container technology into public and private IaaS infrastructure. Many of these deployments will be heterogeneous, using different technologies. However there is significant opportunity for innovation and advantage — especially in connectivity, agility, and security — when the same stack is used by different participants. Cisco’s Intercloud architecture provides a compelling vision using public clouds, federated partner clouds, and managed private clouds. All of this is a great reason for making sure that OpenStack can support state-of-the-art public clouds.