More on interoperability and OpenStack

There have been a number of useful discussions on interoperability issues here at the OpenStack Summit, including a panel discussion on Tuesday afternoon. It is, of course, a complicated question, with various dimensions: what do we mean by interoperability; how do we assess (or even quantify) it; who does it apply to; what are reasonable expectations; what should we do about it…. This is worthy of a really lengthy essay, but for now I’m just going to jot down a few ideas that seem important.

  • Interoperability is fundamentally about switching or adoption costs: the greater the degree of interoperability between two systems, the lower the cost of switching from one to the other, or of making them work together in some way.
  • Costs are measured relative to expectations, not absolutes. If I switch from an x86-based OpenStack cloud to one built on ARM servers, I’m going to expect some substantial costs, but I can hope that the user experience will be largely the same.
  • Interoperability applies to both service consumers and service providers. Users may be interested in moving a workload from the Rackspace Public Cloud to a Piston private cloud; XYZ Telco may want to switch their code base from Nebula to Cloudscaling.
  • Issues of interoperability are tied to ideas about branding. What is expected (or required) of an IaaS service or distribution that claims to be “OpenStack”? The community has gone back and forth over the years about whether conformance should be based on code or APIs. Today, the requirement is still that you are “running Nova and Swift” (the code), which is naturally unacceptable to the growing number of users who have deployed API-compatible alternatives to Swift.
  • There is a plan to create an OpenStack conformance testing system called RefStack. The (simplified) idea is that a service provider could submit the publicly accessible endpoint of an OpenStack deployment, and the RefStack system would run a series of tests and produce a compliance scorecard (not a pass/fail verdict); a minimal sketch of such a checker appears after this list. Participation would be purely voluntary, but as several people on the panel pointed out, consumer pressure would probably lead to general adoption.
  • For RefStack to provide meaningful results, it seems to me that there needs to be an actual reference: a concrete OpenStack deployment that scores 100% on RefStack. Specifications are always somewhat ambiguous, and we need to be able to say, “If the spec is unclear, or we disagree about what it means, the correct behavior is whatever THIS system actually does.” (This is straight out of the JCP, the Java Community Process.) Those who believe that everyone should run exactly the same code will argue that this is unnecessary, but they’re wrong: the operational semantics of an OpenStack system will always depend on the behavior of elements – hardware, code, and configuration – which are beyond the scope of the OpenStack community.
  • The RefStack approach clearly shifts the focus from conformance based on shared code to conformance based on correct API semantics. This is as it should be. OpenStack is moving from being a collective experiment in developing a complex, open-source distributed system to becoming a mainstream component of the IT world. This can only happen if we recognize that most of the stakeholders are going to be outside the OpenStack developer community. The governance required for the code, created by a few hundred individuals, is going to be different from that needed for APIs consumed by tens of thousands of users.
  • This shift in governance is tied up with the issue that I raised in my blog piece earlier this week. Today, it is too easy for a low-level implementation choice to introduce an incompatible change to a user-facing API; the second sketch below illustrates one way to guard against that. The challenge for the OpenStack leadership is to figure out how to provide stability and predictability for the users of OpenStack without stifling the work of the implementors. This is a good problem to have.
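
To make the scorecard idea concrete, here is a minimal sketch of what a RefStack-style checker might look like. Everything in it is a hypothetical placeholder – the check list, the paths, the base URL – and a real suite would exercise full API semantics, not just HTTP status codes.

    # A minimal sketch of a scorecard-style conformance checker.
    # The checks, paths, and base URL below are hypothetical placeholders,
    # not the actual RefStack test suite.
    import requests

    # Hypothetical checks: (description, path, expected HTTP status).
    CHECKS = [
        ("identity API version discovery", "/v2.0/", 200),
        ("compute API version discovery", "/v2/", 200),
        ("unknown resource returns 404", "/no-such-resource", 404),
    ]

    def run_scorecard(base_url):
        """Run each check against the deployment; return a percentage score."""
        passed = 0
        for description, path, expected_status in CHECKS:
            try:
                response = requests.get(base_url + path, timeout=10)
                ok = response.status_code == expected_status
            except requests.RequestException:
                ok = False
            print(f"{'PASS' if ok else 'FAIL'}: {description}")
            passed += ok
        return 100.0 * passed / len(CHECKS)

    if __name__ == "__main__":
        score = run_scorecard("https://cloud.example.com:5000")
        print(f"Compliance score: {score:.0f}%")

The point of the scorecard (rather than a binary verdict) is that a deployment can be, say, 90% compliant and still advertise exactly where it diverges.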
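And here is an illustrative sketch of the stability point in the last item: pinning the user-facing contract in a test, so that a low-level implementation change which silently alters a response shape fails loudly. The field set is my own illustration, not the actual Nova schema or how OpenStack’s test suites are organized.

    # An illustrative contract-pinning check: fail if a response payload
    # drops a field that users depend on. The required fields here are
    # a hypothetical example, not the actual Nova server schema.
    REQUIRED_SERVER_FIELDS = {"id", "name", "status", "addresses"}

    def check_server_contract(server: dict) -> None:
        """Raise if a user-facing field has gone missing from the payload."""
        missing = REQUIRED_SERVER_FIELDS - server.keys()
        if missing:
            raise AssertionError(f"user-facing contract broken; missing: {missing}")

    # Example: a payload produced by some new implementation under test.
    check_server_contract({"id": "42", "name": "web-1", "status": "ACTIVE",
                           "addresses": {"private": []}})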

That’s enough for now – I have to get over to the last day of the Summit for the session on comparing OpenStack and EC2 network architectures.
