In several recent blog pieces, here and here, I’ve noted that the central use case for OpenStack – implementing an AWS-like, NIST-compliant infrastructure-as-a-service – has broadened over the last three years. Today, OpenStack is being used (or at least considered) for automating the management of traditional enterprise data centers, including infrastructure and applications that don’t fit the original model very well. We can see this in developments such as enabling multiple hypervisors in a single OpenStack cloud, and adding support for Fibre Channel SANs to the Cinder storage service. We’re also seeing interest in the use of specialized resources to allow performance-sensitive scale-up applications to run under OpenStack.
All of this means that the original vision of cloud infrastructure – homogeneous, pooled, highly abstracted – is giving way to a more complex environment. We have specialized resources from different vendors, including physical devices and virtual appliances. And when you have a heterogeneous environment, you need some kind of policy-based automation to allocate the right resource to the right task. Unfortunately, the OpenStack networking architecture, Neutron, does not accommodate heterogeneity very well, and there is no standardized framework for managing virtual appliances.
I work at Brocade, which has a particular interest in this problem. We sell the most popular virtual network appliance, the Vyatta vRouter. And while we have a broad range of IP and SAN products, most of which are supported in OpenStack, almost all of our customers are running multivendor networks.
This blueprint proposes adding a framework for dynamic network resource management (DNRM) to OpenStack. The framework includes a new OpenStack resource management and provisioning service, a refactored scheme for Neutron API extensions, a policy-based resource allocation system, and dynamic mapping of resources to plugins. It is intended to address a number of use cases, including multivendor environments, policy-based resource scheduling, and virtual appliance provisioning. We are proposing this as a single blueprint in order to create an efficiently integrated implementation.
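To make the policy-based allocation idea concrete, here is a minimal sketch of how a DNRM-style allocator might match a request against a heterogeneous resource pool. All names here (the `Resource` fields, the `prefer_vendor` rule) are illustrative assumptions, not APIs from the blueprint:

```python
# Hypothetical sketch of policy-based resource allocation in the spirit of DNRM.
# Class and policy names are illustrative, not taken from the blueprint.
from dataclasses import dataclass


@dataclass
class Resource:
    name: str          # e.g. "vrouter-1"
    kind: str          # e.g. "virtual_router", "fc_san"
    vendor: str
    capacity: int      # remaining capacity units
    allocated: bool = False


def allocate(pool, request, policies):
    """Return a free resource that satisfies the request and every policy rule."""
    candidates = [
        r for r in pool
        if not r.allocated
        and r.kind == request["kind"]
        and r.capacity >= request["capacity"]
    ]
    # Each policy is a predicate that narrows the candidate set.
    for rule in policies:
        candidates = [r for r in candidates if rule(r, request)]
    if not candidates:
        return None
    chosen = max(candidates, key=lambda r: r.capacity)  # pick the most headroom
    chosen.allocated = True
    return chosen


# Example policy: honor a tenant's preferred vendor when one is requested.
def prefer_vendor(resource, request):
    return request.get("vendor") in (None, resource.vendor)


pool = [
    Resource("vrouter-1", "virtual_router", "Brocade", 10),
    Resource("vrouter-2", "virtual_router", "OtherCo", 20),
]
request = {"kind": "virtual_router", "capacity": 5, "vendor": "Brocade"}
chosen = allocate(pool, request, [prefer_vendor])
```

In this toy version the policy engine is just a chain of predicates; a real implementation would draw rules from an operator-defined policy store and track allocations in a shared database, but the shape of the problem – filter a heterogeneous pool by request attributes and policy, then commit the match – is the same.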
This is being submitted now for discussion in Hong Kong. We also plan to demonstrate a proof of concept at the summit. The target release for this work is Icehouse.
I’ve submitted a proposal for a DNRM session at the OpenStack Summit in Hong Kong in November at which I’ll present the architectural features and customer benefits. If you’re involved in OpenStack (and even if you’re not!), I hope that you’ll vote for it.