Typical cloud deployments – whether OpenStack, CloudStack, Eucalyptus, or others – have a separate control layer installed and upgraded using separate tools (which might be hand-configured PXE plus preseeding, Cobbler, or Orchestra/MAAS). As a result you have two distinct provisioning systems in play, which invites user error and multiplies the special cases in your automation. Add to this the increasing complexity of the Puppet or Chef descriptions of such setups, and you can soon go mad, stymied by fear of change.
Contrast that with the new generation of easy things we can do in the cloud. Upgrade? Why bother – just spin up the new version of your app in parallel, let your HA system fail over to it, then de-provision the old version.
So what if we could get all of the power that we have in the cloud when we’re running our datacenters, without the cost overhead of virtualization?
Starting with a bare-metal driver for OpenStack Compute (nova), coupling that with OpenStack Orchestration (heat), and adding a pipeline for building cloud-style, non-installer-based disk images, we can do just that: provision and manage your entire data center with the flexibility of a managed virtual cloud and all the power of running on actual metal.
The best part? It’s all open source, and it works!
With the release of GlusterFS 3.4 and OpenStack Grizzly, there is now integration across all major storage interfaces in OpenStack and GlusterFS. Specifically, the Glance, Cinder and Swift interfaces all now have direct access to GlusterFS volumes. Furthermore, with the QEMU/KVM integration released with 3.4, GlusterFS can now host and manage your KVM-based virtualization stack. With these new features, GlusterFS is now capable of acting as a general purpose distributed storage platform for OpenStack deployments.
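As a concrete sketch of the Cinder side of this integration, a Grizzly-era `cinder.conf` points the GlusterFS driver at a shares file. The hostnames, file paths, and volume names below are placeholders; check the driver documentation for your exact release.

```ini
# /etc/cinder/cinder.conf (excerpt) — GlusterFS backend for Cinder.
# Driver path as shipped around Grizzly; adjust for your release.
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_mount_point_base = /var/lib/cinder/volumes

# /etc/cinder/glusterfs_shares — one GlusterFS volume per line, e.g.:
#   gluster1.example.com:/cinder-volumes
```

With this in place, Cinder mounts the listed GlusterFS volumes and creates block volumes as files on them, which is what allows the QEMU/KVM integration to access the same storage directly.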
This talk will provide an overview of all the storage interfaces, where they stand now, and what we're working on for the future. A short demonstration will show these pieces in action with the most recent releases of each.
An introduction to the OpenStack DNS-as-a-Service (DNSaaS) project, Designate.
An overview of its architecture and how it integrates with other OpenStack components.
A live demo of the API and its integration with a BIND 9 nameserver.
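To give a flavor of what such a demo exercises: Designate's v1 API takes small JSON bodies for creating domains and records. The sketch below only builds those payloads; the endpoint URL is hypothetical, and the field names assume the v1 domains/records resources, so verify them against your deployment's API reference.

```python
import json

# Hypothetical endpoint for illustration; substitute your deployment's URL.
DESIGNATE_URL = "http://designate.example.com:9001/v1"

def make_domain_payload(name, email):
    """Body for POST /v1/domains. Zone names are absolute (trailing dot)."""
    if not name.endswith("."):
        raise ValueError("zone names must be fully qualified (trailing dot)")
    return json.dumps({"name": name, "email": email})

def make_record_payload(name, rtype, data):
    """Body for POST /v1/domains/<domain_id>/records."""
    return json.dumps({"name": name, "type": rtype, "data": data})

print(make_domain_payload("example.org.", "admin@example.org"))
print(make_record_payload("www.example.org.", "A", "192.0.2.10"))
```

Once Designate accepts these, it updates the backend nameserver (BIND 9 in the demo) so the new records resolve.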
As the size and performance requirements of cloud deployments increase, storage architects are increasingly seeking new architectures that scale.
Ceph is a fully open source distributed object store, network block device, and file system designed for reliability, performance, and scalability from terabytes to exabytes. Ceph utilizes a novel placement algorithm (CRUSH), active storage nodes, and peer-to-peer gossip protocols to avoid the scalability and reliability problems associated with centralized controllers and lookup tables.
Ceph's architecture is based on RADOS, an object store with support for snapshots and distributed computation. Ceph offers a Swift- and S3-compatible REST API for seamless data access to RADOS. Ceph's network block device can be used to store large images and volumes in RADOS, supporting thin provisioning and snapshots. Ceph has native support for QEMU/KVM, libvirt, CloudStack, and OpenStack, which makes it an attractive storage option for cloud deployments.
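The key idea behind CRUSH is that placement is computed rather than looked up. The sketch below is not CRUSH itself (it is plain rendezvous/HRW hashing, a simplified stand-in), but it illustrates the same property: any client can independently compute which storage daemons hold an object, with no central lookup table.

```python
import hashlib

def place(obj_name, osds, replicas=2):
    """Deterministically pick `replicas` storage daemons for an object.

    Rendezvous hashing: score every OSD against the object name and take
    the top scorers. Every client computes the same answer with no
    central table, the property CRUSH provides (with far more
    sophistication, e.g. failure-domain-aware placement).
    """
    def score(osd):
        digest = hashlib.sha1(f"{obj_name}:{osd}".encode()).hexdigest()
        return int(digest, 16)
    return sorted(osds, key=score, reverse=True)[:replicas]

osds = ["osd.0", "osd.1", "osd.2", "osd.3"]
print(place("rbd_image.chunk42", osds))
```

Because placement is a pure function of the object name and cluster membership, adding or removing nodes changes only the affected mappings, which is what lets such systems scale without a bottleneck controller.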
As a result of participating in this session, attendees will gain a full understanding of the Ceph architecture, its current status, and plans for future development. Attendees will also learn how Ceph integrates with OpenStack to provide a unified object and block storage service.
In our company, we use OpenStack as a private cloud. We stand up and manage collections of OpenStack deployments and move both our company code and the "stack" it lives on through a workflow of staging servers, with each release a "dark launch" in miniature.
As if that complexity weren't enough, different teams in the company had requirements for different types of access and tailored data reports. We "wrapped" the current OpenStack APIs and user interfaces to provide more granular interfaces for these teams while still appeasing our security group.
This session shows how we accomplished our goals with OpenStack software, including demonstrating the software, tools and processes, as well as the lessons we learned from the experience.
OpenStack is not only the fastest-growing open-source cloud project but is also a large-scale, complex system with a rapidly expanding code base and more than 1,000 contributors to date. Handling the quantity and pace of contributions is a huge challenge on its own.
We've been able to handle this dramatic scale of development with automation systems that allow us to treat all developers equally from a process perspective and keep trunk always clean by testing every change pre-merge. The beautiful thing about this approach is that it doesn't just keep up with demand; it facilitates and encourages more development.
This talk will cover the design and implementation of the current system, based around a combination of Gerrit and Jenkins, as well as the workflow that we support and require, how we implemented it and what the challenges were. At the end of this talk you should have a good understanding of how OpenStack handles up to 200 contribution activities an hour.
OpenStack networking supports several pluggable backends. Perhaps the most flexible and intriguing of these is the Open vSwitch plugin, which is built atop the OpenFlow protocol.
This talk describes the low-level details of the OpenFlow protocol, how it is implemented both in software by Open vSwitch and in off-the-shelf physical switches, and how OpenStack Neutron uses it to provide features such as tenant-isolated networks and overlapping subnets.
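To give a flavor of the mechanism, the toy flow table below mimics how per-tenant VLAN tagging isolates traffic: packets from each tenant's port are tagged with that tenant's VLAN id on the way out and untagged on the way back in. The port numbers and VLAN ids are made up, and a real deployment installs equivalent rules over OpenFlow (for example with `ovs-ofctl add-flow`) rather than in Python.

```python
# Toy OpenFlow-style flow table: (match, actions) pairs, evaluated in order.
# Ports 1 and 2 belong to VMs of two different tenants; "uplink" is the
# trunk port. All numbers are illustrative only.
flows = [
    ({"in_port": 1}, [("set_vlan", 100), ("output", "uplink")]),
    ({"in_port": 2}, [("set_vlan", 200), ("output", "uplink")]),
    ({"in_port": "uplink", "vlan": 100}, [("strip_vlan",), ("output", 1)]),
    ({"in_port": "uplink", "vlan": 200}, [("strip_vlan",), ("output", 2)]),
]

def lookup(packet):
    """Return the actions of the first flow whose match fields all equal
    the corresponding packet fields; drop if nothing matches."""
    for match, actions in flows:
        if all(packet.get(key) == value for key, value in match.items()):
            return actions
    return [("drop",)]

print(lookup({"in_port": 1}))
print(lookup({"in_port": "uplink", "vlan": 200}))
```

Because tenant A's traffic only ever carries VLAN 100 and tenant B's only VLAN 200, neither can see the other's packets even though they share the same physical wire, which is exactly the isolation property the Open vSwitch plugin provides.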
Additionally, an OpenFlow sandbox development environment will be demonstrated, providing a simple way to start getting one's hands dirty with this exciting technology.
OpenStack does not have a monitoring solution wired in. It is the responsibility of the implementer to architect a monitoring framework that verifies that the underpinnings of the cloud are functioning properly.
Sensu was built to work with configuration management and provides a fast path to monitoring your OpenStack deployment.
It is a lightweight but powerful alerting/metrics bus that is:
- Designed to work with elastic infrastructure
- Easily customized
- Built on top of modern services such as RabbitMQ and Redis
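To make that concrete, a Sensu check is just a small JSON definition distributed by your configuration management. The check name, command, and subscriber below are illustrative; `check-http.rb` is the kind of plugin found in the Sensu community plugins collection, and the Nova API port shown is the conventional default.

```json
{
  "checks": {
    "nova_api_alive": {
      "command": "check-http.rb -u http://localhost:8774/",
      "subscribers": ["openstack-control"],
      "interval": 60,
      "handlers": ["default"]
    }
  }
}
```

Sensu publishes check requests over RabbitMQ to every client subscribed to `openstack-control`, and results flow back over the same bus, which is what makes it a natural fit for elastic infrastructure where nodes come and go.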
OpenStack is an amazing technology. Or rather, an amazing set of technologies. And that set is vast. Architecting, deploying, configuring, and maintaining an OpenStack cloud is not as easy as, say, your traditional three-tiered web architecture. It is *hard*. You have to know the fiddly bits up and down the entire stack.
This talk will cover some of the pains of delivering a moderately large, production-scale cloud infrastructure using OpenStack. I'm not sure I'll provide anyone with solutions in this talk, but I may scare up some business for the crazies who install and run these things for a living!
Think of it less as a conference talk and more like a scary ghost story you tell your engineering friends around a campfire.