Deployers are faced with a tremendous amount of flexibility in hardware selection, especially as storage systems become more defined by software. In this session SwiftStack presents results from testing OpenStack Swift against a wide variety of hardware and storage workloads. Various storage benchmarking tools available to the community will be examined. This session will cover tips for selecting and sourcing optimal hardware for cloud storage, along with reference architecture advice on deployment choices for OpenStack Swift.
Network Functions Virtualization (NFV), as described by the ETSI NFV Industry Specification Group (ISG), involves implementing network functions in software that can run on a range of industry-standard server hardware and that can be moved to, or instantiated in, various locations in the network as required, without the need to install new equipment. Network functions such as load balancing, firewalls, DPI, and WAN optimization can now be virtualized and deployed alongside the actual application workloads in private or public clouds. NFV is embraced by network operators who aim to benefit from reduced OPEX through lower equipment costs and power consumption, greater flexibility to scale up or down, and quick deployment of new network services, to name a few benefits. In this talk, we focus on deploying these NFV services in multiple NFV Data Centers (DCs) interconnected by MAN/WAN using a cluster of OpenStack instances. The problem of achieving network efficiency and energy efficiency simultaneously in NFV DC deployments comprises the following steps: 1) choosing the right set of energy-efficient physical servers in the NFV DC; 2) consolidating the Virtual Machines (VMs) used by an NFV network function, such as a virtual CDN, onto a minimal set of servers; 3) optimizing network distance while being aware of application characteristics. We have already proposed a constraint-based SolverScheduler in the OpenStack compute project Nova, in which varied constraints and cost metrics can be specified for optimization, enabling optimal compute placements. Using this SolverScheduler, we can model the NFV placement problem as a constraint optimization problem, achieving increased network and energy efficiency by optimally placing NFV VMs in OpenStack clouds.
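To make the placement objective concrete, here is a toy sketch of steps 1 and 2 in pure Python. This is not the SolverScheduler interface; the function name, host attributes, and numbers are hypothetical illustrations of packing VMs onto the most energy-efficient hosts first.

```python
# Toy model of energy-aware NFV VM placement: prefer low-power hosts
# and pack VMs onto as few of them as possible. This is an
# illustration of the abstract's objective, not the real Nova
# SolverScheduler (which solves a full constraint optimization).

def place_vms(vm_cpus, hosts):
    """vm_cpus: dict vm_name -> required vCPUs.
    hosts: dict host_name -> {'cpu': free vCPUs, 'watts': idle power}.
    Greedy first-fit onto hosts sorted by power draw."""
    order = sorted(hosts, key=lambda h: hosts[h]['watts'])
    free = {h: hosts[h]['cpu'] for h in hosts}
    placement = {}
    for vm, need in vm_cpus.items():
        for h in order:
            if free[h] >= need:
                free[h] -= need
                placement[vm] = h
                break
        else:
            raise RuntimeError('no capacity for %s' % vm)
    return placement
```

A real solver would additionally weigh network distance between the chosen hosts (step 3), trading it off against energy in a single cost function.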
There is no lack of deployment software in the OpenStack market, and each offering provides great value from a different perspective and scenario. These deployment solutions do a great job at the software deployment layer, building on OS provisioning tools like Cobbler and configuration management tools like Puppet/Chef, but they generally do not go down to the level of networking and server hardware configuration, which is a key step for large-scale OpenStack solutions. As part of a networking company and a server vendor, the Huawei Cloud team is developing Compass, a system not only for OpenStack software deployment but also for fully automated hardware-level server and networking gear configuration. Our system automates hardware resource discovery, hardware configuration (e.g., hardware RAID and switch configuration), topology-aware OpenStack service deployment, and more. As a result, end users get a streamlined OpenStack deployment experience with Compass. We are using Compass to deploy OpenStack clouds at our telco customer sites and have received very constructive feedback that is helping us build a more robust and open deployment solution. We plan to open source the project. One major design goal is to achieve true openness at the hardware level, so that other hardware vendors can write plug-ins for their particular server designs. Just as OpenStack's openness makes it a valuable, game-changing cloud solution, we hope to work with other hardware vendors to make OpenStack universally available on various hardware platforms in a streamlined fashion.
Typical cloud deployments – be they OpenStack, CloudStack, Eucalyptus, etc. – have a separate control layer installed and upgraded using separate tools (which might be hand-configured PXE + preseeding, Cobbler, or Orchestra/MAAS). As a result, you have two distinct provisioning systems in play, which allows for more user error and more special cases in automation. Add to this the increasing complexity of the Puppet or Chef descriptions of such setups, and you can soon go mad, stymied by fear of change.
Contrast that with the new generation of easy things we can do in the cloud. Upgrade? Why bother – just spin up the new version of your app in parallel, let your HA system fail over to it, then de-provision the old version.
So what if we could get all of the power that we have in the cloud when we’re running our datacenters, without the cost overhead of virtualization?
Starting with a bare-metal driver for OpenStack Compute (nova) and coupling that with OpenStack Orchestration (heat) and a pipeline for building cloud-style, non-installer-based disk images, we can do just that. Provision and manage your entire data center with the flexibility of a managed virtual cloud and all the power of running on actual metal.
The best part? It’s all open source and works!
With the release of GlusterFS 3.4 and OpenStack Grizzly, there is now integration across all major storage interfaces in OpenStack and GlusterFS. Specifically, the Glance, Cinder and Swift interfaces all now have direct access to GlusterFS volumes. Furthermore, with the QEMU/KVM integration released with 3.4, GlusterFS can now host and manage your KVM-based virtualization stack. With these new features, GlusterFS is now capable of acting as a general purpose distributed storage platform for OpenStack deployments.
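As a concrete illustration of the Cinder integration, enabling the GlusterFS backend amounts to a small configuration change. The option names below reflect the Grizzly-era driver as we understand it; treat them as a hedged example and check your release's documentation.

```ini
# cinder.conf – hedged example of enabling the GlusterFS volume driver
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
```

The shares file then lists one `host:/volume` entry per line (e.g. `gluster1.example.com:/cinder-volumes`, a hypothetical hostname), and Cinder carves volumes out of those GlusterFS volumes.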
This talk will provide an overview of all the storage interfaces, where they stand now, and what we're working on for the future. A short demonstration will show these pieces in action with the most recent releases of each.
An introduction to the OpenStack DNSaaS project Designate.
An overview of its architecture and how it integrates with other OpenStack components.
A live demo of the API and its integration with a Bind 9 nameserver.
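To give a flavor of the demo, the sketch below builds the JSON body for creating a zone through what we understand to be Designate's v1 REST API (`POST /v1/domains`). The endpoint path and field names are our reading of that API, not an authoritative reference, and the domain and email values are placeholders.

```python
import json

# Hedged sketch: the request body for creating a domain via
# Designate's v1 API (POST /v1/domains). Field names are illustrative.

def domain_payload(name, email, ttl=3600):
    """Build the JSON body for a create-domain request."""
    return json.dumps({'name': name, 'email': email, 'ttl': ttl})

body = domain_payload('example.com.', 'admin@example.com')
```

Once the zone exists, the records behind it are served by the Bind 9 backend shown in the demo.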
As the size and performance requirements of cloud deployments increase, storage architects are increasingly seeking new architectures that scale.
Ceph is a fully open source distributed object store, network block device, and file system designed for reliability, performance, and scalability from terabytes to exabytes. Ceph utilizes a novel placement algorithm (CRUSH), active storage nodes, and peer-to-peer gossip protocols to avoid the scalability and reliability problems associated with centralized controllers and lookup tables.
Ceph's architecture is based on RADOS, an object store with support for snapshots and distributed computation. Ceph offers a Swift and S3-compatible REST API for seamless data access to RADOS. Ceph's network block device can be used to store large images and volumes in RADOS, supporting thin provisioning and snapshots. Ceph has native support for Qemu/KVM, libvirt, CloudStack and OpenStack which makes it an attractive storage option for cloud deployments.
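The key idea behind avoiding lookup tables can be shown with a toy example. The code below is NOT the real CRUSH algorithm (which also models failure domains and placement rules); it is a simple rendezvous-hashing sketch showing how every client can compute an object's replica locations independently, with no central directory.

```python
import hashlib

# Toy illustration in the spirit of CRUSH: placement is a pure
# function of the object name and the set of storage daemons (OSDs),
# so any client can compute it locally. Not the actual algorithm.

def replica_osds(object_name, osds, replicas=3):
    """Rank OSDs by a hash of (object, osd) and take the top ones."""
    def score(osd):
        h = hashlib.sha256(('%s/%s' % (object_name, osd)).encode())
        return int.from_bytes(h.digest()[:8], 'big')
    return sorted(osds, key=score, reverse=True)[:replicas]
```

Because the mapping is deterministic, two clients always agree on where an object lives, and only objects whose top-ranked OSDs change need to move when the cluster grows or shrinks.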
As a result of participating in this session, attendees will gain a full understanding of the Ceph architecture, its current status, and plans for future development. Attendees will also learn how Ceph integrates with OpenStack to provide a unified object and block storage service.
In our company, we use OpenStack as a private cloud. We stand up and manage collections of OpenStack deployments and move both our company code and the "stack" it lives on through a workflow of staging servers, with each release a "dark launch" in miniature.
As if that complexity weren't enough, different company teams had requirements for different types of access and tailored data reports. We "wrapped" the current OpenStack APIs and user interfaces to provide more granular interfaces to these teams while still appeasing our security group.
This session shows how we accomplished our goals with OpenStack software, including demonstrating the software, tools and processes, as well as the lessons we learned from the experience.
OpenStack is not only the fastest-growing open-source cloud project but is also a large-scale, complex system with a rapidly expanding code base and more than 1,000 contributors to date. Handling the quantity and pace of contributions is a huge challenge on its own.
We've been able to handle the dramatic scale of development by having automation systems that allow us to treat all developers equally from a process perspective and keep our trunk always clean by performing testing pre-merge. The beautiful thing about this approach has been that it doesn't just keep up with demands, it facilitates and encourages more development.
This talk will cover the design and implementation of the current system, based around a combination of Gerrit and Jenkins, as well as the workflow that we support and require, how we implemented it and what the challenges were. At the end of this talk you should have a good understanding of how OpenStack handles up to 200 contribution activities an hour.
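The pre-merge principle the talk describes can be modeled in a few lines. This is only a conceptual sketch, not how Gerrit and Jenkins are wired together: each proposed change is tested against trunk with the change applied, and is merged only if the tests pass, so trunk itself is never broken.

```python
# Toy model of a pre-merge ("gate") pipeline: trunk only ever
# advances through states that passed the test suite. Gerrit holds
# the changes and Jenkins runs the tests in the real system.

def gate(trunk, changes, run_tests):
    """Apply each change to a copy of trunk; merge only on green."""
    for change in changes:
        candidate = trunk + [change]
        if run_tests(candidate):
            trunk = candidate   # merge: trunk stays always clean
        # else: the change goes back to its author; trunk is untouched
    return trunk

# Example: a test suite that fails whenever a "broken" change is in.
result = gate([], ['a', 'broken-b', 'c'],
              lambda state: all('broken' not in c for c in state))
```

Serializing changes through a queue like this is what lets the project absorb hundreds of contribution events an hour without ever shipping a red trunk.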