Videos provided by OpenStack Summit via the OpenStack Foundation's YouTube channel
The Hong Kong Tourism Board will host a short lion dance to kick off the OpenStack Summit. Arrive early so you don't miss out!
OpenStack Foundation executive director Jonathan Bryce will provide a community update, focusing on why OpenStack is quickly gaining adoption from a user perspective. You will also hear directly from OpenStack users at Concur, DigitalFilm Tree and Shutterstock.
OpenStack continues to improve with every release, and many organisations now run Ubuntu OpenStack in production. But standalone clouds on their own do not deliver business value; that requires a cloud connected to business systems and able to run applications more flexibly and efficiently than before. Mark Shuttleworth will discuss how the advances in interoperability being made between Ubuntu OpenStack and a wide variety of technologies and applications are enabling organisations to get greater value from their OpenStack cloud faster and more easily than ever before.
In a hyper-connected world with mobile, social and embedded intelligence, businesses face pressure to leverage technology to differentiate like never before. Winners will be decided by how quick and responsive they are to customer needs and how rapidly they innovate amid pervasive change in a heterogeneous landscape. As an industry, we have an opportunity to drive this next era of rapid innovation by supporting and strengthening interoperability. In his keynote address, Dr. Daniel Sabbah, IBM CTO and General Manager for Next Generation Platform, will walk the audience through the state of the industry and the imperative of an Open Cloud Architecture to enable rapid and constant innovation. Over the last few decades, IBM has been at the forefront of open interoperability initiatives, now including OpenStack, and has helped unlock their potential for its clients. Dr. Sabbah will host one such client, Paul Lu, CEO of Wuxi Lake Cloud Tai Cloud Computing, on stage to talk about how easily they integrated their OpenStack-based solution. IBM applauds OpenStack's growth, relevance and role in an Open Cloud Architecture.
OpenStack is quickly becoming the de facto standard for open cloud platforms. But how can someone quickly get started with learning this exciting new technology? This workshop will walk participants through an overview of the OpenStack components and offer practical suggestions and resources for learning OpenStack. To demonstrate one way to get started, we will set up a multi-node OpenStack cloud using RDO and the Packstack utility. The installation will be performed on an RPM-based system. Participants will be introduced to a range of cloud functionality, including: adding new users; adding an image to Glance; defining networks in Neutron; starting a new virtual server; creating and attaching persistent storage volumes to virtual servers; storing objects in Swift; and using the Horizon dashboard user interface. Instructions to prepare for the workshop can be found at http://openstack.redhat.com/GettingStartedHavana_w_GRE. A must for OpenStack newbies!
OpenStack Heat is gaining momentum as a DevOps tool for orchestrating the creation of OpenStack cloud environments. Heat is based on a DSL describing simple orchestration of cloud objects, but it lacks a richer representation of middleware and application components, as well as more complex deployment and post-deployment orchestration workflows. The Heat community has started discussing a higher-level DSL that will support more than just infrastructure components. This session will present an extended proposal for a DSL based on the TOSCA specification, which covers broader aspects of application behavior and deployment, such as installation, configuration management, continuous deployment, auto-healing and scaling. We will also share some of our thoughts on how this DSL can interface with native OpenStack projects such as Heat, Keystone and Ceilometer.
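For context, Heat's current DSL describes infrastructure objects such as servers. A minimal HOT template (Havana-era `heat_template_version: 2013-05-23` syntax; the image and flavor names below are placeholders) looks roughly like this:

```yaml
heat_template_version: 2013-05-23
description: Minimal Heat template; image and flavor names are placeholders.
resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: fedora-20.x86_64   # placeholder image name
      flavor: m1.small          # placeholder flavor
```

A TOSCA-based DSL would layer middleware and application node types, plus lifecycle operations such as install, configure, heal and scale, on top of infrastructure resources like the one above.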
This lighthearted talk discusses the cultural divide between OpenStack developers, system admins and business folks, and urges all sides to show each other some love. I will review real use cases and situations and share lessons I've learned about overcoming the artificial barriers, so we can continue to have nice things like OpenStack.
CERN, the European Laboratory for Particle Physics, has long been running OpenStack test deployments, among other leading open source tools, to build the private cloud infrastructure that helps scientists around the world uncover the mysteries of the universe. During the Grizzly cycle, CERN moved its production workload to its highly available OpenStack cloud infrastructure, which has multiple cells and is deployed across two computer centres (Geneva and Budapest) totaling up to 6 MW of capacity. In this talk you will learn the history, architecture, tools and technical decisions behind the CERN cloud infrastructure, along with tips and techniques for production deployment of OpenStack and plans for the future.
A Strong Foundation Was Built: a modern, supported, and secure OS; virtualization in production; a capable staff.

Agility, Agility, Agility, at a Personal Level: An agile methodology is critical to the continued success of the Cloud team at PayPal, but its acceptance amongst "old school" infrastructure engineers was not always easy or expected. Each person took on a personal transformation to remain relevant and important. They accepted new roles that were not always visibly aligned with their previous capabilities. They embraced change on their own schedules, finding their own path, but ultimately embracing it. The team was strong. Now we are stronger and more productive (measurable by the number of meaningful deliverables year-over-year).

Finding Consistency in a Constantly Changing World: These keep us sane: JIRA, kanban, and daily stand-ups. JIRA: easy to track your work. Kanban: easy to understand what you should be working on. Daily stand-ups: ensuring work is happening, and clearing any roadblocks.

Get Ahead of the Game and Stay Ahead: It's easy to fall behind. We did. Get ahead by identifying where you haven't been smart. Stay ahead by automating the things that caused you to fall behind. Have each technology or service owner run a simple gap analysis by asking six things: Is it highly available and resilient? Is it able to be consistently deployed and managed? Does it utilize configuration management for installation and configuration? Is it documented? Is it secure? Is it monitored? If any answer is no, you've found a gap to address. Once you can answer yes to all of them, your next deployment or upgrade will require much less time. And before you know it, you're ahead.

Lessons Learned: Each person will transition on their own schedule, based on their willingness to accept change. Don't expect an overnight turn-around from the boots on the ground. When it comes, though, be prepared to deliver more products and projects than you ever thought possible, and to have fun doing it with cutting-edge technologies. There is not one way to do agile. We started with a scrum approach and quickly realized that bi-weekly sprints really didn't work well for us. We found kanban and find it well balanced for the type of work we are doing.

I'm Doing Support?! L1+L2+L3 support is not ideal, but for an immature product it is required. DevOps until it's documented well and supportable by L1+L2.
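The six-question gap analysis above can be expressed as a trivial checklist script; this is a hypothetical illustration, not PayPal's actual tooling:

```python
# Hypothetical sketch of the six-question gap analysis described above.
# A service passes only when every answer is "yes"; any "no" is a gap to address.

GAP_QUESTIONS = [
    "highly available and resilient",
    "consistently deployed and managed",
    "uses configuration management",
    "documented",
    "secure",
    "monitored",
]

def find_gaps(answers):
    """Return the questions answered 'no' for one service.

    `answers` maps each question to True (yes) or False (no);
    unanswered questions count as "no".
    """
    return [q for q in GAP_QUESTIONS if not answers.get(q, False)]

# Example: a service that is documented and monitored but nothing else.
service = {"documented": True, "monitored": True}
assert find_gaps(service) == [
    "highly available and resilient",
    "consistently deployed and managed",
    "uses configuration management",
    "secure",
]
```

Once a service answers yes to all six, it drops out of the gap list, which is the talk's definition of being "ahead".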
This presentation will focus on how we are working with the community to deploy a highly available, qualified OpenStack architecture that meets the requirements and solves the needs of enterprises today. We take a deep look at this highly available active/active architecture stack and at how it is constructed and deployed while keeping total cost of ownership low. Each OpenStack component in the stack will be examined: we look at how we improve database stability, how we created an active/active OpenStack load balancer to solve the issues associated with using active/passive, and more.
Network Virtualization promises to impact the networking industry much like server virtualization impacted IT. What this means for you is that the networks in your OpenStack cloud can be architected and operated in a way that is drastically better than the manual and opaque networks of the traditional enterprise era. Specifically, network virtualization provides fully-automated provisioning of network connectivity and policy and opens up new opportunities for network visualization, troubleshooting, and security. Now that network virtualization has been around a few years and exists in many production OpenStack deployments, we can talk concretely about which of the promised benefits of network virtualization are already being realized today, and which remain as promising ideas yet to be fully realized. This view into the deployments of network virtualization pioneers, along with a view of key industry trends will help your organization navigate the transition to network virtualization.
As the core network abstractions provided by Neutron gain widespread usage, newer abstractions for network services (such as service insertion and service chaining) are being worked out by the Neutron community. However, it remains unclear whether the provided level of abstraction is suitable at higher layers of the stack. In particular, applications that are used for deploying various workloads may find a different, higher-level network abstraction more useful. In this talk we first present the Neutron abstractions for networking resources, including the newly developed network services. We then show how different levels of network abstraction can be more useful at higher layers and propose alternative workload-centric abstractions for network resources. In particular, we discuss Heat and TOSCA as they relate to specifying the network requirements of various real-world workloads.
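To make the current resource-level abstraction concrete: in Heat, even a simple workload must spell out network, subnet and port plumbing explicitly. A minimal sketch (the resource types are Heat's Neutron resources; the names and CIDR are placeholders):

```yaml
heat_template_version: 2013-05-23
resources:
  app_net:
    type: OS::Neutron::Net
  app_subnet:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: app_net }
      cidr: 10.0.0.0/24            # placeholder CIDR
  app_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: app_net }
```

A workload-centric abstraction would instead let the application declare its connectivity and policy requirements and leave this plumbing to the platform.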
PayPal has adopted a hypervisor-agnostic stance within our OpenStack Grizzly cloud. This presentation will cover the details surrounding our Grizzly implementation and the integration of both KVM and ESX hypervisors under one management umbrella: Grizzly deployment details; configuration details for ESX integration; the reasons for executing this strategy; and the benefits and pitfalls of this plan. This will be an audience-modified version of a presentation I am giving at VMworld 2013 in San Francisco in August 2013. PayPal is the faster, safer way to pay and get paid online, via a mobile device and in store. The service gives people simpler ways to send money without sharing financial information, and with the flexibility to pay using their account balances, bank accounts, credit cards or promotional financing. With 128 million active accounts in 193 markets and 25 currencies around the world, PayPal enables global commerce, processing more than 7.6 million payments every day. Because PayPal helps people transact anytime, anywhere and in any way, the company is a driving force behind the growth of mobile commerce and expects to process $20 billion in mobile payments in 2013. PayPal is an eBay (Nasdaq: EBAY) company and contributed 40 percent of eBay Inc.'s revenues in 2012. PayPal is headquartered in San Jose, Calif., and its international headquarters is located in Singapore. Scott Carlson is currently a Senior Engineer at PayPal and has been in the field of infrastructure and information security for 15 years within the online banking, education, and payment sectors.
The suite of APIs that enable programmatic interaction with OpenStack environments are among the most elegant API suites in the cloud computing world. Currently, however, there's a disconnect between the design of the APIs and how they operate in the real world. This presentation provides an overview of the APIs that comprise the OpenStack API suite and then dives into how they function in the real world across different deployments and how they have changed as OpenStack has evolved. Topics covered in this presentation include: A quick tour of the OpenStack APIs Keystone and OpenStack API security The challenges in working with multiple OpenStack versions Nova volumes and Cinder Nova networks and Quantum Comparing the utility of the native OpenStack APIs to the EC2 APIs
Learn how we built a business case for OpenStack adoption at Workday, where we are in our journey, some lessons learned, and how you might go from zero to hero. As a rising star in the enterprise software landscape, Workday provides software as a service that runs in our home-grown cloud. Adopting OpenStack required identifying and socializing benefits and key performance indicators, not to mention socializing a utopian vision of OpenStack goodness. Once we had buy-in, we began our adoption process with an actionable plan that included evaluating partners, operating systems, components and accessories. By identifying and aiming for early wins, we followed our roadmap to success, and you can too.
Shutterstock was founded in 2003, before AWS or the advent of consumer cloud technologies. Over the years Shutterstock has scaled significantly and has needed to solve specific challenges: low latency, high-volume deployment, the shift to a services architecture, high-performance database transactions, unstructured data, and the need for petabytes of storage. The infrastructure at Shutterstock is unique, embracing strong expertise in systems management and design as well as advanced distributed networking principles (anycast, OSPF, BGP). This talk will focus on the design concepts of building a custom cloud with shared operational responsibility between your engineering and operations teams, as well as a discussion of the theory of building the right type of teams for modern infrastructure platforms.
One of the things that any software development group envies in OpenStack is the project's ability to coordinate and maintain a working product built by several hundred contributors without missing a single deadline. This is mostly due to the organisation of the OpenStack project and to the great work of the infrastructure team within the project. As more and more of our contacts wonder about this miracle that happens every six months, we realized that many of them would like to use similar tooling to deliver their own projects. For the past few months eNovance has been working on streamlining tooling to deliver this, and has started a new product offering called the "Software Factory". This session will explain what changes and modules we had to make to Git, Gerrit, Jenkins and the other tools that support OpenStack Nova, Swift and Cinder out of the box, in order to provide an integrated software factory for almost any language.
With the growth and adoption of OpenStack, the real possibility of federating multiple clouds is emerging. But are the software and the community ready for this? What are the real use cases for a hybrid OpenStack cloud? Come and hear how CERN is working with Rackspace to develop solutions for connecting multiple OpenStack clouds, and how a hybrid cloud can help further CERN's important research into the origins of the universe. Tim Bell, Manager of Infrastructure for CERN, and Toby Owen, Head of Technical Strategy for Rackspace, will discuss their joint Openlab research project into federation of OpenStack clouds. They will explain the problems they are aiming to solve with federation, potential architectures and blueprints that will guide the research project, current progress, and future plans.
As is well known, it is a big challenge to deploy and operate a large-scale OpenStack cloud of tens of thousands of nodes. We built a 6,400-node OpenStack platform on TH-2 a year ago, and an enormous amount of work has gone into the deployment and optimization needed to create this large cloud. In this presentation, we will talk about the architecture of our cloud, the deployment strategies we used, the experience we have gained, and the problems we have met.
OpenStack was developed largely with KVM, while ESXi and VMware are widely considered one and the same. Is it possible to mix and match, and why would you want to? What are the opportunities and challenges of running OpenStack orchestration on an ESXi hypervisor? In this talk, we'll look at some real-world examples of hypervisor compatibility with OpenStack, examine some performance data and sensitivity analysis, and identify practical approaches to configuring an ESXi and mixed-hypervisor environment in your OpenStack cloud.
In the search for an automation framework to help manage the deployment of your OpenStack clusters, you'll find several tool chains and frameworks to choose from. Symantec embarked on a proof-of-concept pilot to test various solution options. Symantec's initial list of proposed features and capabilities included: the ability to manage bare-metal provisioning; enrollment of nodes in the cluster; deployment of the OpenStack implementation on the cluster; full lifecycle management of a node through birth, provisioning, operations, and decommissioning; high-availability configurations of OpenStack components; RBAC user/group security mechanisms; multiple physical network segments to segregate various traffic types for security; multiple-cluster control capabilities; multi-region management capabilities; and the ability to scale to many thousands of nodes. Symantec's initial design posed significant challenges for all of the frameworks under consideration, which required a high-touch professional services engagement with each vendor. This presentation will discuss the pros and cons of the various vendor solutions for deploying OpenStack. The vendor solutions under test were as follows: Mirantis Fuel - http://www.mirantis.com/ ; Canonical Juju/MAAS - http://maas.ubuntu.com/ ; Dell Crowbar - http://dell.com/crowbar ; TripleO - https://wiki.openstack.org/wiki/TripleO ; Foreman - http://www.theforeman.org/ ; Rackspace - http://www.rackspace.com/ . Did we find an implementation that hit all the requirements? Come find out!
Running your own infrastructure *can* cost as little as half of running on AWS once you are at scale. OpenStack-based cloud systems can provide the same or similar economies of scale if you leverage the lessons of AWS and GCE when building your cloud. This talk will discuss the economic factors in designing a cost-efficient AWS + OpenStack hybrid cloud. We'll look at the issues involved in repatriating existing applications, and we'll show a couple of real-world demonstrations of tools that can assist in the repatriation process. Repatriation isn't quite as simple as hitting the Easy button, but if you plan your deployment correctly, you can make it work, both technically and economically.
Hear directly from the executives who are building the future of cloud technology today.
This session is a panel discussion with OpenStack users who have experience deploying OpenStack clouds in production environments. The panel will be asked to discuss the specific organizational challenges they faced and the transformation required to bring cloud into their organizations: the challenges of bringing together server, network and storage teams; the rise of converged teams or virtual "cloud" teams; and what the term "cloud architect" means. Is it the infrastructure guy, the network guy, the server guy, a mix, or something totally new?
As OpenStack expands its footprint into the enterprise, more and more people who are familiar with VMware vSphere concepts and terminology will be exposed to OpenStack. In order for OpenStack to be successful in enterprise environments, these people will need to understand the concepts and terminology in OpenStack. In this session, VMware experts who have moved into the OpenStack space will talk about how to bridge the gap between vSphere concepts and terminology and OpenStack concepts and terminology, showing the value in OpenStack for VMware environments and the value of OpenStack to VMware administrators. Please note that this session is not an attempt to give attendees a detailed understanding of how vSphere-related integrations work or are configured; instead it focuses on educating VMware-savvy administrators and users on OpenStack. This session will be valuable for anyone who needs a better grasp of how to talk about both VMware and OpenStack within an enterprise context.
OpenStack has experienced phenomenal growth, but can it make the leap from tens of thousands of community members to tens of millions? Many large software ecosystems that have thrived recently have been powered by developers. So far, enterprise developers have remained largely on the periphery as far as OpenStack is concerned, even though there are several use cases, such as SDLC, Big Data, PaaS atop OpenStack, and hybrid cloud development, that are aimed squarely at developers. Perhaps IaaS will never be interesting to developers? Can some of the newer initiatives (like Heat) change this? Attend this panel to hear different perspectives on whether OpenStack should appeal to enterprise developers and, if so, how. Please come armed with tough questions and opinions on what OpenStack means to enterprise developers. These stackers have worked with large ecosystems of developers in their past lives and represent different parts of the OpenStack ecosystem. More than anything, they are not hesitant to speak their minds, and neither should you be.
Chef, Crowbar, Puppet, TripleO, Piston, Nebula, eDeploy, etc. each offer more or less complementary techniques to deploy OpenStack, but which one should you choose if you start a new OpenStack project today? And what about upgrading your OpenStack: what is the best method to keep your deployment up to date? The goal of this round table is to invite the main experts from the various projects to come and share their vision with us, and to try to explain why their method is better than the others.
While all the hype around cloud computing would suggest that everyone is already doing it, the reality is just a bit behind. There's a learning curve for most organizations that is holding back cloud adoption but in turn presents opportunities for the OpenStack community to accelerate them into the cloud era. In this session, Forrester Research analysts James Staten and Charlie Dai will detail the true picture of enterprise cloud adoption worldwide and here in Asia using fresh Forrsights survey data and qualitative analysis from enterprise client conversations. In this session you will learn: * The true pace of adoption of Infrastructure as a Service (both public and private clouds) * What challenges are holding enterprises back from wider adoption (and what you can do about it) * Where are the sweet spots for enterprise adoption (and how to best capitalize on them)
What is the industry doing to advance the architecture of Software-Defined Storage? SDS is going to be very big (more than $10B over the next 4 years according to IDC), set to grow rapidly from the small fraction it represents in today's market. Listen to what companies, both established and new alike, are doing to enable this data growth and help transform how clouds consume and deliver storage as a service.
My job at Mirantis is to figure out how to co-opete in this crowded space. Yes, we are all in OpenStack to build a great open source community. But let's face it: we are REALLY in it to make money. So: does Rackspace regret letting go of control? Will Red Hat OpenStack mean death for the start-up distros? And what is the game plan for IBM and HP? I spend my days analyzing every move anybody makes in the ecosystem and figuring out what it means for the industry. In this talk I'll shed some light on OpenStack coopetition politics, share a subjective view of the strategies of individual players, and offer some predictions on the future OpenStack competitive landscape.
We're from Intel IT Engineering Computing, a group that provides Intel's private cloud solution. Currently, we have a 36-node OpenStack cluster (based on Grizzly) providing over 1,000 VMs to our internal customers. Our OpenStack journey started with Essex. During this past year, we ran into a lot of problems and resolved them. It's time for us to share our experience with the community.

(1) How to manage version updates? The half-yearly update is very painful, and we use continuous integration to solve this problem. We track the source code from GitHub every day. Once the code changes, our CI server checks out the code, deploys it to our test environment and runs all the tests. We add our customized tests to Tempest. By using CI locally, we can integrate the latest or tagged code daily and resolve update issues as part of everyday work. At the same time, we can track the progress of the community.

(2) Tempest is not perfect. We're using Tempest in our CI environment; however, Tempest leaves many issues in your environment after you run it. We have added a new module, called Tempest Launcher, to help us clean up the environment after running Tempest.

(3) A more powerful and friendly portal. We developed a customized portal called Taurus, a one-stop operations center. Taurus not only covers all the functions Horizon has, but also integrates with Nagios and Ganglia. You can also use Taurus to trigger Tempest and view its results conveniently.

(4) Improving the speed of snapshots. The original snapshot is slow because of rebasing and uploading each time. We made improvements to this feature to reduce the snapshot time, which saves our customers a lot of time.

(5) One-click deployment. Based on Dodai, we burn a customized live image onto a USB stick, and using this USB stick it is very easy to deploy an OpenStack cluster.

(6) BI in cloud data analysis. We launch Hadoop on our OpenStack cluster to analyze the cluster's log data nightly and archive the data into Ceph.
Synopsis: OpenStack has been called the largest DevOps open source project in the world and as such there are a lot of lessons we can learn from the project. During this talk, I'll discuss some of the practices OpenStack follows to make the lives of Devs and Ops better as well as discussing where OpenStack can fit into your ToolChains. Overview: 1) Defining the ToolChain 2) Review of OpenStack Practices 3) Where OpenStack fits in 4) What's missing? 5) Q/A and Community Input
Private clouds have recently begun to gain momentum among enterprises--and with good reason. They're able to deliver the agility and ease-of-use that users have come to expect--without sacrificing any of the security or control that is critical to so many industries. But implemented incorrectly, there's no question that private clouds can be a costly and frustrating undertaking. In this talk, Steve will spell out the most common mistakes that he and his colleagues have seen in their experience working with companies that have attempted to implement private clouds. He'll talk about what has gone wrong for these firms, as well as what steps your company can take to avoid falling into similar traps as you pursue your own private cloud.
Whenever you submit a patch to OpenStack, it is run through an extensive set of automated tests designed to ensure we always have a working upstream OpenStack. If any of these fail on your patch submission, it will be blocked from merging and will become a low priority for anyone to review. Debugging these failures is often not straightforward, especially for a new contributor to OpenStack. This talk will look at all those tests in detail, and at how to debug and address a failure at each level: what logs to look for, how to understand the interactions between services, and how to handle scenarios that you can't seem to replicate locally but that are hit in the OpenStack gate. Sean Dague is the OpenStack QA Program Project Technical Lead, and a Nova Core member. He's contributed to OpenStack since the Essex release.
Yahoo Japan's IT infrastructure is a compound of massive, scale-out data centers that cover the Tokyo metropolitan area and East Japan. Two years ago, Yahoo Japan initiated a private cloud project. Given the heterogeneous nature of its equipment and future scaling requirements, an open and innovative architecture was paramount. Open source pioneers, Yahoo Japan decided on OpenStack for orchestration. For server load balancing (SLB), the Brocade ADX was selected to ensure application delivery at the speed of the business. SLB Direct Server Return (DSR) is an effective method of deploying a load balancer in terms of network configuration. The rate of change for this configuration is very high because it is directly correlated to workload deployment; hence adding this step to the orchestration workflow is critical to Yahoo Japan's modus operandi. DSR is a discussion topic for a design session at the OpenStack Summit on November 3 in Hong Kong, leaving Yahoo Japan today with a gap in the plan for full automation. This session will discuss how Brocade has shaped LBaaS to address Yahoo Japan's cloud needs.
China is often regarded as a huge but mysterious market. In the consumer Internet market, Google, Amazon, PayPal and a long list of U.S. Internet companies have almost all lost in China, leaving the huge market to their Chinese counterparts. By contrast, traditional IT giants such as IBM, HP and VMware have dominated the Chinese IT hardware, software and services business for a long time, while local enterprise technology and software companies have had few chances to win. What exactly are the reasons behind these facts? What about the cloud business in China? Which type of cloud business is best suited to the market? How do Chinese government policies influence the cloud business? The cloud ecosystem is still in flux but rapidly evolving, and the market pattern will be formed in a few years. Who will be the last cloud winner in China: Internet cloud companies like Amazon and Google, the traditional IT giants in China, or local Chinese companies? As the founder and CEO of UnitedStack, the first dedicated OpenStack company in China, Hui Cheng has deep insight into Chinese cloud business and strategies. In this talk, he will try to answer the above questions, and will also dive deeply into the cloud status quo, as well as OpenStack opportunities, in China.
Using diskimage-builder, your application deployments can be tested easily and deployed quickly. If you are interested in learning how to create custom golden disk images and use them in your CI system and then in deployments, you should come to this talk. In OpenStack Deployment (TripleO) we have built a lightweight golden disk image creator called diskimage-builder, which we use to make testing and deploying OpenStack itself easy and fast. The same tool can be used to prepare golden disk images of any Linux application. I will cover the architecture of diskimage-builder, how to extend it with your own elements, and how to use the resulting images for deploying within an OpenStack cloud.
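As background on how elements work: an element is essentially a directory of phase-ordered scripts (for example, scripts in `install.d/` run inside the image chroot during the build). This hypothetical Python snippet lays out the skeleton of a custom element; the element name, script name and package are invented for illustration:

```python
# Sketch: lay out the skeleton of a custom diskimage-builder element.
# The element and package names are hypothetical; install.d is a real
# DIB phase whose scripts run inside the image chroot during the build.
import os
import stat
import tempfile

elements_path = tempfile.mkdtemp()                # would normally sit on ELEMENTS_PATH
element = os.path.join(elements_path, "my-app")   # hypothetical element name
os.makedirs(os.path.join(element, "install.d"))

# Scripts are prefixed with a number that controls ordering within the phase.
script = os.path.join(element, "install.d", "50-install-my-app")
with open(script, "w") as f:
    f.write("#!/bin/bash\n"
            "set -eux\n"
            "# hypothetical: install the application payload\n"
            "apt-get -y install my-app\n")
os.chmod(script, os.stat(script).st_mode | stat.S_IEXEC)  # phase scripts must be executable

assert os.access(script, os.X_OK)
```

With the element directory added to `ELEMENTS_PATH`, an image could then be built with something like `disk-image-create ubuntu my-app -o my-app.qcow2`.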
LivePerson's infrastructure in a nutshell:
- Requirements, and goals achieved! OpenStack is running the core business of LivePerson
- The different OpenStack components in use, and why
- One year in production and growing like crazy
- Infrastructure as a building block
- Tips to get you started
- Future plans to continue with OpenStack and with bleeding-edge core projects
This presentation covers practical experience from building an OpenStack private cloud with a specific emphasis on high availability. Based on a production deployment of OpenStack Grizzly at Pixelpark (a hosting company based in Berlin, Germany), we explore and explain highly-available data storage with Ceph, OpenStack service high availability with Pacemaker, virtual machine high availability in OpenStack Compute, and high availability considerations for OpenStack Networking. We also outline the lessons a previously conventional hosting company has learned from moving to a private cloud deployment, both from a technical and organizational perspective. About the speakers: Sebastian Kachel is an all-around devops guy at Pixelpark, and has been involved in the OpenStack deployment project since its inception. Florian Haas has provided consulting, training and guidance to Pixelpark over the course of the project.
This time the MercadoLibre Cloudbuilders are going to explain their OpenStack multi-disk Swift architecture: the journey they traveled to reach their current deployment, configuration and tuning hints to sustain a constant 300,000 rpm and preserve the initial performance over time, common mistakes, common failure scenarios and how to get through them.
- Learn how the biggest e-commerce company in Latin America stores all the images of all its items on OpenStack Swift
- How we moved from a trillions-of-dollars storage schema to an open source, commodity-hardware-based object storage system
- How we handle 300,000+ rpm and 85+ million users with no performance degradation, key caching and black magic
- An in-depth technical explanation of our Swift multi-disk architecture
- Lessons learned
- How we plan to scale up and out
Don't be afraid to Swift up your storage!
In this session, Mark Collier, COO of the OpenStack Foundation, will spotlight the OpenStack community and adoption in China through OpenStack users Ctrip, iQiyi and Qihoo360.
In the Portland Summit, we surveyed various approaches to surviving a full data center disaster with OpenStack. Ensuring the ability to recover the technology infrastructure after a disaster is hard. It is hard since it requires geographic distribution, where data written by an application is replicated to the data center which will be used for recovery. OpenStack, as we described in the Portland Summit, is still immature in this respect. As an emerging IaaS platform that is seeing more and more enterprise use, we need OpenStack to evolve to easily support disaster recovery (DR). We believe OpenStack should provide a consistent mechanism to abstract the DR support built into many enterprise systems, and higher level automation, e.g., via Heat, should be able to easily configure these mechanisms into a DR solution appropriate for a workload. In this presentation, we will review basic DR concepts and present our OpenStack DR vision.
Mark McLoughlin, with an introduction from Brian Stevens, will give his take on our community’s incredible momentum, the key ingredients to our success, and where we go from here.
There is much debate about which cloud operating system or platform will win and that current proprietary clouds will capture the majority of the mindshare and revenue. There are also questions of whether OpenStack is mature or secure enough for the enterprise. Join Monty Taylor, HP Distinguished Technologist and OpenStack board member, as he debunks these theories and shows how OpenStack is not only on the right track from an innovation and technology standpoint but also from a customer and market perspective. In the $80 billion cloud computing market, there will be more than one winner, and Monty will illustrate how OpenStack is one of those. He will also discuss what the organization and its key contributors must do to move the software forward and achieve this market success, such as interoperability, upgradability and other enterprise-grade functionality.
OpenDaylight is an exciting new community-led, open source project focused on accelerating adoption of software-defined networking (SDN) by providing a robust SDN platform on which the industry can build and innovate. An OpenDaylight controller provides flexible management of both physical and virtual networks. The open source nature of the project and its flexible network management capabilities make it an ideal SDN platform to integrate with Neutron. In this session, OpenDaylight community members from Cisco, IBM, Red Hat, and Ericsson will describe the OpenDaylight project goals and platform architecture, as well as the roadmap and progress to date. OpenDaylight brings together a number of virtual networking approaches, and we will discuss integration approaches with OpenStack Neutron that provide flexibility for OpenStack administrators and users. Details of our initial Neutron integration will also be demonstrated for attendees. Attendees will leave this session with a greater understanding of what OpenDaylight is, and how it can integrate with OpenStack Neutron to provide a powerful SDN-based networking solution for OpenStack clouds.
A fixture at recent OpenStack Summits, this presentation gives an updated overview of the state of high availability in OpenStack. It outlines and summarizes recent changes and additions to OpenStack in the high-availability space in Havana, and gives an outlook into high availability in OpenStack Icehouse.
Ctrip (http://www.ctrip.com) is the leading OTA company in China. Ctrip sold 70 billion RMB worth of travel products last year, including flights, hotels, tours and trains, and is still growing fast, so we need a solid and elastic infrastructure to support our business. We started building a private cloud for Ctrip last year. After evaluating CloudStack, OpenStack and OpenNebula, we finally chose OpenStack. We developed on top of OpenStack Grizzly, and it now manages our QA farm, dev farm and part of our production environment, with thousands of bare-metal machines and VMs. We use VMware, and are working on migrating to KVM. I gave a talk at the 3rd anniversary party of OpenStack in Shanghai, and I hope to have the chance to speak at the Hong Kong Summit and share what we did at Ctrip. We now have 3 teams (15+ engineers) working on:
- Bare-metal provisioning, developed based on Razor
- Extending VMware management in OpenStack
- Building VDI, planned for more than 8,000 employees in our call center; we will roll out 500-1,000 thin clients in October
Bare-metal provisioning (with demo): after a server has been put on the rack, powered on, and connected to a switch on the provisioning VLAN, it can automatically perform the tasks below. We also have a UI for ops to submit requests.
- Baking for a given period, with a report provided
- Configure BIOS
- Upgrade firmware
- Set up iLO
- Configure RAID
- OS installation: supports Windows 2k8, Ubuntu 12.04, CentOS 6.4 and ESXi 5.1
- Post-install: Windows (KMS activation, patching, network configuration) and Linux (all kinds of configuration)
It's based on Razor, and we are working out how to merge it with Ironic and contribute it to the community. Extending VMware management in OpenStack (with demo): we made many enhancements and extensions to the VMware driver. Each tenant can map to an HA cluster/folder in vCenter.
- Added a Nova extension for a VMware environment to be associated with a tenant; each environment contains a vCenter/datacenter/hypervisor cluster or folder, a target VM folder, a datastore regex, and the AD OU it belongs to plus AD admin groups
- Support for deploying from a template in a datastore with a customisation spec, with template management in Horizon
- Multi-compute, with each compute managing one cluster
- Added a datastore filter and a hypervisor filter
- Support for vSwitch/dvSwitch using VLAN mode, working perfectly with Neutron, supporting both VMware and KVM (Open vSwitch)
- Extended Keystone considerably to support AD integration (AD/LDAP as a service): not only managing OpenStack permissions, but also taking care of system (OS) permissions of VMs or bare metal; VMs join the domain and are put in a specific OU (supports both Windows and Linux); support for AD group policy updates; AD user management in Horizon
- Blueprint to extend vSphere support in Neutron: https://blueprints.launchpad.net/neutron/+spec/quantum-vmware-plugin
- Integration with Zabbix
As for DNS as a service, we are going to contribute our Windows driver to the Designate project. We will start code contribution soon; I believe we will have code committed before this summit. VDI for the call center (I think we will have made a lot of progress by the Hong Kong Summit):
- Use OpenStack to manage the whole VDI
- Based on KVM with optimised SPICE
- ARM thin clients
- Under heavy development and hardware LPT testing
- Aiming to roll out 500-1,000 clients in October
VMware has been deeply engaged in helping its customers in OpenStack deployments. This includes deployments that leverage both the VMware NSX networking platform (formerly Nicira) and also deployments that leverage VMware vSphere, the world's most advanced compute hypervisor platform. In this session, we will feature stories from these customers and also give a look ahead at the latest and greatest OpenStack integration work coming out of VMware.
Barbican, the open source key management system for OpenStack, is proud to announce our 1.0 Havana release. This release includes support for symmetric key management suitable for securely encrypting data at rest. Come check out our first release including discussion and code for integrating Barbican with Cinder and Swift and discussion of future features like federation and SSL. There will be demonstrations of the system including a Swift encrypting proxy and integration with a popular web framework.
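As a rough illustration of what storing a symmetric key involves, a client might build a JSON body like the following to POST to Barbican's v1 secrets endpoint. The field names reflect the Havana-era API as commonly documented; treat the exact schema, and the secret name, as assumptions rather than the talk's specification:

```python
import base64
import json
import os

# Generate a 256-bit symmetric key locally, then wrap it in the kind of
# JSON body a Barbican client might POST to /v1/secrets (schema assumed).
key = os.urandom(32)  # 32 bytes = 256 bits
secret_request = {
    "name": "cinder volume key",  # illustrative name, not a real secret
    "algorithm": "aes",
    "bit_length": 256,
    "mode": "cbc",
    # Binary payloads are base64-encoded for transport
    "payload": base64.b64encode(key).decode("ascii"),
    "payload_content_type": "application/octet-stream",
    "payload_content_encoding": "base64",
}
print(json.dumps(secret_request, indent=2))
```

On success the service would return a secret reference URL, which consumers such as Cinder or Swift could later use to retrieve the key for encrypting data at rest.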
OpenStack adoption is growing constantly, but there are still areas where vanilla OpenStack cannot be used out of the box. One of those fields is the financial sector, which requires infrastructure to be properly hardened and secured to meet challenging industry compliance requirements. To make OpenStack suitable for security-demanding environments, we analysed most of the components and developed a set of modules for system hardening. We'll show a pragmatic set of guidelines for deploying an OpenStack-based cloud infrastructure able to meet most PCI DSS requirements.
Cloud computing services are quickly becoming the ideal platform for the rapid development and deployment of elastic, scalable web applications and services on virtualized infrastructure. As OpenStack evolves, even more infrastructure services such as load balancing, firewalls, and other virtualized network functions are moving into this infrastructure-as-a-service cloud platform layer. Most recently, underlying physical systems are also evolving to become application-aware, to better meet the performance and resource orchestration requirements of these new, network-centric cloud applications. So, whereas some think of cloud computing as designed for the delivery of web or enterprise applications, several of the leading consumer-facing service providers are turning to OpenStack as a platform for delivering a broad set of network-centric services such as video on demand, mobile apps, or network function virtualization (NFV). This talk will describe the opportunity presented by OpenStack to evolve its capabilities in this direction, in concert with application-aware infrastructure, to simultaneously deliver faster service delivery and an enhanced user experience.
OpenStack enables the ability to mix and match infrastructure technologies and deployment options based on application workloads. In this session, you will learn how Citrix products including XenServer, NetScaler and CloudPortal integrate with OpenStack to accelerate time-to-value. Attendees will leave with an understanding of the technical capabilities and integration points between Citrix products and OpenStack.
The Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) currently has more than 100 active users and roughly 35 research projects running on its OpenStack cloud. That research runs the gamut – from analyzing how computers understand text, to conducting hardware research. In this featured presentation, hear from MIT CSAIL Senior Architect Jon Proulx about how OpenStack helped the MIT-based lab break away from proprietary clouds and how it empowers developers and users to conduct sophisticated research on OpenStack. Joining Jon will be Rackspace Director of Training and Certification Tony Campbell, whose team brought Rackspace Training for OpenStack to MIT, which helped researchers and students at the university sharpen their OpenStack expertise.
We have been building our infrastructure-as-a-service based on OpenStack since 2011. Relative to a public cloud, a private cloud has more specific requirements, such as a special network architecture and a special strategy for the allocation of resources. This session will introduce the problems and demands we have met in our private cloud. We are now also building our network platform via SDN to provide a dynamic and programmable network for our infrastructure.
During this session we’ll productively use our time to give away 5 Samsung Galaxy Tab 3 tablets with Android to the audience in a trivia spectacular we call “The OpenStack Elimination Game Show.” We’ll present difficult OpenStack technical questions to a panel of 3 experts. One of them will provide the correct answer; the other two will bluff. Everybody in the audience gets flashcards to hold over their head and vote for which panelist is telling the truth. Raise your flashcards to identify which answer is right. Get it wrong, and you are out. The last five standing each get a Samsung Galaxy Tab 3 with Android… and the fame associated with being an OpenStack guru.
The flexibility of the OpenStack architecture is key to fitting into the enterprise data center and, at the same time, can present a challenge for data center teams seeking a prescriptive, optimal configuration. Using two installations from insurance and financial services companies, this session will outline some real-world integration challenges and prescriptive solutions for integrating OpenStack in these environments. We will:
- Review some of the enterprise requirements which led to key architectural decision points for the OpenStack configuration
- Highlight unique and innovative use cases and designs that were enabled by OpenStack
- Discuss specific approaches to leveraging existing enterprise assets, such as Active Directory, storage, and networking
- Review the lessons learned and how they could be simplified by leveraging features of Havana and future releases of OpenStack
The audience should gain a broader perspective on how OpenStack is being integrated in enterprise data centers today.
Having started out as a developer cloud during the summer of 2012, the OpenStack-based private cloud at eBay has since grown into a multi-tenant, multi-region, self-service cloud. The workloads we now support range from development use cases to CI farms, as well as external-facing business-critical applications. We act and think like a public cloud and follow certain principles and practices in how we design, build, and operate. These practices enable us to maintain developer agility, operational efficiency and elasticity. In this session, we would like to share our adoption story from the time it began until now, and present our design principles, architecture, practices, and challenges in operating multi-tenant private clouds with OpenStack at the core.
Open vSwitch is used in Neutron with the Open vSwitch plugin and many other vendor plugins, but examples of Open vSwitch in production are rare. We are trying to help bring Open vSwitch into production. This presentation will cover:
- Open vSwitch in Neutron
- The performance of Open vSwitch
- Open vSwitch in hardware
Companies of all sizes are struggling to adopt a Continuous Delivery mindset in order to compete effectively. Although technology plays a significant role (having a Cloud platform designed for agility, scale and elasticity is a must), the culture of the development and delivery organization turns out to be equally if not more important. IBM has a long history of software development beginning on the mainframe decades ago, and as such has a very strong culture around how software should be delivered. With this session, we will talk about how we used OpenStack as enabling technology along with adopting many of the Community development principles as a way to change ourselves from a “Plan everything up front, deliver in large chunks every 9 months” to a “Roadmap, iteratively deliver every two weeks, feedback and improve” type of culture. We will also talk about how we have begun to use our frequently delivered solution as a way to change the way we serve our customers - including solution testing to support.
In this panel we'll bring together the foremost experts in network virtualization and OpenStack Neutron to discuss the future.
The Cyberport Technology Center has, since 2012, offered its new 3D Cloud service with free use of cutting-edge cloud-based 3D rendering technology to contestants of the Hong Kong Youth 3D Animation Competition. The 3D Cloud, which reduces the need for traditional, high-cost rendering equipment, provides a cutting-edge cloud-based 3D rendering platform to local CG production houses and local students. This OpenStack platform lets incubatees, tenants and ICT practitioners experience more scalable and flexible computing resources than they have ever had, sized to their application and project requirements. More importantly, the platform gives them the opportunity of cloud adoption, enabling them to implement a cost-effective IT infrastructure plan that lowers technology costs and creates time-to-market advantages.
Titan (formerly named Pandora), made in Hong Kong by a Hong Kong man, is a new OpenStack management portal. It aims at improving and fine-tuning some features of the original OpenStack cloud administration. Titan is central administration software for OpenStack. It relies on the Horizon server, adding extra functions to it, and users can use it to operate their OpenStack cloud. The goal is simple: bringing OpenStack to commercial use. The OpenStack project is great, but it is split into many pieces, and customers expect a powerful tool to administer all of them in one place. A billing system and a monitoring system are on the roadmap. With these functions, Internet service providers will find it easier to adopt the OpenStack ecosystem. Helping people and enhancing OpenStack are its missions. Pandora, which is now open-sourced for everyone, will focus on private cloud deployment, helping small and medium enterprises build and manage their private clouds more easily and quickly without paying for expensive licenses.
The OpenStack project is designed to make infrastructure deployment flexible and to enable all types of clouds; public, private and hybrid. Learn how Docker enables developers to build, package and deploy applications as lightweight portable containers, which run virtually anywhere, including OpenStack clouds, bare-metal servers and dedicated servers. The combination of Docker and OpenStack makes for a powerful tool for cross cloud application development and deployment. In this session, the Docker and Rackspace teams will automatically build and test open-source Margarine (our friendly test application) from source using Docker. Once complete, we will deploy Margarine from a laptop virtual environment to: 1. Rackspace public cloud 2. Rackspace private cloud 3. HP public cloud 4. OpenStack multi-node cluster in a colo-facility All with little to no modification and virtually no delay! We will wrap-up this fantastic session by discussing the recent integration of Docker into Nova, and future plans for Docker and OpenStack.
In this presentation, I'll talk about building a culture of contribution within your organization. I'll cover important aspects such as:
- The business case for contributing to OpenStack (or other open source applications)
- Getting the legal team to agree to the contributor license agreement
- Engagement and communication with the greater community
- Getting your patches "good enough" for the world to see
- Encouraging team members to be community leaders
Along the way, I'll share my experiences helping Bluehost build the world's largest OpenStack installation, along with experiences at various other employers, and how I've helped them create communities of contribution. I'll also discuss the perspective of communities and community leaders (from my experience as the Fedora Project Leader).
OpenStack is winning support across the IT industry, while NFV (Network Function Virtualization) has been widely endorsed by the CT (Communication Technology) industry. OpenStack will become the future IT control plane, while NFV is the blueprint of future telecom infrastructure. The combination of these two fundamental changes will shape both IT and CT, transforming them into a unified ICT infrastructure. This convergence will bring openness and wide adoption, but will also impose unique challenges (performance and reliability) in building cloud platforms. Huawei, as a key CT vendor and an emerging IT player, is committed to contributing to both OpenStack and NFV, and to leading their convergence. In this talk, we will discuss the business opportunity, technical challenges, ecosystem, and our vision and practices around the OpenStack and NFV journey.
Would you be excited to have a single API and single pipeline for deploying your applications on your physical servers, as well as your virtual servers? OpenStack can now do this. OpenStack enables running applications on virtual machines in a cloud - a public cloud, or your own private cloud. But, what if your application requires physical servers, or your production environment is too performance sensitive to run in VMs? The conventional approach is to use separate tooling, such as crowbar or razor, but deploying dev/test and production environments differently is a bad idea. Using OpenStack Bare Metal Provisioning, it is possible to use the same API and the same pipeline to build, test, and deploy applications on both virtual and physical machines. I am leading the development effort behind the Ironic project, and will talk about the project history, current status and what you can expect from the code today, and our plans as it matures. I will touch on a few deployments that are live, and even though baremetal as-a-service is a very compelling use case, I will explain what remains to be done to get there.
Adding an OpenStack environment into an already complex IT architecture threatens to overwhelm IT staff who are already spending countless hours managing existing IT architectures. Is it possible to unify the operational management of existing data center virtualization with an OpenStack deployment? What about adding a public cloud provider into the mix? What about adding Platform as a Service capabilities? In this session, Oleg and James will demonstrate how CloudForms can provide a unified operational management framework for all of these scenarios and help IT staff keep their sanity in the process. Subjects explained will include how to:
- Discover and monitor new and existing OpenStack environments
- Provide showback and chargeback of guest workloads
- Provision workloads via self-service catalogs to OpenStack
- Create migration analysis reports from datacenter virtualization platforms (including Red Hat Enterprise Virtualization, Microsoft Hyper-V, and VMware vSphere) to OpenStack
James and Oleg will also provide an overview of Red Hat's Open Hybrid Cloud Architecture and CloudForms' upstream open source community. Attendees will leave this session with a solid understanding of how to unify operations management of OpenStack with existing data center virtualization and public clouds in their organization.
Weather information is a personal interest of mine, and I started an open source project, hk0weather, to pursue the concept of open data. hk0weather uses Scrapy in Python to crawl the Hong Kong Observatory website, turning the weather data on its HTML pages into JSON format. In this talk, Sammy will explain how to use OpenStack Object Storage (Swift) to store weather data for the hk0weather project.
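The HTML-to-JSON step can be sketched with stdlib Python alone. The sample markup below is invented for illustration and does not reflect the Observatory's actual page structure (the real project uses Scrapy rather than a bare regex):

```python
import json
import re

# Hypothetical fragment of the kind of table the Observatory page might serve.
SAMPLE_HTML = """
<tr><td>Air temperature</td><td>27 degrees Celsius</td></tr>
<tr><td>Relative humidity</td><td>78 per cent</td></tr>
"""

def html_to_json(html: str) -> str:
    """Scrape label/value pairs out of table rows and emit a JSON document."""
    rows = re.findall(r"<td>(.*?)</td><td>(.*?)</td>", html)
    data = {label.strip().lower().replace(" ", "_"): value.strip()
            for label, value in rows}
    return json.dumps(data, sort_keys=True)

print(html_to_json(SAMPLE_HTML))
```

Each crawl could then upload the resulting JSON object to a Swift container, giving a cheap, durable archive of historical readings.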
In this session, Ayal Baron and Doug Williams will detail the range of performance considerations important when sizing and deploying OpenStack configurations. Specifically, they will discuss:
- Kernel-based Virtual Machine (KVM) hypervisor performance across a range of workloads
- Performance considerations for storing Nova instances, including both direct-attached disks and locally attached SSDs
- Performance of OpenStack networking based on Linux bridge and Open vSwitch
- Capacity planning for OpenStack provisioning services, including Nova, Cinder, Keystone, and Glance
Background: recent research indicates that Australia is one of the most highly virtualised countries in the world. VMware undoubtedly represents the largest proportion of hypervisor technology in Australia. Aptira is an Australian systems integrator with strong involvement in OpenStack deployments in Australia. Bringing OpenStack to Australia: Aptira has spent a lot of time speaking to customers large and small about OpenStack. Many of these customers are very interested in adopting cloud, but at the same time they have built an existing business around VMware, with all their datacenter infrastructure, operational procedures and tacit company knowledge based on it. This translates into strong inertia, and when the word “CapEx” is mentioned for buying new infrastructure for OpenStack, the inertia becomes even stronger. Aptira sees potential in marrying OpenStack with VMware solutions for the Australian market, allowing customers with existing VMware deployments to deliver open cloud capabilities to their own customers or internal stakeholders. Customers don't need to go out and buy expensive new SDN-capable hardware, either: Aptira is driving the network virtualisation component of these clouds using Nicira, which works with existing, in-place network hardware, offering a much smoother migration to the private cloud model. We consider that our internal use case is shared by many Australian companies employing virtualisation: we have an existing VMware solution in place, and to provide OpenStack compute and networking we would otherwise be required to invest in and operate KVM and ESXi hypervisors separately, along with installing new networking infrastructure into every rack. Exposing existing VMware solutions as OpenStack clouds creates a large resource efficiency without any additional CapEx and with minimal (if any) changes to existing infrastructure architectures.
We'll talk about our experience trying to achieve our use cases and meet real customer requirements using OpenStack + VMware:
- What worked
- What didn't
- What improvements we'd like to see in the future
- Working with VMware and Nicira engineers on OpenStack code
The Volume Encryption feature in OpenStack presents a normal block storage device to the VM but encrypts the data in the virtualization host before writing to a remote disk. This provides data confidentiality against network traffic interception, compromised storage hosts, and stolen disk drives. To the end user, the block server operates exactly as it would when reading and writing unencrypted blocks. It includes a key manager interface that supports key generation and storage, and the interface allows different key managers to be supported such as Barbican or a KMIP server. This session will be split into two parts, the first covering the set up of the Barbican key management service and the second covering the configuration and use of Cinder and Nova to provide encrypted block storage.
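The pluggable key manager interface described above can be sketched as a minimal in-memory implementation. This is an illustrative stdlib toy, not the actual Nova/Cinder interface; method names are assumptions, and a real deployment would back it with Barbican or a KMIP server:

```python
import os
import uuid

class InMemoryKeyManager:
    """Toy key manager: generates and stores symmetric keys by ID.
    Sketches the key-generation/storage interface the volume encryption
    feature relies on (illustrative only; never store keys in plain memory
    in production)."""

    def __init__(self):
        self._store = {}  # key_id -> raw key bytes

    def create_key(self, bit_length: int = 256) -> str:
        """Generate a random symmetric key and return its opaque ID."""
        key_id = str(uuid.uuid4())
        self._store[key_id] = os.urandom(bit_length // 8)
        return key_id

    def get_key(self, key_id: str) -> bytes:
        """Retrieve the raw key bytes for a previously created key."""
        return self._store[key_id]

mgr = InMemoryKeyManager()
kid = mgr.create_key()
assert len(mgr.get_key(kid)) == 32  # 256-bit key
```

Because Nova only sees key IDs and the key manager behind the interface is swappable, the same encrypted-volume flow works whether keys live in Barbican, a KMIP server, or a test double like this one.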
Learn how to create a complete OpenStack deployment with UCS, Nexus and the Cisco OpenStack Installer. Cisco's OpenStack solution supports multiple distributions, including Red Hat OpenStack and Ubuntu. Also learn how Cloud Services Router (CSR) integration with OpenStack enables additional billable services for per-tenant routing and security. In this session you will learn how to increase the networking scalability of OpenStack with both hardware and software Nexus products, including Nexus 1000V on KVM. Orchestrating next-generation data center architectures, including Dynamic Fabric Automation, using OpenStack will be detailed. This session will also discuss how UCS Manager makes it easy to configure a bare-metal node for OpenStack pilot and production deployments with both blade and rack-mount servers. The Cisco OpenStack Installer will also be featured in this presentation, with instructions on how to install OpenStack with high availability (HA) on UCS with Nexus.
GreenXity is a next-generation ECO social platform adopting a cloud-in-cloud architecture. Powered by OpenStack and hosted at Hong Kong Cyberport, GreenXity embraces ECO-focused services ranging from a Green Transport Cloud and ECO Games Cloud to an ECO E-commerce Cloud and ECO Social Cloud. The platform is a vertical integration of all ECO-related solutions, from professional to lay uses. GreenXity also creates a unique one-stop shopping, social entertainment and learning experience for global users, enabled by our carbon footprint analytics and a 3D virtual-world simulation game engine. It makes ECO life easier, cheaper, more fun, and brings peace of mind.
Cloud computing has proved it can deliver lower IT costs, reduced infrastructure complexity, and enhanced flexibility. Hong Kong is an ideal place in the Asia-Pacific region for cloud computing development and adoption, with its robust telecommunications infrastructure, pro-business environment, effective protection of data privacy and information security, knowledgeable professionals and government support. Cyberport has been piloting early releases of the OpenStack cloud software at the Technology Centre to assist startup entrepreneurs with improved collaboration, lower IT maintenance costs and, ultimately, higher productivity. The speaker will cover a number of key considerations in building a cloud platform and share best practices and challenges through different use cases.
MyCloudLab, which is being developed at the Chinese University of Hong Kong, is an OpenStack cloud testbed with a tailor-made web-based management system for cloud administration. It aims at providing an open source platform on which students can easily experience a cloud computing environment and learn the essential techniques of administering a cloud computing system.
Try to guess the hot topic around devops in OpenStack. Yeah, you guessed right: how difficult it is to upgrade an OpenStack cloud from one version to another. At eNovance we have our own public cloud, and we would like to share our experience of how we did it. From Nova, Keystone, Swift and Cinder to Ceilometer, the eNovance team is going to show you how to do the upgrade the *right* way.
The rise of cloud computing not only inspires wide-ranging development of cloud-based services in Asia-Pacific, but also helps create an environment conducive to open source development. While the market for OpenStack has been picking up steam during the past 12 months, there are variations between Asian economies when it comes to the commercial use of OpenStack in a production environment. What is the current state of play in the APAC region? Who is investing? What are the key lessons learned from early adopters?
We will present an overview of Taskflow: a Python library for OpenStack that helps make task execution easy, consistent, and reliable. TaskFlow allows the creation of lightweight task objects and/or functions that are combined together into flows (aka: workflows). It includes components for running these flows in a manner that can be stopped, resumed, and safely reverted. Projects implemented using the Taskflow library enjoy added state resiliency, and fault tolerance. The library also simplifies crash recovery for resumption of tasks, flows, and jobs. With Taskflow, interrupted actions may be resumed or rolled back automatically when a manager process is resumed. TaskFlow is proposed to be the foundation library for Convection (Cloud based Workflow-as-a-Service capabilities for OpenStack). An update on the progress for TaskFlow (features and capabilities ready to be consumed by OpenStack projects) and a proposed path for possible integration into Oslo and development of Convection will be presented.  https://wiki.openstack.org/wiki/TaskFlow  https://wiki.openstack.org/wiki/Convection
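The execute/revert pattern that TaskFlow formalises can be sketched in plain Python. This is a conceptual illustration, not the TaskFlow API, and the task names are invented; TaskFlow adds state persistence, crash recovery, and resumption on top of this basic idea:

```python
class Task:
    """Minimal task with execute/revert hooks (conceptual, not TaskFlow's Task)."""
    def execute(self): ...
    def revert(self): ...

class CreatePort(Task):
    def execute(self): print("port created")
    def revert(self): print("port deleted")

class BootServer(Task):
    def __init__(self, fail=False): self.fail = fail
    def execute(self):
        if self.fail:
            raise RuntimeError("boot failed")
        print("server booted")
    def revert(self): print("server deleted")

def run_linear_flow(tasks):
    """Run tasks in order; on failure, revert completed tasks in reverse."""
    done = []
    try:
        for t in tasks:
            t.execute()
            done.append(t)
    except Exception:
        for t in reversed(done):
            t.revert()
        return "REVERTED"
    return "SUCCESS"

print(run_linear_flow([CreatePort(), BootServer(fail=True)]))
```

Because each task knows how to undo itself, the flow as a whole gains the safe-revert and resumability properties the abstract describes once the intermediate state is persisted.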
When launching an IaaS provider, time to market, reliability, scalability and cost are everyday concerns. Widely adopted by successful service providers, OpenStack seems to be the way to go, but is it comparable, in terms of features, with cloud provider packages from vendors such as Joyent, OnApp, Citrix or Parallels? While commercial cloud service provider packages may not offer the flexibility needed, or can kill any business case, the do-it-yourself approach certainly does not cut time to market or complexity. The same applies to hardware: while OCP-based servers or open storage could bring big savings, they are not for every organization. In this session, we will take a close look at the different options, and the pros and cons of integrated solutions versus custom integrations, all from an OpenStack perspective. We will talk about where the competitive advantage can come from when using a packaged solution, but also look at the differentiators and added value a custom integration can bring.
OpenStack Networking has come a long way since its initial inclusion in the Folsom release a year ago. With the Havana release, it now has capabilities beyond those of Nova networking and supports a wider range of network deployment options for overlay and provider networks, as well as new Layer 3 services including firewall, VPN, and load balancing. This session will feature a panel discussion on these alternatives, their trade-offs, and when each might be the appropriate deployment choice.
Red Hat Cloud Infrastructure lays a solid foundation for you to build and manage private cloud Infrastructure-as-a-Service (IaaS), using truly open, interoperable, easy-to-manage Red Hat products that won't lock you in. Red Hat Cloud Infrastructure is built on the trusted Red Hat Enterprise Linux platform and Red Hat Enterprise Virtualization. Using lower-cost open-virtualization technologies lets you increase your private-cloud environment's capacity and combine it with the proprietary virtualization capacity you already have.
IaaS enables enterprises, organizations, and SMEs to enjoy the benefits of cloud computing. The Automated Cloud manages your cloud as a unified service, with one platform, one point of contact, and one bill. It enables physical, virtual, private, and public clouds to be utilized and supported around the clock. Empowering the utilization of these different building blocks is the true essence of the Automated Cloud.
This session is a 201-level technical deep dive on OpenStack Neutron with VMware NSX network virtualization. NSX is a virtual networking platform powering many OpenStack production environments as the networking engine behind Neutron. In this session we'll explore the distributed systems architecture of the NSX Controller Cluster, the core functionality and behavior of NSX's primary system components, and the logical networking devices and security tools NSX produces for consumption. High-availability deployments and packet flows for common scenarios will be discussed. And finally, we'll take a look at how the physical network fabric can be architected for next-generation NSX deployments. Session topics include: system components review; scale-out control plane and HA control and management channels; NSX-enabled hypervisors; scale-out data plane; NSX security groups; NSX logical network devices; connecting to external networks; physical network design for network virtualization; and OpenStack Neutron integration and OpenStack distributions for VMware NSX.
DNS is something everyone who uses any Internet service relies on and often doesn't give much thought to. DNS is a service that uses a distributed database to map domain names and hosts to IP addresses so that resources can be accessed on the Internet (and on internal networks). Without DNS, maintaining lookup data would be a difficult endeavor. In cloud environments, including OpenStack, there is an obvious need for a DNS service. The project that intends to provide DNS for OpenStack is Designate, an OpenStack-inspired DNS-as-a-service project. It is intended to provide a DNS service from the entry point of creating, updating, maintaining, and deleting DNS data using the Designate API, through to providing a means to update any number of DNS server backends. Designate is a modular project, allowing a choice of DNS server or backend database for storage of DNS data. Designate provides a RESTful API with the functionality one would expect from any OpenStack project, which can be used by customers, whether through front-end interaction by users or by other projects that require DNS. Designate also has a service for listening to Nova and Neutron notifications, making it possible for those services to be the source of DNS modifications through Designate. Designate is an ideal project to use for developing DNS as a service for an organization; HP in particular has based its generally available DNSaaS product on Designate. This discussion will provide an overview of Designate as well as an in-depth discussion of its various components, such as: * Designate processes and configuration * How Designate allows multiple DNS server backends and creating new backends * How Designate allows backend database storage * The Designate API
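As a rough sketch of what driving the Designate API looks like, here is the URL and JSON body a client might build to create a domain. The endpoint and body shape are assumptions based on the v1 API of the time; check them against your deployment's documentation:

```python
import json

# Hypothetical endpoint; a real deployment would discover this via Keystone.
DESIGNATE = "http://dns.example.com:9001/v1"

def create_domain_request(name, email, ttl=3600):
    """Build the URL and JSON body for creating a DNS domain.
    The body shape follows the Designate v1 API (name must be a
    fully-qualified name ending in a dot); verify against your release."""
    body = {"name": name, "email": email, "ttl": ttl}
    return DESIGNATE + "/domains", json.dumps(body)

url, payload = create_domain_request("example.com.", "hostmaster@example.com")
# POST `payload` to `url` with an auth token to create the domain
```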
There are a few critical components in any OpenStack deployment that can be considered show-stoppers if they go wrong. Perhaps the most important of these is the messaging system. When Bluehost set out to run a large deployment of OpenStack, we initially evaluated RabbitMQ and Qpid. We chose Qpid and have had numerous problems with reliability, resiliency, and speed. I would like to talk about our transition from Qpid to ZeroMQ (0MQ) in our production environment, the lessons learned along the way, and why brokerless OpenStack is the way of the future.
This session will feature Daniel Pays, CTO at Cloudwatt, and Raphael Ferreira, CEO at eNovance, who will present an overview of their OpenStack strategy and deployment plans, followed by a Q&A session. Cloudwatt is currently one of the biggest OpenStack deployments in Europe. Cloudwatt offers French and European companies of all sizes, as well as the public sector, competitive cloud infrastructure (Infrastructure as a Service, IaaS). Providing the necessary guarantees of security, privacy, and resilience, Cloudwatt also provides special protection against the risks associated with the requirements of local legislation. With capital of 225 million euros financed by two major players, Orange and Thales, with the support of the French government, the company should create a Europe-wide ecosystem of nearly 1,000 jobs within 5 to 7 years. Its primary mission is to help boost the economic fabric, in particular that of SMEs, to enhance their competitiveness and to ensure their compliance with French and European regulations. About eNovance: eNovance is one of the European leaders in open source cloud computing. The company delivers services in two key areas: integrating and operating public and private open source clouds, and 24/7 managed services for the world's largest public clouds (Cloudwatt, AWS, Google Compute Engine, etc.). With its extensive investment in R&D, in less than three years eNovance has become the world's 7th-largest contributor to the OpenStack initiative, and the only European company on the Foundation board of directors alongside Rackspace, Red Hat, Cisco, and HP. With nearly 200 clients such as Cloudwatt, Warner, Chronopost, Safran, and Canal Plus, eNovance has become a partner of choice for rolling out critical applications in the cloud.
In this session we will look at how Juju and MAAS can help grow a traditional OpenStack deployment to a more complex landscape where multiple hypervisors co-exist. We will focus primarily on the integration between VMware vSphere and Nova and demonstrate how the orchestration model Juju provides helps make this dream a reality.
In the current implementation of Swift, the entire object is mostly stored as a single file on an object server. The idea behind striping an object is to allow parallel reads and writes of object stripes across multiple object servers: stripe the object, write the stripes in parallel across the available object servers, and read them back in parallel. Vectored I/O on objects can be added as extended functionality alongside object striping. Advantages: participation of the maximum number of available object servers in object reads and writes; increased Swift throughput, especially for large objects (objects larger than the stripe/chunk size, e.g. 1 MB, 2 MB, and so on); load sharing across multiple object servers; and support for update operations on objects via vectored I/O. Authors/Presenters: Bipin Kunal, Kashish Bhatia, Shriram Pore
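The proposed striping scheme can be illustrated with a small Python sketch (an illustration of the idea, not Swift code): split the object into fixed-size stripes, assign them round-robin to object servers, and reassemble by stripe index on read.

```python
def stripe(data: bytes, chunk: int, n_servers: int):
    """Split an object into fixed-size stripes and assign them round-robin
    to object servers; returns {server: [(stripe_index, stripe_bytes)]}."""
    placement = {s: [] for s in range(n_servers)}
    for i in range(0, len(data), chunk):
        idx = i // chunk
        placement[idx % n_servers].append((idx, data[i:i + chunk]))
    return placement

def reassemble(placement):
    """Gather stripes from all servers (in parallel in a real system)
    and reorder them by stripe index."""
    parts = sorted(p for stripes in placement.values() for p in stripes)
    return b"".join(s for _, s in parts)

obj = b"abcdefghij"
placed = stripe(obj, chunk=3, n_servers=2)
# server 0 holds stripes 0 and 2, server 1 holds stripes 1 and 3;
# a vectored write/update would target just the affected stripes
```

Because each server holds only every n-th stripe, an update to one byte range (vectored I/O) touches only the servers owning those stripes rather than rewriting the whole object.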
Presenters: Ting Zou, Director of R&D Cloud Data Center, Huawei Technology USA; Tanay Nagjee, Solution Architect, Electric Cloud USA. Engineering productivity and DevOps are growing areas of focus for the technology industry. Software companies of every size are adopting continuous delivery. With over 140,000 employees worldwide, Huawei's strategy for the adoption of a DevOps cloud R&D environment requires careful planning and coordination. We combined a suite of technologies and open source tools, including Jenkins, Subversion, Redmine, Review Board, Chef, and OpenStack, to provide a continuous delivery cloud solution targeted to deploy in stages across Huawei R&D data center facilities in multiple sites. In this implementation, ElectricCommander from Electric Cloud orchestrates the integration of the various tools and the interactions of each R&D process and activity into one suite, such as integrating the OpenStack API to dynamically deploy virtual machines, launch and monitor the build/test/deploy processes on them, and tear them down when finished. This dynamic deployment and teardown creates an elastic cloud that minimizes the resources required and maximizes productivity. The solution provides development, testing, and release engineers with integrated end-to-end automation, from bug fixes through local build/unit testing, bug status updates, code review requests, SCM merge/commit, continuous integration/build, and parallel function/regression test execution, all the way to production deployment. Learn how this integration works to orchestrate resource usage across the R&D environment in this expansive company.
The phenomenal growth of OpenStack will necessitate larger deployments, often in a federated fashion. Such deployments often require that we leverage existing IPv6 networks both for infrastructure and for tenant networks. Thus, there is an urgent need to study large federated deployments of OpenStack on pure IPv6 backbone networks. In this talk, we share our experiences with our IPv6 deployment of OpenStack across three academic sites on a large IPv6 backbone (CERNET2). In addition, we present an architecture and some solution concepts that enable such deployments. In particular, we present best practices for Neutron.
Get some lab time with Nova, the OpenStack compute project. In this hands-on lab, we will show you how easy it is to spin up and spin down compute instances, attach block storage and connect them to a network. Attendees will have the opportunity to get hands-on experience and access to experts who can help connect the dots.
The reach of OpenStack extends beyond compute, and Swift is more than just a backend for Glance! This session explores how Swift is used to support one of the largest enterprise SaaS-based ERP product companies at Concur, a leading provider of integrated travel and expense management solutions. In this session we will briefly analyze options in the market and cover the advantages of the Swift architecture to support SaaS applications, and show how Swift fits into the architecture at Concur.
Cloud computing is not a one-size-fits-all solution. Customers making the transition to cloud computing need to address a variety of requirements, including: (1) flexibility, scale, and control of open source software; (2) freedom to work with multiple hypervisors and operating systems to meet complex workload requirements; (3) ease of management and governance of multiple cloud environments (private, public, and managed private); (4) ease of integration with existing storage, networking, security, financial, and identity management systems; and (5) improved deployment and operational efficiency. Dell delivers comprehensive OpenStack-powered cloud solutions to address this broad range of requirements, enabling customer choice and allowing for broader adoption of OpenStack-based solutions. Hear how Dell is responding to customer requirements by enhancing OpenStack flexibility, scale, and control with field-proven reference architectures, networking designs, multi-hypervisor support (including Hyper-V), and partnerships with leading Linux vendors to support multiple operating systems and OpenStack distributions. Learn how customers can easily manage and govern their use of OpenStack in conjunction with other public and private clouds using the Dell Multi-Cloud Manager (from the Enstratius acquisition). Hear how best to gain cost benefits and manage cloud efficiently at scale. Dell offers managed private clouds for customers who want to outsource cloud operations and focus on core competencies. For customers who want to capture operational best practices and automate their deployments and operations at scale, Dell has developed the Crowbar tool to provide an architectural framework and open source software for automated deployment and management of open-source clouds from the bare metal up.
Finally, hear how Dell Cloud Transformation Services can combine the best of the broad offerings above to provide turnkey consulting and implementation services to build and/or manage an OpenStack-powered cloud that fits the specific requirements of each customer. It's all about customer choice, and Dell delivers it with the power of OpenStack.
Title: Security on OpenStack. Presenter: Brian Chong. Audience: security architects or CISOs looking to deploy OpenStack in an enterprise data center to run SaaS/PaaS/IaaS applications. Objective: this presentation will cover the different security concepts that need to be addressed in an overall OpenStack design. Due to the additional control plane elements introduced at the messaging tier as well as at the API level, there are now several new attack vectors that must be considered when designing your enterprise network and security strategy. The software-defined data center does not eliminate the need for the physical plane of control (switches, routers, hosts, and storage), but it adds a new control plane element, the data center controller, which never existed before. All of the old issues of device managers must still be designed for while adding this element, without losing the security we had before with separate device element managers. This means a new approach to network design and network segmentation must come into play when designing a security architecture for future data centers.
In this presentation, we discuss Kickstack (https://github.com/hastexo/kickstack), a pure-Puppet orchestration layer for the automatic deployment of OpenStack. Using a simple scheme of “node roles” expressed as classes in a Puppet module, you can use Kickstack to easily deploy and configure OpenStack using nothing but a Puppet manifest, an External Node Classifier (ENC), or both. This practical hands-on session was first presented at this year's OSCON 2013 (http://www.oscon.com/oscon2013/public/schedule/detail/29620); this is a condensed, updated version. Attendees wishing to follow along with the practical content presented in the workshop should bring their laptops with VirtualBox pre-installed.
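As a purely hypothetical illustration of the "node roles as Puppet classes" idea (the class names below are invented for this sketch; consult the Kickstack module on GitHub for the real role names and parameters), node classification in a manifest might look like:

```puppet
# Hypothetical role classes -- check the Kickstack module for actual names.
node 'controller.example.com' {
  include kickstack::role::controller
}
node /^compute\d+\.example\.com$/ {
  include kickstack::role::compute
}
```

The same role assignment could equally come from an ENC instead of a manifest, which is the other deployment path the session covers.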
The massive computing and storage resources needed to support big data applications make cloud environments an ideal fit. That said, virtualization overhead, security, and regulation have minimised the use of big data in the cloud. Can OpenStack change that? Can big data on OpenStack become a first-class citizen just like any other framework? How do new features such as bare metal support affect the use of big data on OpenStack? How will centralized cloud-based storage influence big data applications? Our panel speakers represent Mirantis (Ilya Elterman), HP (Bruce Mathews), Red Hat (Brent Holden), and HubSpot (Craig Tracey); Nati Shalom (GigaSpaces) will lead the discussion as moderator.
Moonshot servers are the world's first software-defined servers, tailored and optimized for specific workloads. Moonshot servers are tailored to address specific and unique business needs, and are designed for workloads like web hosting, cloud services, and multi-tier applications. The software-defined server architecture is driven by application workloads. The process of selecting a software-defined server for deploying a workload is to list all matching servers in the inventory that support the workload, then rank them by suitability for the deployment conditions/requirements; example requirements include application performance, clustering for high availability, or load balancing. The presentation will describe the journey of provisioning workloads on software-defined servers (Moonshot) using OpenStack, and will highlight the challenges encountered and the changes made to OpenStack along the way. It should also serve as a catalyst for ongoing discussions on the Ironic project.
The OpenStack application programming interface (API) is accessible via web services. However, the application developers who are building solutions on top of OpenStack do not want to talk to that API directly; they want to talk to OpenStack in the programming language of their choice. That means using software development kits (SDKs) written in a variety of programming languages. These SDKs allow developers to be more efficient and productive when using OpenStack. In this session you will learn the following about software development kits: why they are necessary, what they can do for developers, which ones are available, how to use them, who is developing them, and what comes next.
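Under the hood, every OpenStack SDK starts the same way: it exchanges credentials for a token. The sketch below builds such a request by hand using only the standard library, following the Keystone v2.0 body shape current at the time of this talk (endpoint and credentials are placeholders); an SDK hides exactly this plumbing behind a one-line client constructor:

```python
import json

def token_request(auth_url, username, password, tenant):
    """Build the URL and JSON body for a Keystone v2.0 token request --
    the first call any OpenStack SDK makes on your behalf."""
    body = {
        "auth": {
            "passwordCredentials": {"username": username, "password": password},
            "tenantName": tenant,
        }
    }
    return auth_url.rstrip("/") + "/tokens", json.dumps(body)

url, body = token_request("http://keystone.example.com:5000/v2.0",
                          "demo", "secret", "demo")
# an SDK would POST this, cache the returned token and service catalog,
# and then sign every subsequent Nova/Swift/Neutron call for you
```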
OpenStack's testing infrastructure uses Diablo- and trunk-based public clouds for running tests. Diablo had 469,000 lines of code while Havana already has over 1.3 million; if Diablo was mostly working, what have all those developers been working on? In addition to new user-facing features, thousands of bugs have been fixed and internals have been improved for performance and scalability, all while supporting the same API. This talk will compare Diablo and Havana to show how OpenStack has evolved and matured.
SUSE has recently released the second iteration of its enterprise-ready infrastructure-as-a-service product based on OpenStack: SUSE Cloud 2.0. Find out about the direction SUSE is taking in providing enterprise support for cloud software, as well as the additional tools & features SUSE is providing over & above the standard OpenStack distribution.
As OpenStack continues to grow in popularity among cloud providers and developers, it is also making inroads into enterprise IT organizations. As this occurs, the OpenStack community needs to understand what these organizations require to consume the open cloud technology. Many enterprises rely on Microsoft solutions and toolsets to manage their IT assets and service their end users, so finding ways to better integrate these Microsoft tools with OpenStack opens the doors to broader adoption of OpenStack as a cloud option for the enterprise. In this session, moderated by Microsoft's Peter Pouliot, hear about how key OpenStack community leaders are working to enable OpenStack with Hyper-V, Windows, and other Microsoft tools. Panel members will include: Joseph George (Dell), Das Kamhout (Intel), and Georgiy Okrokvertskhov (Mirantis).
DevOps is changing how cloud applications and infrastructure are deployed and managed; it aims to speed up the process of delivering new features and releases of our applications. OpenStack is itself a fast-growing infrastructure. In this panel we will discuss the specific challenges of delivering DevOps and continuous deployment models on top of an OpenStack infrastructure that is continuously evolving. We've assembled a panel of top industry experts to discuss how we can push updates to our applications without downtime and how we can update our OpenStack infrastructure as we do so, and we will learn about the tools and frameworks available in the OpenStack ecosystem that can help in the implementation of a successful DevOps project. Our panel speakers represent Rackspace (Tony Campbell), Cloudscaling (Azmir Mohamed), Scalr (Sebastian Stadil), Canonical (Mark Ramm-Christensen), and LivePerson (Toby Holzer). Nati Shalom (GigaSpaces) will lead the discussion as moderator. Come prepared with a list of questions.
In Portland, Rackspace invited the OpenStack community to peer inside their public cloud at the challenges experienced while deploying from OpenStack trunk during the Grizzly cycle. In this presentation, Rackspace will provide an update on the challenges, triumphs, and lessons learned operating a production OpenStack public cloud during the Havana cycle. We'll conclude by sharing our vision for OpenStack deployments in the "I" cycle and beyond. Discussion outline: recap of Grizzly challenges (scaling hurdle 1: deploy mechanism; scaling hurdle 2: throughput and trunk catch-up); summary of the Havana experience (successes and lessons learned); and a look ahead to "I" (architectural challenges and process challenges).
This talk is about provisioning and managing Hadoop clusters on OpenStack using the Savanna project. Savanna supports two key use cases: on-demand cluster provisioning and on-demand Hadoop task execution (Elastic Data Processing). This presentation will focus on the EDP functionality. We'll give an introduction to the Savanna project, review the features implemented in version 0.3, talk about further plans, cover key architectural aspects, and give a live demo. In the demo we'll show how a user can execute a Hadoop job in one click on data stored in Swift, using a pre-configured Hadoop cluster template.
I'll admit that I am a bit biased, but I think OpenStack Swift is one of the best-kept secrets of OpenStack. It has been the engine of Rackspace Cloud Files for more than three years, and in that time has grown phenomenally. For the first time, we will allow an intimate look at the scale at which we run OpenStack Swift at Rackspace: from early beginnings, through growing pains, to where we are now and beyond. Join me as we unlock the mysteries of, perhaps, the largest OpenStack Swift clusters in the world.
Getting from here to there can be difficult at times. In our case, we want to bring the benefits of OpenStack to the enterprise. This means working with platforms and technologies that are already in place within the enterprise. For VMware, this means ensuring that existing VMware customers can benefit from OpenStack and Cinder without making wholesale changes to their existing storage architecture. For vSphere, we do this through the datastore construct. Until now, the native vSphere datastore has not been available to Cinder Volume consumers directly. In this session, we will discuss the new VMware VMDK Cinder Driver and how it will allow existing VMware customers to simply add Cinder to their internal clouds. Please join us for an in-depth review of the driver, the code that we are submitting to the community, a demo of the prototype and our goals for making VMware vSphere work seamlessly in an OpenStack environment.
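For orientation, enabling a vCenter-backed Cinder backend is a matter of cinder.conf settings along these lines (driver path and option names as merged around the Havana cycle; hostnames and credentials are placeholders, and you should verify the option names against your release's configuration reference):

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip = vcenter.example.com
vmware_host_username = administrator
vmware_host_password = secret
```

With this in place, Cinder volumes are created as VMDKs on vSphere datastores, so existing storage architecture carries over unchanged.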
Intel is both a contributor to, and major user of OpenStack. In this session, we will discuss the contributions Intel has made across compute, networking, and storage, including trusted compute pools, scheduler enhancements and object stores. The session will conclude with the latest developments in Intel’s own deployment and use of OpenStack for their hybrid cloud.
RestComm is a telephone application development platform designed for web developers. It exposes an XML instruction set, also known as the RestComm Markup Language or RCML, as well as a set of RESTful APIs. The combination of RCML and the RESTful APIs provides the web developer community with a superset of the Twilio APIs in an open source package. RestComm levels the playing field by allowing enterprises, telecom carriers, and VoIP service providers to promote innovation through affordable and abundant access to a standard set of lightweight telecom APIs. Such innovation should lead to start-ups and new products that extend the offline communication mode of social media applications to real-time communication workflows for businesses and consumers. RestComm opens the gates to the public switched telephone networks and VoIP networks around the world, which are feature-rich and provide a lot of room for new ideas to flourish.
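Since RCML is described as a superset of the Twilio markup, an RCML document answering an incoming call might look like the following sketch (verb names assumed from the TwiML-compatible verb set; the phone number is a placeholder):

```xml
<Response>
  <Say>Thanks for calling. Connecting you now.</Say>
  <Dial>+15551234567</Dial>
</Response>
```

The application server returns a document like this in response to a call-event webhook, and the platform interprets each verb in order.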
The first rule of OpenStack scalability and performance is: don't talk about scalability and performance; benchmark it. Working with customers and partners, we've pushed OpenStack to its limits to see where it stumbles and where it breaks. In this session, we'll describe the details behind the process and talk about the tools and performance assessment strategies we used. We'll also share our findings on key scalability and performance bottlenecks and validation approaches, and suggest solutions.
As far as we know, a lot of stackers are still confused by Neutron and its concepts. Sometimes it is a headache for them to choose between nova-network and Neutron when they deploy OpenStack. At the Grizzly and Havana design summits, we held workshops to show stackers the basic behaviour of Neutron. In this session, we will dive into Neutron so that stackers can understand what is inside it. This includes: 1. the big picture of Neutron's components and its position in an OpenStack deployment; 2. a deep dive into the main Neutron API methods and their implementation under the Linux bridge, OVS, and Ryu plugins; 3. Neutron routers, floating IPs, and their iptables rules; 4. an introduction to Neutron Load-Balancing-as-a-Service, Firewall-as-a-Service, and VPN-as-a-Service. After this session, we hope users will be more confident deploying Neutron.
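To preview the floating-IP topic, the L3 agent implements a floating IP as a pair of NAT rules in the router's namespace, roughly along these lines (chain names and addresses are illustrative and vary by release; inspect your own router namespace with `ip netns exec ... iptables -t nat -S`):

```
# DNAT inbound traffic for the floating IP to the tenant fixed IP,
# and SNAT outbound traffic from that fixed IP back to the floating IP:
-A neutron-l3-agent-PREROUTING -d 203.0.113.10/32 -j DNAT --to-destination 10.0.0.5
-A neutron-l3-agent-float-snat -s 10.0.0.5/32 -j SNAT --to-source 203.0.113.10
```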
There is no lack of deployment software in the OpenStack market, and each solution provides great value from a different perspective and scenario. These deployment solutions do a great job at the software deployment layer, building on OS provisioning tools like Cobbler and configuration management tools like Puppet/Chef, but they generally do not go down to networking and server hardware level configuration, which is a key step for large-scale OpenStack solutions. As part of a networking company and server vendor, the Huawei Cloud team is developing Compass, a system not only for OpenStack software deployment but also for fully automated hardware-level server and networking gear configuration. Our system automates hardware resource discovery, hardware configuration (e.g., hardware RAID configuration, switch configuration), topology-aware OpenStack service deployment, etc. As a result, end users get a streamlined OpenStack deployment experience with Compass. We are using Compass to deploy OpenStack clouds at our telco customer sites and have received very constructive feedback that is helping us build a more robust and open deployment solution. We plan to open source the project. One major goal in designing the system is to achieve true openness at the hardware level, so that other hardware vendors can write plug-ins for their particular server designs. Just as OpenStack's openness makes it a valuable, game-changing cloud solution, we hope we can work with other hardware vendors to make OpenStack universally available on various hardware platforms in a streamlined fashion.
It's a multi-cloud world and your code needs to run somewhere. However, the cloud you choose today may not be the cloud you need tomorrow. Changes in reliability, performance, cost, and privacy may drive you to research alternative public clouds, a private cloud, or a hybrid of the two. Considering application portability upfront can be crucial in avoiding lock-in, and the tools you use to interact with the cloud will play a large part in how portable your application is between clouds. This panel will discuss the different approaches to application portability, e.g. API compatibility, multi-cloud SDKs, image portability, application architecture portability, etc. Is application portability a myth or a reality? What are the pros and cons? Bring your questions to be answered by our panel of experts.
IPv6 usage is on the increase and some studies report that IPv6 usage has doubled within the last year. OpenStack Networking's Neutron (formerly Quantum) has added many new IPv6 features during the Havana cycle. In this session, we'll examine the current state of tenant IPv6 within Neutron and discuss the new features available in the Havana release. The session will cover a real world deployment by looking at DreamHost's DreamCompute and how IPv6 is being delivered to its customers.
In this workshop, the Swift experts at SwiftStack will walk you through deployment and configuration of OpenStack Swift. We will guide you through the architecture of Swift as we go through a step-by-step installation from the ground up. Attendees will learn hands-on by doing rather than listening! Topics: Swift's architecture (the ring, zones, partitions, accounts and containers); how to bootstrap a basic Swift installation; the guts of how Swift works; and Swift's failure recovery mechanisms.
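The ring's core trick, mapping an object to a partition via the top bits of a hash, can be sketched in Python. This is a simplification for intuition only: the real ring also mixes a hash prefix/suffix into the path, spreads replicas across zones, and precomputes a balanced partition-to-device table rather than the toy striding used here.

```python
import hashlib

PART_POWER = 8  # 2**8 partitions; production clusters use a much larger power

def partition(account, container, obj):
    """Map an object path to a partition: the top PART_POWER bits of an
    MD5 digest of the path (simplified relative to Swift's hash_path)."""
    path = f"/{account}/{container}/{obj}".encode()
    digest = hashlib.md5(path).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - PART_POWER)

def devices_for(part, devices, replicas=3):
    """Toy replica placement: stride evenly through the device list.
    The real ring stores a precomputed replica-to-device mapping."""
    return [devices[(part + i * len(devices) // replicas) % len(devices)]
            for i in range(replicas)]

devices = [f"dev{i}" for i in range(6)]
part = partition("AUTH_test", "photos", "cat.jpg")
replica_devs = devices_for(part, devices)
```

Because the partition count is fixed, adding a device only reassigns partitions rather than rehashing every object, which is why the ring scales so gracefully.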
Put all the bricks together for production-ready load balancing: multi-vendor support in Neutron LBaaS, elasticity via Heat templates, and service monitoring via Ceilometer. The session will cover new features introduced in the Havana release cycle as well as briefly cover how to write vendor drivers for the service. The talk will also cover the roadmap planned for Icehouse. Key highlights: data model changes, LBaaS API extensibility, hardware appliance support, and integration of LBaaS with Heat and Ceilometer.
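A Heat template wiring these pieces together might look roughly like this sketch (resource and property names based on the Havana-era OS::Neutron LBaaS resources; parameter names are placeholders, and the properties should be verified against your release's resource reference):

```yaml
resources:
  monitor:
    type: OS::Neutron::HealthMonitor
    properties:
      type: HTTP
      delay: 5
      timeout: 5
      max_retries: 3
  pool:
    type: OS::Neutron::Pool
    properties:
      protocol: HTTP
      subnet_id: { get_param: subnet_id }
      lb_method: ROUND_ROBIN
      monitors: [ { get_resource: monitor } ]
      vip: { protocol_port: 80 }
  lb:
    type: OS::Neutron::LoadBalancer
    properties:
      pool_id: { get_resource: pool }
      protocol_port: 80
      members: { get_param: server_ids }
```

Pairing a template like this with a Ceilometer-driven scaling policy is what turns the pool from a static brick into the elastic service the session describes.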
There are many different blueprints describing how high availability can be achieved underneath an OpenStack cloud. At PayPal, we have chosen to utilize some common OpenStack best practices, as well as common data center best practices, to bring high availability to the management/control infrastructure within our cloud. Topics include: the design of our OpenStack control infrastructure; the pros and cons of keeping management and infrastructure racks separate from compute racks; high availability requirements by component; the pros and cons of high availability choices external to and within the cloud; and the trade-offs that need to be made now to ensure availability.
Come learn how iWeb leveraged OpenStack Cinder to build their cloud hosting business with guaranteed Quality of Service (QoS). In this session John Griffith, the OpenStack Block Storage PTL, will discuss the goals iWeb had for their cloud hosting business, how leveraging the latest OpenStack Cinder features delivered predictable performance at scale, and why iWeb chose the OpenStack and SolidFire combination over a number of other cloud platforms and storage solutions. iWeb Cloud Architect Boris Deschenes will also discuss how iWeb embraced OpenStack for additional OSS activities like bare metal provisioning, and how they are contributing their ideas and use cases back into the OpenStack code base.
This is going to be a full hands-on workshop covering all the best practices, tips, and tricks for deploying Windows in OpenStack, from Windows Server 2003 up to the forthcoming 2012 R2. On the hypervisor side, we'll provide extensive KVM and Hyper-V examples, but Xen and XenServer will be covered as well! The workshop will be heavily scenario-driven, with demonstrations of how to automate instances with: Windows images for KVM with VirtIO drivers and Cloudbase-Init; Active Directory and domain membership; SMB file servers; Remote Desktop servers; IIS web servers, including load balancing and reverse proxying; SQL Server 2008 / 2008 R2 / 2012; Exchange Server 2010 / 2013; SharePoint 2010 / 2013; VDI, including accelerated GPU; automatic per-tenant Windows updates with WSUS; how to secure networking, including public and private tenant networks; and running custom PowerShell user_data scripts. We'll have answers for all the common questions: How can I set the administrator's password? What about VirtIO drivers on KVM? Which hypervisor should I choose for Windows: KVM, Hyper-V, XenServer, ESXi, etc.? How can I tell Nova to use KVM for Linux images and Hyper-V for Windows? How does Windows licensing work in OpenStack? How much RAM, CPU, and disk do I need for a given Windows image? What about Windows 7/8/8.1 clients? How can I do all the above with Chef / Puppet / SaltStack? From Cloudbase Solutions, the makers of Cloudbase-Init and maintainers of the Hyper-V Nova driver and Neutron plugin! :-)
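As a taste of the user_data topic, a Cloudbase-Init PowerShell script is marked with a `#ps1_sysnative` header and runs inside the guest at first boot. The snippet below is an illustrative fragment (not one of the workshop's actual examples); `Install-WindowsFeature` assumes a Server 2012-era image:

```powershell
#ps1_sysnative
# Illustrative first-boot user_data: install IIS and create an app directory.
Install-WindowsFeature Web-Server
New-Item -ItemType Directory -Path C:\inetpub\wwwroot\app -Force
```

You pass such a script to `nova boot --user-data`, and Cloudbase-Init picks it up from the metadata service or config drive.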
This talk will provide a brief tutorial on the steps to take to develop a Neutron plugin from scratch, after discussing whether a new plugin is really required or whether the best strategy would be to reuse and/or extend one of the existing Neutron plugins. The target audience for this talk is engineers working for companies providing network products/services who might be interested in interfacing them with Neutron, as well as engineers deploying OpenStack with Neutron who are interested in looking at how they can extend plugin capabilities to implement their requirements. The agenda of the talk can be quickly summarized as follows: * What is a Neutron plugin, and how Neutron API calls are dispatched to it * Do I need to write a new plugin? * What are my options for adding capabilities to existing plugins? * So I need to write my own plugin... * Where to start from * Implementing the Neutron API - the 'sendmail' plugin
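The core idea of dispatching API calls to a plugin can be sketched as follows: the Neutron server routes each REST call to a CRUD method on the configured plugin class. This stand-alone toy (not the real Neutron base classes, which live in `neutron.db`) keeps networks in a dict purely to show the shape of the interface.

```python
import uuid

class MinimalPlugin:
    """Toy stand-in for a Neutron core plugin: the server calls these
    CRUD methods when API requests arrive. Real plugins inherit DB
    mixins and talk to agents or controllers instead of a dict."""

    supported_extension_aliases = []  # API extensions this plugin offers

    def __init__(self):
        self._networks = {}

    def create_network(self, context, network):
        net = dict(network["network"], id=str(uuid.uuid4()))
        self._networks[net["id"]] = net
        return net

    def get_network(self, context, net_id, fields=None):
        return self._networks[net_id]

    def delete_network(self, context, net_id):
        del self._networks[net_id]

plugin = MinimalPlugin()
net = plugin.create_network(None, {"network": {"name": "demo"}})
print(net["name"])
```

The talk's 'sendmail' plugin is the same skeleton with the persistence step replaced by a notification.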
Want to learn more about using Chef to deploy OpenStack and manage infrastructure on top of it, but not sure where to start? This in-depth, hands-on deployment session will cover the Chef and OpenStack ecosystem and how to get started with the Chef cookbooks in the StackForge repositories. We'll cover the current Grizzly OpenStack resources and the related cookbooks and content in the Chef community. Topics covered will include: * Deployment configuration and techniques * StackForge repository code walkthrough * Cookbook development and testing * Deploying and managing infrastructure on OpenStack with the knife-openstack plugin * Documentation The session is intended for folks already familiar with Chef and interested in deploying OpenStack. This is intended to be a very interactive session with many questions and guided code and deployment walkthroughs. Attendees are expected to provide their own laptops capable of running a single-node OpenStack virtual machine.
The primary requirements for OpenStack based clouds (public, private or hybrid) are that they must be massively scalable, highly available and so on. MySQL is the basis for many of these OpenStack deployments, since critical components like Keystone and Nova depend on it. Attend this session for a technical overview of how to make MySQL highly available using a) MySQL/Galera, b) DRBD/Pacemaker or c) MySQL Cluster, combined with HAProxy, keepalived and VRRP. The presentation compares and contrasts the approaches and discusses some best practices and recommendations. After attending this session, attendees will have a good perspective on making MySQL highly available for their OpenStack implementations.
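To make the HAProxy piece concrete, here is a sketch that renders the HAProxy stanza load-balancing a three-node Galera cluster. The addresses and the check port are illustrative; production setups commonly pair this with an HTTP health check (such as a clustercheck script on port 9200) so HAProxy only sends traffic to synced nodes.

```python
def haproxy_mysql_backend(nodes, check_port=9200):
    """Render an HAProxy 'listen' block for a Galera cluster.

    Addresses and ports are example values, not a recommended layout.
    """
    out = [
        "listen galera-cluster",
        "    bind 0.0.0.0:3306",
        "    mode tcp",
        "    option httpchk",
    ]
    for i, addr in enumerate(nodes, 1):
        out.append(f"    server galera{i} {addr}:3306 check port {check_port}")
    return "\n".join(out)

print(haproxy_mysql_backend(["10.0.0.11", "10.0.0.12", "10.0.0.13"]))
```

Keystone and Nova would then point their `sql_connection` at the HAProxy (or keepalived VIP) address rather than at any single node.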
LDAP integration in OpenStack can go way past just basic integration with Keystone. This presentation will discuss LDAP integration for private clouds for global users and groups, per-tenant local users and groups, per-tenant sudo, per-tenant autofs, and alternative methods for storing and using SSH keys. I'll also cover how to extend multi-tenancy information into other applications like Gerrit, Puppet, and SaltStack using LDAP. The majority of these examples use a common pattern which can be extended to most applications; I'll discuss the pattern and how you can use it in your own non-OpenStack applications.
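One plausible reading of that common pattern is deriving a per-tenant LDAP search base from a shared suffix, so each application resolves users and groups under its tenant's own subtree. The DN layout below is a hypothetical example, not the scheme from the talk.

```python
def tenant_base_dn(tenant, suffix="dc=example,dc=com"):
    """Per-tenant users subtree under a shared suffix (illustrative)."""
    return f"ou=users,ou={tenant},ou=tenants,{suffix}"

def user_dn(uid, tenant):
    """Full DN for a user inside one tenant's subtree."""
    return f"uid={uid},{tenant_base_dn(tenant)}"

print(user_dn("alice", "acme"))
# Applications (Gerrit, Puppet, SaltStack, ...) would then point their
# LDAP search base at tenant_base_dn(<tenant>) instead of a global base.
```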
Speakers: Sean Roberts, Colin McNamara, Kyle Mestery, Shannon McFarland, Nati Shalom Moderator: Tom Fifield
For more than a year, Ceph has become increasingly popular and has seen several deployments inside and outside OpenStack. The community and Ceph itself have greatly matured. Ceph is a fully open source distributed object store, network block device, and file system designed for reliability, performance, and scalability from terabytes to exabytes. Ceph utilizes a novel placement algorithm (CRUSH), active storage nodes, and peer-to-peer gossip protocols to avoid the scalability and reliability problems associated with centralized controllers and lookup tables. Since Grizzly, the Ceph integration has gained some good additions: Havana definitely brought tons of awesome features. It also made the integration easier and removed all the tiny hacks. All these things will certainly encourage people to use Ceph in OpenStack. Ceph is excellent for backing OpenStack platforms, no matter how big and complex the platform. The main goal of the talk is to convince those of you who aren't already using Ceph as a storage backend for OpenStack to do so. I consider the Ceph technology to be the de facto storage backend for OpenStack for a lot of good reasons that I'll expose during the talk. In this session, Sebastien Han from eNovance will go through several subjects, depending on the time available: quick Ceph overview (for those of you who are not familiar with it); quick state of the integration with OpenStack (general state and Havana's best additions); quick Cinder drivers overview and comparisons; building a Ceph cluster - general considerations; use cases and design examples (hardware, hardware, hardware); achieving HA with Ceph; operations: backups, monitoring, upgrades; tips and best practices.
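The "computed, not looked up" idea behind CRUSH can be illustrated with a toy straw-style selection: every OSD draws a hash value for the object and the highest draws win, so any client can compute placement independently with no central table. This is a deliberate simplification for intuition, not the real CRUSH algorithm.

```python
import hashlib

def place(obj_name, osds, replicas=3):
    """Toy CRUSH-like placement: deterministic hash 'straws' per OSD.

    Same inputs always yield the same OSD set, which is the property
    that lets Ceph avoid centralized lookup tables.
    """
    def straw(osd):
        digest = hashlib.md5(f"{obj_name}:{osd}".encode()).hexdigest()
        return int(digest, 16)
    return sorted(osds, key=straw, reverse=True)[:replicas]

osds = [f"osd.{i}" for i in range(8)]
mapping = place("rbd_data.1234", osds)
print(mapping)
```

Note that in this toy model, as in CRUSH, adding an OSD only moves the objects whose top draws change, rather than reshuffling everything.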
VPNaaS was a complex blueprint, and back in April 2013 three or more blueprints were submitted on the same topic. It was difficult to come to a consensus since each submitter had different views and priorities. Despite these difficulties, VPNaaS was agreed by all parties and scheduled for Havana a few weeks later. Upstream University is a training program designed to help Free Software contributors get their features and patches accepted more quickly. It played a small but essential role, behind the scenes, in the acceptance of the VPNaaS blueprint. The story will be told, with testimonials from the people involved and a few anecdotes.
While OpenStack projects provide a variety of ways to be notified of or extract telemetry-based metrics and usage information, today there exists no singular way of capturing both performance metrics and congruently logged information that correspond along the same time scale. In many cases it’s extremely valuable to have performance metrics and congruent log patterns collected, analyzed, and paired together in real time. This talk will explore the use of Riemann (http://riemann.io), a distributed systems monitor capable of handling tens of thousands of events per second, per core. Its abilities extend beyond simple metrics into service, state, tagging, and descriptions (perhaps log messages), and it can also scale beyond a single node; it is possible to construct a topology of Riemann servers that filter and pass events to each other for processing. A separate web-based dashboard project has thus far been capable of receiving thousands of updates per second via websockets, and is a good demonstration of Riemann's capabilities. Discussion and demo will illustrate how we can publish system-level metrics and Ceilometer-collected data to Riemann, while also collecting OpenStack log output -- all while acting upon the following: specific log patterns; service states; inactivity detection; activity rates for types of events; and metric rates, values, and percentiles. This data can be normalized, analyzed, graphed, used to trigger automated first-response/front-line actions, sent to other systems, retrieved through a simple API to be presented within other systems, presented in other dashboards, and so forth. No previous experience with Riemann or Ceilometer is necessary for this session.
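A Riemann event is essentially a small record carrying service, state, metric, tags and a description; the sketch below shows that shape plus a tiny in-process analogue of a stream that tracks per-service activity counts. Field names mirror Riemann's event schema as I understand it; a real deployment would send these over TCP with a Riemann client library rather than keep them locally.

```python
from collections import defaultdict

def make_event(service, state="ok", metric=None, tags=(), description=""):
    """Dict with the usual Riemann event fields (illustrative subset)."""
    return {"service": service, "state": state, "metric": metric,
            "tags": list(tags), "description": description}

class RateStream:
    """Toy analogue of a Riemann stream that counts events per service,
    the raw input for 'activity rates for types of events'."""
    def __init__(self):
        self.counts = defaultdict(int)

    def push(self, event):
        self.counts[event["service"]] += 1

stream = RateStream()
for _ in range(5):
    stream.push(make_event("nova-api.request", metric=1.0, tags=["http"]))
print(stream.counts["nova-api.request"])
```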
This presentation introduces the Havana release's new Modular Layer 2 (ML2) plugin for OpenStack Neutron. The ML2 plugin is a community-driven framework allowing OpenStack Neutron to simultaneously utilize the variety of layer 2 networking technologies found in complex, real-world data centers. ML2 currently works with the Open vSwitch, Linux Bridge, and Hyper-V L2 agents, and is intended to replace and deprecate those agents' monolithic plugins. The ML2 plugin also works with SDN controllers and network hardware devices, and is designed to greatly simplify adding support for new L2 networking technologies into OpenStack Neutron. In this session, Cisco and Red Hat representatives will: Introduce the Modular Layer 2 (ML2) plugin for OpenStack Neutron Provide an overview of ML2, discussing its design principles and detailing use case examples Describe ML2's architecture and its driver APIs Demonstrate an OpenStack deployment with ML2 utilizing multiple segmentation methods and multiple L2 networking mechanisms to show the power of the ML2 plugin Attendees will leave this session with an understanding of ML2, the use cases it was designed to solve, how to deploy ML2 in an OpenStack Havana environment, and how existing Neutron deployments can migrate to ML2.
EIG/Bluehost has been successfully managing one of the largest OpenStack environments, with more than 17,000 compute nodes on which over 20,000 instances have been running for a year now. We were happy to share some of our experiences and findings at the Portland summit and are grateful to see that many of our concerns have been aggressively addressed in the Havana release. However, we think that there is still a significant lack of SDN functionality which is open source, free, and does not require specialized networking equipment. More specifically, one of the default Neutron plugins, Open vSwitch, has not changed much since the Folsom release. In this talk, we would like to share how we have been developing our own SDN plugin for our production environment at Bluehost. We hope to share our experiences and designs with the community so that we can facilitate the discussion towards truly open and commoditized SDN for the masses.
Nova vSphere support was added in Grizzly and enhanced in Havana. Likewise, Havana includes new support for a Cinder driver that uses vSphere datastores. Come to this hands-on workshop to learn more about how VMware vSphere works with OpenStack! In this session, each small group of 2-3 people will get access to a remote lab environment that consists of: - An OpenStack “controller” node. - A Windows host running vCenter. - Several ESX hypervisors. - A host providing shared storage. The session will walk you through the key steps in configuring the system for use with vCenter, provisioning servers + volumes using standard OpenStack interfaces, and viewing the resulting changes via vCenter to understand how the Nova + Cinder drivers for vSphere consume capacity from the underlying vCenter-managed infrastructure. We will also highlight some troubleshooting capabilities enabled by the use of an OpenStack-aware plugin for vCenter. An Internet-connected laptop with a standard browser is required for this session.
OpenStack in three short years has become one of the most successful, most talked about and most community-driven open source projects in history. In this joint presentation Randy Bias (Cloudscaling) and Scott Sanchez (Rackspace) will examine the progress from Grizzly to Havana and delve into new areas like refstack, TripleO, baremetal/Ironic, the move from "projects" to "programs", and AWS compatibility. They will show updated statistics on project momentum and a deep dive on OpenStack Orchestration (Heat), which has the opportunity to change the game for OpenStack in the greater private cloud game. The duo will also highlight the challenges ahead of the project and what should be done to avoid failure. Joint presenters: Scott Sanchez, Randy Bias
MySQL, MariaDB and Percona Server are the de facto standard for OpenStack in terms of internal database and DBaaS. Today, MySQL is offered as a single instance, typically unintegrated with the IaaS layer, or in the best case it is provided with standard replication, but without full control over availability and failover. Last but not least, MySQL security is not combined with the security mechanisms in the cloud. This presentation is an introduction to MariaDB for OpenStack, with an extended set of APIs that automate the provisioning, deployment and configuration of an all-active set of servers with Galera synchronous replication. We will cover these topics: • MySQL, MariaDB and OpenStack - what is the current status • Deploying a MariaDB cluster in OpenStack: Glance & Nova integration, Red Dwarf compatibility and issues • Automatic provisioning using Juju, Puppet and Chef • DBA daily operations and integration with object and block storage • High availability components: replication and automatic failover, HA options • Security of the cluster: Keystone integration, database secure connections and tunnelling
Earlier this year, Dave Neary presented the theory of personas to attendees of the OpenStack Summit in Portland. Attendees were excited about creating a set of personas for the OpenStack project, as they allow you to have a much clearer idea of your target audience, what their needs are, and how you can reach them. They also allow much easier communication around feature discussions, user interface design and marketing strategy. Based on data from the user committee survey and user interviews, a personas working group is being created to answer the question: “Who uses OpenStack?” In this session Dave will return to present an initial set of OpenStack personas, discuss how they were created, and detail what conclusions we can draw from them.
Users will get access to a live OpenStack + Neutron setup and be able to walk through key Neutron deployment use cases, with members of the Neutron core development team available to provide guidance and answer questions. At the past two OpenStack conferences we presented a similar Neutron hands-on lab led by several members of the Neutron core team, and it was standing room only. We'd like to run another session this time, incorporating lessons learned from the previous session and also including new Neutron capabilities introduced in the Havana release. Demonstrated features will include: private L2 networks using tunnels rather than VLANs, including support for overlapping IPs; L3 + NAT via Neutron logical routers; Firewall as a Service; VPN as a Service; Load Balancer as a Service; and more!
Sheepdog is a purely userspace distributed storage system for QEMU. It is essentially an object storage system that manages disks and aggregates the space and performance of disks linearly at hyper scale on commodity hardware in a smart way. On top of its object store, Sheepdog provides an elastic volume service (support for Glance and Cinder has been merged) and an HTTP service (in development, planned to be Swift API compatible). Sheepdog doesn't assume anything about kernel version and can work nicely with any xattr-supporting file system. In this presentation, I'll concentrate on the technical aspects of Sheepdog: 1. How Sheepdog works internally with regard to thin-provisioned volumes, snapshots, clones and node management. 2. What Sheepdog can provide for OpenStack. 3. Some performance numbers. 4. Demo of a live Sheepdog cluster.
Come to this session to get an update on Marconi, an OpenStack queuing and notification service described at http://wiki.openstack.org/marconi Marconi aims to be pragmatic, building upon the real-world experiences of teams who have solid track records running and supporting web-scale message queuing systems. Users can customize Marconi to achieve a wide range of performance, durability, availability, and efficiency goals. As a message bus, Marconi allows cloud developers to use a REST API to easily distribute tasks to multiple workers across the components of an OpenStack deployment. Publish-subscribe semantics are also supported, allowing notifications to be distributed to multiple listeners at once. Join Rackspace's Kurt Griffiths, Principal Architect, and Allan Metts, Engineering Director, to learn about the work that has been done and the path ahead -- including a description of the project, real-world performance metrics, and a live demo.
Speakers: Tristan Goode, Trung The Nguyen, Sajid Akhtar, Moderator: Loic Dachary
Ceilometer is a tool that collects usage and performance data, while Heat orchestrates complex deployments on top of OpenStack. Heat aims to autoscale its deployments, scaling up when they're running hot and scaling back when idle. Ceilometer can access decisive data and trigger the appropriate actions in Heat. The result of these two OpenStack projects meeting is value creation in the form of an alarming API in Ceilometer and its consumption in Heat. In this session, we will detail how the two projects work together to deliver autoscaling, providing both background information and a technical deep dive.
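The alarm evaluation at the heart of that loop can be sketched simply: an alarm watches an aggregate over recent sample periods and fires once it stays past a threshold for enough consecutive periods, at which point Heat's scaling policy adjusts the group. The numbers below are illustrative, not Ceilometer defaults.

```python
def evaluate_alarm(period_averages, threshold, evaluation_periods=2):
    """Return 'alarm' if the last N period averages all exceed threshold.

    Mirrors the shape of a threshold alarm: requiring several
    consecutive breaching periods avoids scaling on a single spike.
    """
    recent = period_averages[-evaluation_periods:]
    if len(recent) < evaluation_periods:
        return "insufficient data"
    return "alarm" if all(avg > threshold for avg in recent) else "ok"

cpu_util = [42.0, 55.0, 81.0, 87.5]   # average CPU % per period
state = evaluate_alarm(cpu_util, threshold=80.0)
print(state)
```

When the alarm transitions to the breaching state, Ceilometer notifies a webhook URL exposed by Heat's scaling policy, which then grows or shrinks the group.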
Often ignored or hidden away in risk registers, the consequences of hypervisor breakouts are incredibly high. In this presentation I describe potential exploitation vectors in common virtualization stacks before diving into hands-on, practical guidance for securing your hypervisor and addressing breakout vulnerabilities when they occur.
Just getting a cloud environment up and running is no longer enough. The challenge that OpenStack faces is how to get people, applications and services working on OpenStack out of the box and to ensure that the “unboxing” experience is as seamless and painless as possible. Organizations' expectations for deploying cloud now include being able to rapidly make services and applications available as soon as they have IaaS deployed. To meet the expectations of most organizations, adding a PaaS layer has become an essential part of every cloud deployment strategy. The OpenShift Origin PaaS project is backed by the fastest growing open-source community of developers, cloud architects, devops, and end users intent on creating the next generation of PaaS and ensuring that the tools for deploying, managing & scaling it for OpenStack are freely available. To do this on OpenStack, the OpenShift community has “adopted” Heat, OpenStack's orchestration engine, and delivered a set of Heat templates for deploying, managing and auto-scaling OpenShift on any OpenStack distribution. This talk provides an overview of OpenShift, Red Hat's Platform as a Service, a deep dive into deploying OpenShift using Heat templates, and a live demonstration of Heat technologies to deploy AND autoscale OpenShift using repeatable orchestration templates. We will demonstrate the power of Heat and how we leverage it to orchestrate cloud infrastructure resources such as storage, networking, and instances to deploy OpenShift into a repeatable running environment for OpenStack IaaS platforms. OpenStack Summit attendees can learn about both the OpenShift Origin project and the emerging Heat template technologies and their impact on Linux and open source cloud communities. The speakers are both experienced with live demonstrations, and make the technical difficulty of this topic easily approachable through real-life examples.
Speakers: Diane Mueller, Krishna Raman & Chris Alfonso
Speakers: Daniel Izquierdo, Alex Freedland, Dan Stangel, Qingye Jiang Moderator: Stefano Maffulli
Deployment of large enterprise applications is a complex problem. Such applications consist of numerous software components spread across multiple VMs with a variety of dependencies between them, and require a large number of configuration parameters to be specified. Pattern technologies such as OpenStack Heat help automate the deployment of distributed applications. However, Heat in its current form only has limited support for software orchestration to address the aforementioned issues. Furthermore, especially for application components, initial deployment is just the beginning, and more orchestration is necessary throughout the complete lifecycle of an application to consider aspects such as high availability, storage and network configurations. In this session we will discuss automated deployment and maintenance of enterprise applications using OpenStack Heat. Our talk will focus on the following issues and how they can be addressed using base Heat capabilities together with extensions to the Heat engine. 1. Enterprise application deployment: 1.1. complete declarative specification of the application model; 1.2. cross-component dependencies; and 1.3. software stack configuration and coordination. 2. Enterprise application maintenance: 2.1. virtual machine high availability; 2.2. storage; and 2.3. network.
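The cross-component dependency problem above reduces to ordering: given "component depends on prerequisites", a topological sort yields an order in which software can be configured, which is what Heat derives from a declarative template. The application graph here is purely illustrative.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each key maps a component to the set of components it depends on,
# analogous to resource dependencies declared in a Heat template.
deps = {
    "app_server": {"database", "message_queue"},
    "load_balancer": {"app_server"},
    "database": set(),
    "message_queue": set(),
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # prerequisites always come before their dependents
```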
Networking with OpenStack Neutron requires a different mindset around IP networking than conventional physical topologies. A key element of this shift is the use of Linux network namespaces, introduced in Folsom. Understanding Linux namespaces is critical to troubleshooting OpenStack Neutron networking and to understanding Neutron network topology. What's more, without a thorough understanding of how namespaces organize and abstract L3 routers and DHCP servers across networks and subnets, network-induced downtime can be difficult to resolve. Namespaces enable multiple instances of a routing table to co-exist within the same Linux box (like virtual routing and forwarding (VRF) in routers), per tenant. This introduces a whole realm of networking flexibility, which can be critical in production OpenStack deployments -- but can also contradict the logic applied by experienced IP network admins and lead troubleshooting off a cliff. This technical deep dive into OpenStack Neutron namespaces and iptables will give attendees a clear understanding of these building blocks of OpenStack L3 and DHCP agents. We'll show how to go about troubleshooting L3 issues, and how to apply this more robust networking abstraction in distributed OpenStack environments.
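The per-tenant routing-table point can be modeled in a few lines: two tenants can use the identical subnet because each tenant router lives in its own namespace with its own routing table, represented here as independent dicts. On a real network node the equivalent inspection command would be along the lines of `ip netns exec qrouter-<uuid> ip route`; the names below are placeholders.

```python
class NetNamespace:
    """Toy model of a Linux network namespace holding its own routes,
    like the qrouter-* namespaces created by the Neutron L3 agent."""

    def __init__(self, name):
        self.name = name
        self.routes = {}          # prefix -> next hop / device

    def add_route(self, prefix, via):
        self.routes[prefix] = via

tenant_a = NetNamespace("qrouter-tenant-a")
tenant_b = NetNamespace("qrouter-tenant-b")
tenant_a.add_route("10.0.0.0/24", "qr-aaaa")
tenant_b.add_route("10.0.0.0/24", "qr-bbbb")  # same prefix, no clash

print(tenant_a.routes["10.0.0.0/24"], tenant_b.routes["10.0.0.0/24"])
```

This isolation is exactly why a plain `ip route` on the host shows nothing useful for tenant traffic: you must enter the right namespace first.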
Over the last six months Rackspace has made major improvements to the StackTach monitoring application. Much of this has focused on reconciliation of notifications with backend billing systems and extensive reports for error detection. If you are a service provider you'll be interested in learning more about StackTach. This session will provide an overview and demo of the StackTach application and these recent improvements. Also, we'll discuss our strategy and progress on bringing this functionality into Ceilometer. https://github.com/rackerlabs/stacktach
Cloud Management used to be about launching servers and monitoring them, potentially autoscaling them. This was all good, but it is not nearly enough! We need to make applications Cloud Aware! The idea is that your applications need to be ready to be deployed in radically different contexts, and they need to understand the context they're running in. In this talk for application infrastructure developers (aka devops), we explain what is required to adapt traditional (or hastily built) cloud application infrastructure into Cloud Aware applications, so you fully leverage the power of OpenStack.
With the rise of cloud computing and big data, more and more companies are starting to build their own cloud storage systems. Swift and Ceph are two popular open source software stacks, which are widely deployed in today's OpenStack based cloud environments to implement object and virtual block services. In this presentation, we do a deep study of both Swift and Ceph performance on commodity x86 platforms, including the testing environment, methodology and thorough analysis. A tuning guide and optimization BKMs (best known methods) will also be shared for reference.
Many OpenStack deployments use VLANs to separate traffic into different virtual networks. Existing solutions require either that all VLANs be trunked to every host, or the use of vendor-supplied plugins paired with specific (proprietary) hardware. Cumulus Networks and Metacloud will be presenting a plugin (targeted for inclusion in Icehouse) that allows Neutron to extend the configuration of VLANs to Linux based switches such that individual VLANs are only trunked to ports and hosts that need them. The use of Linux based switches allows for rapid prototyping of advanced configurations in VMs or on real hardware. We will be demonstrating this plugin controlling bare-metal hardware-accelerated Linux switches running Cumulus Linux for production-ready deployments. Additionally, we will discuss an L3 mode that has similar functionality but uses entirely L3 concepts. The discussion will include solutions to issues around connectivity, security, mobility, and public IP access.
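The trunk-pruning computation such a plugin performs can be sketched as: trunk a VLAN to a switch port only if the host behind that port actually has instances on the corresponding network. The topology below is an invented example, not from the talk.

```python
def prune_trunks(port_to_host, host_vlans):
    """Map each switch port to just the VLANs its attached host needs,
    instead of trunking every VLAN everywhere."""
    return {port: sorted(host_vlans.get(host, set()))
            for port, host in port_to_host.items()}

# Hypothetical topology: which host hangs off each switch port, and
# which VLANs that host's instances require.
port_to_host = {"swp1": "compute1", "swp2": "compute2"}
host_vlans = {"compute1": {100, 200}, "compute2": {200}}

print(prune_trunks(port_to_host, host_vlans))
```

Here `swp2` never carries VLAN 100, so broadcast traffic on that network stays off links that do not need it.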