Video recording and production done by DevOpsDays.
The traditional overview and insights by none other than John Willis aka @botchagalupe
Presented by Dave Zwieback - mindweather.com - twitter.com/mindweather
Your organization has embraced the DevOps philosophy and is growing. So you set out to hunt for DevOps practitioners, and quickly find that the usual hiring approaches (e.g., recruiters searching LinkedIn) simply don’t work.
- What do these mythical DevOps creatures look like? (Hint: a lot like unicorns and combs).
- What is their natural habitat? (Shockingly, they don’t hang out on LinkedIn).
- How can you capture them?
It is commonly accepted gospel amongst the DevOpsDays target audience that software developers and operational staff not only need to work more closely together, but also share each other's duties and burdens. Yet very large sites seem slow to adopt this model. Is this a behemoth's inherent inability to adapt quickly? Or does DevOps perhaps not scale well?
The truth lies somewhere in the middle (shocking, I know). There are still a few things that current DevOps practices or culture changes cannot address. Many current DevOps techniques and practices are built on a set of common open source tools, but in very large environments these tend not to scale well; as a result, companies at that end of the spectrum have been building their own software for a long time. Perhaps they have been more "DevOps" than either we or they care to admit?
In this talk, I'd like to take a look at what prevents very large scale deployments from falling into the, I suppose, "traditional DevOps" model (and discuss whether such a thing exists). Lessons we may need to (re)learn include: automation is a means, not an end, and requires safeguards, accountability, audit trails and, in many cases, a human decision; logging all the things is only helpful if all the things can be processed, otherwise we drown in a sea of data; removing barriers between roles flattens the trust hierarchy, raising each user's impact (and their ability to cause harm, intentionally or not).
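The point about automation needing safeguards, an audit trail, and in many cases a human decision can be sketched in code. This is an illustrative example only (the class, method names, and structure are assumptions, not anything from the talk), assuming Ruby:

```ruby
require "logger"

# Illustrative sketch, not from the talk: an automation step that writes an
# audit trail and requires an explicit (human) confirmation before acting.
class GuardedAction
  def initialize(logger: Logger.new($stdout))
    @audit = logger
  end

  # description: what the action does, for the audit log.
  # confirm: a callable returning true/false, standing in for a human decision.
  # Returns true if the action ran, false if it was declined.
  def run(description, confirm:, &action)
    @audit.info("requested: #{description}")
    unless confirm.call
      @audit.warn("declined: #{description}")
      return false
    end
    action.call
    @audit.info("executed: #{description}")
    true
  end
end
```

In a real system the confirmation would come from a chat prompt or a ticketing workflow, and the audit log would go to durable storage rather than stdout.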
As programmers and system administrators we have the seemingly unlimited power of computers at our fingertips - with time, motivation, and enough caffeine we can build almost anything. So why do we waste so much time doing so many things in our jobs by hand? It’s not just about being lazy, it’s about putting our tools to work for us for consistency, speed, and ease. In a lot of ways Ruby was almost designed for this task, and there are a ton of tools out there to make automation painless.
At Paperless Post, we’ve moved our very slow and painful workflow into a toolset that can accomplish most things with a single click or command. We understand that continuous deployment, while sounding like a miracle cure, isn’t just a switch you flip, especially for companies with long-running/large projects and codebases. I would argue that taking the small steps toward that larger goal is not only worth it for every size of company, but is often more valuable than the final pinnacle of one-click deploy. I’ll walk through the tools we built and use, how we approached the problem, and how you can start taking steps to automate your day to day.
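As a hypothetical illustration of the kind of one-command automation described above (the class and step names are assumptions, not Paperless Post's actual tooling), a multi-step workflow can be wrapped so a single command runs each step in order and stops at the first failure:

```ruby
# Hypothetical sketch of wrapping a multi-step workflow in a single command;
# named steps run in order, and the run stops at the first failure.
class ReleaseTask
  def initialize
    @steps = []
  end

  # Register a named step; the block should return truthy on success
  # (e.g. the result of system("rake test")).
  def step(name, &block)
    @steps << [name, block]
  end

  # Run the steps in order, raising on the first failure.
  # Returns the names of the steps that completed.
  def run
    @steps.map do |name, block|
      raise "step failed: #{name}" unless block.call
      name
    end
  end
end

# Example wiring; real steps would shell out, e.g. { system("git push") }.
release = ReleaseTask.new
release.step("run tests")   { true }
release.step("tag release") { true }
release.step("deploy")      { true }
```

Each small step automated this way is useful on its own, which is the point: the value accrues long before anything resembling full one-click continuous deployment exists.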
At Travis CI, we've learned a ton from failures, and we're still learning. The project has grown from a side project into a full-time product that runs tens of thousands of builds per day.
All of our failures, though, are home-grown. For lack of knowing better, we made design decisions that we're now paying for, big time.
And yet, it's hard to see how we could have avoided this. The culture and knowledge of web operations and scalability is kept mostly in the minds of smart people instead of being discussed openly, without blame and without shame.
This talk is an attempt to get these things out in the open, to foster a community where problems are discussed constructively, where patterns and experiences, good and bad, are shared openly.
There has been a huge amount of interest and development around virtualization, IaaS platforms, and "cloud computing" in general. However, many of us still need to manage bare metal because we have latency sensitive applications, or simply need to squeeze every last cycle out of our hardware. Let's take a quick look at this neglected class of tooling to see what the current state of the art is.
A presidential campaign is a billion-dollar, 6000-plus-person organization that exists for roughly two years, and then disappears.
How do you set up a systems infrastructure for such an organization? How do you avoid organizational silos? Why even do DevOps? What was the culture like, and how did it meld with the traditional culture of political campaigns and committees? What are some lessons learned from the 2012 campaign -- what worked and what didn't?
Come learn about the answers to all of these questions, as well as our technology stack (AWS/Puppet/Ubuntu/CentOS/Github/Juniper/Nagios/OpsView & much more), why we chose it, how we set it up, and how we made it work.