One of the central themes of DevOps is applying software engineering techniques to operations work so that operations people can have better things: CI/CD, infrastructure as code, repeatable builds, code reviews, and so on. And while DevOps as a concept is much larger than that one initiative and set of goals, people are still stuck on it. We consistently see software delivered that isn't operable. DevOps must be reanimated with the mission of applying operations engineering techniques to software engineering work, so that DevOps can live on.
Infrastructure as Code (IaC) is the approach that takes proven coding techniques used by software systems and extends them to infrastructure. It is one of the key DevOps practices, enabling teams to deliver infrastructure rapidly, reliably, and at scale, and thereby the software running on that infrastructure as well.
The primary goal of Continuous Delivery (CD) is to ensure that software can be reliably released at any time; integrating IaC into the CD pipeline helps achieve that goal.
With over 13 years of engineering and DevOps experience, Adarsh Shah has helped organizations from various domains adopt IaC and CD. In this presentation, he will show how to integrate Infrastructure as Code into a Continuous Delivery pipeline by applying some of the best practices used by software systems, as well as highlighting other aspects to consider.
Benefits and challenges of integrating IaC into a CD pipeline
Best practices and patterns for integrating IaC into a CD pipeline
Source Control - structure and strategies
Testing for IaC
Security and Compliance
Provisioning - Patterns for server provisioning
Building and deploying pipelines
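To make the pipeline-integration topic above concrete, here is a minimal, hypothetical sketch of one CD stage for IaC. It assumes Terraform as the IaC tool and a trunk-based branching model (both are illustrative choices, not something the talk prescribes): every change is validated and planned, but changes are applied only from the main branch.

```python
# Hypothetical CD pipeline stage for infrastructure code.
# Assumptions: Terraform is the IaC tool, and only the "main"
# branch is allowed to apply changes (plan-only everywhere else).
import subprocess


def should_apply(branch: str) -> bool:
    """Apply infrastructure changes only from the main branch."""
    return branch == "main"


def run_stage(branch: str) -> None:
    """Validate, plan, and conditionally apply infrastructure changes."""

    def run(*cmd: str) -> None:
        # Fail the pipeline stage if any command exits nonzero.
        subprocess.run(cmd, check=True)

    run("terraform", "init", "-input=false")
    run("terraform", "validate")
    run("terraform", "plan", "-input=false", "-out=tfplan")
    if should_apply(branch):
        run("terraform", "apply", "-input=false", "tfplan")
    else:
        print("plan only: not on main")
```

The plan/apply split is the important shape here: any IaC tool with a preview step fits the same pattern, which keeps feature branches safe while main stays deployable.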
Black Mirror presents a haunting view of how modern technology places society a “minute away” from a dystopian future. DevOps, and those of us who practice it, find ourselves in a similar situation: partially mature technologies whose implications we don’t yet fully understand. Heartbleed, Equifax, and now Meltdown and Spectre can make us feel like there is no escaping this dark future. But just as Black Mirror examines the extremes of these concepts as a canary in the mine shaft for society, we too can carefully employ practices that will prevent season 5 from featuring site reliability engineer, DevOps engineer, or CISO characters.
In this talk, we'll learn how to use the powerful concepts and tools behind DevOps for good. With great power comes great responsibility, but also a great opportunity to do good for our businesses, each other, and our world. By working together with product, business, and external teams; embedding security into how we operate; and measuring everything we do, we can empower our teams to thrive.
Navigating policy, regulatory, and legal constraints is a challenge for building products in almost every industry, especially in the US Government. Working with stakeholders that aren’t on the same page about approaches to software development increases the complexity of moving projects forward. At the USDS, we’ve worked to increase collaboration between policy and technical experts so that agencies can deliver technology-enabled services to Americans. Our contracting partners have been integral to this success and share their own pain points of trying to implement modern technology in a not-so-modern culture. In this session, both USDS and government contractors will discuss the barriers we faced, the successes we had, and the areas that still need improvement during our path to building a product that impacts millions of Americans and three percent of the economy.
This talk is intended to help folks who are managing technical projects avoid common pitfalls, and help technical teams better prepare managers for overall project success.
DevOps cannot be achieved without considering many different aspects of software quality, including security. The term DevSecOps was coined to emphasize that security is a first-class part of the pipeline, not a second-class citizen.
Fortunately, DevOps and continuous delivery practices give us opportunities to add different types of security testing to our pipeline so that security can be part of our definition of done. Continuous integration can invoke static analysis tools to test for simple security errors and check if components with known vulnerabilities are being used. Automated deployments and virtualization make dynamic environments available for testing in a production-like setting. Regression test suites can be used to drive traffic through proxies for security analysis. From the code to the systems where the software is being deployed, the process can make sure that security best practices are followed and insecure software is not being produced.
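As a toy illustration of the "components with known vulnerabilities" check described above, the sketch below gates a build on a list of security advisories. In practice you would use a real scanner such as OWASP Dependency-Check or pip-audit; the package names and advisory data here are invented for the example.

```python
# Toy dependency gate: flag any pinned dependency that appears in a
# known-vulnerability advisory list. The advisory data is made up;
# a real pipeline would pull it from a scanner's vulnerability feed.
ADVISORIES = {
    ("libexample", "1.2.0"): "CVE-0000-0001 (illustrative)",
}


def parse_pin(line: str) -> tuple[str, str]:
    """Split a 'name==version' requirement pin into (name, version)."""
    name, _, version = line.strip().partition("==")
    return name, version


def vulnerable_pins(requirements: list[str]) -> list[str]:
    """Return a human-readable finding for each advised package pin."""
    findings = []
    for line in requirements:
        pin = parse_pin(line)
        if pin in ADVISORIES:
            findings.append(f"{line.strip()}: {ADVISORIES[pin]}")
    return findings
```

A CI step would fail the build whenever `vulnerable_pins` returns anything, making "no known-vulnerable components" an enforced part of the definition of done rather than a manual review item.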
Gene will talk about how to construct a definition of done that focuses on security along with other types of quality in a DevOps pipeline. He will discuss how to define security practices and criteria that are appropriate for our teams and our projects to be confident that we are doing DevSecOps, and how those practices and criteria might mature over time.
Who says successful summer internship programs in engineering are only for Silicon Valley, Seattle, and the large enterprise software companies? Not this guy. Startups are a great place to organize and run an internship, and Baltimore has the best to offer any college student looking for the total internship experience.
At Contrast Security, we've run a successful summer engineering internship program for university students for the last 3 years. Our software engineering interns go through an elaborate yet fun 3-month program with an emphasis on preparing them for a career in agile software development and a front-row seat at a VC-backed startup.
We would like to share with the Baltimore DevOps community our journey building a successful program. We will talk about many of these topics:
Recruiting at local and national colleges and universities
Involving your staff in the recruiting and interviewing process.
Establishing a structure for the program with the right kinds of goals and objectives.
Giving your interns something for their GitHub portfolio
The start-up experience
Stickers, shirts and other swag
No one likes to be on call, especially over the weekend or holidays. Yet, someone has to. Someone has to stay home and anxiously wait for that dreaded, and sometimes false positive, page instead of meeting friends at the bar or joining colleagues at the holiday party. In this talk I'll discuss a few different things you can do to prepare in advance to minimize the number of unnecessary pages so you don't have to miss out on that holiday party!
As technologists, our most challenging -- and most important -- job is that of platform selection. By far, the largest chunks of code in what we deploy will not be code that we've written, but the dependencies and platforms that our code runs on top of.
We spend an embarrassingly small amount of time talking directly about this work, though. Instead, we most often seem to approach it as a political problem: some sort of technology popularity contest. This is bad! Very bad! Technology selection isn't some dark, mystical art. And it's far too important a task to just leave up to the whims of the HIPPO (highest paid person's opinion).
Like everything else, this is a skill that we all need to learn, practice, and eventually pass on to others. So let's break it down and discuss some concrete approaches for tackling it. By the end of this quick chat, we'll all have a shared understanding of what the true goals are for technology selection problems, and how to analyze our options to best meet those goals.
While no one disputes the good in finding and fixing issues before deploying to production, relying on traditional testing methods in the age of data-intensive, internet-scale software has proven to be incomplete. The ability to identify and fix production issues quickly is crucial and requires insight into usage patterns and trends across the entire application architecture. This talk touches on the deficiencies of common testing methods, provides real-world examples of discovering odd edge cases with monitoring, and offers recommendations on metric instrumentation to help companies identify and act on business-affecting problems.
How do you migrate a large and diverse legacy portfolio to a modern, continuously evolving DevOps pipeline?
How do you ensure smooth transition from 100% on-prem to a hybrid of on-prem and multiple clouds?
How do you test complex, interdependent services and ensure reliability and uptime?
How do you overcome technical and cultural challenges and encourage empathy and continuous improvement?
In this talk, we'll share our latest experience in addressing these questions at the National Center for Biotechnology Information (NCBI) using recent technologies, including cluster scheduling and the service mesh. NCBI serves about 4.2 million users a day, at peak rates of around 7,000 web hits a second. As part of the National Institutes of Health (NIH), NCBI has been developing databases and information resources for the biomedical community since 1988. It has gone through many technology cycles while maintaining long-term archival resources, such as the US DNA sequence database, GenBank, and PubMed.
Waffle House has a hurricane disaster plan. It has everything that you want from your IT disaster plans, including contact trees, failover states, and runbooks on partial operation. I think if we took some time to look at how other industries handle outages, we could construct better state machines about our own up/down conditions. I also think that we could use status instrumentation to trigger feature flags that route people to different versions of our products, depending on status.
This talk explores lessons about state that we can adapt from the world outside computers, how to quantify them using a finite state machine, and then how to explore implementing them automatically while we are in a less-than-perfect condition.
Audiences will leave with a new way to think about resilience in the face of downtime and the shades of down that can still be operational. They may also depart with a deep need for smothered hashbrowns.
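The "shades of down" idea from this abstract can be quantified with a small finite state machine whose current state drives feature flags, so a partial outage disables only what it must. The states, transitions, and flag names below are invented for illustration; a real system would derive state changes from status instrumentation.

```python
# Minimal sketch: service health as a finite state machine, with
# feature flags derived from the current state. All names are
# illustrative, not from the talk itself.
from enum import Enum


class Health(Enum):
    FULL = "full"
    DEGRADED = "degraded"        # e.g. a payment dependency is down
    MAINTENANCE = "maintenance"  # planned full outage


# Allowed transitions between health states.
TRANSITIONS = {
    Health.FULL: {Health.DEGRADED, Health.MAINTENANCE},
    Health.DEGRADED: {Health.FULL, Health.MAINTENANCE},
    Health.MAINTENANCE: {Health.FULL, Health.DEGRADED},
}

# Feature flags routed by state: which parts of the product stay on.
FLAGS = {
    Health.FULL: {"browse": True, "checkout": True},
    Health.DEGRADED: {"browse": True, "checkout": False},
    Health.MAINTENANCE: {"browse": False, "checkout": False},
}


class ServiceStatus:
    def __init__(self) -> None:
        self.state = Health.FULL

    def transition(self, new_state: Health) -> None:
        """Move to a new health state, rejecting illegal transitions."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

    def flags(self) -> dict:
        """Feature flags that request routing should honor right now."""
        return FLAGS[self.state]
```

Routing requests through `flags()` is what turns "down" into a spectrum: in the `DEGRADED` state above, browsing stays up while checkout is switched off, rather than the whole site going dark.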
This year DevOpsDays Baltimore faced the challenge of dealing with a possible weather emergency, and had to make some tough decisions under a constantly ticking clock and with a fair amount of incomplete information. This panel will look at what happened and discuss how DevOps principles around Incident Management and Learning Reviews can help all organizations. We'll also get some audience participation and see whether your confirmation biases match up with ours.
DevOps seems to have broken an age-old rule that said IT investments don’t affect the bottom line and can’t create a strategic advantage. In this talk I’ll explore how the ‘people’ part of people, process, and tools is the key difference between DevOps and every other silver bullet that came before it.
Every day we make decisions. Those decisions can be about improving your product, process, or company; sometimes even bettering yourself. But many of our decisions are not made consciously. We all have biases. And even though most of our biases are unconscious, they can still get in the way of making the best possible decision. This is not a talk about gender and racial bias in tech hiring (although I will also touch on hiring procedures and biases). This is a talk about the different biases that skew the technical decision-making we do in our day-to-day operations.
I want you to imagine your organization as a freeway or highway, and the vehicles on it as the teams within the organization. Now, imagine if you could strip away every major attribute you know about driving safely on a highway. What would it reveal about you, your team, and your organization? In this talk, we will do this quick exercise and hopefully have some catharsis-based revelations along the way.
DevOps trends are clear about measuring a system's Mean Time To Recovery (MTTR) rather than its Mean Time Between Failures (MTBF). I argue that worrying about time between failures actually causes more harm than worrying about recovery. But do we think of our human systems the same way as our digital ones? I’ll apply lessons learned in SysOps to HumanOps. I’ll talk about how our complex social systems act like complex computer systems, and how focusing on MTTR rather than MTBF is a good thing for people, not just machines. I'll cover the environmental requirements for focusing on MTTR and discuss potential conflict-resolution steps as a jumping-off point for your organization or community.
Learn how a Fortune 100 energy company that generates revenues of approximately $34.5 billion and employs approximately 34,000 people has embraced DevOps. What are the challenges facing a large company as it attempts to realign development and operations into one homogeneous group? Get exposed to the secret of change: instead of fighting the old, we invent the new. Study how security risks were mitigated when we embraced the cloud, and experience the challenges we faced when we attempted to embrace open source software. Stories will be told, and our lessons learned will be shared for all to learn from.