In the 15 or so years we've been on a DevOps journey as an industry, we've seen several waves of tooling and processes. When teams were wholly separated, each brought their tools to bear on the problem. Then, as organizations brought teams together, they began to standardize onto a set of best-in-class tools. And more recently, groups have spent time on "DIY DevOps" - stitching together tools with custom integrations to attempt to create the original synergies promised by the concept of DevOps.
But recently, we've seen yet another shift driven by two related forces. As software has eaten the world, businesses need ideas to move at the speed of software. Gone are the days of massive, long waterfall-style projects that gather requirements and then spend effort passing them from one team to another. Companies in every industry are now judged on their ability to iterate and to ship software and value to customers quickly. Any time wasted on undifferentiated "DevOps duct tape" has to be eliminated to free up teams to work on value for stakeholders.
At the same time, the world of DevOps tooling has seen a shift toward consolidation. While previously best-in-class tools reigned supreme, teams are now looking to DevOps platforms to get more done with less integration, fewer wasted cycles, less opaque processes, and less distraction. All of this has led to a new era of platforms - and not just in DevOps. The tools we use to plan, build, deploy, operate, maintain, and measure the software we produce must be more than tightly integrated - they must be one platform to allow for the seamless flow of information and ideas.
And this principle also applies to the methods we use to deploy the software. The same shift to platforms has occurred on the infrastructure side - from on-premises, purpose-built hardware to on-demand cloud-based infrastructure, and beyond to homogeneous platforms like Kubernetes that abstract away the hardware entirely. But each of these remains a tool - and as technology leaders, we have to take a step back and ask: what is our actual goal? Why do we consider any of these tools at all?
That goal - the ability to ship better, more secure software to our stakeholders with less time between iterations - has to be at the forefront of the decision-making process when evaluating the tools and techniques we utilize to get the job done. We should measure what matters to know that we've made the right decisions. For over a decade, the DevOps Research and Assessment group (DORA) has been studying the correlates and causes of software delivery and operational performance. They have come to understand that four key metrics - lead time for changes, deployment frequency, change failure rate, and mean time to restore service - are not only correlated with software and operational excellence but also indicators of that excellence. Organizations that perform well in these categories not only outpace the software performance of others, but see positive impacts on organizational goals such as revenue and employee job satisfaction that are directly related to these measures.
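To make the four metrics concrete, here is a minimal sketch of how they might be computed from raw delivery data. The record structure, field names, and sample values are hypothetical - real platforms derive these from commit, pipeline, and incident history - but the arithmetic behind each metric is what DORA describes.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: when the change was committed, when it
# reached production, and whether it caused a failure. Illustrative only.
deployments = [
    {"committed": datetime(2024, 1, 1, 9), "deployed": datetime(2024, 1, 1, 15), "failed": False},
    {"committed": datetime(2024, 1, 2, 10), "deployed": datetime(2024, 1, 3, 10), "failed": True},
    {"committed": datetime(2024, 1, 4, 8), "deployed": datetime(2024, 1, 4, 12), "failed": False},
    {"committed": datetime(2024, 1, 5, 9), "deployed": datetime(2024, 1, 5, 11), "failed": False},
]

# Hypothetical incident records: outage start and restoration times.
incidents = [
    {"down": datetime(2024, 1, 3, 10), "restored": datetime(2024, 1, 3, 12)},
]

def lead_time_for_changes(deps):
    """Average time from commit to running in production."""
    deltas = [d["deployed"] - d["committed"] for d in deps]
    return sum(deltas, timedelta()) / len(deltas)

def deployment_frequency(deps, window_days):
    """Deployments per day over the measurement window."""
    return len(deps) / window_days

def change_failure_rate(deps):
    """Fraction of deployments that led to a production failure."""
    return sum(d["failed"] for d in deps) / len(deps)

def mean_time_to_restore(incs):
    """Average time from outage start to restoration of service."""
    deltas = [i["restored"] - i["down"] for i in incs]
    return sum(deltas, timedelta()) / len(deltas)

print(lead_time_for_changes(deployments))   # average commit-to-deploy time
print(deployment_frequency(deployments, 7)) # deploys per day over a week
print(change_failure_rate(deployments))     # share of deploys that failed
print(mean_time_to_restore(incidents))      # average outage duration
```

The value of a platform is that this data already lives in one place: the same system that ran the pipeline knows when the commit landed, when the deploy finished, and when the incident was resolved, so no custom integration is needed to produce the numbers.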
To measure these metrics effectively and efficiently, a new era is needed - the era of platforms. Putting down the "screwdriver in a server" and letting your teams instead focus on software delivery and value creation is how organizations will compete and win in the coming decade.