Gary Gruver has created an operations-focused DevOps book that scales; your deployment pipeline will thank you. The entire 10-chapter e-book is based on one simple rule: optimize your deployment pipeline. Gruver often reminds the reader that DevOps is not a one-size-fits-all methodology. However, it does not take long for the focus to shift from team implementation to large-scale adoption. First, it is much harder to plan accurately because everything you are asking your teams to do is something they are being asked to do for the first time.
Second, if software is developed correctly with a rigorous deployment pipeline (DP), it is relatively quick and inexpensive to change.
Q&A on Starting and Scaling DevOps in the Enterprise
You also use up a significant amount of your capacity on planning instead of delivering real value to your business. Organizations that use waterfall planning also tend to build up a large inventory of requirements in front of the developers. This inventory slows down the flow of value and creates waste and inefficiencies in the process. As Lean manufacturing efforts have clearly demonstrated, excess inventory anywhere in the system tends to drive waste in terms of rework and expediting.
The other challenge with having excess inventory of requirements in front of the developer is that as the marketplace evolves, the priorities should also evolve. This leads to the organization having to reprioritize the requirements on a regular basis or, in the worst case, sticking to a committed plan and delivering features that are less likely to meet the needs of the current market. If these organizations let the planning process lock them into committed plans, it creates waste by delivering lower value features.
If the organizations reprioritize a large inventory of requirements, they will likely deprioritize requirements that the organization has invested a lot of time and energy in creating. Either way, excess requirements inventory leads to waste.

The next step is getting an environment where the new feature can be deployed and tested. The job of providing environments typically belongs to Operations, so they frequently lead this effort.
In small organizations using the cloud, this can be very straightforward and easy. In large organizations using internal datacenters, it can be a complex and time-consuming process that requires working through extensive procurement and approval processes with lengthy handoffs between different parts of the organization. Getting an environment can start with long procurement cycles and major operational projects just to coordinate the work across the different server, storage, networking, and firewall teams in Operations.
This is frequently one of the biggest pain points that cause organizations to start exploring DevOps.
There is one large organization that started its DevOps initiative by trying to understand how long it would take to get a "Hello World!" application up and running. They did this to understand where the biggest constraints were in their organization. They abandoned the experiment after days of effort, still without a working Hello World! Next, they ran the same experiment in Amazon Web Services and showed it could be done in two hours.
This experiment provided a good understanding of the issues in their organization and also provided a view of what was possible. Once the environment is ready, the next step is deploying the code with the new feature into the test environment and ensuring it works as expected and does not break any existing functionality. This step should also ensure that no security or performance issues were created by the new code. Three issues typically plague traditional organizations at this stage in their DP: repeatability of test results, the time it takes to run the tests, and the time it takes to fix all the issues.
Repeatability of the results is a big source of inefficiency for most organizations. They waste time and energy debugging and trying to find code issues that end up being problems with the environment, the code deployment, or even the testing process. This makes it extremely difficult to determine when the code is ready to flow into production and requires a lot of extra triaging effort from the organization.
Large, complex, tightly coupled organizations frequently spend more time setting up and debugging these environments than they do writing code for the new capabilities. This testing is typically done with expensive and time-consuming manual tests that are not very repeatable. The time it takes to run through a full cycle of manual testing delays the feedback to developers, which results in slow rework cycles, which reduces flow in the DP. At the end of the day, it is important to remember that DevOps is not about what you do on what platform, but what your outcomes are.
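One common triage heuristic for the repeatability problem is to rerun a failing test: a code defect fails every time, while an environment or test problem fails intermittently. The sketch below is an illustrative heuristic, not a technique from the book:

```python
# Hypothetical triage heuristic: rerun a failing test to separate a code
# defect (fails every run) from a flaky environment or test (fails some runs).
def classify(test, reruns=3):
    outcomes = [test() for _ in range(reruns)]  # True = pass, False = fail
    if all(outcomes):
        return "pass"
    if not any(outcomes):
        return "consistent failure: likely a code defect"
    return "intermittent failure: suspect the environment or the test"

# Deterministic stand-ins for demonstration
flaky_runs = iter([True, False, True])
print(classify(lambda: True))             # pass
print(classify(lambda: False))            # consistent failure: ...
print(classify(lambda: next(flaky_runs))) # intermittent failure: ...
```

Automating even this small amount of triage shortens the feedback loop to developers, which is exactly the flow improvement the DP is after.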
Once you get a view and understanding of your deployment pipelines, you need to decide which pipelines to optimize first, based on the original business objectives. With fixed IT budgets, optimizations should not only reduce lead times but reduce costs as well.
Only by factoring in both cost and lead times will organizations be able to satisfy previously set objectives. For example, you may have a collection of loosely-coupled pipelines that support high-growth applications and drive a lot of changes. These pipelines may require more resources. You might also have several tightly-coupled pipelines that are low-growth and support your existing revenue stream.
These pipelines may have a high cost of support. In this case, where should you prioritize optimization efforts? Many would take the tightly-coupled pipelines off the table and focus on optimizing the loosely-coupled, high-growth pipelines. But if your business objectives are to free up capacity for innovation and to avoid being a bottleneck to the business, you can accomplish both by optimizing the tightly-coupled pipelines first; quite frequently, these pipelines have the longest lead times, so you satisfy both objectives at once.
Once you have a prioritized list of deployment pipelines, you can begin drilling down into pipeline optimization. Your value stream mapping exercise should provide you with guidance on the sources of waste and long lead times. Automation is often a common place to start in order to reduce long lead times and increase efficiency. Automating your provisioning, testing, and deployments will dramatically reduce manual effort, increase deployment frequency, decrease lead times, and produce fewer production incidents, enabling you to recapture capacity for innovation and redeploy it elsewhere, all at a lower cost.
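The capacity argument is easy to make concrete with back-of-envelope math. The numbers below are made up purely for illustration; plug in your own release cadence and per-release effort:

```python
# Back-of-envelope capacity math with hypothetical numbers (not from the book).
manual_hours_per_release = 40    # env setup, manual regression, deploy steps
automated_hours_per_release = 4  # mostly reviewing automated results
releases_per_year = 26           # fortnightly release cadence

# Hours of manual effort recaptured each year by automating the pipeline
hours_recaptured = (manual_hours_per_release - automated_hours_per_release) * releases_per_year
print(hours_recaptured)  # 936 hours a year that can be redeployed to innovation
```

Even modest assumptions like these recover hundreds of engineer-hours a year, which is the capacity-for-innovation outcome the business objectives call for.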
As a result, teams spend less time on the mechanics of delivering applications and more time doing innovative work that adds real value to the organization.