Do you really practice continuous integration or continuous delivery? The answer is clear from the track record of your build history: did changes pool in the main branch, or was every earnest attempt made to deploy each incremental change to production almost immediately?
I feel we all need to have an honest conversation about the difference between continuous integration and continuous deployment. Just because each release is, in theory, ready to be promoted to production does not make you a practitioner of continuous delivery.
Often I hear protest about why the sacred release button is not pressed on every push. This essay identifies some topics that often muddle the path to a continuous delivery pipeline.
Releases and Deployments
The primary reason new commits pool in the main branch is simple: the interface or business rules have changed, but the work is not fully complete because the remainder of the orchestrated changes has yet to be written. Hence the incremental changes are pooled rather than deployed.
This issue can be side-stepped by realizing that releases can be controlled through alternative mechanisms: feature switches, configuration modes, and so on.
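As a minimal sketch of the feature-switch idea (the flag name, flow functions, and environment-variable source are all hypothetical): the incomplete code path is merged and deployed, but stays dark until configuration releases it.

```python
import os

# Hypothetical feature switch: the deployed build carries both code paths,
# and the *release* happens by flipping configuration, not by deploying.
def new_checkout_enabled() -> bool:
    # The flag source could equally be a config service or a database row.
    return os.environ.get("FEATURE_NEW_CHECKOUT", "off") == "on"

def legacy_checkout_flow(cart):
    return f"legacy flow: {len(cart)} items"

def new_checkout_flow(cart):
    return f"new flow: {len(cart)} items"

def checkout(cart):
    # Merged and deployed, but dark until the switch is flipped.
    if new_checkout_enabled():
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)
```

Because the switch is read at request time, turning the feature off again is a configuration change, not a redeployment.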
A distinct advantage of treating releases as a concern independent of deployments is that rollbacks can be almost instantaneous, likely faster than redeploying an old build.
Furthermore, a new capability becomes available: treating customers selectively, or even performing incremental rollouts.
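One common way to implement incremental rollout is deterministic bucketing, sketched below (function and parameter names are illustrative): each user hashes to a stable bucket, so the rollout percentage can grow without users flickering between old and new behaviour.

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Return True if this user is inside the rollout percentage.

    Hashing feature and user together gives each user a stable
    bucket in 0-99, independent per feature.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100
    return bucket < percent
```

Raising `percent` from 1 to 100 over time only ever adds users to the feature; no user who has seen the new behaviour is silently reverted.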
Sometimes a change needs to be coordinated with a release from another application or pipeline. Coordinated releases offer false comfort: once the product attracts active users, there is always a window in which a user interacts with the uncoordinated combination.
Building APIs to be backwards compatible is entirely feasible and has been written and spoken about at length.
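A minimal sketch of one such approach, the tolerant-reader pattern (the payload shape and field names are invented for illustration): the consumer reads only what it needs, defaults fields added in later versions, and ignores fields it does not recognise, so either side can deploy first.

```python
def parse_order(payload: dict) -> dict:
    """Tolerant reader: survive both older and newer producers."""
    return {
        "id": payload["id"],                         # required since v1
        "status": payload.get("status", "pending"),  # added in v2, defaulted
        # Any unknown fields in the payload are simply ignored.
    }
```

With this discipline, a v1 producer and a v2 consumer (or vice versa) can coexist during the deployment window, removing the need for lockstep releases.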
It may also be worthwhile to explore hypermedia concepts, so that the only contract between client and server is the set of control instructions, allowing the server to influence user flow and coordination.
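To make the hypermedia idea concrete, here is a hypothetical HATEOAS-style response (resource shape, link relations, and URLs are all invented): the client follows named controls, and the server steers the flow by choosing which controls to include, so the two can evolve without lockstep releases.

```python
# Hypothetical server response: the client's only contract is the link
# relations under "_links"; the URLs themselves are opaque.
order_response = {
    "id": 42,
    "status": "pending",
    "_links": {
        "self":   {"href": "/orders/42"},
        "pay":    {"href": "/orders/42/payment"},
        "cancel": {"href": "/orders/42/cancel"},  # server may omit once shipped
    },
}

def next_action(resource: dict, rel: str):
    """Follow a hypermedia control by relation name, if the server offers it."""
    link = resource.get("_links", {}).get(rel)
    return link["href"] if link else None
```

A client written this way simply renders whatever controls arrive, so the server can add, rename, or withhold steps in the user flow on its own release schedule.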
But what about Sign-Off of Risk? (A Case Study)
I would like to share a personal experience in which multiple teams transformed into a continuous deployment model.
Initially, the teams had different release cycles (usually pooling commits), and the weekly releases into production induced incidents. Often the weekly cadence was interrupted because testing indicated a critical regression.
Due to various motivators, the teams decided that once a pull request was merged, no other merges could occur until that change set had been deployed to production. This policy applied across multiple repositories: all change sets were on hold until the prior build was promoted to the production environment.
What were the results? Incidents dropped to zero within two days. More surprisingly, the frequency of deployments into the testing environment increased!
Other engineering practices were adopted over time to reduce reverts on the main branch. Regardless, customers saw incremental changes more often, with less downtime, because we deployed more often.
That’s a win in everybody’s books.
Ensure that your pipeline continuously executes from start to finish. If the “go” button is rarely pressed, and it hurts when it is, pressing it more often will make deployment less painful. Quality is then incentivized before the merge request is even created.
Always continuously pipeline your changes; everything else will automatically improve.