In the second part of this series on configuration and change management in TfL's hybrid Agile, DevOps and ITIL world, I'll take a look at infrastructure as code and the CMDB (configuration management database).
We have successfully adopted DevOps, Agile and continuous integration, with weekly deployments of code, new features and functionality to the TfL website. This is the first in a series of posts looking at how we manage these processes behind the scenes.
I get the honour of writing the final blog post of 2015, and it's a pleasing one as we look back on what the TfL Online team has achieved this year.
We started the year with one deployment of new code to the website every two weeks; that cadence has increased to the point where we have now completed our 57th release of 2015. There are a lot of variables and moving parts that we monitor and co-ordinate simultaneously to deliver a safe, quality-assured, zero-defect, zero-outage deployment to the website every single week.
While making these 57 separate releases of code, optimisations, reference data refreshes, bug fixes, enhancements and new features to tfl.gov.uk, we have not needed a single planned maintenance window, so our customers have experienced no downtime at all while we've been making these improvements.
So, here’s how we believe we’ve fed into the ‘Every Journey Matters’ ethos:
Building on the recent ‘Agile continuous delivery in the cloud’ 4-part series on this blog, this post summarises our approach to agile deployments to tfl.gov.uk.
I was recently in a meeting explaining our release and deployment process to some internal stakeholders, and found that sometimes a picture really can say a thousand words. So this post will be short and sweet, and I'll let the image below do the talking.
Let’s quickly consider getting products to market; in our context this means new features, enhancements, and updates to the website – doing the right thing, in the right way, over and over, continuously improving tfl.gov.uk.
If you have any questions, queries or feedback on this or any of our 4-part series on Agile continuous delivery in the cloud, please do leave a comment below. Thanks.
In this, the final blog of the series, I’ll discuss some of the advantages of adopting agile continuous integration & delivery.
So far in this series of posts, I've talked about continuous integration: the software engineering practice of frequently merging all developers' working copies into a shared trunk. This enables fast feedback loops and resolution of bugs at source, quickly and early, preventing issues from being introduced into the live system further downstream.
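As a minimal sketch of that merge-frequently rhythm (the repository, branch and file names below are invented for illustration, not taken from our codebase), a single small change might flow like this in Git:

```shell
# Illustrative only: repo, branch and file names are made up.
set -e
git init -q ci-demo
cd ci-demo
git config user.email dev@example.com
git config user.name "Dev"
trunk=$(git symbolic-ref --short HEAD)   # the shared trunk (master/main)
echo "v1" > app.txt
git add app.txt
git commit -qm "Initial trunk"
# A short-lived working copy: build and test locally, then merge back promptly
git checkout -qb small-change
echo "v2" > app.txt
git commit -qam "Small, frequently integrated change"
git checkout -q "$trunk"
git merge -q small-change                # integrate early, so conflicts surface now
```

The point is the cadence, not the commands: because each change is small and merged back to the trunk quickly, any integration conflict is found while the change is still fresh in the developer's mind.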
We discussed that developing software involves teams of people working together, and the synergies of adopting a DevOps model. Automating testing, integration and the provisioning of environments saves a lot of time, so that "we work smarter, not harder". Cloud-hosted environments enable us to spin up, and automatically provision, pay-as-you-go, on-demand project development environments with just a few mouse clicks.
In the third part of this blog, I'll talk about our route to market – the global release pipeline.
Global release pipeline
For continuous delivery to work smoothly, and to ensure that projects do not encounter delays getting code from development into production, a robust, repeatable, reliable, fault-tolerant deployment system is required.
Our global release pipeline is a sequence of cloud-based environments that we use to prove that all new products and code are fit for purpose and quality assured, ready to go live. Each environment mirrors the real-world production environment. We currently have four cloud-based environments (called Red, Mauve, Amber and Pre-prod) in the global release pipeline.
The journey down the pipeline begins when the team checks code in to the version control system. It ends when the change goes to production – in between, a lot can happen!
To be declared ready to enter the global release pipeline, a build must pass basic functional tests. All development teams work from the same code repository: they write a test case for the functionality they develop, build and test it locally, refactor, and verify – only then committing their changes (frequently) to the source code repository. At key stages of the pipeline, we tag our code commits to ensure we have a full audit trail, and then send the build downstream towards production and operations.
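To illustrate the tagging step (the stage and tag names here are assumptions for the sketch, not TfL's actual conventions), an annotated Git tag records who promoted a build, when, and to which stage:

```shell
# Hypothetical stage/tag names; the pattern is: tag at key stages for the audit trail.
set -e
git init -q pipeline-demo
cd pipeline-demo
git config user.email ci@example.com
git config user.name "CI"
echo "build artefact" > release.txt
git add release.txt
git commit -qm "Build passed basic functional tests"
# An annotated tag stores the tagger, date and a message alongside the commit
git tag -a pre-prod-2015.57 -m "Promoted to Pre-prod"
git tag --list
```

Because the tag is attached to an exact commit, anyone can later answer "precisely which code went to which stage, and when" – which is the audit trail the pipeline relies on.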
In this post, I’ll talk about keeping the lines of communication open, testing & DevOps.
Software is abstract until it is operational
In line with our ethos that all development teams get code to a production-ready state as quickly as possible, operations, testers, project managers, scrum masters and software developers are all part of the same agile process and team. DevOps came about from breaking down silos so that people collaborate more effectively, building trust and relationships along the way.
Everyone involved in software development works together on all aspects of delivery, enabling collaboration across functional boundaries. There are lots of moving parts in DevOps and continuous delivery. We have some automation in place (e.g. automated tests, continuous integration) and we use tools and engineering practices such as Jira, Bitbucket, Confluence, TeamCity and Git – the rest we manage by hand.