Both Continuous Integration (CI) and Continuous Delivery (CD) help achieve the “Release Early, Release Often” software development philosophy.
In this blog post I’ll define the terms, briefly outline some history and look at deployment types. I’ll examine AWS-specific examples of the deployment methods in part 2.
Continuous Integration (CI)
Continuous Integration is a process of automating regular code commits followed by an automated build and test process designed to highlight integration issues early.
Continuous Integration came about to combat Integration Hell, where changes by one developer were incompatible with those of another, causing compile failures. Integration usually happened on deployment day, often a Friday, which spawned the term “Friday Afternoon Deployments”. The longer code remained checked out of the repository, the higher the chance of issues when re-integrating.
Tools like Jenkins provide customisable, workflow-based integration/build processes.
Continuous Deployment (CD)
Continuous Deployment takes the form of a workflow based process which accepts a tested software build payload from a CI server. The majority of major CI servers incorporate functionality allowing CD.
AWS provide CodeDeploy and CodePipeline services. CodePipeline is used to orchestrate CI and CD; it can trigger CI builds (e.g. on Jenkins), and then deploy the resulting artifacts.
Single Target Deployment
Build === Deploy ===> Target
The simplest deployment type: overwrite the old version in place on the same server.
- Generally there is a brief outage
- There is no secondary server, so testing is limited
- Rollback involves removing the new version and installing the previous one
Multi-Target Deployment
Build === Deploy ===> Target1 ===> Target2 ... ===> TargetN
Deploy to multiple targets at the same time.
- Often requires orchestration tooling
- Same drawbacks as Single Target Deployment: a brief outage on each target and limited testing
- No DNS changes required. Changing DNS adds another layer of complication due to how different clients at different layers treat the DNS TTL.
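The orchestration such tooling performs is essentially a fan-out to all targets at once. A minimal sketch in Python, assuming a hypothetical deploy_to() function standing in for the real copy-artifact-and-restart step:

```python
# Sketch of a parallel multi-target deployment. deploy_to() is a
# hypothetical stand-in for pushing a build artifact to one target.
from concurrent.futures import ThreadPoolExecutor

def deploy_to(target: str, artifact: str) -> str:
    # Placeholder: in reality this would copy the artifact to the host
    # and restart the service there.
    return f"{artifact} deployed to {target}"

def deploy_all(targets: list[str], artifact: str) -> list[str]:
    # Deploy to every target at the same time; orchestration tooling is
    # essentially this fan-out plus error handling and reporting.
    with ThreadPoolExecutor(max_workers=len(targets)) as pool:
        return list(pool.map(lambda t: deploy_to(t, artifact), targets))

results = deploy_all(["target1", "target2", "target3"], "app-v2.zip")
```

Note that every target goes down and comes back at roughly the same time, which is why the Single Target caveats still apply.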
Minimum In-service Deployment
Deploy to as large a number of targets as possible while keeping the defined minimum number in service. Only deploy to more instances once recently deployed instances are healthy.
- Happens in multiple stages, so orchestration is required.
- Allows for testing
- Generally there is no downtime
- It is often quicker than a rolling deployment (see below)
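The “as large a number as possible” batch size falls out of simple arithmetic: total targets minus the minimum that must stay in service. A sketch, with hypothetical target names:

```python
# Sketch of the batch maths behind a minimum in-service deployment:
# deploy to as many targets as possible while keeping `minimum` healthy.
def deployment_batches(targets: list[str], minimum: int) -> list[list[str]]:
    batch_size = len(targets) - minimum  # most we can take offline at once
    if batch_size < 1:
        raise ValueError("minimum in-service count leaves no capacity to deploy")
    remaining = list(targets)
    batches = []
    while remaining:
        batches.append(remaining[:batch_size])
        remaining = remaining[batch_size:]
    return batches

# With 10 targets and a minimum of 6 in service, we can take 4 offline
# at a time, so the deployment completes in 3 stages: 4 + 4 + 2.
batches = deployment_batches([f"t{i}" for i in range(10)], minimum=6)
```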
Rolling Deployment
Happens in multiple stages, with the number of targets per stage defined by the user.
Was considered the cheapest / best way to deploy, but with modern hourly and consumption-based billing that isn’t true anymore. (See Blue Green Deployment and A/B Testing below)
- Orchestration and health checks are required, which you can use to pause or rollback
- Allows for automated testing i.e. deployment targets are assessed prior to continuing
- Generally there is no downtime
- Can be paused allowing limited multi-version testing
- Overall application health isn’t necessarily maintained (advanced software may be aware of overall health)
- Can be the least efficient in terms of time taken
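The stage / assess / continue cycle above can be sketched as a loop; update and is_healthy are hypothetical stand-ins for the real deployment and health-check calls:

```python
import time

# Sketch of a rolling deployment: update a fixed-size batch, wait until
# every instance in it reports healthy, then move to the next batch.
def rolling_deploy(targets, batch_size, update, is_healthy, poll=0.01):
    for i in range(0, len(targets), batch_size):
        batch = targets[i:i + batch_size]
        for t in batch:
            update(t)
        # Assess the batch before continuing -- this is the pause point
        # where automated tests or a rollback decision can happen.
        while not all(is_healthy(t) for t in batch):
            time.sleep(poll)

# Toy run: "updating" a target just records it, and a target is
# "healthy" once it has been updated.
updated = []
rolling_deploy(["a", "b", "c", "d", "e"], batch_size=2,
               update=updated.append, is_healthy=lambda t: t in updated)
```

Because each batch must pass its health check before the next starts, the total time grows with the number of batches, which is why this can be the slowest option.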
Blue Green Deployment
Performs a deploy to a whole new environment, allowing for isolated evaluation.
Blue represents the current version, and Green represents the version you want to shift to.
- Requires advanced orchestration tooling
- Deployment process is rapid: All-at-once
- Cutover is clean and controlled using a DNS change. The same is true of a rollback.
- Health of entire environment can be tested prior to cutover.
- Process can be fully automated.
- Carries extra cost, but this is limited by per-hour billing
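With Route 53, the cutover (and a rollback) is a single record change pointing the live hostname at the other environment. A sketch with a hypothetical zone ID and hostnames; the boto3 call is shown but commented out so the snippet stays self-contained:

```python
# Sketch of the Route 53 change that performs a blue/green cutover by
# repointing a CNAME from the blue environment to the green one.
def cutover_change_batch(record_name: str, new_target: str, ttl: int = 60) -> dict:
    return {
        "Comment": f"Cut over {record_name} to {new_target}",
        "Changes": [{
            "Action": "UPSERT",  # replaces the existing record in one step
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "CNAME",
                "TTL": ttl,
                "ResourceRecords": [{"Value": new_target}],
            },
        }],
    }

# Hypothetical hostnames; rollback is the same call with the blue target.
batch = cutover_change_batch("app.example.com", "green.example.com")
# import boto3
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z0000000EXAMPLE", ChangeBatch=batch)
```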
Elastic Beanstalk uses a DNS change to switch between deployments. Alternatively, you could have one load balancer and attach/detach two different auto scaling groups:
aws autoscaling attach-load-balancers --auto-scaling-group-name my-asg --load-balancer-names my-lb
Or, using a target group for an Application Load Balancer:
aws autoscaling attach-load-balancer-target-groups --auto-scaling-group-name my-asg --target-group-arns my-targetgroup-arn
A/B Testing
Send a percentage of traffic to Blue (A) and a percentage to Green (B). It can be used to switch environments, but in a more granular way than Blue Green Deployment.
- Blue Green Deployment’s aim is to switch environments only checking for major faults
- AB Testing’s aim is to test a new feature, gradually assessing performance/stability/health.
An example of this would be using Route 53 Weighted records to send a percentage to each deployment.
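A weighted record receives weight / (sum of all weights) of the traffic, so the split is easy to reason about. A small sketch with illustrative 90/10 weights and hypothetical hostnames:

```python
# Sketch of how Route 53 weighted records split traffic: each record
# gets weight / total-weight of requests. Weights are illustrative.
def traffic_share(weights: dict[str, int]) -> dict[str, float]:
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Send 90% of traffic to the current (blue) environment and trial the
# new (green) one on the remaining 10%.
share = traffic_share({"blue.example.com": 90, "green.example.com": 10})
```

Gradually shifting the weights (90/10, 50/50, 0/100) turns the hard blue/green cutover into an incremental one.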
Server / Code Management Types
The Pets vs Cattle analogy - do you love, care for and worry over it, or do you just kill it when it gets sick?
Pets = Bootstrapping, Cattle = Immutable Architecture (see below)
Bootstrapping
Start with an AMI and, via automation (Configuration Management), build on it to create a more complex object. For example:
- cloud-init - a set of Python scripts that allows you to run directives on boot e.g. Amazon Linux has this configured in /etc/sysconfig/cloudinit and one of the configured actions is to run the user data script.
- cfn-init - CloudFormation init
The advantage of bootstrapping is that you reduce the number of AMIs, i.e. you don’t need an AMI for each change (AMI Baking).
Immutable Architecture
The practice of replacing infrastructure instead of upgrading or repairing components. Nothing is bootstrapped, nothing is changed; you pre-create an image for every version. Servers are throw-away objects.
Docker
“Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.” (https://en.wikipedia.org/wiki/Docker_(software))
- Image - e.g. Ubuntu. Can be built upon in layers.
- Container - built from an image. It can be run, stopped, moved etc.
- Layers / Union File System - branches in a file system are overlaid, i.e. only the changes are applied
- Dockerfile - a set of instructions on how to build an image
- Docker Daemon / Engine
- Docker Client
- Docker Registries / Docker Hub (library of images)
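The layers / union file system idea above can be sketched with Python’s ChainMap: each layer records only its own changes, and a lookup walks the layers from top to bottom. Layer contents here are purely illustrative:

```python
# Sketch of a union file system: the top layer wins, lower layers show
# through wherever the layers above made no change.
from collections import ChainMap

base_image = {"/bin/sh": "busybox", "/etc/os-release": "ubuntu"}
app_layer = {"/app/server": "v1"}     # adds a file on top of the base
patch_layer = {"/app/server": "v2"}   # overrides the layer below

# ChainMap searches its mappings left to right, like layers top-down.
container_fs = ChainMap(patch_layer, app_layer, base_image)
```

This is why images can share base layers: only the per-layer deltas are stored.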