Understanding CodePipeline
This is the last section of this course, and it's the natural bookend to the first section, in which we covered CloudFormation. Since that first section, we've been running deployment scripts to do things like build the Lambdas, upload them to S3, upload the templates and OpenAPI files, and create or update CloudFormation stacks. Now we're finally going to demonstrate how to fully automate these tasks using native pipeline and build services in the cloud.
Build Pipeline Basics
So, for those who don't know, what's a build pipeline? It's a defined series of tasks, analogous to a workflow, that automates the software build and deployment process. The activities in a workflow typically take the form of a graph, which can include control-flow elements like decision points, branches, and loops. Generally, though, build pipelines are simple sequences of steps, sometimes including activities that halt the process pending some kind of approval.
Fully Cloud Native
A benefit of using CodePipeline and its supporting services, like CodeBuild and S3, is that they're all cloud native services. As a result, at no point when using these services do you have to worry about managing servers, updating software, or ensuring capacity. Like other cloud native services, they're also fully integrated into the AWS ecosystem when it comes to managing access, monitoring usage, and recording audit trails of changes.
Deploying CodePipeline with CloudFormation
One of the quirks of CodePipeline is that it doesn't have a way to receive input parameters at runtime. With some build pipeline tools you can deploy a single, parameterized pipeline and then run that pipeline to deploy different environments. With CodePipeline you can't do this, so you have to deploy separate instances of the pipeline, one for each target environment. This would be more of an issue were it not for the fact that CodePipeline and all its constituent parts are themselves just AWS resources that can also be deployed using CloudFormation. So it's not difficult to deploy multiple instances of the pipeline.
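To make that concrete, here's a minimal sketch of the pattern. The resource and parameter names are illustrative, not taken from the sample code, and the IAM role, artifact bucket, and stage definitions are elided:

```yaml
Parameters:
  EnvironmentName:
    Type: String
    AllowedValues: [dev, test, prod]

Resources:
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      # One stack per environment yields one pipeline per environment.
      Name: !Sub serverless-sample-${EnvironmentName}
      RoleArn: !GetAtt PipelineRole.Arn        # service role resource elided
      ArtifactStore:
        Type: S3
        Location: !Ref PipelineArtifactBucket  # bucket resource elided
      Stages: []  # Source, Build, Deploy, and Test stages are sketched below
```

Deploying the dev and test pipelines is then just a matter of creating two stacks from the same template with different EnvironmentName values.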
Stepping Through the Sample Pipeline
Now, at a high level, let’s step through the example pipeline that’s included in the sample code. This is a simplified pipeline that’s intended to demonstrate how the service works. So it doesn’t include things like notifications or approvals. Nor does it have any automated triggers.
Here’s a diagram of the pipeline, which, simplified though it is, still has quite a few moving parts:
On the left side of the diagram, you can see the sequence of steps for the parent CodePipeline service. At the top, something has to trigger the pipeline, which in our case is done manually in the console. It’s possible to define automated triggers that run the pipeline when, for example, code is checked into the repository or a branch is merged. Within the pipeline, shown on the left in the shaded box, you have a series of stages, each of which includes one or more actions. Each stage defines source and destination paths from which its actions read and to which they write as they execute. These paths are hosted in an S3 bucket specified for the pipeline.
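As an illustration of that wiring (the names here are hypothetical, not from the sample code), each action declares the artifacts it reads and writes, and CodePipeline shuttles those artifacts through the pipeline's S3 bucket:

```yaml
- Name: Build
  Actions:
    - Name: BuildLambdas
      ActionTypeId:
        Category: Build
        Owner: AWS
        Provider: CodeBuild
        Version: "1"
      Configuration:
        ProjectName: !Ref BuildProject   # assumed CodeBuild project resource
      InputArtifacts:
        - Name: AppCode       # read from the pipeline's artifact bucket
      OutputArtifacts:
        - Name: BuildOutput   # written back for downstream stages to consume
```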
The actions that you add to the stages come from a set of predefined types. The GitHub and CloudFormation actions that you see above have limited functionality; they’re basically built to do one thing, like check out code or create/update CloudFormation stacks. When you need to run custom commands, you can use CodeBuild actions. With a CodeBuild action you define a build agent type and the required environment variables. Then you reference what’s called a “buildspec” file, in which you specify the software to install on the build agent, define the secrets to make available, and list the individual commands or scripts to be executed.
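The format below is CodeBuild's documented buildspec layout, but the contents are an illustrative sketch rather than the course's actual file; the bucket name, secret id, and runtime version are assumptions:

```yaml
version: 0.2

env:
  variables:
    DEPLOYMENT_BUCKET: my-deployment-bucket   # assumed bucket name
  secrets-manager:
    API_KEY: my-app/secrets:api-key           # assumed secret id and JSON key

phases:
  install:
    runtime-versions:
      dotnet: "6.0"          # software to install on the build agent
  build:
    commands:
      - ./build.zsh          # run the repo's existing build script
```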
Stepping through the stages shown above, the first is the Source Stage. This stage has two actions: one checks out the .NET or Java versions of the serverless sample code from GitHub, and another checks out the Postman test collections from the common repository. Here we’re demonstrating access to code and packages in GitHub, but note that source actions can also work with AWS CodeCommit repositories and AWS CodeArtifact package hosts.
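A hypothetical sketch of one such source action follows; the connection reference, repository id, and branch are placeholders, and the second action would look the same apart from the repository and output artifact name:

```yaml
- Name: Source
  Actions:
    - Name: CheckoutAppCode
      ActionTypeId:
        Category: Source
        Owner: AWS
        Provider: CodeStarSourceConnection   # GitHub via a CodeStar connection
        Version: "1"
      Configuration:
        ConnectionArn: !Ref GitHubConnection   # assumed parameter or resource
        FullRepositoryId: my-org/serverless-sample
        BranchName: main
      OutputArtifacts:
        - Name: AppCode
    # A second, near-identical action checks out the Postman collections.
```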
Next, we have the Build Stage, which executes a CodeBuild action that largely reproduces what the build.zsh scripts in the sample code do. This action builds the Lambda deployment package, uploads it to the deployment bucket in S3, and uploads all the templates and OpenAPI files. Then we have the Deploy Stage, which creates or updates the CloudFormation stack. Note that the pipeline supplies static parameter values to CloudFormation; these values are fixed when the pipeline itself is deployed.
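The Deploy Stage might be wired up along these lines. Again this is a sketch: the template path, stack name, and parameter values are assumed rather than taken from the sample code:

```yaml
- Name: Deploy
  Actions:
    - Name: CreateOrUpdateStack
      ActionTypeId:
        Category: Deploy
        Owner: AWS
        Provider: CloudFormation
        Version: "1"
      InputArtifacts:
        - Name: BuildOutput
      Configuration:
        ActionMode: CREATE_UPDATE                     # create or update the stack
        StackName: !Sub serverless-sample-${EnvironmentName}
        TemplatePath: BuildOutput::template.yaml      # assumed file name
        ParameterOverrides: '{"Environment": "dev"}'  # static values, baked in
        RoleArn: !GetAtt CloudFormationRole.Arn       # role resource elided
        Capabilities: CAPABILITY_IAM
```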
Lastly, we have the Test Stage. To keep the sample code as simple as possible, it doesn’t include any unit- or component-level tests; if it did, those tests would run as part of the Build Stage. What the Test Stage demonstrates is the use of Newman to execute automated Postman API tests from the command line. Performing run-time tests on services that have just been deployed isn’t something that works in every case, but this stage shows how it can be done, including the mechanics of obtaining security credentials from AWS Secrets Manager within the CodeBuild task.
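As a closing illustration, here's what a test-stage buildspec might look like. The secret id, collection path, and variable names are all assumptions; only the overall shape, installing Newman and reading credentials from Secrets Manager, reflects what the stage does:

```yaml
version: 0.2

env:
  secrets-manager:
    API_KEY: serverless-sample/api:key   # assumed secret id and JSON key

phases:
  install:
    commands:
      - npm install -g newman            # Postman's command-line test runner
  build:
    commands:
      # Run the checked-out collection against the freshly deployed service.
      - newman run tests/collection.json
          --env-var baseUrl=$SERVICE_URL
          --env-var apiKey=$API_KEY
```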