Moving to the DevOps approach

Modis Posted 07 October 2022

When AWS first started talking about DevOps, the focus was on improving developers’ ability to repeatably and reliably deploy workloads. Now, we’re very comfortable having repeatable large-scale deployments driven by code, rather than by manual run sheets executed over SSH and RDP.

The lower human effort per release under a DevOps approach means that more releases can be done for the same cost, with less risk of inconsistent changes.

This naturally leads people to think about reliable operations not just before and after a release to an environment, but during it as well. Rolling updates, and considered, structured changes with backwards and/or forwards compatibility, taking into account data structures, database design, APIs and more, mean we have fluid updates that don’t wait for the time of day with the lowest customer usage (often sometime after midnight).

Moving to a full-stack DevOps pipeline is not simple. It requires commitment to automate every part of a deployment. We look at this as an up-front investment in automation, the payback for which will occur in the years of agility to come.

A key element is knowing where your release artifacts are.

These should all come from a revision control system, such as a Git repository. For this, we often use AWS CodeCommit: we check in the binaries and scripts used to deploy a workload, even if they are part of a third-party software vendor’s supplied package (COTS).

The approach works for both virtual-machine-based workloads, and Serverless workloads.

In addition to our own code, there are several CloudFormation templates that are also checked in, as well as database schema management tooling and scripting, if required.

A simple static web site

A simple workload to start with is a static web site. It’s cost effective, and it’s easy to combine the components to make this a Well-Architected workload; we then add a DevOps approach to bring automation to it.

There are two parts that come under the DevOps approach:

  • Cloud Infrastructure
  • Web Content

The cloud infrastructure for the CV challenge typically looks like:

  • An S3 Bucket, with Public Access blocked, hosting content in some prefix. We’ll choose one prefix for production content, and one for non-production.
  • A second S3 Bucket, with Public Access blocked, but configured to accept logging from S3 Access Logging, and from CloudFront, with a lifecycle policy for these logs.
  • An ACM certificate for the desired view URL, and a DNS entry configured to permit issuance.
  • A pair of CloudFront distributions, one for production and one for testing, each with its origin set to the S3 bucket using an Origin Access Identity and the relevant prefix, using the aforementioned ACM certificate, and logging to the logging S3 Bucket. Bonus points for setting various security headers, and for using strong, modern TLS versions and ciphers.
  • A pair of DynamoDB tables, one in production and one in testing, that will keep a hit count.
  • A pair of Lambda functions, one for production and one for testing, which will access the DynamoDB tables accordingly for the hit count.
  • A Lambda function that will flush (invalidate) the CloudFront cache.
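The hit-counter pair above can be sketched in a Lambda handler. This is a minimal illustration, not the article’s actual code: the table’s partition key `site`, the attribute `hits`, and the `TABLE_NAME` environment variable are all assumptions. DynamoDB’s `ADD` update action is atomic, so concurrent page views don’t lose counts.

```python
# Hedged sketch of the hit-counter Lambda; key/attribute names are assumptions.
import json
import os


def bump_hit_count(table, site: str) -> int:
    """Atomically increment and return the hit count for one site."""
    resp = table.update_item(
        Key={"site": site},
        UpdateExpression="ADD hits :one",  # atomic add; creates the item if absent
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return int(resp["Attributes"]["hits"])


def handler(event, context):
    import boto3  # deferred so the helper above stays testable without AWS

    table = boto3.resource("dynamodb").Table(os.environ["TABLE_NAME"])
    count = bump_hit_count(table, event.get("site", "cv"))
    return {"statusCode": 200, "body": json.dumps({"hits": count})}
```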

The above is implemented as CloudFormation templates, with parameters to indicate the environment when deployed.
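The environment-parameterised deployment might be driven as follows. This is a sketch only: the stack name `static-site-*`, the `Environment` parameter name, and the environment values are assumptions, not the article’s actual templates.

```python
# Hedged sketch: deploying an environment-parameterised CloudFormation
# template. Stack and parameter names are assumptions.


def stack_parameters(environment: str) -> list:
    """Build the CloudFormation Parameters list for a given environment."""
    if environment not in ("testing", "production"):
        raise ValueError("environment must be 'testing' or 'production'")
    return [{"ParameterKey": "Environment", "ParameterValue": environment}]


def deploy_stack(environment: str, template_body: str) -> None:
    """Create the stack for one environment (requires AWS credentials)."""
    import boto3  # deferred so stack_parameters stays testable without AWS

    cfn = boto3.client("cloudformation")
    cfn.create_stack(
        StackName=f"static-site-{environment}",
        TemplateBody=template_body,
        Parameters=stack_parameters(environment),
        Capabilities=["CAPABILITY_IAM"],  # the templates create IAM roles for Lambda
    )
```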

Next, we set up a git repository, where we load a set of web documents (images, CSS, JS, and HTML pages). You can develop the web content as you want, but I chose to use Bootstrap as a web framework (particularly Bootstrap 5+, which removes the need for jQuery).

When content is checked in, a CodePipeline reacts by unpacking the contents of the repo into the S3 Bucket. The pipeline then calls a Lambda function to flush the CloudFront cache for the testing environment, and waits for confirmation to proceed. It then does the same for production.
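The cache-flush Lambda the pipeline calls could look like the following sketch. The `DISTRIBUTION_ID` environment variable and the flush-everything `/*` path are assumptions; `CallerReference` must simply be unique per request.

```python
# Hedged sketch of the CloudFront cache-flush (invalidation) Lambda.
import os
import time


def invalidation_request(paths: list) -> dict:
    """Build the CreateInvalidation batch; CallerReference must be unique."""
    return {
        "Paths": {"Quantity": len(paths), "Items": paths},
        "CallerReference": str(time.time()),
    }


def handler(event, context):
    import boto3  # deferred so invalidation_request stays testable without AWS

    cloudfront = boto3.client("cloudfront")
    cloudfront.create_invalidation(
        DistributionId=os.environ["DISTRIBUTION_ID"],
        InvalidationBatch=invalidation_request(["/*"]),  # flush everything
    )
```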

Moving beyond one environment

We can use the same S3 Bucket to host the content for multiple web sites, with both non-production and production copies, and a common Origin Access Identity that CloudFront uses to securely fetch the content for the edge.

Hence our Website Bucket may have a structure of:

  • /dev/site1/
  • /dev/site2/
  • /prod/site1/
  • /prod/site2/

We can publish updates to site1 in development, and then in production. Alternatively, we may want a separate bucket – in a separate AWS Account – for our non-production environments.
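Publishing into that layout reduces to mapping a site and environment onto a key prefix. A minimal sketch, assuming the bucket name `my-website-bucket` (a placeholder) and the prefix scheme above:

```python
# Hedged sketch: mapping site + environment onto the bucket layout above.


def content_prefix(environment: str, site: str) -> str:
    """Return the S3 key prefix for one site in one environment."""
    if environment not in ("dev", "prod"):
        raise ValueError("environment must be 'dev' or 'prod'")
    return f"{environment}/{site}/"


def publish(environment: str, site: str, filename: str, body: bytes) -> None:
    """Upload one HTML document to the right prefix (needs AWS credentials)."""
    import boto3  # deferred so content_prefix stays testable without AWS

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="my-website-bucket",  # placeholder bucket name
        Key=content_prefix(environment, site) + filename,
        Body=body,
        ContentType="text/html",
    )
```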

Note that we’re not using S3’s native (and legacy) website hosting option; all content is served from CloudFront.

Logging is an important aspect, and there are a minimum of two logs that we should cater for: the S3 Server Access Log, which should only contain entries from CloudFront fetching content, and the CloudFront logs for each distribution. One S3 Bucket, with a Lifecycle Policy, should be able to satisfy this in the single-AWS-account deployment; Bucket Policies must be set to permit S3 and CloudFront to log to the nominated logging Bucket.
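The lifecycle policy on the logging bucket might be applied as below. The `logs/` prefix and 90-day retention are arbitrary illustrative choices, not values from this article.

```python
# Hedged sketch: a lifecycle rule expiring access logs after a retention
# period. Prefix and day count are assumptions.


def log_lifecycle_rules(prefix: str = "logs/", days: int = 90) -> list:
    """Build the lifecycle Rules list for the logging bucket."""
    return [
        {
            "ID": f"expire-{prefix.rstrip('/')}",
            "Filter": {"Prefix": prefix},
            "Status": "Enabled",
            "Expiration": {"Days": days},  # delete logs after the retention period
        }
    ]


def apply_lifecycle(bucket: str) -> None:
    import boto3  # deferred so log_lifecycle_rules stays testable without AWS

    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={"Rules": log_lifecycle_rules()},
    )
```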

The initial development environment’s CloudFront distribution may have a hostname (CNAME) of “dev.$domain”, and this should fetch from the correct prefix in S3.

The above architecture diagram shows the CodePipeline unpacking content to a prefix and flushing the CloudFront distribution (to make the changes visible as soon as possible). The pipeline is now extended with a validation step, and then a repeat to unpack to the production S3 prefix, and a flush of the production CloudFront distribution.

The Cloud Resume Challenge

A great way to start is with a simple web workload that everyone can buy into: their own CV. Forrest Brazeal has written about The Cloud Resume Challenge, which essentially engages individuals to host their own CV as a web page. But rather than setting the bar that low, it suggests readers extend themselves a little and include a dynamic hit counter, displayed on the page, rather than just static content.

When individuals do this on their own wallet, they strive to be as cost effective as possible. That sharpens focus. And if they can do that for themselves, then learning to apply the same approach to their employers’ workloads is a key advantage.

We extend this challenge to add a DevOps pipeline across the publication of content. This has two benefits: it shows more capability in the individual, and it permits the user to update their CV more easily in future.

We also extend this to push the barrier in online security: is CloudFront configured securely, and does the static site score well on third-party web security tools, like SSL Labs, Security Headers, or Hardenize?
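A quick self-check for those header-based scores can be done with the standard library alone. The header list below is a small, non-exhaustive assumption about what tools like Security Headers look for, not a definitive rubric:

```python
# Hedged sketch: checking a deployed site for common security headers.
from urllib.request import urlopen

WANTED = [
    "strict-transport-security",
    "content-security-policy",
    "x-content-type-options",
    "referrer-policy",
]


def missing_security_headers(present: set) -> list:
    """Return which of the wanted headers are absent (case-insensitive)."""
    lowered = {h.lower() for h in present}
    return [h for h in WANTED if h not in lowered]


def check(url: str) -> list:
    """Fetch a URL and report missing security headers (needs network access)."""
    with urlopen(url) as resp:
        return missing_security_headers(set(resp.headers.keys()))
```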

Footnote: a personal example

I invite you to browse my CV. The footnotes demonstrate the refinement of this approach and provide a comprehensive insight into the process. I can vouch for it not having cost me more than a few cents since I wrote it, some three or four years ago, but you’ll find the web template is reasonably up to date (Bootstrap 5.2 as I write this), and the ratings on the tools referenced above are quite strong. (Tip: try using those tools against your corporate home page.)



About Modis (soon to be Akkodis)

Modis, soon to become Akkodis, is a global leader in the engineering and R&D market that is leveraging the power of connected data to accelerate innovation and digital transformation.
With a shared passion for technology and talent, 50,000 engineers and digital experts deliver deep cross-sector expertise in 30 countries across North America, EMEA and APAC. Modis offers broad industry experience, and strong know-how in key technology sectors such as mobility, software & technology services, robotics, testing, simulations, data security, AI & data analytics. The combined IT and engineering expertise brings a unique end-to-end solution offering, with four service lines – Consulting, Solutions, Talent, and Academy – to support clients in rethinking their product development and business processes, improve productivity, minimize time to market and shape a smarter and more sustainable tomorrow.
Modis is part of the Adecco Group.
