The Cloudkrunch Blog
— GitHub Actions, CI/CD, AWS, CloudFront, S3, Route53 — 7 min read
The Why
I am creating the Cloudkrunch blog to showcase my personal projects and establish a space that demonstrates my expertise. Previously, my engineering skills were enough to secure interviews, but after much contemplation I have come to realize the importance of a portfolio in today's cut-throat job market. Companies are looking for ways to streamline their recruitment processes, and failing to provide concrete evidence of your competence is a costly risk for them to take on. In this post, I will detail the steps I took to set up the blog, including the code and processes I used to build and run my infrastructure.
What you will need to replicate this setup
- An AWS account
- A registered domain that you can change the DNS records of
- The Terraform CLI installed in your environment (I used Terragrunt to manage the providers and backend state of my infrastructure, but that isn't covered in this article)
- npm
Overview of architecture
Here's an overview of what hosts the website
A list of what was built to facilitate it:
- Route53 alias to point the cloudkrunch.com domain to Cloudfront's generated domain
- ACM certificate to serve from Cloudfront
- S3 Bucket to store website
- IAM role for Github Actions to publish to the S3 Bucket
- Github action for deployments
- Gatsby template that will be the blog
Implementation
S3 Bucket
First, I started by making the S3 Bucket. It needs to be configured with CORS, a public-read ACL, and a website configuration.
```hcl
resource "aws_s3_bucket" "bucket" {
  bucket = local.bucket_name
}

resource "aws_s3_bucket_policy" "bucket" {
  bucket = aws_s3_bucket.bucket.id
  policy = templatefile("templates/s3-policy.json", { bucket = local.website_domain_name })
}

resource "aws_s3_bucket_acl" "bucket" {
  bucket = aws_s3_bucket.bucket.id
  acl    = "public-read"
}

resource "aws_s3_bucket_website_configuration" "bucket" {
  bucket = aws_s3_bucket.bucket.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "404.html"
  }
}

resource "aws_s3_bucket_cors_configuration" "bucket" {
  bucket = aws_s3_bucket.bucket.id

  cors_rule {
    allowed_headers = ["Authorization", "Content-Length"]
    allowed_methods = ["GET", "POST"]
    allowed_origins = ["https://www.${local.website_domain_name}"]
    expose_headers  = []
    max_age_seconds = 3000
  }
}
```
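The contents of templates/s3-policy.json aren't shown above, but a public-read website bucket policy typically looks like the sketch below. This is an assumption on my part, not the author's exact file; the `${bucket}` variable is the one passed in by `templatefile`.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${bucket}/*"
    }
  ]
}
```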
You'll need to fill in the values for local.website_domain_name and local.bucket_name on your side in a locals block. Note that bucket names have to be globally unique, so bucket creation will fail if the name is already taken.
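As an example, such a locals block might look like this sketch. All of the values are placeholders; substitute your own domain, bucket name, and hosted zone ID.

```hcl
locals {
  website_domain_name = "example.com"            # your registered domain
  bucket_name         = "example.com"            # must be globally unique
  zone_id             = "Z0123456789EXAMPLE"     # your Route53 hosted zone ID
  origin_id           = "s3-website-origin"      # any label for the Cloudfront origin
}
```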
ACM Certificate
Next, we'll need a certificate to use in the Cloudfront distribution. Cloudfront only accepts ACM certificates from us-east-1, so the certificate has to be created in that region regardless of where the rest of your infrastructure lives.
```hcl
resource "aws_acm_certificate" "cert" {
  domain_name               = local.website_domain_name
  validation_method         = "DNS"
  subject_alternative_names = ["*.cloudkrunch.com"]
  key_algorithm             = "RSA_2048"

  lifecycle {
    create_before_destroy = true
  }
}
```
I already had a hosted zone because I registered my domain through Route53; if your domain lives elsewhere, you will need to set up a hosted zone before you can use this code.
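If you'd rather look the hosted zone up than hard-code its ID, a data source like the sketch below works. This assumes the zone already exists; local.zone_id used in the records below would then come from this lookup instead of a literal value.

```hcl
# Look up the existing public hosted zone for the domain.
data "aws_route53_zone" "primary" {
  name         = local.website_domain_name
  private_zone = false
}

# Then, e.g.: zone_id = data.aws_route53_zone.primary.zone_id
```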
Make the DNS CNAME for domain validation.
```hcl
resource "aws_route53_record" "cert_dns" {
  allow_overwrite = true
  name            = tolist(aws_acm_certificate.cert.domain_validation_options)[0].resource_record_name
  records         = [tolist(aws_acm_certificate.cert.domain_validation_options)[0].resource_record_value]
  type            = "CNAME"
  zone_id         = local.zone_id
  ttl             = 60
}
```
Ask ACM to validate that you own the domain. This can take a couple minutes to finish running.
```hcl
resource "aws_acm_certificate_validation" "cert_validation" {
  # Note: this references the certificate's ARN, not the DNS record
  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = [aws_route53_record.cert_dns.fqdn]
}
```
Cloudfront
Next, we need to create a Cloudfront distribution that points to our newly made bucket. I went ahead and made it so that www and non-www calls serve the same site, but you can create a redirect to either one by setting up a second origin on your Cloudfront distribution and making another S3 Bucket.
```hcl
# Cloudfront distribution for the S3 site.
resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket_website_configuration.bucket.website_endpoint
    origin_id   = local.origin_id

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }

  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"

  aliases = [local.website_domain_name, "www.${local.website_domain_name}"]

  custom_error_response {
    error_caching_min_ttl = 0
    error_code            = 404
    response_code         = 200
    response_page_path    = "/404.html"
  }

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 31536000
    default_ttl            = 31536000
    max_ttl                = 31536000
    compress               = true
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn      = aws_acm_certificate.cert.arn
    ssl_support_method       = "sni-only"
    minimum_protocol_version = "TLSv1.1_2016"
  }
}
```
I went ahead and set the cache TTLs to the maximum (one year) because the website is static and every deploy invalidates the cache, but feel free to change them if you use this code.
Hosted Zone Aliases
Now that we have the S3 Bucket and Cloudfront up and running, we need to point our domain at the Cloudfront domain that was created.
```hcl
resource "aws_route53_record" "root-a" {
  zone_id = local.zone_id
  name    = local.website_domain_name
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.s3_distribution.domain_name
    zone_id                = aws_cloudfront_distribution.s3_distribution.hosted_zone_id
    evaluate_target_health = false
  }
}

resource "aws_route53_record" "www-a" {
  zone_id = local.zone_id
  name    = "www.${local.website_domain_name}"
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.s3_distribution.domain_name
    zone_id                = aws_cloudfront_distribution.s3_distribution.hosted_zone_id
    evaluate_target_health = false
  }
}
```
This resolves both www and non-www DNS lookups to Cloudfront.
CICD user
I use Github Actions to deploy this website, but before I show that part we need to create a user that the action can use to do the deployments for us. I created two policies: one allows the user to put/delete objects in the website's S3 Bucket, and the other lets it invalidate the cache of our Cloudfront distribution.
```hcl
resource "aws_iam_user" "github_cicd_website" {
  name = "github-cicd-website"
}

# Create policy for Cloudfront update permissions
data "aws_iam_policy_document" "cloudfront_policy_website_doc" {
  statement {
    actions   = ["cloudfront:CreateInvalidation"]
    resources = [aws_cloudfront_distribution.s3_distribution.arn]
  }
}

resource "aws_iam_policy" "cloudfront_policy_website" {
  name        = "cloudfront-policy"
  description = "A cloudfront policy that allows the user to update the website version"
  policy      = data.aws_iam_policy_document.cloudfront_policy_website_doc.json
}

resource "aws_iam_user_policy_attachment" "cloudfront_policy_website" {
  user       = aws_iam_user.github_cicd_website.name
  policy_arn = aws_iam_policy.cloudfront_policy_website.arn
}

data "aws_iam_policy_document" "s3_policy_website_doc" {
  statement {
    actions = [
      "s3:PutObject",
      "s3:ListBucket",
      "s3:DeleteObject",
      "s3:GetBucketLocation",
      "s3:PutBucketWebsite",
    ]
    resources = [
      "arn:aws:s3:::${local.bucket_name}/*",
      "arn:aws:s3:::${local.bucket_name}",
    ]
  }
}

resource "aws_iam_policy" "s3_policy_website" {
  name        = "s3-policy-website"
  description = "A S3 policy that allows the user to update the website in S3"
  policy      = data.aws_iam_policy_document.s3_policy_website_doc.json
}

resource "aws_iam_user_policy_attachment" "s3_policy_website" {
  user       = aws_iam_user.github_cicd_website.name
  policy_arn = aws_iam_policy.s3_policy_website.arn
}
```
Gatsby template
All the core infrastructure is created now, except for the Github Action we will use to deploy the site. I used a Gatsby template to create the blog because I'm not the best at frontend; it saved me a bunch of time building the site and let me focus on cloud engineering. I started by generating from this template:
```shell
npx gatsby new gatsby-starter-minimal-blog https://github.com/LekoArts/gatsby-starter-minimal-blog
```
I did some custom configuration to get HCL code blocks working with Prism JS, in src/@lekoarts/gatsby-theme-minimal-blog/styles/code.tsx:
```tsx
import Code from "@lekoarts/gatsby-theme-minimal-blog/src/components/code"
import { Prism } from "prism-react-renderer"

// @ts-ignore
(typeof global !== "undefined" ? global : window).Prism = Prism;

// Add new languages by `require()`-ing them.
// See https://github.com/PrismJS/prism/tree/master/components for a full list.
require("prismjs/components/prism-hcl");

export default Code
```
I added the gatsby-plugin-s3 plugin to easily do the S3 deploys.

```shell
npm i gatsby-plugin-s3
```
Then I added this to the gatsby-config.js:

```js
{
  resolve: `gatsby-plugin-s3`,
  options: {
    bucketName: 'cloudkrunch.com',
    acl: null, // This is super important so that it doesn't try to make the bucket
  },
},
```
Then I put the deploy script in package.json. You will need to replace {{ distribution }} with your Cloudfront distribution ID.

```json
"deploy": "gatsby-plugin-s3 deploy --yes; export AWS_PAGER=\"\"; aws cloudfront create-invalidation --distribution-id {{ distribution }} --paths '/*';"
```
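For context, the scripts section of package.json might then look like the sketch below. The build and develop entries are my assumption of what the Gatsby starter ships with, and {{ distribution }} remains the placeholder for your distribution ID.

```json
{
  "scripts": {
    "build": "gatsby build",
    "develop": "gatsby develop",
    "deploy": "gatsby-plugin-s3 deploy --yes; export AWS_PAGER=\"\"; aws cloudfront create-invalidation --distribution-id {{ distribution }} --paths '/*';"
  }
}
```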
That's all you need to set it up!
Github Action
For this part I created a super simple deployment in .github/workflows/deploy.yml:
```yaml
name: Deploy Gatsby Website

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@v2

      - uses: actions/setup-node@v2
        with:
          node-version: 18

      - name: Install dependencies
        run: npm install
        # I put my site in this folder; remove working-directory if you don't need a subdirectory
        working-directory: ./cloudkrunch

      - name: Build
        run: npm run build
        working-directory: ./cloudkrunch

      - name: Set AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2

      - name: Check user
        run: aws sts get-caller-identity

      - name: Deploy to S3
        run: npm run deploy
        working-directory: ./cloudkrunch
```
You will need to create an AWS access key in the IAM console for the user we created earlier, then copy the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values into the repository's secrets.
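Alternatively, the access key could be created in Terraform as well; here is a sketch of that approach, with the caveat that the key material then lives in your Terraform state, so the state must be treated as sensitive.

```hcl
resource "aws_iam_access_key" "github_cicd_website" {
  user = aws_iam_user.github_cicd_website.name
}

# Surface the values once so they can be copied into the repository secrets.
output "cicd_access_key_id" {
  value = aws_iam_access_key.github_cicd_website.id
}

output "cicd_secret_access_key" {
  value     = aws_iam_access_key.github_cicd_website.secret
  sensitive = true
}
```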
Overview
That's all of it! This was a really fun project to take on, so please check back for whatever I get up to next. Thank you for reading; please share the article if you found it helpful.