You may now be wondering… “Why?”
Well, if you are here, you were likely looking for a way to make these two work together.
The problem
There are three ways you can host a website in S3 (or at least three that I know of; if you know more, let me know in the comments):
1. Enable static website hosting on S3, then go to your DNS manager and update the root record to be a CNAME pointing to your S3 website URL.
2. Do everything in option 1, but this time create an AWS CloudFront distribution that points to your S3 bucket and request a certificate for your domain, which allows the CloudFront distribution to use that domain as an alias. Then map the root record to the CloudFront distribution as a CNAME.
3. Do everything in option 2, but entirely on AWS: use Certificate Manager together with Route53, connect them both to the CloudFront distribution you just created, and automate everything using Terraform or CloudFormation.
So, let’s go with option 1
It works flawlessly, but you don’t get any caching, meaning every time users visit your website it is fetched directly from S3, eating into your free tier.
What about option 3? (I know, I skipped one)
Well, this is the best choice: having everything on AWS, taking advantage of incredibly low latency, with everything automated and working. But what if you can’t have the domain on AWS? Then things get a bit more complex and you go back to option 2.
This happened to me today. I had the domain on Namecheap, and updating the records was a manual job, taking some of my precious time to log in to namecheap.com and update the records myself with the information AWS Certificate Manager gave me after I created the certificate MANUALLY.
But what if we fuse options 2 and 3? 🤪
Yeah, I know, I was starting to think I had gone crazy too. But it was an interesting process, and I realized Terraform was more powerful than I thought.
Let's start from the beginning.
Note: If you have not yet purchased the domain, please do so in AWS; it’s the best option so far. But if you want a cheaper domain, you can pair it with Cloudflare and follow this tutorial to hook the domain up to your AWS infra.
The solution
We are going to do several things today:

1. Point our domain to Cloudflare DNS.
2. Generate our Cloudflare API key with the DNS Edit scope.
3. Set up a Terraform directory with the AWS and Cloudflare providers.
4. Create Terraform scripts for the S3 bucket, the certificate, and the CloudFront distribution.
5. Create Terraform scripts for the Cloudflare records, using the certificate validation DNS challenge.
6. Have some pisco sour, since today is National Pisco Sour Day in Peru, and no matter where or when you are, every day is a good day for a pisco sour 🍸
I won’t explain steps 1 and 2, because they are pretty basic and you can figure them out with some googling.
Certificate creation and validation
Let’s start with our Terraform files. There will be several of them, because we want to keep things clean.
So, in our main.tf we need to set up our two providers, and that’s it. It will look something like this…
terraform {
  backend "http" {
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.74.0"
    }
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 3.0"
    }
  }
}

provider "cloudflare" {
  api_client_logging = true
}

provider "aws" {
  region = "us-east-1"
}
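One thing worth noting: the Cloudflare provider above is configured without credentials, so it will pick up the CLOUDFLARE_API_TOKEN environment variable (that’s the token from step 2). Also, the CloudFront script further down references var.domain, which needs to be declared somewhere. A minimal variables.tf could look like this (my sketch; the original files don’t show it):

```hcl
# variables.tf (assumed, not shown in the original post)
variable "domain" {
  description = "Root domain for the website, e.g. example.com"
  type        = string
  default     = "example.com"
}
```

With this in place you could also swap the hardcoded "example.com" strings in the other resources for var.domain and keep the whole setup in one place.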
Pretty simple, right? Now let’s get to the action.
We will create our certificate using the AWS provider:
resource "aws_acm_certificate" "certificate" {
  domain_name       = "example.com"
  validation_method = "DNS"
}
This certificate resource will output a few things (more info here); the only one we currently care about is domain_validation_options. These will be the records we need to register in our DNS manager, in this case Cloudflare.
So now we need to create a data entity that will retrieve our zone, previously created in Cloudflare. This is needed to get our zone_id, which Cloudflare will ask for when updating the records.
data "cloudflare_zone" "zone" {
  name = "example.com"
}

resource "cloudflare_record" "records" {
  for_each = {
    for dvo in aws_acm_certificate.certificate.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  zone_id         = data.cloudflare_zone.zone.id
  name            = each.value.name
  value           = each.value.record
  type            = each.value.type
  ttl             = 1
  allow_overwrite = true
}
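For intuition, each entry in domain_validation_options resembles the following. ACM generates the actual values; these are illustrative placeholders, not real records:

```hcl
# Illustrative shape of one domain_validation_options entry (values are fake):
# {
#   domain_name           = "example.com"
#   resource_record_name  = "_3f92c7a1bd3e.example.com."
#   resource_record_type  = "CNAME"
#   resource_record_value = "_84d1e0ab22c9.acm-validations.aws."
# }
```

So the for_each above simply turns each of these entries into one Cloudflare CNAME record.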
We could leave it at that, but since we want to automate everything, we will automate the validation process too! In fact, the validation is already happening, but Terraform doesn’t know whether it was successful or not. To achieve this we can create an aws_acm_certificate_validation resource.
resource "aws_acm_certificate_validation" "validation" {
  certificate_arn         = aws_acm_certificate.certificate.arn
  validation_record_fqdns = [for record in cloudflare_record.records : record.hostname]
}
This object will then wait for the certificate to be validated on AWS before marking itself as deployed.
Yay! We are done with the certificates! Now let’s move on to the easy part, the S3 bucket.
The S3 Bucket
This is pretty much standard, so I’ll just show you the code for this.
resource "aws_s3_bucket" "website" {
  bucket        = "example.com"
  acl           = "private"
  force_destroy = true

  lifecycle {
    prevent_destroy = false
  }
}
You may find the force_destroy and prevent_destroy keys somewhat disturbing. They are needed for the CI/CD pipeline; otherwise, whenever we need to recreate the bucket, Terraform will throw an error saying the bucket is not empty and cannot be deleted 🙄.
Before we jump into the CloudFront part, we need to create a CloudFront Origin Access Identity, which lets us protect our bucket from the outside and only allow CloudFront to reach it.
resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "Cloudfront Origin Access Identity"
}
This does not require many parameters; in fact, all of them are optional. But it creates an IAM identity our CloudFront distribution can use to authenticate against the bucket, which we will set up now.
We will create a policy that only allows our CloudFront origin access identity to access this bucket.
data "aws_iam_policy_document" "s3_policy" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.website.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.oai.iam_arn]
    }
  }
}

resource "aws_s3_bucket_policy" "s3_policy" {
  bucket = aws_s3_bucket.website.id
  policy = data.aws_iam_policy_document.s3_policy.json
}
Ok, now that our S3 bucket is protected, our work here is done. Let’s move on.
CloudFront 😈
Well, if you are at this point and have read everything above, then I will trust you with a tiny secret: I am no expert when it comes to CloudFront.
- When did you become an expert in CloudFront? - Last night
Sorry about that gif, but that’s exactly how I feel right now. This was my first project using CloudFront, and I am not proud of it, but it does the job.
Well, back to business
We will create our CloudFront distribution. Since there are a couple of sections where we need to use the same ID, we will also define it as a local value.
locals {
  s3_origin_id = "example.com_origin"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.website.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
    }
  }

  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"
  aliases             = [var.domain]

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = true

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  ordered_cache_behavior {
    path_pattern     = "*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = true
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_100"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    # Referencing the validation resource (rather than the certificate's arn
    # directly) makes CloudFront wait until the certificate is actually validated.
    acm_certificate_arn = aws_acm_certificate_validation.validation.certificate_arn
    ssl_support_method  = "sni-only"
  }
}
As you can see, we are using a few things from the other scripts: the bucket’s regional domain name, the origin access identity, and of course our certificate ARN for the HTTPS feature. Great! Now, one last thing to have our website up and running: the CNAME record that points the domain to the CloudFront distribution. (Cloudflare flattens CNAMEs at the zone apex, so this works even on the root domain.)
# Note: this resource needs a name other than "records",
# which is already taken by the validation records above.
resource "cloudflare_record" "website" {
  zone_id         = data.cloudflare_zone.zone.id
  name            = "example.com"
  value           = aws_cloudfront_distribution.s3_distribution.domain_name
  type            = "CNAME"
  ttl             = 1
  allow_overwrite = true
}
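Optionally, an outputs.tf (my own addition, not part of the original setup) can surface the CloudFront domain name so you can sanity-check the deployment after terraform apply:

```hcl
# outputs.tf (optional addition)
output "cloudfront_domain_name" {
  description = "The *.cloudfront.net domain the Cloudflare CNAME points to"
  value       = aws_cloudfront_distribution.s3_distribution.domain_name
}
```

After an apply, terraform output cloudfront_domain_name should match what your root record resolves to.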
Aaaaaaand we are done!
Now we just need to upload our website files to the S3 bucket, and you’ll have your website deployed on AWS, your DNS management on Cloudflare, and your domain wherever you want!
This is actually the first time I’ve done something like this on AWS. I know I have a big journey ahead of me, so if you have any comments or suggestions, please let me know in the comments section below.