Deploying a Hugo Blog with AWS S3
Pardon the Cliché…
This is hardly the first blog to debut with a “How I Built This” writeup, and I’m certainly not the first to use Hugo and AWS S3—the inspiration to use S3 came from Graham Helton’s excellent blog. That said, I didn’t see anyone who used the AWS CLI for deployment. Using the AWS CLI means you should be able to copy-and-paste everything in this article, provided the AWS API doesn’t move around too much.
How it Works
This is a Hugo blog hosted via S3 and served through CloudFront. It’s not the easiest way to deploy a blog, but there are many advantages:
- S3 is cheap, so cost is extremely low.
- CloudFront is fast, so performance is high.
- Hugo has a lot of fantastic themes available, so the upfront front-end work is minimal.
- AWS patches for me, so security maintenance is minimal.
And, most importantly:
- It’s really, really easy to publish. I write an article in Obsidian, paste some shell commands, and the post is live.
The biggest downside is that the setup process can be unwieldy, but this guide will give you the exact shell commands I used to get a working blog. That should at least simplify things!
Requirements
You’ll need:
- An AWS account.
- A domain.
- A *NIX shell: WSL, macOS, any Linux distro. I use Ubuntu.
I’d recommend purchasing your domain via Route53, as that’s what is used in this guide. I’d also recommend deploying to us-east-1 for the same reason. Neither of these is required; following them just means you can copy my process exactly.
Additionally, my *NIX shell is an Ubuntu VM on my home Proxmox server, meaning it’s always online and remotely available. This has some nice advantages, but if you need to do everything from a single Windows machine, I’d recommend checking out Graham Helton’s post on using WSL.
With all that out of the way, let’s begin.
Deployment
There are 6 steps to go from blogless to blogful:
- Setting up Hugo
- Configuring the AWS CLI
- Creating an S3 Bucket
- Generating an HTTPS Certificate
- Distributing via CloudFront
- Updating your DNS Records
There’s also an optional step where we deploy Nginx to serve a private, always-on “preview” version of the blog, which makes it easy to tweak your blog’s appearance and preview articles before you make them public. It only makes sense if you have a separate VM for Hugo like I do.
Hugo Installation
Open your *NIX shell, wherever it is, and install some required packages. Assuming you have apt:
sudo apt install hugo unzip jq -y
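One caveat: the hugo package in Ubuntu’s repositories can lag well behind upstream, and some themes require a newer version. If your theme complains, you can install a recent release straight from Hugo’s GitHub instead (the version below is just an example; check the releases page for the latest):
# Example only: substitute the latest version from https://github.com/gohugoio/hugo/releases
wget https://github.com/gohugoio/hugo/releases/download/v0.125.4/hugo_0.125.4_linux-amd64.deb
sudo dpkg -i hugo_0.125.4_linux-amd64.deb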
Initialize a new site and update permissions:
sudo hugo new site /var/www/staging-blog
# Every command hence will assume we're in this dir:
cd /var/www/staging-blog
sudo chown -R "$USER":www-data *
Next, pick a theme. Hugo provides a gallery here. Most will link to a demo site, and all will link to a git repo. Copy the repo’s git link, e.g. https://github.com/athul/archie.git.
Once you’ve picked a theme, grab it for your site:
sudo git clone https://github.com/athul/archie.git themes/archie/
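A quick aside: I clone the theme directly, but many Hugo setups track the theme as a git submodule instead, which makes pulling theme updates easier later. If you keep your blog in git, something like this works (a sketch, assuming the same archie theme):
# Optional alternative: track the theme as a submodule instead of a plain clone
sudo git init
sudo git submodule add https://github.com/athul/archie.git themes/archie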
Next, we’ll copy the theme’s example site over our blog. This gives us a starting point:
sudo rm -rf content/ static/ config.toml
sudo cp -r themes/archie/exampleSite/* .
Next, let’s update the config file. Use whichever text editor you prefer; I like vi:
vi config.*
# If you know the domain you'll use, put that here
baseURL = "https://thievi.sh/"
# Update the title of your blog
title = "Thievi.sh"
# The name of your theme, per the directory in "themes/"
theme="archie"
Finally, build the site:
sudo hugo
If you aren’t going to use Nginx, you can preview your site with Hugo’s built-in web server instead.
# Site will be accessible via http://<srv_ip>:1313
# or http://127.0.0.1:1313 if running locally
sudo hugo server --bind=0.0.0.0
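One thing to know: Hugo skips any post marked as a draft (draft: true in its front matter) unless you ask for it. Add -D to include drafts in the preview:
# Include draft posts in the preview
sudo hugo server --bind=0.0.0.0 -D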
You can look into updating the visuals of your theme if you’d like; steps will vary depending on theme. If you make changes, rebuild and restart your Hugo server to view them.
If you aren’t interested in using Nginx, proceed to AWS CLI Installation. Otherwise, see below.
(Optional) Nginx Installation
First, grab the nginx package:
sudo apt install nginx -y
Next, remove the default site configuration and start a file for your staging site:
sudo rm /etc/nginx/sites-available/default
sudo vi /etc/nginx/sites-available/staging-blog
Here’s a basic HTTP config to get you started:
server {
    listen 80;
    listen [::]:80;
    server_name _;
    root /var/www/staging-blog/public;
    index index.html;
    location / {
        try_files $uri $uri/ =404;
    }
}
Now, full disclosure, I run a Certificate Authority within my homelab and already generated a server certificate for my Hugo server. This lets me use HTTPS. Here’s my config, which I only provide in case you, too, run your own CA:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    # Note: this is my Ubuntu server's domain name,
    # not my blog's public domain name
    server_name melete.gard.en;
    ssl_certificate /etc/nginx/ssl/melete.gard.en.crt;
    ssl_certificate_key /etc/nginx/ssl/melete.gard.en.key;
    root /var/www/staging-blog/public;
    index index.html;
    location / {
        try_files $uri $uri/ =404;
    }
}
Finally, create a symlink to enable the site and restart nginx:
sudo ln -s /etc/nginx/sites-available/staging-blog /etc/nginx/sites-enabled/
sudo systemctl restart nginx; sudo systemctl status nginx
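If nginx fails to come back up, you can validate the config file and get pointed at the offending line:
# Test the nginx configuration without restarting
sudo nginx -t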
Now, you can preview your blog at http://<srv_ip> (or https://<srv_ip> if you went through the extra work of certificates).
As before, you can look into updating the visuals of your theme, which will vary depending on what you chose. Furthermore, if you leave the nginx server running, you can preview your blog posts whenever you’d like without having to start/stop a web server. Just run sudo hugo!
AWS CLI Installation
As mentioned, we’re going to do everything from the AWS CLI.
Well, almost. First we need to set up the user we’ll be invoking from the CLI, so we’ll actually need to use the GUI just a little bit:
- Open the AWS management console in your browser and sign in to your account.
- Open the IAM dashboard, and click Users > Create user.
- Name your user. I went with “blogger”.
- When setting the permissions, click Attach policies directly.
- Search for and add these permissions:
  - AmazonS3FullAccess
  - AmazonRoute53FullAccess
  - AWSCertificateManagerFullAccess
  - CloudFrontFullAccess
- Once the user is created, click their name (e.g. blogger) in the Users list.
- Click Create access key > CLI > Next > Create access key.
- Copy the access key and secret access key to a temporary location.
With that out of the way, we can switch to the CLI. Open a shell on your Hugo server, wherever it is, and install the AWS CLI:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "/home/$USER/awscliv2.zip"
unzip ~/awscliv2.zip -d ~
sudo ~/aws/install
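You can confirm the install worked:
aws --version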
Next, add the user you created to your AWS config. We’re going to use us-east-1 as our region, but feel free to use something else:
aws configure
AWS Access Key ID: <ACCESS_KEY>
AWS Secret Access Key: <SECRET_ACCESS_KEY>
Default region name: us-east-1
Default output format: [Enter]
Remember to delete your access keys from wherever you were temporarily storing them.
Creating an S3 Bucket
First, to save time, let’s set an environment variable:
# Replace this with your blog domain
# e.g. example.com
SITE_NAME='thievi.sh'
Next, create the S3 bucket that will hold our site:
aws s3api create-bucket --bucket $SITE_NAME
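One gotcha: create-bucket only works like this in us-east-1. If you picked another region, the API requires an explicit LocationConstraint (eu-west-1 below is just an example):
# Only needed outside us-east-1; replace eu-west-1 with your region
aws s3api create-bucket --bucket $SITE_NAME \
  --region eu-west-1 \
  --create-bucket-configuration LocationConstraint=eu-west-1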
Remove the public access block from our bucket:
aws s3api delete-public-access-block --bucket $SITE_NAME
Next, we’ll create a bucket policy that will allow public access. AWS stuff like this is done in JSON, so we’ll quickly create a JSON file with our policy (and pretty-print it using jq so you can read it more easily):
echo '{"Version": "2012-10-17","Statement":[{"Sid":"PublicReadGetObject","Effect":"Allow","Principal":"*","Action":"s3:GetObject","Resource":"arn:aws:s3:::SITE_NAME/*"}]}' | sed "s/SITE_NAME/$SITE_NAME/g" | tee ~/public_access_policy.json | jq
Then we’ll apply this policy to our bucket:
aws s3api put-bucket-policy --bucket $SITE_NAME --policy file://$HOME/public_access_policy.json
Next, we’ll create a static website for our bucket. This will be the web server that presents our site:
aws s3 website s3://$SITE_NAME --index-document index.html
Finally, we upload our site to the S3 bucket:
aws s3 sync /var/www/staging-blog/public/ s3://$SITE_NAME/
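At this point the bucket’s website endpoint should already be serving your site over plain HTTP, which makes for a nice sanity check (this endpoint format is specific to us-east-1):
# Expect an HTTP 200 with Content-Type: text/html
curl -I http://$SITE_NAME.s3-website-us-east-1.amazonaws.com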
Generating an HTTPS Certificate
Presumably, you’re going to want your site to work via HTTPS, which means we’re going to need a certificate. Since we’re already using AWS, we can use AWS Certificate Manager.
First, we’ll request the certificate:
CERT_ARN=$(aws acm request-certificate --domain-name $SITE_NAME --validation-method DNS | jq -r '.CertificateArn')
Then, we’ll grab the DNS records required for validation:
aws acm describe-certificate --certificate-arn $CERT_ARN | jq -r '.Certificate.DomainValidationOptions[] | select(.ResourceRecord != null) | .ResourceRecord | "\(.Name) \(.Value)"' | uniq
This should give you output like this:
_d5ec35a010fd0c0dd93e94117f132c43.thievi.sh. _ca49fe1de6de8efc3138264fd354a7f5.mhbtsbpdnt.acm-validations.aws.
These records are provided so that you can prove your ownership of the domain. To do so, you create a CNAME record for the first value, pointed at the second value.
If your domain wasn’t purchased through Route53, you’ll have to do that manually. If it was, then we can do it from the AWS CLI.
First, we create a JSON file containing our DNS changes. Remember to update the “Name” and “Value” fields with your own values:
echo '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"_d5ec35a010fd0c0dd93e94117f132c43.thievi.sh.","Type":"CNAME","TTL":300,"ResourceRecords":[{"Value":"_ca49fe1de6de8efc3138264fd354a7f5.mhbtsbpdnt.acm-validations.aws."}]}}]}' | tee ~/verify_domain.json | jq
Then, we submit said changes:
aws route53 change-resource-record-sets --hosted-zone-id $HOSTED_ZONE_ID --change-batch file://$HOME/verify_domain.json
After a little while, we can check if validation was successful:
# Will return "SUCCESS" if domain was validated
aws acm describe-certificate --certificate-arn $CERT_ARN | jq -r '.Certificate.DomainValidationOptions[] | "\(.DomainName): \(.ValidationStatus)"'
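Validation can take a few minutes after the DNS change lands. If you’d rather block until it’s done, ACM ships a waiter:
# Polls periodically and returns once the certificate is issued
aws acm wait certificate-validated --certificate-arn $CERT_ARN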
Setting up CloudFront
As mentioned, we’ll use CloudFront to serve our S3 bucket. CloudFront is a globally distributed Content Delivery Network (CDN), meaning our website will be cached on hundreds of AWS proxy servers around the world. This is really good for performance, and gives us access to CloudFront’s neat analytics dashboards… but it’s also required to make HTTPS actually work with our current setup.
First, prepare a JSON file with our distribution config:
vi ~/create_distribution.json
Here’s a sample config which you can copy-and-paste directly. Essentially, we’re distributing our S3 bucket’s static site via ports 80 and 443, redirecting HTTP traffic to HTTPS, and presenting the SSL certificate we generated earlier. Note: don’t replace the “SITE_NAME” stuff yet, we’ll do that in a sec!
{
  "CallerReference": "SITE__NAME-distribution",
  "Aliases": {
    "Quantity": 1,
    "Items": ["SITE_NAME"]
  },
  "Origins": {
    "Quantity": 1,
    "Items": [
      {
        "Id": "SITE__NAME-S3-origin",
        "DomainName": "SITE_NAME.s3-website-us-east-1.amazonaws.com",
        "CustomOriginConfig": {
          "HTTPPort": 80,
          "HTTPSPort": 443,
          "OriginProtocolPolicy": "http-only"
        }
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "SITE__NAME-S3-origin",
    "ViewerProtocolPolicy": "redirect-to-https",
    "AllowedMethods": {
      "Quantity": 2,
      "Items": ["GET", "HEAD"]
    },
    "ForwardedValues": {
      "QueryString": false,
      "Cookies": {
        "Forward": "none"
      }
    },
    "MinTTL": 0
  },
  "Comment": "CloudFront distribution for SITE_NAME",
  "Enabled": true,
  "ViewerCertificate": {
    "ACMCertificateArn": "CERT_ARN",
    "SSLSupportMethod": "sni-only"
  }
}
Next, we’ll update the config with your own site name and certificate. You can just copy-and-paste these directly:
SITE__NAME=$(echo -n "${SITE_NAME}" | sed 's/\./-/g')
sed -i "s/SITE_NAME/$SITE_NAME/g" ~/create_distribution.json
sed -i "s/SITE__NAME/$SITE__NAME/g" ~/create_distribution.json
sed -i "s/CERT_ARN/$CERT_ARN/g" ~/create_distribution.json
Then, we can create the distribution using the JSON file:
aws cloudfront create-distribution --distribution-config file://$HOME/create_distribution.json
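CloudFront distributions take a while to roll out globally, often 5 to 15 minutes. You can poll the status, which flips from InProgress to Deployed once it’s live:
aws cloudfront list-distributions | jq -r --arg SITE_NAME "$SITE_NAME" \
  '.DistributionList.Items[] | select(.Aliases.Items[] == $SITE_NAME) | .Status'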
Finally, we’ll want to grab the distribution domain for later:
DIST_DOMAIN=$(aws cloudfront list-distributions | jq -r --arg SITE_NAME "$SITE_NAME" '.DistributionList.Items[] | select(.Aliases.Items[] == $SITE_NAME) | .DomainName')
Setting Up Your Domain
The last step is pointing our domain to our CloudFront distribution. All we need for this is an ALIAS record aimed at our CloudFront distribution domain.
As before, we’ll be using Route53; if you aren’t, you can create the record manually. I don’t know how, admittedly, so you’ll have to do your own research here.
If you are using Route53, though, then I’ve got shell commands for you. Prepare a JSON file with our required record:
# If you aren't using us-east-1, update the HostedZoneId
echo '{"Changes": [{"Action": "UPSERT","ResourceRecordSet": {"Name": "SITE_NAME","Type": "A","AliasTarget": {"HostedZoneId": "Z2FDTNDATAQYW2","DNSName": "DIST_DOMAIN","EvaluateTargetHealth": false}}}]}' | sed "s/SITE_NAME/$SITE_NAME/g" | sed "s/DIST_DOMAIN/$DIST_DOMAIN/g" | tee ~/set_dns.json | jq
As our very last step, submit the DNS record to Route53:
aws route53 change-resource-record-sets --hosted-zone-id $HOSTED_ZONE_ID --change-batch file://$HOME/set_dns.json
And you should be good to go! It may take a moment for your DNS changes to propagate. After a couple minutes, you should be able to visit your freshly deployed blog.
Next Steps
The example site you deployed probably has some sample blog posts that you'll want to remove. After that, when you're ready to write your own post, here's the process:
- Use any markdown editor to write your article. I like Obsidian.
- Copy the resulting markdown file to the /var/www/staging-blog/content/ directory.
- Run sudo hugo
- Run aws s3 sync /var/www/staging-blog/public/ s3://$SITE_NAME/
- Voilà!
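One caveat worth knowing: CloudFront caches your pages at its edge servers, so a fresh sync may not show up immediately. If you don’t want to wait for the cache to expire, you can invalidate it (the first 1,000 invalidation paths per month are free):
# Look up the distribution ID by its alias, then flush the whole cache
DIST_ID=$(aws cloudfront list-distributions | jq -r --arg SITE_NAME "$SITE_NAME" \
  '.DistributionList.Items[] | select(.Aliases.Items[] == $SITE_NAME) | .Id')
aws cloudfront create-invalidation --distribution-id $DIST_ID --paths "/*"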
Here’s how I do it from my Mac:
# Write the article in Obsidian, and then...
scp ~/obsidian/thievish/example.md melete:/var/www/staging-blog/content/
ssh melete -t 'cd /var/www/staging-blog/; sudo hugo'
# Preview the blog post locally if I want, and then...
# Note: assumes $SITE_NAME is set on the remote host (e.g. in ~/.bashrc)
ssh melete -t 'aws s3 sync /var/www/staging-blog/public/ s3://$SITE_NAME/'
And that’s it!
Conclusion
If this was helpful to you, do reach out and let me know—I’d be happy to check out your blog. If this guide was terrible and catastrophic for you, let me know about that too—I want to know where this article could be better, and will make updates accordingly.
This isn’t the easiest method to host a blog, but I think it’s a really good one. The initial effort is paid for via advantages in cost, performance, security, maintenance, and ease-of-publishing. Still, there are many ways to build a blog; choose whatever works for you. Cheers!