Welcome back! In this part of our Self-Hosted Blog series, we'll build out the core network and server infrastructure using Terraform. Since I've commented the code, I won't delve into every line; instead I'll highlight some of the more interesting bits of the deployment.
Terraform
In its simplest form, a Terraform template can be written as a single file of resource declarations. That's fine if you never plan to reuse elements of your template, but what if you decide to later? A monolithic file is inefficient because you're forced to tear out resources and rework them into a new project. To make the code more modular we can use Terraform modules: mini templates that expose configuration options through variables. Each module becomes a self-contained block of infrastructure that can be cleanly integrated into another stack simply by defining a few parameters.
To help put this into context, let's take a look at the files and folder structure:
- main.tf - Stores our configuration definitions.
- variables.tf - Stores our variables.
- outputs.tf - Allows us to export values from a module for use elsewhere in the project.
Folder Structure:
As you can see below, we have a hierarchy of the same types of files.
- main.tf
- variables.tf
- modules/
  - network/
    - main.tf
    - variables.tf
    - outputs.tf
  - server/
    - main.tf
    - variables.tf
    - outputs.tf
The 'main.tf' file in the root directory will orchestrate the modules below it by defining which modules to include and which configuration values are passed into each module. Modules can pass outputs up to the parent main.tf for use elsewhere throughout the template. A main.tf file looks like this when it's used to orchestrate other modules:
provider "aws" {
profile = var.aws_profile
region = var.aws_region
}
module network{
source = "./modules/network"
tag_terraform-stackid = var.tag_terraform-stackid
vpc_cidr = var.vpc_cidr
subnet_cidr = var.subnet_cidr
your_public_ip = var.your_public_ip
}
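Modules expose values back to the parent through their outputs.tf. As a rough sketch of how that plumbing looks (the output name subnet_id and the resource name aws_subnet.main are placeholders for illustration, not necessarily what this project's network module uses):

# modules/network/outputs.tf
output "subnet_id" {
  description = "ID of the subnet created by the network module"
  value       = aws_subnet.main.id # placeholder resource name
}

# Root main.tf: another module can then consume the exported value
module "server" {
  source    = "./modules/server"
  subnet_id = module.network.subnet_id
}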
Depending on our project, we may find that some variables are common across multiple modules. To simplify this and avoid typos, we can centralise the values passed into the modules by setting them as variables in the root 'variables.tf' file. This provides one place to populate all variables for the project. We achieve this by setting the 'default' value for a given variable. A variables file looks like this:
variable "aws_profile" {
type = string
description = "The name of the AWS Profile to use on your machine:"
default = "default"
}
variable "aws_region" {
type = string
description = "AWS Region (e.g. us-east-2, ap-southeast-2):"
default = "us-east-2"
}
variable "tag_terraform-stackid" {
type = string
description = "The AWS resource tag 'Stack ID' applied to resources in this deployment:"
default = "blog"
}
variable "vpc_cidr" {
type = string
description = "Network block for your VPC in CIDR format (X.X.X.X/XX):"
default = "10.10.0.0/16"
}
variable "subnet_cidr" {
type = string
description = "Network block for the subnet within the VPC in CIDR format (X.X.X.X/XX):"
default = "10.10.1.0/24"
}
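If you'd rather not edit the defaults directly, Terraform will also automatically load a terraform.tfvars file from the root directory (or you can pass individual values with the -var flag), so you can keep your own settings separate from the template. For example:

# terraform.tfvars (example values only)
aws_profile = "default"
aws_region  = "ap-southeast-2"
vpc_cidr    = "10.10.0.0/16"
subnet_cidr = "10.10.1.0/24"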
Network
Location: /modules/network/main.tf
Before we define a server we need to create a network for it to communicate over. Since this is a pretty basic build we'll just need the essentials.
- VPC: A network
- Subnet: A smaller slice of the network.
- Route Table: Instructions on how traffic should flow in and out of the network.
- Internet Gateway: Connection to the internet.
- Security Group: Effectively AWS-level firewall rules attached to the server.
If we had requirements for increased availability we could also throw in:
- Additional Subnets in other Availability Zones.
- Auto-Scaling Group to automatically create and remove servers as load increases or decreases.
- Load Balancer to manage the connections to the group of servers in the autoscaling group.
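To make the essentials above concrete, here's a trimmed-down sketch of the kind of resources the network module's main.tf declares. Treat it as illustrative rather than a copy of the real module; the resource names and the security group rule set are assumptions.

# A VPC sized by the CIDR block passed into the module
resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr
}

# A single public subnet inside the VPC
resource "aws_subnet" "main" {
  vpc_id     = aws_vpc.main.id
  cidr_block = var.subnet_cidr
}

# Internet gateway so the subnet can reach the outside world
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
}

# Route table sending all outbound traffic via the internet gateway
resource "aws_route_table" "main" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
}

resource "aws_route_table_association" "main" {
  subnet_id      = aws_subnet.main.id
  route_table_id = aws_route_table.main.id
}

# Firewall rules attached to the web server
resource "aws_security_group" "web" {
  vpc_id = aws_vpc.main.id

  ingress {
    description = "HTTPS from anywhere (Cloudflare sits in front)"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH restricted to your public IP"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.your_public_ip] # assumed to already be in CIDR form, e.g. X.X.X.X/32
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}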
Server
Location: /modules/server/main.tf
We'll define our server in a module the same way we did the network so we can reuse it later. Here are a few highlights from the server module.
Dynamic AMI Retrieval:
When building an EC2 instance, you need to select an Amazon Machine Image (AMI). This tells AWS what type of virtual machine you want (Windows, Linux, etc.) and which version it should be running (Ubuntu 18.04, 20.04, etc.). AMI IDs change between AWS regions, so Ubuntu 20.04 in Sydney won't necessarily have the same ID as Ubuntu 20.04 in Northern Virginia, even though they're the same operating system. Since I don't know where you're going to deploy this stack, the resource below will detect the region you're running the template in and find the appropriate AMI ID automagically!
Warning: You may want to consider hard coding this after the initial deployment. The latest available AMI changes over time, so if you modify the Terraform template once a newer image is available, Terraform will see that the existing instance's AMI no longer matches and propose replacing the instance with a new one built from the latest AMI. You can avoid this by noting the AMI used in the initial deployment and hard coding it into your template. If you didn't take note of it at the time, running 'terraform plan' will show the expected changeset, including the old and new AMI IDs. Copy the old AMI into your template and re-run 'terraform plan' to verify the changeset before running 'terraform apply' to avoid complications.
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
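In the instance resource (shown in full at the end of this section), the ami argument simply references this lookup, and that's also the line to swap if you decide to pin the image later. A minimal illustration, with the resource name and instance type as placeholders:

resource "aws_instance" "blog" {
  ami           = data.aws_ami.ubuntu.id # dynamic lookup from the data source above
  instance_type = "t3.small"             # placeholder size

  # To pin the image after the initial deployment, note the ID from 'terraform plan'
  # and replace the ami line with a literal value, e.g.:
  # ami = "ami-0123456789abcdef0"
}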
SSH Key-Pair Injection
Instead of using passwords to protect the admin account on our server, we'll stick with best practice and use SSH keys. If you don't already have an SSH key pair, you'll need to generate one. If you already have one, just drop the text of your public key into the allocated variable of the root 'variables.tf' file as a string. This will inject it into the EC2 instance when it's built.
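Behind the scenes this is just an aws_key_pair resource fed by that variable. A quick sketch, where the variable and resource names are placeholders rather than the exact ones used in this stack:

# Root variables.tf (or the server module's variables.tf)
variable "ssh_public_key" {
  type        = string
  description = "Contents of your public key, e.g. 'ssh-ed25519 AAAA... user@host'"
}

# modules/server/main.tf
resource "aws_key_pair" "blog" {
  key_name   = "blog-admin"
  public_key = var.ssh_public_key
}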
Cloud-Init
Awesome, we have a server... but it's not doing anything. Most cloud providers and major operating systems support Cloud-Init which will run whatever code you want on the first system startup. This means we can automate the install of Ghost when our server first starts up. Thankfully, Ghost has a comprehensive CLI to deploy the entire product including free Let's Encrypt SSL certificates which we'll use for backend encryption between Cloudflare and the server.
Just a word on database security: if you're particularly cautious, you can change your 'root' database password after you've deployed the stack. The database password in our 'variables.tf' file that gets passed to the Ghost CLI is only used temporarily by Ghost to generate its own set of credentials, so changing the 'root' database password afterwards won't break anything.
locals {
  userDataScript = <<EOF
#cloud-config
system_info:
  default_user:
    name: ${var.sys_username}
repo_update: true
repo_upgrade: all
runcmd:
  - export PATH=$PATH:/usr/local/bin
  - apt-get update
  - apt-get upgrade -y
  - apt-get install nginx -y
  - ufw allow 'Nginx Full'
  - apt-get install mysql-server -y
  - mysql --host="localhost" --user="root" --execute="ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '${var.db_pass}';"
  - curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash
  - apt-get install -y nodejs
  - npm install ghost-cli@latest -g
  - mkdir -p /var/www/${var.cf_zone}
  - chown ${var.sys_username}:${var.sys_username} /var/www/${var.cf_zone}
  - chmod 775 /var/www/${var.cf_zone}
  - cd /var/www/${var.cf_zone}
  - sudo su ${var.sys_username} --command "cd /var/www/${var.cf_zone} && ghost install"
  - sudo su ${var.sys_username} --command "cd /var/www/${var.cf_zone} && ghost setup --url '${var.subdomain}.${var.cf_zone}' --sslemail '${var.ssl_email}' --db 'mysql' --dbhost 'localhost' --dbuser 'root' --dbpass '${var.db_pass}' --dbname '${var.db_name}'"
  - sudo su ${var.sys_username} --command "cd /var/www/${var.cf_zone} && ghost start"
EOF
}
Also, for those of us who struggle with code that isn't indented consistently, sorry, you'll have to look away for this one. If you indent the cloud-init script inline with the rest of the Terraform code, the first characters of the file become spaces and cloud-init won't interpret it correctly.
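For completeness, here's roughly how the AMI lookup, key pair and cloud-init script hang together in the instance resource itself. It's a sketch: the resource name and the instance_type, subnet_id and security group variables are placeholders rather than the module's exact code.

resource "aws_instance" "blog" {
  ami                    = data.aws_ami.ubuntu.id     # from the data source above (or a pinned literal ID)
  instance_type          = var.instance_type          # placeholder variable name
  subnet_id              = var.subnet_id              # passed down from the network module's output
  vpc_security_group_ids = [var.security_group_id]    # placeholder variable name
  key_name               = aws_key_pair.blog.key_name # the injected SSH public key
  user_data              = local.userDataScript       # the cloud-init script above
}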
Up Next
So that's our core network and server deployed. Next, we'll build out Cloudflare and deploy some automation!