terraform & ansible for deploying an 11ty site at digitalocean

Introduction

This post will document my method for creating a droplet (VPS) at digitalocean to host an 11ty website. This will be a single nginx server with a domain, a certificate set up for https, a firewall using ufw, etc. The goal is to make this process quick and reproducible using terraform and ansible. These tools are probably overkill for this application; it is much easier to use doctl, or to do everything by hand, than to learn terraform and ansible.

However, my long-term goal is to use these tools for applications where multiple servers sit behind a load balancer. For that task, using terraform and ansible makes much more sense; think of these notes as documenting practice and learning.

prerequisites

In order to get started the following should be available ( see here for a github repository ):

  • The 11ty-base-blog will be used as an example of a current eleventy site for demonstration purposes. It is also provided in the github repository for this post, linked above.
  • Docker should be installed ( see Install docker engine on Pop!_OS 22.04 for my notes ) if you want to use Docker to build the 11ty blog like I do here. This is NOT required if you already have nodejs installed, as detailed below.
  • Ansible should be installed ( see install ansible on linux for my notes ),
  • terraform should be installed ( see install terraform on linux for my notes ),
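As a quick sanity check before starting, a small shell function (my own addition, not part of the repository) can report which of these tools are on the PATH:

```shell
# report which prerequisite tools are installed; returns the number missing
check_tools() {
    missing=0
    for tool in "$@"; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "$tool: found"
        else
            echo "$tool: MISSING"
            missing=$((missing + 1))
        fi
    done
    return "$missing"
}

check_tools docker ansible terraform || echo "install the missing tools before continuing"
```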

Build the 11ty blog

The section below, using Docker, is completely optional. If nodejs is already installed on your machine, you can install 11ty, build the blog, and move the output _site directory to the root of the repository using

$ cd eleventy-base-blog-main/   
$ npm install
$ npm run build
$ cp -r _site/ ../
$ rm -r _site
$ cd ..

The above commands result in a built blog with the output directory in the location expected by later ansible commands.
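Before moving on, it is worth confirming that the build actually produced a deployable site; a minimal sketch (the site_ready helper is my own, not part of the repository):

```shell
# check that a built 11ty site exists at the given path
site_ready() {
    [ -f "$1/index.html" ]
}

if site_ready _site; then
    echo "_site looks ready to deploy"
else
    echo "_site/index.html not found; run the build first"
fi
```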

Using Docker to build the blog

If you have Docker installed, the github repository for this blog post has a Dockerfile and a bash script that will build the blog. There is no need to install nodejs locally because Docker will do that in a container:

  • Docker will download nodejs, install 11ty, and build the blog.
  • The bash script runs Docker, copies the created _site directory from the Docker container to the local directory and does some cleanup of the Docker container and image used.
  • A blog post with much more detail about this approach is located here

The Dockerfile, called Dockerfile.build, looks like this:

Dockerfile.build

## Dockerfile.build

# set the base image
FROM node:20-alpine

WORKDIR /app

# copy package.json and package-lock.json to image
COPY eleventy-base-blog-main/package*.json ./

# install 11ty dependencies
RUN npm install

# copy files to /app in container
COPY eleventy-base-blog-main/ .

# run build command for production
RUN npm run build

and the bash script that runs Docker and does some cleanup is called 11ty-build.sh:

11ty-build.sh

#!/bin/bash

if [ -d _site ]; then
    # if _site exists, delete and reconstruct
    echo "...removing old version of _site"
    rm -r _site
fi

# create 11ty-build image
docker build -f Dockerfile.build -t 11ty-build .

# create 11ty-container
docker run --name 11ty-container 11ty-build

# copy _site directory to host
docker cp 11ty-container:/app/_site _site

# [optional] cleanup image and container
docker container rm 11ty-container
docker rmi 11ty-build:latest

Building the blog

Using the files above, the blog is created by running the bash script:

$ ./11ty-build.sh

This will create the _site directory that contains the 11ty blog html/js/css and is ready to upload to a webserver.

Terraform to create the server

The next step is to create a droplet at digitalocean using terraform. The goal is to have the infrastructure in place, but leave installing most of the software to ansible. In this process, terraform has to interact with digitalocean and requires that the user

  1. has a digitalocean account, see here,
  2. has uploaded an ssh key to digitalocean, see here,
    • in this example it is assumed that the ssh key uploaded to digitalocean has been named do_test
  3. has generated a personal access token for digitalocean and stored it locally for access via the shell, see here. Be careful with this token: it is private like a password, and is only shown once, at the time of creation! You will also want the token to have full read/write permissions.
    • in a previous post I showed how to use secret-tool to safely store and access this private token when using terraform.
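For reference, that secret-tool workflow looks roughly like the sketch below. The store command is run once and prompts for the token; the lookup_do_token helper and the DO_TOKEN environment-variable fallback are my own additions, not part of the repository:

```shell
# one-time: store the token in the keyring (prompts for the secret)
#   secret-tool store --label="digitalocean token" token digitalocean

# look the token up on demand; fall back to the DO_TOKEN environment
# variable on machines without secret-tool
lookup_do_token() {
    secret-tool lookup token digitalocean 2>/dev/null || printf '%s' "${DO_TOKEN:-}"
}
```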

I originally created multiple files for the terraform code, mimicking a common practice for larger projects. However, this creates a lot of clutter, so I decided to move back to one file: digitalocean-ubuntu-nginx.tf. The different parts of the file are still discussed separately below.

terraform file

This section defines the provider information, in this case digitalocean. Notice the token is provided using a variable. This variable is defined later in the file and assigned a value when terraform is actually run.

digitalocean-ubuntu-nginx.tf - provider

##
## provider
##
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

provider "digitalocean" {
  token = var.do_token
}

The second section, server, defines the properties of the digitalocean droplet, running ubuntu 22.04, that will be created. The ssh key is obtained from the data section, shown below. A resource for the domain is added, with the domain name assigned by a variable declared later in the file; the value of the domain is supplied when terraform is run. Finally, the output block prints the ip address of the droplet once it is created.

digitalocean-ubuntu-nginx.tf - server

##
## server
##
resource "digitalocean_droplet" "www" {
  image  = "ubuntu-22-04-x64"
  name   = "www-nginx"
  region = "nyc3"
  size   = "s-1vcpu-1gb"
  ipv6   = true
  ssh_keys = [
    data.digitalocean_ssh_key.do_test.id
  ]
}

# add domain
resource "digitalocean_domain" "default" {
  name       = var.domain
  ip_address = digitalocean_droplet.www.ipv4_address
}

# add an A record for www.domain
resource "digitalocean_record" "www" {
  domain = digitalocean_domain.default.name
  type   = "A"
  name   = "www"
  value  = digitalocean_droplet.www.ipv4_address
}

# output server's ip
output "ip_address" {
  value = digitalocean_droplet.www.ipv4_address
}

The data section retrieves the ssh key named "do_test" from digitalocean. This key should already be uploaded to digitalocean at their website, or by using doctl.

digitalocean-ubuntu-nginx.tf - data

##
## data
##
# ssh key uploaded to digitalocean
data "digitalocean_ssh_key" "do_test" {
  name = "do_test"
}

The final section of the file defines the two variables used above. The actual values assigned to variables are provided at the command line, or in a script/makefile/etc.

digitalocean-ubuntu-nginx.tf - variables

##
## variables
##
variable "do_token" {
  type        = string
  description = "Personal access token setup at digitalocean."
}

variable "domain" {
  type        = string
  description = "The domain name for the server"
  default     = "example.com"
}

terraform commands

The basic commands needed to deploy the digitalocean droplet are mostly straightforward. I will go through the main commands here; note that a makefile collecting them all is provided below to make them easy to use. To start, the digitalocean provider needs to be downloaded and installed using:

$ terraform init

The above command only needs to be run once. Once the digitalocean provider is installed we can start. It's good to check formatting and validate the files we've created using

$ terraform fmt .
$ terraform validate .

Note that both of those commands end with a "dot", or "period", telling terraform to consider the current directory. The terraform fmt . command outputs the names of any files it changed; these files might need to be reloaded if they are open in an editor. The terraform validate . command verifies that the code is valid, or lists errors to correct. Both commands can and should be run again whenever any of the terraform files change. Next, preview the resources terraform would create:

$ terraform plan \
-var "do_token=$(secret-tool lookup token digitalocean)" \
-var "domain=example.com"

This command should output the details for creating the domain and the droplet. At this point nothing has been created; this is a list of the actions terraform will take if we apply them. So, check that things look okay before moving on.

A couple of notes on the command:

  • The secret-tool command has been used to store, and now to look up, the private digitalocean token. See here for more info. You don't have to secure your token this way, but it does need to be accessible to this command.
  • The domain has been set to example.com. This should be changed to whatever domain you have registered and pointed towards the digitalocean name servers.
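As an aside, terraform can also pick up variable values without -var flags: any variable can be supplied through an environment variable named TF_VAR_&lt;name&gt;, and non-secret values can live in a terraform.tfvars file. A sketch of that alternative (my addition, not used in the rest of this post):

```terraform
# terraform.tfvars -- non-secret values only; never commit a real token
domain = "example.com"

# the token can instead come from the environment, e.g.:
#   export TF_VAR_do_token="$(secret-tool lookup token digitalocean)"
```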

warning

A reminder that launching a droplet on digitalocean is NOT free. So, proceed with caution and make sure you have the funds available to move forward :)

Finally, a droplet can be created according to our specs using:

$ terraform apply \
-var "do_token=$(secret-tool lookup token digitalocean)" \
-var "domain=example.com"

This command will require you to answer yes at a prompt before launching. Remember, this droplet costs money ($6/month as of Mar 2025) for as long as it is up and running. Once the command completes, the ip assigned to the droplet is printed out.

At this point a droplet should be up and running. You can log in to your account at digitalocean and see that the droplet and domain are listed. You should also be able to ssh to the server using the root account.

Alternatively, if you have doctl installed ( see here for more on doctl ) you can try the following commands to list all of your droplets and all of your domains:

$ doctl compute droplet list
$ doctl compute domain list

These lists should include the droplet and domain you just added. Next up, we use ansible to install software and set up the website. However, you might want to destroy these resources after seeing how things work; if so, see below for reference.

destroying the droplet

Finally, you'll want to destroy these resources after you are finished playing with them. Remember, droplets do cost money proportional to the time they are up and running. Fortunately, this is simple:

$ terraform apply -destroy \
-var "do_token=$(secret-tool lookup token digitalocean)" \
-var "domain=example.com"

a makefile for terraform commands

Rather than remember all of the terraform commands listed above, I like to use a makefile, like the one shown below:

mk-terraform

SHELL=/bin/bash

# get digitalocean token
DO_TOKEN := $(shell secret-tool lookup token digitalocean)

# set domain name
DOMAIN := example.com

# a default announcement
define ANNOUNCE_BODY
Makefile for terraform commands.

tf-init
- runs "terraform init"
- this should be run first!

tf-fmt
- runs "terraform fmt ."
- this formats terraform files in current directory

tf-validate
- runs "terraform validate ."
- validates all files in current directory

tf-plan
- runs "terraform plan"

tf-apply
- runs "terraform apply"
- you will have to type 'yes'

tf-destroy
- runs "terraform apply -destroy"
- you will have to type 'yes'

tf-refresh
- runs "terraform refresh"

endef

export ANNOUNCE_BODY
all:
	@echo "$$ANNOUNCE_BODY"

tf-init:
	@ terraform init

tf-fmt:
	terraform fmt .

tf-validate:
	terraform validate .

tf-plan:
	@ terraform plan -var "do_token=$(DO_TOKEN)" -var "domain=$(DOMAIN)"

tf-apply:
	@ terraform apply -var "do_token=$(DO_TOKEN)" -var "domain=$(DOMAIN)"

tf-refresh:
	@ terraform refresh -var "do_token=$(DO_TOKEN)" -var "domain=$(DOMAIN)"

tf-destroy:
	@ terraform apply -destroy -var "do_token=$(DO_TOKEN)" -var "domain=$(DOMAIN)"

Notice that the

  • digitalocean token, and
  • the domain name
are assigned at the top of the file; these should be changed to reflect your setup!

Running the makefile without a target shows the "ANNOUNCE_BODY":

$ make -f mk-terraform 
Makefile for terraform commands.

tf-init
- runs "terraform init"
- this should be run first!

tf-fmt
- runs "terraform fmt ."
- this formats terraform files in current directory

tf-validate
- runs "terraform validate ."
- validates all files in current directory

tf-plan
- runs "terraform plan"

tf-apply
- runs "terraform apply"
- you will have to type 'yes'

tf-destroy
- runs "terraform apply -destroy"
- you will have to type 'yes'

tf-refresh
- runs "terraform refresh"

Running the makefile with one of the targets listed above runs the corresponding command with the proper token, domain, etc. For example, run plan with the command

$ make -f mk-terraform tf-plan

If you look carefully at the makefile, many of the terraform commands start with an @, preventing sensitive tokens from being echoed to the terminal or logged. Nice.

That's it for terraform in this post. There are two ways to create and destroy all of the resources we need for the ubuntu/nginx server: (1) use single commands at the terminal and/or (2) use the makefile detailed above. Either will work.

Ansible to provision the server

Ansible will be the primary tool I use for installing and updating software, uploading the website, handling certbot for creation of the SSL certificate, etc. The official Ansible documentation is a useful reference for more information.

The starting point for using ansible is one or more servers, in this case ubuntu, up and running. The terraform code above outputs an ip once digitalocean has provisioned the droplet. This ip should be added to an inventory file, as shown below.

inventory file

First, I'll set up an inventory file for the root account used by terraform, named inventory_root.ini. An inventory file lists all machines (only one in this case) that ansible will access. In this simple example, the file looks like

inventory_root.ini

[webservers]
www-nginx ansible_host=xxx.xxx.xx.xxx

[webservers:vars]
ansible_user=root
ansible_ssh_private_key_file=/home/username/.ssh/do_test
ansible_python_interpreter=/usr/bin/python3

There should be one line, with an ip, for each server under the "webservers" section. Notice that the user, ssh key, and location of the python interpreter are listed in the variables. These would apply to all "webservers", if there were more.
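Since the ip changes every time the droplet is recreated, it can be handy to script this file. A sketch (write_root_inventory is my own helper, and 203.0.113.10 is a documentation placeholder for the ip printed by terraform):

```shell
# regenerate inventory_root.ini for a freshly created droplet
write_root_inventory() {
    ip="$1"
    cat > inventory_root.ini <<EOF
[webservers]
www-nginx ansible_host=${ip}

[webservers:vars]
ansible_user=root
ansible_ssh_private_key_file=/home/username/.ssh/do_test
ansible_python_interpreter=/usr/bin/python3
EOF
}

# e.g. with the ip reported by "terraform apply"
write_root_inventory 203.0.113.10
```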

a first ssh

Before starting with ansible commands, the server fingerprint needs to be added to your ~/.ssh/known_hosts. A simple way to do this is to ssh to the server and accept the fingerprint during the first session. Obviously, substitute the ip assigned to your droplet.

$ ssh root@xxx.xxx.x.xx

ping the server

Next up, we can "ping" the server to make sure that communication is working. The response should look like this:

$ ansible all -i inventory_root.ini -m ping
www-nginx | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

create non-root user and secure server

Before moving on to updating the server and installing software, it is good practice to create a non-root sudo user. A first step is to create an ssh key in a local directory ( this follows guidance at digitalocean ):

$ mkdir ssh
$ ssh-keygen -t rsa -b 4096 -f ssh/ansible_user
$ ls ssh/
ansible_user ansible_user.pub

Listing the contents of the ssh directory shows that the ssh key pair has been created locally.

Next up, we'll use ansible to set up the ansibleuser account with the ssh key created above. The tasks needed to do this are in the "SERVER SETUP" section of the playbook. Note that I have defined variables used throughout the playbook at the top of the file. I have also tagged these tasks with "never" and "setup". This combination of tags means the tasks will NEVER BE RUN unless the --tags "setup" flag is passed. This section of the file looks like:

playbook.yml -- SERVER SETUP

---
- hosts: all
  become: true
  vars:
    username: ansibleuser
    domain: example.com
    local_html: _site
    email: me@example.com

  tasks:
    #
    # SERVER SETUP
    #
    # reference
    # https://www.digitalocean.com/community/tutorials/how-to-use-ansible-to-automate-initial-server-setup-on-ubuntu-22-04
    #
    - name: Setup passwordless sudo
      lineinfile:
        path: /etc/sudoers
        state: present
        regexp: '^%sudo'
        line: '%sudo ALL=(ALL) NOPASSWD: ALL'
        validate: '/usr/sbin/visudo -cf %s'
      tags: [ never, setup ]

    - name: Create a new regular user with sudo privileges
      user:
        name: "{{ username }}"
        state: present
        groups: sudo
        append: true
        create_home: true
      tags: [ never, setup ]

    - name: Set authorized key for remote user
      ansible.posix.authorized_key:
        user: "{{ username }}"
        state: present
        key: "{{ lookup('file', 'ssh/ansible_user.pub') }}"
      tags: [ never, setup ]

    - name: Disable password authentication for root
      lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin prohibit-password'
      tags: [ never, setup ]

This "SERVER SETUP" section of the playbook, tagged "setup", does the following:

  • Enables "passwordless" sudo,
  • Creates a user, using the username variable set at the top of the playbook,
  • Sets the authorized ssh key for the user to be ssh/ansible_user.pub, created above, and
  • Disables password authentication for the root user.
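If you want to go one step further, a task like the sketch below (my addition, not part of the repository's playbook) refuses password logins for every account, not just root. Only add it once key-based login for ansibleuser is confirmed working, or you can lock yourself out:

```yaml
    - name: Disable password authentication for all users
      lineinfile:
        path: /etc/ssh/sshd_config
        state: present
        regexp: '^#?PasswordAuthentication'
        line: 'PasswordAuthentication no'
      tags: [ never, setup ]
```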

To run this part of the playbook, use the command line:

$ ansible-playbook -i inventory_root.ini playbook.yml --tags "setup"

Once the playbook completes, a new inventory file using the ansibleuser information should be created:

inventory_ansibleuser.ini

[webservers]
www-nginx ansible_host=xxx.xxx.xx.xxx

[webservers:vars]
ansible_user=ansibleuser
ansible_ssh_private_key_file=ssh/ansible_user
ansible_ssh_extra_args="-o IdentitiesOnly=yes"
ansible_python_interpreter=/usr/bin/python3

The ansibleuser inventory will be used from here on. As with the root user, the fingerprint needs to be added to known_hosts by an initial ssh to the server:

$ ssh -o "IdentitiesOnly=yes" -i ssh/ansible_user ansibleuser@xxx.xx.xxx.xx

A test of the setup can be done by "pinging" the server, as before:

$ ansible all -i inventory_ansibleuser.ini -m ping
www-nginx | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

update the server

A new digitalocean droplet almost always needs updating when first created. Luckily, ansible can do this too; the tasks below are motivated by this post from Jeff Geerling. Updating a few of the commands, we can create a section of the playbook.yml file with tasks that update the server and reboot it if needed:

playbook.yml -- SERVER UPDATE

    #
    # SERVER UPDATE
    #
    # reference
    # https://www.jeffgeerling.com/blog/2022/ansible-playbook-upgrade-ubuntudebian-servers-and-reboot-if-needed
    # - some commands out of date
    #
    - name: Run apt update and apt upgrade
      ansible.builtin.apt:
        upgrade: yes
        update_cache: yes
        cache_valid_time: 86400 # one day, in seconds
      tags: [ never, update ]

    - name: Check if reboot is required
      ansible.builtin.stat:
        path: /var/run/reboot-required
        get_checksum: false
      register: reboot_required_file
      tags: [ never, update ]

    - name: Reboot the server (if needed)
      ansible.builtin.reboot:
      when: reboot_required_file.stat.exists == true
      tags: [ never, update ]

    - name: Autoremove deps that are no longer needed
      ansible.builtin.apt:
        autoremove: true
      tags: [ never, update ]

These tasks are executed from top to bottom and do the following:

  • Run apt update and apt upgrade,
  • Check whether a reboot of the server is required,
  • Reboot the server if the answer is yes, and
  • Run autoremove to clean up unneeded dependencies.

As before, the "never" and "update" tags limit when these tasks will be executed. To run this "update" section of the playbook, use the command line:

$ ansible-playbook -i inventory_ansibleuser.ini playbook.yml --tags "update"

Typically, the first run takes a while because there are almost always a large number of updates on a new droplet, and the server usually needs to be rebooted. However, this section can be used to update the server at any point, and later runs should be much quicker.

nginx setup

Next up is installing nginx, setting up a firewall, and organizing the directories for the html. The playbook section for these tasks is below ( if you are using it, be sure to change the domain variable to the domain you have secured ):

playbook.yml -- NGINX SETUP

    #
    #
    # NGINX SETUP
    #
    #
    - name: Update apt and install required system packages
      apt:
        pkg:
          - ufw
          - nginx
        state: latest
        update_cache: true
      tags: [ never, nginx_setup ]

    - name: Ensure Nginx is running
      service:
        name: nginx
        state: started
        enabled: yes
      tags: [ never, nginx_setup ]

    - name: UFW - Allow SSH connections
      community.general.ufw:
        rule: allow
        name: OpenSSH
      tags: [ never, nginx_setup ]

    - name: UFW - Allow HTTP and HTTPS connections
      community.general.ufw:
        rule: allow
        name: Nginx Full
      tags: [ never, nginx_setup ]

    - name: UFW - Enable and deny by default
      community.general.ufw:
        state: enabled
        default: deny
      tags: [ never, nginx_setup ]

    - name: Create remote html directory
      ansible.builtin.file:
        path: /var/www/{{ domain }}/html
        state: directory
        mode: '0755'
      tags: [ never, nginx_setup ]

    - name: Change ownership of html directory to ansibleuser
      ansible.builtin.file:
        path: /var/www/{{ domain }}/html
        state: directory
        recurse: yes
        owner: ansibleuser
        group: ansibleuser
      tags: [ never, nginx_setup ]

    - name: Apply Nginx template
      template:
        src: ansible_files/nginx.conf.j2
        dest: /etc/nginx/sites-available/default
      notify: Restart Nginx
      tags: [ never, nginx_setup ]

    - name: Enable new site
      file:
        src: /etc/nginx/sites-available/default
        dest: /etc/nginx/sites-enabled/default
        state: link
      notify: Restart Nginx
      tags: [ never, nginx_setup ]

At the bottom of the playbook we add a "handler" for restarting nginx after changes to the nginx server block. Above, you can see that this handler is triggered a couple of times using the notify property.

playbook.yml -- HANDLERS

  #
  #
  # HANDLERS
  #
  #
  handlers:
    - name: Restart Nginx
      service:
        name: nginx
        state: restarted

Finally, before running this section of the playbook, a template for the nginx.conf needs to be created:

$ mkdir ansible_files
$ touch ansible_files/nginx.conf.j2

The contents should be something like ( the nginx.conf.j2 template uses the domain variable assigned at the top of the playbook to complete the file ):

ansible_files/nginx.conf.j2

server {
    listen 80;
    listen [::]:80;

    server_name {{ domain }} www.{{ domain }};

    root /var/www/{{ domain }}/html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}
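Since 11ty output is static and only changes on deploy, one optional tweak (my sketch, not part of the original template) is to let browsers cache assets by adding a block like this inside the server block:

```nginx
    # cache static assets; the built site only changes on deploy
    location ~* \.(css|js|png|jpg|jpeg|svg|woff2)$ {
        expires 7d;
        add_header Cache-Control "public";
    }
```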

Now we can run the playbook with the new ansibleuser inventory and the nginx_setup tag:

$ ansible-playbook -i inventory_ansibleuser.ini playbook.yml --tags "nginx_setup"

https with certbot

Next, we have ansible install certbot and use it to generate a Let's Encrypt certificate and modify the nginx config appropriately. The tasks below follow the outline given on the eff.org page for certbot with nginx & snap. I'll use the fact that Ubuntu has snap installed by default to my advantage. This section of the playbook looks like:

playbook.yml -- CERTBOT SETUP

    #
    #
    # CERTBOT SETUP
    #
    #
    - name: Install Certbot
      community.general.snap:
        name: certbot
        classic: true
      tags: [ never, certbot ]

    - name: Prepare certbot command
      file:
        src: /snap/bin/certbot
        dest: /usr/bin/certbot
        state: link
      tags: [ never, certbot ]

    - name: Generate certificate
      ansible.builtin.shell:
        cmd: "certbot --nginx --email {{ email }} --eff-email --agree-tos -d {{ domain }} -d www.{{ domain }}"
      notify: Restart Nginx
      tags: [ never, certbot ]

    - name: Certbot renewal dry-run
      ansible.builtin.shell:
        cmd: "certbot renew --dry-run"
      register: dryrun_output
      tags: [ never, certbot ]

    - name: Print renewal dry-run output
      debug:
        var: dryrun_output.stdout_lines
      tags: [ never, certbot ]

Note that this section of the playbook even does a "dry-run" to test that certificate renewal is set up correctly. As with the other sections, it can be run with:

$ ansible-playbook -i inventory_ansibleuser.ini playbook.yml --tags "certbot"
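One caveat: as written, the Generate certificate task re-runs certbot every time the certbot tag is used. The shell module supports a creates argument that skips the task once a file exists; a hedged tweak (the live-certificate path below is standard certbot layout, but verify it on your server):

```yaml
    - name: Generate certificate
      ansible.builtin.shell:
        cmd: "certbot --nginx --email {{ email }} --eff-email --agree-tos -d {{ domain }} -d www.{{ domain }}"
        creates: "/etc/letsencrypt/live/{{ domain }}/fullchain.pem"
      notify: Restart Nginx
      tags: [ never, certbot ]
```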

website rsync

Next, we want to upload the contents of _site to the directory created on the server. Ansible has "copy" and "synchronize" modules, but I had trouble getting them to work at acceptable speeds and have resorted to running an rsync command via ansible:

playbook.yml -- WEB RSYNC

    #
    #
    # WEB RSYNC
    #
    #
    - name: rsync local html with server html directory
      ansible.builtin.shell:
        cmd: "rsync -av -e 'ssh -o \"IdentitiesOnly=yes\" -i ssh/ansible_user' {{ local_html }}/ ansibleuser@{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}:/var/www/{{ domain }}/html/"
      delegate_to: localhost
      register: rsync_output
      tags: [ never, web_rsync ]
      vars:
        ansible_become: false

    - name: Print rsync output
      debug:
        var: rsync_output.stdout_lines
      tags: [ never, web_rsync ]
      vars:
        ansible_become: false

The above section of the playbook syncs the contents of the _site directory to the correct location on the server. The output of the rsync command is saved and printed to the terminal. This section is reusable and can be run again whenever changes are made to the 11ty site. As usual, the command is

$ ansible-playbook -i inventory_ansibleuser.ini playbook.yml --tags "web_rsync"
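One detail worth knowing about the rsync command above: the trailing slash on {{ local_html }}/ means the contents of _site are copied, not the directory itself. A local demonstration (the demo_* directories are throwaway names):

```shell
# show rsync's trailing-slash rule with two throwaway directories
mkdir -p demo_site demo_html
echo '<h1>hello</h1>' > demo_site/index.html

# "demo_site/" copies the contents of demo_site into demo_html
rsync -a demo_site/ demo_html/

ls demo_html    # index.html lands directly inside demo_html
```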

I have also added a makefile for the ansible commands detailed above. The file looks like this:

mk-ansible

SHELL=/bin/bash

# set ip
IP = 104.131.15.134

# directory for ansibleuser ssh key
ssh-dir = ssh

# a default announcement
define ANNOUNCE_BODY

Makefile for ansible and ssh commands.

ssh-root:
- ssh to the server with root credentials

ping-root:
- pings the server with root credentials

create-ansibleuser-key:
- creates ssh key for ansibleuser

ssh-ansibleuser:
- ssh to the server with ansibleuser credentials

ping-ansibleuser:
- pings the server with ansibleuser credentials

server-setup:
- create the ansibleuser account and enable passwordless sudo
- disable root login with password

server-update:
- update the server using the ansibleuser account
- this can be reused to update the server at any time

nginx-setup:
- install nginx and ufw
- setup nginx config, create dir, enable site

certbot-certificate:
- use certbot to create a lets encrypt certificate and setup https

web-rsync:
- sync the website in _site with server
- this can be reused as the site is changed

endef

export ANNOUNCE_BODY

all:
	@echo "$$ANNOUNCE_BODY"

ssh-root:
	ssh root@${IP}

ping-root:
	ansible all -i inventory_root.ini -m ping

create-ansibleuser-key:
	mkdir -p ssh
	ssh-keygen -t rsa -b 4096 -f ssh/ansible_user

$(ssh-dir):
	@echo "Directory 'ssh' does not exist; creating dir and key"
	mkdir ssh
	ssh-keygen -t rsa -b 4096 -f ssh/ansible_user

ssh-ansibleuser: | $(ssh-dir)
	ssh -o "IdentitiesOnly=yes" -i ssh/ansible_user ansibleuser@${IP}

ping-ansibleuser: | $(ssh-dir)
	ansible all -i inventory_ansibleuser.ini -m ping

server-setup: | $(ssh-dir)
	ansible-playbook -i inventory_root.ini playbook.yml --tags "setup"

server-update: | $(ssh-dir)
	ansible-playbook -i inventory_ansibleuser.ini playbook.yml --tags "update"

nginx-setup: | $(ssh-dir)
	ansible-playbook -i inventory_ansibleuser.ini playbook.yml --tags "nginx_setup"

certbot-certificate: | $(ssh-dir)
	ansible-playbook -i inventory_ansibleuser.ini playbook.yml --tags "certbot"

web-rsync: | $(ssh-dir)
	ansible-playbook -i inventory_ansibleuser.ini playbook.yml --tags "web_rsync"

As a quick example, the web-rsync target can be run using the command:

$ make -f mk-ansible web-rsync

That's it

I have used all of the material above to set up an ubuntu/nginx server and upload the 11ty demo blog. I know these notes will be helpful for me; maybe they will be helpful to you as well. Be sure to update ips, domains, emails, etc. if you use this material! Again, don't forget that the files are all available at github.