About a year ago I wrote about how I installed Letsencrypt on this server. Back then I was using Chef to automate tasks like this. A lot has happened in the meantime: Letsencrypt has evolved quite a bit and I’ve been using other means of automating provisioning tasks (namely Puppet and Ansible) as well. For a long time I’ve been thinking about setting up a personal cloud server, and in fact I have had an Owncloud instance running for some time. Since it’s running on a remote machine, it’s fairly restricted in terms of available disk space, though. Also, some months ago, Nextcloud appeared – it’s a fork of Owncloud and is considered a more community-driven project than Owncloud has become.
For some weeks now I’ve been working on setting up a server in my own home. A home server, on the other hand, has less bandwidth available than a server hosted at a datacenter – I can accept that for my requirements. In this article I’m not going into detail on the Nextcloud part of my new server; instead I want to focus on setting up Letsencrypt using Ansible. There will be a follow-up post on the rest of the setup, though.
Defining a reasonable static hostname
Most internet connections come with a dynamic IP address, and since you need a fixed domain name to get a TLS certificate, the first thing to take care of is a suitable domain name. I’ve already been using the services of ddnss.de, a free DynDNS provider, for some time. The hostname provided by ddnss.de is updated automatically by my router, and theoretically you could use it directly to get a TLS certificate. However, Letsencrypt has some rate limits in place and one of them is named Certificates per Registered Domain: you’re only allowed to get 20 certificates per week per registered domain. This means that if others using the same DynDNS provider also use Letsencrypt to get certificates, you might hit this rate limit. This is especially dangerous because you might not be able to renew your certificate later. Also, a custom domain name looks much nicer than a DynDNS hostname, doesn’t it?
For these reasons I’ve configured a new subdomain for one of my own domains, let’s say nextcloud.reinhard.codes. Then, using the DNS editing tools of my domain reseller, I configured this subdomain to be a CNAME alias of my dynamically updated hostname, let’s say nextcloud123.ddns.de. This way I can create certificates for nextcloud.reinhard.codes and at the same time use the dynamically updated hostname provided by ddnss.de.
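Once the record has propagated, you can verify the alias with a quick manual check – this is not part of the automation and uses the example names from above:

dig +short nextcloud.reinhard.codes CNAME

This should print the DynDNS hostname that the subdomain points to.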
Basic node setup
With this problem solved, Letsencrypt needed to be installed on my server – I’m using a Raspberry Pi 3, by the way. Starting off with a blank server, we only need a webserver to get going. I didn’t want to use Chef again; instead I wanted to take the opportunity to work with Ansible some more, as I think it’s much easier. In the past I’ve mostly used Apache2 as a webserver, but starting with a clean slate I wanted to use Nginx this time.
Before working with Ansible, we need to do some basic setup of our Raspberry Pi. This means installing Raspbian on an SD card and setting up an SSH key, so we don’t have to enter a password every time we log into the node. Since Ansible uses SSH to connect to the nodes, this makes provisioning easy. As I want to use the default pi user when connecting to the host, I also have some SSH configuration in place (~/.ssh/config):
Host nextcloud
    HostName 192.168.1.23
    User pi
    IdentityFile ~/.ssh/id_rsa
With this configuration I don’t need to specify the user every time I log in, and my SSH private key is presented automatically when I connect to this node. Also, I can use the nextcloud alias instead of the IP address when connecting.
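Logging in then becomes as simple as:

ssh nextcloud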
Basic Ansible setup
Before starting, we need an Ansible inventory to define the nodes we are working with. For this scenario it will contain only one entry: the Raspberry Pi node. Also, I’m telling Ansible to use the default pi user to log in to the machine:
[nextcloud]
192.168.1.23 ansible_user=pi
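Before running any playbook, an optional sanity check makes sure Ansible can actually reach the node over SSH:

ansible -i inventory nextcloud -m ping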
Furthermore we need a playbook (webserver.yml) that we execute against the inventory. It defines which roles to assign to which hosts, and at this point it’s quite straightforward:
---
- hosts: nextcloud
  roles:
    - base
Before we actually start to configure Nginx, there’s some base setup that should be in place. There’s some minor stuff that doesn’t warrant much attention (like installing the Vim editor), but I want to point out one particular thing. Some software that we will be installing isn’t readily available through the standard APT repository for the Raspbian Jessie distribution. We will later add other repositories so that we can install those packages. But as a default, we want APT to use the Jessie repository. So I’m setting a default in the base role:
# Ensure other roles can add repositories without affecting the default
- name: Ensure APT uses Jessie packages as default
  template:
    src: apt_preferences.j2
    dest: /etc/apt/preferences
    owner: root
    group: root
    mode: 0755
  become: yes
The template file apt_preferences.j2 then looks like this:
Package: *
Pin: release n=jessie
Pin-Priority: 600
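To see the effect of the pin, you can inspect the priorities APT reports on the node – a quick manual check, not part of the role:

apt-cache policy

Packages from the Jessie repository should show up with priority 600, while repositories added later keep their lower default priorities.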
Have a look at the full base role here.
Provisioning the node
At this point we can run our first provisioning:
ansible-playbook -i inventory webserver.yml
The -i option defines the inventory file to use and webserver.yml is the playbook we want to execute. After Ansible has finished, there should be a file /etc/apt/preferences with the contents from the role template above. Also, the rest of the role’s tasks should be reflected on the node (like Vim being present).
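You can verify this quickly from the workstation, using the SSH alias defined earlier:

ssh nextcloud cat /etc/apt/preferences
ssh nextcloud which vim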
Installing Nginx
With some previous Ansible experience, installing Nginx is not a big deal. After creating a role, I’m using the package module to install the default nginx package for my distribution – it’s a bit out of date, but for now that’s good enough for me.
---
- name: Ensure Nginx is installed
  package: name=nginx state=present
  become: yes

- name: Ensure www home exists
  file:
    path: "{{ nginx.www_home }}"
    state: directory
    owner: "{{ nginx.webserver.user }}"
    group: "{{ nginx.webserver.group }}"
    mode: 0755
  become: yes

- name: Ensure default webroot is disabled
  file:
    path: "{{ nginx.configuration_root }}/sites-enabled/default"
    state: absent
  become: yes
I’m not going into every Ansible detail here; most of it is self-explanatory anyway. Notice the become: yes option, however. It tells Ansible to use sudo when executing the corresponding task. Since the pi user is able to use password-less sudo, we don’t need to specify a sudo password when Ansible runs. Here’s the complete role for reference.
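The role references a few variables; as a rough sketch, their defaults (e.g. in the role defaults or group_vars) could look something like this – the values below are assumptions typical for Debian-based systems, my actual values may differ:

# Hypothetical defaults; www-data and /etc/nginx are Debian conventions
nginx:
  www_home: /var/www
  configuration_root: /etc/nginx
  webserver:
    user: www-data
    group: www-data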
Let’s add the role to the webserver playbook:
---
- hosts: nextcloud
  roles:
    - base
    - nginx
After re-running the provisioning, we should have Nginx installed on the Raspberry Pi, and you should be able to access its default page through a web browser. There are other options that you can pass along; for instance, if we didn’t have password-less sudo, we could add the --ask-become-pass option – Ansible would then ask for a sudo password at the beginning.
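In that case the invocation would look like this:

ansible-playbook -i inventory webserver.yml --ask-become-pass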
Installing Letsencrypt
I first did some research to find the most reasonable way to install Letsencrypt onto the node. There’s actually an Ansible module called letsencrypt (flagged as preview) and I was excited to see that. However, I couldn’t make it work. A lot of other resources that I found are a bit out of date, so in the end I decided to do it manually and create my own role (which we’ll have to add to the webserver playbook again).
Creating the challenge endpoint
The first thing to do is to create an endpoint for the HTTP challenge that Letsencrypt uses to prove domain ownership. This endpoint needs to be accessible over plain HTTP.
- name: Ensure letsencrypt challenge root exists
  file:
    path: "{{ nginx_tls.challenge.root }}"
    state: directory
    owner: "{{ nginx.webserver.user }}"
    group: "{{ nginx.webserver.group }}"
    mode: 0755
  become: yes

- name: Ensure letsencrypt challenge endpoint is configured
  template:
    src: nginx_letsencrypt_challenge.conf.j2
    dest: "{{ nginx_tls.challenge.configuration }}"
  become: yes
  register: challengeEndpointConfigured

- name: Ensure letsencrypt challenge endpoint is enabled
  file:
    src: "{{ nginx_tls.challenge.configuration }}"
    dest: "{{ nginx_tls.challenge.configuration_enabled }}"
    state: link
  become: yes
  register: challengeEndpointEnabled

- name: Ensure Nginx is restarted immediately when necessary
  service: name=nginx state=restarted
  become: yes
  when: challengeEndpointConfigured.changed or challengeEndpointEnabled.changed
This first creates the corresponding directory, adds an Nginx configuration file and enables this configuration. It then restarts the webserver if necessary. Since we re-run the complete provisioning every time something on the node needs to change, the definitions have to be idempotent. I’m registering the variables challengeEndpointConfigured and challengeEndpointEnabled for that reason, and only restart Nginx if one of these tasks reported a change.
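The nginx_tls variables used above could be defined roughly like this – the domain and webroot come from this article, while the configuration paths are hypothetical placeholders:

nginx_tls:
  domain: nextcloud.reinhard.codes
  challenge:
    root: /var/www/letsencrypt-challenge
    configuration: /etc/nginx/sites-available/letsencrypt-challenge.conf     # hypothetical path
    configuration_enabled: /etc/nginx/sites-enabled/letsencrypt-challenge.conf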
Here’s the configuration for the endpoint:
server {
    listen 80;
    listen [::]:80;
    server_name {{ nginx_tls.domain }};

    location ^~ /.well-known/acme-challenge/ {
        default_type "text/plain";
        root /var/www/letsencrypt-challenge;
    }

    location = /.well-known/acme-challenge/ {
        return 404;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}
As you can see, I only allow the challenge endpoint to be served over HTTP; all other requests to port 80 are redirected to HTTPS.
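Once the endpoint is enabled (and port 80 is forwarded on the router), a quick manual test with a hypothetical test file shows that the challenge location is reachable:

# on the Pi
sudo mkdir -p /var/www/letsencrypt-challenge/.well-known/acme-challenge
echo hello | sudo tee /var/www/letsencrypt-challenge/.well-known/acme-challenge/test
# from any other machine
curl http://nextcloud.reinhard.codes/.well-known/acme-challenge/test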
Installing Certbot
The preferred way to use Letsencrypt is through the official ACME client, certbot. There’s an APT package for it, but you can’t install it from the official repository for Jessie – instead you have to use the Jessie backports repository. Letsencrypt has fantastic documentation and it’s not hard to find this information. But how can you do this with Ansible? There are a few things to take care of:
- name: Ensure jessie-backports repository is enabled
  apt_repository:
    repo: deb http://ftp.debian.org/debian jessie-backports main
    state: present
  become: yes

- name: Ensure Debian GPG signing keys are available on node
  copy:
    src: "{{ item }}.gpg.asc"
    dest: "/var/downloads/{{ item }}.gpg.asc"
  become: yes
  with_items:
    - 8B48AD6246925553
    - 7638D0442B90D010

- name: Ensure Debian GPG signing keys are installed with APT
  apt_key:
    file: "/var/downloads/{{ item }}.gpg.asc"
    state: present
  become: yes
  with_items:
    - 8B48AD6246925553
    - 7638D0442B90D010

- name: Ensure apt is up-to-date
  apt:
    update_cache: yes
  become: yes

- name: Ensure letsencrypt certbot is installed
  apt:
    name: certbot
    state: present
    default_release: jessie-backports
  become: yes
First we need to add the Jessie backports repository. Since this repository uses new GPG signing keys, I’m installing them onto the node next. I’ve downloaded them manually and added them to the files/ section of the new role. I then copy them to the node and add them for APT to use. After updating the package cache, we can install certbot. Note the default_release option: it tells Ansible to use the jessie-backports repository to install the package (remember: we’ve set the Jessie repository as the default in the base role!).
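After provisioning, a quick manual check confirms that certbot is present and was indeed pulled from backports:

ssh nextcloud certbot --version
ssh nextcloud apt-cache policy certbot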
Initializing the certificates
Since I don’t want to request new certificates every time I provision my server, I’m using a conditional include in my role. By overriding the variable nginx_tls.initialize_tls with true (using --extra-vars "nginx_tls.initialize_tls=true"), the initialisation part of the role gets executed:
---
- name: Check that no previous letsencrypt certificates exist
  stat: path=/etc/letsencrypt
  register: letsencryptInstalled

- name: Ensure initial certificates are created
  raw: certbot certonly --webroot -w {{ nginx_tls.challenge.root }} -d {{ nginx_tls.domain }} --email {{ letsencrypt.contact_email }} --agree-tos --non-interactive
  become: yes
  when: letsencryptInstalled.stat.exists == false
The first task makes sure that there are no pre-existing certificates on the node; the second task then executes certbot to retrieve the certificates. Please consult the certbot man pages for a detailed explanation of the options I’m providing.
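Putting it all together, the initial run that also requests the certificates is invoked roughly like this:

ansible-playbook -i inventory webserver.yml --extra-vars "nginx_tls.initialize_tls=true"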
Renewing the certificates
The certificates issued by Letsencrypt are valid for three months. We don’t want to be in a position where we might forget to renew them, so we need to automate this as well. In fact, certbot does install a cron job for this. While it will renew the certificates, it doesn’t reload the webserver configuration afterwards – at least not with my setup. I’ve decided to remove the default cron job and provide my own:
- name: Ensure default certbot cron is disabled
  file:
    dest: /etc/cron.d/certbot
    state: absent
  become: yes

- name: Ensure script for certificate renewal is present
  template:
    src: letsencrypt_renew_certs.sh.j2
    dest: "{{ base.scripts_folder }}/letsencrypt_renew_certs.sh"
    mode: 0755
  become: yes

- name: Ensure cron job for certificate renewal is present
  template:
    src: letsencrypt_renewal_cron.j2
    dest: /etc/cron.d/letsencrypt_renewal_cron
    mode: 0755
  become: yes
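The two templates aren’t shown here; as a rough sketch (assumed content – my actual files may differ), the renewal script boils down to letting certbot renew whatever is due and reloading Nginx afterwards, and the cron entry calls that script once a week:

# letsencrypt_renew_certs.sh.j2 – hypothetical sketch
#!/bin/bash
certbot renew --quiet
systemctl reload nginx

# letsencrypt_renewal_cron.j2 – hypothetical sketch (/etc/cron.d format: minute hour day month weekday user command)
30 3 * * 1 root {{ base.scripts_folder }}/letsencrypt_renew_certs.sh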
For a complete reference, you can look at the complete role with (almost) all the files. I’ve removed one potentially sensitive file (dhparams.pem) that customizes the TLS encryption.
Using the certificates with Nginx
The last thing to take care of is actually using the certificates in the webserver configuration. I’ve used information from this DigitalOcean page and integrated it into the role that installs Nextcloud. I won’t go into more detail on this now, but as said before, I’m planning a follow-up post on the steps required to turn the basic webserver described in this post into a fully configured Nextcloud server (installing MariaDB, PHP7) with a RAID-1 disk setup. I hope you find my explanation useful so far – let me know if you have questions or if you think I could do things differently.