Secure your Nginx proxy with HTTPS

If you have CloudFlare serving your site over HTTPS, you might be worried that the traffic between CloudFlare and your server is still cleartext. We can fix that by generating a certificate for our Nginx proxy.

Why a self-signed certificate?

As long as any certificate is used, the communication between the two endpoints is encrypted in transit. The "verified" certificates that you purchase or generate with Let's Encrypt serve the same purpose, while also showing a padlock in the browser that tells the visitor you are who you say you are.

All HTTPS certificates provide encryption.

Verified HTTPS certificates prove you are talking to the right server.

For the setup of (Client) - (CloudFlare) - (Website), CloudFlare already provides the verified certificate and the nice padlock towards the client; the leg between CloudFlare and your server only needs encryption, so a self-signed certificate is enough.

Generating an HTTPS certificate for Nginx

I tend to put directories I'm going to mount inside a container under /opt. You can put them wherever.

The nginx-proxy container expects the .crt and .key files to be named after the hostname of the website.

mkdir -p /opt/nginx/certs

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /opt/nginx/certs/stanislav.se.key -out /opt/nginx/certs/stanislav.se.crt  

You'll be asked a bunch of questions, and it does not really matter what you answer. The Common Name should be the hostname of your server, like stanislav.se.
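
If you would rather skip the prompts, you can pass the subject on the command line with -subj instead; only the Common Name matters here, so substitute your own hostname:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -subj "/CN=stanislav.se" -keyout /opt/nginx/certs/stanislav.se.key -out /opt/nginx/certs/stanislav.se.crt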

Optionally, you may generate Diffie-Hellman parameters. This takes a while.

openssl dhparam -out /opt/nginx/certs/stanislav.se.dhparam.pem 2048  
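
If both commands went through, the certs directory should now hold the key, the certificate and the DH parameters, all named after the hostname so that nginx-proxy picks them up:

ls /opt/nginx/certs
stanislav.se.crt  stanislav.se.dhparam.pem  stanislav.se.key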

Configure the nginx-proxy stack file

A quirk: when a volume is declared in a container's Dockerfile and you override it with a host directory mount, you have to make the mount writable, otherwise it does not apply (even though it shows up in docker inspect).

nginx-proxy:  
  image: 'jwilder/nginx-proxy:latest'
  autoredeploy: true
  deployment_strategy: every_node
  ports:
    - '443:443'
  restart: always
  volumes:
    - '/var/run/docker.sock:/tmp/docker.sock:ro'
    - '/opt/nginx/certs:/etc/nginx/certs' 

I opted to turn off plain HTTP, but if you want to keep serving it, add a - '80:80' entry under ports as well.
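
In that case the ports section of the stack file would look like this:

  ports:
    - '80:80'
    - '443:443'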

Note that when you redeploy the nginx-proxy stack you should uncheck volume re-use.

The proxy will lose all virtual host configuration until you also redeploy your website, so that the configuration gets regenerated.
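
For reference, nginx-proxy only generates a virtual host for containers that carry a VIRTUAL_HOST environment variable, so your website stack needs an entry along these lines (the service name and image below are placeholders):

website:
  image: 'your/website-image:latest'
  environment:
    - VIRTUAL_HOST=stanislav.se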

Verify the Nginx configuration

In Docker Cloud navigate to your nginx-proxy service and find the port 443 endpoint.
Try opening it in your browser. If everything is correct, you should get a certificate warning. If you get a connection timeout, go back to configuring and redeploying nginx-proxy. If you get an HTTP 503, proceed to check the Nginx configuration as described below.
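
You can also check the endpoint from the command line. With a self-signed certificate curl needs -k to skip verification; replace the placeholder with the port 443 endpoint shown in Docker Cloud:

curl -vk https://<your-endpoint>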

You can check what configuration Nginx is running by using docker exec to run a shell command inside the nginx-proxy container. You'll need a shell on the docker host for this.

docker ps | grep nginx  
48c2624a4fe7        jwilder/nginx-proxy:latest  
docker exec 48c2624a4fe7 cat /etc/nginx/conf.d/default.conf  
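
The generated file should contain a server block for your hostname that references the certificate files, roughly along these lines (an abbreviated sketch of what to look for, not the exact output; the real file has more directives and an upstream block):

server {
    server_name stanislav.se;
    listen 443 ssl;
    ssl_certificate /etc/nginx/certs/stanislav.se.crt;
    ssl_certificate_key /etc/nginx/certs/stanislav.se.key;
    location / {
        proxy_pass http://stanislav.se;
    }
}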

CloudFlare Full SSL

Make sure your website hostname points to the nginx-proxy service endpoint, then go to the Crypto settings for your website in CloudFlare; the only thing you need to do there is choose "Full" in the SSL dropdown.