Minimal guide on reverse proxies using different tools – Caddy, nginx-proxy & Traefik

**EDIT:** Added HAProxy!

# Introduction

**[Github repo for the examples below](https://github.com/azrikahar/docker-reverse-proxies)**

I personally love analogies and side-by-side comparisons among different tools that tackle the same problem, so I came up with this little guide to get the ball rolling for everyone. **Each reverse proxy example resolves the exact same domain, subdomain and even subpath.** New users can easily dive into reverse proxying with the simple Caddy, whereas users who are more familiar with Nginx can opt for nginx-proxy instead. Users looking to move from Caddy or nginx-proxy to Traefik also get a decent one-to-one mapping, so they can see which terms/components in Traefik correspond to which in Caddy or Nginx.

Oftentimes Traefik is the go-to recommendation whenever a question about reverse proxies comes up in this sub, and it is no doubt a very feature-rich reverse proxy compared to the others, but I personally think there is nothing wrong with going with a simpler one like Caddy for smaller setups. It can be a great starting point for new users to wrap their heads around reverse proxies. Once they are comfortable, they can move on to Traefik to fit more robust needs, hopefully with a gentler learning curve.

I do encourage new users to do some googling & research to find out the benefits of reverse proxies. I chose not to cover them here for simplicity's sake (though I'm already 3 paragraphs in now...).

# Disclaimer

**This MINIMAL guide will not be covering HTTPS or TLS certs.** Since enabling HTTPS requires adding a few more configurations to each reverse proxy example, I intend to make a separate guide for it.

Do understand that these examples are the bare minimum to set up a working reverse proxy, and that each reverse proxy has far greater capabilities not highlighted here, such as load balancing. **This comparison between reverse proxies is for educational purposes and clearer understanding. There is no intent to spark arguments about which reverse proxy is superior. In the end, each of them fits projects of various scales and different use cases.**

# Things to know before going through the 4 setups

- **Why not put the reverse proxy & services in the same compose file?** I initially did, but I chose to separate the reverse proxy into a dedicated docker-compose.yml file so it is easier for you to add another project/repo while reusing the same reverse proxy. Just think of the `project` folder in each example as a unique repo with one or more services and a dedicated docker-compose.yml file, which means you can have multiple projects. `project` is just a placeholder name.
- **What does the `docker ps -a --format "table {{.ID}}\t{{.Names}}" | (read -r; printf "%s\n" "$REPLY"; sort -k 2)` command do?** It modifies the output of `docker ps -a` using `--format` to show only IDs and Names. The latter half of the command prints the header row untouched (`read -r` consumes it first), then sorts the remaining rows by the second column (`-k 2`) so things are easier to compare.
- **What does the `networks` -> `default` -> `name: [custom-name]-net` in each REVERSE PROXY's compose file do?** When you run `docker-compose up -d`, Docker Compose creates a default network for you, named after the project name with `_default` appended. The project name of a docker-compose file defaults to the directory name. For example, if we run `docker-compose up -d` in a directory named `testdir`, the default network name will be `testdir_default`. You can verify this in your existing setups with `docker network ls`. The `name` option overrides that generated name with one of our choosing. **Note: This custom name option is only available from compose file format "3.5", which is docker-compose [version 1.18.0](https://docs.docker.com/compose/compose-file/compose-versioning/#version-35) and above.**
- **What does the `networks` -> `[custom-name]-net` -> `external: true` in each PROJECT's compose file do?** Since our reverse proxy already lives in an existing network, these few lines tell Docker Compose that a network called `[custom-name]-net` already exists outside of this project or compose file, so there is no need to re-create it. We then add the `networks` option to each service to make it join that external network. The reason we need our services/containers in the same Docker network as the reverse proxy is that containers must share a network to be able to see each other. _Note: If you did put your reverse proxy and services in the same single compose file, you can skip this networks setup._
- **Why do you use `curl -H "Host: example.com" 127.0.0.1` in the results?** Since the domains used in the examples are fake/local ones, we need to ask for our fake domain on localhost (127.0.0.1) instead of via public DNS. If you own an actual domain, you can just run `curl example.com` instead. If you wish to make every device in your home network recognize the fake/local domain name, you need to run a local DNS server and configure your router to use it instead of a public one.
- The `jwilder/whoami` image is used to show that the routes work as intended, since it reports the container ID to us. There is also `containous/whoami`, but I like jwilder's for its simpler output.
- Once again, the reason the folders are named with `-basic` (like `caddy-basic`) is that I intend to release an HTTPS version with folder names like `caddy-secure` and so on.
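The header-preserving sort trick from the `docker ps` bullet above can be tried without Docker at all; here is the same pipeline fed with hand-made sample input (the container IDs and names below are made up):

```shell
# Stand-in for `docker ps -a --format "table {{.ID}}\t{{.Names}}"` output
# read -r consumes the header line so it is printed untouched,
# then sort -k 2 orders the remaining rows by the NAMES column.
printf 'CONTAINER ID\tNAMES\nccc333\tzeta\naaa111\talpha\n' \
  | (read -r header; printf '%s\n' "$header"; sort -k 2)
# prints the header first, then aaa111/alpha before ccc333/zeta
```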

---

# Caddy | [Docker Hub](https://hub.docker.com/_/caddy) | [Official Docs](https://caddyserver.com/docs/)

## Caddy - Overview

**Directory structure:**

```
caddy-basic/
├── caddy/
│   ├── Caddyfile
│   └── docker-compose.yml
└── project/
    └── docker-compose.yml
```

Explanation:

- The Caddyfile is where we define all the reverse proxy configuration. We bind mount it into the Caddy container and we're done. That's all!

**caddy/Caddyfile:**

```
http://example.com, http://www.example.com {
    reverse_proxy main:8000
    reverse_proxy /api/* main-api:8000
}

http://sub.example.com {
    reverse_proxy sub:8000
}
```

Explanation:

- We must add `http://` in front because Caddy will otherwise automatically try to provision TLS certs for you. Since our `example.com` is just for demo purposes and not a domain we control, Caddy would throw errors trying to grab a cert for it.
- If we use `reverse_proxy main:8000`, any request to this virtual host or domain will be proxied directly to the target. Note that we must specify the port of the target service, which in this case is `8000` for all of them.
- If we use `reverse_proxy /api/* main-api:8000`, the `/api/*` matcher captures which path prefix will be proxied to another container. The reason we use `/api/*` instead of `/api*` is that the latter would also match `/apineapple`, which is surely not what we intend to do. Or is it? *plays Vsauce music*
- Note that there are no semicolons (;) at the ends of statements, unlike Nginx.
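One more hedged note (based on Caddy's docs, not part of the example repo): `reverse_proxy /api/* main-api:8000` forwards the path unchanged, `/api` prefix included. If an upstream instead expects paths without the prefix, Caddy's `handle_path` directive strips the matched prefix before proxying; a minimal sketch:

```
http://example.com {
    # handle_path strips the matched /api prefix before proxying
    handle_path /api/* {
        reverse_proxy main-api:8000
    }
    reverse_proxy main:8000
}
```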

**caddy/docker-compose.yml:**

```yaml
version: "3.8"

services:
  caddy:
    image: caddy
    container_name: caddy
    ports:
      - 80:80
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro

networks:
  default:
    name: caddy-net
```

**project/docker-compose.yml:**

```yaml
version: "3.8"

services:
  main:
    image: jwilder/whoami
    networks:
      - caddy-net

  main-api:
    image: jwilder/whoami
    networks:
      - caddy-net

  sub:
    image: jwilder/whoami
    networks:
      - caddy-net

networks:
  caddy-net:
    external: true
```

## Caddy - Deployment steps & results

**Step 1:** First we start caddy using docker compose.

```
[user@host ~/docker-reverse-proxies/caddy-basic/caddy]$ docker-compose up -d
Creating network "caddy-net" with the default driver
Creating caddy ... done
```

**Step 2:** Then we go to the directory of our project and run docker compose.

```
[user@host ~/docker-reverse-proxies/caddy-basic/caddy]$ cd ../project/
[user@host ~/docker-reverse-proxies/caddy-basic/project]$ docker-compose up -d
Creating project_main_1 ... done
Creating project_sub_1 ... done
Creating project_main-api_1 ... done
```

**Step 3:** List out container IDs and names for our reference.

```
[user@host ~]$ docker ps -a --format "table {{.ID}}\t{{.Names}}" | (read -r; printf "%s\n" "$REPLY"; sort -k 2)
CONTAINER ID   NAMES
adf7a7fe7e6e   caddy
e0814a284e59   project_main_1
85447c75ec50   project_main-api_1
eb7b0092c3c1   project_sub_1
```

**Step 4:** Use curl to verify the results by comparing the container IDs with the ones above.

```
[user@host ~]$ curl -H "Host: example.com" 127.0.0.1
I'm e0814a284e59
[user@host ~]$ curl -H "Host: www.example.com" 127.0.0.1
I'm e0814a284e59
[user@host ~]$ curl -H "Host: example.com" 127.0.0.1/api/
I'm 85447c75ec50
[user@host ~]$ curl -H "Host: www.example.com" 127.0.0.1/api/
I'm 85447c75ec50
[user@host ~]$ curl -H "Host: sub.example.com" 127.0.0.1
I'm eb7b0092c3c1
```

---

# HAProxy | [Docker Hub](https://hub.docker.com/_/haproxy) | [Official Docs](https://cbonte.github.io/haproxy-dconv/)

## HAProxy - Overview

**Directory structure:**

```
haproxy-basic/
├── haproxy/
│   ├── docker-compose.yml
│   └── haproxy.cfg
└── project/
    └── docker-compose.yml
```

Explanation:

- Similar to Caddy, we need a configuration file, here called `haproxy.cfg`, and bind mount it into our HAProxy container.

**haproxy/haproxy.cfg:**

```
# global
#     maxconn 5000

defaults
    mode http
    timeout connect 10s
    timeout client 30s
    timeout server 30s
    default-server init-addr none

resolvers docker_resolver
    nameserver dns 127.0.0.11:53
    resolve_retries 10

frontend http-in
    bind *:80

    acl host_main hdr(host) -i example.com
    acl host_main hdr(host) -i www.example.com
    acl path_api path_beg -i /api
    acl host_sub hdr(host) -i sub.example.com

    use_backend main-api if host_main path_api
    use_backend main if host_main
    use_backend sub if host_sub

backend main
    server main main:8000 check resolvers docker_resolver

backend main-api
    server main-api main-api:8000 check resolvers docker_resolver

backend sub
    server sub sub:8000 check resolvers docker_resolver
```

Explanation:

- An HAProxy configuration has various sections, but the 4 essential ones we will use are `defaults`, `resolvers`, `frontend` and `backend`. The frontend & backend structure might be familiar if you have experience with Traefik v1, which has frontends & backends as well. (Traefik v2 has 3 parts instead: routers, middlewares, services.)
- **I highly recommend checking out [this official blog post by HAProxy on the essential sections of the haproxy.cfg file](https://www.haproxy.com/blog/the-four-essential-sections-of-an-haproxy-configuration/)**, since a lot of the explanations below are based on it.
- **global section:**
- The reason I commented this section out is that omitting it entirely felt like a disservice, since it is pretty prominent in most guides; still, for the sake of minimalism I left most of it out. The only thing I want to mention is `maxconn`: by default HAProxy allows 2000 max connections ([source](https://www.haproxy.com/blog/the-four-essential-sections-of-an-haproxy-configuration/)), which should be more than enough for typical setups. Keep it in mind if your project/environment has higher demands.
- **defaults section:**
- `mode http`: The mode setting defines whether HAProxy operates as a simple TCP proxy or if it’s able to inspect incoming traffic’s higher-level HTTP messages. Since we are serving web services, we use `http` as the mode. [More info over here](https://www.haproxy.com/blog/the-four-essential-sections-of-an-haproxy-configuration/)
- `timeout connect`, `timeout client`, `timeout server`:
- The `s` suffix in `10s` and `30s` denotes seconds. Without any suffix, the time is assumed to be in milliseconds.
- `timeout connect`: configures the time that HAProxy will wait for a TCP connection to a backend server to be established.
- `timeout client`: measures inactivity during periods that we would expect the client to be speaking, or in other words sending TCP segments.
- `timeout server`: measures inactivity when we’d expect the backend server to be speaking. When a timeout expires, the connection is closed.
- `default-server init-addr none`: The init-addr none parameter allows HAProxy to start up, even if it can’t resolve the hostname right away. That’s really useful for dynamic environments where the backend servers may be created after HAProxy starts. [Source](https://www.haproxy.com/blog/dns-service-discovery-haproxy/)
- **resolvers section:**
- We create a resolver called `docker_resolver`.
- `nameserver dns 127.0.0.11:53`: This lets HAProxy query the given nameserver and continually check for updates to the DNS records. Since we are using Docker, and Docker's embedded DNS server for a given network sits at `127.0.0.11` ([source 1](https://github.com/ant30/docker-haproxy-resolver/blob/master/haproxy/haproxy.cfg) & [source 2](https://stackoverflow.com/questions/41152408/dynamic-dns-resolution-with-haproxy-and-docker); I can't seem to find official docs on this IP), we tell HAProxy to resolve DNS via this nameserver.
- We also need this section because when HAProxy starts, it tries to resolve the `main`, `main-api` and `sub` services directly, but since they are not up yet when you start your reverse proxy, it would conclude that these 3 names do not exist. This is why we add a check further down in haproxy.cfg so HAProxy can detect when they become discoverable.
- **frontend section:**
- When you place HAProxy as a reverse proxy in front of your backend servers, a frontend section defines the IP addresses and ports that clients can connect to. You may add as many frontend sections as needed for exposing various websites to the Internet. Each frontend keyword is followed by a label, such as `http-in`, to differentiate it from others.
- `bind *:80`: A bind setting assigns a listener to a given IP address and port, so here we listen on any IP, on port 80, the HTTP port.
- `acl` ([I recommend reading this official blog post for an overall understanding](https://www.haproxy.com/blog/introduction-to-haproxy-acls/)):
- ACLs are basically rules that we define, and HAProxy provides many ways to match them.
- `acl host_main hdr(host) -i example.com`: This means we create an ACL rule named `host_main` and tell it to match the request's "Host" header against `example.com`. The `-i` flag makes the match case-insensitive.
- `acl host_main hdr(host) -i www.example.com`: Notice how we re-use the same ACL name, `host_main`. This means both rules now apply to `host_main` at the same time.
- `acl path_api path_beg -i /api`: Here we create an ACL rule named `path_api` and use `path_beg` (short for "path begins with") to check whether the path begins with `/api`.
- `use_backend`:
- **The order of these statements MATTER**, so do be careful.
- `use_backend main-api if host_main path_api`: Here we tell HAProxy to go to our backend called `main-api` IF we match 2 rules, `host_main` and `path_api`. Since we combined them together, this means both of them must be true for this to work.
- `use_backend main if host_main`: This should be self-explanatory. However, notice why we put this below the previous statement: `host_main` only checks the domain name, so even `example.com/api` satisfies this condition. That is why `/api` requests must be caught by the statement above first; everything else falls through to this check.
- **backend section:**
- A backend section defines a group of servers that will be load balanced and assigned to handle requests. You'll add a label of your choice to each backend. Here we simply reuse the same names as our services for simplicity.
- `server main main:8000`: The server setting is the heart of the backend. Its first argument is a name (main), followed by the IP address and port of the backend server. You can specify a domain name instead of an IP address. In that case, it will be resolved at startup or, if you add a resolvers argument, it will be updated during runtime. This is why we use `main:8000` instead of IP:PORT.
- `check resolvers docker_resolver`: This asks HAProxy to periodically re-resolve the DNS for `main`, `main-api` and `sub` using our resolver, which in turn uses Docker's embedded DNS server.
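As an aside (a hedged sketch, not part of the example repo): HAProxy, like the Caddy setup above, forwards the path untouched, `/api` prefix included. If a backend expects paths without the prefix, an `http-request set-path` line using the `regsub` converter can strip it inside the backend:

```
backend main-api
    # Strip the leading /api/ before forwarding (sketch; a bare /api
    # with no trailing slash is left untouched by this regex)
    http-request set-path %[path,regsub(^/api/,/)]
    server main-api main-api:8000 check resolvers docker_resolver
```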

**haproxy/docker-compose.yml:**

```yaml
version: "3.8"

services:
  haproxy:
    image: haproxy
    container_name: haproxy
    ports:
      - 80:80
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro

networks:
  default:
    name: haproxy-net
```

**project/docker-compose.yml:**

```yaml
version: "3.8"

services:
  main:
    image: jwilder/whoami
    networks:
      - haproxy-net

  main-api:
    image: jwilder/whoami
    networks:
      - haproxy-net

  sub:
    image: jwilder/whoami
    networks:
      - haproxy-net

networks:
  haproxy-net:
    external: true
```

## HAProxy - Deployment steps & results

**Step 1:** First we start haproxy using docker compose.

```
[user@host ~/docker-reverse-proxies/haproxy-basic/haproxy]$ docker-compose up -d
Creating network "haproxy-net" with the default driver
Creating haproxy ... done
```

**Step 2:** Then we go to the directory of our project and run docker compose.

```
[user@host ~/docker-reverse-proxies/haproxy-basic/project]$ docker-compose up -d
Creating project_main-api_1 ... done
Creating project_sub_1 ... done
Creating project_main_1 ... done
```

**Step 3:** List out container IDs and names for our reference.

```
[user@host ~]$ docker ps -a --format "table {{.ID}}\t{{.Names}}" | (read -r; printf "%s\n" "$REPLY"; sort -k 2)
CONTAINER ID   NAMES
2eb81cffddf7   haproxy
729b03798d3d   project_main_1
3d5f4b6fe702   project_main-api_1
1f556cc4036f   project_sub_1
```

**Step 4:** Use curl to verify the results by comparing the container IDs with the ones above.

```
[user@host ~]$ curl -H "Host: example.com" 127.0.0.1
I'm 729b03798d3d
[user@host ~]$ curl -H "Host: www.example.com" 127.0.0.1
I'm 729b03798d3d
[user@host ~]$ curl -H "Host: example.com" 127.0.0.1/api/
I'm 3d5f4b6fe702
[user@host ~]$ curl -H "Host: www.example.com" 127.0.0.1/api/
I'm 3d5f4b6fe702
[user@host ~]$ curl -H "Host: sub.example.com" 127.0.0.1
I'm 1f556cc4036f
```

---

# jwilder/nginx-proxy | [Docker Hub](https://hub.docker.com/r/jwilder/nginx-proxy/)

## jwilder/nginx-proxy - Overview

**Directory structure:**

```
nginx-proxy-basic/
├── nginx-proxy/
│   ├── docker-compose.yml
│   └── vhost.d/
│       ├── example.com
│       └── www.example.com
└── project/
    └── docker-compose.yml
```

**vhost.d/example.com & vhost.d/www.example.com:**

```
location /api {
    proxy_pass http://main-api:8000;
}
```

Explanation:

- Both files share the exact same content.
- Similar to Caddy, we must specify the target port as well.
- The reason we bind mount these 2 files (their names MUST MATCH the VIRTUAL_HOST values used in the compose file) is that the container reads this directory and applies the custom configs we added. Refer to the **Per-VIRTUAL_HOST** section of the [docker hub page](https://hub.docker.com/r/jwilder/nginx-proxy/). I will quote two passages from that section below:

> To add settings on a per-VIRTUAL_HOST basis, add your configuration file under /etc/nginx/vhost.d. Unlike in the proxy-wide case, which allows multiple config files with any name ending in .conf, the per-VIRTUAL_HOST file must be named exactly after the VIRTUAL_HOST.

> If you are using multiple hostnames for a single container (e.g. VIRTUAL_HOST=example.com,www.example.com), the virtual host configuration file must exist for each hostname.

**nginx-proxy/docker-compose.yml:**

```yaml
version: "3.8"

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - 80:80
    volumes:
      - ./vhost.d:/etc/nginx/vhost.d:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro

networks:
  default:
    name: nginx-proxy-net
```

Explanation:

- We added the `vhost.d` bind mount to let it load the custom configs for each VIRTUAL_HOST used in the project compose file below.
- We bind mount the Docker socket using `/var/run/docker.sock:/tmp/docker.sock:ro` so nginx-proxy can detect when another container is created and check whether it has the `VIRTUAL_HOST` environment variable.

**project/docker-compose.yml:**

```yaml
version: "3.8"

services:
  main:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=example.com,www.example.com
    networks:
      - nginx-proxy-net

  main-api:
    image: jwilder/whoami
    networks:
      - nginx-proxy-net

  sub:
    image: jwilder/whoami
    environment:
      - VIRTUAL_HOST=sub.example.com
    networks:
      - nginx-proxy-net

networks:
  nginx-proxy-net:
    external: true
```

Explanation:

- Unlike Caddy, which we configured manually (Caddy can be automated too, but not out of the box), nginx-proxy actually recognizes the `VIRTUAL_HOST` environment variable on each container and sets up the configuration for you automatically! For multiple domains, we just separate them with commas.
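One related behavior worth noting (hedged, based on the image's Docker Hub page): nginx-proxy proxies to a container's single exposed port, and defaults to port 80 when several ports are exposed. If your app listens elsewhere, `VIRTUAL_PORT` pins the port; the service and image below are hypothetical placeholders:

```yaml
services:
  someapp:                       # hypothetical service exposing several ports
    image: someorg/someapp       # placeholder image name
    environment:
      - VIRTUAL_HOST=app.example.com
      - VIRTUAL_PORT=8000        # tell nginx-proxy which port to proxy to
    networks:
      - nginx-proxy-net
```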

## jwilder/nginx-proxy - Deployment steps & results

**Step 1:** First we start nginx-proxy using docker compose.

```
[user@host ~/docker-reverse-proxies/nginx-proxy-basic/nginx-proxy]$ docker-compose up -d
Creating network "nginx-proxy-net" with the default driver
Creating nginx-proxy ... done
```

**Step 2:** Then we go to the directory of our project and run docker compose.

```
[user@host ~/docker-reverse-proxies/nginx-proxy-basic/nginx-proxy]$ cd ../project/
[user@host ~/docker-reverse-proxies/nginx-proxy-basic/project]$ docker-compose up -d
Creating project_main_1 ... done
Creating project_sub_1 ... done
Creating project_main-api_1 ... done
```

**Step 3:** List out container IDs and names for our reference.

```
[user@host ~]$ docker ps -a --format "table {{.ID}}\t{{.Names}}" | (read -r; printf "%s\n" "$REPLY"; sort -k 2)
CONTAINER ID   NAMES
fef5d27f3da8   nginx-proxy
b7cbfa1967eb   project_main_1
b7b92a199db8   project_main-api_1
2258da98cee9   project_sub_1
```

**Step 4:** Use curl to verify the results by comparing the container IDs with the ones above.

```
[user@host ~]$ curl -H "Host: example.com" 127.0.0.1
I'm b7cbfa1967eb
[user@host ~]$ curl -H "Host: www.example.com" 127.0.0.1
I'm b7cbfa1967eb
[user@host ~]$ curl -H "Host: example.com" 127.0.0.1/api/
I'm b7b92a199db8
[user@host ~]$ curl -H "Host: www.example.com" 127.0.0.1/api/
I'm b7b92a199db8
[user@host ~]$ curl -H "Host: sub.example.com" 127.0.0.1
I'm 2258da98cee9
```

---

# Traefik | [Docker Hub](https://hub.docker.com/_/traefik) | [Official Docs](https://docs.traefik.io/)

## Traefik - Overview

**Directory structure:**

```
traefik-basic/
├── project/
│   └── docker-compose.yml
└── traefik/
    └── docker-compose.yml
```

Explanation:

- There are no extra configuration files, unlike Caddy and nginx-proxy, because Traefik can work using labels alone! More info below.

**traefik/docker-compose.yml:**

```yaml
version: "3.8"

services:
  traefik:
    image: traefik
    container_name: traefik
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

networks:
  default:
    name: traefik-net
```

Explanation:

- We use commands to configure our Traefik. The commands used are:
- `--providers.docker=true`: This means we are enabling docker provider since Traefik supports several other providers as well. [More info](https://docs.traefik.io/providers/overview/)
- `--providers.docker.exposedbydefault=false`: We set this false so not every container will be automatically exposed by Traefik. They must have the label `traefik.enable=true` for them to be discovered by Traefik. [More info](https://docs.traefik.io/providers/docker/#exposedbydefault)
- `--entrypoints.web.address=:80`: We create an entrypoint called `web` that points to port 80. Later, we can use this `web` entrypoint in any container via labels so they can be reached via HTTP (port 80).
- Similar to nginx-proxy, we bind mount the Docker socket so Traefik can detect other containers carrying Traefik-specific labels.

**project/docker-compose.yml:**

```yaml
version: "3.8"

services:
  main:
    image: jwilder/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.main.rule=Host(`example.com`) || Host(`www.example.com`)"
      - "traefik.http.routers.main.entrypoints=web"
    networks:
      - traefik-net

  main-api:
    image: jwilder/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.main-api.rule=(Host(`example.com`) && PathPrefix(`/api`)) || (Host(`www.example.com`) && PathPrefix(`/api`))"
      - "traefik.http.routers.main-api.entrypoints=web"
    networks:
      - traefik-net

  sub:
    image: jwilder/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.sub.rule=Host(`sub.example.com`)"
      - "traefik.http.routers.sub.entrypoints=web"
    networks:
      - traefik-net

networks:
  traefik-net:
    external: true
```

Explanation:

- As I mentioned, Traefik works just by using labels, so we will use them liberally here.
- Label `traefik.enable=true` is used to tell Traefik that this service will need to be routed by Traefik.
- Label `` traefik.http.routers.main.rule=Host(`example.com`) || Host(`www.example.com`) `` essentially means we create a router called `main`, then specify its rule. For this specific line we use the OR operator (`||`), so Traefik knows that requests to either `example.com` or `www.example.com` will be routed to this container/service.
- Label `traefik.http.routers.main.entrypoints=web` means we define the entrypoints of the router called `main` as `web`. Since we declared `web` points to port 80 in the Traefik compose file, this means we can access this service via port 80 which is HTTP.
- The above 3 points about labels apply to the other services similarly.
- Notice how, unlike Caddy and nginx-proxy, we didn't tell Traefik to go to port `8000` of the services. This is because **if a container exposes only one port, Traefik automatically uses that port**. More info on this behavior and what to do when there are multiple ports [over here](https://docs.traefik.io/providers/docker/#port-detection).
- Just to explain `` Host(`example.com`) && PathPrefix(`/api`) ``: this matches `example.com/api`, since we used the AND operator (&&). [More info on matching paths](https://docs.traefik.io/v1.4/basics/#matchers)
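Building on the port-detection point above (a hedged sketch reusing the existing `main` router; only the last label is new): when a container exposes several ports, or none at all, you can pin the upstream port explicitly with a service label:

```yaml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.main.rule=Host(`example.com`) || Host(`www.example.com`)"
  - "traefik.http.routers.main.entrypoints=web"
  # Explicitly pin the upstream port instead of relying on auto-detection:
  - "traefik.http.services.main.loadbalancer.server.port=8000"
```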

## Traefik - Deployment steps & results

**Step 1:** First we start traefik using docker compose.

```
[user@host ~/docker-reverse-proxies/traefik-basic/traefik]$ docker-compose up -d
Creating network "traefik-net" with the default driver
Creating traefik ... done
```

**Step 2:** Then we go to the directory of our project and run docker compose.

```
[user@host ~/docker-reverse-proxies/traefik-basic/traefik]$ cd ../project/
[user@host ~/docker-reverse-proxies/traefik-basic/project]$ docker-compose up -d
Creating project_sub_1 ... done
Creating project_main-api_1 ... done
Creating project_main_1 ... done
```

**Step 3:** List out container IDs and names for our reference.

```
[user@host ~]$ docker ps -a --format "table {{.ID}}\t{{.Names}}" | (read -r; printf "%s\n" "$REPLY"; sort -k 2)
CONTAINER ID   NAMES
afee6c0788bc   project_main_1
f1fc25ebe396   project_main-api_1
fdef95555c6d   project_sub_1
8ab61ae674c6   traefik
```

**Step 4:** Use curl to verify the results by comparing the container IDs with the ones above.

```
[user@host ~]$ curl -H "Host: example.com" 127.0.0.1
I'm afee6c0788bc
[user@host ~]$ curl -H "Host: www.example.com" 127.0.0.1
I'm afee6c0788bc
[user@host ~]$ curl -H "Host: example.com" 127.0.0.1/api/
I'm f1fc25ebe396
[user@host ~]$ curl -H "Host: www.example.com" 127.0.0.1/api/
I'm f1fc25ebe396
[user@host ~]$ curl -H "Host: sub.example.com" 127.0.0.1
I'm fdef95555c6d
```

---

I hope this minimal guide, with its one-to-one mapping among 4 different reverse proxies, helps give a better understanding of the concept as a whole and somewhat showcases the different amounts of effort or configuration each of them needs. Do let me know if there's any error, or just submit a pull request. 🙂
