Hi all. I'm working on an app hosted in Kubernetes that lets users add custom domains. Each time a domain is added, the app creates an ingress (virtual host) and a certificate in Kubernetes. The datastore backing Kubernetes (etcd) can only scale so far because of its database size limits, and there will also be a limit to the number of virtual hosts Nginx can handle. For these two reasons I am going to re-architect the app to spread users across many small Kubernetes clusters instead of a single big one. Questions:
1. In your experience, how many virtual hosts can an Nginx instance handle comfortably? Does anyone run, say, thousands of virtual hosts or more? I will do some testing myself, but I'd like a rough idea of what to expect.
2. With many virtual hosts, can Nginx drop connections during the config reload that happens when a virtual host is created/updated/deleted?
3. Is a reload fast enough when the config includes many virtual hosts?
4. Would something like Traefik or HAProxy be more efficient with many virtual hosts, and with dynamically reloading/updating its configuration?
Thanks a lot in advance!
I don’t have experience with thousands of vhosts; the most we regularly run is around 50. Here are some relevant links:
https://nginx.org/en/docs/http/server_names.html#optimization
https://nginx.org/en/docs/hash.html
It should be pretty easy to test out – just generate a config file with a few thousand junk server names and compare the request latency and the time a config reload takes against some baseline config with only a couple of server names.
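A minimal sketch of that test, just to make it concrete. The domain suffix `example.test`, the port, and the output filename are all made up for illustration; the real config would need whatever listen/TLS settings your setup uses:

```python
def make_config(num_vhosts: int) -> str:
    """Build an nginx http-context snippet with num_vhosts junk server blocks."""
    blocks = []
    for i in range(num_vhosts):
        blocks.append(
            "server {\n"
            "    listen 8080;\n"
            f"    server_name host{i}.example.test;\n"  # unique junk name per vhost
            "    return 200 'ok';\n"
            "}\n"
        )
    return "\n".join(blocks)

if __name__ == "__main__":
    # Write a few thousand vhosts to a file you can include from http {}.
    with open("junk-vhosts.conf", "w") as f:
        f.write(make_config(5000))
```

Include the generated file from the `http {}` block, validate with `nginx -t`, then compare `time nginx -s reload` and request latency against the baseline. If nginx complains about the server-names hash, bump `server_names_hash_max_size` / `server_names_hash_bucket_size` as described in the first link above.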