# haproxyconf-2022
The purpose of this presentation is to show how we can use HAProxy and Varnish together to make websites and web apps faster and more reliable.

We will start with the general ideas, then go deeper into the details.

## Why HAProxy and Varnish?

They are both open-source, mature and performance-oriented.
They are not alone in their categories, but they are the ones we know and love the most.
They are easily available on many platforms. They have great documentation and vast, vibrant communities.
They are both backed by companies that offer professional support and services, while keeping the core open-source and free for everybody.
### HAProxy
HAProxy will be the first web server the HTTP client talks to.
It handles the TLS/SSL termination, decides whether the request is accepted or rejected, and finally passes the request to the upstream server or servers.
It acts as a proxy, with load-balancing and fault-tolerance capabilities.
To briefly summarize the basic principles of HAProxy, let's mention the _frontend_, the _backend_ and the intermediate processing.

A frontend is an entry point. HAProxy listens at the TCP or HTTP layer, on one or more `IP:PORT` pairs, with options regarding logs, timeouts, supported protocols and much more.
Many frontends can coexist in an HAProxy instance, to accommodate different use cases. For example: a traditional website in one, and an API for third parties in another, each with its own settings and configuration.
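As a minimal sketch of a frontend (the certificate path and timeout value are illustrative; the bind lines mirror the configuration shown later):

```
frontend external
bind 0.0.0.0:80,:::80
bind 0.0.0.0:443,:::443 ssl crt /etc/haproxy/ssl/
option httplog
timeout client 30s
```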
HAProxy can parse and process the full TCP or HTTP request. It exposes an internal API to alter the request or the response, and to decide whether and how to deliver them upstream… in a very optimized and reliable manner.
A backend is an exit point. There must be one for each group of final web servers. If there is only one server in a backend, it will deal with every request, but if there are more than one, HAProxy will balance the requests according to an algorithm.
That's where we begin to talk about load-balancing. And when we introduce some logic to deal with misbehaving or missing servers, it becomes fault-tolerant.
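A minimal backend sketch (names and IPs are placeholders):

```
backend app
balance roundrobin
# With several servers, requests are spread between them;
# "check" enables health checks so a failing server is evicted
server web1 192.0.2.10:80 check
server web2 192.0.2.11:80 check
```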
### Varnish
Varnish stores the result of some requests in a cache. Later on, a client making the same request may get the result from the cache, if it is present and fresh enough.
With its default configuration, Varnish already has a lot of good practice in place, and just a few adjustments may be necessary to handle most situations.
Every step of the request/response processing is done by functions that can be customized with something that looks a bit like inheritance in programming. At startup, everything is validated and compiled into optimized code.
Like HAProxy, Varnish parses the whole request/response inside those functions, so we can decide if the request/response needs to be modified (like adding or removing headers), if a response can be served from the cache, if Varnish needs to talk to the final server, and if the response can be stored in the cache for future use.
Varnish stores its content, as objects, in memory to be really fast. If you have a lot of traffic, give it enough RAM to keep a lot of content available.
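As a tiny illustration of that customization model (the header name here is ours), a custom `vcl_recv` can do its own processing and then fall through to Varnish's built-in logic:

```
sub vcl_recv {
# Drop a debugging header we never want to reach the cache logic
unset req.http.X-Debug;
# No return() here: execution continues into the built-in vcl_recv
}
```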
## How to combine HAProxy and Varnish
So we have decided to have them work together, in a coordinated way, for an efficient and feature-rich setup.

As we've seen earlier, HAProxy uses frontends as entry points and backends as exit points.
In our case, after accepting the request from the HTTP client, HAProxy will pass it to Varnish, via a dedicated backend.
We have chosen to place HAProxy and Varnish on the same server, and to connect them with Unix sockets, but that is an implementation detail: we could have placed them on separate servers, communicating through regular TCP sockets.
Once the request is passed to Varnish, it will parse it too, and decide whether it can serve a response from the cache or must pass the request on to the final server.
In the most common Varnish setup, the request to the final server is passed directly. Varnish is even capable of some rather basic load-balancing. But since we have HAProxy at hand, and it is more capable in this area, we have decided to pass the request back to HAProxy.
To prevent loops, and to bypass the processing that has already been done, the request re-enters HAProxy through another frontend. This one is much simpler: its responsibility is basically to choose which backend to use for each request. If we manage sites and apps on more than one set of servers, we have to create as many backends as we have sets of servers.
In the end, the request is passed to a server in a backend, be it a static website, a dynamic application built with a framework, or the ingress of a big Kubernetes cluster.
The response will eventually travel the same way backwards: through HAProxy, Varnish (which will decide whether to store it in the cache) and HAProxy again, to the original HTTP client.

The chain:
1. HAProxy _frontend_ « external » (general)
2. HAProxy _backend_ « varnish »
3. Varnish
4. HAProxy _frontend_ « internal » (general)
5. HAProxy _backend_ « site X » (per site)
6. Web server
## Multi-sites
Even if we have only one frontend to manage all the requests with the same settings, we can have many groups of web servers, for different web apps or websites. In our case, we have many clients on one HAProxy, with one frontend but many backends: one for each client and their own servers.
HAProxy does not have a concept of VirtualHost, as Apache or Nginx do, but we can use some lower-level conditionals to get a similar result.
We can write (at least) one ACL to detect whether the Host header belongs to one website or another, then use this ACL to trigger some processing and choose the backend to use.
An ACL is like a boolean variable.
```
frontend external
acl example_com_domains hdr(host) -i example.com
acl foo_bar_domains hdr(host) -i foo-bar.com foo-bar.org
[…]
use_backend example_com if example_com_domains
use_backend foo_bar if foo_bar_domains
```

## Pass the request to Varnish

In the HAProxy configuration, we've defined a "varnish" backend, with only one server.
```
backend varnish
option httpchk HEAD /varnishcheck
server varnish_sock /run/varnish.sock check observe layer7 maxconn 3000 inter 1s send-proxy-v2
```
The `httpchk` option tells HAProxy to check Varnish's health at "layer 7", by regularly requesting a special URL.
In the Varnish configuration we have a behaviour for this URL:
```
sub vcl_recv {
# HAProxy check
if (req.url == "/varnishcheck") {
return(synth(200, "Hi HAProxy, I'm fine!"));
}
[…]
}
```
The response is extremely fast, which allows for frequent checks by HAProxy.

In the frontend section of HAProxy, there is an ACL that tells whether Varnish is known to be available.
It is then possible to decide whether to pass the request to Varnish or to bypass it:
```
frontend external
# Is the request routable to Varnish ?
acl varnish_available nbsrv(varnish) gt 0
# Use Varnish if available
use_backend varnish if varnish_available
# … or use normal backend
use_backend default_backend

backend varnish
option httpchk HEAD /varnishcheck
server varnish_sock /run/varnish.sock check observe layer7 maxconn 3000 inter 1s send-proxy-v2

backend default_backend
server example-hostname 1.2.3.4:443 check observe layer4 ssl
```
Nothing forces us to use Varnish for every request and for all websites or apps.

We can either use an ACL:
```
frontend external
acl example_com_domains hdr(host) -i example.com
[…]
use_backend varnish if example_com_domains
```
Or we can use a more dynamic condition, with a file containing all the domains whose traffic should go through Varnish:
```
frontend external
acl use_cache hdr(host) -f /etc/haproxy/cached_domains
[…]
use_backend varnish if use_cache
```
In some cases, we could decide to bypass Varnish when caching makes no sense, for example if the HTTP method is neither `GET`, `HEAD` nor `PURGE`:
```
frontend external
acl varnish_http_verb method GET HEAD PURGE
[…]
use_backend varnish if varnish_http_verb
```
And we can combine several of these conditions together:

* should the domain be cached?
* is the request eligible for caching?
* is Varnish available?
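These conditions can then be combined in a single `use_backend` rule, since ACLs listed on one line are ANDed together (a sketch reusing the ACLs from the previous examples):

```
frontend external
acl use_cache hdr(host) -f /etc/haproxy/cached_domains
acl varnish_http_verb method GET HEAD PURGE
acl varnish_available nbsrv(varnish) gt 0
[…]
use_backend varnish if use_cache varnish_http_verb varnish_available
```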
## PROXY protocol

### What for?

Using the PROXY protocol is not a necessity at all, but it adds a certain amount of comfort when proxies are involved, like HAProxy and Varnish.
At the TCP level, when HAProxy talks to Varnish, then Varnish to HAProxy, and finally HAProxy to the final web server, they are all seen as regular HTTP clients.
Without any modification, each element reports the IP of the previous element as the client IP, and the final server thinks that HAProxy is the only client accessing it. That's bad for IP-based filtering, logging…
At the HTTP level, we've had the `X-Forwarded-For` header for a long time. If you look for its presence and you know how to parse it, you can use the correct value in your web server or application. It's cumbersome and error-prone, and at the TCP level it is invisible.
The PROXY protocol is a simple extension of the TCP protocol. It adds the same kind of header, but at the TCP level. The downside is that both parties must support the PROXY protocol and be configured to use it. The upside is that it is completely transparent after that: there is nothing to do at the application level.
The PROXY protocol was designed by Willy Tarreau, creator of HAProxy, in 2010. It has since been adopted by many products like Varnish, Apache, Nginx, Postfix, Dovecot…
### How?

In HAProxy, we enable it in the backend leading to Varnish, in the frontend used by Varnish to pass the request back to HAProxy, and possibly in the backends leading to the final web servers.
```
backend varnish
server varnish_sock /run/varnish.sock check observe layer7 maxconn 3000 inter 1s send-proxy-v2
frontend internal
bind /run/haproxy-frontend-default.sock user root mode 666 accept-proxy
backend example_com
server example-hostname 1.2.3.4:443 check observe layer4 ssl verify none send-proxy-v2
```
With HAProxy, on the listening side we use the `accept-proxy` option in the frontend section, and on the emitting side we use the `send-proxy-v2` option in the backend section.

With Varnish, on the listening side we use the `PROXY` option in the startup command line:
```
/usr/sbin/varnishd […] -a /run/varnish.sock,PROXY […]
```
And on the emitting side, we use the `proxy_header = 1` setting in the backend section:
```
backend default {
.path = "/run/haproxy-frontend-default.sock";
.proxy_header = 1;
[…]
}
```
### How to debug with the PROXY protocol

Even though it's a valuable optimization, the PROXY protocol is not supported by many tools, especially some very low-level ones, like a forged HTTP request over a telnet connection.
This is especially true if you want to bypass HAProxy and debug directly at the Varnish level.
The varnishd startup command accepts many listening addresses, each with its own options.
We can add one restricted to localhost on a custom, firewalled port, and get a debugging back-door:
```
/usr/sbin/varnishd […] -a 127.0.0.1:82 […]
```
It then becomes really easy to make a direct request:
```
curl --verbose \
--resolve www.example.com:82:127.0.0.1 \
--header "X-Forwarded-Proto: https" \
http://www.example.com:82/foo/bar
```
And yes, curl has supported the PROXY protocol since version 7.60.0, with the `--haproxy-protocol` option.
But this is just an example, and it's good to know that you can do it manually if you want or need to.
### What about the final servers?

If you have control over the final servers and they support the PROXY protocol, you can even make this final connection with the PROXY protocol.
You might want to use the same trick: dedicate a custom port to the PROXY protocol, and keep the regular 80/443 ports without it.
Note: If you use Apache, I encourage you to take a look at the `ForensicLog` directive from the `mod_log_forensic` module. It adds a special log where you can find a complete trace of each request with all its headers. It is especially useful to see what arrives at the end of the chain. I don't know if something similar exists for Nginx or other web servers.
```
+X@Ike1sspdiNAko5YHK9HAAAAC4|GET /blog/ HTTP/1.1|user-agent:curl/7.64.0|accept:*/*|host:jeremy.lecour.fr|x-forwarded-for:1.2.3.4, 4,5,6,7|accept-encoding:gzip|x-varnish:65545|x-forwarded-port:443|x-forwarded-proto:http|connection:close
-X@Ike1sspdiNAko5YHK9HAAAAC4
```

## HTTP tagging

With the same focus on traceability and easy debugging, we use HAProxy and Varnish to add some HTTP headers with information about their behaviour.
Let's remember that the HTTP standard has normalized a list of headers, for example `Host`, `Set-Cookie`, `Cache-Control`…
But it also normalized a way to use custom, non-standard HTTP headers, with an `X-` prefix. Some have almost become standard and are used very frequently.
### X-Forwarded-*
Even though we use the PROXY protocol internally (and possibly to the final servers), we usually keep the commonly used `X-Forwarded-For` HTTP header.
We also use the `X-Forwarded-Port` header to tell which HAProxy port the request arrived on.
And finally, we use the `X-Forwarded-Proto` header to indicate whether the original request was made over a secure connection or not.
```
frontend external
bind 0.0.0.0:80,:::80
bind 0.0.0.0:443,:::443 ssl […]
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Proto https if { ssl_fc }
```
That last header is important when the request goes to an application that enforces HTTPS. If it receives a request from HAProxy over a clear-text HTTP connection, it might trigger a redirect. But many frameworks detect the `X-Forwarded-Proto https` header and understand that the external part of the request was encrypted.
### X-Unique-ID
It is very useful to mark an incoming request with a unique identifier that can be transmitted from proxy to proxy, until the final server. It can even be sent back by the application, and be traceable all the way back to the original client.
```
frontend external
[…]
http-request set-header X-Unique-ID %[uuid()] unless { hdr(X-Unique-ID) -m found }
```
There is no real consensus on the name of this header. You will commonly find `X-Unique-ID` or `X-Request-ID`.
### X-Boost-*
2022-09-18 22:38:33 +02:00
2022-10-04 23:00:22 +02:00
"Boost" is the name we've given to our HAProxy and Varnish setup.
Hence the name `X-Boost-*` of a few custom HTTP headers that we add to the request and the response
2022-09-18 22:38:33 +02:00
2022-10-04 23:00:22 +02:00
On the "external" frontend, we add one to tell that the request passed through this step.
This is useful for the final server, to know which steps the request went through.
When the response exits to the original client, we also add the same header to inform the client of the same steps.
We also tell the client the name of the Boost instance that processed the request.
```
frontend external
[…]
http-request add-header X-Boost-Step1 haproxy-external
http-response add-header X-Boost-Step1 "haproxy-external; client-https" if { ssl_fc }
http-response add-header X-Boost-Step1 "haproxy-external; client-http" if !{ ssl_fc }
http-response set-header X-Boost-Server my-hostname
```
On the "internal" frontend, we apply the same principle, but this time it's "step 3" instead of "step 1":
```
frontend internal
[…]
http-request add-header X-Boost-Step3 haproxy-internal
http-response add-header X-Boost-Step3 "haproxy-internal; SSL to backend" if { ssl_bc }
http-response add-header X-Boost-Step3 "haproxy-internal; no SSL to backend" if !{ ssl_bc }
```
In the final backend, we add a header to indicate whether HAProxy used an encrypted connection or not.
```
backend example_com
[…]
http-response set-header X-Boost-Proto https if { ssl_bc }
http-response set-header X-Boost-Proto http if !{ ssl_bc }
server example-hostname 1.2.3.4:443 check observe layer4 ssl verify none
```
In Varnish (the "step 2"), we tag the request to tell the final server that Varnish has seen it.
```
sub vcl_recv {
[…]
set req.http.X-Boost-Step2 = "varnish";
}
```
And when the response is sent back, we use a header to tell whether the server has set `Cache-Control` or some cookies.
```
sub vcl_deliver {
[…]
if (resp.http.Set-Cookie && resp.http.Cache-Control) {
set resp.http.X-Boost-Step2 = "varnish WITH set-cookie AND cache-control on backend server";
} elseif (resp.http.Set-Cookie) {
set resp.http.X-Boost-Step2 = "varnish WITH set-cookie and NO cache-control on backend server";
} elseif (resp.http.Cache-Control) {
set resp.http.X-Boost-Step2 = "varnish with NO set-cookie and WITH cache-control on backend server";
} else {
set resp.http.X-Boost-Step2 = "varnish with NO set-cookie and NO cache-control on backend server";
}
}
```
By default, Varnish also adds many HTTP headers with information about the cache: hit or miss, age of the object…
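Those indicators can also be surfaced to the client explicitly; a common pattern (a sketch, not part of our setup) uses `obj.hits` in `vcl_deliver`:

```
sub vcl_deliver {
# obj.hits counts how many times this object was served from the cache
if (obj.hits > 0) {
set resp.http.X-Cache = "HIT";
} else {
set resp.http.X-Cache = "MISS";
}
}
```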
Note: the syntax of our `X-Boost-*` custom headers is still experimental. We plan to change them to be more terse and easier to parse.

### Full HAProxy log

In advanced debugging situations, we can also put the entire HAProxy log line into an HTTP header:
```
frontend external
http-response add-header X-Haproxy-Log-external "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
frontend internal
http-response add-header X-Haproxy-Log-Internal "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
```
:warning: It's better not to enable this in production, but it can be very useful to let a client in a test/pre-production context check how the proxy behaves.
## High availability

### On the application side

When the final website or web application has several servers to handle requests, the load-balancing can be done directly by HAProxy.

When possible, we prefer a `round-robin` balance between the web servers.
When the application handles sessions poorly, or has other blocking aspects, we set one server as active and the others as backups, to fail over to a spare if the primary becomes unavailable.
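A sketch of both approaches (hostnames and IPs are placeholders):

```
# Round-robin between two web servers
backend app_roundrobin
balance roundrobin
server web1 192.0.2.10:443 check ssl verify none
server web2 192.0.2.11:443 check ssl verify none

# Active/backup: web2 only receives traffic when web1 is down
backend app_failover
server web1 192.0.2.10:443 check ssl verify none
server web2 192.0.2.11:443 check ssl verify none backup
```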
### On the HAProxy side

To keep HAProxy + Varnish from being a single point of failure, we have set up 2 different servers in 2 different networks, with round-robin DNS in front of them.
This approach is not the most advanced, since in case of a server failure we have to intervene at the DNS level. But it has the merit of simplicity, and complete failures of virtual machines running on redundant virtualization are fortunately very rare. And this system gives us quite a lot of flexibility.
It would also be possible to go "active-passive", with a floating IP (keepalived/VRRP) in front of the two servers, to get an automatic failover to the secondary if the primary fails.

We could go even further and do "active-active", with 2 HAProxy instances in "layer 4" mode that then forward to 2 HAProxy instances in "layer 7" mode.

These approaches allow an almost immediate recovery in case of failure, but they require more resources and a particular network topology.
## Additional features

### TCP filtering

In the "external" frontend that handles incoming traffic, we can check whether the client must be rejected immediately, for example based on its IP address:

```
frontend external
[…]
# Reject the request at the TCP level if source is in the denylist
tcp-request connection reject if { src -f /etc/haproxy/deny_ips }
```

This does not replace a real firewall, but it makes it easy to exclude clients at the TCP level (layer 4).
### HTTP filtering

By analyzing the request at the HTTP level (layer 7), we can filter in a much finer way.

For example, if a site has been switched to maintenance mode (detailed later), we can bypass that maintenance mode for a specific list of IPs, which will still be able to browse the site and check, for instance, the progress of the maintenance operations.
```
frontend external
[…]
# List of IP that will not go the maintenance backend
acl maintenance_ips src -f /etc/haproxy/maintenance_ips
# Go to maintenance backend, unless your IP is whitelisted
use_backend maintenance if !maintenance_ips

backend maintenance
http-request set-log-level silent
# Custom 503 error page
errorfile 503 /etc/haproxy/errors/maintenance.http
# With no server defined, a 503 is returned for every request
```
For local monitoring tools (e.g. Munin), or for ACME challenges, it can be useful to route requests to a local web server (e.g. Apache or Nginx), instead of sending them to Varnish or to the application web servers.
```
frontend external
[…]
# Is the request coming for the server itself (stats…)
acl server_hostname hdr(host) -i my-hostname
acl munin hdr(host) -i munin
# Detect Let's Encrypt challenge requests
acl letsencrypt path_dir -i /.well-known/acme-challenge
use_backend local if server_hostname
use_backend local if munin
use_backend letsencrypt if letsencrypt
backend letsencrypt
# Use this if the challenge is managed locally
server localhost 127.0.0.1:81 send-proxy-v2 maxconn 10
# Use this if the challenge is managed remotely
### server my-certbot-challenge-manager 192.168.2.1:80 maxconn 10
backend local
option httpchk HEAD /haproxy-check
server localhost 127.0.0.1:81 send-proxy-v2 maxconn 10
```
### Maintenance mode

The maintenance mode is a sort of circuit-breaker that switches the whole installation, or just one website, to maintenance.
For the global switch, we use a special backend that defines no final web server, and will therefore always return a 503 error.
We can also define a specific error page for this situation.
We have chosen not to log the requests when this mode is enabled.
```
frontend external
[…]
# List of IP that will not go the maintenance backend
acl maintenance_ips src -f /etc/haproxy/maintenance_ips
# Go to maintenance backend, unless your IP is whitelisted
use_backend maintenance if !maintenance_ips

backend maintenance
http-request set-log-level silent
# Custom 503 error page
errorfile 503 /etc/haproxy/errors/maintenance.http
# With no server defined, a 503 is returned for every request
```
For a per-site switch, we define a special backend for that site, and enable the use of this backend for its requests. We find our per-domain ACL again.
```
frontend external
[…]
acl example_com_domains hdr(host) -i example.com
acl maintenance_ips src -f /etc/haproxy/maintenance_ips
acl example_com_maintenance_ips src -f /etc/haproxy/example_com/maintenance_ips
use_backend example_com_maintenance if example_com_domains !example_com_maintenance_ips !maintenance_ips
```
### Per-site customization

We can define ACLs for each site, and thus trigger specific behaviours when a request targets that site.

For example, HTTP-to-HTTPS redirects, or, for some subdomains, HSTS headers.
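A sketch of such per-site rules (reusing the per-domain ACL from earlier; the HSTS max-age is illustrative):

```
frontend external
[…]
acl example_com_domains hdr(host) -i example.com
# Redirect clear-text HTTP to HTTPS for this site only
http-request redirect scheme https code 301 if example_com_domains !{ ssl_fc }
# Add an HSTS header on responses delivered over TLS
http-response set-header Strict-Transport-Security "max-age=63072000" if example_com_domains { ssl_fc }
```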
## Complete configuration

Let's walk through the complete configuration of HAProxy, then Varnish, commenting step by step on everything we've seen so far.