spellcheck main content

Jérémy Lecour 2022-10-15 15:47:10 +02:00 committed by Jérémy Lecour
parent 008f48e8ed
commit c96e8d4728


They both are supported by companies that offer professional support and services.
### HAProxy
HAProxy will be the first web server to receive requests from the HTTP client.
It will be the TLS/SSL termination point. It will decide if the request is accepted or rejected, and finally will pass the request to the upstream server or servers.
It is acting as a proxy, with load-balancing and fault-tolerance capabilities.
To briefly summarize the basic principles of HAProxy, let's mention the _frontend_, the _backend_ and the intermediate processing.
A frontend is an entry point. HAProxy is listening on the TCP or HTTP layer, on one or more `IP:PORT` pairs, with options regarding logs, timeouts, supported protocols and much more.
Many frontends can coexist in an HAProxy instance, to accommodate different use cases. For example: a traditional website in one, and an API for third parties in another, each with its own settings and configuration.
HAProxy is able to parse and process the full TCP or HTTP request. It exposes an internal API to change the request or the response, decide how to deliver them upstream or not… in a very optimized and reliable manner.
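As a sketch of what this processing can look like in practice, a frontend rule might reject some requests outright (the path and the rule itself are illustrative assumptions, not part of the setup described here):

```
frontend external
    # Hypothetical rule: refuse requests to an admin area at the edge
    http-request deny deny_status 403 if { path_beg /admin }
```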
A backend is an exit point. There must be one for each group of final webservers. If there is only one server in a backend, it will deal with every request, but if there are more than one, HAProxy will balance the requests, according to an algorithm.
That's where we begin to mention load-balancing. And when we introduce some logic to deal with misbehaving or missing servers, it becomes fault-tolerant.
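A minimal backend with more than one server might be sketched like this (server names, addresses and the `roundrobin` algorithm are illustrative assumptions):

```
backend example_com
    balance roundrobin
    # Requests are spread across both servers; "check" enables health checks
    server web1 192.0.2.10:80 check
    server web2 192.0.2.11:80 check
```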
### Varnish
Varnish is going to store in cache the result of some requests. Later on, a client who makes the same request might get a result from the cache if it is present and fresh enough.
With its default configuration, Varnish already has a lot of good practices in place, and only a few adjustments might be necessary to deal with most situations.
Every step of the request/response processing is done by functions that can be customized by something that looks a bit like inheritance in programming. At startup, everything is validated and compiled into optimized code.
Like HAProxy, Varnish parses the whole request/response inside the functions, so we can decide if the request/response needs to be modified (like adding or removing headers), if a response can be served from the cache, if it needs to talk to the final server, and if the response can be stored in cache for future use.
Varnish stores the content, its objects, into memory to be really fast. If you have a lot of traffic, give it enough RAM to keep a lot of content available.
## Let's combine HAProxy and Varnish
So we have decided to have them work together, in a coordinated way, for an efficient and feature-rich setup.
As we've seen earlier, HAProxy uses frontends as entry points and backends as exit points.
In our case, after accepting the request from the HTTP client, HAProxy will pass it to Varnish, via a dedicated backend.
We have chosen to place HAProxy and Varnish on the same server, and connect them with Unix sockets, but it's an implementation detail and we could have placed them on separate servers, communicating through regular TCP sockets.
Once the request is passed to Varnish, it will parse it too and decide if it is able to serve a response from the cache or pass it to the final server.
In the most common case with Varnish, the request is passed directly to the final server. Varnish is even capable of some rather basic load-balancing. But since we have HAProxy at hand and it is more capable in this area, we have decided to pass the request back to HAProxy.
To prevent loops, and to bypass the processing that has already been done, the request will reenter HAProxy through another frontend. This one is much simpler and its responsibility is basically to choose which backend to use for each request. If we manage sites and apps on more than one set of servers, we have to create as many backends as we have sets of servers.
In the end, the request is passed to a server in a backend, be it a static web site, a dynamic application programmed with a framework, or the ingress of a big Kubernetes cluster.
The response will eventually travel the same way backwards, through HAProxy, Varnish (which will decide to store it in the cache or not) and HAProxy again to the original HTTP client.
Chain:
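Based on the description above, the path of a request can be sketched as follows (the frontend names match those used in the configuration excerpts of this article):

```
HTTP client → HAProxy ("external" frontend) → Varnish → HAProxy ("internal" frontend) → final server
```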
## Multi-sites
Even if we have only one frontend to manage all the requests with the same settings, we can have many groups of web servers, for different web apps or websites. In our case, we have many clients on one HAProxy, with one frontend, but many backends, one for each client and their own servers.
HAProxy does not have a concept of VirtualHost as Apache or Nginx do, but we can use some lower-level conditionals to achieve a similar result.
We can write (at least) an ACL to detect if the Host header is for one web site or another, and then use this ACL to trigger some processing and choose the backend to use.
An ACL is like a boolean variable.
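For instance, a host-based ACL and its use might look like this (the domain names are placeholders):

```
frontend external
    # True when the Host header matches one of these domains
    acl example_com_domains hdr(host) -i example.com www.example.com
    use_backend example_com if example_com_domains
```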
The `httpchk` option tells HAProxy to use "layer 7" checks for Varnish health, by regularly requesting a special URL.
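Such a backend could be declared like this (the check URL and the socket path are assumptions, not the article's exact values):

```
backend varnish
    # Layer 7 health check against a dedicated URL
    option httpchk GET /varnish-check
    server varnish unix@/run/varnish.sock check
```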
In the Varnish configuration we have a behavior for this URL:
```
sub vcl_recv {
    # Hypothetical health-check URL: answer directly from Varnish,
    # without touching the cache or any backend
    if (req.url == "/varnish-check") {
        return (synth(200, "OK"));
    }
}
```
The response is extremely fast, which allows for frequent checks by HAProxy.
In the frontend section of HAProxy, there is an ACL that tells if Varnish is known to be available.
It is then possible to decide if we can pass the request to Varnish or bypass it:
```
frontend external
    # …
use_backend varnish if example_com_domains
```
We can use a more dynamic condition, with a file containing all the domains whose traffic should go through Varnish:
```
frontend external
    # Hypothetical file path; the file lists one domain per line
    use_backend varnish if { req.hdr(host),lower -f /etc/haproxy/varnish_domains }
```

Using the PROXY Protocol is not a necessity at all, but it adds a certain amount…
At the TCP level, when HAProxy talks to Varnish, then Varnish to HAProxy, and finally HAProxy to the final web server, they are all seen as regular HTTP clients.
Without any modification, each element reports the IP of the previous element as the client IP, and the final server thinks that HAProxy is the only client accessing it. That's bad for IP-based filtering, logging…
At the HTTP level, we've had the `X-Forwarded-For` header for a long time. If you look at its presence and you know how to parse it, you can use the correct value in your web server or application. It's cumbersome and error-prone, and at the TCP level, this is invisible.
The PROXY protocol is a simple extension of the TCP protocol. It adds the same kind of header, but at the TCP level. The downside is that both parties must support the PROXY protocol and be configured to use it. The upside is that it's completely transparent after that. There is nothing to do at the application level.
The PROXY protocol has been designed by Willy Tarreau, creator of HAProxy, in 2010. It has since been adopted by many products like Varnish, Apache, Nginx, Postfix, Dovecot…
### How?
```
backend example_com
    server example-hostname 1.2.3.4:443 check observe layer4 ssl verify none send-proxy-v2
```
With HAProxy, for the listening side we use the `accept-proxy` option in the frontend section and for the emitting side we use the `send-proxy-v2` option in the backend section.
With Varnish, for the listening side we use the `PROXY` option in the startup command line:
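For example, the listening address passed to `varnishd` might look like this (the Unix socket path is an assumption, and listening on a Unix socket requires a recent Varnish version):

```
varnishd -a /run/varnish.sock,PROXY …
```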
Note: If you use Apache, I encourage you to take a look at the "ForensicLog" module…
## HTTP tagging
With the same focus on traceability and easy debugging, we are using HAProxy and Varnish to add some HTTP headers with information about their behavior.
Let's remember that the HTTP standard has normalized a list of headers. For example: `Host`, `Set-Cookie`, `Cache-Control`…
But it also normalized a way to use custom, non-standard, HTTP headers with an `X-` prefix. Some have almost become standard and are used very frequently.
### X-Forwarded-*
Even though we use the PROXY protocol internally (and possibly to the final servers) we usually keep the commonly used `X-Forwarded-For` HTTP header.
We also use the `X-Forwarded-Port` header to tell which port of HAProxy the request arrived on.
And finally, we use the `X-Forwarded-Proto` header to indicate if the original request was made on a secure connection or not.
```
frontend external
    # (reconstructed lines; the original values are elided in this view)
    http-request set-header X-Forwarded-For %[src]
    http-request set-header X-Forwarded-Port %[dst_port]
http-request set-header X-Forwarded-Proto https if { ssl_fc }
```
That last header is important when the request goes to an application that enforces HTTPS. If it receives a request from HAProxy on a clear-text HTTP connection, it might trigger a redirect. But many frameworks detect the `X-Forwarded-Proto https` header and understand that the external part of the request was encrypted.
### X-Unique-ID
It is very useful to mark an incoming request with a unique identifier that can be transmitted from proxy to proxy, until the final server. It can even be sent back by the application and be traceable all the way back to the original client.
```
frontend external
    # A plausible reconstruction, using the unique-id directives from the HAProxy documentation
    unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
    unique-id-header X-Unique-ID
```

There is no real consensus on the name of the header. You will commonly find `X-`-prefixed variants…
Hence the name `X-Boost-*` of a few custom HTTP headers that we add to the request and the response.
On the "external" frontend, we add one to tell that the request passed this step.
This is useful for the final server to know which steps the request passed.
When the response exits to the original client, we also add the same header to inform the client of the same steps.
Note: The syntax of our `X-Boost-*` custom headers is still experimental. We plan…
### Full HAProxy log
In advanced debugging situations, we can also enable a custom header with the full HAProxy log line:
```
frontend external
http-response add-header X-Haproxy-Log-External "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
frontend internal
http-response add-header X-Haproxy-Log-Internal "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
```

If a server is not available, we need to change the DNS zone to disable the faulty server.
It would definitely be possible to have an "active-standby" setup, with a virtual IP (with keepalived/vrrp), and have an automatic failover if the main server fails.
Or we could go even further with an "active-active" setup with 2 "layer 4" HAProxy servers, then 2 or more "layer 7" HAProxy servers.
Those options (and some more that I didn't cover) allow for a much faster recovery in case of an incident, but they are much more complex.
You have to decide for yourself.
## Additional features
```
frontend external
    tcp-request connection reject if { src -f /etc/haproxy/deny_ips }
```
This is not a replacement for a traditional firewall, but it's much more powerful since any ACL can be used.
### HTTP filtering
```
backend maintenance
    # With no server defined, a 503 is returned for every request
```
For some local monitoring tools (like Munin) or for ACME challenges, it is useful to directly proxy the requests to a local web server (like Apache or Nginx), instead of going through Varnish or other servers.
```
frontend external
    # …

backend maintenance
# With no server defined, a 503 is returned for every request
```
For a "maintenance mode" per site, we also need a custom backend with the same principle as before, and we need to use the domain ACL to restrict to this site only.
```
frontend external
    # …
```
### Per site customization
We can use many ACLs for each site or application to trigger custom behavior when a request arrives for this site.
A typical example is http-to-https redirects, subdomain redirects or HSTS headers.
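As an illustration, a per-site http-to-https redirect could be expressed like this (the domain and ACL name are placeholders):

```
frontend external
    acl example_com_domains hdr(host) -i example.com www.example.com
    # Redirect to HTTPS only for this site, and only for clear-text connections
    http-request redirect scheme https code 301 if example_com_domains !{ ssl_fc }
```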
## Complete configuration rundown