From 53de8fa40dd2fce867091c300d2d385a6b465a1c Mon Sep 17 00:00:00 2001 From: Jeremy Lecour Date: Sun, 16 Oct 2022 09:06:19 +0200 Subject: [PATCH] update content --- README.en.md | 113 ++++++++++++++++++------------------- README.fr.md | 132 +++++++++++++++++++++++---------------------- 2 files changed, 105 insertions(+), 140 deletions(-) diff --git a/README.en.md b/README.en.md index e2ebe26..ff49373 100644 --- a/README.en.md +++ b/README.en.md @@ -68,7 +68,6 @@ Chain : ## Multi-sites - Even if we have only one frontend to manage all the requests with the same settings, we can have many groups of web-servers, for different web-apps of web-sites. In our case, we have many clients on one HAProxy, with one frontend, but many backends, one for each client and their own servers. HAProxy does not have a concept of VirtualHost as Apache or Nginx do, but we can use some lower-levels conditionals to have a similar result. @@ -200,7 +199,7 @@ backend example_com With HAProxy, for the listening side we use the `accept-proxy` option in the frontend section and for the emitting side we use the `send-proxy-v2` option in the backend section. -With Varnish, for the listening side we use the `PROXY` option in the startp command line : +With Varnish, for the listening side we use the `PROXY` option in the startup command line : ``` /usr/sbin/varnishd […] -a /run/varnish.sock,PROXY […] ``` @@ -238,8 +237,7 @@ curl --verbose \ http://www.example.com:82/foo/bar ``` -And yes, curl supports the PROXY protocol since version 7.37.0 and with the `--proxy-header`. -But it's an example and it's good to know that you can do that manually if you want or need. +And yes, curl supports the PROXY protocol since version 7.60.0 with the `--haproxy-protocol` option, but it only supports version 1, and I've never managed to make it work with local UNIX sockets. ### What about the final servers? 
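To make the version 1 / version 2 distinction above concrete, here is a minimal sketch (in Python; the function name and addresses are mine, not from the repo) of the human-readable version 1 header that a client such as curl sends on the connection before the HTTP request itself. Version 2 carries the same information in a binary encoding.

```python
def proxy_v1_header(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> bytes:
    # PROXY protocol v1 is a single ASCII line, sent once at the start of
    # the connection and terminated by CRLF, carrying the original
    # client/server addresses and ports.
    return f"PROXY TCP4 {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode("ascii")

# Example: a client at 203.0.113.7 reaching a frontend at 192.0.2.10:443.
header = proxy_v1_header("203.0.113.7", "192.0.2.10", 51234, 443)
print(header)
```

A listener configured with `accept-proxy` (HAProxy) or `,PROXY` (Varnish) expects exactly this line first and will reject connections that do not start with it.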
@@ -328,15 +326,6 @@ frontend internal http-response add-header X-Boost-Step3 "haproxy-internal; no SSL to backend" if !{ ssl_bc } ``` -In the final backend, we add a header to indicate if HAProxy used an encrypted connection or not. - -``` -backend example_com - […] - http-response set-header X-Boost-Proto https if { ssl_bc } - http-response set-header X-Boost-Proto http if !{ ssl_bc } - server example-hostname 1.2.3.4:443 check observe layer4 ssl verify none -``` In Varnish (the "step 2"), we tag the request to tell the final server that Varnish has seen it. @@ -352,20 +341,18 @@ And when the response is sent back, we tell in a header if the server has set ca sub vcl_deliver { […] if (resp.http.Set-Cookie && resp.http.Cache-Control) { - set resp.http.X-Boost-Step2 = "varnish WITH set-cookie AND cache-control on backend server"; + set resp.http.X-Boost-Step2 = "varnish; set-cookie; cache-control"; } elseif (resp.http.Set-Cookie) { - set resp.http.X-Boost-Step2 = "varnish WITH set-cookie and NO cache-control on backend server"; + set resp.http.X-Boost-Step2 = "varnish; set-cookie; no-cache-control"; } elseif (resp.http.Cache-Control) { - set resp.http.X-Boost-Step2 = "varnish with NO set-cookie and WITH cache-control on backend server"; + set resp.http.X-Boost-Step2 = "varnish; no-set-cookie; cache-control"; } else { - set resp.http.X-Boost-Step2 = "varnish with NO set-cookie and NO cache-control on backend server"; + set resp.http.X-Boost-Step2 = "varnish; no-set-cookie; no-cache-control"; } ``` By default, Varnish also adds many HTTP headers with information about the cache : hit or miss, age of the object…… -Note: The syntax of our `X-Boost-*` custom headers is still experimental. We plan to change them to be more terse and easier to parse. - ### Full HAProxy log In advanced debugging situation, we can also enable a custom header with the full HAProxy log line: @@ -400,7 +387,7 @@ You have to decide for yourself. 
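The four-branch cascade in `vcl_deliver` above reduces to two independent booleans: did the backend send `Set-Cookie`, and did it send `Cache-Control`. A sketch of the same decision table in Python (a hypothetical helper, just to show the new terse header format):

```python
def boost_step2(set_cookie: bool, cache_control: bool) -> str:
    # Mirrors the vcl_deliver cascade: report which of the two
    # cache-relevant response headers the backend server actually sent.
    cookie = "set-cookie" if set_cookie else "no-set-cookie"
    cache = "cache-control" if cache_control else "no-cache-control"
    return f"varnish; {cookie}; {cache}"

print(boost_step2(True, False))
```

The `;`-separated format makes the header easy to split mechanically, which was the point of replacing the older sentence-style values.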
## Additional features -### TCP filtering +### Filtering at HAProxy level The "external" frontend handles incoming requests. We can verify if the client must be rejected, by it's client IP for example: @@ -413,53 +400,19 @@ frontend external This is not a replacement for a traditional firewall, but it's much more poweful since any ACL can be used. -### HTTP filtering - Since HAProxy has access to every aspect of the request on the "layer 4", we can also filter on a much finer level. -For example, if a website is in maintenance mode (detailed further), we can bypass this for a list of IP addresses. Very useful to check on your deployment before opening to the general public. +We can require authentication, backed by a list of users or groups of users. We can also set up redirects to HTTPS or to other domains. ``` +userlist vip_users + user johndoe password $6$k6y3o.eP$JlKBx9za9667qe4(…)xHSwRv6J.C0/D7cV91 + frontend external - […] - # List of IP that will not go the maintenance backend - acl maintenance_ips src -f /etc/haproxy/maintenance_ips - # Go to maintenance backend, unless your IP is whitelisted - use_backend maintenance if !maintenance_ips - -backend maintenance - http-request set-log-level silent - # Custom 503 error page - errorfile 503 /etc/haproxy/errors/maintenance.http - # With no server defined, a 503 is returned for every request -``` - -For some local monitoring tools (like Munin) or for ACME challenges, it is useful to directly proxy the requests to a local web-server (like Apache or Nginx), instead or going through Varnish or other servers. 
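For context on the `http-request auth` line in the `userlist` example: HAProxy answers 401 until the client presents a matching `Authorization` header. A small sketch of what that header contains (Basic scheme; the credentials here are made up, not the ones from the userlist):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    # HTTP Basic authentication: "Basic " + base64("user:password").
    # This is what the browser resends after HAProxy's 401 challenge
    # for the configured realm.
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

print(basic_auth_header("johndoe", "secret"))
```

HAProxy then verifies the decoded credentials against the `userlist` (here a SHA-512 crypt hash), so the password never appears in the configuration in clear text.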
- ``` -frontend external - […] - # Is the request coming for the server itself (stats…) - acl self hdr(host) -i my-hostname my-hostname.domain.tld - acl munin hdr(host) -i munin - - # Detect Let's Encrypt challenge requests - acl letsencrypt path_dir -i /.well-known/acme-challenge - - use_backend local if self - use_backend local if munin - - use_backend letsencrypt if letsencrypt - -backend letsencrypt - # Use this if the challenge is managed locally - server localhost 127.0.0.1:81 send-proxy-v2 maxconn 10 - # Use this if the challenge is managed remotely - ### server my-certbot-challenge-manager 192.168.2.1:80 maxconn 10 - -backend local - option httpchk HEAD /haproxy-check - server localhost 127.0.0.1:81 send-proxy-v2 maxconn 10 + […] + redirect scheme https code 301 if !{ ssl_fc } + redirect prefix https://example-to.org code 301 if { hdr(host) -i example-from.org } + http-request auth realm "VIP Section" if !{ http_auth(vip_users) } ``` ### Maintenance mode @@ -496,12 +449,38 @@ frontend external use_backend example_com_maintenance if example_com_domains !example_com_maintenance_ips !maintenance_ips ``` +### Local services + +For some local monitoring tools (like Munin) or for ACME challenges, it is useful to directly proxy the requests to a local web-server (like Apache or Nginx), instead of going through Varnish or other servers. 
+ ``` +frontend external + […] + # Is the request coming for the server itself (stats…) + acl self hdr(host) -i my-hostname my-hostname.domain.tld + acl munin hdr(host) -i munin + + # Detect Let's Encrypt challenge requests + acl letsencrypt path_dir -i /.well-known/acme-challenge + + use_backend local if self + use_backend local if munin + + use_backend letsencrypt if letsencrypt + +backend letsencrypt + # Use this if the challenge is managed locally + server localhost 127.0.0.1:81 send-proxy-v2 maxconn 10 + # Use this if the challenge is managed remotely + ### server my-certbot-challenge-manager 192.168.2.1:80 maxconn 10 + +backend local + option httpchk HEAD /haproxy-check + server localhost 127.0.0.1:81 send-proxy-v2 maxconn 10 +``` + ### Per site customization We can use many ACL for each site or application to trigger custom behavior when a request arrives for this site. -A typical example is http-to-https redirects, subdomains redirects or HSTS headers. - -## Complete configuration rundown - -Let's go through the whole configuration to see the whole picture and comment what we've seen so far. \ No newline at end of file +A typical example is http-to-https redirects, subdomain redirects or HSTS headers. \ No newline at end of file diff --git a/README.fr.md b/README.fr.md index 1f6da66..f312915 100644 --- a/README.fr.md +++ b/README.fr.md @@ -231,7 +231,7 @@ curl --verbose \ http://www.example.com:82/foo/bar ``` -Alors oui, curl supporte le PROXY protocol depuis laversion 7.37.0 avec l'option `--proxy-header`, mais c'est un exemple. +Alors oui, curl supporte le PROXY protocol depuis la version 7.60.0 avec l'option `--haproxy-protocol`, mais ça ne supporte que la version 1, et puis je n'ai pas réussi à combiner ça avec une connexion sur socket Unix locale. ### Jusqu'au serveur final @@ -285,6 +285,8 @@ Il n'y a pas de réel consensus sur le nommage de l'en-tête. On trouve souvent Boost est le nom que nous avons donné à notre système basé sur HAProxy et Varnish. 
Nous ajoutons donc plusieurs en-têtes `X-Boost-*` pour ajouter des informations utiles. +#### Step 1 + Sur le frontend « external » nous marquons la requête comme étant passée en étape 1 par « haproxy-external ». Les autres étapes de la requête sauront alors que la requête est entrée par là. Lorsque réponse resortira de ce backend pour aller au client, on la marque aussi pour indiquer que l'étape 1 était « haproxy-external » en précisant si la connexion était en http ou https. @@ -300,26 +302,7 @@ frontend external http-response set-header X-Boost-Server my-hostname ``` -Au niveau du frontend « internal » on applique le même principe, mais en indiquant que c'est pour l'étape 3. - -``` -frontend internal - […] - http-request add-header X-Boost-Step3 haproxy-internal - - http-response add-header X-Boost-Step3 "haproxy-internal; SSL to backend" if { ssl_bc } - http-response add-header X-Boost-Step3 "haproxy-internal; no SSL to backend" if !{ ssl_bc } -``` - -Dans le backend final du site, on marque la réponse pour indiquer si la connexion avec le serveur final s'est faite en http ou https. 
- -``` -backend example_com - […] - http-response set-header X-Boost-Proto https if { ssl_bc } - http-response set-header X-Boost-Proto http if !{ ssl_bc } - server example-hostname 1.2.3.4:443 check observe layer4 ssl verify none -``` +#### Step 2 Au niveau de Varnish, nous marquons la requête transmise pour indiquer qu'elle est passée par Varnish : @@ -336,19 +319,30 @@ Lorsque la réponse est renvoyée, elle est marquée sur l'étape 2 de détails sub vcl_deliver { […] if (resp.http.Set-Cookie && resp.http.Cache-Control) { - set resp.http.X-Boost-Step2 = "varnish WITH set-cookie AND cache-control on backend server"; + set resp.http.X-Boost-Step2 = "varnish; set-cookie; cache-control"; } elseif (resp.http.Set-Cookie) { - set resp.http.X-Boost-Step2 = "varnish WITH set-cookie and NO cache-control on backend server"; + set resp.http.X-Boost-Step2 = "varnish; set-cookie; no-cache-control"; } elseif (resp.http.Cache-Control) { - set resp.http.X-Boost-Step2 = "varnish with NO set-cookie and WITH cache-control on backend server"; + set resp.http.X-Boost-Step2 = "varnish; no-set-cookie; cache-control"; } else { - set resp.http.X-Boost-Step2 = "varnish with NO set-cookie and NO cache-control on backend server"; + set resp.http.X-Boost-Step2 = "varnish; no-set-cookie; no-cache-control"; } ``` Par ailleurs, Varnish ajoute par défaut de nombreux en-têtes sur l'utilisation ou non du cache, l'âge de la ressource… -Note: La syntaxe de ces en-têtes `X-Boost-*` est encore expérimentale. Nous prévoyons de les rendre plus compacts et faciles à traiter automatiquement. +#### Step 3 + +Au niveau du frontend « internal » on applique le même principe, mais en indiquant que c'est pour l'étape 3. 
+ ``` +frontend internal + […] + http-request add-header X-Boost-Step3 haproxy-internal + + http-response add-header X-Boost-Step3 "haproxy-internal; SSL to backend" if { ssl_bc } + http-response add-header X-Boost-Step3 "haproxy-internal; no SSL to backend" if !{ ssl_bc } +``` ### Log HAProxy complet @@ -356,7 +350,7 @@ Dans des situations avancées de debug, nous pouvons aussi activer l'ajout dans ``` frontend external - http-response add-header X-Haproxy-Log-external "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r" + http-response add-header X-Haproxy-Log-External "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r" frontend internal http-response add-header X-Haproxy-Log-Internal "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r" @@ -384,7 +378,7 @@ Ces approches permettent une reprise d'activité quasi immédiate en cas de panne ## Fonctionnalités complémentaires -### Filtrage TCP +### Filtrage au niveau HAProxy Dans le frontend « external » qui gère le trafic entrant, nous pouvons vérifier si le client doit être immédiatement rejeté, par exemple selon son adresse IP : @@ -397,53 +391,20 @@ frontend external Cela ne remplace pas un vrai firewall, mais ça permet de facilement exclure des client au niveau TCP (couche 4). -### Filtrage HTTP En analysant la requête au niveau HTTP (couche 7), on peut filtrer de manière beaucoup plus fine. -Par exemple, si un site est passé en mode maintenance (détaillé plus loin), on peut contourner ce mode maintenance pour une liste d'IP particulière qui pourra tout de même consulter le site et vérifier par exemple l'état des opérationsde maintenance. +On peut exiger une authentification basée sur une liste d'utilisateurs ou de groupes, ou encore des redirections vers https ou d'autres domaines. 
``` +userlist vip_users + user johndoe password $6$k6y3o.eP$JlKBx9za9667qe4(…)xHSwRv6J.C0/D7cV91 + frontend external - […] - # List of IP that will not go the maintenance backend - acl maintenance_ips src -f /etc/haproxy/maintenance_ips - # Go to maintenance backend, unless your IP is whitelisted - use_backend maintenance if !maintenance_ips - -backend maintenance - http-request set-log-level silent - # Custom 503 error page - errorfile 503 /etc/haproxy/errors/maintenance.http - # With no server defined, a 503 is returned for every request -``` - -Pour les outils locaux de monitoring (exemple: Munin) ou pour les challenges ACME, il peut être utile de renvoyer sur un serveur web local (exemple: Apache ou Nginx), au lieu de renvoyer sur Varnish ou les serveurs web appliatifs. - -``` -frontend external - […] - # Is the request coming for the server itself (stats…) - acl server_hostname hdr(host) -i my-hostname - acl munin hdr(host) -i munin - - # Detect Let's Encrypt challenge requests - acl letsencrypt path_dir -i /.well-known/acme-challenge - - use_backend local if server_hostname - use_backend local if munin - - use_backend letsencrypt if letsencrypt - -backend letsencrypt - # Use this if the challenge is managed locally - server localhost 127.0.0.1:81 send-proxy-v2 maxconn 10 - # Use this if the challenge is managed remotely - ### server my-certbot-challenge-manager 192.168.2.1:80 maxconn 10 - -backend local - option httpchk HEAD /haproxy-check - server localhost 127.0.0.1:81 send-proxy-v2 maxconn 10 + […] + redirect scheme https code 301 if !{ ssl_fc } + redirect prefix https://example-to.org code 301 if { hdr(host) -i example-from.org } + http-request auth realm "VIP Section" if !{ http_auth(vip_users) } ``` ### Mode maintenance @@ -481,13 +442,38 @@ frontend external use_backend example_com_maintenance if example_com_domains !example_com_maintenance_ips !maintenance_ips ``` +### Services locaux + +Pour les outils locaux de monitoring (exemple: Munin) ou pour 
les challenges ACME, il peut être utile de renvoyer sur un serveur web local (exemple: Apache ou Nginx), au lieu de renvoyer sur Varnish ou les serveurs web applicatifs. + +``` +frontend external + […] + # Is the request coming for the server itself (stats…) + acl server_hostname hdr(host) -i my-hostname + acl munin hdr(host) -i munin + + # Detect Let's Encrypt challenge requests + acl letsencrypt path_dir -i /.well-known/acme-challenge + + use_backend local if server_hostname + use_backend local if munin + + use_backend letsencrypt if letsencrypt + +backend letsencrypt + # Use this if the challenge is managed locally + server localhost 127.0.0.1:81 send-proxy-v2 maxconn 10 + # Use this if the challenge is managed remotely + ### server my-certbot-challenge-manager 192.168.2.1:80 maxconn 10 + +backend local + option httpchk HEAD /haproxy-check + server localhost 127.0.0.1:81 send-proxy-v2 maxconn 10 +``` ### Personnalisation par site On peut définir des ACL pour chaque site et ainsi provoquer des comportements lorsqu'une requête est destinée à ce site. Par exemple, les redirections http vers https, ou pour des sous-domaines, les en-têtes HSTS - -## Configuration complète - -Parcourons la configuration complète de HAProxy puis Varnish, pour commenter petit à petit tout ce qu'on a vu jusque là. \ No newline at end of file