haproxyconf-2022

The purpose of this presentation is to show how we can use HAProxy and Varnish together to make websites and web apps more performant and reliable.

We will start with the general ideas, then dive into the details.

Why HAProxy and Varnish?

They are both open-source, mature and performance-oriented. They are not alone in their categories, but they are the ones we know and love the most. They are easily available on many platforms. They have great documentation and vast, vibrant communities. They are both backed by companies that offer professional support and services, while keeping the core open-source and free for everybody.

HAProxy

HAProxy will be the first web server to see the HTTP client's request. It handles the TLS/SSL termination. It decides whether the request is accepted or rejected, and finally passes the request to the upstream server or servers. It acts as a proxy, with load-balancing and fault-tolerance capabilities.

To briefly summarize the basic principles of HAProxy, let's mention the frontend, the backend and the intermediate processing.

A frontend is an entry point. HAProxy listens on the TCP or HTTP layer, on one or more IP:PORT pairs, with options regarding logs, timeouts, supported protocols and much more. Many frontends can coexist in an HAProxy instance, to accommodate different use cases. For example: a traditional website in one, and an API for third parties in another, each with their own settings and configuration.

HAProxy is able to parse and process the full TCP or HTTP request. It exposes an internal API to change the request or the response, decide how (or whether) to deliver them upstream… in a very optimized and reliable manner.

A backend is an exit point. There must be one for each group of final web servers. If there is only one server in a backend, it will deal with every request, but if there is more than one, HAProxy will balance the requests according to an algorithm. That's where we begin to talk about load-balancing. And when we introduce some logic to deal with misbehaving or missing servers, it becomes fault-tolerant.
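To illustrate these two notions, here is a minimal sketch (the names and addresses are hypothetical, not from our actual setup):

```
frontend www
    bind 0.0.0.0:80
    default_backend app_servers

backend app_servers
    # Requests are spread across both servers; "check" enables health checks
    balance roundrobin
    server app1 192.0.2.10:80 check
    server app2 192.0.2.11:80 check
```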

Varnish

Varnish stores in its cache the result of some requests. Later on, a client making the same request might get the result from the cache, if it is present and fresh enough.

With its default configuration, Varnish already has a lot of good practices in place, and only a few adjustments might be necessary to handle most situations. Every step of the request/response processing is done by functions that can be customized by something that looks a bit like inheritance in programming. At startup, everything is validated and compiled into optimized code.

Like HAProxy, Varnish parses the whole request/response inside these functions, so we can decide if the request/response needs to be modified (like adding or removing headers), if a response can be served from the cache, if Varnish needs to talk to the final server, and if the response can be stored in the cache for future use.
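For example, a tiny VCL customization (a hypothetical sketch; the 5-minute value is our assumption) could override the cache duration for backend responses that carry no caching headers:

```
sub vcl_backend_response {
    # If the backend sent no Cache-Control header, cache for 5 minutes anyway
    if (!beresp.http.Cache-Control) {
        set beresp.ttl = 5m;
    }
}
```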

Varnish stores the content, its objects, in memory to be really fast. If you have a lot of traffic, give it enough RAM to keep a lot of content available.
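The cache size is set at startup with the -s option; for example, to dedicate 2 GB of RAM to the cache (the amount is only an illustration, adjust it to your traffic):

```
/usr/sbin/varnishd […] -s malloc,2G […]
```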

Let's combine HAProxy and Varnish

So we have decided to have them work together, in a coordinated way, for an efficient and feature-rich setup.

As we've seen earlier, HAProxy uses frontends as entry points and backends as exit points.

In our case, after accepting the request from the HTTP client, HAProxy will pass it to Varnish, via a dedicated backend. We have chosen to place HAProxy and Varnish on the same server, and connect them with Unix sockets, but this is an implementation detail and we could have placed them on separate servers, communicating through regular TCP sockets.

Once the request is passed to Varnish, it will parse it too and decide if it is able to serve a response from the cache or pass it to the final server.

In the most common Varnish setups, the request to the final server is passed directly. Varnish is even capable of some rather basic load-balancing. But since we have HAProxy at hand and it is more capable in this area, we have decided to pass the request back to HAProxy.

To prevent loops, and to bypass the processing that has already been done, the request will reenter HAProxy through another frontend. This one is much simpler and its responsibility is basically to choose which backend to use for each request. If we manage sites and apps on more than one set of servers, we have to create as many backends as we have sets of servers.

In the end, the request is passed to a server in a backend, be it a static web site, a dynamic application programmed with a framework, or the ingress of a big Kubernetes cluster.

The response will eventually travel the same way backwards, through HAProxy, Varnish (which will decide to store it in the cache or not) and HAProxy again to the original HTTP client.

Chain :

  1. HAProxy frontend « external » (general)
  2. HAProxy backend « varnish »
  3. Varnish
  4. HAProxy frontend « internal » (general)
  5. HAProxy backend « site X » (per site)
  6. Web-server

Multi-sites

Even if we have only one frontend to manage all the requests with the same settings, we can have many groups of web servers, for different web apps or websites. In our case, we have many clients on one HAProxy, with one frontend, but many backends, one for each client and their own servers.

HAProxy does not have a concept of VirtualHost as Apache or Nginx do, but we can use some lower-level conditionals to achieve a similar result. We can write (at least) an ACL to detect if the Host header belongs to one website or another, and then use this ACL to trigger some processing and choose the backend to use.

An ACL is like a boolean variable.

frontend external
    acl example_com_domains hdr(host) -i example.com
    acl foo_bar_domains     hdr(host) -i foo-bar.com foo-bar.org
    […]
    use_backend example_com if example_com_domains
    use_backend foo_bar     if foo_bar_domains

Pass the request to Varnish

In the HAProxy configuration, we've defined a "varnish" backend, with only one server.

backend varnish
    option httpchk HEAD /varnishcheck
    server varnish_sock /run/varnish.sock check observe layer7 maxconn 3000 inter 1s send-proxy-v2

The httpchk option tells HAProxy to check Varnish's health at "layer 7", by regularly requesting a special URL.

In the Varnish configuration we have a behavior for this URL:

sub vcl_recv {
    # HAProxy check
    if (req.url == "/varnishcheck") {
        return(synth(200, "Hi HAProxy, I'm fine!"));
    }
    […]
}

The response is extremely fast, which allows for frequent checks by HAProxy.

In the frontend section of HAProxy, an ACL tells whether Varnish is known to be available. It is then possible to decide if we can pass the request to Varnish or bypass it:

frontend external
    # Is the request routable to Varnish ?
    acl varnish_available   nbsrv(varnish) gt 0

    # Use Varnish if available
    use_backend varnish if varnish_available

    # … or use normal backend
    use_backend default_backend

backend varnish
    option httpchk HEAD /varnishcheck
    server varnish_sock /run/varnish.sock check observe layer7 maxconn 3000 inter 1s send-proxy-v2

backend default_backend
    server example-hostname 1.2.3.4:443 check observe layer4 ssl

Nothing forces us to use Varnish for every request and all websites or apps.

We can either use a static ACL:

frontend external
    acl example_com_domains hdr(host) -i example.com
    […]
    use_backend varnish if example_com_domains

Or we can use a more dynamic condition, with a file containing all the domains whose traffic should go through Varnish:

frontend external
    acl use_cache hdr(host) -f /etc/haproxy/cached_domains
    […]
    use_backend varnish if use_cache

In some cases, we could decide to bypass Varnish when caching makes no sense, for example if the HTTP method is not GET, HEAD or PURGE:

frontend external
    acl varnish_http_verb method GET HEAD PURGE
    […]
    use_backend varnish if varnish_http_verb

And we can combine several of these conditions together:

  • should the domain be cached?
  • is the request eligible for cache?
  • is Varnish available?
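Combined in the frontend, a sketch reusing the ACLs shown above could look like this (in HAProxy, multiple conditions on the same use_backend line are ANDed):

```
frontend external
    acl use_cache         hdr(host) -f /etc/haproxy/cached_domains
    acl varnish_http_verb method GET HEAD PURGE
    acl varnish_available nbsrv(varnish) gt 0
    […]
    # Only go to Varnish if the domain is cached, the method is cacheable
    # and Varnish is up; otherwise fall through to the normal backend
    use_backend varnish if use_cache varnish_http_verb varnish_available
```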

PROXY protocol

What for?

Using the PROXY protocol is not a necessity at all, but it adds a certain amount of comfort when proxies like HAProxy and Varnish are involved.

At the TCP level, when HAProxy talks to Varnish, then Varnish to HAProxy, and finally HAProxy to the final web server, each is seen as a regular HTTP client. Without any modification, each element reports the IP of the previous element as the client IP, and the final server thinks that HAProxy is the only client accessing it. That's bad for IP-based filtering, logging…

At the HTTP level, we've had the X-Forwarded-For header for a long time. If you look for its presence and you know how to parse it, you can use the correct value in your web server or application. It's cumbersome and error-prone, and at the TCP level it is invisible.

The PROXY protocol is a simple extension of the TCP protocol. It adds the same kind of header, but at the TCP level. The downside is that both parties must support the PROXY protocol and be configured to use it. The upside is that it's completely transparent after that. There is nothing to do at the application level.
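For illustration, a version 1 PROXY protocol header is a single human-readable line sent before the HTTP request (version 2, which we use below, is a binary equivalent of the same information; the addresses here are examples):

```
PROXY TCP4 203.0.113.4 192.0.2.1 56324 443
```

The fields are: protocol, source IP, destination IP, source port, destination port.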

The PROXY protocol has been designed by Willy Tarreau, creator of HAProxy, in 2010. It has since been adopted by many products like Varnish, Apache, Nginx, Postfix, Dovecot…

How?

In HAProxy, we enable it in the backend that points to Varnish, in the frontend used by Varnish to pass the request back to HAProxy, and possibly in the backends to the final web servers.

backend varnish
    server varnish_sock /run/varnish.sock check observe layer7 maxconn 3000 inter 1s send-proxy-v2

frontend internal
    bind /run/haproxy-frontend-default.sock user root mode 666 accept-proxy

backend example_com
    server example-hostname 1.2.3.4:443 check observe layer4 ssl verify none send-proxy-v2

With HAProxy, for the listening side we use the accept-proxy option in the frontend section and for the emitting side we use the send-proxy-v2 option in the backend section.

With Varnish, for the listening side we use the PROXY option in the startup command line:

/usr/sbin/varnishd […] -a /run/varnish.sock,PROXY […]

And for the emitting side, we use the proxy_header = 1 setting in the backend section:

backend default {
    .path = "/run/haproxy-frontend-default.sock";
    .proxy_header = 1;
    […]
}

How to debug with PROXY protocol

Even though it's a valuable optimization, the PROXY protocol is not supported by many tools, especially some very low-level ones, like a forged HTTP request over a telnet connection.

This is especially true if you want to bypass HAProxy and debug directly at the Varnish level.

The startup command accepts many listening addresses, each with its own options. We can add one restricted to localhost and a custom firewalled port, and have a debug back door:

/usr/sbin/varnishd […] -a 127.0.0.1:82 […]

It becomes really easy to make a direct request:

curl --verbose \
     --resolve www.example.com:82:127.0.0.1 \
     --header "X-Forwarded-Proto: https" \
     http://www.example.com:82/foo/bar

And yes, curl supports the PROXY protocol since version 7.60.0 with the --haproxy-protocol option, but it only supports version 1 and I've never managed to make it work with local Unix sockets.

What about the final servers?

If you have control over the final servers and they support the PROXY protocol, you can even do this final connection with the PROXY protocol. You might want to use the same trick with a custom port for this, and keep the regular 80/443 without the PROXY protocol.

Note: If you use Apache, I encourage you to take a look at the mod_log_forensic module (and its ForensicLog directive). It adds a special log where you can find a complete trace of each request with all the headers. It is especially useful to see what arrives at the end of the chain. I don't know if something similar exists for Nginx or other web servers.

+X@Ike1sspdiNAko5YHK9HAAAAC4|GET /blog/ HTTP/1.1|user-agent:curl/7.64.0|accept:*/*|host:jeremy.lecour.fr|x-forwarded-for:1.2.3.4, 4.5.6.7|accept-encoding:gzip|x-varnish:65545|x-forwarded-port:443|x-forwarded-proto:http|connection:close
-X@Ike1sspdiNAko5YHK9HAAAAC4

HTTP tagging

With the same focus on traceability and easy debugging, we use HAProxy and Varnish to add some HTTP headers with information about their behavior.

Let's remember that the HTTP standard has normalized a list of headers. For example: Host, Set-Cookie, Cache-Control… But it also normalized a way to use custom, non-standard HTTP headers with an X- prefix. Some have almost become standard and are used very frequently.

X-Forwarded-*

Even though we use the PROXY protocol internally (and possibly to the final servers) we usually keep the commonly used X-Forwarded-For HTTP header.

We also use the X-Forwarded-Port header to tell which port of HAProxy the request arrived on. And finally, we use the X-Forwarded-Proto header to indicate whether the original request was made over a secure connection.

frontend external
    bind 0.0.0.0:80,:::80
    bind 0.0.0.0:443,:::443 ssl […]

    option forwardfor

    http-request set-header X-Forwarded-Port  %[dst_port]

    http-request set-header X-Forwarded-Proto http  if !{ ssl_fc }
    http-request set-header X-Forwarded-Proto https if  { ssl_fc }

That last header is important when the request goes to an application that enforces HTTPS. If it receives a request from HAProxy on a clear-text HTTP connection, it might trigger a redirect. But many frameworks detect the X-Forwarded-Proto https header and understand that the external part of the request was encrypted.

X-Unique-ID

It is very useful to mark an incoming request with a unique identifier that can be transmitted from proxy to proxy, until the final server. It can even be sent back by the application and be traceable all the way back to the original client.

frontend external
    […]
    http-request set-header X-Unique-ID %[uuid()] unless { hdr(X-Unique-ID) -m found }

There is no real consensus on the name of the header. You will commonly find X-Unique-ID or X-Request-ID.

X-Boost-*

"Boost" is the name we've given to our HAProxy and Varnish setup. Hence the name X-Boost-* of a few custom HTTP headers that we add to the request and the response.

On the "external" frontend, we add one to indicate that the request passed this step. This is useful for the final server to know which steps the request went through.

When the response exits to the original client, we also add the same header to inform the client of those steps.

We also inform the client of the name of the Boost instance that processed the request.

frontend external
    […]
    http-request add-header X-Boost-Step1 haproxy-external

    http-response add-header X-Boost-Step1 "haproxy-external; client-https" if  { ssl_fc }
    http-response add-header X-Boost-Step1 "haproxy-external; client-http"  if !{ ssl_fc }
    http-response set-header X-Boost-Server my-hostname

On the "internal" frontend, we apply the same principle, but this time it's "step 3" instead of "step 1" :

frontend internal
    […]
    http-request add-header X-Boost-Step3 haproxy-internal

    http-response add-header X-Boost-Step3 "haproxy-internal; SSL to backend" if  { ssl_bc }
    http-response add-header X-Boost-Step3 "haproxy-internal; no SSL to backend" if !{ ssl_bc }

In Varnish (the "step 2"), we tag the request to tell the final server that Varnish has seen it.

sub vcl_recv {
    […]
    set req.http.X-Boost-Step2 = "varnish";
}

And when the response is sent back, we indicate in a header whether the server has set Cache-Control or cookies.

sub vcl_deliver {
    […]
    if (resp.http.Set-Cookie && resp.http.Cache-Control) {
      set resp.http.X-Boost-Step2 = "varnish; set-cookie; cache-control";
    } elseif (resp.http.Set-Cookie) {
      set resp.http.X-Boost-Step2 = "varnish; set-cookie; no-cache-control";
    } elseif (resp.http.Cache-Control) {
      set resp.http.X-Boost-Step2 = "varnish; no-set-cookie; cache-control";
    } else {
      set resp.http.X-Boost-Step2 = "varnish; no-set-cookie; no-cache-control";
    }
}

By default, Varnish also adds many HTTP headers with information about the cache: hit or miss, age of the object…

Full HAProxy log

In advanced debugging situations, we can also enable a custom header with the full HAProxy log line:

frontend external
    http-response add-header X-Haproxy-Log-External "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"

frontend internal
    http-response add-header X-Haproxy-Log-Internal "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"

⚠️ It's probably not a good idea to do that in production, but it can be very useful to see on the client side exactly what happened inside HAProxy.

High-availability

For the application

When a website or web application relies on multiple web servers to handle the requests, we can use HAProxy to do the load-balancing. When possible, we prefer the round-robin algorithm; it's the simplest. But when the application has trouble with persisted data like sessions or stored files, we can set up an active-backup configuration: HAProxy passes all the requests to the same server until it fails, then moves to the next one.
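As a sketch, the difference between the two modes in HAProxy configuration could look like this (server names and addresses are hypothetical):

```
# Round-robin: requests are spread across all healthy servers
backend app_round_robin
    balance roundrobin
    server app1 192.0.2.10:80 check
    server app2 192.0.2.11:80 check

# Active-backup: app2 only receives traffic if app1 is down
backend app_active_backup
    server app1 192.0.2.10:80 check
    server app2 192.0.2.11:80 check backup
```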

For HAProxy itself

To prevent our HAProxy+Varnish server from becoming a "Single Point of Failure", we have installed two of them, in two different networks and datacenters, and we use round-robin DNS. If a server is not available, we need to change the DNS zone to disable the faulty server. It takes time to detect, change and propagate. But we use virtual servers on redundant hardware. Over the last several years, this has proven to be very reliable. And it is also very easy to change and adapt.

It would definitely be possible to have an "active-standby" setup, with a virtual IP (with keepalived/VRRP), and have an automatic failover if the main server fails. Or we could go even further with an "active-active" setup: two "layer 4" HAProxy servers, then two or more "layer 7" HAProxy servers. Those options (and some more that I didn't cover) allow for a much faster recovery in case of an incident, but they are much more complex. You have to decide for yourself.

Additional features

Filtering at HAProxy level

The "external" frontend handles incoming requests. We can check whether the client must be rejected, based on its IP for example:

frontend external
    […]
    # Reject the request at the TCP level if source is in the denylist
    tcp-request connection reject if { src -f /etc/haproxy/deny_ips }

This is not a replacement for a traditional firewall, but it's much more powerful since any ACL can be used.

Since HAProxy has access to every aspect of the request at "layer 7", we can also filter at a much finer level.

We can mandate an authentication token, with a list of users or groups of users. We can also do some redirects to HTTPS or to other domains.

userlist vip_users
    user johndoe password $6$k6y3o.eP$JlKBx9za9667qe4(…)xHSwRv6J.C0/D7cV91

frontend external
   […]
   redirect scheme https code 301 if !{ ssl_fc }
   redirect prefix https://example-to.org code 301 if { hdr(host) -i example-from.org }
   http-request auth realm "VIP Section" if !{ http_auth(vip_users) }

Maintenance mode

The "maintenance mode" is a kind of circuit breaker that can divert all the requests, either globally or per site.

For a global "maintenance mode", we use a special backend that doesn't specify any server, which triggers 503 errors. If we have defined a custom error page for this backend, it will be displayed.

frontend external
    […]
    # List of IPs that will not go to the maintenance backend
    acl maintenance_ips src -f /etc/haproxy/maintenance_ips
    # Go to maintenance backend, unless your IP is whitelisted
    use_backend maintenance if !maintenance_ips

backend maintenance
    http-request set-log-level silent
    # Custom 503 error page
    errorfile 503 /etc/haproxy/errors/maintenance.http
    # With no server defined, a 503 is returned for every request

For a "maintenance mode" per site, we also need a custom backend following the same principle as before, and we need to use the domain ACL to restrict it to this site only.

frontend external
    […]
    acl example_com_domains hdr(host) -i example.com

    acl maintenance_ips src -f /etc/haproxy/maintenance_ips
    acl example_com_maintenance_ips src -f /etc/haproxy/example_com/maintenance_ips

    use_backend example_com_maintenance if example_com_domains !example_com_maintenance_ips !maintenance_ips

Local services

For some local monitoring tools (like Munin) or for ACME challenges, it is useful to proxy the requests directly to a local web server (like Apache or Nginx), instead of going through Varnish or other servers.

frontend external
    […]
    # Is the request coming for the server itself (stats…)
    acl self      hdr(host) -i my-hostname my-hostname.domain.tld
    acl munin     hdr(host) -i munin

    # Detect Let's Encrypt challenge requests
    acl letsencrypt path_dir -i /.well-known/acme-challenge

    use_backend local       if self
    use_backend local       if munin

    use_backend letsencrypt if letsencrypt

backend letsencrypt
    # Use this if the challenge is managed locally
    server localhost 127.0.0.1:81 send-proxy-v2 maxconn 10
    # Use this if the challenge is managed remotely
    ### server my-certbot-challenge-manager 192.168.2.1:80 maxconn 10

backend local
    option httpchk HEAD /haproxy-check
    server localhost 127.0.0.1:81 send-proxy-v2 maxconn 10

Per site customization

We can use many ACLs for each site or application to trigger custom behavior when a request arrives for this site.

Typical examples are HTTP-to-HTTPS redirects, subdomain redirects or HSTS headers.
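For example (with a hypothetical example.com site), redirects can be conditioned on the domain ACL in the frontend, and an HSTS header can be added in that site's backend:

```
frontend external
    […]
    acl example_com_domains hdr(host) -i example.com

    # Redirect HTTP to HTTPS for this site only
    redirect scheme https code 301 if example_com_domains !{ ssl_fc }

    # Redirect a secondary domain to the main one
    redirect prefix https://example.com code 301 if { hdr(host) -i www.example.com }

backend example_com
    # HSTS header on every response from this site
    http-response set-header Strict-Transport-Security "max-age=63072000"
    […]
```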