What is HAProxy & how to get the most from it?

Credit to HAProxy Technologies

Why do we need load balancers?

As the world evolves, we’re expected to keep up and respond to the vast majority of requests coming our way. In today’s fast-paced world, with an ever-growing population, every website and every service on this planet is receiving more and more attention and requests (that is, of course, if the service is worth it).

We also tend to become less patient when it comes to satisfying a need. Take standing in line for a McDonald’s burger, for example, or waiting for a Starbucks coffee. Both are most satisfying when we’re served what we came for ASAP. We wouldn’t be very happy if it took an hour.

Even in the world of technology, no one wants to sit around waiting for a website page to load. If I don’t get what I came for in less than a minute, then that tab is closed (probably for good).

There are a couple of ways to reduce response time. One of them is to add more instances of the server, so that more users can be served at the same time (simultaneous users).

This is made possible by placing the servers behind a load balancer, and probably a reverse proxy (I will write about NGINX later). The load balancer spreads the load across different instances of the server and avoids sending all the traffic to the same server, which would result in a bottleneck.


What is a load balancer?

All that being said, this brings us to our main topic: load balancers. Load balancers, as the name suggests, distribute load across many channels. A very nice definition of load balancing is given below:

Load balancing consists in aggregating multiple components in order to achieve a total processing capacity above each component’s individual capacity, without any intervention from the end user and in a scalable way. This results in more operations being performed simultaneously by the time it takes a component to perform only one. — From the HAProxy doc

One of the most efficient & lightweight load balancers is HAProxy. It is, as they claim:

The Reliable, High Performance TCP/HTTP Load Balancer

It has the following advantages:

  • Proxying both TCP (IPv4 & IPv6 sockets) & HTTP (gateway) traffic, in both directions.
  • Offloading/initiating SSL connections to ensure a secure connection.
  • Normalizing HTTP traffic so that only valid requests pass through.
  • Content-based switching, deciding which server should handle each request.
  • Load balancing across servers, per connection (TCP) or per request (HTTP).
  • Regulating traffic to apply rate-limiting.
  • Protecting against DDoS by maintaining statistics on IPs, URLs, etc.
  • Acting as an observation point for network troubleshooting, with informative logs.
  • HTTP compression offloading.
  • Caching proxy to return repetitive and valid responses.

How does it work?

In short:

HAProxy is an event-driven, non-blocking engine combining a very fast I/O layer
with a priority-based, multi-threaded scheduler. — From HAProxy doc

Simply put, it does the following steps:

  1. Receive the traffic, either at layer 4 (TCP) or layer 7 (HTTP).
  2. Manipulate it according to our config (e.g. changing a header, decompressing, offloading SSL, etc.).
  3. Decide which server should receive the traffic (using ACLs, which we’ll cover later).
  4. Receive the response from the server and, after applying some of the above steps again, deliver the response back to the client.
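The steps above map almost one-to-one onto a minimal config sketch. The names and addresses below (web1, 10.0.0.11, etc.) are hypothetical placeholders, just to show the shape:

```
# Minimal sketch: receive on a frontend, pick a backend, forward to a server.
frontend http_in
    bind *:80                        # step 1: receive layer-7 (HTTP) traffic
    http-request del-header X-Debug  # step 2: manipulate the request (example header)
    default_backend web_servers      # step 3: decide which servers handle it

backend web_servers
    server web1 10.0.0.11:8080       # step 4: forward, then relay the response back
    server web2 10.0.0.12:8080
```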

This pretty much sums it up on how HAProxy works. If you want to get a little bit deeper, follow along as we try to go from downloading and installing, configuring, and running it to accept connections.

Prerequisite

Downloading

To download your desired version of HAProxy, refer to the official link. There you can either download the latest stable release (preferably LTS or Long Term Support) or download the latest development version. Either way, you may use some form of the following command:

v2.2.3 is the latest stable LTS release at the time of writing (September 2020)

wget 'http://www.haproxy.org/download/2.2/src/haproxy-2.2.3.tar.gz'

After which you would need to decompress using a command such as the following:

tar xf haproxy-2.2.3.tar.gz

Installing

This part can be done in 2 ways: a quick install or a complete install. The former provides a minimal set of features to start with & the latter requires consulting the documentation to enable your desired capabilities. These so-called capabilities include, but are not limited to, the following list:

  • compression algorithm
  • regex
  • cryptography
  • systemd integration

To get a quick HAProxy up and running just for testing purposes, you can simply consult the INSTALL file inside the directory you just decompressed and find a simple line like the following:

make clean
make -j $(nproc) TARGET=linux-glibc \
USE_OPENSSL=1 USE_ZLIB=1 USE_LUA=1 USE_PCRE=1 USE_SYSTEMD=1
sudo make install

But if you want a production-ready HAProxy, you should take the time to dig a little bit further than that, taking a close look at the Makefile and making sure every customization and optimization is applied according to your machine and your needs.

I would normally go for no less than something like this:

make -j $(nproc) TARGET=linux-glibc USE_OPENSSL=1 USE_ZLIB=1 \
USE_LUA=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_SYSTEMD=1 \
USE_THREAD=1 USE_STATIC_PCRE2=1 USE_LIBCRYPT=1 USE_GETADDRINFO=1 \
USE_TFO=1 USE_NS=1

If you receive an error about a missing package or library, you would normally install the corresponding development package, like this:

apt install libssl-dev

or

apt install zlib1g-dev

or

apt install libcrypt-dev

And so on. I’m sure you get the point.


How to run it?

I would like to say that you’re done. But honestly, this was just the beginning. The configuration & management of HAProxy is much more complex and needs a lot of careful & delicate touches.

Running the binary file is as simple as specifying a config file, but the configuration part needs a little bit of work. So let’s take a look at the configuration first.

Configuration file

You can always refer to the official documentation for a full reference, but I’m providing you with a shortcut just so that it won’t get any harder than it already is.

Below, I’m providing a sample config file from which I can explain in more detail what each component is all about.
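Here is a condensed sketch of such a config file. Every address, credential, and path in it is a placeholder assumption, not something to copy verbatim; the sections that follow explain what each parameter is about:

```
global
    daemon
    description "edge load balancer"    # placeholder description
    maxconn 50000
    uid 1500                            # assumes a dedicated haproxy user/group
    gid 1500
    hard-stop-after 30s
    log localhost local0
    log-send-hostname
    nbthread 4
    pidfile /var/run/haproxy.pid
    stats socket /var/run/haproxy.sock mode 600 expose-fd listeners level user

defaults
    mode http
    maxconn 3000
    backlog 10000
    balance roundrobin
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    timeout tarpit 10s
    option forwardfor

listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats auth admin:changeme           # placeholder credentials

frontend http_in
    bind *:80
    acl is_auth path_beg -i /auth
    use_backend auth_servers if is_auth
    default_backend web_servers

backend web_servers
    server web1 10.0.0.11:8080 check    # hypothetical addresses
    server web2 10.0.0.12:8080 check

backend auth_servers
    http-request replace-uri ^(/)auth[/]?(.*) \1\2
    server auth1 10.0.0.21:9000 check
```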

There are 3 main sections to talk about.

  1. The command-line arguments which take precedence.
  2. The “global” section which sets process-wide parameters.
  3. The proxies section consisting of “defaults”, “listen”, “frontend” & “backend”.

1. Command-line arguments

After finishing the config file, running the application would be as simple as this:

sudo haproxy -f <config_file> -c  # validate config file first
sudo haproxy -f <config_file> # then run the server

You should run the application with sudo privileges, but specify a user inside the config file (or on the command line) so that the process drops its privileges after starting. This is the desired behaviour for security reasons.

But the haproxy binary takes many more options. You can take a look at the documentation or simply run this in your terminal:

haproxy -h  # showing a list of available options

I have included many of the options in the config file above and there wouldn’t be a need for any other command-line arguments, but you are always free to take a look anyway.

2. global section

This section defines global configurations to apply over all the proxies.

Proxy is what receives & handles traffic according to the defined configurations.

In the config file above, here are the parameters used:

  • daemon: Run the process in the background (recommended mode).
  • description: An explanation of the process. Also shown in the stats webpage.
  • maxconn: Maximum number of per-process connections. After reaching this limit, every new connection either goes to the backlog or is dropped if that limit is reached too.
  • uid & gid: Set the process’ user-id & group-id. Requires a useradd beforehand if you plan to run it as a different user than your current one. Because you have to run the haproxy binary with sudo, you should specify these parameters to avoid serving the connections as root.
  • hard-stop-after: Maximum amount of time allowed to perform a soft clean-stop using SIGUSR1. After which the process will receive a SIGKILL.
  • log <address> <facility>: The destination logs are sent to. localhost local0 is the usual destination when logging on the same host as the one running haproxy. It forwards every log to Syslog, and you can access them simply by running journalctl.
  • log-send-hostname: Set the hostname field in the Syslog header. You can either provide a custom string or leave the argument empty, which would be replaced by the hosts’ hostname.
  • mworker-max-reloads: The maximum number of reloads a worker can survive before receiving a SIGTERM. This is most useful when running haproxy in master-worker mode.
  • nbthread: Set the number of threads the process should run on. Only available if you specified the following option during build: USE_THREAD=1.
  • pidfile: Write the PIDs of all the daemons into this file.
  • stats socket: This makes it possible to bind statistics to a UNIX socket. The options are the same ones accepted by the bind keyword. /var/run/haproxy.sock is the path of the socket; mode sets the socket permissions; expose-fd listeners allows another haproxy process to read the socket; level user only exposes non-sensitive stats; maxconn was explained earlier; name will be the title of the tab in your browser; tfo, if allowed, enables TCP Fast Open so the response can be received as early as possible (read more in the doc).

This is the terse explanation of the parameters used inside a global section of an HAProxy config file. There are lots of other parameters you can insert here but this will do for our little tutorial (and even in simple production cases).


3. Proxies

Proxies are the main part of the HAProxy config file. They receive the traffic, manipulate it accordingly, and forward it to the appropriate destinations. They sit between the user and the server running the application. You would want to make the most of them, because they are pretty handy.

The “defaults” section defines the default configuration which will be applied to all the proxies unless overridden. So typically the common configurations go here.

Here are the parameters used in the above config file:

  • mode: The running mode of the instance. Could be tcp, http or health. The first two are the most common. tcp operates on layer 4 & http operates on layer 7.
  • backlog: The number of connections waiting in line to be processed after maxconn has reached its limit.
  • balance: The algorithm used to distribute the traffic over several servers. There are several choices here; you can find the names and explanations of each in the doc.
  • compression: The algorithm of the HTTP compression.
  • timeout: Timeouts for different situations. connect is the maximum time to establish a connection to the server, client is the maximum inactivity time of the client, tarpit is how long to maintain tarpitted connections, & server is the maximum inactivity time of the server.
  • option: This parameter has a lot of uses. forwardfor adds the X-Forwarded-For header when delivering the traffic to the server.
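As a sketch, the defaults parameters above could look like this in practice (the timeout values are arbitrary examples, not recommendations):

```
defaults
    mode http
    backlog 10000
    balance roundrobin
    compression algo gzip
    compression type text/html text/plain application/json
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    timeout tarpit 10s
    option forwardfor
```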

Now that we’re done with the “defaults” section, it’s time to get to the interesting part.

listen, backend & frontend

The difference between a backend and a frontend is that traffic received from the user is always delivered to a frontend first. From there, each frontend decides which backend to deliver the traffic to.

No backend is directly accessible by any traffic sent from the user unless a frontend captures the packet first.

There is one exception though, and that’s the listen proxy.

“listen” acts as both frontend and backend: it is accessible by the user, and it forwards the traffic to the servers as well.

There are two listen sections in the config file above, one for statistical reports of the HAProxy itself, and the other is for a sample Elasticsearch server.

Let’s find what each parameter does in both:

  • bind: Specify the address to bind to, which will expose a port on the host on either of the network interfaces it has. For example, bind 192.168.60.60:3000 will expose port 3000 on the interface that has the IP address 192.168.60.60.

The bind keyword also accepts ssl & crt to secure the connection over TLS. I will write another article regarding what that is and how you can take advantage of HTTPS-encrypted connections using certbot.

  • maxconn & backlog are the same as the ones used in the global section.
  • stats: Has a couple of parameters to configure the statistical reports. auth is for specifying a username & a password for viewing the page, enable is used to activate the stats reports, show-legends is to show additional information on the page, uri is the prefix to append when trying to view the page. I have provided a sample report at the end of the article so that you’ll get a good feeling of how it operates.
  • server: This is used to specify the address of each server. The balance algorithm explained earlier distributes traffic across the servers you specify here.
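Putting these keywords together, the two listen sections could be sketched like this (the certificate path, credentials & addresses are all placeholders):

```
listen stats
    bind *:8404 ssl crt /etc/haproxy/certs/example.pem   # hypothetical cert path
    stats enable
    stats uri /stats
    stats auth admin:changeme                            # placeholder credentials
    stats show-legends

listen elasticsearch
    bind *:9200
    maxconn 500
    server es1 10.0.0.31:9200 check                      # hypothetical addresses
    server es2 10.0.0.32:9200 check
```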

NOTE: There is one acl that filters URIs beginning with /.well-known/acme-challenge which is for certbot fetching certificates. I will publish another article regarding how to get a certificate later.


Now let’s discuss the sections that handle traffic; frontend & backend.

In the frontend section, you specify which port to listen to when receiving the traffic from the user. This part exposes a port and listens for connections, and after careful and defined manipulation, it will forward the packet to the desired backend.

Here is the list of parameters used in the above configuration file related to the frontend section:

  • bind is the same as how we used earlier.
  • acl might be one of the most important sections of the config. It defines conditions that are later used to pick a backend to forward the traffic to. We might want the traffic of a certain domain name to be forwarded to a specific backend, or distinguish between the different ports the traffic was received on. There are a lot of conditions we can define here, so check the doc for more information. In this configuration, path_beg checks the beginning of the URI, hdr_end(host) & hdr(host) check the Host header of the HTTP request, & src_port checks the source port of the connection. The -i flag used in some of the ACLs makes the comparison case-insensitive.
  • use_backend: This parameter specifies where to forward the traffic based on a condition. The conditions are the ones we defined using acl. We can combine several conditions here, either ANDing or ORing them: AND is implicit if we put no operator between conditions, || between conditions means OR, and an exclamation mark (!) negates a condition.
  • redirect: This has several options. One common form is redirecting HTTP traffic to HTTPS, based on the condition that the traffic arrived over a non-secure connection, which we express with !{ ssl_fc }: if the traffic wasn’t received securely, redirect it to HTTPS.
  • default_backend: The backend to use if no other backend was selected after evaluating the ACLs. Having one is desirable, as we may receive traffic that matches none of the conditions.
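As a sketch of how these frontend keywords combine (the domain names, certificate path & backend names are made up):

```
frontend https_in
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/example.pem    # hypothetical cert
    redirect scheme https code 301 if !{ ssl_fc }        # force HTTPS

    acl is_api  hdr(host) -i api.example.com
    acl is_blog path_beg  -i /blog
    acl from_admin_port src_port 8443

    use_backend api_servers  if is_api !from_admin_port  # implicit AND, negation with !
    use_backend blog_servers if is_blog || is_api        # explicit OR
    default_backend web_servers
```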

After receiving the traffic in the frontend from the user, we want to forward the traffic to a defined backend and that’s where we define the servers we want the traffic to be delivered ultimately. We can define some sort of configuration in the backend section too, which I explain below:

  • http-request: This parameter manipulates the request in some way so that the final server receives it in the expected form. For example, we can rewrite the URI into a format the server expects and drop the rest of it. This is exactly what http-request replace-uri ^(/)auth[/]?(.*) \1\2 does: it removes the auth part of the address and delivers the rest to the server. It goes without saying that this is a regex.
  • balance: This parameter is also available here, to override the balance algorithm defined in the defaults section of the configuration.
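A matching backend sketch, using the replace-uri rule quoted above plus an overridden balance algorithm (the server names and addresses are hypothetical):

```
backend auth_servers
    balance leastconn                                   # overrides the defaults section
    http-request replace-uri ^(/)auth[/]?(.*) \1\2      # strip the /auth prefix
    server auth1 10.0.0.21:9000 check
    server auth2 10.0.0.22:9000 check
```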

Systemd Integration

Running haproxy from the command line is one of several options. The other, more maintainable solution is to add it as a systemd service on your Linux host. It is a lot more manageable & gives you the benefit of starting on system boot.

To take advantage of the above benefits, here’s a complete sample systemd service file.
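Here is a sample unit, modeled on the example service file shipped with the HAProxy sources. The paths assume the binary was installed to /usr/local/sbin and the config lives in /etc/haproxy; adjust them to your install:

```ini
[Unit]
Description=HAProxy Load Balancer
After=network-online.target
Wants=network-online.target

[Service]
# Assumed paths: adjust to where you installed the binary & config.
Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid"
ExecStartPre=/usr/local/sbin/haproxy -f $CONFIG -c -q
ExecStart=/usr/local/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE
ExecReload=/usr/local/sbin/haproxy -Ws -f $CONFIG -c -q
ExecReload=/bin/kill -USR2 $MAINPID
Restart=always
# Type=notify requires HAProxy to be built with USE_SYSTEMD=1.
Type=notify

[Install]
WantedBy=multi-user.target
```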

After adding the above file to your system, simply run the following command to enable starting upon system boot:

sudo systemctl daemon-reload  # reread the systemd-wide services
sudo systemctl enable --now haproxy.service

If you want to read more about how to write a systemd service, check the manual using this command: man 5 systemd.service.

Statistical Report

To get a feel for what the statistical report webpage looks like, here’s an image from one of the servers I previously configured.

A statistical report from one of my previously configured HAProxy servers

Conclusion

And that brings us to the end of this article about how to configure HAProxy. I have provided a sample configuration, explained what each parameter in the configuration file means, provided a systemd service, and also depicted what a statistical report looks like.

There are lots of parameters we did not cover but that requires a deep dive into the documentation and it wouldn’t fit into one Medium article.

Acknowledgment

Thanks for reading this piece. I hope you got a lot out of it.

If you have any further questions, feel free to comment below & I’ll make sure you get your answers.

If you enjoyed the above content you might also like my other works as well. Take a look if you’re interested.


Meysam Azad