Tuesday, November 17, 2015

RabbitMQ Best Practices

Hi,
At the last OpenStack summit in Tokyo, one lecture caught my eye: a talk by Michael Klishin, who works on RabbitMQ. He had some tips on how to configure Linux OS parameters to deal with the load of these modern times. I took the liberty of summarizing his lecture and adding some comments where I wasn't sure what he was talking about.

Personally, I've only just started testing systems with these changes, so I don't know what the implications are in the long run. Please be cautious and don't do anything just because someone on the internet told you to.

Adapt Your OS Resources

Inadequate OS Settings

In today's world, each service that operators run and manage has different requirements, yet many of those services run on an OS with the default parameters. Unfortunately, the Linux defaults are not suitable for these tasks; they are tuned for the workloads of the 1990s.
Here are some points on how you, as the operator, can change OS parameters to improve performance.

Open Files Limits

The default limit on open files in Linux is 1024 per process. To check the current value:
# ulimit -n
1024
The recommended open files limit for a Linux host running RabbitMQ is 500K.

To change the open files limit temporarily (for the current shell session):
# ulimit -n 512000

Changing the limit permanently requires adding two lines to /etc/security/limits.conf:
*               soft    nofile          512000
*               hard    nofile          512000
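
Once the limits are in place, it is worth checking what the running RabbitMQ node actually got. A quick sanity check (a sketch; it assumes the Erlang VM shows up as the usual beam.smp process and that rabbitmqctl is on the path):
# grep 'Max open files' /proc/$(pgrep -f beam.smp | head -n1)/limits
# rabbitmqctl status | grep -A 4 file_descriptors
The first command shows the soft and hard limits the broker process inherited; the second shows what RabbitMQ itself reports as its file descriptor limit and current usage.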

fs.file-max, the system-wide limit on file handles, defaults to a value proportional to the system's total RAM (roughly 10%). The recommendation for a system running RabbitMQ is 500K here as well.

Changing the number of file handles requires the following:
Set the value in /proc/sys/fs/file-max (this change does not survive a reboot):
# echo 512000 > /proc/sys/fs/file-max
To make it permanent, set fs.file-max = 512000 in /etc/sysctl.conf

Reload the sysctl settings for the change to take effect
# sysctl -p
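
To verify the value currently in effect:
# sysctl fs.file-max
fs.file-max = 512000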

TCP Keep Alive 

Linux has built-in support for keeping TCP connections alive and detecting dead peers. The keepalive behaviour is controlled by three parameters (shown here with their defaults):

net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9

net.ipv4.tcp_keepalive_time sets the time, in seconds, an idle connection waits before the first keepalive probe is sent. net.ipv4.tcp_keepalive_intvl is the interval, in seconds, between subsequent probes. net.ipv4.tcp_keepalive_probes is the number of unanswered probes after which the connection is considered dead. Waiting two hours (7200 seconds) before even starting to probe is not suitable for the demands on a host running RabbitMQ; the recommendation is to decrease the time and the interval so that dead connections are closed as quickly as possible.

In the lecture the suggestion is to wait only 6 seconds before sending the first probe; after consulting with my colleagues I would recommend setting the keepalive time to 30 seconds.

To change the values temporarily (until the next reboot):
# sysctl -w net.ipv4.tcp_keepalive_intvl=3
# sysctl -w net.ipv4.tcp_keepalive_time=30
# sysctl -w net.ipv4.tcp_keepalive_probes=3

To change the settings permanently, add the parameters to /etc/sysctl.conf:
net.ipv4.tcp_keepalive_probes=3
net.ipv4.tcp_keepalive_intvl=3
net.ipv4.tcp_keepalive_time=30
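
To put these numbers in perspective: with the defaults a dead peer is only detected after 7200 + 9 × 75 = 7875 seconds, roughly two hours and eleven minutes; with the values above it takes at most 30 + 3 × 3 = 39 seconds.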

In an OpenStack setup the different components are essentially RabbitMQ clients, and their heartbeat timeout is set to 60 seconds by default. The recommendation is to set it to a value between 6 and 12 seconds, for example
heartbeat_timeout_threshold=10
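
For completeness, here is how that would look in a service configuration file such as nova.conf. The section name is an assumption on my part and depends on the OpenStack release; in recent releases the oslo.messaging RabbitMQ options live under [oslo_messaging_rabbit]:
[oslo_messaging_rabbit]
heartbeat_timeout_threshold = 10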

High Throughput and Concurrent Connections

TCP Buffers

Most Linux servers work well with the default parameters, but for some the defaults are not enough. When high throughput is a priority we need to raise two parameters, net.core.rmem_max and net.core.wmem_max, which cap the receive and send socket buffer sizes. Currently, the default values are:

net.core.rmem_max = 212992
net.core.wmem_max = 212992
which is about 208 KB.

The recommendation is to increase the maximum to 16 MB.
Note: this is only a recommendation; it depends on whether the hardware supports it, and the requirements differ in each case.

Set the new values temporarily:
# sysctl -w net.core.wmem_max=16777216
# sysctl -w net.core.rmem_max=16777216

To set them permanently, add the parameters and values to /etc/sysctl.conf:
# echo 'net.core.wmem_max = 16777216' >> /etc/sysctl.conf
# echo 'net.core.rmem_max = 16777216' >> /etc/sysctl.conf
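
Note that these are only upper bounds: they control how large a buffer an application may request per socket, not what every socket allocates. You can check the values currently in effect with:
# sysctl net.core.rmem_max net.core.wmem_max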

A large number of connections costs the OS RAM, so we would like to avoid leaving connections lingering in closing states such as FIN-WAIT-2 and TIME_WAIT for long periods of time.

tcp_fin_timeout sets the number of seconds the OS will keep a connection in the FIN-WAIT-2 state. By reducing it from the default 60 to, say, 10 seconds, a connection spends far less time in FIN-WAIT-2 and its resources are released sooner. To set a new value temporarily, until the next reboot:

# sysctl -w net.ipv4.tcp_fin_timeout=10
To set it permanently:
# echo 'net.ipv4.tcp_fin_timeout = 10' >> /etc/sysctl.conf

tcp_tw_reuse allows the OS to reuse, for a new outgoing connection, a socket that is still waiting in the TIME_WAIT state. A typical use case is web servers or any client that opens lots of short-lived connections. By default tcp_tw_reuse is set to 0, meaning the OS waits until a connection is fully closed before using that port again. It is relevant on the client side rather than the server, because it only affects outgoing connections. To enable it temporarily:

# sysctl -w net.ipv4.tcp_tw_reuse=1
To set it permanently:
# echo 'net.ipv4.tcp_tw_reuse = 1' >> /etc/sysctl.conf

Disclaimer: tcp_tw_reuse is not safe 100% of the time; in general the TIME_WAIT state of TCP connections exists for good reasons. Do not enable this parameter without considerable thought.

net.core.somaxconn caps the size of the listen queue, i.e. the number of connections waiting to be accepted at the same time. The default value is either 128 or 256. A bigger number allows RabbitMQ to absorb a burst of incoming connections when clients reconnect, for example after a service restart. It is recommended to increase it to 4096 (the application must also request a matching backlog when it listens; see the RabbitMQ backlog setting below).

Set it temporarily:
# sysctl -w net.core.somaxconn=4096
Or permanently:
# echo 'net.core.somaxconn = 4096' >> /etc/sysctl.conf

In conclusion, the sysctl configuration file should contain the following:
# cat /etc/sysctl.conf
fs.file-max = 512000
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 3
net.ipv4.tcp_keepalive_time = 30
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 4096
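
All of the above can be applied at once, without a reboot:
# sysctl -p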

RabbitMQ configuration changes

RabbitMQ's TCP buffers are set in /etc/rabbitmq/rabbitmq.config. In that file there are three parameters under tcp_listen_options for us to change. backlog is the number of pending inbound connections allowed at the same time; the default is 128, and it is recommended to increase it to 4096.
recbuf and sndbuf are the receive and send buffers. According to Michael Klishin, the RAM used per connection can be reduced by roughly a factor of 10 by setting these to 16384, but it will reduce throughput, so one has to test and find the right balance. In /etc/rabbitmq/rabbitmq.config, the tcp_listen_options should be:
[
  {rabbit, [
    {tcp_listen_options, [
      {packet, raw},
      {reuseaddr, true},
      {backlog, 4096},
      {recbuf, 16384},
      {sndbuf, 16384},
      {nodelay, true},
      {exit_on_close, false},
      {keepalive, true}
    ]}
  ]}
].
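
After changing rabbitmq.config the broker has to be restarted for the new listener options to take effect. Assuming the node is managed by systemd under the usual rabbitmq-server unit name:
# systemctl restart rabbitmq-server
# rabbitmqctl status
rabbitmqctl status also shows the file descriptor limit the node picked up, so it doubles as a check for the open files changes from the beginning of this post.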
