The Core module controls essential Nginx features, some of which will have a direct impact on performance, such as the number of worker processes. It also includes some directives that are useful for debugging.
Official Nginx docs on the Core module
Below, I cover the directives that I think have the greatest impact on performance or are critical to change from default values for security purposes.
error_log file_path log_level
Accepted values for log_level, in increasing order of severity, are: debug, info, notice, warn, error, crit, alert, and emerg. This directive can be placed in the main configuration as well as in http, server, and location blocks to indicate specific rules for logging.
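As a sketch (paths and levels here are examples, not defaults), a global error_log can be overridden for a single server while debugging:

```nginx
# Global default: log warnings and anything more severe.
error_log /var/log/nginx/error.log warn;

http {
    server {
        # Override for this server only; the debug level requires
        # Nginx to have been compiled with --with-debug.
        error_log /var/log/nginx/example.error.log debug;
    }
}
```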
include file_path
Imports another configuration file at the point where the directive appears. See Nginx Configuration Syntax.
pid file_path
Default: Defined at compile time
Defines the path of the pid file, which stores the process ID of the Nginx master process.
multi_accept on | off
Default: off
Defines whether worker processes will accept all new connections at once (on), or one new connection at a time (off).
use method
Default: Nginx will automatically choose the fastest one
Specifies the connection processing method to use. The available methods are select, poll, kqueue, epoll, /dev/poll, and eventport, depending on what the operating system supports.
For Linux systems, epoll seems to yield the best performance. Here is an interesting post comparing the connection processing methods.
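Putting the two directives above together, a minimal events block might look like this (epoll shown explicitly, though Nginx normally picks it on Linux by itself):

```nginx
events {
    use epoll;        # connection processing method; Linux-specific
    multi_accept off; # the default: accept one new connection at a time
}
```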
user username groupname
Defines the user (and optionally group) that worker processes run as. It’s dangerous to set the user and group of worker processes to root. Instead, create a dedicated user specifically for Nginx worker processes (www-data is canonical).
user root root; # dangerous; change to:
user www-data www-data;
worker_connections number
This sets the number of connections that can be accepted by each worker process. If you have 4 worker processes that can accept 1024 connections each, your system can accept a total of 4096 simultaneous connections. Related to worker_rlimit_nofile, covered below.
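A sketch of that arithmetic in configuration form (the numbers are illustrative):

```nginx
worker_processes 4;

events {
    # 4 workers x 1024 connections = 4096 simultaneous connections
    worker_connections 1024;
}
```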
worker_cpu_affinity 1000 0100 0010 0001
Allows you to bind worker processes to CPU cores. Each space-separated bitmask corresponds to one worker process, and each binary digit in a mask corresponds to one CPU core. For example, if you’re running 3 worker processes on a dual-core CPU (which you shouldn’t, see below), you can configure the directive to bind two worker processes to one core and the third worker to the other:
worker_cpu_affinity 10 01 10
There are 3 bitmasks for 3 worker processes, and each mask has 2 digits for 2 CPU cores.
worker_priority number
Adjusts the nice priority level of worker processes. Decrease this number (raising the workers’ priority) if your system is running other processes simultaneously and you want to micromanage their priority levels.1
worker_processes count
This number should match the number of physical CPU cores on your system.
worker_processes 1; # set to match number of CPU cores
worker_processes 4; # assuming a quad-core system
worker_rlimit_nofile number
Default: None, system determined
worker_rlimit_nofile sets the limit on the number of file descriptors that Nginx can open. You can see the OS limit by using the ulimit -n command.
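A quick way to inspect the current limits from a POSIX shell (output values vary by system):

```shell
# Per-user file-descriptor limits for the current shell session.
ulimit -Sn  # soft limit, e.g. 1024
ulimit -Hn  # hard limit; may print "unlimited"
```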
Check out this excellent post for more on worker_rlimit_nofile. An excerpt:
When any program opens a file, the operating system (OS) returns a file descriptor (FD) that corresponds to that file. The program will refer to that FD in order to process the file. The limit for the maximum FDs on the server is usually set by the OS. To determine what the FD limits are on your server, use the commands ulimit -Hn and ulimit -Sn, which will give you the per-user hard and soft file limits.
If you don’t set the worker_rlimit_nofile directive, then the settings of your OS will determine how many FDs can be used by NGINX (sic). If the worker_rlimit_nofile directive is specified, then NGINX asks the OS to change the settings to the value specified in the directive.
In some configurations I’ve seen, the value is set at 2 times worker_connections to account for the two files per connection. So if worker_connections is set to 512, then a value of 1024 for worker_rlimit_nofile would be appropriate.
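Following that rule of thumb, a configuration sketch (the numbers are illustrative, not recommendations):

```nginx
worker_rlimit_nofile 1024; # roughly 2 x worker_connections

events {
    worker_connections 512;
}
```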
HTTP Load Testing with Autobench
The basic idea behind load testing is to run multiple load scenarios against different Nginx configuration settings, to identify the ideal configuration for your expected traffic.
Autobench is a Perl wrapper around httperf that automatically runs httperf at increasing loads until saturation is reached. It can also generate .tsv graph files that can be opened in Excel. Download it from here.
A test command looks like this:
$ autobench --single_host --host1 192.168.1.10 --uri1 /index.html --quiet --low_rate 20 --high_rate 200 --rate_step 20 --num_call 10 --num_conn 5000 --timeout 5 --file results.tsv
Some Autobench options:
--host1: The website host name you wish to test
--uri1: The path of the file that will be downloaded
--quiet: Does not display httperf information on the screen
--low_rate: Connections per second at the beginning of the test
--high_rate: Connections per second at the end of the test
--rate_step: The number of connections to increase the rate by after each test
--num_call: How many requests should be sent per connection
--num_conn: Total number of connections to open per test
--timeout: The number of seconds elapsed before a request is considered lost
--file: Export results as specified (.tsv file)
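For context, each rate step in the example command issues --num_conn connections of --num_call requests each; a quick back-of-the-envelope check in the shell:

```shell
# Request volume per rate step for the example autobench command above.
NUM_CONN=5000  # --num_conn
NUM_CALL=10    # --num_call
echo "requests per step: $((NUM_CONN * NUM_CALL))"  # prints: requests per step: 50000
```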
TODO: Find out more about Apache JMeter and locust.io
Upgrading Nginx Gracefully
To upgrade Nginx gracefully, we need to:
- Replace the binary
- Send USR2 to the old master process, which replaces the old pid file and runs the new binary2
$ ps aux | grep nginx | grep master
root 19377 0.0 0.0 85868 152 ? Ss Jul07 0:00 nginx: master process /usr/sbin/nginx
$ kill -USR2 19377
- Gracefully shut down the old worker processes
$ kill -WINCH 19377
- Kill the old master process
$ kill -QUIT 19377
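The signal sequence above can be scripted. This dry-run sketch only echoes the commands (pid 19377 is the example from the ps output; swap echo for the real kill once you have verified the master pid on your system):

```shell
# Dry-run sketch of the graceful-upgrade signal sequence.
OLD_MASTER=19377  # example pid from the ps output above

send() { echo "kill -$1 $OLD_MASTER"; }

send USR2   # start the new master and workers from the new binary
send WINCH  # gracefully shut down the old worker processes
send QUIT   # finally stop the old master process
```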
Priority levels range from -20 (most important) to 19 (least important). 0 is the default priority level for processes. Do not adjust this number past -5 as that is the priority level for kernel processes. ↩
See this Stack Overflow answer for what these signals do. ↩