Load Balancing with Nginx

In a previous post we saw how simple it is to set up nginx in front of apache, and in this post I’ll show you that it’s just as easy to use nginx as a load balancer.

Load balancing can be handled by either hardware or software. For most of us, expensive hardware is out of the question, but cheap (free) software will meet our needs just fine. Here’s a look at how nginx does load balancing:

[code lang="bash"]upstream mysite {
   server www1.mysite.com;
   server www2.mysite.com;
}

server {
   server_name www.mysite.com;
   location / {
      proxy_pass http://mysite;
   }
}[/code]

The above configuration will send 50% of requests for www.mysite.com to www1.mysite.com and the other 50% to www2.mysite.com. If you add a “weight” parameter to the end of a “server” line, you can change those proportions. Other useful options include max_fails and fail_timeout, and for sticky sessions use ip_hash. Refer to the full documentation for further details.
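Here’s a sketch of those options in use, with the same two placeholder backends (the weight and timeout values are illustrative, not recommendations):

[code lang="bash"]upstream mysite {
   # www1 receives roughly two requests for every one sent to www2
   server www1.mysite.com weight=2;
   # after 3 failed attempts, skip www2 for 30 seconds before retrying
   server www2.mysite.com max_fails=3 fail_timeout=30s;
}[/code]

For sticky sessions you would drop the weight and put ip_hash; on its own line at the top of the upstream block instead.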

Now that you know how to load balance, you will need to learn how to sync your files between multiple web servers.

21 thoughts on “Load Balancing with Nginx”

  1. Pingback: Sameer Parwani » Nginx As a Front End to Apache

  2. Pingback: Sameer Parwani » Syncing Files Between Web Servers

  3. Pingback: Sameer Parwani » Handling Sessions with MySQL

  4. smcnally

    Thanks for the article, Sameer.

    I’ve read this and your nginx setup article.

    I’m shooting for a slightly different config: nginx behaves solely as a load balancer. apache2 is serving all static and dynamic content. (it’s running wordpress mu, in fact).

    I’m not sure what my nginx.conf looks like in total.

    So far, this is what I have – it’s not yet working as I’d hoped (i.e. passing all http requests through to one or more of the apache2 servers). Any direction and assistance you can provide is greatly appreciated.

    user www-data;
    worker_processes 1;

    error_log /var/log/nginx/error.log;
    pid /var/run/nginx.pid;

    events {
    worker_connections 1024;
    }

    http {
    upstream www-cluster {
    ip_hash;
    server 000.20.000.108:80; # ts-www0
    server 000.20.000.000:80; # ts-www1
    server 000.20.000.000:80; # ts-www2
    }
    }

    My actual public IPs are in the server entries.

    Many thanks, and please let me know if there’s other info I can provide to assist in troubleshooting.

  5. Sameer Post author

    You are missing the server block within the http section. If you look above you need both upstream and server. Upstream defines a set of hosts that requests will be passed to. But you still need the server block to define which requests are passed to which upstream. What I’ve written in the original post, I believe, is exactly what you are looking for.
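
    As a sketch, the complete http section could look something like this (keeping the redacted IPs and the www-cluster name from your paste; the server_name is a placeholder for your real hostname):

    http {
       upstream www-cluster {
          ip_hash;
          server 000.20.000.108:80; # ts-www0
          server 000.20.000.000:80; # ts-www1
          server 000.20.000.000:80; # ts-www2
       }

       server {
          listen 80;
          server_name www.mysite.com; # placeholder
          location / {
             proxy_pass http://www-cluster;
          }
       }
    }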

  6. smcnally

    Hello –

    A few differences – and the source of my confusion – are on the following directives:

    server {
    listen 80;

    I’ve got apache on port 80, so I presume nginx ought to be listening on another port. True statement?

    # static files
    location ~* ^.+.(jpg|jpeg|gif|png|ico|css|txt|js|htm|html)$ {
    root /path/to/webroot;
    }

    I don’t plan to serve any content directly via nginx. I’ve commented this section out.

    # pass all else onto apache waiting at localhost:8080
    location / {
    proxy_pass http://localhost:8080;

    so, here’s where I presumed to add my ip_hash of servers:

    My update:

    # pass all else onto apache waiting :80
    location / {
    proxy_pass http {
    upstream www-cluster {
    ip_hash;
    server 000.20.86.000:80; # ts-www0
    server 000.20.86.000:80; # ts-www1
    server 000.20.87.000:80; # ts-www2

    Am I on the correct path?

    Again, I appreciate your time and assistance a great deal.

  7. Sameer Post author

    Okay, first: both apache and nginx default to listening on port 80. Since you are using nginx as your frontend, you want the experience to be seamless for users, so you keep nginx on 80 and set apache to listen on another port. This assumes that nginx and apache are running on the same server and listening on the same IP; if not, you don’t have to touch the port settings. Keeping nginx on port 80 lets people visit your site via http://www.mysite.com instead of http://www.mysite.com:8080/

    Yes, it makes sense to comment out that block of code. As for the rest, I’m not quite sure your syntax is valid, but here’s how I would do it…

    Let’s say you have nginx running on 000.20.86.000 on port 80 and you have apache running on port 8080 on 000.20.86.000, 000.20.87.000, and 000.20.88.000. This is how your nginx configuration file should look.

    upstream mysite  {
       server 000.20.86.000:8080; # ts-www0
       server 000.20.87.000:8080; # ts-www1
       server 000.20.88.000:8080; # ts-www2
    }
    
    server {
       listen 80;
       server_name www.mysite.com;
       location / {
          proxy_pass  http://mysite;
       }
    }
    

    And this is how apache should look (also remember to change it to listen on port 8080 instead of 80):

    NameVirtualHost 000.20.86.000:8080
    <VirtualHost 000.20.86.000:8080>
            DocumentRoot /path/to/site
            ServerName www.mysite.com
    </VirtualHost>
    
  8. smcnally

    Thank you, again, Sameer –
    nginx and apache are on different physical hosts. Both listening on port 80.

    nginx is still fulfilling requests I presumed it would be passing along.

    I’m seeing nothing in the logs or from the clients (workstation web browser) to say anything’s going beyond nginx’ default config:

    http://209.20.83.83/

    173.68.142.30 - - [01/Apr/2009:02:42:15 +0000] “GET / HTTP/1.1” 304 0 “-” “Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/528.16 (KHTML, like Gecko) Version/4.0 Safari/528.16”

    I’m sure I’m missing something simple.

    smcnally@lb1:/var/log/nginx$ sudo cat /etc/nginx/nginx.conf
    [sudo] password for smcnally:
    user www-data www-data;
    worker_processes 1;

    error_log /var/log/nginx/error.log;
    pid /var/run/nginx.pid;

    events {
    worker_connections 1024;
    }

    http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;
    tcp_nodelay on;

    gzip on;

    include /etc/nginx/sites-enabled/*;

    # this is where you define your mongrel clusters.
    # you need one of these blocks for each cluster
    # and each one needs its own name to refer to it later.
    upstream stag-ts {
    ip_hash;
    server 209.20.86.108:80; # ts-www0
    server 209.20.86.117:80; # ts-www1
    server 209.20.87.81:80; # ts-www2
    }

    server {
    listen 80;
    server_name http://stage.authorified.com;
    location / {
    proxy_pass http://stag-ts;
    }
    }

    }

    My goal is to pass all requests to one of the apache2 nodes. Apache logfiles show nothing.

    Requesting the IP or domain name via browser client passes nothing to www0 – www2.

    I’m fronting three separate physical boxes for apache. Those refer to a different physical box for mysql.

    The goal is for requests to http://209.20.83.83/ to be distributed to

    server 209.20.86.108:80; # ts-www0
    server 209.20.86.117:80; # ts-www1
    server 209.20.87.81:80; # ts-www2

    I would have private messaged, but did not see that option. With hope, all can learn.

    Many thanks, best regards, and please let me know what *stupid* thing I’m doing.

    S

  9. James Litton

    This post makes the config easy to understand. Is there a way to dynamically update the nginx config? For example, if I want to take a back-end server out to update software, or if I want to add an additional server to help manage the load?

    I could rebuild the nginx config with a script that grabbed the backend servers from ldap/sql/memcache or a local file, but would running a sighup to reload the config kill the current connections or handle them gracefully?

    Also, what happens if a server stops responding? Does nginx continue to attempt to send requests to that machine and fail after a timeout, or does it mark it as failed in some way?

  10. paaru

    Hi Sameer,
    I need your help to solve my big (and your small) problem.
    I want to configure a high-load site with nginx + ruby.
    We are expecting 400,000 concurrent connections.
    Is that possible with nginx?
    We bought two high-spec servers located in the US.
    Please assist me with this.
    I have set up a testing environment on our LAN, but after 200 concurrent connections the site seems too busy…
    Please help, friends…

  11. Adrian Ruloso

    Hi, I’m trying to use a load balancer across multiple websites on 2 servers (multiple apache virtual hosts), but I’m not able to get the right conf. All requests are sent from nginx to the default apache virtualhost and not to each virtualhost.

    Here is my conf:

    NGINX

    upstream pool_ws {
    server webserver1;
    server webserver2;
    }

    server {
    server_name http://www.mysite-A.com;
    location / {
    proxy_pass http://pool_ws;
    }
    }

    server {
    server_name http://www.mysite-B.com;
    location / {
    proxy_pass http://pool_ws;
    }
    }

  12. Adhi

    I tried this for a WordPress MU installation, but it doesn’t seem to work; it gets stuck in an infinite redirect loop. Is there a way out?

    Thanks in advance.

  13. t0m_a

    Hi Sameer,
    Thanks for that article, nicely written!
    I have a question related to network configuration in a web servers cluster.

    According to the configuration exposed in this article:

    I have a front end nginx server on one host to balance load between two back end servers on two different hosts.
    All of them are on a private network behind an ADSL router which permits forwarding port 80 requests only to one IP address (the front end server) via a NAT rule.

    So I’d like to know: in this configuration, does the front end nginx receive the answers from the back end servers and then serve them to the client, or does the chosen back end server try to answer the client directly?

    It may have something to do with HTTP sessions, but I don’t know much about that topic for the moment.

    I hope I’m clear enough in my explanation and in my English.

    Cheers!
    Tom

