2009/03/28

Redundancy using Nginx for failover

I've got a website running on two servers, for redundancy, with an nginx load-balancer in front of them. If one server goes down, all requests will be sent to the other, thanks to this stanza in the nginx config:

upstream webservers {
    server 1.2.3.4; # web1
    server 2.3.4.5; # web2
}

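For context, the upstream block only defines the pool; a separate server block proxies requests to it by name. A minimal sketch of that wiring, with example.com and port 80 standing in for the real site's details:

server {
    listen 80;
    server_name example.com; # stand-in for the real hostname

    location / {
        # Hand every request to the pool defined above
        proxy_pass http://webservers;
    }
}
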
This uses a round-robin system, sending successive requests to alternate servers. That's great for spreading the load around, but not exactly what I want.

The webservers are not heavily loaded, but they do run some quite complex reports against the database, the results of which are cached on each webserver's local filesystem. The cache is expired periodically.

Using a round-robin algorithm, we end up calculating each report twice per period - i.e. once on each webserver. What I really want is for all requests to go to web1, so that it's using its cache as efficiently as possible. web2 should only start receiving traffic if web1 goes down.

You can fake this by skewing the 'weight' parameter, like this:

upstream webservers {
    server 1.2.3.4 weight=100000; # web1
    server 2.3.4.5; # web2
}

Now, only about one in every 100,000 requests will go to web2, so the cache on web1 is utilised to a much greater extent. If web1 goes down, web2 gets all the traffic.

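Alternatively, if the nginx build is recent enough (the 'backup' parameter arrived in 0.6.7), the upstream module can express this intent directly: a server marked backup only receives traffic when the others are unavailable. A sketch, with illustrative max_fails/fail_timeout values controlling how quickly web1 is marked as down:

upstream webservers {
    # web1 takes all the traffic; if it fails 3 times within 30s,
    # nginx considers it down for the next 30s and web2 takes over
    server 1.2.3.4 max_fails=3 fail_timeout=30s; # web1
    server 2.3.4.5 backup; # web2
}
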
Redundancy plus efficient caching.

2009/03/08

Rewired State

I went to the Rewired State hack day yesterday.

A brilliant time was had by all. Photos here, video here.

Our hack was What's On Their Minds? (if that URL doesn't work, try this link).