Node: hundreds of simultaneous requests slow down the server substantially. OS issue?


My Node application makes requests to two servers, A and B. To server A, it waits for one request to finish before making the next one. To server B, it makes 20 requests a second without waiting. While I'm making the requests to server B, the requests to server A take a very long time; when I'm not making the requests to server B, they complete quickly. The requests to server B pile up, but there are never more than a few hundred in flight simultaneously.
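Roughly, the request pattern looks like this (a stripped-down sketch, not my actual code; the hostnames, paths, and handlers are placeholders):

    var http = require('http');

    // Server A: one request at a time, the next starts only when the previous finishes.
    function pollServerA() {
      http.get('http://server-a.example.com/work', function (res) {
        res.on('data', function () {});      // drain the response body
        res.on('end', pollServerA);          // only then issue the next request
      }).on('error', function (err) {
        console.error('A failed:', err.message);
        setTimeout(pollServerA, 1000);
      });
    }
    pollServerA();

    // Server B: fire a request every 50 ms (20 per second) without waiting for responses.
    setInterval(function () {
      http.get('http://server-b.example.com/event', function (res) {
        res.resume();                        // discard the body
      }).on('error', function (err) {
        console.error('B failed:', err.message);
      });
    }, 50);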

I've run the exact same application, with the same Node version, on a Joyent SmartOS instance and I don't have this problem there, so I assume it's an issue with the limits the operating system sets rather than with the limits Node sets. In Node I do have maxSockets set to 10000, as explained here: http://markdawson.tumblr.com/post/17525116003/node

I'm running my application with Upstart, though I don't know whether I'd have the problem without it (that would be my next test). In my Upstart config file I have limit nofile 90000 90000; a trimmed-down version of the job file is shown below. There are some other limits I can raise, as documented here: http://upstart.ubuntu.com/wiki/Stanzas#limit, but I don't know what they do. Could one of these be causing the problem? Where else might my Ubuntu machine's limits be set?
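Something like this (names and paths changed; the commented-out nproc stanza is just an example of one of the other limits I could raise, per the Upstart docs):

    # /etc/init/myapp.conf
    description "node application"

    start on runlevel [2345]
    stop on runlevel [016]

    # raise the per-process open-file limit (soft hard)
    limit nofile 90000 90000
    # other limit stanzas exist too, e.g. the maximum number of processes
    # limit nproc 90000 90000

    respawn
    exec /usr/bin/node /opt/myapp/server.js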

I should add that I'm launching the Upstart job via Monit, in case that's relevant.

1 Answer

Answer by Tracker1:

You don't mention how you're talking to server A or server B, but Node's HTTP library has a default limit of five connections per host (protocol/host/port combination). You can increase this with http.globalAgent.maxSockets = 20; or whatever you'd like the maximum to be.
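For example (assuming the built-in http module and its default global agent; swap in https.globalAgent if the requests go over TLS, and treat the hostname and values below as placeholders):

    var http = require('http');

    // Raise the per-host connection cap on the default agent,
    // which is shared by plain http.get/http.request calls.
    http.globalAgent.maxSockets = 10000;

    // Alternatively, give the busy client its own agent so it
    // cannot starve the other server's requests of sockets.
    var agentForB = new http.Agent();
    agentForB.maxSockets = 500;

    http.get({ host: 'server-b.example.com', path: '/event', agent: agentForB },
      function (res) { res.resume(); });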

Other issues could be related to open file/socket limits in your OS. Beyond the per-process nofile limit you've already raised, you'll also want to look at the system-wide /proc/sys/fs/file-max.

From recent linux/Documentation/sysctl/fs.txt:

file-max & file-nr:

The kernel allocates file handles dynamically, but as yet it doesn't free them again.

The value in file-max denotes the maximum number of file handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit.

Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles.

Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached".
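If you want to see where you stand, something along these lines on the Ubuntu box (the 200000 value and <node-pid> are placeholders):

    # system-wide limit and current usage
    cat /proc/sys/fs/file-max          # maximum number of file handles
    cat /proc/sys/fs/file-nr           # allocated, free, maximum

    # raise it temporarily
    sudo sysctl -w fs.file-max=200000

    # make it permanent
    echo 'fs.file-max = 200000' | sudo tee -a /etc/sysctl.conf

    # per-process limit actually applied to the running node process
    grep 'open files' /proc/<node-pid>/limits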


Specific to Ubuntu: if you have a lot of ufw (firewall) and/or iptables rules in place, that can affect things too.
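A quick way to gauge whether that applies to you (the counts are only indicative):

    sudo ufw status numbered             # list ufw rules
    sudo iptables -L -n | wc -l          # rough count of filter-table rules
    sudo iptables -t nat -L -n | wc -l   # and of nat-table rules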