Assume makes an ass out of u and me

But mostly me. ;)

During yesterday’s investigations into handling lots (30,000+) of socket connections to a server built with The Server Framework I took a few things for granted. I should have been a bit more thorough rather than just assuming I knew what I was doing.

Today I did some more tests.

I was a little surprised that flicking the switch on my server framework so that it posted zero byte reads had no effect on the limit of connections that I could support. I’ve never actually needed the zero byte read functionality but a user requested it, so I added it and that was that. I realised that I now wasn’t 100% sure of what it was supposed to fix, just that it helps when you have lots of connections that don’t send data that often.

I looked it up in Network Programming for Microsoft Windows and the reason for using zero byte reads is that there is a fixed amount of memory that Windows will allow to be locked at any one time. This limit, the number of locked pages, is separate from the non-paged pool limit; I was getting the two confused. Issuing zero byte reads doesn’t help reduce non-paged pool usage, it works to prevent you exceeding the I/O page lock limit. When you issue an overlapped read or write request it is probable that the data buffers involved will be locked. The locking granularity is the page size of the system, and since there’s a limit on the number of pages that can be locked at any one time, if you have lots of connections and each has a large number of pages locked due to pending read requests then you can run out of resources and be unable to issue any more reads or writes; overlapped operations fail with WSAENOBUFS when the limit is reached.

The zero byte read “trick” works around this by issuing reads with a zero length buffer. This means that a read is pending but there are no locked memory pages, so you can have many connections with pending reads and no page locks. When data arrives on a connection you get a read completion and you can either issue a synchronous read to read the data into a real buffer or you can do what I do, which is post another overlapped read request with a real buffer. If you have lots of connections and each connection only receives data infrequently then this technique can help you avoid the I/O page lock limit.
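To make the trick concrete, here’s a rough sketch of the idea using plain Winsock overlapped I/O rather than The Server Framework itself; the PerConnectionData structure and the function names are invented for illustration, they’re not the framework’s API.

#include <winsock2.h>

struct PerConnectionData
{
   WSAOVERLAPPED overlapped;
   char buffer[8192];            // the "real" buffer; untouched until data actually arrives
};

// Post a read with a zero length buffer. The read stays pending but there is
// no data to lock, so no memory pages are locked for it.
bool PostZeroByteRead(SOCKET s, PerConnectionData *pData)
{
   WSABUF buf;

   buf.len = 0;                  // zero length is what makes this a "zero byte read"
   buf.buf = pData->buffer;

   DWORD bytesRecvd = 0;
   DWORD flags = 0;

   ZeroMemory(&pData->overlapped, sizeof(pData->overlapped));

   if (SOCKET_ERROR == WSARecv(s, &buf, 1, &bytesRecvd, &flags, &pData->overlapped, 0))
   {
      return (WSA_IO_PENDING == WSAGetLastError());
   }

   return true;
}

// When the zero byte read completes we know data is waiting, so now it's
// reasonable to post a read with a real buffer (or do a synchronous read).
bool PostRealRead(SOCKET s, PerConnectionData *pData)
{
   WSABUF buf;

   buf.len = sizeof(pData->buffer);
   buf.buf = pData->buffer;

   DWORD bytesRecvd = 0;
   DWORD flags = 0;

   ZeroMemory(&pData->overlapped, sizeof(pData->overlapped));

   if (SOCKET_ERROR == WSARecv(s, &buf, 1, &bytesRecvd, &flags, &pData->overlapped, 0))
   {
      return (WSA_IO_PENDING == WSAGetLastError());
   }

   return true;
}

The completions would be handled off an I/O completion port in the usual way; when the zero byte read completes you post the real read, and only at that point does the connection have a buffer that needs locking.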

My problems at ~32,000 connections yesterday were not due to the I/O page lock limit so switching on zero byte reads had no effect.

To prove to myself that the zero byte read code did actually do something I increased the size of the buffers used for each I/O operation and ran my test again. This time the I/O page lock limit could be clearly seen: the server continued to accept connections but the reads that it was issuing on those connections were failing with WSAENOBUFS. That’s another issue to address in the framework, as it would be useful for the server to deal with the situation a little better than it currently does, but it’s not a real problem for any of my clients right now so it has gone on the list of things to do. It’s also, obviously, not the factor that’s limiting the number of connections that the server can support.
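One way the framework could deal with it better, and I should stress that this is just a sketch of the idea rather than what the code does today, would be to notice the WSAENOBUFS failure and fall back to a zero byte read for that connection instead of treating the failure as fatal; reusing the invented helpers from the sketch above:

// Hypothetical fallback: if posting a real read fails with WSAENOBUFS (the
// page lock limit has been reached) retry it as a zero byte read rather than
// giving up on the connection.
bool PostReadWithFallback(SOCKET s, PerConnectionData *pData)
{
   if (PostRealRead(s, pData))
   {
      return true;
   }

   if (WSAENOBUFS == WSAGetLastError())
   {
      // A zero byte read doesn't need any pages locked, so the connection
      // can still find out when data next arrives and try again then.
      return PostZeroByteRead(s, pData);
   }

   return false;
}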

The next potential limit is the one that I was talking about yesterday: the non-paged pool limit. The problem with me assuming that my server was hitting this limit was that I never bothered to look too closely at how much non-paged pool was actually being consumed. Like duh! Today I did that and I was surprised to see that the server wasn’t anywhere near the non-paged pool limit. A standard Windows XP box with 1GB of RAM has 256MB of non-paged pool. The figures in Network Programming for Microsoft Windows suggest that a connected socket uses around 2KB of non-paged pool, an accepted socket uses around 1.5KB (less, because it only needs to store the remote address) and each pending I/O operation uses around 500 bytes. Taking those figures at face value, 30,000 accepted connections with one pending read each should only need somewhere around 60MB, nowhere near 256MB. This morning, whilst looking for ways to adjust the non-paged pool size (if I reduced it I should see fewer connections) I discovered an article that mentioned two registry values that affect non-paged pool usage. Both values live under:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management

The first, NonPagedPoolSize, lets you change the size of the non-paged pool and the second, NonPagedPoolQuota, relates to a per-process quota that the system imposes on each process. Not wanting to play with these values just yet, I decided to test my theory that the limitation wasn’t related to non-paged pool use by having each connection post several overlapped read requests when the connection was established, rather than just one. This resulted in the server really running out of non-paged pool (possibly hitting the process quota) after the process had used around 75MB of non-paged pool (based on Task Manager’s figures for the process), with around 208MB of non-paged pool consumed in total (I assume the rest is in the networking subsystem or a rogue LSP)…
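For what it’s worth, the test change amounted to something like this, again using the invented helpers from the earlier sketch rather than the framework’s real connection-established handling:

// Hypothetical version of the test: post several overlapped reads per
// connection rather than one, so each connection pins several buffers'
// worth of non-paged pool and locked pages.
static const size_t READS_PER_CONNECTION = 4;

// pReads points at an array of READS_PER_CONNECTION entries; each pending
// read needs its own OVERLAPPED and its own buffer.
bool PostInitialReads(SOCKET s, PerConnectionData *pReads)
{
   for (size_t i = 0; i < READS_PER_CONNECTION; ++i)
   {
      if (!PostRealRead(s, &pReads[i]))
      {
         return false;
      }
   }

   return true;
}

Each pending read carries its own buffer and its own slice of non-paged pool, which is exactly why piling more of them onto every connection drives the usage up so quickly.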

So, that’s where I am at the end of today. I understand the potential problems more and yet I understand the actual problem less. Next on the list: running the tests on one of my server boxes that doesn’t have any wonderful LSPs installed, running them in a clean VMware machine, and installing some more memory to see if the problem changes when I have 2GB rather than just the one…