The Server Framework now includes UDP multicast. There are a couple of new examples: a server that joins a multicast group and a client that sends to a multicast group.
The latest version of The Server Framework contains a memory leak in the CSocketServerEx class. It’s quite a good leak: it leaks an I/O buffer on every connection attempt. The fix is as follows:
Change this code:
    void CStreamSocketServerEx::PostAcceptRequest()
    {
       if (m_listeningSocket != INVALID_SOCKET)
       {
          if (m_connectionLimiter.CanCreateConnection())
          {
             IncrementPendingReceives();

             const IFullAddress &address = GetAddress();

             IStreamSocketEx *pSocket = AllocateSocket(CreateSocket(address.Type())).Detach();

             IBuffer *pBuffer = AllocateBuffer();

             pBuffer->SetUserPtr(m_socketIndex, pSocket);

             PostIoOperation(pBuffer, IO_Accept_Request);
          }
          else

to this:
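The corrected code isn’t reproduced above, but the ownership rule behind this kind of fix is worth illustrating on its own: once a buffer has been allocated, every path out of the function must either hand it to the I/O system or release it. Here’s a minimal, hypothetical C++ sketch (the `Buffer`, `PostIoOperation` and `PostAcceptRequest` names here are simulated stand-ins, not the framework’s actual code) showing how holding the buffer in a scoped owner until the post succeeds prevents a leak on the failure path:

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in for the framework's buffer type; the live count
// lets us observe whether anything leaks.
struct Buffer
{
    static int s_live;
    Buffer()  { ++s_live; }
    ~Buffer() { --s_live; }
};

int Buffer::s_live = 0;

Buffer *g_pending = 0;   // buffer "owned by the kernel" after a successful post

// Simulated overlapped post: on success the I/O system takes ownership of
// the buffer until the operation completes; on failure the caller keeps it.
bool PostIoOperation(Buffer *pBuffer, bool succeed)
{
    if (succeed)
    {
        g_pending = pBuffer;
        return true;
    }

    return false;
}

// Holding the buffer in a unique_ptr until the post succeeds means that the
// failure path (or any early return) releases it rather than leaking it.
bool PostAcceptRequest(bool canPost)
{
    std::unique_ptr<Buffer> buffer(new Buffer());

    if (!PostIoOperation(buffer.get(), canPost))
    {
        return false;            // unique_ptr frees the buffer here
    }

    buffer.release();            // ownership transferred to the I/O system

    return true;
}
```

The same effect can be had without smart pointers by explicitly releasing the buffer on every error path, but scoped ownership makes it much harder to miss one.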
I’ve just fixed a problem in The Server Framework that was reported to me by one of my clients. There’s a race condition during connection establishment which can be demonstrated by a client that connects and then terminates the connection very quickly. The bug could be said to be in the example servers rather than the framework itself but it can be fixed by a change to the framework…
In the non-AcceptEx() version of the TCP server code, during connection establishment OnConnectionEstablished() is called from the code that processes a successful Accept().
I’ve just had a question from a reader via email:
“I’m developing my huge server (I don’t have much experience with programming servers) and as far as I know your source is the best available out there. I would like to fit it to my needs and use more of the C++ Standard Template Library instead of those buffer/linked list classes you made yourself, but I’m afraid it would end in a loss of performance.
I’ve just made a small change to The Server Framework. The change is in how the AsyncConnect() function reports connection errors. The problem can occur if a call to Close() races with a connection attempt on a socket that has not yet successfully connected. If the call to Close() completes before the connection is attempted by the I/O thread pool (possible, though unlikely in normal operation) then the connection failure doesn’t currently make its way back to OnOutgoingConnectionFailed(), which means that any code you might have in there for dealing with connection failures won’t get called…
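The general shape of this kind of fix is that the connect path has to notice that the socket was closed underneath it and still report a failure. Here’s a minimal, hypothetical C++ sketch of the idea (the `PendingConnection` class and its methods are illustrative, not the framework’s actual code):

```cpp
#include <cassert>
#include <mutex>

// Hypothetical sketch: a socket that may be closed before the I/O thread
// pool gets around to attempting the connect. The connect path must notice
// the close and report a failure rather than silently doing nothing.
class PendingConnection
{
public:
    void Close()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_closed = true;
    }

    // Called later, on an I/O pool thread.
    bool AttemptConnect()
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);

            if (m_closed)
            {
                OnOutgoingConnectionFailed();   // report, don't drop it
                return false;
            }
        }

        // ... issue the real connect here ...

        return true;
    }

    bool FailureReported() const { return m_failureReported; }

private:
    void OnOutgoingConnectionFailed() { m_failureReported = true; }

    std::mutex m_mutex;
    bool m_closed = false;
    bool m_failureReported = false;
};
```

The lock only needs to protect the closed flag; the important design point is that the close and the connect attempt agree on a single ordering, so one of the two paths always owns the job of reporting the failure.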
There’s a bug in one of the constructors for CSocket::InternetAddress which means that changing the example servers to use a specific network adapter’s address, rather than INADDR_ANY, will generate an exception from the call to bind() which says that the address is invalid.
The code currently reads like this:
    CSocket::InternetAddress::InternetAddress(
       const unsigned long address,
       const unsigned short port)
    {
       sin_family = AF_INET;
       sin_port = htons(port);
       sin_addr.s_addr = htonl(address);
    }

and it should read like this:
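The corrected constructor isn’t reproduced above, but the symptom (bind() rejecting the address) is characteristic of a byte-order mix-up: if a caller passes in an address that is already in network byte order (as inet_addr() returns, for instance) and the constructor converts it again with htonl(), the resulting address is scrambled. This stand-alone sketch (using a portable byte-swap in place of htonl() so it doesn’t need winsock) shows what the double conversion does to an address on a little-endian host:

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Portable stand-in for htonl() on a little-endian host: a 32-bit byte swap.
uint32_t swap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u) | (v << 24);
}

// Render a host-order IPv4 address as a dotted quad.
std::string dotted(uint32_t host)
{
    return std::to_string((host >> 24) & 0xFF) + "." +
           std::to_string((host >> 16) & 0xFF) + "." +
           std::to_string((host >> 8) & 0xFF) + "." +
           std::to_string(host & 0xFF);
}

// 192.168.0.1 as a host-order value.
const uint32_t kHost = 0xC0A80001u;
```

Converting a value that is already in network order swaps its bytes a second time: 192.168.0.1 comes out as 1.0.168.192, which is why bind() reports an invalid address. The fix is to be consistent about which byte order the constructor expects and to convert exactly once.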
The latest release of the free version of my asynchronous, Windows, IOCP-based socket server framework can now be obtained from here at ServerFramework.com.
The latest release is really just a bug fix and compiler compatibility release. The code now supports Visual Studio 2005 (as well as VC6, Visual Studio .Net (2002) and Visual Studio 2003). I’ve fixed the memory leak bugs that were present in the ThreadPoolLargePacketEchoServer sample and worked around the iostream leak that’s present in the Visual Studio 2005 version of the STL.
A while ago I reported that I’d been seeing very strange memory usage patterns in the debug build of the simple echo server when built with Visual Studio 2005 using the standard version of the STL that ships with it. The thing that interests me most about this leak is that it seems to be more of a ‘strange allocation strategy’ than a leak: previously I was able to get the memory usage to grow to a large amount before it suddenly dropped back to something reasonable and then started to grow again.
My VoIP client has been stress testing the UDP version of The Server Framework and they had what they thought was a deadlock. It wasn’t a deadlock; it was more of a lazy server… What seems to have happened is that they had been thrashing a server with lots of concurrent datagrams and pushed the machine until it stopped receiving packets because it hit the machine’s resource limits (probably either non-paged pool exhaustion or the I/O page lock limit).
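One way to keep a server from being pushed into that state is to cap how many receives are pending at once, since each pending overlapped operation ties up non-paged pool and locked pages until it completes. Here’s a hypothetical sketch of such a throttle (the `PendingRecvLimiter` class is illustrative, not the framework’s actual mechanism):

```cpp
#include <atomic>
#include <cassert>

// Hypothetical throttle: cap the number of datagram receives that can be
// pending at once, so a flood of traffic can't tie up unbounded amounts
// of non-paged pool / locked pages for outstanding overlapped buffers.
class PendingRecvLimiter
{
public:
    explicit PendingRecvLimiter(int limit) : m_limit(limit), m_pending(0) {}

    // Returns true if a new receive may be posted; false means back off.
    bool TryStartRecv()
    {
        int current = m_pending.load();

        while (current < m_limit)
        {
            if (m_pending.compare_exchange_weak(current, current + 1))
            {
                return true;
            }
            // current was reloaded by the failed CAS; loop and retry
        }

        return false;
    }

    // Called from the completion handler when a receive finishes.
    void OnRecvCompleted() { --m_pending; }

private:
    const int m_limit;
    std::atomic<int> m_pending;
};
```

The right limit depends on the machine and the workload; the point is simply that having an explicit ceiling lets the server degrade gracefully rather than driving the box into resource exhaustion.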
Recently I finished developing a high performance ISO-8583 financial transaction authorisation server for a client using The Server Framework, and whilst I was running the final black-box tests against the server I realised that these particular tests were dependent on the system date of the machine running the server. The server uses the current time and date to make some decisions on whether it can authorise a particular transaction or not.
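The usual way out of this kind of test dependency is to have the server ask an injectable interface for “now” rather than calling the system clock directly, so the tests can supply a fixed date without touching the machine’s clock. A hypothetical sketch of the idea (these interface and class names are illustrative, not the actual server’s code):

```cpp
#include <cassert>
#include <ctime>

// Hypothetical time provider: code that needs the current date asks this
// interface instead of calling the system clock directly.
class IProvideTime
{
public:
    virtual std::time_t Now() const = 0;
    virtual ~IProvideTime() {}
};

// Production implementation: delegates to the real system clock.
class SystemClock : public IProvideTime
{
public:
    std::time_t Now() const override { return std::time(0); }
};

// Test implementation: always reports a fixed moment in time.
class FixedClock : public IProvideTime
{
public:
    explicit FixedClock(std::time_t when) : m_when(when) {}
    std::time_t Now() const override { return m_when; }

private:
    std::time_t m_when;
};

// Example of a date-dependent decision, parameterised on the clock.
bool CanAuthorise(const IProvideTime &clock, std::time_t expiry)
{
    return clock.Now() <= expiry;
}
```

With the clock injected, the black-box tests can run the same scenarios on any day simply by constructing the server with a FixedClock, and production code uses SystemClock unchanged.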