One of the questions that comes up time and again from users of The Server Framework is “How do I access the list of current connections within the framework?”. My answer is: you don’t; you build your own collection and manage it yourself. Usually these people want to write some form of chat or gateway server: client A connects to the server and needs to send data to client B, who is also connected to the server.
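A minimal sketch of the idea, using only standard C++; the class and member names here are mine, not the framework's, and in a real server the `Connection` handle would wrap the framework's socket object:

```cpp
#include <map>
#include <mutex>
#include <string>

// Hypothetical connection handle; in a real server this would wrap the
// framework's socket object.
struct Connection
{
   std::string name;
};

// A user-maintained collection of live connections, keyed by client id.
// Access is serialised with a mutex because connections are added and
// removed from I/O threads.
class ConnectionCollection
{
public:
   void Add(const std::string &clientId, Connection *pConnection)
   {
      std::lock_guard<std::mutex> lock(m_mutex);
      m_connections[clientId] = pConnection;
   }

   void Remove(const std::string &clientId)
   {
      std::lock_guard<std::mutex> lock(m_mutex);
      m_connections.erase(clientId);
   }

   // Returns the connection for clientId, or nullptr if it isn't
   // connected; this is how client A locates client B to send data to it.
   Connection *Find(const std::string &clientId)
   {
      std::lock_guard<std::mutex> lock(m_mutex);
      auto it = m_connections.find(clientId);
      return it != m_connections.end() ? it->second : nullptr;
   }

private:
   std::mutex m_mutex;
   std::map<std::string, Connection *> m_connections;
};
```

The important point is that the application, not the framework, decides the key (a user name, a session id, whatever identifies your clients) and is responsible for removing entries when connections close.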
I realised this morning that part of my “The life of a stream socket connection” document about the safe use of server callbacks in my server framework was wrong.
I said this:
At any time after a connection is established, including before you’ve had a chance to handle the connection establishment event, you might get a connection termination event.
and this:
Note that it IS possible that you could receive error, client disconnect, connection reset or connection closed events before or during the processing of the connection established callback.
As I mentioned here I’ve recently adjusted how socket callbacks are dispatched in The Server Framework.
Once you’ve written a TCP server or client you will find that you spend a lot of time dealing with the lifecycle of the connections that are created. You get to deal with a number of events which take place during a connection’s lifetime and, due to the way that the framework works, you can select just the events that interest you and ignore the rest.
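The pattern looks roughly like this; the base class and event names below are illustrative, not the framework's actual interface:

```cpp
#include <string>

// Illustrative base class: every lifecycle event has a default,
// do-nothing implementation, so a derived handler only overrides the
// events that it cares about.
class StreamSocketCallbacks
{
public:
   virtual ~StreamSocketCallbacks() = default;

   virtual void OnConnectionEstablished() {}
   virtual void OnDataReceived(const std::string & /*data*/) {}
   virtual void OnConnectionReset() {}
   virtual void OnConnectionClosed() {}
};

// A handler that only cares about two of the events; the rest are
// silently ignored via the base class defaults.
class EchoHandler : public StreamSocketCallbacks
{
public:
   bool connected = false;
   std::string lastData;

   void OnConnectionEstablished() override { connected = true; }
   void OnDataReceived(const std::string &data) override { lastData = data; }
};
```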
The latest release of The Server Framework is now available. This release includes the following changes.
The following changes were made to the libraries.
Admin Library - 5.2.3
Some whitespace changes to remove incorrect tabs.
Added a check for _WIN32_WINNT >= 0x0600 and JETBYTE_PLATFORM_SDK_VERSION < 0x060A as targeting Vista or later means we enable stuff that’s only available in the v6.0a Platform SDK.
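The check is just a compile-time guard; a sketch of the shape (the exact error message is illustrative):

```cpp
// If we're targeting Vista or later then we need the v6.0a Platform SDK
// or newer, because some of the features we enable only exist there.
#if (_WIN32_WINNT >= 0x0600) && (JETBYTE_PLATFORM_SDK_VERSION < 0x060A)
#error Targeting Windows Vista or later requires the v6.0a Platform SDK or later
#endif
```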
Added the \ref AdminFAQ “Frequently asked project related questions” page.
I started to document part of The Server Framework’s behaviour as I expect that a client will be asking questions about it in the near future. Whilst writing the documents I found myself writing this:
“It’s pretty easy to deadlock The Server Framework in complex servers if you don’t abide by the rules. Unfortunately, the rules weren’t documented until now and, although I knew them, it’s probably more accurate to say “I knew of them”.”
There was a change in release 5.2.1 of The Server Framework which has caused some issues with clean shutdown. This issue is also present in 5.2.2.
Prior to 5.2.1 the CSocketServer or equivalent object that did the bulk of the work with regards to connections could be destroyed whilst there were sockets that it managed still in existence. This wasn’t usually a problem but it meant that it was possible for a socket to make a callback to code that didn’t exist anymore, which is a Bad Thing.
Well, of course, the day after I released the 5.2.2 version of The Server Framework I got a bug report from a client using 5.2.1 listing a couple of memory leaks. One of them was fixed in 5.2.2 and one was not.
The leak that survived 5.2.2 is in CReadSequencingStreamSocketConnectionFilter::FilterSocketReleased() which currently looks like:
void CReadSequencingStreamSocketConnectionFilter::FilterSocketReleased(
   IIndexedOpaqueUserData &userData)
{
   delete userData.GetUserPointer(m_userDataIndex);
}

and which should look like:

void CReadSequencingStreamSocketConnectionFilter::FilterSocketReleased(
   IIndexedOpaqueUserData &userData)
{
   CInOrderBufferList *pBuffers = reinterpret_cast<CInOrderBufferList *>(
      userData.GetUserPointer(m_userDataIndex));

   delete pBuffers;
}

The original code deletes a void pointer, so the CInOrderBufferList destructor is never run and anything the list owns is leaked; the fix is to cast back to the real type before deleting.
The latest release of The Server Framework is now available. This release includes the following changes.
This is the first release built using Compuware BoundsChecker and there have been some resource leaks fixed. The following changes were made to the libraries.
Admin Library - 5.2.2
Added JETBYTE_MINIMUM_SUPPORTED_WINDOWS_VERSION and JETBYTE_MINIMUM_SUPPORTED_NTDDI_VERSION to Admin.h. These are currently set to _WIN32_WINNT_WIN2K and NTDDI_WIN2K.
Added JETBYTE_PLATFORM_SDK_VERSION which you can use to tell the libraries which version of the Platform SDK you’re using.
I’ve finished the write completion driven outbound data flow control connection filter that I started work on a while ago. This provides a way to deal with the problem of having more data to send to a client than the client can receive within a reasonable time. Rather than simply continuing to send and building up a massive amount of buffered data in the TCP/IP stack, the connection filter keeps track of write completions and begins to buffer data for you when there are ‘too many’ write operations outstanding.
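The core of the idea can be sketched with a counter of outstanding writes and a buffer queue; the names and the threshold handling here are my assumptions, not the filter's actual implementation:

```cpp
#include <cstddef>
#include <deque>
#include <string>

// Sketch of write-completion driven flow control. Writes are issued
// directly until maxPendingWrites are outstanding; beyond that, data is
// buffered and drained as completions come in.
class OutboundFlowControl
{
public:
   explicit OutboundFlowControl(size_t maxPendingWrites)
      : m_maxPendingWrites(maxPendingWrites), m_pendingWrites(0) {}

   // Returns true if the data can be written now; false if it was
   // buffered because too many writes are already outstanding.
   bool Write(const std::string &data)
   {
      if (m_pendingWrites < m_maxPendingWrites)
      {
         ++m_pendingWrites;    // the write would be issued to the socket here

         return true;
      }

      m_bufferedData.push_back(data);

      return false;
   }

   // Called when a write completes; issues the next buffered write, if any.
   void OnWriteCompleted()
   {
      --m_pendingWrites;

      if (!m_bufferedData.empty())
      {
         m_bufferedData.pop_front();   // issue this buffered data as a write

         ++m_pendingWrites;
      }
   }

   size_t BufferedCount() const { return m_bufferedData.size(); }
   size_t PendingWrites() const { return m_pendingWrites; }

private:
   const size_t m_maxPendingWrites;
   size_t m_pendingWrites;
   std::deque<std::string> m_bufferedData;
};
```

The effect is that the amount of unacknowledged outbound data is bounded by the number of permitted outstanding writes rather than growing without limit inside the TCP/IP stack.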
As I said in my recent posting about Data Distribution Servers, “Next on the list is writing a more focused server and clients.”. Tick.
I started out by writing the data feed. This was a simplified version of the echo server test harness that I’d extended to use a controllable TCP receive window. The data feed is just a client that generates random data in packets with a simple header (length, sequence number and type) and sends them to the server that it’s connected to.
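Assuming the header layout described above (length, sequence number, type; the exact field widths are my guess), framing a packet might look like this:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// A guess at the simple packet header described above: total length,
// sequence number and a type field, followed by the payload.
struct PacketHeader
{
   uint32_t length;          // total packet length, header included
   uint32_t sequenceNumber;  // incremented per packet so the server can
                             // detect gaps or reordering
   uint16_t type;            // message type
};

// Frames a payload into a byte buffer ready to send. A real
// implementation would also fix the byte order of the header fields.
std::vector<uint8_t> FramePacket(
   uint32_t sequenceNumber,
   uint16_t type,
   const std::vector<uint8_t> &payload)
{
   PacketHeader header;

   header.length = static_cast<uint32_t>(sizeof(header) + payload.size());
   header.sequenceNumber = sequenceNumber;
   header.type = type;

   std::vector<uint8_t> packet(header.length);

   std::memcpy(packet.data(), &header, sizeof(header));

   if (!payload.empty())
   {
      std::memcpy(packet.data() + sizeof(header),
                  payload.data(),
                  payload.size());
   }

   return packet;
}
```

Because the length field comes first, the server can read a fixed-size header, learn how much more to read, and so reassemble complete packets from the TCP byte stream.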