Socket Servers

Tracking invocations

The simple client/server request/response protocol that I’m currently harvesting uses the concept of an ‘invocation’ to tie a request to a response. An id is generated on the client and placed in the request header. The server simply copies the id from the request header to the response header, and the client can then match responses to requests. This works nicely, but the implementation has evolved with the protocol. The first version used a 4 byte invocation id: it allocated an instance of an invocation data class and used the allocation address of the object as the invocation id.
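As a rough illustration of the idea, here is a minimal sketch of tracking invocations with a generated id and a map from id to per-request state. The class and member names are hypothetical, not the framework's API; storing the object's address as the id (as the first version did) avoids the lookup, but ties the id width to the pointer size and trusts the peer to echo a valid pointer back.

```cpp
#include <cstdint>
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical per-request state; in practice this would hold whatever is
// needed to complete the call when the response arrives.
struct InvocationData
{
    std::string request;
};

class InvocationTracker
{
public:
    // Allocate an id, remember the invocation, and return the id to place
    // in the request header.
    std::uint32_t Add(std::unique_ptr<InvocationData> data)
    {
        const std::uint32_t id = m_nextId++;
        m_pending[id] = std::move(data);
        return id;
    }

    // Called when a response arrives; the server copied the id from the
    // request header into the response header.
    std::unique_ptr<InvocationData> Remove(std::uint32_t id)
    {
        auto it = m_pending.find(id);
        if (it == m_pending.end()) return nullptr;
        auto data = std::move(it->second);
        m_pending.erase(it);
        return data;
    }

private:
    std::uint32_t m_nextId = 1;
    std::unordered_map<std::uint32_t, std::unique_ptr<InvocationData>> m_pending;
};
```

A real implementation would also need locking if requests and responses are handled on different threads, and a policy for ids that never receive a response.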

Custom client/server request/response protocols

Quite often my customers use The Server Framework for both ends of their communication channels. The Server Framework has fully supported developing clients as well as servers for a long time now and, although many of my customers build either servers or clients with the framework, some build both. One of the things that often comes up in discussions with these customers is how to develop a custom request/response protocol for their servers and clients to use when communicating.
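A common starting point for such a custom protocol is a length-prefixed frame: a fixed-size length header followed by the message body, so the reader always knows how many bytes make up a complete message. The sketch below is a portable illustration of that idea only, not The Server Framework's API.

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Frame a message body with a 4 byte big-endian length prefix.
std::vector<std::uint8_t> Frame(const std::string &body)
{
    const std::uint32_t len = static_cast<std::uint32_t>(body.size());
    std::vector<std::uint8_t> out;
    out.push_back(static_cast<std::uint8_t>(len >> 24));
    out.push_back(static_cast<std::uint8_t>(len >> 16));
    out.push_back(static_cast<std::uint8_t>(len >> 8));
    out.push_back(static_cast<std::uint8_t>(len));
    out.insert(out.end(), body.begin(), body.end());
    return out;
}

// Return a complete message if the accumulated read buffer holds one,
// consuming its bytes; otherwise return nothing and wait for more data.
std::optional<std::string> TryExtract(std::vector<std::uint8_t> &buffer)
{
    if (buffer.size() < 4) return std::nullopt;
    const std::uint32_t len =
        (std::uint32_t(buffer[0]) << 24) | (std::uint32_t(buffer[1]) << 16) |
        (std::uint32_t(buffer[2]) << 8)  |  std::uint32_t(buffer[3]);
    if (buffer.size() < 4u + len) return std::nullopt;
    std::string body(buffer.begin() + 4, buffer.begin() + 4 + len);
    buffer.erase(buffer.begin(), buffer.begin() + 4 + len);
    return body;
}
```

With framing in place, a request/response protocol usually adds a small header inside the frame (message type, invocation id, and so on) before the payload.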

SSPI Negotiation; NTLM and Kerberos clients and servers

I’ve been working on a library that works in a similar way to our SChannel code and allows the use of the Microsoft “Negotiate” Security Support Provider Interface (SSPI) to provide NTLM and Kerberos support (see SPNEGO for more details). Things are going well and, in general, using Negotiate, NTLM or Kerberos is easier than using SChannel; the structure that was originally built to work with OpenSSL and then adapted for SChannel works well with the new security providers.

Where's the catch(...)

As of the next release of The Server Framework, the use of catch(...) handlers at thread boundaries, in ‘no throw’ functions and in destructors will be configurable. At present, in v6.0 and earlier of The Server Framework, we use catch(...) judiciously at thread boundaries in functions that are either marked as ‘no throw’ (with throw()) or which are ‘implicitly no throw’, and in some destructors. Generally these handlers let the user of the framework know about the unexpected exception via a callback.

Allocating page aligned buffers

Back in October 2007 I briefly looked at what seemed, at the time, to be a simple change to The Server Framework so that you had the option to use buffers that were aligned to page boundaries. This could help you scale better, as one of the two problems with scalability at the time was the ‘I/O page lock limit’; there’s a finite limit to the number of memory pages that can be locked at one time, and in some circumstances data in transit via sockets is locked in memory whilst it is sent.
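The reason alignment matters here is that pages are locked whole: a small buffer straddling a page boundary pins two pages, whereas a page-aligned buffer pins the minimum. On Windows, VirtualAlloc naturally returns page-aligned regions; the portable sketch below uses std::aligned_alloc (C++17) instead and assumes 4096 byte pages, which is an illustration rather than the framework's allocator.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// Assumed page size for illustration; in real code query it from the OS
// (GetSystemInfo on Windows, sysconf(_SC_PAGESIZE) on POSIX).
constexpr std::size_t kPageSize = 4096;

void *AllocatePageAlignedBuffer(std::size_t size)
{
    // std::aligned_alloc requires the size to be a multiple of the
    // alignment, so round the request up to a whole number of pages.
    const std::size_t rounded = ((size + kPageSize - 1) / kPageSize) * kPageSize;
    return std::aligned_alloc(kPageSize, rounded);
}
```

A buffer from this sketch is freed with std::free(); an allocator built on VirtualAlloc would pair with VirtualFree instead.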

Structured exception translation is now optional

I’ve had a couple of requests from clients recently that they be able to handle any structured exceptions in The Server Framework themselves. Up until now, all framework threads install an SEH translator and catch the resulting C++ exceptions in their outer catch handlers, reporting the error to the framework user via various mechanisms. This generally works well and prevents exceptions from going unreported, but sometimes users want to integrate the framework with code that deals with uncaught structured exceptions in other ways.

New Windows Services library

I’m currently working on a new version of the Windows Services library that ships as part of the licensed I/O Completion Port Server Framework. The Services library allows you to turn your server into a Windows Service very easily and also allows you to run it as a normal executable inside the debugger, etc. It integrates nicely with our Performance Monitoring library for exposing perfmon counters and comes with several example servers that show you how to use it (see here and here).

Race condition during service shutdown

There’s a race condition in the service shutdown code which is most likely to show up if there’s an exception thrown from your implementation of ContinueService(), PauseService() or StopService() but that could show up during any service shutdown sequence. This race condition is present in all versions of the Service Library and so far has only been reported by one client. A fix is available, please contact me directly if you need it, or think you need it.

Bug fix in performance counter instance activation code

There’s a bug in all releases of our performance counter library: creating an instance whose name was previously used by an instance that has since been released can fail, because the new instance is wrongly connected to the previously released instance’s data structure. The bug is in PerformanceDataBlock.cpp; the else if around line 167 in AllocateObjectInstance() should be changed from: if (pInstance->NameLength == 0 && !

Everything you need to know about timers and periodic scheduling in the server framework

Often when you’re writing a server or client application with The Server Framework you will find yourself needing to perform operations periodically or after a timeout. The Server Framework provides a light-weight timer queue that can be used to schedule timers and that uses minimal system resources, so there’s no problem with having multiple timers per connection, even on systems with 10,000 connections. The class used to manage per connection timers is the CThreadedCallbackTimerQueue class.
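The core idea behind such a queue can be sketched as follows. This is a deliberately simplified, single-threaded illustration of the concept, not CThreadedCallbackTimerQueue's actual interface: timers are kept ordered by expiry time so that thousands of them consume no per-timer system resources, and a dispatch thread (not shown) only needs to wait until the earliest expiry.

```cpp
#include <cstdint>
#include <functional>
#include <map>

// Hypothetical timer queue sketch: names and shapes are illustrative.
class TimerQueue
{
public:
    using Handle = std::uint64_t;
    using Callback = std::function<void()>;

    // Schedule a callback for an absolute expiry time in milliseconds.
    Handle SetTimer(std::uint64_t expiryMs, Callback callback)
    {
        const Handle h = m_nextHandle++;
        m_timers.emplace(expiryMs, Timer{h, std::move(callback)});
        return h;
    }

    // Fire every timer whose expiry time has been reached.
    void HandleTimeouts(std::uint64_t nowMs)
    {
        auto it = m_timers.begin();
        while (it != m_timers.end() && it->first <= nowMs)
        {
            it->second.callback();
            it = m_timers.erase(it);
        }
    }

    // How long the dispatch thread should wait for the next timeout.
    std::uint64_t NextTimeoutMs(std::uint64_t nowMs) const
    {
        if (m_timers.empty()) return UINT64_MAX;  // nothing pending: wait forever
        const std::uint64_t next = m_timers.begin()->first;
        return next > nowMs ? next - nowMs : 0;
    }

private:
    struct Timer { Handle handle; Callback callback; };
    std::uint64_t m_nextHandle = 1;
    std::multimap<std::uint64_t, Timer> m_timers;  // ordered by expiry
};
```

The real framework class adds thread safety, timer cancellation and reset, and its own time source; the multimap here simply makes the "always know the earliest expiry" property concrete.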