Every now and then I come across a situation where encapsulation has been taken slightly too far. Usually, or at least most recently, these over-encapsulated designs have had problems because they’ve blocked access to an event handle. It’s great to wrap up a bunch of Win32 primitives into a nice coherent class, but if you expose a method that allows the user of the class to wait for something to happen then it’s probably also a good idea to expose a handle they can wait on as well. Failure to do so imposes unnecessary restrictions on how the user of the class can wait for completion.
Let’s assume that we have a very simple shared memory buffer that we use to transfer information between processes, and that the class which wraps it up includes an event that’s set when a message is available. The class might expose a single ReadMessage() function that blocks until a message is available. That would probably be seen as limiting, so perhaps a timeout would be added, with the function now indicating whether a message was available or the wait timed out. Alternatively you might split the functionality so that you have two functions: a blocking ReadMessage() and a WaitForMessage() that takes a timeout… Though these designs may at first appear fine, they all exhibit the same problem; it’s not possible to multiplex the wait on this class with waits on other things.
Say we want to spin the message handling up in its own thread and wrap that thread in another object. I have a common pattern for this kind of “threaded object”; the object contains an event that it sets to indicate the thread should exit. In the object’s destructor we can set the “shutdown” event and then wait on the thread handle to become signalled and we’re quite safe from destroying the object out from under the thread that’s using it. Normal usage for such an object is to create it on one thread and for it to spin up the worker thread in its constructor and then shut down the worker thread in its destructor. The main loop of the thread might look like this:
HANDLE handlesToWaitFor[1];

handlesToWaitFor[0] = m_shutdownEvent.GetEvent();

bool done = false;

while (!done)
{
   DWORD result = ::WaitForMultipleObjects(1, handlesToWaitFor, FALSE, INFINITE);

   if (result == WAIT_OBJECT_0)
   {
      done = true;      // Shutdown event has become signalled
   }

   // handle errors from WaitForMultipleObjects...
}
Of course the array of handles to wait for can be any size you like, and you can add additional branches to the if statement to deal with the real work of the thread. In a situation like this, classes that fully encapsulate their handles are hard to use… If they expose a handle to wait on then you can simply add that handle to your array of handles and you’re done; you wait for the class and for shutdown and do whatever needs doing when a handle becomes signalled. Likewise you can multiplex waiting for the class with waiting for other handles…
To this end I now tend to breach encapsulation a little by exposing a handle from these kinds of classes. It’s not an ideal design because a user might choose to use the handle for something other than a wait, but it’s better than making it impossible for the user to multiplex their waits. I expect a “better” design might have these objects return an object that can be waited on, or that can be added to an object that waits on multiple objects, and that eventually we’d get down to some code that somehow got at the underlying handle. But just exposing the handle works OK most of the time…
Today’s rant is brought to you by the API WaitForDebugEvent() and the number 0 (which is the number of handles it exposes for you to wait on!). To multiplex WaitForDebugEvent() you have to poll it with a small timeout value, which is unfortunate. However, reading back through some very old information about writing Win32 debuggers on MSDN I found this in the “Using Event Objects for Communicating Between Debugger Threads” section: “In Win32, threads cannot share handles to event objects, so the debug thread must open its own array of handles to access the same event objects.”… That information is clearly outdated given the current documentation for CreateEvent(), which clearly states that “Any thread of the calling process can specify the event-object handle in a call to one of the wait functions.” Originally I thought that perhaps it was the thread affinity of the old event objects that affected the design (sounds like the kind of thing Raymond Chen might know the answer to). But then I realised that WaitForDebugEvent() has thread affinity anyway (it can only be called from the thread that called CreateProcess() on the process being debugged), so it’s unlikely that this is the reason…