Shared locks and unique locks

In one of the original articles that introduced The Free Framework I mentioned that the original design was possibly lacking in that it required a critical section per connection and that this could be resource intensive. Shortly afterwards I changed what was to become The Server Framework to use a lock factory which handed out instances of critical sections based on a hashing algorithm applied to a key that the caller supplied. This allows a known set of critical sections to be shared (evenly, we hope) across all connections. It means that we know in advance how many critical sections we’re going to use in the server, which can be useful if controlling resource usage per connection is important to us. Of course, the downside is that unrelated connections now share locks, which can cause unexpected contention between code that has no reason to contend; operations on connection A may contend with connection B merely because the two connections happen to share locks. Back in 2002 this kind of resource management was, possibly, more important than it is now.
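The hash-onto-a-fixed-pool scheme can be sketched roughly as follows; this is a minimal illustration using standard library mutexes and invented names, not the framework's actual classes:

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// A minimal sketch of a hash-based shared lock factory; the class and
// method names here are hypothetical.
class SharedLockFactory
{
public:
    // A fixed number of locks is created up front, so the server's lock
    // resource usage is known in advance regardless of connection count.
    explicit SharedLockFactory(size_t numLocks)
        : m_locks(numLocks)
    {
    }

    // Callers supply a key (a connection id, an address, etc.) and the
    // factory hashes it onto one of the fixed set of locks. Unrelated
    // keys can map to the same lock, which is the source of the
    // unexpected contention described above.
    std::mutex &GetLock(size_t key)
    {
        return m_locks[key % m_locks.size()];
    }

private:
    std::vector<std::mutex> m_locks;
};
```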

Unfortunately the original lock factory design made it impossible to provide a lock factory that handed out unique locks. The factory only provided a GetCriticalSection() call, with no corresponding ReleaseCriticalSection() call to return a lock to the factory once it was no longer required. Because of this it was impossible to write a factory that simply allocated a new lock dynamically whenever it was asked, as it would leak locks once they were no longer required. So although the lock factory was used via an interface, it was impossible to change the locking policy in a server without making code changes; and, due to how the lock factory was used by the socket allocator, the changes required would have been considerable. This means that you’re forced to use some form of shared locking in your servers even when you’re dealing with a small number of connections, or when you know that the resources used by the locks are not going to be an issue for you.

I’m in the process of testing the first changes for v6.0 of The Server Framework and these include changes in how a socket’s locking policy is configured. Lock factories now require that you release your shared locks back to them when you’re done; this doesn’t do anything for our shared lock factory but it allows us to write a unique lock factory that doesn’t leak. Since managing a resource that you MUST release is error prone, there’s a CSmartSharedCriticalSection object that does the hard work for you. The need for a shared critical section to be released back to the factory means that someone needs to keep track of the factory that it was obtained from. It seems sensible that the shared lock itself does this housekeeping, and so our lock factories now return ISharedCriticalSection rather than ICriticalSection; not only does this provide the Release() method but it means that the implementation can include a reference to the factory that allocated it.
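The shape of this might look something like the sketch below; only the names ISharedCriticalSection and CSmartSharedCriticalSection come from the framework, everything else (the member functions, the implementation) is illustrative:

```cpp
// Hypothetical sketch: a releasable lock interface and an RAII helper
// that guarantees the release happens. Not the framework's actual code.
class ISharedCriticalSection
{
public:
    virtual void Enter() = 0;
    virtual void Leave() = 0;

    // Returns the lock to the factory that allocated it; the
    // implementation can hold a reference to that factory.
    virtual void Release() = 0;

protected:
    ~ISharedCriticalSection() {}
};

// Releases the lock back to its factory when it goes out of scope,
// so forgetting to call Release() is no longer possible.
class CSmartSharedCriticalSection
{
public:
    explicit CSmartSharedCriticalSection(ISharedCriticalSection &lock)
        : m_lock(lock)
    {
    }

    ~CSmartSharedCriticalSection()
    {
        m_lock.Release();
    }

    void Enter() { m_lock.Enter(); }
    void Leave() { m_lock.Leave(); }

private:
    ISharedCriticalSection &m_lock;

    // Non-copyable; two owners would release the lock twice.
    CSmartSharedCriticalSection(const CSmartSharedCriticalSection &);
    CSmartSharedCriticalSection &operator=(const CSmartSharedCriticalSection &);
};
```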

Including a reference to the allocating factory in every lock increases the size of the lock object. This is unfortunate and not a price that every user may wish to pay. To that end there are now some new socket types; some which use shared locks and some which use critical sections directly. You decide which socket type you want to use by either supplying a lock factory to your socket allocator (you use “shared locks” supplied by the factory) or not (you use unsharable critical sections). This means that there are now three locking policies supported by the framework:

  • Shared locks - by passing an instance of the CSharedCriticalSectionFactory to your socket allocator’s constructor you get the same behaviour as the pre v6.0 framework (but at present the locks take up slightly more memory, this may be changed before release of v6.0). A fixed pool of locks is shared across all connections.

  • Unique locks from a factory - by passing an instance of the CUniqueCriticalSectionFactory to your socket allocator’s constructor you get a unique lock per connection. The lock object is the same size as the one supplied by the shared lock factory. The lock factory may pool locks for reuse but each lock will only be used by one connection at a time.

  • Unique locks - by opting not to pass a lock factory to your socket allocator’s constructor the allocator will use the new CUniqueLockXXXSocket classes which contain their own critical sections. Each connection uses a unique lock; the locks are not pooled or reused and they take up the minimum space.

Another change to the lock factories concerns the potential for unexpected lock inversions if you obtain multiple locks from a factory for a single connection and then hold them at the same time. Say your connection needs two locks, A and B. If you obtain both of these from the same lock factory then your code has the potential to deadlock due to lock inversions even if you ensure that you always acquire your locks in the same order. The reason for this is that the locks come from a single pool, and another connection may have been given your lock A as its lock B. If it has also been given your lock B as its lock A then, even though both connections lock their A before their B, they take the same two locks in opposite orders and you have the potential for a lock inversion deadlock. The recommended way to avoid this in the existing framework is to always use a distinct lock factory for each type of lock that will be used. This means that in the example above you would have a lock factory for locks of type A and a lock factory for locks of type B. Even though the locks are shared across connections as before, you can guarantee that a connection cannot have a B lock from the A lock pool and so all lock inversions are of your own making.
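The inversion is easy to demonstrate with the hash-onto-a-pool scheme; in this hypothetical sketch, connection 1's (A, B) keys and connection 2's (A, B) keys map onto the same two locks in opposite roles, while two separate pools make that crossover impossible:

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// Hypothetical fixed pool of locks, keyed by hashing as described earlier.
class LockPool
{
public:
    explicit LockPool(size_t n) : m_locks(n) {}

    std::mutex &Get(size_t key) { return m_locks[key % m_locks.size()]; }

private:
    std::vector<std::mutex> m_locks;
};
```

With a single pool of two locks, connection 1 using keys (0, 1) for its (A, B) locks and connection 2 using keys (1, 0) each lock "A then B", yet they acquire the same two mutexes in opposite orders. With a pool per lock type, every connection's A lock comes from the A pool and its B lock from the B pool, so locking A-pool locks before B-pool locks gives a consistent global order.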

The new lock factory interface makes this behaviour slightly easier to achieve by adding the concept of a ‘lock pool’ to the factory. You request pool Ids when you create your factory, before you allocate any locks, and you can then use these pool Ids to access distinct pools of locks within the factory. This removes the need to use multiple factories.
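A purely hypothetical sketch of how pool Ids within a single factory might look; the actual v6.0 interface may well differ:

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// One factory, several distinct pools of locks inside it; all names
// and signatures here are assumptions for illustration.
class PooledLockFactory
{
public:
    typedef size_t PoolId;

    // Request each pool up front, before any locks are handed out.
    PoolId CreatePool(size_t locksInPool)
    {
        m_pools.push_back(new Pool(locksInPool));
        return m_pools.size() - 1;
    }

    // Locks for different pool Ids never overlap, so an "A" lock can
    // never also be someone else's "B" lock.
    std::mutex &Get(PoolId pool, size_t key)
    {
        return m_pools[pool]->Get(key);
    }

    ~PooledLockFactory()
    {
        for (size_t i = 0; i < m_pools.size(); ++i)
        {
            delete m_pools[i];
        }
    }

private:
    struct Pool
    {
        explicit Pool(size_t n) : locks(n) {}
        std::mutex &Get(size_t key) { return locks[key % locks.size()]; }
        std::vector<std::mutex> locks;
    };

    std::vector<Pool *> m_pools;
};
```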