This week I integrated the new data provider with the rest of the client’s existing code. The integration was pretty easy as the existing code deals with the data provider via a single method on a COM interface. All that was needed was to adjust the calling code to use the COM object rather than the local implementation and everything just worked. The good news was that the call to get the data was now 100 times faster than it was with the old code; the bad news was that processing that data took almost three times as long. The end result was that the whole process was slower rather than faster…
The wonders of COM and the construction of the data object we were using meant that each call to access the data now had to cross a process boundary. This wasn’t what we wanted, but the object didn’t support marshal-by-value semantics, so we needed to marshal it across the process boundary ourselves.
Luckily the object supports IPersistStream, so our simple data access COM interface grew another method, StreamData(), which took an IStream and wrote the object to the stream using OleSaveToStream() rather than just returning an interface to it. Back in the calling code we took the stream and passed it to OleLoadFromStream() to create the data object in process from the serialised contents. This worked nicely but it was considerably slower than I expected. The new component was now about twice as fast as the old code but, as the new component acts as a cache for these data objects, all of the time was being spent in the serialisation.
After a few moments it became obvious that I’d simply reversed the problem; the serialisation consisted of many hundreds of calls to write small amounts of data to the IStream, and each call crossed a process boundary. In the words of Homer Simpson, doh!
The solution was simple. Inside our implementation of StreamData() I created my own stream, streamed the data object to the in-process stream and then used CopyTo() to write the contents of the local stream to the remote stream in one hit. Obvious when you think about it, and probably a mistake I wouldn’t have made back when COM was my bread and butter work…
The revised marshalling code was much faster. The new component now provides the data in a tenth of the time that the original code did, the data processing takes the same amount of time and the whole process now runs in half the time that it did before. Not quite as nice a win as the 100 times speed up that we had, but still more than adequate.
The lesson from this is that often you need to understand what’s going on at the levels below the abstraction that you’re working at so that you can diagnose unexpected problems. It’s great that COM lets me use an object that’s out of process without thinking about it but it’s not so great that I can use an object out of process without thinking about it…