One of my long-term clients has hundreds of cloud machines running instances of their server, and each server maintains thousands of reliable UDP connections using a custom protocol that we’ve developed over the years. When things go wrong it’s often hard to work out why. Even though we have reasonable unit test coverage of the code that runs the UDP protocol, it’s hard to build tests that cover every possible scenario.
One of the things that came out of my conversations with clients last night was an interest in hosting .Net Core from native code.
Of course we already host the CLR and provide an easy way to write servers that do the heavy lifting in native code and call out to managed code for the business logic. We have several clients using this to host managed “plugins” inside a native host and it works very well.
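For anyone who hasn’t seen it before, hosting the desktop CLR from native code is reasonably straightforward with the .NET 4 hosting API. Here’s a minimal sketch of the general shape of it; the assembly, type and method names are just placeholders, error handling is omitted, and this isn’t the actual plugin hosting code from The Server Framework.

```cpp
#include <windows.h>
#include <metahost.h>
#include <iostream>

#pragma comment(lib, "mscoree.lib")

int main()
{
    ICLRMetaHost *pMetaHost = nullptr;
    ICLRRuntimeInfo *pRuntimeInfo = nullptr;
    ICLRRuntimeHost *pHost = nullptr;

    // Locate the v4 runtime and obtain the hosting interface.
    ::CLRCreateInstance(CLSID_CLRMetaHost, IID_PPV_ARGS(&pMetaHost));
    pMetaHost->GetRuntime(L"v4.0.30319", IID_PPV_ARGS(&pRuntimeInfo));
    pRuntimeInfo->GetInterface(CLSID_CLRRuntimeHost, IID_PPV_ARGS(&pHost));

    // Start the CLR and call into managed code; the managed method must have
    // the signature 'static int Run(string arg)' on the named type.
    pHost->Start();

    DWORD returnValue = 0;
    HRESULT hr = pHost->ExecuteInDefaultAppDomain(
        L"MyPlugin.dll",            // hypothetical managed plugin assembly
        L"MyPlugin.Entry",          // hypothetical type name
        L"Run",                     // hypothetical entry point
        L"some-configuration",      // argument passed to the managed method
        &returnValue);

    std::cout << "Managed code returned: " << returnValue << std::endl;

    // The CLR can't be restarted once stopped, so this is a one-shot host.
    pHost->Stop();

    pHost->Release();
    pRuntimeInfo->Release();
    pMetaHost->Release();

    return SUCCEEDED(hr) ? 0 : 1;
}
```

ExecuteInDefaultAppDomain() only supports that single string-in, int-out signature, which is fine for bootstrapping a plugin that then sets up richer communication back to the native host.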
Performance is always important for users of The Server Framework and I often spend time profiling the code and thinking about ways to improve performance. Hardware has changed considerably since I first designed The Server Framework back in 2001 and some things that worked well enough back then are now serious impediments to further performance gains. That’s not to say that the performance of The Server Framework today is bad; it’s not. It’s just that in some situations, and on some hardware, it could be even better.
I’ve built a small Windows Service which exposes perfmon counters to track sockets in TIME_WAIT state. It can be downloaded from the links later in this post.
Back in 2011 I was helping a client look for issues in their systems caused by having too many sockets in a TIME_WAIT state (see here for why this can be a problem). This was affecting their connectivity. Rather surprisingly, there seemed to be no way to track the number of sockets in TIME_WAIT using perfmon, as no counter is exposed for it.
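I’m not going to cover the perfmon plumbing here, but the underlying data is easy enough to get at: you can walk the TCP connection table with GetTcpTable() and count the rows that are in TIME_WAIT. Something along these lines (a simplified sketch rather than the service’s actual code; IPv4 only, minimal error handling):

```cpp
#include <winsock2.h>
#include <iphlpapi.h>
#include <vector>
#include <iostream>

#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

// Count IPv4 TCP connections currently in TIME_WAIT by walking the TCP table.
static DWORD CountTimeWaitSockets()
{
    ULONG size = 0;
    ::GetTcpTable(nullptr, &size, FALSE);        // first call obtains the required buffer size

    std::vector<unsigned char> buffer(size);
    PMIB_TCPTABLE pTable = reinterpret_cast<PMIB_TCPTABLE>(buffer.data());

    if (::GetTcpTable(pTable, &size, FALSE) != NO_ERROR)
        return 0;

    DWORD count = 0;
    for (DWORD i = 0; i < pTable->dwNumEntries; ++i)
    {
        if (pTable->table[i].dwState == MIB_TCP_STATE_TIME_WAIT)
            ++count;
    }
    return count;
}

int main()
{
    std::cout << "Sockets in TIME_WAIT: " << CountTimeWaitSockets() << std::endl;
}
```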
Back in January 2010 I discovered that if FILE_SKIP_COMPLETION_PORT_ON_SUCCESS is enabled on a datagram socket, and a datagram arrives when a read is NOT currently pending, and that datagram is bigger than the buffer supplied to the next read operation, then no error is returned and the read never completes. This was confirmed as a Windows bug and I’m pleased to see that it’s been fixed in Windows 8 and Server 2012.
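For reference, the mode in question is enabled with SetFileCompletionNotificationModes(); once set, overlapped operations that complete synchronously don’t queue a completion packet to the associated IOCP. A bare-bones sketch of turning it on for a UDP socket (error handling omitted):

```cpp
#include <winsock2.h>
#include <windows.h>

#pragma comment(lib, "ws2_32.lib")

// Ask the system NOT to queue a completion packet to the IOCP when an
// overlapped operation on this handle completes synchronously.
static bool EnableSkipCompletionPortOnSuccess(SOCKET s)
{
    return ::SetFileCompletionNotificationModes(
        reinterpret_cast<HANDLE>(s),
        FILE_SKIP_COMPLETION_PORT_ON_SUCCESS) != FALSE;
}

int main()
{
    WSADATA wsaData;
    ::WSAStartup(MAKEWORD(2, 2), &wsaData);

    // A datagram socket that would normally be associated with an IOCP and
    // read from with overlapped WSARecvFrom() calls.
    SOCKET s = ::WSASocket(AF_INET, SOCK_DGRAM, IPPROTO_UDP,
                           nullptr, 0, WSA_FLAG_OVERLAPPED);

    EnableSkipCompletionPortOnSuccess(s);

    // With the mode enabled, on the affected Windows versions the bug
    // described above can bite: if a datagram larger than the buffer supplied
    // to the next read arrives while no read is pending, that read never
    // completes and no error is reported.

    ::closesocket(s);
    ::WSACleanup();
    return 0;
}
```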
I’ve just released new versions of my Lock Explorer tools, LID and LIA. This is quite a big release as it increases the number of locking APIs that the tools instrument from 1 to 3. We now track Slim Reader Writer locks and Mutexes.
Arguably the tools should always have tracked these, and possibly more API calls, but the tools have always existed first and foremost to assist in the development and testing of The Server Framework and, well, we only use Critical Sections.
I have clients asking me about this all the time. This article gives a pretty concise summary of the tools that you need to use to map an open port to the process that has it open.
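If you’d rather do the lookup programmatically than from the command line, GetExtendedTcpTable() with the TCP_TABLE_OWNER_PID_ALL table class gives you the port to process id mapping directly. A quick sketch (IPv4 and TCP only, error handling omitted; the port number is just an example):

```cpp
#include <winsock2.h>
#include <iphlpapi.h>
#include <vector>
#include <iostream>

#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

// Return the process id that owns the given local TCP port, or 0 if not found.
static DWORD FindProcessForLocalPort(unsigned short port)
{
    ULONG size = 0;
    ::GetExtendedTcpTable(nullptr, &size, FALSE, AF_INET, TCP_TABLE_OWNER_PID_ALL, 0);

    std::vector<unsigned char> buffer(size);
    PMIB_TCPTABLE_OWNER_PID pTable = reinterpret_cast<PMIB_TCPTABLE_OWNER_PID>(buffer.data());

    if (::GetExtendedTcpTable(pTable, &size, FALSE, AF_INET, TCP_TABLE_OWNER_PID_ALL, 0) != NO_ERROR)
        return 0;

    for (DWORD i = 0; i < pTable->dwNumEntries; ++i)
    {
        // dwLocalPort holds the port in network byte order in its low 16 bits.
        if (::ntohs(static_cast<unsigned short>(pTable->table[i].dwLocalPort)) == port)
            return pTable->table[i].dwOwningPid;
    }
    return 0;
}

int main()
{
    std::cout << "Port 80 is owned by process: " << FindProcessForLocalPort(80) << std::endl;
}
```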
I’m still working on my investigation of the Windows Registered I/O network extensions, RIO, which I started back in October when they became available with the Windows 8 Developer Preview. I’ve improved my test system a little since I started and now have a point to point 10 Gigabit network between my test machines, using two Intel 10 Gigabit AT2 cards wired back to back.
My test system isn’t symmetrical; that is, I have a much more powerful machine on one end of the link than on the other.
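As a reminder of what the API looks like: RIO’s functions aren’t exported directly from ws2_32.dll, you obtain a table of function pointers via WSAIoctl() on a socket created with WSA_FLAG_REGISTERED_IO. A bare-bones sketch of the setup (error handling omitted; this isn’t the test harness code itself):

```cpp
#include <winsock2.h>
#include <mswsock.h>

#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsaData;
    ::WSAStartup(MAKEWORD(2, 2), &wsaData);

    // RIO requires the socket to be created with WSA_FLAG_REGISTERED_IO.
    SOCKET s = ::WSASocket(AF_INET, SOCK_DGRAM, IPPROTO_UDP,
                           nullptr, 0, WSA_FLAG_REGISTERED_IO);

    // The RIO entry points are obtained as a table of function pointers.
    RIO_EXTENSION_FUNCTION_TABLE rio = { 0 };
    GUID functionTableId = WSAID_MULTIPLE_RIO;
    DWORD bytesReturned = 0;

    ::WSAIoctl(s,
               SIO_GET_MULTIPLE_EXTENSION_FUNCTION_POINTERS,
               &functionTableId, sizeof(functionTableId),
               &rio, sizeof(rio),
               &bytesReturned, nullptr, nullptr);

    // From here you'd register buffers and create completion and request
    // queues before issuing sends and receives; for example, a completion
    // queue in polling mode (no notification):
    RIO_CQ completionQueue = rio.RIOCreateCompletionQueue(1024, nullptr);

    rio.RIOCloseCompletionQueue(completionQueue);
    ::closesocket(s);
    ::WSACleanup();
    return 0;
}
```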
I know I’ve said this before, but now it’s really done…
The WebSocket protocol is now an official RFC. There are a small number of changes between RFC 6455 and the draft WebSocket protocol version 17, the only important ones being the addition of two new close status codes. The rest is just a case of tidying up the draft.
There will be a 6.5.3 release of The Server Framework to include these changes.
Before I started to look at RIO for inclusion in The Server Framework I did a quick check on the Microsoft BUILD site to see if there were any sessions that dealt with it specifically, but I didn’t find any. Once I’d published my blog post I did another check and found this video, which deals specifically with RIO. It gives some in-depth details of how RIO works and the kinds of performance improvements that Microsoft has seen in their labs.