I must admit that I didn’t really see how Azure could be of much use to anyone except really die-hard, bleeding-edge, Microsoft-only shops; that is, up until yesterday.
The new Azure, which you can read about here on Scott Guthrie’s blog, seems much more usable for general purpose cloud solutions: durable VMs, Linux VMs, easy migration to/from your own non-Azure VHDs, direct access to their new low-latency distributed cache from Memcached clients with no code changes necessary, lots of great new tooling, and a REST-based management API.
The best thing about Visual Studio 11 is that it doesn’t matter whether you like the new style IDE or not. The project files are, at last, backwards compatible, so you can load them in Visual Studio 2010 and build with the new tool chain even if you ignore the new IDE - if that’s what you want to do.
I don’t like the new icons, but I find I can work fine in the IDE as long as I don’t think about it too much… Probably pretty much like how I felt about all previous versions when they were at the beta stage…
I know I’ve said this before, but now it’s really done…
The WebSocket protocol is now an official RFC. There are a small number of changes between RFC 6455 and the draft WebSocket protocol version 17; the only important ones being the addition of two new close status codes. The rest is just a case of tidying up the draft.
There will be a 6.5.3 release of The Server Framework to include these changes.
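For reference, RFC 6455 kept the close frame layout from the later drafts: the frame’s payload starts with a two-byte status code in network byte order, optionally followed by a UTF-8 reason string. A minimal sketch of building that payload (the function name is mine, not The Server Framework’s):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Build the payload of a WebSocket close frame (RFC 6455, section 5.5.1):
// a two-byte status code in network byte order, optionally followed by a
// UTF-8 encoded reason. BuildClosePayload is an illustrative name only.
std::vector<uint8_t> BuildClosePayload(
   uint16_t statusCode,
   const std::string &reason)
{
   std::vector<uint8_t> payload;

   payload.push_back(static_cast<uint8_t>(statusCode >> 8));   // high byte first
   payload.push_back(static_cast<uint8_t>(statusCode & 0xFF)); // then low byte

   payload.insert(payload.end(), reason.begin(), reason.end());

   return payload;
}
```

So a normal closure (status 1000) with the reason “bye” produces a five byte payload: `0x03 0xE8` followed by the three reason bytes.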
I’m just back from a wonderfully relaxing holiday in Italy. This time we had internet connectivity all the time, so I’m up to date on email, etc. The first thing that I do after all the ‘we’re back, how’s the house, and you need to go to bed even though you’re excited to see all of the toys you’ve missed’ stuff is to fire up all my machines and make sure that they do their windows update stuff… Then the NAS devices need to be started and allowed to settle in, then the VPN needs to be checked, dyndns kicked, etc.
I’ve been working with the WebSocket protocol recently, updating the code that implements the protocol in The Server Framework to the latest version of the draft standard (the HyBi 09 version). Whilst this looks like it’s almost a real standard, there are still lots of potentially open issues as can be seen from the HyBi discussion mailing list.
It’s quite clear from some of the less cohesive parts of the draft spec (and more so from the mailing list) that the protocol is very much a design by committee effort.
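One of the less controversial parts of the HyBi drafts is frame masking: every client-to-server frame carries a 4-byte masking key and each payload byte is XORed with the corresponding key byte. The same operation both masks and unmasks, which makes it easy to sketch (the function name is mine):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// HyBi-style frame masking (kept unchanged into RFC 6455): each payload
// byte i is XORed with key[i % 4]. Applying the operation twice with the
// same key restores the original payload, so one function does both
// masking and unmasking. MaskPayload is an illustrative name only.
void MaskPayload(std::vector<uint8_t> &payload, const uint8_t (&key)[4])
{
   for (size_t i = 0; i < payload.size(); ++i)
   {
      payload[i] ^= key[i % 4];
   }
}
```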
I have a client who is possibly suffering from TIME_WAIT exhaustion and I thought that the best way to find out for sure was to get them to add the TIME_WAIT perfmon counter to their normal counter logs so that we could see how and when sockets in TIME_WAIT accumulate on the machine.
The problem is that there doesn’t seem to be a perfmon counter for this, which is unfortunate, especially since you can easily get the number of established and reset connections from the TCPv4 and TCPv6 performance objects.
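In the absence of a counter, one blunt workaround (a sketch of the idea, not production code; the helper name is mine) is to snapshot `netstat -an` output periodically and count the TIME_WAIT entries yourself:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Count the connections in TIME_WAIT given the lines of a `netstat -an`
// snapshot. On Windows the state column reads TIME_WAIT, so a simple
// substring search per line is enough for a rough trend over time; this
// is a workaround sketch, not a replacement for a real perfmon counter.
size_t CountTimeWait(const std::vector<std::string> &netstatLines)
{
   size_t count = 0;

   for (const auto &line : netstatLines)
   {
      if (line.find("TIME_WAIT") != std::string::npos)
      {
         ++count;
      }
   }

   return count;
}
```

Logging that count alongside the established/reset counters from the TCPv4 object would at least show when TIME_WAIT sockets accumulate, even if it’s not integrated with perfmon.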
After a week or so of serious dog fooding I’ve finally got a nice reliable lock inversion detector running as part of my build system for The Server Framework’s example servers.
Note: the deadlock detector mentioned in this blog post is now available for download from www.lockexplorer.com.
The build system has always run a set of ‘black box’ server tests for each server example as part of the build. These start up the example server in question, run a set of connections against it and shut the server down.
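The core idea behind this kind of lock inversion detection can be sketched quite simply (the names and structure below are illustrative, not the detector’s actual implementation): record each acquisition of lock B while lock A is held as an edge A → B in a directed graph; a cycle in that graph means two code paths take the same locks in opposite orders and could deadlock.

```cpp
#include <map>
#include <set>
#include <string>

// A minimal sketch of lock-inversion detection. Each time a lock is
// acquired while another is held we record an edge outer -> inner; if
// any lock can then be reached from itself, two paths acquire the same
// locks in conflicting orders - a potential deadlock.
class LockOrderGraph
{
public:
   // Record that 'inner' was acquired while 'outer' was held.
   void RecordAcquisition(const std::string &outer, const std::string &inner)
   {
      m_edges[outer].insert(inner);
   }

   // True if the recorded ordering contains a cycle, i.e. an inversion.
   bool HasInversion() const
   {
      for (const auto &node : m_edges)
      {
         std::set<std::string> visited;

         if (ReachesSelf(node.first, node.first, visited))
         {
            return true;
         }
      }

      return false;
   }

private:
   bool ReachesSelf(
      const std::string &start,
      const std::string &current,
      std::set<std::string> &visited) const
   {
      const auto it = m_edges.find(current);

      if (it == m_edges.end())
      {
         return false;
      }

      for (const auto &next : it->second)
      {
         if (next == start)
         {
            return true;
         }

         if (visited.insert(next).second &&
             ReachesSelf(start, next, visited))
         {
            return true;
         }
      }

      return false;
   }

   std::map<std::string, std::set<std::string>> m_edges;
};
```

The nice thing about this approach is that it flags a potential deadlock even if the timing never lines up during the test run; simply exercising both orderings is enough.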
My tangential testing that began with my problems with commands run via WinRs during some distributed load testing are slowly unravelling back to the start. I now have a better build and test system for the server examples that ship as part of The Server Framework. I have a test runner that runs the examples with memory limits to help spot memory leak bugs and a test runner that checks for lock inversions.
I’ve had one of those days. In fact I’ve had one of those days for a couple of days this week…
It started when I decided to improve the profiling that I was doing on a new memory allocator for The Server Framework by extending the perfmon- and WinRs-based distributed server testing that I wrote about a while back. This allows me to create a set of perfmon logs, start a server to collect data about and then start a remote client to stress the server.
I’m still considering my options with regards to intrusive containers to replace the STL maps that I’m using in my timer queue implementation. I think I may be able to use the boost::intrusive sets in place of a true map in most of the situations that I need (I just have to convince myself that a) it will work with all the compilers that I need to support, b) adding a dependency on part of Boost is a good idea for me and my clients, and c) even though my gut reaction on seeing the code is that it’s pointlessly clever and bound to be a bitch to debug, it’s probably better than rolling my own).
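For what it’s worth, the essence of an intrusive container is that the links live inside the stored object rather than in nodes the container allocates, so insertion allocates nothing and removal is O(1) given just a pointer to the element. A bare-bones sketch of the idea (my own, deliberately much simpler than boost::intrusive’s actual API):

```cpp
#include <cstddef>

// The link lives inside the element; no per-node allocation is needed
// and an element can be removed given only a pointer to it. This is the
// property that makes intrusive containers attractive for a timer queue.
struct IntrusiveLink
{
   IntrusiveLink *prev = nullptr;
   IntrusiveLink *next = nullptr;
};

struct Timer                        // the stored object embeds its own link
{
   explicit Timer(int when) : dueTime(when) {}

   int dueTime;
   IntrusiveLink link;
};

class TimerList                     // circular doubly-linked list of Timers
{
public:
   TimerList() { m_head.prev = m_head.next = &m_head; }

   void PushBack(Timer &timer)
   {
      IntrusiveLink &l = timer.link;

      l.prev = m_head.prev;
      l.next = &m_head;
      m_head.prev->next = &l;
      m_head.prev = &l;
   }

   static void Remove(Timer &timer)  // O(1), no lookup required
   {
      IntrusiveLink &l = timer.link;

      l.prev->next = l.next;
      l.next->prev = l.prev;
      l.prev = l.next = nullptr;
   }

   size_t Size() const
   {
      size_t n = 0;

      for (const IntrusiveLink *p = m_head.next; p != &m_head; p = p->next)
      {
         ++n;
      }

      return n;
   }

private:
   IntrusiveLink m_head;            // sentinel node
};
```

The trade-off, of course, is that lifetime management moves to the caller: the container doesn’t own the elements, which is exactly the kind of thing that makes this approach either elegant or a debugging nightmare depending on your point of view.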