Testing payback

I’m building a new data provider for one of my clients. It breaks a huge chunk of their existing codebase out into a new component that hides behind a simple COM interface and abstracts away lots of nasty stuff so that they don’t need to worry about it. The point of contact is small; a method or two on a single COM interface. This is good as it means we have a single, easy-to-test integration point. Beyond the interface I can do what I like; all I need to worry about is that the stuff I throw back through the keyhole is the same stuff they’d get from their existing ‘solution’. I started by writing some tests, so it’s not really that hard…
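To give a feel for how narrow that keyhole is, here’s a rough sketch. The names are invented and the real thing is a proper COM interface rather than a plain C++ one, but the shape is the same: one small abstract interface that the client codes against, with everything else hidden behind it.

```cpp
#include <string>

// A hypothetical stand-in for the real (COM) interface; the point is
// that the whole component is reachable through a method or two.
struct IDataProvider
{
    // The legacy system asks for its data through this one method; how
    // the data gets produced is entirely the new component's business.
    virtual std::string GetData(const std::string &request) = 0;

    virtual ~IDataProvider() = default;
};
```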

Before we broke this chunk of functionality out of the legacy system we put together some tests so that we could refactor the interface a little and merge in another legacy system… As we were building the tests we did the usual: slipped in some interfaces and wrote some mocks. Then we wrote some instrumented service providers (mocks that record the data that passes through them) so that we could build up a set of ‘canned’ data for our test runs. By saving the input data from every system we access, and the output data we produce, we can write tests that prove our changes haven’t changed the output for a given set of inputs - that’s a surprisingly valuable thing…
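An instrumented service provider is really just a decorator around the real one: it forwards each call and writes the request/response pair away for later. Something like this sketch, with all the names invented for illustration and the data simplified to single-line strings:

```cpp
#include <fstream>
#include <string>

struct IDataProvider
{
    virtual std::string GetData(const std::string &request) = 0;
    virtual ~IDataProvider() = default;
};

// Wraps the real provider, forwards each call and records the
// request/response pair; the saved pairs become the 'canned' data
// that the test runs replay later.
class RecordingDataProvider : public IDataProvider
{
public:
    RecordingDataProvider(IDataProvider &wrapped, const std::string &logFile)
        : m_wrapped(wrapped), m_log(logFile)
    {
    }

    std::string GetData(const std::string &request) override
    {
        const std::string response = m_wrapped.GetData(request);

        // Save the data that passes through us.
        m_log << request << '\n' << response << '\n';

        return response;
    }

private:
    IDataProvider &m_wrapped;
    std::ofstream m_log;
};
```

Because the recorder implements the same interface as the real provider, it can be slipped in anywhere the real one is used, and the rest of the system never knows it’s being watched.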

That was a while ago. Now we’re at the point where we can integrate the new service provider with the old system and, of course, we can use the old canned data as test data for our new system. Our new system bears no resemblance to the old code; it’s all new. Some of it was a rewrite (the architecture has been changed to protect our sanity) and some of it was, shall we say, ‘aggressive’ refactoring. The nice thing is that we can poke the old test data in one end of the system and check that the same results are produced; they weren’t to start with, but they are now.
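The check itself is simple; it’s golden-master (characterisation) testing by another name. A sketch, again with invented names and the trivially simple file format the recorder above wrote:

```cpp
#include <fstream>
#include <string>

struct IDataProvider
{
    virtual std::string GetData(const std::string &request) = 0;
    virtual ~IDataProvider() = default;
};

// Replay every canned request through the new provider and check that
// it produces exactly what the old system produced.
bool ReplayCannedData(IDataProvider &newProvider, const std::string &logFile)
{
    std::ifstream log(logFile);

    std::string request;
    std::string expected;

    // Each record is a request line followed by the response that the
    // old system produced for it.
    while (std::getline(log, request) && std::getline(log, expected))
    {
        if (newProvider.GetData(request) != expected)
        {
            return false;   // the new code has changed the output
        }
    }

    return true;
}
```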

What’s even better is that due to the way the new service provider has been implemented (just-in-time testing - JITT - rules), we have enough tests to support us and we can easily add more when we find bugs. Today all of that paid off nicely. I’d been working on the ‘aggressive refactoring’ part of the project and got that section to pass all the old tests. I then had to integrate that with the ‘brave new world’ section of the code. As is often the way, the interfaces from one part didn’t quite fit with the interfaces from the other part; it was probably a communication problem between the developers - I’d written one part two weeks ago and then spent time away from the code before writing the other part this week; I obviously don’t talk to myself enough. Anyway: square peg, round hole. Both pieces had test harnesses, so as I shaved bits off the peg to make it fit the hole I could see what I broke and fix things as I went. To start with I was a bit nervous about changing code that I hadn’t touched for a couple of weeks; once I started, the speed of the feedback I got when I made mistakes or broke contracts was empowering. The tests meant that I knew straight away when things stopped working, and that made me bold and helped me move faster and more confidently.

We now have a system that’s internally integrated and, at present, appears to be producing the same numbers as the old system. That’s a start. Now we need to do the real integration, but first, we’ll write a few tests for the legacy component that we’re replacing…