I agree 100% with this piece from Jimmy Nilsson’s blog.
I’ve seen a lot of blog postings recently that pour scorn on the ideas behind TDD. Ah well, if you don’t like it, don’t do it. I’m more than happy if our competitors decide that TDD isn’t for them. In fact, testing is bad, don’t do it, move along now, don’t read any of the testing articles here…
The jarring effect of the “real world” work we’re doing is made worse by a project we’re doing for another client at the moment.
Craig Andera talks about TDD. I couldn’t have said it better myself.
Way back in mid November, before we dumped our building’s managing agent for being worse than useless and possibly stealing from us, I was working on some POP3 code. I had some down time today so I decided to drop back into it and see if I could move things along a little.
In summary, having lots of tests helped…
I haven’t done any development on the POP3 code for 3 months.
We’re developing some code for a client. There’s a standalone server, which we’ve completed, and a small stub of code that allows the client’s existing system to talk to our new server in place of the old thing they used to talk to…
This morning I stubbed out the new stub and put together a test harness project. Unfortunately, due to the way the client’s code is coupled it could prove difficult to test the new stub…
Earlier I said “I’ll probably keep the code for now, but it’s not the code I’d have written first if I was working from a test. I wouldn’t need it yet… I may never need it in production…” I lied. It’s in CVS if I need it, so why keep it cluttering up the code when I don’t need it…
Way back in June I was playing around with OBEX. I’ve had a quiet day today and went back to the code to progress it a little more (a client is making interested noises so I need to get back up to speed again). The code I wrote in June was before I’d become test infected…
The first thing to do, of course, was to add a test harness and a mock object library to the OBEX library.
Barry suggests that to do meaningful performance tests you need to know a bit about the way the thing that’s under test operates.
I guess he has a point given his reason for performance testing was to compare a new version of the thing under test with an older version of the same thing…
Personally, I’d be tempted to leave the poorly performing tests in the test harness and add some comments about why these particular situations are unlikely to show up as real usage patterns and why the object under test performs poorly in them.
During the recent library adjustments the main aim was to add tests. As we write tests we create lots of mock objects. Our libraries are dependent on each other, as libraries tend to be, and this means that often a library uses an interface from another library… When it comes to testing the dependent library it’s often handy to be able to use the mock object that you created to test the library that you’re dependent on… If you see what I mean… The, slightly labored, point being, it’s important where you keep your mock objects…
After breaking the socket server into more manageable chunks I started writing the tests. CAsyncSocketConnectionManager is pretty easy to test; it only has one public function: Connect().
So, what exactly do these tests look like?
I don’t use a unit testing framework at present; I haven’t found a need. Whilst NUnit and JUnit are great, I don’t feel that the C++ version adds much, as C++ doesn’t support reflection and that’s where these frameworks really seem to win.