Adventures with \Device\Afd - test driven understanding
I’ve been investigating the ‘sparsely documented’ \Device\Afd
interface
that lies below the Winsock2 layer. Today I'm using a test driven method to understand and document the API.
TDU - Test Driven Understanding
When trying to understand a new API I always like to end up with executable documentation in the form of tests that show the behaviour of the API. I write these tests in the same way that I write any tests: writing a test that fails and then adjusting things until it passes. The difference with this kind of work is that initially I may not know what I need to do to make the test pass, and so 'whatever ends up working' is good enough for a pass. As I add more tests I understand more about the API and refine the earlier tests.
Code
Full source can be found here on GitHub.
We’re now working with the understand
project.
This isn’t production code, error handling is simply “panic and run away”.
This code is licensed with the MIT license.
GoogleTest
Since I’m currently exploring GoogleTest I’ll use that to build my TDU. I’ve added GoogleTest as a git submodule and I’m using a variation on the wrapper that I put in place for my last Practical Testing episode. This makes it easy, for me at least, to make sure the test code builds using the same options as the code under test.
Code structure
Since I want to focus on exploring the API I want the tests to be as clean as I can make them so that
there’s no noise. As things are understood I’ll wrap them so that they don’t distract from subsequent
tests that build on the understanding. This means that lots more code is moving into the shared
directory,
with new headers, afd.h
and socket.h
to group the code according to function. I didn’t spend much
time thinking about how to structure this code or where it should live. All of this code is “throw away code”.
Once I understand the API I’ll decide if I will actually use it and if so I will look at wrapping it
properly for production use. If things don’t prove useful then I may just decide to leave this work and move on.
The code that I initially played with in the explore
project from the first article
can now be boiled down to this:
TEST(AFDExplore, TestConnectFail)
{
    static LPCWSTR deviceName = L"\\Device\\Afd\\explore";   // Arbitrary name in the Afd namespace

    auto handles = CreateAfdAndIOCP(deviceName);

    // As we'll see below, the 'PollData' (socket, status block and outbound poll info) needs to stay
    // valid until the event completes...
    // This is per connection data and per operation data but we only ever have one operation
    // per connection...

    PollData data(CreateNonBlockingTCPSocket());

    // It's unlikely to complete straight away here as we haven't done anything with
    // the socket, but I expect that once the socket is connected we could get immediate
    // completions and we could, possibly, set 'FILE_SKIP_COMPLETION_PORT_ON_SUCCESS' for the
    // AFD association...

    EXPECT_EQ(false, SetupPollForSocketEvents(handles.afd, data, AllEvents));

    // poll is pending

    ConnectNonBlocking(data.s, InvalidPort);

    // connect is pending, it will eventually time out...

    PollData *pData = GetCompletion(handles.iocp, INFINITE);

    // the completion returns the PollData which needs to remain valid for the period of the poll

    EXPECT_EQ(pData, &data);

    EXPECT_EQ(AFD_POLL_CONNECT_FAIL, pData->pollInfo.Handles[0].Events);
}
The bulk of the tests will use simple test fixtures to set up the afd handle, the IOCP and
the single TCP socket used for the test, and I use a custom test main() to initialise the
networking stack once. This leaves the actual tests short and to the point.
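To give an idea of the shape of this, here's a sketch of the kind of fixture and custom main() I mean. It's illustrative rather than the actual code from the shared directory; in particular the AfdWithIOCP and InitialiseWinsock names are placeholders for whatever the real helpers are called, and the device name suffix is as arbitrary as before.
#include <gtest/gtest.h>

#include "afd.h"
#include "socket.h"

// Illustrative only - the real fixture and helpers live in the shared directory...

class AFDUnderstand : public testing::Test
{
    protected :

        AFDUnderstand()
            :   handles(CreateAfdAndIOCP(L"\\Device\\Afd\\understand")),   // arbitrary name in the Afd namespace
                data(CreateNonBlockingTCPSocket())
        {
        }

        AfdWithIOCP handles;    // the afd handle and the IOCP it's associated with

        PollData data;          // per connection data for the single socket used by the test
};

// A custom main() initialises the networking stack once for all of the tests...

int main(int argc, char **argv)
{
    InitialiseWinsock();        // wraps WSAStartup() with "panic and run away" error handling

    ::testing::InitGoogleTest(&argc, argv);

    return RUN_ALL_TESTS();
}
With something like that in place a test such as the cancellation test below needs no per-test setup at all: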
TEST_F(AFDUnderstand, TestConnectCancel)
{
    EXPECT_EQ(false, SetupPollForSocketEvents(handles.afd, data, AllEvents));

    ConnectNonBlocking(data.s, NonListeningPort);

    CancelPoll(handles.afd, data);

    PollData *pData = GetCompletion(handles.iocp, INFINITE, ERROR_OPERATION_ABORTED);

    EXPECT_EQ(pData, &data);

    EXPECT_EQ(0, pData->pollInfo.Handles[0].Events);
}
Understanding through tinkering
From this point on it's a case of sketching out ideas for how the API might work and then writing simple tests to see what actually happens. The next test might be to look at how things work when a connect attempt completes; a test like this will show us:
TEST_F(AFDUnderstand, TestConnect)
{
    EXPECT_EQ(false, SetupPollForSocketEvents(handles.afd, data, AllEvents));

    auto listeningSocket = CreateListeningSocket();

    ConnectNonBlocking(data.s, listeningSocket.port);

    // connect will complete immediately and report the socket as writable...

    PollData *pData = GetCompletion(handles.iocp, 0);

    EXPECT_EQ(pData, &data);

    EXPECT_EQ(AFD_POLL_SEND, pData->pollInfo.Handles[0].Events);

    // Note that at present the remote end hasn't accepted

    sockaddr_in addr {};
    int addressLength = sizeof(addr);

    SOCKET s = accept(listeningSocket.s, reinterpret_cast<sockaddr *>(&addr), &addressLength);

    if (s == INVALID_SOCKET)
    {
        ErrorExit("accept");
    }

    // accepted...

    closesocket(s);

    // disconnected...

    EXPECT_EQ(true, SetupPollForSocketEvents(handles.afd, data, AllEvents));

    pData = GetCompletion(handles.iocp, 0);

    EXPECT_EQ(pData, &data);

    EXPECT_EQ(AFD_POLL_SEND | AFD_POLL_DISCONNECT, pData->pollInfo.Handles[0].Events);
}
Polling again with no further events expected can show that polling is level triggered (results are reported again even if they haven’t changed) rather than edge triggered (results are only reported when a change occurs).
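For example, a test along these lines would demonstrate the level triggered behaviour. It's a sketch that reuses the fixture and helpers above; the expected return values are my assumptions based on the earlier results rather than anything definitive.
TEST_F(AFDUnderstand, TestPollIsLevelTriggered)
{
    EXPECT_EQ(false, SetupPollForSocketEvents(handles.afd, data, AllEvents));

    auto listeningSocket = CreateListeningSocket();

    ConnectNonBlocking(data.s, listeningSocket.port);

    // the connect completes and the socket is reported as writable...

    PollData *pData = GetCompletion(handles.iocp, 0);

    EXPECT_EQ(AFD_POLL_SEND, pData->pollInfo.Handles[0].Events);

    // nothing about the socket has changed, but polling again reports the same
    // state again (level triggered) rather than waiting for a new change
    // (edge triggered)...

    EXPECT_EQ(true, SetupPollForSocketEvents(handles.afd, data, AllEvents));

    pData = GetCompletion(handles.iocp, 0);

    EXPECT_EQ(AFD_POLL_SEND, pData->pollInfo.Handles[0].Events);
}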
Polling before disconnecting the socket and not retrieving the results from the IOCP until after disconnecting can show that the result retrieved is the state when the poll was issued rather than the state when the result was retrieved, etc.
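Again, something like the sketch below would show this; it follows the same accept and closesocket dance as TestConnect above, and the final expectation is precisely the behaviour that the test exists to confirm, not something I'm stating as fact here.
TEST_F(AFDUnderstand, TestPollResultIsSnapshotAtPollTime)
{
    auto listeningSocket = CreateListeningSocket();

    ConnectNonBlocking(data.s, listeningSocket.port);

    sockaddr_in addr {};
    int addressLength = sizeof(addr);

    SOCKET s = accept(listeningSocket.s, reinterpret_cast<sockaddr *>(&addr), &addressLength);

    if (s == INVALID_SOCKET)
    {
        ErrorExit("accept");
    }

    // connected and accepted; poll while the connection is still established,
    // the socket is writable so the poll completes straight away...

    EXPECT_EQ(true, SetupPollForSocketEvents(handles.afd, data, AllEvents));

    // disconnect the remote end BEFORE retrieving the completion...

    closesocket(s);

    PollData *pData = GetCompletion(handles.iocp, 0);

    // the result is the state at the time the poll was issued, so it doesn't
    // include the disconnect; a subsequent poll would report AFD_POLL_DISCONNECT...

    EXPECT_EQ(AFD_POLL_SEND, pData->pollInfo.Handles[0].Events);
}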
Enabling FILE_SKIP_COMPLETION_PORT_ON_SUCCESS
on the afd
handle once it has been associated with the IOCP can show that we can poll and retrieve results in a single call.
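SetFileCompletionNotificationModes() is the call that enables this, and the sketch below shows how I'd expect it to fit together; whether a poll that completes synchronously really does leave the results in the poll information without queuing anything to the IOCP is exactly the kind of thing such a test needs to confirm.
// Sketch: enable skip on success for the afd handle once it has been
// associated with the IOCP...

if (0 == ::SetFileCompletionNotificationModes(
        handles.afd,
        FILE_SKIP_COMPLETION_PORT_ON_SUCCESS))
{
    ErrorExit("SetFileCompletionNotificationModes");
}

// If the poll completes synchronously the expectation is that the results are
// already available in data.pollInfo and nothing is queued to the IOCP;
// otherwise we wait for a completion as before...

if (SetupPollForSocketEvents(handles.afd, data, AllEvents))
{
    const auto events = data.pollInfo.Handles[0].Events;    // results available immediately

    (void)events;
}
else
{
    PollData *pData = GetCompletion(handles.iocp, INFINITE);

    (void)pData;
}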
No test should be considered too simplistic or too obvious as the intention is to build an executable specification for this unknown API.
Wrapping up
With the aid of unit tests we can explore the \Device\Afd
API and work out how we can
use it. The result is an executable specification.
These tests can be built up by looking at how other people use the API, but written
in a way that makes sense for how I work. Your tests will likely be structured differently
as there is, of course, only one true way ;)
Full source can be found here on GitHub.
This isn’t production code, error handling is simply “panic and run away”.
This code is licensed with the MIT license.
More on AFD
- Adventures with \Device\Afd
- Test Driven Understanding - this post