Reputation: 1431
I wrote a mini-checkers game that represents the board in two formats: a long positionID and a byte[][] board. The former is cheaper to store, the latter is easier to display and manipulate. The conversion itself is simple (see the code below).
TDD says "write one failing test, then write production code". How should that work for a representation conversion? A unit test like assertEquals(0L, toIndex(new byte[6][6])) doesn't provide much coverage. A round-trip test like long myPosID = 42L; assertEquals(myPosID, toIndex(toBoard(myPosID))) doesn't add much value either. Testing the whole range would take forever. Running the unit test for several random myPosID values (a Monte-Carlo simulation) seems better, but even then a passing test doesn't mean much.
How should it be done in TDD?
/*
This class converts between the two checkers board representations. A position is stored as a long positionID and represented as a byte[boardHeight][boardWidth] board.
Board representation: white = 0, black = 1, empty = 2.
long positionID to byte[][] board:
take the ternary (base-3) representation of positionID and place its digits on the corresponding playable squares.
byte[][] board to long positionID:
long positionID = 0; for each playable square (most significant first): positionID = positionID * 3 + playableSquare.
*/
public final int boardHeight = 6;
public final int boardWidth = 6;
public long toIndex(byte[][] board) {
int totalSquares = boardHeight * boardWidth / 2;
byte[] coords = new byte[totalSquares];
byte k = 0;
for (int i = 0; i < boardHeight; i++) {
for (int j = 0; j < boardWidth / 2; j++) {
byte makeItCheckers = (byte) ((i + 1) % 2);
coords[k] = board[i][j * 2 + makeItCheckers];
k++;
}
}
long positionID = 0;
for (int i = totalSquares - 1; i >= 0; i--) {
positionID = positionID * 3 + coords[i];
}
return positionID;
}
public byte[][] toBoard(long positionID) {
int totalSquares = boardHeight * boardWidth / 2;
int[] coords = new int[totalSquares];
for (int i = 0; i < totalSquares; i++) {
coords[i] = (int) (positionID % 3L);
positionID = positionID / 3L;
}
byte[][] board = new byte[boardHeight][boardWidth];
// mark every square empty (2); Arrays.fill must be applied per row, not to a byte[][]
for (byte[] row : board) {
    Arrays.fill(row, (byte) 2);
}
byte k = 0;
for (int i = 0; i < boardHeight; i++) {
for (int j = 0; j < boardWidth / 2; j++) {
byte makeItCheckers = (byte) ((i + 1) % 2);
board[i][j * 2 + makeItCheckers] = (byte) coords[k];
k++;
}
}
return board;
}
Upvotes: 1
Views: 267
Reputation: 11
Checking formulas with TDD is hard. You can use a variant of Monte-Carlo: generate 1,000 (or 100,000) random long testID numbers and save them somewhere. Always check the back-and-forth conversion against this list. The IDs are random, but they do not change from run to run, so you still follow "tests must produce the same results".
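A minimal sketch of that idea in plain Java (no JUnit). The class name, the seed 42, and the count of 1,000 are arbitrary choices; toIndex/toBoard are condensed from the question's code. Seeding the generator makes the "random" IDs identical on every run, which is equivalent to saving the list:

```java
import java.util.Arrays;
import java.util.Random;

public class RoundTripTest {
    static final int HEIGHT = 6, WIDTH = 6;
    static final int SQUARES = HEIGHT * WIDTH / 2;   // 18 playable squares
    static final long MAX_ID = pow3(SQUARES);        // 3^18 valid position IDs

    static long pow3(int n) { long p = 1; while (n-- > 0) p *= 3; return p; }

    // condensed from the question: read playable squares as ternary digits
    static long toIndex(byte[][] board) {
        long id = 0;
        for (int i = HEIGHT - 1; i >= 0; i--)
            for (int j = WIDTH / 2 - 1; j >= 0; j--)
                id = id * 3 + board[i][j * 2 + (i + 1) % 2];
        return id;
    }

    // condensed from the question: write ternary digits onto playable squares
    static byte[][] toBoard(long id) {
        byte[][] board = new byte[HEIGHT][WIDTH];
        for (byte[] row : board) Arrays.fill(row, (byte) 2);  // 2 = empty
        for (int i = 0; i < HEIGHT; i++)
            for (int j = 0; j < WIDTH / 2; j++) {
                board[i][j * 2 + (i + 1) % 2] = (byte) (id % 3);
                id /= 3;
            }
        return board;
    }

    public static void main(String[] args) {
        Random rng = new Random(42L);                         // fixed seed => reproducible IDs
        for (int t = 0; t < 1000; t++) {
            long id = Math.floorMod(rng.nextLong(), MAX_ID);  // clamp into the valid range
            if (toIndex(toBoard(id)) != id)
                throw new AssertionError("round trip failed for id " + id);
        }
        System.out.println("1000 round trips OK");
    }
}
```

Note this only proves the two conversions are inverses of each other; both could still be wrong in the same way, which is why the acceptance tests below matter too.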
TDD seems to work well when a company has lots of cheap but mediocre employees. A manager can then enforce writing tests (it is easy to check whether methods are missing tests), and it is harder for other coders to submit a patch that breaks existing code ("your commit doesn't pass the JUnit tests - go and redo it!"). Tests will slow the workers down, but that doesn't matter, as long as the tests don't slow down the manager. Just hire more coders.
This works especially well when the project starts from scratch.
If the coders are decent, their labor is expensive, and the project is mature, then you are better off with behavior-driven tests.
Upvotes: 1
Reputation: 131396
TDD means writing tests before writing the implementation.
You seem to be doing the opposite.
To apply TDD, and more generally to write unit tests for your conversion code, you have to think in terms of acceptance tests.
You have to identify the possible scenarios of the conversion:
what you have as input and what you expect as output.
Testing whole range will take forever
Indeed, if you have hundreds or even thousands of scenarios, you should not test all of them: they would take long to implement, and the unit tests may become too slow to execute.
That is contrary to unit-test principles.
Unit tests have to execute fast because they are run very often.
Running unit test for several random myPosID values (Monte-Carlo simulation) seems better, but even then passing test doesn't mean much.
Testing with random values, as you suggest, should not use random series that are generated differently each time the tests are executed, because the results may not be reproducible.
That is also contrary to unit-test principles.
A unit test has to produce the same result in any environment and at any time.
Otherwise the test is not reliable.
So the idea for creating unit tests in the TDD way is to write as many unit tests as there are types of cases to handle.
For example: you have 3 ways to represent a cell:
white = 0, black = 1, empty = 2.
These may become 3 acceptance tests for the conversion from Long to byte[][] and back.
1) When I have, as a Long value, only empty cells, I expect a byte array representation as...
2) When I have, as a Long value, 1 white cell and empty cells for the rest, I expect a byte array representation as...
3) When I have, as a Long value, 1 black cell and empty cells for the rest, I expect a byte array representation as...
You may then go further.
Why not create an acceptance test that mixes white and black cells, to check that mixing them doesn't create side effects?
4) When I have, as a Long value, 3 white cells, 4 black cells and empty cells for the rest, I expect a byte array representation as...
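Scenarios 1) to 3) can be sketched as concrete assertions. This is plain Java rather than JUnit, and emptyBoard/toIndex are assumptions modeled on the question's code (white = 0, black = 1, empty = 2; playable squares only). The expected IDs follow from the ternary encoding: an all-empty board has every digit equal to 2, so its ID is 3^18 - 1 = 387420488:

```java
public class ConversionAcceptanceTest {
    static final int HEIGHT = 6, WIDTH = 6;

    // a board whose 18 playable squares are all empty (value 2)
    static byte[][] emptyBoard() {
        byte[][] board = new byte[HEIGHT][WIDTH];
        for (int i = 0; i < HEIGHT; i++)
            for (int j = 0; j < WIDTH / 2; j++)
                board[i][j * 2 + (i + 1) % 2] = 2;
        return board;
    }

    // same digit order as the question's toIndex
    static long toIndex(byte[][] board) {
        long id = 0;
        for (int i = HEIGHT - 1; i >= 0; i--)
            for (int j = WIDTH / 2 - 1; j >= 0; j--)
                id = id * 3 + board[i][j * 2 + (i + 1) % 2];
        return id;
    }

    public static void main(String[] args) {
        // 1) all empty: every ternary digit is 2, so id = 3^18 - 1
        if (toIndex(emptyBoard()) != 387420488L)
            throw new AssertionError("all empty");

        // 2) one white piece (0) on the first playable square, rest empty:
        //    the least significant digit drops from 2 to 0
        byte[][] oneWhite = emptyBoard();
        oneWhite[0][1] = 0;                // row 0 plays on odd columns
        if (toIndex(oneWhite) != 387420488L - 2)
            throw new AssertionError("one white");

        // 3) one black piece (1) on the same square: digit drops from 2 to 1
        byte[][] oneBlack = emptyBoard();
        oneBlack[0][1] = 1;
        if (toIndex(oneBlack) != 387420488L - 1)
            throw new AssertionError("one black");

        System.out.println("acceptance scenarios OK");
    }
}
```

Each assertion pins down one hand-computed input/output pair, which is exactly what a TDD acceptance test needs before the production code exists.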
Finally, about your question whether you should test all cases: I think you should rather concentrate on "big cases" like the ones shown above.
That should be fine.
Upvotes: 2
Reputation: 533
There is a similar problem in competitive programming: when you submit code, the system can't verify it is 100% correct, since it does not try all possible inputs. Instead, the system runs many tests of several kinds.
So you can follow this technique as well, but at a scale that fits you.
Also, I should mention that "canonical TDD" does not work well for formula-like methods, since you can always make a test pass with yet another if
. Instead, we should focus on the fact that tests give us not only a correct algorithm implementation, but a correct design as well.
Upvotes: 1