
Managing a Test Database


The Problem

Test databases are very easy to get wrong. Very easy. Decades ago, when I first learned testing, the team shared a test database. If you ran your test suite at the same time as another developer, both of your test suites would fail! However, we were using a database that required paid individual licenses, so we were limited in what we could do.

Later, I worked for a company using MySQL and I created an elaborate system of triggers to track all database changes. This let me “fake” transactions: start a test run, see what had changed since the last run, and automatically revert those changes. It had the advantage that multiple database handles could see each other’s changes (hard to do for many databases if you use separate transactions). It had the disadvantage of everything else: it was fragile and slow.

Later, I started using xUnit frameworks, eventually writing a new one that’s popular with companies needing a large-scale testing solution. With this, it was easy for each test class to run in a separate transaction, cleaning up after itself as it went. Using transactions provides great isolation, leverages what databases are already good at, and lets you run many classes in parallel.

But it can easily break embedded transaction logic. And you have to guarantee everything shares the same database handle, and you can’t really test the transactions in your code, and, and, and ...

What finally drove me over the edge was writing some code for a client using the Minion job queue. The queue is solid, but it creates new database connections, thus ensuring that it can’t see anything in your database transactions. I figured out a (hackish) solution, but I was tired of hackish solutions.

While I was researching the solution, Matt Trout reminded me (again) why the “database transactions for tests” approach was broken. His suggestion: just spawn off temporary test databases and use those, throwing them away when you’re done.

The Client

So a company wanting to hire me gave me a technical test, and one task was to add a small, simple feature to a Catalyst web application. It was trivial. They handed me a Vagrantfile and after a quick vagrant up and vagrant ssh, I was ready to begin. Then I looked at the test they had for the controller:

use strict;
use warnings;
use Test::More;

use Catalyst::Test 'Client';

ok( request('/some_path')->is_success, 'Request should succeed' );
done_testing();

The task involved a POST to a URL. There was no test for the existing feature that I was adding to, but any test I wrote meant I’d be changing the state of the database. Run the code multiple times and I’d leave junk in the database. There were various ways I could approach this, but I decided it was time to build a quick database on the fly, write to that, and then dispose of it after. The code for this was trivial:

package Test::DB;

use File::Temp qw(tempfile);
use DBI;
use Carp qw(croak);    # croak() is used below, so import it explicitly
use parent 'Exporter';
use Client::Policy;

BEGIN {
    if ( exists $INC{'Client/Model/ClientDB.pm'} ) {
        croak("You must load Test::DB before Client::Model::ClientDB");
    }
}
use Client::Model::ClientDB;

our @EXPORT_OK = qw(test_dbh);

my $connect_info = Client::Model::ClientDB->config->{connect_info};
my $dsn          = $connect_info->{dsn};
my $user         = $connect_info->{user};
my $password     = $connect_info->{password};

# $$ is the process id (PID)
my $db_name = sprintf 'test_db_%d_%d', time, $$;
my ( $fh, $filename ) = tempfile();

my $dbh = DBI->connect(
    $dsn, $user, $password,
    { RaiseError => 1, AutoCommit => 1 } 
);
$dbh->do("CREATE DATABASE $db_name");
system("mysqldump -u $user --password=$password test_db > $filename") == 0
  or croak("mysqldump failed: $?");
system("mysql -u $user --password=$password $db_name < $filename") == 0
  or croak("importing schema to mysql failed: $?");

# XXX We’re being naughty in this quick hack. We’re writing
# this back to the Model so that modules which use this connect 
# to the correct database.
$connect_info->{dsn} = "dbi:mysql:$db_name";

# This is just a quick hack to get tests working for this code.
# A catastrophic failure in the test means this might not get
# run and we have a junk test database lying around.
# Obviously we want something far more robust

END { $dbh->do("DROP DATABASE $db_name") }

sub test_dbh () { $dbh }

1;

The above is very naughty in many ways, but the client hinted that how fast I returned the test might be a factor (or maybe they didn’t and I misread the signals). They also made it clear they were looking at how I approached problems, not whether or not the code was perfect. Thus, I thought I was in safe territory. And it meant I could do this in my test:

use strict;
use warnings;
use Test::More;
use lib 't/lib';
use Test::DB;

use Catalyst::Test 'Client';

ok( request('/some_path')->is_success, 'Request should succeed' );

# anything I do here is against a temporary test database
# and will be discarded when the test finishes

done_testing();

The Test::DB code was quick and easy to write and made it trivial for me to safely write tests. I was pleased.

What’s Wrong With Test::DB?

For a junior developer, Test::DB might look awesome. For an experienced developer, it’s terrible. So what would I do to make it closer to production-ready?

Here are just a few of the things I would consider.

Stronger Data Validation

First, let’s look at our connection information:

my $connect_info = Client::Model::ClientDB->config->{connect_info};
my $dsn          = $connect_info->{dsn};
my $user         = $connect_info->{user};
my $password     = $connect_info->{password};

The above relied on how Catalyst often sets up its DBIx::Class (a Perl ORM) model:

package Client::Model::ClientDB;

use strict;
use base 'Catalyst::Model::DBIC::Schema';

__PACKAGE__->config(
    schema_class => 'Client::Schema::ClientDB',
    connect_info => {
        dsn      => 'dbi:mysql:test_db',
        user     => 'root',
        password => 'rootpass',
    }
);

Once you load that class, you get a config class method which tells you how that class is configured. However, there’s no guarantee on the Test::DB side that the data is structured the way I expect. Thus, I need to validate that data and throw an exception immediately if something has changed.
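Here’s a minimal sketch of the kind of validation I mean. It assumes the same connect_info shape shown above and that Carp’s croak is available (as it is in Test::DB); the exact keys you check are whatever your model actually uses:

my $connect_info = Client::Model::ClientDB->config->{connect_info};

# Fail fast if the model's configuration no longer looks the way we expect.
ref $connect_info eq 'HASH'
  or croak('connect_info is not a hash reference');

for my $key (qw(dsn user password)) {
    defined $connect_info->{$key}
      or croak("connect_info is missing the '$key' key");
}

# Test::DB shells out to mysqldump/mysql, so insist on a MySQL DSN.
$connect_info->{dsn} =~ /^dbi:mysql:/
  or croak("expected a dbi:mysql DSN, got '$connect_info->{dsn}'");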

And how do we create our test database?

$dbh->do("CREATE DATABASE $db_name");
system("mysqldump -u $user --password=$password test_db > $filename") == 0
  or croak("mysqldump failed: $?");
system("mysql -u $user --password=$password $db_name < $filename") == 0
  or croak("importing schema to mysql failed: $?");

The CREATE DATABASE command is fast, so I’m not worried about that. And the test had a single table with very little data, so this was quick. But for Tau Station, we have a couple of hundred tables and tons of data. This would be slow. For any reasonably mature system, dumping the database each time would be a bad idea. There are ways you could avoid dumping it multiple times, but that still leaves the next problem: loading that data into your new test database. That would need to be done for each test, and it is not something you can trivially speed up.

For a more robust system, I’d probably create a local database service that would simply build a set number of test databases and have them waiting. The test would request the next test database, the service would register that the database had been taken, and it would create a new test database in the background while your test runs. The service would also clean up old test databases based on whatever policies you think are appropriate.
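To make the shape of that concrete, here’s a hypothetical sketch. Nothing named Test::DB::Pool exists; it only illustrates a service handing out pre-built databases, with a directory of marker files standing in for whatever bookkeeping the real service would do:

package Test::DB::Pool;

use strict;
use warnings;
use Carp qw(croak);
use File::Spec;

sub new {
    my ( $class, %args ) = @_;
    return bless { dir => $args{dir} }, $class;
}

# Hand out the next pre-built test database and mark it as taken by
# renaming its marker file. A background worker (not shown) would notice
# the pool has shrunk and build a replacement while your tests run.
sub checkout {
    my ($self) = @_;

    opendir my $dh, $self->{dir} or croak("cannot open $self->{dir}: $!");
    my ($free) = grep { /^test_db_.+\.free$/ } readdir $dh;
    closedir $dh;
    croak('no pre-built test databases available') unless $free;

    ( my $taken = $free ) =~ s/\.free$/.taken/;
    rename File::Spec->catfile( $self->{dir}, $free ),
           File::Spec->catfile( $self->{dir}, $taken )
      or croak("could not claim $free: $!");

    ( my $db_name = $free ) =~ s/\.free$//;
    return $db_name;
}

1;

A test (or Test::DB itself) would then ask the pool for a database name and point its DSN at that, instead of creating and populating a database inline.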

No Action At A Distance

This line is terrible:

$connect_info->{dsn} = "dbi:mysql:$db_name";

That works only because the config data in Client::Model::ClientDB is global and mutable, and $connect_info is merely a reference to that data. Instead, if I have a “database service” that tells the code which database it can use, then Test::DB can call that service, and so can Client::Model::ClientDB. Everything relies on a single source of truth instead of hacking global variables and hoping you don’t mess up.
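One lightweight way to get that single source of truth is a tiny module that owns the connection details and that both the Catalyst model and the test code call. The name Client::DatabaseConfig and the environment variable are assumptions for illustration:

package Client::DatabaseConfig;

# Hypothetical single source of truth for connection information. Both
# Client::Model::ClientDB and Test::DB would call connect_info() rather
# than one of them mutating the other's configuration.
use strict;
use warnings;

sub connect_info {
    my $db_name = $ENV{CLIENT_TEST_DB} || 'test_db';
    return {
        dsn      => "dbi:mysql:$db_name",
        user     => 'root',
        password => 'rootpass',
    };
}

1;

Test::DB (or the database service) sets CLIENT_TEST_DB to the throwaway database’s name before the model loads, and every consumer agrees on the DSN without anyone reaching into anyone else’s config.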

Don’t Drop The Test Database

If there is one thing I hate about many testing systems, it’s watching a test fail horribly while the database is cleaned up (or dropped) so that I can’t see the actual data after the test is done. What I often have to do is fire up the debugger, run the code up to the test failure, grab a database handle, and try to inspect the data that way. It’s a mess.

Here, we can fix that by simply dropping this line:

END { $dbh->do("DROP DATABASE $db_name") }

At the beginning and end of every test run, we can diag the test database name (sketched below); if I need to dig into an issue in the database, I can still connect to it. Our database service would have code to drop old test databases later:

  • The next day
  • The next test run
  • After exceeding a threshold of databases
  • ... or whatever else you need

In short, keep the valuable data around for debugging.
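Inside Test::DB, the breadcrumb might look something like this (diag comes from Test::More):

use Test::More;

# Announce which database this run used so a failing test leaves a trail.
diag("Using test database $db_name");

END {
    # No DROP DATABASE here: keep the data around for post-mortems.
    diag("Keeping test database $db_name for inspection");
}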

Rapid Database Development

The database service solution would also have to tie into your database change management strategy. I heavily use sqitch to manage database changes and I’ve written a lot of code to support odd edge cases. It wouldn’t be hard to write code letting the database service check whether each test database is up to date with your local sqitch plan. Whatever database change management strategy you use, it needs to be discoverable if you want to properly automate the database service.
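For sqitch specifically, the simplest version is for the service to deploy the current plan into every database it builds. Here’s a rough sketch that reuses the $user, $password, and $db_name variables from the Test::DB hack above; the target URI layout is an assumption about your setup:

# Bring a freshly built test database up to date with the current sqitch plan.
my $target = "db:mysql://$user:$password\@localhost/$db_name";

system( 'sqitch', 'deploy', $target ) == 0
  or croak("sqitch deploy failed for $db_name: $?");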

Of course, you think, that’s obvious. Yet you’d be shocked how many times I’ve worked with clients whose database change management strategy involves listing a bunch of SQL files and checking their mtime to see which ones need to be applied to your database. Yikes!

Faster Tests

If this is done well, your tests should also be faster. You won’t have the overhead of transactions beyond what your code already has. Plus, you can avoid issues like this:

sub test_basic_combat_attack_behavior ( $test, $ ) {
    my $ovid    = $test->load_fixture('character.ovid');
    my $winston = $test->load_fixture('character.winston');
    my $station = $test->load_fixture('station.tau-station');

    $test->move_to( $station->port, $ovid, $winston );
    ok !$ovid->attack($winston),
      'We should not be able to attack someone on the home station.';
    ...
}

In the above, we’re loading some fixtures. Sometimes those fixtures are very complicated and loading them takes time. For one client, running $test->load_all_fixtures('connection') added an extra couple of seconds to every test that needed it.

Instead, pre-built test databases can have the test fixtures already loaded. Further, a pre-populated database means your code is exercised against something closer to real-world data instead of an empty database, which can hide corner cases.
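One way to get there, in the same spirit as the quick hack above, is for the service to clone each new database from a template that already has the common fixtures loaded. This sketch reuses $user, $password, $db_name, and $filename from Test::DB; the test_db_template database, with fixtures loaded ahead of time, is an assumption:

# Clone a fixture-loaded template into the new test database so individual
# tests don't pay the fixture-loading cost themselves.
system("mysqldump -u $user --password=$password test_db_template > $filename") == 0
  or croak("mysqldump of fixture template failed: $?");
system("mysql -u $user --password=$password $db_name < $filename") == 0
  or croak("loading fixture template into $db_name failed: $?");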

Conclusion

By using a database service which merely hands you a temporary test database, you don’t have to worry about leaving the database a mess, managing transactions in tests, or adding nasty hacks to your tests to work around these issues. Most importantly, you’re not changing the behavior of your code. You just use the database as normal. It might be a little more work up front to create that database, but it’s worth the effort.

I really do want to get around to creating a proper database tool like this some day. Today is not that day. But I was delighted how even my quick hack, written in just a couple of minutes, made it so much easier to test my code. I should have done this ages ago.

Please leave a comment below!




