The best kittens, technology, and video games blog in the world.

Saturday, April 30, 2016

Fun and Balance mod for EU4 1.16.3

king of the castle by allenthepostman from flickr (CC-SA)
Fun and Balance is available updated for 1.16.3:
Not counting a minor bug fix for partial westernization (which is disabled by default anyway), it's just a compatibility update.

Full feature list and links to older versions in case you want them are all here.

For all my CK2 and EU4 mods, check my Steam Workshop page

Thursday, April 21, 2016

Dealing with transient test failures due to database results order

Fox by kindl_jiri from flickr (CC-ND)
The most #mildlyinfuriating aspect of testing is tests which work most of the time, but fail occasionally. Even worse are tests which always work when you run them individually, but sometimes fail when run as part of a test suite.

The most common category of transient test errors I've seen comes from Capybara Javascript testing, and I don't have a great solution for those, but there's another category of really obnoxious tests - tests which expect results from the database to come back in a specific order.

They usually work, as databases do the laziest thing possible: when you ask one for a bunch of records without specifying any particular ordering, it will usually return them in whichever order they're physically stored on disk, which usually corresponds to the order of their creation, which in turn usually corresponds to their serial or GUID primary key - so you get an implicit ORDER BY id, most of the time.

Which is just fine, except once in a while the database will reorder physical records to compact tables, causing the physical order of records to no longer correspond to primary key order, and the test to fail.

Then you rerun it individually over and over, and it works every time, as this kind of reordering is not going to happen mid-test, only once enough repeatedly created and deleted data has accumulated in the table. Like once every 20 full test runs, each half an hour long. Very frustrating to debug.

What if you could force such tests to reveal themselves somehow? Most databases won't be of much help, and the ORM is standing in the way of adding ORDER BY to every query which doesn't have one, but fortunately it's not too difficult to tell ActiveRecord to shuffle the results of everything that didn't request a specific order, by placing this kind of code in your spec/spec_helper.rb or equivalent:

ObjectSpace.each_object(Class) do |cls|
  next unless cls.ancestors[1..-1].include?(ActiveRecord::Base)
  begin
    cls.instance_eval do
      # rand() is MySQL syntax; use random() for PostgreSQL or SQLite
      default_scope { order("rand()") }
    end
  rescue
    warn "Can't order #{cls}"
  end
end

I wouldn't recommend running it like that every time, and especially not in production, as ORDER BY RAND() everywhere is going to have annoying performance impact, but enabling it temporarily just to debug already existing transient test failures might be just the right tool.

Wednesday, April 20, 2016

Patterns for testing command line scripts

Lab Mouse checkin out the camera by Rick Eh? from flickr (CC-NC-ND)
It's relatively easy to test code which lives in the same process as the test suite, but a lot of the time you'll be writing standalone scripts, and it's a bit more complicated to test those. Let's talk about some patterns for testing them.

Examples are in RSpec, but none of these patterns depend on the test framework.

Manual Testing Only

That's actually perfectly legitimate. If your script is a few lines of straightforward code, you can just check that it works manually a few times, and then completely forget about it. The usefulness of automated tests in such cases probably won't be very high.

I'd recommend not relying on just that for more complicated scripts.

STDOUT testing

A lot of scripts take arguments from the command line or STDIN, and output results to STDOUT, possibly to STDERR or via exit code as well.

A bunch of expect(`script --input`).to eq("output\n") style tests are very easy to use and can go very far.

If you need to test a bit more complicated interactions - setting environmental variables, writing to STDIN, reading from both STDOUT and STDERR, checking error code etc. - IO.popen and Open3 module offer reasonably convenient APIs.
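For example, here's a hedged sketch of driving a process through Open3.capture3 - using `wc -l` as a stand-in for the actual script under test:

```ruby
require "open3"

# Open3.capture3 runs a command, feeding it STDIN and capturing
# STDOUT, STDERR, and exit status. An environment hash can be
# passed as the first argument.
stdout, stderr, status = Open3.capture3(
  { "SOME_VAR" => "some value" },   # extra environment variables
  "wc", "-l",
  stdin_data: "line 1\nline 2\nline 3\n"
)

puts stdout.strip      # the line count wc printed - "3"
puts status.success?   # true when the command exited with 0
```

In a spec you'd wrap calls like this in expectations on `stdout`, `stderr`, and `status.exitstatus`.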

Of course only a certain category of scripts can be reasonably tested this way, but it's a fairly big category.

Testing as library code

A fairly common pattern is to move most of the code from the "script" file to a separate "library" file, which can be required by both the script and the tests. It's a bit awkward, as the script no longer lives in one file.

It's not always obvious where to draw the line between the library and the script - if you put everything in the library, the library is pretty much useless for anything except the program itself. If you keep things like command line argument parsing out of it, that results in a possibly useful "library", but leaves more "script" code untested.

if __FILE__ == $0

It used to be a very common pattern, though I don't see it that often these days. What if we have a file which works as a library you can require, but acts as a script if it's run directly? Here's the typical code for such a script:

class Script
  def initialize(*args)
    # store arguments, set up state
  end

  def run!
    # actual script logic
  end
end

if __FILE__ == $0
  Script.new(*ARGV).run!
end

Depending on how you feel, you might do command line argument parsing either in the initializer, or in the if __FILE__ == $0 block.

Code written in this style generally doesn't intend to be used as a library, and this hook is there primarily for sake of testing.
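A quick self-contained demonstration of how the guard behaves - the Greeter class and file name here are made up for illustration. Requiring the file loads the class without running anything; executing the file directly fires the guard:

```ruby
require "tmpdir"

Dir.mktmpdir do |dir|
  path = File.join(dir, "greet.rb")
  File.write(path, <<~RUBY)
    class Greeter
      def run!
        puts "Hello!"
      end
    end

    Greeter.new.run! if __FILE__ == $0
  RUBY

  require path            # loaded as a library - run! does not fire
  puts defined?(Greeter)  # "constant" - the class is available to tests
  puts `ruby #{path}`     # run directly, the guard fires: "Hello!"
end
```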

Temporary directory

Frequently scripts interact with files. That's more complicated to set up. Don't try anything silly like using the current directory, or a single tmp directory where leftovers from previous test runs might still be lying around.

I'd recommend creating a fresh temporary directory and going there. Add code like this to your test helpers:

def Pathname.in_temporary_directory(*args)
  Dir.mktmpdir(*args) do |dir|
    Dir.chdir(dir) do
      yield Pathname(dir)
    end
  end
end

Then you can use Pathname.in_temporary_directory do |dir| ... end in your tests, and it will handle switching back to the previous directory and removing the temporary one automatically.

In every such block you can write files you want, run command, and check any generated files, without worrying about contaminating filesystem anywhere.

There's just one minor complication here - you'll be changing your working directory, so you'll need to call your script using an absolute rather than a relative path. Simply do something like:

let(:script) { Pathname(__dir__) + "../bin/script"  }

to get the absolute path to your script, and then use that.
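Putting the pieces together, here's a self-contained sketch of the pattern - the helper is defined as a plain method here so the example runs on its own, and the file names are made up:

```ruby
require "pathname"
require "tmpdir"

# Create a temporary directory, chdir into it for the duration of the
# block, then restore the previous directory; mktmpdir deletes the
# directory itself once the block exits.
def in_temporary_directory
  Dir.mktmpdir do |dir|
    Dir.chdir(dir) do
      yield Pathname(dir)
    end
  end
end

in_temporary_directory do |dir|
  # Set up input files, run the script under test (via its absolute
  # path), then inspect whatever files it generated.
  (dir + "input.txt").write("some data\n")
  puts (dir + "input.txt").read
end
```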

Mocking network

All that covers most possible scripts, but I recently figured out one really fun trick - how do you test scripts which read from the network?

Within our tests, gems like webmock and vcr can fake network communication, but what if we want to run a script as a separate process? Well, just save this file as mock_network.rb:

require "pathname"
require "webmock"
require "vcr"

VCR.configure do |config|
  config.cassette_library_dir = Pathname(__dir__) + "vcr"
  config.hook_into :webmock
end

VCR.insert_cassette('network', :record => ENV["RECORD"] ? :new_episodes : :none)

END { VCR.eject_cassette }

And then run your script as system "ruby -r#{__dir__}/mock_network #{script} #{arguments}", possibly in conjunction with any other of the techniques presented here.

To record network traffic you can run your tests with RECORD=1 rspec; then once you're finished, just run rspec normally and it will use the recorded requests.

Mocking other programs

The previous pattern assumed the script was using some Ruby library like net/http or open-uri for network requests. But it's very common to use a program like curl or wget instead.

In such case:
  • write your mock curl, doing whatever you'd like it to do for such test
  • within test, change ENV["PATH"] to point to directory containing your mock curl as first element
  • run script under test
This works reasonably well, as almost all programs call each other via ENV["PATH"] search, not by absolute paths, and usually expect fairly simple interactions.
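A minimal sketch of the steps above - the fake curl just prints canned JSON; adjust it for whatever output your script actually expects:

```ruby
require "tmpdir"
require "fileutils"

Dir.mktmpdir do |dir|
  # Step 1: write a mock curl that prints canned output instead of
  # touching the network.
  mock_curl = File.join(dir, "curl")
  File.write(mock_curl, "#!/bin/sh\necho '{\"status\": \"ok\"}'\n")
  FileUtils.chmod(0755, mock_curl)

  # Step 2: prepend the mock's directory to PATH, so it wins the search.
  original_path = ENV["PATH"]
  ENV["PATH"] = "#{dir}:#{original_path}"
  begin
    # Step 3: run the script under test - any `curl` it shells out to now
    # hits the mock. Here we just invoke curl directly to demonstrate.
    puts `curl https://example.com/api`
  ensure
    ENV["PATH"] = original_path
  end
end
```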

Like all heavy-handed mocking, this can fail miserably if the program decides to pass slightly different options to curl etc., and unlike webmock, this style of mocking doesn't block network access, so you can miss something.

All these patterns leak

None of these patterns are perfect - they make assumptions about how the script is going to interact with the world, and they don't actually isolate the script from the network, the filesystem (outside the temporary directory you created), Unix utilities etc., so a buggy script can still rm -rf your home directory.

For testing very complicated interactions, you might need to use a virtual machine, or some OS-specific isolation mechanism like chroot. Fortunately only relatively few scripts really need such techniques.

Tuesday, April 19, 2016

Automatically managing db/schema.rb

peering thru the bed rails watching coco play by damselfly58 from flickr (CC-NC-ND)
I've been living in schemaless NoSQL wonderland for quite a while, but I'm currently working on some MySQL-based Rails applications, and managing db/schema.rb is a massive pain.

It's an automatically generated file, and files like that conventionally don't go into version control, but it also literally says "It's strongly recommended that you check this file into your version control system" right in it. I'm still not completely convinced that's the right thing to do, but let's assume we follow the recommendation.

The problem is that you're probably not starting a fresh database for every branch - or carefully rolling back to the master schema before you switch - you'll be switching branches a lot, and migrations will be applied in a different order than they'd end up in on master. So whenever you regenerate db/schema.rb, you need to look at it manually to figure out what should be committed and what shouldn't. That's a very error prone process for something as frequent as writing migrations.

You could drop your database and recreate it from migrations every now and then, but you probably have some data you'd rather keep there.

Fortunately there's a solution! Oh, you can't just switch your team to a schemaless database? Well, in such case use this script:

Script regenerate_schema_rb:
#!/usr/bin/env ruby

require "fileutils"
require "pathname"

fake_database_yml = Pathname(__dir__) + "database.yml"
real_database_yml = Pathname("config/database.yml")

unless real_database_yml.exist?
  STDERR.puts "It doesn't seem like you're in a Rails application"
  exit 1
end

unless `git status` =~ /nothing to commit, working directory clean/
  STDERR.puts "Do not run this script unless git status says clean"
  exit 1
end

system "echo 'DROP DATABASE IF EXISTS schema_db_regeneration' | mysql -uroot"
system "echo 'CREATE DATABASE schema_db_regeneration' | mysql -uroot"

FileUtils.cp fake_database_yml, real_database_yml

system "rake db:migrate"
system "git checkout #{real_database_yml}"

Fake database.yml:

development:
  adapter: mysql2
  encoding: utf8
  database: schema_db_regeneration
  pool: 5
  username: root
What it does is very straightforward - it keeps your existing database intact, simply repoints database.yml at a fake one, and runs all the migrations against that. No manual edits necessary, and your next_tumblr_development database is safe.