People are often confused about what tests actually do - their role is not merely to indicate that something is wrong.
If the Flying Spaghetti Monster somehow bestowed upon you the divine gift of a Perfect Test Server - one that would literally read your mind and test any of your programs instantaneously, indicating with a green or red light whether the tested program Does What You Mean - such a gift would be pretty much worthless, except in some extremely narrow domains like crypto protocols (which actually tend to fail because your threat model was insufficiently imaginative...).
First, it would show the red fail light on virtually any program, because with sufficiently strong testing some imperfection will eventually be found; and second, without some indication of the nature of the problem there's pretty much nothing useful you can do other than stare at the code and hope for a sudden burst of enlightenment.
This, by the way, is another reason why static typing and proving properties of code are a total waste of time - try making a typo in any nontrivial C++ STL program, and see for yourself how much the compiler's perfectly correct information that "something's wrong with your program" is going to help you.
The primary function of a test suite is helping you locate the nature of a problem, and which code is likely responsible for it. That's why tests produce meaningful messages about what differed between expectation and actual result, why we have different kinds of tests (with very inconsistently applied names like "unit", "functional", "integration", "regression", and so on), why continuous integration servers test every commit to correlate code changes with test failures, and so on.
Piles of microassertions anti-pattern

One fairly common testing anti-pattern which annoys me a lot is tests with a ton of microassertions, which tend to all pass together or all fail together - the moral equivalent of this:
assert_equal "Hello", hw.message
assert_equal ", ", hw.separator
assert_equal "world", hw.target
assert_equal "!", hw.punctuation
Now imagine that the hello world package accidentally switched to German somehow, or to UTF-16BE, or some other crazy thing - every single assertion would fail simultaneously. Unfortunately you will never get any information about what actually happened with any assertion other than the first - and out come the debug prints.
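A minimal sketch of why only the first microassertion ever reports anything. HelloWorld here is a hypothetical stand-in (not from any real package) that has accidentally switched to German, and assert_equal is simplified to a bare raise-on-mismatch, which is essentially what test/unit does - the exception aborts the test method, so the remaining assertions never run:

```ruby
# Hypothetical hello-world object that accidentally went German
class HelloWorld
  def message;     "Hallo"; end
  def separator;   ", ";    end
  def target;      "Welt";  end
  def punctuation; "!";     end
end

# Simplified assert_equal: raises on the first mismatch, like test/unit
def assert_equal(expected, actual)
  raise "<#{expected.inspect}> expected but was <#{actual.inspect}>" unless expected == actual
end

hw = HelloWorld.new
begin
  assert_equal "Hello", hw.message
  assert_equal ", ",    hw.separator
  assert_equal "world", hw.target      # never reached
  assert_equal "!",     hw.punctuation # never reached
rescue => e
  puts e.message   # only the very first mismatch is reported
end
```

Even though three of the four values are wrong, the failure message mentions only the first one.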
This can be improved into something not terribly pretty, but much more useful:
assert_equal ["Hello", ", ", "world", "!"], [hw.message, hw.separator, hw.target, hw.punctuation]
If they fail together, you'll be given full information on what precisely happened.
This kind of structure-vs-structure comparison is much more useful - especially for regression testing, where you test against saved known complex output, and integration testing, where you test outputs of an individual component against outputs of the entire subsystem. It's probably less useful in low-level unit tests, where you'd actually have to type the expected value manually.
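The regression-testing case might look something like the sketch below. Everything here is hypothetical - parse_config is a stand-in for whatever component you're testing, and the expected structure would normally be loaded from a fixture file recorded during a known-good run rather than inlined:

```ruby
require 'yaml'

# Stand-in for some complex component under regression test
def parse_config(text)
  text.lines.map { |line| line.chomp.split("=", 2) }.to_h
end

# In a real suite this YAML would live in a fixture file,
# saved from a run whose output was verified by hand
saved_expected = YAML.load(<<~YAML)
  host: example.com
  port: "8080"
YAML

actual = parse_config("host=example.com\nport=8080\n")

# One comparison of entire structures, instead of a pile of microassertions;
# on failure, both full structures end up in the message
if saved_expected == actual
  puts "OK"
else
  puts "expected #{saved_expected.inspect}, got #{actual.inspect}"
end
```

When the component's output drifts, the single comparison shows you the whole old structure next to the whole new one, instead of whichever field happened to be asserted first.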
Unfortunately we immediately run into a second problem - when such a comparison fails, we get a massive message in which it might be hard to see which parts are the same and which differ.
Highlighting

This is where my small library comes into play. It overrides assert_equal, and if expected and actual values aren't equal, it calls #inspect on them, tokenizes the results with a simple regular expression, uses the diff/lcs library to compute diffs, and then outputs the same message as plain old test/unit's assert_equal, except with added and deleted parts highlighted using ANSI color codes, which should work on just about any kind of terminal.
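The mechanism can be sketched roughly like this. The real library uses the diff/lcs gem; to keep the sketch self-contained, a tiny hand-rolled longest-common-subsequence stands in for it, and the tokenization regexp and color choices are purely illustrative:

```ruby
# Split an #inspect string into words, whitespace runs, and single characters
def tokenize(str)
  str.scan(/\w+|\s+|./)
end

# Classic dynamic-programming LCS over token arrays
# (stand-in for what the diff/lcs gem computes)
def lcs(a, b)
  m = Array.new(a.size + 1) { Array.new(b.size + 1, 0) }
  a.each_index do |i|
    b.each_index do |j|
      m[i + 1][j + 1] = a[i] == b[j] ? m[i][j] + 1 : [m[i][j + 1], m[i + 1][j]].max
    end
  end
  i, j, common = a.size, b.size, []
  while i > 0 && j > 0
    if a[i - 1] == b[j - 1]
      common.unshift(a[i - 1]); i -= 1; j -= 1
    elsif m[i - 1][j] >= m[i][j - 1]
      i -= 1
    else
      j -= 1
    end
  end
  common
end

# Wrap every token not in the common subsequence in an ANSI color code
def highlight(tokens, common, ansi_code)
  common = common.dup
  tokens.map { |t|
    if t == common.first
      common.shift; t
    else
      "\e[#{ansi_code}m#{t}\e[0m"
    end
  }.join
end

expected = '"Hello, world!"'
actual   = '"Hallo, Welt!"'
exp_toks, act_toks = tokenize(expected), tokenize(actual)
common = lcs(exp_toks, act_toks)
puts "expected: #{highlight(exp_toks, common, 31)}"  # removed parts in red
puts "actual:   #{highlight(act_toks, common, 32)}"  # added parts in green
```

The unchanged tokens ('"', ', ', '!') print normally, while Hello/world and Hallo/Welt light up in color, so even in a huge structure dump the eye goes straight to the differences.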
It also includes a small hack to make TextMate's test runner window display this highlighting.
This library has just been extracted from an old-style Rails plugin, so it's a bit messy, but it no longer depends on Rails.
It shouldn't be too hard to adapt it to other testing libraries if you need to do so.