Times change. Back when I started this blog ten years ago, serious automated testing was something people had generally heard of, but very few actually did. It was very common for even big projects to have literally zero tests, or if they had any, they were token tests or at best some regression checks.
Then TDD's shaming campaign happened, and it was even more effective than the shaming campaigns against smoking. Now not testing is the exception, and most fights are over what kind of testing is most appropriate.
It was mostly a cultural change. Ruby and Java were pretty much as testable ten years ago as they are now, but the underlying technology changed considerably as well. Here are just some of those changes:
- Very low-level languages like C/C++, where any bug can corrupt memory at random, are extremely hard to test. They're far less popular than they used to be (and the ones that still exist usually have nonexistent or very shitty tests)
- Languages like Perl, which didn't even have working equality and had a lot of context dependence, are much less popular. Perl was still possible to test, but it was a bit awkward
- Headless browsers made it possible to test JavaScript reasonably well (see the sketch after this list)
- jQuery and greater compatibility between browsers made cross-browser JavaScript testing basically unnecessary
- Web-based user interfaces are far easier to test than most native interfaces
- Going all-web made cross-OS testing unnecessary, and if you really need it, VMs are far easier to set up than ever
- The logic-in-the-database paradigm mostly died out, and the much easier to test logic-in-the-application paradigm is clearly dominant now
- Complex multithreading never got popular, and it's more common to have isolated services communicating over HTTP or other messaging
- Cloud makes it much easier to replicate the production setup in a test environment for reliable system-level testing
- All languages have a lot more testing libraries, so things like mocking network or filesystem communication, which used to be a massive pain to set up, are now nearly trivial (also shown in the sketch after this list).
- There are now ways to test against multiple browsers at once, even if it's still not quite as simple as single-browser testing.
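As an illustration of the headless browser and mocking points, here's roughly what this looks like today. A minimal sketch using Puppeteer (one headless browser driver among several); the localhost URL, the /api/user endpoint, and the #greeting selector are all made-up placeholders, not anything from a real app:

```ts
// Sketch: drive a headless browser and mock a network call.
// Assumes Puppeteer is installed; http://localhost:3000, /api/user,
// and #greeting are hypothetical - substitute your own app's details.
import puppeteer from "puppeteer";

async function main() {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Intercept requests so the test never touches a real backend.
  await page.setRequestInterception(true);
  page.on("request", (req) => {
    if (req.url().endsWith("/api/user")) {
      req.respond({
        status: 200,
        contentType: "application/json",
        body: JSON.stringify({ name: "Test User" }),
      });
    } else {
      req.continue();
    }
  });

  await page.goto("http://localhost:3000");
  // Assert on what the page's JavaScript actually rendered.
  const greeting = await page.$eval("#greeting", (el) => el.textContent);
  if (greeting !== "Hello, Test User") {
    throw new Error(`unexpected greeting: ${greeting}`);
  }

  await browser.close();
}

main();
```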
And yet, one technology from the dark days before testing is still with us, and shows no sign of either going away or becoming testable: CSS.
Let's cover a few things which in theory ought to be possible to validate automatically, but for which there are no good ways today:
- Site works with no major glitches on different browsers. Any major difference should be flagged, but deciding what counts as a "major" difference would probably need somewhat complex logic in the testing library.
- Site looks reasonable on different screen sizes. There will be differences, and the testing library would need to contain a lot of logic to determine what's fine and what's not. Some examples would be maximum/minimum element sizes, no content missing unless specifically requested to be hidden, no content cut by overflow, no horizontal scrollbars, etc. (a crude scrollbar check is sketched after this list).
- All CSS rules in your application.css are actually used. It seems everybody's CSS accumulates leftovers after every refactoring, and with some browser hooks it ought to be possible to flag them automatically (see the coverage sketch after this list).
- When you do CSS animations, the start and end states show what they ought to, even disregarding the transition between them. Some kind of assertions like "X is fully visible and not covered by any other element or overflow: hidden" or "Y cannot be seen" would be great, but they're not easy to do now (a rough approximation is sketched below).
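Pieces of this list are at least partially scriptable. For the screen-size point, one specific symptom, the horizontal scrollbar, can be checked mechanically. A sketch, again with Puppeteer, a hypothetical localhost URL, and arbitrary sample widths; it catches overflow but says nothing about whether the layout actually looks reasonable:

```ts
// Sketch: flag horizontal overflow at a few common viewport widths.
// http://localhost:3000 is hypothetical; the widths are arbitrary samples.
import puppeteer from "puppeteer";

async function main() {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  for (const width of [320, 768, 1024, 1920]) {
    await page.setViewport({ width, height: 800 });
    await page.goto("http://localhost:3000");
    // If the document is wider than the viewport, something overflows.
    const overflows = await page.evaluate(
      () =>
        document.documentElement.scrollWidth >
        document.documentElement.clientWidth
    );
    if (overflows) {
      console.error(`horizontal scrollbar at width ${width}px`);
    }
  }

  await browser.close();
}

main();
```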
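The unused-CSS point is where browser hooks have actually started to appear: Chrome can report CSS coverage, and Puppeteer exposes it. A sketch, with the big caveat that coverage is per page load, so rules used only on other pages or in hover/JS-toggled states get falsely flagged, which is exactly why this is still hard:

```ts
// Sketch: report how much CSS a single page load actually exercised.
// Coverage is per-visit, so anything used only on other pages or in
// hover/JS-toggled states will show up as a false positive.
import puppeteer from "puppeteer";

async function main() {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  await page.coverage.startCSSCoverage();
  await page.goto("http://localhost:3000"); // hypothetical URL
  const coverage = await page.coverage.stopCSSCoverage();

  for (const entry of coverage) {
    // Each entry lists the byte ranges of the stylesheet that were used.
    const usedBytes = entry.ranges.reduce((sum, r) => sum + (r.end - r.start), 0);
    const percent = ((100 * usedBytes) / entry.text.length).toFixed(1);
    console.log(`${entry.url}: ${percent}% of CSS used on this load`);
  }

  await browser.close();
}

main();
```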
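And a rough approximation of the "X is fully visible and not covered" assertion can be built from getBoundingClientRect plus elementFromPoint. It only samples the element's center point, so partially covered elements can slip through; isVisible is a hypothetical helper, not anything a library provides:

```ts
// Sketch: heuristic "is this element actually visible?" check, run
// inside the page via page.evaluate. Only the element's center point
// is sampled, so this is an approximation, not a real assertion.
import puppeteer, { Page } from "puppeteer";

async function isVisible(page: Page, selector: string): Promise<boolean> {
  return page.evaluate((sel) => {
    const el = document.querySelector(sel);
    if (!el) return false;
    const rect = el.getBoundingClientRect();
    // Zero size means display: none or collapsed layout.
    if (rect.width === 0 || rect.height === 0) return false;
    // Whatever is on top at the element's center should be the element
    // itself or one of its descendants; otherwise it's covered
    // (off-screen elements also fail this check).
    const cx = rect.left + rect.width / 2;
    const cy = rect.top + rect.height / 2;
    const topmost = document.elementFromPoint(cx, cy);
    return topmost !== null && el.contains(topmost);
  }, selector);
}
```

In a test you'd call it as `await isVisible(page, "#modal")` and assert on the result, but it's a long way from the "fully visible, nothing cut by overflow" assertion the bullet above asks for.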
I don't have any solutions. Hopefully over the next few years it will get better, or we'll replace CSS with something more testable.