The best kittens, technology, and video games blog in the world.

Saturday, February 27, 2010

Wikipedia bias in video game articles

Cat and Wii by plynoi from flickr (CC-NC)

In spite of all claims to neutrality, Wikipedia can be ridiculously biased at times - it just outsources its biases to some external authority.


Here's the Wikipedia article about Empire: Total War - but you could look at almost any video game article. Here's an excerpt:
Empire: Total War received acclaim from reviewers upon release; several critics commended it as one of the foremost strategy titles of recent times. Praise was bestowed upon the extensive strategy breadth, accurate historical challenges and visual effects. The real-time land battles, with a far greater focus on gunpowder weaponry than earlier Total War titles, were thought to be successfully implemented.

Here's another:
And the entire Reception section is one big praise-fest - just look at it yourself.


In reality Empire is a bug-infested game with crappy AI, "historical accuracy" is sacrificed to gameplay or general coolness without a second thought, and neither users nor even the developers received it particularly well. Some but not all problems were fixed with patches - but reviews were based on the buggiest pre-release or very early versions, so reviewers must have noticed the bugs - they just failed to mention them.

Here's Metacritic users:

Here's Amazon:

Here's one of the developers, Mike Simpson (about the pre-1.5 patch version):
I had 6 copies of Empire: Total War sat on my shelf intended for close gamer friends that I didn’t send out because I was too embarrassed about the flaws.

How did it go so wrong?

Users and even the developers were highly disappointed by the game, and outright hated the first versions. About the only group of people who absolutely loved it were game reviewers.

But game reviewers are not independent! They're publishers' outsourced PR department! Any review outlet which is overly critical of a publisher's dearest games will lose access to pre-release games - and more importantly, advertising money. If your job depends on not seeing flaws in games...

What annoys me is not so much the reviewers - everyone knows they're corrupt spineless wankers - it's how Wikipedia, instead of writing neutral articles based on users' opinions, outsources its bias to reviewers. Here's a diagram of bias flow:
Publishers → Reviewers → Wikipedia

There's as little place for these unofficial PR departments in Wikipedia as there is for publishers' official PR departments. Unlike users, who don't have any conflict of interest, there's nothing remotely independent or unbiased about professional game reviewers. This is to some extent true of all kinds of reviewers, but in game reviewing the situation is far more pathological than with movies or books or almost any other kind of review I can think of.

Wikipedia failed its mission hard. Outsourced bias is still bias. How come it can write about abortion with far less bias than about video games?

Saturday, February 20, 2010

A more honest review of Empire: Total War

Combat Laila by mize2oo5 from flickr (CC-NC-ND)

As we all know, professional game reviewers spend most of their time fellating big publishers - and if one isn't good enough at it, they lose all access to pre-releases and have to find a real job. So imagine my shock when a "universally acclaimed" game like Empire: Total War didn't live up to the fella... I mean, critical acclaim.

By the way, this spinelessness only affects game reviewers - movie reviewers seem to be fairly honest, even if they're highly opinionated pricks. I guess it's mostly because it's so difficult to control access to movies - you can write your review 2 hours after the theatrical premiere (or the scene release, which often happens a couple of weeks earlier) even if the distributors hate you and want you to die.

But it is somewhat disappointing - as the previous two games in the series were actually good, especially after a mod or two.

What's good about Empire: Total War

I will let my hate out in due time. Let's concentrate on the good bits.

Micromanagement, which plagued Medieval 2: Total War and turned it into Eve Online once your empire reached 20 or so settlements (at least in vanilla - there's a mod for that), is trimmed to more reasonable levels.

Diplomacy works from the diplomacy screen - there's no need for diplomats and princesses... This was a particularly bad part of M2TW. In Rome diplomats were very powerful, as they could bribe units and settlements - and there was never any point in engaging in actual "diplomacy" due to bugs like this one, still unfixed at the time of writing. Medieval made bribing nearly impossible, and added an extra type of diplomat - the Princess - with even more useless options.

Then there were priests of all kinds, heretics, witches (I never had a witch do anything other than fool around), inquisitors (supposedly trying to burn your agents at the stake, but they always had very low levels and I don't remember them succeeding even once) - what it really was was infuriating micromanagement.

And then there were spies - a useful agent type for a change; assassins - ridiculously useless, as the only enemy agents worth killing were the ones particularly difficult to kill; and merchants - somewhat profitable if you loved micromanagement, or played the Moors and used the fort merchant stack cheat.

Empire gets rid of all that nonsense, leaving just the Gentleman, the Rake (spy), and the Missionary. And you can actually see where cities are on the map, and who owns what. This part of the game has been improved - but when you think about it, "oh look, we removed a lot of annoying shit we put in previous games" is not exactly such a great achievement.

You get nice explanations of why units on the battlefield feel better or worse - I vaguely recall them being there in Rome but gone in Medieval... And of why different powers feel what they feel about you - not that it matters that terribly much, they'll all eventually attack you anyway.

I'm ambivalent about the new system of buildings, social classes, town wealth, trade goods and all that. I'm not terribly enthusiastic about it, as I got used to the Rome/Medieval system, but then it doesn't strike me as particularly bad - just different.

What's bad about Empire: Total War

Enough with the praise. Let's get to the parts that suck hard. Do you remember M2TW and its gunpowder units? Artillery was somewhat useful for sieges - I remember my Citadel with Cannon Towers and a relatively weak garrison destroying 3 full Mongol stacks invading all together; and even the simplest bombard or catapult could make a lot of holes in most city walls, unless the defenders had their own artillery (or the balls to charge knights into yours, which the AI has never quite developed).

And in a field battle it would shoot itself out of ammo without hitting a stationary enemy once. You know how to detect a n00b in M2TW multiplayer? Start a no-rules game (normally artillery is banned). If they buy any artillery - they're a total n00b and they will suffer.

And that was perfect. Medieval artillery was a siege toy. The first effective use of artillery in the field was by Jan Žižka's Hussite armies in 1419 - and these were wagon forts - imagine a group of medieval battle tanks - not cannons standing in the field waiting to be run over by knights.

Amongst their weaponry were such diverse elements as: fear, surprise, ruthless efficiency, an almost fanatical devotion to the cause of Utraquism... I mean, pikes and crossbows. Widespread use of field artillery had to wait until the 17th century.

It gets worse. Remember how firearm-wielding infantry in Medieval 2: Total War was good at maybe scaring off elephants and Indians (not that you ever got to elephants or Indians - by that time the campaign was almost over), and Pavise Crossbowmen could absolutely massacre any firearm units? Not that they ever got to shoot anything - if anyone so much as looked at them funny, the musketeers and arquebusiers of M2TW would get all confused and spend the rest of the battle changing and re-changing formation.

So how awesome would be the idea of taking the two most broken unit types from Medieval 2: Total War - artillery and firearm infantry - and basing an entire game around them? About that awesome.

They are still broken. Artillery is completely useless for hitting anything smaller than fort walls, while missile infantry confusingly runs around trying to form nice lines instead of actually shooting anyone.

To balance it out they decided to nerf cavalry. Now cavalry in Rome and M2TW was ridiculously overpowered. And you know what? That's historically accurate (except it should historically be way more expensive than infantry, but wasn't).

It's a myth that the coming of gunpowder brought the end of knightly heavy cavalry. If anything, it's what began it - only once infantry got crossbows and firearms did knights start wearing full plate armor! And the greatest cavalry charge in history - and the reason women without burqas are on the streets - was the Polish hussar charge of 1683, breaking the Turkish siege of Vienna.

Early 1600s hussars - almost the time of Empire: Total War - routinely massacred professional armies ten times their size. Here's an example. Here's another. How about this?

Yes, not every cavalry unit was that great, and a century of progress in firearms does its thing. But Empire: Total War cavalry got so ridiculously nerfed that the only reason it's useful at all is all the other units being so bad! Pretty much its only functions are killing off routing units and destroying artillery - as if all that crappy artillery was worth the bother.

User Interface

I'm not done. I hate what they did to the user interface. All units look the same. Now I know this is more historically accurate - but I cannot see anything on the battlefield. They don't even bother properly highlighting the unit flag or card as they did in previous games. The radar is horrible, displaying vague blobs instead of unit locations... You cannot cycle through units with Tab - you have to select them manually one by one, try to figure out which one got selected (as highlighting is horrible), and give orders which they will then ignore, running around looking confused.

I almost feel as if all my input were limited to setting up the initial deployment before the battle starts, and then chasing routing enemy units with my cavalry. Bad user interface and bad unit AI work together synergistically to create a really horrible user experience.

There are naval battles - a massive, ridiculous waste of effort, as in no Total War game have navies ever had any significant use. Oh sorry - in M2TW, other countries with which I had friendly relations or even an alliance, and with which I had no common border, would sometimes get a mission to blockade my ports, which they would do, starting the most pointless war imaginable.

And just like with land battles, once it gets past 2 or 3 ships per side it turns into a massive clusterfuck of confusing interface and AI ignoring your commands. Someone even made a Hitler Downfall video about it - it's that bad.

What shocks me is how easy it would be to fix the UI - give units distinct looks, make highlighting better, etc... If you have any mod recommendations, please tell me.

But then, why should I act surprised? If half the people buy the game based on bribed reviews and initial shininess, and the other half would pirate it anyway, why should they bother finishing it before release?

tl;dr version: It would be good if they bothered finishing it instead of rushing the release - even more so than with previous Total War games; also, professional game reviewers suck publishers' cocks.

Friday, February 12, 2010

Game theory, video game piracy, and market failure

AAAAR2D2 by Kaptain Kobold from flickr (CC-NC-SA)

Most games are shit - this is a true fact with which there will be no arguing. And while we cannot really expect every game to be Portal-level quality, it is quite puzzling why the average quality is so dismal. It's surely not for lack of money - in spite of game piracy being so easy, people pay tens of billions of dollars for video games.

Here's a very interesting slideshow describing the gaming industry. A first good hint of what's wrong with gaming can be seen in this cost breakdown:

So publishers, retailers, marketing, console vendors, and other parasites take almost all the money, and just 16% of what the customer pays goes into development. Why is that?

Name recognition

The core of the problem is games being what economists call "experience goods". No, it doesn't mean games are "experienced" - what it means is that customers have no reliable way of telling whether a game is any good before playing it (and paying for it). So what can customers do instead? Rely on any signals which correlate at all with quality - like being sequels to known good games, or being based on something well known from outside gaming. Let's leave games for a moment - do you know what the 50 highest-grossing movies ever were (inflation etc. aside for now, it doesn't change that much)? Original content in bold (by which I mean content without major name recognition; I know Forrest Gump was based on a book, Avatar on Pocahontas, and Titanic - well, on the Titanic - but in such cases name recognition didn't play a major role).
  1. Avatar
  2. Titanic
  3. The Lord of the Rings: The Return of the King
  4. Pirates of the Caribbean: Dead Man's Chest
  5. The Dark Knight
  6. Harry Potter and the Philosopher's Stone
  7. Pirates of the Caribbean: At World's End
  8. Harry Potter and the Order of the Phoenix
  9. Harry Potter and the Half-Blood Prince
  10. The Lord of the Rings: The Two Towers
  11. Star Wars Episode I: The Phantom Menace
  12. Shrek 2
  13. Jurassic Park
  14. Harry Potter and the Goblet of Fire
  15. Spider-Man 3
  16. Ice Age: Dawn of the Dinosaurs
  17. Harry Potter and the Chamber of Secrets
  18. The Lord of the Rings: The Fellowship of the Ring
  19. Finding Nemo
  20. Star Wars Episode III: Revenge of the Sith
  21. Transformers: Revenge of the Fallen
  22. Spider-Man
  23. Independence Day
  24. Shrek the Third
  25. Harry Potter and the Prisoner of Azkaban
  26. E.T. the Extra-Terrestrial
  27. Indiana Jones and the Kingdom of the Crystal Skull
  28. The Lion King
  29. Spider-Man 2
  30. Star Wars Episode IV: A New Hope
  31. 2012
  32. The Da Vinci Code
  33. The Chronicles of Narnia: The Lion, the Witch and the Wardrobe
  34. The Matrix Reloaded
  35. Up
  36. Transformers
  37. The Twilight Saga: New Moon
  38. Forrest Gump
  39. The Sixth Sense
  40. Ice Age: The Meltdown
  41. Pirates of the Caribbean: The Curse of the Black Pearl
  42. Star Wars Episode II: Attack of the Clones
  43. Kung Fu Panda
  44. The Incredibles
  45. Hancock
  46. Ratatouille
  47. The Lost World: Jurassic Park
  48. The Passion of the Christ
  49. Mamma Mia!
  50. Madagascar: Escape 2 Africa

So 34 out of the top 50 (68%), or 16 out of the top 20 (80%), are not original content. And most of the best movies ever are not on the list. And if you think about it, what Pixar (Finding Nemo, Up, Ratatouille, The Incredibles), Dreamworks (Kung Fu Panda and all the Shreks), and Disney (The Lion King) are doing is not so much original content standing on its own as a series of good but fairly formulaic animations - Pixar could just as well call Ratatouille "Pixar 8: Ratatouille" and people would come for the name recognition (have you ever seen a Pixar movie in which the protagonist was not an adolescent male?), and Dreamworks, well...

Yes, most movies on the list are pretty good, but that's not even strictly necessary - look at #11, for evolution's sake! And quite often sequels which are demonstrably worse than the originals earn more money anyway, and it's not just due to inflation. Just looking at the list - The Matrix, Star Wars, Madagascar, Shrek, and Pirates of the Caribbean all went downhill in quality and skywards in revenue.

I don't know how to easily measure the relative contributions of genuine quality (well, I could perhaps rank-correlate imdb scores with some Bayesian filtering for rating uncertainty - maybe some other time...) and name recognition - but the latter factor is just huge.

You thought movies are bad?

That's how bad the situation with movies is - and movies are the easiest case. Yes, you only know whether a movie is good after you see it - but movies are all the same format (not counting 3D), of similar length, usually in a small number of easily classifiable genres - so it's not that hard. Within a single genre you can "almost objectively" say that The Dark Knight was better than Spider-Man 2, or something like that.

Yes, there will be individual differences, but it's a simple problem of low dimensionality - the completely non-personalized imdb score is a pretty good predictor of how much you'll like a movie. Add or subtract a few points depending on what genres you like and other trivial criteria, and it gets eerily accurate. And if that's not enough, the Netflix Prize shows that if you rank just a handful of movies, that's enough to predict very, very accurately whether you'll like any movie ever made.

How about books?

So movies are solved, at least in theory. How about books? It seems that the bestsellers are the Bible and the list of quotations by Chairman Mao, followed by more religious books, more Communist propaganda, a dictionary of Chinese... All right, that list makes no sense whatsoever, let's forget about it. And we all know people buy books mostly based on series and author name recognition - J. K. Rowling and Stephenie Meyer can both tell you that.

But even so, with books Amazon can be quite decent in its recommendations. Unfortunately its algorithm seems to work only with books. Here's what it does with movies (recommendations for A New Hope):

And with games (recommendations for Medieval 2 Total War):
How useless is that?

Back to video games

Game theory time. What are the choices game developers and publishers face?
  • Make genuinely good games - expensive, risky, if not flashy enough customers often won't even know these games exist
  • Make games that make good first impression - often due to name recognition, and are really shitty once you get to play them for more than 15 minutes - cheap, easy, makes shitloads of money
Which one would gamers prefer to happen? Which one will actually happen? That's my point. Until we find a way to reliably tell which games are good before paying for them - most games will be flashy shit.

As I'm blogging about games sucking, let me give you an example of particularly egregious abuse: "Far Cry 2". It is just like Far Cry, except it's developed by another company, is in a different genre, set in an entirely unrelated world, with not a single character or plot element in common - and basically there's nothing - absolutely nothing - linking Far Cry and Far Cry 2.

You know what that is? Criminal fraud. The publishers of Far Cry 2 should be prosecuted for fraud. Calling their game something it is not, to deceive customers, is exactly that. It's also widely accepted marketing practice, as if that made it any less criminal. The point of trademarks should be protecting customers from deception, not making trademark owners shitloads of money as it is now - and it was once like that: a century ago Coca-Cola was sued by the American government because it changed its formula to no longer contain extracts of coca leaves or kola nuts. That was before corporate interests took over all our governments. Now trademarks no longer serve customers - they serve corporations exclusively.

So how can a gamer find out whether a game is any good? First, the problem is extremely difficult compared to the one with movies. Games offer a very wide variety of different experiences, and it's really difficult to compare different games. Someone might like the mercenary-shooting missions in the original Far Cry, but hate the Trigen missions with a passion. By someone I mean virtually every single gamer - but if different parts of the same game can evoke such different reactions, how can you give such a game a single score?

Or even worse, people might absolutely love the game, except for a huge number of severe bugs which make it virtually unplayable. Like every single Total War game ever released. You know, it's been 3 years and they still haven't fixed those few semicolons in Medieval 2: Total War which completely break the diplomacy system.

You rarely have such varied reactions to a single movie (the shitty ending appended to Schindler's List for no reason notwithstanding). Maybe with the so-bad-it's-good genre, but that's an entirely different matter.

Example of a movie 10.6% consider so-bad-it's-good; most people think it's plain bad

So how can you find out whether a game is good? Clearly you cannot rely on Amazon, and game reviewers are one of the most incompetent and corrupt groups of journalists ever. I wonder why Fox News didn't start a game review channel - they would all feel right at home there. Not that it's much different for other journalists who need pre-release access to products provided by publishers to stay in business. Publishers have them by the balls.

Once upon a time there was a tradition of game demos and "shareware" - you could give a game a try, and if you liked it you could purchase the full version. This tradition almost died, partially because most genres don't really work well in demo format, and partially because it's so easy to get the full version to try.

So as a gamer you have the following three options to get 1 good game:
  1. Pay for games that make good first impression, most of which will be shit. You pay for 1 good game and 9 shitty games.
  2. Bittorrent games without paying, test which ones you like, then buy games you like. You pay for 1 good game and delete 9 shitty games after quick testing.
  3. Bittorrent games without paying, never pay.
Now in a perfect world everyone would follow option #2 - publishers would earn money on good games, and would earn nothing on shitty games like Far Cry 2. There's just one problem here - once you've found your game, you really have no interest in paying for it. Whatever little stigma or risk is attached to piracy, it is the same in both option #2 and option #3. And publishers try hard (and fail hard) to make #3 difficult - and to the extent they're successful, they make #2 equally difficult in the process.

As a result almost everyone does #1 or #3. #1 creates an incentive for the production of shitty games. #3 creates no incentive whatsoever - and so games stay shitty, gamers stay unhappy, and everyone blames piracy.

Getting rid of piracy would not solve the problem at all - just look at the PlayStation 3. Everyone would have to do #1 - pay loads of money for shitty games - resulting in high prices and shitty games. Nobody would be happy.

The only way #2 could happen in the short term would be some sort of honor system - but it's hard to create an honor system if most publishers are assholes who treat gamers like shit (starting with unskippable ads every time you start a game, and obnoxious DRM), developers get only 16% of what customers pay, and people who do #2 are treated just like those who don't pay for anything. Not to mention option #2 being technically illegal. Honor systems can be extremely powerful, but they just won't happen in an environment like that.

What else could happen? Well, someone might figure out a way of telling whether a game is any good other than bittorrenting it. Remember - once the incentive for creating shitty games disappears, piracy won't be necessary. That doesn't sound terribly likely, due to the difficulty of the problem. And even if such a system existed, it would have to be used by most gamers, and it would have to be difficult for publishers to abuse. Remember - we already have systems which can tell you with very high accuracy whether you'll like a movie - and yet most people don't use them, going for name recognition instead - resulting in an endless stream of shitty sequels.

Or we could use something like flattr. You'd pay some sum every month for playing any video games you want - and at the end of the month your money would be distributed to the developers of the games you played most (or marked as the ones you liked most). This would be almost as good as option #2 above, and probably the most realistic way to solve this for good. Of course publishers won't go for it easily, as they'd lose control over the market, all the power going to developers and gamers.
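The payout step of such a scheme is simple enough to sketch. Here's a toy Ruby version - the developer names, numbers, and the function itself are all made up for illustration, not any real flattr API:

```ruby
# Toy sketch of the flattr-style payout: split a gamer's monthly fee
# among developers in proportion to hours played. All names and numbers
# are hypothetical.
def distribute_subscription(monthly_fee, hours_played)
  total = hours_played.values.inject(0.0) { |sum, h| sum + h }
  return {} if total.zero?
  payouts = {}
  hours_played.each do |developer, hours|
    payouts[developer] = monthly_fee * hours / total
  end
  payouts
end

distribute_subscription(20.0, "CA" => 30.0, "Valve" => 10.0)
# => {"CA" => 15.0, "Valve" => 5.0}
```

Aggregate that over all subscribers and the money follows actual play rather than marketing budgets.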

Until that happens, long live Pirate Bay!

Thursday, February 11, 2010

Could Mercurial please die already? kthxbye

I don't understand why programmers - a crowd on whom you can usually rely to have a healthy amount of hatred for competing technologies - suddenly became so lovey-dovey when it came to DVCSs.
Here's an example.

This is all wrong. Mercurial should die. So should bazaar, darcs, and countless other systems I won't even bother listing here. Why?

The best tool for the job fallacy

Very often people act as if choices made by one person in one context didn't affect other people, or even the same person in other contexts. What's the name of this problem? Externality denialism - that's what it is!

It makes very little difference what font you use in your editor - if every single person used different fonts and different colours, nobody would suffer, except maybe the occasional odd person forced to work on someone else's machine and have their eyes burned by Comic Sans. Best font for the job, whatever.

And it likewise makes very little difference which editor you're using (well, maybe there would be more nice plug-ins if everyone used TextMate, but the effect is not that big), or which DVCS GUI frontend, and so on.

It makes some difference what programming language you use. If everyone used one of a small number of programming languages, they would have all the libraries, all the tools, and all the support needed. But if everyone uses something else - some use RLisp, some use colorForth, some use Megan-Fox-picture - the resulting chaos means nobody has the tools, libraries, or support needed to get any work done. Yes, Megan-Fox-picture might be a better language than Ruby, but the net effect of everybody using a different language is a disaster.

Does it mean everyone should use Java? Not necessarily. It only means that the threshold for choosing an unusual language should be considerably higher than "slightly better tool for the job". Is Ruby enough better than Java? Obviously. If we lived in an alternative world in which everyone used Python, would it make sense to write your program in Ruby? Well, it's a borderline case - maybe, maybe not. Should we rewrite the Linux kernel in Scheme? Maybe not. Or we could have different languages for genuinely different situations - each dominating a well-specified niche.

And even with programming languages it's not such a big deal if you use an unusual language every now and then. Programs just do stuff. They can survive on their own. And most programming languages, diverse as they are, rely on the same standards for interoperability - POSIX, Unicode, TCP/IP and so on. Even if your program is written in LOLCODE and mine in Clojure, they can probably talk to each other reasonably well.

It can get painful if the interaction gets too close - for example when a C library with fake OO is used from Ruby, and the memory management of the two is entirely incompatible, resulting in random memory leaks - yes, I'm talking to you, Gtk+ (not that anybody cares, Linux on the desktop is dead). But basic interactions between different programs mostly work even across the language gap.

Version Control Systems

It's not so with DVCSs. DVCSs are interoperability technology. If I like git and you like Mercurial, we cannot just both "use the best tool for the job", as neither of us will be able to talk to the other. Imagine that in addition to HTTP, HTML, CSS and so on, Microsoft released its Microsoft Transfer Protocol, Microsoft Markup Language, and so on - do you have any idea how much mess that would get us into? Oh wait, we're already sort of there.

Here's the game-theoretic matrix for DVCS choice. Let's say half of developers like git more and half like Mercurial more:
  • everyone uses git - Mercurial-lovers a bit grumpy, but otherwise we're all happy and interoperable
  • everyone uses Mercurial - git-lovers a bit grumpy, but otherwise we're all happy and interoperable
  • git-lovers use git, Mercurial-lovers use Mercurial - there is no interoperability and everyone suffers
  • git-lovers use Mercurial, Mercurial-lovers use git - now that's just being silly
This situation is called Battle of the Sexes in game theory. So imagine the real men like git, and total pussies like Mercurial. Or something.
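To make the matrix concrete, here's a tiny Ruby sketch with made-up payoff numbers (the exact values don't matter, only their ordering) that checks which of the four profiles are Nash equilibria - i.e. where neither side gains by switching tools alone:

```ruby
# Hypothetical payoffs for the DVCS coordination game. First number:
# git-lover's happiness; second: Mercurial-lover's. Interoperating
# matters more than getting your favourite tool.
PAYOFFS = {
  [:git, :git] => [3, 2],  # all git - hg fans a bit grumpy
  [:hg,  :hg]  => [2, 3],  # all Mercurial - git fans a bit grumpy
  [:git, :hg]  => [0, 0],  # no interoperability, everyone suffers
  [:hg,  :git] => [0, 0],  # that's just being silly
}

# A profile is a Nash equilibrium if no player can do better by
# unilaterally deviating to the other tool.
def nash_equilibria
  PAYOFFS.keys.select do |a, b|
    [:git, :hg].all? { |a2| PAYOFFS[[a2, b]][0] <= PAYOFFS[[a, b]][0] } &&
      [:git, :hg].all? { |b2| PAYOFFS[[a, b2]][1] <= PAYOFFS[[a, b]][1] }
  end
end

nash_equilibria # => [[:git, :git], [:hg, :hg]]
```

Both all-git and all-Mercurial are stable; the mixed profiles are not - which is exactly why "best tool for the job" doesn't apply to interoperability technology.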

The solution? We have to agree on one DVCS or another. Yes, some day someone will make an even better one, and we'll all switch again - but neither git nor Mercurial is sufficiently better than the other to win on merits alone, and there's really no point in having both.

Right now git is significantly more popular. So what should happen? Mercurial should die. Yes, you put a lot of effort into it, and we're very sorry, etc. etc., but seriously - we need a second DVCS as much as we need a second character set, or a second .

Friday, February 05, 2010

What is all this Perl doing in my Ruby?

Spacecat. by kmndr from flickr (CC-BY)

First, some quick background. C is a very simple programming language and doesn't have exceptions - problems are indicated with return codes, which you're supposed to check but always forget about, resulting in all sorts of problems. C++ tried to retrofit exceptions on top of that, and it was a spectacular failure due to bad interactions between exceptions and manual memory management - but let's skip that.

Shell doesn't have exceptions either, but almost all problems result in some error message being printed to stderr, so at least you know that something went wrong.

Perl tries to be higher-level but is modeled after C and shell, so while it sort of supports exceptions for some high-level packages, almost all of its basic OS-interacting functions like open fail quietly, and you need to manually check their return values - at least it's easier than C, and ... or die "Cheeseburger acquisition failed: $!"; usually suffices.

Ruby mostly copies Perl when it comes to OS interaction, but fixes this particular problem - OS interaction always raises an exception when something goes wrong. Or does it?


There is one really infuriating exception, where not only is Perl's error handling worse than both C's and shell's - Ruby copies this design failure straight from Perl, and it's not even fixed in Ruby 1.9 yet!

The C function system is fairly straightforward - it executes whatever string you pass to it in the shell. So if there's an error - let's say the command fail doesn't exist - int main(){ system("fail"); return 0; } results in "sh: fail: command not found" printed on stderr, or somesuch depending on your variant of Unix. Just like the shell would do it, and just what would be sane.

Both Perl and Ruby copy this function - except they do it wrong! The system function is not terribly efficient - it first spawns a shell process, which only then executes the relevant command. So some smarty-pants decided to optimize it a bit - if the string passed to Perl/Ruby system looks straightforward enough, Perl/Ruby will execute it directly (split on spaces, fork, pass to exec) without spawning a shell process.

And in this micro-shell implementation inside system they both just forgot to check for error conditions altogether. system "fail >/dev/null" (in either of these languages) looks "non-trivial", so it spawns a shell process and results in sh: fail: command not found. But system "fail" - as it's so simple - goes straight to the optimized micro-shell, and fails silently. No exception, no stderr warning, no error code, nothing.

Well yes, you could check the process return code - but process return codes are non-zero not only on errors; they're also a generic way for Unix processes to communicate - for example diff returns non-zero if the files differ, which is in no way an error condition.
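Here's a quick illustration of what you actually get back from Ruby's system (behaviour as observed on recent versions - worth double-checking on yours): true for exit status 0, false for a non-zero status, and a falsy value when the command couldn't be run at all - with no exception raised in any case:

```ruby
# system raises nothing; all you get is a return value and $? to inspect.
ok      = system("true")                   # exit status 0       -> true
nonzero = system("false")                  # non-zero exit       -> false
missing = system("no_such_command_xyzzy")  # couldn't run at all -> falsy
                                           # (nil on recent Rubies), silently

# $? holds the last child's status - but non-zero doesn't mean error:
# diff-style tools use exit codes to communicate, so you can't just
# treat every non-zero status as a failure.
```

So distinguishing "diff found differences" from "diff doesn't exist" is entirely on you - which is exactly the problem.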

The fix

The optimization should be either fixed or turned off. As a trivial workaround - because the triviality check verifies that the string contains none of *?{}[]<>()~&| \ $;'`" or a newline - prepending "" before the first non-blank character of the string passed to system() seems to work well enough. "" evaluates to an empty string in shell.

$ ruby -e 'system "\"\"fail"'
sh: fail: command not found
$ ruby1.9 -e 'system "\"\"fail"'
sh: fail: command not found
$ perl -e 'system "\"\"fail"'
sh: fail: command not found
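The same workaround is easy to wrap in Ruby. A hypothetical helper (the name shell_system is mine) that inserts the "" prefix at the first non-blank character, forcing the real shell path:

```ruby
# Hypothetical wrapper around system: prepend "" to the first non-blank
# character so the string always fails the "trivial" check and goes
# through a real shell, with real shell error reporting on stderr.
def shell_system(command)
  system(command.sub(/\S/) { |c| '""' + c })
end

shell_system("true")     # shell runs the word ""true, i.e. true -> true
shell_system("  false")  # prefix lands after leading spaces     -> false
```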

But seriously, please fix this, okay? Even Python gets it right already.