Tuesday, November 28, 2017

How to watch high speed let's plays on London Underground

le petit chat by FranekN from flickr (CC-NC-ND)

Apparently the idea that some places - like London trains - are offline never occurred to anyone in California or Seattle, or wherever people who write mobile software tend to live. And even support for high speed playback is not nearly as common as it should be.

So I came up with a process to deal with it - which even with all the scripts to automate it still has far too many steps. I'm not saying I recommend it to anyone, but maybe someone needs to do something similar, and they might use it as a starting point.

Download let's plays

First, let's find a bunch of let's plays we want to watch. It's best to use playlists instead of individual videos to reduce URL copy and pasting time, but it works for both.

To download them we can use youtube-dl, which is available as a homebrew package (brew install youtube-dl), or directly from its website.

$ youtube-dl -t -f "bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best" \
  "url1" "url2" "url3"

Youtube offers videos in many formats, and the arguments above are what I found to give the highest quality and the best compatibility with various video players. Default settings often caused issues.
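When downloading whole playlists it's also worth adding flags to skip already-downloaded videos and to keep going when a single video fails - both are standard youtube-dl options:

$ youtube-dl -t -i --download-archive downloaded.txt \
  -f "bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best" "playlist-url"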

Speed up the videos

There are plenty of command line tools to manipulate audio and video, and they tend to have ridiculously complicated interfaces.

I wrote the speedup_mp3 script (available in my unix-utilities repository), which wraps such tools to provide easy speedup of various formats - including video files.

The script uses ffmpeg to speed up videos - as well as sox and id3v2 to deal with audio files, if you ever need to do the same to some podcasts. All those dependencies can be satisfied with brew install ffmpeg sox id3v2.

The script can speed up a whole directory of downloaded videos at once by a 2.0x factor:

$ speedup_mp3 -2.0 downloaded_videos/ fast_videos/

Adjust that number to your liking. Factors higher than 2.0 are not currently supported, as ffmpeg requires multiple speedup rounds in that case. I plan to add support for that later.
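For reference, the core of what the script does to a single video at 2.0x is roughly this ffmpeg invocation (a sketch - the real script handles more formats and options). The 2.0 limit comes from ffmpeg's atempo audio filter, which only accepts factors between 0.5 and 2.0 per pass:

$ ffmpeg -i input.mp4 \
  -filter_complex "[0:v]setpts=PTS/2.0[v];[0:a]atempo=2.0[a]" \
  -map "[v]" -map "[a]" output.mp4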

The process takes a lot of time, so it's best left overnight to do its thing.

The script skips videos which already exist in the target directory, so you can add more videos to the source, run it again, and it won't redo videos it already converted.

Put them on Dropbox

Infuriatingly, there doesn't seem to be any good way to just send files to an Android tablet from a laptop over WiFi. There used to be some programs for it, but they got turned into microtransaction nonsense.

If you already use Dropbox, the easiest way is to put the sped up files there. This step is a bit awkward of course, as video files are big, people's upload speeds are often low, and the free Dropbox plan is pretty small.

If that doesn't discourage you, open the Dropbox app on your tablet, and use the checkbox to make your files available offline. You don't need to wait for it to finish syncing - Dropbox will keep updating the files as they get uploaded.

After that any video player works - I use VLC. Just open the Dropbox folder, tap the video, and it will open in VLC and play right away. The first time you do it, make sure to set VLC as the default app to avoid an extra dialog.

Isn't this ridiculously overcomplicated?

Yeah, it sort of is. Some parts of it will probably get better - for example speed controls for video/audio playback are getting more common, so you could skip that part (watching at 100% speed is of course totally silly). It still makes some sense to pre-speedup to save space and battery on the device, as faster files are proportionally smaller, but if you feel it's not worth the hassle, you can probably find a video player with the appropriate functionality.

TfL has shown zero interest in fixing the lack of connectivity on the London Underground, and the mobile ecosystem assumes you're always online or everything breaks, so this part will probably remain a major pain point for a very long time.

The part I find most embarrassing is the lack of any builtin way to just send files over to a device. Hopefully this gets fixed soon.

Saturday, November 18, 2017

10 Unpopular Opinions

Cat by kimashi tower from flickr (CC-BY)

I posted these on twitter a while back at Robin's request, but I wanted to elaborate a bit and give some context.

The list avoids politics and anything politics-adjacent like economics, and it's not just about preferences.

If these turn out to be not controversial enough, I might post another list sometime in the future.

Listening to audio or watching videos at 100% speed is a waste of life

People speak very slowly, and for a good reason. When you talk with another person you don't just process what they said - you're also preparing your responses, considering their reactions to your responses, and so on. With more than two people involved, it gets even more complicated.

None of this applies when you're just listening to something passively. Speech speed is designed to leave you with enough spare brainpower to model your interlocutor - when there's no interlocutor to model, that spare brainpower is simply wasted.

It will probably take a while to get used to it, but just speed it up - 200% should be comfortable for almost all content: podcasts, audiobooks, let's plays, TV etc. At first I used slight speedups like 120%, but I kept increasing it.

A side effect of this is that you might end up listening to music at higher speeds too (I end up using 140%), and people find this super weird.

I recommend this Chrome extension to control exact playback speed. It works with all major video sites.

I also wrote the speedup_mp3 command line tool for podcasts and audiobooks, but nowadays most devices have builtin methods.

Oh and back in analog days, speeding up audio messed up the pitch, so everything sounded funny. That's not true of modern methods.

Any programmer who does not write code recreationally is invariably mediocre at best

This comes up every now and then on sites like reddit, and the masses of mediocre programmers are always like "oh it's totally fine to just code at work". It's not.

Coding is unique in its ability to change the world - with even tiny amounts of effort you can affect reality. If someone never codes recreationally, this means one of:
  • They're so content they never needed or wanted to create something that didn't exist before
  • They coded some stuff, but never bothered to Open Source it
  • They'd like to, but they're just not good enough
So when you're hiring, all CVs without a github link should go straight to the bin.

"Couldn't be bothered to Open Source it" used to be sort of excusable, but nowadays it's so easy to push something to github that Signaling 101 strongly implies people without a github account are just bad.

And that applies even to junior / graduate roles. Even if you don't have anything amazingly useful to show yet, you can still share as you learn.

Avoidance of suffering can't be basis of morality - if it was, knocking out a few pain genes would be highest moral imperative

Nobody buys morality systems based on "God said so" or "Kant said so", and when people spend too much time on utilitarianism, they run into all kinds of problems.

So it became fashionable to ignore all pleasurable parts of utilitarianism, and just focus on minimizing suffering.

This is a total nonstarter. "Pain" and "suffering" are not exactly the same thing, but if you want to minimize suffering, getting rid of pain is pretty much mandatory - and it's just a few simple gene edits away from being abolished completely.

So far nobody's interested in researching gene edits for humans or animals to get rid of pain, so by revealed preference they don't actually buy their own stated beliefs that avoidance of suffering is terribly important.

An obvious objection might be that people with congenital insensitivity to pain keep getting themselves into physically dangerous situations, but that's completely irrelevant. They live in a world of pain-sensitive people, which is currently full of objects dangerous to anyone without pain sensitivity. It would take very modest effort to redesign common risk factors for greater safety, and to establish cultural norms of always seeking medical help just in case whenever something unusual is happening to one's body - not just when it's painful (since nothing ever will be).

Or even if that was somehow unachievable, we could simply reduce pain sensitivity without completely losing it as a signal. If it was really the key to all morality, science should drop everything and focus on it.

Any takers? No? I thought so.

Mobile "games" are closer to fidget spinner than to real games

As a proud gamer, I find it infuriating that people call those mobile things "games".

It's not that they're bad games - I have no problem with bad games. They are not games.

For a good analogy, let's say you're into movies. And then someone is like "oh I totally love movies, I put news on tv playing in the background every morning while I get ready to go out". Ridiculous, isn't it? Somehow everybody else is spared from this nonsense, except gamers.

A game - like a movie - is something you actually get fully into. In game time, or movie time.

A mobile "game" - like background TV news - is something happening part-time mentally, just to fill otherwise dead time. Like on a train, in a long queue, or otherwise when you can't do anything better.

You know what mobile "games" are closer to? Fidget spinners. Rubik's cubes. Sudokus. Toys. Not games.

That's not to say there aren't some legit games on mobile platforms, like let's say Hearthstone. They have nothing in common with all that fidget spinnery stuff.

Future medicine will develop easy fitness pill/hardware, and modern diet/exercise obsession will be wtf-tier to them

We evolved in a very different world, and recently nearly everyone all over the world is getting overweight, horribly unfit, and suffering from all kinds of chronic conditions as a result.

Currently the best way people have to deal with it is to go on ever crazier diets, spend billions on "healthy" food and weight loss products, spend hours every week in gyms - and all that effort has at most a modest effect.

But why is any of that even remotely necessary? You already have all the genes necessary to be fit, healthy, and attractive (and if you don't, most simple genetic problems can be fixed with simple medical interventions). If that fails, it's because something about the current environment messes with your body's regulatory system so much that the result is failure to achieve your biological potential.

Contrary to "calorie" nonsense, all the dieting and exercise is just attempt to make your regulatory system work more like it's evolutionarily designed to.

At some point we'll inevitably figure out ways to monitor and affect the body's regulatory system directly, skipping this insanity of self-denial and endless wasted hours for very modest results.

For a good example, consider the 20th century's biggest health menace - smoking cigarettes. It led to enormous social campaigns, punitive taxation, and in some especially evil countries like the UK the government is literally using death panels against smokers. Then vaping came, and you can get basically all the benefits of smoking cigarettes with basically zero of the health risks.

The problem is completely solved. Well, at least it would be if governments and society fully embraced vaping instead of treating it as smoking-tier evilness.

For an older example, people used to have crazy complicated dietary cleanliness rules to reduce their exposure to pathogens. All forgotten now, except among religious nuts. Food sold in supermarkets is pathogen-free, and we moved on.

We already have some examples of this direct approach working - stomach surgery has far stronger and more immediate results than all kinds of diets and exercise put together, with zero effort needed - and there are less invasive methods in development.

There have also been a lot of pills which improved fitness and reduced obesity greatly, but they keep getting banned due to rare side effects, or as part of the evil War on Drugs.

Or alternatively maybe sexbots are going to get so good everyone is going to get many hours of intense exercise every night without any self-denial. But whichever way, it's going to get solved.

MongoDB figured out the one true way to represent data as JSON documents - now if only everything else about it was any good

Relational databases are sort of insane. They essentially model data as a collection of Excel spreadsheets. There's some irrelevant mathematical nonsense like relational calculus, but it has only the most remote relationship with actual RDBMSs.

Would you consider writing a program where the only data type was an Excel spreadsheet? What kind of question is that - obviously not. Yet a lot of you use relational databases, with some ORM to make those Excel spreadsheets look kinda like something more useful, and it's painful.

Sure, they have a lot of nice stuff on top of those Excel spreadsheets - like the ability to merge multiple Excel spreadsheets into a new temporary Excel spreadsheet - but that's all they ever do.

We don't need any of that. MongoDB-style storing of data as collections of JSON documents is as close to perfect as it gets. And its performance can be pretty amazing.

It's just not very good at anything else. The lack of a good query language - the silly thing of building JSON query trees is not even remotely acceptable. Take a look at this website which translates very simple SQL into MongoDB queries. They are insane.
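To get a taste of it, here's a trivial GROUP BY expressed as an aggregation pipeline of nested hashes (a sketch using the Ruby mongo driver; the database and collection names are made up):

require "mongo"

client = Mongo::Client.new(["localhost:27017"], database: "shop")

# SQL: SELECT status, COUNT(*) AS total FROM orders GROUP BY status
client[:orders].aggregate([
  { "$group" => { "_id" => "$status", "total" => { "$sum" => 1 } } }
]).each { |doc| p doc }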

If we had MongoDB-style data modelling with a good query language on top of it, it would win all the database wars.

By the way, programs which literally use Excel as their backend engine are an actual thing.

Farm animals are generally better off than wild animals - enjoy that chicken

Wild animals live on the edge of Malthusian equilibrium - with lives just tolerable enough to survive, generally on the edge of starvation, death by predation, or disease. And in times of abundance, they just fight for status in their pack, with a lot more losers than winners. It's not a great life.

None of that applies to domesticated animals. They have safety, abundance of food, freedom from disease, and their lives end as painlessly as possible in their prime, saving them from the degenerations of old age.

That's not to say their lives are anywhere near optimized for greatest happiness, but by any dispassionate evaluation the contrast is really one-sided.

And it's not like going vegan would somehow reduce suffering - those cows and chickens would simply never exist.

So enjoy the meat.

Popularity of javascript won't last long - compiling real languages to web assembly is near future

Javascript was never meant as a "real" general purpose language. It was created for 10-line hacks to validate online forms and other such trivial things, and it was perfectly adequate for that. Then jQuery happened, and it turned even more into a special purpose language for browser APIs.

Thanks to the great success of web browsers as a platform, it somehow managed to tag along and is enjoying a temporary time of popularity, being used for things far bigger than is reasonable.

But Javascript has no real competitive advantages. All the advantages are in browser APIs, and any language which compiles to something browsers can run can use them.

Right now the Web has a mix of:
  • sites with old style trivial Javascript, jQuery, and simple plugins like Facebook buttons - that's close to 99% of the web
  • sites with new Javascript frameworks - they're so rare you can't even see them in popularity statistics, except Angular, which somehow gets over the 1% mark
  • very small number of high profile custom written sites like Google Maps and Gmail
There's a gap of about two orders of magnitude between each of these categories.

Anyway, the interesting thing is that in the framework world, people have already abandoned Javascript and use various Javascript++ languages like CoffeeScript, JSX, TypeScript, whatever Babel does, etc. And it's all compiled, with the browser never seeing raw code.

This is all an intermediate situation, and the only long term equilibrium will be Javascript++ being displaced by actual programming languages like Ruby or Python.

Right now all the ways to use them in the browser, like Opal, are in their infancy - but when you look at the numbers, everything about Javascript frameworks is in its infancy.
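For example Opal can already compile plain Ruby to JavaScript (a minimal sketch; Opal.compile is its documented API):

require "opal"

# Compile a snippet of Ruby into a JavaScript string a browser can run
puts Opal.compile("[1, 2, 3].map { |x| x * 2 }")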

Widespread piracy alternative motivated game companies to treat gamers well - less piracy led to anti-gamer behaviour like loot boxes

Video game piracy is much less common than it used to be. There are many factors, both positive and negative:
  • Steam and other online retailers made it far easier to buy games without waiting a week for the box
  • there are many discount sites and promotions, so even people with less money can buy legit games
  • many games focus on online play, which is harder for pirates to emulate
  • there's been an aggressive DRM effort that mostly worked on consoles, and is even causing some delays on PCs
  • popular pirating sites keep getting shut down or infected by malware

It's still possible to pirate, but it all adds up to much lower rates (unlike, let's say, TV shows, where piracy is as rampant as ever). Whatever the reasons, the result is horrible for gamers.

Back when everyone had the alternative of easy piracy, companies were essentially forced to treat gamers well, as any bullshit would just lead to an alt-tab to The Pirate Bay. Now that piracy is much more niche, companies can do whatever they want.

Day one DLCs, DLCs which are basically bugfixes, DLCs while the game is still in Early Access, DLCs that add up to $500+, all kinds of Pay-to-Win schemes, lootboxes - all that crap is happening not because companies are getting greedier, but because abused gamers are less likely to exercise the pirate option than in the past.

There are no easy solutions. Outrage campaigns just slow down these abusive practices. Platforms like Steam could ban some of the worst abuses, and in theory even game rating agencies and governments could intervene - for example treating lootboxes as gambling, and completely banning them in under-18s games. In practice governments are run by anti-gamer old people, and they're more likely to cause even more harm.

Keeping the piracy option alive is the best way we have if we want to be treated with dignity.

For all its historical significance, apt-get is not really a good package manager

Twenty years ago Linux's main selling points were package managers like apt-get. You didn't need to download software from twenty sites and chase incompatibilities - you just typed one command and it was all set up properly. It even upgraded everything with one command, rarely breaking anything in the process.

It was amazing. It also didn't age well.

Just to cover some differences from a modern (mostly OSX) environment:
  • There's no reason for admin access to install most software
  • Programs are self-updating
  • Many programs have some kind of plugin system
  • Pretty much every programming language has its own package system
  • Quite often you need to install multiple versions
apt-get really doesn't deal with any of it.
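For example, side-by-side versions and installing without admin access are trivial with language package managers (standard rubygems flags):

$ gem install --user-install rails -v 4.2.10
$ gem install --user-install rails -v 5.1.4
# both versions now coexist, and no sudo was needed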

A while ago I'd have found this really funny, but OSX-style package managers like Linuxbrew and Nix are now a thing on Linux.

On Cloud servers people usually use language-specific package managers, or nowadays occasionally even something like Docker.

Either way, Linux is still not on the desktop. I guess the lack of usable graphics card drivers in any distro might be among the reasons.

Wednesday, November 01, 2017

Architecture of z3 gem

Kitten by www.metaphoricalplatypus.com from flickr (CC-BY)

This post is meant for people who want to dig deep into the Z3 gem, or who want to learn from an example how to interface with a complex C library. Regular users of Z3 are better off checking out some tutorials I wrote.

The z3 theorem prover is a C library with a quite complex API, and the z3 gem needs to take a lot of steps to provide a good ruby experience on top of it.

Z3 C API Overview

The API looks conventional at first - a bunch of black box data types like Z3_context and Z3_ast (Abstract Syntax Tree), and a bunch of functions to operate on them. For example, to create a node representing the equality of two nodes, you call:

Z3_ast Z3_API Z3_mk_eq(Z3_context c, Z3_ast l, Z3_ast r);

A huge problem is that many of those calls claim to accept any Z3_ast, but it needs to be a particular kind of Z3_ast, otherwise you get a segfault. It's not even a static limitation - l and r can be anything, but they must represent values of the same type. So any kind of thin wrapper is out of the question.

Very Low Level API

The gem uses ffi to set up Z3::VeryLowLevel with direct C calls. For example, the aforementioned function is attached like this:

attach_function :Z3_mk_eq, [:ctx_pointer, :ast_pointer, :ast_pointer], :ast_pointer

There are 618 API calls, so attaching them manually would be tedious - instead, a tiny subproject lives in api/ and generates most of it with some regular expressions. A list of C API calls is extracted from Z3 documentation into api/definitions.h. They look like this:

def_API('Z3_mk_eq', AST, (_in(CONTEXT), _in(AST), _in(AST)))

Then the api/gen_api script translates it into proper ruby code. It might seem like this could be handled by the ffi library itself, but there are too many Z3-specific hacks needed. A small number of function calls can't be handled automatically, so they're written manually.
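The mechanical part of that translation is roughly this kind of regexp munging (a simplified sketch, not the actual gen_api code, which handles many more types and special cases):

# Map def_API type names to ffi types (just two of them here)
TYPES = { "CONTEXT" => "ctx_pointer", "AST" => "ast_pointer" }

line = "def_API('Z3_mk_eq', AST, (_in(CONTEXT), _in(AST), _in(AST)))"
if line =~ /\Adef_API\('(\w+)', (\w+), \((.*)\)\)\z/
  name, ret, args = $1, $2, $3
  arg_types = args.scan(/_in\((\w+)\)/).flatten.map { |t| ":#{TYPES.fetch(t)}" }
  puts "attach_function :#{name}, [#{arg_types.join(', ')}], :#{TYPES.fetch(ret)}"
end
# Prints: attach_function :Z3_mk_eq, [:ctx_pointer, :ast_pointer, :ast_pointer], :ast_pointer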

One of those manually-written cases is Z3_mk_add, which creates a node representing the addition of any number of nodes; it's attached with a signature of:

attach_function :Z3_mk_add, [:ctx_pointer, :int, :pointer], :ast_pointer

Low Level API

There's one intermediate level between raw C calls and ruby code. Z3::LowLevel is also mostly generated by api/gen_api. Here's an example of automatically generated code:

def mk_eq(ast1, ast2) #=> :ast_pointer
  VeryLowLevel.Z3_mk_eq(_ctx_pointer, ast1._ast, ast2._ast)
end

And this one is written manually, with proper helpers:

def mk_and(asts) #=> :ast_pointer
  VeryLowLevel.Z3_mk_and(_ctx_pointer, asts.size, asts_vector(asts))
end

A few things are happening here:
  • Z3 API requires Z3_context pointer for almost all of its calls - we automatically provide it with singleton _ctx_pointer.
  • We get ruby objects, and extract C pointers from them.
  • We return raw FFI::Pointer C pointers and leave the responsibility for wrapping them into ruby objects to the caller, as we don't actually have enough information here to do so.
Another thing the Z3::LowLevel API does is set up an error callback, to convert Z3 errors into Ruby exceptions.
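A minimal sketch of how such a callback can be set up with ffi (the method name here is my invention and the gem's actual code is more involved; Z3_set_error_handler is the underlying C API call):

module Z3
  module LowLevel
    def self.setup_error_handler
      # Keep a reference so the callback isn't garbage collected
      @error_handler = FFI::Function.new(:void, [:pointer, :int]) do |_ctx, error_code|
        raise Z3::Exception, "Z3 error: #{error_code}"
      end
      VeryLowLevel.Z3_set_error_handler(_ctx_pointer, @error_handler)
    end
  end
end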

Ruby objects

And finally we get to ruby objects like Z3::AST, which is a wrapper for an FFI::Pointer representing a Z3_ast. Other Z3 C data types get similar treatment.

module Z3
  class AST
    attr_reader :_ast
    def initialize(_ast)
      raise Z3::Exception, "AST expected, got #{_ast.class}" unless _ast.is_a?(FFI::Pointer)
      @_ast = _ast
    end

    # ...

    private_class_method :new
  end
end

The first weird thing is the Python-style pseudo-private ._ast. It really shouldn't ever be accessed by users of the gem, but it needs to be accessed by Z3::LowLevel a lot. Ruby doesn't have any concept of C++-style "friend" classes, so I've chosen the Python pseudo-private convention over lots of .instance_eval or similar.

Another weird thing is that the Z3::AST class prevents object creation - only its subclasses, representing nodes of specific types, can be instantiated.

Sorts

Z3 ASTs represent multiple things - mostly sorts and expressions. Z3 automatically interns ASTs, so two identically-shaped ASTs will be the same underlying object (like two identical Ruby Symbols), saving us memory management hassle here.

Sorts are sort of like types. The gem creates a parallel hierarchy, so every underlying sort gets an object of its specific class. For example, here's the whole Z3::BoolSort, which should only ever have a single instance.

module Z3
  class Sort < AST
    def initialize(_ast)
      super(_ast)
      raise Z3::Exception, "Sorts must have AST kind sort" unless ast_kind == :sort
    end
    # ...

module Z3
  class BoolSort < Sort
    def initialize
      super LowLevel.mk_bool_sort
    end

    def expr_class
      BoolExpr
    end

    def from_const(val)
      if val == true
        BoolExpr.new(LowLevel.mk_true, self)
      elsif val == false
        BoolExpr.new(LowLevel.mk_false, self)
      else
        raise Z3::Exception, "Cannot convert #{val.class} to #{self.class}"
      end
    end

    public_class_method :new
  end
end

The ast_kind check is additional segfault prevention.

BoolSort.new creates a Ruby object with the instance variable _ast pointing to the Z3_ast describing the Boolean sort.

It might seem overkill to set up so much structure for BoolSort, whose expressions can only take two values, but some Sort classes have multiple instances. For example Bit Vectors of width n are:

module Z3
  class BitvecSort < Sort
    def initialize(n)
      super LowLevel.mk_bv_sort(n)
    end

    def expr_class
      BitvecExpr
    end    

Expressions

Expressions are also ASTs, but they all carry a reference to the Ruby instance of their sort.

module Z3
  class Expr < AST
    attr_reader :sort
    def initialize(_ast, sort)
      super(_ast)
      @sort = sort
      unless [:numeral, :app].include?(ast_kind)
        raise Z3::Exception, "Values must have AST kind numeral or app"
      end
    end

This again might seem like overkill for expressions representing Bool true, but it's extremely important for a BitvecExpr to know if it's 8-bit or 24-bit. Because if they get mixed up - segfault.

Building Expressions

Expressions can be built from constants:

IntSort.new.from_const(42)

Declared as variables:

IntSort.new.var("x")

Or created from one or more existing expression nodes:

module Z3
  class BitvecExpr < Expr
    def rotate_left(num)
      sort.new(LowLevel.mk_rotate_left(num, self))
    end

As you can see, the low level API doesn't know how to turn those C pointers into Ruby objects.

This interface is a bit tedious for the most common cases, so there are wrappers with a simpler interface, which also allow mixing Z3 expressions with Ruby expressions, with a few limitations:

Z3::Int("a") + 2 == Z3::Int("b")

For some advanced uses you actually need the whole interface.
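Putting it together, a minimal end-to-end session with the wrappers looks like this (a sketch - the exact model output formatting may differ):

require "z3"

solver = Z3::Solver.new
solver.assert Z3::Int("a") + 2 == Z3::Int("b")
solver.assert Z3::Int("b") == 10
if solver.satisfiable?
  # The model maps variables to values, e.g. a => 8, b => 10
  puts solver.model
end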

Creating Sorts and Expressions from raw pointers

For ASTs we construct ourselves, we track their sorts. Unfortunately sometimes Z3 gives us raw pointers, and we need to guess their types - most obviously when we get back a solution to our set of constraints.

Z3's introspection API lets us figure this out, and find the proper Ruby objects to connect to.

It has the unfortunate limitation that we can only see the underlying Z3 sorts. I'd prefer to have SignedBitvectorExpr and UnsignedBitvectorExpr as separate types with nice APIs, but unfortunately there's no way to infer whether an answer Z3 gave came from a Ruby SignedBitvectorExpr or UnsignedBitvectorExpr, so that idea can't work.

Printer

Expressions need to be turned into Strings for human consumption. Z3 comes with its own printer, but it uses some messy Lisp-like syntax, with a lot of weirdness in edge cases.

The gem instead implements its own printer using traditional math notation. Right now it sometimes overdoes explicit parentheses.

Examples

The gem comes with a set of small and intermediate examples in the examples/ directory. They're a good starting point for learning common use cases.

There are obvious things like sudoku solvers, but also a regular expression crossword solver.

Testing

Testing uses RSpec and has two parts.

Unit tests require a lot of custom matchers, as most objects in the gem override ==.

Some examples:

let(:a) { Z3.Real("a") }
let(:b) { Z3.Real("b") }
let(:c) { Z3.Real("c") }
it "+" do
  expect([a == 2, b == 4, c == a + b]).to have_solution(c => 6)
end
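The have_solution matcher above is one of those custom matchers - it can't use plain ==, as that builds a Z3 expression instead of comparing values. Here's a simplified sketch of how such a matcher can be defined (the gem's real matcher does more; this version only checks that the expected values are consistent with the assertions):

RSpec::Matchers.define :have_solution do |expected|
  match do |assertions|
    solver = Z3::Solver.new
    assertions.each { |assertion| solver.assert(assertion) }
    # Assert the expected values too - if it's still satisfiable,
    # the expected solution is consistent with the constraints
    expected.each { |var, value| solver.assert(var == value) }
    solver.satisfiable?
  end
end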

Integration tests run everything in examples/ and verify that the output is exactly as expected. I like reusing other things as test cases like this.

How to properly setup RSpec

kitten by trash world from flickr (CC-NC-ND)

This post is recommended for everyone from total beginners to people who literally created RSpec.

Starting a new project

When you start a new ruby project, it's common to begin with:

$ git init
$ rspec --init

to create a repository and some sensible TDD structure in it.

Or for rails projects:

$ rails new my-app -T
$ cd my-app

Then edit the Gemfile, adding rspec-rails to the right group:

group :development, :test do
  gem "rspec-rails"
end

And:

$ bundle install
$ bundle exec rails g rspec:install

I feel all those Rails steps really ought to be folded into a single operation. There's no reason why rails new can't take options for a bunch of popular packages like rspec, and there's no reason why we can't have some kind of bundle add-development-dependency rspec-rails to manage simple Gemfiles automatically (like npm already does).

But this post is not about any of that.

What test frameworks are for

So why do we even use test frameworks, really, instead of plain ruby? A minimal test suite is just a collection of test cases - which can be simple methods, or functions, or code blocks, or whatever works.

The most important thing a test framework provides is a test runner, which runs each test case, gathers results, and reports them. What could the possible results of a test case be?
  • Test case could pass
  • Test case could have test assertion which fails
  • Test case could crash with an error
And here's where everything went wrong. For silly historical reasons, test frameworks decided to treat an assertion failure as if the test crashed with an error. This is just insane.

Here's a tiny toy test - it's quite compact, and it reads perfectly fine:

it "Simple names are treated as first/last" do
  user = NameParser.parse("Mike Pence")
  expect(user.first_name).to eq("Mike")
  expect(user.middle_name).to eq(nil)
  expect(user.last_name).to eq("Pence")
end

If assertion failures abort the test and the first name assertion fails, then we still have no idea what the code actually returned, and at this point the developer will typically run binding.pry or an equivalent, just to mindlessly copy and paste checks which are already in the spec!

We want the test case to keep going, and all assertion failures to be reported afterwards!

Common workarounds

There's a long list of workarounds. Some people go as far as recommending "one assertion per test" - an absolutely awful idea which results in enormous amounts of boilerplate and hard to read, disconnected code. Very few real world projects follow it:

describe "Simple names are treated as first/last" do
  let(:user) { NameParser.parse("Mike Pence") }

  it do
    expect(user.first_name).to eq("Mike")
  end

  it do
    expect(user.middle_name).to eq(nil)
  end

  it do
    expect(user.last_name).to eq("Pence")
  end
end

RSpec has some shortcuts for writing this kind of one-assertion tests, but the whole idea is just misguided, and very often it's really difficult to twist a test case into a set of reasonable "one assertion per test" cases, even disregarding code bloat, readability, and performance impact.

Another idea is to collapse all the assertions into one. As the vast majority of assertions are simple equality checks, this usually sort of works:

it "Simple names are treated as first/last" do
  user = NameParser.parse("Mike Pence")
  expect([user.first_name, user.middle_name, user.last_name])
    .to eq(["Mike", nil, "Pence])
end

Not exactly amazing code, but at least it's compact.

Actually...

What if the test framework was smart enough to keep going after an assertion failure? It turns out RSpec can do just that - but you need to explicitly tell it to be sane, by putting this in your spec/spec_helper.rb:

RSpec.configure do |config|
  config.define_derived_metadata do |meta|
    meta[:aggregate_failures] = true
  end
end

And now the code we always wanted to write magically works! If the parser fails, we see all the failed assertions listed. This really should be on by default.

Limitations

This works with both expect and should syntax, and doesn't clash with any commonly used RSpec functionality.

It does not work with config.expect_with :minitest, which is how you can use assert_equal syntax with the RSpec test driver. That's not a common thing to do, other than to help migrate from minitest to RSpec, and there's no reason why it couldn't be made to work in principle.

What else can it do?

You can write a whole loop like:

it "everything works" do
  collection.each do |example|
    expect(example).to be_valid
  end
end

And if it fails somehow, you'll get a list of just the failing examples in the test report!

What if I don't like the RSpec syntax?

RSpec syntax is rather controversial, with many fans but also many people who intensely hate it. It has changed multiple times during its existence, including:

user.first_name.should equal("Mike")
user.first_name.should == "Mike"
user.first_name.should eq("Mike")
expect(user.first_name).to eq("Mike")

And in all likelihood it will keep changing. RSpec sort of supports a more traditional expectation syntax as a plugin, but it currently doesn't support failure aggregation:

assert_equal "Mike", user.first_name

When I needed to mix them for migration reasons, I just defined assert_equal manually, and that was good enough to handle the vast majority of tests.
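A minimal version of that looks like this (the module and method names here are mine; it only covers simple equality checks):

module MinitestishAssertions
  def assert_equal(expected, actual)
    expect(actual).to eq(expected)
  end
end

RSpec.configure do |config|
  config.include MinitestishAssertions
end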

In the long term, I'd of course strongly advise all other test frameworks in every language to abandon the historical mistake of treating assertion failures as errors, and to switch to this kind of failure aggregation.

Considering how much time a typical developer spends dealing with failing tests, even this modest improvement in the process can result in significantly improved productivity.