
How to Properly Setup RSpec


If you use Ruby then you likely also use (or should be using) RSpec to help test your code. In this post, we take a look at how to properly set up RSpec, whether you're a beginner or an expert.


kitten by trash world from flickr (CC-NC-ND)
This post is recommended for everyone from total beginners to people who literally created RSpec.

Starting a New Project

When you start a new Ruby project, it's common to begin with:

$ git init
$ rspec --init

This creates a repository and some sensible TDD structure in it.
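Concretely, rspec --init generates just two files: spec/spec_helper.rb with commented-out recommended settings, and a top-level .rspec file that loads it:

```
# .rspec
--require spec_helper
```

Anything you put in .rspec becomes default command-line options for every rspec run.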

Or for Rails projects:

$ rails new my-app -T
$ cd my-app

Then edit Gemfile to add rspec-rails to the right group:

group :development, :test do
  gem "rspec-rails"
end

$ bundle install
$ bundle exec rails g rspec:install
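For reference, the rspec:install generator creates the same two files as rspec --init, plus a Rails-specific helper:

```
create  .rspec
create  spec/spec_helper.rb
create  spec/rails_helper.rb
```

spec/rails_helper.rb loads the Rails environment on top of spec_helper.rb, so specs that need Rails require it instead.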

I feel all those Rails steps really ought to be folded into a single operation. There's no reason why rails new can't take options for a bunch of popular packages like rspec-rails, and there's no reason why we can't have some kind of bundle add-development-dependency rspec-rails command to manage a simple Gemfile automatically (like npm already does).

But this post is not about any of that.

What Test Frameworks Are For

So why do we even use test frameworks really, instead of using plain Ruby? A minimal test suite is just a collection of test cases - which can be simple methods, or functions, or code blocks, or whatever works.

The most important thing a test framework provides is a test runner, which runs each test case, gathers results, and reports them. What could be the possible results of a test case?

  • The test case could pass.
  • The test case could have a test assertion which fails.
  • The test case could crash with an error.
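The three outcomes above can be sketched in plain Ruby. This is a hypothetical minimal runner (assert_eq, run_tests, and AssertionFailed are illustrative names, not part of any real framework) whose whole job is to tell an assertion failure apart from a crash:

```ruby
# Minimal sketch of a test runner. Each test case is just a block;
# the runner distinguishes pass, assertion failure, and crash.

class AssertionFailed < StandardError; end

def assert_eq(expected, actual)
  unless expected == actual
    raise AssertionFailed, "expected #{expected.inspect}, got #{actual.inspect}"
  end
end

def run_tests(cases)
  cases.map do |name, block|
    begin
      block.call
      [name, :pass]
    rescue AssertionFailed    # an assertion failed
      [name, :fail]
    rescue StandardError      # the test itself crashed
      [name, :error]
    end
  end
end

results = run_tests(
  "passes"  => -> { assert_eq(2, 1 + 1) },
  "fails"   => -> { assert_eq(3, 1 + 1) },
  "crashes" => -> { nil.unknown_method }
)
# results == [["passes", :pass], ["fails", :fail], ["crashes", :error]]
```

Note that the only way the runner can tell the two failure modes apart is the exception class, which is exactly the design decision the next paragraph complains about.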

And here's where everything went wrong. For silly historical reasons, test frameworks decided to treat a test assertion failure as if it was a test crashing with an error. This is just wrong.

Here's a tiny toy test; it's quite compact and reads perfectly fine:

it "Simple names are treated as first/last" do
  user = NameParser.parse("Mike Pence")
  expect(user.first_name).to eq("Mike")
  expect(user.middle_name).to eq(nil)
  expect(user.last_name).to eq("Pence")
end

If assertion failures are treated as errors, and the first name assertion fails, then we still have no idea what the code actually returned, and at this point a developer will typically rerun the code in a console or something equivalent just to mindlessly copy and paste checks which are already in the spec!

We want the test case to keep going, and then all assertion failures to be reported afterward!
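The idea can be sketched in plain Ruby (a toy illustration, not how RSpec implements it): record each failed check instead of raising, and report the whole list at the end. The Struct below is a hypothetical stand-in for NameParser.parse returning a buggy result:

```ruby
# Toy sketch of failure aggregation: each check records a failure
# instead of aborting the test case.

def run_with_aggregation
  failures = []
  check = lambda do |label, expected, actual|
    unless expected == actual
      failures << "#{label}: expected #{expected.inspect}, got #{actual.inspect}"
    end
  end
  yield check
  failures
end

# Hypothetical parser result standing in for NameParser.parse("Mike Pence"),
# with a simulated bug in first_name:
user = Struct.new(:first_name, :middle_name, :last_name)
             .new("Michael", nil, "Pence")

failures = run_with_aggregation do |check|
  check.call("first_name",  "Mike",  user.first_name)   # fails, but we keep going
  check.call("middle_name", nil,     user.middle_name)  # still executed
  check.call("last_name",   "Pence", user.last_name)    # still executed
end
# failures == ['first_name: expected "Mike", got "Michael"']
```

One run, one complete report of everything that's wrong, with the actual values right there.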

Common Workarounds

There's a long list of workarounds. Some people go as far as recommending "one assertion per test," which is an absolutely awful idea that results in enormous amounts of boilerplate and hard-to-read, disconnected code. Very few real-world projects follow this:

describe "Simple names are treated as first/last" do
  let(:user) { NameParser.parse("Mike Pence") }

  it do
    expect(user.first_name).to eq("Mike")
  end

  it do
    expect(user.middle_name).to eq(nil)
  end

  it do
    expect(user.last_name).to eq("Pence")
  end
end

RSpec has some shortcuts for writing this kind of assertion test, but the whole idea is just misguided, and very often it's really difficult to twist a test case into a set of reasonable "one assertion per test" cases, even disregarding code bloat, readability, and performance impact.

Another idea is to collect all the assertions into one. Since the vast majority of assertions are simple equality checks, this usually, sort of, works:

it "Simple names are treated as first/last" do
  user = NameParser.parse("Mike Pence")
  expect([user.first_name, user.middle_name, user.last_name])
    .to eq(["Mike", nil, "Pence"])
end

Not exactly amazing code, but at least it's compact.


Aggregating Failures

What if a test framework was smart enough to keep going after an assertion failure? Turns out RSpec can do just that, but you need to explicitly tell it to be sane, by putting this in your spec/spec_helper.rb:
RSpec.configure do |config|
  config.define_derived_metadata do |meta|
    meta[:aggregate_failures] = true
  end
end
And now the code we always wanted to write magically works! If the parser fails, we see all failed assertions listed. This really should be on by default.
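If you'd rather not flip the global switch, RSpec (3.3 and later) also lets you opt in per example or per group with the :aggregate_failures metadata tag. A fragment, reusing the hypothetical NameParser from the earlier examples:

```ruby
# Per-example opt-in instead of the global config switch.
# (NameParser is the hypothetical parser from the examples above.)
it "Simple names are treated as first/last", :aggregate_failures do
  user = NameParser.parse("Mike Pence")
  expect(user.first_name).to eq("Mike")
  expect(user.middle_name).to eq(nil)
  expect(user.last_name).to eq("Pence")
end
```

There's also an aggregate_failures block helper for wrapping just part of a test, which is handy when an early expectation genuinely must abort the example.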


This works with both the expect and should syntaxes, and doesn't clash with any commonly used RSpec functionality.

It does not work with config.expect_with :minitest, which is how you can use the assert_equal syntax with the RSpec test driver. It's not a common thing to do, other than to help migrate from minitest to RSpec, and there's no reason why it couldn't be made to work, in principle.

What Else Can It Do?

You can write a whole loop like:

it "everything works" do
  collection.each do |example|
    expect(example).to be_valid
  end
end
And if it fails somehow, you'll get a list of just the failing examples in the test report!

What If I Don't Like the RSpec Syntax?

RSpec syntax is rather controversial, with many fans but also many people who very intensely hate it. It changed multiple times during its existence, including:

user.first_name.should equal("Mike")
user.first_name.should == "Mike"
user.first_name.should eq("Mike")
expect(user.first_name).to eq("Mike")

And, in all likelihood, it will continue changing. RSpec sort of supports more traditional expectation syntax as a plugin, but it currently doesn't support failure aggregation:

assert_equal "Mike", user.first_name

When I needed to mix them for migration reasons, I just defined assert_equal manually, and that was good enough to handle the vast majority of tests.
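Such a shim can be tiny. A sketch (the module and helper names here are illustrative; only the equality case is covered):

```ruby
# Hypothetical migration shim: minitest-style assert_equal defined
# on top of RSpec expectations, so old-style tests keep working.
module MinitestShim
  def assert_equal(expected, actual)
    expect(actual).to eq(expected)
  end
end

RSpec.configure do |config|
  config.include MinitestShim
end
```

Because the shim delegates to expect, these assertions also participate in failure aggregation, unlike the expect_with :minitest route.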

In the long term, I'd of course strongly advise every other test framework in every language to abandon the historical mistake of treating test assertion failures as errors and to switch to this kind of failure aggregation.

Considering how much time a typical developer spends dealing with failing tests, even this modest improvement in the process can result in significantly improved productivity.



Published at DZone with permission of
