Code coverage with SimpleCov across multiple processes

Apr 1, 2018

Tags: kbsecret, devblog, programming, ruby

This is a short writeup of how I got SimpleCov coverage reports working across multiple Ruby processes, specifically when created through Kernel#fork.


As part of KBSecret 1.3 (soon to be released!), I've significantly refactored the way in which KBSecret executes commands (e.g., kbsecret list and kbsecret new), as part of a larger effort to simplify the codebase and improve performance.

KBSecret now executes commands "in-process," meaning that it does not exec or otherwise spawn a fresh Ruby interpreter to handle the command. This has two important consequences: commands run faster, since there's no interpreter or library startup cost on each invocation, and commands now share the state of the process that invokes them.

However, KBSecret commands still behave as if they're in complete control of the process — they call exit and abort on error conditions, fiddle with I/O, and do all sorts of other things. This makes testing difficult, especially when the tests are of error conditions — calling exit in the command takes the entire test harness down with it.
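To illustrate the problem, here's a minimal sketch (my own stand-in, not KBSecret's actual code): a command that calls abort or exit terminates whichever process runs it, so any assertions after a direct call never execute. Running it in a child process lets the parent observe the exit status instead:

```ruby
# Hypothetical stand-in for a command that hits an error condition.
def run_command
  abort "Fatal: something went wrong" # prints to stderr and exits with status 1
end

# Calling run_command directly here would kill the test process.
# Run it in a child process instead, and inspect how it terminated:
pid = fork { run_command }
_, status = Process.wait2(pid)

puts status.exitstatus # => 1; the child's exit code, with the parent unharmed
```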

Testing commands with fork and pipes

fork is the conceptually simple solution to the problem of testing programs that terminate or otherwise modify the process state. Ruby even provides a nice Kernel#fork method that takes a block:

# BAD! This will take down the test harness if the command decides to exit.
KBSecret::CLI::Command.run! cmd, *args

# GOOD! The command's termination has no (direct) impact on the test harness.
fork do
  KBSecret::CLI::Command.run! cmd, *args
end

# We want to make sure our forked process finishes before we test its state.
Process.wait
However, fork comes with its own challenges — now that we're in a separate (child) process, we no longer have direct access to the child's standard I/O descriptors. Since commands communicate with the user through stdin, stdout, and stderr, we'll need to introduce a pipe for each:

def kbsecret(cmd, *args, input: "")
  pipes = {
    stdin: IO.pipe,
    stdout: IO.pipe,
    stderr: IO.pipe,
  }

  # Send our input into the write-end of our stdin pipe, for the child to read.
  pipes[:stdin][1].puts input

  fork do
    # Child: close those pipe ends we don't need.
    pipes[:stdin][1].close
    pipes[:stdout][0].close
    pipes[:stderr][0].close

    # Reassign the child's global standard I/O handles to point to our pipes.
    $stdin = pipes[:stdin][0]
    $stdout = pipes[:stdout][1]
    $stderr = pipes[:stderr][1]

    # ...and run the command.
    KBSecret::CLI::Command.run! cmd, *args
  end

  # Parent: close those pipe ends we don't need.
  pipes[:stdin][0].close
  pipes[:stdin][1].close
  pipes[:stdout][1].close
  pipes[:stderr][1].close

  # Wait for our child to finish.
  Process.wait

  # Finally, return the contents of the child's stdout and stderr streams for testing.
  [pipes[:stdout][0].read, pipes[:stderr][0].read]
end

This works as expected:

>> # a command that runs normally
>> kbsecret "version"
=> ["kbsecret version 1.3.0.pre.3.\n", ""]
>> # a command that terminates via `exit` due to a bad flag
>> kbsecret "list", "-z"
=> ["", "\e[31mFatal\e[0m: Unknown option `-z'.\n"]

Introducing code coverage to the forked processes

So far, we have commands running in their own processes for the purposes of resiliency/testing failure conditions. That's cool, but what we ultimately want is coverage statistics from those child processes. How do we get there?

Well, because we're using fork, our child processes share the same library context as their parents. That means the child gets a copy of anything required or loaded pre-fork, including SimpleCov's state.
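A quick demonstration of this inheritance (a generic Ruby sketch, not SimpleCov-specific): any library, global, or value that exists before the fork is visible in the child, because the child starts with a copy of the parent's memory. Changes the child makes don't leak back, which is exactly why coverage results from each process need merging later:

```ruby
require "securerandom"

# State established pre-fork...
token = SecureRandom.uuid
$counter = 41

pid = fork do
  # ...is copied into the child: same loaded libraries, same values.
  $counter += 1 # modifies only the child's copy
  exit($counter == 42 && !token.empty? ? 0 : 1)
end

_, status = Process.wait2(pid)
puts status.success? # => true: the child saw the pre-fork state
puts $counter        # => 41: the child's changes didn't propagate back
```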

To take advantage of this, we need to modify our coverage preamble slightly, from something like this:

  require "simplecov"

to this:

  require "simplecov"

  # Only necessary if your tests *might* take longer than the default merge
  # timeout, which is 10 minutes (600s).
  SimpleCov.merge_timeout 3600

  # Store our original (pre-fork) pid, so that we only call `format!`
  # in our exit handler if we're in the original parent.
  pid = Process.pid
  SimpleCov.at_exit do
    SimpleCov.result.format! if Process.pid == pid
  end

  # Start SimpleCov as usual.
  SimpleCov.start

We also need to add a tiny bit of code to our fork block:

fork do
  # Give our new forked process a unique command name, to prevent problems
  # when merging coverage results.
  SimpleCov.command_name SecureRandom.uuid

  # Same as the fork-block code above...
end

And ta-da, multi-process coverage reports:

[Screenshot: SimpleCov results. Each UUID above is a separate process.]

[Screenshot: SimpleCov results. command/new.rb, command/list.rb, and command/rm.rb are all tested under separate processes.]

Afternote: Uploading to Codecov

This technique works great locally, but not so great on remote services like Codecov. To get properly merged multi-process coverage results on Codecov, you'll need to do some additional post-processing.

Here's an example rake task:

desc "Upload coverage to codecov"
task :codecov do
  require "simplecov"
  require "codecov"

  formatter = SimpleCov::Formatter::Codecov.new
  formatter.format SimpleCov::ResultMerger.merged_result
end
This handles uploading to Codecov, so there's no need to require "codecov" in your helper.rb or equivalent file.

Thus, the complete workflow:

# Run unit tests with code coverage enabled.
$ COVERAGE=1 bundle exec rake test
# Stitch the previous results together and send the merged result to Codecov.
$ bundle exec rake codecov

Check out KBSecret's repository for a working example.

Thanks for reading!

Reddit discussion