Test-Driven to Distraction


Ruby and TDD are inseparable. The language lends itself to test-driven development to the point that it ships with the Test::Unit library as standard.

We all know that we ‘should’ test and that the process will yield more robust, better quality code. But what other benefits do we get from writing tests, when often the tests end up being more lines than the code under test?

Throughout the article I will be using MiniTest::Spec. This is a super cool spec-style 'framework' that also ships as standard with Ruby 1.9. I love RSpec, and having the same kind of syntax at my fingertips is a big plus. But your test framework of choice doesn't really matter here; everything discussed can be applied. Still, I highly recommend trying MiniTest::Spec.
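If you haven't met the spec syntax before, a complete spec can be as small as this (a throwaway example just to show the shape, not part of the project we are about to build):

require 'minitest/spec'
require 'minitest/autorun'

describe String do
  it "will report its own length" do
    "cartoon".length.must_equal 7
  end
end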

Why All These Tests?

We have said testing your code makes it more robust and improves quality. Well, that is not strictly true. You don’t get robustness or quality for free, as your code will only be as good or robust as your tests.

So suppose we write good tests and end up with code that handles lots of different scenarios. Is that all we get? It hardly seems worth the effort; surely, with experience and good knowledge of the story behind what we are coding, we could write robust code without wasting time on tests.

As I’m sure you already know, ‘We get a lot more’.

Tests give us the ability to protect against regression. We find a bug, write a test, implement the solution, and that bug will never see the light of day again.

Having tests also gives us the ability to refactor our code with confidence. I cannot imagine working in any decent-sized application without tests existing. How could I implement a new feature without that firewall of tests protecting legacy code?

Tests also form a contract or specification that your peers will readily understand. The tests will explain the expected behaviour of our objects. For a language like Ruby that scoffs at formal Interface constructs, tests are a must when communicating with other developers.

On a level geared more toward our development style, the act of writing and evolving tests actually leads us to a refined solution. We gain knowledge of the domain as we baby-step through our tests.

Testing is also fun. But let's focus on that last point: how TDD shapes the way we work.

Red, Green, Start Working

Quoting ‘Red, Green, Refactor’ makes the whole process of TDD seem almost trivial. You write the test, watch it fail, and do as little as possible to make it pass. Now we have a few choices.

  1. Rework the test in question.
  2. Add more tests.
  3. Refactor the code under a green light.

I’m sure we have all read that before and thought ‘Yep, no problem’. But is it that simple when we come to the mechanics of writing these little pieces of magic? Well, tests can quickly become downright frustrating and non-productive chores. They become hurried monoliths of junk that simply tick a box on the ‘Good Developer Checklist’ that exists only in our own heads, and we move on, unwilling to touch that piece of code again.

Let us not set foot anywhere near that unhappy place. Instead, we will gather knowledge of what we are trying to achieve and take baby steps until we get there. I can hear an example coming, and it’s processing a file containing fixed-width, non-delimited data.

We have an example file that represents classes in the Cartoons Who Code School. The file has a header that defines the class name, a footer that provides a checksum on the enclosed rows, and rows of data containing each student’s first name, last name, age and gender.

Test Unit 101
WIGGUM        RALF          13MALE
COYOTE        WYLIE         72MALE
RABBIT        JESSICA       25FEMALE
003

Our goal here? Well, we want to process these files: validate each file and extract its contents for later use. For now, however, we will just spit it out on the console.

So let’s look at a first pass at test-driving the code to accomplish this goal.

require 'minitest/spec'
require 'minitest/autorun'

require './student_file'

describe StudentFile do

  describe "parse" do
    it "will return an instance of StudentFile" do
      StudentFile.parse('test_file.txt').must_be_kind_of(StudentFile)
    end
  end

  it "will return the correct class name" do
    valid_file.class_name.must_equal "Class Name"
  end

  it "will return true if the file is valid" do
    valid_file.valid_file?.must_be :==, true
  end

  it "will return false if the file is invalid" do
    invalid_file.valid_file?.must_be :==, false
  end

end

def valid_file
  StudentFile.new(["Class Namen", "RABBIT        JESSICA       25FEMALEn", "001n"])
end

def invalid_file
  StudentFile.new(["Class Name"])
end

So, we are part of the way there. We have established there should be a class-level method parse that takes a file name and returns a new instance of StudentFile. We can now test that the class name is what we would expect and validate the checksum line count expected in the file.
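The tests are the focus here, but for context, here is one minimal implementation that would satisfy those specs, treating the footer as a simple row-count checksum. It is only a sketch of this stage; the finished version in the gist evolves from here.

class StudentFile
  def self.parse(filename)
    self.new(File.readlines(filename))
  end

  def initialize(lines)
    @class_name = lines.shift.to_s.strip  # header row
    @footer     = lines.pop               # checksum row
    @lines      = lines                   # student rows
  end

  attr_reader :class_name

  # The file is valid when the footer matches the number of student rows.
  def valid_file?
    !@footer.nil? && @footer.to_i == @lines.length
  end
end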

Conversing with Your Code

There is a chapter in Getting Real about listening to your code push back, guiding you to the cheap and light solution for features. The same applies to tests: if something is difficult to test, it has probably not been designed as well as it should be, and I start to rethink my interfaces. Instead of the cheap and light solution, we have missed a step somewhere that is making the testing process harder than it should be. And we are posed with an interesting problem when it comes to testing the output of our classmates on the console.

How do we test output on the console? No doubt the seasoned pros will know, but this made me rethink what we are trying to achieve. I mentioned “spit it out to the console” earlier; that was just a blasé comment about a temporary output, and it has caused us a minor headache. However, if we focus on just outputting the data rather than where it goes, we start to see a straightforward solution: we pass the output object to the constructor. A StringIO will do a great job for test purposes, we can default to STDOUT for the console, and voilà! We have something easier to test while being a lot more flexible.

def self.parse(filename, output=STDOUT)
  f = File.new(filename)
  self.new(f.readlines, output)
end

This allows our tests to look something like this:

it "will display error if file is invalid" do
  out = StringIO.new
  invalid_file(out).print
  out.string.must_be :==, "INVALID FILE"
end

def invalid_file(op=STDOUT)
  StudentFile.new(["An invalid file"], op)
end

Just Mock It

Before we go any further, let’s talk briefly about mocks and stubs. When looking at sending the parsed content to the console, we could have used the MiniTest mocking framework or something like flexmock. However, for this example I didn’t feel the need, since StringIO does the job just as adequately. Also, the output is within the domain of what we are testing. Is it really a good idea to mock that kind of thing?

I tend to only use mocks in a few situations.

  1. When relying on responses/behaviours from a 3rd party (e.g. a gem or web service)
  2. When relying on the responses/behaviour of a piece of code outside the current domain (e.g. a database wrapper or an emailer script that has been tested elsewhere)
  3. When it allows me to focus on an element in a complex test.

The third situation does not occur that often. However, when we have a piece of code with a lot of dependencies, it makes sense to mock them out.
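For illustration, here is roughly how the invalid-file spec might look if we did reach for MiniTest::Mock instead of StringIO. Treat it as a hypothetical alternative rather than what the rest of this article uses:

require 'minitest/mock'

it "will display error if file is invalid" do
  # The mock stands in for the output object and insists that
  # print is called once with "INVALID FILE".
  out = MiniTest::Mock.new
  out.expect(:print, nil, ["INVALID FILE"])

  StudentFile.new(["An invalid file"], out).print

  out.verify
end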

On With the Code

We now have a way to test the output to the console. Before going ahead with that, let’s look at the data within the file. My first implementation looked something like this:

def students
  row = []
  @lines[1..-2].each do |line|
    row << ("| " << name(line) << "| "  << age(line) << "| " << gender(line) << "|")
  end
  row.join("n")
end 

def name(row)
  (row[14..27].strip << " " << row[0..13].strip).ljust(28)
end

def age(row)
  row[28..29].ljust(4)
end

def gender(row)
  row[30..-1].strip.ljust(13)
end

It didn’t feel right even as I wrote it. Passing that line variable around just felt horrible. However, I had a passing test, so I could now refactor with confidence, even when extracting the student lines into their own class.

By extracting the lines of student data into their own objects, we can write tests for the lines themselves, removing the need to cover that parsing in our student file tests.

describe StudentLine do

  before do
    @student = StudentLine.new("RABBIT        JESSICA       25FEMALE")
  end

  it "will return the name of the student" do
    @student.full_name.must_equal "JESSICA RABBIT"
  end

  it "will return the age of the student" do
    @student.age.must_equal "25"
  end

  it "should return the gender" do
    @student.gender.must_equal "FEMALE"
  end

end
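The StudentLine implementation itself is straightforward. A sketch along these lines would satisfy the specs above, reusing the column ranges that were hard-coded earlier (the version in the gist may differ in detail):

class StudentLine
  # Fixed-width column offsets in a student row.
  LAST_NAME  = 0..13
  FIRST_NAME = 14..27
  AGE        = 28..29
  GENDER     = 30..-1

  def initialize(row)
    @row = row
  end

  def full_name
    "#{@row[FIRST_NAME].strip} #{@row[LAST_NAME].strip}"
  end

  def age
    @row[AGE]
  end

  def gender
    @row[GENDER].strip
  end
end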

Not only does the above class clean up the implementation of the StudentFile class, it also insulates the StudentFile tests from the ins and outs of parsing student data from a string of text. So, we wrap the formatting of the output in private methods (which technically we shouldn’t test directly), like so:

def initialize(lines, output=STDOUT)
  @class_name = lines.shift.strip
  @footer = lines.pop
  @lines = []

  lines.each do |line|
    @lines << StudentLine.new(line) if line.length > 33
  end

  @output = output
end

def print
  if self.valid_file?
    @output.print formatted_output
  else
    @output.print "INVALID FILE"
  end
end

private

def formatted_output
  [class_name.upcase, separator, header, separator, students, separator].join("\n")
end

def header
  "| STUDENT                     | AGE | GENDER       |"
end

def separator
  "+-----------------------------+-----+--------------+"
end

def students
  row = []
  @lines.each do |line|
    row << ("| " << line.full_name.ljust(28) << "| "  << line.age.ljust(4) << "| " << line.gender.ljust(13) << "|")
  end
  row.join("n")
end
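Putting it together, driving the whole thing is now a one-liner. The output below is illustrative, based on the sample file from earlier:

StudentFile.parse('test_file.txt').print

# TEST UNIT 101
# +-----------------------------+-----+--------------+
# | STUDENT                     | AGE | GENDER       |
# +-----------------------------+-----+--------------+
# | RALF WIGGUM                 | 13  | MALE         |
# | WYLIE COYOTE                | 72  | MALE         |
# | JESSICA RABBIT              | 25  | FEMALE       |
# +-----------------------------+-----+--------------+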

Wrapping Up

We have a script that parses our file and produces the results we want. You can view the finished source at this gist. The truth is, we have hit a refactor stage again. We could carry on for another few iterations and clean up the code further. However, we can’t keep doing that. We have to draw the line at some point instead of obsessing that our code isn’t as amazing as it could be with continued refactoring; it probably never will get there. We have a well-tested script that does what we require, and the next time we can afford to give it some love, that is exactly what we will do. Our tests have our back.

Hopefully, this article shows that tests are not a “nice to have”; they are essential to any project. There is a wealth of tools at our disposal, and I urge you to try out MiniTest::Spec. I leave you with a few tips for productive testing:

  • Don’t worry about which test framework is better; they all have merits. It’s more important that you are actually testing your code.
  • Use your tests to get a feel for the domain in which you’re working; baby steps are far superior to large, integration-style tests.
  • Focus on what you are trying to achieve at the micro level, but don’t forget the bigger picture that drives your tests.
  • Mocks and stubs are great, but do not overuse them; they can lie.
  • There is a lot of work in the refactor step, so don’t scrimp on it.
  • Draw the line. We could refactor forever. Give a little love, as often as possible.
Dave Kennedy

Dave is a web application developer residing in sunny Glasgow, Scotland. He works daily with Ruby but has been known to wear PHP and C++ hats. In his spare time he snowboards on plastic slopes, only reads geek books and listens to music that is certainly not suitable for his age.
