Annotation Done Right

Or, think carefully about your APIs

This is such a common design pattern we’ve probably all done it one way or another. We have some data stream — likely XML or JSON or just plain old text — and we want to wrap it inside another element. For example, taking a name like “Zac” and turning it into “<name>Zac</name>” like so:1

val s = s"<name>$firstName</name>"

Of course if you do this enough, you start thinking it would be nice to have a function on hand:

def wrap(s: String, w: String): String = s"<$w>$s</$w>"

What’s wrong with that?

That’s all well and good, but after a while we realize a couple of problems with this very simple API:

  1. I’d say the API itself is pretty bad. I would love to have a syntax that feels more, well, functional… like "Zac" wrap "name".
  2. It’s not really “wrapping” the text string — it is in fact “XML’ising” the text string. There’s more going on here than just bracketing a string with another string.
  3. It’s not much of a stretch to see how this generic wrap() function could end up getting in the way (either confusing someone about its true purpose, or getting in the way of other string-oriented functions).
  4. And, why limit this handy little function? What happens if we want to wrap something that’s not a String?

Let’s tackle these one at a time. The first limitation is elegantly managed by introducing infix notation with an implicit class in Scala.

implicit class EnrichedString(s: String) {
  def wrap(w: String) = s"<$w>$s</$w>"
}

"thing".wrap("foo") // the usual syntax
"thing" wrap "foo"  // or using infix notation

When you try to call the wrap() function, the implicit class essentially gives the compiler a hint about where to look. Since wrap() isn’t defined on the String class itself, the compiler searches implicit scope, finds the EnrichedString class, realizes it can convert a standard String to an EnrichedString, and gains access to the wrap() function.

One possible negative side effect of this is boxing. The source string, in our case “Zac”, will get boxed into EnrichedString("Zac") so the compiler can call wrap(). Depending on how you feel about this, you can avoid the allocation by making it a value class with AnyVal:

implicit class EnrichedString(val s: String) extends AnyVal { ... }

A thoughtful API

That’s a nice way of improving the usability of our API, but it’s still a pretty bad API. I still haven’t addressed most of the problems I brought up:

  1. We want to wrap strings with an opening and closing XML element. We should create an API that accurately describes this.
  2. The same enrichment pattern applies to just about any type, not only String. So, why not generalize? Who’s to say we might not want to wrap an Int?
  3. We should also consider other possible uses of our API. What if I wanted to ask a question in Spanish, like ¿Justo como esto?
  4. Finally, I don’t know about you but I always do BDD, so we should have a test harness to make sure our API does the right thing.

Let’s start with the test harness. I love starting here, because I’m thinking about what I want the API to do, not what the code is doing. Let’s think up a test that fits all of our goals (a good API, asymmetric tokens, and a functional style):

def annotationWorksAsExpected = {
  "foo" wrap ("[", "]") === "[foo]"
  "foo" wrap (("[", "]")) === "[foo]"
  1 wrap "?" === "?1?"
  "foo" + "bar" wrap Some("...") === "...foobar..."
  "foo" + "bar" wrap Some(("<", ">")) === "<foobar>"
  "foo" + "bar" wrap None === "foobar"
  "foo" + "bar" wrap(("?", "¿")) === "?foobar¿"
  "foo" makeElementOf("around") === "<around>foobar</around>"
  "foo" + "bar" makeElementOf(("start", "end")) === "<start>foobar</end>"
  "foo" + "bar" makeElementOf((1, 2)) === "<1>foobar</2>"
  "foo" + "bar" makeElementOf("enclose") === "<enclose>foobar</enclose>"
  500 makeElementOf(0) === "<0>500</0>"
  500 makeElementOf("int") === "<int>500</int>"
  500 + " tiene razón" wrap(("?", "¿")) === "?500 tiene razón¿"
}

The === is a specs2 matcher that asserts equality. So, I defined every case I could think of, within reason, that someone might expect of my API. I’ve separated the idea of “wrapping” from the idea of “XML’ising” a string, and while I was at it, I decided the API shouldn’t be limited to strings. I also introduced the idea of asymmetric tokens, such as “start”/“end” or “¿”/“?” (in this case, using a tuple to represent a pair of opening and closing tokens). And I threw in an Option to give some flexibility while coding.

After sitting back and thinking about it, I feel like this is an API that won’t offend or get in too many people’s way. Now, to make my tests pass:

implicit class EnrichedAny(val s: Any) {
  def wrap(y: Option[Any]): String = y match {
    case Some((a, b)) => s wrap ((a, b)) // a tuple carries asymmetric tokens
    case Some(v)      => s wrap v
    case None         => s.toString
  }

  def wrap(y: Any): String = y.toString + s + y.toString
  def wrap(y: (Any, Any)): String = y._1.toString + s + y._2.toString

  def makeElementOf(y: Any): String = s wrap ((s"<$y>", s"</$y>"))
  def makeElementOf(y: (Any, Any)): String = s wrap ((s"<${y._1}>", s"</${y._2}>"))
}

Hopefully the basic pattern is recognizable, but now with a few much needed improvements:

  1. We abstracted the entire function to work over Any so I’m no longer limited to String instances.
  2. I added support for Option, really just trying to be functional and think of likely use cases here.
  3. By adding support for a Tuple2 I can now use asymmetric tokens, like “start” and “end.”
  4. And I’ve separated the idea of wrapping from “XML’ising.”

I think the outcome is pretty good. We have a wrap API that does pretty much what you would expect — it puts an unmodified wrapper around another object. And we have my original goal, an XML-oriented API that puts properly formed starting and ending elements around an object. More important, it’s general enough that I won’t have to reimplement exactly the same thing on different types. And of course, it’s all tested to make sure it does what we expect.

We could probably take it a little further, perhaps using generics or a bit of recursion to support tuples of any size… but I’ll leave that as an exercise for the reader…

The real point here is to think about your APIs carefully when designing them. Don’t build an API that is limited, or confusing, or obfuscated. Look at your code in the context of the entire ecosystem you work in — and build something that fits well in that ecosystem.

  1. Obviously, if you are really doing a lot of XML specific work, you might just want to look at a library like Rapture (there are many options out there).

Be Lazy!

Or, why lazy evaluation is such a good thing

I was recently challenged, while teaching a Scala class, to explain exactly why lazy evaluation is important. That had me looking at the use cases enumerated in the course notes, and they weren’t really compelling. Sure, there are some obvious reasons that lazy evaluation makes sense, but I think there are even better reasons to use it.

What it means to be lazy

Most languages are not inherently lazy. Scala, for instance, is eager by default and provides specific syntax for opting in to lazy evaluation:

lazy val later = calculateTheNumberz()

By using the lazy keyword we defer evaluation of an expression until the defined name is first referenced. In other words, in the above declaration the value of later won’t be calculated until we first reference later. At that point, it will be calculated (just once) and any future reference will return the calculated value. In contrast, this expression:

val later = calculateTheNumberz()

will immediately calculate the value of later. The advantage of laziness here is really twofold:

  1. Potentially, we never call calculateTheNumberz() and, presuming this is an expensive operation, we never incur the overhead of that calculation.
  2. If we do reference later, we at least defer calling calculateTheNumberz() until we actually need it. This can mean faster program startup, for instance.

So, lazy evaluation is a method of evaluation: expressions are not evaluated when they are bound to variables; instead, evaluation is deferred until the results are needed. Likewise, by-name arguments are not evaluated before they are passed to a function, but only when their values are actually used. This is call-by-name evaluation (or call-by-need when the result is memoized, as with lazy val), as opposed to call-by-value, or eager, evaluation.

def someComputation(x: => Int, y: => Int) = if (x > 0) x else y

In the above example, the Scala function someComputation() takes two parameters. The => notation indicates call-by-name, so neither x nor y is computed until referenced. In the expression if (x > 0) x else y, the value x is computed first; if it is positive, y is never computed at all. (Note that a by-name parameter is re-evaluated on every reference, so a positive x is actually computed twice here: once for the test and once for the result.) If we imagine that y is a potentially expensive operation, this can be a huge performance saver.
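
A lazy val, by contrast, is evaluated at most once. Here’s a minimal REPL sketch of the difference (the names are illustrative):

def twice(x: => Int): Int = x + x        // x is referenced twice

twice { println("computing x"); 21 }     // prints "computing x" two times

lazy val cached = { println("computing cached"); 21 }
cached + cached                          // prints "computing cached" only once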

Lazy by nature

Some languages are lazy by nature. These tend to be functional languages, such as Haskell. A potential advantage to languages that evaluate everything lazily is that you don’t have to remember to use the lazy syntax.

Another hidden advantage of “lazy by nature” is that programs tend to be more efficient without forethought. You can bang out a potentially expensive expression, like this:

if (someCondition) expensiveCall() else heftyCall()

We don’t have to worry about such an expression, because only one of expensiveCall() or heftyCall() will be evaluated. An eager language also evaluates just one branch of a built-in conditional, but as soon as you compute both values up front, say, by passing them as arguments to an ordinary function, both get evaluated before the call. And in a truly “lazy by nature” setting, even an expression such as 2 * 2 will not be evaluated unless it is referenced.
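
To see that difference in Scala, imagine factoring the choice into a helper function (reusing the names from above):

// Eager parameters: both arguments are computed before choose() runs.
def choose(a: Int, b: Int): Int = if (someCondition) a else b

// By-name parameters: only the branch actually selected is computed.
def chooseLazy(a: => Int, b: => Int): Int = if (someCondition) a else b

chooseLazy(expensiveCall(), heftyCall()) // evaluates exactly one of the two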

The less obvious

Clearly, lazy evaluation readily translates into pragmatic efficiency — most obviously, by avoiding unnecessary computation. But what else does it do?

One benefit is working with infinitely large data sets. For instance, you can model an infinite list of numbers or an infinite stream of data. In a lazy evaluation model, only the head element of such a stream will be evaluated, whereas an eager evaluation will calculate the entire data set:

def listRange(lo: Int, hi: Int): List[Int] = {
	print(s"${lo}")
	if (lo >= hi) Nil
	else lo :: listRange(lo + 1, hi)
}

scala> listRange(5, 10)
5678910 res0: List[Int] = List(5, 6, 7, 8, 9)

In this example, listRange() creates a list of integers from a low to a high range. Since it uses call-by-value evaluation, the entire data set is calculated (as we can see from the output). If this were, say, a huge or infinite data set we would have a problem.

Lazy evaluation lets us solve this problem. Here’s the same implementation, using a Scala Stream:

def streamRange(lo: Int, hi: Int): Stream[Int] = {
	print(s"${lo} ")
	if (lo >= hi) Stream.empty
	else Stream.cons(lo, streamRange(lo + 1, hi))
}

scala> streamRange(5, 10)
5 res0: Stream[Int] = Stream(5, ?)

In this case, the lazy Stream class only calculates the head value of the data set, returning the first integer. The tail calculation is left for later, giving us an easy (and efficient) way to work with arbitrarily large data sets. Only when we ask for the next value will the stream calculate whether we are done with the set.
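
We can watch the deferred evaluation happen by forcing a few more elements (the interleaved digits come from the print call):

scala> streamRange(5, 10).take(3).toList
5 6 7 res1: List[Int] = List(5, 6, 7)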

This works because the Stream class uses a very simple call-by-name lazy calculation similar to the following:

trait Stream[T] {
  def isEmpty: Boolean
  def head: T
  def tail: Stream[T]
}

object Stream {
  def cons[T](h: T, t: => Stream[T]) = new Stream[T] {
    def isEmpty = false
    def head = h
    def tail = t // evaluated only when asked for; the real Stream memoizes it with a lazy val
  }
}

Notice how the tail of the stream is constructed using call-by-name semantics: t: => Stream[T]. The value t is not calculated until we ask for it some time in the future.

But there’s more!

We’ve discussed some very practical advantages to using lazy evaluation, but I think there are even more important reasons to use it liberally.

Lazy languages tend to be pure, because it is very hard to reason about side effects when you cannot predict evaluation order; purity becomes a practical necessity. By “pure” I mean pure in the sense of functional programming: pure functions are referentially transparent. In a language that doesn’t evaluate expressions until they are referenced, it’s natural to think in terms of deferred evaluation, and that strikes me as conducive to functionally pure reasoning.

Also, laziness allows you to define new control structures in the language, or implement DSLs. You can’t write your own conditional evaluator (think if-then-else expressions) as an ordinary function in a strictly evaluated language, because both branches would be evaluated before your function ever ran. It gets harder still when you consider loops or infinite data sets.
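
In Scala, by-name parameters make such control structures possible. A minimal sketch of a user-defined conditional:

// Both branches are passed by name, so only the chosen one is evaluated.
def myIf[A](cond: Boolean)(ifTrue: => A)(ifFalse: => A): A =
  if (cond) ifTrue else ifFalse

myIf(1 > 0) { "positive" } { sys.error("never evaluated") }
// res0: String = positive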

And finally, maybe most important, the best approaches to dealing with side effects in the type system, like monads, can only be expressed effectively by relying on laziness. Think about how hard it would be to implement monadic operations without violating at least one of the monad laws (consider, for instance, mapping over Scala’s eagerly evaluated Future, which starts executing the moment it is created).

Functional Programming is Better

No, seriously — it is. But I realize that’s a loaded statement and likely to draw an argument. Nevertheless, I’ll stick to my premise and lay out my reasoning.

First, though, a little background. Functional programming is a paradigm shift, a different style of programming. Just as object oriented programming upset the procedural cart, so functional programming puts object oriented on its head.

Where did it come from?

The roots of functional programming likely date back to the 1930s, when Alonzo Church codified the lambda calculus as “a formal system of mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution.”

In other words, the lambda calculus consists of constructing lambda terms and performing reduction operations on them. It describes how we evaluate symbolic expressions, reducing terms to their simplest form.

What matters for us is that everything in lambda calculus is a function expression, and we evaluate every expression, much as you would when evaluating a function like f(x) = 2x.

Programming without side effects

In its simplest definition, one could say that functional programming is “programming without side effects.” In a more restricted sense, it is programming without mutable variables: no assignments, no loops, no other imperative control structures. It puts the focus on functions, not on program flow (whereas more traditional languages, such as Java or C, favor an imperative style that emphasizes program flow using statements, or commands).

So back to my premise — exactly why is this better?

  1. It’s easier to write functional code. It is much more modular and self-contained by nature, since each function is entirely self-contained. So-called “spaghetti code” is eliminated, and confusing scope issues (such as those created by free variables) go away.
  2. It’s easier to understand. Functional programs uphold referential transparency: an expression can be replaced with its corresponding value without changing the program’s behavior. In other words, a function has no external side effects, so you can replace any occurrence of that function with its output and the program will run unaltered (see the sketch after this list). Since there are no looping control structures and no scope violations (such as Java’s for loop and global variables), functions always produce the same output, given the same input.
  3. It’s easier to test. By eliminating side effects, global or free variable scope, spaghetti code, and sticking to the principle of referential transparency we end up with programs that are very easy to test. Functions always behave the same way. They can be easily isolated and tested.
  4. It’s far better for parallel computation. Without mutation and side effects, we don’t have to worry about two parallel processes clobbering each other’s data. Parallel computation is vastly simplified.
  5. It represents business logic more easily. Have you ever tried to walk through a large program’s flow control diagram with your business stakeholders? Loopbacks, free variables, “spaghetti calls” and mutation make it a complicated process. Functional programming maps easily to the business domain. It does so naturally, because business functions (and business flowcharts) tend to map very cleanly to function definition. You get better stakeholder understanding and involvement in decision making.
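
Here is a minimal sketch of the referential transparency point from item 2 (the function names are illustrative):

// Pure: any call to add can be replaced by its value without
// changing the program's behavior.
def add(a: Int, b: Int): Int = a + b
val x = add(2, 3) + add(2, 3) // identical to writing 5 + 5

// Impure: replacing countingAdd(2, 3) with 5 changes behavior,
// because each call also mutates state outside the function.
var counter = 0
def countingAdd(a: Int, b: Int): Int = { counter += 1; a + b }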

Everyone has favorites

Mine is Scala. But the good news is, you can use functional programming principles in most languages. Java, Go, and others support function lambdas (being able to pass functions as arguments), giving you composability. Most languages allow you to write code without relying on mutation. And you can use discipline to stop the spaghetti code.

The one big limiting factor will be whether your language of choice handles recursion efficiently (and, specifically, can optimize for tail call recursion). To truly avoid imperative programming, we really need to rely on recursion. Consider, for example, how to iterate over an array if you don’t have any control statements (such as a for loop). How do you do it? The functional programming answer is recursion:

import scala.annotation.tailrec

val coins = List(1, 2, 5, 10, 20, 50)

@tailrec
def count(l: List[Int], acc: Int = 0): Int = l match {
  case Nil => acc                  // nothing left: return the accumulated total
  case h :: t => count(t, acc + h) // tail call: the recursion is the last operation
}

count(coins)
// res1: Int = 88

This function uses tail recursion to iterate through the list of coins, accumulating the total of 88. The trick to success is that the recursive call is the very last operation, so the compiler can optimize it (the @tailrec annotation asks the compiler to verify this), eliminating any risk of stack overflow and making it just as efficient as an imperative for loop.

Recently, I tried pushing Go as far as I could as a functional language. From a language feature perspective, I could actually achieve a pretty good result: the code ended up being functional, avoiding statements and mutation. Unfortunately, there’s one problem: Go doesn’t perform tail call optimization. As a result, my recursive code was about four times slower than its imperative equivalent, and vulnerable to stack overflows.

The net result: I felt like I could get “halfway there” in Go. You can’t completely avoid mutation or rely exclusively on recursion, but you can still write code that looks very functional. Under the covers there will be some control statements and mutation, but you can hide it away, encapsulate it, and structure it well. And in so doing, you can realize many of the benefits of functional programming.

If you’d like to learn more about functional programming in Go, I highly recommend Francesc Campoy Flores’ Goto talk on the subject.

Treat Yourself to a Listy Tuple!

We don’t use tuples enough, partly because they’re kind of ugly to use. I mean, accessing a value with whatever._1 is just plain awkward.

shapeless to the rescue!

The opening sentence on the shapeless site is kind of off-putting: “shapeless is a type class and dependent type based generic programming library for Scala.” Ok, that doesn’t really tell me what’s in it for me… so I thought I’d write up a few examples.

Back to those tuples. Wouldn’t it be cool if you could treat tuples just like other collection types in Scala? For instance, if you could get the head and tail of a tuple?

import shapeless._
import syntax.std.tuple._
(23, "foo", true).head
// res0: Int = 23

Nifty! You can use tail, drop, and take too, as you might expect. You can also append, prepend, and concatenate tuples much like you can other container types:

(23, "foo") ++ (true, 2.0)
// res1: (Int, String, Boolean, Double) = (23,foo,true,2.0)

And perhaps best of all, you can now map, flatMap, fold and otherwise chop, spindle, and manipulate your tuples:

import poly._

object option extends (Id ~> Option) {
  def apply[T](t: T) = Option(t)
}

(23, "foo", true) map option
// res2: (Option[Int], Option[String], Option[Boolean]) = (Some(23),Some(foo),Some(true))

Before we look at some of the really cool things this enables, here’s one more tuple-trick, thanks to singleton-typed literals:

val t = (23, "foo", true)
t(1)
// res0: String = foo

So yes, if you really hate it that much, you can now effectively be done with the ._1 syntax. But shapeless does a lot more than make tuples easier to work with. For instance, it provides an implementation of extensible records, which makes it possible to define a record like this book:

import shapeless._ ; import syntax.singleton._ ; import record._

val book =
  ("author" ->> "Benjamin Pierce") ::
  ("title"  ->> "Types and Programming Languages") ::
  ("id"     ->>  262162091) ::
  ("price"  ->>  44.11) ::
  HNil

And then operate on the book:

book("title")   // Note result type ...
// res1: String = Types and Programming Languages

I’ll leave exploring shapeless’ extensible records to you (check out the GitHub Feature Overview page).

One more feature I feel compelled to mention, just because I love them, is lenses. The shapeless implementation supports boilerplate-free lens creation for arbitrary case classes. Unless you’re already married to Scalaz or Monocle, you’ll want to give the shapeless lens a try:

case class Person(name: String, age: Int) // assumed for this example
val person = Person("Benjamin", 37)

val ageLens = lens[Person] >> 'age
val age1 = ageLens.get(person)
// age1: Int = 37

Lenses are pretty cool once you get used to them. They lead to some marvelously readable, maintainable code. Check them out, along with shapeless’ typesafe cast operator, cast. If you’ve ever been bitten by type erasure, this just might be the solution you’ve been looking for.

Once you get excited about shapeless you might want to pick up a copy of The Type Astronaut’s Guide to Shapeless. It’s free!

What is Functional Programming?

While looking for great Scala or Erlang programmers, one of the first things I ask is “what does functional programming mean to you?” Most of the time, the answer hovers around “programming with higher order functions,” or “programming with functions instead of objects.” Both are good language features that belong to functional languages. Neither answer is what I’m looking for.

The problem with both of these answers is that they hint at non-functional thinking. It’s still looking at programming through an imperative or object oriented perspective.

For example, let’s consider two different approaches to representing a Set of integers (an arbitrary collection of integer values). The imperative approach tends to make a list of values. Here’s a simple Set object that defines both an add and remove method:

object Set {
  val values: List[Int] = List()
  def add(q: Int): List[Int] = q :: values
  def remove(q: Int): List[Int] = values.filter(_ != q)
}

At first glance this appears to tick all the functional boxes. It uses immutable values, and the functions are side-effect free (they don’t change values outside of their own scope).

When I hear the goal is “programming without side-effects,” I’m usually pretty happy. It’s a good layman’s definition of functional programming (for those of us that don’t remember it’s all about referential transparency). Avoiding side-effects tends to check most of the important functional boxes. If you can’t have side-effects, you usually lean toward immutability. You also write functions that don’t reach outside their own scope, so your functions always produce the same outputs given the same inputs. You avoid exceptions because, let’s face it, they’re just a way of making your problem someone else’s problem. And all of that combined tends to produce code that is referentially transparent.
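
For instance, instead of throwing an exception, a function can return the possibility of failure in its type. A minimal sketch:

// Returning Option makes failure part of the function's signature,
// rather than an exception that escapes it.
def safeDivide(a: Int, b: Int): Option[Int] =
  if (b == 0) None else Some(a / b)

safeDivide(10, 2) // Some(5)
safeDivide(10, 0) // None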

But we’re still in imperative-land. If we really want to think in functional terms, we need to eschew imperative thinking entirely. We need to think like a mathematician. Mathematicians think very purely about functions in the mathematical sense. (It’s a shame that programming uses the term “function” instead of “method”; I think this leads to a lot of functional programming terminology confusion.)

For example, a Set to a mathematician is simply a collection of values:

s = { 3 }
t = { 7, 9 }

Here we have two Sets, one representing the single-value set of 3, and the other, both 7 and 9. We can model the collection of both sets like so:

c = s ∪ t

This models the set c as the union of both s and t, so c is a set containing 3, 7 and 9.

In Scala, a pure functional approach to this might look like this:

type Set = Int => Boolean

In other words, a function taking an Int and returning a Boolean. Given a single integer value the function tells us if the value is in the set. You could then write a function that tests whether a given value exists within the Set:

def contains(s: Set, n: Int): Boolean = s(n)

In this function, given a Set s and an integer n, we see if the set is true for n.

You could represent a Set that contains many values by writing a function that combines two:

def union(s: Set, t: Set): Set = (x => s(x) || t(x))

Here we return a new Set composed of the two Sets, s logically or’d with t. This works because Scala is a functional language, able to model the problem functionally, meaning in a non-imperative way.
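
Putting the pieces together, here’s a small sketch mirroring the math above:

val s: Set = (x => x == 3)           // s = { 3 }
val t: Set = (x => x == 7 || x == 9) // t = { 7, 9 }
val c: Set = union(s, t)             // c = s ∪ t

contains(c, 7) // true
contains(c, 4) // false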

The native Scala Set class demonstrates this thinking. If you look at scala.collection.Set you’ll find it’s a trait defined as a function that takes a value of type A and returns a Boolean:

trait Set[A] extends (A => Boolean)

Thinking functionally is not the same as thinking imperatively, or using an object oriented approach. It really is a fundamental shift in how we approach programming, and it takes some time to adopt. The joy of Scala is that you can start from an object oriented background and move toward a more functional style over time. The language allows you to blend features of both paradigms.

Elegance Can Be the Enemy of Efficiency

I’ve been auditing some of the advanced Scala courses offered by École Polytechnique Fédérale de Lausanne, and as always am curious about my own solutions compared against what other people come up with.

One of the things I love about Scala is its elegance. Once you understand the language, you can achieve some really complex things quickly and easily — and, elegantly. So, a problem posed in the coursework is to write a function that, given a character in a game field, returns the first index of that character in the field.

After writing my own solution, I Googled a few other solutions. Here’s one:

def findChar(c: Char, levelVector: Vector[Vector[Char]]): Pos = {
   val ys = levelVector map (_ indexOf c)
   val x = ys indexWhere (_ >= 0)
   Pos(x, ys(x))
}

This is pretty elegant. In fact, I was first taken by how functional it looked — more functional than my own code. But there’s a problem. This elegant code is potentially very inefficient.

Let’s assume the game field is big. It consists of a Vector of Vectors, and the above code searches the entire game field. That is, even if the character matches the very first element in the field, all the rest of the elements will be scanned:

  1. ys maps over the entire Vector of Vectors, recording the index of c in each row.
  2. x then scans the vector ys for the first positive hit.
  3. Then the coordinate is built from x and ys(x).

So the obvious problem here is our elegant, functional, good looking code could be terribly inefficient. Here’s my implementation:

import scala.annotation.tailrec

def findChar(c: Char, levelVector: Vector[Vector[Char]]): Pos = {
  @tailrec def iterate(x: Int): Pos = levelVector(x).indexOf(c) match {
    case y if y > -1 => Pos(x, y)
    case _ => iterate(x + 1)
  }
  iterate(0)
}

Ok, it’s not as pretty, and it’s longer, but it’s still functional by design. More important, it searches linearly and stops as soon as it finds the character. It’s efficient:

  1. We start in the first Vector, checking for the index of the character.
  2. As soon as we find it, the function returns. Otherwise, it recurses and moves to the next Vector.

It’s not as pretty but it preserves efficient design. Just because we can write pretty, elegant code doesn’t mean we always should.

Incidentally, you could also achieve nearly the same efficiency using something like this:

val row = levelVector.indexWhere(_.contains(c))
val col = levelVector(row).indexOf(c)

Personally, I prefer to avoid any potential inefficiency — but, over-optimizing can also be a problem. It’s probably best to find a happy balance. Don’t ignore efficiency, but don’t spend all your time optimizing situations that, quite likely, don’t really need the effort.

You’re Testing It Wrong…

I really like test driven development. It lets me be lazy. I don’t have to worry about my software quality, or that something I did broke some other thing. And with good dependency injection to make sure every component is working right, “it just works.” Now I code using TDD (writing my tests first, then coding to fulfill them), and I focus my QA efforts on making sure we have great test plans and great coverage.

[Image: A closed system wants to be tested.]

So, when one of my project teams kept telling me they couldn’t write tests because the database wasn’t ready, I got worried. Our team had been immersed in TDD for months, and every single engineer had nodded vigorously when I set expectations. The team leader recited the definition of “dependency injection,” just to drive home how ready they were to embrace it!

But when I asked to see what was wrong, I knew we had a problem. The team’s tests were not injecting mock objects the right way. The idea behind dependency injection is to replace the smallest component possible in a closed system with another object, a “mock.” That mock can then monitor the system around it, inject different behaviors, and create desired results.

For example, let’s say we have a program that connects to a gizmo: your home thermostat. The thermostat itself is a separate component that lives outside your program. We can expect the thermostat to behave like a thermostat should… reporting the current temperature, and letting the home owner enter a desired room temperature. Pretty straightforward.

So the first step is to write a program that talks to the thermostat. We can wire up a real thermostat, but we’ve got a problem right off the bat. We want to know how our program behaves as the ambient temperature changes — 65 degrees, 32 degrees, or 100 degrees. But a real thermostat is only going to report the actual room temperature, and making the room frigid or boiling just isn’t going to be very comfortable or practical.

[Image: Faking is not mocking.]

This is where dependency injection comes in — wouldn’t it be great if we could inject a new gizmo, one that behaves according to our test plan?

It turns out that my team had been taking the wrong approach, one that’s easy to fall into if you’re new to the idea of mocking and dependency injection. Unfortunately, it meant that they weren’t really testing the application. They were testing something else entirely.

Once we walked through the system, the mistake was clear. During application start up, it created a connection to a database. My team’s approach had been to add a “mocking” variable to the application. In effect, it created a test condition; if the application was in “mocking mode” it would only simulate database calls. If the application was not in “mocking mode” it sent real requests to a real database. Sounds great, right?

But it’s all wrong. Here’s the problem: The application was faking real world behavior. That is, throughout the program there were dozens of little tests, in effect asking, “if we are mocking, then don’t check the database for a user record, instead just return this fake user record.”

This meant that the real program — the actual application logic that would be deployed to the real world — was never tested. Instead, an alternate branch of logic was tested — a fake program, if you will. So two things happened:

  1. We weren’t testing the real program, we were testing something else altogether.
  2. The program itself became terribly complicated because of all the checks to find out “are we mocking?” and the subsequent code to do something else entirely.

And all of that is why my team said they couldn’t really test the system, because the database wasn’t up and running.

So what does real dependency injection look like? It’s simple: You want to change the actual gizmo, but change it in the most subtle way possible — and then you want to put that actual gizmo right back into your program.

[Image: Real mocking doesn’t affect the original program flow.]

Getting back to the thermostat example, an ideal solution would be to modify a real thermostat. You could crack it open, remove the temperature sensor, and add a little dial to it that lets you change the reported temperature. Then you plug the “mock thermostat” into your program, and you change the temperature manually! A potentially better approach would be to change the software that talks to your thermostat, and instrument it so that you can override the actual reported temperature. Your program would still think it’s talking to a real thermostat, and the connecting software could change the actual temperature before handing it off to your program.
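
In code, that instrumented middle layer might look something like this (a sketch; the names are hypothetical):

// The program depends only on this trait, never on the physical device.
trait Thermostat {
  def currentTemperature: Double
}

// A mock with a "dial": the test sets the reported temperature directly.
class MockThermostat(var dial: Double = 20.0) extends Thermostat {
  def currentTemperature: Double = dial
}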

In our case, the right solution could be injecting a simple mock component at just the right point in our program.

For example, let’s say our application uses an Authenticator object to log in users. The Authenticator checks the validity of a user in the database, and then returns a properly constructed User object. We can use dependency injection to substitute our own test data by overriding the single function we care about:

object fakeAuthenticator extends Authenticator {
    override def getUser(id: Int): Option[User] = {
        Some(User(id = -1, name = "Fake User"))
    }
}

On line 2, we replace the real Authenticator’s getUser function. The overridden method returns a hard-wired User object (in this case, one that clearly doesn’t represent a valid user account). By overriding the Authenticator in the test package only, the original program is not altered — all that’s left is to inject our altered Authenticator into the program.

The old-fashioned way of doing injection is still reliable: don’t construct the dependency, ask for it. Use a factory object to ask for the Authenticator. Given a factory in the application (let’s call it the AuthenticatorFactory) we can override what the factory actually returns in our test case only:

AuthenticatorFactory.setAuthenticatorInstance(fakeAuthenticator)
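
A minimal sketch of such a factory (assuming the Authenticator trait above; DatabaseAuthenticator is a hypothetical production implementation):

object AuthenticatorFactory {
  private var current: Authenticator = new DatabaseAuthenticator
  def setAuthenticatorInstance(a: Authenticator): Unit = { current = a }
  def authenticator: Authenticator = current
}

// Application code always asks the factory rather than constructing one:
val user = AuthenticatorFactory.authenticator.getUser(42)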

A slightly more modern approach is to use a dependency injection framework, but the underlying principle is exactly the same.

Likewise we can take the concept of mock objects further by using frameworks such as Mockito (a framework that works wonderfully with specs2). Mockito makes it easy to instrument real objects with test driven behavior. For example, Mockito will produce a mock object that acts just like a real object, but fulfills expectations (such as testing to make sure that a specific function is called a certain number of times).
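
For instance, with specs2’s Mockito integration a test might look roughly like this (a sketch, reusing the hypothetical Authenticator and User types from above):

import org.specs2.mutable.Specification
import org.specs2.mock.Mockito

class AuthenticationSpec extends Specification with Mockito {
  "the login flow" should {
    "look the user up exactly once" in {
      val auth = mock[Authenticator]
      auth.getUser(42) returns Some(User(id = 42, name = "Test User"))

      // ... exercise the code under test with `auth` injected ...

      there was one(auth).getUser(42)
    }
  }
}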

Whatever tools and frameworks you use, test driven development has proven itself over the past decade. My own experience is the same: Every TDD project has produced more predictable results, has better velocity, and has been more reliable overall. It’s why I don’t do any coding without following TDD.