Polymorphic Class Representation in JSON

Have you ever needed to write your polymorphic class or trait to JSON, and realized that you can’t just do it with one line of code? (Oh, c’mon, sure you have — we all have at one time or another!)

Ok, so here’s the situation: I really like the play-json library put out by Lightbend. I mean, the convenience of being able to do this is awesome:

import play.api.libs.json._

implicit val geolocationFormat: Format[Geolocation] = Json.format[Geolocation]

Json.toJson(someGeoLocation)

The fact that I don’t have to hand-write JSON serializer and deserializer code is a real convenience. Other libraries do this too, of course, but many rely on runtime reflection. Not so with play-json, which figures it all out at compile time. It’s fast and efficient, and I love it. Plus, the dialect for working with JSON directly is pretty straightforward.

But every now and then I do run into a limitation. Let’s say I’ve got a polymorphic structure like this:

trait Customer {
	val customerNumber: Option[String]
}

case class BusinessCustomer(name: String, ein: String, customerNumber: Option[String] = None) extends Customer
case class IndividualCustomer(firstName: String, lastName: String, ssn: String, customerNumber: Option[String] = None) extends Customer

implicit val individualFormat: Format[IndividualCustomer] = Json.format[IndividualCustomer]
implicit val businessFormat: Format[BusinessCustomer] = Json.format[BusinessCustomer]

Unfortunately, I can’t just declare an implicit Format for the Customer trait. If you think about it, that makes sense… how would the compiler be able to figure that out? But it leaves me with a bit of a problem. If I serialize an IndividualCustomer, I end up with this:

val u = IndividualCustomer("Zaphod", "Beeblebrox", "001-00-0001")
Json.toJson(u)
// {"firstName":"Zaphod","lastName":"Beeblebrox","ssn":"001-00-0001"}

There’s no context there. When I go to deserialize that JSON, I have to know, in advance, that it’s an IndividualCustomer and not a BusinessCustomer. Hence, I can’t just deserialize a Customer and get the right type automatically.
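To make the limitation concrete, here’s a quick sketch using the formats above:

val parsed = Json.parse("""{"firstName":"Zaphod","lastName":"Beeblebrox","ssn":"001-00-0001"}""")
parsed.as[IndividualCustomer]  // works, but only because we already knew the type
// parsed.as[Customer]         // won't compile: there is no implicit Format[Customer]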

Containers to the rescue

So there is a simple solution, it turns out. What we can do is wrap the JSON in some kind of contextual information. For example, we can change the JSON to include type information:

{
  "type": "IndividualCustomer",
  "value": {
    "firstName": "Zaphod",
    "lastName": "Beeblebrox",
    "ssn": "001-00-0001"
  }
}

This is pretty straightforward to implement using a simple container class. The container itself is going to have to handle the abstraction of the Customer, so I’ll write a custom Format that adds the extra context I need:

case class Container(customer: Customer)

implicit val containerFormat: Format[Container] = new Format[Container] {
  def writes(container: Container) = Json.obj(
    "type" -> container.customer.getClass.getSimpleName,
    "value" -> {
      container.customer match {
        case c: IndividualCustomer => Json.toJson(c)
        case c: BusinessCustomer => Json.toJson(c)
      }
    }
  )

  def reads(json: JsValue) = {
    val v: JsValue = (json \ "value").get
    val c: String = (json \ "type").as[String]

    c match {
      case "IndividualCustomer" => JsSuccess(Container(v.as[IndividualCustomer]))
      case "BusinessCustomer" => JsSuccess(Container(v.as[BusinessCustomer]))
      case other => JsError(s"Unknown customer type: $other")
    }
  }
}

With this custom Format (and the corresponding reads and writes functions) I can now serialize any Container to JSON, and get enough contextual information so that I can deserialize back to the correct type:

val c1 = Container(u)
Json.toJson(c1)
// yay! context!
// {"type":"IndividualCustomer","value":{"firstName":"Zaphod","lastName":"Beeblebrox","ssn":"001-00-0001"}}

Since we are using the Container around all instances of the Customer trait, we get context. Now we can read and write our abstracted Customer instances to and from JSON:

val someJsonString = """{"type":"IndividualCustomer","value":{"firstName":"Zaphod","lastName":"Beeblebrox","ssn":"001-00-0001"}}"""

val someJsValue = Json.parse(someJsonString)
val backAgain = Json.fromJson[Container](someJsValue)
val maybeContainer = backAgain.asOpt

assert(maybeContainer.isDefined) // the parse succeeded (erasure makes an isInstanceOf check on Option[Container] vacuous)

Alternatives

Another approach I’ve used is to put the wrapper logic into the trait itself. Personally, I don’t like that approach for a couple of reasons. First, it clutters the trait with things that have nothing to do with the trait’s purpose. Second, it eliminates an otherwise nice separation of concerns. I’d rather keep my traits pure and add in something like Container (or, perhaps, call it a CustomerJSONContainer if you like). I could also see a hybrid approach: define a trait such as AddsCustomerContext, and then mix that trait in (sketched below).
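For illustration only, that hybrid might look something like this (a sketch, assuming the implicit formats above are in scope; AddsCustomerContext is just the name from the text):

trait AddsCustomerContext { self: Customer =>
  // Emits the same "type"/"value" shape the Container format produces.
  def toContextualJson: JsValue = Json.obj(
    "type" -> this.getClass.getSimpleName,
    "value" -> (self match {
      case c: IndividualCustomer => Json.toJson(c)
      case c: BusinessCustomer => Json.toJson(c)
    })
  )
}

// Then mix it in: case class IndividualCustomer(...) extends Customer with AddsCustomerContext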

About the only thing I’m not happy about is the static bit of code in Container: every new Customer subtype means another match case in both reads and writes. If I could find a way to avoid that, it would be ideal… but then, if I could do that, Scala would probably be able to create an implicit Format[Customer] and none of this would be necessary in the first place.

Annotation Done Right

Or, think carefully about your APIs

This is such a common design pattern we’ve probably all done it one way or another. We have some data stream — likely XML or JSON or just plain old text — and we want to wrap it inside another element. For example, taking a name like “Zac” and turning it into “<name>Zac</name>” like so:1

val s = s"<name>$firstName</name>"

Of course if you do this enough, you start thinking it would be nice to have a function on hand:

def wrap(s: String, w: String): String = s"<$w>$s</$w>"

What’s wrong with that?

That’s all well and good, but after a while we realize a couple of problems with this very simple API:

  1. I’d say the API itself is pretty bad. I would love to have a syntax that feels more, well, functional… like "Zac" wrap "name".
  2. It’s not really “wrapping” the text string — it is in fact “XML’ising” the text string. There’s more going on here than just bracketing a string with another string.
  3. It’s not much of a stretch to see how this generic wrap() function could end up getting in the way (either confusing someone about its true purpose, or getting in the way of other string-oriented functions).
  4. And, why limit this handy little function? What happens if we want to wrap something that’s not a String?

Let’s tackle these one at a time. The first limitation is elegantly managed by introducing infix notation with an implicit class in Scala.

implicit class EnrichedString(s: String) {
  def wrap(w: String) = s"<$w>$s</$w>"
}

"thing".wrap("foo") // the usual syntax
"thing" wrap "foo"  // or using infix notation

When you call wrap() on a string, the compiler can’t find the method on the String class itself, so it searches implicit scope. There it finds the EnrichedString class, realizes it can convert a standard String into an EnrichedString, and gains access to the wrap() function.
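In effect, the compiler rewrites the infix call into something like this:

new EnrichedString("thing").wrap("foo")
// res2: String = <foo>thing</foo>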

One possible negative side effect of this is boxing. The source string, “Zac” in our case, will get boxed into EnrichedString("Zac") so the compiler can call wrap(). If the extra allocation bothers you, you can get around it by using AnyVal instead:

implicit class EnrichedString(val s: String) extends AnyVal { ... }

A thoughtful API

That’s a nice way of improving the usability of our API, but it’s still a pretty bad API. I still haven’t addressed most of the problems I brought up:

  1. We want to wrap strings with an opening and closing XML element. We should create an API that accurately describes this.
  2. There’s no reason to limit this to strings; we could pretty easily apply it to just about any type. So, why not? Who’s to say we might not want to wrap an Int?
  3. We should also consider other possible uses of our API. What if I wanted to ask a question in Spanish, like ¿Justo como esto?
  4. Finally, I don’t know about you but I always do BDD, so we should have a test harness to make sure our API does the right thing.

Let’s start with the test harness. I love starting here, because I’m thinking about what I want the API to do, not what the code is doing. Let’s think up a test that fits all of our goals (a good API, asymmetric tokens, and a functional style):

def annotationWorksAsExpected = {
  "foo" wrap ("[", "]") === "[foo]"
  "foo" wrap (("[", "]")) === "[foo]"
  1 wrap "?" === "?1?"
  "foo" + "bar" wrap Some("...") === "...foobar..."
  "foo" + "bar" wrap Some(("<", ">")) === "<foobar>"
  "foo" + "bar" wrap None === "foobar"
  "foo" + "bar" wrap(("?", "¿")) === "?foobar¿"
  "foo" makeElementOf("around") === "<around>foobar</around>"
  "foo" + "bar" makeElementOf(("start", "end")) === "<start>foobar</end>"
  "foo" + "bar" makeElementOf((1, 2)) === "<1>foobar</2>"
  "foo" + "bar" makeElementOf("enclose") === "<enclose>foobar</enclose>"
  500 makeElementOf(0) === "<0>500</0>"
  500 makeElementOf("int") === "<int>500</int>"
  500 + " tiene razón" wrap(("?", "¿")) === "?500 tiene razón¿"
}

The === is a specs2 matcher that specifies equality. So, I defined every case I could think of, within reason, that someone might expect of my API. I’ve separated out the idea of “wrapping” from “XML’ising” a string, and while I was at it, I decided it shouldn’t be limited to strings. I also introduced the idea of asymmetric tokens, such as “start” and “end” or “¿” and “?” (using a tuple to represent the pair of opening and closing tokens). And I threw in an Option to give some flexibility while coding.

After sitting back and thinking about it, I feel like this is an API that won’t offend or get in too many people’s way. Now, to make my tests pass:

implicit class EnrichedAny(val s: Any) {
  // Option support: Some(token or token pair) wraps; None passes the value through.
  def wrap(y: Option[Any]): String = y match {
    case Some(t: Tuple2[Any, Any]) => wrap(t)
    case Some(y: Any) => wrap(y)
    case _ => s.toString
  }

  def wrap(y: Any): String = y.toString + s + y.toString
  def wrap(y: Tuple2[Any, Any]): String = y._1.toString + s + y._2.toString

  def makeElementOf(y: Any): String = wrap((s"<$y>", s"</$y>"))
  def makeElementOf(y: Tuple2[Any, Any]): String = wrap((s"<${y._1}>", s"</${y._2}>"))
}

Hopefully the basic pattern is recognizable, but now with a few much needed improvements:

  1. We abstracted the entire function to work over Any so I’m no longer limited to String instances.
  2. I added support for Option, really just trying to be functional and think of likely use cases here.
  3. By adding support for a Tuple2 I can now use asymmetric tokens, like “start” and “end.”
  4. And I’ve separated the idea of wrapping from “XML’ising.”

I think the outcome is pretty good. We have a wrap API that does pretty much what you would expect — it puts an unmodified wrapper around another object. And we have my original goal, an XML-oriented API that puts properly formed starting and ending elements around an object. More important, it’s general enough that I won’t have to reimplement exactly the same thing on different types. And of course, it’s all tested to make sure it does what we expect.

We could probably take it a little further, perhaps using generics or a bit of recursion to support tuples of any size… but I’ll leave that as an exercise for the reader…

The real point here is to think about your APIs carefully when designing them. Don’t build an API that is limited, or confusing, or obfuscated. Look at your code in the context of the entire ecosystem you work in — and build something that fits well in that ecosystem.

  1. Obviously, if you are really doing a lot of XML-specific work, you might just want to look at a library like Rapture (there are many options out there).

Functional Programming is Better

No, seriously — it is. But I realize that’s a loaded statement and likely to draw an argument. Nevertheless, I’ll stick to my premise and lay out my reasoning.

First, though, a little background. Functional programming is a paradigm shift, a different style of programming. Just as object-oriented programming upset the procedural cart, functional programming turns object orientation on its head.

Where did it come from?

The roots of functional programming date back to the 1930s, when Alonzo Church codified lambda calculus: “a formal system of mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution.”

In other words, lambda calculus consists of constructing lambda terms and performing reduction operations on them. It describes computation symbolically, evaluating and reducing terms to their simplest form.

What matters for us is that everything in lambda calculus is a function expression, and we evaluate every expression, much as you would when evaluating a function like f(y) = 2y.

Programming without side effects

In its simplest definition, one could say that functional programming is “programming without side effects.” In a more restricted sense, it is programming without mutable variables: no assignments, no loops, no other imperative control structures. It puts the focus on functions, not on program flow (whereas more traditional languages, such as Java or C, emphasize an imperative style built on statements, or commands).

So back to my premise — exactly why is this better?

  1. It’s easier to write functional code. It is much more modular and self-contained by nature, since pure functions depend only on their inputs. So-called “spaghetti code” is eliminated, and confusing scope issues (such as those created by free variables) go away.
  2. It’s easier to understand. Functional programs uphold referential transparency: any expression can be replaced with its corresponding value without changing the program’s behavior. In other words, a function has no external side effects, so you can replace any call to it with its result and the program runs unaltered. Since there are no looping control structures (such as Java’s for loop) and no scope violations (such as global variables), functions always produce the same output given the same input (see the sketch after this list).
  3. It’s easier to test. By eliminating side effects, global and free variables, and spaghetti code, and by sticking to the principle of referential transparency, we end up with programs that are very easy to test. Functions always behave the same way. They can be easily isolated and tested.
  4. It’s far better for parallel computation. Without mutation and side effects, we don’t have to worry about two parallel processes clobbering each other’s data. Parallel computation is vastly simplified.
  5. It represents business logic more easily. Have you ever tried to walk through a large program’s flow control diagram with your business stakeholders? Loopbacks, free variables, “spaghetti calls” and mutation make it a complicated process. Functional programming maps easily to the business domain. It does so naturally, because business functions (and business flowcharts) tend to map very cleanly to function definition. You get better stakeholder understanding and involvement in decision making.
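To make referential transparency concrete, here’s a minimal Scala sketch (the function names are mine, for illustration):

// Pure: the result depends only on the input, so any call
// can be replaced by its value without changing the program.
def addTax(price: Double): Double = price * 1.08

// Impure: the result depends on (and mutates) external state,
// so replacing a call with a value would change behavior.
var total = 0.0
def addToTotal(price: Double): Double = { total += price; total }

addTax(100.0)     // 108.0, every time
addToTotal(100.0) // 100.0 the first time, 200.0 the second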

Everyone has favorites

Mine is Scala. But the good news is, you can use functional programming principles in most languages. Java, Go, and others support function lambdas (being able to pass functions as arguments), giving you composability. Most languages allow you to write code without relying on mutation. And you can use discipline to stop the spaghetti code.

The one big limiting factor will be whether your language of choice handles recursion efficiently (specifically, whether it can optimize tail calls). To truly avoid imperative programming, we need to rely on recursion. Consider, for example, how to iterate over a list if you don’t have any control statements (such as a for loop). How do you do it? The functional programming answer is recursion:

import scala.annotation.tailrec

val coins = List(1, 2, 5, 10, 20, 50)

@tailrec
def count(l: List[Int], acc: Int = 0): Int = l match {
	case Nil => acc
	case h :: t => count(t, acc + h)
}
count(coins)
// res1: Int = 88

This function calls itself to iterate through the list of coins, adding up the value 88. The trick to success is that the recursive call is in tail position (the @tailrec annotation asks the compiler to verify that), so the compiler can rewrite the recursion as a loop, eliminating any risk of stack overflow and running just as efficiently as an imperative for loop.

Recently, I tried pushing Go as far as I could as a functional language. From a language-feature perspective, I could actually achieve a pretty good result: the code ended up looking quite functional, with statements and mutation mostly avoided. Unfortunately, there’s one problem: Go doesn’t perform tail call optimization. As a result, my recursive code was about four times slower than its imperative equivalent, and vulnerable to stack overflows. The net result: I felt like I could get “halfway there” in Go. You can’t completely avoid mutation or rely exclusively on recursion, but you can still write code that looks very functional. Under the covers there will be some control statements and mutation, but you can hide them away, encapsulate them, and structure them well. In so doing, you can realize many of the benefits of functional programming.

If you’d like to learn more about functional programming in Go, I highly recommend Francesc Campoy Flores’ Goto talk on the subject.

Treat Yourself to a Listy Tuple!

We don’t use tuples enough, and part of the reason is that they’re kind of ugly to work with. Accessing a value with whatever._1 just isn’t pretty.

shapeless to the rescue!

The opening sentence on the shapeless site is kind of off-putting: “shapeless is a type class and dependent type based generic programming library for Scala.” Ok, that doesn’t really tell me what’s in it for me… so I thought I’d write up a few examples.

Back to those tuples. Wouldn’t it be cool if you could treat tuples just like other collection types in Scala? For instance, if you could get the head and tail of a tuple?

import shapeless._
import syntax.std.tuple._

(23, "foo", true).head
// res0: Int = 23

Nifty! You can use tail, drop, and take too, as you might expect. You can also append, prepend, and concatenate tuples much like you can other container types:

(23, "foo") ++ (true, 2.0)
// res1: (Int, String, Boolean, Double) = (23,foo,true,2.0)

And perhaps best of all, you can now map, flatMap, fold and otherwise chop, spindle, and manipulate your tuples:

import poly._

object option extends (Id ~> Option) {
  def apply[T](t: T) = Option(t)
}

(23, "foo", true) map option
// res2: (Option[Int], Option[String], Option[Boolean]) = (Some(23),Some(foo),Some(true))

Before we look at some of the really cool things this enables, here’s one more tuple-trick, thanks to singleton-typed literals:

val t = (23, "foo", true)
t(1)
// res0: String = foo

So yes, if you really hate it that much, you can now effectively be done with the ._1 syntax. But shapeless does a lot more than make working with tuples easier. For instance, it provides an implementation of extensible records, which makes it possible to model a record like this book:

import shapeless._ ; import syntax.singleton._ ; import record._

val book =
  ("author" ->> "Benjamin Pierce") ::
  ("title"  ->> "Types and Programming Languages") ::
  ("id"     ->>  262162091) ::
  ("price"  ->>  44.11) ::
  HNil

And then operate on the book:

book("title")   // Note result type ...
// res1: String = Types and Programming Languages

I’ll leave exploring shapeless’ extensible records to you (check out the GitHub Feature Overview page).

One more feature I feel compelled to mention, just because I love them: lenses. The shapeless implementation supports boilerplate-free lens creation for arbitrary case classes. Unless you’re already married to Scalaz or Monocle, you’ll want to give the shapeless lens a try:

case class Person(name: String, age: Int) // assumed for the example
val person = Person("Zaphod", 37)

val ageLens = lens[Person] >> 'age
val age1 = ageLens.get(person)
// age1: Int = 37
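And because it’s a lens, you can write through it as well:

val person2 = ageLens.set(person)(38)
// person2: Person = Person(Zaphod,38)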

Lenses are pretty cool once you get used to them. They lead to some marvelously readable, maintainable code. Check them out, along with shapeless’ typesafe cast operator, cast. If you’ve ever been bitten by type erasure, this just might be the solution you’ve been looking for.

Once you get excited about shapeless you might want to pick up a copy of The Type Astronaut’s Guide to Shapeless. It’s free!

What is Functional Programming?

When I’m looking for great Scala or Erlang programmers, one of the first things I ask is “what does functional programming mean to you?” Most of the time, the answer hovers around “programming with higher-order functions,” or “programming with functions instead of objects.” Both are good language features that belong to functional languages. Neither answer is what I’m looking for.

The problem with both of these answers is that they hint at non-functional thinking. It’s still looking at programming through an imperative or object oriented perspective.

For example, let’s consider two different approaches to representing a Set of integers (an arbitrary collection of integer values). The imperative approach tends to make a list of values. Here’s a simple Set object that defines both an add and remove method:

object Set {
	val values: List[Int] = List()
	def add(q: Int) = q :: values
	def remove(q: Int) = values.filter(_ != q)
}

At first glance this appears to tick all the boxes for a “functional language.” It uses immutable values. The functions are side-effect free (they don’t change values outside of their own scope).

When I hear the goal is “programming without side-effects,” I’m usually pretty happy. It’s a good layman’s definition of functional programming (for those of us that don’t remember it’s all about referential transparency). Avoiding side-effects tends to check most of the important functional boxes. If you can’t have side-effects, you usually lean toward immutability. You also write functions that don’t reach outside their own scope, so your functions always produce the same outputs given the same inputs. You avoid exceptions because, let’s face it, they’re just a way of making your problem someone else’s problem. And all of that combined tends to produce code that is referentially transparent.

But we’re still in imperative-land. If we really want to think in functional terms, we need to eschew imperative thinking entirely. We need to think like a mathematician. Mathematicians think very purely about functions in the mathematical sense. (It’s a shame that programming uses the term “function” instead of “method”; I think this leads to a lot of functional programming terminology confusion.)

For example, a Set to a mathematician is simply a collection of values:

s = { 3 }
t = { 7, 9 }

Here we have two Sets, one representing the single-value set of 3, and the other, both 7 and 9. We can model the collection of both sets like so:

c = s ∪ t

This models the set c as the union of both s and t, so c is a set containing 3, 7 and 9.

In Scala, a pure functional approach to this might look like this:

type Set = Int => Boolean

In other words, a function taking an Int and returning a Boolean. Given a single integer value the function tells us if the value is in the set. You could then write a function that tests whether a given value exists within the Set:

def contains(s: Set, n: Int): Boolean = s(n)

In this function, given a Set s and an integer n, we see if the set is true for n.

You could represent a Set that contains many values by writing a function that combines two:

def union(s: Set, t: Set): Set = (x => s(x) || t(x))

Here we return a new Set composed of two different Sets, s logically or’d with t. This works because Scala is a functional language and is able to model programming functionally, meaning in a non-imperative way.
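Tying this back to the mathematical example (the singleton helper is mine, for illustration):

def singleton(n: Int): Set = (x => x == n)

val s: Set = singleton(3)            // s = { 3 }
val t: Set = (x => x == 7 || x == 9) // t = { 7, 9 }
val c: Set = union(s, t)             // c = s ∪ t

contains(c, 7) // true
contains(c, 4) // false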

The native Scala Set class demonstrates this thinking. If you look at scala.collection.Set you’ll find it’s a trait defined as a function that takes a value of type A and returns a Boolean:

trait Set[A] extends (A => Boolean)
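You can see that in action: applying a Set tests membership, exactly like calling the characteristic function above.

val evens = Set(2, 4, 6)
evens(4) // true
evens(5) // false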

Thinking functionally is not the same as thinking imperatively, or using an object oriented approach. It really is a fundamental shift in how we approach programming. It takes some time to adopt. The joy about Scala is that you can start from an object oriented background and move toward a more functional nature over time. The language allows you to blend features of both paradigms.

You’re Testing It Wrong…

I really like test driven development. It lets me be lazy. I don’t have to worry about my software’s quality, or that something I did broke some other thing. And with good dependency injection to make sure every component is working right, “it just works.” Now I code using TDD (writing my tests first, then coding to fulfill them), and we focus our QA efforts on making sure we have great test plans, and great coverage.

[Image: A closed system wants to be tested.]

So, when one of my project teams kept telling me they couldn’t write tests because the database wasn’t ready, I got worried. Our team had been immersed in TDD for months, and every single engineer had nodded vigorously when I set expectations. The team leader recited the definition of “dependency injection,” just to drive home how ready they were to embrace it!

But when I asked to see the what was wrong, I knew we had a problem. The team’s tests were not injecting mock objects the right way. The idea behind dependency injection is to replace the smallest component possible in a closed system with another object, a “mock.” That mock can then monitor the system around it, inject different behaviors, and create desired results.

For example, let’s say we have a program that connects to a gizmo — your home thermostat. The thermostat itself is a separate component that lives outside your program. We can expect the thermostat to behave like a thermostat should… reporting the current temperature, and letting the home owner enter a desired room temperature. Pretty straight forward.

So the first step is to write a program that talks to the thermostat. We can wire up a real thermostat, but we’ve got a problem right off the bat. We want to know how our program behaves as the ambient temperature changes — 65 degrees, 32 degrees, or 100 degrees. But a real thermostat is only going to report the actual room temperature, and making the room frigid or boiling just isn’t going to be very comfortable or practical.

[Image: Faking is not mocking.]

This is where dependency injection comes in — wouldn’t it be great if we could inject a new gizmo, one that behaves according to our test plan?

It turns out that my team had been taking the wrong approach, one that is pretty easy to fall into if you’re new to the idea of mocking and dependency injection. Unfortunately, it meant that they weren’t really testing the application. They were testing something else entirely.

Once we walked through the system, the mistake was clear. During application start up, it created a connection to a database. My team’s approach had been to add a “mocking” variable to the application. In effect, it created a test condition; if the application was in “mocking mode” it would only simulate database calls. If the application was not in “mocking mode” it sent real requests to a real database. Sounds great, right?

But it’s all wrong. Here’s the problem: The application was faking real world behavior. That is, throughout the program there were dozens of little tests, in effect asking, “if we are mocking, then don’t check the database for a user record, instead just return this fake user record.”

This meant that the real program — the actual application logic that would be deployed to the real world — was never tested. Instead, an alternate branch of logic was tested — a fake program, if you will. So two things happened:

  1. We weren’t testing the real program, we were testing something else altogether.
  2. The program itself became terribly complicated because of all the checks to find out “are we mocking?” and the subsequent code to do something else entirely.

And all of that is why my team said they couldn’t really test the system, because the database wasn’t up and running.

So what does real dependency injection look like? It’s simple: You want to change the actual gizmo, but change it in the most subtle way possible — and then you want to put that actual gizmo right back into your program.

[Image: Real mocking doesn’t affect the original program flow.]

Getting back to the thermostat example, an ideal solution would be to modify a real thermostat. You could crack it open, remove the temperature sensor, and add a little dial to it that lets you change the reported temperature. Then you plug the “mock thermostat” into your program, and you change the temperature manually! A potentially better approach would be to change the software that talks to your thermostat, and instrument it so that you can override the actual reported temperature. Your program would still think it’s talking to a real thermostat, and the connecting software could change the actual temperature before handing it off to your program.
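In code terms, that might look like this (a sketch; the Thermostat trait and names are hypothetical):

trait Thermostat {
  def currentTemperature: Double
}

// A test instrument: reports whatever temperature the test dials in.
object testThermostat extends Thermostat {
  var reported: Double = 65.0
  def currentTemperature: Double = reported
}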

In our case, the right solution could be to inject a simple mock component at just the right point in our program.

For example, let’s say our application uses an Authenticator object to log in users. The Authenticator checks the validity of a user in the database, and then returns a properly constructed User object. We can use dependency injection to substitute our own test data by overriding the single function we care about:

object fakeAuthenticator extends Authenticator {
    override def getUser(id: Int): Option[User] = {
        Some(User(id = -1, name = "Fake User"))
    }
}

On line 2, we replace the real Authenticator’s getUser function. The overridden method returns a hard-wired User object (in this case, one that clearly doesn’t represent a valid user account). By overriding the Authenticator in the test package only, the original program is not altered — all that’s left is to inject our altered Authenticator into the program.

The old-fashioned way of doing injection is still reliable: don’t tell, ask. Use a factory object to ask for the Authenticator. Given a factory in the application (let’s call it the AuthenticatorFactory), we can override what the factory actually returns in our test case only:

AuthenticatorFactory.setAuthenticatorInstance(fakeAuthenticator)
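For illustration, here’s a minimal sketch of what such a factory might look like (RealAuthenticator is a stand-in for whatever the production implementation is):

object AuthenticatorFactory {
  // Defaults to the production implementation; tests can swap in a fake.
  private var instance: Authenticator = RealAuthenticator
  def setAuthenticatorInstance(a: Authenticator): Unit = { instance = a }
  def authenticator: Authenticator = instance
}

// Application code asks the factory instead of constructing its own:
val user = AuthenticatorFactory.authenticator.getUser(42)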

A slightly more modern approach is to use a dependency injection framework, but the underlying principle is exactly the same.

Likewise, we can take the concept of mock objects further by using frameworks such as Mockito (which works wonderfully with specs2). Mockito makes it easy to instrument real objects with test-driven behavior. For example, Mockito will produce a mock object that acts just like the real object, but fulfills expectations (such as verifying that a specific function is called a certain number of times).
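As a hedged sketch (using specs2’s Mockito integration, with the Authenticator and User types from the example above), verifying that a function is called exactly once looks something like this:

import org.specs2.mock.Mockito
import org.specs2.mutable.Specification

class AuthenticationSpec extends Specification with Mockito {
  "authentication" should {
    "look up the user exactly once" in {
      val auth = mock[Authenticator]
      auth.getUser(42) returns Some(User(id = 42, name = "Fake User"))

      // ... exercise the code under test with `auth` injected ...

      there was one(auth).getUser(42)
    }
  }
}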

Whatever tools and frameworks you use, test driven development has proven itself over the past decade. My own experience is the same: Every TDD project has produced more predictable results, has better velocity, and has been more reliable overall. It’s why I don’t do any coding without following TDD.

Do hackers make the best testers?

Recently, I was asked “what makes a good software tester,” and as a subtext, whether hacking and testing share a similar mindset, and how wide a skill set testers need to have.

I think the most valuable asset a software tester can have is an attitude of gleeful problem discovery. Someone who loves to break systems, discover their imperfections, and explore their weaknesses makes a great tester. I haven’t met many people who really enjoy and excel at this, but it’s probably an attribute common among hackers as well.

It’s also wonderful to have a tester that really cares about the quality of the product. It’s absolutely essential for someone that wants to excel as a tester. That means having the patience and desire to work closely with the Quality Assurance group, to understand what a “good customer experience” means, and to really grasp things like quality of services, user experience, and customer needs.

Part of being a good tester means enjoying running down the rabbit hole. Where the hole leads is a mystery: perhaps testing uncovers problems stemming from poor UI design, SQL injection vulnerabilities, or performance issues under heavy load; perhaps it means playing the clueless user who always clicks the wrong thing and triggers a logic error.

The “how” of testing is another matter, though. Yes, there are well-understood principles, techniques, and often tools for testing all of these things. I have found that in most cases good testers tend to specialize. I don’t expect to find one person who can find the flaws in the user interface, perform load testing, and also look for SQL injection vulnerabilities. To get really good at all of these things you need a team: some members will focus on the back end, some on security, some on database systems, some on the front end. Finding someone who’s great at tackling a couple of those verticals is pretty rare. That said, every tester should have at least a shallow understanding of all of these areas; in order to properly localize a problem, you need to understand what could be causing it. But having other resources to bring in to help diagnose the specialty areas is critical.