Polymorphic Class Representation in JSON

Have you ever needed to write your polymorphic class or trait to JSON, and realized that you can’t just do it with one line of code? (Oh, c’mon, sure you have — we all have at one time or another!)

Ok, so here’s the situation: I really like the play-json library put out by Lightbend. I mean, the convenience of being able to do this is awesome:

import play.api.libs.json._

implicit val geolocation = Json.format[Geolocation]

Json.toJson(someGeoLocation)

The fact that I don’t actually have to write my JSON serializer and deserializer code is just convenient. Of course other libraries do this too, but many rely on reflection. Not so with play-json since it figures it all out at compile-time. It’s fast and efficient, and I love it. Plus, the dialect for working with JSON directly is pretty straightforward.

But every now and then I do run into a limitation. Let’s say I’ve got a polymorphic structure like this:

trait Customer {
	val customerNumber: Option[String]
}

case class BusinessCustomer(name: String, ein: String, customerNumber: Option[String] = None) extends Customer
case class IndividualCustomer(firstName: String, lastName: String, ssn: String, customerNumber: Option[String] = None) extends Customer

implicit val individualFormat = Json.format[IndividualCustomer]
implicit val businessFormat = Json.format[BusinessCustomer]

Unfortunately, I can’t just declare an implicit Format for the Customer trait. If you think about it, that makes sense… how would the compiler be able to figure that out? But it leaves me with a bit of a problem. If I serialize an IndividualCustomer, I end up with this:

val u = IndividualCustomer("Zaphod", "Beeblebrox", "001-00-0001")
Json.toJson(u)
// {"firstName":"Zaphod","lastName":"Beeblebrox","ssn":"001-00-0001"}

There’s no context there. Now when I go to deserialize the JSON representation I have to know, in advance, that it’s going to be an IndividualCustomer and not a BusinessCustomer. Hence, I can’t just deserialize a Customer and get the right type, automatically.

Containers to the rescue

So there is a simple solution, it turns out. What we can do is wrap the JSON in some kind of contextual information. For example, we can change the JSON to include type information:

{ "type":"IndividualCustomer",
  "value": {
    "firstName":"Zaphod","lastName":"Beeblebrox","ssn":"001-00-0001"
}}

This is pretty straightforward to implement using a simple container class. The container itself is going to have to handle the abstraction of the Customer, so I’ll write a custom Format that adds the extra context I need:

case class Container(customer: Customer)

implicit val containerFormat = new Format[Container] {
  def writes(container: Container) = Json.obj(
    "type" -> container.customer.getClass.getSimpleName,
    "value" -> {
      container.customer match {
        case c: IndividualCustomer => Json.toJson(c)
        case c: BusinessCustomer => Json.toJson(c)
      }
    }
  )

  def reads(json: JsValue) = {
    val v: JsValue = (json \ "value").get
    val c: String = (json \ "type").as[String]

    val z = c match {
      case "IndividualCustomer" => v.as[IndividualCustomer]
      case "BusinessCustomer" => v.as[BusinessCustomer]
    }

    JsSuccess(Container(z))
  }
}

With this custom Format (and the corresponding reads and writes functions) I can now serialize any Container to JSON, and get enough contextual information so that I can deserialize back to the correct type:

val c1 = Container(u)
Json.toJson(c1)
// yay! context!
// {"type":"IndividualCustomer","value":{"firstName":"Zaphod","lastName":"Beeblebrox","ssn":"001-00-0001"}}

Since we are using the Container around all instances of the Customer trait, we get context. Now we can read and write our abstracted Customer instances to and from JSON:

val someJsonString = """{"type":"IndividualCustomer","value":{"firstName":"Zaphod","lastName":"Beeblebrox","ssn":"001-00-0001"}}"""

val someJsValue = Json.parse(someJsonString)
val backAgain = Json.fromJson[Container](someJsValue)
val maybeContainer = backAgain.asOpt

assert(maybeContainer.isDefined)  // the JSON parsed back into a Container
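
And because the Container carries its type tag, the customer inside comes back as the correct concrete class. A quick check, just a sketch reusing the values above:

maybeContainer.map(_.customer) match {
  case Some(c: IndividualCustomer) => println(s"individual: ${c.firstName} ${c.lastName}")
  case Some(c: BusinessCustomer)   => println(s"business: ${c.name}")
  case _                           => println("not a customer we recognize")
}
// prints: individual: Zaphod Beeblebrox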

Alternatives

Another approach I’ve used is to put the wrapper logic into the trait itself. Personally, I don’t like that approach for a couple of reasons. First, it clutters the trait with things that have nothing to do with the trait’s purpose. Second, it eliminates an otherwise nice separation of concerns. I’d rather keep my traits pure and add in something like Container (or, perhaps, call it a CustomerJSONContainer if you like). I could also see a hybrid approach that defines a trait, such as AddsCustomerContext, and mixes it into each concrete customer class; a sketch of that idea follows.
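
Here is a minimal sketch of what that mixin might look like, reusing the formats defined earlier. Only the name AddsCustomerContext comes from the discussion above; the method name and the self-type are my own illustration:

trait AddsCustomerContext { self: Customer =>
  // Produces the same { "type": ..., "value": ... } shape that Container writes
  def toContextualJson: JsValue = Json.obj(
    "type" -> self.getClass.getSimpleName,
    "value" -> (self match {
      case c: IndividualCustomer => Json.toJson(c)
      case c: BusinessCustomer   => Json.toJson(c)
    })
  )
}

// Each concrete class would then mix it in:
// case class IndividualCustomer(...) extends Customer with AddsCustomerContext

Usage would simply be someCustomer.toContextualJson, at the cost of coupling serialization concerns to the domain type, which is exactly the trade-off described above.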

About the only thing I’m not happy about is the relatively static bit of code in Container. If I could find a way to avoid match cases on each Customer type, that would be ideal… but then, if I could do that, Scala would probably be able to create an implicit Format[Customer] and none of this would be necessary in the first place.

Annotation Done Right

Or, think carefully about your APIs

This is such a common design pattern we’ve probably all done it one way or another. We have some data stream — likely XML or JSON or just plain old text — and we want to wrap it inside another element. For example, taking a name like “Zac” and turning it into “<name>Zac</name>” like so:1

var s = s"<name>$firstName</name>"

Of course if you do this enough, you start thinking it would be nice to have a function on hand:

def wrap(s: String, w: String): String = s"<$w>$s</$w>"

What’s wrong with that?

That’s all well and good, but after a while we realize a couple of problems with this very simple API:

  1. I’d say the API itself is pretty bad. I would love to have a syntax that feels more, well, functional… like "Zac" wrap "name".
  2. It’s not really “wrapping” the text string — it is in fact “XML’ising” the text string. There’s more going on here than just bracketing a string with another string.
  3. It’s not much of a stretch to see how this generic wrap() function could end up getting in the way (either confusing someone about its true purpose, or getting in the way of other string-oriented functions).
  4. And, why limit this handy little function? What happens if we want to wrap something that’s not a String?

Let’s tackle these one at a time. The first limitation is elegantly managed by introducing infix notation with an implicit class in Scala.

implicit class EnrichedString(s: String) {
  def wrap(w: String) = s"<$w>$s</$w>"
}

"thing".wrap("foo") // the usual syntax
"thing" wrap "foo"  // or using infix notation

When you try to call the wrap() function, the implicit class essentially gives the compiler a hint about where to look for it. Since wrap() isn’t defined on the String class itself, the compiler searches implicit scope, finds the EnrichedString class, realizes it can convert a standard String to an EnrichedString, and thereby gains access to the wrap() function.

One possible negative side effect of this is boxing. The source string, in our case “Zac”, will get boxed into EnrichedString("Zac") so the compiler can call wrap(). Depending on how you feel about this, you can get around it by using AnyVal instead:

implicit class EnrichedString(val s: String) extends AnyVal { ... }

A thoughtful API

That’s a nice way of improving the usability of our API, but it’s still a pretty bad API. I still haven’t addressed most of the problems I brought up:

  1. We want to wrap strings with an opening and closing XML element. We should create an API that accurately describes this.
  2. By drawing on the desired goal of avoiding boxing, we could pretty easily apply this to just about any type. So, why not? Who’s to say we might not want to wrap an Int?
  3. We should also consider other possible uses of our API. What if I wanted to ask a question in Spanish: ¿Justo como esto?
  4. Finally, I don’t know about you but I always do BDD, so we should have a test harness to make sure our API does the right thing.

Let’s start with the test harness. I love starting here, because I’m thinking about what I want the API to do, not what the code is doing. Let’s think up a test that fits all of our goals (a good API, asymmetric tokens, and a functional style):

def annotationWorksAsExpected = {
  "foo" wrap ("[", "]") === "[foo]"
  "foo" wrap (("[", "]")) === "[foo]"
  1 wrap "?" === "?1?"
  "foo" + "bar" wrap Some("...") === "...foobar..."
  "foo" + "bar" wrap Some(("<", ">")) === "<foobar>"
  "foo" + "bar" wrap None === "foobar"
  "foo" + "bar" wrap(("?", "¿")) === "?foobar¿"
  "foo" makeElementOf("around") === "<around>foobar</around>"
  "foo" + "bar" makeElementOf(("start", "end")) === "<start>foobar</end>"
  "foo" + "bar" makeElementOf((1, 2)) === "<1>foobar</2>"
  "foo" + "bar" makeElementOf("enclose") === "<enclose>foobar</enclose>"
  500 makeElementOf(0) === "<0>500</0>"
  500 makeElementOf("int") === "<int>500</int>"
  500 + " tiene razón" wrap(("?", "¿")) === "?500 tiene razón¿"
}

The === is a specs2 matcher that asserts equality. So, I defined every case I could think of, within reason, that someone might expect of my API. I’ve separated out the idea of “wrapping” from “XML’ising” a string, and while I was at it, I decided it shouldn’t be limited to strings. I also introduced the idea of asymmetric tokens, such as “start”/“end” and “¿”/“?”, using a tuple to represent the pair of opening and closing tokens. I also threw in an Option to give some flexibility while coding.

After sitting back and thinking about it, I feel like this is an API that won’t offend or get in too many people’s way. Now, to make my tests pass:

implicit class EnrichedAny(val s: Any) {
  // Optional token: Some(pair), Some(single token), or None (leave the value unwrapped)
  def wrap(y: Option[Any]): String = y match {
    case Some((open, close)) => wrap((open, close))
    case Some(token)         => wrap(token)
    case None                => s.toString
  }

  // Symmetric token on both sides
  def wrap(y: Any): String = y.toString + s + y.toString

  // Asymmetric tokens: (opening, closing)
  def wrap(y: (Any, Any)): String = y._1.toString + s + y._2.toString

  def makeElementOf(y: Any): String = wrap((s"<$y>", s"</$y>"))
  def makeElementOf(y: (Any, Any)): String = wrap((s"<${y._1}>", s"</${y._2}>"))
}

Hopefully the basic pattern is recognizable, but now with a few much needed improvements:

  1. We abstracted the entire function to work over Any so I’m no longer limited to String instances.
  2. I added support for Option, really just trying to be functional and think of likely use cases here.
  3. By adding support for a Tuple2 I can now use asymmetric tokens, like “start” and “end.”
  4. And I’ve separated the idea of wrapping from “XML’ising.”

I think the outcome is pretty good. We have a wrap API that does pretty much what you would expect — it puts an unmodified wrapper around another object. And we have my original goal, an XML-oriented API that puts properly formed starting and ending elements around an object. More important, it’s general enough that I won’t have to reimplement exactly the same thing on different types. And of course, it’s all tested to make sure it does what we expect.

We could probably take it a little further, perhaps using generics or a bit of recursion to support tuples of any size… but I’ll leave that as an exercise for the reader…

The real point here is to think about your APIs carefully when designing them. Don’t build an API that is limited, or confusing, or obfuscated. Look at your code in the context of the entire ecosystem you work in — and build something that fits well in that ecosystem.

  1. Obviously, if you are really doing a lot of XML specific work, you might just want to look at a library like Rapture (there are many options out there).

Functional Programming is Better

No, seriously — it is. But I realize that’s a loaded statement and likely to draw an argument. Nevertheless, I’ll stick to my premise and lay out my reasoning.

First, though, a little background. Functional programming is a paradigm shift, a different style of programming. Just as object oriented programming upset the procedural cart, so functional programming turns object oriented programming on its head.

Where did it come from?

The roots of functional programming likely date back to the 1930s, when Alonzo Church codified lambda calculus as “A formal system of mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution.”

In other words, lambda calculus consists of constructing lambda terms and performing reduction operations on them. It describes computation symbolically, evaluating and reducing terms to their simplest form.

What matters for us is that everything in lambda calculus is a function expression, and we evaluate every expression — much like you would do if evaluating the function x = 2y.

Programming without side effects

In its simplest definition, one could say that functional programming is “programming without side effects.” In a more restricted sense, it is programming without mutable variables. That means no assignments, no loops or other imperative control structures. It puts a focus on functions, not on program flow (whereas more traditional languages, such as Java or C, focus on an imperative style that emphasizes program flow using statements, or commands).

So back to my premise — exactly why is this better?

  1. It’s easier to write functional code. It is much more modular and self-contained by nature, since functions are entirely enclosed. So called “spaghetti code” is eliminated, and confusing scope issues (such as those created by free variables) go away.
  2. It’s easier to understand. Functional programs uphold referential transparency: any expression can be replaced with its corresponding value without changing the program’s behavior. In other words, a function has no external side effects. Given any function, you can replace any occurrence of that function with its output and the program will run unaltered. Since there are no looping control structures and no scope violations (such as Java’s for loop and global variables), functions always produce the same output given the same input. (A small example follows this list.)
  3. It’s easier to test. By eliminating side effects, global or free variable scope, spaghetti code, and sticking to the principle of referential transparency we end up with programs that are very easy to test. Functions always behave the same way. They can be easily isolated and tested.
  4. It’s far better for parallel computation. Without mutation and side effects, we don’t have to worry about two parallel processes clobbering each other’s data. Parallel computation is vastly simplified.
  5. It represents business logic more easily. Have you ever tried to walk through a large program’s flow control diagram with your business stakeholders? Loopbacks, free variables, “spaghetti calls” and mutation make it a complicated process. Functional programming maps easily to the business domain. It does so naturally, because business functions (and business flowcharts) tend to map very cleanly to function definition. You get better stakeholder understanding and involvement in decision making.
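
To make referential transparency concrete, here’s a tiny sketch of my own: because add is pure, any call to it can be swapped for its result without changing the program.

def add(a: Int, b: Int): Int = a + b

val total = add(2, 3) + add(2, 3)
// referential transparency lets us substitute the value for the call:
val same  = 5 + 5
// total == same, always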

Everyone has favorites

Mine is Scala. But the good news is, you can use functional programming principles in most languages. Java, Go, and others support function lambdas (being able to pass functions as arguments), giving you composability. Most languages allow you to write code without relying on mutation. And you can use discipline to stop the spaghetti code.

The one big limiting factor will be whether your language of choice handles recursion efficiently (and, specifically, can optimize for tail call recursion). To truly avoid imperative programming, we really need to rely on recursion. Consider, for example, how to iterate over an array if you don’t have any control statements (such as a for loop). How do you do it? The functional programming answer is recursion:

val coins = List(1, 2, 5, 10, 20, 50)
def count(l: List[Int]): Int = l match {
	case Nil => 0
	case h :: t => h + count(t)
}
count(coins)
// res1: Int = 88

This function calls itself recursively, walking through the list of coins and adding them up to 88. One caveat: as written, the recursive call is not in tail position (the addition h + count(t) happens after the call returns), so the compiler cannot optimize it into a loop, and a very long list would risk a stack overflow. To get the efficiency of an imperative for loop, the recursive call needs to be the last operation performed; then the compiler can rewrite the recursion as iteration.
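
Here is a sketch of a tail-recursive variant that carries an accumulator; the @tailrec annotation asks the Scala compiler to verify that the optimization actually applies:

import scala.annotation.tailrec

def count(l: List[Int]): Int = {
  @tailrec
  def loop(rest: List[Int], acc: Int): Int = rest match {
    case Nil    => acc               // nothing left: the accumulator holds the total
    case h :: t => loop(t, acc + h)  // the recursive call is the last operation
  }
  loop(l, 0)
}

count(coins)
// res2: Int = 88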

Recently, I tried pushing Go as far as I could as a functional language. From a language feature perspective, I could achieve a reasonably good result: the code ended up looking very functional. Unfortunately, there’s one problem: Go doesn’t perform tail call optimization, so the recursive code was about four times slower than its imperative equivalent and vulnerable to stack overflows. The net result: I felt like I could get “halfway there” in Go. It isn’t practical to completely avoid mutation or rely exclusively on recursion, but you can still write code that looks very functional. Under the covers there will be some control statements and mutation, but you can hide them away, encapsulate them, and structure the code well. In doing so, you can realize many of the benefits of functional programming.

If you’d like to learn more about functional programming in Go, I highly recommend Francesc Campoy Flores’ Goto talk on the subject.

Treat Yourself to a Listy Tuple!

We don’t use tuples enough, and part of the reason is that they’re kind of ugly to use. I mean, reaching for whatever._1 to access a value isn’t exactly elegant.

shapeless to the rescue!

The opening sentence on the shapeless site is kind of off-putting: “shapeless is a type class and dependent type based generic programming library for Scala.” Ok, that doesn’t really tell me what’s in it for me… so I thought I’d write up a few examples.

Back to those tuples. Wouldn’t it be cool if you could treat tuples just like other collection types in Scala? For instance, if you could get the head and tail of a tuple?

import shapeless._
import syntax.std.tuple._
(23, "foo", true).head
// res0: Int = 23

Nifty! You can use tail, drop, and take too, as you might expect. You can also append, prepend, and concatenate tuples much like you can other container types:

(23, "foo") ++ (true, 2.0)
// res1: (Int, String, Boolean, Double) = (23,foo,true,2.0)

And perhaps best of all, you can now map, flatMap, fold and otherwise chop, spindle, and manipulate your tuples:

import poly._

object option extends (Id ~> Option) {
  def apply[T](t: T) = Option(t)
}

(23, "foo", true) map option
// res2: (Option[Int], Option[String], Option[Boolean]) = (Some(23),Some(foo),Some(true))

Before we look at some of the really cool things this enables, here’s one more tuple-trick, thanks to singleton-typed literals:

val t = (23, "foo", true)
t(1)
// res0: String = foo

So yes, if you really hate it that much, you can now effectively be done with the ._1 syntax. But shapeless does a lot more than make working with tuples easier. For instance, it provides an implementation of extensible records, which makes it possible to build a record like this book:

import shapeless._ ; import syntax.singleton._ ; import record._

val book =
  ("author" ->> "Benjamin Pierce") ::
  ("title"  ->> "Types and Programming Languages") ::
  ("id"     ->>  262162091) ::
  ("price"  ->>  44.11) ::
  HNil

And then operate on the book:

book("title")   // Note result type ...
// res1: String = Types and Programming Languages

I’ll leave exploring shapeless’ extensible records to you (check out the GitHub Feature Overview page).

One more feature I feel compelled to mention, just because I love them: lenses. The shapeless implementation supports boilerplate-free lens creation for arbitrary case classes. Unless you’re already married to Scalaz or Monocle, you’ll want to give the shapeless lens a try:

// assuming something like: case class Person(name: String, age: Int), and a person instance
val ageLens = lens[Person] >> 'age
val age1 = ageLens.get(person)
// age1: Int = 37

Lenses are pretty cool once you get used to them. They lead to some marvelously readable, maintainable code. Check them out, along with shapeless’ type-safe cast operator, cast. If you’ve ever been bitten by type erasure, this just might be the solution you’ve been looking for.
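
For a quick taste of cast, here’s a small sketch of my own (not from the shapeless docs): it returns an Option, giving you a safe way to recover typed values from Any.

import shapeless.syntax.typeable._

val mixed: List[Any] = List(1, "two", 3.0, 4)
mixed.flatMap(_.cast[Int])
// res3: List[Int] = List(1, 4)

"hello".cast[Int]
// res4: Option[Int] = None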

Once you get excited about shapeless you might want to pick up a copy of The Type Astronaut’s Guide to Shapeless. It’s free!

What is Functional Programming?

While looking for great Scala or Erlang programmers, one of the first things I ask is “what does functional programming mean to you?” Most of the time, the answer hovers around “programming with higher order functions,” or “programming with functions instead of objects.” Both are good language features that belong to functional languages. Neither answer is what I’m looking for.

The problem with both of these answers is that they hint at non-functional thinking. It’s still looking at programming through an imperative or object oriented perspective.

For example, let’s consider two different approaches to representing a Set of integers (an arbitrary collection of integer values). The imperative approach tends to make a list of values. Here’s a simple Set object that defines both an add and remove method:

object Set {
	val values: List[Int] = List()
	def add(q: Int): List[Int] = q :: values
	def remove(q: Int): List[Int] = values.filter(_ != q)
}

At first glance this appears to tick all the boxes for “functional” code. It uses immutable values. The functions are side-effect free (they don’t change values outside of their own scope).

When I hear the goal is “programming without side-effects,” I’m usually pretty happy. It’s a good layman’s definition of functional programming (for those of us that don’t remember it’s all about referential transparency). Avoiding side-effects tends to check most of the important functional boxes. If you can’t have side-effects, you usually lean toward immutability. You also write functions that don’t reach outside their own scope, so your functions always produce the same outputs given the same inputs. You avoid exceptions because, let’s face it, they’re just a way of making your problem someone else’s problem. And all of that combined tends to produce code that is referentially transparent.

But we’re still in imperative-land. If we really want to think in functional terms, we need to eschew imperative thinking entirely. We need to think like a mathematician. Mathematicians think very purely about functions in the mathematical sense. (It’s a shame that programming uses the term “function” instead of “method”; I think this leads to a lot of confusion around functional programming terminology.)

For example, a Set to a mathematician is simply a collection of values:

s = { 3 }
t = { 7, 9 }

Here we have two Sets, one representing the single-value set of 3, and the other, both 7 and 9. We can model the combination of the two sets like so:

c = s ∪ t

This models the set c as the union of both s and t, so c is a set containing 3, 7 and 9.

In Scala, a pure functional approach to this might look like this:

type Set = Int => Boolean

In other words, a function taking an Int and returning a Boolean. Given a single integer value the function tells us if the value is in the set. You could then write a function that tests whether a given value exists within the Set:

def contains(s: Set, n: Int): Boolean = s(n)

In this function, given a Set s and an integer n, we see if the set is true for n.

You could represent a Set that contains many values by writing a function that combines two:

def union(s: Set, t: Set): Set = (x => s(x) || t(x))

Here we return a new Set that is composed of two different Sets, s logically or’d with t. This works because Scala is a functional language and is able to model programming functionally, meaning in a non-imperative way.
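
To tie this back to the earlier math, here’s a small sketch (the singletonSet helper is mine) modeling s = { 3 }, t = { 7, 9 }, and their union c:

def singletonSet(n: Int): Set = (x => x == n)

val s: Set = singletonSet(3)
val t: Set = union(singletonSet(7), singletonSet(9))
val c: Set = union(s, t)

contains(c, 9)  // true: 9 is in the union
contains(c, 4)  // false: 4 is in neither set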

The native Scala Set class demonstrates this thinking. If you look at scala.collection.Set you’ll find it’s a trait defined as a function that takes a value of type A and returns a Boolean:

trait Set[A] extends (A => Boolean)

Thinking functionally is not the same as thinking imperatively, or using an object oriented approach. It really is a fundamental shift in how we approach programming. It takes some time to adopt. The joy about Scala is that you can start from an object oriented background and move toward a more functional nature over time. The language allows you to blend features of both paradigms.

You’re Testing It Wrong…

I really like test driven development. It lets me be lazy. I don’t have to worry about my software quality, or that something I did broke some other thing. And with good dependency injection to make sure every component is working right, “it just works.” Now I code using TDD (writing my tests first, then coding to fulfill them), and I focus our QA efforts on making sure we have great test plans, and great coverage.

A closed system wants to be tested.

So, when one of my project teams kept telling me they couldn’t write tests because the database wasn’t ready, I got worried. Our team had been immersed in TDD for months, and every single engineer had nodded vigorously when I set expectations. The team leader recited the definition of “dependency injection,” just to drive home how ready they were to embrace it!

But when I asked to see what was going on, I knew we had a problem. The team’s tests were not injecting mock objects the right way. The idea behind dependency injection is to replace the smallest component possible in a closed system with another object, a “mock.” That mock can then monitor the system around it, inject different behaviors, and create desired results.

For example, let’s say we have a program that connects to a gizmo — your home thermostat. The thermostat itself is a separate component that lives outside your program. We can expect the thermostat to behave like a thermostat should… reporting the current temperature, and letting the home owner enter a desired room temperature. Pretty straight forward.

So the first step is to write a program that talks to the thermostat. We can wire up a real thermostat, but we’ve got a problem right off the bat. We want to know how our program behaves as the ambient temperature changes — 65 degrees, 32 degrees, or 100 degrees. But a real thermostat is only going to report the actual room temperature, and making the room frigid or boiling just isn’t going to be very comfortable or practical.

Faking is not mocking.

This is where dependency injection comes in — wouldn’t it be great if we could inject a new gizmo, one that behaves according to our test plan?

It turns out that my team had been taking the wrong approach, one that’s an easy mistake to make if you’re new to the idea of mocking and dependency injection. Unfortunately, it meant that they weren’t really testing the application. They were testing something else entirely.

Once we walked through the system, the mistake was clear. During application start up, it created a connection to a database. My team’s approach had been to add a “mocking” variable to the application. In effect, it created a test condition; if the application was in “mocking mode” it would only simulate database calls. If the application was not in “mocking mode” it sent real requests to a real database. Sounds great, right?

But it’s all wrong. Here’s the problem: The application was faking real world behavior. That is, throughout the program there were dozens of little tests, in effect asking, “if we are mocking, then don’t check the database for a user record, instead just return this fake user record.”

This meant that the real program — the actual application logic that would be deployed to the real world — was never tested. Instead, an alternate branch of logic was tested — a fake program, if you will. So two things happened:

  1. We weren’t testing the real program, we were testing something else altogether.
  2. The program itself became terribly complicated because of all the checks to find out “are we mocking?” and the subsequent code to do something else entirely.

And all of that is why my team said they couldn’t really test the system, because the database wasn’t up and running.

So what does real dependency injection look like? It’s simple: You want to change the actual gizmo, but change it in the most subtle way possible — and then you want to put that actual gizmo right back into your program.

Real mocking doesn’t affect the original program flow.

Getting back to the thermostat example, an ideal solution would be to modify a real thermostat. You could crack it open, remove the temperature sensor, and add a little dial to it that lets you change the reported temperature. Then you plug the “mock thermostat” into your program, and you change the temperature manually! A potentially better approach would be to change the software that talks to your thermostat, and instrument it so that you can override the actual reported temperature. Your program would still think it’s talking to a real thermostat, and the connecting software could change the actual temperature before handing it off to your program.

In our case, the right solution could be injecting a simple mock component at just the right point in our program.

For example, lets say our application uses an Authenticator object to log in users. The Authenticator checks the validity of a user in the database, and then returns a properly constructed User object. We can use dependency injection to substitute our own test data by overriding the single function we care about:

object fakeAuthenticator extends Authenticator {
    override def getUser(id: Int): Option[User] = {
        Some(User(id = -1, name = "Fake User"))
    }
}

In the override, we replace the real Authenticator’s getUser function. The overridden method returns a hard-wired User object (in this case, one that clearly doesn’t represent a valid user account). By overriding the Authenticator in the test package only, the original program is not altered; all that’s left is to inject our altered Authenticator into the program.

The old fashioned way of doing injection is still reliable: Don’t tell, ask. Use a factory object to ask for the Authenticator. Given a factory in the application (let’s call it the AuthenticatorFactory) we can override what the factory actually returns in our test case only:

AuthenticatorFactory.setAuthenticatorInstance(fakeAuthenticator)
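
A minimal sketch of what such a factory might look like (only setAuthenticatorInstance appears above; the accessor name and the wiring are illustrative):

object AuthenticatorFactory {
  // production wiring installs the real Authenticator at startup;
  // a test calls setAuthenticatorInstance(fakeAuthenticator) instead
  private var current: Option[Authenticator] = None

  def setAuthenticatorInstance(a: Authenticator): Unit = { current = Some(a) }
  def authenticator: Authenticator =
    current.getOrElse(sys.error("no Authenticator configured"))
}

Application code always asks the factory (AuthenticatorFactory.authenticator.getUser(id)), so injecting fakeAuthenticator changes what comes back without altering a single line of the program under test.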

A slightly more modern approach is to use a dependency injection framework, but the underlying principle is exactly the same.

Likewise we can take the concept of mock objects further by using frameworks such as Mockito (a framework that works wonderfully with specs2). Mockito makes it easy to instrument real objects with test driven behavior. For example, Mockito will produce a mock object that acts just like a real object, but fulfills expectations (such as testing to make sure that a specific function is called a certain number of times).
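
As a rough sketch of how that looks with specs2’s Mockito integration (the Authenticator and User types are the ones from the example above; the spec itself is illustrative):

import org.specs2.mutable.Specification
import org.specs2.mock.Mockito

class AuthenticatorSpec extends Specification with Mockito {
  "authentication" should {
    "return the stubbed user and record the call" in {
      val auth = mock[Authenticator]
      auth.getUser(42) returns Some(User(id = 42, name = "Test User"))

      // in a real test, this call happens inside the code under test
      auth.getUser(42) must beSome

      // verify the expectation: getUser was called exactly once
      there was one(auth).getUser(42)
    }
  }
}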

Whatever tools and frameworks you use, test driven development has proven itself over the past decade. My own experience is the same: every TDD project has produced more predictable results, better velocity, and greater reliability overall. It’s why I don’t do any coding without following TDD.

Do hackers make the best testers?

Recently, I was asked “what makes a good software tester,” and as a subtext, whether hacking and testing share a similar mindset, and how wide a skill set testers need to have.

I think the most valuable asset a Software Tester can have is an attitude of gleeful problem discovery. Someone that loves to break systems, discover their imperfections, and explore their weaknesses makes a great tester. I haven’t met many people that really enjoy and excel at this, but it’s probably an attribute common to hackers as well.

It’s also wonderful to have a tester that really cares about the quality of the product. It’s absolutely essential for someone that wants to excel as a tester. That means having the patience and desire to work closely with the Quality Assurance group, to understand what a “good customer experience” means, and to really grasp things like quality of services, user experience, and customer needs.

Part of being a good tester means enjoying running down the rabbit hole. Where the hole leads is a mystery. Perhaps testing discovers problems stemming from poor UI design, SQL injection vulnerabilities, or performance issues caused by heavy loading; perhaps it means playing the clueless user that always clicks the wrong thing and triggers a logic error.

The “how” of testing is another matter though. Yes, there are well understood principles and techniques, and often tools, for testing all of these things. I have found that in most cases, good testers tend to specialize. I don’t expect to find one person that can find the flaws in the user interface, perform load testing, and also look for SQL injection vulnerabilities. To get really good at all of these things, you need a team — some of those team members will focus on the back end, some on security, some on database systems, some on the front end. Finding someone that’s great at tackling a couple of those verticals is pretty rare. That said, every tester should have an adequate, if shallow, understanding of all of these areas. In order to properly localize a problem, you need to understand what could be causing it. But having other resources to bring in to help diagnose the specialty areas is critical.

First, care. Care intensely.

Excellent advice found on 43 folders: Before you sweat the logistics of focus: first, care. Care intensely. We spend a great deal of time working on “engaging the team” or engaging ourselves when what we really need to do is find the willpower to focus on the foremost problem at hand. As Merlin points out, “Obsessing over the slipperiness of focus, bemoaning the volume of those devil ‘distractions,’ and constantly reassessing which shiny new ‘system’ might make your life suddenly seem more sensible–these are all terrifically useful warning flares that you may be suffering from a deeper, more fundamental problem.”

Common oversights in choosing methodology

Changing the way a business operates is a daunting task. It involves assessing and understanding the strengths and weaknesses of the current organization, identifying solutions to the weaknesses without compromising the strengths and, ultimately, changing the way people work. Above all, people tend to be resistant to change — and this is the most common issue that arises when adopting a new methodology.

This translates into preparation, more than anything else: Preparing by understanding your options, preparing the organization for change, and preparing to measure your success.

Be thorough during evaluation

The most common oversight in preparing to adopt a new methodology is simply not evaluating all of the available options. It’s an easy pitfall to succumb to: there are so many processes, so many methodologies, so many choices. How can someone possibly make the right choice? Surely all of these published techniques are mature and “real,” so does it even matter which methodology we choose? Yes. It matters a great deal. Each methodology has its strengths and weaknesses, and very few methodologies can be applied to every development project.

The wide variety of methodologies is a reflection of the complexity of the software development industry. We have many choices in executing any strategic operation, whether a military incursion, a football game, or the construction of a house. Likewise, the software industry has evolved a wide variety of processes, each one suitable for different scenarios. While it is certainly true that many methodologies can be successfully applied to many different projects, we can’t make the assumption that any one methodology will work equally well in every situation. Adopting a heavy process in a project involving a small team and a short-term schedule is almost always a poor idea, as it leads to extending the project timeline to support unnecessary project artifacts. But less obvious is the impact of pairing a lightweight process with a medium-sized project. How many people is “too many” for an Extreme Programming (“XP”) project? At what point does the lack of formal project controls start to make the project unpredictable? Will the business stakeholders feel the project is not adequately managed? These questions, and many more, emphasize how important it is to prepare thoroughly before choosing a methodology.

Given the plethora of potential methodologies, it’s easy to just pick one and get started. The temptation to simply choose a well-regarded methodology, buy a well-reviewed book on the subject, and forge ahead can be strong. But this “textbook approach” can prove deadly. Without studying the methodology beforehand it’s easy to choose the wrong methodology — and even if a mistake of this magnitude becomes clear over time, it’s usually too late to change course. And much like reading instructions too quickly, it’s easy to realize too late that the process is wrong: Incorrectly implemented, or not the right fit for the situation.

Another pitfall of the “textbook approach”: it leads to following a process blindly and over-adopting, particularly with more comprehensive methodologies that have more to offer. The fallout from this: teams come to think that a comprehensive methodology is a “bad thing,” heavyweight and full of red tape, unnecessary work and overhead. Using the textbook as an instruction manual makes it impossible to have a complete view of the processes and artifacts offered by the methodology and, therefore, the value and appropriateness of each.

Prepare the team and the organization

Just as evaluating and selecting a new methodology can be a minefield, so can the actual adoption process. A common oversight when preparing to adopt a new methodology is not planning for the upheaval it will cause: training and learning curves, changes in operational behavior and metrics, and impacts to schedules. Changing the way a business works means everyone has to relearn what they do on a daily basis. This means considering what it will take to implement the methodology within the organization as a whole, and achieving a level of investment in the effort by all the stakeholders.

Team members need to be trained, business units need to be integrated into the process, and schedules adjusted to accommodate the new methodology; in most situations a significant learning curve will translate into a slow, steady adoption, as opposed to a sudden, rapid one. The former approach provides an opportunity for participants to learn the usefulness of different aspects of the methodology and to gauge its success. The latter approach, attempting to make a complete, rapid transition, often leads to failure during adoption. Too many interdependent processes that are not well understood by the team lead to poor execution. This can lead to missteps during a pilot project, a time at which such mistakes are highly visible. Not having a steady, progressive and measurable improvement against existing techniques means criticism will come easily.

Measure your success

Creating positive, measurable metrics that demonstrate the benefit of a new methodology is critical. Part of the process is making sure training costs and the cost of adoption are tied directly to business goals. By coupling the business to the methodology, all stakeholders have a vested interest in success. Good metrics demonstrate that progress is being made, both providing a positive measure of success and avoiding the need for a “big bang” success right out of the gate. And, if you aren’t already tracking metrics and measuring success, this is an ideal time to find a management methodology that will.

Why Agile isn’t enough (and why it doesn’t work)

Agile methods are powerful tools when used properly, but as with all tools, they can be misused. The critics of agile methods are many and vocal, often looking at agile as a host of poorly thought-out and incomplete “shortcuts” that fail to get the job done. And with 90% of projects failing to meet objectives, the criticism is valid.

So is Agile just hype or is there something to it?

There are strengths to the agile way of thinking, and many of them bring useful perspectives to software and systems development that are new and even revolutionary. Here are some of the things that work — and, potentially, that radically change our old-world practices.

Whereas most legacy methods stem from industrial process (that is, assembling a product using a set of defined, predictable steps), the agile method is empirical. It recognizes that development is more like invention and research, more akin to scientific study, than assembly. This empirical nature is at the heart of the agile mantra: deliver, measure, adjust and repeat. The strength of this approach often shows itself in fantastically hyperproductive teams that deliver working product far more quickly than legacy methods, such as waterfall, could ever achieve.

Agile does this by cutting through complexity. Every agile-based methodology focuses on simplification of otherwise complicated problems. For example, XP and Scrum both emphasize development of near-term, complete deliverables. This means carving out tangible and reasonably independent pieces of work, focusing on that work, and then — at least as much as possible — moving on to other work. This approach requires that large, complex problems are broken down into manageable pieces and thought of on a micro-deliverable level. Likewise, this approach minimizes ceremony and eschews as much procedure as possible. Some agile methods go to extremes in this regard, focusing entirely on delivering work product and not at all on procedure. This translates into minimizing complexity on a large scale.

Closely related to eliminating complexity is agile’s focus on progress measurement. Most agile methods measure progress chiefly, if not exclusively, in terms of delivered work product. Most methods also are quite stringent in defining progress only when finished work is delivered, which means you can’t work for nine months on a single big feature. Instead, micro-deliverables target key features, deliver those features into the customer’s hands, then move on to new features. This can be a huge strength because the customer gets working product in hand to review early, and often. It involves the customer early in product evolution, leading to a host of benefits including better product targeting, prioritized development and improved quality.

These characteristics of agile methods combine to fundamentally change the way software and systems development is practiced. Agile also empowers individuals to become stellar performers. In fact, all forms of agile rely on this to some degree — with more lightweight agile methods being completely dependent on individual empowerment. The idea is that an empowered team will leap over constraints to get the job done, no matter how “out of the box” the thinking needs to be. It’s a refreshing concept and one that can indeed be supremely successful, but it does require the team to embrace the idea wholeheartedly.

Another benefit, at least in some situations, is the creation of self-organizing teams. Partly because of the light ceremony, the fast pace, and the penchant for empowerment and accountability, teams become self-organizing. This works when the team has the right make-up, as individuals step up to take on tasks best suited to individual skills. Self-organized, empowered teams become very powerful and very productive, provided that the team members are up to the job.

There is absolutely no doubt that agile methods make it possible to get things done quickly. That’s what it’s all about, after all. The real question to me is: how much of this tradeoff is really desirable? How often do we want to eschew process and maturity in favor of getting things done quickly?

More importantly: Can we effectively merge the best attributes of Agility with the most valuable benefits of established processes and standards?

Why Agile doesn’t work

When an agile project fails, it generally does so spectacularly and predictably. The common failings of agile-based projects are just that… common. We see the same problems over and over again, and this has become the basis for many critiques of agile methodology. After all, if we keep seeing the same problems crop up again and again, isn’t this proof enough that the process is flawed? This becomes clear in hindsight, so why do we continue to see 90 percent of projects missing the mark?

The fact is, agile by itself is just one tool in the toolbox that should be applied with other implements of the trade. In my experience, the problem comes in most often because small- and mid-sized organizations experience brilliant success with agile and then assume it can work everywhere. They throw out the toolbox (or perhaps never buy one in the first place). Yes, agile can succeed. Yes, it can deliver fantastic productivity and stellar results. But not always — in fact, I will go so far as to say not often.

This isn’t because of agile’s limitations. Instead, it’s because of overconfidence by those putting it to use, and the mistakes an immature organization makes as it grows and applies it inappropriately.

Immature companies and teams are cutting their teeth, again and again, on the limitations of agile.

All agile methods make it easy to oversimplify complexity. In fact, agile’s strength of eliminating complexity might be better stated as “ignoring complexity.” There are appropriate situations for this but, more often than not, ignoring complexity leads to problems. Most business cases don’t call for undefined delivery dates, loosely controlled requirements, or partial deliveries. These are risks that most business models are incompatible with. If the risks aren’t something that your business can sustain, adopting a purely agile process is taking a huge gamble.

Likewise, focusing on the near-term is an agile attribute that introduces a lot of unknowns into the business-end of an equation. Few people will contend that agile is appropriate for mission critical efforts such as, say, launch vehicle development, as sometimes requirements need to be set in stone before anyone starts development. But what about situations where some degree of fuzziness is acceptable or even beneficial? Agile advocates compatibility with change, sidestepping change control procedures that would otherwise place tight controls over requirements. Requirements change carries with it a heavy burden, particularly when it comes to the cross-organizational impact to marketing, budget, quality management and the customer. However, cutting change control, requirements management, and configuration management from the process can lead to long-term disaster that the short-term perspective of most agile methods will overlook.

This theme of reducing structure and control has cut out many waterfall-origin processes. The danger often manifests as small-scale agile projects are successful, leading to wider-scale adoption of agile. But, as the projects grow in complexity and criticality, major missing components in the process become evident. For example, no agile methods today integrate comprehensive quality assurance procedures (in fact, thanks to some early mistakes, such as MIL-STD-498[#], most people think quality assurance is software testing — it’s not). Structured software testing often becomes an afterthought, and risk management programs tend to be regarded as “fuzzy disciplines.” Yet, these are the processes that successfully put man on the moon, that develop health care and financial services systems, and ensure that nuclear plant regulatory systems don’t fail after delivery. Of course, there is a cost to each of these processes, and every business needs to weigh the cost-benefit of adopting more process against cutting those processes. This needs to be an on-going evaluation, made as projects, organizations, and teams evolve — it’s not a decision that stands alone.

From a purely hands-on, management level, agile methods pose “people problems” as well. The strong emphasis on self-organization and empowerment can easily backfire. The former relies heavily on people that are capable of self-management and self-direction. Not everyone can live up to that expectation. The latter, delivering empowerment to the team and individual, can lead to a hero mentality and siloed teams that refuse to play well with others. As projects grow in size, complexity, and dependency on other teams and resources, these characteristics become serious drawbacks in an immature organization.

Almost all agile methods oversimplify valuable processes. In some situations, the project survives the oversimplification. Sometimes the business is tolerant of the fallout. In every case, agile methods expose the project to risks that stakeholders should be — and often are not — aware of.

What to do about it

We need to be cognizant that one solution does not fit all problems. While an agile method such as XP or Scrum may have led to success in one project, this doesn’t make it a foregone conclusion that it will do so again. Each project is different, and organizations evolve over time. Adopting one process to solve all problems is a sure recipe for failure. On the other hand, having a well-versed team that can draw on several methodologies, as appropriate for the job, is a recipe for success.

If your organization is looking for the one-size hammer to hit every nail, make sure it’s as configurable a hammer as possible. Don’t choose something that is too lightweight, such as XP, because many projects will overreach the capabilities of such a lightweight process. Likewise, don’t try to implement a full-on waterfall style methodology either because, while definitely thorough and capable of getting the job done, it’s just overkill for many smaller projects. If you must choose a single process, pick one that’s efficient, borrows from both agile and waterfall, and is highly configurable, such as Rational Scrum or the Rational Unified Process. Both of these have the maturity to deliver large-scale projects, but also support starting small and adopting minimum ceremony.

A better awareness of what specific agile practices can and cannot accomplish is key. For example, Scrum is not a development methodology, and it cannot effectively deliver software or hardware projects unless it wraps itself around one. Yet today many organizations are employing Scrum as if it were a development methodology. I’ve even seen an organization of several hundred developers “force fed” Extreme Programming from the top down. The outcome of that particular operation: mid-level management hid from top-level management the fact that they weren’t using XP, once everyone realized what a mistake it was. Perhaps we’ll have to wait for mature standards in education and certification to evolve, but personally I’m not sitting idly by.

One of my personal pet peeves in the technology industry is a relative lack of standards and qualifications. Would you go to a doctor that didn’t have a medical degree? Would you hire an architect that didn’t have an appropriate engineering degree? Yet we hire software professionals (much less often hardware professionals) without adequate education, current qualifications, or meaningful certifications. For that matter, the proliferation of meaningless qualifications (such as Scrum Master certification) continues to weaken the industry. In the long run, we need better standards regarding education, accreditation and certification.

Understand agile methods for what they are. Keep in mind that lightweight process carries risk. Use the right tools in the right situation.

Coming full circle

If we add all of these things to agile methods, won’t we just end up using waterfall process all over again?

I don’t think so. Waterfall-based processes, the original behemoths born out of industrial process, are widely recognized as inefficient. There are tremendous advantages to pressing forward with a merger between waterfall practices and agile practices. I hope the end result is a new generation of software and hardware development methodology, one that we’re just starting to see as processes such as Rational Scrum come to the fore. It’s time for development methodologies to evolve, and there’s no holding that back.