Polymorphic Class Representation in JSON

Have you ever needed to write your polymorphic class or trait to JSON, and realized that you can’t just do it with one line of code? (Oh, c’mon, sure you have — we all have at one time or another!)

Ok, so here’s the situation: I really like the play-json library put out by Lightbend. I mean, the convenience of being able to do this is awesome:

import play.api.libs.json._

implicit val geolocation: Format[Geolocation] = Json.format[Geolocation]

Json.toJson(someGeoLocation)

The fact that I don’t actually have to write my JSON serializer and deserializer code is just convenient. Of course other libraries do this too, but many rely on reflection. Not so with play-json since it figures it all out at compile-time. It’s fast and efficient, and I love it. Plus, the dialect for working with JSON directly is pretty straightforward.

But every now and then I do run into a limitation. Let’s say I’ve got a polymorphic structure like this:

trait Customer {
	val customerNumber: Option[String]
}

case class BusinessCustomer(name: String, ein: String, customerNumber: Option[String] = None) extends Customer
case class IndividualCustomer(firstName: String, lastName: String, ssn: String, customerNumber: Option[String] = None) extends Customer

implicit val individualFormat: Format[IndividualCustomer] = Json.format[IndividualCustomer]
implicit val businessFormat: Format[BusinessCustomer] = Json.format[BusinessCustomer]

Unfortunately, I can’t just declare an implicit Format for the Customer trait. If you think about it, that makes sense… how would the compiler be able to figure that out? But it leaves me with a bit of a problem. If I serialize an IndividualCustomer, I end up with this:

val u = IndividualCustomer("Zaphod", "Beeblebrox", "001-00-0001")
Json.toJson(u)
// {"firstName":"Zaphod","lastName":"Beeblebrox","ssn":"001-00-0001"}

There’s no context there. When I go to deserialize the JSON representation I have to know, in advance, that it’s going to be an IndividualCustomer and not a BusinessCustomer. Hence, I can’t just deserialize a Customer and get the right type automatically.

Containers to the rescue

It turns out there’s a simple solution: wrap the JSON in some kind of contextual information. For example, we can change the JSON to include type information:

{ "type":"IndividualCustomer",
  "value": {
    "firstName":"Zaphod","lastName":"Beeblebrox","ssn":"001-00-0001"
}}

This is pretty straightforward to implement using a simple container class. The container itself is going to have to handle the abstraction of the Customer, so I’ll write a custom Format that adds the extra context I need:

case class Container(customer: Customer)

implicit val containerFormat: Format[Container] = new Format[Container] {
  def writes(container: Container) = Json.obj(
    "type" -> container.customer.getClass.getSimpleName,
    "value" -> {
      container.customer match {
        case c: IndividualCustomer => Json.toJson(c)
        case c: BusinessCustomer => Json.toJson(c)
      }
    }
  )

  def reads(json: JsValue) = {
    val v: JsValue = (json \ "value").get
    val c: String = (json \ "type").as[String]

    c match {
      case "IndividualCustomer" => JsSuccess(Container(v.as[IndividualCustomer]))
      case "BusinessCustomer" => JsSuccess(Container(v.as[BusinessCustomer]))
      case other => JsError(s"Unknown customer type: $other")
    }
  }
}

With this custom Format (and the corresponding reads and writes functions) I can now serialize any Container to JSON, and get enough contextual information so that I can deserialize back to the correct type:

val c1 = Container(u)
Json.toJson(c1)
// yay! context!
// {"type":"IndividualCustomer","value":{"firstName":"Zaphod","lastName":"Beeblebrox","ssn":"001-00-0001"}}

Since we are using the Container around all instances of the Customer trait, we get context. Now we can read and write our abstracted Customer instances to and from JSON:

val someJsonString = """{"type":"IndividualCustomer","value":{"firstName":"Zaphod","lastName":"Beeblebrox","ssn":"001-00-0001"}}"""

val someJsValue = Json.parse(someJsonString)
val backAgain = Json.fromJson[Container](someJsValue)
val maybeContainer = backAgain.asOpt

assert(maybeContainer.isDefined)

Alternatives

Another approach I’ve used is to put the wrapper logic into the trait itself. Personally, I don’t like that approach for a couple of reasons. First, it clutters the trait with things that have nothing to do with the trait’s purpose. Second, it eliminates an otherwise nice separation of concerns. I’d rather keep my traits pure, and add in something like Container (or, perhaps, call it CustomerJSONContainer if you like). I could easily enough see a hybrid approach that defines a trait, such as AddsCustomerContext, and then mixes it in.

About the only thing I’m not happy about is the relatively static bit of code in Container. If I could find a way to avoid match cases on each Customer type, that would be ideal… but then, if I could do that, Scala would probably be able to create an implicit Format[Customer] and none of this would be necessary in the first place.
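One way to shrink that static bit of code is to keep a single map from type name to decoding function, so adding a subtype means adding one map entry rather than a new match arm in two places. Here’s a library-free sketch of the idea; the pared-down Customer variants and the Map[String, String] field maps are illustrative stand-ins for the real play-json Reads and JsObject types:

```scala
// Simplified model: a sketch, not the play-json version from above.
sealed trait Customer
case class IndividualCustomer(firstName: String, lastName: String) extends Customer
case class BusinessCustomer(name: String, ein: String) extends Customer

// The registry: type tag -> decoder. Adding a subtype is one new entry.
val readers: Map[String, Map[String, String] => Customer] = Map(
  "IndividualCustomer" -> (m => IndividualCustomer(m("firstName"), m("lastName"))),
  "BusinessCustomer"   -> (m => BusinessCustomer(m("name"), m("ein")))
)

def decode(tpe: String, fields: Map[String, String]): Option[Customer] =
  readers.get(tpe).map(f => f(fields))

decode("IndividualCustomer", Map("firstName" -> "Zaphod", "lastName" -> "Beeblebrox"))
// Some(IndividualCustomer(Zaphod,Beeblebrox))
```

The dispatch still exists, of course, but it lives in one data structure instead of being duplicated across reads and writes.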

Annotation Done Right

Or, think carefully about your APIs

This is such a common design pattern we’ve probably all done it one way or another. We have some data stream — likely XML or JSON or just plain old text — and we want to wrap it inside another element. For example, taking a name like “Zac” and turning it into “<name>Zac</name>” like so:1

val s = s"<name>$firstName</name>"

Of course if you do this enough, you start thinking it would be nice to have a function on hand:

def wrap(s: String, w: String): String = s"<$w>$s</$w>"

What’s wrong with that?

That’s all well and good, but after a while we realize a couple of problems with this very simple API:

  1. I’d say the API itself is pretty bad. I would love to have a syntax that feels more, well, functional… like "Zac" wrap "name".
  2. It’s not really “wrapping” the text string — it is in fact “XML’ising” the text string. There’s more going on here than just bracketing a string with another string.
  3. It’s not much of a stretch to see how this generic wrap() function could end up getting in the way (either confusing someone about its true purpose, or getting in the way of other string-oriented functions).
  4. And, why limit this handy little function? What happens if we want to wrap something that’s not a String?

Let’s tackle these one at a time. The first limitation is elegantly managed by introducing infix notation with an implicit class in Scala.

implicit class EnrichedString(s: String) {
  def wrap(w: String) = s"<$w>$s</$w>"
}

"thing".wrap("foo") // the usual syntax
"thing" wrap "foo"  // or using infix notation

When you try to call the wrap() function on a String, the compiler doesn’t find it on the String class itself, so it starts to search implicit scope. There it finds the EnrichedString class, realizes it can convert a standard String to an EnrichedString, and gains access to the wrap() function.

One possible negative side effect of this is boxing. The source string, in our case “Zac”, will get boxed into EnrichedString("Zac") so the compiler can call wrap(). Depending on how you feel about this, you can get around it by using AnyVal instead:

implicit class EnrichedString(val s: String) extends AnyVal { ... }
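For completeness, here’s a minimal self-contained sketch of the value-class variant (the StringSyntax object is just an arbitrary home for the implicit class, since value classes can’t be defined locally):

```scala
object StringSyntax {
  // Value class: extends AnyVal, so the compiler can usually avoid
  // allocating the wrapper object at the call site.
  implicit class EnrichedString(val s: String) extends AnyVal {
    def wrap(w: String): String = s"<$w>$s</$w>"
  }
}

import StringSyntax._

"Zac" wrap "name"
// <name>Zac</name>
```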

A thoughtful API

That’s a nice way of improving the usability of our API, but it’s still a pretty bad API. I still haven’t addressed most of the problems I brought up:

  1. We want to wrap strings with an opening and closing XML element. We should create an API that accurately describes this.
  2. There’s no reason to limit this to strings; we could pretty easily apply it to just about any type. So, why not? Who’s to say we might not want to wrap an Int?
  3. We should also consider other possible uses of our API. What if I wanted to ask a question in Spanish — ¿Justo como esto?
  4. Finally, I don’t know about you but I always do BDD, so we should have a test harness to make sure our API does the right thing.

Let’s start with the test harness. I love starting here, because I’m thinking about what I want the API to do, not what the code is doing. Let’s think up a test that fits all of our goals (a good API, asymmetric tokens, and a functional style):

def annotationWorksAsExpected = {
  "foo" wrap ("[", "]") === "[foo]"
  "foo" wrap (("[", "]")) === "[foo]"
  1 wrap "?" === "?1?"
  "foo" + "bar" wrap Some("...") === "...foobar..."
  "foo" + "bar" wrap Some(("<", ">")) === "<foobar>"
  "foo" + "bar" wrap None === "foobar"
  "foo" + "bar" wrap(("¿", "?")) === "¿foobar?"
  "foo" makeElementOf("around") === "<around>foo</around>"
  "foo" + "bar" makeElementOf(("start", "end")) === "<start>foobar</end>"
  "foo" + "bar" makeElementOf((1, 2)) === "<1>foobar</2>"
  "foo" + "bar" makeElementOf("enclose") === "<enclose>foobar</enclose>"
  500 makeElementOf(0) === "<0>500</0>"
  500 makeElementOf("int") === "<int>500</int>"
  500 + " tiene razón" wrap(("¿", "?")) === "¿500 tiene razón?"
}

The === is a specs2 matcher that specifies equality. So, I defined every case I could think of, within reason, that someone might expect of my API. I’ve separated out the idea of “wrapping” and “XML’ising” a string — and while I’m at it, I decided it shouldn’t be limited to strings. I also introduced the idea of asymmetric tokens, such as “start” and “end” and “?” and “¿” (in this case, by using a tuple to represent a pair of opening and closing tokens). I also threw in an Option to give some flexibility while coding.

After sitting back and thinking about it, I feel like this is an API that won’t offend or get in too many people’s way. Now, to make my tests pass:

implicit class EnrichedAny(val s: Any) {
  def wrap(y: Option[Any]): String = y match {
    case Some((open, close)) => s wrap ((open, close))
    case Some(v)             => s wrap v
    case None                => s.toString
  }

  def wrap(y: Any): String = y.toString + s + y.toString
  def wrap(y: (Any, Any)): String = y._1.toString + s + y._2.toString

  def makeElementOf(y: Any): String = s wrap ((s"<${y}>", s"</${y}>"))
  def makeElementOf(y: (Any, Any)): String = s wrap ((s"<${y._1}>", s"</${y._2}>"))
}

Hopefully the basic pattern is recognizable, but now with a few much needed improvements:

  1. We abstracted the entire function to work over Any so I’m no longer limited to String instances.
  2. I added support for Option, really just trying to be functional and think of likely use cases here.
  3. By adding support for a Tuple2 I can now use asymmetric tokens, like “start” and “end.”
  4. And I’ve separated the idea of wrapping from “XML’ising.”

I think the outcome is pretty good. We have a wrap API that does pretty much what you would expect — it puts an unmodified wrapper around another object. And we have my original goal, an XML-oriented API that puts properly formed starting and ending elements around an object. More important, it’s general enough that I won’t have to reimplement exactly the same thing on different types. And of course, it’s all tested to make sure it does what we expect.

We could probably take it a little further, perhaps using generics or a bit of recursion to support tuples of any size… but I’ll leave that as an exercise for the reader…

The real point here is to think about your APIs carefully when designing them. Don’t build an API that is limited, or confusing, or obfuscated. Look at your code in the context of the entire ecosystem you work in — and build something that fits well in that ecosystem.

  1. Obviously, if you are really doing a lot of XML specific work, you might just want to look at a library like Rapture (there are many options out there).

Be Lazy!

Or, why lazy evaluation is such a good thing

I was recently challenged, while teaching a Scala class, to explain exactly why lazy evaluation is important. That had me looking at the use cases enumerated in the course notes, and they weren’t really compelling. Sure, there are some obvious reasons that lazy evaluation makes sense — but I think there are even more compelling reasons to use it.

What it means to be lazy

Most languages are not inherently lazy. Scala, for instance, depends on specific syntax to use lazy evaluation:

lazy val later = calculateTheNumberz()

By using the lazy keyword we defer evaluation of an expression until the defined name is first referenced. In other words, in the above declaration the value of later won’t be calculated until we first reference later. At that point, it will be calculated (just once) and any future reference will return the calculated value. In contrast, this expression:

val later = calculateTheNumberz()

will immediately calculate the value of later. The obvious advantage here is really twofold:

  1. Potentially, we never call calculateTheNumberz() and, presuming this is an expensive operation, we never incur the overhead of that calculation.
  2. If we do reference later, we at least defer calling calculateTheNumberz() until we actually need it. This can mean faster program startup, for instance.
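We can watch both properties, deferral and once-only evaluation, with a quick illustrative counter (the names here are just for demonstration):

```scala
var evaluations = 0
def calculateTheNumberz(): Int = { evaluations += 1; 42 }

lazy val later = calculateTheNumberz()
// evaluations is still 0: declaring later computes nothing

val first = later   // the first reference forces the computation
val second = later  // later references reuse the cached value
// evaluations is now 1, not 2
```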

So, lazy evaluation is a method of evaluation. Expressions are not evaluated when they are bound to variables; their evaluation is deferred until the results are needed. Likewise, arguments are not evaluated before they are passed to a function, but only when their values are actually used. Passing arguments this way is called call-by-name (and when the result is cached after the first evaluation, as with lazy val, call-by-need), as opposed to call-by-value, or eager, evaluation.

def someComputation(x: => Int, y: => Int) = if (x > 0) x else y

In the above example, the Scala function someComputation() takes two parameters. The => notation indicates lazy evaluation (or “call-by-name”), so the value of x and y will not be computed until referenced. In the expression if (x > 0) x else y the value x will be computed and, if positive, y will never be computed. If we imagine that y is a potentially expensive operation, this can be a huge performance saver.
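To convince ourselves that y really is skipped, we can pass an argument that would blow up if it were ever evaluated (boom here is just an illustrative stand-in for an expensive computation):

```scala
def someComputation(x: => Int, y: => Int) = if (x > 0) x else y

// An expression that would throw if it were ever evaluated:
def boom: Int = sys.error("the expensive branch ran!")

someComputation(1, boom)
// res: Int = 1 -- boom was never evaluated
```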

Lazy by nature

Some languages are lazy by nature. These tend to be functional languages, such as Haskell. A potential advantage to languages that evaluate everything lazily is that you don’t have to remember to use the lazy syntax.

Another hidden advantage of “lazy by nature” is that programs tend to be more efficient without forethought. You can bang out a potentially expensive expression, like this:

if (someCondition) expensiveCall() else heftyCall()

We don’t have to worry about such an expression, because only one of expensiveCall() or heftyCall() will be evaluated. To be fair, even eager languages evaluate only one branch of an if expression; the difference shows up as soon as you abstract the pattern into an ordinary function, where eager evaluation computes every argument up front. And in a truly “lazy by nature” setting, even an expression as cheap as 2 * 2 will not be evaluated unless its result is referenced.

The less obvious

Clearly, lazy evaluation readily translates into pragmatic efficiency — most obviously, by avoiding unnecessary computation. But what else does it do?

One benefit is working with infinitely large data sets. For instance, you can model an infinite list of numbers or an infinite stream of data. In a lazy evaluation model, only the head element of such a stream will be evaluated, whereas an eager evaluation will calculate the entire data set:

def listRange(lo: Int, hi: Int): List[Int] = {
	print(s"${lo}")
	if (lo >= hi) Nil
	else lo :: listRange(lo + 1, hi)
}

scala> listRange(5, 10)
5678910 res0: List[Int] = List(5, 6, 7, 8, 9)

In this example, listRange() creates a list of integers from a low to a high range. Since it uses call-by-value evaluation, the entire data set is calculated (as we can see from the output). If this were, say, a huge or infinite data set we would have a problem.

Lazy evaluation lets us solve this problem. Here’s the same implementation, using a Scala Stream:

def streamRange(lo: Int, hi: Int): Stream[Int] = {
	print(s"${lo} ")
	if (lo >= hi) Stream.empty
	else Stream.cons(lo, streamRange(lo + 1, hi))
}

scala> streamRange(5, 10)
5 res0: Stream[Int] = Stream(5, ?)

In this case, the lazy Stream class only calculates the head value of the data set, returning the first integer. The tail calculation is left for later, giving us an easy (and efficient) way to work with arbitrarily large data sets. Only when we ask for the next value will the stream calculate whether we are done with the set.
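In Scala 2.13 and later, Stream has been superseded by LazyList (which is lazy in both head and tail), but the idea is the same; we can even describe an unbounded sequence and pay only for the elements we actually take:

```scala
// An unbounded ascending sequence; #:: builds a LazyList whose tail
// is not computed until somebody asks for it.
def from(n: Int): LazyList[Int] = n #:: from(n + 1)

from(5).take(5).toList
// List(5, 6, 7, 8, 9)
```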

This works because the Stream class uses a very simple call-by-name lazy calculation similar to the following:

trait Stream[T] {
	def isEmpty: Boolean
	def head: T
	def tail: Stream[T]
}

object Stream {
	def cons[T](h: T, t: => Stream[T]) = new Stream[T] {
		def isEmpty = false
		def head = h
		def tail = t
	}
}

Notice how the tail of the stream is constructed using call-by-name semantics: t: => Stream[T]. The value t is not calculated until we ask for it some time in the future.

But there’s more!

We’ve discussed some very practical advantages to using lazy evaluation, but I think there are even more important reasons to use it liberally.

Lazy languages are pushed toward purity, because it is very hard to reason about side effects in a lazy language: if you can’t easily predict when (or whether) an expression will run, you can’t afford to let it have side effects. By “pure” I mean pure in the sense of functional programming: pure functions are referentially transparent. In a language that doesn’t evaluate expressions until they are referenced, it’s more natural to think in terms of deferring evaluation, and that strikes me as conducive to functionally pure reasoning.

Also, laziness allows you to define new control structures in the language or implement DSLs. You can’t write a conditional evaluator (think of an if-then-else expression) as an ordinary function in a strictly evaluated language, because both branches would be computed before the function is even called. It gets harder still when you consider how to deal with loops or infinite data sets.

And finally, and maybe most important, some of the best approaches to dealing with side effects in the type system, like monads, rely heavily on laziness to be expressed effectively. Think about how hard it would be to implement monadic operations without violating at least one of the monad laws (for instance, mapping over a Future).

Functional Programming is Better

No, seriously — it is. But I realize that’s a loaded statement and likely to draw an argument. Nevertheless, I’ll stick to my premise and lay out my reasoning.

First, though, a little background. Functional programming is a paradigm shift, a different style of programming. Just as object oriented programming upset the procedural cart, functional programming turns object orientation on its head.

Where did it come from?

The roots of functional programming date back to the 1930s, when Alonzo Church codified lambda calculus as “A formal system of mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution.”

In other words, lambda calculus consists of constructing lambda terms and performing reduction operations on them. It describes how we evaluate the symbols of the calculus, and it does so by evaluating and reducing terms to their simplest form.

What matters for us is that everything in lambda calculus is a function expression, and we evaluate every expression, much like you would when evaluating the function f(y) = 2y.

Programming without side effects

In its simplest definition, one could say that functional programming is “programming without side effects.” In a more restricted sense, it is programming without mutable variables: no assignments, no loops, and no other imperative control structures. It puts the focus on functions, not on program flow (whereas more traditional languages, such as Java or C, emphasize an imperative style that drives program flow with statements, or commands).

So back to my premise — exactly why is this better?

  1. It’s easier to write functional code. It is much more modular by nature, since each function is entirely self-contained. So-called “spaghetti code” is eliminated, and confusing scope issues (such as those created by free variables) go away.
  2. It’s easier to understand. Functional programs uphold referential transparency by using “expressions that can be replaced with its corresponding value without changing the program’s behavior.” In other words, there are no external side effects to a function. Given any function, you can replace any occurrence of that function with its output and the program will run unaltered. Since there are no looping control structures and no scope violations (such as Java’s for loop and global variables) functions always produce the same output, given the same input.
  3. It’s easier to test. By eliminating side effects, global or free variable scope, spaghetti code, and sticking to the principle of referential transparency we end up with programs that are very easy to test. Functions always behave the same way. They can be easily isolated and tested.
  4. It’s far better for parallel computation. Without mutation and side effects, we don’t have to worry about two parallel processes clobbering each other’s data. Parallel computation is vastly simplified.
  5. It represents business logic more easily. Have you ever tried to walk through a large program’s flow control diagram with your business stakeholders? Loopbacks, free variables, “spaghetti calls” and mutation make it a complicated process. Functional programming maps easily to the business domain. It does so naturally, because business functions (and business flowcharts) tend to map very cleanly to function definition. You get better stakeholder understanding and involvement in decision making.
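Referential transparency (point 2 above) is easy to demonstrate with a trivial, illustrative function:

```scala
def add(a: Int, b: Int): Int = a + b  // pure: same inputs, same output, no effects

val viaCall  = add(2, 3) * 2
val viaValue = 5 * 2  // add(2, 3) replaced by its value
// both are 10; the substitution cannot change the program's behavior
```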

Everyone has favorites

Mine is Scala. But the good news is, you can use functional programming principles in most languages. Java, Go, and others support function lambdas (being able to pass functions as arguments), giving you composability. Most languages allow you to write code without relying on mutation. And you can use discipline to stop the spaghetti code.

The one big limiting factor will be whether your language of choice handles recursion efficiently (and, specifically, can optimize for tail call recursion). To truly avoid imperative programming, we really need to rely on recursion. Consider, for example, how to iterate over an array if you don’t have any control statements (such as a for loop). How do you do it? The functional programming answer is recursion:

val coins = List(1, 2, 5, 10, 20, 50)

@annotation.tailrec
def count(l: List[Int], acc: Int = 0): Int = l match {
	case Nil => acc
	case h :: t => count(t, acc + h)
}

count(coins)
// res1: Int = 88

This function uses tail call recursion to call itself, iterating through a list of coins and adding up the value 88. The trick to success is that the compiler can optimize the recursive call, eliminating any risk of stack overflow and also doing it just as efficiently as an imperative for loop.
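As an aside, in day-to-day Scala the same sum is usually written with a fold, which packages the recursion and its accumulator for you:

```scala
val coins = List(1, 2, 5, 10, 20, 50)
val total = coins.foldLeft(0)(_ + _)
// total: Int = 88
```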

Recently, I tried pushing Go as far as I could as a functional language. From a language-feature perspective, I could actually achieve a pretty good result: the code ended up looking functional, with statements and mutation mostly avoided. Unfortunately, there’s one problem: Go doesn’t support tail call optimization. As a result, the recursive code was about four times slower than its imperative equivalent, and vulnerable to stack overflows. The net result: I felt like I could get “halfway there” in Go. You can’t completely avoid mutation or rely exclusively on recursion, but you can still write code that looks very functional. Under the covers there will be some control statements and mutation, but you can hide them away, encapsulate them, and structure the code well. And in so doing, you can realize many of the benefits of functional programming.

If you’d like to learn more about functional programming in Go, I highly recommend Francesc Campoy Flores’ Goto talk on the subject.

Treat Yourself to a Listy Tuple!

We don’t use tuples enough, and part of the reason is that they’re kind of ugly to use. I mean, accessing a value with whatever._1 is hardly elegant.

shapeless to the rescue!

The opening sentence on the shapeless site is kind of off-putting: “shapeless is a type class and dependent type based generic programming library for Scala.” Ok, that doesn’t really tell me what’s in it for me… so I thought I’d write up a few examples.

Back to those tuples. Wouldn’t it be cool if you could treat tuples just like other collection types in Scala? For instance, if you could get the head and tail of a tuple?

import shapeless._
import syntax.std.tuple._

(23, "foo", true).head
// res0: Int = 23

Nifty! You can use tail, drop, and take too, as you might expect. You can also append, prepend, and concatenate tuples much like you can other container types:

(23, "foo") ++ (true, 2.0)
// res1: (Int, String, Boolean, Double) = (23,foo,true,2.0)

And perhaps best of all, you can now map, flatMap, fold and otherwise chop, spindle, and manipulate your tuples:

import poly._

object option extends (Id ~> Option) {
  def apply[T](t: T) = Option(t)
}

(23, "foo", true) map option
// res2: (Option[Int], Option[String], Option[Boolean]) = (Some(23),Some(foo),Some(true))

Before we look at some of the really cool things this enables, here’s one more tuple-trick, thanks to singleton-typed literals:

val t = (23, "foo", true)
t(1)
// res0: String = foo

So yes, if you really hate it that much, you can now effectively be done with the ._1 syntax. But shapeless does a lot more than make working with tuples easier. For instance, it provides an implementation of extensible records, which makes it possible to define a record like this description of a book:

import shapeless._ ; import syntax.singleton._ ; import record._

val book =
  ("author" ->> "Benjamin Pierce") ::
  ("title"  ->> "Types and Programming Languages") ::
  ("id"     ->>  262162091) ::
  ("price"  ->>  44.11) ::
  HNil

And then operate on the book:

book("title")   // Note result type ...
// res1: String = Types and Programming Languages

I’ll leave exploring shapeless’ extensible records to you (check out the GitHub Feature Overview page).

One more feature I feel compelled to mention, just because I love them, is lenses. The shapeless implementation supports boilerplate-free lens creation for arbitrary case classes. Unless you’re already married to Scalaz or Monocle, you’ll want to give the shapeless lens a try:

case class Person(name: String, age: Int)
val person = Person("Zaphod", 37)

val ageLens = lens[Person] >> 'age
val age1 = ageLens.get(person)
// age1: Int = 37

Lenses are pretty cool once you get used to them. They lead to some marvelously readable, maintainable code. Check them out, along with shapeless’ typesafe cast operator, cast. If you’ve ever been bitten by type erasure, this just might be the solution you’ve been looking for.

Once you get excited about shapeless you might want to pick up a copy of The Type Astronaut’s Guide to Shapeless. It’s free!