
Sunday, July 30, 2017

Domain Models - Late Evaluation buys you better Composition

In the last post we talked about early abstractions that allow you to design generic interfaces that are polymorphic in the type parameter. As long as you don't abuse the type system of a permissive language like Scala and you adhere to the principles of parametricity, this approach helps you implement abstractions that are reusable in various contexts. We saw this when we implemented the generic contract of the mapReduce function and its various specializations by supplying different concrete instances of the Monoid algebra.

In this post we will take a look at the other end of the spectrum in designing functional domain models. We will discuss evaluation semantics of model behaviors - the precise problem of when to commit to specific concrete evaluation semantics. Consider the following definition of a domain service module ..
type ErrorOr[A] = Either[String, A]

trait PaymentService {
  def paymentCycle: ErrorOr[PaymentCycle]
  def qualifyingAccounts(paymentCycle: PaymentCycle): ErrorOr[Vector[Account]]
  def payments(accounts: Vector[Account]): ErrorOr[Vector[Payment]]
  def adjustTax(payments: Vector[Payment]): ErrorOr[Vector[Payment]]
  def postToLedger(payments: Vector[Payment]): ErrorOr[Unit]
} 
Such definitions are quite common these days. We have a nice monadic definition going on, which can also be composed to implement larger behaviors out of smaller ones ..
def processPayments() = for {
  p <- paymentCycle
  a <- qualifyingAccounts(p)
  m <- payments(a)
  t <- adjustTax(m)
  _ <- postToLedger(t)
} yield ()
Can we improve upon this design ?

Committing to the concrete early - the pitfalls ..

One of the defining aspects of reusable abstractions is the ability to run them in different contexts. This is one lesson that we learnt in the last post as well - make the abstractions depend on the least powerful algebra. In this example our service functions return Either, which is a monad. But it's not necessarily the least powerful algebra in the context. Users may choose some other monad, or maybe even an applicative, to thread the context through while building larger behaviors. Why not keep the algebra unspecified at the service definition level and leave the specializations to implementations, or even to usages at the end of the world ? Here's what we can do ..
// algebra
trait PaymentService[M[_]] {
  def paymentCycle: M[PaymentCycle]
  def qualifyingAccounts(paymentCycle: PaymentCycle): M[Vector[Account]]
  def payments(accounts: Vector[Account]): M[Vector[Payment]]
  def adjustTax(payments: Vector[Payment]): M[Vector[Payment]]
  def postToLedger(payments: Vector[Payment]): M[Unit]
} 
A top level service definition that keeps the algebra unspecified. Now if we want to implement a larger behavior with monadic composition, we can do this ..
// weaving the monad
def processPayments()(implicit me: Monad[M]) = for {
  p <- paymentCycle
  a <- qualifyingAccounts(p)
  m <- payments(a)
  t <- adjustTax(m)
  _ <- postToLedger(t)
} yield p
Note that we are using only the monadic bind in composing the larger behavior - hence the least powerful algebra that we can use is that of a Monad. And we express this exact constraint by publishing the requirement that an instance of Monad must exist for the type constructor M.
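
For completeness, here's a sketch of the enclosing context in which such a composition typechecks over an abstract M - the for comprehension needs cats' flatMap and map syntax in scope. The PaymentProcessing module shown here is an assumption for illustration, not part of the original design.

import cats.Monad
import cats.syntax.flatMap._   // flatMap syntax for any M[_] with a FlatMap instance
import cats.syntax.functor._   // map syntax for any M[_] with a Functor instance

class PaymentProcessing[M[_]: Monad](service: PaymentService[M]) {
  import service._

  def processPayments(): M[PaymentCycle] = for {
    p <- paymentCycle
    a <- qualifyingAccounts(p)
    m <- payments(a)
    t <- adjustTax(m)
    _ <- postToLedger(t)
  } yield p
}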

What about Implementation ?

Well, we could avoid committing to a concrete algebra in the definition of the service. What about the implementation ? One of the core issues with the implementation is how you handle errors. This is an issue that often makes you commit to a concrete type when you write the interpreter / implementation of a service contract. You may use Failure for a Try based implementation, or Left for an Either based implementation etc. Can we abstract over this behavior through a generic error handling strategy ? Some libraries like cats offer abstractions like MonadError that help you implement error reporting functionality using generic monadic APIs. Here's how we can do this ..
class PaymentServiceInterpreter[M[_]](implicit me: MonadError[M, Throwable])
  extends PaymentService[M] {

  //..

  def payments(accounts: Vector[Account]): M[Vector[Payment]] = 
    if (accounts.isEmpty) me.raiseError(
      new IllegalArgumentException("Empty account list"))
    else //..
    //..
  //..
}
Note we needed a monad with error handling capabilities and we used MonadError for that. Note that we have kept the error type in MonadError as Throwable, which may seem a bit unethical in the context of pure functional programming. But it's also true that many libraries (especially Java ones) and underlying abstractions like Future or Try play well with exceptions. Anyway, this is a red herring that has nothing to do with the current topic of discussion. The main point is that you need to supply a MonadError for which you have instances.

Here's how cats defines the trait MonadError ..
trait MonadError[F[_], E] extends ApplicativeError[F, E] with Monad[F] { //..
.. and that's exactly what we will commit to. We are still dealing with a generic Monad even in the implementation, without committing to any concrete instance.
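
And because MonadError extends ApplicativeError, the same algebra gives us generic recovery in addition to generic error raising. Here's a small sketch - paymentsOrEmpty is a hypothetical helper for illustration, not part of the service contract:

import cats.MonadError

// recover from a failed payments computation with an empty result, without
// knowing the concrete M - handleErrorWith and pure come with cats' MonadError
def paymentsOrEmpty[M[_]](s: PaymentService[M], accounts: Vector[Account])(
    implicit me: MonadError[M, Throwable]): M[Vector[Payment]] =
  me.handleErrorWith(s.payments(accounts))(_ => me.pure(Vector.empty))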

End of the World!

The basic reason we wanted to delay committing to the concrete instance was to allow the users the flexibility to choose their own implementations. This is what we call the principle of delayed evaluation. Abstract early, evaluate late, and decouple the concerns of building and evaluating the abstractions. We have already seen 2 of these principles - we will see that our design so far accommodates the third one as well, at least for some instances of M.

The user of our API has the flexibility to choose the monad as long as she supplies the MonadError[M, Throwable] instance. And there are many to choose from. Here's an example of the above service implementation in use that composes with another service in a monadic way, choosing the exact concrete instance of the Monad at the end of the world ..
import cats._
import cats.data._
import cats.implicits._

// monix task based computation
object MonixTaskModule {
  import monix.eval.Task
  import monix.cats._

  val paymentInterpreter = new PaymentServiceInterpreter[Task]
  val emailInterpreter = new EmailServiceInterpreter[Task]

  for {
    p <- paymentInterpreter.processPayments
    e <- emailInterpreter.sendEmail(p)
  } yield e
}

// future based computation
object FutureModule {
  import scala.concurrent.Future
  import scala.concurrent.ExecutionContext.Implicits.global
  
  val paymentInterpreter = new PaymentServiceInterpreter[Future]
  val emailInterpreter = new EmailServiceInterpreter[Future]

  for {
    p <- paymentInterpreter.processPayments
    e <- emailInterpreter.sendEmail(p)
  } yield e
}

// Try based computation
object TryModule {
  import scala.util.Try

  val paymentInterpreter = new PaymentServiceInterpreter[Try]
  val emailInterpreter = new EmailServiceInterpreter[Try]

  for {
    p <- paymentInterpreter.processPayments
    e <- emailInterpreter.sendEmail(p)
  } yield e
}
Monix Task is an abstraction that decouples the building of a computation from its execution. So the Task that we get from building the composed behavior as in the above example can be executed in a deferred way, depending on the requirements of the application. It can also be composed with other Tasks to build larger ones.
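
Here's a small sketch of that deferred execution, against the monix 2.x API that the imports above suggest (later monix versions renamed runAsync to runToFuture):

import monix.eval.Task
import monix.execution.Scheduler.Implicits.global

// building the Task is pure - nothing executes here
val program: Task[Int] = Task(40).map(_ + 2)

// execution starts only when we hand the Task over to a Scheduler
val running = program.runAsync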

Vertical Composition - stacking abstractions

When you have not committed to an implementation early enough, all you have is an unspecified algebra. You can do fun stuff like stacking abstractions vertically. Suppose we want to implement auditability in some of our service methods. Here we consider a simple strategy of logging as a means to audit the behaviors. How can we take an existing implementation and plug in the audit function selectively ? The answer is that we compose algebras .. here's an example that stacks the Writer monad on top of an already existing algebra to make the payments function auditable ..
final class AuditablePaymentService[M[_]: Applicative](paymentService: PaymentService[M]) 
  extends PaymentService[WriterT[M, Vector[String], ?]] {

  def paymentCycle: WriterT[M, Vector[String], PaymentCycle] =
    WriterT.lift(paymentService.paymentCycle)

  def qualifyingAccounts(paymentCycle: PaymentCycle): WriterT[M, Vector[String], Vector[Account]] =
    WriterT.lift(paymentService.qualifyingAccounts(paymentCycle))

  def payments(accounts: Vector[Account]): WriterT[M, Vector[String], Vector[Payment]] =
    WriterT.putT(paymentService.payments(accounts))(accounts.map(_.no))

  //..
}

val auditablePaymentInterpreter = new AuditablePaymentService[Future](
  new PaymentServiceInterpreter[Future]
)
We took the decision to abstract the return type in the form of a type constructor early on, but committed to the specific type only during the actual usage of the service implementation. Early abstraction and late commitment to implementation make great composition possible, often in ways that may pleasantly surprise you later ..
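
At the call site, here's a sketch of what the stacking buys us - the audit log travels with the effect and is recovered at the end via run on cats' WriterT (accounts and an ExecutionContext are assumed to be in scope):

import scala.concurrent.Future
import cats.data.WriterT

val audited: WriterT[Future, Vector[String], Vector[Payment]] =
  auditablePaymentInterpreter.payments(accounts)

// unwrap the stack: the underlying Future carries the log alongside the result
val unwrapped: Future[(Vector[String], Vector[Payment])] = audited.run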

Sunday, June 25, 2017

Domain Models - Early Abstractions and Polymorphic Domain Behaviors

Let's talk genericity or generic abstractions. In the last post we talked about an abstraction Money, which, BTW, was not generic. But we expressed some of the operations on Money in terms of a Monoid[Money], where Monoid is a generic algebraic structure. By algebraic we mean that a Monoid

  1. is generic in types
  2. offers operations that are completely generic on the types
  3. all operations honor the algebraic laws of left and right identities and associativity
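
Stated as executable predicates (a sketch using the combine / empty vocabulary of cats' Monoid, which the examples below rely on), the laws read:

import cats.Monoid

// for all a, b, c: A, these must hold for a lawful Monoid[A]
def associativity[A](a: A, b: A, c: A)(implicit m: Monoid[A]): Boolean =
  m.combine(m.combine(a, b), c) == m.combine(a, m.combine(b, c))

def leftIdentity[A](a: A)(implicit m: Monoid[A]): Boolean =
  m.combine(m.empty, a) == a

def rightIdentity[A](a: A)(implicit m: Monoid[A]): Boolean =
  m.combine(a, m.empty) == a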

But when we design a domain model, what does this really buy us ? We already saw in the earlier post how law abiding abstractions save you from writing some unit tests just through generic verification of the laws using property based testing. That's just a couple of lines in any of the available libraries out there.

Besides reducing the burden of your unit tests, what does Monoid[Money] buy us in the bigger context of things ? Let's look at a simple operation that we defined in Money ..

Just to recapitulate, here's the definition of Money

class Money (val items: Map[Currency, BigDecimal]) { //..
}

object Money {
  final val zeroMoney = new Money(Map.empty[Currency, BigDecimal])

  def apply(amount: BigDecimal, ccy: Currency) = new Money(Map(ccy -> amount))

  // concrete naive implementation: don't
  def add(m: Money, n: Money) = new Money(
    (m.items.toList ++ n.items.toList)
      .groupBy(_._1)
      .map { case (k, v) => 
        (k, v.map(_._2).sum) 
      }
    )

  //..
}

add is a naive implementation though it's possibly the most frequent one that you will ever encounter in domain models around you. It picks up the Map elements and then adds the ones with the same key to come up with the new Money.

Why is this a naive implementation ?

First of all, it deconstructs the implementation of Money instead of using the algebraic properties that the implementation already has. Here we implement Money in terms of a Map, which itself forms a Monoid under the operations defined by Monoid[Map[K, V]]. So why don't we use the monoidal algebra of a Map to implement the operations of Money ?

object Money {

  //..

  def add(m: Money, n: Money) = new Money(m.items |+| n.items)

  //..
}

|+| is a helper function that combines the 2 Maps in a monoidal manner. The concrete piece of code that you wrote in the naive implementation is now delegated to the implementation of the monoid algebra for a Map, in a completely generic way. The advantage is that you (or possibly someone else who has already done it for you) need to write this implementation only once, and then use it in every place you use a Map. Reusability of polymorphic code comes not via documentation but through actual code reuse.
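
Concretely, the Monoid[Map[K, V]] instance merges the keys and combines the values pointwise under the value type's own algebra. A quick sketch with plain String keys:

import cats.implicits._

Map("USD" -> BigDecimal(10)) |+| Map("USD" -> BigDecimal(5), "AUD" -> BigDecimal(3))
// res: Map(USD -> 15, AUD -> 3) - common keys combined, the rest carried over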

On to some more reusability of generic patterns ..

Consider the following abstraction that builds on top of Money ..

import java.time.OffsetDateTime
import Money._

import cats._
import cats.data._
import cats.implicits._

object Payments {
  case class Account(no: String, name: String, openDate: OffsetDateTime, 
    closeDate: Option[OffsetDateTime] = None)
  case class Payment(account: Account, amount: Money, dateOfPayment: OffsetDateTime)

  // returns the Money for credit payment, zeroMoney otherwise
  def creditsOnly(p: Payment): Money = if (p.amount.isDebit) zeroMoney else p.amount

  // compute valuation of all credit payments
  def valuation(payments: List[Payment]) = payments.foldLeft(zeroMoney) { (a, e) =>
    add(a, creditsOnly(e))
  }
  //..
}


valuation gives a standard implementation folding over the List that it gets. Now let's try to critique the implementation ..

1. The function does a foldLeft on the passed in collection payments. The collection only needs the ability to be folded over, and List can do much more than that. We violate the principle of using the least powerful abstraction as part of the implementation. The function that implements the fold over the collection only needs to take a Foldable - that prevents misuse on the part of a user who would otherwise feel like a child in a toy store, handed something more grandiose than what she needs.

2. The implementation uses the add function of Money, which is nothing but a concrete wrapper over a monoidal operation. If we can replace this with something more generic then it will be a step forward towards a generic implementation of the whole function.

3. If we squint a bit, we can shed some more light on the generic nature of all the components of this 2 line implementation. zeroMoney is the zero of a Monoid, fold is a generic operation of a Foldable, add is a wrapper over a monoidal operation, and creditsOnly is a mapping operation over every payment that the collection hands over to you. In summary, the implementation folds over a Foldable, mapping each element using a function and using the monoidal operation to collapse the fold.

Well, it's actually a concrete implementation of a generic map-reduce function ..

def mapReduce[F[_], A, B](as: F[A])(f: A => B)
  (implicit fd: Foldable[F], m: Monoid[B]): B = 
    fd.foldLeft(as, m.empty)((b, a) => m.combine(b, f(a)))

In fact the Foldable trait already contains this implementation under the name foldMap, which makes our implementation of mapReduce even simpler ..

def mapReduce1[F[_], A, B](as: F[A])(f: A => B)
  (implicit fd: Foldable[F], m: Monoid[B]): B = fd.foldMap(as)(f)

And List is a Foldable and our implementation of valuation becomes as generic as ..

object Payments {
  //..

  // generic implementation
  def valuation(payments: List[Payment]): Money = {
    implicit val m: Monoid[Money] = Money.MoneyAddMonoid
    mapReduce(payments)(creditsOnly)
  }
}

The implementation is generic and the type system will ensure that the Money that we produce can only come from the list of payments that we pass. In the naive implementation there's always a chance that the user subverts the type system and plays malice by plugging in some additional Money as the output. If you look at the type signature of mapReduce, you will see that the only way we can get a B is by invoking the function f on an element of F[A]. Since the function is generic on types, we cannot ever produce a B otherwise. Parametricity FTW.

mapReduce is completely generic on types - there's no specific implementation that asks it to add the payments passed to it. This abstraction over operations is provided by the Monoid[B]. And the abstraction over the form of collection is provided by Foldable[F]. It's now no surprise that we can pass in any concrete operation or structure that honors the contracts of mapReduce. Here's another example from the same model ..

object Payments {
  //..

  // generic implementation
  def maxPayment(payments: List[Payment]): Money = {
    implicit val m: Monoid[Money] = Money.MoneyOrderMonoid
    mapReduce(payments)(creditsOnly)
  }
}

We want to compute the maximum credit payment amount from a collection of payments. A different domain behavior needs to be modeled but we can think of it as belonging to the same form as valuation and implemented using the same structure as mapReduce, only passing a different instance of Monoid[Money]. No additional client code, no fiddling around with concrete data types, just matching the type contracts of a polymorphic function.
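
MoneyOrderMonoid is not shown in the post, but one plausible definition (a sketch, assuming credit amounts are non-negative so that zeroMoney can act as the identity) compares Money values by their base currency valuation:

val MoneyOrderMonoid: Monoid[Money] = new Monoid[Money] {
  // keep whichever Money values higher in the base currency
  def combine(m: Money, n: Money): Money =
    if (m.toBaseCurrency >= n.toBaseCurrency) m else n
  def empty: Money = zeroMoney
}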

Looks like our investment on an early abstraction of mapReduce has started to pay off. The domain model remains clean with much of the domain logic being implemented in terms of the algebra that the likes of Foldables and Monoids offer. I discussed some of these topics at length in my book Functional and Reactive Domain Modeling. In the next instalment we will explore some more complex algebra as part of domain modeling ..

Sunday, June 18, 2017

Domain models, Algebraic laws and Unit tests

In a domain model, when you have a domain element that forms an algebraic abstraction honoring certain laws, you can get rid of many of your explicitly written unit tests just by checking the laws. Of course you have to squint hard and discover the lawful abstraction that hides behind your concrete domain element.

Consider this simple abstraction for Money that keeps track of amounts in various currencies.

scala> import Money._
import Money._

// 1000 USD
scala> val m = Money(1000, USD)
m: laws.Money = (USD,1000)

// add 248 AUD
scala> val n = add(m, Money(248, AUD))
n: laws.Money = (AUD,248),(USD,1000)

// add 230 USD more
scala> val p = add(n, Money(230, USD))
p: laws.Money = (AUD,248),(USD,1230)

// value of the money in base currency (USD)
scala> p.toBaseCurrency
res1: BigDecimal = 1418.48

// debit amount
scala> val q = Money(-250, USD)
q: laws.Money = (USD,-250)

scala> val r = add(p, q)
r: laws.Money = (AUD,248),(USD,980)

The valuation of Money is done in terms of its base currency which is usually USD. One of the possible implementations of Money is the following (some parts elided for future explanations) ..

sealed trait Currency
case object USD extends Currency
case object AUD extends Currency
case object JPY extends Currency
case object INR extends Currency

class Money private[laws] (val items: Map[Currency, BigDecimal]) {
  def toBaseCurrency: BigDecimal = 
    items.foldLeft(BigDecimal(0)) { case (a, (ccy, amount)) =>
      a + Money.exchangeRateWithUSD.get(ccy).getOrElse(BigDecimal(1)) * amount
    }

  override def toString = items.toList.mkString(",")
}

object Money {
  final val zeroMoney = new Money(Map.empty[Currency, BigDecimal])

  def apply(amount: BigDecimal, ccy: Currency) = new Money(Map(ccy -> amount))
  def add(m: Money, n: Money) = ???

  final val exchangeRateWithUSD: Map[Currency, BigDecimal] = 
    Map(AUD -> 0.76, JPY -> 0.009, INR -> 0.016, USD -> 1.0)
}

Needless to say we will have quite a number of unit tests that check for addition of Money, including the boundary cases of adding to zeroMoney.

It's not very hard to see that the type Money forms a Monoid under the add operation. Or to speak a bit loosely we can say that Money is a Monoid under the add operation.

A Monoid has laws that every instance needs to honor - associativity, left identity and right identity. And when your model element needs to honor the laws of algebra, it's always recommended to include the verification of the laws as part of your test suite. Besides validating the sanity of your abstractions, one side-effect of verifying laws is that you can get rid of many of your explicitly written unit tests for the operation that forms the Monoid. They will be automatically verified when verifying the laws of Monoid[Money].

Here's how we define Monoid[Money] using Cats ..

val MoneyAddMonoid: Monoid[Money] = new Monoid[Money] {
  def combine(m: Money, n: Money): Money = add(m, n)
  def empty: Money = zeroMoney
}

and the implementation of the previously elided add operation on Money using Monoid on Map ..

object Money {
  //..

  def add(m: Money, n: Money) = 
    new Money(m.items |+| n.items)

  //..

}

Now we can verify the laws of Monoid[Money] using specs2 and ScalaCheck and the helper classes that Cats offers ..

import cats._
import kernel.laws.GroupLaws
import org.scalacheck.{ Arbitrary, Gen }
import Arbitrary.arbitrary

class MoneySpec extends CatsSpec { def is = s2"""

  This is a specification for validating laws of Money

  (Money) should
     form a monoid under addition    $e1 
  """

  implicit lazy val arbCurrency: Arbitrary[Currency] = Arbitrary { 
    Gen.oneOf(AUD, USD, INR, JPY) 
  }

  implicit def moneyArbitrary: Arbitrary[Money] = 
    Arbitrary {
      for {
        i <- Arbitrary.arbitrary[Map[Currency, BigDecimal]]
      } yield new Money(i)
    }

  def e1 = checkAll("Money", GroupLaws[Money].monoid(Money.MoneyAddMonoid))
}


and running the test suite will verify the Monoid laws for Monoid[Money] ..

[info] This is a specification for validating laws of Money
[info]
[info] (Money) should
[info] form a monoid under addition monoid laws must hold for Money
[info] + monoid.associativity
[info] + monoid.combineAll
[info] + monoid.combineAll(Nil) == id
[info] + monoid.combineAllOption
[info] + monoid.combineN(a, 0) == id
[info] + monoid.combineN(a, 1) == a
[info] + monoid.combineN(a, 2) == a |+| a
[info] + monoid.isEmpty
[info] + monoid.leftIdentity
[info] + monoid.rightIdentity
[info] + monoid.serializable

In summary ..
  • strive to find abstractions in your domain model that are constrained by algebraic laws
  • check all laws as part of your test suite
  • you will find that you can get rid of quite a few explicitly written unit tests just by checking the laws of your abstraction
  • and of course use property based testing for unit tests
In case you want to take a look at the full code base, it's there on my Github repo. In the next post we will take the next step towards modeling with generic algebraic code using the Monoid pattern from this example. Code written in parametric form without depending on specialized concrete types can be more robust, easier to test and easier to reason about. I have also discussed this at length in my book Functional and Reactive Domain Modeling. I plan to supplement the materials covered there with more examples and code patterns ..

Saturday, June 13, 2015

Baking a π can teach you a bit of Parametricity

Even though I got my copy of Prof. Eugenia Cheng's awesome How to Bake π a couple of weeks back, I started reading it only over this weekend. I am only on page 19 enjoying all the stuff regarding cookies that Prof. Cheng is using to explain abstraction. This is a beautiful piece of explanation and if you are a programmer you may get an extra mile out of the concepts that she explains here. Let's see if we can unravel a few of them ..

She starts with a real life situation such as:

If Grandma gives you five cookies and Grandpa gives you five cookies, how many cookies will you have ?

Let's model this as a box of cookies that you get from your Grandma and Grandpa - you need to count them and find the total. In Scala we may have something like the following ..

case class CookieBox(count: Int)

and we can define a function that gives you a CookieBox containing the total number of cookies from the 2 boxes that we pass to the function ..

def howManyCookies(gm: CookieBox, gp: CookieBox) = {
  CookieBox(gm.count + gp.count)
}

and we use howManyCookies to find the count ..

scala> val gm = CookieBox(5)
gm: CookieBox = CookieBox(5)

scala> val gp = CookieBox(5)
gp: CookieBox = CookieBox(5)

scala> howManyCookies(gm, gp)
res5: CookieBox = CookieBox(10)

.. so we have 10 cookies from our Grandma & Grandpa .. Perfect!

The problem is .. the child answers: "None, because I'll eat them all". To model this let's add a function eat to our CookieBox abstraction ..

case class CookieBox(count: Int) {
  // let's assume n < count for simplicity
  def eat(n: Int): CookieBox = CookieBox(count - n)
}

So instead of the correct way to answer the question, the child cheats and implements howManyCookies as ..

def howManyCookies(gm: CookieBox, gp: CookieBox) = {
  CookieBox(gm.eat(gm.count).count + gp.eat(gp.count).count)
}

and we get the following ..

scala> howManyCookies(gm, gp)
res6: CookieBox = CookieBox(0)

Prof. Cheng continues ..

The trouble here is that cookies do not obey the rules of logic, so using math to study them doesn't quite work. .. We could impose an extra rule on the situation by adding "... and you're not allowed to eat the cookies". If you're not allowed to eat them, what's the point of them being cookies ?

This is profound indeed. When we are asked to count some stuff, it really doesn't matter if they are cookies or stones or pastries. The only property we need here is the ability to add together the 2 things that we are handed. The fact that we have implemented howManyCookies in terms of CookieBox gives the little child the opportunity to cheat by using the eat function. More information is actually hurting us here - being concrete with data types creates more avenues for incorrect implementation.

Prof. Cheng is succinct here when she explains ..

We could treat the cookies as just things rather than cookies. We lose some resemblance to reality, but we gain scope and with it efficiency. The point of numbers is that we can reason about "things" without having to change the reasoning depending on what "thing" we are thinking about.

Yes, she is talking about generalization, being polymorphic over what we count. We just need the ability to add 2 "things", be it cookies, monkeys or anchovies. In programming we model this with parametric polymorphism, and use a universal quantification over the set of types for which we implement the behavior.

def howMany[A](gm: A, gp: A) = //..

We have made the implementation parametric and got rid of the concrete data type CookieBox. But how do we add the capability to sum the 2 objects and get the result ? You got it right - we already have an abstraction that makes this algebra available to a generic data type. Monoids FTW .. and it doesn't get simpler than this ..

trait Monoid[T] {
  def zero: T
  def append(t1: T, t2: T): T
}

zero is the identity element and append is a binary associative function over 2 objects of the type. So given a monoid instance for our data type, we can model howMany in a completely generic way irrespective of whether A is a CookieBox or Monkey.

def howMany[A : Monoid](gm: A, gp: A): A = implicitly[Monoid[A]].append(gm, gp)

Implementing a monoid for CookieBox is also simple ..

object CookieBox {
  implicit val CookieBoxMonoid = new Monoid[CookieBox] {
    val zero = CookieBox(0)
    def append(i: CookieBox, j: CookieBox) = CookieBox(i.count + j.count)
  }
}
 
With the above implementation of howMany, the little child will not be able to cheat. By providing a simpler data type we have made the implementation more robust and reusable across multiple data types.
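
To drive home the reuse across "cookies, monkeys or anchovies", here's a hypothetical second instance - no change to howMany is needed:

// a troop of monkeys, counted with the same generic machinery
case class Troop(count: Int)

object Troop {
  implicit val TroopMonoid = new Monoid[Troop] {
    val zero = Troop(0)
    def append(i: Troop, j: Troop) = Troop(i.count + j.count)
  }
}

howMany(Troop(3), Troop(4))          // Troop(7)
howMany(CookieBox(5), CookieBox(5))  // CookieBox(10)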

Next time someone wants me to explain parametricity, I will point them to Page 19 of How to Bake π.

Wednesday, February 11, 2015

Functional Patterns in Domain Modeling - Composing a domain workflow with statically checked invariants

I have been doing quite a bit of domain modeling using functional programming, mostly in Scala. And as happens when you work on something for a long period of time, you tend to identify more and more patterns that come up repeatedly within your implementations. You may ignore these as patterns the first time, get a feeling of mere coincidence the next time, but the third time really gives you that aha! moment and you feel like documenting it as a design pattern. In the course of my learning I have started blogging on some of these patterns - you can find the earlier ones in the series in:

  • Functional Patterns in Domain Modeling - The Specification Pattern

  • Functional Patterns in Domain Modeling - Immutable Aggregates and Functional Updates

  • Functional Patterns in Domain Modeling - Anemic Models and Compositional Domain Behaviors


    In this continuing series of functional patterns in domain modeling, I will go through yet another idiom which has been a quite common occurrence in my explorations across various domain models. You will find many of these patterns explained in detail in my upcoming book on Functional and Reactive Domain Modeling, the early access edition of which is already published by Manning.

    One of the things that I strive to achieve in implementing domain models is to use the type system to encode as much domain logic as possible. If you can use the type system effectively, you get the benefits of parametricity, which not only makes your code generic, concise and polymorphic, but also makes it self-testing. But that's another story which we can discuss in another post. In this post I will talk about a pattern that helps you design domain workflows compositionally, and also enables implementing domain invariants within the workflow - all done statically with a little help from the type system.

    As an example let's consider a loan processing system (simplified for illustration purposes) typically followed by banks issuing loans to customers. A typical simplified workflow takes a loan application through approval and enrichment stages before the loan is issued.

    The Domain Model


    The details of each process are not important - we will focus on how we compose the sequence and ensure that the API verifies statically that the correct sequence is followed. Let's start with a domain model for the loan application - we will keep enriching it as we traverse the workflow.

    case class LoanApplication private[Loans](
      // date of application
      date: Date,
      // name of applicant
      name: String,
      // purpose of loan
      purpose: String,
      // intended period of repayment in years
      repayIn: Int,
      // actually sanctioned repayment period in years
      actualRepaymentYears: Option[Int] = None,
      // actual start date of loan repayment
      startDate: Option[Date] = None,
      // loan application number
      loanNo: Option[String] = None,
      // emi
      emi: Option[BigDecimal] = None
    )

    Note we have a bunch of attributes that are defined as optional and will be filled out later as the loan application traverses the sequence of the workflow. Also we have declared the constructor private, and we will have a smart constructor to create an instance of the class.

    Wiring the workflow with Kleisli


    Here are the various domain behaviors modeling the stages of the workflow .. I will be using the scalaz library for the Kleisli implementation.

    def applyLoan(name: String, purpose: String, repayIn: Int, 
      date: Date = today) =
      LoanApplication(date, name, purpose, repayIn)
    
    def approve = Kleisli[Option, LoanApplication, LoanApplication] { l => 
      // .. some logic to approve
      l.copy(
        loanNo = scala.util.Random.nextString(10).some,
        actualRepaymentYears = 15.some,
        startDate = today.some
      ).some
    }
    
    def enrich = Kleisli[Option, LoanApplication, LoanApplication] { l => 
      //.. may be some logic here
      val x = for {
        y <- l.actualRepaymentYears
        s <- l.startDate
      } yield (y, s)
    
      l.copy(emi = x.map { case (y, s) => calculateEMI(y, s) }).some
    }

    applyLoan is the smart constructor that creates the initial instance of LoanApplication. The other 2 functions approve and enrich perform the approval and enrichment steps of the workflow. Note both of them return an enriched version of the LoanApplication within a Kleisli, so that we can use the power of Kleisli composition and wire them together to model the workflow ..

    val l = applyLoan("john", "house building", 10)
    val op = approve andThen enrich
    op run l

    When you have to model a sequence that takes an initial object and then applies a chain of functions, you can use plain function composition like h(g(f(x))), or the point free notation (h compose g compose f), or the more readable order (f andThen g andThen h). But in the above case we need effects along with the composition - we are returning Option from each stage of the workflow. So here, instead of plain composition, we need effectful composition of functions, and that's exactly what Kleisli offers. The andThen combinator in the above code snippet is actually Kleisli composition aka function composition with effects.
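
    For intuition, here's roughly what this effectful composition amounts to for the Option effect used here (a sketch of the idea, not scalaz's actual implementation):

    // composing a -> m b and b -> m c via flatMap; Kleisli packages
    // exactly this pattern behind its andThen combinator
    def composeK[A, B, C](f: A => Option[B], g: B => Option[C]): A => Option[C] =
      a => f(a).flatMap(g)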

    So we have everything the workflow needs, and clients use our API to construct workflows for processing loan applications. But one of the qualities of good API design is to make it difficult for the client to use the API in the wrong way. Consider what happens with the above design of the workflow if we invoke the sequence as enrich andThen approve. This violates the domain invariant which states that enrichment is a process that happens after the approval. Approval of the application generates some information which the enrichment process needs to use. But because our types align, the compiler will be perfectly happy to accept this semantically invalid composition. And we will have the error reported at run time in this case.

    Remembering that we have a static type system at our disposal, can we do better ?

    Phantom Types in the Mix


    Let's throw in some more types and see if we can tag in some more information for the compiler to help us. Let's tag each state of the workflow with a separate type ..

    trait Applied
    trait Approved
    trait Enriched

    Finally make the main model LoanApplication parameterized on a type that indicates which state it is in. And we have some helpful type aliases ..

    case class LoanApplication[Status] private[Loans]( //..
    
    type LoanApplied  = LoanApplication[Applied]
    type LoanApproved = LoanApplication[Approved]
    type LoanEnriched = LoanApplication[Enriched]

    These types will have no role in modeling domain behaviors - they will just be used to dispatch to the correct state of the sequence that the domain invariants mandate. The workflow functions need to be modified slightly to take care of this ..

    def applyLoan(name: String, purpose: String, repayIn: Int, 
      date: Date = today) =
      LoanApplication[Applied](date, name, purpose, repayIn)
    
    def approve = Kleisli[Option, LoanApplied, LoanApproved] { l => 
      l.copy(
        loanNo = scala.util.Random.nextString(10).some,
        actualRepaymentYears = 15.some,
        startDate = today.some
      ).some.map(identity[LoanApproved])
    }
    
    def enrich = Kleisli[Option, LoanApproved, LoanEnriched] { l => 
      val x = for {
        y <- l.actualRepaymentYears
        s <- l.startDate
      } yield (y, s)
    
      l.copy(emi = x.map { case (y, s) => calculateEMI(y, s) }).some.map(identity[LoanEnriched])
    }

    Note how we use the phantom types within the Kleisli and ensure statically that the sequence can flow only in one direction - that which is mandated by the domain invariant. So now an invocation of enrich andThen approve will result in a compilation error because the types don't match. So once again yay! for having the correct encoding of domain logic with proper types.
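
    To make the guarantee concrete, here's a sketch of what the compiler now accepts and rejects, given the definitions above:

    val l  = applyLoan("john", "house building", 10)

    // Applied -> Approved -> Enriched : the only sequence the types permit
    val op = approve andThen enrich
    op run l

    // the reversed sequence: enrich yields a LoanEnriched but approve needs
    // a LoanApplied, so this is now a compile time error, not a runtime bug
    // val bad = enrich andThen approve   // does not compile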

    Monday, November 03, 2014

    Functional and Reactive Domain Modeling

    Manning has launched the MEAP of my upcoming book on Domain Modeling.



    The first time I was formally introduced to the topic was way back when I played around with Eric Evans' awesome text on the subject of Domain Driven Design. In the book he discusses various object lifecycle patterns like the Factory, Aggregate or Repository that help separation of concerns when you are implementing the various interactions between the elements of the domain model. Entities are artifacts with identities, value objects are pure values, while services model the coarse level use cases of the model components.

    Traditionally we followed the recommendations of Eric in our real world implementations and used the object oriented paradigm for modeling all interactions. We started talking about rich domain models and anemic domain models. The rich model espoused a richer agglomeration of state and behavior within the model, while the anemic model preferred to keep them decoupled. Martin Fowler sums this up in his post on Anemic Domain Models ..
    "The basic symptom of an Anemic Domain Model is that at first blush it looks like the real thing. There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data."

    Go Functional

    In Functional and Reactive Domain Modeling I look at the problem with a different lens. The primary focus of the book is to encourage building domain models using the principles of functional programming. It's a completely orthogonal approach to OO, focusing on verbs first (as opposed to nouns first in OO), algebra first (as opposed to objects in OO), function composition first (as opposed to object composition in OO), and lightweight objects as ADTs (instead of rich class models).

    The book starts with the basics of functional programming principles and discusses the virtues of purity and the advantages of keeping side-effects decoupled from the core business logic. The book uses Scala as the programming language and does an extensive discussion on why the OO and functional features of Scala are a perfect fit for modelling complex domains. Chapter 3 starts the core subject of functional domain modeling with real world examples illustrating how we can make good use of patterns like smart constructors, monads and monoids in implementing your domain model. The main virtue that these patterns bring to your model is genericity - they help you extract generic algebra from domain specific logic into parametric functions which are far more reusable and less error prone. Chapter 4 focuses on advanced usages like typeclass based design and patterns like monad transformers, kleislis and other compositional idioms of functional programming. One of the primary focuses of the book is algebraic API design and developing an appreciation of the ability to reason about your model.

    Go Reactive

    The term "reactive" has recently become quite popular in describing systems that are responsive, scalable and adaptive. In implementing complex domain models, we find many areas which can be made more responsive by implementing them as non blocking operations instead of the standard blocking function calls. Using higher level concurrency primitives like actors, futures and data flow based computing we can compose asynchronous operations and increase the net throughput of your model. The second part of the book discusses how you can combine the principles of functional programming with the reactive way of implementing behaviors. The paradigm of application design using the principles of functional programming together with an asynchronous non-blocking mode of communication between the participating entities promises to be a potent combination towards developing performant systems that are relatively easy to manage, maintain and evolve. But designing and implementing such models need a different way of thinking. The behaviors that you implement have to be composable using pure functions and they can form building blocks of bigger abstractions that communicate between them using non-blocking asynchronous message passing.

    Functional meets Reactive

    Functional and Reactive Domain Modeling takes you through the steps teaching you how to think of the domain model in terms of pure functions and how to compose them to build larger abstractions. You will start learning with the basics of functional programming principles and gradually progress to the advanced concepts and patterns that you need to know to implement complex domain models. The book demonstrates how advanced FP patterns like algebraic data types, typeclass based design and isolation of side-effects can make your model compose for readability and verifiability. On the subject of reactive modeling, the book focuses on the higher order concurrency patterns like actors and futures. It uses the Akka framework as the reference implementation and demonstrates how advanced architectural patterns like event sourcing and command-query-responsibility-segregation can be put to great use while implementing scalable models. You will learn techniques that are radically different from the standard RDBMS based applications that are based on mutation of records. You’ll also pick up important patterns like using asynchronous messaging for interaction based on non blocking concurrency and model persistence, which delivers the speed of in-memory processing along with suitable guarantees of reliability.

    Looking forward to an exciting journey with the book. I am sure you will also find interest in the topics that I discuss there. And feel free to jump on to AuthorOnline and fire questions that we all can discuss. I am sure this will also lead to an overall improvement in the quality of the book.

    Monday, May 12, 2014

    Functional Patterns in Domain Modeling - Anemic Models and Compositional Domain Behaviors

    I was looking at the presentation that Dean Wampler made recently regarding domain driven design, anemic domain models and how using functional programming principles helps ameliorate some of the problems there. There are some statements that he made which, I am sure, made many OO practitioners chuckle. They contradict popular beliefs that encourage OOP as the primary way of modeling using DDD principles.

    One statement that resonates a lot with my thought is "DDD encourages understanding of the domain, but don't implement the models". DDD does a great job in encouraging developers to understand the underlying domain model and ensuring a uniform vocabulary throughout the lifecycle of design and implementation. This is what design patterns also do by giving you a vocabulary that you can heartily exchange with your fellow developers without influencing any bit of implementation of the underlying pattern.

    On the flip side of it, trying to implement DDD concepts using standard techniques of OO with joined state and behavior often gives you a muddled mutable model. The model may be rich from the point of view that you will find all concepts related to the particular domain abstraction baked in the class you are modeling. But it makes the class fragile as well since the abstraction becomes more locally focused losing the global perspective of reusability and composability. As a result when you try to compose multiple abstractions within the domain service layer, it becomes too much polluted with glue code that resolves the impedance mismatch between class boundaries.

    So when Dean claims "Models should be anemic", I think he means to avoid this bundling of state and behavior within the domain object that gives you the false sense of security of richness of the model. He encourages the practice that builds domain objects to have the state only while you model behaviors using standalone functions.



    One other strawman argument that I come across very frequently is that bundling state and behavior by modeling the latter as methods of the class increases encapsulation. If you are still a believer of this school of thought, have a look at Scott Meyers' excellent article which he wrote as early as 2000. He eschews the view that a class is the right level of modularization and encourages more powerful module systems as better containers of your domain behaviors.

    As continuation of my series on functional domain modeling, we continue with the example of the earlier posts and explore the theme that Dean discusses ..

    Here's the anemic domain model of the Order abstraction ..

    case class Order(orderNo: String, orderDate: Date, customer: Customer, 
      lineItems: Vector[LineItem], shipTo: ShipTo, 
      netOrderValue: Option[BigDecimal] = None, status: OrderStatus = Placed)

    In the earlier posts we discussed how to implement the Specification and Aggregate Patterns of DDD using functional programming principles. We also discussed how to do functional updates of aggregates using data structures like Lens. In this post we will use these as the building blocks, use more functional patterns and build larger behaviors that model the ubiquitous language of the domain. After all, one of the basic principles behind DDD is to lift the domain model vocabulary into your implementation so that the functionality becomes apparent to the developer maintaining your model.

    The core idea is to validate the assumption that building domain behaviors as standalone functions leads to an effective realization of the domain model according to the principles of DDD. The base classes of the model contain only the states that can be mutated functionally. All domain behaviors are modeled through functions that reside within the module that represents the aggregate.

    Functions compose, and that's precisely how we will chain sequences of domain behaviors to build bigger abstractions out of smaller ones. Here's a small function that values an Order. Note it returns a Kleisli, which essentially gives us composition over monadic functions. So instead of composing a -> b and b -> c, which we do with normal function composition, we can do the same over a -> m b and b -> m c, where m is a monad. Composition with effects, if you will.

    def valueOrder = Kleisli[ProcessingStatus, Order, Order] {order =>
      val o = orderLineItems.set(
        order,
        setLineItemValues(order.lineItems)
      )
      o.lineItems.map(_.value).sequenceU match {
        case Some(_) => right(o)
        case _ => left("Missing value for items")
      }
    }

    But what does that buy us ? What exactly do we gain from these functional patterns ? It's the power to abstract over families of similar abstractions like applicatives and monads. Well, that may sound a bit rhetorical, and it needs a separate post to justify their use. Stated simply, they encapsulate the effects and side-effects of your computation so that you can focus on the domain behavior itself. Have a look at the process function below - it's actually a composition of monadic functions in action. But all the machinery that does the processing of effects and side-effects is abstracted within the Kleisli itself, so that the user level implementation is simple and concise.

    With Kleisli it's the power to compose over monadic functions. Every domain behavior has a chance of failure, which we model using the Either monad - here ProcessingStatus is just a type alias for this .. type ProcessingStatus[S] = \/[String, S]. Using the Kleisli, we don't have to write any code for handling failures. As you will see below, the composition is just like the normal functions - the design pattern takes care of alternate flows.
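
    To see why no explicit failure plumbing is needed, here's a minimal sketch of the short-circuiting behavior of \/ that the Kleisli composition inherits (using scalaz's -\/ and \/- constructors for left and right):

    import scalaz._
    import Scalaz._

    val ok: ProcessingStatus[Int]  = \/-(1)
    val bad: ProcessingStatus[Int] = -\/("Missing value for items")

    // flatMap stops at the first -\/, so downstream stages never run
    ok.flatMap(_ => bad).flatMap(_ => ok)   // -\/("Missing value for items")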

    Once the Order is valued, we need to apply discounts to qualifying items. It's another behavior that follows the same pattern of implementation as valueOrder.

    def applyDiscounts = Kleisli[ProcessingStatus, Order, Order] {order =>
      val o = orderLineItems.set(
        order,
        setLineItemDiscounts(order.lineItems)  // discount analogue of setLineItemValues above
      )
      o.lineItems.map(_.discount).sequenceU match {
        case Some(_) => right(o)
        case _ => left("Missing discount for items")
      }
    }

    Finally we check out the Order ..

    def checkOut = Kleisli[ProcessingStatus, Order, Order] {order =>
      val netOrderValue = order.lineItems.foldLeft(BigDecimal(0).some) {(s, i) => 
        s |+| (i.value |+| i.discount.map(d => Tags.Multiplication(BigDecimal(-1)) |+| Tags.Multiplication(d)))
      }
      right(orderNetValue.set(order, netOrderValue))
    }

    And here's the service method that composes all of the above domain behaviors into the big abstraction. We don't have any object to instantiate. Just plain function composition that results in an expression modeling the entire flow of events. And it's the cleanliness of abstraction that makes the code readable and succinct.

    def process(order: Order) = {
      (valueOrder andThen 
         applyDiscounts andThen checkOut) =<< right(orderStatus.set(order, Validated))
    }
    In case you are interested in the full source code of this small example, feel free to take a peek at my github repo.

    Sunday, April 06, 2014

    Functional Patterns in Domain Modeling - Immutable Aggregates and Functional Updates

    In the last post I looked at a pattern that enforces constraints to ensure domain objects honor the domain rules. But what exactly is a domain object ? What should be the granularity of an object that my solution model should expose so that it makes sense to a domain user ? After all, the domain model should speak the language of the domain. We may have a cluster of entities modeling various concepts of the domain. But only some of them can be published as abstractions to the user of the model. The others can be treated as implementation artifacts and are best hidden under the covers of the published ones.

    An aggregate in domain driven design is a published abstraction that provides a single point of interaction to a specific domain concept. Considering the classes I introduced in the last post, an Order is an aggregate. It encapsulates the details that an Order is composed of in the real world (well, only barely in this example, which is only for illustration purposes :-)).

    Note an aggregate can consist of other aggregates - e.g. we have a Customer instance within an Order. Eric Evans in his book on Domain Driven Design provides an excellent discussion of what constitutes an Aggregate.

    Functional Updates of Aggregates with Lens


    This is not a post about Aggregates and how they fit in the realm of domain driven design. In this post I will talk about how to use some patterns to build immutable aggregates. Immutable data structures offer a lot of advantages, so build your aggregates ground up as immutable objects. Algebraic Data Type (ADT) is one of the patterns to build immutable aggregate objects. Primarily coming from the domain of functional programming, ADTs offer powerful techniques of pattern matching that help you match values against patterns and bind variables to successful matches. In Scala we use case classes as ADTs that give immutable objects out of the box ..

    case class Order(orderNo: String, orderDate: Date, customer: Customer, 
      lineItems: Vector[LineItem], shipTo: ShipTo, netOrderValue: Option[BigDecimal] = None, 
      status: OrderStatus = Placed)
    

    Like all good aggregates, we need to provide a single point of interaction to users. Of course we can access all properties using accessors of case classes. But what about updates ? We can update the orderNo of an order like this ..

    val o = Order( .. )
    o.copy(orderNo = newOrderNo)

    which gives us a copy of the original order with the new order no. We don't mutate the original order. But anybody with some knowledge of Scala will realize that this becomes pretty clunky when we have to deal with nested object updates. e.g. in the above case, ShipTo is defined as follows ..

    case class Address(number: String, street: String, city: String, zip: String)
    case class ShipTo(name: String, address: Address)

    So, here you go in order to update the zip code of a ShipTo ..

    val s = ShipTo("abc", Address("123", "Monroe Street", "Denver", "80233"))
    s.copy(address = s.address.copy(zip = "80231"))

    Not really pleasing, and it can quickly get out of hand in terms of comprehensibility.

    In our domain model we use an abstraction called a Lens for updating Aggregates. In layman's terms, a lens is an encapsulated get and set combination. The get extracts a small part from a larger whole, while the set transforms the larger abstraction with a smaller part taken as a parameter.

    case class Lens[A, B](get: A => B, set: (A, B) => A)

    This is a naive definition of a Lens in Scala. Sophisticated lens designs go a long way to ensure proper abstraction and composition. scalaz provides one such implementation out of the box that exploits the similarity in structure between the get and the set to generalize the lens definition in terms of another abstraction named Store. As it happens so often in functional programming, Store happens to abstract yet another pattern called the Comonad. You can think of a Comonad as the inverse of a Monad. But in case you are more curious, and have wondered how lenses form "the Coalgebras for the Store Comonad", have a look at the 2 papers here and here.

    Anyway for us mere domain modelers, we will use the Lens implementation as in scalaz .. here's a lens that helps us update the OrderStatus within an Order ..

    val orderStatus = Lens.lensu[Order, OrderStatus] (
      (o, value) => o.copy(status = value),
      _.status
    )

    and use it as follows ..

    val o = Order( .. )
    orderStatus.set(o, Placed)

    will change the status field of the Order to Placed. Let's have a look at some of the compositional properties of a lens which help us write readable code for functionally updating nested structures.

    Composition of Lenses

    First let's define some individual lenses ..

    // lens for updating a ShipTo of an Order
    val orderShipTo = Lens.lensu[Order, ShipTo] (
      (o, sh) => o.copy(shipTo = sh),
      _.shipTo
    )
    
    // lens for updating an address of a ShipTo
    val shipToAddress = Lens.lensu[ShipTo, Address] (
      (sh, add) => sh.copy(address = add),
      _.address
    )
    
    // lens for updating a city of an address
    val addressToCity = Lens.lensu[Address, String] (
      (add, c) => add.copy(city = c),
      _.city
    )

    And now we compose them to define a lens that directly updates the city of a ShipTo belonging to an Order ..

    // compositionality FTW
    def orderShipToCity = orderShipTo andThen shipToAddress andThen addressToCity

    Now updating a city of a ShipTo in an Order is as simple and expressive as ..

    val o = Order( .. )
    orderShipToCity.set(o, "London")

    The best part of using such compositional data structures is that it makes your domain model implementation readable and expressive to the users of your API. And yet your aggregate remains immutable.

    Let's look at another use case, where the nested object is a collection. scalaz offers partial lenses that you can use for such composition. Here's an example where we build a lens that updates the value member within a LineItem of an Order. A LineItem is defined as ..

    case class LineItem(item: Item, quantity: BigDecimal, value: Option[BigDecimal] = None, 
      discount: Option[BigDecimal] = None)
    

    and an Order has a collection of LineItems. Let's define a lens that updates the value within a LineItem ..

    val lineItemValue = Lens.lensu[LineItem, Option[BigDecimal]] (
      (l, v) => l.copy(value = v),
      _.value
    )

    and then compose it with a partial lens that helps us update a specific item within a vector. Note how we convert our lineItemValue lens to a partial lens using the unary operator ~ ..

    // a lens that updates the value in a specific LineItem within an Order 
    def lineItemValues(i: Int) = ~lineItemValue compose vectorNthPLens(i)

    Now we can use this composite lens to functionally update the value field of each of the items in a Vector of LineItems using some specific business rules ..

    (0 to lis.length - 1).foldLeft(lis) {(s, i) => 
      val li = lis(i)
      lineItemValues(i).set(s, unitPrice(li.item).map(_ * li.quantity)).getOrElse(s)
    }

    In this post we saw how we can handle aggregates functionally and without any in-place mutation. This keeps the model pure and helps us implement domain models that have sane behavior even in concurrent settings, without any explicit use of locks and semaphores. In the next post we will take a look at how we can use such compositional structures to make the domain model speak the ubiquitous language of the domain - another pattern recommended by Eric Evans in domain driven design.

    Monday, March 31, 2014

    Functional Patterns in Domain Modeling - The Specification Pattern

    When you model a domain, you model its entities and behaviors. As Eric Evans mentions in his book Domain Driven Design, the focus is on the domain itself. The model that you design and implement must speak the ubiquitous language so that the essence of the domain is not lost in the myriads of incidental complexities that your implementation enforces. While being expressive the model needs to be extensible too. And when we talk about extensibility, one related attribute is compositionality.

Functions compose more naturally than objects, and in this post I will use functional programming idioms to implement one of the patterns that form the core of domain driven design - the Specification pattern, whose most common use case is to implement domain validation. Eric's book on DDD says regarding the Specification pattern ..
    It has multiple uses, but one that conveys the most basic concept is that a SPECIFICATION can test any object to see if it satisfies the specified criteria.
A specification is defined as a predicate, whereby business rules can be combined by chaining them together using boolean logic. So there's a concept of composition, and we can talk about a Composite Specification when we talk about this pattern. Much of the DDD literature implements this using the Composite design pattern, typically realized through class hierarchies and object composition. In this post we will use function composition instead.

    Specification - Where ?

    One of the very common confusions that we have when we design a model is where to keep the validation code of an aggregate root or any entity, for that matter.
    • Should we have the validation as part of the entity ? No, it makes the entity bloated. Also validations may vary based on some context, while the core of the entity remains the same.
• Should we have validations as part of the interface ? Maybe we consume JSON and build entities out of it. Indeed some validations can belong to the interface, and don't hesitate to put them there.
    • But the most interesting validations are those that belong to the domain layer. They are business validations (or specifications), which Eric Evans defines as something that "states a constraint on the state of another object". They are business rules which the entity needs to honor in order to proceed to the next stage of processing.
    We consider the following simple example. We take an Order entity and the model identifies the following domain "specifications" that a new Order must satisfy before being thrown into the processing pipeline:

    1. it must be a valid order obeying the constraints that the domain requires e.g. valid date, valid no of line items etc.
    2. it must be approved by a valid approving authority - only then it proceeds to the next stage of the pipeline
    3. customer status check must be passed to ensure that the customer is not black-listed
    4. the line items from the order must be checked against inventory to see if the order can be fulfilled
    These are the separate steps that are to be done in sequence by the order processing pipeline as pre-order checks before the actual order is ready for fulfilment. A failure in any of them takes the order out of the pipeline and the process stops there. So the model that we will design needs to honor the sequence as well as check all constraints that need to be done as part of every step.

    An important point to note here is that none of the above steps mutate the order - so every specification gets a copy of the original Order object as input, on which it checks some domain rules and determines if it's suitable to be passed to the next step of the pipeline.

    Jumping on to the implementation ..

    Let's take down some implementation notes from what we learnt above ..

    • The Order can be an immutable entity at least for this sequence of operations
• Every specification needs an order - can we pull some trick out of our hat which prevents this cluttering of the API by passing an Order instance to every specification in the sequence ?
    • Since we plan to use functional programming principles, how can we model the above sequence as an expression so that our final result still remains composable with the next process of order fulfilment (which we will discuss in a future post) ?
• All these functions seem to have similar signatures - we need to make them compose with each other
Before I present any more explanation or theory, here are the basic building blocks which implement the notes that we took after talking to the domain experts ..

    type ValidationStatus[S] = \/[String, S]
    
    type ReaderTStatus[A, S] = ReaderT[ValidationStatus, A, S]
    
    object ReaderTStatus extends KleisliInstances with KleisliFunctions {
      def apply[A, S](f: A => ValidationStatus[S]): ReaderTStatus[A, S] = kleisli(f)
    }

    ValidationStatus defines the type that we will return from each of the functions. It's either some status S or an error string that explains what went wrong. It's actually an Either type (right biased) as implemented in scalaz.

One of the things which we thought would be cool is to avoid repeating the Order parameter for every method when we invoke the sequence. And one of the idiomatic ways of doing it is to use the Reader monad. But here we already have a monad - \/ is a monad. So we need to stack them together using a monad transformer. ReaderT does this job and ReaderTStatus defines the type that somehow makes our life easier by combining the two of them.

    The next step is an implementation of ReaderTStatus, which we do in terms of another abstraction called Kleisli. We will use the scalaz library for this, which implements ReaderT in terms of Kleisli. I will not go into the details of this implementation - in case you are curious, refer to this excellent piece by Eugene.
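Under the hood there's not much magic, though - a ReaderTStatus[A, S] is essentially a function A => ValidationStatus[S], and its monad instance threads the same environment through every step, short-circuiting on the first -\/. A minimal sketch of that bind (just the idea, not the scalaz implementation) ..

// the essence of ReaderT over \/ : feed the same environment 'a' to
// every step and stop at the first -\/ (error)
def bind[A, S, T](f: A => ValidationStatus[S])
                 (g: S => A => ValidationStatus[T]): A => ValidationStatus[T] =
  a => f(a) flatMap (s => g(s)(a))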

So, what does a sample specification look like ?

    Before going into that, here are some basic abstractions, grossly simplified only for illustration purposes ..

    // the base abstraction
    sealed trait Item {
      def itemCode: String
    }
    
    // sample implementations
    case class ItemA(itemCode: String, desc: Option[String], 
      minPurchaseUnit: Int) extends Item
    case class ItemB(itemCode: String, desc: Option[String], 
      nutritionInfo: String) extends Item
    
    case class LineItem(item: Item, quantity: Int)
    
    case class Customer(custId: String, name: String, category: Int)
    
    // a skeleton order
    case class Order(orderNo: String, orderDate: Date, customer: Customer, 
      lineItems: List[LineItem])

    And here's a specification that checks some of the constraints on the Order object ..

    // a basic validation
    private def validate = ReaderTStatus[Order, Boolean] {order =>
      if (order.lineItems isEmpty) left(s"Validation failed for order $order") 
      else right(true)
    }

It's just for illustration and does not contain many domain rules. The important part is how we use the above defined types to implement the function. Order is not an explicit argument to the function - the Reader abstracts it away. The function returns a ReaderTStatus, which itself is a monad and hence allows us to sequence in the pipeline with other specifications. So we get the requirement of sequencing without breaking out of the expression oriented programming style.

    Here are a few other specifications based on the domain knowledge that we have gathered ..

    private def approve = ReaderTStatus[Order, Boolean] {order =>
      right(true)
    }
    
    private def checkCustomerStatus(customer: Customer) = ReaderTStatus[Order, Boolean] {order =>
      right(true)
    }
    
    private def checkInventory = ReaderTStatus[Order, Boolean] {order =>
      right(true)
    }

    Wiring them together

But how do we wire these pieces together so that we have the sequence of operations that the domain mandates, and yet retain all the goodness of compositionality in our model ? It's actually quite easy since we have already done the hard work of defining the appropriate types that compose ..

Here's the isReadyForFulfilment method that defines the composite specification and invokes all the individual specifications in sequence using a for-comprehension - which, as you all know, does the monadic bind in Scala and gives us the final expression that needs to be evaluated for the Order supplied.

    def isReadyForFulfilment(order: Order) = {
      val s = for {
        _ <- validate
        _ <- approve
        _ <- checkCustomerStatus(order.customer)
        c <- checkInventory
      } yield c
      s(order)
    }
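And at the end of the world we just feed an Order into it .. the values below are hypothetical, only to show the shape of the result ..

// order built with hypothetical values, just for illustration
val order = Order("o-1", today, customer, lineItems)

isReadyForFulfilment(order)
// \/-(true) if every specification passes
// -\/("Validation failed for order ..") if a step fails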

    So we have the monadic bind implement the sequencing without breaking the compositionality of the abstractions. In the next instalment we will see how this can be composed with the downstream processing of the order that will not only read stuff from the entity but mutate it too, of course in a functional way.

    Monday, June 03, 2013

    Endo is the new fluent API

I tweeted this over the weekend .. My last two blog posts have been about endomorphisms and how they combine with other functional structures to help you write expressive and composable code. In A DSL with an Endo - monoids for free, endos play with the Writer monad and implement a DSL for a sequence of activities through monoidal composition. And in An exercise in Refactoring - Playing around with Monoids and Endomorphisms, I discuss a refactoring exercise that exploits the monoid of an endo to make composition easier.

Endomorphisms help you lift your computation into a data type that gives you an instance of a monoid. And the mappend operation of the monoid is function composition. Hence once you have the Endo for your type defined, you get a nice declarative syntax for the operations that you want to compose, resulting in a fluent API.

Just a quick recap .. endomorphisms are functions that map a type onto itself and compose via a monoid. Given an endomorphism we can define an implicit monoid instance ..
    implicit def endoInstance[A]: Monoid[Endo[A]] = new Monoid[Endo[A]] {
      def append(f1: Endo[A], f2: => Endo[A]) = f1 compose f2
      def zero = Endo.idEndo
    }
    
    I am not going into the details of this, which I discussed at length in my earlier posts. In this article I will sum up with yet another use case for making fluent APIs using the monoid instance of an Endo. Consider an example from the domain of securities trading, where a security trade goes through a sequence of transformations in its lifecycle through the trading process .. Here's a typical Trade model (very very trivialified for demonstration) ..
    sealed trait Instrument
    case class Security(isin: String, name: String) extends Instrument
    
    case class Trade(refNo: String, tradeDate: Date, valueDate: Option[Date] = None, 
      ins: Instrument, principal: BigDecimal, net: Option[BigDecimal] = None, 
      status: TradeStatus = CREATED)
    
Modeling a typical lifecycle of a trade is complex. But for illustration, let's consider these simple steps which need to be executed on a trade in sequence ..
    1. Validate the trade
    2. Assign value date to the trade, which will ideally be the settlement date
    3. Enrich the trade with tax/fees and net trade value
    4. Journalize the trade in books
    Each of the functions take a Trade and return a copy of the Trade with some attributes modified. A naive way of doing that will be as follows ..
    def validate(t: Trade): Trade = //..
    
    def addValueDate(t: Trade): Trade = //..
    
    def enrich(t: Trade): Trade = //..
    
    def journalize(t: Trade): Trade = //..
    
    and invoke these methods in sequence while modeling the lifecycle. Instead we try to make it more composable and lift the function Trade => Trade within the Endo ..
    type TradeLifecycle = Endo[Trade]
    
    and here's the implementation ..
    // validate the trade: business logic elided
    def validate: TradeLifecycle = 
      ((t: Trade) => t.copy(status = VALIDATED)).endo
    
    // add value date to the trade (for settlement)
    def addValueDate: TradeLifecycle = 
      ((t: Trade) => t.copy(valueDate = Some(t.tradeDate), status = VALUE_DATE_ADDED)).endo
    
    // enrich the trade: add taxes and compute net value: business logic elided
    def enrich: TradeLifecycle = 
      ((t: Trade) => t.copy(net = Some(t.principal + 100), status = ENRICHED)).endo
    
    // journalize the trade into book: business logic elided
    def journalize: TradeLifecycle = 
      ((t: Trade) => t.copy(status = FINALIZED)).endo
    
Now Endo has an instance of Monoid defined by scalaz, and its mappend is function composition .. Hence here's our lifecycle model using the holy monoid of endo ..
    def doTrade(t: Trade) =
      (journalize |+| enrich |+| addValueDate |+| validate).apply(t)
    
    It's almost the specification that we listed above in numbered bullets. Note the inside out sequence that's required for the composition to take place in proper order.
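A tiny sketch makes the ordering concrete - since append is compose, the rightmost endo in the chain runs first ..

import scalaz._, Scalaz._

val inc = Endo((_: Int) + 1)
val dbl = Endo((_: Int) * 2)

(inc |+| dbl)(10)   // 21 == inc(dbl(10)) - dbl runs first
(dbl |+| inc)(10)   // 22 == dbl(inc(10)) - inc runs first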

    Why not plain old composition ?


    A valid question. The reason - abstraction. Abstracting the composition within types helps you compose the result with other types, as we saw in my earlier blog posts. In one of them we built larger abstractions using the Writer monad with Endo and in the other we used the mzero of the monoid as a fallback during composition thereby avoiding any special case branch statements.
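For contrast, plain old composition over the bare Trade => Trade functions from the first listing would read like this (a sketch, assuming those un-lifted versions are in scope) ..

// composes fine, but the result is a bare Function1 - no monoid,
// no mzero fallback, nothing to weave into a Writer or other structure
val doTradePlain: Trade => Trade =
  (validate _) andThen (addValueDate _) andThen (enrich _) andThen (journalize _)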

    One size doesn't fit all ..


The endo and its monoid compose beautifully and give us a domain friendly syntax that expresses the business functionality in a nice succinct way. But it's not a pattern which you can apply everywhere you need to compose a bunch of domain behaviors. Like every idiom, it has its shortcomings and you need different sets of solutions in your repertoire. For example the above solution doesn't handle any of the domain exceptions - what if the validation fails ? With the above strategy the only way you can handle this situation is to throw exceptions from the validate function. But exceptions are side-effects and in functional programming there are cleaner ways to tame the evil. And for that you need different patterns in practice. More on that in subsequent posts ..
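Just to fix the intuition before those posts, here's one possible direction, as a hedged sketch - make each step return an Either and sequence with flatMap, which short-circuits on the first failure. The step names addValueDateE, enrichE and journalizeE below are hypothetical analogues of the endo versions ..

type TradeStep = Trade => Either[String, Trade]

// hypothetical Either-returning variant of validate
def validateE: TradeStep = t =>
  if (t.principal <= 0) Left(s"validation failed for trade ${t.refNo}")
  else Right(t.copy(status = VALIDATED))

// flatMap on (right-biased) Either stops at the first Left
def doTradeChecked(t: Trade): Either[String, Trade] =
  validateE(t) flatMap addValueDateE flatMap enrichE flatMap journalizeE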

    Monday, January 14, 2013

    A language and its interpretation - Learning free monads

    I have been playing around with free monads of late and finding them more and more useful in implementing separation of concerns between pure data and its interpretation. Monads generally don't compose. But if you restrict monads to a particular form, then you can define a sum type that composes. In the paper Data Types a la carte, Wouter Swierstra describes this form as
    data Term f a =
        Pure a
        | Impure (f (Term f a)) 
    
These monads consist of either pure values or an impure effect, constructed using f. When f is a functor, Term f is a monad. And it is the free monad - free in the sense that the construction is left adjoint to the forgetful functor from monads to functors.

I am not going into the details of what makes a monad free. I once asked this question on google+ and Edward Kmett came up with a beautiful explanation. So instead of trying to come up with a half assed version of the same, have a look at Ed's response here.

In short, we can say that a free monad is the freest object possible that's still a monad.

    Composition

    Free monads compose and help you build larger abstractions which are pure data and yet manage to retain all properties of a monad. Hmm .. this sounds interesting because now we can not only build abstractions but make them extensible through composition by clients using the fact that it's still a monad.

    A free monad is pure data, not yet interpreted, as we will see shortly. You can pass it to a separate interpreter (possibly multiple interpreters) which can do whatever you feel like with the structure. Your free monad remains pure while all impurities can be put inside your interpreters.
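(A quick aside for Scala readers .. the same shape can be written down in a few lines of Scala as well - a self-contained sketch mirroring the Term definition above, not scalaz's production-grade Free ..)

// a mini Free in Scala, mirroring data Term f a = Pure a | Impure (f (Term f a))
trait Functor[F[_]] { def map[A, B](fa: F[A])(f: A => B): F[B] }

sealed trait Free[F[_], A] {
  // bind pushes the continuation under the functor layer
  def flatMap[B](f: A => Free[F, B])(implicit F: Functor[F]): Free[F, B] =
    this match {
      case Pure(a)    => f(a)
      case Impure(fa) => Impure(F.map(fa)(_ flatMap f))
    }
}
final case class Pure[F[_], A](a: A) extends Free[F, A]
final case class Impure[F[_], A](fa: F[Free[F, A]]) extends Free[F, A]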

    And so we interpret ..

    In this post I will describe how I implemented an interpreter for a Joy like concatenative language that uses the stack for its computation. Of course it's just a prototype and addresses an extremely simplified subset, but you get the idea. The basic purpose is to explore the power of free monads and come up with something that can potentially be extended to a full blown implementation of the language.

When designing an interpreter there's always the risk of conflating the language with the concerns of interpreting it. Here we will have the two completely decoupled by designing the core language constructs as free monads. Gabriel Gonzalez has written a series of blog posts [1,2,3] on the use of free monads which contain details of their virtues and usage patterns. My post is just an account of my learning experience. In fact after I wrote the post, I discovered a similar exercise done for embedding Forth in Haskell - so I guess I'm not the first one to learn free monads using a language interpreter.

    Let's start with some code, which is basically a snippet of Joy like code that will run within our Haskell interpreter ..
    p :: Joy ()
    p = do push 5
           push 6
           add
           incr
           add2
           square
           cube
           end
    

    This is our wish list and at the end of the post we will see if we can interpret this correctly and reason about some aspects of this code. Let's not bother much about the details of the above snippet. The important point is that if we fire it up in ghci, we will see that it's pure data!
    *Joy> p
    Free (Push 5 (Free (Push 6 (Free (Add (Free (Push 1 (Free (Add (Free (Push 1 (Free (Add (Free (Push 1 (Free (Add (Free (Dup (Free (Mult (Free (Dup (Free (Dup (Free (Mult (Free (Mult (Free End))))))))))))))))))))))))))))))
    *Joy> 
    

    We haven't yet executed anything of the above code. It's completely free to be interpreted and possibly in multiple ways. So, we have achieved this isolation - that the data is freed from the interpreter. You want to develop a pretty printer for this data - go for it. You want to apply semantics and give an execution model based on Joy - do it.

    Building the pure data (aka Builder)

    Let's first define the core operators of the language ..
    data JoyOperator cont = Push Int cont
                          | Add      cont 
                          | Mult     cont 
                          | Dup      cont 
                          | End          
                          deriving (Show, Functor)
    

The interesting piece is the derivation of the Functor, which is what the free monad construction needs. Keeping the technical mumbo jumbo aside, free monads are just a general way of turning functors into monads. So if we have a core operator f as a functor, we can get a free monad Free f out of it. And knowing something is a free monad helps you transform an operation over the monad (a monad homomorphism) into an operation over the functor (a functor homomorphism). We will see how this helps later ..

The other point to note is that every operator except End takes a continuation argument that points to the next operation in the chain. End is the terminal symbol, and we plan to ignore anything that the user enters after an End.

    Push takes an Int and pushes into the stack, Add pops the top 2 elements of the stack and pushes the sum, Mult does the same for multiplication and Dup duplicates the top element of the stack. End signifies the end of program.

    Next we define the free monad over JoyOperator by using the Free data constructor, defined as part of Control.Monad.Free ..

    data Free f a = Pure a | Free (f (Free f a))

    -- | The free monad over JoyOperator
    type Joy = Free JoyOperator
    

    And then follow it up with some of the definitions of Joy operators as operations over free monads. Note that liftF lifts an operator (which is a Functor) into the context of a free monad. liftF has the following type ..

    liftF :: Functor f => f a -> Free f a

As a property, the free monad construction is left adjoint to the forgetful functor that takes a monad to its underlying functor. The unlifting from the monad back to the functor is given by the retract function ..

    retract :: Monad f => Free f a -> f a

    and needless to say

    retract . liftF = id

    -- | Push an integer to the stack
    push :: Int -> Joy ()
    push n = liftF $ Push n ()
    
    -- | Add the top two numbers of the stack and push the sum
    add :: Joy ()
    add = liftF $ Add ()
    

    .. and this can be done for all operators that we wish to support.

    Not only this, we can also combine the above operators and build newer ones. Remember we are working with monads and hence the *do* notation based sequencing comes for free ..

    -- | This combinator adds 1 to a number. 
    incr :: Joy ()
incr = do {push 1; add}
    
    -- | This combinator increments twice
    add2 :: Joy ()
    add2 = do {incr; incr}
    
    -- | This combinator squares a number
    square :: Joy ()
    square = do {dup; mult}
    

    Now we can have a composite program which sequences through the core operators as well as the ones we derive from them. And that's what we posted as our first example snippet of a target program.

    An Interpreter (aka Visitor)

    Once we have the pure data part done, let's try and build an interpreter that does the actual execution based on the semantics that we defined on the operators.

    -- | Run a joy program. Result is either an Int or an error
    runProgram :: Joy n -> Either JoyError Int
    runProgram program = joy [] program
      where joy stack (Free (Push v cont))         = joy (v : stack) cont
            joy (a : b : stack) (Free (Add cont))  = joy (a + b : stack) cont
            joy (a : b : stack) (Free (Mult cont)) = joy (a * b : stack) cont
            joy (a : stack) (Free (Dup cont))      = joy (a : a : stack) cont
        joy _ (Free Add {})                    = Left NotEnoughParamsOnStack
        joy _ (Free Mult {})                   = Left NotEnoughParamsOnStack
        joy _ (Free Dup {})                    = Left NotEnoughParamsOnStack
            joy [] (Free End)                      = Left NotEnoughParamsOnStack
            joy [result] (Free End)                = Right result
            joy _ (Free End)                       = Left NotEmptyOnEnd
            joy _ Pure {}                          = Left NoEnd
    

    runProgram is the interpreter that takes a free monad as its input. Its implementation is quite trivial - it just matches on the recursive structure of the data and pushes the appropriate results on the stack. Now if we run our program p using the above interpreter we get the correct result ..

    *Joy> runProgram p
    Right 7529536
    *Joy> 
    

    Equational Reasoning

    Being Haskell and being pure, we obviously can prove some bits and pieces of our program as mathematical equations. At the beginning I said that End is the end of the program and anything after End needs to be ignored. What happens if we do the following ..

    runProgram $ do {push 5; incr; incr; end; incr; incr}

If you have guessed correctly, we get Right 7, which means that all operations after end have been ignored. But can we prove that our program indeed does this ? The following is a proof that end indeed ends our program. Consider some operation m that follows end ..

    end >> m
    
    -- definition of end
= liftF End >> m
    
    -- m >> m' = m >>= \_ -> m'
= liftF End >>= \_ -> m
    
    -- definition of liftF
    = Free End >>= \_ -> m
    
    -- Free m >>= f = Free (fmap (>>= f) m)
    = Free (fmap (>>= \_ -> m) End)
    
    -- fmap f End = End
    = Free End
    
-- Free End = liftF End (since fmap Pure End = End)
    = liftF End
    
-- definition of end
    = end
    

    So this shows that any operation we do after end is never executed.

    Improving the bind

Free monads offer improved modularity by allowing us to separate the building of pure data from the (possibly) impure concerns of interpreting it in the external world. But a naive implementation leads to a penalty in runtime efficiency, as Janis Voigtlander discusses in his paper Asymptotic Improvement of Computations over Free Monads. Haskell's implementation of free monads had this inefficiency, where the asymptotic complexity of substitution was quadratic because of left associative binds. Edward Kmett implemented the Janis trick and engineered a solution that gets over this inefficiency. I will discuss this in a future post. And if you would like to play around with the interpreter, the source is there in my github.

    Thursday, January 03, 2013

    strict : recursion :: non-strict : co-recursion

    Consider the very popular algorithm that uses a tail recursive call to implement a map over a List. Here's the implementation in F#

    let map converter l =
      let rec loop acc = function
        | [] -> acc
        | hd :: tl -> loop (converter hd :: acc) tl
      List.rev (loop [] l)

    Scala is also a statically typed functional programming language, though it uses a completely different trick and mutability to implement map over a sequence. Let's ignore it for the time being and imagine we are being a bonafide functional programmer.

    The above code uses a very common idiom of accumulate-and-reverse to implement a tail recursive algorithm. Though Scala stdlib does not use this technique and uses a mutable construct for this implementation, we could have done the same thing in Scala as well ..
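Here's a sketch of how that accumulate-and-reverse version might look in Scala ..

// tail recursive map in Scala using accumulate-and-reverse,
// mirroring the F# version above
def map[A, B](f: A => B)(l: List[A]): List[B] = {
  @scala.annotation.tailrec
  def loop(acc: List[B], rest: List[A]): List[B] = rest match {
    case Nil      => acc
    case hd :: tl => loop(f(hd) :: acc, tl)
  }
  loop(Nil, l).reverse
}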

    Both Scala and F# are languages with strict evaluation semantics as the default. What would a similar tail recursive Haskell implementation look like ?

    map' f xs = reverse $ go [] xs
        where
        go accum [] = accum
        go accum (x:xs) = go (f x : accum) xs
    

    Let's now have a look at the actual implementation of map in Haskell prelude ..

    map :: (a -> b) -> [a] -> [b]
    map _ []     = []
    map f (x:xs) = f x : map f xs
    

Whoa! It's not tail recursive - instead it's a body recursive implementation. Why is that ? We have been taught that tail recursion is the holy grail since it takes constant stack space and all ..

    In a strict language the above implementation of map will be a bad idea since it uses linear stack space. On the other hand the initial implementation is tail recursive. But Haskell has non-strict semantics and hence the last version of map consumes only one element of the input and yields one element of the output. The earlier versions consume the whole input before yielding any output.

    In a lazy language what you need is to make your algorithm co-recursive. And this needs co-data. Understanding co-data or co-recursion needs a brief background of co-induction and how it differs from induction.

    When we define a list in Haskell as

    data [a] = [] | a : [a]
    

it means that the set "List of a" is the smallest set such that [] is in List of a, and if xs is in List of a and x is in a, then x : xs is in List of a. This is the inductive definition of a List, and we can use recursion to implement various properties of a List. Also once we have a recursive definition, we can use structural induction to prove various properties of the data structure.

    If an inductive definition on data gives us the smallest set, a co-inductive definition on co-data gives us the largest set. Let's define a Stream in Haskell ..

    data Stream a = Cons a (Stream a)
    

    First to note - unlike the definition of a List, there's no special case for empty Stream. Hence no base case unlike the inductive definition above. And a "Stream of a" is the largest set consisting of a pair of an "a" and a "Stream of a". Which means that Stream is an infinite data structure i.e. the range (or co-domain) of a Stream is infinite. And we implement properties on co-inductive data structures using co-recursive programs. A co-recursive program is defined as one whose range is a type defined recursively as the greatest solution of some equation.

    In terms of a little bit of mathematical concepts, if we model types as sets, then an infinite list of integers is the greatest set X for which there is a bijection X ≅ ℤ x X and a program that generates such a list is a co-recursive program. As its dual, the type of a finite list is the least set X for which X ≅ 1 + (ℤ x X), where 1 is a singleton set and + is the disjoint union of sets. Any program which consumes such an input is a recursive program. (Note the dualities in italics).

    Strictly speaking, the co-domain does not necessarily have to be infinite. If you have read my earlier post on initial algebra, co-recursion recursively defines functions whose co-domain is a final data type, dual to the way that ordinary recursion recursively defines functions whose domain is an initial data type.

    In case of primitive recursion (with the List data type above), you always frame the recursive step to operate on data which gets reduced in size in subsequent steps of recursive calls. It's not so with co-recursion since you may be working with co-data and infinite data structures. Since in Haskell, a List can also be infinite, using the above co-recursive definition of map, we can have

    head $ map (1+) [1..]
    

which invokes map on an infinite list of integers. And here the co-recursive steps of map operate successively on sets of data which are not less than the earlier set. So it's not tail recursion that makes an efficient implementation in Haskell - you need to make the co-recursive call within the application of a constructor.
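The same point can be made in Scala with LazyList (Stream in older versions), where #:: is the lazy constructor - a small sketch ..

// co-recursive map in Scala: the recursive call sits inside the
// lazy #:: constructor, so only demanded elements are computed
def from(n: Int): LazyList[Int] = n #:: from(n + 1)

def mapCo[A, B](f: A => B)(s: LazyList[A]): LazyList[B] =
  f(s.head) #:: mapCo(f)(s.tail)

mapCo((_: Int) + 1)(from(1)).head   // 2 - forces just one element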

    As the first post of the new year, that's all friends. Consider this as the notes from someone learning the principles of co-inductive definitions. Who knew co-recursion is so interesting ? It will be more interesting once we get down into the proof techniques for co-recursive programs. But that's for a future post ..

    Monday, October 01, 2012

    Towards better refactoring support in IDEs for functional programming


A couple of days back I was thinking how we could improve the state of IDEs and let them give rich feedback to users, focusing on code improvement and refactoring. When you write Java programs, IntelliJ IDEA is possibly the best IDE you can have, with an extensive set of refactoring tools. But most of these are targeted towards reducing boilerplate and refactor based on structural changes only (e.g. replace constructor with builder, or use interfaces wherever possible). Can we improve the state of the art when we are using a semantically richer language like Scala or Haskell ? How about suggesting idioms that relate to the paradigm of functional programming and exploring some of the best practices suggested by the masters of the trade ?

    I always find many programmers use Option.get in Scala when they should be using higher order structures instead. Here's an example of the anti-pattern ..

if (oName.isDefined) oName.get.toUpperCase else "Name not present"

    which should ideally be refactored to

    oName.map(_.toUpperCase).getOrElse("Name not present")

    The advantage with the latter is that it's more composable and can be pipelined to other higher order functions down the line without introducing any temporary variables.

    Many people also use pattern matching for destructuring an Option ..

    oName match {
      case Some(n) => n.toUpperCase
      case _ => "Name not present"
    }
    

    This is also not what idioms espouse and is almost equivalent to explicit null checking.
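Incidentally, later Scala versions give you Option.fold, which packages exactly this destructuring - so the IDE could equally suggest ..

// equivalent to the map/getOrElse version above
oName.fold("Name not present")(_.toUpperCase)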

    Can an IDE suggest such refactoring ? I tweeted about my thoughts and almost immediately Mirko, who is doing some great stuff on the Scala IDE, added this to his list of ideas. Nice!

    Here's another idea. How about suggesting some program transformations for functional programming ? I know Scala is not a pure functional language and we don't have effect tracking. I have seen people use side-effecting functions in map, which, I know is a strict no-no. But again, the compiler doesn't complain. Still we can have the IDE suggest some refactorings assuming the programmer uses purity as per recommendations. After all, these are suggestions only and can enlighten a programmer with some of the realizations of how to model based on the best practices.

    Consider this ..

    (-1000000 to 1000000).map(_ + 1).filter(_ > 0)

Here the map is executed on all the elements and then filter processes the whole output. One option to optimize would be to defer the computations using a view ..

    (-1000000 to 1000000).view.map(_ + 1).filter(_ > 0)

    But this doesn't reduce the load of computation on map. It just prevents the creation of a temporary list as an output of the map. We can apply a transformation called filter promotion that does the filtering first so that map can operate on a potentially smaller list. Of course this assumes that both functions are *pure* so that operations can be reordered - but that's what we should anyway ensure while doing functional programming .. right ? Here's the transformation ..

    (-1000000 to 1000000).filter(((_: Int) > 0) compose ((_: Int) + 1)).map(_ + 1)

    which can be further simplified into ..

    (-1000000 to 1000000).filter((_: Int) >= 0).map(_ + 1)

.. and this has the same computation load on filter but potentially less load on map, since map now has to process only the filtered list.
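The rule at work here is filter promotion - for pure f and p, xs.map(f).filter(p) == xs.filter(p compose f).map(f). A quick sketch to convince yourself on a small range ..

val xs = -5 to 5
val f  = (x: Int) => x + 1
val p  = (x: Int) => x > 0

// holds only because f is pure - both sides yield Vector(1, 2, 3, 4, 5, 6)
assert(xs.map(f).filter(p) == xs.filter(p compose f).map(f))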

Can we have such transformations available in the IDE ? I am a faithful user of vim, but with such enrichments I may consider going back to an IDE. This really has no limits. Consider Wadler's free theorems. Many times we tend to write functions whose arguments are more specialized in types than what's required. Generalization of such functions using polymorphic arguments can lead to better abstraction and verification through free theorems. An IDE can suggest all these and may also help generate properties for property based testing using tools like QuickCheck and ScalaCheck. But that's discussion for another day ..

P.S. Simon Thompson's Haskell: The Craft of Functional Programming, 3rd edition, discusses lots of similar transformations in various chapters that discuss reasoning about Haskell programs.