Monday, November 03, 2014

Functional and Reactive Domain Modeling

Manning has launched the MEAP of my upcoming book on Domain Modeling.



The first time I was formally introduced to the topic was way back when I played around with Eric Evans' awesome text on the subject of Domain Driven Design. In the book he discusses various object lifecycle patterns like the Factory, Aggregate or Repository that help separate concerns when you are implementing the various interactions between the elements of the domain model. Entities are artifacts with identities, value objects are pure values, while services model the coarse level use cases of the model components.

Traditionally we followed the recommendations of Eric in our real world implementations and used the object oriented paradigm for modeling all interactions. We started talking about rich domain models and anemic domain models. The rich model espoused a richer agglomeration of state and behavior within the model, while the anemic model preferred to keep them decoupled. Martin Fowler sums this up in his post on Anemic Domain Models ..
"The basic symptom of an Anemic Domain Model is that at first blush it looks like the real thing. There are objects, many named after the nouns in the domain space, and these objects are connected with the rich relationships and structure that true domain models have. The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the the domain objects. Instead there are a set of service objects which capture all the domain logic. These services live on top of the domain model and use the domain model for data."

Go Functional

In Functional and Reactive Domain Modeling I look at the problem with a different lens. The primary focus of the book is to encourage building domain models using the principles of functional programming. It's a completely orthogonal approach to OO and focuses on verbs first (as opposed to nouns first in OO), algebra first (as opposed to objects in OO), function composition first (as opposed to object composition in OO), and lightweight objects as ADTs (instead of rich class models).
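To give a rough flavor of what "algebra first" can look like (my own illustrative sketch, not an excerpt from the book), the operations of the domain come first as pure function signatures while the concrete types stay abstract:

// the algebra of an account service: verbs first, types abstract;
// Account, Amount and Balance are hypothetical placeholders, and String
// stands in for a richer error type
trait AccountService[Account, Amount, Balance] {
  def open(no: String, name: String): Either[String, Account]
  def debit(a: Account, amount: Amount): Either[String, Account]
  def credit(a: Account, amount: Amount): Either[String, Account]
  def balance(a: Account): Either[String, Balance]
}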

The book starts with the basics of functional programming principles and discusses the virtues of purity and the advantages of keeping side-effects decoupled from the core business logic. The book uses Scala as the programming language and discusses extensively why the OO and functional features of Scala are a perfect fit for modelling complex domains. Chapter 3 starts the core subject of functional domain modeling with real world examples illustrating how we can make good use of patterns like smart constructors, monads and monoids in implementing your domain model. The main virtue that these patterns bring to your model is genericity - they help you extract generic algebra from domain specific logic into parametric functions which are far more reusable and less error prone. Chapter 4 focuses on advanced usages like typeclass based design and patterns like monad transformers, kleislis and other forms of compositional idioms of functional programming. A primary focus of the book is algebraic API design and developing an appreciation of the ability to reason about your model.
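As a taste of one of these patterns, here's a minimal smart constructor sketch (my illustration, not the book's code) - the generated constructor is suppressed, so the only way to obtain a value is through a validating factory:

// an abstract case class suppresses the generated apply/copy, so a
// ZipCode (a hypothetical type) can only be created via the factory below
sealed abstract case class ZipCode(value: String)

object ZipCode {
  def apply(value: String): Either[String, ZipCode] =
    if (value.matches("\\d{5}")) Right(new ZipCode(value) {})
    else Left(s"invalid zip code $value")
}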

Go Reactive

The term "reactive" has recently become quite popular in describing systems that are responsive, scalable and adaptive. In implementing complex domain models, we find many areas which can be made more responsive by implementing them as non blocking operations instead of the standard blocking function calls. Using higher level concurrency primitives like actors, futures and data flow based computing we can compose asynchronous operations and increase the net throughput of your model. The second part of the book discusses how you can combine the principles of functional programming with the reactive way of implementing behaviors. The paradigm of application design using the principles of functional programming together with an asynchronous non-blocking mode of communication between the participating entities promises to be a potent combination towards developing performant systems that are relatively easy to manage, maintain and evolve. But designing and implementing such models need a different way of thinking. The behaviors that you implement have to be composable using pure functions and they can form building blocks of bigger abstractions that communicate between them using non-blocking asynchronous message passing.

Functional meets Reactive

Functional and Reactive Domain Modeling takes you through the steps of learning how to think of the domain model in terms of pure functions and how to compose them to build larger abstractions. You will start with the basics of functional programming principles and gradually progress to the advanced concepts and patterns that you need to know to implement complex domain models. The book demonstrates how advanced FP patterns like algebraic data types, typeclass based design and isolation of side-effects can make your model composable, readable and verifiable. On the subject of reactive modeling, the book focuses on the higher order concurrency patterns like actors and futures. It uses the Akka framework as the reference implementation and demonstrates how advanced architectural patterns like event sourcing and command-query-responsibility-segregation can be put to great use while implementing scalable models. You will learn techniques that are radically different from those of standard RDBMS based applications, which are based on mutation of records. You'll also pick up important patterns like using asynchronous messaging for interaction based on non blocking concurrency and model persistence, which delivers the speed of in-memory processing along with suitable guarantees of reliability.

Looking forward to an exciting journey with the book. I am sure you will find the topics that I discuss there interesting. And feel free to jump on to AuthorOnline and fire questions that we all can discuss. I am sure this will also lead to an overall improvement in the quality of the book.

Monday, May 12, 2014

Functional Patterns in Domain Modeling - Anemic Models and Compositional Domain Behaviors

I was looking at the presentation that Dean Wampler made recently regarding domain driven design, anemic domain models and how using functional programming principles helps ameliorate some of the problems there. There are some statements he made which, I am sure, made many OO practitioners chuckle. They contradict popular beliefs that encourage OOP as the primary way of modeling using DDD principles.

One statement that resonates a lot with my thought is "DDD encourages understanding of the domain, but don't implement the models". DDD does a great job in encouraging developers to understand the underlying domain model and ensuring a uniform vocabulary throughout the lifecycle of design and implementation. This is what design patterns also do - they give you a vocabulary that you can heartily exchange with your fellow developers without being tied to any particular implementation of the underlying pattern.

On the flip side, trying to implement DDD concepts using standard OO techniques with joined state and behavior often gives you a muddled mutable model. The model may be rich in the sense that you will find all concepts related to the particular domain abstraction baked into the class you are modeling. But it makes the class fragile as well, since the abstraction becomes more locally focused, losing the global perspective of reusability and composability. As a result, when you try to compose multiple abstractions within the domain service layer, it becomes polluted with too much glue code that resolves the impedance mismatch between class boundaries.

So when Dean claims "Models should be anemic", I think he means to avoid this bundling of state and behavior within the domain object, which gives you a false sense of security about the richness of the model. He encourages the practice of building domain objects that carry only state, while behaviors are modeled as standalone functions.



One other strawman argument that I come across very frequently is that bundling state and behavior by modeling the latter as methods of the class increases encapsulation. If you are still a believer of this school of thought, have a look at Scott Meyers' excellent article which he wrote as early as 2000. He challenges the view that a class is the right level of modularization and encourages more powerful module systems as better containers of your domain behaviors.

As a continuation of my series on functional domain modeling, we continue with the example of the earlier posts and explore the theme that Dean discusses ..

Here's the anemic domain model of the Order abstraction ..

case class Order(orderNo: String, orderDate: Date, customer: Customer, 
  lineItems: Vector[LineItem], shipTo: ShipTo, 
  netOrderValue: Option[BigDecimal] = None, status: OrderStatus = Placed)

In the earlier posts we discussed how to implement the Specification and Aggregate Patterns of DDD using functional programming principles. We also discussed how to do functional updates of aggregates using data structures like Lens. In this post we will use these as the building blocks, use more functional patterns and build larger behaviors that model the ubiquitous language of the domain. After all, one of the basic principles behind DDD is to lift the domain model vocabulary into your implementation so that the functionality becomes apparent to the developer maintaining your model.

The core idea is to validate the assumption that building domain behaviors as standalone functions leads to an effective realization of the domain model according to the principles of DDD. The base classes of the model contain only the state, which can be updated functionally. All domain behaviors are modeled through functions that reside within the module that represents the aggregate.

Functions compose, and that's precisely how we will chain sequences of domain behaviors to build bigger abstractions out of smaller ones. Here's a small function that values an Order. Note it returns a Kleisli, which essentially gives us composition over monadic functions. So instead of composing a -> b and b -> c, which we do with normal function composition, we can do the same over a -> m b and b -> m c, where m is a monad. Composition with effects, if you will.

// sets the value of each line item through the orderLineItems lens,
// then verifies that every item now carries a value
def valueOrder = Kleisli[ProcessingStatus, Order, Order] {order =>
  val o = orderLineItems.set(
    order,
    setLineItemValues(order.lineItems)
  )
  o.lineItems.map(_.value).sequenceU match {
    case Some(_) => right(o)
    case _ => left("Missing value for items")
  }
}

But what does that buy us ? What exactly do we gain from these functional patterns ? It's the power to abstract over families of similar abstractions like applicatives and monads. Well, that may sound a bit rhetorical and it needs a separate post to justify their use. Stated simply, they encapsulate the effects and side-effects of your computation so that you can focus on the domain behavior itself. Have a look at the process function below - it's actually a composition of monadic functions in action. But all the machinery that does the processing of effects and side-effects is abstracted within the Kleisli itself so that the user level implementation is simple and concise.

With Kleisli it's the power to compose over monadic functions. Every domain behavior has a chance of failure, which we model using the Either monad - here ProcessingStatus is just a type alias for this .. type ProcessingStatus[S] = \/[String, S]. Using the Kleisli, we don't have to write any code for handling failures. As you will see below, the composition is just like the normal functions - the design pattern takes care of alternate flows.
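The snippets use right and left helpers that are not shown in the post - here's one plausible definition (an assumption on my part, mirroring the scalaz \/ constructors):

type ProcessingStatus[S] = \/[String, S]

// assumed helpers: lift a success / an error into ProcessingStatus
def right[S](s: S): ProcessingStatus[S] = \/-(s)
def left[S](msg: String): ProcessingStatus[S] = -\/(msg)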

Once the Order is valued, we need to apply discounts to qualifying items. It's another behavior that follows the same pattern of implementation as valueOrder.

def applyDiscounts = Kleisli[ProcessingStatus, Order, Order] {order =>
  val o = orderLineItems.set(
    order,
    // setLineItemDiscounts: the discount analogue of setLineItemValues (elided)
    setLineItemDiscounts(order.lineItems)
  )
  o.lineItems.map(_.discount).sequenceU match {
    case Some(_) => right(o)
    case _ => left("Missing discount for items")
  }
}

Finally we check out the Order ..

def checkOut = Kleisli[ProcessingStatus, Order, Order] {order =>
  // net value = sum over items of (value - discount); the Multiplication
  // tag negates the discount before the additive append
  val netOrderValue = order.lineItems.foldLeft(BigDecimal(0).some) {(s, i) => 
    s |+| (i.value |+| i.discount.map(d => Tags.Multiplication(BigDecimal(-1)) |+| Tags.Multiplication(d)))
  }
  right(orderNetValue.set(order, netOrderValue))
}

And here's the service method that composes all of the above domain behaviors into the big abstraction. We don't have any object to instantiate. Just plain function composition that results in an expression modeling the entire flow of events. And it's the cleanliness of abstraction that makes the code readable and succinct.

def process(order: Order) = {
  (valueOrder andThen 
     applyDiscounts andThen checkOut) =<< right(orderStatus.set(order, Validated))
}

In case you are interested in the full source code of this small example, feel free to take a peek at my github repo.

Sunday, April 06, 2014

Functional Patterns in Domain Modeling - Immutable Aggregates and Functional Updates

In the last post I looked at a pattern that enforces constraints to ensure domain objects honor the domain rules. But what exactly is a domain object ? What should be the granularity of an object that my solution model should expose so that it makes sense to a domain user ? After all, the domain model should speak the language of the domain. We may have a cluster of entities modeling various concepts of the domain. But only some of them can be published as abstractions to the user of the model. The others can be treated as implementation artifacts and are best hidden under the covers of the published ones.

An aggregate in domain driven design is a published abstraction that provides a single point of interaction to a specific domain concept. Considering the classes I introduced in the last post, an Order is an aggregate. It encapsulates the details that an Order is composed of in the real world (well, only barely in this example, which is only for illustration purposes :-)).

Note an aggregate can consist of other aggregates - e.g. we have a Customer instance within an Order. Eric Evans in his book on Domain Driven Design provides an excellent discussion of what constitutes an Aggregate.

Functional Updates of Aggregates with Lens


This is not a post about Aggregates and how they fit in the realm of domain driven design. In this post I will talk about how to use some patterns to build immutable aggregates. Immutable data structures offer a lot of advantages, so build your aggregates ground up as immutable objects. Algebraic Data Type (ADT) is one of the patterns to build immutable aggregate objects. Primarily coming from the domain of functional programming, ADTs offer powerful techniques of pattern matching that help you match values against patterns and bind variables to successful matches. In Scala we use case classes as ADTs that give immutable objects out of the box ..

case class Order(orderNo: String, orderDate: Date, customer: Customer, 
  lineItems: Vector[LineItem], shipTo: ShipTo, netOrderValue: Option[BigDecimal] = None, 
  status: OrderStatus = Placed)
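And a quick illustration of the pattern matching mentioned above (my example - Placed is the default status from the definition, the other patterns just bind fields):

def orderSummary(o: Order) = o match {
  case Order(no, _, _, items, _, _, Placed) => s"order $no placed with ${items.size} items"
  case Order(no, _, _, _, _, Some(net), _) => s"order $no valued at $net"
  case Order(no, _, _, _, _, _, _) => s"order $no in progress"
}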

Like all good aggregates, we need to provide a single point of interaction to users. Of course we can access all properties using accessors of case classes. But what about updates ? We can update the orderNo of an order like this ..

val o = Order( .. )
o.copy(orderNo = newOrderNo)

which gives us a copy of the original order with the new order no. We don't mutate the original order. But anybody having some knowledge of Scala will realize that this becomes pretty clunky when we have to deal with nested object updates, e.g. in the above case, ShipTo is defined as follows ..

case class Address(number: String, street: String, city: String, zip: String)
case class ShipTo(name: String, address: Address)

So, here you go in order to update the zip code of a ShipTo ..

val s = ShipTo("abc", Address("123", "Monroe Street", "Denver", "80233"))
s.copy(address = s.address.copy(zip = "80231"))

Not really pleasing, and it can get incomprehensible pretty quickly.

In our domain model we use an abstraction called a Lens for updating Aggregates. In layman's terms, a lens is an encapsulated get and set combination. The get extracts a small part from a larger whole, while the set transforms the larger abstraction with a smaller part taken as a parameter.

case class Lens[A, B](get: A => B, set: (A, B) => A)

This is a naive definition of a Lens in Scala. Sophisticated lens designs go a long way to ensure proper abstraction and composition. scalaz provides one such implementation out of the box that exploits the similarity in structure between the get and the set to generalize the lens definition in terms of another abstraction named Store. As it happens so often in functional programming, Store happens to abstract yet another pattern called the Comonad. You can think of a Comonad as the dual of a Monad. But in case you are more curious, and have wondered how lenses form "the Coalgebras for the Store Comonad", have a look at the 2 papers here and here.
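For the curious, here's a grossly simplified sketch of the Store idea (not scalaz's actual encoding): a Store pairs the focused part with a function that rebuilds the whole, and a lens is then a function from the whole to such a Store.

// pos is the focused part, peek rebuilds the whole from a new part
case class Store[S, A](peek: S => A, pos: S)

// a lens as a coalgebra of Store: point it at an A and you get the
// focused B plus a way to put a modified B back
case class StoreLens[A, B](run: A => Store[B, A]) {
  def get(a: A): B = run(a).pos
  def set(a: A, b: B): A = run(a).peek(b)
}

// e.g. focusing an Address on its zip
val zipL = StoreLens[Address, String](a => Store(z => a.copy(zip = z), a.zip))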

Anyway for us mere domain modelers, we will use the Lens implementation as in scalaz .. here's a lens that helps us update the OrderStatus within an Order ..

val orderStatus = Lens.lensu[Order, OrderStatus] (
  (o, value) => o.copy(status = value),
  _.status
)

and use it as follows ..

val o = Order( .. )
orderStatus.set(o, Placed)

will change the status field of the Order to Placed. Let's have a look at some of the compositional properties of a lens which help us write readable code for functionally updating nested structures.

Composition of Lenses

First let's define some individual lenses ..

// lens for updating a ShipTo of an Order
val orderShipTo = Lens.lensu[Order, ShipTo] (
  (o, sh) => o.copy(shipTo = sh),
  _.shipTo
)

// lens for updating an address of a ShipTo
val shipToAddress = Lens.lensu[ShipTo, Address] (
  (sh, add) => sh.copy(address = add),
  _.address
)

// lens for updating a city of an address
val addressToCity = Lens.lensu[Address, String] (
  (add, c) => add.copy(city = c),
  _.city
)

And now we compose them to define a lens that directly updates the city of a ShipTo belonging to an Order ..

// compositionality FTW
def orderShipToCity = orderShipTo andThen shipToAddress andThen addressToCity

Now updating a city of a ShipTo in an Order is as simple and expressive as ..

val o = Order( .. )
orderShipToCity.set(o, "London")

The best part of using such compositional data structures is that it makes your domain model implementation readable and expressive to the users of your API. And yet your aggregate remains immutable.

Let's look at another use case when the nested object is a collection. scalaz offers partial lenses that you can use for such composition. Here's an example where we build a lens that updates the value member within a LineItem of an Order. A LineItem is defined as ..

case class LineItem(item: Item, quantity: BigDecimal, value: Option[BigDecimal] = None, 
  discount: Option[BigDecimal] = None)

and an Order has a collection of LineItems. Let's define a lens that updates the value within a LineItem ..

val lineItemValue = Lens.lensu[LineItem, Option[BigDecimal]] (
  (l, v) => l.copy(value = v),
  _.value
)

and then compose it with a partial lens that helps us update a specific item within a vector. Note how we convert our lineItemValue lens to a partial lens using the unary operator ~ ..

// a lens that updates the value in a specific LineItem within an Order 
def lineItemValues(i: Int) = ~lineItemValue compose vectorNthPLens(i)

Now we can use this composite lens to functionally update the value field of each of the items in a Vector of LineItems using some specific business rules ..

// lis is the Vector[LineItem] of the Order; unitPrice (elided) looks up
// the price of an Item as an Option[BigDecimal]
(0 to lis.length - 1).foldLeft(lis) {(s, i) => 
  val li = lis(i)
  lineItemValues(i).set(s, unitPrice(li.item).map(_ * li.quantity)).getOrElse(s)
}

In this post we saw how we can handle aggregates functionally and without any in-place mutation. This keeps the model pure and helps us implement domain models that have sane behavior even in concurrent settings without any explicit use of locks and semaphores. In the next post we will take a look at how we can use such compositional structures to make the domain model speak the ubiquitous language of the domain - another pattern recommended by Eric Evans in domain driven design.

Monday, March 31, 2014

Functional Patterns in Domain Modeling - The Specification Pattern

When you model a domain, you model its entities and behaviors. As Eric Evans mentions in his book Domain Driven Design, the focus is on the domain itself. The model that you design and implement must speak the ubiquitous language so that the essence of the domain is not lost in the myriads of incidental complexities that your implementation enforces. While being expressive the model needs to be extensible too. And when we talk about extensibility, one related attribute is compositionality.

Functions compose more naturally than objects, and in this post I will use functional programming idioms to implement one of the patterns that form the core of domain driven design - the Specification pattern, whose most common use case is to implement domain validation. Eric's book on DDD says regarding the Specification pattern ..
It has multiple uses, but one that conveys the most basic concept is that a SPECIFICATION can test any object to see if it satisfies the specified criteria.
A specification is defined as a predicate, whereby business rules can be combined by chaining them together using boolean logic. So there's a concept of composition, and we can talk about a Composite Specification when we talk about this pattern. Much of the DDD literature implements this using the Composite design pattern, commonly realized with class hierarchies and object composition. In this post we will use function composition instead.
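The predicate view itself fits in a couple of lines - composite specifications are just boolean combinations of smaller ones (an illustrative sketch):

// a specification as a predicate, with combinators for boolean chaining
type Spec[A] = A => Boolean

def and[A](s1: Spec[A], s2: Spec[A]): Spec[A] = a => s1(a) && s2(a)
def or[A](s1: Spec[A], s2: Spec[A]): Spec[A] = a => s1(a) || s2(a)

// e.g. two small specs on a quantity, chained into a composite one
val positive: Spec[Int] = _ > 0
val withinStock: Spec[Int] = _ <= 100
val validQuantity: Spec[Int] = and(positive, withinStock)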

Specification - Where ?

One of the very common confusions that we have when we design a model is where to keep the validation code of an aggregate root or any entity, for that matter.
  • Should we have the validation as part of the entity ? No, it makes the entity bloated. Also validations may vary based on some context, while the core of the entity remains the same.
  • Should we have validations as part of the interface ? Maybe we consume JSON and build entities out of it. Indeed some validations can belong to the interface, and don't hesitate to put them there.
  • But the most interesting validations are those that belong to the domain layer. They are business validations (or specifications), which Eric Evans defines as something that "states a constraint on the state of another object". They are business rules which the entity needs to honor in order to proceed to the next stage of processing.
We consider the following simple example. We take an Order entity and the model identifies the following domain "specifications" that a new Order must satisfy before being thrown into the processing pipeline:

  1. it must be a valid order obeying the constraints that the domain requires e.g. valid date, valid no of line items etc.
  2. it must be approved by a valid approving authority - only then it proceeds to the next stage of the pipeline
  3. customer status check must be passed to ensure that the customer is not black-listed
  4. the line items from the order must be checked against inventory to see if the order can be fulfilled
These are the separate steps that are to be done in sequence by the order processing pipeline as pre-order checks before the actual order is ready for fulfilment. A failure in any of them takes the order out of the pipeline and the process stops there. So the model that we design needs to honor the sequence as well as check all the constraints that are part of every step.

An important point to note here is that none of the above steps mutate the order - so every specification gets a copy of the original Order object as input, on which it checks some domain rules and determines if it's suitable to be passed to the next step of the pipeline.

Jumping on to the implementation ..

Let's take down some implementation notes from what we learnt above ..

  • The Order can be an immutable entity at least for this sequence of operations
  • Every specification needs an Order - can we pull some trick out of our hat that avoids cluttering the API by having to pass an Order instance to every specification in the sequence ?
  • Since we plan to use functional programming principles, how can we model the above sequence as an expression so that our final result still remains composable with the next process of order fulfilment (which we will discuss in a future post) ?
  • All these functions seem to have similar signatures - we need to make them compose with each other
Before I present more of any explanation or theory, here are the basic building blocks which will implement the notes that we took after talking to the domain experts ..

type ValidationStatus[S] = \/[String, S]

type ReaderTStatus[A, S] = ReaderT[ValidationStatus, A, S]

object ReaderTStatus extends KleisliInstances with KleisliFunctions {
  def apply[A, S](f: A => ValidationStatus[S]): ReaderTStatus[A, S] = kleisli(f)
}

ValidationStatus defines the type that we will return from each of the functions. It's either some status S or an error string that explains what went wrong. It's actually an Either type (right biased) as implemented in scalaz.

One of the things which we thought would be cool is to avoid repeating the Order parameter for every method when we invoke the sequence. And one of the idiomatic ways of doing it is to use the Reader monad. But here we already have a monad - \/ is a monad. So we need to stack them together using a monad transformer. ReaderT does this job and ReaderTStatus defines the type that somehow makes our life easier by combining the two of them.

The next step is an implementation of ReaderTStatus, which we do in terms of another abstraction called Kleisli. We will use the scalaz library for this, which implements ReaderT in terms of Kleisli. I will not go into the details of this implementation - in case you are curious, refer to this excellent piece by Eugene.

So, what does a sample specification look like ?

Before going into that, here are some basic abstractions, grossly simplified only for illustration purposes ..

// the base abstraction
sealed trait Item {
  def itemCode: String
}

// sample implementations
case class ItemA(itemCode: String, desc: Option[String], 
  minPurchaseUnit: Int) extends Item
case class ItemB(itemCode: String, desc: Option[String], 
  nutritionInfo: String) extends Item

case class LineItem(item: Item, quantity: Int)

case class Customer(custId: String, name: String, category: Int)

// a skeleton order
case class Order(orderNo: String, orderDate: Date, customer: Customer, 
  lineItems: List[LineItem])

And here's a specification that checks some of the constraints on the Order object ..

// a basic validation
private def validate = ReaderTStatus[Order, Boolean] {order =>
  if (order.lineItems isEmpty) left(s"Validation failed for order $order") 
  else right(true)
}

It's just for illustration and does not contain much domain rules. The important part is how we use the above defined types to implement the function. Order is not an explicit argument to the function - it's curried. The function returns a ReaderTStatus, which itself is a monad and hence allows us to sequence in the pipeline with other specifications. So we get the requirement of sequencing without breaking out of the expression oriented programming style.

Here are a few other specifications based on the domain knowledge that we have gathered ..

private def approve = ReaderTStatus[Order, Boolean] {order =>
  right(true)
}

private def checkCustomerStatus(customer: Customer) = ReaderTStatus[Order, Boolean] {order =>
  right(true)
}

private def checkInventory = ReaderTStatus[Order, Boolean] {order =>
  right(true)
}

Wiring them together

But how do we wire these pieces together so that we have the sequence of operations that the domain mandates and yet all goodness of compositionality in our model ? It's actually quite easy since we have already done the hard work of defining the appropriate types that compose ..

Here's the isReadyForFulfilment method that defines the composite specification and invokes all the individual specifications in sequence using a for-comprehension, which, as you all know, does the monadic bind in Scala and gives us the final expression that needs to be evaluated for the Order supplied.

def isReadyForFulfilment(order: Order) = {
  val s = for {
    _ <- validate
    _ <- approve
    _ <- checkCustomerStatus(order.customer)
    c <- checkInventory
  } yield c
  s(order)
}

So we have the monadic bind implementing the sequencing without breaking the compositionality of the abstractions. In the next instalment we will see how this can be composed with the downstream processing of the order that will not only read stuff from the entity but mutate it too, of course in a functional way.

Thursday, January 23, 2014

A Sketch as the Query Model of an EventSourced System

In my last post I discussed the count-min sketch data structure that can be used to process data streams using sub-linear space. In this post I will continue with some of my thoughts on how count-min sketches can be used in a typical event sourced application architecture. An event sourcing system typically has a query model which provides a read only view of the system, obtained by folding all the events into a coherent state. I have seen applications where the query model is typically rendered from a relational database. And the queries can take a lot of time to process and display to the user if the data volume is huge. And when we are talking about Big Data, this is not a very uncommon use case.

Instead of rendering the query from the RDBMS, quite a few types of them can be rendered from a count-min sketch using sub-linear space. Consider the use case where you need to report the highest occurring user-ids in a Twitter stream. The stream is continuous, huge and non ending, and you get to see each item once. As each item arrives, you parse out the user-id occurring in it and update the sketch. So each entry of the sketch contains the frequency of the user-ids that hash to that slot. And we can take the minimum of all the slots to which a user-id hashes in order to get the frequency of that user-id. The details of how this works can be found in my last post.

Consider the case where we need to find the heavy-hitters - those user-ids whose frequency exceeds a pre-determined threshold. For that, in addition to the sketch we can also maintain a data structure like a heap or a tree where we keep the top-k heavy hitters. When a user-id appears, we update the sketch, get its estimated frequency from the sketch and, if it exceeds the threshold, also record it in that data structure, as sketched below. So at any point in time we can probe this accessory data structure to get the current heavy-hitters. The Spark examples contain a sample implementation of this heavy hitters query from a Twitter stream using the CountMinSketchMonoid of Algebird.
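A hedged sketch of that accessory structure (update and estimate stand in for the sketch operations described above):

import scala.collection.mutable

// track user-ids whose estimated frequency crosses the threshold;
// update/estimate are the count-min operations from the last post
def heavyHitters(userIds: Iterator[String], threshold: Long)
                (update: String => Unit, estimate: String => Long): Map[String, Long] = {
  val hitters = mutable.Map.empty[String, Long]
  userIds.foreach { id =>
    update(id)                // fold the occurrence into the sketch
    val freq = estimate(id)   // approximate frequency seen so far
    if (freq >= threshold) hitters(id) = freq
  }
  hitters.toMap
}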

Can this be a viable approach to implementing the query model in an event sourced system, if the use case fits the approximation query approach ? It can be faster, relatively cheap in space, and can prove responsive enough for display in dashboards in the form of charts or graphs.

Sunday, January 19, 2014

Count-Min Sketch - A Data Structure for Stream Mining Applications

In today's age of Big Data, streaming is one of the techniques for low latency computing. Besides the batch processing infrastructure of the map/reduce paradigm, we are seeing a plethora of ways in which streaming data is processed in near real time to cater to some specific kinds of applications. Libraries like Storm, Samza and Spark belong to this genre and are starting to get their share of the user base in the industry today.

This post is not about Spark, Storm or Samza. It's about a data structure which is one of the relatively new entrants in the domain of stream processing, which is simple to implement but has already proved to be of immense use in serving a certain class of queries over huge streams of data. I have been doing some reading about the application of such structures and thought of sharing them with the readers of my blog.

Using Sublinear Space


Besides data processing, these tools also support data mining over streams, which includes serving specialized queries over data using limited space and time. Ok, so if we store all data as they come, we can always serve queries with O(n) space. But since we are talking about huge data streams, it may not even be possible to run algorithms on the full set of data - it simply will be too expensive. Even if we have the entire set of data in a data warehouse, processing the entire data set may take time and consume resources that we cannot afford, considering the fee charged under the evolving models of using platform-as-a-service within cloud based infrastructure. Also, since these algorithms will be working on data streams, there's a high likelihood that they will get to see the data only in a single pass. The bottom line is that we need algorithms that work in sub-linear space.

Working in sublinear space implies that we don't get to store or see all the data - hence an obvious conclusion is that we also can't deliver accurate answers to some queries. We rely on some approximation techniques and deliver accuracy with a reasonably high probability bound. We don't store all the data, instead we store a lossy compressed representation of it and answer user queries from this subset instead of the entire set.

One widely used technique for storing a subset of data is through Random Sampling, where the data stored is selected through some stochastic mechanism. There are various ways to determine which data we select for storing and how we build the estimator for querying the data. There are pros and cons with this approach, but it's one of the simplest ways to do approximation based queries on streaming data.
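A classic instance of such a stochastic mechanism is reservoir sampling - here's a sketch of Algorithm R, which keeps a uniform fixed-size sample of a stream seen only once:

import scala.collection.mutable.ArrayBuffer
import scala.util.Random

// keep a uniform random sample of size k from a single-pass stream
def reservoirSample[A](stream: Iterator[A], k: Int): Vector[A] = {
  val reservoir = ArrayBuffer.empty[A]
  stream.zipWithIndex.foreach { case (a, i) =>
    if (i < k) reservoir += a       // fill the reservoir first
    else {
      val j = Random.nextInt(i + 1) // uniform index in [0, i]
      if (j < k) reservoir(j) = a   // replace with probability k/(i+1)
    }
  }
  reservoir.toVector
}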

There are a few other options like Histograms and Wavelet based synopses. But one of the most interesting data structures developed in recent times is the Sketch, which uses summary based techniques for delivering approximation queries, gets around the typical problems that sampling techniques have, and is highly parallelizable in practice.

An important class of sketch is one where the sketch vector (which is the summary information) is a linear transform of the input vector. So if we model the input as a vector we can multiply it by a sketch matrix to obtain the sketch vector that contains the synopses data that we can use for serving approximation queries. Here's a diagrammatic representation of the sketch as a linear transform of the input data.



Count-Min Sketch


One of the most popular forms of the sketch data structure is the Count-Min Sketch introduced by Cormode and Muthukrishnan in 2003. The idea is quite simple and the data structure is based on probabilistic algorithms to serve various types of queries on streaming data. The data structure is parameterized by two factors - ε and δ, where the error in answering the query is within a factor of ε with probability at least 1 - δ. So you can tune these parameters based on the space that you can afford and accordingly trade off the accuracy of the results that the data structure can serve you.

Consider a situation where you have a stream of data (typically modeled as a vector), like updates to stock quotes in a financial processing system, arriving continuously, which you need to process and report statistical queries on in real time.

  • We model the data stream as a vector a[1 .. n] and the updates received at time t are of the form (i_t, c_t), which means that the stock quote for a[i_t] has been incremented by c_t. There are various models in which this update can appear, as discussed in Data Streams: Algorithms and Applications by Muthukrishnan, which include negative updates as well, and the data structure can be tuned to handle each of these variants.


  • The core of the data structure is a 2-dimensional array count with d rows and w columns (one row per hash function) that stores the synopses of the original vector and is used to report results of queries using approximation techniques. Hence the total space requirement of the data structure is (w * d). We can bound each of w and d in terms of our parameters ε and δ and control the level of accuracy that we want our data structure to serve.


  • The data structure uses hashing techniques to process these updates and report queries using sublinear space. So assume we have d pairwise-independent hash functions {h_1 .. h_d}, each of which hashes our inputs to the range (1 .. w). Just for the more curious mind, pairwise independence is a method to construct a universal hash family, a technique that ensures a lower number of collisions in the hash implementation.


  • When an update (i_t, c_t) comes for the stream, we hash the index i_t through each of the hash functions h_1 .. h_d and increment the corresponding entry in each of the d rows.



  • for i = 1 to d
      v = h_i(i_t)        // v is between 1 and w
      count[i, v] += c_t  // increment the cell count by c_t
    end

At any point in time, if we want to know the approximate value of an element a[i] of the vector a, we can get it by computing the minimum of the values in the d cells of count to which i hashes. This can be proved formally. But the general intuition is that since we are using hash functions, there's always a possibility of multiple i's colliding on to the same cell and contributing additively to its value. Hence the minimum among all the hashed values is the closest candidate to the correct result for the query.
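Here's a bare-bones sketch of the structure in Scala (my illustration - a seeded MurmurHash3 stands in for the pairwise-independent hash family, and the usual sizing is w = ceil(e/ε) and d = ceil(ln(1/δ))):

import scala.util.hashing.MurmurHash3

// a minimal count-min sketch over string items
class CountMinSketch(w: Int, d: Int) {
  private val count = Array.ofDim[Long](d, w)  // d rows, w columns

  // one hash function per row, via per-row seeds
  private def bucket(row: Int, item: String): Int =
    ((MurmurHash3.stringHash(item, row) % w) + w) % w

  // process an update (item, c): bump one cell in every row
  def update(item: String, c: Long = 1L): Unit =
    (0 until d).foreach { i => count(i)(bucket(i, item)) += c }

  // point query: the minimum across rows is the tightest over-estimate
  def estimate(item: String): Long =
    (0 until d).map(i => count(i)(bucket(i, item))).min
}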


The figure above shows the processing of the updates in a Count-Min sketch. This is typically called the Point Query, which returns an approximation of a[i]. Similarly we can use a Count-Min sketch to serve approximation queries for ranges, which is typically a summation over multiple point queries. Another interesting application is to serve inner product queries, where the data structure is used to query the inner product of 2 vectors, a typical application of this being the estimation of join sizes in relational query processing. The paper Statistical Analysis of Sketch Estimators gives all the details of how to use sketching as a technique for this.

Count-Min sketches have some great properties which make them a very useful data structure when processing distributed streams. They have associativity properties and can be modelled as monoids, and hence are terribly performant in a distributed environment where you can parallelize sketch operations. In a future post I will discuss some implementation techniques and how we can use count-min sketches to serve some useful applications over data streams. Meanwhile Twitter's algebird and ClearSpring's stream-lib offer implementations of Count-Min sketch and various other data structures applicable for stream mining applications.



Wednesday, July 24, 2013

Scala Redis client goes non blocking : uses Akka IO

scala-redis is getting a new non blocking version based on a kernel implemented with the new Akka IO. The result is that all APIs are non blocking and return a Future. We are trying to keep the API as close to the blocking version as possible. But bits rot and some of the technical debt needs to be repaid. We have cleaned up some of the return types which had unnecessary Option[] wrappers, made some improvements and standardizations on the API type signatures, and are also working on making the byte array manipulation faster using akka.util.ByteString at the implementation level. We also have plans of using the Akka IO pipeline for abstracting the various stages of handling a Redis protocol request and response.

As of today we have quite a bit ready for review by the users. The APIs may change a bit here and there, but the core APIs are up there. There are a few areas which have not yet been implemented, like PubSub or clustering. Stay tuned for more updates on this blog .. Here are a few code snippets that demonstrate the usage of the APIs ..

Non blocking get/set

@volatile var callbackExecuted = false

val ks = (1 to 10).map(i => s"client_key_$i")
val kvs = ks.zip(1 to 10)

val sets: Seq[Future[Boolean]] = kvs map {
  case (k, v) => client.set(k, v)
}

val setResult = Future.sequence(sets) map { r: Seq[Boolean] =>
  callbackExecuted = true
  r
}

callbackExecuted should be (false)
setResult.futureValue should contain only (true)
callbackExecuted should be (true)

callbackExecuted = false
val gets: Seq[Future[Option[Long]]] = ks.map { k => client.get[Long](k) }
val getResult = Future.sequence(gets).map { rs =>
  callbackExecuted = true
  rs.flatten.sum
}

callbackExecuted should be (false)
getResult.futureValue should equal (55)
callbackExecuted should be (true)
    

Composing through sequential combinators

val key = "client_key_seq"
val values = (1 to 100).toList
val pushResult = client.lpush(key, 0, values:_*)
val getResult = client.lrange[Long](key, 0, -1)

val res = for {
  p <- pushResult.mapTo[Long]
  if p > 0
  r <- getResult.mapTo[List[Long]]
} yield (p, r)

val (count, list) = res.futureValue
count should equal (101)
list.reverse should equal (0 to 100)
    

Error handling using Promise Failure

val key = "client_err"
val v = client.set(key, "value200")
v.futureValue should be (true)

val x = client.lpush(key, 1200)
val thrown = evaluating { x.futureValue } should produce [TestFailedException]
thrown.getCause.getMessage should equal ("ERR Operation against a key holding the wrong kind of value")

Feedback welcome, especially on the APIs and their usage. All the code is on GitHub with all tests in the test folder. Jisoo Park (@guersam) has been doing an awesome job contributing a lot to all the goodness that's there in the repo. Thanks a lot for all the help ..

Monday, July 22, 2013

The Realm of Racket is an enjoyable read

There are many ways to write a programming language book. You can start introducing the syntax and semantics of the language in a naturally comprehensible sequence of complexity and usage. Or you can choose to introduce the various features of the language with real world examples using the standard library that the language offers. IIRC Accelerated C++ by Andrew Koenig and Barbara Moo takes this route. I really loved this approach and enjoyed reading the book.

Of course Matthias Felleisen is known for a third way of teaching a language - the fun way. The Little Schemer and The Seasoned Schemer have introduced a novel way of learning a language. The Realm of Racket follows a similar style of teaching the latest descendant of Lisp, one game at a time. The implementation of every game introduces the idioms and language features with increasing degrees of complexity. There's a nice progression which helps in understanding the complex features of the language, building upon the already acquired knowledge of the simpler ones from earlier chapters.

The book begins with a history of the Racket programming language and how it evolved as a descendant of Scheme, how it makes programming fun and how it can be used successfully as an introductory language for students aspiring to learn programming. Then it starts with Getting Started with Dr Racket for the impatient programmer and explains the IDE that will serve as your companion for the entire duration of your playing around with the book.

Every chapter introduces some of the new language features and either develops a new game or builds on some improvement of a game developed earlier. This not only demonstrates a real world usage of the syntax and semantics of the language but makes the programmer aware of how the various features interact as a whole to build complex abstractions out of simpler ones. The book also takes every pain to defer the complexity of the features to the right point so that the reader is not burdened upfront. e.g. Lambdas are introduced only when the authors have introduced all the basics of programming with functions and recursion. Mutants are introduced only after teaching the virtues of immutability. For loops and comprehensions appear only when the book has introduced all the list processing functions like folds, maps and filters. And then the book goes into great depth explaining why the language has so many variants of the for loop, like for/list, for/fold, for*, for/first, for/last etc. In this entire discussion of list processing, for loops etc., I would love to see a more detailed discussion of sequence in the book. Sequence abstracts a large number of data types, but, much like Clojure, it introduces a new way of API design - a single sequence to rule them all. API designers would surely like to have more of this sauce as part of their repertoire. Racket's uniform way of handling sequence is definitely a potent model of abstraction as compared to Scheme or other versions of Lisp.

The games developed progress in complexity and we can see the powers of the language being put to great use when the authors introduce lazy evaluation and memoized computations and use them to improve the Dice of Doom. Then the authors introduce distributed game development, which is the final frontier that the book covers. It's truly an enjoyable ride through the entire series.

The concluding chapter talks about some of the advanced features like classes, objects and meta-programming. Any Lisp book would be incomplete without a discussion of macros and language development. But I think the authors have done well to defer these features till the end. Considering that this is a book for beginning to learn the language, this sounds like a sane measure.

However, as a programmer experienced in other languages and wanting to take a look at Racket, I would have loved to see some coverage on testing. Introducing a bit of testing practices, maybe a unit testing library, would have made the book more complete.

The style of writing of this book has an underlying tone of humor and simplicity, which makes it a thoroughly enjoyable read. The use of illustrations and comics takes away the monotony of learning the prosaics of a language. And the fact that Racket is a simple enough language makes this combination with pictures very refreshing.

On the whole, as a book introducing a language, The Realm of Racket is a fun read. I enjoyed reading it a lot and recommend it without reservations for your bookshelf.



Monday, June 03, 2013

Endo is the new fluent API

I tweeted this over the weekend .. My last two blog posts have been about endomorphisms and how they combine with other functional structures to help you write expressive and composable code. In A DSL with an Endo - monoids for free, endos play with the Writer monad and implement a DSL for a sequence of activities through monoidal composition. And in An exercise in Refactoring - Playing around with Monoids and Endomorphisms, I discuss a refactoring exercise that exploits the monoid of an endo to make composition easier. Endomorphisms help you lift your computation into a data type that gives you an instance of a monoid. And the mappend operation of the monoid is function composition. Hence once you have the Endo for your type defined, you get a nice declarative syntax for the operations that you want to compose, resulting in a fluent API.

Just a quick recap .. endomorphisms are functions that map a type on to itself and offer composition over monoids. Given an endomorphism we can define an implicit monoid instance ..
implicit def endoInstance[A]: Monoid[Endo[A]] = new Monoid[Endo[A]] {
  def append(f1: Endo[A], f2: => Endo[A]) = f1 compose f2
  def zero = Endo.idEndo
}
    
I am not going into the details of this, which I discussed at length in my earlier posts. In this article I will sum up with yet another use case for making fluent APIs using the monoid instance of an Endo. Consider an example from the domain of securities trading, where a security trade goes through a sequence of transformations in its lifecycle through the trading process .. Here's a typical Trade model (very very trivialified for demonstration) ..
sealed trait Instrument
case class Security(isin: String, name: String) extends Instrument

case class Trade(refNo: String, tradeDate: Date, valueDate: Option[Date] = None, 
  ins: Instrument, principal: BigDecimal, net: Option[BigDecimal] = None, 
  status: TradeStatus = CREATED)
    
Modeling a typical lifecycle of a trade is complex. But for illustration, let's consider these simple steps which need to be executed on a trade in sequence ..
1. Validate the trade
2. Assign a value date to the trade, which will ideally be the settlement date
3. Enrich the trade with tax/fees and the net trade value
4. Journalize the trade in books
Each of the functions takes a Trade and returns a copy of the Trade with some attributes modified. A naive way of doing that would be as follows ..
def validate(t: Trade): Trade = //..

def addValueDate(t: Trade): Trade = //..

def enrich(t: Trade): Trade = //..

def journalize(t: Trade): Trade = //..
    
and invoke these methods in sequence while modeling the lifecycle. Instead we try to make it more composable and lift the function Trade => Trade within the Endo ..

type TradeLifecycle = Endo[Trade]
    
and here's the implementation ..

// validate the trade: business logic elided
def validate: TradeLifecycle = 
  ((t: Trade) => t.copy(status = VALIDATED)).endo

// add value date to the trade (for settlement)
def addValueDate: TradeLifecycle = 
  ((t: Trade) => t.copy(valueDate = Some(t.tradeDate), status = VALUE_DATE_ADDED)).endo

// enrich the trade: add taxes and compute net value: business logic elided
def enrich: TradeLifecycle = 
  ((t: Trade) => t.copy(net = Some(t.principal + 100), status = ENRICHED)).endo

// journalize the trade into book: business logic elided
def journalize: TradeLifecycle = 
  ((t: Trade) => t.copy(status = FINALIZED)).endo
    
Now Endo has an instance of Monoid defined by scalaz, and the mappend of Endo is function composition .. Hence here's our lifecycle model using the holy monoid of endo ..

def doTrade(t: Trade) =
  (journalize |+| enrich |+| addValueDate |+| validate).apply(t)
    
It's almost the specification that we listed above in the numbered bullets. Note the inside-out sequence that's required for the composition to take place in the proper order.

Why not plain old composition ?

A valid question. The reason - abstraction. Abstracting the composition within types helps you compose the result with other types, as we saw in my earlier blog posts. In one of them we built larger abstractions using the Writer monad with Endo, and in the other we used the mzero of the monoid as a fallback during composition, thereby avoiding any special case branch statements.

One size doesn't fit all ..

The endo and its monoid compose beautifully and give us a domain friendly syntax that expresses the business functionality in a nice succinct way. But it's not a pattern which you can apply everywhere you need to compose a bunch of domain behaviors. Like every idiom, it has its shortcomings and you need different sets of solutions in your repertoire. For example, the above solution doesn't handle any of the domain exceptions - what if the validation fails ? With the above strategy the only way you can handle this situation is to throw exceptions from the validate function. But exceptions are side-effects, and in functional programming there are cleaner ways to tame the evil. And for that you need different patterns in practice. More on that in subsequent posts ..

Monday, March 04, 2013

An exercise in Refactoring - Playing around with Monoids and Endomorphisms

A language is powerful when it offers sufficient building blocks for library design and adequate syntactic sugar that helps build expressive syntax on top of the lower level APIs that the library publishes. In this post I will discuss an exercise in refactoring while trying to raise the level of abstraction of a modeling problem.

Consider the following modeling problem that I recently discussed in one of the Scala training sessions. It's simple but offers ample opportunities to explore how we can raise the level of abstraction in designing the solution model. We will start with an imperative solution and then incrementally work on raising the level of abstraction to make the final code functional and more composable.

A Problem of Transformation ..

The problem is to compute the salary of a person through composition of multiple salary components calculated based on some percentage of other components. It's a problem of applying repeated transformations to a pipeline of successive computations - hence it can be generalized as a case study in function composition. But with some constraints, as we will see shortly.

Let's say that the salary of a person is computed as per the following algorithm :

1. basic = the basic component of his salary
2. allowances = 20% of basic
3. bonus = 10% of (basic + allowances)
4. tax = 30% of (basic + allowances + bonus)
5. surcharge = 10% of (basic + allowances + bonus - tax)

Note that the computation process starts with a basic salary and computes successive components, each taking its input from the previous computation of the pipeline. But there's a catch, which makes the problem a bit more interesting from the modeling perspective. Not all components of the salary are mandatory - of course the basic is mandatory. Hence the final components of the salary will be determined by a configuration object which can be like the following ..

// an item = true means the component should be activated in the computation
case class SalaryConfig(
  surcharge: Boolean    = true, 
  tax: Boolean          = true, 
  bonus: Boolean        = true,
  allowance: Boolean    = true 
)
    

So when we compute the salary we need to take care of this configuration object and activate the relevant components for calculation.

A Function defines a Transformation ..

Let's first translate the above components into separate Scala functions ..

// B = basic + 20%
val plusAllowance = (b: Double) => b * 1.2

// C = B + 10%
val plusBonus = (b: Double) => b * 1.1

// D = C - 30%
val plusTax = (b: Double) => 0.7 * b

// E = D - 10%
val plusSurcharge = (b: Double) => 0.9 * b
    

Note that every function computes the salary up to its own stage, which is then fed to the computation of the next component. So the final salary is really the chained composition of all of these functions in a specific order, as determined by the algorithm stated above.

But we need to selectively activate and deactivate the components depending on the SalaryConfig passed. Here's the version that comes straight from the imperative mindset ..

The Imperative Solution ..

// no abstraction, imperative, using var
def computeSalary(sc: SalaryConfig, basic: Double) = {
  var salary = basic
  if (sc.allowance) salary = plusAllowance(salary)
  if (sc.bonus) salary = plusBonus(salary)
  if (sc.tax) salary = plusTax(salary)
  if (sc.surcharge) salary = plusSurcharge(salary)
  salary
}
    

    Straight, imperative, mutating (using var) and finally rejected by our functional mindset.

    Thinking in terms of Expressions and Composition ..

    Think in terms of expressions (not statements) that compose. We have functions defined above that we could compose together and get the result. But, but .. what about the config, which we somehow need to incorporate as part of our composable expressions ?

    So direct composition of functions won't work because we need some conditional support to take care of the config. How else can we have a chain of functions to compose ?

    Note that all of the above functions for computing the components are of type (Double => Double). Hmm .. this means they are endomorphisms, which are functions that have the same argument and return type - "endo" means "inside" and "morphism" means "transformation". So an endomorphism maps a type onto itself. Scalaz defines it as ..

    sealed trait Endo[A] {
      /** The captured function. */
      def run: A => A
      //..
    }
    

    But the interesting part is that there's a monoid instance for Endo, and the associative append operation of that monoid is function composition. That sounds like a mouthful .. so let's dissect what we just said ..

    As you all know, a monoid is defined as "a semigroup with an identity", i.e.

    trait Monoid[A] {
      def append(m1: A, m2: A): A
      def zero: A
    }
    

    and append has to be associative.

    Endo forms a monoid where zero is the identity endomorphism and append composes the underlying functions. Isn't that what we need ? Of course we need to figure out how to sneak in those conditionals ..

    implicit def endoInstance[A]: Monoid[Endo[A]] = new Monoid[Endo[A]] {
      def append(f1: Endo[A], f2: => Endo[A]) = f1 compose f2
      def zero = Endo.idEndo
    }
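
    Here's a quick sketch of that monoid in action (assuming the usual scalaz imports, and the .endo syntax - which we use below - that wraps a function into an Endo) ..

    val e = plusBonus.endo |+| plusAllowance.endo   // append = compose
    e run 100.0                       // plusBonus(plusAllowance(100.0)) = 132.0
    mzero[Endo[Double]] run 100.0     // zero is the identity endomorphism: 100.0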
    

    But we need to append the Endo only if the corresponding bit in SalaryConfig is true. Scala allows extending existing types with custom methods (via implicit conversions), and scalaz gives us the following extension method on Boolean ..

    /**
     * Returns the given argument if this is `true`, otherwise, the zero element 
     * for the type of the given argument.
     */
    final def ??[A](a: => A)(implicit z: Monoid[A]): A = b.valueOrZero(self)(a)
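
    So with the Endo monoid in scope, ?? behaves like this (a quick sketch) ..

    true ?? plusBonus.endo    // the endo itself
    false ?? plusBonus.endo   // the zero, i.e. Endo.idEndo - an identity pass-through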
    

    That's exactly what we need to have the following implementation of a functional computeSalary that uses monoids on Endomorphisms to compose our functions of computing the salary components ..

    // compose using mappend of endomorphism
    def computeSalary(sc: SalaryConfig, basic: Double) = {
      val e = 
        sc.surcharge ?? plusSurcharge.endo     |+|
        sc.tax ?? plusTax.endo                 |+|
        sc.bonus ?? plusBonus.endo             |+| 
        sc.allowance ?? plusAllowance.endo 
      e run basic
    }
    

    More Generalization - Abstracting over Types ..

    We can generalize the solution further and abstract over the type that represents the collection of component functions. In the above implementation we are picking each function individually and doing an append on the monoid. Instead we can abstract over a type constructor that allows us to fold the append operation over a collection of elements.

    Foldable[] is an abstraction which allows its elements to be folded over. Scalaz defines instances of Foldable[] typeclass for List, Vector etc. so we don't care about the underlying type as long as it has an instance of Foldable[]. And Foldable[] has a method foldMap that makes a Monoid out of every element of the Foldable[] using a supplied function and then folds over the structure using the append function of the Monoid.

    trait Foldable[F[_]]  { self =>
      def foldMap[A,B](fa: F[A])(f: A => B)(implicit F: Monoid[B]): B
      //..
    }
    

    In our example, f: A => B is the endo function and the append is the append of Endo which composes all the functions that form the Foldable[] structure. Here's the version using foldMap ..

    def computeSalary(sc: SalaryConfig, basic: Double) = {
      val components = 
        List((sc.surcharge, plusSurcharge), 
             (sc.tax, plusTax), 
             (sc.bonus, plusBonus),
             (sc.allowance, plusAllowance)
        )
      val e = components.foldMap(e => e._1 ?? e._2.endo)
      e run basic
    }
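
    And a quick sanity check of the final version (a sketch - note SalaryConfig() activates all components by default) ..

    computeSalary(SalaryConfig(), 1000)                   // 1000 * 1.2 * 1.1 * 0.7 * 0.9 = 831.6
    computeSalary(SalaryConfig(surcharge = false), 1000)  // 1000 * 1.2 * 1.1 * 0.7 = 924.0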
    

    This is an exercise which discusses how to apply transformations on values by modeling them as endomorphisms. Instead of thinking in terms of generic composition of functions, we exploited the types more and discovered that our transformations are actually endomorphisms. We then applied the properties of endomorphisms to model function composition as monoidal appends. The moment we modeled at a higher level of abstraction (endomorphisms rather than native functions), we could use the zero element of the monoid as the composable null object in the sequence of function transformations.

    In case you are interested I have the whole working example in my github repo.

    Friday, February 15, 2013

    A DSL with an Endo - monoids for free

    When we design a domain model, one of the issues that we care about is abstraction of implementation from the user level API. Besides making the published contract simple, this also decouples the implementation and allows post facto optimization to be done without any impact on the user level API.

    Consider a class like the following ..

    // a sample task in a project
    case class Task(name: String) 
    
    // a project with a list of tasks & dependencies amongst the
    // various tasks
    case class Project(name: String, 
                       startDate: java.util.Date, 
                       endDate: Option[java.util.Date] = None, 
                       tasks: List[Task] = List(), 
                       deps: List[(Task, Task)] = List())
    

    We can always use the algebraic data type definition above to add tasks and dependencies to a project. Besides being cumbersome as a user level API, it also is a way to program too close to the implementation. The user is coupled to the fact that we use a List to store tasks, making it difficult to use any alternate implementation in the future. We can offer a Builder like OO interface with fluent APIs, but that also adds to the verbosity of implementation, makes builders mutable and is generally more difficult to compose with other generic functional abstractions.

    Ideally we should be having a DSL that lets users create projects and add tasks and dependencies to them.

    In this post I will discuss a few functional abstractions that stay hidden behind the user APIs, and yet provide the compositional power to wire up the DSL. This post was inspired by a similar DSL design using Endo and Writers in Haskell.

    Let's address the issues one by one. We need to accumulate tasks that belong to the project. So we need an abstraction that helps in this accumulation e.g. concatenation in a list, or in a set or in a Map .. etc. One abstraction that comes to mind is a Monoid that gives us an associative binary operation between two objects of a type that form a monoid.

    trait Monoid[T] {
      def append(m1: T, m2: T): T
      def zero: T
    }
    

    A List is a monoid with concatenation as the append. But since we don't want to expose the concrete data structure to the client API, we can talk in terms of monoids.
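
    For instance, a List based instance of the above trait would look like this (a minimal sketch - scalaz of course ships with its own) ..

    implicit def listMonoid[A]: Monoid[List[A]] = new Monoid[List[A]] {
      def append(m1: List[A], m2: List[A]) = m1 ++ m2   // concatenation
      def zero = List.empty[A]
    }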

    The other abstraction that we need is one that offers us the writing operation into the monoid. A Writer monad is an example of this. In fact the combination of a Writer and a Monoid is potent enough to have such a DSL in the making. Tony Morris used this combo to implement a logging functionality ..

    for {
      a <- k withvaluelog ("starting with " + _)
      b <- (a + 7) withlog "adding 7"
      c <- (b * 3).nolog
      d <- c.toString.reverse.toInt withvaluelog ("switcheroo with " + _)
      e <- (d % 2 == 0) withlog "is even?"
    } yield e
    
    We could use this same technique here. But we have a problem - Project is not a monoid and we don't have a definition of zero for a Project that we can use to make it a Monoid. Is there something that would help us get a monoid from Project i.e. allow us to use Project in a monoid ?

    Enter Endo .. an endomorphism which is simply a function that takes an argument of type T and returns the same type. In Scala, we can state this as ..
    sealed trait Endo[A] {
      // The captured function
      def run: A => A
      //..
    }
    
    Scalaz defines Endo[A] and provides a lot of helper functions and syntactic sugars to use endomorphisms. Among its other properties, Endo[A] provides a natural monoid and allows us to use A in a Monoid. In other words, endomorphisms of A form a monoid under composition. In our case we can define an Endo[Project] as a function that takes a Project and returns a Project. We can then use it with a Writer (as above) and implement the accumulation of tasks within a Project.
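
    Here's a minimal sketch of that monoid at work (assuming the scalaz imports, the withTask function defined below, and now as a java.util.Date) ..

    val addA = ((p: Project) => withTask(Task("a"), p)).endo
    val addB = ((p: Project) => withTask(Task("b"), p)).endo
    (addA |+| addB) run Project("test", now)   // a Project with both tasks accumulated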

    Exercise: Implement Tony Morris' logger without side-effects using an Endo.

    Here's how we would like to accumulate tasks in our DSL ..
    for {
      _ <- task("Study Customer Requirements")
      _ <- task("Analyze Use Cases")
      a <- task("Develop code")
    } yield a
    


    Let's define a function that adds a Task to a Project ..
    // add task to a project
    val withTask = (t: Task, p: Project) => p.copy(tasks = t :: p.tasks)
    


    and use this function to define the DSL API task which makes an Endo[Project] and passes it as a Monoid to the Writer monad. In the following snippet, (p: Project) => withTask(t, p) is a mapping from Project => Project, which gets converted to an Endo and then passed to the Writer monad for adding to the task list of the Project.
    def task(n: String): Writer[Endo[Project], Task] = {
      val t = Task(n)
      for {
        _ <- tell(((p: Project) => withTask(t, p)).endo)
      } yield t
    }
    

    The DSL snippet above is a monad comprehension. Let's add some more syntax to the DSL by defining dependencies of a Project. That's also a mapping from one Project state to another and can be realized using a similar function like withTask ..
    // add project dependency
    val withDependency = (t: Task, on: Task, p: Project) => 
      p.copy(deps = (t, on) :: p.deps)
    

    .. and add a function dependsOn to our DSL that allows the user to declare explicit dependencies amongst tasks. But this time instead of making it a standalone function we will make it a method of the class Task. This is only for getting some free syntactic sugar in the DSL. Here's the modified Task ADT ..
    case class Task(name: String) {
      def dependsOn(on: Task): Writer[Endo[Project], Task] = {
        for {
          _ <- tell(((p: Project) => withDependency(this, on, p)).endo)
        } yield this
      }
    }
    
    Finally we define the last API of our DSL that glues together the building of the Project and the addition of tasks and dependencies without directly coupling the user to some of the underlying implementation artifacts.
    def project(name: String, startDate: Date)(e: Writer[Endo[Project], Task]) = {
      val p = Project(name, startDate)
      e.run._1(p)
    }
    
    And we can finally create a Project along with tasks and dependencies using our DSL ..
    project("xenos", now) {
      for {
        a <- task("study customer requirements")
        b <- task("analyze usecases")
        _ <- b dependsOn a
        c <- task("design & code")
        _ <- c dependsOn b
        d <- c dependsOn a
      } yield d
    }
    
    In case you are interested I have the whole working example in my github repo.

    Friday, February 01, 2013

    Modular Abstractions in Scala with Cakes and Path Dependent Types

    I have been trying out various options of implementing the Cake pattern in Scala, considered to be one of the many ways of doing dependency injection without using any additional framework. There are other (more functional) ways of doing the same thing, one of which I blogged about before and also talked about in a NY Scala meetup. But I digress ..

    Call it DI or not, the Cake pattern is one of the helpful techniques to implement modular abstractions in Scala. You weave your abstract components (aka traits), layering on the dependencies and commit to implementations only at the end of the world. I was trying to come up with an implementation that does not use self type annotations. It's not that I think self type annotations are kludgy or anything but I don't find them used elsewhere much besides the Cake pattern. And of course mutually recursive self annotations are a code smell that makes your system anti-modular.

    In the following implementation I use path dependent types, which have become a regular feature in Scala 2.10. Incidentally the capability had long been available under the blessings of an experimental flag, but has come out in public only in 2.10. The consequence is that instead of self type annotations or inheritance I will be configuring my dependencies using composition.

    Let me start with some basic abstractions of a very simple domain model. The core component that I will build is a service that reports the portfolio of clients as a balance. The example has been simplified for illustration purposes - the actual real life model has a much more complex implementation.

    A Portfolio is a collection of Balances. A Balance is a position of an Account in a specific Currency as on a particular Date. Expressing this in simple terms, we have the following traits ..

    // currency
    sealed trait Currency
    case object USD extends Currency
    case object EUR extends Currency
    case object AUD extends Currency
    
    //account
    case class Account(no: String, name: String, openedOn: Date, status: String)
    
    trait BalanceComponent {
      type Balance
    
      // base currency and conversion factors, used by the implementations
      // below (values chosen to match the sample REPL session at the end)
      val baseCurrency: Currency = USD
      val baseCurrencyFactor: Map[Currency, Double] = Map(USD -> 1.0, EUR -> 1.3, AUD -> 1.2)
    
      def balance(amount: Double, currency: Currency, asOf: Date): Balance
      def inBaseCurrency(b: Balance): Balance
    }
    

    The interesting point to note is that the actual type of Balance has been abstracted in BalanceComponent, since various services may choose to use various representations of a Balance. And this is one of the layers of the Cake that we will mix finally ..

    Just a note for the uninitiated, a base currency is typically considered the domestic currency or accounting currency. For accounting purposes, a firm may use the base currency to represent all profits and losses. So we may have some service or component that would like to have the balances reported in base currency.

    trait Portfolio {
      val bal: BalanceComponent
      import bal._
    
      def currentPortfolio(account: Account): List[Balance]
    } 
    

    Portfolio uses the abstract BalanceComponent and does not commit to any specific implementation. And the Balance in the return type of the method currentPortfolio is actually a path dependent type, made to look nice through the object import syntax.
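
    Without the import, the same signature makes the path dependency explicit (a sketch of the desugared form) ..

    trait Portfolio {
      val bal: BalanceComponent
    
      // Balance is path dependent on this particular bal instance
      def currentPortfolio(account: Account): List[bal.Balance]
    }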

    Now let's have some standalone implementations of the above components .. we are still not there yet to mix the cake ..

    // report balance as a TUPLE3 - simple
    trait SimpleBalanceComponent extends BalanceComponent {
      type Balance = (Double, Currency, Date)
    
      override def balance(amount: Double, currency: Currency, asOf: Date) = 
        (amount, currency, asOf)
      override def inBaseCurrency(b: Balance) = 
        ((b._1) * baseCurrencyFactor.get(b._2).get, baseCurrency, b._3)
    }
    
    // report balance as an ADT
    trait CustomBalanceComponent extends BalanceComponent {
      type Balance = BalanceRep
    
      // balance representation
      case class BalanceRep(amount: Double, currency: Currency, asOf: Date)
    
      override def balance(amount: Double, currency: Currency, asOf: Date) = 
        BalanceRep(amount, currency, asOf)
      override def inBaseCurrency(b: Balance) = 
        BalanceRep((b.amount) * baseCurrencyFactor.get(b.currency).get, baseCurrency, b.asOf)
    }
    

    And a sample implementation of ClientPortfolio that adds logic without yet committing to any concrete type for the BalanceComponent.

    trait ClientPortfolio extends Portfolio {
      val bal: BalanceComponent
      import bal._
    
      override def currentPortfolio(account: Account) = {
        //.. actual impl will fetch from database
        List(
          balance(1000, EUR, Calendar.getInstance.getTime),
          balance(1500, AUD, Calendar.getInstance.getTime)
        )
      }
    }
    

    Similar to ClientPortfolio, we can have multiple implementations of Portfolio that report balances in various forms. So our cake has started taking shape. We have the Portfolio component and the BalanceComponent already weaved in without any implementation. Let's add yet another layer to the mix, maybe for fun - a decorator for the Portfolio.

    We add Auditing as a component which can decorate *any* Portfolio component and report the balance of an account in base currency. Note that Auditing needs to abstract implementations of BalanceComponent as well as Portfolio since the idea is to decorate any Portfolio component using any of the underlying BalanceComponent implementations.

    Many cake implementations use self type annotations (or inheritance) for this. I will be using composition and path dependent types.

    trait Auditing extends Portfolio {
      val semantics: Portfolio
      val bal: semantics.bal.type
      import bal._
    
      override def currentPortfolio(account: Account) = {
        semantics.currentPortfolio(account) map inBaseCurrency
      }
    }
    

    Note how the Auditing component uses the same Balance implementation as the underlying decorated Portfolio component, enforced through path dependent types.
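
    To appreciate what the path dependent type buys us, consider a hypothetical variant that declares an unconstrained component - the decoration no longer typechecks since the two Balance types have nothing relating them ..

    trait BrokenAuditing extends Portfolio {
      val semantics: Portfolio
      val bal: BalanceComponent   // not semantics.bal.type
      import bal._
    
      // does not compile: semantics.currentPortfolio returns
      // List[semantics.bal.Balance], while inBaseCurrency expects
      // this.bal.Balance - the two abstract types are unrelated
      // override def currentPortfolio(account: Account) =
      //   semantics.currentPortfolio(account) map inBaseCurrency
    }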

    And we have reached the end of the world without yet committing to any implementation of our components .. But now let's do that and get a concrete service instantiated ..

    object SimpleBalanceComponent extends SimpleBalanceComponent
    object CustomBalanceComponent extends CustomBalanceComponent
    
    object ClientPortfolioAuditService1 extends Auditing {
      val semantics = new ClientPortfolio { val bal = SimpleBalanceComponent }
      val bal: semantics.bal.type = semantics.bal
    }
    
    object ClientPortfolioAuditService2 extends Auditing {
      val semantics = new ClientPortfolio { val bal = CustomBalanceComponent }
      val bal: semantics.bal.type = semantics.bal
    }
    

    Try it out in your REPL and see how the two services behave the same way, abstracting away all implementations of components from the user ..

    scala> ClientPortfolioAuditService1.currentPortfolio(Account("100", "dg", java.util.Calendar.getInstance.getTime, "a"))
    res0: List[(Double, com.redis.cake.Currency, java.util.Date)] = List((1300.0,USD,Thu Jan 31 12:58:35 IST 2013), (1800.0,USD,Thu Jan 31 12:58:35 IST 2013))
    
    scala> ClientPortfolioAuditService2.currentPortfolio(Account("100", "dg", java.util.Calendar.getInstance.getTime, "a"))
    res1: List[com.redis.cake.ClientPortfolioAuditService2.bal.Balance] = List(BalanceRep(1300.0,USD,Thu Jan 31 12:58:46 IST 2013), BalanceRep(1800.0,USD,Thu Jan 31 12:58:46 IST 2013))
    

    The technique discussed above is inspired by the paper Polymorphic Embedding of DSLs. I have been using this technique for quite some time and I have discussed a somewhat similar implementation in my book DSLs In Action while discussing internal DSL design in Scala.

    And in case you are interested in the full code, I have uploaded it on my Github.

    Monday, January 14, 2013

    A language and its interpretation - Learning free monads

    I have been playing around with free monads of late and finding them more and more useful in implementing separation of concerns between pure data and its interpretation. Monads generally don't compose. But if you restrict monads to a particular form, then you can define a sum type that composes. In the paper Data Types a la carte, Wouter Swierstra describes this form as
    data Term f a =
        Pure a
        | Impure (f (Term f a)) 
    
    These monads consist of either pure values or an impure effect, constructed using f. When f is a functor, Term f is a monad. And in this case, Term f is the free monad - free in the sense that the construction is the left adjoint to the forgetful functor from monads to functors.

    I am not going into the details of what makes a monad free. I once asked this question on google+ and Edward Kmett came up with a beautiful explanation. So instead of trying to come up with a half assed version of the same, have a look at Ed's response here.

    In short, we can say that a free monad is the freest object possible that's still a monad.

    Composition

    Free monads compose and help you build larger abstractions which are pure data and yet manage to retain all properties of a monad. Hmm .. this sounds interesting, because now we can not only build abstractions but also let clients extend them through composition, using the fact that the result is still a monad.

    A free monad is pure data, not yet interpreted, as we will see shortly. You can pass it to a separate interpreter (possibly multiple interpreters) which can do whatever you feel like with the structure. Your free monad remains pure while all impurities can be put inside your interpreters.

    And so we interpret ..

    In this post I will describe how I implemented an interpreter for a Joy like concatenative language that uses the stack for its computation. Of course it's just a prototype and addresses an extremely simplified subset, but you get the idea. The basic purpose is to explore the power of free monads and come up with something that can potentially be extended to a full blown implementation of the language.

    When designing an interpreter there's always the risk of conflating the language with the concerns of interpreting it. Here we will have the two completely decoupled by designing the core language constructs as free monads. Gabriel Gonzalez has written a series of blog posts [1,2,3] on the use of free monads which contain details of their virtues and usage patterns. My post is just an account of my learning experience. In fact after I wrote the post, I discovered a similar exercise done for embedding Forth in Haskell - so I guess I'm not the first one to learn free monads using a language interpreter.

    Let's start with some code, which is basically a snippet of Joy like code that will run within our Haskell interpreter ..
    p :: Joy ()
    p = do push 5
           push 6
           add
           incr
           add2
           square
           cube
           end
    

    This is our wish list and at the end of the post we will see if we can interpret this correctly and reason about some aspects of this code. Let's not bother much about the details of the above snippet. The important point is that if we fire it up in ghci, we will see that it's pure data!
    *Joy> p
    Free (Push 5 (Free (Push 6 (Free (Add (Free (Push 1 (Free (Add (Free (Push 1 (Free (Add (Free (Push 1 (Free (Add (Free (Dup (Free (Mult (Free (Dup (Free (Dup (Free (Mult (Free (Mult (Free End))))))))))))))))))))))))))))))
    *Joy> 
    

    We haven't yet executed anything of the above code. It's completely free to be interpreted and possibly in multiple ways. So, we have achieved this isolation - that the data is freed from the interpreter. You want to develop a pretty printer for this data - go for it. You want to apply semantics and give an execution model based on Joy - do it.

    Building the pure data (aka Builder)

    Let's first define the core operators of the language ..
    {-# LANGUAGE DeriveFunctor #-}  -- needed for the derived Functor instance
    
    data JoyOperator cont = Push Int cont
                          | Add      cont 
                          | Mult     cont 
                          | Dup      cont 
                          | End          
                          deriving (Show, Functor)
    

    The interesting piece is the derivation of the Functor, which is what we need to build the free monad over JoyOperator. Keeping the technical mumbo jumbo aside, free monads are just a general way of turning functors into monads. So if we have a core operator f as a functor, we can get a free monad Free f out of it. And knowing something is a free monad helps you transform an operation over the monad (a monad homomorphism) into an operation over the functor (a functor homomorphism). We will see how this helps later ..

    The other point to note is that every operator except End takes a continuation argument that points to the next operation in the chain. End is the terminal symbol and we plan to ignore anything that the user enters after an End.

    Push takes an Int and pushes into the stack, Add pops the top 2 elements of the stack and pushes the sum, Mult does the same for multiplication and Dup duplicates the top element of the stack. End signifies the end of program.

    Next we define the free monad over JoyOperator by using the Free data constructor, defined as part of Control.Monad.Free ..

    data Free f a = Pure a | Free (f (Free f a))

    -- | The free monad over JoyOperator
    type Joy = Free JoyOperator
    

    And then follow it up with some of the definitions of Joy operators as operations over free monads. Note that liftF lifts an operator (which is a Functor) into the context of a free monad. liftF has the following type ..

    liftF :: Functor f => f a -> Free f a

    As a property, the free monad construction is the left adjoint to the forgetful functor that forgets the monadic structure. Dually, when the underlying f is itself a monad, we can unlift from the free monad back to it using the retract function ..

    retract :: Monad f => Free f a -> f a

    and needless to say

    retract . liftF = id

    -- | Push an integer to the stack
    push :: Int -> Joy ()
    push n = liftF $ Push n ()
    
    -- | Add the top two numbers of the stack and push the sum
    add :: Joy ()
    add = liftF $ Add ()
    

    .. and this can be done for all operators that we wish to support.

    Not only this, we can also combine the above operators and build newer ones. Remember we are working with monads and hence the *do* notation based sequencing comes for free ..

    -- | This combinator adds 1 to a number. 
    incr :: Joy ()
    incr = do {push 1; add}
    
    -- | This combinator increments twice
    add2 :: Joy ()
    add2 = do {incr; incr}
    
    -- | This combinator squares a number
    square :: Joy ()
    square = do {dup; mult}
    
    -- | This combinator cubes a number (used in the program p above)
    cube :: Joy ()
    cube = do {dup; dup; mult; mult}
    

    Now we can have a composite program which sequences through the core operators as well as the ones we derive from them. And that's what we posted as our first example snippet of a target program.

    An Interpreter (aka Visitor)

    Once we have the pure data part done, let's try and build an interpreter that does the actual execution based on the semantics that we defined on the operators.

    -- | Errors that the interpreter can raise (a representative
    --   definition, matching the constructors used below)
    data JoyError = NotEnoughParamsOnStack
                  | NotEmptyOnEnd
                  | NoEnd
                  deriving Show
    
    -- | Run a joy program. Result is either an Int or an error
    runProgram :: Joy n -> Either JoyError Int
    runProgram program = joy [] program
      where joy stack (Free (Push v cont))         = joy (v : stack) cont
            joy (a : b : stack) (Free (Add cont))  = joy (a + b : stack) cont
            joy (a : b : stack) (Free (Mult cont)) = joy (a * b : stack) cont
            joy (a : stack) (Free (Dup cont))      = joy (a : a : stack) cont
            joy _ (Free Add {})                    = Left NotEnoughParamsOnStack
            joy _ (Free Mult {})                   = Left NotEnoughParamsOnStack
            joy _ (Free Dup {})                    = Left NotEnoughParamsOnStack
            joy [] (Free End)                      = Left NotEnoughParamsOnStack
            joy [result] (Free End)                = Right result
            joy _ (Free End)                       = Left NotEmptyOnEnd
            joy _ Pure {}                          = Left NoEnd
    

    runProgram is the interpreter that takes a free monad as its input. Its implementation is quite trivial - it just matches on the recursive structure of the data and pushes the appropriate results on the stack. Now if we run our program p using the above interpreter we get the correct result ..

    *Joy> runProgram p
    Right 7529536
    *Joy> 
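
    And just to drive home the point about multiple interpretations, here's a minimal sketch of the pretty printer alluded to earlier - a second interpreter over the very same data (showProgram is a hypothetical name, not part of the original source) ..

    -- | Render a joy program as text, without executing anything
    showProgram :: Joy n -> String
    showProgram (Free (Push v cont)) = "push " ++ show v ++ "\n" ++ showProgram cont
    showProgram (Free (Add cont))    = "add\n"  ++ showProgram cont
    showProgram (Free (Mult cont))   = "mult\n" ++ showProgram cont
    showProgram (Free (Dup cont))    = "dup\n"  ++ showProgram cont
    showProgram (Free End)           = "end\n"
    showProgram (Pure _)             = ""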
    

    Equational Reasoning

    Being Haskell and being pure, we obviously can prove some bits and pieces of our program as mathematical equations. At the beginning I said that End is the end of the program and anything after End needs to be ignored. What happens if we do the following ..

    runProgram $ do {push 5; incr; incr; end; incr; incr}

    If you have guessed correctly we get Right 7, which means that all operations after end have been ignored. But can we prove that our program indeed does this ? The following is a proof that end indeed ends our program. Consider some operation m that follows end ..

    end >> m
    
    -- definition of end
    = liftF End >> m
    
    -- m >> m' = m >>= \_ -> m'
    = liftF End >>= \_ -> m
    
    -- liftF f = Free (fmap Pure f), and fmap Pure End = End
    = Free End >>= \_ -> m
    
    -- Free m >>= f = Free (fmap (>>= f) m)
    = Free (fmap (>>= \_ -> m) End)
    
    -- fmap f End = End
    = Free End
    
    -- Free End = liftF End (again since fmap Pure End = End)
    = liftF End
    
    -- definition of end
    = end
    

    So this shows that any operation we do after end is never executed.

    Improving the bind

    Free monads offer improved modularity by allowing us to separate the building of pure data from the (possibly) impure concerns of interpreting it with the external world. But a naive implementation leads to a penalty in runtime efficiency as Janis Voigtlander discusses in his paper Asymptotic Improvement of Computations over Free Monads. And Haskell's implementation of free monads had this inefficiency, where the asymptotic complexity of substitution was quadratic because of left associative binds. Edward Kmett implemented the Janis trick and engineered a solution that gets over this inefficiency. I will discuss this in a future post. And if you would like to play around with the interpreter, the source is there in my github.