
Wednesday, February 11, 2015

Functional Patterns in Domain Modeling - Composing a domain workflow with statically checked invariants

I have been doing quite a bit of domain modeling using functional programming, mostly in Scala. And as happens when you work on something for a long period of time, you tend to identify more and more patterns that come up repeatedly within your implementations. You may ignore these as patterns the first time, get a feeling of mere coincidence the next time, but the third time really gives you that aha! moment, and you feel like documenting it as a design pattern. In the course of my learnings I have started blogging on some of these patterns - you can find the earlier ones in the series in:

  • Functional Patterns in Domain Modeling - The Specification Pattern

  • Functional Patterns in Domain Modeling - Immutable Aggregates and Functional Updates

  • Functional Patterns in Domain Modeling - Anemic Models and Compositional Domain Behaviors


    In this continuing series of functional patterns in domain modeling, I will go through yet another idiom which has been a quite common occurrence in my explorations across various domain models. You will find many of these patterns explained in detail in my upcoming book Functional and Reactive Domain Modeling, the early access edition of which has already been published by Manning.

    One of the things that I strive to achieve in implementing domain models is to use the type system to encode as much domain logic as possible. If you can use the type system effectively, you get the benefits of parametricity, which not only makes your code generic, concise and polymorphic, but also makes it self-testing. But that's another story which we can discuss in another post. In this post I will talk about a pattern that helps you design domain workflows compositionally, and also enables implementing domain invariants within the workflow - all done statically with a little help from the type system.

    As an example let's consider a loan processing system (simplified for illustration purposes) typically followed by banks issuing loans to customers. A typical simplified workflow runs through three stages - the customer applies for the loan, the bank approves it, and the approved application is enriched with derived details such as the EMI.

    The Domain Model


    The details of each process are not important - we will focus on how we compose the sequence and ensure that the API verifies statically that the correct sequence is followed. Let's start with a domain model for the loan application - we will keep enriching it as we traverse the workflow.

    case class LoanApplication private[Loans](
      // date of application
      date: Date,
      // name of applicant
      name: String,
      // purpose of loan
      purpose: String,
      // intended period of repayment in years
      repayIn: Int,
      // actually sanctioned repayment period in years
      actualRepaymentYears: Option[Int] = None,
      // actual start date of loan repayment
      startDate: Option[Date] = None,
      // loan application number
      loanNo: Option[String] = None,
      // emi
      emi: Option[BigDecimal] = None
    )

    Note we have a bunch of attributes that are defined as optional and will be filled in later as the loan application traverses the workflow. Also note the constructor is declared private to the Loans package - we will have a smart constructor to create instances of the class.

    Wiring the workflow with Kleisli


    Here are the various domain behaviors modeling the stages of the workflow .. I will be using the scalaz library for the Kleisli implementation.

    def applyLoan(name: String, purpose: String, repayIn: Int, 
      date: Date = today) =
      LoanApplication(date, name, purpose, repayIn)
    
    def approve = Kleisli[Option, LoanApplication, LoanApplication] { l => 
      // .. some logic to approve
      l.copy(
        loanNo = scala.util.Random.nextString(10).some,
        actualRepaymentYears = 15.some,
        startDate = today.some
      ).some
    }
    
    def enrich = Kleisli[Option, LoanApplication, LoanApplication] { l => 
      //.. may be some logic here
      val x = for {
        y <- l.actualRepaymentYears
        s <- l.startDate
      } yield (y, s)
    
      l.copy(emi = x.map { case (y, s) => calculateEMI(y, s) }).some
    }

    applyLoan is the smart constructor that creates the initial instance of LoanApplication. The other two functions, approve and enrich, perform the approval and enrichment steps of the workflow. Note both of them return an enriched version of the LoanApplication within a Kleisli, so that we can use the power of Kleisli composition and wire them together to model the workflow ..

    val l = applyLoan("john", "house building", 10)
    val op = approve andThen enrich
    op run l

    When you have a sequence to model that takes an initial object and then applies a chain of functions, you can use plain function composition like h(g(f(x))), or the point free notation (h compose g compose f), or the more readable order (f andThen g andThen h). But in the above case we need to have effects along with the composition - we are returning an Option from each stage of the workflow. So instead of plain composition we need effectful composition of functions, and that's exactly what Kleisli offers. The andThen combinator in the above code snippet is actually Kleisli composition, aka function composition with effects.
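
    To make the distinction concrete, here is a minimal sketch (mine, not from the original post) contrasting plain composition with Kleisli composition over Option, using scalaz:

    import scalaz._
    import Scalaz._

    // plain function composition - no effects
    val f: Int => Int = _ + 1
    val g: Int => Int = _ * 2
    (f andThen g)(10)                          // 22

    // the same pipeline where each stage returns an Option
    val fk = Kleisli[Option, Int, Int](n => (n + 1).some)
    val gk = Kleisli[Option, Int, Int](n => (n * 2).some)
    (fk andThen gk) run 10                     // Some(22)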

    So we have everything the workflow needs, and clients can use our API to construct workflows for processing loan applications. But one of the qualities of good API design is to make it difficult for the client to use the API in the wrong way. Consider what happens with the above design if we invoke the sequence as enrich andThen approve. This violates the domain invariant that enrichment happens after approval - approval of the application generates information which the enrichment process needs to use. But because our types align, the compiler will happily let this semantically invalid composition pass through, and the error will be reported only at run time.

    Remembering that we have a static type system at our disposal, can we do better ?

    Phantom Types in the Mix


    Let's throw in some more types and see if we can encode some more information for the compiler to work with. We tag each state of the workflow with a separate type ..

    trait Applied
    trait Approved
    trait Enriched

    Finally, we make the main model LoanApplication parameterized on a type that indicates which state it is in, and add some helpful type aliases ..

    case class LoanApplication[Status] private[Loans]( //..
    
    type LoanApplied  = LoanApplication[Applied]
    type LoanApproved = LoanApplication[Approved]
    type LoanEnriched = LoanApplication[Enriched]

    These types will have no role in modeling domain behaviors - they will just be used to dispatch to the correct state of the sequence that the domain invariants mandate. The workflow functions need to be modified slightly to take care of this ..

    def applyLoan(name: String, purpose: String, repayIn: Int, 
      date: Date = today) =
      LoanApplication[Applied](date, name, purpose, repayIn)
    
    def approve = Kleisli[Option, LoanApplied, LoanApproved] { l => 
      l.copy(
        loanNo = scala.util.Random.nextString(10).some,
        actualRepaymentYears = 15.some,
        startDate = today.some
      ).some.map(identity[LoanApproved])
    }
    
    def enrich = Kleisli[Option, LoanApproved, LoanEnriched] { l => 
      val x = for {
        y <- l.actualRepaymentYears
        s <- l.startDate
      } yield (y, s)
    
      l.copy(emi = x.map { case (y, s) => calculateEMI(y, s) }).some.map(identity[LoanEnriched])
    }

    Note how we use the phantom types within the Kleisli and ensure statically that the sequence can flow only in one direction - that which is mandated by the domain invariant. So now an invocation of enrich andThen approve will result in a compilation error because the types don't match. So once again yay! for having the correct encoding of domain logic with proper types.
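
    Concretely, with the phantom-typed versions wired up (a sketch based on the snippets above):

    val l  = applyLoan("john", "house building", 10)  // LoanApplication[Applied]
    val op = approve andThen enrich                   // LoanApplied => LoanEnriched
    op run l

    // val bad = enrich andThen approve
    // does not compile: enrich produces a LoanEnriched,
    // but approve expects a LoanApplied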

    Friday, February 01, 2013

    Modular Abstractions in Scala with Cakes and Path Dependent Types

    I have been trying out various options of implementing the Cake pattern in Scala, considered to be one of the many ways of doing dependency injection without using any additional framework. There are other (more functional) ways of doing the same thing, one of which I blogged about before and also talked about in a NY Scala meetup. But I digress ..

    Call it DI or not, the Cake pattern is one of the helpful techniques for implementing modular abstractions in Scala. You weave your abstract components (aka traits), layering on the dependencies, and commit to implementations only at the end of the world. I was trying to come up with an implementation that does not use self type annotations. It's not that I think self type annotations are kludgy, but I don't find them used much elsewhere besides the Cake pattern. And of course mutually recursive self annotations are a code smell that makes your system anti-modular.

    In the following implementation I use path dependent types, which became a fully supported feature in Scala 2.10. Incidentally the feature had been around for a long time behind an experimental flag, but came out in public only in 2.10. The consequence is that instead of self type annotations or inheritance I will be configuring my dependencies using composition.

    Let me start with some basic abstractions of a very simple domain model. The core component that I will build is a service that reports the portfolio of a client as a list of balances. The example has been simplified for illustration purposes - the actual real life model has a much more complex implementation.

    A Portfolio is a collection of Balances. A Balance is a position of an Account in a specific Currency as on a particular Date. Expressing this in simple terms, we have the following traits ..

    // currency
    sealed trait Currency
    case object USD extends Currency
    case object EUR extends Currency
    case object AUD extends Currency
    
    //account
    case class Account(no: String, name: String, openedOn: Date, status: String)
    
    trait BalanceComponent {
      type Balance
    
      def balance(amount: Double, currency: Currency, asOf: Date): Balance
      def inBaseCurrency(b: Balance): Balance
    }
    

    The interesting point to note is that the actual type of Balance has been abstracted in BalanceComponent, since various services may choose to use various representations of a Balance. And this is one of the layers of the Cake that we will mix finally ..

    Just a note for the uninitiated, a base currency is typically considered the domestic currency or accounting currency. For accounting purposes, a firm may use the base currency to represent all profits and losses. So we may have some service or component that would like to have the balances reported in base currency.
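
    The snippets that follow refer to baseCurrency and baseCurrencyFactor, which are not defined in this post. A minimal sketch, with conversion factors chosen to be consistent with the REPL session shown at the end, could be:

    // hypothetical helpers assumed by the BalanceComponent implementations:
    // the accounting currency, and conversion factors into it
    val baseCurrency: Currency = USD
    val baseCurrencyFactor: Map[Currency, Double] =
      Map(USD -> 1.0, EUR -> 1.3, AUD -> 1.2)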

    trait Portfolio {
      val bal: BalanceComponent
      import bal._
    
      def currentPortfolio(account: Account): List[Balance]
    } 
    

    Portfolio uses the abstract BalanceComponent and does not commit to any specific implementation. And the Balance in the return type of the method currentPortfolio is actually a path dependent type, made to look nice through the object import syntax.
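
    If we skip the import, the path dependent type shows up explicitly in the signature - this is the same trait with the path written out:

    trait Portfolio {
      val bal: BalanceComponent

      // bal.Balance - the Balance type of *this* portfolio's component
      def currentPortfolio(account: Account): List[bal.Balance]
    }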

    Now let's have some standalone implementations of the above components .. we are not yet ready to mix the cake ..

    // report balance as a TUPLE3 - simple
    trait SimpleBalanceComponent extends BalanceComponent {
      type Balance = (Double, Currency, Date)
    
      override def balance(amount: Double, currency: Currency, asOf: Date) = 
        (amount, currency, asOf)
      override def inBaseCurrency(b: Balance) = 
        ((b._1) * baseCurrencyFactor.get(b._2).get, baseCurrency, b._3)
    }
    
    // report balance as an ADT
    trait CustomBalanceComponent extends BalanceComponent {
      type Balance = BalanceRep
    
      // balance representation
      case class BalanceRep(amount: Double, currency: Currency, asOf: Date)
    
      override def balance(amount: Double, currency: Currency, asOf: Date) = 
        BalanceRep(amount, currency, asOf)
      override def inBaseCurrency(b: Balance) = 
        BalanceRep((b.amount) * baseCurrencyFactor.get(b.currency).get, baseCurrency, b.asOf)
    }
    

    And a sample implementation of ClientPortfolio that adds logic without yet committing to any concrete type for the BalanceComponent.

    trait ClientPortfolio extends Portfolio {
      val bal: BalanceComponent
      import bal._
    
      override def currentPortfolio(account: Account) = {
        //.. actual impl will fetch from database
        List(
          balance(1000, EUR, Calendar.getInstance.getTime),
          balance(1500, AUD, Calendar.getInstance.getTime)
        )
      }
    }
    

    Similar to ClientPortfolio, we can have multiple implementations of Portfolio that report balances in various forms. So our cake has started taking shape. We have the Portfolio component and the BalanceComponent already woven in without any implementation. Let's add yet another layer to the mix, maybe for fun - a decorator for the Portfolio.

    We add Auditing as a component which can decorate *any* Portfolio component and report the balance of an account in base currency. Note that Auditing needs to abstract over implementations of BalanceComponent as well as Portfolio, since the idea is to decorate any Portfolio component using any of the underlying BalanceComponent implementations.

    Many cake implementations use self type annotations (or inheritance) for this. I will be using composition and path dependent types.

    trait Auditing extends Portfolio {
      val semantics: Portfolio
      val bal: semantics.bal.type
      import bal._
    
      override def currentPortfolio(account: Account) = {
        semantics.currentPortfolio(account) map inBaseCurrency
      }
    }
    

    Note how the Auditing component uses the same Balance implementation as the underlying decorated Portfolio component, enforced through path dependent types.

    And we have reached the end of the world without yet committing to any implementation of our components .. But now let's do that and get a concrete service instantiated ..

    object SimpleBalanceComponent extends SimpleBalanceComponent
    object CustomBalanceComponent extends CustomBalanceComponent
    
    object ClientPortfolioAuditService1 extends Auditing {
      val semantics = new ClientPortfolio { val bal = SimpleBalanceComponent }
      val bal: semantics.bal.type = semantics.bal
    }
    
    object ClientPortfolioAuditService2 extends Auditing {
      val semantics = new ClientPortfolio { val bal = CustomBalanceComponent }
      val bal: semantics.bal.type = semantics.bal
    }
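
    It is the singleton type semantics.bal.type that enforces the invariant - wiring the decorator with a component different from the one used by the decorated portfolio fails to compile. A sketch:

    object BadAuditService extends Auditing {
      val semantics = new ClientPortfolio { val bal = SimpleBalanceComponent }
      // does not compile: CustomBalanceComponent is not of type semantics.bal.type
      // val bal: semantics.bal.type = CustomBalanceComponent
      val bal: semantics.bal.type = semantics.bal
    }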
    

    Try them out in your REPL and see how the two services behave the same way, abstracting away all implementations of components from the user ..

    scala> ClientPortfolioAuditService1.currentPortfolio(Account("100", "dg", java.util.Calendar.getInstance.getTime, "a"))
    res0: List[(Double, com.redis.cake.Currency, java.util.Date)] = List((1300.0,USD,Thu Jan 31 12:58:35 IST 2013), (1800.0,USD,Thu Jan 31 12:58:35 IST 2013))
    
    scala> ClientPortfolioAuditService2.currentPortfolio(Account("100", "dg", java.util.Calendar.getInstance.getTime, "a"))
    res1: List[com.redis.cake.ClientPortfolioAuditService2.bal.Balance] = List(BalanceRep(1300.0,USD,Thu Jan 31 12:58:46 IST 2013), BalanceRep(1800.0,USD,Thu Jan 31 12:58:46 IST 2013))
    

    The technique discussed above is inspired by the paper Polymorphic Embedding of DSLs. I have been using this technique for quite some time, and I have discussed a somewhat similar implementation in my book DSLs In Action while discussing internal DSL design in Scala.

    And in case you are interested in the full code, I have uploaded it on my Github.

    Monday, June 01, 2009

    Prototypal Inheritance in Javascript - Template Method meets Strategy

    I have been reading some of the papers on Self, a programming environment that models computation exclusively in terms of objects. Unlike the classical object-oriented approach, however, Self is a classless language, where everything is an object. An object has slots - each slot has a name and a value. The slot name is always a String, while the value can be any other Self object. A slot can point to a method as well, consisting of code. A special designated slot points to the parent object in the hierarchy. Hence each object is consistently designed for extensibility through inheritance. But since we don't have class structures, everything is dynamic and happens at runtime. Objects interact through messages - when an object receives a message, it looks up its slots for a match. If a matching slot is not found, the search continues up the chain through successive parent pointers, till the root is reached.

    Prototype based languages offer a different way of implementing objects, and hence require a different thinking for structuring your programs. They make you think more in terms of messages that your objects will receive, and how the messages get propagated up the inheritance chain.

    Javascript follows an almost identical architecture, where the hierarchies of objects are constructed through prototypes. This post is not about Self or, for that matter, about the Javascript language. Some time back I had blogged about how the Template Method design pattern gets subsumed into higher order functions and closures when implemented using functional programming languages.

    In a class based language, template method pattern is implemented in terms of inheritance, which makes the structure of the pattern static and makes the derivatives of the hierarchy statically coupled to the base abstraction. Closures liberate the pattern structure from this compile time coupling and make it dynamic. But once we take off the class inheritance part and use higher order functions to plug in the variable parts of the algorithm, what we end up with closely matches the Strategy pattern. Have a look at James Iry's insightful comments in my earlier post.

    James also hinted at another level of subsumption which is more interesting - the case of the two patterns implemented in a prototype based language like Javascript. Here is how it looks ..

    // the template function at the base object
    // defines the generic flow
    // uses hooks to be plugged in by derived objects

    var processor = {
      process: function() {
        this.doInit();
        this.doProcess();
        this.doEnd();
        return true;
      }
    };


    We construct another object that inherits from the base object. The function beget is the one that Douglas Crockford defines as a helper to create a new object using another object as the prototype.

    if (typeof Object.beget !== 'function') {
      Object.beget = function(o) {
        var F = function() {};
        F.prototype = o;
        return new F();
      };
    }

    var my_processor  = Object.beget(processor);


    The new object now implements the variable parts of the algorithm.

    my_processor.doInit = function() {
      //..
    };
    my_processor.doProcess = function() {
      //..
    };
    my_processor.doEnd = function() {
      //..
    };


    and we invoke the function from the base object ..

    my_processor.process();

    If we need to define another specialization of the algorithm that only has to override a single variable part, we do it likewise by supplying the object my_processor as the prototype ..

    var your_processor= Object.beget(my_processor);
    your_processor.doEnd = function() {
      //.. another specialization
    };

    your_processor.process();


    So what we get is a dynamic version of the Template Method pattern with no static coupling - thanks to prototypal inheritance of Javascript. Is this a Template Method pattern or a Strategy pattern ? Both get subsumed into the prototypal nature of the language.

    Sunday, January 11, 2009

    Subsuming the Template Method pattern

    It all started with this Kevin Rutherford post, where he talks about his unnatural urge to use the Template Method pattern. He mentions that whenever he sees duplicate algorithms, his natural tendency is to push the skeleton up into a superclass. This way he ends up creating too many inheritance hierarchies, which prove to be quite brittle in the face of changing algorithms.

    The Template Method pattern mandates that the *template method* must model the invariant part of the algorithm, while the concrete classes implement the variabilities that will be called back into the template method. While the pattern encourages implementation inheritance, which may lead to unnecessary coupling and brittle hierarchies, it has been used quite effectively in various frameworks developed over time.

    As the GoF book mentions, the Template Method pattern leads to an inverted control structure sometimes referred to as the Hollywood Principle, and finds better usage in frameworks and libraries for decoupling client extensions / implementations from the reusable invariant part of the algorithm.

    Outside the OO world, the template method pattern has been put to great use by many programming languages and frameworks. In many of those applications, like many of the other design patterns, it has been subsumed within the language itself, and hence its usage has transcended into the idioms and best practices of the language without much ceremony or fanfare.

    Consider this code snippet in Scala ..

    def doForAll[A, B](l: List[A], f: A => B): List[B] = l match {
      case x :: xs => f(x) :: doForAll(xs, f)
      case Nil => Nil
    }


    The code processes the input list and invokes the user supplied function (f) on every element. Here the invariant part of the algorithm is the traversal of the list, while the hook is provided by the user supplied function. This follows the template method pattern, and is more popularly known as "map" in functional languages ..

    List(1, 2, 3, 4).map { x => x * 2 }


    In languages that support higher order functions, template methods are ubiquitous, and are used as a common idiom to promote framework style reusability. Languages like Haskell allow you to do more advanced stuff in the base abstraction through function composition and monadic sequencing, thereby making the invariant part more robust in the face of implementation variabilities. Even with IoC frameworks like Spring, template methods promote implementing monad-like combinators (to the extent you can have them in OO without higher order functions) that result in autowiring and sequencing of operations.
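
    In Scala terms, a minimal sketch of that idea (illustrative, not from any particular framework) might sequence the hooks monadically over Option, so that any stage can abort the whole pipeline:

    // the invariant part is the sequencing; the hooks are init, step and end
    def process[A](init: () => Option[A], step: A => Option[A], end: A => Option[A]): Option[A] =
      for {
        a <- init()
        b <- step(a)
        c <- end(b)
      } yield c

    process(() => Some(1), a => Some(a + 1), b => Some(b * 2))   // Some(4)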

    Higher Order Template Methods through Message Passing

    Template method pattern has two distinct aspects that combine to form the wholeness of the pattern - the commonality of the algorithm described by the base class and the concrete variabilities being injected by the derived implementations. Replace class by module and we have some great examples in Erlang / OTP Framework. Here is a fun example from Joe Armstrong's Programming Erlang ..

    -module(server5).
    -export([start/0, rpc/2]).

    start() -> spawn(fun() -> wait() end).

    wait() ->
        receive
            {become, F} -> F()
        end.

    rpc(Pid, Q) ->
        Pid ! {self(), Q},
        receive
            {Pid, Reply} -> Reply
        end.


    Erlang implements variability through message passing, dynamic typing and dynamic hot swapping of code. The server skeleton above really does nothing but await an impersonation request. On receiving the message {become, F}, the server starts behaving as an F. F models the variable part, while the above module is the invariant server logic. Feed it a factorial function within a become message and the server starts behaving as a factorial server. Isn't this some template method on steroids ? But the Erlang people never call it by this name - because it is one of the most common idioms of Erlang programming, it needs no formal announcement in the name of a "design pattern". Of course, as I mentioned before, this does not invalidate the necessity of the design pattern - the only difference from OO is that in some other languages, the pattern gets melded into the language syntax and idioms.

    Monday, November 10, 2008

    Design Patterns - The Cult to Blame ?

    I always thought the GOF Design Patterns book achieved its objective of making us better C++/Java programmers. It taught us how to design polymorphic class hierarchies while encouraging delegation over inheritance. In an object oriented language like Java or C++, which does not offer first class higher order functions or closures, the GOF patterns taught us how to implement patterns like Command, Strategy and State through properly encapsulated class structures that decouple the theme from the context. As long as we do not link them with the pattern definition and theory as espoused by Christopher Alexander, the GOF book has made an invaluable contribution to today's mainstream OO community.

    But that's only the good part. I was wondering what made many people cringe at the patterns movement that seems to have deluged the programming community for well over a decade.

    One of the drawbacks the patterns movement suffered from was that the design patterns, as promoted by the GOF book, focused too much on the implementation aspects - the how part of the solution - instead of the what part of the problem that they solve. This resulted in the implementation language becoming the main focus of the entire pattern. People started thinking that C++ was the language of choice, since all of the GOF design patterns could be implemented faithfully using the honored constructs of the language. And, of course, in due time Java, positioned as a better C++, inherited the same honor. It may not be totally coincidental that during the time we were busy adulating the virtues of GOF design patterns in Java and C++, development of newer programming languages was at a very low ebb.

    Developers started thinking that Java and C++ were the be-all and end-all of programming language implementations, since we could implement all GOF patterns in them. Enterprise software loves to follow common practices - rote techniques that can be easily replicated by a set of replaceable programmers - and the combination of Java and design patterns made a perfect match. Switching languages was difficult, and even today, when mainstream programmers switch to more powerful languages like Ruby or Scala, their initial thought processes often remain limited to the abstraction levels of Java. In reality, the patterns movement created a cult and made us somewhat myopic towards implementing higher order abstractions in any other, more powerful programming language.

    Mark Jason Dominus reiterated this way back in 2006 ..

    "If the Design Patterns movement had been popular in the 1980's, we wouldn't even have C++ or Java; we would still be implementing Object Oriented Classes in C with structs, and the argument would go that since programmers were forced to use C anyway, we should at least help them as much as possible."

    Are Design Patterns useless ?

    Absolutely not. Patterns are solutions to problems in context. As long as the problems remain, patterns remain equally relevant. What we need to do is highlight the intent of the pattern - the problem that it solves, the forces that it resolves and the resultant forces that it generates - instead of just documenting how it is implemented in one particular language.

    As an example, the Strategy pattern is intended to decouple the implementation of an algorithm from the context of the application. In Java, it is implemented through the context class delegating the responsibility to a strategy interface, which can have multiple implementations. Looks and sounds ceremonious, but that is what it is if you use Java as the implementation language. In languages that support higher order functions, the same pattern is implemented much more succinctly using native language features, and hence the implementation is subsumed into the natural idioms of the language itself. Again, if we highlight the intent of the pattern, it remains *equally relevant*, irrespective of whether we use Ruby, Groovy, Scala or Lisp for the implementation. And the intent is loud and clear - the implementation of the algorithm is a separate concern from the concrete context into which it is plugged.
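
    To make that concrete, here is a minimal Scala sketch (names are illustrative) of the same intent - the pricing algorithm is a plain function, decoupled from the checkout context that uses it:

    // the context knows *when* to apply a pricing strategy, not *which* one
    def checkout(prices: List[Double], discount: Double => Double): Double =
      prices.map(discount).sum

    checkout(List(100.0, 250.0), p => p * 0.9)   // 10% discount strategy
    checkout(List(100.0, 250.0), identity)       // no-discount strategy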

    Similarly, the Visitor pattern has often been maligned as an overly complex structure that should be avoided at any cost. Visitor looks complex in Java or C++ because these languages do not support higher order abstractions like pattern matching or multiple dispatch. Languages like Scala that support pattern matching have been known to implement succinct visitors without much ceremony. This paper, presented at OOPSLA 2008, implements the Visitor pattern as a reusable generic type-safe component using the powerful type system of Scala.
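
    For instance, here is a minimal sketch (mine) of the Visitor intent via pattern matching in Scala - the traversals live entirely outside the expression hierarchy, and new traversals can be added without touching it:

    sealed trait Expr
    case class Num(value: Int) extends Expr
    case class Add(l: Expr, r: Expr) extends Expr

    // each "visitor" is just a function that pattern matches on the hierarchy
    def eval(e: Expr): Int = e match {
      case Num(v)    => v
      case Add(l, r) => eval(l) + eval(r)
    }

    def show(e: Expr): String = e match {
      case Num(v)    => v.toString
      case Add(l, r) => "(" + show(l) + " + " + show(r) + ")"
    }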

    Irrespective of how you implement, Visitor remains a potent solution to the problem of separating the structure of abstraction hierarchies from the behavior of traversals over that hierarchy. The important part is to add the intent of Visitors to your design vocabulary - it is just the level of abstractions that the language offers that makes the implementation invisible, informal or formal.

    Many people also talk about the Singleton implementation in Java as being too elaborate, complex and unnecessarily verbose. On the other hand, a singleton is available as a library call in Ruby and as a single keyword in Scala. That does not make Singleton a simpleton - it is just that Java does not offer a sufficiently high level of abstraction to express the intent and solution of the pattern. Go use dependency injection frameworks - they offer Singletons as configurable behaviors ready to be wired up with your native objects.
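
    In Scala the whole ceremony collapses into the object keyword - a one-word sketch:

    // a lazily initialized, thread-safe singleton, guaranteed by the language
    object Configuration {
      val maxConnections = 100
    }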

    Patterns as a vehicle of communication

    .. and not as code snippets that can be copy pasted. The real intrinsic value of design patterns is that they facilitate communication in processes by encouraging a common vocabulary and format that binds the design space. Coplien mentions in his monograph on Software Patterns ..

    "Besides, we believe that human communication is the bottleneck in software development. If the pattern literary form can help programmers communicate with their clients, their customers, and with each other, they help fill a crucial need of contemporary software development.

    Patterns are not a complete design method; they capture important practices of existing methods and practices uncodified by conventional methods. They are not a CASE tool. They focus more on the human activities of design than on transformations that can blindly be automated. They are not artificial intelligence; they celebrate and encourage the human intelligence that separates people from computers."


    People, mostly from the dynamic language community, frown upon design patterns as being too ceremonious, claiming that most of them can be implemented much more easily in dynamically typed and functional languages. Very true, but that does not undermine the core value proposition of design patterns. Maybe most of us became converts to a cult that focused too much on the implementation details of the patterns in a specific family of languages. Ideally we should have been more careful about documenting beautiful patterns and pattern languages as the core vocabulary of designers.

    Some people on the Erlang mailing list recently talked about OTP being the embodiment of Erlang design patterns. OTP is clearly a framework that helps develop highly concurrent client server applications in Erlang. Patrick Logan correctly points out that OTP is a collection of patterns only in a very poor sense of the term. The real collection would be a pattern language that documents the problems, identifies the forces at play and suggests solutions through a catalog of literary forms that help develop concurrent server applications from the ground up. In the case of Erlang, gen_server, gen_event, gen_fsm etc. collaborate towards implementing the pattern language. Similar efforts have been known to exist for Scala as well. Implementing the same in another language may be difficult, but that only points to the deficiencies of that particular language in implementing a concrete instance of the pattern language.

    Monday, August 20, 2007

    Learning a programming language can be fun

    Giles Bowkett makes a great point about how one-liner code snippets often make a great idiom in a programming language. It is always worth learning the idioms of a language rather than memorizing the syntax, and, as Giles mentions, a good one-liner compresses syntax down to idiom. Reading his blog, I instantly remembered the first one-liner that gave me my Aha! moment while learning C. The K&R bible, in Section 5.5, builds up successive versions of the C library function strcpy() that improve in conciseness at every iteration and ultimately culminate in this power-packed one-liner ..

    void strcpy(char *s, char *t) {
        while (*s++ = *t++);
    }


    In the very next paragraph, the authors mention ..

    Although this may seem cryptic at first sight, the notational convenience is considerable, and the idiom should be mastered, because you will see it frequently in C programs.


    So true!

    If you want to learn a programming language, invest most of your time learning the idioms - that's where you will discover the densest knowledge about using the language in the best possible way. As Giles mentions ..

    But the act of compressing code into a one-liner is like the act of creating slang, and if you understand how slang comes into being, you understand how to make new words.


    During my years of learning C++, one book that stood out in teaching the idioms of the language was Coplien's Advanced C++ Programming Styles and Idioms. The book was written way back in 1992, but it remains my #1 recommendation for someone learning C++ even today. Many of the design patterns later popularized by the celebrated GOF book had already been beaten to hell in the Coplien book.

    Often there is a fine line of misunderstanding between syntax and idiom ..

    Many years back, in an interview for a C++ developer position, I asked a candidate to explain the function prototype of the typical assignment operator overload. I believe that a candidate who can explain the nuances of the assignment operator overload prototype in C++ has a good understanding of the basics of the language. Specifically I wanted him to explain the return type and return value of the usual assignment operator overload of a class in C++.

    Foo& Foo::operator=(const Foo& right) {
      // ..
      // ..
      return *this;
    }


    • Why does the function return *this ?

    • Why does the function not return right ?

    • Why does the function take a reference-to-const-Foo ?



    I felt this is an important idiom of C++ that developers should be thorough with, and a correct explanation reveals a good understanding of multiple aspects of the language - variable lifetime, temporaries, idiomatic usage of assignment etc. Unfortunately the candidate was furious with me and complained that he did not memorize syntax - that is what manuals are for, after all. What he did not realize is that a proper understanding of the idiom of how assignments are chained through references in C++ (allowing expressions like a = b = c) obviates the need to memorize the syntax.

    Learning a new programming language can be fun

    This thread in the Lightweight Languages discussion list has an interesting list of the available options .. Among them, the most interesting one suggests that you (t)ry to identify and concentrate on features of the language that are absent from other languages you know. I am sure this is a very effective way to learn the idioms of a language and how to make new words with the language you are trying to learn. Try implementing map/reduce in Javascript, lazy data structures in Erlang, Lisp-like macros in Ruby - apart from the loads of fun you will have, you will learn the new language as well.

    Every language has its own form - a Ruby program has to look like Ruby. The moment your Ruby code starts looking like Java, you are no longer thinking in Ruby. This is the essential difference between idiomatic and non-idiomatic use of a programming language. Try exploring the following one-liner in Ruby ..

    Account = Struct.new(:account_no, :account_name, :account_type)


    and see how concisely you can express the equivalent in Java. Now that's a hell of a one-liner in Ruby! And an important idiom too ..

    Monday, August 13, 2007

    Collector Idiom in Scheme - Is this a Design Pattern ?

    As a newbie to the world of functional programming (one of my new year resolutions for 2007), I started my venture to learn the skills of constructing recursive processes and manipulating recursive data structures with Friedman and Felleisen's The Little Schemer. I am now into Chapter 8, Lambda The Ultimate, sailing through The Ninth Commandment (Abstract common patterns with a new function) and into page 137 of the text. And here I meet multirember&co(), my thought-process companion for the last one week. It took me quite a few cycles of stepping through the debugger of DrScheme to figure out the actual computation process going on inside the stacks of lambdas that the function defines.

    Here is a copy of the function for those uninitiated to the amazing text ..

    (define multirember&co
      (lambda (a lat col)
        (cond
          ((null? lat)
           (col (quote()) (quote())))
          ((eq? (car lat) a)
           (multirember&co a
                           (cdr lat)
                           (lambda (newlat seen)
                             (col newlat
                                  (cons (car lat) seen)))))
          (else
           (multirember&co a
                           (cdr lat)
                           (lambda (newlat seen)
                             (col (cons (car lat) newlat)
                                  seen)))))))



    What does this function call (multirember&co a lat f) do ?

    In the words of the authors ..
    It looks at every atom of the lat to see whether it is eq? to a. Those atoms that are not are collected in one list ls1; the others for which the answer is true are collected in a second list ls2. Finally it determines the value of (f ls1 ls2).

    The statement of work is quite straightforward. I tried implementing the same in another functional language, Erlang, and it did not take me too much time to come up with a supposedly meaningful Erlangy implementation ..

    filter_atom(A, L, Col) -> filter_atom(A, L, [], [], Col).

    filter_atom(A, [H|T], Has, NotHas, Col) ->
      case (H =:= A) of
        true    -> filter_atom(A, T, [H|Has], NotHas, Col);
        false   -> filter_atom(A, T, Has, [H|NotHas], Col)
      end;

    filter_atom(A, [], Has, NotHas, Col) ->
      Col(Has, NotHas).


    The Erlang implementation looks simple enough and does not have the accidental complexity that we find in the Scheme implementation.

    Honest confession : I am also a newbie in Erlang - any suggestions for improvement towards a more Erlangy modeling are welcome!

    Why does the Scheme implementation look so complex ?

    Coming back to the Scheme implementation, I am not sure if this is the most idiomatic implementation of the solution. But it definitely is an ample demonstration of the power of closures (lambdas). The inner lambdas build the new collectors by capturing stuff from the enclosing scope and construct values that are handed over to the next collector in the stack. This is precisely what the Tenth Commandment preaches - Build functions to collect more than one value at a time.
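
    If it helps to see the same mechanics outside Scheme, here is a sketch of a transliteration into Scala (mine, not from the book) - each recursive call wraps the current collector col in a new closure:

    def multiRemberCo[A, R](a: A, lat: List[A])(col: (List[A], List[A]) => R): R =
      lat match {
        case Nil => col(Nil, Nil)
        case h :: t if h == a =>
          // h goes into the "seen" (equal) list, inside a new collector
          multiRemberCo(a, t)((newlat, seen) => col(newlat, h :: seen))
        case h :: t =>
          // h goes into the "newlat" (not equal) list
          multiRemberCo(a, t)((newlat, seen) => col(h :: newlat, seen))
      }

    multiRemberCo(2, List(1, 2, 3, 2)) { (notEq, eq) => (notEq, eq) }
    // (List(1, 3), List(2, 2))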

    Is this a Design Pattern ?

    The idiom of using the Collector for building up values has been used in subsequent function implementations as well - refer to multiinsertLR&co() and evens-only*&co() in the same chapter of the book. Given a list, and some predicate, the Collector idiom uses continuation passing style (CPS) to create closures that successively collects multiple values from the list. Is there a design pattern lurking around ?

    I am not qualified enough to comment on the virtues of the above Scheme implementation. One great thing about it is that it is tail recursive. I could come up with a direct recursive implementation, which would not be tail recursive, but would possibly have less accidental complexity.

    What do the experts have to say about the collector based idiom ?

    Monday, July 30, 2007

    Configurable Decorators using AOP

    Sometime back I had posted on the increasing misuse (or overuse) of aspects by today's developers. In many of those cases, plain old decorators would have just been fine. However, very recently, I used a combination of decorators and aspects to add another degree of configurability to a domain model.

    Decorators, by design, should be lightweight objects that can be attached to the skin of other domain objects to add to the roles and responsibilities of the latter. They are added to and/or removed from the original objects dynamically - hence we say that decorators offer an added level of flexibility over static inheritance. Consider the following aggregate, modeled as a Composite.

    Here is the base abstraction for the Composite, which models the Salary components of an employee :


    public abstract class Salary {
      public Salary add(final Salary component) {
        throw new UnsupportedOperationException("not supported at this level");
      }

      public Salary remove(final Salary component) {
        throw new UnsupportedOperationException("not supported at this level");
      }

      public Salary getChild(final int i) {
        throw new UnsupportedOperationException("not supported at this level");
      }

      public abstract double calculate();
    }



    and here is the composite ..


    public class SalaryComposite extends Salary {
      private List<Salary> components = new ArrayList<Salary>();

      @Override
      public Salary add(Salary component) {
        components.add(component);
        return this;
      }

      @Override
      public double calculate() {
        double total = 0;
        for(Salary salary : components) {
          total += salary.calculate();
        }
        return total;
      }

      @Override
      public Salary getChild(int i) {
        return components.get(i);
      }

      @Override
      public Salary remove(Salary component) {
        components.remove(component);
        return this;
      }
    }



    and a couple of leaf level components ..


    public class BasicPay extends Salary {
      private double basicPay;

      public BasicPay(double basicPay) {
        this.basicPay = basicPay;
      }

      @Override
      public double calculate() {
        return basicPay;
      }
    }

    public class HouseRentAllowance extends Salary {
      private double amount;

      public HouseRentAllowance(double amount) {
        this.amount = amount;
      }

      @Override
      public double calculate() {
        return amount;
      }
    }



    Decorators and Composites - a Marriage

    Decorators work nicely with Composites, and I have some decorators that allow users to add behaviors to the Composite elements dynamically. In order to have decorators for the Composite, I need to have the Decorator share the base class with the Composite.


    public abstract class SalaryDecorator extends Salary {
      private Salary component;

      public SalaryDecorator(final Salary component) {
        this.component = component;
      }

      @Override
      public double calculate() {
        return component.calculate();
      }
    }



    and a couple of concrete decorators ..

    a senior citizen adjustment which can be applied to some of the components of the salary ..


    public class SeniorCitizenAdjustment extends SalaryDecorator {
      protected double adjustmentFactor;
      public SeniorCitizenAdjustment(final Salary component, double adjustmentFactor) {
        super(component);
        this.adjustmentFactor = adjustmentFactor;
      }

      @Override
      public double calculate() {
        //.. calculation of absolute amount based on adjustmentFactor
        //.. complex logic
        double adjustment = ...
        return super.calculate() + adjustment;
      }
    }



    and a city compensatory allowance which varies based on the city where the employee lives ..


    public class CityCompensatoryAdjustment extends SalaryDecorator {
      private double adjustmentFactor;

      public CityCompensatoryAdjustment(final Salary component, double adjustmentFactor) {
        super(component);
        this.adjustmentFactor = adjustmentFactor;
      }

      @Override
      public double calculate() {
        //.. calculation of absolute amount based on adjustmentFactor
        //.. complex logic
        double adjustment = ...
        return super.calculate() + adjustment;
      }
    }



    Now my clients can make use of the Composite and decorate its components using the decorators designed for the individual salary components ..


    //..
    Salary s = new SalaryComposite();
    s.add(new SeniorCitizenAdjustment(new BasicPay(1000), 100))
     .add(new CityCompensatoryAdjustment(new HouseRentAllowance(300), 150));
    //..



    In the above example, the use of decorators provides the flexibility that the pattern promises and allows instance level customization of individual components of the composite.

    Decorating the Decorators

    How do you handle fine grained variations within decorators ? In our case many of the decorators had fine grained variations within themselves, which calls for either inheritance hierarchies between them or some other way to decorate the decorators (super-decorators ?). The problem with static inheritance hierarchies is well known - they have to be defined at compile time, and the codebase for each deployment needs to carry the explosion of all possible variations modeled as subclasses. This looks like exactly the problem that decorators are supposed to solve, and which we have already solved for the Composite Salary aggregate. Now we have the same problem for the decorators themselves.

    We rejected the idea of static inheritance hierarchies and decided to model the variations as yet another level of decorators. We did not want to make the original decorators too complex, and focused on making small, composable, additive abstractions that can be tagged transparently onto domain objects and decorators alike.

    For example, in one implementation we found that CityCompensatoryAdjustment has another fine grained variation. When CityCompensatoryAdjustment is applied, an additional rule states that if the employee's residence is in one of a group of premium cities, then he is eligible for an additional premium allowance on top of the normal CityCompensatoryAdjustment. We rejected the idea of changing CityCompensatoryAdjustment to incorporate the new variation, for two reasons :

    1. Trying to incorporate too much logic into the decorators would make them complicated, which defeats the basic design motivation for decorators

    2. These variations change across deployments - hence it would be a bloat to try to incorporate everything into a single class


    Modeling the new requirement as another decorator (PremiumCityAdjustment), we have the following usage of the above snippet ..


    //..
    Salary s = new SalaryComposite();
    s.add(new SeniorCitizenAdjustment(new BasicPay(1000), 100))
     .add(new PremiumCityAdjustment(
              new CityCompensatoryAdjustment(
                  new HouseRentAllowance(300), 150),
              adjustmentFactor));
    //..



    But the Variations are Deployment specific!

    We cannot change the main codebase with the above fine-grained variations, since they are deployment specific. We need to find out ways to externalize the invocation of these decorators. We found out that these finer variations are policies that need to change the behavior of the original decorators whenever they are applied. In the above example, every invocation of CityCompensatoryAdjustment needs to be decorated with PremiumCityAdjustment.


    public class PremiumCityAdjustment extends SalaryDecorator {
      public PremiumCityAdjustment(final Salary component) {
        super(component);
      }

      @Override
      public double calculate() {
        //.. calculation of adjustment amount
        double adjustment = ..
        return super.calculate() + adjustment;
      }
    }



    Aspects again !

    We decided to implement the finer grained variations as decorators which would be applied through aspects to the original decorators. This is the base aspect for all such SuperDecorators ..


    public abstract aspect SalarySuperDecoratorBase<T extends SalaryDecorator> {

      pointcut decoratorCalculate(T s) : target(s)
            && execution(double T.calculate());

      abstract pointcut myAdvice();
    }



    and here is the implementation for adding PremiumCityAdjustment as a decorator for CityCompensatoryAdjustment ..


    public aspect SuperDecoratorCityCompensatoryAdjustment
        extends SalarySuperDecoratorBase<CityCompensatoryAdjustment> {

      pointcut myAdvice(): adviceexecution()
        && within(SuperDecoratorCityCompensatoryAdjustment);

      Object around(CityCompensatoryAdjustment s) : decoratorCalculate(s) && !cflow(myAdvice()) {
        return new PremiumCityAdjustment(s).calculate();
      }
    }



    This way we had a set of core decorators for the main domain model, and another set of deployment specific decorators that decorated the core ones through aspect weaving. By adopting this strategy we were able to keep the decorators lightweight, avoid deep inheritance hierarchies, and keep the customizable codebase completely separate from the core one.