
Tuesday, January 24, 2012

List Algebras and the fixpoint combinator Mu

In my last post on recursive types and the fixed point combinator, we saw how type equations of the form a = F(a), where F is a type constructor, have solutions of the form Mu a . F, where Mu is the fixed point combinator. Substituting the solution back into the original equation, we get ..

Mu a . F = F {Mu a . F / a}

where the rhs indicates substitution of all free a's in F by Mu a . F.

Using this we also got the type equation for ListInt as ..

ListInt = Mu a . Unit + Int x a
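This type-level fixpoint can be sketched directly in Scala. A hedged illustration (the names Fix, ListF, NilF and ConsF are mine, not from the earlier post):

```scala
// A type-level fixpoint: Fix ties the recursive knot, so that
// Fix[F] is a solution of the equation X = F(X).
import scala.language.higherKinds

case class Fix[F[_]](unfix: F[Fix[F]])

// The list "pattern" functor F(X) = Unit + Int x X, as an ADT.
sealed trait ListF[+A]
case object NilF extends ListF[Nothing]
case class ConsF[A](head: Int, tail: A) extends ListF[A]

// ListInt = Mu a . Unit + Int x a
type ListInt = Fix[ListF]

// The list [1, 2], built by rolling up one functor layer at a time.
val xs: ListInt = Fix(ConsF(1, Fix(ConsF(2, Fix(NilF)))))
```

Note that Fix[ListF] contains exactly one layer of ListF at each level, so unrolling it reproduces the substitution Mu a . F = F {Mu a . F / a}.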

In this post we view the same problem from a category theory point of view. This post assumes an understanding of quite a few category theory concepts. If you are unfamiliar with any of them, you can refer to some basic text on the subject.

We start with the definition of ListInt as in the earlier post ..

// nil takes no arguments and returns a List data type
nil : 1 -> ListInt

// cons takes 2 arguments and returns a List data type
cons : (Int x ListInt) -> ListInt


Combining the two functions above, we get a single function as ..

in = [nil, cons] : 1 + (Int x ListInt) -> ListInt

We can say that this forms an algebra of the functor F(X) = 1 + (Int x X). Let's represent this algebra by (Mu F, in) or (Mu F, [nil, cons]), where Mu F is ListInt in the above combined function.

As a next step we show that the algebra (Mu F, [nil, cons]) forms an initial algebra representing the data type of Lists over a given set of integers. Here we are dealing with lists of integers though the same result can be shown for lists of any type A.

In order to show that (Mu F, [nil, cons]) forms an initial F-algebra, we consider an arbitrary F-algebra (C, phi), where phi is an arrow out of the sum type, given by :

c : 1 -> C
h : (Int x C) -> C


and the join given by [c, h] : 1 + (Int x C) -> C

By definition, if (Mu F, [nil, cons]) is to form an initial F-algebra, then for any arbitrary F-algebra (C, phi) in the category, we need to find a unique homomorphism f: Mu F -> C. So for the algebra [c, h] the following diagram must commute, which means we must have a unique solution to the following 2 equations ..

f o nil = c
f o cons = h o (id x f)


It's easy to see that this system of equations has a unique solution, namely fold(c, h). It's the catamorphism represented by ..

f = {[c, h]} : ListInt -> C
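To see the solution concretely, here is a hedged Scala sketch (the encoding names are mine): fold(c, h) is defined by structural recursion, and unrolling its definition shows that it satisfies both equations.

```scala
// Self-contained encoding of ListInt as the fixpoint of the list functor.
import scala.language.higherKinds

case class Fix[F[_]](unfix: F[Fix[F]])
sealed trait ListF[+A]
case object NilF extends ListF[Nothing]
case class ConsF[A](head: Int, tail: A) extends ListF[A]
type ListInt = Fix[ListF]

// fold(c, h) is the unique homomorphism out of the initial algebra:
//   fold(c, h)(nil)        = c                     -- f o nil  = c
//   fold(c, h)(cons(n, t)) = h(n, fold(c, h)(t))   -- f o cons = h o (id x f)
def fold[C](c: C, h: (Int, C) => C)(xs: ListInt): C =
  xs.unfix match {
    case NilF        => c
    case ConsF(n, t) => h(n, fold(c, h)(t))
  }

val xs: ListInt = Fix(ConsF(1, Fix(ConsF(2, Fix(NilF)))))
val sum = fold[Int](0, _ + _)(xs)   // 1 + (2 + 0) = 3
```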

This proves that (Mu F, [nil, cons]) is an initial F-algebra over the endofunctor F(X) = 1 + (Int x X). It can also be shown that an initial algebra in : F (Mu F) -> Mu F is an isomorphism, and that the carrier of the initial algebra is (up to isomorphism) a fixed point of the functor. Well, that may sound a bit of a mouthful, but we can discuss it in more detail in one of my subsequent posts. There's a well-established lemma due to Lambek that proves this. I can't do it in this blog post, since it needs some more prerequisites to be established beforehand, which would make this post a bit bloated. But it's really a fascinating proof and I promise to take it up in one of my upcoming posts. We will also see many properties of initial algebras and how they can be combined to define many of the properties of recursive data types in a purely algebraic way.

As I promised in my last post, here we have seen the other side of Mu - we started with the list definition, showed that it forms an initial algebra over the endofunctor F(X) = 1 + (Int x X) and arrived at the same conclusion that Mu F is a fixed point. Or Mu is the fixed point combinator.

Sunday, January 08, 2012

Learning the type level fixpoint combinator Mu

I blogged on Mu, the type-level fixpoint combinator, some time back. I discussed how Mu can be implemented in Scala and how you can use it to derive a generic model for catamorphism and some cool type level data structures. Recently I have been reading TAPL by Benjamin Pierce, which gives a very thorough treatment of the theories and implementation semantics of types in a programming language.

And Mu we meet again. Pierce does a very nice job of explaining how Mu does for types what Y does for values. In this post, I will discuss my understanding of Mu from a type theory point of view, much of which follows TAPL's explanation.

As we know, the collection of types in a programming language forms a category, and any recursive type equation can be converted into an endofunctor on that category. In an upcoming post I will discuss how the fixed point that we get from Mu translates to an isomorphism in the diagram of categories.

Let's have a look at the Mu constructor - the fixed point for type constructors. What does it mean ?

Here's the ordinary fixed point combinator for functions (from values to values) ..

Y f = f (Y f)

and here's Mu

Mu f = f (Mu f)

Quite similar in structure to Y, the difference being that Mu operates on type constructors. Here f is a type constructor (one that takes a type as input and generates another type). List is the most commonly used type constructor. You give it a type Int and you get a concrete type ListInt.

So, Mu takes a type constructor f and gives you a type T. This T is the fixed point of f, i.e. f T = T.
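For comparison, the value-level combinator Y can be written directly in Scala. A sketch (since Scala is strict, we take fixpoints of functionals over functions rather than of arbitrary values):

```scala
// fix f returns a function g satisfying g = f(g):
// the fixed point of the functional f.
def fix[A, B](f: (A => B) => (A => B)): A => B =
  (a: A) => f(fix(f))(a)

// The factorial functional: given "the rest of the recursion" rec,
// perform one step.
val factorial: Int => Int =
  fix[Int, Int](rec => n => if (n <= 1) 1 else n * rec(n - 1))
```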

Consider the following recursive definition of a List ..

// nil takes no arguments and returns a List data type
nil : 1 -> ListInt

// cons takes 2 arguments and returns a List data type
cons : (Int x ListInt) -> ListInt


Taken together we would like to solve the following equation :

a = Unit + Int x a     // ..... (1)

Now this is recursive and can be unfolded infinitely as

a = Unit + Int x (Unit + Int x a)
  = Unit + Int x (Unit + Int x (Unit + Int x a))
  = ...


TAPL shows that this equation can be represented in the form of an infinite labeled tree and calls this infinite type regular. So, generally speaking, we have an equation of the form a = τ where

1. if a does not occur in τ, then we have a finite solution which, in fact, is τ
2. if a occurs in τ, then we have an infinite solution represented by an infinite regular tree

So the above equation is of the form a = ... a ... or we can say a = F(a) where F is the type constructor. This highlights the recursion of types (not of values). Hence any solution to this equation will give us an object which will be the fixed point of the equation. We call this solution Mu a . F.

Since Mu a . F is a solution to a = F(a), we have the following:

Mu a . F = F {Mu a . F / a}, where the rhs indicates substitution of all free a's in F by Mu a . F.

Here Mu is the fixed point combinator which takes the type constructor F and gives us a type, which is the fixed point of F. Using this idea, the above equation (1) has the solution ListInt, which is the fixed point type ..

ListInt = Mu a . Unit + Int x a

In summary, we express recursive types using the fixpoint type constructor Mu and show that Mu generates the fixed point for a type constructor just as Y generates the same for functions on values.

Monday, August 10, 2009

Static Typing gives you a head start, Tests help you finish

In one of my earlier posts (almost a year back) I had indicated how type driven modeling leads to succinct domain structures that inherit the following goodness :

  • Less code to write, since the static types encapsulate lots of business constraints

  • Fewer tests to write, since the compiler writes them implicitly for you


In a recent thread on Twitter, I mentioned a comment that Manuel Chakravarty made on one of the blog posts of Michael Feathers ..

"Of course, strong type checking cannot replace a rigorous testing discipline, but it makes you more confident to take bigger steps."

The statement resonated with my own feelings on static typing, which I have been practising for quite some time now using Scala. As the twitter thread became louder, Patrick Logan made an interesting comment on my blog on this very subject ..

This is interesting... it is a long way toward the kind of explanation I have been looking for re: "type-driven programming" with rich type systems as opposed to "test-driven programming" with dynamic languages.

I am still a big fan of the latter and do not fully comprehend the former.

I'd be interested in your "type development" process - without "tests" of some kind, the type system may validate the "type soundness" of your types, but how do you know they are the types you actually *want* to have proven sound?


and the conversation became somewhat longer, with both of us trying to look into the practices and subtleties that domain modeling with type constraints imposes on the programmer. One of the points that Patrick raised was regarding the kind of tests that you would typically provide for code like this.

Let me try to look into some of the real-life code that I have been applying this practice to. When I have a code snippet like this ..

/**
 * A trade needs to have a Trading Account
 */
trait Trade {
  type T
  val account: T
  def valueOf: Unit
}

/**
 * An equity trade needs to have a Stock as the instrument
 */
trait EquityTrade extends Trade {
  override def valueOf {
    //.. calculate value
  }
}

/**
 * A fixed income trade needs to have a FixedIncome type of instrument
 */
trait FixedIncomeTrade extends Trade {
  override def valueOf {
    //.. calculate value
  }
}
//..
//..

/**
 * Accrued Interest is computed only for fixed income trades
 */
trait AccruedInterestCalculatorComponent {
  type T

  val acc: AccruedInterestCalculator
  trait AccruedInterestCalculator {
    def calculate(trade: T)
  }
}


I need to do validations and write up unit and functional tests to check ..

  • EquityTrade needs to work only on equity class of instruments

  • FixedIncomeTrade needs to work on fixed incomes only and not on any other instruments

  • For every method in the domain model that takes an instrument or trade, I need to check that the passed-in instrument or trade is of the proper type, and write unit tests that check the same. AccruedInterestCalculator takes a trade as an argument, which needs to be of type FixedIncomeTrade, since accrued interest is meaningful only for bond trades. The method AccruedInterestCalculator#calculate() needs to do an explicit check for the trade type, which makes me write unit tests for valid as well as invalid use cases.


Now let us introduce the type constraints that a statically typed language with a powerful type system offers.

trait Trade {
  type T <: Trading
  val account: T

  //..as above
}

trait EquityTrade extends Trade {
  type S <: Stock
  val equity: S

  //.. as above
}

trait FixedIncomeTrade extends Trade {
  type FI <: FixedIncome
  val fi: FI

  //.. as above
}
//..


The moment we add these type constraints our domain model becomes more expressive and implicitly constrained with a lot of business rules .. as for example ..

  1. A Trade takes place on a Trading account only

  2. An EquityTrade only deals with Stocks, while a FixedIncomeTrade deals exclusively with FixedIncome type of instruments
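Here is a minimal, self-contained sketch of how such a bound behaves; the concrete types Trading, TradingAccount, Stock and Equity are assumed for illustration:

```scala
trait Trading
class TradingAccount extends Trading
trait Stock
class Equity extends Stock

trait Trade { type T <: Trading; val account: T }
trait EquityTrade extends Trade { type S <: Stock; val equity: S }

// Compiles: both bounds are honored.
val et = new EquityTrade {
  type T = TradingAccount
  type S = Equity
  val account = new TradingAccount
  val equity = new Equity
}

// The compiler rejects any assembly that violates a bound:
// e.g. type S = String fails because String is not a subtype of Stock.
```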


Consider this more expressive example that slaps the domain constraints right in front of you without them being buried within procedural code logic in the form of runtime checks. Note that in the following example, all the types and vals that were left abstract earlier are being instantiated while defining the concrete component. And you can only instantiate honoring the domain rules that you have defined earlier. How useful is that as a succinct way to write concise domain logic without having to write any unit test ?

object FixedIncomeTradeComponentRegistry extends TradingServiceComponentImpl
  with AccruedInterestCalculatorComponentImpl
  with TaxRuleComponentImpl {

  type T = FixedIncomeTrade
  val tax = new TaxRuleServiceImpl
  val trd = new TradingServiceImpl
  val acc = new AccruedInterestCalculatorImpl
}


Every wiring that you do above is statically checked for consistency - hence the FixedIncome component that you build will honor all the domain rules that you have stitched into it through explicit type constraints.

The good part is that these business rules will be enforced by the compiler itself, without me having to write any additional explicit check in the code base. And the compiler is also the testing tool - you will not be able to instantiate a FixedIncomeTrade with an instrument that is not a subtype of FixedIncome.

Then how do we test such type constrained domain abstractions ?

Rule #1: Type constraints are tested by the compiler. You cannot instantiate an inconsistent component that violates the constraints that you have incorporated in your domain abstractions.

Rule #2: You need to write tests only for the business logic that forms the procedural part of your abstractions. Obviously! Types cannot be of much help there. But if you are using a statically typed language, get the maximum out of the abstractions that the type system offers. There are situations when you will discover repetitive procedural business logic with minor variations sprinkled across the code base. If you are working with a statically typed language, model them up into a type family. Your tests for that logic will be localized *only* within the type itself. This is true for dynamically typed languages as well. Where static typing gets the advantage is that all usages will be statically checked by the compiler. In a statically typed language, you think and model in "types". In a dynamically typed language, you think in terms of the messages that the abstraction needs to handle.

Rule #3: But you need to create instances of your abstractions within the tests. How do you do that ? Very soon you will notice that the bulk of your tests is being polluted by complicated instantiations using concrete val or type injection. What I usually do is use the generators that ScalaCheck offers. ScalaCheck offers a special generator, org.scalacheck.Arbitrary.arbitrary, which generates arbitrary values of any supported type. And once you have the generators in place, you can use them to write properties that do the necessary testing of the rest of your domain logic.

Sunday, January 04, 2009

Higher Order abstractions in Scala with Type Constructor Polymorphism

Abstractions at a higher level through type constructor polymorphism. Good type systems are expressive enough to conceal the implementation complexity and expose *only* what matters to the developer. People often cringe at the complexity of Scala's type system and how it serves as a barrier to entry into mainstream programming. As Michael Feathers recently noted on Twitter, the unfortunate fact is that people often jump at the esoteric parts of a language before looking at the simpler subset, which they will be using 90% of the time. And I think Scala has that sweet spot, where you need not mess around too much with variances, implicits and existentials and yet come up with a nice, concise and functional codebase.

In this post, I discuss the already introduced intimidating phrase "Type Constructor Polymorphism" through a series of code snippets ranging from toys to some real-world stuff. The idea is, once again, not to evangelize type theory intricacies, but share some of the experiences of how this feature in Scala's type system can help write idiomatic code, while staying away from the complexities of its underlying implementation.

Jump on ..

We have a list of Option[String] that we need to abstract over and compute some value. Say, for the sake of keeping the example simple, we will calculate the sum of lengths of all the strings present ..

val employees: List[Option[String]] =
  List(Some("dave"), None, Some("john"), Some("sam"))

val n: Int =
  employees.map { x =>
    x match {
      case Some(name) => name.length
      case None => 0
    }
  }.elements.reduceLeft[Int](_ + _)
println(n)


Let us take another problem that needs to abstract over a different List structure, a List of List of Strings, and compute the same result as before, i.e. the sum of lengths of all the strings encountered in the collection ..

val brs: List[List[String]] =
  List(List("dave", "john", "sam"), List("peter", "robin", "david"), List("pras", "srim"))

val m: Int =
  brs.flatMap {x => x.map {_.length}}
     .elements.reduceLeft[Int](_ + _)
println(m)


Do you see any commonality in the solution structure in the above snippets ? After all, the problem space has a common generic structure ..

  1. we have a List with some String values abstracted in different forms

  2. need to iterate over the list

  3. do some stuff with elements in the list and compute an Int value


Unfortunately the actual solution structures look quite different and involve a lot of digging into the underlying representations within the collection itself. And this is because we cannot abstract over the type constructor (the List in this case) that takes another type constructor as an argument (Option[String] in the first case and List[String] in the second case).

Enter type constructor polymorphism.

Sounds intimidating ? Maybe, but ask the Haskellers .. they have long been using typeclasses successfully in comprehensions, parser combinators and embedded DSLs, programming at a different level of abstraction.

Scala supports type constructor polymorphism since 2.5, and the details have been discussed in a recent paper by Adriaan Moors et al in OOPSLA 2008.

Here is a snippet of the Scala code that works seamlessly for both of the above cases ..

val l: List[Int] = employees.flatMapTo[Int, List]{_.map{_.length}}
val sum: Int = l.elements.reduceLeft[Int](_ + _)
println(sum)


The above code abstracts over List through higher order parametric polymorphism, i.e. independent of whether the List parameter is an Option[] or another List[]. Incidentally both of them (List and Option) are monads and flatMapTo abstracts a monadic computation, hiding all details of type constructor polymorphism from the developer.
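As an aside, the snippets above use Scala 2.5/2.7-era APIs (elements, flatMapTo) that have since been removed; in current Scala the same abstraction is plain flatMap, which handles both shapes uniformly:

```scala
val employees: List[Option[String]] =
  List(Some("dave"), None, Some("john"), Some("sam"))
val brs: List[List[String]] =
  List(List("dave", "john", "sam"), List("peter", "robin", "david"), List("pras", "srim"))

// One solution structure for both: flatMap flattens one layer,
// whether that layer is an Option or a List.
val sumOpt: Int = employees.flatMap(_.map(_.length)).sum   // 4 + 4 + 3 = 11
val sumList: Int = brs.flatMap(_.map(_.length)).sum        // 11 + 15 + 8 = 34
```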

Now here is some real life example (elided for simplicity) ..

Here are the simple domain models for Customer, Instrument and Trade, used for modeling a use case where a Customer can order for the Trade of an Instrument in an exchange.

case class Customer(id: Int, name: String)
case object nomura extends Customer(1, "NOMURA")
case object meryll extends Customer(2, "MERYLL")
case object lehman extends Customer(3, "LEHMAN")

case class Instrument(id: Int)
case object ibm extends Instrument(1)
case object sun extends Instrument(2)
case object google extends Instrument(3)

case class Trade(ref: Int, ins: Instrument, qty: Int, price: Int)


And we fetch the following list through a query from the database. It is a List of tuples where each tuple consists of a Customer and a trade that has been executed based on the Order he placed at the exchange. And here is the snippet of the code that computes the sum total of the values of all trades executed in the day for all customers.

val trades: List[(Customer, Option[Trade])] =
  List((nomura, Some(Trade(100, ibm, 20, 12))),
       (meryll, None), (lehman, Some(Trade(200, google, 10, 10))))

val ts: List[Option[Trade]] = trades.map(_._2)
val t: List[Int] = ts.flatMapTo[Int, List]{_.map{x => x.qty * x.price}}
val value = t.elements.reduceLeft[Int](_ + _)
println(value)


Really not much different from the above simple cases where we were dealing with toy examples - isn't it ? The structure of the solution is the same irrespective of the complexity of data being stored within the collections. The iteration is being done at a much higher level of abstraction, independent of the types stored within the container. And as I mentioned above, flatMapTo is the secret sauce in the above solution structure that abstracts the building of the new container hiding the inner details. To get more into the innards of flatMapTo and similar higher order abstractions, including the new form of Iterable[+T], have a look at the OOPSLA paper referenced above.

Postscript: In all the snippets above, I have been explicit about all type signatures, just for the sake of easier understanding of my passionate reader. In reality, almost all of these will be inferred by Scala.

Monday, October 20, 2008

How Scala's type system works for your Domain Model

When you have your type system working for you and model many of the domain constraints without a single line of runtime checking code, you can save much on the continuous tax payable down the line. I have been working on a domain model in Scala, and enjoying the expressiveness that the type system brings to the implementation. There is so much that you can do with minimal ceremony, just using the powers of abstract types, abstract vals and self type annotations. In this post, I will share some of my experiences in implementing explicit domain constraints and invariants using Scala's type system, and review some of the benefits that we can derive out of it with respect to code readability, maintainability and succinct expression of the essence of the model.

I am talking about modeling security trades from a brokerage solution stack .. (horribly simplified) ..


object TradeComponent {
  import ReferenceDataComponent._
  /**
   * A trade needs to have a Trading Account
   */
  trait Trade {
    type T <: Trading

    val account: T
    def valueOf: Unit
  }

  /**
   * An equity trade needs to have a Stock as the instrument
   */
  trait EquityTrade extends Trade {
    type S <: Stock

    val equity: S
    def valueOf {
      //.. calculate value
    }
  }

  /**
   * A fixed income trade needs to have a FixedIncome type of instrument
   */
  trait FixedIncomeTrade extends Trade {
    type FI <: FixedIncome

    val fi: FI
    def valueOf {
      //.. calculate value
    }
  }
  //..
  //..
}



All of the above traits can be mixed in with actual service components and provide explicit abstraction over the constrained types that they declare as members. Note we have not yet committed to any concrete type in any of the above abstractions so far, but provided declarative constraints, just enough to model their domain invariants, e.g. a fixed income trade can only be instantiated with an instrument whose type is bounded on the upper side by FixedIncome. Once declared, the compiler will ensure this forever ..

Now we define a helper component that is only applicable for FixedIncome trades .. Note the appropriate bound on the type, explicitly declared to enforce the business rule ..


object TradeComponent {
  //.. as above
  //..

  /**
   * Accrued Interest is computed only for fixed income trades
   */
  trait AccruedInterestCalculatorComponent {
    type T <: FixedIncomeTrade

    val acc: AccruedInterestCalculator
    trait AccruedInterestCalculator {
      def calculate(trade: T)
    }
  }

  /**
   * Implementation of AccruedInterestCalculatorComponent. Does not
   * yet commit on the actual type of the instrument.
   */
  trait AccruedInterestCalculatorComponentImpl
    extends AccruedInterestCalculatorComponent {
    class AccruedInterestCalculatorImpl extends AccruedInterestCalculator {
      override def calculate(trade: T) = {
        //.. logic
      }
    }
  }
  //..
  //..
}



Let us now define a generic service component that provides trading service for all types of trades and instruments. Ignore the details of what the service offers, details have been elided to focus on how we can build expressive domain models using the abstraction capabilities offered by Scala's type system. The generic service still does not commit to any implementation. It uses the services of another component TaxRuleComponent using self type annotations.


object TradeComponent {

  //.. as above
  //..

  /**
   * Generic trading service
   */
  trait TradingServiceComponent {
    type T <: Trade

    val trd: TradingService
    trait TradingService {
      def valueTrade(t: T)
    }
  }

  /**
   * Implementation of generic trading service
   */
  trait TradingServiceComponentImpl extends TradingServiceComponent {
    this: TaxRuleComponent =>
    class TradingServiceImpl extends TradingService {
      override def valueTrade(t: T) = {
        val l = tax.getTaxRules(t.account)
        //.. logic
        t.valueOf
      }
    }
  }
  //..
  //..
}



When you define your type constraints correctly, Scala provides great support for wiring your components together through explicit type and value definitions in the final assembly. The following component assembly uses the Cake pattern that Jonas Boner recommended for implementing dependency injection. We are going to implement a concrete component assembly for fixed income trades. We need to define the concrete type only once in the object, and all type dependencies will be resolved through Scala compiler magic. We also need to provide concrete implementations for all of the abstract vals that we declared above while defining the generic components ..


object TradeComponent {

  //.. as above
  //..

  /**
   * The actual component that will be published and commits on the concrete types and vals
   */
  object FixedIncomeTradeComponentRegistry extends TradingServiceComponentImpl
    with AccruedInterestCalculatorComponentImpl
    with TaxRuleComponentImpl {

      type T = FixedIncomeTrade
      val tax = new TaxRuleServiceImpl
      val trd = new TradingServiceImpl
      val acc = new AccruedInterestCalculatorImpl
  }
  //..
  //..
}



Now that's a lot of domain constraints implemented only through the power of the type system. And these business rules are explicit within the code base, not buried within a spaghetti of procedural routines, making your code maintainable and readable. Imagine how many lines of runtime checks we would have to write to implement the same in a dynamically typed language. And, btw, when you express your constraints through the type system, you don't need to write a single line of unit test for their verification - the compiler does that for you. Next time you wonder how concise or terse your favorite dynamically typed language is, don't forget to factor that count in.

Monday, June 02, 2008

Java to Scala - Smaller Inheritance hierarchies with Structural Typing

I was going through a not-so-recent Java code base that contained the following structure for modeling the employee hierarchy of an organization. This looks quite representative of idiomatic Java being used to model a polymorphic hierarchy for designing a payroll generation application.


public interface Salaried {
  int salary();
}

public class Employee implements Salaried {
  //..
  //.. other methods

  @Override
  public int salary() {
    // implementation
  }
}

public class WageWorker implements Salaried {
  //..
  //.. other methods

  @Override
  public int salary() {
    // implementation
  }
}

public class Contractor implements Salaried {
  //..
  //.. other methods

  @Override
  public int salary() {
    // implementation
  }
}



And the payroll generation class (simplified for brevity ..) that actually needs the subtype polymorphism between the various concrete implementations of the Salaried interface.


public class Payroll {
  public int makeSalarySheet(List<Salaried> emps) {
    int total = 0;
    for(Salaried s : emps) {
      total += s.salary();
    }
    return total;
  }
}



While implementing in Java, have you ever wondered whether using public inheritance is the best approach to model such a scenario ? After all, in the above class hierarchy, the classes Employee, WageWorker and Contractor do not have *anything* in common except the fact that all of them are salaried persons and that subtype polymorphism has to be modeled *only* for the purpose of generating paysheets for all of them through a single API. In other words, we are coupling the entire class hierarchy through a compile-time static relationship only for the purpose of unifying a single commonality in behavior.

Public inheritance has frequently been under fire, mainly because of the coupling that it induces between the base and the derived classes. Experts say inheritance breaks encapsulation, and regard it as the second strongest relationship between classes (second only to the friend relationship in C++). Interface-driven programming has its advantages in promoting loose coupling between the contracts that it exposes and their concrete implementations. But interfaces in Java also pose problems when it comes to the evolution of an API - once an interface is published, it is not possible to make any changes without breaking client code. No wonder we find design patterns like the Extension Object, or strict guidelines for the evolution of abstractions being enforced in big projects like Eclipse.

Finer Grained Polymorphism

Structural typing offers the ability to reduce the scope of polymorphism to only the subset of behaviors that need to be common between the classes. Just as in duck typing, commonality in abstractions does not mean that they belong to one common type, but only that they respond to a common set of messages. Scala offers the benefit of both worlds through its implementation of structural typing - a compile-time checked duck typing. Hence we have a nice solution for unifying certain behaviors of otherwise unrelated classes. The entire class hierarchy need not be related through a static compile-time subtyping relationship in order to be processed polymorphically over a certain set of behaviors. As an example, I tried modeling the above application using Scala's structural typing ..


case class Employee(id: Int) { def salary: Int = //.. }
case class DailyWorker(id: Int) { def salary: Int = //.. }
case class Contractor(id: Int) { def salary: Int = //.. }

class Payroll {
  def makeSalarySheet(emps: List[{ def salary: Int }]) = {
    (0 /: emps)(_ + _.salary)
  }
}

val l = List[{ def salary: Int }](DailyWorker(1), Employee(2), Employee(1), Contractor(9))
val p = new Payroll
println(p.makeSalarySheet(l))



The commonality in behavior between the above classes is through the method salary and is only used in the method makeSalarySheet for generating the payroll. We can generalize this commonality into an anonymous type that implements a method having the same signature. All classes that implement a method salary returning an Int are said to be structurally conformant to this anonymous type { def salary: Int }. And of course we can use this anonymous type as a generic parameter to a Scala List. In the above snippet we define makeSalarySheet accept such a List as parameter, which will include all types of workers defined above.
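Putting it together, a minimal runnable version looks like this (the salary figures are made up for illustration; modern Scala asks you to enable reflective calls for structural types):

```scala
import scala.language.reflectiveCalls

case class Employee(id: Int) { def salary: Int = 1000 }
case class DailyWorker(id: Int) { def salary: Int = 300 }
case class Contractor(id: Int) { def salary: Int = 800 }

// The only thing relating these classes is the structural type
// { def salary: Int } - no common base class or interface.
def makeSalarySheet(emps: List[{ def salary: Int }]): Int =
  emps.map(_.salary).sum

val total = makeSalarySheet(List(DailyWorker(1), Employee(2), Contractor(9)))
// 300 + 1000 + 800 = 2100
```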

The Smart Adapter

Actually it gets better than this with Scala. Suppose that in the above model, the name salary is not meaningful for DailyWorkers, and the standard business term for their earnings is wage. Hence let us assume that for the DailyWorker, the class is defined as ..

case class DailyWorker(id: Int) { def wage: Int = //.. }


Obviously the above scheme will not work now, and the unfortunate DailyWorker falls out of the closure of all types that qualify for payroll generation.

In Scala we can use implicit conversion - I call it the Smart Adapter Pattern .. we define a conversion function that automatically converts wage into salary and instructs the compiler to adapt the wage method to the salary method ..


case class Salaried(salary: Int)
implicit def wageToSalary(in: {def wage: Int}) = Salaried(in.wage)



The makeSalarySheet API now changes accordingly to process a List of objects that either implement an Int-returning salary method or can be implicitly converted to one with the same contract. This is indicated by <% and is known as a view bound in Scala. Here is the implementation of the class Payroll that incorporates this modification ..


class Payroll {
  def makeSalarySheet[T <% { def salary: Int }](emps: List[T]) = {
    (0 /: emps)(_ + _.salary)
  }
}



Of course the rest of the program remains the same, since all the conversion and implicit magic takes place in the compiler .. and we can still process all objects polymorphically even with a different method name for DailyWorker. Here is the complete source ..


case class Employee(id: Int) { def salary: Int = //.. }
case class DailyWorker(id: Int) { def wage: Int = //.. }
case class Contractor(id: Int) { def salary: Int = //.. }

case class Salaried(salary: Int)
implicit def wageToSalary(in: {def wage: Int}) = Salaried(in.wage)

class Payroll {
  def makeSalarySheet[T <% { def salary: Int }](emps: List[T]) = {
    (0 /: emps)(_ + _.salary)
  }
}

val l = List[{ def salary: Int }](DailyWorker(1), Employee(2), Employee(1), Contractor(9))
val p = new Payroll
println(p.makeSalarySheet(l))



With structural typing, we can afford to be more conservative with public inheritance. Inheritance should be used *only* to model a true subtype relationship between classes (aka LSP). Inheritance definitely has lots of uses; we only need to use our judgement not to misuse it. It is a strong relationship and, as the experts say, you should always implement the weakest relationship that correctly models your problem domain.