Showing posts with label clojure.

Monday, July 22, 2013

The Realm of Racket is an enjoyable read



There are many ways to write a programming language book. You can start by introducing the syntax and semantics of the language in a naturally comprehensible sequence of complexity and usage. Or you can choose to introduce the various features of the language with real world examples using the standard library that the language offers. IIRC Accelerated C++ by Andrew Koenig and Barbara Moo takes this route. I really loved this approach and enjoyed reading the book.

Of course Matthias Felleisen is known for a third way of teaching a language - the fun way. The Little Schemer and The Seasoned Schemer introduced a novel way of learning a language. The Realm of Racket follows a similar style of teaching the latest descendant of Lisp, one game at a time. The implementation of every game introduces the idioms and language features with increasing degrees of complexity. There's a nice progression that helps you understand the complex features of the language by building upon the knowledge of the simpler ones acquired in earlier chapters.

The book begins with a history of the Racket programming language and how it evolved as a descendant of Scheme, how it makes programming fun and how it can be used successfully as an introductory language for students aspiring to learn programming. Then it starts with Getting Started with DrRacket for the impatient programmer and explains the IDE that will serve as your companion for the entire duration of your playing around with the book.

Every chapter introduces some of the new language features and either develops a new game or builds on some improvement of a game developed earlier. This not only demonstrates a real world usage of the syntax and semantics of the language but also makes the programmer aware of how the various features interact as a whole to build complex abstractions out of simpler ones. The book also takes every pain to defer the complexity of the features to the right point so that the reader is not burdened upfront. e.g. Lambdas are introduced only when the authors have covered all the basics of programming with functions and recursion. Mutants are introduced only after teaching the virtues of immutability. For loops and comprehensions appear only when the book has introduced all the list processing functions like folds, maps and filters. And then the book goes into great depth explaining why the language has so many variants of the for loop like for/list, for/fold, for*, for/first, for/last etc. In this entire discussion of list processing, for loops etc., I would have loved to see a more detailed discussion on sequences in the book. Sequences abstract a large number of data types and, much like Clojure, introduce a new way of API design - a single sequence to rule them all. API designers would surely like to have more of this sauce as part of their repertoire. Racket's uniform way of handling sequences is definitely a potent model of abstraction as compared to Scheme or other versions of Lisp.
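
To give a flavor of the Clojure comparison I just made, here's a tiny illustrative REPL session (my example, not from the book) showing one sequence abstraction fronting vectors, maps and strings alike:

(map inc [1 2 3])          ;=> (2 3 4)
(map key {:a 1 :b 2})      ;=> (:a :b)
(map int "abc")            ;=> (97 98 99)
(take 3 (iterate inc 0))   ;=> (0 1 2)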

The games developed progress in complexity and we can see the powers of the language being put to great use when the authors introduce lazy evaluation and memoized computations and use them to improve the Dice of Doom. Then the authors introduce distributed game development which is the final frontier that the book covers. It's truly an enjoyable ride through the entire series.

The concluding chapter talks about some of the advanced features like classes, objects and meta-programming. Any Lisp book would be incomplete without a discussion of macros and language development. But I think the authors have done well to defer these features till the end. Considering that this is a book for beginners learning the language, this seems like a sane measure.

However, as a programmer experienced in other languages and wanting to take a look at Racket, I would have loved to see some coverage on testing. Introducing a bit on testing practices, maybe a unit testing library, would have made the book more complete.

The style of writing of this book has an underlying tone of humor and simplicity, which makes it a thoroughly enjoyable read. The use of illustrations and comics takes away the monotony of learning the prosaics of a language. And the fact that Racket is a simple enough language makes this combination with pictures very refreshing.

On the whole, as a book introducing a language, The Realm of Racket is a fun read. I enjoyed reading it a lot and recommend it without reservations for your bookshelf.



Monday, February 07, 2011

Why I made DSLs In Action polyglotic


Since I started writing DSLs In Action (buy here)*, a lot of my friends and readers asked me about my decision to make the book polyglotic. Indeed right from the word go, I had decided to treat the topic of DSL based design without a significant bias towards any specific language. Even today after the book has been published, many readers come up to me and ask the same question. I thought I would clarify my points on this subject in this blog post.

A DSL is a vehicle to speak the language of the domain on top of an implementation of a clean domain model. Whatever be the implementation language of the DSL, you need to make it speak the ubiquitous language of the domain. And by language I mean the syntax and the semantics that the experts of the particular domain are habituated to use.

A DSL is a facade of linguistic abstraction on top of your domain's semantic model. As a DSL designer it's your responsibility to make it as expressive to your users as possible. It starts with the mapping of the problem domain to the solution domain artifacts, converging on a common set of vocabulary for all the stakeholders of the implementation and finally getting into the nuts and bolts of how to implement the model and the language.

It's a known fact that there's NO programming language that can express ALL forms of abstraction in the most expressive way. So as a language designer you always have the flexibility to choose the implementation language based on your solution domain model. You make this choice as a compromise of all the forces that come up in any software development project. You have the timeline, the resident expertise within your team and other social factors to consider before you converge to the final set of languages. In short there's always a choice of the language(s) that you make just like any other software development effort.

Being Idiomatic

The same problem domain can be modeled in the solution domain using radically different forms of abstraction, depending on the language that you use and the power of abstraction that it offers. The same set of domain rules may be implemented using the type system of a statically typed language, while you need the power of meta-programming to implement the same concepts idiomatically in a dynamically typed language. Even within dynamic languages, idioms vary a lot. Clojure or the Lisp family offers compile time meta-programming in the form of macros to give your users the specific custom syntactic structures that they need for the domain, while Ruby and Groovy do the same with runtime meta-programming.

Here's an example code snippet from my book of an internal DSL that registers a security trade and computes its net cash value using the principal amount and the associated tax/fee components. It's implemented in Ruby using all of the Ruby idioms.

str = <<END_OF_STRING
new_trade 'T-12435' for account 'acc-123' to buy 100 shares of 'IBM',
                    at UnitPrice = 100
END_OF_STRING

TradeDSL.trade str do |t|
  CashValueCalculator.new(t).with TaxFee, BrokerCommission do |cv|
    t.cash_value = cv.value 
    t.principal = cv.p
    t.tax = cv.t
    t.commission = cv.c
  end
end


The above DSL has a different geometry than what you would get with the same domain concepts implemented using Clojure .. Have a look at the following snippet of a similar use case implemented in Clojure and executed from the Clojure REPL:

user> (def request {:ref-no "r-123", :account "a-123", :instrument "i-123", 
         :unit-price 20, :quantity 100})
#'user/request

user> (trade request)
{:ref-no "r-123", :account "a-123", :instrument "i-123", :principal 2000, :tax-fees {}}

user> (with-tax-fee trade
        (with-values :tax 12)
        (with-values :commission 23))
#'user/trade

user> (trade request)
{:ref-no "r-123", :account "a-123", :instrument "i-123", :principal 2000, 
  :tax-fees {:commission 460, :tax 240}}

user> (with-tax-fee trade
        (with-values :vat 12))
#'user/trade

user> (trade request)
{:ref-no "r-123", :account "a-123", :instrument "i-123", :principal 2000, 
  :tax-fees {:vat 240, :commission 460, :tax 240}}

user> (net-value (trade request))
2940


The above DSL is implemented using syntactic macros of Clojure for custom syntax building and standard functional programming idioms that the language supports.

In summary, we need to learn multiple languages in order to implement domain models idiomatically in each of them. DSLs In Action discusses all these ideas and contains lots and lots of implementations of real world use cases using Java, Scala, Clojure, Ruby and Groovy.

Hope this clears my intent of a polyglotic treatment of the subject in the book.


* Affiliate Link

Monday, September 27, 2010

Domain Models - Thinking differently in Scala & Clojure

A language that doesn't affect the way you think about programming, is not worth knowing.
- Alan J Perlis

When you model a domain you map the artifacts from the problem domain to the solution domain. The problem domain artifacts are the same irrespective of your solution domain. But the mapping process depends on your medium of modeling, the target platform, the programming language and the paradigms that it offers. Accordingly you need to orient your thought process so as to adapt to the language idioms that you have chosen for implementation.

Recently I did a fun exercise in modeling the same problem domain onto 2 target languages - Scala, that offers a mix of OO and functional features, and Clojure, that's much more functional and makes you think more in terms of functions and combinators. The idea is to share the thought process of this domain modeling exercise and demonstrate how even similar architectural patterns of the solution can map to entirely different paradigms in the implementation model.

This is also one of the underlying themes of my upcoming book DSLs In Action. When you think of a DSL you need to think not only of the surface syntax that the user gets to know, but also of the underlying domain model that forms the core of the DSL implementation. And the underlying host language paradigms shape the way you think of your DSL implementation. In the book there are plenty of examples where I take similar examples from one problem domain and design DSLs in multiple languages. That way you get to know how your thought process needs to be re-oriented when you change implementation languages even for the same problem at hand.

Modeling in Scala, a hybrid object functional language

Consider an abstraction for a security trade. For simplicity we will consider only a small set of attributes meaningful only for the current discussion. Let's say we are modeling a Trade abstraction using a statically typed language like Scala that offers OO as one of the paradigms of modeling. Here's a sample implementation that models a Trade as a case class in Scala ..

Objects for coarse-grained abstractions ..

type Instrument = String
type Account = String
type Quantity = BigDecimal
type Money = BigDecimal

import java.util.{Calendar, Date}
val today = Calendar.getInstance.getTime

case class Trade(ref: String, ins: Instrument, account: Account, unitPrice: Money,
  quantity: Quantity, tradeDate: Date = today) {
  def principal = unitPrice * quantity
}


Ok .. that was simple. When we have classes as the primary modeling primitive in the language we try to map artifacts to objects. So the trade artifact of the problem domain maps nicely to the above class Trade.

But a trade has a definite lifecycle and a trade abstraction needs to be enriched with additional attributes and behaviors in course of the various stages of the trading process. How do we add behaviors to a trade dynamically ?

Consider enriching a trade with tax and fee attributes when we make an instance of a Trade. Similar to Trade we also model the tax and fee types as separate artifacts that can be mixed in with the Trade abstraction.

trait TradeTax { this: Trade =>
  def tradeTax(logic: Money => Money): Money = logic(principal)
}

trait Commission { this: Trade =>
  def commission(logic: (Money, Quantity) => Money): Money = logic(principal, quantity)
}


Now we can instantiate a trade decorated with the additional set of taxes and fees required as per the market regulations ..

lazy val t = new Trade("1", "IBM", "a-123", 12.45, 200) with TradeTax with Commission


Note how the final abstraction composes the Trade class along with the mixins defined for tax and fee types. The thought process is OO along with mixin inheritance - a successful implementation of the decorator pattern. In a language that offers classes and objects as the modeling primitives, we tend to think of them as abstracting the coarse grained artifacts of the domain.

Functions for fine grained domain logic ..

Also note that we use higher order functions to model the variable part of the tax fee calculation. The user supplies this logic during instantiation which gets passed as a function to the calculation of TradeTax and Commission.

lazy val tradeTax = t.tradeTax { p => p * 0.05 }
lazy val commission = t.commission { (p, q) => if (q > 100) p * 0.05 else p * 0.07 }


When modeling with Scala as the language that offers both OO and functional paradigms, we tend to use the combo pack - not a pure way of thinking, but the hybrid model that takes advantage of both the paradigms that the language offers.

And now Clojure - a more functional language than Scala

As an alternate paradigm let's consider the same modeling activity in Clojure, a language that's more functional than Scala despite being hosted on top of the Java language infrastructure. Clojure forces you to think more functionally than Scala or Java (though not as purely as Haskell). Accordingly our solution domain modeling thoughts also need to take a functional bend. The following example is also from my upcoming book DSLs In Action and illustrates the same point while discussing DSL design in Clojure.

Trade needs to be a concrete abstraction and one way of ensuring that in Clojure is as a Map. We can also use defrecord to create a Clojure record, but that's not important for the point of today's discussion. We model a trade as a Map, but abstract its construction behind a function. Remember we are dealing with a functional language and all manipulations of a trade need to be thought out as pure functions that operate on immutable data structures.

This is how we construct a trade from a request, which can be an arbitrary structure. In the following listing, trade is the constructor function that returns a Map populated from a request structure.

Abstractions to map naturally to functions ..

; create a trade from a request
(defn trade
    "Make a trade from the request"
  [request]
  {:ref-no (:ref-no request)
   :account (:account request)
   :instrument (:instrument request)
   :principal (* (:unit-price request) (:quantity request))
   :tax-fees {}})
    
; a sample request
(def request
  {:ref-no "trd-123"
   :account "nomura-123"
   :instrument "IBM"
   :unit-price 120
   :quantity 300})


In our case the Map just acts as the holder of data, the center of attraction is the function trade, which, as we will see shortly, will be the main subject of composition.

Note that the Map that trade returns contains an empty tax-fees structure, which will be filled up when we decorate the trade with the tax fee values. But how do we do that idiomatically, keeping in mind that our modeling language is functional and offers all the goodness of immutable and persistent data structures? No, we can't mutate the Map!

Combinators for wiring up abstractions ..

But we can generate another function that takes the current trade function and gives us another Map with the tax-fee values filled up. Clojure has higher order functions and we can make a nice little combinator out of them for the job at hand ..

; augment a trade with a tax fee value
(defn with-values [trade tax-fee value]
  (fn [request]
    (let [trdval (trade request)
          principal (:principal trdval)]
       (assoc-in trdval [:tax-fees tax-fee]
         (* principal (/ value 100))))))


Here trade is a function that with-values takes as input, and it generates another function that decorates the original trade Map with the passed-in tax-fee name and value. Here the value that we pass is a percentage that's calculated based on the principal of the trade. I have kept it simpler than the Scala version that models some more logic for calculating each individual tax fee value. This is the crux of the decorator pattern of our model. We will soon dress it up a little bit and give it a user friendly syntax.
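
To see how the combinator composes before we bring in the macro, here's a small usage sketch (the name trade-with-tax is illustrative) - with-values hands us back a new trade-making function:

(def trade-with-tax (with-values trade :tax 12))

(trade-with-tax request)
;=> the same trade Map as before, but with :tax-fees {:tax (* principal 12/100)} filled in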

and Macros for DSL ..

Now we can define another combinator that plays around nicely with with-values to model the little language that our users can relate to their trading desk vocabulary.

Weeee .. it's a macro :)

; macro to decorate a function       
(defmacro with-tax-fee
    "Wrap a function in one or more decorators"
  [func & decorators]
  `(redef ~func (-> ~func ~@decorators)))


and this is how we mix it with the other combinator with-values ..

(with-tax-fee trade
  (with-values :tax 12)
  (with-values :commission 23))


Note how we made with-tax-fee a decorator that's replete with functional idioms that Clojure offers. -> is a Thrush combinator in Clojure that redefines the original function trade by threading it through the decorators. redef is just a helper macro that redefines the root binding of the function preserving the metadata. This is adapted from the decorator implementation that Compojure offers.
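
For the curious, here's a minimal sketch of what such a redef helper might look like (my adaptation of the idea, not necessarily the exact Compojure implementation):

(defmacro redef
  "Redefine the root binding of an existing var, preserving its metadata."
  [name value]
  `(let [m# (meta (var ~name))
         v# (def ~name ~value)]
     (alter-meta! v# merge m#)
     v#))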

We had the same problem domain to model. In the solution domain we adopted the same overall architecture of augmenting behaviors through decorators. But the solution modeling media were different. Scala offers the hybrid of OO and functional paradigms and we used both of them to implement decorator based behavior in the first case. In the second effort, we exploited the functional nature of Clojure. Our decorators and the subject were all functions, and using the power of higher order functions and a combinator based approach we were able to thread the subject through a series of decorators to augment the original trade abstraction.

Friday, August 27, 2010

Random thoughts on Clojure Protocols

Great languages are those that offer orthogonality in design. Stated simply it means that the language core offers a minimal set of non-overlapping ways to compose abstractions. In an earlier article A Case for Orthogonality in Design I discussed some features from languages like Haskell, C++ and Scala that help you compose higher order abstractions from smaller ones using techniques offered by those languages.

In this post I discuss the new feature in Clojure that just made its way in the recently released 1.2. I am not going into what Protocols are - there are quite a few nice articles that introduce Clojure Protocols and the associated defrecord and deftype forms. This post will be some random rants about how protocols encourage non intrusive extension of abstractions without muddling inheritance into polymorphism. I also discuss some of my realizations about what protocols aren't, which I felt was equally important along with understanding what they are.

Let's start with the familiar Show type class of Haskell ..

> :t show
show :: (Show a) => a -> String

Takes a type and renders a string for it. You get show for your class if you have implemented it as an instance of the Show type class. The Show type class extends your abstraction transparently through an additional behavior set. We can do the same thing using protocols in Clojure ..

(defprotocol SHOW 
  (show [val]))

The protocol definition just declares the contract without any concrete implementation in it. Under the covers it generates a Java interface which you can use in your Java code as well. But a protocol is not an interface.
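
A quick way to convince yourself of this at the REPL (illustrative, assuming SHOW was defined in the user namespace):

(.isInterface user.SHOW)                      ;=> true
(map #(.getName %) (.getMethods user.SHOW))   ;=> ("show")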

Adding behaviors non-invasively ..

I can extend an existing type with the behaviors of this protocol. And for this I need not have the source code for the type. This is one of the benefits that ad hoc polymorphism of type classes offers - type classes (and Clojure protocols) are open. Note how this is in contrast to the compile time coupling of Java interface and inheritance.

Extending java.lang.Integer with SHOW ..

(extend-type Integer
  SHOW
  (show [i] (.toString i)))

We can extend an interface also. And get access to the added behavior from *any* of its implementations .. Here's extending clojure.lang.IPersistentVector ..

(extend-type clojure.lang.IPersistentVector
  SHOW
  (show [v] (.toString v)))

(show [12 1 4 15 2 4 67])
> "[12 1 4 15 2 4 67]"

And of course I can extend my own abstractions with the new behavior ..

(defrecord Name [last first])

(defn name-desc [name]
  (str (:last name) " " (:first name)))

(name-desc (Name. "ghosh" "debasish")) ;; "ghosh debasish"

(extend-type Name
  SHOW
  (show [n]
    (name-desc n)))

(show (Name. "ghosh" "debasish")) ;; "ghosh debasish"

No Inheritance

Protocols help you wire abstractions that are in no way related to each other. And they do this non-invasively. An object conforms to a protocol only if it implements the contract. As I mentioned before, there's no notion of hierarchy or inheritance related to this form of polymorphism.
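
This conformance can even be checked at runtime with satisfies? - a small illustration, assuming the SHOW protocol and the Integer extension from earlier:

(satisfies? SHOW (int 10))   ;=> true
(satisfies? SHOW 2.5)        ;=> false, we never extended Double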

No object bloat, no monkey patching

And there's no object bloat going on here. You can invoke show on any abstraction for which you implement the protocol, but show is never added as a method on that object. As an example try the following after implementing SHOW for Integer ..

(filter #(= "show" (.getName %)) (.getMethods Integer))

will return an empty list. Hence there is no scope of *accidentally* overriding someone else's monkey patch on some shared class.

Not really a type class

Clojure protocols dispatch on the first argument of the methods. This keeps them from offering the full power that Haskell / Scala type classes offer. Consider the counterpart of Show in Haskell, which is the Read type class ..

> :t read  
read :: (Read a) => String -> a

If your abstraction implements Read, then the exact instance of the method invoked will depend on the return type. e.g.

> [1,2,3] ++ read "[4,5,6]"
=> [1,2,3,4,5,6]

The specific instance of read that returns a list of integers is automatically invoked here. Haskell maintains the dispatch match as part of its global dictionary.

We cannot do this in Clojure protocols, since it's unable to dispatch based on the return type. Protocols dispatch only on the first argument of the function.


Monday, May 10, 2010

Laziness in Clojure - Some Thoughts

Some readers commented on my earlier post on Thrush implementation in Clojure that the functional way of doing stuff may seem to create additional overhead through unnecessary iterations and intermediate structures that could have been avoided using traditional imperative loops. Consider the following thrush for the collection of accounts from the earlier post ..

(defn savings-balance
  [& accounts]
  (->> accounts
       (filter #(= (:type %) ::savings))
       (map :balance)
       (apply +)))


Is it one iteration of the collection for the filter and another for the map ?

It's not. Clojure offers lazy sequences and the functions that operate on them all return lazy sequences only. So in the above snippet Clojure actually produces a composition of the filter and the map that act on the collection accounts in a lazy fashion. Of course with apply, everything is computed since we need to realize the full list to compute the sum. Let's look at the following example without the sum to see how Clojure sequences differ from a language with eager evaluation semantics ..

user> (def lazy-balance
      (->> accounts
           (filter #(= (:type %) ::savings))
           (map #(do (println "getting balance") (:balance %)))))
#'user/lazy-balance


lazy-balance has not been evaluated - we don't yet have the printlns. Only when we force the evaluation do we have it computed ..

user> lazy-balance
(getting balance
getting balance
200 300)


Had Clojure been a strict language it would have been stupid to follow the above strategy for a large list of accounts. We would have been doing multiple iterations over the list generating lots of intermediate structures to arrive at the final result. An imperative loop would have rested the case much more cheaply.

Laziness improves compositionality. With laziness, Clojure sequences and the higher order functions on them essentially reify loops so that you can transform them all at once. As Cale Gibbard defends laziness in Haskell with his comments on this LtU thread .. "It's laziness that allows you to think of data structures like control structures."

Clojure is not as lazy as Haskell. And hence the benefits are also not as pervasive. Haskell being lazy by default, the compiler can afford to make aggressive optimizations like reordering of operations and transformations that Clojure can't. With Haskell's purity that guarantees absence of side-effects, deforestation optimizations like stream fusion generates tight loops and minimizes heap allocations. But I hear that Clojure 1.2 will have some more compiler level optimizations centered around laziness of its sequences.

Laziness makes you think differently. I had written an earlier post on this context with Haskell as the reference language. I have been doing some Clojure programming of late. Many of my observations with Haskell apply to Clojure as well. You need to keep in mind the idioms and best practices that laziness demands. And at many times they may not seem obvious to you. In fact with Clojure you need to know the implementation of the abstraction in order to ensure that you get the benefits of lazy sequences.

You need to know that destructuring's & uses the nthnext function, which uses next, which needs to know the future to determine the present. In short, next doesn't fit in the lazy paradigm.
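
A small illustration of this gotcha (my example, not from the original post) - destructuring with & forces one element more than you might expect, because next has to realize the head of the rest to know whether it exists:

(def xs (map #(do (println "realizing" %) %) (iterate inc 0)))

(let [[a & more] xs] a)
; realizing 0
; realizing 1
;=> 0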

The other day I was working on a generic walker that traverses some recursive data structures for some crap processing. I used walk from clojure.walk, but later realized that for seqs it does a doall that realizes the sequence - another lazy gotcha that caught me unawares. But I actually needed to peek into the implementation to get to know it.

Being interoperable with Java is one of the benefits of Clojure. However you need to be aware of the pitfalls of using Java's data structures with the lazy paradigm of Clojure. Consider the following example where I put all accounts in a java.util.concurrent.LinkedBlockingQueue.

(import '(java.util.concurrent LinkedBlockingQueue))
(def myq (new LinkedBlockingQueue))
(doto myq (.put acc-1) (.put acc-2) (.put acc-3))


Now consider the following snippet that does some stuff on the queue ..

(let [savings-accounts (filter #(= (:type %) ::savings) myq)]
     (.clear myq)
     (.addAll myq savings-accounts))


Should work .. right ? Doesn't ! filter is lazy and hence savings-accounts has not been realized within the let-block. By the time addAll forces it, myq has already been cleared, so nothing gets added back. The solution is to use a doall, that blows away the laziness and realizes the filtered sequence ..

(let [savings-accounts (doall (filter #(= (:type %) ::savings) myq))]
     (.clear myq)
     (.addAll myq savings-accounts))


Of course laziness in Clojure sequences is something that adds power to your abstractions. However you need to be careful on two grounds :
  • Clojure as a language is not lazy by default in totality (unlike Haskell) and hence laziness may get mixed up with strict evaluation leading to surprising and unoptimized consequences.
  • Clojure interoperates with Java, which has mutable data structures and strict evaluation. Like the situation I described above with LinkedBlockingQueue, sometimes it's safer to bite the bullet and do things the Java way.

Monday, April 12, 2010

Thrush in Clojure

Quite some time back I wrote a blog post on the Thrush Combinator implementation in Scala. Just for those who missed reading it, Thrush is one of the permuting combinators from Raymond Smullyan's To Mock a Mockingbird. The book is a fascinating read where the author teaches combinatorial logic using examples of birds from an enchanted forest. In case you've not yet read it, please do yourself a favor and get it from your favorite book store.

A Thrush is defined by the following condition: Txy = yx. Thrush reverses the order of evaluation. In our context, it's not really an essential programming tool. But if you're someone who takes special care to make your code readable to the human interface, the technique sometimes comes in quite handy.
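
If you want to see the combinator stripped to its bones, here's a one-liner sketch (illustrative, not part of the Clojure library):

(defn thrush [x f] (f x))   ; Txy = yx

(thrush 10 inc)   ;=> 11, same as (inc 10) with the order of the forms reversed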

Recently I came across Thrush in Clojure. You don't have to implement anything - it's there for you in the Clojure library implemented as a macro ..

Consider this simple example of bank accounts where we represent an account as a Map in Clojure ..

(def acc-1 {:no 101 :name "debasish" :type 'savings :balance 100})
(def acc-2 {:no 102 :name "john p." :type 'checking :balance 200})


We have a list of accounts and we need to find all savings accounts and compute the sum of their current balances .. well not too difficult in Clojure ..

(defn savings-balance
  [& accounts]
  (apply +
    (map :balance
      (filter #(= (:type %) 'savings) 
        (seq accounts)))))


To a programmer familiar with the concepts of functional programming, it's quite clear what the above function does. Let's read it out for all of us .. From a list of accounts, filter the ones with type as savings, get their balances and report the sum of them. That was easy .. but did you notice that we read it inside out from the implementation, which btw is a 4 level nested function ?

Enter Thrush ..

Being a permuting combinator, Thrush enables us to position the functions outside in, in the exact sequence that the human mind interprets the problem. In our Scala version we had to implement something custom .. with Clojure, it comes with the standard library .. have a look ..

(defn savings-balance
  [& accounts]
  (->> (seq accounts)
       (filter #(= (:type %) 'savings))
       (map :balance)
       (apply +)))


->> is implemented as a macro in Clojure that does the following :

  1. threads the first form (seq accounts) into the second form (the filter) as its last argument
  2. then makes the entire filter form, including the newly added argument, the last argument of the map
.. and so on for all the forms present in the argument list. Effectively the resulting form that we see during runtime is our previous version using nested functions. The Thrush combinator only dresses it up nicely for the human eyes synchronizing the thought process with the visual implementation of the logic. And all this at no extra runtime cost! Macros FTW :)
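
You can verify this at the REPL with a macro expansion (illustrative; savings? stands in for the anonymous predicate used above):

(require '[clojure.walk :as walk])

(walk/macroexpand-all
  '(->> (seq accounts)
        (filter savings?)
        (map :balance)
        (apply +)))
;=> (apply + (map :balance (filter savings? (seq accounts))))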

->> has a related cousin ->, which is the same as ->>, but threads the forms as the second argument of the next form instead of the last. These macros implement Thrush in Clojure. Idiomatic Clojure code is concise and readable, and, using a proper ubiquitous language of the domain, makes a very good DSL. Think about using Thrush when you feel that reordering the forms will make your API look more intuitive to the user.
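
A quick contrast of the two (illustrative):

(-> {:a 1} (assoc :b 2))   ;=> {:a 1, :b 2} - the map is threaded in as the first argument
(->> [1 2 3] (map inc))    ;=> (2 3 4)      - the vector is threaded in as the last argument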

Thrush also helps you implement the Decorator pattern in a very cool and concise way. In my upcoming book, DSLs In Action I discuss these techniques in the context of designing DSLs in Clojure.

Sunday, January 03, 2010

Pragmatics of Impurity

James Hague, a long time Erlanger, drives home a point or two regarding purity of paradigms in a couple of his latest blog posts. Here's his take on being effective with pure functional languages ..

"My real position is this: 100% pure functional programing doesn't work. Even 98% pure functional programming doesn't work. But if the slider between functional purity and 1980s BASIC-style imperative messiness is kicked down a few notches--say to 85%--then it really does work. You get all the advantages of functional programming, but without the extreme mental effort and unmaintainability that increases as you get closer and closer to perfectly pure."

Purity is not necessarily pragmatic. In my last blog post I also tangentially touched upon the notion of purity while discussing how a *hybrid* model of SQL-NoSQL database stack can be effective for large application deployments. Be it with programming languages or with databases or any other paradigms of computation, we need to have the right balance of purity and pragmatism.

Clojure introduced transients. Rich Hickey says in the rationale .. "If a pure function mutates some local data in order to produce an immutable return value, is that ok?". Transients in Clojure allow localized mutation in initializing or transforming a large persistent data structure. This mutation will only be seen by the code that does the transformation - the client gets back a version for immutable use that can be shared. In no way does this invalidate the benefits that immutability brings in reasoning of Clojure programs. It's good to see Rich Hickey being flexible and pragmatic at the expense of injecting that little impurity into his creation.
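
A minimal sketch of the idea (my example, not from Rich Hickey's rationale) - mutate locally with a transient while building, hand back a persistent value to the caller:

(defn build-vec [n]
  ;; local, invisible mutation while building ...
  (persistent!
    (reduce conj! (transient []) (range n))))

(build-vec 5)   ;=> [0 1 2 3 4] - an ordinary immutable vector for the caller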

Just like the little compromise (and big pragmatism) with the purity of persistent data structures, Clojure also made a similar compromise with laziness by introducing chunked sequences that optimize the overhead associated with lazy sequences. These are design decisions that have been taken consciously by the creator of the language that values pragmatism over purity.

Enough has already been said about the virtues of purity in functional languages. Believe me, 99% of the programming world does not even care for purity. They do what works best for them and hybrid languages are mostly the ones that find the sweetest spots. Clojure is as impure as Scala is, considering the fact that both allow side-effecting with mutable references and uncontrolled IO. Even Erlang has uncontrolled IO and a mutable process dictionary, though its use is often frowned upon within the community. The important point is that all of them have proved to be useful to programmers at large.

Why do creators infuse impurity into their languages ? Why aren't every language created as pure as Haskell is ? Well, it's mostly related to a larger thought that the language often targets to. Lisp started as an incarnation of the lambda calculus under the tutelage of John McCarthy and became the first significant language promoting the purely applicative model of programming without side-effects. Later on it added the impurities of mutation constructs based on the von Neumann architecture of the machines where Lisp was implemented. The obvious reason was to get an improved performance over purely functional constructs. Scala and Clojure both decided to go for the JVM as the primary runtime platform - hence both languages are susceptible to the pitfalls of impurity that JVM offers. Both of them decided to inherit all the impurities that Java has.

Consider the module system of Scala. You can compose modules using traits with deferred concrete definitions of types and objects. You can even compose mutually recursive modules using lazy vals, somewhat similar to what Newspeak and some dialects of ML offer. But because you have decided to bite the Java pill, you can also wreak havoc through shared mutable state at the top level object that you compose. In his post titled A Ban on Imports Gilad Bracha discusses all evil effects that an accessible global namespace can bring to the modularity aspects of your code. Newspeak is being designed as pure in this respect, with all dependencies being abstract and need to be plugged together explicitly as part of configuring the module. Scala is impure in this respect, allows imports to bring in the world on to your module definitions, but at the same time opens up all possibilities of sharing the huge ecosystem that the Java community has built over the years. You can rightfully choose to be pure in Scala, but that's not enforced by the language.

When we talk about impurity in languages, it's mostly related to how it handles side-effects and mutable state. And Haskell has a completely different take on this aspect than what we discussed with Lisp, Scala or Clojure. You have to use monads in Haskell towards any side-effecting operation. And people with a taste for finer things in life are absolutely fine with that. You cannot just stick in a printf to your program for debugging. You need to return the whole stuff within an IO monad and then do a print. The Haskell philosophy looks at a program as a model of mathematical functions where side-effects are also implemented in a functional way. This makes reasoning and optimization by the compiler much easier - you can make your pure Haskell code run as fast as C code. But you need to think differently. Pragmatic ? What do you think ?

Gilad Bracha is planning to implement pure subsets of Newspeak. It will be really exciting to get to see languages which are pure, functional (note: not purely functional) and object-oriented at the same time. He observes in his post that (t)he world is slowly digesting the idea that object-oriented and functional programming are not contradictory concepts. They are orthogonal, and can be arranged to be rather complementary. This is an interesting trend where we can see families of languages built around the same philosophy but differing in aspects of purity. You need to be pragmatic to choose and even mix them depending on your requirements.