Showing posts with label combinator.

Monday, April 12, 2010

Thrush in Clojure

Quite some time back I wrote a blog post on the Thrush Combinator implementation in Scala. Just for those who missed reading it, Thrush is one of the permuting combinators from Raymond Smullyan's To Mock a Mockingbird. The book is a fascinating read where the author teaches combinatorial logic using examples of birds from an enchanted forest. In case you've not yet read it, please do yourself a favor and get it from your favorite book store.

A Thrush is defined by the following condition: Txy = yx. Thrush reverses the order of evaluation. In our context, it's not really an essential programming tool. But if you're someone who takes special care to make your code readable to humans, the technique sometimes comes in quite handy.

Recently I came across Thrush in Clojure. You don't have to implement anything - it's there for you in the Clojure library implemented as a macro ..

Consider this simple example of bank accounts where we represent an account as a Map in Clojure ..

(def acc-1 {:no 101 :name "debasish" :type 'savings :balance 100})
(def acc-2 {:no 102 :name "john p." :type 'checking :balance 200})


We have a list of accounts and we need to find all savings accounts and compute the sum of their current balances .. well not too difficult in Clojure ..

(defn savings-balance
  [& accounts]
  (apply +
    (map :balance
      (filter #(= (:type %) 'savings) 
        (seq accounts)))))


To a programmer familiar with the concepts of functional programming, it's quite clear what the above function does. Let's read it out for all of us .. from a list of accounts, filter the ones whose type is savings, get their balances and report their sum. That was easy .. but did you notice that we read it inside out from the implementation, which, btw, is a 4-level nested function?
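The inside-out shape is easier to see with plain functions in Scala (a sketch; Account and the helper names are made up for illustration):

```scala
case class Account(no: Int, name: String, acctType: String, balance: Int)

def savings(as: Seq[Account]): Seq[Account] = as.filter(_.acctType == "savings")
def balances(as: Seq[Account]): Seq[Int]    = as.map(_.balance)
def total(xs: Seq[Int]): Int                = xs.sum

// reads inside out: total is written first but happens last
def savingsBalance(as: Seq[Account]): Int = total(balances(savings(as)))
```

The call you read first (total) is the last one to run, exactly the mismatch the Thrush is about to fix.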

Enter Thrush ..

Being a permuting combinator, Thrush enables us to position the functions outside in, in the exact sequence that the human mind interprets the problem. In our Scala version we had to implement something custom .. with Clojure, it comes with the standard library .. have a look ..

(defn savings-balance
  [& accounts]
  (->> (seq accounts)
       (filter #(= (:type %) 'savings))
       (map :balance)
       (apply +)))


->> is implemented as a macro in Clojure that does the following:

  1. threads the first form (seq accounts) into the second form, making (seq accounts) the last argument of the filter
  2. then makes the entire filter form, including the newly added argument, the last argument of the map

.. and so on for all the forms present in the argument list. Effectively the form that results at runtime is our previous version using nested functions. The Thrush combinator only dresses it up nicely for the human eyes, synchronizing the thought process with the visual layout of the logic. And all this at no extra runtime cost! Macros FTW :)

->> has a related cousin ->, which is the same as ->>, except that it threads each form as the second argument of the next form instead of the last. These macros implement Thrush in Clojure. Idiomatic Clojure code is concise and readable, and using a proper ubiquitous language of the domain makes a very good DSL. Think about using Thrush when you feel that reordering the forms will make your API look more intuitive to the user.
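As an aside, Scala's standard library (since 2.13) ships the same shape as a runtime combinator rather than a macro: pipe in scala.util.chaining is exactly Thrush, x.pipe(f) evaluating to f(x):

```scala
import scala.util.chaining._

// pipe is Thrush at runtime: the pipeline ends in an arbitrary function
val result = (1 to 100)
  .filter(_ % 2 != 0)
  .sum
  .pipe(x => x * x)   // square of 2500
```

No macro magic here, so unlike ->> there is a small runtime cost for the extra function call, but the left-to-right readability is the same.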

Thrush also helps you implement the Decorator pattern in a very cool and concise way. In my upcoming book, DSLs In Action I discuss these techniques in the context of designing DSLs in Clojure.

Sunday, September 27, 2009

The Thrush combinator in Scala

In his book To Mock a Mockingbird, Raymond Smullyan teaches combinatory logic using songbirds in a forest. He derives important results by combining various combinators, all using the birds of the enchanted forest. Combinators are an effective tool in designing abstractions with functional programming principles. They are reusable units that make your code very concise without losing expressivity. More than the implementation, the names can go a long way in establishing a common vocabulary of programming idioms and techniques. In an earlier post, I talked about Kestrel, and its implementation in Scala for handling side-effects in an abstraction.

In this post, I look at Thrush, a permuting combinator. A Thrush is defined by the following condition: Txy = yx. Thrush reverses the order of evaluation. Raganwald talks about Thrush and its implementations in Ruby in an excellent post quite some time back.

Why would you want to reverse the order of evaluation in a computation? Well, if you value readability of your code, and have been cringing at how a mix of expression oriented pipelines and nested function calls can make your code less readable, then a Thrush may be for you.

Consider the Ruby example that Raganwald discusses in his first example of Thrush implementation.

lambda { |x| x * x }.call((1..100).select(&:odd?).inject(&:+))

The argument to the proc is a pipeline expression that reads nicely from left to right: get the list of numbers from 1 to 100, select the odd ones and add them up. The proc itself squares its input. But the proc invocation is a function call which, though last in the sequence of execution, is the first thing you read. You can find the Ruby implementation in Raganwald's blog that transforms the above code into a left-to-right pipeline expression using Thrush.

Let's try to see how we can do the same in Scala ..

In Scala, I can write the above as ..

((x: Int) => x * x)((1 to 100).filter(_ % 2 != 0).foldLeft(0)(_ + _))

Almost the same as the Ruby code above, and has the similar drawback in readability.

Let's define the Scala Thrush combinator ..

case class Thrush[A](x: A) {
  def into[B](g: A => B): B = {
    g(x)
  }
}


Immediately we can write the above invocation as ..

Thrush((1 to 100)
  .filter(_ % 2 != 0)
  .foldLeft(0)(_ + _))
  .into((x: Int) => x * x)


A very simple combinator, a permuting one that pushes the function to where it belongs in the line of readability. If you want to be more succinct, commit the sin of defining an implicit for your use case ..

implicit def int2Thrush(x: Int) = Thrush(x)

and immediately the above transforms to ..

(1 to 100)
  .filter(_ % 2 != 0)
  .foldLeft(0)(_ + _)
  .into((x: Int) => x * x)


Does it read better?
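On more recent Scala versions the conversion would probably be an implicit value class rather than a case class plus implicit def (a sketch; IntoOps is a made-up name):

```scala
// Scala 2.10+ sketch: one declaration gives every value an into method,
// and extending AnyVal avoids the wrapper allocation
implicit class IntoOps[A](val x: A) extends AnyVal {
  def into[B](g: A => B): B = g(x)
}

val r = (1 to 100)
  .filter(_ % 2 != 0)
  .foldLeft(0)(_ + _)
  .into(x => x * x)
```

Same Thrush, same call site, just less machinery.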

In fact with this implicit definition, you can go chaining into all the way ..

(1 to 100)
  .filter(_ % 2 != 0)
  .foldLeft(0)(_ + _)
  .into((x: Int) => x * x)
  .into(_ * 2)


While designing domain APIs that need to be expressive, this technique can often come very handy. Here's an example that uses the Thrush combinator to ensure a clean pipeline expression flowing into a piece of DSL.

//..
accounts.filter(_ belongsTo "John S.")
        .map(_.calculateInterest)
        .filter(_ > threshold)
        .foldLeft(0)(_ + _)
        .into { x: Int =>
          updateBooks(journalize(Ledger.INTEREST, x))
        }
//..

Sunday, September 06, 2009

Side-effects with Kestrel in Scala

Consider the following piece of logic that we frequently come across in codebases ..

  1. val x = get an instance, either create it or find it

  2. manipulate x with post-creation activities (side-effects)

  3. use x


Step 2 is only for some side-effecting operations, maybe on the instance itself or for some other purposes like logging, registering, writing to the database etc. While working on the serialization framework sjson, I have been writing pieces of code that follow exactly the above pattern to create Scala objects out of JSON structures. Now if you notice the above 3 steps, step 2 looks like it belongs to the namespace that calls the function setting up x. But logically step 2 needs to be completed before we can use x. Which means that step 2 is also a necessary piece of logic that needs to be completed before we hand over the constructed instance x to the calling context.

One option of course is to make Step 2 a part of the function in Step 1. But this is not always feasible, since Step 2 needs access to the context of the caller namespace.

Let's look at an idiom in Scala that expresses the above behavior more succinctly and leads us into implementing one of the most popularly used objects in combinatory logic.

Consider the following method in Scala that creates a new instance of a class with the arguments passed to it ..


def newInstance[T](args: Array[AnyRef])(implicit m: Manifest[T]): T = {
  val constructor = 
    m.erasure.getDeclaredConstructors.first
  constructor.newInstance(args: _*).asInstanceOf[T]
}



I can use it like ..

newInstance[Person](Array("ghosh", "debasish"))

for a class Person defined as ..

case class Person(lastName: String, firstName: String)
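As an aside, Manifest has since been deprecated; on current Scala versions the same reflective trick can be sketched with ClassTag (newInstanceCT is a made-up name, and it assumes the first declared constructor matches the argument arity):

```scala
import scala.reflect.ClassTag

// reflective instantiation via the first declared constructor (a sketch)
def newInstanceCT[T](args: Array[AnyRef])(implicit ct: ClassTag[T]): T = {
  val ctor = ct.runtimeClass.getDeclaredConstructors.head
  ctor.newInstance(args: _*).asInstanceOf[T]
}

case class Person(lastName: String, firstName: String)
val p = newInstanceCT[Person](Array("ghosh", "debasish"))
```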

It's often the case that I would like to have some operations on the new instance after its creation which will be pure side-effects. It may or may not mutate the new instance, but will be done in the context of the new instance. I can very well do that like ..

val persons = new ListBuffer[Person]()
val p = newInstance[Person](Array("ghosh", "debasish"))
persons += p  // register to the list
persons.foreach(_ => mail("New member has joined: " + p)) // new member mail to all
//.. other stuff


It works perfectly .. but we can make the code more expressive if we can somehow be explicit about the context of the block of code that needs to go with every new instance of Person being created. Maybe something like ..

newInstance[Person](Array("ghosh", "debasish")) { p =>
  persons += p
  persons.foreach(_ => mail("New member has joined: " + p))
  //.. other stuff
}


This clearly indicates that the side-effecting steps of adding to the global list of persons or sending out a mail to every member is also part of the creation process of the new person. The effect is the same as the earlier example, only that it delineates the context more clearly. Though at the end of it all, it returns only the instance that it creates.

Consider another example of a good old Java bean ..

class Address {
  private String street;
  private String houseNumber;
  private String city;
  private String zip;

  //..
  //.. getters and setters
}


Working with a reflection based library it's not uncommon to see code that instantiates the bean using the default constructor and then allows clients to set the instance up with custom values .. something like ..

var p = Person(..)
val pAddress =
  newInstance[Address](null) { a =>
    a.setStreet("Market Street")
    a.setHouseNumber("B/102")
    a.setCity("San Francisco")
    a.setZip("98032")
    p.address = a
    p.mail("Your address has been changed to: " + a)
  }


Once again the block is only for side-effects, and it can contain lots of other custom code that depends on the caller's context. We can make it more concise and DSLish using the object import syntax of Scala ..

var p = Person(..)
val pAddress =
  newInstance[Address](null) { a =>
    import a._
    setStreet("Market Street")
    setHouseNumber("B/102")
    setCity("San Francisco")
    setZip("98032")
    p.address = a
    p.mail("Your address has been changed to: " + a)
  }


Looks like an idiom that can be an effective part of your programming repertoire. Here is the version of newInstance that allows you to make the above happen ..


def newInstance[T](args: Array[AnyRef])(op: T => Unit)(implicit m: Manifest[T]): T = {
  val constructor = 
    m.erasure.getDeclaredConstructors.first
  val v = constructor.newInstance(args: _*).asInstanceOf[T]
  op(v)
  v
}



Looking carefully at newInstance I realized that it is actually the Kestrel combinator that Raymond Smullyan explains so eloquently in his amazing book To Mock a Mockingbird. A bird K is called a Kestrel if for any birds x and y, (Kx)y = x. And that's exactly what's happening with newInstance. It does everything you pass onto the block, but ultimately returns the new instance that it creates. A nice way to plug in some side-effects. Reg has blogged about Kestrels in Ruby - the tap method in Ruby 1.9 and returning in Rails. These small code pieces may not be as ceremonious as the popularly used design patterns, but just as effective and provide deep insights into how our code needs to be structured for expressivity. Next time you discover any such snippet that you find useful for sharing, feel free to write a few lines of blog about it .. the community will love it ..
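For completeness: just as Ruby 1.9 grew tap, Scala 2.13's standard library ships Kestrel as tap in scala.util.chaining, running a side-effecting block and returning the receiver unchanged:

```scala
import scala.util.chaining._
import scala.collection.mutable.ListBuffer

val persons = ListBuffer.empty[String]

// tap is Kestrel: the side effect runs, the value flows through
val name = "debasish".tap(persons += _)
```

The construction and the registration side-effect read as one expression, exactly the shape newInstance gives us above.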

Monday, April 20, 2009

Towards Combinator based API design

I was listening to the presentation by Alex Payne, the Twitter API lead, that he delivered at Stanford very recently. A nice presentation that sums up the philosophies and practices that they followed in the evolution of APIs for Twitter. In case you are interested in API design, Josh Bloch also has a great video up as part of a Javapolis interview that discusses in detail all the nuances of designing good and robust APIs.

These days, we are getting more and more used to interesting programming languages, which, apart from being powerful themselves, also mostly happen to share the wonderful ecosystem of common runtimes. As Ola Bini noted sometime back on his blog, it is a time when we can design the various architectural layers of our application in different languages, depending on the mix of robustness, type-safety and expressiveness that each of them demands. Client facing APIs can be focused more towards expressiveness, being more humane in nature, a pleasure to use, while at the same time offering modest error handling and recovery abilities. But whatever the layer, APIs need to be consistent, both in signature and in return values.

One of the very important aspects of API design is the level of abstraction that it should offer to the user. The ideal level comes out only after a series of exploratory evolutions, refactorings and user implementations, and can often lead to loss of backward compatibility within the existing user community.

One of the very powerful ways of API design that many of today's languages offer is the use of combinators. I have blogged on uses of combinators in concatenative languages like Joy - it is truly a great experience as a user to use such compositional APIs as part of your application design. Here is one from my earlier post on Joy combinators. It finds the arithmetic mean of a list of numbers ..

dup  0  [+]  fold  swap  size  /


The API is beautiful in the sense that it evolves the intention ground up and makes use of smaller combinators in building up the whole. This is beautiful composition.

In one of the comments to my earlier post, James Iry mentioned "I remain unconvinced that concatenative languages are really buying much over applicative languages, as interesting as they are". Since then I have been dabbling a bit with Haskell, a pure functional language that does many things right with the notion of static typing, offering powerful point free capabilities along with rich combinator composition ..

A simple pointfree sum ..

sum = foldr (+) 0


and a more complex map fusion ..

foldr f e . map g == foldr (f . g) e
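The fusion law is easy to spot-check in Scala terms with foldRight (f, g and xs are arbitrary sample choices):

```scala
// map fusion: folding after a map equals folding with each element mapped first
val f: (Int, Int) => Int = _ + _
val g: Int => Int = _ * 2
val xs = List(1, 2, 3)

val lhs = xs.map(g).foldRight(0)(f)
val rhs = xs.foldRight(0)((x, acc) => f(g(x), acc))
// lhs == rhs == 12
```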


The main point is to seek the beauty of API design in the expressiveness of the contract through effective composition of smaller combinators. The biggest advantage of combinators is that they offer composability, i.e. the value of a bigger abstraction is given by combining the values of its sub-abstractions. And the power of composability comes from the world of higher order functions and the ability to combine them in your programming language, just as you would do in mathematics.
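In Scala terms, that composability is just andThen over small named pieces (a toy sketch; the names are illustrative):

```scala
// small combinators ...
val odds: Seq[Int] => Seq[Int] = _.filter(_ % 2 != 0)
val total: Seq[Int] => Int     = _.sum

// ... composed into a bigger one, with no new logic written
val sumOfOdds: Seq[Int] => Int = odds andThen total
```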

Object orientation is not so blessed in this respect. Composition in OOP is mostly confined to designing fluent interfaces that make for expressive APIs but are useful only in a limited context and, of course, without the purity that functional abstractions espouse. The Builder design pattern is possibly the most famous compositional construct in object oriented languages, and often leads to sleek APIs. Here is a great example from the Google Collections MapMaker API ..

ConcurrentMap<Key, Graph> graphs = 
  new MapMaker()
    .concurrencyLevel(32)
    .softKeys()
    .weakValues()
    .expiration(30, TimeUnit.MINUTES)
    .makeComputingMap(
      new Function<Key, Graph>() {
        public Graph apply(Key key) {
          return createExpensiveGraph(key);
        }
      }
    );



But in Java, a language not known to offer the best of functional features, it is often quite clumsy to compose abstractions in a fluent way that offers consistent, rich and robust APIs matching the elegance of functional combinators.

Scala is not a particularly rich language for pointfree programming. But Scala offers great library support for combinators. Of course the secret sauce to all this is the rich support of functional programming that Scala offers. Parser combinators that come with the Scala standard library help design external DSLs with enough ease and convenience. Quite some time back I had blogged on designing a combinator based DSL for trading systems using Scala parser combinators.

The main power of combinators comes from the fact that they are compositional, and it is the presence of non composable features that makes combinators hard in some languages. And one of them is shared mutable state. Paul Johnson had it absolutely right when he said "Mutable state is actually another form of manual memory management: every time you over-write a value you are making a decision that the old value is now garbage, regardless of what other part of the program might have been using it". Languages like Erlang enforce confinement of mutable state within individual processes, Scala encourages the same through programming practices, while Haskell, Clojure etc. offer managed environments for manipulating shared state. Hence we have composability in these languages that encourage combinator based API design.

Combinator based API design is nothing new. It has been quite a common practice in the worlds of Haskell and other functional languages for quite some time. Simon Peyton Jones described his experience in ICFP 2000 in applying combinator based API design while implementing a financial system for derivative trading. It was one of those trend setter applications in the sense that "the ability to define new combinators, and use them just as if they were built in, is quite routine for functional programmers, but not for financial engineers".

Sunday, December 21, 2008

What is special about combinators in Joy

I am continuing my rendezvous with Joy, and this post is yet another rant about my perception of why combinators are real first class citizens in the paradigm of concatenative languages. Many of today's languages offer combinator libraries, but, with the possible exception of Haskell, none match the elegance that Joy offers.

Combinators help programming at a higher level of abstraction. In any functional language, using maps and folds definitely makes your program more expressive than explicit recursion or iteration. Consider this snippet that uses map as a transformer over a list to convert every element of the list to upper case ..

upperCase xs = map toUpper xs


and here is the same using explicit recursion ..

upperCase (x:xs) = toUpper x : upperCase xs
upperCase []     = []


Apart from the former being more expressive, using the map combinator also helps you separate the strategy from the implementation of your program. map is a higher level abstraction and the implementation of map can be varied from recursive to iterative without any impact on the client code base. For example, without tail call optimization on the JVM, Scala implements List.map as an iterative method as opposed to the recursive implementation of Erlang map.

So, always use combinators to program at a higher level of abstraction.

In Joy, programming with combinators is the most natural way of doing things. In case you are wondering why it is not so with the programming language of your choice, the reason lies in the architecture of the language. The combinators which I implemented in Scala, in my last post, did the stuff, but do we see such usages in real life? Even with languages like Ruby, which do not have the noise of type annotations, combinators like divide_and_conquer are not in common usage. The only possible exception to this today is Haskell, which allows point free programming, is referentially transparent and allows algebraic program transformations, much like using formal methods through the medium of programming. And combinators are the most natural idioms in concatenative languages like Joy.

Why is that?

Combinators are most effective when you compose them from atomic programs and evolve towards the bigger whole. And the closer you can get towards algebraic transformation with your combinators, the more elegant the process of composition looks. Today's mainstream languages are based on application of functions on parameters, leading to complex evaluation rules. Formal parameters add to the noise of composition and make the process less elegant. In statically typed languages, you also have the type annotations that add to the complexity of composition. On the other hand, in Joy, every combinator has the same form (Stack -> Stack) and gets all parameters, including quoted programs, from the stack itself. Hence you can write elegant compositions like 1 [*] fold to define the product function over the elements of an aggregate. Today Haskell allows such point free style of programming (product in Haskell will be foldl (*) 1), but none of the other mainstream languages come even close. It will be interesting to compare Joy with Haskell with respect to rewriting capabilities and application to formal methods.

Perhaps, the most important feature in Joy that makes non-invasive composition of combinators work like a charm is the datatype of quoted programs. A quoted program is just another list that happens to be on top of the stack when a combinator expects it. And the list can very well be generated as a result of other combinator operations. It is this generalization that unifies all combinator operations. In other languages, combinators work on different abstractions, while in Joy, it is always the same thing - combinators get lists on top of stack (which are incidentally quoted programs), execute them and return a stack with the result. Hence map is a higher order function in other languages, while in Joy, map is still a unary first order function from stacks to stacks.

At the same time, quoted programs in Joy, allow the normal evaluation order semantics to be imposed during program execution, in an otherwise applicative model.

Here is an example ..

0  [dup * +]  fold


This is an example of using the fold combinator. The quoted program [dup * +] will be on the stack in passive form, and will only be active when we have the fold combinator being applied. The result is the sum of squares of the input aggregate.
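The same quoted-program-plus-fold shape translates directly to a Scala fold (a sketch; the dup * + body becomes the fold's combining function):

```scala
// 0 [dup * +] fold — sum of squares of the input aggregate
val sumOfSquares: Seq[Int] => Int =
  _.foldLeft(0)((acc, x) => acc + x * x)
```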

Recursion Combinators

As mentioned at the beginning of the post, if explicit recursion makes programs hard to understand, how do we hide 'em? Inside recursive combinators. But can we remove explicit recursion from individual combinators? Sure .. we can, by introducing the venerable y-combinator. So, only the y-combinator will contain explicit recursion, and all other erstwhile recursive combinators will be implemented in terms of Y.

Let us visit the factorial example and find out for ourselves, how the y-combinator can be used to remove explicit recursion and what trade-off does it imply ..

Here is the explicitly recursive version of factorial in Joy ..

recursive-fac  ==  [0 =] [succ] [dup pred recursive-fac *] ifte


Just for the uninitiated, ifte is the if-then-else combinator in Joy that takes 3 program parameters - quite intuitively, the if-part, the then-part and the else-part. Note how effectively quotation has been used here to impose normal order of evaluation, aka call-by-name. Otherwise it would not have been possible to implement it at all.

and here is the version that uses the y-combinator ..

[ [pop 0 =] [pop succ] [[dup pred] dip i *] ifte ] y


where y is implemented as simply as ..

[dup cons] swap concat dup cons


and .. and .. which version do you think is more readable? Of course the one that uses explicit recursion. Huh!
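For reference, the same trick reads in Scala as a fixed-point combinator (a sketch; fix is a made-up name, and this version cheats by using host-language recursion once, inside fix itself):

```scala
// all explicit recursion lives inside fix; fac itself is non-recursive
def fix[A, B](f: (A => B) => (A => B)): A => B =
  x => f(fix(f))(x)

val fac: Int => Int =
  fix[Int, Int](rec => n => if (n == 0) 1 else n * rec(n - 1))
```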

This is the problem with the y-combinator. It is too generic, too all-purpose, and hence not descriptive enough. And when combinators are not self-descriptive, readability goes for a toss. And this is yet another area where Joy shines ..

Joy has special purpose combinators for recursion that are independent of data types and descriptive enough, depending on the structure of the recursion. Here is an example with the factorial computation ..

fac  == [null] [succ] [dup pred] [*] linrec


linrec is a recursion combinator that takes four parameters in addition to the data that it needs.

  • The first one is the if-part, where we specify a condition.

  • If the condition evaluates to true, then the second part (the then-part) gets executed and the combinator terminates.

  • Otherwise we look at the next 2 parameters - called the rec1-part and the rec2-part.

  • The rec1-part is executed and then the 4 program parameters are once again bundled together and the quotation form is immediately executed.

  • This is followed by the execution of the rec2-part.


The entire sequence models what we call a linear structural recursion.
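Under some simplifying assumptions about how the four parts line up on the stack, linrec can be sketched in Scala as (names illustrative):

```scala
// linear structural recursion: if-part, then-part, rec1 (shrink), rec2 (combine)
def linrec[A](isBase: A => Boolean, base: A => A,
              rec1: A => A, rec2: (A, A) => A)(x: A): A =
  if (isBase(x)) base(x)
  else rec2(x, linrec(isBase, base, rec1, rec2)(rec1(x)))

// fac == [null] [succ] [dup pred] [*] linrec
val fac: Int => Int = n => linrec[Int](_ == 0, _ => 1, _ - 1, _ * _)(n)
```

The recursion is encapsulated once, and each caller supplies only the four descriptive parts.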

For other variants of recursive problems, Joy offers a host of similar other combinators ..

  • binrec is for binary recursion, which makes problems like Fibonacci computation or quicksort trivial one-liners

  • tailrec is for tail recursion, quite similar to linrec but without the rec2-part

  • primrec is for primitive recursion which offers suitable defaults for the if-part and the rec1-part


Then there are a host of other recursive combinators for more advanced recursions like mutual recursion, nested recursion, generic recursion with n-ary branching etc.

Tuesday, December 16, 2008

Combinators for fun and profit

Wikipedia says ..

"A combinator is a higher-order function which, for defining a result from its arguments, solely uses function application and earlier defined combinators."

Combinators are primitive functions that do not contain any free variables. They take some arguments and produce a result solely based on those arguments. No side-effects, nothing. In concatenative languages like Joy and Factor, combinators are formed from the primitive elements by quotation and concatenation, using nothing but function composition, transforming your system into a rich algebra of programs. As I mentioned in my last post, there are no variables in Joy. Data is manipulated through a stack. Along with data, Joy pushes programs themselves onto the stack and manipulates them just like data through combinators.

I am an enterprise programmer - what do combinators buy me?

Nothing, except maybe a good evening over a weekend. Seriously! This is a continuation of my last post dedicated fully towards pure joy of writing programs. And this post is about some possibly wasted efforts that I have been plunging into, quite off and on. Dear prudent reader, you may not find anything useful out of it .. it is just a truthful and honest account of self indulgence into something I found enjoyable.

OK .. so you have been warned ..

Consider the simple function that finds the arithmetic mean of a list of numbers .. using a language that's quite not-so-elegant for combinator programming .. Scala ..

def arith_mean(list: List[Int]) = {
  list.foldLeft(0)(_ + _) / list.size
}


The method is purely functional and has no side-effects, but it is really not a manifestation of being produced as a composition of primitive functions. The parameters get in the way, the type annotations are noise. Joy makes combinator programming more concise and concatenative, and reveals the intention purely as a composition of combinators. Read the following aloud, and that is exactly what the program does, as every function works through the available stack .. Here is the program for the arithmetic mean in Joy ..

dup  0  [+]  fold  swap  size  /


and here is the commentary of how the operators and combinators above compose ..

  1. Duplicates the list on top of the stack

  2. Does a fold to add elements of the list, which is on top of the stack. So the top of the stack contains the sum of all elements of the list. And the next element of the stack contains a copy of the original list

  3. Swaps the top 2 elements of the stack - the top now contains the list

  4. Replace the top list with its size

  5. Apply division on the top 2 elements of the stack to get the result



A more idiomatic version of the above will use the cleave combinator, which compacts the above program and even parallelizes the computation of the sum of the elements and the size of the input list ..

[0 [+] fold]   [size]   cleave   /
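cleave applies two quoted programs to the same value and leaves both results for the next word. In Scala terms it can be sketched as (sequential rather than parallel, names illustrative):

```scala
// cleave: run two functions over one input, then combine their results
def cleave[A, B, C, D](x: A)(f: A => B, g: A => C)(h: (B, C) => D): D =
  h(f(x), g(x))

// [0 [+] fold] [size] cleave /  — the arithmetic mean
def mean(xs: Seq[Int]): Double =
  cleave(xs)(_.sum, _.size)(_.toDouble / _)
```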


Meanwhile, Reginald Braithwaite is having a party with combinators in his specialty un-blog homoiconic. He is exploring ways to write more enjoyable programs in Ruby using Kestrel, Thrush and the other combinator birds that Raymond Smullyan has immortalized in his creation To Mock a Mockingbird, a tribute to the bird watching passion of Haskell Curry. Twenty years since the book came out, it is still a joy to read.

In his series on refactoring code with combinators, Reginald presents how to refactor a sum_of_squares method using a divide_and_conquer combinator. The method computes the sum of squares of a tree of integers represented as a nested list. The idea is to model the divide_and_conquer paradigm as a combinator and have the sum_of_squares method as an implementation of that combinator.

Here is my sum_of_squares method in Scala for integers .. functional and recursive ..

def sum_of_squares(list: Iterable[Any]): Int = {
  list.foldLeft(0) {
    (s, x) =>
      s +
        (x match {
          case l: Iterable[_] => sum_of_squares(l)
          case e: Int => e * e
        })
  }
}


And now a divide_and_conquer combinator in Scala that defines the strategy ..

def divide_and_conquer(value: Iterable[Any],
                       conquer: Any=>Any,
                       divide: Iterable[Any]=>Iterable[Any],
                       recombine: Iterable[Any]=>Any): Any = {
  recombine(
    divide(value).map { (x: Any) =>
      x match {
        case l: Iterable[_] =>
          divide_and_conquer(l, conquer, divide, recombine)
        case e =>
          conquer(e)
      }
    }
  )
}


where ..

  • divide is the step that divides the main problem into subproblems

  • conquer is the trivial case for the smallest part

  • recombine is the step that pieces solutions to all subproblems together


And here is the implementation of sum_of_squares using the combinator ..

def sum_of_squares(l: List[Any]): Any = {
  divide_and_conquer(l,
    (x) => x.asInstanceOf[Int] * x.asInstanceOf[Int],
    (x) => x,
    (x) => x.asInstanceOf[List[Int]].foldLeft(0){_ + _}
  )
}


Quite a bit of noise compared to an implementation using a concatenative language, with all parameters and type annotations sticking around. But it's not too ugly compared to what other mainstream languages can offer .. and anyway it's fun .. I am sure someone more conversant with Scala will be able to make it a bit more succinct and idiomatic.

Here is another implementation of the divide_and_conquer strategy using the combinator above .. flattening a list ..

def flatten(list: List[Any]): Any = {
  divide_and_conquer(list,
    (x) => x,
    (x: Iterable[Any]) => x,
    (x: Iterable[Any]) =>
      x.foldLeft[List[Any]](Nil){
        (s, x) =>
          s :::
            (x match {
              case l: List[_] => l
              case e => List(e)
            })
      }
  )
}


Quite ugly .. huh? Looks like Scala is not an ideal language for combinator based programming, though we do have some very good combinator implementations in Scala: the parser combinator library, the pickler combinator library etc. But if you are serious about combinators and combinator based programs, jump into the concatenative languages. Joy gives us the following implementation of flatten ..

flatten  ==  [null]  []  [uncons]  [concat] linrec


Here goes the commentary ..

  1. If the parameter list is empty, nothing to flatten, leave the empty list

  2. Otherwise, uncons to get the first and its rest

  3. Flatten rest through anonymous recursion on it

  4. Concatenate the saved first to the result


Quite intuitive, and implemented using function composition only. Just one trick - what is the linrec doing out there?

linrec is one of the most widely used recursion combinators in Joy, which can be used to avoid recursive definitions in your program. It encapsulates the recursion part within its implementation, much like what we did with the recursion in our divide-and-conquer combinator. Joy also offers a host of other recursion combinators like genrec, binrec, tailrec etc. Using them you can write non-recursive definitions of recursive tasks through simple composition and quotation. But that is the subject of another rant .. some other day ..