
Thursday, August 28, 2008

for eachConcurrently

For those who have wondered how to implement Neal Gafter's example 'for eachConcurrently' API with BGGA Closures, here's one attempt:

ForEachConcurrently.java

There's a bonus 'for eachEntryConcurrently' in there as well.

I'm sure it can be improved (suggestions welcome) and it certainly deserves a decent explanation, which is coming, but I'll make one comment at this point:

Adding support for nonlocal transfers across different threads required two lines of code.

Two lines is easy. I like easy. Especially when threads are involved.
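
For a taste of what a call site looks like, here's a hedged sketch using BGGA's loop abstraction syntax - the executor argument and the loop body are illustrative assumptions on my part, not the actual signature from ForEachConcurrently.java:

// hypothetical call site; 'pool' and 'limit' are assumed to be in scope
for eachConcurrently(File file : files, pool) {
    long size = calculateSize(file);   // iterations may run on different threads
    if (size > limit)
        break;                         // a nonlocal transfer back across threads
}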

Wednesday, August 27, 2008

NetBeans vs Closures

It's alive!



Well, actually it's kinda surviving on life support at the moment, but hey.

It could do with some syntax highlighting support too. And refactoring. And and and...

(If you're wondering what this is all about, it's NetBeans using a hacked-into-shape version of this compiler)

Thursday, June 26, 2008

Schwartzian Transforms in Java

Libraries like Java's Collections API contain numerous algorithms tuned to give good performance in a theoretical 'typical case', but it can be very easy to forget that these algorithms aren't magical, and have certain requirements of their own if that goal is to be achieved.

On numerous occasions, I've written or encountered code that seemed to work perfectly well during development and testing, but performed very poorly when it had to deal with a larger or more complex data-set, something which often happens only after the code has been put into production use.

Here's a program which lists the contents of a specified directory ordered by size, with the size of a child directory being calculated from the total size of all of its contents:

import java.io.*;
import java.util.*;

public class FileSizes {

    public static void main(String[] args) {

        File directory = new File(args[0]);
        List<File> files = new ArrayList<File>(Arrays.asList(directory.listFiles()));

        Collections.sort(files, new Comparator<File>() {
            public int compare(File file1, File file2) {
                long file1Size = calculateSize(file1);
                long file2Size = calculateSize(file2);
                if (file1Size == file2Size)
                    return 0;
                return file1Size < file2Size ? -1 : 1;
            }
        });

        for (File file : files)
            System.out.println(file);
    }

    static long calculateSize(File file) {
        long size = 0;
        if (file.isDirectory())
            for (File child : file.listFiles())
                size += calculateSize(child);
        else
            size = file.length();
        return size;
    }
}

Now, how long does this program take to execute?

Potentially a lot longer than it should, that's how long.

The problem here is that during the sort operation, each item being sorted may be compared against the other items multiple times - exactly how many depending on the sorting algorithm in use and the size and ordering of the initial data-set. If that comparison operation involves some expensive calculation, as it does in this rather blatant example, it's normally wasted effort to perform that calculation every time. In such cases, it's often preferable to cache the result of the calculation the first time it's performed, and reuse that cached version if it's needed again.
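
Before reaching for anything cleverer, the cache can simply live alongside the comparator. Here's a minimal sketch of that idea, as a drop-in replacement for the Collections.sort call in the program above (sizeCache and sizeOf are my own names, not part of the original):

final Map<File, Long> sizeCache = new HashMap<File, Long>();
Collections.sort(files, new Comparator<File>() {
    public int compare(File file1, File file2) {
        return sizeOf(file1).compareTo(sizeOf(file2));
    }
    private Long sizeOf(File file) {
        Long size = sizeCache.get(file);
        if (size == null) {               // calculate at most once per file
            size = calculateSize(file);
            sizeCache.put(file, size);
        }
        return size;
    }
});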

A common technique used by Perl programmers faced with this problem is to employ something known as the Schwartzian Transform, which looks something like this:

my @sorted_data =
    map  { $_->[0] }
    sort { $a->[1] cmp $b->[1] }
    map  { [$_, expensive_calculation($_)] }
    @unsorted_data;

The idea here is that we start with the unsorted data (read the code from the bottom line up), and apply a mapping function to it which maps each item to a 2 element array containing the original item at index 0 and its associated calculated value at index 1. These are then sorted, using the element at index 1 for the comparisons, and the result is finally fed through another mapping function to extract just the original items, now in the desired order.

We can try something similar in Java. We'll need a mapping method, and let's make it general enough that we can give it any Iterable (such as a List) and a Mapper to apply to each element:

interface Mapper<T,U> {
    U map(T item);
}

<T,U> Iterable<U> map(final Iterable<T> items, final Mapper<? super T, ? extends U> mapper) {
    return new Iterable<U>() {
        public Iterator<U> iterator() {
            final Iterator<T> iter = items.iterator();
            return new Iterator<U>() {
                public boolean hasNext() { return iter.hasNext(); }
                public U next() { return mapper.map(iter.next()); }
                public void remove() { iter.remove(); }
            };
        }
    };
}

To match the Perl example, we'll also want a sort method which sorts and returns a new List rather than modifying the one passed to it:

<T> List<T> sort(Iterable<T> items, Comparator<? super T> comparator) {
    List<T> list = new ArrayList<T>();
    for (T t : items)
        list.add(t);
    Collections.sort(list, comparator);
    return list;
}

Finally we'll need some kind of data structure to hold the item together with its calculated-value. A type-safe Pair class will do:

class Pair<A,B> {
    public static <A,B> Pair<A,B> of(A fst, B snd) {
        return new Pair<A,B>(fst, snd);
    }
    private final A fst;
    private final B snd;
    private Pair(A fst, B snd) {
        this.fst = fst;
        this.snd = snd;
    }
    public A fst() { return fst; }
    public B snd() { return snd; }
}

Now we can replace the Collections.sort call in the original example with a Schwartzian Transform:

Iterable<File> sortedFiles =
    map(sort(map(files,
                 new Mapper<File, Pair<File,Long>>() {
                     public Pair<File,Long> map(File f) {
                         return Pair.of(f, calculateSize(f));
                     }
                 }),
             new Comparator<Pair<File,Long>>() {
                 public int compare(Pair<File,Long> p1, Pair<File,Long> p2) {
                     return p1.snd().compareTo(p2.snd());
                 }
             }),
        new Mapper<Pair<File,Long>,File>() {
            public File map(Pair<File,Long> p) {
                return p.fst();
            }
        });

Well, that's pretty horrible, whichever way you try to lay out the code. Still, there's one improvement we could make - since we're just writing a Comparator which extracts the Long values and compares them, we could overload our sort method to accept a Mapper instead of a Comparator:

<T, U extends Comparable<U>> List<T> sort(Iterable<T> items, final Mapper<? super T, U> mapper) {
    List<T> list = new ArrayList<T>();
    for (T t : items)
        list.add(t);
    Collections.sort(list, new Comparator<T>() {
        public int compare(T t1, T t2) { return mapper.map(t1).compareTo(mapper.map(t2)); }
    });
    return list;
}

That allows us to shorten the code a little:

Iterable<File> sortedFiles =
    map(sort(map(files,
                 new Mapper<File, Pair<File,Long>>() {
                     public Pair<File,Long> map(File f) {
                         return Pair.of(f, calculateSize(f));
                     }
                 }),
             new Mapper<Pair<File,Long>,Long>() {
                 public Long map(Pair<File,Long> p) {
                     return p.snd();
                 }
             }),
        new Mapper<Pair<File,Long>,File>() {
            public File map(Pair<File,Long> p) {
                return p.fst();
            }
        });

It's still far too cumbersome though - if I were writing this 'for real' I'd probably have given up by now.

Let's try it using closures instead of anonymous classes:

Iterable<File> sortedFiles =
    map(sort(map(files,
                 {File f => Pair.of(f, calculateSize(f))}),
             {Pair<File,Long> p => p.snd()}),
        {Pair<File,Long> p => p.fst()});

That's much better to my eyes - maybe not quite as succinct as the Perl version, but it's digestible.

The Mapper interface can be replaced by function types so let's lose that and apply a bit more closures magic to the map and sort methods:

<T,U> Iterable<U> map(Iterable<T> items, {T=>U} mapper) {
    return {=>
        Iterator<T> iter = items.iterator();
        new Iterator<U>() {
            public boolean hasNext() { return iter.hasNext(); }
            public U next() { return mapper.invoke(iter.next()); }
            public void remove() { iter.remove(); }
        }
    };
}

<T,U extends Comparable<U>> List<T> sort(Iterable<T> items, {T=>U} c) {
    List<T> list = new ArrayList<T>();
    for (T t : items)
        list.add(t);
    Collections.sort(list, {T t1, T t2 => c.invoke(t1).compareTo(c.invoke(t2))});
    return list;
}

Of course we could also tuck all that map/sort/map stuff away in a handy reusable method:

<T,U extends Comparable<U>> Iterable<T> schwartzianSort(Iterable<T> items, {T=>U} mapper) {
    return map(sort(map(items,
                        {T t => Pair.of(t, mapper.invoke(t))}),
                    {Pair<T,U> p => p.snd()}),
               {Pair<T,U> p => p.fst()});
}

Leaving us with:

public static void main(String[] args) {

    File directory = new File(args[0]);
    List<File> files = new ArrayList<File>(Arrays.asList(directory.listFiles()));

    for (File file : schwartzianSort(files, {File f => calculateSize(f)}))
        System.out.println(file);
}

Much nicer!

Sometimes, it's not immediately obvious what a method like that should look like, or how it might be implemented. Breaking the algorithm down into a series of functions, as the Schwartzian Transform does, can be a useful way to approach the problem at hand, but only if the language you're using has sufficiently practical constructs.

updated 29/06/08 to fix a bug in map() pointed out by Konstantin Triger.
updated 03/07/08 to fix a bug in the Comparator pointed out by bjkail.

Thursday, May 22, 2008

Nothing's guaranteed

Kasper B. Graversen's blog entry on 'Exception handling problems in Java' came up in my news reader today, courtesy of dzone. He wants to be able to extract common exception handling code from catch clauses into methods, allowing reusability of that code, and removing clutter from the main logic.

Here's a silly example, replete with dubious practices for the sake of conciseness :)

Food makeLunch() {

    try {
        return kitchen.makeSandwich();
    }
    catch (KitchenClosedException ex) {
        handleException(ex, "Couldn't make a sandwich :(");
    }
    catch (NoBreadException ex) {
        handleException(ex, "Couldn't make a sandwich :(");
    }
    return null;
}

void handleException(Exception ex, String message) throws RuntimeException {
    log.error("Something went wrong", ex);
    throw new RuntimeException(message, ex);
}


Kasper points out that the 'return null;' statement at the end of the makeLunch() method is pointless - although we can see that it will never be executed, the compiler requires it because the signature of the handleException() method only says that it may throw an exception, but does not guarantee it.

There's some discussion of the 'problem' here, including alternative ways to structure the code.

With the BGGA closures prototype though, we can rewrite the example as follows:

Sandwich makeLunch() {

    try {
        return kitchen.makeSandwich();
    }
    catch (KitchenClosedException | NoBreadException ex) {
        handleException(ex, "Couldn't make a sandwich :(");
    }
}

Nothing handleException(Exception ex, String message) throws RuntimeException {
    log.error("Something went wrong", ex);
    throw new RuntimeException(message, ex);
}

There are three changes to the original example here:
  • BGGA allows us to catch multiple types of exception with one catch clause.

  • The signature of the handleException() method now has a return type of java.lang.Nothing, which guarantees that the method will not 'complete normally' (because it always throws an exception). The compiler can enforce this guarantee, and we can take advantage of it.

  • Because of the above change, the 'return null;' is neither required, nor allowed, since the compiler now knows that the statement can never be executed.


Which is nice.

Thursday, March 6, 2008

Taking control

I've been back in the world of GUI development in the last few days, which I really enjoy when I get a chance to do it.

It's not too often that I hear other people say that they actually enjoy building GUIs though - many of the developers I've worked with or interviewed have made it very clear that they are much happier writing 'back-end stuff': database persistence, messaging systems, business logic. The kind of thing you can define with use cases, implement, test with Unit and Fit tests, and put a tick in the box when you've met all the requirements.

Which is fair enough - when you've come up with simple, robust and maintainable solution that meets all the requirements, it can be very satisfying being able to put that tick there.

GUI development can seem so... messy. Vague specifications, awkward coding models and difficult testing are all common problems. And the bugs can bite so much harder.

When a network messaging sub-system goes wrong, there's a fair chance it 'just' means that the data the user's seeing is out of date. Maybe the developer who wrote the messaging code had the presence of mind, and the time, to implement some decent error handling, so you might get to see the problem in log files and figure out what went wrong before the shouting starts.

When a GUI goes awry though, Murphy's Law says it'll Go Bang in the user's face (and probably during a demo, in a way that has never happened before) and you damn well know about it when it happens... you'll get a screenshot as proof if you're lucky.

In the wild and wacky world of multi-threaded, object-oriented, event-driven programming, we find ourselves creating various coding patterns to deal with the complexity:

"Every time I update the model, I need to ensure a flag is set".
"Every time I display some data, I need to check the flag and perform some action depending on its state...".

And so on. Simple rules, designed to introduce invariants, checks, guards, some kind of control into a seemingly non-deterministic model. And in doing so we drag in boilerplate code, with all its inherent problems as it's copied, pasted and maintained over time.

We can often attempt to avoid these problems by pushing this boilerplate down into our existing layers of object abstractions, or by creating new layers to deal with this application-specific logic. But they don't always fit - sometimes because it's like fitting a square peg into a round hole, and sometimes because the layers of classes, interfaces, mapping, delegation, configuration and so forth are already so complex that pushing more in at one end just results in different things dropping out at the other end.

Some programming languages acknowledge this by helping us implement this boilerplate control logic as procedural abstractions instead, providing constructs to take the code that really matters, and pass it to methods which execute it wrapped safely inside our control logic.
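
To make that less abstract, here's a hedged sketch of such a construct using BGGA closures - the method name is invented, but SwingUtilities is the real Swing API:

// owns the 'am I on the event dispatch thread?' boilerplate in one place
void onEventThread({=> void} block) {
    if (SwingUtilities.isEventDispatchThread())
        block.invoke();
    else
        SwingUtilities.invokeLater({=> block.invoke()});   // closure conversion to Runnable
}

Call sites can then use the control invocation syntax, and the boilerplate disappears:

onEventThread() {
    statusLabel.setText("Done");   // statusLabel is an assumed Swing component
}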

I like programming GUIs once in a while - building something that works really slickly can be great fun.

I could really make use of closures and control abstractions right now however. Boilerplate is no fun.

Wednesday, February 6, 2008

Say it again SAM

CICE is designed to make it easier to create instances of SAM types - interfaces and abstract classes which have a Single Abstract Method. The proposal itself also contains the following statement:

At the same time as this facility is added to the language, it would make sense to add a few more single method interface types, such as Predicate, Function, and Builder.

It's no coincidence that the jsr166y.forkjoin framework from Doug Lea and the rest of the jsr166 group is defining a number of interfaces like these for that framework's own needs - in fact the commonality is explicitly acknowledged in the documentation:

Preliminary release note: Some of the declarations in this class are likely to be moved elsewhere in the JDK libraries upon actual release, and most likely will not all nested in the same class.

I've just uploaded a new build of the CICE/ARM prototype, which includes a number of interfaces derived from those currently defined by jsr166y.

The differences between the interfaces in the prototype, and those provided by jsr166y are:

  • I've put them in a new package - java.util.ops

  • It includes versions for all primitive types

  • All Op types come in two forms - one which can throw checked exceptions, and one which cannot (see the sketch below)

  • Op types no longer have anything useful to say in their javadoc


All of these changes are possible consequences of removing the interfaces from the context of a specific framework, and in particular, a framework for which only a limited number of these interfaces are useful.
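
As a hedged guess at what the third bullet might mean for a single Op type, the two forms could look something like this (the names are illustrative, not the actual java.util.ops declarations):

// the unchecked form
public interface IntReducer {
    int invoke(int a, int b);
}

// the checked form, parameterised by the exception type it may throw
public interface IntReducerThrows<X extends Throwable> {
    int invoke(int a, int b) throws X;
}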

Unfortunately it adds up to a massive increase in the number of interfaces defined, from 132 in jsr166y.forkjoin.Ops, to 1828 in java.util.ops, and it's possible to define even more variations. Ouch :( Kingdom of the Nouns indeed...

Still, it's a straw-man, and no doubt it can be improved. Some thoughts I've had on this are:

  • Placing them all in java.util.ops means Comparator is defined in two different packages, so it's not ideal.

  • It could be argued that providing versions for all primitive types is overkill. However, without knowing the requirements of future libraries which may make use of them, it's difficult to know which ones can be omitted. Type promotion helps for the parameter types, but it's no use with return types.

  • Is it useful providing checked exception versions of all the Op types? Would it be more appropriate to do this for Procedure and friends?



Stephen Colebourne recently compared writing the same snippet of code in BGGA, FCM and CICE, and noted that a CICE can be rather more verbose due to the repetition of type names. His example used an interface called PairMatcher, defined as follows:

public interface PairMatcher<T,U> {
    boolean matches(T arg1, U arg2);
}

Creating an instance of this using earlier versions of the prototype would look something like this:

PairMatcher<String,Integer> pm = PairMatcher<String,Integer>(String str, Integer val) {
    return (str != null && str.length() > 0 && val != null && val > 10);
};

However, it has been pointed out that the parameter types are not really necessary, since there is only one method which a CICE can legally implement. So the latest version of the prototype allows you to do just that:

PairMatcher<String,Integer> pm = PairMatcher<String,Integer>(str, val) {
    return (str != null && str.length() > 0 && val != null && val > 10);
};

I'm not a big fan of that syntax personally - I'd rather lose the type arguments - but it's an option.
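
For comparison, the variant I'd prefer - dropping the type arguments rather than the parameter types - might look like this (speculative syntax, not something the prototype supports):

PairMatcher<String,Integer> pm = PairMatcher(String str, Integer val) {
    return (str != null && str.length() > 0 && val != null && val > 10);
};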

Thoughts very welcome, as usual.

Thursday, January 17, 2008

Exception handling with ARM blocks

One of the big questions in my mind around ARM blocks is exactly how they should behave when exceptions are thrown while disposing of managed resources. It seems likely that we'd want some flexibility in this - there are probably cases where we'd like the ARM block to suppress these exceptions, but others where that would be inappropriate.

The initial version of the prototype only supported the do() { } form of ARM block, and forced the programmer to catch any checked exceptions thrown by the resources' close() methods, which I've found generally leads to wrapping the do() block in a try { ... } catch (IOException e) { ... }, or adding throws IOException to the enclosing method's signature, neither of which is necessarily useful.

Build 03 changes the default semantics of the do() block so that it no longer throws exceptions raised by calling the close() method, and adds support for a try() form of ARM block which can be used when the programmer does wish to explicitly handle such exceptions.

So in the following example using the Closeable resource 'fooFactory', there's no longer any need to provide explicit handling of IOExceptions:

do (fooFactory) {
    System.out.println("innocuous");
}

whilst this code forces us to catch exceptions of type FooException, which is a checked exception:

do (fooFactory) {
    Foo foo = fooFactory.createFoo();
    if (foo == null)
        throw new FooException("no foo :(");
}

and the new try() block allows us to do that more neatly:

try (fooFactory) {
    Foo foo = fooFactory.createFoo();
    if (foo == null)
        throw new FooException("no foo :(");
}
catch (FooException e) {
    // something went wrong in the block
}
catch (IOException e) {
    // something went wrong disposing of fooFactory
}

A further option for the try() block would be to suppress any exceptions thrown during automatic disposal which are not explicitly caught, allowing us to choose which ones we wish to handle. This doesn't seem quite so nice if the main body of the try block also throws IOExceptions though.

Thoughts welcome.

Monday, January 14, 2008

CICE/ARM prototype

As I mentioned a few days ago, I've been hacking around in javac recently in an attempt to get it to implement some of the ideas in the CICE and ARM proposals. The idea being that having a prototype available may help to highlight which aspects of the proposals really work in practice, and which do not.

I've put an initial cut here, which is based on the JRL licensed JDK codebase from 04-Dec-2007 (JDK 7 build 24). It hasn't had a great deal of testing, so don't expect it to be wonderfully stable at this stage. I'll attempt to fix any bugs that crop up however.

It's important to note that this initial prototype has been put together without consulting the authors of the two proposals, so I may well have misunderstood some of the ideas - apologies if so. I've also had to make a call on which aspects of the proposals to include, and which to exclude for now. I've opted for a pretty minimalist implementation to start with, and that pans out as follows:

CICE

CICE expressions, public local variables and implicitly final local variables should all be working. I've left out transient public locals and type inference.

At this stage, I also haven't provided any of the library additions or changes mentioned in section IV of the proposal.

Basically all of the examples in the CICE document should work (apart from those involving AbstractThreadLocal).

ARM blocks

ARM blocks can take one of two forms, both based on the do keyword only for now. The first allows you to declare the resources' variables as part of the ARM block itself:

do (InputStream in = new FileInputStream(src);
    OutputStream out = new FileOutputStream(dest)) {
    // ...
}

The second form is where the resources are provided as a list of expressions:

InputStream in = new FileInputStream(src);
OutputStream out = new FileOutputStream(dest);

do (in, out) {
    // ...
}

Currently, all resources must implement java.io.Closeable. As the ARM proposal points out, Closeable is probably not a good choice in the long run, but it's a starting point.

An ARM block will attempt to call close() on all the resources, in the reverse order that they were listed. In the first form of the block the variables are not final so it is possible to reassign in and out to some other Input/OutputStream within the ARM block - the block will attempt to close whatever Closeables the variables refer to at the time the block ends. In the second form of the block, reassigning in and out has no effect.

In the first form, if in or out is set to null, this effectively cancels the automatic closing of the variable in question, and does not cause a NullPointerException or any other runtime error. I don't know whether this is useful or not, as yet.
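
A hedged sketch of that cancellation behaviour (handOff() and handOffRequested are invented for illustration):

do (InputStream in = new FileInputStream(src)) {
    if (handOffRequested) {
        handOff(in);   // some other component now owns the stream...
        in = null;     // ...so cancel the ARM block's automatic close()
    }
}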

In the first form, it's possible to mark the variables as final. It's also possible to mark them as public, which is only really useful if reassignment should be allowed.

If exceptions are raised, whether by evaluating the variable initializers (in the first form), by executing the body of the block, or by the calls to close() at the end of the block, it should only ever be the first such exception which is ultimately thrown by the block, after it has attempted to close all the resources.

The proposal doesn't mention whether it should be possible to use 'break' statements within an ARM block so I haven't implemented that. I think it may be useful though - thoughts welcome.

Bug reports and the like can be sent to me directly at markmahieu at gmail dot com.

Friday, January 11, 2008

CICE/ARM observations

When looking at a proposed new language feature it's often useful to compare it against similar features in other languages, and against alternative proposals. Such comparisons can lead to a deeper understanding of the opportunities and pitfalls involved, and this certainly holds true in the case of adding closures to Java.

Neal Gafter has made available a prototype implementation of a large chunk of the BGGA specification, which was immensely helpful in my own attempts at understanding that proposal. However, Josh Bloch's recent comments on the closures debate caused me to think that it would be useful to have a similar prototype of the CICE and ARM proposals as a more concrete basis for comparison, so over the last couple of weeks I've attempted to put one together (it makes a change from the day job). The result is that I now have a version of javac which implements one interpretation of CICE/ARM.

A number of observations about both CICE/ARM and BGGA have come out of building and using this thing, and I'll blog about some of them over the coming days. For now, I'll start off with a couple of quick comments:

Firstly and most importantly, both the CICE and ARM documents really are 'a bit sketchy' (to quote Josh Bloch) in places - the ARM document in particular, which covers a good many of the possible options and issues within its chosen design space, but seems to stop short of making a call on some of the more important ones (this is why I say I've implemented 'one interpretation' of them). There seem to be quite a few valid possibilities when you combine the options for resource acquisition/initialisation/disposal and exception handling, and there's no obvious 80/20 rule apparent to me at least. In comparison, BGGA avoids such issues largely by being more open and flexible about these options, and therefore not having to specify them.

Something I didn't expect was that 'this' within a CICE doesn't mean what I subconsciously expect it to when writing code - I find myself thinking it means the instance of the containing class, not of the CICE itself. Maybe it's because of the shorter syntax, or the fact that I've been using BGGA recently, although I do get this problem with normal anonymous classes in Java from time to time anyway.

Finally (and I'm not sure about this one yet) I have a suspicion that the current syntax for a CICE could be a bit tricky to parse if you want to stick to one-token lookahead, possibly involving a fair bit of messing with the grammar which javac is based on.

More later.

Monday, December 10, 2007

An unrestricted closure example

An interesting question was raised recently about whether there are realistic examples of needing to pass an unrestricted closure when not using the control invocation syntax.

Let's say you were tasked with writing a method which calculates the returns on an accumulator bet. An accumulator (also known as a Parlay in many parts of the world) is a bet where you select several teams/opponents/whatever to win their matches. Your stake (the amount of money you are betting) goes onto the first of those selections, and if your prediction was correct, the returns from that part of the bet go onto the second of the selections, and so on. If any of your selections lost, you lose your money.

As an example, I could have a £10 accumulator bet on the following teams to win in the English Premiership this weekend:

Arsenal @ 2.40
Liverpool @ 2.60
Blackburn @ 2.25
Bolton @ 4.20
Everton @ 2.70

The number after the @ represents the odds for that team to win - 2.0 would mean doubling my money. If all these teams win their matches, I get £10 * 2.4 * 2.6 * 2.25 * 4.2 * 2.7 = £1592.14 back. If any one of these teams lose, I get nothing back. If any of the matches involved do not end in a decisive win/lose result (eg. a match is cancelled), that part of the bet is 'void' - I don't lose my whole bet, but it's calculated as though the odds on that selection were 1.0 (neither profit nor loss).

How could we write this?

Imagine we have a method called reduce(), which is defined as follows:

<E, R> R reduce(Iterable<E> values, R initial, {R, E => R} reducer) {
    R result = initial;
    for (E element : values) {
        result = reducer.invoke(result, element);
    }
    return result;
}

This method iterates over values, and invokes reducer passing it the result of the invocation it performed on the previous element, and the current element. In the case of the first iteration, there was no previous element, so it uses the value provided as initial. The result of the very last invocation is what's returned from reduce().

Here's a silly but easy example, before I get back to the main point:

List<String> words = Arrays.asList("Mary", "had", "a", "little", "lamb");
// sum the number of characters in the words
int totalChars = reduce(words, 0, {Integer count, String word => count + word.length()});

OK, so we can also use reduce() to help us calculate the returns on an accumulator bet:

BigDecimal calculateReturns(AccumulatorBet bet) {

    List<Selection> selections = bet.getSelections();

    return reduce(selections, bet.getStake(),
        {BigDecimal money, Selection selection =>
            Result result = loadResult(selection);
            if (result.isWinner()) {
                money = money.multiply(selection.getOdds());
            }
            else if (result.isLoser()) {
                money = BigDecimal.ZERO;
            }
            money
        });
}

If you think about it though, once you hit a loser, there's no point continuing - the returns (value held in money) are now zero, and no amount of multiplication is going to change that. You might be tempted to just change:

else if (result.isLoser()) {
    money = BigDecimal.ZERO;
}

to

else if (result.isLoser()) {
    return BigDecimal.ZERO;
}

which would return BigDecimal.ZERO from calculateReturns(). At the very least, it would save you from unnecessary calls to loadResult(), which could well be a costly operation.

That's only possible if we're allowed to pass an unrestricted closure / function type to reduce(), which is not a method with which we can use the control invocation syntax. The current (v0.5) specification does allow this, but there's debate over whether it's a useful feature (see the comments on that page).

Real betting systems have to settle much more complicated bets, and have to do so as quickly as possible. Shortcuts like the above are very useful for achieving that.

Sunday, December 9, 2007

Memoization revisited

In last week's thrilling episode, the world watched in horror as I attempted memoization with Java Closures, then really pushed my luck with some dodgy curry.

This week, we're going to take the memoization plotline through a whole new twisty-turny story-arc, and see what happens with multi-key memoization.

Our story shall revolve around a class Font, which for some important but unspecified reason needs to ensure that only one instance will be created for any particular combination of its name and pointSize attributes:

public class Font {

    private static Map<String, Map<Double, Font>> instanceMap =
        new HashMap<String, Map<Double, Font>>();

    public static synchronized Font getInstance(String name, double pointSize) {
        Font font = null;
        Map<Double, Font> pointSizeMap = instanceMap.get(name);
        if (pointSizeMap == null) {
            pointSizeMap = new HashMap<Double, Font>();
            instanceMap.put(name, pointSizeMap);
        }
        else {
            font = pointSizeMap.get(pointSize);
        }
        if (font == null) {
            font = new Font(name, pointSize);
            pointSizeMap.put(pointSize, font);
        }
        return font;
    }

    private Font(String name, double pointSize) {
        // construct the Font
    }
}

There are a few problems with this. Although the code is fairly simple, it's still easy to make a mistake when implementing it. Worse, if we were writing a Font class for real we'd probably want to specify more than just the name and pointSize parameters - the example above doesn't even allow the user to specify any styles such as italic, or bold. The code just gets nastier when we do so:

public class Font {

    private static Map<String, Map<Double, Map<EnumSet<FontStyle>, Font>>> instanceMap =
        new HashMap<String, Map<Double, Map<EnumSet<FontStyle>, Font>>>();

    public static synchronized Font getInstance(String name, double pointSize, EnumSet<FontStyle> styles) {
        Font font = null;
        Map<EnumSet<FontStyle>, Font> stylesMap = null;
        Map<Double, Map<EnumSet<FontStyle>, Font>> pointSizeMap = instanceMap.get(name);
        if (pointSizeMap == null) {
            pointSizeMap = new HashMap<Double, Map<EnumSet<FontStyle>, Font>>();
            instanceMap.put(name, pointSizeMap);
            stylesMap = new HashMap<EnumSet<FontStyle>, Font>();
            pointSizeMap.put(pointSize, stylesMap);
        }
        else {
            stylesMap = pointSizeMap.get(pointSize);
            if (stylesMap == null) {
                stylesMap = new HashMap<EnumSet<FontStyle>, Font>();
                pointSizeMap.put(pointSize, stylesMap);
            }
            else {
                font = stylesMap.get(styles);
            }
        }
        if (font == null) {
            font = new Font(name, pointSize, styles);
            stylesMap.put(styles, font);
        }
        return font;
    }

    private Font(String name, double pointSize, EnumSet<FontStyle> styles) {
        // construct the Font
    }
}

We could abandon the nested Maps for a single Map with some kind of compound key - perhaps a String representation of the Font's attributes:

public class Font {

    private static Map<String, Font> instanceMap = new HashMap<String, Font>();

    public static synchronized Font getInstance(String name, double pointSize, EnumSet<FontStyle> styles) {
        String key = name + " " + pointSize + " " + styles;
        Font font = instanceMap.get(key);
        // ...
    }

    // ...
}

Unfortunately this can often result in poorly distributed hash values - the hash codes for "Courier 10.0" and "Courier 10.5" are very close, compared to those for Double.valueOf(10.0) and Double.valueOf(10.5).
Worse, it's fragile. The Windows machine next to me has two distinct fonts "Rockwell" and "Rockwell Condensed", but what if "Condensed" happens to be the name of one of my FontStyles? Which font does a key of "Rockwell Condensed 10.0" represent - is it the Rockwell font with Condensed styling, or the Rockwell Condensed font with no styling?

Alternatively, we could opt to create a dedicated FontAttributes class to use as our compound key, but this just shuffles the complexity into this new class's hashCode() and equals() methods.
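
To see where that complexity goes, here's a sketch of what such a key class might look like (FontAttributes is illustrative, and its equals() and hashCode() are exactly the kind of code that's easy to get subtly wrong):

final class FontAttributes {
    private final String name;
    private final double pointSize;
    private final EnumSet<FontStyle> styles;

    FontAttributes(String name, double pointSize, EnumSet<FontStyle> styles) {
        this.name = name;
        this.pointSize = pointSize;
        this.styles = styles;
    }
    public boolean equals(Object o) {
        if (!(o instanceof FontAttributes)) return false;
        FontAttributes other = (FontAttributes) o;
        return name.equals(other.name)
            && pointSize == other.pointSize
            && styles.equals(other.styles);
    }
    public int hashCode() {
        int h = name.hashCode();
        h = 31 * h + Double.valueOf(pointSize).hashCode();
        return 31 * h + styles.hashCode();
    }
}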

Finally, the simple synchronization technique employed exhibits poor concurrency, which is fixable, but at the cost of even more complexity.

Utilizing memoization, what we'd really like would be something like this:

public class Font {

    private static {String, Double => Font} memoizer =
        memoize({String name, Double pointSize => new Font(name, pointSize)});

    public static Font getInstance(String name, double pointSize) {
        return memoizer.invoke(name, pointSize);
    }

    private Font(String name, double pointSize) {
        // construct the Font
    }
}

Here, all the complexity involved in managing the creation of Font instances (including synchronization) is dealt with by the memoize() method, and it's practically effortless to add additional parameters.

So far though, we've only seen a version of memoize() which works with one key:

static <A, R> {A => R} memoize({A => R} fn) {
    Map<A, R> cache = new HashMap<A, R>();
    return {A a =>
        R value = null;
        synchronized(cache) {
            value = cache.get(a);
            if (value == null) {
                value = fn.invoke(a);
                cache.put(a, value);
            }
        }
        value
    };
}

How can we extend this to work with two, three, or even any number of keys? Obviously we can overload it, so let's see what a version which takes two keys might look like:

static <A1, A2, R> {A1, A2 => R} memoize({A1, A2 => R} fn) {
    Map<A1, Map<A2, R>> cache1 = new HashMap<A1, Map<A2, R>>();
    return {A1 a1, A2 a2 =>
        R value = null;
        synchronized(cache1) {
            Map<A2, R> cache2 = cache1.get(a1);
            if (cache2 == null) {
                cache2 = new HashMap<A2, R>();
                cache1.put(a1, cache2);
            }
            else {
                value = cache2.get(a2);
            }
            if (value == null) {
                value = fn.invoke(a1, a2);
                cache2.put(a2, value);
            }
        }
        value
    };
}

This works, but it suffers from the same problems as our earlier implementation of getInstance() which used nested Maps - poor concurrency and increased complexity as the number of keys increases. It is better though, because now at least these problems don't need to be replicated throughout countless getInstance() methods. I'll deal with the concurrency issue later, but let's see if we can improve the complexity situation.

A useful observation we can make is that once we've retrieved the 'inner' Map (cache2), we're back to doing single-key memoization, which is a problem we already have a solution to. Can we define our two-key version of memoize() in terms of the single-key version?

In order to call the single-key memoize({A1 => R} fn), we somehow need to turn the fn we have, which is of type {A1, A2 => R}, into something of type {A1 => R}. This is exactly what partial application can do for us here, since we have the value of the first argument in a1:

static <A1, A2, R> {A1, A2 => R} memoize({A1, A2 => R} fn) {
    Map<A1, {A2 => R}> cache = new HashMap<A1, {A2 => R}>();
    return {A1 a1, A2 a2 =>
        {A2 => R} valueMemoizer = null;
        synchronized(cache) {
            valueMemoizer = cache.get(a1);
            if (valueMemoizer == null) {
                valueMemoizer = memoize(partial(fn, a1));
                cache.put(a1, valueMemoizer);
            }
        }
        valueMemoizer.invoke(a2)
    };
}

The cache is now a simple Map of keys of type A1 to values which are single-key memoizers. This is much better, since the code no longer increases in complexity as we increase the number of keys - we can see this clearly when we define our three-key version in terms of the two-key version:

static <A1, A2, A3, R> {A1, A2, A3 => R} memoize({A1, A2, A3 => R} fn) {
    Map<A1, {A2, A3 => R}> cache = new HashMap<A1, {A2, A3 => R}>();
    return {A1 a1, A2 a2, A3 a3 =>
        {A2, A3 => R} valueMemoizer = null;
        synchronized(cache) {
            valueMemoizer = cache.get(a1);
            if (valueMemoizer == null) {
                valueMemoizer = memoize(partial(fn, a1));
                cache.put(a1, valueMemoizer);
            }
        }
        valueMemoizer.invoke(a2, a3)
    };
}

In fact, the code is almost identical.

However, it's still a shame that we need to write lots of versions of memoize() to cater for different numbers of keys. Isn't there a way to do this with just one version?

Actually, there is :) If we look carefully at what's happening in the two- and three-key versions, they are essentially 'unwrapping' fn, 'injecting' memoization at each stage (ie. on each key). This sounds a lot like currying - turning a multi-argument function into a series of single-argument functions - and funnily enough, we can throw away our two- and three-key versions and rewrite the memoizer usage in curried form, just using the single-key version of memoize():

public class Font {

    private static {String => {Double => Font}} memoizer =
        memoize({String name => memoize({Double pointSize => new Font(name, pointSize)})});

    public static Font getInstance(String name, double pointSize) {
        return memoizer.invoke(name).invoke(pointSize);
    }

    private Font(String name, double pointSize) {
        // construct the Font
    }
}

So we only needed the single-key version of memoize() after all... great!

Except... we've now increased the complexity and understanding required for anyone using memoize() - not a great move in terms of API design. Perhaps there's a compromise though - we could have a method which did all 'injection' of memoization in curried form for us:

public class Font {

    private static {String, Double => Font} memoizer =
        injectMemoizer({String name, Double pointSize => new Font(name, pointSize)});

    public static Font getInstance(String name, double pointSize) {
        return memoizer.invoke(name, pointSize);
    }

    private Font(String name, double pointSize) {
        // construct the Font
    }
}

Here's a version of injectMemoizer() which copes with this:

static <A1, A2, R> {A1, A2 => R} injectMemoizer({A1, A2 => R} fn) {
    {A1 => {A2 => R}} memoizer = memoize({A1 a1 => memoize({A2 a2 => fn.invoke(a1, a2)})});
    return uncurry(memoizer);
}

OK, API usage is back to being relatively 'simple' again, but now we need a version of injectMemoizer() for 2 keys, 3 keys, etc etc... haven't we gone back a step? Well, yes and no. One more change and we can abstract out the call to memoize(), leaving us with a general purpose 'inject' method which we can use for other things. Perhaps we want to inject something that logs the values of the arguments, or substitutes their values for other values if some condition is met. We'll still need any number of versions of 'inject', but now it'll be in the same league as curry, uncurry and partial: one of a handful of building block methods with which this problem is more acceptable.

To do all this we need to find a way to tell inject() what it is that we want it to use (in our case, memoize()). So an additional parameter is required, but what is its type? Looking at the above example we can see that it is something which takes a function type with one argument and a return value, and returns an object of the same function type. These types aren't fixed though: in the first call to memoize() the function type is {A1 => {A2 => R}}, and in the second it is {A2 => R}.

Furthermore, we need to pass in an object, but currently we have a method. Putting all these requirements together suggests that we rewrite memoize() in terms of a type (so that we can have an object of that type), and since this is Java, that type should be an interface, which I'll call Bob because I have no idea what it should be called (same applies to inject for that matter), and the name Bob reminds me of an amusing episode of Blackadder.

public interface Bob {
    <A, R> {A => R} invoke({A => R} fn);
}

public class Memoizer implements Bob {
    public <A, R> {A => R} invoke({A => R} fn) {
        Map<A, R> cache = new HashMap<A, R>();
        return {A a =>
            R value = null;
            synchronized(cache) {
                value = cache.get(a);
                if (value == null) {
                    value = fn.invoke(a);
                    cache.put(a, value);
                }
            }
            value
        };
    }
}

static <A1, A2, R> {A1, A2 => R} inject({A1, A2 => R} fn, Bob bob) {
    {A1 => {A2 => R}} injected = bob.invoke({A1 a1 => bob.invoke({A2 a2 => fn.invoke(a1, a2)})});
    return uncurry(injected);
}

Here's how we use it:

public class Font {

    private static {String, Double => Font} memoizer =
        inject({String name, Double pointSize => new Font(name, pointSize)}, new Memoizer());

    public static Font getInstance(String name, double pointSize) {
        return memoizer.invoke(name, pointSize);
    }

    private Font(String name, double pointSize) {
        // construct the Font
    }
}

And we're done... well, other than writing alternative versions of inject() for different numbers of arguments, adding exception transparency, sorting out the poor concurrency in Memoizer.invoke(), and working out what Bob and inject() should really be called ;)
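
By analogy with the 2-argument version, a 3-argument inject() might look like this (assuming a matching 3-argument overload of uncurry() exists):

static <A1, A2, A3, R> {A1, A2, A3 => R} inject({A1, A2, A3 => R} fn, Bob bob) {
    {A1 => {A2 => {A3 => R}}} injected =
        bob.invoke({A1 a1 =>
            bob.invoke({A2 a2 =>
                bob.invoke({A3 a3 => fn.invoke(a1, a2, a3)})})});
    return uncurry(injected);
}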

The key point is that we've now seen examples of currying, uncurrying and partial application being used as building blocks to support the creation and/or usage of a more obviously useful API.

Wednesday, December 5, 2007

Restricted syntax fun

WARNING: This entry's content relies on Javascript so you'll need to read it in a browser window.

Neal Gafter has blogged about some possible alternatives to the RestrictedFunction approach used by the current Closures spec, and I think it may be useful to get more of a glimpse at how a couple of these suggestions look in terms of code.

If you've just opened this page you should see some code below which illustrates the use of one of Neal's suggestions - allowing function types to be declared in such a way that we can additionally specify certain marker interfaces (such as RestrictedFunction) as part of a function type's type declaration.

Neal also mentions another option, which uses a slightly different syntax for unrestricted and restricted closures (and function types). The two input fields below allow you to vary the tokens used - click the 'Show me' link after changing them to see the effect. When the tokens are the same, '&RestrictedFunction' will be present in the type declarations, when they are different it will not.

Don't take the code examples too seriously!


interface Foo {

    void execute(Runnable r);

    // invokes fn synchronously
    void loop(int count, {=> void} fn);

    // stores command in an instance variable for later invocation
    void setCommand({=> void}&RestrictedFunction command);

    // invokes reducer in the same thread
    <T> T reduce(List<T> list, {T,T => T} reducer);

    // invokes each evaluator concurrently, returning the evaluator with the highest
    // result for the specified value
    {int => int}&RestrictedFunction max(Collection<{int => int}&RestrictedFunction> evaluators, int value);

    // invokes combiner in multiple threads
    <T,U,V> ParallelArray<V> combine(ParallelArray<T> array1, ParallelArray<U> array2,
                                     {T,U => V}&RestrictedFunction combiner);

    // curries a 3-argument unrestricted function
    <A1,A2,A3,R> {A1 => {A2 => {A3 => R}}&RestrictedFunction}&RestrictedFunction
        curry({A1,A2,A3 => R} fn);

    // curries a 3-argument restricted function
    <A1,A2,A3,R> {A1 => {A2 => {A3 => R}&RestrictedFunction}&RestrictedFunction}&RestrictedFunction
        curry({A1,A2,A3 => R}&RestrictedFunction fn);
}


silliness:
for (int i = 0; i < 10; i++) {

    Foo foo = makeMeAFoo();

    int count = 0;

    foo.execute({=> if (++count == i) break silliness;});
    foo.execute({=> System.out.println("executed!");});

    foo.loop(5, {=> if (++count == i) break silliness;});
    foo.loop(5, {=> System.out.println("executed!");});
    foo.loop(5) {
        if (++count == i) break silliness;
    }

    foo.setCommand({=> System.out.println("executed!");});

    {String, String => String}&RestrictedFunction reducer = {String s1, String s2 => s1 + " " + s2};
    String result1 = foo.reduce(myListOfStrings, reducer);
    String result2 = foo.reduce(myListOfStrings,
        {String s1, String s2 =>
            if (++count == i) {
                break silliness;
            }
            s1 + " " + s2
        });

    List<{int => int}&RestrictedFunction> evaluators = new ArrayList<{int => int}&RestrictedFunction>();
    evaluators.add({int n => n + 1});
    evaluators.add({int n => n * 2});
    evaluators.add({int n => n * n});
    {int => int}&RestrictedFunction maxEvaluator = foo.max(evaluators, i);

    {String => {String => {String => String}}&RestrictedFunction}&RestrictedFunction
        curried1 = curry({String s1, String s2, String s3 => s1 + s2 + s3});

    {String => {String => {String => String}&RestrictedFunction}&RestrictedFunction}&RestrictedFunction
        curried2 = curry({String s1, String s2, String s3 => s1 + s2 + s3});
}

Saturday, December 1, 2007

Currying and Partial Application with Java Closures

I had intended to dive straight in and continue with the earlier memoization example, extending it to work with cases where we have an arbitrary number of keys, but in writing that up I realised that I was making use of a couple of concepts which deserved a bit of explanation in their own right: currying and partial application.

There's a lot of confusion about these two, and the terms are often used interchangeably (although they are different!), partly because they naturally vary in their implementation depending on the underlying language. This blog entry is an attempt to informally describe the two concepts in a way that makes sense for Java with Closures, so that I can refer to them in later posts.

OK, first up then we have:

Currying (and uncurrying)

Say we have a function accepting 3 arguments of types A1, A2 and A3 and returning a value of type R. Its type would be:

{A1, A2, A3 => R}

This function's type in curried form would be:

{A1 => {A2 => {A3 => R}}}

Currying transforms a function which takes multiple arguments into an equivalent single-argument function which takes the first argument of the original function and returns another single-argument function, which in turn takes the second argument of the original function and returns yet another single-argument function... etc. The last single-argument function in the chain, which takes the last argument of the original function, will invoke the original function, passing all the arguments in one go, and returning its result.

If this is as clear as mud, a more concrete example may help. Imagine we have a simple 2-argument function which takes a String key and a Locale, and uses those to find a translation of some user-interface text by calling a method with the signature String lookup(String key, Locale locale) :

{String, Locale => String} translator =
    {String key, Locale locale => lookup("somePrefix." + key, locale)};

we can use our translator function like this:

String translatedText = translator.invoke("name", customer.getLocale());

Now let's say we have a method called curry which takes any 2-argument function and returns the curried form of that function. Its signature might be:

<A1, A2, R> {A1 => {A2 => R}} curry({A1, A2 => R} fn)

This allows us to curry our translator function, and provide the arguments one at a time, rather than all in one go:

{String => {Locale => String}} curriedTranslator = curry(translator);
{Locale => String} nameTranslator = curriedTranslator.invoke("name");
String translatedText = nameTranslator.invoke(customer.getLocale());

This might not look much use at first glance, but it means that we can pass curriedTranslator to another method which knows how to provide the key but not the Locale, or that we can store nameTranslator away somewhere and use it at a later point, without having to know the key.
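
For example (printLabels is an invented method, just to show the shape of such a caller):

void printLabels(List<String> keys, {String => {Locale => String}} translator, Locale locale) {
    for (String key : keys)
        System.out.println(translator.invoke(key).invoke(locale));
}

printLabels(Arrays.asList("name", "address"), curriedTranslator, customer.getLocale());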

An implementation of the curry() method above might be:

static <A1, A2, R> {A1 => {A2 => R}} curry({A1, A2 => R} fn) {
    return {A1 a1 => {A2 a2 => fn.invoke(a1, a2)}};
}

Notice that it uses closures to capture references to the original function fn and the first argument a1 so that they are available when the final function is invoked with the value a2.

This is a slightly simplistic version of curry() - in practice we might want to pass it a function which throws one or more checked exceptions; the closures proposal has provisions for implementing this in a very nice way, and I'll cover that at a later point. More importantly, this method only curries 2-argument functions so if we want to curry a function with more arguments we'll need to write a corresponding version of the curry() method; it isn't difficult to overload curry() to cope with n arguments, but how many overloaded versions should we provide by default? Solving this particular problem for all possible values of n would probably require additional support from the language however.
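
For instance, a 3-argument overload follows exactly the same pattern as the 2-argument version:

static <A1, A2, A3, R> {A1 => {A2 => {A3 => R}}} curry({A1, A2, A3 => R} fn) {
    return {A1 a1 => {A2 a2 => {A3 a3 => fn.invoke(a1, a2, a3)}}};
}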

In any case, the key thing to remember about currying is that it takes a multi-argument function and transforms it into an equivalent single-argument function, and this can be a very important building block for more obviously useful concepts.

It's useful to note that some functions are naturally curried - these are functions originally defined in curried form and which may actually make use of arguments before the remaining arguments have been provided. I'll provide an example of this when we get to Partial Application.


While we're at it, uncurrying is the inverse of currying - it transforms a function in curried form into one which accepts all the arguments in one go. For example, uncurrying a function whose type is of the form:

{A1 => {A2 => {A3 => R}}}

results in a function with the type:

{A1, A2, A3 => R}

A simplistic uncurry() method for 2-argument functions can be implemented as follows:

<A1, A2, R> {A1, A2 => R} uncurry({A1 => {A2 => R}} fn) {
    return {A1 a1, A2 a2 => fn.invoke(a1).invoke(a2)};
}



Partial Application

Closely related to currying is partial application. In practice, there are a number of subtly different interpretations of the term 'partial application', but let's start by informally defining it as the process of transforming a multiple-argument function into another function accepting fewer arguments, by providing some of the arguments.

For example, partially applying a function of type {A1, A2, A3 => R} to a value of type A1 results in a function of type {A2, A3 => R}. Partially applying it to two values of type A1 and A2 results in a function of type {A3 => R}.

To see this in practice, let's change the original translator example to allow different sets of translations depending on some Product:

{Product, String, Locale => String} translator =
    {Product product, String key, Locale locale =>
        lookup(product.getName() + '.' + key, locale)};

Now let's define a method called partial, which partially applies a 3-argument function to the value of its first argument:

<A1, A2, A3, R> {A2, A3 => R} partial({A1, A2, A3 => R} fn, A1 a1);

We can use this as follows:

{String, Locale => String} notepadTranslator =
partial(translator, Product.NOTEPAD);

An implementation of this version of partial() might be:

static <A1, A2, A3, R> {A2, A3 => R} partial({A1, A2, A3 => R} fn, A1 a1) {
    return {A2 a2, A3 a3 => fn.invoke(a1, a2, a3)};
}

Notice again the use of a closure to capture the meaning of fn and a1 so that they are available when a2 and a3 are finally supplied.

Interpretations in other languages often define partial application in terms of functions in curried form, which we can also do:

Partially applying a function of type {A1 => {A2 => {A3 => R}}} to a value of type A1 results in a function of type {A2 => {A3 => R}}. Partially applying it to two values of type A1 and A2 results in a function of type {A3 => R}.

The signature for an overloaded version of partial() which works with curried functions could be:

<A1, A2, A3, R> {A2 => {A3 => R}} partial({A1 => {A2 => {A3 => R}}} fn, A1 a1);

Implementing it leads us to an interesting choice, though. We could simply define it as:

static <A1, A2, A3, R> {A2 => {A3 => R}} partial({A1 => {A2 => {A3 => R}}} fn, A1 a1) {
    return fn.invoke(a1);
}

which doesn't give us much compared to just calling fn.invoke() ourselves. Alternatively we could choose:

static <A1, A2, A3, R> {A2 => {A3 => R}} partial({A1 => {A2 => {A3 => R}}} fn, A1 a1) {
    return {A2 a2 => {A3 a3 => fn.invoke(a1).invoke(a2).invoke(a3)}};
}

with the key difference being that the second version defers any invocation until all the arguments' values have been supplied - this may sound like a subtle distinction, but if fn is a naturally curried function, there may be an observable difference. Consider the following (somewhat contrived) example which uses a fictional database connection type DBConnection to execute some SQL:

{DBConnection => {String => boolean}} dbUpdater =
    {DBConnection conn =>
        conn.startTransaction();
        {String sql =>
            boolean success = false;
            try {
                executeUpdate(conn, sql);
                conn.commit();
                success = true;
            }
            finally {
                conn.endTransaction();
            }
            success
        }
    };

Partially applying dbUpdater with a DBConnection results in very different behaviour depending on which implementation of partial() we opt for. Using the first version, a database transaction is started immediately, but using the second this doesn't happen until the SQL is supplied, which may be at a much later point in time.

This example may not be particularly realistic, but it should illustrate the difference in behaviour between the two implementations of partial() for curried functions.

The deferring version strikes me as being the more useful of the two, as its semantics are distinct from simply invoking a curried function with each argument in turn (which is all the first version does), but I'm not certain whether this choice could prove confusing in the long run if the distinction is not fully understood.



Bear in mind that there are other interpretations of currying and partial application which will disagree with mine on various points, or extend these definitions with other variations. Hopefully the underlying ideas will be reasonably clear, although the trivial examples above will probably seem less than convincing right now - the memoization follow-up will make better use of them.

I also have a rapidly growing set of implementations of curry, uncurry, partial and other useful methods which I'll try to include.

Memoization with Java Closures

I've been doing some catch-up reading recently, and one idea I've read about on some C# blogs is that we can borrow something from the functional programming world called 'memoization' and implement it neatly using closures to provide an alternative mechanism for some common coding problems.

Basically, memoization allows us to store a result of some kind against the input parameters from which the result was derived, and to retrieve that result again given the same parameters without having to calculate it again from scratch, which may be costly or contrary to the fundamental design. This should sound pretty familiar - if we're thinking about performance we might use an approach like this and call it a cache, and it can also be used as a way to implement the Singleton or 'Multiton' patterns for controlling object creation.

Using the BGGA Closures for Java prototype, we can implement a very simple memoize method like this:

public class Memoization {
    static <R> {=> R} memoize({=> R} fn) {
        boolean cached = false;
        R cachedValue = null;
        return {=>
            synchronized(fn) {
                if (!cached) {
                    cachedValue = fn.invoke();
                    cached = true;
                }
            }
            cachedValue
        };
    }
}


This is an extremely basic form - it's only applicable when there are no parameters used to calculate the result - but we can use it to implement a singleton class Foo:

import static Memoization.*;
...
public class Foo {

    private static {=> Foo} fooMemoizer =
        memoize({=> new Foo()});

    public static Foo getInstance() {
        return fooMemoizer.invoke();
    }
    ...
}

This works because the call to memoize() creates two local variables, cached and cachedValue, and returns a closure literal which has access to those variables and to the fn parameter, which knows how to get the value we're after. When we invoke fooMemoizer for the first time, cached is false, so fn is invoked, its result is stored in cachedValue, and cached is set to true. When we invoke fooMemoizer again, cached is now true so we just return cachedValue.

There are neater ways to implement the singleton pattern in Java, but what if we return a value based on some key, where providing the same key multiple times should result in same value being returned each time? In Java 6 we might write:

public class Foo {

    private static Map<Key, Foo> instanceMap =
        new HashMap<Key, Foo>();

    public static Foo getInstance(Key key) {
        Foo foo = null;
        synchronized(instanceMap) {
            foo = instanceMap.get(key);
            if (foo == null) {
                foo = createFoo(key);
                instanceMap.put(key, foo);
            }
        }
        return foo;
    }
    ...
}

This is also very common, and is sometimes seen as an example of the 'multiton' pattern. Now let's overload memoize() to cope with this:

static <A, R> {A => R} memoize({A => R} fn) {
    Map<A, R> cache = new HashMap<A, R>();
    return {A a =>
        R value = null;
        synchronized(cache) {
            value = cache.get(a);
            if (value == null) {
                value = fn.invoke(a);
                cache.put(a, value);
            }
        }
        value
    };
}

There's a real advantage to using the memoize() approach here - we've abstracted the caching and thread synchronization logic into a general purpose method, and all we have to provide each time we use it is a simple closure literal which knows how to create a Foo given a particular Key:

import static Memoization.*;
...
public class Foo {

    private static {Key => Foo} fooMemoizer =
        memoize({Key key => createFoo(key)});

    public static Foo getInstance(Key key) {
        return fooMemoizer.invoke(key);
    }
    ...
}

This advantage shouldn't be underestimated - when we re-code the same logic over and over, variations inevitably turn up in the form of bugs, which is especially nasty when we're dealing with thread-safety concerns (and you don't need to be explicitly using Threads or java.util.concurrent for thread-safety to be relevant!).

The next steps are to look at how we might write a version of memoize() which will work with multiple keys, but I'll save that for the next post.


UPDATE: Neal Gafter kindly pointed out a school-boy error in the synchronization I used in the first version of memoize() - I've updated the method accordingly. This just emphasises the point though: finding and fixing the bug in one place is far easier and better than finding and fixing all variations across multiple implementations of the same logic.

Tuesday, November 6, 2007

Logging and (simple) closures

I've been wondering recently what certain Java APIs might be like had closures been available when those APIs were designed.

Take logging. Most Java code I've been in contact with over the last few years is littered with statements like this:

log.debug("Received a message '" + msg.getTitle() +
"' with " + entries.size() + " entries");

Fair enough, until someone realises that the code is performing badly because statements like this are being executed 10,000 times per second. So they duly configure their logging API so that it won't log debug messages. But the performance is still poor, and they run the code through a profiler only to see that 50,000 StringBuilders are being created each second, just for log.debug calls that no longer do anything.

Eventually some poor programmer ends up wrapping half the logging code in if statements, wishing that there was some way that log.debug() could check if it's going to do anything first and only then evaluate the expression being passed in.

if (log.isDebugEnabled()) {
    log.debug("Received a message '" + msg.getTitle() +
              "' with " + entries.size() + " entries");
}

Of course there is. We could define an interface

interface MessageBuilder {
    String getMessage();
}

and a log.debug() method which accepted that instead, only invoking its getMessage() method if isDebugEnabled() returns true. Unfortunately that's even clumsier in use:

log.debug(new MessageBuilder() {
    public String getMessage() {
        return "Received a message '" + msg.getTitle() +
               "' with " + entries.size() + " entries";
    }
});

Not only is it even more verbose, but now we need to make the msg and entries variables final, and you can bet that won't be convenient in all cases!

Try it with closures though:

log.debug({ => "Received a message '" + msg.getTitle() +
"' with " + entries.size() + " entries" });

There are slight variations in syntax depending on whose closures proposal you choose, but basically we're back to one line, no need to make the variables final, and the expression is evaluated only if it's required. Much better.

Closures, at least this kind, are actually about much more than this, but I have to wonder whether the various logging APIs would have been designed to take an interface like MessageBuilder had the resulting code not been so awkward, and how many other APIs were similarly influenced in their design.