
Monday, May 9, 2011

Why Eager Languages Don't Have Products and Lazy Languages Don't Have Sums

In The Point of Laziness, Bob Harper mentioned a core duality between lazy and eager languages[1]. Eager languages have sum types but not product types and lazy languages have product types but no sum types. I've seen a few questions about what he means. I thought I'd try to explain. It all has to do with non-termination, our old friend "bottom" which I'll write ⊥. In this entire article I'm focusing on languages that don't allow any side effects other than ⊥ - that way I don't have to deal with ordering print statements or whatever. Also, I'm using an MLish notation that hopefully isn't too confusing.

Update: This comment by Dan Doel does a better job of explaining the issues than my whole article.

Product Types

A product type is a pair. The pair type (A,B) is the product of A and B, and is often written as A * B or A × B. A 3-tuple[2] (A,B,C) can be seen as another way of saying ((A,B),C), etc., so any n-tuple is just a pretty way to write nested pairs. The simplest way to define exactly what a pair means is through projection functions "first" and "second" which pull out the first or second element. The equations we want are

first (a,b) == a
second (a,b) == b

Pretty straightforward, right? Well, those equations hold just fine in a lazy language even when a or b is ⊥. But in an eager language, first (a,⊥) == ⊥. Same deal with second. Oops. Hence lazy languages have product types in the sense that they completely satisfy these equations, whereas eager languages don't.

That has a practical consequence. A compiler for an eager language can't necessarily simplify first(a, someComplicatedExpression) down to a unless it can prove that someComplicatedExpression is never ⊥, and in general it can't do that. A lazy language compiler is always free to throw away someComplicatedExpression.
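
To make that concrete, here's a minimal sketch in Scala (eager by default), using by-name constructor parameters to emulate a lazy pair. The names bottom, LazyPair, and first are all illustrative, not library functions.

def bottom: Int = bottom  // infinite recursion: stands in for ⊥

// An eager pair evaluates both components before "first" ever runs,
// so first((42, bottom)) diverges.
def first[A, B](p: (A, B)): A = p._1

// A lazy pair evaluates its components only on demand, so the product
// law holds even when the second component is ⊥.
class LazyPair[A, B](a: => A, b: => B) {
  lazy val first: A = a
  lazy val second: B = b
}

val ok = new LazyPair(42, bottom).first  // == 42
// first((42, bottom))                   // would diverge: eager tuples break the law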

Sum Types

A sum type is an alternative between two choices. A | B is the sum of the two things, A and B, and is frequently written A + B. Choices among three elements, like A | B | C can be considered as just a pretty form of (A | B) | C - a nesting of binary choices. The simplest sum type is good old Boolean, the choice between true or false. Here's one equation we might expect to hold for Booleans and something that examines booleans, the if expression[3]

((if condition then x else y), z) == if condition then (x,z) else (y,z)

Ignore the possibility of x, y, and z being ⊥ and focus on "condition". In an eager language, condition must be evaluated in either of the two forms above, and either way you get ⊥ if condition is ⊥. But in a lazy language, the left side will compute (⊥, z) while the right side will compute ⊥. Pulling the second element from those two results gives differing answers, proving the two sides unequal.

Again, this has consequences. An eager language can simplify the right side form to the left side form if it wants to. A lazy language can't.
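
Here's that inequality in miniature, as an illustrative Scala sketch. LPair emulates a lazy pair via by-name parameters and loop stands in for a condition that is ⊥; all the names are made up.

def loop: Boolean = loop  // a diverging Boolean condition: stands in for ⊥

class LPair[A, B](a: => A, b: => B) {
  def first: A = a
  def second: B = b
}

// Left side: the pair exists even though its first component is ⊥,
// so asking for its second element returns z.
def leftSide(z: Int): Int =
  new LPair(if (loop) 1 else 2, z).second

// Right side: the condition must be evaluated before either pair can
// be built, so this diverges.
def rightSide(z: Int): Int =
  (if (loop) new LPair(1, z) else new LPair(2, z)).second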

Footnotes

  1. Strictly speaking I should say strict and non-strict instead of eager and lazy, but I'll go with Dr. Harper's wording because the difference isn't that important to the point.
  2. Or a record (aka a struct). A tuple can be seen as a record where the field labels are numbers or a record can be seen as a tuple with some convenient labels for the positions.
  3. If you are more familiar with C like languages that treat "if/else" as statements instead of expressions then pretend I'm using a ternary operator "condition ? x : y" instead of "if condition then x else y"

Monday, October 18, 2010

Phantom Types In Haskell and Scala

/*
Some literate Haskell and Scala to demonstrate 
1) phantom types
2) I am a masochist
3) ???
4) profit!

The code is probably best viewed with a syntax colorizer
for one language or the other but I've colorized all my
comments.

> {-# LANGUAGE EmptyDataDecls #-}
> module RocketModule(test1, test2, createRocket, addFuel, addO2, launch) where

*/
object RocketModule {

/*
None of these data types have constructors, so there are
no values with these types. That's okay because I only 
need the types at compile time. Hence "phantom" - 
ethereal and unreal.

> data NoFuel
> data Fueled
> data NoO2
> data HasO2

*/
   sealed trait NoFuel
   sealed trait Fueled
   sealed trait NoO2
   sealed trait HasO2

/*
The Rocket data type takes two type parameters, fuel and
o2.  But the constructor doesn't touch them.  I don't
export the constructor so only this module can create
rockets.

> data Rocket fuel o2 = Rocket

*/
   final case class Rocket[Fuel, O2] private[RocketModule]()

/*
createRocket produces a rocket with no fuel and no o2
 
> createRocket :: Rocket NoFuel NoO2 
> createRocket = Rocket

*/
   def createRocket = Rocket[NoFuel, NoO2]()

/*
addFuel takes a rocket with no fuel and returns one with
fuel.  It doesn't touch the o2
 
> addFuel :: Rocket NoFuel o2 -> Rocket Fueled o2
> addFuel x = Rocket

*/
   def addFuel[O2](x : Rocket[NoFuel, O2]) = Rocket[Fueled, O2]()

/*
Similarly, addO2 adds o2 without touching the fuel

> addO2 :: Rocket fuel NoO2 -> Rocket fuel HasO2
> addO2 x = Rocket
 
*/
   def addO2[Fuel](x : Rocket[Fuel, NoO2]) = Rocket[Fuel, HasO2]()

/*
launch will only launch a rocket with both fuel and o2

> launch :: Rocket Fueled HasO2 -> String
> launch x = "blastoff"

*/
   def launch(x : Rocket[Fueled, HasO2]) = "blastoff"

/*
This is just a pretty way of stringing things together,
stolen shamelessly from F#.  Adding infix operations is
a bit verbose in Scala.

> a |> b = b a

*/
   implicit def toPiped[V] (value:V) = new {
      def |>[R] (f : V => R) = f(value)
   }

/*
Create a rocket, fuel it, add o2, then 
launch it

> test1 = createRocket |> addFuel |> addO2 |> launch

*/
   def test1 = createRocket |> addFuel |> addO2 |> launch

/*
This compiles just fine, too.  It doesn't matter which 
order we put in the fuel and o2
 
> test2 = createRocket |> addO2 |> addFuel |> launch

*/
   def test2 = createRocket |> addO2 |> addFuel |> launch

//This won't compile - there's no fuel
 
// > test3 = createRocket |> addO2 |> launch

//    def test3 = createRocket |> addO2 |> launch

// This won't compile either - there's no o2

// > test4 = createRocket |> addFuel |> launch

//    def test4 = createRocket |> addFuel |> launch

// This won't compile because you can't add fuel twice

// > test5 = createRocket |> addFuel |> addO2 |> addFuel |> launch

//    def test5 = createRocket |> addFuel |> addO2 |> addFuel |> launch
}

Tuesday, May 25, 2010

Anatomy of an Annoyance

The warning "Type safety : A generic array of T is created for a varargs parameter" is so irritating it makes me want to beat Java with a snoracle, an implement that allows defunct tech companies to breath while underwater. This article will explain what the warning is, where it comes from, and why it is pretty boneheaded. The warning also shows just how hard it is to extend an existing language without creating pain for your users.

My sample use case: anytime you build a Map in Java it takes approximately 6 metric tons of boilerplate.

Map<Integer, String> numbers = 
   new HashMap<Integer, String>(3);
numbers.put(1, "one");
numbers.put(2, "two");
numbers.put(3, "three");
// etc

It would sure be nice if that could be done in one line of code as in most sane languages. So let's fix it! First, we need the ubiquitous Pair class[1].

public static class Pair<A, B> {
   private A first;
   private B second;

   public Pair(A first, B second) {
      this.first = first;
      this.second = second;
   }

   public A getFirst() {
      return first;
   }

   public B getSecond() {
      return second;
   }
  
   /* a real Pair class would also have equals and 
     hashCode, but for this code I don't need them */
}

And then, to make it more pleasant to use, we write a static function that can take advantage of Java's half-assed type inference.

public static <A, B> Pair<A, B> pair(A first, B second) {
   return new Pair<A, B>(first, second);
}

Finally, the pièce de résistance

public static <A, B> Map<A, B> map(
      Pair<A, B>... pairs) {
   Map<A, B> map = 
      new HashMap<A, B>(pairs.length);
   for(Pair<A, B> pair : pairs) {
      map.put(pair.getFirst(), pair.getSecond());
   }
   return map;
}

And so, with faces smiling as the sunoracle shines down upon us, we use it

Map<Integer, String> numbers = 
   map(pair(1, "one"), pair(2, "two"), pair(3, "three"));

Ah, sweet sweet beauty[2]. Except one nasty, tiny, shoot-me-now-and-bury-my-carcass-in-downtown-Santa-Clara annoyance. That single lovely line of code that so innocently tried to use all our wonderful machinery is tagged with the warning "Type safety : A generic array of T is created for a varargs parameter." And the only way to "fix" it is to nuke the whole area from orbit with @SuppressWarnings("unchecked"), possibly hiding real problems in the same method. Gah.

History, Part The First

To understand the problem we have to go way back in history, at least as far as Java 1.0. Maybe even to Oak or to the egg and sperm that fused to make James Gosling. Whatever. In designing Java there was a question: since String is a subtype of Object, shouldn't String[] be a subtype of Object[]? After all, sometimes that's useful: if I have an array of Strings and you want to write a function that iterates through an array of Objects, aren't we good if I send you my Strings?

Well, no, we're not good. We're very bad. If String[] is a subtype of Object[] then Bad Things Happen™


void badThing(Object[] objects) {
   objects[0] = new Integer(42);
}

void test() {
   String[] strings = {"a", "b", "c"};
   badThing(strings);

   // length of an integer?
   System.out.println(strings[0].length());
}

If the above were allowed then we'd be trying to find the length of an Integer and that, um, won't work. Reading is okay but writing is bad. So to solve the problem the original Java designers made a pact with Satan. Instead of statically preventing code like that they added a dynamic check: anytime you store into a non-primitive array Java does an instanceof-like check to make sure it's legit. In my sample code, the assignment is checked in "objects[0] = new Integer(42)". Failure will give an ArrayStoreException. That means every array store has to do expensive extra work and, more importantly to this article, at the time of construction an array must be associated with a dynamic type tag. Like all pacts with Satan the negative consequences of this deal weren't obvious until much later.

History, Part the Second

With Java 1.5, Java programmers were finally able to move to the forefront of 1970's era technology with parametric polymorphism, er, excuse me, generics. But see, there was this huge collection library and 3rd party libraries that extended it and...how the hell are we going to make the future safe for the past?

The decision was to compile with "type erasure" - dynamic type tags no longer carry all the information in the static type. Now, there's nothing evil about type erasure. Some of my best friends are languages that compile with even more type erasure than Java. It's just that Java has "instanceof" and other similar things that don't play well with it. Relevant to this article we have the aforementioned pact with Satan where arrays need a dynamic type tag or Bad Things Happen™.

Concerned with Bad Things, the Java designers started spackling over some holes. The upshot: this code fails to compile with the static error "Cannot create a generic array of T"

public <T> T[] arrayOfT(T a1, T a2, T a3) {
   return new T[]{a1, a2, a3}; // bzzt static error!
}

Same deal with this, "Cannot create a generic array of Pair<A, B>".

public <A, B> Pair<A, B>[] twoPairs(
      A a1, B b1, A a2, B b2) {
   return new Pair<A, B>[]{pair(a1, b1), pair(a2, b2)};
}

But this is a very shallow spackling indeed. Keep reading.

History, Part the Third

Also with Java 1.5, the designers added varargs: the ability to pass an arbitrary number of arguments of the same static type to a method. So convenient! But, uh, apparently the varargs guys and the generics guys didn't talk to each other until late in the game. See, the varargs folks decided to implement varargs as sugar over arrays rather than something that's actually type safe.

public void whatClassIsIt(Object... things) {
   System.out.println(things.getClass());
}

whatClassIsIt(pair(1, "one"), 
   pair(2, "two"), pair(3, "three"));

There's our old buddy erasure. The above code prints "class [LPair;" - array of Pairs. But note it's not dynamically known to be an array of Pair<Integer, String>.

Making Bad Things Happen™

Now we've got enough knowledge to screw things up nicely. We've got statically unsafe arrays that depend on runtime type tags to provide a reasonably early error. And we have an ability to create arrays that don't carry all the information needed to do the runtime check.

public static <A, B> Pair<A, B>[] pairs(
      Pair<A, B>... pairs) {
  return pairs;
}

Pair<Integer, String>[] pairs = 
  pairs(pair(1, "one"), pair(2, "two"), pair(3, "three"));
Object[] objects = pairs;
// no error in this assignment, statically or dynamically!
objects[0] = pair('c', 2.0); 

// arrgg ClassCastException
System.out.println(pairs[0].getSecond().length()); 

The code doesn't have an error at the point of assignment. The error does finally occur later when I attempt to use a value. That means that in real code the error could be very far from the real cause. The only diagnostic we get for this Bad Thing™ is the warning on the first line; our lovely "Type safety : A generic array of T is created for a varargs parameter".[3] And that warning has to occur in a place that doesn't know whether the use of the array is actually bad or not. Hence, our code's beauty (or at least what passes for beauty in Java) is shot.

Conclusion

Ah, if only. If only Java didn't have covariant arrays. Or, since it does, if only the designers had used something other than arrays to carry varargs. Woulda coulda shoulda. This article shows just how hard it can be to extend an existing language without shooting yourself and your users in the ass. Runtime checked array assignment + varargs in arrays + type erasure == pain.

Now you know why Scala's arrays are invariant.
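
For the record, here's roughly how that plays out in Scala; a sketch, with the exact compiler message varying by version:

val strings: Array[String] = Array("a", "b", "c")
// val objects: Array[AnyRef] = strings  // does not compile: type mismatch,
// because Scala's Array[T] is invariant in T - no ArrayStoreException needed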

Footnotes

  1. Q: What makes Java so manly? A: It forces every programmer to grow a Pair.
  2. Please don't rag me about how I wrote 17 lines of code to avoid writing 3 - presumably I'd reuse this infrastructure a lot.
  3. For an even nastier problem with the erasure scheme and arrays see this example of trying to write higher order code using generics and varargs.

Wednesday, May 5, 2010

Types à la Chart

I recently saw a debate on whether a certain language had a "weak" or "strong" type system. Naturally the debate ended in flames. As far as I could tell neither side knew what the other was talking about, which isn't surprising as I've seen almost as many definitions of "weak" and "strong" typing as there are developers who use the words. Frankly the words have too many meanings to be useful.

If the first part of the problem is ambiguity then it's still only a start of the problems. Even if you get something crisp like "static" vs "dynamic" you've still managed to condense out a lot of useful information. Agda and Java live in incredibly different spaces even though they are both statically typed. Or how 'bout the world of difference between E and Ruby, two dynamically typed languages?

In this article I'm going to try to convince you that the space is far more complicated than the typical debates would allow. The design space is very highly dimensioned, perhaps infinitely so, but I'll boil it down to 2 important dimensions which none-the-less already show considerably more sophistication. Here's the set up: what can you know statically about an expression, like f(x), without executing/evaluating it, if all you have are static types? The two dimensions I use are 1) how "safe" is it - are there back channels that might violate abstractions? And 2) how rich and sophisticated can the static type be?

One big, fat caveat: this is almost certainly wrong in many details. I've listed a bunch of languages. There just aren't enough "21 days" in a lifetime to understand in detail exactly what each one can promise and how one type logic does or does not contain another. Naturally if you are an expert on one of these languages and I got something a bit wrong you will conclude I'm an idiot even if everything else is 100% right. Such is the price of writing for a technical audience. In spite of any such errors I contend that even if some details are wrong the overall picture should be reasonably accurate and should convey my central thesis that this space is too complicated for binary distinctions.

[Chart: Static Type Properties]

Power

The horizontal axis is "power" or "sophistication" of the static type system's underlying logic. For this analysis I focused only on the ability of the static type system to describe actual data structures and computations. While it's interesting that C++, GHC Haskell, and Scala all have Turing complete type systems, none of their type systems seem to have the same ability to usefully describe computation as Agda's. Of course, I could be wrong. Feel free to bump the asterisked languages to the right accordingly.

The horizontal axis can also be read backwards in terms of type system simplicity.

The far left requires some explanation. The languages in this column tend to be dynamically typed which at first seems like a challenge for thinking about them in static terms. But there's a nice theoretical framework where you think of them as having one static type. A valid Scheme expression has "the Scheme type" and an invalid one does not. Different people who mean to emphasize different things will call these languages "untyped," "unityped", or "dynamically typed." For my purpose "unityped" gives the right connotation. This column corresponds to the untyped lambda calculus.

Moving to the right there are simple first order type systems. Such type systems are a bit richer than the unityped ones, but don't have any form of type abstraction: they don't allow a user to define types that will be parameterized by other types. This column corresponds to the simply typed lambda calculus.

Further right are second order type systems. These allow type abstraction qualified with "forall", and correspond to System F. In between first and second order is HM (Hindley-Milner) which allows full type inference but at the price of not allowing some second order things like higher rank polymorphism. Moving to the right are higher kinded languages, corresponding to System Fn up through System Fω.

At the far right are dependently typed languages: languages that allow types to be parameterized by values.[1][2]

Safety

The vertical axis is various kinds of "safety." For safety analysis I'm completely ignoring foreign function interfaces and the like. FFIs are useful and pragmatic but also make otherwise pretty safe languages no more safe than Assembly.[3]

You can also read this axis going the opposite direction in terms of a kind of operational power - Assembly isn't safe, but it lets you make the machine dance.

At the bottom end are memory unsafe languages. An expression like f(x) might very well totally hose memory.

A bit up from that are memory safe languages. Such languages have a few built in abstractions (like you can't write past the end of an array) that f(x) can't possibly violate. A couple of steps up is abstraction safety. That means that one developer's abstractions can be protected from another developer in some fashion. For instance, private variables can only be accessed through functions defined in a scope with access to the variable. Many popular "dynamic languages" are not abstraction safe because their powerful dynamic meta-programming facilities allow abstractions to be smashed. In between I've listed a few JVM languages as being configurably safe: the Java standard includes some pretty powerful reflection capabilities that allow many developer written abstractions to be violated, but the JVM standard includes a security mechanism that allows that to be turned off. Move these points up and down accordingly.

At the next level are capability safe languages. In most languages, any function call can have all manner of harmful side effects including formatting your hard drive or launching missiles. But at the capability safe level the expression cannot possibly do such crazy things unless granted that power in the form of an "object" that represents the capability to do so. While you cannot statically know whether f(x) has received a missile launching capability, you can statically know that if f(x) lives in a program that has not been granted missile launching capability then it will not launch missiles.

The pure level is for languages with a strict, statically enforced wall between side effecting expressions and non-side effecting ones. Examining the static type is sufficient to know the difference. Finally, the "pure & total" level is for languages where the expression f(x) is either guaranteed to terminate normally (no infinite loops, no exceptions)[4] or where static types indicate possible non-totality.

Random Notes

In many languages safety properties are enforced dynamically so a knee jerk reaction is "why is this stuff in a discussion about static properties." But remember the point was what I can statically know about an arbitrary expression f(x) without evaluating it. In Ruby I know with absolute conviction that, barring implementation bugs and foreign function interfacing, any arbitrary f(x) absolutely does not write past the end of an array. In C I don't know that.

It might seem odd to put Java further to the right than Haskell '98 but it's true. Haskell '98 has a slightly extended Hindley-Milner type system while Java has a kind of warped System F<: (use nesting to create rank-n polymorphism). Of course, a chart that ranked type systems on "bang for the buck", versatility vs ease of use, would rank things quite differently.

C# gets stuffed into the unsafe row because of the "unsafe" keyword and associated pointer machinery. But if you want to think of these as a kind of inline foreign function mechanism feel free to bump C# skyward.

I've marked Haskell '98 as pure but GHC Haskell as not pure due to the unsafe* family of functions. If you disable unsafe* then jump GHC Haskell up accordingly.

Assembly is problematic. In a certain weird sense some Assemblies are "memory safe" because they do protect their language abstractions by virtue of having almost none.

Conclusion

"Weak" and "strong" mean so many things that they mean nothing. "Static" and "dynamic" mean something but don't really convey many differences. This chart is not the end-all-be-all of language categorization, but it is the beginning of the start of an inkling of a glimpse at how rich the space really is. Hopefully this will make static typing advocates understand that dynamic typing doesn't necessarily mean total insanity while dynamic typing advocates can begin to see the range of sophisticated expression afforded by many modern statically typed languages. I highly recommend "What To Know Before Debating Type Systems" for more on the subject.

Postscript

Inspired in part by this post and a follow up discussion, David MacIver is investigating the dimensions of language design that are actually useful in picking the right tool for the job.

Footnotes

  1. Type theory folk will note that I've pretty casually collapsed two legs of the "lambda cube" into one dimension of power. That's not very valid but that seems to work out in practice. It's hard to find a sophisticated dependently typed language (types indexed by values) that isn't also very rich when it comes to types indexed by types.
  2. C++'s ability to parameterize templates by a few forms of literal shouldn't be confused with dependent typing.
  3. Technically there's no such thing as "Assembly;" it's a family of vaguely related languages with semantics tied extremely closely to underlying machines.
  4. A total language can of course crash due to resource exhaustion or hardware failure. There's no way to prevent that.

Friday, August 21, 2009

Getting to the Bottom of Nothing At All

Except when I'm being a total smart ass or self indulgent emo telling the Ruby community to f' off, most of what I write talks about practical stuff, often with a small dose of theory. But this time it's about theory with a small dose of practice.

Last time I talked about the way programmers in popular C derived languages C++, Java, and C# think of a void function as being one that returns nothing. I contrasted this with the functional view that such functions return something, they just return the unit value (), something that doesn't carry any information. To expand on this I want to talk about actually returning nothing.

In any Turing complete language it is possible to write a function that really truly never returns anything, not even a made-up singleton value like (). In C style languages the simplest such function might contain an infinite loop like

while(true);

For my purposes a better way to explore the idea is with infinite recursion. Pretend your favorite C style language does tail call optimization[1] so we can ignore stack overflow issues and examine the following

X foo(){ return foo(); }

The question is what should the X be? And the answer is that if you plug that line of code into C, C++, C#, or Java then X can be just about any type you want[2].

If that doesn't give you pause then consider this: your type system appears to be lying to you. The function is promising an X but has no hope of delivering. And it's quite happy to promise you any X. If you write "String foo(){ return foo(); }" then your type system considers the expression "foo();" to have type String so that you can write "String myFoo = foo();". But guess what, foo doesn't return a String and myFoo will never get a String. Is the type system lying?

The answer is no, of course not. It's just proving something a bit weaker than you might think at first. Instead of proving that "foo() will compute something that is a String" it's proving that "foo() won't compute something that isn't a String." If your "Law of the Excluded Middle" sense just tweaked, then you're on the right page. You'd think there are only two cases to consider: either the function definitely computes a String or it definitely doesn't. But there's this third case where it doesn't compute anything, and since the type checker can't reliably detect non-termination (cf. Turing Halting Problem) it has to do something a bit odd. First, it assumes that the foo() declaration is correct and does return a String. Then it goes looking for a contradiction such as returning an int. Inside the function the compiler sees that the return is the result of the expression foo(). But the compiler has already assumed that foo() returns a String. There's no contradiction between the declared type and the result so all is happy. The word "tautology" should come to mind right about now. By assuming X is true the type checker has proved that X is true. This weakening is a practical consequence of the Turing Halting Problem and there are at least two good ways to think about it. But first some more examples of the phenomenon.

Exceptions and Exits and ...

I very casually suggested that we ignore stack overflow issues in the infinite recursion above. But exceptions are an important part of this story because they are, in some interesting way, related to non-termination. Consider this function (or its equivalent in your favorite language)
X foo(){ throw new RuntimeException(); }

Once again, X can be any type. And once again foo() does not in fact compute an X.

Clearly these two definitions of foo are different things. Non-termination means we're stuck whereas an exception means we might be able to recover and try something else, or at least notify that there's a problem and shutdown cleanly. None-the-less they both behave similarly. First, they hijack the normal flow of control in such a way that it never returns back to the caller. And second they are both examples of a kind of weakening in the type system since any type can be substituted for X. Formally they are both examples of functions that diverge (functions that return normally are said to converge).

The Type

Here's a third example of a diverging function in Java. Translate as necessary for your language (in Java System.exit(n) stops the program immediately and returns n to the calling process).

X foo(){ System.exit(1); return null; }

Yet another case where foo() won't compute anything; it diverges. In fact, the return statement after the exit call is dead code. However, this example is slightly different from the other examples because we had to create the dead code for the compiler to be happy[3] and, in fact, for a different X like int the return value might have to be changed. That bit of dead code is closely related to the heart of this article.

Java can detect some kinds of dead code. If you write "X foo() { throw new RuntimeException(); return null; };" then Java recognizes that the return is unreachable and complains. In my System.exit(1) example Java didn't recognize that the call to exit() will never return so it required a following return statement. Obviously "throw" is a keyword and can get special attention that a mere library function can't but it would be useful to be able to let Java know that, like throw, exit() diverges.

One of the best ways to tell a compiler how a function behaves is by using types and, in type theory, there's a type that expresses just what we need. The type is called bottom (often written ⊥), and while there are different ways to look at the bottom type I'm going to go with a subtyping based view that should be accessible to C++, C#, and Java programmers.

If a language has subtyping and a function says it will return a type "X" then the function is allowed to return a "Y" instead as long as Y is a subtype of X. In my example of a function that just throws an exception the return type could be anything at all. So if we wanted System.exit(1) to indicate that it diverges the same way throw does, then its return type should be a subtype of all types. And indeed, that's exactly what bottom is.[4] bottom is a subtype of String, and int, and File, and List<Foo>, and every other type. Usefully, conventional type hierarchies are written with supertypes above subtypes, which makes a convenient mnemonic for "bottom" which needs to go below everything on such a hierarchy.

Now, if you're used to OO thinking then you expect a value with a certain subtype to in some sense be substitutable everywhere that a supertype is expected. But how can any one object behave like a String, an int, a File, etc? Remember that bottom indicates divergence: an expression with type bottom can never compute a value. So if exit()'s return type was bottom it would be totally safe to write "String foo() { return System.exit(1); }" while another bit of code could have "int bar() {return System.exit(1); }."

Making it Useful, A Divergence to Scala Divergence

Occasionally it might be useful to indicate that a function diverges. Examples are functions like System.exit(1), or functions that always throw an exception, perhaps after logging something or doing some useful calculation to create the exception. But interestingly out of all the statically typed languages that have any following outside of pure research only Scala has an explicit bottom type, which it calls Nothing. The reason Scala has a bottom type is tied to its ability to express variance in type parameters.

For some reason a lot of programmers run screaming into the night when you say "covariance" or "contravariance." It's silly. I won't get into all the details of variance, but I will say that in Scala the declaration "class List[+T] {...}" means that List[Subtype] is a subtype of List[Supertype]. No screaming necessary. And List[+T] brings me to one extremely practical use of bottom - what type should an empty List have?

Well, an empty List can have type List[String] or List[Foo] or List[int]. T can be whatever. And what's a subtype of whatever for all values of whatever? You guessed it: bottom (Nothing). Indeed, Scala has one constant called Nil whose type is List[Nothing]: a subtype of List[String], List[int], and List[whatever]. It all ties up in a bow when you consider that a List[T] has a method called head which returns the first element of the list as type T. But an empty list has no first element, so head must throw an exception. And sure enough, head in List[Nothing] has type Nothing.
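
A few concrete lines of Scala tie this together (sys.exit is the modern spelling; older versions spelled it exit in Predef):

def crash: String = sys.exit(1)  // compiles: sys.exit returns Nothing,
                                 // and Nothing is a subtype of String
val names: List[String] = Nil    // Nil has type List[Nothing]
val nums: List[Int] = Nil        // ...which is a subtype of List[Int] too
// List.empty[Int].head          // compiles, but throws at runtime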

C# 4.0 is supposed to be getting definition site variance similar to Scala's but using the clever mnemonic keywords "in" and "out". I haven't heard anything yet on whether it will also add a bottom type, but it would make a lot of sense.

Java has usage site variance using wildcards. You can say "List<? extends Supertype> x" to indicate that x can have a List<Supertype> or a List<Subtype>. The bottom type would be useful in Java, too, although not as compelling since wildcards are so verbose people rarely use them even when they would make sense. Plus, Java folk tend to mutate everything and List[Nothing] in part makes sense in Scala because Scala Lists are immutable.

C++ does not have any simple way to express this kind of variance so the bottom type is even less compelling in C++ than it is in Java.

Back On Track

Haskell and languages in the ML family don't have an explicit bottom type. Their type systems don't have subtyping so adding bottom as a subtype would confuse things. None-the-less, they do have a nice way to express bottom that can be teleported back to Java, C#, and C++ (but not C). Recall that bottom is a subtype of all types. Another way of saying that is that if a function returns type bottom then for all types A, the function returns something compatible with A so why not express that directly? In Haskell the "error" function takes a string and throws an exception.

Prelude> :type error
error :: [Char] -> a

In Haskell, a lower case identifier in type position is always a type parameter, and [Char] means "list of Char", aka String. So for all types a, if "error" doesn't diverge then it will take a String and return an a. That can pretty much be expressed directly in Java

public static <A> A error(String message) { throw new RuntimeException(message); }

or C++

class message_exception : public std::exception {
  public:
    explicit message_exception(const std::string& message) : message_(message) {}
    virtual ~message_exception() throw() {}

    virtual const char * what() const throw() { return message_.c_str(); }

  private:
    const std::string message_;
};

template <typename A>
A error(const std::string& message) { throw message_exception(message); }

And for either language, usage would be something like

int divide(int x, int y) {
  if (y == 0) {
    return error<int>("divide by zero"); // drop the "<int>" in Java
  } else {
    return x / y;
  }
}

Haskell also has a function called "undefined" that simply throws an exception with the message "undefined." It's useful when you want to get started writing some code without fully specifying it.

Prelude> :type undefined
undefined :: a

The function isn't as interesting as the type, which promises that for any type a "undefined" can compute an a or diverge. Since "undefined" can't possibly just produce a value of any arbitrary type, it has no choice but to diverge. The same idea can be added to Java

public static <A> A undefined() { return error("undefined"); }

or C++

template <typename A>
A undefined() { return error<A>("undefined"); }

In either language it might be used as

string IDontKnowHowToComputeThis(int input) { 
  return undefined<string>(); // again, make appropriate changes for Java
} 

Given the requirement to write the "return" keyword in C#, Java, and C++ I'm not sure how practical something like a generified error function really is as compared to having it return an exception and making the user write 'throw error("blah")', nor whether "undefined" is that much more useful than just throwing an UndefinedException, but this does illustrate the relationship between the bottom type and functions that, in Haskell terms, compute "forall a.a" or in C#/Java/C++ terms return the type parameter A without taking an A as an argument.

Also, as always, care should be taken when transliterating from one language to another. Java would allow a function like "undefined" to return a null instead of diverging. C++ would allow it to return anything at all and would only fail to compile if it were used in an incompatible way. That contrasts with languages like Haskell and ML in which the only way to implement "undefined :: a" is to make it diverge in some form or fashion[5].

The Bottom Value

I've spent some time talking about the bottom type as having no values. But it does have expressions like "undefined()" and that leads to a rather philosophical notion of how to think of bottom as having a value. Sorta. Skip this section if you don't like total gray beard philosophizing. If you're brave, then stick with me and imagine a subset of your favorite C derived language that does not allow side effects. No mutation, no IO, and no exceptions. In this strange world functions either just compute values or they diverge by not halting. In such a world the order of evaluation mostly isn't important as long as data dependencies are met - in f(a(), b()), you can compute a() before b() or b() before a(), whatever, they can't interfere with each other. Doesn't matter. The only time order of evaluation is forced on us is when there are data dependencies, so "a() + b()" must compute a() and b() (or b() and a()) before their sum can be computed. Nice.

Well, almost. Order of evaluation can matter for expressions that diverge. Let me give an example.

int infiniteLoop() {return infiniteLoop();}
int f(int x) {return 42;}
int result = f(infiniteLoop());

Because f ignores its argument, if we call "f(infiniteLoop());" there are two possible outcomes. If "infiniteLoop()" is eagerly evaluated before f is called then the program will diverge. On the other hand, if the expression "infiniteLoop()" is lazily remembered as being potentially needed later then f can successfully return 42 and the diverging expression can be forgotten just like it never happened (because it didn't).

We've gone to the pain of eliminating side effects from our language so it's a little irritating to have to keep thinking about evaluation order just to deal with divergence, so we perform a little mental trick; a little white lie. Imagine that functions like infiniteLoop above don't get stuck, they compute a value called ⊥, which is the only value of type bottom.

Now, since the bottom type is a subtype of all types and ⊥ is the bottom value, then it follows that every type must be extended with ⊥. Boolean = {⊥, true, false}, int = {⊥, 0, 1, -1, 2, -2, ...}, and unit = {⊥, ()}. That means we need some rules for ⊥ and how it interacts with everything else. In the vast majority of languages including Java, C#, and C++ but also impure functional languages like F# and OCaml doing just about anything with ⊥ can only compute ⊥. In other words, for all functions f, f(⊥) = ⊥. If you write f(infiniteLoop()) in those languages then the result is ⊥. This kind of rule is called "strictness".

In contrast, Haskell is often called a "lazy language," meaning that expressions aren't evaluated until they're needed. That's not quite technically correct. The Haskell spec just says that it is "non-strict." The spec doesn't care when expressions are evaluated so long as programs allow ⊥ to slide as far as possible. An expression like f(infiniteLoop()) must evaluate to 42. Haskell basically forces an expression involving ⊥ to evaluate to ⊥ only when the argument must be used[6]. The distinction between "lazy" and "non-strict" is subtle, but by being "non-strict" rather than "lazy" a Haskell compiler can use eager evaluation anytime it can prove that doing so doesn't change behavior in the face of ⊥. If a function always uses its first argument in a comparison, then Haskell is free to use eager evaluation on that argument. Since Haskell truly does forbid side effects (unlike our imagined neutered language above), the choice of evaluation strategy is up to the compiler and invisible except for performance consequences[7].

C++, Java, and C# have just a tiny bit of non-strictness. In these languages "true || ⊥" is true and "false && ⊥" is false. If these languages were totally strict then "true || ⊥" would be ⊥. Users of these languages call this behavior "short circuiting" and it's done for performance reasons rather than being a philosophical goal, but it's still a curious departure from their normal rules.

There you have it. The bottom value ⊥ is a clever mental hack to allow purely declarative functional code to be reasoned about without injecting sequencing into the logic. It allows people to talk about the difference between a purely declarative strict language and a purely declarative non-strict language without getting into details of evaluation order. But since we're talking about languages that aren't so purely declarative, we can take off our philosopher hats and return back to the world where side effects are unrestricted, bottom is a type with no values and divergence means that flow of control goes sideways.

Tuples and Aesthetics

Last time I talked about the unit type and how, if you interpret types as logical propositions, the unit type behaves as a "true" and tuple types act as logical and "∧." I also talked about an algebraic interpretation where unit type acts like 1 and tuple types act like multiplication "×". So the type (A,B) can also be written as A∧B or A×B. The type (A,unit) is isomorphic to A, A×1 = A, and A∧True <=> A.

Bottom has similar interpretations as 0 or False. The type (A,bottom) is isomorphic to the bottom type because you can't compute any values of type (A,bottom). A×0 = 0, and A∧False <=> False. Nice how it all hangs together, eh?

Bottom behaves like False in another way. In logic if you assume that False is true you can prove anything. Similarly in type systems the bottom type allows the programmer to promise to do even the impossible. For instance, here's a Java function signature that promises to convert anything to anything.

public static <A,B> B convert(A arg) {...}

If you ignore dynamic casting and null (they're both weird in different ways) there's no meaningful way to implement that function except by diverging. More on this in an upcoming episode.
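
In Scala terms the promise looks like the sketch below, and a diverging body is the only honest way to keep it; the particular exception is just for illustration.

def convert[A, B](a: A): B =
  throw new UnsupportedOperationException("there is no honest B to return")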

Conclusion

I somehow don't think anybody will be running into the office saying "I just read an article on the bottom type, so now I know how to solve our biggest engineering challenge." But the bottom type is still interesting. It's a kind of hole in your static type system that follows inevitably from the Turing Halting Problem[8]. It says that a function can't promise to compute a string, it can only promise to not compute something that isn't a string. It might compute nothing at all. And that in turn leads to the conclusion that in a Turing complete language static types don't classify values (as much as we pretend they do) they classify expressions.

Footnotes

  1. gcc does do tail call optimization under some circumstances.
  2. Oddly, in C, C#, and Java X can't be "void" because it's an error to return an expression of type void. C++ does allow void to make writing templates a bit easier.
  3. Yes, I can make it a void function and not have a return. But again that would illustrate how exit() is different from a throw: throw doesn't require the return type to be void.
  4. Note I'm saying "subtype" here and not "subclass." int, List<String>, and List<File> are 3 different types. In C++, Java, and C# int doesn't have a class. In Java and C# List<String> and List<File> both come from the same class. Yet bottom must be a subtype of all 3 so it can't be a subclass.
  5. A plea to my readers: I'm too lazy to see if "forall a.a" is bottom in F# or C#, or if, like Java, null would be allowed. My bet is that null won't be allowed because only Java has the peculiar notion that type parameters must be bound to reference types.
  6. Very loosely speaking. Please see the Haskell Report for all the gory details about when Haskell implementations are allowed to diverge.
  7. Haskell has another clever hack where throwing an exception isn't an effect, but catching one is. Since order of evaluation is unspecified in Haskell, the same program with the same inputs could conceivably cause different exceptions. Exceptions are theoretically non-deterministic in Haskell.
  8. A language that isn't Turing complete can prove termination and does not need a bottom type, but such a language isn't powerful enough to have a program that can interpret the language.

Thursday, July 23, 2009

Void vs Unit

I suspect most people who come to one of the statically typed functional languages like SML, OCaml, Haskell, F#, or Scala will have only seen static typing in one of the popular C derived languages like C++, Java, or C#. Some differences in type system become clear fast. One particular spot is the relationship and difference between void and unit.

So imagine your favorite C derived language. Consider your language's equivalent of the following function definition [1]

void printHello(String name) { printf("hello %s", name); }

What does the word "void" mean? To any C, C++, Java, C# programmer it means that printHello doesn't return anything. Simple.

But a functional programmer[2] sees it quite differently. This article will explain the different point of view and why it's useful.

What's a Function?

It turns out to be very hard to write a single definition of "function" that covers all the ways the word has been used. But to a functional programmer at the very least a function is something that maps inputs to outputs. In the impure functional languages that function may also have an effect like printing "hello." I'll stick with the impure definition in this article.

Now, when a functional programmer talks about mapping inputs to outputs he or she doesn't mean taking a string as an input and outputting it on the console. Instead he or she is talking about arguments and returns of a function. If something is to have any pretense at being a function, or at least a function that returns normally, then it must return something.

printHello clearly does return normally so a functional programmer insists that it returns something. But a functional programmer also must recognize that it doesn't return anything "meaningful."

Thus rather than the concept "void," a functional programmer thinks of unit. In some languages the unit type is written as (), but for clarity I'm going to stick with "unit." Now, unit is a type, conceptually not much different from say "int". But unit, as its name somewhat implies, has exactly one value conventionally represented as an empty pair of parentheses ().

In OO terms () is a "singleton". There's only one instance of it. It's also immutable. In a very real sense it carries no information. You can't write a program that behaves differently based on what a function of type unit returns - it always returns the same ().

A Performance Diversion

At this point a little guy in the back of your head might be asking "doesn't returning something meaningless just waste cycles?" Well, it's important to distinguish between abstraction and implementation. The unit value, (), is an abstraction that the implementation is free to optimize away. If you write in Scala

def printHello : Unit = {
  println("hello")
  ()
}

Then the compiler will generate the equivalent of the original void based code. If in Scala you write

print(printHello)

Then the compiler might generate something like

printHello
print( () )

Because () is a singleton it doesn't matter at the machine code/bytecode level that it's not "actually" returned. The compiler can figure it out with no performance hit at all.

Making It Useful

Okay, blah, blah, blah, functional think this, unit that. Whatever. What's the point? () is a value with no information, unit is a type with only that value...what's the practical use?

Well, it turns out that it's useful for writing generic[3] code. Java and C++, somewhat famously, do not have first class functions. But they can be faked. Here's the Java

interface Function<Arg, Return> {
   Return apply(Arg arg);
}

Function<String, Integer> stringLength = new Function<String, Integer>() {
   public Integer apply(String arg) {
      return arg.length();
   }
};

In C++ you can do it essentially the same way with a template and operator() overloading[4]

template <typename Arg, typename Result>
struct Function {
  virtual Result operator()(Arg arg) = 0;
};

struct StringLength : Function<string, int> {
  int operator()(string arg) {
     return arg.length();
  }
};

Function<string, int> *stringLength = new StringLength();

C# has a function type and lambdas so it's very easy to write.

Func<string, int> stringLength = s => s.Length;

So, if stringLength is an object of type Function<String, Integer> (Java) / Function<string, int> * (C++) / Func<string, int> (C#) then I can write the following

int result = stringLength.apply("hello"); // Java
int result = (*stringLength)("hello"); // C++
int result = stringLength("hello"); // C#

So here's the question: how would I convert the printHello function earlier into a Function<String, X> (Java) / Function<string, X> * (C++) / Func<string, X> (C#)? What type is X?

If you've been playing along so far you recognize that it "should be" void, but Java and C# don't allow it. It's invalid to write Function<String, void> in either language.

Let's pretend you could. After all, C++ will let you write a class PrintHello : Function<std::string, void>. But there's still a problem even in C++. Generic code might expect a result and none of these languages let you denote a value of type void. For instance (in Java) imagine this simple bit of reusable code.

public static <A,B> List<B> map(List<A> in, Function<A,B> f) {
  List<B> out = new ArrayList<B>(in.size());
  for (A a : in) {
     out.add(f.apply(a));
  }
  return out;
}

C++ has std::transform and C# has LINQ's Select, which are more or less the same idea.

Anyway, given something like the above, calling map/transform/Select with a list of strings and the printHello object should result in a List<void> of the same length as the original list. But what's in the list? The answer, in a functional language, is a bunch of references to (). Tada, the meaningless is useful for writing generic code.

C++ will allow a Function<std::string, void> but any attempt to call something like map with it will fail to compile. C++ doesn't have a () value so our generic code has become slightly less generic.
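
Scala shows the payoff directly: because side-effecting functions return the unit value, one generic map covers them too. A sketch:

def map[A, B](in: List[A], f: A => B): List[B] = in.map(f)

val lengths: List[Int]  = map(List("a", "bb"), (s: String) => s.length)
val printed: List[Unit] = map(List("a", "bb"), (s: String) => println(s))
// printed == List((), ()) - a list of references to the unit value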

The Work Arounds

Java, C++, and C# programmers do solve this problem all the time. One common option is to not use Function<string, void> but some other concept like "Action<string>" to mean a function that takes a string, performs a side effect, and returns nothing useful. Then you essentially duplicate "map" into something called perhaps "foreach" that expects an Action<T> and returns void. That probably makes sense in this case since a list of references to a meaningless value is perhaps silly, but it also means a great deal of code duplication if this kind of higher order programming is common. For instance, you can't write just one compose function that creates a function from two other functions, you also have to write a compose that takes an action and a function to create an action.

Another answer is to make the printHello function object have type Function<String,Object>/Function<string,void*>/Func<string,object> and return a null. That's pretty disgusting.

Perhaps the best option is to create your own Unit enumeration with only a single value. That does leave Unit nullable in Java, but railing against nullability in Java is another blog post.

enum Unit { unit }; // C++, Java, or C#

As for good old C, well, the type system doesn't do parametric polymorphism so generic code in C is problematic no matter what you do. Still, with function pointers it's conceivable you might find a use for a unit value. Typically a C programmer would use 0.

A Quick Aside About Purity

For the most part I've casually used effects everywhere. Most functional languages like SML, OCaml, F#, and Scala are impure and let the programmer get away with that. Some functional languages, e.g. Haskell without unsafe* extensions, do not. They must track effects using the type system. But even with this caveat, the unit type is still useful to, for instance, distinguish a function that fetches a string from the keyboard, which might have type IO String, from one that prints to the console, which might have type String -> IO Unit.

Conclusion

Void and unit are essentially the same concept. They're used to indicate that a function is called only for its side effects. The difference is that unit actually has a value. That means that unit functions can be used generically wherever a generic first class function is required. The C derived languages miss out by treating void functions as functions that don't return anything instead of as functions that don't return anything meaningful.

Next time I'll talk about functions that really, truly return nothing.

Postscript on Aesthetics and Tuples

Mostly I've talked about the practical value of the unit type in this article, but I do love it when convenient practice and beautiful theory line up. So here's an attack from a different direction.

Many functional languages have tuples. A tuple is an ordered fixed size collection of values of (possibly) different types[5]. Tuple types are usually written with parens. For instance, (Foo, Bar) is the type of pairs (2-tuples) with one Foo, one Bar. The same type is sometimes written as Foo × Bar, which makes sense too. You can see the type as a set that consists of a kind of Cartesian product of the Foo and Bar sets. For instance if Foo has only two possible values {f1, f2} and Bar has only two possible values {b1, b2} then the type Foo × Bar has 4 possible values: {(f1, b1), (f1, b2), (f2, b1), (f2, b2)}.

You can have tuples with more members: triples, 4-tuples, and so on.

What about a 1-tuple? Well, the type Foo can be seen as a 1-tuple. There would be no useful difference between the type Foo and the type (Foo).

The question to ask is, given that we have 3-tuples, 2-tuples, and 1-tuples, does it make sense to have a 0-tuple? The answer, dear reader, is unit. The unit value is a tuple with no members. That's why Haskell calls both the type and the value "()" - it's like the type (A,B) without the A,B. And just as an empty list and empty set aren't quite "nothing", neither is the empty tuple.

And here's another interesting thing, type theorists will also call the unit type "1" (not to be confused with the value 1). To see why, look at the type (Foo, unit) which can also be written as Foo × 1. If Foo only has 2 possible values {f1, f2} then the entire set of possible values for the type Foo × 1 is {(f1, ()), (f2, ())}. It's the same 2 values extended with a little bit of "meaningless" information that can be stripped off or added back in as desired. Formally speaking it's isomorphic to the type Foo. To abuse notation slightly, Foo × 1 = Foo. So unit is not only the type that has only 1 value, it's the type that behaves like a 1 at the type level.

Footnotes

  1. In C of course it would be char *name, in C++ it might be char * or I might use the string and iostream libraries. Whatever. The differences aren't that relevant to this discussion.
  2. To save myself a lot of time I'm using phrases like "functional language" to mean the statically typed functional languages that have been influenced to a large extent by ML. Some of what I'm talking about does translate into some dynamically typed languages as well, although they often "pun" things so that there isn't a unit value that's distinct from nil, null, false or some other singleton value.
  3. I use the phrase "generic" here to mean what the functional community means by "parametrically polymorphic" or just "polymorphic." However, my target audience is more likely to associate polymorphism with OO dispatch, so "generic" it is.
  4. In C++ you can also do it without the pure virtual base class and just use template structural typing based on operator(), but shortly it'll be easier for me to describe a problem if I use a base class to give the concept a name. I could also allocate on the stack but that slightly confuses my point. I'm going this route to keep things symmetrical among the languages. Besides, this isn't a tutorial on why C++ isn't Java. On a related note it's a shame C++0x won't have concepts.
  5. This is quite different from a Python tuple which is really just a list.

Thursday, July 16, 2009

"Fixing" Groovy?

In response to my last article on Groovy's lack of static typing, James Strachan, the original Groovy guy, tweeted "groovy should fix this really IMHO." Well, perhaps. But "fixing" it would substantially change Groovy's semantics. Most importantly, it would prevent Groovy from executing some programs that work just fine right now.

An Example

Here's a simple example of a program that works now but wouldn't under a static typing regime. It's a simple variation on the program from the previous article.

interface Foo {}
interface Bar {}
class Baz implements Foo, Bar {}

Foo test(Bar x) {return x}  // declared to take a Bar and return a Foo

test(new Baz())  // works at runtime because a Baz is both a Bar and a Foo
println("Everything's Groovy")

Compile it, run it, and feel groovy.

~/test$ groovyc test.groovy
~/test$ groovy test
Everything's Groovy

If I transliterate into Scala

trait Foo
trait Bar
class Baz extends Foo with Bar

def test(x : Bar) : Foo = x

test(new Baz)
println("Everything's Scala?")

And try to compile

~/test$ scalac -Xscript test test.scala
!!!
discarding <script preamble>
(fragment of test.scala):5: error: type mismatch;
 found   : this.Bar
 required: this.Foo
def test(x : Bar) : Foo = x
                           ^
one error found

Scala rejects a program that Groovy is happy with.

Conclusion

That's the nature of the beast. Static typing rejects some good programs that might otherwise work. And as my last article showed, dynamic typing allows some bad programs that can't possibly work. That's a pretty fundamental difference and there's no way around it.

If the Groovy community wants to make its type annotations work in a more statically typed manner that's ok. This article just shows that such a change wouldn't be 100% compatible with today's Groovy.

Wednesday, July 15, 2009

Groovy Does Not Have Optional Static Typing

Every now and then somebody writes that Groovy supports optional static typing. But it doesn't. They've confused the use of type annotations on variables with a static type system.

This article is emphatically not an entry in the long, sad, and mostly insipid static vs. dynamic debate. It is a clarification of a misunderstanding.

Definitions

The word "static" in "static type system" is related to phrases like "static analysis." Basically, without executing the code in question, a static type system analyzes code to prove that certain kinds of misbehaviors won't ever happen.

The word "dynamic" in "dynamic type system" is related to the "dynamic execution" of a program. In contrast to a static type system, a "dynamic type system" automatically tests properties when expressions are being (dynamically) evaluated.[1]

The Proof

With those definitions it's easy to show that adding type annotations to a Groovy program does not make it statically typed. Create a file named test.groovy. In it define a couple of unrelated classes

class Foo {}
class Bar {}

Now write a function that promises to take a Bar and return a Foo. Add a println for fun.

Foo test(Bar x) { return x }
println("Everything's groovy")

Compile it and run it (explicitly compiling isn't necessary, it just reinforces my point)

~/test$ groovyc test.groovy
~/test$ groovy test
Everything's groovy

There were no complaints at all. A static type system would have rejected the program.

Open up test.groovy and make a slight adjustment so that the function is actually called

Foo test(Bar x) {return x}
println("Everything's groovy")
test(new Bar())  // a plain Bar is not a Foo - this can only fail at runtime

Compile and run this

~/test$ groovyc test.groovy
~/test$ groovy test
Everything's groovy
Caught: org.codehaus.groovy.runtime.typehandling.GroovyCastException: 
Cannot cast object 'Bar@861f24' with class 'Bar' to class 'Foo'
 at test.test(test.groovy:4)
 at test.run(test.groovy:6)
 at test.main(test.groovy)

A runtime exception is finally thrown at the (dynamic) moment when the test function is executed. That's dynamic checking rather than any form of static proof.

Actual Static Typing

Now open a file named test.scala.

class Foo
class Bar
def test(x : Bar) : Foo = x

Compile as a script

~/test$ scala -Xscript test test.scala
(virtual file):1: error: type mismatch;
 found   : this.Bar
 required: this.Foo
object test {
^
one error found

Ta da, static typing. With no attempt to execute my test function the type checker says "I have reason to believe this might go sideways."

The Misunderstanding

There's an oft-repeated phrase that "in a statically typed language variables are typed." There's also a common misunderstanding that static typing and type annotations are the same thing. I can show a simple counterexample to both misconceptions with one language: Haskell.

Create a file named Test.hs. Here's a function called flatten. It flattens a list of lists by one "level," leaving just a list.[2]

flatten = foldl (++) []

No variables were harmed during the creation of that function. It's written in what's called "point free" style. Yet with neither variables nor type annotations I can show it's statically typed by defining a bogus main function

main = print $ flatten [1,2,3,4,5,6]

And trying to compile

~/test$ ghc Test.hs -o test
Test.hs:3:24:
    No instance for (Num [a])
      arising from the literal `1' at Test.hs:3:24
    Possible fix: add an instance declaration for (Num [a])
    In the expression: 1
    In the first argument of `flatten', namely `[1, 2, 3, 4, ....]'
    In the second argument of `($)', namely
        `flatten [1, 2, 3, 4, ....]'

Thus you need neither variables nor annotations to have static typing. A small change fixes things right up

main = print $ flatten [[1,2,3],[4,5,6]]

~/test$ ghc Test.hs -o test
~/test$ ./test
[1,2,3,4,5,6]

It's easy enough to see what type GHC has assigned the function. Just add a module declaration at the very top of the file

module Main (flatten, main) where

And fire up ghci

~/test$ ghci
Loading package base ... linking ... done.
Prelude> :load Test
Ok, modules loaded: Main.
Prelude Main> :type flatten
flatten :: [[a]] -> [a]

Exactly as described, it takes a list of lists and returns a list.

Groovy Does Have Static Typing

Having shown that Groovy does not have static typing, let me show that it does have static typing. :-) At least, it has static typing for some kinds of things. Open up test.groovy again and add this

class MyRunnable implements Runnable {}

Compile that and you get an error

~/test$ groovyc test.groovy
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed, test.groovy: 8: 
Can't have an abstract method in a non-abstract class. 
The class 'MyRunnable' must be declared abstract or the method 'void run()' must be implemented.
 @ line 8, column 1.
   class MyRunnable implements Runnable {}
   ^

1 error

The error happens without executing anything so it's a form of static analysis, and it's caused by failure to adhere to the Runnable type's requirements hence it's a static type error. But it's not an optional static type check - you can't turn it off.

Interesting, no?

Conclusion

Whether you feel positively or negatively about Groovy, please don't perpetuate the myth that it has optional static types. It doesn't. Groovy's optional type annotations are used for dynamic testing, not static proofs. It's just that with type annotations Groovy's dynamic tests are for "dynamic type" tags attached to objects rather than its more usual mechanism of dynamically testing for the presence of certain methods. What little static typing Groovy does have is not optional.

Update: Some time after this article was written, a team announced Groovy++, an extension to Groovy that really does have optional static typing.

Footnotes

[1] In a certain formal sense the phrase "dynamic type system" is an oxymoron and "static type system" is redundant. However, the phrases are commonly used and there doesn't seem to be a commonly recognized concise phrase to describe what a "dynamic type system" does. Pedants will argue for "untyped," but that doesn't give me a concept on which to hang the different ways that, say, Groovy, Ruby, and Scheme automatically deal with runtime properties.

[2] A more general function called join already exists for all monads in the Control.Monad module. It can be defined as "join m = m >>= id".