Is there any value in learning new languages, as the epigrams of Alan Perlis suggest? Michael Nygard recently wrote a very thoughtful essay on this, keeping in mind the recent trends of development planned for Java the language and Java the platform. Here are some of our thoughts from the trenches of the development team ..
Of late, a couple of new developments on the JVM platform have added a new dimension to our project infrastructure. One of them is the scripting support that comes bundled with Java 6, and the other is the emergence of languages like Scala, Groovy and JRuby that can coexist happily in a single project under the caring patronage of the Java virtual machine. As a result, things have become a little more interesting, and we can enjoy some amount of polyglotism by sneaking a few of these spices into the mass of Java objects. I had earlier blogged on some such attempts (not all of them successful) -
- trying to use Scheme as executable XML through SISC, an attempt that got shot down the moment I uttered the word Lisp
- using JavaScript through the Rhino engine to externalize some of the customizable business rules in a big Java project
- making Java objects smarter with nicely scaffolded APIs using Scala's lexically scoped open classes
Is there any value to such attempts in projects where the bulk of the code is churned out by new, inexperienced developers and managed by delivery managers who encourage maintainable programming paradigms?
Isn't Java enough?
Java has been the most dominant programming language for the enterprise. Given the proper set of tools, I love programming in Java, however unhip it may sound today. Then why do I try to sneak in those weird features of Groovy, Scala or Rhino, when the ecosystem of the project thrives mainly on a Java diet? Because I believe syntax matters, succinctness makes abstractions more resilient, and powerful idioms reduce the noise in the solution space, making the code base a truer representation of the business problem that I am trying to solve. Design patterns encourage best practices in application design, but their implementations in Java often generate lots of boilerplate code, which, in other higher-level languages, can be addressed through richer levels of abstraction.
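To make that concrete, here is a small, purely hypothetical illustration (the rule names are invented for the example) - a pricing rule that in Java would typically mean a Strategy interface plus one class per rule, expressed in Scala as plain function values:

// A hypothetical pricing rule modeled as a plain function value,
// instead of a Strategy interface with one class per concrete rule.
object PricingExample {
  type DiscountRule = Double => Double                  // amount in, discounted amount out

  val festive: DiscountRule = amount => amount * 0.90   // 10% off
  val loyalty: DiscountRule = amount => amount - 5.0    // flat rebate

  def invoice(amount: Double, rule: DiscountRule): Double = rule(amount)

  def main(args: Array[String]): Unit = {
    println(invoice(100.0, festive))   // 90.0
    println(invoice(100.0, loyalty))   // 95.0
  }
}

The intent - pluggable rules - survives, but the scaffolding disappears.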
I understand that implementing a whole new enterprise project in an evolving language like Scala or Groovy is not a feasible option today (yet). But I can certainly use some of their goodness to make my APIs look more like the domain that I am modeling. And the best thing is that I can use this power on top of my existing Java objects. The key part is to identify the use cases that make this exercise non-invasive, risk-free and maintainable by the profile of programmers on the job.
In a typical multi-tiered application stack, there are layers which do not need the safety net of static typing, e.g. the modules which interact with external data sources and receive amorphous data streams that my application has to consume and forward to the domain layer underneath. Groovy is a dynamically typed language on the JVM and provides strong support for XML consumption and generation without the verbosity of Java. Hamlet D'Arcy demonstrates how effectively we can use Groovy in the service layer, acting as the perfect glue to my Java-based domain layer. This makes the code base smaller through programming at a higher level of abstraction, and at the same time keeps the dynamic modules decoupled from the core static domain layer. External interfaces usually need to be kept malleable enough that changes to them do not impact your core logic - and Groovy or JRuby provides ample scope for such decoupling.
In one of our Java EE projects, I had used Rhino scripting to keep business rules externalized from the core model. The requirement was to have the rules configurable by the users without a full build of the application code, and with hot deployment of those rules within the running application containers. The scripting engine support bundled with Java 6 is a great option here, giving me dynamic loading capabilities for all such scripting tasks. With OSGi becoming mainstream, I am sure we will have better options for application packaging, versioning and deployment very soon.
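The core of that wiring is small. Here is a minimal sketch (in Scala, with an inlined rule and invented variable names purely for illustration - in the real project the rule text came from a configurable location, not a string literal):

import javax.script.ScriptEngineManager

// Minimal sketch: evaluate an externalized JavaScript rule using the
// scripting engine bundled with Java 6 (Rhino). The rule is inlined here
// only for illustration; in practice it would be loaded from a file or store.
object RuleRunner {
  def main(args: Array[String]): Unit = {
    val engine = new ScriptEngineManager().getEngineByName("JavaScript")
    engine.put("orderAmount", Double.box(12000.0))        // expose data to the script
    val rule = "orderAmount > 10000 ? 'PREMIUM' : 'STANDARD'"
    println(engine.eval(rule))                             // PREMIUM
  }
}

Swapping the rule string for the contents of a freshly uploaded script is what gives the users their no-rebuild configurability.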
And for the static typing aficionados .. (believe me, I am one too!)
It is not only the dynamically typed languages that offer this conciseness - you can get the benefits of static typing along with a nice, concise syntax on the JVM today. Scala is a multi-paradigm language for the JVM offering all the goodness of statically checked duck typing, type inference, rich functional features and some great library support for threadless concurrency and parser combinators. Scala supports XML literals as part of the language, which can very well be used to implement elegant XML crunching modules, far more concise than the DOM APIs or JAXB frameworks that Java offers.
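A small, self-contained sketch of what that looks like (the order document shape is invented for the example):

import scala.xml._

// XML consumption with Scala's XML literals and the \ extractor,
// using an invented order document purely for illustration.
object OrderFeed {
  val order: Elem =
    <order id="1001">
      <item sku="SCALA-BOOK" qty="2" price="30.0"/>
      <item sku="JAVA-BOOK" qty="1" price="40.0"/>
    </order>

  // XPath-like extraction, no DOM or JAXB ceremony
  def total(doc: Elem): Double =
    (doc \ "item").map { i =>
      (i \ "@qty").text.toInt * (i \ "@price").text.toDouble
    }.sum

  def main(args: Array[String]): Unit =
    println(total(order))   // 100.0
}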
Recently, in one of my programming assignments, I had to design a few adapters to existing Java classes not related through common parentage. The requirement was to define a set of uniform operations over a collection of the adapted classes, based on some common structural classifiers. Initially I came up with a Java solution. It was standard idiomatic Java that would have passed any careful review a couple of years ago. I tried the same problem in Scala and came up with a far more elegant and succinct solution. The three features of Scala that made the solution more precise are its support for structural typing, implicit adapters and, of course, functional programming. And since the entire development was additive and belonged to the service layer of the application, the core domain model was not impacted. The client was never bothered, as long as his investments and commitments on the JVM were protected. As David Pollak recently stated in one of his posts, it is only an additional jar. So true.
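This is not the actual project code, but a sketch of the shape of that solution, with invented class names - a structural type stands in for the common classifier, and an implicit adapter pulls in a class that does not conform as-is:

import scala.language.implicitConversions
import scala.language.reflectiveCalls

object Valuation {
  // Any object that can report a balance qualifies, regardless of its class
  type HasBalance = { def balance: Double }

  final class SavingsAccount(val balance: Double)       // conforms structurally
  final class FixedDeposit(val maturityValue: Double)   // does not conform as-is

  // Implicit adapter: present a FixedDeposit through the expected structure
  implicit def fdAsBalance(fd: FixedDeposit): HasBalance =
    new { def balance: Double = fd.maturityValue * 0.9 }

  // One uniform, functional operation over the whole collection
  def portfolioValue(assets: List[HasBalance]): Double =
    assets.map(_.balance).sum

  def main(args: Array[String]): Unit = {
    val assets: List[HasBalance] = List(new SavingsAccount(100.0), new FixedDeposit(200.0))
    println(portfolioValue(assets))   // 280.0
  }
}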
Is the infrastructure ready?
All the JVM languages are evolving - even Java is slated to undergo lots of evolution in the coming days (closures, JSR-308, modularity, ..). The most important thing, as I mentioned above, is to follow the evolutionary path and carefully choose the use cases where you plug in the goodness of these languages. To me, lots of risks are mitigated once you start using them as additive glue rather than invasive procedures. These languages are becoming more performant by the day, and innovations in hosting languages on a common runtime are now a reality. Groovy 1.6 has seen significant performance improvements in method dispatch, shortening the call path between the caller and the receiver through method handles and call site caching - a technique previously applied in optimizing JRuby performance, and documented very well by Charles Nutter in one of his recent posts. This is one big JVM community force in action towards improving the daily life of all the languages hosted there.
The best part of "polyglotism under a common runtime" is that I can very well use a uniform toolset for all the languages that I use. Unit testing frameworks like JUnit and TestNG are available to developers working in Java, Groovy, Scala and the rest. Maven and Ant, with all their eyesore XML, are still available for any of them. And of course I can use my favorite IDE polymorphically over all the languages, albeit with varying degrees of refactoring ability. And if I am adventurous enough, I can also tap the additional power of JTestR, ScalaCheck and specs for doing all sorts of BDD and PDD stuff. Real fun, huh!
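For instance, a tiny ScalaCheck specification (essentially the canonical string-concatenation example from its documentation) lives happily in the same build as the JUnit suites:

import org.scalacheck.Properties
import org.scalacheck.Prop.forAll

// A small ScalaCheck specification: each property is verified against
// many generated inputs, alongside the usual JUnit/TestNG tests.
object StringSpec extends Properties("String") {

  property("concat preserves length") = forAll { (a: String, b: String) =>
    (a + b).length == a.length + b.length
  }

  property("concat startsWith first argument") = forAll { (a: String, b: String) =>
    (a + b).startsWith(a)
  }
}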
Are you planning to use any of the non-Java, JVM-friendly languages in your Java project? What are the typical use cases that you think fit the bill for another JVM language?
1 comment:
Nice post. I like your way of using different languages. But I find that even Java + Scala makes switching back and forth hard; if you add another two languages, most people can't take that much.