Showing posts with label jvm. Show all posts

Wednesday, March 11, 2009

Web programming at a higher level of abstraction

Frameworks like SproutCore, Cappuccino and Objective-J have given the Web programming model a huge leap. To date, one of the pet peeves of Web programmers has been that the programming model was based on the native DOM and the accidental complexities that HTML, Javascript and CSS bring with it. The crux of the problem was not the polyglotism involved in Web development, but that these technologies often compete with overlapping responsibilities. Frameworks like Prototype and jQuery took care of cross-browser compatibility, but you are still never sure whether to create a button in HTML or in Javascript code, or whether to use CSS files or apply styles from Javascript. In summary, we have so far been working with low level abstractions that should ideally be hidden behind higher order frameworks.

And Cappuccino does address this issue head-on. It turns polyglotism on its head and defines a common way to do things on the Web.

I was listening to the interview of the 280 North guys through a post on Ajaxian, where Francisco Tolmasky, the co-founder of 280 North, mentioned in one of the comments ..

"JavaScript does not have any way of writing applications today in the same way mature platforms like Cocoa do. Cappuccino provides you with very high level features like copy-paste (of anything), undo and redo, document management and archiving, vector graphics and animation, etc etc. when you write an application for Mac OS X or iPhone, you can truly focus *just* on your application, not whether it will work on some obscure browser or how to get something as simple as undo working, and that’s precisely what we are trying to provide."

As this Cappuccino blog post clearly indicates, the only purpose behind Objective-J was to provide the right level of abstraction for Cappuccino to be built on top of it ..

"The purpose behind Objective-J was to facilitate the development of Cappuccino, and when we originally set out to do that we simply wanted to add a few key missing features to the existing JavaScript “standard”. In other words, Objective-J is our take on JavaScript 2.0."

To me, the most important takeaway from all this is that you can now develop rich applications in the browser alone, without depending on any proprietary runtime. Google Web Toolkit has been generating highly optimized compiled Javascript since 2006; it will be really interesting to see how the compiler-optimized Javascript generated by GWT compares in runtime performance with the on-the-fly generated Javascript offered by Cappuccino.

We are seeing more and more impetus behind Javascript as a compilation target. Some time back I read that Gilad Bracha was also thinking of a Newspeak port on V8. With invokedynamic being blessed by Sun, dynamic language compilation will improve, and we will see more and more scripting languages targeting the JVM. This is the right time to embrace frameworks that promise higher level abstractions in Javascript, making the browser and the Web a more productive ecosystem.

Monday, June 09, 2008

Targeting BEAM for extreme reliability

Some time back I came across Reia, a Python/Ruby-esque language targeting the Erlang virtual machine (BEAM) and its high performance native compiler (HiPE). Well before discovering Reia, I had exchanged a couple of tweets with Michael Feathers expressing my surprise that not many languages target BEAM as their runtime. BEAM provides an ecosystem offering phenomenal scalability with concurrent processes and distributed programming, which is really difficult (if not impossible) to replicate in any other virtual machine being worked on today. After all, it is much easier to dress Erlang up with a nicer syntax than to implement its guarantees of reliability and extreme concurrency in your favorite language's runtime.

And Reia precisely does that! The following is from the wiki pages of Reia describing the philosophy behind the design ..
Reia combines the strengths of existing language concepts. The goal isn't to invent new ones, but to provide a mix of existing approaches which provides an easy and painless approach to writing concurrent programs. To that end, Reia feels like Python with a dash of Ruby, and the concurrent programming ideas behind Erlang added to the mix.

Reia also leverages the highly mature Erlang VM and OTP framework. These both provide a bulletproof foundation for concurrent systems which has been highly optimized over the years, particularly in regard to SMP systems. There's no need to build a new VM when an awesome one like BEAM is available.

Bunking the single assignment semantics ..

This is something I don't like about Reia. From the FAQ: "Reia does away with many of the more obtuse semantics in Erlang, including single assignment." Single assignment in Erlang is all about extreme reliability, giving developers a lower cost of fixing bugs in highly distributed systems with hot swapping and transparent failover. The code I write in Reia today may not be deployed in a distributed architecture right now, but it has to be safe enough to interoperate with Erlang modules that already run in the same virtual machine as part of an extremely reliable architecture. This is the basic idea behind cooperative polyglotism - modules written in various languages interoperating seamlessly within the confines of the host VM.

Erlang guarantees immutability at the language level. Once you allow destructive assignment, you are no longer playing to the strengths of the ecosystem. A similar logic holds for Scala actors built on top of the Java Virtual Machine. Because the language does not guarantee immutability, you can always send mutable data between actors, which can lead to race conditions. As Yariv explains here, this leaves you just where you started: having to ensure synchronized access to shared memory. And because neither Java nor Scala ensures language level immutability, you cannot safely take advantage of the numerous existing Java libraries while implementing your actor based concurrent system in Scala. Without single assignment enforced, systems written in Reia may face the same consequence while interoperating with Erlang modules.
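
The safe alternative the paragraph argues for can be sketched in plain Java: make the message itself immutable, so that no synchronization is needed no matter which thread or actor holds a reference. This is a minimal illustration with invented names, not code from any actor library.

```java
// A minimal sketch, not from any actor library: an immutable message type
// that is safe to share between threads without locks. All names invented.
class Transfer {
    private final String account;
    private final long amountCents;

    Transfer(String account, long amountCents) {
        this.account = account;
        this.amountCents = amountCents;
    }

    String account() { return account; }
    long amountCents() { return amountCents; }

    // "Mutation" builds a new instance; the original can never change
    // under the feet of another actor holding a reference to it.
    Transfer withAmount(long newAmountCents) {
        return new Transfer(account, newAmountCents);
    }

    public static void main(String[] args) {
        Transfer t = new Transfer("acct-1", 10000);
        Transfer t2 = t.withAmount(25000);
        System.out.println(t.amountCents() + " " + t2.amountCents());
    }
}
```

The point is that the language does not enforce this discipline - nothing stops a colleague from sending a mutable `HashMap` instead, which is exactly the hole Erlang's single assignment closes.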

Monday, May 12, 2008

Thinking in JVM languages

When I find a language expressive enough to implement programming idioms succinctly, I like to use it in my projects. But I constantly have to remind myself that project development is a team game and the language has to be usable by every member of the development team. Another fact that stares at me is the delivery deadline, long since committed to the client by a different team completely oblivious of the constraints of the software development lifecycle that rock the project right from inception. Hence choosing the programming language is also a function of the number of available frameworks, libraries, refactoring-friendly IDEs and the community support that can perceivably add to the velocity of program development. Considering all these forces, it is a no brainer that both we and the client happily choose the Java programming language for most of our development projects.

Is there any value in learning new languages, as the epigrams of Alan Perlis suggest ? Michael Nygard recently wrote a very thoughtful essay on this, keeping in mind the recent developments planned for Java the language and Java the platform. Here are some of our thoughts from the trenches of the development team ..

Of late, a couple of new developments on the JVM platform have added a new dimension to our project infrastructure. One is the scripting support that comes bundled with Java 6; the other is the emergence of languages like Scala, Groovy and JRuby that can coexist happily in a single project under the caring patronage of the Java virtual machine. As a result, things have become a little more interesting, and we can enjoy some polyglotism by sneaking a few of these spices into the mass of Java objects. I had earlier blogged on some such (not always successful) attempts -

  • trying to use Scheme as an executable XML through SISC, an attempt that got shot down the moment I uttered the word Lisp

  • using Javascript through Rhino engine to externalize some of the customizable business rules in a big Java project

  • making Java objects smarter with nicely scaffolded APIs using Scala's lexically scoped open classes


Is there any value to such attempts in projects where the bulk of the code is churned out by inexperienced developers and managed by delivery managers who encourage maintainable programming paradigms ?

Isn't Java enough ?

Java has been the most dominant programming language for the enterprise. Given a proper set of tools, I love programming in Java, however unhip that may sound today. Then why do I try to sneak in those weird features of Groovy, Scala or Rhino when the ecosystem of the project thrives mainly on a Java diet ? Because I believe syntax matters, succinctness makes abstractions more resilient, and powerful idioms reduce the noise in the solution space, making the code base a truer representation of the business problem I am trying to solve. Design patterns encourage best practices in application design, but their implementations in Java often generate lots of boilerplate code which, in higher level languages, can be addressed through richer abstractions.
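
The boilerplate point is easy to see with the Strategy pattern. Here is an illustrative (invented) pricing example: in pre-closure Java every strategy variant costs an interface plus an anonymous class, where a language with first-class functions would need only a one-line closure.

```java
// Illustrative only: a Strategy in pre-closure Java needs an interface
// plus an anonymous class per variant -- ceremony that a language with
// first-class functions collapses into a single closure expression.
interface PricingStrategy {
    double priceFor(double base);
}

class Checkout {
    private final PricingStrategy strategy;

    Checkout(PricingStrategy strategy) { this.strategy = strategy; }

    double total(double base) { return strategy.priceFor(base); }

    public static void main(String[] args) {
        // Each new pricing rule repeats this anonymous-class scaffolding.
        Checkout discounted = new Checkout(new PricingStrategy() {
            public double priceFor(double base) { return base * 0.9; }
        });
        System.out.println(discounted.total(100.0)); // prints 90.0
    }
}
```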

I understand that implementing a whole new enterprise project in an evolving language like Scala or Groovy is not a feasible option today (yet). But I can certainly use some of their goodness to make my APIs more like the domain I am modeling. And the best thing is that I can use this power on top of my existing Java objects. The key is to identify the use cases that make this exercise non-invasive, risk free and maintainable by the profile of programmers on the job.

In a typical multi-tiered application stack, there are layers that do not need the safety net of static typing, e.g. the modules that interact with external data sources and receive amorphous data streams that my application has to consume and forward to the domain layer underneath. Groovy is a dynamically typed language on the JVM with strong support for XML consumption and generation without the verbosity of Java. Hamlet D'Arcy demonstrates how effectively we can use Groovy in the service layer as the perfect glue to a Java based domain layer. This makes the code base smaller through programming at a higher level of abstraction, and at the same time keeps dynamic modules decoupled from the core static domain layer. External interfaces usually need to be kept malleable enough that changes to them do not impact your core logic - and Groovy or JRuby provides ample scope for such decoupling.
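
To see the verbosity Groovy removes, here is the plain-Java DOM reading of a tiny (invented) document. The element names and the `OrderReader` class are made up for illustration; in Groovy, something like `new XmlSlurper().parseText(xml).id` would replace most of this ceremony.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;

class OrderReader {
    // Reads the <id> element out of <order><id>42</id></order>:
    // factory, builder, stream, node list -- all ceremony around
    // one line of actual intent.
    static String readId(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return doc.getDocumentElement()
                  .getElementsByTagName("id").item(0)
                  .getTextContent();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readId("<order><id>42</id></order>")); // prints 42
    }
}
```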

In one of our Java EE projects, I had used Rhino scripting to keep business rules externalized from the core model. The requirement was to have the rules configurable by users without a full build of the application code, with hot deployment of those rules into the running application containers. The scripting engine bundled with Java 6 is a great option for providing dynamic loading capabilities for all such scripting tasks. With OSGi becoming mainstream, I am sure we will have better options for application packaging, versioning and deployment very soon.
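
The shape of that externalized-rule setup, via the JSR-223 `javax.script` API, might look like the sketch below. The rule text and class name are invented; note that which JavaScript engine you get depends on the JRE (Rhino on Java 6, Nashorn on Java 8, possibly none on recent JDKs, hence the null check).

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

class RuleRunner {
    // Evaluates an externalized business rule with whatever JavaScript
    // engine the runtime ships. Returns null when no engine is bundled.
    static Object run(String rule) throws Exception {
        ScriptEngine js = new ScriptEngineManager().getEngineByName("javascript");
        if (js == null) return null;
        return js.eval(rule);
    }

    public static void main(String[] args) throws Exception {
        // The rule string could be reloaded from a file or database at
        // runtime -- no rebuild, no redeploy of the core application.
        Object result = run("var discount = 0.1; (1 - discount) * 200;");
        System.out.println(result == null ? "no js engine available" : result);
    }
}
```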

And for the static typing aficionados .. (believe me, I am one too !)

Dynamically typed languages are not your only option - you can get the benefits of static typing, along with nice concise syntax, on the JVM today. Scala is a multi-paradigm language for the JVM offering all the goodness of statically checked duck typing, type inference, rich functional features and some great library support for threadless concurrency and parser combinators. Scala supports XML literals as part of the language, which can be used to implement elegant XML crunching modules far more concise than the DOM APIs or JAXB frameworks that Java offers.

Recently, in one of my programming assignments, I had to design a few adapters to existing Java classes not related through common parentage. The requirement was to define a set of uniform operations over a collection of the adapted classes, based on some common structural classifiers. Initially I came up with a Java solution - standard idiomatic Java that would have passed any careful review a couple of years ago. I then tried the same problem in Scala and came up with a far more elegant and succinct solution. The three features of Scala that made the solution more precise are its support for structural typing, implicit adapters and, of course, functional programming. And since the entire development was additive and belonged to the service layer of the application, the core domain model was not impacted. The client was never bothered, as long as his investments and commitments on the JVM were protected. As David Pollak stated in one of his recent posts, it is only an additional jar. So true.
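
For readers who want the concrete shape of the problem, here is the Java side of such an adapter, with invented class names (the original classes are not shown in the post). Each unrelated class gets an explicit wrapper onto a uniform interface; Scala's implicit conversions and structural types remove most of this ceremony.

```java
import java.util.Arrays;
import java.util.List;

// Two invented classes with no common parent:
class Invoice  { String ref()        { return "INV-1"; } }
class Shipment { String trackingId() { return "SHP-9"; } }

// The uniform operation every adapted class must expose.
interface HasLabel { String label(); }

class Adapters {
    // In Java each adaptation is a hand-written wrapper; in Scala the
    // adapter can be an implicit conversion applied automatically.
    static HasLabel adapt(final Invoice i) {
        return new HasLabel() { public String label() { return i.ref(); } };
    }
    static HasLabel adapt(final Shipment s) {
        return new HasLabel() { public String label() { return s.trackingId(); } };
    }

    public static void main(String[] args) {
        // One uniform operation over the whole adapted collection.
        List<HasLabel> all = Arrays.asList(adapt(new Invoice()), adapt(new Shipment()));
        for (HasLabel h : all) System.out.println(h.label());
    }
}
```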

Is the infrastructure ready ?

All the JVM languages are evolving - even Java is slated to undergo lots of evolution in the coming days (closures, JSR-308, modularity, ..). The most important thing, as I mentioned above, is to follow the evolutionary path and carefully choose the use cases where you plug in the goodness of these languages. To me, lots of risks are mitigated once you start using them as additive glue rather than invasive procedures. These languages are becoming more performant by the day, and innovations in hosting languages on a common runtime are now a reality. Groovy 1.6 has seen significant performance improvements in method dispatch, shortening the call path between caller and receiver using method handles and call site caching - a technique previously applied in optimizing JRuby performance and documented very well by Charles Nutter in one of his recent posts. This is one big JVM community force in action, improving the daily life of all languages hosted there.

The best part of "polyglotism under a common runtime" is that I can use a uniform toolset for all the languages involved. Unit testing frameworks like JUnit and TestNG are available to developers working in Java, Groovy, Scala etc. Maven and Ant, with all their eyesore XML, are still available for any of them. And of course I can use my favorite IDE polymorphically over all these languages, albeit with varying degrees of refactoring ability. If I am adventurous enough, I can also tap the additional power of JTestR, ScalaCheck and specs for all sorts of BDD and PDD stuff. Real fun, huh!

Are you planning to use any of the non-Java, JVM friendly languages in your Java project ? What are the typical use cases that you think fit the bill for another JVM language ?

Wednesday, March 05, 2008

Sunshine on Dynamic Languages rolls on

Lighting up enterprisey Java shops with Ruby and Python.

Sun is up in arms strengthening the ecosystem of the JVM. In fact, the J in JVM is starting to fade away, and it is fast becoming the VM of choice for dynamic languages. I am sure that, with Sun's hiring of Ted and Frank, invokedynamic is poised to gain more importance in the JCP world. After Ruby, Python is slated to be the next first-class citizen on the JVM.

But what happens to Java, the language ?

With more and more power of dynamic languages being blessed onto the JVM, are we prepared to see more and more polyglotism in enterprise applications ? BigCos and enterprise managers always like to remain sensitive to the platform they have invested in - you can implement in any language you want, so long as it runs on the JVM. In the days to come, expect more strategic support from Sun for features that enrich Java the platform, rather than Java the language.

Next stop for Sun ? Scala ?

Monday, January 07, 2008

Language Explorations on the JVM - An Application Developer's perspective

Some time ago I reported on our first experience of using Rhino scripting in a Java EE application for a large client. It was exactly what Ola Bini suggests in his post on language explorations. Some modules of the application needed the dynamism, and were required to be hot swappable and customizable by the domain users. The compilation cycle was getting in the way of meeting these requirements with the standard server side language. So we went for Rhino scripting for all such controllers, using the new scripting engine available for executing Javascript within the JVM.

Since that application was successfully deployed, we have been fiddling with some more options in polyglotism. This post is a brief summary of some of the languages / language bridges we explored in the process. All of what we have done so far has been on the JVM as the underlying polyglot platform - we have not yet explored anything in the .NET world.

Web controllers are areas that may need a lot of dynamic behavior, since they deal with user interactions, page flows, stateful storage across requests and many other control flow structures for realizing a complex use case. Spring Web Flow provides one viable option for modeling this. Another option, from the scripting world, is Rhino in Spring, which integrates the Mozilla Rhino JavaScript interpreter with the Spring framework. The value add is offering the user the flexibility of a dynamic language for modeling the dynamic parts of the application on the Java platform, while integrating with the dependency injection principles of the Spring framework. Spring also offers nice support for plugging in managed script based controllers in multiple languages - this will surely provide more options for the evolution of polyglot programming in today's applications.

Another area where we explored the possible use of an expressive language is application configuration. Applications today mostly use XML based configuration, which feels too noisy for human consumption. SISC offers a lightweight Scheme scripting engine atop the JVM and comes in a small footprint of around 230 KB. I had blogged before on using Scheme as an executable XML:
In SISC bridging is accomplished by a Java API for executing Scheme code and evaluating Scheme expressions, and a module that provides Scheme-level access to Java objects and implementation of Java interfaces in Scheme.

Talking about what Ola Bini calls the "stable layer", I fully agree that static type safety helps here, since the entire application infrastructure will be built upon this layer. To date, Java is my #1 choice of language and Spring my only choice of framework for this layer. I have said this a number of times before, but it is worth repeating that I love the non-intrusiveness of Spring as far as declarative programming on the JVM is concerned. As it stands now, I will not forego Spring if I am developing on the JVM platform.

It will be really interesting to see how Scala shapes up as a potential candidate for this layer. Scala is a feature rich language with an advanced type system, nice syntax, less verbosity and more elegance than Java. Where Scala lacks is in tooling, documentation and industry patronage, all of which should improve as more users join the community.

In the domain layer, most applications rely on pure Java to model business rules. As Ola has mentioned, this layer is a strong candidate for a DSL based implementation. Irrespective of which language(s) you use to implement your DSL, the business rules should always be expressed through the DSL alone. My feeling is that, in today's scenario, Java is not really an ideal language for designing a DSL. Hence we find almost all applications implementing the domain layer at lower levels of abstraction, which makes today's domain layer more verbose and less maintainable.

Powerful, expressive languages with concise syntax are a better fit for designing DSLs. While both JRuby and Scala make suitable candidates for domain layer DSLs, I think Scala's static typing makes it the better fit here. I may be biased, but when I think of reusable API design to be used by big teams, static typing (done better than in Java) somehow makes me more comfortable. However, considering the state of enterprise software development today, there is a significant entry barrier for average programmers to both Scala and JRuby. Idiomatic Scala or Ruby is primarily based on functional paradigms, something that is still not intuitive to a Java programmer. With most of the new generation of languages embracing FP, this may be the single most decisive factor in determining how much acceptance polyglot programming finds within the behemoth called enterprise software development. But there is no doubt that a well designed DSL in a language like Scala or JRuby will put tomorrow's domain model at a much higher level of abstraction than today's.
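
Even within Java, the internal-DSL idea can be approximated with a fluent builder. Here is a minimal sketch for an invented trade-order rule; all names are made up. A Scala or JRuby version would let the same rule read even closer to the business language, without the builder plumbing.

```java
// A tiny internal DSL sketch in Java: a fluent builder that lets a
// domain rule read as "buy 100 IBM at limit 52.5". Names are invented.
class Order {
    private String symbol;
    private int quantity;
    private double limit;

    static Order buy(int quantity, String symbol) {
        Order o = new Order();
        o.quantity = quantity;
        o.symbol = symbol;
        return o;
    }

    // Returning `this` is what makes the calls chain fluently.
    Order atLimit(double limit) { this.limit = limit; return this; }

    String describe() {
        return "buy " + quantity + " " + symbol + " @ " + limit;
    }

    public static void main(String[] args) {
        System.out.println(Order.buy(100, "IBM").atLimit(52.5).describe());
    }
}
```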

Friday, October 19, 2007

Clojure is here

I came across this posting, Lisp on the JVM, on reddit and thought .. what the heck ? What's so great about it when we already have ABCL, Kawa and SISC for the JVM ? In fact the title on reddit is a bit misleading - Clojure is very much a Lisp. It targets the JVM, but more than anything else its design embodies a lot of thought about immutability, functional data structures, concurrency, STM etc. Here is a comment from the author himself on reddit:
Clojure has some tangible, non-superficial differences from Common Lisp and Scheme. They yield something that is different, and might or might not be more suitable depending on your programming style and application domain.

  • Most of the core data structures are immutable. This is part of an overall design philosophy to make Clojure a good language for concurrent/multi-core programming.

  • Most of the data structures are extensible abstractions. This is different from Common Lisp where you can't extend the sequence functions to your own data structures, for instance. Even invocability is an abstraction - allowing maps to be functions of their keys and vice-versa.

  • Clojure extends code-as-data to maps and vectors in a deep way. They have literal reader syntax, the compiler understands them, backquote works with them, they support metadata etc. Because they are efficiently immutable and persistent, they support very Lisp-y recursive usage, shared structure etc, in ways Common Lisp's hash tables and vectors cannot.

  • Clojure embraces its host platform in ways that the standard Lisps ported to the JVM can't. For instance, Common Lisp's strings could never be Java Strings since the former are mutable and the latter are not. Clojure strings are Java Strings. The Clojure sequence library functions work over Clojure and Java data structures transparently. Etc.

  • Clojure has metadata as a core concept, not something one could retrofit onto the built-in Common Lisp types.

  • Clojure is designed for concurrency. Vars (similar to CL special variables) have explicit threading semantics. Clojure has a software transactional memory system. Etc.


In short, Clojure is (non-gratuitously) different. If you don't want different, you don't want Clojure. If you like Lisp and need to write multi-threaded programs for the JVM, you might find something interesting in Clojure.


I had blogged about SISC some time back, discussing how we could use Scheme as a more consumer friendly XML in a Java application. I think Clojure is going to be the most interesting dynamic language on the JVM very soon. There has never been a better time to learn Lisp !
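
What "efficiently immutable and persistent" means in the comment above can be sketched even in Java: a minimal cons list (invented for illustration, far simpler than Clojure's actual data structures) where "adding" an element builds a new head that shares the entire old list as its tail.

```java
// A minimal persistent (immutable, structure-sharing) list in Java.
// "Adding" never copies or mutates: the new cell just points at the
// old list, which remains valid and unchanged for anyone holding it.
class PList {
    final int head;
    final PList tail;   // null marks the empty list

    private PList(int head, PList tail) { this.head = head; this.tail = tail; }

    static PList cons(int head, PList tail) { return new PList(head, tail); }

    int size() { return tail == null ? 1 : 1 + tail.size(); }

    public static void main(String[] args) {
        PList one = cons(1, null);
        PList two = cons(2, one);   // shares `one` as its tail
        System.out.println(two.size() + " " + (two.tail == one));
    }
}
```

Clojure applies the same structural-sharing idea, with much better algorithmic machinery, to its maps and vectors as well.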