## Tuesday, March 28, 2006

### Non Java Languages on the JVM

In one of my earlier posts, I briefly discussed the support for the multitude of dynamic languages on the JVM. In spite of the fact that both the JVM and the .NET CLR were conceived as static-type friendly, we are seeing a lot of movement towards supporting dynamic languages on these platforms. In this entry I plan to continue my earlier thread with some more technical details on the features that language implementers expect the virtual machines to support. There is still confusion regarding what should be part of the language and what should be supported at the VM level - but on the whole, some of these features are heavily discussed in all the major forums. Both the JVM and the .NET CLR being Turing complete, support for all of these features is certainly possible - it is just a question of the ease of implementation and use, and the speed of execution on the VM platforms.

invokedynamic in JVM

Duck typing makes its way to the JVM through this new bytecode. invokedynamic will make it easy for dynamically typed languages to be ported to and run on the JVM. Dynamically typed languages like Python and Ruby perform method dispatch by name, not by type - an invokedynamic-enabled JVM will ensure that the verifier won't insist that the type of the target of the method invocation (the receiver, in Smalltalk speak) be known to support the method being invoked, or that the types of the arguments be known to match the signature of that method. Instead, these checks will be done dynamically.

In case the method lookup fails at runtime, there will be a trap mechanism as a fallback. Gilad Bracha covers the details of this new bytecode in his blog here. Parrot already has support for an "invoke dynamic" instruction, which does runtime lookup along with exception handling.
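To see what invokedynamic saves a language implementer, here is a sketch of how dispatch-by-name has to be done on the JVM today - a reflective lookup at call time, with a method-missing style trap as the fallback. The `send` and `methodMissing` names are hypothetical, not from any real runtime:

```java
import java.lang.reflect.Method;

// A sketch of dispatch-by-name as dynamic-language runtimes must do it today:
// look the method up reflectively at call time, and fall back to a trap
// (a la Smalltalk's doesNotUnderstand:) when the lookup fails.
public class DynamicDispatch {
    public static Object send(Object receiver, String name, Object... args) {
        for (Method m : receiver.getClass().getMethods()) {
            if (m.getName().equals(name) && m.getParameterTypes().length == args.length) {
                try {
                    return m.invoke(receiver, args);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        }
        // the trap: method lookup failed at runtime
        return methodMissing(receiver, name);
    }

    static Object methodMissing(Object receiver, String name) {
        return "no method " + name + " on " + receiver.getClass().getSimpleName();
    }

    public static void main(String[] args) {
        System.out.println(send("hello", "toUpperCase")); // HELLO
        System.out.println(send("hello", "quack"));       // trap fires
    }
}
```

invokedynamic would let the JVM itself perform this by-name linkage, instead of every language runtime paying the reflective overhead.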

Hotswapping

The main idea is to allow code changes on the fly, while the code is running. Full hotswapping implies support for any kind of change - addition/modification/removal of methods and attributes, including changes in the inheritance hierarchy. For statically typed languages this is still an area of active research, but Gilad is hopeful of delivering full hotswapping at least for the dynamic languages. As he mentions in his blog
If we can get both invokedynamic and full hotswapping for dynamically typed languages working, implementations of languages like Python or Ruby could use the JVM object model directly, and really take advantage of the great performance of JVMs.

Could not agree more with Gilad's scepticism. IANA expert on language or VM design, but the thought of implementing hotswapping of static variables makes me blanch!
Microsoft has come up with extension methods in C# 3.0 as a limited form of adding methods to an existing type without recompilation. But this is purely a C# feature and not a feature of the CLR.

invokedynamic and hotswapping have been made official as JSR 292, chartered to extend JVM support for dynamically typed languages.

Tail Calls

Just a small prelude for the new generation of programmers who missed the elegance of Lisp or Scheme and have been writing for-loops ever since. Functional-language programmers use recursion to implement loops - however elegant this may look, recursive calls consume lots of stack space. Hence these languages employ all sorts of optimizations to make efficient loop implementation possible in constant stack space. Tail call optimization is one such technique, which replaces calls in tail position with jumps. A call is in tail position if it is the last thing a function does before returning.
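To make "tail position" concrete, here is a minimal sketch in Java: in the first version the recursive call is not in tail position (an addition remains to be done after it returns), while the accumulator version places the call in tail position, so an optimizing compiler could in principle replace it with a jump:

```java
public class TailCalls {
    // NOT a tail call: after sum(n - 1) returns, we still have to add n,
    // so each recursive call needs its own stack frame.
    public static long sum(long n) {
        if (n == 0) return 0;
        return n + sum(n - 1);
    }

    // Tail call: the recursive call is the last thing the function does.
    // An optimizing compiler can turn it into a jump, reusing the frame.
    public static long sumAcc(long n, long acc) {
        if (n == 0) return acc;
        return sumAcc(n - 1, acc + n);
    }

    public static void main(String[] args) {
        System.out.println(sum(1000));       // 500500
        System.out.println(sumAcc(1000, 0)); // 500500
    }
}
```

javac performs neither optimization today - both versions grow the stack - which is exactly the gap the implementers are complaining about.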

Enough of beating around the bush - let's come straight to the point. Implementers want tail call support in the JVM. This is not as simple as it may seem. Recursive tail calls - where a function tail-calls itself - can be optimized easily by introducing local jumps, and need no support from the VM. Hence language implementers targeting the JVM usually compile these recursive tail calls directly into goto bytecodes (the goto instruction exists at the bytecode level, even though Java source has no goto statement). However, the problem arises with non-recursive tail calls, since the JVM does not support the equivalents of C's setjmp/longjmp functions - hence non-local jumps cannot be achieved efficiently within the JVM. Implementers also point out that supporting tail call optimization interferes with Java's stack-walking security mechanism.

Various techniques have been proposed and used over the last decade or so for generic tail call optimization, but none of them has been found suitable for an efficient implementation on the JVM. The following are some of the commonly used techniques:

1. Place the whole program in a big function and change all non-local calls to direct jumps or switch statements inside this function. This method cannot work for the JVM because of the 64 KB limit on method size, though it may work for the .NET CLR. But overall, this is not a very scalable technique.

2. Use a trampoline. "A trampoline is an outer function which repeatedly calls an inner function. Each time the inner function wishes to tail call another function, it does not call it directly but simply returns its identity (e.g. as a closure) to the trampoline, which then does the call itself." Use of trampolines limits indefinite stack growth, but at the expense of performance, since all the calls made by the trampoline are statically unknown method dispatches.
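The trampoline technique quoted above can be sketched in a few lines of (modern) Java - a toy example with hypothetical names, using mutually recursive even/odd tests that would blow the stack if called directly:

```java
// A sketch of the trampoline technique: instead of tail-calling the next
// function directly, each step returns a thunk describing the next call,
// and an outer loop (the trampoline) keeps invoking thunks until a final
// value is produced. Stack depth stays constant regardless of call depth.
public class Trampoline {
    public interface Step { Object run(); }    // a suspended tail call

    public static Object evenStep(long n) {
        if (n == 0) return Boolean.TRUE;
        return (Step) () -> oddStep(n - 1);    // "tail call" returned, not made
    }

    public static Object oddStep(long n) {
        if (n == 0) return Boolean.FALSE;
        return (Step) () -> evenStep(n - 1);
    }

    public static Object trampoline(Object bounce) {
        while (bounce instanceof Step) {       // keep bouncing until a value
            bounce = ((Step) bounce).run();
        }
        return bounce;
    }

    public static void main(String[] args) {
        // mutual recursion 1,000,000 deep without StackOverflowError
        System.out.println(trampoline(evenStep(1_000_000)));  // true
    }
}
```

Note the cost: every bounce is a heap-allocated closure and a dynamically dispatched `run()` call - exactly the performance penalty mentioned above.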

3. Cheney on the M.T.A. This technique by Henry Baker converts the program to continuation-passing style (CPS), thereby turning all calls into tail calls. Implementing this technique on the JVM implies simulating non-local jumps with chains of RETURNs or exceptions and doing a CPS conversion of the entire program, which proves too expensive to be practical.

In summary, implementing tail calls is expensive, and Sun has no plans to support them in the JVM yet. Some JVMs, like IBM's, support optimization of recursive tail calls. The CLR has an explicit tail call instruction which instructs the runtime to discard the caller's stack frame prior to making the call. However, as Meijer and Miller observe, if the call is from untrusted code to trusted code, the frame cannot be fully discarded for security reasons. Regarding the support of tail calls in the CLR, here is what Gilad Bracha has to say
As far as tail recursion support in .Net, I understand that using it is rather expensive, because it forces a stack crawl to compute the effective protection domain (which otherwise could be computed lazily). So it has much less real value than one might expect.

Continuations

Yet another piece of devilish elegance popularized by the functional paradigm: continuation-passing style (CPS). Modern-day Java programmers who have been inducted into programming with stack-based function calls may find this all Greek! Don't worry - just think of CPS as another semantics for function calls, where instead of returning a value to the caller, the parent function tells the child function which function to pass the result to. No function call ever returns. The child function does this with an object called a continuation, which it gets from the parent. A continuation is an object that takes a snapshot of the current function's lexical environment and control stack - when invoked, the complete state of the calling chain is restored. Once again, Parrot has this cool feature, and Dan has the commentary on this subject. Continuation-based programming has got a real boost with the increasing popularity of continuation servers, a radically different paradigm of Web development. Bruce Tate discusses this new paradigm in detail in the Crossing Borders article.
By offering a stateful programming model without giving up the scalability inherent in statelessness, continuation servers make Web application development much easier.
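The CPS mechanics described above can be sketched in plain Java - a toy example, with `LongConsumer` standing in for the continuation object:

```java
import java.util.function.LongConsumer;

// A sketch of continuation-passing style in Java: instead of returning a
// value, each function takes an extra argument -- the continuation -- and
// passes its result to it. No call ever returns a result to its caller.
public class Cps {
    public static void add(long a, long b, LongConsumer k) { k.accept(a + b); }
    public static void mul(long a, long b, LongConsumer k) { k.accept(a * b); }

    public static void main(String[] args) {
        // direct style: result = (2 + 3) * 4
        // CPS: compute 2 + 3, hand the sum to a continuation that multiplies
        // by 4 and hands the product to the final continuation (println).
        add(2, 3, sum -> mul(sum, 4, product -> System.out.println(product)));
        // prints 20
    }
}
```

Because control only ever moves forward into continuations, the stack discipline the JVM enforces no longer matches the language's semantics - which is exactly why stack management is the hard part.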

As a result, people porting their dynamic languages would love to see continuations implemented in the JVM. Once again, the real challenge is stack management. Since dynamic languages like Scheme and Ruby have full support for continuations, implementers apprehend that running these languages on the JVM through invokedynamic may not yield the best performance if they have to do their own stack management to support continuations. In the Kawa implementation for compiling dynamic languages to the JVM, a restricted version of Scheme's call-with-current-continuation has been implemented using Java exceptions - it can be used to prematurely exit (throw), but not to implement co-routines (which should use threads anyway).

The .NET CLR also does not have explicit support for continuations. Don Box tries his hand at it in this blog post. But whatever he does is not CLR support - it is a compiler hack exploiting the iterator feature of C#, as has been rightly pointed out in the comments section of Cedric's response to Don Box.
C# doesn't support continuations, instead it has the iterator feature, which allows you to use the yield() keyword. This isn't a special VM feature, but instead it's a compiler hack; the method that holds a yield is turned into an object, and all local vars are moved into fields, so that the state of the method is actually recorded in the object.

These things are not continuations, because the yield() only returns to the caller, whereas with continuations you could hand off control to some other functions (you can only call subroutines, which is different). This means, you don't get the full power of continuations but instead a specialized language feature that... well, allows you to write iterators.
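In Java terms, the compiler transformation the commenter describes amounts to something like the following hand-written sketch: the "method" with a yield becomes an object, its local variable becomes a field, and each call to next() resumes from the recorded state:

```java
import java.util.Iterator;

// What C#'s yield desugars to, written out by hand in Java: the method's
// local state (here, the loop counter) becomes a field of an object, and
// each next() call resumes the "method" from where it left off.
public class CountdownIterator implements Iterator<Integer> {
    private int i;                         // the hoisted local variable

    public CountdownIterator(int start) { this.i = start; }

    public boolean hasNext() { return i > 0; }

    public Integer next() { return i--; }  // "yield return i", then suspend

    public static void main(String[] args) {
        Iterator<Integer> it = new CountdownIterator(3);
        while (it.hasNext()) System.out.print(it.next() + " "); // 3 2 1
    }
}
```

As the comment says, this object can only hand control back to whoever called next() - it cannot transfer control to an arbitrary continuation, which is the power it lacks.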

I think the biggest challenge of implementing continuation support in the JVM is to follow the principle of "pay for it only if you need it", since not many languages actually need them. Gilad Bracha also calls support for continuations in the JVM a "tall order" - till then we will have to live with the broken, hand-crafted implementation strategies and their serious performance overheads.

## Monday, March 27, 2006

### Potpourri .. Math etc.

Gathering for Gardner

The seventh Gathering for Gardner (G4G7) was held recently in Atlanta, where a potent jumble of mathematicians, computer scientists, artists, writers, engineers, magicians, inventors and puzzlists gathered to honor Martin Gardner, whose writings on recreational mathematics and magic over many years have exerted such a profound influence on their lives. Catch up with the details in Science News Online here ..

The Limits of Reason

More than a century ago, David Hilbert declared that all of mathematics could be derived from a finite set of axioms by following the rules of logic. Later, Kurt Gödel refuted this, showing that mathematics contains true statements that cannot be proved from the axioms. In the March 2006 issue of Scientific American, mathematician Gregory J. Chaitin of the IBM Thomas J. Watson Research Center has put forth a delightful exposition of his theory on this subject. Chaitin relates the subject to Turing's halting problem, for which no general solution exists. If we consider the ensemble of all possible computer programs, what is the probability that a program chosen at random halts? This probability, which he calls Omega, is the probability that the machine will eventually come to a halt when supplied with a stream of random bits as its program. He goes on to show that Omega cannot be computed, since knowing Omega would let us solve the halting problem, which is known to be unsolvable. Ivars Peterson's MathTrek also covers this article by Chaitin in detail.
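For the record, Chaitin's halting probability for a prefix-free universal machine $U$ is usually written as

```latex
\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}
```

where $|p|$ is the length of program $p$ in bits; the prefix-free condition ensures (by Kraft's inequality) that the sum converges to a real number between 0 and 1.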

## Sunday, March 26, 2006

### Scala: Everything is an Object!

The main force behind the design of Scala is the unification of the object-oriented and functional paradigms. Every function in Scala is a value, and every value is an object - hence every function in Scala is an object. Thus everything in Scala is an object.

The unification of OO and functional paradigms in Scala is managed by an advanced static type system -

• Scala supports algebraic data types, that can be decomposed for pattern matching as found in functional languages.

• Scala also offers the class-hierarchy-based static typing of object-oriented languages.

Thus Scala can express both closed and extensible data types, and also provides a convenient way to exploit run-time type information in cases where static typing is too restrictive.

Numbers are Objects

Java distinguishes numeric types from objects - Scala does not.
e.g. In the following expression

1 + 2 * 3 / x

Scala sees it as

(1).+(((2).*(3))./(x))

It treats every number as an object and every operator as a method invoked on that object. Operators and identifiers are not distinguished - an identifier can either be a sequence of letters and digits beginning with a letter, or a sequence of special characters such as "+", "*", or ":". A binary operation 1 + 2 is interpreted as 1.+(2), i.e. as an invocation of the method "+" on the integer value 1.

Similarly, the list construction (cons) operator "::" follows the same principle -

val nums = 1 :: 2 :: 3 :: 4 :: Nil // adapted from [Scala By Example]

is equivalent to

val nums = Nil.::(4).::(3).::(2).::(1)

since operators ending in ":" are right-associative and are treated as methods of their right-hand operand.

Functions are Objects

Since Scala supports functional programming, functions are treated as first-class objects. Functions can be passed as arguments to other functions (higher-order functions), and they can be used as values as well. Let us look at the anatomy of the sort function from [Scala By Example].

```scala
def sort(xs: List[int]): List[int] =
  if (xs.length <= 1) xs
  else {
    val pivot = xs(xs.length / 2);
    sort(xs.filter(x => x < pivot))
      ::: xs.filter(x => x == pivot)
      ::: sort(xs.filter(x => x > pivot))
  }
```

• The function is an expression - hence there is no explicit "return" statement. The function evaluates to a value which is a List of int.

• According to the usual quicksort algorithm, the sort works by partitioning the list into three sublists and then concatenating the sorted sublists by recursive calls around the pivot. The three sublists are concatenated using the list-concatenation operator ":::".

• The most interesting part happens within the calls to filter. The filter method of the List accepts a Predicate and returns a list of elements for which the predicate evaluates to true. It has the signature

    def filter(p: t => boolean): List[t]

Here, t => boolean is the type of functions that take an element of type t and return a boolean. Functions like filter that take another function as argument or return one as result are called higher-order functions.

Experienced Java programmers must now be wondering about the black magic going on behind the predicate implementation. Actually it is bread-and-butter stuff for functional programming - closures at work! The argument to filter is an anonymous function (a function defined without a name), which is created on the fly and has access to the enclosing lexical scope, through which it gets the variable pivot. The argument to the first filter call, x => x < pivot, represents the function that maps its parameter x to the boolean value x < pivot - true if x is less than the pivot, false otherwise. Also, we have an implicit iteration going on within filter, where each value fetched is bound to x. The Scala compiler is smart enough to infer the type of x from the context - the user does not have to provide it explicitly.

Try implementing the same in Java or C++ - you won't be disappointed that you have come this far in this entry!
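For the curious, here is roughly what that one-liner costs in pre-closures Java - a sketch using a hand-rolled predicate interface (the names are mine, not from any library): an anonymous class to capture the lexical variable pivot (which must be final), and the iteration written out by hand inside filter:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Roughly what Scala's xs.filter(x => x < pivot) costs in pre-closures Java:
// a hand-rolled predicate interface, an anonymous class to capture the
// lexical variable pivot, and the iteration spelled out by hand.
public class FilterDemo {
    public interface IntPredicate { boolean apply(int x); }

    public static List<Integer> filter(List<Integer> xs, IntPredicate p) {
        List<Integer> result = new ArrayList<Integer>();
        for (int x : xs) {
            if (p.apply(x)) result.add(x);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(7, 2, 5, 4, 9);
        final int pivot = xs.get(xs.size() / 2);   // pivot == 5
        // the anonymous class closes over the (final) local variable pivot
        List<Integer> smaller = filter(xs, new IntPredicate() {
            public boolean apply(int x) { return x < pivot; }
        });
        System.out.println(smaller);   // [2, 4]
    }
}
```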

Closures in Scala Stand Out

Closures are offered by all functional/hybrid languages, but Scala's implementation offers one of the best approaches to lazy evaluation of expressions.
Let us look at an example from the Scala world -

```scala
object TargetTest1 extends Application {
  def whileLoop(cond: => Boolean)(body: => Unit): Unit =
    if (cond) {
      body
      whileLoop(cond)(body)
    }
  var i = 10
  whileLoop (i > 0) {
    Console.println(i)
    i = i - 1
  }
}
```

In the above example, when the method whileLoop is applied, the actual arguments are NOT evaluated immediately. Instead they are encapsulated within nullary functions and passed as the corresponding actual parameters - the classic call-by-name evaluation. Scala's automatic closures allow building new control structures avoiding the need for the caller to explicitly state that an expression must be evaluated lazily.
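To appreciate what the automatic closures buy you, here is a sketch of the same construct in Java, where the caller must wrap the condition and body in explicit thunks by hand - exactly the boilerplate Scala's call-by-name parameters remove:

```java
import java.util.function.BooleanSupplier;

// Emulating Scala's call-by-name whileLoop in Java: the condition and body
// are passed as explicit thunks, re-evaluated on each pass.
public class WhileLoop {
    public static void whileLoop(BooleanSupplier cond, Runnable body) {
        if (cond.getAsBoolean()) {   // cond is re-evaluated, not a cached value
            body.run();
            whileLoop(cond, body);   // mirrors the recursive Scala definition
        }
    }

    public static int i = 3;

    public static void main(String[] args) {
        // the caller must write the lambdas explicitly -- in Scala the
        // compiler wraps the arguments in nullary functions for you
        whileLoop(() -> i > 0, () -> {
            System.out.println(i);
            i = i - 1;
        });
    }
}
```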

Let me end this post with a comment from Gilad Bracha on the strengths of Scala ..
Scala is one of the nicest and most powerful languages around, and runs on a JVM as well. It has a lot of really nice features - purely object oriented, closures, mixins, a powerful type system, a powerful notion of modularity, and it interoperates well with the Java platform.

The next post on Scala will talk about the Class abstraction and two of its composition mechanisms - Mixins and Traits.

## Friday, March 24, 2006

### Scale up with Scala

Of all the debates raging in the programming-languages space of the web, much blood has been shed over the static versus dynamic typing issue. Statically typed languages running on a virtual machine are busy incorporating dynamic typing in various forms - Java has made JSR 292 official (I blogged on this before), while C# has been aggressive in supporting dynamic language features on the CLR. Besides, dynamically typed languages like Ruby, Groovy and Python have been seeking their destiny on all available virtual machines. Amongst all this brouhaha, one new language has been able to carve out a niche in the programming-language fraternity through its solid design and elegance of style. Of course I am talking about Scala, the new kid on the block, a masterpiece from Martin Odersky, which can run on the JVM and can also cross-compile into a .NET assembly.

In today's world of complex enterprise applications, objects alone are not able to deliver all the goods; multiparadigm design forces compel one to think in terms of artifacts which provide higher degrees of abstraction over the problem domain. As an example, consider the world of Web services with its rich XML manipulation requirements, which makes us yearn for more powerful tree transformers not offered by current OO languages. Think functional - this is what the document on the Scala rationale hints at! Enter Scala - a language designed with the mission of unifying object-oriented and functional elements. Martin Odersky sums it up beautifully in the document [Programming in Scala]:
Scala is both an object-oriented and functional language. It is a pure object-oriented language in the sense that every value is an object. Types and behavior of objects are described by classes. Classes can be composed using mixin composition. Scala is designed to work seamlessly with mainstream object-oriented languages, in particular Java and C#.
Scala is also a functional language in the sense that every function is a value. Nesting of function definitions and higher-order functions are naturally supported. Scala also supports a general notion of pattern matching which can model the algebraic types used in many functional languages. Furthermore, this notion of pattern matching naturally extends to the processing of XML data.

I plan to blog a lot on Scala, focusing on all niceties, which have the potential to position it as the language of the future. Apart from the fact that Scala supports OO as well as functional paradigm, one other sublime aspect of Scala is its extensibility - Scala is a programmable programming language.

No introduction to a programming language is complete without the HelloWorld program. I would like to end today's post precisely with that. The following is the Scala way of saying Hello, World!

```scala
object HelloWorld {
  def main(args: Array[String]): unit = {
    System.out.println("Hello, world!");
  }
}
```
Trivial as it may seem, the above snippet brings out some very interesting thoughts behind the design of Scala. Let us compare it with the corresponding version in Java.

```java
class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}
```
In the Java version we use class-based encapsulation, while Scala introduces an object declaration to hold the main method. This declaration introduces a singleton instance of a class named HelloWorld, which is created on demand, the first time it is used. The HelloWorld program makes the Singleton design pattern "invisible" in Scala - that's really a hell of a HelloWorld!

When we can construct singletons invisibly, why should we need statics? That's the second point of note:
static members (methods or fields) do not exist in Scala. Rather than defining static members, the Scala programmer declares these members in singleton objects.
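What the object declaration gives you for free can be written out in Java - a sketch using the initialization-on-demand holder idiom (class name `Greeter` is mine) for the same lazily created singleton:

```java
// What Scala's "object HelloWorld" gives you for free, written out in Java:
// a lazily created singleton (via the initialization-on-demand holder idiom)
// accessed through a single shared instance instead of static members.
public class Greeter {
    private Greeter() {}                       // no outside instantiation

    private static class Holder {              // loaded on first access only
        static final Greeter INSTANCE = new Greeter();
    }

    public static Greeter instance() { return Holder.INSTANCE; }

    public String greet() { return "Hello, world!"; }

    public static void main(String[] args) {
        System.out.println(Greeter.instance().greet());
    }
}
```

In Scala the compiler generates the equivalent of all this plumbing from the single keyword object.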

Coming Up! Everything is an Object in Scala!

## Thursday, March 23, 2006

### Dependency Inversion using Terracotta

I am not an expert in enterprise application caching in the Java EE space. However, it does not take an expert to appreciate the non-invasive clustering and caching solution offered by Terracotta's Virtualization Server. Enjoy!

Traditional caching solutions are API-based, run as services within the JVM, and are implemented as a clustered hashmap that has to be replicated across all participating JVMs. Developers use get methods to fetch data from the cache and put methods to update it. Terracotta does it the other way round - its Virtualization Server inverts the relationship between the application server and the cache. The cache is no longer inside the server; rather, the server is now running inside a DSO-enabled JVM. Instead of developers accessing the cache state explicitly, Terracotta manages the identity of all cached objects as Java references in its Virtualization Server. When the client invokes a "set" on a POJO, Terracotta DSO intercepts the call using bytecode instrumentation and automatically distributes the change to the other replicas in the cluster. Neat stuff!

What value does this Terracotta caching architecture bring to the world of Java EE applications?

Apart from all the architectural benefits that the API-less paradigm presents, I think the killer proposition is the offer of "drop-in clustering" without "any" application code change. At a time when enterprise-level Java EE applications are scaling out to hundreds of CPUs, Terracotta's combination of the Virtualization Server and DSO-enabled JVMs provides the right value. Complex enterprise applications built incrementally over layered frameworks can now run within a scalable multi-level clustering infrastructure with the capability to share object state across JVMs.

Is the Terracotta architecture scalable in the mass market?

We have interesting observations from some of the lead architects. Billy Newport, of the WebSphere group at IBM, feels that the success and scalability of Terracotta's API-less caching architecture depend a lot on the developer's ability to churn out correct multi-threaded code, which he feels is a really tough ask. He expresses his concern that
It's hard to write multi-threading code that scales vertically. ... Why do we think customers can now write perfect multi-threading code that can be transparently distributed using a non invasive approach?

Cameron Purdy of Tangosol (which also offers a clustered caching solution, Coherence) is vehemently against Terracotta's bytecode-instrumentation-based approach, which he feels "will simply re-invent the same fundamental mistakes that JBoss AOP Cache and Gemstone/J already discovered". Working with simple POJO references that preserve object identity is viewed by him as an attempt to build a fully transparent OODBMS in Java, which will fail in the context of the current JVM design. Follow him further in the comments section of this page.

## Wednesday, March 22, 2006

### Mustang Promises a Better SQLException

Java 6, codenamed Mustang, has some real niceties in its SQLException implementation. Some of these were real pain points when writing JDBC applications - it is a welcome move to have them addressed in the upcoming JDBC 4.0. Lance Andersen, the spec lead for JDBC 4.0, has all the details in his blog, along with other enhancements.

Some of the most useful enhancements in SQLException are:

Real Chaining Support

SQLException now supports additional constructors that take the "cause" as an argument. This causal relationship can now be used extensively to program chained exceptions. Here is a snippet adapted from Lance.

```java
catch (SQLException ex) {
    while (ex != null) {
        Throwable t = ex.getCause();
        while (t != null) {
            System.out.println("Cause:" + t);
            t = t.getCause();
        }
        ex = ex.getNextException();
    }
}
```

Categorization

Two distinct categories of SQLException will be supported - SQLTransientException and SQLNonTransientException. An SQLNonTransientException indicates a failure that a retry cannot fix unless the underlying cause is explicitly rectified (I would have loved to see it derived from RuntimeException, though). The subclasses of SQLNonTransientException are

• SQLFeatureNotSupportedException

• SQLNonTransientConnectionException

• SQLDataException

• SQLIntegrityConstraintViolationException

• SQLInvalidAuthorizationSpecException

• SQLSyntaxErrorException

An SQLTransientException, on the other hand, indicates a failure for which a retry of the same operation might succeed, with the following subclasses

• SQLTransientConnectionException

• SQLTransactionRollbackException

• SQLTimeoutException
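The categorization makes retry policy expressible in code. Here is a sketch of the idea - the SqlAction callback, withRetry helper and attempt count are hypothetical stand-ins, not part of JDBC:

```java
import java.sql.SQLException;
import java.sql.SQLTransientException;

// A sketch of what the new categories buy you: retry only when the failure
// is transient, give up immediately when it is not.
public class RetryDemo {
    public interface SqlAction { void run() throws SQLException; }

    public static void withRetry(SqlAction action, int maxAttempts) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                action.run();
                return;
            } catch (SQLTransientException e) {
                if (attempt == maxAttempts) throw e;  // retries exhausted
                // transient: the same operation may succeed if retried
            }
            // an SQLNonTransientException propagates immediately: retrying
            // cannot help until the underlying cause is corrected
        }
    }
}
```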

For more details on the advances in JDBC 4.0 features refer to this artima article.

### OO Patterns and Functional Paradigm

Regarding my earlier post on design patterns and programming languages, someone in an LtU thread made the following observation
My problem with this paper is that they are using functional languages to implement OO patterns. What's the point? Document the patterns that programmers use when doing functional programming. That's more interesting.

The main point that I was trying to drive home in that post is that the implementation of any design pattern needs to consider the best practices and idioms of the language you are using. A Command pattern implemented in C++ will not be the same as one implemented in Ruby. Though the end result in both cases is to provide an abstraction over the commands executed, the language artifacts used will differ. In the case of C++ or Java, which do not provide higher-order functions, closures etc., the implementation will be a class hierarchy manifesting as "command objects". In a Ruby implementation, it should be all closures.
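To make the contrast concrete, here is a toy sketch in (modern) Java showing the same Command abstraction both ways - as a command object from a class hierarchy, and as a captured piece of behavior, with a lambda standing in for a closure:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// The same Command abstraction two ways: a classical command-object class
// hierarchy, and a closure-style command captured inline as a lambda.
public class Commands {
    public static final StringBuilder log = new StringBuilder();

    interface Command { void execute(); }

    // classical OO style: one class per command
    static class OpenCommand implements Command {
        public void execute() { log.append("open;"); }
    }

    public static void main(String[] args) {
        Deque<Command> queue = new ArrayDeque<>();
        queue.add(new OpenCommand());               // object-based command
        queue.add(() -> log.append("save;"));       // closure-style command
        while (!queue.isEmpty()) queue.poll().execute();
        System.out.println(log);                    // open;save;
    }
}
```

Both entries in the queue satisfy the same abstraction; only the language artifact used to express each command differs.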

The above is all the more important because modern-day languages give you options - in C# you can choose the classical OO model of polymorphic class hierarchies, or you can use the anonymous methods and delegates offered by 2.0, or the lambdas coming in 3.0. We need not implement OO patterns using OO artifacts ONLY - in some cases functional paradigms may well be a better fit!

## Thursday, March 16, 2006

### All in One VM - It's as Good as It Gets

It is possibly one of those times when we are witnessing a plethora of programming languages, each trying to carve out its own niche in the development space with its own set of cool features. The experienced heads tend to think that most of these are really good old ideas from the past, reappearing because the technology is now available to exploit them commercially; others become popular because they provide a better approach to solving a particular problem.

Possibly the most blogged-about thread raging through the community is that of dynamic languages and their support on the JVM/CLR. Suddenly the blokes who used to think of static types as the coolest invention since sliced bread have been voting for dynamic typing, and ruminate on how dynamic languages make programming much easier and ensure better productivity for programmers armed with automated unit testing tools. Robert Martin and Bruce Eckel have expressed this sentiment in switching over from C++/Java to Python; Martin Fowler shares a similar experience with Ruby.

It's 3 AM and I have just finished reading two papers on this subject, both heavily discussed on LtU. In Static Typing Where Possible, Dynamic Typing When Needed, Erik Meijer and Peter Drayton of Microsoft try to get the best of both worlds. Fanatics of static typing have been professing the idea that well-typed programs are safe by construction. This, the authors claim, is vacuous - static typing, they say, is a compile-time abstraction of the runtime behavior of your program, and hence it is necessarily only partially sound and incomplete. This means that programs can still go wrong because of properties that are not tracked by the type checker, and that there are programs that cannot go wrong yet cannot be type-checked. Dynamic languages, OTOH, may be an automatic choice for modeling truly dynamic behavior through features like method interception, dynamic loading, mobile code, runtime reflection, etc. In the other paper, On the Revival of Dynamic Languages, Nierstrasz et al. look at static languages as an obstacle to the effective realization of real applications with essentially dynamic requirements. They suggest languages which can support change at runtime through features like pluggable types (Gilad Bracha is also one of the main proponents of this), reflection on demand and first-class namespaces.

The moot point of all this is that dynamic languages are back, with all their usual artifacts and features, and we need to strive for their peaceful integration alongside the mainstream statically typed languages. As far as I understand, the only medium of integration is the virtual machine (yes, by VM I mean both the JVM and the CLR) - both Sun and Microsoft should be more forthcoming with their plans to make their VMs a universal medium of support. As Ted Neward yearns
My long-shot hope, rather than prediction, for 2006: Sun comes to realize that the Java platform isn't about the language, but the platform, and begin to give serious credence and hope behind a multi-linguistic JVM ecosystem.

Experts have judged both Sun's JVM and Microsoft's CLR to be unfriendly towards dynamic typing. Quite some time ago, I came across the paper Supporting dynamic languages on the Java virtual machine by Olin Shivers of MIT. The paper points out two areas in which JVMs need to improve in order to support dynamic languages: more efficient representation of dynamic data, and efficient encoding of language-specific computational elements. Regarding the first issue, the VM needs to optimize small scalar data types so that they can be used without a dynamic lookup - a feature which has since been implemented as value types in the .NET CLR. The second issue is that the JVM cannot represent all the computations which dynamic languages support - for this he suggests moving the JVM from its current CISC model to a RISC model, enabling the VM to have an extensible instruction set. But in doing so, he warns, we may lose out on safety - allowing microcode extensions written in C and delivered as raw machine code to be dynamically loaded into the VM requires us to decide why we should trust the microcode, and how we are going to verify programs that use these instructions.

The real light of hope on this front comes from the blogs of Gilad Bracha, where he discusses Sun's initiatives to support dynamically typed languages in the JVM. They have been working towards supporting languages like Perl, Python/Jython, Groovy and Ruby through dynamically typed method invocation at the bytecode level. Look out for the new bytecode "invokedynamic", at a JSR near you. Regarding how the new bytecode will work, he has the following observation
Basically, it will be a lot like invokevirtual (if you don’t know what that is, either open a JVM spec and find out, or stop reading). The big difference is that the verifier won’t insist that the type of the target of the method invocation (the receiver, in Smalltalk speak) be known to support the method being invoked, or that the types of the arguments be known to match the signature of that method. Instead, these checks will be done dynamically.
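What this means in practice is easiest to see from a dynamically typed language itself. In Ruby, dispatch is by name alone, and a failed lookup raises NoMethodError at the call site - the same runtime check and trap mechanism that invokedynamic will have to provide on the JVM. A minimal sketch (the Duck and Robot classes are invented for illustration):

```ruby
class Duck
  def quack; "quack"; end
end

class Robot
  def beep; "beep"; end
end

# Dispatch is by name alone -- no receiver type is checked until the call.
def make_noise(thing)
  thing.quack
end

puts make_noise(Duck.new)    # the lookup succeeds at runtime

begin
  make_noise(Robot.new)      # the lookup fails -- Ruby's "trap" fires
rescue NoMethodError => e
  puts "trapped: #{e.name}"
end
```

No verifier ever insists that `thing` supports `quack` - the check happens at invocation time, which is exactly the behaviour the new bytecode is meant to make cheap on the JVM.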

Microsoft, too, has lots to offer for supporting dynamically typed languages on the CLR. Closures have been implemented using Anonymous Methods and Delegates; Lightweight Code Generation (LCG), along with the DynamicMethod class, enables easy authoring of languages; and late bound calls under the hood of reflection are definite steps forward towards enabling a multi-lingual virtual machine ecosystem.

Hopefully the day is not far off when I should be able to replace my Strategy with the Blocks of Ruby, while the context enjoys the type-safety of Java. Until then, enjoy programming in Scala, the latest kid on the block which runs on the JVM .. I plan to blog a lot on the cool aspects of Scala, but that's for another day - for the time being, I am losing out on my caffeine !!

## Monday, March 13, 2006

### Quadratic Forms, Ramanujan and Bhargava - The Legacy Goes On ..

The current issue of Science News carries an article on the advancements in the study of universal quadratic forms led by Professor Manjul Bhargava of Princeton University. Professor Bhargava described these landmark results while delivering the Third Ramanujan Commemoration Lecture at SASTRA University, Kumbakonam, on December 22. Bhargava's work is being looked upon as a work of ingenuity and a significant step towards the complete resolution of Ramanujan's problem of universal quadratic forms. The article, along with all references, is a fabulous read !!

Postscript:

The publication carries a biographical sidenote on Professor Bhargava, himself an avid musician and an exponent of the art of Tabla. The note ends with:

Both number theory and tabla playing may be viewed as the study of patterns, Bhargava says. "The goal of every number theorist and every tabla player," he explains, "is to combine these patterns, carefully and creatively, so that they flow as a sequence of ideas, tell a story, and form a complete and beautiful piece."

Compare this to what Brian Marick says in his "Little Ruby" book (refer to my earlier post on Software Abstraction) - "In computation, simple rules combine to allow complex possibilities."

I guess it is this basic idea of composition - the ability to combine small integral units to form the big piece - that makes a beautiful design, be it the forms of numbers, the bridges and buildings of Alexander, or an elegant piece of software.

### Patterns and Programming Languages - Decoupled ?

Almost all literature on software design talks about design patterns as artifacts independent of any particular programming language. Purists hate to associate any pattern with languages and scaffold the essence with terms like "microarchitectures", "design best practices", "the three part rule consisting of context, problem and solution" etc. Heck, the truth is that you (being an implementer) cannot but think of language abstractions whenever you think of a pattern. Someone summarized it so beautifully :

The dirty unacknowledged secret of design patterns is that they’re strongly coupled to a language. For example, switching from a statically typed language like Java to a dynamic one with forwarding substantially changes the approach to Factory, Proxy, Singleton, Observer and others. In fact, they’re often rendered trivial.

For me, after being indoctrinated in programming with classes in C++ and Java, when I start modeling the solution of a problem in Ruby, I have to step back and ruminate on the changed level of implementation for almost every pattern. After all, I don't want to make my Ruby code Java-ish by defining a base class for the Strategy Design Pattern. Ruby has made Strategy invisible within the implementation of Blocks and Closures - as an implementor you need to think of these abstractions when you smell a Strategy in your model.

Can we ever decouple patterns from the language of implementation ? In my last post on software abstraction, I had discussed the implementation of Ruby Iterators as control abstractions offered by the language. For clarity, let us look at the implementation once again:

def producer
    100.times{|i| yield i if i%5 == 0}
end

def consumer
    sum = 0
    producer {|v| sum = sum + v}
    print sum
end
The above snippet is an elegant implementation of the producer-consumer model, which combines the Iterator pattern (through the internal iterator times) and the Command pattern (through the Closure). This composition is only possible in languages which offer the invisibility of these two design patterns through language features.

In typical object-oriented languages like C++/Java, we think about classes first (though the name is OO) - all our abstractions are based on designing effective class hierarchies and delegation. In dynamic languages, it's always objects-first - types or classes are themselves objects at runtime (not only at compile time). Types / classes can be manipulated at runtime and methods added to supplement behaviour, thereby obviating the need for many patterns like Abstract Factory, Factory Method, Flyweight, State, Proxy etc. Types serve as Factories !!
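Since classes are themselves objects, a Ruby class can be passed around and asked to create instances directly, with no Factory hierarchy in sight. A small illustrative sketch (the shape classes are made up):

```ruby
class Circle
  def draw; "circle"; end
end

class Square
  def draw; "square"; end
end

# The class object itself is the factory -- pass it around like any value
# and ask it for a new instance when one is needed.
def render(shape_class)
  shape_class.new.draw
end

puts render(Circle)
puts render(Square)
```

The client code never names a ShapeFactory - the runtime manipulability of the class object does all the work that a creational pattern would do in C++ or Java.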

Dynamic languages support higher order functions - functions as first class objects along with closures (behaviors along with attached lexical scope) and continuations (a lexical scope along with a control chain). If Ruby is my implementation language, I will implement Strategy, Command, Template Method and Visitor using higher order functions. If I use C#, then I will go for Delegates and Anonymous Methods.
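To make the point above concrete, a Template Method in Ruby can be nothing more than a method that yields to a block at its hotspot - the common flow lives in one place, and each "subclass" is just a block. A minimal sketch (report and its names are invented for illustration):

```ruby
# The common operation flow lives in one method; the variable step
# (the "hotspot" of Template Method) comes in as a block.
def report(items)
  header = "REPORT"
  body = items.map {|i| yield i }.join(", ")
  "#{header}: #{body}"
end

# Two variations of the template, expressed as blocks rather than subclasses
puts report([1, 2, 3]) {|i| i * 10 }    # REPORT: 10, 20, 30
puts report(%w[a b]) {|s| s.upcase }    # REPORT: A, B
```

No abstract base class, no overridden hook methods - the higher order function carries the whole pattern.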

The following example of Strategy in Ruby is from RubyGarden and uses Closures :

class Context
    attr_accessor :strategy
    def do(*args)
        strategy.call(*args)
    end
end

ctx = Context.new
ctx.strategy = proc { do stuff }
ctx.do(some, args)

Over the last few years I have been asking myself - is it possible to think of design patterns without an implementation language ? The GOF pattern template has a section named "Implementation", which discusses all implementation related issues of the pattern (including the language of implementation). Next time when you switch the implementation of your Iterator Pattern from C++ to a coroutine-based one in Ruby, check out if you need to make any changes to the "Participants" and "Collaboration" sections. Drop me a line if you don't need to ..

## Friday, March 10, 2006

### Abstraction in Software - Its not all OO (Part 3)

Modern day programmers tend to believe that software abstractions are synonymous with the OO paradigm - unless you are programming in a statically typed OO language, you miss out on the abstraction. With all the dynamic languages gaining popularity these days, functional programming paradigms of Closures and Continuations being discussed in various blogs and mailing lists and Martin Fowler bliki-ing on DuckInterfaces and CollectionClosureMethod, things look set for the next revolution in the arena of programming languages.

Coming from the background of hard core imperative programming paradigms of C, C++ and Java, I have also been trying to catch up with this new wave of Ruby and Python. It was during this time that I got the pointer to "A Little Ruby, A Lot of Objects" from Roshan's blog (Thanks Roshan, for the enlightening experience !!). The "Little Ruby" book was like a breath of fresh air - the elements of programming style it embraces to build up a Model of Computation around passing messages to objects point to the hard reality that we need to change the way we teach software design and abstraction in schools. All the hallmarks of good abstraction design that I mentioned in my post Abstraction in Software - The Good, Bad and Ugly (Part 1) come out naturally once you are able to formulate your model of computation through ideas like "Classes are objects with a protocol to create other objects".

What data abstraction is to OO, control abstraction is to functional programming. Constructs like first class functions, higher order functions, closures & functional composition foster the elegance of designing beautiful abstractions in a completely non OO world. The programming language horizon is ready to embrace these sublime ideas, as evident from the voices of Paul Graham and Martin Fowler and from the enthusiasm generated in the community by languages like Ruby and Python. Paul Graham has been writing about the reincarnation of Lisp in the many features that have found their way into Python - he mentions "The question is not whether Lisp..but rather 'when'".

Ruby Iterators: Catch me if you can !!

Modeling an iterator in an OO language like C++ or Java has been considered ad hoc by many purists and has been attributed to the lack of suitable control structures in the language. The state of the iterator needs to be stored somewhere - it cannot be the collection class itself, since that would prevent defining multiple concurrent iterators on the same collection. C++ provides a friend iterator class along with the collection class itself to solve this problem. Java also provides external helper objects based on its Iterator interface to store the iterator state. Languages like Ruby, which support higher order functions and can define local functions closed over their free lexical variables, offer much stronger and more natural iteration capabilities for abstract collections, without the need to define an iterator class for each such collection class.

Let us look at an example of Ruby iterator (adopted from Programming Ruby) :

class SongList
    def with_title(title)
        @songs.find {|song| title == song.name }
    end
end
In the above snippet, the method find is an iterator, which nicely encapsulates the block of code to be repeatedly executed over each element of the collection @songs. The client does not have to invoke an explicit loop structure (as in C++ or Java) - the implementation of the iteration semantics is completely hidden within the control structure. The collection object @songs responds to the message find and invokes the block of code (the Closure) for every element - this is what Brian Marick refers to as "In computation, simple rules combine to allow complex possibilities" in the "Little Ruby" book.

Let us look at another killer example of the beauty of control abstraction offered by Ruby iterators. This is shamelessly copied from Roshan.

def producer
    100.times{|i| yield i if i%5 == 0}
end

def consumer
    sum = 0
    producer {|v| sum = sum + v}
    print sum
end

The beauty of the above snippet is the separation of concerns between the producer and consumer through abstraction - this is only possible because the language offers the built-in iteration construct times. The interaction between the producer and the consumer is done through the neat implementation of the yield statement, which suspends producer on every iteration, hands the value i to the consumer's block, and resumes where it left off once the block returns.
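A stripped-down version of the same mechanism makes the control flow of yield visible - each yield hands a value to the caller's block and then resumes at the next statement:

```ruby
def three_times
  yield 1    # control jumps to the caller's block, then comes back here
  yield 2
  yield 3
end

collected = []
three_times {|n| collected << n }
p collected    # [1, 2, 3]
```

The method and the block take turns, which is exactly what lets producer and consumer stay so cleanly separated above.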

Before I end this blog entry, one piece of advice to all the new programmers (the victims of The Perils of Java Schools). The horizons of programming language support for software abstractions are changing - languages like Ruby, Python, Scala and Groovy are offering all the elegance that Lisp had delivered in the 1970s; next time you find yourself modeling a solution to a computing problem, a lambda or a closure may be a better fit than your complex Java / C++ data abstraction.

### Java Pitfalls - The Josh Way !

SDN has published a fascinating interview with Joshua Bloch on his latest book on Java Puzzlers. Seeing is not believing - that's the mantra of the book and as Josh mentions "Working through puzzles can 'inoculate' you against the traps and pitfalls of the Java platform."

The interview is a fascinating read with small anecdotes like how Doug Lea went back and changed the way he defined a constant in one of the libraries that he was developing after reading Puzzle 3.

Great stuff !!

### Infinity Tower II Tenth Floor

Yesterday we went to see our new office on the 10th floor of the Infinity building. This is where we have taken additional space along with our existing office on the 5th floor. The new space looks spanking new - kudos to the great effort of our admin and infrastructure support. Like our existing office, the new one is also an open office (no pun intended, we still use Microsoft Office on our desktops!) with sprawling workstations flanked by a number of discussion and conference rooms. The open office fosters better communication, as we have learnt from our experience with the current office structure, and also reinforces the lack of hierarchy. One side of the floor is all-glass, overlooking the entire span of the Salt Lake Electronic Complex - hope the new ambience encourages a more productive and creative setting.

## Wednesday, March 08, 2006

### Abstraction in Software - Commonality and Variability (Part 2)

In Part 1 of this series, I had stated the definition of abstraction as studied by Dick Gabriel in Patterns of Software : Tales from the Software Community
Abstraction in programming is the process of identifying common patterns that have systematic variations; an abstraction represents the common pattern and provides a means for specifying which variation to use.

A good abstraction needs to capture the common aspects across the family while specifying the variabilities of individual items. Jim Coplien refers to commonality as the essence of an abstraction, while he mentions variability as the spice of life. Parnas defines a family of programs based on the common properties of the set and the special properties of the individual members :
We consider a set of programs to constitute a family, whenever it is worthwhile to study programs from the set by first studying the common properties of the set and then determining the special properties of the individual family members

Whatever the implementation language, model of computation or programming paradigm, software abstraction is always based on this duality of commonality and variability. All design patterns which promise good abstractions support this duality.

• The Template Method pattern encapsulates the common operation flow with variable hotspots.

• The Strategy pattern supports common interface with varying implementation of the algorithm.

• The Bridge pattern allows varying abstraction and implementation hierarchies with a common interface.

In his book Multi-Paradigm Design for C++, Jim Coplien distinguishes between Positive Variability and Negative Variability in software abstractions. Positive variabilities add to the contracts exposed by the base class, without violating any of those specifications. OTOH negative variabilities contradict the assumptions that underlie the base contracts. However, he insists that not all negative variabilities are bad - in fact various programming languages offer specific constructs to express negative variabilities in a positive way. In C++, public inheritance with addition of contracts is an approach to address positive variability (the base class encapsulating the commonality of the abstraction), while features like template specialization may be used to implement negative variability. The following example, which illustrates the commonality and variability of an abstraction, is once again from Coplien :

template <class T>
class Stack {
    StackRep<T>* rep;
    ....
};

template <class T>
class StackRep {
    friend class Stack<T>;
    StackRep<T>* next;
    T* rep;
};

The above example presents a completely general implementation of a stack - it can handle polymorphic data types, can easily grow as large as needed, and does not require the stacked items to have a default constructor. This is the commonality of the abstraction which all instantiations of the template are bound to.
But this implementation adds a lot of overhead to the memory consumed by the objects it manages, particularly if the managed objects are small. Users wanting to implement a stack of integers may like to go for a more efficient solution, since they don't need the generality of polymorphism support.
Enter template specialization !!

template<> class Stack<int> {
    int* rep;
    ....
};

In the above example, Stack<int> violates the commonality of the structure which the base template provides - we treat this as a negative variability !!
This post illustrates the features of a good data abstraction - a unification of the common properties of a family alongside managing the variations. In the next entry of this series, we will look at some of the control abstractions and how they have evolved in some of the modern programming languages.

Stay tuned !!

## Tuesday, March 07, 2006

### The Crouching Tiger and the Changing Representation of the Computing Community

In the March issue of the CACM, Robert L Glass in his Practical Programmer column has expressed his concerns over the changing face of the computing community. With hard statistics he presents the gradual dominance of the Asian community in the computing horizon, including US computer science academic programs. He also laments the fact that there
has been a 60% decline in the number of U.S. incoming college freshmen considering CS as a major during the period from 2000 to 2004.

This he sees as a threat to the dominance of the US in the computing profession.
Coincidentally ACM has recently released a report on Globalization and Offshoring of Software, which has also been covered by NY Times. This report, while admitting the reality of a deep connection between globalization and offshoring has expressed an optimism that
(b)oth anecdotal evidence and economic theory indicate that offshoring between developed and developing countries can, as a whole, benefit both, but competition is intensifying.

I personally feel that the above facts are real pointers to the true global face of the computing community. Instead of looking at it as the threat of the crouching tiger and the end of the dominance of the US regime, why can't we perceive it as a phenomenon that will broaden the horizons of the computing fraternity ? It's really a global world, and you need to adapt yourself to the technologies and management issues that underlie the globalization of software in order to survive as the fittest.

### Abstraction in Software - The Good, Bad and Ugly (Part 1)

It is common knowledge amongst software practitioners that abstraction in software is the Holy Grail - the more you abstract your design, the more modular it becomes and the more it qualifies as a reusable artifact. However, in real life, going through popular codebases, it does not take a wink to discover myriad instances where unwanted abstractions have led to unmanageable code bloat. In a series of postings, I would like to think aloud my thoughts on abstractions in programming and design. I will bring in the concepts of how Design Patterns have influenced software abstraction and how some of the modern day programming languages have started incorporating these constructs as part of the core language. But, once again, all abstractions are not good - the devil is in the details.

In his book Patterns of Software : Tales from the Software Community, Richard Gabriel gives his definition of software abstraction ..
Abstraction in programming is the process of identifying common patterns that have systematic variations; an abstraction represents the common pattern and provides a means for specifying which variation to use.

The two phrases which appear twice in the above definition are common pattern and variation - undoubtedly these are the hotspots which define what an abstraction is. At one point in time (maybe even today), the OO community had appropriated the concept of abstraction with their own style of designing - separating the interface from the implementation was regarded as the only form of software abstraction around. I have seen lots of codebases where PublishedInterfaces were just mimicking the implementation classes. This is definitely not good interface design, since the interface has a strict coupling with the implementation. As Martin Fowler mentions in InterfaceImplementationPair, interfaces should be designed around your clients' needs.

Abstraction Granularity

The main problem with abstraction in software arises when we take them too far. Gabriel observes
This (taking abstraction too far) results in code that is structured like a big web, in which groups of abstractions are tightly interwoven with others by means of interfacial glue.

A big abstraction loses its reusability and has traces of implementation knowledge sneaked into it. The big bloat is not Habitable and does not support Piecemeal Growth. One of the main forces behind designing abstractions is to identify the appropriate granularity - one which will make them composable and allow piecemeal growth of small and shallow class hierarchies.

Abstraction Extensibility

One of the salient characteristics of good abstractions is extensibility. Bertrand Meyer professes the Open Closed Principle, where the abstractions should be open for extension but closed for invasive changes. Uncle Bob has a nice article on this. Many of the GOF design patterns support designing extensible abstractions.

• If you want to decouple an abstraction from the implementation and allow independent extensibility of both hierarchies, use the Bridge Design Pattern.

• If you want to decouple the algorithm from its user, go for the Strategy Pattern.

• If you want to encapsulate the state of an object and allow transparent access of the object to its clients, use the State Design Pattern.
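As a concrete illustration of the last bullet, in Ruby the State pattern might be sketched as below - a hedged example with invented class names, where the context delegates to its current state object and clients never see the swap:

```ruby
# Each state is an object that knows its own behaviour and its successor.
class Red
  def colour; "red"; end
  def next_state; Green.new; end
end

class Green
  def colour; "green"; end
  def next_state; Yellow.new; end
end

class Yellow
  def colour; "yellow"; end
  def next_state; Red.new; end
end

# The context exposes a stable interface; the state object behind it changes.
class TrafficLight
  def initialize; @state = Red.new; end
  def colour; @state.colour; end
  def advance; @state = @state.next_state; self; end
end

light = TrafficLight.new
p light.colour            # "red"
p light.advance.colour    # "green"
```

New states extend the abstraction without touching TrafficLight or its clients - extension without invasive change, which is the Open Closed Principle at work.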

Sometimes, though, extensibility may appear to be at odds with efficiency, as Martin D. Carroll and John F. Isner emphasize when discussing the design of the C++ Standard Components ..

Classes can only be made extensible in certain directions, where each of these directions is consciously chosen (and programmed in) by the designer of the class. Class libraries which claim to be “fully extensible” are making an extravagant claim which frequently does not hold up in practice. . . . There is absolutely no reason to sacrifice efficiency for an elusive kind of “extensibility.” (Carroll and Isner 1992) - source Gabriel [PatternsOfSoftware].

Abstraction Manageability

It's a nice idea to have an interface published to clients, while you go on switching your implementations merrily. But what happens if the abstraction is found to be wrong - repairing the abstraction implies repairing all its uses. This can often have huge consequences, since your published abstraction is now littered all over the codebase. The software development community is split on whether to make interfaces immutable. The school that considers interfaces immutable makes use of the Extension Object design pattern when designing interfaces. The other group has switched to abstract classes and provides default implementations for newly added methods. In an interview with Bill Venners, Erich Gamma has the following observation to make ..

Since changing interfaces breaks clients you should consider them as immutable once you've published them. As a consequence when adding a new method to an interface you have to do so in a separate interface. In Eclipse we take API stability seriously and for this reason you will find so called I*2 interfaces like IMarkerResolution2 or IWorkbenchPart2 in our APIs. These interfaces add methods to the base interfaces IMarkerResolution and IWorkbenchPart. Since the additions are done in separate extension interfaces you do not break the clients. However, there is now some burden on the caller in that they need to determine at runtime whether an object implements a particular extension interface.
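Interestingly, the runtime probe that Gamma describes has a direct analogue in dynamic languages. In Ruby, for instance, the caller would check respond_to? before invoking the newer method - a hedged sketch with invented class names, not actual Eclipse code:

```ruby
class MarkerResolution
  def label; "basic"; end
end

class MarkerResolution2 < MarkerResolution
  def description; "extended"; end   # the addition, kept out of the base contract
end

# The caller bears the burden of checking for the extension at runtime,
# just as with the I*2 extension interfaces in Eclipse.
def describe(res)
  res.respond_to?(:description) ? res.description : res.label
end

puts describe(MarkerResolution.new)
puts describe(MarkerResolution2.new)
```

The burden Gamma mentions is the same in both worlds: old clients keep working, but extension-aware callers must probe before they leap.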

In the next entry of this series I will discuss how good abstractions can be turned into beautiful programming artifacts through the model of computation of a programming language. Plug in some of the design patterns to model the commonality and variability and you have the three pillars of software nirvana - compression, habitability and piecemeal growth !!

## Friday, March 03, 2006

### Literate Program Wiki

A new wiki has come up on LiteratePrograms - just got the pointer from Lambda the Ultimate. This is one more way to popularize the novel concept of Donald Knuth.
LiteratePrograms is a unique wiki where every article is simultaneously a document and a piece of code that you can download, compile, and run by simply using the "download code" tab at the top of every article. See Insertion sort (C, simple) for a simple example.

## Thursday, March 02, 2006

### The Turing Test - Can we ever pass it ?

Passing the Turing Test seems to be the holy grail of AI researchers and is considered the ultimate test of machine intelligence. In this article, Mark Halpern argues that the basic principle of the test is flawed, because the premise which Turing asserts to judge a "thinking computer" does not apply to fellow humans as "thinking beings". And regarding the achievements of the AI community, he quotes Maurice V. Wilkes, the winner of the Turing Award :

Originally, the term AI was used exclusively in the sense of Turing’s dream that a computer might be programmed to behave like an intelligent human being. In recent years, however, AI has been used more as a label for programs which, if they had not emerged from the AI community, might have been seen as a natural fruit of work with such languages as COMIT and SNOBOL, and of the work of E.T. Irons on a pioneering syntax-directed compiler. I refer to expert systems.... Expert systems are indeed a valuable gift that the AI community has made to the world at large, but they have nothing to do with Turing’s dream.... Indeed, it is difficult to escape the conclusion that, in the 40 years that have elapsed since 1950, no tangible progress has been made towards realizing machine intelligence in the sense that Turing had envisaged.

### Nobel of Computing for Naur

The following is courtesy of ACM TechNews

Peter Naur (of Backus Naur fame) has been named the recipient of ACM's 2005 A.M. Turing Award for his groundbreaking work defining the Algol 60 programming language, which would become the model for numerous subsequent languages, including many that are indispensable to software engineering. Named for the British mathematician Alan M. Turing and often recognized as the Nobel Prize for computing, the award features a \$100,000 prize. Naur edited the "Report on the Algorithmic Language Algol 60," defining program syntax with what would come to be known as Backus-Naur Form. The award also recognizes Naur's work in compiler design, as well as the art and practice of programming. "Dr. Naur's Algol 60 embodied the notion of elegant simplicity for algorithmic expression," said Intel CTO Justin Rattner. "This award should encourage future language designers who are addressing today's biggest programming challenges, such as general-purpose, multi-threaded computation, to achieve that same level of elegance and simplicity that was the hallmark of Algol 60." The late Edsger Dijkstra, recipient of ACM's 1972 Turing Award, credited Naur's work with elevating automatic computing to a legitimate academic pursuit. Until Naur's report, languages had been informally defined by their support manuals and the compiler code. Naur's report gave precise and economical definitions to both syntax and semantics. After the publication of Algol 60, Naur co-authored the GIER Algol Compiler. Microsoft's James Gray, who chaired the 2005 Turing Committee and is the recipient of the 1998 Turing Award, hailed Naur's contribution as a "watershed" that introduced many of the programming conventions that are taken for granted today. Naur is also credited as a pioneer in establishing software design as a discipline. Naur will receive the award at the annual ACM Awards Banquet on May 20, 2006, in San Francisco.

Here is the link to all 2005 Award Recipients. And here is the press release.