Thursday, May 1, 2008

Maven, OSGi and JSR 277

When I read about the SpringSource Application Platform (S2AP) release a couple of days ago, a little light bulb went on in my mind. The discussion of the OSGi repository that SpringSource has put together got me thinking that there might be a way to make OSGi development easier for everyone by using Maven.

Wouldn't it be possible to create a simple Maven plugin which analyzed a given library (e.g. Lucene) using BND and used the output to generate an OSGi manifest? You could assemble the set of Import-Package statements that you need, and then whip up some sort of intelligent algorithm that scanned the existing set of capabilities in the repository and linked the appropriate artifacts into the POM's <dependency> list. It seems like this is something that could be run automatically against the existing public Maven central repository, which would make a very large number of OSGi packages available in a short period of time.
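To make this concrete, here's a rough sketch of what the core of such a plugin might look like. I'm using bnd's Analyzer API here - the package and class names are assumptions based on the bnd releases I've looked at, so check them against whatever version you actually use:

    import java.io.File;
    import java.io.FileOutputStream;
    import java.util.jar.Manifest;

    import aQute.bnd.osgi.Analyzer;
    import aQute.bnd.osgi.Jar;

    public class OsgiManifestGenerator {

        // Analyzes a plain JAR and writes a calculated OSGi manifest for it.
        public static void generate(File plainJar, File manifestOut) throws Exception {
            Analyzer analyzer = new Analyzer();
            try {
                analyzer.setJar(new Jar(plainJar));

                // Export every package in the JAR; bnd computes the
                // Import-Package list by inspecting the bytecode.
                analyzer.setProperty("Export-Package", "*");
                analyzer.setProperty("Import-Package", "*");

                Manifest manifest = analyzer.calcManifest();
                FileOutputStream out = new FileOutputStream(manifestOut);
                try {
                    manifest.write(out);
                } finally {
                    out.close();
                }
            } finally {
                analyzer.close();
            }
        }
    }

The hard part, as mentioned above, would be the matching step: taking the calculated Import-Package list and resolving each package against the artifacts already available in the repository.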

Where does JSR 277 come into this? Well, several months ago Glyn Normington described an ideal solution in which JSR 277's module system would be implemented using OSGi, and a Maven-like repository from which modules could be downloaded would be made available as part of Java 7. Rather than a Maven-like repository, why not leverage the existing Maven central repository and add the necessary metadata to it? There are an astonishing number of libraries available in central, so it seems like there's a big opportunity there.

In the system I envision, we would have the well-known benefits of an OSGi runtime environment, the wide library selection of the Maven repository, and the dynamic availability provided by the JSR 277 module system. Libraries could be automatically downloaded, activated and deactivated, and applications could be upgraded and installed seamlessly. When you throw in other components like Service Component Architecture (SCA) and Cloud Computing, suddenly I think I might have an idea where Java will be in a few years.

Am I missing something that prevents this from happening, or is it really doable?

Monday, March 10, 2008

London bound!

Less than a day now, and I'll be leaving for London - my flight leaves Tuesday morning, and arrives in London Tuesday night. If it's like the last QCon, it'll be energetic, interesting, very tiring, but completely worth it. I'm looking forward to it!

If you are there, come and say hi - I'll probably be in the Wordsworth room the whole time. :)

Thursday, February 28, 2008

QCon Interviews on a Google Calendar

I went ahead and published the QCon Interview schedule on a Google Calendar. Enjoy!

Tuesday, February 26, 2008

Two weeks until QCon Interviews!

It's just over two weeks until the interviews start at QCon London, and everything is set! We've added the interviews to the official website, and you can see the Wednesday, Thursday and Friday interviews there. As a note, there are actually two more interviews on Thursday which are not listed on the schedule - one with Neal Gafter at 18:45 and one with Linda Rising at 20:00. I could also put together a Google Calendar containing all of the interviews, if there is interest in it.

To give you an idea of what might be happening, I wanted to share this little gem from the last QCon London. It's quite a good laugh, especially for those who have seen the Ali G show:

Friday, February 22, 2008

Ted Neward speculates on the future of development tools

I think Ted Neward really hit the nail on the head with this recent blog post about modular toolchains. In it, he describes a future in which many different languages all compile to the same Abstract Syntax Tree (AST), and that AST is then projected into many different forms, e.g. code in several languages, compiled binary code, virtual machine bytecode, and runtime-interpreted code. Each of these projections (code, compiled binary, etc.) would also be convertible back into the original AST, which allows for a very flexible toolchain.

Imagine being able to convert code in language X into language Y on the fly (e.g. take someone's Java code and view it in Ruby syntax) -- developers could work in their language of choice without forcing an entire team to program in X. As Ted points out, the possibilities for domain-specific languages (DSLs) are also huge - as long as the DSL you created had a way to be converted into the core AST, that DSL could generate code directly without any sort of post-processing/interpretive layer, and you wouldn't be constrained by the "parent language" that the DSL is written in.
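To illustrate the idea, here's a toy Java sketch of "one tree, many projections" - all of the names here are invented purely for the example, and a real uber-AST would obviously be vastly more involved:

    // A node in the (hypothetical) shared AST.
    class Ast {
        final String kind;      // e.g. "assign", "name", "literal"
        final String text;      // leaf text, if any
        final Ast[] children;

        Ast(String kind, String text, Ast... children) {
            this.kind = kind;
            this.text = text;
            this.children = children;
        }
    }

    // A projection renders the same tree into one surface syntax.
    interface Projection {
        String render(Ast node);
    }

    class JavaSyntax implements Projection {
        public String render(Ast n) {
            if ("assign".equals(n.kind))
                return render(n.children[0]) + " = " + render(n.children[1]) + ";";
            return n.text;
        }
    }

    class RubySyntax implements Projection {
        public String render(Ast n) {
            if ("assign".equals(n.kind))
                return render(n.children[0]) + " = " + render(n.children[1]); // no semicolon
            return n.text;
        }
    }

Given the tree for "x = 42", new JavaSyntax().render(tree) produces "x = 42;" while new RubySyntax().render(tree) produces "x = 42". Going the other direction - parsing each surface syntax back into the tree - is what would make the toolchain fully round-trip.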

As Ted points out, this utopian vision is still a bit of a ways out:

How likely is this utopian vision? I'm not sure, honestly--certainly tools like LLVM and Phoenix seem to imply that there's ways to represent code across languages in a fairly generic form, but clearly there's much more work to be done, starting with this notion of the "uber-AST" that I've been so casually tossing around without definition. Every AST is more or less tied to the language it is supposed to represent, and there's clearly no way to imagine an AST that could represent every language ever invented. Just imagine trying to create an AST that could incorporate Java, COBOL and Brainf*ck, for example. But if we can get to a relatively stable 80/20, where we manage to represent the most-commonly-used 80% of languages within this AST (such as an AST that can incorporate Java, C#, and C++, for starters), then maybe there's enough of a critical mass there to move forward.

But that doesn't stop me from dreaming of such a future. None of the problems mentioned are intractable, which means that this vision of the future has a real chance of becoming reality. Here's hoping!

Thursday, February 21, 2008

Costin Leau discusses how to turn a JAR into an OSGi bundle

Earlier this week I came across this blog entry by Costin Leau which describes how to turn a standard, run-of-the-mill JAR into an OSGi bundle. Wow - it's pretty straightforward, and there aren't any magical incantations involved. It's nice to see that the process for bundling up an OSGi component is so simple - I was afraid that there were some significant hurdles involved.
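For the curious, the end result is just a few extra headers in the JAR's META-INF/MANIFEST.MF. For a hypothetical library it might look something like this (the names and version ranges are made up for illustration):

    Bundle-ManifestVersion: 2
    Bundle-SymbolicName: org.example.mylib
    Bundle-Version: 1.0.0
    Export-Package: org.example.mylib;version="1.0.0"
    Import-Package: org.apache.commons.logging;version="[1.0,2.0)"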

Of course, the manifest entries are probably the easiest part. I'm willing to bet that if you took most of the JARs in existing internal enterprise applications and ran them through the Bnd tool that Costin mentioned, you'd end up with a massive list of required imports. It reminds me of something Peter Kriens said in this InfoQ interview:

John Wells from BEA [..] had a presentation and he said: "we thought we were working modularly -- we were disciplined, we were doing the right thing and we were so surprised when we moved to OSGi because we found out we never had been working modularly, we had all these dependencies, all these links to all kinds of subsystems and we never noticed it because it was all on the classpath, and as long as you didn't use it you didn't run into the problem"

Tuesday, February 19, 2008

Links between RIAs and "the cloud"

A colleague of mine at InfoQ, Deborah Hartmann, emailed me this morning with a thought about the strong link between cloud computing and Rich Internet Applications (RIAs). It triggered some thoughts which I'd like to share.

Deborah asked:

Don't [the cloud and RIAs] go hand-in-hand? [...] with more generic services on the server (in the cloud) it's natural that we move toward programming more finesse and usability on the client, right? (i.e. "pull" rather than "push")

My reply:

They definitely do - without having online data storage, RIAs are a lot less compelling.

RIAs seem to be an attempt to create thick clients that are hosted inside the browser. People are trying to achieve desktop-like functionality, but hosted from a website so that you don't have the whole download/install/update model. I also find it amusing that some RIAs are integrating with e.g. Google Gears or SQLite for offline, local data storage - that puts them even closer to the thick client model.

My observation has been that almost every trend you can imagine oscillates like a sine wave - in this case, the top of the sine wave is centralized CPUs with dumb-terminal clients, and the bottom of the sine wave is standalone general-purpose computers which have all of their data and applications under their own control. In the 70s we had Unix servers and terminals, then we slowly oscillated to the PC, and now we are oscillating back to centralized servers, this time enabled via the Internet. I will bet you dollars to donuts that in about 7-10 years we will see a move away from cloud computing and back to localized data storage and application management, due to the issues of storing data in the cloud (who owns it? What if company X goes out of business and takes all of your data with it? What if company Y gets bought and changes all sorts of policies and procedures? What if company Z turns out to be evil, selling all of its data to the Russian Mafia? I'll keep my data where I can see it, thankyouverymuch!).

There's also a bit of a disconnect that I see - I've heard people in Silicon Valley make silly statements like "everyone's using broadband" (I heard that on-site at one of my customers and promptly set them straight). I think that, although RIAs will continue to be the "cool thing" for at least the next couple of years, the heavy download weight and inherent opacity to search engines are problems that need to be addressed. If a client computer has an API that allows rich clients to be constructed locally with minimal data sent over the net (e.g. Mozilla XUL), then I think RIAs will become a real winning proposition for more than a small segment of the internet population (how useful is an RIA on a mobile phone?).