Tuesday, December 16, 2008

Make $1 million using Twitter


Thought it couldn't be done? Apparently, Dell have had some success with their Twitter feed and report that they made one million dollars through Twitter. That's not a huge amount of cash for Dell, but it's a glimmer of hope that there are viable business models for Twitter and, more importantly, that the consumer market might be ready for them.

Looking at Dell's followers on Twitter, there seem to be about 4,000 on the channels that advertise products. This means each follower was worth approximately $250 in 2008. Given that the large majority of followers would not have made purchases, let's say only 10% actually buy something (and that's probably generous). This means each purchase would have averaged out at $2,500. That seems a little high for the average Dell purchase... So maybe you can't make $1 million on Twitter just yet, but the target is set for next year.
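For what it's worth, the back-of-the-envelope math works out like this (the follower count and 10% conversion rate are rough assumptions on my part, not figures from Dell):

```java
public class TwitterMath {
    // Revenue per follower: $1,000,000 spread across ~4,000 followers
    static double valuePerFollower(double revenue, double followers) {
        return revenue / followers;
    }

    // If only a fraction of followers actually buy, each purchase must
    // average the per-follower value divided by the conversion rate
    static double averagePurchase(double revenue, double followers, double conversionRate) {
        return valuePerFollower(revenue, followers) / conversionRate;
    }

    public static void main(String[] args) {
        System.out.println(valuePerFollower(1000000, 4000));      // 250.0
        System.out.println(averagePurchase(1000000, 4000, 0.10)); // 2500.0
    }
}
```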

Is Java Dying a Slow Death?

There have been numerous reports of the demise of Java ever since the language gained popularity. However, with SUN's market cap at $3 billion there has been renewed speculation that because SUN is in trouble, Java is too.

First off, Java is a lot bigger than SUN; it's out in the ether. Nobody controls Java (except the Java community); it is its own entity. This is obviously a good thing for companies and developers. Java is not going to die because SUN is in a spot of bother.

I do believe the adoption curve of Java is on the decline. This is not due to a lack of popularity, rather the opposite. Java has enjoyed massive popularity over the last 12 years; it became the multipurpose language of choice for millions of developers and a large percentage of the enterprise. And why not? Java combined the "write once, run anywhere" principle (that sort of worked) with the building blocks to create everything from embedded applications to hugely scalable enterprise systems, simplifying memory management, concurrency and IO. However, the popularity of Java meant that it was used in places it was not designed for.

The fact that Java is a multipurpose language means in many cases it is not the best tool for the job. We've seen over the past 5 years new languages such as Ruby, Scala and Erlang become more popular. Ruby provides a more fluid, powerful language that is great for building front-end applications, while Erlang is always going to beat Java for scalable, robust back-end applications.

Also, the way we build software is maturing. We are getting better at using the right tools for the job. We use patterns to describe nuggets of software wisdom, we use IDEs that tell us about good practices, we use DSLs to help better describe what our software is doing. With this maturity we look to other, sometimes more specific languages to help us get to our goal more efficiently and elegantly. For this reason Java is not as popular as it once was, but Java's popularity was probably over-inflated in the first place.

A dying language is one that stops living. A living language is one that continues to grow. Java does continue to grow, with JDK 7 considering support for closures, resource blocks and language-level support for XML. Additionally, support for a module system will greatly increase the flexibility of the Java run-time (if implemented correctly - I wish they would team up with OSGi). In Java 6 we got built-in support for scripting languages such as Python, Ruby and of course Groovy. These sorts of features allow a language to grow.
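The scripting support that arrived in Java 6 is exposed through the javax.script API (JSR 223). A minimal sketch, assuming a JSR-223 engine for your chosen language is on the classpath (the JDK bundled a JavaScript engine; Groovy, JRuby and Jython ship their own):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class ScriptingDemo {
    // Returns the script's result, or null if no engine with that name is installed
    public static Object run(String engineName, String script) throws ScriptException {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName(engineName);
        if (engine == null) {
            return null; // e.g. "groovy" without the Groovy jars on the classpath
        }
        return engine.eval(script);
    }

    public static void main(String[] args) throws ScriptException {
        // Evaluates to 3 on JVMs that bundle a JavaScript engine
        System.out.println(run("JavaScript", "1 + 2"));
    }
}
```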

The challenge for a mature language like Java is that with every new release there is an accumulated legacy to deal with. Eventually, the weight of that legacy will crush the language. I don't think Java is anywhere near that point yet.

Friday, December 12, 2008

Strategy is Something You Can Only Learn

There is a great post by Mike Cannon-Brookes of Atlassian, which talks about how they came up with the stellar business strategy that has driven the iconic company from strength to strength. Mike raises some interesting points, which are relevant to any tech start-up or indeed any industry that deals with progressive change - which is almost all industries.

I believe open source start-ups are particularly prone to bad strategic decisions because there is a very small pool of successful open source start-ups, with less than 10 years of history, to draw upon. Furthermore, the market is rapidly changing, making success factors from yesterday points of failure today.

I doubt there have been any companies in history that didn't have to evolve their strategy to be successful. There are myriad books out there that will try to tell you how to build a successful strategy, how to define it, how to target a market, what to measure - but until you try it you cannot ready yourself for the changes you and your company will need to go through to have a shot at success. When I was in high school applying for university I didn't understand why business degrees were Arts and not Sciences. The fact is business strategy is an art that you can only learn by doing.

My experience founding MuleSource so far has been a whirlwind of success, failure, enlightenment, disillusionment, confusion, uncertainty, terror and joy. And of course I am learning more every day. Mike’s post spurred me on to share some things I’ve learned over the last couple of years.

Your strategy will evolve over time but your mission should not. The mission statement sums up the ultimate goal of the company and will set the operating and strategic tone of the company. If you change the mission statement significantly you are changing the company significantly.

Fail early, fail often. Every entrepreneur will cite this adage; the fact is failing takes practice. Until you have failed a few times it's hard to know when you are failing or how to spot the early signs. Starting an open source company has been all about trial and error because there is very little experience to draw on. In order to fail early you need to really understand what you are aiming for, what you are selling and whom you are selling to.

Understand your market and understand that the market is always changing. When I created the Mule project I always knew that I would create a business around the project. Back then enterprises did not deploy software into production without a support contract from a vendor. In the time it took Mule to become popular, the market perception of open source had changed and companies were not so concerned with deploying unsupported software into production (at least for non-mission-critical applications). The fact is that the pitch to our investors for MuleSource was very different to how I once thought the company strategy would look when I started the project. And MuleSource today is a very different company to the one we pitched in 2006. To have a chance at success you must evolve your strategy.

For every goal you need a metric. Metrics can be anything from downloads, sign-ups, bookings, renewal rates, bugs reported/closed, product releases, customer satisfaction indicators, website hits/clicks, upsells, burn rate, etc. Key metrics will be slightly different for every company, but every company needs to track their metrics rigorously. Without solid numbers to measure your business against it's very hard to assess how you are doing, but more importantly it's impossible to predict how you will trend in the future. Without these numbers it's very difficult to know if you are spending precious time on things of little value to your business.

One of the toughest tasks of running a company is hiring. In a start-up every person you hire really counts. Your people are your company; they are the ones that make the dream a reality. You need to think long and hard about every position, and understand not only the role and responsibilities but also the cultural aspects. You should always assess a candidate not only on their skills but also on personality traits, employment values and the ability to work in your environment. The questions you ask should give you insights into the character of the person. Always do reference checks, and backdoor checks where possible. Some of the best people I have hired have come from my network (LinkedIn is a great source) or someone else's network in the company. When we've hired from a recruiter we've had mixed results.

Communication is an art. We all know communication is vital for any collaborative environment, but I have worked in many environments where we had lots of bad communication. Bad communication is frequent communication without getting the right messages across. Meetings shouldn't be boring; attendees should walk out feeling that they learned something important to them. You don't need gimmicks to make a meeting interesting, you just need to focus on the content. We've learned a lot in this area at MuleSource. Our communication approach has evolved and continues to do so. Here are some principles to consider when scheduling a meeting -

  • What is the goal of the meeting? You're about to take 30-60 minutes of your team's time. Make sure you make good use of that time

  • Invite only the people necessary to attend the meeting. If you want to let others know about the meeting, either share your calendar or make it clear in the invite that it is optional for certain people

  • Who is your audience? It’s very easy to ramble on about what’s important to you not what’s important to the goal of the meeting. This is especially true of status meetings. Try and keep your communication short and precise

  • Be honest, be open. Meetings shouldn't be a place to blame colleagues; if employees feel the need to cover their backs, there is probably a culture issue

  • If addressing the company, make sure the key points are clear and relevant. You should always relate back to objectives and the mission of the company. Company meetings should be energizing and gratifying, people need to see the impact of their hard work on the overall goals of the company.



Finally, the most important point is to enjoy what you do or don't do it at all. Most open source developers are driven to write free software because they enjoy it and they choose to give up their spare time to pursue it. However, starting an open source project and starting a company are two very different beasts. I started Mule with a view to starting a company around it; I was considering the Mule project as a commercial avenue early on, as part of a bigger plan. Now MuleSource has become a living entity that I truly enjoy being part of every day. It's very hard work, it consumes a lot of my life, and it's difficult for my wife. It's a labour of love for both of us. You need to be really committed to create a successful open source project; you need to be a little crazy (about it) to start a company.

Friday, December 5, 2008

A new (excellent) way to hurt myself

Those who know me know I like to get out to the mountains for snowboarding whenever I can. This year I started kiteboarding, since Malta has some decent conditions for it (once you get past the rocks and crowded beaches). Ever since I went heli-boarding in New Zealand a few years back I've really wanted to do more with my boarding. Now I think I've found the answer: snow kiting! Check this out:

Thursday, December 4, 2008

Open Source Upturn in Economic Downturn

I've been asked a lot over the last couple of months, "What is the impact of the economic downturn on open source companies?" I'm sure most people have read the slew of articles on this subject and the response is pretty consistent. Organisations are still spending money in this climate but are looking for low cost alternatives over costly proprietary solutions. The commercial open source approach works well in this climate because organisations can try before they buy, they can self-select without the hard sell from the vendor and they only need to buy if they are compelled to do so.

This point draws a fuzzy line between the open source companies that will do well in this climate and those that will suffer. If the open source product falls below the value line (and value can mean very different things depending on the target market) then it's going to be difficult for the companies selling that product to operate successfully during this slump.

For established open source companies such as Alfresco, SugarCRM, JasperSoft, Intalio or MuleSource, the economic change is already showing positive signs. If you are a new open source start-up, or an existing one without a clear path to profitability, then the next 12 months will hang in the balance. Nobody wants to raise an A or B round of funding right now; the VCs are going to be way too cautious.

However, the reason I am so positive about the year to come is that I believe we needed a drastic shakeup in the market for the benefits of open source to be really acknowledged. Right now every organisation is assessing all their costs (most companies have cut between 10-30% of their workforce). In this process, CIOs and managers are being asked to do more with less, to keep the engine running with half the budget. This means they have to look to open source. They then realise that open source companies compete with proprietary software on the features they need, at typically 10-30% of the cost. Even with a 50% budget cut, that leaves you with cash in hand.

Open source companies change the way organisations acquire software. They provide a much more business-friendly way of purchasing software based on real need and value, rather than vendor-crafted perceived value. The proprietary vendors have been getting away with selling promises, but now organisations have no choice but to look elsewhere for real value at a lower cost.

Wednesday, November 26, 2008

MuleCast, Blog and Books


There are a number of good things happening in the Mule community at the moment. We recently started a podcast series, which you can subscribe to here or get directly in iTunes. Each episode is about 5 minutes, where we talk to members of the community about all things Mule. Let me know if you have any particular topics you'd like us to cover.

Next, the team behind Mule are blogging! From the Mule's Mouth is where the core developers of Mule and Galaxy blog about what's going on with the products, in our industry, and more.

Finally, now is the time to stock up your geek library because there are some great books for Mule users:


They should keep you busy when you're done watching crappy TV over Christmas. Note that Manning are also offering a 27% discount for customers who quote the promotional code msource27. Be quick though - I think this code will only work for another day.

Monday, November 24, 2008

Microsoft to buy SUN

Combine poor market conditions, SUN's stock price free-falling and Microsoft's desire to break into the enterprise market, and you can imagine this acquisition actually happening. The fact is there are some compelling arguments to buy SUN at this point. SUN is obviously in a bit of trouble right now; their market cap is currently less than their annual revenue. That is free money for an acquirer. Schwartz has bet the farm on a long-term open source strategy, and while I think this is bold and commendable, it is not well suited to the current climate; nervous shareholders need to see results before 2014. It's already likely SUN will spin off some parts of their business (read: hardware) to reduce the staggering amount of cash they are burning through, but that will only go so far; SUN may need to make more drastic changes.

If Microsoft took control of SUN I think we'd see some big changes across the board. Microsoft would inject some much-needed shorter-term strategy, and a lot of marketing muscle that SUN desperately needs. I think they would systematically lop off all the random projects that SUN don't make money from, including OpenOffice. OpenSolaris, MySQL and SUN's virtualization platform would be safe.

Of course this would mean that Microsoft owned Java. That's a pretty alarming thought. I was on a panel a couple of days ago where I contested the value of open sourcing the JDK for the end user. However, with possibilities like this I stand corrected: open sourcing the JDK was the way to go. Even though Microsoft wouldn't screw with it, you can bet there would be a huge push back from the Java community. I chose Java 10 years ago to get away from Microsoft's wacky APIs and brainwashing (ironic really), and I can't help but feel others like myself would be pretty uncomfortable with this.

The end result would be Microsoft having a credible enterprise story, with a business to back it up. They would also (to some degree) control the mind-share of two of the biggest developer communities, Java and .NET. There would be bound to be objections from other vendors on monopolization, but since the market share and products of the two companies don't actually overlap too much, these objections may be difficult to defend. Scary thought.

Friday, November 14, 2008

New MacBook Pro verdict

So after my foray with Ubuntu, I switched back to Mac with an all new MBP.
As always with Apple, with every improvement, there are always drawbacks and the new MBP is no exception.

The Good


  • Solid design. It feels a lot more robust. The form factor is more sleek too

  • SSD hard drive is lightning fast, plus I hope it will not fail me like the older SATA models

  • Keyboard feels great and I like the key spacing

  • Generally faster hardware all round, with a 2.8 GHz Core 2 Duo and 4 GB RAM. Makes for a good experience

  • Doesn't run anywhere nearly as hot as older MBP models



The Bad

  • The new trackpad is cool, but seems unresponsive at times. I also really *dislike* the click function. I would rather it used a single tap for click and two fingers to drag items

  • The screen is glossy and there is no matte option. I am getting used to the screen but in some conditions it is awful

  • The new model is the same weight as the older model and it's actually slightly bigger!

  • The battery and housing have a new design, so I cannot use my old backup batteries



I have no take on the battery life yet, but it doesn't seem any worse than my old MBP.

All in all, I am enjoying the responsiveness and will get used to the screen. I imagine the trackpad can be fixed with software. However, I would not have upgraded if I didn't need to; there is not enough incremental value over the old MBP model.

Monday, November 10, 2008

QCon is coming


It's QCon time again in San Francisco. This is one of the conferences that I really enjoy and the line-up for this year is looking better than ever.

On Thursday, I will be doing a panel discussion with Bob Lee, Rod Johnson and Geir Magnusson discussing “How does the Open Source trend in Java affect your design and development process”, which should be a very interesting discussion.

On Friday I am co-hosting a talk with Dan Diephouse about bringing the enterprise to the web with Mule. Folks attending the Mule session on Friday at 2pm will get the option of a 27% discount (not quite sure how we got to that random number) for the excellent Open Source ESBs in Action book, plus the chance to win some Mule training.

As always I'd love to meet any Mule users at the conference - just catch me after my sessions, ping me on Twitter, or wait for me by the beer fridge :)

Monday, November 3, 2008

Taking Ubuntu like a man

I wish I could say switching to Ubuntu was a breeze, but in fact it left me uneasy and in need of a hug. In the end my resolve faltered and now I'm writing this post from my restored Mac. Migration to Ubuntu is not impossible but it's not an easy path either; my problem was that I just don't have the time right now to make a cold turkey switch. The major barriers I hit are -


  • No support for WebEx or Adobe Connect (I seem to be doing webinars regularly)

  • Open Office just doesn't cut it. I didn't get around to installing Wine and MS office due to other issues.

  • Ummm... iTunes, where is the Ubuntu version? I've been a long time user of iTunes, I have an iPhone and have an AppleTV unit at home. Without iTunes I can't do much with these things. Of course the real problem is I am locked into Apple, which is not a great place to be.

  • When things go wrong in Ubuntu, it takes a while to figure out. My first hurdle was losing WiFi - it didn't tell me I had lost it, it just disappeared. I quickly realised that the UI for the wireless wasn't going to help me diagnose the problem, so I started sifting through the command-line options. I like the command line for some things; WiFi isn't one of them. Eventually, somebody told me that the saved WiFi profiles do not work properly and I had to delete them. This is basic stuff; it should work

  • I couldn't get the VPN working properly. Again, trouble shooting this was unnecessarily painful in this day and age

  • The UI looks a little dated and something has to be done about the fonts (I installed MS fonts and didn't see much improvement)

  • I didn't get around to setting up my home HP printer, but the anticipated pain was bad enough

  • Finally, Ubuntu really seems to be the third wheel of operating systems. Most software vendors only provide their software for Windows and Mac. This is a real shame, but I don't have hours to spend hunting for command-line utilities that may suit my needs



All that said, I actually liked Ubuntu for many reasons; I just don't have the time to invest in migrating myself over to the platform. The selling points for me were -


  • It is lightning fast. I was running on an old ThinkPad and responsiveness was great

  • It's a great development machine. All the dev tools I use (when I have time) work on it, and it doesn't do anything stupid with the JDK like Apple does

  • There's a great community around Ubuntu who are very helpful and know their stuff



I've decided to keep Ubuntu on the back burner. I'm not ready to make the leap yet, but I will keep playing with it. It has loads of potential, but is just not there yet in terms of usability.

Friday, October 31, 2008

Friday ascii art

We all love ASCII art and what could be better than a donkey! Ken, our Director of Engineering, just sent me this as a possible start-up splash screen for Mule. This one seems to be sneezing XML...

Monday, October 27, 2008

Webinar: Getting started with Mule 2.1

As some of you know we released Mule 2.1 Community and Enterprise last week. Tomorrow (Tuesday) I will be hosting a webinar for folks interested in getting started with Mule 2.1.

The session will cover:
- Introduction to new features
- Creating your first Mule project
- Mule IDE
- Example case study

You can register here. Hope to see you tomorrow at 9am PST, 4pm GMT.

Thursday, October 23, 2008

Apple hates me

I cannot believe it, but my MacBook Pro has just died again. I am using an older one because my other one died about a month ago thanks to a faulty graphics chip. For those counting, that is three major outages for me this year (the first happening in June).

While I have been a diehard fan of the PowerBook and MacBook Pro over the years, I think it's time to find a new platform. My main concerns with the switch right now are:


  • Calendar and calendar sync to my iPhone - I guess this may mean a change in phone

  • Switching email - I just don't like the GMail interface

  • MS Office support - I use them every day

  • Backup - Time Machine is the best I have used

  • I don't think I can bring myself to go back to Windows



It seems like my only viable option is a Thinkpad and Ubuntu. Anyone else have another idea?

Tuesday, October 21, 2008

Mule 2.1 is out!

After a lot of sweat and beers, Mule 2.1 Community and Enterprise editions have been released. Improvements in the Community version include -


  • Component Interceptors have been re-introduced. We thought folks could get by on Spring AOP alone but that wasn't the case. Interceptors allow developers to intercept events before and/or after a component

  • Expression Support. Expressions allow developers to evaluate expressions on the current message to perform transformations, content-based routing, and filtering. There is now better support for expressions in routers and transformers. The expression syntax has also changed in this release to play well with Spring Property Placeholders. See the migration guide for more information

  • Reconnection Strategies have been replaced with the new Retry Policy framework. The old reconnection method only worked some of the time; the new policy framework gives us much better control over how retries are attempted

  • Routers now support returning collections of messages using the new MuleMessageCollection interface. This is useful if you want to invoke multiple endpoints, say using the Multicasting router, and return more than one result

  • Endpoint configuration has been tightened up to make it easier for people to configure event flows. Endpoints can now be configured with the synchronous attribute; otherwise the default for the Mule instance is used. For more information about configuring event flows, see the Messaging Styles page

  • The Message Splitter router has been simplified to make it easier to build custom splitter routers

  • There have been many schema updates to make it easier or more logical to configure Mule. These are detailed in the migration guide

  • The documentation is orders of magnitude better than in previous releases. We still have work to do, but you can find information on almost all areas of Mule at the community site

  • The Mule codebase has been made OSGi-ready. This means that Mule now ships as OSGi bundles, but we have not made the move to actually having the server run inside an OSGi container. We want to do a lot more around our support for OSGi; this was the first step

  • As always, there are a huge number of fixes in this release. Thanks to the Mule community and customers for reporting them



We also released Mule 2.1 Enterprise this week. This is the commercial Mule offering providing all the benefits of the Community edition plus some extra tools and features.


  • Premium JDBC Connector - support for advanced JDBC functions, such as batch and cursor mechanisms, resulting in over two orders of magnitude (150x) performance improvements for certain use cases

  • WebSphere MQ Connector - bi-directional request-response messaging between WebSphere MQ and Mule JMS using native MQ point-to-point, as well as synchronous and asynchronous request-response communications

  • Mule RESTpack - now supported as part of the core Mule 2.1 Enterprise product, allows developers to create REST-style services, which form the basis for web oriented architecture (WOA), using popular frameworks such as RESTlet and Jersey

  • Out-of-the-box retry policies - allows creation of self-healing connections, instructing Mule to attempt reconnection based on pre-defined policies, without the need for custom code
  • Mule 1.x Migration tool helps existing Mule users migrate from Mule 1.4-1.6 to Mule 2.1

  • MuleHQ is a 360-degree management and monitoring tool for managing all deployed assets such as the OS, JVM, database and of course Mule.



You can grab the Enterprise Edition here (includes a 30-day evaluation license) or get the Community Edition here.

Wednesday, October 15, 2008

Who pays for open source?

Recently, Zack Urlocker posted an entry on his InfoWorld blog, High volume is key for open source. He put open source users into two categories -

My view is that when open source products are most successful (and most disruptive), they serve two distinct markets: a nonpaying community and a paying enterprise market.


I think these groups are more like different ends of a user spectrum. I believe there is also a middle ground that defines the gray area between those that do pay for open source and those that might.

At one end of the user spectrum there is, for any successful open source project, a large user group who will never pay for the software. Within this group there is a small sub-set of people who will give back to the project in some other way, such as submitting patches, giving feedback or answering questions on forums. Let's call these users the "passive community" and "active community" respectively. The best way to get value from this group is to grow your active community by making it as easy as possible for users to understand how to contribute, and by fostering those contributions.

At the other end of the spectrum there are those that will always pay for open source software that has risk associated with it. By risk I mean that they rely on it to run parts of their day-to-day business. To these customers, the safety net of having a support agreement with the vendor is vital, typically they will also care about indemnity and warranty of the code they are deploying into their well-guarded production environments. If something breaks they want it fixed right away, so going with the vendor is the safest and most reliable option.

Then there are the people in between. These users will use the software in production even for mission critical applications. Generally, they will consider purchasing from the vendor but this consideration has a lot of variables that stem from the needs of the customer and the offering from the vendor. Customer considerations often include -

  • Is the application mission critical? People don't pay to support applications that have little or no impact on their daily operations.

  • Can we absorb the cost? I would guess that 100% of IT start-ups use open source software to build their applications and infrastructure. There is no way they are paying for anything unless they are making money.

  • Can we support ourselves? Often, customers think they can support themselves without the vendor. This is obviously fine and one of the upsides of open source software. However, in my own experience I've seen that often customers take a "bad path" when designing their SOA/integration solution, just because they didn't understand what could be done with Mule. Talking to MuleSource early would have saved a lot of pain later on.

  • Do we have the right team to self-support? If customers build their solution in-house, they will answer yes to this question. However, I've seen users say they can support themselves then realise that their mission critical system is supported by 6 contractors. Given the current climate, those guys may not be around for too long. Often there will be a champion of the product in the organisation that introduces it. What happens when that person leaves?

  • Do we know we are running Mule? As Matt Asay points out, it's fairly common that an SI will introduce open source products into a customer environment without the customer knowing it. What if something goes wrong and the SI cannot fix it?



From the vendor perspective, appealing to this middle group should be approached with an emphasis on value. Always ask "what problems do we solve for the customer with our commercial solutions?" rather than "here is what I can sell you".

The meaning of value will vary greatly from market to market and customer to customer. You can define value in different ways such as additional product features, tools, premium content such as knowledge bases, indemnity, services, training, support and of course pricing.

Ultimately, you need an acute understanding of your users and customers to ensure you strike the right balance between your open source offering and your commercial offering. It boils down to making decisions that embrace your user community and provide value to your customers. You will find yourself making distinct decisions for users and customers, but you must always consider both carefully before arriving at a conclusion.

Mule Galaxy Expands

We're proud to announce the recent release of Mule Galaxy 1.5 Community and Enterprise Editions. It is a huge leap forward from our 1.0 release (hence the version jump). Some of the great new features include:

Enterprise Edition Features



  • Remote Workspaces: Support for attaching workspaces from remote Galaxy instances. This provides simple federation since you can deploy multiple Galaxy instances in different regions/departments/projects and also share some artifacts and policies across the whole deployment.

  • Workspace Replication: Replication takes advantage of the administration shell and the remote workspaces capabilities to allow you to easily copy workspaces from one Galaxy instance to another. This is very useful if you wish to periodically push production-worthy artifacts/entries from a development instance of Galaxy to a production instance. By writing a simple script and scheduling it, you can control how and when replication occurs.


Community Edition Features



  • Scripting Shell: This is a Groovy-based scripting shell that allows you to create executable scripts to run against Galaxy from the Web UI. This is a powerful feature that allows you to do things like create event listeners that trigger other processes, fire notifications, perform backups and more. There is also a cron-based scheduler to execute scripts periodically.

  • Entries: In addition to storing artifacts, Galaxy can also store entries inside the registry. Entries can represent a wide variety of services since they do not require a service description like WSDL; they just store pure metadata. For instance, a RESTful service can be listed in the registry by creating an entry and storing information such as its URL, links to documentation, etc.

  • Event API: All events inside Galaxy now trigger an internal event that can be listened to (Observer pattern). These are accessible from the Scripting Shell or Java extensions to Galaxy.

  • Feeds for search results: Now every workspace and every search has a link to a feed for that search. This allows you to easily subscribe to changes occurring inside Galaxy and monitor them via a newsreader.

  • There have also been a load of improvements to the AtomPub API, query engine, and property type handling. The UI has changed a lot; we spent a lot of time addressing usability.

  • Finally, Galaxy supports an auto-upgrade feature. If you were using Galaxy 1.0, you just need to install Galaxy 1.5 and run it!
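The Observer pattern described in the Event API feature can be sketched in plain Java. To be clear, the interface and class names below are illustrative only; this is not Galaxy's actual Event API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical listener interface; Galaxy's real Event API will differ.
interface GalaxyEventListener {
    void onEvent(String eventType, String artifactPath);
}

// A minimal event dispatcher (the "subject" in Observer terms).
class EventBus {
    private final List<GalaxyEventListener> listeners = new ArrayList<GalaxyEventListener>();

    void addListener(GalaxyEventListener listener) {
        listeners.add(listener);
    }

    // Notify every registered listener of an internal event.
    void fire(String eventType, String artifactPath) {
        for (GalaxyEventListener listener : listeners) {
            listener.onEvent(eventType, artifactPath);
        }
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        bus.addListener(new GalaxyEventListener() {
            public void onEvent(String type, String path) {
                System.out.println("received " + type + " for " + path);
            }
        });
        bus.fire("artifact-updated", "/workspaces/demo/service.wsdl");
        // prints: received artifact-updated for /workspaces/demo/service.wsdl
    }
}
```

The same shape is what a Scripting Shell event listener would hook into, whether from Groovy or a Java extension.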



Go and give it a spin and let us know what you think!

Thursday, July 3, 2008

Fistfight: Stroustrup vs Gosling

I’m in Paris at the moment at the OCTO conference. I had the pleasure of meeting Bjarne Stroustrup, who invented C++ at Bell Labs some 20-something years ago. He gave a keynote about C++, which was actually very interesting. I like to hear about the origins of technologies that stick, but even more I love to hear about what the inventor would have done differently. I found it amusing that top of the list of regrets was that he didn’t require a “C++ Inside” badge for programs written in C++. I got the distinct feeling that Bjarne was a little jaded by the marketing and hype that Java received when it was launched and still gets today. He also felt that if they had released better libraries from day one there wouldn’t be the plethora of overlapping libraries we have in C++ today. C++ would also be a different beast if there were a single vendor that “owned” the language, and hence a marketing budget for it. Instead the C++ marketplace is very fragmented, with vendors still offering their own flavours of C++, but at least the language itself is standardized by the ISO.

C++ is used heavily in high-performance and extremely critical environments. He said that C++ is seeing a resurgence in the embedded market, which obviously makes a lot of sense, but he also sees value in an open web framework in C++. I’m not so sure about that one.

What struck me when talking to Bjarne is that he still has huge enthusiasm for C++ even after 20+ years working on it. Spending that much time on any topic is bound to colour your judgment, and his answers to all questions about languages seemed to lead back to C++ even if you asked him to specifically talk about another language.

Interestingly, he didn’t think there would be another popular general purpose language in his lifetime just because there is no one vendor that will feel the need to invest heavily in a new language because the gains would only be incremental.

I asked him about his thoughts on functional languages to which he replied, “… they continue to struggle to get out of the Ghetto”. That made me chuckle.

Finally I mused whether he would win a fistfight with James Gosling. He felt that there would be a “speed differential”, but said that if James got a punch in early he might go down. Personally, I think it would turn into a rolling bear hug coloured with clean language and grey hair. Bjarne said that he’d struggle to work up the aggression to go hard against James, though when I mentioned the performance comparisons of Java 6 and C++ published by SUN the fire was ignited.

Friday, June 20, 2008

Recovering your MacBook

My MacBook Pro died on me last Thursday. This is a truly devastating moment where you start questioning your backup procedure and realise in a flash how royally screwed you are without your laptop.

It turns out I had an "Invalid node Structure", but it took me a while to reach this conclusion. Then I figured out that this wasn't just a data integrity issue; my hard drive was hosed. I got a lot of scattered information about how to recover an MBP from various forums and friends, so I thought I would record my experience just in case I have to go through this again.


  1. You should try running in safe mode first. To do this, restart the laptop (by holding down the power key for 5 seconds) and then hold down the [shift] key when it starts back up. This did not work for me, but if it does work you will need to run "Disk Utility" (see below).


  2. You can boot from a Tiger or Leopard install disk by inserting the CD, powering down, then restarting while holding down the [c] key.


  3. When I first started booting from CD I didn't know you had to hold down the [c] key. I had a sinking moment when I couldn't get the CD to eject from the MBP. If you are in this situation you can try 2 things:

    1. Restart and hold down the track pad key. This worked for me.

    2. Restart and wait 10-15 minutes, the laptop will eventually spit out the CD. This also worked for me.




  4. Booting from CD takes a while but eventually you get a welcome screen. From here you can go to the Utilities menu and run "Disk Utility".


  5. Select your primary boot volume and hit repair. This will verify the disk and attempt to repair. It was at this point I was told I was a victim of "Invalid Node Structure".


  6. You have a backup right? Yes I do, so now the recommended course of action is to reformat the drive, reinstall Leopard and then restore from back up. One major gotcha to be aware of is that you need to format your primary disk in the same format as your back up disk. Your options are: Mac OS Extended (Journaled) or Mac OS Extended (Case-sensitive, Journaled). The formatting MUST match up.


  7. If you do not have a back-up, it is possible to save your data. You'll need to go into single user mode and run the command: fsck -fy. See this forum post for more info. Folks on Twitter and IM also recommended DiskWarrior; this piece of software looks pretty awesome if you need it.


  8. Wait forever while the install DVD is verified.


  9. Once the installation starts you'll soon find out if your hard disk is screwed. Even though I ran Disk Utility on my hard drive, it started making bad noises halfway through the install and eventually failed.



At this point I realised I needed a new hard drive. Living in Malta, there is no Apple 'Genius Bar' to take my laptop to, so replacing the hard drive myself was the only option. Cracking open your MacBook voids your warranty, so only do this if you have no other way of getting the hard drive installed.

Before you start unscrewing, I recommend you take a look at the various videos on YouTube. I thought these two were pretty good (you can skip through the boring bits).

You will need a couple of tools to get your MacBook open. I got by with a Phillips-head 2.4mm screwdriver and a T6 Torx (star head) screwdriver. There are a whopping 24 screws you need to remove before you can get the hard drive out; if you lose track of which goes where, check this guide out.

Once you have installed your new hard drive restoring your system from Time Machine is pretty easy, but lengthy (mine took about 4.5 hours).


  1. Boot from your Leopard CD (hold down the [c] key at start up).

  2. Select your language and hit the arrow button.

  3. Select Disk Utility from the Utilities menu.

  4. Format your new hard drive. Remember it needs to be in the same format as your backup drive.

  5. Once the hard drive is formatted, exit Disk Utility.

  6. From the Utilities menu again, select Restore From Time Machine. You'll need to plug in your back up drive at this point.

  7. Select the backup you want to use and you are off.



Thank goodness for Time Machine; this whole debacle would have been a thousand times worse if I wasn't using it. While Apple's hardware can be flaky, having great backup software makes up for a lot.

BTW thanks to everyone who emailed, tweeted and IM'd me with suggestions and words of encouragement.

One bad Apple spoils the barrel



Following on from my Apple woes earlier this week, I think my MacBook Pro hard drive got corrupted last night. The issue has manifested itself by getting stuck at the grey Apple screen at startup and I'm struggling to get anything more out of it. On start up things whir and beep but the damn thing doesn't get past the Apple-grey-screen-of-indifference. It's as if my Apple devices are colluding against me this week. I am now in the midst of trying to fix this mess and cursing Apple all the way.

Wednesday, June 18, 2008

Dissecting Mice

In a fit of passive-aggressive rage against Apple, I decided to crack open my recently faulty Mighty Mouse. It was surprisingly hard to open because the thing had been glued shut (an $80 mouse is disposable, according to Apple). After much prying and poking I got it open. This gave me a sense of satisfaction that took me back to my childhood, when I used to open up every gadget in the house to the dismay of my parents.
Anyway, I managed to clear out the dirt from the roller wheel, but putting the thing back together was a fiddly task (much like it was in my younger years) and of course I had to glue the thing shut again to hold it all together. The good news is my mouse is working.



Here are some tips if you want to try this at home:


  • Do not remove the inner ring that the mouse glides on. There is no reason to, and it’s something else you’ll have to glue.

  • Be careful when opening: the brown ribbon wire that attaches the scroll ball is a bitch to put back in.

  • The scroll ball is housed pretty well; I think it would be easier to get a small screwdriver and run it around the scroll ball without opening the mouse up.

JBoss trying to reach for the clouds

JBoss have just announced a hosted offering on Amazon EC2. This is pretty similar to Sun’s announcement at JavaOne . JBoss are offering a monthly hosting plan plus a premium on the server time you use. I’ll be interested to see how this works out for them. My feeling right now is every OS and platform vendor will be offering a similar package in the next 6-12 months and then we'll realise that we want to avoid vendor lock-in... again.

We piloted a hosted offering almost a year ago, called MuleOnDemand, but felt we were too early to the table. Now we have a partnership with CohesiveFT (whom I’ve blogged about before), which means Mule users can already compose Mule-based applications online and deploy them to Amazon EC2, as well as grab a VM image in any of the major formats from VMWare, XenSource, IBM, SUN, Oracle and Parallels.

Monday, June 16, 2008

Apple: Sexy, Sleek, Unreliable

I have been a Mac convert for over 4 years and bit-by-byte Apple have snuck their way into my home, office and pocket. I’ve always loved the fact that Apple have managed to marry function and form in a way that makes me oblivious to spending money with them. Unfortunately, as I buy more Apple a nasty trend is emerging: their stuff breaks… a lot. I have been willing to ignore this trend because secretly I want an excuse to buy the latest and greatest, but now it’s starting to bug me. Five minutes ago my “Mighty Mouse” scroll wheel broke. This is the third time this has happened, and it’s the last Apple mouse I’ll buy. I just dug out a very old Microsoft mouse (one of the first with a scroll wheel) and it still works! (Though it is not wireless and cheapens the look of my desk.) Last week, my partner in crime, Dave, was having problems with the MacBook Air and has switched back to the MacBook Pro. And during JavaOne my iPhone crapped out and I had to replace it. That’s all in the space of just over a month. As much as I like Apple products, they are far from perfect, and these are the sorts of incidents that make me look elsewhere for my next gadget fix.

Wednesday, June 11, 2008

Mule and IntelliJ IDEA

As some may know, we are working on version 2.0 of our open source Mule IDE based on Eclipse. However, with the new schema-based configuration in Mule, IntelliJ IDEA users (including me) get some great features to help them build Mule applications quicker too.

In the most recent IntelliJ 7.0.4 EAP release they have fixed the XML editor so the full benefits of the Mule schemas can be realised. I'd like to thank the IntelliJ guys for this fix. It took them less than a month from us reporting the issue to them fixing it and doing a release. I think that's pretty fantastic!

Quick Introduction


Mule 2.0 XML configurations are defined in schemas. For every module and transport in Mule there is a schema that defines the configuration elements that can be used. For example the following defines the Mule schema (default namespace), the VM transport and the Quartz connector:


<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesource.org/schema/mule/core/2.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:vm="http://www.mulesource.org/schema/mule/vm/2.0"
      xmlns:quartz="http://www.mulesource.org/schema/mule/quartz/2.0"
      xsi:schemaLocation="
          http://www.mulesource.org/schema/mule/quartz/2.0 http://www.mulesource.org/schema/mule/quartz/2.0/mule-quartz.xsd
          http://www.mulesource.org/schema/mule/vm/2.0 http://www.mulesource.org/schema/mule/vm/2.0/mule-vm.xsd
          http://www.mulesource.org/schema/mule/core/2.0 http://www.mulesource.org/schema/mule/core/2.0/mule.xsd">

</mule>



Essentially, the schemas in Mule 2.0 provide an XML Domain Specific Language (DSL) for constructing integrations and service compositions. The benefits of this approach over the Mule 1.x approach are that:

  • there are no longer class names in XML configuration files (unless you are plugging in a custom implementation)

  • all available attributes for a configuration element are provided in the schema, which gives you auto-complete in many XML editors. Previously, developers had to look at the documentation or JavaDoc to figure out what properties to set

  • required elements and attributes are validated and enforced by the schema

  • attributes are typed, making validation more robust
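To make the first point concrete, here is a rough before-and-after sketch. The Mule 1.x snippet is reconstructed from memory and the attribute names are illustrative, so treat it as an approximation rather than exact syntax:

```xml
<!-- Mule 1.x style: the implementation class name leaks into the configuration
     (element and attribute names approximate, for illustration only) -->
<connector name="vmConnector" className="org.mule.providers.vm.VMConnector"/>

<!-- Mule 2.0 style: a schema-validated, typed element with no class names -->
<vm:connector name="vmConnector"/>
```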



When editing a Mule XML file in IntelliJ IDEA, you will get full code-complete for the modules imported into your XML file:





While you are filling out the details of your connector, you can quickly get help about the attribute you are adding by hitting Apple Key+J (Ctrl+J on Windows), which gives you context help on that attribute.



The context help also works on configuration elements:



Unfortunately, these features do not currently work with Eclipse unless you are using the Altova XMLSpy Eclipse plugin or Oxygen XML for Eclipse.

Creating new Mule XML files can be laborious since you need to define the namespaces and schema locations each time. You can greatly simplify this in IntelliJ IDEA by using ‘Live Templates’ (from the Options dialog). Live Templates are code snippets that can be inserted into any editor window simply by typing an abbreviation and hitting the Tab key. I have lots of templates set up, but 3 are useful for creating new Mule configurations:

1. Mule XML Configuration Template [m2]


This can be used in a blank XML document.





2. New Namespace [ns]


Use this to add a new schema namespace. When the template is inserted you just need to type the Mule module name, e.g. ‘jms’ or ‘http’, and then hit the Enter key.



3. New Schema Location [sl]


Use this to add a new schema location. When the template is inserted you just need to type the Mule module name, e.g. ‘jms’ or ‘http’, and then hit the Enter key.
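Since the template screenshots are not reproduced here, the following is a guess at what the [ns] and [sl] template bodies might look like, using IntelliJ's $VARIABLE$ placeholder syntax and the schema URLs shown earlier; the actual templates may differ:

```xml
<!-- [ns] New Namespace: hypothetical template body ($MODULE$ is a template variable) -->
xmlns:$MODULE$="http://www.mulesource.org/schema/mule/$MODULE$/2.0"

<!-- [sl] New Schema Location: hypothetical template body -->
http://www.mulesource.org/schema/mule/$MODULE$/2.0 http://www.mulesource.org/schema/mule/$MODULE$/2.0/mule-$MODULE$.xsd
```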

Tuesday, June 10, 2008

Effective Product Demonstrations

Last week we had to give a demonstration of all MuleSource products working together. The requirements were open-ended and it required some planning to put the demo together. Like giving presentations, giving demonstrations is a skill that you learn only by doing. Ours went very well so I figured I would share some thoughts about it.

1. Understand your audience


You need to make sure that you know what your audience is looking to get out of the session. In our case the audience was technical, but understanding how to build the ESB was less important than how to manage and control it. This sort of information drastically changes how a demonstration is presented.

2. Build a story


Our demonstration was about 1.5 hours and we had a lot of products to demonstrate. We didn’t want to bombard the audience with everything all at once even though the pieces were very much related. We decided to provide a big-picture view in the context of the story we were building and then sequentially go through each chapter of the story.

3. Keep it simple


A common mistake I’ve seen before is that it takes so long to introduce the reference application for the demonstration that the audience is confused before you’ve got to your real agenda. Usually, real world scenarios need to be simplified for demonstration purposes.

4. Introduce and summarise


The story should be broken into chapters, each one tackling a specific part of the demonstration. Each chapter should be introduced. We found it very effective to provide a diagram for each chapter to show how the pieces were going to be presented. After each chapter we summarised what was covered and allowed time for questions.

5. Present in pairs


It is very difficult to drive a live demo, answer questions and talk coherently on your own. It’s much better to have a co-pilot; one provides commentary for the presentation and fields questions while the other drives it and talks to the specific aspects of the demonstration.

6. Prepare, prepare, prepare


History is littered with demonstrations gone wrong; everyone remembers Microsoft’s numerous blue-screen-of-death incidents. Both presenters should run through the presentation at least a couple of times to iron out glitches and spot areas for improvement. You should also do a dry run with other people in the organization to get feedback and check timings.

7. Lock it down


You should not change any code used in the demonstration at the last minute. What may seem like an innocuous change may blow up in your face later. Like software releases, any changes will require a full run-through.

8. Stay focused


When doing technical demonstrations you are likely to get a lot of questions. Sometimes these may take the session in a skewed direction. It’s up to the presenter providing commentary to keep the demonstration on track.

9. Have fun with it


I’ve sat through demonstrations in the past where I felt like the presenter was dead at the wheel. You need to believe in what you are demonstrating and put energy and enthusiasm into it, otherwise you will lose your audience. Having a co-pilot helps to facilitate more lively discussions.

Saturday, May 31, 2008

Mule From a Crane

I’ve been traveling all week, hitting London, Bergen, Oslo and Munich before landing back in Malta last night. One of the highlights of this trip was delivering a “Mule Unplugged” session in Bergen to a group of customers and users. While I always find the Norwegians to be a good bunch to be around, the notable part of the day was the location of our meeting: a meeting room housed in a disused dock crane, with the driver’s cockpit preserved. I guess photos are the only way to describe it:



One of the more exotic conversions I've seen.



The crane cockpit. Unfortunately none of the controls worked (I tried).

Wednesday, May 28, 2008

Twitter or Twit?

I am not convinced about Twitter; actually, I think it's pointless. However, some people I respect rave about it. It seems to be one of those things you need to try for yourself, so I've decided to give it a two-week trial to see what I get out of it (yep, it's all about me). If you are using Twitter, look me up; my ID is 'rossmason'. It would be great to hear from other people about what they think of it. In the meantime I'll be posting my progress here.

Mule Updates

Oops, it’s been almost 2 weeks since I last posted. I have a lot of stuff going on but not enough time to gather my thoughts. In the meantime there has been some stuff happening in the Mule community since my last post, so here are the highlights.


  • We released Mule Community 2.0.1 and 1.4.4. For those waiting, 1.4.4 was a long time coming, but we had to focus our efforts on Mule 2.0.

  • My “Mule 2 and Beyond” video went up on Parleys. This is a talk I did late last year but is still very relevant. What I can say is you learn a lot about how to improve your presentation skills by watching yourself!

  • Eugene published a great article on TheServerSide about how to implement a scalable, Map Reduce architecture using Mule. Nice work!

  • For those that missed MuleCon in San Francisco this year, there is a short re-cap video available (the actual session talks will be available shortly). Make sure you come next year ☺

  • There have been a boatload of Mule 2.0 documentation changes over the past 2 weeks too. The team is putting a lot of effort into making the documentation more complete and easier to navigate.

Friday, May 16, 2008

OSGi: Deployment Nightmare Unfolding

There is quite a bit of buzz around OSGi at the moment with every vendor and his dog announcing support for it in one way or another. When it comes to deployment, right now everyone is doing their own thing with OSGi. On the one hand this is good since OSGi bundles are compatible with each other so the more vendors support it the more we can do around bundling components. On the other hand, every vendor is coming up with their own mechanism for deploying to an OSGi runtime. This is because there is no standard way to package up bundles of bundles. In JEE we have WARs, RARs and EARs for deploying applications but in OSGi world we’re going to have many vendor-specific ways of deploying applications. It’s like we’re taking a giant leap backwards. I should point out that we at MuleSource are guilty of doing the same thing because there is no standard right now that anyone can align with.

Ideally, what we need is a JSR (or similar) that defines a standard deployment format to deploy to OSGi containers. Essentially this would mean a small kernel that understood how to unpack and deploy bundles of bundles to an OSGi runtime. There is no value in vendors owning this stuff or coming up with proprietary implementations.

Ultimately, a standard deployment mechanism should be part of the JDK. I just don't know if I trust the JCP to produce a simple elegant solution by committee...

Bio Car: 0-100km/h in 3.1 seconds!



I'm a big fan of super cars in that I dream of buying something outrageously expensive one day that will cripple me every time it needs a service.

I stumbled upon a company today called Koenigsegg - a small, low volume super car manufacturer - who recently announced a version of their flagship CCX car, called CCXR. Apart from being a beautiful beast it's being touted as the first green super car. It runs on Biofuel and actually has better performance than the CCX hitting 100 km/h in 3.1 seconds!

While a small company such as Koenigsegg will not have a huge impact on global CO2 emissions, it goes to show you don't need to sacrifice performance to go green, and the large car manufacturers could be doing a lot more to reduce the CO2 footprints of the cars they produce.

You can be sure this beats the hell out of driving a Prius on a Sunday afternoon!

Thursday, May 15, 2008

FaceBook Chat uses Erlang

Facebook is one of the fastest-growing communities in the world. So much so that they can report their numbers in terms of percentages of population. They announced Chat for Facebook about a month ago, and now Eugene Letuchy, a Facebook engineer, has written this very informative post about the technologies used and how they released the service. Their real challenge: how do you go from zero to 70 million clients with the flick of a switch?

Bored of your Asus eee PC? Give it a touch screen!

I love the guys at LShift; they're always doing something interesting. If you are bored of your wee Asus eee PC and not afraid to dust off your soldering iron, then here is a great guide on how to give your Asus a touch screen. There is even a video of the result!

Tuesday, May 13, 2008

Fring me twice, shame on me

Even though I got Fringed the first time round, I find the idea of a universal chat client on my iPhone so appealing that I couldn’t resist trying it again. For Simpsons fans I felt like Bart when he repeatedly reaches for the electrified cupcake when Lisa devises the “is my brother dumber than a hamster” test.

Anyway, Fring did delete my SMS messages again. Contrary to my last post, SMS messages do get backed up by iTunes but Fring manages to work its way around that. But what’s worse (and maybe coincidence) was that my battery started draining at an alarming rate. Even when I uninstalled Fring my battery was hosed. So bad in fact I had to get another phone because it wouldn't even last half of the day and I couldn’t schedule my JavaOne appointments.

I really have learned my lesson this time. Stay away from Fring.

No, I will not join your JSR

Atlassian should be known for their T-shirts. They always have some good ones at JavaOne and this year was no exception. I love the irony of Hani wearing this since he is on the JCP committee (unfortunately his shirt is obscured by a feather boa).

Monday, May 12, 2008

Java + One = Mucho Drinking

By far the best part of JavaOne is the drinking. This is where all the Java kids converge and proceed to decimate their precious brain cells with copious amounts of booze. The only way to describe a good night out is with photos. My wife came out on the last night (though reluctant to join a group of guys “geeking out”). She was diligent enough to take some photos from the Tangosol / SolarMetric founders party:



The Atlassian crew, Mike, Scott, Nick, Mary Ann and I. If there is beer drinking going on you can bet there will be some Aussies there!



I asked Patrick Linskey if he'll be cashing out after the BEA acquisition (BEA acquired his company, SolarMetric). He said nothing but held this pose for a good 5 minutes.



I had one of those sinking moments when I introduced my wife to Hani Suleiman of the BileBlog fame. He proceeded to describe what a "rusty trumpet" was to my wife and I made an obvious attempt to shield her delicate ears with a feather boa.



In open source community spirit Bruce Snyder initiates an Irish car bomb competition. Thankfully, I dodged this bullet and was able to function the next day.



When vendors collide; We have Glen Daniels from WSo2, Debbie Moynihan from IONA and me, pulling my unexpected paparazzi face.



Two of our hosts Cameron Purdy, Patrick Linskey and Matt Raible doing what they always do at JavaOne.



MuleSource out in force. Dan Diephouse is obscured by my beer but check out the cuff-links!

JavaOne Verdict: Pretty good, need better speakers

Walking away from JavaOne this year, I don’t feel I learned much new, but there was definitely some good stuff around.


  • JavaFX was high on the agenda, but since this was the hot topic of last year it didn’t create the buzz they had hoped for. Still, they had some nice demos even if the sessions themselves were awful.

  • There was some focus on cross-language support in JDK 1.7, which is good news for everyone, but since JSR-223 it’s less of a scoop.

  • I was really impressed with SUN’s gaming server, DarkStar, mostly because it showed off cool 3D games that I could swoon at while trying to remember the last time I had time to play games.

  • Livescribe was pretty good too. It’s a pen with an embedded Java runtime that can record what you say and write, perform translations and play games... yep, that’s right, it’s a pen. Those of you familiar with Leapfrog (a toy manufacturer and customer of ours) may have seen their FLY Fusion pentop computer before, which has all the features of Livescribe and more (I got my niece and nephew one of these for Christmas; they’re awesome); the only difference is it’s not Java.



On the desktop/middleware side there was quite a bit of buzz around OSGi, with every vendor and his dog announcing support for it in one way or another. Equinox even had a booth. I was staggered by the amount of software SUN was showcasing; everything from Open Solaris (nice new logo), GlassFish, SUN SIP server, Network.com (yes it’s still alive, apparently) and NetBeans to name a few. What was missing for me was the story that brought all these things together. It looks like they may be consolidating under the GlassFish brand, which is probably a good thing. I actually thought NetBeans looked good from a distance, but does anyone use it?

From my own experience and what others told me, the sessions really weren't up to par. The content schedule was OK, though very biased towards SUN projects (their prerogative I suppose), but it seems the speakers were universally bad. SUN should probably run a one-day workshop before the event to train people on public speaking. Better still, they might want to broaden the net to allow more non-SUN employee speakers. I think SUN really has to be careful here in order to bring new Java talent to the conference, otherwise JavaOne will just be for networking, vendor hype and swag-baggers.

Saturday, May 10, 2008

Java + Mule

I just got back from JavaOne and I had a great time (post to follow). A big part of that enjoyment was meeting an array of folks using or looking to use Mule and Galaxy for projects large and small. Of course we had our fair share of swag-baggers who were particularly enamoured by our squeezy Mules and "Don't be a dumb-ass" T-shirts. Though one of our visitors went one step further and made his own Mule T-shirt. Nice!



Note squeezy Mules in hand :)

Friday, May 2, 2008

JavaOne: Come and see us

I wanted to send a quick note to those of you who will be in the San Francisco Bay Area next week, that MuleSource Architect Dan Diephouse and I will both be at JavaOne for the duration of the conference, from May 6 to May 9. If you are in the area, please come by and see us at the MuleSource booth, located in the SOA Pavilion.

We all know the best part of JavaOne is the drinking (I still have blurred memories of Irish car bombs, guys in chinos wrestling and 3am burgers at Denny's; it all seemed like a good idea at the time). Shoot me an email if you are around; it would be great to grab some beers.

If you are looking for good content, Dan will also be speaking at a Birds-of-a-Feather session on building services with the Atom Publishing Protocol. That session will take place on Thursday at 19:30.

If you have not yet registered for JavaOne, Mule community members can receive a $100 discount with the following code: ECXH28. Register here.

Hope to see some of you there!

Monday, April 28, 2008

Software on demand *except on weekends

We’re currently taking Google Apps for a test drive and Dave had some fun trying out the Salesforce.com integration over the weekend. I was really surprised to hear today that when he filed support issues with Salesforce.com he got no response until this morning. Not even an automated reply! I find it very strange that a company that sells itself on being there when you need it wouldn’t have support on the weekends.
I must admit it took us a lot of effort to figure out the infrastructure and processes for scalable 24x7 support at MuleSource, so I can understand why Salesforce.com might have avoided it given their sales model.

Friday, April 18, 2008

Java Cloud: Has SUN dropped the ball?

With Amazon Web Services gaining a lot of attention and the recent announcement of Google’s AppEngine, the question arises: “What is SUN doing?”

SUN has all the right ingredients to make a PaaS (Platform as a Service) play:


  • They know hardware

  • They have a great operating system, Solaris

  • They have the Java platform and mindshare

  • They even have a great database offering, MySQL


Given all this plus the potential reach SUN has into Java communities and commercial organisations, they should be leading the PaaS movement.
Oh wait, SUN does have an offering. It’s called Network.com, and not many people have heard of it.

SUN may have a lot under its belt, but one thing they lack is the marketing engine to get their message out. This might be because, for the last 8 years, SUN’s messaging has been all over the map. I like SUN (did you know they are the world's largest Open Source company? Me neither) but they need help. Apart from their hardware business, I don’t think many people know what SUN does or what their business model is.

I’d love to see SUN dust off Network.com and create a PaaS offering. They should base it on Solaris, Java and MySQL, outsource their PR and marketing for this project and dive head-first into the PaaS game.

Wednesday, April 16, 2008

I got Fringed!

Fring deleted all my SMS messages. I know it was Fring because the same thing happened to Dan. Oh why doesn't iTunes back up my messages?

UPDATE: I can no longer receive SMS messages on my phone! Sadly, I think it's time to scrub Fring off my phone for now.

VoIP on the iPhone!

Finally, Fring have released a VoIP/Chat application for the iPhone. You can connect using your Skype, GTalk, Twitter, Yahoo, MSN, ICQ and generic SIP provider accounts and start talking. It even supports Skype Out. Fringing cool! We had some problems testing Skype voice yesterday but it's still a beta.

Monday, April 14, 2008

The Mighty ROA(R) of REST

I must confess I didn’t fully get REST until recently. Given the amount of interest around REST most people understand the basics:


  • It’s an architectural style. It’s not a technology itself, rather a set of well-defined guidelines or rules for building scalable applications utilizing HTTP.

  • HTTP verbs such as POST, GET, PUT and DELETE are used to communicate desired behaviour to the server.

  • Each of these verbs has a well-defined CRUD action associated with it, i.e. POST = Create, GET = Read, PUT = Update, DELETE = Delete.

  • URIs are used to represent the resources to act upon. The information that appears in URIs usually identifies nouns, such as http://myhost.com/people/{person} or http://myhost.com/people/{person}/addresses.
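The verb-to-CRUD convention above can be sketched as a simple lookup table. This is just an illustration of the convention itself, not any particular framework's API:

```java
import java.util.HashMap;
import java.util.Map;

class RestVerbs {
    // The conventional mapping of HTTP verbs to CRUD actions.
    static final Map<String, String> VERB_TO_CRUD = new HashMap<>();
    static {
        VERB_TO_CRUD.put("POST", "Create");
        VERB_TO_CRUD.put("GET", "Read");
        VERB_TO_CRUD.put("PUT", "Update");
        VERB_TO_CRUD.put("DELETE", "Delete");
    }

    public static void main(String[] args) {
        // e.g. GET http://myhost.com/people/{person} reads a person resource
        System.out.println("GET -> " + VERB_TO_CRUD.get("GET"));
    }
}
```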


What I didn’t get until recently is how you build applications in a RESTful way. This was because my mind was still set in RPC mode: I was trying to map RPC/WS calls to REST, and you quickly start to think there aren’t enough verbs in HTTP to perform all tasks. The real problem was that in order to build a RESTful architecture you need to think in terms of Resource Oriented Architecture (ROA). That is, you need to leave behind the urge to define interactions in terms of ‘what you want to do’ and shift focus to ‘what resource you want to act upon’.

For example, you may have a Web Service that has a login() method. This is an action; it’s something you want to do. How does this map to a resource? A resource needs to be defined where login() becomes an action on that resource; in this case it would be a UserSession resource. This may not be immediately obvious coming from the WS/RPC world, since the UserSession resource didn’t exist there.
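As a rough sketch of that mental shift (the names here, such as `SessionStore`, are mine, not from any framework): the RPC-style login() call becomes the creation of a UserSession resource, so "logging in" is a POST to a sessions collection and "logging out" is a DELETE of the session:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// RPC thinking: an action you want to perform.
interface AuthService {
    String login(String user, String password);
}

// ROA thinking: a UserSession resource you act upon.
class UserSession {
    final String id = UUID.randomUUID().toString();
    final String user;
    UserSession(String user) { this.user = user; }
}

// Hypothetical resource store; each method maps to a verb on /sessions.
class SessionStore {
    private final Map<String, UserSession> sessions = new HashMap<>();

    // POST /sessions  -> "login" creates a new session resource
    UserSession create(String user) {
        UserSession s = new UserSession(user);
        sessions.put(s.id, s);
        return s;
    }

    // GET /sessions/{id} -> read an existing session
    UserSession get(String id) { return sessions.get(id); }

    // DELETE /sessions/{id} -> "logout" removes the session
    void delete(String id) { sessions.remove(id); }
}
```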

I am still getting to grips with some of the details of ROA but I am becoming a big fan. I love the fact that RESTful proponents are very particular about REST terminology (apologies for any faux pas I might have made here); the power of REST seems to lie in the acceptance of a core set of principles, with no room for ambiguity or bending the rules. This also means that REST is not suited to all architectures, but it is certainly a powerful architectural style that should be in every architect's toolbox.

Sunday, April 13, 2008

The Amazon Story

Amazon Web Services (AWS) amazes me. Not only have they built a fantastic platform, but it also seems such an odd direction for a book and DVD company. I have often mused about how Amazon went this way by imagining the board meeting when the suggestion came up:


Jeff: We’re doing great. Book sales are through the roof, our recently launched DVD service is taking off and we even have a community market place that is getting traction.
Investor: Yes, the numbers look great. What’s next?
Jeff: Well, I think we should develop a compute cloud and build a set of Web Services around it to allow developers to build applications on our technology.
Investor: Hmmm... I don’t get it. What about selling electronics?
Jeff: No, I like the cloud idea.


Talking with Dave the other day gave me some insight into how Amazon really stumbled on this interesting direction (though I still like my boardroom conversation). Basically, Amazon has a boatload of compute power available for redundancy and for coping with peaks such as Christmas. About 6 years ago the economy was in a downturn, Amazon sales were not doing as well as hoped and their stock started going south. Jeff Bezos, who has a reputation for having some crazy ideas, decided to back the Elastic Compute Cloud (EC2) project, since Amazon already had the infrastructure and know-how for building hugely scalable systems, and they had a bunch of hardware that was doing nothing.

It was a gutsy move for Amazon, since EC2/AWS would not have been a short-term revenue generator, and I doubt it is having a huge effect on Amazon’s earnings 5 years later. However, few would dispute that many in the industry are now looking into compute clouds and SaaS with great interest. It’s just a matter of time.

JBI Misses the Mark

Every now and again I get into a discussion about why I decided not to adopt JBI for Mule. Admittedly, the topic has cropped up less and less over the past 2 years as people realize that just because something is called a standard doesn’t make it the best solution. However, now and again there is a die-hard fan pushing JBI as the only way. It’s been a few years since I read the JBI specification, but here goes.

When JBI (Java Business Integration, a.k.a. JSR-208) first started cropping up it generated some interest, since it was an attempt to standardize an area of application development with a lot of moving parts and complex problems to solve. Of course I looked at the spec early on, since Mule was a platform operating in the integration/ESB space.

My initial reaction was mixed: I felt there was scope for standardization in integration, but the scope of JBI seemed to intrude into the problem space much further than was required. Integration problems are varied, complex and different for every organization, because the technologies, infrastructure and custom applications in organizations are always different. Furthermore, they are effectively immutable. This is a key point, since many proprietary integration solutions (EAI brokers, ESBs, SOA suites) assume that an organization is either creating a green-field application or can rip and replace pieces of its infrastructure to make way for the Vendor X way of doing things.

Mule was designed around the philosophy of “Adaptive Integration”. What this means for Mule users is that they can build best-of-breed integration solutions, because they can choose which technologies to plug together with Mule. It also means that they can leverage existing technology investments by utilizing middleware purchased in the past from other vendors. When talking about integration or SOA, I think the piece with the most value is the glue between systems. That said, it is vitally important that this glue be as flexible as possible in order to be useful for a wide range of integration and SOA scenarios. This is one area where I think JBI went wrong.

JBI attempts to standardize the container that hosts services, the binding between services, the interactions between services, and how services and bindings are loaded. It sounds like a great idea, right? The problem occurs when the APIs around all these things make assumptions about how data is passed around, how a service will expose its interface contract and how people will write services. These assumptions manifest themselves as restrictions, which, as we know, are very bad for integration. The assumptions include:


  • XML messages will be used for moving data around. For some systems this is true, but most legacy systems were built before XML existed, and they use a variety of message types such as Cobol CopyBook, CSV, binary records, custom flat files, etc.

  • Data transformation is always XML-based. Transformation is hugely important for integration, so assuming all transforms will be XML and use the standard javax.xml.transform library was a huge limitation.

  • Service contracts will be WSDL. This might seem like a fair assumption, but again it’s very XML-centric, and WS-centric too. We know that back- and middle-office integration is no place for Web Services. What about other ways of defining service contracts, such as Java annotations, WADL or plain Java interfaces?

  • No need for message streaming. There is no easy way to stream messages in JBI; it just wasn’t a consideration. Some vendors have worked around the API to support streaming, but that defeats the purpose of having an API.

  • You need to implement a pretty heavy API to implement a service. This means the people writing your services need to understand more about JBI than is necessary. Mule has always believed that a service can be any object, such as a POJO, an EJB session bean or a proxy to another component. This means it does not force developers to know anything about Mule; they only write logic relevant to the business problem at hand. It’s worth noting that vendors have also found workarounds in JBI to allow developers to deploy POJO services, but it quickly starts looking like JBI is working against them.

  • It’s not actually clear what a service engine is in JBI. JBI seems like a container of containers, but what about services? Do I need to host an EJB container inside JBI and then have an EJB session bean as my service? Do I write a service engine per service? It seems both may be valid, but I never thought either was optimal.
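To illustrate the POJO point above, here is a sketch of a service as Mule would see it: plain business logic with no container API in sight. The interface and class names are invented for illustration:

```java
// A plain Java interface as the service contract; no WSDL required.
interface PriceQuoter {
    double quote(String sku, int quantity);
}

// A POJO service: pure business logic, nothing framework-specific.
// A container such as Mule can expose this over JMS, HTTP, FILE, etc.
// without the class knowing anything about the middleware hosting it.
class DiscountPriceQuoter implements PriceQuoter {
    private static final double UNIT_PRICE = 10.0;

    @Override
    public double quote(String sku, int quantity) {
        double gross = UNIT_PRICE * quantity;
        // 10% discount on orders of 100 units or more
        double discount = quantity >= 100 ? gross * 10 / 100 : 0;
        return gross - discount;
    }
}
```

Contrast this with a JBI service engine, where the class above would need to be wrapped in (or rewritten against) the JBI component APIs before it could be deployed.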


There are other issues as well. The JBI specification is obfuscated with lots of new jargon. For a new developer, the sheer amount of knowledge required to get started with JBI is a little overwhelming. That could be said of any middleware, but I think JBI is one of the worst for new developers to grasp.

The JBI specification does talk about the roles of the different types of users that will interact with a JBI system. This is a good approach, but in reality it was difficult to follow, primarily because it seems JBI was designed without really thinking about how the developer would work with it. Why? Well, when you get a load of vendors (though not Oracle or BEA) to sit around and design an integration system, they will design a system around the way they see the world. This is exactly what vendors did before JBI too, and we were often not happy with the products we were given. JBI seems to be a “standard” written by middleware vendors for middleware vendors.

This “vendor view” of the world is one of the main reasons Open Source has done so well. Traditionally, Open Source has been written by developers much closer to the problem being tackled. These developers can deliver a better way of solving the problem using their domain knowledge, their experience and the need for something better. This was the ultimate goal of Mule, and given the success of the project I believe that goal has been realized, with the caveat that things can always be improved (which we continue to do).

I also think the far-reaching scope of JBI affects its re-usability across vendors. By their nature, vendors need to differentiate themselves in order to compete. Because JBI attempts to define how everything should work, vendors have to build in features and workarounds that go beyond the specification to differentiate their service containers. This breaks the re-use story, since using a JBI Binding Component in one container doesn’t mean it will behave the same way in another.

This leads me to my final point: one of the big selling points of JBI, and of standards in general, is to promote re-use. But I don’t think we’ve seen much re-use from JBI. Where is the library of Service Engines and Binding Components? I know Sun has started porting their J-Way connectors to JBI, but nobody supports them. If you look at the JBI implementations, each has written its own JMS, FILE, HTTP, FTP, etc. Binding Components… not exactly what I’d call re-use.

One thing that concerns me about the JCP is that when something is released through this process, people label the resulting work a standard. This is a little strange to me, since I think of a standard as something that is defined once the problem domain being addressed is fully understood. For example, TCP/IP is a real standard because it states a well-defined protocol for exchanging information over a network. Its scope is very clear, the protocol is precise and it deals with a single problem. JMS, on the other hand, is not really a standard; it’s a standard API defined ultimately by two vendors (IBM and TIBCO) who made sure it suited their needs, not necessarily the needs of Java messaging. This is why JMS has some quirks and the API is heavy (though improved in JMS 1.1).

The fact that JSR-312 is called JBI 2.0, not JBI 1.1, suggests that the forces behind JBI 1.0 realize its flaws and are looking to make amends. Those who badgered me in the past about adopting JBI in Mule may now want to retract some of their overzealous statements, since JBI 1.0 is already looking obsolete. I am glad I did not pollute Mule with it.

I had some good conversations with Peter Walker (co-lead) about joining the expert group for JSR-312. I appreciated the community spirit of Peter reaching out, but I felt the direction wasn’t compelling enough; it seemed to be going the same way as before. We talked about OSGi and SCA support, and how these might be woven into the specification. While I think this is a good idea, I don’t think it’s up to a standards committee to define how to build a server. Again, the scope seemed too broad.

So, should we just give up? No, of course not. But I think we should re-align our sights to target the particular areas where standardization makes sense. From my experience there are two: configuration and connectors. These are the pieces that either the user or other systems need to interact with.

Service Component Architecture (SCA) addresses the XML configuration piece, and I do think this is a step in the right direction (I’d like to see the same done for programmatic configuration). The issue with SCA right now is that people still think it’s a bit Web Services-centric and the configuration is quite verbose. Another major issue is that SCA hasn’t received the adoption required to be a front-runner.
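For context, a minimal SCA composite looks something like the following (component and class names are invented for illustration; the verbosity grows quickly once more bindings, wires and properties are added):

```xml
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="PriceComposite">
  <component name="PriceService">
    <!-- the implementation is just a Java class -->
    <implementation.java class="com.example.PriceServiceImpl"/>
    <service name="PriceQuoter">
      <!-- expose the service over a Web Service binding -->
      <binding.ws uri="http://localhost:8080/price"/>
    </service>
  </component>
</composite>
```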

On the connectors front we do have JCA, but anyone who’s read the JCA spec may wonder how on earth a bunch of smart people could come up with such a recalcitrant API.

I’d like to see a compelling connector architecture come out of the JCP process. Focusing on this specific area might encourage middleware vendors and other independent software houses to build re-usable connectors that we could all use.