Tuesday, January 8, 2013

Java, JavaScript and Maven in 2013

A couple of predictions for this year:

  • Java will become cool again
  • JavaScript running on the JVM will be interesting
  • Maven will become cool

Java

Java 8 will bring lambda expressions into the fold and enhance its collections to support them. Functions as first-class objects are one of my favourite features of other languages such as JavaScript. What I like about them is mainly the flow of writing code. With Java today there's a bit too much of a context switch when you have to provide some type just to handle lambda-style behaviour. This can curb productivity.
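As a sketch of the kind of flow I mean, assuming the lambda and streams syntax in the current Java 8 previews (the names may yet shift before release):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LambdaDemo {

    // Today this would need an anonymous inner class implementing some
    // single-method interface; with lambdas the behaviour reads inline.
    public static List<String> longNames(List<String> names) {
        return names.stream()
                    .filter(n -> n.length() > 3)   // behaviour passed as a lambda
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(longNames(Arrays.asList("ann", "bob", "carol")));
        // prints [carol]
    }
}
```

Compare that with the anonymous inner class you'd write today just to express the filter.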

It is a shame that automatic property access isn't being incorporated, as I think that would be the final piece needed to push aside a few other JVM languages.

JavaScript on the JVM

Both Nashorn and DynJS will change the landscape in terms of running JavaScript on the JVM. I'm personally quite impressed with Mozilla's Rhino and have always found it to be quite functional. Having something that builds on its API learnings and simultaneously runs much faster will make the JVM a first-class platform for running JavaScript. Watch out, Node.js… I suspect there'll be a nice JVM-based equivalent coming along.

Maven

It seems to me that people's main issue with Maven is the XML POM file format. It's about time we had a better DSL for POMs. Imagine what a JavaScript POM DSL would look like… My belief is that moving away from XML will make Maven appeal to many of its current naysayers.

Click here for my reflections a year on.

Saturday, March 3, 2012

Fast broadband == backup

Gosh, what a long time since I last blogged. Truth is I've moved over to R&D within VMware and I've been working on really interesting cloud stuff that has taken up all of my cycles. I'll be writing more about that soon.

There's a big debate here in Australia on the benefits of the National Broadband Network (NBN). The federal opposition is asking for justification in terms of a business case etc. etc. They're also arguing that the government's plan is rubbish, stating that wireless is a better option. The arguments go on, and I think they are mostly inspired by the fact that the NBN isn't the opposition's idea.

I think that the biggest unstated benefit that will come out of the NBN is offsite backup. Yep, you heard it here, the boring business of backing up data.

My 1TB backup drive gave up the ghost a couple of weeks ago. Since then I've been thinking about buying another, but the thing is, I really want more offsite backup. I've got some offsite backup going in the form of other devices (iPhones, iPods etc.), and I'm even using our VMware Mozy backup solution for work stuff.

The things that are holding me back in terms of full offsite backup though are: (i) the ISP cost of uploading/downloading GBs of data; and (ii) the speed at which this can be done (both the initial upload and incremental uploads).

My view is that not only will the NBN deliver more speed (I'm only getting 3 Mbps at the mo), it'd also be reasonable to expect larger download/upload quotas over time. Right now download quotas are pretty similar to existing ADSL plans, but as more people suck the data down, the market will demand higher quotas. And that means that offsite backup should become a reality.

So the next time you're thinking about the cost of NBN here in Australia (presuming that you do!), think about how much your data is worth and then multiply that across the population!

Thursday, September 8, 2011

Application developers should not write frameworks and toolkits (much)

On occasion I'm a little disappointed when I see developers create frameworks and toolkits. I understand that it is enjoyable to do so, but we, as application developers, should be focused primarily on the business need of developing applications.

Quite often we think that xyz widget doesn't do exactly what is "required" and therefore pursue a path of days (weeks) writing frameworks and toolkits.

Let's not do this anymore.

Instead, every time you, as a developer, get the urge to write a framework or toolkit or even just a utility class, think about extending an existing open source project to meet your needs; or even starting a new open source project. For one thing, your requirements will most likely be challenged (which is a good thing). You will no doubt end up with something better than you could have created yourself (yes, that's hard to swallow, I know!). Importantly though, your organisation will benefit; the fewer "quirks" particular to an organisation, the faster it can ramp up resources and "Get Things Done".

Your organisation will also be able to share stuff between the silos. Imagine that.

I often hear, "but my organisation doesn't contribute to open source" (despite heavily using it). In this instance I always ask if the person in question has actually tried selling the business benefit (above) and attempted to make it happen. More often than not, the developer hasn't tried. Organisational behaviour is driven by the views of its employees and contractors. Organisations change and are changing.

I do believe that at the end of the day, it is better to suffer with an xyz widget that does most of what you need it to do than to re-invent the wheel. If you're an application developer then focus on the application, not frameworks and toolkits; unless you're contributing to open source projects.

Thursday, August 11, 2011

Dependency Injection in Java - @Resource, @Inject or @Autowired?

The @Resource documentation states that the "name" attribute refers to "The JNDI name of the resource". Spring has overloaded this to mean that you can refer to bean identifiers… but as far as the Java spec goes, there's no contractual obligation for a resource name to refer to a bean id. @Resource referring to Spring bean ids is therefore quite Spring-specific.

@Autowired is Spring-specific.

@Inject is not Spring-specific and performs type-safe injection.

The other benefit you'll get from type-safe injection is the ability to safely refactor within your favourite IDE, but I personally think that the contractual obligation mentioned above is more important.
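To make the distinction concrete, here's a sketch of the three annotations side by side on a hypothetical CustomerService dependency (imports from javax.annotation, javax.inject and org.springframework omitted; the bean name and types are illustrative):

```java
public class CustomerController {

    @Resource(name = "customerService")   // by JNDI/bean name - Spring overloads this
    private CustomerService byName;

    @Autowired                            // by type, but Spring-specific
    private CustomerService bySpring;

    @Inject                               // by type, standard JSR-330 - portable
    private CustomerService byStandard;
}
```

Only the @Inject field is both portable across containers and resolved type-safely.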

Wednesday, August 10, 2011

Versioning RESTful services

I've seen some commentary out there regarding how to version RESTful services, so I thought it about time to add my own position on this.

I don't subscribe to crafting your own MIME type, as these should refer to well-known entities. I also don't subscribe to passing a version number as an Accept header parameter. Despite there being client and server support for this in some stacks (Ruby, for example), the developer is not forced to specify the version, and I think it is useful to be explicit about the version of a service to use.

I'm an advocate of using a /v1 path in the URL of a RESTful service, but it should be noted that great care should be taken over exactly what that version number refers to. I've long subscribed to the major.minor[.maintenance[-build]] approach as per Maven and reasonably discussed at Wikipedia. In the case of URL versioning, only the major identifier is used, i.e. "1.0.2" becomes just "v1".

Major versions only change when there is an API-breaking change that has been introduced. This should be very rare. API-breaking generally means quite a big shift in the approach an API is taking which is why I found it interesting that Google declare their Google Maps API will always be backwardly compatible. To me that's effectively what held Microsoft back with Windows until Vista came along and started to drop backwards compatibility (Windows 7 of course is the evolution of that good decision).

So I say use /v[major] as a convention, but be diligent in the definition of what is "major".

Friday, July 22, 2011

LMAX and the benefits of keeping things in memory

I've now read Martin Fowler's review of the LMAX architecture and I feel good. This is very similar to what I've been doing for a long time with messaging (Spring Integration, Camel, AMQ, RMQ etc.).

LMAX is an event driven architecture. The Input Disruptor can be your favourite messaging broker. The Business Logic Processors are your message queue consumers. AMQP can certainly attain the goals described for the Output Disruptor (targeted consumers etc.).

The only thing "disruptive" about this article is that it introduces new terminology. I accept that there's probably a lot of optimisation going on, but from an architectural standpoint I don't see anything revolutionary.

Perhaps the most interesting points of the LMAX architecture are that everything is done in memory and that the processing of data is kept close to the data. These are things we can all no doubt agree are the main influences on performance.

Sunday, June 26, 2011

RESTful JSON fixtures for testing using CouchDB

I'm working in a distributed team at the moment where I don't have access to infrastructure available within the company's firewall. The application I'm working on has a nice RESTful API with JSON payloads for its logic layer. As I'm mostly developing the presentation layer I needed a quick method of generating something that would simulate the logic layer.

One thing that I felt was important is that I shouldn't have to mutate any settings within my presentation application in order to work either against the simulated logic layer, or the real one. It is all too easy for such settings to creep into production and break things.

I had a few thoughts on how to achieve my objectives: Spring MVC, a JAX-RS re-implementation using our existing interfaces (we're using JAX-RS in the real logic layer), writing a Jetty-based servlet handler, and node.js. Then I remembered...

CouchDB is a document-oriented database that uses JSON as its document format. In addition, CouchDB provides a RESTful JSON-based API to access the database. This got me thinking that I could take sample responses from our real logic layer and enter them directly into the CouchDB database. It turns out you can, using the bulk document API!
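Loading the fixtures is then just one HTTP POST. As a sketch, assuming a database named customerdb (the second customer here is invented for illustration):

POST /customerdb/_bulk_docs HTTP/1.1
Content-Type: application/json

{"docs": [
  {"id": 1, "surname": "Hunt", "firstname": "Christopher"},
  {"id": 2, "surname": "Smith", "firstname": "Jane"}
]}

CouchDB assigns each document an _id and _rev in the response.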

I then thought that I needed to create a CouchDB service that could understand the parameters of the web service request I needed to make. These parameters are properties of my JSON objects that are otherwise unavailable to the URL format that CouchDB provides for querying. The answer here is to provide a View. Here's my view:

function (doc) { if (doc.surname) emit(doc.surname, doc); }

The surname is a property of my JSON object that I'll want to query.

I call the view the same name as my logic layer service. In my case the view's name is "getCustomerBySurname". If I query my service using CouchDB's URL convention I would issue something like:

http://localhost:5984/customerdb/_design/customerService-1.0/_view/getCustomerBySurname?key=%22Hunt%22

This yields a result like:

{"total_rows":1,"offset":0,"rows":[
  {"id":"285b217949d54c29c4b27adf6d000da2","key":"Hunt","value":
    {"id":1,"surname":"Hunt","firstname":"Christopher"...

Note that I'm using a convention for versioning a resource so that I can evolve its API nicely in the future - this is quite important for RESTful requests, as versioning isn't something HTTP itself provides.

I now have a web service that behaves the way the real service behaves. However, it doesn't look the same to the consumer for two reasons:

  • the result contains meta information about the query that the consumer does not expect ("total_rows" etc.); and
  • the URL we provide is not of the same form as the real logic layer service.

Transforming the JSON result

To transform the payload returned by CouchDB into something our consumer expects we create a List in CouchDB-speak. My list function looks like this:

function (head, req) {
  provides('json', function () {
    var results = [];
    var row;
    // Strip CouchDB's row metadata, returning just the stored documents
    while ((row = getRow())) {
      results.push(row.value);
    }
    send(JSON.stringify(results));
  });
}

I've declared my list by the name of "rows". The function is focused on pulling out just the row.value property of each row and returning that back to the consumer. To access the view that I created and have it render through the list you provide a URL like this:

http://localhost:5984/customerdb/_design/customerService-1.0/_list/rows/getCustomerBySurname?key=%22Hunt%22

This now yields something like:

[{"id":1,"surname":"Hunt","firstname":"Christopher"...

...which is what my consumer expects.

Transforming the URL

The URL is transformed via Apache httpd using a RewriteRule. In order to avoid Same Origin Policy issues we've had to proxy requests for our presentation component and its web services anyhow. Here is the rewrite rule I ended up with:

RewriteEngine On
RewriteOptions Inherit
# RewriteRule patterns don't see the query string, so match the surname
# with a RewriteCond and reference the capture as %1
RewriteCond %{QUERY_STRING} ^surname=(.*)$
RewriteRule ^/CustomerDB/services-1.0/rest/Customers$ http://localhost:5984/customerdb/_design/customerService-1.0/_list/rows/getCustomerBySurname?key="%1" [P]

... and that's it. I now have a means of using CouchDB to provide fixtures for the purposes of development.

One last important thought...

The really interesting thing about using CouchDB for the use-case I set out to solve is that I'm really starting to wonder whether the real logic layer should be written using CouchDB... it is so simple and powerful!