Thursday, July 14, 2016

Microservices: from development to production


Let’s face it, microservices sound great, but they’re sure hard to set up and get going. There are service gateways to consider, service discovery to set up, consolidated logging, rolling updates, resiliency concerns… the list is almost endless. Distributed systems benefit the business, not so much the developer.

Until now.

Whatever you think of sbt, the primary build tool of Lagom, it is a powerful beast. As such we’ve made it do the heavy lifting of packaging, loading and running your entire Lagom system, including Cassandra, with just one simple command:


sbt> install


This “install” command will introspect your project and its sub-projects, generate configuration, package everything up, load it into a local ConductR cluster and then run it all. Just. One. Command. Try doing that with your <insert favourite build tool here>!

Lower level commands also remain available so that you can package, load and run individual services on a local ConductR cluster in support of getting everything right before pushing to production.
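
For example, a typical lower-level flow looks something like the following. This is a sketch assuming the conduct commands provided by the sbt-conductr plugin; "my-service" stands in for your own bundle's name:


sbt> bundle:dist
sbt> conduct load <your bundle file>
sbt> conduct run my-service
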

Lagom is aimed at making the developer productive when developing microservices. The ConductR integration now carries that same goal through to production.

Please watch the 8-minute video for a comprehensive demonstration, and be sure to visit the “Lagom for production” documentation to keep up to date with your production options. While we aim for Lagom to run with your favourite orchestration tool, we think you’ll find the build integration for ConductR hard to beat. Finally, you can focus on your business problem, and not the infrastructure to support it in production.

Enjoy!

Tuesday, July 5, 2016

Developers need to care about resiliency

Disclaimer: I'm the technical lead for Lightbend ConductR - a tool that focuses on managing distributed systems with a key goal of resiliency.

I've been doing a reasonable amount of travelling over the past few years. Overall I enjoy it; I'm not away so much that it has become painful - maybe 3-6 international flights per year.

One of the things you hear about when travelling is missing an international connection. I've been fortunate in that this has happened to me just once - a couple of weeks ago, in fact.

The airline was British Airways (BA), and they did a really good job of trying to make up time given that flights out of Heathrow were causing delays across Europe. Even so, my flight from Berlin TXL to London LHR was about two hours late, and I missed my Sydney SYD flight from LHR. The BA staff did a great job of putting me up in a hotel overnight and getting me onto the next available flight. From a staff perspective, BA were fantastic.

What was frustrating, though, was that it took about two hours to arrange the accommodation and flight booking - and that was despite not having to queue for long, with a staff member attending to me in a reasonable time frame. The problem was the BA computer system.

Apparently BA have some new system. There were IT staff walking around helping the front-of-house staff get into the system and deal with its incapacity to handle any load. The IT staff were frustrated. The front-of-house staff were frustrated. I was frustrated (although not as frustrated as a Business Class passenger in front of me who felt that his ticket meant that BA should treat him like royalty!).

BA's computer system managed to frustrate everyone concerned, except most likely the people who wrote it. In my opinion the original developers should be there supporting the front-of-house staff; they should feel the pain that they have inflicted.

I'm sure that you have similar stories to share of computer systems failing you miserably. Computer systems will of course fail; that's natural. The problem is that their degree of failure is considered acceptable. Computer systems should not fail to the extent that they do. Your airline reservation system, your online banking site, whatever it is - it should be more reliable than it probably has been.

The problem is that developers do not understand that building resilience into their software is more important than most other things. As my colleague Jonas Bonér has stated many times, "without resiliency nothing else matters". He's so right. Why is it, then, that developers just don't get this?

My answer to that is that many developers see what they do as just a job, and they don't really care about what they do. Putting that aside though, creating and then managing distributed systems - a key requirement for resiliency - is harder than not doing so; not hard, but harder, and developers are lazy (by the way, in case you hadn't realised, I'm a developer!).

We need systems that are resilient. We therefore need developers to care about resiliency. The more that developers care about resiliency, the more tools and technologies we'll see appearing in support of it. I strongly feel that it all starts with the developer though.

I imagine a world where, given the inevitability of missing flight connections, I can wait in a queue for no longer than 10 minutes, be attended to within another 10 minutes, and then sleep off the tiredness and inconvenience of waiting another 24 hours for my next flight. The developer just needs to start caring in order for this to happen. Make developers responsible for managing their systems in production and they'll start caring, I guarantee it.

Here's a language/tool agnostic starting point for you if you are a developer that cares enough to have read this far: The Reactive Manifesto.

Thanks!

Sunday, May 22, 2016

Why we created an orchestration tool for you

One question I have had to answer a few times as the tech lead of ConductR - and I think it is a healthy question - is why Lightbend created ConductR. This post is my personal attempt to describe the rationale for it two years ago, and why I think it is more relevant than ever.

Back then we wanted a tool that was focused on making the deployment and management of applications and services built on Java, Scala, Akka and Play as easy as it could be. We wanted ConductR to be to operations what Play is to web application developers: a "batteries included" approach to deploying and managing reactive applications and services.

Two years ago, there really wasn't anything else out there that we felt offered such a packaged approach to solving these new use cases for operations people. The sentiment was that we had done a reasonable job with the Reactive Manifesto at that point, and that we'd definitely engaged developers, but we were quickly going to arrive at a situation where operations people were going to find it a challenge to manage these new distributed applications and services. We also wanted something that had the reactive DNA.

That's how it all started. So, what's changed, and why is ConductR relevant now?

There are a number of players emerging in the orchestration space presently, which certainly validates, from a needs perspective, our being a player in this space. If you're happy to roll your own orchestration (which actually remains what we're up against in terms of competition, and this hasn't changed much in two years), then be prepared to have two people spend at least a year tackling a problem that is harder than you think, and then realise that you have an ongoing operational cost in maintaining it. On top of this, there's the risk to your company of those individuals leaving... is it sufficiently documented for others to take over? Nobody has won in the orchestration space yet, but there's enough to choose from to outweigh the business risk of rolling your own. My advice, having been involved in designing and writing an orchestration tool (twice), is to not roll your own and to focus on your core business.

While I personally think that the operational productivity culture that permeates our design is still the single most important reason to consider ConductR, here are some other reasons, with a small build sketch following the list:

  • a means to manage configuration distinctly from your packaged artifact;
  • consolidated logging across many nodes;
  • a supervisory system whereby if your service(s) terminate unexpectedly then they are automatically restarted;
  • the ability to scale up and down with ease and with speed;
  • handling of network failures, in particular those that can lead to a split brain scenario;
  • automated seed node discovery when requiring more than one instance of your service so that they may share a cluster;
  • the ability to perform rolling updates of your services;
  • support for your services being monitored across a cluster; and
  • the ability to test your services locally prior to them being deployed.
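
To give a flavour of the developer-side experience behind several of these points, here is roughly what declaring a service's resource requirements looks like in sbt with the sbt-conductr plugin. This is a sketch from memory; treat the exact keys and values as illustrative rather than definitive:


// build.sbt - assumes the sbt-conductr plugin is installed.
// The bundle keys declare the resources the service needs;
// ConductR uses them when scheduling bundles across the cluster.
import ByteConversions._

lazy val root = (project in file("."))
  .enablePlugins(JavaAppPackaging)

BundleKeys.nrOfCpus := 1.0     // CPUs required, for scheduling purposes
BundleKeys.memory := 64.MiB    // resident memory required
BundleKeys.diskSpace := 10.MB  // disk space required
BundleKeys.roles := Set("web") // run only on nodes holding the "web" role
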

Furthermore, ConductR is the complete manifestation of the entire stack of technologies that we at Lightbend both contribute to and support. It is a great example of an Akka-based distributed application that uses, in particular, akka-cluster, akka-distributed-data and akka-streams/http. It is also tightly integrated with our Akka-monitoring-based instrumentation, and the monitoring story around events, tracing and metrics is going to get stronger. If you like our stack, you should feel good about the way ConductR has been put together.

We have programmed ConductR in the spirit of the Reactive Manifesto, with resiliency and elasticity being a particular focus. There is no single point of failure and our ability to scale out is holding up.

One last point: we use ConductR for our own production environment at Lightbend hosting our websites and sales/marketing services. With any product out there, you should always look for this trait. If a supplier is not dependent on their own technologies in terms of running their core business then beware; they can lose enthusiasm very quickly.

ConductR is as relevant as it ever was, and with its batteries-included approach for operations, I'm sure it'll become even more relevant as the industry moves toward deploying and managing microservices.

One last tidbit: ConductR is becoming a framework for Mesos/DCOS. Exciting times!

Thanks for reading this far!

Monday, March 7, 2016

What the name "lightbend" means to me

I thought that it'd be useful to share my personal perspective on the meaning of our company name change. Here are the contents of an email that I sent out to everyone within Lightbend, and which was warmly received.

---

Hi fellow lightbenders,

I’m very excited about the Lightbend name, and want to provide my view on what it means to me.

About two years ago, I presented at YOW. YOW is a great conference with the characteristic that speakers get to talk to a cross section of our industry on three occasions: Melbourne, Brisbane and Sydney. One is therefore not preaching to the converted, but rather talking to what can be quite a hostile crowd!

My first talk was to a few hundred people in Melbourne - apparently the most hostile of the three cities. About ten minutes into the talk I had that sinking feeling that I’d lost everyone. My talk was about Akka streams and the importance of back pressure. Lots of blank looks all around. An interesting aspect of YOW is that you are scored by the audience. You guessed it, my scores were low.

Travelling up to Brisbane I felt that it was important to bring the talk back a bit. Instead of delving right into Akka streams, I felt that I should at least have a preamble around reactive streams and why we did that. The Brisbane talk went much better.

However given the nature of the questions asked after my talk I felt that I could do even better. So, for Sydney, my preamble included a discussion on “why reactive”. This set the scene for the remainder of the talk and my Sydney scores reflected that.

Coming away from YOW I realised how fringe Typesafe were - again, this was two years ago. I certainly appreciated that we were nowhere near mainstream, but really, we were on another planet compared to where the IT industry was at.

Roll forward to today and you can see that we’ve come a long way. We have done so without deviating from our mission from a technical perspective. I would tell people that if you want to understand anything about our technical direction then simply read the Reactive Manifesto. You'll then see our DNA blueprint; the very fabric of what we are. Taking that further and quoting Jonas Bonér, “without resiliency, nothing else matters”. We have upheld the manifesto and, in particular resiliency, like nothing else matters.

And now we are seeing the industry finally come our way. To highlight a few points: the industry received our new name well, it is excited about Lagom as a microservices framework for Java, and the enterprise-leading Spring framework is effectively adopting the Reactive Manifesto.

This is where the lightbend name kicks in for me.

I see lightbend as the gravitational force that is bending the light beam representing the direction of the industry at large. Gravity bends light.

To use my earlier analogy of Typesafe being on another planet, two years ago, we were light years away from where the industry was thinking. We are no longer. We have pulled the industry to the way we think software systems should be put together and managed.

We are now at an interesting juncture. As the company expands as it needs to, it would be easy to compromise our technical mission in order to gain further traction. However it is now more important than ever to stay on mission.

We need to be brave and continue to be bold. The industry doesn’t need more of the same; it needs more companies like us.

Thanks for reading!

Kind regards,
Christopher

Monday, December 29, 2014

Where FP meets OO

A strong feature of Scala is its embrace of both Functional Programming (FP) and Object Orientation (OO). This was a very deliberate and early design decision of Scala, made in recognition of the strengths of both approaches over the decades. I hope to show you that the two approaches can work together when developing software.

Over the past two years of using Scala on a daily basis I’ve found myself adopting a predominantly FP approach to development, while embracing true OO in the form of Akka’s actors. What this boils down to is side-effect-free programming with functions focused on one purpose. Where state is required to be maintained, typically for IO-style scenarios, actors come to the rescue. Within those actors, if there are more than two state transitions then I find myself using Akka’s FSM, which maintains the state through the transitions. As a consequence there are very few “var” declarations in my code, though I don’t get hung up on their usage where the code becomes clearer or there is a performance advantage in some critical section of code.
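
To make the FSM point concrete, here is a minimal sketch of the shape such an actor takes. The states, data and messages are invented for illustration; only the FSM API itself is Akka’s:


import akka.actor.{ Actor, FSM }

// Illustrative states and data for a batching actor.
sealed trait State
case object Idle extends State
case object Active extends State

sealed trait Data
case object Uninitialized extends Data
final case class Batch(items: List[String]) extends Data

class Buncher extends Actor with FSM[State, Data] {
  startWith(Idle, Uninitialized)

  when(Idle) {
    case Event(item: String, Uninitialized) =>
      goto(Active) using Batch(List(item))
  }

  when(Active) {
    case Event(item: String, Batch(items)) =>
      stay using Batch(item :: items)
    case Event("flush", Batch(items)) =>
      // Act upon the accumulated items here, then rest.
      goto(Idle) using Uninitialized
  }

  initialize()
}


The FSM itself carries the current state and data through each transition, so no “var” is required.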

I can’t even remember the last time I used a lock of some type...

Thus my actor code looks something like this:

object SomeActor {
  def props(): Props =
    ...

  // My actor’s other pure functions,
  // perhaps forming the bulk of the code.
  // The functions are typically private
  // to the package and therefore
  // available for tests within the
  // same package.
}

class SomeActor extends Actor {
  override def receive: Receive =
    ...

  // Functions that break down the
  // receive handling, calling upon the
  // pure functions of the companion and
  // possibly other traits.
}
What I find is that as I expand the companion object with pure functions, patterns of behaviour emerge and those functions are factored out into other traits, which of course become reusable and remain highly testable. Sometimes I form these behavioural abstractions ahead of creating the companion object, but more often than not it is the other way round. I’m big on continuous refactoring and spend a lot of time attempting to get the functional abstractions right. This can mean that the functions are often re-written once the behavioural patterns emerge.
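
As a small, hypothetical example of that factoring (the trait and function names are invented for illustration):


// A behavioural trait factored out of the companion object once
// a pattern emerged; it is reusable and testable in isolation.
trait Averaging {
  def mean(xs: Seq[Double]): Double =
    if (xs.isEmpty) 0.0 else xs.sum / xs.size
}

object SomeActor extends Averaging {
  // The companion’s remaining pure functions now build
  // upon the factored-out behaviour.
  def smoothed(readings: Seq[Double], window: Int): Seq[Double] =
    readings.sliding(window).map(mean).toSeq
}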

So why then is the above representative of OO? Actors permit only message passing and are completely opaque to the outside of their instance. This is one of Alan Kay’s very early requirements of OO. Actors combined with Scala also mostly fit his following requirements (taken from http://c2.com/cgi/wiki?AlanKaysDefinitionOfObjectOriented):

  1. Everything is an object.
  2. Objects communicate by sending and receiving messages (in terms of objects).
  3. Objects have their own memory (in terms of objects).
  4. Every object is an instance of a class (which must be an object).
  5. The class holds the shared behavior for its instances (in the form of objects in a program list).
  6. To eval a program list, control is passed to the first object and the remainder is treated as its message.

Point 3 is a weak point of the JVM, and one should be careful to keep message payloads immutable; at least with Scala, immutability is the default way of coding. It’d be great if actors themselves had their own address space, however this has never surfaced as a problem in my world.
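
In Scala this typically means modelling the message protocol as a sealed trait of immutable case classes, for example:


// An immutable message protocol: payloads cannot be mutated
// in flight, so sharing them between actors is safe.
sealed trait Command
final case class Register(name: String) extends Command
case object Flush extends Command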

Scala is one of the few languages that marries the world of FP and OO and thus does not need to “throw the baby out with the bathwater”. Many other languages force you to make a choice. That said, just like most marriages, there’s always the dominant party making the most sense, and that’d be FP!

Wednesday, January 1, 2014

Reflections on Java, JavaScript and Maven for 2013

About a year ago I made some predictions on Java, JavaScript and Maven for 2013. There has been some movement, so time to report back:

Java

Java 8 didn't quite make it as a GM release, but mid-March 2014 now appears to be the date. Java 8 has been available for playing with for some time during 2013 though.

I must confess to having been excited at the prospect of Java 8's lambda support a year ago, and I still think that what's coming is a great boost to the language. However I'm now squarely in the functional camp and, well, Java simply won't cut it. If you have an interest in functional programming, my personal recommendation is to move to a language designed for the job. Languages such as Scala which offer the best of the imperative and functional worlds are the ones to look at.

JavaScript

This one is mostly pinned to the release of Java 8 and Nashorn - DynJs hasn't really taken hold as I thought it might. So, maybe March 2014 for this also.

Projects such as Trireme are particularly interesting as they bring the Node API to Rhino. I suspect that projects like this can be adapted to Nashorn, and I also see that Nashorn may provide its own Node API implementation, although the details on this are light. No matter what happens regarding Nashorn and its Node offerings, I suspect the Trireme and Rhino combination will remain relevant for some time given their Java 6 focus.

Maven

Maven continues to be strong; however, my hope for an alternate DSL for the pom hasn't materialised… sort of… Tesla Polyglot is ready for a release and offers Groovy, Ruby and Scala DSLs for Maven. I actually wrote the Scala DSL :-)

I suspect that Tesla Polyglot will be released sometime during the first quarter of 2014.

Conclusion

I feel that my predictions were largely on track, although they certainly haven't materialised within the timeframe that I expected. There's been considerable progress across all three fronts though and I'd be very surprised if they haven't materialised by the first half of 2014.

Wednesday, May 1, 2013

Play-ing with WebJars

One of my responsibilities on the Play team is to enhance the JavaScript development experience. We will shortly be releasing a strategy document on what is coming for Play 2.3 in this regard. As a preview though, one of the things the strategy will be advocating is the use of WebJars. WebJars are JavaScript web libraries deployed to well-known repositories, including Maven Central. There are many popular JavaScript libraries already available as WebJars, and the number is increasing.

Preamble

Why is there a need for WebJars? Managing the versions and dependencies of JavaScript libraries is just as important as it is for any other language. WebJars utilises familiar and established repositories instead of relying on newly introduced ones. I don't think that this can be overstated; many organisations are already using and hosting Maven and Ivy based repositories, so it makes sense to leverage them. To put this in perspective, over 8 billion downloads occurred on Maven Central in 2012.

Why is dependency management important for JavaScript libraries? Some libraries are standalone of course but others are not. Many popular libraries have dependencies and it is the responsibility of the developer to source them and ensure they are available before the target library is sourced e.g. the popular bootstrap library depends on jQuery. Having a system that automatically manages the complexities of dependency management makes the JavaScript programmer more productive. WebJars enable such systems to be used.

The Play Framework makes it easy to build web applications with Java & Scala. Play is based on a lightweight, stateless, web-friendly architecture. What we on the Play team have done as a first step is extend the work of my colleague, James Ward, so that WebJars offers first class support of requirejs when used with Play. Requirejs is a popular implementation of the AMD specification - a means by which JavaScript applications can be modularised. The easiest way of thinking about AMD is that it is JavaScript's equivalent of package and import statements (or namespace and include statements depending on your preferences!).

JavaScript and Play

The first thing to state is that we want to make developing web applications in conjunction with Play as familiar as possible when it comes to authoring JavaScript. To use a WebJar the programmer declares its dependency in Play's Build.scala file. This file describes a Play project in a similar way that a Maven pom or NPM's package.json would. Here's what a typical build file looks like using a variation of the angular-seed project extended for Play and WebJars:

object ApplicationBuild extends Build {
  val appName = "angular-seed-play"
  val appVersion = "1.0-SNAPSHOT"
  val appDependencies = Seq(
    "org.webjars" % "angularjs" % "1.0.5",
    "org.webjars" % "requirejs" % "2.1.1",
    "org.webjars" % "webjars-play" % "2.1.0-1")
  val main = play.Project(
    appName, appVersion, appDependencies)
}

What is important to note is that the appDependencies variable specifies the list of WebJars required directly by the project: angularjs, requirejs and the webjars-play plugin are declared along with their versions. webjars-play actually depends on requirejs, so the above requirejs declaration is not strictly required. However it is there to show that whatever requirejs version webjars-play declares, a different version can override it; e.g. webjars-play depends on requirejs version 2.1.1, so when requirejs version 2.2 is released, 2.2 can be specified above.
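
For illustration, assuming a newer requirejs WebJar had been published (the 2.2.0 version number below is hypothetical), the override would simply be:


val appDependencies = Seq(
  "org.webjars" % "angularjs" % "1.0.5",
  // Hypothetical newer version; this overrides the requirejs
  // that webjars-play would otherwise pull in transitively.
  "org.webjars" % "requirejs" % "2.2.0",
  "org.webjars" % "webjars-play" % "2.1.0-1")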

Knowledge of Scala is not required in order to declare dependencies. The above constitutes the total amount of Scala that the JavaScript programmer will be exposed to (unless they wish to delve into Scala which we would actively encourage of course!).

The WebJars website allows the easy selection of a WebJar and its version and then outputs the declaration required to download it for Play, SBT, Maven, Gradle and others.

The next requirement is to tell Play how WebJar assets are going to be resolved. Play's conf/routes file is used to do this. Here is the part of the routes file for the angular-seed-play project concerned specifically with WebJars:

# Obtain require.js with built-in knowledge of 
# how webjars resources can be resolved
GET /lib/require.js controllers.WebJarAssets.requirejs
# Enable webjar based resources to be returned
GET /webjars/*file controllers.WebJarAssets.at(file)

The above tells Play to return a wrapper of requirejs whenever /lib/require.js is requested. This wrapper configures requirejs so that it knows how to resolve files hosted within WebJars. The /webjars declaration takes a file path and locates the corresponding resource from a WebJar.

The JavaScript programmer need do very little else to have Play deliver a JavaScript application.

Requirejs usage

Declaring the use of requirejs should look quite familiar. Here is a sample HTML snippet:

<script data-main="js/app" src="lib/require.js"></script>

Given the routes declaration from the previous section the above will bring in js/app.js after requirejs has been loaded. Note that Play must also understand the routing in terms of how to load resources from the js path:

GET /js/*file   controllers.Assets.at(path="/public/js", file)

Here is what the angular-seed-play js/app.js file looks like with the less relevant bits removed:

require([
  './controllers',
  './directives',
  './filters',
  './services',
  'webjars!angular.js'], function(controllers) {
  // Declare app level module which depends on filters
  // and services
  ...
});

The first few lines of the require statement declare dependencies on JavaScript files relative to the current one via the ./ convention. These files are required for the angular-seed project itself. The line of interest for this topic though is "webjars!angular.js". What this does is call upon the WebJars requirejs plugin to load a file that is contained in a WebJar declared as a dependency.

…and that's about all there is to it.

But wait, there's more...

Let's say that bootstrap is required. Bootstrap has a dependency on jQuery. Ordinarily the JavaScript programmer is required to ensure that jQuery is loaded before bootstrap, given this dependency. When using bootstrap's WebJar, jQuery is declared as a dependency such that:

require(["webjars!bootstrap.js"], function () {
  ... 
});

...will automatically load jQuery. This is achieved given that bootstrap's WebJar declares how jQuery is to be located within a repository. Here's a snippet from the bootstrap WebJar pom.xml file:

<dependencies>
  <dependency>
    <groupId>org.webjars</groupId>
    <artifactId>jquery</artifactId>
    <version>1.9.0</version>
  </dependency>
</dependencies>

The other thing that bootstrap's WebJar provides is a file named "webjars-requirejs.js" in a well-known location within the jar. Here are the contents of that file:

requirejs.config({
  shim: {
    bootstrap: [ 'webjars!jquery.js' ]
  }
});

The above declares to requirejs that whenever "webjars!bootstrap.js" is depended on, "webjars!jquery.js" will be loaded first. requirejs.config, along with the shim property, is standard requirejs configuration. In addition to standard requirejs behaviour, whenever a "webjars!" module is specified we strip off the "webjars!" prefix and the ".js" suffix to end up with a module name, i.e. "bootstrap" in the case of "webjars!bootstrap.js". This module name is then looked up within the shim property of requirejs.config and, if found, the dependencies declared there are loaded prior to bootstrap.

One more thing...

If there is a requirement to avoid using "webjars!" in JavaScript AMD dependencies then something like the following can be done:

define("jquery", [ "webjars!jquery.js" ], function() {
  return $;
}); 

Thus any time that "jquery" is specified as a dependency its webjar will also be loaded e.g.:

define([ "jquery" ], function($) {
  …
}); 

The above is just as if jQuery was declared to use AMD itself (which it can do).

We intend to enhance the JavaScript development experience further with the goal of making Play the #1 choice for web development.

Happy Play-ing with WebJars!