Jan 4, 2016 - My ES6 development setup


I recently switched from TextMate to Atom, a lightweight cross-platform text editor, and thought it would be useful to write down how to configure the editor and various related tools to be productive with ECMAScript 6. This post also shows how to set up tooling for TypeScript, a typed dialect of JavaScript that aligns well with ECMAScript 6 and gives you optional static type checking and type inference. IMHO, if you’re going to invest in new tooling for ECMAScript 6, going the extra mile to switch to TypeScript is worth it.

Getting started

Install Node.js v5 or higher (which comes with many ES6 features enabled by default).

Install the Atom text editor. Atom is a lightweight, easy-to-extend text editor with a large repository of plug-ins.

Atom has basic editor support for JavaScript out of the box (such as syntax highlighting).

Configure a JS linting tool

When you program in JavaScript, using a linter is a must. It will catch errors like undefined variables, unused variables, duplicate parameter names and forgotten semicolons, and it can flag features of the language you would rather avoid.

JSHint is my favorite linting tool for JavaScript. Other good options include JSLint and ESLint.

JSHint is highly configurable (look here for the list of configurable options). The easiest way to configure it is to set up a .jshintrc file in the root directory of your project. Here’s a good starting point for tweaking your .jshintrc file (comments in the config file are okay). To ensure JSHint doesn’t choke on new ES6 features, set the option “esnext” to true (or in the next major release, set “esversion” to 6). I would also recommend setting the option “node” to true, so that JSHint knows your code will run in node.js and that functions such as require are available.
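
To make this concrete, here is a minimal .jshintrc sketch along those lines (the exact rule set is a matter of taste):

{
  // enable ES6 syntax ("esversion": 6 in the next major release)
  "esnext": true,
  // assume node.js globals such as require, module and process
  "node": true,
  // warn about undefined and unused variables
  "undef": true,
  "unused": true
}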

There is a jshint atom package that will check your JavaScript files as you type, providing visible error highlighting like so:


To install the plug-in, in Atom, go to your Atom preferences > Packages > Install > search for “jshint”.

Enforcing strict mode

I always configure my JSHint file to require my JavaScript to be in “strict mode”. This is a safer subset of JavaScript with better-behaved scoping rules and fewer “silent errors” (operations that would silently fail outside strict mode will throw an error in strict mode). To enter strict mode, it suffices to add the string literal “use strict” as the first statement in your JavaScript file (as shown on the screenshot above). In JSHint, I set the “strict” option to “global” (enforcing a single global “use strict” directive at the top of the file).
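
A tiny example of the difference:

"use strict";
x = 10; // ReferenceError: x is not defined
        // (without strict mode, this silently creates a global variable)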

Configure typescript (optional)

TypeScript is a typed dialect of JavaScript. It allows you to add optional static type annotations on functions and variables. In addition, it has a good type inferencer that will catch type errors even when your code is mostly unannotated. Finally, it implements most ES6 features and even some ES7 features. For a good intro to TypeScript, see this book.
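
To give a flavor (a made-up function, not from the book):

// explicit type annotations on parameters and return type
function distance(x: number, y: number): number {
  return Math.sqrt(x * x + y * y);
}
let d = distance(3, 4);   // 'd' is inferred to be of type number
let e = distance(3, "4"); // compile-time error: "4" is not a number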

Configure typescript plug-in for Atom

Good editor support for TypeScript usually requires a commercial IDE like Visual Studio or WebStorm. Atom is one of the few open-source editors with very good TypeScript support, via the atom-typescript package. Install this just like you installed jshint above.


Configure typescript compiler

The atom-typescript package comes pre-bundled with a bleeding-edge TypeScript compiler. The compiler is configured using a configuration file called tsconfig.json which usually lives in the root of your project. An example file can be found here. If you don’t yet have a tsconfig file, the atom-typescript plug-in usually detects that the file does not exist and will offer to create it for you with default settings.

Two important properties to check in tsconfig.json are listed below (a combined example follows the list):

  1. set target to ‘es6’. This will make the TypeScript compiler generate ECMAScript 6 code, which is almost line-for-line the same as the TypeScript source. This will only work if you run the compiled code on a recent version of node. If you develop for the browser, leave this set to ‘es5’. Keep in mind that some ES6 features are not yet enabled by default in node, so if you use them in your TypeScript, make sure to start node with the appropriate flags. For instance, I tend to use ‘destructuring’ a lot (allowing you to write things like let [a, b] = f(x)), which at the time of writing requires starting node with node --harmony_destructuring.
  2. set module to ‘commonjs’ so your TypeScript modules work just like node’s modules and npm packages. If you develop for the browser, it’s probably better to set it to ‘amd’ (for use with libraries like require.js).
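
Putting both properties together, a minimal tsconfig.json for node development might look like this (a sketch; atom-typescript will add a few housekeeping fields of its own when it generates the file):

{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs"
  }
}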

Configure typescript linter

Just like JSHint lints your JavaScript, you can use TSLint to lint your TypeScript. Install the linter-tslint Atom package for built-in support.

TSLint reads its configuration from tslint.json. An example file can be found here. Details about the rules can be found here.

Set up type definitions for external libraries

Chances are that your JavaScript project is making use of existing JavaScript APIs, either in node.js or from external libraries. Many libraries are not written in TypeScript. Fortunately, TypeScript allows you to describe the types of an untyped API separately in a type definition file (*.d.ts). tsd is a tool to install and manage such type definition files.

To install:

npm install tsd -g

You will probably immediately want to install the type definitions for the node.js standard library. To do so:

tsd install node --save

This will do two things:

  1. download the node.d.ts type definition file to a directory called typings.
  2. create a file tsd.json remembering what version of the type definition file was installed.

(Note: the command is tsd install <name> --save and not tsd install --save <name>; the latter fails silently.)

Using the tsd.json file it becomes easier to re-install the type definition files later. tsd.json plays a role similar to package.json, and the typings directory is similar to the node_modules directory.

Normally your atom-typescript package will pick up the type declarations in the typings directory automatically, and any errors about e.g. the type of the node.js require function should go away.

Configure source maps

TypeScript code is compiled down to JavaScript code. When compiling to ES6, the source code and the compiled code will map almost one-to-one in many cases, but often the TypeScript compiler will insert some extra code, causing the line numbers in the generated JavaScript to diverge from those in the original TypeScript. This can become a problem when debugging: stack traces and the debugger will use JavaScript source lines, not TypeScript source lines.

Luckily there exists a translation format called “source maps” that allows JavaScript debuggers to work with external source code compiled down to JavaScript.

First, tell the TypeScript compiler to generate source maps. In your tsconfig.json file, add the following option:

{ "compilerOptions" : { ..., "sourceMap": true } }

Now, when you recompile a *.ts file (e.g. by editing and saving it), a *.js.map file will be created next to the generated *.js file.

When debugging code in the browser (e.g. using chrome developer tools), the presence of a source map file is enough for the debugger to use the correct line numbers. In node.js, you need to install a little utility library called source-map-support that will transform node.js stack traces so that source maps are taken into account:

npm install --save-dev source-map-support

To enable this library, start node (or a test runner like mocha) with the following command-line flag:

node --require source-map-support/register

Even better would be to edit your package.json to use a start script, so you can start your program using a simple npm start. Here is an excerpt from my package.json file:

"scripts": {
  "start": "node --harmony_destructuring --require source-map-support/register index.js",

Happy hacking.

Nov 3, 2015 - My node toolbelt: 10 libraries to boost your node.js productivity


This article surveys 10 libraries that I have found to be tremendously useful while developing back-end node.js services. They mostly address generic tasks that you will come across again and again: support for configuration, logging, processing command-line arguments, code coverage, asynchronous control flow, unit testing and more. While the focus is mostly on supporting back-end services, some of these libraries can equally well be used for front-end JavaScript development. I picked these 10 libraries based purely on personal experience and am documenting them in the hope they will prove useful to others.

The libraries are discussed in decreasing order of the number of Github stars they had at the time of writing this article, so more popular libraries are discussed first, and the lesser-known gems are discussed near the bottom. Don’t let this ranking trick you into thinking that this is a “top 10” article: I’m not comparing these libraries against one another.

Enough intro, let’s get started!


Simplifying asynchronous control flow using Promises

GitHub stars: 10029

npm install q

Using abstractions that simplify asynchronous control flow in node is a must. While callback hell can be avoided with rigor and discipline, Promises lead to a fluent, compositional style of programming. Compare standard callback-based control-flow:

step1(function (err, value1) {
    if (err) { return handleError(err); }
    step2(value1, function(err, value2) {
        if (err) { return handleError(err); }
        step3(value2, function(err, value3) {
            if (err) { return handleError(err); }
            step4(value3, function(err, value4) {
                if (err) { return handleError(err); }
                // Do something with value4
            });
        });
    });
});

To promise-based control flow:

// now step1 returns a Promise rather than taking a callback
step1()
.then(step2).then(step3).then(step4)
.then(function (value4) {
    // Do something with value4
})
.catch(function (error) {
    // Handle any error from all above steps
});

As of ECMAScript 6, Promises are built into the standard library. However, built-in Promises have quite a limited API. That’s why I prefer to use one of the pre-existing Promise libraries, which provide the same behavior as built-in Promises, but with additional bells and whistles that come in handy in practice.

I prefer to use a library called Q because of its many utility methods (e.g. done() to signal the end of a Promise chain), its excellent support for async stack traces, and because it is compliant with ES6 Promises. Another good alternative would be Bluebird.
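
For instance, ending a chain with done() ensures that an otherwise-unhandled rejection is rethrown rather than silently swallowed (a small sketch; parseInput and handle are hypothetical functions):

var Q = require('q');

// Q.fcall turns a (possibly throwing) synchronous call into a promise
Q.fcall(parseInput)
    .then(handle)
    .done(); // rethrows any rejection that no handler caught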

To find out whether a Promise library is compliant with ES6 Promises, check whether it implements the so-called “Promises/A+” spec. ES6 Promises are based on that spec.


Easily configure and send HTTP requests

GitHub stars: 8676

npm install request

Request is a library to make outgoing HTTP requests. It provides all possible bells and whistles you could wish for and is much more pleasant to use than node’s built-in HTTP API. Here’s how to call a JSON REST API endpoint:

var request = require('request');

request({
    url: 'http://api.service.com/widget',
    method: 'GET',
    qs: {
      limit: 20 // query-encoded as ?limit=20
    },
    headers: {
      'X-SOME-HEADER': 'value'
    },
    json: true // parses response body as JSON
  }, function (error, response, body) {
    if (!error && response.statusCode == 200) {
      // body is the parsed JSON response
    }
});
You can customize headers, query string, set form fields, and so on. Request also provides a nice request.defaults() method that can be used to set common parameters for many requests. For instance, if we are going to make a lot of calls to a particular host that all return JSON data, we may instead write:

var request = require('request');

var callAPI = request.defaults({
  baseUrl: 'http://api.service.com',
  json: true
});

callAPI({
    url: '/widget',
    method: 'GET',
    qs: {
      limit: 20 // query-encoded as ?limit=20
    },
    headers: {
      'X-SOME-HEADER': 'value'
    }
  }, function (error, response, body) {
    if (!error && response.statusCode == 200) {
      // body is the parsed JSON response
    }
});

There’s also request-promise which converts request’s callback-based API to a promise-based API.
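
A hedged sketch of the first example rewritten with request-promise:

var rp = require('request-promise');

rp({
    uri: 'http://api.service.com/widget',
    qs: { limit: 20 },
    json: true
  })
  .then(function (body) { /* body is the parsed JSON response */ })
  .catch(function (err) { /* the request failed or returned an error status */ });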


Simple and flexible unit testing framework

GitHub stars: 7659

npm install mocha

Mocha is a simple, flexible unit testing framework for JavaScript. It provides a variety of ways to structure unit tests, but the basic idea is as follows: create a directory test and add your test code to that directory, e.g.:

var assert = require('assert'); // built-in nodejs module

describe('Array', function() {
  describe('#indexOf()', function () {
    it('should return -1 when the value is not present', function () {
      assert.equal(-1, [1,2,3].indexOf(5));
      assert.equal(-1, [1,2,3].indexOf(0));
    });
  });
});

describe and it are methods used to group and label unit tests. Each test is a function that is considered passing if it finishes without throwing an exception. Executing mocha from the command line will gather all unit tests under test/ and run them (in sequence), presenting you with nice colored output of the results.

Two extremely useful but less well-known features of mocha are the ability to run only a single unit test and the ability to skip certain tests without having to comment out a single line of code. Often when I am debugging a failing unit test, I only want to re-run that single test so I can focus on the problem at hand. Just call describe.only and Mocha will execute only this test (and all of its subtests), skipping all the other tests, e.g.:

describe('Array', function() {
  describe.only('#indexOf()', function () {
    it('should return -1 when the value is not present', function () {
      assert.equal(-1, [1,2,3].indexOf(5));
      assert.equal(-1, [1,2,3].indexOf(0));
    });
  });
});

On the flip side, sometimes I have unit tests that I know are failing or broken. Rather than commenting out such tests, a better approach is to insert describe.skip, which makes Mocha skip the tests but mark them as pending in its test runner output, letting you know that some tests remain to be fixed:

describe('Array', function() {
  describe.skip('#indexOf()', function () {
    it('should return -1 when the value is not present', function () {
      assert.equal(-1, [1,2,3].indexOf(5));
      assert.equal(-1, [1,2,3].indexOf(0));
    });
  });
});

Mocha also works well with promises and asynchronous unit tests: one can simply return a promise from a test case and mocha will automatically wait for the promise to resolve before continuing with the next test.
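
For example (fetchWidget is a hypothetical promise-returning function):

it('should fetch the widget by id', function () {
  return fetchWidget('id-123').then(function (widget) {
    assert.equal(widget.id, 'id-123');
  });
});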

Mocha is agnostic to the way you write assertions. You can use node’s built-in assert module (as shown above), but I prefer using Chai which lets you write more fluent BDD-style unit tests, e.g.:

var expect = require('chai').expect;

describe('Array', function() {
  describe('#indexOf()', function () {
    it('should return -1 when the value is not present', function () {
      expect([1,2,3].indexOf(5)).to.equal(-1);
    });
  });
});
There is also a plug-in for Chai named chai-as-promised that lets it work fluently with promises, automatically postponing assertions until the promise is resolved:

Promise.resolve(2 + 2).should.eventually.equal(4);


Easily implement rich command-line interfaces

GitHub stars: 4401

npm install commander

Often server-side programs take a variety of command-line arguments to be configured at start-up. Commander is a complete solution for writing node.js command-line interfaces, automating tasks such as parsing command-line arguments and taking care of generating the necessary ‘help’ or ‘usage’ documentation, so you don’t have to.

Straight from the library’s docs, here’s how it works:

var program = require('commander');

program
  .version('0.0.1')
  .option('-p, --peppers', 'Add peppers')
  .option('-P, --pineapple', 'Add pineapple')
  .option('-b, --bbq-sauce', 'Add bbq sauce')
  .option('-c, --cheese [type]', 'Add the specified type of cheese [marble]', 'marble')
  .parse(process.argv);

console.log('you ordered a pizza with:');
if (program.peppers) console.log('  - peppers');
if (program.pineapple) console.log('  - pineapple');
if (program.bbqSauce) console.log('  - bbq');
console.log('  - %s cheese', program.cheese);

As you can see, commander supports both short (e.g. -p) and long (e.g. --peppers) command-line flags. If a flag takes an argument (e.g. --cheese), this is indicated using square brackets, with the ability to provide a default value (e.g. marble).

Based on this declarative information, Commander can also generate help information, which one can invoke by passing the --help flag.

$ ./examples/pizza --help

 Usage: pizza [options]

     -h, --help           output usage information
     -V, --version        output the version number
     -p, --peppers        Add peppers
     -P, --pineapple      Add pineapple
     -b, --bbq-sauce      Add bbq sauce
     -c, --cheese [type]  Add the specified type of cheese [marble]

If all of this wasn’t enough, commander also allows you to easily specify Git-style subcommands, which lets you write applications that can perform a wide variety of tasks with just one executable.
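
As a hedged sketch (the program and subcommand names are made up):

var program = require('commander');

program
  .command('install [name]', 'install one or more packages')
  .command('search [query]', 'search with optional query')
  .parse(process.argv);

// invoking `mypm install foo` will dispatch, Git-style, to a separate
// executable named `mypm-install` located next to this script.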


A JavaScript code coverage tool

GitHub stars: 3267

npm install istanbul

Unit testing is essential to obtain confidence in your JavaScript code. But unit tests alone are not enough: you need to know what parts of your code are covered by your unit tests, and what parts are not. Istanbul is a code coverage tool for JavaScript. You can use it to verify precisely what lines of code got executed by a unit test suite, for example. Istanbul can generate nice HTML reports with non-executed lines colored in red, allowing you to easily spot parts of your code in need of testing.

Istanbul HTML Report

Istanbul by default assumes that you provide it with a node.js program as input: it runs the program and then generates a report when the program terminates. This works well for traditional batch processing applications, but not very well for instrumenting HTTP servers that remain up all the time. If you want to test coverage of, say, an Express app, you can use istanbul-middleware to instrument your server-side code. It will also extend your Express app with a /coverage endpoint that serves up the current coverage statistics.

You usually do not want to enable code coverage on an HTTP server by default, because the instrumented code will run a lot slower. I use a setup where I have a normal ‘main’ file to configure my express app, and an alternative ‘main’ file that enables code coverage before loading the normal ‘main’ file. This way, when I fire up the server in ‘coverage’ mode, I can write unit tests that hit the server with various requests, and then browse to /coverage to check the code coverage of my unit tests on-the-fly. If I notice some code is not triggered, I can add a new unit test to my test suite, rerun the test suite, and simply refresh the /coverage page to see the updated code coverage, without restarting the server. I have found this to be an excellent workflow to quickly increase my code coverage.
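
A minimal sketch of such an alternative ‘main’ (assuming your normal ‘main’ is server.js and exports the Express app; both names are assumptions):

// coverage-main.js: fire up the server with on-the-fly coverage enabled
var im = require('istanbul-middleware');

// instrument all application code loaded from here on
im.hookLoader(__dirname);

var app = require('./server'); // the normal 'main', exporting the app

// serve live coverage statistics on /coverage
app.use('/coverage', im.createHandler());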


Versatile and structured logging tool

GitHub stars: 2293

npm install bunyan

Bunyan is a logging library for node (kind of like Log4J for Java, but more lightweight and with some interesting twists). Bunyan lets you create log streams, with support for well-known log levels such as DEBUG, INFO, WARN and ERROR. Bunyan loggers can be configured to write to standard output, file, or custom outputs. For instance, one could define a log stream that logs all messages with log level DEBUG or higher to standard output, and in addition, all messages with log level ERROR or higher to a file:

var bunyan = require('bunyan');

var log = bunyan.createLogger({
  name: 'myAppLog',
  streams: [
    { level: 'debug',
      stream: process.stdout },
    { level: 'error',
      path: 'logs/myApp_errors.log' }
  ]
});

Log messages are formatted using node’s util.format by default. Here’s how to log an INFO message:

log.info("request size: %s bytes, latency: %s seconds", size, time);

Bunyan logs are actually made up of newline-terminated JSON records. Hence, they are structured logs that are very easy to parse and filter. For instance, the following log line:

log.error(err, "error serving request");

will generate the following log record:

{"name":"myAppLog","hostname":"mymachine.local","pid":39102,"level":50,"err":{"message":"socket hang up","name":"Error","stack":"Error: socket hang up\n    at [...]","code":"ECONNRESET"},"msg":"error serving request request","time":"2015-09-30T13:10:17.803Z","v":0}

This is hardly readable, but bunyan comes with a command-line tool that allows you to format bunyan log files in a human-readable way. As a bonus, the bunyan tool can be used to quickly filter the log. You can easily filter on log level, but it is also possible to filter on arbitrary log data. I typically start my node services as follows:

node myapp.js | ./node_modules/bunyan/bin/bunyan -o short

which turns the above raw log record into:

13:10:17.803Z ERROR myAppLog: error serving request (err.code=ECONNRESET)
    Error: socket hang up
        at createHangUpError (_http_client.js:215:15)
        at Socket.socketOnEnd (_http_client.js:300:23)

Bunyan offers colorized output to quickly identify certain log levels, allows you to include your own JSON-formatted properties in the log data, and knows how to properly render common node.js objects such as Error instances. Here’s what this would look like in a shell:



Configuration management made easy

GitHub stars: 1015

npm install config

Node-config is a library to manage configuration files. You basically create a folder in your root project folder named config and then write up your configuration data as a JSON config file named default.json saved under that directory (node-config supports a variety of other formats as well, and importantly, tolerates comments in your JSON config file).

$ mkdir config
$ vi config/default.json

{
  // database configuration goes here
  "dbConfig": {
    "host": "localhost",
    "port": 5984,
    "dbName": "customers"
  }
}

Config then allows you to easily access that data:

var config = require('config');
var dbhost = config.get('dbConfig.host');

So far, this is nothing special: you could just as easily require the JSON file directly. Where node-config adds value is in its support for managing multiple configuration files (e.g. for a development vs a production environment) and in its support for inheritance among configuration files. node-config will select the most appropriate configuration file to load based on certain environment variables (such as $HOSTNAME and $NODE_ENV). This allows you to seamlessly toggle between, say, development and production configurations.

In addition, node-config has the ability to load more than one configuration file. It will try to load the most specific configuration file for a given environment, and then include values from more general environments, ending with default.json at the root. This implies that more specific environments only need to specify the delta w.r.t. the default environment, avoiding the need to duplicate configuration settings.
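
For example, a production-specific file only needs to list the values that differ (a sketch; the hostname is made up):

// config/production.json: loaded when NODE_ENV=production,
// merged on top of config/default.json
{
  "dbConfig": {
    "host": "db.prod.example.com"
  }
}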

If you find yourself copying too much configuration information around, or adjusting lots of flags to switch between development and production environments, you should consider using node-config.


Avoid debugging asynchronous code without a call stack

GitHub stars: 416

npm install longjohn

A problem when writing asynchronous code is that the call stack is usually very “shallow”: it only goes back to the origin of the current event on the event loop, but doesn’t show you where that event originated. Longjohn is a simple little node.js plug-in that makes debugging errors in asynchronous programs more pleasant, by stitching together stack traces across multiple events of the event loop.

If you’ve done some node.js programming, you have probably come across quite unhelpful errors such as:

Error: connect ECONNREFUSED
    at exports._errnoException (util.js:746:11)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1010:19)

This tells us that some outgoing TCP connection failed, but we have no clue how to link it back to some place in our code. Enabling longjohn turns the stack trace into:

Error: connect ECONNREFUSED
    at exports._errnoException (util.js:746:11)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1010:19)
    at [object Object].Connection.connect (/MyProject/[...]/connection.js:370:19)

Now we have a better clue as to where the TCP call was made, and we can debug the problem more quickly.

A word of warning: you should only enable longjohn in development environments, not in production environments. The reason is that capturing these long stack traces introduces quite some overhead (essentially the full stack trace must be saved frequently, even when no errors occur, in order to be able to reconstruct the full stack trace when an error does occur).
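
Enabling longjohn is just a matter of requiring the module, so a common pattern is to guard it on the environment:

// only load longjohn outside production, to avoid its overhead
if (process.env.NODE_ENV !== 'production') {
  require('longjohn');
}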

If you are using promises, libraries such as Q offer similar functionality to enhance stack traces when promises are rejected with an exception.


Measure your code paths

GitHub stars: 281

npm install measured

Node-measured is a port of Coda Hale’s better-known metrics library for Java. node-measured offers a very small API to define, essentially, performance counters. These allow you to easily keep track of how many times a particular request was fired, how many times you hit your database or your cache, how many times a page was rendered, and so on. For instance:

var Measured = require('measured');
var meter = new Measured.Meter();

app.get('/customers', function(req, res) {
    meter.mark(); // count GET /customers requests per second
    // ... handle the request ...
});

While counting events is nice, this isn’t something you would really need a library for. The benefit of using node-measured is that it can also compute throughput and latency statistics. For instance, if you call meter.toJSON() after hitting the server with requests, you will get:

{ mean: 1710.2180279856818,
  count: 10511,
  currentRate: 1941.4893498239829,
  '1MinuteRate': 168.08263156623656,
  '5MinuteRate': 34.74630977619571,
  '15MinuteRate': 11.646507524106095 }

You can also measure how long it took to process the request, and then submit that measurement to a Histogram. The Histogram object can then calculate various latency percentiles, biased towards the last 5 minutes. Calculating correct latency percentiles over a sliding window can be tricky, and this is something I’d rather delegate to a library such as node-measured.
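
A sketch of how that might look, timing each response via res.on('finish') (the surrounding Express setup is assumed):

var Measured = require('measured');
var histogram = new Measured.Histogram();

app.get('/customers', function(req, res) {
    var start = Date.now();
    res.on('finish', function() {
        histogram.update(Date.now() - start); // record latency in ms
    });
    // ... handle the request ...
});

// histogram.toJSON() then reports min, max, mean and percentiles (p75, p95, ...)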

Coda Hale gave a great talk at CodeConf on why it is a good idea to add pervasive metrics to your code. For bonus points, you can instrument your node.js process to report these metrics automatically to a metrics visualizer such as Riemann or Graphite.


Keep a watch on your service’s memory use

GitHub stars: 101

npm install memwatch-next

Memwatch-next is a simple native extension for node.js that gives you the ability to register a callback on various memory-related events, such as when a garbage collection occurred. It also emits a leak event if it detects that memory usage keeps on growing even after several subsequent garbage collections. This is usually (but not necessarily always) the sign of a memory leak.

Memwatch-next can also let you calculate heap diffs, but I don’t tend to use memwatch-next to get to the bottom of a memory leak. Instead, I use memwatch-next to simply monitor my service’s memory usage. If memwatch-next emits a leak event, that draws my attention to a potential issue, which I can then investigate further using e.g. the Chrome developer tools.

Here’s a typical way in which I use the library, simply logging some useful statistics to draw my attention to potential memory-related performance problems:

var memwatch = require('memwatch-next');

memwatch.on('leak', function(info) {
    log.warn('potential memleak: %s', info.reason);
});

memwatch.on('stats', function(stats) {
    log.info('heap size after full GC: %s MB', (stats.current_base / 1000 / 1000));
});


If you made it this far, I hope you’ve found some useful little libraries that you may not yet have come across yourself. The goal of this article was not to list all possible useful node libraries (impossible given the size of the npm ecosystem), nor was it to list my “top 10 favorite libraries”. These are just libraries that I see myself reusing in various projects time and again because they solve specific yet general-purpose tasks well.

If you have experiences in using these libraries (both positive and negative) I would love to hear your feedback. Also, if you know of better alternatives to solve the tasks addressed by these libraries, I’m very happy to hear your thoughts as well.

Apr 17, 2015 - Speaking at jsconf.be on "The Road to ES6, and Beyond"


Next week I will be speaking at jsconf.be in lovely Bruges, Belgium. It’s the second edition of the local Belgian JavaScript community gathering and it promises to be quite an interesting program, with talks on some of the usual suspects: React, Angular, Meteor, node, and some less usual suspects, like the Cody CMS, a content-management system written 100% in JS. Perhaps unsurprisingly given recent events, I will be speaking about ECMAScript 6, which is nearing completion (at the time of writing, TC39 itself has signed off on the spec, but it is pending formal approval from ECMA). This is quite a historical moment. As Allen Wirfs-Brock, the editor of the ES6 spec, put it in the foreword of the new spec:

Focused development of the sixth edition started in 2009, as the fifth edition was being prepared for publication. However, this was preceded by significant experimentation and language enhancement design efforts dating to the publication of the third edition in 1999. In a very real sense, the completion of the sixth edition is the culmination of a fifteen year effort.

The title of my talk at jsconf.be is “The Road to ES6, and Beyond”. It’ll be about three things:

  • Part I: JavaScript’s past, and the long road to ECMAScript 6: I’ll give some background on the history of JavaScript, what “ECMAScript” is all about, who TC39 is and what they do. I’ll also recount the “harmony”-era decision that led first to a general cleanup of the language (ES5 strict mode) which then paved the way for growing the language, culminating in the ES6 effort.
  • Part II: a brief tour of ECMAScript 6: this is the part most probably of interest to JS devs. I’ll give an overview of some of the more significant new language features in ES6. It’s difficult to be exhaustive here, so I’ve focused mainly on the many improvements to functions, the addition of classes and modules, and new control flow abstractions like iterators, generators and promises.
  • Part III: using ECMAScript 6 today, and what lies beyond: this part will be on the practical issue of writing ES6 code in a time where none of the major platforms have yet fully implemented the spec. I’ll discuss some ES6-to-ES5 compilers like Traceur, BabelJS and TypeScript (yes, I’m aware the latter is not technically an ES6 compiler, but it’s a relevant tool in this space). I’ll end with an outlook on what’s on the table for ES7 (or I should say, ECMAScript 2016), focusing on some of the more mature features.

I consider it a privilege to be given the chance to talk to the JS community about these exciting new features. The timing couldn’t be better.

Update: slides of my talk. If you’re interested in me giving this talk at your company or event, do get in touch.

Jul 16, 2014 - Java Fork/Join Parallelism in the Wild


My student Mattias De Wael, with guidance from Stefan Marr and myself, recently published a study on how the Java Fork/Join framework is being used in practice by developers. From the abstract:

The Fork/Join framework […] is part of the standard Java platform since version 7. Fork/Join is a high-level parallel programming model advocated to make parallelizing recursive divide-and-conquer algorithms particularly easy. While, in theory, Fork/Join is a simple and effective technique to expose parallelism in applications, it has not been investigated before whether and how the technique is applied in practice. We therefore performed an empirical study on a corpus of 120 open source Java projects that use the framework for roughly 362 different tasks. On the one hand, we confirm the frequent use of four best-practice patterns (from Doug Lea’s book) – Sequential Cutoff, Linked Subtasks, Leaf Tasks, and avoiding unnecessary forking – in actual projects. On the other hand, we also discovered three recurring anti-patterns that potentially limit parallel performance: sub-optimal use of Java collections when splitting tasks into subtasks as well as when merging the results of subtasks, and finally the inappropriate sharing of resources between tasks.

To me, the most interesting outcome was the observation that the Fork/Join API could benefit from the Java Collections API being extended with collections that can be efficiently split and merged. Often, developers choose suboptimal data structures, or suboptimal methods on existing data structures, to do recursive splits and merges. Although perhaps that isn’t even necessary: as it turns out, Java 8 Streams effectively cover typical use cases of Fork/Join such as parallel maps and reduces, without the developer having to manually split and merge the collection anymore. The paper has been accepted at PPPJ 2014. The original submission can be accessed here.

May 21, 2014 - AmbientTalk actors are data race and deadlock free


We recently published a new article on AmbientTalk, an actor language I co-designed with a focus on developing mobile applications for ad hoc wireless networks. The main novelty of the article is what we believe to be the first formal account of the communicating event loops model, which is the concurrency model underlying the family of actor languages upon which AmbientTalk is based. Interestingly, this model is also closest to the concurrency model you get in JavaScript, if you think of a WebWorker as an actor. The article gives a comprehensive overview of AmbientTalk’s roots, the language itself, and introduces a “featherweight AmbientTalk” calculus with an operational semantics. We use it to establish data race freedom (actors have isolated memory) and deadlock freedom (assuming all event loop turns are finite, all asynchronous messages sent between actors will eventually be processed). The article is published in the journal “Computer Languages, Systems & Structures”. A preprint copy of the paper is available here. Quoting the abstract:

The rise of mobile computing platforms has given rise to a new class of applications: mobile applications that interact with peer applications running on neighbouring phones. Developing such applications is challenging because of problems inherent to concurrent and distributed programming, and because of problems inherent to mobile networks, such as the fact that wireless network connectivity is often intermittent, and the lack of centralized infrastructure to coordinate the peers. We present AmbientTalk, a distributed programming language designed specifically to develop mobile peer-to-peer applications. AmbientTalk aims to make it easy to develop mobile applications that are resilient to network failures by design. We describe the language’s concurrency and distribution model in detail, as it lies at the heart of AmbientTalk’s support for responsive, resilient application development. The model is based on communicating event loops, itself a descendant of the actor model. We contribute a small-step operational semantics for this model and use it to establish data race and deadlock freedom.