Friday, November 14, 2014

The goTo object pattern in Node.js code

I've been messing around with Node.js lately. Like everyone using Node.js, I've been wrestling with the fact that it forces programmers to use hand-rolled continuation-passing style for all I/O. Of course, I could use something like async.waterfall() to eliminate boilerplate and deeply indented nested callbacks. However, since I am crazy, I am using Closure Compiler to statically typecheck my server code, and I don't like the way async.waterfall() defeats the typechecker.

You can stop reading here if you're perfectly happy with using async for everything, or perhaps if you're one of those people convinced that static typing is a boondoggle.

Anyway, I've been using a "goTo object" for my callbacks instead. The essence of the pattern is as follows:

  • Define a local variable named goTo whose members are your callbacks.
  • Asynchronous calls use goTo.functionName as the callback expression (usually the final argument to the asynchronous function).
  • At the end of the function, outside the goTo object, start the callback chain by calling goTo.start().
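In skeleton form, the three steps look like this. (This is a minimal sketch of mine; fetchValue is a hypothetical stand-in for any callback-taking asynchronous function.)

```javascript
// Hypothetical stand-in for a callback-taking asynchronous API.
var fetchValue = function(key, cb) {
  cb(null, key.toUpperCase());
};

var lookup = function(key, cb) {
  // Step 1: a local goTo object whose members are the callbacks.
  var goTo = {
    start: function() {
      // Step 2: pass goTo.<name> as the callback argument.
      fetchValue(key, goTo.onFetch);
    },
    onFetch: function(err, value) {
      if (err) {
        cb(err);
        return;
      }
      cb(null, 'value: ' + value);
    }
  };
  // Step 3: kick off the chain.
  goTo.start();
};
```

Read top to bottom, the callbacks appear in roughly the order they run, at a constant nesting depth.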

Here is an example, loosely patterned after some database code I recently wrote against the pg npm module. In raw nested-callback style, you would write the following:

var upsert = function(client, ..., cb) {
  client.connect(function(err, conn) {
    if (err) {
      cb(err);
      return;
    }
    
    conn.insert(..., function(err, result) {
      if (err) {
        if (isKeyConflict(err)) {
          conn.update(..., function(err, result) {
            conn.done();
            if (err) {
              cb(err);
              return;
            }
            cb(null, 'updated');
          });
          return;
        }

        conn.done();
        cb(err);
        return;
      }

      conn.done();
      cb(null, 'inserted');
    });
  });
};

With a goTo object, you write this:

var upsert = function(client, ..., cb) {
  var conn = null;

  var goTo = {
    start: function() {
      client.connect(..., goTo.onConnect);
    },

    onConnect: function(err, conn_) {
      if (err) {
        goTo.finish(err);
        return;
      }
      conn = conn_;  // Stash for later callbacks.
      conn.insert(..., goTo.onInsert);
    },

    onInsert: function(err, result) {
      if (err) {
        if (isKeyConflict(err)) {
          conn.update(..., goTo.onUpdate);
          return;
        }
        goTo.finish(err);
        return;
      }
      goTo.finish(null, 'inserted');
    },

    onUpdate: function(err, result) {
      if (err) {
        goTo.finish(err);
        return;
      }
      goTo.finish(null, 'updated');
    },

    finish: function(err, result) {
      if (conn) {
        conn.done();
      }
      cb(err, result);
    }
  };
  goTo.start();
};

This pattern is easy to annotate with accurate static types:

var upsert = function(...) {
  /** @type {?db.Connection} */
  var conn = null;

  var goTo = {
    ...

    /**
     * @param {db.Error} err
     * @param {db.Connection} conn_
     */
    onConnect: function(err, conn_) { ... },

    /**
     * @param {db.Error} err
     * @param {db.ResultSet} result 
     */
    onInsert: function(err, result) { ... },

    /**
     * @param {db.Error} err
     * @param {db.ResultSet} result
     */
    onUpdate: function(err, result) { ... },

    /**
     * @param {?db.Error} err
     * @param {string=} result
     */
    finish: function(err, result) { ... }
  };
  goTo.start();
};

Some notes on this pattern:

  • It is slightly more verbose than the naive nested-callback version. However, the indentation level does not grow linearly in the length of the call chain, so it scales better with the complexity of the operation. If you add a couple more levels to the nested-callback version, you have "Tea-Party Code", whereas the goTo object version stays the same nesting depth.
  • As in the nested-callback style, error handlers must be written by hand in each callback, which is still verbose and repetitive: the phrase if (err) { goTo.finish(err); return; } occurs repeatedly. On the other hand, you retain the ability to handle errors differently in one callback, as we do here with key conflicts on insertion.
  • Callbacks in the goTo object can have different types, and they will still be precisely typechecked.
  • The pattern generalizes easily to branches and loops (not surprising: it's just an encoding of old-school goto).
  • Data that is initialized during one callback and then used in later callbacks must be declared at top-level as a nullable variable. The top-level variable list can therefore get cluttered. More annoyingly, if your code base heavily uses non-nullable type annotations (Closure's !T), you will have to insert casts or checks when you use the variable, even if you can reason from the control flow that it will be non-null by the time you use it.
  • Sometimes I omit goTo.start(), and just write its contents at top-level, using the goTo object for callbacks only. This makes the code slightly more compact, but has the downside that the code no longer reads top-down.
  • There is no hidden control flow. The whole thing is just a coding idiom, not a library, and you don't have to reason about any complex combinator application going on behind the scenes. Therefore, for example, exceptions propagate exactly as you'd expect just from reading the code.
  • The camelCase identifier goTo is used because goto was reserved for future use in older versions of JavaScript (it is a future reserved word in ES3, though ES5 dropped it from the list); it has never had any semantics.
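To illustrate the loop case: a callback that jumps back to an earlier member of the goTo object serves as the loop back-edge. This is a sketch of mine, with saveItem as a hypothetical callback-taking function:

```javascript
// Hypothetical async API; records each saved item for illustration.
var saved = [];
var saveItem = function(item, cb) {
  saved.push(item);
  cb(null);
};

// Save items sequentially. goTo.onSave jumping back to goTo.loop
// is the encoding of a goto-style loop back-edge.
var saveAll = function(items, cb) {
  var i = 0;
  var goTo = {
    start: function() {
      goTo.loop();
    },
    loop: function() {
      if (i >= items.length) {
        cb(null);  // Loop exit.
        return;
      }
      saveItem(items[i], goTo.onSave);
    },
    onSave: function(err) {
      if (err) {
        cb(err);
        return;
      }
      i++;
      goTo.loop();  // Jump back to the top of the loop.
    }
  };
  goTo.start();
};
```

Branches work the same way: an if statement in one callback simply chooses which goTo member to call next, as onInsert does above with key conflicts.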

For comparison, here is the example rewritten with async.waterfall():

var async = require('async');

var upsert = function(client, ..., finalCb) {
  var conn = null;
  async.waterfall([
    function(cb) {
      client.connect(..., cb);
    },

    function(conn_, cb) {
      conn = conn_;
      conn.insert(..., cb);
    }

  ], function(err, result) {
    if (err && isKeyConflict(err)) {
      conn.update(..., function(err, result) {
        conn.done();
        if (err) {
          finalCb(err);
          return;
        }
        finalCb(null, 'updated');
      });
      return;
    }

    conn.done();
    if (err) {
      finalCb(err);
      return;
    }
    finalCb(null, 'inserted');
  });
};

In some ways, this is better and terser than the goTo object. Most importantly, error handling and operation completion are isolated in one location. Also, blocks in the waterfall are anonymous, so you're not cluttering up your code with extra identifiers.

On the other hand, this has some downsides, which are mostly the flip sides of some properties of goTo objects:

  • Closure, like most generic type systems, only supports arrays of homogeneous element type (AFAIK TypeScript shared this limitation; update: fixed in 1.3 with tuple types, see comments). Therefore, the callbacks in async.waterfall()'s first argument must be typed with their least upper bound, function(...[*]), thus losing any useful static typing for the callbacks and their arguments.
  • Any custom error handling for a particular callback must be performed in the shared "done" callback. Note that for the above example to work, the error object must carry enough information so that isKeyConflict() (whose implementation is not shown) can return true for insertion conflicts only. Otherwise, we have introduced a defect.
  • Only a linear chain of calls is supported. Branches and loops must be hand-rolled, or you have to use additional nested combinators. This doesn't matter for this example, but branches and loops aren't uncommon in interesting application code.

Now, goTo objects are not strictly superior to the alternatives in all situations. The pattern still has some overhead and boilerplate. For one or two levels of callbacks, you should probably just write in the naive nested callback style. If you have a linear callback chain of homogeneous type, or if you just don't care about statically typing the code, async.waterfall() has some advantages.

Plus, popping up a level, if you are writing lots of complex logic in your server, I'm not sure Node.js is even the right technology base. Languages where you don't have to write in continuation-passing style in the first place may be more pleasurable, terse, and straightforward. I mean, look: I've been reduced to programming with goto, the original harmful technology. By writing up this post, I'm trying to make the best of a bad situation, not leaping out of my bathtub crying eureka.

Anyway, caveats aside, I just thought I'd share this pattern in case anyone finds it useful. Yesterday I was chatting about Node.js with a friend and when I mentioned how I was handling callback hell, he seemed mildly surprised. I thought everybody was using some variant of this already, at least wherever they weren't using async. Apparently not.


p.s. The above pattern is, of course, not confined to Node.js. It could be used in any codebase written in CPS, in a language that has letrec or an equivalent construct. It's hard to think of another context where people intentionally write CPS by hand though.

Monday, November 10, 2014

A lottery is a tax on... people who are good at reasoning about risk-adjusted returns?

Rescued from the drafts folder because John Oliver has rendered it timely.

People who consider themselves smart sometimes joke that a lottery is a tax on people who are bad at math.

The root of this reasoning is that the expected return on investment for a lottery ticket is negative. It can't help but be otherwise: the lottery turns a profit, hence the payout multiplied by the probability of winning must be less than the price of the ticket. Therefore, the reasoning goes, the people who buy lottery tickets must be incapable of figuring this out. Ha ha, let us laugh at the rubes and congratulate ourselves on our superiority.
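The arithmetic is easy to check with made-up but representative figures (these are illustrative, not any real lottery's odds):

```javascript
// Illustrative figures only: a $2 ticket, a $100M jackpot,
// 1-in-200-million odds of winning.
var ticketPrice = 2;
var jackpot = 100e6;
var pWin = 1 / 200e6;

// Expected return = probability of winning * payout - ticket price.
var expectedReturn = pWin * jackpot - ticketPrice;
// Here: $0.50 - $2.00 = -$1.50 expected loss per ticket.
// For the lottery to turn a profit, this quantity must be negative.
```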

One rejoinder is that the true value of a lottery ticket, to the buyer, is the entertainment value of the fantasy of winning. One obtains the fantasy with probability 1, and thus, as long as the entertainment value of the fantasy exceeds the price of the ticket, the buyer comes out ahead.

There is something to this. But I think a stronger claim can be made.

A lottery is the mirror image of catastrophe insurance. Note that buying insurance also has a negative expected return. Provided their actuarial tables are accurate, insurance companies turn a profit. Therefore the probability of being compensated for your loss, multiplied by the compensation, must be less than the cost of the insurance premiums. But nobody says that insurance is a tax on people who are bad at math. Quite the opposite: buying insurance is viewed as a sign of prudence.

The issue, of course, is that the naive expected return calculation fails to adequately consider the nature of risk and catastrophe. At certain thresholds, financial consequences as experienced by human beings become strongly nonlinear, probably due to the declining marginal utility of money. Suffering complete ruin due to, say, a car accident entails such severe consequences that we are willing to accept a modestly negative expected return in exchange for the assurance that it will simply never occur.

A lottery is simply the flip side of this coin. The extremely rare event is a hugely positive one, instead of a hugely negative one, huge enough to produce qualitative rather than merely quantitative changes to your lifestyle. And one accepts a modestly negative expected return, not so that one can avoid the risk, but so that one can be exposed to it.

If you are a member of the educated, affluent middle class, there is an excellent chance that your instinct rebels, and you're already hunting for the flaw in this reasoning. Surely there's something sad, and not prudent, about those largely working-class souls who buy a lottery ticket every week, rather than rolling that $52 per year into a safe index fund at a post-tax rate of return of roughly 2.5% per annum, at which rate their investment, if it somehow survived unperturbed the ups and downs of a life that is considerably more exposed to financial risk than a middle-class person's, might compound to the princely sum of a couple months' rent by the time they die, or alternatively enough to pay for a slightly nicer coffin.
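If you want to check the compounding arithmetic, here is a sketch using the standard future-value-of-an-annuity formula, assuming a 50-year horizon (the horizon is my assumption; the $52/year and 2.5% figures are from the text above):

```javascript
// Future value of contributing P dollars at the end of each year
// for n years, compounding at rate r: FV = P * ((1 + r)^n - 1) / r.
var P = 52;     // Dollars per year ($52/year, one $1 ticket a week).
var r = 0.025;  // 2.5% post-tax annual return.
var n = 50;     // Assumed investment horizon in years.
var futureValue = P * (Math.pow(1 + r, n) - 1) / r;
// Comes to roughly $5,000: a couple months' rent in many places.
```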

And maybe there is a flaw in this reasoning. I'm not completely convinced myself and I'm not going to start buying lottery tickets. But I'm honestly having trouble finding the flaw. Any indictment of spending (small amounts of) money on lottery tickets must surely also apply to buying insurance. If it is worth overpaying a little bit to eliminate the possibility of a hugely negative outcome, surely it is worth overpaying a little bit to create the possibility of a hugely positive outcome. The situations are symmetric, and I think one can only break the symmetry by admitting the validity of loss-aversion (usually viewed as irrational by economists) or something similar.

Alternatively, if you have access to the actuarial tables of your insurance company, then perhaps you can argue that a lottery is, quantitatively, simply a worse deal than your insurance typically is... but you probably don't have access to those actuarial tables. And I sincerely doubt that the widespread middle-class snobbery towards lottery players is based on quantitative calculations of this sort. (Actually, I strongly suspect that it is based on a fallacious, gut-instinct "folk probability" feeling that gambling on any extremely remote event, like winning the lottery, is somehow inherently foolish.)

So what's the flaw? I ask this question non-rhetorically; that is, I am genuinely curious about the answer.


p.s. None of the above, of course, is to say that I think the taxation effect of lotteries, which is staggeringly regressive, is a good thing. I would strongly support the replacement of lotteries with progressive tax increases plus transfer payments! Gambling addiction is bad too. But these things seem distinct from the notion that playing the lottery in small amounts is irrational in some game-theoretic sense.