Web Directions 2014 Reflections

I’ve just returned from Web Directions 2014 in Sydney. Overall, it was a great experience, with some valuable take-aways. I’ve jotted down a few of my thoughts around the conference — and a few specific sessions — below. These are my “notes from notes” — I’ve drawn ideas from the notes I took during the conference to consolidate my understanding of the presentations.

This is very much non-exhaustive — I’ve elided a lot of stuff, skipped over whole talks (either because I want to ponder them further, or because I didn’t take notes or my notes are useless). I’ve focussed primarily on the technical talks, as I’ll use this post as a reference for myself on these areas at work. If I get a chance, I’ll write up my thoughts on the keynotes as well.

Emily Nakashima: The Operable front-end


Emily, who works at GitHub, explained exactly what “operability” is, and outlined some ways of achieving it.

Operability requires strategies for deployment, logging, monitoring, debugging, alerting, and scaling in production. That is, it’s the ability to detect, diagnose, and rectify problems, with the ability to push fixes into production. It’s important, therefore, to use dashboards and automation to ensure that everything that happens after a fix is ready to ship is as smooth and streamlined as possible.

Aside: if you fix a bug, you should look for similar error reports from other users. You’re now the expert in that bug, and are best-placed to respond. This ties in well with SEQTA’s, and my own, philosophy that developers should always always always be in a position to talk to clients, and doing so should be a routine part of their job.

JavaScript error monitoring is one of the most important pieces of the operability puzzle. A great idea is to add a listener for the error event on the window, which collects up various bits and pieces (URL, stack trace, time since load, event target — inc. xpath or CSS selector) and shoots them off to the server via AJAX. (Of course, you want to be careful not to try and send errors about your inability to send errors…) To make this really shine, stop swallowing errors (even though many frameworks advertise this as a “feature”) — in fact, make your own error classes and throw them!
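A minimal sketch of such a listener might look like the following (the endpoint, field names, and helper names are my own assumptions, not Emily’s implementation):

```javascript
// Build a serialisable report from an error plus some page context.
// All names here are illustrative, not from the talk.
function buildErrorReport(error, context) {
  return {
    message: error.message,
    stack: error.stack,
    url: context.url,
    msSinceLoad: context.msSinceLoad,
    target: context.targetSelector, // e.g. a CSS selector or XPath for event.target
  };
}

if (typeof window !== 'undefined') {
  window.addEventListener('error', (event) => {
    const report = buildErrorReport(event.error || new Error(event.message), {
      url: location.href,
      msSinceLoad: performance.now(),
      targetSelector: event.target && event.target.tagName,
    });
    // Fire-and-forget, and never let the reporter itself throw --
    // you don't want to report errors about your inability to report errors.
    try {
      navigator.sendBeacon('/errors', JSON.stringify(report));
    } catch (ignored) {}
  });
}
```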

Performance metrics are another part of the jigsaw. They can be synthetic (that is, measured in a carefully-controlled environment), or RUM (real-user metrics). Synthetic metrics should be part of the CI infrastructure; their primary utility is to catch performance regressions. RUM are arguably more useful, and should be tied to monitoring and alerts. RUM can be collected using the shiny new navigation timing API — but for single page applications (like the SEQTA Suite), you’ll need to record your own metrics — window.performance.mark and window.performance.measure APIs come in useful. An API which hasn’t yet landed — but is coming soon! — is the frame-timing API, which will help to catch jank issues (jank is where the framerate drops below 60fps).
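A custom metric for a single-page app can be recorded with those User Timing APIs along these lines (a sketch; the function name is my own):

```javascript
// Time an arbitrary operation using the standard User Timing API.
// The resulting measure is also visible to monitoring tools that
// read the performance entry buffer.
function measureOperation(name, operation) {
  performance.mark(`${name}-start`);
  operation();
  performance.mark(`${name}-end`);
  performance.measure(name, `${name}-start`, `${name}-end`);
  const [entry] = performance.getEntriesByName(name, 'measure');
  return entry.duration; // milliseconds, ready to ship with your RUM beacons
}
```

In a real app you’d send the duration off to your metrics endpoint rather than just return it.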

Accessibility is often the “poor cousin”, often because it’s really hard to audit in tests or post-commit hooks etc. Emily has found that scanning the live DOM for accessibility issues, and throwing exceptions (to be caught in window.onerror) is a great way to check for these issues in live, dynamically-generated content. You’re looking for things like missing alt tags, or buttons without text content.
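A sketch of that kind of scan (duck-typed so the same function can run over `document.querySelectorAll('img, button')` in the browser or over plain objects in tests; all names are my own, not Emily’s):

```javascript
// Collect accessibility problems from a list of element-like objects.
function findA11yIssues(elements) {
  const issues = [];
  for (const el of elements) {
    const tag = el.tagName.toLowerCase();
    if (tag === 'img' && !el.getAttribute('alt')) {
      issues.push('img is missing alt text');
    }
    if (tag === 'button' && !(el.textContent || '').trim() && !el.getAttribute('aria-label')) {
      issues.push('button has no accessible name');
    }
  }
  return issues;
}

// In the browser, throw so the problem surfaces through the same
// window error reporting as everything else:
function auditLiveDom() {
  const issues = findA11yIssues(document.querySelectorAll('img, button'));
  if (issues.length) throw new Error('Accessibility: ' + issues.join('; '));
}
```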

Useful tools:

  • New Relic
  • Errorception
  • Raygun
  • LogNormal
  • Google Analytics
  • Circonus
  • Whatever ops is using 🙂

Sarah Mei: Unpacking technical decisions


Conway’s law:

organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations

Eric Raymond’s retelling:

If you have four groups working on a compiler, you’ll get a four-pass compiler.

Takeaway: most of the data we look at when considering a project is social, not technical data.

Mark Dalgleish: A state of change


Mark outlined the proposed Object.observe capabilities, which let you track mutations on an object. The main thrust of his talk, though, was about mutability vs. immutability. Traditionally, functional languages have favoured immutable data-structures, as they’re easier to reason about and work with. Lots of JS frameworks are now moving towards representing the state of the application in a series of (immutable) objects; theoretically, this allows you to treat the DOM as a renderer doing its thing somewhere else — and therefore never actually pull anything out of it.
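The immutable-state idea can be sketched with plain Object.freeze (my own minimal illustration, not code from the talk):

```javascript
// Each state change produces a new frozen snapshot; the old one is untouched,
// so earlier states can be kept, diffed, or replayed.
function applyChange(state, change) {
  return Object.freeze({ ...state, ...change });
}

const before = Object.freeze({ user: 'barry', unreadCount: 2 });
const after = applyChange(before, { unreadCount: 0 });
// `before` still reports 2 unread; the renderer can simply re-render from `after`.
```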

I sent Mark a couple of tweets about using Object.observe to “fake” immutability (and avoid using massive chunks of memory and causing GC pauses), and caught up with him after the talk to discuss this a bit further. Mark noted that he’d also wondered this, but hadn’t yet had a road-to-Damascus epiphany. We chatted about the fact that if the browser (or the DOM) were to introduce truly immutable data-structures, it could then become the engine’s problem to optimise memory — and the engine could do this much more easily than could be done in JS, and more efficiently, too.

Sarah Maddox: Bitrot in the documentation


Technical documentation inevitably suffers from broken links, outdated information, pure fiction, and information overload — caused by changes to the environment, updates to the documentation platform, last-minute changes to the software being documented, and plain-old human error (which, strictly speaking, encompasses all of the above). Sarah outlined some ways of overcoming this:

  • Automated testing of code samples, to detect breakages in the samples, but also breaking changes in the API.
  • Doc reviews in the engineering team’s procedures — that is, making the definition of “done” include documentation. To facilitate this, use the same issue tracker, same review tools, and include the technical writers in the code reviews.
  • Collaborative spot testing sessions.

Jeremiah Lee: Elements of API excellence


The single most important point that Jeremiah made (in my view) was that APIs are built for human beings, and that they should therefore be treated as a UX problem. In other words, if you want to build good APIs, you need to understand the people who will use those APIs.

An excellent API should therefore be:

  • Functional: it should do what it says on the tin;
  • Reliable: it should be available, scalable, stable, and secure;
  • Usable: it should be intuitive, testable, and provide corrective guidance; and
  • Pleasurable: the API should become the means by which great things can be done.

To build excellent APIs that have, at their foundation, a really solid UX, we should make use of the same tools used by graphical UX work. One of these tools is the concept of “personas”. A persona is a descriptive representation of the people who will use the product — and the context in which they operate. By constructing personas, we make visible our assumptions about our users, and provide a frame of reference for the entire team. Personas should, of course, be validated and built around user interviews and surveys — that is, real people.

There are some key aspects of the people who will use our APIs that need to play into the personas we build:

  • Their relationship with the product;
  • The platform and programming language they’re working in;
  • Their experience and skill level;
  • Their English proficiency;
  • The motivation behind the integration work they’re doing;
  • The resources they have available; and
  • Their role within their organisation.

Ensuring that the APIs we build actually do work, and are usable, we need to undertake testing. We should test both passively and actively.

Passive testing involves looking at pieces of data such as support requests, and answering questions like “where are users asking for help?”, “what concepts are frequently misunderstood?”, and “what errors are hit often?”. We should look at API usage during an integration, extracting data such as the time between app registration and first request, the first requests and errors encountered, and the time between start of integration and production. We should also look at API usage after integration to find out information about which endpoints are actually used, and how — this will help to answer questions around what the integration is trying to achieve. It also makes it easier to track down antipattern usage of the API.

Active testing is, by its nature, going to be used more infrequently. For existing APIs, you can take the “dumb pair programmer” approach — basically, go sit with someone working against the API and silently observe what they’re doing. Look for things like interactions with their team (how do they talk about us?), how we fit into their application, how they approach the integration, what problems they encounter (and how they go about solving those), and how they test the integration.

For new APIs, we can build throw-away prototypes, or a mock API (a simple façade over an existing API, or a fully-mocked API). In either case, there should be just enough functionality to be useful, and it should be fully-documented. Put together a well-defined project, ready for an integration, and get an outsider (who lacks insider assumptions) to build the project. You can go to the extent of recording their face and the screen as they work to see how they react; alternatively, or additionally, have them commit their work regularly (every 10 minutes or so) so that it can be tracked. This allows you to then track emotional responses, and see how long tasks took to complete. It also helps to see where the process can be made more affirming, and how errors can better be handled.

The secret to machines talking to machines is to speak human first.

API versioning was brought up in the questions afterwards, and Jeremiah noted his repugnance for such schemes. A recommendation he had, though, was to maintain the two versions for a period of time, and allow developers to “opt in” to the new version when they’re ready: this is the approach taken by Facebook.
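An opt-in scheme like that might be sketched as follows (the version identifiers and function names are entirely hypothetical):

```javascript
// Resolve which API version to serve: integrations that don't ask for a
// version keep getting the oldest supported one; newer versions are opt-in.
const SUPPORTED_VERSIONS = ['2014-04-01', '2014-10-01']; // oldest first

function resolveApiVersion(requestedVersion) {
  if (!requestedVersion) return SUPPORTED_VERSIONS[0];
  if (SUPPORTED_VERSIONS.includes(requestedVersion)) return requestedVersion;
  throw new Error(`Unsupported API version: ${requestedVersion}`);
}
```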

CSSMin updated

I’ve updated my CSSMin project with a couple of new features and bugfixes. Download or fork it on GitHub!

What is it?

CSSMin takes your CSS file and strips out everything that’s not needed — spaces, extra semicolons, redundant units, and so on. That’s great, but there are loads of programs that do that. A shell script could do that! So what makes CSSMin different?

When you deliver content over the web, best practice is to deliver it gzipped. CSSMin takes your CSS file, and optimises it for gzip compression. This means that there’s a smaller payload delivered over the wire, which results in a faster experience for your users. It does this optimisation by ensuring that the properties inside your selectors are ordered consistently (alphabetically) and in lower case. That way, the gzip algorithm can work at its best.

What this means in practice is that your gzipped CSSMin-ed files are significantly smaller than plain gzipped CSS, and noticeably smaller than files that have been compressed by other means (say, YUI).
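The ordering idea can be sketched in a few lines of JavaScript (a deliberately simplified illustration; the real CSSMin is a Java program that also handles comments, nesting, colours, and so on):

```javascript
// Normalise a single CSS rule: lowercase the declarations and sort them
// alphabetically so that identical rule bodies become identical strings.
function normaliseRule(rule) {
  const [selector, body] = rule.split('{');
  const decls = body.replace('}', '')
    .split(';')
    .map((d) => d.trim().toLowerCase())
    .filter(Boolean)
    .sort(); // consistent (alphabetical) ordering helps gzip find repeats
  return `${selector.trim()}{${decls.join(';')}}`;
}
```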

In this update:

Nested properties are now fully supported.

This means that the following CSS:

@-webkit-keyframes 'fadeUp' {
  from { opacity: 0; }
  to { opacity: 1; }
}

is compressed down to

@-webkit-keyframes 'fadeUp'{from{opacity:0}to{opacity:1}}

Your nested properties will have their contents compressed with all of the other tricks in the system, but their order will be retained.

Thanks to bloveridge for reporting this bug and verifying the fix.

Font weights are replaced by their numeric counterparts.

.alpha {
  font-weight: bold;
}
.beta {
  font-weight: normal;
}

is compressed down to

.alpha{font-weight:700}.beta{font-weight:400}

Values supported are “lighter”, “normal”, “bold”, and “bolder”.

Quotes are stripped where possible.

.alpha {
  background: url('ponies.png');
  font-family: 'Times New Roman', 'Arial';
}

is compressed down to

.alpha{background:url(ponies.png);font-family:'Times New Roman',arial}

As much text as possible is changed to lower-case.

Only selectors, quoted strings (such as ‘Times New Roman’) and url() values are left intact.

Note that this means that if you mix the case of your selectors (for example, SPAN and span), your compression will be sub-optimal.


Some of the ideas for this update were inspired by Lottery Post’s CSS Compressor.

Start using it!


You will need a recent version of Java and the Java compiler.


Download or fork it on GitHub.


  1. Compile the Java:
    # javac CSSMin.java
  2. Run your CSS through it:
    # java CSSMin [input] [output]

If you don’t specify an output file, the result will be dumped to stdout. Warnings and errors are written to stderr.


These are the results of compressing the main CSS file for one of the webapps I develop at work. Note that many of these compressors only offer an online service, which means that they can’t easily be used as part of your general build process.

              Original size (bytes)   Gzipped size (bytes)
Plain         81938                   12291
YUI           64434                   10198
LotteryPost   63609                   10165
CSS Drive     69275                   10795
CSSMin        63791                   9896


Let me know how you go with it — bug reports and feature requests are always welcome!

Theme update (again!)

And so, for the third time this year, I’ve completely re-themed this site. Taking a very different tack from the last theme, I’ve tried to keep this one as simple as possible, with just a few subtle touches here and there to add interest. The palette is more subdued, which hopefully means that reading the text is a more pleasant experience. I’ve also done away with the multi-column body text in favour of a fixed-width design.

At the same time, though, I have endeavoured to use some of the new features available in modern web browsers — gradients, shadows, transitions, generated content, and so on. I’m fairly happy with the result — I think it’s a clean, unobtrusive theme that’s not too in your face. Feedback and criticism welcome! 😀

Input elements that fill their container

Previously, this post advocated the use of “text-indent” on a padding-less, border-less, 100% width input. This works, but it’s quite clunky, and old versions of IE don’t support text-indent, so it just looks bad. A much better solution is to

Just use box-sizing.

As pointed out in the comments, the simplest solution is to change the box-sizing model of the input element to “border-box”, rather than the default “content-box”. In the example below, I’ve given the containing div a 4px white border.

<div style="background:#000; width:200px; border: solid 4px #fff;"><input  style="-moz-box-sizing: border-box; box-sizing: border-box; display: block; padding: 4px; width: 100%; height: 100%; background: #fff; opacity: 0.5; border: solid 1px #666;" type="text" value="text input"/></div>

Note that for Firefox, you still need to use the -moz- prefix; it’s supported unprefixed in all the other major browsers, though.

Updated 2012-08-03.

CSS Columns

In this post, I will walk through the new columns specification that arrived in CSS 3. I will show you the current implementation state of columns in the four major rendering engines: Gecko (Firefox), Webkit (Safari & Chrome), Trident (Internet Explorer), and Presto (Opera).

Before we get on to platform-specific issues and workarounds, though, we’ll look at the various CSS properties available for working with columns.

For more in-depth information on columns, you should check out the W3C working draft and Mozilla’s MDC page on columns. The Webkit blog also has an article, but it’s not particularly informative.


I will add more to this entry as I discover more about columns — the goal is to make it an easy-to-understand reference.

Browser capabilities

Property           Gecko                    Webkit                     Trident  Presto
column-count       -moz-column-count        -webkit-column-count       n/a      n/a
column-width       -moz-column-width        -webkit-column-width       n/a      n/a
columns            n/a                      -webkit-columns            n/a      n/a
column-gap         -moz-column-gap          -webkit-column-gap         n/a      n/a
column-rule-color  -moz-column-rule-color   -webkit-column-rule-color  n/a      n/a
column-rule-style  -moz-column-rule-style   -webkit-column-rule-style  n/a      n/a
column-rule-width  -moz-column-rule-width   -webkit-column-rule-width  n/a      n/a
column-rule        -moz-column-rule         column-rule                n/a      n/a

Browsers used for testing: Firefox 3.5.4 (Windows), Safari 4.0.2 (Windows), Internet Explorer 8.0.6001, Opera 10.00 (Windows)

Please let me know if this table is inaccurate, and I will update it.

Browser bugs

These are the bugs that I have encountered using CSS columns — if you know of more, please let me know, and I’ll add them to these lists.

Gecko bugs

  • Specifying an “overflow” (or “overflow-x” or “overflow-y”) property on an element with columns prevents the column rule from being rendered at all.
  • Column rules occasionally don’t render, regardless of the “overflow” property.
  • There is no way to break columns.

Webkit bugs

  • Pixel creep: Pixels from a later column can creep back to the bottom of the previous column. This can happen with plain text, but it is much more noticeable when you use a non-layout altering effect like text-shadow or box-shadow.
  • Text that overflows the column horizontally is chopped off.
  • There is no way to break columns.



column-count

Value: <integer> | auto
Initial value: auto

If you don’t set the column-width property, column-count specifies the number of columns into which the content should be flowed.

If you specify column-width, column-count imposes a limit on the maximum number of columns to be rendered if you supply a numeric value.


column-width

Value: <length> | auto
Initial value: auto

This property indicates the optimal column width — columns may be rendered narrower or wider by the UA, according to the available space.

If column-width has the value “auto”, then the width of the columns is determined by other means (for example, column-count).


columns

Value: column-width && column-count

The columns property is a short-hand property, used to set both column-width and column-count simultaneously.


column-gap

Value: <length> | normal
Initial value: normal

Use column-gap to specify the size of the gutter that lies between columns. Most UAs will render “normal” as 1em.



column-rule-color

When a column-rule is specified, you may use column-rule-color to set the colour for the line drawn between columns. This property is approximately equivalent to the various border-(?)-color properties.



column-rule-style

By using column-rule-style, you may determine how the line between columns is to be rendered, if at all. Similar to border-(?)-style.


column-rule-width

Initial value: medium

column-rule-width sets the width of the line rendered in the gutter between columns. Basically, it’s the same as the border-(?)-width properties.


column-rule

Value: column-rule-width && column-rule-style && column-rule-color

Shorthand for setting all three column-rule properties.


column-span

Value: 1 | all
Initial value: 1

By using column-span, you can allow an element to span either the entire set of columns, or none at all.

Note that you cannot set an arbitrary number of columns to span — this property essentially ‘interrupts’ the column flow and restarts it below the spanned element.


column-fill

Value: auto | balance
Initial value: balance

If you have set a height for your columnified element, setting column-fill to ‘auto’ will cause the columns to be ‘filled’ in turn, rather than have the content balanced equally between them.

CSS minifier and alphabetiser

Update: This project is now hosted on GitHub: https://github.com/barryvan/CSSMin/

There are quite a few CSS minifiers out there, which can bring the raw size of your CSS files down substantially. There are, however, significant gains to be made if the CSS is minified so that it gzips better. To that end, I’ve written a small Java application that will read in a CSS file and output its contents to stdout or another file in a format that’s optimised for gzipping.

The problem

A gzipped file will be stored most efficiently when there are many recurring strings in the file. This means that when writing CSS files, this code:

.pony {
border: solid red 1px;
font-weight: bold;
}
.lemur {
border: solid red 1px;
font-weight: normal;
}

will be better-compressed than this:

.pony {
border: solid red 1px;
font-weight: bold;
}
.lemur {
font-weight: normal;
border: red solid 1px;
}

In the first sample, notice that we have a very long string that occurs twice:

border: solid red 1px;

In the second sample, there are strings that occur more than once, but they’re much shorter. The gzip algorithm can, in the first case, replace that entire long string with a much shorter placeholder.

What it does

So, how can we optimise CSS for gzipping, then? A file that’s minified using this CSS Minifier will have these operations applied:

  • All comments removed.
  • The properties within all selectors ordered alphabetically.
  • The values for all properties ordered alphabetically.
  • All unnecessary whitespace removed.
  • Font weights replaced by their numeric counterparts (which are shorter).
  • Quotes stripped wherever possible.
  • As much text as possible transformed to lowercase.
  • Prefixed properties (for example, -moz-box-sizing) placed before the unprefixed variant (box-sizing).
  • Colours simplified from rgb() to six- or three-digit hex values, or simple names.
  • Units on values of 0 stripped.
  • Multi-parameter items simplified to as few parameters as possible.
  • Various other small tweaks and adjustments made.

By way of example, the following CSS snippet:

body {
  padding: 8px;
  margin: 0;
  background-color: blue;
  color: white;
  font-family: "Trebuchet MS", sans-serif;
}

h1 {
  margin: 0;
  padding: 0;
  font-size: 200%;
  color: #0F0;
  font-weight: bold;
}

p {
  margin: 0 0 2em;
  line-height: 2em;
}

would be formatted to the following (note that line breaks have been added for legibility — no line breaks appear in the final output):

body{background-color:blue;color:#fff;font-family:"trebuchet ms",sans-serif;
margin:0;padding:8px}h1{color:#0f0;font-size:200%;font-weight:700;margin:0;
padding:0}p{line-height:2em;margin:0 0 2em}

Compression results

These are the results of compressing the main CSS file for one of the webapps I develop at work.

              Original size (bytes)   Gzipped size (bytes)
Plain         81938                   12291
YUI           64434                   10198
LotteryPost   63609                   10165
CSS Drive     69275                   10795
CSSMin        63791                   9896


Head over to GitHub to download the source.


First, if you haven’t done so yet, compile the code:

# javac CSSMin.java

Then, you can call the minifier by running

# java CSSMin in.css [out.css]

If you do not specify an output file, the resultant CSS will be printed to stdout (and can then be redirected as you wish).


If you have any questions or comments about this app, or if you find a bug or some weird behaviour, just comment on this post, and I’ll see what I can do.

You can also raise issues on GitHub, fork the project, commit changes, and more.

If you find this utility useful, let me know!

New theme!

I’ve finally got around to replacing the placeholder theme I had on the site. The new theme that I’ve made is much cleaner, simpler, and fresher.

This new theme is built around the Sandbox WordPress theme. Sandbox provides you with a really well marked-up document, with appropriate classes, ids, and so on where you need them — essentially, it lets you build the entire theme in CSS without having to worry about the markup, and in so doing, encourages you to build a CSS-only design. I’m proud to say that this design is wholly CSS — there is no extraneous markup, and there are also no browser-specific hacks or files: everything is contained in a single CSS file and about five images, for a total size of around 40kB.

I should also note once again that Firebug is, perhaps, the best tool for web development, be it design or coding — about 90% of the styling was tested in the browser using Firebug before being applied in the CSS file itself.

Comments, questions, or criticisms of the new design? Just leave them in the comments.

Web developer tools

In this post, I’ll outline some of the web developer tools available in the major browsers: Firefox, Internet Explorer, Opera and Safari. This is a wholly subjective post, based on my experience as one of two developers on a very large AJAX application at Saron Education.


Firefox

Firefox has arguably got the best web development tools available, all of which can be downloaded from the Firefox Addons site. The two which I find most useful are the Web Developer Toolbar, by Chris Pederick, and the often-copied Firebug (official website), which itself sports a variety of addons.

Web Developer Toolbar

The web developer toolbar is useful for quickly enabling and disabling features of your site, checking CSS, emulating mobile browser rendering, and controlling Firefox more precisely. Personally, I find its most useful features are the ability to:

  • Disable the browser cache entirely, which removes the need for Control-Refresh or cache-clearing;
  • Outline deprecated elements, or any particular set of elements in a variety of fashions, which is very useful for updating old sites;
  • Extract colour information from the current website; and
  • View the cookie information for the current site.

Download the Web Developer Toolbar


Firebug

I sometimes wonder how I ever managed to develop web applications without Firebug. Firebug allows you to alter CSS styles on the fly, edit the HTML contents of the page on the fly, visually watch the DOM being changed by your scripts, debug your scripts, type and run JavaScript straight from the browser, visualise network activity, inspect XMLHttpRequests, and much much more besides. Firebug is, in my experience, the most mature, stable, and efficient of all the tools in this survey.

The features of Firebug which I find most useful are:

  • The ability to ‘inspect’ the DOM visually (by clicking on elements within the page), then alter their attributes, styles, and even their content dynamically;
  • The ability to watch the effects of DOM alterations by running scripts;
  • The console, with which you can craft and run JavaScript which is run as though it were part of the page itself;
  • The network monitor, which allows you to view all the POSTs and GETs your XMLHttpRequests create.

Download Firebug

Internet Explorer

Until IE 8, the tools available to developers in IE were woeful at best. Fortunately, however, Microsoft has got their act together, and mimicked Firebug for version 8. The features made available in this tool include

  • The ability to interrogate the DOM to view style information about elements (changing attributes and styles hardly ever seems to work in the latest Beta, so viewing them is all you can really achieve);
  • A console, with which you can craft and run JavaScript as though it were embedded in the page;
  • JavaScript debugging.

Unfortunately, these tools are still very much in beta, and are very buggy. As I mentioned, altering element attributes and styles hardly has any effect. Also, the CSS inspection system is poorly laid out and often just plain wrong. The console is well-implemented. The entire system is definitely a step in the right direction, but it suffers from bugs and lack of innovation. Also, it seems to slow down and destabilise the entire browser.

Internet Explorer 8’s developer tools are built in; access them with the F12 key.


Opera

Opera’s developer tools, codenamed ‘Dragonfly’, sit between Firebug and IE in terms of functionality and facility. The DOM inspection and manipulation tools work really well (as well as Firebug), and are more immediately configurable, thanks to a variety of toolbar buttons. Dragonfly doesn’t have a console; rather, it uses a ‘command line’ interface. The difference is that where the console in Firebug and IE has separate areas for input and output (what you type and what it does), the command line mixes these two together, much like a Unix shell or DOS. Personally, I prefer the console paradigm, but it’s much of a muchness.

Opera’s Dragonfly is built in; access it by going to Tools -> Advanced -> Developer tools.


Safari

As with most Apple products, the developer tools in Safari are very pretty. There is a console akin to that in Firebug and IE, and you can inspect and manipulate the DOM. Unfortunately, however, the tools are quite buggy, and often fall down. Whilst the tools are very pretty, they don’t seem to be as stable even as IE 8’s.

Safari’s web developer tools are built in; access them by enabling the Develop menu from the Advanced tab of the preferences, then choosing the appropriate item from the Develop menu.


Conclusion

Whilst Firebug is still by far the best tool available for web developers, the widespread development of tools by browser developers means that cross-browser debugging and development is becoming ever easier. Hopefully the tools will foster competition, so that feature sets and stability improve in all the tools.

Geany IDE: Tango dark colour scheme

Now on GitHub

I’ve decided to host this theme on GitHub, in the hopes that it will be easier for people to contribute, modify, and extend it. Head over to the GitHub page to download and/or fork the theme!

Recent changes

  • Added in batisteo’s python filetype — much appreciated!
  • I’ve updated the scheme slightly, to better support CSS3 functionality, JavaScript highlighting, and more languages. The list of supported languages below has been updated.

Geany is a lightweight IDE for Linux and Windows, and it’s quickly become my favourite, even going so far as to supplant VIM in many of my day-to-day tasks. I’ve started putting together a dark Tango-based theme for the Geany IDE. So far, the coloured filetypes include:

  • C
  • C++
  • CSS
  • HTML
  • Java
  • JavaScript
  • PHP
  • SGML
  • Shell scripts

The colours are loosely based on the Dark Geany project.

Get it from GitHub! (Or download the original package, which is probably out of date.)

Geany dark Tango colour scheme screenshot