Angular and Optimizely: A/B testing on SPAs

One of my first tasks at blinkbox books was to integrate Optimizely's powerful A/B testing service with our dynamic AngularJS single-page web app. This blog post describes my experience and shows how this can be done in a way that eliminates flashes of unstyled content (FOUC).

How Optimizely works

Optimizely's WYSIWYG editor

Optimizely is an A/B testing service that allows creating one or more variations of a page and targeting those variations at a percentage of the page's visitors. The way it works is simple: Optimizely provides a WYSIWYG interface which allows a user (e.g. the marketing team) to make changes to a variation. The changes translate into some jQuery code. This code is embedded into a JavaScript snippet that is inserted into the page.
How Optimizely changes the page

Dynamically updating dynamic single page ecommerce websites

Optimizely works great if the page is rendered on the server and user interactions lead to full page loads. In other words, the jQuery snippet that changes the page is applied once, when the page loads.

But what if the page loads a single page application (SPA)? In that case, Optimizely doesn't know when the page has changed, because navigation is controlled by JavaScript instead of page requests. This is explained in detail in the support pages. To solve this, Optimizely provides an API that can be used to manually tell Optimizely that the page has changed. We simply call optimizely.push('activate') to tell Optimizely to activate any experiments targeted at the current URL.
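Optimizely's snippet consumes a simple command queue: before the snippet loads, window.optimizely is just a plain array, so pushing a command merely queues it until the snippet drains the queue. Here is a minimal, Angular-free sketch of what our activation call does (the browser global is stubbed with a plain object so the sketch runs anywhere):

```javascript
// Before Optimizely's snippet loads, window.optimizely is a plain
// array, so pushing commands simply queues them; the snippet later
// drains the queue. A plain object stands in for the browser global.
var window = { optimizely: [] };

// In the real app this runs inside a $routeChangeSuccess listener.
function activateExperiments() {
  window.optimizely = window.optimizely || [];
  window.optimizely.push(['activate']);
}

activateExperiments();
```

In the real service this function is registered via $rootScope.$on('$routeChangeSuccess', ...), and the queue is consumed by the Optimizely snippet rather than inspected by us.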

However, SPAs aren’t just about changing pages dynamically. They can involve highly dynamic views and components that request content asynchronously. For example, if the user scrolls down, more items can be requested and displayed on the fly. To make use of Optimizely’s WYSIWYG editor, we needed to support any update to any part of the page at any URL. This would mean that we could hand Optimizely over to the marketing team without worrying about making any kind of code change to make it work. This post is an overview of what I did to get Optimizely to correctly apply modifications to any part of a SPA.

The naive solution: activate experiments when the page changes

There are a few events that we could hook into to activate Optimizely experiments. Each of these solutions fixes one problem but introduces another. Nevertheless, let’s talk through each of them.

  • $routeChangeSuccess: The obvious one. Every time the page changes, we need to apply any experiments that might be associated with the new URL. We call optimizely.push('activate') and this takes care of everything for us. The problem with this is that there might be Angular directives and embedded modules in the page which load dynamically. In this case, the Optimizely snippet would be applied, but no actual changes would be made because the required elements would not have loaded yet.

  • $includeContentLoaded: Handling dynamic templates. To fix the problem above, we could listen for this event and call optimizely.push('activate') every time it fires. This works, but only the first time a template is loaded. For example, if we load a template for one ‘tab’ and then switch tabs, $includeContentLoaded will not fire when the user revisits the first tab, as the template is already loaded and cached by Angular. The other problem is that Angular directives with external templates will not trigger this event when their templates are loaded.

  • $browser.notifyWhenNoOutstandingRequests: An estimate of when the page has finished ‘loading’. This private API is what Protractor uses for end-to-end tests. If we register a callback that activates Optimizely when the page has finished loading according to Angular, any page modifications would be applied correctly. The drawback is that it takes some time for the page to finish loading, so there will be a very obvious flash of unstyled content.
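The three hooks above can all be wired to a single activate function. The sketch below uses tiny stand-ins for the injected Angular services ($rootScope.$on and the private $browser.notifyWhenNoOutstandingRequests are the real APIs; the stubs just record registrations) so that the shape is clear and the code runs anywhere:

```javascript
// Wire every hook to the same activation callback. In the app,
// $rootScope and $browser come from Angular's dependency injection.
function wireOptimizely($rootScope, $browser, optimizely) {
  var activate = function () { optimizely.push(['activate']); };
  $rootScope.$on('$routeChangeSuccess', activate);
  $rootScope.$on('$includeContentLoaded', activate);
  $browser.notifyWhenNoOutstandingRequests(activate);
}

// Minimal stand-ins for the injected services, for illustration only.
var handlers = {};
var fakeRootScope = {
  $on: function (name, fn) { handlers[name] = fn; }
};
var fakeBrowser = {
  notifyWhenNoOutstandingRequests: function (fn) { handlers.idle = fn; }
};
var commandQueue = [];

wireOptimizely(fakeRootScope, fakeBrowser, commandQueue);
handlers['$routeChangeSuccess'](); // simulate a route change
```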

A more comprehensive solution: hook into the digest cycle

What if we knew roughly when the DOM had been changed by Angular? We could then apply the Optimizely changes every time Angular updated the DOM. Luckily, we do know when the DOM has changed - in the digest cycle. We can simply call the activate method every time Angular executes the digest cycle, and this is easily done by setting up an infinite watch, like below:

var force = true;
$rootScope.$watch(function () {
  // return a different value on every digest so the listener always fires
  force = !force;
  return force;
}, function () {
  // defer with setTimeout so the snippet runs after the DOM has settled
  setTimeout(function () {
    fromDigest = true; // flag checked by our XHR patch (see below)
    window.optimizely.push(['activate']);
  }, 0);
});
I found that having this solution along with listening for the $routeChangeSuccess event worked best, but there are still some problems…

Applying an experiment multiple times

One could ask: surely we’re calling the Optimizely API dozens of times - is this wise? Well, it turns out that the Optimizely snippets are idempotent, meaning that calling them multiple times won’t change the page multiple times.

Too many XHR requests

However, I did notice that several XHR requests ended up being made because of Optimizely’s logging feature. This is bad. Every time we call optimizely.push('activate'), an XHR request is queued. This is not only bad network usage, it will also drain the battery and is just pure evil. We had to have a workaround. It would be nice if Optimizely allowed us to disable logging for a single page but, until then, I implemented an incredibly hacky workaround. The solution: monkey-patch XMLHttpRequest.

// '' is assumed to be the host of Optimizely's logging
// endpoint - check the requests in your network panel to confirm.
// 'fromDigest' is set to true by the digest watch just before it
// activates, so we only intercept logging requests that it triggers.
var fromDigest = false;

function patchXHR() {
  var originalOpen =;
  var originalSend = window.XMLHttpRequest.prototype.send;
  var prevUrl; = function (type, uri) {
    if (uri.lastIndexOf('') >= 0 && fromDigest) {
      // remember the URI so we can intercept the request in 'send'
      this._requestURI = uri;
      fromDigest = false;
    }
    originalOpen.apply(this, arguments);

  window.XMLHttpRequest.prototype.send = function () {
    if (typeof this._requestURI === 'string'
        && this._requestURI.lastIndexOf('') >= 0) {
      var currentUrl = $location.path(); // Angular's injected $location
      if (currentUrl === prevUrl) {
        // drop the request: we're still on the same page
      } else {
        // allow logging requests on actual page changes
        prevUrl = currentUrl;
        originalSend.apply(this, arguments);
    } else {
      // allow all non-Optimizely requests
      originalSend.apply(this, arguments);
This just about solves the requests problem, but there are still some other problems…

Undoing an experiment

What if we had an experiment that removed the navigation bar, or the footer, or any other common element in our website? For example, what if we wanted to remove the page footer but only in the ‘About Us’ page, and not any other pages? Well, because we’re using a modular single page application, once we remove the element (using the Optimizely snippet), it’ll be removed from our whole application until the user reloads! This is because common elements like the page header and footer don’t change between page changes. All the routing is done in JavaScript, remember?

Unfortunately, there wasn’t an elegant way to fix this. We decided to simply not allow these types of changes unless they are app-wide. It would have been nice if there were a way of undoing a snippet, but this would definitely be a challenge (e.g. if you remove an element rather than hide it, then re-insert it, how do you ensure Angular still knows about it? What about memory and performance considerations?).
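One direction we considered (a hypothetical sketch, not something the Optimizely editor generates - it removes elements outright) is to hide shared elements with a CSS class instead of removing them, and undo the class on route change. The element is modelled as a plain object here so the sketch is runnable:

```javascript
// Toggle a class on a shared element depending on the route, so the
// 'removal' can be undone when the user navigates away. The route and
// class names are made up for illustration.
function toggleFooter(route, footer) {
  var i = footer.classes.indexOf('ab-hidden');
  if (route === '/about-us') {
    if (i < 0) footer.classes.push('ab-hidden'); // hide on About Us
  } else if (i >= 0) {
    footer.classes.splice(i, 1); // restore everywhere else
  }
}

var footer = { classes: [] };
toggleFooter('/about-us', footer); // footer hidden
toggleFooter('/', footer);         // footer restored
```

In the real app the route-change half would live in a $routeChangeSuccess listener, and the CSS class would carry display: none.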


Getting third-party services to work with a SPA is hard. Optimizely should invest engineering effort in making it work with at least the most popular frameworks such as Angular, or at least provide more fine-grained APIs to allow users to integrate manually.

While the solution of using Angular’s digest cycle worked, it isn’t great and smells like a hack. There needs to be a better way of applying A/B testing experiments on a single page app, and this would require a lot of thought.

Having said that, for very simple web apps, the approach described above would probably be overkill. In fact, for most simple applications, using the $routeChangeSuccess approach works just fine. However, if your app is dynamic and has many components and directives which are also dynamic, getting Optimizely to work will need a bit more hacking - and this article has been an overview of what we did at blinkbox to get it to work.

Show me the code

To do all of the above (and a bit more), I wrote an Angular service for the blinkbox books application. You can see the service on GitHub. I’m also thinking about extracting it into a standalone module so that anyone can drop it into their app.

Why I bought the Alfred Powerpack

If you haven’t heard of Alfred, check it out. It’s basically a Spotlight replacement with a lot of power.

If you use Alfred but aren’t sure whether you should buy the Powerpack, check out some of the cool things you can do with it in this GitHub repo.

Alfred is probably my #1 productivity tool on the Mac. Anything is literally a few keystrokes away.

Alfred also integrates with Dash. Dash is an offline, quick, searchable documentation tool. Combine Dash with Alfred, and you get documentation for your favourite language or library literally in a few keystrokes.

Instant documentation

Dash isn’t free (though there is a free trial), but if you’re willing to invest in some great developer tools that would save you time, this combo works great.

Finally one more thing I love about Alfred: opening Google Docs. If you install Google Drive, your Google Docs, Sheets, and Presentations will be downloaded and synced. Then you can open any Google Doc using Alfred (type ‘open’ followed by the filename), and it will open the Google Doc in the browser. I found this really quick and useful because it means I don’t have to have a browser open, go to Google Drive, and search for the file.

I was actually wondering if anyone has done a Google Drive Workflow for Alfred that would allow this functionality without having to download and sync all your files using the Google Drive app. It turns out no one has done it. I might take a look and see if I can integrate some of Google Drive’s APIs with Alfred Workflows to see if this is possible.

TIL How a Java Debugger Works

I’m working with my friend on a project to implement a web based debugger for Java projects. Today I learned all about the Java Debug Interface (JDI, which I like to pronounce as ‘Jedi’).

It’s essentially an event-driven request/response API that supports all the features a debugger needs - step over, step into, breakpoints, stack inspection, etc. For example, say we want to set a breakpoint on a certain line number. First, we wait for the target class to be loaded, then create a breakpoint request and wait for the breakpoint event, which fires when execution reaches that line. At that point, we can look at the stack and inspect variables.

Add some websocket wizardry, and you can hook it up with a web application.

If you’re interested in the details, head over to the GitHub project.

Why a web based Java debugger? Aren’t you reinventing the wheel?

The idea isn’t to bring a fully-fledged code editor to the web. Plenty of those exist. Instead, the idea is to quickly debug an existing Java project in the browser. This simplifies the task of the web app - we don’t care about writing code, we just care about a simple and purpose-built debugging experience. It’s also useful for people who use Vim or Sublime instead of an IDE. Finally, at the moment this is just a proof-of-concept experiment. We’ll see how it goes.

Setting up my dev environment on a new Mac

Here’s what I do when I get a new Mac or reinstall OS X.

The absolute essentials

  1. Download Chrome. Already, everything is synced. Awesome.
  2. Download Alfred. This is my go-to tool for opening just about anything. It’s so good I actually bought the powerpack.
  3. Download Spectacle for easy and powerful window management.
  4. Download iTerm. This is my preferred terminal environment.
  5. Set up a global keyboard shortcut for opening iTerm.
  6. Install homebrew. Once we have this, we have everything. When installing homebrew, it will also install the Apple Command Line Developer Tools. Yay.
  7. brew install all the things. I normally brew install node first as I am a self-proclaimed JavaScript fanboy.
  8. Install Sublime Text 3. Unless you’re a vim wizard. In that case ignore steps 1-7. Vim will suffice.

Make the terminal awesome

Dat terminal do

  1. Download oh-my-zsh. I can’t be bothered explaining the benefits of zsh over bash but you’ll feel the power of zsh as soon as you start using it.
  2. Set up zsh-syntax-highlighting. It gives instant visual feedback to tell you whether what you’re about to execute is correct. e.g. if you type the command “echo” incorrectly, it will show in red. If you type it correctly, it will show in green (like the above screenshot), before you actually execute the command.
  3. Download tomorrow-night-eighties theme for iTerm2. To do this, save this file as ‘Tomorrow Night Eighties.itermcolors’ and open it. iTerm2 will import it. Then, choose it in iTerm > Preferences > Profiles > Default > Colors > Load Presets…
  4. Set up pure prompt. This will add stuff like nicer git integration, timing functions (see screenshot above), and other neat tricks in your terminal. To do this, save this file as ‘pure.zsh’. Then run:

    mkdir ~/.oh-my-zsh/functions
    ln -s /path/to/pure.zsh ~/.oh-my-zsh/functions/prompt_pure_setup
  5. Restart iTerm.

  6. At this point, I like to set up my zshrc aliases. Sublime Text is an important one. Add this to the end of your .zshrc file:
    alias subl="'/Applications/Sublime'"

Cool, you can now open files in Sublime Text from your terminal using the subl command! (Try it on folders too!)

Other stuff

Set up your ssh keys:

  1. Open a terminal and type ssh-keygen
  2. Repeatedly press enter (feel free to give a password if you want)
  3. Copy your public key and put it into your github account.
    cat ~/.ssh/ | pbcopy

Set your git user name and email:

git config --global "Your Name"
git config --global ""

Set up git lg alias, a better git log:

git config --global alias.lg "log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit"

JIT App Installations.

A lot of people install all the apps they could possibly need right after they do a fresh install. I used to do this. Then I realised that the majority of this time is wasted. A better approach is to install apps on the fly, when you need them. This will save you time, and also some precious disk space! Package and app managers (like npm or the App Store) make installations really quick and easy. So install the essentials and forget the rest!

TIL Watching People Code is a thing

So lately, thanks to Twitch, there’s been a new phenomenon on the interwebs to do with watching people play games. This craze got so big that large companies such as Google and Amazon were willing to spend millions to buy Twitch because of how successful it is.

Well, it looks like watching people write code live is now becoming a thing. There’s a growing reddit community (/r/WatchPeopleCode) dedicated to it, and a website was made to list the current live streams from reddit. The premise is simple: you watch someone write code live online. Sometimes it’s someone giving a tutorial and speaking to the audience. Sometimes it’s just someone working on their own project - maybe there’s better productivity when you know someone’s watching you code.

Who knows, I might try it out one day. Or maybe in the future ‘watch people blog’ will become a thing too.