Coding While Fasting Part 2

Read Part 1: General productivity tips for coding during Ramadan

The assumption is that when you’re tired, you’re more likely to lose focus and therefore make mistakes. It is important to combat this tendency by first not blaming your lack of productivity and mistakes on fasting, and second by finding and using productivity tools and techniques. In the previous post, I mentioned 5 tips that can help you with general productivity during Ramadan. In this post, I focus on technical techniques and tools to help you be a more productive coder during Ramadan.

1. Catch errors fast.

Squash 'em before they become a problem

Linting

Sometimes, you want a safety net to catch common typos and errors. Linting gives you exactly that: think of it as a spell checker for code. Here's an example showing eslint linting a snippet of JavaScript.

var foo = bar;

1:5 - 'foo' is defined but never used (no-unused-vars)
1:11 - 'bar' is not defined. (no-undef)

When coding, having such safety nets is useful in general, and when you’re fasting they certainly help you catch common mistakes.

We can attach a linter to a pre-commit hook. This means you won't be able to commit code if there are any linting issues like the ones above. Here's a pre-commit hook that uses eslint.
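
As a rough sketch of what such a hook might look like (saved as .git/hooks/pre-commit and made executable, assuming eslint is installed locally via npm; the exact paths are illustrative):

#!/usr/bin/env node
// Minimal pre-commit hook sketch: lint the staged .js files with the local eslint.
var execSync = require('child_process').execSync;

var staged = execSync('git diff --cached --name-only --diff-filter=ACM')
  .toString()
  .split('\n')
  .filter(function (file) { return /\.js$/.test(file); });

if (staged.length === 0) { process.exit(0); }

try {
  execSync('./node_modules/.bin/eslint ' + staged.join(' '), { stdio: 'inherit' });
} catch (e) {
  console.error('eslint reported problems, so the commit was aborted.');
  process.exit(1);
}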

2. Focus on one thing at a time. Tests.

Tests make you focus!

Test driven development becomes even more useful when you're fasting, for a couple of reasons:

  1. It allows you to focus on one thing at a time.
  2. It allows you to catch errors before you commit them.

I find that it is always useful to write down what you're doing: it lets you focus on the task at hand and not get distracted by other things. Writing tests does exactly that; each test gives you a 'task' that you must make pass before moving on to the next thing.

There are tools to help you with testing. PhantomFlow allows you to write front end user flows. Here’s an example flow:

flow("Get a coffee", () => {
  step("Go to the kitchen", goToKitchen);
  step("Go to the coffee machine", goToMachine);
  decision({
    "Wants Latte": () => {
      chance({
        "There is no milk": () => {
          step("Request Latte", requestLatte_fail);
          decision({
            "Give up": () => {
              step("Walk away from the coffee machine", walkAway);
            },
            "Wants Espresso instead": wantsEspresso
          });
        },
        "There is milk": () => {
          step("Request Latte", requestLatte_success);
        }
      });
    },
    "Wants Cappuccino": () => {
      // ...
    },
    "Wants Espresso": wantsEspresso
  });
});

Now that the user flow is written, even if it is just stubs like above, you have in your head a general plan of what the feature will look like. You can then implement each step and focus on it. For example, you can implement goToMachine. For more details about PhantomFlow, check out the GitHub repo here.

3. Code reviews and pair programming

Pair programming

A good way to keep productive while fasting is to work with someone.

Pair programming is an agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer or navigator, reviews each line of code as it is typed in. The two programmers switch roles frequently.

From Wikipedia

Pair programming and code reviews have many benefits. When fasting, they are extra useful because:

  1. When you're in the driving seat, you're forced to stay constantly focused. Someone else is watching you. Better not make mistakes.
  2. When you're in the observer seat, you get to 'rest' from actual coding while still contributing to the solution.
  3. They catch problems in your code and let you learn from someone else's experience.

Conclusion

Ramadan Mubarak!

Ramadan is a month of spirituality, but it is also a month of productivity. It forces you to focus - whether it is on your worship, or on your work. Finding tools and techniques to help you focus during Ramadan is not only good to get you through the month, but the things you learn can be applied throughout the year.

Offline-resilient Mixpanel tracking for Ionic without a Cordova plugin

Mixpanel is a tracking and analytics platform that allows you to track and analyse user behaviour in your apps.

Mixpanel is a user behaviour event tracking library for the Web, iOS, and Android

Mixpanel provides great libraries for Web, iOS, and Android.
But what about cross-platform web applications that run in Cordova? This is the problem I was faced with when I tried to integrate Mixpanel analytics into my simple prayer times app.

Why not use the Mixpanel JS library?

The most obvious solution is to use the JS API provided directly by Mixpanel. However, there's a problem with this approach: the JS API assumes always-on connectivity, and as we know, mobile phones aren't always online. If you want to track user behaviour while they're on a plane or otherwise without a connection, you can't use the JS library. Mixpanel even say this in a blog post.

Wrap iOS and Android with JavaScript

To track events using Mixpanel while the user is offline, you need to implement a queuing system. The iOS and Android libraries implement this. In fact, there is a Cordova plugin that hooks into the official Mixpanel iOS and Android libraries, so you can track Mixpanel events from JavaScript while the native libraries handle the queuing.

Although this will work, there are a few problems with this approach:

  1. Your app will depend on three libraries: the iOS, the Android, and the wrapper libraries. You’ll need to manage these dependencies properly as updates come in. This will also increase the size of your app.
  2. Other platforms will not be supported. For instance, Windows Phone will not be supported unless a wrapper for that comes out.
  3. Performance considerations. It's good to bear in mind that calling Java and Objective-C from JavaScript is not a smooth process, and it involves serializing and unpacking the data you send to and from the native libraries. This can add overhead when tracking events.

Mind the queue: write your own Mixpanel library

Luckily, we don’t have to settle for a native wrapper. Mixpanel also provide a RESTful HTTP API. We can implement our own Mixpanel library that allows for offline tracking.

The trick to offline event tracking is to keep a queue of things that you're going to send to Mixpanel. We can use Mixpanel's HTTP API to send the events.

var queueBuffer = [];
function pushToQueue(val){
  val.id = queueBuffer.push(val) + (new Date().getTime());
  return val.id;
}


function track(event, properties){
  var nowTime = new Date().getTime();
  pushToQueue({
    event: event,
    properties: _.merge({time: nowTime}, registrationProperties, properties || {}),
    timeTracked: nowTime,
    endpoint: 'track'
  });

  if(queueBuffer.length > 4){
    push();
  } else {
    schedulePush();
  }
}

Not online? Wait in the queue!

Periodically, we attempt to send 4 items from the queue. If the send succeeds, we remove those 4 items and continue with the next items in the queue. If it fails, we keep the items in the queue and try again later.

function doPost(endpoint, subQueue){
  if(subQueue.length === 0){
    idCounter = 0;
    return;
  }
  var preProcessQueue = endpoint === 'track' ? preProcessTrackQueue : preProcessEngageQueue;
  var queueEncoded = base64.encode(JSON.stringify(preProcessQueue(subQueue)));

  $http.post(TRACKING_ENDPOINT + endpoint + '/', {data: queueEncoded}, {
    headers: {'Content-Type': 'application/x-www-form-urlencoded'},
    transformRequest: function(obj) {
      var str = [];
      for(var p in obj) {
        str.push(p + "=" + obj[p]);
      }
      return str.join("&");
    }
  }).then(function pushSuccess(){
    removeQueueItems(subQueue);
    schedulePush();
  }, function pushFail(){
    schedulePush();
  });
}
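
The push and schedulePush helpers referenced above aren't shown here. A minimal sketch of what they might look like, assuming Angular's $timeout is available and using a hypothetical flush interval:

// Hypothetical helpers; the names match the snippets above, the details are assumptions.
var PUSH_INTERVAL = 30 * 1000; // assumed flush interval
var pushScheduled = false;

function push(){
  // Take up to 4 queued 'track' items and send them; doPost removes them on success.
  var subQueue = queueBuffer.filter(function(item){
    return item.endpoint === 'track';
  }).slice(0, 4);
  doPost('track', subQueue);
}

function schedulePush(){
  if(pushScheduled){ return; }
  pushScheduled = true;
  $timeout(function(){
    pushScheduled = false;
    push();
  }, PUSH_INTERVAL);
}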

App closing? Save the queue for later

We need to persist the queue when the user switches away from the app. This is because we don't want to lose everything in the queue when the user switches apps or closes ours. We can use localStorage to save the data: every time the user switches away from or closes the app, we save the queue to local storage. When the user opens the app again, we restore the queue from local storage and continue processing it periodically.

window.document.addEventListener('pause', function(){
  persist(QUEUE, queueBuffer);
  queueBuffer.length = 0;
}, false);

window.document.addEventListener('resume', function(){
  var queue = restore(QUEUE);
  if(!queue){
    queue = [];
    persist(QUEUE, queue);
  }
  queueBuffer = queue;
  schedulePush();
}, false);

Queue getting long? Compress it

Because we are using local storage, we are limited in how much data we can store: typically, local storage has a budget of about 5 MB. If we also use local storage for our app data, this can be a problem. We have a few options, but for maximum compatibility with all browsers we can keep the queue in local storage and compress it first. We use a string compressor to reduce the size of the string we store.

function persist(key, value){
  var valueCompressed = LZString.compress(JSON.stringify(value));
  window.localStorage.setItem(key, valueCompressed);
}

function restore(key){
  var item = window.localStorage.getItem(key);
  if(item){
    return JSON.parse(LZString.decompress(item));
  } else {
    return undefined;
  }
}

As shown above, we use the LZString library to compress and decompress the stored string. As a very simple analysis, let's say we want to compress a few very simple Mixpanel events that don't hold much data. The JSON data to store in localStorage would look like this:

[{"event": "Level Complete", "properties": {"Level Number": 9, "distinct_id": "13793", "token": "e3bc4100330c35722740fb8c6f5abddc", "time": 1358208000, "ip": "203.0.113.9"}},
{"event": "Level Complete", "properties": {"Level Number": 9, "distinct_id": "13793", "token": "e3bc4100330c35722740fb8c6f5abddc", "time": 1358208000, "ip": "203.0.113.9"}},
{"event": "Level Complete", "properties": {"Level Number": 9, "distinct_id": "13793", "token": "e3bc4100330c35722740fb8c6f5abddc", "time": 1358208000, "ip": "203.0.113.9"}},
{"event": "Level Complete", "properties": {"Level Number": 9, "distinct_id": "13793", "token": "e3bc4100330c35722740fb8c6f5abddc", "time": 1358208000, "ip": "203.0.113.9"}}]

This is approximately 1392 bytes. If we compress it with LZString, it would look like this:

86 36 44 f0 60 0a 10 6e 02 76 01 e6 00 70 01 8c
84 96 62 03 08 83 60 0f 80 2d d8 0e 04 4f 03 60
89 46 38 01 44 12 c0 f7 84 25 3b 03 b8 22 cb 98
1c 80 57 80 00 22 58 8c 09 f0 4c cb 13 00 9d 36
c0 da 6f 0c 3e 00 39 9b c0 3c 60 04 c0 0c 8a 1d
9a 7e e0 69 0d 10 07 67 7d 44 54 62 65 01 01 d0
fe 85 2e fd e8 57 c8 0a 13 60 a1 3f 8b a3 19 80
80 98 0a 07 1b 80 4f 28 21 80 9c 98 8a 9c 98 99
11 3b 2a 15 8f 01 7f 84 84 8b 8b 5b 9b ac 0e 09
7e be 1d 80 a5 0b 81 6e 54 a5 00 98 93 2f 38 6d
02 16 bf 0e 3e 0e 19 31 15 05 83 2c 0b 13 17 3b
1f 0f 10 16 84 a8 34 bd 82 ac 9b 92 86 2a 8e 96
b1 81 ac a9 b5 85 2d ac 93 bd bb ab b7 a7 a0 9f
58 70 4c 64 62 5c 6a 72 db 76 4f 26 5e 0e 51 41
59 49 18 6a 75 05 7d 6d 4b 63 a9 4d 82 01 be 74
5c dd 14 21 44 8e a5 a1 cc 86 07 56 8a 1b 98 84
50 08 71 22 95 24 23 03 2d 42 6a 94 36 4d e1 8b
31 b1 76 a4 2f 36 81 9d e3 cc 78 70 be bc 90 00
2e 44 8a 12 12 c4 14 49 31 a9 91 e5 bd 40 85 f2
07 37 ae dc f9 e5 aa d4 d6 00 18 af 04 17 82 41
bd a6 40 28 18 d6 85 c7 91 8c f0 92 26 5a 89 69
63 43 8a e4 6a 3c 96 d0 25 86 cc 6d 72 56 12 5a
71 98 9d a4 e7 e9 55 26 9b d5 e4 70 9e 3c 5f d9
df 2b 53 9c 94 42 55 85 a8 1a ac a1 68 d2 74 01
00 80

which is approximately 193 bytes, about 14% of the original size (an 86% reduction). We can therefore afford to store far more events.

Alternatively, you can use another storage method such as indexedDB or better still, use localForage which has decent fallback to localStorage. I used localStorage above because of its simple, cross-platform, synchronous API.

Show me the code!

It would be an interesting project to create a stand-alone JavaScript library, but I think it would be best to contribute it to Mixpanel's open-source library so that their JavaScript library supports it out of the box. Instead, this blog post was about how to implement it yourself, to show that there's not a lot going on under the hood. In fact, if you read the Mixpanel library source code, you'll see that it really isn't that complicated.

However, if you’d like to see a working implementation of the above snippets, do have a look at the Angular service I created for my prayer times app. This can be found here.

Moving to hexo

Today I migrated my blog from Wordpress to Hexo.
Hexo is a statically generated blogging framework, meaning you write posts locally using Markdown.

Why move from WordPress?

WordPress. You did well, but you're not fast enough

WordPress is great for managing a blog and is a fully-fledged, all-in-one solution. It's also the most popular blogging platform, with many sites on the web using it to manage blogs and content. So why move away from something that's powerful and trusted?

The answer for me is the combination of price and performance. Put simply, WordPress gives you loads of features and power, but at the price of performance. If you want your website and blog to scale, you’ll have to spend money. The more features and plugins you add, the slower your site will get. If you want to speed it up you’ll have to invest in some better servers.

My original setup was a very simple blog that didn't use a plethora of plugins. Even so, with heavy caching on content and CDN-hosted images and resources, it was still noticeably slow. It was hosted by TSO Host, which gave me all I needed and more, but the main drawback was performance. It simply wasn't instant.

There were two options in front of me:

  1. Improve performance by going for a cloud based hosting solution with CDN

  2. Greatly simplify my blog by switching to a statically generated site.

I investigated option 1: a cloud-based solution. I found a great website that describes what's needed and how much it costs. The conclusion was that it would cost 10-20 USD and involve a great deal of hassle setting everything up. Sure, I have experience with Linux and LAMP, but I manage my blog in my spare time and don't want to spend my weekends fixing nginx server issues, let alone pay money to do so. For me, the hassle was not worth it.

I started to look into option 2: a statically generated blog. This means I write my posts locally, save them, pass them through a generator with a templating engine, and publish the result. It also means I can't just blog from anywhere: I have to use a computer, save files, commit them to git, run a program, and push the changes. Luckily, I'm used to this. In fact, I used a Markdown plugin for WordPress that allowed me to write posts using Markdown.

Because it’s statically generated, the speed was good. There is no server-side processing involved whatsoever.

Because it’s statically generated, the price was good. I’m able to host my blog free on GitHub. I just need to pay for my domain.

I got what I was looking for: speed and performance.

The move to hexo

Next, I looked for what platform to use. There are loads to pick from.

Hexo. You're written in JavaScript. I chose you.

I decided to use hexo because… well, I just liked that it was written in JavaScript and I found a cool theme :) I agree, not really an engineering approach to picking a platform, but who cares, it has what I need. I figured if I got bored I could easily switch.

Luckily, hexo had an official WordPress importer. So moving my blog was a few clicks and terminal commands away.

Once imported, I tweaked a few posts, copied my images, set up comments using Disqus, set up Google Analytics, etc. These are all options in a YAML config file. It's that easy.

Once I finished with my blog locally, I created a repo for it on GitHub and followed instructions to get it working with my own domain name.

My experience overall was pretty sweet and straightforward. Hooray!

The price and performance

The price saving is easy to work out. I’m saving more than £5 a month.

In a later blog post, we’ll analyse performance gains.

Coding While Fasting

Update: Read some more technical tips in part 2

Some say you can’t be productive in Ramadan. I disagree. You can be more* productive while fasting, and it can help you stay focused. Here’s how I do it.

*Coding while fasting made me a really fast coder. Ok that was terrible.

1. Ease off the coffee

pro·gram·mer (n) An organism capable of converting caffeine into code.

As true as the definition above may seem, you can still be a programmer without drinking coffee. I promise. Coffee can help you concentrate, but a lot of the time that's more our perception of it than its true effect. So, to increase focus during Ramadan, I slowly reduced my caffeine intake as the month got closer. This means I'm less dependent on coffee to stay awake (e.g. on Monday mornings), and I reduce the withdrawal symptoms during the month.

2. Fast Mondays and Thursdays

Intermittent fasting has health benefits and immense reward. In the run-up to Ramadan it gives you the extra benefit of getting your body prepared for the long fasts. Once the month starts, you're already used to it and are able to spend your energy on other ibadaat.

3. Go out for no lunch

Instead of sitting at your desk for very long periods, go out. Even better, go for a walk to the local Masjid, pray Duhr, and stay there for a while. You might not be feeding your body but you’re nourishing your soul ;)

4. Stop thinking about food and think about the ajr

Do as you're told!

I know it’s hard, especially when you get those emails from people coming back from holidays with different types of sweets from all over the world. So instead, every time you think about food, think about the reward you’ll get inshaAllah.

Whoever observes fasts during the month of Ramadan out of sincere faith, and hoping to attain Allah’s rewards, then all his past sins will be forgiven. (Al-Bukhari and Muslim)

5. Stop blaming it on the fasting

You always hear people making silly mistakes and saying "sorry, I'm fasting today". This will only make you believe that fasting decreases your performance. Take a positive outlook on fasting. If you keep telling yourself it's affecting you negatively, it will. If you tell yourself it improves your productivity, which imho it does, it will.

More on Ramadan productivity

Productive Muslim has a lot of practical tips for making the most of Ramadan and being super productive!

A better Ionic starter app

TLDR: I wrote a nice ionic starter app that anyone can use as a boilerplate. You can find it on GitHub.

While I was writing my first Ionic app, I realised there are a lot of tools from front end web development that can be added to the project. The default starter app was a bit too simple.

A better file structure

In the default starter app, all the Angular components of each type shared a single file.

js
├── app.js
├── controllers.js
└── services.js

Opening up controllers.js will show all the controllers of our app. What if we had many? I prefer having a file for each controller.

A better starter app should have each controller, service, constant, etc. in its own file. That way, we can quickly get to the code we’re looking for later on.

js
├── app.js
├── controllers
│   ├── account.ctrl.js
│   ├── chatdetail.ctrl.js
│   ├── chats.ctrl.js
│   └── dash.ctrl.js
└── services
    └── chats.service.js

The suffix in the names (.ctrl.js) is optional, but allows us to distinguish between controllers/services with the same name.

Unit testing support with Karma

I was surprised to find that the default project didn't have unit test support. This was strange because Angular already has really good unit and end-to-end testing support. To fix this, we simply need to add a karma.conf.js. Most of it is the default settings (simply run karma init, making sure you have karma installed), with the following files included:

{
  files: [
    "www/lib/ionic/js/ionic.bundle.js",
    "node_modules/angular-mocks/angular-mocks.js",
    "www/lib/ngCordova/dist/ng-cordova.js",
    "www/lib/ngCordova/dist/ng-cordova-mocks.js",

    "test/**/*.test.js",
    "src/js/**/*.js"
  ]
}

Now we can run karma start to run the unit tests. We can also update our package.json to include a testing step. This is useful when using Travis for continuous builds.

{
  "scripts": {
    "test": "./node_modules/karma/bin/karma start --single-run --browsers PhantomJS"
  }
}

Running npm test will run the unit tests.
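
To give an idea of what such a test might look like, here is a minimal sketch against the starter app's Chats service; the module and service names are taken from the default tabs starter and may differ in your app.

// test/chats.service.test.js (a sketch; assumes the starter's Chats service exists)
describe('Chats service', function () {

  var Chats;

  // Load the module that contains the service.
  beforeEach(module('starter.services'));

  // Inject the service under test (angular-mocks provides module/inject).
  beforeEach(inject(function (_Chats_) {
    Chats = _Chats_;
  }));

  it('returns a non-empty list of chats', function () {
    expect(Chats.all().length).toBeGreaterThan(0);
  });
});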

Concatenating, uglifying and building our app

We could create an optimized app by putting everything in one file and reducing the number of requests (even if they are all local). To do this, we use a few gulp plugins.

First, we move all our source files into a new folder called src. The plan is to combine all these source files into a single app.js file that will go in the www folder. Here’s how we do it:

gulp.task("build", function() {
  return gulp
    .src("src/js/**/*.js")
    .pipe(sourcemaps.init())
    .pipe(
      ngAnnotate({
        single_quotes: true
      })
    )
    .pipe(concat("app.js"))
    .pipe(uglify())
    .pipe(sourcemaps.write())
    .pipe(header('window.VERSION = "<%= pkg.version %>";', { pkg: pkg }))
    .pipe(gulp.dest("www/dist"));
});

The gulp task is easy to read, but there are a few things worth pointing out:

  • The sourcemaps plugin. Here, we write the sourcemap to the same source file. This will allow us to debug the files more naturally in chrome developer tools, even though they are uglified and concatenated into a single file.
  • The ngAnnotate plugin. We use this so that Angular's shorthand dependency injection survives minification. E.g. app.controller('MyCtrl', function($scope){}); becomes app.controller('MyCtrl', ['$scope', function($scope){}]);
  • The header plugin. We use this to smartly insert the app’s version number inside the app itself. We talk about versioning later in this article.
  • Finally, we write to the www/dist folder. Putting the output into another folder allows us to modify .gitignore so we don’t version the generated files.

While we’re here, we can also modify the sass gulp task so it also goes into the www/dist folder.

gulp.task("sass", function(done) {
  gulp
    .src("src/scss/ionic.app.scss")
    .pipe(sass())
    .pipe(gulp.dest("./www/dist/css/"))
    .pipe(
      minifyCss({
        keepSpecialComments: 0
      })
    )
    .pipe(rename({ extname: ".min.css" }))
    .pipe(gulp.dest("./www/dist/css/"))
    .on("end", done);
});

Faster page changing using Angular’s $templateCache

When switching between 'pages' in a SPA, requests are made for new template files. If we move all these templates into a single file and pre-cache them, those requests can be avoided. There's a gulp task for that.

gulp.task("templates", function() {
  return gulp
    .src("src/templates/**/*.html")
    .pipe(
      templateCache("templates.js", { module: "starter", root: "templates/" })
    )
    .pipe(gulp.dest("www/dist"));
});

Making it play nice with ionic serve

Finally, let’s modify our ionic.project file to make use of the build steps.

{
  "gulpStartupTasks": ["default", "watch"]
}

This will run our gulp and gulp watch tasks which we create as follows, in our gulpfile.js:

gulp.task("default", ["sass", "templates", "build"]);

gulp.task("watch", function() {
  gulp.watch(paths.sass, ["sass"]);
  gulp.watch(paths.js, ["build"]);
  gulp.watch(paths.templates, ["templates"]);
});

Great, now every time we make a change, our build will be triggered and the page will automatically refresh!

Updating our app version

Finally, I noticed that every time I update my app version number, I have to update it in quite a few places: package.json, bower.json, config.xml, and anywhere I use it inside the actual app (e.g. in the 'about' page). It would be nicer to do this once. Luckily there's a gulp task for that, and it's really simple:

gulp.task("bump", require("gulp-cordova-bump"));

Now when I want to update my app version number, all I have to do is run one of the following:

$ gulp bump --patch
$ gulp bump --minor
$ gulp bump --major
$ gulp bump --setversion=2.1.0

And gulp will patch everything up. Oh, and remember that header step in the build task above? It takes the version number from our package.json and puts it into our compiled app.js as a global variable (window.VERSION), which we can now use in our app! To make it play nice with Angular, we can put it on our $rootScope so we can use it directly in our templates. We simply add the following line in our run block:

.run(function($ionicPlatform, $rootScope) {
  $rootScope.VERSION = window.VERSION;
  // ...
});

and we can use it in any template:

<div>
  App Version {{VERSION}}
</div>

Show me the code

Feel free to work with it on GitHub:
https://github.com/meltuhamy/ionic-base.

Credits

Post thumbnail and image are from the Ionic project.

Ionic Speed: Writing a prayer times smartphone app in a day

Update: I’m happy to announce the app has been released on the Google Play Store. Check it out!

Whilst I was in Belfast, I promised my sister I’d finally write an app for them to show our Mosque’s prayer times. Initially I planned to use this as a chance to learn some new tech such as native iOS and Android development, but unfortunately I kept procrastinating and didn’t manage to do it. With a couple of days remaining before leaving Belfast, I realised I had not fulfilled my promise, and decided: what’s the fastest way to write an application that works and looks nice - given my current skill set? The answer was obvious. Ionic Framework.

Ionic Framework was an obvious pick for the following reasons:

  • I already have experience writing Angular apps. I could jump right in and know what I’m doing.
  • The app I’m making is really simple, and I already have an idea of how it’ll work.
  • I want my app to work on iOS and Android, and I don’t have the time to write different apps for each platform. Ionic works this out for you ;)

So going ahead with Ionic, I set it up straight away.

Wait, what are you building again?

Muslims pray five times a day. The times of prayer are based on the position of the sun at your location. While you can work out the prayer times using some maths (and in fact there are hundreds of apps that already do this), there is added importance in praying at the same time as other Muslims in the area. So if everyone in the city used their own app to work the times out, each person would be praying at different times! The solution was to use the prayer times at the mosque that's closest to you. Fortunately, in Belfast there are only two mosques, and they use the same prayer timetable. So, I decided to write an app that uses the mosque's prayer times.

A screenshot of the app in action

Setting up

So many tech. So little effort to set up.

I followed the normal set up procedure without worrying too much about the details. It really is this easy:

npm install -g cordova ionic
ionic start belfastsalah tabs
cd belfastsalah
ionic platform add ios
ionic build ios
ionic emulate ios

App structure

By default, the Ionic app structure looked something like this:

  • app.js
  • controllers.js
  • services.js

And each file contained all the angular components of our app. I prefer structuring the app differently, so each service, controller, and so on is in its own file.

  • app.js: Contains config and module definitions.
  • constants/: Inside this folder, all our angular constants (in our case, our prayer time data) will go here.
  • controllers/: Each controller will live in its own file in this folder
  • filters/: same for filters
  • services/: and services

Now that we have a much better structure, we can think about the services, controllers, constants and filters we need. Luckily, I already had all the data for the prayer timetable. It's essentially a JSON array with an entry for each day of the year, so for 366 days we have 366 entries. Each entry is an array representing the different prayer times for that day. For example, here's what 1 January's prayer times look like:

["1","1","06:49","08:44","12:29","13:55",null,"16:11","18:00"]

This file would go in as an angular constant, so we can inject it as a dependency wherever we need it.

Now, we define our services. We need a service that keeps track of the time ticking; I called it the Ticker service. We also need a service that gets the required prayer times for a day or a month; I called it PrayerTimes. We define and implement a few methods: getByDate gets the prayer times for a given JavaScript Date object, getByMonth gets the prayer times for a whole month, and getNextPrayer gets the next and previous prayers and their times, given a JavaScript Date object.
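
As a rough sketch of how this might be wired up (the module name, constant name and lookup logic here are assumptions for illustration; only the method names come from the app, and lodash, which we add below, is assumed to be available):

// A sketch only; the real service lives in the app's services/ folder.
angular.module('belfastsalah.services', [])
  .constant('PRAYER_TIMES', [
    ["1","1","06:49","08:44","12:29","13:55",null,"16:11","18:00"]
    // ... one entry per day of the year
  ])
  .factory('PrayerTimes', function(PRAYER_TIMES){
    // Assumes each entry starts with the day and month, followed by the times.
    function getByDate(date){
      return _.find(PRAYER_TIMES, function(entry){
        return Number(entry[0]) === date.getDate() &&
               Number(entry[1]) === date.getMonth() + 1;
      });
    }

    function getByMonth(month){
      return _.filter(PRAYER_TIMES, function(entry){
        return Number(entry[1]) === month;
      });
    }

    return {getByDate: getByDate, getByMonth: getByMonth};
  });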

Finally, we define our controllers. We essentially define a controller for each tab. We therefore have three: Today, Month, and Settings. The Today controller would make use of the Ticker and PrayerTimes services to update the remaining time for the next prayer. The Month controller would simply display all the times for the current month, and the Settings controller is a work in progress, but would allow disabling and enabling app notifications (see conclusion).

Adding some goodies to help us out

To save time and effort, I used lodash, moment.js, and angular-moment to help with array searching, time calculations, and displaying times as strings in the view. To do this with Ionic, you can simply type ionic add <package name> and Ionic will take care of it (it uses Bower under the hood).

Conclusion and source code

Overall, it was incredibly easy and fast to get the app finished. In fact, Ionic also let me test the app without having to connect a device, using their Ionic View service. Really, the only thing left is easy deployment / integration with the Google Play Store / App Store, though I'm not sure if that's even possible.

There are a few things left, however. I didn’t get time to set up notifications for prayers, but this is certainly feasible using the localNotification cordova plugin. Hopefully I can get this done in a future version!

Check out the source code on GitHub.

Angular and Optimizely: A/B testing on SPAs

One of my first tasks at blinkbox books was to integrate Optimizely's powerful A/B testing service with our dynamic Angular JS single page web app. This blog post describes my experience and also shows how this can be done in a way that eliminates flashes of unstyled content (FOUC).

How Optimizely works

Optimizely's WYSIWYG editor (screenshot from optimizely.com)

Optimizely is an A/B testing service that allows creating one or more variations of a page and targeting those variations at a percentage of the page's visitors. The way it works is simple. Optimizely provides a WYSIWYG interface which allows a user (e.g. the marketing team) to make changes to a variation. The changes translate into some jQuery code. This code is embedded into a JavaScript snippet that is inserted into the page.
How Optimizely changes the page

Dynamically updating dynamic single page ecommerce websites

Optimizely works great if the page is rendered on the server, and user interactions are simply page changes. In other words, the jQuery snippet that changes the page would be applied once the page is loaded.

But what if the page loads a single page application (SPA)? In that case, Optimizely doesn't know when the page has changed, because that is controlled by JavaScript instead of page requests. This is explained in detail in the support pages. To solve this, Optimizely provides an API that can be used to manually tell Optimizely that a page has changed. We simply call optimizely.push('activate') to tell Optimizely to activate any experiments targeted at the current URL.
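
In an Angular app using ngRoute, the simplest version of this is a run block along the following lines (a sketch, assuming the Optimizely snippet is already on the page; the module name is hypothetical):

// Re-activate Optimizely experiments whenever the route changes.
angular.module('myApp').run(function ($rootScope, $window) {
  $rootScope.$on('$routeChangeSuccess', function () {
    $window.optimizely = $window.optimizely || [];
    $window.optimizely.push(['activate']);
  });
});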

However, SPAs aren't just about changing pages dynamically. They can involve very dynamic views and components that request content asynchronously. For example, if the user scrolls down, more items can be requested and displayed. To make use of Optimizely's WYSIWYG editor, we needed to support updates to any part of the page at any URL. That way we could hand Optimizely over to the marketing team without worrying about making code changes to get each experiment working. This post is an overview of what I did to get Optimizely to correctly apply modifications to any part of a SPA.

The naive solution: activate experiments when the page changes

There are a few events we could hook into to activate Optimizely experiments. Each of them fixes one problem but introduces another. Nevertheless, let's talk through each of them.

  • $routeChangeSuccess: The obvious one. Every time the page changes, we need to apply any experiments that might be associated with the new URL. We call optimizely.push('activate') and this will take care of everything for us. The problem with this is that there might be angular directives and embedded modules in the page which load dynamically. In this case, the Optimizely snippet would be applied, but no actual changes would be made because the required elements would not be loaded yet.

  • $includeContentLoaded: Handling dynamic templates. To fix the problem above, we could listen for this event and call optimizely.push('activate') every time it fires. This works, but only the first time a template is loaded. For example, if we load a template for one ‘tab’, then switch tabs, $includeContentLoaded will not fire the second time the user visits the first tab, as the template is already loaded and cached by Angular. The other problem is that directives with external templates will not trigger this event when their templates load.

  • $browser.notifyWhenNoOutstandingRequests: An estimate of when the page has finished ‘loading’. This private API is what is used by protractor for end-to-end tests. If we register a callback that activates Optimizely when the page has finished loading according to angular, this would apply any page modifications correctly to the page. However, the drawback is that it will take some time for the page to finish loading and so there will be a very obvious flash of un-styled content.

A more comprehensive solution: hook into the digest cycle.

What if we knew roughly when Angular has changed the DOM? We could then apply the Optimizely changes every time that happens. Luckily, we do know: the digest cycle. We can simply call the activate method every time Angular runs a digest cycle, and this is easily done by setting up an infinite watch, like below:

var force = true;
$rootScope.$watch(function () {
  setTimeout(function () {
    force = !force;
  });
  return force;
}, function () {
  $window.optimizely.push(['activate']);
});

I found that having this solution along with listening for the $routeChangeSuccess event worked best, but there are still some problems…

Applying an experiment multiple times.

One could ask: surely we're calling the Optimizely API dozens of times; is this wise? Well, it turns out that the Optimizely snippets are idempotent, meaning that calling them multiple times won't change the page multiple times.

Too many XHR requests

However, I did notice that several XHR requests ended up being made because of Optimizely's logging feature. This is bad. Every time we call optimizely.push('activate'), an XHR request is queued. This is not only bad network usage, it will also drain the battery and is just pure evil. We needed a workaround. It would be nice if Optimizely allowed us to disable logging for a single page but, until then, I implemented an incredibly hacky one. The solution: monkey-patch the XHR.

function patchXHR() {
  var originalOpen = window.XMLHttpRequest.prototype.open;
  var originalSend = window.XMLHttpRequest.prototype.send;

  // Note: fromDigest is a flag defined elsewhere in the service,
  // presumably set when an activation is triggered from the digest cycle.
  var prevUrl;
  window.XMLHttpRequest.prototype.open = function (type, uri) {
    if (uri.lastIndexOf('log.optimizely.com') >= 0 && fromDigest){
      // we set this up in order to intercept the request in the 'send' function.
      this._requestURI = uri;
      fromDigest = false;
    }
    originalOpen.apply(this, arguments);
  };

  window.XMLHttpRequest.prototype.send = function () {
    if (typeof this._requestURI === 'string'
        && this._requestURI.lastIndexOf('log.optimizely.com') >= 0) {
      var currentUrl = $location.path();
      if (currentUrl === prevUrl) {
        // we prevent the request if this was the same page
        this.abort();
      } else {
        // we allow requests on actual page changes.
        prevUrl = currentUrl;
        originalSend.apply(this, arguments);
      }
    } else {
      // we allow all non-optimizely requests
      originalSend.apply(this, arguments);
    }
  };
}

This just about solves the requests problem, but there are still some other problems…

Undoing an experiment

What if we had an experiment that removed the navigation bar, or the footer, or any other common element in our website? For example, what if we wanted to remove the page footer but only in the ‘About Us’ page, and not any other pages? Well, because we’re using a modular single page application, once we remove the element (using the Optimizely snippet), it’ll be removed from our whole application until the user reloads! This is because common elements like the page header and footer don’t change between page changes. All the routing is done in JavaScript, remember?

Unfortunately, there wasn't an elegant way to fix this. We decided to simply not allow these types of changes unless they are app-wide. It would have been nice if there was a way of undoing a snippet, but that would definitely be a challenge (e.g. if you remove an element rather than hide it, then re-insert it, how do you ensure Angular still knows about it? And what about memory and performance?).

Conclusion

Getting third party services to work with a SPA is hard. Optimizely should invest engineering effort in making it work with at least the most popular frameworks such as Angular, or at least provide finer-grained APIs to let users integrate manually.

While the solution of using Angular’s digest cycle worked, it isn’t great and smells like a hack. There needs to be a better way of applying A/B testing experiments on a single page app, and this would require a lot of thought.

Having said that, for very simple web apps, the approach described above would probably be overkill. In fact, for most simple applications, using the $routeChangeSuccess approach would work just fine. However, if your app is dynamic and has many components and directives which are also dynamic, getting Optimizely to work will need a bit more hacking - and this article was supposed to be an overview of what we did at blinkbox to get it to work.

Show me the code

To do all of the above (and a bit more), I wrote an Angular service for the blinkbox books application. You can see the service on GitHub. I’m also thinking about taking this out and making it its own service so anyone can drop it into their app.

Why I bought the Alfred Powerpack

If you haven’t heard of Alfred, check it out. It’s basically a Spotlight replacement with a lot of power.

If you use Alfred but aren't sure whether you should buy the Powerpack, check out some of the cool things you can do with it in this GitHub repo.

Alfred is probably my #1 productivity tool on the Mac. Anything is literally a few keystrokes away.

Alfred also integrates with Dash. Dash is an offline, quick, searchable documentation tool. Combine Dash with Alfred, and you get documentation for your favourite language or library literally in a few keystrokes.

Instant documentation

Dash isn’t free (though there is a free trial), but if you’re willing to invest in some great developer tools that would save you time, this combo works great.

Finally, one more thing I love about Alfred: opening Google Docs. If you install Google Drive, your Google Docs, Sheets, and Presentations will be downloaded and synced. Then you can open any Google Doc using Alfred (type ‘open’ followed by the filename), and it will open in the browser. I found this really quick and useful because it means I don't have to open a browser, go to Google Drive, and search for the file.

I was actually wondering if anyone has done a Google Drive Workflow for Alfred that would allow this functionality without having to download and sync all your files using the Google Drive app. It turns out no one has done it. I might take a look and see if I can integrate some of Google Drive’s APIs with Alfred Workflows to see if this is possible.

TIL How a Java Debugger Works

I’m working with my friend on a project to implement a web based debugger for Java projects. Today I learned all about the Java Debug Interface (JDI, which I like to pronounce as ‘Jedi’).

It's essentially an event-driven request/response API that supports all the features a debugger needs: step over, step into, breakpoints, stack inspection, and so on. For example, say we want to set a breakpoint on a certain line. First we load the target class, then we create a breakpoint request and wait for the breakpoint event that fires when the target program hits that line. At that point, we can look at the stack and inspect variables.

Add some websocket wizardry, and you can hook it up with a web application.

If you’re interested in the details, head over to the GitHub project https://github.com/jameslawson/webjdb.

Why a web based Java debugger? Aren’t you reinventing the wheel?

The idea isn’t to bring a fully-fledged code editor to the web. Plenty of those exist. Instead, the idea is to quickly debug an existing Java project in the browser. This simplifies the task of the web app - we don’t care about writing code, we just care about a simple and purpose-built debugging experience. It’s also useful for people who use Vim or Sublime instead of an IDE. Finally, at the moment this is just a proof-of-concept experiment. We’ll see how it goes.

Setting up my dev environment on a new Mac

Here’s what I do when I get a new Mac / reinstall OSX.

The absolute essentials

  1. Download chrome. Already, everything is synced. Awesome.
  2. Download Alfred. This is my go-to tool for opening just about anything. It’s so good I actually bought the powerpack.
  3. Download Spectacle for easy and powerful window management.
  4. Download iTerm. This is my preferred terminal environment.
  5. Set up a global keyboard shortcut for opening iTerm.
  6. Install homebrew. Once we have this, we have everything. When installing homebrew, it will also install the Apple Command Line Developer Tools. Yay.
  7. brew install all the things. I normally brew install node first as I am a self-proclaimed JavaScript fanboy.
  8. Install Sublime Text 3. Unless you’re a vim wizard. In that case ignore steps 1-7. Vim will suffice.

Make the terminal awesome

Dat terminal do

  1. Download oh-my-zsh. I can’t be bothered explaining the benefits of zsh over bash but you’ll feel the power of zsh as soon as you start using it.
  2. Set up zsh-syntax-highlighting. It gives instant visual feedback to tell you whether what you're about to execute is correct. e.g. If you type the command “echo” incorrectly, it will show in red. If you type it correctly, it will show in green (like the above screenshot), before you actually execute the command.
  3. Download tomorrow-night-eighties theme for iTerm2. To do this, save this file as ‘Tomorrow Night Eighties.itermcolors’ and open it. iTerm2 will import it. Then, choose it in iTerm > Preferences > Profiles > Default > Colors > Load Presets…
  4. Set up pure prompt. This will add stuff like nicer git integration, timing functions (see screenshot above), and other neat tricks in your terminal. To do this, save this file as ‘pure.zsh’. Then run:
    mkdir ~/.oh-my-zsh/functions
    ln -s /path/to/pure.zsh ~/.oh-my-zsh/functions/prompt_pure_setup
  5. Restart iTerm.
  6. At this point, I like to set up my zshrc aliases. Sublime Text is an important one. Add this to the end of your .zshrc file:
    alias subl="'/Applications/Sublime Text.app/Contents/SharedSupport/bin/subl'"
    Cool, you can now edit files in your terminal using the subl keyword! (Try it on folders too!)

Other stuff

Set up your ssh keys:

  1. Open a terminal and type ssh-keygen
  2. Repeatedly press enter (feel free to give a password if you want)
  3. Copy your public key and put it into your github account.
    cat ~/.ssh/id_rsa.pub | pbcopy

Set your git user name and email:

git config --global user.name "Your Name"
git config --global user.email you@example.com

Set up git lg alias, a better git log:

git config --global alias.lg "log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit"

JIT App Installations.

A lot of people install all the apps they could possibly need in the future right after they do a fresh install. I used to do this. Then I realised the majority of this time is time wasted. A better approach is to install apps on the fly when you need them. This will save you time, and also some precious disk space! Package and app managers (like npm or the App Store) make installations really quick and easy. So install the essentials and forget the rest!
