Category Archives: Speed

For us, the largest benefit of Javascript templating is reduced size

There are quite a few Javascript templating libraries. In my projects, however, there are very few cases where I would prefer using any of them in place of regular HTML pushed out from the server (running Ruby-on-Rails). The same can be said of the Ponzu conference system.

As far as I understand, the benefits of using Javascript templates are 1) reduced load on the server (generating JSON is less work than generating full HTML), and 2) speed, when used in combination with a single-page design.

The downside is the additional work that browsers have to do, which can be a problem on mobile where the devices are not as powerful as their desktop counterparts.

I’ve touched on this subject before in these two posts: [1](https://code.castle104.com/?p=289), [2](https://code.castle104.com/?p=291).

As discussed by David Heinemeier Hansson, the same benefits can be achieved without Javascript templates by using a PJAX/Turbolinks/Kamishibai-like system, which eliminates reloading Javascript and CSS on each page transition, combined with aggressive caching on the server side to reduce the cost of HTML generation.

There is one real case, however, where I feel a strong need for a Javascript templating language.

That is when I try to cache responses in the browser-side cache. The issue is that HTML is extremely verbose, and it is a real killer in terms of storage consumption when you are working with repetitive content.

For example, the following is a “social box” that we use in Ponzu for the like button and a voting button. It takes about 2,000 bytes. Each social box is associated with a presentation, so we have hundreds to thousands of social boxes for each conference. This can easily fill up the limited browser-side cache.

<div class="" id="presentation_326_social_box"><div class='like_box'>
<div class='like' id='like_button_326'>
<span class='social_controls'>
<!-- To invalidate Like related paths, we need a like object -->
<a href="/likes?like%5Bpresentation_id%5D=326" class="button icon like" rel="nofollow">like</a>
<div class='prompt_message'>
(
To add to your schedule, please &quot;like&quot; it first.
)
</div>
</span>
<div class='social_stats'>
<img alt="Like" src="/assets/like-c3719a03fc7b33c23fda846c3ccfb175.png" title="いいね!を押すと、応援メッセージになります。またあなたのスケジュールに登録されます。" />
<a href="/presentations/326/likes?user=1">15 people</a>
liked this.
</div>
<div class='likes_list' id='likes_presentation_326'></div>
</div>

</div>
<div class='vote_box'>
<div class='like' id='vote_button_326'>
<span class='social_controls'>
<form accept-charset="UTF-8" action="/likes/vote" class="new_like" id="presentation_326_new_like" method="post"><div style="margin:0;padding:0"></div>


<span>

<label for="presentation_326_like_score_1">Excellent!</label>
</span>
<span>

<label for="presentation_326_like_score_2">Unique!</label>
</span>
<span>

<label for="presentation_326_like_score_0">No Vote</label>
</span>
</form>
</span>
</div>

</div>
</div><div id="session_details_presentation_326"></div>

Most of this content is repetitive and will be identical for each “social_box”. In fact, the content that is unique to each individual social box can be summarized in the following JSON.

[{
    "presentation_id": 326,
    "user_id": 1,
    "score": 0,
    "liked": 0,
    "scheduled": 0
}]

If we could use Javascript templating to generate the 2,000-byte HTML from this small piece of JSON, the savings in local storage would be huge.
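
As a rough illustration, the sketch below regenerates a (much simplified) social box from that JSON using plain string concatenation; a real implementation would use a proper templating library such as Mustache or Handlebars, and the markup, element ids and container id here are made up, not the actual Ponzu output.

    // Hypothetical sketch: rebuild the social box HTML on the client
    // from the compact JSON record. Markup is heavily simplified and
    // the container id is made up; this is not the actual Ponzu code.
    function renderSocialBox(record) {
      var id = record.presentation_id;
      var likeLabel = record.liked ? 'unlike' : 'like';
      return '<div id="presentation_' + id + '_social_box">' +
               '<div class="like_box">' +
                 '<a href="/likes?like%5Bpresentation_id%5D=' + id + '"' +
                 ' class="button icon ' + likeLabel + '" rel="nofollow">' +
                 likeLabel + '</a>' +
               '</div>' +
             '</div>';
    }

    // Only the ~60-byte JSON record needs to be stored locally;
    // the verbose HTML is regenerated on demand.
    var html = renderSocialBox({
      presentation_id: 326, user_id: 1, score: 0, liked: 0, scheduled: 0
    });
    document.getElementById('social_box_container').innerHTML = html;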

This is one feature that we will be adding to Kamishibai and Ponzu in the near future, to enable the ultimate goal of complete offline viewing.

On HTTP caching

Kamishibai provides support for storing Ajax responses on the client using either localStorage or WebSQL (IndexedDB support is planned for the future). This lets us dramatically speed up page loads by retrieving responses from local storage instead of sending requests out to the server. It also allows us to provide offline access to pages.
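
To give a rough idea of the approach, here is a simplified sketch of caching an Ajax response in localStorage. This is not the actual Kamishibai code; the key prefix and function name are made up for illustration.

    // Simplified sketch: serve a response from localStorage when we
    // have it, otherwise fetch it and store it for next time.
    function cachedGet(url, callback) {
      var key = 'kamishibai:' + url;   // illustrative key scheme
      var cached = localStorage.getItem(key);
      if (cached !== null) {
        callback(cached);              // no network round-trip
        return;
      }
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url);
      xhr.onload = function () {
        if (xhr.status === 200) {
          try {
            localStorage.setItem(key, xhr.responseText);
          } catch (e) {
            // quota exceeded; localStorage is small, so skip caching
          }
          callback(xhr.responseText);
        }
      };
      xhr.send();
    }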

HTTP itself provides the HTTP cache protocol which allows the server to control browser cache through the “Cache-Control”, “Expires”, “Last-Modified”, “If-Modified-Since”, “ETag” and “If-None-Match” HTTP headers. Ruby-on-Rails also provides methods that allow us to easily manage these headers.

The question is: why did we create our own caching mechanism for Kamishibai instead of using HTTP cache? In the following, I hope to provide the answer.

Many Ajax requests are Private Content

The following is an excerpt from an article on HTTP caching, describing its use for private content.

Private content (ie. that which can be considered sensitive and subject to security measures) requires even more assessment. Not only do you as the developer need to determine the cacheability of a particular resource, but you also need to consider the impact of having intermediary caches (such as web proxies) caching the files which may be outside of the users control. If in doubt, it is a safe option to not cache these items at all.

Should end-client caching still be desirable you can ask for resources to only be cached privately (i.e only within the end-user’s browser cache):

In Ponzu, our scientific conference information system with social network features, a lot of the content is “Private content”. For example, we generally only show the abstract text to conference participants (non-participants can only view the presentation titles and authors). Hence Ajax requests for presentation detail pages cannot be handled with HTTP cache.

URLs alone are not sufficient as the cache key

HTTP caching uses only the URL as the cache key. (The Vary header can add request headers to the key, but varying on Cookie is impractical because cookie values change so frequently.) So if the content changes depending on values in cookies, HTTP caching effectively doesn’t work.

In Ponzu, however, we use cookies to store the current user id as well as the locale. We display slightly different content depending on the privileges of each user, and we also provide different translations, all while keeping the URL the same. Keeping the URL the same is important to maximize social sharing.

Hence in Ponzu, URLs alone are not sufficient as the cache key.
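
With a browser-side cache we can build the key ourselves. A minimal sketch (the key format is purely illustrative):

    // The same URL can map to different responses depending on the
    // signed-in user and the locale, so both go into the cache key.
    function cacheKey(url, userId, locale) {
      return [url, 'u' + userId, locale].join('|');
    }

    cacheKey('/presentations/326', 1, 'ja');
    // => "/presentations/326|u1|ja"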

Flexible purging of HTTP cache is not possible

HTTP cache does not allow flexible purging. For example, if you set “max-age” to a large value (e.g. a few days), you cannot touch the browser’s cached copy until it has expired. If you suddenly need to put up an emergency notification, you can’t do it. You have to wait until the cache expires, whatever happens.

With Ponzu, we want to set very long cache expiry times to make browsing as fast as possible. On the other hand, we want to be able to flush the cache when an emergency arises. An emergency might be an urgent notification, but it might also be a bug that needs an immediate fix.

Hence HTTP cache is not particularly suitable, and we would not want to set long expiry times with it unless we were extra sure that the content would not change.
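
With a browser-side cache under our own control, purging can be as simple as comparing a version token that the server embeds in each page. The following is a rough sketch; the token name, key prefix and storage scheme are assumptions for illustration, not the actual Kamishibai mechanism.

    // If the server-supplied cache version changes (for example after
    // an emergency fix), throw away every cached entry at once.
    function purgeIfStale(serverVersion) {
      if (localStorage.getItem('cacheVersion') === serverVersion) return;
      for (var i = localStorage.length - 1; i >= 0; i--) {
        var key = localStorage.key(i);
        if (key.indexOf('kamishibai:') === 0) {   // illustrative prefix
          localStorage.removeItem(key);
        }
      }
      localStorage.setItem('cacheVersion', serverVersion);
    }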

Summary

As we can see, HTTP cache is not suitable for the majority of Ajax requests (HTML requests) in Ponzu. Although we use it to serve assets through the Ruby-on-Rails asset pipeline, we don’t use it for dynamically created content at all. Instead, we use Kamishibai caching, which provides more flexibility.

On JavaScript MVC (part 2)

In Kamishibai and Ponzu, speed is a huge concern. This is particularly true for smartphones. On desktop websites, it is possible to cram a lot of information and navigation elements into a single page to make up for slow page loading. You can see this on news websites like Asahi.com, where 80% of the top page consists of navigation and shortcuts. The idea is that instead of asking the user to click a link and load a new page (which is slow), the user can simply scroll down to see more content. With smartphones, cramming in all this information is simply not a good idea, so we have to reload pages.

One way to reduce page load times and to update only the parts you want is to use Javascript MVC, or client-side MVC. With client-side MVC, pages are not reloaded as the content is switched. Instead of sending HTML pages, the server sends JSON data to the browser, and browser-side Javascript constructs the DOM from that JSON. The advantage is that the client does not have to reload the whole page, and that it can intelligently update only the portions of the DOM that need to be redrawn.

In Kamishibai and Ponzu, we seriously contemplated using these Javascript MVC frameworks. However, we decided not to do so. Instead, our approach is similar to how GitHub handles updates with PJAX, and with how the new Basecamp uses Turbolinks. David Heinemeier Hansson gave a presentation describing why they did not use Javascript MVC extensively and a video is on YouTube.

There are other highly respected programmers who question the use of Javascript MVC. Thomas Fuchs, the author of script.aculo.us and Zepto.js, has this to say in his blog post “Client-side MVC is not a silver bullet” and in his comments on a post about one of his projects, Charm.

I’ve come to the realization that this much client-side processing and decoupling is detrimental to both the speed of development, and application performance (a ton of JavaScript has to be loaded and evaluated each time you fire up the app). It’s better to let the server handle HTML rendering and minimize the use of JavaScript on the client. You can still have fast and highly interactive applications, as the new Basecamp shows—letting the server handle most stuff doesn’t mean that you have to cut back on cool front-end features and user friendliness.

We’ve spend a lot of time getting Backbone to work properly, as the easy-of-use quickly deteriorates when your models get more complex. It’s a great choice for simple stuff, but email is far from simple. We also had to add yet an other extra layer of processing to generate “ViewModels” on the server because the normal Rails serialization of objects wouldn’t cut it.

If you do any non-trivial resources, you’ll quickly end up with JSON objects that are just too large, especially for lists. For emails, imagine that you have nested threads, user avatar images, nested assigned cases, etc.
Because of this, you’ll need specialized JSON objects/arrays for different use cases (search, list view, detail view, and others). It follows that you’ll end up with this with more or less any front-end framework (if you care about performance!). Doing this adds complexity, which can be avoided by rendering HTML on the server where access to arbitrarily deeply nested data is relatively cheap (and can be highly optimized by keeping snippets of HTML around in memcache, etc).

In Ponzu and Kamishibai, we ended up following the traditional Rails route, enhanced with techniques that are seen in PJAX and Turbolinks.

What do we want to achieve?

For us, the reasons for contemplating the use of a Javascript MVC framework were;

  1. Speed
  2. Use without network connection (offline apps)

A common benefit of Javascript MVC is interactivity, but this was not a concern for the types of web applications we had in mind, which tend to be very read-heavy.

David Heinemeier Hansson has written a detailed post on how the new Basecamp dramatically increased performance through PJAX-like techniques and extensive server-side caching.

This technique basically nullifies the first reason, because it makes it very easy to increase performance. Furthermore, the code will be almost the same as traditional Rails, and hence very simple.

The second issue, offline usage, is something that even most Javascript MVC frameworks do not support very well. This is even more so when the data gets complex. Hence this in itself is not a compelling reason to go Javascript MVC; we would still have to figure out how to do offline usage effectively ourselves.

Why do people use Javascript MVC?

I came across this blog post describing why they used a Javascript MVC framework (Ember.js) instead of simple jQuery.

The author, Robin Ward, gives a specific example.

For example, on the bottom of every discourse post there is a button a user can click to like a post. When clicked, it vanishes and adds a footer below the post saying you liked it.

If you implementing this in jQuery, you might add a data-post-id to the post. Then you’d bind a click event on your button element to a function that would make the AJAX call to the server. However, the click function passes a reference to the button, not the post. So you then have to traverse the DOM upwards to find the post the button belongs to and grab the id from there. Once you have it, you can make your XHR request. If the XHR succeeds, you then have to traverse the DOM downward from the post to the footer, and add in the text.

At this point it works, but you’ve tied your implementation of the button click to a particular DOM structure. If you ever want to change your HTML around, you might have to adjust all the jQuery methods that accessed it.

If this example seems simple – consider that in the footer we offer you a link to undo your like. When clicked, the footer text vanishes and the button appears again. Now you’re implementing the opposite operation against the DOM, only in reverse of what you did before.

Discourse even takes it a step further – we know that 99% of the time when you click the like button the request is going to work, so we hide the button and show the footer text right away, even before waiting for the server to reply. In the infrequent event that request fails, we’ll show an error message and pop the UI back to the state it was in before. If we were doing that in jQuery, we’d have to have a callback on our AJAX request that knew how to put the UI back into the state it was in before.

A prudent programmer might say, okay, well I’ll have a render function that can rebuild the DOM to the current UI state of whether the post is liked or not. Then both ‘undo’ and ‘like’ can call it. If ‘like’ fails it can call it again. Oh, and we have to store the current state of whether the post is liked somewhere. So maybe we add another data-liked=”true” attribute. ACK! Just typing this all out is giving me a headache!. Congratulations, your code is now spaghetti, your data is strewn out in the DOM and your logic is tied to a particular layout of HTML elements.

Although I understand Robin’s point, and I have also experienced frustration when we want to update multiple elements in separate locations, I tend to think that using a full-blown Javascript MVC framework is overkill. No doubt DHH and Thomas Fuchs would instead send Javascript in the response to do the complex updates.
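
For example, the Ajax “like” request could simply return a snippet of Javascript that the browser evaluates, roughly like this (a sketch of the RJS-style idea, not code from Discourse or Ponzu; the element ids are made up):

    // RJS-style response for "like": the server, which already knows
    // the post id and the new state, sends back the Javascript to run.
    document.getElementById('like_button_326').style.display = 'none';
    document.getElementById('like_footer_326').textContent =
      'You liked this post.';

The knowledge of which elements to touch stays on the server, next to the templates that generated them in the first place.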

In fact, it is pretty difficult to find an Ember.js example on the web that does stuff that is complex enough that simple Javascript would not cut it.

A more intelligent RJS

Given that most of the use-cases of Ember.js and the Javascript MVC frameworks seem to be things that could be done with regular Javascript, even if a bit complex, the more pragmatic approach, in my opinion, is to create a small library that makes updating the DOM through RJS-like methods simpler. It could also manage caching, expiration and updating of the server responses, so that subsequent requests do not have to go out to the network.
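
A minimal sketch of what such a helper might look like (the instruction format is hypothetical and meant only to convey the idea, not the Kamishibai API): the server returns a list of declarative update instructions, and a small client-side function applies them.

    // Hypothetical helper: apply a list of DOM update instructions
    // sent by the server, e.g.
    //   [{"selector": "#like_button_326", "action": "hide"},
    //    {"selector": "#likes_presentation_326", "action": "html",
    //     "content": "16 people liked this."}]
    function applyUpdates(instructions) {
      instructions.forEach(function (ins) {
        var el = document.querySelector(ins.selector);
        if (!el) return;
        switch (ins.action) {
          case 'hide': el.style.display = 'none';  break;
          case 'show': el.style.display = '';      break;
          case 'html': el.innerHTML = ins.content; break;
        }
      });
    }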

This is the approach that Kamishibai intends to pursue.

Furthermore, Kamishibai even handles the two-pane interface that is demonstrated in this Ember.js guide. This is possible because we have included a lot of “intelligence” in the library, whereas with the simple RJS approach, there is no library in which complicated logic can be stored for reuse.

More on this later when we open up the code.

On Javascript MVC

David Heinemeier Hansson gave a presentation on Backbone.js, or rather on how they are not using it much. The video is on YouTube.

He mentions how he is using PJAX (or actually Turbolinks) and extensive caching on the server side to make the new Basecamp just as responsive as a Javascript MVC application would be.

This is very similar to the Kamishibai approach.

There are a few things that I’m wondering about, and this post is a memo about these.

Why are Javascript MVC applications responsive?

Possibilities;

  1. Because all the Javascript and CSS files are not being reloaded and re-parsed.
  2. Because the whole screen is not being redrawn. Only a small portion is being rendered.
  3. The server is much faster at returning JSON responses compared to rendering HTML.

The first possibility is addressed by the PJAX-like approach.

The second possibility is not addressed by PJAX itself, because PJAX renders the whole page. Kamishibai, on the other hand, can render segments of pages, so it does address this.

As for the third possibility, I find it hard to believe that the server could be slower than the Javascript client at rendering HTML (or the DOM). Of course, Rails caching can solve this problem and all is well, but I wonder how a client, especially a mobile client, can be that fast. Maybe the issue is that desktop view templates tend to be too complicated.

Analyzing script evaluation speed of jQuery & jQuery Mobile on the iPhone

In a previous post, I discussed how Javascript can slow down website loading speed significantly, and why Kamishibai and Ponzu aim to keep the size of Javascript small.

Today, I used the Safari web inspector to analyze how long it actually takes to evaluate jQuery Mobile, a common framework used for mobile web development.

For evaluation, we used the jQuery Mobile demo page for version 1.2.1.
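
The web inspector reports these evaluation times directly. For reference, roughly the same measurement can be approximated in code; the sketch below is hypothetical (the paths are made up, and timing eval() is only an approximation of what the inspector reports), and is not how the numbers below were obtained.

    // Fetch a script synchronously and time how long it takes to
    // parse and evaluate it. Only a rough approximation.
    function timeEvaluation(url) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url, false);   // synchronous, for measurement only
      xhr.send();
      var start = Date.now();
      eval(xhr.responseText);        // parse + evaluate
      return Date.now() - start;
    }

    timeEvaluation('/js/jquery-1.7.1.min.js');     // jQuery must come first
    timeEvaluation('/js/jquery.mobile-1.2.1.js');  // then jQuery Mobile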

Below are the results. We evaluated, in order, my MacBook Air with a 1.7GHz Intel Core i5 (Mac OS X 10.8.3), my iPhone 5 (iOS 6.1.3) and my iPhone 4S (iOS 6.1.3).

The numbers that we are interested in are the times that it takes to evaluate Javascript.

MacBook Air 1.7GHz Core i5: Time taken to evaluate Javascript

  1. jquery-1.7.1.min.js – 10.7ms
  2. jquery.mobile-1.2.1.js – 31.5ms

iPhone 5: Time taken to evaluate Javascript

  1. jquery-1.7.1.min.js – 41.7ms
  2. jquery.mobile-1.2.1.js – 144ms

iPhone 4s: Time taken to evaluate Javascript

  1. jquery-1.7.1.min.js – 65.7ms
  2. jquery.mobile-1.2.1.js – 238ms

Conclusion

The total latency of evaluating jQuery Mobile (jquery.min.js + jquery.mobile.js) was 42.2ms for the MBA, 186.1ms for the iPhone 5, and 303.7ms for the iPhone 4s.

Network latencies for a good broadband WiFi connection are about 50ms. For a 3G connection, they are a few hundred ms.

Server latencies depend a lot on the complexity of the page you wish to display, and can be anywhere from a few ms to a few seconds. We generally try to keep server latency under 300ms, and for most pages, under 100ms.

Whereas latencies of 100ms appear almost instantaneous to humans, response times of more than 300ms are very noticeable and negatively impact perceived responsiveness.

Given these numbers, and considering that the iPhone 4s is by no means a slow phone, we conclude that jQuery Mobile is still too slow. For the evaluation time of jQuery Mobile to become negligible, we would need at least twice the speed of the current iPhone 5. Given the performance increase with each iPhone model, we might see sufficient speed with the 2013 iPhone model. However, for such performance to become mainstream, we will probably have to wait about three years, especially considering that Android Javascript performance tends to lag significantly behind mobile Safari.

The need for speed (how Javascript slows things down)

Speed is a very important but often overlooked feature.

Regarding speed on mobile websites, there are several things to consider.

  • Web sites can be very fast to load.
  • Overuse of JavaScript can make mobile websites very slow, even if the resources are cached.
  • jQuery significantly slows down websites, just by being loaded and compiled.
  • Responsive web design is not the same as optimizing for mobile.

Discussion on the web

A detailed discussion about how jQuery can slow down websites was written by Martin Sutherland. I have confirmed this to be true, and this was the first reason I abandoned jQuery Mobile for use in Ponzu (there were many more important reasons, but this was the first show-stopper).

Arguments against responsive web design were put forward by Jason Zimdars of 37 Signals (Behind the speed: Basecamp mobile).

In Ponzu

In Ponzu, we ended up taking a similar route as Basecamp mobile by 37 Signals. Our setup is basically the same except that our Kamishibai Javascript library is much more complicated (but still much, much smaller than jQuery).

With Basecamp mobile, the resource sizes are as follows;

PC HTML: 42.92KB, CSS: 515.79KB, JS: 273.80KB
Mobile HTML: 16.90KB, CSS: 38.91KB, JS: 14.73KB

You can see that they cut down on the size of JS and CSS immensely. Since the browser has much less CSS and JS to parse and compile, the browser responds much faster after a reload (even when CSS and JS are cached).

With the MBSJ2012 Ponzu system, the sizes are the following;

PC HTML: 12.20 + 13.61 + 3.09 = 28.90KB, CSS: 29.30KB, JS: 41.13KB
Mobile HTML: 13.62 + 12.69 = 26.31KB, CSS: 18.19KB, JS: 40.96KB

Both Basecamp mobile and Ponzu keep the CSS and JS sizes very small.

Compare this to jQuery Mobile;

Mobile HTML: 2.73KB, CSS: 107.31KB, JS: 91.67 + 286.65 = 378.32KB

The difference in JavaScript size is staggering.

Where you can see the difference

As mentioned in the articles that I linked to above, the difference in speed becomes apparent when the browser reads in the Javascript and parses it. Caching the response does not help.

When clicking on external links

Whenever the browser performs a full page load to a new URL, it throws away the current Javascript and CSS environment. Therefore even CSS and JS assets that are cached have to be re-parsed and re-evaluated. This can be prevented if pages are loaded via Ajax: with Ajax, we keep using the same, already-parsed CSS and Javascript.
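
A minimal sketch of this PJAX-style idea (greatly simplified compared to Turbolinks or Kamishibai; the container id and the assumption that the server returns just a body fragment for Ajax requests are made up for illustration):

    // Intercept same-site link clicks, fetch the new content over
    // Ajax and swap only the page body, so the already-parsed CSS
    // and Javascript survive the transition.
    document.addEventListener('click', function (event) {
      var link = event.target;
      if (link.tagName !== 'A' || link.host !== location.host) return;
      event.preventDefault();
      var xhr = new XMLHttpRequest();
      xhr.open('GET', link.href);
      xhr.onload = function () {
        document.getElementById('main').innerHTML = xhr.responseText;
        history.pushState({}, '', link.href);
      };
      xhr.send();
    });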

jQuery Mobile mainly uses Ajax when links are clicked. Therefore, once you have loaded a jQuery Mobile page, you won’t notice slowness as long as you stay inside the same site. You will, however, notice slowness if you click a link to an external site (a non-Ajax transition) and then come back. Coming back requires all the Javascript to be reloaded and re-parsed.

In other words, jQuery Mobile is OK if you never leave the site. It gets terribly slow if you move in and out.

When you want to quickly look up something

When you are at the conference venue and want to check your schedule, you want the page to show up fast. You don’t want to wait for seconds until your schedule appears.

You want the first load to be fast. jQuery Mobile won’t provide that.

When you click on a link on Twitter

The great thing about using the Web for the conference system, as opposed to a native application, is that you can link to individual pages. For example, you could put a link on Twitter, telling the world how cool your presentation is and why they must come and see it. Anybody seeing that tweet can simply click on the link and immediately see your abstract.

With jQuery Mobile, however, it’s going to take a few seconds for the page to show up. Some people might decide that they don’t have enough time to wait, and give up.

Conclusion

In Ponzu, we take speed very seriously. Without it, many of the benefits of the system would be diminished. One of our strategies for achieving this goal was to keep our Javascript small. Other strategies, including caching, will be discussed later.