For us, the largest benefit of Javascript templating is reduced size

There are quite a few Javascript templating libraries. In my projects however, there are very few cases where I would prefer using any of them in place of regular HTML pushed out from the server (running Ruby-on-Rails). The same can be said of the Ponzu conference system.

As far as I understand, the benefits of using Javascript templates are 1) reducing the load on the server (generating JSON is cheaper than generating full HTML), and 2) speed, when used in combination with a single-page design.

The downside is the additional work that browsers have to do, which can be a problem on mobile where the devices are not as powerful as their desktop counterparts.

I’ve touched this subject before in these two posts [1](https://code.castle104.com/?p=289), [2](https://code.castle104.com/?p=291).

As discussed by David Heinemeier Hansson, the same benefits can be achieved without Javascript templates by using a PJAX/Turbolinks/Kamishibai-like system that eliminates reloading Javascript and CSS on each page transition, combined with aggressive caching on the server side to reduce the load of HTML generation.

There is one real case, however, where I feel a strong need for a Javascript templating language.

That is when I try to cache responses in the browser-side cache. The issue is that HTML is extremely verbose, which is a real killer in terms of storage consumption when you are working with repetitive content.

For example, the following is a “social box” that we use in Ponzu for the like button and a voting button. It takes about 2,000 bytes. Each social box is associated with a presentation, so we have hundreds to thousands of social boxes for each conference. This can easily fill up the limited browser-side cache.

<div class="" id="presentation_326_social_box"><div class='like_box'>
<div class='like' id='like_button_326'>
<span class='social_controls'>
<!-- To invalidate Like related paths, we need a like object -->
<a href="/likes?like%5Bpresentation_id%5D=326" class="button icon like" rel="nofollow">like</a>
<div class='prompt_message'>
(
To add to your schedule, please &quot;like&quot; it first.
)
</div>
</span>
<div class='social_stats'>
<img alt="Like" src="/assets/like-c3719a03fc7b33c23fda846c3ccfb175.png" title="いいね!を押すと、応援メッセージになります。またあなたのスケジュールに登録されます。" />
<a href="/presentations/326/likes?user=1">15 people</a>
liked this.
</div>
<div class='likes_list' id='likes_presentation_326'></div>
</div>

</div>
<div class='vote_box'>
<div class='like' id='vote_button_326'>
<span class='social_controls'>
<form accept-charset="UTF-8" action="/likes/vote" class="new_like" id="presentation_326_new_like" method="post"><div style="margin:0;padding:0"><!-- hidden utf8 and authenticity_token inputs omitted --></div>
<span>
<input id="presentation_326_like_score_1" name="like[score]" type="radio" value="1" />
<label for="presentation_326_like_score_1">Excellent!</label>
</span>
<span>
<input id="presentation_326_like_score_2" name="like[score]" type="radio" value="2" />
<label for="presentation_326_like_score_2">Unique!</label>
</span>
<span>
<input id="presentation_326_like_score_0" name="like[score]" type="radio" value="0" />
<label for="presentation_326_like_score_0">No Vote</label>
</span>
</form>
</span>
</div>

</div>
</div><div id="session_details_presentation_326"></div>

Most of this content is repetitive and will be identical for each “social_box”. In fact, the content that is unique to each individual social box can be summarized in the following JSON.

[{
    "presentation_id": 326,
    "user_id": 1,
    "score": 0,
    "liked": 0,
    "scheduled": 0
}]

If we could use Javascript templating to generate the 2,000-byte HTML from this small set of JSON, the local storage savings would be huge.
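To make this concrete, the sketch below shows the kind of client-side template function we have in mind. The renderSocialBox() helper and its markup are illustrative, not the actual Kamishibai API; the point is that the repetitive markup lives once in code, while only the per-presentation JSON is stored.

```javascript
// A minimal sketch, assuming a hypothetical renderSocialBox() template
// function. Only the data unique to each box is stored; the repetitive
// markup lives once, inside the template.
function renderSocialBox(data) {
  var id = data.presentation_id;
  return '<div id="presentation_' + id + '_social_box">' +
           '<div class="like_box">' +
             '<a href="/likes?like%5Bpresentation_id%5D=' + id + '"' +
               ' class="button icon like" rel="nofollow">like</a>' +
             // ...the remaining ~2,000 bytes of markup, derived from
             // data.score, data.liked and data.scheduled...
           '</div>' +
         '</div>';
}

// Rehydrate the cached JSON records into DOM-ready HTML.
var records = [{ presentation_id: 326, user_id: 1, score: 0, liked: 0, scheduled: 0 }];
var html = records.map(function (r) { return renderSocialBox(r); }).join('');
```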

This is one feature that we will be adding to Kamishibai and Ponzu in the near future, to enable the ultimate goal of complete offline viewing.

On HTTP caching

Kamishibai provides support for storing Ajax responses on the client using either localStorage or WebSQL (with IndexedDB planned for the future). This enables us to dramatically speed up page loads by not sending requests out to the server, instead retrieving responses from local storage. It also allows us to provide offline access to pages.
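As a rough illustration of the idea (a simplified sketch, not the actual Kamishibai code), the flow looks something like this:

```javascript
// Simplified sketch of browser-side response caching (not the actual
// Kamishibai implementation). Responses are stored in localStorage and
// the network is only consulted on a cache miss.
function cachedGet(key, url, callback) {
  var cached = localStorage.getItem(key);
  if (cached !== null) {
    callback(cached);                 // instant, and works offline
    return;
  }
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.onload = function () {
    try {
      localStorage.setItem(key, xhr.responseText);
    } catch (e) {
      // Storage quota exceeded -- evict old entries or skip caching.
    }
    callback(xhr.responseText);
  };
  xhr.send();
}
```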

HTTP itself provides the HTTP cache protocol which allows the server to control browser cache through the “Cache-Control”, “Expires”, “Last-Modified”, “If-Modified-Since”, “ETag” and “If-None-Match” HTTP headers. Ruby-on-Rails also provides methods that allow us to easily manage these headers.
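For illustration, this is what a conditional GET driven by those headers looks like from the client side (a schematic sketch; the URL and ETag value are placeholders):

```javascript
// Schematic illustration of the HTTP cache protocol (not Ponzu code):
// revalidate a cached copy with If-None-Match; the server replies
// "304 Not Modified" if the ETag still matches, so no body is resent.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/presentations/326');             // illustrative URL
xhr.setRequestHeader('If-None-Match', '"abc123"'); // ETag saved earlier
xhr.onload = function () {
  if (xhr.status === 304) {
    // Nothing changed: reuse the copy we already have.
  } else {
    // Fresh content: use xhr.responseText and store the new ETag
    // from xhr.getResponseHeader('ETag').
  }
};
xhr.send();
```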

The question is why we created our own caching mechanism for Kamishibai instead of using HTTP cache. In the following, I hope to provide the answer.

Many Ajax requests are Private Content

The following is an excerpt from the above link, describing use-cases for HTTP cache.

Private content (ie. that which can be considered sensitive and subject to security measures) requires even more assessment. Not only do you as the developer need to determine the cacheability of a particular resource, but you also need to consider the impact of having intermediary caches (such as web proxies) caching the files which may be outside of the users control. If in doubt, it is a safe option to not cache these items at all.

Should end-client caching still be desirable you can ask for resources to only be cached privately (i.e only within the end-user’s browser cache):

In Ponzu, our scientific conference information system with social network features, a lot of the content is “Private content”. For example, we generally only show the abstract text to conference participants (non-participants can only view the presentation titles and authors). Hence Ajax requests for presentation detail pages cannot be handled with HTTP cache.

URLs alone are not sufficient as the cache key

HTTP caching uses the URL alone as the cache key. If the content changes depending on values in cookies, then HTTP caching doesn’t work (the Vary header can add request headers to the key, but it cannot key on individual cookie values, so varying on Cookie in practice makes the cache nearly useless).

However with Ponzu, we use cookies to store the current user id and we also store the locale. We display slightly different content depending on the privileges of each user and we also provide different translations. We do all this while keeping the URL the same. Keeping the URL the same is important to maximize social sharing.

Hence in Ponzu, URLs alone are not sufficient as the cache key.
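Concretely, a browser-side cache can fold these extra dimensions into its key. The sketch below is illustrative; the actual key scheme in Kamishibai may differ:

```javascript
// Sketch: derive the cache key from the URL plus the cookie values
// that change the response (user id and locale).
function readCookie(name) {
  var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : '';
}

function cacheKey(url) {
  return [url, readCookie('user_id'), readCookie('locale')].join('|');
}

// Two users (or locales) requesting the same URL now hit different
// cache entries -- something plain HTTP caching cannot express.
```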

Flexible purging of HTTP cache is not possible

HTTP cache does not allow flexible purging. For example, if you set “max-age” to a large value (e.g. a few days), then you cannot touch the browser’s cache until it has expired. If you suddenly need to put up an emergency notification, you can’t do it. You have to wait until the cache expires, whatever happens.

With Ponzu, we want to set very long cache expiry dates to maximize fast browsing. On the other hand, we want to be able to flush the cache when an emergency arises. An emergency might be an urgent notification, but it also may be a bug.

Hence HTTP cache is not particularly suitable, and we would not want to set long expiry times with it unless we were extra sure that the content would not change.
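One simple way to get this flexibility, sketched below, is to stamp every cached entry with a server-controlled version. This is an illustration of the idea, not necessarily how Kamishibai implements it:

```javascript
// Sketch of flexible purging: each entry records the cache version it
// was written under. When an emergency arises, the server bumps the
// version it sends down, and every older entry is treated as stale
// regardless of how far away its expiry date is.
function writeCache(key, body, serverVersion) {
  localStorage.setItem('cacheVersion', String(serverVersion));
  localStorage.setItem(key, JSON.stringify({ v: serverVersion, body: body }));
}

function readCache(key) {
  var entry = JSON.parse(localStorage.getItem(key) || 'null');
  if (!entry || String(entry.v) !== localStorage.getItem('cacheVersion')) {
    return null; // purged: fall back to the network
  }
  return entry.body;
}
```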

Summary

As we can see, HTTP cache is not suitable for the majority of Ajax requests (HTML requests) in Ponzu. Although we use it to serve content in the Ruby-on-Rails asset pipeline, we don’t use it for dynamically created content at all. Instead, we use Kamishibai caching which provides more flexibility.

On JavaScript MVC (part 2)

In Kamishibai and Ponzu, speed is a huge concern. This is particularly true for smartphones. In desktop web sites, it is possible to cram a lot of information and navigation elements into a single page to make up for slow page loading. You can see this in news websites like Asahi.com where 80% of the top page consists of navigation and shortcuts. The idea is that instead of asking the user to click a link and reload a new page (which is slow), the user simply can scroll down to see more content. With smartphones, cramming all this information is simply not a good idea and we have to reload pages.

One way to reduce the load time for pages and to update only the parts that you want is to use Javascript MVC or client-side MVC. With client side MVC, the pages are not reloaded as the content is switched. Instead of sending HTML pages, the server sends JSON data to the browser and browser-side javascript is used to construct the DOM from the JSON data. The advantage is that the client does not have to reload the whole page, and that it can intelligently update only the portions of the DOM that need to be redrawn.

In Kamishibai and Ponzu, we seriously contemplated using these Javascript MVC frameworks. However, we decided not to do so. Instead, our approach is similar to how GitHub handles updates with PJAX, and with how the new Basecamp uses Turbolinks. David Heinemeier Hansson gave a presentation describing why they did not use Javascript MVC extensively and a video is on YouTube.
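In essence, a PJAX/Turbolinks-style transition intercepts navigation, fetches the new content over Ajax, and swaps it in without re-parsing the site's Javascript and CSS. The sketch below is a bare-bones illustration; the '#container' element is an assumption, and the X-PJAX request header follows the PJAX convention rather than our actual code:

```javascript
// Bare-bones sketch of a PJAX/Turbolinks-style transition. The page's
// Javascript and CSS are parsed once; navigation swaps only the HTML.
function visit(url) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.setRequestHeader('X-PJAX', 'true'); // ask the server for a partial page
  xhr.onload = function () {
    document.getElementById('container').innerHTML = xhr.responseText;
    history.pushState({ url: url }, '', url); // keep the URL shareable
  };
  xhr.send();
}

window.onpopstate = function (event) {
  if (event.state) visit(event.state.url); // handle back/forward
};
```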

There are other highly respected programmers who question the use of Javascript MVC. Thomas Fuchs, the author of script.aculo.us and Zepto.js, has this to say in his blog post “Client-side MVC is not a silver bullet” and in his comments on a post about one of his projects, Charm.

I’ve come to the realization that this much client-side processing and decoupling is detrimental to both the speed of development, and application performance (a ton of JavaScript has to be loaded and evaluated each time you fire up the app). It’s better to let the server handle HTML rendering and minimize the use of JavaScript on the client. You can still have fast and highly interactive applications, as the new Basecamp shows—letting the server handle most stuff doesn’t mean that you have to cut back on cool front-end features and user friendliness.

We’ve spent a lot of time getting Backbone to work properly, as the ease of use quickly deteriorates when your models get more complex. It’s a great choice for simple stuff, but email is far from simple. We also had to add yet another extra layer of processing to generate “ViewModels” on the server, because the normal Rails serialization of objects wouldn’t cut it.

If you do any non-trivial resources, you’ll quickly end up with JSON objects that are just too large, especially for lists. For emails, imagine that you have nested threads, user avatar images, nested assigned cases, etc.
Because of this, you’ll need specialized JSON objects/arrays for different use cases (search, list view, detail view, and others). It follows that you’ll end up with this with more or less any front-end framework (if you care about performance!). Doing this adds complexity, which can be avoided by rendering HTML on the server where access to arbitrarily deeply nested data is relatively cheap (and can be highly optimized by keeping snippets of HTML around in memcache, etc).

In Ponzu and Kamishibai, we ended up following the traditional Rails route, enhanced with techniques that are seen in PJAX and Turbolinks.

What do we want to achieve

For us, the reasons for contemplating the use of a Javascript MVC framework were;

  1. Speed
  2. Use without network connection (offline apps)

A commonly cited benefit of Javascript MVC is interactivity, but this was not a concern for the types of web applications we had in mind, which tend to be very read-heavy.

David Heinemeier Hansson has written a detailed post on how the new Basecamp dramatically increased performance through PJAX-like techniques and extensive server-side caching.

This technique essentially takes care of the first reason, because it makes it very easy to increase performance. Furthermore, the code remains almost the same as traditional Rails, and hence very simple.

The second issue, offline usage, is something that even most Javascript MVC frameworks do not support very well. This is even more so when the data gets complex. Hence this in itself is not a compelling reason to go Javascript MVC; we would still have to figure out how to do offline usage effectively ourselves.

Why do people use Javascript MVC?

I came across this blog post describing why they used a Javascript MVC framework (Ember.js) instead of simple jQuery.

The author, Robin Ward, gives a specific example.

For example, on the bottom of every discourse post there is a button a user can click to like a post. When clicked, it vanishes and adds a footer below the post saying you liked it. If you were implementing this in jQuery, you might add a data-post-id to the post. Then you’d bind a click event on your button element to a function that would make the AJAX call to the server. However, the click function passes a reference to the button, not the post. So you then have to traverse the DOM upwards to find the post the button belongs to and grab the id from there. Once you have it, you can make your XHR request. If the XHR succeeds, you then have to traverse the DOM downward from the post to the footer, and add in the text.

At this point it works, but you’ve tied your implementation of the button click to a particular DOM structure. If you ever want to change your HTML around, you might have to adjust all the jQuery methods that accessed it. If this example seems simple – consider that in the footer we offer you a link to undo your like. When clicked, the footer text vanishes and the button appears again. Now you’re implementing the opposite operation against the DOM, only in reverse of what you did before.

Discourse even takes it a step further – we know that 99% of the time when you click the like button the request is going to work, so we hide the button and show the footer text right away, even before waiting for the server to reply. In the infrequent event that the request fails, we’ll show an error message and pop the UI back to the state it was in before. If we were doing that in jQuery, we’d have to have a callback on our AJAX request that knew how to put the UI back into the state it was in before.

A prudent programmer might say, okay, well I’ll have a render function that can rebuild the DOM to the current UI state of whether the post is liked or not. Then both ‘undo’ and ‘like’ can call it. If ‘like’ fails it can call it again. Oh, and we have to store the current state of whether the post is liked somewhere. So maybe we add another data-liked="true" attribute. ACK! Just typing this all out is giving me a headache! Congratulations, your code is now spaghetti, your data is strewn out in the DOM and your logic is tied to a particular layout of HTML elements.

Although I understand Robin’s point, and I have also experienced frustration when we want to update multiple elements in separate locations, I tend to think that using a full-blown Javascript MVC framework is overkill. No doubt, DHH and Thomas Fuchs would simply send Javascript in the response to perform the complex updates.
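For Robin's like-button example, the RJS-style answer is a server-generated Javascript response along these lines (a sketch; the element ids are illustrative):

```javascript
// Sketch of a server-generated Javascript response (Rails RJS/SJR
// style) for the "like" action. The server, which already knows the
// outcome, tells the page exactly what to change -- no client-side
// model layer required.
$('#like_button_326').hide();
$('#likes_presentation_326').html(
  'You liked this. <a href="#" class="undo_like">undo</a>'
);
```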

In fact, it is pretty difficult to find an Ember.js example on the web that does anything complex enough that simple Javascript would not cut it.

A more intelligent RJS

Given that most of the use-cases for Ember.js and Javascript MVC frameworks seem to be things that could be done with regular Javascript, albeit with some complexity, the more pragmatic approach in my opinion is to create a small library that makes updating the DOM through RJS-like methods simpler. It could also manage caching, expiration and updating of responses on the browser side, so that subsequent requests do not have to go out to the network.
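Such a library might, for example, accept a declarative list of DOM operations from the server and apply them with one reusable function. The interface below is purely hypothetical, not the actual Kamishibai API:

```javascript
// Hypothetical sketch of an "intelligent RJS" helper: the server sends
// a declarative list of operations, and one small client-side function
// applies them, instead of hand-writing jQuery for each update.
function applyOperations(ops) {
  ops.forEach(function (op) {
    var el = document.getElementById(op.id);
    if (!el) return;
    if (op.action === 'replace') el.innerHTML = op.html;
    if (op.action === 'hide')    el.style.display = 'none';
    if (op.action === 'show')    el.style.display = '';
  });
}

// A "like" response could then be pure data:
applyOperations([
  { id: 'like_button_326',        action: 'hide' },
  { id: 'likes_presentation_326', action: 'replace', html: 'You liked this.' }
]);
```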

This is the approach that Kamishibai intends to pursue.

Furthermore, Kamishibai even handles the two-pane interface demonstrated in this Ember.js guide. This is possible because a lot of the “intelligence” lives in the library, whereas with the simple RJS approach, there is no library in which complicated logic can be stored for reuse.

More on this later when we open up the code.

On Javascript MVC

David Heinemeier Hansson gave a presentation on Backbone.js, or rather on how little they use it. The video is on YouTube.

He mentions how he uses PJAX (or actually Turbolinks) and extensive caching on the server side to make the new Basecamp just as responsive as a Javascript MVC application would be.

This is very similar to the Kamishibai approach.

There are a few things that I’m wondering about, and this post is a memo about these.

Why are Javascript MVC applications responsive?

Possibilities;

  1. Because all the Javascript and CSS files are not being reloaded and re-parsed.
  2. Because the whole screen is not being redrawn. Only a small portion is being rendered.
  3. The server is much faster at returning JSON responses compared to rendering HTML.

The first possibility is addressed by the PJAX-like approach.

The second possibility is not addressed by PJAX itself, because PJAX re-renders the whole page. Kamishibai, on the other hand, can render segments of pages, so it does address this.
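One way to picture segment-wise rendering is a response that carries several fragments, each addressed to an existing element; only those elements are redrawn. The wire format below is schematic, not Kamishibai's own:

```javascript
// Schematic of segment-wise rendering: the response carries fragments
// addressed to existing elements, and only those elements are redrawn.
function renderFragments(responseHtml) {
  var staging = document.createElement('div');
  staging.innerHTML = responseHtml;
  var fragments = staging.querySelectorAll('[data-target]');
  for (var i = 0; i < fragments.length; i++) {
    var target = document.getElementById(fragments[i].getAttribute('data-target'));
    if (target) target.innerHTML = fragments[i].innerHTML;
  }
}
```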

As for the third possibility, I find it hard to believe that the server would be slower than the Javascript client at rendering HTML (or the DOM). Of course, Rails caching can solve this problem and all is well, but I wonder how a client, especially a mobile client, can be that fast. Maybe the issue is that desktop view templates tend to be too complicated.

Responsive web design and the problem with desktop websites

In the current iteration of Ponzu, almost all the views (except those for feature phones) share the same code. That is to say, the desktop version, the tablet version and the smartphone version are the same HTML, Javascript and CSS, with a few CSS media queries to shrink down the interface for smartphones.

This is responsive web design.

Responsive design is a web design trend but is not without criticism. The most prominent criticism came from Jakob Nielsen, an expert in user interface design, in the blog post “Repurposing vs. Optimized Design”.

Nielsen’s point boils down to the following sentence;

It defies all common sense to expect the same user interface to be optimal for 3.5-inch screens and 30 inch screens, with no modifications beyond moving a few things around. Diversity in our interactive platforms requires diverse interaction design.

He lists the following as things that ought to be different for mobile vs. desktop design;

  1. Most important, the content should be different: shorter and simpler writing is required for the smaller screen because the lack of context reduces text comprehension.
  2. The IA changes to defer secondary content to secondary pages on mobile devices.
  3. Interaction techniques change due to the differences between finger and mouse-driven input.
  4. The feature set is reduced in mobile to lower complexity and to fit on the smaller screen.

Nielsen’s post met a lot of criticism from web designers, and he elaborated his points in a subsequent interview.

Similar points have been discussed in various blogs, and I find this article by Brad Frost, comparing the presidential campaign websites to be very informative, mainly because it compares the extremes.

Brad Frost summarizes the potential pros and cons of optimized-design vs. responsive site in the following.

Access

And while responsive designers can (and do) hide content from small-screen users, responsive design affords less opportunity to fork the content and create disparate experiences, which would deprive certain users of valuable information and features.

The main problem with Mitt Romney’s (optimized-design) mobile website is that only a fraction of the full website’s features are included.

Another common problem with separate mobile websites is URL management. Because desktop and mobile content live at separate URLs, device detection is required to route users to the appropriate site. Unfortunately, many websites don’t go deep enough in their URL redirection, so desktop users will get sent to mobile content and vice versa. This becomes apparent when mobile content gets shared by mobile users on social networks and then gets accessed by desktop users: Issues arise when a mobile URL is accessed from a desktop.

As we can see, having Web content all under the same roof and URL definitely makes it easier to give visitors access to the content they’re looking for, regardless of the device they happen to be using.

Interact

Obama’s (responsive design) navigation fails on a whole load of mobile devices: “And the menu failed. Never even opened. Suddenly, the site was without navigation… at all.”

Scrolling

Romney’s (optimized-design) mobile website has an acceptable page length.

Obama’s (responsive-design) pages contain a massive amount of content, often introducing entirely new sections far down in the flow. The result is extremely long pages that have serious problems.

Performance

A typical page on Romney’s (optimized-design) mobile website is about 687 KB and, as a result, takes about 8.75 seconds to load. While that’s over the 5-second mark, the pages still weigh less than the average size.

A typical page on Obama’s responsive website is a massive 4.2 MB, resulting in a 25-second loading time.

The picture painted in Brad Frost’s article is that neither candidate got their mobile website right, even though Romney’s site appeared to follow Nielsen’s advice.

The question is how should we design the Ponzu conference system.

The problem is not about mobile. It’s about desktop design.

As I have discussed previously on this blog and others, my take is that desktop web design is too complicated. Rather than making mobile much simpler than current desktop design, my idea is to drastically simplify desktop design. By simplifying desktop design, it will become more adaptable to 3.5 inch mobile screens with minimal layout changes.

Tablets will be the motivation to simplify desktop design. In fact, I think that desktop websites should be designed primarily for tablet audiences.

iMode mobile sites and smartphone mobile sites are different

When it becomes difficult to support a certain platform, one option is to provide a separate website that is written in simple HTML, and redirect that platform to this website.

This is how we support IE7 and below in Ponzu. Since our CSS and Javascript require at least IE8, we redirect IE7 and below to our website designed for iMode browsers. Since iMode only supports the simplest HTML and CSS, this website is extremely simple. Any browser, probably even old Netscape browsers, would be able to render it correctly.

One problem, however, is touch-based interfaces. When we render the iMode site on a touch device like an Android smartphone, the links are too small to tap with a finger. Although even the most incapable Android smartphones can render the iMode website correctly, the links are un-tappable because they are too small. One solution is of course to have the user zoom in and out of the page. This is, however, a pain on old Android devices, because zooming is not smooth and in some cases you do not even have pinch-to-zoom.

We are currently investigating if we need to redirect some old Android devices to the iMode website, and if necessary, whether we can customize the font-size, etc. so that this site is usable with a touch interface.

Deciding which smartphones to support

Deciding which platforms to support and to what extent we should support them is a complex decision.

On the desktop, the vast majority of problems reside in how we should support Internet Explorer (IE) prior to version 10 (which is a very standards-compliant browser). Both IE8 and IE9 are still widely used but have numerous deviations from the HTML and CSS standards. Many features are not supported, and those that are supported are often buggy. However, since the problem has persisted for such a long time on browsers that have historically been the most popular, the open-source community has constructed many libraries that fill in the holes. Furthermore, the issues and workarounds have been extensively documented on the web. Hence the problems are for the most part contained and controllable.

There are also differences and bugs among the “standards-compliant” browsers. However, in recent years, as HTML5 and CSS3 have stabilized, most of the issues have been fixed. In addition, users tend to use the most recent versions of these “standards-compliant” browsers, meaning that we don’t have to deal with the old buggy versions.

On the smartphone and mobile platforms, the situation is very different. Although most mobile browsers are based on the standards-compliant WebKit rendering engine and aspire to be standards-compliant in their Javascript implementations as well, the reality is that there are still a lot of differences.

On the smartphone side, the issues lie almost exclusively with Android rather than iOS. By far the biggest reason these issues persist is that the Android OS is very often not updated by the manufacturers of the phones. As with the desktop browsers, early smartphone browsers contained numerous bugs or lacked many features. Although these bugs were resolved in later iterations and included in subsequent Android OS releases, a huge number of Android devices never received the fixes. The manufacturers simply did not bother to adapt the new Android version to their devices, or, because they had skimped on RAM to build “budget” phones, could not update them due to insufficient hardware resources.

As a result, there remain a huge number of Android phones on old versions of the OS, and hence on old versions of the browser; versions which contain a lot of old bugs.

Bugs in the Android browser tend not to be as severe as the bugs in Internet Explorer. They mostly reside in the HTML5 and CSS3 implementations, neither of which was stable in the WebKit code base at the time the old Android browser was forked. Other bugs are in the “touch interface” implementation, which also was not in the original WebKit code. However, since web development in the HTML5/CSS3 era has evolved to emphasize animations, feedback and interactivity, these bugs are very significant.

A table of Android OS versions per phone

To decide which Android versions to support, we use this table (Android端末一覧, “List of Android devices”) on Wikipedia that lists every smartphone model sold in Japan and the Android version it is upgradable to.

We can observe that almost all smartphones introduced up until September 2011 have only been updated to 2.3.4. Up until April 2012, many new phones ended up stuck at 2.3.4–6. Only after April 2012, just a year ago, do we see the majority of phones being upgradable to Android 4.0. Hence support for Android 2.3 is unavoidable.

Just for comparison, the current version of iOS, iOS 6.1, is installable on even the iPhone 3GS, a model first sold in Japan in June 2009. That is a month before the very first Android phone was sold in Japan, a phone that could only be updated to Android 1.6.

Assessing the current and future popularity of Chrome on Android

In developing websites, especially those like Ponzu/Kamishibai which make heavy use of Javascript and CSS3, it is extremely important to decide which browsers to support. Older browsers tend not to support the features required to make the advanced parts of a website run, so a decision has to be made whether to support an old browser at all.

In Ponzu/Kamishibai, we currently support the following platforms;

  1. Newer versions of Safari, Firefox and Chrome on desktop platforms. The decision to not support older versions is based on statistics that show that users of these browsers tend to update quickly to the newest version.
  2. Internet Explorer 10 on Windows.
  3. Internet Explorer 8 and 9 are supported through browser-specific code modifications. More technically, we have separate CSS and Javascript files that are used only on these platforms to make up for deficiencies. Hence testing tends to be less thorough compared to the more fully supported platforms. Supporting older versions of Internet Explorer is a necessary evil, due to the fact that Windows XP (which only supports up to Internet Explorer 8) is still prevalent, and that IT departments within corporations usually restrict updates.
  4. On iOS, we support the latest version only, with brief testing on older versions. This is due to statistics that show rapid adoption of newer versions of iOS. For example, iOS6 was found installed on 85% of devices after only 5 months. Furthermore, iOS6 can be installed on even the iPhone 3GS, a device released almost 4 years ago.
  5. On Android, we support the Android stock browser on Android version 2.3.6 and Android version 4.0. We also support the latest version of Chrome on Android.

Android fragmentation due to slow adoption of new OS versions

Android support is complicated due to two issues.

One is the fragmentation of the Android platform itself. This is well documented, and data can be found on Google’s website. As of May 1, 2013, close to 40% of users are on “Gingerbread” (version 2.3.*), an OS version first introduced in December 2010 and superseded by “Ice Cream Sandwich” (version 4.0) in October 2011.

The Android stock browser is an integral part of Android and is updated together with the OS itself. Hence Android OS fragmentation directly corresponds to browser version fragmentation.

[Chart: Android OS version distribution, May 2013]

Android fragmentation due to two different Google browsers

Android browser support is further complicated by the fact that there exist two separate brands of browsers, both developed by Google, and either of which may be found as the default browser on even the newest versions of Android.

Up till Android version 4.0, the default browser on Android was always what is commonly referred to as the “stock Android browser”. However in June, 2012, Google released Chrome for Android. Since then, some but not all devices (e.g. Nexus 7) have Chrome as the default browser and do not have the “stock browser” installed.

Will Chrome become the new default browser for Android?

Because Chrome is developed by Google, the question is whether or not Chrome will be the default browser for Android. More practically, the real question is whether Chrome will become a significant proportion of the web audience.

Unfortunately, current statistics suggest that this will not be the case in the near future.

The graph below is taken from NetMarketShare, which tracks global website usage; it breaks down mobile browser usage in April 2013. Chrome usage (a combination of Chrome on Android and on iOS) is 2.63%, whereas Android Browser usage is 22.89%. If we assume that Chrome usage is 100% Android, this calculates to 2.63 / (2.63 + 22.89) = 10.3% of Android users using Chrome. This compares to 28.4% of Android users on “Jelly Bean” (version 4.1.*) and 27.5% on “Ice Cream Sandwich” (version 4.0.*).

Chrome for Android cannot run on versions lower than 4.0.*. The above numbers mean that 10.3 / (28.4 + 27.5) = 18.4% of “Chrome-capable” Android devices are running Chrome. If we restrict ourselves to “Jelly Bean”, the Android version from which Google removed the stock browser on Nexus devices, at most 10.3 / 28.4 ≈ 36% are using Chrome (assuming that all Chrome users are on Jelly Bean, a rather unrealistic assumption).

These numbers suggest that Chrome for Android adoption is not particularly high, even among newer devices.

It does not look very likely that Chrome will become the number 1 browser on Android for at least a few more years.

[Chart: mobile browser market share, NetMarketShare, April 2013]

Non-Google-branded smartphones do not use Chrome as the default

Whereas Google “Nexus” branded devices like the Nexus 7 come with Chrome as the default browser, and without the stock Android browser installed, the same is not true for devices from Samsung.

In this review of the recently introduced Galaxy S4, which ships with Android version 4.2, the default browser is noted to have Samsung-specific features that are not available in Chrome. Hence the stock Android browser is not only installed as the default; it also has unique features which Samsung intends to use to differentiate itself from the competition.

It is therefore likely that Samsung has no plans to switch to Chrome as the default browser. Instead, Samsung will most likely add features to the stock Android browser to further differentiate itself. If Samsung were to switch to Chrome, that would mean the browsing experience would be almost identical to other smartphones. Poor differentiation means commoditization, and Samsung will fight hard to prevent that.

In fact, Chrome on Android is pretty badly implemented at this point, and using Chrome as the default browser is likely to worsen the user experience. As Google improves the Chrome code, this is likely to be less of an issue. However, there is a larger issue that will still stop Samsung from using Chrome aggressively.

Android is open-source, while Chrome is not

The comments in this article about Chrome for Android are mostly detailed and provide a lot of insight. Particularly interesting is the fact that while Android (including the browser) is open-source, Chrome is not. This is discussed in more detail here.

Hence for a manufacturer like Samsung, which has the resources to modify Android to create a unique user experience, the Android stock browser is the obvious choice. They simply cannot customize Chrome. It is possible that Samsung will switch from the Android stock browser to their own WebKit-based browser, but it is unlikely that they will move to Chrome. The same can be said of HTC and the other major players.

Less capable manufacturers may choose Chrome, but they might also choose a third-party browser that is more capable, or one whose maker will agree to customize it.

Either way, the current situation is that the obvious first choice for the default browser has disappeared without anything to fill the gap. Chrome, under its current licensing, will not fill it, as Samsung will most likely not choose it. As a result, we might see extreme fragmentation in Android browsers.

Conclusion

Chrome is unlikely to become the dominant browser for Android in the near future. This is true even if we only consider new high-end phones since manufacturers seem reluctant to give up their own customized stock browser in favor of Chrome.

The result is likely to be that we will see even more fragmentation of Android browsers. The players will be customized stock Android browsers (from Samsung, HTC and larger manufacturers), Chrome, Amazon Silk and third party browsers (which will be endorsed by smaller manufacturers which cannot customize the stock browser themselves).

The situation is pretty grim for web developers who want to take advantage of cutting-edge HTML5/CSS3 features on Android.

Analyzing script evaluation speed of jQuery & jQuery Mobile on the iPhone

In a previous post, I discussed how Javascript can slow down website loading speed significantly, and why Kamishibai and Ponzu aim to keep the size of Javascript small.

Today, I used the Safari web inspector to analyze how long it actually takes to evaluate jQuery Mobile, a common framework used for mobile web development.

For evaluation, we used the jQuery Mobile demo page for version 1.2.1.

Below are the results. We tested, in order, my MacBook Air with a 1.7GHz Intel Core i5 (Mac OS X 10.8.3), my iPhone 5 (iOS 6.1.3) and my iPhone 4S (iOS 6.1.3).

The numbers that we are interested in are the times that it takes to evaluate Javascript.
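As an aside, on devices where the web inspector is not available, a crude approximation is to fetch the script text first and then time only its evaluation. This sketch assumes an illustrative file path and is far less precise than the inspector's numbers:

```javascript
// Crude approximation of script evaluation time (the file path is
// illustrative): fetch the source first, then time the evaluation step
// by itself.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/js/jquery.mobile-1.2.1.js', false); // synchronous, for simplicity
xhr.send();

var t0 = Date.now();
(new Function(xhr.responseText))(); // evaluate the fetched source
console.log('evaluated in ' + (Date.now() - t0) + 'ms');
```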


MacBook Air 1.7GHz Core i5: Time taken to evaluate Javascript

  1. jquery-1.7.1.min.js – 10.7ms
  2. jquery.mobile-1.2.1.js – 31.5ms


iPhone 5: Time taken to evaluate Javascript

  1. jquery-1.7.1.min.js – 41.7ms
  2. jquery.mobile-1.2.1.js – 144ms


iPhone 4s: Time taken to evaluate Javascript

  1. jquery-1.7.1.min.js – 65.7ms
  2. jquery.mobile-1.2.1.js – 238ms

Conclusion

The total latency of evaluating jQuery Mobile (jquery.min.js + jquery.mobile.js) was 42.2ms for the MBA, 186.1ms for the iPhone 5, and 303.7ms for the iPhone 4s.

Network latencies for a good broadband WiFi connection are about 50ms. For a 3G connection, they are a few hundred ms.

Server latencies depend a lot on the complexity of the page you wish to display, but can be anywhere between a few ms and a few seconds. We generally try to keep server latency below 300ms, and for most pages, below 100ms.

Whereas latencies of 100ms appear almost instantaneous to humans, response times of more than 300ms are very noticeable and negatively impact perceived responsiveness.

Given these numbers, and considering that the iPhone 4s is by no means a slow phone, we conclude that jQuery Mobile is still too slow. For the evaluation time of jQuery Mobile to stop mattering, we need at least twice the speed of the current iPhone 5. Given the performance increase with each iPhone model, we might see sufficient speed in the 2013 iPhone model. However, for such performance to become mainstream, we will probably have to wait about three years, especially considering that Android Javascript performance tends to lag significantly behind mobile Safari.