
Web Performance: SunSpider, V8, And BrowsingBench

Microsoft Surface Review, Part 1: Performance And Display Quality

We searched far and wide for benchmarks that run under Windows RT. But as a result of Microsoft's strict usage guidelines, there simply aren't any cross-platform tests available for comparing the Surface to other ARM-based tablets.

Everything we know about how Tegra 3 fares against competing SoCs comes from comparing Android- and iOS-based devices. We've seen the same processor perform differently under different operating environments, though. Right now, browser-based tests are the only way to quantify the performance of a Windows RT-based device compared to other tablets.

Unfortunately, that's not even a perfect solution, since browser support varies based on operating system. Safari is the default on iOS-based devices, and we saw a number of examples of it outperforming its competition in Which Browser Should You Be Running On Your iPad And iPhone?. Meanwhile, the Jelly Bean update to Android makes Chrome the default on Google's Nexus 7. With the Surface, we’re dealing with IE 10, complicating the comparison further. Even though we can’t standardize to a single browser, using each device's default is more appropriate for measuring device performance anyway.

SunSpider is a browser-based benchmark that measures JavaScript performance on workloads like 3D math, string manipulation, and encryption. We test JavaScript performance as part of our regular Web Browser Grand Prix series, which is why these results are a bit surprising (Ed.: Actually, they shouldn't be. Our first look at RoboHornet in RoboHornet: The Next Big Thing In Browser Benchmarking showed IE 10 doing really, really well.). We already know IE 9 to be slower than Chrome on Windows 7, so IE 10 appears to be a major upgrade. The combination of Microsoft's latest Web browser and Windows RT turns out to be quite a bit faster than Chrome on Android 4.1. JavaScript performance is also better on the Surface than on the iPad 2 or third-gen iPad.
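
To put those scores in context, here is a minimal TypeScript sketch of the kind of timed JavaScript microbenchmark SunSpider strings together. The workload (string building and reversal) is our own illustration, not an actual SunSpider test.

```typescript
// Illustrative SunSpider-style microbenchmark: time a tight JavaScript loop
// and report elapsed milliseconds (lower is better, as SunSpider scores).
function stringWorkload(iterations: number): number {
  const start = performance.now();
  let text = "";
  for (let i = 0; i < iterations; i++) {
    // Build, reverse, and trim a string each pass to exercise the JS engine.
    text = (text + i.toString(36)).split("").reverse().join("").slice(0, 64);
  }
  return performance.now() - start;
}

console.log(`string workload: ${stringWorkload(100_000).toFixed(1)} ms`);
```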

The V8 Benchmark Suite was created by Google specifically to test the runtime performance of JavaScript in Chrome, so it's hardly surprising to see the Nexus 7 and Transformer TF201 in the lead. Because Microsoft won't allow browsers other than Internet Explorer to run under Windows RT, though, there's no way to remove the browser as a variable in this test.

Octane is Google's newest JavaScript-based benchmark. It incorporates the eight original tests from V8 along with five additional tests that focus on runtime performance. Interestingly, the Surface fares worse than it did in V8. But, given the source of this test, we're not prepared to draw any conclusions from it.

BrowsingBench was created by EEMBC, also known as the Embedded Microprocessor Benchmark Consortium, a non-profit organization that develops benchmarking methodology specifically for embedded hardware. We've been playing around with this tool in the lab, and we love it. While it's meant for testing "smartphones, netbooks, portable gaming devices, navigation devices, and IP set-top boxes," it's just as applicable for measuring browser performance in general.

Unlike SunSpider or V8, BrowsingBench evaluates the total performance of a browser: page loading, processing, rendering, compositing, and so on. This helps reflect real-world use, unlike a single JavaScript-based metric. Frankly, these results are more representative of our own subjective experience. Both iPads share similar processing hardware, and yet Apple's second-generation tablet gets the win. Resolution plays a big part in the rendering workload, and at 2048x1536, more is asked of the third-gen iPad than the iPad 2 at 1024x768.
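
For comparison, a page-load-oriented measurement in the spirit of BrowsingBench can be approximated with the browser's Navigation Timing API. The sketch below is our own illustration of that idea, not how BrowsingBench itself collects its score.

```typescript
// Rough sketch: break whole-page load time into phases using Navigation Timing.
// Run inside a page's own script; times are milliseconds from navigation start.
window.addEventListener("load", () => {
  const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
  if (!nav) return;
  console.log(`first byte:   ${nav.responseStart.toFixed(1)} ms`);
  console.log(`DOM complete: ${nav.domComplete.toFixed(1)} ms`);
  console.log(`full load:    ${nav.loadEventStart.toFixed(1)} ms`);
});
```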

Meanwhile, Microsoft’s Surface falls ~30% behind the iPad 3 and ~40% behind the iPad 2. That’s in sharp contrast to SunSpider, where IE10 took a commanding lead. The good news is that Surface manages to outpace Tegra 3-based Android tablets like the Nexus 7 and Transformer TF201 by ~20%.

Bear in mind that the performance measured in these three tests is only one aspect of using a tablet, and by no means does a discrepancy of 40% in a browsing metric mean you're going to see a corresponding experiential gap in the real world.
