Web Browsing Tests
The restrictive nature of mobile operating systems like iOS and Windows RT is stunting cross-platform benchmark development. Although Google is more flexible about development on Android, quality benchmarks remain few and far between. As a result, we’re often left with Geekbench and GLBenchmark as our two performance-oriented tablet metrics. Using just two benchmarks, both of which are theoretical, makes me uncomfortable, even if the results scale as we'd expect them to. Right now, browser-based tests help fill that void, if only because they’re easy to run.
SunSpider, the V8 Benchmark Suite, and now Octane are ubiquitous in tablet and smartphone reviews. However, they are imperfect. As on the desktop, swapping from one browser to another can dramatically change the score a browser-oriented test reports without saying anything about the underlying platform. In short, the browser is a variable. If you hold your hardware constant, the Web browser becomes the only component that changes, and comparisons stay meaningful. Here, though, we're using different browsers on different hardware, so it's difficult to draw conclusions about either one.
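To make that confound concrete, here's a minimal sketch of the kind of timed kernel these suites are built from. It's our own illustration (the function name and workload are invented, not code from SunSpider, V8, or Octane), but it shows why the same script yields different numbers under different JavaScript engines on identical hardware:

```typescript
// A toy JavaScript microbenchmark: identical source, but the measured time
// depends on the host browser's JIT (JavaScriptCore, V8, Chakra) as much as
// on the CPU underneath it.
function numericKernel(iterations: number): number {
  let sum = 0;
  for (let i = 1; i <= iterations; i++) {
    sum += Math.sqrt(i) / i; // arbitrary floating-point work for the engine to optimize
  }
  return sum;
}

const start = performance.now();
numericKernel(5_000_000);
console.log(`kernel time: ${(performance.now() - start).toFixed(1)} ms`);
```

Run in two browsers on the same tablet, the reported time can differ substantially; run in the same browser on two tablets, it finally says something about the hardware.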
Even though it's a more expensive machine running the full version of Windows 8, I'm including results from our upcoming Samsung ATIV Smart PC 500T review for a couple of reasons. First, it's based on Intel's Atom Z2760 (Clover Trail), letting us compare a slightly faster, tablet-oriented derivative of the Medfield SoC against the ARM-based competition. Second, it's the first truly mobile hardware platform that lets us test several browsers on the same device (IE10, Chrome, and Safari).
When you keep the hardware platform constant, IE10 rises to the top in SunSpider (at least under Windows 8), while Chrome tops the chart in V8 and Octane. All three are JavaScript-based benchmarks, but Google publishes the latter two, making it a little difficult for us to treat Chrome's wins there as impartial.
Apple’s iPad mini performs on par with the iPad 2, as we'd expect. It trails behind the Nexus 7 in V8 and Octane, while it beats the Google tablet in SunSpider.
BrowsingBench was created by the Embedded Microprocessor Benchmark Consortium (EEMBC), a non-profit organization that develops benchmarks and testing methodologies specifically for embedded hardware. We've been playing around with this tool in the lab, and we love it. Intended for testing "smartphones, netbooks, portable gaming devices, navigation devices, and IP set-top boxes," it's also useful for measuring browser performance in general.
Unlike SunSpider or V8, BrowsingBench evaluates the total performance of a browser: page loading, processing, rendering, compositing, and so on. That makes it a better reflection of real-world use than any single JavaScript-based metric. Frankly, these results are more representative of our own subjective experience.
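The distinction is easy to demonstrate with the W3C Navigation Timing API, which breaks a page load into the same phases a full-experience metric rolls up. The snippet below is our own sketch of that idea, not BrowsingBench code:

```typescript
// Hedged illustration: report the major phases of a full page load, rather
// than timing JavaScript execution alone.
window.addEventListener("load", () => {
  // Defer one tick so loadEventEnd has been populated by the browser.
  setTimeout(() => {
    const t = performance.timing;
    console.log(`network fetch  : ${t.responseEnd - t.navigationStart} ms`);
    console.log(`DOM processing : ${t.domComplete - t.responseEnd} ms`);
    console.log(`total page load: ${t.loadEventEnd - t.navigationStart} ms`);
  }, 0);
});
```

A JavaScript suite only stresses the middle of that pipeline; BrowsingBench's score moves with all of it, which is why it tracks our seat-of-the-pants impressions more closely.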
The second- and third-generation iPads share the same CPU hardware, and yet the iPad 2 pushes ahead. Resolution plays a big part in the rendering workload, and at 2048x1536, more is asked of the third-gen iPad than of the iPad 2 at 1024x768. So even though the iPad 3 sports twice as many graphics cores and a beefier memory subsystem, it's asked to render four times as many pixels.
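For the record, the math behind that statement is just the ratio of the two panels' native pixel counts:

```latex
\frac{2048 \times 1536}{1024 \times 768} = \frac{3{,}145{,}728}{786{,}432} = 4
```

Twice the GPU resources, four times the pixels: the third-gen iPad falls behind despite its stronger graphics subsystem.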
And that takes us to the iPad mini. You'd think that because it shares the same SoC and native resolution as the iPad 2, both tablets would perform similarly. But after connecting both devices over 2.4 GHz 802.11n and running repeated test iterations, the iPad mini always comes out 100 points or more ahead of the iPad 2.
BrowsingBench measures the totality of the browsing experience, and a few components changed between these tablets that could affect the outcome. For example, we know the iPad mini employs the Murata 339S0171 Wi-Fi module based on Broadcom's BCM4334 chipset, whereas the iPad 2 leverages Broadcom's older BCM4329.