How We Test Smartphones And Tablets
Today, we outline the strict testing procedures used to obtain accurate data and discuss each test that we perform on smartphones and tablets.
Web Browsing Benchmarks
While it’s desirable to measure a device’s performance for a common task like Web browsing, collecting good data is difficult because of the browser software layer. In many cases, the choice of Web browser has a larger effect on performance than hardware or even CPU scaling frequencies. Since the browser is so important, which one should we use?
There are several options available for Android, but the two obvious choices are Google’s Chrome browser and the stock Android browser. As the most commonly used browsers, their performance is the most representative of what a user would experience on a particular device. Unfortunately, neither is well suited for benchmarking. Chrome’s frequent updates make comparing scores from device to device, or even week to week, difficult. The stock browser is even worse, because every OEM makes its own modifications (including some benchmark cheating), making device-to-device comparisons impossible. To avoid these issues, we use a static version of the Chromium-based Opera browser. The advantage of this approach is consistency and the ability to compare hardware performance; the downside is that using an out-of-date, less popular browser produces scores that are not representative of what a user would actually see.
Due to platform restrictions, Safari is the only choice for iOS-based devices, while Internet Explorer is the only game in town on Windows RT. This makes it impossible to compare hardware performance across different platforms. It also puts Android devices at a disadvantage, since iOS and Windows devices get to use newer, and likely higher-performing, browsers.
When running browser benchmarks, no additional browser tabs or pages are loaded. After running each benchmark, we close the page and force close the browser to make sure caches are cleared between runs.
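To make that procedure concrete, here is a minimal sketch of how a single run could be scripted over adb from a Node/TypeScript harness. The package name and benchmark URL are placeholders, the commands assume a device connected with USB debugging enabled, and this is an illustration only, not the exact tooling behind our reviews.

```typescript
// Sketch: reset the browser and launch the benchmark page for a single run.
// Assumes adb is on the PATH and a device is attached with USB debugging enabled.
import { execSync } from "child_process";

const BROWSER_PKG = "com.opera.browser";               // placeholder package name
const BENCHMARK_URL = "https://example.com/benchmark"; // placeholder benchmark URL

function adbShell(cmd: string): string {
  return execSync(`adb shell ${cmd}`, { encoding: "utf8" });
}

// Force-close the browser and wipe its data so no cached state carries over.
// (pm clear is more aggressive than clearing caches alone, but it guarantees a cold start.)
adbShell(`am force-stop ${BROWSER_PKG}`);
adbShell(`pm clear ${BROWSER_PKG}`);

// Launch the browser directly on the benchmark page, with no other tabs open.
adbShell(`am start -a android.intent.action.VIEW -d ${BENCHMARK_URL} ${BROWSER_PKG}`);
```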
Now that you have a better understanding of the complexities of browser-based benchmarks and the limitations of our method, let’s discuss the benchmarks we use. Browser benchmarks primarily test JavaScript performance, which is CPU-dependent and usually scales directly with CPU clock frequency.
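As a toy illustration of that point, the snippet below times a tight, single-threaded, CPU-bound loop. The workload and iteration count are arbitrary, but this is the kind of work whose score tracks clock frequency almost linearly.

```typescript
// Time a pure CPU-bound JavaScript workload: no I/O, no DOM, just math in a loop.
import { performance } from "perf_hooks";

function cpuBoundWork(iterations: number): number {
  let acc = 0;
  for (let i = 1; i <= iterations; i++) {
    acc += Math.sqrt(i) * Math.sin(i); // floating-point math only
  }
  return acc;
}

const start = performance.now();
cpuBoundWork(10_000_000);
console.log(`CPU-bound loop took ${(performance.now() - start).toFixed(1)} ms`);
```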
Browsermark
Browsermark by Basemark Ltd. tests a number of common tasks, including CSS 2D/3D transforms and resizes, DOM (Document Object Model) searches and creation, HTML5 graphics (Canvas, WebGL, SVG), and a number of different JavaScript functions.
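To give a rough idea of what a DOM workload of this kind looks like, here is a browser-side TypeScript sketch that builds a few thousand nodes and then searches them by class name. It is only an illustration of the category, not Browsermark’s actual test code.

```typescript
// Browser-side sketch: DOM creation followed by a timed DOM search.
function domCreateAndSearch(count: number): number {
  const root = document.createElement("div");
  for (let i = 0; i < count; i++) {
    const el = document.createElement("span");
    el.className = i % 2 === 0 ? "even" : "odd";
    el.textContent = `node ${i}`;
    root.appendChild(el);
  }
  document.body.appendChild(root);

  const t0 = performance.now();
  const matches = root.getElementsByClassName("even").length; // DOM search
  const elapsed = performance.now() - t0;

  document.body.removeChild(root); // clean up so repeated runs start fresh
  console.log(`found ${matches} nodes in ${elapsed.toFixed(2)} ms`);
  return elapsed;
}

domCreateAndSearch(5000);
```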
JSBench
JSBench was originally created at Purdue University. Unlike most JavaScript performance benchmarks, it could almost be considered real-world, since it utilizes actual snippets of JavaScript from Amazon, Google, Facebook, Twitter, and Yahoo.
Google Octane
Google Octane is a synthetic benchmark that tests JavaScript performance. Octane is interesting because it really pushes the JavaScript engine, testing many different aspects missed by other browser benchmarks. It tests everything from regular expressions and floating-point math to compiler and garbage collection latency. For a full listing of the included tests, see Google’s reference page.
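To illustrate the breadth of those categories, the sketch below pairs a regular-expression workload with an allocation-heavy loop that pressures the garbage collector. Both are stand-ins of our own devising, not Octane’s code.

```typescript
// Two Octane-style workload categories: regex matching and GC-heavy allocation.
import { performance } from "perf_hooks";

function regexWorkload(rounds: number): number {
  const pattern = /([a-z]+)@([a-z]+)\.(com|org|net)/g;
  const text = "alice@example.com bob@sample.org carol@test.net ".repeat(100);
  let matches = 0;
  for (let i = 0; i < rounds; i++) {
    matches += (text.match(pattern) ?? []).length;
  }
  return matches;
}

function allocationWorkload(rounds: number): number {
  let checksum = 0;
  for (let i = 0; i < rounds; i++) {
    // Short-lived objects force frequent minor garbage collections.
    const objs = Array.from({ length: 1000 }, (_, j) => ({ id: j, payload: [i, j] }));
    checksum += objs[objs.length - 1].id;
  }
  return checksum;
}

const t0 = performance.now();
regexWorkload(2000);
console.log(`regex: ${(performance.now() - t0).toFixed(1)} ms`);

const t1 = performance.now();
allocationWorkload(2000);
console.log(`allocation: ${(performance.now() - t1).toFixed(1)} ms`);
```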
Because some of the Octane test scores show a wider variance than other benchmarks, we run it three times and average the results (throwing away any outlier runs) instead of the usual two runs.
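Expressed as code, that aggregation rule might look like the following; the 10 percent deviation threshold is an illustrative assumption, not a published criterion.

```typescript
// Average several benchmark runs after dropping any run that strays too far from the median.
function aggregateScores(runs: number[], maxDeviation = 0.10): number {
  const sorted = [...runs].sort((a, b) => a - b);
  const median = sorted[Math.floor(sorted.length / 2)];
  const kept = runs.filter(r => Math.abs(r - median) / median <= maxDeviation);
  return kept.reduce((sum, r) => sum + r, 0) / kept.length;
}

console.log(aggregateScores([10212, 10388, 8930])); // the third run is discarded as an outlier
```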
Peacekeeper
This benchmark by Futuremark also tests JavaScript performance, including text parsing, array processing, DOM operations, HTML5 Canvas graphics, and even a 2D HTML5 game.
blackmagnum: Thank you for clearing this up, Matt. I am sure we readers will show our approval with our clicks and regular site visits.
falchard: My testing methods amount to looking for the Windows Phone and putting the trophy next to it.
WyomingKnott: It's called a phone. Did I miss something? Phones should be tested for call clarity, for volume and distortion, for call drops. This is a set of tests for a tablet.
MobileEditor (quoting WyomingKnott): "It's called a phone. Did I miss something? Phones should be tested for call clarity, for volume and distortion, for call drops. This is a set of tests for a tablet."
It's ironic that the base function of a smartphone is the one thing that we cannot test. There are simply too many variables in play: carrier, location, time of day, etc. I know other sites post recordings of call quality and bandwidth numbers in an attempt to make their reviews appear more substantial and "scientific." All they're really doing, however, is feeding their readers garbage data. Testing the same phone at the same location but at a different time of day will yield different numbers. And unless you work in the same building where they're performing these tests, how is this data remotely relevant to you?
In reality, only the companies designing the RF components and making the smartphones can afford the equipment and special facilities necessary to properly test wireless performance. This is the reason why none of the more reputable sites test these functions; we know it cannot be done right, and no data is better than misleading data.
Call clarity and distortion, for example, have a lot to do with the codec used to encode the voice traffic. Most carriers still use the old AMR codec, which is strictly a voice codec rather than an audio codec and is relatively low quality. Some carriers are rolling out AMR wideband (HD Voice), which improves call quality, but this is not a universal feature. Even carriers that support it do not support it in all areas.
What about dropped calls? In all my years of using a cell phone, I can count on one hand the number of dropped calls I've had that were not the result of driving into a tunnel or stepping into an elevator. How do we test something that occurs randomly and infrequently? And if we do get a dropped call, is it the phone's fault or the network's? With only the signal strength at the handset to go on, it's impossible to tell.
If there's one thing we like doing, it's testing stuff, but we're not going to do it if we cannot do it right.
- Matt Humrick, Mobile Editor, Tom's Hardware
WyomingKnott: The reply is much appreciated. Not just Tom's (I like the site), but everyone has stopped rating phones on calls. It's been driving me nuts.
KenOlson: Matt, first, I think your reviews are very well done! Question: is there any way of testing cell phone low-signal performance? To date I have not found any English-language reviews that do this. Thanks, Ken
MobileEditor (quoting KenOlson): "First, I think your reviews are very well done! Question: is there any way of testing cell phone low-signal performance?"
Thanks for the compliment :)
In order to test the low signal performance of a phone, we would need control of both ends of the connection. For example, you could be sitting right next to the cell tower and have an excellent signal, but still have a very slow connection. The problem is that you're sharing access to the tower with everyone else who's in range. So you can have a strong signal, but poor performance because the tower is overloaded. Without control of the tower, we would have no idea if the phone or the network is at fault.
You can test this yourself by finding a cell tower near a freeway off-ramp. Perform a speed test around 10 a.m. while sitting at the stoplight: you'll have five bars and get excellent throughput. Now do the same thing at 5 p.m. You'll still have five bars, but you'll probably be getting closer to dialup speeds. The reason is that the people in those hundreds of cars stopped on the freeway are all passing the time by talking, texting, browsing, and probably even watching videos.
- Matt Humrick, Mobile Editor, Tom's Hardware