Single Tab
The Google home page serves as the workload in our single-tab startup test.
Starting with a single tab, Google Chrome takes the lead, followed closely by IE9. Firefox 6 is the first browser to take more than one second to start up, landing Mozilla a third-place finish. Safari 5.1 ups the game, leap-frogging Opera to take fourth.
In Mac OS X, Apple's own Safari reaches the finish line first, just a fraction of a second behind Chrome's winning Windows 7 time. Google still manages to grab second place on OS X in under one second. Firefox 6 again comes in third at over one second, very close to the Windows 7 run. Opera still places last.
Eight Tabs
We used the Top Eight websites (according to Quantcast) for our eight-tab startup time test. These sites include: Google, Facebook, YouTube, Yahoo!, Twitter, MSN, Amazon, and Wikipedia.
When starting with eight tabs, it's Opera that really shines, finishing in just under 1.5 seconds. IE9 comes in second place. Chrome 13 places third at just under four seconds, while Firefox 6 takes fourth at 4.25 seconds. Safari falls far behind the pack at just under eight seconds.
Using Lion doesn't change the finishing order one bit. Opera still turns in the best time. Chrome takes second place at 4.25 seconds, Firefox is in third with 4.5, and Safari again places last with 5.5 seconds.
All of the third-party Web browsers start up in approximately the same time, or slightly slower, in OS X. Safari is the exception: it clearly benefits from its native platform, starting up about 25% faster than it does in Windows 7.
The newer versions of Safari and Firefox appear to have improved Apple's and Mozilla's startup times. Chrome 13, however, posts higher times than version 12 did.
- Crowning A Web-Browsing King In Windows 7 And OS X
- The Contenders
- A Spotlight On Lion's Safari
- Hardware And Test Setup
- Performance Benchmarks: Startup Time
- Performance Benchmarks: Page Load Time
- Performance Benchmarks: JavaScript, DOM, And CSS
- Performance Benchmarks: Flash, Java, And Silverlight
- Performance Benchmarks: HTML5
- Performance Benchmarks: HTML5 Hardware Acceleration And WebGL
- Efficiency Benchmarks: Memory Usage
- Efficiency Benchmarks: Memory Management
- Reliability Benchmarks: Proper Page Loads
- Conformance Benchmarks: HTML5, CSS3, JavaScript, And DOM
- Placing Tables
- Analysis Tables
- Two Winners: One In Windows 7, One in OS X
Chrome 13 completely obliterates it.
And Firefox 8/9 are still memory hogs.
Not really surprised by the poor showing of IE9. Most updates it gets are "security updates".
Yeah? And exactly what principle would that be?
Bring back the Google Dictionary, otherwise I will use Bing Search, Firefox and Facebook instead of Google Search, Chrome and G+.
According to the graphic on "Reliability Benchmarks: Proper Page Loads" on MacOS Firefox is actually second, not third.
thank you, workin' on it
These "browser" GP are getting more and more complete and the're always very interesting.
I have to say, I am a bit surprised to see FF being so close to Chrome now: kudos to Mozilla.
I have been using FF since 1.0 and only recently coupled it with Chrome (it is just convenient for me to have 2 completely different setups).
FF 7.0 should have a significant boost in memory efficiency: if everything else stays the same, we'll have a new champion ...
But if anything is clear from these reviews, it's that nothing stays the same for very long in the browser domain (well, except IE).
Looking forward to GP7, whenever that will be.
You should've put more emphasis on the actual scores and performances in the tests, rather than counting the times certain browsers placed first. As it stands, a browser with small advantages across many minor tests, but severe handicaps in fewer, more important ones, can appear better when technically it is not. Suggestion: tie all the candidates whenever the differences between them in a given test are less than a single-digit percentage. Good article anyway.
And to think Apple hates Flash...
There are no points in the analysis tables. They simply list how each browser rates per category of testing. The 'Strong' part of the table was added a long time ago, and it basically means that a browser is right up there with the winner in terms of performance. When we get a solid point-based scoring system figured out, 'Winner' will only receive a minor boost above 'Strong', whereas 'Strong' will receive a significant boost above 'Acceptable', and 'Acceptable' above 'Weak'. We're not there yet, but we're getting closer with every WBGP. The composite tests are a big step in that direction, and the new benchmark rankings further lay the groundwork for a fair scoring system which accurately reflects scale.
The analysis tables were created to balance the raw placing tables. The problem with what you're saying is that you would have to decide which categories are more important than others. Is JavaScript more important than CSS? Is HTML5 more important than Flash? This is going to depend on who you ask. People who only watch Netflix with an HTPC will put mega emphasis on Silverlight perf, whereas the chronic YouTuber will be more concerned with Flash, and devs are going to gravitate towards standards conformance. Ranking benchmarks based on the importance of what they test isn't a one-size-fits-all type of thing with Web browsers. As far as your other suggestion, dealing with practical ties, this is something we definitely want to look into moving forward. Thanks!