Early iPhone 6 Benchmarks Show Small GPU Performance Improvement
New iPhone 6 benchmarks point to a GPU that's weaker than Apple may have led us to believe at the iPhone announcement.
The iPhone 6 may not be out yet, but it seems at least one person who has one (either an Apple employee or someone who snagged a unit early) has run the Basemark X benchmark on it, and the improvements look smaller than what Apple promised on stage a few days ago.
Basemark X is a cross-platform benchmark that works on Android, iOS and WP8. It’s the only vendor-independent benchmark that utilizes the Unity 4.2 engine, which is used by thousands of mobile games, to test the graphics performance of mobile devices.
The iPhone 6 results are taken from Basemark X's site and come from an unknown submitter who tested the device. Tom's Hardware hasn't tested the iPhone 6 yet, so we can't confirm the veracity of these results. (The other benchmark numbers below, though, are from our own testing.)
Assuming these numbers are accurate, the iPhone 6 scored 21,204 on the Medium graphics settings and 15,621 on the High settings. On Medium, the iPhone 6 is significantly behind the Android devices (which we've tested ourselves) and only about 3 percent faster than the iPhone 5S.

At the High settings the roles reverse: not only is the iPhone 6 faster than all the others, it's about 16 percent faster than the old iPhone 5S. Even so, this is a much smaller GPU performance increase than the 50 percent improvement Apple claimed on Tuesday.
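As a sanity check on those percentages, here is a minimal sketch of the arithmetic. The iPhone 6 scores are the leaked Basemark X numbers; the iPhone 5S baselines below are back-calculated from the reported 3 and 16 percent figures, not independently measured values.

```java
public class GpuUplift {
    // Percent improvement of a new score over a baseline score.
    static double improvement(double newScore, double oldScore) {
        return (newScore - oldScore) / oldScore * 100.0;
    }

    public static void main(String[] args) {
        // Leaked iPhone 6 Basemark X scores; 5S baselines are illustrative
        // placeholders implied by the article's percentages, not measurements.
        double iphone6Medium = 21204, iphone5sMedium = 20586;
        double iphone6High   = 15621, iphone5sHigh   = 13466;
        System.out.printf("Medium: %.1f%%%n", improvement(iphone6Medium, iphone5sMedium));
        System.out.printf("High: %.1f%%%n", improvement(iphone6High, iphone5sHigh));
    }
}
```

Either way, both figures fall far short of a 50 percent uplift.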

Apple usually announces a 2x improvement in GPU performance with each new iPhone, but that doesn't appear to be the case this time around. This leads us to believe that Apple may be using the same CPU and GPU as in the iPhone 5S, with a slight performance boost from the move to a 20 nm process, instead of moving to a new GPU such as Imagination's GX6650.
If Apple kept the same GPU as last year, then it would make sense for the increase in performance to be so modest. But that leaves the question of why Apple would claim a 50 percent improvement at the iPhone 6 announcement if the benchmarks show only a 3 to 16 percent improvement.
Products: ??? | ??? | ??? | ??? | ??? | ???
Pricing: ??? | ??? | ??? | ??? | ??? | ???
SoC: Apple A8 | Apple A7 | Qualcomm Snapdragon 801 (MSM8974AB) | Qualcomm Snapdragon 801 (MSM8974AC) | Qualcomm Snapdragon 801 (MSM8974AC) | Qualcomm Snapdragon 801 (MSM8974AC)
CPU Core: Apple Cyclone (2 Core) @ 1.4 GHz | Apple Cyclone (2 Core) @ 1.3 GHz | Qualcomm Krait 400 (4 Core) @ 2.26 GHz | Qualcomm Krait 400 (4 Core) @ 2.45 GHz | Qualcomm Krait 400 (4 Core) @ 2.45 GHz | Qualcomm Krait 400 (4 Core) @ 2.45 GHz
GPU Core: ??? | Imagination PowerVR G6430 (4 Cluster) @ 200 MHz | Qualcomm Adreno 330 (32 ALU) @ 578 MHz | Qualcomm Adreno 330 (32 ALU) @ 578 MHz | Qualcomm Adreno 330 (32 ALU) @ 578 MHz | Qualcomm Adreno 330 (32 ALU) @ 578 MHz
Display: 4.7-inch IPS @ 1334x750 (326 ppi) | 4-inch IPS @ 1136x640 (326 ppi) | 5-inch IPS @ 1920x1080 (441 ppi) | 5.5-inch IPS @ 2560x1440 (538 ppi) | 5.5-inch IPS @ 1920x1080 (401 ppi) | 5.1-inch SAMOLED @ 1920x1080 (432 ppi)
Memory: ??? | 1 GB LPDDR3 | 2 GB LPDDR3 | 3 GB LPDDR3 | 3 GB LPDDR3 @ 1866 MHz | 2 GB LPDDR3
One theory is that Apple was referring to the performance of games taking advantage of the Metal API, rather than to pure OpenGL ES graphics performance, which is what benchmarks like Basemark X currently measure. In that case, the extra 50 percent would come mainly from software improvements, not hardware.
If that theory is true, then Apple perhaps should have said that the performance increase comes from using the Metal API, instead of letting everyone believe that the GPU itself is 50 percent faster. The distinction matters, because not all game developers will adopt the Metal API, especially if they want their games to remain cross-platform.
Khronos recently announced that it's overhauling the OpenGL API, and the new API will also offer close-to-metal access, much like Apple's Metal or AMD's Mantle. This means that in the not-too-distant future, all developers will be able to take advantage of a similar API that is cross-platform and works not only on mobile devices but on PCs, too.


The A8 might have 50% extra raw GPU power but require architecture-specific software tweaking to reduce its memory bandwidth dependence before the extra processing power can be leveraged.
oh I forgot....
HA!
http://bgr.com/2014/08/18/iphone-6-rumors-ram-memory/
Read this aloud, "1GB of RAM is easily capable of running the desktop version of Windows 7/8."
Now remember, not everything is Android and insanely memory hungry. WP is the leader when it comes to performing well with lower amounts of memory, but iOS is not bad and if Apple is doing their job and optimizing, iOS8 should run well on 1GB of RAM.
Having lots of cores does you no good when most software makes little to no meaningful use of threading.
In the Android development guidelines, developers are told to use worker threads for tasks like loading images so they don't bog down the main thread and applications feel more responsive. That's fine as far as responsiveness is concerned, but in terms of CPU performance scaling, those extra threads spend most of their time waiting on IO, so overall performance stays roughly the same and perceived performance is dominated by single-threaded speed: how quickly the main thread can manage the UI's layout and parsing.
So I have no trouble believing a lower-clocked CPU with fewer cores but higher IPC can end up generally more responsive than CPUs with more cores and higher clocks.
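The pattern described above can be sketched in plain Java (no Android APIs, to keep it self-contained): the 50 ms sleep stands in for disk or network latency during an image load, and the polling loop stands in for main-thread UI work that continues while the worker waits on IO.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ResponsiveLoad {
    // Simulated "image load": IO-bound, so the thread mostly waits, not computes.
    static byte[] loadImage() throws InterruptedException {
        Thread.sleep(50);          // stand-in for disk/network latency
        return new byte[]{1, 2, 3};
    }

    public static void main(String[] args) throws Exception {
        ExecutorService io = Executors.newSingleThreadExecutor();

        // Offload the IO to a worker so the "main thread" stays free...
        Future<byte[]> image = io.submit(ResponsiveLoad::loadImage);

        // ...to keep doing UI work (simulated here by quick polling passes).
        int layoutPasses = 0;
        while (!image.isDone()) {
            layoutPasses++;        // the main thread remains responsive
            Thread.sleep(5);
        }

        System.out.println("image bytes: " + image.get().length);
        System.out.println("stayed responsive: " + (layoutPasses > 0));
        io.shutdown();
    }
}
```

The worker thread improves responsiveness, but because it spends its time blocked on IO, adding more such threads would not make the overall task finish meaningfully faster.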
Where did Apple get those performance projections?
How about a little quote: "Series6XT GPU cores are up to 50% faster clock for clock, cluster for cluster compared to their Series6 counterparts".
Anyone who wants to read more about it can do so officially at:
http://blog.imgtec.com/powervr/new-powervr-series6xt-gpus-go-rogue-ces-2014
Why didn't they go with more clusters?
Well ask the rotten Apple not me.
http://bgr.com/2014/08/18/iphone-6-rumors-ram-memory/
Read this aloud, "1GB of RAM is easily capable of running the desktop version of Windows 7/8."
Now remember, not everything is Android and insanely memory hungry. WP is the leader when it comes to performing well with lower amounts of memory, but iOS is not bad and if Apple is doing their job and optimizing, iOS8 should run well on 1GB of RAM.
7/8 can run on 1GB, but it certainly should not. It is not smooth, nor is it fun; I have tried. Even 2GB of RAM is cutting it close.
My main problem is that everyone was touting the 64-bit Apple CPU, yet it is pointless with 1GB of system RAM. The biggest benefit of 64-bit is that the OS can allocate more than 4GB and apps can access more than 2GB (without the need for PAE, of course, which even then limits it to 4GB).
Overall it shows that Apple is still behind the curve. The S6 is going to be a 64-bit CPU, probably with 3GB+ of system RAM, rumors of a 4K display (but most likely a 5.5-inch QHD screen), and possibly even the same wrap-around screen as the Galaxy Edge.
This just shows that it is the same fluff Apple always does. They finally catch up and tout it as revolutionary when in fact it is not. NFC is not new. Hell, my phone (an S4) has NFC, and I can even share movies between Galaxy phones. And it is almost two years old.
Still, people flock like sheep to the iPhone as if it is God's gift to man.
On another note, I have been playing with WP8.1 and I like it so far. Smooth and fast. As well it has made me realize that the iPhone is babies first phone.