Why do smartphones have faster CPUs than most laptops?
Tags:
- Laptops
- Smartphones
- CPUs
They are smaller, cost somewhat less (off-contract), and have far worse ways of shedding heat, yet new releases ship with 2.5GHz quad-core CPUs while $1,000 laptops get by with 1.8GHz quad cores. What's the deal here?
Is it simply a difference in the overall costs of the system (eg., battery instead of PSU allows for more emphasis on CPU) or a difference in technology?
-
Reply to gumbykid
Clock speed is not the only metric by which to measure CPU speed, and it's nearly meaningless when comparing two different architectures. Laptop CPUs run on x86, which lets you run full x86 operating systems like Windows, so you can use the same software as on your desktop computer rather than being restricted to what's available for mobile OSes like Android, iOS or Windows Phone.
Smartphones typically run on ARM processors, which don't offer nearly as much raw processing power as x86 CPUs but draw far less power, and so deliver longer battery life than most x86 chips. While the newer ARM chips are offering higher clock speeds, they still aren't nearly as fast as an x86 CPU with a similar core count. That laptop CPU would offer more processing power despite only being 1.8GHz; the drawback is that your battery life might be 4 hours from a much larger, bulkier battery, as opposed to maybe 8 hours on the phone from a much smaller, lower-capacity one.
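A toy model makes the clock-speed point concrete (the IPC figures here are invented for illustration, not measured): sustained throughput is roughly clock speed times work completed per cycle, so a lower-clocked chip with a wider core can still come out ahead:

```python
# Rough model: throughput ~ clock (GHz) * instructions retired per cycle (IPC).
# The IPC values below are illustrative assumptions, not real measurements.

def throughput(clock_ghz, ipc):
    """Billions of instructions per second under this simple model."""
    return clock_ghz * ipc

laptop_x86 = throughput(1.8, 4.0)  # 1.8 GHz x86 core, higher IPC
phone_arm = throughput(2.5, 2.0)   # 2.5 GHz ARM core, lower IPC

print(laptop_x86)  # 7.2 -> the "slower" 1.8 GHz chip
print(phone_arm)   # 5.0 -> less work/sec despite the higher clock
```

The point isn't the specific numbers; it's that comparing clocks across architectures ignores the per-cycle term entirely.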
-
Reply to Supernova1138
It's a difference in design and implementation: a Cortex-A9 or Snapdragon chip is designed differently than a laptop version of a Core-series chip. You have to keep in mind that clock speed isn't everything, so while the laptop chip might be clocked lower, it'll do more work per cycle. Even the difference in instruction sets (mainly x86 vs. ARM) makes a difference.
-
Reply to CDdude55
CDdude55 said:
It's a difference in design and implementation, a Cortex A9 or Snapdragon chip is designed differently then a laptop version of a Core series chip. You have to keep in mind that clock speed isn't everything so while the Laptop chip might be slower it'll do more work per cycle. Even the difference in instructions sets (mainly x86 and ARM) make a difference.

I'm glad you included how it's better (work/cycle). Is there a measurement or unit for the amount of information a CPU processes in a cycle? Or is it one standard across desktop CPUs, and a different one for laptops, etc.?
Supernova1138 said:
Clock speed is not the only metric in which to measure CPU speed and is a completely irrelevant distinction when comparing two different architectures. Laptop CPUs run on x86, which allows you to run full x86 operating systems like Windows, so you can use the software that is on your desktop computer and not be restricted to what software is available for mobile OSes like Android, iOS or Windows Phone.

Smartphones typically run on ARM processors, which don't offer nearly as much raw processing power as x86 CPUs but use a lot less power and thus offer a longer battery life than most x86 chips. While the newer ARM chips are offering higher clock speeds, they still aren't nearly as fast as an x86 CPU with a similar core count, so that laptop CPU would offer more processing power despite only being 1.8GHz, with the drawback being that your battery life would be maybe 4 hours with a much larger and bulkier battery as opposed to maybe 8 hours on the phone with a much smaller, lower capacity battery.
Thanks for the info, it cleared things up about the different architectures.
-
Reply to gumbykid
gumbykid said:
I'm glad you included how it's better (work/cycle). Is there a measurement or unit for the amount of information a CPU will process in a cycle? Or is it standard across desktop CPUs, then a different standard for laptops, etc.
There is no standard unit for the amount of computational work performed per cycle. The reason is that it's very difficult to quantify such a unit of work in a way that forms a basis for comparison between dissimilar products. It's a bit like comparing apples and oranges: they are both round fruit, but the similarities end there.
Comparing x86 microprocessors from AMD and Intel is trivially easy because they both implement the same instruction set. From the perspective of application performance, it's very easy to determine which product yields more desirable results simply by looking at performance metrics derived from running the exact same test on all devices being tested. The exact same test cannot be run on an ARM device because the ARMv7/ARMv8 instruction sets are entirely different. However, it is possible to run a similar test that is designed to produce a common result and compare the outcomes.
The most telling benchmark for cross-family comparison is the Dhrystone benchmark.
The Dhrystone benchmark defines an abstract mathematical process with a fixed number of Dhrystone instructions per iteration. Since Dhrystone instructions are abstract, the benchmark implementer can implement them on each microprocessor family in a machine-specific fashion. The benchmark measures the number of Dhrystone iterations the microprocessor performs over a time period, then normalizes the result to the basis that makes the most sense: Dhrystone MIPS per megahertz of clock speed, or "DMIPS/MHz". Some manufacturers break this down further into "DMIPS/core/MHz", but that becomes incredibly problematic when the definition of a core changes.
Here are some examples:
i7-3960X @ 3.33GHz = 178,000 DMIPS/sec = 53.3 DMIPS/MHz
i7-4770K @ 3.5GHz = 123,000 DMIPS/sec = 35.1 DMIPS/MHz
Xbox 360 @ 3.2GHz = 19,200 DMIPS/sec = 6 DMIPS/MHz
Quad-core ARM Cortex-A9 (reference design) @ 2GHz = 10,000 DMIPS/sec = 5 DMIPS/MHz
Dual-core Apple A7 @ 1.4GHz = 11,200 DMIPS/sec = 8 DMIPS/MHz
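The normalization in those figures is just the raw DMIPS/sec score divided by the clock in MHz. A minimal sketch, using the exactly-divisible rows quoted above:

```python
def dmips_per_mhz(dmips_per_sec, clock_mhz):
    """Normalize a raw Dhrystone score to per-clock throughput (DMIPS/MHz)."""
    return dmips_per_sec / clock_mhz

# Figures quoted above (vendor-reported, approximate)
print(dmips_per_mhz(19_200, 3_200))  # Xbox 360 -> 6.0
print(dmips_per_mhz(11_200, 1_400))  # Apple A7 -> 8.0
```

Dividing out the clock is what lets you compare per-cycle efficiency across chips that run at very different frequencies.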
Can you see where I'm going with this? The high end Intel offerings simply blow away everything that ARM can produce on both the absolute and relative scales. ARM faces an uphill battle with respect to both increasing clock frequency and increasing instruction throughput per cycle.
-
Reply to Pinhedd
One final question, then. How do gaming consoles compete with PCs when high-end PCs have a CPU that's 10x stronger? Obviously the PC can run at higher settings and better FPS, but it's certainly not 10x better. I remember hearing that consoles are designed with all the parts integrated so that everything runs much more efficiently, because each component is built specifically to work with the others instead of just being generally compatible.
-
Reply to gumbykid
gumbykid said:
One final question then. How do gaming consoles compete with PCs when high end PCs have a 10x stronger CPU? Obviously the PC can run at higher settings and better FPS, but it's certainly not 10x better. I remember hearing about how consoles are designed with all the parts integrated in a way that everything runs much more efficiently because each component specifically works with the others, instead of generally being compatible.

Console hardware is centered around a specific purpose, so the design is geared towards what will be done on such a closed system. Again, you can't really compare, even though modern console hardware does use x86 like desktop PC CPUs and consoles have slowly evolved into PC-like machines. But remember you're working with physical limitations in console design: they have to be small but powerful, as they're really media-center machines.
You're right in the sense that console manufacturers are much more meticulous with console hardware. They really want to find the right balance of power consumption, heat, and performance while keeping it all within the confines of a small casing, so the hardware is designed with things like bandwidth and chip placement in mind. PCs are a lot more ambiguous: the hardware isn't meant to serve one particular purpose, and the software just does what it can with whatever hardware resources it's given. Console developers, by contrast, know exactly the hardware they're designing for.
-
Reply to CDdude55
gumbykid said:
One final question then. How do gaming consoles compete with PCs when high end PCs have a 10x stronger CPU? Obviously the PC can run at higher settings and better FPS, but it's certainly not 10x better. I remember hearing about how consoles are designed with all the parts integrated in a way that everything runs much more efficiently because each component specifically works with the others, instead of generally being compatible.

There are several reasons:
1. The last generation of consoles can't compete with modern PCs. In fact, the Xbox 360 and PS3 were unable to run some of the most demanding PC games of their time. My favourite example is Crysis, released in 2007. It was one of the first titles to support DirectX 10 and included a 64-bit executable. CryEngine 2 simply couldn't scale down to run on the consoles in a satisfactory way. Crysis 2 was released on PC to much criticism because CryEngine 3 made huge sacrifices so it could ship on the consoles at the same time: physics were stripped out, levels were smaller with lower draw distances and object density (although the world design was top notch), the graphics API was DirectX 9 only at release (later patched to DX11), textures were low resolution (later patched to include higher resolutions), and there was no 64-bit executable. When Crysis was finally ported to the consoles in 2011, it used CryEngine 3 instead.
2. Hardware APIs on PC have more overhead than their console counterparts. This is what allows PC games to run independently of the underlying graphics hardware: the same (properly written) code paths can be used for all graphics hardware from AMD, Intel, and NVidia. Consoles have only one hardware set, so overhead can be minimized.
3. Most games really just don't need a hefty CPU to begin with. Crysis mentioned above is simply an exception. When developers feel like taking advantage of extra resources, they can simply allow the user to scale them up/down.
4. Consoles use operating systems that are heavily optimized for real-time gaming. There's little background overhead to worry about.
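Point 2 can be sketched with a toy model (not a real graphics stack; the per-call costs are invented for illustration): a portable renderer pays for validation and driver translation on every draw call, while a console path that knows its one hardware target can write straight to the command queue:

```python
# Toy model of API overhead. Each unit of "work" stands in for a fixed
# per-call cost; the costs are invented purely to illustrate the ratio.

def portable_draw(calls):
    """Portable PC path: every call goes through a generic API layer."""
    work = 0
    for _ in range(calls):
        work += 1  # validation in the API runtime
        work += 1  # translation in the vendor driver
        work += 1  # the actual hardware command
    return work

def console_draw(calls):
    """Console path: one known hardware target, minimal indirection."""
    work = 0
    for _ in range(calls):
        work += 1  # command written straight to the hardware queue
    return work

print(portable_draw(1000))  # 3000 units of work for 1000 draws
print(console_draw(1000))   # 1000 units for the same 1000 draws
```

The real-world gap varies by API and driver, but the shape of the argument is the same: fixed hardware lets you delete layers of indirection.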
-
Reply to Pinhedd
Pinhedd said:
gumbykid said:
One final question then. How do gaming consoles compete with PCs when high end PCs have a 10x stronger CPU? Obviously the PC can run at higher settings and better FPS, but it's certainly not 10x better. I remember hearing about how consoles are designed with all the parts integrated in a way that everything runs much more efficiently because each component specifically works with the others, instead of generally being compatible.

There are several reasons:
1. The last generation of consoles can't compete with modern PCs. In fact, the XBox 360 and PS3 were unable to run some of the most demanding PC games from their time. My favourite example is Crysis, released in 2007. It was one of the first titles to support DirectX 10 and included a 64 bit executable. Cryengine 2 simply couldn't scale down to run on the consoles in a satisfactory way. Crysis 2 was released on PC to much dislike because Cryengine 3 made huge sacrifices to enable it to be released on the consoles at the same time. Physics were stripped out, levels were smaller with lower draw distances and object density (although the world design was top notch), the graphics API was DirectX 9 only upon release (later patched to DX11), textures were low resolution (later patched to include higher resolutions) and there was no 64 bit executable. When Crysis was finally ported to the consoles in 2011, it used Cryengine 3 instead.
2. Hardware APIs on PC have more overhead than their console counterparts. This is what allows PC games to run independent of underlying graphics hardware. The same (properly written) code paths can be used for all graphics hardware from AMD, Intel, and NVidia. Consoles have only one hardware set, so overhead can be minimized.
3. Most games really just don't need a hefty CPU to begin with. Crysis mentioned above is simply an exception. When developers feel like taking advantage of extra resources, they can simply allow the user to scale them up/down.
4. Consoles use operating systems that are heavily optimized for real-time gaming. There's little background overhead to worry about.
And don't forget it used voxel-based terrain rendering. All in all, it's such a shame they moved to a console-friendly engine; in my eyes CryEngine 2 was far more advanced than CryEngine 3, or at least it could deliver a lot more.
-
Reply to cemerian