Over the past few years, Apple has demonstrated an increasing desire to create its own custom SoC hardware, tuning the architecture, differentiating functionality, and optimizing for cost.
Apple A-Series
Until the original iPad and iPhone 4, Apple used only off-the-shelf SoCs from other companies. With the A4 in those devices, however, it began adding more customization. The CPU and GPU were still reference designs (an 800 MHz Cortex-A8 and a PowerVR SGX535), but Apple implemented technology around those complexes to improve battery life.
In 2011, the following year, Apple shipped the A5 SoC in its iPad and iPhone. Also designed by Apple (naturally), the A5 employed a dual-core 1 GHz Cortex-A9 CPU and a PowerVR SGX543MP2 graphics engine that were much faster than their predecessors and handled higher resolutions much better.
The third-gen iPad emerged in 2012 with the Apple A5X. It was almost identical to the A5 but had twice as many GPU cores, so in graphics-bound titles it was up to two times as fast. Even so, the beefed-up GPU couldn't fully keep up with the iPad's Retina screen: although it was twice as powerful, the new display had four times as many pixels, leaving the A5X with less graphics performance per pixel than its predecessor (a quick back-of-envelope check follows below).

So, later that same year, Apple introduced the iPad 4 with an A6X SoC that not only quadrupled the A5's graphics potential, but also included a proprietary dual-core "Swift" host processor running at 1.3 GHz. In a great many cases, it was faster than the competition. That was a first for Apple; until then, the company had tended to be conservative with its host processor, favoring long battery life, which invariably led to weaker benchmark results.
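Here is that pixel math spelled out (the display resolutions are the standard iPad figures; the 2x GPU factor is the article's own):

old_pixels = 1024 * 768        # iPad 2 display
retina_pixels = 2048 * 1536    # third-gen iPad "Retina" display

pixel_ratio = retina_pixels / old_pixels   # 4.0: four times as many pixels
gpu_speedup = 2.0                          # A5X GPU vs. A5, per the article

per_pixel_budget = gpu_speedup / pixel_ratio
print(per_pixel_budget)  # 0.5 -> half the GPU throughput per pixel of the iPad 2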
Apple A7
The A7 took everyone off-guard, especially competing SoC-makers. No one was expecting a CPU based on the ARMv8 instruction set to arrive for at least another year. But Apple released one in late 2013.
Even now, Qualcomm, the mobile chip leader, doesn't have its own ARMv8-based architecture. Its only imminent ARMv8 releases are Snapdragon-branded parts built on off-the-shelf Cortex-A53 cores, plus the Snapdragon 808 and 810, which will use off-the-shelf Cortex-A57 cores. Nvidia is the only company expected to ship a custom ARMv8 core, Denver, this year, and Samsung may or may not release an Exynos chip based on Cortex-A57.
Nobody really knows how Apple managed to design and ship an ARMv8 chip so quickly. But it did, and is milking the marketing for all it's worth by claiming to offer the first desktop-class 64-bit chip for its phones and tablets. Competing processor companies were forced to scramble and promise their own 64-bit devices as soon as possible, too.
Unlike Swift, which is broadly similar to Qualcomm's Krait, the Cyclone architecture is much wider, able to handle up to six instructions at once (compared to three for Krait and Swift). In the real world, or at least in benchmarks, that seems to translate to roughly 50% higher performance than Swift at the same 1.3 GHz frequency.
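One rough way to read that figure: single-thread throughput scales roughly with instructions per clock times frequency, so a ~1.5x gain in sustained IPC at the same clock yields the ~50% uplift. The sustained-IPC values below are illustrative assumptions inferred from that benchmark observation, not published numbers:

def relative_performance(ipc_new, freq_new_ghz, ipc_old, freq_old_ghz):
    # Throughput ~ IPC x clock frequency (a crude first-order model).
    return (ipc_new * freq_new_ghz) / (ipc_old * freq_old_ghz)

# Assumed sustained IPC: Swift ~1.0, Cyclone ~1.5, both at 1.3 GHz.
print(relative_performance(1.5, 1.3, 1.0, 1.3))  # -> 1.5, i.e. ~50% faster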
With the 64-bit Cyclone, Apple showed all chip makers, including Intel, that it's serious about making powerful processors.

The Stark SoC seems to have vanished from the latest SoC roadmap... wonder what happened to it.
Also wonder why MediaTek and Allwinner are left out of the "big players" while Nvidia is nowhere on the "competitive landscape" or anything that qualifies as such.
Half the cost and half the performance!
The Starks have been dropping like flies. Maybe Nvidia got worried HBO would finish killing them off in the fifth season.
Haven't heard a peep about Stark for a very long time, but the follow-up article, scheduled for next week, focuses on lesser-known Chinese ARM-based SoCs.
As ARM chips become more powerful and x86 chips become more power-efficient, it won't be long until the two of them meet. I'm curious to see which architecture will win that war. One thing's for sure: the next decade will be a very exciting time for mobile computing.
The Shield Tablet murders its battery in just over two hours when its IGP gets pushed to its limits, so I doubt the K1 will be particularly popular in products where small size and long battery life are priorities. If it does succeed, it will be in larger devices that can accommodate larger batteries, like Chromebooks and devices designed specifically for mobile gamers.
The Stark codename isn't related to the HBO show. It's a reference to IRON MAN, as in Tony Stark. His movies are still doing very well (a billion well, that is, and so is Downey's salary at about $80 million+ for Iron Man 3... LOL). I think the chip is just delayed until after the Erista/Parker chips. These are superheroes, not HBO characters.
I believe most of the moves are due to NV (and others) not being able to count on fabs to deliver what they wanted in their chips (based on previous history), so we have a few stopgap chips now as CYA stuff, I guess. A few years ago they probably started wondering: will they get to 20 nm OK or not, will they get to 16/14 nm or not, will FinFET be in or out and at what node, will we be able to do 3D-stacked RAM, etc. Tons of questions, so they put more chips on the roadmap just in case. Wise IMHO, based on fab track records, even if they do seem to be getting their crap together finally for these last few big moves at TSMC/Samsung/GF. TSMC seems to be on schedule, and GF/Samsung now have a swappable process since they started working together with IBM.
Maybe you guys should get on the right roadmap instead of the wrong show
But you don't HAVE to run it full out, as AnandTech showed: you can drop it to a 30 fps limit (still beating everyone else) and avoid the battery and power problems you're talking about. They also mentioned it isn't bad on heat. (A rough sketch of what a frame cap actually does follows after this post.)
http://anandtech.com/show/8354/tegra-k1-lands-in-acers-newest-chromebook
Very battery-efficient at 11.5-13 hrs in a Chromebook.
http://anandtech.com/show/8329/revisiting-shield-tablet-gaming-ux-and-battery-life
"In addition to the low GPU clocks, we see that the skin temperatures never exceed 34C, which is completely acceptable."
So once you drop the perf some, instead of running the chip in a way NO GAME will run it (maxed permanently for a test), the temps drop and so does the battery drain. Games don't do what their benchmark does, as they clearly showed in the "revisiting Shield" article. Which, comically, AnandTech makes not so easy to find... ROFL. The K1 tag won't get it; you have to hit Joshua's articles. AMD's checks are still coming, I guess... ROFL. ***cough, AMD PORTAL, cough***
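For anyone curious what a 30 fps cap actually means mechanically, here is a minimal sketch of a frame-limited render loop (purely illustrative Python, not Nvidia's or AnandTech's actual code): the loop sleeps away whatever is left of each ~33 ms frame budget, so the GPU idles and clocks down instead of running flat out.

import time

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~33.3 ms per frame

def render_frame():
    # Stand-in for a game's real update/draw work.
    pass

def run_capped(num_frames=300):
    # If a frame finishes early, sleep off the rest of the budget so the
    # GPU spends part of every frame idle instead of pegged at max clocks.
    for _ in range(num_frames):
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)

run_capped()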
If you cannot use the chip for more than 50% of what it is worth without murdering the battery, might as well make the chip 50% weaker in the first place and not have to bother with throttling to make battery life reasonable; it would still beat everything else currently on the market without having to bother with artificially capping its performance. The chip would be a buck or two cheaper to manufacture and yields would likely be better on top of that.
Why remove the OPTION to run full out if desired? That is ridiculous. I can plug in and use FULL power all day, which is how most would use it when hooked to a TV for gaming with a gamepad, as noted in reviews.
You're arguing to limit the choice of something they are giving you for free... LOL. OK. Whatever. Manufacturers can clock the chip at whatever they want, as Acer did by running it at 2.1 GHz instead of 2.3 GHz, giving its Chromebook massive battery life. They can also govern the GPU clocks any way they want (sure, I can override whatever they do in most cases, but they can set it at whatever for sale). Should AMD sell all its GPUs with less power because they use more watts than NV cards? That's dumb, and they wouldn't be competitive then.
The point of having the power in there is that you can use it WHEN DESIRED (like next year or the year after, when games using this kind of power actually land). Your way would require a NEW device at that point, because they chose to artificially limit the SoC, forcing a new purchase. Your argument is ridiculous; even at 750 MHz you get better battery life than at the 852 MHz it's clocked at, so there's no need to drop it to half.
The user has no idea anything is happening anyway. Bother? Bother whom? Logic in the device does all this for you, just like a desktop drops its speed when not in use. No difference here. The ONLY correct move is giving me full power that won't damage the device (if you get to damage levels, THEN and only THEN is it giving me too much). I'd rather have a super-powered device I could plug in and use for an extra year or two of really intense gaming than be forced to buy a new product because they limited me for ABSOLUTELY no reason. I want the fastest clocks the GPU in my PC can run at, unless it damages the unit. ALWAYS. I'll gladly turn it down if I don't want as much heat in my room (or it's too noisy), rather than NOT have the ability to use the free power.
By your logic, they should just start shipping all current laptops at half speed (heck, ship everything at half) for great battery life... LOL. What? Whatever. I'll plug in when needed and run intense games IN the house, where I have a power outlet.