Intel Broadwell CPUs to Arrive Later This Year
Tags:
-
CPUs
-
Components
-
Intel
It looks like we'll be seeing the smaller Broadwell CPUs from Intel before you need to put a new calendar on your wall.
Intel Broadwell CPUs to Arrive Later This Year : Read more
TheAshigaru
May 22, 2014 8:49:22 AM
dstarr3
May 22, 2014 9:09:27 AM
Ahhh, 14nm. I know it's just Moore's Law and all that. But, having been building computers for fifteen or so years by now, the shrink still blows my mind a bit. Earliest I remember is working with a 350nm Pentium II. I'll be excited to see what the next 15 years has to offer once we've shrunk beyond the limits of usability.
Score
13
Osmin
May 22, 2014 9:12:43 AM
Vlad Rose
May 22, 2014 9:12:46 AM
dstarr3
May 22, 2014 9:21:29 AM
I think Z87 was the last good platform for a little while now. Z97 is a very marginal update; the new DDR4 memory interface isn't fully matured yet, and we really need to find a solution to the storage revolution that's occurring. Storage options are very cumbersome with Z97 and SSDs are going to oversaturate what the interfaces are capable of rather quickly. There's just too many devices requiring too much bandwidth all of a sudden. Z97 is a platform featuring many new technologies in their infancy, whereas Z87 was a fully matured platform. So I think if one is looking to build a new mid-range or high-end computer, you may want to hold off another year or two.
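A quick back-of-envelope sketch of that interface-saturation point (the 6 Gbit/s line rate and 8b/10b coding are SATA III's published figures; the rest is just arithmetic):

```python
# SATA III signals at 6 Gbit/s, but 8b/10b line coding means only
# 8 of every 10 line bits carry data, so usable payload tops out
# near 600 MB/s -- already within reach of 2014-era SSDs.
line_rate_bps = 6e9          # SATA III raw line rate
coding_efficiency = 8 / 10   # 8b/10b: 10 line bits carry 8 data bits
usable_mb_s = line_rate_bps * coding_efficiency / 8 / 1e6

print(f"{usable_mb_s:.0f} MB/s")  # 600 MB/s before protocol overhead
```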
Score
5
pills161
May 22, 2014 10:40:59 AM
Quote:
I think Z87 was the last good platform for a little while now. Z97 is a very marginal update; the new DDR4 memory interface isn't fully matured yet, and we really need to find a solution to the storage revolution that's occurring. Storage options are very cumbersome with Z97 and SSDs are going to oversaturate what the interfaces are capable of rather quickly. There's just too many devices requiring too much bandwidth all of a sudden. Z97 is a platform featuring many new technologies in their infancy, whereas Z87 was a fully matured platform. So I think if one is looking to build a new mid-range or high-end computer, you may want to hold off another year or two.
Score
5
dstarr3
May 22, 2014 11:04:34 AM
Quote:
Yes agreed, I upgraded to Z87/Haswell around last black friday, not going to need to upgrade for a while.
Same here. I picked up a 4770k for $200 from my local shop. Incredible deal. And yeah, I'm not going to be needing a whole new build for at least three or four years. Maybe an upgrade here and there, a new graphics card if I decide to go 4K. But other than that, I'm really set for a long time.
Score
5
InvalidError
Vlad Rose said:
Has there been much stated about how much of an improvement there is with the IGP on the chip over HD4600?
IIRC, Broadwell is supposed to bring GT3/GT3e (HD5xxx) availability across most of the lineup, which should make its IGP about twice as fast as HD4xxx parts.
For HTPC, even a 6+ year old Core2Duo can handle multiple HD/h264 streams in full-software decode, so Broadwell would be a "little" overkill for that.
For a steambox or other lightweight/low-power gaming/3D applications, GT3/3e becoming the baseline IGP would help a fair bit but this won't be happening across the board until Skylake unless Intel changes their plans.
Score
1
Vlad Rose
May 22, 2014 11:27:03 AM
InvalidError said:
Vlad Rose said:
Has there been much stated about how much of an improvement there is with the IGP on the chip over HD4600?
IIRC, Broadwell is supposed to bring GT3/GT3e (HD5xxx) availability across most of the lineup, which should make its IGP about twice as fast as HD4xxx parts.
For HTPC, even a 6+ year old Core2Duo can handle multiple HD/h264 streams in full-software decode, so Broadwell would be a "little" overkill for that.
For a steambox or other lightweight/low-power gaming/3D applications, GT3/3e becoming the baseline IGP would help a fair bit but this won't be happening across the board until Skylake unless Intel changes their plans.
Yeah, I am looking at building a mini-PC w/o a dedicated card using the Antec ISK 110. I've been trying to decide if I should wait for the AMD A8-7600, Intel Broadwell, or go with a current Haswell running HD4600 if Broadwell isn't much of an improvement in power/graphics. It will be for HTPC, emulation, and some Steam gaming, as it will be used strictly on my TV.
Score
0
Talesseed
May 22, 2014 11:30:10 AM
achoo2
May 22, 2014 11:30:56 AM
As long as they keep successfully using die shrinks to either drive costs and power consumption down or increase speeds without increasing cost, it's a win. I'm not going to jump on the naysayer bandwagon because a new chip is "only" 15% faster at the same price. Meanwhile, if and when the graphics guys ever manage to transition to a new node they're going to be charging you /more/ money for less performance because they know they've got to stretch the node for three to seven more years.
Score
3
Hmm... I am going back to school this year and was going to get a new laptop because I thought that Broadwell was not going to hit until next year. But I can probably live with my 5-year-old netbook for a few months into the school year for the sake of a better machine. Broadwell may not offer much for the desktop, but it looks like it is going to be a big deal for horsepower and lower TDP on laptops... plus Intel graphics make pretty big strides forward with each generation. I would be lying if I said that I was not going to load up a game or two on my school laptop, but I don't exactly want to pay for a laptop with a dedicated GPU either.
Score
3
InvalidError
achoo2 said:
Meanwhile, if and when the graphics guys ever manage to transition to a new node they're going to be charging you /more/ money for less performance because they know they've got to stretch the node for three to seven more years.
The other foundries (UMC, TSMC, GF, etc.) are almost three years behind Intel process-wise, and for AMD/Nvidia/etc.'s sakes, they probably cannot afford falling much further behind than that - matching Intel on performance/watt is going to become extremely difficult if foundries slip a whole two process nodes (4-5 years) behind Intel and Samsung.
Since Samsung and GF decided to start doing "copy-smart" to help ramp up 14nm last month, there is a chance GF might move up to only being a year behind Intel instead of slipping further behind.
Score
1
CaedenV
Quote:
Ahhh, 14nm. I know it's just Moore's Law and all that. But, having been building computers for fifteen or so years by now, the shrink still blows my mind a bit. Earliest I remember is working with a 350nm Pentium II. I'll be excited to see what the next 15 years has to offer once we've shrunk beyond the limits of usability.
Same here; I remember helping my dad build the ol' Pentium II system... and then attempting to do video editing on it for school projects, which was painful. My first personal build was a Coppermine Pentium III on the 180nm die shrink, and I remember being amazed at how small it was in comparison.
I think they have another 2-3 die shrinks before they hit the wall, and then we are going to see major changes in the materials used to squeeze another 2-3 die shrinks before they are going to have to start implementing new instruction sets and architectures to get further efficiencies. It is going to be pretty cool to see, but once we start making major changes to architecture and instructions then we are going to have to say good-bye to legacy applications that have built up over the last 20 years, and that will be a little sad to see.
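The "another 2-3 die shrinks" guess above can be sketched with the classical scaling rule of thumb (the ~0.7x linear shrink per generation is a hypothetical idealization; real node names and cadence vary):

```python
# Rough projection of future process nodes, assuming the classical
# ~0.7x linear shrink per generation (area then scales as 0.7^2 ~ 0.5,
# i.e. each shrink roughly doubles transistor density).
def project_nodes(start_nm, shrinks, factor=0.7):
    nodes = [start_nm]
    for _ in range(shrinks):
        nodes.append(round(nodes[-1] * factor, 1))
    return nodes

print(project_nodes(14, 6))  # [14, 9.8, 6.9, 4.8, 3.4, 2.4, 1.7]
```

Even the idealized curve lands near atomic dimensions within a handful of generations, which is the "wall" being described.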
Score
3
CaedenV said:
I think they have another 2-3 die shrinks before they hit the wall, and then we are going to see major changes in the materials used to squeeze another 2-3 die shrinks before they are going to have to start implementing new instruction sets and architectures to get further efficiencies. It is going to be pretty cool to see, but once we start making major changes to architecture and instructions then we are going to have to say good-bye to legacy applications that have built up over the last 20 years, and that will be a little sad to see.
Intel has already tried going with a "more efficient" instruction set on Itanium with tons of branch predication and other neat stuff that was supposed to enhance performance and scalability, yet that failed to scale beyond x86's performance.
ARM, Power, Sparc and other ISAs are also failing to outclass x86 on raw performance and power-efficiency in many situations. As kludgy as x86 might be, Intel has managed to bring it on par with the best of anything else available today with things like uOPS cache to almost eliminate complex instruction performance hits and complexity (who would have thought ~2GHz dual-core x86 CPUs could be squeezed into 2-3W power budgets only a few years ago?) so it seems unlikely the industry is going to give it up any time soon - too much hassle for little to no gain.
Intel's biggest challenge/shortcoming for SoCs is the IGP. Bump that up a notch or two and Intel would have serious contenders across the board.
Score
3
joaompp
May 22, 2014 12:55:55 PM
JOSHSKORN
May 22, 2014 3:27:43 PM
Quote:
If you can wait, wait for Skylake (Successor to Broadwell) which will include DDR4, PCI Express 4, Thunderbolt 3, and Octacore processors.
Yeah, you're talking on the Enthusiast chips. Probably not until 2016. Some of us who haven't upgraded their mobo/CPU/RAM since 2007 can't wait that long. Going for Haswell-E. We barely need PCI-e 3.0 now anyway, and how long has it been around? Waiting another 2-3 years while software is written to demand that much GPU won't kill anyone. I don't think we've even come close to hitting a ceiling with PCI-e 3.0, have we?
Score
1
somebodyspecial
May 22, 2014 9:12:19 PM
InvalidError said:
achoo2 said:
Meanwhile, if and when the graphics guys ever manage to transition to a new node they're going to be charging you /more/ money for less performance because they know they've got to stretch the node for three to seven more years.
The other foundries (UMC, TSMC, GF, etc.) are almost three years behind Intel process-wise and for AMD/Nvidia/etc.'s sakes, they probably cannot afford falling much further behind than that - matching Intel on performance/watt is going to become extremely difficult if foundries slip a whole two process nodes (4-5 years) behind Intel and Samsung.
Since Samsung and GF decided to start doing "copy-smart" to help ramp up 14nm last month, there is a chance GF might move up to only being a year behind Intel instead of slipping further behind.
How can you say TSMC is 3yrs behind when they will ship A8's shortly for the iphone6 at 20nm? If my 20nm is out before your 14nm, at worst I'm ~2yrs behind, and they have 14nm on tap for volume Q1 2016 (though I'd say Q2). If Intel's coming this Oct/Nov with devices (they said they'd miss back to school) and TSMC is looking at somewhere in 1H2016 for 16nm, again that's under 2yrs.
http://www.eetimes.com/document.asp?doc_id=1319679&page...
20nm SoCs in 3Q14 for phones and tablets.
http://www.digitimes.com/news/a20140311PD203.html
Either they are completely lying or 20% of their revenue is 20nm this year in Q4. They are ramping and ahead of schedule already (fixed yields).
Samsung's A8 is coming a little later, so I'm not seeing your points. Don't get me wrong, I think samsung wins in the end if financials don't change for Intel/TSMC so they can keep up with $30B that samsung makes but TSMC appears to be in front on 20nm. You can't win as Intel or TSMC when samsung is spending 22B and Intel 11B while TSMC spent 9.7B (upping it this year to 11-12B IIRC, so tie ballgame for TSMC). Unless Intel figures out how to stop samsung from selling so many phones/tablets they are screwed most likely in 5yrs. If Samsung continues in 5yrs they will have spent $100B on fabs to Intel's 50B. Intel fabs are dead if they don't change the game here in some way that matters.
http://www.dailytech.com/TSMC+Were+Far+Superior+to+Inte...
"TSMC is starting its first 20 nm mass production this quarter, which will put it ahead of Intel -- if only briefly."
"So arguably TSMC is about a year behind Intel in process, at present, and Samsung is a year behind TSMC. Globalfoundries, a fourth major player, is thought to be a little behind Samsung."
So you say 3yrs; Dailytech, eetimes, digitimes etc. think they are NOWHERE near 3yrs behind, and that samsung is behind tsmc. Everybody seems to agree but you. What is it you know that they do not? Is intel selling 40mil phones? Millions of top tablets behind our backs? NO. There is no process lead here. They are equal, or you'd be winning something. I hate to agree with J. Mick, but this time he's not crazy.
You have too much faith in (love for?) intel. Certainly as an AMERICAN fanboy (not intel, more AMD old time fanboy here but management has been killing them for a decade) I'd rather see an american company squash samsung (and TSMC) but the spending facts don't lie so no point in ignoring the reality here for me. IBM might look like they stepped away from the gang, but the R&D that is needed for most of their part is done for 20/14nm. Also they are still collaborating on below this, though probably until IBM dumps it all. They haven't been fabbing tons anyway; IBM does R&D then passes it to the other two to let them flesh it all out (they do not fab much themselves). Of course also as an american I'd rather see samsung kick the crap out of TSMC instead of see TSMC reach the top. I don't see how TSMC wins as they totally depend on the fab, where samsung has other devices to sell (phones, tablets, glass, memory, ssd's, tvs etc etc). This will be a war like google/amazon driving down phones (or killing an OS/DirectX in google vs. MS war) because they have ways to bleed you to death on the hardware while OTHER stuff pays the bills (books, movies etc on amazon, ads etc for google).
The most important point to me about Intel? For all the love of their process stuff from people like you (and me years ago), what has it gotten them? IF they are so far ahead, why are they not getting squat in phones and tablets? They are having to PAY to get into devices. They are promising to make up the cost difference on an ARM chip vs. Baytrail etc if you go their way (funny, I thought that was anti-competitive). They are essentially making nothing to get into a device (selling at ARM pricing instead of INTEL pricing). How good is a process if you are NOT the leader because of it? It's 28nm vs 22nm and you gained nothing even with finfet (when they get it, what then? Even that little help is gone). It's going to be 20nm vs. 14nm soon and again you'll gain nothing from it. I predict we will see the same things on the new processes from both sides. Intel will again have to buy their way into stuff unless magically 20nm fails for samsung/tsmc shortly (they are ramping already at tsmc for the iphone6 with far better yields now, so no magic will stop this).
I'm not alone thinking the above:
http://www.eetimes.com/document.asp?doc_id=1322263
JP Morgan - QUIT MOBILE
"We continue to believe Intel will lose money and not gain material EPS from tablets or smartphones"
Proof they are right so far:
"The mobile and communications group saw a $3.1 billion operating loss in 2013, with 1Q 2014 losses hitting $929 million and revenues at $156 million."
So 3.1B last year, and based on 1Q14, looks like you're ramping to a 4B loss this year right?
Intel's dumb comments in response:
"We feel that we have a plan"
"We’re actually feeling pretty good"
ROFL. Sounds confident. I'd prefer "we will dominate because of X and this is how and why they will suck compared to us" - something like that. They are a gorilla trying to thump the chest without arms (pun intended)...LOL.
and worse:
“Keep in mind we are also manufacturing these chips now at 22 nm, and we are in the process of starting up our 14 nm process.”
Umm...OK, and everyone else is doing this (to use Intel's own words):
“Keep in mind we are also manufacturing these chips now at 28 nm, and we are in the process of starting up our 20 nm process.” and will beat Intel to our new process...
See how that works? You're getting nowhere. Time to buy NV so you can get into the ARM game for real. Producing their chips on Intel's process WILL make a difference. Better mobile design + your process = WIN. The definition of insanity is doing the same thing over and over and expecting different results, right?
Buy NV for a REAL game changer. Based on the $4B they will lose this year, if you go 5-6yrs of that you could have bought NV today and driven TSMC/Samsung's fabs into a painful existence. IF they keep this up until 7nm etc you gain nothing. Not to mention you can fill your fabs with 550mm2 GPUs instead of delaying upgrades to 14nm fabs. The game changer here is buying NV and producing their stuff at Intel fabs. Intel is good enough to look like they're in the game, but not good enough to take ARM out without BEING ARM (tegra K1 etc). Pay Jen Hsun 3B to either walk away or run the SOC/GPU depts (CTO or some decent title) and buy them for another 22B. In 6yrs at these losses it's basically FREE, and the damage you can do for the next 10yrs will destroy fab competition, as all others would REALLY be behind by 2yrs+ forever with the same thing we have now. Only then it would be ARM who was trying to win via the definition of insanity.
Intel would have the lowest-power gpus for ages with the best perf, lowest watt/best perf socs, best cpus and TWO modem solutions (software and hardware versions), and doing so gives you a reason to upgrade fabs to 14nm instead of delaying them because you can't keep them full. Instead intel seems to think you can throw more money at it just like the govt... LOL. I'd say buy NV, or AMD if AMD had a few socs out already, but they just don't, so you have to go with the #1 gpu (their weak link forever) and a proven soc history with a desktop gpu in them now. Anything less than this is a failure that screws your company and shareholders out of $3-4B a year. Once 64bit models on ARM's side hit the desktops, I'd bet money someone on ARM's side will decide to "vertically integrate" more and put out a DISCRETE GPU to cut out AMD/NV from being in their 500w ARM PCs. That is a no-brainer. They won't want to support their mortal enemies' bottom lines with GPU sales whose profits will ultimately be used against them (AMD isn't an enemy yet, but will be the second their mobile socs hit). INTEL is NOT ahead. $1B Q1 loss purely on mobile doesn't lie.
Score
1
InvalidError
Technically, Intel's 14nm started production last year but ran into show-stopper complications and the schedule ended up slipping by over half a year.
BTW, the gate width in Intel's 22nm tri-gate/FinFET process is 8nm... so Intel has technically been shipping sub-10nm chips for nearly three years already.
Score
0
somebodyspecial
May 22, 2014 9:52:00 PM
JOSHSKORN said:
Quote:
If you can wait, wait for Skylake (Successor to Broadwell) which will include DDR4, PCI Express 4, Thunderbolt 3, and Octacore processors. Yeah, you're talking on the Enthusiast chips. Probably not until 2016. Some of us who haven't upgraded their mobo/CPU/RAM since 2007 can't wait that long. Going for Haswell-E, We barely need PCI-e 3.0, now anyway, and how long has it been around? Waiting another 2-3 years while software is written to demand that much GPU won't kill anyone. I don't think we've even come close to hitting a ceiling with PCI-e 3.0, have we?
ROFL... Thanks for making me feel better. I thought I was alone with a 2007 cpu. It's from late 2007, but it easily hits 3.6GHz when desired, so I'm barely surviving here. I used to upgrade the cpu or gpu yearly (one each year, usually just rotating the purchase), but today I replace board/mem/cpu once per cycle and only buy gpus every 2-3yrs, and I skipped an extra gen this time trying to get to 20nm gpus. That likely wouldn't have happened if I was gaming a lot, but I've had IT crap to do, so not enough time to game to justify the purchase. Broadwell + Maxwell + shield2 (maybe 3... LOL) + a 13in+ 1080p tablet with K1 or M1 (basically for training vids in bed/couch or gaming only) + a 1600p G-Sync 27-30in. Come on people, get some crap out I REALLY want to buy!
I'm tired of waiting for awesome upgrades.
I'm tired of buying 3TB HDs every other month to make me a little happy... LOL. Up to 6 now and still have space problems.
Where is my 5TB-6TB helium drive?
Argh!
Score
0
aldaia
May 23, 2014 4:55:02 AM
Quote:
BTW, the gate width in Intel's 22nm tri-gate/FinFET process is 8nm... so Intel has technically been shipping sub-10nm chips for nearly three years already.
That feature is not gate width. 8nm is the width of one "fin" at mid-height. Since fins have a triangular cross-section, that means they are wider at the base. As far as I know, a single transistor is implemented using several of those fins.
Once upon a time, the node size was defined as half the metal pitch (first-level metal); what defines a new node is increasingly unclear.
Intel 22nm metal pitch – 64nm
TSMC 28nm metal pitch – 64nm
According to the "traditional" definition, both are 32nm processes; however, metal has not been scaling well for the last 2 or 3 node shrinks. That effectively means that Moore's law scaling has been broken since around 2010. Each new node announced is delivering less than the expected 2x transistor density, at least if you want those transistors interconnected :-)
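The pitch numbers above make the point concrete: under ideal scaling, density grows as the square of the linear shrink. A small sketch (the 64nm metal pitches are the figures quoted above; the function name is mine):

```python
# Ideal node-to-node scaling should double density, i.e. the area per
# feature shrinks as (new/old)^2. Compare the gain implied by the
# nominal node names with the gain implied by the actual metal-1
# pitches quoted above (Intel 22nm and TSMC 28nm: both 64nm).
def density_gain(old_pitch_nm, new_pitch_nm):
    """Ideal transistor-density ratio implied by a linear pitch change."""
    return (old_pitch_nm / new_pitch_nm) ** 2

print(round(density_gain(28, 22), 2))  # ~1.62x implied by the node names...
print(density_gain(64, 64))            # ...but 1.0x from identical metal pitches
```

This gap between the named node and the interconnect pitch is exactly why "what defines a new node" has become unclear.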
Score
0
somebodyspecial
May 23, 2014 10:30:14 AM
InvalidError said:
Technically, Intel's 14nm started production last year but ran into show-stopper complications and the schedule ended up slipping by over half a year. BTW, the gate width in Intel's 22nm tri-gate/FinFET process is 8nm... so Intel has technically been shipping sub-10nm chips for nearly three years already.
BTW, that has nothing to do with the data in my previous post.
You're still touting their process but not understanding they are getting killed in mobile. x86 just can't take out ARM, so Intel needs to buy NV so they instantly get ARM chips to make on their process, not to mention GPUs that take up 5-6x a SoC's space, thus filling a 14nm fab between the two, which would get them back to growth instead of losses in mobile.
Regardless, my points still stand. They still can't get into a phone or tablet without bribing someone. The data in my post doesn't lie. A ten-year financial summary shows Intel peaked in 2011 (12.94B profits, but TTM only 9.62B for 2013, down from 11B in 2012, sliding each year) and the party has been over since. Intel could be putting out 1nm for all I care. IF they were losing 3.1B a year on it (moving to 4B this year) I'd say their 1nm is getting its ARSE handed to it by TSMC 28nm.
Intel is behind if it's losing 3.1B last year and on schedule to lose another 4B trying to bribe others to use baytrail etc. Let me know when intel starts making money instead of losing billions in mobile. The day you can do that, my post has been refuted.
Gate size, etc etc mean nothing. Making money means everything, and it appears Intel is having problems making MORE of it (stuck treading water). The others are piling up billions in profits on mobile in fabs, while Intel keeps losing it. That's called losing, right? I hope they are in talks to pay Jen Hsun whatever he wants so they can get busy making stuff that can take down samsung/tsmc/gf/Qcom/ARM. Well, pretty much qcom... LOL, ARM makes nothing, ~650mil, which I really couldn't believe when I investigated the stock and decided to pass.
You don't see Samsung saying they are halting the build of a fab. TSMC isn't saying it either; rather, they are booked all the way to the end of 2014. Meanwhile Intel has a fab sitting empty here in the state of AZ because it would just lose money if they opened it. Fabs down at Intel, profits down, losing 21% of notebooks to chromebooks, next stop desktop, then they'll be talking about the fabs that ARE open losing money as ARM cuts off the need to produce as many chips as they make already at Intel. It will only get worse unless they change the dynamics of the situation. That means fabbing ARM, and the only way to do that is to make it themselves or buy NV (can't buy samsung, apple or qcom). If they attempt it themselves it would take too long, and samsung/apple will be making 40B & 50B by then, not to mention Intel would be racking up losses in mobile until the chip came out. They can't really halt the chase without looking like losers while making a chip.
The fastest route to victory is an Nvidia purchase. Then again Jen Hsun may have already done the math and has no need to be bought, figuring he'll win in the end as he assaults Intel's desktops and servers with Denver/Boulder (or whatever comes after them), attacks Wintel gaming via Android/linux etc, etc. How many chips LESS will intel need when they take 20% of desktops by next xmas? Will Intel be losing $5-6B on mobile in 2015? It just takes longer for him to win the ARM war without Intel's fabs. With Intel the gpus will take over ARM/Android even faster, but Jen probably wants to kill Intel more than he wants to speed up the ARM war. I doubt he thinks he'll lose a GPU/Gaming war with Qcom or Intel, and samsung has no gpu yet. AMD has no modem or mobile soc for ARM yet, so no assault on phones will happen any time soon, so NV really has to screw up their gpu on their own to be stopped.
When cuda starts getting used more in games this will get even worse. Unlike Mantle NV has 7yrs of Cuda out there with billions invested in it. Nobody else in socs has a cuda like ecosystem that is now being aimed at games not just pro-apps (in nascar 2014 this year). The only other soc vendor that has done ANYTHING in gaming is mediatek and they have ONE. Modern Combat 5 and NV will have that soon enough too so all players are miles behind NV in games.
http://www.geforce.com/landing-page/nvidia-shield-legen...
Gabe Newell now signing game packs for NV Shield. The rest of the field better up the game (pun intended) or get ready to be run over by NV's gaming prowess. I can't wait for NV's idea of an ARM console box at 150w or a 500w PC box. I'd take a $350 ARM/OpenGL console over DirectX xbone any day. I will not support Sony, as I'd rather buy USA-only as much as possible (our economy sucks today and needs all the help we can give it... LOL). So I'm stuck waiting on an ARM console to complement my PC gaming. To sell like crazy, though, it needs to be upgradable (SOC+gpu+HD - 2 out of 3 wouldn't be bad).
Score
0
InvalidError
somebodyspecial said:
You're still touting their process but not understanding they are getting killed in mobile. x86 just can't take out ARM
Intel has barely begun pushing the mobile SoC front in a remotely serious manner: they did not have anything worth talking about under 10W until this year and, CPU-wise, Baytrail was beating ARM in most benchmarks by a fair margin at the time it was announced.
Intel is not making losses because they are "falling behind;" they are making losses because the high-margin PC/laptop segments are collapsing in favor of lower-cost, lower-margin systems (or in some cases, migration to smartphones and tablets) and longer replacement cycles.
Even if Intel went ARM, they would still have the problem of gaining market share in the mobile space and ARM-based sales generating only ~1/10th the gross profit per unit as higher-end desktop CPUs. Same goes with Atom should Intel manage to gain market share there: even if Atom scored heaps of design wins, Intel's profits on Atom sales would still be only a small fraction of what they get on i3/5/7. Also, SoCs have integrated IO controllers so there is no further profit on chipset sales to complement CPU sales either with SoCs, which is another $5-10 loss per sale for Intel compared to their traditional desktop/laptop model.
Going from selling $200-600 mobile CPUs + $30-50 chipset per laptop to selling $30-50 SoC is a pretty steep drop they have to work with if they want to play in that arena regardless of whether they choose to do so with ARM or Atom.
Score
0
somebodyspecial
May 23, 2014 11:00:09 PM
InvalidError said:
somebodyspecial said:
You're still touting their process but not understanding they are getting killed in mobile. x86 just can't take out ARM
Intel has barely begun pushing the mobile SoC front in a remotely serious manner: they did not have anything worth talking about under 10W until this year and, CPU-wise, Baytrail was beating ARM in most benchmarks by a fair margin at the time it was announced.
Intel is not making losses because they are "falling behind;" they are making losses because the high-margin PC/laptop segments are collapsing in favor of lower-cost, lower-margin systems (or in some cases, migration to smartphones and tablets) and longer replacement cycles.
Even if Intel went ARM, they would still have the problem of gaining market share in the mobile space and ARM-based sales generating only ~1/10th the gross profit per unit as higher-end desktop CPUs. Same goes with Atom should Intel manage to gain market share there: even if Atom scored heaps of design wins, Intel's profits on Atom sales would still be only a small fraction of what they get on i3/5/7. Also, SoCs have integrated IO controllers so there is no further profit on chipset sales to complement CPU sales either with SoCs, which is another $5-10 loss per sale for Intel compared to their traditional desktop/laptop model.
Going from selling $200-600 mobile CPUs + $30-50 chipset per laptop to selling $30-50 SoC is a pretty steep drop they have to work with if they want to play in that arena regardless of whether they choose to do so with ARM or Atom.
Intel would be making ~13B+ if it wasn't for the 3.1B they toss away on mobile (and another 1B this last Q). Did you read the JP Morgan stuff asking Intel to STOP making mobile, which would free up .50 EPS (not that I agree, just saying)? If you're the top dog you get a premium for your stuff (i.e. NV's gpus vs. AMD's, Intel cpus vs. AMD cpus). They are practically giving them away because they are NOT the top dog in mobile in any way. Margins would be fine if they had a chip they could charge REAL pricing for. You're also forgetting we are talking 1.2B units vs. 340mil. You don't have to charge a 62% margin when selling 4x the units.
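The volume-vs-margin arithmetic behind that claim can be sketched with hypothetical round numbers (the $50 price is an assumption for illustration, not Intel's actual ASP):

```python
# Sketch of the volume-vs-margin trade-off: at a fixed selling price,
# 4x the unit volume breaks even with 1/4 the gross margin.
# Integer math throughout: prices in cents, margins in basis points.
def gross_profit_cents(units, price_cents, margin_bp):
    # margin_bp is the gross margin in basis points (1/100 of a percent)
    return units * price_cents * margin_bp // 10_000

high_margin = gross_profit_cents(340_000_000, 5_000, 6_200)      # 62% margin
high_volume = gross_profit_cents(4 * 340_000_000, 5_000, 1_550)  # 15.5% margin
assert high_margin == high_volume
```

Of course, real mobile ASPs were far below PC-CPU ASPs, which is the counterpoint InvalidError makes above; the sketch only shows that volume can offset margin when prices are comparable.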
Score
0
catfishtx
May 27, 2014 7:25:02 AM
Vlad Rose
May 27, 2014 9:41:42 AM
catfishtx said:
When I started at Intel, they were on the tail end of their P852 process at 600nm. The P854 process at 350nm was just ramping up. This process saw Pentium Pros and Pentium MMX along with regular desktop chips. Hard to believe they will be at 14nm by the end of the year.
Yeah, it's nice they got some competition from AMD to ramp up their R&D department. Let's hope AMD can rebound.
With ARM's 'threat', they're still quite a ways off on a performance level. Apple even thought about switching their CPU architecture to ARM on laptop/desktop to unify their platforms, but realized they'd be way too slow vs the competition. Intel on the other hand is closing the gap on performance per watt.
As the famous Japanese quote goes: "I fear all we have done is to awaken a sleeping giant and fill him with a terrible resolve."
Score
0
Vlad Rose said:
As the famous Japanese quote goes: "I fear all we have done is to awaken a sleeping giant and fill him with a terrible resolve."
Pretty much.
3-4 years ago, few people (myself included) would have believed it would be possible to bring relatively high-performance x86 cores down to a sub-3W power budget, but this year, Intel is shipping 2-3W x86-64 SoCs that outperform 32-bit ARM CPUs in the same power budget.
I'm just amazed at how the old x86 kludge managed to survive so long and looks like it might finally break in the handheld and other traditionally non-x86 markets. ~15 years ago, I thought I would own an IA64-based PC by now.
somebodyspecial
May 28, 2014 11:07:20 AM
InvalidError said:
Vlad Rose said:
As the famous Japanese quote goes: "I fear all we have done is to awaken a sleeping giant and fill him with a terrible resolve."
Pretty much.
3-4 years ago, few people (myself included) would have believed it would be possible to bring relatively high performance x86 cores down to sub-3W power budget but this year, Intel is shipping 2-3W x86-64 SoCs that outperform 32bits ARM CPUs in the same power budget.
I'm just amazed at how the old x86 kludge managed to survive so long and looks like it might finally break in the handheld and other traditionally non-x86 markets. ~15 years ago, I thought I would own an IA64-based PC by now.
Outperform ARM in what?
http://www.slashgear.com/nvidia-tegra-k1-out-performs-i...
I don't see Intel doing much of anything in mobile other than losing money.
More fantasy on your part. But this year... blah... blah (I've been hearing that for a few years now). Let me know when they stop losing money giving chips away. You act as though the enemy sits still, waiting for Intel to finally pass them. They are NOT. Meanwhile, ARM already stole 21% of Intel's entire notebook market, and Intel still can't get into mobile without paying the difference between ARM's chips and theirs (I guess you'd call that price matching). More damage is heading Intel's way, as ARM can far more easily adapt to MORE watts than Intel can to LESS. Samsung/TSMC will be on their new processes before Intel moves to 14nm. You gain nothing and actually seem to be losing ground in more ways than one. 3D FinFET was supposed to take ARM down... how did that work out? Now all Intel has is a shrink, and the enemy gets one first. Next time, the enemy has 3D FinFET too... what then? They gain more, that's what.
Intel had better buy NV before China/Korea run them over in fabs (TSMC, Samsung). ARM SoCs produced with the best GPUs on Intel's process would be a game changer. Otherwise, we'll see ARM erode Intel's finances for the foreseeable future.
somebodyspecial said:
Outperform arm in what?
I already answered that half a dozen times: CPU performance, both raw and per watt. If you look at CPU-only benchmarks, Bay Trail is 20-40% ahead of current ARM chips in most benches, particularly parsing- and branch-heavy ones like browser and JScript benches. Intel has very much proved they are capable of bringing x86 all the way down to ARM-level power budgets, but AMD is still struggling to get there without sacrificing too much performance.
The K1 is mostly marketed for gaming-oriented devices, and devices using it are not shipping yet. By the time they do, Intel's Moorefield will be about to hit the market too, likely giving Atom another significant boost in CPU performance by reinstating out-of-order execution.
The competition might not be standing still, but Intel is incrementally rolling its architectural big guns back in, with each new Atom generation bringing much greater leaps in performance per watt than any ARM competitor manages, and 2014 marks the year Intel's SoC CPU performance leapfrogged most of the competition under 3W.
Yes, Intel still has some catching up to do on the IGP side of things, but that is not as much of a problem on more productivity/business-oriented mobile devices.
Vlad Rose
May 28, 2014 1:11:07 PM
somebodyspecial said:
Outperform arm in what?
http://www.slashgear.com/nvidia-tegra-k1-out-performs-i...
I don't see Intel doing much of anything in mobile other than losing money.
More fantasy on your part. But this year...blah...blah (I've been hearing that for a few years now)...Let me know when they stop losing money giving away chips. You act as though the enemy sits still waiting for Intel to finally pass them. They are NOT. On the other hand, ARM already stole 21% of Intel's entire notebook market and Intel still can't get into mobile without paying the difference between ARM's chips and theirs (I guess you'd call that price matching). More damage is heading Intel's way as ARM can far more easily adapt to MORE watts than Intel can to LESS. Samsung/TSMC will be on their new processes before Intel moves to 14nm. You gain nothing and actually seem to be losing ground in more ways than one. 3Dfinfet was supposed to take ARM down...How did that work out? Now all they have is a shrink, and the enemy has one first. Next time, the enemy has 3Dfinfet too...What then? They gain more, that's what.
Intel better buy NV before china/korea run them over in fabs (TSMC, Samsung). ARM socs produced with the best gpus on Intel's process would be a game changer. Other than that we'll see ARM erode Intel's finances more for the foreseeable future.
You might want to read that article again. The Tegra K1 outperformed the Haswell in graphics benchmarks only, not CPU power. And even then, it was up against an HD 4400, not an HD 4600 or Iris Pro 5200. Nvidia had better be able to win in the graphics department, considering that's what their company was built on in the first place.
"Samsung/TSMC will be on their new processes before Intel moves to 14nm." - And they will still be slower than the slowest chip in Intel's x86 lineup (outside possibly Atom).
"3Dfinfet was supposed to take ARM down." News to me that that was ever posted anywhere. The point of making 3D transistors was the ability to cram more transistors into a smaller space, as mentioned by Wikipedia:
"Multigate transistors are one of several strategies being developed by CMOS semiconductor manufacturers to create ever-smaller microprocessors and memory cells, colloquially referred to as extending Moore's Law."
"More damage is heading Intel's way as ARM can far more easily adapt to MORE watts than Intel can to LESS." ... You may want to take a course in computer design before making a statement like that; it rests on a false assumption. ARM is by nature designed for low power rather than high performance. That is what RISC (which ARM is based on) is designed for. Throwing more watts at it will not make it perform at the same level, due to its fundamental design. Oh, and Intel doing more with less wattage? What do you think the instruction sets Intel keeps adding to their CPUs are (MMX, SIMD, etc.)? They are RISC-style instructions.
Again, where is Intel losing money on their CPU chips? I know you may believe all the hype Nvidia creates, but when it comes to market size, they're like a Chihuahua: a small dog with a constant bark, surrounded by Great Danes (Intel, Apple, Qualcomm). Intel isn't going anywhere anytime soon.
Vlad Rose said:
"More damage is heading Intel's way as ARM can far more easily adapt to MORE watts than Intel can to LESS." ... You may want to take a course in computer design before making a statement like that, then realize it's a false assumption. ARM by nature is designed to run at low power vs. high performance. That is what RISC (which ARM is based on) is designed for.RISC is not specifically designed for low power or any power budget in particular: look at IBM's Power-series chips with TDPs going up all the way to 400W per package. The original design goal behind RISC was simplicity and homogeneity - keep instruction sets simple and the instruction/data formats as uniform as possible to waste as little logic as possible on decoding instructions and accommodating different instruction/data packing formats.
Also, if you look at the internal architecture of Intel and AMD chips from the past ~15 years, their internal design is fundamentally RISC: messy x86 code comes in the instruction decoder and the decoders issue one or more very RISC-like microcode instructions for internal scheduling, re-ordering and execution. You could almost say modern x86 CPUs are silicon-based emulators.
somebodyspecial
June 2, 2014 3:05:35 PM
InvalidError said:
somebodyspecial said:
Outperform arm in what?
I already answered that half a dozen times already: CPU performance; both raw and per watt. If you look at CPU-only benchmarks, Baytrail is 20-40% ahead of current ARM chips in most benches - particularly parsing and branch-heavy ones like browser and JScript benches. Intel has very much proved they are definitely capable of bringing x86 all the way down to ARM-level power budgets but AMD is still struggling to get there without sacrificing too much performance.
The K1 is mostly marketed for gaming-oriented devices and devices using it are not shipping yet. By the time they do, Intel's Moorefield will be about to get on the market too and likely give Atom another significant boost in CPU performance by reinstating out-of-order execution.
The competition might not be standing still but Intel is incrementally rolling all their architectural big guns back in with each new atom generation bringing much greater leaps in performance per watt than any ARM competitor does and 2014 marks the year where Intel's SoC-CPU performance leapfrogged most of the competition under 3W.
Yes, Intel still has some catching-up to do on the IGP side of things but that is not as much of a problem on more productivity/business-oriented mobile devices.
At Christmas, Moorefield will be facing Denver, which is also out-of-order, and we've already seen the dual-core perform like a quad in CPU tests; it will likely be facing others at 20nm as well (NV is also rumored to be attempting to pull 20nm into Q4, but I doubt that, as Apple/Qcom will probably get it first). The A57 is a huge leap over the A15.
You're claiming the K1 isn't in devices yet, but at the same time telling me Moorefield is coming (and btw not until late H2, which coincides with the Denver K1 and the A57s)... so not in devices for a while either, right? What device has leapfrogged ARM at 3W?
http://blog.gsmarena.com/nvidia-tegra-k1-benchmarked-ac...
http://www.evolife.cn/html/2014/77075.html
(The 2nd link was the source of them, I think.)
Those are most of the K1 benchmarks. Point me to some Intel scores toppling them; those are not just GPU benchmarks in there. So far I've only seen press materials on Merrifield, and running WebXPRT is dubious at best at this point.
http://vr-zone.com/articles/nothing-redeeming-intel-mwc...
As discussed above, it's probably Intel-optimized only (reminds me of the BAPCo SYSmark fiasco). Feel free to google Z3480 and give me some links to victories. As gaming takes up 70% of our time on mobile, I fail to see a small CPU difference doing anything. It's all about the games going forward. The slashgear benchmark I linked shows the K1 isn't bad against a 15W laptop chip on the GPU side (and I'm sure Iris uses more watts than the HD 4400).
http://anandtech.com/show/7314/intel-baytrail-preview-i...
Bay Trail Z3770 SunSpider: 566; the K1 above scored 501 (K1 wins; JScript, lower is better).
Kraken: Z3770 4686, K1 3958 (K1 wins; lower is better).
Google Octane: Z3770 = 6219 while K1 = 6450, but you can't really compare, since it's v1.0 vs. v2.0 on the NV K1. Intel beats Shield by 50% here, though, and the K1 shows ~20% faster than T4 in 2.0, so I'd expect Intel to win here, but nothing earth-shattering, and this is the crappy K1, not the Denver version coming for Christmas.
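For the lower-is-better browser benchmarks just quoted, the percentage gaps are easy to work out from the posted numbers (scores copied from the Anandtech and K1 links above):

```python
# Lower-is-better (time-based) browser benchmark scores quoted above:
# Bay Trail Z3770 vs. Tegra K1. For times, speedup = slower / faster.
scores = {
    "SunSpider (ms)": {"Z3770": 566, "K1": 501},
    "Kraken (ms)":    {"Z3770": 4686, "K1": 3958},
}

for bench, s in scores.items():
    speedup = s["Z3770"] / s["K1"]
    print(f"{bench}: K1 is {(speedup - 1) * 100:.1f}% faster")
# SunSpider works out to a ~13% K1 lead, Kraken to ~18%.
```

So on these two tests the K1 leads by roughly 13-18%, which is useful context for the 30-50% figures argued over later in the thread.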
The K1 scores 44225 in AnTuTu, which blows away the 801/T4 as shown. The new AnTuTu brings Intel down from the previous BS benchmark results, so I'm not sure where they stand now:
http://www.eetimes.com/author.asp?section_id=36&doc_id=...
AnTuTu nonsense fixed. Again, CPU.
It loses to Qcom in Browsermark at the anandtech link, so that's yet another Java benchmark NOT showing Intel doing well.
AndEBench Java again shows Qcom kicking Intel's butt big time (699 for Qcom to Intel's 428... OUCH). I'm not seeing what you're saying here.
It wins native AndEBench, but the 801 is faster than the 8974/800, so it still won't be a blowout, and the 805 is upon us in the next month or two, so that's its real competition on the Qcom side now. We can see the K1 tripling T4 in Basemark X and more than doubling the S801. In Basemark OS II, the K1 again doubles T4 and blows away the 801 as well. We can't compare Intel here yet AFAIK, but we know how it does in a number of benches vs. the S800, and we see the damage the K1 does to the S801, which is faster.
Point me to some benchmarks we can compare to the K1's where Intel is winning on Android. I'm not really interested in hearing Windows stuff, as that really isn't comparable. I see Qcom/K1 doing very well against Intel. You're acting like only Intel can improve massively, but as shown, the K1 can triple T4, and we are NOT even talking about the A57s that will be hitting from Qcom and NV (plus Apple's custom core) soon. Qcom's custom core is a long way off, but Apple's and NV's land this year. We have already seen Denver benchmarks.
What the heck are you looking at for benchmarks? Links, please; statements are useless to me without links. I see you claim a lot, but I prove otherwise. Proof please, not your opinion. If you have some, I'll be happy to look, as I have money at risk.
If you have the benchmarks, post the links. You saying something a dozen times doesn't make it real. PROVE it. I know what you CLAIM, but I'm more interested in what can be PROVED. I don't see any Intel dominance; a bright spot here and there showing possible potential, but nothing concrete showing the dominance you're claiming.
somebodyspecial
June 2, 2014 6:25:59 PM
Vlad Rose said:
You might want to read that article again. The Tegra K1 outperformed the Haswell in graphic benchmarks only, not CPU power. And even so, it's a HD4400, not a HD4600 or Iris 5200. Nvidia better be able to win in the graphics department, considering that's what their company is based on initially.
"Samsung/TSMC will be on their new processes before Intel moves to 14nm."- And they will still be slower than the slowest chip in Intel's x86 lineup (outside possibly Atom).
"3Dfinfet was supposed to take ARM down." News to me that that was ever posted anywhere. The point of making 3D transistors was the ability to be able to cram more transistors into a smaller space; as mentioned by wikipedia.
"Multigate transistors are one of several strategies being developed by CMOS semiconductor manufacturers to create ever-smaller microprocessors and memory cells, colloquially referred to as extending Moore's Law."
"More damage is heading Intel's way as ARM can far more easily adapt to MORE watts than Intel can to LESS." ... You may want to take a course in computer design before making a statement like that, then realize it's a false assumption. ARM by nature is designed to run at low power vs. high performance. That is what RISC (which ARM is based on) is designed for. Throwing more watts at it will not make it perform at the same level due to it's fundamental design. Oh and with Intel doing more with less wattage? What do you think these instruction sets Intel keeps adding to their CPU are? (MMX, SIMD, etc). They are RISC instructions.
Again, where is Intel losing money on their CPU chips? I know you may believe all the hype Nvidia creates, but when it comes to market size, they're just like a Chihuahua; a small dog with a constant bark, surrounded by a bunch of Great Danes (Intel, Apple, Qualcomm). Intel isn't going anywhere anytime soon.
Intel is losing $3 billion a year on mobile.
http://www.eetimes.com/document.asp?doc_id=1322263
JP Morgan is telling them to give it up... LOL. You don't read financial reports, do you?
"The mobile and communications group saw a $3.1 billion operating loss in 2013, with 1Q 2014 losses hitting $929 million and revenues at $156 million. While Intel officials acknowledged the loss, several were quick to call recent financial numbers an “investment” in the mobile ecosystem. "
You really want to argue with Intel about their OWN loss statements? It is expected to be a $4 billion loss this year if you extrapolate Q1's rate of a $929M loss! I don't believe hype; I believe BALANCE SHEETS and EARNINGS reports. Pile that on top of comments from major financial institutions telling you the pain will not end, and you should get my reasoning here. You think JP Morgan is full of idiots who can't do math?
The JP Morgan statement read: "We continue to believe Intel will lose money and not gain material EPS from tablets or smartphones due to the disadvantages of x86 versus ARM."
Let me know when they're wrong. Until then, I'm right.
Even Intel's comments say they merely expect to HOPEFULLY bring their cost structure down some as they work through 2015. That sounds like a full year+ of complete losses in mobile, even from their own mouths.
http://www.fudzilla.com/home/item/33701-yes-intel-is-su...
https://webcache.googleusercontent.com/search?q=cache:w...
That's the Google cache of the pcworld article; since this is kicking in, you see the ~$930M Q1 loss, and it will continue as shown.
No need to take a course; as InvalidError just showed, there is nothing stopping ARM from putting out an 85W chip at some point and coming full bore after Intel's desktops, and that is exactly what I expect with the M1, P1, V1, etc. revs after Denver. Boulder is already on the roadmaps for servers too. Intel is so far losing $3.1B a year on mobile trying to tackle ARM on THEIR turf. We will see if ARM loses any money trying to take INTEL turf. So far they have already taken 21% of the ENTIRE notebook market in a year. I'd say the writing is already on the wall, even without Intel buying NV to get a great foothold just as NV takes off with desktop GPUs in SoCs. NV already has a custom core on the books and benchmarked. It would be FAR better on Intel's 14nm, and so would NV video cards. Wins all around. You're confused if you think an ARM A57 at 4GHz won't be a problem for Intel's desktop chips. The power is there; they just need apps/games to bring it all home, which is already happening for games, and apps are next. A dual-core Denver performs just like a quad-core A15 r3. Now double the dual-core Denver to a quad and run it at 4GHz. You think that chip sucks? HECK NO.
What do you think you get when you can shove more transistors into a chip? MORE PERF. 3D FinFET was supposed to vault Intel ahead of everyone, and it got them nowhere vs. ARM. What do you think it means to extend Gordon Moore's law? You can keep adding more inside, which lets you keep up the perf increases instead of hitting the wall. You are building my case, not yours.
Let me know when Iris Pro 5200 gets into an Android tablet at ARM watts.
IF Intel were beating Qcom/ARM, they wouldn't have to subsidize their chips as they are now, pushing ever harder with subsidies. Having to BUY your way into a design shows you're losing. The problem for Intel is that Nvidia is no longer just producing GPUs. Denver is an IN-HOUSE CPU, and you should google the team behind it: major CPU guys from all the top designs of the past. Here's a quick one:
https://webcache.googleusercontent.com/search?q=cache:s...
Patrick Moorhead is no dummy (11 years at AMD; Paul was there too; he has a ~dozen AMD patents). He's the #1-ranked analyst, last I checked. He helped come up with the AMD64 logo... LOL. Among other more notable things:
http://www.moorinsightsstrategy.com/about/
Chock full of tech brains. But what does Moorhead say about NV's coming chips?
"Let’s look at Nvidia’s processor team. They have been in existence since 2006 and have hardened multiple, “off-the-shelf” ARM cores. Unknown to most, Nvidia’s engineers have been working on their ARM 64-bit Denver for at least three years, since before the CES 2011 announcement. Hailing out of Portland, Oregon, the team consists of former CPU jockeys from Intel, AMD, HP, Sun, and Transmeta with experience in superscalar, OoO (out of order) execution design, micro-code, VLIW, hyper-threading, and multi-core. Does this experience and background guarantee success? No, but it provides the opportunity to succeed, and succeed big if you look at what others have accomplished."
These guys don't suck. They have covered all the bases to take on Intel without WINTEL at all. From the CPU and GPU to the bus (NVLink, removing the need for HyperTransport, InfiniBand, etc.), they're covered. This isn't NV Kool-Aid; google Patrick M if you don't know who he is. I used to sit in Intel/AMD/NV conferences, so despite not taking engineering, I know a thing or two about these people, since I was a reseller for all of them for 8 years. I'm by no means ignorant about what is going on here.
You might want to READ more articles yourself. You're wasting my time.
More claims with no proof or anything to back your statements. Intel is losing ground; the data doesn't lie. Intel gains nothing from 14nm, as everyone will move to 20nm before Intel's SoCs go 14nm. A step after that, everyone gets 3D FinFET AND a die shrink while Intel gets just another shrink. They won't fab their way out against Samsung making $32B a year to Intel's $10B, and Samsung is NOT alone (IBM laid the groundwork, and Samsung/GF are running with the results now). Without something massively changing, WINTEL is in trouble vs. the ARM/Android armada. I could go off on MS the same way (I've laid that case out many times here already), and I think they will be the bigger loser unless they successfully run to greener pastures (maybe cloud crap offsets OS losses, god forbid we end up with Common Core crap, etc.) or maybe they try to buy NV. They might have more luck, since they don't have the bad blood Intel has over the chipset business it killed (and the lawsuit, etc.). But this would do them no good if they didn't start pumping out Android Nokias with NV SoCs. They already make $5 for every Android device sold... LOL. Buying a SoC/GPU maker is definitely a move toward vertical integration now that they own a phone business, and they earn far more than Intel, so it's easy for them to lay out $25B (they made $22B TTM).
http://money.msn.com/business-news/article.aspx?feed=MW...
Piling up Computex awards already for K1/Grid: a Golden Award for the K1! Kool-Aid? NOPE. Apparently a LOT of people are drinking it, correct? Grid testing went from 100 companies, to 200, now 600, growing massively by the quarter.
somebodyspecial said:
No need to take a course, as Invaliderror just showed there is nothing stopping ARM from putting out a 85w chip at some point and coming full bore after Intel's desktops
ARM is already trailing 30-50% behind Intel's Bay Trail in most CPU-based benchmarks, and Bay Trail has about half the IPC of Intel's conventional CPUs, so ARM is going to need more than the A57 to become a credible threat there. There is no point in pushing a "low-power" architecture to 100W if it ends up performing barely on par with a 35W i3.
Integrating all the tricks of a mature CPU architecture into ARM will take several years of trial and error, as the individual ARM chip vendors with in-house core-design ambitions adapt them to their specific implementations. ARM is not going to magically catch up with mature CPUs overnight; ARM chip designers will have to go through multiple large-scale redesigns, just as AMD and Intel have with x86, before settling on a final general form.
somebodyspecial
June 3, 2014 9:51:41 AM
InvalidError said:
somebodyspecial said:
No need to take a course, as Invaliderror just showed there is nothing stopping ARM from putting out a 85w chip at some point and coming full bore after Intel's desktops
ARM is already trailing 30-50% behind Intel's BayTrail in most CPU-based benchmarks and BayTrail has about half the IPC of Intel's conventional CPUs so ARM is going to need more than A57 to become a credible threat there. No point in pushing a "low-power" architecture to 100W if it ends up performing barely on par with a 35W i3.
Integrating all the tricks of mature CPU architecture in ARM will take several years of trial and error for individual ARM chip vendors with in-house ARM core design ambitions to adapt to their specific implementation. ARM is not going to magically catch up with mature CPUs overnight since ARM chip designers will have to go through multiple large-scale re-designs just like AMD and Intel have with x86 before settling on a final general form.
"ARM is already trailing 30-50% behind Intel's BayTrail in most CPU-based benchmarks"
Links to these benchmarks, please. I gave many of mine, with benchmarks. They do not show what you say; in fact, quite the opposite in JScript, Java, etc.
You're not paying attention to the teams at play here. Apple has a team (PA Semi etc., again loads of people from everywhere), Qcom has one, and both have been doing in-house design for ages; NV's team is highly skilled, with experience on many cores, buses, etc. BEFORE they came to NV. What do you think ARM has been doing for the last 10 generations? Final form? There is no difference between what AMD/Intel are doing and what the ARM gang is doing: different architectures but the same tactics. You are vastly overestimating Intel's skill set vs. everyone else's. Many of these people are ex-employees of every big house (some worked at multiple top dogs: DEC, Sun, Intel, AMD, Fairchild, Motorola, etc.). These teams are NOT kids who just graduated from college, for any of the names mentioned. Patrick Moorhead was explaining this precisely in NV's case (same with all the others). Intel isn't dealing with newbs; they are dealing with 10-25-year CPU veterans, many of whom used to build Intel/AMD/DEC Alpha/Sun/Transmeta CPUs. They've been doing your "experiments" for 20 years at all of the top semi companies. You are vastly underestimating the skill set of the teams working the ARM side.
More importantly, you seem to think the CPU is still king. I would have agreed 5-10 years ago. Now it's the GPU's turn, with a decent CPU to help out. There is no need to be tops in CPU to take down Intel's margins and put a real crimp in their earnings. Are there more CPU-limited workloads or GPU-limited ones? Do we want FOUR Intel i7-4770s in our PCs, or quad-SLI video cards? Do supercomputers run on tons of Intel CPUs, or is it the Teslas in massive quantities that pump out the power in the Top500 supercomputers? IT IS THE GPUS. There is no need for more than a Tegra 3 to feed NV's next-gen supercomputers (which was partially the point of the CPU they made: to feed Tesla and stop letting Intel/AMD do it).
I never said Intel would be out of business next year. I'm guessing we'll be watching the damage for 5-10 years, and even then I don't think Intel or MS goes bankrupt. I just think we are witnessing the next changing of the guard.
Again, can't you give some benchmark links with Intel on Android showing what you say, please? All I see is YET ANOTHER opinion post with ZERO data. You keep making counterpoints (if that), but with no supporting evidence. PROVE SOMETHING or quit wasting my time. ARM has already "magically" caught 21% of Intel's market share in notebooks, and they are not even 64-bit yet. They have already caused Intel to "magically" lose $3.1 billion chasing them, on track for $4 billion based on Q1's mobile losses. They just started this price-matching crap in November, and we can see it accelerating Intel's losses in Q1.
I don't want to hear your 15th opinion again. I want YOUR DATA. I don't care if it's S800/801/805/K1; just show me Intel beating them handily in the stuff you mentioned. I've already downed your JScript, Java, and other CPU claims with DATA. So where is YOUR data?
somebodyspecial said:
"ARM is already trailing 30-50% behind Intel's BayTrail in most CPU-based benchmarks"Links to these benchmarks please. I gave many of mine with benchmarks. They do not show what you say, and in fact quite the opposite in jscript, java etc.
You should re-read the articles you linked and pay attention to "lower is better" vs "higher is better" because my 30-50% better is taken directly from your links.
In the Anandtech link, Atom wins all CPU disciplines except Java and Browsermark by a wide margin. In the Browsermark case, Atom basically ties for second place with three others with a ton more devices not far behind, which seems to indicate Browsermark does not scale particularly well as a CPU benchmark.
somebodyspecial
June 3, 2014 3:36:34 PM
InvalidError said:
somebodyspecial said:
"ARM is already trailing 30-50% behind Intel's BayTrail in most CPU-based benchmarks"Links to these benchmarks please. I gave many of mine with benchmarks. They do not show what you say, and in fact quite the opposite in jscript, java etc.
You should re-read the articles you linked and pay attention to "lower is better" vs "higher is better" because my 30-50% better is taken directly from your links.
In the Anandtech link, Atom wins all CPU disciplines except Java and Browsermark by a wide margin. In the Browsermark case, Atom basically ties for second place with three others with a ton more devices not far behind, which seems to indicate Browsermark does not scale particularly well as a CPU benchmark.
Try again... the K1 isn't in there; that is why I gave the K1 links to compare against anandtech's Atom scores. It isn't winning. It also lost AndEBench Java; it only won the native test in that one. You're exaggerating by saying ALL disciplines when it loses half of them.
http://blog.gsmarena.com/nvidia-tegra-k1-benchmarked-ac...
Pull the K1 numbers from there, or from the evo link I gave before, among others. Compare those to the anandtech scores and Intel is losing. Try again, please. Intel is beating some of the other chips, but against the K1, you can see it isn't the same story. Nice try, though.
Yeah, Intel won against some old chips, NOT the K1. NV wins Kraken and SunSpider, and as noted, Qcom wins narrowly in Browsermark, and that's just the S800 (surely it loses by more to the 801, and now the 805, right?). So you're using old data to make a point that no longer exists. The S805 has been benchmarked as well, and it is faster than what is in there (those were 800s; they have the 801 now, and the 805 is just hitting). The competition has already moved, twice in Qcom's case, and the 810 hits shortly after Denver at Christmas (probably a new Samsung in there at some point too). Intel will also have problems getting into devices, as their camera support is lacking (13MP, vs. up to 100MP for NV and ~55MP for Qcom), among other issues. Samsung is about to ship 16MP, with 20MP shortly after, so no wins coming for Intel there even with Moorefield. Mobile is about more than just the CPU; in everything else, the GPU is king these days. Other things matter too, but when all is said and done, without a good GPU you're not having as much fun or speed (games, or pro apps with CUDA).
http://www.fool.com/investing/general/2014/03/09/intel-...
Where are all the Android Bay Trail devices? It's still difficult to find benchmarks that aren't from an FFRD, and that info is old today, as everyone has moved on.
http://www.fool.com/investing/general/2014/05/25/will-i...
Comparing the Qualcomm S805, K1, and Moorefield: not impressed. Bandwidth and camera are lacking, and there's no high end for Intel. We'll have to see how the tablets work out; I expect the K1 to do well there for sure. Intel still has to show us some Android tablets from the LAST chip, since they sure missed the Christmas promises, right? Then again, I predicted that all along on here when people pointed to the slides.
http://www.anandtech.com/show/8035/qualcomm-snapdragon-...
Bay Trail is already losing SunSpider even to the Tegra 4. Same story in Kraken with the Bay Trail T100 vs. the T4 and others (it loses to both the TF701T and Shield). That's TWO CPU losses, and Cyclone also shows well against Intel in Kraken (another Intel loss).
In Octane 2.0 it again loses to both T4 devices; the TF701T scores 5681, so Intel is already behind, and the K1 scores 6450. Intel doesn't win anything here. Try again.
So we have Octane 2.0, Kraken 1.1, and SunSpider 1.0.2, all with INTEL LOSING. The T4 also shows VERY well in the Basemark tests, and the K1 blows it away (3x faster in Basemark X and almost 2x faster than the T4 in Basemark OS II, as the K1 benches show). Intel won't be winning those against the K1 either once we get the benchmarks; the K1 scores 1448, with everything else at AnandTech at 1158 or less (iPad Air, S805, etc.). GPU scores for the T100 in the 805 preview at AnandTech are just terrible. I'm really not seeing your evidence. I digress.
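Taking the scores quoted above at face value, the relative leads are easy to sanity-check. This is a quick sketch using only the numbers cited in this thread, not fresh measurements:

```python
# Quick check of the relative leads claimed above, using only the
# scores quoted in this thread (not independently measured).

def pct_lead(winner: float, loser: float) -> float:
    """Return the winner's lead over the loser as a percentage."""
    return (winner - loser) / loser * 100.0

# Basemark OS II scores as cited: K1 at 1448, next-best cluster at 1158.
print(f"K1 lead over next best: {pct_lead(1448, 1158):.0f}%")      # ~25%

# Octane 2.0 scores as cited: K1 at 6450 vs. TF701T (Tegra 4) at 5681.
print(f"K1 lead over TF701T in Octane: {pct_lead(6450, 5681):.0f}%")  # ~14%
```

So by these numbers the K1's lead is roughly 25% in Basemark OS II and 14% over the Tegra 4 in Octane, with Bay Trail sitting behind both.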
Score
0
InvalidError said:
CaedenV said:
I think they have another 2-3 die shrinks before they hit the wall, and then we are going to see major changes in the materials used to squeeze another 2-3 die shrinks before they are going to have to start implementing new instruction sets and architectures to get further efficiencies. It is going to be pretty cool to see, but once we start making major changes to architecture and instructions then we are going to have to say good-bye to legacy applications that have built up over the last 20 years, and that will be a little sad to see.
Intel has already tried going with a "more efficient" instruction set on Itanium, with tons of branch predication and other neat stuff that was supposed to enhance performance and scalability, yet it failed to scale beyond x86's performance.
ARM, Power, SPARC and other ISAs are also failing to outclass x86 on raw performance and power efficiency in many situations. As kludgy as x86 might be, Intel has managed to bring it on par with the best of anything else available today with things like the uOP cache, which nearly eliminates the performance hit and decode complexity of complex instructions (who would have thought ~2GHz dual-core x86 CPUs could be squeezed into 2-3W power budgets only a few years ago?). So it seems unlikely the industry is going to give it up any time soon; too much hassle for little to no gain.
Intel's biggest challenge/shortcoming for SoCs is the IGP. Bump that up a notch or two and Intel would have serious contenders across the board.
I think you are confused. I am not talking about today's performance, I am talking about 10-20 years from now. x86 has seen great leaps in performance almost entirely due to die shrinks and better manufacturing tech that far outclasses the competition. Once we hit the limit of die shrinks then we move to different materials, and once those hit their limit... then what?
ARM was never meant to be, or sold as, an efficient instruction set. It is a low-power instruction set, meaning it can run under certain power envelopes, albeit extremely slowly. x86 has always been more efficient, but requires a higher minimum amount of power to function. The recent improvements in Intel's ability to run at lower voltages have a lot more to do with the materials and manufacturing processes in use than with instruction set improvements. As ARM manufacturers catch up to Intel on manufacturing, they will continue to be able to offer even lower absolute-power options, but ones that draw more power over time for a given workload. Over the next 5 years I think we will once again see Intel and AMD running the devices we interact with, and ARM being relegated to controlling appliances like cars, TVs, and clocks, just as it was before ARM-based smartphones hit the market.
And Itanium failed for a whole host of reasons. The first is that Intel's engineers figured out how to apply the branch prediction they learned from Itanium development to the Pentium 4 (and related Xeon CPUs), which made continued Itanium development redundant and unnecessary (granted, by 'developed' I mean 'hired a bunch of AMD staff'). Beyond that, there were a whole host of software and driver development issues which scared off deployment of such systems compared to Xeon and Opteron offerings that 'just worked'. However, if we hit a sort of end point of development with x86, then other avenues will need to be explored and taken, just as they thought they needed to do with Itanium.
But I am talking about something more than a mere instruction set: a whole new architecture. While complicated, it is possible with modern electronics to move away from binary processors. Maybe we move to a base-8 or base-16 processor, which through native compression would be able to store and process several times more data per clock cycle. The latency of such a processor might be a bit of a bear to get around, but it would offer so much more raw compute and storage capacity that it may be worth the latency hit. That would be extremely interesting to see going forward. But we are talking about a development nightmare, and a complete resetting of our understanding of computer electronics, so there is not a chance of seeing this in the real world until x86 has exhausted its options... and we have several more years until that happens.
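The "native compression" intuition can be made concrete with a back-of-the-envelope check (a hypothetical illustration only, not a claim about any real hardware): each digit in base b carries log2(b) bits of information, so a base-16 digit holds exactly 4 bits, i.e., 4x the data of a binary digit per symbol:

```python
import math

# Information carried by one digit in base b, measured in bits.
def bits_per_digit(base: int) -> float:
    return math.log2(base)

for base in (2, 8, 16):
    print(f"base {base:2d}: {bits_per_digit(base):.0f} bit(s) per digit")

# A base-16 signal line would carry 4x the data of a binary one per
# symbol, which is the density gain being suggested above; whether
# distinguishing 16 voltage levels reliably is practical is the
# separate (and much harder) engineering question.
```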
Score
0
CaedenV said:
I think you are confused. I am not talking about today's performance, I am talking about 10-20 years from now. x86 has seen great leaps in performance almost entirely due to die shrinks and better manufacturing tech that far outclasses the competition.
Totally incorrect: x86's IPC has increased by leaps and bounds too.
The 486DX33 is about 70X faster than the 8086 at 8MHz... so that's a 4X clock increase and 16X architectural efficiency increase. The 100MHz Pentium was around 10X that fast so that's about 3X from clock and 3X from architecture. The Pentium Pro / P2 roughly doubled IPC again thanks to out-of-order execution and on-package L2. Not many game-changing architectural changes left after that and we are already at the P3-S which topped out at 1.4GHz stock. There is the ~30% gain both AMD and Intel got with their integrated memory controllers, another ~20% gain for Intel from transplanting the good stuff from Netburst into the P3 to create Core2 and that's about it for the past 10 years if we omit incremental sub-10% gains.
Without all those architectural improvements, today's chips would be ~100X slower on IPC before including multi-core and hyperthreading which raise this bar to ~600X. This is very close to the ~800X clock increase since the original 4.77MHz 8088/8086 the whole desktop PC industry as we know it started from.
So, clock rates are only HALF the reason today's x86 CPUs are as fast as they are. Architecture deserves credit for the other half.
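Multiplying out the generational factors listed above gives a rough consistency check. The per-generation numbers below are the ones claimed in this post, not independently sourced data:

```python
# Rough consistency check of the IPC-vs-clock argument above, using
# the per-generation factors claimed in the post (not measured data).

ipc_gains = {
    "8086 -> 486DX33": 16,            # architectural share of the ~70X jump
    "486 -> Pentium 100": 3,
    "Pentium -> PPro/P2": 2,          # out-of-order + on-package L2
    "integrated memory controller": 1.3,
    "Netburst ideas -> Core 2": 1.2,
}

ipc_total = 1.0
for factor in ipc_gains.values():
    ipc_total *= factor
print(f"cumulative IPC gain: ~{ipc_total:.0f}X")     # ~150X, same ballpark
                                                     # as the ~100X claimed

# Clock: 4.77 MHz 8088/8086 to a ~4 GHz modern part.
print(f"cumulative clock gain: ~{4000 / 4.77:.0f}X")  # ~839X, matching ~800X
```

The product lands in the same order of magnitude as the post's ~100X IPC figure (the gap comes from how the omitted sub-10% incremental gains are counted), alongside the ~800X clock increase, which is the substance of the "half and half" conclusion.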
Score
0