
Is Intel too far ahead of its time?

Tags:
  • CPUs
  • Intel i7
  • Product
October 13, 2008 3:50:14 PM

OK, most here have seen this http://translate.google.com/translate?hl=en&sl=zh-CN&tl...

Now, as these numbers indicate, i7 seems focused on a few things that are currently in low demand. Multi-threaded apps are the future, right? i7 kills in MT, no doubt about it. It does nicely in servers as well, with its memory enhancements. But here's the problem. Just as we saw AMD come out with 64-bit instructions/capability on their CPUs, to this day we see little use of it. 64-bit OSes just aren't commonplace. To me, with 64-bit it's all or nothing, since 64-bit can actually be slower unless it's fully implemented. We have seen a very slow migration of apps toward MT, and though there are a few bright spots, like Valve in gaming, for the most part it's been dismal.

Some will say that's eventually going to happen. While this may be true, there are always some apps that are best run single-threaded, and only two things will make the change happen: competition and the economy. If the SW companies don't see a need to convert their products to MT, they won't, unless their respective competitors do so and become a threat. Companies don't just spend money for the heck of it. If a company feels it'll greatly impact their bottom line, you can bet they will, but that again depends on the economy.

Most companies only do a yearly revamp of their SW, and even then the amount of change, and how it's done, is determined by both the economy and competition. So these SW companies will move as the market moves, with some areas adopting MT quicker than others.

Now, a lot of the most popular apps being run on the desktop today have a video component to them, such as encoding, and that brings us around to the GPU. As GPU capabilities are used more and more in these areas, the competition between CPU and GPU isn't heating up the way one would think. The GPU is simply superior for these types of apps, which is why we will eventually have Larrabee. But this is one area where Intel is actually behind, so the traditional GPU makers, nVidia and ATI, currently have a huge advantage and time to establish themselves in this particular market, and that leaves the CPU out in the cold there. With DX11 coming, and a few possible enhancements from it, the GPU will only be pushed higher in this regard.

So, what do you make of the numbers from my link? Currently it looks as though i7 is so-so in gaming, and only slightly faster at ST. And what do people think of my market analysis? Will we actually start to see the volume of MT apps increase sooner, or faster, than it currently is?


October 13, 2008 4:12:08 PM

Well, basically it could happen if the Core i7 became the standard in servers, which I doubt, because of transition time and the simple fact that companies can't just abandon their old servers.

The main reason people don't accept the new is that they don't want to abandon their old habits, like the habit of single-threaded applications.

Too far ahead? Nah, more like they're trying to create their own standards, because if that happens there's even less chance of AMD fighting back, unless they start copying Intel again, and then come the lawsuits again, etc.

October 13, 2008 5:17:16 PM

It does not look too far ahead by any means.

I can browse the web and do most of my daily tasks just fine on a 3-4 year old computer. However, I have some tasks that don't go so fine.

So folks won't buy the i7 because it makes EVERYTHING a lot faster.
They will buy it because it makes SOME things A LOT faster.

Now that new hardware is being released, new software will be created to utilize that power.

What Intel does is push new advances from the top down.
Not everyone will need the power, but those who do will pay the premium to get it.
As Intel converts its factories over, supply increases for the new and decreases for the old, and prices adjust so the hardware becomes more mainstream. By that point, more software will exist for it.

October 13, 2008 5:25:09 PM

Isn't this the Pentium Pro all over again?
October 13, 2008 5:38:56 PM

Sorta my thoughts as well. It really doesn't matter what Intel does, as I tried to lay out in the software scenario. I'm not saying I'm right about how fast this transition will go, but it'll be at least a year before we see a lot of improvements in this direction, as software makers just won't/don't change their apps on a whim, but more on a yearly cycle, if that. I know they CAN do it, but the heart of it is, WILL they? The economy and competition limit this growth, or more importantly, control it. Making it available doesn't mean the SW makers are going to jump on it.
October 13, 2008 5:48:53 PM

In the research world there would be many applications. For small groups of researchers it might be useful to have a powerful server for running calculations. Even if the programs are not multithreaded, people could run many copies at the same time. It is not all about playing games or encoding the party movies.

PL
October 13, 2008 6:05:10 PM

Simply put, Intel is finally catching up in the server market. They are integrating AMD features into their new lineup, along with old Pentium features, to make a chip that will destroy AMD in the x86 server and supercomputing markets. If you look now, AMD is still competitive with the Core 2 architecture in many server and supercomputing benchmarks. i7 is Intel's move to push AMD out of that market as well and start regaining its x86 monopoly.
October 13, 2008 6:15:30 PM

stridervm said:
Well, basically it could happen if the Core i7 became the standard in servers, which I doubt, because of transition time and the simple fact that companies can't just abandon their old servers.

The main reason people don't accept the new is that they don't want to abandon their old habits, like the habit of single-threaded applications.

Too far ahead? Nah, more like they're trying to create their own standards, because if that happens there's even less chance of AMD fighting back, unless they start copying Intel again, and then come the lawsuits again, etc.


Intel's only standard is to regurgitate the x86 architecture. They've been doing it for 40 years now.

We need a change!

"This week Intel will release more details on its Nehalem architecture, the first iteration of which is now known as Core i7. Randy Allen, the senior vice president of AMD’s computing division, said many innovations in Core i7 are really imitations of features such as the integrated memory controller and HyperTransport already in AMD processors.

“I guess on one level it is sort of gratifying. Imitation is the sincerest form of flattery,” Allen said. “But on another level it is somewhat annoying . . . [Nehalem is] not rewriting the book, but rather imitating or photocopying our innovations.”

http://blogs.zdnet.com/processors/?p=192&tag=rbxccnbzd1
October 13, 2008 6:15:33 PM

zenmaster said:

What Intel does is push new advances from the top down.
Not everyone will need the power, but those who do will pay the premium to get it.


Seems like premiums are what every company wants out of us <_<
October 13, 2008 6:21:55 PM

Is Nehalem too far ahead of its time? No, not really, since there are many current programs that will already benefit greatly.

Let me put it this way:

Apart from gaming (which is more GPU limited anyway), what 'killer' single-threaded app actually runs too slow on current CPUs?

Now compare that to how many MT apps that could potentially benefit from Nehalem. I think the answer is clear.
October 13, 2008 6:23:07 PM

I don't know what people are fantasizing about when it comes to multi-threading. It is already being used in the professional applications that benefit from it, like Adobe apps, Maya, office apps, video encoding, etc. For example, Photoshop has already been modified to use up to 8 cores.

Multi-core (4+) CPUs will mostly benefit users that run more than one application at a time, or that need to run the same application for more than one user at a time, as in a server.

Using it in games is limited, and I don't see game developers going too wild with MT, since the vast majority of their customers have a dual-core CPU and it adds to the complexity of debugging (i.e., costs).

Other apps, like Quicken, sure don't need it, as they just take input from the user and then perform a small calculation to update your balance.

There just aren't that many apps that need to perform multiple concurrent long processing tasks, and those that do are already taking advantage of multi-core CPUs.
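To make that concrete, here's a minimal sketch in C++ (an entirely hypothetical filter, not Adobe's actual code) of the kind of long, divisible task that does scale across cores:

#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical brightness filter: each row is independent, so rows can be
// split across cores with no locking at all. This mirrors the structure of
// a "long processing task", not any real Photoshop internals.
void brighten_rows(std::vector<float>& pixels, std::size_t width,
                   std::size_t row_begin, std::size_t row_end) {
    for (std::size_t r = row_begin; r < row_end; ++r)
        for (std::size_t c = 0; c < width; ++c)
            pixels[r * width + c] *= 1.1f;  // independent per-pixel work
}

void brighten_parallel(std::vector<float>& pixels, std::size_t width,
                       std::size_t height) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (height + n - 1) / n;  // rows per thread, rounded up
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i) {
        std::size_t begin = i * chunk;
        std::size_t end = std::min(height, begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(brighten_rows, std::ref(pixels), width, begin, end);
    }
    for (auto& t : workers) t.join();  // wait for every chunk to finish
}

The same function runs on a dual core or an 8-thread Nehalem; the work just divides into more chunks, which is exactly why this class of app already benefits from multi-core.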
October 13, 2008 6:31:32 PM

enigma067 said:
Intel's only standard is to regurgitate the x86 architecture. They've been doing it for 40 years now.
And have made buckets of money in the process.
enigma067 said:
We need a change!
Why?

October 13, 2008 6:34:42 PM

DXRick said:
I don't know what people are fantasizing about when it comes to multi-threading. It is already being used in the professional applications that benefit from it, like Adobe apps, Maya, office apps, video encoding, etc. For example, Photoshop has already been modified to use up to 8 cores.

Multi-core (4+) CPUs will mostly benefit users that run more than one application at a time, or that need to run the same application for more than one user at a time, as in a server.

Using it in games is limited, and I don't see game developers going too wild with MT, since the vast majority of their customers have a dual-core CPU and it adds to the complexity of debugging (i.e., costs).

Other apps, like Quicken, sure don't need it, as they just take input from the user and then perform a small calculation to update your balance.

There just aren't that many apps that need to perform multiple concurrent long processing tasks, and those that do are already taking advantage of multi-core CPUs.



News to me. Got any linkage with that?


Intel's not ahead of their time. They are just trying to drive the market.
October 13, 2008 6:39:12 PM

My workplace still uses DOS programs from the '80s, and has to keep 10 or so computers on Windows 98 at all times to run them. Companies do not revamp software "once a year"; it's closer to once a decade. Heck, the majority of the stuff I use at work was coded by the company interns over the course of a decade! (Funny thing is, these programs are generally more stable than the host OS... go figure.)

Hardware only gets attention under two conditions: it is accepted as standard, and its extra functions are needed. Vista 64 is doing well because 4GB+ of RAM is now accepted as a necessity by some home users (a 32-bit OS can address at most 2^32 bytes, i.e. 4GB, so going past that requires 64-bit), and so it is now getting the extra support it needs. Programs are being written for two cores because two-core systems are commonplace. The issue is, the majority of PCs are still single-core, and until that changes, don't expect proprietary software to catch up.
October 13, 2008 7:01:29 PM

It's the common way for things to happen. We get new hardware. We get some professional programs that use that hardware. It will take 10 years or more to change all those older computers over to the new technology. Other programmers start using well-established hardware solutions. (There has to be demand big enough to make the leap...)

But no... computer technology is never too far ahead of its time. It's only ahead of the smallest limiting factor...
October 13, 2008 8:41:47 PM

Not an Intel fanboy, just making an observation.

Why is it that people are so quick to forget when AMD copies Intel, but never seem to forget it when Intel returns the favour?

Sure, Intel is the big bad company and AMD the smaller underdog, but that doesn't excuse either's behaviour, and some of the stuff Intel is "copying" AMD on is more natural evolution, whereas AMD has been caught directly ripping off other people's designs.

Anyone remember how AMD's best chips (aka K8) are actually heavily modified Pentium 3s? Intel did the heavy lifting and AMD just modified it. Sure, they did a great job, but at the end of the day it's still an Intel design (at least partially), and as much Intel's victory as AMD's.

Perhaps we should cut back on the fanboyism and look at it: AMD does an amazing job modifying chips (though K9 and K10 prove they still need a bit of practice at designing a new chip :p ) while Intel does a great job designing new ones. The two will always feed off each other. If they stopped doing things because AMD or Intel had already done them, it'd be one dead market and we'd suffer.
October 13, 2008 8:57:01 PM

enigma067 said:
Intel's only standard is to regurgitate the x86 architecture. They've been doing it for 40 years now.

We need a change!

"This week Intel will release more details on its Nehalem architecture, the first iteration of which is now known as Core i7. Randy Allen, the senior vice president of AMD’s computing division, said many innovations in Core i7 are really imitations of features such as the integrated memory controller and HyperTransport already in AMD processors.

“I guess on one level it is sort of gratifying. Imitation is the sincerest form of flattery,” Allen said. “But on another level it is somewhat annoying . . . [Nehalem is] not rewriting the book, but rather imitating or photocopying our innovations.”

http://blogs.zdnet.com/processors/?p=192&tag=rbxccnbzd1


Yeah, people like you tend to forget Intel's attempt to move everyone to a true 64-bit CPU: IA64, aka Itanium.

If anyone is to blame for us still being stuck on x86, it's AMD with their x86-64. Yep, good ol' AMD threw that out, and it beat IA64, which is great at 64-bit but not as good with 32-bit. So now we wait for something else.

JDocs said:
Not an Intel fanboy, just making an observation.

Why is it that people are so quick to forget when AMD copies Intel, but never seem to forget it when Intel returns the favour?

Sure, Intel is the big bad company and AMD the smaller underdog, but that doesn't excuse either's behaviour, and some of the stuff Intel is "copying" AMD on is more natural evolution, whereas AMD has been caught directly ripping off other people's designs.

Anyone remember how AMD's best chips (aka K8) are actually heavily modified Pentium 3s? Intel did the heavy lifting and AMD just modified it. Sure, they did a great job, but at the end of the day it's still an Intel design (at least partially), and as much Intel's victory as AMD's.

Perhaps we should cut back on the fanboyism and look at it: AMD does an amazing job modifying chips (though K9 and K10 prove they still need a bit of practice at designing a new chip :p ) while Intel does a great job designing new ones. The two will always feed off each other. If they stopped doing things because AMD or Intel had already done them, it'd be one dead market and we'd suffer.


I agree 100%. I kinda get sick of the current "Intel is copying AMD with Nehalem" crap that's going around recently. The fact is that Intel has been working on, and has had, various IMCs throughout the years. This is just the one that works the best. Sure, AMD had theirs in mass production first, but what does that matter? What matters most is how this benefits us, since we will now have the same design concept for both chips, and apps and games should start to take advantage of this, giving us better apps and games to use.
October 13, 2008 9:00:36 PM

DXRick said:
Using it in games is limited, and I don't see game developers going too wild with MT, since the vast majority of their customers have a dual-core CPU and it adds to the complexity of debugging (i.e., costs).


Once you decide to invest the effort to make the game multi-threaded, the number of threads doesn't much matter; if you have one AI thread, you can have a dozen if the CPU supports that many, and if you have one physics thread, you can have a dozen.

Sure, if you write sucky code that requires every thread to keep locking the data it's using, then your performance won't improve much. But the solution to that is to design the game engine properly so you don't need many locks; once that's done, the number of threads can be set based on the capabilities of the CPU, so CPUs with more cores get better AI, better physics, etc.
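A minimal sketch of that idea in C++ (a made-up job pool, not code from any real engine): the worker count comes from the CPU, jobs are independent, and the only lock is the brief one on the queue itself.

#include <atomic>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Toy job system: N workers sized to the CPU at startup. Jobs are assumed
// independent (e.g. one AI update per entity), so no job ever locks shared
// game data; the queue mutex is held only long enough to pop the next job.
class JobPool {
public:
    explicit JobPool(unsigned threads) {
        for (unsigned i = 0; i < threads; ++i)
            workers_.emplace_back([this] { run(); });
    }
    void submit(std::function<void()> job) {
        std::lock_guard<std::mutex> lk(m_);
        jobs_.push(std::move(job));
    }
    ~JobPool() {                  // stop workers; still-queued jobs are dropped
        done_ = true;
        for (auto& t : workers_) t.join();
    }
private:
    void run() {
        while (!done_) {
            std::function<void()> job;
            {
                std::lock_guard<std::mutex> lk(m_);
                if (jobs_.empty()) continue;  // spin; a real engine would sleep
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();  // touches only its own data, so no further locking
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> jobs_;
    std::mutex m_;
    std::atomic<bool> done_{false};
};

// Usage: size the pool to whatever CPU the player has, then submit one job
// per AI agent, physics island, etc.
//   JobPool pool(std::max(1u, std::thread::hardware_concurrency()));
//   pool.submit([] { /* update one AI agent */ });

On a dual core you get 2 workers, on an 8-thread Nehalem you get 8, and the code submitting jobs never has to change.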
October 13, 2008 9:03:33 PM

Maybe they are... but I think that can only be a good thing.
The further we move with technology, the better things can get, I guess...
October 13, 2008 9:56:23 PM

I think everybody forgets that the initial Nehalems are designed for servers, period.

More cores and Hyper-Threading allow more/better virtualization, meaning one Nehalem-based server can do the job of several older ones.

CSI (QuickPath) allows them to compete with AMD's HyperTransport in the 4-socket-and-up server market. Granted, this is a smaller market, but one where AMD dominates.

The initial Nehalem platform is not optimized for games or other home software, so I don't really understand why so many home users are planning to pay the premium price to upgrade to a server platform. The initial advantages for the home market seem very negligible.

I'm not going to consider Nehalem for the home until the Socket 1160 variants (the ones designed for home use) are released in H1/09.
October 13, 2008 10:12:48 PM

exit2dos said:
I think everybody forgets that the initial Nehalems are designed for servers, period.


Maybe you should tell that to Intel, because they're not advertising Nehalem as "designed for servers, period". Quite the opposite, in fact: Bloomfield is supposed to be for desktop users.
October 13, 2008 11:15:16 PM

JDocs said:

Anyone remember how AMD's best chips (aka K8) are actually heavily modified Pentium 3s? Intel did the heavy lifting and AMD just modified it. Sure, they did a great job, but at the end of the day it's still an Intel design (at least partially), and as much Intel's victory as AMD's.


The K8 design (Athlon 64) was a copy of the Pentium 3? Do you have any basis for that claim? I thought they were improved K7 designs... Hmm...

If you are referring to their K7 design, I thought that was a combination of the K5, K6 and DEC microprocessor designs?
October 13, 2008 11:15:46 PM

I think a Pentium III 450 overclocked to 566MHz, with 570MB of 133MHz SDRAM, like the one I'm using to post right now, should be plenty for everyone!
Seriously, it actually does pretty well for normal web browsing, YouTube, and word processing, but obviously it's a dinosaur.
Gaming is the most demanding sector for the average domestic PC user! For the tasks I mentioned above, even a PC 10 years old, like the example I gave, is enough, if you don't dump plenty of viruses and trojans into it.

But the future is forward, not backwards. Of course, if you don't have a PC you should go for the latest your wallet can handle. Aside from that, many people are misled by salespeople into buying a new machine because their current one is, what, 3 years old?

Bah, waste.
October 14, 2008 12:01:32 AM

MarkG said:
Maybe you should tell that to Intel, because they're not advertising Nehalem as "designed for servers, period". Quite the opposite, in fact: Bloomfield is supposed to be for desktop users.


You are quite right. Intel itself is over-marketing.

My point is that from the home-user standpoint, the whole Bloomfield platform is just like any other "Extreme" CPU, i.e. premium pricing for a questionable (home) performance gain.
October 14, 2008 1:04:02 AM

None of this really matters. It's just another product launch. We've already seen some "early" performance numbers, and they aren't awe-inspiring, to be honest.

This isn't Pentium to Core 2 all over again, no need to pretend it is. i7 is much less significant. These review sites, random forums, random people, and Intel "holding info" are just pumping it up to be something it's not and won't be. People with their E8400s and Q9450s will be fine for quite a while.

This is not a fanboi rant, as I'm obviously not an AMD user. I'm just pretty tired of the false assumption being thrown around in forums that people's current stuff is going to be garbage compared to i7. Not saying this particular thread is about that, because it's not.
October 14, 2008 1:51:07 AM

sonoran said:
You mean like a completely new architecture, totally unencumbered by the legacy of x86? Sounds like a great idea! Oh wait - Intel already did that...

http://www.intel.com/products/processor/itanium/index.htm?iid=servproc+body_itanium2subtitle

I guess you just forgot.


The Itanium isn't the best example to use for an alternative to x86, as it has pretty much been a failure as a processor.

Honestly, I don't see how anyone can reasonably say that moving to a new, open standard would not present major benefits. Having an open standard that was not controlled by a single company would allow many more companies to enter the CPU market, which would lead to more innovation and better overall products for end users.
October 14, 2008 3:02:51 AM

^While I agree on some things, I disagree that you can't consider Itanium an alternative to x86, because it is one. It's an entirely new design, and where it fails in 32-bit/x86 programs it excels in 64-bit.

The only reason it "failed" comes down to two factors: 1. the horrible backwards support, being able only to emulate x86 code, and 2. AMD releasing x86-64, which ran x86 perfectly and 64-bit just as well. The latter is the biggest reason IA64 didn't fly.

Either way, it is not just Intel's fault we are stuck with x86. AMD is just as much to blame.

While having an open standard would be nice, I fear the compatibility of it all would be a horrid nightmare, especially for IT guys, who would have to learn so many standards.
October 14, 2008 3:07:05 AM

exit2dos said:
I think everybody forgets that the initial Nehalems are designed for servers, period.

More cores and Hyper-Threading allow more/better virtualization, meaning one Nehalem-based server can do the job of several older ones.

CSI (QuickPath) allows them to compete with AMD's HyperTransport in the 4-socket-and-up server market. Granted, this is a smaller market, but one where AMD dominates.

The initial Nehalem platform is not optimized for games or other home software, so I don't really understand why so many home users are planning to pay the premium price to upgrade to a server platform. The initial advantages for the home market seem very negligible.

I'm not going to consider Nehalem for the home until the Socket 1160 variants (the ones designed for home use) are released in H1/09.


Well, at least one person gets it. More did, but they got caught up in a gib gnab.
October 14, 2008 3:32:20 AM

jimmysmitty said:
^While I agree on some things, I disagree that you can't consider Itanium an alternative to x86, because it is one. It's an entirely new design, and where it fails in 32-bit/x86 programs it excels in 64-bit.

The only reason it "failed" comes down to two factors: 1. the horrible backwards support, being able only to emulate x86 code, and 2. AMD releasing x86-64, which ran x86 perfectly and 64-bit just as well. The latter is the biggest reason IA64 didn't fly.

Either way, it is not just Intel's fault we are stuck with x86. AMD is just as much to blame.

While having an open standard would be nice, I fear the compatibility of it all would be a horrid nightmare, especially for IT guys, who would have to learn so many standards.


Personally, I'd say it's the software industry that carries most of the blame for the x86 instruction set becoming so dominant. Many of us probably remember the last big (failed) push to bring RISC architectures to the desktop market back in the late '80s and early '90s. For years all of the tech publications were gushing about how RISC chips were going to be much more efficient than CISC chips and how that would revolutionize the industry. In the end, however, software vendors decided that it was too complicated to write code that would work on RISC systems.

Regarding the Itanium, there are many reasons why it has been a failed experiment, the chief one being that the original Itanium was delayed for so long that by the time it was actually released, this supposed "super chip" was actually slower than many of the RISC and x86 processors on the market. My intent wasn't to say that the Itanium isn't an alternative to x86, just that it's not a good example of one, since it has been so unsuccessful. Better examples would be any of the RISC chips from Sun, IBM, or Fujitsu, or even chips using the ARM instruction sets.
October 14, 2008 5:14:16 AM

And how many other promising technologies have died? Not that either RISC or Itanium were the best examples.

Lots of reasons why promising tech dies or never makes it out of the AAA league. Look at SCSI and MiniDisc.
October 14, 2008 5:18:52 AM

A buddy of mine blew $650 on a MiniDisc player when they first hit the market. A month later it was nowhere to be found; dead.
October 14, 2008 5:25:35 AM

Too soon after CD. Sony was on crack releasing it when they did
October 14, 2008 5:29:58 AM

MarkG said:
Maybe you should tell that to Intel, because they're not advertising Nehalem as "designed for servers, period". Quite the opposite, in fact: Bloomfield is supposed to be for desktop users.


Not only is it designed for desktop systems, it will do quite well in the home systems for which it is designed.
There are, in fact, multi-threaded apps that exist and are used on desktop systems that will do extremely well.

Furthermore, it will do extremely well for the non-OCing folks, since the CPU will actually automatically throttle down two cores and run the other two faster when all four are not needed.

Now, most folks here will disable that feature, but it's a nice feature.

I know I could make use of some of the features right away.
However, I will have blown my budget with a new desktop already this year, and likely a new upper-end laptop before year's end.

Come this time next year, I will be all over it and utilizing every drop that it has.

Heck, the current quads are not designed for single-threaded apps, but that does not mean they are not a hot commodity. I suspect that by the time Nehalem has been out for a year, there will be multiple must-have games that make excellent use of the extra cores and multi-threading; and not because of Nehalem, but because of the previous quads. Things will be rolling in soon.
October 14, 2008 5:35:52 AM

turpit said:
And how many other promising technologies have died? Not that either RISC or Itanium were the best examples.

Lots of reasons why promising tech dies or never makes it out of the AAA league. Look at SCSI and MiniDisc.


Itanium isn't a good example; it had a very limited niche and tried to force full 64-bit adoption without the x86 base market. But as far as RISC goes, you'd be surprised. It's still a mainstay for a lot of things, like the ARM chips. Not to mention every AMD processor from the K5 to the K7 Athlon XP was RISC-based internally, with a front end slapped on that let it process x86 code; the K8 is a modified K7 with x86-64 and an IMC, and likewise K8 led to K10. So RISC is actually still around; it's just in places you don't realize.

The problem with the software industry moving to multi-threaded code, especially past 2 threads, is that the software industry almost always waits for the biggest player to go all in on something. In this case they have been waiting for Intel to go native quad. Notice you didn't see 2-threaded desktop games and apps until after the later Pentium Ds, and especially the Core 2s, came out, even though the Athlon X2s had been around for some time. You've got to remember the main compiler used is Intel's, and that's very likely the reason multi-threading is being held back, until they release a new compiler for the native-quad Nehalem. Even 64-bit software didn't start to proliferate until after Intel adopted x86-64. At least now we have a working 64-bit OS in Windows Vista; prior to that you had XP Pro x64, which was more MS dabbling in 64-bit code.
October 14, 2008 5:37:16 AM

I've already mentioned that GPUs are doing encoding and Adobe acceleration, etc., better than a CPU can, so what I'm saying is, besides the server abilities of i7, there's not a lot of room for its design to really do much for the desktop. It all comes down to waiting for better SW that'll take advantage of it, and as we've all seen, who knows when that'll be.

I'd point out that, regardless of people's thoughts on gaming, a faster core speed and a higher or better IPC helps not only gaming but everything you can possibly do: MT, ST, gaming, encoding, etc. Yes, i7 is clearly aimed at server markets, and while Intel takes this direction, the SW makers will slowly head in that direction too, but not any time soon.
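That trade-off is exactly what Amdahl's law describes: if a fraction p of a program's work can run in parallel, n cores give an overall speedup of 1 / ((1 - p) + p / n), whereas a faster core speeds everything up regardless of p. A quick sketch in C++ (the fractions are illustrative numbers, not i7 benchmarks):

#include <cstdio>

// Amdahl's law: the overall speedup from n cores when only a fraction p of
// the work is parallelizable. ST-bound apps (p near 0) gain nothing from
// extra cores, while heavily threaded apps (p near 1) approach an n-fold gain.
double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    const double fractions[] = {0.0, 0.5, 0.9};
    for (double p : fractions)
        std::printf("p=%.1f: 4 cores -> %.2fx, 8 threads -> %.2fx\n",
                    p, amdahl(p, 4), amdahl(p, 8));
    // By contrast, a core that's 20% faster is a flat 1.2x on everything,
    // which is why raw clock/IPC helps ST, MT, gaming, and encoding alike.
    return 0;
}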
October 14, 2008 8:03:43 AM

JAYDEEJOHN said:
I've already mentioned that GPUs are doing encoding and Adobe acceleration, etc., better than a CPU can, so what I'm saying is, besides the server abilities of i7, there's not a lot of room for its design to really do much for the desktop. It all comes down to waiting for better SW that'll take advantage of it, and as we've all seen, who knows when that'll be.

I'd point out that, regardless of people's thoughts on gaming, a faster core speed and a higher or better IPC helps not only gaming but everything you can possibly do: MT, ST, gaming, encoding, etc. Yes, i7 is clearly aimed at server markets, and while Intel takes this direction, the SW makers will slowly head in that direction too, but not any time soon.



So does this mean that Minesweeper can reach 128 by 128 squares at long last...

October 14, 2008 2:52:13 PM

I hear Minesweeper is going 3D! So that's 128x128x128!!!
October 14, 2008 3:29:46 PM

How can anything ever technically be "ahead of its time"? If it's here, then it's that technology's time, and it's time to phase out the old.
October 14, 2008 3:43:18 PM

So, are we all running 64-bit OSes? Using all multi-threaded apps? And can anyone play Crysis on max settings at 19x12? There are more, but those are a few examples where some tech has trickled in without becoming the norm, yet some of it (Crysis) remains insanely popular despite coming too soon.
October 14, 2008 3:48:16 PM

I wouldn't say Crysis came too soon for the tech. It is just a very poorly optimized game. It should have been refined more before release.
October 14, 2008 4:04:54 PM

And Warhead? After the "optimisations"? Neither is fully playable, and they won't be conquered till next year. A better example of a poorly optimised game would be Neverwinter Nights: that's NEVER gotten better, and it still has problems. The point here isn't whether Intel is trying to do something good or not, nor is it to denigrate or excuse them. The point is, how soon will we actually be able to use these improvements in a day-to-day scenario, for the vast majority of users? It'll be a year and a half after Crysis's release before more than a few people can fully play it at the res they want. Does that make Crysis a bad game? I think many people here would argue that it's a good game in its own right, not poorly optimised, but too far ahead of its time when released.
October 14, 2008 4:14:30 PM

Regarding the acceptance and sales of i7, a lot depends on Deneb. Think for a moment. People are saying, "I may wait awhile before I buy i7 and stay with my Penryn." If Deneb comes in fast and performs as well, then Intel's alternative would be to increase the clocks of i7, which would then make it a more attractive option, and it would most likely at that point be in higher consideration for a purchase. Like I've said, it's not core speed, nor a lot of IPC, that we see with i7, but that could change, and, like I said, that would be good for i7's desktop sales.
October 14, 2008 5:35:42 PM

It's not that Intel is ahead; it's that other industries, like gaming, can't always keep up.
October 14, 2008 6:13:30 PM

Wisecracker said:
News to me. Got any linkage with that?


Intel's not ahead of their time. They are just trying to drive the market.


It was years ago, and I don't remember the version. I just remember they had an update specifically for users running systems with multiple CPUs. I also remember reading somewhere about the 8-core support. So my memory is far from perfect, but being a Photoshop user (currently on CS2), I know it's been multithreaded for quite a while now.

*Update*
I found it: http://www.adobe.com/support/downloads/detail.jsp?ftpID=3447
It wasn't that long ago after all.
October 14, 2008 7:04:35 PM

JAYDEEJOHN said:
I've already mentioned that GPUs are doing encoding and Adobe acceleration, etc., better than a CPU can, so what I'm saying is, besides the server abilities of i7, there's not a lot of room for its design to really do much for the desktop. It all comes down to waiting for better SW that'll take advantage of it, and as we've all seen, who knows when that'll be.

I'd point out that, regardless of people's thoughts on gaming, a faster core speed and a higher or better IPC helps not only gaming but everything you can possibly do: MT, ST, gaming, encoding, etc. Yes, i7 is clearly aimed at server markets, and while Intel takes this direction, the SW makers will slowly head in that direction too, but not any time soon.


GPU-based encoding is still in its infancy and has a long way to go yet: http://www.anandtech.com/video/showdoc.aspx?i=3374

Also isn't the GPU based acceleration in Photoshop only for select filters?

I'd agree that faster ST performance invariably leads to faster MT performance too, but I think you'll find Core 2 is pretty much at the pinnacle of ST performance without a radical shift in architecture. Nehalem builds upon Core 2; it's slightly faster in ST (5-10%) but mainly addresses Core 2's weaknesses: the FSB, memory bandwidth, and multi-core/multi-socket scaling.

Maybe that's not so important for the desktop, but there aren't exactly many desktop apps that scream for a 3GHz+ 8-thread CPU, are there? That doesn't mean it's ahead of its time, more that we're at the stage where many common tasks are no longer CPU-bound on the desktop. My company still uses 3GHz P4s, and from the looks of things they won't be upgrading anytime soon, because they're already adequate for the software we run.
October 14, 2008 7:35:05 PM

JAYDEEJOHN said:
I hear Minesweeper is going 3D! So that's 128x128x128!!!



So who needs Far Cry 2 when Minesweeper 3D - 128 bits is on the horizon...

I'm wetter than Free Willy
October 14, 2008 10:01:13 PM

You'd better not free Willy, heheh.

How old is that Anandtech article? And what does it have to do with Adobe? Even Adobe, in its infancy of GPU use, is outdistancing any CPU made or to be made. So to me, having a CPU do these things is foolish, as CPUs are awfully inadequate for them.
October 14, 2008 11:15:17 PM

It seems all your topics tend to return to the same thing, eh Jay?

GPU-based apps are interesting, albeit the majority are still mainly for business and other purposes (medical).

Personally I don't think it will go as far as they want, since it's a move away from an already-standard coding target (x86), and that's where Larrabee will have leverage. Since most apps are already x86-based, it will be much easier to just port them over to Larrabee than to rewrite them.

Although I have heard CUDA is C/C++ based, so it should be easier than, say, rewriting for Cell; but still, it's money, and money is something a company doesn't want to spend.
October 14, 2008 11:21:10 PM

JAYDEEJOHN said:
How old is that Anandtech article? And what does it have to do with Adobe? Even Adobe, in its infancy of GPU use, is outdistancing any CPU made or to be made. So to me, having a CPU do these things is foolish, as CPUs are awfully inadequate for them.


18th August 2008, hardly an ancient article by any means. Maybe one or two revisions have come out since then, but I doubt they have anywhere near the maturity of a CPU-based encoder in this short a period of time. What does it have to do with Adobe? Uhh, nothing; it has everything to do with ENCODING. As far as Photoshop goes, as I mentioned earlier, GPU acceleration is only available for certain functions (canvas rotating and zooming): http://www.tgdaily.com/content/view/39433/140/1/1/

So maybe having a CPU do the few GPU-accelerated functions is foolish (not that anyone will override GPU acceleration in any case), but what about everything else? I'm sure there's a WHOLE LOT MORE to Photoshop than just image rotation and zoom, but whatever, using a CPU is "foolish" apparently... LOL!

I swear some of you guys see a few marketing slides on CUDA and think the CPU is now redundant. Well, yeah, if you wanna encode in baseline quality, which looks like absolute ****, or ONLY use zoom and rotation in Photoshop, then go ahead. But like I said, GPU-based acceleration of common desktop apps has a LONG way to go, so in the meantime I'd say the gains Nehalem brings are VERY relevant.

JD, this is just an observation of mine, but it appears that every time you start a thread, you've already made up your mind (just thinking back to the "is AMD as fast as Intel in gaming" thread), and the rest of your replies thereafter involve trying to "convert" people to your frame of thinking, rather than discussing the actual topic. ;)