S.T.A.L.K.E.R.: Call of Pripyat on ATI 5970

sunnyp_343

Distinguished
Nov 24, 2009
41
0
18,540
Hey guys.
You know that monster, the juggernaut ATI 5970 that was claimed to be the new king of graphics cards? It fails on the upcoming game S.T.A.L.K.E.R.: Call of Pripyat (2010), giving only 20-25 fps. Check it out on the net.
 

macelo

Distinguished
Aug 18, 2009
48
0
18,540
http://benchmarkreviews.com/index.php?option=com_content&task=view&id=403&Itemid=47&limit=1&limitstart=11

"S.T.A.L.K.E.R. Call of Pripyat is a video game based on the DirectX 11 architecture and designed to use high-definition SSAO. If Benchmark Reviews were to test this game with the developers recommended settings for desired gaming experience, only the ATI Radeon HD 5000-series video cards would be tested. Having the good fortune to experience this free benchmark demo run through all four tests (Day, Night, Rain, Sun Shafts) with the highest settings possible (HDAO mode with Ultra SSAO quality), the Radeon HD5970 produced 30+ FPS (in the Day test) while the GTX295 looked like a slide show with 6 FPS. Needless to say, this wouldn't be the most education way of testing video cards for our readers. For this reason alone, we reduced quality to DirectX 10 levels, and ran tests with SSAO off and then enabled with Default settings on our collection of higher-end video cards. "

For what it's worth, STALKER was a game that ran worse than Crysis and looked about a quarter as good. Lots of fancy effects put to poor use. Call of Pripyat will probably be the same. But it's a tough benchmark, that's for sure.
 

daedalus685

Distinguished
Nov 11, 2008
1,558
1
19,810
You have got to be kidding me.. Stay in school son..

Ignorance is not an excuse to spout such tripe, merely an excuse to get some education on, in your case, everything to do with hardware.

By the by.. one game does not a trend make..
 

sunnyp_343

Distinguished
Nov 24, 2009
41
0
18,540
The future is in multi-core GPUs, and games like Crysis, STALKER, and Unigine need multi-GPU graphics cards. Actually, GPU technology is far behind what upcoming games demand. We need a revolution in the GPU world like the one Intel and AMD had in CPUs some years ago.
 
Your link points to an exclusively GPGPU experimental chip.
Also, with each process node, cards often see a doubling of resources, and if done correctly a two-GPU card can also fit within the PCIe power limits; thus the 5970, the 4870X2, and the wait for the GTX 295 because nVidia had to wait for the 55nm process.
Keep reading, there's lots to learn about it all.
 

daedalus685

Distinguished
Nov 11, 2008
1,558
1
19,810
Sunny, since you seem to know so much more about hardware than everyone else, tell me: exactly how many 'cores' do you think are in the average GPU?

Would it surprise you to learn that the answer is on the order of a thousand with the latest generation of cards?

You clearly have no grasp of the extreme difference in tech between a CPU and a GPU; you might want to give wiki a read.

You might also want to consider that any modern GPU can output an order of magnitude more FLOPS than a CPU, though you have to keep in mind these are relatively simple operations compared to the ones a CPU is working on most of the time. So, to answer how well a graphics card using an i7 would do: it would be able to keep up with the GPUs of 4 years ago in massively parallel tasks... maybe (the X1900s were about 100 GFLOPS; the i9s are around 70). By the same token, of course, a computer could not run with a current GPU as the CPU; there are reasons we have both.
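To put rough numbers on that, here's a quick back-of-the-envelope sketch. The shader counts and clocks are approximate published specs for the HD 5970 and a quad-core i7 (theoretical peaks, not measurements), so treat the output as illustrative only:

// Back-of-envelope peak single-precision throughput estimate.
// All figures are approximate published specs, used purely for illustration.
#include <cstdio>

int main() {
    // GPU peak = stream processors x clock x FLOPs per clock (MAD counts as 2)
    const double gpu_shaders   = 2 * 1600;   // HD 5970: two Cypress GPUs, ~1600 SPs each
    const double gpu_clock_ghz = 0.725;      // ~725 MHz core clock
    const double gpu_gflops    = gpu_shaders * gpu_clock_ghz * 2.0;

    // CPU peak = cores x clock x SIMD FLOPs per clock (4-wide SSE mul + add = 8)
    const double cpu_cores     = 4;
    const double cpu_clock_ghz = 3.2;        // e.g. a 3.2 GHz quad-core i7
    const double cpu_gflops    = cpu_cores * cpu_clock_ghz * 8.0;

    printf("GPU peak ~%.0f GFLOPS, CPU peak ~%.0f GFLOPS (~%.0fx gap)\n",
           gpu_gflops, cpu_gflops, gpu_gflops / cpu_gflops);
    return 0;
}

Even these theoretical peaks only matter for embarrassingly parallel work; on branchy, serial code the comparison flips completely.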
 

sunnyp_343

Distinguished
Nov 24, 2009
41
0
18,540
You didn't get me. The architecture of the CPU and GPU are the same; only the programming makes the difference. Let me explain: the CPU and GPU are like the left hand and the right hand. We do most of our work with the right hand, but when we need to do something heavy we use both hands. The same thing happens with the CPU and GPU. Now Intel says their future CPUs will also handle gaming applications, which means there will be no need for a graphics card. Maybe you have heard that already.
 

brockh

Distinguished
Oct 5, 2007
513
0
19,010
The reason STALKER is so graphically intensive is the way it does anti-aliasing. It's a custom algorithm that takes into account dynamic shadows and other things that aren't supported natively by standard anti-aliasing, I believe.
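I don't know GSC's actual shader, but deferred renderers generally can't use plain hardware MSAA, so they tend to fall back on an edge-detect post-process pass. Here's a minimal sketch of that general idea, written as a CUDA-style kernel purely for illustration; the buffer layout, threshold, and blur are all invented and this is not the game's algorithm:

// Sketch of edge-detect post-process AA (NOT the X-Ray engine's actual code).
// Luminance differences against the right and bottom neighbours flag edges;
// flagged pixels get a crude blend, everything else passes through unchanged.
__global__ void edgeAA(const float3* src, float3* dst, int w, int h, float threshold)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w - 1 || y >= h - 1) return;   // last row/column skipped for simplicity

    int i = y * w + x;
    float3 c = src[i];
    float3 r = src[i + 1];       // right neighbour
    float3 d = src[i + w];       // bottom neighbour
    float lumC = 0.299f * c.x + 0.587f * c.y + 0.114f * c.z;
    float lumR = 0.299f * r.x + 0.587f * r.y + 0.114f * r.z;
    float lumD = 0.299f * d.x + 0.587f * d.y + 0.114f * d.z;

    if (fabsf(lumC - lumR) > threshold || fabsf(lumC - lumD) > threshold) {
        // edge pixel: blend with its neighbours to soften the stair-step
        dst[i] = make_float3((c.x + r.x + d.x) / 3.0f,
                             (c.y + r.y + d.y) / 3.0f,
                             (c.z + r.z + d.z) / 3.0f);
    } else {
        dst[i] = c;              // non-edge pixel: copy through
    }
}

A real implementation would presumably also read depth and normal buffers, which may be where the dynamic-shadow handling mentioned above comes in.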
 

cyberkuberiah

Distinguished
May 5, 2009
812
0
19,010


The architecture of a CPU and a GPU are completely different in at least one part, and that is the core (in Nvidia's core/shader/memory setup).

Games are rendered on screen by a pipeline similar in concept to the x86 superscalar pipeline, but the stages are all different! A shader model update means a pipeline update (hull/domain shaders were added for DX11).

You can emulate the same pipeline in software on a processor, period, but it will run very, very slowly on today's CPUs, as dedicated silicon logic is always faster than emulated software computation.
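As a toy sketch of what that emulation means in practice (my own illustration, nothing to do with the real X-Ray or DX11 pipeline), shading a single 1080p frame in software is one giant serial loop:

// Toy illustration: "emulating" per-pixel shading on a CPU is one big serial
// loop, while a GPU dedicates silicon to running the same tiny function on
// thousands of pixels at once.
#include <vector>

struct Pixel { float r, g, b; };

static Pixel shade(int x, int y) {            // stand-in for a pixel shader
    float t = ((x ^ y) & 255) / 255.0f;
    return Pixel{ t, t * 0.5f, 1.0f - t };
}

int main() {
    const int w = 1920, h = 1080;
    std::vector<Pixel> framebuffer(w * h);
    for (int y = 0; y < h; ++y)               // ~2 million shade() calls per frame,
        for (int x = 0; x < w; ++x)           // strictly one after another on one core
            framebuffer[y * w + x] = shade(x, y);
    return 0;
}

And that is just the pixel stage; vertex transforms, rasterization, and texturing would all pile onto the same single core.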

But staying positive, I see two directions:

1. Heavyweight cores like Nehalem/Phenom are here to stay, since they give record single-threaded performance. Code in which each step depends on the previous step cannot be parallelized, EVER. And a company like Nvidia cannot just create its own new x86 core just like that.

2. But there are number-crunching CPU apps that use basic floating-point operations, and that kind of work can be done more and more on the "shaders" of GPUs. These operations are much more effective on a GPU, as Folding@home shows.
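To make the split concrete, here is a made-up pair of loops (illustrative CUDA, not from any real app): the recurrence in (a) cannot be spread across shaders because every step needs the previous result, while the per-element math in (b) is exactly the kind of work a GPU kernel soaks up:

// Sketch of inherently serial vs. data-parallel work (hypothetical example).
#include <cstdio>
#include <vector>

// (a) A recurrence: step i needs the result of step i-1, so extra shaders
//     don't help at all. This belongs on a fast serial core.
float serial_recurrence(const float* a, int n) {
    float x = 0.0f;
    for (int i = 0; i < n; ++i)
        x = x * 0.5f + a[i];               // depends on the previous x
    return x;
}

// (b) Independent per-element math: every i can run at the same time, which
//     is exactly what the GPU's "shaders" are built for.
__global__ void parallel_map(const float* a, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a[i] * a[i] + 1.0f;       // no dependence between elements
}

int main() {
    const int n = 1 << 20;
    std::vector<float> a(n), out(n);
    for (int i = 0; i < n; ++i) a[i] = i * 0.001f;

    printf("serial result: %f\n", serial_recurrence(a.data(), n));   // CPU side

    float *d_a, *d_out;
    cudaMalloc(&d_a, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_a, a.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    parallel_map<<<(n + 255) / 256, 256>>>(d_a, d_out, n);            // GPU side
    cudaMemcpy(out.data(), d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("out[42] = %f\n", out[42]);

    cudaFree(d_a);
    cudaFree(d_out);
    return 0;
}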

Now, the future GPU could become fully programmable like CPUs are today, as Tim Sweeney said in an Ars Technica article, but we don't have any word on that. Perhaps Nvidia's Fermi is a step in that direction, but again I cannot know for sure until it comes out, and even ATI's next gen will reveal the steps being made. I heard it will use MIMD, like Fermi, but that is all FUD for now.
 

cyberkuberiah

Distinguished
May 5, 2009
812
0
19,010
One more, and bigger, thing to consider: what if fixed function is more cost-effective, and outright programmability is too expensive to implement in silicon? And why do we need anything other than purely graphics-oriented silicon? What if we don't want Folding@home, when CUDA-capable chips are clearly more expensive?

The future depends on those who decide the future, and that's Microsoft with its future DirectX API, Intel with Larrabee, Nvidia with Fermi, and ATI, whose HD 5000 is a graphics chip first.

I wanted to, but really cannot reliably comment further because, in spite of having a BS in CS, I don't know DirectX in depth beyond basic theoretical knowledge, and game dev jobs are rare in India, where I live and work. But some of my friends are at Nvidia Bangalore; perhaps I will dig them for some info! They are all VLSI designers.
 

daedalus685

Distinguished
Nov 11, 2008
1,558
1
19,810


I'm sorry, Sunny, but you are completely wrong. I don't know what more to tell you other than that you need to educate yourself a LOT on how a computer works and what each part is for.

The CPU and GPU are absolutely NOT the same architecture. They are for totally different tasks. Trying to explain it to you would be a waste of time, it seems, but there are some good posts here, and wiki is helpful as well. I'd like to see your GPU try to execute an x86 instruction, since it's supposedly the same architecture...
 

daedalus685

Distinguished
Nov 11, 2008
1,558
1
19,810


We will soon require massively parallel computing in our day-to-day lives. While it may be nice to think that a GPU could stay just a GPU for the next little while, this will end. Very soon we are going to need a 'core' that can do these tasks well, and not just for one specific purpose.

If a GPU company were to stick to making only graphics parts, in five years it may be out of work.

Nvidia is rushing to GPGPU as they do not have CPUs; AMD does. I expect that in the next 5 years GPUs become PPUs (parallel processing units) more than just graphics chips. If they do not, then when the CPU companies finally manage to produce sufficiently parallel CPUs, the GPU will find itself out of a job. Either way, we are moving towards high-speed serial cores for complex tasks and massive parallelism for relatively simple ones (vectors and such).

I would expect the next-gen ATI cards to be almost as focused on GPGPU as Nvidia is now; it is the wave of the future. ATI has the luxury of moving into it slowly (though if they lag too far behind, they will lose the entire market before they have an offering).
 

cyberkuberiah

Distinguished
May 5, 2009
812
0
19,010


Yes, I agree: although it may be a bit cheaper to have graphics-only chips, parallel work like video encoding is getting better and better on GPUs. Another example is how programmable shaders can enhance HD playback better than any high-end hardware player. And, sort of unrelated, but the integrated sound capabilities (high-quality sound at that) are an added bonus on newer GPUs.

Nvidia is therefore rightly taking more steps into that market with an architecture like Fermi coming out (the upcoming Tesla would annihilate x86-based number-crunching multiprocessor systems in many cases). I like this idea of PPUs.

I am not sure, but I think that multithreaded rendering in DX11 will go hand in hand with MIMD shaders. I am waiting for this speculation of mine to be validated or invalidated when GF100 comes out. Then ATI will surely come out with MIMD on its next gen, because otherwise, as you said, it will be too late, and MIMD is really necessary to get maximum performance and enable all types of parallel processing as in multiprocessors (there would still be a limit on the maximum number of concurrent kernels, but the direction is good).

MIMD will give game engines a much better way to use multiple cores.
 

cyberkuberiah

Distinguished
May 5, 2009
812
0
19,010
Tim Sweeney preached the gospel; I hope he will lead the way in showing us the path to, and the advantages of, a fully programmable pipeline as in that Ars Technica article, as those LRB Pentium 3-class cores on a modern manufacturing process at high clock speeds will surely have some power :) But speaking of LRB, remember how Nvidia mocked it in that tick-tock cartoon :D

According to that article, if a language like C++ is all that is required to create an engine, there would be two possibilities:

1. The mushrooming of rendering libraries, like the JDK: just use this object, or extend this class to create your own parallax mapper, for example (see the sketch at the end of this post).

2. The arrival of this C++-based, OOP-style tech on Linux and in the open-source community in particular. No more APIs, and DirectX especially, required. It would also let people create GNU-style collaborative projects for engines, texture packs, etc., with less dependence on drivers of any kind, since there would be direct hardware access to do the optimizations.

The possibilities are endless with a language like C++, which has both high-level features and low-level hardware-manipulation capability, running on these LRB-type chips. I bet fans will do all sorts of low-level optimizations to get more fps and all that. A lot of people are both gamers and coders, for that matter, and fans of a game like Crysis would be more than happy to share their creations for free, just as they share all the mods now.
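Just to picture what possibility 1 could look like in practice, here is a tiny sketch; every name in it is invented for illustration and belongs to no real library:

// Hypothetical extension point in an imagined C++ rendering library.
// The library would ship a base class with a virtual shading hook, and users
// would extend it, much as you extend a class from the JDK.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

class SurfaceShader {
public:
    virtual ~SurfaceShader() {}
    virtual Vec3 shade(const Vec3& normal, const Vec3& lightDir) const = 0;
};

// A user-written "parallax-flavoured" mapper plugged into the library's slot.
class MyParallaxMapper : public SurfaceShader {
public:
    explicit MyParallaxMapper(float heightScale) : scale_(heightScale) {}
    Vec3 shade(const Vec3& n, const Vec3& l) const {
        float diff = n.x * l.x + n.y * l.y + n.z * l.z;          // toy diffuse term
        if (diff < 0.0f) diff = 0.0f;
        float bump = std::fabs(std::sin(scale_ * n.x)) * 0.1f;   // fake height nudge
        return Vec3{ diff + bump, diff, diff };
    }
private:
    float scale_;
};

int main() {
    MyParallaxMapper mapper(4.0f);
    Vec3 c = mapper.shade(Vec3{0.0f, 0.0f, 1.0f}, Vec3{0.0f, 0.0f, 1.0f});
    std::printf("shaded: %.2f %.2f %.2f\n", c.x, c.y, c.z);
    return 0;
}

The point is just the shape of the API: the pipeline stage becomes an ordinary virtual call that anyone can override, instead of something locked behind a driver.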
 

cyberkuberiah

Distinguished
May 5, 2009
812
0
19,010


Yeah, I read that somewhere; MS said that Win32 wasn't really designed from the start for this, and better Windows support in, say, Windows 8 will really help.
 
There are devs in the way of this: C++ may be the bee's knees, but if it's just more work for devs, even if it's overall easier, it's still more.
LRB, coming in, has to do what nVidia is already doing: nVidia has CUDA and is working in those areas, while LRB has its own LRB-native tools, and they'll be used as well.
nVidia has the "lead"; Intel has a more tried-and-true approach, except when it comes to the HW.
If the HW fails in the gaming sector, it'll put a lot of strain on LRB overall, and gaming is the key to its success. Haswell and Bobcat/Bulldozer will implement this at the chip level, and then it'll no longer matter; but for discrete cards LRB is still up in the air IMHO, as it doesn't have a lot of time to stretch its legs before it's on-chip alongside the competition. That looks to me like a mainly upper-mid/high-end market for graphics-only discrete solutions, where we may yet be surprised by both nVidia and ATI.
 

cyberkuberiah

Distinguished
May 5, 2009
812
0
19,010
Not many "leaks" about ATI's next-gen plans have come out, but Nvidia's Fermi is a huge step, with dev support like Nexus et al. If LRB comes out in late 2010, it had better beat the upcoming Fermi in gaming as well as in supercomputing apps; otherwise Nvidia will be the undisputed leader here.

One more thing to watch is how well OpenCL can run on all of them (Nvidia, ATI, and LRB), and hence its adoption, since ATI/Intel can't do CUDA: three different dev environments for LRB, ATI, and Nvidia would be a lot of porting headache for, say, the Folding@home devs and others, IMO.