S.T.A.L.K.E.R.: Call of Pripyat on ATI 5970

November 25, 2009 5:00:37 PM

hey guys.
You know that monster, the juggernaut ATI 5970, which claimed the title of new king of graphics cards? It fails on the upcoming game S.T.A.L.K.E.R.: Call of Pripyat (2010), giving only 20-25 FPS. Check it out on the net.


November 25, 2009 5:13:39 PM

http://benchmarkreviews.com/index.php?option=com_conten...

"S.T.A.L.K.E.R. Call of Pripyat is a video game based on the DirectX 11 architecture and designed to use high-definition SSAO. If Benchmark Reviews were to test this game with the developers recommended settings for desired gaming experience, only the ATI Radeon HD 5000-series video cards would be tested. Having the good fortune to experience this free benchmark demo run through all four tests (Day, Night, Rain, Sun Shafts) with the highest settings possible (HDAO mode with Ultra SSAO quality), the Radeon HD5970 produced 30+ FPS (in the Day test) while the GTX295 looked like a slide show with 6 FPS. Needless to say, this wouldn't be the most education way of testing video cards for our readers. For this reason alone, we reduced quality to DirectX 10 levels, and ran tests with SSAO off and then enabled with Default settings on our collection of higher-end video cards. "

For what it's worth, STALKER was a game that ran worse than Crysis and looked about 1/4 as good. Lots of fancy effects put to poor use. Call of Pripyat will probably be so again. But it's a tough benchmark, that's for sure.
November 25, 2009 5:40:29 PM

A $600 graphics card giving only 60 FPS is not worth it, my dear friend. And what are game developers doing? Why are they making games so demanding that even high-end systems cannot handle them?
November 25, 2009 5:43:54 PM

Why don't AMD and ATI make multi-core GPUs like in CPUs? If multi-core CPUs can be made, then why not multi-core GPUs? For example, 4-core GPUs, 8-core GPUs.
November 25, 2009 5:52:11 PM

They are working on it, and so is Intel. Give it time.
November 25, 2009 5:56:44 PM

And Crysis never hit 30 FPS on $800 Ultras either.
November 25, 2009 6:00:01 PM

You have got to be kidding me.. Stay in school son..

Ignorance is not an excuse to spout such tripe, merely an excuse to get some education on, in your case, everything to do with hardware.

By the by.. one game does not a trend make..
November 25, 2009 6:42:07 PM

The future is in multi-core GPUs, and games like Crysis, STALKER, and Unigine need multi-GPU graphics cards. Actually, GPU technology is far behind what upcoming games require. We need a revolution in the GPU world like the one Intel and AMD made in CPUs some years ago.
November 25, 2009 6:45:02 PM

Sometimes I wonder what would happen if AMD or Nvidia used an Intel i7 CPU as their GPU. And I know this can happen.
November 25, 2009 7:08:05 PM

Your link points to an exclusively GPGPU experimental chip.
Also, with each new process, cards often see a doubling of resources, and if done correctly a two-GPU card can still fit within PCI power limitations; hence the 5970, the 4870X2, and the wait for the GTX 295 while nVidia waited for the 55nm process.
Keep reading, there's lots to learn about it all.
November 25, 2009 7:36:23 PM

Sunny, since you seem to know so much more about hardware than everyone else, tell me exactly how many 'cores' you think are in the average GPU?

Would it surprise you to learn that the answer is on the order of a thousand with the latest generation of cards?

You clearly have no grasp of the extreme difference in tech between a CPU and a GPU; you might want to give wiki a read.

You might also want to consider that any modern GPU can output an order of magnitude more FLOPS than a CPU, though you have to keep in mind these are relatively simple operations compared to the ones a CPU is working on most of the time. So, to answer how well a graphics card using an i7 would do... well, it would be able to keep up with the GPUs of 4 years ago in massively parallel tasks... maybe (the X1900s were about 100 GFLOPS, the i9s are around 70). By the same token, a computer could not run with a current GPU as the CPU; there are reasons we have both.
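
For a rough sense of where figures like that come from, here's a back-of-the-envelope sketch; the core counts, clocks, and FLOPs-per-cycle below are illustrative placeholders, not the specs of any particular part:

// Peak single-precision throughput is roughly:
//   ALUs (or SIMD lanes) x clock (GHz) x FLOPs issued per lane per cycle
// The numbers below are made up for illustration, not real product specs.
#include <cstdio>

int main() {
    // Hypothetical GPU: many simple ALUs, modest clock, multiply-add = 2 FLOPs
    const double gpu_alus        = 1600;
    const double gpu_clock_ghz   = 0.85;
    const double gpu_flops_cycle = 2;
    const double gpu_gflops = gpu_alus * gpu_clock_ghz * gpu_flops_cycle;

    // Hypothetical CPU: few wide cores, high clock, 4-wide SIMD add + mul per cycle
    const double cpu_cores       = 4;
    const double cpu_clock_ghz   = 3.0;
    const double cpu_flops_cycle = 8;
    const double cpu_gflops = cpu_cores * cpu_clock_ghz * cpu_flops_cycle;

    printf("GPU peak: ~%.0f GFLOPS, CPU peak: ~%.0f GFLOPS (~%.0fx)\n",
           gpu_gflops, cpu_gflops, gpu_gflops / cpu_gflops);
    return 0;
}

Peak numbers like these assume every ALU is busy doing simple, independent math; branchy, dependent code never gets close, which is exactly why the comparison only holds for massively parallel work.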
November 26, 2009 7:42:29 AM

You didn't get me. The architecture of the CPU and the GPU are the same; only the programming makes the difference. I can explain it to you: the CPU and GPU are like the left hand and the right hand. We do most of our work with the right hand, but when we need some real power we use both hands. The same thing happens with the CPU and GPU. Now Intel says their future CPUs will also handle gaming applications, which means there will be no need for a graphics card. Maybe you have heard that already.
November 26, 2009 7:59:12 AM

The reason STALKER is so graphically intensive is the way it does anti-aliasing. It's a custom algorithm that takes into account dynamic shadows and other things that aren't supported natively by standard anti-aliasing, I believe.
November 26, 2009 12:17:17 PM

sunnyp_343 said:
You didn't get me. The architecture of the CPU and the GPU are the same; only the programming makes the difference. I can explain it to you: the CPU and GPU are like the left hand and the right hand. We do most of our work with the right hand, but when we need some real power we use both hands. The same thing happens with the CPU and GPU. Now Intel says their future CPUs will also handle gaming applications, which means there will be no need for a graphics card. Maybe you have heard that already.


The architecture of a CPU and a GPU are completely different in at least one part, and that is the core (in nVidia's core/shader/memory setup).

Games are rendered on screen by a pipeline similar to the x86 superscalar pipeline, but the stages are all different! A shader model update means a pipeline update (hull/domain shaders added for DX11).

You can emulate the same pipeline in software on a processor, period, but it will run very, very slowly on today's CPUs, as dedicated silicon logic is always faster than emulated software computation.

But being positive, I see two directions:

1. Heavyweight cores like Nehalem/Phenom will be here to stay, as they give record single-threaded performance. Code in which steps depend upon previous steps cannot be parallelized, EVER. A company like nVidia cannot just create its own new x86 just like that.

2. But there are number-crunching CPU apps which use basic operations on floating point, and this work can be done more and more on the "shaders" of GPUs. These operations are much more effective on GPUs, as in folding@home. (A small sketch of what such a shader-friendly operation looks like follows below.)

Now, the future GPU could be fully programmable like CPUs are today, as Tim Sweeney said in an Ars Technica article, but we don't have any word on that. Perhaps nVidia's Fermi is a step in that direction, but I cannot know for sure until it comes out, and even ATI's next gen will reveal the steps that are being made. I heard it will use MIMD, like Fermi, but these are all rumors for now.
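
Here is that sketch: a minimal CUDA SAXPY (y = a*x + y), purely illustrative. Every element is an independent multiply-add, which is exactly the kind of basic floating-point operation that maps well onto GPU shaders.

#include <cuda_runtime.h>

// y[i] = a * x[i] + y[i]: every element is independent, so thousands of
// threads can each handle one element in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc((void **)&x, n * sizeof(float));
    cudaMalloc((void **)&y, n * sizeof(float));
    // (initialization of x and y omitted for brevity)

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);   // one thread per element
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}

A CPU does the same work one element (or one SIMD vector) at a time; the GPU's silicon is built for exactly this access pattern, which is the gap between dedicated logic and emulation mentioned above, seen from the other side.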
November 26, 2009 12:25:03 PM

One more, bigger thing to consider: what if fixed function is more cost effective, and outright programmability is too expensive to implement in silicon? And why do we need anything other than pure graphics-oriented silicon? What if we don't want folding@home when CUDA chips are clearly more expensive?

The future depends on those who decide the future, and that's Microsoft with its future DirectX API, Intel with Larrabee, nVidia with Fermi, and ATI, whose HD 5000 is a graphics chip first.

I really cannot reliably comment further because, in spite of having a BS in CS, I don't know DirectX in depth beyond basic theoretical knowledge, and game dev jobs are rare in India where I live and work. But some of my friends are at Nvidia, Bangalore; perhaps I will dig them for some info! They are all VLSI designers.
November 26, 2009 2:37:30 PM

sunnyp_343 said:
You didn't get me. The architecture of the CPU and the GPU are the same; only the programming makes the difference. I can explain it to you: the CPU and GPU are like the left hand and the right hand. We do most of our work with the right hand, but when we need some real power we use both hands. The same thing happens with the CPU and GPU. Now Intel says their future CPUs will also handle gaming applications, which means there will be no need for a graphics card. Maybe you have heard that already.


I'm sorry sunny, but you are completely wrong. I don't know what more to tell you other than that you need to educate yourself a LOT on how a computer works and what each part is for.

The CPU and GPU are absolutely NOT the same architecture. They are for totally different tasks. Trying to explain it to you would be a waste of time, it seems, but there are some good posts here, and wiki is helpful as well. I'd like to see your GPU try to execute an x86 instruction, since it is "the same architecture"...
November 26, 2009 2:45:01 PM

cyberkuberiah said:
One more, bigger thing to consider: what if fixed function is more cost effective, and outright programmability is too expensive to implement in silicon? And why do we need anything other than pure graphics-oriented silicon? What if we don't want folding@home when CUDA chips are clearly more expensive?

The future depends on those who decide the future, and that's Microsoft with its future DirectX API, Intel with Larrabee, nVidia with Fermi, and ATI, whose HD 5000 is a graphics chip first.

I really cannot reliably comment further because, in spite of having a BS in CS, I don't know DirectX in depth beyond basic theoretical knowledge, and game dev jobs are rare in India where I live and work. But some of my friends are at Nvidia, Bangalore; perhaps I will dig them for some info! They are all VLSI designers.


We will soon require massively parallel computing in our day-to-day lives. While it may be nice to think that a GPU could stay just a GPU for the next little while, this will end. Very soon we are going to need a 'core' that can do these tasks well, and non-specifically.

If a GPU company were to stick to making only graphics parts, in five years they may be out of work.

Nvidia is rushing to GPGPU as they do not have CPUs; AMD does. I expect that in the next 5 years GPUs become PPUs (parallel processing units) more than just graphics parts. If they do not, then when the CPU companies finally manage to produce sufficiently parallel CPUs, the GPU will find itself out of a job. Instead we are moving towards high-speed serial cores for complex tasks, and massive parallelism for relatively simple ones (vectors and such).

I would expect the next-gen ATI cards to be almost as focused on GPGPU as Nvidia is now; it is the wave of the future. Though ATI has the luxury of moving into it slowly (if they lag too far behind, they will lose the entire market before they have an offering).
November 26, 2009 2:45:56 PM

Don't have diagrams handy (made-up ones at that), but just look at an LRB core vs a CPU core. The LRB core is cut down.
Start there, see the differences, find out why they're there, and then look at a GPU architecture and see how they function.
November 26, 2009 5:36:49 PM

daedalus685 said:
We will soon require massively parallel computing in our day-to-day lives. While it may be nice to think that a GPU could stay just a GPU for the next little while, this will end. Very soon we are going to need a 'core' that can do these tasks well, and non-specifically.

If a GPU company were to stick to making only graphics parts, in five years they may be out of work.

Nvidia is rushing to GPGPU as they do not have CPUs; AMD does. I expect that in the next 5 years GPUs become PPUs (parallel processing units) more than just graphics parts. If they do not, then when the CPU companies finally manage to produce sufficiently parallel CPUs, the GPU will find itself out of a job. Instead we are moving towards high-speed serial cores for complex tasks, and massive parallelism for relatively simple ones (vectors and such).

I would expect the next-gen ATI cards to be almost as focused on GPGPU as Nvidia is now; it is the wave of the future. Though ATI has the luxury of moving into it slowly (if they lag too far behind, they will lose the entire market before they have an offering).


Yes, I agree that although it may be a bit cheaper to have graphics-only chips, parallel work like video encoding is getting better and better on GPUs. Another example is how the programmable shaders can enhance HD playback better than any high-end hardware player. And, sort of unrelated, the integrated sound capabilities (high-quality sound at that) are an added bonus on newer GPUs.

Nvidia is then rightly making more moves in that market with an architecture like Fermi coming out (the upcoming Tesla would annihilate x86-based number-crunching multiprocessor systems in many cases). I like this idea of PPUs.

I am not sure, but I think that multithreaded rendering in DX11 will go hand in hand with MIMD shaders. I am waiting for this speculation of mine to be validated or invalidated when the GF100 comes out. Then ATI will surely come out with MIMD on its next gen, because otherwise, as you said, it will be too late, and MIMD is really necessary to get maximum performance and enable all types of parallel processing, as in multiprocessors (although there would still be a limit on the number of concurrent kernels, the direction is good). A sketch of launching independent kernels side by side follows below.

MIMD will give game engines a much better option to use multiple cores.
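
Here is that sketch, a minimal CUDA example of the concurrent-kernels idea (kernel names and workloads are invented): two independent kernels are submitted into separate streams so the hardware may overlap them where it supports that.

#include <cuda_runtime.h>

// Two independent, trivial kernels standing in for separate engine tasks
// (say, particles and vertex skinning). Names and workloads are made up.
__global__ void updateParticles(float *p, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] += 0.016f;            // pretend time step
}
__global__ void skinVertices(float *v, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= 1.001f;            // pretend blend weight
}

int main() {
    const int n = 1 << 20;
    float *p, *v;
    cudaMalloc((void **)&p, n * sizeof(float));
    cudaMalloc((void **)&v, n * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // Launched into different streams, the kernels may run concurrently
    // if the GPU supports it; older parts simply serialize them.
    updateParticles<<<(n + 255) / 256, 256, 0, s1>>>(p, n);
    skinVertices  <<<(n + 255) / 256, 256, 0, s2>>>(v, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(p);
    cudaFree(v);
    return 0;
}

The limit on how many such kernels can actually be in flight at once is a hardware property, which is why the number of concurrent kernels keeps coming up in discussions of newer architectures.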
November 26, 2009 5:52:17 PM

Also, look to Windows 8 for even greater usage.
November 26, 2009 6:03:47 PM

Tim Sweeney preached the gospel; I hope he will lead the way in showing us the path and the advantages of a fully programmable pipeline as in that Ars Technica article, since those LRB Pentium 3-class cores on a modern manufacturing process at high clocks will surely have some power :)  But thinking of LRB, how nVidia mocked it in that tick-tock cartoon :D

According to that article, if a language like C++ is all that is required to create an engine, there would be two possibilities:

1. The mushrooming of rendering libraries, like the JDK: just use this object, or extend this class to create your own parallax mapper, for example (a toy sketch of what that could look like follows at the end of this post).

2. The coming of this C++-based, OOP-like tech to Linux and the open source community in particular. No more APIs, and DirectX especially, required. It would also enable people to create GNU collaborative projects for engines, texture packs, etc., with less dependence on drivers of any kind, as there will be direct hardware access to do the optimizations.

The possibilities are endless with a language like C++, which has both high-level features and low-level hardware manipulation capability, running on these LRB-type chips. I bet fans will do all sorts of low-level optimizations to get more FPS and all that stuff. A lot of people are both gamers and coders, for that matter, and fans of a game like Crysis will be more than happy to share their creations for free, just as they share all the mods now.
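
Here is that toy sketch for possibility 1. None of this is a real library; every name below is invented. The idea is just that an engine exposes a base shader class and you derive from it to plug in your own effect, a parallax mapper in this case.

// Hypothetical software-rendering library: derive from a base Shader class
// and override shade() to supply your own per-pixel effect. All names invented.
#include <cmath>

struct Vec3 { float x, y, z; };
struct Fragment { Vec3 viewDir; float u, v; };

class Shader {
public:
    virtual ~Shader() {}
    virtual Vec3 shade(const Fragment &f) const = 0;   // engine calls this per pixel
};

class ParallaxMapper : public Shader {
    float heightScale;
public:
    explicit ParallaxMapper(float scale) : heightScale(scale) {}
    Vec3 shade(const Fragment &f) const override {
        // Offset the texture coordinates along the view direction by the
        // sampled height (the height-map lookup is stubbed out here).
        float height = 0.5f;                            // stand-in for sampleHeight(f.u, f.v)
        float u = f.u + f.viewDir.x * height * heightScale;
        float v = f.v + f.viewDir.y * height * heightScale;
        float c = std::fabs(std::sin(u * 40.0f) * std::cos(v * 40.0f));
        return Vec3{c, c, c};                           // stand-in for an albedo lookup
    }
};

On a fully programmable chip, a class like this would just be more C++ compiled for the device, rather than a shader written against a fixed API.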
November 26, 2009 6:07:09 PM

JAYDEEJOHN said:
Also, look to Windows 8 for even greater usage.


Yeah, I read that somewhere; MS said that Win32 wasn't really designed for this from the start, and better Windows support in, say, Windows 8 will really help.
November 26, 2009 6:28:03 PM

There are devs in the way of this. C++ may be the bee's knees, but if it's just more work for devs, even if it's overall easier, it's still more work.
LRB coming in has to do what nVidia is already doing: nVidia has CUDA and is already working in these areas, while LRB has its own language, and it'll be used as well.
nVidia has the "lead"; Intel has the more tried-and-true approach, except when it comes to the hardware.
If the hardware fails in the gaming sector, it'll put a lot of strain on LRB overall, and gaming is the key to its success, since Haswell and Bobcat/Bulldozer will implement this at the chip level and it'll no longer matter there. For discrete cards, LRB is still up in the air IMHO, as it doesn't have a lot of time to stretch its legs before it's on-chip alongside the competition, which to me leaves mainly an upper-mid/high-end market for graphics-only discrete solutions, where we may yet be surprised by both nVidia and ATI.
November 27, 2009 6:42:23 AM

Not many leaks about ATI's next-gen plans have come out, but nVidia's Fermi is a huge step, with dev support like Nexus et al. If LRB comes out in late 2010, it had better beat Fermi on gaming as well as supercomputing apps; otherwise nVidia will be the undisputed leader in this.

One more thing to watch is how well OpenCL can run on all of them, and hence its adoption across nVidia, ATI, and LRB, since ATI/Intel can't do CUDA; three different dev environments for LRB, ATI, and nVidia would be a lot of porting headache for, say, the folding@home devs and others, IMO.
November 27, 2009 8:56:55 AM

Win 8 will be crap, I'm sure. Macs are popular these days; the Mac will take Windows' place in 2 or 3 years. Check this: Windows Vista is slower than XP, Win 7 is even slower than Vista, and believe me, Win 8 will be the slowest of all.
November 27, 2009 9:02:48 AM

I'm waiting for Intel's Larrabee. When this monster comes out, will it rule the world, or will it all be hype?
November 27, 2009 10:49:40 AM

sunnyp_343 - it's so obvious you are not a professional, only a casual user/player. You don't understand a lot of things, so don't make a fool of yourself making such claims.
November 27, 2009 11:08:56 AM

sunnyp_343 said:
Win 8 will be crap, I'm sure. Macs are popular these days; the Mac will take Windows' place in 2 or 3 years. Check this: Windows Vista is slower than XP, Win 7 is even slower than Vista, and believe me, Win 8 will be the slowest of all.


What?? Vista might be a snail of a system, but Win7 is a heck of a lot faster, even on older PCs.
November 27, 2009 11:10:12 AM

MARSOC_Operator said:
What?? Vista might be a snail of a system, but Win7 is a heck of a lot faster, even on older PCs.


Well we all know 2010 will also be the year of the Linux desktop... :sarcastic: 
November 27, 2009 2:20:22 PM

brockh said:
Well we all know 2010 will also be the year of the Linux desktop... :sarcastic: 


Oh yeah! The Year of the Linux Desktop--only for those who want to remain isolated from the gaming, entertainment, and modern software world. :lol: 
November 27, 2009 2:26:20 PM

Yeah, I agree with MARSOC_Operator.
November 27, 2009 2:27:42 PM

But guys, believe me, Mac Leopard is the fastest and coolest OS today. Believe me or not.
November 27, 2009 2:38:10 PM

sunnyp_343 said:
But guys, believe me, Mac Leopard is the fastest and coolest OS today. Believe me or not.

Errr, not! :lol: 
November 27, 2009 2:49:47 PM

This entire thread makes me cry on various levels... I'm going to go look at pictures of kitties for a while to make up for it.
November 27, 2009 2:52:47 PM

I'm starting a new topic about SLI and CrossFire.

Why don't we get double the performance in SLI and CrossFire instead of only 20%-25%? We pay double for only 25%+. If we get 100 FPS on a single card in any game, then we should get 200 FPS in SLI or CrossFire, and similarly 300 FPS with 3-way and 400 FPS with 4-way SLI or CrossFire, because we paid another $300 for a card. Is that right? Only F.E.A.R. is an exceptional case. If we got this performance, then everybody would go for SLI or CrossFire.
November 27, 2009 2:52:50 PM

To the OP:
The only reason I mentioned Windows 8 is simply the higher demand that'll be placed on GPUs, and yes, FF etc. will most likely include these features as well, including GPGPU usage, and they'll probably come even sooner.
Like I said, keep reading.
November 27, 2009 2:56:47 PM

sunnyp_343 said:
I'm starting a new topic about SLI and CrossFire.

Why don't we get double the performance in SLI and CrossFire instead of only 20%-25%? We pay double for only 25%+. If we get 100 FPS on a single card in any game, then we should get 200 FPS in SLI or CrossFire, and similarly 300 FPS with 3-way and 400 FPS with 4-way SLI or CrossFire, because we paid another $300 for a card. Is that right?


.......

Please don't fill this forum with more tripe...

You get what they tell you you get; don't pay for SLI if you don't want it. You need to understand scaling: even a 5870 (roughly two 4870s) is on par with 4870 CrossFire, and even in the best of situations, doubling the GPUs will NEVER double performance. I'd say CF and SLI scale pretty well, considering.
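
A rough way to see why, using an Amdahl's-law sketch with made-up numbers: only the GPU-limited share of each frame gets split across the cards, while the CPU, driver, and card-to-card synchronization time does not.

#include <cstdio>

int main() {
    // Made-up illustrative split of one frame's time at 100 FPS (10 ms/frame):
    const double frame_ms     = 10.0;
    const double gpu_fraction = 0.70;   // portion the extra GPUs can actually share
    const double fixed_ms     = frame_ms * (1.0 - gpu_fraction);  // CPU/driver/sync

    for (int gpus = 1; gpus <= 4; ++gpus) {
        double ms = fixed_ms + frame_ms * gpu_fraction / gpus;    // Amdahl's law
        printf("%d GPU(s): %.1f ms/frame -> %.0f FPS\n", gpus, ms, 1000.0 / ms);
    }
    return 0;
}

Even with a generous 70% GPU-bound assumption, the second card buys roughly 50% more FPS and the fourth adds almost nothing, and real alternate-frame-rendering overheads push the numbers lower still. The 20%-25% figure in the question is pessimistic for well-supported games, but 100% scaling per extra card is simply not on the table.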
January 26, 2010 3:40:30 PM

So why is everyone feeding the troll? He's obviously an idiot and/or child. Let this thread die.
January 26, 2010 3:46:24 PM

The OP is BS; I get up to 200+ average FPS on that benchmark using an HD 5850 on DX11 ultra settings + tessellation at 1920 x 1080.
January 26, 2010 10:01:36 PM

holycrikey said:
So why is everyone feeding the troll? He's obviously an idiot and/or child. Let this thread die.


The thread was dead until you resurrected it. Look at the date before your post.