Radeon X1900XTX clocks revealed: 695/1550MHz

Take it for what it is, coming from the Inquirer... but it sounds reasonable enough:

http://www.theinquirer.org/?article=28611

X1900 XTX at 695MHz core and 1550MHz memory
X1900 XT at 625MHz core and 1450MHz memory

Nifty!
  1. Why isn't ATI doing 24 pipes like nVidia?
  2. Because they're going for more operations per pipe.

    I wouldn't worry too much about it. Their 16-pipe X1800 XT can compete with nVidia's 24-pipe 7800 GTX... and it's not nearly as impressive from a hardware standpoint as the X1900 will be.
  3. I'm happy with my X850XT for now :)
  4. Quote:
    Why isn't ATI doing 24 pipes like nVidia?


    Checking.......

    EDIT: Whoops I goofed (big time)!
  5. Quote:
    ATI secretly released its Silver Bullets material to AIBs this week and the picture of R580 is slowly coming together. R580, or Radeon X1900 as it is called internally, is expected to "launch" in January according to ATI documentation........

    The Silver Bullets presentation was a little light on details, but did confirm the R580 GPU has 48 pixel shader processors and higher clocks than R520 (a.k.a. Radeon X1800). Radeon X1900 uses a 90nm process also found on Radeon X1800.


    From: http://www.anandtech.com/video/showdoc.aspx?i=2653
  6. Having 48 pixel shaders does not mean 48 pipelines. The X1900 series will only have 16 pipelines, with 16 TMUs and 16 ROPs, while having 48 pixel shaders. ATI is aiming for a 3:1 pixel shader to TMU ratio and a 3:1 pixel shader to ROP ratio.

    nVidia is going for 32 pipelines with 32 pixel shaders and 32 texture units, while likewise having 16 ROPs. nVidia is looking for a 1:1 pixel shader to TMU ratio and a 2:1 pixel shader to ROP ratio.

    http://www.theinquirer.net/?article=28609
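
    If you want to sanity-check those ratios, it's just division over the rumored unit counts. A quick Python sketch (the counts are the ones quoted above, not confirmed specs):

        # Rumored unit counts from the articles linked above.
        r580 = {"shaders": 48, "tmus": 16, "rops": 16}
        g71  = {"shaders": 32, "tmus": 32, "rops": 16}

        for name, u in (("R580", r580), ("G71", g71)):
            print(name,
                  f"shader:TMU = {u['shaders'] // u['tmus']}:1,",
                  f"shader:ROP = {u['shaders'] // u['rops']}:1")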
  7. This might be in the FAQ, but...

    what are TMUs and ROPs?

    I'm guessing that I know how the cards work, i.e. the pipeline processes a line of pixels at a time instead of just one, due to the new ratios (I think I'm getting this right).

    I was going to get an X1800 XT until I heard about this card coming along; luckily the person getting it for me was a girl and therefore did not know what she was doing.

    Saved by progesterone and oestrogen!

    Now if I can only find someone living in Canada...
  8. Too bad no AGP. Time to drop those old AGP card prices!!!!!
  9. Yep, what data said;

    Shader units <> Pixel Pipelines

    The X1900 is a 16-pipeline card...

    TMUs are Texture Mapping Units. A single pipeline with two TMUs can apply two textures to that pixel per pass.
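
    Roughly speaking, the number of passes a pipeline needs scales with texture layers divided by TMUs per pipe. A simplified Python model (illustrative only; real hardware has more going on):

        import math

        def passes_needed(texture_layers: int, tmus_per_pipe: int) -> int:
            # Each pass can apply up to tmus_per_pipe textures to the pixel.
            return math.ceil(texture_layers / tmus_per_pipe)

        print(passes_needed(4, 2))  # a 2-TMU pipeline needs 2 passes for 4 layers
        print(passes_needed(4, 1))  # a 1-TMU pipeline needs 4 passes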
  10. So this supports the idea that RV530 and R580 are architecturally close.

    And the G71 is supposed to be clocking a 750MHz core... just for information.

    Now, since ATI's card will be doing more operations per pixel, raw theoretical fill rate means a bit less here, but here is a rough idea of where these two new cards will land, compared to the 7800 GTX 512 and the X1800 XT:


    G71 - 750MHz, 32 TMU, 16 ROP: 24,000M texels/s and 12,000M pixels/s
    G70 - 550MHz, 24 TMU, 16 ROP: 13,200M texels/s and 8,800M pixels/s

    R520 - 625MHz, 16 TMU, 16 ROP: 10,000M texels/s and 10,000M pixels/s
    R580 - 695MHz, 16 TMU, 16 ROP: 11,120M texels/s and 11,120M pixels/s

    In raw theoretical fill rate, the G71 steps ahead by a considerable amount (while its pixel fill is rather close, the higher texture fill rate obviously increases performance). The next thing to note is that the ATI card is planned with fairly slow memory for that high a core clock, which would lead you to guess at a 512-bit bus, or (more likely) a typical ATI design with fairly low bandwidth for the GPU. (If you compare R480 and NV40, R480 has a much higher theoretical fill rate than NV40, yet almost identical memory bandwidth, which does serve to lower performance.)

    Since the R580 can sustain more operations on each individual pixel than the G71, it should get somewhat higher performance than its fill-rate figures would dictate. The other thing I would guess is that the overall jump in 3D power to G71 and R580 will not be as large an increase as NV40 or R420 was over NV38/R360.

    It should be more along the lines of what G70 and R520 were over NV40/R480: not a small bump, but not a huge one either. And the R580 adds a lot of computational power behind those pipes...
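
    If anyone wants to check that math, fill rate is just clock times unit count. A quick Python sketch (clocks and unit counts are the rumored/announced figures from this thread):

        # Texel rate = core MHz * TMUs; pixel rate = core MHz * ROPs.
        cards = {
            #        (MHz, TMUs, ROPs)
            "G71":  (750, 32, 16),
            "G70":  (550, 24, 16),
            "R520": (625, 16, 16),
            "R580": (695, 16, 16),
        }

        for name, (mhz, tmus, rops) in cards.items():
            print(f"{name}: {mhz * tmus:,}M texels/s, {mhz * rops:,}M pixels/s")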
  11. Where can I read up on pipes vs pixel processors? I tried to Google it and came up with the answer that they were the same thing??

    From: http://www.hardwareanalysis.com/content/article/1817/


    Quote:

    R520 architecture

    The focus of ATI’s new R520 architecture has really been on efficiency, making sure that all clock-cycles are put to good use and creating minimal overhead. Unlike previous rumors indicating the architecture had up to 32-pipelines, the actual number is 16-pipelines or pixel-shader processors, with 8-vertex shader processors.
  12. The best way to get at what you're trying to see is to look at Radeon X1600 series benchmarks in comparison to the Radeon X800GT/GTO and the GeForce 6800GT.

    It has the same 3-ALU arrangement, just less of it (it's only 4 TMU/4 ROP).
    It performs anywhere from drastically under the X800/6800 series to right ahead of them...

    Some will ask: if the X1600 is better at SM3.0, why is the X800 series faster in games like Far Cry, which have SM3.0?

    Remember that the X800 isn't an SM3.0 part, so it's running an SM2.0 version of the game, which means less complex calculations.

    The X800 has just as much raw power as the 6800, however, so it ends up performing better while running with fewer features. To see the visual differences between SM2.0 and SM3.0, check out THG's VGA charts for Winter PCIe; it should be the #1 link on the VGA Charts page.

    It has AoE3 images with SM2.0 vs. SM3.0, and goes on to explain that the X800 isn't SM3.0, IIRC. The difference in AoE3 is drastic since it's top-down, and the game bundles HDR into its SM3.0 path, IIRC. (SM3.0 isn't a requirement for HDR; HDR can be done with SM2.0, which is how Valve implements it in Half-Life 2 and CS:S.)

    Basically, the X800 is using just as much power to run a "watered down" version of the game (watered down from an IQ perspective), so of course it's going to go faster...

    But look at the X1600's benchmarks against those cards, and then realize that the X1600XT's overall fill rate is only 2,500M pixels/s and 2,500M texels/s (4 TMU and 4 ROP at 625MHz). That gives some insight into how well the 3-ALU system works. The X800GTO, for example, is 12 TMU and 12 ROP at 400MHz (IIRC), which works out to 4,800M pixels/s and 4,800M texels/s.

    And it performs basically in line with the X1600XT, which technically has around 50% of the raw power... The X800GTO (R480) only features one shader per pipe, compared to the three of the X1600XT...

    So the extra shaders definitely increase performance. I honestly don't feel like trying to quantify the performance advantage per extra shader per pipe at the moment...
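
    The same arithmetic for that comparison, as a quick sketch (the X800GTO clock is an IIRC figure, so treat it as approximate):

        x1600xt = 625 * 4    # 4 TMU/4 ROP at 625MHz = 2,500M texels/s and pixels/s
        x800gto = 400 * 12   # 12 TMU/12 ROP at 400MHz = 4,800M texels/s and pixels/s
        print(f"X1600XT has {x1600xt / x800gto:.0%} of the X800GTO's raw fill rate")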
  13. You better just face it. AGP prices are climbing and you missed your opportunity. :cry: It will be PCI-e for you when it's time to toss that Ti4200.
  14. If the XTX can't clearly beat the GF7800GTX 512MB, I'll be disappointed. Of course, where are the 512MB GTX's anyway? :roll:
  15. If the XTX can't beat the GeForce i am going to burn Canada to the ground.
  16. It should be able to beat the 512MB GTX; that isn't the 750MHz-clocked chip, it's the 550MHz-clocked chip...

    Just look at the X1600XT, which can beat the 6800GT (a card with over twice its theoretical fill rate); that should give an indication of what nVidia is up against with the G71, and possibly explain the choice of a 750MHz core clock (but please note that the G70 has 2 ALUs per pipeline; I don't believe NV40 does).
  17. My HIS X1800XT can currently hit 700MHz core and 1600MHz memory overclocked, using ATI's own overclocking tool with the 5.13 drivers.
    This thing rocks!
  18. They've hit 1GHz on R520s, but that was LN2-cooled.

    700MHz, though: not bad at all.
  19. From The Tech Report, who are nVidiots like you...
    Quote:
    A Radeon X1800 XT CrossFire rig is mighty fast. Also, it's six degrees Fahrenheit outside right now at my place, and I've enjoyed the room-warming benefits of CrossFire and SLI systems throughout the preparation of this review. My mind boggles, though, when I try to consider the value proposition of plunking down $1200 for a pair of graphics cards and roughly $200 more for the motherboard. Could a pair of Radeon X1800 XT cards in CrossFire be a better deal than two GeForce 7800 GTX 512s in SLI?
    Yeah, I suppose so, especially with GTX 512 prices currently in low-altitude orbit. I do have my reservations about CrossFire, including the hassle of dealing with external dongles and the iffy I/O performance of CrossFire motherboards that use ATI's SB450 south bridge. Still, CrossFire performance generally scales well enough from one card to two, and I said in my initial CrossFire review that the long-term success of this solution would hinge on the quality of ATI's new GPUs. Turns out that the Radeon X1800 XT is a very desirable graphics card that matches the GeForce 7800 GTX feature for feature and adds a few new wrinkles of its own, including finer threading granularity for Shader Model 3.0 and the ability to do antialiasing with high-dynamic-range rendering. The Radeon X1800 XT trails the GeForce 7800 GTX 512 in overall performance, but Radeon X1800 CrossFire may hit the streets at prices as much as $150 lower per card than the 7800 GTX 512. (Radeon X1800 XTs are already widely available at $599 or less.) In the rarefied air of big-money graphics subsystems, that potential $300 price difference—if indeed it develops—could make a Radeon X1800 XT CrossFire system a, uh, er, uhm, solid value.

    Yeah, I said it.

    It's bitchin' fast, at any rate.
  20. Is it me, or did you post that into 2 threads simultaneously?

    It doesn't really make sense here...
  21. Quote:

    It doesn't really make sense here...

    Yet for some reason....it's crackin' me up :lol:
  22. Hello,

    Let's not forget: the quality of ATI's new GPUs won't be the problem. The current problem will be the CrossFire mobos. The Asus and DFI CrossFire mobos don't seem to function all too well (hence many bad reviews), and not many other manufacturers are making such mobos yet.
  23. LN2 GPU cooling??? :D I like it... :)
    Also, I don't know much about the G71, but it really sounds like a killer...
  24. New tech: wait a while. When this stuff becomes more common, it is going to give nVidia some major headaches.
  25. Quote:


    Some will ask: if the X1600 is better at SM3.0, why is the X800 series faster in games like Far Cry, which have SM3.0?

    Remember that the X800 isn't an SM3.0 part, so it's running an SM2.0 version of the game, which means less complex calculations.

    The X800 has just as much raw power as the 6800, however, so it ends up performing better while running with fewer features. To see the visual differences between SM2.0 and SM3.0, check out THG's VGA charts for Winter PCIe; it should be the #1 link on the VGA Charts page.


    That has nothing to do with it. Both cards support geometric instancing, and thus, other than HDR, both perform under the exact same paths. This is the same with the X800 vs. the X1600 or GF6800. Only HDR makes a difference, and that's not what accounts for the performance differences between the two cards; it's the TMU, ROP, and vertex engine differences that do it.

    Quote:
    Basically, the X800 is using just as much power to run a "watered down" version of the game (watered down from an IQ perspective), so of course it's going to go faster...


    Stop talking BS! The X800 beats both in exactly the same version of the game, with IQ being the same for both. The only difference is that the GF6800 and X1600 have the option to run an additional mode (terribly / far, FAR slower) once HDR is enabled, but it's never used in head-to-head comparisons. So don't pretend this is some kind of misleading benchmark; the performance is better under the same conditions. And if Crytek had implemented FP32 3-pass HDR, you would likely still see the X800XT beating the X1600 series, and the X800XT nearing its GF6800 counterpart's performance, while rendering FP32 and not FP16 like the NV4Xs. Really bad comparison on your part. :roll:

    Try explaining the huge performance drop of the SM3-capable cards with rthdribl; that'd make my day. :P

    The main areas to look for benefits will be multi-pass situations and target-dependent renders. Expect to see soft shadows benefit a lot from this advantage compared to similar designs. I'm sure both will have their advantages, because some games/apps will play to each architecture, just like some apps now prefer raw speed over added features.