First Ever GeForce GTX 285 2 GB Card

Palit Microsystems has unleashed its very first custom-designed GeForce GTX 285, available in both 1 GB and 2 GB tasty flavors.

According to the company, its GTX 285 2 GB graphics card is the first in the industry to offer such a heavy load of memory. Consumers who don't have that kind of cash to thrash can opt for the neglected step-child, the Palit GTX 285 1 GB version. Despite the different memory helpings, both cards offer the usual Nvidia goodness gamers have come to know and love, including Nvidia's PhysX and CUDA technologies. Both also run a 648 MHz core clock and 2.5 GHz GDDR3 memory on a 512-bit interface, coughing up nearly 50 percent more performance than prior-generation GPUs. And of course, more power means more gaming love. Who doesn't want that?

The card sports two PWM fans, and the GTX 285 series utilizes four heat-pipes, one more than the three on the Inno3D GeForce GTX 260 announced yesterday. "Conceived for two GPUs, the two PWM fans are able to provide sufficient air flow to cool GPU on the graphics card quietly. The PWM fan created for both fans can adjust the fan speed depending on the GPU’s temperature," the company said.

The GeForce GTX 285 supports Microsoft's DirectX 10 Shader Model 4.0, GigaThread, OpenGL 2.1, as well as SLI multi-monitor support. Its two dual-link DVI outputs can drive a pair of 2560x1600 displays, complete with dynamic contrast enhancement and color stretch. With Nvidia's CUDA technology, editors can transcode video up to 20x faster than with a CPU alone (waiting on which is both annoying and time consuming, especially for consumers suffering ADD). Nvidia's PureVideo technology promises "unprecedented picture clarity, smooth video, accurate color, and precise image scaling for movies and video with minimal CPU usage and low power consumption."

Under the hood (so to speak), the card features 240 enhanced processor cores clocking in at 648 MHz. The 1 GB/2 GB of GDDR3 memory clocks in at 2.5 GHz over a 512-bit interface, for a memory bandwidth of 159 GB/sec. The card offers a whopping 51.8 billion/sec texture fill rate, and cranks out all the graphical goodness through a PCI Express bus interface (supporting 2.0). According to the product page, the card requires a PCI Express or PCI Express 2.0-compliant motherboard with one dual-width x16 graphics slot. The card also needs a 6-pin PCI Express supplementary power connector and a 550W or greater power supply to keep it juiced.
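For the spec-sheet skeptics, the headline numbers hang together arithmetically. Here's a quick back-of-the-envelope sketch in Python; note that the exact 2,484 MHz effective memory clock (the article rounds to 2.5 GHz) and the 80 texture units are our assumptions, not figures from the article:

```python
# Memory bandwidth = effective memory clock (transfers/sec) x bus width (bytes).
# 2,484 MHz effective is our assumption; the article rounds it to 2.5 GHz.
effective_mem_clock_hz = 2_484e6
bus_width_bytes = 512 // 8            # 512-bit interface -> 64 bytes per transfer

bandwidth_gb_s = effective_mem_clock_hz * bus_width_bytes / 1e9
print(f"Memory bandwidth: {bandwidth_gb_s:.0f} GB/sec")        # -> 159 GB/sec

# Texture fill rate = core clock x texture units.
# 80 texture units is our assumption; the article gives only the 648 MHz clock.
core_clock_hz = 648e6
texture_units = 80

fill_rate_billion_s = core_clock_hz * texture_units / 1e9
print(f"Texture fill rate: {fill_rate_billion_s:.1f} billion/sec")  # -> 51.8 billion/sec
```

The rounded 2.5 GHz figure would give 160 GB/sec, which is why the quoted 159 GB/sec only falls out of the unrounded clock.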

Palit Microsystems said that the GeForce GTX 285 is (unfortunately) "built for Windows Vista," and is GeForce 3D Vision-ready as well. Mmmmmm beefy!

    Top Comments
  • Tindytim
    the GeForce GTX 285 is (unfortunately) "built for Windows Vista,"

    Please fire Kevin Parrish.
  • Other Comments
  • Claimintru
    How is it "Built" for vista. Using one right now (Not Palit mind you) on Win XP Pro and its butter.
  • curnel_D
    "Despite the different memory helpings, both cards offer the usual Nvidia goodness gamers have come to know and love, including Nvidia's PhysX and CUDA technologies."

    Wow kevin, the second article today that you spew this crap. Cuda and PhysX isnt the finest thing since sliced bread. Infact, PhysX divides your graphic card's computational cycles between PhysX and Graphics, slowing games down in most instances, while Nvidia cuts out the best PhysX option, the seperate PPU. And second, Cuda has hardly any compatable bug free programs, and they're all way way to expensive. Both things are just exuberant amounts of propaganda from Nvidia to get them to buy their overpriced GPUs.

    Or have they threatened to black list you too, best of media?
  • Darkness Flame
    I don't know about that, Curnel_D. I mean, both Mirror's Edge and Unreal Tournament 3 show rather nice eye candy with PhysX enabled. Sure, it cuts your frames, but it's better than running it on a CPU, and you could always get another card. In fact, if you already have one nVidia card, you could get a second nVidia card, that doesn't have to be the same mind you, and have that one focus on just PhysX, so you don't really see a drop.

    As for CUDA, sure, it's not perfect yet. But if you use the Adobe CS4 suite, or you just transcode a lot of video, spending ~$20 more on an nVidia card for equal frames, but a significant boost in those other applications isn't that bad.