Quote of the Day!

"In this case I hope that NVIDIA has applied for a patent early enough, because otherwise Rambus may follow its tradition, copy NVIDIA's design, patent it and then sue NVIDIA"-Tom Pabst concerning nVidia's new memory controller architecture, the Crossbar Memory Controller, for the GeForce3.

Suicide is painless...........
  1. I busted up laughing when I read that in his review of the GeForce3. Sadly though he does have a point. He also had a good point about how interesting it would be to see a motherboard chipset from NVIDIA. Besides just the more intelligent memory handling, could you imagine the integrated graphics? :)

    - Sanity is purely based on point-of-view.
  2. Hey, now there's a good idea. In this forum so far, everybody hates integrated chipsets, but if nVidia made a GeForce3 integrated chipset, that might change! One of the major reasons the Xbox will outperform GF3 PCs is that PC graphics have to go through the AGP bus. An integrated solution could eliminate the AGP bus! A few people would complain about the lack of an upgrade option, but the GF3 should be good enough that you really won't need one. Think about it: if you had bought a TNT2 Ultra integrated solution 3 years ago and wanted to put in a GeForce now, it would be time to replace your outdated motherboard anyway!

    Suicide is painless...........
  3. Exactly. And besides, the AGP bus could still be supported, just initially deactivated by a DIP switch or jumper or something. So if you REALLY wanted to upgrade the onboard graphics, you'd have that option. (But up against a GeForce3 integrated chipset, who would want to?)

    And besides, if NVIDIA got into the chipset market then they'd probably start working with .13 micron etching instead of .15. Imagine what a GeForce4 would be like with that! :)

    I know that if NVIDIA offered their own chipset, I wouldn't even hesitate to buy one. The thought of the great memory interface, the intelligent work that NVIDIA does, and the hopes of one kick-butt integrated video solution all sound like Siren lures that I can't resist. :)

    I also wonder if NVIDIA will branch out into audio work now that they've had a taste of full-fledged multi-media with the XBox. That could be just as interesting to see.

    - Sanity is purely based on point-of-view.
  4. What would be cool is if nVidia released an integrated chipset with a ZIF socket, so that the whole chipset could be upgraded without replacing the motherboard! And a REALLY BIG CPU COOLER would fit on it for overclocking! And with Intel pushing for 400MHz DDR system memory, the chipset could use 400MHz DDR DIMMs! Think about that: faster video, upgradable, with upgradable video RAM at current video RAM speeds! Awesome! And you would be able to buy faster video RAM as it became available!

    Suicide is painless...........
  5. Now there's a killer speculation for graphics GPUs.

    If the motherboards themselves had a ZIF socket for GPUs as well as CPUs, then you could just buy a new GPU without having to get a whole card. You could completely bypass the slow (and soon to be insufficient) AGP bus and instead have the northbridge treat the GPU and CPU equally.

    This would also allow you to completely toss the whole concept of video memory. It could just run purely off of your system memory. Currently that sounds bad. But in the future (hopefully) system memory will be upgraded just as often as technology allows. (Instead of this whole weird thing where video cards have been using DDR SDRAM for years and yet the system memory is only now starting to use it.)

    And it would allow you to put a big heat sink and fan on the GPU, because it would have the mounting stability of a CPU. This in turn would allow the GPU to run at a much higher clock, since its cooling solution would be considerably superior to anything that could be done on an AGP card.

    And the additional power supply of AGP Pro wouldn't be a worry anymore either, because that would all be part of the GPU mounting on the motherboard, just like the CPU voltage regulator is.

    So the GPU could be clocked a lot higher, it wouldn't suffer from bus problems any more at all, it wouldn't suffer from a lack of voltage, it'd be more easily upgradable, it'd be less expensive, and upgrades would therefore be less expensive.

    It'd be a lot like running a dual CPU system except that one of your CPUs is a GPU, so your graphics performance is going to be out of this world.

    And it'd probably beat the pants off of a console because the larger heat sink allowing for a faster GPU clock would kick a console in the teeth. Consoles just aren't made that big.


    NVIDIA definitely needs to revolutionize the motherboard business. I'd sell my soul to NVIDIA for a system like this.

    - Sanity is purely based on point-of-view.
  6. You've got a deal! I can make these things happen, but the price is high; one soul should do the trick. I'll barter the deal. You're not too fussy about who gets your soul, are you?

    Suicide is painless...........
  7. That would definitely be a kickass setup... especially since GPUs now seem to have more transistors than CPUs... is this right? Why have the poor little GPU set off by itself on a card when it could be right on the motherboard... I argued this very point with a friend the other day (he argued things about upgrading and RAM, etc...). I pointed out that it could be on the board just like a CPU, with a chip that pops on or off (changeable), and use the RAM on the motherboard... now that would be a sweet motherboard... and it would drop the overall cost of the system... (ideally)... and you wouldn't really be limited by the RAM options :)

    Old adage: "Users never prosper" :o) Long live the tweakers
  8. someone want to share this thread with Nvidia?
  9. Well, the rumor mill has been spitting out that nVidia's Crush chipset (AMD/Athlon-based DDR northbridge) is going to kick some major ass. As soon as that baby is out, I'll be on the phone asking Asus and Abit for engineering samples.

    -MP Jesse
  10. Hey, if you can deliver the product, I can deliver the soul.
    After all, without the limitations of a soul I would be capable of doing things that I'd never let myself do before. **evil grin**

    Now, if we could just find a way to toss out these electrical buses on the motherboard and replace them with fiber optics. Even with electrical chips, you could put translators on each end of the bus and still see a dramatic speed improvement.

    That PCI bus has just GOT to go.

    Price? Who cares about price? Where there's a will, there's a way. :)

    - Sanity is purely based on point-of-view.
  11. <font color=red>It's souls you're after, is it?

    Don't worry. I'm off to the execution chamber.
    You'll have to pay for postage and packaging though!
    </font color=red>
  12. And what do you propose we replace the PCI bus with?

    Suicide is painless...........
  13. Heck, if I were an electrical engineer sitting down at ye olde drawing board right now, I think my replacement for PCI would go as follows:

    Since you still need expansion cards, you'll still need a bus of some sort. But no one says it has to be so crappy. So why not use a double-pumped 128-bit wide bus running at the same clock frequency as the FSB? Give each component on this replacement bus the same rights to access system memory as an AGP card has.

    In fact, make the chipset treat each expansion card in the same manner that it treats the CPU. Make them all peers. Allow them to talk to each other over the bus, talk to the memory, or talk to the CPU.

    Include the socket on the motherboard for the video GPU and have it treated as a peer by the chipset as well.

    In fact, turn the chipset into a ChPU in its own socket, so that when better chipset revisions become available, you can upgrade without having to replace the whole motherboard. And give it its own heat sink, because it'll run more like an intelligent communications director. Because of this, it'll have to have a frequency and multiplier like a CPU so that it can process multiple commands in one clock cycle.

    In fact, add into this ChPU something like eight individual memory-controller slave systems, each with its own cache, so that several processes can use different buses to the memory simultaneously, accessing various parts of memory for different components all at the same time.

    This way the CPU and each card can in effect access the memory all at the same time without slowing each other down to do it.

    And screw backwards compatibility with PCI. Instead, have the first series of motherboards support the old PCI bus alongside this new bus, with two or three PCI slots, the same as when ISA was phased out.

    I'd call it something like a Friendly Peer Architecture, to symbolize that all of the components in the system are now peers to each other instead of slaves to the CPU.

    Granted, it'd be rather expensive I'd imagine to do this at first. But it would be quite worth the money.
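    Just to put rough numbers on the idea: here's a back-of-envelope sketch of the peak bandwidth such a bus might offer, assuming classic 32-bit/33MHz PCI and a hypothetical 133MHz FSB (both clocks are my assumptions, not part of any real spec).

    ```python
    # Peak bus bandwidth: (width in bytes) x clock x transfers per clock.
    def bus_bandwidth(width_bits, clock_hz, transfers_per_clock=1):
        """Peak bandwidth in bytes per second."""
        return (width_bits // 8) * clock_hz * transfers_per_clock

    pci = bus_bandwidth(32, 33_000_000)       # classic PCI: 32-bit @ 33 MHz
    fpa = bus_bandwidth(128, 133_000_000, 2)  # proposed: 128-bit, double-pumped @ 133 MHz FSB

    print(f"PCI: {pci / 1e6:.0f} MB/s")  # 132 MB/s
    print(f"FPA: {fpa / 1e6:.0f} MB/s")  # 4256 MB/s, roughly 32x classic PCI
    ```

    Even if the real clocks ended up different, the point stands: widening and double-pumping the bus buys an order of magnitude or two over PCI.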

    - Sanity is purely based on point-of-view.
  14. Of course, this wouldn't last infinitely. It might even have to be quad-pumpable just for the P4 chips.

    But each time the computer is turned on, each component on the FPA bus would report its appropriate clock speed to the ChPU, so that as the overall FSB of motherboards gets upped, the older FPA cards won't be overclocked accidentally.

    This way the architecture will last indefinitely, because it's not dependent on a specific bus speed. Each card can theoretically have its own bus speed, so long as you don't put a card with a faster speed in a slower system. But even that might work out okay; the card just won't perform as well as it claims to, because nothing ever has problems being underclocked.
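    The boot-time negotiation described above could be sketched like this (the function name and clock values are purely illustrative; no such protocol exists):

    ```python
    # Hypothetical boot-time clock negotiation: each FPA card reports its rated
    # clock, and the ChPU never drives a slot faster than the card's rating.
    def negotiate_slot_clocks(fsb_mhz, card_ratings_mhz):
        """Return the clock each slot actually runs at.

        Underclocking is always safe, so every slot simply gets
        min(FSB, card rating): old cards are never overclocked, and
        fast cards in a slow system just run below their rating.
        """
        return [min(fsb_mhz, rating) for rating in card_ratings_mhz]

    # A 266 MHz board with an old card, a current card, and a future card:
    print(negotiate_slot_clocks(266, [133, 266, 400]))  # [133, 266, 266]
    ```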

    - Sanity is purely based on point-of-view.
  15. You're making things too complicated. Keep the PCI bus, and make a 66MHz/133MHz version that is auto-switchable by adding an extra 8-pin extension for detection. Put the graphics and northbridge on the same chip, and put that chip in a ZIF socket. Double the memory bandwidth by using two channels instead of doubling the clock rate (this would require chips to be mounted in pairs).
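    The arithmetic behind the two-channel idea checks out. Assuming 64-bit channels at 133MHz (my numbers, for illustration), two channels at the same clock give exactly the same peak bandwidth as one channel at double the clock:

    ```python
    # Peak memory bandwidth in MB/s: (width in bytes) x clock in MHz x channels.
    def peak_mb_per_s(width_bits, clock_mhz, channels=1):
        return (width_bits // 8) * clock_mhz * channels

    single = peak_mb_per_s(64, 133)     # one channel at base clock
    dual   = peak_mb_per_s(64, 133, 2)  # two channels, same clock
    fast   = peak_mb_per_s(64, 266)     # one channel, doubled clock

    print(single, dual, fast)  # 1064 2128 2128 -- dual-channel matches the doubled clock
    ```

    The trade-off is pin count and paired DIMMs rather than faster (and pickier) memory chips.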

    Suicide is painless...........
  16. Yeah, I am making it complicated.

    But if we just amp up the PCI bus without making it more scalable, then five years from now people will be complaining about how slow the bus is again.

    Whereas with my more complex method, the bus technically never falls behind. Faster and faster cards can be made for it.

    And, it gives things like sound cards the ability to run on the system without actually slowing the system down. So you get the same frame rate in games without one as you do with one.

    And it'd allow sound cards to become a lot more advanced for 3D sound. The sound card could even theoretically communicate directly with the video GPU this way, so that they could work together to determine 3D sound.

    It's a lot more complex, but the concepts behind it would never really become outdated. And it would allow card engineers to do a lot more than they're able to today.

    Plus, it would let me develop my linked-list card. :)

    - Sanity is purely based on point-of-view.
  17. How about this: a FireWire interface. Sure, it's more expensive, but it's scalable and would allow the cards to be mounted externally. In fact, they could develop a FireWire interface slot that contains the FireWire connections plus power connections, allowing cards to be mounted in a cheap external enclosure or internally.

    Suicide is painless...........
  18. Stop the madness! You're making me depressed about my current PC, although it is relatively state-of-the-art.

    There is one problem with your ideas... As is true right now, real estate on the motherboard PCB is very cramped. The PCB would have to be greatly enlarged to accommodate another ZIF socket and twice the number of transistors. And if you do enlarge the PCB, the mobo won't fit in any case. Also, have you considered the power supply that would be necessary to run that thing? But don't let my groaning discourage you. In 5 years we might see something like that; I doubt any sooner.

    As for some more ideas, why not separate the RAM for the CPU from the RAM for the GPU? Have two separate sections on the mobo with DIMM slots: one for the CPU and its related peers, and one for the GPU and its like. That way, you could customize how much RAM you want each to have. And hey, we have dual-CPU systems, so why not make our board dual-GPU capable? Imagine... *drool dripping*

    - "I don't write Tom's Hardware Guide, I just preach it"
  19. We already have dual-CPU boards, Tempus, and they seem to have enough room on them. But if you were to use an integrated chipset with the video onboard, and make it replaceable via a ZIF socket, you would only be increasing the footprint of the northbridge by a small amount. If we kept it simple by NOT having an AGP slot and KEEPING the PCI slots, even a Micro ATX board would work for most gamers by providing 4 PCI slots.

    Suicide is painless...........
  20. A FireWire interface sounds... odd. Of course, you could probably do the same with a USB2 interface, but that's another story. I guess it'd be interesting to see. I personally wouldn't mind being able to have a little black box next to my tower for keeping all sorts of extra cards in... But unless those extra cards could be ISA and PCI, it wouldn't do me any good.

    A dual-GPU system would be pretty cool. But then, with my FPA method you could theoretically have a main GPU onboard and as many daughter GPUs as you wanted in FPA slots. Since they would treat each other as peers, that would allow them to talk to each other instead of having to route communications through the CPU. Still, it probably wouldn't be quite the same.

    But I wouldn't imagine that the FPA method would be utilized any time soon. To even see something close to it come out in the next 5 years would be amazing.

    It's funny. Engineers improve the CPU and the GPU. They improve the memory and the IDE interface. But they don't improve the motherboard itself to eliminate these horribly slow buses that drag down the rest of the system. Sure, Intel and AMD are both finally making their own efforts here, as are others, but I'm not expecting their ideas to be all that amazing... just faster.

    But still, putting an on-board GPU ZIF socket would be one heck of an excellent improvement for gamers and 3d animators alike.

    - Sanity is purely based on point-of-view.
  21. I just realised another thing: I've heard of incompatibility problems with FireWire systems. What works on one controller card doesn't on another, or odd things like that. I know it'd piss me off greatly to purchase a component that doesn't work in my system when by all accounts it should. And, from what I've heard, the new FireWire2 specs are more theoretical than actual.

    So I'm not sure if I'd want my computer expansions to be based on something like this. It sounds more nightmarish than cool.

    I'll stick with my Friendly Peer Architecture for my dream bus. Maybe it won't exist for at least another five years. Maybe it'd be complicated to integrate into a motherboard. But maybe it'd be worth it anyway. I know if I had the resources to start a company, I'd work on developing open-source standards for it and produce a motherboard that uses it for both Intel and AMD users.

    I mean memory is manufactured in what, .18 micron process or worse? Same with motherboards? Imagine if they just started using a .13 micron etching like the chip manufacturers, or .15 micron like nVidia is using. Maybe it would be complicated, but I bet it'd fit on the motherboard without a new size standard. Why should we see newer and better CPUs when our motherboards aren't showing the same improvements? Why should our fast CPUs be slowed down by inferior busses?

    And sure, maybe we'd need a bigger power supply. We're already up to a suggested 350 watt for Athlon systems. I don't know what the P4 systems are suggested to use. But would it be so crazy to suggest that such a more powerful system go to a 400 watt supply? It's not like we couldn't do it.

    Would it be costly? Yes. Would FPA, a ChPU ZIF, and a GPU ZIF be worth it? To me it would. I'd even be willing to pay double (maybe even triple) the cost for a motherboard that implements this. The speed difference would be stunning. And the future expandability would be nearly limitless. That's worth a lot to me, since systems go out of date so rapidly these days.

    And even still, we could realise the onboard GPU ZIF today. Systems could be using it a year from now. And that would DEFINITELY be worth it to me. It'd make a computer just about as good as any console could be for games, and all without any hit to business performance. In fact, it'd probably save money, since I'm sure video companies like nVidia would rather just produce a GPU with solid cooling like a CPU than a whole board that has to fit specific size and weight limitations.

    - Sanity is purely based on point-of-view.