-T-bird 900 not overclocked
-Abit K-T7 Raid board
-192 meg pc-100
-7200 40 gig drive on ATA 100
-Hercules Geforce DDR_DVI not overclocked
-Using Direct X 8
-Currently VIA 4 in 1 version 4.25 (I believe)
-Currently Detonator 6.47
My scores seem a bit low. In 3DMark 2000, running the default 1024x768 at 16-bit, I get 4990 (never above 5000). When I set it to 32-bit I get 3950 (never above 4000). That seems quite paltry compared to other people's scores. Also, I see people post scores for 3DMark, but I never see them say what resolution/color depth they're using. What is the standard that everyone uses to compare?
Now in Quake 3 I think the scores are also low. At 1024x768 with every setting at max I get 67 fps. At 640x480 with everything at max I get 93 fps. With every option at its lowest, running at 640x480, I only get 96 fps. That's the most I can get, period. Even the lowest resolution (320x256 I think) won't go above this. I need opinions here.
I've tried most of the Detonator drivers from 3.31 up to 6.47 (nothing above yet). I've got my AGP card set for 4x and it's enabled. I used to be running DirectX 7 and then 7.1a, I think. None of these variations seems to make any real difference; in fact, every time we get a driver update, my performance stays basically the same. I've even done complete wipes of my hard drive and started with completely fresh installs, but still the same results. I've made sure that v-sync is off, and I've fiddled with other settings with limited results. I must be missing something. What could it be? Also, is there any way to verify a Thunderbird processor to make sure it is the speed that you bought? I would hate to find out that I am running an overclocked processor.
When comparing with other people's scores, you should just click the standard benchmark button that 3DMark shows when it starts. That runs at 1024x768 with 16-bit color.
Your score of around 5000 is a wee bit low, but not horrible. You can maybe get a bit above 5000, but not much.
The only way your Thunderbird could be overclocked is if the L1 bridges on it have been closed. In case you don't know, those are small conductors on the surface of the CPU. When delivered from the factory, they are open. They can be closed by drawing over them with a pencil or a pen with conductive ink.
If you look at the CPU (pull off the heatsink first), you should see two rows of dots just beside an "L1" mark. There are also L2 and so on, but they are not interesting here. If the L1 dots are connected with ink or graphite, the CPU has possibly been overclocked with the help of your mainboard's multiplier setting.
If the dots are just dots lying around, everything is fine.
I don't think there is any need to check it, though. Your CPU is most likely perfectly normal and not altered in any way, especially if you got it from a decent vendor.
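As a sanity check that doesn't require pulling the heatsink: a CPU's clock is simply FSB x multiplier, so you can compare what the BIOS reports against what the chip's rating implies. Here's a minimal sketch; the 100 MHz FSB and 9.0x multiplier are my assumption for a stock Thunderbird 900, so plug in whatever your own BIOS shows.

```python
# Sanity-check a CPU's clock: rated clock = FSB * multiplier.
# Assumed values for a stock T-bird 900: 100 MHz FSB, 9.0x multiplier.

def rated_clock_mhz(fsb_mhz: float, multiplier: float) -> float:
    """Return the clock speed (MHz) implied by FSB and multiplier."""
    return fsb_mhz * multiplier

# A stock Thunderbird 900:
print(rated_clock_mhz(100, 9.0))   # 900.0

# If the BIOS shows, say, a 110 MHz FSB with the same multiplier,
# the chip is running above its 900 MHz rating:
print(rated_clock_mhz(110, 9.0))   # 990.0 -> overclocked
```

If the number the BIOS (or a utility like WCPUID) reports doesn't match FSB times multiplier for the speed you bought, something has been fiddled with.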
Are you sure you have the DDR version of the standard GeForce and not the SDR version? The scores from the setup you have sound like scores an SDR card would return. Do the drivers detect it as a DDR-equipped card?
Groovecat, your scores are well within range for a non-overclocked video card. If you use the "compare results" database at Madonion you will see that most of the best scores are on systems with overclocked video cards. You can select Athlon 900 systems using Geforce DDR cards to see what I mean.
Although I should say up front that I don't recommend overclocking your video card (it can be permanently damaged!), it can make a big difference. Here are some of my results, which demonstrate my point.
Notice how I obtained a bigger score increase from overclocking my video card (4016 to 4873) than from overclocking my system (4016 to 4852). Doing both together gave me a whopping 37 percent increase in my 3DMark 2000 score!
The Madonion database will show an overclocked system but will not show an overclocked video card (unless the tester notes it in the project description). There is a 21 percent difference between my two scores, both obtained on the same 600 MHz system. Scores between different systems can vary even more.
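In case anyone wants to check the arithmetic, the percentages above are just (new - old) / old. A quick sketch using the scores quoted in my post:

```python
# Percentage gain between two 3DMark scores: (new - old) / old * 100.

def gain_pct(old: float, new: float) -> float:
    return (new - old) / old * 100

baseline  = 4016   # stock system, stock video card
video_oc  = 4873   # video card overclocked only
system_oc = 4852   # system overclocked only

print(round(gain_pct(baseline, video_oc), 1))    # 21.3 -> the "21 percent" above
print(round(gain_pct(baseline, system_oc), 1))   # 20.8
```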
I think overclocking the CPU and the video card produces some of the bigger increases in scores, but FSB speed, AGP bus speed, AGP fastwrites, AGP 1X vs. 2X vs. 4X, memory speed, memory CAS2 vs. CAS3, and memory timings can affect scores as well. None of these are listed in the Madonion database.
Hope this clears things up.
Abit KT7 motherboard
Duron 600 (at various speeds)
OCZ Monster II HSF
OCZ 128mb "enhanced" SDRAM, 133mhz (or more), CAS2, Fast Timing
VisionTek Geforce256 SDR
Maxtor DiamondMax VL 40GB hard drive
Toshiba 24X CD-ROM
HP 8100i CD-RW
NIMBUS PC VC-803K case with L&C Power 300w PSU
Mag Innovision DX-15T monitor
Some notable BIOS settings
AGP 4X - disabled
AGP Fastwrites - disabled
Enhanced Chip Performance - enabled
DRAM 4-Way Interleave - enabled
DRAM Timing - Fast
Edited by phsstpok on 01/22/01 04:06 PM.
I appreciate everyone's suggestions. I'm still tinkering away. I tried upping to 1 GHz, but the system wouldn't take it at the default settings. I may have to fiddle with the bus speeds (though I'm a little nervous about doing this) to get it up to 1 GHz. The thing that really bugs me is that I should have higher scores without any special tweaking or overclocking. Perhaps the PC100 memory is slowing me down; unfortunately, it's not worth it to upgrade to PC133. Well, back to fiddling.
I made the changes to match your BIOS settings, and my 3DMark score went up to 5080... Woohoo, a step in the right direction, and I finally surpassed 5000. However, I didn't really see any jump in my Quake fps. I'm going to try the VIA 4.28 drivers later tonight and see if that makes a difference. I'll also try updating the BIOS if I have time. Thanks for the advice.
Well, I ran tests with memory set to 100 MHz, CAS3, and I was surprised by the outcome (though I shouldn't have been). Running the default 3DMark 2000 benchmark (1024x768, 16-bit) I got the following scores (Duron at 1000 MHz, GeForce SDR at 120/166).
100 MHz, CAS3 - 4395
133 MHz, CAS2 - 4480
Just a 1.9% difference.
Then it dawned on me. The GeForce SDR is slow and creates a bottleneck for the system even with 16-bit color. My computer has no trouble keeping up no matter what memory settings I use.
Next I tried 640x480 resolution. I hoped the GeForce would be fast enough at that resolution to challenge the rest of the system. Here are the results.
100 MHz, CAS3 - 6922
133 MHz, CAS2 - 7434
Here I get a 7.4% difference, more in line with what I was trying to prove.
What does it mean? Surely no one will run their games at 640x480 with 16-bit color. So does memory speed matter for games?
It depends on what video card one has and the resolution at which it is used. When one is trying to strike a balance between high resolution and barely acceptable framerates, the video card is already the limiting factor. Consequently, the computer has no trouble keeping pace, and memory speed is not so important.
However, this balance point differs between classes of video cards. MY GeForce gets just good enough framerates at 1024x768 (at least in some games); in that case memory speed WOULD NOT make much difference. A GeForce 2 GTS can get acceptable rates at 1280x1024, so if that card were run instead at 1024x768, the computer would be less challenged and memory speed WOULD make some difference. The same goes for a GeForce 2 Ultra: it can run at 1600x1200 with OK rates, but at 1280x1024, PC133 memory can deliver faster rates than PC100 memory.
Of course, if one's framerates are already good enough, then more speed provides nothing but bragging rights. The improvement can't really be "seen".
A faster video card is still more important than a faster computer. For 3D games, that is.
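The reasoning in the last few posts can be condensed into a toy model (my own illustration, not anyone's measured numbers): the framerate you see is roughly the minimum of what the CPU/memory side can feed and what the video card can draw, which is why speeding up the side that isn't the bottleneck barely shows in the score.

```python
# Toy bottleneck model (illustrative numbers only): observed fps is
# limited by whichever side is slower, the CPU/memory feeding frames
# or the video card drawing them.

def observed_fps(cpu_side_fps: float, card_side_fps: float) -> float:
    return min(cpu_side_fps, card_side_fps)

# At high resolution the card is the limit; a faster CPU/memory side
# changes nothing:
print(observed_fps(cpu_side_fps=96, card_side_fps=67))    # 67
print(observed_fps(cpu_side_fps=105, card_side_fps=67))   # still 67

# At low resolution the card has headroom, so the CPU/memory side
# becomes the cap (like the 96 fps Quake 3 ceiling earlier in the thread):
print(observed_fps(cpu_side_fps=96, card_side_fps=300))   # 96
```

It also explains Groovecat's 96 fps ceiling at 640x480: once the card stops being the limit, dropping settings further just reveals the fixed CPU-side cap.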
I hope this was intelligible (and I hope I got it right).