3DS Render Benchmark

Guest
Hi,
Could anyone tell me how I could get hold of the MAX scene (Kinetix logo) that Tom uses to benchmark rendering speeds on several processors?
I'd like to compare my dual 1 GHz to an Athlon setup. I know I may end up depressed (if not now, then definitely when dual Athlon boards surface). I already had the motherboard and RAM, as it was an upgrade and therefore a more affordable move. I also love the oh-so-smoothness of SMP and got tired of waiting for AMD to get their act together.
Thanks in advance,

Mark
 

mpjesse

Splendid
AMD isn't lost or confused about releasing the 760MP chipset; they're simply waiting for the best time to release it. It will be released this quarter, according to AMD's roadmap. They'll probably release it when Intel releases the 1.7 GHz P4.

-MP Jesse

"Signatures Still Suck"
 
Guest
Actually, it has already been delayed a couple of times. I've been following it for almost a year now. I would have loved to go with a dual Athlon solution, especially considering the superior floating-point performance. I don't see how you can claim they were simply "waiting for the best time". Considering they have no SMP solution out right now, and Intel has really been the only consumer choice for an MP workstation, I'd say the "right time" would've been as soon as responsibly possible.
I really doubt they were waiting for the 1.7 GHz P4 to compete with. I'm sure they'll have a Palomino capable of competing processor to processor. No, I think they've just had a bit of a hard time perfecting a fairly complex design (considering the Athlon architecture and the way it does SMP).
Now I don't want to start an Intel vs. AMD debate, as it's been absolutely beaten to death in several other threads (some more intelligent than others). All I really want is that KTX.max file.
Anyone got it?
 
Guest
I think it might be a file that was created in-house. If you would like to compare single-CPU P3 units to Athlons, have a read of Tom's article "Professional Affair" in the graphics guide dated May 15. It also shows how the instruction sets are catered for by pro 3D cards. To get some real facts on multithreaded apps, read an article called "Dual CPU Chipsets" at www.viatech.com.

Duals are definitely the way to go for your situation, and as AMD don't have their 760MP commercially available yet, they're not worth considering. Even when they do, a lot of people seem to suggest it will operate flawlessly. I very much doubt that. Just reading about all the issues with AMD units at this forum and others is enough to turn me off. SMP is a very complex concept to make work effectively. Intel have been developing it for years. Stick with them. You'll find most, if not all, industries will.

One other thing to remember is that there are too many pretenders floating around giving advice. Reliability varies greatly between apps when you're pushing your computer's limits. Not too many people here have actually ever set up a render that might take hours or days to complete, or composited video in real time, or done material analysis, or solid-rendered a complex component. Be careful who you listen to.

"Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
 
Guest
Bah! He uses that old MAX file from MAX 1, not MAX 2. What a jerk :) Maybe you know someone with MAX 1? I'll ask my friend tomorrow.
 
Guest
I hate to say it, but DirectX is the wave of the future, *shudders*. They just keep feeding more and more into DirectX, and now the software is starting to use it (i.e. MAX 4). Mainly Discreet, Nvidia and Microsoft are pushing for it, so everyone else will follow; get three powerhouses behind something and you can bet it will happen :) I think by the time everyone is ready to spend another couple thousand on a new professional card, the successor to the Wildcat, or the card that comes to beat it, will have full DirectX support.



Granted, Intel chips do better in some 3D apps out there, but I think most software developers are putting AMD optimizations in now too. Most of them are pissed that Intel makes them rewrite from scratch every time it comes out with a new CPU.

Another good thing with Intel is the dual chips; duals are just damn good in 3D apps. If I had a huge budget I'd go with a dual P3 1 GHz system, but my pockets aren't all too deep right now, so I am going with AMD. It takes a P3 1 GHz to equal the render power of a 700 MHz Duron/Athlon. To me that is just amazing. I'll get a 1 GHz AXIA Tbird and overclock it to 1.4 or 1.5. That rendering power is almost that of a dual P3 1 GHz system for a fraction of the price. When it comes down to it, when you have to render you want the fastest possible times. Of course, if you have a rendering farm all set up, then go dual P3s.

Hey! I found the file from MAX 1! The only thing is I use MAX 4, and there are compatibility problems when opening it, but that shouldn't be too big of a problem when it comes time to render. Also, the MAX 4 renderer might be a little faster/slower than the MAX 2 renderer which Tom uses. I'll find a place to upload it. My P3 500 MHz, 256 MB PC100 system renders it in 3:01. That comes out to around the 20 pages per hour mark. Glad to be rid of this setup :)

The quickest place I found to upload was my Yahoo briefcase. Here's the link; please post your results :). http://briefcase.yahoo.com/m_kelder

Time to go to bed, talk to you guys later
 
Guest
Yep, I agree with you in regards to DirectX. DirectX isn't actually too bad on a small scene with a few objects. The rest...well, I can't provide links or solid info there. I'd like to see where you got your benchmarks claiming an overclocked 1 GHz AMD equals a dual 1 GHz P3? Just remember, too, that MAX 3.1 is optimised for the P3 instruction set. Much quicker than MAX 2. I'll post a picture of my naked ass if you can get an overclocked AMD to render quicker than the dual P3 setup. You'll find it's not a CPU speed issue but the multithreaded pipeline that makes the difference!!

"Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
 
Guest
OK, I went to your link and downloaded KTX-Rays.max, but a couple of things. The file is only 188 KB and has no animation. No moving cameras, etc. I brought it into MAX 2 (still not harnessing the power of the CPUs or my VX1) just to make it kind of comparable. It renders at 1000x600. I rendered it in viewport Camera1 and it took 62 seconds flat. Don't know what's happening here. Just one more thing: with all the problems people are posting about AMD chips in the CPU forum, are you sure you want one?

"Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
 
Guest
Hey m_kelder, thanks!
That is exactly the file I was looking for. Man, I've had a post on the MAX forum for days and no one could help me. I really appreciate it.
I'm at work now, so I rendered it with my dual 933 here. It took 49 seconds. Tonestar, that makes your time of 62 seconds sound fine. As for no moving cameras and not taxing your VX1: it's just a render benchmark, strictly a CPU measurement. Graphics cards should have no effect. Even memory speed/front-side bus has a limited effect. If you want to test your video card, you could always run the set of benchmark files located in the scenes folder under your MAX install.
As for DirectX: who cares if MAX moves to it, as long as it is up to par? OpenGL has its limitations (like 8 lights, etc.). I'd welcome anything that would show bump maps in viewports, more than 8 lights, etc. As long as it works better, who cares which technology it is?
As for AMD SMP, we'll all have to wait and see. Saying they'll screw it up is pure speculation. Let's give them a chance. I'm sure there'll be some bugs in the beginning, just like with any Intel product. One thing's for certain: SMP rocks. I had to use one of my giggers in a single-processor board while Supermicro upgraded my BX board to run Coppermines, and man did it suck. My dual 350 setup was 10x smoother. Not claiming faster, mind you, but smoother. On my dualie I can open a couple of apps at once, check email, copy files, etc., all at once without a hiccup. Try that with ANY single processor, Intel, AMD, you name it (at least with Windows).
Render times on a scene I used to compare systems:
Dual 350 MHz P2 = 8:44
Single 1 GHz P3 = 4:11
Dual 1 GHz P3 = 2:34
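Those times can be sanity-checked with a few lines of Python (the helper below is my own sketch, not anything from the thread):

```python
# Convert the "m:ss" render times quoted above to seconds and
# compute the dual-vs-single speedup at the same clock.

def to_seconds(t: str) -> int:
    m, s = t.split(":")
    return int(m) * 60 + int(s)

single_1ghz = to_seconds("4:11")   # 251 s
dual_1ghz   = to_seconds("2:34")   # 154 s

speedup = single_1ghz / dual_1ghz
print(f"dual vs single 1 GHz: {speedup:.2f}x")  # ~1.63x, short of an ideal 2x
```

So on this particular scene the second CPU buys about a 1.63x speedup rather than the full 2x.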
I'll check out that article over at Viatech.com.
Thanks again for the file!
-Mark
 
Guest
Damn, I just realized I'm running MAX 3, and Tom benchmarked the Athlons with MAX 1 or 2. Not to mention I don't know what resolution he rendered at. Ah well.
If someone out there has an Athlon setup over 1 GHz and MAX 3, maybe you could render this file at a resolution of 800x600 and let me know your results.
We can see how an Athlon stacks up against 1 GHz Intel SMP.
 

steinbr

Distinguished
Feb 19, 2001
I'm at home now and rendered the ktx_rays.max file, using the default resolution (1000x600), virtual frame buffer on and save file enabled, with 32-bit alpha channel TGA output enabled.

Apps running: MAX 4, two IE 5 windows, Outlook, CPU Idle 5, ZoneAlarm, MSI PC Alert 3, mIRC, McAfee VirusScan, Virtual Daemon Manager, DU Meter and a DSL connection, downloading.

My PC at home:

Duron 800 / MSI 6330 / 128 MB PC133 / 30 GB HD / TNT2 16 MB

Time to render: 1:17 (77 seconds)

Very good for a Duron!!

Wow!!! Imagine an AXIA running at 1.5 GHz with 256 MB of DDR RAM!!! Let's wait for Tyan's dual Athlon board, coming soon!
 
Guest
You need to understand: the FPU of the Duron/Athlon is far, far better than the Pentium 3's. MAX 3 has NO optimizations in the renderer for SSE. The renderer uses pure FPU instructions for processing. MAX 3 and MAX 4 have Pentium 3 optimizations for _software_ (HEIDI driver) rendering in viewports. You get about a 25% gain using the SSE software driver over non-SSE. Nvidia, ATI, 3Dlabs and the rest have their drivers optimized for SSE as well, so you see a performance gain in OpenGL when you have a P3. I emailed Discreet (Kinetix, back when MAX 3 was released) about this, and they told me that the renderer doesn't and can't take advantage of SSE.

Again, the renderer uses pure FPU to render. At the same clock, the P3 is about 300 MHz behind the Duron/Athlon FPU, as seen here:
http://www4.tomshardware.com/cpu/01q1/010108/images/3dsmax.gif

When rendering in MAX, dual processors enjoy a 100% boost. Meaning, if you have a P3 500 that renders 20 ppr, you will get double that, 40 ppr :) So if you have dual 1 GHz chips you pretty much have 2 GHz of rendering power. Not in viewports, though; I'll get into that.
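The pages-per-hour arithmetic behind that claim can be sketched in a couple of lines (the perfect 2x dual scaling is the claim above, not a measured fact):

```python
# "ppr" in this thread is pages (frames) rendered per hour:
# 3600 / seconds-per-frame. The cpus factor assumes the 100%
# dual-CPU render scaling asserted above.

def pages_per_hour(seconds_per_frame: float, cpus: int = 1) -> float:
    return 3600.0 / seconds_per_frame * cpus

print(pages_per_hour(180))           # P3 500 at 180 s/frame -> 20.0
print(pages_per_hour(180, cpus=2))   # same chip doubled up  -> 40.0
```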

Those renderer benchmarks are with a 100 MHz FSB Duron/Athlon and 133 MHz FSB Intel chips. Running the Duron/Athlon at a 133 MHz FSB will put those numbers up quite a bit; I'd speculate that an Athlon at 1.5 GHz @ 133 FSB would perform at least like a 1.55 GHz 100 FSB Athlon. I think that is fairly modest. So the total MHz advantage of a dual 1 GHz P3 system over an Athlon 1.5 @ 133 is 450 MHz. Minus the 300 MHz FPU deficit of the P3, you are left with a 150 MHz edge. That will get you about 19 more pages in an hour. Nice boost in performance, I'll admit.

Viewports: now I am only speaking from Discreet 3ds max experience and knowledge, but I imagine most 3D apps would use duals the same way. I can only speak for high-end Nvidia products and not for any 3Dlabs professional cards; please let me know if it works differently with the special drivers. MAX will split the CPU load by assigning one viewport per CPU. If you have 4 viewports, then 2 go to each CPU. This is fast because you get no delay switching between viewports, and you can play back 2 or more viewports at once faster. So in this case, you really only have one 1 GHz P3 FPU powering a viewport at a time (and you only need to deal with one at a time the majority of the time anyway). I'd rather have the 1.5 GHz Athlon FPU shooting those polys and other things around.

Now comes a real driving force for me; others may not have a problem with this, but I do.
As per Pricewatch:
Intel Pentium 3 1 GHz - $220
AMD Athlon Tbird 1 GHz - $131/$145 (100 MHz/133 MHz FSB)

Those prices are pretty much the lowest you can get. Buying a dual board will cost a lot more than a single-CPU mobo, plus you need two $220 CPUs. The price premium for going Intel is quite a large sum of money, between $300-$400 at least. For 150 MHz more rendering power you sure do spend a lot more.
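Using the Pricewatch figures above and the thread's own (speculative) "effective rendering MHz" reasoning, the price-per-MHz gap looks roughly like this:

```python
# Price per "effective rendering MHz", using the thread's own numbers.
# The 100% dual render scaling and the +300 MHz Athlon FPU credit are
# both claims from earlier posts, not measured facts.

dual_p3_cost = 2 * 220        # two 1 GHz P3s (motherboard premium excluded)
dual_p3_mhz  = 2 * 1000       # ~2 GHz effective at 100% render scaling

athlon_cost  = 145            # 1 GHz 133 FSB Tbird, overclocked to 1.5 GHz
athlon_mhz   = 1500 + 300     # clock plus the claimed FPU advantage

print(f"dual P3: ${dual_p3_cost / dual_p3_mhz:.3f}/MHz")   # $0.220/MHz
print(f"Athlon:  ${athlon_cost / athlon_mhz:.3f}/MHz")     # $0.081/MHz
```

On those (debatable) assumptions the Athlon comes in at well under half the cost per effective MHz, before the dual motherboard premium is even counted.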
 
Guest
I would imagine Tom uses the default rendering settings for the benchmark file. But he could be rendering at 585x477, because that is the size of the sample pic he shows, which doesn't look like it was resized (then again, he could have just rendered it again to make it fit nicely on the HTML page). Wish Tom would tell us stuff like that when he does his benchmarking.
 
Guest
Just did some math on what steinbr told us his Duron 800 can render at, with his 1:17 (77 secs): 60*60 = 3600, and 3600/77 = 46.7 ppr. Tom's graph shows the Duron getting 97.3 ppr. I guess that means he must be using 585x477 to render.

Now these numbers roughly add up. My P3 500 renders 585x477 in 1:28, which translates into around 41 ppr (about half of the 1 GHz P3).
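The resolution guess can be cross-checked by scaling the Duron's rate with pixel count (a rough assumption on my part; render time is not exactly linear in pixels):

```python
# If render rate scales roughly with pixel count, the Duron 800's
# 77 s at 1000x600 predicts its rate at the suspected 585x477.

full_px  = 1000 * 600              # 600,000 pixels
small_px = 585 * 477               # 279,045 pixels

rate_full  = 3600 / 77             # ~46.8 pages/hour at 1000x600
rate_small = rate_full * full_px / small_px
print(f"{rate_small:.1f} ppr")     # ~100.5, close to Tom's 97.3
```

Close enough to Tom's 97.3 ppr figure to make 585x477 a plausible benchmark resolution.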
 
Guest
Hmmm... I won't comment on your derivations till I've found some solid proof. You say that MAX splits CPU resources between viewports; this in itself should be a good reason to consider a dual Wintel config. Most pro designers render occasionally during mapping, and at the end to output into some kind of movie format. I render large scenes at the end of the day just before I leave work, so I'm at home when my computer is rendering. I get there in the morning and it's all done; no downtime on my behalf.

A pure render test is really not a good judge of overall productivity in a 3D environment, as most animators work in Gouraud shaded mode and most engineers in wireframe or solid shaded. Even then, most people who have to render during work hours opt for cards like the GVX210 or any other dual-rasterizer card, thereby removing a significant load from the CPU. Have a look at the hardware on some of those professional cards. Often the card alone is worth more than the whole system. We could talk about the differences between CPU manufacturers forever, but in a real 3D environment there are greater concerns. Professional animation studios often have render boxes containing 10, 20, 40 or more CPUs. In real terms, for the home or budget user, duals are still the way to go, which still makes any current AMD CPU a non-option. Besides, if I were a beginner animator or draftsman, I wouldn't spend money on an intermediate solution like a single-CPU unit at 1 GHz. Consider all the facts, have a look at the 3D pro cards, do some animation and think about how computing power is really used, check out SCSI interfaces, etc. Stay cheap or go dual. You can't effectively use an intermediate solution like a single 1 GHz CPU, be it AMD or Intel, unless, I suppose, all you do is play around at home a bit. Besides all that, I just want to say a big thanks to everyone who has contributed to this thread. It's good to see real players like Mark adding their views.

Your render time was exceptionally quick; what's the rest of your system like? My friend with dual 933s did it in 59 seconds. We should put up a real test file with roving cameras, multiple lights, particle effects, raytracing, etc. But we really need a viewport transform test to see how cards and CPUs can really benefit an animator's or engineer's life. If I set up a render just before I go home in the evening, it won't matter which computer I had, even an old 486!! Because in the morning my file will still be done with no downtime!! Creating that file, however, is the real test of a system's performance. Perhaps we ought to move away from marketing hype and flawed benchmarks and post some real-time figures. Thanks all!!

"Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
 
Guest
You know, steinbr, I was doubting your figures, so I went and set mine up to render at 1000x600, virtual frame buffer on and save file enabled, with 32-bit alpha channel TGA output enabled.

I was connected to the net, had ZoneAlarm going, and rendered mine in 3D Studio MAX 2:
Time to render: 46 seconds.
Maybe Mark can try the same. Even though I'm happy with the render time, I'm not trying to pretend that this file is any kind of productive test platform.

"Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
 
Guest
I didn't really talk about viewport speed and the CPUs. I do know that when you're using one viewport you are only using one CPU: one 1 GHz CPU instead of one 1.5 GHz CPU. Here is the typical scene an animator would definitely have to work with in a project:
http://www4.tomshardware.com/graphic/00q2/000515/images/image003.gif

The important thing to look at is the scenarios with the different makes. 3Dlabs has been known to be bad with drivers; I've heard from many who said they'd have to think very hard about buying another 3Dlabs card because of that. Slowly but surely they will have their drivers up to speed with AMD processors. That test was done quite some time ago, and I think the drivers must be much better by now.

I'll quote Tom for a bit:
"Final Thoughts: Rambus-Pentium versus SDRAM-Athlon

This test clearly reveals better performance values for RDRAM platforms with the Pentium III. Contrary to the game PC world, the workstation segment finally gives Rambus memory the chance to show off its performance advantages propagated by Intel. In comparison to the Athlon 800 we measured better results in almost all categories at the same clock frequency under Windows NT 4.0.

Some card manufacturers seem to prefer the Pentium III for the driver development. In the application benchmarks the difference is less noticeable than in the synthetic benchmarks. We generally assign more importance to the application benchmarks because they reflect the real-life situation for the user much better. On top of that Nvidia proves that it is possible to optimize graphics card drivers for the Pentium III and Athlon equally.

One point is quite important when deciding on a platform. Workstations are generally equipped with more memory than mainstream PCs. 512 MByte are no rarity. Given today's prices, RDRAM would be much more expensive than the rest of the hardware. In this case the Rambus memory alone adds up to a whopping 5000 Dollars, five times more than PC133 SDRAM. The slight performance disadvantages of the Athlon KX133 platform can be balanced with more MHz. This would be a much wiser investment for your money. "

take a look at the whole thing here,
http://www4.tomshardware.com/graphic/00q2/000515/index.html

Now these tests don't go into dual Intel systems, but as I said before, you only use one CPU per viewport (or one CPU per 2 viewports).
 
Guest
blurb

(Edited by m_kelder on 04/16/01 09:45 PM.)
 
Guest
WHAT!!!! WHAT!!!! 3Dlabs are renowned for their stable drivers. In fact, often when you can get more performance for your dollars, people will turn around and buy the 3Dlabs card anyway, just because they can trust the drivers. Have a look at Tom's article "Professional Affair". There was so much hype about the Gloria II (and it is a good card), but the drivers were leaving white lines across rendered frames. What are you meant to do if you actually owned this card? Go into Premiere and delete those frames one by one? Or fix them in Photoshop? I don't think so. Two of my friends have 3Dlabs cards and I have one too. No problems for me or them. Yes, there are better-value cards around, but their drivers ARE renowned to be generally the best. Please don't post false information.

With regards to your comment about using one CPU per viewport: I just brought up my CPU graphs in Task Manager (anyone running NT or W2K can do this), played around with a scene in the viewports, and guess what!!! I got similar readings for both CPUs. So both CPUs are working even when I'm in one viewport!!! Once again, a dual platform is looking ever more desirable!!! There are only 2 conclusions left:
1. The CPU graphs are only a gimmick of Microsoft's
2. Your comments are totally false
Unless you can prove that the CPU graphs in Task Manager are a gimmick employed by Microsoft, I am going to disregard your last post, as I know your comments about 3Dlabs are not only false but completely crazy, as anyone who is a professional owning one of these cards would testify.

"Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
 
Guest
Well, I didn't say that I was speaking first-hand. Just a couple of guys I know said that they didn't like owning their cards because of the drivers. I think it dates back to when there were problems with AGP and 3Dlabs went out on its own. That was quite a while ago; maybe things have changed. The guy I talked to about duals in 3ds max is a professional animator, and I think he knows what he is talking about. I'll try to get some verification on how exactly it works.

Another thing: what are you talking about, white lines? The video card has nothing to do with the rendering process.

(Edited by m_kelder on 04/16/01 11:52 PM.)
 
Guest
I found a place that will solve everything. Bottom line is that the Athlon would actually beat the dual P3 system in both viewports and rendering.
http://www.3dluvr.com/content/techz/maxfaq.php
http://www.3dluvr.com/content/techz/gregbench.php

He will soon be comparing a dual 1 GHz P3 and a 1.33 GHz Athlon. He says the dual wins by around 20 secs in most tests, and in some tests the Athlon wins. We were comparing an Athlon at 1.5 GHz, so it would win.
 
Guest
I'm starting to think you know NOTHING about what you are talking about. If you did, you'd already know how good the FPU in the Athlon is and how 3D apps need nothing else but FPU. Also, the white lines? Those are in the viewport, NOT THE RENDERING. Rendering has nothing to do with the video card, nothing, nothing, nothing. Reading back, it seems all you care about is the video card; even when we are discussing the CPU you bring up the video card again. At the time I just thought you got off track, but it seems you have the wrong idea of what a video card does.

You may want to check the date on the article I pointed you to. You'll see that it is May 15, 2000; yeah, a year ago...
 
Guest
I get well under a minute in MAX 4 with a P4 1.5, if you use dd3d 8a :) Much faster than an Athlon or dual P3. The FP unit of the P4 is far superior and runs at 3 GHz, or 2x the clock.

The SPECfp marks prove this, at 200 points faster than the Athlon or P3.

Dual-channel Rambus does not hurt either, nor does the fact that the data is loaded to memory and back through a 400 MHz dual-channel bus.

The P4 is the way to go for 3D apps like MAX, LightWave, etc. No contest.

CAMERON

CYBERIMAGE
http://www.4CyberImage.com
Ultra High Performance Computers-
 

steinbr

Just did one more rendering test, now using 585x477 resolution, virtual frame buffer on, save file enabled (32-bit TGA output file), and now, instead of rendering a single frame, rendering the active time segment 0-100.

Results:
First frame: 40 seconds
2nd: 38 s.
3rd: 37 s.
the others varying between 37 and 38 (about 37.5 seconds)

Using the fastest frame time, 37 seconds, we have:

60*60/37 = 97.3 ppr, the same rendering rate Tom shows in his graph.

No doubt about what resolution must be set for our test.
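That arithmetic, reproduced directly (ppr here is pages per hour at one frame every 37 seconds):

```python
# One frame every 37 s -> pages per hour, matching Tom's Duron figure.
rate = 60 * 60 / 37
print(round(rate, 1))  # 97.3
```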