3DS Render Benchmark

Could anyone tell me how I could get hold of the max scene (Kinetix logo) that Tom uses to benchmark rendering speeds on several processors?
I'd like to compare my dual 1 GHz to an Athlon setup. I know I may end up depressed (if not now, then definitely when dual Athlon boards surface). I already had the motherboard and RAM, as it was an upgrade and therefore a more affordable move. I also love the oh-so-smoothness of SMP and got tired of waiting for AMD to get their act together.
Thanks in advance,

  1. AMD isn't lost or confused about releasing the 760MP chipset - they're simply waiting for the best time to release it. It will be released this quarter, according to AMD's road map. They'll probably release it when Intel releases the 1.7 GHz P4.

    -MP Jesse

    "Signatures Still Suck"
  2. Actually it has already been delayed a couple of times. I've been following it for almost a year now. I would have loved to go with a dual Athlon solution, especially considering the superior floating-point performance. I don't see how you can claim they were simply "waiting for the best time". Considering they have no SMP solution out right now, and Intel has really been the only consumer choice for an MP station, I'd say the "right time" would've been as soon as reasonably possible.
    I really doubt they were waiting for the 1.7 GHz P4 to compete with. I'm sure they'll have a Palomino capable of competing processor to processor. No, I think they've just had a bit of a hard time perfecting a fairly complex design (considering the Athlon architecture and the way it does SMP).
    Now I don't want to start an Intel vs. AMD debate as it's been absolutely beaten to death in several other threads (some more intelligent than others). All I really want is that KTX.max file.
    Anyone got it?
  3. I think it might be a file that was created in-house. If you would like to compare single-CPU P3 units to Athlons, have a read of Tom's article "Professional Affair" in the graphics guide dated May 15. This also shows how the instruction sets are catered for by pro 3D cards. To get some real facts on multithreaded apps, read an article called "Dual CPU chipsets" at www.viatech.com. Duals are definitely the way to go for your situation, and as AMD don't have their 760MP commercially available yet, they're not worth considering. Even when they do, a lot of people seem to suggest it will operate flawlessly. I very much doubt that. Just reading about all the issues with AMD units at this forum and others is enough to turn me off. SMP is a very complex concept to make work effectively. Intel have been developing it for years. Stick with them. You'll find most, if not all, industries will. One other thing to remember is that there are too many pretenders floating around giving advice. Reliability varies greatly between apps when you're pushing your computer's limits. Not too many people here have actually ever set up a render that might take hours or days to complete, or composited video in real time, or ever done material analysis or solid-rendered a complex component. Be careful who you listen to.

    "Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
  4. I'll try and find the file for you - where's my Max 2 CD?
  5. Bah! He uses that old max file from Max 1, not Max 2. What a jerk :) Maybe you know someone with Max 1? I'll ask my friend tomorrow.
  6. I hate to say it, but DirectX is the wave of the future, *shudders*. They just keep feeding more and more into DirectX, and now the software is starting to use it (i.e. Max 4). Mainly Discreet, Nvidia and Microsoft are pushing for it, so everyone else will follow - get 3 powerhouses behind something and you can bet it will happen :) I think by the time everyone is ready to spend another couple thousand on a new pro card, the successor to the Wildcat, or the card that comes to beat it, will have full DirectX support.

    Granted, Intel chips do better in some 3D apps out there, but I think most software developers are putting AMD optimizations in now too. Most of them are pissed that Intel makes them rewrite from scratch every time they come out with a new CPU.

    Another good thing with Intel is the dual chips - duals are just damn good in 3D apps. If I had a huge budget I'd go with a dual P3 1 GHz system, but my pockets aren't all too deep right now so I am going with AMD. It takes a P3 1 GHz to equal the render power of a 700 MHz Duron/Athlon. To me that is just amazing. I'll get a 1 GHz AXIA T-Bird and overclock to 1.4 or 1.5. That rendering power is almost that of a dual P3 1 GHz system for a fraction of the price. When it comes down to it, when you have to render you want the fastest possible times. Of course, if you have a rendering farm all set up, then go dual P3s.

    Hey! I found the file from Max 1! Only thing is I use Max 4 and there are compatibility problems when opening, but it shouldn't be too big of a problem when it comes time to render. Also, the Max 4 renderer might be a little faster/slower than the Max 2 renderer which Tom uses. I'll find a place to upload it. My P3 500 MHz, 256 MB PC100 system renders it in 3:01. That comes out around the 20-pages-per-hour mark. Glad to be rid of this setup :)

    The quickest place I found to upload was my Yahoo briefcase. Here's the link - please post your results :). http://briefcase.yahoo.com/m_kelder

    Time to go to bed, talk to you guys later
  7. Yep, I agree with you in regards to DirectX. DirectX isn't actually too bad on a small scene with a few objects. The rest... well, I can't provide links or solid info there... I'd like to see where you got your benchmarks claiming an overclocked 1 GHz AMD equals a dual 1 GHz P3? Just remember too that Max 3.1 is optimised for the P3 instruction set - much quicker than Max 2. I'll post a picture of my naked ass if you can get an overclocked AMD to render quicker than the dual P3 setup. You'll find it's not a CPU speed issue but the multithreaded pipeline that makes the difference!!

    "Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
  8. OK, I went to your link and downloaded KTX-Rays.max, but a couple of things: the file is only 188 KB and has no animation - no moving cameras etc. I brought it into Max 2 (still not harnessing the power of the CPUs or my VX1) just to make it kind of comparable. It renders at 1000x600. I rendered it in viewport Camera1 and it took 62 seconds flat. Don't know what's happening here. Just one more thing: with all the problems people are posting about AMD chips in the CPU forum, are you sure you want one?

    "Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
  9. Hey m_kelder, thanks!
    That is exactly the file I was looking for. Man, I've had a post on the MAX forum for days and no one could help me. I really appreciate it.
    I'm at work now, so I rendered it with my dual 933 here. It took 49 seconds. Tonestar, that makes your time of 62 seconds sound fine. As for no moving cameras and not taxing your VX1 - it's just a render benchmark, strictly a CPU measurement. Graphics cards should have no effect. Even memory speed/front-side bus has a limited effect. If you want to test your video card, you could always run the set of benchmark files located in the scenes folder under your Max install.
    As for DirectX: who cares if Max moves to it? As long as it is up to par. OpenGL has its limitations (like 8 lights, etc.). I'd welcome anything that would show bump maps in viewports, more than 8 lights, etc. As long as it works better, who cares which technology it is?
    As for AMD SMP, we'll all have to wait and see. Saying they'll screw it up is pure speculation. Let's give them a chance. I'm sure there'll be some bugs in the beginning, just like any Intel product. One thing's for certain: SMP rocks. I had to use one of my gig chips in a single-processor board while Supermicro upgraded my BX board to run Coppermines, and man did it suck. My dual 350 setup was 10x smoother. Not claiming faster, mind you, but smoother. On my dualie I can open a couple of apps at once, check email, copy files, etc., all at once without a hiccup. Try that with ANY single processor - Intel, AMD, you name it (at least with Windows).
    Render time on a scene I used to compare systems:
    Dual 350 MHz P2 = 8:44
    Single 1 GHz P3 = 4:11
    Dual 1 GHz P3 = 2:34
    I'll check out that article over at Viatech.com.
    Thanks again for the file!
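    The scaling in those numbers is easy to check. A rough sketch (the system labels are just shorthand for the configs quoted above):

```python
# Convert the quoted mm:ss render times to seconds and compare them
# against the dual 350 as a baseline.
def to_seconds(t):
    m, s = t.split(":")
    return int(m) * 60 + int(s)

times = {
    "dual 350 MHz P2": to_seconds("8:44"),  # 524 s
    "single 1 GHz P3": to_seconds("4:11"),  # 251 s
    "dual 1 GHz P3":   to_seconds("2:34"),  # 154 s
}

baseline = times["dual 350 MHz P2"]
for name, secs in times.items():
    print(f"{name}: {secs} s, {baseline / secs:.2f}x vs dual 350")
```

    The dual 1 GHz comes out about 3.4x faster than the dual 350, and about 1.6x faster than the single 1 GHz.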
  10. Damn, I just realized I'm running Max 3 and Tom benchmarked the Athlons with Max 1 or 2. Not to mention I don't know what resolution he rendered at. Ah well.
    If someone out there has an Athlon setup over 1 GHz and Max 3, maybe you could render this file at a resolution of 800x600 and let me know your results.
    We can see how an Athlon stacks up against Intel 1 GHz SMP.
  11. I'm at home now and rendered the ktx_rays.max file, using the default resolution (1000x600), virtual frame buffer on and save file enabled, with 32-bit alpha channel TGA output enabled.

    Apps running: Max 4, 2 IE 5 windows, Outlook, CPU Idle 5, ZoneAlarm, MSI PC Alert 3, mIRC, McAfee VirusScan, Virtual Daemon Manager, DU Meter and a DSL connection/downloading

    My PC at home:

    Duron 800 / MSI 6330 / 128MB PC133 / 30GB HD / TNT2 16 MB

    Time to render: 1:17 (77 seconds)

    Very good for a Duron!!

    Wow!!! Imagine an AXIA running at 1.5 GHz with 256 MB of DDR RAM!!! Let's wait for Tyan's dual Athlon board - coming soon!
  12. You need to understand: the FPU of the Duron/Athlon is far, far better than the Pentium 3's. Max 3 has NO optimizations in the renderer for SSE. The renderer uses pure FPU instructions for processing. Max 3 and Max 4 have Pentium 3 optimizations for _software_ (Heidi driver) rendering in viewports. You get about a 25% gain using the SSE software driver over non-SSE. Nvidia, ATI, 3Dlabs... have their drivers optimized for SSE as well, so you see a performance gain in OpenGL when you have a P3. I emailed Discreet (Kinetix back when Max 3 was released) about this and they told me that the renderer doesn't and can't take advantage of SSE.

    Again, the renderer uses pure FPU to render. At the same clock, the P3 is 300 MHz behind the Duron/Athlon FPU, as seen here.

    When rendering in Max, dual processors enjoy a 100% boost. Meaning, if you have a P3 500 that renders 20 pages per hour, you will get double that, 40 :) So if you have dual 1 GHz you pretty much have 2 GHz of rendering power. Not in viewports though - I'll get into that.

    Those renderer benchmarks are with a 100 FSB Duron/Athlon and 133 FSB Intel chips. Running at 133 FSB with a Duron/Athlon will put those numbers up quite a bit; I'd speculate that an Athlon at 1.5 GHz @ 133 FSB would perform at least like a 1.55 GHz 100 FSB Athlon. I think that is fairly modest. So the total MHz advantage of a dual 1 GHz P3 system over an Athlon 1.5 @ 133 is 450 MHz. Minus the 300 MHz FPU advantage the Athlon holds over the P3, you are left with a 150 MHz boost. That will get you about 19 more pages in an hour. A nice boost in performance, I'll admit.

    Viewports - now I am only speaking from Discreet's 3ds max experience and knowledge, but I imagine most 3D apps would use duals the same way. I can only speak about high-end Nvidia products and not about any 3Dlabs professional cards. Please let me know if it works differently with the special drivers. Max will split the CPU load by assigning one viewport per CPU. If you have 4, then 2 go to each CPU. This is fast because you get no delay switching between viewports, and you can play back 2 or more viewports at once faster. So in this case, you really only have a 1 GHz P3 FPU powering a viewport at any one time (and you only need to deal with one at a time the majority of the time anyway). I'd rather have the 1.5 GHz Athlon FPU shooting those polys and other things around.

    Now comes a really driving force for me; others may not have a problem with this, but I do.
    As per Pricewatch:
    Intel Pentium 3 1 GHz - $220
    AMD Athlon T-Bird 1 GHz - $131/$145 (100 MHz/133 MHz FSB)

    Those prices are pretty much the lowest you can get. Buying a dual board will cost a lot more than a single-CPU mobo, plus you need 2 of the $220 CPUs. The average price premium for going Intel is quite a large sum of money - between $300-$400 at least. For 150 MHz more rendering power, you sure do spend a lot more.
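    The back-of-the-envelope math in this post can be written out. The 100% dual scaling, the ~1.55 GHz effective figure for the 133 FSB Athlon and the 300 MHz FPU gap are all the poster's estimates, not measurements:

```python
# Sketch of the post's arithmetic: doubled pages/hour on duals, and the
# effective-MHz gap between a dual 1 GHz P3 and a 1.5 GHz @ 133 Athlon.
def dual_pages_per_hour(single_rate):
    # claimed ~100% render scaling on two CPUs
    return 2 * single_rate

dual_p3_effective = 2 * 1000  # "2 GHz rendering power"
athlon_effective = 1550       # 1.5 GHz @ 133 FSB treated as ~1.55 GHz
athlon_fpu_bonus = 300        # Athlon FPU lead over the P3 at equal clock

gap = dual_p3_effective - athlon_effective - athlon_fpu_bonus
print(dual_pages_per_hour(20), gap)  # 40 pages/hour, 150 MHz
```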
  13. I would imagine Tom uses the default rendering settings for the benchmark file. But he could be rendering at 585x477, because that is the size of the sample pic he shows, which doesn't look like it was resized (but then again, he could have just rendered again to make it fit nicely on the HTML page). Wish Tom would tell us stuff like that when he does his benchmarking.
  14. Just did some math on what steinbr told us his Duron 800 can render at. With his 1:17 (77 secs): 60*60 = 3600, and 3600/77 = 46.7. Tom shows the Duron getting 97.3 pages per hour. I guess that means he must be using 585x477 to render.

    Now these numbers roughly add up. My P3 500 renders 585x477 in 1:28, which translates into around 41 pages per hour (about half of the P3 1 GHz).
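    The "pages per hour" figure used throughout these posts is just 3600 divided by the seconds one frame takes; a quick sketch:

```python
# pages/hour = frames rendered per hour, at one frame per "page"
def pages_per_hour(seconds_per_frame):
    return 3600 / seconds_per_frame

print(pages_per_hour(77))  # Duron 800 at 1000x600 -> ~46.75 (the post rounds to 46.7)
print(pages_per_hour(88))  # P3 500 at 585x477, 1:28 -> ~40.9
```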
  15. Hmmm... I won't comment on your derivations till I find some solid proof. You say that Max splits CPU resources between viewports; this in itself should be a good reason to consider a dual Wintel config. Most pro designers render occasionally during mapping, and at the end to output into some kind of movie format. I render large scenes at the end of the day just before I leave work... so I'm at home while my computer is rendering. I get there in the morning and it's all done... no downtime on my behalf. A pure render test is really not a good judge of overall productivity in a 3D environment, as most animators work in Gouraud shaded mode and most engineers in wireframe or solid shaded. Even then, most people who have to render during work hours opt for cards like the GVX210 or any other dual-rasteriser card, thereby removing a significant load from the CPU. Have a look at the hardware on some of those professional cards. Most often the card alone is worth more than the whole system. We can speak about the differences between CPU manufacturers forever, but in a real 3D environment there are greater concerns. In professional animation studios they often have render boxes... containing 10, 20, 40 or more CPUs. In real terms, for the home or budget user, duals are still the way to go, which still makes any current AMD CPU a non-option. Besides, if I was a beginner animator or draftsman, I wouldn't spend money on an intermediate solution like a single-CPU unit at 1 GHz. Consider all the facts, have a look at the 3D pro cards, do some animation and think about how computing power is really used, check out SCSI interfaces etc. Stay cheap or go dual. You can't effectively use an intermediate solution like a single 1 GHz CPU, be it AMD or Intel - unless, I suppose, all you do is play around at home a bit. Besides all that, I just want to say a big thanks to everyone who has contributed to this thread. It's good to see real players like Mark adding their views.
    Your render time was exceptionally quick - what's the rest of your system like? My friend with dual 933s did it in 59 seconds. We should put up a real test file with roving cameras, multiple lights, particle effects, raytracing etc. But we really need a viewport transform test to see how cards and CPUs can really benefit an animator's or engineer's life. If I set up a render just before I go home in the evening, it won't matter which computer I had... even an old 486!! Because in the morning my file will still be done with no downtime!!... Creating that file, however, is the real test of a system's performance. Perhaps we ought to move away from marketing hype and flawed benchmarks and post some real-time figures. Thanks all!!

    "Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
  16. You know, steinbr, I was doubting your figures, then I went and set mine up to render at (1000x600), virtual frame buffer on and save file enabled, with 32-bit alpha channel TGA output enabled.

    I was connected to the net and had ZoneAlarm going, and rendered mine in 3ds Max 2:
    Time to render: 46 seconds.
    Maybe Mark can try the same. Even though I'm happy with the render time, I'm not trying to pretend that this file is any kind of productive test platform.

    "Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
  17. I didn't really talk about viewport speed and the CPUs. I do know that when you're using one viewport, you are only using one CPU - one 1 GHz CPU instead of one 1.5 GHz CPU. Here is the typical scene an animator would definitely have to work with in a project.

    The important thing to look at is the scenarios with the different makes. 3Dlabs has been known to be bad with drivers; I've heard from many who said they'd have to think very hard about buying another 3Dlabs card because of that. Slowly but surely they will have their drivers up to speed with AMD processors. That test was done quite some time ago, and I think the drivers must be much better by now.

    I'll quote Tom for a bit:
    "Final Thoughts: Rambus-Pentium versus SDRAM-Athlon

    This test clearly reveals better performance values for RDRAM platforms with the Pentium III. Contrary to the game PC world, the workstation segment finally gives Rambus memory the chance to show off its performance advantages propagated by Intel. In comparison to the Athlon 800 we measured better results in almost all categories at the same clock frequency under Windows NT 4.0.

    Some card manufacturers seem to prefer the Pentium III for the driver development. In the application benchmarks the difference is less noticeable than in the synthetic benchmarks. We generally assign more importance to the application benchmarks because they reflect the real-life situation for the user much better. On top of that Nvidia proves that it is possible to optimize graphics card drivers for the Pentium III and Athlon equally.

    One point is quite important when deciding on a platform. Workstations are generally equipped with more memory than mainstream PCs. 512 MByte are no rarity. Given today's prices, RDRAM would be much more expensive than the rest of the hardware. In this case the Rambus memory alone adds up to a whopping 5000 Dollars, five times more than PC133 SDRAM. The slight performance disadvantages of the Athlon KX133 platform can be balanced with more MHz. This would be a much wiser investment for your money. "

    take a look at the whole thing here,

    Now these tests don't go into dual Intel systems, but as I said before, you only use one CPU per viewport, or one CPU per 2 viewports.
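    The viewport-to-CPU assignment described in these posts (one viewport per CPU, so four viewports split two per CPU on a dual) can be sketched as a simple round-robin; this is a toy illustration, not Discreet's actual scheduler:

```python
# Hypothetical round-robin mapping of viewports onto CPUs, mirroring the
# "one viewport per cpu, 4 viewports -> 2 per cpu" claim above.
def assign_viewports(viewports, n_cpus):
    return {v: i % n_cpus for i, v in enumerate(viewports)}

print(assign_viewports(["Top", "Front", "Left", "Camera1"], 2))
# {'Top': 0, 'Front': 1, 'Left': 0, 'Camera1': 1}
```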
  18. blurb
    Edited by m_kelder on 04/16/01 09:45 PM.
  19. WHAT!!!! WHAT!!!!... 3Dlabs are renowned for their stable drivers. In fact, often when you can get more performance for your dollars, people will turn around and buy the 3Dlabs card anyway, just because they can trust the drivers. Have a look at Tom's article "Professional Affair". There was so much hype about the Gloria II (and it is a good card), but the drivers were leaving white lines across rendered frames. What are you meant to do if you actually owned this card? Go into Premiere and delete those frames one by one? Or fix them in Photoshop? I don't think so. Two of my friends have 3Dlabs cards and I have one too. No problems for me or them. Yes, there are better-value cards around, but their drivers ARE renowned as generally the best. Please don't post false information. With regard to your comment about using one CPU per viewport: I just booted up my CPU graphs in Task Manager (anyone running NT or W2K can do this), I played around with a scene in the viewports, and guess what!!!... I got similar readings for both CPUs. Both CPUs are working even when I'm in one viewport!!! Once again, a dual platform is looking ever more desirable!!! The only 2 conclusions left are:
    1. The CPU graphs are only a gimmick of Microsoft's
    2. Your comments are totally false
    Unless you can prove that the CPU graphs in Task Manager are a gimmick employed by Microsoft, I am going to disregard your last post, as I know your comments about 3Dlabs are not only false but completely crazy, as anyone who is a professional owning one of these cards would testify.

    "Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
  20. Well, I didn't say that I was speaking first-hand. Just a couple of guys I know said that they didn't like owning their cards because of the drivers. I think it dates back to when there were problems with AGP and when 3Dlabs went out on its own. This was quite a while ago - maybe things have changed. The guy I talked to about duals in 3ds max is a professional animator, and I think he knows what he is talking about. I'll try to get some verification on how exactly it works.

    Another thing: what are you talking about with white lines? The video card has nothing to do with the rendering process.
    Edited by m_kelder on 04/16/01 11:52 PM.
  21. I found a place that will settle everything. The bottom line is that the Athlon would actually beat the dual P3 system in both viewports and rendering.

    He will soon be comparing a dual 1 GHz P3 and a 1.33 GHz Athlon. He says the dual wins by around 20 secs in most tests, and in some tests the Athlon wins. We were comparing an Athlon at 1.5 GHz, so it would win.
  22. I'm starting to think you know NOTHING about what you are talking about. If you did, you'd already know how good the FPU in the Athlon is and how 3D apps need nothing else but FPU. Also, the white lines? Those are in the viewport, NOT THE RENDERING. Rendering has nothing to do with the video card - nothing, nothing, nothing. Reading back, it seems all you care about is the video card; even when we are discussing the CPU you bring up the video card again. At the time I just thought you got off track, but it seems you have the wrong idea of what a video card does.

    You may want to check the date on the article I pointed you to. You'll see that it is May 15, 2000 - yeah, a year ago...
  23. I get well under a minute in Max 4 with a P4 1.5,
    if you use dd3d 8a :)
    much faster than an Athlon or dual P3.
    The FP unit of the P4 is far superior and runs at 3 GHz,
    or 2x clock.

    The SPEC FP marks prove this, at 200 points faster than Athlon or P3..

    Dual-channel Rambus does not hurt either..
    nor does the fact that the data is being loaded to memory and back through a 400 MHz dual-channel bus.

    P4 is the way to go for 3D apps like Max and Lightwave etc..
    no contest

    http://www.4CyberImage.com
    Ultra High Performance Computers-
  24. Just did one more rendering test, now using 585x477 resolution, virtual frame buffer on, save file enabled (32-bit TGA output file), and now, instead of rendering a single frame, rendering the "active time segment 0-100".

    First frame: 40 seconds
    2nd: 38 s
    3rd: 37 s
    the others varying between 37 and 38 (+/- 37.5 seconds)

    Using the fastest frame time, 37 seconds, we have:

    60*60/37 = 97.3, the same rendering rate as Tom shows in his graph.

    No doubt about what resolution must be set for our test.
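    That inference is easy to verify:

```python
# 3600 s/hour divided by the fastest frame time gives the pages/hour
# rate, which lines up with the 97.3 in Tom's Duron graph.
rate = 3600 / 37
print(round(rate, 1))  # 97.3
```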
  25. Ohhh!

    - "I get well under a minute in Max 4 with a P4 1.5,
    if you use dd3d 8a :)"

    dd3d 8a means NOTHING to rendering times!!!!!!!!

    No other x86 CPU beats the Athlon FPU.
    The SPEC FP bench says nothing - the best way to prove a CPU is a real application like Max 4!!
    SSE?!?!? Discreet says: nothing.
    Your Rambus [-peep-] does nothing as well!

    PS: The P4 looks like your site: lots of neon lights, lightning, ugly bevels and much more.
    Must be in the top 10 worst ever!
  26. You know something, kelder, you are one lying maggot. Your posts don't contain any information showing an AMD unit that outperforms a dual P3 setup. This is just one blatant lie. Another thing: can you explain to me why both my CPUs register when I create a scene in Max? Anyone with NT or W2K can test this for themselves in 3ds Max. Another one of your lies. Why do you do this? Anyone can read the info at www.3Dluvr.com or boot up Max with Task Manager measuring CPU performance. You are one true dickhead. Go back and read some of your bullshit statements, you wanker. You missed out on the GVX1 because you're a dumb bastard. You keep going on about the CPU setup when I've been telling you how life really works in a 3D environment. You started your initial post in the Graphics Card forum by saying you use your machine mainly for animation. What crap. I don't think you have ever animated anything in your life, you dumb bastard. Your idea of a system/card test is rendering some non-animated crappy scene like ktx_rays. You moron. OK, those white lines are in the viewports, granted. It's been months since I read that article. You claimed that 3Dlabs drivers were crap - have you found any proof yet? Liar!! Anyone who reads your post under Graphics Cards and this post will come to the conclusion that you are just some pretender with no idea. What other fool would pass up a GVX1 for $120? Moron. You must be some wannabe little kid.

    "Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
  27. I am sorry to have to tell you this, CYBER, but you are either an idiot, or paid by Intel to spread FUD, or most probably just too lazy to go and read some technical articles to understand how things work.
    You have NO IDEA about RAY TRACING and the P4 - I would suggest going to Aceshardware; there are excellent articles/comments on the P4 describing how it works and what its weak points are.
    And for your information, only a SMALL part of the P4's ALU (NOTICE ALU, not FPU!!!) works at double frequency (3 GHz in a 1.5 GHz P4). And the ALU, in case you don't know, handles only integer instructions, while RAY TRACING (in case you don't know what this is - it is an algorithm, or more correctly a set of algorithms, used for high-quality image generation) uses the x87 FPU, not the ALU (in 3DS MAX v3+ there are SSE optimisations for WIREFRAME mode ONLY!!!).
    So any speculation about DirectX (any version!) increasing ray-tracing speed is just a DUMB LIE! (BTW, 3DS MAX is not using DirectX at ALL!)
    Also, in case you don't know, RAY TRACING is a very branch-intensive algorithm, especially in scenes where there are a lot of refractions/reflections, so the P4 is LITERALLY killed in those scenes due to its 20-stage pipeline (I mean, a 1 GHz PIII is faster than a 1.5 GHz P4).
    Also, you are giving very bad advice to your potential customers, and don't be surprised if they get very angry when they discover that an 800 MHz T-Bird is faster than a 1.5 GHz P4 in rendering. The best x86 platform for RAY TRACING (i.e. 3D Studio Max, MAYA, LIGHTWAVE..) now is a dual PIII/1 GHz, with a 1.5 GHz T-Bird (overclocked AXIA) slightly slower than the dual 1 GHz PIII.
    So if you still want to sell Intel machines to your customers, at least give them good advice.
    And please stop mentioning SPEC [-peep-] - just read the comments from engineers at Aceshardware to understand that SPEC2000 in its current state is mostly a BANDWIDTH/COMPILER test rather than a CPU test. And the current P4 FPU's performance in its current form is just SAD - it can achieve good instruction throughput only on SSE2 code that fits in L1 cache (8K). Probably in Northwood we will see the original FPU (which was implemented only in half in the current P4 due to core size limitations - Intel decided to cut it in half, from two units to one).
    And I apologize if I sound too sharp, but sometimes I really get angry when people advise other people on things they have no idea about. I really hope you will take my criticism as positive - especially since you are in the computer reseller business, you need to read much more about platforms and be much more educated than your potential customers, otherwise they will probably not become your customers.

  28. "BTW 3DS MAX is not using DirectX at ALL!"

    I don't know where you get your information. According to the company that makes the software, 3D Studio Max uses Direct3D/OpenGL. All current Direct3D/OpenGL drivers are optimized for 3DNow. This is not a straight FPU test.

    3D Studio Max: http://www2.discreet.com/docs/3dsmax4.pdf


    -- The center of your digital world --
  29. OK, read what section of the forum you are in: CPUs. The guy asked about the rendering times of each CPU. I've done some checking - I can get a GeForce 2 Pro for $130, and it performs better than the GVX1 in most cases. When you're dealing with high-poly scenes, the only thing that matters is having the most FPU power you can get; the video cards are pretty much just waiting for the CPU to hurry up. The GVX1 is 2 years old - today's game cards perform about the same.

    As I clearly stated before, I can only tell you what I was told by other people about dual CPUs in Max, and they all say that. Dual CPUs are good for multitasking but generally can't be fully utilized doing one thing. Again you point to a system/card test - use the normal Max benchmark tests for that, moron. ktx_rays is just a file that has a volume light, so it is a bit slow to render. We could use any file we wanted; it doesn't matter. It is a render test, OK? Please understand that. You could have a 2 MB 2D card and render at the same speed. I'd like to keep asking whether you actually know anything about this topic, but you seem not to understand the basics. You call me a moron, yet you don't even know the relationship between the video card and rendering - that [-peep-] you said about the white-line error from a year ago affecting final movie rendering, having to go through each frame of the AVI and fix them... that just proves you don't have a clue.

    3Dlabs' drivers aren't crap; they just haven't been updated in a long, long time, and there is room for improvement. The main bad thing about 3Dlabs, from what I hear, is the customer service; combined with drivers that are never improved, that makes it a bad company to go with. Sure, they make good cards, but many others aren't too far behind.

    I've shown you proof of how the 1.5 GHz AMD unit would outperform or match the dual setup. I haven't lied about anything. You need to stop jumping to conclusions. Within the month, 3Dluvr is going to have a full and direct comparison of a dual 1 GHz P3 and a 1.33 GHz Athlon in both rendering and viewports (remember, those are different things). Hopefully the 1.5 GHz comparison will be up by then too. I've been giving you nothing but facts and the experiences of other animators I've met on the net; I have passed on their words about things I don't know first-hand, and said so when I didn't know. If I'm wrong, then no problem - that is what discussions are for. What you could do is stop whining and flaming and run some benchmarks.

    I wish you'd grow up and stop acting so immature.
  30. Yes, Max has used DirectX for quite some time (since Max 3?), but no one used it. OpenGL is still the major player in the graphics apps biz. Discreet made a driver for Pentium 3 CPUs that increased Heidi (software viewport) performance by about 30%. The drivers from your video card manufacturer have these instructions (SSE) too, but they don't yield too big an improvement. The only people that would use DirectX are game developers, but OpenGL can do everything DirectX can in Max 4 except per-pixel shading (which only the GeForce 3 can use).

    SSE doesn't improve FPU speeds. 3ds max can never, ever have enough CPU power. Dual CPUs do help more than one, but not by too much. The app would have to be built for dual CPUs from scratch to make good use of them. Windows NT was made for dual CPUs; that is why two apps can run at great speed simultaneously, but one app can't use both of them very well.
  31. Any idea what version of DirectX it uses? SSE2 is incorporated into DirectX 7.0b (note the 'b') and DirectX 8.0. Most people don't have the b version of 7.0, so it's safe to say that unless they built it for 8.0, it's using 3DNow and SSE1, but not SSE2. This is in addition to the fact that the app is optimized for the pipelines of the Athlon and P3, and hence gets more pipeline stalls and branch mispredictions on later processors, including the P4.


    -- The center of your digital world --
  32. Discreet, NVIDIA, and Microsoft all had input into DX8, and thus Max 4 uses DX8.
  33. Boy, now you've done it.
    You made yourself look like a bunghole, assuming I know nothing about 3D when I have been doing rendering and raytracing since 1991 and was an original 3D Studio v1 user..

    ACE has no clue about the P4 and states several errors..
    maybe you should read the Intel engineering books on the P4 like I have, as we are beta testers for Intel..

    You can add a DirectX rendering API instead of OGL in Max;
    you are full of it..

    thus sse2 can come into play...

    the people who make Max, whom we have dealt with for 10 years..
    from the FEATURES list of MAX:

    "Interactive viewport graphics support OpenGL and Direct3D hardware acceleration, or fast Heidi ® software for any Windows display "

    thus SSE does come into play during rendering

    I have used MAX and Lightwave and Maya for years;
    what do you think was used on our webpage, DUH !

    as far as the P4 architecture goes, you are spewing worthless crap you read from a teenagers' site and have no idea what you are talking about..

    the P4 has both 32 and 64 bit INT and FP instruction capabilities, and has 128 bit registers

    it can do single or double precision FP

    SIMD FP instructions can be mixed with the x87 FP unit or use separate dedicated registers

    the SCALAR instruction method allows the XMM SIMD registers to be used for general-purpose FP work regardless of whether packed SSE is used.

    as far as the ALUs go, both can run at 3GHz.
    this is from the P4 engineering manual:
    "The rapid execution engine allows the two integer ALUs in the processor to run at twice the core frequency, which allows many integer instructions to execute in 1/2 clock tick. "

    Many things in Max require integer work, including switching screens, network rendering requests, etc., so Max uses both integer and FP ops, and if using the Direct3D API it can use SSE as well..

    what you fail to understand is that I have read the developer's manual for programming the P4, so I know what I am talking about, and use of SSE2 is far superior to coding to a straight FPU in many instances, as there is more flexibility and speed; for example, you can issue 4 conditional FP instructions in SSE2 in either 32-bit or 64-bit, which you cannot do on the Athlon..

    and they are independent of the x87 FPU and can operate in conjunction or concurrently ..
    in fact it can convert FP to integer and vice versa if efficiency dictates..
    you also seem to dismiss SSE2 as not as good as x87 FP,
    when you fail to realize it conforms to the IEEE 754 standard for FP ops.

    what you also fail to realize is that unlike the Athlon, the SSE2 FP instructions can stream data in and out of the 128-bit registers without going through or disrupting the cache,
    so your theory about the small cache is bogus,
    and further, SSE2 can prefetch data well before it is needed..

    as far as the P4's 20-stage pipeline, everybody knows a deeper pipeline is better provided you reduce latency, which the P4 does with 3GHz ALUs, dual-channel Rambus, and the 850 chipset;
    you need a longer pipe to achieve higher GHz frequencies, which is why the Athlon's pathetic 10-stage pipeline cannot go past 2GHz while the P4 design can go well past 3,
    because the faster things go through the pipe, the longer it needs to be to prevent stalls and errors...

    in fact this 20 stage pipeline can hold up to 126 instructions !!!!

    the cache on the P4 is a low-latency 2-cycle cache which reduces decoder latency !!

    the branch predictor is 4K, two times larger than the Athlon's,
    to reduce mispredictions

    the problem is you have no idea what my education or qualifications are....
    I have been using Max or 3D Studio for 10 years,
    I was a beta tester for Kinetix and contributed in the MAX forums as an advisor for years..
    I am a beta tester and registered developer for both
    Intel and MS and Direct3D..

    Knowledge and information are why our customers like NASA, JPL, MIT, the US military, Johns Hopkins University,
    some 20 other universities, etc. all get workstations from us,
    not to mention many 3D rendering and effects houses

    you are operating in the dark with no knowledge of me or our company, and it looks like you are in a cave with one leg and no map or flashlight..
    you should think before you SPEW !

    quit while you're behind...

    <A HREF="http://www.4CyberImage.com " target="_new">http://www.4CyberImage.com </A>
    Ultra High Performance Computers-
  34. yeah, you'd better use Max and read my post, given the fact that I have used Max and 3DS for 10 years !

    Max can use Direct3D as a rendering / display API;
    besides, OpenGL benefits from the P4's FP ability as well

    SPECfp is the world standard in FP and is recognized by the IEEE for FP binary standards..

    stop spewing worthless crap and be professional, instead of citing bad, hyped internet articles.
    just because you cannot afford or do not have a P4 does not mean it is not capable of amazing performance or doesn't outdo your Athlon.. facts are facts, buddy boyee


  35. OK, my mistake.
    I meant that DirectX/OpenGL are not used at all in the ray-tracing phase.
    You are correct that in the viewport newer versions of 3DS Max can use DirectX instead of OpenGL; I guess I didn't pay enough attention to the phrasing.
    Anyway, thanks for the correction,
  36. 3D Studio Max 4 is that new? Are you sure?


  37. OK, I am not sure your response deserves a reply, but at least I will try.
    First of all, about Aceshardware: I would be very careful calling a site with some of the deepest technical discussions around a
    "teenagers' site". Just to inform you, people like Paul De Monde, SCumbria, GhostWhisperer and many others who take part in the technical discussions there are some of the top engineers in the world. So before making any childish statements about how great you are (because YOU ARE NOT!), try some time to listen to the wisest people on those boards. I myself am just a software developer (who, BTW, once developed a small ray-tracer while still at university), so I know I lack a lot of knowledge about modern CPU design; but seeing how little you know about these things, at least I have reached the point of seeing how little I know, while you obviously haven't even reached that point. One other hint: at the Aceshardware technical forum, people from both the Intel and AMD design teams sometimes take part in discussions, so you'd better read carefully what those people say.
    Anyway, I will leave your childish comments on the P4 without comment (BTW, you read too much Intel marketing crap - better to concentrate on technical articles ;-) like:

    "as far at the P4 20 stage pipeline everybody knows a deeper pipeline is better provided you reduce latency"

    "in fact this 20 stage pipeline can hold up to 126 instructions" (hint:what happens when you need to clear the large pipeline?)

    "P4 has both 32 and 64 bit INT and FP instruction capabilities" (another hint:show me 64bit INT instruction in P4 developer manual ;-)

    But at least you can explain to me how exactly you plan to use those GREAT SCALAR (SIMD) instructions in a ray-tracer - I am just curious, because you probably have in mind some specific implementation of the general ray-tracing algorithm that I am not aware of? You can even include the algorithm in your reply - I will be very glad if I learn something new :-))
    Maybe you can even help Kinetix (now Discreet) improve their product with these algorithms - since I have friends on one of their development teams, I can even give you emails so you can explain to them what is wrong with their product, such that on the "Mighty 1.5GHz P4" it is slower than on a cheap 1GHz TBird :-)))?
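
    The question above (how SIMD helps a ray-tracer) does have a standard answer: packet tracing - pack several rays into the vector registers and evaluate the same intersection math on every lane at once. A minimal NumPy sketch, where arrays stand in for SSE registers; the function name and scene are invented for illustration, and ray directions are assumed normalized:

    ```python
    import numpy as np

    def intersect_packet(origins, dirs, center, radius):
        """Intersect a packet of rays with one sphere, all lanes at once.

        Mirrors SSE-style packet tracing: the same quadratic runs on
        every lane in parallel instead of one ray at a time.
        """
        oc = origins - center                       # (n, 3) per-lane vectors
        b = np.einsum("ij,ij->i", oc, dirs)         # per-lane dot(oc, dir)
        c = np.einsum("ij,ij->i", oc, oc) - radius ** 2
        disc = b * b - c                            # discriminant per lane
        hit = disc >= 0.0
        t = np.where(hit, -b - np.sqrt(np.maximum(disc, 0.0)), np.inf)
        return hit, t
    ```

    Four rays cost roughly one ray's worth of instruction issue; that is the whole appeal of SSE/SSE2 for rendering inner loops.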

    And about the money for the P4: I am pretty satisfied with my 1.533GHz TBird with PC2100 (which renders in 3DS Max about 2 times faster than the "Amazing 1.5 P4"), and to tell you honestly, even if the price of the P4 were much lower than a TBird setup I would still buy the TBird (well, software developers are not so badly paid after all - maybe not as well as stupid idiots trying to sell bullshit machines to people, but...).

    And a last comment about you and your company: yes, I am exactly "operating in the dark with no knowledge of you or your company", so I have to judge you (and your company) just by the stupid comments you are making on this board, and believe me, you will not like my judgement (don't know about the company, though - one stupid person in a company may not spoil it, after all).

  38. The P4 is nothing when it comes to rendering. Its core FPU is so freaking weak; I can't believe Intel would put anything in the P4 weaker than the P3, let alone the FPU. Sure, its bandwidth will help when working in 3D apps, but Athlon systems aren't that hurt by bandwidth. Prefetching has no place here.
  39. Gee, it seems like now that you have nowhere to run you are actually turning around and agreeing with me. Your initial post was in the Graphics card forum, where you were considering buying the GVX1. You said in your own words: "I mainly use my machine for animation"; you then mentioned how you like to play games occasionally too. After a bit of pressure, and being exposed as a fraud, you moved to this post, where you continue to avoid the major issues and post lies. Why do both my CPUs register when I'm in 1 viewport? According to you this shouldn't happen. Still no answer. You said you mainly used your computer for animation, but you continually quote other people. For instance:

    As I clearly stated before, I can only tell you what I was told by other people about dual CPUs in Max, and they all say that. Dual CPUs are good for multitasking but generally can't be utilized doing one thing. Again you point to a system/card test; use the normal Max benchmark tests for that, moron. ktx_rays is just a file that has a volume light, so it is a bit slow to render. We could use any file we wanted; it doesn't matter. It is a render test, OK? Please understand that. You can have a 2MB 2D card and render at the same speed. I hate to keep asking whether you actually know anything about this topic, but you seem not to understand the basics. You call me a moron, yet you don't even know the relationship between the video card and rendering; that [-peep-] you said about the white-line error from a year ago affecting final movie rendering, having to go through each frame of the AVI and fix them... that just proves you don't have a clue.

    3D Labs' drivers aren't crap; they just haven't been updated in a long, long time and there is room for improvement. The main bad thing about 3D Labs, from what I hear, is the customer service; combined with drivers that are never improved, that makes them a bad company to go with. Sure, they make good cards, but many others aren't too far behind.

    You arselicker, I'm telling the forum that anyone with 3D MAX and NT or W2K can boot up Task Manager and see their CPU usage as they work in Max. You keep avoiding this issue rather than just admitting you lied. Does anybody out there know whether the CPU graphs are a gimmick or a real measure? Otherwise M Kelder the moron will continue lying, as he has about 4 times now.

    You still haven't come up with any real proof as to why you discredit 3D labs. You even go on to say:

    3D Labs' drivers aren't crap; they just haven't been updated in a long, long time and there is room for improvement. The main bad thing about 3D Labs, from what I hear, is the customer service; combined with drivers that are never improved, that makes them a bad company to go with. Sure, they make good cards, but many others aren't too far behind

    Now you tell us they make good cards but their service and driver updates are poor. Is your mother a whore? Somehow you have come to believe that lying through your teeth is acceptable. I OWN a 3D Labs VX1 and my support from 3D Labs has been excellent. Whenever I email Charlie (yes, he works there; email him yourself) he has responded with solid advice within 24 hours. You say they haven't updated their drivers in a long time... you smelly C>UNT... have a look at their drivers page and the dates posted next to each driver for the various platforms. I think you need to see a psychologist and fix this compulsive lying habit of yours.

    You started out in the Graphics card forum by saying you mainly use your machine for animation. When pressured, you said you hadn't done any animation for a while. You have never quoted from your own experiences but always state "what I was told"... or... "what I have heard". I tried to tell you how life works in a real 3D environment and you keep telling me about rendering... I keep telling you it isn't the biggest issue in the real world, and now you seem to agree with me. And where exactly is your proof that a 1.5 AMD unit beats a dual P3 unit in a 3D environment? You are one demented arselicking f>cking loser. Don't waste your time trying to build a career in computer graphics; you're simply too stupid to go anywhere with it.

    "Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
  40. your post is so full of erroneous claims and spews of opinion that I cannot even begin to address it..

    How many P4 machines have you tested on Max, Lightwave,
    Maya, etc.?
    I have helped design and build hundreds, and use several myself..

    we have dozens of design studios, effects houses,
    studios, etc. using P4s, and to my knowledge very few even consider using an Athlon..

    there are none at Sony Pictures Imageworks, Disney, Digital Domain, or Industrial Light and Magic; only P3s, P4s, and SGI,
    so it would seem that if what you say is true and the Athlon is 2 times faster, why do no effects houses use them, and why do you see very few at Siggraph..
    ever been to Siggraph? I have, for the last 10 years,,,

    the reason is you are full of crap, have no experience with the P4, cannot explain why no professionals use the Athlon
    for 3D rendering, or why the P4 floating-point scores are so much faster than the Athlon's..

    I promise you the 1.7GHz P4 wails on your Crapalon,
    as I have tested one in several apps..

    no matter what opinion you want to spew,
    you cannot change computer architectural design facts and the laws of engineering,

    the P4's bus is 2 times that of the Athlon, Rambus has 2 times the bandwidth in dual channel, the ALUs of the P4 operate at over 2 times the speed of the Athlon's, the SSE2 instruction execution is more advanced and can operate with more efficiency than the Athlon's, and the 850 chipset
    is by far faster and has more bandwidth than the 760 !

    nothing in the world you can say will change that..

    software will only become more optimized for the P4,
    and the P4 will hit 2GHz in 3 months while the Athlon will only be at 1.4GHz..

    Rambus 2 will arrive in the next 6 months,
    running at 1000 and 1200MHz and soon at 1600MHz,

    and quad-channel Rambus for servers will allow up to 6.4
    GB/s of memory bandwidth...

    all of which leaves the Athlon as a low-end gaming solution..

    the fact that you use an Athlon for professional rendering pretty much proves to me you are a total sham and
    wannabe, as most PRO artists are now using the P4 with Windows 2000...

    give it up, buddy, and go back to reading questionable Internet articles from people who have never seen a P4, much less used one regularly..

    believe me, if Athlon was faster and better I would have a few, as I tend to like using what's best..

    but sadly it is not, and I am not biased, just realistic
    based on what I have seen and others in the industry...

    and if Athlon was so good, PRO workstation companies like
    HP, SGI, etc would be using them , which they do NOT
    and Servers would be based on the Athlon,
    but sadly they are NOT....

    when we test a dual Foster P4 2 GHZ in a few months,
    with Quad channel RAMBUS, you will still be spewing that
    your athlon is faster, WHICH IS SAD...

    you are blinded by EGO and not reality..

    give it up already...
    facts are facts
    and you are not worth a moment more of my time with your insults..


  41. Hey M Kelder. Have a look at Cyberimage's posts. I'm not going to pretend I'm smarter than all the well-established studios, but you probably will. Does this give you a better picture of the real 3D world, not the fake fairy-tale world inside your stupid head?

    "Cock-a-doodle-do" is what I say to my girl when I wake her UP in the morning!!
  42. You tell 'em, Tonestar !
    some people are legends in their own twisted minds, and he is one of them, apparently

    keep up the funny posts

  43. Hi,
    I am not going to waste my time replying to some idiot like CYBERIMAGE, but I will reply to your post just because you seem a much more reasonable (and thinking) man:
    CYBER is just a child who impersonates his father (who probably really has a hardware reseller company) and is just speaking bullshit without ANY clue. I am sorry to disappoint you, but all those studios mentioned in CYBER's response are NOT USING either P3 or P4 - most of them are using SGI multiprocessor servers (like a 46-processor Challenge). Also, with the inability of the P4 to do SMP until Northwood comes, and with its SAD-performing FPU, the P4 has nowhere to go. For real P4 rendering performance just have a look at:
    1. For 3DS Max - go to Tomshardware (v2) or x-bit (v3) and you will notice that basically a 1.4GHz P4 is roughly equivalent to an 800MHz TBird (100MHz FSB!!!)
    2. For Maya results just have a look here:
    http://www.maya-testcenter.de/renderresults.html - here the P4 is a little better - it takes only a 1.2GHz TBird to beat it without any problem (BTW, a TBird 1.2GHz/133MHz is even faster than a dual 800MHz PIII - "Intergraph Z3D" - faster than a dual MIPS R12000A/400MHz - "SGI Octane" - and faster than a quad MIPS R10000/300MHz - "SGI Onyx2 Inf. Reality")

    Unfortunately I cannot provide you with a link for a LightWave test (I am just too lazy to search), but my friend recently bought a Dell P4/1500 and was very disappointed seeing how a cheaper machine can be much faster (OK, I admit my CPU is overclocked to 1.466, but..), so he had to return it, and now he is extremely happy with a 1.33 TBird. Anyway, if you are interested in Lightwave results I can render a scene at work on a P4 and tell you the time it took. So generally NOBODY uses the P4 for any rendering (or for anything else, BTW - Taiwanese motherboard manufacturers sold 50% fewer P4 motherboards than expected, about 100,000 total for the whole of Q1, which is really NOTHING, especially keeping in mind that most of those boards are still at resellers), so don't get sucked into stupid purchases by idiots like CYBERIMAGE.
    The best x86 for rendering now is a dual 1GHz PIII (on pure performance) or a 1.33GHz TBird if money matters, with the 1.33GHz winning significantly on price/performance ratio.
  44. Ok folks, now I'm sorry I started this thread.
    Let's get some facts straight. Being able to debate technology without insulting each other's mothers is a sign of maturity. That is lacking in these posts. What isn't lacking is in-depth knowledge of the P4, some misguided ideas about how MAX works, and a mix-up of verbiage.
    I have used MAX for a few years now in a production environment. I have rendered animations although the lion's share of my work is stills (architectural visualization).
    I have used cards from Elsa (permedia 2 chipset~3d labs produced), 3D Labs GVX-1 and VX-1, GeForce2 Pro, and Wildcat 4110.
    1) The term "rendering" can be applied to both viewport display and final images intended for print or animation.
    MAX users use it for the latter; graphics cards intended for real-time display of games tend to use it for the former.
    2) OpenGL and DirectX are used exclusively for viewport display. SSE and 3DNow! assist this display and do nothing to accelerate rendering (of stills and animations).
    3) Dual CPUs are of use to MAX users running NT or 2000 because:
    a) they render almost twice as fast (MAX has a multithreaded renderer - however, preparing lights and several other functions necessary before the actual rendering occurs are not multithreaded - at least not in v3)
    b) viewport display is multithreaded if the graphics card drivers are also multithreaded (like the VX1's), or under certain circumstances where geometry processing for display has been offloaded to the CPU because the graphics card has no hardware geometry processing (the difference between the VX-1 and the GVX-1 or GeForce).
    c) 3D Labs drivers are generally very good, but they stop revising them when new products surface. What company can afford to keep working on drivers for old products when they have new technology to iron out? Especially in a niche market. Understandable, but it still sucks for the end user who can't afford to upgrade.
    d) The GeForce2 GTS absolutely rocks for MAX. I get close to the same viewport performance at home with a GeForce2 PRO ($209) as with a Wildcat 4110 ($3000) at work. Granted, the image quality isn't as nice, but it has no effect on a rendered image, I can still clearly see what's going on, and it's $2800 less.
    e) AMD renders faster (again: final rendering, not viewport rendering) than a P3 or P4. Tom has proved it. Various 3D sites have proved it. And there's no more reason to discuss it. The P4 may be a very advanced chip, and future software developments may let it take advantage of its qualities and kick the Athlon's ass. I personally don't give a flying f**k. When it happens, let me know. And if the price of the processor and the RAM necessary to use it are both priced proportionally to the performance, then I will buy it.
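    The "almost twice as fast" qualifier in the multithreaded-renderer point above is just Amdahl's law: the single-threaded preparation work (lights, etc.) caps what the second CPU can add. A minimal sketch with a made-up serial fraction (the 10% is purely illustrative, not a measured MAX number):

    ```python
    def amdahl_speedup(serial_fraction, cpus):
        # Overall speedup when only the parallel part scales with CPU count.
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cpus)

    # Hypothetical: if 10% of a frame (scene/light preparation) is
    # single-threaded, two CPUs give about 1.82x rather than a full 2x.
    print(round(amdahl_speedup(0.10, 2), 2))
    print(amdahl_speedup(0.0, 2))  # perfectly parallel work would double
    ```

    The serial fraction also explains why the gap from a dual setup grows with more CPUs rather than shrinking.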
    If anyone wants to know details about my setup, how fast I can render something, or how good my videocard is when using MAX feel free to ask and I will answer.
    As for the insulting little pricks, I suggest this:
    Why don't you go over to the HardOCP forum or other such gamers'/overclockers' sites, where they will promptly tear you a new [-peep-] for being such childish, insulting losers? They seem to be more adept at dealing with people who only want to insult each other in the name of fun.
    This forum is for those of an adult mentality who want to discuss technology and its applications. Recently it's become a dumping ground for losers who only want to get a rise out of the people on this forum, and if I didn't feel the need to back up someone who helped me out when I asked for it, I wouldn't even have qualified this bullshit with a response.
  45. Oh, by the way, is this the same Cameron who got his ass spanked on the MAX forum more times than I can remember for trying to use it as an advertising channel and for hyping whatever technology he seemed to be selling at the time? The same Cameron who spent months telling everyone that onboard geometry processing was a waste of money, only to flip his opinion when he started carrying cards with geometry processing? The same Cameron who finally disappeared when it was revealed that although he had been using MAX for 500 years or so, he had no creative work whatsoever to show for it?
    I hope not. Although that would explain the attitude.
  46. LOL
    It is so easy to see what Cameron's experience with 3D is.
    Just take a look at their internet site.

    He looks like a man with 500 years of 3D modelling/animation experience.

    Anyone has doubts?

  47. Just want to set a few things straight. I never lied, and I never tried to hype anything up. I don't know why you keep insisting that I lied. Personally, I don't really care. I've seen benchmarks that prove the Athlon is by far the best CPU to have for rendering. Just take a look at the topic; I've tried to stick to that, but people just keep trying to talk about the viewports. I've also tried to stay cool about this, but you seem to want to blow things out of proportion. I believe I said that I am an amateur animator in the other thread, and when I couldn't speak from experience I said so.

    Read mtedesco's newer message for better wording of what I was trying to say. Thanks, BTW, for clearing up how drivers can be written for special usage in Max. As you see, I didn't lie; I simply didn't know that. But if you don't have multithreaded drivers, my statement above is true.

    mtedesco, I hope to discuss this topic more with you. I had hoped to get a good thread going here with tests and such, but as you know things got blown out of proportion thanks to some immature people. Perhaps this could only happen in private messages, since Tom's forums are loaded with insane people who can't hold a normal conversation.
  48. Check out the Maya test center on Highend3D.com; divide the Athlon score by 2 to get an idea of how the dual Athlon will work. The Intel dual uses the second processor at only 60% (a limitation of the design); the AMD will be 100%.
    Boxx Technologies is releasing a dual Athlon 1400 in June.
    www.amdzone.com has the link towards the bottom of the page.

    Most people would be amazed how much 3D is done on old P2 300s (and lower) and R4400s. Legacy systems get built up, and it gets very expensive to replace them in terms of lost productivity while the switch is being made.
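
    The divide-by-two estimate above can be turned into a quick back-of-the-envelope calculation. The utilization figures are the poster's claims (60% for the second CPU on Intel, 100% on AMD), not measured results, and the frame time is hypothetical:

    ```python
    def dual_render_time(single_cpu_time, second_cpu_utilization):
        # Effective throughput: the first CPU counts fully, the second
        # contributes only its utilized fraction.
        return single_cpu_time / (1.0 + second_cpu_utilization)

    frame = 100.0  # seconds per frame on one CPU (hypothetical)
    print(dual_render_time(frame, 0.6))  # claimed dual Intel: 62.5s
    print(dual_render_time(frame, 1.0))  # claimed dual Athlon: 50.0s
    ```

    In other words, under the poster's numbers a dual Athlon would finish a frame in 80% of the dual Intel time.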
  49. Anim88tor, I couldn't find that link to Boxx Technology's dual board. Please post the link if you get a chance.
    m_kelder don't let it get to you man, you kept your decorum even when they were at their worst. Any info I can share or questions I can answer just let me know.