Parallel Processing, Part 2: RAM and Hard Drives
The second part of this article series deals with the differences between single and dual channel memory, and the performance benefits of using RAID with two or even four hard drives.
It would've been interesting to see if the older Pentium 4s and Pentium Ds benefit from dual channel. Overall an interesting article, though. I wasn't surprised by the results, since the Core 2 architecture is quite tolerant of a weaker memory setup.
The funny part is, I just bought a new mainboard to replace my old one so I can run my memory in dual channel AND have access to a RAID 0 controller to get a RAID going.
muk said: The second part of this article series deals with the differences between single and dual channel memory, and the performance benefits of using RAID with two or even four hard drives.
Is there a part 3 to this? I always thought parallel processing had to do with either multi-threaded application performance, using several processors to crunch the info at once, or large-scale multi-processor platforms doing huge number crunching like a server render farm?
@wirelessfender: Yeah, I was wondering the same thing; all they did was run benchmarks. The fact that there is little or no advantage (to RAID) as far as FPS in games goes is not surprising to me. But the real benefit is supposed to be load times, etc. Why didn't the author perform some good old stopwatch benchmarks for the setup (load times, file transfers, encoding, etc.)?
The thing I'm really surprised about is that the dual channel memory had little effect. I would have thought this change should give a perceivable performance increase (5-10%, nothing huge). I wonder how running these tests on an AMD platform would have come out (because of the IMC and lack of FSB). Or if they had lowered the multiplier and increased the FSB to run 1:1 at DDR667 or DDR800 but kept the operating frequency the same. I've heard all around that running RAM on an Intel platform above 1:1 has VERY little effect on performance (even in the synthetics). It could even have been educational to run the RAM at DDR533 single- and dual-channel with the proc at stock to see how that comes out. It was a somewhat unscientific approach to the question; a single control with multiple trials would have provided a better overall picture of hard drive and memory parallelism.
Jim_L9 said: I wonder if the lack of increased throughput going from 2 to 4 drives is due to a limitation of the onboard RAID controller.
Load times are faster.
But once it's in RAM it's all up to the video card and CPU (and slightly the sound card).
The memory is surprising to say the least.
My friend, when running single channel, has some serious reload-to-desktop times after some games (ETQW demo). This may have been due to having half the memory (it was not like that in dual...), but I am willing to bet there is also a memory speed factor there. Unfortunately the memory was also later found to be not 100% compatible with the board...
Good stuff to know.
This article didn't prove that RAID makes no difference, it just showed that in games it won't increase frame rates (honestly, who thought it would?). There are tangible gains in the encoding segments; remember that a typical encode takes 2-4 hours and they tested for 90 seconds, so a 3-5% gain in speed is somewhat meaningful, and capturing via IEEE1394 (esp. HD content) will require a faster storage option, such as RAID.
That said, what they didn't show is if RAID makes Windows load faster or if it makes files copy quicker. I'm constantly working with 4-10GB files, and moving those around on a single drive would suck. Like my sig says, I have RAID10 & RAID5 arrays. I can tell the difference: Vista benchmarked the RAID10 array at 130MB/s write (you can access the XML file for the Vista index and get this data), something no single drive can do.
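For anyone curious about pulling those numbers out yourself: the Vista index results live in an XML file, and a short script can extract the sub-scores. Here's a minimal sketch; the element names (WinSPR, MemoryScore, DiskScore) and the sample values are from memory and should be treated as assumptions, not verified against your machine's file.

```python
# Sketch: extracting Windows Experience Index sub-scores from a
# WinSAT-style results XML. The SAMPLE string is a made-up stand-in for
# the real file (usually under C:\Windows\Performance\WinSAT\DataStore).
import xml.etree.ElementTree as ET

SAMPLE = """<WinSAT>
  <WinSPR>
    <SystemScore>5.4</SystemScore>
    <MemoryScore>5.9</MemoryScore>
    <DiskScore>5.4</DiskScore>
  </WinSPR>
</WinSAT>"""

def read_scores(xml_text):
    """Return the WinSPR sub-scores as a dict of floats."""
    spr = ET.fromstring(xml_text).find("WinSPR")
    return {child.tag: float(child.text) for child in spr}

scores = read_scores(SAMPLE)
print(scores)
```

To use it on a real machine you'd read the newest `*.WinSAT.xml` file from the DataStore folder instead of the inline sample.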
Basically, RAID won't make your applications run faster (except for a few rare ones, mainly video based), but it should make them load faster. THG could test this if they cared to, but we've seen graphs of performance in terms of throughput; it would just duplicate those results.
joex444 - not exactly correct. It makes sequential transfers faster. A lot of game loading requires decompressing textures, etc., and so RAID0 offers marginal benefits to gamers. There have been benchmarks done to prove that. Same for start-up.
File copying and media work absolutely do benefit; however, it is surprising that the media work performance increase is, again, marginal.
File serving is where RAID shines, particularly with a good controller. And other applications such as file moving, but that is hardly a common operation.
The benchmark selection for RAID was, indeed, curious.
wirelessfender said: I'm confused. So having a RAID setup for speed doesn't affect gaming much at all? My friend has a RAID 5 and boasts that his games will load faster than people with a single hard drive. Is this still true? From those bar graphs it doesn't look like it.
One or both of two things are likely:
1) He is simply perceiving it as faster because he knows he has RAID
2) (and this is the more likely one) he is not comparing apples to apples - there are other factors interfering, such as faster drives being hooked up to the RAID in the first place, or better file arrangement on the drive, etc.
Yeah, at least they made partial headway into benchmarking performance for games and RAID 0... The framerate comparisons weren't quite what I was hoping for, but it's at least better than the synthetic benchmarks that used to be run for the most part, which often don't correlate very well with practical use. They could have made a fairly large impact on their readerbase if they had chosen to benchmark load times instead, even for a variety of different games, rather than the nearly pointless FPS comparisons.
I think it would have been nice to have benchmarks done with 4-drive RAID 5 as well, because I don't think many of us would actually use a 4-drive RAID 0. I say this because I'm sort of considering a RAID 5 for my next build. I know that in the server world it's a big help, but I was wondering more about normal desktop performance.
It was a bit stupid running FPS benchmarks for hard drives. It does NOT increase FPS. The sad part is they didn't bother using any real games that load a lot of data. Oblivion is just one example: a HUGE amount of data is being transferred throughout the ENTIRE game. It would be better to know if RAID vs. single makes any difference in that respect (like the famous pauses in Oblivion).
It's already been proven RAID does help with loading and writing data.
computertech82 said: It was a bit stupid running FPS benchmarks for hard drives. It does NOT increase FPS. The sad part is they didn't bother using any real games that load a lot of data. Oblivion is just one example: a HUGE amount of data is being transferred throughout the ENTIRE game. It would be better to know if RAID vs. single makes any difference in that respect (like the famous pauses in Oblivion).
It's already been proven RAID does help with loading and writing data.
I don't think it's stupid at all - it just proves the point (again) that a RAID setup does nothing for FPS numbers.
kittle said: I don't think it's stupid at all - it just proves the point (again) that a RAID setup does nothing for FPS numbers.
Well, I don't know who would expect the FPS to benefit, since most of the HDD access generally happens at the level loading stage, after which the FPS is limited by processor / memory / GPU performance. I think that's what the poster was saying.
Simple but good article. You should maybe have tried a different motherboard chipset for the memory to see if it acts the same way. I was very impressed, though, to see how little difference it made. That's good to know. I don't always have to recommend that people buy dual sticks now, unless that's what they want.
kittle said: I don't think it's stupid at all - it just proves the point (again) that a RAID setup does nothing for FPS numbers.
As I said, and others: RAID does NOT improve FPS, PERIOD. So the test for it is pointless; most people already know it. The REAL question is whether it improves the loads within games and/or the pauses in games that stream a lot of data.
No, RAID is not going to affect FPS in general, unless you have a small amount of RAM, and then you will have occasional hiccups as data is transferred from HDD to RAM when you enter new graphical areas inside the game or something. In general, a game tries to put all the data into RAM because it can access it fastest from there. RAID should shorten load times and reduce said hiccups, though.
Parallel or multi-threaded processing can be severely limited by the HDD. At work, I process terabytes of image data at a time. It is far too much data to load into RAM, so I have to read it and process it as it is read from the HDD. I've also multi-threaded the reduction software to use the 2 cores of my 4400+. You might see the problem here: how can you read data for 2 threads simultaneously from one HDD? You can't. I wrote it anyway to see what would happen, and it works out well.
1) One batch of data is loaded off the HDD and then sent for processing - all with thread 1.
2) The HDD is free to load data for the other thread 2.
3) When thread 2 is done reading a few milliseconds later, thread 1 has just finished processing its data and now thread 1 can freely access the HDD again, while thread 2 now computes.
The process is self-structuring for maximum efficiency without any programming on my part, other than setting up the 2 threads. Wicked! However, you can see it is a balancing game. I was lucky to have file sizes (hundreds of thousands of them) small enough to be read quickly enough - in the time it takes the other thread to do the computation in RAM on its data. I get 100% utilization on both cores. If the file sizes were too large, then you would have threads waiting around OR the HDD trying to read 2 things at once, neither of which is efficient. The same thing happens if file sizes are too small; you will again be limited by the HDD. If the computations take longer than the HDD reads, then at least you're not limited by HDD read time. In my case, the computations are very fast and HDD access is the limit.
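A toy sketch of that scheme: one lock stands in for the single HDD, so a thread holds it only while "reading", then computes while the other thread takes its turn at the disk. The file sizes, the read delay, and the sum-of-squares reduction are all made up for illustration; the real software obviously does actual I/O and real math.

```python
# Two worker threads share one "HDD" (a lock): a thread reads a batch
# while holding the lock, then releases it and processes the batch,
# letting the other thread read in the meantime. This reproduces the
# self-balancing ping-pong described above.
import threading
import time

disk = threading.Lock()          # only one thread may touch the "HDD"
results = []
results_lock = threading.Lock()

def fake_read(n):
    time.sleep(0.001)            # stand-in for reading a small file
    return list(range(n))

def process(data):
    return sum(x * x for x in data)   # stand-in for the real reduction

def worker(batches):
    for n in batches:
        with disk:               # serialize HDD access
            data = fake_read(n)
        out = process(data)      # compute while the other thread reads
        with results_lock:
            results.append(out)

t1 = threading.Thread(target=worker, args=([10, 20],))
t2 = threading.Thread(target=worker, args=([30, 40],))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))
```

The balancing falls out naturally: whichever thread finishes computing first grabs the disk lock next, so neither the "disk" nor the cores sit idle as long as read time and compute time are comparable.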
I have loaded small batches of data just into RAM to see how much faster it was... and it was like 20 times faster - IF it could process all the data from RAM. But that data still has to be loaded off the HDD at some point, so I can't really make effective use of that.
But at least I have the data reduction time below the data collection time - if I could only interface my software to the camera, then it could process the data in real time!
One thing I haven't tried is to split my files between 2 different HDDs and run 2 single-threaded instances of my software, each reading from a different HDD. At least then the threads wouldn't be waiting around for each other to finish with the one HDD. However, since I was lucky and my data reduction time for each file is nearly equal to the data read time, and the process paces itself for maximum efficiency (implying only one thread gets access to the HDD at a time), I haven't needed to bother.
I am looking forward to getting my Q6600 system this Christmas (according to my information). With 4 threads wanting HDD access, the balance I have now will definitely change... I will see how it does.
I am so intimately familiar with how fast this data should be reduced that I find it a great little way to personally bench my systems. I can bench HDD-limited performance and RAM-limited performance. In the RAM-limited case, all 4 cores can access the RAM simultaneously, so I am expecting some huge improvements there. The software is a mix of FP and integer computations, so it gives a good test of the system overall - except the GPU of course!
edit: holy c**p. I just looked at the HDD performance for RAID0. In my case of processing so much data off my HDD, this would make another HUUUGE improvement!! Wow. I have 4 500GB HDDs now, all separate drives, no RAID. I should RAID0 two of them!
RAID + multiple threads will improve performance for read-only requests. Attempts to open a file for read/write access cannot benefit from multithreading, because the OS must serialize accesses to the file to ensure there are no simultaneous clashes. So multithreading does NOT benefit Windows swapfile access (though RAID could).
Games are about the only application that reads a lot of data files without the intention of writing back to them. So they could benefit from data management processes that ensure the data most needed is most likely to be available. I believe that most games now have a data/memory manager that does just that.
If I have a WinRAR compression to do, I don't stare at it until it finishes; I just put it in background mode and do something else. And since the difference between multicore and single core seems too small to justify, I generally set it up to run on a single core. Who among you starts a video compression or WinRAR job and does nothing until it finishes? True, you can run it overnight, but, as the term says, you are supposed to give just one command to the computer before going to sleep. If I decide to let my system run overnight, I try to make sure it has work to do; I generally don't like the "shut down after finishing" option, that's why. So I might queue 2 video conversions and 2 WinRAR jobs, and, if I had internet at home (which I would never do to my main computer), some downloading software too. All of them possibly in single-thread mode and at different priorities from the Task Manager (below normal to low, just in case I have to do something else; Explorer, having the higher "normal" priority, would still answer to me). As I said earlier, the choice between single thread and multithread when I am multitasking is obvious, but not as important as deciding thread count and priority while you are at the computer. I have noticed the difference since my Hyper-Threading P4. What I would have liked to see - an important experience/knowledge gain - is WinRAR or a video conversion running in parallel with a game or any other application, but on a different HDD from the game's and on a single thread; versus multithreaded but at lower priority; and versus RAID0 on the same array instead of on 2 different hard drives. And, of course, a small downloading program running in the background, as this is what generally happens.
And what about big, cheap memory compared to smaller, faster memory? If for the memory in the article single vs. dual channel made almost no difference, it could matter for a more demanding setup, or for slower memory in the case I wish they had explored. Most of the readers are system builders, at least as a hobby. Would you tell a friend to go with 4 GB of dual channel "slow" memory or 2 GB of high-end memory that would never be put to good use?
All the previous suggestions still stand IMO. Mine are just an addition to the mass. It is a huge amount of work to test all the different setups, but I think it would attract a great number of readers too, who pay with their ad support.
This is a matter of system setup and personal experience, but I would like your opinion on what works faster with 2+ workloads to be processed: one thread per core with all running in parallel, or each workload at a different priority level using all available cores, so that when one finishes the next begins. I am thinking that request collisions on the HDD and memory, and the cache pollution from multiple threads, could balance out the fact that most software is artificially rather than natively multithreaded. But that depends on the setup. The Pentium D, for example, is built for multitasking, while Core 2 seems the opposite. Has anyone tried the things in my previous post and has any result to share?
I didn't like the article... The RAID info hardly helped me at all...
You boneheads at Tom's need to bench the loading times, not FPS... And for Pete's sake, use better games for your benches...
QUAKE IV!??! What the hell man? Oblivion is the perfect test for this, not quake!
dumb dumb dumb dumb.
Have you tried Postal 2? Talking about perfection... on the same system, Oblivion load times are much less irritating than Postal 2's, defragmentation or not, software running in the background or not (well, Oblivion is more sensitive to software running in the background, but that's because it is more demanding in general).
Anyway, no need to flame the guys at Tom's. Maybe they're just trying to make a point: that framerate does not depend on the HDD in a well-written game that loads areas while you're getting there, not after you arrive.
I still agree with you that a whole lot more tests should have been done, but that means people working for peanuts. Advertising should at least pay the electricity bill and the shortened life of components, assuming the people involved truly have fun playing around with different setups and have other ways to earn money.
RAID0 only improves bandwidth. It does not reduce latency. The drive arms still need to swing across the platters, which in a perfect world yields the same latency. Since you need to wait for all arms to swing and return data, seek performance is realistically going to be slightly worse. Because of this, only large file transfers are going to see a benefit. Small data transfers, non-sequential I/O, and metadata work are not going to see much if any improvement.
This is why all the calls for load-time testing. Your load times should improve with RAID0, as this is sequential I/O and often a long sustained data transfer.
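A back-of-envelope model makes the point concrete: a request costs roughly a fixed latency (seek + rotation) plus size over streaming bandwidth, and only the bandwidth term scales with drive count. The seek and transfer numbers below are rough guesses for a 7200rpm drive, not measurements of any particular setup.

```python
# Model: transfer time = fixed latency + size / (per-drive bandwidth * N).
# RAID0 multiplies bandwidth but leaves latency untouched, so small random
# reads barely improve while big sequential transfers scale almost linearly.
SEEK_MS = 12.0          # average seek + rotational latency, per request (guess)
STREAM_MBS = 60.0       # sustained transfer rate of one drive, MB/s (guess)

def transfer_ms(size_mb, drives=1):
    """Time in ms to service one request on an N-drive RAID0 stripe."""
    return SEEK_MS + size_mb / (STREAM_MBS * drives) * 1000.0

# 4KB random read: latency dominates, so 4-drive RAID0 barely helps
small_1 = transfer_ms(0.004, drives=1)
small_4 = transfer_ms(0.004, drives=4)

# 600MB sequential read (level load, file copy): bandwidth dominates
big_1 = transfer_ms(600, drives=1)
big_4 = transfer_ms(600, drives=4)

print("small-read speedup:", round(small_1 / small_4, 2))
print("big-read speedup:  ", round(big_1 / big_4, 2))
```

With these numbers the 4KB read speeds up by well under 1%, while the 600MB transfer approaches the ideal 4x, which matches the "sequential I/O benefits, metadata work doesn't" claim above.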
I can only assume dual channel RAM performance will only improve in the bandwidth department as well. Access times should remain the same or worse if I understand the technology correctly.
Are the reviewers insane? The comparisons are apples to oranges. They are running tests which either cache data from the drive and then execute, or run mathematical problems which have very small data input and require a lot of CPU time to come up with a result. The code to generate a new prime number is very small, but the result could take years.
Running Windows from a dual or quad RAID 0 array is much faster; it's one of the best ways to improve system performance (but not reliability), because the HDD is often the slowest part. Games don't run faster, they load faster. Games play from RAM, that's why it's there. And quad core processors are great for anyone who does more than 2 things at once. I alt-tab between games and browsing, and have torrents going and antivirus software with all the options on, with no slowdown because of it. I remember the days of alt-tabbing and waiting for an eternity because my game in windowed mode still needed CPU love. This article needs to be pulled and re-done with macro benchmarks of starting Windows, opening software, copying files from 1 drive to another, loading a video game, and some more tests that are meaningful. The conclusion of this review should say: apparently, the authors need to take Computers 101 over again.
I thought the article was good - kind of surprised myself that RAID was not much help - but like others have said, HDD performance should only affect load times. The part I also thought was not well thought out was the dual channel tests; increasing the bandwidth of system memory should improve load times, not FPS. Once it's in RAM it's all about swap time.
wirelessfender said: I'm confused. So having a RAID setup for speed doesn't affect gaming much at all? My friend has a RAID 5 and boasts that his games will load faster than people with a single hard drive. Is this still true? From those bar graphs it doesn't look like it.
I was really surprised by the RAID 0 benchmark results.
I don't know about you, but my real experience with RAID 0 is that load times, boot time, shutdown time, and disk access (read+write) are noticeably faster than a single disk, especially with 7200rpm hard drives.
Performing benchmarks measured in FPS during a game isn't really appropriate, especially if you have 2GB+ of fast RAM available. Maybe if they had used only 512MB it could show some difference, since it would be reading and writing from the disks most of the time.
As for the dual-channel tests, it didn't surprise me that much, since I'm used to seeing comparisons between low and high latency modules where the difference isn't that big either. Anyway, dual channel architecture surely provides more stability and some resiliency to the system (even without memory mirroring).
Running Windows from a dual or quad RAID 0 array is much faster; it's one of the best ways to improve system performance (but not reliability), because the HDD is often the slowest part. Games don't run faster, they load faster. Games play from RAM, that's why it's there. And quad core processors are great for anyone who does more than 2 things at once. I alt-tab between games and browsing, and have torrents going and antivirus software with all the options on, with no slowdown because of it.
While the RAID benchmark selection was, like I said, curious (to put it charitably), what you say is a bunch of unsupported hoo-ha. Really? RAID helps your torrents? I think those would be bottlenecked by the network throughput (with respect to the internet connection). How about the benchmarks that show that games load 3-5% faster at best with dual RAID0? Seriously, theoretical throughput is not always achievable in practice, and unless you know the precise access patterns for the task, there is no way to tell. And the funny thing is, the access patterns of the vast majority of applications, games included, do not lend themselves to acceleration from RAID0. Sorry. Even multimedia tasks show much less than the theoretical increase in speed in the real world.
Now file copying is great, but unless you do that all day long (and if you do, please do tell us about your job / hobby), the disappointment is well warranted.
russki said: While the RAID benchmark selection was, like I said, curious (to put it charitably), what you say is a bunch of unsupported hoo-ha. Really? RAID helps your torrents? I think those would be bottlenecked by the network throughput (with respect to the internet connection). How about the benchmarks that show that games load 3-5% faster at best with dual RAID0? Seriously, theoretical throughput is not always achievable in practice, and unless you know the precise access patterns for the task, there is no way to tell. And the funny thing is, the access patterns of the vast majority of applications, games included, do not lend themselves to acceleration from RAID0. Sorry. Even multimedia tasks show much less than the theoretical increase in speed in the real world.
Now file copying is great, but unless you do that all day long (and if you do, please do tell us about your job / hobby), the disappointment is well warranted.
Yes, that is what I do! I read data off the HDD all day. I expect RAID0 to speed that up, but not to speed up the PROCESSING of my data once it is in RAM. RAID0 will get the data loaded from the HDD into RAM faster, and this helps a lot. I process terabytes of image data looking for individual photons in millions of frames of data.
edit: more recent discussion at http://www.tomshardware.com/forum/244224-32-noob-raid-question-simple#t1735029
Russki, the last part was in reference to the statement about quad core processors, not the RAID portion:
And quad core processors are great for anyone who does more than 2 things at once. I alt-tab between games and browsing, and have torrents going and antivirus software with all the options on, with no slowdown because of it.
I also do IT work for a Fortune 50 company and work on giant NAS and DAS arrays on a daily basis. I'm fully aware of where throughput and speed apply; my point was that the authors of the article aren't, which is faithfully represented by the legion of comments calling them out. Tom's usually has good testing, which is why this article stands out. In the future, a complete grasp of what was said would be nice before making claims of "unsupported hoo-ha". The framers of the article deserve the rebuttal, not the ones who blow the whistle.
Personally I am looking forward to testing the FusionIO drives to possibly replace many of the arrays we have here. From preliminary data, it would seem solid state is the future.
Martin, well, I take it back then. Besides, Anandtech just recently did a pretty good analysis of threading in Unreal 3, which was excellent, so although Tom's made the point that multicore is still immature (part 1 of this article, if memory serves me right), Unreal just may be a sign of things to come.
Anyway, the quality of Tom's articles has plummeted, and there's no arguing with that.
You can't notice any difference between single and dual channel because in most cases the FSB is the bottleneck.
I'm using DDR2-1066, and on a 1066 FSB I don't notice much difference either. Until I crank the FSB way up to 1840. Then I start seeing a large difference between single and dual channel. At that FSB speed you can drop the memory divider down to 1:1 and you're still running DDR2-920.
RAID0 scales linearly when doing long reads/writes. Latency, however, doesn't improve, so for short reads/writes you might as well be using a single drive. What does make a large difference, however, is cache. I'm using an Areca controller with a 256MB cache in write-back mode, and Windows uses all free memory to cache reads (I have 4GB total). This made a huge difference compared to the onboard RAID, although benchmarks tell me there is no improvement at all in latency or sequential reads/writes. Now I can write chunks of up to ~250MB to disk and they finish instantly (I can hear the drives writing it all away for several seconds afterwards). Desktop usage and loading levels in games also go a lot faster after a while, since Windows XP caches all reads in free memory.
-Crank the FSB way up in dual channel. (At least buy an FSB1333 CPU if you aren't into overclocking)
-Stop adding more drives, start adding more cache. (Get a **** of RAM, think 4GB, maybe 8 if using 2GB modules)
-Use a dedicated RAID controller for write caching. (It doesn't have to be an expensive one with a dedicated XOR unit; any controller with onboard RAM which you can put in write-back mode will do)
I appreciate the article for what it helps to illuminate; more doesn't always mean better. Casual computer users won't benefit from all the horsepower these souped-up systems have. Gamers should build systems specifically for what they're trying to do; i.e. if you want a system that plays really smoothly at 1600x1200, then build a system to handle that load. Professional 2D graphics users have different needs, but they don't need a high-powered gaming PC. 3D pros have different needs again - thank goodness 3D apps tend to utilize multicore systems. And I appreciate what that IT guy had to say about the company he works for. Huge processing loads require huge processing power and lots of fast-moving storage. It's really quite peculiar to hear about multicore machines and gaming. I know that's where a lot of money and attention is focused these days. Even though it's a lot of fun, it seems to be a waste of technology.
It's not just the bus; even PCI-E RAID controllers have internal limits. For example:
The Areca 1220 I'm using is internally PCI-X and is limited to about 800MB/sec even though it's a PCI-E 8x card. I've placed it in the crossfire slot of a P35 mobo, which is wired up for 4x, and it tops out at about 600MB/sec. Since this is an 8-port card that's fine: the drives will always be the bottleneck. (Unless you plug in 8 Raptors, but if you can afford that you can also afford a bigger controller.)
Also, something nobody mentioned (in regard to the original article): if you compress with WinRAR, sometime in the future you have to decompress the data. I have noticed very little CPU activity during that time, with or without a virus scan running, and the most limiting factor is the HDD. It would be interesting to see this tested on THG.
Yes, I'm using the 1220. http://www.areca.com.tw/products/pcie.htm I suggest you read that a little slower this time 'round. I'm running that 1220 on an Asus P5K-E in the crossfire slot. The Areca BIOS itself confirms it's running on 4 lanes.
12xx series are SATA PCI-E models
11xx series are SATA PCI-X models
16xx series are SAS controllers (both PCI-X and PCI-E models)
Now the funny thing about all of Areca's PCI-E controllers except the 1280/16xx is that the processor and SATA controllers on the card are PCI-X. They have simply slapped on a PCI-E to PCI-X bridge. It's a quick and dirty way to turn their PCI-X cards into PCI-E cards, but it seems to work well. All their cards using this setup seem to top out at about 1GB/sec.
BTW: All of Areca's PCI-E cards are 8x PCI-E. I don't know where you get that 8x cards are rare; 8x PCI-E has become the new standard in servers these days. You can find just about anything on 8x PCI-E: RAID controllers, fibre controllers, 10Gb NICs, InfiniBand cards, SSL accelerators and so on. Mellanox already has 8x PCI-E 2.0 InfiniBand cards, and even Nvidia's QuadroPlex (4 video cards in an external box) can be used with an 8x adapter card.
[EDIT] I really want to see you try and squeeze 1GB/sec out of 8 drives. So in my situation (8-port card on 4x PCI-E) the drives WILL be the bottleneck.
I really did like the page 2 "Some memory history" section, even though I was kind of annoyed it left some things out (Rambus, for example). Rambus was an interesting solution for a bit. It did get outpaced, but it was better for a short while in 2002. (I think it was the first half of 2002, can't remember.)
Anyway, it would be cool to see more articles on "the history" of something. Like how hard drive tech has changed. We see articles on the future, but it is fun to read about and remember how things worked back when. It would be really cool to see benchmarks... hehe. How the IDT WinChip 200MHz Socket 7 chip sitting powered off behind me compares to a Core 2 Duo...
I'm not sure if the RAID testing took into account Windows caching. If you do any operations with a file that fits in the Windows memory cache, then your timing numbers won't reflect the HDD speed.
2-disk RAID0 is much faster than a single disk.
4-disk RAID0 is much faster than 2-disk RAID0.
Most of the benchmarks didn't test anything that would be affected by the HDD speed, so they didn't show this.
I recently built a new system with an Intel DP35DP motherboard, an E6750 dual-core processor, and two 2GB 800MHz memory sticks from G.Skill. Worked fine. Then a reader of my blog pointed out that I had configured the memory for single-channel rather than dual-channel.
I made the change, performing the Vista Windows Experience Index test before and after. The overall score didn't change, because it is limited to 5.4 by the RAID 1 hard disks, but the memory sub-score improved from 5.6 to 5.9.
I don't know how much difference that is in the real world, but apparently it's enough to be measured by that test. I wonder if Microsoft discloses the methodology of their Experience Index tests.
This was an interesting article to me, at least the single channel / dual channel portion of it. I've been under the impression that a dual channel memory configuration made a noticeable difference in memory-intensive operations, but according to the THG benchmarks this is not the case. I develop algorithms for processing radar data, both real-time and "offline", and could have sworn that the memory configuration made a significant difference. The article inspired me to do a couple of tests of my own using an algorithm I developed for processing radar data. For the sake of brevity: I found that the dual channel configuration was only 0.4% faster than single channel for the section of the algorithm which was very computation-intensive, but was 9.1% faster for the more memory-intensive portion. I used DDR400 on an Athlon 3500 (single core) for my test. I should note that the "memory-intensive" section of the algorithm manages over 1GB of data in memory, which is atypical for the average user.
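For anyone wanting to repeat this kind of split timing, the rough shape of the test is to time the compute-heavy and memory-heavy phases separately, so a memory-configuration change can be attributed to one or the other. The sketch below only shows that structure; pure Python interpreter overhead swamps real memory-bandwidth effects, so you'd swap in your own kernels (or a compiled language) for actual measurements.

```python
# Structure of a split-timing test: one phase that does lots of
# arithmetic per byte touched, and one that streams over a large buffer
# with little arithmetic. Time them separately before and after changing
# the memory configuration.
import time
from array import array

def timed(fn, *args):
    """Run fn(*args), returning (result, elapsed_seconds)."""
    t0 = time.perf_counter()
    out = fn(*args)
    return out, time.perf_counter() - t0

def compute_heavy(n):
    # lots of arithmetic, tiny working set
    acc = 0
    for i in range(n):
        acc += (i * i) % 7
    return acc

def memory_heavy(buf):
    # one streaming pass over a large buffer, minimal arithmetic
    return sum(buf)

buf = array("d", range(1_000_000))       # ~8MB of doubles
r1, t_cpu = timed(compute_heavy, 1_000_000)
r2, t_mem = timed(memory_heavy, buf)
print(f"compute phase: {t_cpu:.3f}s  memory phase: {t_mem:.3f}s")
```

Run each phase several times and compare the two ratios across memory configurations; per the post above you'd expect the memory phase to move much more than the compute phase.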
I was a little disappointed the article did not address the issue of onboard RAID versus independent RAID controller cards. I have read in a number of places that running your RAID on a separate PCI controller yields much better performance. This is because onboard controllers do their work in software and must use CPU time, while separate hardware controllers run independently of the motherboard and CPU. It would have been nice to see a comparison of motherboard-based controllers versus independent hardware-based controllers. In that case I think the RAID benchmarks would have been much different.
I know nobody wants it, but I'm going to put my 2 cents in.
This is a serious request for a lot of serious system builders and small businesses. And those of us that have grandmas. :-p
I would LOVE to see the test on dual channel memory repeated on an AMD platform with integrated video, where the video depends on system memory. I've built at least one system for a customer that had 1.5 gigs of RAM: 1 gig was dual channel and there was an extra 512 lying around, so I dumped it in; they all matched. My customer doesn't do a LOT of gaming, but I set the frame buffer up to the point where she could get good frame rates in simple 3D games, Google Earth and whatnot, and still retain a quick system when not using 3D applications. This is an area where I have not seen a lot of documentation (I may not be digging hard enough for it), but I think it is heavily dependent on your memory. My hypothesis is that single channel will lose MUCH more ground on a cost-effective AMD platform. But that's a hypothesis and can be proven wrong.
Why do I think it's worth running tests on?
It may not be a buzz-worthy topic, but I've noticed that most readers build systems not only for themselves. A good percentage of Tom's readers either have HTPCs which depend on integrated graphics, or build systems for family and friends which don't necessarily need an "enthusiast" system. The topic touches an area of system building that is sometimes overlooked. I would definitely have liked to see it go in that direction as well as toward gaming and whatnot.
Things in this article I liked that encompassed business solutions:
The small but adequate notes on how this would be used in server-side situations and whatnot. In my limited experience I have not had to deal with any type of server situation, but it is something useful that did not need to be included and yet was.
And finally, trying not to be critical, just inquiring: maybe if there is time... or space... apply this to some highly memory-dependent solutions. I'd do it myself but I REALLY don't have the money or resources. :-p Thanks for reading my 2 cents.
I second the previous post by nachowarrior.
Dual-channel memory is an issue when you are building a rig for friends and family. Often they want an upgrade, or a new build, and they always want it dirt cheap!
So dual-channel function on AMD IS a big issue for many people out there.
Thank you THG for doing this article though; no other hardware site has anything like it.
Update: I found a better place for further reading about memory and AMD CPUs' memory handling: