
Intel X25-M Gen2 - Raid 0 - What stripe size?

August 12, 2009 2:40:23 PM

I've got two Intel X25-M Gen 2 disks, 80 GB each, and I'm about to throw them in RAID 0, and the million-dollar question arises: what stripe size?

Any hints or pointers would be nice, because I really am somewhat puzzled by it. All the different info I can find through Google points in different directions.

Thanks in advance,

Ottosen
August 13, 2009 3:08:01 PM

Hmm, perhaps additional info will help. The disks will be used for Windows and for all my games. Probably the only other frequently used applications will be the Office pack. I'm not sure whether these programs usually do small or big random writes/reads, and how that influences the preferred stripe size. That's why I'm asking here :)
August 13, 2009 10:29:50 PM

Bump, while also mentioning that it'll run on an ICH10R.
August 14, 2009 12:50:36 AM

I have two X25-M 80 GB Gen 1 drives in RAID 0. My main objective was a single 160 GB image, not particularly performance. I used 64K as the stripe size. I also asked this question, and got no definitive answer. It seems to be working well. I suspect that stripe size does not make that much difference. I think that for an OS, a bit smaller, like 32K or 16K, might be better because lots of OS work is small random I/O. For loading apps, larger might be better. If you can experiment, post your results. Unfortunately, the synthetic tests are not much help. Your sense of speed is probably more valid.
August 14, 2009 1:03:34 AM

I'm planning to do just the same thing.

I came across a lot of stuff searching for RAID 0, and every now and then a review suggests a 128 KB stripe. It's not specific to Intel's SSDs, but it seems to be the magic number.

As you will use an ICH10R, may I ask you a question? Like you, I'm planning to throw 2 Intel X25-Ms in RAID 0. What happens if I get 3 of those in RAID 0, which in turn will top the 625 MB/s limit of the ICH10R? Does it stutter? Does it corrupt data?

Also be aware that putting those drives in RAID 0 won't really help the random reads/writes of small files (<=4KB); only the sequential reads/writes of larger files will be improved.

John
August 14, 2009 1:07:26 AM

I forgot to mention...

The main reason for using 128 KB instead of 64/32/16 or something else is that SSDs have to erase an entire block to write even a small 4 KB file. And guess the block size? 128 KB. If you use a smaller stripe, the SSD will rewrite the entire block anyway, so you might as well use the "native" size for the SSD. In fact, it's really the NAND flash memory block size used in SSDs, and it seems to be the fastest size because of this.
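To make the alignment argument concrete, here's a rough sketch in Python. It just counts how many erase blocks a write spans; the 128 KB erase-block figure is the one claimed above, so check your own drive's datasheet before relying on it.

```python
# Rough sketch: how many NAND erase blocks does a write touch?
# Assumes the 128 KB erase block claimed above (illustrative only).
ERASE_BLOCK = 128 * 1024  # bytes

def blocks_touched(offset, length, block=ERASE_BLOCK):
    """Number of erase blocks a write at `offset` of `length` bytes spans."""
    first = offset // block
    last = (offset + length - 1) // block
    return last - first + 1

# A 128 KB write aligned to a 128 KB stripe touches exactly one erase block:
print(blocks_touched(0, 128 * 1024))          # 1
# The same write starting mid-block straddles two, doubling the erase work:
print(blocks_touched(64 * 1024, 128 * 1024))  # 2
```

The point of the stripe-size argument is just the second case: a stripe that doesn't line up with the erase block forces extra read-erase-program cycles.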

John
August 14, 2009 1:20:54 AM

Using HDTach, I get 250-300 MB/s; this is with the ICH10R. So 3 drives should not be much of a problem with a 625 MB/s max. I understood the native block size of the SSD to be 64K, which was one reason I picked 64K. To date, I have not seen any good analysis or benchmarks of the various configurations and options for the X25-M in RAID 0.
August 14, 2009 2:31:07 AM

geofelt said:
Using HDTach, I get 250-300 MB/s; this is with the ICH10R. So 3 drives should not be much of a problem with a 625 MB/s max. I understood the native block size of the SSD to be 64K, which was one reason I picked 64K. To date, I have not seen any good analysis or benchmarks of the various configurations and options for the X25-M in RAID 0.


Now that's surprising; 250-300 MB/s is a little weird considering the read speed is supposed to be 250 MB/s for one X25-M. You would only be getting a mere 50 MB/s extra from putting them in RAID 0, and that doesn't seem right.

I might be wrong, I am no professional of RAID, but that doesn't match the results I've seen. As you said, there is to my knowledge no benchmark or good analysis of the various options for the X25-M in RAID 0. All I say is that I have read a lot of reviews of different SSDs, including Intel's, and the few that mentioned the stripe size for RAID 0 always used 128 KB.

Furthermore, some of the reviews even said they tried other stripe sizes and the best was always 128 KB. I wouldn't be able to pinpoint a specific review for you because I have read so many, but I can assure you that's what I read. Feel free to consult one of these sites; they are probably where I found the information: Anantech.com, BenchmarkReview.com, DriverHeaven.net, LegitReviews.com, PCGamesHardware.com, PCPer.com, TweakTown.com, InsideHW.com.

I even read somewhere that running the Intel drive in AHCI instead of IDE SATA mode would yield better results, up to 20% in sequential reads. I don't remember the other AHCI numbers, but that's something I'm going to test the day I get two of these drives.
August 14, 2009 2:54:48 AM

jonnyberthiaume said:
Now that's surprising; 250-300 MB/s is a little weird considering the read speed is supposed to be 250 MB/s for one X25-M. You would only be getting a mere 50 MB/s extra from putting them in RAID 0, and that doesn't seem right.

I might be wrong, I am no professional of RAID, but that doesn't match the results I've seen. As you said, there is to my knowledge no benchmark or good analysis of the various options for the X25-M in RAID 0. All I say is that I have read a lot of reviews of different SSDs, including Intel's, and the few that mentioned the stripe size for RAID 0 always used 128 KB.

Furthermore, some of the reviews even said they tried other stripe sizes and the best was always 128 KB. I wouldn't be able to pinpoint a specific review for you because I have read so many, but I can assure you that's what I read. Feel free to consult one of these sites; they are probably where I found the information: Anantech.com, BenchmarkReview.com, DriverHeaven.net, LegitReviews.com, PCGamesHardware.com, PCPer.com, TweakTown.com, InsideHW.com.

I even read somewhere that running the Intel drive in AHCI instead of IDE SATA mode would yield better results, up to 20% in sequential reads. I don't remember the other AHCI numbers, but that's something I'm going to test the day I get two of these drives.


The specs read UP TO 250 MB/s. I do see spikes higher (and lower), but the range is about 250-300. Also, this is with a reasonably well-used pair of drives. The Intel firmware update has been applied. That still makes it twice the VelociRaptor in data transfer rate. Also remember that not everything you do will be data transfer. Much of the OS work is small random updates. Tuning for sequential transfer may not be the overall best thing.

On my P6T motherboard BIOS, AHCI seems to be a subset of RAID. So for RAID 0, you will get AHCI.

Please do post the results of your tests on this thread. If there is enough advantage, I will find a way to change my stripes.
August 14, 2009 7:44:49 AM

geofelt said:
Using HDTach, I get 250-300 MB/s; this is with the ICH10R. So 3 drives should not be much of a problem with a 625 MB/s max. I understood the native block size of the SSD to be 64K, which was one reason I picked 64K. To date, I have not seen any good analysis or benchmarks of the various configurations and options for the X25-M in RAID 0.


I think you've forgotten to enable write caching, mate. You need to install the Intel Matrix Storage Manager and enable it there. It should give you a huge boost in performance.

jonnyberthiaume:

You could throw in a third disk if you've got the cash; however, I've come to understand that the ICH10R chip is best left with just 2 disks. This might be because of its lack of onboard cache. I saw some testing, however, where a guy used 2 disks on a rather expensive RAID controller and came out with worse results. Others backed this up, saying that the ICH10R is actually somewhat superior if you're just doing a 2-disk home desktop setup. With write caching enabled, the lack of cache in the chip is made up for.

Regarding the stripe size, I too heard that 128 KB is a good number. I thought the ICH10R's max stripe size was 64 KB, though?
I haven't gotten my new system up and running yet, so I don't know from personal experience yet :) 

Another idea I heard, not sure it's possible with ICH10R, was to make a smaller part of the disk, say 4 GB, run with a stripe size of 4 KB for the page file, and then the rest at your preferred stripe size. Supposedly the page file is THE most frequent sender of small random reads/writes.
August 14, 2009 1:28:54 PM

ottosen said:
I think you've forgotten to enable write caching, mate. You need to install the Intel Matrix Storage Manager and enable it there. It should give you a huge boost in performance.

jonnyberthiaume:

You could throw in a third disk if you've got the cash; however, I've come to understand that the ICH10R chip is best left with just 2 disks. This might be because of its lack of onboard cache. I saw some testing, however, where a guy used 2 disks on a rather expensive RAID controller and came out with worse results. Others backed this up, saying that the ICH10R is actually somewhat superior if you're just doing a 2-disk home desktop setup. With write caching enabled, the lack of cache in the chip is made up for.

Regarding the stripe size, I too heard that 128 KB is a good number. I thought the ICH10R's max stripe size was 64 KB, though?
I haven't gotten my new system up and running yet, so I don't know from personal experience yet :)

Another idea I heard, not sure it's possible with ICH10R, was to make a smaller part of the disk, say 4 GB, run with a stripe size of 4 KB for the page file, and then the rest at your preferred stripe size. Supposedly the page file is THE most frequent sender of small random reads/writes.


I'm not sure about what you say about the third disk. I might be mistaken, but I think you misinterpret the "better with 2 disks" finding. I see it more the way of SLI/Crossfire: having 2 cards gives you an 80% bump in performance, a third card gives another 40%, a fourth 10%... I see it as diminishing returns. The reason behind that is that with 2 drives, the controller that divides the work between the disks only has 2 disks to manage. When you add a third disk, the task takes a little longer, and this causes a smaller performance boost than with only 2 disks.

A RAID 0 with 3 disks will always perform better than one with only 2 disks if the controller can cope with the task. The ICH10R chipset has a roughly 625 MB/s limit seen in a lot of reviews of SSDs in RAID 0, so if the chipset can go that fast, I don't see why the 3-disk RAID would not perform better than the 2-disk one. Yes, the performance boost from the third disk will be smaller than the one added by the second disk, but there will still be one. The X25-M is rated at 250 MB/s sequential read; 2 disks only get me at most 475 MB/s (that's only a supposition, no hard evidence at hand, I might be wrong). Adding a third disk will hit the 625 MB/s limit of the chipset, and that bottleneck will limit the performance boost. But if you look at it the other way, the sequential writes will still be improved considerably, which reduces the only drawback of the X25-M...
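As a back-of-envelope sketch of that scaling argument in Python (using the 250 MB/s per-drive and 625 MB/s chipset figures quoted in this thread; real-world throughput will differ):

```python
# Ideal RAID 0 sequential read, clamped at the chipset ceiling.
# Figures are the ones quoted in this thread, not measurements.
PER_DRIVE = 250    # MB/s sequential read, per the X25-M spec sheet
CHIPSET_CAP = 625  # MB/s, approximate ICH10R limit cited above

def raid0_read(n_drives, per_drive=PER_DRIVE, cap=CHIPSET_CAP):
    """Best-case array read speed: linear scaling until the chipset caps it."""
    return min(n_drives * per_drive, cap)

for n in (1, 2, 3):
    print(n, raid0_read(n), "MB/s")  # 1→250, 2→500, 3→625 (capped)
```

So in this simple model the third drive still helps (500 → 625 MB/s), just much less than the second one did, which is the diminishing-returns point being made above.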

As for your 4 GB "partition" with 4 KB stripes, first I must say I'm no expert and my understanding of all this might be completely mistaken. Even if I don't think you can tell Windows to use a particular drive area for a specific task like its page file, I'm not sure it would help, since even if you manage to allocate that "partition" for the page file, NAND flash technology uses 128 KB blocks anyway. So to write 4 KB to a 128 KB block of NAND flash, you first have to erase the entire 128 KB block, then write the entire 128 KB block back. So the fact that you use only 4 KB doesn't change anything at all... As I said, I might be completely mistaken, but I think the logic backs me up here.

As for the page file, I'm with you there; I think it's THE most frequent source of small random reads/writes.

Give me some feedback on this; I think we've got an interesting thread here.
August 14, 2009 2:23:55 PM

Well you're definitely more of an expert than me, so what I'm saying is mostly just personal experience and what I've recently read.

As for making the page file use a specific partition: yes, it's easily possible.
Right-click My Computer -> Properties -> Advanced System Settings -> Performance Settings -> Advanced tab -> "Change" button.

I suppose you're right, and I guess one could sort of say that the ICH10R's hard cap is 3 disks then, although writes would still be improved.

About the whole situation with the NAND blocks being 128KB anyway, and whether you use a stripe size of 4KB or 128KB makes no difference - I'm not so sure.

Theoretically you're right; I've heard what you state in other places as well. However, the only thing I can think of, and the most obvious question: doesn't TRIM change that?
August 14, 2009 3:00:24 PM

ottosen said:
Theoretically you're right; I've heard what you state in other places as well. However, the only thing I can think of, and the most obvious question: doesn't TRIM change that?


All TRIM changes is that unused blocks will be erased at some point; I don't know exactly when, but I suppose it's when the disk is idle.

Here is the classic example: you create a new file. The drive has 2 choices: it can write to an unused block, or write to a used block which still has space for it. I don't know the drive's algorithm, but I can guess that the fastest way to write the file is to use an unused block. If it goes that way all day long, the drive will finally get to a point where there are no unused blocks anymore. The only choice left for the disk is the rewrite operation, using a used block instead, and that takes much more time than just writing to an unused block.

Then TRIM enters the scene. TRIM knows that block #7843378934789, whatever, has been "erased" in the operating system. When you delete a file in the operating system, the file isn't erased on the disk; only the pointer to the location of the file is reset. Because HDDs were slow and didn't need to erase the block (cluster) first, this was perfect for performance. Returning to TRIM: TRIM pre-erases block #784..., whatever, so the drive won't have to erase it when it needs to write that block again. The gain in speed is that you now have an unused block known to the disk, and when you create a new file, the unused block will be used and the write will be faster than using a used block, hence the "revert" to original speed.

Keep in mind that if every block is used, even if each is only 1% full, the speed will be slower; that's why reviews always test the speed of SSDs after every block of the drive has been written to, so you can see the "real" (worst-case scenario) performance of the drive.

So all TRIM brings is that it cleans the unused blocks when it can, and whenever you need to write, the performance is back to 100%. Same as with the dishes: if you have to wash a plate every time you need to eat, it takes much longer than if you already had a clean plate. TRIM is like the dishwasher: it does its job when you're not there, and when you come back, everything is clean and ready to use. If you don't have a dishwasher, you can dream about it :)
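To make the dishwasher analogy concrete, here's a toy Python model. It's entirely illustrative: the block count and the 1-vs-2 operation costs are made up, and real drives are far more complicated than this.

```python
# Toy model: writing to a pre-erased block costs 1 operation;
# reusing a dirty block costs 2 (erase + program). Numbers are made up.
class ToySSD:
    def __init__(self, blocks, trim_enabled):
        self.erased = set(range(blocks))  # all blocks start clean
        self.dirty = set()
        self.trim_enabled = trim_enabled

    def write(self):
        if self.erased:                   # fast path: clean block available
            return self.erased.pop(), 1
        return self.dirty.pop(), 2        # slow path: erase-then-write

    def delete(self, block):
        # Without TRIM the drive never learns the block is free;
        # with TRIM it can pre-erase it during idle time.
        (self.erased if self.trim_enabled else self.dirty).add(block)

for trim in (False, True):
    ssd = ToySSD(blocks=4, trim_enabled=trim)
    total = 0
    for _ in range(8):                    # write a block, delete it, repeat
        block, cost = ssd.write()
        total += cost
        ssd.delete(block)
    print("TRIM" if trim else "no TRIM", total)  # no TRIM: 12, TRIM: 8
```

With TRIM every write stays on the fast path; without it, once the clean pool drains, every write pays the erase penalty, which is exactly the "worst-case scenario" the reviews measure.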
August 14, 2009 3:05:17 PM

Well, I know how it works :p but I am fairly sure that TRIM operates while the user is active.

Read this, it's interesting: http://blogs.msdn.com/e7/archive/2009/05/05/support-and...

Quote: "Windows 7 requests the Trim operation for more than just file delete operations. The Trim operation is fully integrated with partition- and volume-level commands like Format and Delete, with file system commands relating to truncate and compression, and with the System Restore (aka Volume Snapshot) feature."
August 14, 2009 4:53:12 PM

Sorry for over-simplifying; sometimes I forget that some people know about computers... You said I seemed more of an expert than you, and I thought it would be easier to explain it that way, but I can see that you are as good as me on the subject. I'm really no expert; I simply try to understand it all so I can make the best decisions when I build my next computer.

As for when TRIM operates: when I say "idle", it doesn't necessarily mean that the user isn't there, simply that they haven't done anything in the past milliseconds... Still, I didn't think the command would be issued so many times, and so quickly, by Windows. Thanks for pointing this out.

Really good article; it sheds some light on the subject for sure. It actually makes sense that TRIM has been integrated with all those tools.

Still, even after reading the article, I maintain that the best stripe size seems to be 128 KB. All the article says is that the pagefile should be on the SSD because of its random read/write speed. Even if an HDD can have a 4 KB cluster, the NAND seems to have only a 128 KB block, and thus you can't manipulate less than that anyway. So the logical stripe size would be 128 KB. I don't think TRIM can change much of that; it can clean the block, but it can't change its size...
August 14, 2009 6:32:51 PM

Nope, I plan on trying the 128 KB stripe size too, and I won't bother too much with a separate partition for the page file.

I'm more curious as to why so many find it an advantage to disable the page file when you have enough RAM. I'm sure I do, but I don't see the benefit.

Another thing... I might make another topic for this question: I have 2x GTX 285 in SLI, and I'm wondering what I should do about PhysX processing. CPU, GPU, or can I make it run on just one of the cards? Unless that disables the rest of what that card has to offer :)
August 14, 2009 8:17:57 PM

ottosen said:
Nope, I plan on trying the 128 KB stripe size too, and I won't bother too much with a separate partition for the page file.

I'm more curious as to why so many find it an advantage to disable the page file when you have enough RAM. I'm sure I do, but I don't see the benefit.

I wouldn't make a partition for the page file; it would almost kill the partition instantly. The reason they use wear leveling is to distribute the write cycles across the entire drive so the impact isn't so bad. Imagine putting every write in the same 4 GB space; that would use up the total write cycles in no time, "killing" your OS when it has no space left to write...

As for using RAM and disabling the page file, well, I honestly don't know about this subject. I would be careful, though: having more than 4 GB of RAM has been possible since XP 64 and I've never heard of this, so I'm not sure it's totally reliable. If it were that reliable, why wouldn't Vista or Win 7 do it by default? It would be so much faster, and they don't do it by default. There must be a reason...

I don't know anything about PhysX so I can't help you on this one
August 14, 2009 9:24:54 PM

Actually, it wouldn't kill the partition instantly. The way wear leveling works means that LBAs do not correspond directly to physical locations in the flash. Even though the OS might think that the page file partition would be stationary, in reality, it would be constantly moving over the drive, with new writes occurring in different locations from old writes. It would not be restricted from doing this in any way by the partitioning.
August 14, 2009 9:34:20 PM

jonnyberthiaume said:
I wouldn't make a partition for the page file, it would almost kill the partition instantly. The reason they use wear leveling is to distribute the write cycles among the entire drive so the impact won't be so bad.
Partitioning the drive has no impact on wear leveling. The SSD has no knowledge of partitions or the file structure, it only deals with logical block numbers. It will allocate new writes to different blocks no matter what the LBNs are - that's the whole point of it.
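A minimal sketch of that remapping idea in Python. This is a made-up round-robin allocator, not Intel's actual wear-leveling algorithm; it just shows why rewriting the same logical block doesn't hammer one physical location.

```python
# Toy flash translation layer (FTL): the drive keeps a logical->physical
# map, and every write of a logical block lands on a fresh physical block.
# Purely illustrative; real FTLs also garbage-collect and track wear.
from itertools import count

class ToyFTL:
    def __init__(self):
        self.mapping = {}             # logical block number -> physical block
        self.next_physical = count()  # naive "always use a fresh block" allocator

    def write(self, lbn):
        self.mapping[lbn] = next(self.next_physical)
        return self.mapping[lbn]

ftl = ToyFTL()
# The OS rewrites the same "pagefile" logical block over and over...
locations = [ftl.write(lbn=42) for _ in range(5)]
print(locations)  # ...but each rewrite lands on a different physical block
```

Since the SSD only ever sees logical block numbers, a 4 GB pagefile partition is just a range of LBNs; the physical wear still spreads across the whole device.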
August 16, 2009 5:10:42 AM

Thanks for the clarification, guys; I didn't think it would react like this.

Could you then tell ottosen and me if this would yield any benefits at all?

Quote from Ottosen:

"Another idea I heard, not sure it's possible with ICH10R, was to make a smaller, say 4GB, part of the disk run with a stripe size of 4kb for the page file, and then the rest at your prefered stripe size. Supposedly the page file is THE most frequent with sending small random reads/writes"

I answered that I doubt this would be useful, since NAND technology seems to use 128 KB blocks and thus it would write 128 KB blocks anyway. Since you both say the drive doesn't care how the partitions are laid out on the SSD, that would mean I'm right. Right?

If you now consider the OS level, does the OS benefit from the 4 GB with 4 KB stripes? If the OS does care about the cluster size, it should be optimized for 4 KB operations, and thus this optimization should help even if the drive itself doesn't. Am I right or completely mistaken?
August 16, 2009 7:49:39 AM

I'd like an answer, but I think it's a tough question :) 
August 16, 2009 8:16:39 AM

geofelt said:
Using HDTach, I get 250-300 MB/s; this is with the ICH10R. So 3 drives should not be much of a problem with a 625 MB/s max. I understood the native block size of the SSD to be 64K, which was one reason I picked 64K. To date, I have not seen any good analysis or benchmarks of the various configurations and options for the X25-M in RAID 0.



Seems really slow, unless the Gen 1 drives are rated at 150 MB/s each.
August 16, 2009 9:45:05 AM

jonnyberthiaume said:
....make a smaller, say 4GB, part of the disk run with a stripe size of 4kb for the page file, and then the rest at your prefered stripe size...
ANY pagefile activity is bad. For my money, I'd put the time & effort into adding more RAM to the system to eliminate pagefile use altogether rather than splitting hairs over how best to optimize its performance.

That having been said, I doubt that the stripe size would make much if any difference to the kind of small random I/Os that you see with the page file. Remember that a 128 KByte stripe size does NOT mean that each I/O takes 128K bytes; it just means that the first 128K bytes are on drive 1, the next group of 128K is on drive 2, the third group is back on drive 1 again, and so on. A 4 KByte I/O will still be a 4 KByte I/O whether the stripe size is 4 KBytes or 128 KBytes (as long as the I/O doesn't cross a stripe boundary).
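That layout rule can be sketched in a few lines of Python (a simplified two-drive RAID 0 model; the function name is mine):

```python
# Which drive serves an I/O at a given byte offset in a RAID 0 array?
# The stripe size only decides the layout; it never inflates the I/O size.
def drive_for_offset(offset, stripe_size, n_drives=2):
    """Drive index (0-based) holding the byte at `offset`."""
    return (offset // stripe_size) % n_drives

# A 4 KB read at offset 20 KB with a 128 KB stripe lands on drive 0...
print(drive_for_offset(20 * 1024, 128 * 1024))  # 0 -- and it's still a 4 KB I/O
# ...while the same read with a 4 KB stripe happens to land on drive 1:
print(drive_for_offset(20 * 1024, 4 * 1024))    # 1
```

Either way the drive does one small read; only sequential transfers big enough to span several stripes get split across both drives and go faster.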

One interesting thing I read was an Intel engineer's take on using an SSD for the pagefile in the first place. My inclination would be to avoid putting the pagefile on an SSD because I'd have thought it to be write-intensive, and SSDs aren't the best place for such files. But the engineer claimed that their measurements showed a 40:1 read-to-write ratio for a typical pagefile. I would never have guessed that it was that read-intensive.

Nonetheless, if you're spending money to get the performance of an SSD, you'd be foolish not to also get enough RAM to make the performance of the pagefile a moot point.
August 16, 2009 10:41:40 AM

sminlal said:


That having been said, I doubt that the stripe size would make much if any difference to the kind of small random I/Os that you see with the page file. Remember that a 128 KByte stripe size does NOT mean that each I/O takes 128K bytes; it just means that the first 128K bytes are on drive 1, the next group of 128K is on drive 2, the third group is back on drive 1 again, and so on. A 4 KByte I/O will still be a 4 KByte I/O whether the stripe size is 4 KBytes or 128 KBytes (as long as the I/O doesn't cross a stripe boundary).


That made it really obvious. Well put.

Also, how much RAM do you figure that would require? I'm currently aiming for 6 GB, but maybe it's not enough?

I also heard that disabling the page file will cause a lot of errors. Can you confirm whether that's true? They argue that some programs automatically look for the page file and fault if none is there.
August 16, 2009 11:13:11 AM

I did have some problems when I tried to disable the page file on mine. One thing you could do is put the page file on a different (non-ssd) drive though - it'll be a bit slower, but you won't wear out the SSD as quickly. As for RAM, I would say 6GB is more than enough for most users.
August 16, 2009 11:14:33 AM

ottosen said:
That made it really obvious. Well put.

Also, how much RAM do you figure that would require? I'm currently aiming for 6 GB, but maybe it's not enough?

I also heard that disabling the page file will cause a lot of errors. Can you verify that that's not true maybe? They argue that some programs will automatically look for the page file, and fault if none is there.


On computers with 4 GB of RAM or more that don't run any professional applications I know require even more physical RAM, I've disabled the pagefile completely. (A few of the Adobe suite applications I use on my workstation are an exception.) On my server it's disabled.

3x2GB (6GB) is typically the most cost-effective combo in terms of $/GB for an LGA1366 rig. If you need more than that, 2x4GB (8GB) is the next step.


Regarding stripe size, 64K is the minimum you'll want to use; anything less is usually counterproductive with modern controllers. Different controllers prefer different stripe sizes as their optimum.
If possible, experiment with different sizes, then run PCMark Vantage (HDD tests) and Intel IOMeter 4K random read/write to gauge the speed difference.
August 16, 2009 9:39:58 PM

ottosen said:
I also heard that disabling the page file will cause a lot of errors. Can you verify that that's not true maybe?
Disabling the page file won't cause any problems for the operating system, as long as you have enough physical RAM to fit everything into memory. The only downside from the OS point of view is that it won't be able to save diagnostic information if it crashes - but that's of little use for most home users anyway.

Programs shouldn't depend on the pagefile, but some do. I run Adobe Photoshop V6, for example, and it refuses to start without a pagefile. I think it's trying to find the pagefile so that it can decide where to put its work files. I've got no problem with the concept, but it's stupid for it to refuse to work altogether. I'm really hoping that CS4 can run without a pagefile, as I'd quite like to eliminate mine.

The thing to do is to disable the pagefile and then try all your applications out to see if there's a problem. If you find something that needs the pagefile, you can create a little runt of a pagefile on whatever disk seems the least busy.

I think 6GB of RAM is a pretty reasonable starting point. I'd try to outfit the system with some room for expansion - for example if you're buying a motherboard with six DIMM sockets, buy a kit of 3 x 2GB DIMM modules rather than 6 x 1GB - that will leave you with some empty slots to expand into if you need to.
August 16, 2009 9:58:00 PM

Yeah, I bought a 3x2GB kit, so there's room for expansion. When you're saying I should put a bit of pagefile on the disk least used, it makes me wonder. I'll be having the SSD disk(s) and then a server sort of disk, with all my large files and pictures etc. on it. I'm not sure which you'd prefer? The big storage disk would be the least busy, but it's just an old 500GB Samsung disk, so it's slow.

I do have a spare raptor disk though..I actually got a whole spare computer, running on a core2duo with a gf 9800gtx+ that I don't know what to do with :D 

Can I use the GPU for PhysX in my new computer? :p 
August 17, 2009 12:11:00 AM

ottosen said:
When you're saying I should put a bit of pagefile on the disk least used, it makes me wonder.
Well, by my plan you'd have enough RAM that the pagefile wouldn't actually be used anyway, so as far as performance goes it doesn't really matter where you put it. The only reason you'd have one at all is if some silly application like Photoshop V6 complains when it's not there. Putting it on the SSD certainly wouldn't hurt, other than using up space on a drive where space is at a bit of a premium. I wouldn't bother putting in an extra drive just for the pagefile, though (again, based on the assumption that it's not going to get used).

Sorry, can't answer your GPU question...
August 17, 2009 10:20:10 AM

Quote:
I run Adobe Photoshop V6, for example, and it refuses to start without a pagefile. I think it's trying to find the pagefile so that it can decide where to put its work files. I've got no problem with the concept, but it's stupid for it to refuse to work altogether. I'm really hoping that CS4 can run without a pagefile, as I'd quite like to eliminate mine.

CS4 still requires the presence of a pagefile, unfortunately. I regularly work with ~250 MB TIFF files, and 8 GB of RAM seems to hold up well (6 GB is about the max I've seen used).
I almost never hear/feel any pagefile being used anyway, so I don't bother disabling it on my other computers except for the server.

Quote:
Can I use the GPU for PhysX in my new computer?

You can with a 9800GTX+, but you'll experience heavy frame-rate drops in most PhysX-enabled games. See this. PhysX is really just a gimmick feature and not really usable without another dedicated card (a low-end card that's PhysX-capable works).
August 17, 2009 11:03:12 AM

Well, the 9800GTX+ would be my low-end card to run PhysX, while my 2x GTX 285s handle the main rendering.

As far as I can see from that review, that would be an optimal setup really: GTX 285 + GTX 285 + 9800GTX+, and then just make the 9800GTX+ handle PhysX. I guess you set that in the drivers or so!
August 17, 2009 6:41:09 PM

wuzy said:
CS4 still requires the presence of a pagefile, unfortunately.
Ah, thanks for that info - it's something I've been wondering about...
September 21, 2009 10:07:48 AM

I tried 16K and 128K. Results were a bit mixed, but not much difference in the benchmarks. 16K was a bit faster on larger files and 128K faster with 4K files (?!?). I have in RAID 0 2x Intel SSD Gen 2 and 2x G.Skill Titan. On large files the Intels gave me 509 MB/s and the Titans gave me 339 MB/s.