So ages ago in internet time (or nine months ago in real time), I set up a brand new machine, specifically for development with VS2005 and ASP.NET (because, well, it takes a dual-core 2.66GHz, 4GB RAM beast to be able to write a text file without your keystrokes backing up, but that’s another story).
I’d set up a RAID 0 (that’s striped), mainly for the added performance, which was pretty staggering. My intention was to pick up a couple more drives later and round it out to a RAID 0+1 (which I have in my server, protecting all my actual source, etc).
Anyway, a drive died, quite prematurely, and my machine is thus rendered a doorstop.
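Quick aside on why RAID 0 is both that fast and that fragile: blocks get striped round-robin across the drives, so sequential I/O hits all the drives in parallel, but every drive also holds a slice of every file. Here’s a toy sketch of the mapping (the 64KB stripe size is just a common default, not necessarily what my controller used):

```python
# Illustrative only: how RAID 0 maps logical blocks onto drives.
STRIPE_SIZE = 64 * 1024  # bytes; a common default, assumed here

def locate(logical_offset: int, num_drives: int) -> tuple[int, int]:
    """Map a logical byte offset to (drive index, offset on that drive)."""
    stripe = logical_offset // STRIPE_SIZE
    drive = stripe % num_drives               # stripes rotate across drives
    offset_on_drive = ((stripe // num_drives) * STRIPE_SIZE
                       + logical_offset % STRIPE_SIZE)
    return drive, offset_on_drive

# A 1MB sequential read on a 2-drive stripe ping-pongs between drives,
# which is where the speed comes from...
for off in range(0, 1024 * 1024, STRIPE_SIZE):
    print(f"offset {off:>7} -> drive {locate(off, 2)[0]}")
# ...and since each drive holds only half the stripes, losing either
# one loses the whole array. Hence: doorstop.
```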
I sent off the RMA to replace the dead drive (still under warranty), and picked up 2 more to round the RAID out.
Then I discovered that the Gigabyte GA-965-DS3, otherwise a totally bad-ass mobo, doesn’t actually do RAID 0+1. It only supports a two-drive array, either RAID 0 or RAID 1. Doh! Tip to the manual-writing guy: don’t say things like…
“Before you begin
Please prepare
1) At least two SATA Harddrives…If you do not want to create a RAID, you can only prepare one harddrive.
…”
That is a quote directly from the manual.
In my book, the implication there is that you could use more than two hard drives in the RAID. Unfortunately, such is not the case.
So I went off to the local computer store and picked up a PCIe 4-port SATA II RAID card, thinking I’d slap it in and be done.
Nay, spoketh St. Ignacious PCMCIA of the holy hardware. That card is PCIe x4, and my Gigabyte motherboard only has 3 completely unused PCIe x1 slots (an x4 card won’t physically seat in an x1 slot).
Back to the computer store.
I could get a PCI SATA II RAID card, but SATA II drives can pump 3Gbps (roughly 300MBps after encoding overhead), whereas a PCI slot can only manage about 266MBps, shared across the whole bus. Doesn’t seem right to build up a smokin’ machine, then force it to breathe through a pixie stick.
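The back-of-the-envelope math behind that complaint (using my rough figures above, so treat the exact numbers as approximations):

```python
# Why a PCI RAID card chokes a SATA II stripe, roughly.
# SATA II signals at 3 Gbps but uses 8b/10b encoding, so the usable
# payload rate is about 8/10 of the line rate.
sata2_line_rate_bps = 3e9
sata2_payload_MBps = sata2_line_rate_bps * (8 / 10) / 8 / 1e6  # ~300 MB/s per port

# The ~266 MB/s PCI figure is the budget for the *entire* bus,
# shared by every device on it, not a per-slot number.
pci_bus_MBps = 266

print(f"one SATA II port: ~{sata2_payload_MBps:.0f} MB/s")
print(f"whole PCI bus:    ~{pci_bus_MBps} MB/s, shared")
# Stripe two or three drives behind that card and the bus, not the
# drives, becomes the ceiling.
```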
Ok, new mobo time.
Looks like the Gigabyte GA-965-DQ6 has all the bells and whistles: the right combination of RAID, PCI slots, ATI CrossFire compatibility, USB and FireWire, and even a serial port on the backplane, just right for hacking X10.
Note to self: don’t bother pre-setting up a hard drive that you might want to mirror in a RAID later. Both the Gigabyte built-in RAID and Windows’ software RAID (via “dynamic disks”) want to obliterate any disk they RAIDify.
The nicer dedicated RAID cards (like the Promise FastTrak), I believe, can handle that. Hell, one I was reading specs on seemed to imply it could convert, on the fly, between RAID 1, 0, 5, or 0+1, provided you have enough free space. Now that’s a good time!
2 Comments
Congrats on getting your RAID situation under control. Please report on what the performance is like when you get the third drive striped in. Also, be aware that 10,000 RPM drives use more juice than their older 7200 RPM brethren, so if you’re packing your case with drives you might want to evaluate the power supply. Anything less than a 1000W supply nowadays (especially with a dual core, ESPECIALLY with all that RAM) will be taxed. I speak from experience: my system glitched randomly and hourly, until I moved my RAID to an externally powered enclosure. Now it purrs 24/7 without a reboot.
Re: Performance
I’m using an older Adaptec SATA controller plugged into a PCI slot, and while you’re right — I’m getting nowhere near the giga-throughput I should — I employ a secret weapon, with a really dumb name: SuperSpeed SuperCache-II.
I laugh every time I type that. God, what were they thinking? But bear with me; it’s good stuff.
It’s a smarter replacement for Windows’ own disk-caching algorithm, with *delayed write* capability. Allocate a gigabyte or two of RAM to this bad boy and it delivers RAM-disk performance. It’s eerie as heck watching a compile happen with no disk activity. Completely eliminates the bottleneck. Also stupidly simple to configure; set and forget.
Of course, there’s a price, beyond the $79 licensing fee… get past the truly ridiculous name (are 10-year-olds responsible for this?) and you’ll soon discover you MUST have a rock-stable Windows platform to benefit, or every twitch and GPF will cause you to turn hulk-green with frustration as unwritten data whiffs away in a cloud of discharged electrons. HULK SMASH BAD DISKY THINGY!
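For the curious, the idea behind delayed-write caching boils down to something like this toy sketch (purely illustrative; this shows the concept, not SuperCache’s actual internals, and the 512-byte block size and flush delay are made-up numbers):

```python
import time

# Toy write-back ("delayed write") cache: writes land in RAM and get
# flushed to disk later. Fast -- and exactly why a crash before the
# flush vaporizes your data.
class WriteBackCache:
    def __init__(self, backing_file: str, flush_after: float = 5.0):
        self.backing_file = backing_file
        self.flush_after = flush_after          # seconds to sit on dirty data
        self.dirty: dict[int, bytes] = {}       # block number -> pending bytes
        self.last_write = time.monotonic()

    def write(self, block: int, data: bytes) -> None:
        # Returns instantly -- this is the "RAM-disk performance" part.
        self.dirty[block] = data
        self.last_write = time.monotonic()

    def maybe_flush(self) -> None:
        # Until this runs, a crash or GPF loses everything in self.dirty.
        if self.dirty and time.monotonic() - self.last_write >= self.flush_after:
            with open(self.backing_file, "r+b") as f:
                for block, data in sorted(self.dirty.items()):
                    f.seek(block * 512)         # assumed 512-byte blocks
                    f.write(data)
            self.dirty.clear()
```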
Luckily I am blessed with a stable system (knock, knock) and one heckuva UPS, and feel confident enough to recommend SuperSpeed SuperCache II WhizzyFast SuperDuper Cacharoonie MegaRad Diskerator E733T to all my developer buds.
Check it out. And what it does for servers is a thing of beauty.
Thanks for the info.
I’ve got an Antec TruePower 2.0 550 running everything and the machine itself has been rock solid ever since I built it (with the exception of the one HD shuffling off to click click land).
Even with the 2 added drives, I’m not noticing any issues. I tried a few test renders out of Apophysis at a high res (now THAT can tax a machine) and had no problems.
I’m staying away from the 10,000 RPM drives right now because
1) they’re $$$
2) well, they’re $$$, what can I say.
From what I’m seeing in my server though, a four-drive RAID 0+1 results in some pretty smokin’ throughput, and that’s using a PCI PATA RAID card. This machine is direct-to-motherboard SATA II RAID.
I’ll have to post some disk speed screenshots when I get the new mobo, to compare.
I do like the idea of the super mega ultra maxi cache, though. Write caching is always a bit scary, but I’ve got a pretty hefty UPS on it, so it likely wouldn’t be that bad. What would be oh so sweet is a full-on solid-state HD (http://news.digitaltrends.com/news/story/12556/samsung_announces_64_gb_solid_state_drive)
I remember way back when, seeing a full-height (remember those?) solid-state HD that contained a little battery backup and just racks of SIMMs. Performance was astonishing.