Well, as it turns out, I was prematurely ecstatic about the new open-source-based NAS server. While the hardware works great, I have not had much luck so far with setting up the software on top of it.

My original idea was to use FreeNAS: let it boot and install onto a USB drive, and use all 4 SATA disks in a RAID 5 set up by the BIOS. It did not work. FreeNAS does not see the RAID 5 volume created by the BIOS and keeps referring to 4 separate SATA drives - and so do Openfiler and a few other distributions. Also, after the USB key was initialized, the system did not boot and stopped with the error message 'No Ufs'.

Some research later, I found out that drivers for the chipset need to be installed in order for the hardware RAID to be recognized. In the process of searching I have learned more about RAIDs than I ever wanted to know :-) and found out that the "hardware" RAID I have is in reality a half-software solution that will not work without loaded drivers and help from the OS. Not much of a surprise - one cannot expect a $110 mainboard to be everything for everybody.
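From what I have read, the usual way to get this kind of half-software ("fake") RAID recognized under Linux is the dmraid tool - whether it actually supports my particular chipset is an assumption on my part, but checking is harmless:

    dmraid -r        # list any BIOS RAID sets dmraid recognizes (read-only check)
    dmraid -ay       # activate them; the volume then appears under /dev/mapper/
    ls /dev/mapper/  # the activated set should show up here

If dmraid does not know the format, I am back to hunting for vendor drivers.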

I did a few experiments using the 4 SATA disks as separate volumes with a software RAID 5 set up in FreeNAS. It worked OK, so as long as the booting problem gets resolved, this would almost be a workable solution. For now, while I am experimenting with the system, it boots from an additional 40GB IDE drive. The "almost" part is a bad surprise in FreeNAS's capabilities. It allows you to create users and even groups (dunno why), but the access control is all or nothing. For a volume, you can set the level of authentication required - anonymous, local user or domain - but you cannot define any restrictions on access. For example, you cannot have read-only access. This makes FreeNAS completely unsuitable for what I need - I must be able to export read-only shares. To do that, I will very likely have to use a normal Linux distribution (preferably one with a Web-based admin interface) and properly configure the servers and security. It should not be terribly hard; the trouble is that I know too little about all that Linux-harddisk stuff. On the other hand, it is a great learning opportunity.
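For comparison, on a normal Linux distribution a read-only export is just a few lines of Samba configuration in smb.conf - the share name, path and user below are made up, of course:

    [archive]
        path = /srv/archive
        read only = yes
        guest ok = no
        valid users = somebody

That is exactly the kind of knob the FreeNAS web GUI does not let me touch.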

As for RAID, there are two possible ways ahead. Option one is to get the BIOS RAID working; this would require finding the proper drivers for the Linux kernel version I will be using and learning how to add a driver during Linux installation. The other is to use the software RAID provided by several distributions - e.g. by Openfiler. It may not be as bad as it sounds, because software RAID inside a Linux distribution is exactly what the cheaper NAS devices are doing. It does not even have to mean that the performance will be much worse: the main reason these lower-end NAS devices are slow in RAID configurations is not enough CPU power and not enough memory - typically they have some ARM processor and 256 MB RAM. My box has a full Athlon 64 and 1 GB RAM, which is way more powerful.
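To get an idea of what that actually involves, this is roughly how a software RAID 5 over the four SATA disks would be created with mdadm - a sketch only, and the device names are my guess; they depend on which driver the kernel uses for the controller:

    # create the array from one partition per disk
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    cat /proc/mdstat                            # watch the initial build
    mkfs.ext3 /dev/md0                          # put a filesystem on the array
    mdadm --detail --scan >> /etc/mdadm.conf    # record it so it assembles at boot
                                                # (config file path varies by distro)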

The tricky part is how to divide the 4 disks into partitions so that I can place /boot and swap somewhere and keep the root partition on a RAID-ed disk. It can be tricky, because the partitions that participate in the RAID should have the same size, and you still need to place /boot, swap and the root partition somewhere. Because they are so important, it would be great if they could sit on the RAID, but of course it is a chicken-and-egg problem, because the RAID is created only after Linux boots.
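One workaround I keep running into for the chicken-and-egg part: /boot itself can apparently sit on a small RAID 1 mirror across all the disks, because each half of the mirror still looks like a plain filesystem to the bootloader - only the big RAID 5 has to wait for the kernel. GRUB then gets installed into the MBR of every disk, roughly like this (a sketch, assuming GRUB legacy and the hd* names from the layouts below):

    grub
    grub> device (hd0) /dev/hdb    # pretend the second disk is the boot disk
    grub> root (hd0,0)             # its first partition holds /boot
    grub> setup (hd0)              # write GRUB into that disk's MBR
    grub> quit

... and the same again for hdc and hdd, so that any surviving disk can boot the box.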

I see two options (and those of you who actually understand this stuff, feel free to correct me if I am completely wrong):

a) Keep the IDE drive (which will hold the MBR, /boot, swap and the root file system) for booting and the Linux installation, and create one partition per SATA disk, all combined into one large RAID 5. This way, all the space on the SATA drives is utilized. The IDE drive is a single point of failure, but if it fails, it should be quite easy to boot some LiveCD and reconfigure access to the data, because software RAID support is built into new kernels and should work the same regardless of distribution. Most of the IDE disk space will also be available - the Linux distribution will comfortably fit into 2-4 GB, and the rest of the 80 GB (the smallest disk you can buy) can be exported as quick, non-RAID working disk space (a staging area or temp).

b) Partition the SATA disks so that the boot, swap and system partitions are on the first disk. The size of the rest of that disk then determines the primary RAID volume size. Partitions on the other disks, equivalent in size to the system portion, are combined into a second RAID 5 volume. For example:

hda1 - 100 MB = boot, hda2 - 2 GB = swap, hda3 - 4 GB = system, hda4 - 3xx GB = space for RAID 5
hdb1 - 100 MB = (copy of boot), hdb2 - 6 GB = space for vol2 RAID, hdb3 - 3xx GB = space for RAID 5
hdc1 - 100 MB = (copy of boot), hdc2 - 6 GB = space for vol2 RAID, hdc3 - 3xx GB = space for RAID 5
hdd1 - 100 MB = (copy of boot), hdd2 - 6 GB = space for vol2 RAID, hdd3 - 3xx GB = space for RAID 5

After that, there will be two RAID 5 volumes: one created from hda4, hdb3, hdc3 and hdd3 - which all have the same capacity - and one created from hdb2, hdc2 and hdd2. The capacities of the volumes will be 3 x 3xx GB (about 900 GB) for the big one and about 12 GB for the smaller one. If any of the disks hdb, hdc or hdd fails, nothing happens, and after replacement the data will be rebuilt. If disk hda fails, the system must be started from a LiveCD and reinstalled on hda (with the exact same partitions and RAID table), and after booting the data will be rebuilt. Kind of complicated, but maybe doable.
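In mdadm terms, with the layout above, creating the two volumes would look roughly like this - again just a sketch, and the hd* device names may well come out differently depending on the driver:

    # the big ~900 GB volume from the four large partitions
    mdadm --create /dev/md1 --level=5 --raid-devices=4 \
          /dev/hda4 /dev/hdb3 /dev/hdc3 /dev/hdd3
    # the small ~12 GB volume from the three 6 GB partitions
    mdadm --create /dev/md2 --level=5 --raid-devices=3 \
          /dev/hdb2 /dev/hdc2 /dev/hdd2

Option a) is the same idea, only with a single mdadm --create over one whole-disk partition per SATA drive.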

There is always plan B, of course: stay with the BIOS RAID and use Windows 2003. It would have exactly the same driver issues as Linux did, but I know how to install this one (I have done it before, when we were setting up the development lab), and the machine is powerful enough to run it. What would be nice is that both the OS and the data would sit on the RAID-ed volume. What I do not like about the Windows idea is the necessity of using a GUI to do anything - Remote Desktop is pretty much the only practical way to administer the system. And I would not learn much new either, I think ...

Yep, more thinking and planning required. I will shelve the RAID project until next weekend. I have mixed feelings about all this. On one hand, it is great to be discovering new things and learning, but it takes so much time: after Yan built the box, we were trying to get it working until 3 AM ... Why can't things "just work" as in the Mac world? If budget were not a problem, here is the perfect RAID solution :-).