Some commands that come in handy for software RAID arrays.
To find out what your RAID array is doing issue:
cat /proc/mdstat
To find out the status of a particular device:
mdadm --detail /dev/mdX
To remove a drive from the array:
mdadm -r /dev/md0 /dev/sdc1
(this will remove partition sdc1 from the md0 array)
To add a drive back in to an array:
mdadm /dev/md0 -a /dev/sdc1
(this will add partition sdc1 in to the md0 array)
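One thing worth knowing: mdadm won't let you remove a member that is still active, so in practice you mark it faulty first. Here is a sketch of the full swap-a-drive sequence, using the same md0 and sdc1 as above. The DRY_RUN variable is my own convention for illustration; clear it to actually run the commands (as root).

```shell
#!/bin/sh
# Dry-run sketch of replacing a failed RAID member.
# DRY_RUN=echo just prints the commands; set DRY_RUN= to execute for real.
DRY_RUN=echo
ARRAY=/dev/md0
BAD=/dev/sdc1

$DRY_RUN mdadm "$ARRAY" --fail "$BAD"     # mark the member faulty first
$DRY_RUN mdadm "$ARRAY" --remove "$BAD"   # then pull it out of the array
$DRY_RUN mdadm "$ARRAY" --add "$BAD"      # re-add it (or add the replacement partition)
```

The long options --fail, --remove and --add are the same as the short -f, -r and -a used above.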
To watch an array as it rebuilds itself:
watch -n1 cat /proc/mdstat
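If you just want the rebuild percentage rather than the whole mdstat screen, you can pull it out with awk. The mdstat text below is a made-up sample of what a rebuilding RAID-1 looks like; on a real box you would pipe `cat /proc/mdstat` instead.

```shell
#!/bin/sh
# Sample /proc/mdstat contents during a rebuild (values are illustrative):
mdstat='md0 : active raid1 sdb1[1] sda1[0]
      976630336 blocks [2/2] [UU]
      [=>...................]  recovery =  8.5% (83123456/976630336) finish=74.2min speed=200618K/sec'

# Print just the completion percentage from the recovery/resync line:
printf '%s\n' "$mdstat" | awk '/recovery|resync/ {for (i=1;i<=NF;i++) if ($i ~ /%$/) print $i}'
```

On a live system: cat /proc/mdstat | awk '...' with the same program.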
The different levels of RAID are:
RAID-0: A “striped” mode. Ideally the devices are the same size. There is no redundancy, but you do gain performance (from parallel reads or writes). If you lose a drive you will lose data.
RAID-1: A mirrored RAID set. All data is written to all drives at once. You can lose a drive and not lose data. Write performance will be a little worse because you must wait until the write has finished on every drive. It is possible to saturate the PCI bus while writing, and this causes the biggest bottleneck (hardware RAID suffers less from this). Read performance can be better than a single drive. It is also possible to have a spare drive kick in immediately in the event of a drive failure. The array size is limited by the smallest disk available.
RAID-4: Requires 3 or more drives. It is essentially a RAID-0 array with an additional drive used to store parity information so that a failed drive can be reconstructed. The parity drive becomes the performance bottleneck. In addition, if the parity drive fails then redundancy is also lost.
RAID-5: Requires 3 or more drives. This is a very useful option as it combines the performance advantages of RAID-0 with the redundancy of RAID-1. In this case parity information is distributed across all drives. A RAID-5 array can lose one drive but not two. Actual performance gains will depend on usage scenarios, with heavily fragmented data faring poorly.
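The capacity trade-off between the levels is simple arithmetic. A quick sketch, assuming three 500 GB members (the sizes here are just illustrative):

```shell
#!/bin/sh
# Usable-capacity arithmetic for the RAID levels above.
drives=3
size=500   # GB per member drive (illustrative)

echo "RAID-0: $(( drives * size )) GB"         # pure striping, no redundancy
echo "RAID-1: $(( size )) GB"                  # every drive holds a full copy
echo "RAID-5: $(( (drives - 1) * size )) GB"   # one drive's worth goes to parity
```

So with three 500 GB drives you get 1500 GB at RAID-0, 500 GB at RAID-1, and 1000 GB at RAID-5.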
Setting up a RAID array on Linux is fairly easy and is definitely effective. I have even set this up on servers that have hardware RAID equipment because the hardware drivers were either flaky or not available.
Be aware that software RAID will steal some CPU cycles. I feel that most modern hardware has more than enough power to spare, but if you need every ounce of performance then hardware RAID is definitely the way to go. I can say that I haven’t noticed much of a performance hit with RAID running, and the benefit has always been worth it, but, as they say, your mileage may vary.
This is most easily accomplished with a recent kernel (at least 2.4, which covers almost all recent distros). The RAID tools are also usually installed, as is mdadm.