

Obtain the latest installation media and boot the Arch Linux installer as outlined in Getting and installing Arch. The installer supports both RAID (i.e. raid0, raid1, raid5, raid6, raid10) and LVM. The following example makes use of RAID1 and RAID5. Each hard drive will have a 200 MiB /boot partition, a 2048 MiB swap partition, and a / partition that takes up the remainder of the disk. The boot partition must be RAID1, i.e. it cannot be striped (RAID0) or RAID5, RAID6, etc. Any other level will prevent your system from booting, because GRUB Legacy does not have RAID drivers. If you would like to use one of the older boot loaders (GRUB Legacy, LILO), make sure to add the option --metadata=0.90 to the /boot array during RAID installation.
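As a concrete illustration of the layout above, the arrays could be created roughly as follows. This is a minimal sketch, not a paste-ready recipe: it assumes three disks (/dev/sda, /dev/sdb, /dev/sdc) already partitioned as described, and the device names, disk count, and array numbering are all assumptions.

```shell
# Illustrative only: mdadm --create destroys any data on the listed partitions.

# /boot (sdX1, 200 MiB) as RAID1 with 1.0 metadata, which Syslinux can boot from:
mdadm --create /dev/md0 --level=1 --raid-devices=3 --metadata=1.0 \
    /dev/sda1 /dev/sdb1 /dev/sdc1

# / (sdX3, the remainder of each disk) as RAID5:
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3

# Swap (sdX2) can simply use the distinct partitions with equal priority;
# the kernel stripes across equal-priority swap devices by itself:
for p in /dev/sda2 /dev/sdb2 /dev/sdc2; do mkswap "$p" && swapon -p 0 "$p"; done

# Watch the initial build/resync progress:
cat /proc/mdstat
```

A common follow-up is to record the resulting arrays with `mdadm --detail --scan >> /etc/mdadm.conf` so they are assembled consistently at boot.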

GRUB supports the default style of metadata currently created by mdadm (i.e. 1.2) when combined with an initramfs, which in Arch Linux is generated with mkinitcpio. Older boot loaders (i.e. GRUB Legacy, LILO) will not support any 1.x metadata versions, and instead require the older version, 0.90. Syslinux only supports version 1.0, and therefore requires the --metadata=1.0 option.
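To confirm which metadata version an existing array uses, you can inspect the Version line of `mdadm --detail`. The sketch below runs the extraction against a hypothetical, hard-coded sample of that output so it is self-contained; on a real system you would pipe the live command instead.

```shell
# Hypothetical excerpt of `mdadm --detail /dev/md0` output (sample data only):
sample_detail='/dev/md0:
           Version : 1.2
     Creation Time : Mon Jan  1 00:00:00 2024
        Raid Level : raid1'

# Extract the metadata version; 0.90 would be required for GRUB Legacy/LILO,
# 1.0 for Syslinux, and 1.2 (the mdadm default) works with GRUB plus an initramfs.
printf '%s\n' "$sample_detail" | awk -F': ' '/Version/ {print $2}'
# prints: 1.2
# On a live system: mdadm --detail /dev/md0 | awk -F': ' '/Version/ {print $2}'
```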

Many tutorials treat the swap space differently, either by creating a separate RAID1 array or an LVM logical volume. Creating the swap space on a separate array is not intended to provide additional redundancy, but instead to prevent a corrupt swap space from rendering the system inoperable, which is more likely to happen when the swap space is located on the same partition as the root directory. Note: If you want extra performance, just let the kernel use distinct swap partitions, as it does striping by default. This tutorial will use Syslinux instead of GRUB; GRUB, when used in conjunction with GPT, requires an additional BIOS boot partition.

If you are using our Software RAID Monitoring, then you have probably already noticed that you can configure two different types of warnings under this section: you can select to be warned when the RAID health is not ideal, or when the RAID health is critical. In this article we'll go through what each of these warnings means.

The 'not ideal' warning covers cases such as when the RAID is performing a check, recover, or resync. In these cases the health of the RAID is most likely fine; however, the healthy status cannot currently be confirmed, because your system is either checking the RAID's integrity ('check'), resyncing the RAID drives ('resync'), or rebuilding/recovering the RAID ('recover'). Either one of these statuses can have a faulty outcome, so the RAID status is currently 'not ideal': under such conditions the 'healthy' status cannot be 100% confirmed. Furthermore, while such actions are being performed on the RAID disks, your server may experience slower than usual performance. So, if you'd like to be warned of such scenarios, select to be warned when the RAID health is 'not ideal', but treat this warning level as informative, as you are most likely not taking any action at this stage. Please note that such actions might be routinely performed by your system, so in some cases you may be warned quite often.

The 'critical' warning is the level you'd want if you wish to be warned only in extreme cases, where the RAID setup is actually failing or has already failed. In these cases your RAID setup requires your immediate attention in order to solve the issues it is presenting and avoid any possible data loss.
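The check/resync/recover actions described above are reported by the kernel in /proc/mdstat. As a self-contained sketch, the snippet below scans a hypothetical mdstat excerpt (sample data, not live output) for an in-progress action; on a real system you would read /proc/mdstat directly.

```shell
# Hypothetical /proc/mdstat excerpt showing a resync in progress (sample data):
sample_mdstat='md0 : active raid1 sdb1[1] sda1[0]
      204736 blocks super 1.0 [2/2] [UU]
      [==>..................]  resync = 12.5% (25600/204736) finish=0.1min speed=25600K/sec'

# Report which action is currently running, if any (empty output = idle):
printf '%s\n' "$sample_mdstat" | grep -oE 'check|resync|recovery' | head -n 1
# prints: resync
# On a live system: grep -oE 'check|resync|recovery' /proc/mdstat | head -n 1
```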
