RAID levels differ in performance, usable storage capacity, and data‑protection capabilities, depending on the selected configuration and the number of drives in the array. Review the summaries for each RAID level before selecting a configuration for your device.
For RAID configuration instructions, see Configure and Manage Arrays.
| RAID level | Min. drives | Max. drives | Notes |
|---|---|---|---|
| RAID 0 | 2 | 8 | |
| RAID 1 | 2 | 2 | Only two drives are supported for a RAID 1 array. |
| RAID 5 | 5 | 8 | A minimum of five drives is required to allow background initialisation as an option.* |
| RAID 6 | 7 | 8 | A minimum of seven drives is required to allow background initialisation as an option.* |
| RAID 10 | 4 | 8 | Requires an even number of drives (four, six, or eight). |
| RAID 50 | 6 | 8 | Requires an even number of drives (six or eight). Can only be created via foreground initialisation.* |
| RAID 60 | 8 | 8 | Can only be created via foreground initialisation.* |

\* To better understand the difference between a background initialisation and a foreground initialisation, see Create an array.
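The drive-count rules in the table above can be captured in a small validation helper. This is an illustrative sketch, not part of any device software; the limits are taken directly from the table:

```python
# Drive-count constraints from the table above: (min, max, even_only).
RAID_RULES = {
    "RAID 0": (2, 8, False),
    "RAID 1": (2, 2, False),
    "RAID 5": (5, 8, False),
    "RAID 6": (7, 8, False),
    "RAID 10": (4, 8, True),   # four, six, or eight drives
    "RAID 50": (6, 8, True),   # six or eight drives
    "RAID 60": (8, 8, False),
}

def valid_drive_count(level: str, count: int) -> bool:
    """Return True if `count` drives is a supported size for `level`."""
    lo, hi, even_only = RAID_RULES[level]
    if count < lo or count > hi:
        return False
    return not (even_only and count % 2 != 0)
```

For example, `valid_drive_count("RAID 10", 6)` is `True`, while `valid_drive_count("RAID 50", 7)` is `False` because RAID 50 requires an even number of drives.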

RAID 0 provides the highest sequential performance by writing data across all drives in the array (striping). The usable storage capacity equals the combined capacity of all drives.
RAID 0 does not provide data protection. If a single drive fails, all data in the array is lost. RAID 0 is best suited for temporary or non‑critical data where performance is the primary requirement and data can be restored from another source.

RAID 1 mirrors data between two drives, providing enhanced data protection. If one drive fails, data remains available on the remaining drive.
Because all data is written to both drives, usable storage capacity is reduced by 50%. Write performance is lower than that of RAID 0 because each block must be written twice. RAID 1 is supported only with two drives and cannot be expanded.

RAID 5 writes data across all drives in the array and distributes parity information among them. If one drive fails, the array continues to operate and the missing data can be rebuilt onto a replacement drive.
If a second drive fails before the rebuild process completes, data in the array is lost.
RAID 5 performance can approach that of RAID 0 while providing protection against a single drive failure. Usable capacity is calculated by multiplying the capacity of the smallest drive by the total number of drives in the array, minus one:
Smallest drive capacity × (Total number of drives − 1)
Example 1: An array is assigned five 8 TB drives for a total of 40 TB. The equation is:
8 TB × 4 = 32 TB
Example 2: An array is assigned four 16 TB drives and one 24 TB drive for a total of 88 TB. The equation is:
16 TB × 4 = 64 TB
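The capacity rule above can be expressed directly in code. A minimal sketch that reproduces both examples (the function name is illustrative):

```python
def raid5_usable_tb(drive_sizes_tb: list[float]) -> float:
    """RAID 5 usable capacity: smallest drive × (number of drives − 1)."""
    return min(drive_sizes_tb) * (len(drive_sizes_tb) - 1)

# Example 1: five 8 TB drives → 32 TB usable
print(raid5_usable_tb([8, 8, 8, 8, 8]))
# Example 2: four 16 TB drives and one 24 TB drive → 64 TB usable
print(raid5_usable_tb([16, 16, 16, 16, 24]))
```

Note that in Example 2 the extra capacity of the 24 TB drive is unused, because the calculation is based on the smallest drive in the array.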

RAID 6 writes data across all drives in the array and stores two sets of distributed parity information. This configuration allows the array to withstand the failure of up to two drives without data loss.
Rebuilding data after a drive failure is slower than RAID 5 due to the additional parity calculations, but RAID 6 provides significantly greater protection for large‑capacity arrays.
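Following the pattern of the RAID 5 calculation above, the two parity sets in RAID 6 consume the equivalent of two drives. A minimal sketch; this is the standard double-parity capacity rule, which the article itself does not state explicitly:

```python
def raid6_usable_tb(drive_sizes_tb: list[float]) -> float:
    """RAID 6 usable capacity: smallest drive × (number of drives − 2),
    since two drives' worth of space holds the two parity sets."""
    return min(drive_sizes_tb) * (len(drive_sizes_tb) - 2)

# Seven 8 TB drives: 8 TB × 5 = 40 TB usable
print(raid6_usable_tb([8, 8, 8, 8, 8, 8, 8]))
```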

RAID 10 combines the data protection of RAID 1 with the performance benefits of RAID 0. The array is composed of mirrored pairs of drives that are then striped together.
RAID 10 can tolerate the failure of one drive in each mirrored pair, as long as both drives in the same mirror do not fail simultaneously. This configuration provides strong data protection and high performance, particularly for workloads that involve frequent access to many small files and benefit from higher input/output operations per second (IOPS).
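Because every drive in a RAID 10 array belongs to a mirrored pair, usable capacity works out to half the total, as with RAID 1. A minimal sketch assuming identically sized drives (a common deployment; the helper itself is illustrative):

```python
def raid10_usable_tb(drive_size_tb: float, drive_count: int) -> float:
    """RAID 10 usable capacity with identical drives: each mirrored
    pair stores one drive's worth of data, so usable space is half."""
    if drive_count % 2 != 0 or not 4 <= drive_count <= 8:
        raise ValueError("RAID 10 requires four, six, or eight drives")
    return drive_size_tb * drive_count / 2

# Eight 8 TB drives: 32 TB usable
print(raid10_usable_tb(8, 8))
```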

RAID 50 combines RAID 0 striping with RAID 5 parity by striping data across multiple RAID 5 groups. This configuration improves write performance compared to RAID 5 while offering greater fault tolerance than a single RAID level.
A minimum of six drives is required. Arrays with a large number of drives may take longer to initialise and rebuild due to increased capacity.
RAID 50 can only be created using foreground initialisation. During foreground initialisation, your device must be disconnected from the host computer. For details, see Create an array.

RAID 60 combines RAID 0 striping with RAID 6 double parity by striping data across multiple RAID 6 groups. This configuration offers improved performance compared to RAID 6 while providing high fault tolerance.
A minimum of eight drives is required. Because RAID 60 arrays use a large number of drives, initialisation and rebuild operations take longer than with standard RAID levels.
RAID 60 can only be created using foreground initialisation. During foreground initialisation, your device must be disconnected from the host computer. For details, see Create an array.

A RAID + Spare configuration includes a reserved drive that automatically replaces a failed drive. When a drive fails, data synchronisation to the spare begins immediately, reducing the time the array operates in a degraded state. Redundant arrays without a spare cannot begin synchronisation until a failed drive is physically replaced.
For more details, see Assign a spare drive.
For RAID + Spare arrays, data remains intact as long as no more drives fail than the RAID level can tolerate. However, if an additional drive fails before or during data synchronisation with the spare, data in the array is lost. See the examples below.