In a RAID array, all disks are usually the same size. If you use disks of different sizes, the utilised space on each disk will be limited to the size of the smallest disk.
- The original number of disks in your RAID is fixed and cannot be changed.
- You can replace disks with larger disks, but you cannot utilise the extra capacity right away.
- Once you have replaced all your disks with higher-capacity disks, you can often (depending on your controller) expand the entire RAID to the new, larger size.
RAID 0 – stripe
Data is written across all disks. Highest performance, with zero redundancy. Usable disk space equals the raw size.
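As a rough sketch of how striping spreads data, a simplified model maps each logical block round-robin across the disks (real controllers stripe in larger chunks, but the idea is the same):

```python
# Simplified sketch (not any specific controller's layout): map a
# logical block number to a (disk, offset) pair under RAID 0 striping.
def raid0_locate(block: int, num_disks: int) -> tuple[int, int]:
    disk = block % num_disks      # blocks rotate across the disks
    offset = block // num_disks   # position of the block on that disk
    return disk, offset

# With 4 disks, logical blocks 0..3 land on disks 0..3 at offset 0;
# block 4 wraps back to disk 0 at offset 1, and so on.
print(raid0_locate(5, 4))
```

Because consecutive blocks land on different disks, sequential reads and writes are served by all disks in parallel, which is where the speed comes from.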
RAID 1 – mirror
The data is duplicated on two disks. Double read performance, same write performance, and you can survive losing a disk. Usable disk space is half of the raw size.
RAID 5 – one distributed parity (RAIDZ, RAID-Z1, RAID 7.1, SHR-1, F1)
Parity is calculated and written across all disks. Higher read performance, lower write performance. You can survive losing 1 disk. With disk sizes larger than 4 TB, a rebuild may take several days and poses a serious risk. Usable disk space is the raw size minus 1 disk.
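RAID 5 parity is a bytewise XOR of the data blocks in a stripe, which is why any single lost block can be recomputed from the survivors. A minimal sketch:

```python
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-sized blocks together, byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Write path: parity is the XOR of all data blocks in the stripe.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# One disk fails: its block is recovered by XOR-ing the surviving
# data blocks with the parity block.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

This also shows why writes are slower than reads: every small write must update the parity block as well as the data block.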
SHR-1 is a Synology variation that supports utilising the space of different-size drives in the same RAID, both when building the array and later when replacing and expanding. You can replace individual drives one by one with larger ones, creating a mixed-size array and allowing you to utilise the increased space of the larger disks. When using disks of different sizes in an SHR RAID, performance will be slightly lower than with equally sized disks. SHR still uses normal RAID on the disks, but adds multiple RAID “slices” on top to support mixed-size arrays.
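The slice idea can be modelled with a simple greedy calculation. This is a simplified sketch of how SHR-1 capacity works, not Synology's actual implementation: each slice spans every drive that still has free space and sacrifices one drive's worth of that slice for redundancy.

```python
def shr1_usable(sizes_tb: list[int]) -> int:
    """Approximate SHR-1 usable capacity (in TB) using RAID slices.

    Simplified model: repeatedly carve a slice as large as the
    smallest remaining free space, spanning all drives that still
    have room, and lose one drive's share of each slice to parity
    (or mirroring, when only two drives remain).
    """
    remaining = list(sizes_tb)
    usable = 0
    while True:
        live = [s for s in remaining if s > 0]
        if len(live) < 2:          # a lone leftover drive adds nothing
            break
        slice_size = min(live)
        usable += slice_size * (len(live) - 1)
        remaining = [max(0, s - slice_size) for s in remaining]
    return usable

# Two 4 TB and two 8 TB drives: a 4 TB RAID 5 slice across all four
# (12 TB usable) plus a 4 TB mirror slice across the two 8 TB drives
# (4 TB usable) = 16 TB, versus 12 TB for plain RAID 5.
print(shr1_usable([4, 4, 8, 8]))
```

The model ignores partitioning overhead, but it illustrates why SHR recovers space from mixed-size drives that classic RAID 5 would leave unused.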
F1 is another Synology variation that only Synology supports. F1 is equivalent to RAID 5, but writes more parity data to a single drive instead of spreading parity equally across all drives. Because more parity is written to one drive, write performance suffers slightly. F1 is intended for all-flash (SSD) arrays: by writing more to one drive than the rest, that drive will wear out and fail before the others, avoiding the situation where all SSDs have written an equal amount and therefore fail almost simultaneously. With high-endurance drives and proper monitoring of the remaining endurance, this is not needed; if you do not monitor the remaining endurance, F1 provides added safety.
RAID 6 – two distributed parities (RAID-Z2, RAID 7.2)
2 parity blocks are calculated and written across all disks. Higher read performance, poor write performance. You can survive losing 2 disks. With disk sizes larger than 4 TB, a rebuild may take many days and increases risk while degrading your performance, which may not be acceptable for such a long period. Usable disk space is the raw size minus 2 disks.
SHR-2 is similar to SHR-1, but now with 2 parity disks, like in RAID 6.
RAID 7.3 – three distributed parities (RAID-Z3)
3 parity blocks are calculated and written across all disks. Higher read performance, very poor write performance. You can survive losing 3 disks! Especially useful with disks of 10 TB and above, for which a rebuild may take several weeks.
RAID 10 – striping across a number of mirrors (1+0)
Usually a number of 2-disk mirrors with a stripe set on top. Very high read and write performance. You can survive losing as many disks as you have mirrors, as long as you don’t lose 2 disks in the same mirror. Rebuilds are very fast, often 30 minutes, with low performance degradation, minimising the period you are vulnerable. Using hot spares is advised, to take full advantage of the short rebuild times. Usable disk space is half of the raw size.
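The "as long as you don't lose 2 disks in the same mirror" condition can be quantified. A small sketch, assuming 2-disk mirrors and two simultaneous random failures, counts the fraction of failure pairs that destroy the array:

```python
from itertools import combinations

def fatal_pair_fraction(num_mirrors: int) -> float:
    """Fraction of random two-disk failures that destroy a RAID 10
    built from 2-disk mirrors: only a pair of failures inside the
    same mirror is fatal."""
    disks = range(2 * num_mirrors)
    pairs = list(combinations(disks, 2))
    fatal = [p for p in pairs if p[0] // 2 == p[1] // 2]  # same mirror
    return len(fatal) / len(pairs)

# 8 disks as 4 mirrors: 4 fatal pairs out of 28 possible, about 14%.
print(round(fatal_pair_fraction(4), 3))
```

Combined with the short rebuild window, this is why RAID 10's two-failure exposure is much smaller in practice than the "only one disk per mirror" rule first suggests.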
RAID 50 – striping across a number of RAID 5 sets (5+0)
A number of RAID 5 sets, with a stripe set on top for speed. You can survive losing 1 disk in each RAID 5 set. Increases write performance compared to RAID 5.
RAID 50×2: A RAID 50 can be built from any number of RAID 5 subsets. Specifying RAID 50×2 (or 50/2) means the RAID 50 is built from 2 RAID 5 subsets with a stripe on top. RAID 50×4 means 4 subsets – e.g. in a 16-slot system, RAID 50×4 means you have 4 RAID 5 groups with 4 disks each, and a stripe on top.
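The usable capacity of the subset layouts above is simple arithmetic: one disk of parity is lost per RAID 5 group. A small sketch (the 4 TB disk size is a hypothetical example, not from the text):

```python
def raid50_usable(num_disks: int, disk_tb: int, num_groups: int) -> int:
    """Usable capacity of a RAID 50: raw size minus one disk's worth
    of parity per RAID 5 group. Assumes equal-sized groups."""
    assert num_disks % num_groups == 0, "groups must divide evenly"
    return (num_disks - num_groups) * disk_tb

# The 16-slot RAID 50x4 example with hypothetical 4 TB disks:
# 4 groups of 4 disks each lose one disk to parity -> 12 x 4 TB.
print(raid50_usable(16, 4, 4))
```

More groups means more parallel stripes (faster) but more disks lost to parity; fewer, larger groups waste less space but rebuild more slowly.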
RAID 60 – striping across a number of RAID 6 sets (6+0)
A number of RAID 6 sets, with a stripe set on top for speed. You can survive losing 2 disks in each RAID 6 set. Increases write performance compared to RAID 6.
RAID 60×2: A RAID 60 can be built from any number of RAID 6 subsets. Specifying RAID 60×2 means the RAID 60 is built from 2 RAID 6 subsets with a stripe on top. RAID 60×3 means 3 subsets – e.g. in a 24-slot system, RAID 60×3 means you have 3 RAID 6 groups with 8 disks each, and a stripe on top.
RAID X/X (6/6, 5/5 etc.)
A storage box with two separate RAIDs. Example: RAID 6/6 means you have created two separate RAID 6 groups in the same storage box, which run completely independently. In some situations this may be preferred over e.g. RAID 60, if you need to be 99.999% sure that IO abuse of one RAID will not hurt the other. In most situations, RAID 60 with good monitoring is preferred, as separating the RAIDs sacrifices peak IO performance.
What RAID and when?
For large disk sets, the choice is usually between RAID 10 and RAID 60. RAID 10 provides the best performance, especially for writes, and your degraded, vulnerable periods are short. With RAID 60 you can always survive losing 2 disks, and if you use a large number of disks in each RAID 6 set, you waste only a minimum of space. The cost is lower write performance.
| Level | Description | Minimum number of drives | Space loss | Fault tolerance | Failure rate | Read performance | Write performance |
|---|---|---|---|---|---|---|---|
| RAID 0 | Stripe | 2 | None | None | High | High | Very high |
| RAID 1 | Mirror | 2 | Raw / 2 | Mirrored disks | Medium | High | Low |
| RAID 5 | 1 parity block distributed | 3 | Raw – 1 disk | 1 disk | Medium | High | Low |
| RAID 6 | 2 parity blocks distributed | 4 | Raw – 2 disks | 2 disks | Low | High | Very low |
| RAID 10 | Mirroring without parity, and block-level striping | 4 | Raw / 2 | 1 in each mirror | Low | High | High |
| RAID 50 | Block-level striping with distributed parity, and block-level striping | 6 | Raw – (1 disk * number of RAID 5 sets) | One per RAID 5 | Low | Unknown | Unknown |
| RAID 60 | Block-level striping with double distributed parity, and block-level striping | 8 | Raw – (2 disks * number of RAID 6 sets) | Two per RAID 6 | Very low | Unknown | Unknown |
| RAID 100 | Mirroring without parity, and two levels of block-level striping | 8 | Raw / 2 | 1 in each mirror | Low | Unknown | Unknown |
There are many more RAID levels, including further nested levels – but none that we use or support.
RAID 5E, 5EE etc.
In a normal RAID the hot spare is passive, only waiting to be used when needed. In RAID 5E (Enhanced), RAID 5EE etc. the hot spare is actively used in the array until it is needed as a spare. Running the hot spare enhanced means you know the disk works, but you are also wearing it down.
ZFS
ZFS Stripe = Striped vdevs = RAID 0 = Stripe
RAID 0 is called striped vdevs in ZFS. ZFS also checksums all data to prevent silent data corruption.
ZFS Mirror = Mirrored vdevs = RAID 1 = Mirror
A mirror that also performs automatic checksums to prevent silent data corruption. Mirrored vdevs also support more than two disks per mirror, allowing the data to be duplicated onto more than 1 extra disk.
Striped mirrored vdevs = RAID 10
Same as RAID 10, but with checksums.
RAID Z compared to RAID
ZFS RAID levels are called RAID-Z instead of RAID. RAID-Z uses copy-on-write and always writes full stripes, so it does not suffer the RAID write-hole problem that can occur when a RAID storage system crashes mid-write, leaving parity inconsistent. ZFS additionally checksums all data to catch silent corruption, though writing the checksum data may cause slowdowns, as it is spread across all drives in the pool.
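The value of checksumming can be shown with a conceptual sketch. This is not actual ZFS code, just an illustration of the principle: store a checksum with each block and verify it on every read, so silent corruption is detected instead of being returned as valid data.

```python
import zlib

# Conceptual illustration only (not ZFS internals): pair each block
# with a CRC32 checksum and verify it on read.
def write_block(data: bytes) -> tuple[bytes, int]:
    return data, zlib.crc32(data)

def read_block(data: bytes, checksum: int) -> bytes:
    if zlib.crc32(data) != checksum:
        raise IOError("checksum mismatch: silent corruption detected")
    return data

block, crc = write_block(b"important data")
corrupted = b"importent data"        # a bit flip on disk
try:
    read_block(corrupted, crc)
except IOError as e:
    print(e)
```

In a real ZFS mirror or RAID-Z, a detected mismatch triggers a repair from a good copy or from parity, which is what makes the checksums more than just an error message.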