# unable to grow raid

## Cr0t

I have 3x 2 TB drives in a RAID5 array, which works great. I decided to grow the array by one drive, but it fails. I had never run into any issues until today, when I got this...

```
15:13:48^root@amy:~ > mdadm -D /dev/md0
/dev/md0:
        Version : 1.01
  Creation Time : Sat Apr 17 10:39:21 2010
     Raid Level : raid5
     Array Size : 3907023616 (3726.03 GiB 4000.79 GB)
  Used Dev Size : 1953511808 (1863.01 GiB 2000.40 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Apr 19 15:13:48 2010
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 128K

           Name : amy:0  (local to host amy)
           UUID : d64bd5fc:be602828:04c0c8c0:312502d9
         Events : 15859

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1

15:13:53^root@amy:~ > mdadm --add /dev/md0 /dev/sde1
mdadm: re-added /dev/sde1

15:14:00^root@amy:~ > mdadm --grow /dev/md0 --raid-devices=4
mdadm: this change will reduce the size of the array.
       use --grow --array-size first to truncate array.
       e.g. mdadm --grow /dev/md0 --array-size 1565568128

15:14:18^root@amy:~ > mdadm -D /dev/md0
/dev/md0:
        Version : 1.01
  Creation Time : Sat Apr 17 10:39:21 2010
     Raid Level : raid5
     Array Size : 3907023616 (3726.03 GiB 4000.79 GB)
  Used Dev Size : 1953511808 (1863.01 GiB 2000.40 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Apr 19 15:14:00 2010
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 128K

           Name : amy:0  (local to host amy)
           UUID : d64bd5fc:be602828:04c0c8c0:312502d9
         Events : 15862

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1
       4       8       65        -      spare   /dev/sde1
```

Any ideas?

----------

## Mad Merlin

Presumably sde1 is a partition that consumes the entire (new) 2 TB drive. What is the exact partition size of the new drive, though? If the disks aren't the same model, they may be slightly different sizes (and you'd be especially unlucky if the new drive were slightly smaller than the existing ones, but it happens).
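One way to check this is `blockdev --getsize64`, which prints a block device's exact size in bytes (run as root). A minimal sketch; the comparison below uses example values derived from the 1953512001 KiB-block partition size fdisk reports, so the arithmetic itself is visible:

```shell
# On the real system you would collect the sizes like this (needs root):
#   for p in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1; do
#       printf '%s %s\n' "$p" "$(blockdev --getsize64 "$p")"
#   done
# The new partition must be at least as large as the existing members.
old_bytes=$((1953512001 * 1024))   # an existing member, in bytes
new_bytes=$((1953512001 * 1024))   # the new member, in bytes
if [ "$new_bytes" -ge "$old_bytes" ]; then
    echo "new partition is large enough"
else
    echo "new partition is smaller than the existing members"
fi
```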

----------

## Cr0t

 *Mad Merlin wrote:*   

> Presumably sde1 is a partition that consumes the entire (new) 2 TB drive. What is the exact partition size of the new drive, though? If the disks aren't the same model, they may be slightly different sizes (and you'd be especially unlucky if the new drive were slightly smaller than the existing ones, but it happens).

 

```
[2:0:0:0]    disk    ATA      Hitachi HDS72202 JKAO  /dev/sdb
[3:0:0:0]    disk    ATA      Hitachi HDS72202 JKAO  /dev/sdc
[4:0:0:0]    disk    ATA      Hitachi HDS72202 JKAO  /dev/sdd
[5:0:0:0]    disk    ATA      Hitachi HDS72202 JKAO  /dev/sde
```

```
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xeab53057

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      243201  1953512001   fd  Linux raid autodetect

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x591c0c51

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      243201  1953512001   fd  Linux raid autodetect

Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xa840ad2f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      243201  1953512001   fd  Linux raid autodetect

Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4febbc9c

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1      243201  1953512001   fd  Linux raid autodetect
```

----------

## Cr0t

```
23:52:13^root@amy:/usr/src/linux > mdadm --grow /dev/md0 --raid-devices=3
mdadm: /dev/md0: no change requested

23:52:15^root@amy:/usr/src/linux > mdadm --grow /dev/md0 --raid-devices=4
mdadm: this change will reduce the size of the array.
       use --grow --array-size first to truncate array.
       e.g. mdadm --grow /dev/md0 --array-size 1565568128
```

This makes no sense!
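For what it's worth, the numbers mdadm itself reports suggest the warning is wrong: RAID5 usable capacity is (members - 1) × per-member size, so going from 3 to 4 members should increase the array, never shrink it. A quick sketch of the arithmetic using the values from the `mdadm -D` output above:

```shell
# RAID5 usable capacity = (members - 1) * per-member size.
used_dev_kib=1953511808        # "Used Dev Size" from mdadm -D, in KiB
array_3=$(( (3 - 1) * used_dev_kib ))
array_4=$(( (4 - 1) * used_dev_kib ))
echo "$array_3"                # 3907023616 KiB, the reported Array Size
echo "$array_4"                # expected Array Size after a 4-disk grow
# mdadm's suggested --array-size 1565568128 is far below either figure,
# which is why the "will reduce the size" warning looks bogus here.
```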

----------

## vincent-

Hope it helps:

http://scotgate.org/2006/07/03/growing-a-raid5-array-mdadm/

----------

## xibo

It's a bug in mdadm; see https://forums.gentoo.org/viewtopic-p-6104435.html (last post).

AFAIK a patched version of mdadm has been available in ~arch for quite some time, though.
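Before retrying, it's worth confirming which version is installed (`mdadm --version`). A sketch of comparing version strings with GNU `sort -V`; both values below are placeholders, and the threshold is hypothetical, not the confirmed release that carries the fix:

```shell
# Substitute the real output of `mdadm --version` for "have"; "want" is a
# hypothetical threshold, not the confirmed fixed release.
have=3.0.2
want=3.1.2
lowest=$(printf '%s\n' "$want" "$have" | sort -V | head -n1)
if [ "$lowest" = "$have" ] && [ "$have" != "$want" ]; then
    echo "mdadm $have predates $want; upgrade before retrying --grow"
else
    echo "mdadm $have should be recent enough"
fi
```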

----------

## Cr0t

Forgot to post my solution. I figured it out: I just upgraded to the latest version of mdadm and it worked.

----------

