Upgrading Netgear Stora NAS Drives Without Copying Data

January 23rd, 2013

This article shows how I upgraded my Netgear Stora from a 1TB RAID1 array to 2TB without having to move the data off the device before changing drives.

A few years ago, I purchased a Netgear Stora NAS for my home.  It's a great device, proving useful for general file storage as well as Time Machine backups and iTunes sharing.  Thanks to the guys at OpenStora, it has proven to be an easy machine to customize with additional functionality: DLNA for media playback, a Transmission client for getting CentOS updates via BitTorrent, and even a CrashPlan node for backing up over the net.

I originally bought the machine 3 years ago with a single 1TB drive.  Once they became cheap enough, I threw a second 1TB drive in for RAID1 security.  But over the past few months the array started filling up, so I decided to invest in a couple of 2TB drives to get some more life out of it.

The question I had was: how could I upgrade these drives without spending forever copying all the files off over the network and back on again (given I didn't have any 1TB USB drives)?

I eventually found a thread here which got me started.  Never having used most of the tools discussed before, I did a little reading around and then did the following.  The whole process took about a day.  I found it so handy, and I know a few friends with Storas who might benefit from the process I followed, so I've written it up.  If it helps you, let me know!  The usual disclaimers apply of course :)

Here's the procedure:

  1. Shut down the Stora.
  2. Swap out the old right-hand drive for a new one.  Label the old drive in case you need to put it back in.
  3. Boot up.
  4. Log into the web console and go to Preferences -> Disk Management.

The second drive is now marked as unconfigured. Click the relevant button to add the drive to the RAID1 array.

  5. Wait for the rebuild to complete. Status can be seen either on the web console or via SSH with cat /proc/mdstat (it will take at least 2 hours).
  6. Once the rebuild is complete, repeat the process for the left-hand drive, i.e.:
  7. Shut down the Stora.
  8. Swap out the old left-hand drive for a new one and boot up.
  9. Log into the web console and go to Preferences -> Disk Management.

This time the first drive will be marked as unconfigured. Click the relevant button to add the drive to the RAID1 array.

  10. Wait for the rebuild to complete. Status can be seen either on the web console or via SSH with cat /proc/mdstat (it will take at least 2 hours).

At this point, both 2TB drives have taken over from the old 1TB drives in the RAID array, but the array still shows only 1TB of capacity.
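As an aside, the "Array Size" figure mdadm prints in the listings below is in 1KiB blocks. A throwaway helper (blocks_to_gib is my name, not a Stora tool) confirms that 976562432 blocks is indeed the old ~931GiB, i.e. 1TB, capacity:

```shell
# mdadm's "Array Size" is in 1 KiB blocks; convert to whole GiB.
blocks_to_gib() {
    echo $(( $1 / 1048576 ))    # 1048576 KiB per GiB (truncates)
}
blocks_to_gib 976562432    # prints 931 - matching the 931.32 GiB shown below
```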

  11. ssh into the Stora and get root:
-bash-3.2$ sudo bash

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.

Password:
audit_log_user_command(): Connection refused
bash-3.2#
  12. Take a look at the disk space on the RAID. Check the row for /home:
bash-3.2# df -m
Filesystem 1M-blocks Used Available Use% Mounted on
rootfs 212 158 54 75% /
ubi0:rootfs 212 158 54 75% /
none 62 1 62 1% /dev
nodev 62 1 62 1% /var/log
nodev 62 1 62 1% /mnt/tmpfs
nodev 62 0 62 0% /var/lib/php/session
nodev 953799 631836 321963 67% /tmp
nodev 62 1 62 1% /var/run
nodev 62 1 62 1% /var/cache
nodev 62 1 62 1% /var/lib/axentra_sync
nodev 62 1 62 1% /var/lib/oe-admin/minions
nodev 62 1 62 1% /var/lib/oe-admin/actions
nodev 62 1 62 1% /var/lib/oe-update-checker
nodev 62 1 62 1% /etc/blkid
nodev 62 1 62 1% /var/lib/dbus
nodev 62 1 62 1% /var/lib/dhclient
nodev 62 1 62 1% /var/lock
nodev 62 1 62 1% /var/spool
nodev 62 1 62 1% /etc/dhclient-eth0.conf
nodev 62 1 62 1% /etc/printcap
nodev 62 1 62 1% /etc/resolv.conf
/dev/md0 953799 631836 321963 67% /home
/dev/md0 953799 631836 321963 67% /tmp
/dev/md0 953799 631836 321963 67% /var/cache/mt-daapd

In this case, there is a 1TB RAID array that's 67% full.
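If you want that row without scanning the whole table, a quick awk filter does it (home_usage is just an illustrative name, and the mount point is the last field of each df row):

```shell
# Print the size and usage of whatever is mounted on /home.
home_usage() {
    awk '$NF == "/home" { print $2 " MB total, " $5 " used" }'
}
df -m | home_usage
```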

  13. Now get an idea of the structure of the Stora's RAID configuration. In the example below, it's running RAID1 across the two drives.

sda1 is the single partition on drive sda - the drive on the left of the machine - and sdb1 is likewise for the one on the right.

bash-3.2# /sbin/mdadm -D /dev/md0 # sda1 is LHS, sdb1 is RHS
/dev/md0:
Version : 00.90.03
Creation Time : Sun Aug 15 13:26:52 2010
Raid Level : raid1
Array Size : 976562432 (931.32 GiB 1000.00 GB)
Used Dev Size : 976562432 (931.32 GiB 1000.00 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Jan 22 10:34:33 2013
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd
Events : 0.178622

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
  14. The next thing to do is to tell the RAID manager that the right-hand drive is faulty, so that it stops accessing it:
bash-3.2# /sbin/mdadm --fail /dev/md0 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0

The RHS drive now shows as faulty:

bash-3.2# /sbin/mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sun Aug 15 13:26:52 2010
Raid Level : raid1
Array Size : 976562432 (931.32 GiB 1000.00 GB)
Used Dev Size : 976562432 (931.32 GiB 1000.00 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Jan 22 10:36:12 2013
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0

UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd
Events : 0.178628

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 0 0 1 removed

2 8 17 - faulty spare /dev/sdb1
  15. Now tell the system that the drive has been removed from the array (don't actually physically remove the drive - this is all done in software):
bash-3.2# /sbin/mdadm --remove /dev/md0 /dev/sdb1
mdadm: hot removed /dev/sdb1

The disk is now marked as removed:

bash-3.2# /sbin/mdadm -D /dev/md0 # The RHS drive no longer shows as part of the array
/dev/md0:
Version : 00.90.03
Creation Time : Sun Aug 15 13:26:52 2010
Raid Level : raid1
Array Size : 976562432 (931.32 GiB 1000.00 GB)
Used Dev Size : 976562432 (931.32 GiB 1000.00 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Jan 22 10:36:30 2013
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd
Events : 0.178636

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 0 0 1 removed
  16. Now it's time to run fdisk on sdb. This will remove the old 1TB partition and create a new 2TB partition. It doesn't delete any data, so provided the new partition has the same start point, the data will still be accessible once the new partition table has been written.

Run the following commands:

  • p: Show the existing partition structure (here a single 1TB partition of type "fd" (Linux raid autodetect) starting at cylinder 1)
  • d: Delete the partition (as there's only one, it doesn't prompt for a number)
  • p: Show that the partition table is now empty
  • n: Create a new partition
  • Primary partition
  • Partition #1
  • Accept the default first and last cylinder values to use the entire disk
  • t: Use hex code "fd" to set the partition type back to Linux RAID autodetect
  • p: Show that the partition is created as desired
  • w: Write the new partition table to disk
bash-3.2# /sbin/fdisk /dev/sdb
The number of cylinders for this disk is set to 243201.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 121577 976562500 fd Linux raid autodetect

Command (m for help): d
Selected partition 1

Command (m for help): p

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-243201, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-243201, default 243201):
Using default value 243201

Command (m for help): p

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 243201 1953512001 83 Linux

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 243201 1953512001 fd Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: If you have created or modified any DOS 6.x
partitions, please see the fdisk manual page for additional
information.
Syncing disks.
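For reference, that interactive session boils down to a fixed keystroke sequence. In principle it could be piped straight into fdisk, but I haven't tried that on the Stora, so treat this as a sketch and prefer the interactive route:

```shell
# d = delete partition; n, p, 1 plus two empty defaults = new full-size
# primary; t, fd = Linux raid autodetect; w = write. Untested shortcut:
FDISK_KEYS='d
n
p
1


t
fd
w'
# printf '%s\n' "$FDISK_KEYS" | /sbin/fdisk /dev/sdb   # uncomment with care
```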
  17. Now run fdisk again and use "p" to verify that the partition table was indeed written out correctly:
bash-3.2# /sbin/fdisk /dev/sdb

The number of cylinders for this disk is set to 243201.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 243201 1953512001 fd Linux raid autodetect

Command (m for help): q
  18. Add the drive back into the array:
bash-3.2# /sbin/mdadm --add /dev/md0 /dev/sdb1
mdadm: added /dev/sdb1
  19. Verify that the drive has started rebuilding:
bash-3.2# /sbin/mdadm -D /dev/md0 
/dev/md0:
Version : 00.90.03
Creation Time : Sun Aug 15 13:26:52 2010
Raid Level : raid1
Array Size : 976562432 (931.32 GiB 1000.00 GB)
Used Dev Size : 976562432 (931.32 GiB 1000.00 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Jan 22 10:48:02 2013
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1

Rebuild Status : 0% complete

UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd
Events : 0.178734

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
2 8 17 1 spare rebuilding /dev/sdb1
bash-3.2# cat /proc/mdstat # Get your estimate as to when the rebuild will be finished
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sdb1[2] sda1[0]
976562432 blocks [2/1] [U_]
[>....................] recovery = 2.2% (21843520/976562432) finish=128.4min speed=123842K/sec

This will take a while (over two hours in the example shown above). You can run the mdadm command again, or simply cat /proc/mdstat, to check on progress. When it looks finished, verify that both disks are active again:

bash-3.2# /sbin/mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sun Aug 15 13:26:52 2010
Raid Level : raid1
Array Size : 976562432 (931.32 GiB 1000.00 GB)
Used Dev Size : 976562432 (931.32 GiB 1000.00 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Jan 22 13:33:49 2013
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd
Events : 0.181086

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
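Before moving on, here is the per-drive software cycle condensed into one place - printed rather than executed, so it can be sanity-checked first (the fdisk step remains interactive):

```shell
# The commands used above for sdb, written out for sda this time.
PLAN='/sbin/mdadm --fail /dev/md0 /dev/sda1
/sbin/mdadm --remove /dev/md0 /dev/sda1
/sbin/fdisk /dev/sda
/sbin/mdadm --add /dev/md0 /dev/sda1
cat /proc/mdstat'
printf '%s\n' "$PLAN"
```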
  20. Now it's time to set up the left-hand drive (sda). Just repeat the same procedure used above for sdb:
bash-3.2# /sbin/mdadm --fail /dev/md0 /dev/sda1
mdadm: set /dev/sda1 faulty in /dev/md0
bash-3.2# /sbin/mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sun Aug 15 13:26:52 2010
Raid Level : raid1
Array Size : 976562432 (931.32 GiB 1000.00 GB)
Used Dev Size : 976562432 (931.32 GiB 1000.00 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Jan 22 13:35:09 2013
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0

UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd
Events : 0.181092

Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 17 1 active sync /dev/sdb1

2 8 1 - faulty spare /dev/sda1
bash-3.2# /sbin/mdadm --remove /dev/md0 /dev/sda1
mdadm: hot removed /dev/sda1
bash-3.2# /sbin/mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sun Aug 15 13:26:52 2010
Raid Level : raid1
Array Size : 976562432 (931.32 GiB 1000.00 GB)
Used Dev Size : 976562432 (931.32 GiB 1000.00 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Jan 22 13:35:33 2013
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd
Events : 0.181102

Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 17 1 active sync /dev/sdb1
bash-3.2# /sbin/fdisk /dev/sda

The number of cylinders for this disk is set to 243201.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 243153 1953125000 fd Linux raid autodetect

Command (m for help): d
Selected partition 1

Command (m for help): p

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-243201, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-243201, default 243201):
Using default value 243201

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 243201 1953512001 fd Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
bash-3.2# /sbin/fdisk /dev/sda

The number of cylinders for this disk is set to 243201.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 1 243201 1953512001 fd Linux raid autodetect

Command (m for help): q

bash-3.2# /sbin/mdadm --add /dev/md0 /dev/sda1
mdadm: added /dev/sda1
bash-3.2# /sbin/mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sun Aug 15 13:26:52 2010
Raid Level : raid1
Array Size : 976562432 (931.32 GiB 1000.00 GB)
Used Dev Size : 976562432 (931.32 GiB 1000.00 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Tue Jan 22 13:42:22 2013
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1

Rebuild Status : 0% complete

UUID : f1330325:7aaf8fc4:c524c7d5:22e065bd
Events : 0.181250

Number Major Minor RaidDevice State
2 8 1 0 spare rebuilding /dev/sda1
1 8 17 1 active sync /dev/sdb1
bash-3.2# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[2] sdb1[1]
976562432 blocks [2/1] [_U]
[>....................] recovery = 0.6% (6342592/976562432) finish=115.3min speed=140204K/sec

unused devices: <none>
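Rather than eyeballing the progress bar during these rebuilds, the percentage can be scraped out of /proc/mdstat for logging, e.g. from a cron job (mdstat_pct is just an illustrative name):

```shell
# Pull the "recovery = N.N%" figure from mdstat text on stdin;
# prints nothing once no resync is running.
mdstat_pct() {
    sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p'
}
# e.g. on the output above:
echo '[>...] recovery = 0.6% (6342592/976562432) finish=115.3min' | mdstat_pct   # prints 0.6
# on the box itself: mdstat_pct < /proc/mdstat
```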
  21. So now it's a few hours later and the RAID is up and running again with both disks re-partitioned. The array itself, however, still thinks it has the original capacity, so tell it to grow to fill all the available space. Again, this will take a few hours.
bash-3.2# /sbin/mdadm --grow /dev/md0 -z max
  22. Growth progress can be checked with cat /proc/mdstat; alternatively, use the --wait option, which blocks until the RAID growth is complete:
bash-3.2# /sbin/mdadm --wait /dev/md0
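Because --wait only returns once the resync has finished, the grow, the wait and the filesystem resize can be chained and left to run unattended (worth doing under nohup or screen in case the SSH session drops). A sketch with echo as a dry-run guard; note I've used xfs_growfs's -d flag here, which grows the data section to the maximum size:

```shell
MD=/dev/md0
RUN=echo    # dry-run guard: prints the commands; set RUN='' to run for real
$RUN /sbin/mdadm --grow "$MD" -z max
$RUN /sbin/mdadm --wait "$MD"
$RUN /usr/sbin/xfs_growfs -d "$MD"
```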
  23. Once the grow is complete, the filesystem on the array needs to be grown to fill it. This is a relatively quick operation.
bash-3.2# /usr/sbin/xfs_growfs -D max /dev/md0
meta-data=/dev/md0 isize=256 agcount=32, agsize=7629394 blks
= sectsz=512 attr=0
data = bsize=4096 blocks=244140608, imaxpct=25
= sunit=0 swidth=0 blks, unwritten=1
naming =version 2 bsize=4096
log =internal bsize=4096 blocks=32768, version=1
= sectsz=512 sunit=0 blks
realtime =none extsz=65536 blocks=0, rtextents=0
data blocks changed from 244140608 to 488377984
  24. Now take a look at the mounted disk space. /home should now show plenty of free space!
bash-3.2# df -m
Filesystem 1M-blocks Used Available Use% Mounted on
rootfs 212 158 54 75% /
ubi0:rootfs 212 158 54 75% /
none 62 1 62 1% /dev
nodev 62 1 62 1% /var/log
nodev 62 1 62 1% /mnt/tmpfs
nodev 62 0 62 0% /var/lib/php/session
nodev 1907599 631836 1275764 34% /tmp
nodev 62 1 62 1% /var/run
nodev 62 1 62 1% /var/cache
nodev 62 1 62 1% /var/lib/axentra_sync
nodev 62 1 62 1% /var/lib/oe-admin/minions
nodev 62 1 62 1% /var/lib/oe-admin/actions
nodev 62 1 62 1% /var/lib/oe-update-checker
nodev 62 1 62 1% /etc/blkid
nodev 62 1 62 1% /var/lib/dbus
nodev 62 1 62 1% /var/lib/dhclient
nodev 62 1 62 1% /var/lock
nodev 62 1 62 1% /var/spool
nodev 62 1 62 1% /etc/dhclient-eth0.conf
nodev 62 1 62 1% /etc/printcap
nodev 62 1 62 1% /etc/resolv.conf
/dev/md0 1907599 631836 1275764 34% /home
/dev/md0 1907599 631836 1275764 34% /tmp
/dev/md0 1907599 631836 1275764 34% /var/cache/mt-daapd

Tags: Hardware, NAS, Netgear, RAID, Stora