# Did I format the wrong drive? And what to do if so...

## simon78

I have a system with two 250 GB SATA drives. On these I have a couple of RAID partitions:

/boot is mirrored

/ is striped

swap is striped

I have to use a hacked GRUB for my motherboard/HD combo. Today I had to reboot and GRUB did not boot. I thought portage had installed the standard GRUB, so I booted an Ubuntu live CD to try to reinstall the hacked GRUB. However my /dev/md* devices were not mountable, and it was hard to fit the Gentoo live CD ISO into the Ubuntu environment. So, in a stroke of genius, I formatted the swap partition (mke2fs /dev/md2) to store the ISO. But after a df -h I am not so sure that I really formatted the swap - it may have been /! There was an awful lot of free space for a swap partition; it matches the size of / much better. As I am in the middle of rearranging the disks in all my computers I don't have any backups! I will do (almost) anything to get this data back.

First, how do I find out which partition (/dev/md*) is which? And if I formatted the wrong one, is the data gone? Is there any way to restore it?

Please help me!!!

//simon

----------

## DNAspark99

from the livecd you should have done...

```
cat /proc/mdstat
```

...to get an idea of which partitions are part of which arrays. Ubuntu may assemble them under different numbers, so formatting one of the /dev/md* arrays may have wiped data you didn't intend to. Oops and ouch!

Never mind the RAID for now - try mounting the individual partitions (/dev/sda1 etc.) read-only from the live CD to see if (hopefully) you didn't scrub anything important!
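Another quick low-level check of what a partition now holds is to look for filesystem magic bytes directly. The sketch below runs on a scratch file (the file name is made up for the demo); on the real box you'd point the reading `dd` at /dev/sda1 and so on. ext2/ext3 keep the little-endian magic 0xEF53 at byte offset 1080 of the partition:

```shell
# demo on a scratch file; on the real system you'd read from /dev/sda1 etc.
# ext2/ext3 superblocks carry the magic bytes 0x53 0xef at byte offset 1080
dd if=/dev/zero of=demo.img bs=1k count=2 2>/dev/null                        # fake 2 KiB "partition"
printf '\123\357' | dd of=demo.img bs=1 seek=1080 conv=notrunc 2>/dev/null   # plant 0x53 0xef (octal escapes)
magic=$(dd if=demo.img bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
if [ "$magic" = "53ef" ]; then echo "looks like ext2/ext3"; else echo "no ext2 magic here"; fi
```

If mke2fs really ran on your old root, the member that holds its start should show this magic where JFS data used to be.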

----------

## simon78

Thanks. That confirms it, I wiped the wrong device. Probably because md* numbers start at 0 and sda* numbers start at 1, I selected the wrong number out of old habit (not that used to RAID).

It's not possible to mount any sd* devices using this Ubuntu CD:

```
mount: /dev/sda1 already mounted or /mnt/sda1/ busy
```

BTW, is a striped (RAID0) member device really mountable on its own? I can understand that mirrored members are mountable, but is the data really written sequentially on a single stripe member?

Any suggestions? Would it be better to download the Gentoo live CD and try mounting things from there?

----------

## NeddySeagoon

simon78,

You can mount any one part of a mirror on its own. Do it read only so you don't need to rebuild the mirror because the parts have different timestamps.

What happens when you try to mount one part of a raid0 depends on which part and the chunk size.

It may appear to mount but you won't like the result.
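To put a number on that: a two-disk raid0 with the usual 64K chunk (that's an assumption - check your own array's Chunk Size) alternates data between the members, so array byte N lives on member (N / 65536) mod 2. A trivial sketch of the mapping:

```shell
# which raid0 member holds a given array byte offset?
# assumes 2 members and 64 KiB chunks - substitute your array's real values
chunk=65536
members=2
for off in 0 65535 65536 131072; do
  echo "byte $off -> member $(( (off / chunk) % members ))"
done
```

So roughly every other 64K block of the filesystem is on the other disk, which is why a lone raid0 member won't mount sensibly.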

There is no unformat - your data is still there but it's all unlinked now.

----------

## simon78

Thanks for your answers!

I do hope there is some useful data remaining on these disks. The problem is how to extract anything from them. The filesystem is JFS, IIRC. Does anyone have any recommendations? I would like to try TestDisk and PhotoRec, but it's not trivial to install them when I don't have a Gentoo system running. I have a spare 8 GB disk where I might do a fresh Gentoo install. However, 'emerge system' takes quite some time, as I recall!

Best regards,

Simon

----------

## NeddySeagoon

simon78,

Do a stage 3 install on your spare drive - no emerge system required.

Let me get this right: you have formatted a JFS partition as ext2 and want to recover some data.

You have now made new metadata over the top of the old, so some of your data will be destroyed.

Good luck with the recovery

----------

## simon78

Most certainly some data will be destroyed. I do hope, that with some luck, some of the most important files can be recovered.

Will do a stage3 now. Thanks again.

----------

## simon78

OK, I'm running from the new install. Almost - as it did not boot from the old PATA drive but tried the SATAs with the faulty RAID, I booted a live CD and chrooted into the stage 3 install on the PATA.

The partition tables seem OK, as far as I can see (but there is no reason they shouldn't be, as I have not fiddled with them).

fdisk listings of sda and sdb

```
Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1          17      136521   fd  Linux raid autodetect
/dev/sda2              18         142     1004062+  fd  Linux raid autodetect
/dev/sda3             143       30401   243055417+  fd  Linux raid autodetect

Disk /dev/sdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          17      136521   fd  Linux raid autodetect
/dev/sdb2              18         142     1004062+  fd  Linux raid autodetect
/dev/sdb3             143       30401   243055417+  fd  Linux raid autodetect
```

And mdadm --examine /dev/sda1 (member of /boot)

```
/dev/sda1:
          Magic : a92b4efc
        Version : 00.90.03
           UUID : 725bf2d1:74ea656e:e7f0d536:2cce4c0c
  Creation Time : Tue Aug  8 14:52:46 2006
     Raid Level : raid1
    Device Size : 136448 (133.27 MiB 139.72 MB)
     Array Size : 136448 (133.27 MiB 139.72 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Update Time : Wed Sep 13 12:23:25 2006
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2f134454 - correct
         Events : 0.120

      Number   Major   Minor   RaidDevice State
this     1       8        1        1      active sync   /dev/sda1
   0     0       8       17        0      active sync   /dev/sdb1
   1     1       8        1        1      active sync   /dev/sda1
```

sdb1 (member of /boot)

```
/dev/sdb1:
          Magic : a92b4efc
        Version : 00.90.03
           UUID : 725bf2d1:74ea656e:e7f0d536:2cce4c0c
  Creation Time : Tue Aug  8 14:52:46 2006
     Raid Level : raid1
    Device Size : 136448 (133.27 MiB 139.72 MB)
     Array Size : 136448 (133.27 MiB 139.72 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 1
    Update Time : Wed Sep 13 12:23:25 2006
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 2f134462 - correct
         Events : 0.120

      Number   Major   Minor   RaidDevice State
this     0       8       17        0      active sync   /dev/sdb1
   0     0       8       17        0      active sync   /dev/sdb1
   1     1       8        1        1      active sync   /dev/sda1
```

sda2 (member of swap)

```
/dev/sda2:
          Magic : a92b4efc
        Version : 00.90.03
           UUID : 6a69d311:e190128b:a075b66e:8e4b346f
  Creation Time : Tue Aug  8 14:53:02 2006
     Raid Level : raid0
    Device Size : 1003968 (980.60 MiB 1028.06 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Update Time : Tue Aug  8 14:53:02 2006
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : ada78451 - correct
         Events : 0.1
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       18        1      active sync   /dev/sdb2
   0     0       8        2        0      active sync   /dev/sda2
   1     1       8       18        1      active sync   /dev/sdb2
```

sdb2 (member of swap)

```
/dev/sdb2:
          Magic : a92b4efc
        Version : 00.90.03
           UUID : 6a69d311:e190128b:a075b66e:8e4b346f
  Creation Time : Tue Aug  8 14:53:02 2006
     Raid Level : raid0
    Device Size : 1003968 (980.60 MiB 1028.06 MB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2
    Update Time : Tue Aug  8 14:53:02 2006
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : ada7843f - correct
         Events : 0.1
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8        2        0      active sync   /dev/sda2
   0     0       8        2        0      active sync   /dev/sda2
   1     1       8       18        1      active sync   /dev/sdb2
```

sda3 (member of the formatted raid0 root partition)

```
/dev/sda3:
          Magic : a92b4efc
        Version : 00.90.03
           UUID : e8978cce:cf8c959b:5474883b:3b576943
  Creation Time : Tue Aug  8 14:53:14 2006
     Raid Level : raid0
    Device Size : 243055296 (231.80 GiB 248.89 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 3
    Update Time : Tue Aug  8 14:53:14 2006
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 894a30db - correct
         Events : 0.1
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       19        1      active sync   /dev/sdb3
   0     0       8        3        0      active sync   /dev/sda3
   1     1       8       19        1      active sync   /dev/sdb3
```

and sdb3, also member of the formatted root partition

```
/dev/sdb3:
          Magic : a92b4efc
        Version : 00.90.03
           UUID : e8978cce:cf8c959b:5474883b:3b576943
  Creation Time : Tue Aug  8 14:53:14 2006
     Raid Level : raid0
    Device Size : 243055296 (231.80 GiB 248.89 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 3
    Update Time : Tue Aug  8 14:53:14 2006
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 894a30c9 - correct
         Events : 0.1
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8        3        0      active sync   /dev/sda3
   0     0       8        3        0      active sync   /dev/sda3
   1     1       8       19        1      active sync   /dev/sdb3
```

I can do a

```
mdadm --assemble --verbose /dev/md0 /dev/sd[a-b]1
```

to assemble the former /boot partition without any problems. However, I do not know whether there are any risks associated with doing the same on the root partition (sda3 and sdb3). Is assembling the RAID a read-only operation, or may it write to the disks? As far as I can understand, the RAID should assemble fine, but mounting will not work. But if I can assemble it without any risk, I can at least try some recovery tools. My strategy is to first try some recovery tools like PhotoRec to restore some data. Then I will try a fsck.jfs and hope that it works out, or at least gives me ideas on how to continue.

Any ideas, warnings or encouragements are greatly appreciated.

Best regards,

Simon

----------

## NeddySeagoon

simon78,

It depends on what you formatted: a partition that was a member of the raid (/dev/sdX) or the raid volume (/dev/mdY) itself.

Attempting raid assembly should be risk free - it looks like the raid info is still on both parts of the raid.

If the format completed, I would expect the raid to be mountable (do it with -o ro) under its new filesystem, but empty.

I wouldn't even try the mount until you have assembled the raid set and made an image of the raid somehow,

or even image the two underlying partitions of the raid0 and try feeding the images to

```
mdadm --assemble ....
```

Everything in Linux is a file, so that should work. The main thing here is to make a backup to work with, so you don't do any further damage to your data.
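The backup step itself is plain dd plus a byte-for-byte check. The sketch below runs on a scratch file standing in for the real partition (all file names here are invented); on the real system the input would be /dev/sda3 or the assembled /dev/md device:

```shell
# image a block device (faked here with a scratch file) and verify the copy
dd if=/dev/urandom of=fake_partition bs=1k count=64 2>/dev/null  # stand-in for /dev/sda3
dd if=fake_partition of=partition.img bs=1M 2>/dev/null          # the actual backup step
cmp fake_partition partition.img && echo "image verified"
```

The cmp at the end is cheap insurance that the image really matches the source before you start experimenting on it.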

Unfortunately, I know nothing of the internal structure of JFS. You will likely need to recreate the JFS superblock somehow and see what happens when you patch the recreated superblock into the image.

This means you need your original damaged root - untouched. A backup of it, a copy to work with, and somewhere to put recovered data.

If you are not *absolutely sure* of the original filesystem, your first step after making your images is to attempt to recover /etc/fstab by searching the partition for some text you know you will find. hexedit works well for this. Then you can look into faking the superblock of the right filesystem.
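Alongside hexedit, plain grep can do the searching: -a treats the binary as text and -b prints the byte offset of each matching line. A small sketch on a scratch image (the planted offset and fstab-like content are made up for the demo; on real data you'd grep the partition image):

```shell
# build an 8 KiB scratch image with an fstab-like line planted at offset 4096
dd if=/dev/zero of=scratch.img bs=1k count=8 2>/dev/null
printf '\n/dev/md0 /boot ext2 noauto 1 2\n' | dd of=scratch.img bs=1 seek=4095 conv=notrunc 2>/dev/null
# -a: treat binary as text, -b: show the byte offset of the matching line
grep -a -b '/boot' scratch.img
```

The reported offset tells you where in the image the text sits, which you can then inspect with hexedit or dd.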

If you are really lucky, JFS (or whatever it was) saves copies of the superblock within the filesystem, like ext2 and ext3 do, but hopefully at different locations. You could then restore the superblock with some careful use of the dd command.
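The "careful dd" part amounts to copying one block between images at a matching offset with skip/seek. Everything below runs on scratch files (all names invented); on real data you'd patch a working copy, never the original or its backup:

```shell
# patch a single 1 KiB block from a "good" image into a "damaged" one at the same offset
dd if=/dev/urandom of=good.img bs=1k count=16 2>/dev/null
dd if=/dev/zero    of=bad.img  bs=1k count=16 2>/dev/null
# skip= reads from block 1 of the input, seek= writes to block 1 of the output;
# conv=notrunc stops dd truncating the output file after the copied block
dd if=good.img of=bad.img bs=1k skip=1 seek=1 count=1 conv=notrunc 2>/dev/null
# confirm block 1 now matches in both images
dd if=good.img of=g.blk bs=1k skip=1 count=1 2>/dev/null
dd if=bad.img  of=b.blk bs=1k skip=1 count=1 2>/dev/null
cmp g.blk b.blk && echo "block restored"
```

conv=notrunc is the easy one to forget - without it, dd truncates the target image right after the patched block.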

You are going to want a lot of disk space for this operation.

----------

## simon78

OK, I have now arranged some disk space on the LAN (NFS). How do I copy /dev/sda3 and /dev/sdb3 using dd? Are there any special options I need to pass to dd?

----------

## NeddySeagoon

simon78,

```
dd if=/dev/sda3 of=/path/to/file
```

will copy /dev/sda3 to a file, block by block.

There is no safety net - check that the if= and of= are the right way round before you press return.

You will bring the network to its knees doing this over NFS, so if it's not your own LAN, do it while the LAN is quiet.

If you intend to bring up the raid from two NFS-hosted files, you probably want to practice with a small raid first.

Your /boot would be good for that - it lets you test everything without copying lots of data.

----------

## simon78

Thanks! I'm dd-ing over the network now. I'm alone on this network and get a throughput of about 10000 kB/s, which gives roughly 7.5 h for each image.

Thanks for the tip - I will try all the commands on my former /boot first, to check that I'm doing everything right.

----------

## simon78

Hi again. There was not enough room on the device to back up my old root devices. So, while I try to arrange some more GB, I recreated the scenario on my former /boot. This is what I have done so far:

Created the raid0, formatted it and mounted it to 'simulate' the state of my root before my mistake.

```
mdadm --create --verbose /dev/md1 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.jfs /dev/md1
mount /dev/md1 /mnt/md1
cp /usr/portage/distfiles/* /mnt/md1/
```

Then I started to destroy the data by formatting the device. IIRC the device was also used as swap for a very short period of time.

```
umount /dev/md1
mkswap /dev/md1
swapon /dev/md1
<surfed the web for a couple of minutes>
swapoff /dev/md1
mke2fs /dev/md1
mount /dev/md1 /mnt/md1
cp /usr/portage/distfiles/fuse-2.6.0-pre2.tar.gz /mnt/md1
umount /dev/md1
```

Now starting the recovery. First dd to create backup images, then mount them as loopback devices.

```
dd if=/dev/sda1 of=sda1.practice bs=1M
dd if=/dev/sdb1 of=sdb1.practice bs=1M
md5sum sd* > md5sums
losetup /dev/loop1 sda1.practice
losetup /dev/loop2 sdb1.practice
```

Recreate the raid0

```
mdadm --create /dev/md10 --level=0 --auto=yes --raid-devices=2 /dev/loop1 /dev/loop2
fsck.jfs -n /dev/md10 > fsck.jfs.-n.txt 2>&1
```

fsck.jfs.-n.txt:

```
fsck.jfs version 1.1.8, 03-May-2005
processing started: 9/16/2006 13.27.37
The current device is:  /dev/md10
The superblock does not describe a correct jfs file system.
If device /dev/md10 is valid and contains a jfs file system,
then both the primary and secondary superblocks are corrupt
and cannot be repaired, and fsck cannot continue.
Otherwise, make sure the entered device /dev/md10 is correct.
```

Following this thread: http://sourceforge.net/mailarchive/message.php?msg_id=10803545

I took a shot at recreating a superblock, with no luck.

```
dd if=/dev/zero of=working_jfs_image bs=1M count=64
losetup /dev/loop3 working_jfs_image
mkfs.jfs /dev/loop3
dd bs=4096 skip=8 seek=8 count=1 if=/dev/loop3 of=/dev/md10
fsck.jfs -n /dev/md10
```

The last fsck yields the same result as above. Any comments/suggestions/pointers on how to recreate the superblock would be greatly appreciated.
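One more data point from my digging: if I read the jfsutils source right, the Linux JFS primary superblock sits at byte offset 32768 and begins with the magic string "JFS1" - treat both the offset and the magic as assumptions to verify against the source. A scratch-file sketch of checking for it (on the real array the input would be /dev/md10):

```shell
# plant and then look for the JFS magic "JFS1" at offset 32768 (offset is an assumption!)
dd if=/dev/zero of=jfs_demo.img bs=1k count=64 2>/dev/null
printf 'JFS1' | dd of=jfs_demo.img bs=1 seek=32768 conv=notrunc 2>/dev/null
sig=$(dd if=jfs_demo.img bs=1 skip=32768 count=4 2>/dev/null)
if [ "$sig" = "JFS1" ]; then echo "JFS magic found"; else echo "no JFS magic"; fi
```

If the magic is missing at that offset on the recreated array, the dd patch probably landed in the wrong place (or the offset assumption is wrong for this JFS version).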

Best Regards,

Simon

----------

## NeddySeagoon

simon78,

It looks like you have a lot of reading to do. This IBM page looks like a good primer on JFS.

It appears there is JFS and JFS2. The dd command you have applies to JFS2. I don't know which one the Linux kernel implements; hopefully there will be something useful in comments embedded in the kernel JFS code or in /usr/src/Documentation/....

----------

## simon78

As far as I can understand, there is an AIX JFS which exists in two versions, JFS1 and JFS2. The Linux JFS is not compatible with the AIX JFS and exists in only one version (JFS).

http://en.wikipedia.org/wiki/JFS

----------

## simon78

 *NeddySeagoon wrote:*   

> simon78,
> 
> There is no unformat - your data is still there but its all unlinked now.

 

There is now  :Smile: 

I have implemented jfsrec, a tool that makes a read-only extraction of files and directories from a damaged JFS volume.

It's available under the GPL from: http://sourceforge.net/projects/jfsrec/

----------

## NeddySeagoon

simon78,

I hope it works for you.

----------

