# [solved] howto recover soft raid from medion nas?

## Mgiese

dear peers,

a friend of mine had issues with his raid1 medion nas. since it is no longer supported at all (the admin tool is a flash browser app) we thought we'd give it a try on gentoo. i guess the raid in question was a raid1 (two simple mirrored hdds)

when my medion nas crashed (hardware failure, 1 year ago) i could easily mount the AIX fs on my gentoo box and start copying the files. with the raid nas i am at a loss.

the disks are recognized as /dev/sdb and /dev/sdc, each with a single partition of type Linux RAID.

lsblk shows them as follows:

```
# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sdb           8:16   0 931.5G  0 disk 
└─sdb1        8:17   0 931.5G  0 part 
  └─md127     9:127  0     0B  0 1    
sdc           8:32   0 931.5G  0 disk 
└─sdc1        8:33   0 931.5G  0 part 
  └─md127     9:127  0     0B  0 1
```

when i tried to mount /dev/sdc1 or /dev/sdb1, mount complained that the devices were busy. so i googled and found out that i needed to run "mdadm --manage --stop /dev/md127", and i guess this is when the crap started  :Sad:  i think this is where i lost the original metadata for the raid1 md127

next i tried to assign both partitions to a new /dev/md0 device, first with:

```
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

```

 and later with various commands like :

```
# mdadm -A /dev/md0 -f --update=summaries /dev/sdb1
mdadm: Merging with already-assembled /dev/md/0
mdadm: --update=summaries not understood for 1.x metadata

# mdadm --assemble --run /dev/md0 /dev/sdb1 --force
mdadm: Merging with already-assembled /dev/md/0
mdadm: failed to RUN_ARRAY /dev/md/0: Invalid argument

# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sdb           8:16   0 931.5G  0 disk 
└─sdb1        8:17   0 931.5G  0 part 
  └─md0       9:0    0     0B  0 1    
sdc           8:32   0 931.5G  0 disk 
└─sdc1        8:33   0 931.5G  0 part 
  └─md0       9:0    0     0B  0 1    
```

i thought i had reached my goal of creating a new /dev/md0 with both partitions in question assigned. but mounting /dev/md0 always leads to 

```
# mount /dev/md0 /mnt/test

mount: /mnt/test: can't read superblock on /dev/md0
```

i tried to fix this by looking for the superblock at various locations, such as:

```
# e2fsck -f -b 8193 -y /dev/md0
# e2fsck -f -b 16384 -y /dev/md0
# e2fsck -f -b 16385 -y /dev/md0
# e2fsck -f -b 32768 -y /dev/md0
# e2fsck -f -b 32769 -y /dev/md0
```

but i guess i am looking for a needle in a haystack here, not to mention my suspicion that on this newly created /dev/md0 there is no superblock at all
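for what it's worth, guessing -b values blindly can be avoided: mke2fs has a dry-run mode that prints where the backup superblocks would sit for a device of a given size. a minimal sketch, assuming e2fsprogs is installed; the image file here is only a stand-in for the real device:

```shell
# -n = dry run: print what mke2fs WOULD do, without writing anything.
# -F lets it run on a plain image file (a stand-in for /dev/md0).
# note: the backup locations depend on the block size mke2fs picks.
truncate -s 1G /tmp/demo.img
mke2fs -n -F /tmp/demo.img | grep -A1 'Superblock backups'
# the printed block numbers are candidates for: e2fsck -f -b <block> -y /dev/md0
```

that at least narrows the search to the handful of locations mke2fs would actually have used.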

```
# cat /proc/mdstat
Personalities : 
md0 : inactive sdb1[0] sdc1[1]
      1953260909 blocks super 1.2

unused devices: <none>
```

i had a short look at ddrescue to recover files from /dev/sdb1 or /dev/sdc1, but backed off so as not to do more damage and to keep the chance of recovering some data alive.

CAN SOMEONE HELP ??

i would really like to help my friend recover the files on the disks (personal data as well as a multimedia collection)

any help or suggestion is very much appreciated. thanks in advance!

----------

## Mgiese

if needed i could post (PM) the console log. after my first restart i removed the /dev/md127 link from /dev/sdb1, then put sdc in the pc and removed the /dev/md127 link from sdc

----------

## Mgiese

after rebooting i have the impression that my changes were not permanent, since /dev/sdc1 and /dev/sdb1 are reassigned to /dev/md127 (as set up by the medion nas originally):

```
# cat /proc/mdstat

Personalities : 

md127 : inactive sdc1[1] sdb1[0]

      1953260909 blocks super 1.2

       

unused devices: <none>

# lsblk

NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS

                                      

sdb           8:16   0 931.5G  0 disk 

└─sdb1        8:17   0 931.5G  0 part 

  └─md127     9:127  0     0B  0 1    

sdc           8:32   0 931.5G  0 disk 

└─sdc1        8:33   0 931.5G  0 part 

  └─md127     9:127  0     0B  0 1    

```

but mounting /dev/md127 also fails:

```
# mount /dev/md127 /mnt/test

mount: /mnt/test: can't read superblock on /dev/md127.
```

any suggestions ?

----------

## Mgiese

```
# mdadm --examine /dev/sdb1

/dev/sdb1:

          Magic : a92b4efc

        Version : 1.2

    Feature Map : 0x1

     Array UUID : 09261958:edd4bc92:a0cffe03:d856e242

           Name : cori:0  (local to host cori)

  Creation Time : Wed Sep 29 00:16:41 2021

     Raid Level : raid1

   Raid Devices : 2

 Avail Dev Size : 1953260909 (931.39 GiB 1000.07 GB)

     Array Size : 976630400 (931.39 GiB 1000.07 GB)

  Used Dev Size : 1953260800 (931.39 GiB 1000.07 GB)

    Data Offset : 264192 sectors

   Super Offset : 8 sectors

   Unused Space : before=264112 sectors, after=109 sectors

          State : active

    Device UUID : 52e45058:ab435e01:d2847c20:62748790

Internal Bitmap : 8 sectors from superblock

    Update Time : Wed Sep 29 00:16:41 2021

  Bad Block Log : 512 entries available at offset 16 sectors

       Checksum : fa952ce0 - correct

         Events : 0

   Device Role : Active device 0

   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
```

AND 

```
# mdadm --examine /dev/sdc1

/dev/sdc1:

          Magic : a92b4efc

        Version : 1.2

    Feature Map : 0x1

     Array UUID : 09261958:edd4bc92:a0cffe03:d856e242

           Name : cori:0  (local to host cori)

  Creation Time : Wed Sep 29 00:16:41 2021

     Raid Level : raid1

   Raid Devices : 2

 Avail Dev Size : 1953260909 (931.39 GiB 1000.07 GB)

     Array Size : 976630400 (931.39 GiB 1000.07 GB)

  Used Dev Size : 1953260800 (931.39 GiB 1000.07 GB)

    Data Offset : 264192 sectors

   Super Offset : 8 sectors

   Unused Space : before=264112 sectors, after=109 sectors

          State : active

    Device UUID : 72dd8714:839e5817:905941ba:10e280be

Internal Bitmap : 8 sectors from superblock

    Update Time : Wed Sep 29 00:16:41 2021

  Bad Block Log : 512 entries available at offset 16 sectors

       Checksum : 9484c346 - correct

         Events : 0

   Device Role : Active device 1

   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
```

i am slightly positive at the moment

----------

## Mgiese

```
 # mdadm --verbose --assemble --force /dev/md127 /dev/sdb1 /dev/sdc1

mdadm: looking for devices for /dev/md127

mdadm: /dev/sdb1 is busy - skipping

mdadm: /dev/sdc1 is busy - skipping

```

----------

## NeddySeagoon

Mgiese,

```
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
```

That command is destructive and it looks like it succeeded from 

```
# mdadm --examine /dev/sdc1

/dev/sdc1:

          Magic : a92b4efc

        Version : 1.2

    Feature Map : 0x1

     Array UUID : 09261958:edd4bc92:a0cffe03:d856e242

           Name : cori:0  (local to host cori)

  Creation Time : Wed Sep 29 00:16:41 2021
```

What that did was write a new set of version 1.2 raid metadata at the start of the array, then sync the raid set by ensuring both mirrors contain the same data.

The location of the raid superblock has changed over the years. With metadata 0.90 it was at the end of the raid set, and individual members of a mirror could be mounted read only as if the raid wasn't there at all.

First question then, is what raid metadata was there originally? 

If it was 0.90, the filesystem is damaged as the new superblock is occupying the space where it was.

Even if the original mdadm superblock was version 1.2, the default values of various other parameters have changed over the years.

They all need to be right to get your data back.

Luckily, as it's raid1, many of them don't matter, and the filesystem, if it's still there, can be mounted without any of it.

We need to calculate where on the drive the filesystem starts.

The partition table comes first on the drive.  The first partition follows it. It will start at either sector 63 or sector 2048, depending on the age of the tool that created it. 

What does 

```
fdisk -l
```

say?

Inside the partition, the raid metadata follows, starting after a 4k gap (8 blocks).

The raid superblock is 1k, but hopefully there is padding to a 4k boundary.    That's another 2 or 8 blocks.

Given that, it's possible to do the arithmetic and attempt a read only mount of the filesystem on one member of the raid set.
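The arithmetic sketched above, in shell form (the numbers are the guesses from this post; part_start has to come from fdisk -l, so 63 here is only a placeholder):

```shell
# candidate filesystem start on the raw disk, in 512-byte blocks
part_start=63   # first partition start sector (placeholder until fdisk -l says otherwise)
gap=8           # 4k gap before the raid superblock
super=2         # 1k superblock; may be padded to 8 blocks on 4k drives
echo $(( (part_start + gap + super) * 512 ))   # byte offset for mount -o ro,offset=
```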

----------

## Mgiese

thanks for your effort, here is the output of fdisk -l :

```

Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors

Disk model: Hitachi HDT72101

Units: sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disklabel type: gpt

Disk identifier: CDE57209-F6C4-41DF-BD21-47A1FE3E9928

Device     Start        End    Sectors   Size Type

/dev/sdb1     34 1953525134 1953525101 931.5G Linux RAID

Disk /dev/sdc: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors

Disk model: Hitachi HDT72101

Units: sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disklabel type: gpt

Disk identifier: 8A69F6A4-0807-4792-8F96-C94DBF3B9D4A

Device     Start        End    Sectors   Size Type

/dev/sdc1     34 1953525134 1953525101 931.5G Linux RAID
```

what to do next ?

----------

## NeddySeagoon

Mgiese,

```
...

Disklabel type: gpt

Disk identifier: CDE57209-F6C4-41DF-BD21-47A1FE3E9928

Device     Start        End    Sectors   Size Type

/dev/sdb1     34 1953525134 1953525101 931.5G Linux RAID

...
```

That's horribly non standard. A normal gpt reserves 34 sectors for its header and 128 partition entries, and modern tools put the start of the first partition at block 2048, not at block 34.

Here the first partition starts at block 34, immediately after the partition table.

Let's take that at face value: the partition starts at block 34.

Then there are 8 blocks of empty space.

That's followed by 2 blocks of raid metadata ... hopefully padded to 8 blocks for devices with a 4k physical block size.

That puts the filesystem start at block 50 or thereabouts.

Your partition start at block 34 is not 4k aligned, as it's not divisible by eight.  mdadm may have allowed for that.

A disk block is 512 bytes, so the filesystem cannot start before byte 25600 (50 * 512), and may be at block 56 for 4k alignment.

So fishing for a filesystem superblock that may not be there, try 

```
mount -o ro,offset=25600 /dev/sdb /mount/someplace 
```

ro so we don't write anything, and it really is /dev/sdb (the whole disk), as the offset accounts for all the space in front of the filesystem.

Feel free to try other offsets.

Some other things to try.

Do a scan with testdisk and see if it finds potential partitions. Do not let it write anything to the drive. Just make notes.

Look at the last few blocks of the partition with a hex editor.

If you find version 0.90 raid metadata there, you know that the filesystem's primary superblock is gone, as it was where the version 1.2 superblock now is.

----------

## Mgiese

thank you, i tried :

```
# mount -o ro,offset=25600 /dev/sdb /mnt/test 

NTFS signature is missing.

Failed to mount '/dev/loop0': Invalid argument

The device '/dev/loop0' doesn't seem to have a valid NTFS.

Maybe the wrong device is used? Or the whole disk instead of a

partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
```

i really don't know who or what is assuming NTFS here ?!?

trying different offsets is like looking for a needle in a haystack, am i right ?

do you have a good syntax at hand to search for a partition using testdisk? i am installing it atm

thanks so far

----------

## NeddySeagoon

Mgiese,

Start testdisk, it's all ncurses menu driven. 

25600 is 50 * 512 (fifty blocks).

You could try 50, 51 ... 56. That's up to and including block 56, for 7 tests.

The error message won't mean a lot, as there can be anything at what you are assuming is the filesystem start.

The kernel will try all the filesystems it knows before it gives up.

Try block 57 too if you want to guard against an off-by-one error.
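Rather than typing each attempt by hand, the candidate commands can be generated in one go. A sketch (it only prints the mount commands, so nothing on the disk is touched; /dev/sdb and /mnt/test are the device and mountpoint from this thread):

```shell
# print a read-only mount attempt for each candidate start block (50..57)
for blk in $(seq 50 57); do
    printf 'mount -o ro,offset=%d /dev/sdb /mnt/test\n' $((blk * 512))
done
```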

-- edit --

To gain some confidence in the process, try mounting the first filesystem on your HDD using the same method.

Look at the partition start in fdisk, multiply that by the block size ...

There is nothing like knowing the process works when all the parts come together. It will help your confidence.

----------

## Mgiese

i started fooling around with the 2*1tb hdds. but now we found the old hard discs (2*512gb), which should be untouched. with them i tried everything you suggested:

```
50*512
# mount -o ro,offset=25600 /dev/sdd /mnt/test
# mount -o ro,offset=25600 /dev/sdc /mnt/test

51*512
# mount -o ro,offset=26512 /dev/sdb /mnt/test
# mount -o ro,offset=26512 /dev/sdc /mnt/test

52*512
# mount -o ro,offset=26624 /dev/sdc /mnt/test
# mount -o ro,offset=26624 /dev/sdb /mnt/test

53*512
# mount -o ro,offset=27136 /dev/sdb /mnt/test
# mount -o ro,offset=27136 /dev/sdc /mnt/test

54*512
# mount -o ro,offset=27648 /dev/sdb /mnt/test
# mount -o ro,offset=27648 /dev/sdc /mnt/test

55*512
# mount -o ro,offset=28160 /dev/sdb /mnt/test
# mount -o ro,offset=28160 /dev/sdc /mnt/test

56*512
# mount -o ro,offset=28672 /dev/sdb /mnt/test
# mount -o ro,offset=28672 /dev/sdc /mnt/test

57*512
# mount -o ro,offset=29184 /dev/sdb /mnt/test
# mount -o ro,offset=29184 /dev/sdc /mnt/test
```

about the confidence thing :

i would like to try that with my own hard disc, but atm i have no live linux at hand, and with a mounted fs it won't work. do you maybe know some parameters i can pass to the kernel so that it does not mount the fstab entries?

and would the syntax be the same? (50*512, or do i have to calculate the offset differently??)

i run ext4 on my discs; what metadata does ext4 use?? i assumed only the raid system uses metadata.... 

am i in a better position now that i know nothing has changed the hdds' filesystem so far? they are still attached to md127 (they were in the same medion nas before)

```
# mount /dev/md127 /mnt/test

mount: /mnt/test: can't read superblock on /dev/md127.

# cat /proc/mdstat

Personalities : 

md127 : inactive sdc1[0](S) sdb1[2](S)

      976510957 blocks super 1.2

       

unused devices: <none>

```

and

```

sdb           8:16   0 465.8G  0 disk 

└─sdb1        8:17   0 465.8G  0 part 

  └─md127     9:127  0     0B  0 md   

sdc           8:32   0 465.8G  0 disk 

└─sdc1        8:33   0 465.8G  0 part 

  └─md127     9:127  0     0B  0 md

```

----------

## Mgiese

how can i force testdisk not to write anything to the filesystem? does it ask for permission before writing, after scanning?

----------

## Mgiese

 *Mgiese wrote:*   

> i started fooling around with the 2*1tb hdds. but now we found the old hard discs (2*512gb), which should be untouched. with them i tried everything you suggested and could not mount the partition :
> 
> ```
> 50*512
> 
> ...

 

----------

## Mgiese

the deep analysis of testdisk shows the following :

```
Linux RAID               0   0 35 60784 248 50  976510600 [home]

HPFS - NTFS              0   1  1    31 254 63     514017 [SystemReserved]

HPFS - NTFS              0   1  5    27 254 63     449753

HPFS - NTFS             28   0  1 26139 254 56  419489273

HPFS - NTFS              0   1  1    31 254 63     514017 [SystemReserved]

HPFS - NTFS             32   0  1 26147 254 60  419553537

```

is this by any chance the superblock location? : 514017 [SystemReserved] 

i tried :

```
# mount -o ro,offset=26148 /dev/sdb /mnt/test

# mount -o ro,offset=26139 /dev/sdb /mnt/test 

# mount -o ro,offset=26147 /dev/sdb /mnt/test
```

but still i don't see anything in /mnt/test. also the error is always the same:

```
NTFS signature is missing.

Failed to mount '/dev/loop0': Invalid argument

The device '/dev/loop0' doesn't seem to have a valid NTFS.

Maybe the wrong device is used? Or the whole disk instead of a

partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
```

----------

## NeddySeagoon

Mgiese,

As process validation, a partitioned USB stick will do.

If /boot is mounted, you can unmount that. Then mount it again for testing. 

50*512 is 25600; then 51*512 is 26112, not the 26512 that you posted.

fdisk will tell you the block size. USB sticks are rarely 512b blocks. However, for HDD at least, 512b blocks are usually faked on drives with 4k physical blocks, so 512b is still correct.

Yes, the raid metadata describes the on-disk layout of the raided space to the kernel, so the kernel knows how to treat the space as a single device.

Changing/destroying the raid metadata does not harm the data on the raid set. It is still there. Rather like the partition table, which describes where filesystems can be found.

Removing the partition table does not harm the data, it's just more difficult to access.  

The risk is that the original metadata was version 0.90, at the end of the partition and by using --create, you have overwritten the start of the filesystem with the new metadata format.

If testdisk doesn't show some potential filesystems, the next step is to look at the last MB of data on the drive with a hex editor.

The last 34 blocks should be a copy of the GPT. What comes in the last few blocks just before that is the interesting bit.

More on that once testdisk has done its thing.

Do NOT let testdisk write anything.

----------

## Mgiese

i am a bit confused by the testdisk output :

```
The harddisk (500 GB / 465 GiB) seems too small! (< 785 GB / 731 GiB)

Check the harddisk size: HD jumper settings, BIOS detection...

The following partition can't be recovered:

     Partition               Start        End    Size in sectors

>  MS Data                976768064 1533468504  556700441

[ Continue ]

NTFS, blocksize=4096, 285 GB / 265 GiB
```

more output to follow

```
Disk /dev/sdc - 500 GB / 465 GiB - CHS 60801 255 63

     Partition               Start        End    Size in sectors

>D Linux Raid                    34  976510633  976510600 [home]

 D MS Data                420067624  976768064  556700441
```

and 

```
Disk /dev/sdc - 500 GB / 465 GiB - CHS 60801 255 63

     Partition               Start        End    Size in sectors

>D Linux Raid                    34  976510633  976510600 [home]

 D MS Data                       63     514079     514017 [SystemReserved]

 D MS Data                       67     449819     449753

 D Linux filesys. data       262176  976772767  976510592

 D Linux filesys. data       262178  976772769  976510592

 D MS Data                   449819     899571     449753

 D MS Data                   449820  419939092  419489273

 D MS Data                   514079    1028095     514017

 D MS Data                   514080  420067616  419553537

 D Linux Swap                540706    1564689    1023984

 D MS Data                419939100  976768060  556828961 [Daten]

 D MS Data                420067620  976768060  556700441 [Daten]
```

but somehow it still analyses cylinders...

----------

## Mgiese

i can answer that question now :  *Quote:*   

> 
> 
> First question then, is what raid metadata was there originally? 

 

since the 500GB hdds haven't been touched for writing:

```
# mdadm --examine /dev/sdb1 

/dev/sdb1:

          Magic : a92b4efc

        Version : 1.2

    Feature Map : 0x0

     Array UUID : dcf141b3:b7cf1edf:fe280764:bb330d91

           Name : home

  Creation Time : Tue Jan 28 18:29:57 2020

     Raid Level : raid1

   Raid Devices : 2

 Avail Dev Size : 976510957 (465.64 GiB 499.97 GB)

     Array Size : 488255296 (465.64 GiB 499.97 GB)

  Used Dev Size : 976510592 (465.64 GiB 499.97 GB)

    Data Offset : 262144 sectors

   Super Offset : 8 sectors

   Unused Space : before=262064 sectors, after=365 sectors

          State : clean

    Device UUID : a961a37f:c6ec1bb8:526c60e7:2da95717

    Update Time : Fri Nov 13 18:14:56 2020

       Checksum : fece310b - correct

         Events : 708

```

this is the original information from one of the four discs. i take it that "Version : 1.2" refers to the metadata version, and i can hereby confirm my assumption that the raid level is indeed 1.
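Worth noting: the "Data Offset : 262144 sectors" field above says where the raid data (and hence the filesystem) begins inside the partition. Combined with the partition start sector, that gives a concrete mount offset to try. A sketch; 34 is the partition start sector that fdisk/testdisk reported for these disks:

```shell
# filesystem start = partition start + mdadm Data Offset, both in 512-byte sectors
part_start=34       # from fdisk/testdisk: the partition begins at sector 34
data_offset=262144  # from mdadm --examine: Data Offset : 262144 sectors
fs_sector=$((part_start + data_offset))
echo "$fs_sector $((fs_sector * 512))"  # candidate sector and byte offset for mount -o ro,offset=
```

(For what it's worth, 34 + 262144 = 262178, which matches one of the "Linux filesys. data" start sectors in the testdisk scans of these disks.)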

----------

## Mgiese

for the purpose of learning what i am doing, i use a knoppix liveUSB stick.

```

Disk /dev/sdd: 14.45 GiB, 15513354240 bytes, 30299520 sectors

Disk model: USB Flash Drive 

Units: sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disklabel type: dos

Disk identifier: 0x4c148216

Device     Boot   Start      End  Sectors  Size Id Type

/dev/sdd1  *         64  9031679  9031616  4.3G  0 Empty

/dev/sdd2       9031680  9062399    30720   15M ef EFI (FAT-12/16/32)

/dev/sdd3       9062400 30299519 21237120 10.1G 83 Linux

```

/dev/sdd1 gets mounted at /mnt/somewhere. i unmounted that and then tried to manually mount it via:

```
mount -o ro,offset=25600 /dev/sdd /mnt/test
```

but i get the same result as before : 

```
NTFS signature is missing.

Failed to mount '/dev/loop0': Invalid argument

The device '/dev/loop0' doesn't seem to have a valid NTFS.

Maybe the wrong device is used? Or the whole disk instead of a

partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
```

what am i missing, what am i doing wrong here? the block size is 512b. is the actual block size "Units: sectors of 1 * 512 = 512 bytes" or "Sector size (logical/physical): 512 bytes / 512 bytes" ??

----------

## Mgiese

ok, i understood something now, fdisk -l showed :

```
Device     Boot   Start      End  Sectors  Size Id Type

/dev/sdd1  *         64  9031679  9031616  4.3G  0 Empty

/dev/sdd2       9031680  9062399    30720   15M ef EFI (FAT-12/16/32)

/dev/sdd3       9062400 30299519 21237120 10.1G 83 Linux

```

so my offset for sdd1 is 64*512 = 32768, and indeed i could mount this partition via:

```
# mount -o ro,offset=32768 /dev/sdd /mnt/test
```

now i just need the right offset for the partition on sdb or sdc, i see

----------

## Mgiese

my final scan of 500gb sdb reports this :

```
Disk /dev/sdb - 500 GB / 465 GiB - CHS 60801 255 63

     Partition               Start        End    Size in sectors

>D Linux Raid                    34  976510633  976510600 [home]

 D Linux filesys. data           34  976773129  976773096

 D MS Data                       63     514079     514017 [SystemReserved]

 D MS Data                       67     449819     449753

 D Linux filesys. data       262176  976772767  976510592

 D Linux filesys. data       262178  976772769  976510592

 D MS Data                   449819     899571     449753

 D MS Data                   449820  419939092  419489273

 D MS Data                   449827  419939099  419489273

 D MS Data                   514079    1028095     514017

 D MS Data                   514080  420067616  419553537

 D MS Data                   514083  420067619  419553537

 D Linux Swap                540706    1564689    1023984

 D Linux filesys. data    115028410  117053517    2025108 [BDdlh*~I^Cm^C:ageH

 D Linux filesys. data    115028411  117053518    2025108 [BDdlh*~I^Cm^C:ageH

 D MS Data                419939099  839428371  419489273

 D MS Data                419939100  976768060  556828961 [Daten]

 D MS Data                420067619  839621155  419553537

 D MS Data                420067620  976768060  556700441 [Daten]

 D MS Data                420067624  976768064  556700441
```

----------

## NeddySeagoon

Mgiese,

You have already tried

```
Disk /dev/sdb - 500 GB / 465 GiB - CHS 60801 255 63

     Partition               Start        End    Size in sectors

>D Linux Raid                    34  976510633  976510600 [home]

 D Linux filesys. data           34  976773129  976773096 
```

 so no point in trying them again.

```
 D MS Data                       63     514079     514017 [SystemReserved] 
```

 looks interesting.

That's where old versions of fdisk put the start of the first partition.

So if we ignore the current partition table and assume that the raid set originally had raid metadata version 0.90,

that's where the filesystem would start.

Modern GPT tooling puts the start of the first partition at block 2048, but that's not listed, so it's probably not worth trying except for completeness ...

So ... does an offset of 63 blocks work?

All the rest are too far down the drive. They are false positives. Look at the overlaps too. The same disk space can belong to at most one partition. That is, partitions cannot overlap.

-- edit --

What is the date of manufacture of the HDD?

It should be on the drive label. That will give us a 'not before' date if we need old versions of fdisk and/or mdadm to discover the then defaults.

There is no hurry for this information yet, so don't handle the drives if you don't need to.

----------

## Mgiese

yeah, i tried block 63 as the start block; it did not work.

but testdisk gave me the possibility to look into the different filesystems. i found my files in:

```
 D Linux filesys. data       262178  976772769  976510592 
```

i used testdisk to copy those files, and i achieved what i wanted. thank you very much, i learned a lot from you!

interesting was that testdisk even found very old NTFS partitions (from a previous owner). that is interesting since the drive had been formatted and the raid had been created on it, and still all those files were there  :Smile:  i think in the future i will go on a treasure hunt for data on deleted and formatted drives, just to learn more about data that is lost but not really lost  :Very Happy: 

----------

## NeddySeagoon

Mgiese,

I'm pleased you found your data, 

I trust you learned from my signature too. :)

----------

## Goverp

 *Mgiese wrote:*   

> ...
> 
> interessting was, that testdisk even found very old NTFS partitions(from a previous owner). that is interessting since the drive had been formated and the raid had been created on it and still there were all those files
> 
> ...

 

hdparm security erase would have been their friend  :Smile: 

----------

## Mgiese

 *NeddySeagoon wrote:*   

> Mgiese,
> 
> I'm pleased you found your data, 
> 
> I trust you learned from my signature too. 

 

I learned so much during this topic!

Thanks Neddy!

----------

## NeddySeagoon

Mgiese,

On rotating rust, making a partition table does just that. It only writes the space that it uses.

Everything else is left as it was. All the filesystems are still there, intact. They are just harder to get at.

Likewise creating a raid set only writes the raid header for the array. Nothing else is harmed.

Even creating a filesystem only writes the filesystem metadata. User space is untouched.

With SSDs, creating a filesystem forces a user space discard. What that does varies with drive firmware.

It may erase the user space immediately, or the drive may just make a note of it.

Don't expect to get much back after an fstrim.

----------

