# BTRFS disaster - Help! - can I recover?

## darkphader

I have had bad dreams about this particular fat finger but after a few years it finally happened.

Scenario: 2 drives in a raw btrfs array (no partition table, non-redundant) with various subvols as well. One was sdc, the other sde, although sde never shows up in mount's output and blkid reports the same UUID for both.

Thinking I was writing to a flash drive, I sent 32 MB via dd

```
dd if=file.iso of=/dev/sde
```

to sde (instead of what I wanted, sdf), and now neither the volume nor any of its subvols will mount (of course that seems entirely reasonable, although you can imagine how unhappy I am).
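For what it's worth, a dd like that only clobbers the first 32 MB; everything past that offset is untouched. A quick sketch on a scratch temp file (purely illustrative, nothing here touches a real device) shows this:

```shell
# Simulate the mistake on a scratch file instead of a real device.
img=$(mktemp)
head -c $((64*1024*1024)) /dev/urandom > "$img"   # 64 MiB stand-in "disk"
tail_before=$(tail -c 1024 "$img" | md5sum)
# The fat-finger: 32 MB written over the start. conv=notrunc mimics a block
# device, which dd overwrites in place rather than truncating.
dd if=/dev/zero of="$img" bs=1M count=32 conv=notrunc status=none
tail_after=$(tail -c 1024 "$img" | md5sum)
[ "$tail_before" = "$tail_after" ] && echo "data past 32 MiB is untouched"
```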

With:

```
mount -t btrfs /mnt/butter/
```

I get:

```
[ 3421.193103] BTRFS info (device sde): disk space caching is enabled
[ 3421.193734] BTRFS (device sde): bad tree block start 8330001001141004672 20971520
[ 3421.193738] BTRFS: failed to read chunk root on sde
[ 3421.203221] BTRFS: open_ctree failed
```

If I specify /dev/sdc instead of relying on fstab, I get:

```
mount -t btrfs -o degraded /dev/sdc /mnt/butter/
```

```
[ 3839.506766] BTRFS info (device sde): allowing degraded mounts
[ 3839.506769] BTRFS info (device sde): disk space caching is enabled
[ 3839.507154] BTRFS (device sde): bad tree block start 8330001001141004672 20971520
[ 3839.507159] BTRFS: failed to read chunk root on sde
[ 3839.515023] BTRFS: open_ctree failed
```

Is it possible to recover from this? What steps can be tried?

Thanks!

----------

## davidm

Disk Destroyer strikes again.  :Sad: 

Probably your best option at this point is to go on the mailing list and ask:

https://btrfs.wiki.kernel.org/index.php/Btrfs_mailing_list

Also if nothing else the Restore tool (see https://btrfs.wiki.kernel.org/index.php/Restore) should probably be able to recover most of your data.

Whatever you do, though, do not run 'btrfs check --repair' without first consulting an expert or the mailing list. It can sometimes make things far worse than they were. If you do try it, at least make an image of the disks first.
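A minimal imaging sketch, using scratch temp files in place of the real device and destination paths (on actual hardware you would read from /dev/sdX into an image file on a healthy disk, or reach for GNU ddrescue if the drive is failing):

```shell
# Take a raw image of a "disk" before attempting any repair.
# Temp files stand in for the real source device and destination image.
src=$(mktemp); dst=$(mktemp)
head -c $((8*1024*1024)) /dev/urandom > "$src"    # pretend this is /dev/sdc
# conv=noerror keeps going past read errors; conv=sync pads short blocks
dd if="$src" of="$dst" bs=64K conv=sync,noerror status=none
cmp -s "$src" "$dst" && echo "image matches source"
```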

FWIW I would say there is a good chance of being able to restore the array too (minus ~32 MB or so of destroyed data with no redundancy). Unless you tell it not to, btrfs typically stores the metadata in multiple places, even when using data=single (JBOD). That means the metadata is still probably there and available somewhere else on the damaged disk. I always use RAID1 though, so I don't know too much about this; the recovery procedure is very different with redundancy.

----------

## davidm

Just wanted to say I'm following the discussion on the btrfs mailing list and I'm genuinely surprised they weren't able to get you sorted yet.  Please let us know if 'btrfs restore' is able to recover most of your data.  I would think it SHOULD, given that you only nuked the first 32 MB of one of the disks.

In fact I'm not understanding at all why this isn't easier to recover from (albeit with ~32 MB of data lost), since most of the data should still be there. Provided you used at least metadata=dup (which I think is the default) and didn't do something crazy like metadata=single (which IS crazy and ought to generate serious warnings at creation time because it makes no sense), then despite having data=single or data=raid0 I would think it should be trivial to recover most of your data. It's understandable that you would lose some (~32 MB) of it, but I don't think you should lose all of it.

I'm just glad I chose meta=raid1 and data=raid1.  I was considering data=raid0 but I see now that would be a big mistake.  I hope they get some better recovery tools in the future.

----------

## darkphader

You'll see from my latest post to list that nothing has worked. I did try restore with a dry-run and basically more of the same - disc 2 is missing.

----------

## davidm

 *darkphader wrote:*   

> You'll see from my latest post to list that nothing has worked. I did try restore with a dry-run and basically more of the same - disc 2 is missing.

 

Hmmm.  Well, if those guys aren't able to help you, then it would be very arrogant of me to think I could, because they have been on the list for years and are very experienced.  But I guess if you have nothing left to lose and still want to try...  I'm curious too, because I use btrfs and it could happen to me.

One thing I note is that, as you confirmed, you were using data=raid0.  That means that to get anything useful, it needs data from both disks, because of how raid0 works: everything is spread across the disks in stripes.  If it were data=single we could presumably get data from just one disk, but that isn't the case.  So btrfs restore needs to be able to read both disks to have even a remote chance of getting anything.

I noted from your 'btrfs-show-super' output that it looked like something was there.  /dev/sde is the one that was hit by dd, and it still seems to show superblocks.

From researching I found this:

 *Quote:*   

> Superblock
> 
> The primary superblock is located at 0x1 0000 (64 KiB). Mirror copies of the superblock are located at physical addresses 0x400 0000 (64 MiB), 0x40 0000 0000 (256 GiB), and 0x4 0000 0000 0000 (1 PiB), if these locations are valid. btrfs normally updates all superblocks, but in SSD mode it will update only one at a time. The superblock with the highest generation is used when reading.
> ...

 

https://btrfs.wiki.kernel.org/index.php/User:Wtachi/On-disk_Format#Superblock

So that might explain why it's not seeing device 2 or /dev/sde, right?  The first superblock was destroyed by disk destroyer.  So shouldn't we try to read the other backup superblocks, which should still be there on /dev/sde?
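Those hex offsets are easy to sanity-check with shell arithmetic (nothing here touches a disk), and they back up the point: a 32 MB write kills the primary superblock at 64 KiB but can't reach the first mirror at 64 MiB.

```shell
# Convert the wiki's superblock offsets from hex to human-readable units.
echo "$((0x10000 / 1024)) KiB"             # primary superblock: 64 KiB
echo "$((0x4000000 / 1024**2)) MiB"        # mirror 1: 64 MiB
echo "$((0x4000000000 / 1024**3)) GiB"     # mirror 2: 256 GiB
echo "$((0x4000000000000 / 1024**5)) PiB"  # mirror 3: 1 PiB
```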

From the Restore wiki page:

https://btrfs.wiki.kernel.org/index.php/Restore

 *Quote:*   

> -u: Superblock mirror. Valid values are 0,1,2. Specifies an alternate superblock copy to use. This may be useful if your 0th superblock is damaged.

 

So we should probably try '-u 1' or '-u 2' on /dev/sde, right?  Because the 0th superblock was almost certainly blown away.

Also:

 *Quote:*   

> -v: Increase verbosity. May be given multiple times.

 

We should probably do this as much as we can.  It might give us some useful info.

Also, can btrfs restore work with multiple devices?  It would have to with raid0; otherwise there isn't a chance, since the data is striped over both disks.  I wonder... the man page doesn't seem to show listing multiple devices - or is it supposed to pull in the other devices automatically?  Hmmm.  It seems hard to believe there would be no way to make it work with raid0.

What happens with something like

'btrfs restore -u 1 -vvvv /dev/sde /mnt/whatever'

??? 

It might be worth asking them on the ML about what is supposed to happen with 'btrfs restore' on multiple devices.  I did ten minutes of research and didn't find much besides another person asking about multiple devices with it and then not getting an answer.

Good luck.  Let me know if you want to keep trying stuff.  I'll try to research more to help if I can.

Note: I'm not an expert.  Just a btrfs user like you.  So double check anything I suggest.  Though they say btrfs restore is safe....

----------

## darkphader

 *Quote:*   

> 'btrfs restore -u 1 -vvvv /dev/sde /mnt/whatever' 

 

```
btrfs restore -u 1 -vvvv -D /dev/sdc /mnt/saved/
warning, device 2 is missing
bytenr mismatch, want=20971520, have=0
Couldn't read chunk root
Could not open root, trying backup super
warning, device 2 is missing
bytenr mismatch, want=20971520, have=0
Couldn't read chunk root
Could not open root, trying backup super
```

```
btrfs restore -u 1 -vvvv -D /dev/sde /mnt/saved/
checksum verify failed on 20971520 found 8B1D9672 wanted 2F8A4238
checksum verify failed on 20971520 found 8B1D9672 wanted 2F8A4238
bytenr mismatch, want=20971520, have=8330001001141004672
Couldn't read chunk root
Could not open root, trying backup super
warning, device 1 is missing
checksum verify failed on 20971520 found 8B1D9672 wanted 2F8A4238
checksum verify failed on 20971520 found 8B1D9672 wanted 2F8A4238
bytenr mismatch, want=20971520, have=8330001001141004672
Couldn't read chunk root
Could not open root, trying backup super
```

As I mentioned in my latest posts to the mailing list, it seems that some identifying information needs to be placed back on disk 2 - maybe the so-called "chunk root".

Thanks!

----------

## s4e8

The btrfs superblock is 4K in size, located at 64K, 64M, and 256G. You may verify that data and copy a good copy back.
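That idea can be sketched with dd on a scratch image (offsets per the wiki; all paths here are temp files). Note this only illustrates the offsets - each real superblock records its own bytenr and checksum, so on an actual disk the proper tool is 'btrfs rescue super-recover', and work on an image in any case.

```shell
# Sketch: copy the intact 4 KiB superblock mirror at 64 MiB back over the
# destroyed primary at 64 KiB, using a scratch image in place of /dev/sde.
img=$(mktemp)
truncate -s 128M "$img"
# Plant a fake "good" superblock at the 64 MiB mirror location.
head -c 4096 /dev/urandom | dd of="$img" bs=4096 seek=$((64*1024*1024/4096)) conv=notrunc status=none
# Copy that 4 KiB block down to the primary location at 64 KiB.
dd if="$img" of="$img" bs=4096 count=1 skip=$((64*1024*1024/4096)) \
   seek=$((64*1024/4096)) conv=notrunc status=none
cmp -s <(dd if="$img" bs=4096 count=1 skip=16 status=none) \
       <(dd if="$img" bs=4096 count=1 skip=16384 status=none) && echo "primary matches mirror"
```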

----------

## gcyoung

I'm not too optimistic, but if some kind soul can advise, I'd be very grateful.

Using Midnight Commander to copy files to an external USB drive, I mistakenly and unknowingly pressed F6 (mv) instead of F5 (cp) to make a backup, and left the computer to get on with the job (transferring about 250G of data).

This resulted in all files on the two btrfs drives being deleted. Before I had noticed this, however, I found some of the files on the disk holding the copies were corrupted, so I thought, "Never mind, I'll reformat the disk and copy them again." It was not until after I had done this that I discovered the original files were no longer in place. That's my hard luck story!

My query is this: is there any way I can fix the drive to simply restore it to the condition it was in before the deletion?

1: I have not added to or modified the drives in any way since the incident.

2: The drives are not corrupted, since they mount in the normal way.

3: The system consists of an extended partition (sda7) and a separate drive (sdb), formatted with btrfs raid1

Help!

----------

## Zucca

There might be hope.

btrfs has datacow on by default.

As with the earlier case, you should ask on the official btrfs mailing list.

I assume you didn't use snapshots? ;/

----------

## gcyoung

Where do I find the "Official btrfs mailing list" ?

----------

## Zucca

 *gcyoung wrote:*   

> Where do I find the "Official btrfs mailing list" ?

 Here.

I went and found this discussion.

Please report back here on how your progress undeleting the files goes.

I have ~1TB SSD array on my desktop and 3.5TB array on my server. After reading this topic... I'll start to make snapshots RIGHT NOW. Also update backups.

----------

## gcyoung

Thanks for the help. I have sent the data and a request to the btrfs mailing list, and will post the results. I am hoping that, since nothing was written to the disk after the deletion, the deleted information is still present in either the original or the copy, and that there may be some way of recovering it.

I will post my success or failure, as requested,  since it may be helpful to others as careless as I was!

----------

## kernelOfTruth

Get your hands on an additional hard drive and dd the content over to it.

https://www.cgsecurity.org/ might help to recover files

----------

## gcyoung

I don't think one can dd in any set manner from a RAID system, and testdisk recognises btrfs but has no tools to deal with it. In any case, I have disconnected the disks containing the deleted files, in the hope that someone with good knowledge of the btrfs system will be able to advise me. The data must still be on the disks; it is just a question of knowing how to access it.

I have tried btrfs-undelete but have had no joy, as it seems only to work on single files where the whole path is known.

----------

## gcyoung

Correction! I was wrong about testdisk. I was using an older version of SystemRescue. I have updated to the latest version and found that it will carry out some functions. Am currently exploring the possibilities.

----------

## gcyoung

I have accepted that all files are lost. Btrfs-undelete and similar scripts require that the path to the deleted file is known; in my case that is not so. I am convinced that if a disk has not been overwritten since the deletion, it ought to be possible to recover files, since logic dictates that the data is still on the disk. However, recovering that data is beyond my knowledge. I can only advise now that anyone moving data from one disk to another must be very, very careful to check first that the copy has been successful before deleting any of the original!

----------

## NeddySeagoon

gcyoung,

The data is still there but the root directory and the metadata are gone. You need to reconstruct that to get the filesystem to read your data.

That's hard.

All the subdirectories of the root directory are still there. However, I don't know how btrfs manages metadata, to find the blocks belonging to your files.

It may be that finding directories won't actually help much.  

In place data recovery is unlikely.

----------

## Zucca

Backups people, backups! :) Backups can save you from user-mistakes. Also snapshots.

But snapshots aren't as safe unless you have good redundancy and bit rot protection.

Anyway. Sad to hear that the files are gone forever.

----------

