# [ext3] undelete file

## kpoman

hello

I recently deleted a file that I now need (it was a temporary backup of another hard drive which died unexpectedly), so I want to recover it. The filesystem has not changed much since the deletion, so I hope the data hasn't been reallocated.

Someone suggested I use SleuthKit from http://www.sleuthkit.org/ . It seems to be a professional tool for recovery and investigation of a hard drive, so I installed it, along with Autopsy (its web front end).

i created an image of the filesystem using:

```
dd if=/dev/PARTITION_WHERE_DATA_WAS of=/tmp/my_image
```

and analyzed it.

I found the file I was looking for, and got all this:

```
Pointed to by file:
/root/bkp_sofia.tar.bz2 (deleted)

File Type (Recovered):
empty

MD5 of recovered content:
d41d8cd98f00b204e9800998ecf8427e

Details:
inode: 37408
Not Allocated
Group: 2
uid / gid: 0 / 0
mode: -rw-r--r--
size: 0
num of links: 0

Inode Times:
Accessed: Sat Oct 25 22:06:17 2003
File Modified: Mon Jun 14 11:21:32 2004
Inode Modified: Mon Jun 14 11:21:32 2004
Deleted: Mon Jun 14 11:21:32 2004

Direct Blocks:
Enter number of Fragments to display: (because the size is 0)
```

I don't know what this inode number means because I am not an ext3 guru... but given that I have the initial inode, is there a way I can recursively find the other ones and then rebuild the file?

I hope someone can help me; I've been looking for information and find lots of contradictions everywhere! I would really appreciate some help here!

A last problem: I also deleted a directory, and these tools don't show it at all... is it possible on ext3 to find out whether a directory got deleted?

As you may see, this is kind of a desperate post; there was a lot of sensitive information in this backup. :'(

thank you

----------

## gigel

http://e2undel.sourceforge.net/

maybe this would help...

[EDIT]

I know this is for ext2, but look here about undeleting on ext3:

http://batleth.sapienti-sat.org/projects/FAQs/ext3-faq.html

Also, you could use debugfs, which has limitations, especially for big files.

----------

## bkeating

FYI: ext3 is ext2 + journaling, so ext2 tools should work(?)

----------

## kpoman

I have read that ext3 is ext2 + journaling + some tricks to speed things up, like zeroing something (I don't know what, I'm not an expert).

From the link you posted (it is what I had read), I see this:

```
Q: How can I recover (undelete) deleted files from my ext3 partition?

Actually, you can't! This is what one of the developers, Andreas Dilger, said about it:

In order to ensure that ext3 can safely resume an unlink after a crash, it actually zeros out the block pointers in the inode, whereas ext2 just marks these blocks as unused in the block bitmaps and marks the inode as "deleted" and leaves the block pointers alone.

Your only hope is to "grep" for parts of your files that have been deleted and hope for the best.
```

With SleuthKit I am able to find the starting inode of the big file (it was a 300 MB file, so maybe it will be damaged).

This is what I have: a complete disk dump of the partition I want to undelete files from. Say the dump is called:

/path/to/image

Then I run my tools on this image, which is now a regular file (not a device anymore).

The things I want to restore are inside that image; they were at:

/root/bkp_sofia.tbz2

AND

/root/bkp_sofia/LOT_OF_FILES

(the tarball is a duplicate of LOT_OF_FILES, which are little files, like 500 KB PDF files and the like)

SleuthKit gave me the inode of /root/bkp_sofia.tbz2 but couldn't find any entries pointing to the /root/bkp_sofia/ directory... I don't know why it doesn't find any trace of that directory anywhere. So I tried:

```
cat /path/to/image | strings | grep bkp_sofia
```

and found files that were in the /root/bkp_sofia/ directory, for example sofiaCV.doc and others!

So I have hope that these files may be undeletable too... but I really need your help, even if it is just technical background about ext3, or a way to, given an inode, recurse to the other blocks and rebuild the whole file, or something similar.

I really appreciate your help; by the way, if I am successful I will try to write an article about it, because I haven't been able to find simple, successful documentation on this.

Thanks again, I hope someone can help me.

----------

## gigel

If you know the inode, you could use this (I don't guarantee it):

```
# debugfs
debugfs 1.34 (25-Jul-2003)
debugfs:  open -w /path/to/file
debugfs:  undel
undel: Usage: undelete <inode_num> <dest_name>
debugfs:  close
debugfs:  quit
```

But 300 MB is a large file, though...

----------

## kpoman

I tried doing as you suggested but got this:

```
root@shuttle kpoman # debugfs
debugfs 1.35 (28-Feb-2004)
debugfs:  open -w /home/kpoman/panaroot/rootfs
debugfs:  undel
undel: Usage: undelete <inode_num> <dest_name>
debugfs:  undel 37408 /tmp/recovered
37408: File not found by ext2_lookup
debugfs:  undel 37408 /root/bkp_sofia.tbz2
37408: File not found by ext2_lookup
debugfs:  undel <37408> </tmp/recoverme>
</tmp: File not found by ext2_lookup
debugfs:  undel <37408> /root/bkp_sofia.tbz2
<37408>: Inode is not marked as deleted
debugfs:
```

----------

## kpoman

I have a list of files which were deleted and that I'd really like to recover; I obtained it from a "locate" command run before updatedb refreshed its database. When I "cat /the/image/file | strings | grep somefilename" I get some data, but I don't know whether it comes from the locate database itself or from inode/file data. I would really appreciate a way to find where those files have their inodes, because if I can extract them one by one it would be great (and maybe more successful than trying to recover the whole 300 MB tarball).

Another thing, about performance this time: my "dd" image file is 6 GB. It would be great if there was a way to split it into parts so I can optimize my searches, because when I run commands on such a huge piece of data the computer slows down badly, and sometimes it even freezes. :'(
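For the splitting part, coreutils split does exactly this. A sketch on a small stand-in file (the /tmp paths, piece size, and sizes are made up for the demo; for the real 6 GB image something like -b 512M would do). One caveat: a string that straddles a piece boundary will not be found by grep in either piece, so streaming the whole image through strings/grep can still be preferable.

```shell
# make a 1 MiB stand-in for the big image
dd if=/dev/zero of=/tmp/big_image bs=4096 count=256 2>/dev/null

# cut it into 256 KiB pieces named /tmp/piece_aa, /tmp/piece_ab, ...
split -b 262144 /tmp/big_image /tmp/piece_

ls /tmp/piece_*   # four pieces
```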

i really appreciate all your help and experience on this !

Cheers, Patricio

----------

## gigel

I've tried myself what I told you, but all I get is an empty file!

Even dump <inode_number> new_file doesn't help at all...

But I've found something that does work: search Google for fsgrab-1.2.tar.gz; if you can't find it, PM me and I'll send it to you.

First run debugfs and issue:

```
stat the_file_you_have_lost
```

and if you are lucky enough to actually find the blocks, you could then do:

```
sudo fsgrab -c 256 -s 4384 /dev/hdb7 > /home/it_works
```

I've tested this with a little PNG that fits in only one block; if the file is larger, then do:

```
sudo fsgrab -c 256 -s 4385 /dev/hdb7 >> /home/it_works
```

Notice I added 1 to 4384.

Again, for me it works, because I'm dealing with little files here.

Oh, by the way, strings /dev/hdXY will take a lot of time on a large partition (but you know better, since you've tested it).

There's no need to cat the file first...

I ran

```
time strings 11mbfilemp3 
```

and 

```
time cat samefile | strings
```

and the time was shorter in the first case.

That's only an 11 MB file, but you can imagine the speed difference when the file is 6 GB.
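One more strings flag worth knowing: -t d prints the decimal byte offset of every string it finds, and dividing that offset by the filesystem block size gives the block number to feed fsgrab. A sketch on a tiny fake image (the planted filename, offsets, and the 4096-byte block size are assumptions; ext3 also allows 1 KiB and 2 KiB blocks):

```shell
# fake image: 3 blocks of zeros with a filename planted inside block 2
dd if=/dev/zero of=/tmp/demo_img bs=4096 count=3 2>/dev/null
printf 'bkp_sofia.tbz2' | dd of=/tmp/demo_img bs=1 seek=8200 conv=notrunc 2>/dev/null

# -t d prefixes each string with its decimal byte offset in the file
offset=$(strings -t d /tmp/demo_img | grep bkp_sofia | awk '{print $1}')
echo "byte offset: $offset"          # byte offset: 8200
echo "block: $(( offset / 4096 ))"   # block: 2
```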

----------

## kpoman

Hi mortix!

Thanks for your suggestions!

One slight difference: I am not working on a device anymore, because I dumped all its data to a file, so I don't need to mount read-only and deal with all those complications.

What I would really love to find out is how to find where a particular file starts, given its filename. Then, for example with a 50 KB PDF, I could find the first block, dump 10, 11, 12, and 13 blocks from there, and test those four pieces to find out whether I have a valid file.

This would assume:

-the blocks are contiguous

-I have the address of the first block

-I have a technique to dump the number of blocks I want to a new file

This raises some questions:

=> How can I know whether the blocks are contiguous? I think this can't be guessed on ext3 (zeroing of the block pointers).

=> How can I obtain the first inode of a file? I had problems with that: I could find the inode of one particular file, but I also erased a directory containing lots of small files, and for those I can't find any inodes, I don't know why. However, if I cat/strings my disk dump (dd) image, I see many of the filenames I want to recover, so I guess there are references somewhere in the image!!! I really need help on this: for example, how can I discover the inode number when I see, while catting, the filename I'm looking for? Do I have to read some hex code or something? Help here would be appreciated!

=> I suppose your tool (fsgrab) lets me dump data given a starting block number and a count of blocks to dump from that place? If I dump, will I get only the file data, or will there be some inode/block/filesystem information mixed in?

Well, as you may notice, I have trouble understanding how the ext3 filesystem works... Where and how are references to files stored? What do they contain: inode numbers, block numbers? Is there a way to find the inode number when I see a filename in the dump?

I hope to find an answer to all this!

thanks again!

----------

## gigel

*Quote:*

> i have a slight difference, i am not working on a device anymore, because i dumped all its data to a file so i dont need to be mounting readonly and all those complications ...

I believe that this is not an issue: just specify /path/to/yourfile instead of /dev/bla.

*Quote:*

> how to find where a particular file, given its filename, starts

I have no idea; if it's a deleted file, then the inode (which stored that info) is gone. I guess you could run strings over the image and grep for patterns.

If it's not a deleted file, you can use stat from debugfs.
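Actually, there is one case where the filename can lead you back to the inode: if the directory's own data block survived, the on-disk ext2/ext3 directory entry stores the inode number as a little-endian 32-bit value starting 8 bytes before the name (layout: 4-byte inode, 2-byte rec_len, 1-byte name_len, 1-byte file type, then the name). A sketch on a hand-built fake entry (the bytes, offsets, and inode 37408 are made up to match this thread):

```shell
# fake directory entry: inode 37408 (0x9220 little-endian), rec_len 24,
# name_len 9, file type 1 (regular), then the name "bkp_sofia"
printf '\x20\x92\x00\x00\x18\x00\x09\x01bkp_sofia' > /tmp/dirent

# pretend strings -t d told us the name begins at byte offset 8
name_off=8

# read the 4 bytes that start 8 bytes before the name, decode as unsigned LE
inode=$(dd if=/tmp/dirent bs=1 skip=$(( name_off - 8 )) count=4 2>/dev/null \
        | od -An -tu4 | tr -d ' ')
echo "inode: $inode"   # inode: 37408
```

On a deleted directory the entries themselves may have been wiped, so this only works when the raw directory block is still intact in the dump.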

*Quote:*

> ... then i could, for example if it is a pdf document that weighs 50k, i would love to find out the first block then assume taking 10 blocks, 11, 12, 13, and then test those 4 pieces and find out if i have a file;

Actually 50 KB fits into 13 blocks; the block size is 4 KB.
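The arithmetic, for the record (the 4 KiB block size is an assumption; it is the common default): the inode's 12 direct pointers cover 48 KiB, so even a 50 KB file already spills into the first indirect block.

```shell
block_size=4096
# bytes reachable through the 12 direct block pointers
echo $(( 12 * block_size ))                              # 49152 (48 KiB)
# blocks needed for a 50 KiB file (rounding up)
echo $(( (50 * 1024 + block_size - 1) / block_size ))    # 13
```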

*Quote:*

> -the blocks are contiguous
> -i have the address of the first block
> ...

This would be ideal; if you know these things, then fsgrab can really help you here.

*Quote:*

> => how can i know if the blocks are contigous ? i think this cant be guessed on a ext3 (zeroing of file block indexes)

I believe you can't; if it's a deleted file, again, that info was stored in the inode's block pointers, which are wiped...

If the file fits in fewer than 12 blocks, you could assume that the blocks are contiguous.

*Quote:*

> => how can i obtain the first inode of a file ?

AFAIK a file has exactly one inode; a really big file just uses indirect blocks on top of the inode's 12 direct pointers, it does not get a second inode.

*Quote:*

> i had problems with that, i could find the first inode of a particular file, but i also erased a directory containing lots of short files, and for all this, i cant find inodes ... however if i cat/strings my diskdump (dd) image, i see many filenames i want to recover, so i guess somewhere in the image there are references !!!

I'm no guru either (don't be fooled by what's under my nick), but I believe you are mixing things up: after deletion the inode is kaput; all that remains is the data itself, which is still there.

*Quote:*

> => i suppose your tool (fsgrab) allows me to dump data given a starting block number and telling how many blocks to dump from that place ? if i dump, will i get only the data of the files, or will there be some inode/block/fs information on it ?

Well, it's not my tool. It will fetch exactly what is in the blocks you request, nothing else.

This is what stat tells me about an existing file (from debugfs):

```
# sudo debugfs /dev/bla
debugfs 1.34 (25-Jul-2003)
debugfs:  stat file
----
Inode: 48874   Type: regular    Mode:  0644   Flags: 0x0   Generation: 630443121
User:  1000   Group:    10   Size: 1537345
File ACL: 0    Directory ACL: 0
Links: 1   Blockcount: 3016
Fragment:  Address: 0    Number: 0    Size: 0
ctime: 0x40d15394 -- Thu Jun 17 11:17:24 2004
atime: 0x40d15394 -- Thu Jun 17 11:17:24 2004
mtime: 0x40d15394 -- Thu Jun 17 11:17:24 2004
BLOCKS:
(0-11):109802-109813, (IND):109814, (12-375):109815-110178
TOTAL: 377
----
debugfs:  quit
```

Now I delete the file. After this, I use fsgrab; again, I _KNOW_ which blocks the file occupies...

```
$ sudo fsgrab -b 4096 -c 11 -s 109802 /dev/bla > /tmp/la
$ sudo fsgrab -b 4096 -c 363 -s 109815 /dev/bla >> /tmp/la
$ file /tmp/la #to be sure
la: MP3, 128 kBits, 44.1 kHz, JStereo
```
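For what it's worth, plain dd can pull the same block ranges (skip = starting block, count = number of blocks, bs = block size). Note that the stat output above actually lists 12 direct blocks (109802-109813) and 364 blocks in the second extent, so counts of 12 and 364 would copy every data block while skipping the indirect block at 109814. A self-contained demo on a fake image (paths, block numbers, and the planted data are made up):

```shell
# fake image: 16 blocks, with known data planted at the start of block 3
dd if=/dev/zero of=/tmp/fake_img bs=4096 count=16 2>/dev/null
printf 'HELLO' | dd of=/tmp/fake_img bs=4096 seek=3 conv=notrunc 2>/dev/null

# extract 2 blocks starting at block 3, equivalent to: fsgrab -b 4096 -c 2 -s 3
dd if=/tmp/fake_img of=/tmp/extracted bs=4096 skip=3 count=2 2>/dev/null

head -c 5 /tmp/extracted   # HELLO
```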

----------

## kpoman

thanks for your reply !

So if I understood well: an inode is where you find the file's permissions, modification dates, and the list of data blocks used? And in ext2 a deleted inode just gets a "deleted" flag, but ext3 puts zeros in place of the block list, which makes things hard to find?

If this is the case, then maybe I could use this technique:

I would index all **allocated** inodes ==> this should give me all the unallocated ones, and with them all the "free" blocks. Then I would dump all those free blocks (about 1 GB instead of 6 GB), grep that for the headers of the files I want to recover (for example %PDF-1.4, etc.), and then try dumping them.

So, is there a tool that can scan for free blocks?

----------

## gigel

Yes, but that is not such an easy task. Say you trim that 6 GB image down to 1 GB: how can you tell which blocks belong to which file?

If the resulting 1 GB file were not fragmented at all, then by knowing the sizes of your various files you could recover them all...

but that scenario is not likely to happen.
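About the "tool that can scan for free blocks" question: SleuthKit ships dls (renamed blkls in later releases), which writes out only the unallocated blocks of an image, e.g. dls /path/to/image > unalloc.dd. After that, grep can hunt for file headers by byte offset: -a treats binary as text, -b prints the byte offset of each match, -o prints only the matched text. A self-contained demo with a planted PDF header (the file and offsets are made up):

```shell
# fake "unallocated blocks" dump: 4 zero blocks with a PDF header in block 1
dd if=/dev/zero of=/tmp/unalloc bs=4096 count=4 2>/dev/null
printf '%%PDF-1.4' | dd of=/tmp/unalloc bs=4096 seek=1 conv=notrunc 2>/dev/null

# -a: treat binary as text, -b: print byte offset, -o: print only the match
grep -abo '%PDF-' /tmp/unalloc   # 4096:%PDF-
```

The offset a real match lands on then becomes the starting point for dd or fsgrab, exactly as with the strings -t d trick.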

----------

## linux_girl

I have a similar problem: https://forums.gentoo.org/viewtopic.php?t=190954

I think I will commit suicide...

----------

