# dd: /dev/zero or /dev/random?

## danthehat

I've read about techniques that use the dd utility to wipe disks so that data cannot be recovered from them, and one thing confuses me.  One technique uses /dev/zero to write a bunch of zero bits to the disk in place of any usable data...

```
dd if=/dev/zero of=<device node>
```

And another technique uses /dev/urandom (or /dev/random, depending on how much patience you have) to write random bits to the disk in place of said recoverable data...

```
dd if=/dev/urandom of=<device node>
```

Almost all sources are adamant that the second technique is far more secure than the first.  This is the source of my confusion.  If the program is writing a contiguous series of zero bits to my disk, then any data being overwritten is just that...  overwritten.  There isn't much you can derive from a 1 gig series of zeros.  So how does writing random data come out ahead of writing no data?

The forum's input or some helpful reading material would be much appreciated.

----------

## i92guboj

As far as overwriting the filesystem goes, both techniques are the same: they just fill your disk with useless data.  You could even run "dd if=pamela-anderson.jpg of=/dev/<fs>" repeatedly until the disk is full and it would be much the same.  /dev/zero is faster, though.
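One caveat on the dd-with-a-JPEG idea: dd stops when its input is exhausted, so a single pass over a small file only overwrites that file's length, not the whole device. A quick sketch on a file-backed "disk" (the file names here are hypothetical, not from the thread):

```shell
# Create a fake 8 KiB "disk" image to stand in for a device node
dd if=/dev/zero of=disk.img bs=1K count=8 2>/dev/null
# A tiny 4-byte file standing in for the JPEG
printf 'AAAA' > small.jpg
# conv=notrunc keeps the image at its full size; dd stops after 4 bytes
dd if=small.jpg of=disk.img conv=notrunc 2>/dev/null
# The image is still 8192 bytes; only the first 4 were overwritten
stat -c %s disk.img
```

To actually fill a device with a file's contents, you would have to loop the input until the device reports it is full.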

----------

## Hu

*danthehat wrote:*

> Almost all sources are adamant that the second technique is far more secure than the first.  This is the source of my confusion.  If the program is writing a contiguous series of zero bits to my disk, then any data being overwritten is just that...  overwritten.  There isn't much you can derive from a 1 gig series of zeros.  So how does writing random data come out ahead of writing no data?

If we were dealing purely in software, you would be correct.  However, since the disk is a physical construct, writing zeros over the entire disk does not completely erase it.  Someone with sufficient patience and hardware can examine the disk to determine the state the bits had before you set them all to zero.  If you overwrite them with random data, especially if you do it multiple times, it becomes progressively more difficult to separate the random bits from the values that represent the data you are erasing.  Look up the United States Department of Defense standards on disk erasure if you want to be really thorough.  See also `info shred`.
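The `shred` utility mentioned above automates exactly this multi-pass random overwrite. A minimal sketch on a regular file (a real wipe would target the device node instead; `testfile` is a hypothetical name):

```shell
# Make a small file to stand in for a device node
dd if=/dev/zero of=testfile bs=1K count=4 2>/dev/null
# Three passes of random data, then a final pass of zeros (-z)
# so the file does not obviously look shredded afterwards
shred -n 3 -z testfile
```

Against a real disk you would run something like `shred -n 3 -z /dev/sdX` as root, with the usual warning that the target must be double-checked first.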

----------

## danthehat

Thanks a lot.  I was not aware that overwriting data in place did not necessarily guarantee that the data was not recoverable.  I am looking into the subject and will report my findings.

----------

## wynn

Peter Gutmann wrote a paper, *Secure Deletion of Data from Magnetic and Solid-State Memory*, which goes into this in detail, ending up with a 35-pass scheme of different bit patterns.

Idly running `hdparm -I /dev/sda`, I was surprised and interested to note, at the end of the output

```
Security:
        Master password revision code = 65534
                supported
        not     enabled
        not     locked
        not     frozen
        not     expired: security count
        not     supported: enhanced erase
        120min for SECURITY ERASE UNIT.
```

Googling, I got http://www.rockbox.org/lock.html

*Quote:*

> The disk lock is a built-in security feature in the disk. It is part of the ATA specification, and thus not specific to any brand or device.
> 
> A disk always has two passwords: A User password and a Master password. Most disks support a Master Password Revision Code, which can tell you if the Master password has been changed, or if it is still the factory default. The revision code is word 92 in the IDENTIFY response. A value of 0xFFFE means the Master password is unchanged.
> 
> A disk can be locked in two modes: High security mode or Maximum security mode. Bit 8 in word 128 of the IDENTIFY response tells you which mode your disk is in: 0 = High, 1 = Maximum.
> ...

It might have something to do with the Gutmann method.

See also *Sanitizing hard drives at the hardware level*.
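As a small sanity check (nothing beyond the shell's own printf is assumed here), the decimal 65534 that hdparm printed is exactly the 0xFFFE "unchanged" sentinel the rockbox page describes:

```shell
# Render the decimal revision code from hdparm's output as hex
printf '0x%04X\n' 65534    # prints 0xFFFE
```

So this drive's Master password is still the factory default.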

----------

## i92guboj

Well, looking at the physical level, it is true that there are traces which can be found and used to rebuild pieces of information.  But that is not something that can be achieved via software or with any cheap hardware: you need very specialized equipment, a high-tech lab, and very knowledgeable people.  For the average user, /dev/zero and /dev/random are effectively the same.  That is, unless someone is willing to pay for a NASA-like lab to steal your data, which usually is not the case.

----------

