# Incorrect harddisksize? (not about the marketing size diff)

## ahubu

Hi,

I bought a 320Gb harddisk, and of course, since the marketing machine works this way, it isn't really 320Gb, because vendors count 1 megabyte as 1000*1000 bytes. That means that, as my hdparm info points out, 320Gb comes down to 305Gb of real size:

```
[17:53:48 root@kiwi:/home/ahb] hdparm -I /dev/hdb

/dev/hdb:

ATA device, with non-removable media

        Model Number:       ST3320620A                              

        Serial Number:      5QF23ZNW

        Firmware Revision:  3.AAE   

Standards:

        Supported: 7 6 5 4 

        Likely used: 7

Configuration:

        Logical         max     current

        cylinders       16383   65535

        heads           16      1

        sectors/track   63      63

        --

        CHS current addressable sectors:    4128705

        LBA    user addressable sectors:  268435455

        LBA48  user addressable sectors:  625142448

        device size with M = 1024*1024:      305245 MBytes

        device size with M = 1000*1000:      320072 MBytes (320 GB)

```
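The two figures follow directly from the LBA48 sector count above, assuming the usual 512 bytes per sector (a quick sanity check in bash):

```shell
# Convert the LBA48 sector count reported by hdparm to bytes,
# then to vendor (decimal) and binary gigabytes.
sectors=625142448
bytes=$((sectors * 512))
echo "bytes:       $bytes"                            # 320072933376
echo "decimal GB:  $((bytes / 1000 / 1000 / 1000))"   # 320
echo "binary GiB:  $((bytes / 1024 / 1024 / 1024))"   # 298
```

(Note that 305245 MBytes with M = 1024*1024 works out to about 298Gb with G = 1024*1024*1024.)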

That's fair. However, I made one big ext3 partition on it (since it is to become my media drive), by first making one big primary partition in fdisk, followed by the command:

```
mke2fs -j -m0 /dev/hdb1
```

Here the -j stands for journalling and -m0 for no reserved blocks (so the whole space will be available). However, when I do df -h and/or df -H I see:

```
df -h (human readable, real Gb):

/dev/hdb1             294G  189G  106G  65% /media2

df -H (human readable, SI Gb):

/dev/hdb1              316G   202G   114G  65% /media2

```

Now, I ask you, where have the 4Gb gone? I found some other calculations on the internet pointing out that the real size should be at least 298Gb (and not 305Gb as hdparm wants me to believe). Is there maybe a price to pay for making one big partition, or is there something else I'm forgetting?

I have a second, minor question: my (other) harddisk has a "recommended" (as stated in hdparm -I /dev/hda) acoustic management setting of 192, but performs better at a setting of 254. Will putting the drive on the higher setting strain it (much) more, or is the recommendation based solely on the noise/performance ratio? I would like a faster drive, but not if it means it lives shorter.

Thanks in advance,

----------

## NeddySeagoon

ahubu,

you are forgetting the space for the metadata in the filesystem, that's the space required for inodes, the free-space map, the journal, directories and so on. None of this is used for storing your data, but it's all needed to be able to store and find your data.

Acoustic Management is a bit of a misnomer: this parameter actually controls the head stepping speed of the drive. Slower steps are quieter. As far as I know, today's drives only respond to the sign bit of the parameter, so there are two speeds, normal and slow.

For best data performance, use higher numbers.
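For reference, the setting can be read and changed with hdparm's -M option (a sketch; run as root, and /dev/hda is just the example device from this thread):

```shell
hdparm -M /dev/hda        # show the current acoustic management value
hdparm -M 254 /dev/hda    # fastest (and loudest) seeks
hdparm -M 128 /dev/hda    # quietest (and slowest) seeks
```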

----------

## ahubu

NeddySeagoon, thank you for your response. I am a bit surprised that the things you speak of take so much space, but I guess that for instance a journal is pretty big for such a drive, and I (wrongly) thought that that space was only taken when things were actually put on the drive (growing incrementally as the drive fills up). At least this means I'm not being ripped off by Seagate  :Wink: .

Regarding Acoustic Management, do you know if faster steps (a higher AM value, or the sign bit, as you put it) put (significantly) more wear on a drive? I do notice that it occasionally makes hard ticks at its loudest setting.

----------

## NeddySeagoon

ahubu,

Your journal will be 32Mb, so that's not much.

All the inodes are made at filesystem creation time, you cannot add or delete them.

Each inode takes up space too, though much less than one filesystem block.

```
df -i
```

will show how many inodes you have.

Faster head movements mean higher accelerations on the head arm, which makes more noise.

I would not expect it to affect the life of the drive to any significant extent. I run all my drives for maximum performance.

Maybe I have to play my music louder because of that?

----------

## Gentree

I guess you just want to understand the discrepancy as much as get your 4G back, so playing around with block size may be beneficial. Since I imagine they're all pretty big files, larger blocks may be more suitable.

Also, do you need a journalling fs? When you say it's media, I imagine it's mostly audio and video for entertainment: read mostly, write rarely. If you're doing editing on it, that would be different.

 :Cool: 

----------

## ahubu

Hm... I saw that my inodes are taking up only 38Mb.

```
/dev/hdb1                38M    234K     38M    1% /media2
```

Blocksize might help. Indeed, I do not really care so much about the 4Gb, although I find it annoying that in addition to the vendor-lie you lose even more Gb's, so that from a 320Gb harddisk you 'lose' 26Gb in total. I had hoped I would see a 30* Gb figure at least. (And with larger harddisks, the loss will (seem to) be more and more significant; I read that from a 1Tb harddisk you'll only see roughly 900Gb.)

I guess I don't really need the journalling (I do some editing but it's mostly media archiving), but if it only takes 32Mb I'd rather have it.

I will take a look at the blocksize though, that could make a difference in the diskspace taken. However, I figured the blocksize is not taken into account when calculating the total disksize in df (since you cannot predict how many files you will store in the future, or what their size will be (nor does the total size change in df when adding/deleting files)).
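One way to see exactly what mke2fs chose (block size, block count, inode count, inode size) is to dump the superblock with tune2fs (a sketch; needs root, /dev/hdb1 as in the posts above):

```shell
tune2fs -l /dev/hdb1 | grep -Ei 'block size|block count|inode count|inode size'
```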

----------

## Gentree

 *Quote:*   

> I figured the blocksize is not taken into account for calculating the total disksize in df

 

well, the fs must know the raw space available and how many blocks it will take to span it, irrespective of whether they are used yet. Please post the results when you try it.

It may be interesting to look at other fs as well and compare the overhead.

Apparently xfs is faster on this sort of data, a few very large files. I find reiser4 very robust, if your kernel supports it or you fancy adding the patch. Just out of interest you could try ntfs as well.  :Cool: 

----------

## NeddySeagoon

ahubu,

I'm not sure how you arrive at 38Mb for your inodes.

My /home

```
 $ df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/md/6             440G  270G  149G  65% /home
```

and its inode count  

```
$ df -hi

Filesystem            Inodes   IUsed   IFree IUse% Mounted on

/dev/md/6                56M    578K     56M    2% /home
```

That's a count of the inodes, not the space they use. Each inode must be at least 512b, so that's 28Gb taken up by inodes on my 440Gb /home. That space is not included in df, since inodes are outside the file system data space.

I suspect your 38Mb is your journal and directories.

Making the block size bigger reduces the number of inodes created by default and increases the amount of wasted space at the end of each file. You can change the blocks per inode ratio by passing a parameter to mke2fs too. Be warned that each file you create needs at least one inode. The partition will appear to be full when you have run out of inodes and no more can be added.

Provided you won't have lots of small files, you can safely change the inodes per block ratio.
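As a sketch of that parameter (only possible at filesystem creation time, and it would destroy the data now on the disk), the bytes-per-inode ratio is set with mke2fs's -i option, e.g. one inode per 1Mb of space for a disk of large media files:

```shell
# Hypothetical re-creation of the filesystem with far fewer inodes:
# -b 4096     4Kb blocks
# -i 1048576  one inode per 1Mb of space (the default is one per ~16Kb)
mke2fs -j -m0 -b 4096 -i 1048576 /dev/hdb1
```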

----------

## ahubu

Ah, I see a df HCI pitfall there  :Smile:  df -h shows human-readable megabytes, df -hi shows a human-readable inode count. I thought it was 38 Megabytes of inodes, but I see it is 38 Million. That explains it for me, and I am happy to have learned something new.

Sorry, I'm a bit of a conservative type when it comes to my data, so I won't try other filesystems (the second reason is that I already migrated all my data from my previous defective drive). I read good things about reiser though, but for filesystems no version can be stable enough for me  :Smile: . I gladly refer to NeddySeagoon's signature for that, hehe.

----------

