# SSD Encryption speed RAID partitioning questions

## weingbz

OK, I'm a bit stumped. I want to set up my SSD as an encrypted drive. Since the TRIM command can't be used with an encrypted drive, I want to partition the drive so that not every bit is used. Since encryption with dm-crypt is not multithreaded, I thought I'd use a RAID-0 to get multiple kcryptd threads. Multithreading was supposedly included with kernel 2.6.30, but I don't see it in my system monitors. I have already encrypted my other conventional hard disks.
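
One way I check whether dm-crypt actually spawns multiple worker threads (a rough sketch; the kcryptd thread naming can differ between kernel versions):

```shell
# Compare the CPU count with the number of kcryptd worker threads;
# a single-threaded dm-crypt shows one kcryptd per mapping, not per core.
cpus=$(grep -c ^processor /proc/cpuinfo)
threads=$(ps -eLo comm= | grep -c kcryptd || true)
echo "CPUs: $cpus, kcryptd threads: $threads"
```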

First I tested the speed of my hd and its encryption speed and the SSD:

```

hdparm -t /dev/{sda,sdc,mapper/hdsafe}

/dev/sda:

 Timing buffered disk reads:  332 MB in  3.01 seconds = 110.35 MB/sec

/dev/sdc:

 Timing buffered disk reads:  512 MB in  3.01 seconds = 170.07 MB/sec

/dev/mapper/hdsafe

 Timing buffered disk reads:  234 MB in  3.01 seconds =  77.76 MB/sec

```

OK, with the hard disk I get 110 MB/s, and with encryption 78 MB/s are left. The SSD reads at 170 MB/s. It used to be 250 MB/s; it got slower after I changed the order of hard disks in the BIOS, but it's still fast enough, since I have a dual-core and with perfect scalability I should get 2*78 MB/s = 156 MB/s.
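
The scaling estimate is just simple arithmetic:

```shell
# Perfect two-core scaling of the 78 MB/s single-threaded dm-crypt rate.
echo "$(( 2 * 78 )) MB/s"   # prints "156 MB/s"
```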

Next partition the SSD:

```

fdisk /dev/sdc

Command (m for help): p

Disk /dev/sdc: 115.0 GB, 115033153536 bytes

255 heads, 63 sectors/track, 13985 cylinders, total 224674128 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x8da0084b

   Device Boot      Start         End      Blocks   Id  System

```

OK, I'm used to hard-disk manufacturers lying to me, but fdisk? 115033153536 bytes are 107.13... GB, not 115.0 GB. Strange, but I carry on:

```

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4, default 1): 1

First sector (2048-224674127, default 2048):

Using default value 2048

Last sector, +sectors or +size{K,M,G} (2048-224674127, default 224674127):

Using default value 224674127

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

```

Now to setup the encryption:

```

cryptsetup  -c aes -s 128 create flash1safe /dev/sdc1

```

And test the speed:

```

hdparm -t /dev/mapper/flash1safe

/dev/mapper/flash1safe:

 Timing buffered disk reads:  238 MB in  3.02 seconds =  78.81 MB/sec

```

OK, as expected: the encryption is CPU-limited, so the higher speed of the SSD can't be used. So I delete the partition and create two smaller partitions to set up a RAID:

```

cryptsetup remove flash1safe

fdisk /dev/sdc

Command (m for help): d

Selected partition 1

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4, default 1): 1

First sector (2048-224674127, default 2048):

Using default value 2048

Last sector, +sectors or +size{K,M,G} (2048-224674127, default 224674127): 67110912

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4, default 2):

Using default value 2

First sector (67110913-224674127, default 67110913): 67112960

Last sector, +sectors or +size{K,M,G} (67112960-224674127, default 224674127): 134221824

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

```

The sector numbers are all chosen to be divisible by 2048.
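
To double-check the alignment after partitioning, the start sectors can be read back from sysfs (paths assume the SSD is /dev/sdc; the helper is just a sketch):

```shell
# Report whether each partition's starting sector is a multiple of
# 2048, i.e. 1 MiB-aligned with 512-byte sectors.
check_align() {
  if [ $(( $1 % 2048 )) -eq 0 ]; then echo aligned; else echo misaligned; fi
}

for p in /sys/block/sdc/sdc[0-9]*; do
  [ -e "$p/start" ] || continue
  start=$(cat "$p/start")
  echo "$(basename "$p"): start=$start $(check_align "$start")"
done
```

Note that fdisk's proposed default first sector for the second partition, 67110913, would be misaligned; the manually entered 67112960 is a clean multiple of 2048.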

OK set up the encryption and the RAID:

```

cryptsetup  -c aes -s 128 create flash1safe /dev/sdc1

cryptsetup  -c aes -s 128 create flash2safe /dev/sdc2

mknod /dev/md1 b 9 1

mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/mapper/flash1safe /dev/mapper/flash2safe

```

And now for the final test:

```

hdparm -t /dev/mapper/{flash1,flash2}safe /dev/md1

/dev/mapper/flash1safe:

 Timing buffered disk reads:  170 MB in  3.00 seconds =  56.61 MB/sec

/dev/mapper/flash2safe:

 Timing buffered disk reads:  170 MB in  3.03 seconds =  56.11 MB/sec

/dev/md1:

 Timing buffered disk reads:  246 MB in  3.00 seconds =  81.87 MB/sec

```

WHAT? Why are the flashsafes 30% slower? In total I again get just the speed of the encrypted hard drive. I tried without the RAID, with just one partition; that doesn't help. As soon as I don't use the whole SSD, I get the low encryption speed. I also tried

```

cryptsetup  --align-payload=8192 -c aes -s 128 create flash2safe /dev/sdc2

```

But that doesn't help either. I also tried putting one partition at the beginning of the disk and the second at the end, but no change.

It doesn't have anything to do with the SSD itself, as a speed test of its raw partitions shows:

```

hdparm -t /dev/sdc{1,2}

/dev/sdc1:

 Timing buffered disk reads:  572 MB in  3.00 seconds = 190.45 MB/sec

/dev/sdc2:

 Timing buffered disk reads:  666 MB in  3.01 seconds = 221.60 MB/sec

```

And no, that is not a fluke: the partitions consistently show higher data rates than the SSD as a whole, which I also don't understand.

To sum up, I have 3 questions:

1. What is wrong with the size calculation of fdisk (115 GB vs 107 GB)?

2. Why is a smaller encrypted partition slower?

3. Why do the partitions show a higher data rate than the whole disk?

Thanks for the help

----------

## avx

 *Quote:*   

> What is wrong with the size calculation of fdisk (115 GB vs 107 GB)?

Nothing, read up on GB vs GiB and how everyone throws them around without knowing which to use when.
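
The conversion itself is easy to verify:

```shell
# fdisk reports decimal gigabytes (10^9 bytes); dividing by 2^30
# gives the binary gibibytes you expected.
awk 'BEGIN {
  b = 115033153536
  printf "%.2f GB (decimal)\n", b / 1e9    # 115.03
  printf "%.2f GiB (binary)\n", b / 2^30   # 107.13
}'
```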

For the rest, hdparm is far from being a real benchmark; there are too many possible influences for it to be reliable.

----------

## weingbz

 *avx wrote:*   

>  *Quote:*   What is wrong with the size calculation of fdisk (115 GB vs 107 GB)? Nothing, read up on GB vs GiB and how everyone throws them around without knowing which to use when.
> 
> 

 

Yeah thanks, I noticed that after I posted. So that was the easy question  :Smile: .

 *avx wrote:*   

> 
> 
> For the rest, hdparm is far from being a benchmark, too many possible influences to be reliable.

 

That's not true, but here is another test with pretty much the same results:

```

dd if=/dev/sda1 of=/dev/null bs=1024 count=1M

1048576+0 records in

1048576+0 records out

1073741824 bytes (1.1 GB) copied, 9.56513 s, 112 MB/s

dd if=/dev/sdc1 of=/dev/null bs=1024 count=1M

1048576+0 records in

1048576+0 records out

1073741824 bytes (1.1 GB) copied, 5.21636 s, 206 MB/s

dd if=/dev/sdc2 of=/dev/null bs=1024 count=1M

1048576+0 records in

1048576+0 records out

1073741824 bytes (1.1 GB) copied, 4.49405 s, 239 MB/s

dd if=/dev/sdc of=/dev/null bs=1024 count=1M

1048576+0 records in

1048576+0 records out

1073741824 bytes (1.1 GB) copied, 5.33495 s, 201 MB/s

dd if=/dev/mapper/hdsafe of=/dev/null bs=1024 count=1M

1048576+0 records in

1048576+0 records out

1073741824 bytes (1.1 GB) copied, 13.5308 s, 79.4 MB/s

dd if=/dev/mapper/flash1safe of=/dev/null bs=1024 count=1M

1048576+0 records in

1048576+0 records out

1073741824 bytes (1.1 GB) copied, 18.435 s, 58.2 MB/s

dd if=/dev/mapper/flash2safe of=/dev/null bs=1024 count=1M

1048576+0 records in

1048576+0 records out

1073741824 bytes (1.1 GB) copied, 18.6192 s, 57.7 MB/s

dd if=/dev/md1 of=/dev/null bs=1024 count=1M

1048576+0 records in

1048576+0 records out

1073741824 bytes (1.1 GB) copied, 13.0981 s, 82.0 MB/s

```

I could run other disk transfer-rate benchmarks, but I know from earlier comparisons that hdparm gives similar results to dd and to bonnie. The results are repeatable (to within a few MB/s) and consistent: the flashsafes are slow, the RAID is a bit faster than the single encrypted hard disk, and the partitions are faster than the whole disk. Always. I repeated this quite a few times.
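
Two things that make dd runs more comparable: dropping the page cache between runs, and using a bigger block size than the bs=1024 above. A sketch (it reads from /dev/zero so it's harmless to try; substitute the real device, e.g. /dev/sdc1):

```shell
# Flush dirty pages and drop the page cache so repeated reads hit the
# device instead of RAM (drop_caches needs root, hence the fallback).
sync
echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || echo "need root to drop caches"

# Same 1 GiB read as above, but with 1 MiB blocks instead of a million
# 1 KiB reads; /dev/zero stands in for the real device here.
dd if=/dev/zero of=/dev/null bs=1M count=1024
```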

----------

## weingbz

In case it interests someone: I have found a reason for the slower speed. It seems the alignment of the partitions is really important for the decryption speed.

I tried

```

fdisk /dev/sdc

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4, default 1): 1

First sector (2048-224674127, default 2048):

Using default value 2048

Last sector, +sectors or +size{K,M,G} (2048-224674127, default 224674127): +32G

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4, default 2):

Using default value 2

First sector (67110912-224674127, default 67110912):

Using default value 67110912

Last sector, +sectors or +size{K,M,G} (67110912-224674127, default 224674127): +32G

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

```

When I test the speed now I get:

```

hdparm -t /dev/{sdc1,sdc2,mapper/flash1safe,mapper/flash2safe,md1}

/dev/sdc1:

 Timing buffered disk reads:  474 MB in  3.00 seconds = 157.91 MB/sec

/dev/sdc2:

 Timing buffered disk reads:  506 MB in  3.01 seconds = 168.29 MB/sec

/dev/mapper/flash1safe:

 Timing buffered disk reads:  230 MB in  3.00 seconds =  76.58 MB/sec

/dev/mapper/flash2safe:

 Timing buffered disk reads:  232 MB in  3.01 seconds =  77.14 MB/sec

/dev/md1:

 Timing buffered disk reads:  242 MB in  3.02 seconds =  80.26 MB/sec

```

So the raw partitions have gotten slower, but the encrypted speed is now where I would have expected it. Sadly, the RAID doesn't seem to help much  :Sad: . I still don't understand it.

Next I tried adding more partitions to the RAID; this seems to speed things up:

3 16G partitions:

```

hdparm -t /dev/{sdc1,sdc2,sdc3,mapper/flash1safe,mapper/flash2safe,mapper/flash3safe,md1}

/dev/sdc1:

 Timing buffered disk reads:  404 MB in  3.00 seconds = 134.53 MB/sec

/dev/sdc2:

 Timing buffered disk reads:  670 MB in  3.01 seconds = 222.77 MB/sec

/dev/sdc3:

 Timing buffered disk reads:  464 MB in  3.01 seconds = 154.21 MB/sec

/dev/mapper/flash1safe:

 Timing buffered disk reads:  226 MB in  3.02 seconds =  74.74 MB/sec

/dev/mapper/flash2safe:

 Timing buffered disk reads:  236 MB in  3.01 seconds =  78.44 MB/sec

/dev/mapper/flash3safe:

 Timing buffered disk reads:  232 MB in  3.01 seconds =  76.97 MB/sec

/dev/md1:

 Timing buffered disk reads:  358 MB in  3.02 seconds = 118.46 MB/sec

```

4 16G partitions:

```

hdparm -t /dev/{sdc1,sdc2,sdc3,sdc4,mapper/flash1safe,mapper/flash2safe,mapper/flash3safe,mapper/flash4safe,md1}

/dev/sdc1:

 Timing buffered disk reads:  436 MB in  3.01 seconds = 144.68 MB/sec

/dev/sdc2:

 Timing buffered disk reads:  622 MB in  3.00 seconds = 207.23 MB/sec

/dev/sdc3:

 Timing buffered disk reads:  502 MB in  3.00 seconds = 167.22 MB/sec

/dev/sdc4:

 Timing buffered disk reads:  576 MB in  3.01 seconds = 191.62 MB/sec

/dev/mapper/flash1safe:

 Timing buffered disk reads:  230 MB in  3.01 seconds =  76.33 MB/sec

/dev/mapper/flash2safe:

 Timing buffered disk reads:  238 MB in  3.01 seconds =  79.13 MB/sec

/dev/mapper/flash3safe:

 Timing buffered disk reads:  232 MB in  3.00 seconds =  77.28 MB/sec

/dev/mapper/flash4safe:

 Timing buffered disk reads:  232 MB in  3.01 seconds =  77.20 MB/sec

/dev/md1:

 Timing buffered disk reads:  398 MB in  3.01 seconds = 132.41 MB/sec

```

5 16G partitions:

```

hdparm -t /dev/{sdc1,sdc2,sdc3,sdc5,sdc6,mapper/flash1safe,mapper/flash2safe,mapper/flash3safe,mapper/flash4safe,mapper/flash5safe,md1}

/dev/sdc1:

 Timing buffered disk reads:  350 MB in  3.02 seconds = 115.85 MB/sec

/dev/sdc2:

 Timing buffered disk reads:  644 MB in  3.00 seconds = 214.36 MB/sec

/dev/sdc3:

 Timing buffered disk reads:  502 MB in  3.01 seconds = 166.90 MB/sec

/dev/sdc5:

 Timing buffered disk reads:  574 MB in  3.00 seconds = 191.13 MB/sec

/dev/sdc6:

 Timing buffered disk reads:  678 MB in  3.00 seconds = 225.66 MB/sec

/dev/mapper/flash1safe:

 Timing buffered disk reads:  230 MB in  3.02 seconds =  76.05 MB/sec

/dev/mapper/flash2safe:

 Timing buffered disk reads:  236 MB in  3.01 seconds =  78.29 MB/sec

/dev/mapper/flash3safe:

 Timing buffered disk reads:  232 MB in  3.02 seconds =  76.90 MB/sec

/dev/mapper/flash4safe:

 Timing buffered disk reads:  230 MB in  3.02 seconds =  76.13 MB/sec

/dev/mapper/flash5safe:

 Timing buffered disk reads:  234 MB in  3.02 seconds =  77.52 MB/sec

/dev/md1:

 Timing buffered disk reads:  414 MB in  3.02 seconds = 137.10 MB/sec

```

If I look at my CPU monitor, both cores are maxed out with 4 partitions. Since about 160 MB/s is what I would get with perfect parallelization, 140 MB/s isn't too bad. I guess I will have to live with that until I upgrade to a processor with AES-NI.
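
Whether a CPU has AES-NI shows up as the "aes" flag in /proc/cpuinfo (on x86):

```shell
# The "aes" flag in the cpuinfo flags line indicates AES-NI support.
if grep -qw aes /proc/cpuinfo; then
  echo "AES-NI available"
else
  echo "no AES-NI"
fi
```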

Also, it seems that for writes the AES encryption already uses all cores:

```

dd bs=1M count=256 if=/dev/zero of=/dev/md1 conv=fdatasync

256+0 records in

256+0 records out

268435456 bytes (268 MB) copied, 3.48638 s, 77.0 MB/s

dd bs=1M count=256 if=/dev/zero of=/dev/mapper/flash1safe conv=fdatasync

256+0 records in

256+0 records out

268435456 bytes (268 MB) copied, 3.52398 s, 76.2 MB/s

dd bs=1M count=256 if=/dev/zero of=/dev/mapper/flash2safe conv=fdatasync

256+0 records in

256+0 records out

268435456 bytes (268 MB) copied, 3.46785 s, 77.4 MB/s

dd bs=1M count=256 if=/dev/zero of=/dev/mapper/flash3safe conv=fdatasync

256+0 records in

256+0 records out

268435456 bytes (268 MB) copied, 3.50419 s, 76.6 MB/s

dd bs=1M count=256 if=/dev/zero of=/dev/mapper/flash4safe conv=fdatasync

256+0 records in

256+0 records out

268435456 bytes (268 MB) copied, 3.4707 s, 77.3 MB/s

```

I can see on my CPU monitor that writing to even one flashsafe maxes out both cores.

The disk itself is not the limiting factor:

```

dd bs=1M count=256 if=/dev/zero of=/dev/sdc1 conv=fdatasync

256+0 records in

256+0 records out

268435456 bytes (268 MB) copied, 1.23438 s, 217 MB/s

dd bs=1M count=256 if=/dev/zero of=/dev/sdc2 conv=fdatasync

256+0 records in

256+0 records out

268435456 bytes (268 MB) copied, 1.28325 s, 209 MB/s

dd bs=1M count=256 if=/dev/zero of=/dev/sdc3 conv=fdatasync

256+0 records in

256+0 records out

268435456 bytes (268 MB) copied, 1.23298 s, 218 MB/s

dd bs=1M count=256 if=/dev/zero of=/dev/sdc4 conv=fdatasync

256+0 records in

256+0 records out

268435456 bytes (268 MB) copied, 1.27504 s, 211 MB/s

```

Well, that's another reason to upgrade to AES-NI in the future. That's enough benchmarking for now.

----------

