# Initialize Corsair Force Series 3 SSD

## jancici

hi, I bought this SSD: Corsair Force Series 3 60GB

I have read a few articles about partition alignment and am trying to set up my new drive ...

but I have not been able to find out:

- what is the erase block size? is it 512 KiB?

- what is the block size? is it 4 KiB?

I think the block size is 4 KiB, but I am not sure about the erase block size.

thanks for any tips on the erase block size.

I am reading this article 

http://www.nuclex.org/blog/personal/80-aligning-an-ssd-on-linux

and trying to follow Markus's steps.

I want to create a boot partition of 32 MiB and a root partition with the rest of the space.

Well, let's assume that the erase block size is 512 KiB.

here is the output from fdisk with cylinder units

```
fdisk  /dev/sdc

Command (m for help): u

Changing display/entry units to cylinders (DEPRECATED!)

Command (m for help): p

Disk /dev/sdc: 60.0 GB, 60022480896 bytes

32 heads, 32 sectors/track, 114483 cylinders

Units = cylinders of 1024 * 512 = 524288 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x1aadf6e2

   Device Boot      Start         End      Blocks   Id  System

/dev/sdc1               3          64       31744   83  Linux

/dev/sdc2              65      114483    58582528   83  Linux

```

and with sector units

```

Command (m for help): u

Changing display/entry units to sectors

Command (m for help): p

Disk /dev/sdc: 60.0 GB, 60022480896 bytes

32 heads, 32 sectors/track, 114483 cylinders, total 117231408 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x1aadf6e2

   Device Boot      Start         End      Blocks   Id  System

/dev/sdc1            2048       65535       31744   83  Linux

/dev/sdc2           65536   117230591    58582528   83  Linux

Command (m for help):
```

do you think it is correct?

from my point of view it is okay

start of partitions:

2048 sectors * 512 B = 1048576 B

1048576 B / (512*1024) = 2 - this is okay

65536 sectors * 512 B = 33554432 B

33554432 B / (512*1024) = 64 - this is also okay because it is an integer
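The divisibility check above can be scripted (the 512 KiB erase block is still an assumption, since Corsair does not publish the exact figure):

```shell
# Check that each partition start is a whole number of (assumed) 512 KiB
# erase blocks. Start sectors are taken from the fdisk output above.
erase_block=$((512 * 1024))   # assumed erase block size in bytes
for start in 2048 65536; do
    bytes=$((start * 512))    # logical sectors are 512 B
    if [ $((bytes % erase_block)) -eq 0 ]; then
        echo "start sector $start: aligned ($((bytes / erase_block)) erase blocks in)"
    else
        echo "start sector $start: NOT aligned"
    fi
done
```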

----------

## jancici

the next step is creating the filesystem

I decided on ext4. I am not sure about the parameters stride=32, stripe-width=32.

My question is: how are those parameters related to a single HDD/SSD?

I am not using any kind of RAID. What is the chunk size in my case, and how do I calculate stride and stripe-width?

thanks

I ran this for the BOOT partition

```
mkfs.ext4 -b 4096 -O ^has_journal -L BOOT /dev/sdc1
```

and this for root

```
mkfs.ext4 -b 4096  -L ROOT /dev/sdc2
```
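For what it's worth, stride/stripe-width are RAID tuning knobs; some alignment guides compute stride for an SSD as erase block size divided by filesystem block size. A sketch of that arithmetic (the 512 KiB erase block is an assumption; the 4 KiB block matches the -b 4096 used above):

```shell
# stride = (assumed) erase block size / ext4 block size, in filesystem blocks
erase_kib=512    # assumed erase block size in KiB (not published by Corsair)
block_kib=4      # ext4 block size, from -b 4096
stride=$((erase_kib / block_kib))
echo "stride=$stride"    # prints stride=128

# With no RAID there is no striping, so stripe-width is usually left unset.
# If a guide insists on setting it, one hypothetical invocation would be:
# mkfs.ext4 -b 4096 -E stride=$stride -L ROOT /dev/sdc2
```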

thank you for your comments. I would like to start using this SSD soon, so I still have some time to tune it and set it up correctly.

I am planning to put /home, /var and /usr/portage on another HDD. I am already using /tmp in RAM.

----------

## HeissFuss

You've got the main thing right.  Fdisk set your partitions to a 1 MB offset, so they are aligned.

As far as the ext4 stride/stripe settings go, I really doubt they'll make much difference.  Assuming you have a 512K erase block size, though, you'd want stride=1024, and then don't bother with stripe-width (if you had to set it to something, you'd set it to 1.)

You can test your partition with dd (assuming the main thing you're concerned with is write speed.)

Since you have a SandForce controller, you'll need to test with something random (zeros would be way too fast.)

This creates a random 1GB file called /testfile.dd

```
dd if=/dev/urandom of=/testfile.dd bs=1M count=1024 conv=fsync
```

Assuming you have 3+ GB of RAM, testfile.dd will be cached.

Rewrite it with:

```
dd if=/testfile.dd of=/realtestfile.dd bs=1M count=1024 conv=fsync
```

You can re-run the second dd a couple of times to get an average.

For reads, you can just use 'hdparm -tT /dev/sdc2'
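The steps above can be collected into one small script (device and file paths are the ones used in this thread; adjust them for your system):

```shell
#!/bin/sh
# Sequential-write benchmark as described above.
SRC=/testfile.dd
DST=/realtestfile.dd

# 1. Create a 1 GiB file of random (incompressible) data once.
dd if=/dev/urandom of="$SRC" bs=1M count=1024 conv=fsync

# 2. Rewrite it three times; with 3+ GB of RAM the source stays in the
#    page cache, so this measures write speed rather than read speed.
for run in 1 2 3; do
    dd if="$SRC" of="$DST" bs=1M count=1024 conv=fsync
done

# 3. Read benchmark (cached and buffered reads).
hdparm -tT /dev/sdc2
```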

----------

## jancici

thanks

I ran the test with a 1 GB file of random content

```
dd if=testfile.dd of=realtestfile.dd bs=1M count=1024 conv=fsync

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB) copied, 12.6844 s, 84.7 MB/s

root@guliver /media/ROOT # dd if=testfile.dd of=realtestfile.dd bs=1M count=1024 conv=fsync

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB) copied, 12.6039 s, 85.2 MB/s

root@guliver /media/ROOT # dd if=testfile.dd of=realtestfile.dd bs=1M count=1024 conv=fsync

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB) copied, 12.6312 s, 85.0 MB/s
```

85 MB/s; well, I don't know if this is good or not

so I ran the same test on a classic HDD

```
dd if=/realtestfile.dd of=/realtestfile_dva.dd bs=1M count=1024 conv=fsync

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB) copied, 9.82114 s, 109 MB/s

root@guliver /media/ROOT # dd if=/realtestfile.dd of=/realtestfile_dva.dd bs=1M count=1024 conv=fsync

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB) copied, 9.50574 s, 113 MB/s

root@guliver /media/ROOT # dd if=/realtestfile.dd of=/realtestfile_dva.dd bs=1M count=1024 conv=fsync

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB) copied, 10.0342 s, 107 MB/s
```

this gives better numbers.

so, conclusion? should I tune something? thanks

----------

## HeissFuss

Sequential write is about the only metric where magnetic drives still come close to SSDs.

I couldn't find stats on sequential incompressible write speed for your 60GB drive, but it looks like the 120GB model can do ~138MB/s.  The 60 is probably a bit slower than that, depending on how they designed it.

http://www.anandtech.com/bench/Product/400

Are you using SATA 6Gb/s, and if so, what type of controller?  Are you using AHCI or legacy mode?

Can you repeat the test with the initial file created from /dev/zero instead of /dev/urandom?

----------

## jancici

thanks for answers and infos

creating the ZERO file

```
dd if=/dev/zero of=testfile.dd bs=1M count=1024 conv=fsync

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB) copied, 10.1716 s, 106 MB/s
```

test

```
root@guliver /media/ROOT # dd if=testfile.dd of=realtestfile.dd bs=1M count=1024 conv=fsync

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB) copied, 4.48724 s, 239 MB/s

root@guliver /media/ROOT # dd if=testfile.dd of=realtestfile.dd bs=1M count=1024 conv=fsync

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB) copied, 4.42384 s, 243 MB/s

root@guliver /media/ROOT # dd if=testfile.dd of=realtestfile.dd bs=1M count=1024 conv=fsync

1024+0 records in

1024+0 records out

1073741824 bytes (1.1 GB) copied, 4.41844 s, 243 MB/s
```

this looks much better  :Smile:  the SandForce is doing its job

I ran the same test on the classic HDD; it is slower in this case.

I have only SATA II on my mainboard, and I am using AHCI mode

```

lspci

...

00:1f.2 SATA controller: Intel Corporation 82801JI (ICH10 Family) SATA AHCI Controller

...
```
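Two quick ways to double-check the controller mode and the negotiated link speed (output formats vary between kernel versions):

```shell
# Which kernel driver is bound to the SATA controller (should say ahci)
lspci -k | grep -A 2 -i sata

# Negotiated link speed: "3.0 Gbps" means SATA II, "6.0 Gbps" means SATA III
dmesg | grep -i 'SATA link up'
```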

----------

## PM17E5

Thank you, I found this thread very informative, because I have a drive with exactly the same specifications and I was implementing the same partition scheme and filesystem on my system. What are your preferred fstab options for root and boot? I'm assuming you have noatime and discard set? I was trying to make sense of the various alignment methods online, but there are so many ways to go about it that one could read for a good day or two and still find differing opinions on how to do it right.

----------

## jancici

meanwhile I have changed from ext4 to btrfs

here is fstab

```
/dev/sda1   /boot   ext4    noatime                        1 2

/dev/sda2   /       btrfs   noatime,nodiratime,discard     0 1
```

it's good that it's helpful ...

----------

## PM17E5

How is btrfs working out for you? I've heard it doesn't have any recovery or repair tools and that it's unstable. Is there a noticeable difference in performance?

----------

