# Postfix redundancy, resilience and speed - Your thoughts

## trossachs

For the next new server to join the family, I have said previously that I will be entertaining the mirrored option to ensure a little resilience. SCSI seems on the cards due to speed and reliability. But I was thinking of mirroring two drives to hold /home and other important areas, while using a single drive for the system.

I am finding now that accessing webmail, with /home, Postfix and Apache all on the same disk, really slows everything down. What are your thoughts or experiences with this?

The disks I am looking at are some 15,000 RPM, but they are very expensive. Could I simply use IDE drives and rely on the fact that if one fell over, the other would immediately kick in and keep downtime to zero?

----------

## kashani

If you're truly having I/O issues, which you should verify using iostat from the sysstat package, adding more disks is going to be cheaper than adding better disks.
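As a sketch of that check (assuming the sysstat package is installed; the interval and count are arbitrary examples), high `%util` or long `await` values on the disk holding /home would confirm an I/O bottleneck:

```shell
# Sample extended per-device stats: 3 samples at 5-second intervals.
if command -v iostat >/dev/null; then
    iostat -x 5 3
fi
# The raw per-device counters iostat reads are always available here,
# even before sysstat is installed:
head -n 5 /proc/diskstats
```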

RAID 1 with 2 total drives is one drive writing data and two drives reading.

RAID 5 with 5 total drives is five drives writing data plus parity and five drives reading.

If you can use cheaper SATA/IDE drives and soft RAID you're probably going to triple your space and throughput at half the cost. A real hardware RAID card, not Promise, HighPoint or other bullshit, with 32MB or more of cache will significantly increase available I/O in almost all situations.

With RAID 5 you can still lose a disk and keep running. If you feel you need more protection, use five drives: four in the RAID set and one as a hot spare. You need at least 3 drives to do RAID 5.
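If you go the software-RAID route, that five-drive layout can be sketched with mdadm (device names here are hypothetical placeholders; this needs root and blank partitions):

```shell
# Four active RAID 5 members plus one hot spare; mdadm rebuilds onto
# the spare automatically if a member drive fails.
mdadm --create /dev/md0 --level=5 \
      --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 \
      --spare-devices=1 /dev/sdf1
cat /proc/mdstat    # watch the initial resync progress
```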

kashani

----------

## trossachs

With this in mind, should I concentrate on higher RPMs than the standard 7200?

----------

## kashani

Depends on your cash.

Looking around, it appears that 10k RPM SATA drives are $100 at the 40 GB level, whereas you can get 7200 RPM 80 GB drives for $60.

I did, however, manage to serve 200 MB/s of streaming video off a 16-drive IDE 7200 RPM RAID 5 setup. Of course, I was doing almost no writing, had 1 GB of cache in the enclosure, and each drive was on its own IDE channel.

kashani

----------

## trossachs

Should be able to stretch to 10,000 rpm drives.

----------

## suso

Apache and postfix are slowing your system down???

How many people are you serving with this?  How much is it used?  How are you judging that it is slow?  Is this over a slow network connection?

Honestly, you should be able to run Postfix, IMAP, Apache and a webmail system on a P3 running software RAID on 5400rpm IDE drives and serve about 10 simultaneous webmail users with ease.  So perhaps drive speed is not the problem you are experiencing.

----------

## trossachs

I am running 10 users. System details: Athlon 1.3 GHz CPU, 7200 RPM IDE drives, 512 MB RAM. The network connection is not a problem, as all users connect via the internet; I am the only one served directly over the LAN.

I think my problem is perhaps that I should incorporate a RAID disk set. An attempt was made with mirrored drives, but one half of the mirror collapsed 10 days ago. Now the system is only supported by a single IDE drive. What RAID setup are you referring to, Suso?

----------

## j-m

You don't need any RAID for 10 (!!!) people on such hardware. If it's slow, it is not caused by Postfix and/or Apache - check your hardware (DMA enabled, etc.)

----------

## nobspangle

 *kashani wrote:*   

> I did, however, manage to serve 200 MB/s of streaming video off a 16-drive IDE 7200 RPM RAID 5 setup. Of course, I was doing almost no writing, had 1 GB of cache in the enclosure, and each drive was on its own IDE channel.

 

You can get astounding speeds from IDE or SATA RAID, often more than SCSI. With SCSI you are limited to 320 MB/s on one channel; since SATA and IDE use one channel per drive, provided you have a fast system bus and plenty of drives you can get very quick indeed.

I would recommend RAID on any setup where you would be more than a bit annoyed if you lost all your data due to a failed disk. RAID 1 on Gentoo is very easy to implement, even using the array for your system.

You'll need quite a few users to get into the realms of requiring SCSI drives; maybe look at 10,000 RPM SATA drives like Raptors. They are basically SCSI disks with a SATA interface. That way you're not forking out for the SCSI controller as well as the drives.

----------

## kashani

 *JulesF wrote:*   

> I am running 10 users. System details: Athlon 1.3Ghz CPU, 7200RPM IDE drives, 512RAM. The network connection is a not a problem as all users connect via the internet; I am the only one served directly over the LAN.
> 
> 

 

You should be having no issues unless there is something really screwy going on. Play with iostat a bit and see if there are actually I/O issues on your partitions.

kashani

----------

## suso

Well, for all we know, those 10 users might be checking their mail constantly and transferring large files. But that's unlikely. I really doubt that your disk slowdown has anything to do with Apache or Postfix (as said by others above).

Also, you shouldn't be trying to speed things up by moving to a RAID setup (IMHO). RAID is mainly for redundancy unless you have special needs (like gigabit network transfers).

----------

## adaptr

 *Quote:*   

> RAID is mainly for redundancy 

 

Like hell it is.

With a proper RAID-5 setup (SATA 150 or SCSI) you can easily double or even triple your total I/O throughput - something you simply will not get from a single drive, no matter what type it is.

But to actually get results, you will need real hardware RAID - like a 3Ware Escalade card, which is not cheap.

Real performance is never cheap.

----------

## trossachs

Well, having lost a drive a few days ago, and now that I am beginning to host paying customers, I have to be a bit more businesslike. The box that I use at the moment has so many nitty-bitty things wrong with it. For example, this morning I went to send mail and the web client said: "cannot contact smtp server." I did some investigation and found that Postfix stopped functioning at around 02.00 this morning. It has NEVER, EVER done this before.

Nothing in the logs! When I tried to start Postfix via the init.d script it would not run. I re-emerged it, as by this time it was 10.00 and I was desperate. Nothing! Then I started Postfix with /usr/sbin/postfix and it kicked into life. Now, logs are the lifeblood of any system; what the hell do you do when your eyes are taken away from you?

I ran "mailq" and there was tons of stuff in it. Released the queue and all is OK now. But how the hell do you find out what was wrong with the box to make Postfix come to a halt? With a drive going last week and Postfix this morning, I really need to get on and build this new server. So RAID it will definitely have to be.
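For reference, that queue triage boils down to two commands with Postfix's standard tools (postqueue is the native interface; mailq is its Sendmail-compatible alias):

```shell
# Summarise the queue: the last line gives request count and total size.
mailq | tail -n 1
# Flush: ask Postfix to attempt delivery of all queued mail now.
postqueue -f
```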

The other thing is that it is always the same user who takes ages logging into mail: ME! Everyone else can log in quickly, move between maildirs, compose mail, change contacts etc., all within a whisker of a second. For me, it can sometimes take 5-7 minutes to open a composition window. Could my Trash holding some 159,000 items have anything to do with it? Courier has a 7-day Trash empty config, but the bin is always full!

----------

## adaptr

Quick though maildirs with Courier-IMAP are, I think close to 160K messages in any one mailbox will do the trick  :Wink: 

Tip 1: disable server-side mailbox sorting - it will save your sanity with that many messages in a single box.

Tip 2: investigate why it won't empty the trash (duh, I know).

It's always worked flawlessly for me...
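While investigating, a manual purge along these lines can at least clear the backlog. This is a hedged sketch: the path is a stand-in for the real mailbox, and the demo skeleton below exists only so the command has something safe to act on.

```shell
# Delete Trash messages older than 7 days from a Courier-style Maildir.
TRASH=/tmp/demo-maildir/.Trash                 # hypothetical location
mkdir -p "$TRASH/cur" "$TRASH/new"             # demo skeleton only
touch -d '10 days ago' "$TRASH/cur/old-msg"    # simulate a stale message
touch "$TRASH/cur/new-msg"                     # a recent message survives
find "$TRASH/cur" "$TRASH/new" -type f -mtime +7 -delete
```

Point `TRASH` at the real folder (e.g. a user's `Maildir/.Trash`) and drop the mkdir/touch lines before using it in anger.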

On another note, you would indeed be well advised to look into 2 memory-related issues:

1. Server RAM - get as much as you can afford, with a minimum of 512MB, but the more the better.

ECC too, if the box supports it.

2. RAID-5 - yes, you definitely want it.

It will - as I said earlier - give a tremendous performance boost, and some redundancy thrown in for kicks.

As for the log problem - what do you use for logging, and what drive system is your root partition on ?

RAID-5 is at least as important for your root partition as it is for your hosting partition(s).

Probably more.

----------

## suso

 *adaptr wrote:*   

>  *Quote:*   RAID is mainly for redundancy  
> 
> Like hell it is.
> 
> 

 

Uh, it is.  Last time I checked, the R in RAID stood for redundant.  Sure, RAID-0 was about increasing possible space, but the number 1 reason why most people choose RAID-1, 4 or 5 is for redundancy, with speed as a side effect.

----------

## trossachs

Have just been looking at the 3Ware Escalade web site and am quite impressed. What is the difference between serial and parallel RAID?

----------

## adaptr

 *suso wrote:*   

> the number 1 reason why most people choose RAID-1, 4 or 5 is for redundancy, with speed as a side effect.

 

Er - no.

The speed you can get out of a 4-disk 10K rpm RAID-5 SATA stack is waaay beyond anything you could achieve without it.

In that case - if what you need is raw speed - RAID-5 is actually the first thing to look at.

Okay, you could also go for RAID-0 with multiple striped disks, say 4x for a quadrupling in speed - but you won't get it, since there is still the seek issue (there is always the seek issue), and the only thing that copes with that is a good controller with heaps of cache - which will also do RAID-5 in the same package.

I'd say we're talking about speed with the added side effect of redundancy.

That's how I would use it; the definition of RAID is irrelevant here.

----------

## trossachs

Well, in the end, for a web server speed is, yes, very important. But if you have not got a big enough pipe, either LAN or WAN, to shove the throughput through, then it is somewhat irrelevant.

Both of your arguments are invaluable to me. For the moment, resilience is what I need most. But as the enterprise grows, speed is what I will be looking at. I do not think I will be able to stretch to four 10k drives due to the cost, so 7,200 RPM it will have to be.

Serial or parallel, what's the difference?

----------

## kashani

 *JulesF wrote:*   

> 
> 
> Serial or parallel, what's the difference?

 

Parallel is what we have now, with the whole master/slave nonsense. Serial is the new stuff, with no master/slave as each drive is on its own channel.

In higher-end systems there is no difference between the two, because only idiots use master/slave in a RAID set. All your real RAID cards are going to have a single channel per drive, which is optional with regular IDE and enforced with SATA.

For the record, RAID 0+1 is the fastest reading and writing, followed by RAID 5+1, followed by RAID 0, followed by RAID 5. RAID 5 is nice since it writes at a reasonable rate, reads very quickly, and has some fault tolerance without doubling your costs like 0+1. Many places are starting to use RAID 5+1 as it avoids the problem of losing half your I/O when you lose a single disk, like 0+1 does. Unfortunately, cards that support RAID 5+1 and 0+1 tend to be more expensive, and neither of those choices is available in software RAID. To be honest, unless you're doing crazy high-end DB stuff you'd never need to use them.

kashani

----------

## trossachs

Well, I have set up RAID 1 with my 3Ware 7006-3 card. All looking good. Is it possible for me to configure DMA for both drives now that they are part of a RAID set, or should this have been done separately before they joined the array?

Whenever I run hdparm on the other drives, I get this:

```
foo domes # hdparm -d1 /dev/hda

/dev/hda:

 setting using_dma to 1 (on)

 HDIO_SET_DMA failed: Operation not permitted

 using_dma    =  0 (off)
```

Is this some BIOS issue independent of the OS that I should be looking at?

FYI:

```
foo software # hdparm -tT /dev/sda

/dev/sda:

 Timing cached reads:   888 MB in  2.00 seconds = 444.00 MB/sec

 Timing buffered disk reads:  118 MB in  3.00 seconds =  39.33 MB/sec
```

```
foo software # hdparm -tT /dev/hda

/dev/hda:

 Timing cached reads:   816 MB in  2.00 seconds = 408.00 MB/sec

 Timing buffered disk reads:   12 MB in  3.16 seconds =   3.80 MB/sec
```

Should I be content with the way things are given the report above?

----------

## adaptr

Definitely not!

40 MB/s for SATA is more than adequate, but only 4 MB/s on an EIDE drive is ridiculous.

Run

```
hdparm /dev/hda
```

and check whether:

- DMA is set(table)

- multiword transfers (32-bit) are enabled

- read-ahead is sane (8 or 16 sectors)

Only the absence of all of these* would produce such horrendous performance.

* or a really crappy SiS/PCChips/ALi IDE controller...

Check whether DMA is enabled in the BIOS first...
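Those three settings can be applied (and re-checked) in one go. This is a sketch only, and as the output above shows, it will keep failing with "Operation not permitted" until the kernel has a driver for the IDE chipset:

```shell
# -d1: DMA on; -c1: 32-bit I/O; -a16: read-ahead of 16 sectors.
hdparm -d1 -c1 -a16 /dev/hda
hdparm /dev/hda    # verify the settings actually stuck
```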

----------

## Xerxes83

Interesting...

 *Quote:*   

> megumi root # hdparm -tT /dev/hda
> 
> /dev/hda:
> 
>  Timing cached reads:   956 MB in  2.00 seconds = 478.00 MB/sec
> ...

 

So next time I am near my server I had better start tweaking  :Smile:  The HD is a Maxtor 60GB 7200 RPM.

----------

## nobspangle

 *kashani wrote:*   

>  Unfortunately, cards that support RAID 5+1 and 0+1 tend to be more expensive, and neither of those choices is available in software RAID.

 

I'm pretty sure you can achieve any level of RAID in software; you just combine block devices.

To get 5+1 you would create two RAID 5 arrays (say /dev/md0 /dev/md1) then use those two arrays to create a RAID 1

As far as I know the RAID system in linux is capable of combining any block devices - RAID floppy drives anyone?
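The nesting described above would look roughly like this with mdadm (purely illustrative device names; whether it performs well is another matter):

```shell
# Two three-drive RAID 5 sets...
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[abc]1
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sd[def]1
# ...then mirror the two arrays into a RAID 1, giving RAID 5+1.
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1
```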

----------

## adaptr

Okay, granted - but wouldn't setting up softRAID 5+1 incur a lot of overhead?

Starting with the parity calculations, on top of which it will have to do double reads and writes for the whole system to mirror that.

There has to be a point at which the overhead catches up with the RAID throughput, no?

The faster the RAID setup the more CPU power you will need.

----------

## nobspangle

I didn't say it would work well, just that you could do it  :Wink: 

----------

## trossachs

Hey Xerxes83. My results are like yours and each time I try and start DMA on my /dev/hda or my /dev/sda drives, it always fails. Any ideas?

----------

## trossachs

adaptr, further to your post a little earlier, I am unable to bring DMA online for the single IDE drive. I get the "Operation not permitted" error. There is also no setting in the BIOS to change this.

During bootup, I get a message stating that DMA is not active on my drives. What do you suggest?

----------

## Xerxes83

I also tried it, and got the same problem:

 *Quote:*   

> megumi root # hdparm -d1 /dev/hda
> 
> /dev/hda:
> 
>  setting using_dma to 1 (on)
> ...

 

It seems you have to compile support for your chipset into the kernel in order to solve the problem. If I have time this weekend I'll try and report the results back here.

In my case it should be:

ATA/IDE/MFM/RLL support  --->

  IDE, ATA and ATAPI Block devices  --->

    < >     AMD and nVidia IDE support

----------

## adaptr

 *JulesF wrote:*   

> adaptr, further to your post a little earlier, I am unable to bring online DMA for the single IDE drive. I get the "Operation not permitted" error. There is also no setting in the BIOS to change this. 
> 
> During bootup, I get a message stating that DMA is not active on my drives. What do you suggest?

 

Well, check - as the previous post suggests - whether you really have everything set up properly in the kernel, and what chipset this is, and what type of drive.

And don't say that the BIOS offers no settings for this - every BIOS does, but you probably don't know what you should be looking for.

Recurring problems with DMA are either a kernel issue or bad hardware.

----------

## trossachs

Xerxes83, under which heading in the kernel should I be looking? I have searched everywhere and cannot find a mention of DMA. adaptr, I have definitely painstakingly looked through the BIOS of my box and can find no separate heading for DMA.

I will wait for Xerxes83 and see what he (or she  :Wink: ) comes up with!

Last edited by trossachs on Sun Jan 09, 2005 1:40 pm; edited 1 time in total

----------

## adaptr

That's because there doesn't need to be one - as I said, you don't know what you should be looking for.

Check the Advanced Chipset page, if you have one, and see whether anything resembling (U)DMA, PIO or the like is set for each IDE channel, or even for each device.

It might be a smart idea to mention what mainboard you have, and what BIOS.

----------

## Xerxes83

It worked!  :Very Happy: 

 *Quote:*   

> megumi / # modprobe amd74xx
> 
> megumi / # hdparm -d1 /dev/hda
> 
> /dev/hda:
> ...

 

My motherboard has an nForce chipset (A7N266-VM). If you have an AMD 755/756/766/8111 or nVidia nForce/2/2s/3/3s/CK804/MCP04 chipset, then select the same option as I have. See my previous post for the location within the kernel configuration menu (make menuconfig). If you have another chipset, then check if your chipset is listed in the 'IDE, ATA and ATAPI Block devices' submenu.

And now the speed results:

 *Quote:*   

> megumi / # hdparm -tT /dev/hda
> 
> /dev/hda:
> 
>  Timing cached reads:   976 MB in  2.00 seconds = 488.00 MB/sec
> ...

 

----------

## trossachs

My motherboard is a Gigabyte GA-7VKMLS KM266 AGP. Is it possible to enable DMA when both drives are now part of a RAID set: /dev/sda?

----------

## Xerxes83

I have no idea if it works with RAID (though I suppose it should), but this is the module you have to compile:

< >     VIA82CXXX chipset support

 *Quote:*   

> CONFIG_BLK_DEV_VIA82CXXX:
> 
> This allows you to configure your chipset for a better use while
> ...

 

 *Quote:*   

>  * VIA IDE driver for Linux. Supported southbridges:
> 
>  *
> 
>  *   vt82c576, vt82c586, vt82c586a, vt82c586b, vt82c596a, vt82c596b,
> ...

 

----------

## trossachs

OK, so I should compile this module into the kernel and then load it up to see if I will be able to enable DMA?

----------

## Xerxes83

Yep (maybe I can win a prize for the shortest correct answer in the history of these forums  :Smile: )

----------

## trossachs

Possibly. Was this compiled as a module, or as something "permanently enabled" in the kernel? You are aware of the difference?

----------

## Xerxes83

I have compiled my chipset support as a module. But I read that some people had to compile it into the kernel to prevent segfaults.

----------

## trossachs

The box does have a history of segmentation issues. Kernel option for me I think!    :Confused: 

----------

## adaptr

The box being your mainboard ?

In that case, I strongly suggest you tweak around with the kernel's APIC and ACPI settings, since a problematic mainboard is indicative of either buggy hardware (unacceptable in modern mainboards) or the wrong kernel options.

Oh and check your memory!

Segfaults are after all primarily a memory issue.

----------

## Xerxes83

After a reboot it says 'using_dma    =  0 (off)'. How can I make the use of DMA permanent? I'm sure I have enabled the "use DMA by default" option in the kernel.

Edit: I have compiled the chipset support into the kernel, and now it works. But that still doesn't explain why it doesn't work as a module...
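On Gentoo, another way to make the setting survive reboots is the init script that sys-apps/hdparm ships (assuming your version provides it; the variable name below comes from its stock config and may differ between versions):

```shell
# /etc/conf.d/hdparm -- sourced by /etc/init.d/hdparm at boot;
# per-drive arguments are passed straight to hdparm.
hda_args="-d1 -c1"
```

then add the service to the boot runlevel with `rc-update add hdparm boot`.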

----------

## screwloose

 *nobspangle wrote:*   

> 
> 
> As far as I know the RAID system in linux is capable of combining any block devices - RAID floppy drives anyone?

 

I'm actually considering digging out a second floppy drive to try this....... damn you!

----------

## trossachs

adaptr, the previous segmentation issues have been resolved with a new motherboard. I am further inspecting the BIOS for any mention of DMA. Can't see anything just yet, and everything is enabled within the kernel.

What kernel is everyone using? For me: v2.4.26.

----------

