# Overclocking, RAM timings and mencoder stress in Linux

## suicidal_orange_II

I know this isn't really an overclocking forum, but the forums that are aren't Linux forums; the two rarely mix, it seems  :Sad: 

I am currently in the process of overclocking my system, but although it is eight hours Orthos-stable in Windows it crashes in under 10 minutes when re-encoding xvid files to DVD (a real-world stress test that is also doing something useful).  As I'm on a dual core it's encoding two files at a time, just to make sure it's working hard.
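Something like this keeps both cores busy (the mencoder options are just an illustrative MPEG-2 pass, not my exact command):

```shell
#!/bin/sh
# Keep a dual core fully loaded: one encode per core, then wait for both.
# The mencoder flags are illustrative (a generic xvid -> MPEG-2 pass).
encode() {
    # set DRY_RUN=1 to print the command instead of running it
    ${DRY_RUN:+echo} mencoder "$1" -ovc lavc -lavcopts vcodec=mpeg2video \
        -oac copy -o "${1%.avi}.mpg"
}
encode input1.avi &
encode input2.avi &
wait   # returns only after both background encodes have exited
```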

On my previous motherboard I couldn't use a 1T command rate at all in Linux; it caused random crashes.  On the new system 1T isn't even stable in Windows, so the fix isn't so easy.  Are there other timings that need to be adjusted?

Does anyone know if mencoder is really a good stress tester (better than Orthos), or is there something that makes the Linux kernel more picky than Windows?  Has anyone else found that Windows-tested overclocks are not at all stable in Linux?

I'd be interested in any experiences, especially ones that disagree with my findings  :Smile: 

Suicidal_Orange

----------

## frostschutz

app-benchmarks/cpuburn? Other projects in app-benchmarks may also be interesting for stressing your system.
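A minimal session might look like this (burnP6 is an assumption - cpuburn ships one burn* binary per CPU family, so pick the one matching your chip, e.g. burnK7 for Athlons):

```shell
#!/bin/sh
# Launch one cpuburn instance per core and let them run until killed.
# burnP6 (Intel P6 family) is an assumption; substitute the burn* binary
# that matches your CPU.
BURN=${BURN:-burnP6}
cores=$(grep -c '^processor' /proc/cpuinfo)
i=0
while [ "$i" -lt "$cores" ]; do
    "$BURN" &
    i=$((i + 1))
done
wait   # cpuburn runs until killed -- watch your temperatures and Ctrl-C it
```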

Personally I'd advise against overclocking your system at all. Stable operation is much more important than the few ticks of speed you can squeeze out of your system. There are very few hardware configurations out there that are worth overclocking (such as fast CPUs that get relabelled as slower versions for marketing/stock reasons, which you then yank back to full speed).

----------

## suicidal_orange_II

I don't need a stress test - mencoder can freeze a system just as easily as a dedicated stress tester, but if it doesn't, it's done something useful (I would consider running a distributed project such as Folding@home, but they wouldn't be happy with unreliable results  :Smile: )

I appreciate your concern for my hardware, but I just got my hands on an engineering sample CPU, so it would be a waste not to (imo)

Guess I was right then, Linux doesn't mix with overclocking  :Sad: 

Suicidal_Orange

----------

## eccerr0r

Ideally you should run lots of different apps, since they all test different parts of the CPU... You never know which part of the CPU is stressed most by the overclock until it fails  :Smile:  It's hard to determine exactly which program will fail...

Just on the basis that the two are different software, it's no surprise that they behave differently.  By the same token you will likely find programs that fail faster in Windows than in Linux, but finding those programs is not a trivial task.

----------

## frostschutz

Try running mencoder in Windows?

----------

## suicidal_orange_II

Mencoder in Windows... why didn't I think of that  :Laughing:  maybe because the only reason I have Windows installed was for stress testing - all my files are stored on XFS so they can't be seen from it.

I guess it's just me then - there is nothing inherently different in the OSes at a lower level, it's just app-specific.  We'll see in a couple of days, when I have time to sort out my testing.  I'm happier with a slower system in Linux than a faster Windows one anyway; gcc 4.3 is so much slower than 4.1 that the overclock doesn't matter much (don't worry, I'm not mad enough to try compiling everything with it, just some non-critical X apps).

No-one seems to have noticed that my main concern is RAM timings.  I'm not sure how different apps would stress memory differently (it's only read or write) but it's possible, so until it's tested I won't argue.

Keep the suggestions coming  :Smile: 

Suicidal_Orange

----------

## Gentree

 *frostschutz wrote:*   

> 
> 
>  There are very few hardware configurations out there that are worth overclocking (such as fast CPUs that get relabeled to slower versions due to marketing/storage reasons which you then yank back to full speed).

 

So you have not understood what you are trying to advise on. All manufacturers have to leave a safety margin in their designs to cover component variation and the wide range of different hardware their products will be used with. This means just about every part of the system is running under spec just to keep things reliable in an unknown environment.

Carefully studied overclocking can claw back all these accumulated losses of performance, if you want to spend the time testing your specific hardware collection and maybe changing a weak link if one is found to be holding things back.

As you can see from my sig it's not just a few ticks; sometimes, oftentimes, it can be well worth the effort if you want to spend the time.

 *suicidal_orange_II wrote:*   

> Mencoder in windows... why didn't I think of that  maybe because the only reason I have windows installed was for stress testing, all my files are stored on xfs so can't be seen in it.
> 
> I guess its just me then, there is nothing inherently different in the OS's at a lower level, its just app specific. 

 No, there are fundamental differences between the Linux kernel and the way the Windows OS works. It is known that Linux can be more prone to trip up when overclocking. You are also using a different fs, so it's not a direct comparison for that reason either. I'd suggest looking at other things first, but it would be worth testing different Linux filesystems.

 *Quote:*   

> We'll see in a couple of days, when I have time to sort out my testing.  I'm happier with a slower system in Linux than a faster windows one anyway, gcc 4.3 is so much slower than 4.1 it doesn't matter much (don't worry, I'm not mad enough to try compiling everything with it just some non critical X apps).
> 
> 

 I'm a little surprised to hear that. The benchtests I ran showed 4.1 to be the slowest gcc in living memory, although that was on 32bit arch. 

 *Quote:*   

> 
> 
> No-one seems to have noticed that my main concern is RAM timings, I'm not sure how different apps would stress memory differently (its only read or write) but its possible so until its tested I wont argue.
> 
> Keep the suggestions coming 
> ...

 

I think if you can get your memory stable with memtest86+ you should not see any cases where it trips up with any particular program. If you are playing with memory timings, take careful note of the memory bandwidth shown by memtest86+. It is possible for nominally "faster" timings to pass extended tests perfectly stably yet deliver less bandwidth than slower ones.

memtest86+ will stretch and twist your memory in every conceivable way. If it passes 24h of memtest, mencoder will not trip it.

As frostschutz suggested, cpuburn is a good tool to set up overclocking rather than waiting until you hit a random instability 80% of the way through a "real world" task.

If you can get all cpuburn tests to pass your mencoder will work.

But be VERY careful: get your temperature sensors and good cooling sorted out before playing, and be ready to hit Ctrl-C to stop the test.

 The package is called cpuburn because it is designed to BURN YOUR CPU. Handle with care.

Have fun.   :Cool: 

----------

## Akkara

How are you overclocking?

If it is by increasing the FSB speed, you might also be stressing the memory beyond what it can reliably deliver.  Try that memtest Gentree suggested.

While I don't have specific experience overclocking, I did manage to resurrect a slightly defective / out-of-spec motherboard once by carefully tweaking the memory timings - and ended up getting a better memory-transfer rate along with rock-solid stability.  (It had been failing memtest every few hours even though everything was set to original factory values.)  Perhaps this will be helpful to you.

This is what I did.  WARNING: this takes a bit of time, and tests to POST failure - be sure you know how to clear your cmos ram if necessary, and write down any important settings you don't want to lose.

1. For reference, boot memtest and note how fast your memory is (MB/sec).

2. Boot and enter bios.  Go to the memory timings, write down what the ras active, precharge, cas, etc. timings are.

3. Set all the memory timings to their maximum (slowest) values.

4. Overclock by 5% (in your case, 5% faster than whatever clockrate you want to run at eventually).

5. Check that the machine POSTs (no need to run stress tests yet; we're just getting a baseline here).

6. Starting with the first memory timing setting in the BIOS, decrease its setting by one.  Verify that the machine POSTs.  If all's OK, decrease that setting again, verify it still POSTs, etc.  Write down the lowest you were able to get it before it failed to boot to the BIOS.

7. Set that memory timing parameter back to its slowest value.

8. Repeat 6-7 for the rest of the memory timing parameters.

9. At this point you should be able to set all your memory timings to 1 step up from what you've written down and have the machine boot.  Run memtest briefly to check.  Make a note of the memory transfer speed.  Call this the "good set".

10. Pick a parameter, set it to 1 step faster, reboot and check for stability and make a note if it seems stable and the memory transfer speed.  Do this for all the parameters (one at a time, reset the old one to the setting of step 9 before trying a new one).

11. Whichever parameter in step 10 gave the best transfer rate boost, promote that setting to the "good set".

12. (Optional) Repeat step 10-11 again to find a second parameter that can be boosted.

13. Drop the clockrate to its original value and run memtest overnight as a final check.
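The inner loop of steps 6-7 is just a walk-down-until-it-breaks search.  In shell pseudocode (posts_with is hypothetical - in real life *you* are the test, rebooting after every BIOS change; here it just simulates a board that stops POSTing below a value of 3):

```shell
#!/bin/sh
# Steps 6-7 as a search: lower one timing until POST fails,
# keeping the last value that still booted.
# posts_with is hypothetical -- it stands in for "change the
# BIOS setting and reboot"; this fake board fails below 3.
posts_with() { [ "$1" -ge 3 ]; }

t=7                          # slowest (largest) setting for this parameter
while posts_with $((t - 1)); do
    t=$((t - 1))             # one step faster still POSTs, so take it
done
echo "lowest POSTing value: $t"
# per step 9, actually run the machine one step above this for the "good set"
```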

Finally, sometimes a slower CPU speed but that allows for faster memory transfer rates will give you the best overall performance.

----------

## Gentree

 *Quote:*   

> Finally, sometimes a slower CPU speed but that allows for faster memory transfer rates will give you the best overall performance.

 

yes, a general rule is to get the FSB as high as possible, then bring up the CPU frequency using the multiplier (where this is possible with the mobo/BIOS in question).  :Cool: 

----------

## suicidal_orange_II

Thanks for the continued advice  :Smile: 

My current motherboard has a terrible BIOS for overclocking (it's not very mature yet).  This leaves me with the option of using only standard FSB settings, so I have a Core 2 engineering sample running at 10x333 (up from 10x266) which is perfectly stable in Windows.  My RAM is at 1:1 as it's DDR2 666, and the only timing I'm running faster than default is the command rate, which is at 1T instead of 2T.  I will be looking to go further, but now is the time to sort out whether it's me, Linux, or my poor BIOS that's holding it back.
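For anyone checking my numbers (assuming the usual convention that DDR2's effective rate is twice the memory clock, which at 1:1 equals the FSB clock):

```shell
#!/bin/sh
# Back-of-envelope check on the clocks above.
mult=10
fsb_stock=266   # MHz
fsb_oc=333      # MHz
echo "stock:     $((mult * fsb_stock)) MHz"
echo "overclock: $((mult * fsb_oc)) MHz"
# 1:1 divider: memory clock = FSB clock; DDR2 effective rate is 2x that
echo "RAM at 1:1: DDR2-$((2 * fsb_oc))"
```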

I'm glad to hear Gentree say that Linux is known to "trip" when overclocking - glad it's not just me.  Maybe it will improve when my BIOS does.  Then again it could be the fs, but I've used it for years with no problems; if it were ext4 or reiser4 maybe, but XFS has been fine (apart from crash recovery... maybe I should be testing with cpuburn instead  :Surprised:  hadn't thought about that).

Thanks for all your help, some interesting thoughts and advice.  I suppose the best way to overclock is to forget Windows, then I won't know what I'm missing  :Smile: 

Suicidal_Orange

Edit: Just in case anyone was wondering about the speed of gcc-4.3

```
real    18m36.416s
user    17m14.246s
sys     0m46.833s
```

vs 4.1

```
real    0m42.726s
user    0m28.421s
sys     0m9.391s
```

See why an overclock isn't really noticeable?   :Laughing: 

----------

