# Overclocking with Gentoo

## thebigslide

This is quite a controversial thing to write a doc on, so I'm lining my cable modem with asbestos to prepare for the flames that will ensue...

Don't report bugs if you're overclocking unless you can reproduce them after booting at stock speeds and doing an emerge -e world.

This howto is not about how to overclock your computer.  I don't suggest you overclock your computer.  This howto is about how it CAN be done in a manner where you'll be less likely to wreck something simply due to the peculiarities of our choice of operating system.  No guarantees.  This doc assumes a good understanding of how to safely overclock your computer in the first place.  If you haven't overclocked your computer before, the author suggests trying this with bootable CDs first as it is more forgiving (knoppix or your own livecd, or even the gentoo livecd with memtest86).

As far as I am concerned, the biggest drawback (besides not being able to play the latest windows games  :Rolling Eyes:  ) of a linux system compared to windows is poor interactivity.  Overclocking your FSB can overcome this.  I have been overclocking gentoo systems for a couple of years now, have wrecked lots of installs, and would like to share my experience and pointers so others won't need as many installs as I did to get it right.

1.  Start from stock

Flash the latest BIOSes on your systemboard, video card, burners, scsi cards, etc and make sure the CPU is set to factory defaults.  Increase different busses independently and SLOWLY.  Mount your file system read only the first time you attempt a new setting, or use a livecd, even if the setting seems safe (some systemboards do not have completely functional bus clock locks and might use inaccurate dividers instead: you might end up accidentally overclocking, say, the IDE bus by decreasing the FSB past a certain threshold).  Any new decent overclocking motherboard has a bus clock lock; most OEM boards don't.
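
To put the read-only precaution into practice, here is a minimal sketch that only *prints* the remount commands for the writable filesystems in a fstab-style listing (the device names are hypothetical examples; swap in your own and run the printed commands by hand):

```shell
#!/bin/sh
# Print (not run) a read-only remount command for each filesystem in a
# fstab-style listing on stdin.  Device names below are hypothetical.
remount_ro_cmds() {
    while read -r dev mnt fstype opts; do
        case "$fstype" in
            ext2|ext3|reiserfs|xfs)
                echo "mount -o remount,ro $mnt"
                ;;
        esac
    done
}

printf '%s\n' '/dev/hda3 / ext3 noatime' '/dev/hda1 /boot ext2 noauto' \
    | remount_ro_cmds
# prints:
#   mount -o remount,ro /
#   mount -o remount,ro /boot
```

With everything mounted read-only, a crash while probing a new FSB setting can't flush garbage from the buffer cache onto the disk.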

****NOTE: If you tweak something and things aren't working as normal, DON'T do a nice shutdown.  Hit the hard reset button.  The reasoning is this: if the IDE interface isn't working properly, or the buffer is full of junk, writing to the disk is a BAD idea.  Most overclocked gentoo installs fail because of disk corruption.  ****

2.  Set your FSB as high as it will stably go without overclocking the IDE bus.  You can determine its limit by booting off a gentoo livecd with the memtest86 boot option repeatedly, making minor increases each time.

2.1  There are usually dividers every 33MHz; some newer motherboards allow much more granular adjustments.  The IDE bus MUST NOT be overclocked or disk corruption will ensue.
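
The arithmetic behind that warning, as a sketch (the /4 divider is just an example; check your board's manual for its actual divider table):

```shell
#!/bin/sh
# With a fixed /4 divider and no clock lock, the PCI/IDE clock tracks the
# FSB.  Spec is 33 MHz; work in kHz to keep the shell's integer math exact.
pci_khz() {
    fsb_mhz=$1
    divider=$2
    echo $(( fsb_mhz * 1000 / divider ))
}

pci_khz 133 4    # prints 33250 -- 33.25 MHz, in spec
pci_khz 140 4    # prints 35000 -- 35 MHz, 2 MHz over, flirting with corruption
```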

2.2  Interactivity in a gentoo system is normally not memory bandwidth limited, especially on AMD, but memory latency limited.  Picking a FSB of 166 will give you a faster-feeling system than a FSB of 168, even though the CPU is faster at 168 and will probably benchmark faster, simply because memory access time is reduced when you pick a 'round' FSB.

3.  Bump your voltages once you've found a safe limit and the system has aged for 2-4 weeks.  This applies to burned-in CPUs only, obviously.  Giving an extra .05V here and there will afford a stability safety net if you have the cooling for it.  DON'T bump the voltages and then try to squeeze a little more out of the CPU until it's burned in at the new voltage for at least 2 weeks.  This applies to the initial burn-in also, which should be done at default voltage or less.  Only use the voltage adjustment to make up the stability you lost after increasing the clock speed, not for any wild overclocks.

3.1  Burning in a chip works like this:  each 'transistor' in your CPU is not to be thought of as 1 junction, but as a field of junctions which are redundant (at the atomic level).  By burning in the CPU, you actually burn out the weaker links, preventing them from leaking at higher than stock clock speeds.  This is beneficial, but must be done over a long period of time.  Be aware that the weaker links are the ones which operated at lower voltages, so if you've worked up to a high enough voltage, the chip may not POST at default voltage after time.  You're not burning out transistors.  When the dopants are sputtered onto the silicon crystal, it isn't perfect.  When you burn in the chip, you burn out the little spatters of dopants that 'missed', leaving the bulk of the junction fully functional.

3.2  Here's an analogy: Think of yourself flicking a lightswitch on and off repeatedly.  The rate at which you flick the switch is analogous to the clock speed of a chip.  The force with which you flick the switch is analogous to the voltage you're giving the chip.  The switches on your CPU that are able to be snapped open with a lot of force will be able to operate at a higher frequency more reliably because they snap open and shut a lot quicker.  Thinking back to 3.1, burning in a CPU basically destroys the switches that move too slowly.  You don't need the slow ones as they are superfluous; there are always faster ones in the same junction (up to a point).  Once enough switches are destroyed, the CPU will cease to function, but this normally won't happen unless the temperatures go too high.

3.3  The relationship between processor longevity and operating temperature is very non-linear.  The relationship between processor longevity and operating voltage at a fixed temperature is quite linear.  If you lower the temps sufficiently, you will see long life from your processor.  If one is cooling a processor with water or refrigerant, the voltage probably won't need to go up, as the little switches are less restricted at lower temperatures.

4.  Keep it cool.  If anything under your hood is running over 45*C, I wouldn't overclock it further.  That's just me.  Get a giant heatsink or watercooling if you want a stable system.  CPUs physically operate more quickly at lower temperatures.  Manufacturers have designed their CPUs so that they can operate at pretty liberal operating temperatures at their rated speeds.  If one drops the temp to a more reasonable level, the 'little switches' are less restricted, so they are able to switch faster.

5.  Don't be too pushy with RAM timings.  Anything except CAS and RCD doesn't really make a big difference using gentoo, and if you go too far, you are risking instant death of any mounted rw filesystem without much warning due to linux's extensive disk caching.  If you want to tweak these anyways, try them out in memtest86 on a livecd for a good 24 hours before you try them in UT or DOOM  :Smile:   On some older or OEM systems, adjusting the ram timings involves a hex editor and flashing your BIOS and is definitely not recommended.

6.  Make sure it will still chug hard before you build any packages.  Before using an untested but heavily overclocked system, I ALWAYS at least inflate a stage3 tarball somewhere on the disk and do an emerge -e world inside it to make sure it is stable.  Often, I will boot off a livecd and do this on a spare HD 'just in case.'  Memtest86 works ok also, but it isn't as sure as the spare-HD method, as the typical load your system experiences during operation is not running memtest86.
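
The spare-HD procedure above, sketched as a dry run (the paths and the /dev/hdb1 device are hypothetical; the `run` wrapper only prints each step, so drop the echo once you've sanity-checked the plan):

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

STAGE3=/mnt/spare/stage3-x86.tar.bz2   # hypothetical path to a stage3
TARGET=/mnt/spare/stress               # scratch dir on the spare disk

run mount /dev/hdb1 /mnt/spare         # hypothetical spare HD
run tar xjpf "$STAGE3" -C "$TARGET"
run mount -t proc proc "$TARGET/proc"
run chroot "$TARGET" emerge -e world   # the actual stress test
```

A full `emerge -e world` inside the chroot hammers CPU, RAM and disk together, which is much closer to real load than memtest86 alone.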

7.  If you're getting segfaults or ICEs during compiling, back off the multiplier by 1 (don't use .5 multipliers with linux, as the memory performance hit overwhelms any CPU performance boost that .5x gave you) and try again.  If that doesn't work, put the multiplier back up and try giving the RAM and chipset another voltage bump.  Failing this, back off the FSB.

8.  If you're getting segfaults running other programs and just dropping the clock doesn't fix it, then you might have to rebuild your toolchain after attempting the fixes listed in 7. and rebuild the offending package.

*Last edited by thebigslide on Fri Mar 11, 2005 10:32 am; edited 3 times in total*

----------

## hardcore

I've been overclocking for years now.  It used to be a "black art" but now is somewhat commonplace.  And when things become popular with the general masses, certain inconsistencies and falsehoods come about.

Overclocking the FSB is actually quite easy now; with most motherboards these days, raising the FSB is no more detrimental than raising the multiplier.  Most MBs have PCI/AGP locks, so you don't have to use memory dividers anymore.

1.)  Clearly, start from stock; if your system is new, test out all your hardware to make sure it works beforehand.  My suggestion is to use a livecd; this way, you can't possibly bork your system.

2.)  I agree with setting your FSB high, but only on Intel systems.  AMD systems benefit more from low latency RAM timings as well as memory that is synchronous with the FSB.  If your MB has PCI/AGP locks, make sure they're set at 33 and 66MHz respectively.

    2.2.)  Interactivity is actually mostly affected by your kernel scheduler, I recommend a kernel with the staircase scheduler.

3.)  Once you have found your max overclock (I usually use multipliers to give a rough estimate, then up the FSB so that I get around the max overclock), reduce your settings 1-5%.  Do NOT up the voltage more than needed; just reduce the speed by 1-5%.  Increased voltage leads to premature CPU death.

    3.1)  'Burning in' your CPU does NOT do anything; there is no proof that supports 'cpu burn-ins'.  What does happen is your CPU thermal paste takes a few on/off thermal cycles to fully set.  Once set, it performs better, usually by decreasing temps and increasing possible overclocks.  I recommend running Prime95 to detect whether your CPU is stable, as it is very sensitive to numerical errors.

    3.2)  Again, burn-ins do nothing but allow the CPU paste (especially the Arctic Silvers and the like) to settle.

4.)  Keeping things cool is generally a good idea; however, CPUs have a high thermal threshold, generally ~90 degrees C.  As long as your rig performs stably at these temps, you're alright.  But be advised, if your CPU has a high temp, your case usually has a high temp as well, and other components besides the CPU are sensitive to heat (hard drive, etc).

5.)  Use the memtest86 boot disk to determine the lowest memory timings you can achieve (especially for AMD chips).  Intel chips don't really get much of a boost from this.

6.)  Again, test with memtest86, running all the tests available, and 1-2 instances of prime95 for at least a week using a LiveCD to assure you're stable, so two weeks total time of testing.  This will ensure that when you do build your gentoo system OR go back to your gentoo system, everything will be peachy.

Ensuring everything is stable during testing has ensured that I've never had any hardware related problems with my rigs.  I encourage you to test thoroughly as well.

----------

## thebigslide

I'd like to reiterate that I've been overclocking chips for years also, just only recently with gentoo.  I have successfully booted a 486 at over 200MHz.  My current system runs at over twice its default speed and it's never crashed on me.  I haven't bought a single piece of hardware in the past 2 years besides disk drives and sound cards that I HAVEN'T overclocked.  This includes replacing components on motherboards, video cards, and SCSI cards with a soldering iron and a steady hand.

 *Quote:*   

> Do NOT up the voltage more than needed, just reduce the speed by 1-5%. Increased voltage leads to premature CPU death.

  Only with inadequate cooling or a chip that isn't burned in.  Multipliers factor in even on boards with working AGP/PCI locks.  For there to be communication between the busses with the least latency possible, the busses must be running at frequencies that are common multiples of some number.  The larger that number, the lower the transaction latency.  If your FSB and RAM are running at 141MHz, and the PCI bus is at 33MHz, reads from the PCI bus will be out of sync with the FSB and there will be quite a bit of latency on the PCI bus, for example.  If the system bus is running at 140MHz, the FSB transactions will always occur just ahead of the PCI transactions, thus decreasing latency.  This becomes a big factor in the overall speed of your gigabit network card/pci graphics card/SCSI card, etc.
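
The latency claim above, put in arithmetic form as a sketch (whether a given chipset actually phase-aligns its buses this way is the poster's claim, not something this snippet can prove):

```shell
#!/bin/sh
# Check whether an FSB divides evenly over a /4 PCI divider: 140 MHz lands
# exactly on 35 MHz, while 141 MHz leaves a remainder and the buses drift.
for fsb in 140 141; do
    if [ $(( fsb % 4 )) -eq 0 ]; then
        echo "$fsb MHz: exact /4 divide ($(( fsb / 4 )) MHz PCI)"
    else
        echo "$fsb MHz: remainder $(( fsb % 4 )), buses out of phase"
    fi
done
```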

 *Quote:*   

> 3.1) 'Burning in' your CPU does NOT do anything, there is no proof that supports 'cpu burn ins'. What does happen is your CPU thermal paste takes a few on/off thermal cycles to fully set. 

  BS.  My explanation came straight from a computer engineer.  I have a chip that won't POST at stock voltage because it's been gradually burned in at a much higher voltage.  In fact, this is very common with older celerons and Thoroughbred-B athlon XPs.

 *Quote:*   

> 4.) Keeping things cool is generally a good idea, however CPU's have a high thermal threshold, generally ~90 degrees C. As long as your rig performs stably at these temps, you're alright. But be advised, if your CPU has a high temp, your case usually has a high temp as well, and other components besides the CPU are sensitive to heat (hard drive, etc). 

  90 degrees?  Sure it will run, but not overclocked, and not gentoo.  That's why I wrote this howto; because gentoo is a little more finicky.  At even 70*C, if the system is being pushed, half the data coming from the CPU will be errors and things will b0rk.

As for 5, I don't recommend pushing the RAM timings too far, as this can lead to spontaneous disk corruption.  The reasoning here is that if your CPU starts to falter, the system will simply not work.  If the RAM starts to falter, you will have massive disk corruption.  If you are dead set on cranking down your RAM timings, let me offer a command that will do the same thing: dd if=/dev/random of=/dev/hda  (Don't actually do this)  BTDT.  I realized after retarring my system the 8th or 9th time that it just wasn't worth it for the 2-3% that bumping the RAM timings (besides CAS latency, which is a good 5% from 3.0 to 2.0) gives you.

----------

## hardcore

Well, first off, straight from an Intel engineer.  The engineer you talked to was probably referring to the "burn in ovens" mentioned in the article below.  Burn-ins are only the result of CPU thermal paste settling, nothing more, nothing less.

 *Quote:*   

> There is no factual basis for any method that could cause a CPU to speed up after being run at an elevated voltage for an extended period of time. There may be some effect that people are seeing at the system level, but I'm not aware of what it could be. I do know, however, that several years ago when I was motivated I asked for and looked at the burn-in reports for frequency degradation for approximately 25,000 200MHz Pentium CPU's, and approximately 18,000 Pentium II (Deschutes) CPU's and that, with practically no exceptions at all, they all got slower after coming out of burn-in by a substantial percentage.
> 
> To me there is no doubt in my mind that suggesting that users overvoltage their CPU's to "burn them in" is a bad thing. I'd liken it to an electrical form of homeopathy - except that ingesting water when you are sick is not going to harm you and overvoltaging a CPU for prolonged periods of time definitely does harm the chip. People can do what they want with their machines that they have bought - as long as they are aware that what they are doing is not helping and is probably harming their systems. I have seen people - even people who know computers well - saying that they have seen their systems run faster after "burning it in" but whatever effect they may or may not be seeing, it's not caused by the CPU running faster.
> 
> Patrick Mahoney
> ...

 

http://forums.extremeoverclocking.com/archive/index.php/t-35376.html

Second, like I said, use a LiveCD.  There is no possibility of disk corruption unless you write to disk, or your hard drive catches fire.  And like I said about the CPU, your CPU can survive ~90 C; it won't run stably, but I've had CPUs that run at 55-60 C @ load that are rock solid.  Each CPU varies, just for everyone to keep that in mind.  Also, you can't take the MB temps at face value; unless you have an infrared temperature reader, you won't have anything close to an accurate temperature.  So you can safely ignore most temps, again as long as everything remains stable.

Third, you may have overclocked a 486 to 200 MHz, but that doesn't mean jack; if it's 100% stable, that's a different story.  I've had my 2500+ (1833MHz) @ 2900MHz POSTing on air cooling, but it sure as hell doesn't mean it's stable.

Fourth, RAM timings do make a difference on AMD systems, especially the Athlon64 line.  With the integrated memory controller on die, you can think of system memory as a HUGE L3 cache, and the lower the latency, the faster that 'L3 cache' will run.  As long as you test everything to be 99.99999% stable, the speed doesn't matter, as long as it is stable.

----------

## thebigslide

hardcore, I'm not trying to shut you down or anything here, but some of the methods you describe may have worked well for you; they are not something I'd recommend people try.  This howto isn't meant for people who already know how to successfully overclock their systems.  I'd like to help people who haven't had success in overclocking with Gentoo, and what you're describing isn't a conservative approach.  Especially in messing with RAM timings.  As I said in my first response on that topic, if the RAM starts to falter (as it certainly can), not having tweaked the timings right out gives you an extra buffer of safety.  Tighter RAM timings might give you slightly faster gaming and encoding benchies, but that's about it.  It's not worth the risk IMHO, for an extra 5fps in doom3 and 20 seconds off encoding a movie.  If a system can't be left unattended and trusted to stay up, I won't recommend that setup to anyone.

Secondly, initial burn in is done at LOW voltage, the voltage slowly increased with clockspeed.  This is what I've stated above: Don't overvolt anything until it's had time to burn in.  The intel engineer you've quoted is clearly talking about people that pump up the voltage right out of the box, which surely is a dumb thing to do.  Also, note the line at the bottom where it says Patrick Mahoney was NOT speaking on behalf of Intel Corp.  Interestingly, isn't this the sort of propaganda you'd expect from Intel anyways?  The engineer I am talking to is a friend and he was not talking about any burn-in ovens.  He also has chips that will fail to POST at default voltages.  If you were working for intel and you saw that as a result of people burning in chips, what would YOU tell the press?

Read some of the RAM reviews on anandtech, for example: they show some of the fastest memory benchmarks I've seen using CAS3 and higher on AMD64.  The only timing that makes a big difference on AMD64 is command rate, which should be 1T.  These same results are shown by numerous other hardware review sites.

No, the 486 I overclocked was dead stable running at -60 degrees or so using a chilled Peltier for cooling.  (-40*C winters are awesome for some things)  The RAM, however, was not, and started smoking shortly after power on as I had a voltage leak somewhere.  I went through several sticks of RAM to get it to load a kernel successfully.  Never did get a picture of /proc/cpuinfo.  I was using turbolinux on that box.

----------

## pwhitt

 *Quote:*   

> burning in a CPU basically destroys the switches that move too slowly. You don't need the slow ones as they are superfluous; there are always faster ones in the same junction (up to a point). Once enough switches are destroyed, the CPU will cease to function, but this normally won't happen unless the temperatures go too high. 

 

my remarks are specific to the original posts points 3.1&2

if i had said something like that, my old profs would rise from their graves, beat down my door, and kill me in my sleep.

"burning in" a chip in this way does absolutely nothing but potential harm.  thebigslide, i think what your friend is talking about is related to junction capacitance and the lower rise time needed for transistors to "switch" quickly.  for a transistor to switch, you need a dV/dt at the gate, running from "off" to "on."  if you're flipping too quickly and the chip isn't made to be that fast, the apparent voltage at the gate will be reduced by capacitance in the junctions leading to it.  the result is missing clocks on various transistors and faulty logic, as things are no longer synchronised.  what the gate sees is a voltage that runs from "off' to "kinda more than off" and it just sits there.  if however you crank up the voltage, the resulting gate voltages will look like they are going from "off" to "on" again (the dt remains the same, but now dV increases).

when there are redundant transistors in a gate, they are there for a reason.  applying too great a potential on the gates of some transistors may very well remove them from the set, but that will in no way help the other transistors that now have to pick up the slack by carrying more current and contend with the increased voltage as well.  that is why once a chip is damaged, it gets hotter faster.  a crude way to look at it is this: when they are damaged, resistance increases and heat from passing current increases proportional to i^2*R.  more heat=more damage, more damage=more heat...  ad infinitum.  when you start to damage a chip in this way, it doesn't matter how well you remove the heat - there will be no way to keep up and the chip will literally burn.

i feel compelled to say this for the kids at home: if you were to, as you say, "destroy the switches that move too slowly" you'd break the chip.  the chip is made to be the way it is, you never want to intentionally damage anything, ever.  armies of engineers spend a lot of time on each part of something that complicated.  they do not intend for monkeys at home to start hammering on it to "fix it."  if what you are saying is true, then we could make chips the size of our leg, badly designed with many redundant parts, then burn 'em right into 9GHz monsters.  that is not how it works at all.

----------

## Merlin-TC

I also have to agree that so-called burn-ins are just hype.

Technically speaking it is pretty stupid to do that. Maybe you can compare it with smoking  :Wink: 

What is for sure is that you shorten the lifetime of your CPU considerably.

Also, changing the timings of your RAM is no more or less dangerous than overclocking your CPU.

If you clock your CPU too high you also can get random crashes and corruptions.

That's why using a live CD is a very good idea: no data will be destroyed and you can make sure it works.

----------

## hardcore

 *Merlin-TC wrote:*   

> I also have to agree that so-called burn-ins are just hype.
> 
> Technically speaking it is pretty stupid to do that. Maybe you can compare it with smoking 
> 
> What is for sure is that you shorten the lifetime of your CPU considerably.
> ...

 

Precisely my point, you can use a livecd to get your max stable overclocks (CPU, FSB, RAM Hz, RAM timings, etc) with 100% or more load, then back them all off by 1-5%, and you are assured a stable system.  Well until July rolls around, then you gotta break out the AC  :Wink: 

----------

## penquissciguy

IMO, the best way to overclock is to use processors that are intentionally undervolted by the manufacturer to run at lower power levels, like mobile Bartons and LV Xeons.  That way, "increasing voltage" only brings the processor back up to stock voltage or slightly higher.  My dually has Xeons that technically are running at a 75% overclock at "stock" p4 voltages that will run at full tilt all day long.

Ken

----------

## Shienarier

I have an AMD Athlon XP 1700+ cpu and i was thinking about starting to overclock it.

Then what am i supposed to do?

1) Increase the FSB, boot into memtest and run that for a while, then raise the FSB some more until it crashes,

    then lower the FSB to the last number that worked.  If so, how much should i raise the FSB at a time?

2) Then raise the cpu multiplier 0.5 at a time in a similar fashion to the FSB?

----------

## Gentree

Well, if this guide is intended as a basic guide for debutant overclockers, let's not forget to explain the basics.

Don't do anything until you have working temperature sensors: ACPI in the kernel and emerge acpi.  Search the forum for details on getting it working.

Either monitor with repeated acpi -t commands or with gkrellm.  I suggest the first, since all of this is best done in the more controlled environment of the command line console.
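
A sketch of a poor man's monitor built around `acpi -t` (the sample line is hard-coded below so the parsing is visible; in use you'd feed the real command's output in, and the exact format varies between ACPI versions, so treat the field splitting as an assumption):

```shell
#!/bin/sh
# Extract the degrees figure from an `acpi -t`-style line and compare it
# against a warning threshold.  Sample line hard-coded for illustration.
check_temp() {
    line=$1; warn=$2
    t=${line##*, }      # -> "47.0 degrees C"
    t=${t%% *}          # -> "47.0"
    if [ "${t%.*}" -ge "$warn" ]; then
        echo "WARNING: ${t}C"
    else
        echo "ok: ${t}C"
    fi
}

check_temp 'Thermal 1: ok, 47.0 degrees C' 55    # prints "ok: 47.0C"
```

Wrap the real call in a `while sleep 5; do ... done` loop and you have a crude console monitor to watch while load-testing.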

Also, if you have a PC Health type section in the BIOS, set up a CPU shut-off temp and a warning temp about 5C lower.  These will bail you out if you are not in full control.

As said above, be wary of what some sensors put out.  It may just be a thermistor "somewhere near" the cpu's underbelly.

Don't forget that in this area each system and each individual processor is individual; that's what you are trying to benefit from.  Don't think "xxx posted that his cpu was fine at 3.2GHz and I have the same", you don't.

Since it is usually CPU temperature that stops the fun, the biggest o/c gain you will ever get will probably come from a good, solid-copper heatsink, so consider the moderate investment.  I bought my mobo with a huge Aerocool aluminium heatsink and fan that made more noise than a 737 taxiing up for take-off.  I replaced it with a CoolerMaster copper sink with a quieter fan and lost about 8 degrees.

A similar improvement later came from adding an 80mm NoiseBreaker S2 on an adapter in place of the 60mm CPU fans.  This knocked off another 6 degrees at the same time as making the machine almost silent.

Getting back to the software.  Memtest86 is a must (the more recent memtest86+ has _lower_ version numbers since it is a different project).

As soon as you get out of the BIOS, boot to a CD or floppy with memtest86+ and give your memory a thorough thrashing.  This can be just 5 mins or so when trying new values, but once you are settling on some choices, at least 30 mins.  This again is not rigorous; you will need to give it a several-hour soak test later.

Well, there's more to system stability than RAM.  Next I recommend the cpuburn suite (in portage and on several rescue/boot CDs, so this can be done from CD at first for safety, although make sure you have temp monitoring available also).

This suite contains several very small progs that will push your system harder than it will ever be pushed in reality.  Harder than things like prime95 as well.

There are several progs in the suite which push the cpu, the mmx subsystem and the mobo IO circuitry. This should show up possible weaknesses in other areas than just cpu/ram. On this system it was always burnBX that tripped out first. Read the doc.
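
One way to cycle through the suite, sketched as a dry run (the `run` wrapper only prints each step; the ten-minute duration is illustrative, and note that `timeout` comes from newer coreutils — on an older box you'd background the burner and kill it by hand):

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

# Ten minutes of each burner, checking temps between rounds.  Pick the
# burn* prog that matches your CPU (burnK7 for Athlons, burnP6 for P6, etc).
for prog in burnK7 burnMMX burnBX; do
    run timeout 600 "$prog"
    run acpi -t
done
```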

One more tip before you start fiddling: make sure you know what to do when you go too far and the BIOS won't boot the system any more.  This is what is meant by it failing to POST.

Some systems will detect multiple failures to start from a power-off situation and reset the BIOS to safe bootable values; some will need you to reset the BIOS.  RTFM.

In any case, a pencil and paper is always an invaluable toolkit, even in the new millennium!  Jot down your key BIOS settings before it happens and keep a log of all your tests as you go.

After that I would take the approach laid out by hardcore above, testing each step with the tools I suggested.

With the caveat that each system is different in this game, FWIW here's what I did to this system.

CPU: Athlon-xp 1800+  with 128k L1 256k L2

Mobo: ABIT kx7-333

FSB 176  (abs max 182)

divider 5:2:1

multiplier 12.5

cpu @2230MHz  (cf 1667 stock)

idle temp 47C (with reduced CPU fan speed)

burnK7 temp 60C (=alarm temp)

Vcore 1.625 (cf 1.60)

The RAM would not take any tweaking.

HTH 

 :Cool: 

----------

## Gentree

A further note on the more mechanical side of increasing your o/c:

I have just knocked a healthy 8 degrees C off my cpu temp under full burnK7 workload.

I polished my heatsink!

Well, don't polish the fins; what I did was to lap the underside to remove the machining marks.  I thought it might just help a little.... maybe.  I was gob-smacked.

Get a solid piece of optical glass.  A decent quality mirror is usually very flat.

Lay a piece of fine grade (wet or dry) emery paper on the glass and lap the base of the heatsink to remove all marks.  If it is a bit too shiny afterwards, give it a quick circular rub on a fresh bit of paper to depolish it.  Shine is not good for heat dissipation.

Even good quality heatsinks are not finished to this level; the difference it can sometimes make is surprising.

 :Cool: 

----------

## thebigslide

A mirror is what I use, too  :Smile:   It makes it easy to see if there are surface imperfections.  If you go to a hobby shop, you can get grit for rock tumblers up to about 8000 grit.  If you mix some in a little oil, you can make the heatsink even and reflective enough to shave in.  Watch the edges, tho, as they will cutcha.

----------

## Gentree

Grit on glass is not so good because it eats the glass as well, and pretty soon it's no good as a reference surface.

I lightly oil the back of the paper to make it stay flat; this minimises the rounding of the edges due to paper lift.  Since the chip is well away from the edges, this is not a real prob.

BTW, the same technique works nicely on m/c head and barrel surfaces  :Cool: 

----------

## thebigslide

You're right.  I forgot that I'd used a jig to position everything.  Still, when you get into the x000 grits, you're not removing much material.

----------

## Hara

I feel like adding my two cents' worth.

There are usually two types of overclockers: the one who tries to save money by overclocking a chip that is rated lower than it can actually handle, and the other who tries to maximize performance completely to have the fastest computer possible.  Your goals and criteria vastly determine your overclocking capabilities and requirements.

Unless you are going for top of the line (although research is important here too), you need to understand that most of the overclocking work is actually done BEFORE you buy your components.  When someone sets up a system, it's important to know not only how fast it goes, but how fast it can go and at what cost to get there.  For instance, if you ever tried to overclock a Palomino AMD XP 1800, you'd find it's very difficult to get any faster than what you have.  But if you tried to overclock a Barton XP 1800, you'd find yourself able to overclock to maybe 2.4GHz with no sweat.  The overclocking ability of your particular hardware is determined more by the manufacturers than by the person at home building the machine.  Just because you have a really expensive sub-zero phase-change cooler does not mean you'll be able to overclock well.  You'd be much better off purchasing better hardware with the same money.

----------

## ScOut3R

Hey!

I like to overclock my pc too, but i can't do it with Gentoo.  Before Gentoo i had slackware and i could've run my cpu at 2500+@3600+ without the slightest problem.  Under Gentoo i have serious problems.  Okay, i can't compile while overclocked, i can accept that.  But!  The system only boots when i use a minimal overclock (2500+@3200+).  I can use it fairly well this way, but if the cpu load is at a constant 100% the system crashes sometimes (like switching between X and console).  If i go higher i can't even boot.  It's kinda disturbing for me, 'cause i like to use boinc.

I use minimal CFLAGS (-march -fomit-frame-pointer -pipe) to compile my system. I'm seriously thinking about getting back to Slackware.

----------

## Gentree

So go back to Slackware or read the copious detail in this thread and start setting up your overclocking methodically and in a tested manner. It might just work on Gentoo as well.

 :Cool: 

----------

## ScOut3R

 *Gentree wrote:*   

> So go back to Slackware or read the copious detail in this thread and start setting up your overclocking methodically and in a tested manner. It might just work on Gentoo as well.
> 
> 

 

I used the same settings as with Slackware or any other OS. I'm thinking that it might be the highly optimized installation? I mean, Slackware consists of i486 packages while my Gentoo system uses all the instruction sets. Could this be the problem? If yes, then I'm gonna stay with Gentoo, 'cause it's much better than any other OS I have ever seen.   :Cool: 

Sorry for the harsh manner I used in my previous message; it was too late and I was too harassed.

----------

## erikm

Since this seems to be quite the watering hole for OC aficionados, I'd like to ask a question: I moderately OC some of my chips, mainly to recover the slight 'margin' that is built in by default; that is, I'd put myself in the former category defined by Hara.

I recently had a rather heated discussion with someone I thought to be an authority on the subject, who claimed that even the slightest overclock would completely destroy a source based OS like Gentoo, since OC'ing would make the CPU miss instructions every now and then, and thus not compile binaries correctly.

My stance was, that as long as you run a stable (as in benchmarks / memtest86 for 48 hours, error free) system, your chip can take the OC without producing faulty code.

What do you think? Is a moderate overclock ok in the long run, or is he right?

----------

## oggialli

I just can't help the urge to inform people about how that first post is almost 100% BS and should not be taken as any advice. Ask anyone a bit more familiar with OC'ing and its effects and he'll point out tons of false information in that text.

----------

## Hara

 *Quote:*   

> 
> 
> I recently had a rather heated discussion with someone I thought to be an authority on the subject, who claimed that even the slightest overclock would completely destroy a source based OS like Gentoo, since OC'ing would make the CPU miss instructions every now and then, and thus not compile binaries correctly.
> 
> My stance was, that as long as you run a stable (as in benchmarks / memtest86 for 48 hours, error free) system, your chip can take the OC without producing faulty code.
> ...

 

All computers have the risk of developing an error. It's an inherent property of silicon-based electronics. There are probabilities that have to do with the chemistry and electron flow of the medium (think alignment of the planets) that are simply unavoidable. Servers usually have an MTBF (mean time between failures) rating that's usually measured in months to years.

So even running a benchmark on a normal computer for several years would probably lead to an eventual computational error. The question is whether overclocking significantly decreases the MTBF. Overclocking, however, does not affect the chemistry of the computer chips, but rather decreases the time the processor has between clock cycles. These types of errors have a different cause. Instability due to overclocking is caused by the processor not having enough time to do the work it's supposed to. (Heat has the added effect of slowing down electrical signals, which affects the limit to which a processor can be pushed.) The types of errors I was talking about earlier would only increase because more processing work is done over time, and would do so at such an insignificant level that it would take more time to test than the life span of the processor.

Being a source-based distribution, we are ALL more exposed to instability because we do more processing work to create our programs (rather than just running a decompression algorithm). So all of us have to deal with the possibility of a failed compile. Usually all that is required is a recompile. In reality, a stable and conservative overclock has no noticeable effect on compiling.

Basically, a mild overclock should be stable for your needs. If you really needed reliability, you'd have multiple computers computing the same exact thing and cross-referencing the data to ensure error-free results. That type of robustness is usually reserved for life-critical devices like NASA would use.
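The MTBF argument above can be made concrete with a back-of-envelope sketch. It assumes the usual memoryless (exponential) failure model, and the two-year MTBF figure is hypothetical, not a measurement:

```python
import math

def p_at_least_one_error(hours, mtbf_hours):
    """Probability of at least one error within `hours`,
    assuming an exponential (memoryless) failure model."""
    return 1.0 - math.exp(-hours / mtbf_hours)

# A 48-hour stress run against a hypothetical 2-year MTBF:
two_years = 2 * 365 * 24
print(p_at_least_one_error(48, two_years))  # ~0.0027: rare, but never zero
```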

----------

## saFFyre

Also remember that many devices we buy, such as CPUs/GPUs, are identical to much faster models. When they are produced they are speed-binned, which means that if X amount of a batch fail to achieve the desired results they will be knocked down a grade (or more). Many of these chips, however, will function fine at the higher speed. I run an AMD 3000 64 Venice chip happily at 2.2GHz (1.8 stock) with stock volts; this is AMD 3800 speed. I would never consider a system to be stable without rigorous stress testing, usually 48 hours of Prime stress testing and about 20 passes of memtest86. If your system can pass rigorous tests like this, I do not see any reason why a Gentoo system would have problems. With a modern motherboard, a sensible approach and adequate cooling, overclocking is really very safe and a very good way of saving yourself some money.

----------

## thebigslide

 *oggialli wrote:*   

> I just can't help the urge to inform people about how that first post is almost 100% BS and should not be taken as any advice. Ask anyone a bit more familiar with OC'ing and its effects and he'll point out tons of false information in that text.

 

specifically?...

----------

## oggialli

Well, let's see. Hardcore mentioned some of this already, I see.

Interactivity worse than Windows - depends greatly on your system setup, especially the CPU scheduler, but that's not the point. It's not only the FSB which matters here but the overall clock rate - FSB has no special effect on interactivity.

Motherboards either have working AGP/PCI locks or they don't. I haven't heard of a single model that would have locks up to a certain FSB and past that would use a divided FSB - and there probably isn't anything that stupid, since the separate clock & synchronization circuits are already there, so why not use them?

2.0) What does memtest have to do with IDE? Nothing. It actually shows you how far your memory and memory controller can go, but the FSB can usually be raised further if so desired (like on the Athlon 64).

2.1) The IDE bus (actually PCI, where the clock comes from) can usually be safely overclocked quite a lot (from 33 to ~40 or beyond) depending on the HD model (at least a few MHz will never cause any problems). Of course there is no benefit, but yes, with older motherboards that don't have locks it's inevitable.

2.2) How do latencies go down by driving the memory slower? Not by themselves, but actually they can, if you adjust your memory timings a little tighter at the same time (which is usually possible when turning the clocks down). But even then this shouldn't be linked to interactivity in any way; interactivity problems are of a much larger scale than single memory accesses' latencies. And "round" FSBs don't help in themselves. Actually, if you can keep the memory timings tight, running your memory/FSB faster will always help both latency and throughput. There's one thing, though: if you use an nForce2-based Socket A motherboard, always aim to keep your memory and FSB clocks in sync - the NF2 is, for example, faster running both FSB and memory at 166 than running 166/230 or the like, although the memory bandwidth broadens considerably. On VIA and especially P4 platforms it isn't that much of a problem, and Athlon 64s with their on-die controllers are of course completely free of FSB/mem syncing issues.

3) Burn-in... I wonder how long this urban myth will live. No one has ever actually perceived anything but DEGRADING of OC potential after a "burn-in" of components. Also, you don't need anything like an "introductory period" after choosing a new voltage; there is no problem pumping them where you want them straight away.

3.3) Voltage should always be bumped up after enhancing cooling if any benefit is wanted. Think of cooling efficiency as a factor that limits your voltage - you can administer more voltage if the chip runs at a lower temp.

4) Yes, cool is fine, but especially with P4 systems one really can't keep the real temperature of the die under load anywhere near 45°C (with air). Don't worry until you reach ~70°C or even more.

5) RAM timings are a lot more important than RAM clock rate, but again, this isn't anything specific to Gentoo, interactivity or even Linux in general. And command rate (1T vs 2T) makes a VERY big difference (like some 20% of pure memory clock).

6) Something like pifast/superpi/prime is a lot better measure of stability than compiling (and a lot quicker to notice instabilities too).

7) Where have you gotten the idea that .5 would hurt your memory performance? There's no reason (and it won't). Otherwise, it's good advice to try to keep the FSB up, but if you really need to back the multiplier off by 1 (the CPU is that much beyond its limits at its current vcore), leaving the multiplier as is while backing the FSB off by a few percent and raising the core voltage would likely achieve better results.
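In the spirit of the stability testers mentioned in point 6, a minimal consistency-check stress can be sketched in Python: repeat a deterministic pi computation and flag any run whose digits differ. The run count and precision here are arbitrary choices, and this is far lighter than real superpi/Prime workloads:

```python
# A tiny superpi-style consistency check: any digit mismatch between runs
# indicates a computational error (e.g. an unstable overclock).
from decimal import Decimal, getcontext

def pi_digits(prec=200):
    """Compute pi to `prec` digits with the Gauss-Legendre (AGM) iteration."""
    getcontext().prec = prec + 10
    a, b = Decimal(1), Decimal(1) / Decimal(2).sqrt()
    t, p = Decimal(1) / 4, Decimal(1)
    for _ in range(10):  # converges quadratically; 10 rounds is plenty
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        a, p = a_next, 2 * p
    return +((a + b) ** 2 / (4 * t))  # unary + rounds to working precision

def stress(runs=20):
    reference = str(pi_digits())
    for i in range(runs):
        if str(pi_digits()) != reference:
            return f"mismatch on run {i}"  # deterministic math diverged
    return "stable"
```

On a healthy machine every run produces identical digits, so any "mismatch" result points at hardware rather than software.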

----------

## oggialli

And also, raising FSB usually is the only way to overclock newer AMD systems (multipliers upwards locked).

----------

## thebigslide

Sorry for ripping into you here, but I'm kinda sick of being told I'm full of BS. I have a lot of experience here and am really just trying to help people who have been having problems overclocking - not experienced overclockers who would rather find out by themselves. It also irks me when someone says what you've written is false, in a derogatory manner, because they've read it wrong.

 *oggialli wrote:*   

> Interactivity worse than Windows - depends greatly on your system setup, especially the CPU scheduler, but that's not the point. It's not only the FSB which matters here but the overall clock rate - FSB has no special effect on interactivity.

 For the vast majority of systems, Windows (XP) shows better interactivity.  This is largely due to how much RAM is used for preloading parts of the operating system before they are needed, making them available instantly instead of having to be loaded off of disk.

 *oggialli wrote:*   

> Motherboards either have working AGP/PCI locks or they don't. I haven't heard of a single model that would have locks up to a certain FSB and past that would use a divided FSB - and there probably isn't anything that stupid, since the separate clock & synchronization circuits are already there, so why not use them?

 This statement you're commenting on was geared towards people having problems because of a divider throwing some other bus out of sync.  Most people don't realize when this happens.

 *oggialli wrote:*   

> 2.0) What does memtest have to do with IDE? Nothing. It actually shows you how far your memory and memory controller can go, but the FSB can usually be raised further if so desired (like on the Athlon 64).

 a) I was referring to finding the limit of the FSB (which you'd almost always want synchronous with RAM). b) The Athlon 64 doesn't have an FSB. It has a memory bus and a HyperTransport bus. Raising the "FSB" control in the BIOS while leaving the memory speed alone will (usually) only overclock the HyperTransport bus and will rarely result in any tangible performance gain unless you're I/O limited to the video card or maybe a network or Fibre Channel controller.

 *oggialli wrote:*   

> 2.1) The IDE bus (actually PCI, where the clock comes from) can usually be safely overclocked quite a lot (from 33 to ~40 or beyond) depending on the HD model (at least a few MHz will never cause any problems). Of course there is no benefit, but yes, with older motherboards that don't have locks it's inevitable.

 There is no benefit, and all it can cause is problems.  Big problems if this bus is on the verge of stability and gives out while performing a disk write.

 *Quote:*   

> 2.2) How do latencies go down by driving the memory slower? Not by themselves, but actually they can, if you adjust your memory timings a little tighter at the same time (which is usually possible when turning the clocks down). But even then this shouldn't be linked to interactivity in any way; interactivity problems are of a much larger scale than single memory accesses' latencies. And "round" FSBs don't help in themselves. Actually, if you can keep the memory timings tight, running your memory/FSB faster will always help both latency and throughput. There's one thing, though: if you use an nForce2-based Socket A motherboard, always aim to keep your memory and FSB clocks in sync - the NF2 is, for example, faster running both FSB and memory at 166 than running 166/230 or the like, although the memory bandwidth broadens considerably. On VIA and especially P4 platforms it isn't that much of a problem, and Athlon 64s with their on-die controllers are of course completely free of FSB/mem syncing issues.

 Think about the signaling. If one bus is clocked at 33MHz and another at 68MHz and data is going from one to the other, the signal has to wait an extra clock compared to the second bus being clocked at 66MHz. This is oversimplified, but makes the point. This is WHY on an NF2 (or ANY platform that supports asynchronous memory clocking) you get better performance with the memory bus in sync, even if slower. It isn't much of a problem on Intel platforms for the same reason that increasing memory timings doesn't hurt performance too badly on an Intel platform.

 *oggialli wrote:*   

> 3) Burn-in... I wonder how long this urban myth will live. No one has ever actually perceived anything but DEGRADING of OC potential after a "burn-in" of components. Also, you don't need anything like an "introductory period" after choosing a new voltage; there is no problem pumping them where you want them straight away.

 I was giving a safer method, but do what you want.  I wouldn't prescribe this to anyone, though.

 *oggialli wrote:*   

> 3.3) Voltage should always be bumped up after enhancing cooling if any benefit is wanted. Think of cooling efficiency as a factor that limits your voltage - you can administer more voltage if the chip runs at a lower temp.

 True, although you don't need it.  As temps go down, current flow also goes up.  That's why people doing liquid nitrogen cooling usually stick to stock voltages, even for massive overclocks.

 *oggialli wrote:*   

> 4) Yes, cool is fine, but especially with P4 systems one really can't keep the real temperature of the die under load anywhere near 45°C (with air). Don't worry until you reach ~70°C or even more.

 That's why I don't buy Intel systems  :Wink: 

 *oggialli wrote:*   

> 5) RAM timings are a lot more important than RAM clock rate, but again, this isn't anything specific to Gentoo, interactivity or even Linux in general. And command rate (1T vs 2T) makes a VERY big difference (like some 20% of pure memory clock).

 That statement's validity really depends on what platform you're talking about. 1T vs 2T is a big performance modifier, but only applies to the Athlon 64. I have found that because of disk caching, if your RAM cacks out, your filesystem is usually farked. Totally and completely farked. Also, if you run some decent LINUX memory benchmarks, you'll find that RAM latency doesn't really do much except for the two variables I've mentioned. Try hdparm with -T. You can also time the application of some filter to a large image with the GIMP for another decent non-synthetic benchmark.

 *oggialli wrote:*   

> 6) Something like pifast/superpi/prime is a lot better measure of stability than compiling (and a lot quicker to notice instabilities too).

 Using WINE? Also, I will tell you this: I have had systems spontaneously reboot when doing an emerge -e world that would run a synthetic processor benchmark for hours.

 *oggialli wrote:*   

> 7) Where have you gotten the idea that .5 would hurt your memory performance? There's no reason (and it won't). Otherwise, it's good advice to try to keep the FSB up, but if you really need to back the multiplier off by 1 (the CPU is that much beyond its limits at its current vcore), leaving the multiplier as is while backing the FSB off by a few percent and raising the core voltage would likely achieve better results.

 It's actually pretty well known that half dividers hurt your performance. There's a good explanation (with benchmarks) on AnandTech, and I won't repeat it; you can RTFM. Also, this is a TEST to see if the processor overclock is hurting your performance. I'm not saying to leave it there. If your system is dying on you, it really helps to find out what is causing that. Increasing your vcore won't necessarily prove anything. Since we don't all have scopes in our basements and signal analysers to connect to our mainboards, knocking back a clock is usually the easiest way to eliminate a variable from the overclocking equation when something's wrong.

Now, all I've provided here is some VERY conservative information for people who might have, or have had, issues and given up on overclocking Linux boxes. Linux systems overall seem more sensitive to overclocking than Windows does. That's all. Windows systems can usually run just fine on a 'less than stable' overclock until you put the system under major load. Linux boxes tend to cack right away. It can be frustrating. But what's more frustrating is when people say you're full of it just for trying to help others.

----------

## thebigslide

 *oggialli wrote:*   

> And also, raising FSB usually is the only way to overclock newer AMD systems (multipliers upwards locked).

 Which platform is that? The Athlon XP is usually unlocked upwards (or can be easily enough), and the Athlon 64 doesn't have an FSB.

----------

## oggialli

HTT can be referred to as FSB for these purposes, since it affects the CPU clock rate in just the same way. And yes, I was referring to A64s. Newer AXPs, btw, are usually locked both ways.

I didn't say XP wouldn't have better interactivity - or that it would - and I'm not commenting on that this time either; it is not the point. That doesn't change the fact that drawing a tight line from FSB to interactivity is nonsense. And what crap are you trying to cover that up with? "This is largely due to how much RAM is used for preloading parts of the operating system before they are needed, making them available instantly instead of having to be loaded off of disk." What the fuck is this?

a) It doesn't have anything to do with the matter of discussion

b) It is nonsense (not exactly a surprise)

   1) When you have started an app (not counting swapping), every part of the binary and associated libraries is fully in RAM - no loading off the disc anywhere here.

   2) Interactivity problems have to do with bad CPU time / IO bandwidth distribution scheduling, nothing else.

Are you sure you aren't referring with "interactivity" to "program startup times"? That would make at least one of your statements somewhat true, but it's not related to OC'ing at all.

Dividers throwing "some other bus" out of sync? That doesn't explain where your magical "locks-working-and-not" barrier in FSB came from - either the locks are there or they aren't (dividers all the way).

"a) I was referring to finding the limit of the FSB (which you'd almost always want synchronous with RAM."

Where did the IDE come from then? Putting that aside...

Still not universally true. Maybe on Athlon XP systems, but on P4s you occasionally should use dividers to get the CPU clock higher. After all, the only platform with a serious slowdown from async FSB/mem is the AXP on NF1/2.

"It has a memory bus and a hypertransport bus."

Correct. And so ?

"the hypertransport bus and will rarely result in any tangible performance"

Sure it does, since it is usually the only way to overclock the CPU as a whole.

"unless you're I/O limited to the video card or maybe a network or fiberchannel controller.."

You will never be - HTT is WAY faster even on defaults than any of these.

"There is no benefit, and all it can cause is problems. Big problems if this bus is on the verge of stability and gives out while performing a disk write."

Yes, but this doesn't change the fact that you can't avoid it on boards without (working) AGP/PCI locks. And neither that it isn't that strict - even the worst models can handle the 33->~40 bump easily if you need it elsewhere.

"Think about the signaling. If one bus is clocked at 33MHz and another at 68MHz and data is going from one to the other, the signal has to wait an extra clock than if the second bus were clocked at 66MHz. This is oversimplified, but makes the point. This is WHY on an NF2 (or ANY platform that supports asynchronous memory clocking), you get better performance with the memory bus in sync, even if slower. It isn't much of a problem on Intel platforms for the same reason that increasing memory timings doesn't hurt performance too bad on an Intel platform."

You didn't say anything about driving buses async in the first place. Of course that will cause a slowdown - but if you meant that and we were supposed to find that out by some magical means, this doesn't get it any closer to being "the key to interactivity" (which it has nothing to do with). Also, where does the "especially on AMD" hail from?

1) AXP/Duron are seriously memory bandwidth limited in all cases

2) The A64, while not being memory bandwidth limited (correct), doesn't have the async problem, which then falsifies your statement about that.

Burn-in...

Safer method? Like I said, "burn-in" will do no good, only HARM, if anything (and it wastes time, of course). Strange view of "safe".

"people doing liquid nitrogen cooling usually stick to stock voltages, even for massive overclocks."

Hah? They definitely don't; instead they bump the voltages to hell and beyond (because it's the way, and you DO need it). What's this foo again?

"That statement's validity really depends on what platform you're talking about. 1T vs 2T is a big performance modifies, but only applies to Athlon64."

Not true; it applies to the AXP too (and can be adjusted there too with modded BIOSes on e.g. the 8RDA, NF7-S and IIRC the A7N8X too).

"I have found that because of disk cacheing, if your RAM cacks out, your filesystem is usually farked. Totally and completely farked."

"Try hdparm with -T."

Stream synthetic benchmarks aren't much of an argument in real-world cases.

"Also, if you run some decent LINUX memory benchmarks, you'll find that RAM latency doesn't really do much except the two variables I've mentioned."

Namely? Definitely not hdparm. And memory access is such low-level business that what's good for you doesn't depend on whether you run Windows 95 or Solaris - the story is the same, and if some "platform-specific" benchmarks give differing results, it's caused by the test in question and can't be generalized to the whole OS. The OS doesn't even handle stuff this low-level; it's the memory controller (in the CPU or NB), the MMU (in the CPU) and the prefetch/prediction logic that affect which timings matter (and because of this it's OS-independent but varies from one platform to another).

"You can also time the application of some filter to a large image with the gimp for another decent non-synthetic benchmark."

Better, and when I do it I see differences with every timing option. How'd you explain that?

"It's actually pretty well known that half dividers hurt your performance."

You mean the case of memory divider roundings...? That's A64-specific, which you failed to mention (and it's not even bad in every case, if it gives you the fastest CPU+mem speed combination, which isn't always achievable with the standard dividers). On AXP/Intel, .5 multipliers (and .25s on Intel too) do no harm in any case and allow good fine-tuning.

"Using WINE? Also, I will tell you this. "

Of course not - native Linux superpi, and if at all possible pifast/Prime on Windows. About your reboots, I'm sorry that happened, but that still doesn't make gcc a good stress test (Prime definitely stresses your CPU (every bit of it) and memory better). Your case sounds more like a random occurrence or insufficient PSU capacity (with the HD bringing it to its limits).

" But what's more frustrating is when people say you're full of it just for trying to help others."

I didn't say that because you were "helping others", but because you failed to do that by supplying false/incomplete information.

----------

## jmlxg

Mwhahahahahahahahahahahahahahah!!!!!!!!!!!!!  :Laughing: 

I'm back oggialli. Bwhahahahahahah  :Very Happy: 

Though I must say you're right on the fact that newer AXPs are locked.  :Shocked: 

I was wondering what you guys would think of overclocking an Athlon XP-M 1800+ at 25W or 1.25 vcore, 'cause I was thinking of getting one of those.  :Very Happy: 

I have an MSI K7D Master dual Athlon MP mobo with a 1.82 BIOS, which I am going to update in the near future.  :Laughing: 

Yes, I will tell you that Linux is somewhat picky when it comes to overclocking, but when you do overclock in Linux it is sure to work, unlike in Windows, which my brothers want me to use.  :Very Happy: 

Thanks,

jmlxg

P.S. > I like laughing evilly just for the sake of it, for those of you wanting to ask.  :Very Happy: 

----------

## oggialli

Be my guest.

----------

## mdeininger

*g* fun post to read really... just one thing, why would i want to o/c in the first place? i mean, if i want something stable and really rely on that, then usually i should be able to get enough funds to buy faster hardware instead of buying slower components that i have to overclock? i don't see the point in games either, not with cpu/ram at least. i'm still using an "old" athlon xp 1900+ with 1gb of ram @ 133/266 mhz and that does it for most games. the only thing that did make a difference to me was buying better graphics cards and *more* ram. I get to 25-30 fps in mid to max details everywhere, i can even *underclock* my cpu and ram and it hardly has any effect whatsoever unless i go below ~1.3ghz, so why bother? it's not gonna get significantly more fps, and even if it would my eye couldn't register those anyway so that'd be quite pointless. now, if i was an android with an eye that could process more than 30 fps or someone rigged my brain with faster video processing equipment, *then* i'd see a point in doing it  :Razz: 

and why exactly would i want to o/c for better interactivity? linux was always a lot better or at least as good in that as windows for me. fiddling around with kernel schedulers helps here and there but there's not much in interactivity or responsiveness changing with the clock speeds. the box at work that i'm usually supposed to work on is a p2 running at ~400mhz. right now i'm sitting on a p4 at 2.66ghz with hyperthreading, and the only difference in responsiveness (using about the same linux system and gnome instead of my usually preferred e17 or openbox) is that on the p4 some really long web pages render faster than on the p2. those extra seconds i can wait for, really. aside from that, the only other difference i notice is loading times (which is logical since the p2 has a slower hd). that's about another ~20 secs per hour on the box. so, again, now that i have some people around that might shed some light on this, why exactly would i want to o/c?

----------

## Cintra

I came across the first post in this thread a while back, and it helped shock me back to sanity  :Wink: 

http://forums.sudhian.com/messageview.cfm?catid=38&threadid=21436

...only one snag, now I'm a Gentoo addict!

Mvh

----------

## mdeininger

 *Cintra wrote:*   

> I came across the first post in this thread a while back, and it helped shock me back to sanity 
> 
> http://forums.sudhian.com/messageview.cfm?catid=38&threadid=21436
> 
> ...only one snag, now I'm a Gentoo addict!
> ...

 

LOL that post is *so* true. *thumbs up*

I'll bookmark that one for the next time someone tells me he's about to squeeze another 1.923% memory bandwidth out of his gf fx 8712xxl TDR abc^2 (yeah i know that's not an actual model)

still i'd be interested in the motivation behind it? i fully agree with that first post, i'd rather have a silent k6-2 333 for work and watching videos instead of an airplane turbine going off next to me just to play doom3 at 90 fps, and it bugged me to no end that i couldn't underclock my athlon xp without modification so i could use less of a fan... come to think of it, i DO have a silent k6-2 that i got silent by putting a spare athlon-xp cooler on the thing and taking the fan off... works like a charm... now that's the type of modding i like...  :Smile: 

----------

## djpenguin

 *oggialli wrote:*   

> 4) Yes, cool is fine, but especially with P4 systems one really can't keep the real temperature of the die under load anywhere near 45°C (with air). Don't worry until you reach ~70°C or even more.

 

Pardon me?   I have a Northwood B 2.53GHz chip that runs at 33C with the fan set to 4.5V and basic desktop stuff going, and 25C if I turn the fan up to 12V.  Under a long compiling load, temps rise to around 37-38C with the fan throttled up to around 6V.  If the fan is set to 12V, the temps will be around 32C while compiling.  

I use an Alpha aluminum/copper heatsink with an 80mm Zalman fan on top.  I don't have crazy amounts of case cooling either, just a pair of 80mm fans and the big 120mm one in the PSU.

Normally, I wouldn't nitpick like this, but damn, if you're gonna rip someone a new one over the supposed factual inconsistency of their post, do some fact-checking of your own before you make your allegations.  I'm sure I'm not the only person in the world with an Alpha heatsink and a P4, and it's a given that some of the others have posted their temps on various overclocking/enthusiast forums.  Incidentally, I have a video of an Athlon XP die cooking off in a puff of smoke when it hits ~90C if you'd like a concrete reason not to suggest that people run their commodity hardware at '70C or even more.'

----------

## joaopft

 *thebigslide wrote:*   

> .
> 
>  *oggialli wrote:*   6) Something like pifast/superpi/prime is a lot better measure of stability than compiling (and a lot quicker to notice instabilities too). Using WINE?  Also, I will tell you this.  I have had systems spontaneously reboot when doing an emerge -e world that would run a synthetic processor benchmark for hours.
> 
> 

 

I can second that. My system (Athlon 64 / NF4 chipset) would run memtest/Prime for 24 hours straight with no errors, and then fail an emerge -e system. Part of the problem is that both memtest and Prime are not 64-bit apps, so the memory subsystem will not get stressed enough. Also, there should be a lot of transistors dedicated to running the 64-bit instruction set that won't get tested with 16- or 32-bit apps.

An excellent (and quicker) test that works on A64 systems is (with -O3 optimization set up):

```
 emerge libquicktime 
```

This particular emerge takes a long time compiling the files cmodel_default.c and cmodel_yuv420p.c, which stresses the system a lot. Common problems with the NF3/NF4 chipsets result in a failed compile of either 'cmodel_default.c' or 'cmodel_yuv420p.c'. I came across this on the Gentoo forums, reading about instability problems on an early NF3 mobo at stock settings. From experimenting a little, I've found that this compile is a good test for overclocked settings. Most faulty NF3/NF4 systems will fail here before anywhere else (including memtest and Prime).
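A repeated-compile harness along these lines might look like the sketch below; the loop count is an arbitrary choice, and the commented-out `emerge --oneshot libquicktime` is the workload from the post:

```shell
#!/bin/sh
# stress_loop: run a workload N times and stop at the first failure,
# since marginal systems often fail only on some of the passes.
stress_loop() {
    cmd=$1
    runs=$2
    i=1
    while [ "$i" -le "$runs" ]; do
        if ! sh -c "$cmd" > /dev/null 2>&1; then
            echo "FAILED on pass $i"
            return 1
        fi
        i=$((i + 1))
    done
    echo "PASSED $runs passes"
}

# Example (loop count is an arbitrary choice):
# stress_loop "emerge --oneshot libquicktime" 5
```

Running it several times back to back catches intermittent failures that a single pass would miss.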

----------

