# Linux more sensitive when it comes to overclocking?

## firaX

Hi, I know this might not be the right forum for asking about overclocking, but all those overclocking-forum geeks are so arrogant when it comes to answering newbie questions :p

I've got an Athlon XP 2500+ Barton, an A7N8X Deluxe board, a GeForce FX 5600, and 512 MB of PC3200 TwinMOS RAM.

I tried to get my FSB to 400 MHz, which is no problem for the RAM (it's been running at 400 MHz async all the time).

Upping the FSB from 333 to 400 doesn't even affect PCI/AGP, as those are locked on nForce2 boards. Yet whenever I run my machine with a 200 MHz FSB (which overclocks the CPU to 11x200 = 2200 MHz = XP 3200 speeds, which is said to be absolutely no problem on an Athlon XP 2500+ Barton), the system won't boot. So I upped the vcore step by step until I saw my framebuffer screen, which still gave me kernel panics (CPU context corrupt etc.). Only with a 1.75 V vcore does the system boot completely into Linux.

I've read posts saying it boots into Windows just fine with a 1.65 V vcore.

Also, compiling an app (I used Mozilla for testing) while running XMMS AND MPlayer at the same time (= CPU stress  :Smile:  ) freezes the system for good; gotta hit reset.

Yet Windows people report the Barton to be rock stable at 11x200, and they don't even bump the vcore up that high! Hell, I even tried 1.8 V, yet it freezes...

Is Linux being oversensitive, or am I doing something generally wrong?

----------

## payam

Now, I'm not much of an overclocker myself, but I do know this: back when Intel released the desktop Pentium III 1.13 GHz Coppermine processor, and subsequently recalled it, it was because the processor could not successfully compile the Linux kernel; they determined the clock speed was too high for the core. It was NOT because Windows applications wouldn't work with the processor (albeit, once they found one big bug there was no real point looking for others, so it COULD have just been coincidence that someone found the problem while testing Linux). For this reason, I think Linux indeed is more sensitive to overclocking, as weird as it sounds. I've also seen a lot more warnings of the form "don't overclock your processor, it's probably why you're having problem XXX" in Linux manuals than in Windows documentation.

----------

## firaX

Yeah, I've noticed that when someone posts "xxx won't compile", most people ask "did you overclock...", so I really do guess Linux is more sensitive.

Well, Windows users hardly compile anything, so they might not notice those problems... hm.

I'd still love to get my FSB up to 400, as I read everywhere that my board and CPU should handle 400 MHz without problems. Then again, I do want my system to be as stable as before. BTW, overclocking the CPU by increasing the multiplier, i.e. 12x166 instead of the stock 11x166, doesn't give me those freezes... so it's definitely related to the bus speed.

----------

## nephros

Rule #1: Don't overclock your Linux box.

You are describing exactly the problems that overclocking leads to.

And even when it doesn't fail as brutally as you describe, things might fail in much more subtle ways.

But IF you overclock and you run into problems, pleeeease set your system back to stock settings and try again before posting somewhere for support.

Once too often I have seen threads here along the lines of "my XXX segfaults on this", "my YYY locks up there", and after three forum pages of poking around someone finally asked "are you overclocking?"

The guy set the speeds back to normal and the problems vanished.

A good test for stability is indeed a kernel compile (or any other long compile, like Mozilla), because it puts sustained stress on both the CPU and memory.

If it bails out or freezes your machine, you've gone too far.

----------

## firaX

I reset my system to stock speeds immediately after freezing a couple of times with different settings.

Yet I wonder why Windows users can overclock their Athlons / FSB that much and don't experience any freezes (those people have apps that test system stability by putting huge loads on it for several days, and STILL they don't freeze like I do in Linux).

----------

## neuron

Linux isn't "more sensitive"; that's just plain wrong. You put instructions into the CPU, and if the wrong data is coming out, stuff will run less stably.

The difference is, Linux TELLS you when something goes wrong, much unlike Windows, which will run like nothing happened even if something is calculated completely wrong.

Also, compiling something is a very nice way of testing some functions of the CPU, and it's also fairly likely to notice when the CPU gets something wrong.

----------

## Odin

 *firaX wrote:*   

> i reset my system to stock speeds immidately after freezing a couple of times with different settings.
> 
> Yet i wonder why windows users can overclock their athlons / FSB that much and dont experience any freezes (those ppl got apps that test system stability by putting huge loads on it for several days STILL they dont freeze like me in linux)

 

My 1700+ is running at 2 GHz / 333 MHz FSB and I've had no problems in Linux, even after doing a stage1 install.

Sometimes you just get unlucky.

----------

## firaX

Hm, well then it's definitely something I'm doing wrong with my overclock... but all my components are high quality. The board/chipset is very good for overclocking. The RAM isn't cheap either, and it runs fine in async 400 MHz mode, so synced 400 MHz shouldn't be any different for the RAM.

The Barton 2500+ is praised as being very overclockable as well... even my stepping, AQZEA, isn't said to be bad on overclocker forums. And it does overclock fine when I increase the multiplier... yet as soon as I use a 400 MHz FSB, the system freezes under load (11.5x166 = 1900 works stable as a rock without increasing the vcore; 9.5x200 = 1900 is unstable even with a 1.775 V vcore = STRANGE).

----------

## Haukkari

I have a 1700+ Athlon XP overclocked to 1550 MHz (originally 1467 MHz) and it works perfectly with Linux and Windows. Overclocking it to 2000+ speeds gave some hangups in Windows, but I didn't notice anything terrible in Linux. Odd. =)

----------

## neuron

Running memory at 400 async and 400 sync is NOT the same. At 400 async it will insert halts and wait for the CPU. This is the reason users are strongly advised to run synced on the nForce2 chipset: async WILL run slower, because in async mode the memory halts and waits for the CPU to stay in step.

----------

## trajedi

A lot of people who overclock might not have the correct cooling for the extra heat. Stock fans usually don't cut it; maybe get a Cooler Master, and instead of that stock goo, some Arctic Silver. But that's my 2 cents: a lot of people don't use correct cooling.

----------

## firaX

I've got 5 case fans and am not using stock cooling but a Thermaltake Volcano 11+ (which is not the best, but is more than sufficient for 2000 MHz).

Also, the CPU doesn't get hotter than 50°C under load at 1.8 V, and 50°C under load is OK for that high a voltage. So I doubt temperature is the problem.

BTW, I'm currently running my RAM at 333 MHz, synced with my CPU  :Smile: 

I also didn't say it's the same; all I said is that the stress put on the RAM shouldn't be different => the RAM runs at 400 MHz async, so it should also run at 400 MHz synced => the RAM shouldn't be my problem, as it's been made to run at 400 MHz anyway... oh well, lol.

----------

## AnvilDemon

 *firaX wrote:*   

> i got 5 case fans and am not using stock cooling but a Thermaltake Volcan 11+ fan (which is not the best but is more than sufficient for 2000mhz)
> 
> Also the cpu doesnt get hotter than 50C under load @1,8volts...50C under load is OK for that high voltage. So i doubt temperature is a problem
> 
> BTW i m currently running my RAM at 333mhz synced with my cpu 
> ...

 

Well, firaX, you are most of the way to a stable system, I would say. I run a 2500+ at 2600 MHz with a 2.03 V vcore on an Abit NF7-S. Cooling the CPU better than normal is essential; I use water cooling, but a good Thermalright HS with a good 80mm fan on it would be plenty, I'd say. Also, 50°C is the barrier for any OC'er; never exceed that 50°C mark, and if you do, get better cooling. My CPU with the settings above, on water, sits at 36°C under full load.

One thing you really need to check, though, is the temperature of your northbridge chipset. If it exceeds 32°C, get better cooling on it, trust me. Also set the vdd voltage to the highest the BIOS allows; mine is set at 1.7 V, which is the max my board's BIOS offers. Not having a high enough voltage for the chipset is the main cause of instability on an nForce mobo. Watch the temps closely, though; if they start getting over that 32°C, get better cooling.

If the northbridge with better cooling and more volts didn't help stability, up the volts on the RAM. I run my Corsair XMS PC3200LL at 2.8 V at 225 FSB, 1:1 ratio. The RAM gets a bit toasty, but that should not harm good quality RAM; the cheap stuff is what usually dies quickly from a high RAM voltage setting. I have been running this RAM this way for about 3-4 months straight.

If the NB chip and RAM steps don't fix stability, touch your southbridge chip. If it is so hot that you can't keep your finger on it for more than 10 seconds, put a small HS on it. This usually helps if you use the onboard sound chip on these mobos, as the southbridge can get very hot with no HS on it. Sound will sometimes distort with a high FSB if this chip is hot, and it may possibly be causing the corruption in the Linux software.

If you have tried all this, try giving the CPU a bit more volts, but do not go over 60°C max, and do it only to test whether it is stable. If it is, then definitely get some better CPU cooling.

All my stuff is water cooled except my southbridge chip, so I know temps are not an issue. I even had an 80 W pelt on my GFFX 5800, OC'ed to a 549 MHz core and 1374 MHz mem. That was when I had it set up in Windows, but I use Linux now, so I no longer have the OC on the vid card.  :Sad:  The only downfall I have so far.

One last thing to check if everything fails for ya: check the voltage lines from your PSU. If your 12 V, 5 V, or 3.3 V lines are dipping way low, get a better PSU. I use (2) Antec True Power 480 W PSUs, one specifically for the mobo and one for all the other hardware components.

If the 3.3 V line goes down to 2.7 V or less, I say new PSU; if the 5 V goes to 4.6 V or less, new PSU; if the 12 V hits 11.6 V or less, get a new PSU.
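Anvil's rule-of-thumb cutoffs can be sketched as a tiny checker (the thresholds are his forum numbers, not official ATX tolerances, and the helper name is made up for illustration):

```python
# Hypothetical helper: flag PSU rails sagging at or below Anvil's
# "get a new PSU" cutoffs. These numbers are his rules of thumb from
# this post, not official ATX spec limits.

PSU_MIN_VOLTS = {"3.3V": 2.7, "5V": 4.6, "12V": 11.6}

def failing_rails(readings):
    """readings: dict like {"3.3V": 3.28, "5V": 4.9, "12V": 11.8};
    returns the rails that fail Anvil's test."""
    return [rail for rail, volts in readings.items()
            if volts <= PSU_MIN_VOLTS[rail]]
```

By this rule, a reading of exactly 4.6 V on the 5 V rail already counts as PSU-replacement territory.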

With the info I've given out I am not responsible or liable for any damage or mishaps to your computer(s). This info is provided as-is, and I can bear no consequences for anything that may or may not happen. I hope you understand.  :Smile:  The main thing is just to take it slow and not rush; it is a time-consuming process, and it takes skill to troubleshoot what is actually happening to give you the errors you are experiencing.

Party On,

Anvil

----------

## erebus

Hmm, well, I tried running an overclocked system, first with Windows, and that seemed to run fine. Then I switched over to Gentoo on exactly the same system and started having the usual compile problems... so I dropped back to the CPU defaults and everything worked fine.

What I think was causing the problem is that Windows by default has better power management, meaning that for some reason my CPU/system would run cooler most of the time; it did (at least for me) control the temperature of the CPU and system a lot better than Linux did.

I'm not saying this was the actual reason, but it's what I believed to be causing the instability issues when I switched OSes, and they could probably have been fixed if I could have worked out how to enable those power management functions in the kernel.

Oh hmmm,

Andy

----------

## TenPin

In my experience Linux is *always* more stable than windows with an overclock.

The problem comes when you start compiling stuff, because it places an extremely high load on the CPU, which will bring out any instabilities due to overclocking. People running Windows generally don't compile stuff or produce loads quite as high, but if they did, Windows would probably crash a lot more quickly.

Also, your type/model of CPU is pretty much unrelated to the amount of overclock you will get from it. CPUs are all cut from a bunch of wafers and rigorously tested to see what speed they are capable of running stably at; they are not manufactured specifically to run at a certain speed. The manufacturers try to make them all run as fast as possible and then just sell them according to their capabilities.

Running a CPU overclocked will also cause it to "wear out" more quickly. This process is called electromigration. There are points on the CPU's Al/Cu traces that are weaker than the rest of the trace, and the metal at each side of such a point will slowly migrate away from it. Eventually this can cause errors under high load and maybe crashes. Overclocking speeds up this process.

I ran a Duron 650 at 900, and after 3 years it started to get slower and slower. I clocked it back to 800 and it actually gave better performance than it did at 900. Not long after, when trying to compile stage1, it would come up with the error "could not find file: mibfoobar.so" every time. After clocking back to 700, it would find "libfoobar.so"! I replaced it with a Duron 950 for £15.

----------

## AnvilDemon

 *erebus wrote:*   

> Hmm well I tried running a overclocked system firstly with windows and that seemed to run fine then I switched over to gentoo with exactly the same system and started having the usually compile problems.. so I dropped back to the cpu defaults and everything worked fine.
> 
> What was causing the problem I think was that windows by default has (by default) better power management meaning that for some reason my cpu/system would run cooler most of the time and did (at least for me) control the temperature of the cpu and system a lot better than it did when I was using linux.
> 
> I'm not saying this was the actual reason but its what I believed to be causing the instability issues when I switched os's and they could have probably been fixed if I could have worked out how to implement these power management function in the kernel..
> ...

 

I don't believe power management is really going to make that much of a difference. The reason for OC'ing is to get more out of your hardware: to run it as fast as it can go and still be stable. So power management would have no real influence on an OC'ed machine running at full load all the time.

And if what most people here are saying is true then I must be a very lucky person to have my Linux machine running at 225fsb, 2600mhz. 

Now, I am not trying to get on anyone about this or be snotty in any way, but if this were true, wouldn't that make Windows a more stable kernel than a Linux kernel? From an OC'ing point of view, that is...

I have not had these problems, and I am an OC'ing enthusiast, so I will say that Linux is stable with OC'ing; it just takes skill to OC. How far the components will OC is down to luck (the quality of the PCB that you received) and the quality of the transistors and such that are used.

Heat is the problem for most OCs, then voltage, then component quality. If you use cheap components, don't expect to OC with stability.

The best test in Windows to compare with a Linux box is Prime95. It is a very number-intensive app and will return errors on a poorly OC'ed system, even if the system seems stable under Windows. If you get errors in Prime95 under Windows, expect to have an unstable system in Linux when OC'ed. Also, let the test run overnight to make sure it is stable, as you might not get an error right away. Full load overnight will test your mem, CPU, and NB chip; these are the main components stressed when OC'ed.

Hope this was informational.

----------

## AnvilDemon

 *TenPin wrote:*   

> In my experience Linux is *always* more stable than windows with an overclock.
> 
> The problem comes when you start compiling stuff because it places an extremely high load on the cpu which will bring out any instabilties due to overclocking. People running windows generally don't compile stuff or produce loads quite as high but if they were to, windows would probably crash alot more quickly.
> 
> Also your type/model of cpu is pretty much unrelated to the amount of overclock you will get from your cpu. When cpus are manufactured they are all cut from a bunch of wafers and rigorously tested to see what speed they are capable of running stably at, not manufactured specifically to run at a certain speed. They try and make them all run as fast as possible then just sell them according to their capabilities.
> ...

 

TenPin, electromigration is caused by heat, not by OC'ing. OC'ing generally increases the heat output; that is why most people believe OC'ing is typically bad. If you can keep the temps low (below 45°C), then you should have no more electromigration throughout the chip than if the chip were clocked at its stock speed. So if I OC my CPU to a higher MHz and put higher volts on it, my CPU will produce more watts of heat. If I cool the CPU better, say with water, and bring it back down to stock temps or lower, then I essentially will not have any more electromigration than a normal CPU at stock speed with a stock HSF at stock temps. I should be able to expect the same life expectancy from my chip at my current speed as if I had it at default speed with the stock HSF.

CPUs at stock speed have electromigration no matter what. Whenever they are on, they are producing heat, with current coursing through them. Only reducing the heat can minimize this effect.

----------

## thesysadmin

Personally, I am overclocking my Celeron Tualatin 1100A to 1452 MHz. The other day Windows wouldn't boot with this setting; it would just reboot. Gentoo, on the other hand, has been running rock stable with this overclock  :Wink: .

----------

## dma

If you've ever designed sequential logic circuits, and played around with the step size (clock speed), you'll know why overclocking can be bad.  

Basically, every logic component (inverters, AND gates, etc...) has a response time (which can vary with heat).  The clock pulses must be spaced apart far enough so as to give an entire chain of components time to settle into position and give the "correct answer".  

If you send the clock pulses too quickly, one stage of the circuit will get a value from the previous stage before it is ready.  This value may or may not be correct.  A single incorrect value (bit) can cause the machine to crash, or it could be harmless.
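dma's point can be illustrated with a toy model (the delay numbers below are invented for illustration; real critical-path timing analysis is far more involved):

```python
# Toy model of why overclocking produces wrong answers: each gate in the
# critical path needs time to settle, so the clock period must exceed the
# sum of those propagation delays plus the latch's setup time.
# All numbers here are made up for illustration.

GATE_DELAYS_NS = [0.9, 1.1, 0.7, 1.3]   # hypothetical per-gate delays
SETUP_TIME_NS = 0.5                      # hypothetical flip-flop setup time

def min_clock_period_ns(gate_delays, setup_time):
    """Shortest safe clock period: the whole chain must finish settling
    before the next clock edge latches the result."""
    return sum(gate_delays) + setup_time

def max_stable_freq_mhz(gate_delays, setup_time):
    return 1000.0 / min_clock_period_ns(gate_delays, setup_time)

def is_stable(clock_mhz, gate_delays, setup_time):
    """True if the clock leaves every stage enough time to settle. Above
    this frequency a stage latches a value before it is ready -- the
    maybe-right, maybe-wrong bit dma describes."""
    return 1000.0 / clock_mhz >= min_clock_period_ns(gate_delays, setup_time)
```

With these made-up delays the chain needs 4.5 ns per cycle, so it runs correctly at 200 MHz but starts latching unsettled values at 250 MHz.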

I'll post graphs if anyone wants them.

Overclock at your own peril.  Use insufficient cooling at your own peril.

As for thermal characteristics, certain operations use more power than others.  In addition, the OS can issue "halt" instructions to make the CPU use less power when it is idle.

If you want to TEST your machine's thermal stability, then the "cpuburn" package is available in portage.  Be sure to have lm-sensors working.  Use at your own risk!!!

----------

## firaX

In my case it should be the fault of neither the NB nor the RAM, as 1) the mainboard can take a native Athlon XP 3200+, which runs natively at a 400 MHz system bus, and 2) I ran the RAM at 400 MHz for weeks without a lockup.

So it must have something to do with my CPU... and yet people OC the Barton 2500+ to at least 3200 speeds (11x200).

So it must be possible... BTW, people even achieve 3200 speeds on stock cooling!

On the other hand, running at the default 333 MHz system bus BUT OC'ing the CPU by raising the multiplier to reach a similar CPU clock (i.e. 12.5x166 ≈ 2075 instead of 11x200 = 2200) works!

Weird thing :/

----------

## stonent

The reason being, not many Windows home users do a lot of compiling. Let's say you're building some Linux app and it takes 30 minutes to compile. That's 30 minutes at 100% CPU utilization. Even if you're playing a high-end game in Windows, your CPU is not solidly locked at 100%; I've watched the graph in XP, and there are large periods of time where the CPU utilization is under 50% because the video card is doing most of the work. Another popular activity in Windows is browsing, which takes almost no CPU at all: most of the CPU usage happens during page loads, and if you average 3 minutes per page, you're letting the CPU cool down in between.

----------

## payam

this is some good info... how do i probe the die temperature of my athlon-xp while in linux?

----------

## firaX

You use lm_sensors for it.

I think for nForce2 you need 2.8.0, and the Portage version is only 2.7.0, so I compiled it myself. Go to lm-sensors.nu and get i2c and lm-sensors; compile i2c first, then lm-sensors. There's a good thread on that: search for lm_sensors on nForce2. It's basically a set of kernel modules.

gkrellm2 can then use those modules to show your vcore, fan speeds, and temps  :Smile: 

----------

## AnvilDemon

 *firaX wrote:*   

> In my cause it should neither be the fault of the NB nor of ram, as 1) the mainboard can take a native athlon xp 3200 which runs natively @400mhz system bus, 2) i ran the ram at 400mhz for weeks without locking up.
> 
> so it must have something to do with my cpu ...and yet ppl oc the barton 2500 to at least 3200speeds (11x200) 
> 
> So it must be possible...btw ppl even achieve 3200 speeds @ stock cooling! 
> ...

 

Ok, here is another way to test your system.

To test the RAM, do this:

1.) Leave everything at stock (i.e. voltage settings, FSB, CPU MHz, multiplier, etc.); basically everything.

2.) Change the divider for the RAM to 4:3 or something like that so the RAM is running at 400 MHz (this keeps the FSB at 166 MHz while the RAM runs async). I may have the divider backwards.

3.) If all is stable, then the RAM is capable of running at this speed.

To test the CPU:

1.) For this test, put the vcore at 1.8-1.85 V.

2.) Set the multiplier to reach the CPU speed you want and leave the FSB setting at stock.

3.) Boot up and stress the CPU for a decent amount of time.

4.) If the system is stable, then the CPU should be able to handle the OC.

To test the NB chip:

1.) Put the multiplier and voltage settings back to default.

2.) Change the FSB to the desired frequency.

3.) Change the RAM divider so it comes out to a 166 MHz bus, i.e. 333 MHz RAM; 3:4 or something like that. Again, I may have the divider backwards.

4.) Now boot up and stress the system with a game, sound, an HDD tester, everything you can. This will test the NB chip. Maybe also try Folding@home or SETI; they are actually good for stability testing.

5.) If your system is stable, then the OC should have no problems whatsoever.
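Anvil's isolate-one-component idea boils down to a small matrix of BIOS configurations: each test leaves everything stock except the one part under suspicion. A sketch (the stock values are the Barton 2500+ numbers from this thread; the divider ratios are illustrative, since the exact ratios a BIOS offers vary):

```python
# Sketch of the component-isolation tests as data. "fsb_mhz" is the actual
# bus clock (166 = "333 FSB" in DDR terms, 200 = "400"); "ram_ratio"
# scales the RAM bus relative to the FSB. Ratios here are illustrative.

STOCK = {"fsb_mhz": 166, "multiplier": 11.0, "ram_ratio": (1, 1)}  # 2500+ stock

def effective_clocks(cfg):
    """CPU clock = FSB x multiplier; RAM DDR rate = scaled FSB x 2
    (DDR transfers twice per cycle: 166 MHz bus = DDR333, 200 = DDR400)."""
    num, den = cfg["ram_ratio"]
    cpu_mhz = cfg["fsb_mhz"] * cfg["multiplier"]
    ram_ddr_mhz = cfg["fsb_mhz"] * num / den * 2
    return cpu_mhz, ram_ddr_mhz

tests = {
    # RAM test: stock FSB/multiplier, RAM async up near DDR400 via divider
    "ram": {**STOCK, "ram_ratio": (6, 5)},                    # ~398 MHz DDR
    # CPU test: raise only the multiplier, FSB and RAM stay stock
    "cpu": {**STOCK, "multiplier": 12.0},                     # ~1992 MHz core
    # NB test: raise the FSB, divide the RAM back down to ~DDR333
    "nb": {**STOCK, "fsb_mhz": 200, "ram_ratio": (5, 6)},     # ~333 MHz DDR
}
```

If exactly one of these three configurations is unstable, the failing component is the one that configuration pushed; that is the whole logic of Anvil's procedure.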

If you still cannot achieve a stable OC after testing the main components, then you should look at the heat from your devices and the voltage from the PSU.

After checking that the heat is low enough and the PSU lines are stable, if you still cannot OC stably, then check the NB chip and up its voltage to the max. vdd is the setting that controls the voltage to the NB chip; 1.7-1.8 V is recommended for it.

If you were not able to find the problem with these simple tests, then to be honest with ya I don't know what the cause is. Maybe Linux just does not like the OC, I guess.

Sorry I cannot be of further assistance but if you do have any other questions feel free to ask.

Anvil

----------

## neuron

 *AnvilDemon wrote:*   

> To test the ram do this:
> 
> 1.)leave everything at stock. (i.e. voltage settings, fsb, cpu mhz, multiplier, etc) basically everything.
> 
>  2.) Change the divider for the ram to 4:3 or something like that so the ram is running at 400mhz, (this will make it so the fsb is still 166mhz but the ram is running asysnc). I may have it bakwards. The divider.
> ...

 

At the risk of repeating myself: running the memory at X speed in sync puts WAY, WAY more pressure on it than in async mode.

----------

## AnvilDemon

 *neuron wrote:*   

>  *AnvilDemon wrote:*   To test the ram do this:
> 
> 1.)leave everything at stock. (i.e. voltage settings, fsb, cpu mhz, multiplier, etc) basically everything.
> 
>  2.) Change the divider for the ram to 4:3 or something like that so the ram is running at 400mhz, (this will make it so the fsb is still 166mhz but the ram is running asysnc). I may have it bakwards. The divider.
> ...

 

Yes, I understand this, but someone with weak-to-midrange OC'ing ability needs to be taught how to test their components appropriately. By doing the tests they at least have a base from which to start, and a better understanding of where problems may come up later down the OC'ing road.

----------

## firaX

I've been running the RAM at 4:3 (333:400) all the time and it worked fine.

Maybe my timings are bad? It's said to be good RAM, though (CL2 on the packaging).

Anyway, I'm running at 333 MHz synced now, because I heard synced is a lot better than async.

----------

## AnvilDemon

Right, synced is better for performance, no doubt about that. I am just saying run the fsb at 333mhz and the ram at 400mhz to test the ram for stability. It's just one of the tests to see if components can handle what you want to throw at them.

I know you pretty much have done this test but try the others if you have not gotten that far yet.

----------

## firaX

I've done this, yes. It ran fine without ever freezing (about 14 days of uptime, then I rebooted to try the overclock).

----------

## AnvilDemon

I'm sorry, firaX, just trying to help you out, man. Maybe I should ask the question: have you tested the CPU? Have you tested the NB chip? We know you have tested the RAM. If you have tested all the parts accordingly and no stability problems occurred, then, well, like I said in my other post, I really cannot contribute more, as this is my starting point when I OC a system.

I find the max of the components, then slowly find the balance of FSB and CPU MHz that I like. It's just something I do, I guess, and everyone's components and systems are different, so not all rules apply to all systems.

Good luck man, hope you figure out what the problem is for the OC.

----------

## firaX

I've tested the CPU with a minimal overclock and no vcore increase.

As stated above:

11.5x166 = 1900 => stable as a rock WITHOUT a vcore increase

9.5x200 = 1900 => unstable even with a 1.80 V vcore

I haven't done the 4:3 thingy (RAM in 333 mode, FSB at 400).

----------

## AnvilDemon

 *firaX wrote:*   

> i ve tested the cpu with a minimal overclock and no vcore increase
> 
> as stated above:
> 
> 11,5x166=1900  => stable as a rock WITHOUT vcore increase
> ...

 

From what I see in this post:

You have basically done the NB chip test, and you said it is unstable. This tells me the NB is the problem. Just for fun, could you try 190 FSB synced?

If the system is stable then, the NB is the problem for sure. I have seen this on nForce mobos plenty of times, and it is the vdd issue: you will need more vdd on the NB chip to sustain a higher FSB.

There has been talk about the timings used on the RAM and NB chip when running a 3200+ CPU. When the BIOS sees a 3200+ chip, it sets looser, less stringent timings on the RAM and NB chip by default; but with a 2500+ in there, it automatically sets the tightest timings. You still have control over the RAM timings, but I am talking about the ones in the registers between the NB chip and the RAM. Those you have no control over.

The best thing I can say is to run at a slightly lower FSB to get close to what you would like, maybe 190 FSB with an 11.5 multiplier. This should give performance close to what you want from your setup.

There is a fix for this problem. I am looking for it now, but it's been so long I don't know if I can find it. The fix was posted on the www.futuremark.com forums: you have to cut a bridge on the CPU to make it think it is a 133 FSB CPU. That is what I remember; I don't know how accurate I am.

Anvil

----------

## firaX

If it's an NB problem, what if I had a native Athlon XP 3200+ CPU?

Those run natively at a 400 MHz FSB! My board even officially supports 400 MHz (it's an A7N8X revision 2).

So it's kinda weird that my board would be at fault.

How would I increase the vdimm?

----------

## AnvilDemon

You will not increase the vdimm; you will increase the vdd. vdimm is for the RAM and vdd is for the NB chip. As I said in my last post, the problem is not that the nForce can't handle a 200 MHz FSB; it's that when the BIOS detects the 2500+, it sets tighter (more stringent) timings than if you had a 3200+.

This is not the article I was looking for but it seems as though it could be worth a try.

http://discuss.futuremark.com/forum/showflat.pl?Cat=&Board=techmobocpu&Number=2490623&fpart=1

Hope this trick works for ya, as even I kinda hate to cut CPU bridges, and that is what the other fix entails.

----------

## AnvilDemon

Hmmm, well, after going through most of the forums and searching for the answer, I cannot find it. I think your safest bet would most likely be to go with a 195 FSB or lower; people's systems are almost always stable at that setting and below. The only way I know of getting a 2500+ with this problem over the 200 MHz mark is to volt-mod the NB chip above 1.8 V, or to cut one of the bridges on the CPU, but I am not sure which bridge; I can't remember.

----------

## firaX

I don't feel like cutting anything.

BTW, I've read your posted link completely.

Those people have gotten their 2500+ Bartons to 11x200 without changing anything but their FSB! Well, I've got an Asus board, not an Abit, but it's still the same chipset...

 :Sad: 

----------

## stonent

Sometimes overclocking can be affected by the smoothness of the voltage to the CPU. Boards with more capacitors tend to have a smoother, more accurate voltage; that may be the difference between the different boards.

----------

## firaX

I guess I should have decided on the Abit board, as most people who overclock use that one :/ But I thought Asus was some kind of "sign of quality", so I got that one (not knowing much about Abit, that is...).

Perhaps it really is the A7N8X board that's unsuitable for overclocking? Has anyone successfully OC'd on that board (using FSB 400, that is!)?

----------

## AnvilDemon

I have read many posts over at Futuremark, and most nForce2 chipsets can reach the 400 MHz (200 MHz) FSB setting with ease. I do not believe it is manufacturer specific; some people just get luckier than others.

I am aware, though, that you should make sure you have the latest BIOS for your mobo. Usually that can take care of small issues, especially when OC'ing. If you have tried everything, update the BIOS if you do not have the current one, and if that does not help, maybe a volt mod to the NB chip is in order, or the cutting of the bridge on the CPU, although that can be dangerous to the chip.

Maybe if you slap up a post over on the Futuremark forums they can help you further, as I am not nearly as skilled as a lot of the people over there. I do consider myself very knowledgeable in this area, though; it's just that more brains are better than one or a few, ya know.

----------

## firaX

I do have the latest BIOS; I even had to get it, otherwise the board wouldn't recognize my chip  :Smile: 

You know, it's alright; it's not that I NEED to OC it to live on  :Smile: 

It's just this feeling... you can get $460 speeds for $95  :Smile: 

----------

## AnvilDemon

I understand, firaX; it's like my CPU. I have it at 2600 MHz and not the 1800 MHz it's supposed to be at. I mean, they don't even make a Barton CPU that fast, so what is my CPU worth? Well, it's definitely worth more than a 3200+ at stock speeds, in my opinion. I spent $125 at the time when I bought it. I know the 2500+'s are about $95 now, and I still think you really can't find a better deal on this type of CPU. Think of it this way:

Even if you can only run stable at 190 FSB (380 MHz mem) with an 11.5 multiplier, you still made out, man. It may not be an exact 3200+, but you are very close for a lot less money than you would have spent. I would give it a whirl and see if you are stable at that setting; I believe you would be. If not, try 185 FSB. Anything less than that and something is wrong, or maybe you just got a shoddy chip on the mobo.

Well,

Party On,

Anvil

----------

## firaX

Alright, I'll try that the next time I reboot (currently sick of rebooting :p ).

Thanks for all your hard work replying all the time  :Smile: 

----------

## neuron

http://www.nforcershq.com/forum/

Many people have gotten their A7N8X boards to 200 MHz FSB and beyond, but the revision of the board is important as hell.

----------

## firaX

I've got revision 2.0 (the one that *should* officially support a 400 MHz FSB)  :Smile: 

----------

## RagManX

It might just be that your particular CPU doesn't like running at a 200 MHz FSB. Try stepping back 1 MHz at a time until you find where you are stable. Then bump your multiplier up 0.5 at a time until you are unstable again. That should help you zero in on the sweet spot for your CPU.

RagManX
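RagManX's procedure is essentially a two-phase search. A sketch, with a fake `is_stable()` standing in for "reboot and stress-test at these settings" (in reality every probe costs a reboot plus a long burn-in, and the stability cutoffs below are invented):

```python
# Sketch of RagManX's tuning loop. is_stable(fsb, mult) is a stand-in for
# the real-world check: boot at those settings and stress-test for a while.

def find_sweet_spot(is_stable, fsb=200, mult=11.0, min_fsb=166, max_mult=12.5):
    # Phase 1: step the FSB back 1 MHz at a time until the system is stable.
    while fsb > min_fsb and not is_stable(fsb, mult):
        fsb -= 1
    # Phase 2: bump the multiplier 0.5 at a time until it would go unstable.
    while mult + 0.5 <= max_mult and is_stable(fsb, mult + 0.5):
        mult += 0.5
    return fsb, mult

# Hypothetical chip: stable whenever FSB <= 190 and core clock <= 2280 MHz.
fake = lambda fsb, mult: fsb <= 190 and fsb * mult <= 2280
```

For that hypothetical chip the search settles at 190 FSB with a 12x multiplier; a real run would also re-verify the final setting with an overnight stress test, since a single quick probe per step can miss marginal instability.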

----------

## krunk

Overclocking is a gamble. You can never predict the outcome you'll get based on others' success; every two systems are unique, even if the parts are exactly the same in both. You must bear in mind that there is a reason that chip was specced at 2500 speeds    :Smile:  . 

[H]ardOCP did a pretty thorough test on a random sampling of 2500 Bartons. They found that although many were excellent overclockers, some did show issues at speeds of >= 200 FSB.

FYI, I can put my 2500 at 3200 speeds @ 200 FSB with no problem at all... even with my potentiometer turned down, conventional cooling, and no voltage increase, though it gets a bit warm and I have to turn the fans up when compiling. I don't know just how far it will go, because 1) I haven't had time to fully explore my newly upgraded system's capabilities, and 2) I'm using the 2.6 kernel now, which doesn't have lm-sensors support yet. My instincts say I should be able to get 2400 MHz out of it, though.

As far as the 'Windows or Linux' OC question goes, there's only one way to tell: OC to the borderline-stability mark, run Prime95's torture test, and see which one errors out first. Comparing 'normal' Windows use to compiling in Linux is unfair. I do know that when my system locks up under Linux, I hard-reboot without fear, but in Windows there is always that fear that I've just screwed something up majorly and will have to do a registry recovery or some other nonsense.

My 2 cents

----------

