# NVidia GLX causes X crash [FIXED]

## chrisashton84

Hi,

Initially I thought I was having a problem related to the whole amd64 IOMMU fiasco, but unfortunately that's not the case (that would have been an easy fix).  When I launch X with any version of nvidia-drivers, it destroys the screen (blank or garbage, and it usually doesn't get back to the console).  No error message is displayed, and I only get the most minimal data from dmesg:

```
localhost ~ # startx
xauth:  creating new authority file /root/.serverauth.29134

X Window System Version 7.1.1
Release Date: 12 May 2006
X Protocol Version 11, Revision 0, Release 7.1.1
Build Operating System: UNKNOWN
Current Operating System: Linux localhost 2.6.17-gentoo-r9 #1 Wed Nov 29 07:58:38 CST 2006 x86_64
Build Date: 29 November 2006
        Before reporting problems, check http://wiki.x.org
        to make sure that you have the latest version.
Module Loader present
Markers: (--) probed, (**) from config file, (==) default setting,
        (++) from command line, (!!) notice, (II) informational,
        (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Wed Nov 29 10:13:50 2006
(==) Using config file: "/etc/X11/xorg.conf"
(EE) Failed to load module "wfb" (module does not exist, 0)
Backtrace:
XIO:  fatal IO error 104 (Connection reset by peer) on X server ":0.0"
      after 0 requests (0 known processed) with 0 events remaining.

localhost ~ # dmesg | tail -n 10
agpgart: Putting AGP V3 device at 0000:01:00.0 into 8x mode
X[28997]: segfault at 0000000000000000 rip 00002b2358bf4ec5 rsp 00007fff51ec24b8 error 4
agpgart: Found an AGP 3.0 compliant device at 0000:00:00.0.
agpgart: Putting AGP V3 device at 0000:00:00.0 into 8x mode
agpgart: Putting AGP V3 device at 0000:01:00.0 into 8x mode
X[29074]: segfault at 0000000000000000 rip 00002b21ef1b5ec5 rsp 00007fffbb900ef8 error 4
agpgart: Found an AGP 3.0 compliant device at 0000:00:00.0.
agpgart: Putting AGP V3 device at 0000:00:00.0 into 8x mode
agpgart: Putting AGP V3 device at 0000:01:00.0 into 8x mode
X[29151]: segfault at 0000000000000000 rip 00002b8bbddcbec5 rsp 00007fffecce92d8 error 4

localhost ~ # cat /var/log/Xorg.0.log | tail -n 90
        [24] -1 0       0x0000d000 - 0x0000d01f (0x20) IX[B]
        [25] 0  0       0x000003b0 - 0x000003bb (0xc) IS[B]
        [26] 0  0       0x000003c0 - 0x000003df (0x20) IS[B]
(II) Setting vga for screen 0.
(**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
(==) NVIDIA(0): RGB weight 888
(==) NVIDIA(0): Default visual is TrueColor
(==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
(**) NVIDIA(0): Option "NoLogo" "true"
(**) NVIDIA(0): Option "AddARGBGLXVisuals" "true"
(**) NVIDIA(0): Enabling RENDER acceleration
(II) NVIDIA(0): Support for GLX with the Damage and Composite X extensions is
(II) NVIDIA(0):     enabled.
(II) NVIDIA(0): NVIDIA GPU GeForce 6800 GT at PCI:1:0:0 (GPU-0)
(--) NVIDIA(0): Memory: 262144 kBytes
(--) NVIDIA(0): VideoBIOS: 05.40.02.15.01
(II) NVIDIA(0): Detected AGP rate: 8X
(--) NVIDIA(0): Interlaced video modes are supported on this GPU
(--) NVIDIA(0): Connected display device(s) on GeForce 6800 GT at PCI:1:0:0:
(--) NVIDIA(0):     DELL 2001FP (DFP-0)
(--) NVIDIA(0): DELL 2001FP (DFP-0): 155.0 MHz maximum pixel clock
(--) NVIDIA(0): DELL 2001FP (DFP-0): Internal Single Link TMDS
(II) NVIDIA(0): Assigned Display Device: DFP-0
(II) NVIDIA(0): Validated modes:
(II) NVIDIA(0):     "1600x1200"
(II) NVIDIA(0):     "1280x1024"
(II) NVIDIA(0):     "1024x768"
(II) NVIDIA(0): Virtual screen size determined to be 1600 x 1200
(--) NVIDIA(0): DPI set to (99, 98); computed from "UseEdidDpi" X config
(--) NVIDIA(0):     option
(--) Depth 24 pixmap format is 32 bpp
(II) do I need RAC?  No, I don't.
(II) resource ranges after preInit:
        [0] 0   0       0xf1000000 - 0xf1ffffff (0x1000000) MX[B]
        [1] 0   0       0xe0000000 - 0xefffffff (0x10000000) MX[B]
        [2] 0   0       0xf0000000 - 0xf0ffffff (0x1000000) MX[B]
        [3] -1  0       0x00100000 - 0x3fffffff (0x3ff00000) MX[B]E(B)
        [4] -1  0       0x000f0000 - 0x000fffff (0x10000) MX[B]
        [5] -1  0       0x000c0000 - 0x000effff (0x30000) MX[B]
        [6] -1  0       0x00000000 - 0x0009ffff (0xa0000) MX[B]
        [7] -1  0       0xf3017000 - 0xf30170ff (0x100) MX[B]
        [8] -1  0       0xf3000000 - 0xf300ffff (0x10000) MX[B]
        [9] -1  0       0xf3010000 - 0xf3013fff (0x4000) MX[B]
        [10] -1 0       0xf3014000 - 0xf30147ff (0x800) MX[B]
        [11] -1 0       0xf3016000 - 0xf3016fff (0x1000) MX[B]
        [12] -1 0       0xd0000000 - 0xcfffffff (0x0) MX[B]O
        [13] -1 0       0xf1000000 - 0xf1ffffff (0x1000000) MX[B](B)
        [14] -1 0       0xe0000000 - 0xefffffff (0x10000000) MX[B](B)
        [15] -1 0       0xf0000000 - 0xf0ffffff (0x1000000) MX[B](B)
        [16] -1 0       0xf3015000 - 0xf3015fff (0x1000) MX[B](B)
        [17] 0  0       0x000a0000 - 0x000affff (0x10000) MS[B](OprD)
        [18] 0  0       0x000b0000 - 0x000b7fff (0x8000) MS[B](OprD)
        [19] 0  0       0x000b8000 - 0x000bffff (0x8000) MS[B](OprD)
        [20] -1 0       0x0000ffff - 0x0000ffff (0x1) IX[B]
        [21] -1 0       0x00000000 - 0x000000ff (0x100) IX[B]
        [22] -1 0       0x0000e400 - 0x0000e41f (0x20) IX[B]
        [23] -1 0       0x0000e000 - 0x0000e01f (0x20) IX[B]
        [24] -1 0       0x0000dc00 - 0x0000dc1f (0x20) IX[B]
        [25] -1 0       0x0000d800 - 0x0000d80f (0x10) IX[B]
        [26] -1 0       0x0000d400 - 0x0000d407 (0x8) IX[B]
        [27] -1 0       0x0000d000 - 0x0000d01f (0x20) IX[B]
        [28] 0  0       0x000003b0 - 0x000003bb (0xc) IS[B](OprU)
        [29] 0  0       0x000003c0 - 0x000003df (0x20) IS[B](OprU)
(II) NVIDIA(0): Setting mode "1600x1200"
(II) Loading extension NV-GLX
(II) NVIDIA(0): NVIDIA 3D Acceleration Architecture Initialized
(II) NVIDIA(0): Using the NVIDIA 2D acceleration architecture
(==) NVIDIA(0): Backing store disabled
(==) NVIDIA(0): Silken mouse enabled
(**) Option "dpms"
(**) NVIDIA(0): DPMS enabled
(II) Loading extension NV-CONTROL
(==) RandR enabled
(II) Initializing built-in extension MIT-SHM
(II) Initializing built-in extension XInputExtension
(II) Initializing built-in extension XTEST
(II) Initializing built-in extension XKEYBOARD
(II) Initializing built-in extension XC-APPGROUP
(II) Initializing built-in extension SECURITY
(II) Initializing built-in extension XINERAMA
(II) Initializing built-in extension XFIXES
(II) Initializing built-in extension XFree86-Bigfont
(II) Initializing built-in extension RENDER
(II) Initializing built-in extension RANDR
(II) Initializing built-in extension COMPOSITE
(II) Initializing built-in extension DAMAGE
(II) Initializing built-in extension XEVIE
(II) Initializing extension GLX
Backtrace:
localhost ~ #
```

I can run X with nv driver or with glx commented out in xorg.conf:

```
localhost ~ # cat /etc/X11/xorg.conf
Section "ServerLayout"
        Identifier      "TwinViewLayout"
        Screen          0 "DFP"
EndSection

Section "Module"
        Load    "bitmap"
        Load    "freetype"
        Load    "type1"
        Load    "extmod"
        Load    "glx"
        Load    "v4l"
EndSection

Section "InputDevice"
        Identifier      "Logitech MX1000"
        Driver          "mouse"
        Option          "CorePointer"
        Option          "Device" "/dev/input/mice"
EndSection

Section "InputDevice"
        Identifier      "nat4000"
        Driver          "keyboard"
        Option          "CoreKeyboard"
EndSection

Section "Monitor"
        Identifier      "Dell 2001 FP"
        HorizSync       31.5 - 108.0
        VertRefresh     60.0 - 60.0
        Option          "dpms"
EndSection

Section "Monitor"
        Identifier      "Nokia Multigraph 445Xpro"
        VertRefresh     60.0 - 85.0
        Option          "dpms"
EndSection

Section "Device"
        Identifier      "nVidia GeForce 6800 GT"
        Driver          "nvidia"
        Card            "nVidia GeForce 6800 GT"
        Option          "NoLogo"                        "true"
        Option          "AddARGBGLXVisuals"             "true"
EndSection

Section "Screen"
        Identifier      "DFP"
        Device          "nVidia GeForce 6800 GT"
        Monitor         "Dell 2001 FP"
        DefaultDepth    24
        SubSection "Display"
                Depth   24
                Modes   "1600x1200" "1280x1024" "1024x768"
        EndSubSection
EndSection

Section "Extensions"
        Option          "Composite"                     "enable"
EndSection

Section "DRI"
        Group 0
        Mode 0666
EndSection
localhost ~ #
```

Is there anything else that might be helpful?  I'm fairly stumped, as this was working perfectly; then (perhaps, though most likely unrelated) when my Windows install started crashing on an AGP file during startup, this started happening in Linux.  I've ruled out hardware problems, though, as I have Windows recovered and running perfectly with the latest NVIDIA drivers.  Any ideas?  This is _very_ annoying as school just started back up.  Thanks!

Last edited by chrisashton84 on Sat Dec 02, 2006 8:13 pm; edited 1 time in total

----------

## downey

Well, from the look of it you are having AGP issues, as the crash occurs while AGP is being set up.  It's possible that forcing your card to 4x AGP instead of 8x AGP would get things moving; I think you can disable 8x AGP in your BIOS, though you'll have to look around.  But first I would check your kernel config to make sure you only have AGPGART built as a module and don't have any of the chipset-specific AGP drivers enabled.  NVidia has its own AGP system and only requires the base AGPGART kernel module to work, so it's best to remove any chipset-specific AGP modules.  Other than that I would start checking your system setup: make sure NVidia is set as your OpenGL implementation with eselect, and then look at which kernel modules are loading with lsmod.
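As a quick audit along those lines, here's a sketch (it assumes your kernel config lives at /usr/src/linux/.config; pass another path as the first argument if yours is elsewhere):

```
#!/bin/sh
# List every AGP-related option in a kernel config so chipset-specific
# backends (e.g. CONFIG_AGP_AMD64=y) built into the kernel stand out.
CONFIG="${1:-/usr/src/linux/.config}"
grep -E '^(# )?CONFIG_AGP' "$CONFIG"
```

Per the advice above, only the base CONFIG_AGP (ideally =m) should be enabled; separately, `eselect opengl list` should show nvidia active, and `lsmod` shows what actually got loaded.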

Another thing I noticed is that you have "AddARGBGLXVisuals" in your Device section, whereas it should be in your Screen section instead.  You also don't need to specify the vertical and horizontal refresh rates for each of your monitors, as those can be set up automatically when X starts.  You can leave them in; I just wanted to point that out, as it makes your X setup easier.

Hope that helps.

----------

## chrisashton84

I'll look into the AGP aperture speed issue, but this is a GeForce 6800 GT, so it's AGP Pro (8x) rated (as is my mobo) and has been running at that consistently.  It's possible, I suppose, that the BIOS got messed up and needs to be fixed.  As for the other points: nvidia is set as the OpenGL provider.  I've tried kernels both with amd64 agpgart (built in) and without.  I've tried kernels 2.6.17-r9 and 2.6.18-r3, which changed the IOMMU requirements and therefore whether agpgart was built in by default, and I hacked 2.6.18-r3 so that I could choose whether to include IOMMU, since if I recall correctly that at least used to conflict with the nvidia driver.  Nvidia is loaded (automatically, and whenever I re-emerged nvidia-drivers I'd rmmod and modprobe again).  I even ran lsmod from time to time to make sure the correct version was loaded.  I have very few other modules, as everything possible is built into the kernel.  As for the xorg.conf settings, I don't remember why I have the refresh rates in there (it pulls them correctly from EDID), but I know I spent a long time tracking down the correct placement for the other options.  It's possible that AddARGBGLXVisuals is valid in both places, as compiz/beryl were working beautifully for me.  Thanks for the ideas!

----------

## Headrush

I'm curious as to this:

```
(EE) Failed to load module "wfb" (module does not exist, 0) 
```

as I don't see anywhere in your xorg.conf that a module by that name is being loaded.   :Confused: 

You don't have any of the nvidia framebuffer drivers enabled in the kernel as well, do you?

----------

## chrisashton84

In https://forums.gentoo.org/viewtopic-t-517224-highlight-wfb.html they mention that the module is only available for the 8800 series, and that it's available from nvidia's website.  I'm not sure why the Portage version of the drivers wants that module (I'm using the latest unstable drivers, 9742).  If I use older drivers those lines go away, but everything else stays the same.

Also, in answer to your second question, I only use the vesafb framebuffer driver.

I didn't notice anything incorrect in the BIOS.

This looks a little suspicious to me:

```
localhost linux # cat /proc/driver/nvidia/agp/status
Status:          Disabled

localhost linux # cat /proc/driver/nvidia/agp/card
Fast Writes:     Supported
SBA:             Supported
AGP Rates:       8x 4x
Registers:       0xff000e1b:0x00000000

localhost linux # cat /proc/driver/nvidia/agp/host-bridge
Host Bridge:     PCI device 1106:3188
Fast Writes:     Supported
SBA:             Supported
AGP Rates:       8x 4x
Registers:       0x1f000a1b:0x00000000
```

```
localhost linux # dmesg | grep agp
Linux agpgart interface v0.101 (c) Dave Jones
agpgart: Detected AGP bridge 0
agpgart: AGP aperture is 256M @ 0xd0000000
```

I might be remembering incorrectly, but I thought the nvidia driver should show AGP as enabled.  Could someone double-check that those values are correct for

```
CONFIG_AGP=y
CONFIG_AGP_AMD64=y
# CONFIG_GART_IOMMU is not set
```

?

Thanks!

----------

## downey

You only need the CONFIG_AGP portion; you don't need the CONFIG_AGP_AMD64 stuff.  That is what is causing your issue.  NVidia has its own AGP system, and if you build your motherboard-specific stuff into the kernel then you will have problems using nvidia.  Also, you should build those as modules instead of building them into the kernel itself (use M instead of Y).  It's possible you had them as modules before, and that is why it was working.  Rebuild your kernel and you should be good to go.

----------

## chrisashton84

Odd, I've always used the AMD64 kernel driver (I just double-checked my old configs to make sure).  I've had those three config options I posted above for a couple of years.  Disabling the AMD64 part (leaving just generic AGPGART) does not help; the nvidia driver's /proc status still doesn't show AGP as enabled.  At least I know where to look now.

----------

## Headrush

 *downey wrote:*   

> You only need the CONFIG_AGP portion you don't need the CONFIG_AGP_AMD64 stuff.  That is what is causing your issue. 

 

That's only true if your motherboard is supported by the NVIDIA AGP implementation; not all are.

 *downey wrote:*   

> Also you should build those as modules instead of building them into the kernel itself, use M instead of Y.  It's possible you had them as modules before and that is why it was working before.  Rebuild your kernel and you should be good to go.

 

This shouldn't matter.

----------

## downey

I still stand by building them as modules.  That way you can let the system decide what is and isn't needed, and it makes diagnosing the problem a lot easier.  There aren't many boards that nvidia doesn't support, so in the majority of cases you don't need the chipset-specific AGP.  If everything is built as a module, then if you do need one you can always modprobe it in.

----------

## Headrush

 *downey wrote:*   

> I still stand on building them as modules.  That way you can let the system decide what is and isn't needed.  Also it will allow for diagnosing the problem a lot easier.  There aren't too many boards that nvidia doesn't support so in the majority of the cases you don't need the chipset specific AGP.  If everything is built as a module then if you do need it you can always modprobe it in.

 

Mine is one.   :Wink: 

The problem is that in the latest stable kernel, enabling IOMMU enables AGP support directly in the kernel, so to force it to a module you are going to have to edit the .config file manually.

In my experience, even when the kernel AGP is compiled as a module, the system still loaded it well before xorg started.

With the current changes in udev, you're going to have to find out how to stop it loading.

(I know there was issues going on with blacklisting and don't know if they have been corrected yet.)
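One common approach is a modprobe blacklist entry; this is only a sketch, since the file name here is arbitrary and, as noted above, the blacklist mechanism had issues across udev versions.  The amd64_agp and via_agp module names match this poster's hardware; adjust for yours:

```
# /etc/modprobe.d/blacklist-agp (hypothetical file name)
# Keep the kernel AGP backends from auto-loading so NvAGP can take over.
blacklist agpgart
blacklist amd64_agp
blacklist via_agp
```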

chrisashton84, what motherboard do you have?

Maybe I missed it, but I don't see the NvAGP option in your xorg.conf for forcing the AGP implementation.

(The default of trying both might be foobar, and you may need to explicitly state which to use.)

----------

## downey

Hmm, man I'm so glad that I don't use AGP anymore.  I ran into a number of these issues with my ATI card in my older machine.  PCI Express is the way to go.  My last bit of advice would be to keep X from starting and get your system to a state where the /proc info shows the proper status for the AGP bridge.  Once you have that set up then move on to the X issue.

Also, at this point I would verify that xorg is set up correctly by checking the VIDEO_CARDS and INPUT_DEVICES variables in your make.conf.  If you didn't have nvidia listed in your VIDEO_CARDS variable, you might run into these issues.  I'm grasping here, though.
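For reference, a minimal make.conf fragment along those lines (the values are illustrative for this poster's hardware, not a prescription):

```
# /etc/portage/make.conf (fragment)
VIDEO_CARDS="nvidia"
INPUT_DEVICES="keyboard mouse"
```

After changing these, re-emerging xorg-server (or an `emerge -uDN world`) picks up the new flags.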

----------

## Headrush

 *downey wrote:*   

> Also, at this point I would verify that xorg is set up correctly by checking the VIDEO_CARDS and INPUT_DEVICES variables in your make.conf.  If you didn't have nvidia listed in your VIDEO_CARDS variable, you might run into these issues.  I'm grasping here, though.

 

Sounds like a good point.

I've never had AGP issues since the days of the old VIA driving force problems, so I think it might be configuration issues. 

(Knowing his motherboard model will help.)

----------

## chrisashton84

I've got a MSI Master2FAR mobo (VIA K8T800 chipset).  I have no idea whether it is supported by nvidia or not.  My choice to use the kernel driver dates from around when amd64 Gentoo was just getting started.  Since it is built into the kernel I've never needed the NvAGP option, though I did just try with =1 and the kernel module disabled (same result).  Also, I have been stopping X from loading, but the nvidia driver is loaded at boot.  I'll take that out and try to pin down, step by step, what AGP info I can.

----------

## Headrush

 *chrisashton84 wrote:*   

> I've got a MSI Master2FAR mobo (VIA K8T 800 chipset).  I have no idea whether it is supported by nvidia or not.  My choice to use the kernel driver comes from around when amd64 gentoo was just getting started.  Since it is built into the kernel I've never needed the NvAGP option though I did just try with =1 and the kernel module disabled (same result).  Also, I have been stopping X from loading, but the nvidia driver is loaded at boot.  I'll take that out and try to find step by step what agp info I can.

 

Please post your output from lspci.  We need to make sure you don't have the Pro version.

I have a MSI Neo2-F and it is a VIA K8T800 Pro.  (It's basically the same as the FAR version minus FireWire.)

The normal non-Pro version is supported by the nvidia AGP implementation; the Pro version is not (you must use the kernel AGP).

----------

## chrisashton84

lspci:

```
00:00.0 Host bridge: VIA Technologies, Inc. VT8385 [K8T800 AGP] Host Bridge (rev 01)
00:01.0 PCI bridge: VIA Technologies, Inc. VT8237 PCI bridge [K8T800/K8T890 South]
```

also, debugging X shows the following:

```
(II) Initializing extension GLX
[New Thread 8908560 (LWP 6809)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 8908560 (LWP 6809)]
0x00002b0ce30878e5 in __ctype_tolower_loc () from /lib/libc.so.6
(gdb) bt
#0  0x00002b0ce30878e5 in __ctype_tolower_loc () from /lib/libc.so.6
#1  0x00000000004b32ee in xf86nameCompare ()
#2  0x0000000000882ce0 in ?? ()
#3  0x0000000000000000 in ?? ()
```

I'm currently re-emerging glibc to see if that helps.  Thanks for all the help so far!

----------

## chrisashton84

Is there any way to check the status of AGP besides dmesg and the nvidia /proc entries?  In particular, if the nvidia module isn't loaded, how can I check the current AGP status?
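One option that doesn't depend on the nvidia module is reading the AGP capability straight from PCI configuration space with lspci.  A sketch; the 00:00.0 and 01:00.0 addresses come from the lspci output in this thread, so substitute your own:

```
#!/bin/sh
# Dump the AGP capability (version, Status, Command lines) for the
# host bridge and the video card, without any driver loaded.
lspci -vv -s 00:00.0 | grep -A2 'AGP version'
lspci -vv -s 01:00.0 | grep -A2 'AGP version'
```

The Status line shows what the hardware supports; the Command line shows what is currently enabled (e.g. the negotiated Rate).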

----------

## dmpogo

 *Headrush wrote:*   

>  *downey wrote:*   I still stand on building them as modules.  That way you can let the system decide what is and isn't needed.  Also it will allow for diagnosing the problem a lot easier.  There aren't too many boards that nvidia doesn't support so in the majority of the cases you don't need the chipset specific AGP.  If everything is built as a module then if you do need it you can always modprobe it in. 
> 
> Mine is one.  
> 
> The problem is that IOMMU enables AGP support directly in the kernel when enabled in the lastest stable kernel. So to force it to a module you are going to have to edit the .config file manually.
> ...

 

The default is NvAGP=3, which means try the kernel agpgart first, and only if it is unavailable, try the nvidia one.

In particular, this will load the kernel agpgart module if it is around.  If you want to be sure of trying the Nvidia AGP, set the NvAGP=1 option explicitly (and, of course, don't have kernel agpgart compiled into the kernel, neither the general support nor the chipset-specific version).
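For reference, the option goes in the Device section of xorg.conf.  A minimal fragment (the identifier is a placeholder; the value summary follows the driver README):

```
Section "Device"
    Identifier  "nvidia card"
    Driver      "nvidia"
    # 0 = no AGP, 1 = NVIDIA internal AGP, 2 = kernel agpgart,
    # 3 = try agpgart, then fall back to NVIDIA AGP (the default)
    Option      "NvAGP" "1"
EndSection
```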

----------

## Headrush

 *dmpogo wrote:*   

> The default is NvAGP=3, which means try the kernel agpgart first, and only if it is unavailable, try the nvidia one.
> 
> In particular, this will load the kernel agpgart module if it is around.  If you want to be sure of trying the Nvidia AGP, set the NvAGP=1 option explicitly (and, of course, don't have kernel agpgart compiled into the kernel, neither the general support nor the chipset-specific version)

 

That was the point I was trying to make.  The system, whether udev, coldplug or whatever, tended to load the kernel AGP module before X starts, so setting the NvAGP option to use the nvidia AGP implementation wasn't sufficient.  :Smile: 

----------

## dmpogo

 *Headrush wrote:*   

>  *dmpogo wrote:*   The default is NvAGP=3, which means try the kernel agpgart first, and only if it is unavailable, try the nvidia one.
> 
> In particular, this will load the kernel agpgart module if it is around.  If you want to be sure of trying the Nvidia AGP, set the NvAGP=1 option explicitly (and, of course, don't have kernel agpgart compiled into the kernel, neither the general support nor the chipset-specific version) 
> 
> That was the point I was trying to make. The system, whether udev, coldplug or whatever, tended to load the kernel AGP module before X starts, so the NvAGP option set to use the nvidia AGP implementation wasn't sufficient. 

 

Agreed; this is complementary to mine and a very relevant point.  If the system loads agpgart on its own, no nvidia options will help.  So it is best to recompile the kernel with no AGP support at all to try Nvidia AGP.

----------

## chrisashton84

Well, despite all the problems that pointed towards AGP/GLX, my system suddenly works.  I assumed it was because I had tried again without any kernel AGP support (in 2.6.17), but then I upgraded to 2.6.19-r1, which requires the full IOMMU, AGPGART and AMD64 support I'd disabled before.  Surprisingly, it works this time.  I compiled several programs again but can't point to any that might have caused this to suddenly start working.  Now that X is running again I can tell I have several other issues (can someone confirm that the nvidia /proc entry should show nothing if kernel AGP support is used?), but those belong in another topic.  Thanks for all the help!

----------

## dmpogo

 *chrisashton84 wrote:*   

> Well despite all the problems that pointed towards agp/glx, my system suddenly works.  I assumed it was because I had tried again without any kernel agp support (in 2.6.17), but then I upgraded to 2.6.19-r1 which requires the full IOMMU, AGPGART and AMD64 support which I'd disabled before.  Surprisingly, it works this time.  I compiled several programs again but can't point towards any that might have caused this to suddenly work.  I do have several other issues that I can tell now that X is running again (can someone confirm that the nvidia /proc entry should show nothing if kernel agp support is used?) but those should belong in another topic.  Thanks for all the help!

 

Well, in principle nvidia should work fine with the kernel AGP after all, so it's good to see it is as it should be with 2.6.19.

However, my /proc/driver/nvidia/agp directory has 3 entries (card, host-bridge, status), and they are not empty.

But this is with 2.6.17 and nvidia-drivers-1.0.8776.

----------

## Headrush

 *chrisashton84 wrote:*   

> can someone confirm that the nvidia /proc entry should show nothing if kernel agp support is used?)

 

I have been using kernel agpgart on amd64 for some time, and you should still have those entries in /proc.

----------

## chrisashton84

```
localhost Chris # cat /proc/driver/nvidia/agp/status
Status:          Disabled
AGP initialization failed, please check the ouput
of the 'dmesg' command and/or your system log file
for additional information on this problem.

localhost Chris # dmesg | grep AGP
agpgart: Detected AGP bridge 0
agpgart: AGP aperture is 256M @ 0xd0000000
NVRM: not using NVAGP, an AGPGART backend is loaded!

localhost Chris # dmesg | grep agp
agpgart: Detected AGP bridge 0
agpgart: AGP aperture is 256M @ 0xd0000000
Linux agpgart interface v0.101 (c) Dave Jones
```

Who knows... I get high fps in the couple of games I've tried, and beryl is very smooth.

----------

## soulrider

What exactly is the solution for this problem?

I've got the same problem but could not fix it.

Was it 2.6.19-r1 with IOMMU and AGPGART?

What nvidia-drivers version?

Might you PM me your kernel config?

TIA

----------

## Izarilun

I also have the same problem with my Nvidia FX 5500.

I get a 104 error when starting X.

I've disabled motherboard-specific AGP support and use the 1.0.8776 Nvidia drivers (same result with the unstable ones).

Here's my info 

```
Portage 2.1.1-r2 (default-linux/x86/2006.1, gcc-4.1.1, glibc-2.3.6-r4, 2.6.18-gentoo-r3 i686)
=================================================================
System uname: 2.6.18-gentoo-r3 i686 Intel(R) Pentium(R) 4 CPU 2.60GHz
Gentoo Base System version 1.12.1
Last Sync: Sun, 10 Dec 2006 11:50:01 +0000
app-admin/eselect-compiler: [Not Present]
dev-java/java-config: 1.3.7, 2.0.30
dev-lang/python:     2.4.3-r1
dev-python/pycrypto: 2.0.1-r5
dev-util/ccache:     [Not Present]
dev-util/confcache:  [Not Present]
sys-apps/sandbox:    1.2.17
sys-devel/autoconf:  2.13, 2.59-r7
sys-devel/automake:  1.4_p6, 1.5, 1.6.3, 1.7.9-r1, 1.8.5-r3, 1.9.6-r2
sys-devel/binutils:  2.16.1-r3
sys-devel/gcc-config: 1.3.13-r3
sys-devel/libtool:   1.5.22
virtual/os-headers:  2.6.11-r2
ACCEPT_KEYWORDS="x86"
AUTOCLEAN="yes"
CBUILD="i686-pc-linux-gnu"
CFLAGS="-march=pentium4 -O3 -pipe -fomit-frame-pointer"
CHOST="i686-pc-linux-gnu"
CONFIG_PROTECT="/etc /usr/kde/3.5/env /usr/kde/3.5/share/config /usr/kde/3.5/shutdown /usr/share/X11/xkb /usr/share/config"
CONFIG_PROTECT_MASK="/etc/env.d /etc/env.d/java/ /etc/gconf /etc/java-config/vms/ /etc/revdep-rebuild /etc/terminfo"
CXXFLAGS="-march=pentium4 -O3 -pipe -fomit-frame-pointer"
DISTDIR="/usr/portage/distfiles"
FEATURES="autoconfig distlocks metadata-transfer sandbox sfperms strict"
GENTOO_MIRRORS="http://ftp.snt.utwente.nl/pub/os/linux/gentoo ftp://ftp.snt.utwente.nl/pub/os/linux/gentoo ftp://mirror.scarlet-internet.nl/pub/gentoo http://mirror.switch.ch/ftp/mirror/gentoo/ ftp://mirror.switch.ch/mirror/gentoo/ ftp://ftp.solnet.ch/mirror/Gentoo http://gentoo.mirror.solnet.ch "
LINGUAS="eu es"
MAKEOPTS="-j2"
PKGDIR="/usr/portage/packages"
PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --compress --force --whole-file --delete --delete-after --stats --timeout=180 --exclude='/distfiles' --exclude='/local' --exclude='/packages'"
PORTAGE_TMPDIR="/var/tmp"
PORTDIR="/usr/portage"
SYNC="rsync://rsync.europe.gentoo.org/gentoo-portage"
USE="x86 alsa alsa_cards_emu10k1 apache2 bash-completion berkdb bitmap-fonts cdparanoia cdr cdrom cli cracklib crypt dlloader dri dvd dvdr dvdread elibc_glibc fluxbox fortran gdbm gpm iconv input_devices_evdev input_devices_keyboard input_devices_mouse ipv6 isdnlog java kde kdeenablefinal kernel_linux libg++ linguas_es linguas_eu lm_sensors mime mozilla mp3 mpeg ncurses nls nptl nptlonly ogg opengl pam pcre perl ppds pppd python q3t qt4 quicktime readline reflection samba session spl ssl tcpd truetype-fonts type1-fonts udev unicode usb userland_GNU v4l vcd verbose video_cards_nvidia vorbis wifi win32codecs xine xorg xvid zlib"
Unset
```

And my xorg.conf:

```
Section "Module"
   Load  "freetype"
   Load  "extmod"
   Load  "glx"
   Load  "dbe"
   Load  "record"
   Load  "xtrap"
   Load  "type1"
   Load  "v4l"
EndSection

Section "Files"
    FontPath   "/usr/share/fonts/misc/"
    FontPath   "/usr/share/fonts/75dpi/"
EndSection

Section "InputDevice"
    Identifier  "Keyboard1"
    Driver      "kbd"
    Option "AutoRepeat" "500 30"
    Option "XkbRules"   "xorg"
    Option "XkbModel"   "logicdp"
    Option "XkbLayout"  "es"
EndSection

Section "InputDevice"
    Identifier  "Mouse1"
    Driver      "mouse"
    Option "Protocol"    "ExplorerPS/2" # Explorer PS/2
    Option "Device"      "/dev/input/mice"
    Option "ZAxisMapping"   "4 5 6 7"
EndSection

Section "Monitor"
    Identifier  "My Monitor"
    HorizSync   31.5 - 64.3
    VertRefresh 50-70
EndSection

Section "Device"
    Identifier  "My Video Card"
    Driver      "nvidia"
    option      "NvAGP" "1"
#     Driver "nv"
#    VideoRam   262144
EndSection

Section "Screen"
    Identifier  "Screen 1"
    Device      "My Video Card"
    Monitor     "My Monitor"
    DefaultDepth 24
    Option "DPMS"
    SubSection "Display"
        Viewport 0 0
        Depth 24
        Modes "1280x1024" "1024x768" "800x600" "640x480"
    EndSubSection
EndSection

Section "ServerLayout"
    Identifier  "Simple Layout"
    Screen  "Screen 1"
    InputDevice "Mouse1" "CorePointer"
    InputDevice "Keyboard1" "CoreKeyboard"
EndSection
```

----------

## interested1

I'm sticking with my 2.6.17-r7 kernel to avoid this nvidia AGP issue.  I spent a little time trying to get a newer kernel working (2.6.19-r1), but without any luck.  Mostly I could not get nvidia's AGP driver to be used over the kernel's, even though it seemed I had no trace of the kernel's AGP in my recompiled kernel.

Anyone have any good reasons that this might not be a good idea?

----------

## vrai

I'm also getting this problem: GLX ran fine (with the NVidia binary drivers) until I moved to modular X.org.  Since then I've been unable to run X with GLX, though it runs fine with GLX disabled.

The replies I've read seem to suggest that the problem lies with AGP.  My dmesg output after a failed attempt to run with GLX is...

```
NVRM: not using NVAGP, an AGPGART backend is loaded!
NVRM: bad caching on address 0xd2d28000: actual 0x163 != expected 0x173
```

While after a successful run without GLX...

```
agpgart: Found an AGP 3.0 compliant device at 0000:00:00.0.
agpgart: Putting AGP V3 device at 0000:00:00.0 into 8x mode
agpgart: Putting AGP V3 device at 0000:03:00.0 into 8x mode
```

What I'm not entirely clear on is what I should do. I didn't touch the kernel during the move to modular x-org and it's run for two years without any problems (I'm on 2.6.10-gentoo-r6). Will an upgrade to the latest kernel fix the problem? Or do I need to use an older version of the Nvidia binary drivers?

----------

