# Monitor not allowing full resolution...

## chix4mat

Hi all, 

I am in the process of setting up a new PC, but am running into a resolution problem. The native resolution for the display is 2560x1600, but no matter what I do, it will never go above 1280x800. I am using an 8800GT card with the latest beta drivers. When hooked up to a smaller monitor, everything works fine; it's only with this display that problems arise. The monitor also works just fine in Windows. 

Is there a way I could somehow -force- a resolution to be applied? I believe the 1280x800 resolution it's choosing is the default for the monitor during the boot process. It could be that the monitor is simply not Linux-friendly... I'm not sure.

Here's the error I receive in the Xorg log: 

```
(II) Setting vga for screen 0.
(**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
(==) NVIDIA(0): RGB weight 888
(==) NVIDIA(0): Default visual is TrueColor
(==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
(**) NVIDIA(0): Enabling RENDER acceleration
(II) NVIDIA(0): NVIDIA GPU GeForce 8800 GT (G92) at PCI:2:0:0 (GPU-0)
(--) NVIDIA(0): Memory: 524288 kBytes
(--) NVIDIA(0): VideoBIOS: 62.92.16.00.02
(II) NVIDIA(0): Detected PCI Express Link width: 4X
(--) NVIDIA(0): Interlaced video modes are supported on this GPU
(--) NVIDIA(0): Connected display device(s) on GeForce 8800 GT at PCI:2:0:0:
(--) NVIDIA(0):     Gateway XHD3000 (DFP-0)
(--) NVIDIA(0): Gateway XHD3000 (DFP-0): 330.0 MHz maximum pixel clock
(--) NVIDIA(0): Gateway XHD3000 (DFP-0): Internal Dual Link TMDS
(II) NVIDIA(0): Assigned Display Device: DFP-0
(WW) NVIDIA(0): No valid modes for "2560x1600"; removing.
(WW) NVIDIA(0): No valid modes for "1680x1050"; removing.
(WW) NVIDIA(0):
(WW) NVIDIA(0): Unable to validate any modes; falling back to the default mode
(WW) NVIDIA(0):     "nvidia-auto-select".
(WW) NVIDIA(0):
(II) NVIDIA(0): Validated modes:
(II) NVIDIA(0):     "nvidia-auto-select"
(II) NVIDIA(0): Virtual screen size determined to be 1280 x 800
(--) NVIDIA(0): DPI set to (50, 50); computed from "UseEdidDpi" X config
(--) NVIDIA(0):     option
(==) NVIDIA(0): Disabling 32-bit ARGB GLX visuals.
(--) Depth 24 pixmap format is 32 bpp
```

The full error is here if it's of any use. Same with my xorg configuration.

I appreciate any help!

----------

## Monkeh

It looks like the display doesn't report all modes via EDID. Dig out your monitor manual and try generating a modeline here: http://xtiming.sourceforge.net/cgi-bin/xtiming.

Also, why have you got an 8800GT on a PCI-E 4x lane?

----------

## NeddySeagoon

chix4mat,

It looks like you need a modeline, and to tell the nVidia driver to give you manual control. I suspect you need a 60Hz mode, and I have never tested any of these modeline generators at such high resolutions. 

nVidia keep changing how you do this, but from memory it's covered in /usr/share/doc/nvidia-drivers/README (Appendix A), which lists all the options the nVidia driver accepts.

You are looking for an option like UseEDID, which needs to be set false.
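If it comes to that, a minimal sketch of what the override could look like in your xorg.conf Device section (the option name is from the driver README, but verify the exact spelling against the README shipped with your driver version):

```
Section "Device"
    Identifier "Card0"
    Driver     "nvidia"
    # Ignore the monitor's EDID entirely; X will then rely on the
    # HorizSync/VertRefresh ranges and ModeLines you supply yourself.
    Option     "UseEDID" "FALSE"
EndSection
```

With EDID off, you must supply sane HorizSync/VertRefresh ranges in the Monitor section, or the driver has nothing to validate modes against.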

----------

## Monkeh

 *NeddySeagoon wrote:*   

> chix4mat,
> 
> It looks like you need a modeline and to tell the nVidia driver to give you manual control. I suspect you need a 60Hz mode and I have never tested any of these mode line generators at such high resolutions. 
> 
> nVidia keep changing how you do that but its listed in /usr/share/doc/nvidia-drivers/README (Appendix A) from memory, that lists all the options the nVidia driver accepts.
> ...

 

There should be no need to disable EDID usage for custom modes, only for more thoroughly broken EDIDs.

Edit: To quote the readme:  *Quote:*   

> when constructing the mode pool for a display device, the X driver uses custom ModeLines specified in the X config file (through the "Mode" or "ModeLine" entries in the Monitor Section) as one of the mode source

 

----------

## chix4mat

Thanks for the help guys! I haven't been able to figure it out yet... I followed the guide there and thought I had the configuration right, but it still won't load the proper resolution. 

```
Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Monitor Vendor"
    ModelName      "Monitor Model"
    HorizSync      30.0 - 110.0
    VertRefresh    59.0 - 61.0
    Modeline       "2560x1600@60" 428.78 2560 2592 4216 4248 1600 1632 1648 1681
EndSection

Section "Device"
    Identifier     "Card0"
    Driver         "nvidia"
    VendorName     "nVidia Corporation"
    BoardName      "Unknown Board"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Card0"
    Monitor        "Monitor0"
    DefaultDepth    24
    SubSection     "Display"
        Depth       24
        Modes      "2560x1600@60"
    EndSubSection
EndSection
```

I might give up on it soon. This is not the final monitor for this PC, and the one that is, works well. I just had hoped to be able to figure this out to know if it was a fault with the monitor itself or not.

----------

## Monkeh

Show the log with that, please.

----------

## chix4mat

From what I can see, not much has changed: 

http://deathspawner.net/xorg_error_2.txt

Thanks again,

----------

## Monkeh

Change the @ in both entries to an underscore, and run X with -logverbose 6

----------

## NeddySeagoon

chix4mat,

Your original log showed 

```
(--) NVIDIA(0): Gateway XHD3000 (DFP-0): 330.0 MHz maximum pixel clock
```

 So your display reports that it supports a maximum 330Mhz pixel clock. Your modeline says 

```
 Modeline "2560x1600@60" 428.78 2560 2592 4216 4248 1600 1632 1648 1681
```

Which demands a 428.78MHz pixel clock.

I suspect that the 330.0 MHz reported by the EDID is incorrect and you have one of the more thoroughly broken EDIDs that Monkeh warned about.

In short, I suspect that xorg is not using your modeline because it's out of range.
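The arithmetic behind that is easy to check: the pixel clock a modeline demands is roughly htotal x vtotal x refresh rate, where htotal and vtotal are the last horizontal and vertical numbers in the modeline. A quick sketch in plain Python, using the values from your modeline and log:

```python
# Modeline "2560x1600@60" 428.78 2560 2592 4216 4248 1600 1632 1648 1681
pclk_mhz = 428.78            # pixel clock the modeline requests
htotal, vtotal = 4248, 1681  # total pixels per line, total lines per frame

# Refresh rate implied by the modeline: pixel clock / (htotal * vtotal)
refresh_hz = pclk_mhz * 1e6 / (htotal * vtotal)
print(f"implied refresh: {refresh_hz:.2f} Hz")   # ~60 Hz, as intended

# The EDID claims a 330.0 MHz maximum pixel clock, so this mode is rejected
edid_max_mhz = 330.0
print(f"within EDID limit: {pclk_mhz <= edid_max_mhz}")   # False
```

So the modeline itself is internally consistent for 60Hz; it is simply asking for nearly 100MHz more pixel clock than the EDID says the display can take.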

----------

## chix4mat

Sorry for the slow reply!

 *Monkeh wrote:*   

> Change the @ in both entries to an underscore, and run X with -logverbose 6

 

I am not positive I did this right, because the log didn't seem to change too much, but here it is: 

http://deathspawner.net/xorg_error_3.txt

I also missed your question earlier about the 4x lane... I am not sure why that is. I am using the bottom PCI-E slot on the motherboard, because I have four S-ATA drives plugged in, and thanks to the long card, it makes for a very, very tight squeeze. I will check it out again later and see if I can fix it. I find it odd, though, because I ran Windows 3D benchmarks and the scores and performance were good. I am not sure that would be the case if it were actually running at 4x. 

 *NeddySeagoon wrote:*   

> 
> 
> I suspect that the 330.0 MHz reported by the EDID is incorrect and you have one of the more throughly broken EDIDs that Monkeh warned about.
> 
> In short, I suspect that xorg is not using your modeline because its out of range.

 

Does that mean I am out of luck? I ran 330.0 through that calculator and it won't allow me, saying that it's above the maximum. I changed the entry in the xorg.conf to 330.00 and it didn't help things either. 

Again, I am not too terribly concerned if I am unable to have this monitor function properly in Linux since it's dedicated to a Windows machine, where it does work, but it would be nice to know if it's a fault of the monitor or not. Is there a slight chance that it could be the video card causing the mis-read information? I could install another card in and test it again to see if that's somehow the cause.

Thanks again, I truly appreciate the help.

Edit: I am going to call this one quits unless anyone has another idea I can try. I decided to hook up another computer to the monitor and boot with the SabayonLinux live CD (which tends to set up everything perfectly) and sure enough, it defaulted to 1280x800. I had a Dell 30-inch display hooked up to this same PC before, and SabayonLinux detected the appropriate 2560x1600 resolution, so I am going to point the blame towards the Gateway.

----------

## Monkeh

 *chix4mat wrote:*   

> Sorry for the slow reply!
> 
>  *Monkeh wrote:*   Change the @ in both entries to an underscore, and run X with -logverbose 6 
> 
> I am not positive I did this right, because the log didn't seem to change too much, but here it is: 
> ...

 

No, that didn't do it. Run startx -- -verbose 6 -logverbose 6

 *Quote:*   

> Again, I am not too terribly concerned if I am unable to have this monitor function properly in Linux since it's dedicated to a Windows machine, where it does work, but it would be nice to know if it's a fault of the monitor or not.

 

It's very likely an incorrect EDID. Manufacturers don't usually care about getting them right, unfortunately for us.

 *Quote:*   

> Is there a slight chance that it could be the video card causing the mis-read information?

 

No.

E: Do everyone a favour and complain very loudly at Gateway about it.

----------

## NeddySeagoon

chix4mat,

It's a fault of the EDID data provided to Xorg by the monitor. The data in the read-only memory in the monitor is incorrect.

Windows likely has a big list of devices that lie in their EDID data, and simply ignores the EDID for those.

Linux expects displays to provide the correct data (as they are supposed to) and provides a manual override to handle cases where the EDID data is not correct.
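For the nVidia driver specifically, that manual override is usually spelled with the ModeValidation option. The token names below are taken from the driver README, but treat this as a sketch to check against the appendix for your driver version, not a drop-in fix:

```
Section "Device"
    Identifier "Card0"
    Driver     "nvidia"
    # Relax exactly the checks the broken EDID is tripping:
    # NoMaxPClkCheck    - don't reject modes above the EDID's claimed pixel clock
    # AllowNonEdidModes - allow ModeLines that aren't in the EDID's mode list
    Option     "ModeValidation" "NoMaxPClkCheck, AllowNonEdidModes"
EndSection
```

The upside over "UseEDID" "FALSE" is that the rest of the EDID (which may be fine) stays in use.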

----------

## chix4mat

 *Monkeh wrote:*   

> E: Do everyone a favour and complain very loudly at Gateway about it.

 

I plan to now that I know there is a problem. It's only a coincidence that I found this out, as well, because I only needed the monitor to help set up this new PC. I would not have tested Linux on it otherwise. This particular model I have is a review sample, so I will make the problems public and also get Gateway's thoughts on the matter. As for the latest Xorg error log... 

http://deathspawner.net/xorg_error_4.txt

 *NeddySeagoon wrote:*   

> Its a fault of the EDID data provided to Xorg by the monitor. The data in the read on;y memory in the monitor is incorrect.
> 
> Windows will have a big list of devices that lie in their EDID data and simply ignore it.
> 
> Linux expects displays to provide the correct data (as they are supposed to) and provides a manual override to handle cases where the EDID data is not correct.

 

How is it that it works fine in Windows again? That's the one thing I'm having a problem understanding. Is it the NVIDIA (or ATI) driver that's setting things straight? Regardless, is there anything else I can do, or am I out of luck? I tried the Modeline and it didn't do much of anything, so maybe I am.

----------

## Monkeh

 *Quote:*   

> (II) NVIDIA(0):   Validating Mode "2560x1600":
> 
> (II) NVIDIA(0):     2560 x 1600 @ 60 Hz
> 
> (II) NVIDIA(0):     For use as DFP backend.
> ...

 

 *Quote:*   

> (II) NVIDIA(0):   Validating Mode "2560x1600":
> 
> (II) NVIDIA(0):     2560 x 1600 @ 60 Hz
> 
> (II) NVIDIA(0):     Mode Source: EDID
> ...

 

That is a nicely broken EDID you've got there.

 *Quote:*   

> How is it that it works fine in Windows again?

 

It's likely they ignore the native resolution of the display. I know the Windows nVidia driver is pretty crap in that regard.

E: If you can dump the EDID for me (use nvidia-settings) I think I can fix it.

----------

## chix4mat

Why is it that there are two different entries for the same resolution... or is that the problem? Is there also an easy way to dump the EDID? I Googled and couldn't find any solid information. The nvidia-settings man page doesn't say anything about EDID, either. 

Thanks again, and apologies for the noob-ness.

----------

## Monkeh

 *chix4mat wrote:*   

> Why is it that there are two different entries for the same resolution... or is that the problem?

 

That's just how it writes it out. The EDID only contains the one entry. The problem is that the native resolution is set to 1280x800, and the driver is rejecting (correctly) any higher resolution.

 *Quote:*   

> Is there also an easy way to dump the EDID?

 

Actually, it's right there in the log; I hadn't noticed. Unfortunately, none of the software I have can handle it (they only go up to EDID 1.3, and you have a 2.0 EDID, which is twice as large).

----------

## NeddySeagoon

chix4mat,

I have nvidia-drivers-1.0.9639. If you look in /usr/share/doc/nvidia-drivers-1.0.9639-r1/README.bz2 (use your own version number), you will find in Appendix D (X Config Options) something like: 

```
Option "UseEDID" "boolean"

    By default, the NVIDIA X driver makes use of a display device's EDID, when
    available, during construction of its mode pool. The EDID is used as a
    source for possible modes, for valid frequency ranges, and for collecting
    data on the physical dimensions of the display device for computing the
    DPI (see Appendix Y). However, if you wish to disable the driver's use of
    the EDID, you can set this option to False:

        Option "UseEDID" "FALSE"
```

However, you can also turn on and off the use of sections of the EDID data.

nVidia have been changing this recently, so check the docs for your driver version.
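Those per-section toggles look something like the following in the same appendix (option names from the README; verify against your driver version). This is a sketch of keeping the EDID in use while ignoring just the parts known to be wrong:

```
Section "Device"
    Identifier "Card0"
    Driver     "nvidia"
    # Ignore the frequency ranges the EDID reports; use the
    # HorizSync/VertRefresh from the Monitor section instead:
    Option     "UseEdidFreqs" "FALSE"
    # The log shows DPI computed as (50, 50) from EDID physical-size
    # data; override that too if it is also wrong:
    Option     "UseEdidDpi"   "FALSE"
    Option     "DPI"          "100 x 100"
EndSection
```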

----------

## mike42

Did you get this working?

I was planning on getting a new PC with the same monitor and graphics card, but not if it will only run at 1280x800...

----------

## mike42

There's a thread on the Hardforum with a similar problem. Different OS, different monitor - but it runs at 1280x800 when it's supposed to run at 2560x1600.

The supplied cable was a single-link DVI (18-pin); when it was replaced with a real 24-pin dual-link DVI cable, 2560x1600 worked!
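The cable theory fits the numbers, too: single-link DVI tops out at a 165 MHz pixel clock, while dual-link doubles that to 330 MHz (exactly the limit the XHD3000's EDID reports). A rough check, assuming approximate CVT blanking totals for the two modes in question:

```python
# Pixel-clock limits for DVI links, in MHz
SINGLE_LINK_MAX = 165.0
DUAL_LINK_MAX = 2 * SINGLE_LINK_MAX  # 330.0 -- matches the EDID's report

def pixel_clock_mhz(htotal, vtotal, refresh_hz):
    """Pixel clock a mode needs: total pixels per frame times refresh rate."""
    return htotal * vtotal * refresh_hz / 1e6

# Approximate CVT reduced-blanking totals for 2560x1600@60 (assumed timings)
big = pixel_clock_mhz(2720, 1646, 60)    # ~268.6 MHz
# Approximate CVT totals for the 1280x800@60 fallback mode (assumed timings)
small = pixel_clock_mhz(1680, 831, 60)   # ~83.8 MHz

print(f"2560x1600@60 needs ~{big:.1f} MHz; single link ok? {big <= SINGLE_LINK_MAX}")
print(f"1280x800@60  needs ~{small:.1f} MHz; single link ok? {small <= SINGLE_LINK_MAX}")
```

So 2560x1600@60 only fits on a dual link, while 1280x800@60 fits comfortably on a single link, which is exactly the fallback behaviour described in this thread.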

HTH

Mike


----------

