# NVIDIA Optimus: can't get working

## r.osmanov

Hi,

I'm trying to configure X server to run with a couple of video cards:

integrated Intel + discrete NVIDIA (Optimus). Some time ago the proprietary driver had no support for Optimus, and at the time we were forced to use Bumblebee or similar utilities. It looks like some 3xx driver version introduced Optimus support, so we don't need Bumblebee anymore, right?

Now I'm trying to configure the Optimus thing without Bumblebee, but all my attempts result in a black screen, with or without a cursor.

I have read the threads regarding NVIDIA Optimus on this forum and tried most of the mentioned configurations. None worked for me. Please help me figure out what's wrong with my configuration.

```
$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
01:00.0 VGA compatible controller: NVIDIA Corporation GF108M [GeForce GT 630M] (rev a1)
```

/etc/X11/xorg.conf

```
Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Screen 1 "intel"
    Inactive "intel"
EndSection

Section "Module"
    Load "glx"
EndSection

Section "Monitor"
    Identifier "Mon0"
    VendorName "unknown"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"
    Option "ModeDebug" "True"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Monitor "Mon0"
    # Uncomment this line if your computer has no display devices connected to
    # the NVIDIA GPU.  Leave it commented if you have display devices
    # connected to the NVIDIA GPU that you would like to use.
    #Option "UseDisplayDevice" "none"
    #Option "AllowEmptyInitialConfiguration"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
    BusID "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
    Monitor "Mon0"
EndSection
```

xrandr

```
$ xrandr
Screen 0: minimum 8 x 8, current 1600 x 900, maximum 32767 x 32767
LVDS1 connected primary 1600x900+0+0 (normal left inverted right x axis y axis) 382mm x 215mm
   1600x900       60.0*+   40.0
   1024x768       60.0
   800x600        60.3     56.2
   640x480        59.9
VGA1 disconnected (normal left inverted right x axis y axis)
HDMI1 disconnected (normal left inverted right x axis y axis)
DP1 disconnected (normal left inverted right x axis y axis)
VIRTUAL1 disconnected (normal left inverted right x axis y axis)

$ xrandr --listproviders
Providers: number : 1
Provider 0: id: 0x48 cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 4 outputs: 5 associated providers: 0 name:Intel
```

(the NVIDIA-0 provider is missing)

~/.xinitrc

```
xrandr --setprovideroutputsource Intel NVIDIA-0
xrandr --auto
```

Xorg.0.log

 *Quote:*   

> 
> 
> [  4441.451] 
> 
> This is a pre-release version of the X server from The X.Org Foundation.
> ...

 

Of course, `glxgears` fails:

```
$ glxgears
Xlib:  extension "GLX" missing on display ":0".
Error: couldn't get an RGB, Double-buffered visual
```

Besides, I'm not sure whether NVIDIA truly supports Optimus on Linux. I can't find a clear statement that NVIDIA supports full-fledged Optimus on Linux, i.e. that the proprietary driver needs no extra configuration or extra packages like Bumblebee.

Please help!

----------

## Jaglover

Did you see /usr/share/doc/nvidia-drivers-337.25/README.bz2, chapter 18?

----------

## r.osmanov

 *Jaglover wrote:*   

> Did you see /usr/share/doc/nvidia-drivers-337.25/README.bz2, chapter 18?

 

Yes, I read it. As I understand it, the chapter informs the user that some lucky laptop owners can take advantage of the Optimus technology.

In particular, people whose laptops have a hardware multiplexer (which can connect the NVIDIA GPU to the laptop's display panel) are lucky, because they have the option to switch between the video cards manually. Well, I'm not so lucky: I don't have such a "mux".

Then I see the following:

 *Quote:*   

> On muxless Optimus laptops, or on laptops where a mux is present, but not set
> 
> to drive the internal display from the NVIDIA GPU, the internal display is
> 
> driven by the integrated GPU. On these systems, it's important that the X
> ...

 

I don't understand what the "internal display" is. Maybe it's some virtual display which can be used by CUDA apps, something that is not visible to me.

So I skip this sentence. Now I'm trying to configure each GPU to use its own driver: intel-intel, nvidia-nvidia:

*/etc/portage/make.conf*

```
VIDEO_CARDS="intel nvidia"
```

and rebuild related packages: 

```
nvidia-drivers, @x11-module-rebuild, $(eix -# -I emul | xargs)
```

(I may be wrong; I don't know exactly what to do!)
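Spelled out as actual commands, that rebuild would look roughly like this (a sketch; the package set and the eix filter are taken from the line above, and --oneshot keeps the rebuilt packages out of the world file):

```shell
# After changing VIDEO_CARDS in make.conf, rebuild the NVIDIA driver,
# all installed X11 driver modules, and the installed emul packages.
emerge --ask --oneshot nvidia-drivers @x11-module-rebuild $(eix -# -I emul | xargs)
```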

Next lines of the docs:

 *Quote:*   

> Often, this can be determined automatically by the X server, and no explicit
> 
> configuration is required, especially on newer X server versions. If your X
> 
> server autoselects the NVIDIA X driver after installation, you may need to
> ...

 

Well, the zero-configuration approach didn't work for me, so I proceeded with a custom X server configuration, that is, the /etc/X11/xorg.conf posted above.

The X server doesn't select the NVIDIA X driver (and that is the actual issue); it selects "intel". In the xorg.conf I try to select the "nvidia" driver for the discrete card explicitly.

The next lines are clearer. They refer to Chapter 33, which describes how to modify /etc/X11/xorg.conf. So I followed these instructions and modified the xorg.conf and .xinitrc files. However, the commands mentioned in this chapter didn't work for me:

 *Quote:*   

> 
> 
> ```
> $ xrandr --setprovideroutputsource modesetting NVIDIA-0
> 
> ...

 

because I have only one xrandr provider, called "Intel":

```
$ xrandr --listproviders
Providers: number : 1
Provider 0: id: 0x48 cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 4 outputs: 5 associated providers: 0 name:Intel
```

Now the next sentence comes in very "handy":

 *Quote:*   

> If either provider is missing
> 
> or doesn't have the expected capability, check your system configuration.
> 
> 

 

Thus, I'm here on the Gentoo forum asking for help.

Thanks.

----------

## Dr.Willy

 *r.osmanov wrote:*   

> */etc/portage/make.conf*
> 
> ```
> VIDEO_CARDS="intel nvidia"
> ```
> ...

 

Mhh on my Optimus setup, I have 

```
VIDEO_CARDS="intel modesetting nvidia"
```

----------

## dachschaden

@Dr. Willy: For what earthly reason did you enable the modesetting driver? If my memory serves right, the intel driver provides KMS (kernel mode setting), so you don't have to rely on user mode setting.

@r.osmanov: Without bumblebee? Well, I don't want to act like an ultra-1337 hacker, but I tried for several months to teach my Gentoo to use the nVidia card as an off-loader. In the end I ended up with bumblebee.  :Smile: 

It was a 635M, nearly the same model as yours. The problem is, as you already stated, that the muxer is missing, so the intel chip drives the display. The "battleship" (as I like to call the dedicated card) is not directly connected to the monitor, so even if you provided the EDID (x11-misc/read-edid is your friend here; it provides a tool named get-edid which talks to your display via DDC to do what Xorg normally does - fetch the EDID), the card still would not know how to "speak" to your display.
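For reference, the usual read-edid pipeline looks like this (a sketch; it needs root and DDC-capable hardware, and the sysfs connector name below is just an example that varies per machine):

```shell
# Fetch the raw EDID block over DDC and decode it into human-readable form.
get-edid | parse-edid

# On a KMS kernel the EDID is also exposed in sysfs and can be decoded there:
parse-edid < /sys/class/drm/card0-LVDS-1/edid
```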

Bumblebee works around that problem by creating a secondary X server which is running inside your first X server (which is rendered by your Intel chip). The second X server is then able to use the nVidia chip, and since your Intel chip is running, the nVidia chip has a connection to your display and renders everything just fine.

In my opinion it's not worth the trouble, though. I once tried to run a PS1 emulator on Gentoo using bumblebee. Not only did it occupy an entire CPU core (while on Windows usage was just 50% - something is kaput with rendering in general on Linux; I never saw it run as smoothly as on Windows), but the server also seemed to crash each time I wanted to open the menu to load/save a state or reconfigure settings - apparently some problem with non-3D rendering. Apart from that, I was not able to tell the card to use vsync, so it would render 300 frames per second when 60 would have been sufficient. I spent three weeks on it before I gave up.  :Wink: 

(And don't get me started on the broken ALSA plugin: I had to rely on the OSS plugin, which had a bug accessing the wrong audio file, so I had to change the plugin's source and recompile it. And of course the makefile wasn't suited for 64-bit operating systems ...)

I know it's hard being told that you should just "give up for now" - hell, I am persistent as hell and wouldn't listen to anyone who told me to give up. But I'd have saved some time if I had.

The Linux, Nouveau and Wayland guys are currently working on fixing the broken graphics stack on Linux, but all that takes time. Unless you are a kernel hacker, I'd suggest you just relax and let them do their work until it gets better.

----------

## r.osmanov

dachschaden, thanks for sharing your experience. I've almost given up myself. I will likely be forced to use Bumblebee again.

I made some progress with the following /etc/X11/xorg.conf.

```
Section "ServerFlags"
    Option "AutoAddDevices" "true"
    Option "AllowEmptyInput" "no"
EndSection

Section "Monitor"
    Identifier  "Monitor0"
    VendorName  "Unknown"
    ModelName   "Unknown"
    HorizSync   28.0 - 73.0
    VertRefresh 43.0 - 72.0
    Option      "DPMS"
    #Modeline   "1600x1200"  161.00  1600 1712 1880 2160  1200 1203 1207 1245 -hsync +vsync
EndSection

Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Inactive "intel"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Monitor "Monitor0"
    Option "AllowEmptyInitialConfiguration"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
    BusID "PCI:0:2:0"
    VendorName "onboard"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
    Monitor "Monitor0"
EndSection
```

It still shows a black screen. However, xrandr now recognizes the NVIDIA-0 provider on display :0:

```
$ xrandr --listproviders -d :0
Providers: number : 2
Provider 0: id: 0x2ac cap: 0x1, Source Output crtcs: 2 outputs: 1 associated providers: 0 name:NVIDIA-0
Provider 1: id: 0x46 cap: 0x2, Sink Output crtcs: 3 outputs: 4 associated providers: 0 name:modesetting
```

(launched from a separate VT)

Now it looks like I need to set up xrandr with the couple of commands mentioned in Chapter 33 (/usr/share/doc/nvidia-drivers-337.25/README.bz2). At this point I try to run the commands from the same VT:

```
$ xrandr --setprovideroutputsource modesetting NVIDIA-0 -d :0
XIO:  fatal IO error 11 (Resource temporarily unavailable) on X server ":0"
      after 17 requests (17 known processed) with 0 events remaining.
```

Segmentation fault. Apparently some kind of programming error; I don't feel like hacking the code right now. My guess is that something goes wrong because of the different session.

Well, README.bz2 suggests putting these commands into ~/.xinitrc:

```
exec ck-launch-session dbus-launch mate-session
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
```

As one might guess, the commands below the exec ... mate-session line are never executed, since exec replaces the shell and never returns. That's what I've got so far. Let me ask a stupid question: how do I fix it?  :Smile: 

----------

## dachschaden

I'd love to fiddle with my nVidia card and maybe get it running without bumblebee, I really would.

The problem is that my current card is not even supported by the nouveau driver - 3.15.0 should have fixed that, since I am using a Maxwell chip, but:

1. I'd need a firmware blob for this card first, and they do not ship it yet.

2. DRI does not seem to work, even the slightest access to /dev/dri/ will cause my kernel to hang.

And I don't really want to switch to the nvidia driver because of the mess with the OpenGL libs. There's eselect, but that didn't work last time, at least not for me.  :Sad: 

So it's a bit problematic for me to try out your config; I no longer have the same hardware as back then.

BUT: what puzzles me is that line about modesetting. I do not remember setting anything like that - neither in the naive attempt to let the X server just handle the device, nor when I used bumblebee.

 *r.osmanov wrote:*   

> As it might have been guessed, the commands below exec ... mate-session are not executed. That's what I got so far. Let me ask a stupid question: how do I fix it? 

 

Adding a ' &' at the end of the first line? Unfortunately I cannot test it here, but it would be my very first approach.

----------

## Dr.Willy

 *dachschaden wrote:*   

> @Dr. Willy: For what earthly reason did you enable the modesetting driver? If my memory serves right, the intel driver provides KMS (kernel mode setting), so you don't have to rely on user mode setting.

 

For the reason that it doesn't work without it.

At least for me, X won't start without the modesetting driver installed, due to the following Device section:

```
Section "Device"
    Identifier "intel"
    Driver "modesetting"
    BusID "PCI:0:2:0"
    VendorName "onboard"
EndSection
```

Setting the driver to "intel" doesn't work either.

Also the README says:

 *Quote:*   

> To use the NVIDIA driver as an RandR 1.4 output source provider, the X server needs to be configured to use the NVIDIA driver for its primary screen and to use the "modesetting" driver for the other graphics device.

 

 *r.osmanov wrote:*   

> Well, README.bz2 suggests us to put these commands into ~/.xinitrc
> 
> ```
> 
> exec ck-launch-session dbus-launch mate-session 
> ...

 

… put the xrandr commands before the exec ... mate-session line?
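That is, a reordered ~/.xinitrc would look something like this (a sketch based on the file quoted above):

```shell
# ~/.xinitrc
# Run the xrandr setup first; exec must stay last, because it replaces
# the shell process and nothing after it would ever run.
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
exec ck-launch-session dbus-launch mate-session
```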

----------

## dachschaden

OK, I never had to assign that driver ... I might do another test on my box with the modesetting driver. Thanks for the suggestion, though.

If my memory serves right, xrandr only works against a running X server. If you run it from a VT, for example, it will say that it can't open the display. That's why r.osmanov uses the -d :0 argument:

```
xrandr --listproviders -d :0
```

The X server, which cannot deliver the rendered scene to the display, is still running, with the nVidia driver as its "provider".

To get that information, he has to switch to a VT and then specify the display whose providers he wants to list.
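For completeness, the same thing can be done by setting the DISPLAY variable instead of passing -d; note that the client may also need the server's auth cookie (the ~/.Xauthority path below is an assumption that holds for typical startx setups):

```shell
# Equivalent to xrandr's -d option: point the client at display :0.
DISPLAY=:0 xrandr --listproviders

# If the connection is refused, also pass the server's auth cookie:
DISPLAY=:0 XAUTHORITY=$HOME/.Xauthority xrandr --listproviders
```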

----------

## Yamakuzure

Hi, I have a laptop with a muxless hybrid, Intel HD versus Nvidia K2100M:

```
 ~ $ sudo lspci | grep -i vga
00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller (rev 06)
01:00.0 VGA compatible controller: NVIDIA Corporation GK106GLM [Quadro K2100M] (rev ff)
```

The laptop is configured to use Intel only:

```
 ~ $ sudo eselect opengl list
Available OpenGL implementations:
  [1]   nvidia
  [2]   xorg-x11 *
```

Then I installed bumblebee with the primus offloader:

```
 ~ $ eix bumblebee
[I] x11-misc/bumblebee
     Available versions:  3.2.1 (**)9999[1] {+bbswitch VIDEO_CARDS="nouveau nvidia"}
     Installed versions:  9999[1](13:32:49 30.05.2014)(bbswitch VIDEO_CARDS="nvidia -nouveau")
     Homepage:            http://bumblebee-project.org https://github.com/Bumblebee-Project/Bumblebee
     Description:         Service providing elegant and stable means of managing Optimus graphics chipsets
[1] "bumblebee" /var/lib/layman/bumblebee

 ~ $ eix -I primus
[I] x11-misc/primus [1]
     Available versions:  (**)9999 {ABI_MIPS="n32 n64 o32" ABI_X86="32 64 x32"}
     Installed versions:  9999(13:34:48 30.05.2014)(ABI_MIPS="-n32 -n64 -o32" ABI_X86="64 -32 -x32")
     Homepage:            https://github.com/amonakov/primus
     Description:         Faster OpenGL offloading for Bumblebee
[1] "bumblebee" /var/lib/layman/bumblebee
```

And here is the result:

```
# With intel card:
 ~ $ glxspheres64
Polygons in scene: 62464
Visual ID of window: 0x20
Context is Direct
OpenGL Renderer: Mesa DRI Intel(R) Haswell Mobile
225.416832 frames/sec - 251.565184 Mpixels/sec
191.208057 frames/sec - 213.388192 Mpixels/sec
190.810194 frames/sec - 212.944177 Mpixels/sec

# With nvidia using optirun:
 ~ $ optirun glxspheres64
Polygons in scene: 62464
Visual ID of window: 0x20
Context is Direct
OpenGL Renderer: Quadro K2100M/PCIe/SSE2
366.242147 frames/sec - 408.726236 Mpixels/sec
372.048190 frames/sec - 415.205780 Mpixels/sec
367.461145 frames/sec - 410.086638 Mpixels/sec

# With nvidia using primusrun:
 ~ $ primusrun glxspheres64
Polygons in scene: 62464
Visual ID of window: 0x20
Context is Direct
OpenGL Renderer: Quadro K2100M/PCIe/SSE2
463.456412 frames/sec - 517.217356 Mpixels/sec
481.095675 frames/sec - 536.902773 Mpixels/sec
481.906130 frames/sec - 537.807242 Mpixels/sec
```

I never got the hybrid system to work in any other way. xrandr always shows only one provider, no matter what.

It seems there is no way the nvidia driver can do this on its own. And if you look into the NVIDIA system settings under Windows on a hybrid laptop (I have a Windows 7 dual boot via EFI), it does the same: it uses the integrated chipset and activates the dedicated graphics card for chosen applications only.

The ideal solution would be nvidia drivers on Linux that can be configured the same way: automagically use the integrated chipset and only activate the nvidia card where configured to do so. (Automatically switching for all OpenGL is useless if the window manager itself uses OpenGL; that would result in a permanently powered-on card, although the intel chip is perfectly capable of handling that.)

So until then, there at least *is* bumblebee/primus.  :Wink: 

----------

## Princess Nell

@r.osmanov:

How do you start X? If you use a display manager, it needs to run those xrandr commands you currently have in .xinitrc. E.g. I use lightdm with mate, and I have set it up with a helper script through the display-setup-script variable. Other DMs are listed at http://wiki.gentoo.org/wiki/NVIDIA_Driver_with_Optimus_Laptops. If you use startx, however, the .xinitrc method should work.

My xorg.conf is not a million miles away from yours. There are no Monitor and ServerFlags sections, and accordingly the nvidia screen has no Monitor line, but I have the "UseDisplayDevice" "none" option. The intel screen, consequently, also has no Monitor line. Xorg is version 1.15.0.

----------

## r.osmanov

 *Princess Nell wrote:*   

> @r.osmanov:
> 
> How do you start X? If you use a display manager, it needs to run those xrandr commands you currently have in .xinitrc. E.g. I use lightdm with mate, and I have set it up with a helper script through the display-setup-script variable. Other DMs are listed at http://wiki.gentoo.org/wiki/NVIDIA_Driver_with_Optimus_Laptops. If you use startx, however, the .xinitrc method should work.
> 
> 

 

I've managed to configure rendering via NVIDIA card (/usr/share/doc/nvidia-drivers-334.21-r3/README.bz2, Chapter 33) with LightDM + Openbox:

/etc/lightdm/lightdm.conf

```
[LightDM]
session-wrapper=/etc/lightdm/Xsession

[SeatDefaults]
session-wrapper=/etc/lightdm/Xsession

[Seat:0]
xserver-config        = /etc/X11/xorg.conf.optimus
xserver-command       = X -seat 0
xserver-share         = True
display-setup-script  = /opt/lightdm/bin/xrandr-optimus
```

/opt/lightdm/bin/xrandr-optimus

```
#!/bin/sh
# Optimus (for /etc/lightdm/lightdm.conf and /etc/X11/xorg.conf.optimus)
# Note: the file redirection must come before 2>&1, otherwise stderr
# still goes to the terminal instead of the log.
xrandr --setprovideroutputsource modesetting NVIDIA-0 >> /tmp/optimus.log 2>&1
xrandr --auto >> /tmp/optimus.log 2>&1
```

/etc/X11/xorg.conf.optimus

```
Section "Module"
    # Disable "dri"
    # Disable "fb"
EndSection

Section "ServerFlags"
    Option "AutoAddDevices" "true"
    Option "AllowEmptyInput" "no"
EndSection

Section "Monitor"
    Identifier  "Monitor0"
    VendorName  "Unknown"
    ModelName   "Unknown"
    HorizSync   28.0 - 73.0
    VertRefresh 43.0 - 72.0
    Option      "DPMS"
    Modeline    "1600x900"  107.80  1600 1648 1680 1920  900 903 908 936 -hsync -vsync
EndSection

Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Inactive "intel"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    Monitor "Monitor0"
    Option "AllowEmptyInitialConfiguration"
    DefaultDepth 24
    Option "metamodes" "1600x900 +0+0; nvidia-auto-select +0+0"
    SubSection "Display"
        Depth 24
        Modes "1600x900" "1280x1024"
    EndSubSection
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
    #Driver "intel"
    BusID "PCI:0:2:0"
    VendorName "onboard"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
    Monitor "Monitor0"
    DefaultDepth 24
    Option "metamodes" "1600x900 +0+0"
    SubSection "Display"
        Depth 24
        Modes "1600x900"
    EndSubSection
EndSection
```

There is no ~/.xinitrc, and ~/.xprofile has nothing to do with xrandr/optimus/WM.

I haven't tried to test it with bumblebee running yet.

So, it works with Openbox, but shows a black screen in an Xfce session, although the X server commands are alike:

Openbox

```
root      5815  9.5  0.5 160644 46952 tty8     Ss+  17:05   0:00 /usr/bin/X -seat 0 :1 -config /etc/X11/xorg.conf.optimus -seat seat0 -auth /var/run/lightdm/root/:1 -nolisten tcp vt8 -novtswitch
ruslan    5841  3.4  0.4 326192 33012 ?        Ss   17:05   0:00 /usr/bin/openbox --startup /usr/libexec/openbox-autostart OPENBOX
root      6063  0.0  0.0 112580   960 tty1     S+   17:05   0:00 grep --colour=auto X
```

Xfce

```
root      5392  6.1  0.6 164524 51188 tty8     Ss+  17:04   0:00 /usr/bin/X -seat 0 :1 -config /etc/X11/xorg.conf.optimus -seat seat0 -auth /var/run/lightdm/root/:1 -nolisten tcp vt8 -novtswitch
ruslan    5416  0.0  0.0 115344  1588 ?        Ss   17:04   0:00 /bin/sh /etc/xdg/xfce4/xinitrc -- /etc/X11/xinit/xserverrc
root      5689  0.0  0.0 112580   956 tty1     S+   17:04   0:00 grep --colour=auto X
```

So I've almost got it  :Rolling Eyes:  . It would suit me perfectly to have different Openbox/Xfce sessions for Intel and for NVIDIA.

Btw, an analogous configuration worked when I launched X via startx.

----------

## r.osmanov

Finally, I've got it: two VTs, vt7 on the Intel card, vt8 on the NVIDIA card as off-loader.

However, it all works terribly slowly, making only about 50 FPS (glxgears, Enemy Territory), so I'll stick with the good old Bumblebee and give Primus a try (I can't see any performance gains so far).

Hopefully the following will save someone a little time.

/etc/lightdm/lightdm.conf

```
[LightDM]
greeter-user=lightdm
user-authority-in-system-dir=false
session-wrapper=/etc/lightdm/Xsession

[SeatDefaults]
session-wrapper=/etc/lightdm/Xsession

[Seat:0]
xserver-layout          = INTEL_MODESETTING
xserver-command         = X -seat seat0
xserver-share           = false

[Seat:1]
xserver-layout          = NVIDIA
xserver-command         = X -seat seat1
xserver-share           = false
display-setup-script    = /opt/lightdm/bin/xrandr-optimus
autologin-user          = ruslan
```

/etc/X11/xorg.conf.d/00-layouts.conf

```
Section "ServerLayout"
  Identifier "INTEL"
  Screen 0 "intel"
EndSection

Section "ServerLayout"
  Identifier "INTEL_MODESETTING"
  Screen 0 "intel_modesetting"
  #Inactive "nvidia"
  #Option "Clone" "off"
EndSection

Section "ServerLayout"
  Identifier "NVIDIA"
  Screen 0 "nvidia"
  Inactive "intel_modesetting"
  #Inactive "intel"
  #Option "Clone" "off"
EndSection
```

/etc/X11/xorg.conf.d/01-server-flags.conf

```
Section "ServerFlags"
  # only pertains to mousedrv and vmmouse and has no effect on evdev or
  # others. It is probably not needed, but will not hurt us here
  Option "AutoAddDevices" "false"
  # lets us control which seat gets which input devices
  Option "AutoEnableDevices" "false"
  # prevents killing X with a Ctrl+Alt+Backspace event
  #Option "DontZap" "false"
  Option "AllowEmptyInput" "no"
  Option "DefaultServerLayout" "INTEL"
  Option "Xinerama" "off"
  # stop the X server adding non-primary devices as GPU screens
  Option "AutoAddGPU" "off"
  Option "ProbeAllGpus" "false"
  Option "DRI2" "on"
EndSection
```

/etc/X11/xorg.conf.d/02-monitor.conf

```
Section "Monitor"
  Identifier "Monitor0"
  VendorName "Unknown"
  ModelName "Unknown"
  HorizSync 28.0 - 73.0
  VertRefresh 43.0 - 72.0
  Option "DPMS"
  Modeline "1600x900" 107.80 1600 1648 1680 1920 900 903 908 936 -hsync -vsync
EndSection
```

/etc/X11/xorg.conf.d/03-devices.conf

```
Section "Device"
  Identifier "nvidia"
  Driver "nvidia"
  BusID "PCI:1:0:0"
  Option "RegistryDwords" "EnableBrightnessControl=1"
EndSection

Section "Device"
  Identifier "intel"
  Driver "intel"
  BusID "PCI:0:2:0"
  VendorName "onboard"
  Option "AccelMethod" "sna"
  Option "Backlight" "intel_backlight" # use your backlight that works here
EndSection

Section "Device"
  Identifier "intel_modesetting"
  Driver "modesetting"
  #Driver "intel"
  BusID "PCI:0:2:0"
  VendorName "onboard"
  Option "AccelMethod" "sna"
  Option "Backlight" "intel_backlight" # use your backlight that works here
EndSection
```

/etc/X11/xorg.conf.d/04-screens.conf

```
Section "Screen"
  Identifier "nvidia"
  Device "nvidia"
  Monitor "Monitor0"
  Option "AllowEmptyInitialConfiguration"
  DefaultDepth 24
  Option "metamodes" "1600x900 +0+0; nvidia-auto-select +0+0"
  SubSection "Display"
    Depth 24
    Modes "1600x900"
  EndSubSection
EndSection

Section "Screen"
  Identifier "intel"
  Device "intel"
  Monitor "Monitor0"
  DefaultDepth 24
  Option "metamodes" "1600x900 +0+0"
  SubSection "Display"
    Depth 24
    Modes "1600x900"
  EndSubSection
EndSection

Section "Screen"
  Identifier "intel_modesetting"
  Device "intel_modesetting"
  Monitor "Monitor0"
  DefaultDepth 24
  Option "metamodes" "1600x900 +0+0"
  SubSection "Display"
    Depth 24
    Modes "1600x900"
  EndSubSection
EndSection
```

(xorg.conf is modular)

Thank you all for replies.

----------

## SDNick484

 *r.osmanov wrote:*   

> Finally, I've got it. Two VTs: vt7 on Intel, vt8 on NVIDIA card as off-loader.
> 
> However, it all works terribly slow making only about 50 FPS (glxgears,
> 
> Enemy Territory), so I'll stick with the good old Bumblebee and give
> ...

 

I'm curious whether you stuck with these settings or have made any updates since. I'm trying to get primusrun to work, but am running into: "primus: fatal: failed to acquire direct rendering context for display thread"

----------

## r.osmanov

 *SDNick484 wrote:*   

>  *r.osmanov wrote:*   Finally, I've got it. Two VTs: vt7 on Intel, vt8 on NVIDIA card as off-loader.
> 
> However, it all works terribly slow making only about 50 FPS (glxgears,
> 
> Enemy Territory), so I'll stick with the good old Bumblebee and give
> ...

 

I gave up on configuring primusrun. It is not working:

```
Xlib:  extension "GLX" missing on display ":0.0".
primus: fatal: broken GLX on main X display
```

However, optirun is fine. At least I'm satisfied with it.

----------

