# [SOLVED] nvidia module does not load on SSD disk

## dziadu

Hi,

I have a perfectly working Gentoo on an HDD. I bought an SSD and moved the system to it. The migration was done via rsync, GRUB (UEFI and GPT) is installed, and the system boots nicely. But X doesn't start. I use nvidia-drivers.

lsmod shows that nvidia is loaded.

But when I try to insert nvidia-drm or nvidia-uvm, the insmod command hangs. It just doesn't insert. I can terminate it with Ctrl+C. Neither dmesg, the system messages, nor the terminal shows anything related to insmod. dmesg doesn't show anything wrong about nvidia; there is only the usual info about tainted drivers.

When I run X manually, it freezes on a blank screen. The log is not very helpful, ending with:

InitConnectionLimits: MaxClients = 2048

 *Quote:*   

> [   121.041] 
> 
> X.Org X Server 1.20.8
> 
> X Protocol Version 11, Revision 0
> ...

 

When I change to (boot from) the HDD, which is just a mirror of the SSD, it just works. Any ideas what is wrong?

Last edited by dziadu on Mon Nov 23, 2020 9:36 pm; edited 1 time in total

----------

## dmpogo

Can you show what lsmod gives for all nvidia modules when it is working from the HDD?

I have a feeling that nvidia_drm comes in a set with nvidia_modeset, rather than plain nvidia.

What modules do you have in /lib/modules/kernel-version/video?
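For what it's worth, that dependency chain can be read straight off lsmod's "Used by" column. A minimal sketch over a hypothetical excerpt (real input would come from `lsmod | grep nvidia`):

```shell
# Hypothetical lsmod excerpt; in practice pipe in: lsmod | grep nvidia
lsmod_sample='nvidia_drm             49152  8
nvidia_modeset       1171456  23 nvidia_drm
nvidia              19796000  1174 nvidia_modeset'

# Print "module <- used by" so the dependency chain reads top-down.
printf '%s\n' "$lsmod_sample" |
  awk '{ printf "%s <- %s\n", $1, ($4 == "" ? "(nothing)" : $4) }'
```

If that chain holds, loading with modprobe (which resolves dependencies) rather than insmod (which does not) should pull in nvidia_modeset automatically.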

----------

## molletts

Is the SSD a SATA one or NVMe? If it's NVMe, it might be worth checking the documentation for your motherboard to see whether it shares PCIe lanes with any of the slots - specifically, the slot your GPU is in. If it does share lanes with the GPU, try moving the GPU to another x16 slot, or the SSD to a different M.2 slot (if you have more than one PCIe x16/M.2 slot, that is). There may be BIOS settings that can help with this, too, perhaps by choosing which PCIe slot donates lanes to the M.2 slot.

----------

## Anon-E-moose

```
[ 121.048] (II) systemd-logind: logind integration requires -keeptty and -keeptty was not provided, disabling logind integration 
```

I would imagine that without logind, X/<your dm> won't start.

----------

## Ionen

What rsync command did you use? It may have lost extended attributes / capabilities. I'm not sure how much that would matter to logind/xorg, but it would be a bad thing nonetheless.
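For context, a hedged note on the flags: rsync's -a is documented as shorthand for -rlptgoD, which does not include extended attributes or ACLs. A small self-contained sketch spelling that out (the fuller rsync command in the comment, and its mount points, are hypothetical):

```shell
# -a is shorthand for -rlptgoD; note the set includes no -X (xattrs),
# -A (ACLs) or -H (hard links). A fuller system copy might look like
# (hypothetical mount points):
#   rsync -aHAX --numeric-ids /mnt/hdd-root/ /mnt/ssd-root/
explain_archive_flag() {
  for flag in r l p t g o D; do
    case $flag in
      r) echo "-r  recurse into directories" ;;
      l) echo "-l  copy symlinks as symlinks" ;;
      p) echo "-p  preserve permissions" ;;
      t) echo "-t  preserve modification times" ;;
      g) echo "-g  preserve group" ;;
      o) echo "-o  preserve owner" ;;
      D) echo "-D  preserve device and special files" ;;
    esac
  done
}
explain_archive_flag
```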

----------

## dziadu

Replying to the questions.

It is a plain SATA-3 disk, not NVMe. The copy was made with rsync -avz; I checked and all attributes were preserved.

It's a Dell Precision T1700 from 2014.

lsmod for HDD

 *Quote:*   

> # lsmod | grep nvidia
> 
> nvidia_drm             49152  8
> 
> nvidia_modeset       1171456  23 nvidia_drm
> ...

 

Indeed, I use it with modesetting; I think it was required for Blender and CUDA, if I recall correctly.

About systemd: the same message appears in the HDD version, so I guess it is not related.

And in /lib/modules/kernel-version/video I have all four expected modules: nvidia, nvidia-modeset, nvidia-uvm and nvidia-drm.

----------

## dmpogo

 *dziadu wrote:*   

> Replying to the questions.
> 
> It is a plain SATA-3 disk, not NVMe. The copy was made with rsync -avz; I checked and all attributes were preserved.
> 
> It's a Dell Precision T1700 from 2014.
> ...

 

So do you have nvidia_modeset loaded before you try to manually load nvidia_drm on a failed attempt?

----------

## Anon-E-moose

Can you wgetpaste both versions of Xorg.0.log (SSD and HDD)?

If you're doing startx, then start X, just quit, and save the Xorg.0.log file somewhere it won't be overwritten; same for the other drive.

Just let us know which is which when you post them.

AFAIK the system shouldn't care whether the drive is an HDD or SSD (NVMe is a little different).

----------

## dziadu

Here is Xorg.0.log for HDD system:

https://pastebin.com/Fhf2vuG5

This one is stripped of a huge number of lines containing:

 *Quote:*   

> [ 25598.164] input-thread: InputThreadDoWork waiting for devices
> 
> [ 25085.617] AllocNewConnection: client index = 64, socket fd = 113
> 
> [ 13085.618] client(8000000): Released pid(4945).
> ...

 

Each of these lines repeats across the whole log with some varying numbers, but they are harmless to the issue.

And this one for SSD system:

https://pastebin.com/rWDkkh7A

Be aware that X on the SSD system doesn't start fully; it freezes after the ramdac lines.

Oh, and I can add that the first time I moved the system to the SSD, X didn't start after startup, but I loaded nvidia-drm manually and X started. This has never happened again after a reboot: X still doesn't start up, and insmod hangs while loading nvidia-drm. I can cancel it with Ctrl+C, so it is not a full freeze.

And one interesting thing: in recovery mode all nvidia modules load fine.

----------

## Anon-E-moose

Hmmm

hdd

```
[    16.067] (**) ModulePath set to "/usr/lib64/xorg/modules"

[    16.067] (II) The server relies on udev to provide the list of input devices.

   If no devices become available, reconfigure udev or disable AutoAddDevices.

[ 16.067] (II) Loader magic: 0x555d92b72d00
```

ssd

```
[   143.972] (**) ModulePath set to "/usr/lib64/xorg/modules"

[   143.972] (WW) Hotplugging is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled.

[   143.972] (WW) Disabling Keyboard0

[   143.972] (WW) Disabling Mouse0

[ 143.972] (II) Loader magic: 0x5570b57a1d20
```

You've got something different between the 2 disks. 

hdd

```
 [    16.069] (II) systemd-logind: logind integration requires -keeptty and -keeptty was not provided, disabling logind integration

[    16.071] (II) xfree86: Adding drm device (/dev/dri/card0)

[ 16.140] (--) PCI:*(1@0:0:0) 10de:0ffe:10de:094c rev 161, Mem @ 0xf6000000/16777216, 0xe0000000/268435456, 0xf0000000/33554432, I/O @ 0x0000e000/128, BIOS @ 0x????????/131072
```

The line about the drm device is missing on the SSD; that's not good (and it propagates through the log).

Edit to add: It doesn't seem to pick up nvidia properly on the ssd, though why I don't know. 

What does "grep -i nvidia /var/log/dmesg" show on both disks?

----------

## dziadu

There might be slight differences. First I made a couple of tries with exactly the same systems, then on the SSD I made tries with upgrades of things like xorg-server and other related parts; hence maybe slight differences. But nevertheless, the mirror system had the same problem.

About the dri: I saw that the device doesn't exist, though the nvidia module is loaded.

For SSD:

 *Quote:*   

> 776:[    1.685286] input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input9
> 
> 777:[    1.685374] input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input10
> 
> 778:[    1.685417] input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input11
> ...

 

For HDD

 *Quote:*   

> 2:[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.8.18-gentoo root=/dev/sda4 ro video=uvesafb:1920x1200-32,mtrr:3,ywrap fbcon=scrollback:128K quiet console=tty1 pcie_aspm=force "acpi_osi=!Windows 2015" nvidia-drm.modeset=1
> 
> 168:[    0.061516] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.8.18-gentoo root=/dev/sda4 ro video=uvesafb:1920x1200-32,mtrr:3,ywrap fbcon=scrollback:128K quiet console=tty1 pcie_aspm=force "acpi_osi=!Windows 2015" nvidia-drm.modeset=1
> 
> 791:[    5.125545] input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input9
> ...

 

In the meantime I removed nvidia-drm parameters from grub and moved into /etc/modprobe.d:

 *Quote:*   

> # cat /etc/modprobe.d/nvidia-drm.conf 
> 
> options nvidia-drm modeset=1

 

Besides that, there are no differences between the two, which is strange. But as I wrote, in recovery mode the module loads. I will check which modules are loaded in recovery and normal modes; maybe there are conflicts when certain modules are loaded.

----------

## Anon-E-moose

Do you use an initramfs? 

Is nvidia a module on both systems?

----------

## dziadu

No initramfs, and nvidia is a module; it can only be a module.

----------

## Anon-E-moose

What does "grep -i modeset /var/log/dmesg" show on both systems? It should be just a couple of lines each.

Something to do with nvidia: the driver obviously loads on the ssd, but I'm not sure it's loading properly.

----------

## dziadu

So, I made a new observation. As you asked me for X logs, I ran startx and the system kind of hung; none of the Alt+Fx combinations worked, but I could ssh into the PC. Then I took a break and went to finish my 1000-piece puzzle. After that, suddenly the basic X session started with xterm. nvidia-modeset was loaded. Then when I manually loaded nvidia-drm, my xdm instantly started. I stopped it, and unloading and loading worked fine. So I made another restart and tried to repeat the procedure.

I ran

```
# time modprobe nvidia-modeset
```

and after another 56 minutes, while modprobe was still running, I logged into a second tty and ran

```
# startx
```

and after 3 minutes the X session opened.

The time command showed:

 *Quote:*   

> real  59m34.030s
> 
> user  0m0.000s
> 
> sys   0m0.014s
> ...

 

which suggests that there must be some weird delay with the SSD. I am now about to make another test and start X just after loading the modeset driver, to see whether it will take another hour or only 3 minutes.

----------

## Anon-E-moose

That's peculiar behavior.

Can you wgetpaste "lspci -nnk"?

And what does grep " ata" /var/log/dmesg return?
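The leading space in " ata" is deliberate: it keeps out lines that merely contain the substring "ata". A self-contained demo with made-up dmesg-style lines:

```shell
# Three sample lines contain "ata", but only one is a libata message.
sample='[    0.366810] ata1: SATA max UDMA/133
[    0.006605] ACPI: SSDT SataRe SataTabl
[    0.134410] Memory: 1534K rwdata, 2816K rodata'

printf '%s\n' "$sample" | grep -c " ata"   # only the ata1 line matches
printf '%s\n' "$sample" | grep -c "ata"    # all three lines match
```

(Plain `grep ata`, as used later in the thread, also matches rwdata, rodata and the ACPI Sata table lines.)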

----------

## dziadu

Here it is.

```
# lspci -nnk

00:00.0 Host bridge [0600]: Intel Corporation Xeon E3-1200 v3 Processor DRAM Controller [8086:0c08] (rev 06)

        Subsystem: Dell Xeon E3-1200 v3 Processor DRAM Controller [1028:05a6]

00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller [8086:0c01] (rev 06)

        Kernel driver in use: pcieport

00:14.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI [8086:8c31] (rev 04)

        Subsystem: Dell 8 Series/C220 Series Chipset Family USB xHCI [1028:05a6]

        Kernel driver in use: xhci_hcd

00:16.0 Communication controller [0780]: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 [8086:8c3a] (rev 04)

        Subsystem: Dell 8 Series/C220 Series Chipset Family MEI Controller [1028:05a6]

        Kernel driver in use: mei_me

00:16.3 Serial controller [0700]: Intel Corporation 8 Series/C220 Series Chipset Family KT Controller [8086:8c3d] (rev 04)

        Subsystem: Dell 8 Series/C220 Series Chipset Family KT Controller [1028:05a6]

        Kernel driver in use: serial

00:19.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection I217-LM [8086:153a] (rev 04)

        DeviceName:  Onboard LAN

        Subsystem: Dell Ethernet Connection I217-LM [1028:05a6]

        Kernel driver in use: e1000e

        Kernel modules: e1000e

00:1a.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 [8086:8c2d] (rev 04)

        Subsystem: Dell 8 Series/C220 Series Chipset Family USB EHCI [1028:05a6]

        Kernel driver in use: ehci-pci

00:1b.0 Audio device [0403]: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller [8086:8c20] (rev 04)

        Subsystem: Dell 8 Series/C220 Series Chipset High Definition Audio Controller [1028:05a6]

        Kernel driver in use: snd_hda_intel

        Kernel modules: snd_hda_intel

00:1c.0 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 [8086:8c10] (rev d4)

        Kernel driver in use: pcieport

00:1c.1 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #2 [8086:8c12] (rev d4)

        Kernel driver in use: pcieport

00:1c.4 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #5 [8086:8c18] (rev d4)

        Kernel driver in use: pcieport

00:1d.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 [8086:8c26] (rev 04)

        Subsystem: Dell 8 Series/C220 Series Chipset Family USB EHCI [1028:05a6]

        Kernel driver in use: ehci-pci

00:1f.0 ISA bridge [0601]: Intel Corporation C226 Series Chipset Family Server Advanced SKU LPC Controller [8086:8c56] (rev 04)

        Subsystem: Dell C226 Series Chipset Family Server Advanced SKU LPC Controller [1028:05a6]

        Kernel driver in use: lpc_ich

00:1f.2 SATA controller [0106]: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] [8086:8c02] (rev 04)

        Subsystem: Dell 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] [1028:05a6]

        Kernel driver in use: ahci

00:1f.3 SMBus [0c05]: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller [8086:8c22] (rev 04)

        Subsystem: Dell 8 Series/C220 Series Chipset Family SMBus Controller [1028:05a6]

        Kernel driver in use: i801_smbus

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107GL [Quadro K2000] [10de:0ffe] (rev a1)

        Subsystem: NVIDIA Corporation GK107GL [Quadro K2000] [10de:094c]

        Kernel driver in use: nvidia

        Kernel modules: nouveau, nvidia_drm, nvidia

01:00.1 Audio device [0403]: NVIDIA Corporation GK107 HDMI Audio Controller [10de:0e1b] (rev a1)

        Subsystem: NVIDIA Corporation GK107 HDMI Audio Controller [10de:094c]

        Kernel driver in use: snd_hda_intel

        Kernel modules: snd_hda_intel

03:00.0 PCI bridge [0604]: Texas Instruments XIO2001 PCI Express-to-PCI Bridge [104c:8240]
```

and

```
# grep ata /var/log/dmesg 

22:[    0.000000] BIOS-e820: [mem 0x00000000dbfae000-0x00000000dbffffff] ACPI data

83:[    0.006605] ACPI: SSDT 0x00000000DBFFA9C8 00036D (v01 SataRe SataTabl 00001000 INTL 20120711)

174:[    0.134410] Memory: 32721120K/33495076K available (12291K kernel code, 1534K rwdata, 2816K rodata, 1152K init, 1540K bss, 773956K reserved, 0K cma-reserved)

223:[    0.144398] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.

366:[    0.168148] libata version 3.00 loaded.

553:[    0.366810] ata1: SATA max UDMA/133 abar m2048@0xf7136000 port 0xf7136100 irq 29

554:[    0.366812] ata2: SATA max UDMA/133 abar m2048@0xf7136000 port 0xf7136180 irq 29

555:[    0.366814] ata3: SATA max UDMA/133 abar m2048@0xf7136000 port 0xf7136200 irq 29

556:[    0.366816] ata4: SATA max UDMA/133 abar m2048@0xf7136000 port 0xf7136280 irq 29

557:[    0.366817] ata5: DUMMY

558:[    0.366817] ata6: DUMMY

647:[    0.402973] cfg80211: Loading compiled-in X.509 certificates for regulatory database

653:[    0.678067] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)

654:[    0.678098] ata3: SATA link up 6.0 Gbps (SStatus 133 SControl 300)

655:[    0.678131] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)

656:[    0.678159] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)

657:[    0.679433] ata4.00: ATA-7: ST3250410AS, 3.AAF, max UDMA/133

658:[    0.679438] ata4.00: 488397168 sectors, multi 16: LBA48 NCQ (depth 32)

659:[    0.679586] ata1.00: ATA-8: TOSHIBA DT01ACA200, MX4OAC70, max UDMA/133

660:[    0.679590] ata1.00: 3907029168 sectors, multi 16: LBA48 NCQ (depth 32), AA

661:[    0.679695] ata2.00: ATA-8: TOSHIBA DT01ACA200, MX4OABK0, max UDMA/133

662:[    0.679701] ata2.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 32), AA

663:[    0.680330] ata3.00: supports DRM functions and may not be fully accessible

664:[    0.680403] ata4.00: configured for UDMA/133

665:[    0.680888] ata1.00: configured for UDMA/133

667:[    0.681005] ata2.00: configured for UDMA/133

671:[    0.681164] ata3.00: ATA-11: Samsung SSD 860 EVO 500GB, RVT04B6Q, max UDMA/133

672:[    0.681166] ata3.00: 976773168 sectors, multi 1: LBA48 NCQ (depth 32), AA

683:[    0.683508] ata3.00: supports DRM functions and may not be fully accessible

684:[    0.686462] ata3.00: configured for UDMA/133

687:[    0.686685] ata3.00: Enabling discard_zeroes_data

693:[    0.686845] ata3.00: Enabling discard_zeroes_data

700:[    0.688178] ata3.00: Enabling discard_zeroes_data

710:[    0.742471] EXT4-fs (sdc4): mounted filesystem with ordered data mode. Opts: (null)

713:[    0.755129] Write protecting the kernel read-only data: 18432k

714:[    0.756289] Freeing unused kernel image (text/rodata gap) memory: 2044K

715:[    0.756886] Freeing unused kernel image (rodata/data gap) memory: 1280K

766:[    1.625859] wmi_bus wmi_bus-PNP0C14:01: WQBC data block query control method not found

821:[    3.122413] EXT4-fs (sdc5): mounted filesystem with ordered data mode. Opts: (null)
```

My system is on sdc, so ata3.

I started my second test, modprobe and startx just after it, and the system has now been "processing" for 20 minutes.

..:: edit

Update on the second test: it again took nearly one hour to load the module:

```
real   59m34.081s

user   0m0.000s

sys    0m0.013s
```

----------

## molletts

Clutching at straws here, but assuming the copy from HDD to SSD was 100% correct (have you tried running Memtest86+? faulty RAM could corrupt a file copy), you could try adding udev-settle to the boot runlevel:

```
rc-update add udev-settle boot
```

(This assumes you're using openrc, not systemd.)

That would force the boot process to wait until all required modules and devices have finished initialising before continuing to the default runlevel. Maybe the SSD is simply so fast that some of the modules haven't had a chance to fully initialise before X gets started. I had to use udev-settle to ensure my mdraid array came up successfully after upgrading the device containing my root FS: the SATA controller hadn't finished scanning for devices by the time mdraid was started, whereas the old root device was slow enough that there was plenty of time.

----------

## dziadu

Hi, late reply; since I started the system I've had no opportunity to restart it.

I tried udev-settle. First I started it manually after all services were started, which resulted in:

 *Quote:*   

> ERROR: udev-settle failed to start

 

Then I added it to the boot runlevel; the service was run twice, both times with the same result.

After that, it again took 1 hour for nvidia-drm to load.

Do you have other ideas?

----------

## Anon-E-moose

Do a wgetpaste of the whole dmesg output, from the beginning to when nvidia gets loaded, on the ssd (the problem one).

----------

## dziadu

Here it is, full dmesg.

https://dpaste.com/HYCRYDAXJ

I assume that you're asking for the point where nvidia is loaded, right? I believe it is the place in the log where the word nvidia appears for the first time.

 *Quote:*   

> $ dmesg
> 
> [    0.000000] Linux version 5.8.18-gentoo (root@dell-rafal) (gcc (Gentoo 10.2.0-r3 p4) 10.2.0, GNU ld (Gentoo 2.33.1 p2) 2.33.1) #2 SMP Mon Nov 16 07:57:22 CET 2020
> 
> [    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-5.8.18-gentoo root=/dev/sdc4 ro video=uvesafb:1920x1200-32,mtrr:3,ywrap fbcon=scrollback:128K quiet console=tty1 pcie_aspm=force "acpi_osi=!Windows 2015"
> ...

 Last edited by dziadu on Tue Nov 24, 2020 12:34 am; edited 1 time in total

----------

## Anon-E-moose

```
...

/boot/vmlinuz-5.8.18-gentoo root=/dev/sdc4 ro video=uvesafb:1920x1200-32,mtrr:3,ywrap fbcon=scrollback:128K quiet console=tty1 pcie_aspm=force "acpi_osi=!Windows 2015"

...

[    1.723881] nvidia: loading out-of-tree module taints kernel.

[    1.723888] nvidia: module license 'NVIDIA' taints kernel.

[    1.723889] Disabling lock debugging due to kernel taint

[    1.727097] vboxdrv: Found 8 processor cores

[    1.745161] nvidia-nvlink: Nvlink Core is being initialized, major device number 244

[    1.745406] nvidia 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=io+mem

...

[  312.423137] NFS4: Couldn't follow remote path

[  342.403694] elogind-daemon[2703]: New session 15 of user root.

[ 3596.030961] elogind-daemon[2703]: New session 124 of user root.

[ 3602.679212] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  455.38  Thu Oct 22 06:06:59 UTC 2020

[ 3602.880214] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000d4000-0x000d7fff window]

[ 3602.880306] caller _nv000709rm+0x1af/0x200 [nvidia] mapping multiple BARs

[ 3605.796306] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms  455.38  Thu Oct 22 05:57:59 UTC 2020

[ 3605.797940] [drm] [nvidia-drm] [GPU ID 0x00000100] Loading driver
```

I'm not sure what's going on, but I'm thinking it's nothing to do with nvidia; the major pause is between the two elogind-daemon lines.

I suppose it's possible that udev is waiting for something, but it's not showing in dmesg.
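One way to localise such a stall is to find the largest jump between consecutive dmesg timestamps. A minimal sketch, using sample lines copied from the excerpts quoted above (in practice, pipe the real `dmesg` in instead):

```shell
# Sample lines taken from the quoted log above.
dmesg_sample="[    1.745161] nvidia-nvlink: Nvlink Core is being initialized, major device number 244
[  312.423137] NFS4: Couldn't follow remote path
[  342.403694] elogind-daemon[2703]: New session 15 of user root.
[ 3596.030961] elogind-daemon[2703]: New session 124 of user root."

# Split on the [ ] brackets; track the largest gap between timestamps.
printf '%s\n' "$dmesg_sample" | awk -F'[][]' '{
  t = $2 + 0
  if (NR > 1 && t - prev > max) { max = t - prev; line = $0 }
  prev = t
}
END { printf "largest gap: %.1f s, ending at: %s\n", max, line }'
```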

Why force pcie_aspm?

```
    pcie_aspm=  [PCIE] Forcibly enable or disable PCIe Active State Power

            Management.

        off Disable ASPM.

        force   Enable ASPM even on devices that claim not to support it.

            WARNING: Forcing ASPM on may cause system lockups.
```

Also, I'm not sure why you're setting acpi_osi and what effect it has on booting, and I'm leery of uvesafb.

Maybe someone else will have some ideas about the above.

----------

## NeddySeagoon

dziadu,

```
...video=uvesafb:1920x1200-32,mtrr:3,ywrap...

[    0.349306] efifb: probing for efifb

[    0.349316] efifb: framebuffer at 0xf1000000, using 9024k, total 9024k

[    0.349316] efifb: mode is 1920x1200x32, linelength=7680, pages=1

[    0.349317] efifb: scrolling: redraw

[    0.349317] efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0

[    0.349370] Console: switching to colour frame buffer device 240x75
```

uvesafb is long dead; the userspace part was removed several years ago. I don't know whether it's harmless kernel bloat or not, but dmesg says you are using efifb for the console anyway.

```
[    0.143007] SRBDS: Vulnerable: No microcode
```

Do you need a microcode update? Is there one for your CPU?

The nVidia module loaded here

```
[    1.723881] nvidia: loading out-of-tree module taints kernel.

[    1.723888] nvidia: module license 'NVIDIA' taints kernel.

[    1.723889] Disabling lock debugging due to kernel taint

```

Two minutes to start your network:

```
[  127.280327] e1000e 0000:00:19.0 eno1: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx

[  130.867327] e1000e 0000:00:19.0 eno1: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
```

Something is broken in that gap.

```
[ 3602.679212] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  455.38  Thu Oct 22 06:06:59 UTC 2020
```

is the drm part of nvidia. That's late

For a test, edit /etc/conf.d/modules. It should be all comments.

If you already have a modules= entry, add nvidia nvidia-drm to it.

If not, add a single non-comment line:

```
modules="nvidia nvidia-drm"
```

The modules service should already be in the boot runlevel. If not, add it.

Reboot to test.  

I don't see any nvidia related errors, but there is some very odd timing.

```
[ 3608.722538] kmixctrl[8823]: segfault at 0 ip 0000000000000000 sp 00007fff0d15f318 error 14 in kmixctrl[561f43236000+2000]

[ 3608.722542] Code: Unable to access opcode bytes at RIP 0xffffffffffffffd6.

[ 3612.810113] skypeforlinux[9986]: segfault at 4 ip 00007f8b5e3bd8c
```

Skype hates you but its an evil binary blob.

----------

## dziadu

OK, I fixed it. I cannot say which exact step did it, but I followed this page:

https://wiki.gentoo.org/wiki/NVidia/nvidia-drivers

and applied all the recommendations, but I had to disable the Simple Framebuffer; with it, X didn't run. Without it, it works fine.

But as I made plenty of changes, I cannot say which one was decisive.

I also followed your suggestions and fixed all the issues like microcode, uvesafb, etc. Thanks for spotting them. I had been using these GRUB settings for 10 years or more and didn't realise when things had changed.

----------

## NeddySeagoon

dziadu,

Please post your dmesg now that it's working.

----------

## dziadu

 *Quote:*   

> $ dmesg
> 
> [    0.000000] microcode: microcode updated early to revision 0x28, date = 2019-11-12
> 
> [    0.000000] Linux version 5.8.18-gentoo (root@dell-rafal) (gcc (Gentoo 10.2.0-r3 p4) 10.2.0, GNU ld (Gentoo 2.33.1 p2) 2.33.1) #7 SMP Mon Nov 23 22:10:02 CET 2020
> ...

 

----------

## NeddySeagoon

dziadu,

That got rid of the two minute delay starting your network too. 

It looks much better.

----------

## dziadu

IMHO the two-minute delay was caused by adding udev-settle. Before I added it to the boot runlevel, the system started just as fast. But udev-settle would try to 'settle' for about two minutes, then fail, the init would continue, and then udev-settle was called again, etc. In one boot I counted three attempts to finish this task; each failed, and each consumed a significant amount of time, around 2 minutes.

----------

