# *really* old nvidia kernel module suddenly appears

## ExecutorElassus

so, because nothing on my system is ever normal, I flashed my mobo BIOS today (from a CD, using the manufacturer's instructions, and it went fine). Now, for some reason - and flashing the BIOS and updating the attendant CMOS being the only things I've done - I get the following messages when I try to start x:

```
NVRM: loading NVIDIA UNIX x86_64 Kernel Module  190.42  Tue Oct 20 20:25:42 PDT 2009
NVRM: API mismatch: the client has the version 270.41.03, but
NVRM: this kernel module has the version 190.42.  Please
NVRM: make sure that this kernel module and all NVIDIA driver
NVRM: components have the same version.
```

For some reason, the kernel thinks it has a version of the nvidia kernel module from 2009 installed, even though I keep re-emerging (and loading via modprobe) the new one.

Any guesses what's going on here?

EDIT: found some more weirdness. the kernel it's loading is 2.6.31-gentoo-r6, which isn't listed in grub.conf, and doesn't even exist in the system any more. Any ideas what that's all about?

Thanks,

EE

Last edited by ExecutorElassus on Sat Apr 23, 2011 6:02 pm; edited 1 time in total

----------

## NeddySeagoon

ExecutorElassus,

My guess is that you are emerging against one kernel and running another.

Compare `readlink /usr/src/linux` with `uname -r`
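A quick way to run that comparison in one go (a sketch; on Gentoo, `/usr/src/linux` is the symlink that packages like nvidia-drivers build against):

```shell
# Kernel source tree the modules get built against:
built_for=$(readlink /usr/src/linux)

# Kernel actually running right now:
running=$(uname -r)

echo "building against: ${built_for}"
echo "running:          ${running}"
```

If the two don't match, every `emerge nvidia-drivers` is compiling a module for a kernel you're not actually booting.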

----------

## ExecutorElassus

Hi Neddy,

so, grub.conf lists kernel-2.6.38-gentoo-r1 as its primary option, but when I get to the grub menu at bootup, the first option is 2.6.31-gentoo-r6. If I re-write the grub menu from within grub to point to the newer kernel, it says "file not found." Apparently the nonexistent kernel it's loading has the old nvidia module.

/boot is on a mirrored RAID array; is it possible that flashing the mobo BIOS somehow changed which disc grub is reading?

Thanks,

EE

----------

## NeddySeagoon

ExecutorElassus,

What did you get from the test I described ?

What do you have in /boot, both when /boot is mounted and when it is not?  When /boot is not mounted, it should be empty.  The BIOS update may have changed your drive detection order.  Often the first drive reported by the BIOS is the selected boot drive, and since you updated the BIOS, this will have been reset to some default.

Your RAID will still assemble properly, as that's done using the RAID superblock information.

If you have another install, not on your RAID, it's possible that's what's being booted.
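One way to run that check (a sketch; this assumes /boot has an fstab entry, and `umount` will refuse to run if anything is using the mount):

```shell
# With /boot mounted: you should see your kernels and grub files.
mount /boot
ls -l /boot

# With /boot unmounted: the directory should be empty.
# Anything listed here was written to the root filesystem by mistake,
# i.e. installed while /boot wasn't mounted.
umount /boot
ls -l /boot
```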

----------

## ExecutorElassus

I must have swapped the cables. But now I'm on to a new adventure. Want to come along? I tossed the old drives, and have new ones in. Now, I am re-installing the OS from a LiveCD, and at the stage where I create RAID arrays. I screwed up the partition table, and went back to change it. Now, mdadm won't let me re-create the array, because it says that one of the partitions is in use. How do I delete the array, and free up the partition?

Thanks,

EE

EDIT: gah, nvm. I discovered the `--zero-superblock` command, and am now rebuilding the array. So, now md4 is re-syncing and building, hurrah!
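For anyone hitting the same "device is in use" error, the sequence looks roughly like this (a sketch; the device names, RAID level, and member count here are examples, not EE's actual layout):

```shell
# Stop the half-built array so its member partitions are released:
mdadm --stop /dev/md4

# Wipe the old RAID superblocks so the partitions can be reused:
mdadm --zero-superblock /dev/sda4
mdadm --zero-superblock /dev/sdc4
mdadm --zero-superblock /dev/sdd4

# Now the array can be recreated from scratch:
mdadm --create /dev/md4 --level=5 --raid-devices=3 \
    /dev/sda4 /dev/sdc4 /dev/sdd4
```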

----------

## NeddySeagoon

ExecutorElassus,

I hope you used version 0.9 RAID superblocks on /boot, or you can't install grub there.

That's also the only RAID superblock version that the kernel will auto-assemble, so for root on RAID you may find you need an initrd to run mdadm before you can mount root.
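Creating the /boot mirror with the old metadata format looks like this (a sketch; device names are examples):

```shell
# --metadata=0.90 puts the superblock at the END of each member,
# so grub (legacy) can read /boot as if it were a plain filesystem,
# and the kernel can auto-assemble the array at boot without an initrd.
mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 \
    /dev/sda1 /dev/sdc1
```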

See this thread

----------

## ExecutorElassus

Hi Neddy,

I'm rebuilding my system. I followed the guide up to the point where I reboot, and it's not correctly detecting the md partitions at startup (and thus hanging). 

I get to this point:

```
sd 0:0:0:0 [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdc: sdc1 sdc2 sdc3 sdc4
sd 2:0:0:0 [sdc] Attached SCSI disk
sda: sda1 sda2 sda3 sda4
sd 0:0:0:0 [sda] Attached SCSI disk
md: Waiting for all devices to be available before autodetect
md: If you don't use raid, use raid=noautodetect
md: Autodetecting RAID arrays
md: Scanned 0 and added 0 devices
md: autorun ...
md: ... autorun DONE
Root-NFS: no NFS server address
VFS: Unable to mount root fs via NFS, trying floppy
VFS: Insert root floppy and press ENTER
```

then there's a bit about usb, and it tries to read a nonexistent floppy disk, and then it hangs on a kernel panic:

```
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(2,0)
Pid: 1, comm: swapper Not tainted 2.6.38-gentoo-r3 #1
```

So, what's going wrong here? I'm certain I configured the md array with --metadata=0.90, but it's still not booting properly. Any suggestions?

Thanks,

EE

UPDATE: I forgot to set the partition types to fd and 82 on the partitions. Now md4 is rebuilding, and I'll try again in a few hours. Question! If I only change the partition type id, and all the data was already there, do I need to reformat all the lv partitions, reinstall everything, etc? Or can I simply change the type, re-initialize the md array, and reboot?
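The type change itself can be done non-interactively (a sketch; the partition layout shown is an example, and older sfdisk uses `--change-id` where newer util-linux calls it `--part-type` -- the interactive equivalent is fdisk's `t` command):

```shell
# fd = Linux raid autodetect, 82 = Linux swap.
sfdisk --change-id /dev/sda 1 fd   # /boot mirror member
sfdisk --change-id /dev/sda 2 82   # swap
sfdisk --change-id /dev/sda 3 fd   # root mirror member
sfdisk --change-id /dev/sda 4 fd   # RAID5 member
```

The `md: Scanned 0 and added 0 devices` line in the boot log above is the symptom: autodetect only considers partitions whose type is fd.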

UPDATE2: Apparently I can. Thanks for the help

----------

## NeddySeagoon

ExecutorElassus,

Changing the partition type byte in the partition table entry does just that - changes a single byte.

The only thing Linux uses it for is to determine if the partition should be considered for raid autodetect.

That makes me wonder why your raid is rebuilding ...

----------

## ExecutorElassus

Thanks again for the help, Neddy,

Well, the latter part was because I rebooted back into the liveCD after the kernel panic. It renamed all the md nodes, and consequently assumed they were tainted and in need of rebuilding. I shut down the two mirrors (md1 and md3, /boot and / respectively), but the large RAID5 array remains /dev/md127. I'd like to rename that back to /dev/md4 at some point without rebuilding the whole system; is there an easy(ish) way to do that?
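A sketch of the usual fix, assuming the array uses 1.x metadata (the common reason an array comes up as /dev/md127 is that its recorded name doesn't match the host); device names are examples:

```shell
# Stop the misnamed array, then reassemble under the wanted name,
# rewriting the name stored in the superblock as you go:
mdadm --stop /dev/md127
mdadm --assemble /dev/md4 --name=4 --update=name \
    /dev/sda4 /dev/sdc4 /dev/sdd4
# (for 0.90-metadata arrays, the equivalent is --update=super-minor)

# Record the result so the name survives reboots:
mdadm --detail --scan >> /etc/mdadm.conf
```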

thanks again,

EE

----------

