# [Solved] devtmpfs not detecting partition table

## ub1quit33

I am working right now on a custom initramfs from which to boot my system using busybox and a few binaries I packaged in. I have built support for devtmpfs into my kernel. However, my kernel seems to be having difficulties detecting the partition tables on my block devices. Details of what I know about the problem are included below.

- The devices in question are a pair of 500GB Toshiba SATA HDDs with a 4 KB physical block size.

- The devices are arranged in a soft RAID1 array with two partitions each: a 256MB partition 1 for /boot, with the rest on partition 2. All partition types are fd (Linux RAID autodetect)

- The kernel *is* able to detect the partition table on my inserted USB drive (which is /dev/sda)

- The kernel seemingly is able to detect the presence of my block devices. I gather as much from the following:

   -> The kernel prints messages stating it has detected the SATA connected devices

   -> There are device nodes present for /dev/sdb, /dev/sdc

- The issue is definitely not boot loader related, as GRUB has no problem either loading from the MBR or launching the kernel/initramfs from its storage location on partition 1.

I made sure to compile my kernel with support for SATA/ATA targets, and have generic support for both interface types.

I'm at a bit of a head-scratching place, so any tips anyone might be able to give me to help troubleshoot this a little further would be greatly appreciated!

Last edited by ub1quit33 on Sun Jun 09, 2013 2:17 pm; edited 1 time in total

----------

## khayyam

 *ub1quit33 wrote:*   

> I am working right now on a custom initramfs from which to boot my system using busybox and a few binaries I packaged in.

 

ub1quit33 ... and this "system" is Arch Linux, right?

best ... khay

----------

## NeddySeagoon

ub1quit33,

What partition type did you use on your HDDs, e.g. MSDOS, GPT, SUN ...?

What partition types did you build kernel support for?

Are you having problems with partitions outside or inside the raid?

What raid superblock version are you using ?

----------

## ub1quit33

 *khayyam wrote:*   

>  *ub1quit33 wrote:*   I am working right now on a custom initramfs from which to boot my system using busybox and a few binaries I packaged in. 
> 
> ub1quit33 ... and this "system" is Arch Linux, right?
> 
> best ... khay

 

Heh, nice catch. I do troll the Arch forums at times, as I run Arch on my laptop... but the "system" I'm referring to is a home server I'm building out using Gentoo  :Smile: 

----------

## ub1quit33

 *NeddySeagoon wrote:*   

> ub1quit33,
> 
> What partition type did you use on your HDDs, e.g. MSDOS, GPT, SUN ...?
> 
> What partition types did you build kernel support for?
> ...

 

As mentioned in the OP, the partition type is fd on all partitions, and I built support for RAID autodetect into the kernel, as well as ext2, ext4, squashfs, and aufs.

 *NeddySeagoon wrote:*   

> 
> 
> Are you having problems with partitions outside or inside the raid?
> 
> 

 

The partitions that are not appearing are all members of a soft RAID array (either md0 or md1); however, the problem occurs well before the kernel could even decide whether those partitions are members of a RAID array or not. It seems unable to discover that they exist at all.

 *NeddySeagoon wrote:*   

> What raid superblock version are you using ?

 

I'm not sure what a "superblock version" is... are you asking about my mdadm version? In any case, the members of the first array (/dev/md0: /dev/sda1, /dev/sdb1) are built with --metadata=0.90 while the members of the second array (/dev/md1: /dev/sda2, /dev/sdb2) are not. Is that what you were asking about?

----------

## NeddySeagoon

ub1quit33,

fd is a flag attributed to a partition.  Is the partition table in MSDOS format, or some other?

What tool did you use to make the partition table?

What does fdisk -l show for the raid drives?

--metadata=0.90 is the raid superblock version.

Kernel raid auto assembly only works with --metadata=0.90 raid sets. You must use mdadm in your initrd to assemble /dev/md1; that's after you have the required /dev nodes.
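In practice that means the initramfs needs to do something like the following before switching root. This is only a sketch: the device names and array numbers are taken from this thread, and the fragment is illustrative rather than a complete /init.

```shell
# Sketch of an initramfs /init fragment (names from this thread).
mount -t devtmpfs none /dev          # kernel-created device nodes
# md0 (metadata 0.90, type fd) can be auto-assembled by the kernel;
# md1 (1.x metadata) must be assembled explicitly:
mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2
# or let mdadm scan for whatever arrays it can find:
# mdadm --assemble --scan
```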

----------

## khayyam

 *ub1quit33 wrote:*   

>  *khayyam wrote:*   ub1quit33 ... and this "system" is Arch Linux, right? 
> 
> Heh, nice catch. I do troll the Arch forums at times, as I run Arch on my laptop... but the "system" I'm referring to is a home server I'm building out using Gentoo

 

ub1quit33 ... ok, first posts with generic linux questions are often users who couldn't get answers elsewhere, and a quick search suggested Arch. Anyhow, NeddySeagoon is far more experienced with RAID than myself, so I'll leave it to him.

best ... khay

----------

## ub1quit33

 *NeddySeagoon wrote:*   

> ub1quit33,
> 
> fd is a flag attributed to a partition.  Is the partition table in MSDOS format, or some other?
> 
> What tool did you use to make the partition table?
> ...

 

Ah, the partition *table* type! Sorry for misunderstanding. Yes, I believe it is MSDOS, as I set up the partitions with cfdisk.

I'm not going to be physically at the machine for another few days, so I can't get the fdisk -l output yet.

I do have mdadm in my initrd to run --assemble, but the issue I'm experiencing is with all the necessary /dev nodes appearing. mdadm can't even figure out which disks to assemble, as /dev/sda1,2 and /dev/sdb1,2 do not appear (when I have a USB thumb drive plugged in, its partition appears as /dev/sda1; it is only for my HDDs that partition nodes fail to appear).

Quick question since we're on the topic... assuming everything was working correctly, would I need to manually run mknod for /dev/md0 and /dev/md1, or does automatic mdadm assembly take care of this?

----------

## NeddySeagoon

ub1quit33,

The DEVTMPFS kernel code creates the md nodes when the raid set is assembled.

Please check your partition tables with either parted or fdisk.  I know both those tools check for the correct boot block signature in the last two bytes of LBA 0.

If the signature is incorrect, the partition table will be ignored.
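That signature check can also be reproduced by hand. A small sketch (the device name is only an example; substitute your own drive):

```shell
# Print the two-byte boot signature at offset 510 of LBA 0.
# A valid MSDOS partition table ends in "55 aa"; with anything
# else, fdisk, parted, and the kernel ignore the table.
check_mbr_sig() {
    dd if="$1" bs=1 skip=510 count=2 2>/dev/null \
        | hexdump -v -e '1/1 "%02x "'
}
# e.g.  check_mbr_sig /dev/sdb   prints "55 aa " if the table is valid
```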

----------

## ub1quit33

Thanks, when I get back in town I'll run fdisk -l and update this thread.

----------

## ub1quit33

Okay, here's my fdisk -l output:

```
Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63      498014      248976   83  Linux
Partition 1 does not start on physical sector boundary.
/dev/sda2          498015   976773167   488137576+  fd  Linux raid autodetect
Partition 2 does not start on physical sector boundary.

Disk /dev/sdb: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *          63      498014      248976   83  Linux
Partition 1 does not start on physical sector boundary.
/dev/sdb2          498015   976773167   488137576+  fd  Linux raid autodetect
Partition 2 does not start on physical sector boundary.
```

----------

## NeddySeagoon

ub1quit33,

That looks OK but you have advanced format hard disk drives and have your partitions misaligned.  It will still work but will be very slow as the drive has to do read/modify/writes to accomplish the misaligned writes.  That costs you at least an extra rev of the platter.  Anecdotal evidence suggests that writes will be 30x slower than with correctly aligned partitions.

Partition starts need to be at a sector number that is an integer multiple of 8, so 64 works but 63 does not.
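The arithmetic is simple enough to script. A small helper, assuming 512-byte logical sectors on a 4096-byte physical-sector drive (the situation in this thread):

```shell
# Round a start sector up to the next multiple of 8, so the
# partition begins on a 4096-byte physical sector boundary.
aligned_start() {
    echo $(( ($1 + 7) / 8 * 8 ))
}
aligned_start 63    # -> 64 (the misaligned start in this thread)
aligned_start 2048  # -> 2048 (already aligned; the default of modern tools)
```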

You cannot fix it without destroying your data.

This does not explain why /dev/sd[ab][12] are missing from /dev though.

Can you post your kernel .config file to a pastebin please?  It's at /usr/src/linux/.config

wgetpaste is your friend.

----------

## ub1quit33

Thanks for all your help on this issue. I am somewhat noobish still in the realm of manual kernel configuration, so this is a huge help. Very much appreciated!

Also, thanks for the tip on my disk alignment. No biggie on the data, I'll just export disk images so I can make the alignment fix  :Smile: 

Here is a paste of my kernel configuration: http://pastebin.com/0i938MAA

(I grepped out all the default settings)

----------

## NeddySeagoon

ub1quit33,

If by disk images, you mean to use dd, that won't work as the data will go back in the same place.  You need to preserve the data, remake the partitions and filesystems, then restore the data.

You can almost do it in situ.  Fail the same drive in each raid, say /dev/sdb[12].  This will leave you with degraded raids based on /dev/sda[12].

Remake the raids on sdb, with the components from sda missing. Copy your data over from the old degraded raids to the new degraded raids.

When you are happy it works, partition and add sda[12] to the new raids.

You will end up with new raid numbers and will have practiced raid failed drive replacement at no extra expense. Of course, you still need a backup.  If you mess up, or the only drive in the degraded raid set dies, you have lost your data. Going the degraded raid route, means very little downtime.

On the face of it your kernel looks "mostly harmless"; however, you have some debug options on, which is generally a bad thing unless you need the logspam they generate.  A few debug options interfere with normal operation too.  As you stripped the comments from your kernel .config, I can easily check what you have set, but checking the options that are off that you might need is much harder, as it relies on my memory.  Please post an unstripped copy of your .config

----------

## ub1quit33

My plan was to image the FS as a squashfs image which I would just unsquash on the fixed block devices, but I like your way better. Sounds like a great way to get some practice with RAID drive management  :Smile: 

Ah sorry about that kernel config. I thought I was reducing the noise level by stripping out the comments. Here's a full paste of the kernel config.

http://pastebin.com/Bg1WpHZA

----------

## ub1quit33

I wanted to update this topic for the records, in case anyone searches their way across it, as I was able to resolve my issue.

The problem was that I neglected to enable support for the appropriate manufacturer's SATA/PATA controller. In my specific case, my machine has an AMD processor/motherboard, so I enabled the following:

```
Device Drivers --->
    Serial ATA and Parallel ATA Drivers --->
        <*> AMD/NVidia PATA Support
        <*> ATI PATA Support
```

That took care of my issue with the kernel being unable to detect my block devices. I have since encountered issues with some unsupported parts of my setup, but I'll open a new issue for that in the appropriate forum area. 

The other thing I wanted to follow up on in this topic was my plan to realign the drive partitions. Since my last post, I have happened across a pair of 2T drives, and am looking to switch out my pair of 500G drives. I am planning to replace the drives this weekend, and I wanted to throw my rough plan out there and see if you'd be willing to comment on how well it may or may not work.

So based on your last recommendation, I was thinking I would perform the drive change in the following manner.

- Mark one of my disks (say /dev/sdb[12]) as failed. Remove the drive.

- Attach the first 2T drive. Create /dev/sdb[12] as RAID autodetect partitions

- Create partitions on sdb[12] to be the same size as sda[12] respectively (ensuring partition starts are appropriate multiples of 8)

- Rebuild array from sda[12] onto the new sdb[12]

- Repeat steps taken for sdb[12] on sda[12]

- Extend VolGroup on /dev/md1 to cover the remainder of the spare drive space (to cover the full 2TB of space)

Does this plan seem like it would work? I know that the 2T drives will have to use the GPT partition type... is that going to cause problems when I try to create the interim RAID array with a drive using the MSDOS partition type? Any feedback is welcome feedback. Thanks for all the help!
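In mdadm terms I imagine the sequence would look roughly like this (an untested sketch; device names and array numbers are taken from this thread, and the LVM names are guesses):

```shell
# One drive at a time: fail and remove the old 500G disk...
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2
# ...swap in the 2T drive, partition it (GPT, aligned starts),
# then let the arrays rebuild onto it:
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2
# repeat for sda, then grow the array and the LVM stack:
mdadm --grow /dev/md1 --size=max
pvresize /dev/md1
lvextend -l +100%FREE /dev/VolGroup/root   # VG/LV names are examples
```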

----------

## NeddySeagoon

That works but it may be overly complex.

The only reason to fail a drive, is so you can use the 'failed' drive for something else.

If you can connect the 2TB drives at the same time, there is no need to fail anything.

You can partition your new raid however you like, unless you have a particular reason to copy the partition sizes.

On the new raid, make new partitions, new LVMs and so on.

Copy everything over with tar or rsync or even cp -a.  You will need to boot with a liveCD for this step.

Install your bootloader, fix your grub.conf and /etc/fstab on  the new raid.

Boot into the new raid to test.
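The copy-over step might look something like this from the liveCD (a rough sketch; mount points, device names, and rsync options are illustrative, not a tested recipe):

```shell
# Assemble old and new raids, then copy the system across:
mdadm --assemble --scan
mkdir -p /mnt/old /mnt/new
mount /dev/md1 /mnt/old              # old root
mount /dev/md3 /mnt/new              # new root (example array number)
rsync -aHAX /mnt/old/ /mnt/new/      # preserve links, ACLs, xattrs
# then chroot into /mnt/new, fix /etc/fstab and grub.conf,
# and reinstall the bootloader on the new drives
```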

----------

## ub1quit33

Awesome! I should be able to work it out on my own from here. Thanks a ton for all of your help!

----------

