# Gentoo on a VPS can't install grub (vda)

## jhon987

Hi folks,

I decided to install Gentoo on a VPS I own.

Alas! I'm pretty much stuck at the final step of the installation where I cannot install grub.

This is what I get:

```
grub2-install: warning: this GPT partition label contains no BIOS Boot Partition; embedding won't be possible.

grub2-install: warning: Embedding is not possible.  GRUB can only be installed in this setup by using blocklists.  However, blocklists are UNRELIABLE and their use is discouraged..
```

When trying to install it on /dev/vda.

According to this guide -> https://wiki.gentoo.org/wiki/User:Flow/Gentoo_as_KVM_guest - it seems that no out-of-the-ordinary action needs to be taken to get it through, and yet...

any suggestions?

Unfortunately, I cannot post any data from console since rescue mode doesn't support copy paste...

----------

## NeddySeagoon

jhon987,

Boot loaders need some space outside both the MBR and the filesystem space.

With an MSDOS disk label, there is some unallocated space before the first partition. Boot loaders help themselves to this.

With a GPT disk label, there is no free space.  grub2 expects to find a small partition of type BIOS Boot Partition that it can use.

The handbook covers creating this.

If you have this partition, it's typically about 2 MB; check the partition type.
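For example, creating one with parted looks roughly like this (a sketch only: /dev/vda is the device from this thread, and the offsets and partition number are illustrative - adjust to your own layout):

```
# Create a small BIOS Boot Partition on a GPT-labelled /dev/vda.
# Assumes free space at the start of the disk; offsets are illustrative.
parted /dev/vda mkpart primary 1MiB 3MiB
parted /dev/vda set 1 bios_grub on
# Verify the bios_grub flag is set on the new partition.
parted /dev/vda print
```

With that partition in place, grub2-install has somewhere to embed its core image and the blocklist warning goes away.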

```
GRUB can only be installed in this setup by using blocklists.  However, blocklists are UNRELIABLE and their use is discouraged..
```

but they do work. However ... every time grub is updated, you must reinstall it to the MBR to update the block list, or you risk an unbootable install.

The fix for that is to mask grub so you don't get any grub updates; then it can't break.
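One way to do that in portage (a sketch; the version atom assumes grub legacy 0.97 is what you keep installed):

```
# Mask anything newer than the installed grub so an update can never
# silently invalidate the block list.
echo ">sys-boot/grub-0.97" >> /etc/portage/package.mask
```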

I use grub-static (legacy grub) with GPT which has the same block list issue.

----------

## jhon987

Hello Neddy,

Thank you for your informative answer. To be honest, I haven't formatted the drives; I just booted into rescue mode, mounted vda and "rm -rf"'d everything from there.

This is all new to me; there are so many things I still don't know how to do, so I didn't do a full format, to avoid accidentally breaking anything (further).

I trust you that the blocklists work; I'll save that as a last resort.

BTW, currently grub 0.97 is installed on the system (that's how I boot into rescue mode) - do you know a way I can find where the current grub sits, so that I could then maybe replace it?

----------

## NeddySeagoon

jhon987,

Grub stage1 is the first 446 bytes of the MBR.  The last 66 bytes are the fake MSDOS disk label, so be careful when erasing this.

There is no grub legacy stage1.5 installed, as there is nowhere to put it with a GPT disk label.  grub legacy will not use a BIOS Boot Partition, even if one exists.

It gives a warning on install that embedding was not possible and falls back to using a block list.

Grub stage2 is in /boot/grub.  It's this that is loaded using block lists.
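You can poke at that sector layout safely on a scratch file instead of a real disk. A sketch (446 bytes of boot code, a 64-byte partition table and the 2-byte signature make up the 512-byte MBR; the 64 + 2 is the "last 66 bytes" above):

```
# Build a fake 512-byte boot sector in a scratch file, not a real disk.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
# A bootable sector ends with the signature bytes 0x55 0xAA at offset 510.
printf '\x55\xaa' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null
# Read the last two bytes back as hex: 55 aa.
dd if="$img" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1
rm -f "$img"
```

Reading the first sector of /dev/vda with dd the same way (read-only) shows you whatever boot code is currently installed there.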

----------

## jhon987

Neddy,

First, thank you for helping.

second, I think I didn't explain things well enough, so I'll try to do better:

I'm using a VPS rescue mode console just like someone would use a liveCD in order to see files of a non booting OS.

It's through that rescue mode that I've mounted /dev/vda1 onto /mnt and then wiped all the files in there.

Next, I've commenced installing Gentoo (on that vda1 partition) until I got stuck trying to install grub2.

Now, when I exit rescue mode (meaning - the liveCD) and boot into local drive, what I get is a grub version 0.97 shell which can't find a loaded kernel (obviously).

So, the only thing I can do is to return to rescue mode and figure out my way from there.

One thing I noticed is that when booting to rescue mode or the local drive, both show an initial screen mentioning a BIOS version and a Machine UUID, which (the same UUID) I can't seem to find when using "blkid".

I'm not sure if that's important though.

Anyways, what I'd like to do is to replace that same legacy grub (which boots to both rescue and local modes) with a new grub2 from within my chrooted Gentoo.

So, if you or anyone else know how I can do so, please don't hesitate to share...

Going to sleep now (been sitting almost 20 hours on this S#!@)...

----------

## jhon987

Some progress I made:

I copied stage 1 and stage 2 from the rescue mode liveCD as per the suggestion in the last section here:

https://wiki.archlinux.org/index.php/GRUB_Legacy

created grub.conf manually using this guide:

https://wiki.gentoo.org/wiki/GRUB

I'm now getting a kernel panic, though:

```
kernel panic - not syncing vfs unable to mount root fs on unknown-block(0 0)
```
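From what I've read, this panic usually means the kernel came up but couldn't find or mount its root device: either the root= parameter in grub.conf doesn't match the real root partition, or the kernel lacks the virtio_blk driver it needs for /dev/vda. For reference, my grub.conf at this point looked roughly like this (a sketch; the kernel image name is whatever you installed to /boot):

```
# /boot/grub/grub.conf - minimal grub legacy config for a KVM guest.
default 0
timeout 5

title Gentoo Linux
# (hd0,0) is the first partition of the first BIOS disk, i.e. vda1 here.
root (hd0,0)
# root= must name the root device as the kernel sees it, and the kernel
# needs virtio_blk built in, or you get unknown-block(0,0).
kernel /boot/vmlinuz root=/dev/vda1
```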

----------

## jhon987

For the sake of anyone else who would like to go down the same path as I did, writing after successfully booting into Gentoo on a remote server, here are my conclusions and what I eventually did as a result:

Conclusions:

1. The boot sector block was never on the /dev/vda device; otherwise there would have been room for installing another one (replacing the old one).

2. Strengthening the first point: vda1, the partition closest to the start of its vda container, was completely wiped by me, and yet I could still boot into a minimal grub shell.

3. Rescue mode, which I was afraid I wouldn't be able to access had I accidentally wiped its grub boot sector, was not dependent on /dev/vda, which, as I mentioned, did not contain the "primary" grub in the first place.

4. I say "primary", for what I concluded is that the server was built with a primary gateway leading to both Rescue mode (liveCD) and the local [virtual] disk.

5. The local drive, in addition to the primary boot grub, contained a grub boot of its own, so that once accessed through the primary gateway, the boot process would continue on to the next boot loader, or so it seems.

What I Did!

Based on the conclusions above, and since I only managed to make the local disk boot via a dirty hack, I decided to start all over again, this time formatting /dev/vda with no worries about being locked out due to deleting the primary boot sector.

And so I did, and everything worked smoothly then.

What I Still Don't Know?

What I'm still not sure of is how the server company managed to squeeze grub into the default setup.

Before wiping it, the box contained a CentOS 6.x distro.

Upon doing my yum updates, I could see grub was installed. grub was also present inside the boot folder.

Thus I'm still not sure whether the original setup was using that grub, or whether the boot was somehow handed over straight to the kernel (I read that's basically possible)...?

----------

## NeddySeagoon

jhon987,

 *Quote:*   

> 1. The boot sector block was never on the /dev/vda device, otherwise there would be room for installing another one (replacing the old one).

 

That's not always true.  Grub1 and Grub2 install differently.  Grub1 gives you a warning when it falls back to block lists. Grub2 demands that you tell it explicitly to use block lists.

 *Quote:*   

> 2. Strengthening the first point, vda1 which was the closest sector to its vda container was completely wiped by me, and yet I could still boot into a minimal grub shell. 

 What does 'wiped' mean?

The partition table contains pointers to your data. Filesystems contain pointers to your data.

Removing the partition table destroys the pointers to your data, not the data itself.  The data is still there; it's just become inaccessible.

Restoring the pointers makes your data accessible again.  Likewise with a filesystem: creating a new filesystem clears all the pointers to the data.

The data is still there.  It's actually quite difficult and time consuming to 'wipe' a HDD so the data is gone.

/dev/vda is to all intents and purposes a HDD.  When you add a disk label and make a partition, the data structures created are:

- the MBR at the very start of the drive
- the (primary) partition table
- the filesystem space
- the backup partition table

Only the filesystem space can be allocated to partitions.

I've somewhat laboured the difference between the data and the pointers to data.  When grub installs using block lists, it installs to the MBR, which is outside filesystem space. Block lists are pointers to data. When grub loads from the MBR, it loads the contents of the disk blocks in the block list. It does not use the filesystem pointers; it just closes its eyes and blindly loads those blocks.

The grub stage2 is still on disk where it was installed. The space it occupies is marked free in the filesystem, but the data contents are still there.
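For completeness, a block-list install from the grub legacy shell looks roughly like this (assuming /boot lives on the first partition of the first BIOS disk; adjust (hd0,0) to your layout):

```
# grub legacy interactive shell; setup embeds stage1 in the MBR and
# records block-list pointers to stage2 on (hd0,0).
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
```

This is the step that must be repeated after any grub update, since the new stage2 will occupy different blocks.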

 *Quote:*   

> 3. Rescue mode, which I was afraid I won't be able to access had I accidentally wiped its grub boot sector, was not dependent on /dev/vda,

 

It's often a net boot image, so your HDD is not involved at all.

 *Quote:*   

> What I'm still not sure of is how did the server company managed to squeeze grub into the default setup. 

 

It's not a squeeze.  It's the same as it is on a real HDD.

----------

## jhon987

Once again Neddy, thanks for an informative reply.

From what you're saying, I gather that it's possible that grub1 was installed on /dev/vda and perhaps used block lists, and thus was able to work even though I wiped everything.

 *Quote:*   

> What does 'wiped' mean? 

 

By wipe I'm referring to the "rm -rf" command I issued on all the files residing under /dev/vda1, which I mentioned previously.

Since I saw that, after using this command, the disk space that was taken had been freed, I assumed that the data was actually deleted...

Anyways, the important point in my view is that the way the drive (vda) was originally partitioned effectively prevented a standard grub2 installation, i.e. a grub2 installation that doesn't use block lists.

Another important point is that it was OK to format and repartition the drive from rescue mode all along, and I shouldn't have been afraid of doing so right from the start - it could have saved me valuable time and headaches.

 *Quote:*   

> Its often a net boot image, so your HDD is not involved at all. 

 

Might be so, but how does the system know to use that image? Does it go there straight from the BIOS, or does it boot to some local disk and then fetch the net boot image?

----------

## NeddySeagoon

jhon987,

```
rm -rf
```

only 'unlinks' files from the directory and marks the space occupied by the files as free.

The space is not overwritten until it's actually required.

Overwriting a drive takes several hours, and it's not required unless you are scrapping the drive.

If you are curious, try 

```
dd if=/dev/vda of=/dev/null bs=1024000
```

That reads the entire /dev/vda, which time-wise is the same as a complete overwrite.

I don't know how your particular system boots directly into rescue mode.  

I have always had to ask for it in a web interface when I've made a mess of a remote system.

----------

