# [SOLVED] Gparted shows different partition table than Parted

## Spargeltarzan

Dear Community,

I use 8 disks in my computer: 6 form a ZFS data pool, and 2 SSDs hold my Gentoo root, Windows, and, in the future, a ZFS cache. I had already created a ZFS cache once, but I cannot find it now, because my gparted shows different information than my parted. I definitely want to fix that before I touch any partitions in this setup.

My Output for my 512GB SSD /dev/sda from parted:

```
(parted) select /dev/sda
Using /dev/sda
(parted) print
Model: ATA Samsung SSD 850 (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End    Size    File system     Name                 Flags
 1      1049kB  538MB  537MB   fat32           EFI System Parition  boot, esp
 2      538MB   221GB  220GB   ext4
 4      221GB   325GB  105GB   ext4            gentooroot
 3      325GB   439GB  114GB   ext4
 6      439GB   448GB  8193MB  linux-swap(v1)
```

My gparted shows the same size for the disk, but as fully used by a ZFS filesystem. What is most likely wrong here?

What could cause the output of the two tools to differ?

Thanks in advance!

----------

## NeddySeagoon

Spargeltarzan,

Please show both outputs

----------

## Spargeltarzan

NeddySeagoon,

I really would like to; do you have an idea how I can attach a screenshot of my gparted?

Gparted shows my SSD as only one partition spanning the whole disk, with ZFS on /dev/sda and the name dpool.

I definitely have only one 512GB SSD in my computer; the second one is 256GB. So a /dev/sdX naming mix-up is unlikely.

----------

## Jaglover

Isn't gparted just a GUI frontend for parted (and perhaps some other CLI tools)? Once again, added complexity makes the whole thing error-prone. I'm guessing.

----------

## Spargeltarzan

That I don't know, but if it were a GUI for parted, wouldn't it be even weirder that it shows different information?

That is the parted output of my second ssd:

```
Model: ATA Samsung SSD 840 (scsi)
Disk /dev/sdd: 256GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End    Size    Type     File system  Flags
 1      1049kB  368MB  367MB   primary  ntfs         boot
 2      368MB   185GB  185GB   primary  ntfs
 4      185GB   230GB  44.5GB  primary  ext4
 3      230GB   230GB  472MB   primary  ntfs         diag
```

Apart from parted not showing partition names for /dev/sdd, the partitions and their filesystems match what gparted shows.

I suspect there is one ZFS partition somewhere on /dev/sda, but parted shows everything as ext4 (plus the fat32 /boot ESP), while gparted shows only one full-disk ZFS partition.

----------

## NeddySeagoon

Spargeltarzan,

You will need to post the screenshot to an image hosting service, then post a link to it here.

----------

## Jaglover

 *Quote:*   

> That I don't know, but if it were a GUI for parted, wouldn't it be even weirder that it shows different information?

I looked it up, and of course it is a GUI for parted. Which proves once again: stay with the real tools and do not rely on frontends.

----------

## Anon-E-moose

what does blkid show?

----------

## Spargeltarzan

NeddySeagoon

Gparted SSD 512GB

Gparted SSD 256GB

Anon-E-moose

```
/dev/sda1: LABEL_FATBOOT="ESP" LABEL="ESP" UUID="*" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System Parition" PARTUUID="*"
/dev/sda2: UUID="*" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="*"
/dev/sda3: UUID="*" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="*"
/dev/sda4: LABEL="gentooroot" UUID="*" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="gentooroot" PARTUUID="*"
/dev/sda6: UUID="*" TYPE="swap" PARTUUID="*"
/dev/sdd1: LABEL="System-reserviert" BLOCK_SIZE="512" UUID="*" TYPE="ntfs" PARTUUID="*"
/dev/sdd2: BLOCK_SIZE="512" UUID="*" TYPE="ntfs" PARTUUID="*"
/dev/sdd3: BLOCK_SIZE="512" UUID="*" TYPE="ntfs" PARTUUID="*"
/dev/sdd4: LABEL="gentoobackup" UUID="*" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="*"
```

 *Quote:*   

> I looked it up, and of course it is a GUI for parted. Which proves once again: stay with the real tools and do not rely on frontends.

That would mean parted is 100% correct and there is no ZFS partition on /dev/sda. It might be that I forgot I had deleted the cache partition. gparted is usually very mature software; it should be able to show the right partition table...

EDIT: Added eix gparted output:

```
eix gparted
[I] sys-block/gparted
     Available versions:  1.1.0^t{tbz2} {btrfs cryptsetup dmraid f2fs fat hfs jfs kde mdadm ntfs policykit reiser4 reiserfs test udf wayland xfs}
     Installed versions:  1.1.0^t{tbz2}(18:41:06 27.05.2020)(btrfs fat hfs kde ntfs policykit wayland xfs -cryptsetup -dmraid -f2fs -jfs -mdadm -reiser4 -reiserfs -test -udf)
     Homepage:            https://gparted.org/
     Description:         Gnome Partition Editor
```

ADD: That seems to be a gparted issue indeed. Github

ADD: And it seems that parted reports zfs as ext4. Gitlab

ADD: I tried to mount sda1-4 and can confirm they are indeed ext4, containing a Linux install and a home directory. Weird again that gparted then reports the disk as a zpool.

The output of blkid is different if I run it specifically on /dev/sda:

```
blkid /dev/sda
/dev/sda: LABEL="dpool" UUID="5324304982938754299" UUID_SUB="12059425828241542702" BLOCK_SIZE="512" TYPE="zfs_member" PTUUID="1222d814-35ce-46c5-b82e-78e87cf6269b" PTTYPE="gpt"
```
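A quick way to see both conflicting signatures at once, not used in this thread but worth noting, is util-linux's `wipefs`, which in its plain form only lists the magic strings it can probe and never writes anything:

```shell
# list_signatures: read-only probe of all filesystem/partition-table magic
# strings on a device or image. wipefs with no options never modifies anything.
list_signatures() {
    wipefs "$1"
}

# On the disk from this thread one would expect it to report both the gpt
# signature and the stale zfs_member signature:
#   list_signatures /dev/sda
```

Unlike blkid, which stops at the first match it prefers, wipefs reports every signature it finds at every offset.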

----------

## NeddySeagoon

Spargeltarzan,

Your image that shows /dev/sda as a ZFS pool shows the whole of sda.

It's as if the partition table is not recognised and gparted is interpreting the start of the drive as a ZFS pool instead of the partition table it really is.

----------

## Spargeltarzan

NeddySeagoon,

do you think there is something I can do to fix it?

Since I can mount all the ext4 partitions and read the data on them, there is no ZFS partition on /dev/sda, and the whole-disk ZFS information seems to be wrong. Also

```
blkid /dev/sda
```

reports zfs.

Maybe fdisk can help?

----------

## mv

 *Jaglover wrote:*   

> I looked it up and of course it is GUI for parted. Just proves another time - stay with real tools and do not rely on frontends.

 

Unfortunately, >=parted-3 removed important functionality (resizing existing partitions), which was only partially restored in later versions in the form of a library. The only tool using this library is gparted, AFAIK. Thus, unless you want a crippled parted, you are currently forced to use gparted. Or you could have stayed with parted-2.4, which probably has other features missing...

----------

## mv

My guess is that at some point /dev/sda was mistakenly formatted as ZFS, which was only partially "repaired" by creating a partition table over it.

gparted, in contrast to parted, is able to analyze the filesystem actually stored (all functionality concerning the underlying filesystem was completely removed from parted).

Whether the leftover non-zero data can actually cause harm is hard to say.

It might help to re-create the partition label manually with parted, but be aware that this is potentially a destructive command!
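Before re-creating any partition label, it would be prudent to save the current table in restorable text form. A minimal sketch, assuming util-linux's `sfdisk` is available (its `--dump` mode is read-only):

```shell
# save_table: dump a partition table as restorable plain text.
# sfdisk --dump only reads; nothing on the device is changed.
# Restoring later (destructive!) would be:  sfdisk "$1" < "$2"
save_table() {
    sfdisk --dump "$1" > "$2"
}

# For the disk in this thread (output filename is illustrative):
#   save_table /dev/sda sda-table.dump
```

Keeping such a dump means the GPT can be re-written exactly as it was, even if a later step damages it.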

----------

## ct85711

One thing I have run into: ZFS writes special labels onto the drive (intended to make it easier for ZFS to find and load its pools). The catch is that even when you reformat the drive, the labels are not removed. So I suspect you are seeing a case of gparted picking up leftover ZFS labels. Other partition types don't use these labels, so they simply ignore them. Now, you could have ZFS remove them, but only AFTER you back up your data. Removing the labels is very dangerous in that it can corrupt another partition type's superblocks or MBR. When I removed them on my drive, it killed my ext4 partition table, and none of the backup superblocks could be used to restore it. Do note that those labels can also cause problems when you want to use ZFS again: ZFS will see the old labels and try to use them even though they are invalid.

----------

## Spargeltarzan

 *ct85711 wrote:*   

> Now, you could possibly have zfs remove them, but only AFTER you backup your data

 

How can ZFS remove the labels? When I type zpool status, it only lists my 6 hard drives, but no cache any more...

ADD: It seems some work has been done in the meantime on zpool labelclear: https://github.com/openzfs/zfs/commit/066da71e7fe32f569736b53454b034937d0d3813

But I will create a backup first and let you know how it worked.
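The backup can be as simple as a raw dd image of the whole device. A sketch (the destination path is illustrative, and the target needs as much free space as the disk is large):

```shell
# backup_device: raw byte-for-byte copy of a device (or file) into an image.
# conv=fsync flushes the image to stable storage before dd exits.
backup_device() {
    dd if="$1" of="$2" bs=1M conv=fsync status=progress
}

# As done in this thread before running zpool labelclear:
#   backup_device /dev/sda /mnt/backup/sda.img
```

Restoring is the same command with the arguments swapped, which is exactly why a full image is the safest net before a label-clearing operation.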

----------

## ct85711

To list the ZFS labels (and find where they are), you can use

```
zdb -l /dev/[part/drive]
```

This command is fairly safe; according to the documentation it is mostly for debugging, so it shouldn't do anything except list any labels it sees. After you find where the label is, you can then use

```
zpool labelclear -f [drive/partition]
```

The labelclear part is the dangerous one, as there are reports from others that labelclear can wipe the GPT info. The -f option means force; you may or may not need it. Beyond that, the main task is to find the drive and/or partition carrying the offending label so you can remove it.

For those interested, I found a comment on a FreeBSD forum saying that ZFS stores four labels/headers on the drive.

 *Quote:*   

> ZFS places four 256KB vdev headers on disks, two at the beginning and two at the end.  You'll probably need to erase the end of the disk as well.

 

Assuming ZFS on Linux does the same, any of those could be causing the problem. Going by my experience when this happened to me, I'm guessing that since the header is larger, it starts earlier than a regular partition table, so some of the label stays behind after repartitioning; that would also explain why clearing it could kill the partition info.
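Taking the quoted layout at face value (four 256 KiB labels, two at the start and two at the end; whether ZFS on Linux matches this exactly is an assumption here), the label offsets for a given device can be computed like this:

```shell
# zfs_label_offsets: print the byte offsets of the four 256 KiB vdev labels,
# two at the start of the device and two at the end, given its size in bytes.
# On a real device the size can be obtained with:  blockdev --getsize64 /dev/sda
zfs_label_offsets() {
    size=$1
    label=$((256 * 1024))        # each label is 256 KiB
    echo 0                       # L0: start of device
    echo "$label"                # L1: 256 KiB in
    echo $((size - 2 * label))   # L2: 512 KiB before the end
    echo $((size - label))       # L3: 256 KiB before the end
}
```

This also illustrates why the end of the disk matters: a GPT backup header lives in the last sectors too, so the trailing labels and the backup GPT sit close together.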

----------

## Spargeltarzan

 *ct85711 wrote:*   

> 
> 
> ```
> zpool labelclear -f [drive/partition]
> ```
> ...

 

Many thanks for pointing me to this command. I created a backup of all my partitions with dd into an image file and ran zpool labelclear without -f, and it worked fine. gparted now recognises my partitions without trouble. I gave ZFS a partition on this disk for the cache and a second partition for the ZIL, and my second 256GB SSD goes fully to the ZFS cache.

All disks are still recognised fine.

----------

