# Mounting HFS+ JBOD Volume fails

## k_klunz

Hello everyone,

I am having a problem and I am running out of ideas:

I run Gentoo on a MacBook; because of this, most of my external hard drives are formatted with the HFS+ filesystem.

One of those is a Macpower Pleiades Single Bay with a 2TB Western Digital HDD inside.

This is the drive I work with most of the time and it mounts without problems. Reading and Writing also works flawlessly.

The second drive causes problems. It is a Macpower Hydra 4-bay enclosure with four 1 TB Western Digital drives in JBOD mode.

These form one single 4 TB HFS+ volume.

Checking that drive with OS X tells me that the drive is perfectly fine.

When I attach the drive to the Gentoo system, dmesg gives me:

 *Quote:*   

> [  395.442249] scsi 5:0:0:0: Direct-Access     External RAID             0    PQ: 0 ANSI: 5
> 
> [  395.442608] sd 5:0:0:0: Attached scsi generic sg2 type 0
> 
> [  395.446186] sd 5:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
> ...

 

However, when I try to mount the drive, I get

 *Quote:*   

> Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sdb2,
> 
> missing codepage or helper program, or other error
> 
> In some cases useful info is found in syslog - try
> 
> dmesg | tail or so

 

dmesg gives me:

 *Quote:*   

> [  490.374972] mount: sending ioctl 5310 to a partition!
> 
> [  490.374978] mount: sending ioctl 5310 to a partition!
> 
> [  490.375937] hfs: invalid secondary volume header
> ...

 

Especially the last dmesg line is strange to me, since I can mount the other external HFS+ drive just fine. Furthermore, the problematic drive works flawlessly on the Mac mini in the living room, which also tells me the drive is fine.

I am running kernel 3.2.12-gentoo and I am pretty sure that I have all the necessary options compiled into my kernel.

What am I missing here?

Thank you very much in advance, have a nice evening

tobe

----------

## NeddySeagoon

k_klunz,

Four drives in JBOD mode should appear as four drives, but the partition table on /dev/sdb describes all the space on all four drives.

You have some sort of raid or LVM going on here. dmesg says it's an external raid0 ... 

If it were raid of some sort, it's quite possible that the MBR would be on a single drive, so the partition table would describe all the space your dmesg shows.

Check your kernel for 

```
 [*] Probe all LUNs on each SCSI device 
```

I'm expecting that to be off, and that three more drives will show up when it's on, as 

```
scsi 5:0:0:1: 

scsi 5:0:0:2: 

scsi 5:0:0:3:
```

with the kernel complaining about their partition tables.
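If it helps, there is no need to rebuild just to check: the symbol behind that menu entry is CONFIG_SCSI_MULTI_LUN, so a quick grep will do (paths below are the usual Gentoo defaults; adjust if yours differ):

```shell
# Check the build tree's .config for the multi-LUN probe option:
grep SCSI_MULTI_LUN /usr/src/linux/.config

# Or, if CONFIG_IKCONFIG_PROC is enabled, ask the running kernel
# directly, which also guards against booting a stale kernel:
zcat /proc/config.gz | grep SCSI_MULTI_LUN
```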

----------

## khayyam

tobe ...

two things to check: 

1). Is the disk that fails HFS+ journaled?

2). Partitioning: GPT or Apple partition map?

For 1. you can --force, but you risk damaging the journal (I say "risk" because that's why the --force is required; it's a warning, and many do mount HFS+ journaled filesystems without data loss).

For 2. you need to make sure you have CONFIG_EFI_PARTITION and CONFIG_MAC_PARTITION enabled for GPT and Apple partition map respectively. 
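A sketch of both checks; /mnt/external and /usr/src/linux are assumed paths, and on Linux the hfsplus driver spells the override as the mount option "force":

```shell
# 1) A journaled HFS+ volume mounts read-only by default; "force"
#    overrides that, with the risk described above:
mount -t hfsplus -o force,rw /dev/sdb2 /mnt/external

# 2) Partition table support in the kernel config:
grep -E 'CONFIG_(EFI|MAC)_PARTITION' /usr/src/linux/.config
```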

HTH & best ... khay

----------

## NeddySeagoon

khayyam,

The kernel sees the partitions

```
[ 395.462565] sdb: sdb1 sdb2 
```

so the partition code is there.

It's looking at the first drive of a raid0 set.

----------

## k_klunz

Hi there,

thank you very much for your answers.

1.) The disk is not journaled, as this apparently causes problems with write access.

2.) The partition table is GPT, support for this is compiled into the kernel.

I compiled the option that NeddySeagoon mentioned into the kernel; however, I still can't mount the drive.

That said, it seems I was mistaken when I said the drives were attached as JBOD.

I set the RAID option on the HDD enclosure to "Not in use" and therefore assumed that the RAID controller was not in use.

Apparently that setting doesn't create a JBOD, but rather configures the 4 HDDs as "spanning", which means one big volume is created over all 4 drives.

That is why dmesg mentioned a RAID, I guess.

After putting some more options into the kernel, dmesg now looks like this:

 *Quote:*   

> [  480.367369] usb 2-1: New USB device found, idVendor=0dc4, idProduct=000a
> 
> [  480.367376] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
> 
> [  480.367383] usb 2-1: Product: Hydra Super S Combo
> ...

 

However, trying to mount the drive still results in the same messages as above.

I still think I am missing some kernel option. Could it be I need some low-level SCSI drivers?

I now have the following kernel configuration:

 *Quote:*   

> [root@laptobi:src/linux]# cat .config | grep SCSI                               
> 
> # SCSI device support
> 
> CONFIG_SCSI_MOD=y
> ...

 

Thanks again

tobe

----------

## NeddySeagoon

k_klunz,

Either your drive enclosure is going to assemble your raid and hide it from the kernel, so the raid set appears as a single SCSI drive, or your enclosure will export all four drives individually and the kernel will need to assemble them into a raid set.

Raid comes in three basic sorts. The first is kernel software raid, where the kernel assembles the individual block devices into a raid set. There are two arrangements of raid0, striping and linear. Striping is fastest, as stripes are read/written to each block device in turn. Linear uses each block device in turn. Both make it impossible to recover your data if one element of the raid set dies.
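A toy illustration of the striped-versus-linear difference (all numbers invented, nothing to do with the enclosure's real geometry): striping deals blocks round-robin across the members, while linear fills one member completely before starting the next.

```shell
# Toy raid0 mapping: 4 members, 1-block chunks, 100 blocks per member.
disks=4
chunk=1
disk_blocks=100
lb=205   # an arbitrary logical block number

striped_member=$(( (lb / chunk) % disks ))   # striping: round-robin
linear_member=$(( lb / disk_blocks ))        # linear: fill in turn

echo "block $lb: striped -> member $striped_member, linear -> member $linear_member"
```

Either way, each logical block lives on exactly one member, which is why losing any member loses the whole set.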

BIOS (fake) raid, where the kernel can see all of the underlying devices and the raid set. This needs dmraid to see the raid set, and you mount the partitions you find in /dev/mapper/. You also need Device Mapper support in the kernel.

Real hardware raid, which makes a huge hole in your bank account and hides all of the individual drives from the kernel. It appears as a single drive.

I'm surprised that you can't see the individual drives with the Probe all LUNs option on.

What does 

```
uname -a
```

 show?

I'm particularly interested in the date/time, as that's the date and time the running kernel was built.

----------

## k_klunz

NeddySeagoon,

thanks a lot for your answer, as requested:

 *Quote:*   

> [tobi@laptobi:~]$ uname -a
> 
> Linux laptobi 3.2.12-gentoo #14 SMP Wed Aug 22 21:14:01 CEST 2012 x86_64 Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz GenuineIntel GNU/Linux
> 
> [tobi@laptobi:src/linux-3.2.12-gentoo]$ cat .config | grep LUN
> ...

 

I am not sure I understand what the LUN option does, but my understanding (which is obviously too simple, but should still get the point across) goes as follows:

The 4 HDDs in the enclosure are attached to a SCSI and a RAID controller, which combines them into one big spanning device.

So everything the kernel sees is one big device, connected to a USB port. It simply doesn't know that it's actually 4 HDDs connected to a SCSI controller; that fact is hidden by the RAID controller.

Please correct me if I am wrong.

Greetings

tobe

----------

## NeddySeagoon

k_klunz,

Your kernel build date is today, so you are probably running the kernel you posted the config snippet for.

I needed to check because many users recompile their kernel correctly to fix their problem, then mess up the install step, so the new kernel is not used.

Real SCSI is a bus; early implementations allowed for 8 devices to be connected, including the controller.  Later, this was expanded to 16 devices.

USB storage devices use the SCSI protocol over USB, so the USB carrier is effectively 'invisible' to the actual devices involved in the link.

Your system thinks it has a real SCSI device, or maybe more than one, in the drive enclosure.

When a single box contains several physical hardware devices, the SCSI data structures allow them to be exposed in two ways.

The first is as separate SCSI devices on the same SCSI bus. However, individual SCSI devices may contain subdivisions called Logical Units.

The kernel option, Probe all Logical Unit Numbers, makes the kernel look for these subdivisions.  If your raid set was exporting the other drives in JBOD mode using Logical Unit Numbers, you would have found three new drives in /dev.  It seems that didn't happen. Without the LUN kernel option, you would only have seen LUN 0.  USB card readers use LUNs for the different card types. More exotic SCSI devices like tape drives with an autochanger use a separate LUN for each tape.
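As an aside, the address dmesg prints is host:channel:target:LUN, so the fourth field is the LUN discussed above; a quick bash split of the address from the earlier log:

```shell
# "scsi 5:0:0:0" decodes as host 5, channel 0, target 0, LUN 0
addr="5:0:0:0"
IFS=: read -r host channel target lun <<< "$addr"
echo "host=$host channel=$channel target=$target lun=$lun"
```

With multi-LUN probing on, JBOD members exported as LUNs would have appeared as 5:0:0:1 and so on.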

We now know that your external enclosure is doing the raid0 and exporting the raid set as a single drive.  Your PC is correctly seeing the first drive in the raid set, as that's where the partition table is.  This partition table correctly describes all of the space on all four drives.  The other three drives will have a blank partition table.

We know the size of your /dev/sdb

```
 [ 481.377302] sd 5:0:0:0: [sdb] 7814100664 512-byte logical blocks: (4.00 TB/3.63 TiB) 
```

Can you read a block from close to the end with dd ? 

```
dd if=/dev/sdb of=/dev/null bs=512 count=1 skip=7814100000
```

That will read one block from close to the end of the volume (the fourth drive) and throw it away.  If it fails, you will get an error message.

dd does raw device reads and writes. There is no undo, so be sure you get the input file and output file the right way round.

If it works, you have proved the raid0 is provided by the enclosure and it's probably all there.  If not, the kernel cannot read the last drive in your linear raid.
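If you want to rehearse the command before pointing dd at the real device, the same read can be tried on a small sparse file first (the 1000-block size here is made up; only /dev/sdb's 7814100664 blocks come from the dmesg above):

```shell
# Stand-in "device": a sparse file of 1000 512-byte blocks.
img=$(mktemp)
blocks=1000
truncate -s $(( blocks * 512 )) "$img"

# Read one block close to the end and throw it away; dd exits
# non-zero if the read fails, just as it would on /dev/sdb.
if dd if="$img" of=/dev/null bs=512 count=1 skip=$(( blocks - 10 )) 2>/dev/null; then
    echo "read ok"
fi
rm -f "$img"
```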

----  edit  ----

What does this, from the *Hydra Super-S Combo User Manual*, mean?

>  In order for the computer to access volumes larger than 2TB, both the hardware and Operating
> 
> System need to support large volumes (e.g.: WinVista 32bit/64bit or Mac OS 10.4 and above) or
> 
> the 2TB switch has to be set to position A.

Which position is that switch in?

----  edit  ----

Is your Gentoo install 32-bit or 64-bit?

----------

## k_klunz

NeddySeagoon,

thanks again for your help and your detailed explanations.

I fully understand your caution as to the date of my kernel; those simple mistakes are just far too common and are often the reason for problems.

I did in fact not know that a SCSI device connected over USB uses the SCSI protocol on top of USB, so the USB layer basically stays invisible; thanks again for the explanation.

However, it seems my problem lies elsewhere.

After researching the topic some more, I found some interesting pages:

An older post on the kernel mailing list concerning the problem:

http://www.gossamer-threads.com/lists/linux/kernel/1140400

A somewhat newer bug report that seems related (especially the answer by Seth Forshee is interesting):

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/889928

The comment by Seth sounds like I have to add some parameters to the mount command, although I am not sure what parameters that would be.

Most interesting and newest is this history log of /linux/stable/fs/hfsplus/wrapper.c:

http://code.metager.de/source/history/linux/stable/fs/hfsplus/wrapper.c

The comment to the newest revision sounds a lot like a fix to my problem, so it seems like I will just have to wait until this fix makes it to the mainline kernel.

Regarding your other questions: I'm currently at work and will have a chance to check those things in the evening.

The switch you mentioned has to be in the correct position, as the full 4 TB works just fine under OS X. Otherwise I would only see 2 TB.

As far as my system goes, I am 100% positive its a 64bit system.

Greetings

tobe

----------

## NeddySeagoon

k_klunz,

The updated http://code.metager.de/source/history/linux/stable/fs/hfsplus/wrapper.c is in the 3.5.2 gentoo-sources kernel.

That was the gentoo-sources testing kernel when I updated my kernel on 19 Aug.

Try the testing kernel.  If the sound of testing makes you nervous, install the testing kernel beside your current kernel, so you can choose kernels at boot.
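For reference, the usual Gentoo steps to run a testing kernel alongside the current one look roughly like this (a sketch; the version number is an example, and the bootloader step depends on how your /boot is set up):

```shell
# Accept the testing keyword for gentoo-sources only
echo "sys-kernel/gentoo-sources ~amd64" >> /etc/portage/package.accept_keywords
emerge -av gentoo-sources

# Build it next to the old kernel; each version keeps its own files,
# so the 3.2.12 kernel remains bootable.
cd /usr/src/linux-3.5.2-gentoo                 # example version
cp /usr/src/linux-3.2.12-gentoo/.config .      # start from the working config
make oldconfig && make && make modules_install
cp arch/x86/boot/bzImage /boot/kernel-3.5.2-gentoo

# finally, add a second entry to the bootloader menu
```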

----------

