# Filesystem question [unsolved]

## LIsLinuxIsSogood

I know it isn't a desktop environment question, but I'm not really sure where else this can go.  I am trying to do some fancy sharing of a data partition between two different operating systems located on the same disk in different partitions.  The partition scheme is as follows:

EFI partition (vfat)

Data partition (ntfs)

Linux OS1 partition (ext4)

Linux OS2 partition (crypto_LUKS with 2 LVM volumes)

My question is a tricky one, since I usually use hibernation with BOTH systems, although I may need to change that behavior and only use hibernation with one of the two OSes.  Because the data (ntfs) partition is often mounted in both systems, how do I prevent the problem where suddenly there are input/output errors, I guess from writing, moving, and renaming of files?  It would be nice if there were another filesystem option, or a service in the operating system, that addresses mis-synced data between the two instances of the disk partition.  Maybe a scripted way of handling it.  Basically the idea is to have my user files available on both systems.

Would it improve the situation at all to be sharing these via some service, or else mounting them differently even?

Right now, when I move or create a file in one system and the other system resumes from hibernation, the file shows up with a bunch of weird properties and an input/output error.  Is there a better procedure for what I'm trying to do, in terms of sharing a data partition across several "live" but not necessarily simultaneously active partitions?

Last edited by LIsLinuxIsSogood on Thu Aug 01, 2019 10:12 pm; edited 2 times in total

----------

## Hu

Kernel & Hardware seems like a decent choice.  Would you like for this topic to be moved there?

To your question, the answer is simple: don't do that.  Linux hibernation has long been documented not to permit you to (1) mount a filesystem, (2) hibernate, (3) modify that filesystem in any way outside the hibernated system, (4) resume from hibernate, and (5) get any sort of useful result.  This caveat is typically applied to people trying to share filesystems between Linux installs on the same system, but the same problem applies to foreign filesystems like NTFS.  Your choices are: (a) unmount the filesystem before you hibernate or (b) leave the filesystem completely unmodified while hibernating.  Mounting it read-only in both systems is one choice that will probably satisfy (b).  (There is a caveat here: sometimes a read-only mount will try to replay a journal or do other housekeeping, at which point it is not truly read-only, and the requirement of non-modification is violated.  You will likely be fine if the filesystem is unmounted in both, then mounted read-only in one, then you hibernate, then mount it read-only in the other.)  Of course, last I looked, Windows did not make it particularly easy to mount a filesystem read-only, so even if that advice is technically correct, it may not help you.

To partially handle this problem, many common Linux hibernation helpers can be configured to unmount filesystems on hibernation, so that the kernel can gracefully remount it when you resume, thus avoiding the inconsistency problems.

----------

## LIsLinuxIsSogood

 *Hu wrote:*   

> To partially handle this problem, many common Linux hibernation helpers can be configured to unmount filesystems on hibernation, so that the kernel can gracefully remount it when you resume, thus avoiding the inconsistency problems.

Unmounting each time beforehand would be a good option.  I was unaware there was a hibernation helper.  Would this be something like a shell script, or a complete application program?

 *Hu wrote:*   

> Would you like for this topic to be moved there?

 

Makes sense, Yes.

Also, just FYI, this is actually for a laptop that I use to develop and program; it runs CentOS and another Debian variant, but there is no actual Gentoo installation, if that makes a difference.

----------

## Hu

There are several ways you might be suspending the system.  Each has its own mechanism for hooking in special handling.  One common approach for CLI-oriented users is sys-power/hibernate-script, which reads a configuration file that can, among many other things, unmount filesystems or run programs of your choosing on suspend, on resume, or both.
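For illustration, the relevant part of a hibernate-script configuration might look something like the following.  The directive names are as I remember them from the stock common.conf; verify them against hibernate.conf(5) on your system, and the mount point /home/data.loc is an assumption:

```
# /etc/hibernate/common.conf (excerpt, names unverified)

# Unmount these filesystems before suspending, remount on resume:
Unmount /home/data.loc

# Or unmount everything of a given type:
# UnmountFSTypes ntfs ntfs-3g

# Arbitrary commands can also be hooked in, with a priority number:
# OnSuspend 20 logger "suspending now"
# OnResume  20 logger "resumed"
```

The point is that the helper, not your own ad-hoc script, takes care of running umount, checking that it succeeded, and remounting on resume.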

I expect all the desktop/laptop oriented Linux distributions will have capable offerings in this area.  You might have to search a bit to find what other distributions named their helpers, but I would be surprised if the helpers were not available.

----------

## LIsLinuxIsSogood

So I'm making some progress here...

1) I forgot to mention that I want to recover the corrupt files at this point too, so is this a job for ntfsfix, or is there another tool?  I've had problems with that tool before, that's why I ask.

2) Then, to address the repeat problem and make sure it doesn't happen in the future: I believe making one of the two installed operating systems mount it read-only is going to work, but to be very safe about the use of hibernation I would also like to use what I've been learning about hooks for scripts run pre-hibernation and at resume, with the systemd service that controls it.  As for mounting and unmounting, though, I am a bit confused about how that will work, because if files are actually open I don't think the shell or operating system is going to permit unmounting the partition due to open files on it... or will it?  It could be that my script needs to check for that kind of thing and take action first.  That seems like a big challenge to figure out without closing open applications, at which point it defeats the idea of freezing the system state to hibernation in the first place.  How might I use the suspend/hibernate functionality to check first and then unmount, or is that not a necessary precaution?

----------

## LIsLinuxIsSogood

More bizarre stuff keeps happening to me.  I wrote a test script and dropped it into the location used for sleep/hibernation hooks.  It worked one time: the shell script executed both pre-hibernation and post-resume, although while the unmount command clearly worked, the mount command did not.  I'm thinking that might be an issue of timing, of when the command is executed upon resume.  The bigger problem is that while the script ran correctly the first time, I can't seem to make it work again!!!!  What could be the problem?  Is it worth a fresh restart and trying again, or even just logging out and back in?  Weird that the systemd service hook script would work once and then stop!

Correction: this was my own mistake, user error... both my test script and the other one had a mistake in line 1.  Continuing on, but I still can't seem to get remounting working on resume.

Here is a copy of the script I'm using.

```
#!/bin/bash
# systemd-sleep hook: $1 is "pre" before sleeping, "post" on resume.

if [ "${1}" == "pre" ]; then
    touch /home/jonathanCentOS/hook.sleep.started
    umount /home/data.loc
    echo "time for unmount is $(date)" > /home/jonathanCentOS/unmount
elif [ "${1}" == "post" ]; then
    touch /home/jonathanCentOS/hook.sleep.complete
    #service NetworkManager restart
    touch /home/jonathanCentOS/hook.resume.started
    sleep 3
    set -x
    mount /home/data.loc
    touch /home/jonathanCentOS/hook.resume.complete
fi
```

Could someone please tell me why the mount command might not be working?  Of course only the mount point is being passed, which /etc/fstab has an entry for, and which I would think is enough for the bash script to work with.  But I don't know much about this sort of stuff, so any help is appreciated.

Further details:

 *Quote:*   

> Jul  8 23:08:32 Reznik-CentOS ntfs-3g[5812]: Version 2017.3.23AR.4 integrated FUSE 27
> 
> Jul  8 23:08:32 Reznik-CentOS ntfs-3g[5812]: Mounted /dev/nvme0n1p4 (Read-Write, label "USER_DATA.ntfs", NTFS 3.1)
> 
> Jul  8 23:08:32 Reznik-CentOS ntfs-3g[5812]: Cmdline options: rw
> ...

 

It looks like it tries to mount successfully, but then ownership and permissions are disabled and it unmounts right away, which is very strange?!

Another Update:

While coming to grips with the damage done to the NTFS filesystem by doing things the wrong way, I was luckily able to make copies of about 90-95% of everything using rsync, from the NTFS partition over to an existing, unused vfat partition on an SD card.  So now I am going to take my last stab at recovering the files: I just downloaded Windows (ahhh) and am burning the image to USB, to see if booting that will give me a repair option to run chkdsk, since from the sounds of it there is no comparable tool available for NTFS from within Linux.  That seems strange, given that Windows and Linux are probably two of the most common operating systems found across the Internet as a whole.  Oh well.

----------

## LIsLinuxIsSogood

Marking as solved; I'm going to open a new thread regarding data recovery for the ntfs partition that was compromised.  I believe tending to that will maybe also help solve the current issue with remounting at resume from sleep, once the disk is fixed!

----------

## Hu

 *LIsLinuxIsSogood wrote:*   

> 1) I forgot to mention that I want to recover the corrupt files at this point too so is this a job for ntfs-fix or is there any other tool?

I'd use whatever tool is most efficient at restoring your backups of that drive.  :Smile:

*LIsLinuxIsSogood wrote:*

> In order to mount and unmount though I am a bit confused how that will work because if files are actually open I don't think that the shell or operating system is going permit unmounting the partition due to open files on it...or will it?

It will not permit the unmount.  Depending on the hibernation script, this failure may be ignored and leave the filesystem mounted, or it may abort the entire hibernation.  The latter is preferable here, since the goal is to ensure data integrity by performing the unmount.  In some cases, you can arrange for the offending processes to be killed, but that's disruptive.  I prefer to have the hibernation fail and let me, as the system owner, decide whether I want to close the processes or refrain from suspending.

*LIsLinuxIsSogood wrote:*

> Could be my script may need to check that kind of thing and take action there first...seems like a big challenge to have to figure out without closing open applications at which point this defeats the idea of freezing the system state to hibernation in the first place.  How might I use the functionality of suspend/hibernate to check first then unmount or is that not a necessary precaution to take?

If the hibernation tool has a dedicated hook for unmounting, it should handle both running umount and checking that the unmount was successful.  If you reimplement this using a more generic hook, then at minimum you will need to propagate the exit code of umount up from your script so that your parent can see that your script failed.

*LIsLinuxIsSogood wrote:*

> It worked one time the shell script executed at both  pre hibernation and post for resume, even though the unmount command clearly worked and the mount command did not, I'm thinking that might be an issue of timing when the command is executed upon resume.

Is the device in question one which can be slow to appear after a resume, such as a USB device?  Builtin drives ought to be online soon enough for a mount to succeed.

*LIsLinuxIsSogood wrote:*

> The bigger problem is that while the script ran correctly for the first time, i can't seem to make it work again!!!!  What could be the problem IDK?

You have more information about it than we do.

*LIsLinuxIsSogood wrote:*

> Is it worth maybe fresh restart and trying again, or even just logging out and back in?  Weird that the systemd service hook script woud work once and then stop!

I don't use systemd, but I am never surprised when it exhibits unexpected results.

*LIsLinuxIsSogood wrote:*

> 
> 
> ```
> #!/bin/bash
> ```
> ...

Missing set -e.

*LIsLinuxIsSogood wrote:*

> Could someone please tell me why maybe the mount command isn't working, of course this is the mount point only being passed, which /etc/fstab has an entry for, which I would think is enough for the bash script to work with, but I don't know anything about this sort of stuff really so of course help is appreciated.

What output did mount produce?  It may have been written to a log file instead of a terminal.  What is the fstab line for this mount?

*LIsLinuxIsSogood wrote:*

> Why it looks like it tries to successfully mount, but then Ownership and permission disabled and then unmount right away is very strange?!

Yes, it is strange.

*LIsLinuxIsSogood wrote:*

> to try and see if booting that will give me a repair option to run chckdsk since from the sounds of it there is no comparable tool available for NTFS from within linux, which seems strange given that Microsoft and Linux are probably two of the most common operating systems found across the Internet as a whole.

 NTFS is a complicated filesystem.  Linux support for it has never been at the level of support for native filesystems.  If I were to guess, the lack of a checker is because no one has both the knowledge and motivation to write a good one.

----------

## LIsLinuxIsSogood

Also, thanks by the way.  But what is the purpose of the set -e command, versus set -x, in the context of bash scripts?  Some of the points here are arguable, but I like all of what was mentioned.

Since going forward I want to use this chance to have some data recovery practice I've given it some thought and what I will do...

1) Get a cheap 1TB harddrive to use to copy everything over.

2) Use the dd command to create a perfect bit-for-bit copy

3) Then attempt to read the files using various techniques, including possibly mounting the partition on a Windows system

If none of this works then I'm over it.
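Step 2 of the plan above could be sketched like this, demonstrated on a small scratch file rather than a real disk (the device names in the comments are assumptions):

```shell
# Demonstrate a bit-for-bit copy with dd, then verify it.
# Against the real hardware the same idea would be roughly:
#   dd if=/dev/nvme0n1p4 of=/dev/sdX bs=4M conv=noerror status=progress
# (for a failing disk, GNU ddrescue is usually the better tool).
src=$(mktemp)
dst=$(mktemp)
printf 'pretend this is a partition' > "$src"

dd if="$src" of="$dst" bs=4M status=none   # exact copy, no transformation

cmp "$src" "$dst" && echo "copies are identical"
```

Running it prints "copies are identical"; on real devices the cmp step is what confirms the clone before you start experimenting on it.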

Actually dealing with another important problem hardware on a separate motherboard, so going to open up that new post now.

----------

## Hu

set -e is roughly equivalent to appending || exit $? to most statements.  It asks the shell to treat the failure of (most) called programs as fatal, and exit the script with the same code as the failure code of the called program.  It is useful because ignoring errors often makes a bigger mess as later steps mishandle the broken state left behind by the earlier failed command.

set -x causes the shell to print statements as it runs them.  It is useful for understanding execution flow, but has no impact on what the shell runs.
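A tiny, safe illustration of the difference, with the failures contained in child shells:

```shell
# errexit: the child shell stops at the first failing command.
bash -c 'set -e; false; echo "never reached"' || echo "stopped with status $?"

# Without errexit: execution continues past the failure.
bash -c 'false; echo "still running"'

# xtrace: commands are printed (to stderr) but behavior is unchanged.
bash -c 'set -x; echo "traced"'
```

The first line prints "stopped with status 1" and never reaches its echo; the second prints "still running" despite the failure; the third prints both the trace line and "traced".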

----------

## LIsLinuxIsSogood

As I'm totally new to bash scripting and have literally not a clue in this category, this may sound like a dumb question (I know there aren't dumb questions, just dumb people... ha), but seriously: is this set -e command used before every single line or command to be executed?  Or is it enough to include it one time at the top of the script?  In other words, how often does it get invoked once it is included: does it apply to the rest of the line, or to the entire script afterwards?

On another note, I just want to be very sure that I'm thinking correctly about the plan going forward... IF I MOUNT the filesystem read-only on one of the two operating system installations and switch back and forth, then on which of the two systems would the unmount hook need to go?  Clearly the read/write one should not experience any changes to its mount if the read-only one is used in the interim; however, going the other way could be tricky, and writing to the partition, even with unmounting afterwards, is still likely to leave some weird discrepancies on the read-only mount "instance", if you will.  For that reason I would assume that if I have one rw and one ro, the hibernation hook script to remount needs to be on the ro system only.  Phew, is that pretty much right?

----------

## Hu

set -e is used once, and persists until set +e reverses it or the shell exits.  You can freely toggle it during the script, and there are sometimes good reasons to do so.  For example, if you need to call multiple commands that you expect will return a failure status, and you wish to ignore the failures, temporarily disabling errexit is easier than overriding it on every line.  The failures could be because the called program is badly written, or because you are doing an operation that may fail, such as deleting files that may or may not exist.  If you had to put it on every line, you could just as well write the corresponding long form of adding || exit $? to every line.
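That toggling pattern, as a minimal sketch (run in a child shell so it is safe anywhere):

```shell
bash -c '
  set -e                         # strict from here on
  echo "strict section"

  set +e                         # tolerate expected failures...
  rm -- no-such-file 2>/dev/null          # may fail; we do not care
  grep -q needle no-such-file 2>/dev/null # may also fail
  set -e                         # ...then back to strict

  echo "made it to the end"
'
```

Despite two failing commands in the middle, the script reaches its final echo and exits 0, because errexit was suspended for exactly that section.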

I would be most comfortable if the filesystem were always unmounted before hibernation, in both systems.  You could probably get away with making the filesystem read-only in both systems, and then leaving it mounted.  If you expect that one side will modify the filesystem, then the other side must unmount it first.  You might be safe if the non-modifying side has the filesystem mounted read-only, but I would avoid that configuration if possible.

----------

## LIsLinuxIsSogood

Regarding the dual boot Linux systems on the same laptop: I want to follow your advice to unmount on both sides and not risk any issues at all.  As it turns out, for one of the two operating systems this has already been fixed via that system's pre-hibernation script, which is working fine!  That is, except for one small thing, which is remounting the drive; but I am sure I can fix that after I reformat the partition, which I have yet to do.  I'm still finishing some data recovery on it; the next step is a bit-for-bit copy of the data... one step at a time.

I am taking your suggestion to heart about unmounting on both sides.  The way I am thinking to execute that is in the kernel, by disallowing hibernation completely on the other (second) operating system.

But I have some questions regarding that...

1) Are shutdown and suspend-to-RAM unaffected if I need to remove the kernel modules for swsusp, or whichever is the right one to remove?

2) Oh, by the way, which one is it that I need to remove: swsusp or uswsusp?

3) Are there other programs or services that will need to be cleaned up to remove the hibernation functionality?

In doing this I will hopefully be following your advice and avoid switching back and forth between two active operating systems with a shared mount (one that isn't unmounted and remounted each time the system powers down).

I would really like to avoid the overhaul of having to work with the operating system services and programs if at all possible.

I would like to confirm, before I attempt any changes, that modifying the kernel isn't pointless here.  Oh, and thank you for the prior explanation regarding bash scripts!  I find it helpful.

----------

## Hu

Disabling hibernation completely is a bit overkill, but that's certainly one way to stay out of trouble.  You can have a system that can suspend-to-RAM, but not suspend-to-disk.  Set CONFIG_HIBERNATION=n to disable suspend-to-disk.  No, you can leave everything else hibernation-aware while running a hibernation-disabled kernel.  Nothing should mind, other than that if you try to trigger a hibernation, it will fail and not sleep at all.
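For reference, the relevant options would look like this in the kernel .config (found under Power management options in menuconfig):

```
# Suspend-to-RAM stays available:
CONFIG_SUSPEND=y
# Suspend-to-disk disabled:
# CONFIG_HIBERNATION is not set
```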

----------

## LIsLinuxIsSogood

So I am un-solving this problem, but at the same time I am open to some new possible solutions, even more involved ones like a virtualized or cloud solution.  The problems with some of these: e.g. cloud relies on the internet and is insecure; also, I don't like USB because it can be misplaced.

So, to recap the situation: I have a dual boot laptop with two Linux systems (neither is Gentoo, oops).  I use this laptop for programming/developing, and also for things like office productivity and other multitasking.  I like to keep the two systems separate because of the number of folders and files on each of them, and also because of their different priorities: security is more of a concern for the day-to-day stuff (productivity: think email, office, etc.), whereas with programming, flexibility is the main and only concern (to test software, etc.).  Unfortunately I have a limited budget and just one such laptop.  I do have two other PCs, an HP and a custom-built desktop, both running Gentoo  :Smile:

Hu, I agree that disabling hibernation is going to be overkill, especially because I would actually prefer to have that feature and would rely on it quite a lot.  I think the solution (if there is a viable one) is going to involve read-only mounts, but as you've already mentioned, that mount as well presents some risk.  I'm not sure what kind though?  Is the risk there because I am mounting the same partition on two systems, one read/write and one read-only?  Also, more generally, does the exposure from this risk ever go beyond the partition in question, to the other disk partitions?  The last thing I need is data corruption on the two system partitions just because I cleverly wanted a way to share data between them.  I believe another potential solution could be an SD card!  However, as far as I can tell, the same problem will exist when hibernating... so back to the initial question about dual boot and hibernation (the big no-no in question).  If there were some kind of functionality in the kernel or user space on either of these systems to manage it, so that a particular disk (by ID or mount point) is completely, and I mean 100% absolutely completely, unwritable, by disabling write mounting in any capacity from just the programming environment (one of my two systems), I would then have a no-conflict situation with these files.  Is there?

Also, what about a solution that would run a quick sync of the data from one disk to the other (say from HD to SD card) right before hibernation?  That way, if at some point the disk does get corrupted, there is a backup from right before...

I really don't know, but need to figure this out.  Thanks!

----------

## NeddySeagoon

LIsLinuxIsSogood,

Read only mounts are not read only.

You mount the filesystem RW, so the dirty flag is set.

You mount it again RO, mount notices that the dirty flag is set and does a journal replay, or even a fsck. That's effectively writes.

It may well trash the filesystem.

It gets worse ...

The caches in the RO and RW side are not coherent, so you are likely to see different filesystem contents in both mounts.

----------

## Hu

Neddy's post nicely summarizes the problem.  As a consequence of the problem, at any given time, the filesystem must be in one of these states:

- Mounted by nothing.
- Mounted read-write in one system.  Not mounted anywhere else, even read-only.
- Mounted read-only in two or more systems.

That last is theoretically safe, but I would avoid it if possible because it is dangerously close to a known-bad configuration.  In particular, it is dangerously easy to move from that state to a definitely bad state, if anything decides to mount -o remount,rw the filesystem.  As Neddy says, there is a critical difference between the filesystem being read-only to applications versus the filesystem being wholly immutable, even to the kernel.

I would expect the risk to be confined to the filesystems you use in this way.  The only way I know of for corruption to spread would be if you corrupted the shared filesystem in a way that the kernel then panicked at an inopportune time, and the panic left writes to a non-shared filesystem incomplete in a way that caused corruption.  As I understand the dire warnings about trying to multi-mount, panic is a real risk.  Whether that panic would interrupt a critical write is less clear.

I believe you can make a block device read-only, but I am unsure whether such a device could still let you mount the contained filesystem.  Also, I vaguely recall seeing a kernel bug that allowed writes to a read-only block device.  That has been fixed, but it demonstrates the limits on calling something "read-only."

I think your best solution is to have both systems explicitly unmount the shared volume during hibernation, and abort the hibernation if that fails.  It is simple and easy to reason about, because it stays away from the scenario of trying to multi-mount the filesystem at all.  If you are adamant not to do this, you could instead remount read-only during hibernation, and abort on failure.  This is less safe, because it assumes that the other side will not remount read-write before you resume, and it has no way to enforce that assumption.  It also greatly diminishes the utility, since you can't write to the shared volume from the other system.  If it were my system, I would go with the full unmount on hibernation.  It's a bit less convenient to the system entering hibernation, but much safer.

Have you considered using only one Linux system, but using a chroot from one context to the other?  This would provide a decent level of file isolation, although on its own it does nothing for process isolation.  Careful use of containers could provide that, though.

----------

## LIsLinuxIsSogood

Truth is, I already implemented unmounting pre-hibernation on one of the systems, so now I just need to repeat that on the other system.  I didn't previously understand how RO is sort of a misnomer, so I will be sure to have the partition unmounted before hibernating either operating system instance.  From the sound of it my options are many; but seeing as I'm not really interested in taking on a steep learning curve with something like containers, or even virtualization, maybe sticking with something I already know, a chroot environment, is a good solution!  The first problem I could imagine running into is that right now I'm operating two different kernels.  But I guess one way to see the effect is to go ahead and try it out.  If it works to build a developer environment inside a chroot, then I would use that to go in and out of the sandbox inside my more secure operating system.  Thanks for that idea; now I just have to try to do it.  Would you suggest a particular method of installing the chroot, and is that going to be distro-specific?  E.g., a distro1 chroot inside distro2 is maybe going to be different from a distro2 chroot inside distro1.

----------

## 1clue

Is this actually a dual boot system, or is it a host and a VM guest?

Why NTFS? You have 2 Linux operating systems, and you're using an alien filesystem as an intermediate. NTFS support is still not as good as a Linux-native filesystem when all the "owners" are Linux.

Feel free to ignore or flame the rest of this response if it offends you, but I only see one legitimate solution.

Make your setup into a host and VM guest.

You get access to both systems simultaneously.

The shared drive is owned by one system and exported using NFS or similar.

The other system mounts it and uses it.

Hibernating either system protects the state of the shared partition.

----------

## LIsLinuxIsSogood

Thanks for the reply, 1clue.  I should've mentioned: no, this is not currently a VM.  But I also have a problem with the VM solution, because I no longer know the BIOS password to my own computer!!!  And that means I can't turn on the hardware virtualization support.  Do you think it might still work without it?

Alternatively, I might just be forced to combine the two systems into one, and hope that the features of the two different systems are more or less compatible when combined.  I've never had to do this, so I'm not really sure where I would start.  Maybe by creating a temporary location for all the files from one system, like somewhere in /mnt or in /home, and then installing everything again, hopefully using some package list.  It would be nice if a tool existed to migrate Linux systems like this.  Anyway, thanks for the help!  I think I will try both the chroot option and the VM option and report back on which seems better.

----------

## NeddySeagoon

LIsLinuxIsSogood,

Would two users provide the separation you need?

Both have their own /home/<user_name>

----------

## LIsLinuxIsSogood

The two-user idea is actually going to help a lot here.  In order to set it up inside a chroot, I will create a second user, since I have applications, libraries, and other stuff like that installed on the second system, and I honestly don't want the new user with access to that stuff having any direct access to the rest of the operating system's root filesystem.  I am still going to attempt the VM solution offered, but only once I am able to test it on this hardware and see how fast/slow it is without the CPU hardware support for virtualization enabled in the BIOS.  The hybrid approach of a chroot with a new user is basically what I was going for, and hopefully it goes smoothly after I back up everything (which can be accessed via the chroot) and then create a new user account like Neddy suggested, which I might be able to have only inside the chroot, operating with complete sandbox privilege.  Come to think of it, that is exactly what I want: not /home/user1 and /home/user2, but /home/user1 and /chroot/user2.  Thanks for all the help.  This is a great forum to be a part of because of the responsiveness of the people involved.  Time to get to work on some of this stuff!

----------

## NeddySeagoon

LIsLinuxIsSogood,

Many BIOSes have a default password that you cannot change. They vary from vendor to vendor and BIOS to BIOS.

Google knows lots of them :)

A few save the password in the CMOS battery-backed RAM, so that clearing the CMOS clears the password too.

However, that was in the days when the CMOS battery-backed RAM was the only writable storage available to the BIOS.

That's rare on modern systems, and it's a real pain on laptops, where there is no easy CMOS reset.

----------

## 1clue

Google your brand and model, along with "bios password reset"

----------

## LIsLinuxIsSogood

Sorry for the delayed response.  With this laptop, I have previously looked into resetting the BIOS, and it involves a hack that is WAY TOO BIG; I don't want to take any chances with damaging equipment at this point.  Meanwhile, I've gone ahead with copying all the files and making a functional chroot that is able to run graphical applications, and I'm happy with that result.  Just one weird thing so far: one of the mounts set up for the chroot is /dev (using --rbind), and something very strange can be seen as a result.

/dev/mapper/centos_reznik-root /var/tmp xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0

That is my root filesystem (on the host, as seen inside the chroot)!  The logical volume is inside a LUKS-encrypted partition, so I know the device node exists inside /dev, and therefore the chroot has to have some really weird behavior to figure out that mapping of /dev/mapper/centos_reznik-root onto /var/tmp inside the chroot environment.

Anyway, I would understand if this isn't really the right place, since it is a CentOS system; I may have to look elsewhere for help with it.  I don't really understand why it appears in the mount list, but at least the files are not visible in the chroot.  So that's good.

Here is the  output of mount from inside the chroot... see the last two lines.

```
root@Reznik-CentOS:/# mount
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=8041408k,nr_inodes=2010352,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
/proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_prio,net_cls)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuacct,cpu)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/mapper/centos_reznik-root on /var/tmp type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=8041408k,nr_inodes=2010352,mode=755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_prio,net_cls)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuacct,cpu)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/mapper/centos_reznik-root on /var/tmp type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
```
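One way to dig into where an entry like that comes from is /proc/self/mountinfo: its fourth field records which subtree of the source filesystem is mounted at the target, so for a bind mount it shows the bound subdirectory rather than /.  A quick check (the /var/tmp target here is just the entry from the listing above):

```shell
# For each mount whose mount point ($5) is /var/tmp, print which subtree
# of the backing filesystem ($4) is mounted there.  A plain mount shows
# "/"; a bind mount of the host's /var/tmp would show "/var/tmp".
awk '$5 == "/var/tmp" { print "subtree:", $4, " mounted on:", $5 }' /proc/self/mountinfo
```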

----------

## LIsLinuxIsSogood

By the way, in case I run into other problems later (security, network, graphical interface, etc.): would User-Mode Linux be any better than a chroot for the purpose of isolating user accounts, applications, services, and so on from one another?

----------

## Hu

Some directory, which may or may not be the root directory, is stored on /dev/mapper/centos_reznik-root and is bind-mounted to /var/tmp.  If this directory is /var/tmp on the CentOS root, then this is reasonable.
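This is easy to verify with findmnt from util-linux: the SOURCE column shows the backing device and, in square brackets, the subdirectory that was bind-mounted (a sketch; run it inside the chroot):

```shell
# Show the filesystem backing /var/tmp (-T resolves the path even if it
# is not itself a mount point).  If SOURCE reads
# /dev/mapper/centos_reznik-root[/var/tmp], the host's /var/tmp is
# bind-mounted here, which is the harmless case described above.
findmnt -T /var/tmp -o TARGET,SOURCE,FSTYPE
# List everything mounted below /dev recursively; these submounts are
# what --rbind replicates into the chroot.
findmnt -R /dev -o TARGET,SOURCE,FSTYPE
```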

I would prefer well-configured containers over UML, though UML may provide better isolation.  A key question you need to consider is whether you are using this isolation to prevent accidents or to confine rogue processes that may actively try to escape their isolation.

----------

## LIsLinuxIsSogood

Hu, I would have to check, because as far as I know the only mount of the root filesystem shouldn’t involve anything in /var at all, but simply /.

I can paste the mount output from the host operating system shortly.

I just started reading about systemd’s container features.  For now I agree that containers might be the better solution, which is sort of an answer to your question in itself, since isolation from filesystem writes and containment of rogue processes are both priorities.  It seems systemd has this “protection” built in, at least to the extent that I would not deliberately be working around it to interfere with the host system.  Of course, if the virtual machine approach runs fast enough I will give it a try as well.
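For what it's worth, systemd's container tool is systemd-nspawn, and it can run a directory tree like the copied CentOS one directly (a sketch only; /mnt/centos is a hypothetical path, and this needs root plus the systemd-container tools installed).  Unlike a --rbind chroot, nspawn sets up its own private /dev, /proc, and /sys, so host mounts don't leak into the container's mount table:

```shell
CHROOT=/mnt/centos   # hypothetical path to the copied CentOS tree
if [ "$(id -u)" -eq 0 ] && [ -d "$CHROOT" ] && command -v systemd-nspawn >/dev/null; then
    # Interactive shell inside the tree; add -b to boot its own init instead.
    systemd-nspawn -D "$CHROOT"
else
    echo "skipping: needs root, $CHROOT, and systemd-nspawn"
fi
```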

----------

