# Oh oh - seems my Gentoo's been ransomwared!!  Oh No!

## eohrnberger

Now this is really troubling.  Seems that my Gentoo system has been ransomwared.

So, yeah, I'm looking for some pointers as to how to detect where it's sitting, and how to eradicate it.

Came home from work yesterday, logged into my Gentoo machine and was greeted with this message:

```

Using username "root".

****************************************!WARNING!**************************************

*************************************YOU ARE INFECTED**********************************

***********************WITH THE MOST CRYPTOGRAPHIC ADVANCED RANSOMWARE*****************

=======================================================================================

All your data of all your users, all your databases and all your Websites are encrypted

=======================================================================================

Send your UID to e-mail: johnmorcbw@seznam.cz

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

***************************************************************************************

***************************************************************************************

YOUR UUID IS : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx

****************************************!WARNING!**************************************

```

Come to find out that my home dir files were all encrypted, with '.enc' appended to each file name.  Yeah, they're binary now (no big deal there either; I have them under revision control).

It seems to have crawled through the file system and left most of the files alone (thank goodness), except the web site (no big deal, it wasn't being used), but still.
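A quick sweep like this (a rough sketch; the starting directory and output file name are whatever suits) should at least inventory what got hit:

```
# List every file the ransomware renamed with a ".enc" suffix.
# Run from / as root to sweep the root filesystem; -xdev stops find
# from wandering into other mounts (/proc, the ZFS pools, etc.).
TOP=${TOP:-.}
find "$TOP" -xdev -type f -name '*.enc' | sort > encrypted-files.list
wc -l < encrypted-files.list
```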

I noticed a python2 task running with hugely high CPU, so I killed it and renamed:

```
/usr/bin/python2.7                => python2.7.disabled  (symlink to python-exec2c)
/usr/bin/python-exec2c            => python-exec2c.disabled
/usr/lib/python-exec/python-exec2 => /usr/lib/python-exec/python-exec2.disabled
```

I also set the permissions on these files to 000 to prevent this thing from being able to run again, at least for now (that will stop it from running, won't it?)

But I want this out of my system (can you blame me?).  I have to admit I've never faced this on a Gentoo system before, and I'm hoping there's a good reference (or a set of good hints) that can help me eradicate it.

Please help.

----------

## Schnulli

Well, nearly the same trouble here...

YOU ARE HACKED!

Kick adobe-flash and take care what websites you visit ^^

rkhunter and sharp firewall rules are useful as well.

The firewall rules should also disable some outgoing ports that aren't usually used...

I disallowed everything; only http, imap, ssh and ftp are allowed outgoing here. They hate it ^^

if destination port = 80 and so on, allow, and so on

I tried to figure it out for weeks...

I gave up and installed a fresh Gentoo on another drive...

Saved me a lot of grey hairs.

Listen, you are not the only one who got hacked ^^

You can also do this: save your whole drive and send it to a cybercrime department... they will love to read out who it was and where it came from...

It's well known that Gentoo has been under attack for a while too... so take care, and shut down or lock your computer and disconnect the network when you're not at it.

Regards

----------

## eccerr0r

It would be interesting to know how they got in, but you do have a mess on your hands.

I'm not sure python is your culprit program; it may just be the python interpreter, which normally shows up whenever a python script is running - though it's still good to suspect that it is or has been trojaned.  At this point you should assume everything is compromised and start a fresh build, copying the important stuff over.  Especially since root has been compromised, this is the only way to safely eradicate this.

Note that you probably cannot run emerge, equery, etc. if you disabled python, as they too require python.  Equery is a good tool to use, as you can check the integrity of installed files with it (provided the hacker did not muck with the checksums):

```
# equery check packagename
```

and start from those files that fail the checksum.  Again, since they got root, these checksums may no longer be trustworthy.

Though adobe-flash is definitely an intrusion risk, unless they only took over your user account, it would be extremely unlikely they could take over root.  My guess is it's one of those semi-recent bugs like shellshock or perhaps just those pesky bruteforce attacks that got your machines.

If you're not too embarrassed about it, I'm curious how often/when the box was last (completely) updated, along with which kernel you're using (e.g., were you vulnerable to dirtyc0w)?  This would perhaps give some clues about which packages were used to exploit your box.

----------

## Schnulli

hi eccerr0r

They use python and crash it locally later... they also load code from an external source...

Remember the DNS problem from infections years ago? Seems to me the same weird idea is behind it, maybe the same guys.

The whole Portage tree is trash then... and they try to redirect to "somewhere".

Nothing new that they attack Gentoo too lately...

Be warned: Adobe Flash is the first door they use... wrong permissions and they've "got ya".

It seemed to me that some layman repos got infected as well...

Who? No idea still...

I'll set up a transparent bridge shortly and log the whole traffic to figure it out.

----------

## eccerr0r

Well, we don't know if this is Gentoo specific; any Linux could have been vulnerable.

I'm not surprised they had an "intrusion-detection detector" to crash your box when you find out you've been exploited and try to fix your machine.  The best thing to do when dealing with this kind of stuff is to disconnect the network, cold reboot off a livecd, and go from there.

I'll knock on wood that I haven't seen any adobe-flash exploits on my box yet...

----------

## eohrnberger

Yeah, I'm guilty of running FireFox as root.  Shame on me - I should have known better.

Well, I pretty much flushed everything in the home directory, .mozilla, etc., figuring that it's not that important and is already encrypted, so . . what am I going to do with it anyway?

I've been thinking I should build a list of file mtimes and see what's changed on the system as of a few days ago, to see if that leads me to anything suspicious.  Granted, mtimes can easily be programmatically set backwards to any time desired.  Still, you never know.
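That mtime sweep is a one-liner with GNU find (assuming GNU findutils, since `-printf` is its extension):

```
# Files modified in the last 3 days under TOP, newest first.
# Keep in mind mtimes can be forged with touch, so this is a lead,
# not proof.
TOP=${TOP:-.}
find "$TOP" -xdev -type f -mtime -3 -printf '%T@ %p\n' | sort -rn | head -n 50
```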

The Good News:

All the really important data is safe, as I'm using zfs and have a grandfather-father-son snapshot script in cron (hourly, daily, weekly, monthly), and only very few files seem to have been encrypted, based on the ".enc" in the filename.  If I find others, I have a year's worth of snapshots to restore from.  Makes me think I need to learn how to configure a Gentoo that uses zfs for the root file system as well.  Been meaning to, just haven't had the time, because Gentoo just runs so reliably.

I have its sister server (same patch config and software load), which appears to be uninfected, so I can clone that system disk and recover pretty quickly, with some conf files I have in version control.  A couple of hours, I figure.

Interestingly, another system, also a sister, seems to have caught the same thing, and I can't recall ever having run anything but VirtualBox VMs on that one.  It's turned off right now until I figure out a recovery plan.  So that's at least 2 systems to recover, and one clean one to do so from.

The infected systems are internal systems; their only access to the Internet is through a firewall.  And yes, there are minimal ports open on the Linux firewall machine, plus an ssh log scan that injects an iptables drop rule for offending IPs that knock on the port (think a primitive fail2ban shell script).

So maybe not all that bad.  Not sure as to the next step forward, but I appreciate the contribution and ideas.

Yeah, I know that the python interpreter is probably just running some code downloaded off the Internet, and probably isn't a replaced binary.  But the code that downloaded the encrypter has to live someplace if it's going to survive between reboots.  Tracking that one down.  Hmm.

Really sad to learn that there are Gentoo specific attackers.  What'd Gentoo ever do to them?  Guess there's no figuring some people out.

----------

## eccerr0r

Yeah, shame, shame.  Don't firefox as root.

However, if it really is adobe-flash as the vector, this would not be Gentoo specific and would equally infect Ubuntu, Fedora, etc. -- but I don't know, is it really adobe-flash?  Then again, I don't know how pervasive firefoxing as root is...

----------

## eohrnberger

Well, the message is coming from the /etc/motd file.  That's simple stuff.

----------

## eccerr0r

Now what else did they edit to keep them in the machine?

Did they ssh in?

I wonder if they were using a python script to encrypt your files instead of some compiled binary; that could slow down the encryption and reduce the amount of damage done... ha.

----------

## eohrnberger

 *eccerr0r wrote:*   

> Now what else did they edit to keep them in the machine?
> 
> Did they ssh in?
> 
> I wonder if they were using a python script to encrypt your files instead of some compiled binary, that could slow down the encryption to reduce the amount of damage done... ha.

 

No, not ssh, I don't think so.  But yeah, what'd they leave behind?  That's the question.

----------

## NeddySeagoon

eohrnberger,

/etc/motd can only be edited by root. That means that they got root.

You can't clean that up; it's a reinstall.

Either they gained access to root directly or broke in as another user and ran a privilege escalation exploit to gain root.

It doesn't matter much.  It's a reinstall either way.

If you want to do forensics, make a disc image of the install and work on that. You need the filesystem free space too, as that's where the interesting stuff will be.

A few of these ransomware attacks have known decryption methods.  If you are lucky, you might get the data back.

You can't salvage the install though.

----------

## cboldt

The attack isn't Gentoo specific.  The exploit works against many distros.

I'm curious about the vector too, how the malicious code made its way onto your system.  One of the remarks here has me working to regulate outgoing IP traffic - heretofore I'd been concerned about incoming, but not outgoing.  But I can see where closing off outgoing ports might stifle an attack.

On that front, I'm stuck at ftp, which includes outgoing NEW packets aimed at random, unprivileged ports.

----------

## NeddySeagoon

cboldt,

It helps to stop evil intruders phoning home if they do get in.

My firewall drops unwanted incoming packets and denies unwanted outgoing packets.

You need the logs to know what to allow out :)

I use shorewall and shorewall6 with similar rule sets.

For ftp, which is horribly insecure, you need to use passive mode. 

sftp is preferred.

----------

## eohrnberger

 *NeddySeagoon wrote:*   

> eohrnberger,
> 
> /etc/motd can only be edited by root. That means that they got root.
> 
> You can't clean that up, its a reinstall.
> ...

 

I very much appreciate your post, NeddySeagoon.  Many thanks.

While I've been running my various Linux flavors at home over the years, this is the first encounter with something like this on Linux.

----------

## jonathan183

It is worth trying to work out how you were compromised; a fresh install with identical configuration and use will probably have similar results in future ... surfing the net as root is not wise ... but you already know that.

Knowing what binaries/logs were attacked would also be useful.

Was ssh open to the net with password access or key based?

----------

## NeddySeagoon

eohrnberger,

Is your normal user in the disk group?

That's a very bad thing.  It gives the user raw access to the block devices, so they can do what they want, avoiding filesystem restrictions.

```
ls /dev/sda -l

brw-rw---- 1 root disk 8, 0 May 12  2013 /dev/sda
```

That would effectively give them root access without ever being root. 

It gives easy access to root, since they can modify /etc/passwd and /etc/shadow with a tool like hexedit while they run as a normal user.

Being somewhat paranoid, I mount user writeable space with the noexec option, so a break in as a non root user can't execute random binaries.

/tmp and /home need to be their own partitions.  That does not stop scripts being run, so 

```
python27 encrypt_home
```

would still have worked.
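In /etc/fstab that looks something like this (illustrative entries only; the device names and filesystem types here are made up):

```
# /etc/fstab -- illustrative entries, partitions are hypothetical
/dev/sda5   /tmp    ext4   defaults,noexec,nosuid,nodev   0 2
/dev/sda6   /home   ext4   defaults,noexec,nosuid,nodev   0 2
```

noexec blocks direct execution of binaries in those trees; an interpreter invoked by name still runs, which is exactly the limitation described above.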

All the .bash_history files on your system will make interesting reading.

It's especially informative if they appear to be truncated.

----------

## eohrnberger

 *jonathan183 wrote:*   

> It is worth trying to work out how you were compromised, a fresh install with identical configuration and use will probably have similar results in future ... surfing the net as root is not wise ... but you already know that  
> 
> Knowing what binaries/logs were attacked would also be useful.
> 
> Was ssh open to the net with password access or key based?

 

No, this machine is behind the firewall and does not have an ssh route from the outside.  You'd have to ssh in and jump through the firewall to get to it, and I don't think that's what happened.  On the firewall, any ssh password knocking - even a single failed password attempt - injects an iptables drop rule for that source IP (think primitive fail2ban).

 *NeddySeagoon wrote:*   

> eohrnberger,
> 
> Is your normal user in the disk group?
> 
> That's a very bad thing.  It gives the user raw access to the block devices, so they can do what they want, avoiding filesystem restrictions.
> ...

 

This is as mine reads:

```
ls -l /dev/sda

brw-rw---- 1 root disk 8, 0 Mar 16 21:41 /dev/sda
```

What's recommended for this device node?

 *Quote:*   

> That would effectively give them root access without ever being root. 
> 
> It gives easy access to root, since they can modify /etc/passwd and /etc/shadow with a tool like hexedit, while the run as a normal user.
> 
> Being somewhat paranoid, I mount user writeable space with the noexec option, so a break in as a non root user can't execute random binaries.
> ...

 

The partition layout is really simple.  A small /boot as sda1, swap as sda2, and root as sda3 taking the rest of the disk, including /home, /var, etc.  The important data in the zfs pools is mounted off of /, as this machine's primary role is to be something like a NAS.

 *Quote:*   

> All the .bash_history files on your system will make interesting reading.
> 
> Its especially informative if they appear to be truncated.

 

Those were encrypted too, including .bash_history, and have since been deleted as useless, from my view.

I want to figure out what code is being run to encrypt, so I created shell script replacements for 

```
/usr/lib/python-exec/python-exec2:

#!/bin/bash

echo "`date` $0 $*" >> /root/python-exec2.execution.log
```

and

```

/usr/bin/python2.7:

#!/bin/bash

echo "`date` $0 $*" >> /root/python2.7.execution.log

```

Really simple and primitive, but it might capture something.  Going to sit and watch for the next 24 hours and see what happens.  If I'm lucky, I can catch where the encryption code is being run from.  Since python never executes, any python code on this system is rendered inert for now, but this can easily be reverted by moving the original binaries and symlinks back to what they were.

----------

## NeddySeagoon

eohrnberger,

The block device node is correct.  What does groups say for your normal user?

```
$ groups

tty wheel uucp audio cdrom video games kvm cdrw users vboxusers scanner wireshark plugdev roy
```

It's important that disk is not there.

Is /root/.bash_history still there, or is it encrypted too?

----------

## eohrnberger

 *NeddySeagoon wrote:*   

> eohrnberger,
> 
> The block device node is correct.  What does groups say for your normal user?
> 
> ```
> ...

 

Users are only in the group that is the same as their username.  A user 'ted' would only belong to the group 'ted'.  root, of course, is in the disk group.

The Windows clients access the zfs storage via samba, and that has its own smbusers.  Other Linux machines access the zfs storage via nfs.

/root/.bash_history was encrypted, and was deleted.  Maybe that was a hasty decision on my part.

----------

## NeddySeagoon

eohrnberger,

 *eohrnberger wrote:*   

> /root/.bash_history was encrypted ..


Yes, change nothing if you want to do forensics.

As users don't have raw block device access, the attacker must have got root to encrypt /root/.bash_history.

That's another file that is only accessible to root, through the filesystem anyway.

----------

## cboldt

 *NeddySeagoon wrote:*   

> 
> 
> It helps to stop evil intruders phoning home if they do get in.
> 
> My firewall drops unwanted incoming packets and denies unwanted outgoing packets.
> ...

 

Yes on the "stifle the call home" notion.  And "you need the logs to know what to allow out" too - at least I did, because I forgot about half of the services!

My firewall is built with a combination of a router and a homebrew script that has been in use and grown over the course of a decade or so.

No ftp service is running on any machine - sftp is available locally as one means to use the local cloud, which aims to give the family a place to offload phone/camera content and music.

So the "ftp problem" for me is just outgoing ftp, which starts with a packet to the server's port 21 (DPT=21), followed by a NEW packet to an unprivileged port.  I see this hourly on one machine that visits a NOAA website to get solar activity data, and on a different machine that fetches packages for the system - that is, the "fetch" part of "emerge -u @world" uses ftp in addition to http.

wget is using passive ftp for this.

```
 Active FTP :

     command : client >1023 -> server 21

     data    : client >1023 <- server 20

 Passive FTP :

     command : client >1023 -> server 21

     data    : client >1024 -> server >1023
```

I'm still pondering how to handle this.  For now the connections are just logged, so at least I have a chance to detect something abnormal.  Yesterday, when I first "closed" OUTPUT (actually, changed it to allow certain packets and log the rest), I noticed those packets headed out to high port numbers, had a "WTF?" moment, then figured out the source.

----------

## szatox

 *Quote:*   

> It helps to stop evil intruders phoning home if they do get in. 

 Easier said than done.

Source ports are usually randomized and they provide no information regarding the service in use. Destination ports are controlled by the attacker, so they can have the exploit pretend to be a legitimate user of some common service like www, and you're not going to block THAT.

DPI can be fooled too, even accidentally.  Especially in the case of ransomware, which only needs to send a few bytes, so the connection is already over by the time you discover you should have shut it down.

You could try blocking by destination IP, but this would require prompting for user input every time something tries to reach an unknown machine. A lot of work to train it to your needs.

----------

## cboldt

 *Quote:*   

> On the firewall, any ssh password knocking, even a single failed password attempt, injects an iptables drop rule for that source IP (think primitive fail2ban).

 

The firewall doesn't know if there was even a password attempt.  I run a honeypot here, and the number of hits against port 22 is amazing - hundreds of different IPs per day.  I let a given IP "hit it" half a dozen times before banning.  Port 23 is even busier.  On the machine that does have sshd open to the outside (on a different port), there are occasional intrusion attempts that include a password.  A user gets multiple password attempts on a single connection.  The only way to know a password attempt was made is to watch the sshd activity log (auth.log).

Nobody gets into sshd here with a password.  That method is closed off.  Funny assortment of usernames.  I'd guess on the order of 1 intrusion attempt per day, there.

----------

## Tony0945

 *eohrnberger wrote:*   

> The Windows clients access the zfs storage via samba, and that has it's own smbusers.  Other Linux machines access the zfs storage via nfs.

   Can samba access any root owned files? I try to keep samba restricted to one directory, but others make the whole machine accessible. Maybe the malware got in via Windows and samba?  

If your users only belong to their own group they can't do much. Maybe that's why you were web surfing as root? I have fired the browser up as root, but only to access my modem, not the internet.

My apologies for intruding. I am in no way an expert. Listen to Neddy, he is.

----------

## eohrnberger

 *cboldt wrote:*   

>  *Quote:*   On the firewall, any ssh password knocking, even a single failed password attempt, injects an iptables drop rule for that source IP (think primitive fail2ban). 
> 
> The firewall doesn't know if there was even a password attempt.  I run a honeypot here, and the number of hits vs. port 22 is amazing, hundreds of different IP's per day.  I let a given IP "hit it" half a dozen times before banning.  Port 23 is even busier.  On the machine that does have sshd open to the outside (different port), there are occasional intrusion attempts that include password.  A user gets multiple password attempts on a single connection.  The only way to know a password attempt was made is to watch the sshd activity log (auth.log).
> 
> Nobody gets into sshd here with a password.  That method is closed off.  Funny assortment of usernames.  I'd guess on the order of 1 intrusion attempt per day, there.

 

The firewall doesn't allow 23 out to the Internet.  That's silently dropped.  While the firewall doesn't log everything, it is configured to log the banned traffic.  sshd is configured to log to the secure log (at least in this configuration), and the secure log is scanned, offending IPs are gathered, and iptables rules are injected.

Yeah, I'm seeing tons of traffic knocking on the ssh port.  Not exactly sure when I set up the banning script - must have been years ago - but it seems such port knocking has increased of late.
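For the curious, the sort of thing the banning script does can be sketched like this (the log path and message format are assumptions that vary with the syslog setup; the real script differs):

```
#!/bin/bash
# Primitive fail2ban sketch: pull the source IPs of failed sshd logins
# out of the log and emit one iptables DROP rule per offender.
# Pipe the output through sh (as root) to actually install the rules.
LOG=${1:-/var/log/secure}

grep 'Failed password' "$LOG" 2>/dev/null \
  | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' \
  | sort -u \
  | while read -r ip; do
      echo "iptables -I INPUT -s $ip -j DROP"
    done
```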

----------

## cboldt

The router here directs a few ports to the honeypot, and the honeypot has no services running on those ports.  Mostly an exercise in curiosity.  21, 22, 23, 110, 1433, 3306, 8086, 10000, and 12345

What are you using to scan your sshd log?  Just curious.  I've tried fail2ban and sshguard, and ended up composing a homebrew that taps into syslog-ng, which gives a way to direct selected messages to the "homebrew log watcher."  End result is similar, an iptables ban is inserted.

----------

## NeddySeagoon

cboldt,

Nothing.  It's key-based login, non-root users only.

I've thought about fail2ban and port knocking to stop the logspam but I've never got round to setting it up. 

To get root, you need to log in with a key, then know the users password to use sudo.

You also need a valid username to get in in the first place, since root won't work.
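The sshd side of that policy is just a few directives in /etc/ssh/sshd_config (a sketch of the setup described; the username is a placeholder):

```
# /etc/ssh/sshd_config -- relevant lines only
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
# only named users may even attempt to log in (placeholder name)
AllowUsers roy
```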

----------

## eohrnberger

 *Tony0945 wrote:*   

>  *eohrnberger wrote:*   The Windows clients access the zfs storage via samba, and that has it's own smbusers.  Other Linux machines access the zfs storage via nfs.   Can samba access any root owned files? I try to keep samba restricted to one directory, but others make the whole machine accessible. Maybe the malware got in via Windows and samba?  
> 
> If your users only belong to their own group they can't do much. Maybe that's why you were web surfing as root? I have fired the browser up as root, but only to access my modem, not the internet.
> 
> My apologies for intruding. I am in no way an expert. Listen to Neddy, he is.

 

No worries.  

Not sure how likely a Windows vector for a Linux infection would be.  It's not unknown for a vector and infection to address different platforms - Stuxnet is an example.  It'd be easier to just hack a Windows machine with canned code.

----------

## NeddySeagoon

eohrnberger,

I have not had a Windows install at home since early 2002, when I dumped dual boot Windows NT and Red Hat for Gentoo.

I've never used Samba, so can't comment on the possibility it was the attack vector.

----------

## eccerr0r

I still find it hard to believe adobe-flash was the vector, because at minimum I would also have been affected (though not as root - but regular user or root, encrypted files are encrypted files, and they'd make money off them either way).

I have pretty much the most lax firewall policy on my main "server" - it's completely open to the outside world.  I do block off a few ports like samba, cups, and portmap/rpcbind, and so far so good.  Pretty much everything else - dns, http, sshd, imaps, sendmail, openvpn, etc. - is open to the world.

Knock on wood however.  Don't know how long this streak will last.

----------

## NeddySeagoon

eccerr0r,

If flash was the vector, whether you get attacked or not depends on the websites you visit.

----------

## eccerr0r

Indeed it is quite true that it depends on the site visited, but I can't say I'm anything abnormal in randomly clicking sites, including sites that may be of questionable content.  Granted, I don't tend to click on things not in English - that would probably be the only exception - but these ransomers tend to target USA citizens and assume English...

I do however heed Firefox warnings and update flash fairly quickly.  It may make a difference, I don't know.

----------

## jonathan183

 *NeddySeagoon wrote:*   

> eccerr0r,
> 
> If flash was the vector, it depends on the website you visit, if you get attacked or not.

 

My approach so far has been to use a separate limited user account for general websurfing, separate limited user account for emails, separate limited user account for documents etc.

Each user requiring network access must be in a network_user_group, with the relevant application started with sg network_user_group; otherwise the firewall blocks access.  The websurfing user has browser ports only open, and the email user has imap/smtp/pop3 ports only open, so clicking on email links fails.

I'm hoping this strategy will limit damage to a user's home area, and since I move anything I want to keep to somewhere network-access users either have no access or read-only access to, encryption should be limited to things I can do without.
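The firewall side of that can be expressed with iptables' owner match on the OUTPUT chain; an illustrative fragment for iptables-restore (the group name and ports are examples only, not the actual rule set):

```
# *filter fragment -- group name and ports are illustrative
-A OUTPUT -p tcp -m multiport --dports 80,443 -m owner --gid-owner network_user_group -j ACCEPT
-A OUTPUT -m owner ! --uid-owner root -j REJECT
```

The owner match only applies to locally generated traffic (OUTPUT/POSTROUTING), which is exactly where this per-user scheme operates.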

----------

## eccerr0r

Well, if you do that, you should be able to easily isolate the vector... but what vectors have you run across, and can we learn from that?

Then the other side of the coin, if it's impossible to get infected, then was the effort spent worth the time?

----------

## lost+found

 *cboldt wrote:*   

> (...) On that front, I'm stuck at ftp, which includes outgoing NEW packets aimed at random, unprivileged ports.

 

FTP and iptables is hard, but can be done for passive and active mode. I've posted the relevant lines here, as an example. Works for running a client and a server.

--> https://forums.gentoo.org/viewtopic-p-7817794.html#7817794

----------

## cboldt

Not to sidetrack the thread too much - my "stuck at ftp" wasn't that I couldn't conjure iptables rules.  It was along the lines of: if the point of locking down outgoing packets is to stifle interlopers from "calling home", that point is lost when all the unprivileged ports are opened to NEW OUTPUT packets to service active ftp.

----------

## lost+found

Only related packets can get out, not new ones...

```
[0:0] -A OUTPUT -o eth0 -p tcp -m conntrack --ctstate RELATED -m helper --helper ftp -m tcp --dport 1024:65535 -j ACCEPT
```

----------

## cboldt

Ahhh, that -m helper --helper ftp is something I haven't seen or used before.  It appears to check which conntrack helper is tracking the connection, and only if that relationship meets the criteria is the packet allowed to pass.

Thank you.

Testing showed that this alone doesn't work.  The second outgoing ftp packet, the one to a non-privileged port, is not identified as RELATED, even with your suggested iptables entry.  There is more than one fix.  I took the easy way out.

```
echo 1 > /proc/sys/net/netfilter/nf_conntrack_helper

modprobe nf-conntrack-ftp
```

With those two changes, the outgoing packets that are (supposedly) RELATED to the outgoing passive ftp request are picked up by ...

```
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```

The alternative, and more secure/preferred, method is currently beyond my ken, but I believe it involves use of the "raw" iptables table.

I implemented the two commands above with 1) an entry in /etc/sysctl.conf (net.netfilter.nf_conntrack_helper = 1) and 2) an entry in /etc/conf.d/modules (modules="${modules} nf-conntrack-ftp")

Of course the kernel and modules had to be set up to facilitate those moves.
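Spelled out, those two persistent entries are:

```
# /etc/sysctl.conf
net.netfilter.nf_conntrack_helper = 1

# /etc/conf.d/modules
modules="${modules} nf-conntrack-ftp"
```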

----------

## cboldt

Hahahah, LOL, and ROTFL.  Locking down OUTPUT sure does make mincemeat of nmap!

Easy enough to take down the barrier as needed, as well as restore it, but the way I see it, a fairly humorous oversight relating to the ramifications of a firewall OUTPUT wall.

----------

## eccerr0r

I just read that there exists "ransomware" that encrypts your files but whose private key either never existed or was deleted... so even if you cough up the money, you still can't decrypt...

So anyone who even thinks about coughing up should at least get proof that they can decrypt or have the private key.

I do wonder if the wave of the future is to simply do privilege separation between every process, and make it hard/annoying for users to pass data between processes (as this is the whole point of privilege separation).  Or we really need to find and fix the actual security holes, which I'd rather see...

----------

## jonathan183

I think there will be a combination of approaches; software with zero exploits is probably not achievable - at least not using methods we have at our disposal today.  Something like Qubes might be the approach taken in future ...

----------

## Tony0945

Drawing and quartering exploiters might help.

----------

## eohrnberger

 *Tony0945 wrote:*   

> Drawing and quartering exploiters might help.

 

I'd agree with that.  

Something akin to what was done to horse thieves in the Wild West, but perhaps a bit more drastic, painful, and longer lasting - say at least 3 days' worth, non-stop.

Might have to enforce some of that on the CIA and NSA, even.

----------

## depontius

 *eccerr0r wrote:*   

> I just read there exists a "ransomware" encrypter that encrypts your files but the private key that was used either never existed or is deleted... so even if you cough up the money, you still can't decrypt...
> 
> So anyone who even thinks about coughing up should at least get proof that they can decrypt or have the private key.
> 
> I do wonder if the wave of the future is to simply do privilege separation between every process, and make it hard/annoying for users to pass data between processes (as this is the whole point of privilege separation).  Or we really need to find and fix the actual security holes, which I'd rather see...

 

I believe that's where systemd is headed - containerizing all applications, essentially cheap-virtualizing everything.  Theoretically that might do exactly the separation you're looking for, but wait for the next step.  After containerization, I expect to see some sort of ActiveX-like thing added to systemd so that the containers can share data.  Potentially add security, then take it away (again).

----------

## eccerr0r

Things like Android show that even with privilege separation, one can still do lots of damage.

It seems like it was fear that got us to this state...

 *jonathan183 wrote:*   

> I think there will be a combination of approaches, software with zero exploits is probably not achievable - at least not using methods we have at our disposal today. Something like qubes might be the approach taken in future ...

 

We need less complex software that can be proven correct... perhaps software-writer liability up to the cost of the purchase price?  Then again, if the interaction between two pieces of software causes a security hole, who's to blame? (Blame both!)

----------

## Ant P.

 *eccerr0r wrote:*   

> We need less complex software that can be proven correct... perhaps software writer responsibility up to the cost of the purchase price?

 

That's already the case here. Count how many things you have installed where the license includes a long all-caps disclaimer "no warranty, not even implied warranty that this will actually do what it says" - and see if you can find a single one that doesn't.

----------

## 1clue

It seems that everyone else has already spanked you about running anything at all as root, and I'm sure you've covered that yourself. AFAIK there's no way out except a complete wipe (delete and recreate the partition table too) and reinstall.  Might want to check for BIOS malware while you're at it.

Please understand this is NOT an "I told you so"; I've been bitten by malware several times over the last 30 years. The old-timers who tell you what you should have done have almost certainly been there before. That's why we've thought it through so thoroughly.

I might point out that the only backups which are safe against ransomware are the ones which are:

- Physically removed from all running hardware except during the times they're used.
- Frequent enough to minimize the lost data.
- Complete enough to replace the data you lost.

I might also point out that the best backups:

- Either completely restore the device (software and all), which is complicated, or completely restore data and custom settings, leaving the software to a reinstall.
- Are each complete backups, not relying on some prior backup.
- Are on offline media moved to one or more physical sites not attached to the computer's site.
- Are NOT solely an rsync'd copy of current production (meaning that deleted files and modified files both have an audit trail of their prior contents back through previous backups).

RAID IS NOT A BACKUP! It's an insurance policy against loss of data since the most recent backup in the case of a hardware failure. It does not prevent accidental deletion or software errors.

Personally I keep my backups on removable SATA drives (I have a slot-loaded bay which holds a standard SATA drive at each site) without any compression or archiving. The directories I backup are readily searchable and readable on pretty much any linux box. My backup process is scripted but not automated, because I only attach the drive when I make a backup, and then immediately eject it.
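A minimal sketch of the scheme above (full, dated snapshots, each one complete on its own, on a normally-detached drive). All paths are hypothetical stand-ins, and the demo runs in a temporary sandbox rather than against real hardware:

```shell
#!/bin/sh
# Sketch only: full, dated snapshots, none relying on a prior backup.
# SRC and DRIVE are placeholders; a temp sandbox stands in for real paths.
set -eu
SANDBOX=$(mktemp -d)
SRC="$SANDBOX/data"            # stand-in for the directories to back up
DRIVE="$SANDBOX/backup-drive"  # stand-in for the slot-loaded SATA drive
mkdir -p "$SRC" "$DRIVE"
echo "important" > "$SRC/notes.txt"

# Each run produces a complete snapshot that relies on no prior backup.
STAMP=$(date +%Y%m%d-%H%M%S)
cp -a "$SRC" "$DRIVE/snap-$STAMP"

# Old snapshots are never overwritten, so deleted or modified files keep
# an audit trail; in real use the drive is detached right after this.
ls "$DRIVE"
```

In real use the copy step would typically be rsync and the drive would be mounted only for the duration of the script, as described above.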

----------

## eccerr0r

 *Ant P. wrote:*   

>  *eccerr0r wrote:*   We need less complex software that can be proven correct... perhaps software writer responsibility up to the cost of the purchase price? 
> 
> That's already the case here. Count how many things you have installed where the license includes a long all-caps disclaimer "no warranty, not even implied warranty that this will actually do what it says" - and see if you can find a single one that doesn't.

 

As long as we didn't pay for it, it is our responsibility if it breaks.  So with all GNU software, we get to keep the pieces if it breaks.  I suppose the comment was not a solution but rather a hope that we'd have a chance to detect and fix it ourselves; but the complexity is a big problem, and hoping that for-pay software will indemnify our problems... FAT CHANCE! (Or do we need more laws...)

---

Backups are costly as well if we have to go back many, many revisions.  I'm hoping that there won't later be "dormant" or "logic bomb" software that looks innocuous at first, and gets backed up for many revisions.  Once again, complexity is the killer.

----------

## 1clue

 *eccerr0r wrote:*   

>  *Ant P. wrote:*    *eccerr0r wrote:*   We need less complex software that can be proven correct... perhaps software writer responsibility up to the cost of the purchase price? 
> 
> That's already the case here. Count how many things you have installed where the license includes a long all-caps disclaimer "no warranty, not even implied warranty that this will actually do what it says" - and see if you can find a single one that doesn't. 
> 
> As long as we didn't pay for it, it is our responsibility if it breaks.  So all GNU software we get to keep the pieces if it breaks.  I suppose the comment was not a solution but rather hoping that we have a chance to detect/fix it ourselves, but the complexity is a big problem, and hoping that for-pay software will indemnify our problem (FAT CHANCE!  or do we need more laws...)
> ...

 

Are you guys actually going there? Even for-pay software does absolutely nothing to protect the computer from the idiots at the keyboard. The OP ran a web browser as root and went out to the unprotected Internet. That's the same thing as putting your car through the crusher and then expecting the original manufacturer's warranty to cover it. If people start suing programmers for idiotic shit done by the users, then programmers will stop writing software. Or they'll charge enough to protect themselves from people who do stupid shit.

This is, by the way, exactly the reason health care costs so much in the USA. Having been to other countries where a full-spine MRI costs USD $5 without insurance and you can get half a dozen cavities filled along with a crown and a root canal for USD $250 -- again without insurance -- I can assert that litigation against others for one's own mistakes can have no good long-term outcome.

----------

## eccerr0r

However, we still don't know for sure what the entry mechanism was.  We know not to run stuff as root because we know there may be bugs in it.  This is not like driving a car into a crusher; this is like driving a car off-road because the car should have been fine off-road, but history has shown that the manufacturer sometimes fits questionable springs and shocks.  Without scrutinizing the car, we won't know.  People who know it's very likely the off-road suspension was not installed, but have no way of checking (and neither does the manufacturer), just drive the car on paved roads so they never lose the suspension and crash from a steering failure.  These are the people who don't run as root.

Running code on the native machine from within a VM is completely improper behavior...  Running as root should not have been an issue; after all, VMware needs to be run as root, and you'd certainly be up in arms if running a code sequence inside the VM suddenly infected your host with encryption ransomware.  It's not like the OP gave permission to run code that encrypts his computer.  The buggy code allowed something that should never have been allowed.

I am tired of people charging money for crap software that has bugs, never mind the security bugs.  It's always time to market, time to market.  And people think buggy software is somehow "acceptable".  No.  This is bad practice and it needs to stop, despite the bean counters.  Can't say it's 100% their fault; it's one of those things where someone jumped, it wasn't so bad, and now everyone else needs to follow suit or be left behind.

---

Where did those other countries get the MRI machine?  They could not have recouped the cost of the machine at $5 per scan unless there was perhaps a government subsidy somewhere that you don't see.  In the US someone made an investment, the machines are owned by only a few for-profit companies, and they can charge however much they want.  Not an insurance issue: pure greed.

In any case, the USA is clearly a litigious society... unfortunately the underlying reason behind it is because everyone wants to be equal to everyone else.  I'll leave it at that.

----------

## eohrnberger

 *1clue wrote:*   

>  *eccerr0r wrote:*    *Ant P. wrote:*    *eccerr0r wrote:*   We need less complex software that can be proven correct... perhaps software writer responsibility up to the cost of the purchase price? 
> 
> That's already the case here. Count how many things you have installed where the license includes a long all-caps disclaimer "no warranty, not even implied warranty that this will actually do what it says" - and see if you can find a single one that doesn't. 
> 
> As long as we didn't pay for it, it is our responsibility if it breaks.  So all GNU software we get to keep the pieces if it breaks.  I suppose the comment was not a solution but rather hoping that we have a chance to detect/fix it ourselves, but the complexity is a big problem, and hoping that for-pay software will indemnify our problem (FAT CHANCE!  or do we need more laws...)
> ...

 

I'd have to agree.  As the OP, I did something stupid, something I should have known not to do, and did it anyway.  It's on me, and I wouldn't want to have it any other way.

That being said, I've changed root passwords, banned a few more IPs, set up what I needed under a non-privileged user account, and haven't seen anything weird going on.  I may be through this, but time will tell.

Cautiously optimistic at this point.

----------

## NeddySeagoon

eohrnberger,

So the root kits are still installed?

----------

## 1clue

 *eccerr0r wrote:*   

> However we still don't know for sure what the entry mechanism is.  We know running stuff as root because we know that there may be bugs in it.  This is not like driving a car in a crusher, this is like driving a car off road because the car should have been fine off road, but history as told that the manufacturer sometimes puts questionable springs and shocks on it.  Without scrutinizing the car, we won't know.  People who know that it's very likely that off road suspension was not installed but have no way of checking (and neither can the manufacturer), so we just drive the car on paved roads so we never lose suspension and crash because of steering failure.  These are the people who don't run as root.
> 
> 

 

Bad analogy. The core of Linux is solid. Running as root is the same as deliberately turning off all security and all sanity checks. Running a browser as root means you give full authority to run any script not only to the website you logged into, but also to any third-party ad that might get loaded, and to any tertiary sites that might be loaded from there. You've already clicked OK; you want everything that tries to run to succeed. Because you're root.

 *Quote:*   

> 
> 
> Running code on the native machine from within a VM is completely improper behavior...  Running as root should have not been an issue, after all, VMWare needs to be run as root and you'd certainly be up in arms if running a code sequence inside the VM and suddenly your host is infected with encryption ransomware.  It's not like the OP gave permission to run code that encrypts his computer.  The buggy code allowed something that should have never been allowed.
> 
> 

 

VMs run as constrained users all the time. Running authorized code on the bare metal from a VM is completely proper. That's what the virtualization software does when it emulates a CDROM or some other piece of "hardware" you don't actually have.

 *Quote:*   

> 
> 
> I am tired of people charging money for crap software that have bugs, never mind the security bugs.  It's always time to market, time to market.  And people think buggy software is somehow "acceptable".  No.  This is bad practice and it needs to stop despite the bean counters.  Can't say it's 100% their fault, it's one of these things where someone jumped, it wasn't so bad, now everyone else now needs to follow suit or be left behind.
> 
> ---
> ...

 

The economy in Colombia is almost completely capitalist. There is no sales tax; there is a small property tax, but that's it as far as I can see. Cars pay for license plates and gas is extremely expensive. The average employed worker I saw there made about $10 USD per 12-hour work day. Chances are they feed their family and probably their parents or grandparents with that. There is no government subsidy because the government collects almost nothing from the population. You can buy insurance for the big stuff like heart surgery, but most people don't have it and get along fine without it.

Edit: There is also no unemployment insurance, no medicare, no safety net. If you don't have a job and you don't have caring family who has a job, then you're on the streets. Unemployment is extremely high there. If you don't want to work then there are lots of people who do.

The MRI machine was about 10 years old, based on the opinion of a neurosurgeon who looked at the scan files in the USA when we got back. It was probably bought used. That said, it wasn't at a hospital. Most of that sort of expensive hardware gets its own building and its own staff, and the doctors tell you to go get an MRI. You get on the bus, travel from your home town to one of the cities which have that sort of machine, and you sit in line and wait for your turn. The machine is constantly in use. While one patient is being scanned, the next one is dressing in the gown and removing jewelry. The one after that is getting the gown and the instructions from the staff. They rely on continuous use to pay for the machines, which cost a lot of money, because if they charged what a single MRI costs in the USA then nobody would get an MRI. They found a way to minimize costs (either buy old equipment, or buy new and keep it longer, plus continuous utilization) and they went with it.

*Last edited by 1clue on Tue Mar 21, 2017 5:34 pm; edited 1 time in total*

----------

## 1clue

 *eohrnberger wrote:*   

> I'd have to agree.  As the OP, I did something stupid, something I should have known not to do, and did it anyway.  It's on me, and I wouldn't want to have it any other way.
> 
> That being said, I've changed root passwords, banned a few more IPs, setup what I needed under a non-privileged user account, and haven't seen anything weird going on.  I may be through this, but time will tell.
> 
> Cautiously optimistic at this point.

 

Please keep in mind that I'm not trying to abuse you here. I believe you understand what happened and why. eccerr0r seems to think bugs are involved, and although it's possible I think it's not very likely.

Personally of all the times I've been hacked or had malware I have never been comfortable with the box after that. The nature of my work is such that if even a minor box gets hacked then that poses a serious risk to every other box in my control, and I can't tolerate that. For me, your situation requires scraping off the entire system and starting over. There's no other way to be absolutely sure that you got everything off.

----------

## eccerr0r

Indeed, the crusher analogy was a bad one.  It is a fact that we do not know every aspect of these complicated pieces of software, and somewhere in them there could be a bug that can be exploited to run code that was not expected.

User accounts are only a safety net that should not have been necessary if the VM was doing its job - whether it's Firefox running HTML code, Adobe Flash running Flash code, or VMware running a VM.  All of these should shield the host processor from arbitrary native code, running only what the program itself permits.  If one allows arbitrary native code presented inside the VM to run outside the VM, that is a bug.  Running as root only amplifies the problem: now you lose control of the entire system instead of just the specific things the program was supposed to be able to control.

What is this "authorized code"? Code is code... Of course you have to give it code to run, whether it's HTML or Flash or a VM image; it's supposed to run that well-defined code.  But it's not supposed to bypass the protections given by the VM and run arbitrary code directly on the host.

There are clearly bugs involved.  If no bugs were needed, we would see user accounts being taken over left and right; regular user accounts on Linux boxes are VERY useful to hackers for DDoS, mail spam, fake user hits, and more ssh hacking - all of which do not need root.  It is absolutely not normal for a web page or Flash program to be able to write to your ~/.profile and make it automatically start sending spam or running ssh brute-force attacks against other machines every time you log in, whether you run the browser or not.

If you wish to prove me wrong, be my guest: start writing Flash and HTML that will either encrypt my home directory or upload everything to another server without my knowledge when I merely click on a website.  Or write a code sequence that runs in a VM and encrypts my home directory on the VM's hypervisor host.  Then we can submit how you did it to Adobe or QEMU as bug exploits that need to be fixed.

----------

## 1clue

 *eccerr0r wrote:*   

> Indeed the crusher analogy was indeed a bad analogy.
> 
> 

 

With respect to the expectation of financial compensation from programmers of free software for bugs found in the code, it was perfect. The root account exists explicitly for the purpose of limited system administration where privileges are escalated.

 *Quote:*   

> 
> 
> It is for a fact that these complicated pieces of software we do not know of every aspect of them and somewhere there could be a bug that can be exploited to run code that was not expected.
> 
> 

 

You have just described all software, everywhere. There are reliable mathematical formulas which describe the number of bugs based on code complexity and maturity.

 *Quote:*   

> 
> 
> User accounts are only a safety net that should not have been necessary if the VM was doing its job - whether it's firefox running html code, adobe-flash running flash code, or VMWare running a VM.  All of these should be shielding running arbitrary native code on the host processor and only running code that were permitted by the code itself.  If it's allowing arbitrary native code that's presented inside the VM to run outside the VM, this is a bug.  Running as root only amplifies the problem as now you lose control of the entire system instead of the specific actions the program was supposed to be able to control.
> 
> 

 

Explicitly false. EVERY Linux distro has strong warnings about running software unnecessarily as root. EVERY howto, every security wiki, every bit of documentation. It used to be that opening any browser as root user would get a warning dialog saying you're doing something very stupid. I don't know if it still does because I don't and won't open a browser as root. The entire purpose of the root account is that sometimes an administrator needs to do something special, and they are explicitly assumed to know what they are doing (because why else would you be root?) and all the security stops are pulled. Some virtualization software refuses to run as root.

https://wiki.archlinux.org/index.php/Users_and_groups

LINUX IS NOT WINDOWS!  LINUX IS NOT MAC OS! You are not a consumer. You've learned about Linux and UNIX in general, you are supposed to understand certain core facts about it, and you've chosen to install Linux in place of some other operating system. Linux is an expert system for expert users. It has fairly sophisticated user security and it is configured by default to use that security.

If you run random end-user software as root, then you pretty much deserve whatever you get.

 *Quote:*   

> 
> 
> What is this "authorized code?" Code is code... Of course you have to give it code to run whether it's HTML or flash or a VM image, it's supposed to run that well defined code.  But it's not supposed to bypass the protections given by the VM and run arbitrary code directly on the host.
> 
> 

 

Operating system code is authorized code. Hypervisor code is authorized code. For example, if your system supports hardware acceleration for graphics, encryption, compression, or whatever else, you may choose to pass that functionality on to your VM. Authorized code SHOULD be carefully vetted by somebody. Authorized code should NOT be some random Flash video from a third-party popup ad. Typically we prevent the latter by not doing something like running a web browser as root.

Look. You can try to insist that programmers protect you from your own laziness and ignorance all you want, but it will not happen. Go run ios or something. Even Windows doesn't want you running as Administrator all the time, and Mac OS doesn't even tell its users that a root account exists. Pretty much every ios or android phone warranty is void as soon as you root it, for good reason.

 *Quote:*   

> 
> 
> There are clearly bugs involved.  If there are no bugs involved then we should see user accounts being taken over left and right, regular user accounts on Linux boxes are VERY useful to hackers for DDoS, mail spam, fake user hits, and more ssh hacking - all of which do not need root.  It is absolutely not normal for a web page or flash program to be able to write to your ~/.profile and make it automatically start sending spam or running ssh bruteforce attacks to other machines every time you log in, whether you run the browser or not.
> 
> 

 

If bugs were around then we would see more privilege escalations. Everyone wants a root exploit because there are explicitly, BY DESIGN, no security checks on the root account.

 *Quote:*   

> 
> 
> If you wish to prove wrong, be my guest - start writing flash and html that will either encrypt my home directory or upload everything to another server without my knowledge by merely clicking on a website.  Or write a code sequence that runs VM that will also encrypt my home directory on my VM's hypervisor host.  Then we can submit how you did them to Adobe or QEMU as bug exploits that need to be fixed.

 

Why would I waste my time? You're the one going against UNIX best practices, you should bear the burden of proof. You're arguing whether a user should be protected from themselves when running end-user software as root.  I want you to find even ONE linux distro or commercial UNIX vendor who suggests that this is not an insanely bad idea. Ask the question on any Linux forum or UNIX forum. And brace yourself.

I don't write flash code. I can certainly write normal code which encrypts my home directory, it's no big deal. I can certainly run that same script as root and get the entire disk-based file structure as long as I don't try hitting /dev or /proc or other virtual filesystems. You're trying to get me to prove something you know to be false.

----------

## 1clue

@eccerr0r,

http://unix.stackexchange.com/questions/1052/concern-about-logging-in-as-root-overrated

This pretty much sums it up. I would have tried to make a few more points, but long story short you can do whatever you want on your box.

I take issue when you tell others that they should also do something stupid, or assert that there should be no reason why you can't run a web browser as root; that's a deliberate untruth regarding the nature and purpose of the root user ever since the concept of UNIX security came into existence.

Edit: Rather than spam the thread I'll add this link: https://en.wikipedia.org/wiki/Principle_of_least_privilege

----------

## eohrnberger

 *NeddySeagoon wrote:*   

> eohrnberger,
> 
> So the root kits are still installed?

 

Don't think there are any.  rkhunter and chkrootkit both come up empty.  If you know of another scanner, I'll run it.

I'm keeping my eye on it.  Might not have installed one.  I suppose that if I were to recompile everything, wouldn't that flush out any altered binaries?
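On the recompile question: portage already records a digest for every file it installs, and `qcheck` (from app-portage/portage-utils) compares those records against the live filesystem, so it can flag altered binaries without a full rebuild (though it can't catch files portage never owned). Since that tool is Gentoo-specific, here is the same idea in miniature with a hand-rolled `sha256sum` manifest over a sandbox standing in for `/usr/bin`:

```shell
#!/bin/sh
# Miniature of what `qcheck` does with portage's recorded digests:
# a manifest taken at install time flags files altered afterwards.
set -eu
SANDBOX=$(mktemp -d)
mkdir -p "$SANDBOX/bin"
printf 'original binary\n' > "$SANDBOX/bin/tool"

# 1. Record checksums while the system is known-good.
( cd "$SANDBOX" && find bin -type f -exec sha256sum {} + > manifest )

# 2. An intruder alters a file...
printf 'trojaned binary\n' > "$SANDBOX/bin/tool"

# 3. ...and verification reports the mismatch.
( cd "$SANDBOX" && sha256sum -c manifest 2>/dev/null ) || echo "tampering detected"
```

The catch, as others note below: a rootkit that got in before the manifest was taken, or that subverts the tools doing the checking, passes cleanly.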

I'm still learning (aren't we all always all the time?), and this is a learning experience too.

----------

## eohrnberger

 *1clue wrote:*   

>  *eohrnberger wrote:*   I'd have to agree.  As the OP, I did something stupid, something I should have known not to do, and did it anyway.  It's on me, and I wouldn't want to have it any other way.
> 
> That being said, I've changed root passwords, banned a few more IPs, setup what I needed under a non-privileged user account, and haven't seen anything weird going on.  I may be through this, but time will tell.
> 
> Cautiously optimistic at this point. 
> ...

 

No worries.  If I feel abused, I'll let you know.

----------

## NeddySeagoon

eohrnberger,

 *eohrnberger wrote:*   

> I suppose that if I were to recompile everything, wouldn't that flush out any altered binaries? 

 

Maybe.  Who knows what the altered/hidden binaries are doing while you rebuild, or even now while you continue to use the install.

Perhaps you have an open mail relay that's distributing  spam.

Perhaps you are part of a botnet, being used for DDoS attacks.

The point is you don't know.

rkhunter and chkrootkit are not perfect.

The only responsible thing to do with a compromised system is to isolate it from the network, destroy the install and reinstall it.

By all means, image the install if you want to perform some forensic examinations.

Destroying the install means making new partition tables. 

Paranoid readers may want to flash their BIOSs too, to ensure there is no trojan lurking there.
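For the "making new partition tables" step, a sketch of what destroying the old metadata looks like. It is demonstrated on a throwaway image file, because pointing `dd` at a real `/dev/sdX` is irreversible:

```shell
#!/bin/sh
# Demo on a throwaway image file; on real hardware the target would be
# /dev/sdX, and this destroys the data for good.
set -eu
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1M count=8 2>/dev/null          # stand-in "disk"
printf 'old MBR, bootloader, partition table' | dd of="$IMG" conv=notrunc 2>/dev/null

# Zero the first megabyte: boot sector, partition table, and most
# filesystem metadata a later installer might trip over.
dd if=/dev/zero of="$IMG" bs=1M count=1 conv=notrunc 2>/dev/null

# On a real disk, also zero the end (GPT keeps a backup header there),
# or simply zero the whole device.
```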

----------

## jonathan183

 *eohrnberger wrote:*   

> I suppose that if I were to recompile everything, wouldn't that flush out any altered binaries?

  No ... a fresh install is the only way you can be sure the system is clean. See Ken Thompson's "Reflections on Trusting Trust".

I'm still interested in how your system got compromised because the same method may be available on other systems. Doing things as root made things worse but a regular user account could still result in that users data being encrypted ...

----------

## 1clue

As well, the malware could have added a file to a directory and made it executable, set up for some future attack with the same security hole.  Something in /etc/init.d for example, or some config file snippet that gets added to apache because of its presence in a conf.d folder which turns on a feature set and thereby opens a hole.

Or it could be any number of other things.
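One way to hunt for that kind of dropped file is to list everything under the config trees modified after a date you trust. The sandbox layout and the "totally-legit" dropper below are invented for illustration; on a real box the search roots would be `/etc`, `/usr/local`, `/var` and the cutoff a pre-infection date:

```shell
#!/bin/sh
# List config-tree files modified after a trusted cutoff date.
set -eu
SANDBOX=$(mktemp -d)
mkdir -p "$SANDBOX/etc/init.d" "$SANDBOX/etc/conf.d"
touch -d '2016-01-01' "$SANDBOX/etc/init.d/sshd"      # old, legitimate script
echo 'payload' > "$SANDBOX/etc/init.d/totally-legit"  # freshly dropped file

# Anything newer than the trusted cutoff deserves a close look.
find "$SANDBOX/etc" -type f -newermt '2017-01-01' -print
```

This only helps if the malware didn't fake its timestamps, which is exactly why wipe-and-reinstall remains the safe answer.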

----------

## C5ace

 *eohrnberger wrote:*   

>  *NeddySeagoon wrote:*   eohrnberger,
> 
> So the root kits are still installed? 
> 
> Don't think there are any.  rkhunter and chrootkit both come up empty.  If you know of another scanner, I'll run it.
> ...

 

Last year I had a similar problem, caused by a macro virus in MS Excel running under Wine.

Tried emerge @world without success. My final solution was to run 'dd if=/dev/random of=/dev/sda' from a SystemRescue CD and re-install Gentoo, Wine, and the other software, except for MS Office.

----------

## ct85711

One thing you need to remember is that portage ONLY knows of files it installed.  This means it won't know of all the other files that were created at run-time or through some other action.  Portage will only notice such a file if it tries to install a file that already exists on the system (i.e. a file collision).  The other thing you need to remember is that there is no rule saying where an executable file must be located, so you would have to look in every folder for any executable file (which may be hidden).

Rootkit scanners generally look for patterns of known rootkits (like virus scanners).  The point we are getting at is that there is no way to be 100% certain that your system is clean.  Whether you can again trust a system that is known to have been compromised is your choice.  For a lot of us, including me, the only way we can trust it again is by wiping and reinstalling, so that the system is back in a known state.
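The "executables can be anywhere, even hidden" point can be sketched with a brute-force `find` sweep. The file names below are made up; a real sweep would start at `/` (with `-xdev` to stay on one filesystem) and be compared against what the package manager owns:

```shell
#!/bin/sh
# Sweep for executable regular files wherever they hide, dot-names included.
set -eu
SANDBOX=$(mktemp -d)
mkdir -p "$SANDBOX/home/user/.cache"
echo 'echo pwned' > "$SANDBOX/home/user/.cache/.updater"  # hidden dropper
chmod +x "$SANDBOX/home/user/.cache/.updater"
echo 'just text' > "$SANDBOX/home/user/notes.txt"         # harmless file

# Executable regular files only; directories are excluded by -type f.
find "$SANDBOX" -type f -perm -u+x -print
```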

----------

## eccerr0r

Again, I am not advocating using root by any means.  But what's good for the goose is good for the gander: if the code is good enough that you can't escape out into a user account, then, because root is also a user, you can't escape out to root either.

So all software needs to be vetted?  So you want all binaries to be signed as proof that they are vetted?  Do we need some sort of Android-style central signing, such that all binaries are signed for authenticity too?  Who gets to sign the keys?  What if you keep your own keys and some exploit finds them or bypasses them?

And what difference does it make between OS (including Windows) and VM hypervisor code?  By the same token you can also call web browsers and Adobe Flash "authorized code" by your definition, because VM code runs on top of an OS, so why not a web browser?  They are all software and they all can have bugs.  What about OS bugs, like Dirty COW; should that copy-on-write code have been more carefully vetted?  Why wasn't it correct?  I don't claim there necessarily is a solution, but if the code were simpler and people were more careful when writing it, fewer of these things would happen.  Adobe Flash is clearly a travesty; Macromedia, or whoever wrote the code, did not think through all the possible ways to exploit it.

Also, even Adobe Flash was supposed to provide a level of protection against the "unauthorized code" coming from the internet; otherwise people might as well distribute random binary plugins to run on everyone's computers, which clearly isn't savory.  Adobe Flash should only allow certain well-defined tasks: drawing in your browser, playing sounds over your speakers, possibly downloading video or audio, perhaps working with cookies, and maybe some other stuff.  None of these is "allow unauthorized code to edit your /etc/passwd or ~/.bash_profile", and it is no different from the kernel giving up root access when a copy-on-write race causes privilege escalation, or a VM damaging the host machine when running exploitative code.

If you're claiming that I'm advocating for running only as root, that is utterly false.  The claim is that if we have ideal software with ideal humans, then running as root is no different than running unprivileged - a simple if/then relationship, and the constraint is not satisfied.  To better the situation there needs to be reason or incentive for people to be less sloppy with writing software for other people - writing them such that they can be proven correct and not because the bean counter wants it out on XYZ date.

---

And agreed, the root cause of the OP's issue has not been found.  Adobe Flash is merely a "usual suspect"; we're simply blaming it again because it was convicted in the past, not because we definitively proved it left the gate unlocked again.  Agreed, a reinstall is the only way to ensure that any contamination is gone.  There are too many configuration files that would need to be checked, along with the questionable binaries and possibly contaminated package manager files, as stated earlier.  However, I am still interested in forensics on the disk as a learning experience, to find out what the vector really was.  The portage checksums are at least a starting point for what got changed (we know /etc/motd got changed, so portage, as a sanity check, should report that).

--- 

C5ace, you were also running Excel in Wine as root? :(  That would be a new one, an MS Excel macro virus that targets Wine on Linux...

----------

## 1clue

 *eccerr0r wrote:*   

> Again I am not advocating using root by any means.  But what's good for the goose is good for the gander:  if the code is good such that you can't escape out into to a user account, and because root is also a user,  you also can't escape out to root either. 
> 
> 

 

No. Root is a special case of security. It is the equivalent of the Windows 'system' user, only with ownership. It's there because we need a way to maintain a system without giving crazy access to a normal user. The goose/gander rule does not apply.

 *Quote:*   

> 
> 
> So all software needs to be vetted?  So you want all binaries to be signed as proof that they are vetted?  Do we need some sort of Android and central signing for everything such that all binaries are signed for authenticity too?  Who gets to sign the keys?  What if you keep your keys and some exploit finds them or bypasses them?
> 
> 

 

Absolutely not. If I find out that a sudoer on one of my systems -- even a noncritical system -- ran unsafe software as root then they will no longer be a sudoer on any of my systems and will no longer have access to any of my networks even as a guest. I've essentially fired IT staff because they ran a browser as root, and defended it to management by describing the kind of mayhem that can result. It is understood, meaning printed in system administrator manuals, that you don't run a @#$@ browser as root.  You don't run ANYTHING as root which is not known to be safe.

Let me be crystal clear: If you stick a Raspberry Pi on one of my networks and run a browser on it as root, and I find out about it, you're fired. No second chance, no warning, end of discussion. Any administrator of any enterprise network would no doubt do exactly the same thing.

The root user is there because it's impossible to vet all code. It's there because some code you DO NOT WANT a normal user to have access to, under any circumstances. The written rule is that only trusted code be executed by root, and the rule is written in human terms not in code because an administrator or collection of administrators can create or find software they trust to be run as root, which cannot be dictated from some operating system vendor IT department on another continent.

 *Quote:*   

> 
> 
> And what difference does it make blah blah blah...

 

----------

## 1clue

It is not the role of software engineers to babysit idiots. Administrators of systems are supposed to be concerned with security and smart enough not to do stupid shit. Everything you've said regarding geese and ganders is completely irrelevant. Go look up how many services strongly recommend that they be run as a non-root user, either in part or in their entirety. Apache httpd runs a single process as root because of the port-80 limitation, and runs everything else as a non-root user with minimal permissions. Why?  Because the goose and gander rule does not apply to root.

Edit: To complete the thought,

No server software vendors in the UNIX world, either free software or commercial software, have tried to force all software to be vetted so it can be run as root. It's obvious to anyone why such a user is needed, and why it needs to circumvent normal security. It's also obvious that not all software is safe for that user to execute.

Even if you can vet firefox, for example, you go to a typical web site with advertising and all bets are off. Those sites sell ad space to customers who provide their own content from their own servers. Those customers also have customers, and those customers of customers have their own servers. Each ad can contain javascript or a java applet or who knows what. Or simply an enticing ad that convinces the user to click a button which downloads a file that was signed by a key from a registered vendor - stolen, and not yet known to be compromised - ad nauseam, ad infinitum.

You keep focusing on bugs. Software has bugs, and while we can work to stamp them out, fixes and new features are always adding more.

The thing that should make what I'm saying obviously correct is not about bugs, it's about deliberate malware. It's about bad people trying to get into your system for whatever their own ends might be.

It's not even theoretically possible to vet all of a Linux distro's software repository to the degree necessary to allow root to run everything with any assurance of bug-free operation. The software is updating too fast for that. How much less possible is it for all of the Internet to be vetted? Every javascript line, every clickbait link, every software download, every excel spreadsheet, every Word document? Even accidental mistakes would be insanely overwhelming, and deliberate malware takes it to a new level of impossible.

Last edited by 1clue on Wed Mar 22, 2017 3:14 am; edited 2 times in total

----------

## eohrnberger

I'm not disagreeing with anyone, WRT the only way to be sure is to wipe and reload.  I may end up there.  I've had to do a number of Windows systems like that.

True, there are quite a few config files on a Linux machine.  I've compared the system against the version-controlled copies of the config files, and it appears that none were altered, beyond trivial changes such as adding the $ID tag (I think it was).  Further, comparing the content of the directories, i.e. file names as well as file contents, against version control is, I think, a bit like tripwire in that regard: it will reveal any changes, additions, or deletions in the /etc tree.
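A minimal sketch of that version-control comparison, assuming the config tree is tracked in git (throwaway paths; demonstrated in a temp directory rather than the real /etc):

```shell
#!/bin/sh
# Sketch: tripwire-like change detection via git status.
# On a real system the repo would track /etc; here a temp dir stands in.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
echo "PermitRootLogin no" > sshd_config
git add sshd_config
git -c user.email=admin@example -c user.name=admin commit -qm baseline
# Later: simulate tampering, then ask git what changed
echo "PermitRootLogin yes" > sshd_config
git status --porcelain      # any output means files changed since baseline
```

Any modification, addition, or deletion under the tracked tree shows up in the `git status` output, which is exactly the tripwire-like property described above.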

I'm intrigued by the idea of running a 'portage checksums' test.  

Googled a bit, but didn't quite find the magic command line for that.  Any references please?  

Is this going to check the /usr/portage tree integrity, or the installed files integrity?

----------

## 1clue

 *eohrnberger wrote:*   

> I'm not disagreeing with anyone, WRT the only way to be sure is to wipe and reload.  I may end up there.  I've had to do a number of Windows systems like that.
> 
> True, there are quite a few config files on a Linux machine.  I've compared the system against the version-controlled copies of the config files, and it appears that none were altered, beyond trivial changes such as adding the $ID tag (I think it was).  Further, comparing the content of the directories, i.e. file names as well as file contents, against version control is, I think, a bit like tripwire in that regard: it will reveal any changes, additions, or deletions in the /etc tree.
> 
> I'm intrigued by the idea of running a 'portage checksums' test.  
> ...

 

The problem with the idea of portage checksums test is that many config files are generated after install and thus not in portage, and not included in the checksum.

----------

## eohrnberger

 *1clue wrote:*   

> 
> 
> The problem with the idea of portage checksums test is that many config files are generated after install and thus not in portage, and not included in the checksum.

 

OK, granted, but I think I have a handle on the config file part of that, and if we can detect an altered binary, that'd be the gain that I can see.

Heck, I'm willing to let the thing run all night and log to a file, check it out when I get back home from work.  See if it detected any mismatches.

----------

## 1clue

 *eohrnberger wrote:*   

>  *1clue wrote:*   
> 
> The problem with the idea of portage checksums test is that many config files are generated after install and thus not in portage, and not included in the checksum. 
> 
> OK, granted, but I think I have a handle on the config file part of that, and if we can detect an altered binary, that'd be the gain that I can see.
> ...

 

It's an interesting exercise, but real security is not about what you know. It's about what the other guy knows.

----------

## khayyam

 *eohrnberger wrote:*   

> I'm intrigued by the idea of running a 'portage checksums' test. Googled a bit, but didn't quite find the magic command line for that.  Any references please? Is this going to check the /usr/portage tree integrity, or the installed files integrity?

 

eohrnberger ... as I think was mentioned earlier in the thread, this is no guarantee (the checksums exist on the same filesystem as the tampered files ... and so may be considered equally suspect). That said, here are a few examples:

```
# equery belongs -e /usr/bin/qcheck
 * Searching for /usr/bin/qcheck ...
app-portage/portage-utils-0.62 (/usr/bin/qcheck -> q)
# qcheck -C --quiet > qcheck.lst
```

```
# equery belongs -e /usr/bin/equery
 * Searching for /usr/bin/equery ...
app-portage/gentoolkit-0.3.3 (/usr/bin/equery -> ../lib/python-exec/python-exec2)
# equery belongs -e /usr/bin/qlist
 * Searching for /usr/bin/qlist ...
app-portage/portage-utils-0.62 (/usr/bin/q)
# equery -NC check $(qlist -IC) 2> equery-check.lst
```

... or a more homebrew (and inefficient) approach:

```
# find /var/db/pkg -name "CONTENTS" -exec awk '/^obj\s/{print $3,$2}' {} + | md5sum -c 2>/dev/null | grep -v "OK" | sort
```

... app-portage/portage-utils is generally faster in my experience (which I would expect, given it is C rather than python or pipes).
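For what it's worth, the principle behind the homebrew pipeline - each CONTENTS file records `obj <path> <md5> <mtime>`, so re-hashing the path and comparing reveals tampering - can be demonstrated self-contained with a fabricated CONTENTS entry (throwaway paths; real CONTENTS files live under /var/db/pkg):

```shell
#!/bin/sh
# Self-contained demo of the CONTENTS-style md5 check.
# The CONTENTS line here is fabricated for illustration.
set -e
tmp=$(mktemp -d)
echo "original binary" > "$tmp/prog"
sum=$(md5sum "$tmp/prog" | awk '{print $1}')
# Fabricate a CONTENTS entry: obj <path> <md5> <mtime>
printf 'obj %s %s 0\n' "$tmp/prog" "$sum" > "$tmp/CONTENTS"
# Same transform as the pipeline above: emit "md5  path" for md5sum -c
awk '/^obj /{print $3"  "$2}' "$tmp/CONTENTS" | md5sum -c -     # reports OK
echo "payload" >> "$tmp/prog"                                   # tamper with the file
awk '/^obj /{print $3"  "$2}' "$tmp/CONTENTS" | md5sum -c - 2>/dev/null \
    || echo "mismatch detected"
```

As khayyam notes, though, the recorded checksums sit on the same (possibly compromised) filesystem, so a clean report proves nothing by itself.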

best ... khay

----------

## Goverp

 *eohrnberger wrote:*   

>  *NeddySeagoon wrote:*   eohrnberger,
> 
> So the root kits are still installed? 
> 
> Don't think there are any.  rkhunter and chrootkit both come up empty. 
> ...

 

AFAIK, rkhunter isn't like a virus scanner, it doesn't look for known (or unknown) rootkits; it (a) looks for symptoms such as incorrect security settings, and (b) compares important system files for changes from a previous baseline.  That baseline is created in the previous rkhunter run.   So if you've only just installed it and run it, its baseline is the rooted system, and you'll get no warnings.  Indeed, it would report a change if you succeeded in removing the rootkit!  Anything else is about dodgy configuration.

I've no idea if that's true of chkrootkit.
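The baseline behaviour described above can be demonstrated with a self-contained sketch (using sha256sum as a stand-in for rkhunter's file-properties database; throwaway paths):

```shell
#!/bin/sh
# Sketch of baseline-then-compare: if the baseline is taken on an
# already-rooted system, the rooted files pass every later check, and
# *removing* the rootkit is what gets flagged as a change.
set -e
tmp=$(mktemp -d)
echo "trojaned ls" > "$tmp/ls"           # system is already compromised
sha256sum "$tmp/ls" > "$tmp/baseline"    # baseline snapshots the bad state
sha256sum -c "$tmp/baseline"             # passes: compromise looks "clean"
echo "clean ls" > "$tmp/ls"              # now "remove the rootkit" ...
sha256sum -c "$tmp/baseline" 2>/dev/null || echo "change reported"
```

This is why a baseline is only worth anything if it was taken while the system was known-good.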

----------

## eccerr0r

1clue - I can see that you're really into viciously enforcing rules, which is reasonable if the software isn't completely understood - it has its basis, but it's still very arbitrary.  All the examples you're giving are clear because we know each of them has exhibited bugs in the past, in both the actual server/program AND poor user configuration, both of which benefit from privilege separation.  A lot of application developers suggest privilege separation simply because they are not willing to find and fix bugs that can cause privilege escalation, mostly because they know it's become too difficult to know every possibility (and thus not "correct by design").  And again, it doesn't matter whether it's escalating from a "VM" environment like HTML to running arbitrary machine code as a regular user or as root; it's still a security hole that needs to be fixed.

But what really should be the lower bound?  Why not run 'ls' as an unprivileged user as well - why does it get a pass for being run as root?  Granted, 'ls' is simple enough to be fairly well checked, such that specially crafted directories with bad filenames will never cause a buffer overflow and make 'ls' run arbitrary code.  Why shouldn't other software be subject to similar scrutiny, even if it's much harder?  And if it were carefully inspected, why shouldn't it get the same treatment and be trusted the way 'ls' is trusted by most people?  After all, once 'ls' has grabbed disk blocks and inode contents, getting past permissions, it no longer needs root to process and print on your display - yet nobody runs it under privilege separation, when it too could harbor bugs.

Now curl/wget must also be run under privilege separation by your rules too?  I suspect so, but I do wonder how many people wouldn't bat an eye before running wget as root when they wouldn't, and shouldn't, run firefox as root.

----------

## cboldt

rkhunter has a few functions built in.  One is similar to what you describe: file permissions, hashes, sizes, etc. for a collection of 170 or so files that are in rkhunter's "repertoire".  I see this as a sort of "limited tripwire" functionality.

rkhunter ALSO has a list of files and techniques that are often used by rootkits, above and beyond the short list of expected files often taken over by a rootkit.  This list works like a typical virus hunter, in that it only knows about the things it is looking for, and will overlook any hiding place not in its list.

I run both, rkhunter and tripwire, on a daily basis.  Each tool has strengths and weaknesses.  On the ultimate issue, once compromised, if I didn't know the vector and the attack, I would not trust the machine, even though I make an effort to keep a very close watchful eye on it.

Edit to add ...

```
# rkhunter --list tests
Current test names:
    additional_rkts all apps attributes avail_modules deleted_files
    filesystem group_accounts group_changes hashes hidden_ports hidden_procs
    immutable known_rkts loaded_modules local_host malware network
    none os_specific other_malware packet_cap_apps passwd_changes ports
    possible_rkt_files possible_rkt_strings promisc properties rootkits running_procs
    scripts shared_libs shared_libs_path startup_files startup_malware strings
    suspscan system_commands system_configs trojans
Grouped test names:
    additional_rkts => possible_rkt_files possible_rkt_strings
    group_accounts  => group_changes passwd_changes
    local_host      => filesystem group_changes passwd_changes startup_malware system_configs
    malware         => deleted_files hidden_procs other_malware running_procs suspscan
    network         => hidden_ports packet_cap_apps ports promisc
    os_specific     => avail_modules loaded_modules
    properties      => attributes hashes immutable scripts
    rootkits        => avail_modules deleted_files hidden_procs known_rkts loaded_modules other_malware possible_rkt_files possible_rkt_strings running_procs suspscan trojans
    shared_libs     => shared_libs_path
    startup_files   => startup_malware
    system_commands => attributes hashes immutable scripts shared_libs_path strings
```

----------

## 1clue

 *eccerr0r wrote:*   

> 1clue - I can see that you're really into viciously enforcing rules, which is reasonable if the software isn't completely understood - it has its basis, but it's still very arbitrary.  All the examples you're giving are clear because we know each of them has exhibited bugs in the past, in both the actual server/program AND poor user configuration, both of which benefit from privilege separation.  A lot of application developers suggest privilege separation simply because they are not willing to find and fix bugs that can cause privilege escalation, mostly because they know it's become too difficult to know every possibility (and thus not "correct by design").  And again, it doesn't matter whether it's escalating from a "VM" environment like HTML to running arbitrary machine code as a regular user or as root; it's still a security hole that needs to be fixed.
> ...

 

You get two scenarios:

1. Go to the nearest beach and get a 1-liter container of sand. Examine each particle of sand in the container, discover what type of rock/mineral it is, the size, and the irregularity of its surface. Put this information into a database.

2. Perform the same test on ALL sand on the planet. Anything from the center of the earth to the edge of space. This includes sand at the beach, on playgrounds, in random hills in nature, and a mile underground. Every particle of rock or mineral which is of an appropriate size to be called sand must be included. Include micrometeorites which fall from the sky every day. Include sand made by rocks bumping in rivers, wave action at lakes and oceans, human machine activity, volcanic action, earthquakes - anywhere sand is created. The list must always be current.

I and every admin on the planet choose item 1. You choose item 2. Item 1 is acceptable and reasonable and not particularly difficult, because the tools one must use as root on a UNIX system are fairly static and relatively few. Item 2 contains everything on the planet: sources you can think of and sources you can't, deposits of sand which have never been seen or detected by humans, particles suspended in water or ice or floating around in the air. Even if you could name all the places to find sand, there are so many examples of sand that you would need a billion programmers on each of a billion planets to catalog everything, and there's no point because it's an unnecessary task that nobody wants to do and certainly that nobody would pay for. In the case of software you must vet not only the software but also the testers, because you know nothing of their abilities or intentions. Some would gladly take your money to catalog bugs but then put that information into their own database while putting false goodness in yours, and then develop malware to exploit the bugs.

Again you're fixated on bugs. Bugs we can live with.  Bugs are funny that way, they exist without anyone's knowledge because nobody has thought of the reason why they're bugs. Nobody has triggered the badness, and nobody has figured out why that particular bit of code is wrong. Bugs don't accidentally encrypt your hard drive or accidentally send your credit card info to Nigeria, or accidentally coordinate with thousands of other boxes to initiate a DDOS attack on some other site.  Or accidentally turn on your webcam and microphone and then accidentally send that stream to someplace like China or North Korea or Russia or some section of the middle east occupied by ISIS.

Deliberate malware is what counts here. You do understand that malware exists right? This thread is about a guy who ran a web browser as root and got ransomware from it. Ransomware is not buggy software, it's functional software written to take your system hostage for money. You understand that people dedicate their time to stealing things to which they have no right and no permission to take? Malware would be hidden from public view since the authors and users of it don't want you to know about it. If it were vetted, it would surely be vetted by people who want to use it. Those people would give it a big gold star because they want everyone to trust it.

This is my last post on your off-topic discussion. If you can't see the obvious line in the sand then you need to start using a Mac. The OP figured out that he'd done something wrong before he even started the thread.

----------

## eccerr0r

You're simply not accepting the fact that bugs (or rm -rf / stupidity) are the only reason why running any piece of software as root becomes a security risk.  Ultimately the VM should NOT be letting malware run outside of the VM.  Sure, it can run amok INSIDE the VM, but it should not escape - which is what happened to the OP.  The exception would be if the malware wrote on the screen:

```
please type in the following commands (in your root window):
# wget http://questionable-site/neatprogram
# ./neatprogram
```

I sure hope the OP did not do this, and I'm sure this did not happen.  (If it did happen, then we're done with this discussion and thread.)  Rather, I give the benefit of the doubt and suspect malware was automatically installed without his knowledge, and the only thing we know is that the "usual suspect" software had been running.  As root.

We are assuming the "malware" running inside the adobe-flash/firefox "VM" escaped (which we have not confirmed).  The malware is allegedly HTML or flash.  But these things should only affect the browser (as it is html) or adobe-flash (as it is flash).  This malware is NOT supposed to affect bash (it is NOT a shell script) or ld-linux.so (it is NOT a Linux binary).

I still want to see the exact HTML or flash that actually did in his system, if it is indeed the vector.  If he would post the websites visited, I would gladly set up a VM running a VM (i.e. adobe-flash / firefox) AS ROOT to learn what the vector is, and report to adobe-flash and/or mozilla.org what API call had the bug that allowed the native malware code to run.  Only then can we prove without question that this was the entry point.  But right now we're only pointing fingers.  As most adobe-flash exploits are written for Windows and maybe Macintosh, I still have a hard time believing that malware was written specifically for Linux, as the authors know *MOST* people behave like you and I and DON'T run firefox as root.  It is absolutely ludicrous for a hacker to write malware for the 1% of boxes that run Linux times the 0.01% of Unix users who run firefox as root - a very, very small number of target victims.  It isn't worth it.

Yes, I've had enough fact twirling, because you know that we now have too much complex software with hidden bugs that let mischievous code run where it shouldn't.  The ultimate goal is to prevent unwanted code from running on our machines in the first place, and more effort needs to go into writing software correctly.

----------

## 1clue

You're kind of back on topic, so I'll bite.

 *eccerr0r wrote:*   

> You're simply not accepting the fact that bugs (or rm -rf / stupidity) is/are the only reason why running any piece of software as root becomes a security risk.  Ultimately the VM should NOT be letting malware running outside of the VM.  Sure it can run amok INSIDE the VM but it should not escape, which is what happened to the OP.  This is excepted if the malware wrote on the screen:
> 
> 

 

https://superuser.com/questions/29611/can-a-virus-from-a-virtualbox-vm-affect-the-host-computer#29614

That's the first hit in google. There are tons of hits. Any more questions?

I haven't run virtualbox for a while, but anything like vmware-tools is software on the guest which hooks into an API on the host, meaning there is a guest-side component and a host-side component.  You can create your own tools, and you can install tools someone else gave you.  This is trusted software, whether rightly trusted or not. It can either hook into the hypervisor API or be separate from it.

KVM has shared drives using 9p and other approaches. This is also guest code hooked up to host code, at the kernel level. Guest gets a virus or some sort of malware and it gets on that shared drive, sure enough it's on the host and any other guest too, they simply need to touch that file and they're done.

Long story short there are lots of ways that a host or co-guest can get malware from a single guest, and they mostly boil down to convenience tools or to stupid security shortcuts because it's a VM. Use Google instead of ranting about how things should or shouldn't be.

 *Quote:*   

> 
> 
> As most adobe-flash exploits are written for Windows and maybe Macintosh, I still have a hard time believing that malware was written specifically for Linux, as they do know *MOST* people behave like you and I and DON'T run firefox as root.  This is absolutely ludicrous a hacker writes malware for the 1% Linux boxes out there times the 0.01% Unix users who run firefox as root giving a very, very small number of target victims.  This isn't worth it.
> 
> 

 

I'm not a flash programmer, but most languages I use have some sort of equivalent of getFilesystemRoot() or getHomeDirectory(), and programmers are encouraged to use them because they work across all platforms - not just the PC platforms but mainframes and other oddball stuff as well. Given that scenario, if the black hat is half competent then the malware might work on IBM's AS/400 and other odd hardware as well, potentially without the author even knowing those platforms exist.

----------

## eccerr0r

If you set up a 9p (or nfs or whatever, it doesn't matter) share that gives a VM full read/write access to the home directory of your host, this is the equivalent of being able to run rm -rf $HOME on your host too.  The answer is still the same: be careful configuring - don't do stupid stuff.  The hypervisor should, if written properly, deny all access outside of its well-defined API, which is used to gatekeep all commands issued by the guest, checking and translating them to host commands.  If it normally allows unfettered access to the host, there really is no reason to run a VM, as you might as well run the programs straight on the box and not pay the virtualization performance penalty.

I set up QEMU to run a 9p share to pass data back to the host, and gave it access only to a specific directory deep within the host filesystem.  Unless there's a QEMU bug, there should be no chance for any code to write to my host's root directory.  One "possible" bug: QEMU shouldn't be allowing 9p file access to "../../../../../../etc/passwd" to accidentally get passed through the host VFS to /etc/passwd.  If it does, that is clearly a bug.
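A host-side gate of the kind described above - refusing any guest-supplied path that resolves outside the export root - can be sketched in shell (hypothetical paths; an illustration of the idea, not QEMU's actual 9p code):

```shell
#!/bin/sh
# Reject guest paths that resolve outside the exported share root.
set -e
share=$(mktemp -d)                     # stand-in for the 9p export root
mkdir -p "$share/data"
check_path() {
    # Resolve the guest-requested path relative to the share root and
    # refuse it unless the resolved result is still inside the root.
    resolved=$(realpath -m "$share/$1")
    case "$resolved" in
        "$share"/*) echo "allowed: $1" ;;
        *)          echo "denied: $1" ;;
    esac
}
check_path "data/file.txt"                      # allowed
check_path "../../../../../../etc/passwd"       # denied
```

The whole point of the well-defined API is that every guest request passes through a check like this; "../" traversal that reaches the host VFS unchecked is exactly the bug class being described.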

 *1clue wrote:*   

> I'm not a flash programmer but most languages I use have some sort of equivalent of getFilesystemRoot(), or getHomeDirectory(), and programmers are encouraged to use that as it goes across all platforms, not just the PC platforms but mainframes and other oddball stuff as well. Given that scenario if the black hat is half competent then the malware might work on IBM's AS/400 other odd hardware as well, potentially without even knowing these platforms exist.

 

However flash is not a language used for general-purpose programming: it's meant for web applications only.  There is no need for one of these flash programs to know your home directory - at most a well-shielded directory whose path the untrusted flash program doesn't need to know.  If the untrusted flash program somehow breaks out, say by opening "../../../../../../etc/local.d/baselayout.start", the flash "VM" is broken and that is clearly a bug.  (In fact it seems that Flash is supposed to have its own sandbox ... but it leaks ... badly ...)  So under normal circumstances, flash should only be able to write to files the user has specified - not /etc/motd.  Once containment escape happens, all bets are off.

Flash is no different from a traditional VM specifically meant to provide separation between questionable applications/OS and the machine hosting the VM.  Why should there be an API that passes a client absolute paths on the host?  These separation VMs really have no need to know what directory their images are lying in.  There may be a "bypass" configuration option in case you really do need it, but it should be off by default and manually enabled - and thus a configuration problem.

On the other hand, Java is a special exception, only because the Java "VM" was actually intended to run on the host computer like the output of a C compiler, and was not meant to provide isolation.  Thus anything written for Java was indeed meant to affect the host, despite Java being a virtual machine.  Java indeed has functions like getcwd(), chdir(), getpwent(), etc., and they are indeed useful for applications.  On the other hand, a Java web plugin running a Java web app I would hope would have isolation - which I'm not sure it does or not.  For this reason, and because Java has legitimate reasons to use chdir(), it should never be a plugin for running questionable webapps, regardless of whether it's run as root or as a regular user.  We have seen time and time again that Java has all sorts of security violations, all of which Oracle in turn submits fixes for, indicating they were not intended behavior.

----------

## NeddySeagoon

Team,

This will be a bug or two then?

----------

## ct85711

 *Quote:*   

> This will be a bug or two then?

 

I would think it would probably be 4 or more bug reports, since one of the teams went through 3 different items (VMware, the Edge browser, the Windows kernel, among others).  Sadly, security always ends up being a trade-off with usability.  The most we can do is try to get as secure as we can without losing too much usability in the process.  Even with the increasing use of VMs, we have more of a need to share information in and out of the VM.  So you end up needing to decide what is secure enough to minimize the associated risk.  Even then, it doesn't get rid of the biggest security risk: the person behind the keyboard.

----------

## eccerr0r

Security is always a loss of usability.

----------

## 1clue

 *eccerr0r wrote:*   

> If you set up a 9p (or nfs or whatever, it doesn't matter) share that gives a VM full read/write access to the home directory of your host, this is the equivalent of being able to run rm -rf $HOME on your host too.  The answer is still the same: be careful configuring - don't do stupid stuff.  The hypervisor should, if written properly, deny all access outside of its well defined API which is used to gate keep all commands issued by the guest, checking and translating them to host commands.  If it normally allows unfettered access to the host, there really is no reason to run a VM as you might well run the programs straight on the box and not pay the virtualization performance penalty.
> 
> I set up QEMU to run a 9p share to pass data back to the host, and gave it access only to a specific directory deep within the host filesystem.  Unless there's a QEMU bug, there should be no chance for any code to write on my host's root directory - One "possible" bug is that QEMU shouldn't be allowing 9p file access to "../../../../../../etc/passwd" which accidentally gets passed through the host VFS to /etc/passwd.  If it does, this is clearly a bug.
> 
>  *1clue wrote:*   I'm not a flash programmer but most languages I use have some sort of equivalent of getFilesystemRoot(), or getHomeDirectory(), and programmers are encouraged to use that as it goes across all platforms, not just the PC platforms but mainframes and other oddball stuff as well. Given that scenario if the black hat is half competent then the malware might work on IBM's AS/400 other odd hardware as well, potentially without even knowing these platforms exist. 
> ...

 

Yet both flash and javascript have available functionality to use files on the client hard disk. I spent 5 minutes verifying both. I don't know how good their security is, but let's just assume for a moment that some black hat knows something nobody else knows.

Back when Java made a big splash, everyone thought it was good for a web browser and not really much else. The original intent was that Java byte code would be natively executed by an embedded controller, so of course files were in the language. Later everyone figured out that Java sucks inside a web browser, to the point that  Java support will be officially eradicated from all major browsers by next year some time. However it turns out that on the server side, Java is a huge deal, even today. It has fantastic filesystem support, and an outstanding networking stack. And finally as of a few years ago there is now CPU hardware which uses Java byte code as its native machine code.

I guess my point here is that most languages start with one or two fairly elegant goals and then either die in obscurity or blossom into something nobody had quite imagined in the beginning, often the end result bears little resemblance to the original plan.

 *Quote:*   

> 
> 
> Flash is no different than a traditional VM specifically meant to provide separation between questionable applications/OS and the machine hosting the VM.  Why should there be an API to pass to a client absolute paths on the host?  These separation VMs really has no need to know what directory its images are lying.  There may be a "bypass" configuration option in case you really do need to do it, but by default this should be an option and manually enabled, and thus a configuration problem.
> 
> 

 

Who's going to know about turning those settings on, and who's going to know why not to do it? Most office people I know would have no idea how, or if you explained it to them no idea why it wouldn't be desirable.

 *Quote:*   

> 
> 
> On the other hand, Java is a special exception, only because Java "VM" was actually intended to be running on the host computer like the output of a C compiler and was not meant to provide isolation.  Thus anything written for Java was indeed meant to affect the host despite java being a virtual machine.  Java indeed has functions like getcwd(), chdir(), getpwent(), etc. and they are indeed useful for applications.  On the other hand, a Java web plugin running a java web app I would hope would have isolation - which I'm not sure if it does or not.  By this aspect, and the fact that Java does have reason to use chdir(), it should never be a plugin to run questionable webapps, regardless if it's run as root or as a regular user.  We have seen time and time again that Java has all sorts of security violations, all in turn Oracle submits fixes to, indicating it was not intended behavior.

 

There's a security library involved in Java. If you have Java code which was written or installed onto a host, then it's running in unprotected mode and therefore has pretty good access to the hardware within certain limits. If you opened an applet in a web browser there are fairly decent security measures to prevent access, but of course if it's code then there is a way around that whether we know about it or not. 

None of this is actually affected by the root user directly. Your premise that all code should be vetted is ridiculous and extremely obviously impossible to do. The purpose of root user is not to perform normal user tasks at all, but to perform a limited subset of system administration tasks which cannot be performed without escalated privileges. It is obvious to everyone except you that many tasks are extremely undesirable to do as root.

For example, say I'm writing a C app which will be modifying large blocks of the disk. If I use my own user account, then if I mess something up the damage is probably limited to my own account. If I use root, then I could literally destroy my entire installation with a misplaced character. And yet the C compiler is heavily reviewed by capable developers, so it should be allowed to run as root, right? And even if I download some C code as root from some website in Afghanistan I've never heard of, the C compiler should be smart enough to prevent that source code from doing anything dangerous to my system, right?  Because it's vetted? Surely every bit of source code on an actual web page does exactly what the documentation says it does, right? If it's on a web page then it's been vetted, or "they" wouldn't let it be on the site.

----------

## eccerr0r

 *1clue wrote:*   

>  *eccerr0r wrote:*   Flash is no different than a traditional VM specifically meant to provide separation between questionable applications/OS and the machine hosting the VM.  Why should there be an API to pass to a client absolute paths on the host?  These separation VMs really have no need to know what directory their images are lying in.  There may be a "bypass" configuration option in case you really do need to do it, but by default this should be an option and manually enabled, and thus a configuration problem.
> 
>  
> 
> Who's going to know about turning those settings on, and who's going to know why not to do it? Most office people I know would have no idea how, or if you explained it to them no idea why it wouldn't be desirable.
> ...

 

That's why it's default-off in "secure mode" and should never be turned on unless the user knows why it's necessary.  The idea is to write Flash code such that enabling these options is never necessary.  Any code that you didn't write yourself is automatically suspect if it requires enabling them.  One would hope people know that enabling writes to your hypervisor by default on your VMware VM is a potential risk to the system.

 *Quote:*   

> Your premise that all code should be vetted is ridiculous and extremely obviously impossible to do. The purpose of root user is not to perform normal user tasks at all, but to perform a limited subset of system administration tasks which cannot be performed without escalated privileges. It is obvious to everyone except you that many tasks are extremely undesirable to do as root.

 

You're making a completely incorrect assumption and therefore drawing a nonsensical conclusion, seemingly out of self-righteous spite, adding no value to your argument.  After all, you first suggested vetting and now you say it's ridiculous, so why did you suggest vetting?  Of course there is no reason to run things as root if it's not necessary, specifically because one is never absolutely sure what "real" code will do - whether it's buggy code or untrusted code (or to protect yourself from accidentally running system-killing commands) - but you have to admit it does get frustrating after a while (note 1).  Also, "extremely undesirable" is an extremely vague term which means different things to different people - at the very least this statement draws an unclear and very arbitrary line between what's OK and what's not OK.

As a curiosity, how do people handle OpenSSL server keys and the signing tools?  Handling OpenSSL server keys absolutely does not require root at all.  Yet I suspect a significant number of people use openssl as root, because the servers need to be started as root anyway and it's a "usability issue" to switch back and forth to create and sign keys - and you may have to sign someone else's keys, which is technically untrusted data.  There's also the fact that if the OpenSSL user gets compromised, no matter which user it is, it's pretty much game over anyway.
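To illustrate the point: nothing in key generation or CSR creation needs root, so it can all be done as an unprivileged user, with root involved only to install the result. A minimal sketch - the file names, subject, and install path below are examples, not anyone's actual setup:

```shell
# All of this runs as a regular user -- no root needed.
umask 077                                   # keep the key unreadable by others
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out server.key
openssl req -new -key server.key -out server.csr -subj "/CN=www.example.org"

# Only installing the key into the server's config area needs privileges:
#   sudo install -o root -g root -m 600 server.key /etc/ssl/private/server.key
```

The server then only ever reads the installed copy; the signing workflow itself never touches a root shell.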

---

note 1:  Aren't we all annoyed when we have to download click-through package files (i.e., "restricted fetch" ebuilds) for portage?  The "convenient" way is to download and click through as root in firefox and save directly to /usr/portage/distfiles/ - which I'd never do - I always endure the pain of downloading as a regular user, chown/chmod-ing the file, and moving it into place...
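For what it's worth, that download-as-user workflow only needs privileges for the final copy. A sketch, where the file name is a placeholder and the distfiles ownership assumes a typical Gentoo setup with the portage user:

```shell
# Fetched to ~/Downloads with the browser as a normal user; names are
# placeholders for whatever the restricted-fetch ebuild asks for.
f="some-restricted-fetch-1.0.tar.gz"
DISTDIR="/usr/portage/distfiles"

# Only this final copy is privileged -- install(1) does chown/chmod/mv in one go:
#   sudo install -o portage -g portage -m 644 "$HOME/Downloads/$f" "$DISTDIR/$f"
```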

----------

## 1clue

 *eccerr0r wrote:*   

>  *1clue wrote:*    *eccerr0r wrote:*   Flash is no different than a traditional VM specifically meant to provide separation between questionable applications/OS and the machine hosting the VM.  Why should there be an API to pass to a client absolute paths on the host?  These separation VMs really have no need to know what directory their images are lying in.  There may be a "bypass" configuration option in case you really do need to do it, but by default this should be an option and manually enabled, and thus a configuration problem.
> 
>  
> 
> Who's going to know about turning those settings on, and who's going to know why not to do it? Most office people I know would have no idea how, or if you explained it to them no idea why it wouldn't be desirable.
> ...

 

I remember one time years ago a secretary infected the entire windows-based office by opening an infected email which supposedly contained a video. She did it like 6 times. Small company at the time, but it spammed everybody in it since she had mailing lists for all of it. We kept telling her not to open that email, just delete it.  She kept insisting that she wanted to see the video.

 *Quote:*   

> 
> 
>  *Quote:*   Your premise that all code should be vetted is ridiculous and extremely obviously impossible to do. The purpose of root user is not to perform normal user tasks at all, but to perform a limited subset of system administration tasks which cannot be performed without escalated privileges. It is obvious to everyone except you that many tasks are extremely undesirable to do as root. 
> 
> You're making a completely incorrect assumption and therefore drawing a nonsensical conclusion, seemingly out of self-righteous spite, adding no value to your argument.  After all, you first suggested vetting and now you say it's ridiculous, so why did you suggest vetting?  Of course there is no reason to run things as root if it's not necessary, specifically because one is never absolutely sure what "real" code will do - whether it's buggy code or untrusted code (or to protect yourself from accidentally running system-killing commands) - but you have to admit it does get frustrating after a while (note 1).  Also, "extremely undesirable" is an extremely vague term which means different things to different people - at the very least this statement draws an unclear and very arbitrary line between what's OK and what's not OK.
> ...

 

You vet code which needs to be run as root. You don't bother with anything special for any other code. You're the one who insists that all code must be usable as root. You're the 'goose and gander' guy, and then you insist that the software should be smart enough to check for security issues.

In other words, you seem to be saying that every piece of software someone might want to run must duplicate the Linux security system in its own code, so that when running with escalated privileges no security is lost.

 *Quote:*   

> 
> 
> As a curiosity, how do people handle OpenSSL server keys and the signing tools?  Handling OpenSSL server keys absolutely does not require root at all.  Yet I suspect a significant number of people use openssl as root, because the servers need to be started as root anyway and it's a "usability issue" to switch back and forth to create and sign keys - and you may have to sign someone else's keys, which is technically untrusted data.  There's also the fact that if the OpenSSL user gets compromised, no matter which user it is, it's pretty much game over anyway.
> 
> ---
> ...

 

I've never configured a keyserver, but since they're especially focused on security I suspect they do what apache2 does: run a single very simple process as root to handle the low-numbered port issue, and run the keyserver itself as a non-privileged user.

One of my favorite features of Ubuntu is the fact that they've disabled the root account. It's not hard to get a root shell if you know how, but most of the n00bs don't know how, as it's not documented on their site. By the time they figure it out, hopefully they've developed enough common sense to realize they shouldn't have a root login just hanging around for whatever they might want to do.

----------

## eccerr0r

 *1clue wrote:*   

> I remember one time years ago a secretary infected the entire windows-based office by opening an infected email which supposedly contained a video. She did it like 6 times. Small company at the time, but it spammed everybody in it since she had mailing lists for all of it. We kept telling her not to open that email, just delete it.  She kept insisting that she wanted to see the video.
> 
> 

 

That program should not have allowed those privileges by default - that's something the administrator should shut off.  The program has a bug if it automatically granted administrator access to any code that attempted to run.

Stupid is as stupid does.  Did she get fired, just like the root firefoxer (who, though stupid, may not have actually done any damage, unlike disrupting the whole company with spam)?

 *Quote:*   

> You vet code which needs to be run as root. You don't bother with anything special for any other code. You're the one who insists that all code must be usable as root. You're the 'goose and gander' guy, and then you insist that the software should be smart enough to check for security issues.
> 
> In other words, you seem to be saying that every piece of software someone might want to run must duplicate the Linux security system in its own code, so that when running with escalated privileges no security is lost.
> 
> 

 

I'm purely targeting the software development process itself, to stop treating time-to-market as the top goal - take a step back and reduce mistakes.  A whole security system built into each program is not only unnecessary but redundant (if the user runs unprivileged, which they should).  But developers had better take more time to make sure their software doesn't allow race-condition stack smashing (harder to prevent) and disallows access to arbitrary files it doesn't need (should be easy!).  I just don't want devs to assume that their programs will always be run in an unprivileged environment so they can be sloppy with their security.  After all, running Firefox as root in a clean environment without opening any webpages had better have no inherent risks, despite being a questionable thing to do, as you'd just be staring at a blank page.  However, if anyone is skittish about even doing this, I'd suggest deleting firefox altogether.

And as for code bugs in the kernel or in whole-system virtualization, there's not always a good defense against these; they're better left for the developers to handle.  If they release too often, there's no way to constantly vet every release.

----------

## eohrnberger

 *khayyam wrote:*   

>  *eohrnberger wrote:*   I'm intrigued by the idea of running a 'portage checksums' test. Googled a bit, but didn't quite find the magic command line for that.  Any references please? Is this going to check the /usr/portage tree integrity, or the installed files integrity? 
> 
> eohrnberger ... as I think was mentioned earlier in the thread, this is no guarantee (the checksums exist on the same filesystem as the tampered files ... and so may be considered equally suspect). That said, here are a few examples:
> 
> ```
> ...

 

Thanks.  Running this now.  So far it's only turned up conf files that have changed, so, so far, so good.  And, yes, I realize that it's only going to detect files that portage has installed which are different, and not ones that have been added.  Accepting this as a limitation.
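For anyone following along, one common way to run such a check is `qcheck` from app-portage/portage-utils, which compares installed files against the digests portage recorded at install time - shown here as an illustration, not necessarily the exact command quoted above:

```shell
# Verify every installed package, or just one
# (qcheck is from app-portage/portage-utils):
#   qcheck --all
#   qcheck sys-apps/coreutils

# The same record can be checked by hand: portage keeps per-file MD5s in
# /var/db/pkg/<category>/<package>/CONTENTS.  Simple awk version -- note it
# breaks on paths containing spaces:
check_contents() {
    awk '$1 == "obj" {print $3 "  " $2}' "$1" | md5sum -c --quiet -
}
# e.g.:  check_contents /var/db/pkg/sys-apps/coreutils-*/CONTENTS
```

As khayyam noted, the digests live on the same (suspect) filesystem, so a clean result is evidence, not proof.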

----------

## eohrnberger

 *eccerr0r wrote:*   

> 1clue - I can see that you're really into viciously enforcing rules, which is reasonable if the software isn't completely understood - that has its basis, but is still very arbitrary.  It's clear why you're giving all these examples: we know each of them has exhibited bugs in the past, in both the actual server/program AND poor user configuration, both of which will benefit from privilege separation.  A lot of application developers are suggesting privilege separation simply because they are not willing to find and fix bugs that can cause privilege escalation, mostly because they know it's becoming too difficult to know every possibility (and thus not "correct by design").  And again, it doesn't matter if it's escalating from a "VM" environment like HTML to running arbitrary machine code on the machine as a regular user or root - it's still a security hole that needs to be fixed.
> 
> But what really should be the lower bound?  Why not run 'ls' as an unprivileged user as well - why does it get an OK for being run as root?  Granted, 'ls' is simple enough to be fairly well checked, such that specially crafted directories with bad filenames will never cause a buffer overflow and make 'ls' run arbitrary code.  Why shouldn't other software also be subject to similar scrutiny, even if it's much harder?  And if it were carefully inspected, why shouldn't it get the same treatment, trusted the way 'ls' is by most people?  After all, once 'ls' grabs disk blocks and inode contents, getting past permissions, it no longer needs root to process and print on your display - yet nobody runs it under privilege separation, when it too could harbor bugs.
> 
> Now curl/wget must also be run under privilege separation by your rules too?  I suspect so, but I do wonder how many people wouldn't bat an eye before running wget as root, when they wouldn't and shouldn't run firefox as root.

 

It was already installed, and run, but I had not updated its local baseline.  So yeah, I noticed this and made a mental note to update the local baseline once I'm satisfied that the system is OK.  That's gonna take a while.  So far, no additional incidents of weirdness.

----------

## eohrnberger

 *eccerr0r wrote:*   

> Security is always a loss of usability.

 

True.  

But the balance between usability and security is one that each organization needs to find for itself.  The last component in the triad is cost: not only the cost of imposing the security, but also the cost should a system be compromised, the cost of any data that might be compromised, and the cost of confidential data escaping into the wild.

----------

## khayyam

 *eohrnberger wrote:*   

> But the balance between usability and security is one that each organization needs to find for itself.  The last component in the triad is cost: not only the cost of imposing the security, but also the cost should a system be compromised, the cost of any data that might be compromised, and the cost of confidential data escaping into the wild.

 

eohrnberger ... a great majority of such costs are in fact negative externalities. In terms of the major software vendors (and subservient industry), who bears the cost of "compromise [... and ...] data escaping into the wild"? In fact, they not only avoid bearing the cost, they can profit from it - an entire service industry is built around "usability ... and security [sic]" ... with linux being similarly driven (a trend which can be seen in how linux has gone from being a user-driven phenomenon to being almost entirely under the governance of vendors who have been able to extract revenue from others' labour, and then control and monopolise the market).

best ... khay

----------

## 1clue

 *eccerr0r wrote:*   

>  *1clue wrote:*   I remember one time years ago a secretary infected the entire windows-based office by opening an infected email which supposedly contained a video. She did it like 6 times. Small company at the time, but it spammed everybody in it since she had mailing lists for all of it. We kept telling her not to open that email, just delete it.  She kept insisting that she wanted to see the video.
> 
>  
> 
> That program should not have allowed those privileges by default.  Administrator shutoff.  Program has a bug if it automatically allowed administrator access of any code that attempts to run.
> ...

 

In that case, all software courtesy of Microsoft. And I didn't hold the position then that I hold now, nor were any of us connecting to confidential data of the nature I currently work with.

 *Quote:*   

> 
> 
>  *Quote:*   You vet code which needs to be run as root. You don't bother with anything special for any other code. You're the one who insists that all code must be usable as root. You're the 'goose and gander' guy, and then you insist that the software should be smart enough to check for security issues.
> 
> In other words, you seem to be saying that every piece of software someone might want to run must duplicate the Linux security system in its own code, so that when running with escalated privileges no security is lost.
> ...

 

Dude. You're using a web browser as your example code that you think should be flawless. I can think of no more inherently insecure application except if we knew of a repository of pure malware that we could download and infect our own systems voluntarily. That's my point! A browser inherently runs code from untrusted sources. Video. Javascript. Dozens of scripting languages and markup which is semi-executable or completely executable. The very nature of the app makes it inherently unsafe to run with escalated privileges. It doesn't matter how secure you THINK the browser code itself is. Somebody always knows more than the programmers and the code review people.

Virtual machines consist of a software application which, possibly with hardware assistance, can pretend to be a physical computer. Think about that for awhile.

You still insist on calling all of this "bugs." Stop it. Yes, bugs exist and yes some of them are security vulnerabilities, but the real danger is malware. Malware is not a bug. Malware could be theoretically flawless and therefore bug free, but it's still malware.

----------

## 1clue

So if you take a live hand grenade and wire the pin in so it can't be pulled out, does that make it safe for your kids to play with it?

----------

## eccerr0r

The premise is that bugs allow malware to do their stuff.  Malware cannot run if the software does not allow it to run.

If the 'bug' is the person deciding to run the malware in a terminal or double-clicking on a video icon, that person is the bug.

----------

## 1clue

 *eccerr0r wrote:*   

> The premise is that bugs allow malware to do their stuff.  Malware cannot run if the software does not allow it to run.
> 
> If the 'bug' is the person deciding to run the malware in a terminal or double-clicking on a video icon, that person is the bug.

 

You presume that:

All bugs are known.

All bugs are known to the people who are writing or reviewing the software being vetted.

All potential attack vectors are known.

Team White is bigger and/or more knowledgeable than Team Black.

None of the above points are true. In fact, there is much more monetary incentive to be part of Team Black, and many more "unconventional" players on Team Black.

Bugs exist in software which are not known to be bugs by even one human. For a human to call a code snippet a bug, a human must understand how the snippet can operate in a way other than designed. For the bug to be called a security vulnerability, the human needs to understand how, at least in theory, the code could be used to gain unauthorized access, or at least to prevent normal operation of the system containing the bug.

Knowledge that a bug or vulnerability exists is not required for the system containing it to be affected badly.

For the developers and reviewers to address these bugs, they need to know about the bugs. Bugs discovered by Team Black stay with Team Black. Bugs discovered by Team White are systematically shared, and some of the people being shared with secretly belong to Team Black.

The premise that ALL software needs to be vetted the way you describe is impossible for several reasons, many of which I've mentioned above. Since you seem impervious to those reasons they won't be mentioned here again.

----------

## 1clue

 *eccerr0r wrote:*   

> ...If the 'bug' is the person deciding to run the malware in a terminal or double-clicking on a video icon, that person is the bug.

 

So even if you understand that the browser itself is not perfect, you DO understand why running a browser as root (which has the security system bypassed) is a bad idea. Why are we arguing? Your point has no merit. Coders already spend a lot of effort on ensuring their software is not only functional but bug free and secure under normal circumstances.

You're arguing because you like to argue, not for any anticipated gain for software security.

----------

## jonathan183

 *eohrnberger wrote:*   

> made a mental note to update the local baseline once I'm satisfied that the system is OK.  That's gonna take a while.  So far, no additional incidents of weirdness.

 

I'm still interested in how the system got compromised. Have you got any closer to working this out? Have all the potential evidence and clues been destroyed?

I suggest you boot from a live CD/DVD when doing any comparison of checksum values, so you know you're getting true values from tools you can trust. If you have a clean system with the same USE flags etc. that you can compare with, that will save you having to build the whole system again in a chroot. Consider everything bad until you can confirm with the package manager and checksum values that it is OK. This will still leave you with quite a bit of investigation work, but you may get lucky and identify only a small number of binaries which fail to match.
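A minimal sketch of that comparison, hashing the suspect system with the live environment's own (trusted) tools; the device name, mountpoint, and the choice of /usr/bin are examples:

```shell
# From trusted live media, mount the suspect root read-only first:
#   mkdir -p /mnt/suspect && mount -o ro /dev/sda3 /mnt/suspect

# Hash a tree and strip the mount prefix so listings from different
# machines line up for diffing:
hash_tree() {   # usage: hash_tree /mnt/suspect   (or "" on the clean box)
    find "$1/usr/bin" -type f -exec sha256sum {} + | sed "s| $1| |" | sort
}

# On the live system:    hash_tree /mnt/suspect > suspect.sha256
# On the clean machine:  hash_tree ""           > clean.sha256
# Then:                  diff clean.sha256 suspect.sha256
# Any output is a file worth investigating.
```

Running the hashing tools from the live media matters because the suspect system's own sha256sum, find, and shared libraries can't be trusted.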

----------

## 1clue

For the record I'm interested in the real thread of what's going on here too. I'm not just in it to pointlessly argue with eccerr0r.

----------

## Tony0945

 *1clue wrote:*   

> For the record I'm interested in the real thread of what's going on here too. I'm not just in it to pointlessly argue with eccerr0r.

 

Good! Please move the philosophical discussion to Gentoo Chat

----------

## 1clue

As far as I'm concerned it was over at my first post.

----------

## Hu

I could split out the philosophical posts, but some parts of the thread have posts that weave between philosophy and the original topic, so splitting could make the conversation harder to follow.  If the philosophy debaters want to continue, I'll try to carve up the thread and leave appropriate cross-links.  Otherwise, I'll leave the posts all in one thread.

----------

## Tony0945

 *Hu wrote:*   

> I could split out the philosophical posts, but some parts of the thread have posts that weave between philosophy and the original topic, so splitting could make the conversation harder to follow.  If the philosophy debaters want to continue, I'll try to carve up the thread and leave appropriate cross-links.  Otherwise, I'll leave the posts all in one thread.

 Thank you, Hu.  The discussion is interesting but getting away from the OP's problems.

----------

## jonathan183

 *Hu wrote:*   

> I could split out the philosophical posts, but some parts of the thread have posts that weave between philosophy and the original topic, so splitting could make the conversation harder to follow.  If the philosophy debaters want to continue, I'll try to carve up the thread and leave appropriate cross-links.  Otherwise, I'll leave the posts all in one thread.

 

For me

1. Establishing the method of system compromise is interesting and of use to the community more generally, as is the escape from the VM.

2. Use of root: why it is a bad idea for things like surfing the net may be of use; what mitigation software should provide, and whether it should, is probably a separate topic.

3. Forensic investigation may be of use but is probably already better covered elsewhere - a simple don't trust anything on the system certainly applies in this case.

4. Recovery - again probably already covered elsewhere - a fresh install is the only way to be sure.

I am particularly interested in 1 above, so 3 is also relevant to help establish how. I think OP is already aware of 2 and appreciates 4 even though they may be painful.

----------

## eccerr0r

I'm only claiming that I have my doubts that root-run firefox is the entry method, and I don't want the OP to give up and assume this is the root cause just because some people think "it's a bad idea, so it must be the entry method."  This is the whole reason why this philosophical problem exists.

Things we now understand:

1. Rotating backups are good, and a reinstall is the only way to safely get rid of contamination.

2. Running firefox / adobe flash as root is a very bad idea

2a. ...but is NOT a guarantee to get infected by malware.

3. Likely this is a privilege escalation (or "VM" escape) of some sort versus explicitly running a trojan horse.

4. Crowdsourcing forensic investigation is hard.

5. We still have no definitive entry method.

----------

## eohrnberger

Just reporting in.

So far so good.  Systems seem to be functioning as expected.  MRTG is not noticing any spikes in outgoing traffic.  All looks normal.

----------

## radg

It's possible the malware is based on this proof of concept, as there is a similar /etc/motd message:

https://github.com/jdsecurity/CryptoTrooper

In which case, there are decryption tools provided.

----------

## destroyedlolo

Hello,

In order to detect intrusions, and specifically system changes, do you think using inotifywatch to monitor the system's critical parts is a good idea?

Laurent
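A sketch of the kind of monitoring meant here - `inotifywait` and `inotifywatch` come from inotify-tools (sys-fs/inotify-tools on Gentoo), and the watched paths are examples. One caveat: malware that has already gained root can simply kill the watcher, so this helps most against unprivileged tampering or as an early-warning tripwire:

```shell
# Log individual filesystem events under a few critical trees
# (runs until killed):
#   inotifywait -m -r -e modify -e create -e delete -e attrib \
#       --timefmt '%F %T' --format '%T %w%f %e' \
#       /etc /usr/bin /usr/sbin
#
# Or just summarize event counts per watched path over an interval:
#   inotifywatch -r /etc /usr/bin /usr/sbin
```

Note that recursive watches on large trees consume one inotify watch per directory, so /proc/sys/fs/inotify/max_user_watches may need raising.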

----------

## Maitreya

This post made it to HN...

----------

## bunder

It was there 7 months ago too.   :Rolling Eyes: 

----------

