# Opportunity to migrate our little company to Linux

## Elleni

We are ~10 employees using mostly office applications, and our boss finally wants to upgrade our IT infrastructure. My technician colleague and I convinced him to migrate our server and clients to Linux while also replacing most of the hardware, as it is quite aged. But since my colleague has only very limited Linux knowledge, I will need to implement central user and access-rights administration outside the console, so I would like to know what my options are. Searching the web, it seems OpenLDAP with a web interface would be a possibility. Is that the way to go, or are there other, better-supported or easier options? And what would be a good GUI to maintain such a setup?

We will need one host that will run some virtual machines for infrastructure services (DNS, DHCP, backup, LDAP, web server, ...), one Windows VM for a core application that only runs on Windows, a file and print server, maybe a LAMP server, and some desktops. A requirement is that user management is not configured locally but centralized, as it would be in a Windows domain, so access rights to files and user administration should be doable through a web or GUI interface.

Furthermore, we plan to set up application virtualization, so that apps like the browser run on the server rather than locally. Is it possible to do this with the Windows application as well, so that it runs on the server but its window opens on the desktop and, to the user, seems to run locally? We will probably need a real Windows Server with RDS licenses for that, right? I don't think it would be possible to do this on, say, a Windows 10 Pro VM.

The whole setup will be carefully planned and implemented with reasonable/minimal maintenance in mind: acquiring identical hardware, building binaries centrally and distributing them to the clients, and so on.

My boss now asked me to propose which distribution to use for desktops and servers, and why. 

As Gentoo is the first and only distribution I have used, for many years now, I will propose Gentoo because of the benefits we all know, and because I acquired all my Linux know-how on my Gentoo boxes at home. Besides years of desktop use, I also set up a little VPS with Nextcloud, a mail server, Apache and Drupal some years ago. My boss will have security in mind, but also wants the setup to be well supported in case we are no longer available, so I'm afraid he will also consider RHEL, as there is professional support for it. But there is very limited know-how on our (the technicians') side, and everything would have to be learned from scratch.

I am looking forward to your comments and suggestions/recommendations.

----------

## szatox

 *Quote:*   

> Furthermore we plan to setup application virtualization so apps like browser are rather run on the server than locally. Is it possible to get this done also with the windows application, that it is run on the server but its window is opened on the desktop and for the user seems to run locally? We will probably need a true window server for that with rds licences, right - I don't think it would be possible to do this on a lets say windows 10 prof. vm? 

X forwarding for Windows sounds like Citrix. I don't know much about it; I've used it at a big corporation, and it was just as painfully slow as everything else there.

I know there are some free VNC servers for Windows that would allow you to log in remotely without triggering the limit of two remote users allowed by the typical Windows Server license. No idea about the legal aspects, though.

Managing users with AD/LDAP/NIS/Samba etc. is a funny thing... It's convenient until you find yourself unable to access your domain controller, and everything dies.

Make sure you have both a redundant setup and validated backups you can easily restore even with your controller down. Don't lock yourself out.

The same goes for the network: redundant switches, redundant cables, loop protection.

 *Quote:*   

> My boss now asked me to propose which distribution to use for desktops and servers, and why. 

 

The company I work for uses Ubuntu because it's free, easy, and there is a readily available pool of admins who already know Debian.

Honestly, the differences between distributions are not very big: if you know one package manager, you can learn another one. The userland is the same, system configs are mostly the same*, and you will need manuals for application-specific stuff anyway.

That said, we've been doing some really weird stuff too, which is an uphill battle on any binary distro. Somehow I feel more comfortable building my own packages on a system designed with that in mind, so IMO Gentoo would be a perfect pick for those cases. I doubt many companies need this level of flexibility, though.

*Except for Red Hat, which does everything in its own, clunky way.

 *Quote:*   

> I'm afraid he will also consider RH as there is professional support. But there is very limited knowhow on our (technicians) side, and everything would have to be learned from scratch

 

The good news is: almost all the important things you learned breaking and fixing your Gentoo will be relevant to any distribution you will ever work with. As a seasoned Gentoo admin, you surely have a decent understanding of GNU, Linux, and computing in general. Tap into that experience for courage; you already know the enemy, even if it goes by a different name. Even if it's that clunky Red Hat.

----------

## Elleni

Hi szatox, and thank you for your comments. I wonder if OpenLDAP is overkill: for ten users in perhaps 4 different groups, it would probably be easy enough to centralize user management by maintaining /etc/passwd and /etc/group on a single box. I think I can easily manage that, but I don't know if there would be a GUI for my co-worker, who is an "experienced" supporter, but in the Windows world, so he is accustomed to point-and-click administration. That's why I had considered OpenLDAP in the first place. If I can convince him to make friends with the console, that would be best. I would still need to convince my boss that he is on the safe side if we document everything in great detail, so he won't need to be afraid that his whole business is too dependent on the two of us system administrators who will set everything up.

What in my opinion speaks against most other distributions, and thus for Gentoo, is my lack of experience with systemd; I have used OpenRC from the beginning and have avoided that beast until now.

I did not think of a VNC server for Windows, but this could do the trick, as the goal would be to not need to log in to the Windows box, but only forward the application window. I will have a look at those.

Redundancy in every aspect and validated backups are a must, that's absolutely true.

I am looking forward to this, as it is a nice opportunity to set all this up from the beginning and try to do it the right way. It certainly will be a good learning opportunity.

----------

## Jaglover

How about this: build a powerful enough server, with RAID, to have at least some redundancy. This will host all user files and their home directories. All workstations mount users' HOME over NFS. Thus it becomes irrelevant which machine a particular user sits at; they will always be in their home directory with all their files and settings intact.
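A minimal sketch of that layout; the subnet and the server name "fileserver" are invented for the example:

```
# server side, /etc/exports: export /home to the office LAN
/home   192.168.1.0/24(rw,sync,no_subtree_check)

# each workstation, /etc/fstab: mount it over the local /home
fileserver:/home   /home   nfs   rw,hard,noatime   0 0
```

After editing /etc/exports, `exportfs -ra` on the server picks up the change; autofs is worth a look if you would rather mount homes on demand at login.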

----------

## Elleni

Thanks Jaglover, good point and definitely something I should and will consider in our planning.

----------

## C5ace

Hi Elleni,

I remotely support a number of small commercial users. One has a LAN with 7 desktops running XFCE4. The server has 16 GB RAM and 3x 1 TB disks as RAID 5, and runs OpenOffice, Thunderbird and Firefox. Mail, the web server and webERP run on Debian (Devuan) with ISPConfig, installed in VirtualBox. Backup is to an identical box synced using rsnapshot. Remote and local administration is done via SSH terminal, Webmin and ISPConfig. The backup box also acts as a standby server.

The users connect to their home directories on the server over NFS. User applications are accessed via X forwarding.
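The X-forwarding part is a one-liner per application; the user, host and application names below are made up for illustration:

```shell
# runs Firefox on the server, window appears on the local desktop
ssh -X alice@appserver firefox

# -Y (trusted forwarding) if an app trips over the X security extension
ssh -Y alice@appserver soffice
```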

WebERP: http://www.weberp.org/

ISPconfig: https://www.ispconfig.org/

The Gentoo boxes use OpenRC. No systemd, no Windoze, no Samba.

I recommend SUSE Linux as the distribution with paid support: www.suse.com.

----------

## Elleni

Thanks C5ace, I will check them out. With webERP it will be difficult to convince my boss to change from the one we are using now (SelectLine), as the business totally depends on it. But why not set it up, see if it fits our needs, and migrate later.

----------

## szatox

 *Quote:*   

> I did not think of VNC server for windows - but this could make the trick as the goal would be to not need to login to the windows box, but only X-forward the window, I will have a look at those. 

 Er... VNC does require you to log in; it's a remote desktop. Citrix does not: it is able to detach the GUI from the application and display it over the network. I've never tried to set it up, so I don't know what it takes, but it behaves just like X forwarding on Linux.

 *Quote:*   

> I wonder if openldap is overkill, as for ten users separated in perhaps 4 different groups it would probably be easy enough to centralize user management by mapping /etc/passwd and /etc/group and thus centralize their management on a single box.

 If centralized authentication is not a hard requirement, you may consider doing something similar to what my company does: local accounts created with a data-center automation tool.

We have a version-controlled database holding the configuration of all machines, and we apply those settings as needed. Losing the database would be bad, but already-configured machines don't require it to remain operational.

Adding a new admin requires 3 steps in this case: enter the account information, add the account to a group (associate the account with an "admin" function that exists on all servers), and apply the changes to all machines in the group.
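Those three steps can be sketched with nothing more than a shell loop; the host names, the account, and the "admins" group are invented for the example, and a real automation tool (Ansible, Salt, ...) would add the version-controlled database on top:

```shell
# deploy_user USER HOST... : print the ssh commands that would create the
# shared "admins" group and the account on each host. Pipe the output to
# sh to actually run them.
deploy_user() {
    user=$1; shift
    for h in "$@"; do
        echo "ssh root@$h groupadd -f admins"
        echo "ssh root@$h useradd -m -G admins $user"
    done
}

# Steps 1 and 2 are the arguments, step 3 is the loop:
deploy_user jdoe web1 file1
# deploy_user jdoe web1 file1 | sh   # uncomment to apply for real
```

Piping the preview through `sh` is the "apply changes to all machines" step.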

----------

## Elleni

Hello szatox, 

I will test both. I thought that if a VNC server exists for Windows, maybe it can also forward just an application window, as is possible under Linux, but good to know that this can be achieved with Citrix. I will have to check the pricing, though.

As for central administration, I will have to check how we will implement this. Right now, working with a Windows workgroup and local accounts, we have to create the same user accounts again on the file server with identical passwords to enable file access, and even with few users it's a pain in the ass to administer, as every password has to be changed in multiple places. That's why we had started thinking of a Windows domain, and later, when our boss began considering Linux as an alternative, I started to think we would need OpenLDAP. But now I think that could be overkill, so yes, I am open to ideas. For the moment I think we could do well with homes on the file server, mounted over NFS on the clients, and maybe just a central passwd and group file shared out to the clients. That should let us centrally administer the accounts and their access rights. I will look around to see what data-center automation tools exist, and maybe find something that fits our needs.
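If you do try the shared passwd/group file route, it is worth a sanity check before copying the file out, since one malformed record can lock users out of every client at once. A small sketch, with made-up file names and the actual distribution step left as a comment:

```shell
# A valid passwd(5) record has exactly 7 colon-separated fields.
check_passwd() {
    awk -F: 'NF != 7 { print FILENAME ": bad record on line " NR; bad = 1 }
             END { exit bad }' "$1"
}

# Demo file; in real use this would be the centrally maintained copy.
printf 'alice:x:1001:100:Alice:/home/alice:/bin/bash\n' > /tmp/passwd.central

if check_passwd /tmp/passwd.central; then
    echo "ok: safe to distribute"
    # rsync -a /tmp/passwd.central client1:/etc/passwd   # invented target
fi
```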

I also had a look at the FOG Project for backup and deployment of clients; seems like a cool thing to look at more closely, but again, I'll see if it really serves us.

----------

## Elleni

Questions: talking about redundancy, I am starting to think it could be better to buy two powerful workstations for the money we would spend on a new server. We could then have two identically configured "servers", one for production and one for testing/standby/compiling, and switch from one to the other transparently.

- How can we implement a switch from server A to server B transparently, meaning that the users would not notice? Is "bonding" the right keyword to look for, or would one need clustering for that? Naturally we would have to make sure that snapshots of all VMs on one host are synced over to the second host, and that the file server(s) are in sync, as the Windows VM with its core app and SQL DB would also have to be in a sane state to take over once we switch from server A to B...

- Besides regularly snapshotting the KVM/QEMU VMs on the host and normal backups of the file server, something similar to Windows' file version history would be nice to have, so users could navigate to a folder, right-click a file, and access prior versions of it. I have seen that Nextcloud has file version history, so that might be one way to do it. What are your suggestions for a not-too-complex but still good backup solution? Right now I am looking at tools like Back In Time, luckyBackup, Burp, and the Plasma- and GNOME-integrated backup tools, and crawling the web for Linux backup solutions, but it would be nice to hear what solutions you experts would suggest.

----------

## 1clue

I'm not exactly going to answer your questions directly. I'm going to ask some more questions mixed in with my own observations, and you may want to take them into account.

Choice of distro is both minor and major. You're choosing stability vs new features, support vs fly-on-your-own, certifiability vs internal-only. You can choose what you want based on your needs.

Are all your company's users on-board with this change? At least interested? If not, you'll likely experience some angst and things will not work out.

It's a mistake to try to emulate every feature of what you're moving from. Mac OS X is different from Windows, and people who switch between them can figure it out without too much difficulty. Same with iphone and android. It's better to show them how it works in general, then do one-on-one or even group lessons after as needed.

Use a real VPN instead of opening ports to your local network. OpenVPN is fine.

What functionality do you actually need from the Windows world? If you have a person creating fantastic spreadsheets or presentations which are shared with other companies AS A SPECIFIC DOCUMENT TYPE then you may need a Microsoft product. Pay attention and learn to recognize a real need, and what can be an alternate approach or alternate product.

A device like a Synology NAS can "speak" to UNIX, Mac and Windows, each in their native modes. You may want one rather than try to figure some things out.

I've been using Linux since 1995. I've been the enthusiastic crusader, the steadfast support guy, and the alternate "free" solution guy. In my experience, both in the office and at home, people don't like it when someone comes in and dictates a change they will have to adapt to. If you have time (it seems to me you don't), it's better to simply use the distro you want in a productive but not obnoxious way; when people see that, they'll be curious and may want to try it themselves.

Personally I use many distros. My company mostly uses Ubuntu Server LTS, because it's a decent mix between bleeding edge and stable, and it has a very minimal default install. I use Gentoo if there's a special circumstance not covered by a tested distro.

If you're going for some sort of compliance certification (e.g. PCI DSS for handling credit cards), you need to be careful. Your server will need regular security audits, and you will be responsible for applying the patches. Most distros with paid support options do pretty well at shipping fixes, but sometimes the paperwork from the audit can be painful.

Systemd: I saw people mention it here. Personally I don't like the architecture, but because we use Ubuntu I have to know it. From a system administrator's perspective, systemd is a simple thing to learn.

----------

## 1clue

 *Elleni wrote:*   

> Questions: Talking about redundancy I am starting to think it could be better to buy two performant workstations, for the money we would get a new server, so we could have two identically configured "servers", one for production and one for testing/standby/compiling and switch from one to another transparently. 
> 
> 

 

IMO you're looking at 3 identically configured "servers", one of which (testing) can likely be a VM. The other two would be a primary and a backup. You upgrade the backup first, then switch it to primary before updating the old primary.

 *Quote:*   

> 
> 
> - How can we implement a switch from server a  to server b transparently - meaning that the user would not notice? Is "bonding" the right keyword to look for or would one need clustering for that? Naturally we would have to take care that snapshots of all vm's of one host are synced over to the other second host and also that fileserver(s) are in sync as we would have to take care that the windows vm with its core app and sql db would also be on a sane state to take over once we switch from server a to b...
> 
> 

 

Enterprise environments use hardware to handle this invisible failover switching. Just saying.

Depending on what you're switching, it can also mean software which is aware of failover nodes and can share session information across them. There are different levels of high availability; this is a separate topic for each service handled this way, and for all the hardware involved.
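On the "is bonding the right keyword" question above: bonding only aggregates links on a single host. The usual starting point for moving a service address between two boxes is a floating IP via VRRP (keepalived); Pacemaker plus DRBD is the heavier option when the data itself must be replicated. A minimal keepalived fragment, with invented interface and address:

```
# /etc/keepalived/keepalived.conf on the primary; the standby uses
# state BACKUP and a lower priority, and takes over the IP as soon as
# the primary stops answering.
vrrp_instance fileserver_vip {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.1.50/24
    }
}
```

Clients always talk to 192.168.1.50 and never learn which physical box currently holds it.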

 *Quote:*   

> 
> 
> - Besides regularly snapshotting kvm/qemu vms on the host and normal backups of the fileserver, something similar to file version history in windows would be nice to have so users could navigate to a folder and maybe rightklick a file and have access to prior versions of it. I have seen, that nextcloud has fileversion history so that might be a way to do it. What are your suggestions for not too complex but still good backup solution? Right now I am looking at some tool like back-in-time, lucky-backup, burp, plasma and gnome integrated backup tools and crawling the web for linux backup solutions but it would be nice to hear what solutions you experts would suggest.  

 

For actual text files (even if they have images referenced too, like html) I would recommend git for version control. 

For backups, whatever you choose, you need to actually test both the backup and the restore before you adopt the solution: both for feasibility (does it work at all?) and for the chances of getting your office back into production in the shortest time. I personally have abandoned purpose-built third-party software with lots of features in favor of straight uncompressed network copies onto raw SATA drives, using my own scheme.
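The "does the restore match?" part can be made mechanical with a checksum comparison; this sketch fakes the backup/restore cycle with a plain copy, so every path in it is a stand-in:

```shell
SRC=/tmp/verify-src
RESTORE=/tmp/verify-restore
mkdir -p "$SRC"
echo "invoices 2019" > "$SRC/invoices.txt"

# Stand-in for your real backup + restore cycle:
rm -rf "$RESTORE"
cp -a "$SRC" "$RESTORE"

# Checksum every file on both sides and compare the lists:
( cd "$SRC" && find . -type f -exec md5sum {} + | sort ) > /tmp/src.sums
( cd "$RESTORE" && find . -type f -exec md5sum {} + | sort ) > /tmp/restore.sums
diff /tmp/src.sums /tmp/restore.sums && echo "restore verified"
```

Any missing, extra, or silently corrupted file shows up as a diff line instead of being discovered the day you need the backup.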

----------

## 1clue

Come to think of it, there may be an alternate approach to your setup, which will give you flexibility to try new things:

First approach: VMs

For your entire "server space" you could get 2 pieces of hardware, and provision them as a KVM/QEMU hypervisor cluster. Each physical box has a minimal operating system and just enough to run as a virtual machine host. This can be Gentoo or something else, including VMware but of course then it won't be KVM/QEMU anymore.

From that point, you could create your "bare" Gentoo guest. You can plan this nicely by sharing many of the common files, like the Portage directories not specific to each individual host, so that updating on the "owner" system updates every guest.

To create a new guest, clone the bare guest and modify it.
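With libvirt, that clone step is a single command; the guest names here are assumptions:

```shell
virsh shutdown bare-gentoo                  # clone only a powered-off guest
virt-clone --original bare-gentoo \
           --name file-server --auto-clone  # copies disks, regenerates MAC and UUID
virsh start file-server
```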

You can do well enough for a 10-person shop without any payware. Or if you wanted to get fancy there are commercially available apps to let you manage your cluster easily.

The benefit here is that you can fire up a trial of new software easily, while leaving the existing production software and its failover alone. Once you've tested your new setup, you can retire the old setup and adopt the new one.

Second approach: Containers/docker/rancher

The new trend toward microservices is controversial; I'm not sure I'd want to use it outside of a good firewall, but internally it can be fine. It is all based on Linux containers, which you can look up if you're interested; there are also some really interesting discussions about this on the forum. Docker, Rancher, Kubernetes: it's all interrelated. Or you can do it the Gentoo way and use the host OS as the core of your containers.

In any case, the rancher approach can be used concurrently on the same system as the QEMU approach.

Hardware:

The tried-and-true approach toward virtualization is to get the biggest, heaviest, hottest fire breathing dragon your boss can afford. There are benefits to this approach and there are drawbacks too, like a really noisy, hot server room and a bigger electric bill. But if you have a huge database you need to keep going, this may be necessary for you.

Another approach is to get a lightweight server box. I have an 8-core Atom C2758 box running Gentoo with KVM/QEMU on it. In some scenarios it's faster than my i7 desktop, and it's ALWAYS quieter and cooler. I got mine years back, but lately Antsle has been marketing a very similar box specifically as a VM host, and it's based on Gentoo.

Example: I have an i7 with 5x Realtek NICs, and this C2758 Atom with 7x Intel NICs. The Atom box is faster than the i7 at networking tasks. When I run a test between the two boxes with multiple cables directly attached, the CPU load on the Atom stays pretty much constant, very close to idle, while the i7's load goes up quite noticeably, even though the only thing happening is my test transfer. At compilation, though, the Atom is less than half the speed of the i7. My particular chip has hardware acceleration for encryption and compression, and the entire device was designed with networking in mind. If you are contemplating the Atom route, browse https://ark.intel.com/content/www/us/en/ark/products/series/97941/intel-atom-processor-c-series.html for something you like, and then pick a system that has one (they are soldered to the motherboard) from somewhere like SuperMicro (I don't work there, they just get my money): https://www.supermicro.com/products/motherboard/Atom/

I'm going to show 2 recommendations for Atom-based boxes. The first is the one I have, and I'm going to list what's wrong with it IMO. Then I'm going to give an alternative, and why I think it's better.

https://www.supermicro.com/products/motherboard/Atom/X10/A1SRM-LN7F-2758.cfm

This is a SuperMicro board based on the C2758 Atom processor. It has 7x Intel gigabit NICs. It was originally designed to be one of those switches we were talking about earlier. It supports a max of 64 GiB RAM and has 2x SATA3 ports. The chip has a ~20 W TDP; the 1U cabinet I got has, I think, a 100 W supply. It's really nice hardware, with a couple of drawbacks:

It does NOT have VT-d support, so you can't pass an entire NIC through to a guest, for example. You have to configure the NIC in the host OS instead.

It has IPMI management hardware in it. This is actually a benefit to me, but I'm listing it here because many people freak out about a closed-source BLOB which can do things even when the power is off. So you need to manage that risk by understanding it.

It has some flaws that required the board to be sent in for repair. The good news is that those flaws will have been fixed in anything you buy now.

It's an Atom processor. Compilation is much slower than an i7 would be, for example. 

My all-in cost was about USD 1100. I got 16 GiB of registered ECC memory. The board allows both ECC and non-ECC memory, but if you're going to run a server I strongly recommend registered ECC.

https://www.supermicro.com/products/motherboard/atom/A2SDi-H-TP4F.cfm

This is an example of what I would have bought had I waited a few more years. It's still an Atom board, but this one has 16 cores, 12x SATA3 ports and 4x 10-gigabit ports. It also supports all the really neat virtualization options for passing hardware through to guests that I wish I had on my C2758 box. And it can take 256 GiB RAM.

Why Atom?

Here's the thing: a small shop's server requirements tend to be served better by a bunch of small cores than by a few big cores.

Let's say you get a 4-core E3 or something as your hosts. You have 6 VMs on each, running constantly. There is constant context switching just to keep these boxes going.

If you get a 16-core C3000 box, you can dedicate a number of cores to each guest and still have some left over for the host OS. Those cores only context-switch when the guest OS has a context switch, and it's not necessarily a full-on context switch.

These C3000 boxes (and the earlier C2000 boxes) are designed for communications. The NICs are really good Intel ones using the igb driver. Most networking tasks are handled by the NIC itself with minimal interrupts, which means your CPU is left alone until the transfer is done. The same goes for the SATA3 ports. Everything is real server hardware. These boxes were designed to replace much bigger boxes and cool down the server room.

The caveat is that some things, like compilation, are definitely slower than the same money put into an E3.

I've been bouncing around as I write this, so I hope it's coherent. If not I'll edit after the post.

----------

## 1clue

Sorry to keep spamming the thread, but I've put some thought into this and have worked for quite a while in a scenario much like the OP's. This actually resembles what I would like to do.

If it were me provisioning this setup, this would be my layout:

2x or 3x C3xxx systems, each with:

- 1 or 2 big sticks of registered ECC memory. The reasoning is that you can upgrade RAM later by adding sticks rather than replacing everything.

- An SSD for the host OS.

- An SSD for the guest operating systems.

- A few spinners for data. Possibly as RAID, but personally I'd keep them stand-alone and use LVM2 for volume management, with separate volumes for host and guests.

- Several NICs if they're gigabit; otherwise one 10 Gbps NIC is fine.

- Provisioned such that N-1 boxes can host all of your VMs, so if one box dies you can still run your office.

- Your choice of core count: 2, 4, 8 or 16. Size appropriately.
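The LVM2 volume management mentioned above could be laid down roughly like this; device names and sizes are invented, and it has to run as root on the real disks:

```shell
pvcreate /dev/sdb /dev/sdc          # the data spinners
vgcreate data /dev/sdb /dev/sdc
lvcreate -L 200G -n guests data     # VM disk images
lvcreate -L 500G -n homes  data     # NFS-exported home directories
mkfs.ext4 /dev/data/guests
mkfs.ext4 /dev/data/homes
```

Leaving free extents in the volume group is the point: you can grow either volume later with `lvextend` instead of repartitioning.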

A really good switch, either managed or smart:

- Capable of 10 Gbps on at least 2 general-purpose ports, one for each C3xxx system.

- If you get C3xxx systems with only gigabit ports, get more ports on the switch so you can bond the NICs.

- 2x switches which can talk to each other (managed) is better, for failover reasons. In that case you want twice as many open ports, because if one fails you need to move all the cables onto a single switch.
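On the Gentoo/netifrc side, bonding the gigabit NICs is a short stanza in /etc/conf.d/net; interface names, mode and address are assumptions, and the switch ports must be configured to match:

```
# /etc/conf.d/net
config_eth0="null"              # slaves carry no address of their own
config_eth1="null"
slaves_bond0="eth0 eth1"
mode_bond0="802.3ad"            # LACP; needs a matching port-channel on the switch
miimon_bond0="100"              # link monitoring interval in ms
config_bond0="192.168.1.10/24"
```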

A security appliance:

- A really good one, not just a standard home/office model.

- With a good VPN.

- Building your own IS an option; the C3000 2-core or 4-core setups would be great for up to gigabit internet.

I hate Wi-Fi. If you must have it, I would put it on its own VLAN with no access to your critical systems.

I'm really sold on these Atom systems. The one I have has exceeded my expectations, except with regard to VT-d. They are very quiet and can be built fanless. They don't do the heavy lifting, but if your business is mostly file serving and some lightweight apps and webapps, your office is likely EXACTLY what these systems were designed around. Mine handles small VMs every bit as well as bigger hardware does, and seems to have less latency.

----------

## pjp

 *Elleni wrote:*   

> my colleague only has very limited knowledge in linux
> 
> My boss now asked me to propose which distribution to use for desktops and servers, and why. 
> 
> my boss will have security in mind but also that it should be well supported, in case we would not be available anymore, I'm afraid he will also consider RH as there is professional support. But there is very limited knowhow on our (technicians) side, and everything would have to be learned from scratch

  The responsibility of your recommendation ought to be aligned with how it benefits the business. Your recommendation will almost certainly carry risks for the business; how are those risks balanced by benefits? Your Linux knowledge may reduce some risks, possibly more so if you choose your preferred OS. How is that choice balanced against the ability of the team to support the new environment? Were I the owner or principal decision maker, that would be on the short list of primary factors in my decision. Related to that is local knowledge of how hard it would be to recruit replacement or additional staff. Also consider that the "wrong" mistakes could make future proposals more difficult.

----------

## C5ace

Elleni:

Have a look at these articles:

https://www.suse.com/documentation/sle_ha/book_sleha/data/book_sleha.html

https://www.suse.com/products/highavailability/features/

https://www.suse.com/documentation/sle-ha-12/singlehtml/book_sleha/book_sleha.html

https://www.suse.com/products/highavailability/

I believe it's all open source. Probably overkill for you.

----------

## 1clue

One thing you have to think about in a situation like this is, what happens if you get hit by a bus?

Absolutely nobody is indispensable in real life. Your business should look at things the same way. If everyone does their job right, then anyone can be replaced in a relatively short time.

It's not out of the question to use Linux as your company's chosen operating system. There are some distros specifically set up for that. They all come with some sort of customer support, including paid support. They all have certification programs for people to indicate qualifications to maintain that distribution.

While I think any of us might be tempted to make Gentoo the distro of choice, it may be more responsible to the company to choose a different one. Which one depends on a careful study of what your company needs and which distros best provide the solutions. Not in terms of software availability, because all of the distros provide everything you need to run a business, but in terms of support, stability and security: the kind of security that provides patches in a timely manner and has dedicated testing staff.

----------

## Elleni

Guys, and especially 1clue, I am overwhelmed and still reading your posts, and I really appreciate it. Still reading, but I had to interrupt to post a warm THANK YOU!

----------

## Elleni

pjp, 1clue, your points about the distribution recommendation are very valid, and indeed I think the same way you describe. On one side, there is the argument for choosing something widespread with paid support on offer, for exactly the reasons you give (risks/benefits for the business; what happens if the two of us who set everything up are suddenly no longer available).

I also appreciate the hardware recommendations, 1clue; I had not evaluated that in such detail yet. I was just thinking of buying one powerful server (I would love a server powered by a Threadripper) or alternatively acquiring two powerful "workstations" for the same money, but your suggestions are very, very interesting indeed!

I don't like the idea of going the Red Hat way, but I will do it anyway if the decision goes that way. The reason is that they do many things their own (clunky) way, as szatox put it, and often differ from how mainstream distros do things. I believe this happens because there is a big company behind it, and, similarly to Microsoft, the goal may be user lock-in to make money. The encouraging words of szatox make me confident that I will be able to implement all this successfully, even on a distribution I am not (yet) very experienced with.

So if the primary concern is paid support, I would rather go for SUSE or Ubuntu. That's what I proposed in case this turns out to be my boss's primary concern. 

On the other side, I still believe that if everything is properly designed, set up and above all documented in every detail, Gentoo could still be a good fit. The advantage would be that we would get exactly what we need and nothing more, and the two of us who set everything up would know our system by heart and thus be able to maintain it in an optimal manner, while minimizing the risk of blocking our boss's business if we both became unavailable, as everything would be well documented. Last but not least, I think the community support I get here - and this thread is the best proof of that - is just... AMAZING. Though I believe - but I admit I cannot really prove it, for lack of experience - that it would not be on the same level with Ubuntu or SUSE, nor Red Hat. 

Heck, I even suggested Univention Corporate Server, at the other end of the complexity scale, explaining that if having the whole environment set up with the least possible expertise needed is a priority, it could be an option too. Although I think that, being two techies in a company of ~10-12 employees who are experienced in different areas, we should set everything up ourselves. 

I will probably go the VM way rather than Docker. I did think of one or two hosts in-house and additionally maybe a rented root server in a datacenter, but now I am considering a KVM/QEMU cluster as suggested. I will have to read and learn about clustering and high availability first, though - until now I had only thought about a second hypervisor for failover and somehow keeping the VMs on both hosts in sync. 
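As a taste of what the failover part can look like: libvirt alone already supports live-migrating a running guest between two KVM/QEMU hosts, which is one building block of the cluster setups mentioned here. A minimal sketch, where the hostnames `alpha`/`beta` and the guest `crm-vm` are invented, and shared storage (NFS/iSCSI) plus passwordless SSH between the hypervisors is assumed; it only prints the `virsh` call it would run:

```shell
#!/bin/sh
# Hedged sketch: "alpha", "beta" and "crm-vm" are made-up names.
# Assumes both hosts see the same shared storage, so a live migration
# only has to copy RAM/CPU state, not the disk image.

SRC="qemu+ssh://alpha/system"   # libvirt URI of the current host
DST="qemu+ssh://beta/system"    # libvirt URI of the standby host
GUEST="crm-vm"

# Build the virsh call that moves a running guest without shutting it down.
# --persistent also defines the guest on the target so it survives reboots.
migrate_cmd() {
    printf 'virsh -c %s migrate --live --persistent %s %s' "$1" "$3" "$2"
}

# Printed rather than executed here, since the hosts are imaginary:
migrate_cmd "$SRC" "$DST" "$GUEST"
echo
```

With a second step that watches host health and triggers this automatically, you get basic failover; tools like Pacemaker or Proxmox wrap exactly this kind of plumbing for you.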

Our needs are not very complex/demanding: one or two Windows 10 Pro or Server VMs for some client apps and above all for our core Windows CRM, which my boss certainly does not want to switch, as what we are planning to implement is already a big change; the CRM will have to just continue to function as it has for years. Maybe at some later time we will check whether there are fitting alternatives and have a look at them. 

The rest is just the usual infra and office stuff: DHCP, DNS, file and print services, web browsing, office applications, PDF creation and reading, email clients (no local mailserver), a validated backup solution, probably a little LAMP server for Nextcloud. Nothing too fancy, and basically nothing I am afraid of setting up. 

One thing we want to do is application virtualization, so that as many programs as possible are executed not on the clients themselves but on the application server, X-forwarded to the Linux desktops. And user and access-right management shall be as easy as possible to maintain. I will search the web for available datacenter automation tools, as szatox suggested. 
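For the Linux-native apps, plain SSH X11 forwarding already gives the "runs on the server, window opens locally" effect. A minimal sketch of a desktop launcher for it; the hostname `appserver` is invented, and passwordless SSH key login is assumed to be set up:

```ini
# Hypothetical launcher, e.g. ~/.local/share/applications/firefox-remote.desktop
# "appserver" is an assumed hostname; SSH key authentication is assumed.
[Desktop Entry]
Type=Application
Name=Firefox (on appserver)
# -X enables X11 forwarding: firefox runs on the server,
# but its window opens on the local desktop.
Exec=ssh -X appserver firefox
Terminal=false
```

The Windows CRM is the exception: getting its window onto a Linux desktop would go through RDP/RemoteApp on the Windows side rather than X forwarding.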

There are still many things to consider - not only the design of the solution but also mundane things such as which desktop environment to choose for the user desktops - and, as was stated in this thread, to plan well for redundancy and failure so as to circumvent possible outages. This is a challenging task, as I will be doing it for the first time for a production environment, but at the same time something I am excited to get the chance to do, and I am willing to give my best to make it a success. 

On my last job, I had the opportunity to implement an Asterisk server, with FreePBX as the web interface for administration, to replace the old analog telephone system. That was challenging too (the company ran a call center and wanted features such as a call robot, and call-listening and whispering abilities for management), and it works well. There I had a Gentoo test server, but the production Asterisk instance was installed on Ubuntu Server. 

Fortunately my co-workers are motivated and interested. Because security is a sensitive subject and a threat to every business, we often do small workshops on different IT themes, so I can say my co-workers are quite sensitized to things like social engineering and spam, and pay attention not to open suspicious mail attachments and so on. So I am optimistic that they will be interested in learning to adapt to a new Linux-based system. 

I am still reading on the different topics you linked (thank you C5ace) or brought up by keyword, and cannot thank you all enough for your very valuable and useful input!

----------

## genbort

Apart from pjp's and 1clue's advice, it would be much better to consult a virtualization expert like Alan Renouf, Chad Sakac, Chris Wolf, etc. to assist with the entire process.

----------

## 1clue

 *Elleni wrote:*   

> ...
> 
> On the other side, I still believe that if everything is properly designed, set up and above all documented in every detail, Gentoo could still be a good fit. The advantage would be that we would get exactly what we need and nothing more, and the two of us who set everything up would know our system by heart and thus be able to maintain it in an optimal manner, while minimizing the risk of blocking our boss's business if we both became unavailable, as everything would be well documented. Last but not least, I think the community support I get here - and this thread is the best proof of that - is just... AMAZING. Though I believe - but I admit I cannot really prove it, for lack of experience - that it would not be on the same level with Ubuntu or SUSE, nor Red Hat. 
> 
> 

 

If you and your partner can meticulously document everything and keep that documentation up to date and consistent, then perhaps you're right. And keep in mind that sometimes less documentation is better: a well-managed configuration file is the best documentation of all.

 *Quote:*   

> 
> 
> Heck, I even suggested Univention Corporate Server, at the other end of the complexity scale, explaining that if having the whole environment set up with the least possible expertise needed is a priority, it could be an option too. Although I think that, being two techies in a company of ~10-12 employees who are experienced in different areas, we should set everything up ourselves. 
> 
> 

 

Least possible expertise is not exactly what I'm recommending. I'm recommending that you choose a route where there is a readily available pool of people who can step in and figure things out fast enough for your business to survive with minimal impact. You and your coworkers need to decide what is an acceptable risk.

 *Quote:*   

> 
> 
> I will probably go the VM way rather than Docker. I did think of one or two hosts in-house and additionally maybe a rented root server in a datacenter, but now I am considering a KVM/QEMU cluster as suggested. I will have to read and learn about clustering and high availability first, though - until now I had only thought about a second hypervisor for failover and somehow keeping the VMs on both hosts in sync. 
> 
> 

 

VMs are a well-traveled road. Failover and various levels of high availability are also pretty well traveled, but I'm not an expert in this. I only know that if I were setting something up now, it would be on 2 or 3 VM hosts, on the lowest-power-consumption hardware I could get, paying attention to availability and failover for every possible service. Containers are all the rage right now, but I'm not yet clear whether they're a fad or whether they have genuine value in a production environment.

 *Quote:*   

> 
> 
> Our needs are not very complex/demanding: one or two Windows 10 Pro or Server VMs for some client apps and above all for our core Windows CRM, which my boss certainly does not want to switch, as what we are planning to implement is already a big change; the CRM will have to just continue to function as it has for years. Maybe at some later time we will check whether there are fitting alternatives and have a look at them. 
> 
> 

 

There are a ton of good CRM apps out there.

This brings to mind another thing I would shoot for: make every new app install use a web browser as its client. Certainly your CRM app can be web-based, which means it makes no difference what OS your employees use.

In reality, what this approach does is enable you to switch things over one at a time, not en masse.

 *Quote:*   

> 
> 
> The rest is just the usual infra and office stuff: DHCP, DNS, file and print services, web browsing, office applications, PDF creation and reading, email clients (no local mailserver), a validated backup solution, probably a little LAMP server for Nextcloud. Nothing too fancy, and basically nothing I am afraid of setting up. 
> 
> One thing we want to do is application virtualization, so that as many programs as possible are executed not on the clients themselves but on the application server, X-forwarded to the Linux desktops. And user and access-right management shall be as easy as possible to maintain. I will search the web for available datacenter automation tools, as szatox suggested. 
> ...

 

Again, shoot for web apps. This won't work for everything, but any kind of database-driven information engine should be spectacular this way.

 *Quote:*   

> 
> 
> There are still many things to consider - not only the design of the solution but also mundane things such as which desktop environment to choose for the user desktops - and, as was stated in this thread, to plan well for redundancy and failure so as to circumvent possible outages. This is a challenging task, as I will be doing it for the first time for a production environment, but at the same time something I am excited to get the chance to do, and I am willing to give my best to make it a success. 
> 
> 

 

You may find that you need not do this at all. Don't bake a Linux dependency into your desktops. If you go with a traditional app on a workstation, you complicate your backups, your maintenance and your upgrades. Put all that in a webapp, or at least on a file server, and you simplify it all again.

IMO part of what makes Linux great is that you can choose your distro, choose your window manager, choose your apps and interact with data which has an open format.

For that matter, if you use webapps for everything you may find your users doing their work on their phones during their lunch break.

----------

## pjp

@Elleni: Sounds like you are generally handling it well.

 *Elleni wrote:*   

> I don't like going the Red Hat way - but I will do it anyway if the decision goes that way.

  To clarify, I wasn't advocating for or against anything specific. Another avenue to consider is training, perhaps primarily with you as the teacher, at least initially. Don't forget to include others as things happen, as well as post-incident reviews. Nothing replaces experience, but the more something is "seen," the easier learning from experience becomes (IMO).

 *Elleni wrote:*   

> On the other side, I still believe that if everything is properly designed, set up and above all documented in every detail, Gentoo could still be a good fit. The advantage would be that we would get exactly what we need and nothing more, and the two of us who set everything up would know our system by heart and thus be able to maintain it in an optimal manner, while minimizing the risk of blocking our boss's business if we both became unavailable, as everything would be well documented.

  Your experience may differ, but I've yet to encounter an environment that had consistently well-maintained documentation after delivery (ignoring the quality of the initial documentation). Even if you are able to maintain the documentation, put yourself in the shoes of the incoming team. The more you customize, the greater the difficulty the incoming team will have learning the environment. Which parallels a problem even with excellent documentation: the more documentation there is, the more time it takes a newcomer to parse it and understand the big picture of the environment. Please don't misunderstand me. I'm not suggesting you skip documentation or customization.

 *Elleni wrote:*   

> Last but not least, I think the community support I get here - and this thread is the best proof of that - is just... AMAZING. Though I believe - but I admit I cannot really prove it, for lack of experience - that it would not be on the same level with Ubuntu or SUSE, nor Red Hat.

  A factor to keep in mind with any support is where your responsibility lies in relation to solving the problem. With unpaid support, "you" are the one who needs to solve the problem. At some point, you're likely to encounter a difficult problem. Paid support has an advantage besides the "someone to blame" angle that I don't recall seeing mentioned anywhere: it provides a "security blanket" that can help reduce stress during "emergencies." If you're struggling, opening a ticket may help you think through the issue while also having in the back of your mind that someone else will also be helping (eventually). If management is stressed, they get to feel useful by checking that box and reporting it upward, and "everything is happening that can/needs to happen" is achieved early in the process.

If I haven't already offered too much of my opinion, I'll finish with this: don't use a technology in production for the sake of using the technology. Chasing technology will likely introduce problems you didn't have and may not solve the old problems. Congratulations, now you have unfamiliar problems to solve on top of the old ones. That also seems to apply equally well to "clever solutions." ;)

----------

## 1clue

While I think everything @pjp said in this thread is right on track, the most recent post is spectacularly good advice.

The best documentation is the brief but pertinent kind: some sort of document saying which service is where, and which specific app it is. Then the configuration files should be self-documenting. 
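To illustrate the "self-documenting configuration file" idea, here is a made-up dhcpd.conf fragment in that style (all addresses and host names are invented):

```conf
# DHCP for the desktop LAN. Servers and printers get static leases elsewhere.
subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.100 192.168.10.199;       # pool for employee desktops
  option routers 192.168.10.1;               # firewall box in the server rack
  option domain-name-servers 192.168.10.2;   # dns VM on hypervisor "alpha"
}
```

The new guy reads the file itself and learns not just what it does but why, with no separate document to hunt down or keep in sync.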

Based on my experience, a new guy isn't going to read a book before digging in. 

Put yourself in the new guy's shoes. You and your Linux partner went across the street for lunch and got hit by a bus, and then some time goes by. Something happens which makes your employer find a new guy. That something is probably a "production system down" situation, which means your entire company is sitting there twiddling their thumbs while some new guy tries to figure out what happened to your hardware, and then to your software, and then why it doesn't work.

Your documentation should be oriented toward that scenario, IMO. If your new guy sees a well-ordered list that occupies no more than two pages of text, he may actually read it before diving in.

----------

## Elleni

Wow, guys, thanks a lot for your profound and well-balanced thoughts and advice! I am deeply impressed by the time you contributed writing all this down, helping us think well and choose wisely! 

Even if we do not go the Gentoo do-everything-yourself way, there will certainly be some Gentoo box in our future Linux ecosystem - for example my own desktop, maybe one VM here and there  :Smile:  I cannot resist  :Very Happy: 

It is obvious now that the decision has to be a widespread distribution with paid support, for the reasons mentioned, at least for some of the infrastructure and at least in the beginning. 

pjp, you certainly did not offer too much of your opinion, and everything is very appreciated; the same goes for the others writing here. And although you are certainly right about paid support being reassuring while working through a serious problem, I can't help believing that making a support request here in the forum, or even going to IRC live chat with the corresponding Gentoo experts, would give me good coverage and a reassuring feeling too  :Wink:  And your last point makes me think of something: this is surely a good opportunity to carefully check which of everything we have installed until now is really (still) relevant and absolutely needed. In my opinion it is the best moment to get rid of cruft that is not essential to our business, thus simplifying as much as possible before migrating to the new ecosystem. 

For now, we have made a software inventory to see what we are actually using, and I am identifying what can be replaced by native Linux alternatives and where we will still depend on Windows, at least in the beginning. Meanwhile I am scouring the net, reading here about application virtualization, there about hardware, and elsewhere about whether there is a webapp to replace a local app, and so on. I think with about 3-5 VMs per host we will be good. Training will be a key factor in making the transition smooth for my colleagues; that's absolutely true too. You have given me tons of material to think about and carefully consider, thank you all very much!!!

----------

## szatox

 *Quote:*   

> Even if we do not go the Gentoo do-everything-yourself way, there will certainly be some Gentoo box in our future Linux ecosystem - for example my own desktop, maybe one VM here and there  I cannot resist 

 Don't go that way. IT is cheap when it's standardized.

Even if it's just your own standard and not necessarily the industry-wide one, at least stick to it.

Your environment will probably grow. At some point you will log in to a VM created by someone else. How will you know what to do when that happens?

When I have to do something on a machine in this environment that I haven't visited yet, I start with the assumption that it's the same shit I already know from the 500 machines I've already visited. They all look the same.

----------

