How do you manage servers' root passwords? (Linux)

In our administration team, everyone has the root passwords for all client servers.
But what should we do when one of the team members is no longer working with us?
They still know our passwords, so we have to change them all every time someone leaves.
We now use SSH keys instead of passwords, but this doesn't help when we have to use something other than SSH.

The systems I run have a sudo-only policy: the root password is * (disabled), and people have to use sudo to get root access. You can then edit your sudoers file to grant or revoke people's access. It's very granular and highly configurable, but it has sensible defaults, so it won't take you long to set up.
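A minimal sketch of that policy (the group name and sudoers drop-in file are assumptions, not from the answer above):

# Lock the root password so root access only comes through sudo
sudo passwd -l root
# Grant an admin group full root rights; always edit sudoers via visudo,
# e.g. visudo -f /etc/sudoers.d/admins, containing:
%admins ALL=(ALL) ALL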

I would normally suggest the following:
Use a blank root password.
Disable telnet
Set ssh for no-root-login (or root login by public key only)
Disable su to root by adding this to the top of /etc/suauth: 'root:ALL:DENY'
Enable secure tty for root login on console only (tty1-tty8)
Use sudo for normal root access
Now then, with this setting, all users must use sudo for remote admin, but when the system is seriously messed up, there is no hunting for the root password to unlock the console.
EDIT: other system administration tools that provide their own logins will also need adjusting.
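A sketch of the configuration behind those steps (assuming stock OpenSSH and shadow-utils; adjust file paths to your distribution):

# /etc/ssh/sshd_config: no root login over SSH
# (or: PermitRootLogin prohibit-password, for key-only root login)
PermitRootLogin no

# /etc/suauth: deny su to root for everyone
root:ALL:DENY

# /etc/securetty: root may log in on local consoles only
tty1
tty2
tty3
tty4
tty5
tty6
tty7
tty8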

While it is a good idea to use a sudo-only policy like Chris suggested, depending on the size of your system an LDAP approach may also be helpful. We complement it with a file that contains all the root passwords, but the root passwords are really long and unmemorable. While that may be considered a security flaw, it allows us to still log in if the LDAP server is down.
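Something like the following generates passwords of that sort (a sketch; the length is an assumption):

# One long, unmemorable root password per host
openssl rand -base64 24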

Aside from the sudo policy, which is probably better, there is no reason why each admin couldn't have their own account with UID 0, but named differently, with a different password and even a different home directory. Just remove their account when they're gone.
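A sketch of creating such an account (the name is hypothetical); useradd needs -o/--non-unique to allow a duplicate UID:

# Second root-equivalent account with its own name, password, and home
sudo useradd -o -u 0 -g 0 -m -d /home/adminjane -s /bin/bash adminjane
sudo passwd adminjane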

We just made it really easy to change the root passwords on every machine we administer, so when people left we just ran the script. I know, not very savvy, but it worked. Before my time, everyone in the company had root access on all servers; luckily we moved away from that.
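Such a script could look roughly like this (a hypothetical sketch; hosts.txt and key-based root SSH access are assumptions):

#!/bin/bash
# Push one new root password to every managed host
NEWPASS="$(openssl rand -base64 24)"
while read -r host; do
    echo "root:${NEWPASS}" | ssh "root@${host}" chpasswd
done < hosts.txt
echo "New root password: ${NEWPASS}"   # record this in your password safe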

Generally speaking, if someone leaves our team, we don't bother changing root passwords. Either they left the company (and can no longer access the machines, since their VPN access has been revoked, as have their badge access to the building and their wireless access to the network), or they're in another department inside the company and have the professionalism not to screw with our environment.
Is it a security hole? Maybe. But, really, if they wanted to screw with our environment, they would have done so before moving on.
So far, everyone leaving the team who wanted access to our machines again has asked for permission, even though they could have gotten on without it. I see no reason to impede our ability to get work done, and no reason to believe anyone else moving onwards and upwards would behave differently.

Reasonably strong root password. Different on each box. No remote root logins, and no passwords for logins, only keys.

If you have SSH access via your keys, can't you log in via SSH and change the root password with passwd (or sudo passwd) whenever you need to do something that requires the password?

We use the sudo only policy where I work, but root passwords are still kept. The root passwords are only available to a select few employees. We have a program called Password Manager Pro that stores all of our passwords, and can provide password audits as well. This allows us to go back and see what passwords have been accessed by which users. Thus, we're able to only change the passwords that actually need to be changed.

SSH keys have no real alternative.
To manage many authorized_keys files on many servers, you have to implement your own solution if you do not want the same file on every server: either a tool of your own, or a configuration management system such as Puppet, Ansible, or something similar.
Otherwise, a for loop in bash or some clush action will suffice.
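For example, a minimal push loop (hosts.txt, the key file name, and passwordless root SSH are assumptions):

#!/bin/bash
# Copy one shared authorized_keys file to every server in hosts.txt
for host in $(cat hosts.txt); do
    scp ~/.ssh/team_authorized_keys "root@${host}:/root/.ssh/authorized_keys"
done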
Anything besides SSH logins:
For services you run that are login-based, use some sort of authentication with a central backend. Keep in mind that no one will get any work done if this backend is unavailable!
Run the service clustered.
Don't do hacks with a super-duper service backdoor account just so you always have access in case something breaks (like admin access being broken by a misconfiguration). No matter how much you monitor access or configuration changes affecting this account, this is 'just bad'(TM).
Instead of trying to get this backdoor right, you might as well just cluster the application, or at the very least have a spare system periodically mirroring the setup at hand, which can be activated easily through routing changes in the network if the main box dies. If this sounds too complicated, either your business is too small and you can live with half a day to two days of downtime, or you really hate clusters due to lacking knowledge and are saving on the wrong things.
In general: if you use software that cannot integrate with some sort of Active Directory or LDAP, you have to bite the bullet and change its passwords manually.
A dedicated password management database that only a select few can access directly, and that is read-only for everyone else, is also very nice. Don't bother with Excel files; they lack proper rights management. Working with version control on .csv files doesn't really cut it either beyond a certain threshold.

Related

How to securely host a file on a RHEL server and enable download for users

I have written an application that users can use to process genome data. This application relies on a 10 GB database file that users have to download in order to run the application. At the moment I store this file on Google Drive, but the download bandwidth is limited, so if a number of users download the file on a certain day, it will not work for the others and they will get errors running the application.
My solution would be to host the file on our research server, create a user that only has access rights to this folder and nothing else, and make the file downloadable from the server via scp within the application (which is open source) through that user.
My question now is: is this safe to do, or could people potentially hack into our server this way? If this method is a security risk, what would be a better way to provide this file?
Thank you in advance!
Aloha
You can set up something like the free Seafile (https://www.seafile.com/en/home/), or ask the admin to set it up for you; it's pretty secure, like a self-hosted Google Drive with 2FA authentication.
Another nice and easy tool is Filebrowser on GitHub (https://github.com/filebrowser/filebrowser).
I would not really advise giving people shell/scp access inside your network.
And hosting anything inside a company network is in general not the wisest idea; there is always a risk involved.
I would set up a Seafile/Filebrowser solution on a cheap rented server outside your network and upload the file there. Or, if you have a small PC left over, set it up in a DMZ, a zone with special access restrictions inside your company.
You want to use SSH (scp) as a transport and authentication method for file hosting. It's possible to keep this safe with caution. For example, GitHub uses SSH for transport when providing git access via the git+ssh protocol.
Now for the caution part: if you haven't done this before, it's not a trivial task.
The proper way to achieve it would be to set up an isolated SSH server in a chroot environment, and to create an SSH user on this isolated SSH instance only (not a system user added by e.g. useradd). Then you add only the files that are absolutely necessary to the chroot and provide SSH access to users.
(Nowadays you might want to consider using Linux filesystem namespaces, if applicable, to replace chroot, but I'm not sure on this.)
As for other options, setting up a simple Nginx server for static file hosting might be a lot easier, provided you have some understanding of HTTP and TLS. There is plenty written about this on the Internet.
Either way, if you are going to expose your server to the Internet or an intranet, you need to take care of firewalling. Consider learning about nftables or firewalld or the like, if you haven't already.
SSH is reasonably safe. Always keep your software up to date.
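As a rough sketch of the Nginx option mentioned above (the server name and paths are assumptions, and the TLS setup is omitted for brevity):

server {
    listen 80;
    server_name downloads.example.org;   # hypothetical host name

    location /genome-db/ {
        root /srv/static;    # serves files from /srv/static/genome-db/
        autoindex off;       # no directory listings
    }
}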
Set up an SFTP-only user with a chrooted directory. In /etc/ssh/sshd_config:
Match User MyUser
ChrootDirectory /var/ssh/chroot
ForceCommand internal-sftp
AllowTcpForwarding no
PermitTunnel no
X11Forwarding no
This user will not get a shell (because of internal-sftp) and cannot see files outside /var/ssh/chroot. Note that sshd requires the chroot directory itself to be owned by root and not writable by any other user.
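The matching one-time setup could look like this (a sketch; the username and path come from the snippet above, the rest is assumed):

# Create a shell-less user and a root-owned chroot directory
sudo useradd -M -s /usr/sbin/nologin MyUser
sudo passwd MyUser
sudo mkdir -p /var/ssh/chroot/files
sudo chown root:root /var/ssh/chroot   # sshd insists on root ownership here
sudo chmod 755 /var/ssh/chroot
sudo systemctl restart sshd            # the unit is named "ssh" on Debian/Ubuntu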
Use a client-side key in addition to the password.
A good description of the key-based setup process:
https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server
Your solution is moderately safe.
A better solution is to put the file on a server accessible via sftp, behind a password, but also to encrypt the file: this way you introduce a second layer of protection.
On a Linux server you should be able to use a tool like gpg to encrypt your file.
Then you share the decryption key with your partners over a secure channel, e.g. end-to-end encrypted messaging software.
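With gpg, that encryption step is a one-liner (the filename is hypothetical):

# Encrypt with a symmetric passphrase; share the passphrase out of band
gpg --symmetric --cipher-algo AES256 genome-db.tar.gz
# This produces genome-db.tar.gz.gpg; recipients decrypt it with:
gpg --decrypt genome-db.tar.gz.gpg > genome-db.tar.gz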

How to let users run arbitrary source code on my server

I want to automate testing of my users' source code files by letting them upload C++, Python, Lisp, Scala, etc. files to my Linux machine, where a service will find them in a folder and then compile/run them to verify that they are correct. This server contains no important information about any of my users, so there's no database or anything for someone to hack. But I'm no security expert, so I'm still worried about a user somehow finding a way to run arbitrary commands with root privileges (basically, I don't have any idea what sorts of things can go wrong). Is there a safe way to do this?
They will. If you give someone the power to compile, it is very hard to keep them from escalating to root. You say that server is not important to you, but what if someone sends an email from that server, or alters some script, to obtain information about your home machine or another server you use?
At the very least you need to strongly separate yourself from them. I would suggest Linux containers (https://linuxcontainers.org/); they are trendy these days. But be careful: this is the kind of service that is always dangerous, no matter how much you protect yourself.
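As one possible illustration of the container approach (this sketch uses Docker rather than LXC purely for brevity; the image, resource limits, and paths are assumptions):

# Compile and run one submission in a throwaway, network-less container
docker run --rm --network none --memory 256m --cpus 0.5 \
  -v "$PWD/submission:/work:ro" -w /work gcc:13 \
  sh -c 'g++ -O2 main.cpp -o /tmp/a.out && timeout 10 /tmp/a.out'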
Read up on the chroot command in Linux.
That way you can give every running user program a separate, isolated environment.
You should under no circumstances allow a user to run code on your server with root privileges. A user could then just run rm -rf / and delete everything on your server.
I suggest you create a new local user/group with very limited permissions, e.g. one that can only access a single folder, as sketched below. When you run the code on your server, run it in that folder, so the code cannot access anything else. After the code has finished, delete the contents of the folder. You should also test this rigorously to check that the code really can't destroy or manipulate anything.
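A bare-bones version of that restricted account (the names and paths are hypothetical):

# Create a locked-down user confined to one working directory
sudo useradd -M -s /usr/sbin/nologin sandbox
sudo mkdir -p /srv/sandbox/work
sudo chown sandbox:sandbox /srv/sandbox/work
sudo chmod 700 /srv/sandbox/work
# Run a submitted binary as that user, with a wall-clock limit
sudo -u sandbox timeout 10 /srv/sandbox/work/a.out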
If you're running on FreeBSD you could also look at Jails, which are a sort of lightweight virtualization, limiting a user or program to that sandbox.

Why do web sites run under a less privileged user account (IUSR_ComputerName)?

Usually we configure IIS web sites that allow anonymous authentication to run under the IUSR_ComputerName account, which has very limited privileges. For example, we may decide it cannot access the file system. How does that make our web site any more secure? The user cannot run code on it anyway: only our website code runs, and we make sure it does not cause any harm.
Edit: I understand why it is good to be on the safe side (e.g. an IIS exploit). My question is whether there is any direct reason. For example, I would never give a guest account full privileges on a SQL server, as that would immediately give someone full control over my server. This does not seem to be the case with IIS.
"we make sure it does not cause any harm"
You can never be sure it doesn't cause any harm. One day it might be exploited, and the less privileged user would probably save your data. No offense, but no one writes perfect code, and therefore no code is vulnerability-free.
If you have any network service, you should assume that some random person on the Internet has a command prompt on your machine running as that service's owner.
Now ask what damage that user could do.
Typically, you may need to run your web site in a way that is a little less hardened, from a security standpoint, than, say, a domain server or Exchange. For example, you may need to permit FTP access. And obviously, Internet web sites need to be reachable from the Internet, so you cannot simply block all access with your firewall.
Because of this higher exposure, it is prudent to run your service under an account that has limited permissions. If a malicious user does succeed in copying their own programs onto your server and running them, those programs will be severely limited in what they can do.
You can run code on the server; for example, you can delete files in a directory if the permissions are not set correctly.

How do I secure a production server after inheriting it from the previous development vendor?

We received access to the environment, but I now need to go through the process of securing it so that the previous vendor can no longer access it, or the Web applications running on it. This is a Linux box running Ubuntu. I know I need to change the following passwords:
SSH
FTP
MySQL
Control Panel Admin
Primary Application Admin
However, how do I really know I've completely secured the system using best practices, and am I missing anything else that I need to do other than just changing passwords?
Three simple steps:
Back up configurations / source files from HTTP / SQL tables
Reinstall operating system
Follow standard hardening steps on fresh OS
Regardless of who it was, they could have installed any old crap on there (rootkits) that you can't configure away.
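For the backup step, something along these lines (the paths, web root, and credentials are assumptions):

# Snapshot the web root, web server config, and all databases
tar czf /backup/www-$(date +%F).tar.gz /var/www /etc/apache2
mysqldump --all-databases -u root -p > /backup/all-databases-$(date +%F).sql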
You will probably get more responses at serverfault.com on these kinds of questions.
There are several things you can do to secure SSH by editing your sshd_config file, which is usually in /etc/ssh/:
Disable Root Logins
PermitRootLogin no
Change the SSH port from the default port 22
Port 9222
Manually specify which accounts can log in (AllowUsers takes a space-separated list)
AllowUsers Andrew Jane Doe
SecurityFocus has a good article about securing MySQL, although it's a bit dated.
The best thing you could do would be to reinstall, and to make sure that when you bring files over from the old system to the new one it is just data, not executables that could be nasty. If that is too much, then change all the passwords, watch the logs for a few weeks, and play with iptables to block the former vendor. Given that there could be a rootkit at the kernel level, it is probably also a good idea to swap the kernel out, and to watch traffic coming out of the box for anything that might be going to the vendor. It really is a hassle to take someone else's machine and declare it safe; I would go as far as to say it is nearly impossible.
Side note: this isn't really programming related, so it probably shouldn't be on this site.

Samba and other non-interactive accounts - noshell, nologin, or blank?

I'm conducting a user account cleanup across Solaris and Red Hat Linux systems, many of which have a number of Samba shares.
What preference do people have for creating the local Unix accounts for non-interactive Samba users? In particular, the shell entry:
noshell
nologin
blank
And why?
JB
I have seen the shell set to the passwd command, so that logging in only gives an opportunity to change the password. This may or may not be appropriate in your non-interactive user case, but it has the upside of letting people change passwords without bothering an admin.
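That is a one-line change per account (the username here is hypothetical):

# Logging in as this user now only runs the password-change dialog
sudo usermod -s /usr/bin/passwd jbloggs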
I've always thought /bin/false was the standard. Some ISPs use a little menu system that lets users change their password / contact / finger info, check usage, etc. Whatever you use, you may want to add it to your /etc/shells file if you want the user to be able to use FTP, for instance, as some services are denied to users whose shell is not listed in that file.
I usually set all mine to /dev/null; that way I don't ever have to worry about it.
I have known some people who set it to /bin/logout so that when someone logged in they were logged back out.
Don't leave it blank. An empty shell field runs /bin/sh.
