I want to automate testing of my users' source code files by letting them upload c++,python, lisp, scala, etc. files to my linux machine where a service will find them in a folder and then compile/run them to verify that they are correct. This server contains no important information about any of my users, so there's no database or anything for someone to hack. But I'm no security expert so I'm still worried about a user somehow finding a way to run arbitrary commands with root privileges (basically I don't have any idea what sorts of things can go wrong). Is there a safe way to do this?
They will. If you give someone the ability to compile and run arbitrary code, it is very hard to prevent escalation to root. You say the server is not important to you, but what if someone uses it to send email in your name, or alters some script on it, in order to obtain information about your home machine or another server you use?
At a minimum you need strong separation between you and them. I would suggest Linux containers (https://linuxcontainers.org/); they are popular these days. But be careful: this is the kind of service that is always dangerous, no matter how much you protect yourself.
Read up on the chroot command in Linux. That way you can give each running user program its own isolated environment.
You should under no circumstances allow a user to run code on your server with root privileges. A user could simply run rm -rf / and delete everything on your server.
I suggest you create a new local user/group with very limited permissions, e.g. one that can only access a single folder. When you run the code on your server, run it in that folder, so the user cannot access anything else. After the code has finished, delete the contents of the folder. You should also test this rigorously to check that the code really can't destroy or manipulate anything.
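To make the limited-user approach above less fragile, you can also cap the resources the submitted program may use. A minimal sketch, assuming bash on Linux — the function name `run_limited` and the specific limit values are illustrative, and in a real setup you would additionally prefix the invocation with something like `sudo -u sandboxuser`:

```shell
# Run a submitted program with CPU, memory, and wall-clock limits.
run_limited() {
    (                          # subshell, so the ulimits don't leak out
        ulimit -t 5            # at most 5 seconds of CPU time
        ulimit -v 262144       # at most ~256 MB of virtual memory (in KB)
        timeout 10 "$@"        # hard wall-clock cutoff of 10 seconds
    )
}

run_limited echo "hello from the sandbox"
```

This only limits resource abuse; it does nothing against a program that reads or writes files the user can reach, which is why the dedicated user and folder still matter.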
If you're running on FreeBSD you could also look at Jails, which are a form of virtualization that confines a user/program to a sandbox.
I'm trying to reimplement an existing server service in Node.JS. That service can be compared to a classic FTP server: authenticated users can read/create/modify files, but restricted to the permissions given to the matching system user name.
I'm pretty sure I can't have Node.JS run as root and switch users using seteuid() or the like, since that would break concurrency.
Instead, can I let my Node.JS process run as root and manually check permissions when accessing files? I'm thinking of some system call like "could user X create a file in directory Y?"
Otherwise, could I solve this by using user groups? Note that the service must be able to delete/modify a file created by the real system user, which may not set a special group just so that the service can access the file.
Running node as root sounds dangerous, but I assume there aren't many options left for you; most FTP servers run as root too, for the same reason. It does mean, though, that you need to pay close attention to the security of the code you are going to run.
Now to the question:
You are asking whether you can reimplement the UNIX permission checks in node.js. You can, but you should not! There is an almost 100% chance that you will leave holes or miss edge cases that the Unix core has already taken care of.
Instead, use process.setuid(id) as you mentioned. It will not defeat concurrency, but you will need to think in terms of parallel processes rather than async calls. That is extra work, but it will spare you the headache of reinventing Unix security.
Alternatively, if all of the operations you want to carry out on the filesystem involve shell commands, you can simply rewrite them to follow this pattern:
runuser -l userNameHere -c 'command'
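The dangerous part of that pattern is building the command string: a user-controlled path with spaces or shell metacharacters must not be able to break out of the quoting. Since runuser itself needs root, the sketch below demonstrates the quoting with plain `sh -c` instead; the directory name is made up, and in the real service you would substitute `runuser -l "$user" -c "$cmd"`. It relies on bash's `printf %q`:

```shell
# Build the command string with printf %q so paths containing spaces or
# shell metacharacters stay inside the quoted command.
target_dir='/tmp/demo dir'                     # hostile-looking example path
cmd=$(printf 'mkdir -p %q && ls -d %q' "$target_dir" "$target_dir")
sh -c "$cmd"        # in production: runuser -l "$user" -c "$cmd"
```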
So I'm trying to create a way for users in Linux to create groups, then add other users to them. As far as I can tell, only root can modify groups. The reason is, I want users to be able to create a group, then chgrp/chown the files they own and give read access to specified groups of users.
My idea was to make a utility that encapsulates a "sudo groupadd " call. This utility would have access to a root-accessible file that keeps track of group ownership. Is it possible to do this without requiring the user to enter the root password?
I know that this is probably very exploitable, but please note that this is not going into production of any sort.
Please let me know how I can accomplish this (namely, letting a non-sudo user run a sudo program), or if you have any alternative suggestions.
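For what it's worth, the "run sudo without a password" part of the approach above is usually done with a narrow sudoers rule rather than by sharing the root password. A sketch — the group name `groupmgr` and the helper path are hypothetical:

```shell
# /etc/sudoers.d/group-helper
# (always edit with: visudo -f /etc/sudoers.d/group-helper)
# Members of the "groupmgr" group may run exactly this one helper as root,
# with no password prompt:
%groupmgr ALL=(root) NOPASSWD: /usr/local/bin/group-helper
```

The helper itself must then rigorously validate its arguments, because anything it can be talked into doing happens as root.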
I want to write a server that receives and executes code on behalf of non-trusted parties.
In order to make this more secure I want the code to be run as a user that is created exclusively for running this code and then deleted.
How can I grant a non-root user the permission to create, impersonate and remove other users?
The only way you could conceivably do this is with a proxy program that's setuid root (i.e., chown root my-proxy-process; chmod 47nn my-proxy-process). That setuid proxy program then takes care of handling the security considerations, setuid'ing to the named user, etc.
However, the security problems with this should be fairly clear. One way to limit them is to ensure your non-privileged process runs as a user in a dedicated group. Then chown the proxy command to root:myprivategroup and chmod it 4710 so that only users who are members of myprivategroup can execute it.
This is probably as secure as it can be. The main thing is making sure the proxy process is good and secure and locked down so only pertinent users can run it through group membership.
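Concretely, the ownership and mode setup described above would look like the commented commands below. Changing ownership to root requires root, so the runnable part of this sketch only demonstrates the 4710 mode bits on a throwaway file:

```shell
# Real setup (run as root; path and group name are hypothetical):
#   chown root:myprivategroup /usr/local/bin/my-proxy-process
#   chmod 4710 /usr/local/bin/my-proxy-process   # setuid, group may execute

# Demonstrating the 4710 mode on a temporary file we own:
f=$(mktemp)
chmod 4710 "$f"       # 4 = setuid bit, 7 = owner rwx, 1 = group x, 0 = other
stat -c '%a' "$f"
```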
If there are any other restrictions you know can be applied to user-supplied processes, the proxy program could validate that each process conforms to those rules. Note that while this may be a stopgap, it would not be beyond the realms of possibility for someone to craft a "bad" program that passes the tests.
For added security (recommended), use a sandbox or chroot jail as mentioned by mark4o.
You could grant permissions to a non-root user with sudo and /etc/sudoers, but this approach seems neither secure nor clean.
I don't know exactly what you need, but maybe you can pre-create some users and then reuse them.
You will want to use some kind of sandboxing or virtualization; at least a chroot environment at minimum. See also Sandboxing in Linux.
This is a problem, as you cannot use su via a script without a terminal (probably a security feature). I'm sure you can hack your way around this, but it was probably done for a good reason.
In our administration team everyone has root passwords for all client servers.
But what should we do when one of the team members is no longer working with us?
He still knows our passwords, so we have to change them all every time someone leaves.
Now we are using ssh keys instead of passwords, but this does not help if we have to use something other than ssh.
The systems I run have a sudo-only policy. That is, the root password is * (disabled), and people have to use sudo to get root access. You can then edit your sudoers file to grant/revoke people's access. It's very granular and highly configurable, but has sensible defaults, so it won't take you long to set up.
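To illustrate the granularity, sudoers entries can range from full root down to a single command; the user, group, and command names below are hypothetical:

```shell
# /etc/sudoers — edit only with visudo
alice    ALL=(ALL) ALL                                   # full root via sudo
%webops  ALL=(root) /usr/bin/systemctl restart nginx     # this one command only
```

Revoking someone's access is then just deleting their line (or removing them from the group) instead of rotating a shared root password everywhere.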
I would normally suggest the following:
Use a blank root password.
Disable telnet
Set ssh for no-root-login (or root login by public key only)
Disable su to root by adding this to the top of /etc/suauth: 'root:ALL:DENY'
Enable secure tty for root login on console only (tty1-tty8)
Use sudo for normal root access
Now then, with this setup, all users must use sudo for remote administration, but when the system is seriously messed up there is no hunting for the root password to unlock the console.
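Under the assumptions of the list above, the two configuration files involved would contain something like the following (exact file locations can vary by distribution):

```shell
# /etc/suauth — deny everyone su to root:
root:ALL:DENY

# /etc/securetty — root may log in only on the physical consoles:
tty1
tty2
tty3
tty4
tty5
tty6
tty7
tty8
```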
EDIT: other system administration tools that provide their own logins will also need adjusting.
While it is a good idea to use a sudo-only policy as Chris suggested, depending on the size of your system an LDAP approach may also be helpful. We complement it with a file that contains all the root passwords, but those passwords are extremely long and unmemorable. While that may be considered a security flaw, it allows us to still log in if the LDAP server is down.
Aside from the sudo policy, which is probably better, there is no reason why each admin couldn't have their own account with UID 0, but named differently, with a different password and even different home directory. Just remove their account when they're gone.
We just made it really easy to change the root passwords on every machine we administer, so when people left we just ran the script. I know it's not very savvy, but it worked. Before my time, everyone in the company had root access on all servers; luckily we moved away from that.
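Such a script can be quite short. A hedged sketch, assuming key-based root ssh to each host and a hosts.txt listing them — the ssh loop is commented out because it needs real hosts, so only the password generation runs as-is:

```shell
# Generate one new random root password (24 characters of base64):
newpass=$(openssl rand -base64 18)

# Push it to every host via chpasswd over ssh:
#   while read -r host; do
#       printf 'root:%s\n' "$newpass" | ssh "root@$host" chpasswd
#   done < hosts.txt

echo "${#newpass}"
```

After running it, the new password goes into whatever vault the team uses; per-host unique passwords need only a per-host generation step inside the loop instead.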
Generally speaking, if someone leaves our team, we don't bother changing root passwords. Either they left the company (and have no way to access the machines anymore as their VPN has been revoked, as has their badge access to the building, and their wireless access to the network), or they're in another department inside the company and have the professionalism to not screw with our environment.
Is it a security hole? Maybe. But, really, if they wanted to screw with our environment, they would have done so prior to moving on.
So far, anyone leaving the team who wants to gain access to our machines again has always asked permission, even though they could get on without the permission. I don't see any reason to impede our ability to get work done, i.e., no reason to believe anyone else moving onwards and upwards would do differently.
A reasonably strong root password, different on each box. No remote root logins, and no passwords for logins, only keys.
If you have ssh access via your keys, can't you log in via ssh and change the root password with passwd or sudo passwd whenever you need to do something that requires the password?
We use the sudo only policy where I work, but root passwords are still kept. The root passwords are only available to a select few employees. We have a program called Password Manager Pro that stores all of our passwords, and can provide password audits as well. This allows us to go back and see what passwords have been accessed by which users. Thus, we're able to only change the passwords that actually need to be changed.
SSH keys have no real alternative.
For management of many authorized_keys files on many servers, you have to implement your own solution if you do not want the same file on every server — either via a tool of your own, or with a configuration management solution like Puppet, Ansible, or something similar.
Otherwise a for loop in bash or some clush action will suffice.
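The bash-loop variant could be sketched like this. The directory layout below simulates remote hosts locally so the sketch is runnable anywhere; in reality you would replace the `cp` with scp or rsync to root@$host, and hostA/hostB with your real host list:

```shell
# One shared key file, pushed to each "host" (here: local directories):
mkdir -p /tmp/keydemo/hostA/.ssh /tmp/keydemo/hostB/.ssh
printf 'ssh-ed25519 EXAMPLEKEY admin@example.org\n' > /tmp/keydemo/authorized_keys

for host in hostA hostB; do
    # real version: scp /tmp/keydemo/authorized_keys "root@$host:/root/.ssh/authorized_keys"
    cp /tmp/keydemo/authorized_keys "/tmp/keydemo/$host/.ssh/authorized_keys"
done
```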
Anything besides SSH logins:
For services you run that are login-based, use some sort of authentication with a central backend. Keep in mind that no one will be able to do any work if this backend is unavailable!
Run the service clustered.
Don't hack together a super-duper backdoor service account just so you always have access in case something breaks (such as admin access being broken by a misconfiguration). No matter how closely you monitor access or configuration changes affecting this account, it is 'just bad' (TM).
Instead of trying to get this backdoor right, you might as well cluster the application, or at the very least keep a spare system that periodically mirrors the main setup and can be activated easily through routing changes in the network if the main box dies. If this sounds too complicated, then either your business is small enough that you can live with half a day to two days of downtime, or you dislike clusters for lack of familiarity and are saving on the wrong things.
In general: if you use software that cannot integrate with some sort of Active Directory or LDAP, you have to bite the bullet and change its passwords manually.
Also, a dedicated password-management database that only a select few can access directly, and that is read-only for everyone else, is very nice. Don't bother with Excel files; they lack proper rights management. Working with version control on .csv files doesn't really cut it either past a certain threshold.