My web application has some functionality that interfaces with the server's OS. I've written a bash script and am able to run it from within my app.
However, some functionality of the script requires superuser privileges.
What is the most sane way to run this script securely? It is passed arguments from a web form, but should only be callable by authenticated users that I trust not to haxxor it.
Whichever way you do this is likely to be very dangerous. Could you perhaps write a local daemon with the required privileges, and use some sort of message bus that produces/consumes events to be processed by this superuser-requiring component?
That way, you can carefully validate the contents of each message and reduce the likelihood of exploitation.
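A minimal sketch of the idea, using a root-owned FIFO as a stand-in for a real message bus (the pipe path, group permissions, and command whitelist are all assumptions):

#!/bin/bash
# Privileged worker: consumes one-line requests from a FIFO and executes
# only whitelisted actions with validated arguments.
PIPE=/run/privileged-worker/requests

mkdir -p "$(dirname "$PIPE")"
[ -p "$PIPE" ] || mkfifo -m 0620 "$PIPE"   # writable by one dedicated group only

while true; do
    if read -r action arg < "$PIPE"; then
        case "$action" in
            reload-config)
                systemctl reload myapp.service ;;
            restart-unit)
                # Constrain the argument to a safe character set before use
                [[ "$arg" =~ ^[a-z0-9@._-]+$ ]] && systemctl restart "$arg" ;;
            *)
                logger -t privileged-worker "rejected request: $action" ;;
        esac
    fi
done

The web application then writes single-line requests into the pipe as an unprivileged user, and this worker is the only component running as root.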
What is the most sane way to run this script securely?
If you really care about security, require the web client to provide a passphrase and use an SSH key. Then run the script under ssh-agent, and for the sensitive parts do ssh root@localhost command.... You will probably want to create an SSH keypair just for this purpose, as typing one's normal SSH passphrase into a web form is not something I would do (who trusts your web form, anyway?).
If you don't want quite this much security, and if you really, really believe that your web form can correctly authenticate its users without any bugs, you could instead decide to trust the web server to run the commands you need. (I wouldn't.) In that case I would use the /etc/sudoers file to allow the web server user to run the commands of interest without providing a password. Your script would then use sudo to run those commands.
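A minimal sudoers sketch of that setup, assuming Apache's www-data user and a hypothetical wrapper script (create the file with visudo):

# /etc/sudoers.d/webapp
www-data ALL=(root) NOPASSWD: /usr/local/bin/restart-worker.sh

The web-facing script then calls sudo /usr/local/bin/restart-worker.sh, and sudo refuses anything not on that list.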
I have programmed an application that users can use to process genome data. This application relies on a 10GB database file that users have to download in order to run the application. At the moment I have stored this file on Google Drive, but the download bandwidth is limited, so if a number of users download the file on a given day, it stops working for the others and they get errors when running the application.
My solution would be to host the file on our research server, create a user that only has access rights to this folder and nothing else, and make the file downloadable from the server via scp within the application (which is open source) through that user.
My question now is: is this safe to do, or could people potentially use it to hack into our server? If this method is a security risk, what would be a better way to provide this file?
Thank you in advance!
Aloha
You can set up something like the free Seafile (https://www.seafile.com/en/home/), or ask the admin to set it up for you. It is pretty secure, like a self-hosted Google Drive with 2FA authentication.
Another nice and easy tool is Filebrowser on GitHub (https://github.com/filebrowser/filebrowser).
I would not really advise giving people shell/scp access inside your network.
And hosting anything inside a company network is in general not the wisest idea; there is always a risk involved.
I would set up a Seafile/Filebrowser solution on a cheap rented server outside your network and upload the file there. Or, if you have a spare PC, set it up in a DMZ, a zone with special access restrictions inside your company.
You want to use SSH (scp) as a transport and authentication method for file hosting. It's possible to keep this safe with caution. For example, GitHub uses SSH for transport when providing git access with the git+ssh protocol.
Now for the caution part: if you haven't done this before, it's not a trivial task.
The proper way to achieve this would be to set up an isolated SSH server in a chroot environment, and to create an SSH user on this isolated SSH instance only (not a system user added by e.g. useradd). Then you can add the files that are absolutely necessary to the chroot, and provide SSH access to users.
(Nowadays you might want to consider using Linux filesystem namespaces, if applicable, to replace chroot, but I'm not sure on this.)
As for other options, setting up a simple Nginx server for static file hosting might be a lot easier, provided you have some understanding of HTTP and TLS. There are lots of write-ups on the Internet about this.
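As a hedged illustration, a minimal Nginx server block for serving a single download directory over TLS might look like this (the server name, paths, and certificate locations are assumptions):

server {
    listen 443 ssl;
    server_name downloads.example.org;

    ssl_certificate     /etc/ssl/certs/downloads.pem;
    ssl_certificate_key /etc/ssl/private/downloads.key;

    root /srv/downloads;       # contains only the database file
    autoindex off;             # do not list the directory

    location / {
        try_files $uri =404;   # serve existing files, nothing else
    }
}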
Either way, if you are going to expose your server to the Internet or an intranet, you need to get the firewalling right. Consider learning about nftables or firewalld or the like, if you haven't already.
SSH is reasonably safe. Always keep software up-to-date.
Set up an SFTP-only user with a chrooted directory. In /etc/ssh/sshd_config:
Match User MyUser
    ChrootDirectory /var/ssh/chroot
    ForceCommand internal-sftp
    AllowTcpForwarding no
    PermitTunnel no
    X11Forwarding no
This user will not get a shell (because of internal-sftp), and cannot see files outside of /var/ssh/chroot.
Use a certificate client-side, in addition to the password.
Good description of the setup process for certificates:
https://www.digitalocean.com/community/tutorials/how-to-configure-ssh-key-based-authentication-on-a-linux-server
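As a sketch, requiring both the key and the password for that user can be enforced in /etc/ssh/sshd_config, assuming OpenSSH 6.2 or later (which introduced AuthenticationMethods):

Match User MyUser
    AuthenticationMethods publickey,password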
Your solution is moderately safe.
A better solution is to put it on a server accessible via sftp, behind a password, but also to encrypt the file: this way you introduce a double layer of protection.
On a Linux server you should be able to use a tool like gpg to encrypt your file.
Next, you share the decryption key with your partners using a secure channel, e.g. end-to-end encrypted messaging software.
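For illustration, symmetric encryption with a shared passphrase could look like this (the filename is an assumption):

# Encrypt; gpg prompts for the passphrase
gpg --symmetric --cipher-algo AES256 genome_db.tar.gz

# Recipients decrypt with the shared passphrase
gpg --output genome_db.tar.gz --decrypt genome_db.tar.gz.gpg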
Let's state a situation:
I have the possibility to run arbitrary commands on a server as an unprivileged user, through "unconventional means".
I do not have the possibility to log in to that server using ssh, either as my unprivileged user or as anything else. So I currently do not have a CLI allowing me to run any commands I would like in a "normal" way.
I can ping that server and nothing prevents me from connecting to arbitrary ports.
I would still like to have a command line allowing me to run arbitrary commands as I wish on that server.
Theoretically nothing would prevent me from launching any program as my unprivileged user, including one that would open a port, allow some remote user to connect to it, and just forward any commands to bash, returning the result. I just don't know any good program to do that.
So, does anyone know one? I looked at ways to launch sshd as an unprivileged user, but some users reported that recent versions of sshd do not allow that anymore. Actually I don't even need SSH specifically; any way to get a working CLI would do the trick. Even a crappy node.js program launching an HTTP server would work, as long as I have a CLI (...and it's not excessively crappy; the goal is to have a clean CLI, not something that glitches every two characters).
In case you ask why I would like to do that: it's not related to anything illegal ^^. I just have to work with a very crappy Jenkins server, and I'm not allowed direct access to its agents. Whoever is responsible for that server doesn't give a sh** about its users' needs, so we have to use hacky solutions just to get some diagnostic data about that server (like RAM, CPU and disk usage, installed programs, etc.). Having a CLI that I can launch from time to time, instead of altering a build configuration and waiting 20 minutes for an answer about what's going on, would really help.
Thanks in advance for any answer.
So do you have shell access to the server at least once? E.g., during the single day of the month when you are physically present at the site of your client or the outsourcing contractor?
And if you have shell access then, can you or your sysadmin install Cockpit?
It listens on port 9090.
You can then use the credentials of your local user and open a terminal window in your browser. See sidebar item "Terminal" on the screenshots of the cockpit homepage.
According to the documentation:
Cockpit has no special privileges and doesn’t run as root. It creates a session as the logged in user and has the same permissions as that user.
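As a sketch, on a Debian/Ubuntu host the installation is a one-off admin task (package names may differ on other distributions):

sudo apt install cockpit
sudo systemctl enable --now cockpit.socket
# then browse to https://your-server:9090 and log in as your local user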
Use case:
I would like to host a console application I built on an EC2 instance on AWS, and give very strictly limited access to the people who will connect to it:
They must not be able to access the shell or execute any command on the machine
They must not be able to use port forwarding
They must not be able to copy or read anything from that machine, especially not environment variables
They are only allowed to use that console application
My solution:
Create a user:
I replace its shell with the console application, so the user can only access that and nothing else
Disable port forwarding
I'm not sure if that would be enough to secure the machine. That's why I'm asking here for advice, or for confirmation that this will work and will be 100% secure.
As we discussed on the comment section of your question:
If you manage to replace the user's shell with your console application and guarantee that it's not possible to run bash commands or shell built-ins (like export, enable, disable), and you make sure that your console application has the right permissions (rwx) to interact with only the files and paths it needs, then you should be fine.
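A minimal sketch of that setup, assuming the application lives at /usr/local/bin/consoleapp (the path and user name are assumptions):

# Create the user with the console application as its login shell
useradd -m -s /usr/local/bin/consoleapp appuser

And as a second layer, disable forwarding for that user in /etc/ssh/sshd_config, even though the replaced shell should already prevent it:

Match User appuser
    AllowTcpForwarding no
    PermitTunnel no
    X11Forwarding no
    AllowAgentForwarding no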
My aim is to start/stop services (like httpd, sshd, iptables, etc.) from a Perl CGI script.
#!/usr/bin/perl
use strict;
use warnings;
use CGI;    # loaded, but none of its functionality is used below

# Manually emit the Content-Type header
print "Content-type: text/html\n\n";

print <<EOM;
<html>
<body>
EOM

# Stop the service (output is discarded)
`/etc/init.d/httpd stop`;

# Capture the service status for display
my $res = `/etc/init.d/httpd status`;

print <<EOM;
<h3>$res</h3>
</body>
</html>
EOM
Here the first command inside backticks isn't working, whereas the second command, which is assigned to $res, is working.
Output on the browser is as follows:
httpd (pid 15657) is running...
I suggest displaying the output from the stop command. I strongly suspect that you will see an error indicating that you do not have permission to stop the service.
A correctly configured web server process will be owned by a user that has almost no permissions on the system. This is a security feature. CGI programs on your web server can be run by anyone who can access the web server. For that reason, the web server user is usually configured to only run a very limited set of programs.
Starting and stopping your web server is something that you will usually need root permissions for. Your web server process will not have root permissions (for, hopefully, obvious reasons). But it's entirely possible that every user on the system (including the web server user) will have permissions to get the status of the web server process. This is why your httpd status command works, while the httpd stop command doesn't.
You could give the web server user temporary permission to start or stop services, using sudo or something like that. But you would need to do it very carefully - perhaps requiring a password on the web page (and transmitting that password securely over https).
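As a hedged illustration, reusing the init script path from your question (create the rule with visudo; the web server user name is an assumption):

# /etc/sudoers.d/httpd-control
apache ALL=(root) NOPASSWD: /etc/init.d/httpd start, /etc/init.d/httpd stop

The CGI script would then run sudo /etc/init.d/httpd stop, and sudo would permit exactly those two commands and nothing else.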
But it's probably a better idea to reconsider this approach completely.
Can I also point out that it's a bad idea to use backticks to run external commands when you don't want to collect the output. In cases like that, it is more efficient to use the system() function.
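For example, the stop command above could become something like this (a sketch; the error handling is illustrative):

# Run the command without capturing its output, and check the exit status
system('/etc/init.d/httpd', 'stop') == 0
    or print "<p>Failed to stop httpd: $?</p>\n";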
I also note that you are loading the CGI module but not using any of its functionality. You even manually create your Content-Type header, ignoring the module's header() function.
And here's my traditional admonition that writing new CGI programs in 2017 is a terrible idea. Please read CGI::Alternatives and consider a PSGI-based approach instead.
You should not even think of having a CGI script which has the privileges to start/stop services on a computer. There are, of course, valid reasons to want to have remote control over servers, but if giving the HTTP daemon super user privileges is the only way you can think of achieving that end, you need to realize that you ought not to be the person implementing that functionality.
I want to build a web-based admin tool that allows the system admin to run pre-configured commands and scripts through a web page (a simple and limited Webmin). What is the best approach?
I already started with Ubuntu, installing LAMP and giving the user www-data root's privileges!!!
As I learned (please check the link), this is a really bad move!!! So how do I build such a web-based system without the security risk?
cheers
I did something like this a couple of years ago. It was (I like to think) fairly secure, and only accessible to a limited number of pre-vetted, authenticated users, but it still left me with an uneasy feeling! If you can avoid doing it, I'd recommend you do :)
I had a database sitting between the frontend web tier and the script which was actually executing actions. The relevant table contained a symbolic command name and an optional numeric argument, which was sufficient for my needs. This allows you to audit what has been executed, provides a quick and dirty way to have a non-www user do things, and means that if the website is compromised, attackers are constrained (somewhat) by the DB structure and by the script which pulls data from it.
The data from the DB can be read by a daemon running in a separate, unprivileged account. The daemon pulls and sanitises data from the DB and maps each 'command' to an actual executable (with a hard-coded map, so commandA executes A, commandB executes foo, and anything else gets flagged as an error). The account can be locked down using AppArmor (or SELinux, I imagine) to prevent it from executing, reading or writing anything you don't expect it to. Have a system in place to alert you to any errors from either the daemon or AppArmor/SELinux.
The executables which the daemon runs can be setuid'd if appropriate, or you can use the sudoers mechanism to allow the unprivileged account to execute them without a password.
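A minimal sketch of that daemon's dispatch loop, assuming a MySQL table named pending with id/cmd/arg columns (the table layout, command names, and executable paths are all assumptions):

#!/bin/bash
# Poll the queue table and dispatch whitelisted commands.
while true; do
    row=$(mysql -N -B -e 'SELECT id, cmd, arg FROM pending ORDER BY id LIMIT 1' webadmin)
    [ -z "$row" ] && { sleep 5; continue; }
    read -r id cmd arg <<< "$row"
    mysql -e "DELETE FROM pending WHERE id = $id" webadmin   # consume the row first

    # Sanitise: accept only a numeric (or empty) argument
    [[ "$arg" =~ ^[0-9]*$ ]] || { logger -t webadmin "bad argument for $cmd"; continue; }

    # Hard-coded map: anything not listed is rejected and logged
    case "$cmd" in
        commandA) /usr/local/libexec/A "$arg" ;;
        commandB) /usr/local/libexec/foo ;;
        *)        logger -t webadmin "rejected command: $cmd" ;;
    esac
done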
I already started with Ubuntu installing LAMP and give the user www-data root's privileges
Don't do this.
If you really want to execute some very specific scripts with root privileges: create such predefined, very limited scripts, allow their password-less execution with sudo for a specific user, and then run them via your script. And don't forget authentication.
Generally, this is a bad idea.
SSH is your best friend.