How to start/stop services using Perl CGI script - linux

My aim is to start/stop services (like httpd, sshd, iptables, etc.) from a Perl CGI script.
#!/usr/bin/perl
use strict;
use warnings;
use CGI;

print "Content-type: text/html\n\n";
print <<EOM;
<html>
<body>
EOM
`/etc/init.d/httpd stop`;
my $res = `/etc/init.d/httpd status`;
print <<EOM;
<h3>$res</h3>
</body>
</html>
EOM
Here the first command inside backticks isn't working, whereas the second command, whose output is assigned to $res, is working.
Output on the browser is as follows:
httpd (pid 15657) is running...

I suggest displaying the output from the stop command. I strongly suspect that you will see an error indicating that you do not have permission to stop the service.
A correctly configured web server process will be owned by a user that has almost no permissions on the system. This is a security feature. CGI programs on your web server can be run by anyone who can access the web server. For that reason, the web server user is usually configured to only run a very limited set of programs.
Starting and stopping your web server is something that you will usually need root permissions for. Your web server process will not have root permissions (for, hopefully, obvious reasons). But it's entirely possible that every user on the system (including the web server user) will have permissions to get the status of the web server process. This is why your httpd status command works, while the httpd stop command doesn't.
You could give the web server user temporary permission to start or stop services, using sudo or something like that. But you would need to do it very carefully - perhaps requiring a password on the web page (and transmitting that password securely over https).
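That sudo route can be sketched as a sudoers fragment. This is only a sketch: the web-server user name (apache) and the SysV init script path are assumptions, and the file must only ever be edited with visudo.

```
# /etc/sudoers.d/httpd-control  (edit only with: visudo -f /etc/sudoers.d/httpd-control)
# Hypothetical entry: lets the web-server user run exactly these two
# commands, and nothing else, as root, without a password.
apache ALL=(root) NOPASSWD: /etc/init.d/httpd stop, /etc/init.d/httpd start
```

With that in place the CGI script would run `sudo /etc/init.d/httpd stop` instead of calling the init script directly. Whether you want to grant even this much is exactly the judgement call described above.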
But it's probably a better idea to reconsider this approach completely.
Can I also point out that it's a bad idea to use backticks to run external commands when you don't want to collect the output. In cases like that, it will be more efficient to use the system() function.
I also note that you are loading the CGI module, but not using any of its functionality. You even manually create your Content-Type header, ignoring the module's header() function.
And here's my traditional admonition that writing new CGI programs in 2017 is a terrible idea. Please read CGI::Alternatives and consider a PSGI-based approach instead.

You should not even think of having a CGI script which has the privileges to start/stop services on a computer. There are, of course, valid reasons to want to have remote control over servers, but if giving the HTTP daemon super user privileges is the only way you can think of achieving that end, you need to realize that you ought not to be the person implementing that functionality.

Related

How to launch a "rogue" cli server as unprivileged user

Let's state a situation:
I have the possibility to run arbitrary commands on a server as an unprivileged user, through "unconventional means".
I do not have the possibility to log in to that server using ssh, either as my unprivileged user or as anything else. So I do not currently have a CLI allowing me to run any commands I would like in a "normal" way.
I can ping that server and nothing prevents me from connecting to arbitrary ports.
I would still like to have a command line allowing me to run arbitrary commands as I wish on that server.
Theoretically nothing would prevent me from launching any program as my unprivileged user, including one that would open a port, allow some remote user to connect to it and just forward any commands to bash, returning the result. I just don't know any good program to do that.
So, does anyone know? I looked at ways to launch ssh_server as an unprivileged user, but some users reported that recent versions of ssh_server do not allow that anymore. Actually I don't even need ssh specifically; any way to get a working CLI would do the trick. Even a crappy node.js program launching an http server would work, as long as I have a CLI (... and it's not excessively crappy - the goal is to have a clean CLI, not something that bugs every two characters).
In case you would ask why I would like to do that, it's not related to anything illegal ^^. I just have to work with a very crappy Jenkins server for which I'm not allowed to have direct access to its agents. Whoever is responsible for that server doesn't give a sh** about its users' needs, so we have to use hacky solutions just to have some diagnostic data about that server (like ram, cpu and disk usage, installed programs, etc...). Having a CLI that I can launch sometime, instead of altering a build configuration and waiting 20 minutes to have an answer about what's going on, would really help.
Thanks in advance for any answer.
So do you have shell access to the server at least once? E.g., during the single day of the month when you are physically present at the site of your client or the outsourcing contractor?
And if you have shell access then, can you or your sysadmin install Cockpit?
It listens on port 9090.
You can then use the credentials of your local user and open a terminal window in your browser. See sidebar item "Terminal" on the screenshots of the cockpit homepage.
According to the documentation:
Cockpit has no special privileges and doesn’t run as root. It creates a session as the logged in user and has the same permissions as that user.
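If Cockpit is an option, the one-time setup during that shell-access window is small. This is a sketch assuming a systemd-based distribution that packages Cockpit; the package-manager command varies by distro.

```shell
# One-time setup on the server (as root); the package is named "cockpit"
# in the repositories of most major distributions.
yum install cockpit                      # or: apt-get install cockpit
systemctl enable --now cockpit.socket    # the web UI then listens on port 9090
```

Afterwards you log in at https://server:9090 with your ordinary local user credentials, as described above.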

Accessing user files from a perl webserver

I have a Perl server which needs the ability to read and write users' files and data. The users are authenticated via LDAP, so I can verify passwords and learn their home directory.
From here I need some way for this webserver (running as www-data) to access their files. I've thought about running every command through su/sudo but that's really not optimal when I just need to open/write/close/glob files in their home directories.
Is there standard practice for this? I haven't been able to turn up anything so far.
Notes
I want the files in their home directory, as the users will be SSHing in and running other commands on them that won't be available via the web
The web connection is made over HTTPS of course.
Related
How to successfully run Perl script with setuid() when used as cgi-bin?
You might want to reconsider your architecture. This sounds like a job for virtual hosts in an ISP-like configuration.
First, read the "Dynamically configured mass virtual hosting" page in the Apache VirtualHost documentation. Then read about how to run each virtual host as a different user.
Under this approach you would have a vhost for each user at $user.example.com; when Apache forks off a worker for the vhost, the fork runs setuid as the appropriate user. Then you set up the docroot and scriptalias for the vhost, which point to the site code.
Long story short, it's probably better to use Apache's (well-tested and well-documented) features for managing user identity than it is to do it in Perl or whip up your own suid wrapper. It's notoriously difficult to get it right.
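A sketch of such a per-user vhost, assuming the mpm-itk module is available (one way to "run each virtual host as a different user"); the user name, domain, and paths are all hypothetical:

```apache
# Hypothetical per-user vhost; requires the mpm-itk module.
<VirtualHost *:80>
    ServerName alice.example.com
    AssignUserID alice alice               # this vhost's workers run as user/group alice
    DocumentRoot /home/alice/www
    ScriptAlias /cgi-bin/ /home/alice/www/cgi-bin/
</VirtualHost>
```

CGI programs under this vhost then read and write alice's files as alice, with no suid wrapper of your own.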
Are you running Apache? This sounds like a job for WebDAV.
The trouble is that your web server is running as www-data. By design, it won't be able to change the owner of any file. Some other privileged process will need to change ownership on the webserver's behalf.
You could write a minimal setuid script to handle changing the ownership of files and deleting them, but this path is fraught with peril (especially if you've never written a setuid program before).

How CUPS web interface works

I'm wondering how the Common Unix Printing System "CUPS" handles the user actions and affects the configuration files. From my humble background, a web page can only access/edit files when there is some web server and a server-side script, so how does it work without installing a web server?
Does it work through some shell script? If yes, how does that occur?
It is not the web frontend that alters the configuration files. At least not if you compare it to the 'typical' setup: http server, scripting engine, script.
CUPS itself contains a daemon, which also acts as a minimal web server. That daemon has control over the configuration files, and it is free to accept commands from some web client it serves. So no magic here.
Turning that around, you could also set up a system running a 'normal' http server with such rights that it is able to alter all system configuration files. That's all a question of how that server/daemon is set up and started. It breaks down to simple rights management. You certainly do not want to do that, though ;-)

Running shell scripts with sudo through my web app

I have some functionality that interfaces with the server's OS in my web application. I've written a bash script and am able to run it from within my app.
However, some functionality of the script requires superuser privileges.
What is the most sane way to run this script securely? It is being passed arguments from a web form, but should only be able to be called by authenticated users that I trust not to haxxor it.
Whichever way you do this is likely to be very dangerous. Can you perhaps write a local daemon with the required privileges, and use some sort of message bus that produces/consumes events to be processed by this superuser-requiring component?
That way, you can carefully validate the contents of the message and reduce the likelihood of exploitation.
What is the most sane way to run this script securely?
If you really care about security, require the web client to provide a passphrase and use an ssh key. Then run the script under ssh-agent, and for the sensitive parts do ssh root@localhost command.... You probably will want to create ssh keypairs just for this purpose, as typing one's normal SSH passphrase into a web form is not something I would do (who trusts your web form, anyway?).
If you don't want quite this much security, and if you really, really believe that your web form can correctly authenticate its users, without any bugs, you could instead decide to trust the web server to run the commands you need. (I wouldn't.) In this case I would use the /etc/sudoers file to allow the web server to run the commands of interest without providing a password. Then your script should use sudo to run those commands.
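If you do go the sudo route, the web-form arguments need strict validation before they ever reach the privileged command. A minimal sketch, assuming a POSIX shell; the whitelist, the sample value, and the admin-task path are all hypothetical:

```shell
#!/bin/sh
# Hypothetical sketch: validate a web-form argument before it touches
# a sudo-permitted command. Only plain alphanumeric names (plus _ and -)
# are accepted; anything else is rejected outright.
is_safe_arg() {
    case "$1" in
        ''|*[!A-Za-z0-9_-]*) return 1 ;;  # empty, or contains a character outside the whitelist
        *) return 0 ;;
    esac
}

# stand-in for the value received from the web form
arg="restart-app"
if is_safe_arg "$arg"; then
    echo "ok: $arg"                        # prints: ok: restart-app
    # sudo /usr/local/bin/admin-task "$arg"   # the command permitted in sudoers
else
    echo "rejected: $arg" >&2
fi
```

The point is that the script never interpolates raw form input into a command line; anything containing shell metacharacters is refused before sudo is involved.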

best approach (security) to do some admin work through a web page in Linux?

I want to build a web-based admin tool that allows the system admin to run pre-configured commands and scripts through a web page (a simple and limited webmin); what is the best approach?
I already started with Ubuntu, installing LAMP and giving the user www-data root's privileges!!!
As I learned (please check the link), this is a really bad move!!! So how do I build such a web-based system without the security risk?
cheers
I did something like this a couple of years ago. It was (I like to think) fairly secure and only accessible to a limited number of pre-vetted, authenticated users, but it still left me with an uneasy feeling! If you can avoid doing it, I'd recommend you do :)
I had a database sitting between the frontend web-tier and the script which was actually executing actions. The relevant table contained a symbolic command name and an optional numeric argument, which was sufficient for my needs. This allows you to audit what's been executed, provides a quick and dirty way to have a non-www user do things, and means if the website is compromised they're constrained by the DB structure (somewhat) and the script which pulls data from it.
The data from the DB can be read by a daemon running in a separate, unprivileged account. The daemon pulls and sanitises data from the DB and maps the 'command' to an actual executable (with a hard-coded map, so commandA executes A, commandB executes foo, and anything else would get flagged as an error). The account can be locked down using AppArmor (or SELinux, I imagine) to prevent it from executing, reading or writing anything you don't expect it to. Have a system in place to alert you of any errors from either the daemon or AppArmor/SELinux.
The executables which the daemon runs can be setuid'd if appropriate, or you can use the sudoers mechanism to allow the unprivileged account to execute them without a password.
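The hard-coded map described above can be sketched in shell. The names commandA and commandB come from the answer's own example; the executable paths are invented for illustration:

```shell
#!/bin/sh
# Hypothetical sketch of the daemon's dispatch step: a hard-coded map from
# the symbolic command name pulled from the DB to the executable it stands
# for. Anything not in the map is flagged as an error, never executed.
dispatch() {
    case "$1" in
        commandA) echo "running: /usr/local/libexec/A" ;;         # would exec the real program
        commandB) echo "running: /usr/local/libexec/foo $2" ;;    # optional numeric argument
        *)        echo "error: unknown command '$1'" >&2; return 1 ;;
    esac
}

dispatch commandA
dispatch commandB 42
```

Because the map is closed, a compromised web tier can at worst request one of the predefined actions; it cannot name an arbitrary program to run.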
I already started with Ubuntu installing LAMP and give the user www-data root's privileges
Don't do this.
If you really want to execute some very specific scripts with root privileges: create such predefined, very limited scripts, allow their password-less execution with sudo for a specific user, and then run them via your script - and don't forget authentication.
Generally, this is a bad idea.
SSH is your best friend.
