Ok, so over the past year I have built some rather complex automation scripts (mostly bash, but with some Perl here and there) for some of the more common work we do at my place of business. They rely heavily on ImageMagick, Ghostscript, and PhantomJS, to name just a few. They also traverse a huge number of directories across the network and several different file systems and host OSs... Frankly the fact that they work is a bit of a miracle and perhaps a testament to my willingness to keep beating my head against the wall... Also, trust me, this is easier and more effective than trying to corral my resources. Our archives are... organic... and certain high-ranking individuals in the company think of them as belonging to them and do not look out for the interests of the company in their management. They are, at least, relatively well backed up these days.
In any case these scripts automate the production of a number of image-based print-ready products of varying degrees of complexity, up to multi-hundred-page image-heavy books, and as such some of them accept absurdly complex argument structures to do all the things they do. (P.S. embedded JavaScript in SVGs is a MAGICAL thing!) These systems have been in "working beta" for a while now, which basically means I've been hand-entering the commands at a terminal to run them. I want to move them up to production and offer them as a web service, so that those in production who are not friends with the command line can use them, and potentially to integrate them with our new custom-developed order management system.
TL;DR below
So that's the background; the problem is this:
I'm running everything on a headless CentOS 6.4 virtual machine with SELinux disabled.
Apache2 serves up my interface.sh CGI just fine, and the internet has already helped me make the POST data into shell variables. Now I need to launch the worker scripts that actually direct the heavy lifting and coordinate the binaries... from the CGI:
# get POST data from the form and turn it into shell variables...
/bin/bash /path/to/script/worker.sh "$arg1" "$arg2" "$arg3" "$arg5" "$arg6" "$opt1" "$arg7"
Nothing.
httpd log shows permission denied, fair enough!
Ok, googling suggests that the script being called by the CGI must also be owned by the apache user and group, or by root with 755 permissions. Done!
Now the httpd log shows permission denied for things worker.sh is trying to do :/
Google has led me to believe that for security reasons FastCGI requires that everything touched by the CGI process chain must have correctly controlled permissions, all the way down to the binaries and source files... Sure, this is smart for security and damage control, but almost impossible in my case. We have very dynamic data and terabytes of resources... :/
The script worker.sh exports its own environment and runs all its commands as root. This is largely to overcome the minefield of permissions disasters that I have to contend with and CentOS's own paranoia about allowing stuff to happen. I had hoped this might be a workaround, but no.
One suggestion I have seen is to simply write out the composed command to a text file and have cron or incron do something with it. Seems like that would work... BUT, I'd love to be able to get stdout/stderr back into my web page, as there are verbose errors and notifications (though no interaction) in many of these worker scripts, and I would like to provide notification of completion as well. Is there any way to do this that doesn't require a permissions war to be waged?
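For reference, the queue idea I'm picturing would be something like this (the spool path is made up; a root cron job polls it every minute):

# In the CGI (runs as apache): write the composed command to a spool file
job=/var/spool/jobs/$(date +%s).$$.job
printf '%s\n' "/path/to/script/worker.sh $arg1 $arg2 ..." > "$job.tmp"
mv "$job.tmp" "$job"    # atomic rename so the runner never sees a half-written job

# In the root cron job: run queued jobs, capturing stdout/stderr
for job in /var/spool/jobs/*.job; do
    [ -e "$job" ] || continue
    bash "$job" > "${job%.job}.log" 2>&1
    mv "$job" "$job.done"
done

The page could then poll the .log file for the script's verbose output and treat the .done marker as the completion notice.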
To run certain commands as another user, you can use sudo.
Set up sudo to allow passwordless access to run your command by the apache user. Then you can have the CGI script call sudo /path/to/script args to run it as root (or -u for another user of your choice).
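For example (using the worker.sh path from the question; edit sudoers with visudo, never directly):

# /etc/sudoers: allow apache to run exactly this one script as root, no password
apache ALL=(root) NOPASSWD: /path/to/script/worker.sh
# CentOS 6 ships with "Defaults requiretty", which blocks sudo from CGI; relax it for apache:
Defaults:apache !requiretty

# in the CGI:
sudo /path/to/script/worker.sh "$arg1" "$arg2" "$arg3"

Note that listing the command with no arguments in sudoers permits it with any arguments, so the script itself still has to treat its inputs as hostile.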
It's very hard to make this secure, so you should make sure your CGI script is only accessible by trustworthy individuals.
Related
I sometimes have a need to pay someone to perform some programming which exceeds my expertise. And sometimes that someone is someone I might not know.
My current need is to configure Apache, which happens to be running on CentOS.
Giving root access via SSH on my main physical server is not an option.
What are my options?
One thought is to create a Linux VPS guest on my main physical server (also Linux) using VirtualBox (or equivalent), have them do the work, figure out what they did, and manually implement the changes myself.
Does this seem secure? Are there better options? Thank you.
I suggest looking into the chroot command.
chroot() changes the root directory of the calling process to that specified in path. This directory will be used for pathnames beginning with /. The root directory is inherited by all children of the calling process.
The implication of this is that once inside a chroot "jail" a user cannot see "outside" of the jail; you've changed their root directory. You can include custom binaries, or none at all (I don't see why you'd want that, but the point being YOU decide what the developer can and can't see).
You can use a plain directory for the chroot, or my personal favorite: a mounted file, so your "jail" is easily portable.
Unfortunately I am a Debian user, and I would use debootstrap to build a minimal system into a small file (say, 5GB), but there doesn't seem to be an official RPM equivalent. However, the process is fairly simple. Create a file; I would do so with dd if=/dev/zero of=jailFile bs=1M count=5120. Then mkfs.ext4 jailFile. Finally, mount it and include any files you wish the jailed user to use (this is what debootstrap does: it downloads all the default goodies into /bin and such), either manually or with a tool.
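Put together, the whole sequence looks something like this (jailFile and /mnt/jail are placeholder names, and the library copy is a crude x86_64-only sketch):

dd if=/dev/zero of=jailFile bs=1M count=5120    # 5GB empty file
mkfs.ext4 -F jailFile                           # put a filesystem on it (-F: it's a file, not a device)
mkdir -p /mnt/jail
mount -o loop jailFile /mnt/jail                # loop-mount the image
# copy in whatever the jailed user should see, e.g. a shell plus its libraries:
mkdir -p /mnt/jail/bin /mnt/jail/lib64
cp /bin/bash /mnt/jail/bin/
cp $(ldd /bin/bash | grep -o '/[^ ]*') /mnt/jail/lib64/    # crude: copy every library path ldd reports
chroot /mnt/jail /bin/bash                      # you are now inside the jail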
After these steps you can copy this file around, make backups, or move servers even. All with little to no effort on the user side.
From a short Google search there appears to be a third-party tool that does nearly the same thing as debootstrap, here. If you are comfortable compiling this tool, can build a minimal system manually, or can find an alternative, and the idea of a portable ext4 jail appeals to you, I suggest this approach.
If the idea is unappealing, you can always chroot into a plain directory, which is very simple.
Here are some great links on chroot:
https://wiki.archlinux.org/index.php/Change_root
https://wiki.debian.org/chroot
http://www.unixwiz.net/techtips/chroot-practices.html
Also, here and here are great links about using chroot with OpenSSHServer.
On a side note: I do not think the question was off topic, but if you feel the answers here are inadequate, you can always ask on https://serverfault.com/ as well!
Controlling permissions is some of the magic at the core of the Linux world.
You... could add the individual as a non-root user, and then work towards providing specific access to the files you would like him to work on.
Doing this requires a fair amount of 'nixing to get right.
Of course, this is one route... If the user is editing something like an Apache configuration file, why not set up the file in a private Bitbucket or GitHub repository?
This way, you can see the changes that are made, confirm they are suitable, then pull them into production at your leisure.
I'm trying to reimplement an existing server service in Node.JS. That service can be compared to a classic FTP server: authenticated users can read/create/modify files, but restricted to the permissions given to the matching system user name.
I'm pretty sure I can't have Node.JS run as root and switch users using seteuid() or the like, since that would break concurrency.
Instead, can I let my Node.JS process run as root and manually check permissions when accessing files? I'm thinking about some system call like "could user X create a file in directory Y?"
Otherwise, could I solve this by using user groups? Note that the service must be able to delete/modify a file created by the real system user, which may not set a special group just so that the service can access the file.
Running node as root sounds dangerous, but I assume there aren't many options left for you. Most FTP servers run as root too, for the same reason. It means, though, that you need to pay serious attention to the security of the code you are going to run.
Now to the question:
You are asking whether you can reimplement UNIX permission checks in node.js. Yes you can, but you Should Not! There is an almost 100% chance you will leave holes or miss edge cases that the Unix core has already taken care of.
Instead, use process.setuid(id) as you mentioned. It will not defeat concurrency, but you need to think in terms of parallel processes rather than async now. That is extra work, but it will spare you the headache of reinventing Unix security.
Alternatively, if all of the operations you want to carry on filesystem involve shell commands, then you can simply modify them to the following pattern:
runuser -l userNameHere -c 'command'
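For example (the user name and path here are invented; mind the quoting, since the command string is run through the user's shell):

# delete a file as the authenticated user instead of as root:
runuser -l alice -c 'rm -f /home/alice/uploads/stale.tmp'

# if the filename comes from user input, escape it before it reaches the inner shell:
runuser -l "$user" -c "rm -f -- $(printf '%q' "$path")"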
I downloaded this shell script from this site.
It's suspiciously large for a bash script, so I opened it with a text editor and noticed that after the code there is a lot of nonsense characters.
I'm afraid of giving the script execution rights with chmod +x jd.sh. Can you advise me how to recognize whether it's safe, or how to give it limited rights in the system?
thank you
The "nonsense characters" indicate binary data included directly in the .sh file. The script will use the file itself as an archive and copy/extract files as needed. That's nothing unusual for an .sh installer. (edit: for example, makeself)
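The trick usually looks something like this (a toy sketch, not this particular script's code; the marker and installer names are made up):

#!/bin/sh
# everything below the __ARCHIVE__ marker is a compressed tarball, not shell code
ARCHIVE_LINE=$(awk '/^__ARCHIVE__$/ {print NR + 1; exit}' "$0")
mkdir -p /tmp/extracted
tail -n +"$ARCHIVE_LINE" "$0" | tar xzf - -C /tmp/extracted
exec /tmp/extracted/install.sh
__ARCHIVE__
(binary tarball data appended here)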
As with other software, it's virtually impossible to decide whether or not running the script is "safe".
Don't run it! That site is blocked where I work, because it's known to serve malware.
Now, as to verifying code, it's not really possible without isolating it completely (technically difficult, but a VM might serve if it has no known vulnerabilities) and running it to observe what it actually does. A healthy dose of mistrust is always useful when using third-party software, but of course nobody has time to verify all the software they run, or even a tiny fraction of it. It would take thousands (more likely millions) of work years, and would find enough bugs to keep developers busy for another thousand years. The best you can usually do is run only software which has been created or at least recommended by someone you trust at least somewhat. Trust has to be determined according to your own criteria, but here are some which would count in the software's favor for me:
Part of a major operating system/distribution. That means some larger organization has decided to trust it.
Source code is publicly available. At least any malware caused by company policy (see Sony CD debacle) would have a bigger chance of being discovered.
Source code is distributed on an appropriate platform. Sites like GitHub enable you to gauge the popularity of software and keep track of what's happening to it, while a random web site without any commenting features, version control, or bug database is an awful place to keep useful code.
While the source of the script does not seem trustworthy (IP address?), this might still be legit. With shell scripts it is possible to append binary content at the end and thus build a type of installer. Years ago, Sun would ship the JDK for Solaris in exactly that form. I don't know if that's still the case, though.
If you want to test it without risk, I'd install Linux in VirtualBox (free virtual-machine software), run the script there, and see what it does.
Addendum on "see what it does": there's a variety of tools on UNIX that you can use to analyze a binary program, like strace and ltrace (both built on the ptrace facility). What might also be interesting is running the script under chroot. That way you can easily find all the files it installs.
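For instance, in a throwaway VM you could log every file the script touches (jd.sh being the script from the question):

strace -f -e trace=file -o trace.log sh ./jd.sh    # -f follows child processes
grep -E 'open|creat|unlink|rename' trace.log | less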
But at the end of the day this will probably yield more binary files which are not easy to examine (as probably any developer of anti-virus software will tell you). Therefore, if you don't trust the source at all, don't run it. Or if you must run it, do it in a VM where at least it won't be able to do too much damage or access any of your data.
I want to build a web-based admin tool that allows the system admin to run pre-configured commands and scripts through a web page (a simple and limited Webmin); what is the best approach?
I already started with Ubuntu, installing LAMP and giving the user www-data root privileges!!!
As I learned (please check the link), this is a really bad move!!! So how do I build such a web-based system without the security risk?
cheers
I did something like this a couple of years ago. It was (I like to think) fairly secure and only accessible to a limited number of pre-vetted, authenticated users, but it still left me with an uneasy feeling! If you can avoid doing it, I'd recommend you do :)
I had a database sitting between the frontend web-tier and the script which was actually executing actions. The relevant table contained a symbolic command name and an optional numeric argument, which was sufficient for my needs. This allows you to audit what's been executed, provides a quick and dirty way to have a non-www user do things, and means if the website is compromised they're constrained by the DB structure (somewhat) and the script which pulls data from it.
The data from the DB can be read by a daemon running in a separate, unprivileged account. The daemon pulls and sanitises data from the DB and maps the 'command' to an actual executable (with a hard-coded map, so commandA executes A, commandB executes foo, and anything else would get flagged as an error). The account can be locked down using AppArmor (or SELinux, I imagine) to prevent it from executing, reading or writing anything you don't expect it to. Have a system in place to alert you of any errors from either the daemon or AppArmor/SELinux.
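The dispatch part of such a daemon can be a very dumb whitelist. A toy sketch, where fetch_jobs stands in for however you pull rows out of the command table, and the target paths are invented:

fetch_jobs | while read -r cmd arg; do
    case "$cmd" in
        commandA) /usr/local/bin/A "$arg" ;;
        commandB) /usr/local/bin/foo "$arg" ;;
        *) logger "admin-daemon: rejected unknown command: $cmd" ;;
    esac
done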
The executables which the daemon runs can be setuid'd if appropriate, or you can use the sudoers mechanism to allow the unprivileged account to execute them without a password.
I already started with Ubuntu installing LAMP and give the user www-data root's privileges
Don't do this.
If you really want to execute some very specific scripts with root privileges, create predefined, very limited scripts, allow their password-less execution with sudo for a specific user, and then run them via your script. And don't forget authentication.
Generally this is bad idea.
SSH is your best friend.
Is there a way to execute commands using directory traversal attacks?
For instance, I can access a server's /etc/passwd file like this:
http://server.com/..%01/..%01/..%01//etc/passwd
Is there a way to run a command instead? Like...
http://server.com/..%01/..%01/..%01//ls
... and get output?
To be clear here: I've found the vuln on our company's server. I'm looking to raise the risk level (bonus points for me) by proving that it may give an attacker complete access to the system.
Chroot on Linux is easily breakable (unlike FreeBSD's jail mechanism). A better solution is to switch on SELinux and run Apache in an SELinux sandbox:
run_init /etc/init.d/httpd restart
Make sure you have mod_security installed and properly configured.
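To check that it took effect (standard SELinux tooling, nothing specific to this box):

setenforce 1          # enforce for the current boot
sestatus              # confirm mode and loaded policy
ps -eZ | grep httpd   # httpd processes should be running in the httpd_t domain
# make it stick across reboots: set SELINUX=enforcing in /etc/selinux/config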
If you are able to view /etc/passwd because the document root or <Directory> access rules are not correctly configured on the server, then the presence of this vulnerability does not automatically mean you can execute commands of your choice.
On the other hand, if you are able to view entries from /etc/passwd because the web application uses user input (a filename) in calls such as popen, exec, system, shell_exec, or variants without adequate sanitization, then you may be able to execute arbitrary commands.
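A toy illustration of the difference (both lines are invented examples, not your server's code):

# file disclosure only: the attacker chooses which file is read, nothing runs
cat "/var/www/data/$page"         # page=../../etc/passwd leaks the file

# command execution: attacker input reaches a shell unquoted
sh -c "ls /var/www/data/$page"    # page='; cat /etc/passwd' runs a second command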
Unless the web server is utterly hideously programmed by someone with no idea what they're doing, trying to access ls using that (assuming it even works) would result in you seeing the contents of the ls binary, and nothing else.
Which is probably not very useful.
Yes it is possible (the first question) if the application is really really bad (in terms of security).
http://www.owasp.org/index.php/Top_10_2007-Malicious_File_Execution
Edit #2: I have edited out my comments as they were deemed sarcastic and blunt. OK, now that more information has come from gAMBOOKa about this (Apache on Fedora - which you should have put into the question), I would suggest:
Post to the Apache forum, highlighting that you're running the latest version of Apache on Fedora, and submit the exploit to them.
Post to Fedora's forum, again highlighting that you're running the latest version of Apache, and submit the exploit to them.
Include your httpd.conf when posting to both forums.
To minimize access to the passwd file, look into running Apache in a sandboxed/chrooted environment where files such as passwd are not visible from inside the sandbox. Have you a spare box lying around to experiment with? Or better, use VMware to simulate an environment IDENTICAL to your Apache/Fedora setup, run the httpd server within the VM, access the virtual machine remotely, and check whether the exploit is still visible. Then chroot/sandbox it and re-run the exploit...
Document the steps to reproduce it and include a recommendation until a fix is found; meanwhile, if there is minimal impact to running the webserver in a sandboxed/chrooted environment, push them to do so...
Hope this helps,
Best regards,
Tom.
If you can already view /etc/passwd then the server must be poorly configured...
If you really want to execute commands, then you need to know whether the PHP script running on the server contains any system() calls, so that you can pass commands through the URL,
e.g.: url?command=ls
Try to view the .htaccess files... that may do the trick.