Update or modify files owned by root from a Node.js server

I plan to create a web interface to configure part of my system, including some files owned by root. It will be a NodeJS server, and I know that running it as root is not a good idea.
Any suggestions on how to do that without introducing performance or security issues?
Thank you.

I decided to create a specific script, owned by root with highly restricted permissions, and to allow passwordless sudo on that script for a dedicated user that cannot log in (only root can su to it).
In the script I will perform the wanted actions (update, upgrade, file copies, etc.).
Let's hope that security is good enough.
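A minimal sketch of that setup, with hypothetical names (a webconfig user and a sysconfig-update.sh script) that are assumptions, not anything prescribed:

#!/bin/sh
# /usr/local/sbin/sysconfig-update.sh -- root-owned, performs only the fixed actions
set -eu
apt-get update
apt-get -y upgrade
cp /srv/app/generated.conf /etc/myapp/myapp.conf   # hypothetical file copy

$ chown root:root /usr/local/sbin/sysconfig-update.sh
$ chmod 700 /usr/local/sbin/sysconfig-update.sh

And in /etc/sudoers (edited via visudo):

webconfig ALL=(root) NOPASSWD: /usr/local/sbin/sysconfig-update.sh

The Node.js server, running as webconfig, then simply spawns sudo /usr/local/sbin/sysconfig-update.sh; that user can run exactly this script as root and nothing else.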

Related

How to let users run arbitrary source code on my server

I want to automate testing of my users' source code files by letting them upload C++, Python, Lisp, Scala, etc. files to my Linux machine, where a service will find them in a folder and then compile/run them to verify that they are correct. This server contains no important information about any of my users, so there's no database or anything for someone to hack. But I'm no security expert, so I'm still worried about a user somehow finding a way to run arbitrary commands with root privileges (basically I don't have any idea what sorts of things can go wrong). Is there a safe way to do this?
They will. If you give someone the power to compile, it is very hard to keep them from escalating to root. You say that the server is not important to you, but what if someone sends an email from that server, or alters some script, to obtain information about your home machine or another server you use?
At least you need to strongly separate yourself from them. I would suggest Linux containers, https://linuxcontainers.org/ ; they are trendy these days. But be careful: this is the kind of service that is always dangerous, no matter how much you protect yourself.
Read more about the chroot command in Linux.
This way you can provide every running user program with a separate, isolated environment.
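A rough sketch of the idea, assuming GNU chroot and a hypothetical /srv/jail directory:

$ mkdir -p /srv/jail/bin /srv/jail/lib /srv/jail/lib64
$ cp user-program /srv/jail/bin/
$ ldd /srv/jail/bin/user-program    # copy each listed library into the jail too
$ sudo chroot --userspec=nobody:nogroup /srv/jail /bin/user-program

Note that chroot alone is not a hard security boundary, especially for anything running as root; combine it with an unprivileged user and resource limits.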
You should under no circumstances allow a user to run code on your server with root privileges. A user could then just run rm -rf / and it would delete everything on your server.
I suggest you make a new local user / group that has very limited permissions, e.g. one that can only access a single folder. When you run the code on your server, you run it in that folder, and the user cannot access anything else. After the code has finished, you delete the contents of the folder. You should also test this rigorously to check that they really can't destroy or manipulate anything. A sketch of the setup follows.
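Here the account name (sandbox) and folder (/srv/runbox) are assumptions for illustration:

$ useradd --system --no-create-home --shell /usr/sbin/nologin sandbox
$ mkdir /srv/runbox && chown sandbox:sandbox /srv/runbox && chmod 700 /srv/runbox
$ cp submission.cpp /srv/runbox/
$ sudo -u sandbox g++ -o /srv/runbox/prog /srv/runbox/submission.cpp
$ sudo -u sandbox timeout 10 /srv/runbox/prog    # kill it after 10 seconds
$ rm -rf /srv/runbox/*

The timeout guards against infinite loops; disk quotas and ulimit/cgroups would be needed to cap memory and disk usage as well.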
If you're running on FreeBSD you could also look at Jails, which are a sort of lightweight virtualization that confines a user / program to a sandbox.

Accessing user files from a Perl webserver

I have a Perl server which needs the ability to read and write my users' files and data. The users are authenticated via LDAP, so I can verify passwords and learn their home directories.
From here I need some way for this webserver (running as www-data) to access their files. I've thought about running every command through su/sudo but that's really not optimal when I just need to open/write/close/glob files in their home directories.
Is there standard practice for this? I haven't been able to turn up anything so far.
Notes
I want the files in their home directory, as the users will be SSHing in and running other commands on them that won't be available via the web
The web connection is made over HTTPS of course.
Related
How to successfully run Perl script with setuid() when used as cgi-bin?
You might want to reconsider your architecture. This sounds like a job for virtual hosts in an ISP-like configuration.
First, read the "Dynamically configured mass virtual hosting" page in the Apache VirtualHost documentation. Then read about how to run each virtual host as a different user.
Under this approach you would have a vhost for each user, running as $user.example.com; when Apache forks off a worker for the vhost, the fork runs setuid as the appropriate user. Then you set up the docroot and scriptalias for the vhost, pointing to the site code.
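For instance, with the third-party mpm-itk module (one common way to run each vhost as its own user; the names below are assumptions), the vhost could look like:

<VirtualHost *:80>
    ServerName alice.example.com
    DocumentRoot /home/alice/public_html
    # mpm-itk directive: run this vhost's workers as alice
    AssignUserID alice alice
</VirtualHost>

Apache's suexec mechanism achieves something similar for CGI scripts.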
Long story short, it's probably better to use Apache's (well-tested and well-documented) features for managing user identity than it is to do it in Perl or whip up your own suid wrapper. It's notoriously difficult to get it right.
Are you running Apache? This sounds like a job for WebDAV.
The trouble is that your web server is running as www-data. By design, it won't be able to change the owner of any file. Some other privileged process will need to change ownership on the webserver's behalf.
You could write a minimal setuid script to handle changing the ownership of files and deleting them, but this path is fraught with peril (especially if you've never written a setuid program before).

Is setting the SUID/SGID bit on the SVN binary a security risk?

I would like to use a callback feature of an SVN repository (Unfuddle) to ping a URL on my server whenever a commit has been made. I have a PHP script accepting the message and attempting to call a shell script to execute an 'svn update'.
The problem I'm facing is that Apache is running under user 'www-data' and does not have access to the local repository: '.svn/lock' permission denied. I have read all about setting SUID/SGID on shell scripts and how most *NIX OS's simply don't support it because of the security risks involved.
However I can set the SUID/SGID bit on the SVN binary file located at /usr/bin/svn. This alleviates the problem by allowing any user to issue SVN commands on any repository; not the most ideal...
My question is what's the most logical/sane/secure way to implement this type of setup and if I did leave the bits set on the svn binary does that open up a major security risk that I'm failing to realize?
Sorry for the long-winded post; this is my first question and I wanted to be thorough.
Thanks
There are two types of solutions for this kind of problem: polling or event-driven.
An example of a polling solution would be a cronjob on your server that updates every N minutes. This would probably be the easiest to maintain if it works for you, and you sidestep the whole permissions issue by running the cron job from the correct account.
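For example, a crontab entry for the account that owns the working copy (path assumed) that updates every five minutes:

*/5 * * * * svn update -q /var/www/checkout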
The solution you covered is an event-driven one. Event-driven solutions are typically less resource-intensive, but can be harder to set up. Another example of an event-driven solution would be to have www-data belong to an svn group: set the SGID bit and chgrp the repository directory to the svn group. This should allow anyone in that group to check in/out.
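A sketch of that group-based setup, with an assumed group name and path:

$ groupadd svn
$ usermod -aG svn www-data
$ chgrp -R svn /var/www/checkout
$ chmod -R g+rw /var/www/checkout
$ find /var/www/checkout -type d -exec chmod g+s {} \;   # SGID so new files keep the group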
If you need to limit it to updating, you can escalate privileges or change user temporarily. You can use single-purpose SSH keys (aka command keys) to ssh in as the user with the correct privileges; the single-purpose key can then only be used to do the update.
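A command key is just an ordinary key restricted in the privileged user's ~/.ssh/authorized_keys; whatever command the client asks for, only the forced one runs (key material elided, paths assumed):

command="svn update -q /var/www/checkout",no-port-forwarding,no-pty,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... www-data@webhost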
Another way to escalate privileges would be to use sudo -u [user] [command]. Update the /etc/sudoers file to allow www-data to escalate/change user to one that can perform the update.
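The matching /etc/sudoers entry (edited via visudo; the user name and path are assumptions) could be:

www-data ALL=(svnowner) NOPASSWD: /usr/bin/svn update /var/www/checkout

The PHP hook then runs sudo -u svnowner /usr/bin/svn update /var/www/checkout; sudo matches the command and its arguments exactly, so nothing else is permitted.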
Either way I would NOT use SUID/SGID scripts.
As CodeRich already said, you can set up a cron job to frequently update the working copy (that's also the solution I would use).
Setting the SUID/SGID bits on svn is bad, because svn can then write files anywhere in the file system (think of a publicly accessible repository containing a passwd and shadow file, checked out into your /etc). You could use a little suid wrapper program (one that is SUID to your own user account, not root) which chdirs into your working copy and executes svn with the correct parameters there. You can look at ikiwiki, which does this when it is used as a CGI.
Another way is to change the permissions of your working copy, so that the www-data user can create and write files there.
Change the permissions on your working copy so that Apache can write to it. You have a directory you want to change, so you need permissions to do so. It's that simple :)
The problem you then face is that any Apache user (or hacked page) can write all over your repo, which is not a good thing. So you need to allow only part of the system to write to it, and the best way to do that is to run your PHP script as the user who already owns the repo.
That's easily achieved by running the PHP script as a CGI or FastCGI process. You can specify the user to run as; it doesn't have to be www-data at all. Though it does require a bit more setting up, this gets you about the best combination of event-driven behaviour and security you're likely to find.
Here's a quick explanation of phpSuexec that does this.
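As one concrete option (an assumption, not the only way), PHP-FPM lets you define a dedicated pool that runs the hook script as the repository owner:

; /etc/php/fpm/pool.d/svnhook.conf -- hypothetical pool
[svnhook]
user = svnowner
group = svnowner
listen = /run/php-fpm-svnhook.sock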

Best approach (security) to do some admin work through a web page in Linux?

I want to build a web-based admin tool that allows the system admin to run pre-configured commands and scripts through a web page (a simple and limited Webmin). What is the best approach?
I already started with Ubuntu, installing LAMP and giving the user www-data root privileges!
As I learned (please check the link), this is a really bad move, so how do I build such a web-based system without the security risk?
cheers
I did something like this a couple of years ago. It was (I like to think) fairly secure and only accessible to a limited number of pre-vetted, authenticated users, but it still left me with an uneasy feeling! If you can avoid doing it, I'd recommend you do :)
I had a database sitting between the frontend web-tier and the script which was actually executing actions. The relevant table contained a symbolic command name and an optional numeric argument, which was sufficient for my needs. This allows you to audit what's been executed, provides a quick and dirty way to have a non-www user do things, and means if the website is compromised they're constrained by the DB structure (somewhat) and the script which pulls data from it.
The data from the DB can be read by a daemon running in a separate, unprivileged account. The daemon pulls and sanitises data from the DB and maps the 'command' to an actual executable (with a hard-coded map, so commandA executes A, commandB executes foo, and anything else would get flagged as an error). The account can be locked down using AppArmor (or SELinux, I imagine) to prevent it from executing, reading or writing anything you don't expect it to. Have a system in place to alert you of any errors from either the daemon or AppArmor/SELinux.
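The mapping step might look like this hypothetical dispatcher, invoked by the unprivileged daemon for each row it pulls from the DB:

#!/bin/sh
# $1 = symbolic command name from the DB, $2 = optional numeric argument
# (the daemon has already checked that $2 is numeric)
case "$1" in
    commandA) exec /usr/local/bin/A ;;
    commandB) exec /usr/local/bin/foo "$2" ;;
    *)        logger "admin-daemon: unknown command: $1"; exit 1 ;;
esac

Because the map is hard-coded, a compromised front end can only ever trigger the listed executables.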
The executables which the daemon runs can be setuid'd if appropriate, or you can use the sudoers mechanism to allow the unprivileged account to execute them without a password.
I already started with Ubuntu, installing LAMP and giving the user www-data root privileges
Don't do this.
If you really want to execute some very specific scripts with root privileges, create predefined, very limited scripts, allow their password-less execution with sudo for a specific user, and then run them via that user. And don't forget authentication.
Generally, though, this is a bad idea.
SSH is your best friend.

CHMOD and the security for the directories on my server

I have a folder on my server on which I have changed the permissions to 777 (read, write and execute all) to allow users to upload their pictures.
So I want to know, what are the security risks involved in this?
I have implemented code to restrict what file formats can be uploaded, but what would happen if someone was to find the location of the directory, can this pose any threat to my server?
Can they start uploading any files they desire?
Thanks.
When users are uploading files to your server through a web form and some PHP script, the disk access on the server happens with the user id the web server is running under (usually nobody, www-data, apache, _httpd or even root).
Note here, that this single user id is used, regardless of which user uploads the file.
As long as there are no local users accessing the system by other means (ssh, for example), setting the upload directory's permissions to 0777 would not make much of a difference: apart from somebody exploiting a security vulnerability somewhere else in your system, there is no one those permissions apply to anyway, and such an attacker would probably just use /tmp.
It is always good practice to set only those permissions on a file or directory that are actually needed. In this case that means probably something like:
drwxrws--- 5 www-data www-data 4096 Nov 17 16:44 upload/
I'm assuming that other local users besides the web server will want to access those files, such as the sysadmin or a web designer. Add those users to the group your web server runs under and they won't need sudo or root privileges to access that directory. Also, the s in the group permissions (setgid) means that new files and directories in upload/ will automatically be owned by the same group.
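Setting that up (assuming the web server runs as www-data and a designer account named alice) would look like:

$ chown www-data:www-data upload/
$ chmod 2770 upload/        # rwxrws--- : group write plus setgid
$ usermod -aG www-data alice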
As to your last question: just because an attacker knows where the directory is, doesn't mean he can magically make files appear there. There still has to be some sort of service running that accepts files and stores them there... so no, setting the permissions to 0777 doesn't directly make it any less safe.
Still, there are several more dimensions to "safety" and "security" that you cannot address with file permissions in this whole setup:
uploaders can still overwrite each other's files because they all work with the same user id
somebody can upload a malicious PHP script to the upload directory and run it from there, possibly exploit other vulnerabilities on your system and gain root access
somebody can use your server to distribute child porn
somebody could run a phishing site from your server after uploading a lookalike of paypal.com
...and there are probably more. Some of these problems you may have addressed in your upload script, but then again, understanding Unix file permissions and where they apply usually comes at the very beginning of learning about security, which suggests you are probably not yet ready to tackle all of the possible problems.
Have your code looked at by somebody!
By what means are these users uploading their pictures? If it's over the web, then you only need to give the web server or the CGI script user access to the folder.
The biggest danger here is that users can overwrite or delete other users' files. Nobody without access to this folder will be able to write to it (unless you have some kind of guest/anonymous user).
If you need a directory that everyone can create files in, what you want is to mimic the permissions of the /tmp directory.
$ chown root:root dir; chmod 777 dir; chmod +t dir;
This way any user can create a file, but they cannot delete files owned by other users.
Contrary to what others have said, the executable bit on a directory in Unix systems means you can make that directory your current directory (cd to it); it has nothing to do with executing anything (execution of a directory is meaningless). If you remove the executable bit, nobody will be able to cd into it.
If nothing else, I would remove the execute permission on uploaded files for all users (if not for owner and group as well). With execute enabled, someone could upload a file that looks like a picture but is really an executable, which might cause no end of damage.
Possibly remove the read and write permissions for all users as well and restrict it to just owner and group, unless you need anonymous access.
You do not want the executable bit on. As far as *nix goes, the executable bit means you can actually run the file. So, for example, PHP scripts can be uploaded as type JPEG, and then someone can run that script if they know the location and it's within the web directory.
