Lock an SSH user to their home dir and one service - Linux

I'm really new to Linux. I Googled for a couple of days, and installed Java and Tomcat on CentOS.
Now I need a user that has all privileges in their home directory (including files, subdirectories, and files in subdirectories) but cannot access any other directory.
This user also has to have permission to manage one service (I created a tomcat service, which I can 'start', 'stop', and 'restart').
Can anyone explain how to do this?

You've asked for a lot.
There are a few possible approaches:
Entirely with "native" Linux permissions
Using a mandatory access control system
Native Linux permissions
Create a new group for this new user and make them its only member.
Remove world read, write, and execute permissions on all your data files. If any users were getting their privileges on the data files via world permissions, create new groups for the users and data as appropriate (maybe one for accounting, one for billing, one for sales, one for engineering, etc.; whatever works).
Add a new sudoers(5) entry for this user for the sudo stop tomcat, sudo start tomcat, sudo restart tomcat, sudo status tomcat -- or whichever commands this user will need to execute to manage the tomcat service. See visudo(8) for details on editing the sudo(8) config file.
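For example, a minimal entry (added with visudo) might look like this, assuming the user is named appuser and tomcat is controlled through /sbin/service; the names and paths are assumptions to adapt to your system:

appuser ALL=(root) NOPASSWD: /sbin/service tomcat start, /sbin/service tomcat stop, /sbin/service tomcat restart, /sbin/service tomcat status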
If you really want to lock this user down, copy the utilities that this person will need into their ~/bin/ dir and then remove the world execute bit on /bin, /sbin, /usr/bin, /usr/sbin. (Leave /lib, /usr/lib, etc. alone; copying in the libraries this user will need is doubtless a lot of work.)
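A sketch of that lockdown, assuming the user is appuser and only needs ls, cat, and cp (adjust the list to taste); be careful, since removing world execute on these directories affects every other account that relies on world permissions:

mkdir -p /home/appuser/bin
cp /bin/ls /bin/cat /bin/cp /home/appuser/bin/
# make sure ~appuser/bin is on the user's PATH
chmod o-x /bin /sbin /usr/bin /usr/sbin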
Mandatory access controls
I'll explain this using the AppArmor system; I've worked on AppArmor for over a decade, and it is the system I know best. There are more choices: TOMOYO, SMACK, and SELinux are all excellent tools. AppArmor and TOMOYO work on the idea that you confine access to pathnames. SMACK and SELinux work on the idea that every object on the system is assigned a label and the policy specifies which labels (on processes) can read, write, execute, etc. labels (on data or other processes). If you wanted to enforce a comprehensive Open, Classified, Secret, Top Secret style of protection, SMACK or SELinux would be the better tools. If you want to confine some programs to some files, AppArmor or TOMOYO would be the better tools.
AppArmor should come ready-to-use on most Ubuntu, SUSE, PLD, Annvix, Mandriva, and Pardus distributions.
The AppArmor system confines processes and controls how processes can move from domain to domain when the processes execute new programs. Domains are usually assigned by program.
The easiest way to get started is to copy /bin/bash to /bin/jail_bash (or some other name not in /etc/shells), set the shell for the user in /etc/passwd (chsh(1) can make this easy), and create an AppArmor profile for /bin/jail_bash that allows only the actions you want to allow. If we confine the process correctly, then the user cannot escape the profile we make for them.
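For example (appuser is a placeholder):

cp /bin/bash /bin/jail_bash
chsh -s /bin/jail_bash appuser

chsh may warn that /bin/jail_bash is not listed in /etc/shells; that is deliberate here, and as root you can set the shell anyway.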
As in the native-permissions approach, add a sudoers(5) entry for whichever commands this user will need to execute to manage the tomcat service (see the example above, and visudo(8) for details on editing the sudo(8) config file).
In one terminal, run aa-genprof jail_bash. In another terminal, log in as the user (or otherwise run /bin/jail_bash) and begin doing tasks that you want to allow the person to do. We'll use what you do as training material to build a profile iteratively. You might be interested to watch /var/log/syslog or /var/log/audit/audit.log (if you have the auditd package installed) to see what operations AppArmor notices your program doing. Don't do too much at once -- just a few new things per iteration.
In the aa-genprof terminal, answer the questions as they come up. Allow what needs to be allowed. Deny what ought to be denied. When you are asked about execution privileges, prefer inherit or child over profile. (The profile option will influence everyone else on the system. Inherit or child will only influence executions from whatever profile you're currently working on improving. Child breaks privileges apart into smaller pieces, while inherit keeps permissions in larger profiles. Prefer inherit for this case.)
Once you get to questions about executing tomcat, use the unconfined execute privilege. This is dangerous -- if a bug in the way tomcat is started allows people to start unconfined shells, then this can be used to break out of the jail. You could confine tomcat (and this is even a good idea -- tomcat isn't perfect) to prevent this from being an escape route, but that is probably not necessary right away.
AppArmor is designed to make it easy to grow the profiles on a system over time. AppArmor isn't applicable to all security situations, but we deployed scenarios very similar to this at the DEF CON Capture-the-flag hacking contest with excellent results. We had to allow fellow attackers root (and ephemeral user accounts) access to the machine via telnet, as well as POP3, SMTP, HTTP+CGI, and FTP.
Be sure to hand-inspect the profiles in /etc/apparmor.d/ before allowing your user to log in. You can fix anything you want with a plain text editor; run /etc/init.d/apparmor restart to reload all profiles (and unload the profiles you might remove).
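As a rough illustration, a hand-edited profile might look something like this (a sketch only; aa-genprof generates the real rule set, and the user name appuser and its home path are assumptions):

#include <tunables/global>
/bin/jail_bash {
  #include <abstractions/base>
  /bin/jail_bash mr,        # the shell itself
  /home/appuser/ r,         # list the home directory
  /home/appuser/** rw,      # full access to everything beneath it
  /home/appuser/bin/* ix,   # run copied-in utilities under this same profile
  /usr/bin/sudo Ux,         # unconfined execute, needed for the sudo tomcat commands
}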
It's handy to keep an unconfined root sash(1) shell open when you're first learning how to configure AppArmor. If you ignore the warning about programs that shouldn't have their own profile, it might be difficult to get back into your own system. (Don't forget about booting with init=/bin/sh in the worst of situations.)

You can easily create a very restricted environment by starting bash in restricted mode. Set the user's shell to rbash instead of bash, and that will put it into restricted mode.
http://www.gnu.org/s/bash/manual/html_node/The-Restricted-Shell.html
There's a chance that rbash will be too restrictive for your needs. Among other things, the restricted environment forbids changing directories. But take a look at it and see if it's sufficient.
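For example, assuming the user is called appuser (a placeholder) and rbash is available:

chsh -s /bin/rbash appuser
# rbash forbids changing PATH, so set it from a root-owned startup file
echo 'PATH=$HOME/bin' > /home/appuser/.bash_profile
chown root:root /home/appuser/.bash_profile

Since bash enforces the restrictions only after the startup files are read, keeping .bash_profile root-owned stops the user from pointing PATH back at the full system.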


How to give root access to a Linux application

I am making a Linux application using Python3 and Qt5. When the user installs this application, it will create some files in the /usr/share folder. To create the files, the application needs root access.
I plan on having the application show a prompt box where the user enters the root password to give the application root access. But I don't know how I can give the application root access using that password.
This is a world of pain. It's certainly possible to have an application that runs as a normal user carry out certain actions as a privileged user, but I always feel that the need to do this suggests that installation and maintenance hasn't been thought through properly.
To elevate privileges, and assuming sudo isn't appropriate (it probably won't be in this case), you will either need to use an operating system tool that does the job (prompting for credentials and then running something), or implement a helper for your program that has the suid attribute on its executable.
I expect all Linux systems have access to "su", but the standard su doesn't have a graphical interface, which is a drag for GUI programs. You can collect the user credentials in your application, and then pass them to su (which is fiddly), or you can use one of the various graphical su-type utilities such as gksu. Of course, that only works if those utilities are available on your platform.
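For example, assuming gksu is installed and your privileged helper lives at a hypothetical path:

gksu /usr/local/libexec/myapp-helper

gksu prompts graphically for the password and then runs the command as root.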
If you go the route of providing your program with a suid part that handles the work that needs elevated privileges, you need to be inordinately careful about security -- how you collect the credentials, how you verify that intruders can't do things they shouldn't, etc.
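For reference, a suid helper is installed roughly like this (the helper path is hypothetical; note that Linux ignores the suid bit on interpreted scripts, so the helper must be a compiled binary):

chown root:root /usr/local/libexec/myapp-helper
chmod 4755 /usr/local/libexec/myapp-helper    # the leading 4 sets the suid bit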
Frankly, it's a can of worms. I would think that it's nearly always better to provide your application with an installation or maintenance module that has to be run as root. That way all the hard work gets done by the platform.

How can I access files of different users and retain permission restrictions in Linux using Node.JS?

I'm trying to reimplement an existing server service in Node.JS. That service can be compared to a classic FTP server: authenticated users can read/create/modify files, but restricted to the permissions given to the matching system user name.
I'm pretty sure I can't have Node.JS run as root and switch users using seteuid() or the like, since that would break concurrency.
Instead, can I let my Node.JS process run as root and manually check permissions when accessing files? I'm thinking about some system call like "could user X create a file in directory Y?"
Otherwise, could I solve this by using user groups? Note that the service must be able to delete/modify a file created by the real system user, who may not set a special group just so that the service can access the file.
Running node as root sounds dangerous, but I assume there aren't many options left for you. Most FTP servers run as root too, for the same reason. It does mean, though, that you need to pay serious attention to the security of the code you are going to run.
Now to the question:
You are asking whether you can reimplement Unix permission checks in Node.JS. Yes, you can, but you should not! There is almost a 100% chance that you will leave holes or miss edge cases the Unix core has already taken care of.
Instead, use process.setuid(id) as you mentioned. It will not defeat concurrency, but you will need to think in terms of parallel worker processes rather than a single async process. That is extra work, but it will spare you the headache of reinventing Unix security.
Alternatively, if all of the operations you want to carry out on the filesystem involve shell commands, you can simply modify them to the following pattern:
runuser -l userNameHere -c 'command'
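For example, to delete a file on behalf of an authenticated user (the user name and path are placeholders), run as root:

runuser -l alice -c 'rm /home/alice/uploads/old.txt'

Because the command runs with alice's uid, the kernel enforces the normal permission checks for you.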

Intricacies of launching a complex shell script from CGI

Ok, so over the past year I have built some rather complex automation scripts (mostly bash, but with some perl here and there) for some of the more common work we do at my place of business. They rely heavily on ImageMagick, Ghostscript, and PhantomJS to name just a few. They also traverse a huge number of directories across the network and several different file systems and host OSs... Frankly the fact that they work is a bit of a miracle and perhaps a testament to my willingness to keep beating my head against the wall... Also, trust me, this is easier and more effective than trying to corral my resources. Our archives are... organic... and certain high-ranking individuals in the company think of them as belonging to them and do not look out for the interests of the company in their management. They are, at least, relatively well backed-up these days.
In any case, these scripts automate the production of a number of image-based print-ready products of varying degrees of complexity, up to multi-hundred-page image-heavy books, and as such some of them accept absurdly complex argument structures to do all the things they do. (P.S. embedded JavaScript in SVGs is a MAGICAL thing!) These systems have been in "working beta" for a while now, which basically means I've been hand-entering the commands at a terminal to run them. I want to move them up to production and offer them as a web service, so that those in production who are not friends with the command line can use them, and to potentially integrate them with our new custom-developed order management system.
TL;DR below
So that's the background; the problem is this:
I'm running everything on a headless CentOS 6.4 virtual machine with SELINUX disabled.
Apache2 serves up my interface.sh CGI just fine, and the internet has already helped me make the POST data into shell variables. Now I need to launch the worker scripts that actually direct the heavy lifting and coordinate the binaries... from the CGI:
#get post data from form and make it into variables...
/bin/bash /path/to/script/worker.sh "$arg1" "$arg2" "$arg3" "$arg5" "$arg6" "$opt1" "$arg7"
Nothing.
httpd log shows permission denied, fair enough!
Ok, googling suggests that the script being called by the CGI must also be owned by the apache user and group, or by root with 755 permissions. Done!
Now the httpd log shows permission denied for things worker.sh is trying to do :/
Google has led me to believe that, for security reasons, fcgi requires that everything interacted with by the CGI process chain must have correctly controlled permissions, all the way down to the binaries and source files... Sure, this is smart for security and damage control, but almost impossible in my case. We have very dynamic data and terabytes of resources... :/
The script worker.sh exports its own environment and runs all its commands as root. This is largely to overcome the minefield of permissions disasters that I have to contend with and CentOS's own paranoia about allowing stuff to happen. I had hoped this might be a workaround, but no.
One suggestion I have seen is to simply write out the composed command to a text file and have cron or incron do something with the text file. Seems like that would work... BUT, I'd love to be able to get STDIO back into my web page as there are verbose errors and notifications (though no interaction) in many of these worker scripts, and I would like to provide notification of completion as well. Is there any way to do this that doesn't require a permissions war to be waged?
To run certain commands as another user, you can use sudo.
Set up sudo to allow passwordless access to run your command by the apache user. Then you can have the CGI script call sudo /path/to/script args to run it as root (or -u for another user of your choice).
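A minimal sketch of the sudoers(5) configuration (edit with visudo), assuming Apache runs as the apache user, as it typically does on CentOS:

Defaults:apache !requiretty
apache ALL=(root) NOPASSWD: /path/to/script/worker.sh

The !requiretty line matters because CentOS's default sudoers refuses to run sudo without a terminal, and a CGI process has none. The CGI invocation then becomes:

sudo /path/to/script/worker.sh "$arg1" "$arg2" "$arg3" "$arg5" "$arg6" "$opt1" "$arg7"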
It's very hard to make this secure, so you should make sure your CGI script is only accessible by trustworthy individuals.

How do I disable SELinux for a subprocess launched from Apache?

My Apache module launches a helper subprocess which does, among other things, the following:
It sets up a socket so that it can communicate with Apache.
It reads and writes files in a temporary location that is deleted when Apache exits. These files are used e.g. for storing large amounts of data received over the network, in case that data does not comfortably fit in RAM.
It spawns user-specified executables, similar to CGI. Each of these spawned processes is run as its own dedicated user.
The helper subprocess is launched as root so that it can manage file ownerships and permissions and can spawn more processes as specific users.
Some users of my module run on systems with SELinux installed, e.g. RedHat-based distros. SELinux usually interferes with my module. Until now I've been telling people to disable SELinux system-wide because I can't figure out how to write a proper policy for my software. Documentation is very scattered, complex and usually only targets system administrators, not software developers.
As a step in the right direction, I want to implement minimal SELinux support. I'm looking for a way to launch my helper subprocess without any SELinux constraints, without disabling SELinux system-wide. Is there a way to do that, and if so, how?
Well... you could write a rule that transitions your domain to unconfined_t, but then you'd piss off quite a few sysadmins. Best to write yourself a new domain that inherits from httpd_t and also adds the appropriate contexts for access.
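If you go the custom-domain route, the usual workflow is to run the helper, collect the resulting AVC denials, and turn them into a local policy module with audit2allow (a sketch; mymodule_helper is a placeholder and this assumes auditd is logging to /var/log/audit/audit.log):

grep mymodule_helper /var/log/audit/audit.log | audit2allow -M mymodule
semodule -i mymodule.pp

Review the generated mymodule.te before installing it; audit2allow happily allows whatever it saw, including things you may not want.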

Best approach (security) to do some admin work through a web page in Linux?

I want to build a web-based admin tool that allows the system admin to run pre-configured commands and scripts through a web page (a simple and limited webmin). What is the best approach?
I already started with Ubuntu, installing LAMP and giving the user www-data root privileges!!!
As I learned (please check the link), this is a really bad move!!! So how do I build such a web-based system without the security risk?
cheers
I did something like this a couple of years ago. It was (I like to think) fairly secure and only accessible to a limited number of pre-vetted, authenticated users, but it still left me with an uneasy feeling! If you can avoid doing it, I'd recommend you do :)
I had a database sitting between the frontend web-tier and the script which was actually executing actions. The relevant table contained a symbolic command name and an optional numeric argument, which was sufficient for my needs. This allows you to audit what's been executed, provides a quick and dirty way to have a non-www user do things, and means if the website is compromised they're constrained by the DB structure (somewhat) and the script which pulls data from it.
The data from the DB can be read by a daemon running in a separate, unprivileged account. The daemon pulls and sanitises data from the DB and maps the 'command' to an actual executable (with a hard-coded map, so commandA executes A, commandB executes foo, and anything else would get flagged as an error). The account can be locked down using AppArmor (or SELinux, I imagine) to prevent it from executing, reading or writing anything you don't expect it to. Have a system in place to alert you of any errors from either the daemon or AppArmor/SELinux.
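A sketch of the kind of hard-coded map the daemon might use, here as a shell case statement (the command names and target executables are placeholders):

#!/bin/sh
# $cmd and $arg come from the sanitised DB row
case "$cmd" in
  restart_web) sudo /usr/sbin/service apache2 restart ;;
  rotate_logs) /usr/local/bin/rotate-logs "$arg" ;;
  *) echo "unknown command: $cmd" >&2; exit 1 ;;
esac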
The executables which the daemon runs can be setuid'd if appropriate, or you can use the sudoers mechanism to allow the unprivileged account to execute them without a password.
I already started with Ubuntu, installing LAMP and giving the user www-data root privileges
Don't do this.
If you really want to execute some very specific scripts with root privileges, create predefined, very limited scripts, allow their password-less execution with sudo for a specific user, and then run them from your script. And don't forget authentication.
Generally, this is a bad idea.
SSH is your best friend.
