Log processes and review their permissions - Linux

I need to check the permissions of all processes on Linux to see whether some of them are running with unnecessarily high privileges (via sudo, or even worse, as root). Is there a way to log processes so that I can check them afterwards?
I have checked the journal for commands, but I'm not sure if that's all or if I should look for something else as well.
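One way to capture such a log going forward (a minimal sketch, assuming the auditd package is installed and its daemon is running; the key name proc_exec is just an arbitrary label) is to audit every execve() call and review the records later, which include the uid/euid of each process:

# Log every program execution (64-bit and 32-bit syscall ABIs) under the key "proc_exec"
auditctl -a always,exit -F arch=b64 -S execve -k proc_exec
auditctl -a always,exit -F arch=b32 -S execve -k proc_exec

# Review the records later; --interpret resolves numeric uids to user names
ausearch -k proc_exec --interpret | less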

Related

Elevate privileges of running process

Is there a way for one process (such as an executable or bash script) to elevate the privileges of another, running, process? E.g. if I have a program running as a normal user, is it possible for another process, running as root, to elevate the privileges of the first as if it had been run as root originally?
I have seen exploits modify the credential struct of a process to perform this, but I'm not sure if there's a way to do this more legitimately.
Looking further into this, it appears that there is no way to do this without installing a kernel module; essentially a rootkit. The kind of thing I want is demonstrated here.
No, these properties of a process cannot be altered after it starts.
No. The only way to elevate a process’s privileges is by execing a setuid binary (such as /usr/bin/sudo); you can’t do it to an already running process.
You can, however, ask sudo to copy a file to a temporary path, launch your editor with your own privileges on the temporary path, and then copy the result back in place as root:
sudo -e filename
This is possible, but only at Ring 0, using commit_creds(prepare_kernel_cred(0)), which will update the task struct associated with the userland process, setting its UID/GID to 0. This is only possible with code already running in Ring 0, such as a kernel module/rootkit or a kernel exploit. An example of how this may be done is here.
You could start a new process using sudo, but starting a new instance with higher permissions will always result in a new process being created.
It's not possible to grant additional permissions to an already running process.

Forensic analysis - process log

I am performing forensic analysis on host-based evidence: examining the partitions of a server's hard drive.
I am interested in finding the processes all the "users" ran before the system died/rebooted.
As this isn't live analysis I can't use ps or top to see the running processes.
So, I was wondering if there is a log like /var/log/messages that shows me what processes users ran.
I have gone through a lot of logs in /var/log/* - they give me information about logins, package updates, authorization - but nothing about the processes.
If no process/command accounting was enabled, then there is no such log.
The chances of finding something are not great, but a few things to consider:
it depends how graceful the death/reboot was (if processes were killed gracefully, .bash_history and similar files may have been updated with recent session info)
utmp and wtmp files may give you the list of users who were active at the reboot.
the OS may have saved a crash dump (this depends on the Linux distribution). If so, you may be able to examine the OS state at the moment of the crash. See Red Hat's crash for details (http://people.redhat.com/anderson/crash_whitepaper/).
/tmp and /var/tmp may hold some clues about what was running
any files with mtime or ctime timestamps (and maybe atime as well) near the time of the crash (see the sketch after this list)
you may be able to get something useful from the swap partition (especially if the reboot was related to heavy RAM usage).
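For the timestamp angle, a rough sketch (the mount point /mnt/evidence and the time window are placeholders for your actual image and crash time):

# List files on the (read-only) mounted image modified within the suspect window
find /mnt/evidence -xdev -newermt "2019-05-01 02:00" ! -newermt "2019-05-01 03:00" -printf '%T+ %p\n' | sort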
So, I was wondering if there is a log like /var/log/messages that shows me what processes users ran
Given the OS implied by the file system path /var/log, I am assuming you are using Ubuntu or some other Linux-based server. If you were not doing live forensics while the box was running, no memory capture was grabbed for memory forensics, AND you rebooted the system, then there is no file within /var/log that will attribute processes to users. However, if the user was using the bash shell, you could check their .bash_history file, which shows the commands run by that user (bash keeps the last 500 by default).
Alternatively, if a memory dump was made (from /dev/mem or /dev/kmem), then you could use Volatility to pull out the processes that were running on the box. But I still do not think you could attribute the processes to the users that ran them; you would need additional output from Volatility for that link to be made.
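For reference, a minimal sketch of pulling a process listing from such a memory image with Volatility 2 (the image name memdump.lime and the Linux profile name are placeholders; you first need to build a profile matching the target kernel):

# List the processes captured in the memory image
vol.py -f memdump.lime --profile=LinuxUbuntu1204x64 linux_pslist
# linux_psaux additionally shows full command lines, which can help tie activity back to users
vol.py -f memdump.lime --profile=LinuxUbuntu1204x64 linux_psaux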

Lock an SSH user to their home dir & one service

I'm really new to Linux. I Googled for a couple of days, and installed Java and Tomcat on CentOS.
Now I need a user that has all privileges in their home directory (including files, subdirectories and files in subdirectories), but cannot access any other directory.
Also, this user has to have permission to manage one service (I created a tomcat service, which I can 'start', 'stop' and 'restart').
Can anyone explain how to do this?
You've asked for a lot.
There are a few possible approaches:
Entirely with "native" Linux permissions
Using a mandatory access control system
Native Linux permissions
Create this new user their own new group. Make them the only member of this group.
Remove world read, write, and execute permissions on all your data files. If any users were getting their privileges to the data files via world permissions, create new groups for all the users and data as appropriate (maybe one for accounting, one for billing, one for sales, one for engineering, etc.; whatever works).
Add a new sudoers(5) entry for this user for the sudo stop tomcat, sudo start tomcat, sudo restart tomcat, sudo status tomcat -- or whichever commands this user will need to execute to manage the tomcat service. See visudo(8) for details on editing the sudo(8) config file.
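As a sketch, such an entry could look like the following (edit with visudo; the user name appuser and the service command paths are placeholders for your actual setup):

# Allow appuser to manage only the tomcat service, nothing else
appuser ALL=(root) NOPASSWD: /sbin/service tomcat start, /sbin/service tomcat stop, \
    /sbin/service tomcat restart, /sbin/service tomcat status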
If you really want to lock this user down, copy in the utilities that this person will need into their ~/bin/ dir and then proceed to remove the world execute bit on /bin, /sbin, /usr/bin, /usr/sbin. (Leave /lib, /usr/lib, etc. alone -- copying in the libraries this user will need is doubtless a lot of work.)
Mandatory access controls
I'll explain this using the AppArmor system; I've worked on AppArmor for over a decade, and it is the system I know best. There are more choices: TOMOYO, SMACK, and SELinux are all excellent tools. AppArmor and TOMOYO work on the idea that you confine access to pathnames. SMACK and SELinux work on the idea that every object on the system is assigned a label and the policy specifies which labels (on processes) can read, write, execute, etc. labels (on data or other processes). If you wanted to enforce a comprehensive Open, Classified, Secret, Top Secret style of protection, SMACK or SELinux would be the better tools. If you want to confine some programs to some files, AppArmor or TOMOYO would be the better tools.
AppArmor should come ready-to-use on most Ubuntu, SUSE, PLD, Annvix, Mandriva, and Pardus distributions.
The AppArmor system confines processes and controls how processes can move from domain to domain when the processes execute new programs. Domains are usually assigned by program.
The easiest way to get started is to copy /bin/bash to /bin/jail_bash (or some other name not in /etc/shells), set the shell for the user in /etc/passwd (chsh(1) can make this easy), and create an AppArmor profile for /bin/jail_bash that allows only the actions you want to allow. If we confine the process correctly, then the user cannot escape the profile we make for them.
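A sketch of that setup (the user name appuser is a placeholder):

cp /bin/bash /bin/jail_bash          # private copy, deliberately left out of /etc/shells
usermod -s /bin/jail_bash appuser    # or chsh -s /bin/jail_bash appuser, run as root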
Add a new sudoers(5) entry for this user for the sudo stop tomcat, sudo start tomcat, sudo restart tomcat, sudo status tomcat -- or whichever commands this user will need to execute to manage the tomcat service. See visudo(8) for details on editing the sudo(8) config file.
In one terminal, run aa-genprof jail_bash. In another terminal, log in as the user (or otherwise run /bin/jail_bash) and begin doing tasks that you want to allow the person to do. We'll use what you do as training material to build a profile iteratively. You might be interested to watch /var/log/syslog or /var/log/audit/audit.log (if you have the auditd package installed) to see what operations AppArmor notices your program doing. Don't do too much at once -- just a few new things per iteration.
In the aa-genprof terminal, answer the questions as they come up. Allow what needs to be allowed. Deny what ought to be denied. When you are asked about execution privileges, prefer inherit or child over profile. (The profile option will influence everyone else on the system; inherit or child will only influence executions from whatever profile you're currently working on improving. Child breaks apart privileges into smaller pieces, while inherit keeps permissions in larger profiles. Prefer inherit for this case.)
Once you get to questions about executing tomcat, use the unconfined execute privilege. This is dangerous -- if a bug in the way tomcat is started allows people to start unconfined shells, then this can be used to break out of the jail. You could confine tomcat (and this is even a good idea -- tomcat isn't perfect) to prevent this from being an escape route, but that is probably not necessary right away.
AppArmor is designed to make it easy to grow the profiles on a system over time. AppArmor isn't applicable to all security situations, but we deployed scenarios very similar to this at the DEF CON Capture-the-flag hacking contest with excellent results. We had to allow fellow attackers root (and ephemeral user accounts) access to the machine via telnet, as well as POP3, SMTP, HTTP+CGI, and FTP.
Be sure to hand-inspect the profiles in /etc/apparmor.d/ before allowing your user to log in. You can fix anything you want with a plain text editor; run /etc/init.d/apparmor restart to reload all profiles (and unload the profiles you might remove).
It's handy to keep an unconfined root sash(1) shell open when you're first learning how to configure AppArmor. If you ignore the warning about programs that shouldn't have their own profile, it might be difficult to get back into your own system. (Don't forget about booting with init=/bin/sh in the worst of situations.)
You can easily create a very restricted environment by starting bash in restricted mode. Set the user's shell to rbash instead of bash, and that will put it into restricted mode.
http://www.gnu.org/s/bash/manual/html_node/The-Restricted-Shell.html
There's a chance that rbash will be too restrictive for your needs. Among other things, the restricted environment forbids changing directories. But take a look at it and see if it's sufficient for your needs.
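If you want to try it, a minimal sketch (appuser is a placeholder; bash behaves as a restricted shell when invoked under the name rbash, so a symlink is enough if your distribution does not ship one):

# Create rbash if it does not already exist, then make it the account's login shell
[ -e /bin/rbash ] || ln -s bash /bin/rbash
usermod -s /bin/rbash appuser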

Best approach (security) to do some admin work through a web page in Linux?

I want to build a web-based admin tool that allows the system admin to run pre-configured commands and scripts through a web page (a simple and limited Webmin); what is the best approach?
I already started with Ubuntu, installing LAMP and giving the user www-data root privileges!!!
As I learned (please check the link), this is a really bad move!!! So how do I build such a web-based system without the security risk?
cheers
I did something like this a couple of years ago. It was (I like to think) fairly secure and only accessible to a limited number of pre-vetted, authenticated users, but it still left me with an uneasy feeling! If you can avoid doing it, I'd recommend you do :)
I had a database sitting between the frontend web-tier and the script which was actually executing actions. The relevant table contained a symbolic command name and an optional numeric argument, which was sufficient for my needs. This allows you to audit what's been executed, provides a quick and dirty way to have a non-www user do things, and means if the website is compromised they're constrained by the DB structure (somewhat) and the script which pulls data from it.
The data from the DB can be read by a daemon running in a separate, unprivileged account. The daemon pulls and sanitises data from the DB and maps the 'command' to an actual executable (with a hard-coded map, so commandA executes A, commandB executes foo, and anything else would get flagged as an error). The account can be locked down using AppArmor (or SELinux, I imagine) to prevent it from executing, reading or writing anything you don't expect it to. Have a system in place to alert you of any errors from either the daemon or AppArmor/SELinux.
The executables which the daemon runs can be setuid'd if appropriate, or you can use the sudoers mechanism to allow the unprivileged account to execute them without a password.
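As an illustration of the hard-coded map described above, a sketch of the dispatch step inside such a daemon (the symbolic names and the commands they map to are placeholders):

#!/bin/sh
# $CMD is the symbolic command name the daemon pulled from the database
case "$CMD" in
    restart_web)  sudo /usr/sbin/service apache2 restart ;;
    rotate_logs)  sudo /usr/sbin/logrotate -f /etc/logrotate.conf ;;
    *)            logger -t admin-daemon "rejected unknown command: $CMD"; exit 1 ;;
esac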
I already started with Ubuntu, installing LAMP and giving the user www-data root privileges
Don't do this.
If you really want to execute some very specific scripts with root privileges, create predefined, very limited scripts, allow their password-less execution with sudo for a specific user, and then run them via that user. And don't forget authentication.
Generally this is bad idea.
SSH is your best friend.

Temporarily prevent Linux from shutting down

I have a backup script that runs in the background daily on my Linux (Fedora 9) computer. If the computer is shut down while the backup is in progress, the backup may be damaged, so I would like to write a small script that temporarily disables the ability of the user to reboot or shut the computer down.
It is not necessary that the script be uncircumventable; it's just to let the users of the system know that the backup is in progress and that they shouldn't shut down. I've seen the Inhibit method in the D-Bus freedesktop power management spec:
http://people.freedesktop.org/~hughsient/temp/power-management-spec-0.3.html
but that only prevents shutdowns when the system is idle, not shutdowns explicitly requested by the user.
Is there an easy way to do this in C/Python/Perl or bash?
Update: To clarify the question above: it's a machine with multiple users, but they use it sequentially via the plugged-in keyboard/mouse. I'm not looking for a system that would stop me "hacking" around it as root, but a script that would remind me (or another user) that the backup is still running when I choose Shut Down from the GNOME/GDM menus.
Another get-you-started solution: during shutdown, the system runs the scripts in /etc/init.d/ (or really, the scripts in /etc/rc.*/, but you get the idea). You could create a script in that directory that checks the status of your backup and delays shutdown until the backup completes. Or better yet, one that gracefully interrupts your backup.
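A sketch of such a script (the backup process name backup.sh and the script name are placeholders; you would also have to symlink it into the appropriate /etc/rc*.d/ stop slots):

#!/bin/sh
# /etc/init.d/wait-for-backup (sketch): hold up the shutdown sequence while the backup runs
case "$1" in
  stop)
    while pgrep -f backup.sh > /dev/null; do
      echo "Backup still in progress, delaying shutdown..."
      sleep 10
    done
    ;;
esac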
The super-user could work around this script (with /sbin/halt, for example), but you cannot prevent the super-user from doing anything if their mind is really set on doing it.
There is molly-guard, which prevents accidental shutdowns, reboots, etc. until all required conditions are met; the conditions can be self-defined.
As already suggested you can as well perform backup operations as part of the shutdown process. See for example this page.
If users are going to be shutting down via GNOME/KDE, just inhibit them from doing so.
http://live.gnome.org/GnomePowerManager/FAQ#head-1cf52551bcec3107d7bae8c332fd292ec2261760
I can't help but feel that you're not grokking the Unix metaphor, and what you're asking for is a kludge.
If a user is running as root, there's nothing you can do to stop them from shutting down the system! You can do window-dressing things like obscuring the shutdown UI, but that's not really accomplishing anything.
I can't tell if you're talking about this in the context of a multi-user machine, or a machine being used as a "desktop PC" with a single user sitting at a console. If it's the former, your users really shouldn't be accessing the machine with credentials that can shut down the system for day-to-day activities. If it's the latter, I'd recommend educating the users to either (a) check that the script is running, or (b) use a particular shutdown script that you designate, which checks for the script's process and refuses to shut down until it's gone.
More a get-you-started than a complete solution, you could alias the shutdown command away, and then use a script like
#!/bin/sh
# Refuse to shut down while the backup is still running. "backupprocess" is the
# name of the backup job; "shutdown-alias" is the real shutdown command, which
# has been aliased/renamed out of the way.
ps -ef | grep backupprocess | grep -v grep > /dev/null
if [ "$?" -eq 0 ]; then
    echo "Backup in progress: aborted shutdown"
    exit 0
else
    echo "Backup not in progress: shutting down"
    shutdown-alias -h now
fi
saved in the user's path as shutdown. I expect there would be some variation dependent on how your users invoke shutdown (window manager icons/command line), and perhaps for different distros too.
But a script that would remind me (or another user) that the backup is still running when I choose Shut Down from the GNOME/GDM menus
One may use polkit to completely block shutdown/restart, but I failed to find a method that would provide a clear explanation of why it is blocked.
Adding the following lines as /etc/polkit-1/localauthority/50-local.d/restrict-login-powermgmt.pkla works:
[Disable lightdm PowerMgmt]
Identity=unix-user:*
Action=org.freedesktop.login1.reboot;org.freedesktop.login1.reboot-multiple-sessions;org.freedesktop.login1.power-off;org.freedesktop.login1.power-off-multiple-sessions;org.freedesktop.login1.suspend;org.freedesktop.login1.suspend-multiple-sessions;org.freedesktop.login1.hibernate;org.freedesktop.login1.hibernate-multiple-sessions
ResultAny=no
ResultInactive=no
ResultActive=no
You still see a confirmation dialog, but there are no buttons to confirm. It looks ugly, but it works ;)
Unfortunately this applies to all users, not only the lightdm session, so you have to add a second rule to whitelist them if desired.
Note that this method blocks only reboot/shutdown commands issued from the GUI. To block reboot/shutdown commands from the command line, one may use molly-guard, as explained in https://askubuntu.com/questions/17187/disabling-shutdown-command-for-all-users-even-root-consequences/17255#17255
