Bash script with selective root permissions for commands - linux

Is it possible for a bash script invoked with root permissions to run different commands with different privileges?
Right now I have a script which runs a C program with root permissions and creates a folder and some files which I want to have non-root ownership and permissions. Looking at the man page I see that the mkdir command takes a permissions parameter, but I was wondering whether there's a smarter way of doing this.

Have a look at the man pages for the chmod and chown commands. Depending on what you are trying to do, one of these should be your solution.
If you want to change the directory ownership to a user/group other than root, use chown -R user:group [directory] to recursively change ownership. If you just want the permissions changed, but with root still in ownership then use chmod -R 754 [directory]; keep in mind you will need to alter the permissions to suit your needs.
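For instance, a minimal sketch of what that could look like in the script from the question (the program path, output directory, user and group are placeholders, not known values from the original setup):
#!/bin/bash
# Hypothetical wrapper: run the root-only program, then hand the
# resulting directory back to a regular user.
/usr/local/bin/my-c-program                  # assumed to create /var/data/output as root
chown -R someuser:somegroup /var/data/output
chmod -R u=rwX,go=rX /var/data/output        # X applies execute only to directories (and already-executable files)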

Related

How to set file permissions on a directory in Linux?

How do I set permissions on a directory so that those permissions are automatically applied to files created inside it?
Please, can anyone who knows the answer let me know?
I think you're asking about how to make files inherit permissions from their parent directory. I will assume you're using GNU/Linux (you've added the redhat and ubuntu tags), in which case this can be done from the terminal by assigning group ownership to the directory and getting the children (files) to inherit from there.
To do this:
Recursively set the directory permissions:
chmod -R <octal permission code> /path/to/parent_dir
Recursively change ownership of directory:
chown -R <you>:<yourgroup> /path/to/parent_dir
Set inheritance of group ownership with setgid bit:
chmod g+s /path/to/parent_dir
Note that the setgid bit on an executable file means it will run with the permissions of the file's group, as if it had been run by a member of that group. If you don't understand the permissions, the easiest way to find the chmod octal code (for a beginner) is with a chmod calculator. This is also a duplicate and probably doesn't belong here, but since I can't see it mentioned on Stack Overflow (it is on the Ubuntu StackExchange and Super User StackExchange) I'll answer here :)
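Putting those steps together, a minimal sketch (the path, user and group names are placeholders):
chmod -R 775 /srv/shared               # owner and group get rwx, others r-x
chown -R alice:developers /srv/shared
chmod g+s /srv/shared                  # new files and subdirectories inherit the 'developers' group
Keep in mind that the setgid bit only makes new entries inherit the group; the permission bits on newly created files are still governed by each user's umask.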

Using mkdir in my bash script and getting permission denied

I have a script that is owned by root, in a directory owned by root. Part of the script makes a directory that will hold the inputs/outputs of that script. I also have a symlink to that script so any user can run it from anywhere. I don't use the temp directory, so this info can be used as logs later.
Problem: when a user tries to run the script, they get an error that the directory cannot be created: permission denied.
Questions: Why won't the script make the directory so that root owns it, independent of which user runs it? How can the script make the directory so that root owns it instead of the user that ran it? Only the script needs this info, not the user.
Additional info:
the directory is: drws--s--x.
the script is: -rwxr-xr-x.
(If you need to know) the line in the script is simply: mkdir $tempdirname
I am matching the permissions of other scripts on the same server that output text files correctly, but since mine is a directory I'm getting permission errors.
I have tried adding the suid and sgid permissions. suid sounded like the correct solution, since it should make the script run as if it were run by the user that owns it. (Why isn't this the correct solution?)
I would like any user to be able to type the symlink name, which will run the script owned by root in the directory owned by root, and the directories created by that script will stay in its own directory. The end user has no knowledge of, or access to, the inner workings of this process (hence everything owned by root).
Scripts run as the user that runs them; the owner of the file and/or the directory it's in are irrelevant (except that the user needs read and execute permission to the file and directory). Binary executables can have their setuid bit set to make them always run as the file's owner. Old unixes allowed this for scripts as well but this caused a security hole, so setuid is ignored on scripts in modern unixes/Linuxes.
If you need to let regular users run a script as root, there are a couple of other ways to do this. One is to add the script to your /etc/sudoers file, so that users can use sudo to run it as root. WARNING: if you mess up your /etc/sudoers file, it can be hard to recover access to clean it up and get back to normal. Make a backup first, don't edit it with anything except visudo, and I recommend having a root shell open so that if something goes wrong you'll have the root access you need to fix it without having to go through sudo. The line you'll need to add will be something like this:
%everyone ALL=NOPASSWD: /path/to/script
If you want to make this automatic, so that users don't have to explicitly use sudo to run the script, you can start the script like this:
#!/bin/bash
if [[ $EUID -ne 0 ]]; then
    # Re-run this script under sudo, preserving all arguments
    exec sudo "$BASH_SOURCE" "$@"
fi
EDIT: A simpler version occurred to me; rather than having the script re-run itself under sudo, just replace the symlink with a stub script like this:
#!/bin/bash
exec sudo /path/to/real/script "$@"
Note that with this option, the /etc/sudoers entry must refer to the real script's path, not that of the symlink. Also, if the script doesn't take arguments, you can leave the "$@" off; using it does no harm either way.
If messing with /etc/sudoers sounds too scary, there's another option: you could "compile" the script with shc (which actually just makes a binary executable wrapper around it), and make that setuid root (chmod 4755 /path/to/compiled-script; chown root /path/to/compiled-script). Since it's in a binary wrapper, setuid will work.
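A rough sketch of that approach, assuming shc is installed (the paths and the installed name are placeholders):
shc -f /path/to/script.sh                    # produces /path/to/script.sh.x, a binary wrapper
cp /path/to/script.sh.x /usr/local/bin/compiled-script
chown root:root /usr/local/bin/compiled-script
chmod 4755 /usr/local/bin/compiled-script    # setuid root, world-executable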

How to get around subshell problem with sudo and file permissions

I have a specific problem. Here's a simplified example:
File /opt/test is owned by root. Has file permissions of 700.
I need to cp /opt/test /home/user/.
So I need this exact command set in my sudoers file. I can't open up permissions to any other command.
But if I put this as a NOPASSWD command in /etc/sudoers it doesn't work, because my user does not have permission to see /opt/test before it switches to the root user (sudo is 'globbing' or whatever the file paths before it switches to root).
How can I invoke this command in a subshell or something so that I can get the exact command laid out in /etc/sudoers? Putting the command in a script and then laying out the path to the script in sudoers fails (permissions). I think I need to invoke a subshell of sorts, but don't know how to lay that out in sudoers.
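For reference, a sketch of the kind of sudoers entry and invocation being described here (the username is a placeholder, and the exact form of the entry is an assumption, not taken from the original setup):
# /etc/sudoers (edit with visudo)
user ALL=(root) NOPASSWD: /bin/cp /opt/test /home/user/
# The user must then invoke the command exactly as written in the entry:
sudo /bin/cp /opt/test /home/user/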

Best practices in assigning permissions to web folders

I would like to know what is the best, correct and recommended way of doing chown and chmod to website files and folders.
I recently started working on Linux, and I have been doing it in the site root directory like the following:
sudo chown www-data:www-data -R ./
sudo chmod 775 -R ./
I know it is not the best way. There is a protected folder which should not be accessible from browsers and should not be writable, so I did the following to the protected folder:
sudo chown root:root -R protected/
sudo chmod 755 -R protected/
Is it correct? If anything can be improved please let me know.
Read your command again. What you are saying is "make everything below these directories executable". Does an HTML file or a gif need to be executable? I don't think so.
Regarding the directory which should not be writable by the webserver: think about what you want to do. You want to revoke write access to that directory from the webserver user and the webserver group (and everybody else, for that matter). That translates to chmod -w theDir. What you did instead tells the system "I want root to make changes to that directory, which shall be readable by everybody and by the root group." I doubt that is what you meant.
So I would suggest having the directory owned by the webserver user with only minimal read access; it should belong to a group (of users, that is) which is allowed to make the necessary modifications. The webserver does not belong to that group, as you want to prevent the outside world from making modifications. Another option would be to hand all the directories over to root and to the editor group, and control what the webserver can do via the "others" permission bits. Which to use depends heavily on your environment.
Edit:
In general, the "least rights" policy is considered good practice: give away as few rights as possible to get the job done. This means read access to static files and, depending on your environment, PHP files; read and execute rights for CGI executables; and read and execute rights for directories. Execute rights on a directory allow you to enter it and read its contents. No directory in the document root should ever be writable by the webserver. It is a security risk, even though the developers of some bigger CMSes do not seem to care too much about that. For temporary folders I would set the user and group to nobody:nogroup and set the sticky bit.
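As a rough sketch under those principles (the paths, user and group names are placeholders, not a universal recipe):
chown -R deploy:editors /var/www/site               # editors may change files; the webserver only reads via 'others'
find /var/www/site -type d -exec chmod 755 {} +     # directories: enter and list
find /var/www/site -type f -exec chmod 644 {} +     # static and PHP files: read only
chmod 755 /var/www/site/cgi-bin/*.cgi               # CGI executables: read and execute
chmod -R a-w /var/www/site/protected                # the protected folder stays non-writable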

cd into directory without having permission

When cd'ing into one of my directories, called openfire, the following error is returned:
bash: cd: openfire: Permission denied
Is there any way around this?
@user812954's answer was quite helpful, except I had to do this in two steps:
sudo su
cd directory
Then, to exit out of "super user" mode, just type exit.
Enter super-user mode, and cd into the directory that you do not have permission to enter. sudo requires the administrator password.
sudo su
cd directory
If it is a directory you own, grant yourself access to it:
chmod u+rx,go-w openfire
That grants you permission to use the directory and the files in it (x) and to list the files that are in it (r); it also denies group and others write permission on the directory, which is usually correct (though sometimes you may want to allow group to create files in your directory - but consider using the sticky bit on the directory if you do).
If it is someone else's directory, you'll probably need some help from the owner to change the permissions so that you can access it (or you'll need help from root to change the permissions for you).
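For the case mentioned above where you do want the group to be able to create files, a minimal sketch (the directory name is just an example):
chmod u+rwx,g+rwx,o-w shared_dir   # let the group create files in the directory
chmod +t shared_dir                # sticky bit: users can only delete files they own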
chmod +x openfire worked for me. It adds execution permission to the openfire folder.
Alternatively, you can do:
sudo -s
cd directory
You've got several options:
Use a different user account, one with execute permissions on that directory.
Change the permissions on the directory to allow your user account execute permissions.
Either use chmod(1) to change the permissions or
Use the setfacl(1) command to add an access control list entry for your user account. (This also requires mounting the filesystem with the acl option; see mount(8) and fstab(5) for details on the mount parameter.)
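For the setfacl option, a minimal sketch (the username and path are placeholders):
setfacl -m u:alice:rx /path/to/openfire   # grant alice permission to enter and list the directory
getfacl /path/to/openfire                 # inspect the resulting ACL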
It's impossible to suggest the correct approach without knowing more about the problem; why are the directory permissions set the way they are? Why do you need access to that directory?
I know this post is old, but in the cases covered by the above answers, what I had to do on a Linux machine was:
sudo chmod +x directory
Unless you have sudo permission to change it, or it is in your own user group/account, you will not be able to get into it.
Check out man chmod in the terminal for more information about changing permissions of a directory.
