Current method
Currently, I share some of my dotfiles with the root user using symbolic links:
ln -s ~user/.vimrc /root/
ln -s ~user/.zshenv /root/
ln -s ~user/.zlogin /root/
ln -s ~user/.zshrc /root/
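If more dotfiles join the list later, the same links can be created in a loop; this is just a small sketch using the file names from above:
for f in .vimrc .zshenv .zlogin .zshrc; do
    ln -sf ~user/"$f" /root/
done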
Former method
Previously, I used sudo with the -E option, which preserves the environment. That way, root, in an interactive shell, would use the standard user's home directory and read the corresponding dotfiles.
It works, but:
Some files may be created in the standard user's directory with root as their owner.
Some commands refuse (or warn) when using files in a directory whose owner is not the current user (obviously for security reasons), so running those commands as root is problematic.
Better method?
The simplest method is to put shared settings in the system-wide configuration files (/etc/zshrc, /etc/vimrc).
But I want to keep all the settings in my home directory, where I can keep them synchronized with a Git remote repository. This way, I can deploy them easily on a new computer.
As my current method is tedious and the former one was pleasant but problematic, is there a better way to make root use my current configuration files?
What I usually do is include a deployment script in the Git repository. I then invoke that script using sudo; it runs with root credentials and updates the dotfiles, either in the root account or globally.
I keep the install script as light as possible, with no dependencies beyond the shell and the core utilities (so no rsync).
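For illustration, a minimal sketch of such an install script, assuming the dotfiles sit at the top of the repository and should land in root's home (the file list and target are assumptions to adapt):
#!/bin/sh
# install.sh - run as: sudo ./install.sh
set -e
repo_dir=$(dirname "$0")
for f in .vimrc .zshenv .zlogin .zshrc; do
    cp "$repo_dir/$f" /root/    # copied as root, so root owns its own copies
done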
Related
On this particular Linux server, we have a directory to which people can add certain files, and we want those files to be owned by a particular user, editable by a specific group, and not viewable by the public. Right now, what I have to do is occasionally run sudo chown this_user:that_group /foo/bar/*.ext; sudo chmod 750 /foo/bar/*.ext from the command line. I would prefer to turn this into a command-line program that other users could invoke, including those who don't have sudo access. Imagine a program called /usr/bin/fixpermissions which would run the above chown and chmod commands and return a success message.
How should I write this script so that it wouldn't ask for a password for the sudo part? And how can I make it available to other users (is putting it in /usr/bin/ sufficient or appropriate)?
That's not so much a question of "How to write the script", but rather of "How to make it usable via sudo".
The canonical location for the script would be /usr/local/bin ...
To achieve the "execute as sudo w/o password" I'd create a separate sudoers file:
sudo visudo -f /etc/sudoers.d/fixpermissions
with the following content:
%group ALL = NOPASSWD: /usr/local/bin/fixpermissions
Obviously adjust names of files and groups to match your personal preferences and existing setup.
Be careful about creating the sudoers file above by any means other than visudo - you might end up locking yourself out of the box if you save a file with syntax errors (visudo checks it for validity on exit and prompts you to fix it if it's borked).
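For completeness, the script itself could be as simple as the following sketch, reusing the paths and ownership from the question (adjust to your setup):
#!/bin/sh
# /usr/local/bin/fixpermissions
chown this_user:that_group /foo/bar/*.ext
chmod 750 /foo/bar/*.ext
echo "Permissions fixed."
With the sudoers entry in place, members of %group then simply run sudo fixpermissions and are not prompted for a password.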
I have a script that is owned by root, in a directory owned by root. Part of the script creates a directory that will hold the inputs/outputs of that script. I also have a symlink to that script so any user can run it from anywhere. I don't use the temp directory, so this information can be used as logs later.
Problem: when a user tries to run the script, they get an error that the directory cannot be created because of permission denied.
Questions: why won't the script create the directory so root owns it, independent of which user runs it? How can the script create the directory so that root owns it instead of the user who ran it? Only the script needs this information, not the user.
Additional info:
The directory is: drws--s--x.
The script is: -rwxr-xr-x.
(If you need to know) the line in the script is simply: mkdir $tempdirname
I am matching the permissions of other scripts on the same server that output text files correctly, but since mine creates a directory, I'm getting permission errors.
I have tried adding the setuid and setgid permissions. setuid sounded like the correct solution, since it should make the script run as if it were run by the user that owns the script. (Why isn't this the correct solution?)
I would like any user to be able to type the symlink name, which runs the script owned by root in the directory owned by root, and have the directories created by that script stay in its own directory, with the end user having no knowledge of or access to the inner workings of this process (hence owned by root).
Scripts run as the user that runs them; the owner of the file and/or the directory it's in are irrelevant (except that the user needs read and execute permission to the file and directory). Binary executables can have their setuid bit set to make them always run as the file's owner. Old unixes allowed this for scripts as well but this caused a security hole, so setuid is ignored on scripts in modern unixes/Linuxes.
If you need to let regular users run a script as root, there are a couple of other ways to do this. One is to add the script to your /etc/sudoers file, so that users can use sudo to run it as root. WARNING: if you mess up your /etc/sudoers file, it can be hard to recover access to clean it up and get back to normal. Make a backup first, don't edit it with anything except visudo, and I recommend having a root shell open so if something goes wrong you'll have the root access you need to fix it without having to promote via sudo. The line you'll need to add will be something like this:
%everyone ALL=NOPASSWD: /path/to/script
If you want to make this automatic, so that users don't have to explicitly use sudo to run the script, you can start the script like this:
#!/bin/bash
if [[ $EUID -ne 0 ]]; then
    exec sudo "$BASH_SOURCE" "$@"
fi
EDIT: A simpler version occurred to me; rather than having the script re-run itself under sudo, just replace the symlink with a stub script like this:
#!/bin/bash
exec sudo /path/to/real/script "$@"
Note that with this option, the /etc/sudoers entry must refer to the real script's path, not that of the symlink. Also, if the script doesn't take arguments, you can leave the "$@" off. Or keep it; it won't do any harm either way.
If messing with /etc/sudoers sounds too scary, there's another option: you could "compile" the script with shc (which actually just makes a binary executable wrapper around it), and make that wrapper setuid root (chown root /path/to/compiled-script; chmod 4755 /path/to/compiled-script; chown must come first, since changing ownership clears the setuid bit). Since it's a binary wrapper, setuid will work.
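For illustration, the shc route might look roughly like this (a sketch; shc's default output name carries a .x suffix):
shc -f /path/to/script.sh        # produces the binary wrapper /path/to/script.sh.x
chown root /path/to/script.sh.x
chmod 4755 /path/to/script.sh.x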
Good morning everyone! I have a bash script that starts automatically when the system boots, via the .profile file in the user's home directory:
sudo menu.sh
The script starts just as expected; however, when calling things like ssh UN@ADDRESS inside the script, the known_hosts file gets placed in the /root/.ssh directory instead of in the home directory of the user calling the script! I have tried modifying .profile to call 'sudo -E menu.sh' and 'sudo -H menu.sh', but both fail to create the known_hosts file in the home directory of the user calling the script. My /etc/sudoers is as follows:
# Declarations
Defaults env_keep += "HOME USER"
# User privilege specification
root ALL=(ALL) ALL
user ALL=NOPASSWD: ALL
Any help would be appreciated!
Thanks
Dave
UPDATE: As a workaround, I went through the script and added 'sudo -u $USER' before specific calls (since sudo is supposed to keep the $USER env var). This seems like a very bad way of resolving the problem. If sudo is supposed to keep the USER and HOME variables when launching menu.sh, why would I need to explicitly call sudo again as a specific user in order to retain that information (even though sudo is being told to keep it via the /etc/sudoers file)? No clue, but I wanted to update this post for anyone who comes across it until a better solution can be found.
Regarding OpenSSH, the default location for known_hosts is ~/.ssh/known_hosts. Ssh doesn't honor $HOME when expanding a "~" in a filename. It looks up the user's actual home directory and uses that. When you run ssh as root, it's going to interpret that pathname relative to root's home directory no matter what you've set HOME to.
You could try setting the ssh parameter UserKnownHostsFile to the name of the file you'd like to use:
ssh -o UserKnownHostsFile=$HOME/.ssh/known_hosts user@host...
However, you should test this. Ssh might complain about using a file that belongs to another user, and if it has to update the file then the file might end up being owned by root.
Really, you're best off running ssh as the user whose .ssh folder you want ssh to use. Running processes through sudo creates a risk that the user can find a way to do things you didn't intend for them to do. You should limit that risk by using the elevated privileges as little as possible.
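For example, inside a script that was itself started via sudo, the ssh calls can be dropped back to the invoking user; this is a sketch relying on the SUDO_USER variable that sudo sets:
# run ssh as the user who invoked sudo, so ~/.ssh resolves to their home
sudo -u "$SUDO_USER" ssh UN@ADDRESS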
Is it possible to have a bash script invoked with root permissions to run different commands with different privileges?
Right now I have a script which runs a C program with root permissions and creates a folder and some files which I want to have non-root permissions. Looking at the man page, I see that the mkdir command takes a permissions parameter, but I was wondering whether there's a smarter way of doing this.
Have a look at the man pages for the chmod and chown commands. Depending on what you are trying to do, one of these should be your solution.
If you want to change the directory ownership to a user/group other than root, use chown -R user:group [directory] to recursively change ownership. If you just want the permissions changed, but with root still as the owner, then use chmod -R 754 [directory]; keep in mind you will need to adjust the permissions to suit your needs.
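As a sketch of how this can look inside a root-run script (the output path is an assumption, and $SUDO_USER is only set when the script is launched via sudo):
#!/bin/bash
# create the folder as root, then hand it back to the invoking user
outdir=/srv/myprog/output
mkdir -p "$outdir"
chown -R "$SUDO_USER": "$outdir"
chmod -R 754 "$outdir"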
I would like to know the best, correct, and recommended way of doing chown and chmod on website files and folders.
I recently started working on Linux, and I have been doing it in the site root directory like the following:
sudo chown www-data:www-data -R ./
sudo chmod 775 -R ./
I know it is not the best way. There is a protected folder which should not be accessible from browsers and should not be writable, so I did the following to the protected folder:
sudo chown root:root -R protected/
sudo chmod 755 -R protected/
Is it correct? If anything can be improved please let me know.
Read your command again. What you are saying is "make everything executable" below these directories. Does an HTML or GIF file need to be executable? I don't think so.
Regarding the directory which should not be writable by the webserver: think about what you want to do. You want to revoke write access to that directory from the webserver user and the webserver group (and from everybody else anyway), which would translate to chmod -w theDir. What you actually told the system is "I want root to make changes to that directory, which shall be readable by everybody and by the root group". I highly doubt that is what you meant.
So I would suggest having the directory owned by the webserver user with only minimal read access, and having it belong to a group (of users, that is) that is allowed to make the necessary modifications. The webserver does not belong to that group, as you want the outside world to be prevented from making modifications. Another option would be to hand all the directories over to root and to the editor group and control what the webserver can do via the "others" permission bits. Which to use depends heavily on your environment.
Edit:
In general, the "least rights" policy is considered good practice: give away as few rights as possible to get the job done. This means read access to static files (and, depending on your environment, PHP files), read and execute rights for CGI executables, and read and execute rights for directories. Execute rights on a directory allow you to enter and read it. No directory in the document root should ever be writable by the webserver. It is a security risk, even though the developers of some bigger CMSes do not seem to care too much about that. For temporary folders I would set the user and group to nobody:nogroup and set the sticky bit for both user and group.
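Translated into commands, a least-rights setup for a document root might look roughly like this (a sketch; the path, owner, and editors group are assumptions to adapt):
sudo chown -R editor_user:editors /var/www/site          # an editors group owns the files; the webserver is not in it
sudo find /var/www/site -type d -exec chmod 755 {} +     # directories: enter and read for everybody
sudo find /var/www/site -type f -exec chmod 644 {} +     # static and PHP files: read-only, not executable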