zsh compinit: insecure directories. Compaudit shows /tmp directory - linux

I'm running zsh on a Raspberry Pi 2 (Raspbian Jessie). zsh compinit is complaining about the /tmp directory being insecure. So, I checked the permissions on the directory:
$ compaudit
There are insecure directories:
/tmp
$ ls -ld /tmp
drwxrwxrwt 13 root root 16384 Apr 10 11:17 /tmp
Apparently anyone can do anything in the /tmp directory, which makes sense given its purpose. So I tried the suggestions on this Stack Overflow question, and similar suggestions on other sites. Specifically, it suggests turning off group write permissions on that directory. Because of how the permissions looked according to ls -ld, I had to turn off the 'all' write permissions as well. So:
$ sudo su
% chmod g-w /tmp
% chmod a-w /tmp
% exit
$ compaudit
# nothing shows up, zsh is happy
This shut zsh up. However, other programs started to break. For example, gnome-terminal would crash whenever I typed the letter 'l'. Because of this, I had to turn the write permissions back on, and just run compinit -u in my .zshrc.
What I want to know: is there any better way to fix this? I'm not sure that it's a great idea to let compinit use an insecure directory. My dotfiles repo is hosted here, and the file where I now run compinit -u is here.

First, the original permissions on /tmp were correct. Make sure you've restored them correctly: ls -ld /tmp must start with drwxrwxrwt. You can use sudo chmod 1777 /tmp to set the correct permissions. /tmp is supposed to be writable by everyone, and any other permissions are highly likely to break things.
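A quick way to double-check the result (a sketch; stat -c with these format specifiers is the GNU coreutils form):
$ stat -c '%a %U:%G' /tmp   # should print: 1777 root:root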
compaudit complains about directories in fpath, so one of the directories in your fpath is of the form /tmp/… (not necessarily /tmp itself). Check how fpath is being set. Normally the directories in fpath should be only subdirectories of the zsh installation directory, and places in your home directory. A subdirectory of /tmp wouldn't get in there without something unusual on your part.
If you can't find out where the stray directory is added to fpath, run zsh -x 2>zsh-x.log, and look for fpath in the trace file zsh-x.log.
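For example (a sketch, using the trace file name from the command above), you can narrow the trace down to the lines that touch fpath, and print the current value from a running shell:
$ grep -n fpath zsh-x.log
$ print -l $fpath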
It can be safe to use a directory under /tmp, but only if you created it securely. The permissions on /tmp allow anybody to create files, but users can only remove or rename their own files (that's what the t at the end of the permissions means). So if a directory is created safely (e.g. with mktemp -d), it's safe to use it in fpath. compaudit isn't sophisticated enough to recognize this case, and in any case it wouldn't have enough information since whether the directory is safe depends on how it was created.
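For what it's worth, a minimal sketch of that safe pattern (the variable name is arbitrary; mktemp -d creates a mode-700 directory owned by you):
dir=$(mktemp -d)        # private temporary directory, e.g. /tmp/tmp.XXXXXXXXXX
fpath=($dir $fpath)     # safe to add: nobody else can write into it
autoload -Uz compinit && compinit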

Related

path /tmp does not correspond to a regular file

this happens when I have
an executable that is in the /tmp directory (say /tmp/a.out)
it is run by a root shell
linux
selinux on (default for RedHat, CentOS, etc)
Apparently trying to run an executable that sits in the /tmp directory as root revokes the privileges. Any idea how to get around this issue, other than turning off SELinux? Thanks
You can set a file context on the binary (or on the directory containing it) in /tmp that you want to run:
sudo semanage fcontext -a -t bin_t /tmp/location
Then restorecon:
sudo restorecon -vR /tmp/location
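To verify that the new context took effect (the path is the same placeholder used above):
$ ls -Z /tmp/location   # the type field should now show bin_t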
Just have a look at the mount options for the /tmp directory; most probably you have the noexec option on it (there are many security reasons for doing that, the first being that anyone can put a file in the /tmp directory).
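A quick way to check (a sketch; findmnt ships with util-linux on most distributions):
$ findmnt -no OPTIONS /tmp   # look for noexec; prints nothing if /tmp is not a separate mount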

Symlink giving "Permission denied"... to root

I wrote a simple script to automate creating a symbolic link.
#!/bin/sh
# make /tmp/today point at today's dated directory
today="/tmp/$(date +%Y-%m-%d)"
ln -sfn "$today" /tmp/today
Simple enough; get today's date and make a symlink. Ideally run after midnight with -f so it just updates it in-place.
This works just fine! ...for my user.
xkeeper /tmp$ ls -ltr
drwxrwxrwx xkeeper xkeeper 2014-10-21
lrwxrwxrwx xkeeper xkeeper today -> /tmp/2014-10-21/
xkeeper /tmp$ cd today
xkeeper /tmp/today$ cd ..
Notice that it works fine, all the permissions are world-readable, everything looks good.
But if someone else wants to use this link (we'll say, root, but any other user has this problem), something very strange happens:
root /tmp# cd today
bash: cd: today: Permission denied
I am at a complete loss as to why this is. I've also tried creating the links with ln -s -n -f (not that --no-dereference is very well explained), but the same issue appears.
Since /tmp usually has the sticky bit set, the access to /tmp/today is denied because of protected_symlinks.
You can disable this protection by setting
sysctl -w fs.protected_symlinks=0
protected_symlinks:
A long-standing class of security issues is the symlink-based
time-of-check-time-of-use race, most commonly seen in world-writable
directories like /tmp. The common method of exploitation of this flaw
is to cross privilege boundaries when following a given symlink (i.e. a
root process follows a symlink belonging to another user). For a likely
incomplete list of hundreds of examples across the years, please see:
http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=/tmp
When set to "0", symlink following behavior is unrestricted.
When set to "1" symlinks are permitted to be followed only when outside
a sticky world-writable directory, or when the uid of the symlink and
follower match, or when the directory owner matches the symlink's owner.
This protection is based on the restrictions in Openwall and grsecurity.
For further details check this.
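If you do decide to disable the protection permanently, the usual way to make a sysctl setting persist across reboots (assuming a standard /etc/sysctl.d layout; the file name is arbitrary) is roughly:
echo 'fs.protected_symlinks = 0' | sudo tee /etc/sysctl.d/99-protected-symlinks.conf
sudo sysctl --system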

Chown a specific folder without root privileges

I need to chown a file to some other user, and make sure my user cannot change it back afterwards. Sounds complicated, but it will mainly look like this:
cd /readonly
wget ...myfile
cd /workdir
chmod -R 444 /readonly
chown -R anotheruser /readonly
ls /readonly # OK
echo 123 > /readonly/newfile # Should not be allowed
cat /readonly/myfile # OK
chmod 777 /readonly # Should not be allowed
In SunOS I saw something similar to this; I remember not being able to delete the files that had been disowned to Apache. But I could not find something similar in Linux, as chown requires root privileges.
The reason I need this: I will fetch some files from the web and make sure they are unchangeable by the rest of the script; only root can change them. The script definitely cannot run as root.
On many *nixes (Linux, at the very least), this will be impossible.
chown is a privilege restricted to root, since otherwise you could pawn off your files on other users to avoid quota restrictions.
In a related case, it would also pose something of a semantic problem if arbitrary users could chown files to themselves to gain access.
More precisely, you can chown files that you own to change their group ownership information, but you can only change user ownership if you are root.
In any case, chown is the wrong hammer for this particular nail.
chmod, which you are already using, is the correct way to make a file read-only within a script.
The chmod 444 that you are already doing will protect against accidental modifications to the files.
You cannot "freeze" or otherwise render permissions static as a Unix/Linux user without elevating to root privileges (at which point, you can chown them to root:root and no one other than root can change permissions or ownership on them).
In terms of script design, you should not need to be more restrictive than this.
If your script is haphazardly chmoding or rm -fing files, then you have much more serious correctness problems to worry about than ensuring that the downloaded data is safe and sound.
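A minimal sketch of that chmod-only approach, assuming the script runs as the unprivileged user (the /readonly path is the question's own; the URL is a placeholder):
mkdir -p /readonly
wget -q -P /readonly https://example.com/myfile   # placeholder URL
chmod -R a-w /readonly                            # strip write bits from the files and the directory itself
This keeps the files readable and the directory traversable, but the rest of the script can no longer create or modify anything under /readonly without deliberately chmod-ing it back.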

Copy files from one user home directory to another user home directory in Linux

I have the logins and passwords for two linux users (not root), for example user1 and user2.
How can I copy files from /home/user1/folder1 to /home/user2/folder2 using one single shell script (one single script launch, without manually switching users)?
I think I must use a sudo command, but I haven't found out how exactly.
Just this:
cp -r /home/user1/folder1/ /home/user2/folder2
If you add -p (so cp -pr) it will preserve the attributes of the files (mode, ownership, timestamps).
-r is required to copy the directory recursively, which includes hidden files as well. See How to copy with cp to include hidden files and hidden directories and their contents? for further reference.
sudo cp -a /home/user1/folder1 /home/user2/folder2
sudo chown -R user2:user2 /home/user2/folder2
cp -a: archive mode
chown -R: act recursively
Copies the files and then gives permissions to user2 to be able to access them.
Copies all files including dot files, all sub-directories and does not require directory /home/user2/folder2 to exist prior to the command.
(shopt -s dotglob; cp -a /home/user1/folder1/* /home/user2/folder2/)
Will copy all files (including those starting with a dot) using the standard cp. The /folder2/ should exist, otherwise the results can be nasty.
Often using a packing tool like tar can be of help as well:
cd /home/user1/folder1
tar cf - . | (cd /home/user2/folder2; tar xf -)
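If the permissions on the two home directories get in the way (see the later answers about cross-user access), the same pipe can be wrapped in sudo; a sketch:
sudo sh -c 'cd /home/user1/folder1 && tar cf - . | (cd /home/user2/folder2 && tar xf -)'
The target directory still has to exist, and since root extracts with --same-owner by default, you will probably still want a chown -R user2:user2 /home/user2/folder2 afterwards.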
I think you need to use this command:
sudo -u username cp /path1/file1 /path2/file2
This command allows you to copy the contents as a particular user from any file path.
PS: The parent directory should be list-able at least in order to copy files from it.
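Applied to the question's layout, that might look like the sketch below; it assumes your account has the relevant sudo rights and that user2 can read the source directory:
sudo -u user2 cp -r /home/user1/folder1/. /home/user2/folder2/   # the target folder must already exist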
Just to add to the answer from fedorqui 'SO stop harming':
I had this same challenge when I tried to change the default admin user for a server from stage_user to prod_user on an Ubuntu 20.04 machine:
First, I created a prod_user using the command below:
sudo adduser prod_user
And then I added the newly created prod_user to the sudo group:
sudo adduser prod_user sudo
Next, I copied all the directories that I needed from the home directory of the stage_user to the prod_user:
sudo cp -r /home/stage_user/folder1/ /home/prod_user/
Next, I changed the ownership of the copied folders from stage_user to prod_user to avoid permission issues:
sudo chown prod_user:prod_user /home/prod_user/folder1
That's all.
I hope this helps
The question has to do with permissions across users.
I believe the default home directory permissions do allow other users to list it and change their working directory into it:
e.g. drwxr-xr-x
Hence the previous answers may not have covered what you actually ran into.
With more restricted settings, like what I had on my web host, non-owner users cannot do anything:
e.g. drwx------
Even if you use su/sudo and become the other user, you can still only be ONE USER at a time, so when you copy the file back, the same problem of not having enough permissions still applies.
So... use scp instead; treat the whole thing like a network environment, let me put it that way, and that's it. By the way, this question has already been answered once over here (https://superuser.com/questions/353565/how-do-i-copy-a-file-folder-from-another-users-home-directory-in-linux); I only cared to reply because this ranked first in my search results.
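A minimal sketch of the scp route, run as user2 (it assumes an SSH server is listening locally and that you know user1's password or have a key set up):
scp -r user1@localhost:/home/user1/folder1 /home/user2/folder2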

rsync over SSH preserve ownership only for www-data owned files

I am using rsync to replicate a web folder structure from a local server to a remote server. Both servers are ubuntu linux. I use the following command, and it works well:
rsync -az /var/www/ user@10.1.1.1:/var/www/
The usernames for the local system and the remote system are different. From what I have read it may not be possible to preserve all file and folder owners and groups. That is OK, but I would like to preserve owners and groups just for the www-data user, which does exist on both servers.
Is this possible? If so, how would I go about doing that?
** EDIT **
There is some mention of rsync being able to preserve ownership and groups on remote file syncs here: http://lists.samba.org/archive/rsync/2005-August/013203.html
** EDIT 2 **
I ended up getting the desired effect thanks to many of the helpful comments and answers here. Assuming the IP of the source machine is 10.1.1.2 and the IP of the destination machine is 10.1.1.1, I can use this line from the destination machine:
sudo rsync -az user@10.1.1.2:/var/www/ /var/www/
This preserves the ownership and groups of the files that have a common user name, like www-data. Note that using rsync without sudo does not preserve these permissions.
You can also sudo the rsync on the target host by using the --rsync-path option:
# rsync -av --rsync-path="sudo rsync" /path/to/files user@targethost:/path
This lets you authenticate as user on targethost, but still get privileged write permission through sudo. You'll have to modify your sudoers file on the target host to avoid sudo's request for your password. man sudoers or run sudo visudo for instructions and samples.
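For example, a sudoers entry created with visudo might look like the line below (a sketch; adjust the user name and the rsync path for your system):
user ALL=(ALL) NOPASSWD: /usr/bin/rsync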
You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you implement chown or a second run of rsync to update permissions. There is no way to tell rsync to preserve ownership for just one user.
That said, you should read about rsync's --files-from option.
rsync -av /path/to/files user@targethost:/path
find /path/to/files -user www-data -print | \
rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path
I haven't tested this, so I'm not sure exactly how piping find's output into --files-from=- will work. You'll undoubtedly need to experiment.
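One detail worth knowing: the paths read from --files-from are interpreted relative to the source argument, so a variant that runs find from inside the source directory (equally untested, so treat it as a sketch) may behave more predictably:
cd /path/to/files
find . -user www-data -print | \
rsync -av --files-from=- --rsync-path="sudo rsync" . user@targethost:/path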
As far as I know, you cannot chown files to somebody other than yourself if you are not root. So you would have to rsync using the www-data account, as all files will be created with the connecting user as owner; otherwise you will need to chown the files afterwards.
The root users for the local system and the remote system are different.
What does this mean? The root user is uid 0. How are they different?
Any user with read permission to the directories you want to copy can determine what usernames own what files. Only root can change the ownership of files being written.
You're currently running the command on the source machine, which restricts your writes to the permissions associated with user@10.1.1.1. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.
So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:
# rsync -az user@10.1.1.2:/var/www/ /var/www/
Make sure your groups match on both machines.
Also, set up access to user@10.1.1.2 using a DSA or RSA key, so that you can avoid having passwords floating around. For example, as root on your target machine, run:
# ssh-keygen -d
Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can ssh user@10.1.1.2 as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.
I had a similar problem and cheated with the rsync command:
rsync -avz --delete root@x.x.x.x:/home//domains/site/public_html/ /home/domains2/public_html && chown -R wwwusr:wwwgrp /home/domains2/public_html/
The && runs the chown against the folder when the rsync completes successfully (a single '&' would run the chown regardless of the rsync completion status).
Well, you could skip the challenges of rsync altogether, and just do this through a tar tunnel.
sudo tar zcf - /path/to/files | \
ssh user@remotehost "cd /some/path; sudo tar zxf -"
You'll need to set up your SSH keys as Graham described.
Note that this handles full directory copies, not incremental updates like rsync.
The idea here is that:
you tar up your directory,
instead of creating a tar file, you send the tar output to stdout,
that stdout is piped through an SSH command to a receiving tar on the other host,
but that receiving tar is run by sudo, so it has the privileged write access needed to set file ownership.
rsync version 3.1.2
I mostly use Windows locally, so this is the command line I use to sync files with the (Debian) server:
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website
