I wrote a simple script to automate creating a symbolic link.
#!/bin/bash
# Build today's date-stamped path and point /tmp/today at it.
today="/tmp/$(date +%Y-%m-%d)"
ln -sf "$today" /tmp/today
Simple enough; get today's date and make a symlink. Ideally it runs just after midnight, and with -f it simply updates the link in place.
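For instance, a crontab entry to run it a few minutes after midnight might look like this (the script path is hypothetical):
5 0 * * * /usr/local/bin/update-today-link.sh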
This works just fine! ...for my user.
xkeeper /tmp$ ls -ltr
drwxrwxrwx xkeeper xkeeper 2014-10-21
lrwxrwxrwx xkeeper xkeeper today -> /tmp/2014-10-21/
xkeeper /tmp$ cd today
xkeeper /tmp/today$ cd ..
Notice that it works fine, all the permissions are world-readable, everything looks good.
But if someone else wants to use this link (we'll say, root, but any other user has this problem), something very strange happens:
root /tmp# cd today
bash: cd: today: Permission denied
I am at a complete loss as to why this is. I've also tried creating the links with ln -s -n -f (not that -n, i.e. --no-dereference, is very well explained), but the same issue appears.
Since /tmp usually has the sticky bit set, the access to /tmp/today is denied because of protected_symlinks.
You can disable this protection by setting
sysctl -w fs.protected_symlinks=0
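Note that a value set with sysctl -w does not survive a reboot; to make it persistent, a minimal sketch assuming a sysctl.d-style layout (the file name is arbitrary):
echo 'fs.protected_symlinks = 0' | sudo tee /etc/sysctl.d/99-protected-symlinks.conf
sudo sysctl --system    # reload all sysctl configuration files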
From the kernel documentation on protected_symlinks:
A long-standing class of security issues is the symlink-based
time-of-check-time-of-use race, most commonly seen in world-writable
directories like /tmp. The common method of exploitation of this flaw
is to cross privilege boundaries when following a given symlink (i.e. a
root process follows a symlink belonging to another user). For a likely
incomplete list of hundreds of examples across the years, please see:
http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=/tmp
When set to "0", symlink following behavior is unrestricted.
When set to "1" symlinks are permitted to be followed only when outside
a sticky world-writable directory, or when the uid of the symlink and
follower match, or when the directory owner matches the symlink's owner.
This protection is based on the restrictions in Openwall and grsecurity.
For further details check this.
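Alternatively, since the restriction only applies inside sticky world-writable directories, you can keep the protection enabled and put the symlink somewhere else; a minimal sketch (the /srv path is just an example):
mkdir -p /srv/datelinks
ln -sfn "/tmp/$(date +%Y-%m-%d)" /srv/datelinks/today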
Related
I'm running zsh on a Raspberry Pi 2 (Raspbian Jessie). zsh compinit is complaining about the /tmp directory being insecure. So, I checked the permissions on the directory:
$ compaudit
There are insecure directories:
/tmp
$ ls -ld /tmp
drwxrwxrwt 13 root root 16384 Apr 10 11:17 /tmp
Apparently anyone can do anything in the /tmp directory. Which makes sense, given its purpose. So I tried the suggestions on this stackoverflow question. I also tried similar suggestions on other sites. Specifically, it suggests turning off group write permissions on that directory. Because of how the permissions looked according to ls -ld, I had to turn off the 'all' write permissions as well. So:
$ sudo su
% chmod g-w /tmp
% chmod a-w /tmp
% exit
$ compaudit
# nothing shows up, zsh is happy
This shut zsh up. However, other programs started to break. For example, gnome-terminal would crash whenever I typed the letter 'l'. Because of this, I had to turn the write permissions back on, and just run compinit -u in my .zshrc.
What I want to know: is there any better way to fix this? I'm not sure that it's a great idea to let compinit use an insecure directory. My dotfiles repo is hosted here, and the file where I now run compinit -u is here.
First, the original permissions on /tmp were correct. Make sure you've restored them correctly: ls -ld /tmp must start with drwxrwxrwt. You can use sudo chmod 1777 /tmp to set the correct permissions. /tmp is supposed to be writable by everyone, and any other permissions are highly likely to break things.
compaudit complains about directories in fpath, so one of the directories in your fpath is of the form /tmp/… (not necessarily /tmp itself). Check how fpath is being set. Normally the directories in fpath should be only subdirectories of the zsh installation directory, and places in your home directory. A subdirectory of /tmp wouldn't get in there without something unusual on your part.
If you can't find out where the stray directory is added to fpath, run zsh -x 2>zsh-x.log, and look for fpath in the trace file zsh-x.log.
It can be safe to use a directory under /tmp, but only if you created it securely. The permissions on /tmp allow anybody to create files, but users can only remove or rename their own files (that's what the t at the end of the permissions means). So if a directory is created safely (e.g. with mktemp -d), it's safe to use it in fpath. compaudit isn't sophisticated enough to recognize this case, and in any case it wouldn't have enough information since whether the directory is safe depends on how it was created.
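For example, a directory created with mktemp -d is private to you, so adding it to fpath is safe (a minimal sketch; output is illustrative):
$ dir=$(mktemp -d)          # creates e.g. /tmp/tmp.Xp3q9A with mode 700
$ ls -ld "$dir"
drwx------ 2 you you 4096 Apr 10 11:17 /tmp/tmp.Xp3q9A
$ fpath=("$dir" $fpath)     # zsh: prepend it to fpath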
I need to chown a file to some other user, and make sure it is unwritable afterwards. Sounds complicated, but it will mainly look like this:
cd /readonly
wget ...myfile
cd /workdir
chmod -R 444 /readonly
chown -R anotheruser /readonly
ls /readonly # OK
echo 123 > /readonly/newfile # Should not be allowed
cat /readonly/myfile # OK
chmod 777 /readonly # Should not be allowed
In SunOS I saw something similar to this; I remember not being able to delete files disowned by Apache, but I could not find anything similar in Linux, as chown requires root privileges.
The reason I need this: I will fetch some files from the web and make sure they are unchangeable by the rest of the script; only root should be able to change them. The script definitely cannot run as root.
On many *nixes (Linux, at the very least), this will be impossible.
chown is a privilege restricted to root, since otherwise you could pawn off your files on other users to avoid quota restrictions.
In a related case, it would also pose something of a semantic problem if arbitrary users could chown files to themselves to gain access.
More precisely, you can chown files that you own to change their group ownership information, but you can only change user ownership if you are root.
In any case, chown is the wrong hammer for this particular nail.
chmod, which you are already using, is the correct way to make a file read-only within a script.
The chmod 444 that you are already doing will protect against accidental modifications to the files.
You cannot "freeze" or otherwise render permissions static as a Unix/Linux user without elevating to root privileges (at which point, you can chown them to root:root and no one other than root can change permissions or ownership on them).
In terms of script design, you should not need to be more restrictive than this.
If your script is haphazardly chmoding or rm -fing files, then you have much more serious correctness problems to worry about than ensuring that the downloaded data is safe and sound.
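A minimal sketch of that design, using the paths from the question (the URL is a placeholder):
#!/bin/sh
# Fetch the files, then drop all write permission so the rest of
# the script cannot modify them by accident.
cd /readonly || exit 1
wget -q https://example.com/myfile
chmod -R a-w /readonly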
I have the logins and passwords for two linux users (not root), for example user1 and user2.
How to copy files
from /home/user1/folder1 to /home/user2/folder2, using one single shell script (one single script launching, without manually switching of users).
I think I must use the sudo command, but I haven't found out exactly how.
Just this:
cp -r /home/user1/folder1/ /home/user2/folder2
If you add -p (so cp -pr) it will preserve the attributes of the files (mode, ownership, timestamps).
-r is required to copy the directory recursively, which also picks up hidden files inside it. See How to copy with cp to include hidden files and hidden directories and their contents? for further reference.
sudo cp -a /home/user1/folder1 /home/user2/folder2
sudo chown -R user2:user2 /home/user2/folder2
cp -a: archive mode (copies recursively and preserves mode, ownership, timestamps)
chown -R: act recursively
Copies the files and then gives permissions to user2 to be able to access them.
Copies all files including dot files, all sub-directories and does not require directory /home/user2/folder2 to exist prior to the command.
(shopt -s dotglob; cp -a /home/user1/folder1/* /home/user2/folder2/)
Will copy all files (including those starting with a dot) using the standard cp. The /folder2/ should exist, otherwise the results can be nasty.
Often using a packing tool like tar can be of help as well:
cd /home/user1/folder1
tar cf - . | (cd /home/user2/folder2; tar xf -)
I think you need to use this command:
sudo -u username cp /path1/file1 /path2/file2
This lets you copy the file as a particular user from any file path.
PS: The parent directory should at least be listable in order to copy files from it.
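Applied to the paths in the question, that would look something like this (assuming user2 can read /home/user1/folder1 and that /home/user2/folder2 exists):
sudo -u user2 cp -r /home/user1/folder1/. /home/user2/folder2/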
Just to add to the answer by fedorqui 'SO stop harming':
I had this same challenge when I tried to change the default admin user for a server from stage_user to prod_user on an Ubuntu 20.04 machine:
First, I created a prod_user using the command below:
sudo adduser prod_user
And then I added the newly created prod_user to the sudo group:
sudo adduser prod_user sudo
Next, I copied all the directories that I needed from the home directory of the stage_user to the prod_user:
sudo cp -r /home/stage_user/folder1/ /home/prod_user/
Next, I changed the ownership of the copied folders from stage_user to prod_user to avoid permission issues:
sudo chown -R prod_user:prod_user /home/prod_user/folder1
That's all.
I hope this helps
The question has to do with permissions across users.
I believe the default home directory permissions allow everyone to list and change into another user's home:
e.g. drwxr-xr-x
Hence the previous answers may not have run into what you encountered.
With more restrictive settings, like what I had on my web host, non-owner users cannot do anything:
e.g. drwx------
Even if you use su/sudo and become the other user, you can still only be ONE user at a time, so when you copy the file back, the same problem of insufficient permissions still applies.
So... use scp instead: treat the whole thing like a network environment, let me put it that way, and that's it. By the way, this question has already been answered over here (https://superuser.com/questions/353565/how-do-i-copy-a-file-folder-from-another-users-home-directory-in-linux); I only cared to reply because this ranked first in my search results.
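For example (a sketch; this assumes an SSH server is running locally, and scp will prompt for user2's password):
scp -r /home/user1/folder1 user2@localhost:/home/user2/folder2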
I am using Cygwin and trying to change the group access permission with chmod, e.g.
$ls -l id_rsa
-rwxrwxr-- 1 None 1679 Jun 13 10:16 id_rsa
$ chmod g= id_rsa
$ ls -l id_rsa
-rwxrwxr-- 1 None 1679 Jun 13 10:16 id_rsa
But this does not work. I can change the permissions for user and others; it seems the group permissions somehow stay the same as the user's?
I was having a similar problem to you, and I was using the NTFS filesystem, so Keith Thompson's answer didn't solve it for me.
I changed the file's group owner to the Users group:
chown :Users filename
After doing that I was able to change the group permissions to my will using chmod. In my case, since it was an RSA key for OpenSSH, I did:
chmod 700 filename
And it worked. In Cygwin you get two groups by default, the Root group and the Users group. I wanted to add another group, but I wasn't able to do it with the tools I'm used to using on Linux. For that reason I just used the Users group.
Cygwin doesn't like files to be owned by groups that it doesn't know.
Unfortunately, that happens quite often in Cygwin, especially if your PC is in a Windows domain where things keep changing.
I also synchronise my files between two PCs, via an external drive, and the uids/gids are different between the different PCs, so this is a source of problems.
If you do ls -l and see a numeric group id instead of a group name, it means Cygwin doesn't know the gid - i.e. it's not in /etc/group, and Cygwin can't query it from Windows either. You can confirm this by running getent group <gid>, where <gid> is the numeric group id.
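The symptom looks something like this (the gid shown is illustrative):
$ ls -l somefile
-rw-r--r-- 1 myuser 1049120 0 Jun 13 10:16 somefile
$ getent group 1049120
$ # no output: the gid cannot be resolved to a name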
To fix it, you can either use chgrp to change the group for all affected files/directories, as described in the accepted answer above, or create an entry for the unknown gid in /etc/group, with any unused group name (e.g. Users2).
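For the second option, Cygwin /etc/group lines have the form name:SID:gid: (as generated by mkgroup); a hypothetical entry for an unknown gid might look like:
Users2:S-1-5-21-1111111111-2222222222-3333333333-1001:1049120: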
After doing this, it may be necessary to close all of your Cygwin windows and then re-open them.
An experiment shows that chmod does work correctly to change group permissions under Cygwin.
The experiment used a file on an NTFS partition. Cygwin implements a POSIX layer on top of Windows, but it still ultimately uses the features of Windows itself, and of the particular filesystem implementation.
On modern versions of Windows, most hard drives are formatted to use NTFS, which provides enough support for chmod. But external USB drives typically use FAT32, which doesn't have the same abilities to represent permissions. The Cygwin layer fakes POSIX semantics as well as it can, but there's only so much it can do.
Try
$ df -T .
If it indicates that you're using a FAT32 filesystem, that's probably the problem. The solution would be to store the file on an NTFS filesystem instead. A file named id_rsa is probably an SSH private key, and it needs to be stored in $HOME/.ssh anyway.
Is your home directory on a FAT32 partition? As I recall, recent versions of Windows ("recent" meaning the last 10 or more years) are able to convert FAT32 filesystems to NTFS.
The remainder of this answer was in response to the original version of the question, which had a typo in the chmod command.
Cygwin uses the GNU Coreutils version of chmod. This,
chmod g=0 fileName
is not the correct syntax. I get:
$ chmod g=0 fileName
chmod: invalid mode: `g=0'
Try `chmod --help' for more information.
(This is on Linux, not Cygwin, but it should be the same.)
To turn off all group permissions, this should work:
$ chmod g= fileName
$ ls -l fileName
-rw----r-- 1 kst kst 0 Jun 13 10:31 fileName
To see the chmod documentation:
$ info coreutils chmod
To see the documentation on symbolic file mode:
$ info coreutils Symbolic
The format of symbolic modes is:
[ugoa...][+-=]PERMS...[,...]
where PERMS is either zero or more letters from the set 'rwxXst', or a
single letter from the set 'ugo'.
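A few examples of that syntax:
$ chmod g= fileName        # remove all group permissions
$ chmod g=r fileName       # group gets read only
$ chmod u=rw,go= fileName  # user rw, nothing for group and others
$ chmod g=u fileName       # copy the user's permissions to the group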
As previous answers have noted, unrecognized groups cause such issues. This mostly happens in Windows domains.
The easiest way to fix it is to regenerate your /etc/passwd and /etc/group files (the -d parameter is needed for domain users):
mkpasswd -l -d > /etc/passwd
mkgroup -l -d > /etc/group
Close and launch Cygwin again.
This is a very annoying issue for me. In my case user135348's solution worked best. The biggest issue with the chown -R :Users approach is that every time a new file is created, it is assigned to the unknown gid 1049120; it's very frustrating to keep changing file gids.
I tried mkgroup too, but in my case it didn't work: My gid is 1049120.
Based on the rules explained in Mapping Windows SIDs to POSIX uid/gid values, the 0x100000 offset is used for accounts from the machine's primary domain.
Subtracting that offset from 1049120 gives 544, which is the built-in Administrators group's RID.
This account is not a member of the local Administrators group; we use SuRun to grant administrator rights without giving out credentials. In this case, mkgroup failed to generate all the possible gids.
Editing the group file and adding a customized group name seems always to fix the issue easily.
I had this issue when working remotely from the Domain and using cygserver.
Running ls -l showed a numeric group id instead of a group name.
I stopped cygserver with net stop "CYGWIN cygserver", along with other Cygwin processes, then ran ls -l again, and the group names were displayed correctly.
I guess cygserver was holding incomplete domain group information.
After restarting cygserver the system continued to work correctly.
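For reference, the stop/start pair (using the service name mentioned above):
net stop "CYGWIN cygserver"
net start "CYGWIN cygserver"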
#!/bin/bash
# Normalize permissions via ACLs: rwxr-xr-x for directories, rw-r--r-- for files.
find . | while IFS= read -r obj; do
    if [[ -d "$obj" ]]; then
        setfacl --set "user::rwx,group::r-x,other::r-x" "$obj"
    elif [[ -f "$obj" ]]; then
        setfacl --set "user::rw-,group::r--,other::r--" "$obj"
    fi
done
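For what it's worth, the same normalization can be written without the while loop, using find -exec (which also avoids any issues with unusual file names):
find . -type d -exec setfacl --set "user::rwx,group::r-x,other::r-x" {} +
find . -type f -exec setfacl --set "user::rw-,group::r--,other::r--" {} +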
You must specify the group name on the Windows system to which your user belongs.
So I just did this:
chown -R ONEX:Users ~/*
You can find your user name and group with the id command.
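For example (output is illustrative):
$ id
uid=1000(ONEX) gid=545(Users) groups=545(Users)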
Please let me explain what I mean by the question:
This is the context: I'm a user on a webserver, where I have phpicalendar installed; then, I choose a directory, say /webroot/mylogin/phpicalendar/mycals to host my .ics calendar text files.
EDIT: Previously, instead of '/webroot', I had used '/root' - but I really didn't mean the Linux '/root' directory; I just wanted to use it as a stand-in for the real location on the webserver (so it serves as a common point of reference). By a common point of reference, I simply mean /webroot = /media/some/path ..
Then, I can enter this directory in the phpicalendar's config.inc.php:
$configs = array(
'calendar_path' => '/webroot/mylogin/phpicalendar/mycals',
...
Then, phpicalendar will run through this directory, grab the .ics files there (say, mycal.ics and mycal2.ics) and render them - so far, so good.
The thing is, I would now like to add a second calendar directory, located on the same webserver, but where I have read-only permissions, say /webroot/protected/cals. I know that I have read permission, because in the shell I can do, say,
$ less /webroot/protected/cals/maincal.ics
and I can read the contents fine.. So now:
If I enter /webroot/protected/cals as a 'calendar_path', phpicalendar can read and render the files there (say, 'maincal.ics', 'maincal2.ics') without a problem
However, phpicalendar can have only one 'calendar_path', so I can either use the protected calendars, or my customized calendars - but not both
So, I thought, I could symlink the protected calendars in my customized directory - and get the best of both worlds :)
So, here is a shell snippet of what I would do
$ cd /webroot/mylogin/phpicalendar/mycals
$ ls -la
drwxrwxrwx 2 myself myself 4096 2011-03-03 12:50 .
-rw-r--r-- 1 myself myself 1234 2011-01-20 07:32 mycal.ics
-rw-r--r-- 1 myself myself 1234 2011-01-20 07:32 mycal2.ics
...
$ ln /webroot/protected/cals/maincal.ics . # try a hard link first
ln: creating hard link `./maincal.ics' => `/webroot/protected/cals/maincal.ics': Invalid cross-device link
$ ln -s /webroot/protected/cals/maincal.ics . # symlink - works
$ ln -s ../../../protected/cals/maincal.ics relmaincal.ics # symlink via relative
$ ln -s mycal.ics testcal.ics # try a symlink to a local file
$ ls -la # check contents of dir now
drwxrwxrwx 2 myself myself 4096 .
-rw-r--r-- 1 myself myself 1234 mycal.ics
-rw-r--r-- 1 myself myself 1234 mycal2.ics
lrwxrwxrwx 1 myself myself 21 testcal.ics -> mycal.ics
lrwxrwxrwx 1 myself myself 56 maincal.ics -> /webroot/protected/cals/maincal.ics
lrwxrwxrwx 1 myself myself 66 relmaincal.ics -> ../../../protected/cals/maincal.ics
Ok, so here's what happens:
less maincal.ics works on shell
less relmaincal.ics fails with 'relmaincal.ics: No such file or directory' (even though shell autocompletion for the relative path worked when I created the symlink!)
When you open phpicalendar now, it will render mycal.ics, mycal2.ics and testcal.ics (and they will work)
however, maincal.ics and relmaincal.ics will not be parsed or displayed
Now - this could be that PHP cannot resolve symlinks; however I speculate that the situation is this:
When I do less maincal.ics - it is myself who is user, who has read permission for /webroot/protected/cals
phpicalendar (so Apache webserver user) can otherwise also access /webroot/protected/cals as read-only, when given 'hardcoded' path
phpicalendar is also capable of reading local symlinks fine
Thus, I suspect the problem is this: when trying to read the symlinks to protected cals, the user performing the operation is the Apache web user, which doesn't get permission to follow a symlink into the protected/cals location!
The thing now is - I can easily copy the .ics files locally; however they are being changed by someone else, which is why I'd have preferred a symlink.
And my question is: can I do some sort of trickery, so that when phpicalendar/Apache tries to access a symlink to protected/cals, it 'thinks' it is a local file - with the contents of the protected/cals file being 'piped' back to phpicalendar/Apache? I guess I'm thinking of something along the lines of:
$ mkfifo mypipe
$ ln -s mypipe testpipe.ics
$ cat ./testpipe.ics # in one terminal
$ cat /webroot/protected/cals/maincal.ics > mypipe # in other terminal
... which would otherwise (I think) handle the permissions problem - except that I don't want to cat manually; that would have to happen in the background, each time an application requests to read testpipe.ics :)
Well, thanks in advance for any comments on this - looking forward to hearing some,
Cheers!
Umm, I really doubt that the account the web server runs under can read anything under /root. That directory is usually mode 0700, user root, group root, or something very similar to that - meaning no non-root access is allowed. If you're running the web server as root, file read permissions are the least of your problems...
Your best bet then would be to place the read-only calendar files somewhere publicly available, and symlink to that location from wherever under /root you want to be able to access them.
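A hypothetical arrangement along those lines (all paths are examples):
# as root: publish the read-only calendars somewhere world-readable
mkdir -p /var/www/shared-cals
cp /root/protected/cals/*.ics /var/www/shared-cals/
chmod 644 /var/www/shared-cals/*.ics
# as the site owner: symlink them into the phpicalendar calendar directory
ln -s /var/www/shared-cals/maincal.ics /webroot/mylogin/phpicalendar/mycals/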
Start by checking whether the Apache user can view your calendars:
you#host $ sudo -u <apache-user> /bin/bash
apache#host $ less /root/protected/cals/maincal.ics