Chown a specific folder without root privileges - linux

I need to chown a file to some other user and make sure it cannot be changed back afterwards. It sounds complicated, but it will mainly look like this:
cd /readonly
wget ...myfile
cd /workdir
chmod -R 444 /readonly
chown -R anotheruser /readonly
ls /readonly # OK
echo 123 > /readonly/newfile # Should not be allowed
cat /readonly/myfile # OK
chmod 777 /readonly # Should not be allowed
On SunOS I saw something similar to this; I remember not being able to delete files disowned by Apache. But I could not find anything like it on Linux, as chown requires root privileges.
The reason I need this: I will fetch some files from the web and make sure they are unchangeable by the rest of the script; only root should be able to change them. The script definitely cannot run as root.

On many *nixes (Linux, at the very least), this will be impossible.
chown is a privilege restricted to root, since otherwise you could pawn off your files on other users to avoid quota restrictions.
In a related case, it would also pose something of a semantic problem if arbitrary users could chown files to themselves to gain access.
More precisely, you can chown files that you own to change their group ownership information, but you can only change user ownership if you are root.
In any case, chown is the wrong hammer for this particular nail.
chmod, which you are already using, is the correct way to make a file read-only within a script.
The chmod 444 that you are already doing will protect against accidental modifications to the files.
You cannot "freeze" or otherwise render permissions static as a Unix/Linux user without elevating to root privileges (at which point, you can chown them to root:root and no one other than root can change permissions or ownership on them).
In terms of script design, you should not need to be more restrictive than this.
If your script is haphazardly chmoding or rm -fing files, then you have much more serious correctness problems to worry about than ensuring that the downloaded data is safe and sound.
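A minimal sketch of what the non-root script itself can do (the URL is a placeholder; a-w is used here instead of 444 so the directory keeps its execute bit and stays traversable):
# Download into the target directory, then drop the write bits
cd /readonly
wget https://example.com/myfile        # placeholder URL
chmod -R a-w /readonly                 # like 444 for files, but keeps r-x on the directory itself
cat /readonly/myfile                   # OK: still readable
echo 123 > /readonly/newfile           # fails: the directory is no longer writable
# The catch: as the owner, the script could still run 'chmod -R u+w /readonly' and undo this.
# Only root (e.g. by chowning the tree to root) can make that impossible.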

Related

What permissions can I give my logs for Laravel to be able to read from them?

I am trying to display my logs on my website to verified users in Laravel, based on my role-based access control.
$file = fopen("/var/log/auth.log", "r") or die();
$content = fread($file, filesize("/var/log/auth.log"));
fclose($file);
This hits me with an error:
fopen(/var/log/auth.log): failed to open stream: Permission denied
I can see that Laravel does not have the correct read permissions for this file, and I do not want to do a typical chmod -R 777 for security reasons. I am using nginx, but Laravel executes with php-fpm.
What user/group does my site execute as? What permissions should I give that user/group on my log files?
Try:
chown {your_user}:nginx /var/log/auth.log
chmod ug+rwx /var/log/auth.log
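To work out which user and group your php-fpm workers actually run as, a sketch like this should help (the pool config path is a Debian/Ubuntu-style assumption; adjust it for your distribution):
# Show the user and group of the running php-fpm worker processes
ps -o user,group,cmd -C php-fpm
# Or check the pool configuration directly
grep -E '^(user|group)\s*=' /etc/php/*/fpm/pool.d/www.conf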
For this situation it is strongly recommended not to set permissions like chmod 0777 (the same as 777), chmod 0755 (the same as 755) or anything of that sort, in order to avoid security vulnerabilities.
The files that the web server uses end up under your "storage" directory. You can simply change their owner to the web-server user (Apache or Nginx); in a lot of cases it is "www-data".
Also don't forget the bootstrapped cache files (configurations, services and packages) under the "bootstrap/cache" directory.
sudo chown -R www-data:www-data storage/ bootstrap/cache/
After this, when you want to run some artisan commands, you can run them with sudo, or simply make the current user the owner:
sudo chown -R $USER:$USER storage/ bootstrap/cache/
And after running your command(s) you can revert the owner back to the "www-data" user (the first command).
The advantage of this method is that it will not be tracked by your version control system.
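Putting the two commands together, a typical maintenance flow might look like this (a sketch; the artisan command is just an example):
# Take ownership while you work
sudo chown -R $USER:$USER storage/ bootstrap/cache/
php artisan config:cache               # example artisan command
# Hand the directories back to the web-server user
sudo chown -R www-data:www-data storage/ bootstrap/cache/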

zsh compinit: insecure directories. Compaudit shows /tmp directory

I'm running zsh on a Raspberry Pi 2 (Raspbian Jessie). zsh compinit is complaining about the /tmp directory being insecure. So, I checked the permissions on the directory:
$ compaudit
There are insecure directories:
/tmp
$ ls -ld /tmp
drwxrwxrwt 13 root root 16384 Apr 10 11:17 /tmp
Apparently anyone can do anything in the /tmp directory, which makes sense given its purpose. So I tried the suggestions in this Stack Overflow question. I also tried similar suggestions on other sites. Specifically, it suggests turning off group write permissions on that directory. Because of how the permissions looked according to ls -ld, I had to turn off the 'all' write permissions as well. So:
$ sudo su
% chmod g-w /tmp
% chmod a-w /tmp
% exit
$ compaudit
# nothing shows up, zsh is happy
This shut zsh up. However, other programs started to break. For example, gnome-terminal would crash whenever I typed the letter 'l'. Because of this, I had to turn the write permissions back on, and just run compinit -u in my .zshrc.
What I want to know: is there any better way to fix this? I'm not sure that it's a great idea to let compinit use an insecure directory. My dotfiles repo is hosted here, and the file where I now run compinit -u is here.
First, the original permissions on /tmp were correct. Make sure you've restored them correctly: ls -ld /tmp must start with drwxrwxrwt. You can use sudo chmod 1777 /tmp to set the correct permissions. /tmp is supposed to be writable by everyone, and any other permissions are highly likely to break stuff.
compaudit complains about directories in fpath, so one of the directories in your fpath is of the form /tmp/… (not necessarily /tmp itself). Check how fpath is being set. Normally the directories in fpath should be only subdirectories of the zsh installation directory, and places in your home directory. A subdirectory of /tmp wouldn't get in there without something unusual on your part.
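A quick way to see whether a /tmp path has crept into fpath (a sketch; run it from an interactive zsh):
# Print every fpath entry on its own line and flag anything under /tmp
print -rl -- $fpath | grep '^/tmp'
# Ask compaudit which directories it considers insecure
compaudit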
If you can't find out where the stray directory is added to fpath, run zsh -x 2>zsh-x.log, and look for fpath in the trace file zsh-x.log.
It can be safe to use a directory under /tmp, but only if you created it securely. The permissions on /tmp allow anybody to create files, but users can only remove or rename their own files (that's what the t at the end of the permissions means). So if a directory is created safely (e.g. with mktemp -d), it's safe to use it in fpath. compaudit isn't sophisticated enough to recognize this case, and in any case it wouldn't have enough information since whether the directory is safe depends on how it was created.
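For example, if you really do want a completion directory under /tmp, creating it with mktemp -d keeps it private to you (a sketch; the name pattern is arbitrary):
# mktemp -d creates the directory with mode 700, owned by you,
# so the sticky-bit concern compaudit warns about does not apply
compdir=$(mktemp -d /tmp/zsh-completions.XXXXXX)
fpath=($compdir $fpath)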

permission denied in a folder for a user after chown and chmod

I have a directory at
/home/ec2-user/vertica1
and I'm trying to give user dbadmin all privileges in that folder.
I've done chown to dbadmin and chmod 777 on that folder but dbadmin still gets a permission denied error.
If I put sudo in front of the command (I put dbadmin in sudoers), then it works. Why can't I get it to work without sudo?
Can dbadmin traverse /home/ec2-user? Try doing chmod a+x /home/ec2-user.
There could be more reasons for being denied, like a specific ACL or an LSM, but this is the most likely cause.
UNIX permissions on directories
The UNIX permissions rwx¹ work on directories as follows:
r: You can view the contents of the directory (the names of the files or folders inside)
w: You can create new files, delete or rename existing files.
x: You can traverse the folder.
The traverse permission means that you can access the folder's children (assuming you know their names, which you can obtain if you also have read permission).
In this case dbadmin could read and traverse / as well as /home, but /home/ec2-user probably had a mode like drwx------ 2 ec2-user in order to protect its contents. Thus, even if you had an important file readable by anyone deep inside your home folder, other users couldn't get to it, since they wouldn't be able to get past /home/ec2-user (which is exactly what you wanted, in this case).
¹ Note that I am skipping over the more exotic ones.
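To check every component of the path for a missing traverse bit, namei (part of util-linux) prints the mode of each directory along the way. A sketch, using the path from the question:
# Show owner, group and mode of every directory leading to the target
namei -l /home/ec2-user/vertica1
# If /home/ec2-user is the blocker, adding the execute (traverse) bit is enough:
sudo chmod a+x /home/ec2-user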
What is the result of ls -la for this directory and also its parent directory? Maybe the directory doesn't have read permission for your user.
sudo chmod ug+r vertica1
Also, the ec2-user directory should be writable by the user dbadmin.

Why the need for 755 permissions for /var/www/? Why not 700 after chown?

My Apache directory for storing files is /var/www.
If I run
sudo chown -R www-data:www-data /var/www
this makes www-data the owner of the www folder. Since all static/dynamic files will be served by the Apache user, why do I now need to give this folder 755 permissions? It should just work with 700 permissions, right? Since with 700 permissions, the owner (www-data) has full permissions on the folder.
Therefore, my question, why do i need to run:
sudo chmod -R 755 /var/www
instead of
sudo chmod -R 700 /var/www
EDIT: I am not facing any error. I am only asking this question for knowledge. I have been advised by a lot of people to put 755 permissions on the /var/www/ folder. I just wanted to know why I couldn't use 700.
The best layout depends on a few factors. Primarily this is a question of security. Here are a few things to consider:
1) Do you want your web server to be able to write files to your DocumentRoot? Most of the time the answer is no... the exception being things like upload directories. In this case you want something like 755, where the owner/group is not the user that Apache is running as.
2) Do you have local user accounts (like developers) that should be able to access the content? If yes, you might want something like 755, root:developers for permissions, with Apache running as "www-data" or "apache", and not in the group (subject to #1 above).
3) Do those devs need to be able to edit the content (do a code push)? In that case, perhaps 775 root:developers is better.
The primary problem with 700 is that it requires the owner to be the user that Apache is running as, and that gives it full permissions to modify any file in the DocumentRoot. This is usually considered a security weakness because generally speaking the web server should not be modifying files in the DocumentRoot apart from pretty specific exceptions.
A common exploit is for an attacker to trick your web app into uploading something like a malicious PHP script somewhere in the DocumentRoot and then visiting that page. One of the countermeasures is to disallow Apache from writing to the DocumentRoot via this sort of filesystem permissions.

How to set the user and group for a set of autogenerated files

I am not sure whether this is a Sphinx question or a Linux question.
Anyway, I am running Sphinx on my CentOS server box.
I successfully managed to change the ownership of some Sphinx-related files and directories, as I want only root and the Apache group to be able to read that information:
chown root.apache /etc/sphinx/sphinx.conf
chmod 640 /etc/sphinx/sphinx.conf
chown -R root.apache /var/lib/sphinx
chmod -R 750 /var/lib/sphinx
My problem is when Sphinx generates the files containing the index in the directory /var/lib/sphinx, they have ownership
root:root
How can I make them have permission
root:apache
?
The Sphinx documentation doesn't mention anything about that.
Thanks.
Set the setgid bit on the directory /var/lib/sphinx/:
root@centos:~# chmod g+s /var/lib/sphinx/
This way, all files created in the directory /var/lib/sphinx/ will inherit their group from the parent directory.
If Sphinx is not messing with ownership by itself (I guess it's not), then this will work.
You can read more about setgid on directories here.
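A minimal sketch of the whole setup, reusing the group from the question (apache):
# Group ownership plus the setgid bit (the leading 2 in 2750)
chown -R root:apache /var/lib/sphinx
chmod 2750 /var/lib/sphinx
# Verify: the group execute position shows 's' instead of 'x'
ls -ld /var/lib/sphinx        # drwxr-s--- ... root apache ...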
Or you can run the cron job as the user sphinx rather than as root, either by adding sudo -u sphinx in the existing crontab entry, or by removing it from root's crontab and adding it to the appropriate user's crontab. (Perhaps you should report this as a bug against the package, if you are using a packaged version.)
