rsync sets wrong group - linux

I have a bash script to sync a Zend Framework site between two servers, but for some reason one file doesn't get the correct owner/group. Since the file then becomes unreadable by Apache, the site goes down on that server.
On the first server I have the following file:
-rwxrwx--- 1 monit www-data 4184 2012-03-14 05:39 application.ini
This should be exactly the same on the second server, since both the user monit and the group www-data exist there too, but this is not the case, as seen below:
-rwxrwx--- 1 monit monit 4184 2012-03-14 05:39 application.ini
This file is the only one affected. All other files get the correct permissions, owners and groups. The rsync command is as follows:
rsync -az --delete --stats --include="document_root/.*" --exclude=".*" SERVER1 SERVER2
rsync is version 3.0.3; Server 1 is Ubuntu 9.04 and Server 2 is Debian 5.0.
At the moment the problem is circumvented by setting the permissions on the original file to -rwxrwxr--. The synced file will still have the wrong group, but is at least readable.

Check that the monit user is in the www-data group on the target server.
Try rsyncing only the problematic file, running rsync from the target server with one or more -v options added, then look at the output:
$ groups monit | grep www-data
$ rsync -avv source_host:path/to/application.ini ./application.ini
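The membership check matters because rsync -a can only assign a group the receiving user is allowed to give away: a non-root receiver that is not in the group silently falls back to its primary group, which matches the monit:monit result seen above. A minimal sketch of that check, using the current user and primary group so it runs anywhere (substitute monit and www-data on the real servers; the usermod hint is the standard command for adding a supplementary group):

```shell
#!/bin/sh
# Verify group membership before expecting rsync to preserve a group.
# Demo uses the current user and its primary group so it runs anywhere;
# on the servers in question the names would be "monit" and "www-data".
user=$(id -un)
group=$(id -gn)

if id -nG "$user" | grep -qw "$group"; then
    echo "$user is in $group - rsync can assign group $group on receive"
else
    echo "$user is NOT in $group - fix with: usermod -aG $group $user"
fi
```

Note that rsync 3.0.3 predates --chown and --groupmap (both arrived in 3.1.0), so fixing the membership, or running a chgrp after the transfer, are the available options here.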

Related

Apache/Linux - Deploying website - as root?

I want to test my website in the real server, and I’m about to deploy it now. I was wondering about security issues, should I upload the files (via sftp) to /var/www/site/public_html as root, or should I create a user for the upload, and set the directory permissions to that user?
Thanks
Although you have accepted an answer here, the information provided is very wrong.
I appreciate that you will already know some of the things I'm going to say here, but since you accepted a very misleading answer there are some gaps in your understanding. Also not everyone reading this may have the same knowledge you do.
Yes, you should be careful with the root account and only use it where you have to. Only use it as it is intended to be used.
What you need is for the files to be readable by the webserver. And on a correctly configured device the webserver does not run as root. If you were to run
ps auxw | grep httpd
or
ps auxw | grep nginx
You would see at least two processes running, and one of them probably will be owned by root. That's because only the root user can open a listening socket on a port number below 1024. That's a security thing. But having opened the socket, this instance of the webserver then starts another instance of itself (running as a non-privileged user) to actually process requests.
On the box in front of me, I see....
root 20784 0.0 0.0 125116 1552 ? Ss 00:43 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 20785 0.0 0.0 125476 3252 ? S 00:43 0:00 nginx: worker process
www-data 20786 0.0 0.0 125476 3252 ? S 00:43 0:00 nginx: worker process
www-data 20787 0.0 0.0 125476 3252 ? S 00:43 0:00 nginx: worker process
www-data 20788 0.0 0.0 125476 3252 ? S 00:43 0:00 nginx: worker process
root 22579 0.0 0.0 14224 968 pts/1 SN+ 00:43 0:00 grep --color=auto nginx
So your content files need to be readable by the www-data user.
This account has even fewer privileges than the other account you describe. You can't log in with this username. It is the opposite of the root account: it's a system or service account.
So how do you ensure you have the right permissions? You should have a regular account on the target you use to login (if you can ssh in or scp/sftp using the root account then your security is bad and should be fixed). Let's call this account auser. Once you have a shell on the system, then you should have the ability to become root via 'su' or 'sudo'. You can also transfer files using this account via sftp and scp (because you know that ftp is bad and have made sure there is no ftp server running on your host).
But this normal user account won't have permission to write to the directory containing the content (usually /var/www/html). So your first step in setting up the permissions is to change the ownership of this directory to auser - and only root can do that. So...
sudo chown -R auser /var/www/html
Now /var/www/html should look something like this....
drwxr-xr-x 14 auser root 4096 Apr 5 19:52 /var/www/html
Looking at that first string of letters: the d means it is a directory; the rwx are the permissions for the owning user (read, write and execute - although execute means something a bit different for a directory: if a user should have any access to the directory, they need the execute permission). The first r-x indicates that the group has read and execute but not write. The second r-x means that every other account on the system (including www-data) has read and execute permissions on the directory.
The usual config for PHP, Python and Perl is that these run as the webserver user (www-data). They do not need the executable permission bit set because they are not binaries (it would be required for pre-compiled languages like C and Go). Models where the active content runs as a different user than the webserver do exist, but they are very rare and do not apply to your case.
But the default configuration on most Linux systems is to create files as ?rwx???--- (where the '?' represents bits we're not that interested in here), i.e. "all other users" (or just "other" for short) don't get any permissions. So www-data can't read your files or cd into your directories. You could make sure you run....
chmod -R go+r /var/www/html
find /var/www/html -type d -exec chmod go+x {} \;
to fix this, but it's easier to just amend the config of your sshd to set the permissions appropriately. Typically you should look for the sftp Subsystem definition in /etc/ssh/sshd_config and append '-u 002':
Subsystem sftp /usr/lib/openssh/sftp-server -u 002
Subsequently all transferred files will be -rw?rw?r-- and directories will be drwxrwxr-x
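The effect of that -u 002 umask can be checked locally; a quick sketch (nothing here is sshd-specific, it is just standard umask behaviour - new files start from 666 and have the umask bits removed):

```shell
#!/bin/sh
# Demonstrate the effect of the 002 umask that "-u 002" gives sftp-server:
# new files become group-writable (664) instead of the usual 644.
tmpdir=$(mktemp -d)
cd "$tmpdir"

umask 022; touch default.txt     # typical default umask
umask 002; touch grouped.txt     # what sftp-server -u 002 uses

ls -l default.txt grouped.txt
# default.txt -> -rw-r--r--
# grouped.txt -> -rw-rw-r--
```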
Yes, you could transfer the files as root / have the files owned by root, and that does not compromise the security of your host. But allowing root to connect over ssh does compromise the security. That's why your sshd requires you to explicitly permit this to allow for edge cases - your sshd_config should contain...
PermitRootLogin no
Things get slightly more complicated when you need to allow multiple users to manage files. But we don't need to cover that now.
You will see mention on the internet of chmod 0777, which gives everyone all permissions - never, ever do this. Permissions are there to allow you to selectively share access.
So....
the root account is used to set up the initial permissions so that a normal login account controls the files.
The webserver user should only have enough permissions to serve up the content (read-only).
Use a normal user account for managing files.

mod_perl can't see files in /tmp

I have some mod_perl code trying to access a file under /tmp ... but it throws a 'no such file or directory' error. I added an 'ls -al /tmp' to my code to see what Perl was seeing inside the directory, and it only gave me . and .. :
drwxrwxrwt. 2 root root 6 Jan 21 13:36 .
drwxrwxrwx. 18 root sysadmin 4096 Nov 22 22:14 ..
In reality there are a mixture of files under /tmp, including some owned by the Apache user. Changing my code to 'ls -al /' gives a correct directory listing (nothing missing).
I tried sudo'ing to the Apache user, and can see the files under /tmp, so it must be something mod_perl related.
Ideas? I'm running mod_perl 2.0.8 and Apache 2.4 under CentOS 7. SELinux is set to permissive.
So, based on the comments, the answer here is: it's an RHEL 7 feature.
https://securityblog.redhat.com/2014/04/09/new-red-hat-enterprise-linux-7-security-feature-privatetmp/
PrivateTmp=
Takes a boolean argument. If true sets up a new file system
namespace for the executed processes and mounts a private /tmp
directory inside it, that is not shared by processes outside of
the namespace. This is useful to secure access to temporary files
of the process, but makes sharing between processes via /tmp
impossible. Defaults to false.
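If sharing files via /tmp is genuinely required, PrivateTmp can be switched off with a systemd drop-in. A sketch, assuming the unit is named httpd.service (as on CentOS 7); the DESTDIR variable is only there so the sketch can be dry-run without touching /etc:

```shell
#!/bin/sh
# Opt out of PrivateTmp for Apache via a systemd drop-in.
# Assumes the unit is httpd.service. Set DESTDIR= and run as root for real;
# by default it writes into a scratch directory for a safe dry run.
DESTDIR=${DESTDIR:-$(mktemp -d)}
unitdir="$DESTDIR/etc/systemd/system/httpd.service.d"

mkdir -p "$unitdir"
cat > "$unitdir/privatetmp.conf" <<'EOF'
[Service]
PrivateTmp=false
EOF

echo "wrote $unitdir/privatetmp.conf"
# On the real system, follow up with:
#   systemctl daemon-reload && systemctl restart httpd
```

Usually the cleaner fix is to move the shared files to a directory outside /tmp, keeping the hardening in place.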

Linux: share permissions between users for SVN folders

On a Ubuntu machine I've setup a SVN repository, served with Apache.
All the SVN repository folders and subfolders (located under /var/svn/repos/) belong to the www-data user and group:
drwxr-xr-x 7 www-data www-data 4096 gen 21 10:38 software_repository
www-data is the Apache user.
Next, I have a cron job that makes a nightly svnadmin dump of the repository, using my home user, let's say john_doe (who is in the www-data group too). The svnadmin dump command (and more...) is contained in an sh file called by crond.
When the cron job runs, or when I launch it manually as user john_doe, I get:
svnadmin: E160052: Revprop caching for '/var/svn/repos/sw/software_repository/db' disabled because SHM infrastructure for revprop caching failed to initialize.
svnadmin: E000013: Can't open file '/var/svn/repos/sw/software_repository/db/rev-prop-atomics.mutex': Permission denied
Because of the Permission denied error, I've run the same sh script prepended with the sudo command, and everything works fine.
So, we have 2 possibilities:
Understand where the SVN error come from.
Change permissions in a correct way for the john_doe user, used by cron.
For point #1 I've done some Google search but I've found nothing...
For point #2, I think the correct way is not to recursively grant all permissions to the www-data group on all the SVN folders and subfolders. What could be done is to share permissions on the SVN folders between the www-data user and john_doe, or to give the www-data group the same permissions (recursively) as the www-data user. Or something else - but for both solutions I've no idea of the correct command or configuration setting.
Solved running command:
chmod -R g=u software_repository
This fix is for solution #2. By the way, I still have no clue where the SVN errors come from...
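What "chmod g=u" does is copy the owner's permission bits onto the group, so members of the owning group (john_doe in www-data here) can do exactly what the owning user can. A small sketch of the semantics:

```shell
#!/bin/sh
# Demonstrate "chmod g=u": the group permission bits are replaced by a copy
# of the user (owner) bits, whatever those happen to be.
tmpdir=$(mktemp -d)
cd "$tmpdir"

touch repo-file
chmod 750 repo-file        # -rwxr-x--- : group can read but not write
chmod g=u repo-file        # group bits copied from user bits
ls -l repo-file            # -> -rwxrwx---
```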

cp/rsync command with destination as symlink to a directory

I am working on a cPanel backup solution at the moment. We have now been informed about this exploit.
Exploit : Full ROOT ACCESS to server
1.) Create a malicious file from a normal user account:
mkdir root
echo "hello" > root/.accesshash
2.) Wait for backup to run
3.) Replace root with a symlink:
ln -s /root root
4.) Restore root/.accesshash (I am running this command as root: "cp -rf /backup/.accesshash /home/username/root/")
5.) The user now has root access because we overwrote /root/.accesshash. An attacker will be able to log in to WHM as root by placing an access hash into this file.
root#cpanel [/home/master]# cat /root/.accesshash
hello
root#cpanel [/home/master]# ls -l /root/.accesshash
-rw-r--r-- 1 master master 3 Nov 20 21:41 /root/.accesshash
root#cpanel [/home/master]#
Can somebody advise me on this for a workaround? Thanks in advance.
The key problem here is running the restore command as root. When doing it for a specific restricted user (who might have malicious intents), you must run it as that user (or maybe as an even more restricted one, restoring files in a sandbox and copying them back later).
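On top of that, the root-side code can refuse to follow the attacker's symlink at all. A sketch of one such defensive check (the paths mirror the question and are illustrative; this guards only the final path component, so running the copy as the user remains the real fix):

```shell
#!/bin/sh
# Refuse to restore into a destination that is a symlink.
safe_restore() {
    src=$1
    destdir=$2
    if [ -L "$destdir" ]; then
        echo "refusing: $destdir is a symlink" >&2
        return 1
    fi
    cp -rf "$src" "$destdir/"
}

# Demo with a throwaway layout standing in for the attacker's trick:
tmp=$(mktemp -d)
echo hello > "$tmp/.accesshash"
mkdir "$tmp/real"
ln -s "$tmp/real" "$tmp/root"          # user replaced "root" with a symlink

safe_restore "$tmp/.accesshash" "$tmp/root" || echo "attack blocked"
safe_restore "$tmp/.accesshash" "$tmp/real" && echo "normal restore ok"
```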

How do I run a command as a different user from a root cronjob?

I seem to be stuck between an NFS limitation and a Cron limitation.
So I've got a root cron job (on RHEL5) running a shell script that, among other things, needs to rsync some files over an NFS mount. And the files on the NFS mount are owned by the apache user with mode 700, so only the apache user can run the rsync command -- running as root yields a permission error (NFS being a rare case where root is not all-powerful: with the default root_squash export option, the server maps root to an unprivileged user).
When I just want to run the rsync by hand, I can use "sudo -u apache rsync ...". But sudo doesn't work in cron -- it says "sudo: sorry, you must have a tty to run sudo".
I don't want to run the whole script as apache (i.e. from apache's crontab) because other parts of the script do require root -- it's just that one command that needs to run as apache. And I would really prefer not to change the mode on the files, as that will involve significant changes to other applications.
There's gotta be a way to accomplish "sudo -u apache" from cron??
thanks!
rob
su --shell=/bin/bash --session-command="/path/to/command -argument=something" username &
Works for me (CentOS)
Use su instead of sudo:
su -c "rsync ..." apache
By default on RHEL, sudo isn't allowed for processes without a terminal (tty). That's set in /etc/sudoers.
You can allow tty-less sudo for particular users with these instructions:
https://serverfault.com/questions/111064/sudoers-how-to-disable-requiretty-per-user
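For reference, the per-user override described at that link is a one-line sudoers change (edit with visudo; the user name apache matches the question):

```
Defaults:apache !requiretty
```

With that in place, root's cron job can run "sudo -u apache rsync ..." without a terminal.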
If you want to permanently enable yourself to fiddle around as apache:
chsh apache
This lets you change the shell for the user (system accounts like apache usually have a non-login shell by default).
Alternatively, place the command in /etc/crontab and specify apache instead of root in the user field.
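The system crontab has an extra "user" field that per-user crontabs lack; a hypothetical /etc/crontab line (schedule and script path invented for illustration):

```
# m h dom mon dow user   command
0 2 * * * apache /usr/local/bin/nightly-rsync.sh
```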
