cPanel cron job ln -s symlink security problem - cron

I configured my server with cPanel, CloudLinux, LiteSpeed, CWAF, CageFS, and CXS.
All my services are running smoothly.
However, from a cron job created under one user I can access other users' files through symlinks.
For example, I can read the config.php file in user2's public_html folder by adding a cron job to user1 as follows:
ln -s /home/user2/public_html/config.php config.txt
When the cron job runs, a symlink named config.txt appears under user1, and viewing config.txt shows the contents of user2's config.php.
This is a serious vulnerability; how can I prevent it?
My English is not good. Forgive me.
thanks

How exactly are you reading this file after the symlink has been created? Because it doesn't work on any of the cPanel servers I've tested.
Additionally, the cron job is executed as the user, so I'm not sure how this would allow an escalation to happen; it would be equivalent to running the same command in a shell.
If you're within user1's jail (su - user1), add a cron job such as:
0 * * * * ln -s /home/user2/public_html/wp-config.php /home/user1/config.txt
When the symlink is actually created and you then do a cat /home/user1/config.txt as user1, you'll end up with a 'No such file or directory':
cat: config.txt: No such file or directory
Why? Because you're creating a symlink that points to a file that doesn't exist (within CageFS).
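It's easy to check this yourself; a minimal reproduction sketch (user1 and user2 stand in for any two accounts on the box):

su - user1
ln -s /home/user2/public_html/wp-config.php ~/config.txt
ls -l ~/config.txt   # the symlink itself is created without complaint
cat ~/config.txt     # No such file or directory: the target is outside the CageFS view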
But if you're absolutely sure that it's possible (despite not being able to replicate it), then report it to CloudLinux, because it would clearly be something they'd have to fix.
Heck, I'm surprised you didn't create a ticket with them in the first place, and instead decided to bring up your issue on Stack Overflow.

Related

Using mkdir in my bash script and getting permission denied

I have a script that is owned by root in a directory owned by root. Part of the script is to make a directory that will hold the inputs/outputs of that script. I also have a symlink to that script so any user can run it from anywhere. I don't use the temp directory, so this info can be used as logs later.
Problem: when a user tries to run the script, they get an error that the directory cannot be created because of permission denied.
Questions: Why won't the script make the directory so that root owns it, independent of which user runs it? How can the script make the directory so that root owns it instead of the user that ran it? Only the script needs this info, not the user.
Additional info:
The directory is: drws--s--x.
The script is: -rwxr-xr-x.
(If you need to know) the line in the script is simply: mkdir $tempdirname
I am matching the permissions of other scripts on the same server that output text files correctly, but since mine is a directory I'm getting permission errors.
I have tried adding the setuid and setgid permissions. setuid sounded like the correct solution, since it should make the script run as if it were run by the user that owns the script. (Why isn't this the correct solution?)
I would like any user to be able to type the symlink name, which will run the script that is owned by root in the directory that is owned by root, and the directories created by that script will stay in its own directory. The end user has no knowledge of or access to the inner workings of this process (hence owned by root).
Scripts run as the user that runs them; the owner of the file and/or the directory it's in are irrelevant (except that the user needs read and execute permission to the file and directory). Binary executables can have their setuid bit set to make them always run as the file's owner. Old unixes allowed this for scripts as well but this caused a security hole, so setuid is ignored on scripts in modern unixes/Linuxes.
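This is easy to demonstrate; a quick sketch (the path and script name are just examples):

# As root: create a tiny script that reports its effective uid
printf '#!/bin/bash\nid -u\n' > /tmp/suidtest.sh
chown root /tmp/suidtest.sh   # chown before chmod: changing the owner clears setuid bits
chmod 4755 /tmp/suidtest.sh
# Then, as a regular user:
/tmp/suidtest.sh              # prints your own uid, not 0: the kernel ignored setuid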
If you need to let regular users run a script as root, there are a couple of other ways to do this. One is to add the script to your /etc/sudoers file, so that users can use sudo to run it as root. WARNING: if you mess up your /etc/sudoers file, it can be hard to recover access to clean it up and get back to normal. Make a backup first, don't edit it with anything except visudo, and I recommend having a root shell open so if something goes wrong you'll have the root access you need to fix it without having to escalate via sudo. The line you'll need to add will be something like this:
%everyone ALL=NOPASSWD: /path/to/script
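Concretely, the careful route looks like this (a sketch; the backup filename is arbitrary):

cp /etc/sudoers /etc/sudoers.bak   # a copy you can restore from your open root shell
visudo                             # edits safely and syntax-checks before saving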
If you want to make this automatic, so that users don't have to explicitly use sudo to run the script, you can start the script like this:
#!/bin/bash
# Re-exec this script under sudo if it isn't already running as root
if [[ $EUID -ne 0 ]]; then
    exec sudo "$BASH_SOURCE" "$@"
fi
EDIT: A simpler version occurred to me; rather than having the script re-run itself under sudo, just replace the symlink with a stub script like this:
#!/bin/bash
exec sudo /path/to/real/script "$@"
Note that with this option, the /etc/sudoers entry must refer to the real script's path, not that of the symlink. Also, if the script doesn't take arguments, you can leave the "$@" off. Or keep it; it won't do any harm either way.
If messing with /etc/sudoers sounds too scary, there's another option: you could "compile" the script with shc (which actually just makes a binary executable wrapper around it), and make that setuid root (chown root /path/to/compiled-script, then chmod 4755 /path/to/compiled-script; chown comes first because changing the owner clears any setuid bit). Since it's in a binary wrapper, setuid will work.
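For concreteness, the shc route could look like this (a sketch; shc's -f flag names the input script, and the compiled wrapper gets an .x suffix):

shc -f /path/to/script          # produces the binary wrapper /path/to/script.x
chown root /path/to/script.x    # chown before chmod, since chown clears setuid bits
chmod 4755 /path/to/script.x    # setuid root now sticks, because it's a binary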

What determines in Linux the permissions a file is created with?

I have a technical user that writes a file to a directory. The file is automatically given the permissions rw-r--r--.
What determines that? Why is it exactly 644 instead of any other combination?
And what do I have to configure so that files written there automatically get rw-rw-rw- / 666?
I would like to avoid a chmod after every copy, as this causes continuous extra work; better that every file this user copies into the directory gets these permissions.
And a bonus question: does this also cover moving a file there?
Thanks!
This is called the umask. The bits set in the umask are removed from the default creation mode (666 for regular files), so a umask of 022 yields 644, 026 yields 640, and 000 yields 666.
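A quick worked example (a sketch; the first command just prints your current mask):

umask        # e.g. 0022
touch f
ls -l f      # 666 masked by 022 gives 644, i.e. -rw-r--r--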
floridopower - you have to modify the umask for that specific user. To do this, first look at the default environment of the system user; run this:
grep -i thenameoftheuser /etc/passwd
And if you see /bin/bash anywhere in the returned line, just run this command:
echo "umask 111" >> /home/thenameoftheuser/.bashrc
So if the user is a system user and the home directory of that user is located under /home/, you can safely run the above commands and then run a test (create a new file as that user and look at the permissions). Note that umask 111 also strips the execute bits from newly created directories; if this user creates directories too, umask 000 is the safer choice for this goal.
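That test could look like this (a sketch, assuming the login profile sources .bashrc, as it typically does):

su - thenameoftheuser
umask            # should now report 0111
touch test.txt
ls -l test.txt   # expect -rw-rw-rw-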

ssh not working correctly with sudo

Good morning everyone! I have a bash script that starts automatically when the system boots, via the .profile file in the user's home directory:
sudo menu.sh
The script starts just as expected. However, when calling things like ssh UN@ADDRESS inside the script, the known_hosts file gets placed in the /root/.ssh directory instead of in the home directory of the user calling the script! I have tried modifying .profile to call 'sudo -E menu.sh' and 'sudo -H menu.sh', but both fail to have the known_hosts file created in the home directory of the user that's calling the script. My /etc/sudoers is as follows:
# Declarations
Defaults env_keep += "HOME USER"
# User privilege specification
root ALL=(ALL) ALL
user ALL=NOPASSWD: ALL
Any help would be appreciated!
Thanks
Dave
UPDATE: what I did as a workaround is go through the script and add 'sudo -u $USER' before specific calls (since sudo is supposed to keep the $USER env var). This seems like a very bad way of resolving the problem to me. If sudo is supposed to keep the USER and HOME variables when launching menu.sh, why would I need to explicitly call sudo once again as a specific user in order to retain that information (even though sudo is being told to keep it via the /etc/sudoers file)? No clue, but I wanted to update this post for anyone who comes across it until a better solution is found.
Regarding OpenSSH, the default location for known_hosts is ~/.ssh/known_hosts. Ssh doesn't honor $HOME when expanding a "~" in a filename. It looks up the user's actual home directory and uses that. When you run ssh as root, it's going to interpret that pathname relative to root's home directory no matter what you've set HOME to.
You could try setting the ssh parameter UserKnownHostsFile to the name of the file you'd like to use:
ssh -o "UserKnownHostsFile=$HOME/.ssh/known_hosts" user@host ...
However, you should test this. Ssh might complain about using a file that belongs to another user, and if it has to update the file then the file might end up being owned by root.
Really, you're best off running ssh as the user whose .ssh folder you want ssh to use. Running processes through sudo creates a risk that the user can find a way to do things you didn't intend for them to do. You should limit that risk by using the elevated privileges as little as possible.
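If the script has to stay under sudo, one way to run just the ssh call as the original user (a sketch; SUDO_USER is set by sudo to the name of the invoking user):

# -u switches back to the invoking user, -H sets HOME to that user's home,
# so ssh finds and updates the right ~/.ssh/known_hosts
sudo -u "$SUDO_USER" -H ssh user@host ...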

Create Cron Job to Reapply Permissions on QNAP NAS

I have a QNAP NAS running Google Drive sync so that my QNAP, Computers and Google Drive are all in Sync.
When I create a file on my work computer, then get home and open it from the QNAP, I get an access denied error on the file I created at work.
If I view the permissions I can see they are set incorrectly. From the QNAP web manager I simply right click the folder containing my files and set permissions to "Reapply and apply to subfolders/files".
How would one go about doing the above via a cron job that runs say every 5 minutes?
I had a similar problem myself and also made a cron job for it.
Start off by making a script in an easy-to-find place.
I used /share/MD0_DATA/ because all the shares live there.
Create a file like perms.sh and add the following:
#!/bin/bash
# Bail out if the target folder is missing, so the recursive
# chmod/chown below can never run in the wrong directory
cd /share/MD0_DATA/(folder you want to apply this) || exit 1
chmod -R 775 *
chown -R nobody:nogroup *
I used nobody:nogroup just as an example; you can use any user and group you want.
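Before wiring it into cron, make sure the script is executable, otherwise the crontab entry below will fail with a permission error:

chmod +x /share/MD0_DATA/perms.sh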
Now you need to add this script to crontab.
To see what's in your crontab, use:
crontab -l
To edit the crontab, use:
crontab -e
This editor works like vi. If you don't like vi and want to access the file directly, edit:
/etc/config/crontab
Add this line to your crontab:
*/5 * * * * /share/MD0_DATA/perms.sh
The */5 represents a 5-minute interval.
Then you need to let crontab know about the new commands:
crontab /etc/config/crontab
I hope this helped you.

Linux Crontab: having no effect

I have made a backup script that works well: it makes a backup zip file and then uploads it via FTP to another server. It's located here: /home/www/web5/backup/backup
Then I decided to put this script into crontab to be done automatically.
I'm doing (as root):
crontab -e
On the blank row I put:
*/1 * * * * /home/www/web5/backup/backup
Escape key, :wq!, Enter
I set it to be done each minute to test it.
Then I went to the FTP folder where the script uploads the files. I'm waiting, but nothing happens: the directory stays empty after each refresh in my Total Commander.
But when I execute /home/www/web5/backup/backup manually (as root as well), it works just fine and I see the new file at FTP.
What's wrong? This server is kind of legacy, so I may not know everything about it. Where should I check first? The OS is
Linux s090 2.6.18.8-0.13-default
(some kind of very old CentOS).
Thanks for any help!
UPD: /home/www/web5/backup/backup has chmod 777
UPD2: /var/log/cron doesn't exist. But /var/log/ directory exists and contains logs of apache, mail, etc.
*/1 may be the problem. Just use *.
* * * * * /home/www/web5/backup/backup
Also, make sure /home/www/web5/backup/backup is executable with chmod 775 /home/www/web5/backup/backup
Check /var/log/cron as well. That may show errors leading to a fix.
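Since the update above says /var/log/cron doesn't exist, note that on older systems cron usually logs through syslog instead; a hedged place to look:

grep -i cron /var/log/messages | tail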
From Crontab – Quick Reference
Crontab Environment
cron invokes the command from the user’s HOME directory with the shell (/usr/bin/sh). cron supplies a default environment for every shell, defining:
HOME=user’s-home-directory
LOGNAME=user’s-login-id
PATH=/usr/bin:/usr/sbin:.
SHELL=/usr/bin/sh
Users who desire to have their .profile executed must explicitly do so
in the crontab entry or in a script called by the entry.
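That minimal environment is the usual suspect here: a script that works when run by hand but does nothing from cron is often calling binaries (zip, an FTP client) that aren't on cron's short PATH. A hedged fix is to set PATH in the crontab itself and capture the script's output somewhere you can read it (the log path is just an example):

PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
* * * * * /home/www/web5/backup/backup >> /tmp/backup.log 2>&1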
