What user does cPanel run a cron job as?

From what I've read, people say it runs as the user under whose account the cron job is set up. However, this does not seem to be the case. I have GPG set up on my server and have a script that decrypts data, generates XML, encrypts the XML using someone else's public key, then sends it off to them. If I run the script myself from the web I have no issues. If I have my cron job run the script, I get errors saying that the directory holding the public key does not exist (which of course it does). The owner of the folder is nobody. If I chmod the folder to 777 it works (of course) and runs just fine, but then I get the error gpg: WARNING: unsafe ownership on homedir, which is a given. So I'm wondering what user this runs as, so I can change the ownership of the folder instead of the permissions. If it is not the owner of the cPanel account under which the cron job was set up, then what user would it be?

Related

AWS refuses the key.pem if permissions on folder are changed

I'm using a Linux server on AWS and have been encountering the issue Server refused our key; I can't log in to the server anymore. (The key and the login account ec2-user are correct, as I had been connecting for days already.)
After some investigation I found out that the issue occurs when I change the permissions on the account's home folder, /home/ec2-user/. By default it has --- for group; I ran chmod g+rwx /home/ec2-user/ to allow group access (I have the nginx user added to the ec2-user group, and it needs the access).
Once the above is applied, if I try to connect I always get the Server refused our key message. I tried restarting the server and creating new servers; same scenario. I only managed to figure it out by keeping one PuTTY connection open, changing the permissions, and trying another session, which showed the error; when I set the permissions back to what they were, it connected successfully.
I'm still very new to Linux, so can someone enlighten me on what might be causing this issue, or whether it's something on AWS's side?
Note: I'm connecting with the .pem file, which was properly converted to the right .ppk file; I had been connected for a while and working on the server, so the credentials are not the issue.
Thanks.
Avoid changing permissions on the ec2-user home directory.
Home-directory permissions require that you clearly understand what you are doing before you change them.
If you need the nginx user to have its own space, create one under /opt, or use the default /var/www.
You shot yourself in the foot.
The more specific explanation for what happened is related to the permissions sshd checks before trusting your keys: with StrictModes (the default), it refuses to use ~/.ssh/authorized_keys if that file, the ~/.ssh directory, or your home directory itself is writable by anyone other than you.
This is a list of the public ssh keys whose matching private key can be used to log in as you.
Make this file writable by anyone other than yourself, and the implications are obvious: anyone who can write to this file can add an arbitrary public key to the list, thereby allowing them to log in as you.
The secure shell daemon, sshd sees this misconfiguration, and calls foul -- if the file of authorized keys is compromised by being writable by anyone other than you, then its contents are inherently unsafe, and it therefore is ignored... and since that's the mechanism by which your key was trusted to allow you to log in... you no longer can.
This is by design, standard *nix behavior, and unrelated to AWS.
Recursively changing permissions is unwise unless you absolutely know what you're doing.
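Concretely, sshd's check is satisfied once the home directory, ~/.ssh, and authorized_keys are writable only by their owner. A minimal sketch of the repair, demonstrated here on a scratch directory standing in for /home/ec2-user (the real path is an assumption):

```shell
# Scratch directory standing in for /home/ec2-user.
HOMEDIR=$(mktemp -d)
mkdir -p "$HOMEDIR/.ssh"
touch "$HOMEDIR/.ssh/authorized_keys"

# The permissions sshd expects before it will trust the key list:
chmod 755 "$HOMEDIR"                       # home: no group/world write
chmod 700 "$HOMEDIR/.ssh"                  # .ssh: owner only
chmod 600 "$HOMEDIR/.ssh/authorized_keys"  # keys: owner read/write only

stat -c '%a' "$HOMEDIR/.ssh"               # prints 700
```

On the real instance, run the three chmod lines against /home/ec2-user, and give nginx its own directory elsewhere instead of opening the home directory to the group.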

How can I automatically save the SSH_CLIENT at login?

I want to save a user's IP when they connect to their home folder. My team shares a single account on a server where our public_html is located, so I just want to record who connected.
So I want to make a script that triggers when a connection is made and saves the user's IP into a hidden file.
But I don't know if I can leave a script running in the background to do it, or how.
If you're root on that machine, you can simply check the auth log / messages / journal / ... (depends on the distribution). By default sshd already logs everything you need.
If you're not root, then keep in mind this will never be secure. You can do it in the user's bash profile, but:
Since it's running as the same user, whoever logs in can just change the file (you can't hide it)
Anyone can work around the script by executing some other command instead of the shell (for example, ssh user@host /some/command will not be logged)
It's not secret.
If that's OK with you, then you just need to add this to ~/.bashrc:
echo "new connection at $(date) from ${SSH_CLIENT}" >> ~/your_connection_log
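To avoid also logging local (non-SSH) shells, the same line can be guarded by a test on SSH_CLIENT; a small sketch, with the log file name chosen here as an assumption:

```shell
# In ~/.bashrc: SSH_CLIENT is "clientip clientport serverport",
# and it is only set for sessions that came in over SSH.
if [ -n "$SSH_CLIENT" ]; then
  echo "new connection at $(date) from ${SSH_CLIENT%% *}" >> ~/.connection_log
fi
```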
A different solution, which arguably should have been the default: most distributions provide a login history that you can query for your own account without root privileges.
Running last your_username should give you the details of the last few logins, which cannot be manipulated by the user (though the log can be spammed with extra entries).

Send email when user changes password

I have a remote server that I log in to using ssh. Is there a way to be notified by email (using a bash script) when someone changes the user's password using passwd, including the new password?
I am guessing it has to do with /etc/pam.d/passwd, but I'm not entirely sure what the trigger and flags should be.
This would be useful if, for example, I give my access to a "friend" and they decide to lock me out of my account. Of course I could create a new account for them, etc., but this is more of an "it should be possible" task than a practical one.
First, a Dope Slap
There's a rule that this question reminds me of... What is it? Oh yeah...
NEVER SHARE YOUR PASSWORDS WITH ANYONE!
Which goes hand in hand with the rule:
NEVER SEND SOMETHING SECRET THROUGH EMAIL!
Sorry for the shouting. There's a rule in security that the likelihood a secret will get out is the square of the number of people who know it. My corollary is:
if ( people_who_know_secret > 1 ) {
It ain't a secret any more
}
In Unix, even the system administrator, the all powerful root, doesn't know your password.
Even worse, you want to email your password. Email is far from secure. It's normally just plain text sent over the Aether where anyone who's a wee bit curious can peek at it.
Method One: Allowing Users to use SSH without Knowing Your Password
Since you're using SSH, you should know that SSH has an alternate mechanism for verifying a user called Private/Public keys. It varies from system to system, but what you do is create a public/private key pair. You share your public key with the system you want to log into, but keep your private key private.
Once the remote machine has your public key, you can log into that system via ssh without knowing the password of that system.
The exact mechanism varies from machine to machine and it doesn't help that there are two different ssh protocols, so getting it to work will vary from system to system. On Linux and Macs, you generate your public/private key pair through the ssh-keygen command.
By default, ssh-keygen produces two files, $HOME/.ssh/id_rsa.pub and $HOME/.ssh/id_rsa. The first one is your public key. You only need to run ssh-keygen on the machine you're connecting from; the remote machine just needs a copy of your public key.
On the machine you're logging into, create a file called $HOME/.ssh/authorized_keys, and copy and paste your public key into this file. Have your friend also send you his public key, and paste that into the file too. Each public key will take up one line in the file.
If everything works, both you and your friend can use ssh to log into that remote machine without being asked for a password. This is very secure since your public key has to match your corresponding private key; if it doesn't, you can't log in. That means even if other people find your public key, they won't be able to log into that remote system.
Both you and your friend can log into that system without worrying about sharing a password.
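On most systems the whole setup reduces to a few commands; a sketch, assuming OpenSSH and a hypothetical host name:

```shell
# Generate a key pair (empty passphrase here only for brevity --
# use a real passphrase in practice). This writes ~/.ssh/id_rsa
# (private, keep it secret) and ~/.ssh/id_rsa.pub (public).
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa

# Append your public key to authorized_keys on the remote machine;
# ssh-copy-id does the copy-and-paste step described above for you.
ssh-copy-id user@remote.example.com

# From now on, this should log in without a password prompt:
ssh user@remote.example.com
```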
Method Two: A Better Solution: Using SUDO
The other way to do this is to use sudo to allow your friend to act as you in certain respects. Sudo has a few advantages over actually sharing the account:
All use of SUDO is logged, so you have traceability. If something goes wrong, you know who to blame.
You can limit what people can do as SUDO. For example, your friend has to run a particular command as you, and nothing else. In this case, you can specify in the /etc/sudoers file that your friend can only run that one particular command. You can even specify if your friend can simply run the command, or require your friend to enter their password in order to run that command.
On Ubuntu Linux and on Macintoshes, the root password is locked, so you cannot log in as root. If you need to do something as root, you set yourself up as an administrator (by putting yourself in the sudo group on Ubuntu, or the admin group on a Mac) and then use sudo to run the required administrator functions.
The big disadvantage of Sudo is that it's more complex to setup and requires administrator access on the machine.
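For illustration, a sudoers fragment of the kind described above; the user names and command path are hypothetical, and the file should always be edited with visudo:

```
# /etc/sudoers.d/friend -- edit with: visudo -f /etc/sudoers.d/friend
# Allow 'friend' to run exactly one command as 'myaccount', nothing else:
friend ALL=(myaccount) /usr/local/bin/deploy-site

# The same rule without a password prompt would add NOPASSWD:
# friend ALL=(myaccount) NOPASSWD: /usr/local/bin/deploy-site
```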
Try setting up public/private keys with SSH. It might take some tweaking to get it to work, but once it works, it's beautiful. Even better, you can run remote commands and use scp to copy files from one machine to the other, all without a password prompt. This means you can write shell scripts to do your work for you.
By the way, a sneaky trick is to set your remote shell to /bin/false. That way, you can't log into that system, even using ssh, but you can run remote commands using ssh and use scp to copy files back and forth between systems.
Personal passwords are only supposed to be known by the users themselves. Not even the root user is supposed to know them, which is why they are stored as one-way hashes. Of course, the root user has sufficient access to replace them, but the principle is the same.
If you are giving your "friend" access, then assign them proper privileges! Do not make them a root user, and you shouldn't be running as root either. Then your "friend" won't have access to change your password, let alone muck about in areas they aren't supposed to be in.
If you absolutely must monitor the passwd and shadow files, install iwatch. Then set it to watch the /etc/passwd and /etc/shadow files. If they change, it runs a script that emails someone. If you keep a copy to diff against, you'll even know which entry changed. You should probably also gpg-encrypt the email so that it does not go over the internet in plain text, since it contains everyone's password hashes. Please note that any other users on the system will be upset by the dystopian world they find themselves in.
Just because root is the law of the land does not mean we want to be living in 1984.
Try something like:
alias passwd='passwd && echo "Alert! Alert! Alert!" | mail -s "pass change" alert@example.com'
Should be enough for you :)
Other possible solutions, for those who think an alias is too mainstream:
1) You could make a cron job that checks your /etc/shadow file every minute, for example, and sends you an alert email when the file changes. The easiest way here, I think, is an md5 checksum.
2) You could move /usr/bin/passwd to /usr/bin/passwd.sys and put a script containing /usr/bin/passwd.sys && echo 'Alert! Alert! Alert!' | mail -s 'pass change' in its place. And yes, this way could also be discovered by the user and worked around :)
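A sketch of option 1, the checksum comparison a cron job would perform, demonstrated here on a scratch file standing in for /etc/shadow (the mail step is left as a comment because it depends on local mail setup):

```shell
# Stand-ins for /etc/shadow and the saved checksum (hypothetical paths).
WATCHED=$(mktemp); echo 'root:$6$oldhash' > "$WATCHED"
STATE=$(mktemp)
md5sum "$WATCHED" | awk '{print $1}' > "$STATE"

# The part you would run from cron every minute:
check() {
  new=$(md5sum "$WATCHED" | awk '{print $1}')
  if [ "$new" != "$(cat "$STATE")" ]; then
    echo "changed"   # real job: ... | mail -s 'pass change' you@example.com
    echo "$new" > "$STATE"
  else
    echo "unchanged"
  fi
}

check                                  # prints "unchanged"
echo 'root:$6$newhash' > "$WATCHED"
check                                  # prints "changed"
```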

how to safely receive files from end-users via rsync

I'd like to allow users of my web application to upload the contents of a directory via rsync. These are just users who've signed up online, so I don't want to create permanent unix accounts for them, and I want to ensure that whatever files they upload are stored on my server only under a directory specific to their account. Ideally, the flow would be something like this:
user says "I'd like to update my files with rsync" via authenticated web UI
server says "OK, please run: rsync /path/to/yourfiles uploaduser123abc@myserver:/"
client runs that, updating whatever files have changed onto the server
upload location is chrooted or something -- we want to ensure client only writes to files under a designated directory on the server
ideally, client doesn't need to enter a password - the 123abc in the username is enough of a secret token to keep this one rsync transaction secure, and after the transaction this token is destroyed - no more rsyncs until a new step 1 occurs.
server has an updated set of user's files.
If you've used Google AppEngine, the desired behavior is similar to its "update" command -- it sends only the changed files to appengine for hosting.
What's the best approach for implementing something like this? Would it be to create one-off users and then run an rsync daemon in a chroot jail under those accounts? Are there any libraries (preferably Python) or scripts that might do something like this?
You can run sshd chrooted and rsync normally; just use PAM to authenticate against an "alternate" authdb.
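An alternative that avoids Unix accounts entirely is rsync's own daemon mode: each module in rsyncd.conf is confined to its path and can authenticate against rsync's own user list, so the web app could create a module and a secrets entry per upload token and delete them after the transfer. A sketch with hypothetical names and paths:

```
# /etc/rsyncd.conf -- one short-lived module per upload token
[upload-123abc]
    path = /srv/uploads/user123abc
    use chroot = yes
    read only = false
    auth users = uploaduser123abc
    secrets file = /etc/rsyncd.secrets  # "user:password" lines, must be mode 600
```

The client would then run rsync -av /path/to/yourfiles rsync://uploaduser123abc@myserver/upload-123abc/ for the transfer.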

What is the appropriate way to run higher-privilege commands over HTTP

I have a web project for which I need to run a command when a specific URL is requested, but that command requires root privileges.
The project is served with a Python process (Django), of course running it with root privileges is not an option.
The command's parameters are hardcoded, making it impossible to inject anything, and it's an access-protected application, so I can be slightly more liberal about security since the users who will have access to it should be trustworthy (hopefully). Ideally, however, I would like to do it securely.
Call out to the command via sudo with the NOPASSWD: option; that allows you fine-grained access control and gives you auditing in syslog for free. Avoid using a shell, and use an exec variant that takes the parameters directly as an array.
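For illustration, the sudoers side of this might look as follows; the application user django and the command path are hypothetical:

```
# /etc/sudoers.d/webapp -- install with: visudo -f /etc/sudoers.d/webapp
# Let the app user run exactly one hardcoded command as root, no password:
django ALL=(root) NOPASSWD: /usr/local/bin/rebuild-index
```

The application then invokes sudo -n /usr/local/bin/rebuild-index as an argument array (no shell); -n makes sudo fail instead of prompting if the rule is missing.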
Use setuid: http://en.wikipedia.org/wiki/Setuid
"setuid and setgid (short for set user ID upon execution and set group ID upon execution, respectively) are Unix access rights flags that allow users to run an executable with the permissions of the executable's owner or group."
...And be very, very careful!
By all means avoid setuid and setgid; you want to keep the HTTP server running with as few permissions as possible. For the step that requires root privileges, have the HTTP server spawn a whole new, separate process that invokes sudo. You should not put the HTTP server's uid or gid in sudoers, though; use some other user that is the only one allowed to access the program, where that program is the only thing that user may run as root. The HTTP server then starts the new process, changes its UID to that unprivileged user, and executes the command with sudo as root.
