authentication for SSH into EC2 with new user failing - linux

I am working with Chef on EC2 instances, and created a user data script to be passed in through the knife ec2 command, which creates a new user, copies the public key file from the default ec2-user and sets the correct ownership and permissions.
#!/bin/bash
CHEFUSER="$(date +%s | sha256sum | base64 | head -c 32)"
useradd $CHEFUSER
echo $CHEFUSER 'ALL=(ALL) NOPASSWD:ALL' | tee -a /etc/sudoers
cp -f /home/ec2-user/.ssh/authorized_keys /tmp/
chown $CHEFUSER /tmp/authorized_keys
runuser -l $CHEFUSER -c 'mkdir ~/.ssh/'
runuser -l $CHEFUSER -c 'mkdir ~/.aws/'
runuser -l $CHEFUSER -c 'chmod 700 ~/.ssh/'
runuser -l $CHEFUSER -c 'mv -f /tmp/authorized_keys ~/.ssh/'
runuser -l $CHEFUSER -c 'chmod 600 ~/.ssh/authorized_keys'
Checking ownership and permissions seems to return as expected after running the script:
# ls -l .ssh/authorized_keys
-rw-------. 1 NWYzMThiMDBmNzljOTgxZmU1NDE1ZmE0 root 396 May 29 11:28 .ssh/authorized_keys
# stat -c '%a %n' .ssh/
700 .ssh/
# stat -c '%a %n' .ssh/authorized_keys
600 .ssh/authorized_keys
If I SSH with the new user, the key is rejected. On a new instance, if I copy/paste the same commands as root in the terminal (which is how the script runs according to Amazon), everything works fine and I can then SSH in with the new user.
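When the key is rejected like this, the quickest way to see the reason is usually sshd's own log on the instance right after a failed attempt; the log locations below assume an Amazon Linux / RHEL-style image, so treat this as a sketch rather than part of the original report:
# on the instance, immediately after a failed login with the new user
sudo tail -n 50 /var/log/secure
# or, on systemd-based images:
sudo journalctl -u sshd -n 50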

Related

How to edit the mosquitto.conf in a mosquitto Docker container?

I have a Linux system running several Docker containers. One of them is a Mosquitto container running from the mosquitto 1.6.7 Docker image.
I do not have control over how the Mosquitto container is created, as it is supplied by default by a supplier/client.
I need to make changes to the /mosquitto/config/mosquitto.conf file. This is the output when I run ls -l:
/mosquitto/config # ls -l
total 4
-rwxrwxr-x 1 nobody nobody 210 May 24 05:35 mosquitto.conf
I tried the commands below to add a comment to mosquitto.conf, but I was not successful.
/mosquitto/config # echo '#test' | su nobody -c 'tee -a mosquitto.conf'
nologin: this account is not available
/mosquitto/config # echo '#test' | su nobody -s sh -c 'tee -a mosquitto.conf'
su: can't execute 'sh': No such file or directory
/mosquitto/config # echo '#test' | su nobody -s bin/sh -c 'tee -a mosquitto.conf'
su: can't execute 'bin/sh': No such file or directory
/mosquitto/config # echo '#test' | su nobody -s /bin/sh -c 'tee -a mosquitto.conf'
tee: mosquitto.conf: Permission denied
#test
Is it possible to change the mosquitto.conf?
If yes, how? Thanks.
You don't.
You make a copy of it on the host machine, edit it there, and then mount that edited copy into the container when you start it.
e.g.
docker run -d -v /path/to/local/mosquitto.conf:/mosquitto/config/mosquitto.conf mosquitto
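If the current config first needs to be pulled out of the existing container to serve as the starting point, docker cp can do that; the container name and local path below are assumptions, not taken from the question:
# copy the current config out of the running container (container name assumed)
docker cp mosquitto:/mosquitto/config/mosquitto.conf ./mosquitto.conf
# edit ./mosquitto.conf on the host, then start a new container with it mounted
docker run -d -v "$PWD/mosquitto.conf":/mosquitto/config/mosquitto.conf mosquitto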

"echo "password" | sudo -S <command>" asks for password

I am trying to run a script without becoming the su user, and I use this command for that:
echo "password" | sudo -S <command>
If I use this command with scp, mv or whoami, it works fine, but when I use it with chmod, it asks for my user's password. I don't enter the password and the command still works; my problem is just that the system asks for the password at all. I don't want it to ask for a password.
The problem looks like this:
[myLocalUser#myServer test-dir]$ ls -lt
total 24
--wx-wx-wx 1 root root 1397 May 26 12:12 file1
--wx-wx-wx 1 root root 867 May 26 12:12 script1
--wx-wx-wx 1 root root 8293 May 26 12:12 file2
--wx-wx-wx 1 root root 2521 May 26 12:12 file3
[myLocalUser#myServer test-dir]$ echo "myPassw0rd" | sudo -S chmod 111 /tmp/test-dir/*
[sudo] password for myLocalUser: I DONT WANT ASK FOR PASSWORD
[myLocalUser#myServer test-dir]$ ls -lt
total 24
---x--x--x 1 root root 1397 May 26 12:12 file1
---x--x--x 1 root root 867 May 26 12:12 script1
---x--x--x 1 root root 8293 May 26 12:12 file2
---x--x--x 1 root root 2521 May 26 12:12 file3
You can use the sudoers file, located at /etc/sudoers, to allow specific users to execute commands as root without a password.
myLocalUser ALL=(ALL) NOPASSWD: /bin/chmod
With this line the user myLocalUser can execute chmod as root without a password being required.
But this also weakens part of the system's security, so be careful not to allow too much, and fence the rule in as much as possible.
sudoers information
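To keep the main file safe while fencing the rule in, the entry can also live in a drop-in under /etc/sudoers.d/ (included from /etc/sudoers on most distributions) and be edited with visudo so the syntax is checked before it takes effect; the drop-in file name here is an arbitrary example:
# open the drop-in with syntax checking
sudo visudo -f /etc/sudoers.d/mylocaluser-chmod
# contents of the drop-in:
myLocalUser ALL=(ALL) NOPASSWD: /bin/chmod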
sudo -S prints its password prompt to stderr.
If you don't want to see it, redirect stderr to /dev/null.
The following command redirects stderr at the local host:
echo <password> | ssh <server> sudo -S ls 2>/dev/null
It is equivalent to echo <password> | ssh <server> "sudo -S ls" 2>/dev/null
The following command redirects stderr at the remote server:
echo <password> | ssh <server> "sudo -S ls 2>/dev/null"
If you need to keep stderr but hide the [sudo] password for ... prompt, you can use process substitution on the local or the remote machine. Since the sudo prompt has no trailing newline, I use sed to cut the prompt off; that preserves the rest of the first stderr line produced by the started process.
# local filtering
echo <password> | ssh <server> "sudo -S ls" 2> >(sed -e 's/^.sudo[^:]\+: //')
# remote filtering
echo <password> | ssh <server> "sudo -S ls 2> >(sed -e 's/^.sudo[^:]\+: //')"

Generating ssh-key file for multiple users on each server

I have to create 60 SSH users on one of the servers. I created the users with a small user-creation script that loops through each user in the user list.
I'm trying to run a similar script that will generate SSH keys for each user.
#!/bin/sh
for u in `cat sshusers.txt`
do
echo $u
sudo su - $u
mkdir .ssh; chmod 700 .ssh; cd .ssh; ssh-keygen -f id_rsa -t rsa -N '';
chmod 600 /home/$u/.ssh/*;
cp id_rsa.pub authorized_keys
done
When I run this script, it basically logs into all 60 user accounts but does not create the .ssh dir or generate the passwordless SSH key. Any idea to resolve this would be greatly appreciated!
Thanks
sudo su - $u starts a new shell; the commands that follow aren't run until that shell exits. Instead, you need to run the commands with a single shell started by sudo.
while IFS= read -r u; do
  sudo -u "$u" sh -c "
    cd '/home/$u'
    mkdir .ssh
    chmod 700 .ssh
    cd .ssh
    ssh-keygen -f id_rsa -t rsa -N ''
    chmod 600 '/home/$u/.ssh/'*
    cp id_rsa.pub authorized_keys
  "
done < sshusers.txt
After trying several times, I made small changes that seem to work now.
#!/bin/bash
for u in `more sshuser.txt`
do
echo $u
sudo su - "$u" sh -c "
mkdir .ssh
chmod 700 .ssh
cd .ssh
ssh-keygen -f id_rsa -t rsa -N ''
chmod 600 '/home/$u/.ssh/'*
cp id_rsa.pub authorized_keys "
done
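For reference, the more conventional way to hand su a command is with its -c option rather than as extra shell arguments; here is a minimal sketch of the same loop in that style (file name taken from the original question, key type and paths unchanged):
#!/bin/bash
while IFS= read -r u; do
  # umask 077 makes .ssh, the key pair, and authorized_keys private to the user
  sudo su - "$u" -c "umask 077; mkdir -p ~/.ssh; ssh-keygen -f ~/.ssh/id_rsa -t rsa -N ''; cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys"
done < sshusers.txt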

correct way to rename user and remove password with ec2 user-data

I have a CentOS 7 base AMI and have successfully changed the user name using EC2 launch user data, modified from an Amazon Linux script:
#!/bin/bash
groupadd ec2-user
usermod -d /home/ec2-user -m -g ec2-user -l ec2-user centos
echo "" | sudo tee -a /etc/sudoers
echo "Defaults:root !requiretty" | sudo tee -a /etc/sudoers
echo "ec2-user ALL=(ALL) NOPASSWD: ALL" | sudo tee -a /etc/sudoers
echo "Defaults:ec2-user !requiretty" | sudo tee -a /etc/sudoers
Login works as expected and the home directory has been changed; however, when I use sudo it still asks for a password. As I cannot get into the file to check the format, I wonder if I am using the correct syntax?
How do I change the user and remove the sudo password requirement in a single script?
I believe your cloud-init user-data script is failing because it's attempting to use sudo without a TTY (and the !requiretty lines haven't been added yet). Since that script runs as root anyway, this should work:
#!/bin/bash
groupadd ec2-user
usermod -d /home/ec2-user -m -g ec2-user -l ec2-user centos
echo "" | tee -a /etc/sudoers
echo "Defaults:root !requiretty" | tee -a /etc/sudoers
echo "ec2-user ALL=(ALL) NOPASSWD: ALL" | tee -a /etc/sudoers
echo "Defaults:ec2-user !requiretty" | tee -a /etc/sudoers

how to escape quote in ssh command

I want to install the pub key for user test using the command below.
I know the root password and the user test does not exist.
cat test.pub | ssh root@127.0.0.1 "useradd -m test || su - test -c 'umask 077; mkdir /home/test/.ssh; cat >> /home/test/.ssh/authorized_keys'"
But the command does not work.
Error: Creating mailbox file: File exists
The problem is useradd -m test. I had deleted user test with userdel test && rm -rf /home/test, which leaves the mail spool file behind and causes the "Creating mailbox file: File exists" error. It should have been userdel -r test, which also removes the home directory and mail spool.
The command below works:
cat test.pub | ssh root@127.0.0.1 "useradd -m test && su - test -c 'umask 077; mkdir /home/test/.ssh; cat >> /home/test/.ssh/authorized_keys'"
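As for the quoting the title asks about: the outer double quotes are consumed by the local shell, so the remote shell receives su - test -c '...' with the inner single quotes intact, and nothing extra needs to be escaped as long as the two nesting levels use different quote characters.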
