Generating ssh-key file for multiple users on each server - linux

I have to create 60 SSH users on one of our servers. I created the users with a small user-creation script that loops through each user in the user list.
I'm now trying to run a similar script that will generate SSH keys for each user.
#!/bin/sh
for u in `cat sshusers.txt`
do
echo $u
sudo su - $u
mkdir .ssh; chmod 700 .ssh; cd .ssh; ssh-keygen -f id_rsa -t rsa -N '';
chmod 600 /home/$u/.ssh/*;
cp id_rsa.pub authorized_keys
done
When I run this script, it logs into each of the 60 user accounts but does not create the .ssh directory or generate a passwordless SSH key. Any idea how to resolve this would be greatly appreciated!
Thanks

sudo su - $u starts a new shell; the commands that follow aren't run until that shell exits. Instead, you need to run the commands with a single shell started by sudo.
while IFS= read -r u; do
  # run the whole block in a single shell as $u, starting in that user's home directory
  sudo -u "$u" sh -c "
    cd '/home/$u'
    mkdir .ssh
    chmod 700 .ssh
    cd .ssh
    ssh-keygen -f id_rsa -t rsa -N ''
    chmod 600 '/home/$u/.ssh/'*
    cp id_rsa.pub authorized_keys
  "
done < sshusers.txt
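A quick sanity check once the loop finishes (someuser here is a placeholder; substitute any name from sshusers.txt):
sudo ls -l /home/someuser/.ssh
# expect id_rsa, id_rsa.pub and authorized_keys, owned by that user, mode 600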

After trying several times, I made small changes and it seems to work now.
#!/bin/bash
for u in `more sshuser.txt`
do
  echo $u
  sudo su - "$u" sh -c "
    mkdir .ssh
    chmod 700 .ssh
    cd .ssh
    ssh-keygen -f id_rsa -t rsa -N ''
    chmod 600 '/home/$u/.ssh/'*
    cp id_rsa.pub authorized_keys"
done

Related

Bash Scripting - passing no value

My aim is to create a bash script that can add new users to EC2 and give them access via SSH keys, but I am having a bit of an issue.
This is my current script. It stops whenever it needs to generate a private/public key pair, because ssh-keygen asks for a passphrase. How can I configure my script to just press enter?
#!/bin/bash
username=$1
ssh-keygen -b 1024 -f $username -t dsa
chmod 600 $username.pub
useradd $username
mkdir /home/$username/.ssh
chmod 700 /home/$username/.ssh
chown ball:ball /home/$username/.ssh
cat ball.pub >> /home/$username/.ssh/authorized_keys
chmod 600 /home/$username/.ssh/authorized_keys
chown ball:ball /home/$username/.ssh/authorized_keys
[root@ip-172- /]# ssh-keygen -b 1024 -f ball -t dsa
Generating public/private dsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ball.
You can pipe empty lines to ssh-keygen:
ssh-keygen -b 1024 -f ball -t dsa <<< ''
or
printf "" | ssh-keygen -b 1024 -f ball -t dsa
or reading from /dev/null:
ssh-keygen -b 1024 -f ball -t dsa < /dev/null
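Alternatively, you can avoid the prompt entirely by passing an empty passphrase on the command line with -N (this is what the scripts earlier on this page do):
ssh-keygen -b 1024 -f ball -t dsa -N ''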

authentication for SSH into EC2 with new user failing

I am working with Chef on EC2 instances, and created a user data script to be passed in through the knife ec2 command, which creates a new user, copies the public key file from the default ec2-user and sets the correct ownership and permissions.
#!/bin/bash
CHEFUSER="$(date +%s | sha256sum | base64 | head -c 32)"
useradd $CHEFUSER
echo $CHEFUSER 'ALL=(ALL) NOPASSWD:ALL' | tee -a /etc/sudoers
cp -f /home/ec2-user/.ssh/authorized_keys /tmp/
chown $CHEFUSER /tmp/authorized_keys
runuser -l $CHEFUSER -c 'mkdir ~/.ssh/'
runuser -l $CHEFUSER -c 'mkdir ~/.aws/'
runuser -l $CHEFUSER -c 'chmod 700 ~/.ssh/'
runuser -l $CHEFUSER -c 'mv -f /tmp/authorized_keys ~/.ssh/'
runuser -l $CHEFUSER -c 'chmod 600 ~/.ssh/authorized_keys'
Checking ownership and permissions after running the script seems to return what I expect:
# ls -l .ssh/authorized_keys
-rw-------. 1 NWYzMThiMDBmNzljOTgxZmU1NDE1ZmE0 root 396 May 29 11:28 .ssh/authorized_keys
# stat -c '%a %n' .ssh/
700 .ssh/
# stat -c '%a %n' .ssh/authorized_keys
600 .ssh/authorized_keys
If I SSH with the new user, the key is rejected. On a new instance, if I copy/paste the same commands as root in the terminal (which is how the script runs according to Amazon), everything works fine and I can then SSH in with the new user.
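As a generic debugging step (not specific to this script; the log path assumes a RHEL/Amazon-Linux-style host, and newuser, instance-ip and the key path are placeholders), you can watch sshd's log on the instance while retrying the login, and run the client verbosely:
# on the instance, as root
tail -f /var/log/secure
# on the client
ssh -v -i /path/to/private_key newuser@instance-ip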

How to execute multiple commands on machineB from machineA efficiently?

I have a tar file (abc.tar.gz) on machineA which I need to copy to machineB and untar there. On machineB an application is running that is part of that tar file, so I need to write a shell script that, when run on machineA, does the following things on machineB:
execute the command sudo stop test_server on machineB
delete everything inside the /opt/data/ folder, leaving that tar file.
copy the tar file to machineB into /opt/data/
now untar that tar file in the /opt/data/ folder.
now execute the command sudo chown -R posture /opt/data
after that, execute sudo start test_server to start the server
On machineA, my shell script and the tar file will be in the same location. This is the shell script I have so far, put together from tutorials, as I only recently started working with shell scripting.
#!/bin/bash
set -e
readonly SOFTWARE_INSTALL_LOCATION=/opt/data
readonly MACHINE_LOCATION=(machineB) # I will have more machines here in future.
readonly TAR_FILE_NAME=abc.tar.gz
ssh david@${MACHINE_LOCATION[0]} 'sudo stop test_server'
ssh david@${MACHINE_LOCATION[0]} 'sudo rm -rf /opt/data/*'
sudo scp TAR_FILE_NAME david@${MACHINE_LOCATION[0]}:$SOFTWARE_INSTALL_LOCATION/
ssh david@${MACHINE_LOCATION[0]} 'tar -xvzf /opt/data/abc.tar.gz'
ssh david@${MACHINE_LOCATION[0]} 'sudo chown -R posture /opt/data'
ssh david@${MACHINE_LOCATION[0]} 'sudo start test_server'
Did I get everything right, or can anything in the above shell script be improved? I need to run this script on our production machine.
Is it possible to show a status message for each step, saying the step executed successfully before moving on to the next one, and otherwise terminating the script with a proper error message?
You can do it all in a single ssh round-trip, and the tarball doesn't need to be stored at the remote location at all.
#!/bin/bash
set -e
readonly SOFTWARE_INSTALL_LOCATION=/opt/data
readonly MACHINE_LOCATION=(machineB)
readonly TAR_FILE_NAME=abc.tar.gz
ssh david@${MACHINE_LOCATION[0]} "
  set -e
  sudo stop test_server
  sudo rm -rf '$SOFTWARE_INSTALL_LOCATION'/*
  tar -C '$SOFTWARE_INSTALL_LOCATION' -x -v -z -f -
  sudo chown -R posture '$SOFTWARE_INSTALL_LOCATION'
  sudo start test_server" <"$TAR_FILE_NAME"
This also fixes two bugs: the tarball would have been extracted into your home directory (not /opt/data), and you were missing a dollar sign on the interpolation of the variable TAR_FILE_NAME.
I put the remote script in double quotes so that you can use $SOFTWARE_INSTALL_LOCATION inside it; notice how the interpolated value is wrapped in single quotes for the remote shell (this won't work if the value itself contains single quotes, of course).
You could perhaps avoid the chown if you could run the tar command as user posture directly.
Adding echo prompts to show progress is a trivial exercise.
Things will be a lot smoother if you have everything -- both ssh and sudo -- set up for passwordless access, but I believe this should work even if you do get a password prompt or two.
(If you do want to store the tarball remotely, just add a tee before the tar.)
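A minimal sketch of that last remark, keeping the same quoting style and reusing the abc.tar.gz name from the question: the tar line inside the remote script would become
tee '$SOFTWARE_INSTALL_LOCATION/abc.tar.gz' | tar -C '$SOFTWARE_INSTALL_LOCATION' -x -v -z -f -
tee writes the incoming stream to the file and passes it through on stdout, so tar still reads the same data from its stdin.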
You can generate a status message for each step using the exit status of each ssh command, as I have done for the first two ssh commands below.
#!/bin/bash
set -e
readonly SOFTWARE_INSTALL_LOCATION=/opt/data
readonly MACHINE_LOCATION=(machineB) # I will have more machines here in future.
readonly TAR_FILE_NAME=abc.tar.gz
# An error exit function
function error_exit
{
echo "$1" 1>&2
exit 1
}
if ssh david@${MACHINE_LOCATION[0]} 'sudo stop test_server'; then
  echo "`date -u` INFO : Stopped test_server" 1>&2
else
  error_exit "`date -u` ERROR : Unable to stop test_server! Aborting."
fi
if ssh david@${MACHINE_LOCATION[0]} 'sudo rm -rf /opt/data/*'; then
  echo "`date -u` INFO : Removed /opt/data contents" 1>&2
else
  error_exit "`date -u` ERROR : Unable to remove /opt/data contents! Aborting."
fi
sudo scp "$TAR_FILE_NAME" david@${MACHINE_LOCATION[0]}:$SOFTWARE_INSTALL_LOCATION/
ssh david@${MACHINE_LOCATION[0]} 'tar -xvzf /opt/data/abc.tar.gz'
ssh david@${MACHINE_LOCATION[0]} 'sudo chown -R posture /opt/data'
ssh david@${MACHINE_LOCATION[0]} 'sudo start test_server'
For password management, you can set up passwordless SSH authentication:
http://www.linuxproblem.org/art_9.html
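Roughly, that comes down to generating a key on machineA and appending its public half to ~/.ssh/authorized_keys on machineB, which ssh-copy-id does for you (user and host names reused from this question):
ssh-keygen -t rsa -N ''
ssh-copy-id david@machineB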

how to escape quote in ssh command

I want to install the public key for user test using the command below.
I know the root password, and the user test does not exist.
cat test.pub | ssh root@127.0.0.1 "useradd -m test || su - test -c 'umask 077; mkdir /home/test/.ssh; cat >> /home/test/.ssh/authorized_keys'"
But the command does not work.
Error: Creating mailbox file: File exists
The problem is useradd -m test. I had deleted user test with userdel test && rm -rf /home/test; it should have been userdel -r test.
The command below works:
cat test.pub | ssh root@127.0.0.1 "useradd -m test && su - test -c 'umask 077; mkdir /home/test/.ssh; cat >> /home/test/.ssh/authorized_keys'"
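On the quoting itself: the pattern above (double quotes around the whole remote command, single quotes around the inner su -c command) usually suffices. If you ever need a literal single quote inside a single-quoted remote command, the usual trick is to close the quote, add an escaped quote, and reopen it, e.g. (hypothetical example):
ssh root@127.0.0.1 'echo '\''quoted'\'''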

script to copy, install and execute on multiple hosts

I am trying to copy a few files to multiple hosts and install/configure them on each host by running specific commands that depend on the OS type. The IP address of each host is read from a hosts.txt file.
It appears that when I run the script, it does not execute on the remote hosts. Can someone help identify the issues with this script? Sorry for the basic question, I am quite new to shell scripting.
#!/bin/bash
export AGENT=agent-x86-64-linux-5.8.1.tar.gz
export AGENT_PROPERTIES_NONDMZ=agent.properties.nondmz
export agent_INIT=agent.sh
echo "####Installing hqagent####"
while read host; do
scp $AGENT $AGENT_PROPERTIES_NONDMZ $agent_INIT root@$host:/opt
if ssh -n root@$host '[ "$(awk "/CentOS/{print}" /etc/*release)" ] '
then
cd /opt
tar -xvzf $AGENT
mv -f /opt/agent.properties.nondmz /opt/agent-5.8.1/conf/agent.properties
mkdir /opt/hqagent/
ln -s /opt/agent-5.8.1/ /opt/hqagent/agent-current
useradd hqagent
groupadd hqagent
chown -R hqagent:hqagent /opt/hqagent /opt/agent-5.8.1/
cd /etc/init.d
chmod 755 hqagent.sh
chkconfig --add hqagent.sh
su - hqagent
/opt/agent-5.8.1/bin/hq-agent.sh start
else
cd /opt
tar -xvzf $AGENT
mv -f /opt/agent.properties.nondmz /opt/agent-5.8.1/conf/agent.properties
rm -rf /opt/hqagent.sh
mkdir /opt/hqagent/
ln -s /opt/agent-5.8.1/ /opt/hqagent/agent-current
useradd hqagent
groupadd hqagent
chown -R hqagent:hqagent /opt/hqagent /opt/agent-5.8.1
cd /etc/init.d
ln -s /opt/hqagent/agent-current/bin/hq-agent.sh hqagent.sh
cd /etc/init.d/rc3.d/
ln -s /etc/init.d/hqagent.sh S99hqagent
ln -s /etc/init.d/hqagent.sh K01hqagent
cd ../rc5.d
ln -s /etc/init.d/hqagent.sh S99hqagent
ln -s /etc/init.d/hqagent.sh K01hqagent
chkconfig --add hqagent.sh
su - hqagent
/opt/agent-5.8.1/bin/hq-agent.sh start
fi
done < hosts.txt
error:
tar (child): agent-x86-64-linux-5.8.1.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
mv: cannot stat `/opt/agent.properties.nondmz': No such file or directory
mkdir: cannot create directory `/opt/hqagent/': File exists
ln: creating symbolic link `/opt/hqagent/agent-current': File exists
useradd: user 'hqagent' already exists
groupadd: group 'hqagent' already exists
chown: cannot access `/opt/agent-5.8.1/': No such file or directory
chmod: cannot access `hqagent.sh': No such file or directory
error reading information on service hqagent.sh: No such file or directory
-bash: line 1: 10.145.34.6: command not found
-bash: line 2: 10.145.6.10: command not found
./hq-install.sh: line 29: /opt/agent-5.8.1/bin/hq-agent.sh: No such file or directory
It appears that the problem is that you run this script on the "master" server, but somehow expect the branches of your if-statement to be run on the remote hosts. You need to factor those branches out into their own files, copy them to the remote hosts along with the other files, and in your if-statement, each branch should just be an ssh command to the remote host that triggers the script you copied over.
So your master script would look something like:
#!/bin/bash
export AGENT=agent-x86-64-linux-5.8.1.tar.gz
export AGENT_PROPERTIES_NONDMZ=agent.properties.nondmz
export agent_INIT=agent.sh
# Scripts containing the stuff you want done on the remote hosts
centos_setup=centos_setup.sh
other_setup=other_setup.sh
echo "####Installing hqagent####"
while read host; do
  echo " ++ Copying files to $host"
  scp $AGENT $AGENT_PROPERTIES_NONDMZ $agent_INIT root@$host:/opt
  echo -n " ++ Running remote part on $host "
  if ssh -n root@$host '[ "$(awk "/CentOS/{print}" /etc/*release)" ] '
  then
    echo "(centos)"
    scp $centos_setup root@$host:/opt
    ssh root@$host "/opt/$centos_setup"
  else
    echo "(generic)"
    scp $other_setup root@$host:/opt
    ssh root@$host "/opt/$other_setup"
  fi
done < hosts.txt
The contents of the two auxiliary scripts would be the current contents of the if-branches in your original.
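For example, centos_setup.sh would contain roughly the CentOS branch of the original. A sketch, with two deliberate changes flagged here: the tarball name is spelled out because the auxiliary script does not inherit $AGENT from the master script's environment over ssh, and the final su - hqagent uses -c so the start command actually runs as hqagent instead of opening an interactive shell:
#!/bin/bash
# centos_setup.sh -- runs on the remote host; steps lifted from the question's CentOS branch
cd /opt
tar -xvzf agent-x86-64-linux-5.8.1.tar.gz
mv -f /opt/agent.properties.nondmz /opt/agent-5.8.1/conf/agent.properties
mkdir /opt/hqagent/
ln -s /opt/agent-5.8.1/ /opt/hqagent/agent-current
useradd hqagent
groupadd hqagent
chown -R hqagent:hqagent /opt/hqagent /opt/agent-5.8.1/
cd /etc/init.d
chmod 755 hqagent.sh
chkconfig --add hqagent.sh
su - hqagent -c '/opt/agent-5.8.1/bin/hq-agent.sh start'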
