Replace Amazon EC2 SSH hostnames (text file) from terminal output - Linux

I have deployed an Amazon EC2 cluster of 3 Ubuntu machines (2 of them make up the cluster and the last one is just a client that submits jobs and manages their storage). I connect to all of them via password-less SSH.
Every time I restart these machines, they get new public hostnames from Amazon, which I then have to replace in my SSH configuration file located at ~/.ssh/config.
So far, I have figured out a way to get their names and hostnames using the Amazon CLI with the following command on my local machine (CentOS 7):
aws ec2 describe-instances --query "Reservations[*].Instances[*].[PublicDnsName,Tags]" --output=text | grep -vwE "None"
This prints something like
ec2-XX-XX-XXX-XXX.us-east-2.compute.amazonaws.com
Name datanode1
ec2-YY-YY-YYY-YYY.us-east-2.compute.amazonaws.com
Name namenode
ec2-ZZ-ZZ-ZZZ-ZZZ.us-east-2.compute.amazonaws.com
Name client
i.e. the hostname, a new line, the corresponding name, and so on. The IP fields above, like XX-XX-XXX-XXX, are basically 4 hyphen-separated numbers of 2 or 3 digits each. The grep command simply removes a last, useless line. Now I want to find a way to put these hostnames into my SSH configuration file, or maybe regenerate it entirely. The file looks like
Host namenode
HostName ec2-YY-YY-YYY-YYY.us-east-2.compute.amazonaws.com
User ubuntu
IdentityFile ~/.ssh/mykey.pem
Host datanode1
HostName ec2-XX-XX-XXX-XX.us-east-2.compute.amazonaws.com
User ubuntu
IdentityFile ~/.ssh/mykey.pem
Host client
HostName ec2-ZZ-ZZ-ZZZ-ZZZ.us-east-2.compute.amazonaws.com
User ubuntu
IdentityFile ~/.ssh/mykey.pem
Please note that I don't know how the Amazon CLI command sorts the output. But of course, I can change the order of the machines in my SSH file or maybe it is a good idea to delete it and recreate it.
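A side note on the query: JMESPath can also pull out just the value of the Name tag next to the DNS name, so there is no "Name\t" prefix to strip afterwards. A rough sketch (it assumes every instance carries exactly one Name tag):
aws ec2 describe-instances --query "Reservations[].Instances[].[PublicDnsName, Tags[?Key=='Name'].Value | [0]]" --output text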

Below is what I finally figured out, and it works. It is a Bash script you can save as a .sh file, e.g. script.sh, and execute. If it won't run, simply do chmod +x script.sh. I have added comments to clarify what each step does.
#Ask Amazon CLI for your hostnames, remove the last line, replace the "Name\t" with "", combine every 2 consecutive lines and save to a txt file
aws ec2 describe-instances --query "Reservations[*].Instances[*].[PublicDnsName,Tags]" --output=text | grep -vwE "None" | sed 's/Name\t//g' | sed 'N;s/\n/ /' > 'ec2instances.txt';
#Change the following variables based on your cluster
publicKey="mykey.pem";
username="ubuntu";
#Remove any preexisting temporary config file in the current directory and start fresh
rm -f config
touch config
while read -r line
do
#Read the line, keep the 1st word and save it as the public DNS
publicDns=$(echo "$line" | cut -d " " -f1);
#Read the line, keep the 2nd word and save it as the hostname you will be using locally to connect to your Amazon EC2
instanceHostname=$(echo "$line" | cut -d " " -f2);
#OK, we are now ready to build this host's SSH config entry
sshEntry="Host $instanceHostname\n";
sshEntry="$sshEntry HostName $publicDns\n";
sshEntry="$sshEntry User $username\n";
sshEntry="$sshEntry IdentityFile ~/.ssh/$publicKey\n";
#Append the entry to the config file; '-e' enables interpretation of the backslash escapes
echo -e "$sshEntry" >> config
#Below is the txt file you will be traversing in the loop
done < ec2instances.txt
#Done: replace the old SSH configuration file with the newly generated one
rm -f ~/.ssh/config
mv config ~/.ssh/config
rm ec2instances.txt
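For what it's worth, the same config can also be generated without the temporary file by piping the cleaned-up CLI output straight into the loop; a condensed sketch of the script above (same key file and username assumed):
aws ec2 describe-instances --query "Reservations[*].Instances[*].[PublicDnsName,Tags]" --output=text | grep -vwE "None" | sed 's/Name\t//g' | sed 'N;s/\n/ /' | while read -r publicDns instanceHostname; do
    printf 'Host %s\n HostName %s\n User ubuntu\n IdentityFile ~/.ssh/mykey.pem\n\n' "$instanceHostname" "$publicDns"
done > ~/.ssh/config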

Related

How to remotely SCP the latest file from one server to another?

I have 3 Linux machines, namely client, server1 and server2.
This is what I am trying to achieve: I would like to copy the latest file in a particular directory of server1 to server2. I will not be doing this by logging into server1 directly; I always log on to the client machine first. Here is the step-by-step approach that is happening now:
Log on to client machine using ssh
SSH into server1
Find the latest file in the directory /home/user1 on server1 using the command ls /home/user1 -Art | tail -n 1
SCP the latest file to /home/user2 directory of server2
This manual operation works just fine. I have automated it using a one-line script like the one below:
ssh user1@server1 scp -o StrictHostKeyChecking=no /home/user1/test.txt user2@server2:/home/user2
But as you can see, I am uploading the file /home/user1/test.txt. How can I modify this script to always upload the latest file in the directory /home/user1?
If zsh is available on server1, you could use its advanced globbing features to find the most recent file to copy:
ssh user1@server1 \
"zsh -c 'scp -o StrictHostKeyChecking=no /home/user1/*(om[1]) user2@server2:/home/user2'"
The quoting is important in case your remote shell on server1 is not zsh.
You can use SSH to list the last modified file and then use scp to copy it, as follows:
FILE=$(ssh user1@server1 "ls -tp $REMOTE_DIR | grep -v / | grep -m1 \"\""); scp user1@server1:$FILE user2@server2:/home/user2
Before launching this command, you need to set the remote directory in which to search for the last modified file:
REMOTE_DIR=/home/user1
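Combining the two ideas into something closer to the original one-liner, run from the client (a sketch, assuming password-less SSH from the client to server1 and from server1 to server2, and that the newest entry in /home/user1 is a regular file):
latest=$(ssh user1@server1 'ls -tp /home/user1 | grep -v / | head -n 1')
ssh user1@server1 "scp -o StrictHostKeyChecking=no /home/user1/$latest user2@server2:/home/user2"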

How to save FTP session logs to a file in Linux

I am using a bash script in Linux to transfer files to a server. My script runs from cron, and I have redirected its output to a file, but I cannot tell from the logs whether the file has been transferred to the B server or not.
This is the cron:
1>>/home/owais/script_test/logs/res_sim_script.logs 2>>/home/owais/script_test/logs/res_sim.logs
And the FTP is as below:
cd ${dir}
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
lcd $dir
cd $destDir
bin
prompt
put FILENAME
bye
END_SCRIPT
The only thing that I get in the logs is:
Local directory now Directory_Name
Interactive mode off.
Instead of using FTP, consider rsync. Rsync is a fast and extraordinarily versatile file-copying tool. It can copy locally, to or from another host over any remote shell, or to or from a remote rsync daemon.
More information is at the following webpage: https://linux.die.net/man/1/rsync
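For example, a minimal rsync-over-SSH transfer that keeps its own log might look like this (a sketch; the destination host, directory, and log file name are placeholders):
rsync -av -e ssh "$dir/FILENAME" user@destination-server:/remote/dest/dir/ >> /home/owais/script_test/logs/rsync.log 2>&1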
I have used ftp -inv Host << EOF >> LogFilePath and it worked. Thank you all for the support
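Applied to the script above, that would look roughly like the following (a sketch; the log path is a placeholder, and with -n the login is done via the user command instead of quote USER/quote PASS):
cd ${dir}
ftp -inv $HOST <<END_SCRIPT >> /home/owais/script_test/logs/ftp_session.log 2>&1
user $USER $PASSWD
lcd $dir
cd $destDir
bin
put FILENAME
bye
END_SCRIPT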

Passwordless execution of a local script on a remote machine as root, logged to a local file

I've been working on a bash script that automatically runs certain scripts on remote machines and saves the logs to certain folders. Up to now I have been copying the local script to the remote machine, executing it into a remote log, copying the remote log into a local folder, and then deleting the remote log and the remote copy of the script.
This works, but I know it can work better if I can avoid all the in-between steps. The one caveat is that it needs to be automatic and passwordless (meaning no user input at all). One of the scripts needs to be run as root or it won't display all the necessary information and will user-lock the machine temporarily.
The code I am currently using to execute the remoteScript into a log that I later retrieve with scp is below.
sshpass -f password.txt ssh user#1.1.1.1 "echo $password | sudo -S /home/user/remoteScript.sh > remoteLog.txt"
And in my testing, execution of a local script on the remote machine into a local log file works like this:
sshpass -f password.txt ssh user#1.1.1.1 "bash -s" < /home/user/localScript.sh >> localLog.txt
How could I combine the elements of the two code examples above in order to make a local script run on a remote machine with root privilege and log the output into a local text file?
Some things I have tried that do not work include:
sshpass -f password.txt ssh user#1.1.1.1 "bash -s" < "echo $password | sudo -S /home/user/script.sh >> log.txt"
sshpass -f password.txt ssh user#1.1.1.1 "echo $password | sudo -S /home/user/script.sh" >> log.txt
and notably
sshpass -f password.txt ssh user@1.1.1.1 echo $password | sudo -S /home/user/script.sh >> log.txt
which just executes the local script with root privilege on the local machine.
I have tried many variations of the above commands and I believe it's some sort of piping or flow issue, but I cannot figure it out. Is there any way to do this?
The machines are Ubuntu 16.04 and you cannot SSH in as root directly.
Thanks in advance
A) It might be worth looking into an orchestration/config-management solution (e.g. Ansible). It's a steep learning curve at first, but the initial outlay will pay off in spades down the line if you're managing multiple servers.
B) Set up password-less sudo for the scripts you want to execute, so you don't have to pass the password around in plaintext and can run without any input. In sudoers:
user ALL=(ALL) NOPASSWD:/home/user/script.sh
C) Set up an SSH key, so you don't need to use a password at all (a quick sketch follows).
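A minimal sketch of option C, reusing the user and address from the question (the key name is just an example):
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@1.1.1.1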
But in a nutshell, the code you're looking for is something like:
cat /home/user/localScript.sh | ssh user@1.1.1.1 "sudo bash" > log.txt
This executes a non-interactive bash shell as root on the remote machine, which takes the commands to execute on standard input; the standard output comes back over the SSH channel for you to write to your local log.
Look into &> or 2>&1 if you want standard error too.

Multiple passwords in sshpass

Is there a way to try multiple passwords when using the sshpass command? I have a txt file named hosts.txt listing multiple system IP addresses, and each system uses one of a few different passwords (for example 'mypasswd', 'newpasswd', 'nicepasswd'). The script reads the hosts.txt file and executes a set of commands on each system. Since I don't know which system uses which of these passwords, I wanted to try the whole set with the sshpass command and run the script with whichever password works. Is that possible?
#!/bin/bash
while read -r host; do
sshpass -p 'mypasswd' ssh -o StrictHostKeyChecking=no -n "root@$host" 'ls;pwd;useradd test'
done < hosts.txt
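To answer the literal question first: one option is to loop over the candidate passwords and stop at the first one that gets you in. A sketch (note that sshpass exits non-zero on a wrong password, but a failing remote command would also trigger a retry with the next password):
#!/bin/bash
passwords=('mypasswd' 'newpasswd' 'nicepasswd')
while read -r host; do
    for pw in "${passwords[@]}"; do
        if sshpass -p "$pw" ssh -o StrictHostKeyChecking=no -n "root@$host" 'ls;pwd;useradd test'; then
            break # this password worked, move on to the next host
        fi
    done
done < hosts.txt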
Instead of trying to get password-based authentication to work, isn't it an option to set up key-based auth? You can then either add your one public key to all systems, or optionally generate different ones and use the -i keyfile option or create an entry in the SSH configuration file as below.
Host a
IdentityFile /home/user/.ssh/host-a-key
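And once you know which password a host takes (for example from the loop above), you could push your public key to it so future runs need no password at all; a sketch with a placeholder address and key name:
sshpass -p 'mypasswd' ssh-copy-id -i ~/.ssh/id_ed25519.pub -o StrictHostKeyChecking=no root@203.0.113.10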

authorized_keys is not present for a new user

I want to set up an SSH key on a Linux machine running under AWS in the EC2 cloud.
To do that, I first installed Cygwin, then I followed these steps:
ssh-keygen -t dsa -f ~/.ssh/<key name> -C "<username of remote server>@<ip>"
cat ~/.ssh/<key name>.pub | ssh <username of remote server>@<ip> "cat >> ~/.ssh/authorized_keys"
Now the 1st statement executes successfully but the 2nd statement shows
bash: /home/<username of server>/.ssh/authorized_keys: No such file exists
Prior to this, I connected to the remote machine as root and created the user that I am specifying in commands 1 and 2 (username).
And I saw that the file is not present on the remote server for the user I created explicitly, but it is present for the user root.
When you create a new user, the ~/.ssh directory is not created by default. You will have to create the ~/.ssh/ directory and ~/.ssh/authorized_keys file yourself.
On your server, check whether ~/.ssh or ~/.ssh/authorized_keys exists. Looking at the error you have, it seems that it does not.
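On the server, logged in as the new user, the usual setup looks like this, including the permissions sshd expects (a generic sketch, not specific to this instance):
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys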
When you create a new Linux instance, you specify a key pair that you want to use. You have a choice of creating a new key pair and downloading its private key, or importing a public key of your own.
In your steps, you never reference the key pair you specified when you created the instance. So the 2nd command should be something like:
cat ~/.ssh/<key name>.pub | ssh -i ~/.ssh/<key specified when launching instance> ec2-user@<public ip> ...
ec2-user may be different depending on which AMI you used to create your instance; ubuntu is the default user for Ubuntu instances, for example.
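Putting both answers together, pushing the new public key for the instance's default user might look like this (a sketch with the same placeholders; appending a key for the separately created user would additionally need sudo on the remote side):
cat ~/.ssh/<key name>.pub | ssh -i ~/.ssh/<key specified when launching instance> ubuntu@<public ip> "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"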
