How to use OpenSSH to send commands to a CLI service running inside a VM? - linux

I'm fairly new to OpenSSH; I am writing a framework to automate a couple of tasks.
I understand that it's easy to send commands over SSH to a remote VM to do some things. But I want to send some commands to a CLI-like service that's running inside my remote VM.
To explain it better:
[c:\~]$ ssh root@x.x.x.x
Connecting to x.x.x.x:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.
[root@vm1 ~]# ssh admin@0
admin@0's password:
admin connected from 127.0.0.1 using ssh on vm1
Your last successful login was at 2021-2-1 19:25:27
Your last successful login was from 127.0.0.1
admin@vm1>
This is what I'm doing:
a) I log in to my remote machine via SSH.
b) Then I log in to the CLI-like service that's running inside my VM.
So what I want to achieve is to run some commands on that CLI. My Perl script presently looks like this. (Note: my Perl script runs on a different VM altogether.)
#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;
use IO::Pty;

print "New connection\n";
my $ssh = Net::OpenSSH->new("x.x.x.x", user => 'admin', password => 'password', port => 22, timeout => 30);
$ssh->error and die "Could not connect\n" . $ssh->error;

my $cmd = $ssh->pipe_out("show alarm");
while (<$cmd>) {
    print "$_";
}
close $cmd;
undef $ssh;
This is the output that I see:
root@debian:~# perl automate.pl
New connection
Your last successful login was at 2021-2-1 19:40:20
Your last successful login was from x.x.x.x
^C
root@debian:~#
The output I'm expecting is from the command "show alarm", but it looks like the command is not reaching the CLI of my target machine.
It would be of great help if I can get some guidance from y'all.
Thanks.
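For what it's worth, a rough way to check this from a plain shell first (assuming the admin CLI is what answers directly at x.x.x.x:22, as the script suggests) is to see whether the CLI accepts commands piped on stdin at all, since pipe_out only asks sshd to run a single remote command:
# Does the CLI act on a command piped to its stdin?
echo 'show alarm' | ssh admin@x.x.x.x

# If the CLI insists on a real terminal, force pseudo-tty allocation and feed
# the commands as a here-document instead (whether this works depends on the service):
ssh -tt admin@x.x.x.x <<'EOF'
show alarm
exit
EOF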

Related

Using the remote server's aliases while connecting through an SSH connection

I've been trying to configure my application to send commands via SSH. The SSH connection definitely works okay, but I want to be able to send '1' on the command line and have it open a file on the remote server. This alias works correctly on the remote machine, but it won't work when the '1' command is given through SSH. I've read around and apparently this happens because the shell is non-interactive. However, due to the constraints of my application I can't alter the SSH launch script easily. I'm instead looking for a way to alter the remote machine's ~/.bashrc file so that the local machine can use the aliases defined on it. I've tried adding
if [ -z "$PS1" ]; then
    shopt -s expand_aliases
fi
to the ~/.bashrc file, but it doesn't work. Any help would be a godsend!
I worked out why this was not working: within my code I had connected over SSH and then immediately disconnected the SSH channel. The process was being run, but then immediately shut down before it had time to execute.
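For anyone stuck on the alias side of this, a sketch of the ~/.bashrc arrangement that usually lets non-interactive ssh commands use aliases (the alias name '1' comes from the question; the file it opens is a placeholder):
# Put these near the top of the remote ~/.bashrc, before any
# "return if not interactive" guard such as: case $- in *i*) ;; *) return;; esac
shopt -s expand_aliases
alias 1='less /path/to/remote/file'   # placeholder target file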

Run a command on multiple Linux servers

One of my tasks at work is to check the health/status of multiple Linux servers every day. I'm thinking of a way to automate this task (without having to log in to each server every day). I'm a newbie system admin, by the way. Initially, my idea was to set up a cron job that would run scripts and email the output. Unfortunately, it's not possible to send mail from the servers at the moment.
I was thinking of running the command in parallel, but I don't know how. For example, how can I see the output of df -h without logging in to the servers one by one?
You can run ssh with the -t flag to open an SSH session, run a command, and then close the session. But to get this fully automated, you should automate the login process to every server so that you don't need to type the password for each one.
So to run df -h on a remote server and then close the session, you would run ssh -t root@server.com "df -h". Then you can process that output however you want.
One way of automating this could be to write a bash script that runs this command for every server and processes the output to check each server's health; a sketch follows the links below.
For further information about the -t flag, or about how to automate the login process for ssh, see:
https://www.novell.com/coolsolutions/tip/16747.html
https://serverfault.com/questions/241588/how-to-automate-ssh-login-with-password
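A minimal sketch of such a loop (the host names are placeholders, and key-based login is assumed so that no passwords are prompted):
#!/bin/bash
# Check disk usage on every server in the list.
servers="web01.example.com web02.example.com db01.example.com"   # placeholder hosts
for host in $servers; do
    echo "== $host =="
    # df -h does not need a terminal, so -t is not required here.
    ssh -o ConnectTimeout=5 "root@$host" "df -h"
done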
You can use SSH tunnels, or just plain ssh, for this purpose. With an SSH tunnel you can redirect the output to your machine; alternatively, you can run ssh with the remote commands from your machine and get the output locally as well.
Please check the following pages for further reading:
http://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html
https://www.google.hu/amp/s/www.cyberciti.biz/faq/unix-linux-execute-command-using-ssh/amp/
If you want to avoid manual login, use ssh keys.
Create a file /etc/sxx/hosts
Populate it like so:
[grp_ips]
1.1.1.1
2.2.2.2
3.3.3.3
Share your SSH key on all machines (see the sketch below).
Install sxx from the package:
https://github.com/ericcurtin/sxx/releases
Then run command like so:
sxx username@grp_ips "whatever bash command"
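On "share ssh key on all machines": the usual one-time setup looks roughly like this (server.com stands in for each host in the group):
# Generate a key pair once on the local machine (accept the defaults or set a passphrase).
ssh-keygen -t ed25519
# Copy the public key to each server; after this, ssh stops prompting for a password.
ssh-copy-id root@server.com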

Using local system as ssh client and server

I am using my local system to learn SSH, and what I am trying to do is execute a command on the remote server.
I have the SSH server running on terminal1 and the client on terminal2.
I used the following command on terminal2:
ssh user1@127.0.0.1 echo Display this.
but it echoes on terminal2. How would I know if the command actually worked if it's not displaying in terminal1?
Thank you.
It worked correctly. It ssh'd into the server, executed the command, and returned the stdout of that command back to you.
SSH gains access to the server, but not necessarily to any TTYs active on it. You would have to jump through some hoops to send text to a specific TTY, such as your terminal1.
A better test would be:
ssh user1@127.0.0.1 'touch ~/testfile'
Then you can check on your server (which is localhost) to see if testfile was created in your user1 home folder. If it did, then the connection and the command succeeded.
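For completeness, one of the "hoops" mentioned above, as a rough sketch: write straight to the terminal device that terminal1 is using (run tty in terminal1 to find it; /dev/pts/0 below is just a placeholder):
# Prints the text on whatever terminal owns /dev/pts/0 on the server,
# provided user1 is allowed to write to that device.
ssh user1@127.0.0.1 'echo "Display this." > /dev/pts/0'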

ssh without key and collect the output using bash script

I want to create a bash script that will log in to all the Linux servers in my network using SSH and collect the output of the 'uptime' command into a local file. There is no key pair installed between the local server and the remote servers, so I need to give the password (the username and password are the same for all the remote servers) in the script itself. I know this is not a secure way to do it, but it is just for learning purposes. I see that the 'expect' command can be used for the SSH login with a password, but I'm confused about how to use it together with the 'uptime' command that provides the server status. So my requirements are:
1. I have a local server, test1, which contains a text file 'server_status.txt'.
2. I need a script on test1 that will try to log in to all the remote servers (say 192.168.0.1 to 192.168.0.50) using the same username and password. It will execute the command 'uptime' once logged in to the remote servers and store the output in the local file 'server_status.txt'.
REVOKED, because the connection is wanted without a key: paste your public key into the server's /path2userthatshouldlogon/.ssh/authorized_keys and run your commands remotely using ssh user@host commandtoexecute.
UPDATE: have a look at sshpass if you really need to use passwords, which is NOT RECOMMENDED.
Note: Doing this is poor practice. If you are testing around with this then you are learning a bad habit. Don't do this in production on servers you care about.
You'll want to execute the expect call inside a $( ... ) command substitution, and be sure to set the $USER and $SERVER variables (or just replace them):
# $PASSWORD is assumed to hold the login password; the captured text also includes the prompt echo.
uptime=$(expect -c "spawn ssh $USER@$SERVER uptime; expect \"assword:\"; send \"$PASSWORD\r\"; expect eof")
echo "$uptime"
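Putting the pieces of the question together, a rough sketch of the whole loop using sshpass (mentioned above) instead of expect; the username and password are placeholders and, again, storing them in a script is insecure:
#!/bin/bash
# Collect 'uptime' from 192.168.0.1 - 192.168.0.50 into server_status.txt.
user="admin"            # placeholder username
pass="secret"           # placeholder password
outfile="server_status.txt"
: > "$outfile"          # truncate the output file
for i in $(seq 1 50); do
    host="192.168.0.$i"
    echo "== $host ==" >> "$outfile"
    sshpass -p "$pass" ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
        "$user@$host" uptime >> "$outfile" 2>&1
done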

How to write a shell script to make an ssh connection to a machine and continue the remaining script on that machine

I am writing a script that runs on one server, creates an SSH connection to another machine, and should then continue running the rest of the script code on that remote machine. Can anybody tell me the way to do it?
If you want to execute a command on a remote host through ssh and get the output on the local host, which is what I understand from your question, then use:
ssh -n <hostname/IP> 'command'
where -n redirects stdin from /dev/null.
You can store the output in a variable or a file:
var=`ssh -n <hostname/IP> 'command'`
or
ssh -n <hostname/IP> 'command' >> output.txt
Also, if you want to send multiple commands, use ; as the command separator.
NOTE: passwordless ssh should be enabled from the local host to the remote host;
otherwise you need to specify the password explicitly.
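To actually continue a block of the local script on the remote machine, which is what the question asks, one common pattern is to feed the remote part to ssh as a here-document; a minimal sketch, with remote.example.com as a placeholder host:
#!/bin/bash
echo "running locally on $(hostname)"
# Everything between the EOF markers runs on the remote machine.
ssh -T user@remote.example.com 'bash -s' <<'EOF'
echo "now running remotely on $(hostname)"
uptime
df -h
EOF
echo "back on the local machine"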
