Using local system as SSH client and server - Linux

I am using my local system to learn SSH, and what I am trying to do is execute a command on the remote server.
I have the SSH server running in terminal1 and the client in terminal2.
I used the following command in terminal2:
ssh user1@127.0.0.1 echo Display this.
but it echoes in terminal2. How would I know if the command actually worked if it's not displaying in terminal1?
Thank you.

It worked correctly. It ssh'd into the server, executed the command, and returned the stdout of that command back to you.
SSH gains access to the server, but not necessarily to any TTYs active on it. You would have to jump through some hoops to send text to a specific TTY, such as your terminal1.
A better test would be:
ssh user1@127.0.0.1 'touch ~/testfile'
Then you can check on your server (which is localhost) to see if testfile was created in user1's home folder. If it was, then the connection and the command both succeeded.
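Another quick check that avoids creating files is to test the remote command's exit status from the client side. A minimal sketch, assuming the same user1@127.0.0.1 setup as above:
ssh user1@127.0.0.1 'true' && echo "remote command succeeded"
ssh user1@127.0.0.1 'exit 3'; echo "remote exit status: $?"
ssh propagates the remote command's exit code, so $? on the client reflects what happened on the server (255 is reserved for ssh's own connection errors).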

Related

Using the remote server's aliases while connecting through an ssh connection

I've been trying to configure my application to send commands through SSH. The SSH connection definitely works, but I want to be able to send '1' on the command line and have this open a file on the remote server. The alias works correctly on the remote machine, but it won't work when the '1' command is given through SSH. I've read around, and apparently this happens because the shell is non-interactive. However, due to the constraints of my application, I can't alter the SSH launch script easily. I'm instead looking for a way to alter the remote machine's ~/.bashrc file to allow the local machine to access the aliases in it. I've tried adding
if [ -z "$PS1" ]; then
shopt -s expand_aliases
fi
to the ~/.bashrc file, but it doesn't work. Any help would be a godsend!
I worked out why this was not working: within my code I connected to SSH and then immediately disconnected from the SSH channel. The process was being started but then shut down before it had time to execute.
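For anyone hitting the non-interactive-shell problem itself, a common workaround (a sketch, untested against this exact setup) is to force the remote bash to behave interactively so it sources ~/.bashrc and expands aliases:
ssh user@remotehost 'bash -ic "1"'
Here user@remotehost is a placeholder; the -i flag makes bash read ~/.bashrc, where the alias is defined, and expand_aliases is on by default in interactive shells.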

Run command on multiple Linux servers

One of my tasks at work is to check the health/status of multiple Linux servers every day. I'm thinking of a way to automate this task (without having to log in to each server every day). I'm a newbie system admin, by the way. Initially, my idea was to set up a cron job that would run scripts and email the output. Unfortunately, it's not possible to send mail from the servers at the moment.
I was thinking of running the command in parallel, but I don't know how. For example, how can I see the output of df -h without logging in to the servers one by one?
You can run ssh with the -t flag (which forces pseudo-terminal allocation) to open an SSH session, run a command, and then close the session. But to get this fully automated, you should automate the login process for every server so that you don't need to type a password each time.
So to run df -h on a remote server and then close the session, you would run ssh -t root@server.com "df -h". Then you can process that output however you want.
One way of automating this would be to write a bash script that runs this command for every server and processes the output to check the health of each server.
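A minimal sketch of such a script, assuming key-based passwordless login is already set up and using placeholder hostnames:
#!/bin/bash
# Hypothetical list of servers to check
servers="web1.example.com web2.example.com db1.example.com"
for host in $servers; do
    echo "=== $host ==="
    # Short timeout so one dead server doesn't stall the whole run
    ssh -o ConnectTimeout=5 "admin@$host" "df -h"
done > health_report.txt
Reading health_report.txt then replaces logging in to each server by hand.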
For further information about the -t flag and how to automate the SSH login process, see:
https://www.novell.com/coolsolutions/tip/16747.html
https://serverfault.com/questions/241588/how-to-automate-ssh-login-with-password
You can use SSH tunnels or just plain ssh for this purpose. With an SSH tunnel you can redirect the output to your machine, or, as an alternative, you can run ssh with the remote commands from your machine and get the output on your machine too.
Please check the following pages for further reading:
http://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html
https://www.google.hu/amp/s/www.cyberciti.biz/faq/unix-linux-execute-command-using-ssh/amp/
If you want to avoid manual login, use ssh keys.
Create a file /etc/sxx/hosts and populate it like so:
[grp_ips]
1.1.1.1
2.2.2.2
3.3.3.3
Share your SSH key with all machines (see the key-distribution sketch after these steps).
Install sxx from a release package:
https://github.com/ericcurtin/sxx/releases
Then run a command like so:
sxx username@grp_ips "whatever bash command"
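To share the key, a minimal sketch using the example IPs above (assuming ssh-copy-id is available and you can still log in with a password once per host):
ssh-keygen -t ed25519
for ip in 1.1.1.1 2.2.2.2 3.3.3.3; do ssh-copy-id "username@$ip"; done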

Is it possible to write a script that will ssh to a client, dump a log file, and download it to your server?

My code below doesn't work when I ssh to the client and try to dump the log file to the server. Please look at the code below.
ssh 192.168.0.10
dmesg >>/log.txt
You need to include the command to run on the server as part of your ssh command. You can then do the output redirection on the client side:
ssh 192.168.0.10 'dmesg' >> local_file.log
As Khanna111 mentions, this will require a password to be entered (by default), which you can avoid by setting up SSH keys for passwordless login.
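That key setup is a one-time step. A minimal sketch, with user standing in for whatever account exists on 192.168.0.10:
ssh-keygen -t ed25519
ssh-copy-id user@192.168.0.10
After that, ssh user@192.168.0.10 'dmesg' >> local_file.log runs without a password prompt.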
How about sshing to the client, running the dmesg command, and then rsyncing the logs back? This assumes you can use rsync.
You could also have a cron job that periodically runs on the client, invokes dmesg, and dumps the log file, which can subsequently be copied over. This way you do not have to do an explicit ssh.
Another option that I would prefer is to get rsync to run the command "dmesg" before the transfer. The parameter to use is --rsync-path. The details are explained here: http://www.schwertly.com/2013/07/forcing-rsync-to-create-a-remote-path-using-rsync-path/
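A sketch of that trick, reusing the 192.168.0.10 host from the question and a hypothetical /tmp/dmesg.log dump path; --rsync-path normally names the remote rsync binary, so prepending a command makes the remote shell run it first:
rsync --rsync-path='dmesg > /tmp/dmesg.log && rsync' user@192.168.0.10:/tmp/dmesg.log ./dmesg.log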
EDIT 1: I am assuming that, in the case of ssh, you have thought about passwordless logins and the setup they require.

How to write an expect script to log in and run a command on a remote box

I wanted to execute commands on a remote Linux box from Windows and also wanted to collect the result of the executed command. Basically I have to go through 2 boxes to execute that command. Here is the flow:
Log in to a box
ssh to another box
run command
collect output of command locally (in file)
I tried the following:
F:\xyz>plink xyz@a1.b1.com -i F:\x\y\PRIVATEKEY.ppk -pw xyz
ssh -f root@166.1.8.1 yum upgrade Cyberc
but this is asking for a password. I could fix it by adding the id_rsa.pub value to authorized_keys, but we don't have permission to do that. So instead I wanted to write an expect script to pass the user/password and commands to complete my job.
Any help on EXPECT script would be much appreciated.
Unless the program on the remote Linux host is interactive (i.e. it has prompts that the user must respond to), you probably don't need to use expect - you can simply use plink to connect to the remote Linux host from your Windows machine and run the command. You can specify the username and password to authenticate with the remote host in the plink command. See the following links for more info:
http://the.earth.li/~sgtatham/putty/0.58/htmldoc/Chapter7.html
http://stackoverflow.com/questions/12844944/login-syntax-for-plink-using-ip-username-and-password
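For example, a sketch of the plink route with the output collected in a local file, reusing the hosts from the question; feeding the second hop's password via sshpass on the intermediate box is an assumption here, since the question rules out installing keys:
F:\xyz>plink xyz@a1.b1.com -i F:\x\y\PRIVATEKEY.ppk "sshpass -p 'secret' ssh root@166.1.8.1 'yum upgrade Cyberc'" > result.txt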

Linux: execute a command remotely

How do I execute a command/script on a remote Linux box?
Say I want to run service tomcat start on box b from box a.
I guess ssh is the best secured way for this, for example :
ssh -OPTIONS -p SSH_PORT user@remote_server "remote_command1; remote_command2; remote_script.sh"
where the OPTIONS have to be deployed according to your specific needs (for example, binding to ipv4 only) and your remote command could be starting your tomcat daemon.
Note:
If you do not want to be prompted at every ssh run, please also have a look at ssh-agent, and optionally at keychain if your system allows it. The key is... to understand the ssh key exchange process. Please take a careful look at ssh_config (i.e. the ssh client config file) and sshd_config (i.e. the ssh server config file). Configuration filenames depend on your system; anyway you'll find them somewhere like /etc/ssh/sshd_config. Ideally, do not run ssh as root, obviously, but as a specific user on both sides, server and client.
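A minimal sketch of the ssh-agent workflow just mentioned (the key filename is the common default; yours may differ):
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
ssh user@remote_server "remote_command1"
The agent holds the unlocked key for the session, so only the first ssh-add asks for the passphrase.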
Some extra docs beyond the project's main pages:
ssh and ssh-agent
man ssh
http://www.snailbook.com/index.html
https://help.ubuntu.com/community/SSH/OpenSSH/Configuring
keychain
http://www.gentoo.org/doc/en/keychain-guide.xml
an older tutorial in French (by myself :-) that might be useful too:
http://hornetbzz.developpez.com/tutoriels/debian/ssh/keychain/
ssh user@machine 'bash -s' < local_script.sh
or you can just
ssh user#machine "remote command to run"
If you don't want to deal with security and want to make it as exposed (aka "convenient") as possible for the short term, and/or don't have ssh/telnet or key generation on all your hosts, you can hack a one-liner together with netcat. Write a command to your target computer's port over the network and it will run it. Then you can block access to that port for all but a few "trusted" users, or wrap it in a script that only allows certain commands to run. And use a low-privilege user.
on the server
mkfifo /tmp/netfifo; nc -lk 4201 0</tmp/netfifo | bash -e &>/tmp/netfifo
This one-liner reads whatever string you send to that port and pipes it into bash to be executed. stderr and stdout are dumped back into netfifo and sent back to the connecting host via nc.
on the client
To run a command remotely:
echo "ls" | nc HOST 4201
