How to interact with telnet using empty - Linux

I need to replace a very simple expect script that looks like this:
#!/usr/bin/expect
spawn telnet 192.168.1.175
expect {
"assword" {send "lamepassword\r"}
}
interact
With the equivalent bash script using empty, like this:
#!/bin/bash
empty -f -i in -o out telnet 192.168.1.175
empty -w -i out -o in "assword" "lamepassword\n"
After which I need the user to interact with telnet, which I do not know how to do. The closest thing that comes to mind is binding stdin and stdout to the named pipes using something like socat - in. Any suggestions are more than welcome!

I tried cat out & cat /dev/stdin >in. It works, but it adds an extra newline, tab completion does not work, and Ctrl+C terminates cat rather than the process running on the remote host. I am trying to persuade socat to behave according to those needs.
Using socat for transmitting keyboard input to the telnet process is a good idea. Example:
cat out & socat -u -,raw,echo=0 ./in
To allow Ctrl-C to terminate socat, add escape=3:
cat out & socat -u -,raw,echo=0,escape=3 ./in
But note that this will not terminate the telnet session, since empty started it in daemon mode, so you can reconnect to it by running socat again. To end the telnet session, just log out.
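Putting the pieces together, a minimal sketch of the complete replacement script might look like this (same address, pipe names and password as above; untested, so the timing may need adjusting):
#!/bin/bash
# Start telnet under empty in daemon mode, wired to the named pipes in/out
empty -f -i in -o out telnet 192.168.1.175
# Wait for the password prompt and answer it
empty -w -i out -o in "assword" "lamepassword\n"
# Hand the session to the user: print telnet's output, feed raw keystrokes into the in pipe
cat out &
socat -u -,raw,echo=0,escape=3 ./in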

Related

SSH tunnel prevents my spawned script from exiting

I'm executing a bash script in Node.js like this:
const child_process = require('child_process');
const fs = require('fs');

const script = child_process.spawn('local_script.sh');
const stdout = fs.createWriteStream('stdout');
const stderr = fs.createWriteStream('stderr');
script.stdout.pipe(stdout);
script.stderr.pipe(stderr);
script.on('close', function(code) {
console.log('Script exited with code', code);
});
My local_script.sh uploads a script to my remote server and executes it:
#!/bin/bash
FILE=/root/remote_script.sh
HOST=123.456.78.9
scp remote_script.sh root@${HOST}:${FILE}
ssh root@${HOST} bash ${FILE}
Finally, my remote_script.sh is supposed to open an SSH tunnel (and perform some other actions that are not relevant for this question):
#!/bin/bash
REDIS_HOST=318.353.31.3
ssh -f -n root@${REDIS_HOST} -L 6379:127.0.0.1:6379 -N &
The problem is that even though I'm opening the SSH tunnel in the background, it seems my remote_script.sh never exits, because the Node.js close event is never called. If I don't open the SSH tunnel, it exits and emits the event as expected.
How can I make sure the script exits cleanly after opening the SSH tunnel? Note that I want the tunnel to persist after the script finishes.
I haven't tested this, but my guess is that the backgrounded ssh session (remote -> REDIS) is keeping the remote tty alive, and thus preventing the local -> remote session from closing. Try changing remote_script.sh to this:
#!/bin/bash
redis_host=318.353.31.3
nohup ssh -f -n "root@${redis_host}" -L 6379:127.0.0.1:6379 -N >/dev/null 2>&1 &
BTW, note that I switched the variable name to lowercase; there are a number of all-caps variables with special meanings (including HOST), and re-using any of them can have weird effects, so lower- or mixed-case variables are preferred for script use. Also, I double-quoted the variable reference, which won't matter in this case but is a good general habit for those cases where it does matter.
The way I managed to solve it is by using ssh -a root@${HOST} bash ${FILE} in my local_script.sh. Note the -a flag, which disables forwarding of my ssh agent.
The important clue was that when I ran remote_script.sh on my remote machine directly, it did everything as expected, including a clean exit; but when I then tried to log out, that's where it would hang.
Apparently ssh doesn't want to terminate while there are still active connections. When I typed ~#, which shows active ssh connections, it did indeed show the forwarded ssh agent connection.
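For reference, a sketch of local_script.sh with that change applied (same placeholder host and path as in the question, and lowercase variable names as suggested in the other answer):
#!/bin/bash
file=/root/remote_script.sh
host=123.456.78.9
scp remote_script.sh "root@${host}:${file}"
# -a disables agent forwarding, so no forwarded-agent connection is left
# holding the session open after remote_script.sh finishes
ssh -a "root@${host}" bash "${file}"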

Forward Ctrl+C &c. over "ssh -tt" running w/ command from heredoc

I'm trying to write a shell script that lets me run connect somemachine to simplify the following pattern:
ssh somemachine
tmux a
I've tried using a heredoc (as in "Python subprocess with heredocs") in order to send the "tmux a" command:
#!/bin/bash
ssh $1@$2 << 'ENDSSH'
tmux a
ENDSSH
However, that fails with "stdin is not a terminal". Following the suggestion in "Pseudo-terminal will not be allocated because stdin is not a terminal", I made the following modification:
#!/bin/bash
ssh -tt $1@$2 << 'ENDSSH'
tmux a
ENDSSH
But now all my shortcuts are intercepted. For example, Ctrl+C kills my SSH session rather than forwarding SIGINT to the remote process. Any suggestions?
I think you just need the -t flag and should drop the heredoc. Using the heredoc means that the ssh process doesn't have the terminal as its stdin (it has the heredoc instead), so it can't forward it to the pseudo-terminal on the remote side. Using the -tt flag forces a pty to be allocated even without a terminal on input, which means that keypresses go to the local process, not the remote one.
#!/bin/bash
ssh $1@$2 -t tmux a
works for me
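As a complete connect script, one possible sketch (the -A flag of tmux new-session attaches to the named session if it exists and creates it otherwise; the session name main is just an example):
#!/bin/bash
# Usage: ./connect user host
ssh -t "$1@$2" tmux new-session -A -s main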

Output a Linux command to a URL/port or socket instead of writing it to a file

I have a command which outputs certain data that I currently store in a file using '>>' redirection. Instead of doing that, I want a socket or a port on some server which will catch the output of the command. Basically, I want to send all my script's output to a socket or URL, whichever is possible.
Any help in this direction is most welcome.
You can use socat to listen on port 12345 and echo any data sent to it, like this:
socat -u TCP-LISTEN:12345,keepalive,reuseaddr,fork STDOUT
If you want to capture it to a file as well (file.log), you can use the same command with tee:
socat -u TCP-LISTEN:12345,keepalive,reuseaddr,fork STDOUT | tee file.log
You can run your program to output to bash's TCP virtual device:
./prog > /dev/tcp/localhost/12345
If you don't want to use bash magic then you can also use socat to send the data:
./prog | socat - TCP-CONNECT:localhost:12345
The above examples assume you are running your program and the "logger" on the same system, but you can replace "localhost" with the hostname or address of the system you wish to send to (where socat is listening).
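For example, to stream the output of a long-running command to a listener on another machine, assuming the listener shown above is already running on a host called loghost (a placeholder name):
# on loghost:
socat -u TCP-LISTEN:12345,keepalive,reuseaddr,fork STDOUT | tee file.log
# on the machine producing the data:
./prog | socat - TCP-CONNECT:loghost:12345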

How to get the output of a telnet command from bash?

I'm trying to get the list of processes running on my Windows machine from Linux, but I don't get any output when I do it in a script. If I use telnet manually and use the command pslist I get the complete list of processes, but not in my script.
Here is the bash script (minus the variables):
( echo open ${host}
sleep 1
echo ${user}
sleep 3
echo ${pass}
sleep 1
echo pslist
sleep 2
) | telnet
and I simply call it with bash pslist.sh; the output is something like this:
telnet> Trying ip_address...
Connected to ip_address.
Escape character is '^]'.
Welcome to Microsoft Telnet Service
login: my_loginmy_passwordpslistConnection closed by foreign host.
What am I doing wrong?
telnet is notoriously tricky to script. You may be able to succeed more often if you add still longer sleeps between the commands.
A better approach is to switch to a properly scriptable client such as netcat (aka nc). Better still would be to install an SSH server on your Windows box (for security, perhaps make it accessible only from inside your network) and set it up with passwordless authentication. Then you can simply run ssh user@ipaddress pslist
Terminate each echo with a \r character, like this: echo -e "${user}\r"
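Applying that to the script from the question, the body would look something like this (sleeps kept as in the original; a final exit is added as an assumption so the session closes cleanly):
( echo open ${host}
sleep 1
echo -e "${user}\r"
sleep 3
echo -e "${pass}\r"
sleep 1
echo -e "pslist\r"
sleep 2
echo -e "exit\r"
sleep 1
) | telnet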

SSH: guarding stdout against disconnect

My server deployment script triggers a long-running process through SSH, like so:
ssh host 'install.sh'
Since my internet connection at home is not the best, I can sometimes be disconnected while the install.sh is running. (This is easily simulated by closing the terminal window.) I would really like for the install.sh script to keep running in those cases, so that I don't end up with interrupted apt-get processes and similar nuisances.
The reason why install.sh gets killed seems to be that stdout and stderr are closed when the SSH session is yanked, so writing to them fails. (It's not an issue of SIGHUP, by the way -- using nohup makes no difference.) If I put touch ~/1 && echo this fails && touch ~/2 into install.sh, only ~/1 is created.
So running ssh host 'install.sh &> install.out' solves the problem, but then I lose any "live" progress and error output.
So my question is: What's an easy/idiomatic way to run a process through SSH so that it doesn't crash if SSH dies, but so that I can still see the output as it runs?
Solutions I have tried:
When I run things manually, I use screen for cases like this, but I don't think it will be of much help here because I need to run install.sh automatically from a shell script. Screen seems to be made for interactive use (it complains "Must be connected to a terminal.").
Using install.sh 2>&1 | tee install.out didn't help either (silly of me to think it might).
You can redirect stdout/stderr into install.out and then tail -f it. The following snippet actually works:
touch install.out && # so tail does not bark (race condition)
(install.sh < /dev/null &> install.out &
tail --pid "$!" -F install.out)
But surely there must be a less awkward way to do the same thing?
Try using screen:
screen ./install.sh
If your ssh session gets interrupted, you can simply reattach to the session via another ssh connection:
screen -x
You can provide a terminal to your ssh session using the -t switch:
ssh -t server screen ./install.sh
install.sh 2>&1 | tee install.out
if the only issue is not getting stderr. You didn't say exactly why the tee wasn't acceptable. You may need the other nohup/stdin tweaks.
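If screen is not available on the server, the tail-based approach from the question can at least be collapsed into a single ssh invocation, roughly like this (a sketch built from the question's own snippet, not tested against a dropped connection):
ssh host 'touch install.out
nohup install.sh </dev/null >install.out 2>&1 &
tail --pid "$!" -F install.out'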
