GDB remote debugging of a C++ process started via SSH: how to redirect stdin - Linux

First, some background. I run a program by starting a process on a remote_host using ssh:
ssh -T remote_host "cd ~/mydir && ~/myprogram" < input.txt
The program, myprogram, reads stdin, which is attached to a local file input.txt.
Now, I need to remotely debug this program under gdb. If there were no stdin redirection, i.e. no < input.txt, I would be able to do this using gdb's target remote, something like this (at the gdb prompt):
(gdb) target remote | ssh -T remote_host gdbserver - myprogram
However, in the above example, I don't know how to attach myprogram's stdin to input.txt.
Is there something that would do the trick?

gdbserver doesn't read from stdin, so the program it invokes will have unfettered access to stdin. You should be able to do this:
ssh -T remote_host "cd ~/mydir && gdbserver :1234 ~/myprogram" < input.txt
where 1234 is an unused port. Then,
(gdb) target remote remote_host:1234
A drawback with this is that the gdb-gdbserver TCP connection won't be encrypted.
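If encryption matters, one option (an addition to the answer above, not part of it; the port number is the same example 1234) is to leave gdbserver as above and tunnel the port over SSH with local port forwarding:
ssh -T remote_host "cd ~/mydir && gdbserver :1234 ~/myprogram" < input.txt
Then, in a second local terminal, forward local port 1234 to the remote gdbserver:
ssh -N -L 1234:localhost:1234 remote_host
and point gdb at the local end of the tunnel:
(gdb) target remote localhost:1234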

Related

SSH, run process and then ignore the output

I have a command that will SSH and run a script after SSH'ing. The script runs a binary file.
Once the script is done, I can type any key and my local terminal goes back to its normal state. However, since the process is still running on the machine I SSH'ed into, any time it logs to stdout I see it in my local terminal.
How can I ignore this output without monkey-patching it on my local machine by piping it to /dev/null? I want to keep the output inside the machine I am SSH'ing into, and I want to leave the SSH session altogether after the process starts. Redirecting to /dev/null on the remote machine, however, is fine.
This is an example of what I'm running:
cat ./sh/script.sh | ssh -i ~/.aws/example.pem ec2-user@11.111.11.111
The contents of script.sh looks something like this:
# Some stuff...
# Run binary file
./bin/binary &
Solved it with ./bin/binary &>/dev/null &
Copy the script to the remote machine and then run it remotely. The following commands are executed on your local machine.
$ scp -i /path/to/sshkey /some/script.sh user@remote_machine:/path/to/some/script.sh
# Run the script in the background on the remote machine and pipe the output to a logfile. This will also exit from the SSH session right away.
$ ssh -i /path/to/sshkey \
user@remote_machine "/path/to/some/script.sh &> /path/to/some/logfile &"
Note: the logfile will be created on the remote machine.
# View the log file while the process is executing
$ ssh -i /path/to/sshkey user@remote_machine "tail -f /path/to/some/logfile"
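One caveat: depending on the remote shell and sshd configuration, the backgrounded script can still receive SIGHUP when the SSH session ends. If it dies after you disconnect, a common variation (an addition to the answer above, not part of it) is to prefix the command with nohup:
$ ssh -i /path/to/sshkey \
user@remote_machine "nohup /path/to/some/script.sh &> /path/to/some/logfile &"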

Redirect running process STDOUT/STDERR to SSH STDOUT using GDB

I have a process running on an embedded system (linux).
Its STDOUT/STDERR is the console, which is on a serial port.
I would like to redirect its outputs (standard and error) to those of an SSH session.
I have read you can do similar operations with GDB, but I don't know how you would redirect to the SSH session's STDOUT/STDERR instead of to a file.
I can't redirect to a file because of low disk resources. I have also seen some examples using a named pipe, but I don't have the mkfifo command available. I do have GDB.
Also, assuming this is possible, would the process terminate when I close the SSH session? If so, could I redirect back before I do?
Thanks.
You can do it as long as you can call libc functions from gdb.
# ssh root@embedded
Query daemon output location:
# ls -l /proc/`pidof daemon`/fd/1
/proc/13202/fd/1 -> /dev/null
It may not be /dev/null; it can point to some other console, or even to a pipe or a file. Store this location somewhere. Then query your SSH session's output location:
# ls /proc/self/fd/1 -l
lrwx------ 1 root root 64 Dec 15 16:51 /proc/self/fd/1 -> /dev/pts/9
or simply call tty if you have it.
Now goes the work:
# gdb -p `pidof daemon`
..
81 ../sysdeps/unix/syscall-template.S: No such file or directory.
(gdb) call open("/dev/pts/9",2,0)
$1 = 0x3
(gdb) call dup2(3,2)
$2 = 0x2
(gdb) call dup2(3,1)
$3 = 0x1
(gdb) quit
Detaching from program: /root/daemon, process 13202
daemon output/error output
Repeat the same steps before exiting the SSH session, replacing /dev/pts/9 in the open call with the initial output location you stored earlier.
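A minimal sketch that automates the above, assuming pidof and gdb are available on the target; daemon is the example process name, and, like the session above, it assumes open returns fd 3 (the lowest free descriptor in the target process):
#!/bin/sh
# Redirect a running process's stdout/stderr to the current terminal.
PID=$(pidof daemon)      # target process (example name)
TARGET_TTY=$(tty)        # this SSH session's terminal, e.g. /dev/pts/9
gdb -p "$PID" --batch \
  -ex "call (int)open(\"$TARGET_TTY\", 2, 0)" \
  -ex "call (int)dup2(3, 2)" \
  -ex "call (int)dup2(3, 1)"
Check ls -l /proc/$PID/fd/1 afterwards to confirm where fd 1 now points.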

Getting stty: standard input: Inappropriate ioctl for device when using scp through an ssh tunnel

Per the title, I'm getting the following warning when I try to scp through an ssh tunnel. In my case, I cannot scp directly to foo because port 1234 on device foo is being forwarded to another machine bar on a private network (and bar is the machine that is giving me a tunnel to 192.168.1.23).
$ # -f and -N don't matter and are only to run this example in one terminal
$ ssh -f -N -p 1234 userA@foo -L3333:192.168.1.23:22
$ scp -P 3333 foo.py ubuntu#localhost:
ubuntu@localhost's password:
stty: standard input: Inappropriate ioctl for device
foo.py 100% 1829 1.8KB/s 00:00
Does anyone know why I might be getting this warning about Inappropriate ioctl for device?
I got the exact same problem when I included the following line in my ~/.bashrc:
stty -ixon
The purpose of this line was to allow the use of Ctrl-S in bash's reverse history search.
A gmane post has the solution (the original link is dead, but a Web Archive copy survives):
'stty' applies to ttys, which you have for interactive login sessions. .kshrc is executed for all sessions, including ones where stdin isn't a tty. The solution, other than moving it to your .profile, is to make execution conditional on it being an interactive shell.
There are several ways to check for an interactive shell. The following solves the problem for bash:
[[ $- == *i* ]] && stty -ixon
I got the same issue while executing a script remotely. After many tries I had no luck solving the error; then I found an article about running a shell script through SSH. The issue was related to ssh itself, not to any other command: ssh -t remote_host "command" allocates a pseudo-TTY for the session, and the error goes away.
In the end, I created a blank .cshrc file (for Ubuntu 18.04). It worked.

pipe timely commands to ssh

I am trying to pipe commands to an open SSH session. The commands are generated by a script, which analyzes the results and sends the next commands accordingly.
I do not want to put all the commands in a script on the remote host and just run that script, because I am also interested in the status of the SSH process: sending the commands locally lets me "test" whether the SSH connection is alive, and get the appropriate return code from the SSH process.
I tried using something along these lines:
$ mkfifo /tmp/commands
$ ssh -t remote </tmp/commands
And from another term:
$ echo "command" >> /tmp/commands
Problem: SSH tells me that no pseudo-tty will be opened for stdin, and closes the connection as soon as "command" terminates.
I tried another approach:
$ ssh -t remote <<EOF
$(echo "command"; while true; do sleep 10; echo "command"; done)
EOF
But then, nothing is flushed to ssh until EOF is reached (in my case, never).
Do any of you have a solution?
Stop closing /tmp/commands before you're done with it. When you close the pipe, ssh stops reading from it.
exec 7> /tmp/commands  # open once
echo foo >&7 # write multiple times
echo bar >&7
exec 7>&- # close once
You can additionally use ssh -tt to force ssh to open a tty on the remote.
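Putting it together, a minimal sketch of the whole flow (remote and the command strings are placeholders):
mkfifo /tmp/commands
ssh -tt remote < /tmp/commands &   # reader; keeps the session open
exec 7> /tmp/commands              # open the pipe once for writing
echo "command" >&7                 # send commands as results come in...
echo "next command" >&7
exec 7>&-                          # ...close once; ssh sees EOF and exits
wait $!                            # collect ssh's return code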

Piping data to Linux program which expects a TTY (terminal)

I have a program in Linux which refuses to run if its stdin/stdout is not a TTY (terminal device). Is there an easy-to-use tool which will create a PTY, start the program with the newly created TTY, and copy all data over stdin/stdout?
The use case is not interactive, but scripting. I'm looking for the most lightweight solution, preferably not creating TCP connections, and not requiring too many other tools and libraries to be installed.
unbuffer, part of expect (sudo apt-get install expect-dev on Ubuntu Lucid), can fool a program into thinking it's connected to a TTY.
$ tty
/dev/pts/3
$ echo | tty
not a tty
$ echo | unbuffer tty
/dev/pts/11
You can use socat for this:
echo your stdin strings | socat EXEC:"your_program",pty STDIO > /stdout_file
For example, with bash:
echo ls | socat EXEC:'bash',pty STDIO > /tmp/ls_out
Or, for a program run with Docker:
# Run the docker task, here bash, in background
docker run -it --rm --name test ubuntu &
# Send "ls -la" to the bash running inside docker
echo 'ls -la' | socat EXEC:'docker attach test',pty STDIN
# Show the result
docker logs test