How do I copy the output from ttyS0 to a file but still see it on ttyS0 in PuTTY - Linux

I can use this to send ttyS0 output to a log.txt file at the beginning of the script:
exec >> /mnt/Carousel_Games/systeminfo/pcuae-log.txt
exec 2>&1
That's OK, but now it will not show ttyS0 output in PuTTY. I can use this instead:
exec >> /dev/ttyS0
exec 2>&1
And it will show it in PuTTY but not in the log.txt.
Is there a way of getting it to do both: show it in PuTTY and send it to the log.txt file?
It's so I can see it booting on ttyS0, and I can look at the log.txt file if I need to; if someone is having a problem with it booting properly, I can see how it boots on their machine from the log.txt file they send me.

One way to do it is to open another terminal and tail the file the data is being written to:
tail -f /mnt/Carousel_Games/systeminfo/pcuae-log.txt
Then, from the 2nd terminal, run your command (or the script containing it):
exec >> /mnt/Carousel_Games/systeminfo/pcuae-log.txt
and your 1st terminal will show you what's coming into the file.
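Alternatively, bash process substitution can feed everything through tee in a single exec, avoiding the second terminal. A minimal sketch, assuming the script runs under bash (process substitution is not POSIX sh) and that the script's stdout starts out on ttyS0:
# Append to the log while tee's own stdout still goes to ttyS0
exec > >(tee -a /mnt/Carousel_Games/systeminfo/pcuae-log.txt) 2>&1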

Related

Stop exec and telnet from showing login outputs

I have a script that accesses a module locally using this code:
exec 3<> /dev/tcp/127.0.0.1/5037 ; echo -e "my command here" >&3 ; cat <&3
In the telnet session, I get these lines:
Remote connection from 127.0.0.1:51698 to 127.0.0.1:5037
Closing connection to 127.0.0.1:51698
These outputs also appear in telnet sessions (without the script).
How can I stop them? The script runs multiple times per minute and is spamming the console.
You can redirect it to a .txt file.
This might help you:
your_command > log.txt 2>&1
This will leave your console clean while all the logs are saved in log.txt.
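Applied to the original one-liner, a sketch might look like this (the log path is an assumption):
exec 3<> /dev/tcp/127.0.0.1/5037
echo -e "my command here" >&3
# Capture the module's reply in the log instead of on the console
cat <&3 > /tmp/module-log.txt 2>&1
exec 3>&-   # close the connection when done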

SSH, run process and then ignore the output

I have a command that will SSH and run a script after SSH'ing. The script runs a binary file.
Once the script is done, I can type any key and my local terminal goes back to its normal state. However, since the process is still running on the machine I SSH'ed into, any time it logs to stdout I see it in my local terminal.
How can I ignore this output without monkey-patching it on my local machine by piping it to /dev/null? I want to keep the output on the machine I am SSH'ing into, and I want to leave the SSH session altogether after the process starts. I can pass it to /dev/null on that machine, however.
This is an example of what I'm running:
cat ./sh/script.sh | ssh -i ~/.aws/example.pem ec2-user@11.111.11.111
The contents of script.sh look something like this:
# Some stuff...
# Run binary file
./bin/binary &
Solved it with ./bin/binary &>/dev/null &
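With that fix, a sketch of what script.sh might look like (the nohup is an extra precaution against hangup signals, not part of the original solution):
#!/bin/bash
# Some stuff...
# Run the binary detached: its output stays on the remote machine
nohup ./bin/binary &>/dev/null &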
Copy the script to the remote machine and then run it remotely. The following commands are executed on your local machine.
$ scp -i /path/to/sshkey /some/script.sh user@remote_machine:/path/to/some/script.sh
# Run the script in the background on the remote machine and redirect the output to a logfile. This will also exit from the SSH session right away.
$ ssh -i /path/to/sshkey \
user@remote_machine "/path/to/some/script.sh &> /path/to/some/logfile &"
Note, the logfile will be created on the remote machine.
# View the log file while the process is executing
$ ssh -i /path/to/sshkey user@remote_machine "tail -f /path/to/some/logfile"
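To pull the logfile back to your local machine afterwards, the same scp pattern works in reverse (paths as above):
$ scp -i /path/to/sshkey user@remote_machine:/path/to/some/logfile ./logfile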

redirect serial input from device to file in bash script

I'm working with a device connected to my PC via a serial port.
When I send a command to this device (using PuTTY or minicom), it sends me back some output.
I would like to save this output (the serial input of my PC) to a local file.
I found this workaround:
- In the first terminal I type:
cat -v /dev/ttyACM0 > filename
- In another terminal I type:
echo -ne 'cat "filename"\n\r' > /dev/ttyACM0
It works, but I'd like to automate the process with a single bash script:
./serialDownload.sh filename
I tried to put the first command in the background before executing the second one, but it doesn't work:
#!/bin/bash
SERIAL_PORT="/dev/ttyACM0"
BAUDRATE=9600
stty raw speed $BAUDRATE
./serialListen.sh $1 &   # this puts the first command in the background
sleep 2
echo -e 'cat "'$1'"\n\r' > $SERIAL_PORT
Thank you in advance for any kind of help!
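Since the thread has no accepted answer, here is a minimal self-contained sketch of serialDownload.sh that folds the listener into the same script; the sleep timings are assumptions, and note that stty needs -F to target the serial port rather than the script's own terminal:
#!/bin/bash
SERIAL_PORT="/dev/ttyACM0"
BAUDRATE=9600
OUTFILE="$1"

# Configure the port: raw mode at the chosen speed
stty -F "$SERIAL_PORT" raw "$BAUDRATE"

# Start capturing serial input in the background and remember its pid
cat -v "$SERIAL_PORT" > "$OUTFILE" &
CAT_PID=$!

sleep 2
# Ask the device to print the file
echo -ne "cat \"$OUTFILE\"\n\r" > "$SERIAL_PORT"

# Give the device time to answer, then stop capturing
sleep 5
kill "$CAT_PID"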

Send tee command via ssh

I send a tee command from host 1 to host 2:
ssh user@host2 '/path/run |& tee myFile.txt'
I use tee so that I get the output of the binary to be added to myFile.txt
The problem is that after a bit of time I want to regain control of my local host without a lot of printout, so I press CTRL+C. This lets the process on host2 continue to run, which is what I want, but it stops the tee process itself, so the file is not populated.
I tried to replace |& tee myFile.txt' with 2>&1 myFile.txt' & but it did not help.
How can I ensure that the file continues to be populated on host2, while regaining control to my session on host1?
If you want to record the results in a file (i.e. use IO redirection inside of the nohup), you need to enclose the whole pipeline in the nohup. nohup does not use shell expansions, since its argument is just COMMAND [ARGS], so wrapping the pipeline in sh -c is a good way:
ssh user@host2 'nohup sh -c "/path/run 2>&1 | tee myFile.txt" &'
but note that nohup will disconnect the terminal from the command and it might fail. It would be more robust to redirect directly to the file:
ssh user@host2 'nohup sh -c "/path/run > myFile.txt 2>&1" &'
Inspiration from the SO answer.
Use nohup, screen or tmux to background the process.
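A sketch of the tmux variant, where the session name run is an assumption:
# Start the pipeline in a detached tmux session on host2, then return immediately
ssh user@host2 'tmux new-session -d -s run "/path/run 2>&1 | tee myFile.txt"'
# Later, reattach to watch the output live
ssh -t user@host2 'tmux attach -t run'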

bash: what to do when stdout does not exist

In a very simplified scenario, I have a script that looks like this:
mv test _test
sleep 10
echo $1
mv _test test
and if I execute it with:
ssh localhost "test.sh foo"
the test file will have an underscore in the name as long as the script is running, and when the script is finished, it will send foo back. The script SHOULD keep running even if you terminate the ssh command by pressing ctrl+c or if you lose the connection to the server, but it doesn't (the file is not renamed back to "test"). So, I tried the following:
nohup ssh localhost "test.sh foo"
and it makes ssh immune to ctrl+c, but a flaky connection to the server still causes trouble. After some debugging, it turns out that the script WILL actually reach the end IF THERE IS NO ECHO IN IT. And when you think about it, it makes sense: when the connection is dropped, there is no more stdout (the ssh socket) to echo to, so it fails, silently.
I can, of course, echo to a file and then get the file, but I would prefer something smarter, along the lines of test tty && echo $1 (but tty invoked like this always returns false). Any suggestions are greatly appreciated.
The following command does what you want:
ssh -t user@host 'nohup ~/test.sh foo > nohup.out 2>&1 & p1=$!; tail -f ~/nohup.out & wait $p1'
... test.sh is located in the user's home directory
Explanation:
1.) "ssh -t user@host" ... pretty clear ... starts the remote session
2.) "nohup ~/test.sh foo > nohup.out 2>&1 &" ... starts the test.sh script with nohup in the background
3.) "p1=$!;" ... stores the child pid of the previous command in p1
4.) "tail -f ~/nohup.out &" ... tails nohup.out in the background to show the output of test.sh
5.) "wait $p1" ... waits for the test.sh process (whose pid is stored in p1) to finish
The above command works even if you interrupt it with ctrl+c.
You can use ...
ssh -t localhost "test.sh foo"
... to force a tty allocation.
As st0ne suggested, tail fails but does not cause the script to terminate, as opposed to cat and echo. So there is no need for nohup, redirecting stdout to a temporary file, etc. Just plain and simple:
mv test _test
sleep 10
echo $1 | tail
mv _test test
and execute it with:
ssh localhost "test.sh foo"
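If you would rather keep a bare echo, another option is to ignore SIGPIPE so a vanished stdout makes the write fail with a non-zero status instead of killing the script; a minimal sketch, not from the original answers:
#!/bin/bash
trap '' PIPE                    # ignore SIGPIPE; writes to a dead socket now just fail
mv test _test
sleep 10
echo "$1" 2>/dev/null || true   # harmless whether or not stdout still exists
mv _test test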
