I have a script as below. When I run it in the 1st PuTTY session, the output is displayed in that session. But if I open a second PuTTY session and run the same script there, the output still goes to the old (1st) PuTTY session.
echo "*******Output command $args starts ***********"
pf=`oprmsg 'VARY CN(*),ACTIVATE'`
echo $pf
sleep 1
echo `oprmsg "$#"`
sleep 7
echo `oprmsg 'VARY CN(*),DEACTIVATE'`
sleep 1
echo "******* Output of command $args ends *************"
Any idea why?
I have a script that accesses a module locally using this code:
exec 3<> /dev/tcp/127.0.0.1/5037 ; echo -e "my command here" >&3 ; cat <&3
In the telnet session, I got these lines:
Remote connection from 127.0.0.1:51698 to 127.0.0.1:5037
Closing connection to 127.0.0.1:51698
These messages also appear with plain telnet sessions (without the script).
How can I stop them? The script runs multiple times per minute and is spamming the console.
You can redirect it to a file.
This might help you:
your_command > log.txt 2>&1
This will keep your console clean while all the output is saved in log.txt.
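Applied to the one-liner from the question, that might look like the following sketch (using >> so repeated runs accumulate in one file):
{ exec 3<> /dev/tcp/127.0.0.1/5037 ; echo -e "my command here" >&3 ; cat <&3 ; } >> log.txt 2>&1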
I am trying to write a shell script that opens a new window and runs a minicom terminal (connected to /dev/ttyACM0) in it.
Here's the script file my_script.sh:
#!/bin/bash
gnome-terminal --command minicom
echo "\n" >> /dev/ttyACM0
sleep 5
echo "\n" >> /dev/ttyACM0
echo "run x_boot" >> /dev/ttyACM0
sleep 5
echo "root" >> /dev/ttyACM0
sleep 3
echo "cd /tmp" >> /dev/ttyACM0
sleep 1
In the above code I am writing all the echo commands directly to the device file, not to the minicom terminal.
Requirements:
Now I need to send command1 to minicom
Make the terminal sleep for 5 seconds before sending the next command
Send command2
Wait again for 5 seconds
And so on for many automated commands
After that, exit the terminal without closing minicom
Please help me with this.
Use minicom scripting (runscript) instead of bash echoes. It has both send and sleep commands:
-S, --script=SCRIPT : run SCRIPT at startup
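For example, the sequence from the question could become a runscript file, say boot.runscript (a minimal sketch; the filename is arbitrary, send transmits its argument followed by a line ending, and sleep takes seconds):
send ""
sleep 5
send ""
send "run x_boot"
sleep 5
send "root"
sleep 3
send "cd /tmp"
sleep 1
You would then start it with something like minicom -D /dev/ttyACM0 -S boot.runscript (or wrap that in gnome-terminal --command '...' if you still want the separate window), replacing the raw echoes to the device file.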
I am trying to pipe commands to an opened SSH session. The commands will be generated by a script that analyzes the results and sends the next commands accordingly.
I do not want to put all the commands in a script on the remote host and just run that script, because I am also interested in the status of the SSH process: sending the commands locally lets me test whether the SSH connection is alive and get the appropriate return code from the SSH process.
I tried using something along these lines:
$ mkfifo /tmp/commands
$ ssh -t remote </tmp/commands
And from another term:
$ echo "command" >> /tmp/commands
Problem: SSH tells me that no pseudo-tty will be opened for stdin, and closes the connection as soon as "command" terminates.
I tried another approach:
$ ssh -t remote <<EOF
$(echo "command"; while true; do sleep 10; echo "command"; done)
EOF
But then, nothing is flushed to ssh until EOF is reached (in my case, never).
Do any of you have a solution?
Stop closing /tmp/commands before you're done with it. When you close the pipe, ssh stops reading from it.
exec 7> /tmp/commands   # open once
echo foo >&7            # write multiple times
echo bar >&7
exec 7>&-               # close once
You can additionally use ssh -tt to force ssh to open a tty on the remote.
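Putting both halves together, the whole flow might look like this sketch (remote and the command strings are placeholders):
mkfifo /tmp/commands
ssh -tt remote < /tmp/commands &   # reader: ssh consumes the fifo in the background
exec 7> /tmp/commands              # writer: open the fifo once and keep it open
echo "command" >&7                 # each write reaches ssh immediately
sleep 10
echo "command" >&7
exec 7>&-                          # closing fd 7 ends ssh's stdin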
I'm making a web kiosk display board using a Raspberry Pi, and I want to send some keystrokes to the browser window 2 minutes after it's loaded. The script sends the logon details for a webserver.
I've got a script that sends the keystrokes which works fine from the telnet console:
#!/bin/bash
username="username"
password="password"
echo "Setting Display"
export DISPLAY=:0
echo "Sending Username"
for char in $(sed -E 's/(.)/\1 /g' <<<"$username"); do
xdotool key $char
done
xdotool key Tab
echo "Sending Password"
for char in $(sed -E 's/(.)/\1 /g' <<<"$password"); do
xdotool key $char
done
xdotool key Return
echo "Waiting 5 Seconds"
sleep 5
echo "Setting Remember Password"
xdotool key Tab
xdotool key Tab
xdotool key Return
echo "Finished"
I've tried adding bash /home/pi/logon.sh to the rc.local file, but it doesn't send the keystrokes to the browser.
Does anyone know why that would be? As I say, it works fine from the telnet window if I run the script, but it doesn't work when run from boot.
I had sleep 120 on the line before it to stop it firing right away and wait until the browser has loaded, and I know the script is running from rc.local, because when I remove the sleep command I see the echoes from the script.
Any ideas?
The reason it wasn't working was because the script needed to be run as the user pi.
I changed the code in the rc.local script to this: su - pi -c "bash /home/pi/logon.sh &"
This makes the script run as the user pi, and the ampersand makes it run separately from the rc.local script by forking it (http://hacktux.com/bash/ampersand).
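In context, rc.local then looks something like this sketch (the file conventionally ends with exit 0):
#!/bin/sh -e
# ... other startup commands ...
su - pi -c "bash /home/pi/logon.sh &"
exit 0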
Put this in your crontab
@reboot /path/to/script
Edit it using
crontab -e
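For the kiosk script above, the entry might look like this (a sketch; the sleep 120 preserves the question's two-minute wait for the browser, and the crontab should be edited as user pi per the previous answer):
@reboot sleep 120 && bash /home/pi/logon.sh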
I am using SSH to start a background process on a remote server. This is what I have at the moment:
ssh remote_user@server.com "nohup process &"
This works, in that the process does start. But the SSH session itself does not end until I hit Ctrl-C.
When I hit Ctrl-C, the remote process continues to run in the background.
I would like to place the ssh command in a script that I can run locally, so I would like the ssh session to exit automatically once the remote process has started.
Is there a way to make this happen?
The "-f" option to ssh tells ssh to run the remote command in the background and to return immediately. E.g.,
ssh -f user@host "echo foo; sleep 5; echo bar"
If you type the above, you will get your shell prompt back immediately and then see "foo" printed. Five seconds later you will see "bar". In the meantime, you can keep using the shell.
When using nohup, make sure you also redirect stdin, stdout and stderr:
ssh user@server 'DISPLAY=:0 nohup xeyes < /dev/null > std.out 2> std.err &'
In this way you will be completely detached from the remote process. Be careful with using ssh -f user@host..., since that will only put the ssh process in the background on the calling side. You can verify this by running ps aux | grep ssh on the calling machine; it will show you that the ssh call is still active, just put in the background.
In my example above I use DISPLAY=:0 since xeyes is an X11 program and I want it started on the remote machine.
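Combining the two answers, a fully detached launch might look like this sketch (process stands in for the actual command):
ssh -f user@server 'nohup process < /dev/null > /dev/null 2>&1 &'
Here -f backgrounds ssh on the local side, while nohup and the redirections leave no open stream on the remote side that could keep the session alive.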
You could use screen to run your process, detach from it with Ctrl-a :detach, and exit your current session without problems. Then you can reconnect over SSH and attach to the screen again to continue your task or check whether it is finished.
Or you can send the command to an already running screen. Your local script should look like this:
ssh remote_user@server.com
screen -dmS new_screen sh
screen -S new_screen -p 0 -X stuff $'nohup process \n'
exit
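To check on the task later, reconnect and reattach (screen -ls lists the running sessions):
ssh remote_user@server.com
screen -r new_screen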
Well, this question is almost 10 years old, but I recently had to launch a very long script (taking several hours to complete) on a remote server, and I found a way using the crontab.
If you can edit your user's crontab on the remote server, connect to the server with ssh, edit the crontab, and add an entry that will start your script the next minute. Let's say it's 15:03. Add this line:
4 15 * * * /path/to/your/script.sh
Save your crontab and wait a minute for the script to be launched. Then edit your crontab again to remove the entry.
You can then safely exit ssh, even shut down your computer while the script is running.
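If you want to check on the script afterwards, you can redirect its output in the same crontab entry, e.g. (the log path is just an example):
4 15 * * * /path/to/your/script.sh > /tmp/script.log 2>&1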