I am working on a script which gets the script name, the time to run that script, and the login/host name from a configuration file.
I don't have cron, at, or crontab permission.
Is there any other way to implement the logic to run a script at the time set in a configurable file, from another script running on a different host?
In Detail:
It is like this: script_A reads a configuration file from which it gets three inputs: script_B, the time to run it (ddmmyyyy h24:mm:ss), and login1@machine1. script_B has to be run at the given time on the given host.
None of the connected machines have cron, crontab, or at permissions.
I am using Solaris.
Can we have something like this in Unix: script_A creates a script_C which wraps script_B with a check on the time parameter. script_C is then copied to the remote machine and keeps running there in the background until the given time is reached. Once the time has come, it executes script_B (located on the remote host, as given in the config file) and exits.
Thanks.
If you want to execute command foo at epoch time xxxxxxxxxx on host, you could do:
$ delay=$(( xxxxxxxxxx - $(date +%s) ))
$ test $delay -gt 0 && sleep $delay && ssh host foo
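To tie this to the configuration format in the question, script_A would first have to turn the configured ddmmyyyy h24:mm:ss timestamp into epoch seconds. A minimal sketch, assuming bash and GNU date (often installed as gdate on Solaris); RUN_TIME, SCRIPT_B, LOGIN and HOST are hypothetical variables already read from the config file:
# RUN_TIME is e.g. "25122024 23:30:00" (ddmmyyyy h24:mm:ss), as read from the config file
day=${RUN_TIME:0:2}; month=${RUN_TIME:2:2}; year=${RUN_TIME:4:4}
clock=${RUN_TIME#* }
target=$(gdate -d "$year-$month-$day $clock" +%s)   # GNU date; plain Solaris date has no -d
delay=$(( target - $(gdate +%s) ))
# wait locally, then start script_B on the remote host, detached so ssh can return
if [ "$delay" -gt 0 ]; then
    sleep "$delay" && ssh "$LOGIN@$HOST" "nohup $SCRIPT_B </dev/null >/dev/null 2>&1 &"
fi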
The simplest method is to compile cron from source and deploy it on the target machine. Every time your code gets any kind of control over the machine, check if your cron daemon is running (classic PID file) and start it if necessary.
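A sketch of that check, with hypothetical paths for a privately built, user-level cron (adjust to wherever you install it and to how your particular cron source records its PID):
CRON_BIN="$HOME/local/sbin/cron"     # hypothetical install path for the compiled cron
PIDFILE="$HOME/var/run/cron.pid"     # wherever that build is configured to write its PID file
# classic PID-file check: start the daemon only if the recorded PID is missing or stale
if ! { [ -s "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; }; then
    "$CRON_BIN"    # typical cron builds fork into the background and refresh the PID file themselves
fi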
A warning, though:
You are trying to solve a social problem (missing permissions) with a technical solution. Your mileage will be low.
I have a script that monitors a specific server, giving me the disk usage, CPU usage, etc. I am using 2 Ubuntu VMs: I run the script on the server over SSH (ssh user@ip < script.sh from the first VM), and I want it to show values in real time, so I tried 2 approaches I found here:
1. while loop with clear
The first approach is using a while loop with "clear" to make the script run multiple times, giving new values every time and clearing the previous output like so:
while true
do
    clear
    # bunch of code
done
The problem here is that it doesn't clear the terminal; it just keeps printing the new results one after another.
2. watch
The second approach uses watch:
watch -n 1 script.sh
This works fine on the local machine (to monitor the current machine where the script is), but I can't find a way to make it run via SSH. Something like
ssh user@ip 'watch -n 1 script.sh'
works in principle, but requires that the script be present on the server, which I want to avoid. Is there any way to run watch for the remote execution (via SSH) of a script that is present on the local machine?
For your second approach (using watch), what you can do instead is run watch locally (from within the first VM) with an SSH command and a piped-in script, like this:
watch -n 1 'ssh user@ip < script.sh'
The drawback of this is that it will reconnect in each watch iteration (i.e., once a second), which some server configurations might not allow. SSH connection multiplexing (ControlMaster) lets serial ssh runs re-use the same connection; see the sketch below.
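A sketch of the relevant ~/.ssh/config entry on the first VM (monitored-vm is a placeholder alias for user@ip; ControlMaster, ControlPath and ControlPersist are standard ssh_config options):
# ~/.ssh/config -- reuse one connection for repeated ssh runs
# (monitored-vm is a placeholder alias for user@ip)
Host monitored-vm
    HostName ip
    User user
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
With that in place, watch -n 1 'ssh monitored-vm < script.sh' piggybacks on the shared connection instead of reconnecting every second.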
But if what you want to do is monitor servers, what I really recommend is using a monitoring system like Telegraf.
I'm new to Linux (Red Hat) and I'm trying to automate the Eggplant tests I've come up with for our GUI-based software. This will be run nightly.
I'll be running the base script on server 001. It will copy the latest version of our software to a remote PC serving as a test bench, then kick off a Bash script on the test bench which configures the environment and starts up the software.
I've tried:
ssh test@111.111.111.002 'bash -s' < testConfig.sh
ssh test@111.111.111.002 'bash -s testConfig.sh < /dev/nul > testConfig.log 2>&1 &'
ssh -X test@111.111.111.002 'testConfig.sh'
The first one just fails. The second tries to start the software, but instead of running on the test bench it tries to run on the server. The third one, of course, opens windows on the server and runs the software there; but I need the display on the test bench, not the server.
A coworker happened by, saw my difficulty, and suggested this, which worked:
ssh user@1.1.1.1 'setenv DISPLAY :0.0 && testConfig.sh'
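Note that setenv is csh/tcsh syntax, so this relies on the remote login shell being csh-like. If the remote account's shell is bash instead, the equivalent (same placeholder host, testConfig.sh assumed to be on the remote PATH as above) would be:
ssh user@1.1.1.1 'export DISPLAY=:0.0 && testConfig.sh'
Either way, the point is that DISPLAY points at the test bench's own X display (:0.0), so the windows open there instead of being forwarded back to the server by ssh -X.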
Try the command below and let us know; note the double quotes instead of single quotes.
ssh user@1.1.1.1 "bash -s" < ./testConfig.sh
Make sure testConfig.sh is in the directory you are ssh'ing from; otherwise use an absolute path, e.g. /home/qwe/testConfig.sh.
Please also mention the error you are getting, if any.
I'm trying to get a script to run at startup, but do nothing if I've connected to my Raspberry Pi via SSH.
So far I've set up the crontab to run the script checkssh.sh automatically via @reboot sleep 30 && sudo bash ./checkssh.sh, and checkssh.sh contains this:
#!/bin/bash
if [ -n "$SSH_CLIENT" ] || [ -n "$SSH_TTY" ]; then
echo "SSH CONNECTED"
else
./autobackup.sh
fi
Running checkssh.sh from an SSH terminal returns 'SSH CONNECTED', which is expected, and letting it run automatically from the crontab at reboot when SSH isn't connected works correctly. However, when it runs at boot and I connect via SSH as soon as it's available, it still runs the backup script. I'm not sure where this is going wrong.
I need it to run automatically and, if there's no SSH connection, run autobackup.sh; but if there is an SSH connection, not run anything. The device I use for the SSH connection may vary and the network used may also vary, so a script that relies on specific IPs isn't ideal.
Thanks for any help :)
Those environment variables (SSH_CLIENT and SSH_TTY) are only set in the environment of an SSH session. You cannot check them from another process and expect them to fulfill your goals here.
Instead, run the program finger. This is the standard way to see who is logged in.
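For example, checkssh.sh could be rewritten along these lines. This is a sketch using who, which reports the same logins as finger and is usually preinstalled; the pts/ pattern is an assumption about how SSH sessions show up on the Pi:
#!/bin/bash
# SSH logins appear on pseudo-terminals (pts/N) in the output of who/finger
if who | grep -q 'pts/'; then
    echo "SSH CONNECTED"
else
    ./autobackup.sh
fi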
Probably you need to add some delay before running your script to allow the SSH service to come up. If the cron service comes up before sshd does, you will get a failure. Try:
@reboot sleep 60 && bash ./checkssh.sh
Also, I would substitute the '.' with the full script path. In one scenario I had to add as many as 120 seconds to get the @reboot crontab entry to work right, but ssh should not need that much. You can trim the 60 seconds to your needs once you get it working.
SOLVED
Scenario: I am a beginner with bash scripts, Windows Task Scheduler, and such. I am able to run a local bash script through Windows Task Scheduler successfully.
Problem: I need to do this on many computers, so I think storing just one copy of the bash script on a remote server may help. All my Task Scheduler needs to do is run the script and output a log. However, I can't get the correct syntax for the argument.
The below is what I have currently:
Program/Script: C:\cygwin64\bin\bash.exe
Argument (works successfully):
-l -c "ssh -p 222 ME#ME.com "httpdocs/bashscript.sh" >> /cygdrive/c/Users/ME/Desktop/`date +%Y%m%d`.log 2>&1"
Start in: C:\cygwin64\bin
I also had to make sure that the user account under Properties in Task Scheduler was correct, as mine was wrong before. Key authentication is needed for ME@ME.com too.
For the password issue, you really should use ssh keys. I think your command would simply be ssh -p 222 ME@ME.com:.... I.e., just get rid of the --rsh stuff. – chrisaycock
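For the key authentication mentioned above, a typical setup from a Cygwin shell on each Windows client looks roughly like this (run once per machine; if ssh-copy-id is not part of your Cygwin install, append the public key to ~/.ssh/authorized_keys on the server by hand):
ssh-keygen -t ed25519                          # accept the defaults; an empty passphrase allows unattended runs
ssh-copy-id -p 222 ME@ME.com                   # installs the public key in ~/.ssh/authorized_keys on the server
ssh -p 222 ME@ME.com 'echo key login works'    # should now succeed without a password prompt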
I am trying to execute two scripts which are available as .sh files on a remote host, with 755 permissions.
I try calling them from the client host as below:
REMOTE_HOST="host1"
BOUNCE_SCRIPT="
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/stopScript.sh ${ENV};
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/startScript.sh ${ENV};
"
ssh ${REMOTE_HOST} "${BOUNCE_SCRIPT}"
The above lines are in a script on the local host.
While running the script on the local host, the first command on the remote host, i.e. stopScript.sh, gets executed correctly: it kills the running process it was intended to kill, without any error.
However, the output of the second script, i.e. startScript.sh, gets printed to the local host window, but the process it was intended to start does not start on the remote host.
Can anyone please let me know:
Is this the correct way of executing scripts on a remote host?
Should I see the output of the script running on the remote host locally as well, i.e. in the window of the local host?
Thanks
You could try the -n flag for ssh:
ssh -n $REMOTE_HOST "$BOUNCE_SCRIPT" >> $LOG
The man page has further information (http://unixhelp.ed.ac.uk/CGI/man-cgi?ssh+1). The following is a snippet:
-n   Redirects stdin from /dev/null (actually, prevents reading
     from stdin).
Prefacing your startScript.sh line with nohup may help. Oftentimes, commands you execute remotely will die when your ssh session ends; nohup allows your process to live after the session has ended. It would be helpful to know whether your process starts at all, or starts and then dies.
I think cyber-monk is right: you should launch the process with nohup to create a new, independent process. Also check whether your stop script is killing the right process (including the newly started one).
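Putting both suggestions together, the bounce script could detach startScript.sh explicitly. A sketch using the paths from the question (/tmp/startScript.log is just an example path; the redirections keep ssh from hanging on the started process):
BOUNCE_SCRIPT="
/code/sys/${ENV}/comp/1/${ENV}/scripts/unix/stopScript.sh ${ENV};
nohup /code/sys/${ENV}/comp/1/${ENV}/scripts/unix/startScript.sh ${ENV} </dev/null >/tmp/startScript.log 2>&1 &
"
ssh -n ${REMOTE_HOST} "${BOUNCE_SCRIPT}"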