Does the Ant sshexec task return when the command it's executing returns, or before?

Basically I've got two sshexec tasks in an Ant target, one after the other. The first one launches a message sender on one machine, and the second one launches a message receiver on a different machine.
I was assuming that the sshexec task issues the command to run the message sender and then returns while the sender is still running, so that the following sshexec task starts the receiver and both sender and receiver run in parallel, which is what I hope to achieve. But I'm not sure whether this is actually the case, or whether the first task only returns when the sender returns, so the receiver would only start after the sender has finished executing.
The sshexec task page doesn't seem to offer much info, and I'm somewhat new to Macs (the commands are being issued on Mac minis running macOS 10), so any help would be greatly appreciated. Thanks!
<!-- start sender -->
<sshexec host="${ip.client3}"
         username="${username}"
         password="${userpassword}"
         trust="true"
         command="./sender"
/>
<!-- start receiver -->
<sshexec host="${ip.client4}"
         username="${username}"
         password="${userpassword}"
         trust="true"
         command="./receiver"
/>

The <sshexec> task will return only after the remote command returns.
If you just want the two commands to start at the same time, you can use the <parallel> task. Tasks nested inside <parallel> run "at the same time", each on its own thread. However, this way each <sshexec> still needs to wait for its remote command to return.
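For example, a minimal sketch wrapping the two tasks from the question in <parallel> (untested; attribute values are taken from the build file above):
<parallel>
    <!-- both sshexec tasks start at the same time, each on its own thread -->
    <sshexec host="${ip.client3}" username="${username}" password="${userpassword}" trust="true" command="./sender"/>
    <sshexec host="${ip.client4}" username="${username}" password="${userpassword}" trust="true" command="./receiver"/>
</parallel>
The enclosing target still blocks until both remote commands have finished.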
If you just want your Ant script to launch those two commands and continue building without waiting for the remote commands to return, you may use something like nohup in your command line.
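For instance, a sketch of what that might look like (untested; note that & has to be escaped as &amp; inside an XML attribute, and redirecting output is commonly needed so the ssh session can close while the command keeps running):
<sshexec host="${ip.client3}"
         username="${username}"
         password="${userpassword}"
         trust="true"
         command="nohup ./sender > sender.log 2>&amp;1 &amp;"/>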
I am not sure whether nohup alone will work, because when I run a remote command from an ssh terminal with nohup like this:
nohup command & [ENTER]
I have to press Enter again before I can start using the same terminal to do something else.

Related

Linux wait command: only proceed if the previous job was successful (PuTTY)

I'm an end user with access to PuTTY, which I use to run selected scripts on our server the same way they would run during our overnight batch process.
Some of these I run in sequence using
run_process.ksh kj_job1 & wait; run_process.ksh kj_job2
However, kj_job1 can fail and kj_job2 will still run. I'd like kj_job2 to proceed only if kj_job1 completed successfully, but I can't find a guide online that walks me through what I need to do.
My knowledge in this area is limited: I simply navigate to the directory containing run_process.ksh and add the name of the job I want to run. I recently found out about & wait for chaining commands, and that with parentheses I can run things in parallel.
Any help is appreciated.
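For reference, a minimal sketch of the usual shell answer (assuming run_process.ksh exits with a non-zero status when a job fails): the && operator runs the second command only if the first one succeeded.
# kj_job2 runs only if kj_job1 exits with status 0 (success)
run_process.ksh kj_job1 && run_process.ksh kj_job2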

How to completely exit a running asyncio script in python3

I'm working on a server bot in python3 (using asyncio), and I would like to incorporate an update function for collaborators to instantly test their contributions. It is hosted on a VPS that I access via ssh. I run the process in tmux and it is often difficult for other contributors to relaunch the script once they have made a commit, etc. I'm very new to python, and I just use what I can find. So far I have used subprocess.Popen to run git pull, but I have no way for it to automatically restart the script.
Is there any way to terminate a running asyncio loop (ideally without errors) and restart it again?
You cannot restart an event loop that has been stopped by event_loop.stop().
And in order to incorporate the changes you have to restart the script anyway (some methods might not exist on the objects you have, etc.).
I would recommend something like:
import asyncio
import sys

async def git_tracker():
    # check for changes in version control, maybe wait for a sync point, and then:
    sys.exit(0)

asyncio.ensure_future(git_tracker())
This raises SystemExit but nevertheless exits the program cleanly.
And around the python $file.py invocation, wrap a shell loop:
while true; do git pull && python $file.py; done
This is (as far as I know) the simplest approach to solve your problem.
For your use case, to stay on the safe side, you would probably need to kill the process and relaunch it.
See also: Restart process on file change in Linux
As a necromancer, I thought I'd give an up-to-date solution, which we use on our UNIX system.
Using the os.execl function you can tell python to replace the current process with a new one:
These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller. Errors will be reported as OSError exceptions.
In our case, we have a bash script which executes killall python3.7, sending the SIGTERM signal to our Python apps, which in turn listen for it via the signal module and gracefully shut down:
import asyncio
import sys
# run inside the SIGTERM handler: stop the loop, then exit
loop = asyncio.get_event_loop()
loop.call_soon_threadsafe(loop.stop)
sys.exit(0)
The script then starts the apps in the background and finishes.
Note that killall python3.7 will send SIGTERM signal to every python3.7 process!
When we need to restart, we just run the following command:
os.execl("./restart.sh", 'restart.sh')
The first parameter is the path to the file and the second is the name of the process.
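Putting the pieces together, a rough sketch of the whole pattern (the restart.sh path and name are assumptions for illustration, and add_signal_handler is only available on Unix event loops):
import asyncio
import os
import signal

loop = asyncio.get_event_loop()

# On SIGTERM (e.g. from "killall python3.7"), stop the loop gracefully.
loop.add_signal_handler(signal.SIGTERM, loop.stop)

try:
    loop.run_forever()
finally:
    loop.close()

# When a restart is wanted instead of a plain exit, replace this process
# with the restart script (path and name here are placeholders):
os.execl("./restart.sh", "restart.sh")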

How to catch and act when prompted for user input in shell script?

I have a working shell script which calls another shell script to perform some action on processes running on the server. This inner shell script sometimes prompts for a user ID and password. If this happens, I want to break out of the inner script and run kill -9 on the process. Can anyone suggest how to achieve this?
One more point: whatever my shell script does is recorded in a log file, so I assume that when the script prompts for the user ID and password, this prompt also gets recorded in the log. Is there a way to check for this in the log file?
I am working on Linux. Please check and advise.
You can kill your child script after a given timeout:
( cmdpid=$BASHPID; (sleep 10; kill -9 $cmdpid) & exec my-child-script )
In this case you will kill my-child-script after the given period of time (10 seconds).
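Applied to the question (the inner script and log file names are placeholders), you can also use the fact that bash reports a SIGKILL-ed command with exit status 137 (128 + 9) to log the event afterwards:
( cmdpid=$BASHPID; (sleep 10; kill -9 $cmdpid) & exec ./inner-script.sh )
if [ $? -eq 137 ]; then
    echo "inner script killed after timeout (probably waiting for input)" >> run.log
fi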
You can't (easily) detect whether your script is waiting for input (on standard input); the only working method is to use strace/ptrace, but that is too complex and I don't think it's really worth it. The timeout-based approach seems by far more natural.
You can find some additional examples of this approach in this question:
Bash script that kills a child process after a given timeout
Regarding log files:
You can extract data from your log files using grep/sed, but to make this part of the answer more concrete, we would need some extra data.

Using VSPerf.exe instead of VSPerfCmd.exe

I would like to prepare a number of Visual Studio Profiler (VSP) reports using a batch script. On Windows 7 I used VSPerfCmd.exe in the following way:
VSPerfCmd /start:sample /output:%OUTPUT_FILE% /launch:%APP% /args:"..."
VSPerfCmd /shutdown
VSPerfCmd /shutdown waits until the application has finished its execution, closes data collection and only then the VSP report is generated. This is what I need.
I switched to Windows Server 2012 and now VSPerfCmd does not work; I need to use VSPerf instead. The problem is that I cannot get the same behavior as with VSPerfCmd.
Specifically, the /shutdown option is no longer available, and the available options do not wait until the application has finished; they stop or detach from the process immediately. This means I can't use them in a batch script where I run several processes one after another. Any ideas how to get the desired behavior?
You don't have to shut down vsperf manually. You can simply do:
vsperf /launch:YourApp.exe
And vsperf will stop automatically after your application finishes.
See: https://msdn.microsoft.com/en-us/library/hh977161.aspx#BKMK_Windows_8_classic_applications_and_Windows_Server_2012_applications_only
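If that is right, a sequential batch script becomes straightforward; a minimal sketch (application names are placeholders, and this assumes vsperf blocks until the launched application exits):
REM profile each application in turn; each vsperf run ends when its app exits
vsperf /launch:App1.exe
vsperf /launch:App2.exe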

python script gets killed by test for stdout

I'm writing a CGI script that is supposed to send data to a user until they disconnect, then run logging tasks afterwards.
THE PROBLEM: When the client disconnects (detected by the inability to write to the stdout buffer), instead of the break executing and the logging completing, the script ends or is killed (I cannot find any logs anywhere showing how this exit occurs).
Here is a snippet of the code:
for block in r.iter_content(262144):
    if stopRecord == True:
        r.close()
    if not block:
        break
    if not sys.stdout.buffer.write(block):  # the code fails here after a client disconnects
        break
cacheTemp.close()
#### write data to other logs and exit gracefully ####
I have tried using "except:" as well as "except SystemExit:" but to no avail. Has anyone been able to solve this problem? (It is for a CGI script which is supposed to log when the client terminates their connection)
UPDATE: I have now tried using signal to interrupt the kill process in the script, which also didn't work. Where can I see an error log? I know exactly which line fails and under which conditions, but there is no error log or anything like I would get if I ran a script which failed in a terminal.
When you say it kills the program, you mean the main Python process exits, and not via some thrown exception? That's kinda weird. A workaround might be to run the task in a separate thread or process, monitor that until it dies, and then execute the logging task afterwards.
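A rough sketch of that workaround (untested in a real CGI environment; function names are placeholders):
import multiprocessing

def stream_to_client():
    # placeholder for the loop above that writes blocks to sys.stdout.buffer
    pass

def run_logging_tasks():
    # placeholder for the post-disconnect logging
    pass

if __name__ == "__main__":
    worker = multiprocessing.Process(target=stream_to_client)
    worker.start()
    worker.join()  # returns whether the child exits normally or is killed
    run_logging_tasks()  # the parent survives the child's death and can log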
