Using VSPerf.exe instead of VSPerfCmd.exe - visual-studio-2012

I would like to prepare a number of Visual Studio Profiler (VSP) reports using a batch script. On Windows 7 I used VSPerfCmd.exe in the following way:
VSPerfCmd /start:sample /output:%OUTPUT_FILE% /launch:%APP% /args:"..."
VSPerfCmd /shutdown
VSPerfCmd /shutdown waits until the application has finished executing, stops data collection, and only then generates the VSP report. This is what I need.
I switched to Windows Server 2012 and now VSPerfCmd does not work; I need to use VSPerf instead. The problem is that I cannot get the same behavior as VSPerfCmd.
Specifically, the /shutdown option is no longer available, and the available options do not wait until the application has finished; they stop or detach from the process immediately. This means I can't use them in a batch script where I run several processes one after another. Any ideas how to get the desired behavior?

You don't have to manually shut down vsperf. You can simply do:
vsperf /launch:YourApp.exe
And vsperf will stop automatically after your application finishes.
See: https://msdn.microsoft.com/en-us/library/hh977161.aspx#BKMK_Windows_8_classic_applications_and_Windows_Server_2012_applications_only
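If you need to profile several applications one after another in a script, the blocking behavior of /launch makes a plain loop sufficient. A minimal sketch, assuming a POSIX-style shell (with cmd.exe it would be a `for` loop in a .bat file); `vsperf` is replaced by a stub function here so the sketch runs anywhere — drop the stub to call the real tool:

```shell
# Stub standing in for the real vsperf.exe, for illustration only.
vsperf() { echo "profiling: $*"; }

# /launch blocks until the launched application exits, so each report
# is finished before the next application starts.
for app in first.exe second.exe; do
    vsperf "/launch:$app"
done
```

Because each invocation only returns after the launched application exits, the reports are produced in order, matching the old VSPerfCmd /shutdown workflow.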

Related

linux wait command only proceed if previous job successful putty

I'm an end user who has access to PuTTY in order to run selected scripts on our server, as they would run during our overnight batch process.
Some of these I run in sequence using
run_process.ksh kj_job1 & wait; run_process.ksh kj_job2
However, kj_job1 can fail and kj_job2 would still run. I'd like a way for kj_job2 to proceed only if kj_job1 completed successfully, but I can't find a guide online to walk me through what I need to do.
My knowledge in this area is limited; I simply navigate to the directory where we have the file run_process.ksh and then add the name of the job I want to run. I recently found out about the & wait construct for running jobs in sequence, and about parentheses ( ) for running things in parallel.
Any help is appreciated.
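What the question asks for is exactly what the shell's `&&` operator provides: the right-hand command runs only when the left-hand one exits with status 0. A minimal demonstration with `true`/`false` standing in for the real jobs:

```shell
# "&&" runs the second command only if the first one exited with status 0;
# ";" (or "& wait;") runs it regardless of the first command's outcome.
false && echo "runs only after success"   # prints nothing: false failed
true && echo "runs only after success"    # prints the message

# Applied to the question, this would become:
# run_process.ksh kj_job1 && run_process.ksh kj_job2
```

This relies on run_process.ksh actually returning a nonzero exit status when the job fails; if the wrapper always exits 0, the shell cannot tell success from failure.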

Show progress in a azure-pipeline output

So I have my computer set up as an agent pool in Azure DevOps. I'm creating a latency test that the developers can use in their CI. The script runs in Python and tests various points in a system I have set up for the company, which is connected to the cloud; it's mainly for informative purposes. When I run the script I have to wait some time while the system goes through its normal network cycle, inspecting all the devices in the local network (not very important for the question). While waiting, I show a message in the terminal with "..." cycling from "." to ".." to "...", just to show the script didn't crash or anything.
the python code looks like this and works just fine when I run it locally:
sys.stdout.write("\rprocessing queue, timing varies depending on priority" + ("."*( i % 3 + 1))+ "\r")
sys.stdout.flush()
however the output shown in the azure pipeline shows all of the lines without replacing them. Is there a way to do what I want?
I am afraid showing progress like this is not supported in Azure Pipelines. The Azure Pipelines log console is not interactive; it just captures the agent machine's terminal output.
You might have to use a simpler way to indicate that the script is still executing and not finished yet. For a simple example:
sys.stdout.write("Waiting for processing queue ..." )
You can report this problem to the Microsoft development team; hopefully they will find a way to fix it in a future sprint.
I have seen it once but never actually used it myself. This can be done in both Bash and PowerShell; I'm not sure whether it works from inside a Python script — you might have to call Bash/PowerShell from within your Python script.
It is possible to set a progress value in percent that is visible outside of the log, but as I understand it this value is step-specific, meaning it only applies to the pipeline step you're currently in. You could carry the numeric value (however many percent) along into the next step, but the progress counter would then start again in that next step. I believe it is not possible to have a pipeline-global display of progress.
If you export a progress value, it will show up beside the step name in the left-hand step list.
This setting of a progress value (and also the exporting of a variable from one step to another, which is typically done the same way) is done by echoing special logging commands. There's a great description to be found here: Logging commands
What you want to do is something just as it is shown as an example on the linked page:
echo "Begin a lengthy process..."
for i in {0..100..10}
do
sleep 1
echo "##vso[task.setprogress value=$i;]Sample Progress Indicator"
done
echo "Lengthy process is complete."
All of these special logging commands start with ##vso[task.... The "VSO" is a relic of the time when Azure DevOps was called Visual Studio Online.
There are a whole bunch of them, but most of the time what you really need is exporting variables from one build-step context to another, which is done with ##vso[task.setvariable variable=name]value
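For the variable-export case, a sketch using the syntax from the logging-commands documentation (the variable and value names here are made up):

```shell
# Sets a pipeline variable; subsequent steps in the same job can read it
# (in a bash step it surfaces as the uppercased environment variable $MYVAR).
echo "##vso[task.setvariable variable=myVar]someValue"
```

As with setprogress, the agent only interprets the command if the `##vso[...]` string is the start of an output line, so it must go through echo/print rather than a log library that prefixes timestamps.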

How to completely exit a running asyncio script in python3

I'm working on a server bot in python3 (using asyncio), and I would like to incorporate an update function for collaborators to instantly test their contributions. It is hosted on a VPS that I access via ssh. I run the process in tmux and it is often difficult for other contributors to relaunch the script once they have made a commit, etc. I'm very new to python, and I just use what I can find. So far I have used subprocess.Popen to run git pull, but I have no way for it to automatically restart the script.
Is there any way to terminate a running asyncio loop (ideally without errors) and restart it again?
You cannot restart an event loop that has been stopped by event_loop.stop().
And in order to incorporate the changes you have to restart the script anyway (some methods might not exist on the objects you have, etc.)
I would recommend something like:
async def git_tracker():
    # check for changes in version control, maybe wait for a sync point and then:
    sys.exit(0)

asyncio.ensure_future(git_tracker())
This raises SystemExit, but despite that exits the program cleanly.
And wrap the python $file.py invocation in a shell loop: while true; do git pull && python $file.py; done
This is (as far as I know) the simplest approach to solve your problem.
For your use case, to stay on the safe side, you would probably need to kill the process and relaunch it.
See also: Restart process on file change in Linux
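The wrapper loop can be seen end-to-end with a stub standing in for `git pull && python $file.py` (all names here are illustrative): the stub "app" asks to be restarted twice by exiting 0, then ends the loop with a nonzero exit.

```shell
# Stub for `git pull && python app.py`: exits 0 on the first two runs
# (the script requesting a restart via sys.exit(0)), nonzero on the third.
runs=0
app() { runs=$((runs + 1)); [ "$runs" -lt 3 ]; }

# The restart loop: relaunch after every clean exit, stop on failure.
while app; do
    echo "restarting (run $runs exited 0)"
done
echo "stopped after $runs runs"
```

The design point is that the loop lives outside the Python process, so a fresh interpreter picks up the pulled changes on every iteration.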
As a necromancer, I thought I'd give an up-to-date solution, which we use in our UNIX systems.
Using the os.execl function you can tell python to replace the current process with a new one:
These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller. Errors will be reported as OSError exceptions.
In our case, we have a bash script which runs killall python3.7, sending the SIGTERM signal to our Python apps, which in turn listen for it via the signal module and gracefully shut down:
loop = asyncio.get_event_loop()
loop.call_soon_threadsafe(loop.stop)
sys.exit(0)
The script then starts the apps in the background and finishes.
Note that killall python3.7 will send SIGTERM signal to every python3.7 process!
When we need to restart, we just run the following command:
os.execl("./restart.sh", 'restart.sh')
The first parameter is the path to the file and the second is the name of the process.
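The key property of os.execl — the current process is replaced and the call never returns — mirrors the shell's own exec builtin, which is easy to demonstrate in isolation:

```shell
# `exec` replaces the running shell with the new command, just as os.execl
# replaces the Python process: the final echo is never reached.
sh -c 'echo before exec; exec echo replaced by new process; echo never printed'
```

This prints "before exec" followed by "replaced by new process"; the third echo disappears along with the original shell process, which is exactly why the restart script keeps the same process id.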

Does ant sshexec task return when command it's executing returns or before?

Basically I've got 2 sshexec tasks in an ant target, one after the other. The first one launches a message sender on one machine, and the second one launches a message receiver on a different machine.
I was assuming that the sshexec task issues the command to run the message sender and then returns while the sender is still running; the following sshexec task would then start the receiver, so both sender and receiver would run in parallel, which is what I hope to achieve. But I'm not sure whether this is actually the case, or whether the first task only returns when the sender returns, so the receiver would only start after the sender has finished executing.
The sshexec task page doesn't seem to offer much info, and I'm somewhat new to Mac (the commands are being issued on Mac minis running macOS 10), so any help would be greatly appreciated, thanks!
<!--start sender-->
<sshexec host="${ip.client3}"
username="${username}"
password="${userpassword}"
trust="true"
command="./sender"
/>
<!-- start receiver-->
<sshexec host="${ip.client4}"
username="${username}"
password="${userpassword}"
trust="true"
command="./receiver "
/>
The <sshexec> task will return only after the remote command returns.
If you just want the two commands to start at the same time, you can use the <parallel> task. Tasks nested in <parallel> run "at the same time" via multithreading. However, even this way, <sshexec> still needs to wait for the two commands to return.
If you want your Ant script to launch those two commands and continue building without waiting for the remote commands to return, you may use something like nohup on your command line.
I am not sure whether nohup works, because when I run a remote command from an SSH terminal with nohup like this:
nohup command & [ENTER]
I have to press Enter again before I can start to use the same terminal to do something else.
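If you go the nohup route inside sshexec's command attribute, redirecting stdout/stderr matters: it is nohup's output still being attached to the terminal that causes the extra Enter press, and with SSH an open output stream can also keep the session (and thus the task) from returning. A sketch, with `sleep 2` standing in for ./sender:

```shell
# Start the long-running command in the background, immune to hangup,
# with all output redirected so nothing stays attached to the terminal.
nohup sleep 2 > sender.log 2>&1 &
echo "launched; not waiting"
```

In the Ant file this would become something like command="nohup ./sender > sender.log 2>&1 &", which should let the task return immediately while the sender keeps running remotely.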

XulRunner exit code

I was wondering how someone can specify an exit code when shutting down a XULRunner application.
I currently use nsIAppStartup.quit() described in MDC nsIAppStartup reference to shutdown the application, but I can't figure out how to specify a process exit code.
The application is launched from a shell script and this exit code is needed to decide if it should be restarted or not.
NOTE : Passing eRestart to the quit function is useless in my situation because restarting depends on factors external to the application (system limits etc.)
Thank you and any help would be appreciated.
A quick look at the XRE_main function shows that it only returns a non-zero value in case of errors, and even then the exit code is fixed. If everything succeeds and the application shuts down normally, the exit code is 0; there is no way to change it. XULRunner isn't really meant to be used in shell scripts, so you will have to communicate your result in some other way (e.g. by writing it to a file).
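A sketch of that file-based handshake (the file name and status values here are made up): the application writes a status file before calling quit(), and the launching script reads it to decide on a restart.

```shell
# Simulates the application writing its status just before shutting down
# (inside XULRunner this would be done via file I/O before quit()).
echo 2 > xul_status.txt

# The wrapper script reads the status and decides whether to relaunch.
status=$(cat xul_status.txt)
if [ "$status" -ne 0 ]; then
    echo "restarting (status $status)"
fi
rm -f xul_status.txt
```

The wrapper could loop on this check, relaunching the application as long as the status file requests it.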
