How to supplement the functionality of an Origen command? - origen-sdk

I looked at the Origen docs on adding and overriding commands and don't see any Origen command besides -d that is forbidden from being overridden. However, when I override the target command, Origen does not do the same things it does when I run the built-in command. Here is what I expect when I set a target:
peologin02:ppekit $ origen t mytarget.rb
Target now set to: mytarget.rb
When I do the same with my overridden command, the target does not change. Here is my overridden target command:
when /target|^t$/
  unless ARGV.empty?
    curr_target = Origen.target.name
    new_target = ARGV.first
    unless curr_target == new_target
      # Remove the old target product's symlinks
      rm_src_links
    end
  end
  exit 0
I thought the 'exit 0' would ensure that ARGV gets passed back to Origen to complete the target command, but it seems like nothing happens.
thx

When a command is dispatched, the current application gets the first crack at trying to handle it, followed by its plugins and then finally Origen core.
If at any time along the way the process is exited, then that will end the Origen command execution completely and those further down the priority chain will never even see the command.
Therefore, if you want to override the command completely so that the original implementation is never hit, call exit 0 after you have handled it.
If you just want to supplement the existing behavior then don't call exit and the execution will continue on to the next in line.
In other words, remove exit 0 in the example above and it should do what you want.
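For illustration, here is the handler from the question with only the exit removed (a sketch assuming the rest of your command file stays unchanged), so dispatch falls through to Origen's built-in target handling:
when /target|^t$/
  unless ARGV.empty?
    curr_target = Origen.target.name
    new_target = ARGV.first
    unless curr_target == new_target
      # Remove the old target product's symlinks before the target changes
      rm_src_links
    end
  end
  # No exit here, so Origen core still sees the command and sets the target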

Related

Show progress in an azure-pipeline output

So I have my computer set up as an agent pool in azure-devops. I'm creating a latency test so the developers can use it in their CI. The script runs in Python and tests various points in a system I have set up for the company, which is connected to the cloud; it's mainly for informative purposes. When I run the script I have to wait some time while the connected system goes through its normal network cycle inspecting all the devices in the local network (not very important for the question). While waiting, I show a message in the terminal with "..." cycling from "." to ".." to "...", just to show the script didn't crash.
The Python code looks like this and works just fine when I run it locally:
sys.stdout.write("\rprocessing queue, timing varies depending on priority" + ("."*( i % 3 + 1))+ "\r")
sys.stdout.flush()
However, the output shown in the Azure pipeline prints all of the lines without replacing them. Is there a way to do what I want?
I am afraid showing progress like this is not supported in an Azure pipeline. The Azure pipeline log console is not interactive; it just captures the agent machine's terminal output.
You might have to use a simpler way to indicate that the script is still executing and not finished yet. For a simple example:
sys.stdout.write("Waiting for processing queue ..." )
You can report this problem to the Microsoft development team; they may find a way to support it in a future sprint.
I have seen this once but never actually used it myself. It can be done in both bash and PowerShell; I am not sure whether it works from inside a Python script, so you might have to call bash/PowerShell from within your Python script.
It is possible to set a progress value in percent that is visible outside of the log, but as I understand it this value is step-specific, meaning it only applies to the pipeline step you're currently in. You could carry the numeric value (however many percent) along into the next step, but the progress counter would then show up again on that next step. I believe it is not possible to have a pipeline-global progress display.
If you export a progress value it will show up beside the step name in the left hand side step list.
Setting a progress value (and also exporting a variable from one step to another, which is typically done the same way) is done by echoing special logging commands. There's a good description here: Logging commands
What you want is something like the example shown on the linked page:
echo "Begin a lengthy process..."
for i in {0..100..10}
do
    sleep 1
    echo "##vso[task.setprogress value=$i;]Sample Progress Indicator"
done
echo "Lengthy process is complete."
All of these special logging commands start with ##vso[task... The "vso" is a relic from the time when Azure DevOps was called Visual Studio Online.
There are a whole bunch of them, but most of the time what you really need is exporting variables from one build step context to another, which is done with ##vso[task.setvariable variable=name]value
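For instance, a minimal sketch of passing a value between two script steps in the same job (the variable name myOutputVar is just an illustration):
# Step 1: export a variable for later steps
echo "##vso[task.setvariable variable=myOutputVar]42"

# Step 2 (a later script step in the same job): the agent typically exposes it
# as an environment variable with an upper-cased name
echo "Value from previous step: $MYOUTPUTVAR"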

Maintain a session across multiple instances of app when called from same shell

I'm trying to have data (generated by an application only after its launch) persisted across multiple invocations of an application, but only when they're started from the same shell session.
One possible way to do that would be to pass the data back from the application to the calling shell, but since environment variable changes are only passed from parent to child, I don't know how to implement that.
Practical example:
There is a job command that creates a subdirectory named with the current datetime and does its work inside it. Sometimes the job needs to be killed and restarted, so it needs the directory where it left off, like job --resume 21Fri_1849/data. I would like to save 21Jan_1849/data so I don't have to check and type it each time I need to resume a job. If I created something like .last_job and wanted to restart the job in another session, it could resume the wrong (last) job, so plain files are not a solution (AFAIK).
How can this be done?
Since you're only trying to target Linux, there are a fair number of tricks available here. Consider this one:
#!/usr/bin/env bash
current_boot_id=$(</proc/sys/kernel/random/boot_id)
# honor myprog_shell_pid if set and valid, fall back to PPID otherwise
if [[ $myprog_shell_pid ]] && [[ -e /proc/$myprog_shell_pid/stat ]]; then
    parent_pid=$myprog_shell_pid
else
    parent_pid=$PPID
fi
parent_start_time=$(awk '{print $22}' "/proc/$parent_pid/stat")
mkdir -p "$HOME/.cache/myscript-sessions"
data=$HOME/.cache/myscript-sessions/${current_boot_id}:${parent_pid}:${parent_start_time}
Now, we have a data file name that changes:
When we're rebooted (because current_boot_id is updated)
If we're run from a different shell (because our PPID changes).
If we're run from a different shell with the same PID (because the start time for the parent PID will be different).
...and you can easily delete files with the wrong boot id (because the system rebooted), or with names that refer to PID/start-time combinations that don't exist.
One caveat is that by default, this is sensitive to being called by subshells (output=$(./yourprog) will have a different PPID than ./yourprog will), but if the parent shell runs export myprog_shell_pid=$$, that issue goes away.
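To tie this back to the question, here is a hedged sketch of how the per-session file could then be used (the job directory value is just the illustrative one from the question):
# After starting a job, record its directory for this shell session
last_job_dir=21Jan_1849/data   # illustrative value
printf '%s\n' "$last_job_dir" > "$data"

# Later, from the same shell session, resume the most recent job
job --resume "$(cat "$data")"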
You're crossing over to where you need a simple job management engine instead of just the shell. Using 'make' and writing Makefiles is probably the simplest way to set this up. You can write a rule that tells how to turn a stage 1 file into a stage 2 file based on file extension, and then make will know how far things got and how to resume next time you run it.
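As a rough sketch of that idea (the stage names and commands here are hypothetical, not from the question):
# Makefile sketch: each stage leaves a file behind, so make only redoes missing stages
%.stage2: %.stage1
	process-stage2 $< > $@   # hypothetical command for stage 2

%.stage1: %.input
	process-stage1 $< > $@   # hypothetical command for stage 1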

Retrieving exit code of batch file in PeopleCode

I have the following Java code in PeopleCode to execute a batch file (which in turn executes a WinSCP script file). How do I get the return code?
Alternatively, if you have similar PeopleCode to transfer a file, please let me know.
Local JavaObject &runtime = GetJavaClass("java.lang.Runtime").getRuntime();
Local JavaObject &process = &runtime.exec("\\xyz\BATCH_FILE_NAME.bat");
Rest of the code.
Use the Process.exitValue() method:
&process.exitValue()
This will of course get you the exit code of the batch file. Whether that gives you the exit code of WinSCP depends on how the batch file is implemented, which you didn't show us.
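Note that java.lang.Process.exitValue() throws an exception if the process has not terminated yet, so in practice you would wait for the batch file to finish first; a minimal sketch using the same &process object as above:
Local number &exitCode;
/* waitFor() blocks until the process terminates and returns its exit code */
&exitCode = &process.waitFor();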
In general you propagate the WinSCP exit code to the batch file's exit code like this:
winscp.com /ini=nul /command ...
exit /b %ERRORLEVEL%
Read also about Checking script results.
Though with such a trivial batch file, it does not really make sense to use a batch file in the first place. You can execute winscp.com (or winscp.exe) directly from your PeopleCode.
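For example, a rough sketch of calling WinSCP directly (the installation path and script file name are assumptions, not something from the question):
Local JavaObject &runtime = GetJavaClass("java.lang.Runtime").getRuntime();
/* Paths below are illustrative; adjust to your installation */
Local JavaObject &process = &runtime.exec("C:\WinSCP\winscp.com /ini=nul /script=C:\scripts\upload.txt");
/* waitFor() returns WinSCP's exit code once it finishes */
Local number &exitCode = &process.waitFor();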

How to guard from chef-client state of last block run; don't use state file

disclaimer: I'm pretty new to chef and I've inherited a bunch of chef cookbooks. The methods below are sub-optimal, but they are what I have to work with for now. Be gentle, please. :) Also, please bear with me as I try to describe what I need.
Please note that we are using chef-client 11.16.4. Updating to 12.x, for now, is not an option.
tl;dr
Is there a way to specify a guard based on the state of the currently running block:
...
only_if { this_block_did_something }
notifies :run, 'bash[deploy-custom-docker-container]', :immediately
OK....
Take this chunk of code in a recipe I inherited and need to refactor a little...
# The identities of the innocent have been changed for their
# protection. Please ignore odd things in this example:
application app[:name] do
  path app[:deploy_path]
  enable_submodules true
  repository app[:repository]
  owner OWNER
  group GROUP
  symlinks({
    "file.py" => "path/file.py"
  })
  revision app[:branch]
  deploy_key data_bag_item('deployment_keys', 'keyname')['private_key']
end
link "/path/to/file.py" do
  to "/path/to/settings-%s.py" % [file]
end
# This is where I need some direction...I think.
# note that CMD is a valid constant and the custom docker
# container does not follow any industry standard docker
# conventions due to our strange use-case. So I had to resort
# using a bash block to call our custom start/stop/restart script
bash 'deploy-custom-docker-container' do
  code <<-EO
    #{CMD} restart
  EO
  # currently a subscribes but I've tried other methods which
  # don't achieve what I'm trying to accomplish
  subscribes :run, 'application[%s]' % [app[:name]]
end
The application app[:name] deploys source code onto the target node whenever the repo has new code to be synced. The bash block restarts a very custom and non-industry standard docker container which uses the code.
In its current form, which is undesirable, the bash[deploy-custom-docker-container] block always gets executed, irrespective of whether application app[:name] actually deploys new code (i.e. whether the repo on the target node is already up to date or not). I'm sure I could write code that determines whether the repo was updated, touch a state/lock file, and then guard execution of the bash block by checking whether that lockfile exists. To me, that would be a sub-optimal way to achieve my goal. What would be optimal is to use chef's own knowledge of the update as the guard. Is that possible? Read on...
In other words, when application app[:name] is hit during the chef-client run and the repo has new code (which is then deployed on the node), chef-client reports the steps application app[:name] takes to deploy it. If the repo didn't need to be updated, chef-client happily skips the block with an "(up to date)" message. So chef-client knows the state of the block of code it just ran.
Also, my observations of how chef-client runs in our environment have shown me that it doesn't matter whether I put a notifies in application app[:name] for bash[deploy-custom-docker-container] or use the subscribes method (pasted above); the bash block gets run irrespective of the state of application app[:name]. I'd prefer that if application app[:name] has no update to perform, the bash block doesn't run.
What I fear is that I will have to use a state file to capture whether application app[:name] updated the repo. I'd rather guard on chef's own run-time view of whether the application app[:name] block changed anything.
FIXED CODE
As pointed out by zts, my actions were wrong or missing. The following code is what I was able to come up with that resolved my issue.
application app[:name] do
  ...
  notifies :run, 'link[%s]' % [filetolink], :immediately
end

link filetolink do
  to file
  notifies :run, 'bash[deploy-custom-docker-container]', :immediately
end

bash 'deploy-custom-docker-container' do
  code <<-EO
    #{CMD} restart
  EO
  action :nothing
end
This works for me now.
Notifications only fire if the notifying resource has changed (and the other way around, subscriptions only fire if the resource you're subscribing to has changed).
The reason the bash block runs irrespective of the notification is that, by default, bash blocks will run. If you only want a resource to run when notified, make sure to include action :nothing.
ie:
bash 'deploy-custom-docker-container' do
  code <<-EO
    #{CMD} restart
  EO
  action :nothing
  subscribes :run, 'application[%s]' % [app[:name]]
end

XulRunner exit code

I was wondering how to specify an exit code when shutting down a XULRunner application.
I currently use nsIAppStartup.quit() described in MDC nsIAppStartup reference to shutdown the application, but I can't figure out how to specify a process exit code.
The application is launched from a shell script and this exit code is needed to decide if it should be restarted or not.
NOTE: Passing eRestart to the quit function is useless in my situation because restarting depends on factors external to the application (system limits etc.).
Thank you and any help would be appreciated.
A quick look at the XRE_main function shows that it will only return a non-zero value in case of errors, and even then the exit code is fixed. If everything succeeds and the application shuts down normally the exit code will be 0, with no way to change it. XULRunner isn't really meant to be driven from shell scripts, so you will have to indicate your result in some other way (e.g. by writing it to a file).
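As an illustration of that file-based approach from the calling side (the status file path and the launch command are placeholders, not part of the original answer):
#!/usr/bin/env bash
# Wrapper that restarts the app based on a status file the app writes before quitting
status_file=/tmp/myapp.status          # must match whatever path the app writes to
while true; do
    xulrunner application.ini "$@"     # however the application is normally launched
    [[ "$(cat "$status_file" 2>/dev/null)" == "restart" ]] || break
done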
