In Maximo 7.6.1.1:
Is it possible to launch an automation script to update a WO -- just prior to the WO being closed?
The Change Status action seems to happen before any of the launch points that I've tried.
And of course, once a WO is closed, I can't edit the WO with an automation script, since it is flagged as Is History.
Which launch points have you tried? I think the earliest you can get is Attribute-Validate, where the status value will already have changed, but the action of setting the history flag should not have happened yet. But if that's not working for you, you might be out of luck, unless you're willing to customize the WORKORDER object with Java.
You should be good to go with an attribute launch point script on the workorder.status attribute, event ACTION. I've done it before; for example, I could call an API to check funds associated with a GL account and block the status change if necessary.
You just need to check for the current value so other status changes won't be affected.
Python example:
if mbo.getString("status") == 'CLOSE':
    # ... your code ...
Also, remember that you can always use the NOACCESSCHECK flag to change an mbo.
See MboConstants class: https://developer.ibm.com/assetmanagement/7609-maximo-javadoc/
from psdi.mbo import MboConstants
mbo.setValue("attribute", value, MboConstants.NOACCESSCHECK)
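Putting the two together, here is a minimal sketch of what such an attribute launch point script could look like (Jython). The attribute name and text below are only placeholders for illustration, not something from your system:

from psdi.mbo import MboConstants

# "mbo" is the implicit work order record handed to an attribute launch point
# defined on WORKORDER.STATUS.
if mbo.getString("status") == 'CLOSE':
    # Hypothetical example: update a field just before the close goes through.
    # Replace "description" and the text with whatever you actually need to set.
    mbo.setValue("description", "Updated just before close", MboConstants.NOACCESSCHECK)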
How do I invoke a function every time the Script is loaded or reloaded?
tool

func _reload():
    print("Changes have been made and saved! Script has been reloaded")

func _load():
    print("Project was just opened! Script has been loaded")
I'm not sure what you mean by reloaded (or why it is an issue), but it probably is one of these:
When you add a Node to the scene tree (or when you enable a plugin, which adds the EditorPlugin to the scene tree of the editor), or when it is loaded (all plugins are loaded when you load the project), the code in _enter_tree will run. Similarly, when it is removed (or when the plugin is disabled), the code in _exit_tree will run. Be aware that these don't work for tool scripts that you run manually (EditorScript). So make a Node (or an EditorPlugin).
There is a signal that will notify you when the script of an object has changed. It is appropriately named "script_changed". So if you want to handle the situation where the tool script was modified, and thus reloaded, you can connect to that signal. You may also want to take advantage of _init, which is the first virtual method that Godot calls (on Nodes you can also use _enter_tree and _ready). The signal "script_changed" is emitted before _init is called in the new script.
If you want to handle when properties of a Node are modified (I'm including this since you mention "changes have been made"), you would have to use setters (with setget), or you could intercept the properties in _set.
Since you mention "Project was just opened", I think you want to make an EditorPlugin, and then you can use _enter_tree and _exit_tree.
I haven't found a way to get a notification when the currently edited scene in the editor is saved. However, saving the scene does not mean tool scripts are loaded or reloaded in any way.
I have my computer set up as an agent pool in Azure DevOps. I'm creating a latency test so the developers can use it in their CI. The script runs in Python and tests various points in a system I have set up for the company, which is connected to the cloud; it's mainly for informative purposes. When I run the script I have to wait some time while the system goes through its normal network cycle inspecting all the devices in the local network (not very important for the question). However, while I'm waiting I show a message in the terminal with "..." going from "." to ".." to "...", just to show the script didn't crash or anything.
The Python code looks like this and works just fine when I run it locally:
sys.stdout.write("\rprocessing queue, timing varies depending on priority" + ("."*( i % 3 + 1))+ "\r")
sys.stdout.flush()
However, the output shown in the Azure pipeline prints all of the lines without replacing them. Is there a way to do what I want?
I am afraid showing progress like this is not supported in Azure Pipelines. The pipeline log console is not user-interactive; it just captures the agent machine's terminal output.
You might have to use a simpler way to indicate that the script is executing and not finished yet. For a simple example:
sys.stdout.write("Waiting for processing queue ..." )
You can report this problem to the Microsoft development team; hopefully they find a way to fix it in a future sprint.
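As a sketch of what that could look like in your script, you could print a plain heartbeat line every few seconds instead of rewriting the same line with "\r" (the loop bound, interval, and wording below are just placeholders):

import sys
import time

# A non-interactive log can't rewrite a line in place, so emit a fresh
# line periodically to show the script is still alive.
for i in range(30):   # placeholder: loop until your real wait condition is met
    time.sleep(10)    # placeholder interval
    sys.stdout.write(f"processing queue, still waiting ({(i + 1) * 10} s elapsed)\n")
    sys.stdout.flush()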
I have seen this once but never actually used it myself. It can be done in both Bash and PowerShell; I'm not sure whether it works from inside a Python script, so you might have to call Bash/PowerShell from within your Python script.
It is possible to set a progress value in percent that is visible outside of the log, but as I understand it this value is step-specific, meaning it only applies to the pipeline step you're currently in. You could carry the numeric value (however many percent) along into the next step, but the progress counter would then show up again in that step. I believe it is not possible to have a pipeline-global progress display.
If you export a progress value, it will show up beside the step name in the step list on the left-hand side.
Setting the progress (and also exporting a variable from one step to another, which is typically done the same way) works by echoing special logging commands. There's a great description to be found here: Logging commands
What you want is something like the example shown on the linked page:
echo "Begin a lengthy process..."
for i in {0..100..10}
do
    sleep 1
    echo "##vso[task.setprogress value=$i;]Sample Progress Indicator"
done
echo "Lengthy process is complete."
All of these special logging commands start with ##vso[task... The "vso" is a relic from the time when Azure DevOps was called Visual Studio Online.
There are a whole bunch of them, but most of the time what you really need is exporting variables from one build step context to another, which is done with ##vso[task.setvariable variable=NAME]value.
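If you do want to try it straight from the Python script, here is a hedged sketch. It assumes the agent also scans a Python task's stdout for ##vso commands, which I haven't verified myself:

import sys
import time

# Assumption: the pipeline agent parses "##vso[...]" logging commands from
# any line written to stdout, regardless of the language that produced it.
for percent in range(0, 101, 10):
    time.sleep(1)  # placeholder for a slice of the real work
    print(f"##vso[task.setprogress value={percent};]Sample Progress Indicator")
    sys.stdout.flush()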
Is there an out-of-the-box feature to automatically close issues that have not had any activity for a specific period of time, e.g. 4 weeks?
If not, what would be the best way to go about implementing this for my Group's issues?
It does not exist per se, but you can prepare a script to run in a cron job or similar tool so that you regularly clean up these issues. The script could use the GitLab Issues API and check issue dates to determine whether or not to close a specific issue. The API has all the required tools for you to build this script with the described logic.
I'm not familiar with such an option, but you can look at the Issues list and sort by created or updated.
This solution uses the python-gitlab package. It gets all group issues, adds a comment to those that have been inactive and closes them.
The only prerequisites are to:
- get a PRIVATE_TOKEN and add it to your environment
- find out your group ID and add it below
import datetime
import os

import gitlab

stale_before = datetime.date.today() - datetime.timedelta(days=28)

gl = gitlab.Gitlab(
    url="https://gitlab.example.com", private_token=os.environ["PRIVATE_TOKEN"]
)
group = gl.groups.get(123)  # your group ID
issues = group.issues.list(all=True, state="opened")

for issue in issues:
    updated_at = datetime.datetime.fromisoformat(issue.updated_at).date()
    if updated_at < stale_before:
        print(f"Closing issue #{issue.iid} (last activity on {updated_at}).")
        issue.notes.create({"body": "Closing for inactivity."})
        issue.state_event = "close"
        issue.save()
Disclaimer: I'm pretty new to Chef and I've inherited a bunch of Chef cookbooks. The methods below are sub-optimal, but they are what I have to work with for now. Be gentle, please. :) Also, please bear with me as I try to describe what I need.
Please note that we are using chef-client 11.16.4. Updating to 12.x, for now, is not an option.
tl;dr
Is there a way to specify a guard based on the state of the currently running block:
...
only_if { this_block_did_something }
notifies :run, 'bash[deploy-custom-docker-container]', :immediately
OK....
Take this chunk of code in a recipe I inherited and need to refactor a little...
# The identities of the innocent have been changed for their
# protection. Please ignore odd things in this example:
application app[:name] do
  path app[:deploy_path]
  enable_submodules true
  repository app[:repository]
  owner OWNER
  group GROUP
  symlinks({
    "file.py" => "path/file.py"
  })
  revision app[:branch]
  deploy_key data_bag_item('deployment_keys', 'keyname')['private_key']
end
link "/path/to/file.py" do
to "/path/to/settings-%s.py" % [file]
end
# This is where I need some direction...I think.
# note that CMD is a valid constant and the custom docker
# container does not follow any industry standard docker
# conventions due to our strange use-case. So I had to resort to
# using a bash block to call our custom start/stop/restart script
bash 'deploy-custom-docker-container' do
  code <<-EO
    #{CMD} restart
  EO
  # currently a subscribes but I've tried other methods which
  # don't achieve what I'm trying to accomplish
  subscribes :run, 'application[%s]' % [app[:name]]
end
The application app[:name] deploys source code onto the target node whenever the repo has new code to be synced. The bash block restarts a very custom and non-industry standard docker container which uses the code.
In its current form, which is undesirable, the bash[deploy-custom-docker-container] block always gets executed irrespective of whether application app[:name] has code to deploy (i.e., whether the repo on the target node is up to date or not). I'm sure I could write some code that determines whether the repo was updated, touch a state/lock file, and then guard execution of the bash block by checking whether that lock file exists. To me, that would be a sub-optimal way to achieve my goal. What would be optimal is to use Chef's own knowledge of the update as the guard. Is that possible? Read on...
In other words, when application app[:name] is hit during the chef-client run and the repo has been updated (and thus deployed on the node), chef-client reports the steps it took to deploy the new code. If the repo didn't need to be updated, chef-client happily skips the block with an "(up to date)" message. So chef-client knows the state of the block of code it just ran.
Also, my observations of how chef-client runs in our environment have shown me that it doesn't matter whether I put a notifies block in application app[:name] for bash[deploy-custom-docker-container] or use the subscribes method (pasted above); the bash block gets run irrespective of the state of the application app[:name] resource. I'd prefer that if application app[:name] doesn't have an update to perform, the bash block doesn't run.
What I fear is that I will have to use a state file to determine whether the application app[:name] block updated the repo. I'd rather just guard off Chef's own run-time view of the application app[:name] block.
FIXED CODE
As pointed out by zts, my actions were wrong or missing. The following code is what I was able to come up with that resolved my issue.
application app[:name] do
  ...
  notifies :run, 'link[%s]' % [filetolink], :immediately
end

link filetolink do
  to file
  notifies :run, 'bash[deploy-custom-docker-container]', :immediately
end

bash 'deploy-custom-docker-container' do
  code <<-EO
    #{CMD} restart
  EO
  action :nothing
end
This works for me now.
Notifications only fire if the notifying resource has changed (and the other way around, subscriptions only fire if the resource you're subscribing to has changed).
The reason the bash block runs irrespective of the notification is that, by default, bash blocks will run. If you only want a resource to run when notified, make sure to include action :nothing.
i.e.:
bash 'deploy-custom-docker-container' do
  code <<-EO
    #{CMD} restart
  EO
  action :nothing
  subscribes :run, 'application[%s]' % [app[:name]]
end
I'm working on a new application written in Siebel 8.1. The issue appears when I try to replay the script, and I can't handle it.
Replay Output:
Error -27086: Auto-correlation callback function
"flCorrelationCallbackParseWebPage" failed (rc=1) for parameter
"Siebel_Parse_Web_Page40"
web_reg_save_param("Siebel_Parse_Web_Page40",
    "LB/IC=",
    "RB/IC=",
    "Ord=1",
    "Search=Body",
    "RelFrameId=1",
    "AutoCorrelationFunction=flCorrelationCallbackParseWebPage",
    "AutoCorrelationDll=LrwiSiebelCorrelationWrapper",
    LAST);
I have done all the steps to prepare the recording options from: http://software-qe.blogspot.se/2008/01/siebel-7x-record-and-replay-for.html
I'm using LoadRunner 11.52 (Siebel Web protocol) and IE8.
We've been using the autocorrelation library for quite a few years on my team and we see this a lot. Unfortunately, it's not an easy problem to diagnose.
First I would check your test results and your VUser log to see if something happened before the autocorrelation failed. (Make sure your logging is set to parameter substitution in runtime settings).
Check your parameter files for extra spaces, commas, etc. Sometimes I've seen that error right after it rejects something about your parameter file.
Worst case scenario, your script is corrupted and you'll have to start over. We've gotten in the habit of making frequent backups of our scripts just because of this issue. Usually, we'll be able to start from our backup and continue or create a new script and paste the old code in. Autocorrelation error "magically" goes away with the same code in a new script.
If auto(magical)correlation does not work then use manual correlation.
- Record twice with the same data and compare. You will find session, state, and time data.
- Change the credentials and re-record, then compare. You will find credential-related correlation.
- Change the business record but keep the same business process, then re-record. You will find the business-related correlation.
Do not expect autocorrelation to produce a magically working script. You have about a 0.0001% chance of that happening without LoadRunner script development intervention.