I need to take down Azure VMs in a controlled way, first stopping them and then removing them. The "stop" part needs to run only if the VM exists; otherwise the task creates one in a stopped state and then removes it. I tried variations on when: state == "present" but without success. Where can I find an example or documentation on how to use that? Or maybe the solution is to have a previous task retrieve the VM info and act on that?
TIA!
If you haven't tried it already, have the first task use the "azure_rm_virtualmachine_info" module and save its result to a variable with the "register" keyword. Then give the second task a "when" condition that checks the registered variable to see whether the VM exists before acting on it.
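A minimal, untested sketch of that approach (the resource group and VM names are placeholders, and the exact shape of the returned facts, vm_facts.vms here, may differ between collection versions):

- name: Gather facts about the VM (returns an empty list if it does not exist)
  azure_rm_virtualmachine_info:
    resource_group: myResourceGroup
    name: myVM
  register: vm_facts

- name: Deallocate (stop) the VM only if it exists
  azure_rm_virtualmachine:
    resource_group: myResourceGroup
    name: myVM
    allocated: false  # stop/deallocate without removing
  when: vm_facts.vms | length > 0

- name: Remove the VM (state absent is a no-op if it is already gone)
  azure_rm_virtualmachine:
    resource_group: myResourceGroup
    name: myVM
    state: absent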
Disclaimer: I can achieve the behavior I’m looking for with the Active Choices plugin, BUT I really want this to work in a Jenkinsfile and be controlled with SCM, because it’s tedious to configure Active Choices on each job that may need them. And with it being separate from the Jenkinsfile, it’s then one job defined in multiple places. :(
I am looking to verify whether this is even possible, because I can’t get the syntax right, and I haven’t been able to find any examples online:
pipeline {
    environment {
        ARTIFACTS = lib.myfunc() // this works well
    }
    parameters {
        choice(name: "Artifacts", choices: ARTIFACTS) // I can’t get this to work
    }
}
I cannot use the function inline in the declaration of the parameter. The errors were clear about that, but it seems as though I should be able to do what I’ve written out above.
I am not home, so I do not have the exceptions handy, but I will add them soon. They did not seem very helpful while I was working on this yesterday.
What have I tried?
I’ve tried having the function return a List, because the docs say it requires a list, and I’ve also tried (illogically) returning a String in the precise syntax of a list of strings. (It was hacky, like return "['" + artifacts.join("', '") + "']" to look like ['artifact1.zip', 'artifact2.zip'].)
I also tried things like "$ARTIFACTS" and ${ARTIFACTS} in desperation.
The list of choices has to be supplied as a String containing newline characters (\n): choices: 'TESTING\nSTAGING\nPRODUCTION'
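For example (an untested sketch: the scripted properties step is one common way to do this, and lib.myfunc() is the asker's own helper, assumed here to return a List of Strings):

def artifacts = lib.myfunc() // e.g. ['artifact1.zip', 'artifact2.zip']

properties([
    parameters([
        // the choice parameter wants one option per line in a single String
        choice(name: 'Artifacts', choices: artifacts.join('\n'))
    ])
])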
I was tipped off by this article:
https://st-g.de/2016/12/parametrized-jenkins-pipelines
Related to a bug:
https://issues.jenkins.io/plugins/servlet/mobile#issue/JENKINS-40358
:shrug:
First, we need to understand that Jenkins starts running your pipeline code by presenting you with the Parameters page. Once you've set up the parameters and pressed Build, then a node is allocated, variables are set, and your code starts to run.
But in your pipeline, as presented above, you want to run some code to prepare the parameters.
This is not how Jenkins usually works. It's definitely not doing the following: allocating a node, setting the variables, running some of your code until parameters clause is reached, stopping all that, presenting you with GUI, and then continuing where it left off. Again, it's not how Jenkins works.
This is why, when writing a new pipeline, your first option to build it is Build and not Build with Parameters. Jenkins hasn't run your code yet; it doesn't have any idea whether there are any parameters. When running for the first time, it will remember the parameters (and any choices, if there were any) as configured for this (first) run, so in the second run you will see the parameters as configured in the first run. (Generally, in run number n you will see the result of the configuration from run number n-1.)
There are a number of ways to overcome this.
If having a "somewhat recent" (and not "current and absolutely up-to-date") situation fits you, your code may need minor changes to work — second time. (I don't know what exactly lib.myfunc() returns but if it's a choice of Development/Staging/Production this might be good enough.)
If having a "somewhat recent" situation is an absolute no-no (e.g. your lib.myfunc() returns the list of git branches, and "list of branches as of yesterday" is unacceptable), then your only solution is ActiveChoice. ActiveChoice allows you to run some code before showing you the Build with Parameters GUI (with script approval etc.).
So I have my computer set up as an agent pool in Azure DevOps. I'm creating a latency test that the developers can use in their CI. The script runs in Python and tests various points in a system I have set up for the company, which is connected to the cloud; it's mainly for informative purposes. When I run the script I have to wait some time while the system goes through its normal network cycle inspecting all the devices in the local network (not very important for the question). While I'm waiting, I show a message in the terminal with "..." going from "." to ".." to "...", just to show the script didn't crash or anything.
The Python code looks like this and works just fine when I run it locally:
sys.stdout.write("\rprocessing queue, timing varies depending on priority" + ("."*( i % 3 + 1))+ "\r")
sys.stdout.flush()
However, the output shown in the Azure pipeline log shows all of the lines without replacing them. Is there a way to do what I want?
I am afraid showing progress like this is not supported in Azure Pipelines. The pipeline log console is not interactive; it just captures the agent machine's terminal output.
You might have to use a simpler way to indicate that the script is executing and not finished yet. For a simple example:
sys.stdout.write("Waiting for processing queue ..." )
You can report this problem to the Microsoft development team; hopefully they will find a way to fix it in a future sprint.
I have seen it once but never actually used it myself. This can be done in both bash and PowerShell; I'm not sure whether it works from inside a Python script, so you might have to call bash/PowerShell from within your Python script.
It is possible to set a progress value in percent that is visible outside of the log, but as I understand it this value is step-specific, meaning it only applies to the pipeline step you're currently in. You could carry the numeric value (however many percent) along into the next step, but the progress counter would then show up again in that next step. I believe it is not possible to have a pipeline-global display of progress.
If you export a progress value, it will show up beside the step name in the left-hand step list.
Setting a progress value (and also exporting a variable from one step to another, which is typically done the same way) is done by echoing special logging commands. There's a great description to be found here: Logging commands
What you want to do is just what is shown as an example on the linked page:
echo "Begin a lengthy process..."
for i in {0..100..10}
do
sleep 1
echo "##vso[task.setprogress value=$i;]Sample Progress Indicator"
done
echo "Lengthy process is complete."
All of these special logging commands start with ##vso[task... The VSO is a relic from the time when Azure DevOps was called Visual Studio Online.
There are a whole bunch of them, but most of the time what you really need is exporting variables from one build step to another, which is done with ##vso[task.setvariable variable=name]value
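For example, a minimal (untested) YAML sketch that sets a variable in one step and reads it in the next:

steps:
  - bash: echo "##vso[task.setvariable variable=myVar]hello"
    displayName: Set myVar for later steps
  - bash: echo "myVar is $(myVar)"  # macro syntax resolves in subsequent steps
    displayName: Read myVar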
I've created a powershell script that updates a Pipeline Variable during a Release Pipeline. It takes the custom variable and updates it with a new version using semantic versioning with every run.
I've tried to add this custom variable as the Release Pipeline name, but it keeps giving me an error: "Names of releases from this pipeline will not be unique. Use pre-defined variables to generate unique release names."
I've tried setting the variable to "Settable at Release" and setting its scope to "Release".
Does anybody perhaps know if there is a way to let the release pipeline know this is a dynamic variable that changes?
The only other option is to add the revision number to it: $(versionnumber)$(rev:.r)
For this issue, I don't think it is feasible. The release name must be unique, and
the $(rev:r) token ensures that every completed build/release gets a unique name by adding an incremental number to each release. When a build/release is completed, if nothing else in the number has changed, the Rev integer value is incremented by one. So basically you cannot achieve this without using $(rev:r), unless you can define a token that has the same function as $(rev:r).
In addition, you can also use $(Build.BuildNumber) or $(Release.ReleaseId), which are also unique.
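For illustration (versionnumber being the asker's custom variable), the Release name format field could then look something like:
$(versionnumber)-$(Release.ReleaseId)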
For a similar issue, please refer to this case.
I am trying to customize the output of my pipeline release by setting some environment variables in a task.
I found the following link:
https://learn.microsoft.com/en-us/azure/devops/pipelines/scripts/logging-commands?view=azure-devops&tabs=powershell
which however does not seem to work.
What I am doing is simply creating a pipeline with a single task (either bash or PS) and declaring there the commands specified in the link, using the inline version of the task.
Has anyone already successfully managed to make these commands work?
Do I do something wrong and/or incomplete?
Does anyone have a better way to customise the pipeline with relevant information from a task? E.g. through the release name, or the description and/or tag of the specific release?
Edit:
Write-Host "##vso[task.setvariable variable=sauce;]crushed tomatoes"
Write-Host "##vso[task.setvariable variable=secretSauce;issecret=true]crushed tomatoes with garlic"
Write-Host "Non-secrets automatically mapped in, sauce is $env:SAUCE"
Write-Host "Secrets are not automatically mapped in, secretSauce is $env:SECRETSAUCE"
Write-Host "You can use macro replacement to get secrets, and they'll be masked in the log: $(secretSauce)"
This is the code, copied and pasted. I also tried it with a script file, and it does not work either.
I use a hosted Windows agent.
When you set a new variable with the logging command, the variable is available only in subsequent tasks, not in the same task.
So split your script into two tasks; put the last three lines in the second task and you will see that the first task works:
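As an untested YAML sketch of that split:

steps:
  - powershell: |
      Write-Host "##vso[task.setvariable variable=sauce;]crushed tomatoes"
      Write-Host "##vso[task.setvariable variable=secretSauce;issecret=true]crushed tomatoes with garlic"
    displayName: Task 1 - set the variables
  - powershell: |
      Write-Host "Non-secrets automatically mapped in, sauce is $env:SAUCE"
      Write-Host "Secrets are not automatically mapped in, secretSauce is $env:SECRETSAUCE"
      Write-Host "Macro replacement gets secrets, masked in the log: $(secretSauce)"
    displayName: Task 2 - read the variables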
This also puzzled me for a while. In the end I found out that if you want to modify $env:Path you can call the special task.prependpath command, using the same logging-command syntax: "##vso[task.prependpath]local directory path". You can find more special commands of this kind in their source:
https://github.com/microsoft/azure-pipelines-tasks/blob/master/docs/authoring/commands.md
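For instance, an untested sketch (the directory is just a placeholder):

steps:
  - powershell: Write-Host "##vso[task.prependpath]C:\tools"
    displayName: Prepend C:\tools to PATH
  - powershell: Write-Host $env:Path  # now starts with C:\tools
    displayName: Show the updated PATH in the next task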
Disclaimer: pretty new to Chef, and I've inherited a bunch of Chef cookbooks. The methods below are sub-optimal, but they are what I have to work with for now. Be gentle, please. :) Also, please bear with me as I try to describe what I need.
Please note that we are using chef-client 11.16.4. Updating to 12.x, for now, is not an option.
tl;dr
Is there a way to specify a guard based on the state of the currently running block:
...
only_if { this_block_did_something }
notifies :run, 'bash[deploy-custom-docker-container]', :immediately
OK....
Take this chunk of code in a recipe I inherited and need to refactor a little...
# The identities of the innocent have been changed for their
# protection. Please ignore odd things in this example:
application app[:name] do
  path app[:deploy_path]
  enable_submodules true
  repository app[:repository]
  owner OWNER
  group GROUP
  symlinks({
    "file.py" => "path/file.py"
  })
  revision app[:branch]
  deploy_key data_bag_item('deployment_keys', 'keyname')['private_key']
end
link "/path/to/file.py" do
to "/path/to/settings-%s.py" % [file]
end
# This is where I need some direction...I think.
# Note that CMD is a valid constant and the custom docker
# container does not follow any industry-standard docker
# conventions due to our strange use-case. So I had to resort to
# using a bash block to call our custom start/stop/restart script.
bash 'deploy-custom-docker-container' do
  code <<-EO
    #{CMD} restart
  EO
  # currently a subscribes, but I've tried other methods which
  # don't achieve what I'm trying to accomplish
  subscribes :run, 'application[%s]' % [app[:name]]
end
The application app[:name] deploys source code onto the target node whenever the repo has new code to be synced. The bash block restarts a very custom and non-industry standard docker container which uses the code.
In its current form, which is undesirable, the bash[deploy-custom-docker-container] block always gets executed, irrespective of whether application app[:name] has to deploy code or not (i.e. the repo is up to date vs. not up to date) on the target node. I'm sure I could write some code that determines whether the repo was updated, touches a state/lock file, and then guards execution of the bash block by checking whether that lockfile exists. To me, that would be a sub-optimal way to achieve my goal. What would be optimal is to use Chef's own knowledge of the state of the update as the guard. Is that possible?? Read on...
In other words, when application app[:name] is hit during chef-client runtime, and a repo has been updated (and thus deployed on the node), chef-client reports the steps of application app[:name] deploying the new code. If the repo didn't need to be updated, chef-client happily skips the block with a "(up to date)" message. If the repo needed to be updated, chef-client shows the steps taken to deploy the code. So chef-client knows the state of the block of code it just ran.
Also, my observations of how chef-client runs in our environment have shown me that it doesn't matter whether I put a notifies block in application app[:name] for bash[deploy-custom-docker-container] or use the subscribes method (pasted above); the bash block gets run irrespective of the state of application app[:name]. I'd prefer that if application app[:name] doesn't have an update to perform, the bash block doesn't run.
What I fear is that I will have to use a state file to determine the state of the update of the repo from the application app[:name] block. I'd rather just guard off the state of the run-time from chef's perspective of the application app[:name] block.
FIXED CODE
As pointed out by zts, my actions were wrong or missing. The following code is what I was able to come up with that resolved my issue.
application app[:name] do
  ...
  notifies :run, 'link[%s]' % [filetolink], :immediately
end

link filetolink do
  to file
  notifies :run, 'bash[deploy-custom-docker-container]', :immediately
end
bash 'deploy-custom-docker-container' do
  code <<-EO
    #{CMD} restart
  EO
  action :nothing
end
This works for me now.
Notifications only fire if the notifying resource has changed (and the other way around, subscriptions only fire if the resource you're subscribing to has changed).
The reason the bash block runs irrespective of the notification is that, by default, bash blocks will run. If you only want a resource to run when notified, make sure to include action :nothing.
i.e.:
bash 'deploy-custom-docker-container' do
  code <<-EO
    #{CMD} restart
  EO
  action :nothing
  subscribes :run, 'application[%s]' % [app[:name]]
end