How to build an AWS AMI with Packer conditionally?

I am working on building a new AMI with Packer ONLY if the result of a shell command matches a given value.
I am looking for a way to express a conditional statement in the "provisioners" section:
"provisioners": [
{
"type": "shell",
"inline": [
res=f(20)
]
In this example, I want to define a condition:
if res equals 10, then continue (so Packer will generate the AWS AMI);
else stop the execution (and print a message).

I'll start with a disclaimer: building conditionally isn't something that a provisioner is really intended to do. Ideally that kind of logic should be handled outside of the Packer build process, perhaps in a build pipeline as @MattSchuchard suggested. Examples of build pipeline tools are Jenkins, CircleCI, and Drone.IO.
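For example, a minimal sketch of gating the build from a wrapper script outside Packer (f is the question's placeholder command, 10 the expected value, and build.json the template name used later in this answer):

#!/bin/bash
# Run the check first; only invoke Packer when the result matches.
res=$(f 20)                    # "f" is the question's placeholder command
if [ "$res" -eq 10 ]; then
    packer build build.json
else
    echo "Incorrect result: $res - skipping AMI build"
    exit 1
fi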
However, if you really need to have this logic built into the provisioner: Packer terminates and exits on a non-zero exit code, so you could do something like this:
"provisioners": [
{
"type": "shell",
"inline": [
"if [ $res -eq f(20) ]; then echo $res && exit 0; else echo "Incorrect result" && exit 1; fi"
]
}
]
You can further tweak this by using the valid_exit_codes option and defining which exit codes you are expecting from the specific circumstances you're looking to validate. Ref: https://www.packer.io/docs/provisioners/shell.html#valid_exit_codes
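As a rough sketch (again using the question's placeholder f and expected value 10; the exit code 3 is arbitrary and purely illustrative), the inline script could return a distinct exit code per outcome and let valid_exit_codes decide which outcomes still produce an AMI:

res=$(f 20)
if [ "$res" -eq 10 ]; then
    exit 0                        # expected result: build the AMI
else
    echo "Incorrect result: $res"
    exit 3                        # distinct code for the "wrong result" case
fi

With the default valid_exit_codes of [0], the second branch aborts the build; adding 3 to the list would instead let that case continue.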
Example Output:
$ packer build -var-file=provisioner-test.json build.json
==> amazon-ebs: Prevalidating AMI Name: Test-37Cv9mXMGqw5zAV
amazon-ebs: Found Image ID: ami-09693313102a30b2c
==> amazon-ebs: Creating temporary keypair: packer_5d96a8b4-ef4d-a705-a393-076457bdc3ea
==> amazon-ebs: Launching a source AWS instance...
==> amazon-ebs: Adding tags to source instance
amazon-ebs: Adding tag: "Name": "Packer Builder"
amazon-ebs: Instance ID: i-0ca4d944fe99255da
==> amazon-ebs: Waiting for instance (i-0ca4d944fe99255da) to become ready...
==> amazon-ebs: Using ssh communicator to connect: 10.0.24.189
==> amazon-ebs: Waiting for SSH to become available...
==> amazon-ebs: Connected to SSH!
==> amazon-ebs: Provisioning with shell script: /var/folders/mg/wc582qjx0y759zw3hfwxwjmm0000gp/T/packer-shell814735146
amazon-ebs: Incorrect result
==> amazon-ebs: Terminating the source AWS instance...
==> amazon-ebs: Cleaning up any extra volumes...
==> amazon-ebs: No volumes to clean up, skipping
==> amazon-ebs: Deleting temporary keypair...
Build 'amazon-ebs' errored: Script exited with non-zero exit status: 1. Allowed exit codes are: [0]
==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: Script exited with non-zero exit status: 1. Allowed exit codes are: [0]
==> Builds finished but no artifacts were created.

Related

GitLab Job Passes Despite Non-Zero Exit Code

I have a GitLab CI/CD Job with the following definition:
compile:
  stage: compile
  tags:
    - windows
    - powershell
    - bl653_8dc0_1053
  artifacts:
    paths:
      - main.linenumbers.uwc
  script:
    - XComp_BL653_8DC0_1053.exe .\main.linenumbers.sb
    - Test-Path -Path .\main.linenumbers.uwc
When the job executes, the XComp_BL653_8DC0_1053.exe application fails and returns exit code 7. However, the build still succeeds even though there was a non-zero exit code and no artifacts.
Executing "step_script" stage of the job script
00:02
$ XComp_BL653_8DC0_1053.exe .\main.linenumbers.sb
OnEvent EVTMR2 call HandlerTimer2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Compile Error: (0x0453) TOK_UNKNOWN_EVENTFUNC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
File : main.linenumbers.sb
Line : 110
Source : OnEvent EVTMR2 call HandlerTimer2
: ----------------------------------^
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Uploading artifacts for successful job
00:01
Version: 14.3.2
Git revision: e0218c92
Git branch: 14-3-stable
GO version: go1.13.8
Built: 2021-09-30T16:11:30+0000
OS/Arch: windows/amd64
Uploading artifacts...
Runtime platform arch=amd64 os=windows pid=7216 revision=e0218c92 version=14.3.2
WARNING: main.linenumbers.uwc: no matching files
ERROR: No files to upload
Job succeeded
I can see that it never runs the Test-Path line, so it is correctly exiting when the non-zero exit code happens, but why is it saying the build passes?
I'm using GitLab EE version 14.9. My runner is a PowerShell executor on Windows 10.
This behavior is an artifact of how PowerShell works. The script will continue even if a command fails, and the overall exit code for the script (and the job) will be the exit code of the last command.
To ensure a command failure in XComp_BL653_8DC0_1053.exe causes the job to stop and exit, you would want to do something like:
script:
  - |
    XComp_BL653_8DC0_1053.exe .\main.linenumbers.sb
    if(!$?) { Exit $LASTEXITCODE }
You can see this pattern repeated a lot in the internal PowerShell scripts used by the runner.
You can also set $ErrorActionPreference = "Stop" to change this behavior for PowerShell cmdlets (though not necessarily for .exes). This can be done via an environment variable:
variables:
  ErrorActionPreference: STOP
For additional context, see:
How to stop a PowerShell script on the first error?
Why are my PowerShell exit codes always "0"?

Bazel sh_test doesn't find node

I am trying to run a script which needs node. I have node installed on my machine.
I can run the sh_binary with bazel run //:sh_bin and the script runs node just fine:
sh_binary(
    name = "sh_bin",
    data = [],
    srcs = [":script.sh"],
)
script.sh:
node -v
bazel run //:sh_bin:
v14.17.6
Now I want to convert this to sh_test:
sh_test(
    name = "sh_bin",
    data = [],
    srcs = [":script.sh"],
)
but now bazel test //:sh_bin cannot find node:
node: command not found
I also tried to add local = True to the test and still the same issue.
Bazel tests are run in a more controlled environment than applications run via bazel run. One of the initial conditions that the test runner establishes is the value of $PATH: https://docs.bazel.build/versions/main/test-encyclopedia.html#initial-conditions
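As a quick, non-hermetic check, you could forward the host's PATH into the test environment with --test_env (assuming node is on the PATH of the machine that actually runs the test):

bazel test --test_output=all --test_env=PATH -- //:sh_bin

Passing --test_env=PATH without a value forwards that variable from the invoking environment into the test runner.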
If you are working with remote execution, another problem could be that your test is executed on a machine that does not have node installed.
It's always a great idea to strive for a hermetic build that runs and tests independently of the host's state. That means you'd need to make the node program available to your binary or test as a data dependency.
A good alternative is to build on existing work such as https://github.com/bazelbuild/rules_nodejs.
That being said, your example actually works for me.
cd `mktemp -d`
touch WORKSPACE
echo "node -v" > script.sh
chmod +x script.sh
cat <<EOF > BUILD
sh_test(
    name = "sh_bin",
    srcs = [":script.sh"],
)
EOF
bazel test --test_output=all -- //:sh_bin
Starting local Bazel server and connecting to it...
INFO: Analyzed target //:sh_bin (24 packages loaded, 282 targets configured).
INFO: Found 1 test target...
INFO: From Testing //:sh_bin:
==================== Test output for //:sh_bin:
v17.1.0
================================================================================
Target //:sh_bin up-to-date:
bazel-bin/sh_bin
INFO: Elapsed time: 6.895s, Critical Path: 0.10s
INFO: 5 processes: 3 internal, 2 linux-sandbox.
INFO: Build completed successfully, 5 total actions
//:sh_bin PASSED in 0.0s
Executed 1 out of 1 test: 1 test passes.
INFO: Build completed successfully, 5 total actions

Get exit code from `az vm run-command` in Azure pipeline

I'm running a rather hefty build in my Azure pipeline, which involves processing a large amount of data and hence requires more memory than my build agent can handle. My approach is therefore to start up a Linux VM, run the build there, and push the resulting Docker image to my container registry.
To achieve this, I'm using the Azure CLI task to issue commands to the VM (e.g. az vm start, az vm run-command ... etc).
The problem I am facing is that az vm run-command "succeeds" even if the script that you run on the VM returns a nonzero status code. For example, this "bad" vm script:
az vm run-command invoke -g <group> -n <vmName> --command-id RunShellScript --scripts "cd /nonexistent/path"
returns the following response:
{
  "value": [
    {
      "code": "ProvisioningState/succeeded",
      "displayStatus": "Provisioning succeeded",
      "level": "Info",
      "message": "Enable succeeded: \n[stdout]\n\n[stderr]\n/var/lib/waagent/run-command/download/87/script.sh: 1: cd: can't cd to /nonexistent/path\n",
      "time": null
    }
  ]
}
So the command succeeds, presumably because it succeeded in executing the script on the VM. The fact that the script actually failed on the VM is buried in the response's "message".
I would like my Azure pipeline task to fail if the script on the VM returns a nonzero status code. How would I achieve that?
One idea would be to parse the response (somehow) and search the text under stderr - but that sounds like a real hassle, and I'm not sure even how to "access" the response within the task.
Have you enabled the option "Fail on Standard Error" on the Azure CLI task? If not, try enabling it and running the pipeline again to see whether the error "cd: can't cd to /nonexistent/path" makes the task fail.
If the task still passes, that error is probably not being written to standard error. In that case you may need to add more command lines to your script to inspect the output of the az command: once any output message indicates an error, execute exit 1 to leave the script and return a standard error so the task fails.
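For example, a hedged sketch of that approach using the Azure CLI's --query and -o tsv options (the resource names are the question's placeholders, and treating everything after the [stderr] marker as the remote script's stderr is a heuristic based on the response shown above, not a documented contract):

# Capture the run-command message and fail the task if the remote script
# wrote anything to stderr.
msg=$(az vm run-command invoke -g <group> -n <vmName> \
      --command-id RunShellScript \
      --scripts "cd /nonexistent/path" \
      --query "value[0].message" -o tsv)
echo "$msg"
stderr_part=$(printf '%s\n' "$msg" | sed -n '/\[stderr\]/,$p' | tail -n +2)
if [ -n "$stderr_part" ]; then
    echo "Remote script reported errors"
    exit 1
fi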
I solved this by using the SSH pipeline task - this allowed me to connect to the VM via SSH, and run the given script on the machine "directly" via SSH.
This means from the context of the task, you get the status code from the script itself running on the VM. You also see any console output inside the task logs, which was obscured when using az vm run-command.
Here's an example:
- task: SSH@0
  displayName: My VM script
  timeoutInMinutes: 10
  inputs:
    sshEndpoint: <sshConnectionName>
    runOptions: inline
    inline: |
      echo "Write your script here"
Note that the SSH connection needs to be set up as a service connection using the Azure Pipelines UI; you then reference the name of that service connection in the YAML.

Using PsExec in Jenkins, even if the script fails, it shows Success

I am trying to run a PowerShell script which first logs in to Azure and then deploys a zip file to Azure, using PsExec.
I am using the following command:
F:\jenkins\VMScripts\PsExec64.exe \\WINSU9 -u "WINSU9\administrator" -p mypassword /accepteula -h PowerShell -noninteractive -File C:\Shared\Trial\webappscript.ps1
I am getting the output as:
PsExec v2.2 - Execute processes remotely
Copyright (C) 2001-2016 Mark Russinovich
Sysinternals - www.sysinternals.com
[
  {
    "cloudName": "AzureCloud",
    "id": "a7b6d14fddef2",
    "isDefault": true,
    "name": "subscription_name",
    "state": "Enabled",
    "tenantId": "b41cd",
    "user": {
      "name": "username@user.com",
      "type": "user"
    }
  }
]
WARNING: Getting scm site credentials for zip deploymentConnecting to WINSU9...
Starting PSEXESVC service on WINSU9...
Connecting with PsExec service on WINSU9...
Starting PowerShell on WINSU9...
PowerShell exited on WINSU9 with error code 0.
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
It is only showing the output of the az login command; the output of the deployment is not shown. Also, if the deployment fails, it still shows SUCCESS, but it should show a failure.
Answering my own question so that others facing the same issue can find help here. As @Alex said, PowerShell is exiting with error code 0, so I tried to return error code 1 whenever any command fails. Since the output of the Azure CLI is in JSON format, I stored that output in a variable and checked whether it contains anything. A sample of the code is below.
$output = az login -u "username" -p "password" | ConvertFrom-Json
if (!$output) {
    Write-Error "Error validating the credentials"
    exit 1
}
The Jenkins job succeeded because PsExec64.exe returned exit code 0, which means that no errors were encountered. Jenkins jobs fail when the underlying scripts fail (e.g. return non-zero exit codes, like 1). If the PsExec call isn't doing what you want it to, I would wrap it in another script which performs post-deploy validation and returns 1 if the deployment failed.
See How/When does Execute Shell mark a build as failure in Jenkins? for more details.
You can also use the powershell step, which should surface the error directly as well.

Build step 'Execute shell script on remote host using ssh' marked build as failure, Finished: FAILURE (exit-status 127)

I am trying to use sh or ssh to connect to a Linux box via Jenkins (I am a noob, admittedly). Even trying an ls command I am getting an error. I did have this working before, however; any help is greatly appreciated.
Building in workspace /var/lib/jenkins/jobs/Demo/workspace executing
script:
USER="jenkins" sh '''#!/bin/bash
HOST=10.59.151.121
USER=devuser
PASSWORD=TGMCfpfS
ls
bye
EOF
'''
: No such file or directory [SSH] exit-status: 127 Build step
'Execute shell script on remote host using ssh' marked build as
failure Finished: FAILURE
For some reason I found that adding the commands after the ''' allows them to be executed. Even though the same warning appears, it works fine!
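For reference, a minimal sketch of what the original snippet appears to be aiming for: it sets HOST/USER/PASSWORD but never opens the here-document it closes with EOF, so nothing is ever sent to the remote box (the trailing bye suggests an sftp session was intended; ssh with a plain ls is shown here for simplicity, and sshpass plus the ssh options are assumptions, not part of the original job):

#!/bin/bash
HOST=10.59.151.121
USER=devuser
# In Jenkins the password should come from a stored credential,
# not be hard-coded in the job definition.
sshpass -p "$PASSWORD" ssh -o StrictHostKeyChecking=no "$USER@$HOST" <<EOF
ls
EOF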
