I have a problem with my Terraform plan when using Cloud Build: I cannot use the gsutil command from a Terraform module. I get this error:
Error: Error running command 'gsutil -m rsync -d -r ../../../sources/composer gs://toto/dags/': exit status 127. Output: /bin/sh: gsutil: not found
My cloudbuild.yaml:
steps:
- id: 'branch name'
  name: 'alpine'
  entrypoint: 'sh'
  args:
  - '-c'
  - |
    echo "***********************"
    echo "$BRANCH_NAME"
    echo "***********************"
...
# [START tf-apply]
- id: 'tf apply'
  name: 'hashicorp/terraform:0.15.0'
  entrypoint: 'sh'
  args:
  - '-c'
  - |
    if [ -d "terraform/environments/$BRANCH_NAME/" ]; then
      cd terraform/environments/$BRANCH_NAME
      terraform apply -auto-approve
    else
      echo "***************************** SKIPPING APPLYING *******************************"
      echo "Branch '$BRANCH_NAME' does not represent an official environment."
      echo "*******************************************************************************"
    fi
# [END tf-apply]
timeout: 3600s
My module that uploads files to GCS:
resource "null_resource" "upload_folder_content" {
provisioner "local-exec" {
command = "gsutil -m rsync -d -r ${var.dag_folder_path} ${var.composer_dag_gcs}/"
}
}
Since you are using Hashicorp's Terraform image in that step, it is expected that gsutil is not included by default, so you're unable to run the command that your null_resource defines, as opposed to what you can do in your local environment.
To overcome that, you could build your own custom image and push it to Google Container Registry so you can use it in the step afterwards. That option also gives you more flexibility, since you can install whatever dependencies your Terraform code needs.
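For example, a minimal sketch of such an image (the base images, the terraform binary path inside the image, and the resulting image name are assumptions you would verify and pin yourself):
# Hypothetical builder image combining Terraform and the Cloud SDK (which ships gsutil).
FROM hashicorp/terraform:0.15.0 AS terraform

FROM google/cloud-sdk:alpine
# Copy the terraform binary next to gcloud/gsutil so both are on PATH.
COPY --from=terraform /bin/terraform /usr/local/bin/terraform
ENTRYPOINT ["terraform"]
After building and pushing it (for instance to gcr.io/$PROJECT_ID/terraform-gsutil), you would reference that image in the name: field of the 'tf apply' step instead of hashicorp/terraform:0.15.0.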
If you look at the actual error line, at the very end, it says this was the output of the command:
/bin/sh: gsutil: not found
I suspect that gsutil is simply not being found on your shell's path.
Perhaps you need to install whatever package gsutil is found in?
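If installing the package inside the Terraform image proves awkward, an alternative in the same spirit (not what this answer literally suggests) is to run the sync in its own Cloud Build step with an image that already ships gsutil. A hedged sketch, where the step id and the source path relative to the workspace are illustrative and only the bucket comes from the error above:
# Hypothetical extra step: sync the DAGs with the Cloud SDK image instead of
# calling gsutil from inside Terraform.
- id: 'sync dags'
  name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'gsutil'
  args: ['-m', 'rsync', '-d', '-r', './sources/composer', 'gs://toto/dags/']
That keeps the file sync out of Terraform entirely.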
Related
I have a pipeline which pushes packages to a server. In the logs of that script I can see that even when it contains errors (and when I check the server, the file doesn't appear there, which means there is definitely an error), the job still succeeds.
Below is the part of the logs:
ECHO sudo aptly repo add stable abc.deb
Loading packages...
[!] Unable to process abc.deb: stat abc.deb: no such file or directory
[!] Some files were skipped due to errors:
abc.deb
ECHO sudo aptly snapshot create abc-stable_2023.01.02-09.23.36 from repo stable
Snapshot abc-stable_2023.01.02-09.23.36 successfully created.
You can run 'aptly publish snapshot abc-stable_2023.01.02-09.23.36' to publish snapshot as Debian repository.
ECHO sudo aptly publish -passphrase=12345 switch xenial abc-stable_2023.01.02-09.23.36
ERROR: some files failed to be added
Loading packages...
Generating metadata files and linking package files...
Finalizing metadata files...
gpg: WARNING: unsafe permissions on configuration file `/home/.gnupg/gpg.conf'
gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/.gnupg/gpg.conf'
Signing file 'Release' with gpg, please enter your passphrase when prompted:
Clearsigning file 'Release' with gpg, please enter your passphrase when prompted:
gpg: WARNING: unsafe permissions on configuration file `/home/.gnupg/gpg.conf'
gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/.gnupg/gpg.conf'
Cleaning up prefix "." components main...
Publish for snapshot ./xenial [amd64] publishes {main: [abc-stable_2023.01.02-09.23.36]: Snapshot from local repo [stable]: Repository} has been successfully switched to new snapshot.
Cleaning up project directory and file based variables
00:00
Job succeeded
Any idea how to fix this? It should fail if there is an error! Can anyone also explain why it behaves this way, given that a GitLab pipeline normally shows an error whenever a job fails?
Edit 1:
Here is the job which is causing the issue:
deploy-on:
  stage: deploy
  image: ubuntu:20.04
  before_script:
    - apt-get update
    - apt-get install sshpass -y
    - apt-get install aptly -y
    - apt-get install sudo -y
  script:
    - pOSOP=publisher
    - unstableOrStable=stable
    - chmod +x ./pushToServer.sh
    - ./pushToServer.sh
Here is pushToServer.sh:
#!/bin/bash
cat build.env
DebFileNameW=$(cat build.env | grep DebFileNameW | cut -d = -f2)
echo "DebFileNameW=" $DebFileNameW
sshpass -p pass ssh -oStrictHostKeyChecking=no $pOSOP '
echo "ECHO mkdir -p /home/packages/"
mkdir -p /home/packages/
exit
'
sshpass -p pass scp -oStrictHostKeyChecking=no build/$DebFileNameW.deb $pOSOP:/home/packages/
echo "making time"
file_name=$DebFileNameW
current_time=$(date "+%Y.%m.%d-%H.%M.%S")
sshpass -p pass ssh -t -oStrictHostKeyChecking=no $pOSOP '
echo "doing cd"
cd /home/packages
echo "ECHO sudo aptly repo add '$unstableOrStable' '$file_name'.deb"
sudo aptly repo add '$unstableOrStable' '$file_name'.deb
'
Edit 2:
At the second-to-last line of pushToServer.sh, i.e. after the line sudo aptly repo add '$unstableOrStable' '$file_name'.deb and before the closing ', I added these two ways of catching the error, but it is still not working:
Way 1:
if [[ ! $? -eq 0 ]]; then
  print_error "The last operation failed."
  exit 1
fi
Way 2:
retVal=$?
echo "ECHOO exit status" $retVal
if [ $retVal -ne 0 ]; then
  echo "<meaningful message>"
  exit $retVal
fi
Neither way works, and the result is the same.
Output:
ECHO sudo aptly repo add stable abc.deb
Loading packages...
[!] Unable to process abc.deb: stat abc.deb: no such file or directory
ERROR: some files failed to be added
[!] Some files were skipped due to errors:
abc.deb
ECHO sudo aptly snapshot create abc-stable_2023.01.05-05.59.44 from repo stable
Snapshot abc-stable_2023.01.05-05.59.44 successfully created.
You can run 'aptly publish snapshot abc-stable_2023.01.05-05.59.44' to publish snapshot as Debian repository.
ECHO sudo aptly publish -passphrase=12345 switch xenial abc-stable_2023.01.05-05.59.44
Loading packages...
Generating metadata files and linking package files...
Finalizing metadata files...
Signing file 'Release' with gpg, please enter your passphrase when prompted:
gpg: WARNING: unsafe permissions on configuration file `/home/publisher/.gnupg/gpg.conf'
gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/publisher/.gnupg/gpg.conf'
gpg: WARNING: unsafe permissions on configuration file `/home/publisher/.gnupg/gpg.conf'
gpg: WARNING: unsafe enclosing directory permissions on configuration file `/home/publisher/.gnupg/gpg.conf'
Clearsigning file 'Release' with gpg, please enter your passphrase when prompted:
Cleaning up prefix "." components main...
Publish for snapshot ./xenial [amd64] publishes {main: [abc-stable_2023.01.05-05.59.44]: Snapshot from local repo [stable]: Repository} has been successfully switched to new snapshot.
ECHOO exit status 0
Cleaning up project directory and file based variables
00:00
Job succeeded
Please note: I added the echo "ECHOO exit status" $retVal statement, and it shows exit status 0, which means $? itself doesn't hold the right value. I was expecting $retVal, which is $?, to be 1 or something other than 0 (success) for this to work.
Any pointers?
I tried the multiple different approaches mentioned in the comment replies and in my edits. Nothing worked. I was finally able to solve it this way, as mentioned here.
I just put these lines at the start of my script, after #!/bin/bash:
#!/bin/bash
set -e
set -o pipefail
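For reference, a minimal sketch (untested, reusing the variables from the script above) of why $? kept showing 0 and how to surface the failure explicitly: ssh returns the exit status of the last remote command, so when other remote commands run after the failing aptly repo add, the failure is masked; isolating that command lets set -e stop the job.
#!/bin/bash
set -e
set -o pipefail

# ssh exits with the status of the last remote command, so keep the command
# whose failure should fail the job at the end of the remote block (or run it
# on its own, as here). set -e then aborts the script when it returns non-zero.
sshpass -p pass ssh -oStrictHostKeyChecking=no "$pOSOP" \
  "cd /home/packages && sudo aptly repo add $unstableOrStable $file_name.deb"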
I have written a workflow file that prepares the runner to connect to the desired server with SSH, so that I can run an Ansible playbook.
ssh -t -v theUser@theHost shows me that the SSH connection works.
The Ansible script, however, tells me that the sudo password is missing.
If I leave out the line ssh -t -v theUser@theHost, Ansible throws a connection timeout and can't connect to the server:
=> fatal: [***]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host *** port 22: Connection timed out
First, I don't understand why Ansible can connect to the server only if I execute the command ssh -t -v theUser@theHost.
The next problem is that the user does not need any sudo password to have execution rights. The same Ansible playbook works very well from my local machine without using the sudo password. I configured the server so that the user has enough rights in the desired folder, recursively.
It simply doesn't work from my GitHub Action.
Can you please tell me what I am doing wrong?
My workflow file looks like this:
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
      - name: Run Ansible Playbook
        run: |
          mkdir -p /home/runner/.ssh/
          touch /home/runner/.ssh/config
          touch /home/runner/.ssh/id_rsa
          echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
          echo -e "Host ${{secrets.SSH_HOST}}\nIdentityFile /home/runner/.ssh/id_rsa" >> /home/runner/.ssh/config
          ssh-keyscan -H ${{secrets.SSH_HOST}} > /home/runner/.ssh/known_hosts
          cd myproject-infrastructure/ansible
          eval `ssh-agent -s`
          chmod 700 /home/runner/.ssh/id_rsa
          ansible-playbook -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
Finally found it
First basic setup of the action itself.
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
Next, add a job to run and check out the repository in the first step.
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
Next, set up SSH correctly.
- name: Setup ssh
  shell: bash
  run: |
    service ssh status
    eval `ssh-agent -s`
First of all you want to be sure that the SSH service is running. The SSH service was already running in my case.
However, when I experimented with Docker I had to start the service manually first, with service ssh start. Next, be sure that the .ssh folder exists for your user and copy your private key to that folder. I have added a GitHub secret to my repository where I saved my private key. In my case the user is runner.
mkdir -p /home/runner/.ssh/
touch /home/runner/.ssh/id_rsa
echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
Make sure that your private key is protected. If not, the SSH service won't accept working with it. To do so:
chmod 700 /home/runner/.ssh/id_rsa
Normally when you start an SSH connection you are asked whether you want to save the host permanently as a known host. As we are running automatically, we can't type in yes. If you don't answer, the process will fail.
You have to prevent the process from being interrupted by the prompt. To do so, you add the host to the known_hosts file yourself. You use ssh-keyscan for that. Unfortunately ssh-keyscan can produce output for different formats/key types.
Simply using ssh-keyscan was not enough in my case; I had to add the key type options to the command. The generated output has to be written to the known_hosts file in the .ssh folder of your user, in my case /home/runner/.ssh/known_hosts.
So the next command is:
ssh-keyscan -t rsa,dsa,ecdsa,ed25519 ${{secrets.SSH_HOST}} >> /home/runner/.ssh/known_hosts
Now you are almost there. Just call the ansible-playbook command to run the Ansible script. I created a new step where I changed the directory to the folder in my repository where my Ansible files are saved.
- name: Run ansible script
  shell: bash
  run: |
    cd infrastructure/ansible
    ansible-playbook --private-key /home/runner/.ssh/id_rsa -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
The complete file:
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  run-playbooks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: true
          token: ${{secrets.REPO_TOKEN}}
      - name: Setup SSH
        shell: bash
        run: |
          eval `ssh-agent -s`
          mkdir -p /home/runner/.ssh/
          touch /home/runner/.ssh/id_rsa
          echo -e "${{secrets.SSH_KEY}}" > /home/runner/.ssh/id_rsa
          chmod 700 /home/runner/.ssh/id_rsa
          ssh-keyscan -t rsa,dsa,ecdsa,ed25519 ${{secrets.SSH_HOST}} >> /home/runner/.ssh/known_hosts
      - name: Run ansible script
        shell: bash
        run: |
          service ssh status
          cd infrastructure/ansible
          cat setup-prod.yml
          ansible-playbook -vvv --private-key /home/runner/.ssh/id_rsa -u ${{secrets.ANSIBLE_DEPLOY_USER}} -i hosts.yml setup-prod.yml
Next enjoy...
An alternative, without explaining why you get those errors, is to test the actions/run-ansible-playbook action to run your playbook.
That way, you can check whether the "sudo Password is missing" error also occurs in that configuration.
- name: Run playbook
  uses: dawidd6/action-ansible-playbook@v2
  with:
    # Required, playbook filepath
    playbook: deploy.yml
    # Optional, directory where playbooks live
    directory: ./
    # Optional, SSH private key
    key: ${{secrets.SSH_PRIVATE_KEY}}
    # Optional, literal inventory file contents
    inventory: |
      [all]
      example.com
      [group1]
      example.com
    # Optional, SSH known hosts file content
    known_hosts: .known_hosts
    # Optional, encrypted vault password
    vault_password: ${{secrets.VAULT_PASSWORD}}
    # Optional, galaxy requirements filepath
    requirements: galaxy-requirements.yml
    # Optional, additional flags to pass to ansible-playbook
    options: |
      --inventory .hosts
      --limit group1
      --extra-vars hello=there
      --verbose
I am trying to copy some files from my local Terraform directory into my Datadog resources, into a preexisting configuration path.
When I try the below in my datadog-values.yaml I do not see any of my configuration files copied into the location. I also cannot see any logs, even in debug mode, that are telling me whether it failed or the path was incorrect.
See datadog helm-charts
# agents.volumes -- Specify additional volumes to mount in the dd-agent container
volumes:
  - hostPath:
      path: ./configs
    name: openmetrics_config
# agents.volumeMounts -- Specify additional volumes to mount in all containers of the agent pod
volumeMounts:
  - name: openmetrics_config
    mountPath: /etc/datadog-agent/conf.d/openmetrics.d
    readOnly: true
What I've tried
I can manually copy the configuration files into the directory with a shell script like the one below, but of course if the Datadog pod names change on restart I have to update it manually.
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl rollout restart deployment datadog-cluster-agent -n datadog
The volumes you use here don't work the way you expect. The ./configs directory is not your local directory; Kubernetes has no idea about your local machine.
But fear not.
There are a few ways of doing this, and it all depends on your needs. They are:
Terraformed config
Terraformed mount
Terraformed copy config action
Terraformed config
To have a config file terraformed means:
to have the config updated in k8s whenever the file changes - we want Terraform to track those changes
to have the config uploaded before the service using it starts (this is a configuration file after all; it configures something, I assume)
DISCLAIMER - the service won't restart after a config change (that's achievable, but it's another topic)
To achieve this, create a config map for every config:
resource "kubernetes_config_map" "config" {
metadata {
name = "some_name"
namespace = "some_namespace"
}
data = {
"config.conf" = file(var.path_to_config)
}
}
and then use it in your volumeMounts. I assume that you're working with the helm provider, so this should probably be:
set {
  name = "agents.volumeMounts"
  value = [{
    "mountPath": "/where/to/mount"
    "name": kubernetes_config_map.config.metadata.0.name
  }]
}
In the example above I used a single config and a single volume for simplicity; for multiple configs a for_each should be enough (see the sketch below).
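A minimal sketch of that for_each variant, assuming a hypothetical var.configs map from config-map names to local file paths (the variable name, namespace, and data key are placeholders):
variable "configs" {
  type = map(string)
  # e.g. { bookie = "./configs/bookie_conf.yaml" }
}

resource "kubernetes_config_map" "config" {
  for_each = var.configs

  metadata {
    name      = each.key
    namespace = "datadog"
  }

  data = {
    # One file per config map; the key becomes the file name inside the mount.
    "conf.yaml" = file(each.value)
  }
}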
Terraformed mount
Another variant is that you don't want Terraform to track the configurations. In that case, what you want to do is:
Create a single storage (it can be mounted storage from your kube provider, or a dynamic volume created in Terraform - choose your poison)
Mount this storage as a Kubernetes volume (kubernetes_persistent_volume_v1 in Terraform); a sketch follows below
Set the set {...} block like in the previous section.
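A minimal sketch of such a volume, assuming a hostPath-backed persistent volume (the name, size and path are placeholders; a cloud disk source would replace host_path in practice):
resource "kubernetes_persistent_volume_v1" "config_store" {
  metadata {
    name = "config-store"
  }

  spec {
    capacity = {
      storage = "1Gi"
    }
    access_modes = ["ReadWriteOnce"]

    persistent_volume_source {
      host_path {
        path = "/mnt/configs"
      }
    }
  }
}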
Terraformed copy config action
The last one, and my least favorite option, is to call a copy action from Terraform. It's a last resort... Provisioners.
Even the Terraform docs say it's bad, yet it has one advantage: it's super easy to use. You can simply call your shell command here - it could be scp, rsync, or even (but please don't do it) kubectl cp.
To not encourage this solution any further, I'll just leave the doc of null_resource, which uses provisioner "remote-exec" (you can use "local-exec"), here.
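For completeness, a minimal sketch of that discouraged provisioner pattern, reusing the config path and one of the pod names from the question (the trigger and the hard-coded pod name are illustrative):
resource "null_resource" "copy_bookie_config" {
  # Re-run the copy whenever the local config file changes.
  triggers = {
    config_hash = filemd5("./configs/bookie_conf.yaml")
  }

  provisioner "local-exec" {
    command = "kubectl -n datadog cp ./configs/bookie_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d/"
  }
}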
I have an enhancement to use Kaniko for Docker builds on GitLab, but the pipeline fails to locate the dynamically generated Dockerfile with this error:
$ echo "Docker build"
Docker build
$ cd ./src
$ pwd
/builds/group/subgroup/labs/src
$ cp /builds/group/subgroup/labs/src/Dockerfile /builds/group/subgroup/labs
cp: can't stat '/builds/group/subgroup/labs/src/Dockerfile': No such file or directory
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: command terminated with exit code 1
For context, the pipeline was designed to generate a Dockerfile dynamically for any particular project:
ci-scripts.yml
.create_dockerfile:
  script: |
    echo "checking dockerfile existence"
    if ! [ -e Dockerfile ]; then
      echo "dockerfile doesn't exist. Trying to create a new dockerfile from csproj."
      docker_entrypoint=$(grep -m 1 AssemblyName ./src/*.csproj | sed -r 's/\s*<[^>]*>//g' | sed -r 's/\r$//g').dll
      cat > Dockerfile << EOF
    FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
    WORKDIR /app
    COPY ./publish .
    ENTRYPOINT dotnet $docker_entrypoint
    EOF
      echo "dockerfile created"
    else
      echo "dockerfile exists"
    fi
In the main pipeline, all that was needed was to reference ci-scripts.yml as appropriate and do the docker push.
After switching to Kaniko for Docker builds, Kaniko itself expects a Dockerfile at the location ${CI_PROJECT_DIR}/Dockerfile. In my context this is the path /builds/group/subgroup/labs.
The main pipeline looks like this:
build-push.yml
docker_build_dev:
  tags:
    - aaa
  image:
    name: gcr.io/kaniko-project/executor:v1.6.0-debug
    entrypoint: [""]
  only:
    - develop
  stage: docker
  before_script:
    - echo "Docker build"
    - pwd
    - cd ./src
    - pwd
  extends: .create_dockerfile
  variables:
    DEV_TAG: dev-latest
  script:
    - cp /builds/group/subgroup/labs/src/Dockerfile /builds/group/subgroup/labs
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${DEV_TAG}"
In the block below I kept the dynamically generated Dockerfile at the same path (./src) by switching from the default Docker build directory (/builds/group/subgroup/labs) to /builds/group/subgroup/labs/src. The assumption is that even with dynamic generation the Dockerfile should still live under ./src.
Expected
The dynamically generated Dockerfile should be available at the default Docker build path /builds/group/subgroup/labs after the ci-scripts.yml script finishes executing.
When I maintain a Dockerfile at the project root (at /src), without Kaniko, the Docker build runs successfully, but once I switch to dynamically generating the Dockerfile (with Kaniko) the pipeline cannot find it. When the Dockerfile is maintained at the project root this way, as opposed to dynamic generation, I have to copy the file to the Kaniko load path via:
script:
  - cp ./src/Dockerfile /builds/group/subgroup/labs/Dockerfile
  - mkdir -p /kaniko/.docker
I don't fully understand how ci-scripts.yml works (it was written by someone who is no longer around). I have tried to pwd in the script itself to check which directory it's executing from:
.create_dockerfile:
  script: |
    pwd
    echo "checking dockerfile existence"
    ....
    ....
but I get an error:
$ - pwd # collapsed multi-line command
/scripts-1175-34808/step_script: eval: line 123: -: not found
My questions:
Where exactly does GitLab store Dockerfiles that are generated on the fly?
Is the generated Dockerfile treated as an artifact, and if so, at which path will it be?
My issue is that I'm trying to configure a Jenkins job using a declarative pipeline script, and I'm facing an issue with how a variable is referenced.
For example:
pipeline {
    agent any
    environment {
        JAVA_HOME = "/usr/lib/jvm/java-11-openjdk-11.0.11.0.9-1.el7_9.x86_64"
        M2_HOME = "/opt/tools/maven"
        PATH = "${JAVA_HOME}/bin:${M2_HOME}/bin:$PATH"
        IMAGE_NAME = "us.gcr.io/myproject/myimage"
Here I'm using the IMAGE_NAME variable so I can reference it at different stages in the script. Below you will see the relevant part of the script:
stage('Build DockerImage for Pathla Backend') {
    steps {
        echo "Build DockerImage of Pathla Backend"
        sh '''
            d=`date +%Y%m%d%H%M`
            build_version=${d}-v${BUILD_NUMBER}
            pwd
            cd mlp
            cd pathla/pathla-service
            cp -r /etc/mlp/./*.pem src/main/resources/METINF/resources/
            mvn -e clean install -DskipTests=true -Dquarkus.container-image.build=true
            IMAGE_PATH="${IMAGE_NAME}:${build_version}"
            docker tag us.gcr.io/myproject/myimage $IMAGE_PATH
            echo "Docker image to Push:: $IMAGE_PATH"
            echo "Once pushed run this command:: docker rmi $IMAGE_PATH"
        '''
    }
}
Now if you look at the line IMAGE_PATH="${IMAGE_NAME}:${build_version}" in the script above, we are composing IMAGE_PATH from two variables: IMAGE_NAME (to provide the image name) and build_version (to provide the build version for the image).
We are calling the same variable (IMAGE_PATH) in the script below to scan the image using the Prisma Cloud scanner, but it doesn't work.
"stage('Scan via PrismaCloud for vulnerabilities of Pathla Backend') {
steps {
script {
withCredentials([
usernamePassword(credentialsId: 'mycluster',
usernameVariable: 'username',
passwordVariable: 'password')
])
{
sh '''
imagename=${IMAGE_PATH}
/opt/mlp/twistcli images scan -u $username -p $password --details --address https://10.XXX.XXX.XXX:8083 ${imagename}
> /bdnfs/pipelines/scan-out-details-backend.txt
If you check the variable ${imagename} above, it should resolve to the IMAGE_PATH variable, but it does not, and we do not get the results. Also, when I build and run this job, I get the following error in the logs:
+ imagename=
+ /opt/mlp/twistcli images scan -u **** -p **** --details --address https://10.XXX.XXX.XXX:8083
**"missing image ID"**
I want the image us.gcr.io/myproject/myimage to be turned into a variable that we can then reference at any stage of this pipeline script.