Why is the Bitbucket Pipelines variable $BITBUCKET_REPO_SLUG not being converted to the repository name when building a ZIP file?

I am creating a Bitbucket pipeline to deploy code from Bitbucket to an AWS EC2 instance. The steps required to do this in the pipeline are:
1. Package all the code from Bitbucket into a ZIP file
2. Upload that ZIP file to an S3 bucket
3. Deploy the ZIP file to the EC2 instance using AWS CodeDeploy
I want the ZIP file to be called <repository_name>.zip when uploading it to the S3 bucket. To achieve that, I use the $BITBUCKET_REPO_SLUG pipeline variable and set up the first step of the pipeline as shown below, where applications is the folder inside the repository that I want to package into the ZIP file.
staging:
  - step:
      name: Zip Code
      image: atlassian/default-image:3
      script:
        - zip -r "$BITBUCKET_REPO_SLUG.zip" "applications"
      artifacts:
        - "$BITBUCKET_REPO_SLUG.zip"
However, from the pipeline output below (under Build teardown) you can see that $BITBUCKET_REPO_SLUG.zip is not expanded to <repository_name>.zip as expected.
zip -r "$BITBUCKET_REPO_SLUG.zip" "applications"
+ zip -r "$BITBUCKET_REPO_SLUG.zip" "applications"
adding: applications/ (stored 0%)
adding: applications/configuration/ (stored 0%)
adding: applications/configuration/trade_capture_trayport_private.ini (deflated 58%)
adding: applications/trade_capture_etrm/ (stored 0%)
adding: applications/trade_capture_etrm/trade_capture_trayport_private.ps1 (deflated 73%)
adding: applications/trade_capture_etrm/etrm_private_alerting.ps1 (deflated 65%)
adding: applications/trade_capture_etrm/tests/ (stored 0%)
adding: applications/trade_capture_etrm/tests/trayport.tests.ps1 (deflated 93%)
adding: applications/appspec.yml (deflated 70%)
Build teardown
Searching for files matching artifact pattern $BITBUCKET_REPO_SLUG.zip
Searching for test report files in directories named [test-results, failsafe-reports, test-reports, TestResults, surefire-reports] down to a depth of 4
Finished scanning for test reports. Found 0 test report files.
Merged test suites, total number tests is 0, with 0 failures and 0 errors.
In the next step, when uploading to S3, I use the same approach to reference the ZIP file, as shown in this code:
- step:
    name: ⬆️ Upload to S3
    services:
      - docker
    oidc: true
    script:
      # Test upload
      - pipe: atlassian/aws-code-deploy:1.1.1
        variables:
          AWS_DEFAULT_REGION: $AWS_REGION
          AWS_OIDC_ROLE_ARN: $AWS_OIDC_ROLE_ARN
          COMMAND: 'upload'
          APPLICATION_NAME: $APPLICATION_NAME
          ZIP_FILE: "$BITBUCKET_REPO_SLUG.zip"
          S3_BUCKET: $S3_BUCKET_STAGING
          VERSION_LABEL: $BITBUCKET_REPO_SLUG
You can see from the output of the "Upload to S3" step that $BITBUCKET_REPO_SLUG.zip is correctly expanded to <repository_name>.zip:
INFO: Authenticating with a OpenID Connect (OIDC) Web Identity Provider
INFO: Executing the aws-ecr-push-image pipe...
INFO: Uploading powershell_trade_capture.zip to S3.
Traceback (most recent call last):
  File "/pipe.py", line 264, in <module>
    pipe.run()
  File "/pipe.py", line 254, in run
    self.upload_to_s3()
  File "/pipe.py", line 230, in upload_to_s3
    with open(self.get_variable('ZIP_FILE'), 'rb') as zip_file:
FileNotFoundError: [Errno 2] No such file or directory: 'powershell_trade_capture.zip'
Why is the pipeline variable $BITBUCKET_REPO_SLUG correctly expanded to its value in the "Upload to S3" step, but treated as a literal string instead of the repository name in the "Zip Code" step?

The problem is that variable substitution is not currently supported in the artifacts section of Bitbucket Pipelines. See https://jira.atlassian.com/browse/BCLOUD-21666 (created in February 2022) for details.
The solution we found was to work around the problem and avoid using the artifacts section altogether. We did this by combining the "Zip Code" and "Upload to S3" steps of the pipeline, which meant it was no longer necessary to pass an artifact between steps.
The new combined step was set up as follows.
- step:
    name: ⬆️ Zip & Upload to S3
    services:
      - docker
    oidc: true
    image: atlassian/default-image:3
    script:
      # Build the zip file
      - zip -r $BITBUCKET_REPO_SLUG.zip "applications"
      # Upload the zip file to S3
      - pipe: atlassian/aws-code-deploy:1.1.1
        variables:
          AWS_DEFAULT_REGION: $AWS_REGION
          AWS_OIDC_ROLE_ARN: $AWS_OIDC_ROLE_ARN
          COMMAND: "upload"
          APPLICATION_NAME: $APPLICATION_NAME
          ZIP_FILE: $BITBUCKET_REPO_SLUG.zip
          S3_BUCKET: $S3_BUCKET_STAGING
          VERSION_LABEL: $BITBUCKET_REPO_SLUG
In this case there was no problem with the use of $BITBUCKET_REPO_SLUG, and the pipeline ran successfully.
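An alternative, if you would rather keep the two steps separate, is to give the archive a literal name so the artifacts pattern contains no variable at all. This is an untested sketch (the file name package.zip is arbitrary, chosen here for illustration):
- step:
    name: Zip Code
    image: atlassian/default-image:3
    script:
      # A literal file name means the artifacts glob needs no variable
      - zip -r package.zip "applications"
    artifacts:
      - package.zip
Since variable expansion works fine inside script sections, the upload step can still rename the file, e.g. cp package.zip "$BITBUCKET_REPO_SLUG.zip", before handing it to the pipe.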

You don't need quotes for the zip or artifacts entries, and I would assume it's not converting the variable correctly because of that.

Related

Download artifacts: No artifacts found

Uploading artifacts for successful job
00:00
Uploading artifacts...
WARNING: sitespeed-results/: no matching files
ERROR: No files to upload
Uploading artifacts...
WARNING: browser-performance.json: no matching files
ERROR: No files to upload
Cleaning up project directory and file based variables
00:01
Job succeeded
gitlab-ci.yml example:
include:
  template: Verify/Browser-Performance.gitlab-ci.yml

browser_performance:
  variables:
    URL: https://example.com
    DEGRADATION_THRESHOLD: 5
This error usually happens when no matching files are found. Either move the files to the project's root directory, or specify the correct path to the files and folders. No matter where your script ends, GitLab Runner always searches for these files relative to the main/root directory of the project.
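For example, a sketch under the assumption that the tool writes its results outside the project directory (the /tmp/sitespeed-results source path is hypothetical; adjust it to wherever the output actually lands): copy the output under $CI_PROJECT_DIR before the job finishes so the artifact pattern can match it:
browser_performance:
  after_script:
    # Artifact paths are matched relative to $CI_PROJECT_DIR,
    # so move the results into the project directory first
    - cp -r /tmp/sitespeed-results "$CI_PROJECT_DIR/sitespeed-results"
  artifacts:
    paths:
      - sitespeed-results/
Keep in mind that overriding keys of a templated job (here after_script and artifacts) replaces the template's own values for those keys.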
Create a new folder inside $BITBUCKET_CLONE_DIR, then copy your desired artifacts into that new folder.
After that, use the new folder as your artifacts path.
See the sample step below:
- step:
    name: XXXX
    image:
      name: YYYY
      username: $xxxy
      password: $yyyy
    script:
      - mvn clean install --file $BITBUCKET_CLONE_DIR/xxxx/pom.xml
      - mkdir -p $BITBUCKET_CLONE_DIR/dist
      - cp -R xxx/target/* $BITBUCKET_CLONE_DIR/dist
      - ls -lah $BITBUCKET_CLONE_DIR/dist
    artifacts:
      - dist/**
I hope this works
Check the path you have provided in the artifacts section:
artifacts:
  paths:
    - $CI_PROJECT_DIR
  expire_in: 1 week
If you are choosing files only with some specific extensions like below, then GitLab Runner will not be able to find files with any other extensions:
artifacts:
  paths:
    - $CI_PROJECT_DIR/*.xlsx
  expire_in: 1 week
So be careful about what you specify in "paths".
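If several file types are needed, each extension must get its own entry; an illustrative sketch (the .csv pattern is ours, not part of the original answer):
artifacts:
  paths:
    - $CI_PROJECT_DIR/*.xlsx
    - $CI_PROJECT_DIR/*.csv
  expire_in: 1 week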

How to run a yq command from an included project in GitLab?

I have two projects, JWT and RELEASE-MGMT, under the same group in GitLab. I have the pipelines as follows.
JWT (gitlab-ci.yml):
stages:
  - prjname

include:
  - project: 'testing-group/RELEASE-MGMT'
    ref: 'main'
    file:
      - '/scripts/testing-prj-name.yml'
RELEASE-MGMT (/scripts/testing-prj-name.yml):
testyqcommand:
  stage: prjname
  before_script:
    - pip3 install jq
    - pip3 install awscli
    - pip3 install yq
  script:
    - pwd
    - ls -ltr
    - echo $CI_PROJECT_NAME
    - yq -r '.$CI_PROJECT_NAME.projectname' projectnames.yml
I am getting the below error:
yq: error: argument files: can't open
'./scripts/testing-service-name.yml': [Errno 2] No such file or
directory: './scripts/testing-service-name.yml'
I was thinking that since the two projects exist in the same group, we could do this without using multi-project pipelines, especially as RELEASE-MGMT is the project that is included in all the microservices we have.
include: is a logical mechanism for rendering a pipeline configuration. It won't actually bring any files into the workspace of the project running the pipeline.
If you want to run yq against a YAML file in another project, you'll have to clone the project first or otherwise retrieve the file as part of your CI job -- for example by using the files API or cloning the repo with the job token:
script:
  - git clone https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com/<namespace>/<project>
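A sketch of the files API alternative mentioned above (unverified; <project-id> is a placeholder, and the job token must be allowed to read the target project, otherwise substitute a personal or project access token). Note also that $CI_PROJECT_NAME will not expand inside single quotes, so the yq filter below uses double quotes:
script:
  # Fetch one file from the other project via the repository files API
  - curl --fail --header "JOB-TOKEN: ${CI_JOB_TOKEN}" "${CI_API_V4_URL}/projects/<project-id>/repository/files/projectnames.yml/raw?ref=main" -o projectnames.yml
  - yq -r ".${CI_PROJECT_NAME}.projectname" projectnames.yml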

Azure DevOps deployment pipeline failing with [error]Error: Cannot find any file based on /Users/runner/work/1/s/XYZ/**/ABC.ipa

Artifact is downloaded to '/Users/runner/work/1/' and the deployment task is looking for the artifact at '/Users/runner/work/1/s/XYZ/**/ABC.ipa'.
In the build stage, artifacts are published to PathtoPublish: '$(build.artifactstagingdirectory)/${{parameters.env}}', and in the deployment they are accessed using '$(System.DefaultWorkingDirectory)/XYZ/**/ABC.ipa'.
Please help to access the ipa file correctly.
Two pre-defined variables are used, but they point to different folder structures (docs here):
build.artifactstagingdirectory: the local path on the agent where any artifacts are copied to before being pushed to their destination. For example: c:\agent_work\1\a -- note there is no s folder.
System.DefaultWorkingDirectory: the local path on the agent where your source code files are downloaded. For example: c:\agent_work\1\s.
It's recommended to add a PowerShell task to list all files in the directory and its subdirectories, so that we can find where the files are stored on the agent. Code sample:
- task: PowerShell@2
  name: listfiles
  inputs:
    targetType: 'inline'
    script: 'Get-ChildItem -Path $(System.DefaultWorkingDirectory) -Recurse -File'
After we confirm where the file resides, we can modify the path for the task so the file can be found.
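For instance (a hypothetical sketch; drop is a placeholder artifact name, and ipaPath stands for whatever file-pattern input the deployment task actually exposes), if the listing shows the .ipa under the artifact download location rather than the sources folder, the pattern would change along these lines:
# Before: resolves under the sources folder, where nothing was downloaded
ipaPath: '$(System.DefaultWorkingDirectory)/XYZ/**/ABC.ipa'
# After: resolves under the pipeline's artifact download location
ipaPath: '$(Pipeline.Workspace)/drop/XYZ/**/ABC.ipa'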

GitLab secret detection artifacts cannot be accessed in another stage

I am trying the GitLab default secret detection option.
Here is the GitLab CI file I am trying:
include:
  - template: Security/Secret-Detection.gitlab-ci.yml

stages:
  - test

test:
  stage: test
  artifacts:
    reports:
      secret_detection: gl-secret-detection-report.json
  script:
    - pwd
    - ls
    - cat gl-secret-detection-report.json
I am not getting the file as an artifact.
The error I am getting is:
cat: can't open 'gl-secret-detection-report.json': No such file or directory
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
However, the default GitLab job does create the file:
Uploading artifacts for successful job
00:03
Uploading artifacts...
gl-secret-detection-report.json: found 1 matching files and directories
Uploading artifacts as "secret_detection" to coordinator... ok id=1asdfas239 responseStatus=201 Created token=jT28Z-ot
Cleaning up file based variables
00:00
Job succeeded
So the secret_detection job, which comes from the template by default, is working.
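One common way to consume a report produced by another job is to declare that job as a dependency so its artifacts are downloaded; a hedged sketch, not verified against this template (the consuming job name check-report is ours):
check-report:
  stage: test
  needs:
    - job: secret_detection
      artifacts: true
  script:
    - cat gl-secret-detection-report.json
The custom test job in the question never runs the scanner itself, so the report can't exist in its own workspace; it is created by (and attached to) the template's secret_detection job.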

Stop GitLab Runner from removing a directory

I have a directory which is generated during a build, and it should not be deleted in subsequent builds. I tried to keep the directory using cache in .gitlab-ci.yml:
cache:
  key: "$CI_BUILD_REF_NAME"
  untracked: true
  paths:
    - target_directory/

build-runner1:
  stage: build
  script:
    - ./build-platform.sh target_directory
In the first build a cache.zip is generated, but in the next builds the target_directory is deleted and the cache.zip is extracted again, which takes a very long time. Here is a log of the second build:
Running with gitlab-ci-multi-runner 1.11.
on Runner1
Using Shell executor...
Running on Runner1...
Fetching changes...
Removing target_directory/
HEAD is now at xxxxx Update .gitlab-ci.yml
From xxxx
Checking out xxx as master...
Skipping Git submodules setup
Checking cache for master...
Successfully extracted cache
Is there a way to make GitLab Runner not remove the directory in the first place?
What you need is to use job artifacts:
Artifacts is a list of files and directories which are attached to a job after it completes successfully.
.gitlab-ci.yml file:
your job:
  before_script:
    - do something
  script:
    - do another thing
    - do something to generate your zip file (example: myFiles.zip)
  artifacts:
    paths:
      - myFiles.zip
After a job finishes, if you visit the job's page, you can see that there is a button for downloading the artifacts archive.
Note
If you need to pass artifacts between different jobs, you need to use dependencies; see the sketch below.
GitLab has good documentation about this if you really have this need: http://docs.gitlab.com/ce/ci/yaml/README.html#dependencies
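A minimal sketch of that (the consuming job is ours for illustration, and it assumes "your job" runs in an earlier stage):
another job:
  stage: test
  dependencies:
    - your job
  script:
    # myFiles.zip produced by "your job" is downloaded into this workspace
    - unzip myFiles.zip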
