Unzip problems in GitLab CI pipeline - Terraform

I have a simple GitLab CI step to install Terraform on an Alpine-based image:
step_1:
  stage: stage_x
  image: alpine/doctl:latest
  script:
    - wget https://releases.hashicorp.com/terraform/1.2.8/terraform_1.2.8_linux_amd64.zip
    - unzip terraform_1.2.8_linux_amd64.zip
    - mv terraform /usr/bin/terraform
    - terraform version
When it runs, I get this error from the unzip command:
unzip: 'terraform' exists but is not a regular file
I tried the same commands with the same image on my machine and they work fine.
Any ideas?

The error means that a path named terraform already exists in the job's working directory and is not a regular file (for example a terraform/ directory checked into your repository, since the job runs inside the checked-out project), so unzip refuses to extract over it.
One way to overcome your issue is to use the -d option of unzip to extract Terraform directly into /usr/bin:
-d DIR  Extract into DIR
(Source: unzip --help)
This way, you can also drop the mv line entirely.
So, your step becomes:
step_1:
  stage: stage_x
  image: alpine/doctl:latest
  script:
    - wget https://releases.hashicorp.com/terraform/1.2.8/terraform_1.2.8_linux_amd64.zip
    - unzip terraform_1.2.8_linux_amd64.zip -d /usr/bin
    - terraform version

Related

How to integrate various services for building a project in GitLab CI/CD?

I have a project that requires npm and Gradle to build, and Docker to build and push the image.
At first I thought I should create my own Ubuntu image with Gradle and npm set up, but I found out that is not what Docker images are for.
So I hoped to run the official Gradle and Node images as services so that my script could call those commands, but that is not working for some reason.
My .gitlab-ci.yml:
variables:
  IMAGE_NAME: my.registry.production/project
  IMAGE_TAG: $CI_COMMIT_BRANCH
  GIT_SUBMODULE_STRATEGY: recursive

stages:
  - build
  - deploy

build_project:
  stage: build
  image: ubuntu:jammy
  services:
    - name: node:12.20
      alias: npm
    - name: gradle:6.3.0-jre8
      alias: gradle
  before_script:
    - git submodule init && git submodule update --remote --recursive
  script:
    - cd project-server && npm install && gradle clean build -Pprod -Pwar -x test -x integrationTest

deploy_image:
  stage: deploy
  image: docker:20.10.17
  services:
    - name: docker:20.10.17-dind
      alias: docker
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
    DOCKER_DRIVER: overlay2
  script:
    - docker login -u $REGISTRY_USER -p $REGISTRY_PASSWORD my.registry.production
    - docker build -t $IMAGE_NAME:$IMAGE_TAG .
    - docker push $IMAGE_NAME:$IMAGE_TAG
If anyone has any info on how to solve this, I would greatly appreciate it, since I'm a DevOps novice.
Edit 1:
My Dockerfile for the custom image with Gradle and Node installed:
FROM ubuntu:jammy
LABEL key=DevOps
SHELL ["/bin/bash", "--login", "-i", "-c"]
RUN apt update && apt upgrade -y && apt install curl -y
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
RUN source /root/.bashrc && nvm install 12.14.1
RUN nvm install 12.20.0
RUN apt install zip unzip
RUN curl -s "https://get.sdkman.io" | bash
RUN source "$HOME/.sdkman/bin/sdkman-init.sh"
RUN sdk install java 8.0.302-open
RUN sdk install gradle 3.4.1
SHELL ["/bin/bash", "--login", "-c"]
CMD [ "bin/bash" ]
After I run it, it says that npm is not found in $PATH. I tried java and gradle as well, but they weren't found in the path either.
I don't know why, since I installed them, as you can tell from the Dockerfile.
As far as I know, one Docker image corresponds to one build. So if you have multiple services, you need to build each one into its own Docker image, and then you can bring all the images together in a docker-compose.yml file.
I think you can do the following:
Build the npm project into a Docker image.
Build the Gradle project into a Docker image.
Write the docker-compose.yml file and reference both images.
Once you have done that, the pipeline calls the docker-compose.yml file.
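For illustration, a minimal docker-compose.yml along those lines might look like this (the image names are hypothetical, not taken from your project):

  # minimal sketch: two images produced by the two builds, brought up together
  version: "3.8"
  services:
    frontend:
      # hypothetical image produced by the npm build
      image: my.registry.production/project-frontend:latest
    backend:
      # hypothetical image produced by the Gradle build
      image: my.registry.production/project-backend:latest

The pipeline, or the target host, would then start both with docker compose up -d.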
I hope this will be helpful.
Consider a few suggestions based on fundamental concepts of deployment in a CI/CD pipeline:
Remove the services keyword. See GitLab's official documentation on what the services keyword in the .gitlab-ci.yml file is (and is not) for: the feature provides network-accessible services to your job at runtime, such as a database: https://docs.gitlab.com/ee/ci/services/index.html
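For illustration only, this is the kind of thing services is meant for: a throwaway database the job talks to over the network, not a way to borrow CLI tools from another image (the job name, service, and credentials below are made up):

  test_job:
    image: ubuntu:jammy
    services:
      - name: postgres:14
        alias: db
    variables:
      POSTGRES_PASSWORD: example   # made-up credential, passed through to the service container
    script:
      - apt-get update && apt-get install -y postgresql-client
      # the service is reachable under its alias, here "db"
      - PGPASSWORD=example psql -h db -U postgres -c 'SELECT 1;'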
Your project uses npm as a dependency management system, and Gradle as a build tool. Both are perfectly appropriate to run directly inside the container of your GitLab pipeline job. You need these tools to assemble a build artifact on the same host where the runner downloaded your code.
Think about the overall size of the base image in your build_project job, and consider how the time to download that image onto the runner impacts the job and the overall pipeline duration. If performance can be improved by baking build dependencies into a custom Dockerfile, do that; if the image gets too large, instead use shell commands inside the script block to download the tools at job runtime. There are pros and cons to both approaches; a sketch of the custom-image option follows below.
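As a rough sketch of the custom-image option (base image, package names, and versions are assumptions you would need to adjust for your project):

  FROM ubuntu:jammy
  # JDK and Node from the distribution packages; Gradle from the official distribution zip,
  # since the apt-packaged Gradle is much older than 6.x
  RUN apt-get update \
      && apt-get install -y openjdk-8-jdk nodejs npm curl unzip \
      && rm -rf /var/lib/apt/lists/*
  RUN curl -fsSL -o /tmp/gradle.zip https://services.gradle.org/distributions/gradle-6.3-bin.zip \
      && unzip -q /tmp/gradle.zip -d /opt \
      && ln -s /opt/gradle-6.3/bin/gradle /usr/local/bin/gradle \
      && rm /tmp/gradle.zip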
Break shell scripts into one command per line for easier troubleshooting. In your job logs you will then see exactly which command returned a non-zero exit code:
...
script:
  - cd project-server
  - npm install
  - gradle clean build -Pprod -Pwar -x test -x integrationTest
...
In most cases it's recommended to use the Gradle wrapper (gradlew) instead of the gradle executable directly. Configure it within your project and check the wrapper's configuration files into your version control system; this simplifies your build dependencies: https://docs.gradle.org/current/userguide/gradle_wrapper.html
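Putting these suggestions together, the build job could end up looking roughly like this (the custom image name is hypothetical, and the gradlew call assumes you have checked the wrapper into the repository):

  build_project:
    stage: build
    image: my.registry.production/build-tools:latest   # hypothetical custom image with JDK and Node baked in
    before_script:
      - git submodule init
      - git submodule update --remote --recursive
    script:
      - cd project-server
      - npm install
      - ./gradlew clean build -Pprod -Pwar -x test -x integrationTest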

GitLab runner runs Python 2 instead of Python 3

Here is my .gitlab-ci.yml file:
image: python:3.6

before_script:
  - python --version
  - pip install -r requirements.txt

stages:
  - test

test:
  stage: test
  script:
    - chmod +x ./scripts/lint.sh
    - ./scripts/lint.sh
    - chmod +x ./scripts/tests.sh
    - ./scripts/tests.sh
Note that on my local machine the job runs without any problem, using Python 3.6.13.
Running the test job online, I get an error showing Python 2 being used instead; it does not make any sense to me!
Below is a screenshot of the runner's configuration (it can run untagged jobs) together with the error message.
In the screenshot you've shown, the job is run using the shell executor, so it uses whatever Python version is installed on the machine where you installed the gitlab-runner.
It looks like you want the docker executor so that image: python:3.6 takes effect, so I would reinstall the runner to use the Docker executor.
Alternatively, you can update the machine that uses the shell executor to have Python 3 instead.
Another issue could be that you have not tagged your runners and are using the wrong gitlab-runner. Make sure you've tagged your shell/docker runners, e.g. with shell-runner or docker-runner, and then in the test job add:
tags:
  - docker-runner
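For example, assuming you have registered a Docker-executor runner and tagged it docker-runner (the tag name is just an example), the whole job would look like:

  test:
    stage: test
    image: python:3.6
    tags:
      - docker-runner
    script:
      - chmod +x ./scripts/lint.sh
      - ./scripts/lint.sh
      - chmod +x ./scripts/tests.sh
      - ./scripts/tests.sh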

CodeBuild is unable to find build directory

I am running my buildspec.yml where, after the npm run build command, it should create the build directory in the root path; however, CodeBuild is unable to find the build directory. I have tried everything I could find in the resources, but I am still unable to resolve "no matching base directory path found for build".
PS: I am using CodeCommit as the source, CodeBuild and CodePipeline to run the deployment steps, and an S3 bucket to deploy the build directory.
My buildspec.yml:
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 10
  pre_build:
    commands:
      - echo Installing dependencies...
      - npm cache clean --force
      - npm install
      - npm --version
  build:
    commands:
      - aws s3 rm s3://bucketname --recursive
  post_build:
    commands:
      - pwd
      - cd src
      - npm run build
      - ls -la
      - aws s3 sync build s3://bucketname
artifacts:
  files:
    - "**/*"
I had to remove cd src from the post_build stage; after that it worked with the pipeline, without any error.
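For reference, a post_build section along those lines might look like this (assuming npm run build emits the build directory at the repository root, as described in the question):

  post_build:
    commands:
      - pwd
      - npm run build
      - ls -la
      - aws s3 sync build s3://bucketname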

Zip directory in BitBucket pipeline using image microsoft/dotnet:sdk

In a bitbucket-pipelines.yml BitBucket Pipelines file, I am trying to publish a .NET Core solution, zip it into the form expected by AWS, and then upload it to S3.
My build is based on the image microsoft/dotnet:sdk.
image: microsoft/dotnet:sdk

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script:
          - dotnet restore
          - dotnet publish MyProj/MyProj.csproj -o ../output
          - 7z a output.zip .\output\*
          - 7z a MyPackage.zip service.zip aws-windows-deployment-manifest.json
This step fails on the first 7z command because 7Zip isn't installed. What is the best way from the Windows command line to zip these files? Alternatively, is there a different Docker image I should be using?
I'm using Amazon.Lambda.Tools to deploy and had a similar issue where I needed to install zip. You could use zip for this, or install 7z and use that; either way it only takes a couple of extra apt-get commands.
If you use a deployment step, you'll also get CI/CD metrics and visuals in BitBucket (this is my config):
image: microsoft/dotnet:sdk

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script:
          - dotnet restore
          - dotnet build
          - dotnet test
    - step:
        deployment: test
        script:
          - dotnet tool install -g Amazon.Lambda.Tools
          - export PATH="$PATH:/root/.dotnet/tools"
          - apt-get update
          - apt-get install zip -y # or install 7z instead
          - dotnet lambda deploy-serverless --region $...... # or manually upload to S3
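If you would rather keep the zip-and-upload approach from the question instead of dotnet lambda deploy-serverless, the same apt-get step lets you use plain zip. A rough sketch (output paths are illustrative, and the Windows-style .\output\* from the question has to become a Linux path):

  - step:
      script:
        - apt-get update
        - apt-get install zip -y
        - dotnet restore
        - dotnet publish MyProj/MyProj.csproj -o ./output
        # zip the contents of the publish folder into output.zip at the repo root
        - cd output && zip -r ../output.zip . && cd ..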

gitlab-ci : php -v bash: line 24: php: command not found

I'm experiencing an intermittent problem. Here is my configuration:
before_script:
  - cd sources
  - php -v

test:
  script:
    - phpunit -c mypath
70% of the time I'm getting this error:
$ php -v
bash: line 24: php: command not found
ERROR: Build failed with: exit code 1
But the weird thing is that if I keep re-running the same build, it will eventually pass.
Any ideas?
It turned out that several runners are available, but only one of them can run my job. All I had to do was add tags to my job to select the right runner:
before_script:
  - cd sources
  - php -v

test:
  script:
    - phpunit -c mypath
  tags:
    - php
PHP is not installed in the runner environment where the tests are executed.
You have to make sure that the runner has an environment with PHP installed.
You did not specify what kind of runner you are using in your question, so I assume you have a runner that runs Docker containers (the standard setup).
To accomplish your goal (avoiding bash: line 24: php: command not found) you can go two ways:
Let your project run in a Docker image which has PHP installed:
image: php

before_script:
  - cd sources
  - php -v

test:
  script:
    - phpunit -c mypath
OR
Use a basic image and install PHP yourself:
image: debian

before_script:
  - cd sources
  - apt-get update && apt-get install -y php5*
  - php -v

test:
  script:
    - phpunit -c mypath
If you are not using Docker as the runner executor, then install PHP on the machine where the runner runs.
