I am trying to configure a variable, in this case _PATH, using Cloud Build. I have multiple paths (folders) in my GitHub repo with .tf files, and I want Terraform to recognize any change made in any of those folders at the moment of pushing and triggering the build.
I was wondering if there is any way to loop over values separated by commas in the trigger options and then use a "for" loop in a bash script, or perhaps there is another, better way that I don't know about yet.
Thanks for the help!
Sadly, I haven't found a way yet to set variables at the cloudbuild.yaml level.
Note that Cloud Build was originally called Cloud Container Builder, which is why it behaves differently from other CI/CD tools.
I think there may be another way to get the behavior you want though:
Implement the bash looping logic in a script (e.g. sh/run_terraform_applys.sh; a sketch of such a script follows after the build step below), and create a Dockerfile for it in your repo:
FROM hashicorp/terraform:1.0.0
WORKDIR /workspace
# Copy the scripts first so this layer is cached independently of the rest of the repo
COPY sh/ /workspace/sh/
# Only needed if your scripts use Python (the base image must provide pip for this to work)
COPY requirements.txt /workspace/
RUN pip install -r requirements.txt
# Run the Terraform applies as part of the image build
RUN . sh/run_terraform_applys.sh
COPY . /workspace/
RUN . sh/other_stuff_to_do.sh
Use the cloud-builders image to build your image; as a consequence, the docker build will run sh/run_terraform_applys.sh within the Docker image (you can push your image to GCR to allow for layer caching):
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |-
      # Build Dockerfile
      docker build -t ${MY_GCR_CACHE} --cache-from ${MY_GCR_CACHE} -f cicd/Dockerfile .
      docker push ${MY_GCR_CACHE}
  id: 'Run Terraform Apply'
  waitFor: ['-']
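For the looping part of sh/run_terraform_applys.sh itself, a minimal sketch could look like the following. The TF_PATHS variable name and the comma-separated format are assumptions for illustration (for example, populated from a _PATH-style substitution), not something Cloud Build provides out of the box:

#!/bin/sh
# Loop over comma-separated Terraform folders, e.g. TF_PATHS="envs/dev,envs/prod"
for dir in $(echo "$TF_PATHS" | tr ',' ' '); do
  echo "Applying Terraform in $dir"
  (cd "$dir" && terraform init -input=false && terraform apply -auto-approve)
done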
Related
I have a self-managed GitLab instance, and one of my projects has a folder containing 3 sub-directories; each of these sub-directories has a Dockerfile.
All my Dockerfiles have a grep command to get the latest version from the CHANGELOG.md which is located in the root directory.
I tried something like this to go up two levels, but it doesn't work (grep: ../../CHANGELOG.md: No such file or directory).
Dockerfile:
grep -m 1 '^## v.*$' "../../CHANGELOG.md"
Example link: https://mygitlab/project/images/myproject
repo content:
.
├── build
│   ├── image1
│   ├── image2
│   └── image3
└── CHANGELOG.md
.gitlab-ci.yml:
script:
  - docker build --network host -t $VAL_IM ./build/image1
  - docker push $VAL_IM
The issue is happening when I build the images.
docker build --network host -t $VAL_IM ./build/image1
Here, you have set the build context to ./build/image1 -- builds cannot access directories or files outside of the build context. Also keep in mind that if you use RUN in a docker build, it can only access files that have already been copied inside the container (and, as stated, you can't copy files from outside the build context!), so this doesn't quite make sense as written.
If you're committed to this versioning strategy, what you probably want to do is perform your grep command as part of your GitLab job before calling docker build and pass in the version as a build arg.
In your Dockerfile, add an ARG:
FROM <your-base-image>
ARG version
# now you can use the version in the build, e.g.:
LABEL com.example.version="$version"
RUN echo version is "$version"
Then your GitLab job might look like:
script:
  - version=$(grep -m 1 '^## v.*$' "./CHANGELOG.md")
  - docker build --build-arg version="${version}" --network host -t $VAL_IM ./build/image1
  - docker push $VAL_IM
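Since all three sub-directories follow the same pattern, the same job could pass the version to each build in a loop. A hedged sketch -- the registry and image names here are placeholders, not your actual tags:

script:
  - version=$(grep -m 1 '^## v.*$' "./CHANGELOG.md")
  - |
    for img in image1 image2 image3; do
      docker build --build-arg version="${version}" --network host -t "registry.example.com/myproject/${img}" "./build/${img}"
      docker push "registry.example.com/myproject/${img}"
    done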
Actually, I'm facing a problem with this code:
stages:   # sorted
  - Changes
  - Lint
  - Build
  - Tests
  - E2E
  - SAST
  - DAST
  - Publish
  - Deployment

# Get Runner Image
image: Node:latest

# Set Variables for mysql
variables:
  MYSQL_ROOT_PASSWORD: secret
  MYSQL_PASSWORD:
  ..
  ..

script:
  - ./addons/scripts/ci/lintphp.sh
Why do we use image? I asked, and someone said that we build on it, like the Dockerfile command FROM ubuntu:latest.
Someone else told me it's because it executes the code. I also don't really understand the script tag above -- what does it even mean? Does it execute inside the image or on the runner?
GitLab Runner is an open source application that collects the pipeline job payload and executes it. To do so, it implements a number of executors that can be used to run your builds in different scenarios; if you are using the Docker executor, you need to specify which image will be used to run your builds. With that executor, the commands in script: run inside a container created from that image, not directly on the runner host.
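As a hedged illustration (the job name is made up; the script path is taken from your snippet):

lint:
  image: node:latest
  script:
    # these commands run inside a container started from node:latest
    - node --version
    - ./addons/scripts/ci/lintphp.sh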
I'm working on a Node.js application for which my current Dockerfile looks like this:
# Stage 0
# =======
FROM node:10-alpine as build-stage
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
RUN yarn build
# Stage 1
# =======
FROM nginx:mainline-alpine
COPY --from=build-stage /app/build /usr/share/nginx/html
I'd like to integrate this into a GitLab CI pipeline but I'm not sure if I got the basic idea. So far I know that I need to create a .gitlab-ci.yml file which will be later picked up by GitLab.
My basic idea is:
I push my code changes to GitLab.
GitLab builds a new Docker image based on my Dockerfile.
GitLab pushes this newly created image to a "production" server (later).
So, my question is:
My .gitlab-ci.yml should then contain something like a build job which triggers... what? The docker build command? Or do I need to "copy" the Dockerfile content to the CI file?
GitLab CI executes the pipeline in Runners that need to be registered with the project using generated tokens (Settings > CI/CD > Runners). You can also use Shared Runners for multiple projects. The pipeline is configured with the .gitlab-ci.yml file, and you can build, test, push and deploy Docker images from that file whenever something happens in the repo (push to a branch, merge request, etc.).
It’s also useful when your application already has the Dockerfile that can be used to create and test an image.
So basically you need to install the runner, register it with the token of your project (or use Shared Runners) and configure your CI yaml file. The recommended approach is Docker-in-Docker (a minimal sketch follows below), but it is up to you. You can also check this basic example. Finally, you can deploy your container directly to Kubernetes, Heroku or Rancher. Remember to configure your credentials and secrets safely in Settings > CI/CD > Variables.
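A minimal Docker-in-Docker build job might look like this; the CI_REGISTRY_* variables are GitLab's predefined CI variables, and the stage name is just an assumption:

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # builds the Dockerfile in the repo root (the multi-stage one above)
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"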
Conclusion
GitLab CI is awesome, but I recommend you first think about the git workflow you want to use in order to set the stages in the .gitlab-ci.yml file. This will allow you to configure your Node project as a pipeline, and then it would be easy to port it to other tools such as Jenkins pipelines or Travis, for example.
Build job trigger:
Option 1:
Add when: manual in the job and you can run the job manually from CI/CD > Pipelines.
Option 2:
only:
  - <branchname>
In this case the job starts when you push to the defined branch (this is my personal suggestion).
Option 3:
Add nothing and the job will run every time you push code.
Of course you can combine the options above; a small sketch follows below.
In addition, you may start the job with a web request by using the job token.
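For example, a hedged sketch combining options 1 and 2 (job name, branch and image tag are placeholders):

build_image:
  stage: build
  when: manual
  only:
    - master
  script:
    - docker build -t my-image .
    - docker push my-image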
The docker build command will work in the pipeline, in the script section. Requirement: a Docker engine on the gitlab-runner that picks up the job.
Or do I need to "copy" the Dockerfile content to the CI file?
no
I am in the process of setting up a Docker container that will pull private repos from GitHub during the build. At the moment I am using an access token that I pass from the command line (this will change once the build gets triggered via Jenkins).
docker build -t my-container --build-arg GITHUB_API_TOKEN=123456 .
# Dockerfile
# Env Vars
ARG GITHUB_API_TOKEN
ENV GITHUB_API_TOKEN=${GITHUB_API_TOKEN}
RUN git clone https://${GITHUB_API_TOKEN}@github.com/org/my-repo
This works fine and seems to be a secure way of doing this? (Though I need to check that the GITHUB_API_TOKEN variable is only available at build time.)
I am looking to find out how people deal with SSH keys or access tokens when running npm install and dependencies are pulled from GitHub:
"devDependencies": {
"my-repo": "git#github.com:org/my-repo.git",
"electron": "^1.7.4"
}
At the moment I cannot pull this repo: I get the error "Please make sure you have the correct access rights", as I have no SSH keys set up in this container.
Use the multi-stage build approach.
Your Dockerfile should look something like this:
FROM alpine/git as base_clone
ARG GITHUB_API_TOKEN
WORKDIR /opt
RUN git clone https://${GITHUB_API_TOKEN}@github.com/org/my-repo
FROM <whatever>
COPY --from=base_clone /opt/my-repo /opt
...
...
...
Build:
docker build -t my-container --build-arg GITHUB_API_TOKEN=123456 .
The Github API Token secret won't be present in the final image.
Docker secrets are a thing, but they're only available to containers that are part of a Docker swarm. They are meant for handling things like SSH keys. You could do as the documentation suggests and create a swarm of one to utilize this feature.
docker-compose also supports secrets, though I haven't used them with compose.
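A hedged sketch of that swarm-of-one approach (the secret, service, and image names are made up for illustration):

docker swarm init
printf '%s' "123456" | docker secret create github_api_token -
docker service create --name my-app --secret github_api_token my-container
# inside the service's containers the secret is available at /run/secrets/github_api_token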
I want to build a custom image (with Ruby, Node.js, Bower, Grunt, Jekyll, etc.) and tag it as 'myimage:1.0'. This image needs to be stored in the GitLab Container Registry and then used in .gitlab-ci.yml as image: sachin.1.0.0, so that my build via GitLab CI will have everything preinstalled (node.js, etc.).
I've tried plenty -- how can this be done?
Before you do this, you need to configure a GitLab runner which allows you to use docker build. You can configure this using the instructions here, depending on your use case.
Next, create a new repo in gitlab, let's call it sachin-image.
Inside the root of the git repo, add a Dockerfile with installation of everything you need.
Now, into this repo, add a .gitlab-ci.yml file like so:
---
before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN <my-docker-gitlab-registry-url>

stages:
  - build

build_image:
  stage: build
  script:
    - docker build -t gitlab.example.com/my/dockerimage/repo:latest .
    - docker push gitlab.example.com/my/dockerimage/repo:latest
  tags:
    - docker_engine
At this point, you now have automated Docker builds working in GitLab. In order to use this image in future GitLab builds, all you need to do is use the following image URL:
gitlab.example.com/my/dockerimage/repo:latest
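For example, in the .gitlab-ci.yml of any project that should build on top of it (the job name and commands are illustrative, using tools from the question):

build_site:
  image: gitlab.example.com/my/dockerimage/repo:latest
  script:
    - node --version
    - jekyll build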