GitLab runner uses the same folder for different environments - gitlab

I have a problem. I have two merge requests from two different branches to the master branch in a project.
Now I want to start an environment in GitLab for each merge request. I do this with a shell executor: I start a docker container with docker run image_name and mount the folder from the build process inside the container. It looks like this:
stages:
  - deploy

deploy_stage:
  stage: deploy
  script:
    - docker run -d --name ContainerName -v ${CI_PROJECT_DIR}:/var/www/html -e VIRTUAL_HOST=example.com php
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: http://example.com
    on_stop: stop_stage
  tags:
    - shell
  except:
    - master

stop_stage:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - docker stop ContainerName
    - docker rm ContainerName
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  tags:
    - shell
Now my problem is that while one environment is running, a new pipeline job overwrites the checkout/code, so both environments end up with the same code even though they should be different.
Does anyone have a solution for how I can configure the GitLab runner to use a different checkout folder for each merge request?
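One possible approach (a minimal sketch, assuming the runner's config.toml has custom_build_dir enabled for the shell executor) is to give every branch its own clone path via GIT_CLONE_PATH, which has to stay inside $CI_BUILDS_DIR, and to put the ref slug in the container name so the review containers don't collide:

variables:
  # assumption: custom_build_dir is enabled for this runner in config.toml
  GIT_CLONE_PATH: $CI_BUILDS_DIR/$CI_COMMIT_REF_SLUG/$CI_PROJECT_NAME

deploy_stage:
  stage: deploy
  script:
    # one container per merge request, mounting that branch's own checkout
    - docker run -d --name "review-$CI_COMMIT_REF_SLUG" -v ${CI_PROJECT_DIR}:/var/www/html -e VIRTUAL_HOST=example.com php

The stop_stage job would have to stop and remove the container under the same per-branch name.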

Related

Run 2 gitlab jobs on the same VM

I have the following pipeline in GitLab:
stages: # List of stages for jobs, and their order of execution
  - build
  - test
  - deploy

clone-submodule-job: # This job runs in the build stage, which runs first.
  tags:
    - linuxvm
  stage: build
  script:
    - git submodule update --init --recursive --jobs=10

build-job: # This job runs in the build stage, which runs first.
  tags:
    - linuxvm
  stage: build
  script:
    - cd docker/project_builder && docker build -t docker_development -f Dockerfile .
I added the linuxvm tag so it runs on my Linux VM. The problem is that build-job runs on a separate VM. Is it possible to make it run after clone-submodule-job, but also on the same VM, so it can access the cloned submodules?
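One option (a sketch, assuming the submodules are only needed for the docker build) is to drop the separate clone job and let GitLab fetch the submodules during build-job's own checkout with GIT_SUBMODULE_STRATEGY, so the code and submodules are guaranteed to end up on the same machine:

variables:
  GIT_SUBMODULE_STRATEGY: recursive

build-job:
  tags:
    - linuxvm
  stage: build
  script:
    - cd docker/project_builder && docker build -t docker_development -f Dockerfile .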

GitLab Container to GKE (Kubernetes) deployment

Hello, I have a problem with GitLab CI/CD. I'm trying to deploy a container to Kubernetes on GKE, however I'm getting an error:
This job failed because the necessary resources were not successfully created.
I created a service account with kube-admin rights and created the cluster via the GitLab GUI, so it's fully integrated. But when I run the job it still doesn't work.
By the way, I use kubectl get pods in the gitlab-ci file just to test if Kubernetes is responding.
stages:
  - build
  - deploy

docker-build:
  # Use the official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  # Default branch leaves tag empty (= latest tag)
  # All other branches are tagged with the escaped branch name (commit ref slug)
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
    - docker push "$CI_REGISTRY_IMAGE${tag}"

deploy-prod:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl get pods
  environment:
    name: production
    kubernetes:
      namespace: test1
Any Ideas?
Thank you
The namespace should be removed.
GitLab creates its own namespace for every project.
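Applied to the pipeline above, the deploy job would then look roughly like this (a sketch based on that answer, not a verified configuration):

deploy-prod:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl get pods
  environment:
    name: production
    # no kubernetes:namespace override; GitLab manages the namespace itself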

Convert Yaml from GitLab to azure devops

I want to convert a YAML pipeline from GitLab to Azure DevOps. The problem is that I have no prior experience with GitLab. This is the YAML.
Is .package_deploy a template for a job? And is image a pool, or do I need to use a Docker task for it?
And does before_script: mean I need to create a task before the Docker task?
variables:
  myVar: "Var"

stages:
  - deploy

.package_deploy:
  image: registry.gitlab.com/docker-images/$myVar:latest
  stage: build
  script:
    - cd src
    - echo "Output file name is set to $OUTPUT_FILE_NAME"
    - echo $OUTPUT_FILE_NAME > version.txt
    - az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID

dev_package_deploy:
  extends: .package_deploy
  stage: deploy
  before_script:
    - export FOLDER=$FOLDER_DEV
    - timestampSuffix=$(date -u "+%Y%m%dT%H%M%S")
    - export OUTPUT_FILE_NAME=${myVar}-${timestampSuffix}-${CI_COMMIT_REF_SLUG}.tar.gz
  when: manual

demo_package_deploy:
  extends: .package_deploy
  stage: deploy
  before_script:
    - export FOLDER=$FOLDER_DEMO
    - timestampSuffix=$(date -u "+%Y%m%dT%H%M%S")
    - export OUTPUT_FILE_NAME=${myVar}-${timestampSuffix}.tar.gz
  when: manual
  only:
    refs:
      - master
.package_deploy: is a 'hidden job' that you can use with the extends keyword. By itself it does not create any job; it's a way to avoid repeating yourself in other job definitions.
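For example, once extends is resolved, dev_package_deploy is roughly equivalent to the following (a sketch; keys from the hidden job are merged in, and the child's stage: deploy overrides the hidden job's stage: build):

dev_package_deploy:
  image: registry.gitlab.com/docker-images/$myVar:latest
  stage: deploy
  before_script:
    - export FOLDER=$FOLDER_DEV
    - timestampSuffix=$(date -u "+%Y%m%dT%H%M%S")
    - export OUTPUT_FILE_NAME=${myVar}-${timestampSuffix}-${CI_COMMIT_REF_SLUG}.tar.gz
  script:
    - cd src
    - echo "Output file name is set to $OUTPUT_FILE_NAME"
    - echo $OUTPUT_FILE_NAME > version.txt
    - az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID
  when: manual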
before_script is really no different from script, except that they're two different keys. The effect is that before_script + script together make up all the script steps of the job.
before_script:
  - one
  - two
script:
  - three
  - four
Is the same as:
script:
  - one
  - two
  - three
  - four
image: defines the docker container in which the job runs. In this way, it is very similar to a pool you would define in ADO. But if you want things to run close to the way they do in GitLab, you probably want to define it as container: in ADO.
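A rough Azure DevOps counterpart could look like this (a sketch only; the image path and variables are taken from the GitLab file, and a private registry would additionally need a service connection in ADO):

variables:
  myVar: 'Var'

jobs:
- job: dev_package_deploy
  container: registry.gitlab.com/docker-images/$(myVar):latest   # plays the role of GitLab's image:
  steps:
  - script: |
      # before_script and script collapse into one step
      export FOLDER=$FOLDER_DEV
      timestampSuffix=$(date -u "+%Y%m%dT%H%M%S")
      export OUTPUT_FILE_NAME=$(myVar)-${timestampSuffix}.tar.gz
      cd src
      echo "Output file name is set to $OUTPUT_FILE_NAME"
      echo $OUTPUT_FILE_NAME > version.txt
    displayName: package deploy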

How to clone to home directory of the gitlab-runner user?

This is my .gitlab-ci.yml file:
image: docker

stages:
  - build
  - test
  - deploy

build-prod:
  stage: build
  only:
    - master
  tags:
    - docker
  script:
    - docker network create -d overlay reprox
  environment: master

test-prod:
  stage: test
  only:
    - master
  tags:
    - runner
  script:
    - echo "yolo"
  environment: master

deploy-prod:
  stage: deploy
  only:
    - master
  tags:
    - docker
  script:
    - docker stack deploy -c ./site1/docker-compose.yml site1
    - docker stack deploy -c ./site2/docker-compose.yml site2
    - docker stack deploy -c ./site3/docker-compose.yml site3
    - docker stack deploy -c ./reverse-proxy/docker-compose.yml proxy
  environment: master
I have the dummy echo job so it will clone to the worker, but whenever the repo is cloned to any node it always ends up in /home/gitlab-runner/builds/random/0/docker/.
I don't need variables; I just want it to clone the repo into the /home/gitlab-runner directory on every gitlab-runner.
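One possibility (a sketch, and only if the runner's builds_dir in config.toml points at /home/gitlab-runner and custom_build_dir is enabled) is to pin the clone location with GIT_CLONE_PATH, which must stay inside $CI_BUILDS_DIR:

variables:
  # clones into <builds_dir>/<project name> instead of the hashed builds path
  GIT_CLONE_PATH: $CI_BUILDS_DIR/$CI_PROJECT_NAME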

Deploy docker container using gitlab ci docker-in-docker setup

I'm currently trying to set up a GitLab CI pipeline. I've chosen to go with the Docker-in-Docker setup.
I got my CI pipeline to build and push the docker image to the GitLab registry, but I cannot seem to deploy it using the following configuration:
.gitlab-ci.yml
image: docker:stable

services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  TEST_IMAGE: registry.gitlab.com/user/repo.nl:$CI_COMMIT_REF_SLUG
  RELEASE_IMAGE: registry.gitlab.com/user/repo.nl:release

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
  - docker info

build:
  stage: build
  tags:
    - build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE
  only:
    - branches

deploy:
  stage: deploy
  tags:
    - deploy
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
    - docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
  only:
    - master
  when: manual
When I run the deploy action I actually get the following feedback in my log, but when I go check the server there is no container running.
$ docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE
7bd109a8855e985cc751be2eaa284e78ac63a956b08ed8b03d906300a695a375
Job succeeded
I have no clue as to what I am forgetting here. Am I right to expect this method to be correct for deploying containers? What am I missing / doing wrong?
tldr: Want to deploy images into production using gitlab ci and docker-in-docker setup, job succeeds but there is no container. Goal is to have a running container on host after deployment.
Found out that I needed to include the docker socket in the gitlab-runner configuration as well, and not only have it available in the container.
By adding --docker-volumes '/var/run/docker.sock:/var/run/docker.sock' and removing DOCKER_HOST=tcp://docker:2375 I was able to connect to docker on my host system and spawn sibling containers.
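With the runner registered that way (host socket mounted via --docker-volumes), the deploy job no longer needs the dind service or DOCKER_HOST; a sketch of the relevant parts:

variables:
  DOCKER_DRIVER: overlay2
  TEST_IMAGE: registry.gitlab.com/user/repo.nl:$CI_COMMIT_REF_SLUG
  RELEASE_IMAGE: registry.gitlab.com/user/repo.nl:release
  # no DOCKER_HOST: the docker CLI talks to the host daemon through the mounted socket

deploy:
  stage: deploy
  tags:
    - deploy
  script:
    - docker pull $TEST_IMAGE
    - docker tag $TEST_IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
    # now starts a sibling container directly on the host
    - docker run -d --name "review-$CI_COMMIT_REF_SLUG" -p "80:80" $RELEASE_IMAGE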
