Hi, in my case I use GitLab Community Edition and SonarQube. SonarQube is hosted on an Azure Ubuntu VM in a Docker container. Previously my SonarQube was on a Windows machine, and here is the relevant part of my gitlab.yml:
variables:
  SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"  # Defines the location of the analysis task cache
  GIT_DEPTH: "0"  # Tells git to fetch all the branches of the project, required by the analysis task

stages:
  - build
  - test

before_script:
  - Set-Variable -Name "time" -Value (Get-Date -Format "%H:%m")
  - echo ${time}
  - echo "started by ${GITLAB_USER_NAME}"

build:
  stage: build
  only:
    - branches
  script:
    - echo "running scripts in the build job"
    - choco feature enable -n=allowGlobalConfirmation
    - choco install netfx-4.6.2-devpack
    - choco install dotnetcore-sdk
    - nuget restore -ConfigFile .\nuget.config
    - msbuild ".\MyProject\MyProject.csproj" /p:DeployOnBuild=true /p:PublishProfile="Local Publish" /p:WarningLevel=0
    - dotnet build ".\MyProject.Tests\MyProject.Tests.csproj" --configuration Release
  artifacts:
    paths:
      - Publish
      - .\MyProject.Tests\bin\Release\netcoreapp3.1\**
      - .\MyProject
    expire_in: 1 day
  after_script:
    - Remove-Item -Recurse -Force .\packages\

sonarcloud-check:
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  dependencies:
    - build
  script:
    - choco install sonarqube-scanner.portable
    - SonarScanner.MSBuild.exe begin /k:"myProject" /d:sonar.host.url="http://localhost:9000" /d:sonar.login="8f6658e7684de225a4f45c7cf3466d462a95c1c7"
    - nuget restore -ConfigFile .\nuget.config
    - MsBuild.exe ./MyProject /t:Rebuild
    - SonarScanner.MSBuild.exe end /d:sonar.login="8f6658e7684de225a4f45c7cf3466d462a95c1c7"
  only:
    - merge_requests
    - master
    - develop
    - GitLabQualityTool
I know this yml is not the best, but it works. The main project is on .NET 4.6.2. SonarQube is running on Ubuntu with port mapping 80:9000 (host port 80 to the container's 9000). The point is that I only started using Azure, Docker, and SonarQube a week ago. Can somebody help me with the yml configuration?
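One direction to explore: since the server now runs on the Ubuntu VM, the scan job should point at the VM's address rather than localhost, and the token is better kept in a masked CI/CD variable. A minimal sketch, assuming the runner can reach the VM on port 80 (mapped to the container's 9000) and a variable named SONAR_TOKEN (both the placeholder address and the variable name are assumptions to adjust):

```yaml
sonarqube-check:
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  variables:
    # Point at the Azure VM's public address, not localhost on the runner.
    # Host port 80 is mapped to the container's 9000, so no :9000 here.
    SONAR_HOST_URL: "http://<vm-public-ip>"   # placeholder, fill in your VM address
  script:
    # SONAR_TOKEN is assumed to be a masked CI/CD variable holding the token
    - sonar-scanner -Dsonar.projectKey=myProject -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_TOKEN
  only:
    - merge_requests
    - master
    - develop
```

For the MSBuild-based scan of the .NET 4.6.2 project the same idea applies: replace http://localhost:9000 in the begin step with the VM's address and move the token out of the yml.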
I have a pipeline that builds an AMI image, but I'd also like to be able to use that AMI ID in an additional stage afterwards. I'm not sure how to capture an output (the AMI ID) as a value for use further down the pipeline.
Here's my .gitlab-ci.yml file:
image:
  name: hashicorp/packer:latest
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

before_script:
  - packer --version

stages:
  - build
  - deploy

get_packer:
  stage: build
  artifacts:
    paths:
      - packer
  script:
    - echo "Fetching packer"
    - wget https://releases.hashicorp.com/packer/1.5.5/packer_1.5.5_linux_amd64.zip
    - unzip packer_1.5.5_linux_amd64.zip
    - chmod +x packer

deploy_packer:
  stage: deploy
  script:
    - echo "Deploying Packer Build"
    - cd aws
    - packer build -only="*rhel-stig*" .
Here's the tail-end of the output from my pipeline that spits out the AMI ID:
Build 'rhel.amazon-ebs.rhel-stig' finished after 8 minutes 17 seconds.
==> Wait completed after 8 minutes 17 seconds
==> Builds finished. The artifacts of successful builds are:
--> rhel.amazon-ebs.rhel-stig: AMIs were created:
us-east-1: ami-08155b7eaa9e0274f
Cleaning up project directory and file based variables
Job succeeded
Notice how it outputs the region and the AMI ID. How can I use that AMI ID later in the same pipeline, if I add onto the pipeline like so?
Theoretical .gitlab-ci.yml file:
image:
  name: hashicorp/packer:latest
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

before_script:
  - packer --version

stages:
  - build
  - deploy
  - test

get_packer:
  stage: build
  artifacts:
    paths:
      - packer
  script:
    - echo "Fetching packer"
    - wget https://releases.hashicorp.com/packer/1.5.5/packer_1.5.5_linux_amd64.zip
    - unzip packer_1.5.5_linux_amd64.zip
    - chmod +x packer

deploy_packer:
  stage: deploy
  script:
    - echo "Deploying Packer Build"
    - cd aws
    - packer build -only="*rhel-stig*" .

test_image:
  stage: test
  script:
    - (Do something with the outputted AMI ID from the deploy stage)
Update:
New error after initial additions
Build 'rhel.amazon-ebs.rhel-stig' finished after 9 minutes 22 seconds.
==> Wait completed after 9 minutes 22 seconds
==> Builds finished. The artifacts of successful builds are:
--> rhel.amazon-ebs.rhel-stig: AMIs were created:
us-east-1: ami-04b363eecd4fd841a
--> rhel.amazon-ebs.rhel-stig: AMIs were created:
us-east-1: ami-04b363eecd4fd841a
$ AMI_ID=$(jq -r '.builds[-1].artifact_id' manifest.json | cut -d ":" -f2)
/bin/bash: line 137: jq: command not found
Uploading artifacts for failed job
Uploading artifacts...
WARNING: image.env: no matching files. Ensure that the artifact path is relative to the working directory
ERROR: No files to upload
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
The first thing you need to do is configure the packer build to use the manifest post-processor:
post-processor "manifest" {
  output     = "manifest.json"
  strip_path = true
}
This will generate a JSON file which contains the AMI ID at the end of the build.
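The artifact_id field in that manifest has the form region:ami-id, which is why it gets piped through cut. A quick illustration of just that parsing step, using a sample value matching the log above:

```shell
# artifact_id in manifest.json looks like "<region>:<ami-id>"
ARTIFACT_ID="us-east-1:ami-08155b7eaa9e0274f"
# Split on ":" and take the second field to get just the AMI ID
AMI_ID=$(echo "$ARTIFACT_ID" | cut -d ":" -f2)
echo "$AMI_ID"   # prints ami-08155b7eaa9e0274f
```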
Then, you can use the dotenv artifacts construct to share variables with subsequent jobs.
Here's how it works:
deploy_packer:
  stage: deploy
  script:
    - echo "Deploying Packer Build"
    - cd aws
    - packer build -only="*rhel-stig*" .
    - AMI_ID=$(jq -r '.builds[-1].artifact_id' manifest.json | cut -d ":" -f2)
    - echo "AMI_ID=$AMI_ID" > image.env
  artifacts:
    reports:
      dotenv: aws/image.env

test_image:
  stage: test
  script:
    - echo $AMI_ID
  needs:
    - job: deploy_packer
      artifacts: true
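As for the `jq: command not found` error in the update: the hashicorp/packer image doesn't ship jq, so it has to be installed first. A minimal sketch, assuming the image is Alpine-based (so apk is available):

```yaml
deploy_packer:
  stage: deploy
  before_script:
    # hashicorp/packer images are Alpine-based; install jq before parsing manifest.json
    - apk add --no-cache jq
  script:
    - echo "Deploying Packer Build"
    - cd aws
    - packer build -only="*rhel-stig*" .
    - AMI_ID=$(jq -r '.builds[-1].artifact_id' manifest.json | cut -d ":" -f2)
    # image.env is written inside aws/, matching the artifact path below
    - echo "AMI_ID=$AMI_ID" > image.env
  artifacts:
    reports:
      dotenv: aws/image.env
```

The "image.env: no matching files" warning in the update suggests the dotenv path didn't match where the file was written; since the script does `cd aws` first, the artifact path needs the aws/ prefix as above.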
I need to set up a demo server, which is a copy of the production server but pointed at a different API. I want to run two separate build/deploy jobs whenever the main branch is updated, because the demo build (Vue) needs different env variables pointing at the demo API (which will also need its own deploy). Is this possible, and how would I go about it? Here's the existing config:
stages:
  - build
  - deploy
  - test

include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml

build-main:
  image: node:12
  stage: build
  only:
    - main
  script:
    - yarn global add @quasar/cli
    - rm package-lock.json
    - yarn
    - npm run build:prod
  artifacts:
    expire_in: 1 hour
    paths:
      - dist

deploy-main:
  stage: deploy
  only:
    - main
  script:
    - echo $CI_PROJECT_DIR
    - whoami
    - sudo rsync -rav --exclude '.git' $CI_PROJECT_DIR/dist/spa/. /var/www/console
  tags:
    - deploy

build-beta:
  image: node:12
  stage: build
  only:
    - beta
  script:
    - yarn global add @quasar/cli
    - rm package-lock.json
    - yarn
    - npm run build:beta
  artifacts:
    expire_in: 1 hour
    paths:
      - dist

deploy-beta:
  stage: deploy
  only:
    - beta
  script:
    - echo $CI_PROJECT_DIR
    - whoami
    # - sudo /usr/local/bin/rsync -rav --exclude '.git' $CI_PROJECT_DIR/dist/spa/. /var/www/console.beta
    - sudo rsync -rav --exclude '.git' $CI_PROJECT_DIR/dist/spa/. /var/www/console.beta
  tags:
    - deploy

build-dev:
  image: node:12
  stage: build
  only:
    - dev
  script:
    - yarn global add @quasar/cli
    - rm package-lock.json
    - yarn
    - npm run build:dev
  artifacts:
    expire_in: 1 hour
    paths:
      - dist

deploy-dev:
  stage: deploy
  only:
    - dev
  script:
    - echo $CI_PROJECT_DIR
    - whoami
    - sudo rsync -rav --exclude '.git' $CI_PROJECT_DIR/dist/spa/. /var/www/console.dev
  tags:
    - deploy

sast:
  stage: test
  artifacts:
    reports:
      sast: gl-sast-report.json
    paths:
      - 'gl-sast-report.json'
You can do something like the following, which will create two build jobs and two deploy jobs, linked together using needs:
stages:
  - build
  - deploy

build-main:
  stage: build
  script: echo
  only:
    - main
  artifacts:
    expire_in: 1 hour
    paths:
      - dist

deploy-main:
  stage: deploy
  script: echo
  only:
    - main
  needs:
    - job: build-main
      artifacts: true

build-demo:
  stage: build
  script: echo
  only:
    - main
  artifacts:
    expire_in: 1 hour
    paths:
      - dist

deploy-demo:
  stage: deploy
  script: echo
  only:
    - main
  needs:
    - job: build-demo
      artifacts: true
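To make the demo build actually point at the demo API, the build-demo job can set different environment variables at build time. A sketch, assuming the app reads a VUE_APP_API_URL variable and that a build:demo npm script exists (both names are assumptions to adapt to your quasar setup):

```yaml
build-demo:
  image: node:12
  stage: build
  only:
    - main
  variables:
    # Assumed variable name read by the Vue build; the URL is a placeholder
    VUE_APP_API_URL: "https://demo-api.example.com"
  script:
    - yarn global add @quasar/cli
    - yarn
    # Assumed script, e.g. a copy of build:prod that loads the demo env
    - npm run build:demo
  artifacts:
    expire_in: 1 hour
    paths:
      - dist
```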
You might also want to extract common configuration into hidden jobs to simplify your pipeline, for instance:
stages:
  - build
  - deploy

.build:
  stage: build
  artifacts:
    expire_in: 1 hour
    paths:
      - dist

build-main:
  extends: .build
  only:
    - main
  # other job-specific configuration
Also, you might want to improve readability and management of workflow rules like the following, which centralizes the rules logic:
workflow:
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      variables:
        DEPLOY_PROD: "true"
        DEPLOY_DEMO: "true"
        # some other conditional variables
    - if: $CI_COMMIT_REF_NAME == "dev"
      variables:
        DEPLOY_DEV: "true"
    - when: always

build-main:
  rules:
    - if: $DEPLOY_PROD

# other jobs
I have a GitLab pipeline that is failing when it attempts the Docker build (using Kaniko).
I have yet to do a successful Docker build, but this particular error started after I:
Changed the Kaniko image from image: gcr.io/kaniko-project/executor:debug to gcr.io/kaniko-project/executor:51734fc3a33e04f113487853d118608ba6ff2b81
Added settings for pushing to insecure registries:
--insecure
--skip-tls-verify
--skip-tls-verify-pull
--insecure-pull
After this change, part of the pipeline looks like this:
before_script:
  - 'dotnet restore --packages $NUGET_PACKAGES_DIRECTORY'

build_job:
  tags:
    - xxxx
  only:
    - develop
  stage: build
  script:
    - dotnet build --configuration Release --no-restore

publish_job:
  tags:
    - xxxx
  only:
    - develop
  stage: publish
  artifacts:
    name: "$CI_COMMIT_SHA"
    paths:
      - ./$PUBLISH_DIR
  script:
    - dotnet publish ./src --configuration Release --output $(pwd)/$PUBLISH_DIR

docker_build_dev:
  tags:
    - xxxx
  image:
    name: gcr.io/kaniko-project/executor:51734fc3a33e04f113487853d118608ba6ff2b81
    entrypoint: [""]
  only:
    - develop
  stage: docker
  before_script:
    - echo "Docker build"
  script:
    - echo "${CI_PROJECT_DIR}"
    - cp ./src/Dockerfile /builds/xxx/xxx/xxx/Dockerfile
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --insecure
      --skip-tls-verify
      --skip-tls-verify-pull
      --insecure-pull
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}"
Part of the output from the pipeline is as below:
Skipping Git submodules setup
Downloading artifacts
Downloading artifacts for publish_job (33475)...
Downloading artifacts from coordinator... ok  id=33475 responseStatus=200 OK token[xxxxxxxxxxx
Executing "step_script" stage of the job script
Cleaning up project directory and file based variables
ERROR: Job failed: execution took longer than 1h0m0s seconds
What am I missing?
I was missing something in my GitLab project settings: the project's Container Registry had to be enabled:
https://domain/group/subgroup/project/edit
Visibility, project features, permissions >> Container Registry (toggle to enable)
I started using GitLab CI/CD and this is my yml file:
variables:
  EXE_LOCAL_TEMP_FOLDER: 'c:\TMP'
  DEPLOY_FOLDER: 'c:\MyProject_Release'

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - nuget restore
    - msbuild MySolution.sln /t:Build /p:Configuration=Release
    - xcopy /y MyProject\bin\Release\*.* $EXE_LOCAL_TEMP_FOLDER\

deploy:
  stage: deploy
  when: manual
  script:
    - xcopy /y c:\TMP\*.* $DEPLOY_FOLDER\
Now I need to deploy a Windows service (WinService.csproj in my solution) to a remote machine. How can I do that in my yml file?
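There is no built-in step for this, but a common pattern is: stop the service, copy the new binaries, start the service again. A sketch of a manual deploy job, assuming a Windows runner with PowerShell, a service named MyWinService, a remote host RemoteServer reachable via PowerShell remoting, and an admin share for the copy (all of these names and paths are hypothetical):

```yaml
deploy_service:
  stage: deploy
  when: manual
  script:
    # Stop the service on the remote machine (requires PowerShell remoting enabled there)
    - Invoke-Command -ComputerName RemoteServer -ScriptBlock { Stop-Service -Name 'MyWinService' }
    # Copy the build output to the remote service folder over the admin share
    - xcopy /y MyWinService\bin\Release\*.* \\RemoteServer\c$\Services\MyWinService\
    # Start the service again
    - Invoke-Command -ComputerName RemoteServer -ScriptBlock { Start-Service -Name 'MyWinService' }
```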
I am trying to set up CI with gitlab-ci, and I have a few questions about it.
It looks like there is no rollback mechanism in gitlab-ci, so should I care about rolling back if the deploy stage fails?
I'm planning to use a "dotnet publish Solution.sln -c release" script, but I have multiple projects in this solution: one class library and two APIs (AdminApi and UserApi), and the two APIs are hosted as different sites in IIS. In this case, how can I configure the dotnet publish script with parameters?
Should I use something like xcopy to move the publish output to the IIS folder?
I've put an app_offline.htm_ in each web site in IIS with a "We'll be back soon" message in HTML.
And I've solved my problem with this gitlab-ci.yml:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - echo "Building the app"
    - "dotnet publish MySolution.sln -c release"
  artifacts:
    untracked: true
  only:
    - dev

test:
  stage: test
  script: echo "Running tests"
  artifacts:
    untracked: true
  dependencies:
    - build
  only:
    - dev

deploy_staging:
  stage: deploy
  script:
    - echo "Deploying to staging server Admin"
    - ren c:\\inetpub\\vhosts\\xxx\\admin\\app_offline.htm_ app_offline.htm
    - dotnet publish PathToAdmin.csproj -c release -o c:\\inetpub\\vhosts\\xxx\\admin
    - ren c:\\inetpub\\vhosts\\xxx\\admin\\app_offline.htm app_offline.htm_
    - echo "Deploying to staging server User"
    - ren c:\\inetpub\\vhosts\\xxx\\user\\app_offline.htm_ app_offline.htm
    - dotnet publish PathToUser.csproj -c release -o c:\\inetpub\\vhosts\\xxx\\user
    - ren c:\\inetpub\\vhosts\\xxx\\user\\app_offline.htm app_offline.htm_
  dependencies:
    - build
  only:
    - dev
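On the rollback question: GitLab has no automatic rollback, but you can approximate one with a job in a later stage that runs only when an earlier job failed. A sketch, assuming the deploy job first copies the current release to a backup folder (the backup paths and extra stage are assumptions):

```yaml
stages:
  - build
  - test
  - deploy
  - rollback

rollback_staging:
  stage: rollback
  when: on_failure   # runs only if a job in an earlier stage (e.g. deploy) failed
  script:
    - echo "Restoring previous release"
    # Assumes deploy_staging copied the old files to these backup folders first
    - xcopy /y c:\\backups\\admin\\*.* c:\\inetpub\\vhosts\\xxx\\admin\\
    - xcopy /y c:\\backups\\user\\*.* c:\\inetpub\\vhosts\\xxx\\user\\
  only:
    - dev
```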