I am trying to add Static Application Security Testing (SAST) to my CI/CD YAML file. But when I run it after adding the template Security/SAST.gitlab-ci.yml as instructed in the documentation, it fails with this log:
[ERRO] [Find Security Bugs] [2022-01-06T13:20:34Z] ▶ Project couldn't be built: Command couldn't be executed: fork/exec /builds/Hoshani/my-awesome-project/mvnw: permission denied
[FATA] [Find Security Bugs] [2022-01-06T13:20:34Z] ▶ Command couldn't be executed: fork/exec /builds/Hoshani/my-awesome-project/mvnw: permission denied
Here is the YAML file for your reference:
variables:
  MAVEN_OPTS: "-Dhttps.protocols=TLSv1.2 -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=WARN -Dorg.slf4j.simpleLogger.showDateTime=true -Djava.awt.headless=true"
  MAVEN_CLI_OPTS: "--batch-mode --errors --fail-at-end --show-version -DinstallAtEnd=true -DdeployAtEnd=true"

image: maven:3.8.1

cache:
  paths:
    - .m2/repository

stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - mvn clean install

include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Jobs/SAST-IaC.latest.gitlab-ci.yml

unit-test-job:
  stage: test
  script:
    - mvn test
  artifacts:
    when: always
    reports:
      junit:
        - target/surefire-reports/TEST-*.xml
Any help is appreciated, thanks.
As a quick solution, simply adding execute permission to mvnw will fix this:
chmod a+x mvnw
For more details, refer here
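If you want the fix to persist in the repository, so every CI checkout already has the bit set, one option (a sketch, assuming you run this from a machine where the file is tracked by git) is to record the executable bit in the index and commit it:

# mark mvnw as executable in the git index, then commit the permission change
git update-index --chmod=+x mvnw
git commit -m "Make mvnw executable"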
I have a lot of different Android flavors of one app to build, so I want to split the build into different YAML files. I currently have my base file .gitlab-ci.yml:
image: alvrme/alpine-android:android-29-jdk11

variables:
  GIT_SUBMODULE_STRATEGY: recursive

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle
  - chmod +x ./gradlew

cache:
  key: "$CI_COMMIT_REF_NAME"
  paths:
    - .gradle/

stages:
  - test
  - staging
  - production
  - firebaseUpload
  - slack

include:
  - local: '/.gitlab/bur.yml'
  - local: '/.gitlab/vil.yml'
  - local: '/.gitlab/kom.yml'
I am currently trying to build 3 different flavors, but I don't know why only the last included YAML file gets executed; the first two are ignored.
/.gitlab/bur.yml

unitTests:
  stage: test
  script:
    - ./gradlew testBurDevDebugUnitTest

/.gitlab/vil.yml

unitTests:
  stage: test
  script:
    - ./gradlew testVilDevDebugUnitTest

/.gitlab/kom.yml

unitTests:
  stage: test
  script:
    - ./gradlew testKomDevDebugUnitTest
What you observe looks like the expected behavior:
Your three files .gitlab/{bur,vil,kom}.yml contain the same job name unitTests.
So, each include overrides the specification of this job.
As a result, you only get 1 unitTests job in the end, with the specification from the last YAML file.
Thus, the simplest fix would be to change this job name, e.g.:
unitTests-kom:
  stage: test
  script:
    - ./gradlew testKomDevDebugUnitTest
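Applying the same rename to the other two files keeps all three jobs in the test stage (the -bur/-vil suffixes here are just example names):

/.gitlab/bur.yml

unitTests-bur:
  stage: test
  script:
    - ./gradlew testBurDevDebugUnitTest

/.gitlab/vil.yml

unitTests-vil:
  stage: test
  script:
    - ./gradlew testVilDevDebugUnitTest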
I'm trying to set up MobSF SAST within GitLab CI and having a few issues.
I've followed the instructions in the GitLab docs and in the MobSF GitLab repo.
However, when I add the MobSF include to my .gitlab-ci.yml, I get a YAML error stating that it could not get access.
My .gitlab-ci.yml file looks like:
sast:
  stage: Security
  tags:
    - docker

include:
  - project: 'gitlab-org/security-products/analyzers/mobsf'
    ref: master
    file: '/template/mobsf.gitlab-ci.yml'
I have a Docker image on my machine set up as a GitLab Runner. Does anyone have any thoughts on how to implement this so that I can run automated MobSF SAST on both Android and iOS?
So after working through this, it seems that you must have the following included in your .gitlab-ci.yml file:
variables:
  # required for Mobile SAST
  SAST_EXPERIMENTAL_FEATURES: "true"

include:
  - template: Security/SAST.gitlab-ci.yml

sast:
  image: docker:19.03.8
  stage: Security
  variables:
    SEARCH_MAX_DEPTH: 4
  artifacts:
    reports:
      sast: gl-sast-report.json
  tags:
    - docker
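Note that the Security stage used above is not one of GitLab's default stages, so it also has to appear in the pipeline's stages list or GitLab will reject the configuration. A minimal sketch (the other stage names here are just placeholders):

stages:
  - build
  - test
  - Security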
Since migrating our tests from Bitbucket to GitLab, the video is no longer recorded during runs in the pipeline. Has anyone encountered a similar problem? Cypress version 7.3.0.
stages:
  - build
  - test

variables:
  npm_config_cache: "$CI_PROJECT_DIR/.npm"
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/cache/Cypress"

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .cache/*
    - cache/Cypress
    - node_modules
    - build

image: cypress/browsers:node14.15.0-chrome86-ff82
stage: build
script:
  - yarn install
  - npx cypress cache path
  - npx cypress cache list

phone-sanity-tests-development:
  image: cypress/browsers:node14.15.0-chrome86-ff82
  stage: test
  parallel: 15
  script:
    - yarn cypress:run-phone-development-sanity
  artifacts:
    paths:
      - cypress/screenshots/**
      - cypress/videos/**
      - cypress/reports/**
      - cypress/projects/phone/puppeteer/videos/**
Here, on the line - yarn cypress:run-phone-development-sanity, you need to add --record.
To tell Cypress to record video and take screenshots, you need to configure this on the run command in the YAML file.
This link is a nice example of how the Cypress team configures their gitlab-ci.yml:
https://github.com/cypress-io/cypress-realworld-app/blob/develop/.gitlab-ci.yml
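A minimal sketch of what that change could look like, assuming cypress:run-phone-development-sanity is a package.json script that wraps cypress run and that a CYPRESS_RECORD_KEY CI/CD variable has been configured for the Cypress Dashboard:

phone-sanity-tests-development:
  image: cypress/browsers:node14.15.0-chrome86-ff82
  stage: test
  parallel: 15
  script:
    # yarn forwards the extra arguments to the underlying cypress run command
    - yarn cypress:run-phone-development-sanity --record --key $CYPRESS_RECORD_KEY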
I am trying to get Terraform to perform terraform init in a specific root directory, but somehow the pipeline doesn't recognize it. Might there be something wrong with the structure of my .gitlab-ci.yml file? I have tried moving everything to the root directory, which works fine, but I'd like to have a bit of a folder structure in the repository to make it more readable for future developers.
default:
  tags:
    - aws

image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
  # If not using GitLab's HTTP backend, remove this line and specify TF_HTTP_* variables
  TF_STATE_NAME: default
  TF_CACHE_KEY: default
  # If your terraform files are in a subdirectory, set TF_ROOT accordingly
  TF_ROOT: ./src/envs/infrastruktur

before_script:
  - rm -rf .terraform
  - terraform --version
  - export AWS_ACCESS_KEY_ID
  - export AWS_ROLE_ARN
  - export AWS_DEFAULT_REGION
  - export AWS_ROLE_ARN

stages:
  - init
  - validate
  - plan
  - pre-apply

init:
  stage: init
  script:
    - terraform init
Everything is fine until the validate stage, but as soon as the pipeline comes to the plan stage, it says that it cannot find any config files.
validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out "planfile"
  dependencies:
    - validate
  artifacts:
    paths:
      - planfile

apply:
  stage: pre-apply
  script:
    - terraform pre-apply -input=false "planfile"
  dependencies:
    - plan
  when: manual
You need to cd into your configuration folder in every job, and after each job you need to pass the contents of ./src/envs/infrastruktur, where Terraform is operating, to the next job via artifacts. I omitted the remainder of your pipeline for brevity.
before_script:
  - rm -rf .terraform
  - terraform --version
  - cd $TF_ROOT
  - export AWS_ACCESS_KEY_ID
  - export AWS_ROLE_ARN
  - export AWS_DEFAULT_REGION
  - export AWS_ROLE_ARN

stages:
  - init
  - validate
  - plan
  - pre-apply

init:
  stage: init
  script:
    - terraform init
  artifacts:
    paths:
      - $TF_ROOT

validate:
  stage: validate
  script:
    - terraform validate
  artifacts:
    paths:
      - $TF_ROOT

plan:
  stage: plan
  script:
    - terraform plan -out "planfile"
  dependencies:
    - validate
  artifacts:
    paths:
      - planfile
      - $TF_ROOT
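For completeness, a sketch of how the omitted apply stage could follow the same pattern. Note that Terraform's subcommand is apply (there is no pre-apply), and "planfile" resolves inside $TF_ROOT because the before_script has already changed into that directory:

apply:
  stage: pre-apply
  script:
    - terraform apply -input=false "planfile"
  dependencies:
    - plan
  when: manual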
I have a simple pipeline config:
image: python:3.7.3

pipelines:
  branches:
    Server:
      - step:
          name: Test
          script:
            - pytest --ignore .
which yields the following error:
We didn't find the deployment keyword in your bitbucket-pipelines.yml file
What should I do?
I just figured out that I did not have Bitbucket Pipelines enabled in the repository settings.
Here are examples of running Python with Bitbucket Pipelines.
You can adjust the unit tests to match your own.
First, make sure Pipelines is enabled.
Generate an SSH key.
Add the host key.
Add the variables in the Deployments menu.
I've also attached pipelines that run by tags and by branch.
Generate the SSH key from:
https://bitbucket.org/<WORKSPACE>/<REPOSITORY_NAME>/admin/addon/admin/pipelines/ssh-keys
and put it in your server's ~/.ssh/authorized_keys.
definitions:
  steps:
    # Build
    - step: &build
        name: Install and Test
        image: python:3.7.2
        trigger: automatic
        script:
          - pip install -r requirements.txt
          - python3 test.py test
    # Deployment
    - step: &deploy
        name: Deploy Artifacts
        trigger: automatic
        deployment: test
        script:
          # Deploy New Artifact
          - pipe: atlassian/scp-deploy:0.3.11
            variables:
              USER: <REMOTE_USER>
              SERVER: <REMOTE_HOST>
              REMOTE_PATH: <REMOTE_PATH>
              LOCAL_PATH: $BITBUCKET_CLONE_DIR/**

# Runner
pipelines:
  # Running by tags
  tags:
    v*:
      - step: *build
      - step:
          <<: *deploy
          deployment: test
          trigger: manual
  # Running by branch
  branches:
    master:
      - step: *build
      - step:
          <<: *deploy
          deployment: test
          trigger: automatic
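With this setup, pushes to master build and then deploy automatically, while pushing a tag builds and waits for a manual trigger on the deploy step. A usage sketch (the tag name is just an example):

# tag the current commit and push the tag to trigger the tag pipeline
git tag v1.0.0
git push origin v1.0.0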