I'm setting up CI for my Maven project in GitLab (https://gitlab.com/awe-team/ade) and I need to sign the jars before publishing to Maven Central.
I generated the key pair with GnuPG and added the public key to my GitLab profile.
Do I have to copy my private key to the GitLab CI work folder?
The error I get is that it cannot find the key.
[DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-gpg-plugin:1.5:sign'
with basic configurator -->
[DEBUG] (f) ascDirectory = /builds/awe-team/ade/target/gpg
[DEBUG] (f) defaultKeyring = true
[DEBUG] (f) interactive = false
[DEBUG] (f) passphrase = *******
[DEBUG] (f) skip = false
[DEBUG] (f) useAgent = true
[DEBUG] (f) project = MavenProject: com.almis.ade:ade:2.0.5 # /builds/awe-team/ade/pom.xml
[DEBUG] -- end configuration --
[DEBUG] Generating signature for /builds/awe-team/ade/target/ade-2.0.5.pom
gpg: directory '/root/.gnupg' created
gpg: keybox '/root/.gnupg/pubring.kbx' created
gpg: no default secret key: No secret key
gpg: signing failed: No secret key
My .gitlab-ci.yaml looks like:
image: maven:latest

variables:
  MAVEN_CLI_OPTS: "-X -s .m2/settings.xml --batch-mode -Dgpg.passphrase=$GPG_PASSPHRASE"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

cache:
  paths:
    - .m2/repository/
    - target/

build:
  stage: build
  script:
    - mvn $MAVEN_CLI_OPTS compile

test:
  stage: test
  script:
    - mvn $MAVEN_CLI_OPTS test

deploy:
  stage: deploy
  script:
    - mvn $MAVEN_CLI_OPTS deploy
  only:
    - master
I expected to build a release of my jars and sign them for publishing to Maven Central.
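For reference, one common approach (an assumption on my part, not part of this setup: the variable name `GPG_PRIVATE_KEY` is hypothetical) is to store the ASCII-armored private key in a masked CI/CD variable and import it into the job's fresh keyring before signing:

```yaml
deploy:
  stage: deploy
  before_script:
    # GPG_PRIVATE_KEY is a hypothetical masked CI/CD variable containing
    # the output of: gpg --armor --export-secret-keys <key-id>
    - echo "$GPG_PRIVATE_KEY" | gpg --batch --import
  script:
    - mvn $MAVEN_CLI_OPTS deploy
  only:
    - master
```

With a secret key imported, gpg should no longer report "No secret key" when the maven-gpg-plugin signs the artifacts.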
I am trying to add Static Application Security Testing (SAST) to my CI/CD YAML file.
But when I run it after adding the template Security/SAST.gitlab-ci.yml as instructed,
it fails with this log:
[ERRO] [Find Security Bugs] [2022-01-06T13:20:34Z] ▶ Project couldn't be built: Command couldn't be executed: fork/exec /builds/Hoshani/my-awesome-project/mvnw: permission denied
[FATA] [Find Security Bugs] [2022-01-06T13:20:34Z] ▶ Command couldn't be executed: fork/exec /builds/Hoshani/my-awesome-project/mvnw: permission denied
here is the yaml file for your reference
variables:
  MAVEN_OPTS: "-Dhttps.protocols=TLSv1.2 -Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=WARN -Dorg.slf4j.simpleLogger.showDateTime=true -Djava.awt.headless=true"
  MAVEN_CLI_OPTS: "--batch-mode --errors --fail-at-end --show-version -DinstallAtEnd=true -DdeployAtEnd=true"

image: maven:3.8.1

cache:
  paths:
    - .m2/repository

stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - mvn clean install

include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Jobs/SAST-IaC.latest.gitlab-ci.yml

unit-test-job:
  stage: test
  script:
    - mvn test
  artifacts:
    when: always
    reports:
      junit:
        - target/surefire-reports/TEST-*.xml
Any help is appreciated, thanks.
As a quick solution, simply adding execution permission on mvnw will fix this.
chmod a+x mvnw
For more details, refer here
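To make the fix permanent (a companion step, not part of the original answer), Git can track the executable bit itself, so every fresh CI checkout keeps it. A minimal sketch in a throwaway repository:

```shell
# Demonstrate recording the executable bit for mvnw in the Git index,
# so every fresh clone/checkout gets an executable mvnw.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "ci@example.com"
git config user.name "ci"
printf '#!/bin/sh\n' > mvnw            # stand-in for the real wrapper script
git add mvnw
git update-index --chmod=+x mvnw       # record mode 100755 in the index
git commit -qm "Make mvnw executable"
git ls-files --stage mvnw              # shows the tracked file mode
```

In the real repository, only the `git update-index --chmod=+x mvnw` plus a commit are needed.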
Currently, I am trying to build a Spring Boot application and make releases with Azure Pipelines and the maven-release-plugin.
My Azure Pipeline YAML looks like this:
- stage: BuildRelease
  condition: true
  displayName: Building a Release with Maven
  jobs:
    - job: BuildReleaseJob
      displayName: Create a Maven release with version $(releaseVersion)
      steps:
        - checkout: self
          persistCredentials: true
        - task: MavenAuthenticate@0
          displayName: 'Authenticate to Maven'
          inputs:
            artifactsFeeds: 'ciam'
        - task: Bash@3
          displayName: Set Git Credentials
          inputs:
            targetType: 'inline'
            script: |
              git config --global user.email "you@example.com"
              git config --global user.name "Azure Pipeline Release"
              git checkout develop
        - task: Bash@3
          displayName: Maven Clean & Prepare Release
          inputs:
            targetType: 'inline'
            script: |
              mvn --batch-mode release:clean release:prepare -DscmCommentPrefix=***NO_CI***
        - task: Bash@3
          displayName: Maven Perform Release
          inputs:
            targetType: 'inline'
            script: |
              mvn --batch-mode release:perform -DscmCommentPrefix=***NO_CI***
I also granted Allow Contribute, Create branch, Create tag, and Read permissions to Project Collection Build Service (MyCompany) in Azure DevOps under Project Settings -> Repositories -> (your repository) -> Security.
Everything works fine until the last task, which executes release:perform.
The error shown is:
[INFO] Executing: /bin/sh -c cd /home/vsts/work/1/s/target && git clone --branch projectName-0.0.36 https://********@dev.azure.com/MyCompany/Project/_git/projectName-service /home/vsts/work/1/s/target/checkout
[INFO] Working directory: /home/vsts/work/1/s/target
[ERROR] The git-clone command failed.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.136 s
[INFO] Finished at: 2022-02-01T14:18:31Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-release-plugin:2.5.3:perform (default-cli) on project projectName-service: Unable to checkout from SCM
[ERROR] Provider message:
[ERROR] The git-clone command failed.
[ERROR] Command output:
[ERROR] Cloning into '/home/vsts/work/1/s/target/checkout'...
[ERROR] fatal: could not read Password for 'https://mycompany@dev.azure.com': terminal prompts disabled
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Could anyone point me to what I did wrong?
Note that by this point, the git tag has already been built and committed in SCM.
UPDATE
I tried to do in an inline Bash script what Maven tries to do in release:perform, and it kind of worked. Now I am more confused.
- task: Bash@3
  displayName: TestClone
  inputs:
    targetType: inline
    script: |
      ls -laf /home/vsts/work/1/s/target/checkout
      /bin/sh -c cd '/home/vsts/work/1/s/target' && 'git' 'clone' '--depth' '1' '--branch' 'myaccount-service-0.0.43' 'https://$(System.AccessToken)@dev.azure.com/kiongroup/CxP/_git/myaccount-service' 'checkout'
      cd /home/vsts/work/1/s/target/checkout
      ls
Output is:
ls: cannot access '/home/vsts/work/1/s/target/checkout': No such file or directory
Cloning into 'checkout'...
Note: switching to '1653266b5f151fd6137b7b579044eef1867d8d5b'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to false
/home/vsts/work/_temp/358629de-4b6b-4548-942e-f9a05281c5a5.sh: line 3: cd: /home/vsts/work/1/s/target/checkout: No such file or directory
Dockerfile
README.md
azure-pipelines.yml
checkout
docker
mvnw
mvnw.cmd
pom.xml
pom.xml.releaseBackup
postman
release.properties
src
target
I was able to fix the issue described above without using SSH authentication. By specifying the organization name as the username and the System.AccessToken as the password, release:perform ran successfully:
- task: Maven@3
  displayName: Perform release
  inputs:
    mavenPomFile: 'pom.xml'
    options: '-Dusername="<name of your DevOps organisation>" -Dpassword="$(System.AccessToken)"'
Ok, I found a solution for me that involves using the Azure DevOps Git SSH URL and not the HTTPS.
First of all, I created an SSH key according to the Use SSH key authentication tutorial (or your Git provider's equivalent).
Once you have your SSH private and public key, you need to install the SSH Key into your YAML pipeline. See Install SSH Key task.
- task: InstallSSHKey@0
  inputs:
    knownHostsEntry: ssh.dev.azure.com ssh-rsa <YOUR KNOWN HOST KEY>
    sshKeySecureFile: <NameOfSecureFileKey>
    sshPassphrase: <PassphraseIfUsed>
Here are the steps summarized
Create a key pair in your .ssh folder with a private key azurePipeline and a public one azurePipeline.pub (enter a passphrase if desired):
ssh-keygen -f ~/.ssh/azurePipeline -t rsa
Get the known host entry e.g. ssh.dev.azure.com ssh-rsa AAAAB3Nz.... with
ssh-keyscan ssh.dev.azure.com
Go to Azure DevOps and add the content of public key as described in Use SSH key authentication
Add the private key as a secure file as described in Install SSH Key task
Change the SCM URL in pom.xml to:
<scm>
  <developerConnection>scm:git:git@ssh.dev.azure.com:v3/kiongroup/CxP/myaccount-service</developerConnection>
  <tag>HEAD</tag>
</scm>
Hope this helps someone out there :)
I am new and am trying to set up a CI/CD pipeline. The CI build stage is not uploading artifacts, as shown below. I have provided the current YAML script. What are the steps to correct the errors so that artifacts are generated for deployment?
GitLab YAML script:
stages:
  - build
  - deploy

variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  AWS_REGION: $AWS_REGION
  S3_BUCKET_NAME: $S3_BUCKET_NAME

build:
  tags: [docker]
  stage: build
  image: node:latest
  script:
    - cd client
    - npm install
    - CI='' npm run build-prod
  artifacts:
    expose_as: 'Arti-Reports'
    paths:
      - build.
    expire_in: 24 hour
    when: on_success
The build job in the CI pipeline shows:
Uploading artifacts for successful job
Uploading artifacts...
WARNING: build.: no matching files
ERROR: No files to upload
Cleaning up file based variables
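The warning names the culprit: the `paths:` entry `build.` matches nothing at the project root. Since the job changes into client/ before building, the output (assuming the build-prod script writes to a build/ directory, which is an assumption here) ends up under client/build, and artifact paths are always resolved relative to the project root, not the script's working directory. A sketch of the corrected artifacts section:

```yaml
  artifacts:
    expose_as: 'Arti-Reports'
    paths:
      # resolved relative to the project root, not the cwd of the script
      - client/build/
    expire_in: 24 hours
    when: on_success
```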
I am trying to get terraform to perform terraform init in a specific root directory, but somehow the pipeline doesn't recognize it. Might there be something wrong with the structure of my gitlab-ci.yml file? I have tried moving everything to the root directory, which works fine, but I'd like to have a bit of a folder structure in the repository, in order to make it more readable for future developers.
default:
  tags:
    - aws
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
  # If not using GitLab's HTTP backend, remove this line and specify TF_HTTP_* variables
  TF_STATE_NAME: default
  TF_CACHE_KEY: default
  # If your terraform files are in a subdirectory, set TF_ROOT accordingly
  TF_ROOT: ./src/envs/infrastruktur

before_script:
  - rm -rf .terraform
  - terraform --version
  - export AWS_ACCESS_KEY_ID
  - export AWS_ROLE_ARN
  - export AWS_DEFAULT_REGION

stages:
  - init
  - validate
  - plan
  - pre-apply

init:
  stage: init
  script:
    - terraform init
Everything is fine until the validate stage, but as soon as the pipeline comes to the plan stage, it says that it cannot find any config files.
validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out "planfile"
  dependencies:
    - validate
  artifacts:
    paths:
      - planfile

apply:
  stage: pre-apply
  script:
    - terraform apply -input=false "planfile"
  dependencies:
    - plan
  when: manual
You need to cd into your configuration folder in every job, and after each job you need to pass the content of ./src/envs/infrastruktur (where Terraform is operating) to the next job via artifacts. I omitted the remainder of your pipeline for brevity.
before_script:
  - rm -rf .terraform
  - terraform --version
  - cd $TF_ROOT
  - export AWS_ACCESS_KEY_ID
  - export AWS_ROLE_ARN
  - export AWS_DEFAULT_REGION

stages:
  - init
  - validate
  - plan
  - pre-apply

init:
  stage: init
  script:
    - terraform init
  artifacts:
    paths:
      - $TF_ROOT

validate:
  stage: validate
  script:
    - terraform validate
  artifacts:
    paths:
      - $TF_ROOT

plan:
  stage: plan
  script:
    - terraform plan -out "planfile"
  dependencies:
    - validate
  artifacts:
    paths:
      - planfile
      - $TF_ROOT
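As an alternative to cd-ing in every job, Terraform 0.14+ has a global -chdir option, which lets each job stay at the project root (a sketch using the same TF_ROOT variable):

```yaml
init:
  stage: init
  script:
    - terraform -chdir=$TF_ROOT init
```

The same flag works for validate, plan, and apply, so the before_script no longer needs to change directories.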
I use PowerShell and MSBuild for both the build and test stages.
The configuration for the test stage is like this:
test:
  stage: test
  artifacts:
    when: always
    name: "${CI_BUILD_STAGE}_${CI_BUILD_REF_NAME}"
    expire_in: 1 week
    paths:
      - TestResults/
  dependencies:
    - build
  script:
    - ./.gitlab-ci/Test.ps1
  tags:
    - powershell
    - msbuild
On a successful test run (Test.ps1 returns 0), it uploads the artifacts as it should:
Uploading artifacts...
TestResults/: found 15 matching files
Uploading artifacts to coordinator... ok
Build succeeded
However, if the test run failed, the job just fails and doesn't upload anything:
ERROR: Build failed: exit status 1
SOLVED: As cascaval suggested, I had to update the runner.