I am trying to get the files generated by Maven stored as artifacts. To my understanding, I have edited the CI YAML as follows:
stages:
  - build
  - package

maven-build:
  stage: build
  script:
    - mvn install
  artifacts:
    paths:
      - art/

maven-package:
  stage: package
  artifacts:
    paths:
      - art/
  script:
    - mvn package -U
By default, Maven builds artifacts into a directory called target. You are not including this path in your artifacts declaration.
To deal with this you can:
Add target to your paths: array in your YAML, OR
Add a script step to move/copy the target directory to the ./art directory, which you already include in your paths: array, OR
Change the directory into which Maven places built files.
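For example, the first option could look like this (a sketch based on the maven-build job above; target is Maven's default output directory):

```yaml
maven-build:
  stage: build
  script:
    - mvn install
  artifacts:
    paths:
      - target/   # Maven's default output directory
```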
I am currently in the process of implementing a CI script for a Node project. One of the steps involved in this process is to get the project's version from the package.json file and set it as an env variable (used subsequently for various operations).
What I have tried so far is creating the following job:
version:get-version:
  stage: version
  script: |
    npm pkg get version
    version=$(npm pkg get version)
    echo "Current version: $version"
    echo "PROJECT_VERSION=$version" >> .env
  artifacts:
    reports:
      dotenv: .env
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
    - if: '$CI_COMMIT_BRANCH =~ /^feat.*/'
    - if: $CI_COMMIT_TAG
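One gotcha worth checking here (an assumption about the value, not about the failure itself): npm pkg get version prints the version JSON-quoted, e.g. "1.2.3" including the double quotes, so the dotenv value can end up carrying literal quotes. A minimal sketch of stripping them, with the npm call stubbed out:

```shell
# Stand-in for: raw=$(npm pkg get version)  -- the output is JSON-quoted
raw='"1.2.3"'
# Strip the surrounding double quotes before writing the dotenv report
version=$(printf '%s' "$raw" | tr -d '"')
echo "PROJECT_VERSION=$version" > .env
cat .env
```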
The problem is that when I run the individual commands using the same Docker image my CI is using (node:lts-alpine3.16), everything works fine. However, when this job runs, it fails with the following error:
Created fresh repository.
Checking out a3ac42fd as feat/SIS-540-More-CI-Changes...
Skipping Git submodules setup

Executing "step_script" stage of the job script
00:00
$ npm pkg get version # collapsed multi-line command

Uploading artifacts for failed job
00:01
Uploading artifacts...
WARNING: .env: no matching files. Ensure that the artifact path is relative to the working directory
ERROR: No files to upload

Cleaning up project directory and file based variables
00:00
ERROR: Job failed: command terminated with exit code 243
What is even more interesting is that sometimes the same job succeeds with no problems (at least printing out the version using npm pkg get version). I am honestly stuck and have no idea how to troubleshoot or resolve this.
Any hints or ideas are more than welcome.
I'm working with Wasm and Rust, and I'm deploying the page with GitLab Pages.
I'm using a gitlab-ci.yml file that looks like this:
image: "rust:latest"

variables:
  PUBLIC_URL: "/repo-name"

pages:
  stage: deploy
  script:
    - rustup target add wasm32-unknown-unknown
    - cargo install wasm-pack
    - wasm-pack build --target web
    - mkdir public
    - mv ./pkg ./public/pkg
    - cp ./index.html ./public/index.html
  artifacts:
    paths:
      - public
But even for a "Hello World" app, this takes ~12 minutes.
~11 minutes of that is taken by the cargo install wasm-pack step.
Is there any way I can cache the intermediate step, to avoid doing this every time?
This page, Caching in GitLab CI/CD, talks about caching and/or using artifacts to persist files between jobs. You may be able to make use of that.
It then becomes a question of how to get cargo install to use that cache or the saved artifacts.
Alternatively, you can define your own base build image (run the cargo install steps in that) and store it in GitLab's container registry; see https://docs.gitlab.com/ee/user/packages/container_registry/.
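As a sketch of the caching approach (an assumption on my part, not a tested config: CARGO_HOME must point inside the project directory, since GitLab can only cache paths under it):

```yaml
variables:
  CARGO_HOME: $CI_PROJECT_DIR/.cargo   # keep cargo's data inside the project dir

cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - .cargo/bin        # installed binaries, e.g. wasm-pack
    - .cargo/registry   # downloaded crates
    - target            # intermediate build output
```

With .cargo/bin cached, cargo install should detect the already-installed wasm-pack binary on subsequent runs and skip the long rebuild.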
I am using the Maven cache in my pipeline and I have a question. In my settings.xml I define my private JFrog repository for lib_release and lib_snapshot.
definitions:
  steps:
    - step: &compile
        name: compile
        caches:
          - maven
        script:
          - mvn -s settings.xml clean compile package
        artifacts:
          - target/**
I see that in the Build stage the Maven cache is downloaded:
>Cache "maven": Downloading.
>Cache "maven": Downloaded 103.5 MiB in 4 seconds.
>Cache "maven": Extracting.
>Cache "maven": Extracted in 1 seconds
But during the build I see that some .pom, maven-metadata.xml, and .jar files are still being downloaded from my private JFrog Artifactory.
For example:
>Downloaded from snapshots: https://jfrog.com/libs-snapshot/my-data/1.5.1-SNAPSHOT/my-data--1.pom (6.3 kB at 8.7 kB/s)
*So the question is: why is this data not cached?*
It is a snapshot. Snapshots are always retrieved to make sure the build has the latest version; only the server knows the latest version of a snapshot. (You can do an mvn clean install and look at the generated pom in your local repository to see how this works.)
https://maven.apache.org/guides/getting-started/index.html#What_is_a_SNAPSHOT_version
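If the repeated snapshot checks are a build-time concern, Maven's standard --no-snapshot-updates flag can suppress them (shown below applied to the command from the question; note this trades freshness for speed, since the build then uses whatever snapshot is already in the local repository):

```
mvn -s settings.xml --no-snapshot-updates clean compile package
```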
What's the easiest way to run a Node.js script as a step in Azure DevOps Pipeline (release step)?
I tried a few approaches and it seems like this is not as straightforward as I'd like it to be. I'm looking for a general approach idea.
My last attempt was adding the script to the source repository under tools/script.js. I need to run it in a release pipeline (after build), from where I can't access the repo directly, so I have added the entire repo as a second build artifact. Now I can reach the script file from the release agent, but I haven't found a way to actually run the script; there seems to be no option to run a Node.js script on an agent in general.
Option 1: Visual Pipeline Editor
1) Create the script file you want to run in your build/release steps
You can add the file to your Azure project repository, e.g. under tools/script.js and add any node modules it needs to run to your package.json, for example:
npm install --save-dev <module>
Commit and push, so your changes are online and Azure can see them.
2) Add your repo as an artifact for your release pipeline
You can skip this for build pipelines as they already have access to the repository.
3) Edit your release pipeline to ensure environment
Add a step to make sure the correct Node version is on the agent (Node.js Tool Installer).
Add a step to install all required node modules (npm).
4) Add your node script step
Use the Bash step to run your node script; make sure the working directory is set to the root of the project (where package.json is located).
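In YAML form, the steps above could look roughly like this (a sketch; the tools/script.js path and Node version are assumptions, not part of the original answer):

```yaml
steps:
  # Ensure the desired Node version is on the agent
  - task: NodeTool@0
    inputs:
      versionSpec: '16.x'
  # Install the modules the script needs (from package.json)
  - script: npm ci
    workingDirectory: $(System.DefaultWorkingDirectory)
  # Run the script itself
  - script: node tools/script.js
    workingDirectory: $(System.DefaultWorkingDirectory)
```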
Option 2: YAML
You have a script/shell step where you can execute custom commands; just use that to achieve the goal. Agents have Node installed on them; one thing you might need to do is use a Node version selection step (Node.js Tool Installer) to get the right Node version for your script.
Example:
trigger:
  - master

pool:
  vmImage: 'Ubuntu-16.04'

steps:
  - checkout: self
    persistCredentials: true
    clean: true
  - bash: |
      curl $BEDROCK_BUILD_SCRIPT > build.sh
      chmod +x ./build.sh
    displayName: My script download
    env:
      BEDROCK_BUILD_SCRIPT: https://url/yourscript.sh
  - task: ShellScript@2
    displayName: My script execution
    inputs:
      scriptPath: build.sh
I have 2 Maven projects hosted on GitLab. Let's call them A and B. Project A depends on project B.
I want to use gitlab-ci to build A.
Here is the gitlab-ci.yml file for project B:
image: maven:3-jdk-8

build:
  script: "mvn install -B"
What should the gitlab-ci in project A look like?
Use Git submodules in your project A to refer to project B, and then add
GIT_SUBMODULE_STRATEGY: recursive
to the gitlab-ci.yml file in project A. Furthermore, project B also needs a CI configuration file in its project root.
https://docs.gitlab.com/ce/ci/git_submodules.html
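A minimal sketch of project A's gitlab-ci.yml (assuming project B is checked out as a submodule in a project-b directory; the directory name is hypothetical):

```yaml
image: maven:3-jdk-8

variables:
  GIT_SUBMODULE_STRATEGY: recursive   # fetch project B along with project A

build:
  script:
    # Build and install B into the local Maven repository first
    - mvn -B install -f project-b/pom.xml
    # Then build A, which can now resolve B locally
    - mvn -B install
```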