Concourse CI - Build Artifacts inside source, pass all to next task - node.js

I want to set up a build pipeline in Concourse for my web application. The application is built using Node.
The plan is to do something like this:
                                         ,-> build style guide -> dockerize
source code -> npm install -> npm test -|
                                         `-> build website -> dockerize
The problem is that after npm install, a new container is created, so the node_modules directory is lost. I want to pass node_modules into the later tasks, but because it lives "inside" the source code, Concourse rejects the configuration and gives me
invalid task configuration:
you may not have more than one input or output when one of them has a path of '.'
Here's my job setup:
jobs:
- name: test
  serial: true
  disable_manual_trigger: false
  plan:
  - get: source-code
    trigger: true
  - task: npm-install
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: { repository: node, tag: "6" }
      inputs:
      - name: source-code
        path: .
      outputs:
      - name: node_modules
      run:
        path: npm
        args: [ install ]
  - task: npm-test
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: { repository: node, tag: "6" }
      inputs:
      - name: source-code
        path: .
      - name: node_modules
      run:
        path: npm
        args: [ test ]
Update 2016-06-14
Inputs and outputs are just directories. You put whatever you want to pass along into an output directory, and that directory can then be fed to another task in the same job. Inputs and outputs cannot overlap, so to make this work with npm you have to copy either node_modules or the entire source folder from the input directory into an output directory, then use that output in the next task (see the sketch below).
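A minimal sketch of that copy approach, reusing the npm-install task from above; the output name built-source is my own choice, not anything Concourse requires:

- task: npm-install
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: { repository: node, tag: "6" }
    inputs:
    - name: source-code
    outputs:
    - name: built-source
    run:
      path: sh
      args:
      - -exc
      # Copy the source (including dotfiles) into the output directory,
      # then install there; the next task takes built-source as its input.
      - |
        cp -R source-code/. built-source/
        cd built-source
        npm install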
This doesn't work between jobs, though. The best suggestion I've seen so far is to push everything to a temporary git repository or bucket. There has to be a better way of doing this, since part of what I'm trying to do is avoid huge amounts of network IO.

There is a resource specifically designed for this use case of carrying npm state between jobs. I have been using it for a couple of weeks now:
https://github.com/ymedlop/npm-cache-resource
It basically allows you to cache the first npm install and inject it as a folder into the next job of your pipeline. You could also quite easily set up your own caching resource by reading the source of that one, if you want to cache more than node_modules.
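If you want to try it, a custom resource type is registered in the pipeline like any other Concourse resource type. A minimal sketch, assuming the image is published under the name shown (check the project's README for the actual repository and configuration options):

resource_types:
- name: npm-cache
  type: docker-image
  source:
    repository: ymedlop/npm-cache-resource   # assumption: verify the image name in the README
    tag: latest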
I am actually using this npm-cache-resource in combination with a Nexus proxy to speed up the initial npm install even further.
Be aware that some npm packages have native bindings that need to be built against standard libraries matching the container's Linux version, so if you move between different types of containers a lot you may hit issues with musl libc and the like. In that case I recommend either standardizing on the same container type throughout the pipeline or rebuilding the node_modules in question.
There is a similar resource for Gradle (on which the npm one is based):
https://github.com/projectfalcon/gradle-cache-resource

This doesn't work between jobs though.
This is by design. Each step (get, task, put) in a job is run in an isolated container. Inputs and outputs are only valid inside a single job.
What connects jobs is resources. Pushing to git is one way. It'd almost certainly be faster and easier to use a blob store (e.g. S3) or a file store (e.g. FTP).
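As a sketch of the blob-store route with the official Concourse s3 resource: tar up node_modules in one job, put it to the resource, and get it in the next. The bucket name, credential variables, and file naming below are placeholders:

resources:
- name: node-modules-tarball
  type: s3
  source:
    bucket: my-ci-cache                   # placeholder bucket
    regexp: node_modules-(.*).tgz         # versioned by the capture group
    access_key_id: ((aws_access_key))
    secret_access_key: ((aws_secret_key))

# In the producing job, after creating tarball/node_modules-<version>.tgz:
- put: node-modules-tarball
  params: { file: tarball/node_modules-*.tgz }

# In a later job:
- get: node-modules-tarball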

Related

Gitlab - cache error message about no URL

I have the following logic in my .gitlab-ci.yml:
stages:
  - build
  - deploy

job_make_zip:
  tags:
    - test123
  image: node:10.19
  stage: build
  script:
    - npm install
    - make
    - make source-package
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
  artifacts:
    when:
    paths:
      - test.bz2
    expire_in: 2 days
When the job runs, I see the following message:
17 Restoring cache
18 Checking cache for master...
19 No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
20 Successfully extracted cache
I'm just new to GitLab, so I can't tell whether this is an error or not. I basically don't want to have to download the same npm modules every single time this build runs.
I found a similar post here: GitLab CI caching key
But I'm already using the correct GitLab CI variable.
Any suggestions would be appreciated.
In my GitLab CI setup at home I am getting this warning (in my case I am not considering it to be an error) in all of my build jobs. According to https://gitlab.com/gitlab-org/gitlab/-/issues/201861 and https://gitlab.com/gitlab-org/gitlab-runner/-/issues/16097 there seem to be cases where this message should be taken seriously.
That is especially true if you are uploading (and later downloading and extracting) the cache to a particular URL that several runners use to fetch and sync the cache. In the general case, though, meaning the cache is stored on a single GitLab Runner rather than on a shared source used by several runners, I don't think this message has any real meaning. On my runners, which usually are project- or group-specific, this has never been a problem, and the cache was always properly extracted locally.
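For a genuinely shared cache across runners, you would configure a distributed cache backend in each runner's config.toml. A minimal sketch assuming an S3-compatible store; the bucket, region, and credentials are placeholders:

[runners.cache]
  Type = "s3"
  Shared = true
  [runners.cache.s3]
    ServerAddress = "s3.amazonaws.com"
    AccessKey = "<access-key>"
    SecretKey = "<secret-key>"
    BucketName = "runner-cache"     # placeholder bucket
    BucketLocation = "eu-west-1"    # placeholder region

With this in place the runner uploads and downloads the cache from the shared server, and the "No URL provided" message should disappear.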

NPM dependencies on another private Bitbucket repo: Azure DevOps pipeline authentication fails

I'm working on an Azure DevOps build pipeline for a project. I can't make any changes to the code itself besides the azure-pipeline.yaml file. (And to be honest, I know very little about the project itself.)
I'm stuck on the npm install dependencies step. I'm currently working with the YAML pipeline, but if there's a solution in the classic mode I'll go with that.
The issue is the following:
I've created the pipeline, and I check out a private Bitbucket repository according to the documentation:
resources:
  repositories:
    - repository: MyBitBucketRepo1
      type: bitbucket
      endpoint: MyBitBucketServiceConnection
      name: MyBitBucketOrgOrUser/MyBitBucketRepo
Next I set the correct version of Node and execute an npm install task:
- task: Npm@1
  displayName: 'NPM install'
  inputs:
    command: 'install'
    workingDir: 'the working directory'
So far so good. But there is a dependency on another Bitbucket repository. In the package.json there is a dependency like this:
another-dependency: git:https://bitbucket.org/organisation/repo.git#v1.1.3
I do have access to this repository, but when npm install runs it can't reuse the credentials from the first repository.
I've tried adding both repositories to the resources in the hope that that would work, but I still get the same error:
error fatal: Authentication failed for 'https://bitbucket.org/organisation/repo.git/'
I've also tried to set up a caching mechanism: run npm install on the 2nd repo, store the dependencies, then run npm install on the first one. But that didn't work, unfortunately.
Is there a way in Azure DevOps pipelines, without making changes to the project setup, to make this work?
Thanks!
Normally I have the .npmrc in the repo so I don't have to add any other task. Something like in this guide:
https://learn.microsoft.com/en-us/azure/devops/artifacts/get-started-npm?view=azure-devops&tabs=windows
I've never done it that way myself, but I think you can authenticate with the external feed by adding this task (see the sketch below):
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/package/npm-authenticate?view=azure-devops
Reading a bit more, I don't know if you can do this without adding a .npmrc to your repo. You can create a service connection to store your login credentials, but even then you need the .npmrc in your repo.
Try it and tell me if it helps!
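For reference, the npmAuthenticate task mentioned above is added like this; a minimal sketch, assuming the repo's .npmrc names the private feed:

- task: npmAuthenticate@0
  inputs:
    workingFile: .npmrc   # the .npmrc that points at the private registry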
npm will prompt for a password when you run npm install against this package.json locally. Since we can't enter a password during a CI/CD pipeline run, this causes the authentication failed error.
An alternative workaround is to add the credentials directly in the URL, like this:
"dependencies": {
"another-dependency": "git+https://<username>:<password>#bitbucket.org/xxx/repo.git"
}
See app passwords:
username: your normal Bitbucket username
password: the app password
This has a disadvantage: the app password is stored as plain text in the package.json file, which is insecure if someone else can access that file. So it's up to you whether to use this workaround.
As a workaround within the Azure DevOps pipeline:
You can add a File Transform task to replace the old URL with the new username+password URL before your npm install step.
1. I have a package.json in the root directory with content like git:https://bitbucket.org/organisation/repo.git#v1.1.3.
2. Define a dependencies.another-dependency variable with value git+https://<username>:<password>@bitbucket.org/..., and set it as secret.
3. Add the File Transform task, targeting the package.json file (a sketch follows below).
4. The pipeline then runs against a new package.json in which the dependency URL carries the credentials.
This doesn't actually affect your package.json under version control; it just adds the credentials temporarily during the pipeline run.
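A minimal sketch of that File Transform step; the folder path is an assumption for a package.json sitting in the repo root:

- task: FileTransform@1
  displayName: 'Inject credentials into package.json'
  inputs:
    folderPath: '$(System.DefaultWorkingDirectory)'
    fileType: 'json'
    targetFiles: 'package.json'

The substitution works because the secret variable's name, dependencies.another-dependency, matches the JSON path of the value to replace.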

Use private npm registry for Google App Engine Standard

In all the other Stack Overflow questions, it seems people are asking either about a private npm git repository or about a different technology stack. I'm pretty sure I can use a private npm registry with GAE Flexible, but I was wondering whether it is possible with the Standard version.
From the GAE Standard docs, it doesn't seem to be possible. Has anyone figured out otherwise?
Google marked this feature request as "won't fix, intended behavior" but there is a workaround.
Presumably you have access to the environment variables during the build stage of your CI/CD pipeline. Begin that stage by having your build script overwrite the .npmrc file using the value of the environment variable (note: the value, not the variable name). The .npmrc file (and the token in it) will then be available to the rest of the CI/CD pipeline.
For example:
- name: Install and build
  env:
    NPM_AUTH_TOKEN: ${{ secrets.PRIVATE_REPO_PACKAGE_READ_TOKEN }}
  run: |
    # Remove these 'echo' statements after we migrate off of Google App Engine.
    # See replies 14 and 18 here: https://issuetracker.google.com/issues/143810864?pli=1
    echo "//npm.pkg.github.com/:_authToken=${NPM_AUTH_TOKEN}" > .npmrc
    echo "@organizationname:registry=https://npm.pkg.github.com" >> .npmrc
    echo "always-auth=true" >> .npmrc
    npm install
    npm run compile
    npm run secrets:get ${{ secrets.YOUR_GCP_PROJECT_ID }}
Hat tip to the anonymous heroes who wrote replies 14 and 18 in the Issue Tracker thread: https://issuetracker.google.com/issues/143810864?pli=1
If you have a .npmrc file checked in with your project's code, you would be wise to put a comment at the top, explaining that it will be overwritten during the CI/CD pipeline. Otherwise, Murphy's Law dictates that you (or a teammate) will check in a change to that .npmrc file and then waste an unbounded amount of time trying to figure out why that change has no effect during deployment.
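For example, the checked-in .npmrc might look like this (a sketch; the registry lines mirror the build step above):

# NOTE: This file is overwritten by the CI/CD build step before npm install runs.
# Editing it here has NO effect on deployed builds.
//npm.pkg.github.com/:_authToken=${NPM_AUTH_TOKEN}
@organizationname:registry=https://npm.pkg.github.com
always-auth=true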

GitLab CI Yocto Build - How to use SSTATE and DL_DIR

How do I configure GitLab CI to store the SSTATE_DIR and the DL_DIR between several jobs? Currently, bitbake rebuilds the complete project every time, which is very time consuming, so I would like to reuse the sstate. I tried caching, but build time effectively increases due to the big zip/unzip overhead.
I don't even need a shared sstate between several projects, just a method to store the output between jobs.
I'm using GitLab 11.2.3 with a shell executor as runner.
Thanks a lot!
In version 11.10, GIT_CLEAN_FLAGS was added, which can be used to achieve what you want with the shell executor.
For completeness: when using the docker executor, this can be achieved by using docker volumes, which are persistent across builds.
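A minimal sketch of such a volume mount in the runner's config.toml; the host paths are examples, not anything GitLab prescribes:

[[runners]]
  executor = "docker"
  [runners.docker]
    # Host directories that survive between builds, mounted into every job container
    volumes = ["/var/bitbake/sstate-cache:/sstate-cache",
               "/var/bitbake/downloads:/downloads"]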
If you're only using one runner for this, you could potentially use GIT_STRATEGY: none, which re-uses the project workspace for the following job (see the relevant documentation). However, this isn't very reliable if you have multiple jobs that need the runner, as jobs started from different pipelines could pollute the workspace.
Another way, if you're still using one runner, is to simply copy the directories out of the workspace and back into the job that requires them.
Otherwise, you may be out of luck and have to wait for the sticky runners issue.
You can reuse a shared-state cache between jobs simply as follows: specify the path to the sstate-cache directory in the .yml file of your gitlab-ci pipeline. An example fragment from one of mine:
myrepo.yml
stages:
  ...
  ...
variables:
  ...
  TCM_MACHINE: buzby2
  ...
  SSTATE_CACHE: /sstate-cache/$TCM_MACHINE/PLAT3/
  PURGE_SSTATE_CACHE: N
  ...
In my case /sstate-cache/$TCM_MACHINE/PLAT3/ is a directory in the docker container that runs the build. This path is mounted in the docker container from a permanent sstate cache directory on the build server's filesystem, /var/bitbake/sstate-cache/<machine-id>/PLAT3. PURGE_SSTATE_CACHE is overridable by a private variable in the pipeline schedule settings so that I can optionally delete the cache for a squeaky-clean build.
Ensure that the setting of SSTATE_CACHE is appended to the bitbake conf/local.conf file of the build, e.g.:
.build_image: &build_image
  stage: build
  tags:
    ...
  before_script:
    ...
  script:
    - echo "SSTATE_DIR ?= \"$SSTATE_CACHE\"" >> conf/local.conf
    ...
Apply the same pattern to DL_DIR if you use it.
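For example, with an analogous DL_CACHE variable (the name is mine, mirroring SSTATE_CACHE above):

variables:
  ...
  DL_CACHE: /dl-cache/$TCM_MACHINE/
script:
  - echo "SSTATE_DIR ?= \"$SSTATE_CACHE\"" >> conf/local.conf
  - echo "DL_DIR ?= \"$DL_CACHE\"" >> conf/local.conf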
Variables you use in the .yml file can be overridden by gitlab-ci trigger or schedule variables. See Priority of variables.

GitLab CI/CD pull code from repository before building ASP.NET Core

I have GitLab running on computer A, the development environment (Visual Studio Pro) on computer B, and Windows Server on computer C.
I set up GitLab-Runner on computer C (Windows server). I also set up a .gitlab-ci.yml file to build and run tests for the ASP.NET Core application on every commit.
I don't know how to get the code onto computer C (Windows server) so I can build it (dotnet msbuild /p:Configuration=Release "%SOLUTION%"). It bothers me that not a single example .gitlab-ci.yml I found on the net pulls code from GitLab before building the application. Why?
Is this the correct way to set up CI/CD?
User creates a pull request (a new branch is created)
User writes code
User commits code to the branch from computer B
GitLab Runner is started on computer C
It needs to pull code from the current branch (CI_COMMIT_REF_NAME)
Build, test, deploy ...
Should I use common git commands to get the code, or is this something GitLab Runner already does? Where is the code?
Why does no one pull code from GitLab in .gitlab-ci.yml?
Edited:
I got the error
'"git"' is not recognized as an internal or external command
The solution in my case was to restart GitLab-Runner (source).
@MilanVidakovic explained that the source is downloaded automatically (which I didn't know).
I just have one remaining problem: how to get the correct path to my .sln file.
Here is my complete .gitlab-ci.yml file:
variables:
  SOLUTION: missing_path_to_solution #TODO

before_script:
  - dotnet restore

stages:
  - build

build:
  stage: build
  script:
    - echo "Building %CI_COMMIT_REF_NAME% branch."
    - dotnet msbuild /p:Configuration=Release "%SOLUTION%"
  except:
    - tags
I need to set the correct variable for SOLUTION. My directory (where GitLab-Runner is located) currently holds these folders/files:
- config.toml
- gitlab-runner.exe
- builds/
  - 7cab42e4/
    - 0/
      - web/          # I think this is the project group in GitLab
        - test/       # I think this is the project name in GitLab
          - .sln
          - AND ALL OTHER PROJECT FILES   # based on a first look
- testm.tmp
So, what are 7cab42e4 and 0? More to the point, how do I get the correct path to my project structure? Is there a predefined variable?
Edited 2:
The answer is CI_PROJECT_DIR.
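So the SOLUTION variable from the file above can be built from it; the solution filename here is a hypothetical placeholder for the actual one, since CI_PROJECT_DIR already points at the project root (the test/ directory above):

variables:
  SOLUTION: '$CI_PROJECT_DIR\MySolution.sln'   # hypothetical filename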
I'm not sure I follow completely.
On every commit, GitLab Runner fetches your repository into C:\gitlab-runner\builds\.. on the local machine (computer C), and builds, deploys, or does whatever you've provided as an action for the stage.
Also, I don't see the need for building the source code again. If you're using computer C for both the runner and tests/acceptance, just let the runner do the building and add an artifacts item to your .gitlab-ci.yml; the paths defined under artifacts will retain your executables on computer C, which you are then able to use for whatever purposes (a sketch follows below).
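A minimal sketch of such an artifacts item, with a placeholder output path:

build:
  stage: build
  script:
    - dotnet msbuild /p:Configuration=Release "%SOLUTION%"
  artifacts:
    paths:
      - path/to/publish/output/   # placeholder: wherever msbuild drops the binaries
    expire_in: 1 week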
Hope it helps.
Edit after comment:
When you push to the repository, GitLab CI/CD automatically checks your root folder for a .gitlab-ci.yml file. If it's there, the runner takes over, parses the file and starts executing the jobs/stages.
As soon as the file itself is valid and contains proper jobs and stages, the runner fetches the latest commit (automatically) and does whatever the script items tell it to do.
To verify that everything works correctly, go to your GitLab -> CI/CD -> Pipelines and check what's going on: you should see the pipeline with its jobs and their statuses listed there.
Maybe it would be best if you posted your .yml file; there could be a number of reasons your runner is not picking up the code. For instance, maybe your .yml tags don't match the tags the runner was created to pick up, etc.
