I'm curious about how to handle prompted input in a GitLab CI pipeline (correct me if I'm wrong).
The main problem is that my Node.js setup script shows an interactive prompt when I run it locally, like this:
[screenshot: the prompt running on Ubuntu]
But when I try to implement the same thing in GitLab CI, I get an error like this:
[screenshot: the failing GitLab CI job]
This is my gitlab.ci.yml script:
image: node:latest

cache:
  paths:
    - node_modules/

all_tests:
  script:
    - npm install
    - npm run setup
    - John Doe \n
    - npm run test
First, CI best practices suggest you create a --force or --no-interactive variant of your installer, so interactive input can be skipped in automated deploys.
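For example, if your setup script supported such a flag (hypothetical here, your installer would have to implement it), the job could call it non-interactively:

all_tests:
  script:
    - npm install
    # --no-interactive is a hypothetical flag your setup script would need to support
    - npm run setup -- --no-interactive
    - npm run test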
A workaround is to use the Unix yes utility, which feeds a string to interactive input, like this (in your case):
image: node:latest

cache:
  paths:
    - node_modules/

all_tests:
  script:
    - npm install
    - yes 'Gitlab CI' | npm run setup
    - npm run test
This will answer 'Gitlab CI' to every question asked, so it is rather limited.
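If the setup asks several different questions in a fixed order, a minimal sketch (assuming the prompts read their answers line by line from stdin) is to feed the answers with printf instead:

all_tests:
  script:
    - npm install
    # one answer per line, in the order the prompts appear
    - printf 'John Doe\nGitlab CI\n' | npm run setup
    - npm run test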
By the way, I think you mean .gitlab-ci.yml (with the leading dot and a hyphen) rather than gitlab.ci.yml in your question?
I have a project that requires npm and gradle for build, and docker for building and pushing the image.
At first I thought that I should create my own Ubuntu image with Gradle and npm set up, but I found out that is not what Docker images are for.
So I hoped to run the official Gradle and Node images as services so that my script could call those commands, but that is not working for some reason.
My .gitlab-ci.yml:
variables:
  IMAGE_NAME: my.registry.production/project
  IMAGE_TAG: $CI_COMMIT_BRANCH
  GIT_SUBMODULE_STRATEGY: recursive

stages:
  - build
  - deploy

build_project:
  stage: build
  image: ubuntu:jammy
  services:
    - name: node:12.20
      alias: npm
    - name: gradle:6.3.0-jre8
      alias: gradle
  before_script:
    - git submodule init && git submodule update --remote --recursive
  script:
    - cd project-server && npm install && gradle clean build -Pprod -Pwar -x test -x integrationTest

deploy_image:
  stage: deploy
  image: docker:20.10.17
  services:
    - name: docker:20.10.17-dind
      alias: docker
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
    DOCKER_DRIVER: overlay2
  script:
    - docker login -u $REGISTRY_USER -p $REGISTRY_PASSWORD my.registry.production
    - docker build -t $IMAGE_NAME:$IMAGE_TAG .
    - docker push $IMAGE_NAME:$IMAGE_TAG
If anyone has any info on how to solve this I would greatly appreciate it, since I'm a DevOps novice.
Edit 1:
My Dockerfile for the custom image with Gradle and Node installed:
FROM ubuntu:jammy
LABEL key=DevOps
SHELL ["/bin/bash", "--login", "-i", "-c"]
RUN apt update && apt upgrade -y && apt install curl -y
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.3/install.sh | bash
RUN source /root/.bashrc && nvm install 12.14.1
RUN nvm install 12.20.0
RUN apt install zip unzip
RUN curl -s "https://get.sdkman.io" | bash
RUN source "$HOME/.sdkman/bin/sdkman-init.sh"
RUN sdk install java 8.0.302-open
RUN sdk install gradle 3.4.1
SHELL ["/bin/bash", "--login", "-c"]
CMD [ "bin/bash" ]
After I run it, it says that npm is not found in $PATH; I tried Java and Gradle as well, and they were not found on the path either.
I don't know why, since I installed them, as you can see from the Dockerfile.
As far as I know, a Docker image corresponds to one build. So if you have multiple services, you need to build each one into its own Docker image and then bring all the images together in a docker-compose.yml file.
I think you can do the following:
1. Build the npm project into a Docker image.
2. Build the Gradle project into a Docker image.
3. Write the docker-compose.yml file and reference both images.
Once you have done that, the pipeline calls the docker-compose.yml file; a rough sketch follows.
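A minimal docker-compose.yml along those lines could look like this (the image names, service names, and ports are placeholders, not taken from the project above):

# docker-compose.yml -- all names and ports below are placeholders
services:
  frontend:
    image: my.registry.production/project-frontend:latest
    ports:
      - "8080:8080"
  backend:
    image: my.registry.production/project-backend:latest
    ports:
      - "8081:8081"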
I hope this will be helpful.
Consider a few suggestions based on fundamental concepts about deployment in a CI/CD pipeline:
Remove the services keyword. See GitLab's official documentation on what the services keyword in the gitlab-ci.yml file is (and is not) for: it provides network-accessible services to your job at runtime (like a database), not build tools: https://docs.gitlab.com/ee/ci/services/index.html
Your project uses npm as a dependency management system and Gradle as a build tool. Both of these are entirely appropriate to run directly in the container that the GitLab pipeline job runs in: you need these tools to assemble the build artifact on the same host your code has been checked out on by the Runner.
Think about the overall size of the base image in your build_project job and consider how the time to download that image over the network onto the Runner will affect your job and overall pipeline duration. If performance can be improved by baking the build dependencies into a custom Dockerfile, do that. If the resulting image is too large, instead use shell commands inside the script block to download the tools at job runtime. There are pros and cons to both approaches.
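A rough sketch of the "install at job runtime" option is shown below. The package names are assumptions about what this particular build needs (Node.js, npm, and a JDK from Ubuntu's own repositories), and the gradlew call assumes the Gradle wrapper is committed to the repository, as suggested further below:

build_project:
  stage: build
  image: ubuntu:jammy
  before_script:
    # install build tools at job runtime; versions come from Ubuntu's repositories
    - apt-get update
    - apt-get install -y git nodejs npm openjdk-8-jdk
    - git submodule init && git submodule update --remote --recursive
  script:
    - cd project-server
    - npm install
    - ./gradlew clean build -Pprod -Pwar -x test -x integrationTest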
Break shell scripts into one command per line for easier troubleshooting of failures. You will then be able to see in your job logs which command returned a non-zero exit code:
...
script:
  - cd project-server
  - npm install
  - gradle clean build -Pprod -Pwar -x test -x integrationTest
...
It's recommended to use the Gradle wrapper (gradlew) rather than the gradle executable directly in most cases. Configure it within your project and check the wrapper's configuration files into your version control system; this simplifies your build dependencies: https://docs.gradle.org/current/userguide/gradle_wrapper.html
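For instance, generate the wrapper once locally with gradle wrapper --gradle-version 6.3 (the version is just an example), commit the generated files, and then the job can call the wrapper instead of a system-wide Gradle:

script:
  - cd project-server
  # gradlew downloads the pinned Gradle distribution itself on first run
  - ./gradlew clean build -Pprod -Pwar -x test -x integrationTest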
I know my question is similar to this one, but I hope someone can help me execute Playwright tests in a GitLab pipeline.
My .gitlab-ci.yaml includes the following lines:
image: node:16.13.0

...

test e2e:
  stage: test
  script:
    - npx playwright install
    - npm run test-e2e
Can I somehow set a proper Docker image or an OS that is supported?
This is described in the official docs: https://playwright.dev/docs/ci#gitlab-ci
You can set the image per job, as in my case:
test e2e:
  stage: test
  image: mcr.microsoft.com/playwright:v1.16.3-focal
  script:
    - npx playwright install
    - npm run test-e2e
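If you would rather keep a plain Node image, newer Playwright releases can also install the required OS packages themselves via the --with-deps flag (check that your Playwright version supports it), roughly like this:

test e2e:
  stage: test
  image: node:16.13.0
  script:
    # installs the browsers plus their system dependencies
    - npx playwright install --with-deps
    - npm run test-e2e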
Consider the following builds:
3rd build without npm cache
3rd build with npm cache
These two repositories are almost identical; the only difference is that the latter caches npm via the setup-node GitHub Action, whereas the former does not. In other words, the only difference between the repositories is in the .github/workflows/main file:
name: Build Pipeline

on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '11'
          # Following line is present only in the latter repository
          cache: 'npm'
      - run: npm install
      - run: npm run build
Although the build in "setup-node-with-cache" successfully uses the npm cache (as is evident from the output of the Run actions/setup-node@v2 step), the run time of the Run npm install step is almost the same as in the corresponding step of the build in "setup-node-without-cache".
Isn't the run time of the Run npm install step in "setup-node-with-cache" supposed to be significantly shorter, since it should be using the cached npm packages? Am I missing something here?
It really does not seem to be, at least on GitHub Actions. The workaround I found is to cache the actual node_modules folder, even though this is "not recommended". Caching the actual node_modules folder does speed up npm install.
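A minimal sketch of caching node_modules directly with actions/cache (the key scheme below is just one common convention) could look like this:

steps:
  - uses: actions/checkout@v2
  - uses: actions/setup-node@v2
    with:
      node-version: '11'
  # cache node_modules itself, keyed on the lockfile contents
  - uses: actions/cache@v2
    with:
      path: node_modules
      key: ${{ runner.os }}-node-modules-${{ hashFiles('**/package-lock.json') }}
  - run: npm install
  - run: npm run build

On a cache hit, npm install still runs, but it finds node_modules already populated, which is where the time savings come from.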
I've created an action for a deployment on GitHub Actions. This all works with composer install and git pulling the master branch. However, on my DigitalOcean droplet I get this issue:
bash: line 4: npm: command not found
If I ssh into my server I can use npm perfectly fine. It was installed via nvm and uses the latest version, but for some reason it is not accessible from the action.
My deployment script is:
on:
  push:
    branches: [master]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy Laravel APP
        uses: appleboy/ssh-action@v0.1.4
        with:
          host: ${{ secrets.SSH_HOST }}
          key: ${{ secrets.SSH_KEY }}
          username: ${{ secrets.SSH_USER }}
          script: |
            cd /var/www/admin
            git pull origin master
            composer install
            npm install
            npm run prod
I presume this has more to do with the nvm setup, since I can use npm over ssh; but the action logs in with the same user I use over ssh, so I can't see where the issue is.
Any ideas how I can resolve this and give the GitHub Action access to npm?
I didn't find a solution for the nvm issue; installing npm a different way resolved it for me.
I had the same issue and finally found the solution.
I solved it by adding the following lines before running the npm commands:
export NVM_DIR=~/.nvm
source ~/.nvm/nvm.sh
These commands help the shell pick up the Node path installed by nvm.
Reference link is here.
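Applied to the deployment script from the question above, that would look roughly like this:

script: |
  # make the nvm-managed node/npm visible to this non-interactive shell
  export NVM_DIR=~/.nvm
  source ~/.nvm/nvm.sh
  cd /var/www/admin
  git pull origin master
  composer install
  npm install
  npm run prod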
I am trying to run a CI pipeline in GitLab:
image: node:latest

stages:
  - deploy

production:
  stage: deploy
  before_script:
    - npm config set prefix /usr/local
    - npm install -g serverless
  script:
    - serverless deploy
I am using the Docker image as they suggest, but it cannot find npm (or node).
How can I get this working?
Well, this is a bit weird, as your CI config is correct.
If you are just using gitlab.com and their shared runners, then this .gitlab-ci.yml will work.
One possible reason could be that you have runners added as SSH/shell executors in the project repo. If so, the image tag you specified will simply be ignored.
An error like command not found can then occur because the server where you added the runner does not have Node.js installed; the npm config set prefix command in before_script will fail with exit code 127, and the pipeline will stop right there.
If you have multiple runners, tag them and tag your jobs in the CI yml as well.
And if you are trying to run the job on your own server, you need to install Docker there first.
By the way, with the Docker image node:latest you don't need npm config set prefix /usr/local, as the prefix already is /usr/local; a trimmed-down version of the job is sketched below.
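Assuming the job does run on a Docker executor with node:latest, a trimmed-down version could look like this (the runner tag is a placeholder and only needed if your runners are tagged):

image: node:latest

stages:
  - deploy

production:
  stage: deploy
  # tags:
  #   - docker   # placeholder; only needed if your runners require tags
  before_script:
    - npm install -g serverless
  script:
    - serverless deploy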