npm: command not found - Gitlab CI specific runner - node.js

I am running gitlab-runner on my server; I am not using Docker for deployment. I am trying to deploy to a remote server by SSHing into it. This is my .gitlab-ci.yml file:
stages:
  - deploy

pre-staging:
  stage: deploy
  environment:
    name: Gitlab CI/CD for pre-staging deployment
    url: "$REMOTE_SERVER"
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - mkdir -p ~/.ssh
    - eval $(ssh-agent -s)
    - 'echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - ssh-add <(echo "$REMOTE_PVT_KEY")
    - ssh ubuntu@"$REMOTE_SERVER" "cd deployment_container; npm --version; rm -rf static; source deploy.sh"
    - echo "Deployment completed"
  only:
    - megre_requests
    - pre-staging
  tags:
    - auto-deploy
My pipeline is failing with the error npm: command not found. The environment for npm is set up properly on the server I SSH into. I am trying to deploy a Django-React application.
I have already tried using image: node:latest.
npm is installed using nvm.
Can somebody help me resolve this?

Try and replace the ssh step with:
ssh ubuntu@"$REMOTE_SERVER" "pwd; cd deployment_container; echo $PATH"
If this "deployment" (which won't do anything) completes, it means npm is not accessible in the default PATH defined in the SSH session.

In this case we have to give all users access to npm by executing the command below:
n=$(which node);n=${n%/bin/node}; chmod -R 755 $n/bin/*; sudo cp -r $n/{bin,lib,share} /usr/local
This resolved my issue of npm: command not found.
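A lighter-weight alternative sketch, assuming the SSH user is the same user that installed nvm and that nvm's node resolves from an interactive shell on the server, is to symlink the nvm-managed binaries into a directory that is already on the default PATH:
# run on the remote server from an interactive shell where nvm is loaded
sudo ln -s "$(which node)" /usr/local/bin/node
sudo ln -s "$(which npm)" /usr/local/bin/npm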

You can try this one.
stages:
  - build
  - deploy

deploy-prod:
  image: node:12.13.0-alpine
  stage: deploy
  script:
    - npm i -g firebase-tools
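The snippet above stops after installing the CLI. A hedged sketch of what a full deploy step might look like, assuming a Firebase Hosting project and a CI token stored in a $FIREBASE_TOKEN variable (neither is part of the original answer):
deploy-prod:
  image: node:12.13.0-alpine
  stage: deploy
  script:
    - npm i -g firebase-tools
    # deploy the hosting target using a token generated with `firebase login:ci`
    - firebase deploy --only hosting --token "$FIREBASE_TOKEN"
  only:
    - master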

Related

Gitlab Pipeline: lftp runs into a timeout

I received a notification a few days ago from my GitLab runner about a failed pipeline.
The pipeline had been working normally and nothing was changed, which makes everything a bit harder to investigate.
The specific command which runs into the 1-hour timeout is the following:
lftp -e "set ftp:ssl-allow no; mirror -R dist/ ./; quit;" -u $USER_TEST,$PASSWORD_TEST $HOST_TEST
This was working fine before. I tried to troubleshoot the problem; there are plenty of reasons out there why the timeout can happen, but none of them solved my problem.
A short summary of what the pipeline does:
- building an Angular app (the dist folder's size is about 750 kB)
- deploying it to a server using FTP credentials
I manually went through the pipeline steps, hoping to replicate the bug, but everything worked fine.
.gitlab-ci.yml:
image: node:14.15.3-alpine

cache:
  paths:
    - node_modules/

stages:
  - build
  - deploy

#DEV Stage
build_stage_dev:
  stage: build
  only:
    refs:
      - develop
  cache:
    paths:
      - dist/
  script:
    - npm install --legacy-peer-deps
    - npm install -g @angular/cli@11.0.5
    - ng build --build-optimizer

deploy_stage_dev:
  stage: deploy
  environment: develop
  only:
    refs:
      - develop
  script:
    - apk update && apk add openssh-client && apk add sshpass
    - export SSHPASS=$PASSWORD_DEV
    # command to remove all files first
    # - sshpass -e ssh -o stricthostkeychecking=no $USER_DEV@$HOST_DEV rm -r /var/www/app/*
    - sshpass -e scp -o stricthostkeychecking=no -r dist/* $USER_DEV@$HOST_DEV:/var/www/app

#TEST Stage
build_stage_test:
  stage: build
  only:
    refs:
      - test
  cache:
    paths:
      - dist/
  script:
    - npm install --legacy-peer-deps
    - npm install -g @angular/cli@11.0.5
    - ng build --build-optimizer

deploy_stage_test:
  stage: deploy
  environment: test
  only:
    refs:
      - test
  script:
    - apk update && apk add lftp
    - lftp -e "set ftp:ssl-allow no; mirror -R dist/ ./; quit;" -u $USER_TEST,$PASSWORD_TEST $HOST_TEST
The DEV stage (deploying with ssh) is working fine. Only the test stage throws an error after 1 hour.
This is the error I receive on GitLab: ERROR: Job failed: execution took longer than 1h0m0s seconds
Has anyone experienced the same, or did lftp get an update so that I am running into an endless job?
I also checked whether files are being updated at all; the answer is no. I thought it could be an issue with "quit", but apparently nothing is being transferred at all.
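A debugging sketch that could narrow this down (it only uses documented lftp settings and is not a confirmed fix): make the mirror verbose and set explicit network timeouts, so the job fails quickly with a useful message instead of hanging for the full hour:
script:
  - apk update && apk add lftp
  # fail fast instead of silently retrying for an hour, and log every transfer
  - lftp -e "set ftp:ssl-allow no; set net:timeout 15; set net:max-retries 2; mirror -R --verbose dist/ ./; quit;" -u $USER_TEST,$PASSWORD_TEST $HOST_TEST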

I get no such file or directory when moving dist folder contents from gitlab to aws server instance

I am new to GitLab CI/CD and have been trying to fix this all day long, but nothing works. I am trying to move the dist folder generated by the GitLab runner after the build stage to a folder on an AWS EC2 instance. I am currently implementing a CI/CD pipeline using GitLab, and this is how my .gitlab-ci.yml looks:
# Node image for Docker on which the code will execute
image: node:latest

# These are the stages / tasks to perform in jobs
stages:
  - build
  - deploy

# caching for reuse
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - node_modules/

# This command is run before the execution of stages
before_script:
  - npm install

# Job one for making the build
build_testing_branch:
  stage: build
  script:
    - node --max_old_space_size=4096 --openssl-legacy-provider ./node_modules/@angular/cli/bin/ng build --configuration=dev-build --build-optimizer
  only: ['testing']

# Job two for deploying the build to the server
deploy_testing_branch:
  stage: deploy
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    # - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    # - apt-get update -y
    # - apt-get -y install rsync
  artifacts:
    paths:
      - dist/
  script:
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    - ssh -p22 ubuntu@$SERVER_IP "rm -r /usr/share/nginx/user-host/ui-user-host/dist/; mkdir /usr/share/nginx/user-host/ui-user-host/dist/"
    - scp -P22 -r $CI_PROJECT_DIR/dist/ ubuntu@$SERVER_IP:/usr/share/nginx/user-host/ui-user-host/dist/
  only: ['testing']
The build process works just fine with success confirmation, but the deployment stage fails because I get:
$ scp -P22 -r $CI_PROJECT_DIR/dist/ ubuntu@$SERVER_IP:/usr/share/nginx/user-host/ui-user-host/dist/
/builds/user-live/ui-user-host/dist: No such file or directory
Cleaning up project directory and file based variables
So, I don't understand why it's not able to locate the dist folder at that location. If I understand correctly, it should be available on the GitLab runner's filesystem. Is it because the scp command is not right?
EDIT:
I also tried with
- scp -P22 -r dist/ ubuntu@$SERVER_IP:/usr/share/nginx/user-host/ui-user-host/dist/
and
- scp -P22 -r dist/* ubuntu@$SERVER_IP:/usr/share/nginx/user-host/ui-user-host/dist/
but no luck!
You are building your dist folder in the build_testing_branch job and trying to access it in deploy_testing_branch. For this to work, you have to declare the dist folder as an artifact in the build_testing_branch job (since dist is created there), not in deploy_testing_branch, as shown in the sketch below.
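A minimal sketch of that change, reusing the build script from the question and leaving the rest of the file as it is:
build_testing_branch:
  stage: build
  script:
    - node --max_old_space_size=4096 --openssl-legacy-provider ./node_modules/@angular/cli/bin/ng build --configuration=dev-build --build-optimizer
  # dist/ is produced by this job, so it must be declared here for deploy_testing_branch to download it
  artifacts:
    paths:
      - dist/
  only: ['testing']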

gitlab ci pipeline failed deploy ftp

I am trying to build and push my React build folder with gitlab-ci.yml.
Build and test pass, but deploy fails with this error (if I run the same script locally, it works!):
lftp -e "mirror -R build/ ./test ; quit" -u $USERNAME,$PASSWORD $HOST
mirror: Access failed: /builds/myGitLab/myGitlabProjectName/build: No such file or directory
lftp: MirrorJob.cc:242: void MirrorJob::JobFinished(Job*): Assertion `transfer_count>0' failed.
/bin/bash: line 97: 275 Aborted (core dumped) lftp -e "mirror -R build/ ./test ; quit" -u $USERNAME,$PASSWORD $HOST
ERROR: Job failed: exit code 1
Here is my whole yml file:
image: node:13.8

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - npm install
    - npm run build

test:
  stage: test
  script:
    - yarn
    - yarn test

deploy:
  script:
    - apt-get update && apt-get install -y lftp
    - lftp -e "mirror -R build/ ./test ; quit" -u $USERNAME,$PASSWORD $HOST
I've got it! I was starting from a Docker image (node) to perform those 3 stages, build, test and deploy, but without success. When I tried doing an ls -a in the deploy stage, I realized that I didn't have the build folder, because the Docker container is recreated for each job. So I added artifacts to keep the build folder!
Once the job in the build stage is done, the build folder is kept as an artifact and stays readable for the next jobs, including deploy.
image: node:13.8

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - npm install
    - npm run build
  only:
    - master
  artifacts:
    paths:
      - build

test:
  stage: test
  script:
    - yarn
    - yarn test

deploy:
  stage: deploy
  before_script:
    - apt-get update -qq
  script:
    - apt-get install -y -qq lftp
    - ls -a
    - lftp -e "set ssl:verify-certificate false; mirror --reverse --verbose --delete build/ ./test2 ; quit" -u $USERNAME,$PASSWORD $HOST
  only:
    - master
I have a part of the answer, but I would like to do something better.
Actually, I understood what is going on: the Docker image is started fresh for every stage, so after the build stage there is no more build folder in the test and deploy stages.
I don't know how to persist the node image's workspace across all stages.
Any help will be welcome.
To make it work, I have put every script in one stage this way:
image: node:13.0.1

stages:
  - production

build:
  stage: production
  script:
    - npm install
    - npm run build
    - npm run test
    - apt-get update -qq && apt-get install -y -qq lftp
    - lftp -e "mirror -R build/ ./test ; quit" -u $USERNAME,$PASSWORD $HOST
  only:
    - master

Expo publish command getting stuck after tunnel connected stage

I am using GitLab CI to publish a React Native app using the Expo CLI, but the pipeline gets stuck at the "tunnel connected" stage. How do I fix this issue?
cache:
  paths:
    - node_modules/

stages:
  - deploy

before_script:
  - npm ci

expo-deployments:
  stage: deploy
  script:
    - echo "EXPO_USERNAME=""$EXPO_USERNAME" >> .env
    - echo "EXPO_PASSWORD=""$EXPO_PASSWORD" >> .env
    - npx expo login -u "$EXPO_USERNAME" -p "$EXPO_PASSWORD"
    - cat /proc/sys/fs/inotify/max_user_watches
    - echo fs.inotify.max_user_watches=524288 | tee -a /etc/sysctl.conf && sysctl -p
    - npx expo publish --non-interactive
Here is the pipeline screenshot.
Not really sure what might be the issue. Tried to search but didn't find any concrete solution specific to this problem.

GitLab FTP deploy - Job failed: execution took longer than 1h0m0s seconds

I'm new to GitLab and CI, but I want to deploy from a GitLab repo to FTP via lftp.
It gets to the lftp step, keeps running for 1 hour, and then returns:
ERROR: Job failed: execution took longer than 1h0m0s seconds
.gitlab-ci.yml
...
deploy:
  stage: deploy
  image: mwienk/docker-lftp:latest
  only:
    - dev
  script:
    - lftp -c "set ftp:ssl-allow no; open -u $FTP_USERNAME,$FTP_PASSWORD -p $FTP_PORT $FTP_HOST; mirror -Rev ./ gitlab --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"
...
I also tried:
script:
  - apt-get update -qq && apt-get install -y -qq lftp
It's the SFTP protocol; maybe lftp is asking for something in the background and does not continue? It's not uploading anything to the FTP server. Any advice?
Using SFTP you should try with port 22 and prefix your host like so: sftp://example.com
A very useful tool is also lftp's debug command and the --verbose flag for the mirror command. Just include them in your script like so:
lftp -c "set ftp:ssl-allow no; debug; open -u $FTP_USERNAME,$FTP_PASSWORD -p $FTP_PORT $FTP_HOST; mirror -Rev ./ gitlab --verbose --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"
Also you should try to install lftp with:
apt-get update -qq && apt-get install -y -qq lftp
since this version includes the library libgnutls for supporting secure connections.
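Putting the first suggestion into the command from the question, a sketch (sftp:// on port 22 is an assumption about the poster's server) would be:
lftp -c "debug; open -u $FTP_USERNAME,$FTP_PASSWORD -p 22 sftp://$FTP_HOST; mirror -Rev ./ gitlab --verbose --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"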
This is the configuration which worked on my setup deploying with FTP:
.gitlab-ci.yml
...
deploy:
  stage: deploy
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
    - lftp -u $FTP_USER,$FTP_PASS $HOST -e "mirror -e -R -p ./dist/ new/ ; quit"
    - echo "deployment complete"
  # specify the environment this job is using
  environment:
    name: staging
    url: http://example.com/new/
  # needs artifacts from the previous build
  dependencies:
    - build
lftp documentation: https://lftp.yar.ru/lftp-man.html
