I'm new to GitLab and CI, and I want to deploy from a GitLab repo to FTP via lftp.
The job gets to the lftp command, keeps running for an hour, and then returns:
ERROR: Job failed: execution took longer than 1h0m0s seconds
.gitlab-ci.yml
...
deploy:
  stage: deploy
  image: mwienk/docker-lftp:latest
  only:
    - dev
  script:
    - lftp -c "set ftp:ssl-allow no; open -u $FTP_USERNAME,$FTP_PASSWORD -p $FTP_PORT $FTP_HOST; mirror -Rev ./ gitlab --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"
...
I also tried:
script:
  - apt-get update -qq && apt-get install -y -qq lftp
The server uses SFTP; maybe lftp is waiting for some input in the background and never continues? Nothing gets uploaded to the server. Any advice?
Since you are using SFTP, you should try port 22 and prefix your host like so: sftp://example.com
The lftp debug command and the --verbose flag for the mirror command are also very useful. Just include them in your script like so:
lftp -c "set ftp:ssl-allow no; debug; open -u $FTP_USERNAME,$FTP_PASSWORD -p $FTP_PORT $FTP_HOST; mirror -Rev ./ gitlab --verbose --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"
You should also try installing lftp with:
apt-get update -qq && apt-get install -y -qq lftp
since that build includes the libgnutls library, which is needed for secure connections.
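An untested sketch combining those suggestions for an SFTP target (the sftp:// prefix, port 22, debug and --verbose are the only changes to the question's job; sftp:auto-confirm is an extra assumption of mine that accepts the host key non-interactively, since waiting on that prompt is a common reason a job hangs until the timeout):
deploy:
  stage: deploy
  image: mwienk/docker-lftp:latest
  only:
    - dev
  script:
    # sftp:// prefix and port 22 instead of plain FTP; debug prints the protocol exchange
    - lftp -c "set sftp:auto-confirm yes; debug; open -u $FTP_USERNAME,$FTP_PASSWORD -p 22 sftp://$FTP_HOST; mirror -Rev ./ gitlab --verbose --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"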
This is the configuration that worked on my setup when deploying over FTP:
.gitlab-ci.yml
...
deploy:
  stage: deploy
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
    - lftp -u $FTP_USER,$FTP_PASS $HOST -e "mirror -e -R -p ./dist/ new/ ; quit"
    - echo "deployment complete"
  # specify the environment this job is using
  environment:
    name: staging
    url: http://example.com/new/
  # needs artifacts from the previous build
  dependencies:
    - build
lftp documentation: https://lftp.yar.ru/lftp-man.html
Related
I am new to GitLab CI/CD and have been trying to fix this all day, but nothing works. I am trying to copy the dist folder generated by the GitLab runner after the build stage to a folder on an AWS EC2 instance. I am implementing a CI/CD pipeline with GitLab, and this is what my .gitlab-ci.yml looks like:
# Node image for Docker on which the code will execute
image: node:latest

# These are the stages / tasks to perform in jobs
stages:
  - build
  - deploy

# caching for reuse
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - node_modules/

# This command is run before the execution of stages
before_script:
  - npm install

# Job one: make the build
build_testing_branch:
  stage: build
  script:
    - node --max_old_space_size=4096 --openssl-legacy-provider ./node_modules/@angular/cli/bin/ng build --configuration=dev-build --build-optimizer
  only: ['testing']

# Job two: deploy the build to the server
deploy_testing_branch:
  stage: deploy
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    # - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    # - apt-get update -y
    # - apt-get -y install rsync
  artifacts:
    paths:
      - dist/
  script:
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    - ssh -p22 ubuntu@$SERVER_IP "rm -r /usr/share/nginx/user-host/ui-user-host/dist/; mkdir /usr/share/nginx/user-host/ui-user-host/dist/"
    - scp -P22 -r $CI_PROJECT_DIR/dist/ ubuntu@$SERVER_IP:/usr/share/nginx/user-host/ui-user-host/dist/
  only: ['testing']
The build stage completes successfully, but the deploy stage fails with:
$ scp -P22 -r $CI_PROJECT_DIR/dist/ ubuntu@$SERVER_IP:/usr/share/nginx/user-host/ui-user-host/dist/
/builds/user-live/ui-user-host/dist: No such file or directory
Cleaning up project directory and file based variables
So I don't understand why it cannot locate the dist folder at that location. If I understand correctly, it should be available on the GitLab runner's filesystem. Is the scp command wrong?
EDIT:
I also tried with
- scp -P22 -r dist/ ubuntu@$SERVER_IP:/usr/share/nginx/user-host/ui-user-host/dist/
and
- scp -P22 -r dist/* ubuntu@$SERVER_IP:/usr/share/nginx/user-host/ui-user-host/dist/
but no luck!
You are building the dist folder in the build_testing_branch job and trying to access it in deploy_testing_branch. For this to work, you have to declare the dist folder as an artifact in the build_testing_branch job (since dist is created there), not in deploy_testing_branch; see the sketch below.
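A minimal sketch of that change, assuming the rest of the pipeline stays as posted (the artifacts block simply moves from the deploy job to the build job; the deploy job's before_script and the remote cleanup step are omitted here for brevity):
build_testing_branch:
  stage: build
  script:
    - node --max_old_space_size=4096 --openssl-legacy-provider ./node_modules/@angular/cli/bin/ng build --configuration=dev-build --build-optimizer
  # publish dist/ so that jobs in later stages can download it
  artifacts:
    paths:
      - dist/
  only: ['testing']

deploy_testing_branch:
  stage: deploy
  # no artifacts block here; this job automatically downloads artifacts from earlier stages
  script:
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    - scp -P22 -r $CI_PROJECT_DIR/dist/ ubuntu@$SERVER_IP:/usr/share/nginx/user-host/ui-user-host/dist/
  only: ['testing']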
I'm trying to publish an ASP.NET MVC 5 project via FTP using GitLab CI/CD.
I configured the runner as described at https://medium.com/@gabriel.faraday.barros/gitlab-ci-cd-with-net-framework-39220808b18f
I'm having difficulty with the last step, which is to take the generated publish output and send it to another server via FTP: since the runner executes scripts with PowerShell, the lftp step generates an error in the build.
Can you help me?
Here is my YAML:
variables:
  NUGET_PATH: 'C:\Tools\Nuget\nuget.exe'
  MSBUILD_PATH: 'C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\msbuild.exe'

build_job:
  stage: build
  cache:
    key: build-package
    policy: push
  script:
    - echo "*****Nuget Restore*****"
    - '& "$env:NUGET_PATH" restore'
    - echo "*****Build Solution*****"
    - '& "$env:MSBUILD_PATH" /p:Configuration=Release /clp:ErrorsOnly'
    - '& "$env:MSBUILD_PATH" FisioSystem.MVC\FisioSystem.MVC.csproj /p:DeployOnBuild=True /p:Configuration=Release /P:PublishProfile=Publish_FisioSystems.pubxml'
    - echo "*****Install lftp*****"
    - apt-get update -qq && apt-get install -y -qq lftp
    - echo "*****Upload file to ftp*****"
    - lftp -c "set ftp:ssl-allow no; open -u $FTP_USERNAME,$FTP_PASSWORD $FTP_HOST; mirror -R C:/Deploy/ ./../manager --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/; quit"
  artifacts:
    name: "$CI_JOB_NAME-$CI_COMMIT_REF_NAME"
    when: always
    paths:
      - ./FisioSystem.MVC/bin/release
    expire_in: 1 week
  only:
    - master
Thanks!
If your GitLab runner is a custom Windows machine, install lftp on it manually. The command will then be available in your pipeline.
After you have installed lftp on the runner, just remove these lines from the pipeline:
- echo "*****Install lftp*****"
- apt-get update -qq && apt-get install -y -qq lftp
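The script section would then end with just the upload step, i.e. the same lftp command as in the question (a sketch, with nothing else changed):
    - echo "*****Upload file to ftp*****"
    - lftp -c "set ftp:ssl-allow no; open -u $FTP_USERNAME,$FTP_PASSWORD $FTP_HOST; mirror -R C:/Deploy/ ./../manager --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/; quit"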
I am trying to build and push my React build folder with gitlab-ci.yml.
Build and test pass, but deploy fails with the error below. If I run the same script locally, it works!
lftp -e "mirror -R build/ ./test ; quit" -u $USERNAME,$PASSWORD $HOST
mirror: Access failed: /builds/myGitLab/myGitlabProjectName/build: No such file or directory
lftp: MirrorJob.cc:242: void MirrorJob::JobFinished(Job*): Assertion `transfer_count>0' failed.
/bin/bash: line 97: 275 Aborted (core dumped) lftp -e "mirror -R build/ ./test ; quit" -u $USERNAME,$PASSWORD $HOST
ERROR: Job failed: exit code 1
Here is my whole yml file:
image: node:13.8

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - npm install
    - npm run build

test:
  stage: test
  script:
    - yarn
    - yarn test

deploy:
  script:
    - apt-get update && apt-get install -y lftp
    - lftp -e "mirror -R build/ ./test ; quit" -u $USERNAME,$PASSWORD $HOST
I've got it! I was starting from a Docker image (node) to run the three stages (build, test and deploy), without success, but when I tried an ls -a in the deploy stage I realized that I didn't have the build folder. Because the container is recreated for each job, I added artifacts to keep the build folder!
Once the job in the build stage is done, the build folder is kept as an artifact and is available to the following jobs, including deploy.
image: node:13.8

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - npm install
    - npm run build
  only:
    - master
  artifacts:
    paths:
      - build

test:
  stage: test
  script:
    - yarn
    - yarn test

deploy:
  stage: deploy
  before_script:
    - apt-get update -qq
  script:
    - apt-get install -y -qq lftp
    - ls -a
    - lftp -e "set ssl:verify-certificate false; mirror --reverse --verbose --delete build/ ./test2 ; quit" -u $USERNAME,$PASSWORD $HOST
  only:
    - master
I have part of the answer, but I would like to do something better.
Actually, I understand what is going on: a fresh container is started for every job, so after the build stage the test and deploy jobs no longer have the build folder.
I don't know how to persist the build output from the node image across stages.
Any help is welcome.
To make it work, I put every script into one stage, this way:
image: node:13.0.1

stages:
  - production

build:
  stage: production
  script:
    - npm install
    - npm run build
    - npm run test
    - apt-get update -qq && apt-get install -y -qq lftp
    - lftp -e "mirror -R build/ ./test ; quit" -u $USERNAME,$PASSWORD $HOST
  only:
    - master
I am running gitlab-runner on my server and am not using Docker for deployment. I am trying to deploy to a remote server by SSH-ing into it. This is my .gitlab-ci.yml file:
stages:
  - deploy

pre-staging:
  stage: deploy
  environment:
    name: Gitlab CI/CD for pre-staging deployment
    url: "$REMOTE_SERVER"
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - mkdir -p ~/.ssh
    - eval $(ssh-agent -s)
    - 'echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - ssh-add <(echo "$REMOTE_PVT_KEY")
    - ssh ubuntu@"$REMOTE_SERVER" "cd deployment_container; npm --version; rm -rf static; source deploy.sh"
    - echo "Deployment completed"
  only:
    - merge_requests
    - pre-staging
  tags:
    - auto-deploy
My pipeline fails with the error npm: command not found. npm is set up correctly on the server I SSH into. I am trying to deploy a Django/React application.
I have already tried using image: node:latest.
npm is installed using nvm.
Can somebody help me resolve this?
Try and replace the ssh step with:
ssh ubuntu@"$REMOTE_SERVER" "pwd; cd deployment_container; echo $PATH"
If this "deployment" (which won't do anything) completes, it means npm is not accessible in the default PATH defined in the SSH session.
In this case, we have to give all users access to npm by executing the command below:
n=$(which node);n=${n%/bin/node}; chmod -R 755 $n/bin/*; sudo cp -r $n/{bin,lib,share} /usr/local
This resolved my npm: command not found issue.
You can try this one.
stages:
  - build
  - deploy

deploy-prod:
  image: node:12.13.0-alpine
  stage: deploy
  script:
    - npm i -g firebase-tools
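The snippet stops after installing the CLI; presumably a deploy step follows, for example (the FIREBASE_TOKEN variable is an assumption on my part, a CI token generated with firebase login:ci and stored as a CI/CD variable):
    # assumed follow-up step: deploy with a CI token stored in FIREBASE_TOKEN
    - firebase deploy --token "$FIREBASE_TOKEN"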
I want to deploy to an FTP server using a GitLab pipeline.
I tried this code:
deploy: # You can name your task however you like
  stage: deploy
  only:
    - master

deploy:
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
But I get an error message. What is the best way to do this? :)
Add the following code to your .gitlab-ci.yml file:
variables:
  HOST: "example.com"
  USERNAME: "yourUserNameHere"
  PASSWORD: "yourPasswordHere"

deploy:
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
    - lftp -c "set ftp:ssl-allow no; open -u $USERNAME,$PASSWORD $HOST; mirror -Rnev ./public_html ./ --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"
  only:
    - master
The above code pushes the public_html folder of your GitLab repository to the root of your FTP server.
Just update the HOST, USERNAME and PASSWORD variables with your FTP credentials, commit this file to your GitLab repository, and you are good to go.
Now, whenever you make changes on your master branch, GitLab will automatically push them to your remote FTP server.
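An alternative (my suggestion, not part of this answer) is to keep the credentials out of the file, as the first answer above does, by defining them as CI/CD variables in the GitLab project settings; the job itself then needs no variables block:
deploy:
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
    # USERNAME, PASSWORD and HOST are defined under Settings > CI/CD > Variables instead of in this file
    - lftp -c "set ftp:ssl-allow no; open -u $USERNAME,$PASSWORD $HOST; mirror -Rnev ./public_html ./ --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"
  only:
    - master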
Got it :)
image: mwienk/docker-git-ftp

deploy_all:
  stage: deploy
  script:
    - git config git-ftp.url "ftp://xx.nl:21/web/new.xxx.nl/public_html"
    - git config git-ftp.password "xxx"
    - git config git-ftp.user "xxxx"
    - git ftp init
    # - git ftp push -m "Add new content"
  only:
    - master
Try this. There's a CI Lint tool in GitLab that helps with formatting errors; the linter was flagging an error caused by the duplicate deploy statement.
deploy:
  stage: deploy
  only:
    - master
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
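This script only installs lftp; presumably the actual upload command follows, e.g. the same lftp mirror line as in the earlier answer:
    - lftp -c "set ftp:ssl-allow no; open -u $USERNAME,$PASSWORD $HOST; mirror -Rnev ./public_html ./ --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"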
I use this:
deploy:
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
    - lftp -c "set ftp:ssl-allow no; open -u $FTP_USERNAME,$FTP_PASSWORD $FTP_HOST; mirror -v ./ $FTP_DESTINATION --reverse --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"
  environment:
    name: production
  only:
    - master