upload submodule to ftp server using bitbucket pipeline - bitbucket-pipelines

Is there any way to upload git submodules to an FTP server using a Bitbucket pipeline?
I'm able to upload the main repo to the FTP server, but not its submodule.
The code I have used is as follows:
# This is a sample build configuration for Other.
# Check our guides at https://confluence.atlassian.com/x/5Q4SMw for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
# image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        script:
          - apt-get update
          - apt-get -qq install git-ftp
          - git submodule update --init --recursive
          - git ftp push --user $FTP_USERNAME --passwd $FTP_PASSWORD $FTP_SERVER
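One possible approach (a sketch only, not a verified fix): git-ftp only uploads files tracked by the repository it runs in, and a submodule appears there as a single gitlink entry rather than as files, so the submodule has to be pushed separately from inside its own directory. The path/to/submodule below is a placeholder:
          # Sketch: push the main repo first, then the submodule into a matching
          # subfolder on the server. "path/to/submodule" is a placeholder.
          - git ftp push --user $FTP_USERNAME --passwd $FTP_PASSWORD $FTP_SERVER
          - cd path/to/submodule
          # Use "init" for the submodule's very first upload, "push" afterwards.
          - git ftp init --user $FTP_USERNAME --passwd $FTP_PASSWORD "$FTP_SERVER/path/to/submodule"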

Related

.gitlab-ci.yml - how to sync to ftp on repo root

I'm using this ci.yml file to sync my data to an external ftp server.
image: ubuntu:18.04
before_script:
  - apt-get update -qy
  - apt-get install -y lftp
build:
  script:
    # Sync to FTP
    - lftp -c "set ssl:verify-certificate no; open $FTP_LOCATION; user $FTP_USERNAME $FTP_PASSWORD; mirror -Rev / Apps/ --ignore-time --parallel=10; "
Since I'm not syncing a single folder but rather the repo root, it appears to try to sync the whole Ubuntu image to the FTP server.
How can I sync just the files in the repo root without syncing the whole Ubuntu image?
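A likely cause, reading the lftp invocation: with the reverse mirror flag -R, the first path is the local source, so / points at the container's filesystem root rather than the repository checkout. A minimal adjustment (keeping Apps/ as the remote target from the question) is to mirror the checkout directory instead:
build:
  script:
    # With -R (reverse mirror) the first path is the *local* source;
    # "./" is the repo checkout the job starts in, not the whole container.
    - lftp -c "set ssl:verify-certificate no; open $FTP_LOCATION; user $FTP_USERNAME $FTP_PASSWORD; mirror -Rev ./ Apps/ --ignore-time --parallel=10"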

gitlab-ci.yml fatal: unable to access http://gitlab-ci-token

I am new to gitlab CI/CD and I'm struggling to figure this out.
All I want is that, when I push to the dev branch, my React app is built and the ./build folder is pushed over SSH to my dev server.
Here is what I did so far, including a screenshot of the error message I get.
This is my gitlab-ci.yml
image: node:latest
cache:
  paths:
    - node_modules/

build_dev:
  stage: build
  environment: Development
  only:
    - dev
  script:
    - ls
    - npm install
    - npm run build
  artifacts:
    paths:
      - build/
      - ecosystem.config.js

deploy_dev:
  stage: deploy
  environment: Development
  only:
    - dev
  script:
    - rsync -r -a -v -e ssh --delete "./build" root@dev.teledirectasia.com:/var/www/gitlab/${CI_PROJECT_NAME}
    - rsync -r -a -v -e ssh --delete "./ecosystem.config.js" root@dev.teledirectasia.com:/var/www/gitlab/${CI_PROJECT_NAME}/
    - ssh root@dev.teledirectasia.com "cd /var/www/gitlab/${CI_PROJECT_NAME} && pm2 start ecosystem.config.js"
I don't know why I am getting this output with the job failing.
This is a DNS problem. Your runner cannot resolve the hostname of the GitLab server - gitlab.teledirectgroup.com. Did you set the GitLab hostname in your local workstation's hosts file manually, or did you set it up in a DNS server as a 'proper' hostname?
If you set up the hostname in a DNS server then the solution may be as simple as adding the DNS server to /etc/resolv.conf on the runner. However, if you just set the GitLab hostname in your workstation's hosts file then you'll need to set it in the runner's /etc/hosts file, too. It's hard to say what the exact solution is without knowing how you set up the GitLab hostname in the first place.
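For example, if the hostname only lives in your workstation's hosts file, a minimal fix (the IP address below is a placeholder) is to add the same entry on the machine that runs the runner:
# Run on the gitlab-runner host; replace 192.0.2.10 with the GitLab server's real IP.
echo "192.0.2.10 gitlab.teledirectgroup.com" >> /etc/hosts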

How can this be solved in GitLab?
Using git over SSH, I haven't found a good way to push changes to a submodule from a shell runner in GitLab CI. The pipeline always fails and prints this error.
ERROR PIPELINE JOB
In the local repo, the project's .git/config file contains the line with the URL, but the pipeline has no login to use with it.
.git/config
Any help or a reference walkthrough to get through this troubleshooting challenge would be appreciated!
This is my code in the ".gitlab-ci.yml" file:
variables:
  TEST_VAR: "Update Git Submodule in all Etecnic projects."

job1:
  variables: {}
  script:
    - echo "$TEST_VAR"

job2:
  variables: {}
  script:
    - echo "OK" >> exito.txt
    - git add --all
    - git commit -m "Update Submodule"
    - git push origin HEAD:master
Versions:
GitLab:
gitlab-ce is already the newest version (15.6.0-ce.0).
Runner:
Version: 15.5.1
Git revision: 7178588d
Git branch: 15-5-stable
GO version: go1.18.7
Built: 2022-11-11T09:45:25+0000
OS/Arch: linux/amd64
Thanks so much for your attention.
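One common way to make a push like this work from a CI job (a sketch only; PROJECT_ACCESS_TOKEN is an assumed masked CI/CD variable holding a project access token with write_repository scope, and the project path is a placeholder) is to push over HTTPS with the token in the URL:
job2:
  variables: {}
  before_script:
    # Commit identity for the CI job plus an authenticated push URL.
    # PROJECT_ACCESS_TOKEN and the project path below are placeholders.
    - git config user.email "ci@example.com"
    - git config user.name "GitLab CI"
    - git remote set-url --push origin "https://oauth2:${PROJECT_ACCESS_TOKEN}@gitlab.example.com/group/project.git"
  script:
    - echo "OK" >> exito.txt
    - git add --all
    - git commit -m "Update Submodule"
    - git push origin HEAD:master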

Change directory in pipeline - bitbucket

My folder structure:
- backend
- frontend
My React app is placed in the frontend directory.
image: node:10.15.3
pipelines:
  default:
    - step:
        caches:
          - node
        script: # Modify the commands below to build your repository.
          - yarn install
          - yarn test
          - yarn build
This one fails. How do I go to the frontend directory to run this?
Bitbucket Pipelines run on a Bitbucket Cloud server, so, just as with a local command-line interface, you can navigate using commands like cd and mkdir.
image: node:10.15.3
pipelines:
  default:
    - step:
        caches:
          - node
        script: # Modify the commands below to build your repository.
          - cd frontend
          - yarn install
          - yarn test
          - yarn build
          - cd ../ # if you need to go back
          # Then you will probably need to deploy your app, so you can use:
          - apt-get update
          - apt-get -qq install git-ftp
          - git ftp push --user $FTP_USERNAME --passwd $FTP_PASSWORD $FTP_HOST
If you need to check the syntax of your yml file, you can use Bitbucket's online Pipelines validator.

Is it possible to transfer caches/artifacts between pipelines?

In Gitlab, is it possible to transfer caches or artifacts between pipelines?
I am building a library in one pipeline and I want to build an application with the library in another pipeline.
Yes, it is possible. There are a couple of options to achieve this:
Using Job API and GitLab Premium
The first option is to use the Job API to fetch artifacts. This method is available only if you have GitLab Premium. In this option, you use CI_JOB_TOKEN with the Job API to fetch artifacts from another pipeline. Read more here.
Here is a quick example of a job you would put in your application pipeline configuration:
build_application:
  image: debian
  stage: build
  script:
    - apt update && apt install -y unzip
    - curl --location --output artifacts.zip "https://gitlab.example.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/master/download?job=build&job_token=$CI_JOB_TOKEN"
    - unzip artifacts.zip
Using S3
The second option is to use third-party intermediate storage, for instance AWS S3. To pass artifacts, follow the example below.
In your library pipeline configuration create the following job:
variables:
  TARGET_PROJECT_TOKEN: [get token from Settings -> CI/CD -> Triggers]
  TARGET_PROJECT_ID: [get project id from project main page]

publish-artifact:
  image: "python:latest"
  stage: publish
  before_script:
    - pip install awscli
  script:
    - aws s3 cp output/artifact.zip s3://your-s3-bucket-name/artifact.zip.${CI_JOB_ID}
    - "curl -X POST -F token=${TARGET_PROJECT_TOKEN} -F ref=master -F variables[ARTIFACT_ID]=${CI_JOB_ID} https://gitlab.com/api/v4/projects/${TARGET_PROJECT_ID}/trigger/pipeline"
Then, in your application pipeline configuration, retrieve the artifact from the S3 bucket:
fetch-artifact-from-s3:
  image: "python:latest"
  stage: prepare
  artifacts:
    paths:
      - artifact/
  before_script:
    - pip install awscli
  script:
    - mkdir artifact
    - aws s3 cp s3://your-s3-bucket-name/artifact.zip.${ARTIFACT_ID} artifact/artifact.zip
  only:
    variables:
      - $ARTIFACT_ID
Once the fetch-artifact-from-s3 job completes, your artifact is available in the artifact/ directory and can be consumed by other jobs within the application pipeline.
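For instance, a later job could declare the fetch job as a dependency and unpack the library; the job name, image, and build command here are placeholders:
build_application:
  image: debian
  stage: build
  dependencies:
    - fetch-artifact-from-s3
  script:
    - apt update && apt install -y unzip
    # artifact/artifact.zip is passed along as an artifact of the fetch job;
    # the unzip target and build command are placeholders.
    - unzip artifact/artifact.zip -d library/
    - ./build-with-library.sh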

Pipeline in bitbucket fails with composer error

I'm trying Pipelines in Bitbucket. This is the default configuration I am using:
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: php:5.6.36
pipelines:
  default:
    - step:
        caches:
          - composer
        script:
          - apt-get update && apt-get install -y unzip
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - vendor/bin/phpunit
I am getting this error
+ composer install
Do not run Composer as root/super user! See https://getcomposer.org/root for details
Composer could not find a composer.json file in /opt/atlassian/pipelines/agent/build
To initialize a project, please create a composer.json file as described in the https://getcomposer.org/ "Getting Started" section
This happens even when I created composer.json (and the full path as well) in /opt/atlassian/pipelines/agent/build.
By the way, why is the tool looking for composer.json in that path? I have Composer installed in /home/username/.composer.
What did I do wrong?
Use another docker container for deployment instead of image: php:5.6.36.
There is an official composer container: https://hub.docker.com/_/composer/
It's based on php:7-alpine3.7; I hope PHP 7 does the job with your application too.
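A minimal sketch of the same pipeline on the official image (assuming composer.json is committed to the repository root, which the error suggests it currently is not):
# Sketch: the official Composer image already ships the composer binary,
# so the manual installer step is no longer needed.
image: composer:latest

pipelines:
  default:
    - step:
        caches:
          - composer
        script:
          # composer.json must live in the repository root; the clone ends up
          # in /opt/atlassian/pipelines/agent/build on the build container.
          - composer install
          - vendor/bin/phpunit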
