Gitlab runner does not seem to load docker image - gitlab

For an old codebase, we're trying to move from simply uploading changes through FTP to using GitLab CI/CD. However, none of us have extensive GitLab experience, and I've been trying to set up the deployment by following this guide:
https://savjee.be/2019/04/gitlab-ci-deploy-to-ftp-with-lftp/
I'm running a gitlab-runner on my own Mac right now; however, it seems like the docker image in my yml file is not loaded correctly. When using the yml from the article:
image: ubuntu:18.04

before_script:
  - apt-get update -qy
  - apt-get install -y lftp

build:
  script:
    # Sync to FTP
    - lftp -e "open ftp.mywebhost.com; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose --delete local-folder/ destination-folder/; bye"
It tells me apt-get: command not found. I've tried with apk-get as well, but no difference. I've tried to find a different docker image that has lftp installed ahead of time, but then I just get lftp: command not found:
image: minidocks/lftp:4

before_script:
  # - apt-get update -qy
  # - apt-get install -y lftp

build:
  script:
    - lftp -e "open ftp.mywebhost.com; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose --delete local-folder/ destination-folder/; bye"
    - echo 'test this'
If I comment out the lftp/apt-get bits, however, I do get to the echo command (and it does work).
I can't seem to find any reason for this when searching online. Apologies if this is a duplicate question or I've just been looking in the wrong places.

From your question, it seems you are executing your jobs on a gitlab-runner that uses the shell executor.
The shell executor does not handle the image keyword, as shown in the runner compatibility matrix.
Moreover, since you want your jobs to run inside Docker containers, you need the docker executor anyway.
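A minimal sketch of how such a runner could be registered (the URL, token and description below are placeholders, not values from your setup):

gitlab-runner register \
  --url "https://gitlab.com/" \
  --registration-token "YOUR_TOKEN" \
  --executor docker \
  --docker-image ubuntu:18.04 \
  --description "docker runner on my Mac"

With the docker executor, the image: ubuntu:18.04 line from the article's yml is actually pulled and used, so apt-get is available inside the job.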

Related

go command not found which is needed for installing CUE

I am trying to install go in a docker image with the below commands:
- cd /usr/local
- wget -L "https://golang.org/dl/go1.20.1.linux-amd64.tar.gz"
- tar -xzvf go1.20.1.linux-amd64.tar.gz
- rm -f go1.20.1.linux-amd64.tar.gz
- echo "export PATH=$PATH:/usr/local/go/bin" >> /etc/profile
- source /etc/profile
- /usr/local/go/bin version
- /usr/local/go/bin install cuelang.org/go/cmd/cue#latest
These commands are passed in a .gitlab-ci.yml in one of the stages, and it throws an error at the end saying go not found when I try to check the version.
What am I missing here? My ultimate goal is to install CUE, but I want to install it via golang.
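For reference, a sketch of how those script lines could look once the go binary is called by its full name (/usr/local/go/bin/go rather than the /usr/local/go/bin directory) and go install uses the @latest suffix; the package-install step and image assumptions below are mine, not taken from the question:

script:
  - apt-get update -qy && apt-get install -y wget ca-certificates   # assumes a Debian/Ubuntu-based image
  - cd /usr/local
  - wget -q "https://golang.org/dl/go1.20.1.linux-amd64.tar.gz"
  - tar -xzf go1.20.1.linux-amd64.tar.gz
  - rm -f go1.20.1.linux-amd64.tar.gz
  - export PATH=$PATH:/usr/local/go/bin        # exported for this job's shell; /etc/profile is not needed here
  - go version                                 # the executable is /usr/local/go/bin/go
  - go install cuelang.org/go/cmd/cue@latest
  - export PATH=$PATH:$(go env GOPATH)/bin     # go install puts cue under GOPATH/bin
  - cue version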

Deploying React app on Debian 11 with serve throws unexplainable error in the CI/CD Pipeline on gitlab

The following is the pipeline we are using to deploy the app to the Debian server.
stages:
  - deploy

deploy-job:       # This job runs in the deploy stage.
  stage: deploy   # It only runs when *both* jobs in the test stage complete successfully.
  environment: production
  image: node:latest
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    - mkdir ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_SERVER_HOSTKEYS" > ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - echo "Deploying application..."
    - ssh $SSH_USER@$SSH_IP "cd $PROEJCT_PATH/$PROJECT_DIRECTORY_NAME && serve -s build"
    - echo "Application successfully deployed."
But this throws the following error message:
file:///usr/local/lib/node_modules/serve/build/main.js:169
    const ipAddress = request.socket.remoteAddress?.replace("::ffff:", "") ?? "unknown";
                                                   ^
SyntaxError: Unexpected token '.'
    at Loader.moduleStrategy (internal/modules/esm/translators.js:133:18)
    at async link (internal/modules/esm/module_job.js:42:21)
We had the same issue after installing Node on the Debian server, but updating it with nvm install 19.4.0 fixed the problem.
The command serve -s build did then work on the server, but it does not work in the pipeline.
We are discussing the possibility that the container is using its own environment, but we are not sure about that assumption.
Can someone help and explain the problem?
The error comes from the "new" optional chaining JavaScript feature (?.), which is supported from Node v14. However, the default Debian 11 Node version is v12, so you should update Node to a newer version:
sudo apt remove node npm nodejs  # remove the old node/npm
sudo snap install node --classic # see https://snapcraft.io/node
or instead use nvm, as you have done, to install a specific Node version.
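To see which environment is at fault, a couple of extra script lines (a sketch reusing the $SSH_USER/$SSH_IP variables and the ssh setup from the pipeline above) could be dropped into the deploy-job:

    - node --version                            # Node inside the node:latest CI container
    - ssh $SSH_USER@$SSH_IP "node --version"    # Node on the Debian server that actually runs serve

Since serve is started over ssh, it is the server-side Node that must be v14 or newer; the node:latest image used by the job does not affect it.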

How to fix: GitLab pipeline - failed to start

Introduction: I am new to creating GitLab pipelines.
Details:
The type of executor I am using for the runner is: Shell.
(I am not sure if this can be used, or whether a new runner needs to be registered with a different executor.)
Gitlab-runner 13.11.0
On trying to execute the below code, which I have written in the .gitlab-ci.yml file, it throws an error.
image: "ruby:2.6"
test:
script:
- sudo apt-get update -qy
- sudo apt-get -y install unzip zip
- gem install cucumber
- gem install rspec-expectations
##TODO grep on all folders searching for .feature files
- find . -name "*.feature"
The error I am receiving is as follows.
Output from GitLab pipeline execution:
Can I request you to please help me fix this and run this successfully?
Thanks.

gitlab.com CI - build a NodeJS app using docker in docker

I'm currently facing a problem with the gitlab.com shared runners. What I'm trying to achieve in my pipeline is:
- NPM install and using grunt to make some uncss, minimize and compress tasks
- Cleaning up
- Building a docker container with the app included
- Moving the container to gitlab registry
Unfortunately I haven't been able to get this running for a long time! I've tried a lot of different gitlab-ci configs, without success. The problem is that I have to use "image: docker:latest" to have all the docker tools available, but then I don't have node and grunt installed in the container.
The other way around does not work either: I tried to use image: centos:latest and install docker manually, but that also fails, as I always just get Failed to get D-Bus connection: Operation not permitted.
Does anyone have more experience with gitlab-ci running docker build commands on a shared docker runner?
Any help is highly appreciated!!
Thank you
Jannik
Gitlab can be a bit tricky :) I don't have an example based on CentOS, but I have one based on Ubuntu if that helps you. Here is a copy-paste of a working gitlab pipeline of mine which uses gulp (you should easily be able to adjust it to work with your grunt).
The .gitlab-ci.yml looks like this (adjust the CONTAINER... variables at the beginning):
variables:
  CONTAINER_TEST_IMAGE: registry.gitlab.com/psono/psono-client:$CI_BUILD_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/psono/psono-client:latest

stages:
  - build

docker-image:
  stage: build
  image: ubuntu:16.04
  services:
    - docker:dind
  variables:
    DOCKER_HOST: 'tcp://docker:2375'
  script:
    - sh ./var/build-ubuntu.sh
    - docker info
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" registry.gitlab.com
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
In addition I have this "./var/build-ubuntu.sh", which you can adjust a bit according to your needs: replace some Ubuntu dependencies or switch gulp for grunt as needed:
#!/usr/bin/env bash
apt-get update && \
apt-get install -y libfontconfig zip nodejs npm git apt-transport-https ca-certificates curl openssl software-properties-common && \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" && \
apt-get update && \
apt-get install -y docker-ce && \
ln -s /usr/bin/nodejs /usr/bin/node && \
npm install && \
node --version && \
npm --version
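Another way to split the same work, in case it helps: run the npm/grunt steps in a node image and the docker build in a docker:latest job with the dind service, passing the build output along as artifacts. This is only a sketch; the node version, the grunt invocation and the dist/ output path are assumptions you would need to adjust:

stages:
  - build
  - package

node-build:
  stage: build
  image: node:16                 # assumption: any node image with npm works
  script:
    - npm install
    - npx grunt                  # assumption: grunt is a local dependency with a default task
  artifacts:
    paths:
      - dist/                    # assumption: wherever your grunt tasks write their output

docker-image:
  stage: package
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_HOST: 'tcp://docker:2375'
  script:
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" registry.gitlab.com
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE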

docker - cannot find aws credentials in container although they exist

Running the following docker command on Mac works, but on Linux (running Ubuntu) it cannot find the AWS CLI credentials. It returns the following message: Unable to locate credentials
Completed 1 part(s) with ... file(s) remaining
The command runs an image, mounts a data volume, copies a file from an S3 bucket, and then starts the bash shell in the docker container.
sudo docker run -it --rm -v ~/.aws:/root/.aws username/docker-image sh -c 'aws s3 cp s3://bucketname/filename.tar.gz /home/emailer && cd /home/emailer && tar zxvf filename.tar.gz && /bin/bash'
What am I missing here?
This is my Dockerfile:
FROM ubuntu:latest
#install node and npm
RUN apt-get update && \
apt-get -y install curl && \
curl -sL https://deb.nodesource.com/setup | sudo bash - && \
apt-get -y install python build-essential nodejs
#install and set-up aws-cli
RUN sudo apt-get -y install \
git \
nano \
unzip && \
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && \
unzip awscli-bundle.zip
RUN sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /home/emailer && cp -a /tmp/node_modules /home/emailer/
Mounting $HOME/.aws/ into the container should work. Make sure to mount it as read-only.
It is also worth mentioning that if you have several profiles in your ~/.aws/config, you must also provide the AWS_PROFILE=somethingsomething environment variable, e.g. via docker run -e AWS_PROFILE=xxx ..., otherwise you'll get the same error message (unable to locate credentials).
Update: Added example of the mount command
docker run -v ~/.aws:/root/.aws …
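Putting both hints together, a sketch of the full invocation (the profile name is a placeholder, the image is the one from the question, and aws sts get-caller-identity is just a quick way to confirm the credentials are picked up):

docker run --rm \
  -e AWS_PROFILE=xxx \
  -v ~/.aws:/root/.aws:ro \
  username/docker-image \
  aws sts get-caller-identity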
You can use environment variables instead of copying the ~/.aws/credentials and config files into the container for aws-cli:
docker run \
  -e AWS_ACCESS_KEY_ID=AXXXXXXXXXXXXE \
  -e AWS_SECRET_ACCESS_KEY=wXXXXXXXXXXXXY \
  -e AWS_DEFAULT_REGION=us-west-2 \
  username/docker-image
Ref: AWS CLI Doc
What do you see if you run
ls -l ~/.aws/config
within your docker instance?
The only solution that worked for me in this case is:
volumes:
  - ${USERPROFILE}/.aws:/root/.aws:ro
There are a few things that could be wrong. One, as mentioned previously, you should check whether your ~/.aws/config file is set up accordingly. If not, you can follow this link to set it up. Once you have done that, you can map the ~/.aws folder using the -v flag on docker run.
If your ~/.aws folder is mapped correctly, check the permissions on the files under ~/.aws so that they can be read by whatever process is trying to access them. If you are running as the user process, simply running chmod 444 ~/.aws/* should do the trick; this gives read permission on the files. Of course, if you want write permissions you can add whatever other mode bits you need. Just make sure the read bit is set for your corresponding user and/or group.
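For example (a sketch of the checks described above, reusing the image name from the question; aws configure list simply shows where the CLI finds its credentials):

chmod 444 ~/.aws/*
docker run --rm -v ~/.aws:/root/.aws username/docker-image aws configure list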
The issue I had was that I was running Docker as root. When running as root it was unable to locate my credentials at ~/.aws/credentials, even though they were valid.
Directions for running Docker without root on Ubuntu are here: https://askubuntu.com/a/477554/85384
You just have to pass the profile name as AWS_PROFILE; if you do not pass anything it will use the default profile, but if you want you can copy the default and add your desired credentials under a new profile name.
In your credentials file:
[profile_dev]
aws_access_key_id = xxxxxxxxxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
output = json
region = eu-west-1
In your docker-compose:
version: "3.8"
services:
cenas:
container_name: cenas_app
build: .
ports:
- "8080:8080"
environment:
- AWS_PROFILE=profile_dev
volumes:
- ~/.aws:/app/home/.aws:ro
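If useful, a quick way to check that the profile is picked up once the service is running (this assumes the image built from build: . contains the AWS CLI; cenas is the service name from the compose file above):

docker compose up -d --build
docker compose exec cenas aws sts get-caller-identity   # should report the account for profile_dev

Note that the AWS CLI looks for credentials under the container user's $HOME, so the /app/home/.aws mount target assumes HOME inside that container is /app/home.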
