Handling CI with GitLab and Azure Kubernetes - azure

Building a Docker image using Docker-in-Docker is not working.
before_script:
  - apt-get update && apt-get install -y apt-transport-https
  - apk add --update curl && rm -rf /var/cache/apk/
  - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
  - chmod +x ./kubectl
  - mv ./kubectl /usr/local/bin/kubectl

build:
  stage: build
  image: docker:dind
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: tcp://127.0.0.1:2376
  script:
    - kubectl version
    - docker info
Getting: Client sent an HTTP request to an HTTPS server
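This error usually means the Docker client is speaking plain HTTP to a daemon endpoint that expects TLS. A minimal sketch of one common fix, assuming GitLab's standard Docker-in-Docker setup (the docker service hostname and port 2375 come from that setup, not from the question): either disable the daemon's TLS listener as below, or keep TLS and point DOCKER_HOST at tcp://docker:2376 with the generated certificates.

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
    # Disable the daemon's TLS listener so the client can use plain HTTP...
    DOCKER_TLS_CERTDIR: ""
    # ...and talk to the dind service by its hostname on the non-TLS port.
    DOCKER_HOST: tcp://docker:2375
  script:
    - docker info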

Related

Twistcli podman in pipeline

I keep getting this error when I try to use twistcli to scan a container using podman:
failed to augment data: Error: error mounting storage for container c494c177d35d905aa267f0eebccf67dce2bcd0b61bc1511ef7039fde07baf152: error creating aufs mount to /var/lib/containers/storage/aufs/mnt/b5da5a775dc2fcd81fb1ca5b658415cb5e94450ed1cf0861dc4214a3d3fe9285: invalid argument
In the current configuration I'm running twistcli in the GitLab CI pipeline, using Ubuntu 21.04 as the image and installing podman on top of it.
Pipeline .gitlab-ci.yml
stages:
  - scan

scan:
  stage: scan
  image: ubuntu:21.04
  script:
    - apt-get update
    - apt-get -y install curl
    #- apt install software-properties-common uidmap
    #- add-apt-repository ppa:projectatomic/ppa
    - apt-get -y upgrade
    - apt-get -y install podman
    - podman info
    - podman login docker.io -u sim55649 -p $DOCKER_PASS
    - podman pull docker.io/alpine:latest
    #- cat ~/etc/containers/storage.conf
    - curl -k -u $PRISMA_USER:$PRISMA_PASS --output ./twistcli $PRISMA_ADD/api/v1/util/twistcli
    - more ./twistcli
    - chmod a+x ./twistcli
    - df /var/lib/containers/.
    - ./twistcli images scan --address $PRISMA_ADD --user $PRISMA_USER --password $PRISMA_PASS --details alpine:latest
How can this error be solved?
Thanks in advance
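One direction worth trying (an assumption on my part, not something stated in the post): the aufs error suggests podman cannot set up its default storage inside the unprivileged CI container, and forcing the slower but mount-free vfs driver often gets podman, and therefore twistcli, working in such environments. A sketch of script lines that would go before the first podman command in the scan job:

    # Hypothetical workaround: force podman to the vfs storage driver,
    # which needs no kernel mounts and so works in unprivileged containers.
    - mkdir -p /etc/containers
    - printf '[storage]\ndriver = "vfs"\n' > /etc/containers/storage.conf
    - podman info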

Dockerized Node JS application not accessible from host machine

I have a Node JS application.
docker-compose.yml
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'yarn nuxt'
    ports:
      - 3000:3000
    volumes:
      - '.:/app'
Dockerfile
FROM node:15
RUN apt-get update \
&& apt-get install -y curl
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
&& echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
&& apt-get update \
&& apt-get install -y yarn
WORKDIR /app
After running $ docker-compose up -d the application starts and is accessible inside the container:
$ docker-compose exec admin sh -c 'curl -i localhost:3000'
// 200 OK
But outside of the container it doesn't work; for example, Chrome shows ERR_SOCKET_NOT_CONNECTED.
Adding this to the app service in docker-compose.yml solves the problem:
environment:
  HOST: 0.0.0.0
Thanks to Marc Mintel's article Development setup with Nuxt, Node and Docker.
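For context, here is a sketch of the app service with the HOST variable in place (same layout as the compose file above; by default Nuxt only binds to localhost inside the container, and HOST=0.0.0.0 makes it listen on all interfaces so the published port is reachable from the host):

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'yarn nuxt'
    environment:
      # listen on all interfaces, not just localhost inside the container
      HOST: 0.0.0.0
    ports:
      - 3000:3000
    volumes:
      - '.:/app'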
Did you try to add
published: 3000
You can read more here: https://docs.docker.com/compose/compose-file/compose-file-v3/
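For reference, published belongs to the Compose file v3 long port syntax (v3.2+), so the mapping would look something like this sketch (based on the linked docs, not on the original compose file):

ports:
  - target: 3000      # port inside the container
    published: 3000   # port exposed on the host
    protocol: tcp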

gitlab ci pipeline failed deploy ftp

I'm trying to build and push my React build folder with gitlab-ci.yml.
Build and test pass, but deploy fails with the error below.
If I run the same script locally, it works!
lftp -e "mirror -R build/ ./test ; quit" -u $USERNAME,$PASSWORD $HOST
mirror: Access failed: /builds/myGitLab/myGitlabProjectName/build: No such file or directory
lftp: MirrorJob.cc:242: void MirrorJob::JobFinished(Job*): Assertion `transfer_count>0' failed.
/bin/bash: line 97: 275 Aborted (core dumped) lftp -e "mirror -R build/ ./test ; quit" -u $USERNAME,$PASSWORD $HOST
ERROR: Job failed: exit code 1
Here is my full yml file:
image: node:13.8

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - npm install
    - npm run build

test:
  stage: test
  script:
    - yarn
    - yarn test

deploy:
  script:
    - apt-get update && apt-get install -y lftp
    - lftp -e "mirror -R build/ ./test ; quit" -u $USERNAME,$PASSWORD $HOST
I've got it! I was starting from a Docker image (node) to perform the three stages (build, test and deploy) without success, but when I tried doing an ls -a in the deploy stage I realized that I didn't have the build folder, because the Docker environment is recreated for each job. So I added artifacts to keep the build folder!
Once the job in the build stage is done, the build folder is kept as an artifact and stays readable for the next jobs, including deploy.
image: node:13.8

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - npm install
    - npm run build
  only:
    - master
  artifacts:
    paths:
      - build

test:
  stage: test
  script:
    - yarn
    - yarn test

deploy:
  stage: deploy
  before_script:
    - apt-get update -qq
  script:
    - apt-get install -y -qq lftp
    - ls -a
    - lftp -e "set ssl:verify-certificate false; mirror --reverse --verbose --delete build/ ./test2 ; quit" -u $USERNAME,$PASSWORD $HOST
  only:
    - master
I have part of the answer, but I would like to do something better.
Actually, I understood what is going on: the Docker environment is recreated for every stage, so after the build stage the test and deploy stages no longer have the build folder.
I don't know how to persist the node Docker image's workspace across stages.
Any help will be welcome.
To make it work, I have put every script in one stage, this way:
image: node:13.0.1

stages:
  - production

build:
  stage: production
  script:
    - npm install
    - npm run build
    - npm run test
    - apt-get update -qq && apt-get install -y -qq lftp
    - lftp -e "mirror -R build/ ./test ; quit" -u $USERNAME,$PASSWORD $HOST
  only:
    - master

How to fix PSQL connection error with Docker Compose

I'm trying to connect my Python-Flask app with a Postgres database in a docker environment. I am using a docker-compose file to build my web and db environment.
However, I am getting the following error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Here is my Dockerfile:
FROM ubuntu:16.04 as base
RUN apt-get update -y && apt-get install -y python3-pip python3-dev postgresql libpq-dev libffi-dev jq
ENV LC_ALL=C.UTF-8 \
    LANG=C.UTF-8
ENV FLASK_APP=manage.py \
    FLASK_ENV=development \
    APP_SETTINGS=config.DevelopmentConfig \
    DATABASE_URL=postgresql://user:pw@postgres/database
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
FROM base as development
EXPOSE 5000
CMD ["bash"]
Here is my Docker-compose file:
version: "3.6"
services:
development_default: &DEVELOPMENT_DEFAULT
build:
context: .
target: development
working_dir: /app
volumes:
- .:/app
environment:
- GOOGLE_CLIENT_ID=none
- GOOGLE_CLIENT_SECRET=none
web:
<<: *DEVELOPMENT_DEFAULT
ports:
- "5000:5000"
depends_on:
- db
command: flask run --host=0.0.0.0
db:
image: postgres:10.6
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=db
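The error shows psql falling back to the local Unix socket, which does not exist in the web container. One possible direction (an assumption, not something stated in the question) is to connect to Postgres over TCP using its Compose service name db, for example:

# Hypothetical check from inside the web container: connect over TCP to the
# "db" service instead of the default local Unix socket. With the compose
# file above the user is "user" and the password is "db"; with no
# POSTGRES_DB set, the default database name matches the user.
docker-compose exec web psql -h db -U user -c 'SELECT 1;'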

Use Gitlab Pipeline to push data to ftpserver

I want to deploy to an FTP server using a GitLab pipeline.
I tried this code:
deploy: # You can name your task however you like
  stage: deploy
  only:
    - master

deploy:
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
But I get an error message. What is the best way to do this? :)
Add the following code to your .gitlab-ci.yml file.
variables:
  HOST: "example.com"
  USERNAME: "yourUserNameHere"
  PASSWORD: "yourPasswordHere"

deploy:
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
    - lftp -c "set ftp:ssl-allow no; open -u $USERNAME,$PASSWORD $HOST; mirror -Rnev ./public_html ./ --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"
  only:
    - master
The above code will push all your recently modified files from your GitLab repository's public_html folder to your FTP server's root.
Just update the variables HOST, USERNAME and PASSWORD with your FTP credentials, commit this file to your GitLab repository, and you are good to go.
Now whenever you make changes on your master branch, GitLab will automatically push your changes to your remote FTP server.
Got it :)
image: mwienk/docker-git-ftp

deploy_all:
  stage: deploy
  script:
    - git config git-ftp.url "ftp://xx.nl:21/web/new.xxx.nl/public_html"
    - git config git-ftp.password "xxx"
    - git config git-ftp.user "xxxx"
    - git ftp init
    #- git ftp push -m "Add new content"
  only:
    - master
Try this. There's a CI Lint tool in GitLab that helps with formatting errors; the linter was flagging the extra deploy statement.
deploy:
  stage: deploy
  only:
    - master
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
I use this
deploy:
  script:
    - apt-get update -qq && apt-get install -y -qq lftp
    - lftp -c "set ftp:ssl-allow no; open -u $FTP_USERNAME,$FTP_PASSWORD $FTP_HOST; mirror -v ./ $FTP_DESTINATION --reverse --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"
  environment:
    name: production
  only:
    - master
