Skaffold misconfiguration, or how to set up a simple Helm example - skaffold

I am a bit puzzled about how to set up Skaffold correctly in my case. Here is my skaffold.yaml:
apiVersion: skaffold/v2beta17
kind: Config
build:
  tagPolicy:
    gitCommit: {}
  artifacts:
    - image: zero-x/spring-cloud-kubernetes/config-map-it
      custom:
        buildCommand: ./build.sh
  local:
    useDockerCLI: false
    useBuildkit: false
    push: false
deploy:
  helm:
    releases:
      - name: config-map-it
        chartPath: src/main/helm
        artifactOverrides:
          # skaffold will override this with a different tag
          image: zero-x/spring-cloud-kubernetes/config-map-it
        valuesFiles:
          - src/main/helm/values.yaml
        wait: true
        setValues:
          namespace: spring-k8s
In build.sh:
#!/usr/bin/env bash
# build jar only, no tests, no chart
.././gradlew clean build bootjar -x test -x helmChartBuild --quiet
docker build --quiet --build-arg JAR_FILE='build/libs/*.jar' -t ${IMAGE} .
${IMAGE} is provided by Skaffold.
So I need to build the jar first, pack it into an image, and deploy it. I invoke two commands, one after the other:
skaffold build    # builds the image just fine
skaffold deploy
The second command fails with:
You either need to:
run [skaffold deploy] with [--images TAG] for each pre-built artifact
or [skaffold run] instead, to let Skaffold build, tag and deploy artifacts.
no tag provided for image [zero-x/spring-cloud-kubernetes/config-map-it]
What is going on here? Without much bash-ing, I can't get this working. Tutorials and documentation about Skaffold and how to do things properly are scarce, to say the least.
EDIT
So I was indeed doing:
kind create cluster --name spring-k8s --wait 5m
Given that, I thought that if I did:
deploy:
  kubeContext: kind-spring-k8s
  helm:
    ...
things would work, but they do not.
If I start everything from scratch again and run:
skaffold deploy --file-output=images.json -vdebug
I do see that:
Tags used in deployment:
- zero-x/spring-cloud-kubernetes/config-map-it -> zero-x/spring-cloud-kubernetes/config-map-it:78da248b669d2fafacbd144cf22d7251dfde57c664c70a5fd7d53793d9d5efd7
DEBU[0000] Local images can't be referenced by digest.
They are tagged and referenced by a unique, local only, tag instead.
See https://skaffold.dev/docs/pipeline-stages/taggers/#how-tagging-works
Or later:
helm --kube-context kind-spring-k8s dep build src/main/helm
All of this is rather confusing. What am I still missing?

You need to communicate the images built by skaffold build to skaffold deploy:
skaffold build --file-output=images.json
skaffold deploy --build-artifacts=images.json
skaffold deploy doesn't (re)build images: it just deploys a set of images, so it needs to be told which images to deploy.
skaffold run combines these steps into a single command.
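For reference, the artifacts file written by --file-output is JSON along these lines (the tag is taken from the debug output above; the exact value differs per build):
{
  "builds": [
    {
      "imageName": "zero-x/spring-cloud-kubernetes/config-map-it",
      "tag": "zero-x/spring-cloud-kubernetes/config-map-it:78da248b669d2fafacbd144cf22d7251dfde57c664c70a5fd7d53793d9d5efd7"
    }
  ]
}
skaffold deploy --build-artifacts=images.json reads exactly this structure to resolve the image references in the Helm release.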

Related

How to put multiple images in the image keyword in gitlab-ci

I have two jobs, build_binary and build deb, and I want to combine them. The issue is that they use different images: the former uses golang:latest and the latter uses ubuntu:20.04, as shown:
gitlab-ci.yml
build_binary:
  stage: build
  image: golang:latest
  rules:
    - if: '$CI_COMMIT_TAG'
  # tags:
  #   - docker-executor
  artifacts:
    untracked: true
  script:
    - echo "Some script"

build deb:
  stage: deb-build
  rules:
    - if: '$CI_COMMIT_TAG'
  # tags:
  #   - docker-executor
  image: ubuntu:20.04
  dependencies:
    - build_binary
  script:
    - echo "Some script 2"
  artifacts:
    untracked: true
I have tried these two ways, but neither worked:
build_binary:
  stage: build
  image: golang:latest ubuntu:20.04
and
build_binary:
  stage: build
  image: [golang:latest, ubuntu:20.04]
Any pointers would be very helpful.
This isn't really about gitlab-ci. First, you need to understand what Docker images and containers are by nature.
You cannot magically get a mixed image that is ubuntu:20.04 plus golang:latest; that is simply impossible to do from the gitlab-ci file.
But you can create your own image:
Take the Dockerfile for ubuntu:20.04 from Docker Hub: https://hub.docker.com/_/ubuntu
Add commands to it that install Go inside this operating system. To do that, open the golang:latest Dockerfile and copy its installation steps into the ubuntu Dockerfile, with the required modifications.
Then run docker build -t my-super-ubuntu-and-golang . (see the manual).
Then check it: docker run it and verify that it is a normal Ubuntu with Go installed.
If all goes well, you can push it to your own Docker Hub account and use it in gitlab-ci:
image: your-name/golang-ubuntu-20.04
...
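If you would rather build and publish that image from CI instead of your workstation, a minimal sketch of such a job (the Dockerfile name and image tag are hypothetical) could reuse the docker:dind pattern that appears elsewhere on this page:
build-custom-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
    # Dockerfile.golang-ubuntu is a hypothetical file that installs Go on top of ubuntu:20.04
    - docker build -f Dockerfile.golang-ubuntu -t "$CI_REGISTRY_IMAGE/golang-ubuntu:20.04" .
    - docker push "$CI_REGISTRY_IMAGE/golang-ubuntu:20.04"
Jobs that need both tools can then set image: $CI_REGISTRY_IMAGE/golang-ubuntu:20.04.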
The suggestion to use services is incorrect here: a service starts another image and connects it to your job over the network, so you can run Postgres, RabbitMQ, and other services and use them in your tests. For example:
image: alpine
services: [ rabbitmq ]
does not mean that rabbitmq is started on alpine. Both images are started - alpine, and rabbitmq with the local host name rabbitmq - and your alpine container can connect to tcp://rabbitmq:5672 and use it. It's a different approach.
P.S.
For example, you can look at https://hub.docker.com/r/partlab/ubuntu-golang
It is probably not exactly the image you want, but it shows how to make a mixed ubuntu-golang image.
Using the image and services keywords in your .gitlab-ci.yml file, you may define a GitLab CI/CD job that uses multiple Docker images.
For example, you could use the following configuration to specify a job that uses both a Golang image and an Ubuntu image:
build:
  image: golang:latest
  services:
    - ubuntu:20.04
  before_script:
    - run some command using Ubuntu
  script:
    - go build
    - run some other command using Ubuntu

GitLab Container to GKE (Kubernetes) deployment

Hello, I have a problem with GitLab CI/CD. I'm trying to deploy a container to Kubernetes on GKE, but I'm getting an error:
This job failed because the necessary resources were not successfully created.
I created a service account with kube-admin rights and created the cluster via the GitLab GUI, so it's fully integrated. But when I run the job, it still doesn't work.
By the way, I use kubectl get pods in the gitlab-ci file just to test whether Kubernetes is responding.
stages:
  - build
  - deploy

docker-build:
  # Use the official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  # Default branch leaves tag empty (= latest tag)
  # All other branches are tagged with the escaped branch name (commit ref slug)
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
    - docker push "$CI_REGISTRY_IMAGE${tag}"

deploy-prod:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl get pods
  environment:
    name: production
    kubernetes:
      namespace: test1
Any ideas?
Thank you
The namespace should be removed: GitLab creates its own namespace for every project.
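With that change, the deploy job from the question would shrink to (a sketch):
deploy-prod:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl get pods
  environment:
    name: production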

Cache build folder during gitlab CI build

I have a remote server where I serve my project via Nginx. I am using GitLab CI to automate my deploy process, and I have run into a problem. When I push my commits to the master branch, the gitlab-runner runs nicely, but it removes my React build folder (which is expected, since I put it into .gitignore). Because the build folder is always removed, Nginx cannot serve any files until the build finishes and a new build folder is created. Is there any solution to this problem? It would be nice if I could keep the old build folder around until the build process finishes. I have attached my .gitlab-ci.yml. Thanks in advance!
image: docker:latest
services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  GIT_SSL_NO_VERIFY: "1"

build-step:
  stage: build
  tags:
    - shell
  script:
    - docker image prune -f
    - docker-compose -f docker-compose.yml -f docker-compose.prod.yml build

deploy-step:
  stage: deploy
  tags:
    - shell
  script:
    - docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
It should be possible to use git fetch and to disable git clean when your deploy job starts. Here are links to the variables that control this:
https://docs.gitlab.com/ee/ci/yaml/#git-clean-flags
https://docs.gitlab.com/ee/ci/yaml/#git-strategy
It would look something like this:
deploy-step:
  variables:
    GIT_STRATEGY: fetch
    GIT_CLEAN_FLAGS: none
  stage: deploy
  tags:
    - shell
  script:
    - docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
This makes GitLab use git fetch instead of git clone and skip any git clean ... commands, so the build artifacts from your previous run are not removed.
There are some problems with this, though. If something goes wrong with a build, you might end up having to log in to the server where the runner lives and fix things manually. The reason GitLab uses git clean is to prevent exactly these kinds of problems.
A more robust solution is to use nginx as a sort of double buffer: keep two different build folders, change the nginx config, and then send a signal to nginx to reload it. nginx will then gracefully switch to the new version of your application without any interruption. Here is a link to someone who has done this:
https://syshero.org/2016-06-09-zero-downtime-deployments-using-nginx/
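One common way to implement that switch is to publish each build into its own release directory and atomically repoint a symlink that nginx uses as its document root; sketched as a deploy job (the /var/www paths and release layout are hypothetical):
deploy-step:
  stage: deploy
  tags:
    - shell
  script:
    - cp -r build /var/www/releases/$CI_COMMIT_SHA                # hypothetical per-commit release directory
    - ln -sfn /var/www/releases/$CI_COMMIT_SHA /var/www/current   # nginx root points at /var/www/current
    - nginx -s reload                                             # graceful reload, no dropped requests
Old release directories can be pruned later; because the symlink swap is atomic, nginx never serves a half-written build folder.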

Possible solution for bitbucket pipeline docker-run limitation

My integration tests depend heavily on Elasticsearch, so to run them in a Bitbucket pipeline I would have to execute docker run to spin up my Elasticsearch instance during the tests.
But as some of you probably know, there's a limitation on Bitbucket Pipelines:
See the Docker command line reference for information on how to use
these commands. Other commands, such as docker run, are currently
forbidden for security reasons on our shared build infrastructure.
Given that, I don't know how to spin up my ES cluster with all the configuration I need inside - Painless scripts, mappings, exposed ports - so that it is available for my integration tests.
Does anyone have an idea how I could achieve this?
OK, I managed to get it working. I was struggling to run Elasticsearch due to this error: https://github.com/docker-library/elasticsearch/issues/111
This was fixed by applying the setting discovery.type: single-node. Since I'm using this for integration tests, I don't need to run ES in production mode. The problem was that the Bitbucket pipeline was not showing the error logs, so I was completely blind and had to try many things to find the cause. Since I can't build and run my own image in Pipelines, I uploaded an image with my own configuration (including the single-node setting) and scripts to Docker Hub.
This is how my YAML looked in the end:
image: maven:3.3.9

pipelines:
  default:
    - step:
        caches:
          - maven
        script:
          - docker version
          - mvn clean package verify -Dmaven.docker.plugin.skip=true -s settings.xml
        services:
          - elasticsearch

definitions:
  services:
    elasticsearch:
      image: elastic-search-bitbucket-pipeline

options:
  docker: true
You can try to define your elastic-search image as a service, as described here:
Use services and databases in Bitbucket Pipelines
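A minimal sketch of that approach (compare the fuller examples below; the image tag is illustrative):
pipelines:
  default:
    - step:
        script:
          - curl -XGET localhost:9200/_cat/health
        services:
          - elasticsearch
definitions:
  services:
    elasticsearch:
      image: elasticsearch:7.0.1
Service containers in Bitbucket Pipelines share the build container's network, so the step reaches Elasticsearch on localhost.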
For those still looking for a more elaborate solution, I have created a Dockerfile like this:
FROM elasticsearch:7.0.1
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
In the same folder I also created a custom config file, elasticsearch.yml:
network.host: 127.0.0.1
I then pushed the custom image to Docker Hub; for more info on how to do that, see https://docs.docker.com/docker-hub/repos/
You can now use the custom image in your Pipelines service configuration and use it to run your tests.
You can also supply some more configuration inside your elasticsearch.yml:
Enable CORS:
http.cors.enabled: true
http.cors.allow-origin: "*"
Set discovery type:
discovery.type: single-node
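Put together, the custom elasticsearch.yml baked into the image would then read (a sketch combining the snippets above):
network.host: 127.0.0.1
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.type: single-node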
You can use my docker image:
https://hub.docker.com/r/xiting/elasticsearch-bitbucket-pipeline
Add the service to your pipeline as below:
definitions:
  steps:
    - step: &run-tests
        name: Run tests
        script:
          - sleep 30  # wait for elasticsearch; don't rely on a sleep in a real pipeline
          - curl -XGET localhost:9250/_cat/health
        services:
          - elasticsearch
  services:
    elasticsearch:
      image: xiting/elasticsearch-bitbucket-pipeline
      variables:
        ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    docker:
      memory: 2048

pipelines:
  pull-requests:
    '**':
      - step: *run-tests

What are services in gitlab pipeline job?

I am using GitLab's pipelines for CI and CD to build images for my projects.
Every job has configuration to set, such as image and stage, but I can't wrap my head around what services are. Can someone explain their functionality? Thanks.
Here's a code snippet I use that I found:
build-run:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -t "$CI_REGISTRY_IMAGE/my-project:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE/my-project:$CI_COMMIT_SHA"
  cache:
    untracked: true
  environment: build
The documentation says:
The services keyword defines just another Docker image that is run during your job and is linked to the Docker image that the image keyword defines. This allows you to access the service image during build time.
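In practice this means the service container is reachable from the job container by hostname; a minimal sketch (the alias, password, and connection string are illustrative):
integration-test:
  image: postgres:15            # used here only for its psql client
  services:
    - name: postgres:15
      alias: db                 # reachable from the job as host "db"
  variables:
    POSTGRES_PASSWORD: example  # passed to the service container as well
  script:
    - psql "postgresql://postgres:example@db:5432/postgres" -c 'SELECT 1;'
The docker:dind entry in the snippet above works the same way: a Docker daemon running as a linked service that the docker CLI inside the job talks to.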
