GitLab CI/CD configuration problem using shared runners

I have problems with my GitLab CI/CD configuration - I'm using the free shared runners on GitLab itself.
I have a Joomla (test) project using Docker - I'm learning how it works.
I created .gitlab-ci.yml with:
image: docker:latest
services:
  - docker:dind
at the top of the file.
In the test stage I want to run the Docker image created in the build stage.
When I add:
services:
  - mariadb:latest
to the test stage, I always get
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
at the docker pull command. Without it, I get an error at the docker run command during Joomla image initialization because there is no MySQL server.
Any help will be appreciated.

If you set
services:
  - mariadb:latest
in your test job, this will override the globally defined services. Therefore, the docker daemon is not running during test. This also explains why you do not get the Docker daemon error when you omit the services definition for the test job.
Either specify the docker:dind service also for the test job, or remove the local services definition and add mariadb to your global services definition.
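For example, a test job that keeps docker:dind and simply adds mariadb as a second service could look roughly like this (a sketch only - the job name, image name and commands are placeholders, not taken from your project):
test:
  stage: test
  services:
    - docker:dind      # keeps the Docker daemon available for docker pull / docker run
    - mariadb:latest   # database service for the Joomla container
  script:
    - docker pull my-joomla-image   # placeholder for the image built in the build stage
    - docker run my-joomla-image    # placeholder run command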

Related

How to select a shared GitLab runner with Docker installed?

It's my first time setting up CI/CD using gitlab.com, and I would like to use GitLab's shared runners for my project.
My CI/CD process uses a docker-compose file which I build and deploy.
I saw that there is an option to select a gitlab-runner instance that has Docker installed, so I would be able to run my CI/CD process on it.
With that being said, I can't find a way to configure a specific shared gitlab-runner to my project.
Will be happy to get your help here, how can I assign a specific shared gitlab-runner to my project?
Thanks
You can use tags to select runners, but in your case this is not necessary. You can just specify that the image is docker and attach a service container of docker:dind and your project will build on shared runners.
image: docker:latest
services:
  - docker:dind
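If you ever do need to pin jobs to particular runners, the tags keyword is the mechanism - a rough sketch (the tag name is only an example):
build:
  image: docker:latest
  services:
    - docker:dind
  tags:
    - docker   # only runners registered with this tag will pick up the job
  script:
    - docker build -t my-image .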

Are containers available between stages in Gitlab CI

Is a container that is used in the build stage accessible in the next stage? I have yaml like this:
build_backend:
  image: web-app
  services:
    - mysql:5.7
  stage: build
  script:
    - make build

test_frontend:
  image: node:8
  stage: test
  script:
    - make run-tests
My tests, which are triggered by make run-tests, need to make HTTP requests against the backend container - is that possible?
I was trying to avoid building a new image and then pushing it to a registry only to pull it down again, but maybe there is no other way? If I did this, would my web-app container still have access to the mysql container if I added it as a service in the test_frontend job?
No, containers are not available between stages. Job artifacts (i.e. files) will be passed between stages by default and can also be passed explicitly between jobs.
If you need to run tests against a container, you should indeed pull it down again from a registry. Then, you can use the docker in docker (dind) service to run your tests.
I think this blog post explains a similar use case nicely. The testing job described there is the following:
test:
  stage: test
  script:
    - docker run -d --name=postgres --env-file=.postgres-env postgres:9.5
    - docker run --env-file=.environment --link=postgres:db $CONTAINER_TEST_IMAGE nosetests --with-coverage --cover-erase --cover-package=${CI_PROJECT_NAME} --cover-html
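As a rough sketch of the build-push-pull approach for your own pipeline (image names, tags and the test command are illustrative, not from the blog post; CI_REGISTRY_*, CI_REGISTRY_IMAGE and CI_COMMIT_SHA are GitLab's predefined variables):
build_backend:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/web-app:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE/web-app:$CI_COMMIT_SHA"

test_frontend:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE/web-app:$CI_COMMIT_SHA"
    - docker run -d --name backend "$CI_REGISTRY_IMAGE/web-app:$CI_COMMIT_SHA"
    # run your tests against the "backend" container from here, e.g. with another docker run --link backend:web-app ...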

Possible solution for bitbucket pipeline docker-run limitation

My integration tests depend heavily on an Elasticsearch instance, so to run them on Bitbucket Pipelines I would have to execute the docker run command to spin up my Elasticsearch instance during the integration tests.
But as some of you probably know, there's a limitation on Bitbucket Pipelines:
See the Docker command line reference for information on how to use
these commands. Other commands, such as docker run, are currently
forbidden for security reasons on our shared build infrastructure.
Given that, I don't know how I can spin up my ES cluster with all the configuration I need inside it (painless scripts, mappings, exposed ports) so it is available for my integration tests.
Does someone have any idea how I could achieve this?
OK, I managed to get it working. I was struggling to run Elasticsearch due to this error: https://github.com/docker-library/elasticsearch/issues/111
This was fixed by applying the config discovery.type: single-node. Since I'm using this for integration tests, I don't need to run ES in production mode. The thing is, Bitbucket Pipelines was not showing the error logs for this, so I was completely blind and had to try many things until I found it. Since I can't build and run my own image on Pipelines, I uploaded an image with my own configuration (including the single-node setting) and scripts to Docker Hub.
This is how my YAML looked in the end:
image: maven:3.3.9

pipelines:
  default:
    - step:
        caches:
          - maven
        script:
          - docker version
          - mvn clean package verify -Dmaven.docker.plugin.skip=true -s settings.xml
        services:
          - elasticsearch

definitions:
  services:
    elasticsearch:
      image: elastic-search-bitbucket-pipeline

options:
  docker: true
You can try to define your Elasticsearch image as a service, as described here:
Use services and databases in Bitbucket Pipelines
For those still looking for a more elaborate solution, I have created a Dockerfile like this:
FROM elasticsearch:7.0.1
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
In the same folder I have also created a custom config elasticsearch.yml:
network.host: 127.0.0.1
I then pushed the custom image to Docker Hub; for more info on how to do that, look here: https://docs.docker.com/docker-hub/repos/
You can now use the custom image in your Pipelines service configuration and use it to run your tests.
You could also supply some more configuration inside your elasticsearch.yml
Enable CORS:
http.cors.enabled: true
http.cors.allow-origin: "*"
Set discovery type:
discovery.type: single-node
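Putting those snippets together, the complete elasticsearch.yml for the custom image would simply be:
network.host: 127.0.0.1
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.type: single-node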
You can use my docker image:
https://hub.docker.com/r/xiting/elasticsearch-bitbucket-pipeline
Add service to your pipeline as below:
definitions:
  steps:
    - step: &run-tests
        name: Run tests
        script:
          - sleep 30 # wait for Elasticsearch to start; do not rely on a fixed sleep in a real pipeline
          - curl -XGET localhost:9250/_cat/health
        services:
          - elasticsearch
  services:
    elasticsearch:
      image: xiting/elasticsearch-bitbucket-pipeline
      variables:
        ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    docker:
      memory: 2048

pipelines:
  pull-requests:
    '**':
      - step: *run-tests

build and push docker images with GitLab CI

I would like to build and push docker images to my local nexus repo with GitLab CI
This is my current CI file:
image: docker:latest
services:
  - docker:dind

before_script:
  - docker info
  - docker login -u some_user -p nexus-rfit some_host

stages:
  - build

build-deploy-ubuntu-image:
  stage: build
  script:
    - docker build -t some_host/dev-image:ubuntu ./ubuntu/
    - docker push some_host/dev-image:ubuntu
  only:
    - master
  when: manual
I also have a job for an Alpine Docker image, but whenever I try to run either of them, it fails with the following error:
Checking out 13102ac4 as master...
Skipping Git submodules setup
$ docker info
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: exit code 1
So technically the docker daemon in the image isn't running, but I have no idea why.
The GitLab folks have a reference in their docs about using docker build inside Docker-based jobs: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html#use-docker-in-docker-executor. Since you seem to have everything in place (i.e. the right image for the job and the additional docker:dind service), it's most likely a runner configuration issue.
If you look at the second step in the docs:
Register GitLab Runner from the command line to use docker and privileged mode:
[...]
Notice that it's using the privileged mode to start the build and service containers. If you want to use docker-in-docker mode, you always have to use privileged = true in your Docker containers.
Probably you're using a runner that was not configured in privileged mode and hence can't properly run the docker daemon inside. You can directly edit the /etc/gitlab-runner/config.toml on your registered runner to add that option.
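As a sketch, the relevant part of config.toml would look roughly like this (the runner name, URL and token are placeholders for your registered runner's values):
[[runners]]
  name = "my-docker-runner"
  url = "https://gitlab.com/"
  token = "YOUR_RUNNER_TOKEN"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true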
(Also, read the section in the docs for more info about the performance implications of the storage driver your runner uses with dind.)

Easiest way to run Selenium tests in a Docker container over Jenkins CI

I want to execute my automated tests, written in Nightwatch-Cucumber, via Jenkins CI in a Docker container. I have a Docker image that I want to use for this.
This is what I want to do in more detail.
Start the tests via a Jenkins CI job.
On the same machine the Docker image is loaded and the related Docker container starts. This container is based on a Unix OS. Some configuration inside the Docker container is also executed.
Tests are executed (locally or remotely) in headless mode via xvfb, and the report is saved on the Jenkins machine.
With GitLab CI I've achieved this via a .gitlab-ci.yml config file, and it works very well:
image: "my-docker-image"

stages:
  - "chrome-tests"

before_script:
  - "apt-get update"
  - "apt-get install -y wget bzip2"
  - "npm install"

cache:
  paths:
    - node_modules/

run-tests-on-chrome:
  stage: "chrome-tests"
  script:
    - "whereis xvfb-run"
    - "xvfb-run --server-args='-screen 0 1600x1200x24' npm run test-chrome"
But I want to achieve the same procedure with Jenkins CI. What is the easiest way to do it and to run my automated tests in a Docker image that is invoked by Jenkins? Should I write a Dockerfile or not?
I'm currently running Selenium Test scripts written in PHP and running them through Jenkins using Docker Compose. You can do the same as well without the hassle of dealing with Xvfb yourself.
To run your Selenium tests using headless browsers inside a docker container and linking it to your application with docker-compose, you can simply use the pre-defined standalone server.
https://github.com/SeleniumHQ/docker-selenium
I'm currently using the Chrome Standalone image.
Here's what your docker-compose should look like:
version: '3'
services:
  your-app:
    build:
      context: .
      dockerfile: Dockerfile
  your_selenium_application:
    build:
      context: .
      dockerfile: Dockerfile.selenium.test
    depends_on:
      - chrome-server
      - your-app
  chrome-server:
    image: selenium/standalone-chrome:3.4.0-einsteinium
When running docker-compose, it will spin up your application, the selenium environment that will be interacting with your app, and the standalone server that will provide you with your headless browser. Because they are linked, inside your selenium code, you can make your test requests to the host via your-app:80 for example. Your headless browser will be chrome-server:4444/wd/hub which is the default address.
This can all be done inside Jenkins using only one command in the Execute Shell step of your Jenkins job, as sketched below. docker-compose also lets you easily run the tests on your local machine, and the results should be identical.
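For example, the Execute Shell step could be a single line like this (the service name comes from the compose file above; --exit-code-from makes the Jenkins build fail when the test container fails):
docker-compose up --build --abort-on-container-exit --exit-code-from your_selenium_application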
Check out the maintained Selenium Docker images, specifically the node flavors. It's a good place to start, whether you decide to use the containers as-is or roll your own.
