There has been a runner system failure, please try again - gitlab

How do I resolve this error, specifically on a MacBook? I have already installed Docker on my machine, but it is not working.

You have to start the Docker daemon on your GitLab runner.
Below is a link to Docker Desktop for Mac.
If you start the Docker Desktop GUI, the Docker daemon will start.
https://docs.docker.com/desktop/mac/
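As a minimal sketch, assuming Docker Desktop is already installed on the Mac that hosts the runner, you can also start it and check the daemon from a terminal:

# Launch Docker Desktop from the terminal (this starts the daemon)
open -a Docker

# Once it has finished starting, verify the daemon responds
docker info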

This problem is not specific to Mac; it also occurs on Windows. I ran into it and found that the PowerShell portion of the Infrastructure as Code script I was using was not actually installing Docker.
I therefore changed the Infrastructure as Code to ensure that Microsoft-Hyper-V is enabled, that Docker is actually installed, and that the Docker service is started; the computer then has to be restarted for the changes to take effect.
I do not yet know exactly where in the flow the restart should happen, since I am still working through it, but I will update this answer when I have more information.
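As a rough PowerShell outline of the pieces involved (this is a sketch, not my exact IaC script; the Docker install step itself is elided because it depends on your tooling):

# Enable the Hyper-V feature without rebooting immediately
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All -NoRestart

# ... install Docker here with whatever mechanism your IaC uses ...

# Make sure the Docker service is running
Start-Service docker

# Reboot so the Hyper-V change takes effect
Restart-Computer -Force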

How do you configure docker to prevent it from updating itself

We ship our app as a docker image.
The database is PostgreSQL's official image.
It runs on a Kubuntu 18.04 LTS host. Unfortunately, these machines mostly operate offline and have little connectivity throughout the month. I have come to realize that Docker auto-updated at some point, which caused an issue that stops the Docker daemon from starting.
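On an apt-based host like Kubuntu, one common way to stop a specific package from being upgraded automatically (for example by unattended-upgrades) is to put it on hold; a minimal sketch, assuming the Docker CE packages were installed via apt (package names may differ if you used Ubuntu's docker.io package instead):

# Pin the Docker packages so apt will not upgrade them
sudo apt-mark hold docker-ce docker-ce-cli containerd.io

# Verify which packages are held
apt-mark showhold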

Azure windows server 2019 install docker

I hope somebody can help me with this issue.
I have a Windows Server 2019 VM hosted on Azure. I would like to install Docker for some tests.
Before jumping to Docker, I updated the operating system.
I downloaded Docker Desktop from the Docker website.
The installation went through without any issue.
But when I try to start the app, I get an error saying that Hyper-V is not running.
At that point, in Server Manager (Add Roles and Features) I checked that Hyper-V and Containers were both installed. After that I went to services.msc and restarted the Hyper-V Management service, set hypervisorlaunchtype to Auto, and restarted the VM. When I logged back in, I had the same issue.
I know this is a case of nested virtualization, but I was wondering whether it is achievable on a cloud platform.
Can anyone please help me understand whether this is possible and/or what I am doing wrong?
I tried installing Docker Desktop in my environment and it succeeded without any issue. I created a VM with Windows Server 2019, updated it to the latest patches, installed Docker Desktop following Install Docker Desktop on Windows, and it is running as expected.
Nested virtualization is supported in Azure, but not all VM sizes support it. Try deploying the VM with one of the following series and check again: D_v3; Ds_v3; E_v3; Es_v3; F2s_v2 – F72s_v2; M.
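For example, a minimal sketch with the Azure CLI using a D_v3-series size (the resource group, VM name, username, and password below are placeholders):

# Create a Windows Server 2019 VM in a size that supports nested virtualization
az vm create \
  --resource-group myResourceGroup \
  --name myDockerVm \
  --image Win2019Datacenter \
  --size Standard_D4s_v3 \
  --admin-username azureuser \
  --admin-password '<a-strong-password>'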

Hyperledger Fabric setup

I would like to set up Hyperledger Fabric on an Ubuntu machine with Docker (docker-compose up). Is it possible to run the chaincode and Node.js code from another system (a Mac), as I already have Go and Node.js ready on the Mac?
Please help me with this query.
You can use the same environment on different systems; that is the main reason to choose Docker and docker-compose.
Just follow the steps and confirm the versions of your tools.
To run on another system, you simply build the image of your current Hyperledger package on the current system (Ubuntu) and use that image on the other system (Mac).
Yes, you can totally do that. Use this example: https://github.com/hyperledger/fabric-sdk-node/tree/master/examples/balance-transfer
Run docker-compose on your Ubuntu machine. Update the app's config.json and /app/network-config.json with the Ubuntu machine's IP, and make sure the required ports are open.
Run the app on your Mac.
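A rough sketch of that split, assuming the balance-transfer layout referenced above (the IP address is a placeholder for the Ubuntu machine, and file paths are the ones mentioned above):

# On the Ubuntu machine: bring up the Fabric network and note the published ports
docker-compose up -d
docker ps

# On the Mac: point the app's network config at the Ubuntu host instead of localhost
sed -i '' 's/localhost/192.168.1.50/g' app/network-config.json

# Then start the Node.js app on the Mac, per that example's README
node app.js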

Linux dev environment on OS X (Docker as a VM or any other option)

I'd love to hear some advice on setting up what I'm looking for.
I'm using OS X and I need to develop some code on a Linux machine. The thing is, I was looking for an alternative to a VM, since a VM drains too much battery power.
The first thing I came across was a Docker container. I know that is not what it was designed for, but I thought it might work anyway. So I tried running a container as
docker run -i -t ubuntu /bin/bash
and it worked well. However, all the changes I make are gone and I can't find a way to solve that. I also tried
docker run -i -v /Users/JaimehRubiks/test:/home/Jaime -t ubuntu /bin/bash
and all files in there are saved (also very interesting because I can share my files with the host), but it's kind of tedious having to commit to the Docker image if I change anything in the config files of my Ubuntu.
What I'm looking for is just a simple way to run Linux on my Mac and then access it somehow, like I did with Docker or via SSH.
Docker currently does not run natively on OS X, as Docker relies on the Linux kernel for its isolation features. In fact, Docker Toolbox uses a VirtualBox virtual machine running the boot2docker Linux distro to run the Docker daemon on OS X. See the official documentation on Mac OS X installation.
The boot2docker Linux image is quite lightweight, but I'm not sure you will get much benefit from running Docker on OS X for Linux development over simply running a full VirtualBox machine with Ubuntu (or another distro). If you want to run a virtual machine, Vagrant is a good tool to help you set that up. It lets you easily pull down boxes from an image repo, set them up, and SSH into them. It also makes host -> guest folder sharing and port forwarding quite simple.
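For example, a minimal sketch of that Vagrant workflow (the box name is just one commonly used Ubuntu box):

# Create a Vagrantfile for an Ubuntu box, boot it, and SSH in
vagrant init ubuntu/trusty64
vagrant up
vagrant ssh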
but it's kind of tedious having to commit to the Docker image if I change anything in the config files of my Ubuntu.
You don't have to docker commit anything: any file change made on the host (/Users/JaimehRubiks/test) will be visible in the container (/home/Jaime).
What about using Vagrant to run Ubuntu or CentOS? You can access the system via the vagrant ssh command, configure it with a configuration file (the Vagrantfile), and share it much like you would with Docker.

How to remove/install a docker image on an unconfigured Docker for centos 7

I am using CentOS 6.6 and 7 and have decided to move to CentOS 7, as there are some issues using Docker with CentOS 6.6 (reboot issues for me), and I'm trying to pull the current centos image from Docker Hub (it should just be docker pull centos).
However, because I already had a centos Docker image installed on the 6.6 virtual machine, I thought it conflicted with the one I'm trying to pull on CentOS 7. The error states that the image (f1b something) is already being used on the system, which causes the download not to go through. Simply going over to the CentOS 6.6 machine and trying to remove the images (which are labeled as none, by the way, so you have to run docker images -a), even with force, does nothing. The only solution so far is to do a full removal of Docker and its dependencies and reinstall it from scratch.
Of course, this is not the solution I want. One of two things would work: either a way to make the two of them coexist, or a way to remove the current image without removing any other images. Or, if I am not getting this right, an entirely different approach.
EDIT +1: OK, here's the actual error I'm receiving when doing the docker pull...
f1b...: download complete
f1b...:error downloading dependant layers
c85...:Downloading [>
7322...: Error pulling image (latest) from docker.io/centos, endpoint :https://registry-1.docker.io/v1,Dr
7322...:Error pulling image (latest) from docker.io/centos, Driver devicemapper failed to create image rootfs
FATA[0012] Error pulling image (latest) from docker.io/centos, Driver device mapper failed to create image rootfs f1b...:error running DeviceCreate (createSnapDevice) dm_task_run failed
And looking over the problem more, I'm not so sure it's because of the CentOS 6.6 machine like I had initially thought, despite the layers sharing the same IDs.
EDIT +2: Stranger still, the fatal error code keeps changing (I'm assuming that's the FATA[0012] part?).
http://docker-sean.readthedocs.org/en/latest/chapter1.html
There's a config file that needs to be changed for CentOS 7 Docker users, which comes down to applying the following change:
OPTIONS='-g /docker/data -p /var/run/docker.pid'
in /etc/sysconfig/docker (open it with vim/vi).
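For context, -g points the daemon's data root at /docker/data, so that directory has to exist before the daemon is restarted; a small sketch, using the path from the OPTIONS line above:

# Create the data directory referenced by OPTIONS, then restart the daemon
sudo mkdir -p /docker/data
sudo service docker restart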
I swear docker is going to be the death of me...
EDIT +1: OK, let's remap the solution to the following, starting from a new CentOS 7 machine...
yum install docker
service docker start
docker pull centos
ERROR
systemctl enable docker.service
ERROR?
sudo systemctl enable docker.service
systemctl start docker.service
ERROR?
yum remove docker
yum install epel-release.noarch
yum install docker-io
vim /etc/sysconfig/docker
OPTIONS='-g /docker/data -p /var/run/docker.pid'
service docker restart
docker pull centos
and that's how I got Docker to work on the new VM, if I mapped it correctly.
Also, one of the commands I might have used was thin_check. Somebody used it to verify Docker in this link.
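A hedged sketch of that kind of check on a devicemapper (loopback) setup; the metadata path below is the default loop-lvm location and may differ on your system:

# thin_check comes from the device-mapper-persistent-data package
sudo yum install -y device-mapper-persistent-data

# Stop Docker, then check the thin-pool metadata
sudo service docker stop
sudo thin_check /var/lib/docker/devicemapper/devicemapper/metadata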
EDIT +2:
Oh wow, this explains even better what's happening here. See, the Docker server can be installed straight out of the box with CentOS 7; however, the daemon must still be installed from EPEL. As a reminder, the daemon is the part that actually runs the Docker service. The server just allows Docker to connect to the internet and view its repositories. The link is right here.
