Does Docker Swarm reschedule services to other nodes when a node is drained? - linux

I've got a Docker Swarm stack: three managers and two worker nodes, precisely. There are a few services on one node (and on that node only) which are working correctly, and I've got a zombie container on it which can't be killed. I want to drain this node to prevent access to this "bad" container (which is running, it just isn't responding to any commands; it's a website container) and create a healthy one later. I'm not sure whether these services would be rescheduled to the "healthy" node.
Presumably, docker system prune didn't finish its work correctly, and now the system is in a locked state.
I'm using Moby Linux.
$ docker version
Client:
Version: 17.12.0-ce
API version: 1.35
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:05:03 2017
OS/Arch: linux/amd64
Server:
Engine:
Version: 17.12.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:12:30 2017
OS/Arch: linux/amd64
Experimental: true
UPD1: draining the node doesn't stop any containers or reschedule the services, at least not right away. Basically, all services on that node continue functioning. That was the question of interest.
UPD2: after rebooting, the node started giving 'No such image was found...' errors; this was resolved by logging in to Docker Hub again.

That's the principle of Swarm (and of Kubernetes as well): if a node goes down, then as long as a manager exists, it should redistribute all containers onto the other nodes.
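For reference, a minimal sketch of the drain workflow described in this answer (the node and service names are placeholders):

# mark the node as drained; Swarm shuts down its tasks and reschedules them elsewhere
docker node update --availability drain <node-hostname>

# watch the tasks: the old ones move to Shutdown and replacements start on the remaining nodes
docker service ps <service-name>

# once the zombie container has been cleaned up, return the node to the pool
docker node update --availability active <node-hostname>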

Related

Guacamole container exits and can't authenticate with MySQL

I am trying to install the Apache Guacamole container by following the instructions at https://guacamole.apache.org/doc/gug/guacamole-docker.html
I am able to install the guacamole/guacd and mysql containers, but when I install the guacamole/guacamole container it exits as soon as it is started.
I reinstalled the container a couple of times, but there was no improvement. The Guacamole container log reports that authentication didn't succeed.
The log says the container needs to authenticate with MySQL, but I couldn't get it to work even though I followed the instructions on the website. I'm probably missing something.
docker version:
Client:
Version: 20.10.12
API version: 1.41
Go version: go1.17.3
Git commit: 20.10.12-0ubuntu4
Built: Mon Mar 7 17:10:06 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.12
API version: 1.41 (minimum version 1.12)
Go version: go1.17.3
Git commit: 20.10.12-0ubuntu4
Built: Mon Mar 7 15:57:50 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.9-0ubuntu3.1
GitCommit:
runc:
Version: 1.1.0-0ubuntu1.1
GitCommit:
docker-init:
Version: 0.19.0
GitCommit:
docker ps:
root@server:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4288a45a153f guacamole/guacamole "/opt/guacamole/bin/…" About an hour ago Exited (1) About an hour ago guacamole-guacamole
e17d224935d1 mysql "docker-entrypoint.s…" About an hour ago Up About an hour 3306/tcp, 33060/tcp guacamole-mysql
7d0e75730239 guacamole/guacd "/bin/sh -c '/usr/lo…" 2 hours ago Up 2 hours (healthy) 4822/tcp guacd-guacd
Logs of the container :
root@server:~# docker logs guacamole-guacamole
FATAL: No authentication configured
-------------------------------------------------------------------------------
The Guacamole Docker container needs at least one authentication mechanism in
order to function, such as a MySQL database, PostgreSQL database, LDAP
directory or RADIUS server. Please specify at least the MYSQL_DATABASE or
POSTGRES_DATABASE environment variables, or check Guacamole's Docker
documentation regarding configuring LDAP and/or custom extensions.
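For reference, a minimal sketch of the run command that the error message above is asking for, using the container names from the docker ps output above; the database name, user, and password are placeholders and must match whatever was configured in the guacamole-mysql container:

docker run -d --name guacamole-guacamole \
  --link guacamole-mysql:mysql \
  --link guacd-guacd:guacd \
  -e GUACD_HOSTNAME=guacd \
  -e MYSQL_HOSTNAME=mysql \
  -e MYSQL_DATABASE=guacamole_db \
  -e MYSQL_USER=guacamole_user \
  -e MYSQL_PASSWORD=changeme \
  -p 8080:8080 \
  guacamole/guacamole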

Can't get cloudstor:azure Docker plugin to work with latest versions of Docker/plugin

I'm attempting to create a docker volume using the cloudstor:azure docker plugin on a Ubuntu 18 VM in Azure.
I managed to get this working once on a VM with this Docker version:
Client:
Version: 18.09.7
API version: 1.39
Go version: go1.10.1
Git commit: 2d0083d
Built: Fri Aug 16 14:20:06 2019
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.09.7
API version: 1.39 (minimum version 1.12)
Go version: go1.10.1
Git commit: 2d0083d
Built: Wed Aug 14 19:41:23 2019
OS/Arch: linux/amd64
Experimental: false
And installing build azure-v17.03.0-ce of the plugin. However, that's not the default version of Docker that comes with the Ubuntu 18 VM image, so at some point I must have upgraded something, but I can't reproduce this.
So I tried to upgrade Docker and the plugin to 19.03, I now get different errors when installing the plugin or trying to enable it:
docker plugin enable cloudstor:azure
Error response from daemon: failed to listen to abstract unix socket "/containerd-shim/plugins.moby/7bee13f0a815242cfcf1bf5d715ab1bc4d687c482e5ac0051aae90061980f8bb/shim.sock": listen unix ?/containerd-shim/plugins.moby/7bee13f0a815242cfcf1bf5d715ab1bc4d687c482e5ac0051aae90061980f8bb/shim.sock: bind: permission denied: unknown
I've noticed that the Docker version that does work has no 'ce' suffix indicating Community Edition; I'm not sure if that matters.
If I update the Docker daemon to 18.09.9 and use docker4x/cloudstor:azure-v17.03.0-ce, I can get the plugin to work correctly. But I can't get this working with any other versions of Docker or the plugin.
How do you get the cloudstor:azure Docker plugin working on a Ubuntu VM in Azure with latest versions of Docker and the plugin?
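For reference, installing and smoke-testing the plugin typically looks like the sketch below; the setting names follow the docker4x/cloudstor Azure build and the storage account values are placeholders:

# install the plugin under an alias and pass the Azure storage settings
docker plugin install --alias cloudstor:azure --grant-all-permissions \
  docker4x/cloudstor:azure-v17.03.0-ce \
  CLOUD_PLATFORM=AZURE \
  AZURE_STORAGE_ACCOUNT="<storage-account-name>" \
  AZURE_STORAGE_ACCOUNT_KEY="<storage-account-key>"

# quick check: create and list a volume backed by the plugin
docker volume create -d cloudstor:azure testvol
docker volume ls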

Fabric ./byfn.sh up returns error "...container_linux.go:348..." [win10 home edition]

I am trying to start a Fabric network according to the doc "Building Your First Network" and the prerequisite docs.
However, when I execute the command ./byfn.sh up, it returns the error below:
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "no such file or directory": unknown ERROR !!!! Test failed
I have already tried searching for this error, but with no luck.
I would appreciate it if anyone could help me.
System information:
$ docker version
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24302
Built: Fri Mar 23 08:31:36 2018
OS/Arch: windows/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.06.0-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: 0ffa825
Built: Wed Jul 18 19:13:39 2018
OS/Arch: linux/amd64
Experimental: false
37675@DESKTOP-JU1BJMT MINGW64 /c/Users/fabric-samples_120/first-network ((v1.2.0))
$ go version
go version go1.10.1 windows/amd64
The complete output of byfn.sh up is here.
Please change the docker exec command to the one below:
docker exec cli //bin//bash scripts/script.sh $CHANNEL_NAME $CLI_DELAY $LANGUAGE $CLI_TIMEOUT $VERBOSE
Adding //bin//bash makes the command invoke bash inside the container (the double slashes prevent Git Bash/MinGW on Windows from rewriting the path).
If it later throws an end-of-line error because the shell doesn't understand DOS/Windows-style line endings,
edit the script in Notepad++: Edit > EOL Conversion > select Unix/OSX.
Then it works.
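If Notepad++ is not available, the same end-of-line conversion can be done from the command line (assuming dos2unix is installed):

dos2unix byfn.sh scripts/script.sh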
I guess it is because your Docker version is too new for release-1.2, as documented in
https://hyperledger-fabric.readthedocs.io/en/release-1.2/prereqs.html#docker-and-docker-compose
Try pinning to Docker version 17.06.2-ce; it may work better.

'kubectl' throws 'failed to negotiate an api version' error when Kubernetes is installed using Docker

I installed Docker on my machine using the guide at https://docs.docker.com/engine/installation/linux/ubuntulinux/ and I also installed Kubernetes on my local machine using http://kubernetes.io/docs/getting-started-guides/docker/.
But when I run "kubectl get nodes" I get the error: error: failed to negotiate an api version; server supports: map[], client supports: map[v1:{} metrics/v1alpha1:{} extensions/v1beta1:{} componentconfig/v1alpha1:{} batch/v1:{} autoscaling/v1:{} authorization.k8s.io/v1beta1:{}].
The docker version on my machine is as follows.
Client:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Tue Apr 26 23:30:23 2016
OS/Arch: linux/amd64
Server:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Tue Apr 26 23:30:23 2016
OS/Arch: linux/amd64
Looks like the server responded with an empty list of api versions that it supports.
Can you post the output of kubectl version?
That will print the git versions of kubectl and api server and will help us find if there is any incompatibility between the two.
I've tried using v1.3.0-alpha.3 of Kubernetes with the same version of Docker as the OP. I'm still having the same issue, though. Should this be fixed in alpha.3, or do I need to wait for a new version?
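For reference, a quick way to compare the two sides when checking the version mismatch suggested above (localhost:8080 assumes the insecure API port used by the local Docker-based setup in the guide):

# print the client and server versions side by side
kubectl version

# query the API server directly
curl -s http://localhost:8080/version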

New device node created on host does not get reflected in Docker container when using --device flag

I'm running a container with the following options:
docker run -d --device=/dev/bus/usb:/dev/bus/usb --device=/dev/ttyS0:/dev/ttyS0 instr_img
Inside the container I have Python code which resets a USB device, which in turn causes a device file at '/dev/bus/usb/002/005' on the host to be removed and a new file (/dev/bus/usb/002/006) to be created in its place. The problem is that inside the container '/dev/bus/usb/002/005' still exists, and '/dev/bus/usb/002/006' is nowhere to be found. The directory '/dev/bus/usb/002' on the host and in the container are now out of sync. As a result, the code running inside the container throws an exception because it can't talk to the USB device. I confirmed this by manually creating a new device file (mknod) in the container and saw that it did not get synced to the host, and vice versa. Is this an unsupported feature or a bug in Docker?
>docker version
Client:
Version: 1.9.0
API version: 1.21
Go version: go1.4.2
Git commit: 76d6bc9
Built: Tue Nov 3 17:48:04 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.9.0
API version: 1.21
Go version: go1.4.2
Git commit: 76d6bc9
Built: Tue Nov 3 17:48:04 UTC 2015
OS/Arch: linux/amd64
>docker info
Containers: 66
Images: 313
Server Version: 1.9.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 445
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-47-generic
Operating System: Ubuntu 15.04
CPUs: 4
Total Memory: 7.69 GiB
Name: my-host-1
ID: VIT4:S2P3:Q4TY:A3I4:L4WH:HFWJ:I36U:PBTV:B3VW:NFXB:LDNM:KY7G
Username: myuser
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
>uname -a
Linux my-host-1 3.19.0-47-generic #53-Ubuntu SMP Mon Jan 18 14:02:48 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
My workaround is to issue a mknod command to create a new device file with the minor device number incremented by 1 (from the previous number) every time a device reset happens; however, this is not a clean workaround, since I need to put in extra checks because this program is used in multiple environments both outside and inside the Docker container. I could very well be using Docker improperly for this use case, since I'm very green (a noob) with Docker.
Some comments/insights from some experienced Docker users would really be appreciated. It could be a deal breaker for me to dockerize this program if I can't find a clean workaround for this issue.
Thanks in advance for your comments!
From all my research online and some experimentation with '--device', I've found that ephemeral (hot-pluggable) devices are not supported by this option. It's a shame that the Docker documentation doesn't state this clearly, if at all; I only found one comment online from a user who mentioned it in passing. For those who want to use '--device' for such devices: don't; use the '--privileged' and '-v' options instead. This also avoids having to specify the exact device file name, e.g. /dev/bus/usb/002/088; instead you can specify just /dev/bus/usb. The '--device' option requires the actual device file name to work.
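A minimal sketch of the adjusted run command based on this answer (instr_img is the image name from the question):

# bind-mount the USB bus directory so re-enumerated device nodes stay in sync with the host;
# --privileged grants the broad device access that --device would otherwise scope to a single node
docker run -d --privileged -v /dev/bus/usb:/dev/bus/usb instr_img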
