Can't get cloudstor:azure Docker plugin to work with latest versions of Docker/plugin - azure

I'm attempting to create a Docker volume using the cloudstor:azure Docker plugin on an Ubuntu 18 VM in Azure.
I managed to get this working once on a VM with this Docker version:
Client:
Version: 18.09.7
API version: 1.39
Go version: go1.10.1
Git commit: 2d0083d
Built: Fri Aug 16 14:20:06 2019
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.09.7
API version: 1.39 (minimum version 1.12)
Go version: go1.10.1
Git commit: 2d0083d
Built: Wed Aug 14 19:41:23 2019
OS/Arch: linux/amd64
Experimental: false
And installing build azure-v17.03.0-ce of the plugin. However, that's not the default Docker version that ships with the Ubuntu 18 VM image, so at some point I must have upgraded something, but I can't reproduce it.
So I tried upgrading Docker and the plugin to 19.03; now I get different errors when installing the plugin or trying to enable it:
docker plugin enable cloudstor:azure
Error response from daemon: failed to listen to abstract unix socket "/containerd-shim/plugins.moby/7bee13f0a815242cfcf1bf5d715ab1bc4d687c482e5ac0051aae90061980f8bb/shim.sock": listen unix ?/containerd-shim/plugins.moby/7bee13f0a815242cfcf1bf5d715ab1bc4d687c482e5ac0051aae90061980f8bb/shim.sock: bind: permission denied: unknown
I've noticed that the Docker version that does work has no 'ce' (Community Edition) suffix; I'm not sure if that matters.
If I update the Docker daemon to 18.09.9 and use docker4x/cloudstor:azure-v17.03.0-ce, I can get the plugin to work correctly. But I can't get this working with any other versions of Docker or the plugin.
How do you get the cloudstor:azure Docker plugin working on an Ubuntu VM in Azure with the latest versions of Docker and the plugin?
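For reference, this is a rough sketch of the install step I mean; the CLOUDSTOR_AZURE_STORAGE_ACCOUNT / CLOUDSTOR_AZURE_STORAGE_ACCOUNT_KEY setting names are what I believe the plugin expects, and the values below are placeholders rather than my real ones:
docker plugin install --alias cloudstor:azure --grant-all-permissions \
    docker4x/cloudstor:azure-v17.03.0-ce \
    CLOUDSTOR_AZURE_STORAGE_ACCOUNT="<storage-account-name>" \
    CLOUDSTOR_AZURE_STORAGE_ACCOUNT_KEY="<storage-account-key>"
followed by docker plugin enable cloudstor:azure and then docker volume create -d cloudstor:azure once the plugin is enabled.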

Related

Guacamole container exits and couldn't authenticated with Mysql

I am trying to install the Apache Guacamole container, following the instructions at https://guacamole.apache.org/doc/gug/guacamole-docker.html.
I am able to run the guacamole/guacd and mysql containers, but the guacamole/guacamole container exits as soon as it starts.
I recreated the container a couple of times, but there was no improvement. The Guacamole container log reports that authentication did not succeed.
The log says the container needs to authenticate with MySQL, but I couldn't get it to work even though I tried to follow the instructions on the website. I am probably missing something.
docker version:
Client:
Version: 20.10.12
API version: 1.41
Go version: go1.17.3
Git commit: 20.10.12-0ubuntu4
Built: Mon Mar 7 17:10:06 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.12
API version: 1.41 (minimum version 1.12)
Go version: go1.17.3
Git commit: 20.10.12-0ubuntu4
Built: Mon Mar 7 15:57:50 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.9-0ubuntu3.1
GitCommit:
runc:
Version: 1.1.0-0ubuntu1.1
GitCommit:
docker-init:
Version: 0.19.0
GitCommit:
docker ps:
root@server:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4288a45a153f guacamole/guacamole "/opt/guacamole/bin/…" About an hour ago Exited (1) About an hour ago guacamole-guacamole
e17d224935d1 mysql "docker-entrypoint.s…" About an hour ago Up About an hour 3306/tcp, 33060/tcp guacamole-mysql
7d0e75730239 guacamole/guacd "/bin/sh -c '/usr/lo…" 2 hours ago Up 2 hours (healthy) 4822/tcp guacd-guacd
Logs of the container :
root@server:~# docker logs guacamole-guacamole
FATAL: No authentication configured
-------------------------------------------------------------------------------
The Guacamole Docker container needs at least one authentication mechanism in
order to function, such as a MySQL database, PostgreSQL database, LDAP
directory or RADIUS server. Please specify at least the MYSQL_DATABASE or
POSTGRES_DATABASE environment variables, or check Guacamole's Docker
documentation regarding configuring LDAP and/or custom extensions.
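For reference, this is roughly the run command I understand the linked documentation to describe for the web-application container; the MYSQL_* variable names come from the docs and the error above, while the database name, user and password here are just placeholders:
docker run --name guacamole-guacamole \
    --link guacd-guacd:guacd \
    --link guacamole-mysql:mysql \
    -e MYSQL_DATABASE=guacamole_db \
    -e MYSQL_USER=guacamole_user \
    -e MYSQL_PASSWORD=some_password \
    -d -p 8080:8080 guacamole/guacamole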

access denied in docker when mounting volumes while userns is enabled

I am trying to use Docker's user namespaces feature, following the official documentation here.
I have added the configuration to my daemon.json file like this:
{
  "debug": true,
  "experimental": false,
  "features": { "buildkit": false },
  "userns-remap": "default"
}
I also verified that both subuid and subgid in /etc contain the following entries:
dhost:100000:65536
dockremap:165536:65536
I built an image to verify the functionality, based on alpine:latest, like so:
FROM alpine:latest
RUN mkdir -p /root/.cache
WORKDIR /app
Command used to build the image: docker image build -t myimage:1 .
Then I run a container from this image using
docker container run -it --rm --name mycontainer -v "$(pwd)/test:/app" myimage:1 sh
I can access the workdir inside the container (/app), but I cannot touch/create any file without getting "permission denied". Do I need to change the owner of the test directory I mounted? If yes, who should own it?
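My current understanding (please correct me if this is wrong): with userns-remap, root inside the container (uid 0) is mapped to the first uid of dockremap's subordinate range, which is 165536 according to the /etc/subuid entry above. So a sketch of a possible fix, assuming the container process runs as root, would be to give that remapped uid ownership of the mounted directory:
sudo chown -R 165536:165536 "$(pwd)/test"
After that, writes from inside the container to /app should no longer be denied.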
docker version:
Client: Docker Engine - Community
Version: 20.10.14
API version: 1.41
Go version: go1.16.15
Git commit: a224086
Built: Thu Mar 24 01:47:57 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.14
API version: 1.41 (minimum version 1.12)
Go version: go1.16.15
Git commit: 87a90dc
Built: Thu Mar 24 01:45:46 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.11
GitCommit: 3df54a852345ae127d1fa3092b95168e4a88e2f8
runc:
Version: 1.0.3
GitCommit: v1.0.3-0-gf46b6ba
docker-init:
Version: 0.19.0
GitCommit: de40ad0
Host OS info
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic

Does Docker Swarm reschedule services to other nodes when a node is drained?

I've got a Docker Swarm stack: three managers and two worker nodes, precisely. There are a few services on one node (on that node only) that are working correctly, and I've also got a zombie container on it which can't be killed. I want to drain this node to prevent access to this "bad" container (which is still running, it just isn't responding to any command; it's a website container) and create a healthy one later. I'm not sure whether these services would be rescheduled to the "healthy" nodes?
Presumably, docker system prune hasn't finished its work correctly, and now the system is in a locked state.
I'm using Moby Linux.
$ docker version
Client:
Version: 17.12.0-ce
API version: 1.35
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:05:03 2017
OS/Arch: linux/amd64
Server:
Engine:
Version: 17.12.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:12:30 2017
OS/Arch: linux/amd64
Experimental: true
UPD1: draining a node doesn't stop any containers or rearrange the services, at least not from the start. Basically, all services on that node continue functioning. That was the question of interest.
UPD2: after rebooting, it started giving errors about 'No such image was found...'; this was resolved by logging in to Docker Hub again.
That's the principle of Swarm (the same as Kubernetes): if a node goes down, then as long as a manager exists, it should redistribute all containers to the other nodes.
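For completeness, the drain and the follow-up check would look something like this (node and service names are placeholders):
docker node update --availability drain <node-hostname>
docker service ps <service-name>    # shows which node each task is currently running on
docker node update --availability active <node-hostname>    # re-enable the node later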

Fabric ./byfn.sh up returns error "...container_linux.go:348..." [win10 home edition]

I am trying to start a Fabric network according to the doc "Building Your First Network" and the prerequisite docs.
However, when I execute the command ./byfn.sh up, it returns the error below:
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "no such file or directory": unknown ERROR !!!! Test failed
I have already tried searching for this error, but with no luck.
I would appreciate it if anyone can help me.
System information:
$ docker version
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24302
Built: Fri Mar 23 08:31:36 2018
OS/Arch: windows/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.06.0-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: 0ffa825
Built: Wed Jul 18 19:13:39 2018
OS/Arch: linux/amd64
Experimental: false
37675@DESKTOP-JU1BJMT MINGW64 /c/Users/fabric-samples_120/first-network ((v1.2.0))
$ go version
go version go1.10.1 windows/amd64
The complete output of byfn.sh up is here.
Please change the docker exec command to the one below:
docker exec cli //bin//bash scripts/script.sh $CHANNEL_NAME $CLI_DELAY $LANGUAGE $CLI_TIMEOUT $VERBOSE
Adding //bin//bash to the command makes it point to bash inside the container (the double slashes keep Git Bash/MinGW on Windows from rewriting the path).
Later, in case it throws an EOL error because the shell doesn't understand DOS/Windows-style line endings, edit the script in Notepad++: Edit > EOL Conversion > select Unix/OSX.
Then it works.
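If you prefer the command line over Notepad++, converting the line endings with dos2unix (assuming it is available, e.g. via Git Bash, WSL or apt) achieves the same thing:
dos2unix scripts/script.sh
# any other shell scripts invoked from script.sh may need the same conversion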
I guess it is because your Docker version is too new for release-1.2, as documented in
https://hyperledger-fabric.readthedocs.io/en/release-1.2/prereqs.html#docker-and-docker-compose
Try pinning Docker to version 17.06.2-ce; it could work better.

'kubectl' throws 'failed to negotiate an api version' error while installing Kubernetes using Docker

I installed Docker on my machine using the guide at https://docs.docker.com/engine/installation/linux/ubuntulinux/ and I also installed Kubernetes on my local machine using http://kubernetes.io/docs/getting-started-guides/docker/.
But when I run "kubectl get nodes" I get the error: error: failed to negotiate an api version; server supports: map[], client supports: map[v1:{} metrics/v1alpha1:{} extensions/v1beta1:{} componentconfig/v1alpha1:{} batch/v1:{} autoscaling/v1:{} authorization.k8s.io/v1beta1:{}].
The docker version on my machine is as follows.
Client:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Tue Apr 26 23:30:23 2016
OS/Arch: linux/amd64
Server:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Tue Apr 26 23:30:23 2016
OS/Arch: linux/amd64
Looks like the server responded with an empty list of API versions that it supports.
Can you post the output of kubectl version?
That will print the git versions of kubectl and the API server, and will help us find whether there is any incompatibility between the two.
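Concretely, something like the following (as far as I remember, both commands exist in kubectl of that era):
kubectl version         # prints client and server versions
kubectl api-versions    # lists the API groups/versions the server actually advertises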
I've tried using v1.3.0-alpha.3 of Kubernetes with the same version of Docker as the OP. I'm still having the same issue though. Should this be fixed in alpha.3, or do I need to wait for a new version?
