Though I am able to successfully push a newly pulled Docker image to a Nexus 3 Docker hosted repo, an error "invalid checksum digest format" is thrown at the end. I pulled "jenkins:latest" from Docker Hub, then tagged it and pushed it to a Nexus Docker hosted repo.
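For reference, the sequence I ran was roughly the following (the registry host name is a placeholder for my Nexus instance):
docker pull jenkins:latest
docker tag jenkins:latest mynexus.example.com:18443/jenkins:latest
docker push mynexus.example.com:18443/jenkins:latest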
f3e4e0468545: Pushed
656120ad8c56: Pushed
30f9a83f20f3: Pushed
78dbfa5b7cbc: Pushed
invalid checksum digest format
I know Nexus 3 is not LTS yet, but I want to be sure that it's not my environment settings. I have an insecure Docker registry on port 18443.
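For completeness, the daemon is started with the registry whitelisted as insecure; on Ubuntu 14.04 with Docker 1.10 I did this in /etc/default/docker, roughly like this (host name is a placeholder), followed by a service docker restart:
DOCKER_OPTS="--insecure-registry mynexus.example.com:18443"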
docker info
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 53
Server Version: 1.10.1
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 89
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 3.16.0-53-generic
Operating System: Ubuntu 14.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.86 GiB
Client:
Version: 1.10.1
API version: 1.22
Go version: go1.5.3
Git commit: 9e83765
Built: Thu Feb 11 19:27:08 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.1
API version: 1.22
Go version: go1.5.3
Git commit: 9e83765
Built: Thu Feb 11 19:27:08 2016
OS/Arch: linux/amd64
Docker version 1.10 was not out when Nexus 3.0m7 was released. We are working on adding support for it now. This specific issue is being tracked here:
https://issues.sonatype.org/browse/NEXUS-9766
UPDATE: This issue is now resolved in Nexus Repository Manager 3.0.0-03. For upgrade instructions see https://support.sonatype.com/hc/en-us/articles/217967608-How-to-Upgrade-Nexus-3-Milestone-m7-to-3-0-0-Final.
I am trying to install the Apache Guacamole containers following the instructions at https://guacamole.apache.org/doc/gug/guacamole-docker.html
I am able to install the guacamole/guacd and mysql containers, but when I install the guacamole/guacamole container it exits as soon as it starts.
I reinstalled the container a couple of times but there was no improvement. The Guacamole container log reports that authentication did not succeed.
The log says the container needs to authenticate with MySQL, but I couldn't get that to work even though I followed the instructions on the website. I am probably missing something.
docker version:
Client:
Version: 20.10.12
API version: 1.41
Go version: go1.17.3
Git commit: 20.10.12-0ubuntu4
Built: Mon Mar 7 17:10:06 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.12
API version: 1.41 (minimum version 1.12)
Go version: go1.17.3
Git commit: 20.10.12-0ubuntu4
Built: Mon Mar 7 15:57:50 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.9-0ubuntu3.1
GitCommit:
runc:
Version: 1.1.0-0ubuntu1.1
GitCommit:
docker-init:
Version: 0.19.0
GitCommit:
docker ps:
root@server:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4288a45a153f guacamole/guacamole "/opt/guacamole/bin/…" About an hour ago Exited (1) About an hour ago guacamole-guacamole
e17d224935d1 mysql "docker-entrypoint.s…" About an hour ago Up About an hour 3306/tcp, 33060/tcp guacamole-mysql
7d0e75730239 guacamole/guacd "/bin/sh -c '/usr/lo…" 2 hours ago Up 2 hours (healthy) 4822/tcp guacd-guacd
Logs of the container:
root@server:~# docker logs guacamole-guacamole
FATAL: No authentication configured
-------------------------------------------------------------------------------
The Guacamole Docker container needs at least one authentication mechanism in
order to function, such as a MySQL database, PostgreSQL database, LDAP
directory or RADIUS server. Please specify at least the MYSQL_DATABASE or
POSTGRES_DATABASE environment variables, or check Guacamole's Docker
documentation regarding configuring LDAP and/or custom extensions.
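For reference, the MYSQL_* variables the log mentions are the ones the linked documentation has you pass to docker run; a rough sketch with placeholder credentials and the container names from my docker ps output above would be:
docker run --name guacamole-guacamole \
  --link guacd-guacd:guacd \
  --link guacamole-mysql:mysql \
  -e MYSQL_DATABASE=guacamole_db \
  -e MYSQL_USER=guacamole_user \
  -e MYSQL_PASSWORD=guacamole_password \
  -d -p 8080:8080 guacamole/guacamole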
I am trying to use Docker's user namespaces feature, following the official documentation here.
I have added the configuration to my daemon.json file like this:
{
"debug":true,
"experimental": false,
"features":{"buildkit": false},
"userns-remap":"default"
}
I also verified that both /etc/subuid and /etc/subgid contain the following entries:
dhost:100000:65536
dockremap:165536:65536
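If it is useful, this is roughly how I check that the remapping is actually in effect after restarting the daemon (the 165536.165536 directory name follows from the dockremap entry above):
sudo systemctl restart docker
docker info --format '{{.SecurityOptions}}'   # should include name=userns
ls -d /var/lib/docker/165536.165536           # per-remap storage directory appears when remapping is active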
I built an image to verify the functionality, using alpine:latest like so:
FROM alpine:latest
RUN mkdir -p /root/.cache
WORKDIR /app
The command used to build the image: docker image build -t myimage:1 .
Then I run a container from this image using
docker container run -it --rm --name mycontainer -v "$(pwd)/test:/app" myimage:1 sh
I get access to the workdir inside the container (/app), but I cannot touch/create any file without getting permission denied. Do I need to change the owner of the test directory I mounted? If yes, who should own it?
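My working assumption (based on the dockremap entry above, so please correct me if this is wrong) is that root inside the container is mapped to host UID/GID 165536, so the bind-mounted directory would have to be owned by (or writable for) that ID, e.g.:
sudo chown 165536:165536 ./test   # first UID/GID in the dockremap subordinate range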
docker version
Client: Docker Engine - Community
Version: 20.10.14
API version: 1.41
Go version: go1.16.15
Git commit: a224086
Built: Thu Mar 24 01:47:57 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.14
API version: 1.41 (minimum version 1.12)
Go version: go1.16.15
Git commit: 87a90dc
Built: Thu Mar 24 01:45:46 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.11
GitCommit: 3df54a852345ae127d1fa3092b95168e4a88e2f8
runc:
Version: 1.0.3
GitCommit: v1.0.3-0-gf46b6ba
docker-init:
Version: 0.19.0
GitCommit: de40ad0
Host OS info
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic
I am trying to run a Linux-based Docker container on an Azure virtual machine running Windows Server 2019.
I have worked through a number of tutorials for this and enabled the experimental flags, so docker version shows:
PS C:\Users\azure> docker version
Client: Docker Engine - Enterprise
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 2ee0c57608
Built: 11/13/2019 08:00:16
OS/Arch: windows/amd64
Experimental: false
Server: Docker Engine - Enterprise
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.24)
Go version: go1.12.12
Git commit: 2ee0c57608
Built: 11/13/2019 07:58:51
OS/Arch: windows/amd64
Experimental: true
And docker info:
docker info
Client:
Debug Mode: false
Plugins:
cluster: Manage Docker clusters (Docker Inc., v1.2.0)
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 4
Server Version: 19.03.5
Storage Driver: lcow (linux) windowsfilter (windows)
LCOW:
Windows:
Logging Driver: json-file
Plugins:
Volume: local
Network: ics internal l2bridge l2tunnel nat null overlay private transparent
Log: awslogs etwlogs fluentd gcplogs gelf json-file local logentries splunk syslog
Swarm: inactive
Default Isolation: process
Kernel Version: 10.0 17763 (17763.1.amd64fre.rs5_release.180914-1434)
Operating System: Windows Server 2019 Datacenter Version 1809 (OS Build 17763.1098)
OSType: windows
Architecture: x86_64
CPUs: 1
Total Memory: 2GiB
Name: xxx-yyy
ID: R2TB:P4GZ:MRU4:IU4A:BPTU:DPYY:GV7C:VNL3:JW6F:IRKJ:BTKW:BVNE
Docker Root Dir: C:\ProgramData\docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
But finally, when I run any Linux container I get this error:
PS C:\Users\azure> docker run --platform=linux hello-world:linux
docker : C:\Program Files\Docker\docker.exe: Error response from daemon: failed to start
service utility VM (createreadwrite): hcsshim::CreateComputeSystem
2410bb8b9e431b1068750d0c79376b1fdc196eef97c0a48ec8571775349acde7_svm: The virtual machine
could not be started because a required feature is not installed.
At line:1 char:1
+ docker run --platform=linux hello-world:linux
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (C:\Program File... not installed.:String) [],
RemoteException
+ FullyQualifiedErrorId : NativeCommandError
(extra info: {"SystemType":"container","Name":"2410bb8b9e431b1068750d0c79376b1fdc196eef97c0
a48ec8571775349acde7_svm","Layers":null,"HvPartition":true,"HvRuntime":{"ImagePath":"C:\\Pr
ogram Files\\Linux Containers","LinuxInitrdFile":"initrd.img","LinuxKernelFile":"kernel"},"
ContainerType":"linux","TerminateOnLastHandleClosed":true}).
See 'C:\Program Files\Docker\docker.exe run --help'.
Am I missing something in Azure? In the VM config?
I solved my problem, and it wasn't an issue with the config, Docker, or Windows Server.
The problem was the hardware: when you select the Azure VM size, you need a processor that supports nested virtualization. The solution is described here: https://blog.darrenjrobinson.com/azure-vm-docker-createcontainer-error-0xc0370102/
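For anyone hitting the same thing, one quick sanity check (a sketch, not part of my original troubleshooting) is to look at the Hyper-V requirements reported inside the VM; Azure Dv3/Ev3-series sizes are among those that expose nested virtualization:
PS C:\Users\azure> systeminfo
# Check the "Hyper-V Requirements" section at the end of the output; on a VM size without
# nested virtualization, "Virtualization Enabled In Firmware" is reported as "No".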
I'm attempting to create a Docker volume using the cloudstor:azure Docker plugin on an Ubuntu 18 VM in Azure.
I managed to get this working once on a VM with this Docker version:
Client:
Version: 18.09.7
API version: 1.39
Go version: go1.10.1
Git commit: 2d0083d
Built: Fri Aug 16 14:20:06 2019
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.09.7
API version: 1.39 (minimum version 1.12)
Go version: go1.10.1
Git commit: 2d0083d
Built: Wed Aug 14 19:41:23 2019
OS/Arch: linux/amd64
Experimental: false
And installing build azure-v17.03.0-ce of the plugin. However, that's not the default version of Docker that comes with the Ubuntu 18 VM image, so at some point I must have upgraded something, but I can't reproduce this.
So I tried to upgrade Docker and the plugin to 19.03, and I now get different errors when installing the plugin or trying to enable it:
docker plugin enable cloudstor:azure
Error response from daemon: failed to listen to abstract unix socket "/containerd-shim/plugins.moby/7bee13f0a815242cfcf1bf5d715ab1bc4d687c482e5ac0051aae90061980f8bb/shim.sock": listen unix ?/containerd-shim/plugins.moby/7bee13f0a815242cfcf1bf5d715ab1bc4d687c482e5ac0051aae90061980f8bb/shim.sock: bind: permission denied: unknown
I've noticed that the Docker version that does work has no 'ce' suffix to indicate Community Edition; not sure if that matters.
If I update the Docker daemon to 18.09.9 and use docker4x/cloudstor:azure-v17.03.0-ce, I can get the plugin to work correctly. But I can't get this working with any other versions of Docker or the plugin.
How do you get the cloudstor:azure Docker plugin working on an Ubuntu VM in Azure with the latest versions of Docker and the plugin?
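For context, the install command follows the usual plugin install pattern, roughly as below (storage account name and key are placeholders; the setting names are from the Docker for Azure cloudstor docs as I understand them, so treat them as assumptions):
docker plugin install --alias cloudstor:azure --grant-all-permissions \
  docker4x/cloudstor:azure-v17.03.0-ce \
  CLOUD_PLATFORM=AZURE \
  AZURE_STORAGE_ACCOUNT="mystorageaccount" \
  AZURE_STORAGE_ACCOUNT_KEY="<storage-account-key>" \
  DEBUG=1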
I'm running a container with the following options:
docker run -d --device=/dev/bus/usb:/dev/bus/usb --device=/dev/ttyS0:/dev/ttyS0 instr_img
Inside the container I have Python code which resets a USB device; that in turn causes a device file '/dev/bus/usb/002/005' on the host to be removed and a new file (/dev/bus/usb/002/006) to be created in its place. The problem is that inside the container '/dev/bus/usb/002/005' still exists, and '/dev/bus/usb/002/006' is nowhere to be found. The directories '/dev/bus/usb/002' on the host and in the container are now out of sync. As a result, the code running inside the container throws an exception because it can't talk to the USB device. I confirmed this by manually creating a new device file (mknod) in the container and saw that it did not get synced to the host, and vice versa. Is this an unsupported feature or a bug in Docker?
>docker version
Client:
Version: 1.9.0
API version: 1.21
Go version: go1.4.2
Git commit: 76d6bc9
Built: Tue Nov 3 17:48:04 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.9.0
API version: 1.21
Go version: go1.4.2
Git commit: 76d6bc9
Built: Tue Nov 3 17:48:04 UTC 2015
OS/Arch: linux/amd64
>docker info
Containers: 66
Images: 313
Server Version: 1.9.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 445
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-47-generic
Operating System: Ubuntu 15.04
CPUs: 4
Total Memory: 7.69 GiB
Name: my-host-1
ID: VIT4:S2P3:Q4TY:A3I4:L4WH:HFWJ:I36U:PBTV:B3VW:NFXB:LDNM:KY7G
Username: myuser
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
>uname -a
Linux my-host-1 3.19.0-47-generic #53-Ubuntu SMP Mon Jan 18 14:02:48 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
My workaround is to issue a mknod command to create a new device file with the minor device number incremented by 1 (from the previous number) every time a device reset happens; however, this is not a clean fix, since I need to add checks because this program is used in multiple environments both outside and inside the Docker container. I could very well be using Docker improperly for this use case, since I'm very green (a noob) with Docker.
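Concretely, the workaround amounts to something like the following inside the container (189 is the USB character-device major number on Linux, and the minor is (bus - 1) * 128 + devnum - 1, so these values are specific to the bus 002 / device 006 example above):
mknod /dev/bus/usb/002/006 c 189 133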
Some comments/insights from some experienced Docker users would really be appreciated. It could be a deal breaker for me to dockerize this program if I can't find a clean workaround for this issue.
Thanks in advance for your comments!
From all the research online and some experimentation with '--device', I've found that ephemeral (hot-pluggable) devices are not supported by this option. It's a shame that the Docker documentation does not state this clearly, if at all; I only found one comment online from a user who mentioned it in passing. For those who want to use '--device' for these devices, don't; use the '--privileged' and '-v' options instead. This avoids having to specify the exact device file name, e.g. /dev/bus/usb/002/088; instead you can specify just /dev/bus/usb. The '--device' option requires the actual device file name to work.
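In other words, instead of the original --device invocation, the run command ends up looking roughly like this (image name from my example above):
docker run -d --privileged -v /dev/bus/usb:/dev/bus/usb --device=/dev/ttyS0:/dev/ttyS0 instr_img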