Docker - error when extracting in Centos (invalid tar header) - linux

Docker version 18.06.1-ce, build e68fc7a
CentOS Linux release 7.5.1804 (Core)
My Dockerfile is:
FROM node:8
When I execute docker build -t my-image . I get the following error:
Sending build context to Docker daemon 44.03kB
Step 1/1 : FROM node:8
8: Pulling from library/node
f189db1b88b3: Extracting [==================================================>] 54.25MB/54.25MB
3d06cf2f1b5e: Download complete
687ebdda822c: Download complete
99119ca3f34e: Download complete
e771d6006054: Download complete
b0cc28d0be2c: Download complete
7225c154ac40: Download complete
7659da3c5093: Download complete
failed to register layer: ApplyLayer exit status 1 stdout: stderr: archive/tar: invalid tar header
Any clue? Any suggestions on what I can do to fix it?
I get the same error when running docker run -it ubuntu.

The error message indicates that the image you are attempting to download has been corrupted. There are a few places I can think of where that would happen:
On the remote registry server
In transit
In memory
On disk
By the application
Given the popularity of the image, I would rule out the registry server having issues. Potentially you have an unstable server with memory or disk issues that were triggered when downloading a large image. On Linux, you'd likely see kernel errors from this in dmesg.
The version of docker is recent enough that any past issues on this have long since been fixed. There's only a single issue on the tar file processing related to very large layers (over 8GB) which doesn't apply to the image you are pulling. The tar processing is embedded directly into docker, so changing or upgrading your tar binary won't affect docker.
Potentially you could have an issue with the storage driver and the backend storage device. Changing from devicemapper to overlay2 would be a good first step if docker hasn't already defaulted to it (you can see your current storage driver in docker info and change it with an entry in /etc/docker/daemon.json).
My first guess on that list is the "in transit" part. Since the request is over https, this won't be from a bad packet. But a proxy on the network that intercepts all web traffic could be the culprit. If you have a proxy, make sure docker is configured to log in to and use it. For more details, see https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
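As a sketch of that daemon-side proxy configuration, a systemd drop-in might look like the following (the proxy host and port are placeholders; adjust for your network):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
# (hypothetical proxy host/port -- adjust for your environment)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128/"
Environment="HTTPS_PROXY=http://proxy.example.com:3128/"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After creating the file, reload and restart the daemon with sudo systemctl daemon-reload and sudo systemctl restart docker.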

Try unpacking your image with: tar tvf yourarchive
If there are no errors, try updating docker (if possible).
If the error persists, try rebuilding your archive.
A similar issue is described here.
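tar itself rejects a corrupt header with an error much like docker's ApplyLayer failure. As a quick illustration that doesn't need docker (file names here are made up for the example):

```shell
# Create a small valid archive
echo "hello" > file.txt
tar cf good.tar file.txt
# Listing a valid archive succeeds
tar tf good.tar
# Corrupt the header checksum field (offset 148) of a copy
cp good.tar bad.tar
printf 'XXXXXXXX' | dd of=bad.tar bs=1 seek=148 count=8 conv=notrunc 2>/dev/null
# Listing now fails, the same class of error docker reports
tar tf bad.tar || echo "invalid archive detected"
```

If tar tvf on your saved image archive fails the same way, the download really was corrupted.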

When you get the same error on tar extraction, the fetched image might indeed be corrupt.
Comments on issue 15561 hint that building the image locally still works.

How to pull docker images inside restricted network

I want to pull a certain number of images from Docker Hub. But since I cannot access Docker Hub from my organization's network, what are the ways I can pull those images?
The error is: ERROR: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Thanks.
You can try these steps. First, in an environment with internet access,
run the docker pull openjdk command to download the image,
then save the image as a tar with the following command: docker save -o <path for generated tar file> <image name>.
Copy this tar file to the server that does not have internet access, using scp or similar methods.
After copying,
run the following command to load the tar file on the server: docker load -i <generated tar name>.tar.
If you want to add a tag to this image, you can run the following command: docker image tag IMAGE_ID openjdk:latest.
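Putting those steps together as a sketch (the image name, paths, and restricted-host name are examples, not requirements):

```shell
# On a machine with internet access: pull the image and export it to a tar file
docker pull openjdk
docker save -o /tmp/openjdk.tar openjdk

# Copy the archive to the server without internet access
scp /tmp/openjdk.tar user@restricted-host:/tmp/

# On the restricted server: load the archive
docker load -i /tmp/openjdk.tar
# Optionally retag it (substitute the real image ID)
# docker image tag IMAGE_ID openjdk:latest
```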
Adding to @omernaci's answer: you can either download the image in a separate environment, or use a proxy (preferred, as it works with common restrictions like isolating servers from the public internet):
Using a proxy
If your restricted environment has access to a proxy for this kind of management operation, you may just use it:
HTTP_PROXY="http://proxy.example.com:80/" docker pull openjdk
or HTTPS_PROXY="https://proxy.example.com:443/" docker pull openjdk (if using an https proxy)
OR configure the proxy settings on the docker daemon as explained in https://docs.docker.com/config/daemon/systemd/#httphttps-proxy, and then you can just run docker pull openjdk normally.
Downloading the image in a separate environment
Follow the same steps as in @omernaci's answer above: docker pull on a machine with internet access, docker save to a tar file, copy it over with scp, then docker load on the restricted server (and docker image tag if you want to retag it).
The best solution in this case would be to contact your network administrators and explain why you need access to this one URL. :)
As a workaround:
If it's not also restricted, a VPN might help.
You could connect to a remote computer outside your network and use docker from there.

gitlab-runner using directory path instead of URL to clone; fails with "does not appear to be a git repository"

I am trying to run my GitLab CI tests locally, to speed up developing CI.
I installed gitlab-runner locally (on my GNU/Linux box) following these instructions.
Now, when I run sudo gitlab-runner exec docker test, I get:
sudo gitlab-runner exec docker test
Runtime platform arch=amd64 os=linux pid=7133 revision=0e5417a3 version=12.0.1
Running with gitlab-runner 12.0.1 (0e5417a3)
Using Docker executor with image joyzoursky/python-chromedriver:2.7 ...
Pulling docker image joyzoursky/python-chromedriver:2.7 ...
Using docker image sha256:f593ebe67d0c75eb8e3d663412c8fc80df70bd08e835d7a64ba6afa21b1c2d8a for joyzoursky/python-chromedriver:2.7 ...
Running on runner--project-0-concurrent-0 via hostname...
Fetching changes...
Initialized empty Git repository in /builds/project-0/.git/
Created fresh repository.
fatal: '/home/USER/path/to/repo' does not appear to be a git repository
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
ERROR: Job failed: exit code 1
FATAL: exit code 1
(/home/USER/path/to/repo is the actual path to my working directory containing .gitlab-ci.yml, and my project code)
So, it looks like gitlab-runner is trying to use the directory path as a git clone URL, which obviously won't work.
I tried specifying --clone-url=, but I can't understand what part of the full project URL (either in the main git repo, or the gitlab mirror) I should be using.
Any clues?
If you are on Fedora and have SELinux enabled, try setting it to permissive:
sudo setenforce 0
SELinux blocks any activity that isn't allowed by an existing policy. The GitLab runner is basically a docker container trying to access files on the host machine, so that gets blocked... which leads the runner to report that you don't have a valid repository there (an error saying it can't read the files would have made more sense, but we don't live in a perfect world).
The lazy fix is to make SELinux blanket-permit the activity, for example if you don't do this sort of thing often enough to warrant a policy change.
The more security-conscious will prefer to edit the policy instead. Figure out some unique identifier in the denial error, then create a new policy to allow it:
sudo grep <needle> /var/log/audit/audit.log | audit2allow -M gitlab
sudo semodule -i gitlab.pp

Kubernetes unable to mount NFS FS on Google Container Engine

I am following the basic NFS server tutorial here, but when I try to create the test busybox replication controller I get an error indicating that the mount has failed.
Can someone point out what I am doing wrong?
MountVolume.SetUp failed for volume
"kubernetes.io/nfs/4e247b33-a82d-11e6-bd41-42010a840113-nfs"
(spec.Name: "nfs") pod "4e247b33-a82d-11e6-bd41-42010a840113" (UID:
"4e247b33-a82d-11e6-bd41-42010a840113") with: mount failed: exit
status 32 Mounting arguments: 10.63.243.192:/exports
/var/lib/kubelet/pods/4e247b33-a82d-11e6-bd41-42010a840113/volumes/kubernetes.io~nfs/nfs
nfs [] Output: mount: wrong fs type, bad option, bad superblock on
10.63.243.192:/exports, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a
/sbin/mount.<type> helper program) In some cases useful info is found
in syslog - try dmesg | tail or so
I have tried using an Ubuntu VM as well, just to see if I could mitigate a possibly missing /sbin/mount.nfs dependency by running apt-get install nfs-common, but that too fails with the same error.
Which container image are you using? On the 18th of October Google announced a new container image, which doesn't support NFS yet. Since Kubernetes 1.4 this image (called gci) is the default. See also https://cloud.google.com/container-engine/docs/node-image-migration#known_limitations
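For reference, a pod consuming that NFS export would be declared roughly like this, on a node image that does support NFS (a sketch using the server IP and export path from the error message above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfs
      mountPath: /mnt
  volumes:
  - name: nfs
    nfs:
      server: 10.63.243.192
      path: /exports
```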

Docker 1.6 and Registry 2.0

Has anyone successfully tried the search command with Docker 1.6 and the new registry 2.0?
I've set mine up behind Nginx with SSL, and so far it is working fine. I can push and pull images without problems. But when I try to search for them, all the following commands give a 404 response:
curl -k -s -X GET https://username:password@my-docker-registry.com/v1/search
404 page not found
curl -k -s -X GET https://username:password@my-docker-registry.com/v2/search
404 page not found
root@ip-10-232-0-191:~# docker search username:password@my-docker-registry.com/hello-world
FATA[0000] Invalid repository name (admin:admin), only [a-z0-9-_.] are allowed
root@ip-10-232-0-191:~# docker search my-docker-registry.com/hello-world
FATA[0000] Error response from daemon: Unexpected status code 404
I wanted to ask if anyone has any ideas why and what is the correct way to use the Docker client to search the registry for images.
Looking at the API v2.0 documentation, do they simply not support a search function? It seems a bit strange to omit such functionality.
At least something works :)
root@ip-10-232-0-191:~# curl -k -s -X GET https://username:password@my-docker-registry.com/v2/hello-world/tags/list
{"name":"hello-world","tags":["latest"]}
To date, the search API is missing from registry v2.0.1, and this issue is under discussion here. I believe the search API is intended to land in v2.1.
EDIT: the /v2/_catalog endpoint is available in distribution/registry:master
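Once you are running an image that includes the catalog endpoint, listing repositories is a plain GET (the registry hostname and credentials below are placeholders):

```shell
# List repositories from a v2 registry, with basic auth
curl -k -s -u username:password https://my-docker-registry.com/v2/_catalog

# List the tags of one repository
curl -k -s -u username:password https://my-docker-registry.com/v2/hello-world/tags/list
```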
Before the new registry API:
If you are using REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY you may list the contents of that directory
user@host:~# tree $REGISTRY_FS_ROOTDIR/docker/registry/v2/repositories -L 2
***/docker/registry/v2/repositories
└── repository1
└── image1
This may be useful for making a quick web UI, or if you have ssh access to the host storing the repositories:
ssh -T user@host -p <port> tree $REGISTRY_FS_ROOTDIR/docker/registry/ -L 2
Also look at the compose example, which deploys both v1 & v2 registries behind an nginx reverse proxy.
The latest version of Docker Registry, available from https://github.com/docker/distribution, supports the Catalog API (/v2/_catalog). This provides the capability to list repositories.
If interested, you can try the docker image registry CLI I built to make it easy to use the search features in the new Docker Registry v2 distribution: https://github.com/vivekjuneja/docker_registry_cli
If you're on Windows, here's a PowerShell script to query /v2/_catalog with basic HTTP auth:
https://gist.github.com/so0k/b59382ea7fd959cf7040
FYI, to use this you have to docker pull distribution/registry:master instead of docker pull registry:2. The registry:2 image version is currently 2.0.1, which does not come with the catalog endpoint.

Error pulling image (latest) from centos, Authentication is required

I have installed docker.io on CentOS 6.4 64 bit following the steps mentioned here: http://nareshv.blogspot.in/2013/08/installing-dockerio-on-centos-64-64-bit.html
Now I am able to start the docker daemon. When I search for an image as follows, I get results:
[root@test ~]# docker search tutorial
Found 8 results matching your query ("tutorial")
NAME DESCRIPTION
mhubig/echo Simple echo loop from the tutorial.
learn/tutorial
jbarbier/tutorial1
mzdaniel/buildbot-tutorial
kyma/ping Ping image from the tutorial.
ivarvong/redis From the redis tutorial. Just redis-server and telnet on the base image.
amattn/postgresql-9.3.0 precise base, PostgreSQL 9.3.0 installed w/ default configuration. http://amattn.com/2013/09/19/tutorial_postgresql_us...
danlucraft/postgresql Postgresql 9.3, on port 5432, un:docker, pw:docker. From following the Postgresql example tutorial.
But when I try to pull an image, I get the error below:
[root@test ~]# docker pull learn/tutorial
Pulling repository learn/tutorial
8dbd9e392a96: Error pulling image (latest) from learn/tutorial, Authentication is required.
2013/10/08 02:50:01 Internal server error: 404 trying to fetch remote history for learn/tutorial
How and where do I set up authentication? Please help.
I had the same problem, and this answer was the solution for me.
It was a time-zone issue. I ran docker in a VM, and my host and guest clocks had different timezones; the authentication failure was due to clock divergence. Once I set up ntp correctly (with the HW clock set to UTC) on my host, the problem went away.
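On a CentOS 6 guest, the clock fix might look like this (EL6-era package and service names; these are assumptions, adjust for your distro):

```shell
# Install and enable ntp so the guest clock tracks real time
yum install -y ntp
service ntpd start
chkconfig ntpd on
# Write the corrected system time to the hardware clock, stored as UTC
hwclock --systohc --utc
```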
