How to pull Docker images inside a restricted network - Linux

I want to pull a number of images from Docker Hub, but I cannot reach Docker Hub from my organization's network. What are the ways by which I can pull those images?
The error is: ERROR: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Thanks.

You can try these steps. First, in an environment with internet access, run the docker pull openjdk command to download the image.
Then save the image as a tar archive with the following command: docker save -o <path for generated tar file> <image name>.
Copy this tar file to the server that does not have internet access, using scp or a similar method.
After copying, run the following command to load the tar file on the server: docker load -i <generated tar name>.tar.
If you want to add a tag to this image, you can run the following command: docker image tag IMAGE_ID openjdk:latest.
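As a concrete end-to-end sketch of those steps (the tar path and the restricted-server hostname are placeholders):
On the machine with internet access:
docker pull openjdk
docker save -o /tmp/openjdk.tar openjdk
scp /tmp/openjdk.tar user@restricted-server:/tmp/
On the restricted server:
docker load -i /tmp/openjdk.tar
docker images   # verify the image is now available locally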

Adding to @omernaci's answer: you can either download the image in a separate environment, or use a proxy (preferred, as it fits the usual restriction of isolating servers from the public internet):
Using a proxy
If your restricted environment has access to a proxy for this kind of management operation, you may just use it:
HTTP_PROXY="http://proxy.example.com:80/" docker pull openjdk
or HTTPS_PROXY="https://proxy.example.com:443/" docker pull openjdk (if using an https proxy)
OR configure the proxy setting on the docker daemon, as explained in https://docs.docker.com/config/daemon/systemd/#httphttps-proxy, and then you can just run docker pull openjdk normally.
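For the daemon-wide setting, the linked page boils down to a systemd drop-in along these lines (the proxy address is a placeholder):
/etc/systemd/system/docker.service.d/http-proxy.conf:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTPS_PROXY=http://proxy.example.com:443/"
followed by sudo systemctl daemon-reload and sudo systemctl restart docker so the daemon picks up the change.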
Downloading the image on a separate environment
For this, follow the same steps as in @omernaci's answer above: docker pull the image in an environment with internet access, docker save it to a tar file, copy the tar to the restricted server with scp, then docker load it there (and optionally retag it with docker image tag).

The best solution in this case would be to contact your network administrators and explain to them why you need access to this one URL. :)
As a workaround:
If it's not also restricted, a VPN might help.
You could connect to a remote computer outside your network and use docker from there.
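As a sketch of that last option: if you have SSH access to a machine outside the network and a docker client of 18.09 or newer, you can drive the remote daemon over SSH and pull the image back to your side (user and host names are placeholders):
docker -H ssh://user@outside-host pull openjdk
docker -H ssh://user@outside-host save -o openjdk.tar openjdk   # the tar is written locally
docker load -i openjdk.tar   # load it into your local daemon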

Related

How to resolve the file processing issue during docker volume mount in Linux?

I am trying to containerize my application. The application basically processes files and places them in a different folder after renaming them. The source folder is "opt/fileprocessing/input" and the target is "opt/fileprocessing/output".
Scenario 1 - without volume mount
When I start my docker container and place a file in the source folder using the docker cp command, the application processes it and places it successfully in the target folder.
Scenario 2 - with volume mounts from the host
docker run -d -v /opt/input:/opt/fileprocessing/input -v /opt/output:/opt/fileprocessing/output --name new_container processor
When I place the file in the /opt/input folder of the host, the application throws an error saying it can't place the file in the destination. If I go inside the container and view the input folder, I see the file there, which confirms that the mount has happened successfully. It fails when renaming the file and posting it to the destination (this is an application-level code error, so not much help there).
I tried the following to make it work:
Made sure the host and container users are the same, with the same uid and gid.
The file has 775 permissions set.
The container folder has 777 permissions.
The same file was placed as in scenario 1, with the same name and format.
container OS
NAME="CentOS Linux"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
host os
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
Scenario 3 - mounted the files in a different way, as below:
docker run -d -v /opt/fileprocessing:/opt/fileprocessing -v /opt/fileprocessing:/opt/fileprocessing --name new_container processor
where the fileprocessing folder in both the container and the host has two subdirectories named input and output.
This way of mounting works for me without any issues.
Please let me know why scenario 2 failed and how to fix it.
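One hypothesis worth checking (not confirmed in the question): in scenario 2, input and output are two separate bind mounts, so a rename(2) from one to the other is a cross-device operation and fails with EXDEV ("Invalid cross-device link"), which Java's File.renameTo() and similar APIs surface as a generic failure; in scenario 3 both subdirectories sit on a single mount, so rename works. From inside the container:
findmnt | grep fileprocessing   # scenario 2 shows two separate mount entries, scenario 3 only one
# If the application moves files with rename() rather than copy+delete, a move between
# two different mounts fails with EXDEV even when both map to the same host filesystem.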

How to switch from docker.io to quay.io?

I need help switching my image registry from docker.io to quay.io.
My case is: I pulled a centos image from hub.docker.com and I want to push it to quay.io.
I have logged in to quay.io, but when I try to push as shown below, it does not work.
Here are my step to switch to quay.io:
Create quay repository MYUSERNAME/centos
Logout from docker.io
# docker logout docker.io
Login to quay.io
# docker login quay.io
I fill in the credentials
I create a new tag
# docker tag IMAGEID MYUSERNAME/centos:7
I push my image to quay.io
# docker push MYUSERNAME/centos:7
Here is the output I got for the last shell command:
The push refers to repository [docker.io/MYUSERNAME/centos]
2653d992f4ef: Preparing
denied: requested access to the resource is denied
I can see that the image registry server is still pointed at docker.io.
How do I solve this? Thank you.
In super short, the tag is in the format site/something/name:tag, where:
if site is missing, it is assumed to be docker.io (can't be changed)
if something is missing, it is assumed to be library.
if :tag is missing, it is assumed to be latest.
So for example docker pull alpine is the same as docker pull docker.io/library/alpine:latest.
If you want to use a repository with a different address, you have to give it explicitly when tagging (otherwise docker.io is assumed, which is why your push went to docker.io/MYUSERNAME/centos):
docker tag IMAGEID quay.io/MYUSERNAME/centos:7
docker push quay.io/MYUSERNAME/centos:7
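Putting it together, a minimal end-to-end run (MYUSERNAME and IMAGEID are the placeholders from the question):
docker login quay.io
docker tag IMAGEID quay.io/MYUSERNAME/centos:7
docker push quay.io/MYUSERNAME/centos:7
docker pull quay.io/MYUSERNAME/centos:7   # sanity check: this now resolves to quay.io, not docker.io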

gitlab-runner using directory path instead of URL to clone; fails with "does not appear to be a git repository"

I am trying to run my GitLab CI tests locally, to speed up CI development.
I installed gitlab-runner locally (on my GNU/Linux box) following these instructions.
Now, when I run sudo gitlab-runner exec docker test, I get:
sudo gitlab-runner exec docker test
Runtime platform arch=amd64 os=linux pid=7133 revision=0e5417a3 version=12.0.1
Running with gitlab-runner 12.0.1 (0e5417a3)
Using Docker executor with image joyzoursky/python-chromedriver:2.7 ...
Pulling docker image joyzoursky/python-chromedriver:2.7 ...
Using docker image sha256:f593ebe67d0c75eb8e3d663412c8fc80df70bd08e835d7a64ba6afa21b1c2d8a for joyzoursky/python-chromedriver:2.7 ...
Running on runner--project-0-concurrent-0 via hostname...
Fetching changes...
Initialized empty Git repository in /builds/project-0/.git/
Created fresh repository.
fatal: '/home/USER/path/to/repo' does not appear to be a git repository
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
ERROR: Job failed: exit code 1
FATAL: exit code 1
(/home/USER/path/to/repo is the actual path to my working directory containing .gitlab-ci.yml, and my project code)
So, it looks like gitlab-runner is trying to use the directory path as a git clone URL, which obviously won't work.
I tried specifying --clone-url=, but I can't understand what part of the full project URL (either in the main git repo, or the gitlab mirror) I should be using.
Any clues?
If you are on Fedora and have SELinux enabled, try setting it to permissive:
sudo setenforce 0
SELinux blocks any activity that isn't allowed by some existing policy. The GitLab runner is basically a docker container trying to access files on the host machine, so that access gets blocked... which leads the runner to report that you don't have a valid repository at that path (an error saying it can't read the files would have made more sense, but we don't live in a perfect world).
The lazy fix is to make SELinux blanket-permit the activity, if you don't do this sort of thing often enough to warrant a policy change, for example.
The more security-conscious will prefer to edit the policy instead. Figure out some unique identifier in the denial error, then create a new policy to allow it:
sudo grep <needle> /var/log/audit/audit.log | audit2allow -M gitlab
sudo semodule -i gitlab.pp
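If grepping the raw log is fiddly, ausearch (part of the audit tooling) can pull recent AVC denials directly, which also gives you the identifier to feed into audit2allow:
sudo ausearch -m avc -ts recent
Note that setenforce 0 is temporary and resets on reboot; the policy module is what persists.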

Docker - error when extracting in CentOS (invalid tar header)

Docker version 18.06.1-ce, build e68fc7a
CentOS Linux release 7.5.1804 (Core)
My Dockerfile is
FROM node:8
When I execute docker build -t my-image . I get the following error:
Sending build context to Docker daemon 44.03kB
Step 1/1 : FROM node:8
8: Pulling from library/node
f189db1b88b3: Extracting [==================================================>] 54.25MB/54.25MB
3d06cf2f1b5e: Download complete
687ebdda822c: Download complete
99119ca3f34e: Download complete
e771d6006054: Download complete
b0cc28d0be2c: Download complete
7225c154ac40: Download complete
7659da3c5093: Download complete
failed to register layer: ApplyLayer exit status 1 stdout: stderr: archive/tar: invalid tar header
Any clue? Any suggestions on what I can do to fix it?
I get the same error when running docker run -it ubuntu
The error message indicates that the image you are attempting to download has been corrupted. There are a few places I can think of where that would happen:
On the remote registry server
In transit
In memory
On disk
By the application
Given the popularity of the image, I would rule out the registry server having issues. Potentially you have an unstable server with memory or disk issues that were triggered when downloading a large image. On Linux, you'd likely see kernel errors from this in dmesg.
The version of docker is recent enough that any past issues on this have long since been fixed. There's only a single issue on the tar file processing related to very large layers (over 8GB) which doesn't apply to the image you are pulling. The tar processing is embedded directly into docker, so changing or upgrading your tar binary won't affect docker.
Potentially you could have an issue with the storage driver and the backend storage device. Changing from devicemapper to overlay2, if you haven't already, would be a good first step, if docker hasn't already defaulted to it (you can see your current storage driver in docker info and change it with an entry in /etc/docker/daemon.json).
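If you do switch, the daemon.json entry is a one-liner (note that changing the storage driver hides images pulled under the old driver, so expect to re-pull them):
/etc/docker/daemon.json:
{
  "storage-driver": "overlay2"
}
followed by sudo systemctl restart docker.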
My first guess on that list is the "in transit" part. Since the request will be over https, this won't be from a bad packet. But a proxy on the network that intercepts all web traffic could be the culprit. If you have a proxy, make sure docker is configured to login and use your proxy. For more details on that, see https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
Try unpacking your image with: tar tvf yourarchive
If there are no errors, try updating docker (if possible).
If the error persists, try rebuilding your archive.
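For example, assuming you have some image available locally to test the archive handling with (node8.tar is a placeholder name):
docker save -o node8.tar node:8
tar tvf node8.tar > /dev/null && echo "archive OK"   # tar errors out on an invalid header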
A similar issue is described there.
When you get the same error on tar extraction, the fetched image might indeed be corrupt.
Comments on issue 15561 hint that building locally still works.

Docker 1.6 and Registry 2.0

Has anyone successfully tried the search command with Docker 1.6 and the new registry 2.0?
I've set mine up behind Nginx with SSL, and so far it is working fine. I can push and pull images without problems. But when I try to search for them, the following commands all give a 404 response:
curl -k -s -X GET https://username:password@my-docker-registry.com/v1/search
404 page not found
curl -k -s -X GET https://username:password@my-docker-registry.com/v2/search
404 page not found
root@ip-10-232-0-191:~# docker search username:password@my-docker-registry.com/hello-world
FATA[0000] Invalid repository name (admin:admin), only [a-z0-9-_.] are allowed
root@ip-10-232-0-191:~# docker search my-docker-registry.com/hello-world
FATA[0000] Error response from daemon: Unexpected status code 404
I wanted to ask if anyone has any ideas why, and what the correct way is to use the Docker client to search the registry for images.
Looking at the v2.0 API documentation, do they simply not support a search function? It seems a bit strange to omit such functionality.
At least something works :)
root@ip-10-232-0-191:~# curl -k -s -X GET https://username:password@my-docker-registry.com/v2/hello-world/tags/list
{"name":"hello-world","tags":["latest"]}
To date, the search API is lacking from registry v2.0.1, and this issue is under discussion here. I believe the search API is intended to land in v2.1.
EDIT: the /v2/_catalog endpoint is available in distribution/registry:master
Before the new registry API:
If you are using REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY, you may list the contents of that directory:
user@host:~# tree $REGISTRY_FS_ROOTDIR/docker/registry/v2/repositories -L 2
***/docker/registry/v2/repositories
└── repository1
    └── image1
This may be useful for making a quick web UI you can call to do this, or if you have ssh access to the host storing the repositories:
ssh -T user@host -p <port> tree $REGISTRY_FS_ROOTDIR/docker/registry/ -L 2
Do look at the compose example, which deploys both v1 & v2 registries behind an nginx reverse proxy.
The latest version of Docker Registry available from https://github.com/docker/distribution supports Catalog API. (v2/_catalog). This allows for capability to search repositories.
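For example, querying it with the same style of curl call as above (credentials and hostname are the placeholders from the question):
curl -k -s -X GET https://username:password@my-docker-registry.com/v2/_catalog
# returns JSON of the form {"repositories":["hello-world", ...]}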
If interested, you can try the docker image registry CLI I built to make it easy to use the search features in the new Docker Registry v2 distribution: (https://github.com/vivekjuneja/docker_registry_cli)
If you're on Windows, here's a PowerShell script to query v2/_catalog from Windows with basic HTTP auth:
https://gist.github.com/so0k/b59382ea7fd959cf7040
FYI, to use this you have to docker pull distribution/registry:master instead of docker pull registry:2. The registry:2 image version is currently 2.0.1, which does not come with the catalog endpoint.
