How to switch from docker.io to quay.io? - linux

I need help switching my image registry from docker.io to quay.io.
My case is: I pull the centos image from hub.docker.com and I want to push it to quay.io.
I have logged in to my quay.io account, but when I try to push to quay with the commands below, it does not work.
Here are my steps to switch to quay.io:
Create quay repository MYUSERNAME/centos
Logout from docker.io
# docker logout docker.io
Login to quay.io
# docker login quay.io
I fill in the credentials
I create a new tag
# docker tag IMAGEID MYUSERNAME/centos:7
I push my image to quay.io
# docker push MYUSERNAME/centos:7
Here is the output I got from the last command:
The push refers to repository [docker.io/MYUSERNAME/centos]
2653d992f4ef: Preparing
denied: requested access to the resource is denied
I can see that the image registry still points to docker.io.
How to solve this? Thank you.

In super short, the tag is in the format site/something/name:tag, where:
if site is missing, it is assumed to be docker.io (can't be changed)
if something is missing, it is assumed to be library.
if :tag is missing, it is assumed to be latest.
So for example docker pull alpine is the same as docker pull docker.io/library/alpine:latest.
If you want to use a registry at a different address, you have to give it explicitly when tagging:
docker tag IMAGEID quay.io/MYUSERNAME/centos:7
docker push quay.io/MYUSERNAME/centos:7
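For completeness, a minimal end-to-end sketch of the same workflow, assuming the image is pulled from Docker Hub first (MYUSERNAME is a placeholder for your quay.io namespace):
docker pull centos:7                                # same as docker.io/library/centos:7
docker login quay.io                                # authenticate against quay.io
docker tag centos:7 quay.io/MYUSERNAME/centos:7     # retag with the quay.io prefix
docker push quay.io/MYUSERNAME/centos:7             # now the push goes to quay.io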

Related

How to push and pull docker images from Gitlab with access token

I am trying to push an image to a gitlab registry with two factor authentication. It gives me this error message:
unauthorized: HTTP Basic: Access denied\nYou must use a personal access token with 'api' scope for Git over HTTP
I tried to use this command to log in, but it still says access denied:
docker login https://registry.gitlab.com/my_registry -u my_user_name -p my_public_key
What am I doing wrong? How can I push and pull images with the public key?
OK, I found my error: I was using my_public_key, but I should have used a GitLab access token instead, generated as the instructions in the link say.
So the correct command is:
docker login https://registry.gitlab.com/my_registry -u my_user_name -p my_gitlab_token
Or, better yet, for security purposes, provide the password not on the command line but when prompted after the command, like this:
docker login https://registry.gitlab.com/my_registry -u my_user_name
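Once the login succeeds, pushing works the same as for any other registry: tag the image with the registry host in front, then push. A sketch, with my_group/my_project as a hypothetical project path:
docker tag IMAGEID registry.gitlab.com/my_group/my_project:latest   # tag with the GitLab registry prefix
docker push registry.gitlab.com/my_group/my_project:latest          # push using the token-based login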

Docker: no basic auth credentials after successful login

I've moved to Linux (Pop!_OS 21.04) on my desktop and I'm having some issues with Docker.
When I'm trying to run docker-compose to pull an image from a private registry I'm getting:
ERROR: Head "https://my.registry/my-image/manifests/latest": no basic auth credentials
Of course, before running this command I ran:
docker login https://my.registry.com -u user -p pass
which returns
WARNING! Your password will be stored unencrypted in /home/user/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
And my config.json in my .docker folder shows my credentials:
{
  "auths": {
    "my.registry.com": {
      "auth": "XXXXX"
    }
  }
}
To install Docker I followed the instructions on their page https://docs.docker.com/engine/install/ubuntu/
And my version is:
Docker version 20.10.8, build 3967b7d
The same command run on a macOS system with Docker version 20.10.8 works without any issues, so my password and all the URLs are definitely correct.
Thanks for any help!
The login command is
docker login my.registry.com
Without the https:// in front of the host. If you still have auth issues doing that:
if the registry uses an unknown TLS certificate, load that certificate on the host and restart the docker engine
if the registry is http instead of https, configure it as an insecure registry in /etc/docker/daemon.json (see the sketch after this list)
if the login is successful, but the pull fails, verify your user has access to the specific repo on the registry
double check your password was correctly entered
check for a network proxy intercepting the request (the http_proxy variable)
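For the insecure-registry case above, a minimal sketch of the daemon configuration, assuming the registry is reachable at my.registry.com:5000 (host and port are placeholders). Put this in /etc/docker/daemon.json:
{
  "insecure-registries": ["my.registry.com:5000"]
}
and restart the engine afterwards:
sudo systemctl restart docker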
I reinstalled the whole thing again as the Docker page states; it didn't work, so I uninstalled it and installed the snap version, which didn't work either. Finally I removed that and went with a simple apt-get install docker.io, and it works like a charm! I don't know why it didn't work previously, but I won't lose more sleep over it.
On Ubuntu 20.x, I observed that the credentials are stored in /home/<username>/snap/docker/1125/.docker/config.json.
If older credentials are stored in $HOME/.docker/config.json, they are not used by docker pull. Verify that docker is indeed picking up the credentials from the right config.json location.
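One way to verify which credentials file is being used is to point the client at a config directory explicitly via the DOCKER_CONFIG variable; the snap path below is only an example:
DOCKER_CONFIG=$HOME/snap/docker/current/.docker docker pull my.registry.com/my-image:latest   # use the snap-managed config.json for this pull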

How to pull docker images inside restricted network

I want to pull a number of images from Docker Hub, but since I cannot access Docker Hub from my organization's network, what are the ways I can pull those images?
The error is: ERROR: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Thanks.
You can try these steps. First, in an environment with internet access,
run the docker pull openjdk command to download the image.
Then save the image as a tar file with the following command: docker save -o <path for generated tar file> <image name>.
Copy this tar file to a server that does not have internet access, using scp or similar methods.
After you copy it,
run the following command to load the tar file on the server: docker load -i <generated tar name>.tar.
If you want to add a tag to this image, you can run the following command: docker image tag IMAGE_ID openjdk:latest.
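Put together, the whole transfer is just a few commands; a sketch, with restricted-server as a placeholder hostname:
docker pull openjdk                            # on a machine with internet access
docker save -o openjdk.tar openjdk             # write the image to a tar file
scp openjdk.tar user@restricted-server:/tmp/   # copy the tar file across
docker load -i /tmp/openjdk.tar                # on the restricted server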
Adding to the answer of @omernaci: you can either download the image in a separate environment, or use a proxy (preferred, as it works with the usual restrictions like isolating servers from the public internet):
Using a proxy
If your restricted environment has access to a proxy for this kind of management operation, you may just use it:
HTTP_PROXY="http://proxy.example.com:80/" docker pull openjdk
or HTTPS_PROXY="https://proxy.example.com:443/" docker pull openjdk (if using an https proxy)
OR configure the proxy setting on the docker daemon as explained in https://docs.docker.com/config/daemon/systemd/#httphttps-proxy, and then you can just use docker pull openjdk normally.
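As a sketch of the daemon-level setup the linked page describes (the proxy address is a placeholder): create a systemd drop-in such as /etc/systemd/system/docker.service.d/http-proxy.conf containing
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTPS_PROXY=https://proxy.example.com:443/"
then reload and restart the daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker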
Downloading the image on a separate environment
Follow the save/load steps from the answer above: docker pull and docker save on a machine with internet access, copy the tar file over, then docker load on the restricted server.
The best solution in this case would be to contact your network administrators and explain to them why you need to access this one URL. :)
As a workaround:
If it's not also restricted, a VPN might help.
You could connect to a remote computer outside your network and use docker from there.

gitlab-runner using directory path instead of URL to clone; fails with "does not appear to be a git repository"

I am trying to run my GitLab CI tests locally, to speed up developing CI.
I installed gitlab-runner locally (on my GNU/Linux box) following these instructions.
Now, when I run sudo gitlab-runner exec docker test, I get:
sudo gitlab-runner exec docker test
Runtime platform arch=amd64 os=linux pid=7133 revision=0e5417a3 version=12.0.1
Running with gitlab-runner 12.0.1 (0e5417a3)
Using Docker executor with image joyzoursky/python-chromedriver:2.7 ...
Pulling docker image joyzoursky/python-chromedriver:2.7 ...
Using docker image sha256:f593ebe67d0c75eb8e3d663412c8fc80df70bd08e835d7a64ba6afa21b1c2d8a for joyzoursky/python-chromedriver:2.7 ...
Running on runner--project-0-concurrent-0 via hostname...
Fetching changes...
Initialized empty Git repository in /builds/project-0/.git/
Created fresh repository.
fatal: '/home/USER/path/to/repo' does not appear to be a git repository
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
ERROR: Job failed: exit code 1
FATAL: exit code 1
(/home/USER/path/to/repo is the actual path to my working directory containing .gitlab-ci.yml, and my project code)
So, it looks like gitlab-runner is trying to use the directory path as a git clone URL, which obviously won't work.
I tried specifying --clone-url=, but I can't understand what part of the full project URL (either in the main git repo, or the gitlab mirror) I should be using.
Any clues?
If you are on Fedora and have SELinux enabled, try setting it to permissive:
sudo setenforce 0
SELinux blocks any activity that isn't allowed by some existing policy. The GitLab runner is basically a docker container trying to access files on the host machine, so that gets blocked, which leads the runner to report that you don't have a valid repository at that path (an error saying it can't read the files would have made more sense, but we don't live in a perfect world).
The lazy fix is to make SELinux blanket-permit the activity, for example if you don't do this sort of thing often enough to warrant a policy change.
The more security-conscious will prefer to edit the policy instead. Figure out some unique identifier in the denial error, then create a new policy to allow it:
sudo grep <needle> /var/log/audit/audit.log | audit2allow -M gitlab
sudo semodule -i gitlab.pp
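If you went the permissive route first, remember to switch SELinux back once the policy module is loaded; a short sketch, with ausearch used to inspect the denials (assuming the audit tools are installed):
sudo ausearch -m avc -ts recent   # list recent SELinux denials to find a unique identifier
sudo setenforce 1                 # switch back to enforcing mode once the policy is in place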

Git push/pull fails on GitLab in Google Compute Engine

I've installed GitLab on Google Compute Engine using "Click to Deploy" from the project interface. The deployment is successful after a few minutes. I can SSH into the instance, and muck around with it as expected.
I can also log in to GitLab using the web interface, and add SSH keys to my profile. So far, so good. However, when I attempt to push or pull to a new example repository, I receive this message:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've removed my local SSH config so it doesn't interfere. Do I need to set up an SSH tunnel of some sort? What am I missing?
UPDATE: Wiping out my local ~/.ssh folder, and regenerating an SSH key (which I've added to my profile in GitLab) produces the following error:
Received disconnect from {GITLAB_IP_ADDRESS}: 2: Too many authentication failures for git
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
UPDATE 2: It seems GitLab may already have a solution: run sudo gitlab-ctl reconfigure. See here: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md#git-ssh-access-stops-working-on-selinux-enabled-systems
You need to create an SSH tunnel to communicate with GitLab.
1. Log into your development server as your user, and create a key.
ssh-keygen -t rsa
Follow the steps, and create a passphrase (that you can remember), as you'll need this to pull and push code from/to GitLab.
2. Now that you've created your key, we can copy it;
cat ~/.ssh/id_rsa.pub
Copy the output of that command (including ssh-rsa), and add it to your GitLab profile. (http://my-gitlab-server.com/profile/keys/new).
3. Ensure you have the correct privilege to the project(s)
Ensure you have at least the Developer role. (Screengrab of roles: http://i.stack.imgur.com/DSSvl.jpg)
4. Now, copy the project link
Go into your project, and find the SSH link in the top right;
5. Now back to your development server
Navigate to your directory where you'd like to work, and run the following;
$ git init
$ git remote add origin <<project_url>>
$ git fetch
Where <<project_url>> is the link we copied in step 4.
You will be prompted for your password (this is your SSH key passphrase, not your server password) and asked to add the host to your known_hosts file. After that, the project will start to download and you can enjoy development.
I did these steps on a CentOS 6.4 machine with Digital Ocean. But they shouldn't differ from using Google CE.
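Before running the git commands, you can also check that GitLab accepts the key at all; a quick sketch using the example hostname from step 2 (replace it with your own GitLab host):
ssh -T git@my-gitlab-server.com   # a working key setup prints a welcome message for your GitLab user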
Edit
Quote from Marty Penner answer as per this comment
Solved it! Thanks to @sxleixer and @Alexander Wenzowski for figuring this out.
Apparently, SELinux was interfering with a non-standard location for the .ssh directory. I needed to run the following commands on the Compute Engine instance:
sudo yum -y install policycoreutils-python # Install the `semanage` tool
sudo semanage fcontext -a -t ssh_home_t "/var/opt/gitlab/.ssh/authorized_keys" # Label the nonstandard location as ssh_home_t
See the full thread here:
Google Cloud Engine. Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
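Note that semanage fcontext only registers the new labelling rule; applying it to the existing files is typically done with restorecon (not shown in the quoted answer):
sudo restorecon -R -v /var/opt/gitlab/.ssh   # relabel the directory according to the new rule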
In my situation the git user wasn't set up completely. If your log files contain messages like "User git not allowed because account is locked" (under CentOS or Red Hat it's /var/log/secure), then you simply need to activate the user via "passwd -d git".
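A short sketch of that check and fix, assuming a CentOS/Red Hat log layout:
sudo grep 'account is locked' /var/log/secure   # confirm the lockout message for the git user
sudo passwd -d git                              # clear the git user's password to unlock the account, as described above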
