How to `docker login` to OpenShift Docker registry - Azure

I am using Red Hat OpenShift 4.4.17 deployed in Azure.
I logged in to OpenShift as administrator.
I have a Docker image locally, now I need to push my docker image to OpenShift Docker registry.
I am using the command below:
docker login -u <user_name> -p `oc whoami -t` image-registry.openshift-image-registry.svc:5000
I am getting this error:
Error response from daemon: Get https://image-registry.openshift-image-registry.svc:5000/v2/: dial tcp: lookup image-registry.openshift-image-registry.svc: no such host
What can I try to resolve this?
Please see this:
$ oc get route -n openshift-image-registry
NAME            HOST/PORT                                  PATH   SERVICES         PORT    TERMINATION   WILDCARD
default-route   default-route-openshift-image-registry.           image-registry   <all>   reencrypt     None

image-registry.openshift-image-registry.svc:5000 cannot be resolved from outside the OpenShift cluster, because it is the internal registry's service name.
So you should access the internal registry through the Route hostname of the registry in order to do docker login. Refer to Exposing a secure registry manually if the internal registry has not been exposed.
// expose the internal registry to external using Route.
$ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
// Verify the internal registry Route hostname.
$ oc get route -n openshift-image-registry
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
default-route default-route-openshift-image-registry.apps.clustername.basedomain image-registry <all> reencrypt None
// Try to login using the internal registry Route hostname.
$ docker login -u <user_name> -p $(oc whoami -t) default-route-openshift-image-registry.apps.clustername.basedomain
Here is my test evidence using podman, as follows.
First of all, you should place and update the trusted CA of your Router wildcard certificate on the client host where the docker or podman client is executed.
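For reference, a minimal sketch of where the CA can be placed so that the clients trust the route; the file name router-ca.crt is an assumption, and the route hostname should be adjusted to your cluster.
// Place the Router CA where podman and docker look for registry certificates.
$ REGISTRY=default-route-openshift-image-registry.apps.<clustername>.<basedomain>
$ sudo mkdir -p /etc/containers/certs.d/$REGISTRY && sudo cp router-ca.crt /etc/containers/certs.d/$REGISTRY/ca.crt   # podman
$ sudo mkdir -p /etc/docker/certs.d/$REGISTRY && sudo cp router-ca.crt /etc/docker/certs.d/$REGISTRY/ca.crt           # docker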
# podman login -u admin -p $(oc whoami -t) default-route-openshift-image-registry.apps.<clustername>.<basedomain>
Login Succeeded!
Additionally, if you face the "x509: certificate signed by unknown authority" error message, you should either place the Router's trusted CA on your host, or use "--tls-verify=false" in the podman case (or the equivalent option for docker) instead.
# podman login -u admin -p $(oc whoami -t) default-route-openshift-image-registry.apps.<clustername>.<basedomain>
Error: error authenticating creds for "default-route-openshift-image-registry.apps.<clustername>.<basedomain>": pinging docker registry returned: Get https://default-route-openshift-image-registry.apps.<clustername>.<basedomain>/v2/: x509: certificate signed by unknown authority
# podman login --tls-verify=false -u admin -p $(oc whoami -t) default-route-openshift-image-registry.apps.<clustername>.<basedomain>
Login Succeeded!
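Once the login succeeds, pushing the local image is just a tag plus a push against the same route. A sketch, where myproject and myimage:latest are hypothetical names for the target project and the local image; the logged-in user also needs push permission on that project.
// Tag the local image with the registry route and a project, then push it.
$ docker tag myimage:latest default-route-openshift-image-registry.apps.<clustername>.<basedomain>/myproject/myimage:latest
$ docker push default-route-openshift-image-registry.apps.<clustername>.<basedomain>/myproject/myimage:latest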

Related

How to request host/service certificate when authenticated as Certificate Admin - FreeIPA?

Note: I've tried to keep things as simple as possible in this question, as that is as far as my knowledge goes. Any form of help is appreciated.
I'm new to FreeIPA and I am struggling to request an SSL certificate and key file from FreeIPA as the Certificate Authority.
I verify that I get a krbtgt with klist, using the credentials of the Certificate Admin.
$ klist
Valid starting       Expires              Service principal
01/05/2022 5:35:35   01/06/2022 5:35:35   krbtgt/MYDOM@MYDOM
        renew until 01/12/2022 5:35:35
sudo /usr/bin/ipa-getcert request -r -w -k /tmp/test.key \
  -f /tmp/test.cert.pem \
  -g 4096 \
  -K HTTP/service.mydom \
  -T caIPAserviceCert \
  -D test.myDom -N CN=test.myDom,O=MYDOM
New signing request "20220105093346" added.
The only thing being created is the private key:
$ ls /tmp
test.key
Why isn't the certificate being created? The error suggests insufficient privileges.
Error:
$ sudo getcert list
Number of certificates and requests being tracked: 1.
Request ID '20220105093346':
        status: CA_REJECTED
        ca-error: Server at https://idm.myDom/ipa/xml denied our request, giving up: 2100 (RPC failed at server. Insufficient access: Insufficient 'write' privilege to the 'userCertificate' attribute of entry 'krbprincipalname=HTTP/service.mydom@MYDOM,cn=services,cn=accounts,dc=mydom'.).
        stuck: yes
        key pair storage: type=FILE,location='/tmp/test.key'
        certificate: type=FILE,location='/tmp/test.cert.pem'
        CA: IPA
        issuer:
        subject:
        expires: unknown
        pre-save command:
        post-save command:
        track: yes
        auto-renew: yes
Though I am able to run:
$ ipa service-mod HTTP/service.mydom --certificate=
Possible duplicate: freeipa-request-certificate-with-cname
Any ideas?
Turns out the machine I am requesting the certificate from needs to be allowed to manage the web service for the web host.
Only the target machine can create a certificate (IPA uses the host kerberos ticket) by default, so to be able to create the certificate on your IPA server you need to allow it to manage the web service for the www host.
[root@ipa-server ~]# ipa service-add-host --hosts=ipa-server.test.lan HTTP/www.test.lan
Source: Creating certs and keys for services using FreeIPA/Dogtag
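With the managing host added, the stuck certmonger request can be resubmitted rather than recreated. A sketch using the request ID from above; client.mydom is a placeholder for the host running ipa-getcert.
$ ipa service-add-host --hosts=client.mydom HTTP/service.mydom
$ sudo ipa-getcert resubmit -i 20220105093346
$ sudo getcert list -i 20220105093346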

How to push and pull docker images from Gitlab with access token

I am trying to push an image to a GitLab registry with two-factor authentication. It gives me this error message:
unauthorized: HTTP Basic: Access denied\nYou must use a personal access token with 'api' scope for Git over HTTP
I tried to use this command to log in, but it still says access denied:
docker login https://registry.gitlab.com/my_registry -u my_user_name -p my_public_key
What am I doing wrong? How can I push and pull images with the public key?
OK, I found my error: I was using my_public_key, but I should have used a GitLab access token instead, generated as the instructions in the link say.
So the correct command is:
docker login https://registry.gitlab.com/my_registry -u my_user_name -p my_gitlab_token
Or better yet, for security purposes, provide the password not in the command but when prompted after the command, like this:
docker login https://registry.gitlab.com/my_registry -u my_user_name
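After a successful login, push and pull use an image path under the project's registry. A sketch, where my_group/my_project and my_image are hypothetical names:
docker tag my_image:latest registry.gitlab.com/my_group/my_project/my_image:latest
docker push registry.gitlab.com/my_group/my_project/my_image:latest
docker pull registry.gitlab.com/my_group/my_project/my_image:latest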

Docker login x509: certificate signed by unknown authority

I am running a Docker registry as a container on Red Hat Linux 7.5 with Docker 18.09.3-3. It is configured with a self-signed certificate.
The container started successfully, and it works with curl without any error, but the docker login command gives an error.
The curl command works:
curl --cacert /etc/docker/certs.d/dockerhost\:5000/ca.crt https://dockerhost:5000 -v
The login command:
docker login dockerhost:5000
Error response from daemon: Get https://dockerhost:5000/v2/: x509: certificate signed by unknown authority
How can I resolve this error message?
Thanks
My hostname was set with uppercase letters, but the certificate was generated with the lowercase name. I changed the hostname to lowercase and it started working.
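A quick way to spot this kind of mismatch is to compare the name passed to docker login with the names in the certificate. A sketch, assuming the self-signed ca.crt from the question is also the server certificate:
hostname
openssl x509 -in /etc/docker/certs.d/dockerhost\:5000/ca.crt -noout -subject
openssl x509 -in /etc/docker/certs.d/dockerhost\:5000/ca.crt -noout -text | grep DNS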

How to use the deploy token correctly

I want to use the GitLab container registry for a private Docker image. When the project is public I can download the Docker image with docker login registry.gitlab.com/user/jupyterhub.
If the project is private, I need a DEPLOY TOKEN and a PASSWORD:
PASSWORD = KzErTBKAnwNEpxwVWU9g
DEPLOY USER = gitlab+deploy-token-28155
docker login registry.example.com -u gitlab+deploy-token-28155 -p KzErTBKAnwNEpxwVWU9g
and I can log in to the registry.
I get two warnings. How do I solve these problems?
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/klein/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
When I set the CI_DEPLOY_USER and CI_DEPLOY_PASSWORD variables, GitLab asks for a password:
docker login registry.gitlab.com -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD
To avoid the warning about passing the password on the command line, you have to pass it via stdin:
echo $CI_DEPLOY_PASSWORD | docker login -u $CI_DEPLOY_USER --password-stdin registry.gitlab.com
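The second warning (the password being stored unencrypted in ~/.docker/config.json) only goes away if a credential helper is configured. A minimal sketch, assuming docker-credential-pass (from docker-credential-helpers) is installed, added to ~/.docker/config.json:
{
  "credsStore": "pass"
}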

`docker service update` from remote server with cert

I want to update my Docker stack automatically from my CI server, but I can't figure out how to configure credentials to be able to drive Docker from an external host.
I have enabled experimental mode on my server and it works fine locally with docker-machine.
My deploy script look like this:
echo $DOCKER_CERT > cert.pem # which other file ?
OPTS=" --tlsverify --host $DOCKER_DEPLOY_HOST --tlscert cert.pem" # which other args ???
docker $OPTS pull $REPO_IMAGE
docker $OPTS service update multiverse-prod_api
Is there a way (or is it planned for a future version) to achieve this with just an SSH key?
Thanks!
You need to configure the Docker server with a self-signed cert, and then configure the client with a client cert signed by the same CA. The steps to create the certificates and configure the server and client are described by Docker in their documentation.
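For the script in the question, the client also needs the CA certificate and the client key in addition to the client certificate. A sketch, assuming DOCKER_CA, DOCKER_CERT and DOCKER_KEY hold the PEM contents as CI variables (only DOCKER_CERT appears in the original script; the other two names are assumptions):
echo "$DOCKER_CA"   > ca.pem    # CA that signed both the daemon and client certs
echo "$DOCKER_CERT" > cert.pem  # client certificate
echo "$DOCKER_KEY"  > key.pem   # client private key
OPTS="--tlsverify --host tcp://$DOCKER_DEPLOY_HOST:2376 --tlscacert ca.pem --tlscert cert.pem --tlskey key.pem"
docker $OPTS pull $REPO_IMAGE
docker $OPTS service update multiverse-prod_api
As for doing it with just an SSH key: newer Docker clients (18.09 and later) can also talk to a remote daemon over SSH with --host ssh://user@host, which avoids the TLS setup entirely.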
