Has anyone tried successfully the search command with Docker 1.6 and the new registry 2.0?
I've set mine up behind Nginx with SSL, and so far it is working fine. I can push and pull images without problems. But when I try to search for them, all of the following commands give a 404 response:
curl -k -s -X GET https://username:password@my-docker-registry.com/v1/search
404 page not found
curl -k -s -X GET https://username:password@my-docker-registry.com/v2/search
404 page not found
root@ip-10-232-0-191:~# docker search username:password@my-docker-registry.com/hello-world
FATA[0000] Invalid repository name (admin:admin), only [a-z0-9-_.] are allowed
root@ip-10-232-0-191:~# docker search my-docker-registry.com/hello-world
FATA[0000] Error response from daemon: Unexpected status code 404
I wanted to ask if anyone has any ideas why, and what the correct way is to use the Docker client to search the registry for images.
Looking at the API v2.0 documentation, do they simply not support a search function? Seems a bit strange to omit such functionality.
At least something works :)
root@ip-10-232-0-191:~# curl -k -s -X GET https://username:password@my-docker-registry.com/v2/hello-world/tags/list
{"name":"hello-world","tags":["latest"]}
To date, the search API is missing from registry v2.0.1, and the issue is under discussion here. I believe the search API is intended to land in v2.1.
EDIT: the /v2/_catalog endpoint is available in distribution/registry:master
Until the new registry API lands:
If you are using REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY, you may list the contents of that directory:
user@host:~# tree $REGISTRY_FS_ROOTDIR/docker/registry/v2/repositories -L 2
***/docker/registry/v2/repositories
└── repository1
└── image1
This may be useful for building a quick web UI, or if you have SSH access to the host storing the repositories:
ssh -T user@host -p <port> tree $REGISTRY_FS_ROOTDIR/docker/registry/ -L 2
Do look at the compose example, which deploys both a v1 and a v2 registry behind an nginx reverse proxy.
The latest version of Docker Registry, available from https://github.com/docker/distribution, supports the Catalog API (v2/_catalog). This provides the ability to list the repositories in the registry.
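For reference, a minimal sketch of querying the Catalog API with curl (the registry host, credentials, repository names, and page size below are placeholders, not taken from the answers above):
curl -u username:password https://my-docker-registry.com/v2/_catalog
{"repositories":["hello-world","repository1"]}
# paginate with n and last if the registry holds many repositories
curl -u username:password "https://my-docker-registry.com/v2/_catalog?n=100&last=hello-world"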
If interested, you can try the Docker image registry CLI I built to make it easier to use the search features in the new Docker Registry v2 distribution: (https://github.com/vivekjuneja/docker_registry_cli)
If you're on Windows, here's a PowerShell script to query v2/_catalog with basic HTTP auth.
https://gist.github.com/so0k/b59382ea7fd959cf7040
FYI, to use this you have to docker pull distribution/registry:master instead of docker pull registry:2. The registry:2 image is currently version 2.0.1, which does not come with the catalog endpoint.
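For example, a rough sketch of running the master build locally (the port mapping and data directory are assumptions, not from the original comment):
docker pull distribution/registry:master
docker run -d -p 5000:5000 -v /opt/registry-data:/var/lib/registry --name registry distribution/registry:master
curl http://localhost:5000/v2/_catalog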
Related
I've moved to Linux (Pop!_OS 21.04) on my desktop and I'm having some issues with Docker.
When I'm trying to run docker-compose to pull an image from a private registry I'm getting:
ERROR: Head "https://my.registry/my-image/manifests/latest": no basic auth credentials
Of course, before running this command I ran:
docker login https://my.registry.com -u user -p pass
which returns
WARNING! Your password will be stored unencrypted in /home/user/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
And my config.json in my .docker folder shows my credentials:
{
"auths": {
"my.registry.com": {
"auth": "XXXXX"
}
}
}
To install Docker I followed the instructions on their page https://docs.docker.com/engine/install/ubuntu/
And my version is:
Docker version 20.10.8, build 3967b7d
The same command run on a macOS system with Docker version 20.10.8 works without any issues, so my password and all the URLs are definitely correct.
Thanks for any help!
The login command is
docker login my.registry.com
Without the https:// in front of the host. If you still have auth issues doing that:
if the registry uses an unknown TLS certificate, load that certificate on the host and restart the docker engine
if the registry is http instead of https, configure it as an insecure registry in /etc/docker/daemon.json (see the sketch after this list)
if the login is successful, but the pull fails, verify your user has access to the specific repo on the registry
double check your password was correctly entered
check for a network proxy intercepting the request (the http_proxy variable)
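As a rough sketch of that insecure-registry configuration (the registry host and port are placeholders; adjust to your own):
# /etc/docker/daemon.json
{
  "insecure-registries": ["my.registry.com:5000"]
}
# then restart the engine so it picks up the change
sudo systemctl restart docker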
I reinstalled the whole thing again as the Docker page states; that didn't work, so I uninstalled it and installed the snap version, which didn't work either, and finally I removed that and went with a simple apt-get install docker.io, and it works like a charm! I don't know why it didn't work previously, but I won't lose more sleep over it.
On Ubuntu 20.x with the snap install, I observed that the credentials are stored in /home/<username>/snap/docker/1125/.docker/config.json.
If older credentials are stored in $HOME/.docker/config.json, they are not used by docker pull. Verify that docker is indeed picking up the credentials from the right config.json location.
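One way to check which config the CLI is actually reading (a sketch; the snap path below is an assumption for a snap-installed Docker):
cat $HOME/.docker/config.json
cat $HOME/snap/docker/current/.docker/config.json
# or point the CLI at an explicit config directory for a single command
DOCKER_CONFIG=$HOME/snap/docker/current/.docker docker pull my.registry.com/my-image:latest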
I want to pull a certain number of images from Docker Hub. But since I cannot access Docker Hub from my organization's internet connection, what are the ways by which I can pull those images?
The error is: ERROR: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Thanks.
You can try these steps. First, in an environment with internet access,
run the docker pull openjdk command to download the image,
then save the image as a tar file with the following command: docker save -o <path for generated tar file> <image name>.
Copy this tar file to the server that does not have internet access using scp or a similar method.
After you copy it,
run the following command to load the tar file on the server: docker load -i <generated tar name>.tar.
If you want to add a tag to this image, you can run the following command: docker image tag IMAGE_ID openjdk:latest.
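As a concrete sketch of those steps (the tar file name and the offline host are placeholders):
docker pull openjdk
docker save -o openjdk.tar openjdk
scp openjdk.tar user@offline-host:/tmp/
# then, on the offline server:
docker load -i /tmp/openjdk.tar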
Adding to the answer of @omernaci, you can either download the image in a separate environment, or use a proxy (preferred, as it works with the usual restrictions like isolating servers from the public internet):
Using a proxy
If your restricted environment has access to a proxy for this kind of management operation, you may just use it [1]:
HTTP_PROXY="http://proxy.example.com:80/" docker pull openjdk
or HTTPS_PROXY="https://proxy.example.com:443/" docker pull openjdk (if using an https proxy)
Or configure the proxy setting on the docker daemon as explained in https://docs.docker.com/config/daemon/systemd/#httphttps-proxy, and then you may just use docker pull openjdk normally.
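As a rough sketch of that daemon-level proxy configuration (the proxy URL is a placeholder):
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTPS_PROXY=http://proxy.example.com:80/"
# then reload systemd and restart the daemon
sudo systemctl daemon-reload
sudo systemctl restart docker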
Downloading the image on a separate environment
You can try these steps. First, in an environment with internet access,
run the docker pull openjdk command to download the image,
then save the image as a tar file with the following command: docker save -o <path for generated tar file> <image name>.
Copy this tar file to the server that does not have internet access using scp or a similar method.
After you copy it,
run the following command to load the tar file on the server: docker load -i <generated tar name>.tar.
If you want to add a tag to this image, you can run the following command: docker image tag IMAGE_ID openjdk:latest.
The best solution in this case would be to contact your network administrators and explain to them why you need to access this one URL. :)
As a workaround:
If it's not also restricted, VPN might help.
You could connect to a remote computer outside your network and use docker from there.
I have a Linux system where I have installed Docker. I also have a registry on Azure for which I have the username and password. To get a list of Docker images from a private registry, we can simply use a curl command like below:
curl localhost:5000/v2/_catalog
I tested this command when I installed a private registry on my machine, and it gave me the list of images I have in the registry. Now I have an Azure registry. I can log in to it successfully but don't know what command I can run to get the list of Docker images. Is this possible? For example, if I run:
curl myregistry.azurecr.io/v2/_catalog
It shows:
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>
How do I get the list of images stored in the Azure registry from my Linux machine?
Thanks
You can use the container registry CLI for Azure:
az acr repository list --name <acrName> --output table
https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-azure-cli
It is important to understand how Docker lists the images in a registry.
The Docker CLI provides commands to pull/push/delete images from a private Azure registry like myprivate.azurecr.io after the user authenticates with the docker login command, but the Docker CLI does not provide any command to list the images in the private registry.
It is important to understand that docker image ls only lists the images present on the local machine, not those in a registry.
There are multiple answers that describe the Docker HTTP API v2 (refer here) to list the images present in a registry. The HTTP API v2 (v2/_catalog and others) only works with a local registry created on-premise, but when a user wants to list the images present in a private Azure registry, one needs to use the Azure CLI.
What is a local registry?
The Registry is a stateless, highly scalable server-side application that stores and lets you distribute Docker images. The Registry is open source, under the permissive Apache license. A local registry can be created to store and distribute images in house or on-premise.
Refer here: https://docs.docker.com/registry/. One can create a private registry and push and pull images from there using the Docker HTTP API v2.
Azure CR is a special type, and in order to list its images there is no option other than the Azure CLI.
Use Case
- List the most recent tags of a repository in the registry (the example takes the first five, piping through PowerShell's select)
The command for the same can be
az acr repository show-tags -n <RegistryName> --repository <RepositoryName> --orderby time_desc --output table | select -First 5
Not being used to Azure, I accidentally got stuck on the idea that I needed Azure credentials to access the API, these answers strengthening that perception, but given you have the username/password you should be able to access it with a simple curl:
curl -L --user <username>:<password> myregistry.azurecr.io/v2/_catalog
{"repositories":["name1", "name2", "nameN"]}
As yamenk said, you could use Azure CLI 2.0 to query your registry on Azure.
Azure CLI 2.0 works on Linux and in Docker, so I think it could work on your Linux machine.
Also, you could use the Azure REST API to get the registry on Azure.
GET https://management.azure.com/subscriptions/<subscription id>/resourceGroups/<rg>/providers/Microsoft.ContainerRegistry/registries/<registry name>?api-version=2017-10-01
To get the token, please refer to this link.
Use the API like below:
curl -X "GET" "https://management.azure.com/subscriptions/********/resourceGroups/shuiapp/providers/Microsoft.ContainerRegistry/registries/shuitest?api-version=2017-10-01" \
-H "Authorization: Bearer $token" \
-H "Content-Type: application/json"
OS: Ubuntu 16.04
Hypervisor: VirtualBox
Network configuration: NAT network with port forwarding to access the VMs through the host IP. I can also ping one VM from another.
I am trying to connect my Jenkins app hosted on a VM to my Bitbucket server, also on a VM. I followed a tutorial on the internet, but when I enter the address of my git repository I get this:
Failed to connect to repository : Command "usr/bin/git ls-remote -h http://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'http://admin@192.168.6.102:8005/scm/tes/repository-test.git/': The requested URL returned error: 403
So, to be sure, I tried to execute the command in the terminal... and in the terminal it seems to work. I can also push, clone, pull, etc.
In this image you can see that it works.
Do you have an explanation?
EDIT:
I tried some other things, like running with and without sudo, to see if the problem came from permissions, and it seems that it's not the case.
But I see that there is no output when we use the "HEAD" argument.
Do you think that because "HEAD" gives no result, git in Jenkins interprets it as no answer and returns the damn error 403?
EDIT 2:
I found that on the web: http://jenkins-ci.361315.n4.nabble.com/Jenkins-GIT-ls-remote-error-td4646903.html
The guy has the same problem but in a different way, I will try to allocate more RAM to see if it does the trick.
There could be many possible problems, but you are getting 403 - Access Forbidden, which indicates some problem with permissions. I would suggest checking common mistakes first:
a) try https instead of http - my SCM only accepts https,
b) check that the admin user is correct - the SCM by default uses scmadmin.
Here I ran the exact same command twice.
The first time I used the proxy configuration which I need to access the internet, and the second time I set the mandatory server to "none".
So there is a problem with the damn proxy.
I thought that the proxy was not used in a NAT connection with VirtualBox...
I found the solution.
I had to reinstall Jenkins to get a user named "jenkins" with its own home directory.
I don't know if it is linked or not, but I configured my Bitbucket server to use only HTTPS with a self-signed certificate (I work in a LAN).
My problem was linked to my proxy settings.
I had disabled all my proxy settings in Linux, which is why I was able to run from the terminal the command that didn't work in Jenkins.
When I logged in with sudo su jenkins, the commands also worked.
I found out that in the home directory of the jenkins user there was a "proxy.xml" file. I opened it and saw my old proxy settings.
I deleted all the content with vim, saved, restarted, and the error was gone.
There can be a git version mismatch.
I would suggest updating git; maybe it will resolve your issues.
I have downloaded the sandbox from Hortonworks (CentOS), then tried to follow the tutorial. It seems like the ambari-admin-password-reset command is not there and is missing. I also tried to log in with PuTTY; the console asked me to change the password, so I did.
Now it seems like the command is there, but I have one password for the console and a different one for PuTTY for the same user.
I have tried to find out why, for the same user 'root', I have two different passwords that I can log in with (one for the VirtualBox console and one for PuTTY). I see different commands available on each box. More than that, when I share a folder I can only see it in the VirtualBox console but not in the PuTTY console, which is really frustrating.
How can I ensure that what I see from PuTTY is the same as what I see from the VirtualBox console?
I think it is somehow related to TTY, but I am not sure.
EDIT:
running commands on the VirtualBox machine gives this output:
grep "^passwd" /etc/nsswitch.conf
OUT: passwd: files sss
grep root /etc/passwd
OUT: root:x:0:0:root:/root:/bin/bash
operator:x:11:0:operator:/root:/sbin/nologin
getent passwd root
OUT: root:x:0:0:root:/root:/bin/bash
EDIT:
I think this is all about Docker containers. It seems like the machine's port 2222 is the SSH port for the HDP 2.5 container and not for the hosting machine.
Now I get another problem. When running
docker exec sandbox ls
it gets stuck. Any help?
Thanks to any helpers.
So now I had the time to analyze the sandbox VM, and write it up for other users.
As you stated correctly in your edit of the question, it's the Docker container setup of the sandbox which confuses things, with two separate root users:
via ssh root@127.0.0.1 -p 2222 you get into the Docker container called "sandbox". This is CentOS release 6.8 (Final), containing all the HDP services, especially the Ambari service. The configuration enforces a password change at first login for the root user. Inside this container you can also execute ambari-admin-password-reset and set a password for the Ambari admin.
via console access you reach the Docker host running CentOS 7.2; here you can log in with the default root password for the VM, as found in the HDP docs.
Coming to your sub-question with the hanging docker exec, it seems to be a bug in that specific docker version. If you google that, you will find issues discussing this or similar problems with docker.
So I thought that it would be a good idea to just update the host via yum update. However, this turned out to be a difficult path.
yum tried to update the kernel, but complained that there is not enough space on the boot partition.
So I moved the boot partition to the root partition:
edit /etc/fstab and comment out the boot entry
unmount /boot
mv /boot
cp -a /boot.org /boot
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda
reboot
After that I found out that the Docker configuration was broken and Docker would not start anymore. In the logs it complained about:
"Error starting daemon: error initializing graphdriver:
\"/var/lib/docker\" contains other graphdrivers: devicemapper; Please
cleanup or explicitly choose storage driver (-s )"
So I edited /etc/systemd/system/multi-user.target.wants/docker.service and changed the ExecStart setting to:
ExecStart=/usr/bin/dockerd --storage-driver=overlay
After a service docker start and a docker start sandbox, the container worked again; I could log in to the container, and after an ambari-server restart everything worked again.
And now - with the new docker version 1.12.2, docker exec sandbox ls works again.
So to sum up: the docker exec command has a bug in that specific version of the sandbox, but you should think twice about whether you want to upgrade your sandbox.
I ran into the same issue.
The HDP 2.5 sandbox runs all of its components in a docker container, but commands like docker exec -it sandbox /bin/bash or docker attach sandbox got stuck.
When I ran a simple ps aux, I found several /usr/bin/docker-proxy commands which looked like:
/usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 60000 -container-ip 172.17.0.2 -container-port 60000
They probably forward the HTTP ports of the various UIs of HDP components.
I could ssh into the container ip (here 172.17.0.2) using root/hadoop to authenticate. From there, I could use all "missing" commands like ambari-admin-password-reset.
$ ssh root@172.17.0.2
... # change password
$ ambari-admin-password-reset
NB: I am new to docker, so there's probably a better way to deal with this.
I'd like to post the instructions for 3.0.1 here.
I followed the instructions of installing hortonworks version 3.0.1 here: https://youtu.be/5TJMudSNn9c
After running the Docker container, go to your browser and enter "localhost:4200"; that will take you to the in-browser terminal of the container that hosts Ambari. Enter "root" for the login and "hadoop" for the password, change the root password, and then enter "ambari-admin-password-reset" to reset the Ambari password.
In order to be able to use sandbox-hdp.hortonworks.com, you need to add the line "127.0.0.1 sandbox-hdp.hortonworks.com" at the end of the /private/etc/hosts file on your Mac.
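For example, a one-line way to append it (a sketch; assumes macOS and sudo rights):
echo "127.0.0.1 sandbox-hdp.hortonworks.com" | sudo tee -a /private/etc/hosts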
If the password is rejected as incorrect, you can reset it from recovery mode:
In the top-right corner, click the power button, choose Restart from the power-off drop-down, and while the machine boots press the Esc key to get into the recovery menu.
Select the advanced option and hit Enter.
Select Recovery mode and hit Enter.
Select Root and hit Enter to drop to a root shell.
Then run:
mount -rw -o remount /
ls /home
and change the password with:
passwd username
(using your own username); enter the new password twice when prompted.
Hopefully you changed the password (: