Cannot get Docker to use pull-through cache - Linux

I am trying to cache images locally in a Docker registry using a pull-through cache, as described here: https://docs.docker.com/registry/recipes/mirror/. But when I do, Docker seems to ignore the pull-through cache and instead pulls images directly from Docker Hub.
I have a Docker Compose file that I am using to run the registry:
version: '3.9'
services:
  registry:
    restart: always
    image: registry:2
    ports:
      - 5000:5000
    volumes:
      - ~/.docker/registry:/var/lib/registry
      - ./registry-config.yml:/etc/docker/registry/config.yml
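The registry-config.yml enables proxying as described in the mirror recipe. A minimal sketch of such a config (the proxy remoteurl is the value from the recipe; the storage and http sections are assumed defaults):

version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  remoteurl: https://registry-1.docker.io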
And I have configured Docker to use the registry via the --registry-mirror option, which I have confirmed is taking effect by checking the output of docker info:
$ docker info
...
Registry Mirrors:
https://localhost:5000/
...
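(For completeness: the equivalent /etc/docker/daemon.json entry for that mirror setting, assuming it was configured there rather than via a dockerd command-line flag, would be:

{
  "registry-mirrors": ["https://localhost:5000"]
}

followed by a daemon restart, e.g. sudo systemctl restart docker.)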
But when I try to pull an image, I see no activity in the registry's logs, as if Docker is ignoring the mirror option and just going straight to Docker Hub.
I have also confirmed the registry itself is working by pulling alpine:latest from Docker Hub, tagging it as localhost:5000/alpine:latest, pushing it to the registry, and then pulling it back out again. All of this works fine.
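Concretely, the round trip that works is:

docker pull alpine:latest
docker tag alpine:latest localhost:5000/alpine:latest
docker push localhost:5000/alpine:latest
docker pull localhost:5000/alpine:latest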
This is all being done on a Linux VM running Ubuntu.
Can someone please help me understand what I am doing wrong, and how I am supposed to get Docker to pull all images through the pull-through cache?
Thanks in advance for your help.

Related

Azure Container Instances unable to pull public docker hub images with docker compose deployment

I'm trying to deploy a docker compose file to Azure Container Instances (ACI) using public Docker Hub images, following these two tutorials: docker, youtube. However, it keeps saying I can't pull a public Docker Hub image:
containerinstance.ContainerGroupsClient#CreateOrUpdate:
Failure sending request: StatusCode=400 -- Original Error: Code="MultipleErrorsOccurred"
Message="Multiple error occurred:
'BadRequest':'InaccessibleImage':'The image 'selenium/standalone-firefox:latest' in container group 'test_ui_automation' is not accessible. Please check the image and registry credential.
The process itself should be straightforward: when an ACI instance gets set up there are no existing registry credentials, and the image is public on Docker Hub.
I've logged into both Azure and Docker Hub. Based on the youtube tutorial, it should just be a matter of creating an ACI instance and then running docker compose -f azure-testproject-docker.yaml up -d.
azure-testproject-docker.yaml
version: '3.1'
services:
  testproject-agent:
    image: testproject/agent:latest
    container_name: testproject-agent
    depends_on:
      - chrome
      - firefox
    volumes:
      - mydata:/var/testproject/agent
    environment:
      TP_AGENT_ALIAS: "MY DOCKER AGENT"
      TP_API_KEY: "MY KEY"
      TP_JOB_PARAMS: '"jobParameters" : { "browsers": [ "chrome", "firefox" ] }'
      CHROME: "chrome:4444"
      FIREFOX: "firefox:4444"
  chrome:
    image: selenium/standalone-chrome:latest
    volumes:
      - mydata:/dev/shm
  firefox:
    image: selenium/standalone-firefox:latest
    volumes:
      - mydata:/dev/shm
volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: myfileshare
      storage_account_name: mystorageaccount
The docker-compose file is quite basic; I just used TestProject's default docker compose file. The images download fine on local Docker, but the Azure part doesn't work.
The whole point was to deploy with only a Docker Compose file and no Dockerfile, as it's just using public Docker Hub images.
Has anyone experienced a similar issue?
Solution
Remove container_name as not supported in ACI
Turns out the issue was simply due to container_name not being supported.
As per the comment by Andriy Bilous:
First you should look at supported ACI docker-compose features, as at least container_name is not supported. docs.docker.com/cloud/aci-compose-features – Andriy Bilous
Resolved the issue just by removing the container_name line.
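For clarity, the working service definition is the one from the question with only that line dropped; a sketch:

services:
  testproject-agent:
    image: testproject/agent:latest
    # container_name: testproject-agent  <- removed; not supported by the ACI integration
    depends_on:
      - chrome
      - firefox
    # ...rest of the service unchanged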

Move TheSpaghettiDetective from local machine to Azure Container Instances

I have cloned the TheSpaghettiDetective repository (https://github.com/TheSpaghettiDetective/TheSpaghettiDetective) and used its docker-compose.yml to build it, and it works great on my local machine. Now I want to push it to Azure Container Instances in a multi-container group, but I can't get it working.
I tried using this tutorial from Microsoft, but this project is a lot more complex.
The docker-compose file creates 4 containers. I couldn't figure out how to push them to Azure Container Registry with the tutorial, but was able to do so easily with the Docker extension in VS Code.
Then when I tried to deploy the images, I was able to get the Docker context set up, but the images wouldn't deploy. I think it's because they rely on the files I downloaded from GitHub, so I think I need to set up a file share in Azure?
Where do I go from here? Is there no easy way to clone the repository into Azure and use docker-compose up like I'm used to?
The problem that the images don't deploy comes from the fact that the docker-compose.yml file does not set the image option; it only sets the build option to build the images. But once the images are pushed to ACR, you don't need to build them again, you just need to pull them via the image option. See the example in the link you found:
version: '3'
services:
  azure-vote-back:
    image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
    container_name: azure-vote-back
    environment:
      ALLOW_EMPTY_PASSWORD: "yes"
    ports:
      - "6379:6379"
  azure-vote-front:
    build: ./azure-vote
    image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
    container_name: azure-vote-front
    environment:
      REDIS: azure-vote-back
    ports:
      - "8080:80"
You can see that the docker-compose.yml file uses the image option to set the image name alongside the build option that builds the image. So try adding the image option to your docker-compose.yml file.
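As a sketch, each build-only service gets an image option pointing at your registry (the ACR login server below is a placeholder); docker-compose build then tags the result with that name, and docker-compose push uploads it:

services:
  web:
    build: ./web
    image: myregistry.azurecr.io/web:v1  # placeholder ACR login server, repository, and tag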

Access denied when pushing docker image to gitlab's (on prem) integrated docker registry

When pushing a Docker image with a modified tag (to contain the registry) to the GitLab integrated registry, I get an access denied error.
The GitLab registry is used per project. Once the registry is enabled for a project, there is a hint on how to push images to it at https://gitlab.mydomain.com/path/to/project/container_registry.
The problem was solved when the full path was included in the tag name.
When I changed the tag name to [registryUrl]:[registryPort]/path/to/project/[imageNameWithTags], I was able to push to the repository/registry.
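For example (host, port, and image name below are placeholders):

docker tag myimage:latest gitlab.mydomain.com:4567/path/to/project/myimage:latest
docker push gitlab.mydomain.com:4567/path/to/project/myimage:latest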
Indeed you need to do docker login ... as described on the /container_registry page.
You can also rely on some GitLab Predefined environment variables to make code generic and re-use it in many projects.
Here is the example of doing it in .gitlab-ci.yml:
build-image:
  stage: build
  image: docker:latest
  services:
    - name: docker:dind
  script:
    - docker build -t $CI_REGISTRY_IMAGE .
    - docker login -u $CI_REGISTRY_USER -p "$CI_JOB_TOKEN" $CI_REGISTRY
    - docker push $CI_REGISTRY_IMAGE
See the full example in one of our projects.

MongoDB and unrecognised option '--enableEncryption'

I have a problem when I run a mongo image with docker-compose.yml. I need to encrypt my data because it is very sensitive. My docker-compose.yml is:
version: '3'
services:
  mongo:
    image: "mongo"
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db
I have checked that the mongodb-keyfile exists in data/db, so no problem there. But when I build and bring up the image, the command is:
"docker-entrypoint.sh mongod --enableEncryption --encryptionKeyFile /data/db/mongodb-keyfile"
The status:
About a minute ago Exited (2) About a minute ago
Looking at the logs, I see:
Error parsing command line: unrecognised option '--enableEncryption'
I understand the error, but I don't know how to solve it. I am considering making a Dockerfile from an Ubuntu (or whatever Linux) image and installing Mongo with all the necessary configuration, unless there is a better way to solve it.
Please help me, thanks.
According to the documentation, encryption is available in MongoDB Enterprise only, so you need a paid subscription to use it.
For the Docker image of the Enterprise version, it says here that you can build it yourself:
Download the Docker build files for MongoDB Enterprise.
Set MONGODB_VERSION to your major version of choice.
export MONGODB_VERSION=4.0
curl -O --remote-name-all https://raw.githubusercontent.com/docker-library/mongo/master/$MONGODB_VERSION/{Dockerfile,docker-entrypoint.sh}
Build the Docker container.
Use the downloaded build files to create a Docker container image wrapped around MongoDB Enterprise. Set DOCKER_USERNAME to your Docker Hub username.
export DOCKER_USERNAME=username
chmod 755 ./docker-entrypoint.sh
docker build --build-arg MONGO_PACKAGE=mongodb-enterprise --build-arg MONGO_REPO=repo.mongodb.com -t $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION .
Test your image.
The following commands run mongod locally in a Docker container and check the version.
docker run --name mymongo -itd $DOCKER_USERNAME/mongo-enterprise:$MONGODB_VERSION
docker exec -it mymongo /usr/bin/mongo --eval "db.version()"
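Once the Enterprise image is built, the compose file from the question should work with that image swapped in; a sketch reusing the illustrative names from the steps above:

version: '3'
services:
  mongo:
    image: "username/mongo-enterprise:4.0"  # the image built above; name is illustrative
    command: ["mongod", "--enableEncryption", "--encryptionKeyFile", "/data/db/mongodb-keyfile"]
    ports:
      - "27017:27017"
    volumes:
      - $PWD/data:/data/db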

Docker: Uses an image, skipping (docker-compose)

I am currently trying out this tutorial for node express with mongodb
https://medium.com/@sunnykay/docker-development-workflow-node-express-mongo-4bb3b1f7eb1e
The first part, building from the docker-compose.yml, works fine.
Since it works totally fine when built locally, I tried to tag it and push it to my Docker Hub to learn and try more.
This is what was originally in the yml file, following the tutorial:
version: "2"
services:
web:
build: .
volumes:
- ./:/app
ports:
- "3000:3000"
This works like a charm when I use docker-compose build and docker-compose up.
So I pushed it to my Docker Hub, tagged as node-test.
I then changed the yml file to:
version: "2"
services:
web:
image: "et4891/node-test"
volumes:
- ./:/app
ports:
- "3000:3000"
Then I removed all the images I had previously, to make sure this also works... but when I run docker-compose build, I see the error message web uses an image, skipping and nothing happens.
I tried googling the message but couldn't find much.
Can someone please give me a hand?
I found out I was being stupid.
I didn't need to run docker-compose build; I can just run docker-compose up directly, since it will pull the images down. The build command is only for building images locally.
In my case, the command below worked:
docker-compose up --force-recreate
I hope this helps!
Clarification: This message (<service> uses an image, skipping) is NOT an error. It is informing the user that the service uses an image and is therefore pre-built, so it is skipped by the build command.
In other words: you don't need build, you need to up the service.
Solution:
run sudo docker-compose up <your-service>
PS: In case you changed some configuration in your docker-compose file, use the --force-recreate flag to apply the changes by recreating the containers:
sudo docker-compose up --force-recreate <your-service>
My problem was that I wanted to upgrade the image, so I tried to use:
docker build --no-cache
docker-compose up --force-recreate
docker-compose up --build
None of these rebuilt the image.
What was missing (from this post) was:
docker-compose stop
docker-compose rm -f # remove stopped service containers
docker-compose pull # download new images
docker-compose up -d
