I'm getting the following error message when trying to run a Docker container with which I want to share some data via a directory (I added the emphasis):
##[error]/usr/bin/docker: Error response from daemon: create -v /opt/vsts/work/1/s/coverage: "-v /opt/vsts/work/1/s/coverage" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
What I don't understand is that, to my knowledge, /opt/vsts/work/1/s/coverage is an absolute path, as indicated by the leading forward slash.
Can someone explain what I'm doing wrong?
The build script was passing "-v /opt/vsts/work/1/s/coverage" as the actual volume name, i.e. the command that effectively ran was
docker run -v "-v /opt/vsts/work/1/s/coverage:[...]"
instead of
docker run -v /opt/vsts/work/1/s/coverage:[...].
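A minimal sketch of how this kind of bug typically arises in a shell-based build script (the variable name, the image name, and the container path /coverage are placeholders, not from the actual script):

# BROKEN: the flag and the mapping are packed into one string, and the
# script adds another -v, so Docker receives the literal string
# "-v /opt/vsts/work/1/s/coverage:/coverage" as the value of -v, splits it
# at the colon, and rejects "-v /opt/vsts/work/1/s/coverage" as a volume name
MOUNT="-v /opt/vsts/work/1/s/coverage:/coverage"
docker run -v "$MOUNT" my-image

# FIXED: pass only the host:container mapping as the value of -v
docker run -v "/opt/vsts/work/1/s/coverage:/coverage" my-image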
Thanks @larsks for pointing me in the right direction.
So in order to set up the amazon-ecr-credential-helper I need to add some lines to the .docker/config.json file on my EC2 instance. When I try to run the script
echo "{ \"credHelpers\": { \"acc_id.dkr.ecr.acc_region.amazonaws.com\": \"ecr-login\" } }" > ~/.docker/config.json
I get the error: -bash: /root/.docker/config.json: No such file or directory
Docker is installed and I'm using the root user. This is an Amazon Linux EC2 instance. Can someone please tell me what is wrong here? Doesn't Docker already create the .docker folder, or is this something I need to do?
Some context: I intend to run this script as part of the EC2 user data, but have been facing issues, so I'm trying to debug it directly on the instance first.
Any hints in the right direction would be highly appreciated.
Thanks!
"Doesn't Docker already create the .docker folder?"
No, Docker doesn't create a .docker folder in every user's home directory. You need to create that folder yourself.
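A minimal sketch of the fix (single quotes also avoid all the escaping; acc_id and acc_region are the placeholders from the question):

# The directory is not pre-created for you, so create it first
mkdir -p ~/.docker
# Then write the credential-helper mapping
echo '{ "credHelpers": { "acc_id.dkr.ecr.acc_region.amazonaws.com": "ecr-login" } }' > ~/.docker/config.json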
I'm trying to follow these steps to get a Docker container running NextCloud on my Raspberry Pi. The steps seem very straightforward, except I can't seem to get this working. The biggest difference is that I want to use an external drive as the data location. Here's what's happening:
I run sudo docker run -d -p 4442:4443 -p 442:443 -p 79:80 -v /mnt/nextclouddata:/data --name nextcloud ownyourbits/nextcloudpi-armhf
but when I go to https://pi_ip_address:442/activate (or any of the other ports), I get "problem loading page". I've also tried using https://raspberrypi.local:442/activate as well as appending both the IP and the name to the end of the command (where the DOMAIN is listed in the instructions).
I've seen some posts talking about how this is a problem with how docker accesses mounted drives, but I can't seem to get it working. When I type sudo docker logs -f nextcloud I get the following errors:
/run-parts.sh: line 47: /etc/services-enabled.d/010lamp: Permission denied
/run-parts.sh: line 47: /etc/services-enabled.d/020nextcloud: Permission denied
Init done
Does anyone have any steps to help get this working? I can't seem to find a consistent/working answer.
Thanks!
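In the meantime, a few generic checks that might narrow this down (hypothetical troubleshooting steps, not a confirmed fix):

# Did the container stay up, or did it exit after the permission errors?
sudo docker ps -a --filter name=nextcloud
# Can the container read the mounted data directory at all?
sudo docker exec nextcloud ls -ld /data
# Is anything answering on the mapped port from the Pi itself?
curl -k https://localhost:442/activate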
I want to pull a certain number of images from Docker Hub, but since I cannot access Docker Hub from my organization's network, what are the ways I can pull those images?
The error is: ERROR: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Thanks.
You can try these steps. First, in an environment with internet access, run the docker pull openjdk command (as an example) to download the image.
Then save the image as a tar file with the following command: docker save -o <path for generated tar file> <image name>.
Copy this tar file to the server that does not have internet access, using scp or similar methods.
After you copy it, run the following command to load the tar file on the server: docker load -i <generated tar name>.tar.
If you want to add a tag to this image, you can run the following command: docker image tag IMAGE_ID openjdk:latest.
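Putting those steps together as a sketch (openjdk as the example image; the hostname and paths are placeholders):

# On a machine WITH internet access:
docker pull openjdk
docker save -o /tmp/openjdk.tar openjdk

# Copy the archive to the restricted server:
scp /tmp/openjdk.tar user@restricted-server:/tmp/

# On the restricted server:
docker load -i /tmp/openjdk.tar
# Tags are preserved by save/load, but you can re-tag if needed:
docker image tag openjdk openjdk:latest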
Adding to the answer of @omernaci: you can either download the image on a separate environment, or use a proxy (preferred, as it works with the usual restrictions like isolating servers from the public internet):
Using a proxy
If your restricted environment has access to a proxy for this kind of management operation, you may just use it:
HTTP_PROXY="http://proxy.example.com:80/" docker pull openjdk
or, if using an HTTPS proxy: HTTPS_PROXY="https://proxy.example.com:443/" docker pull openjdk
Or configure the proxy setting on the Docker daemon, as explained in https://docs.docker.com/config/daemon/systemd/#httphttps-proxy, and then you may just use docker pull openjdk normally.
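For the daemon-level option, the linked page amounts to a systemd drop-in along these lines (the proxy URLs are placeholders; this configures the daemon's own registry traffic, not your containers):

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTPS_PROXY=https://proxy.example.com:443/"
EOF
# Reload systemd and restart the daemon to pick up the change
sudo systemctl daemon-reload
sudo systemctl restart docker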
Downloading the image on a separate environment
If a proxy is not an option, follow the same steps as in @omernaci's answer above: docker pull the image on a machine with internet access, docker save it to a tar file, copy the file over with scp or similar, docker load it on the restricted server, and optionally re-tag it with docker image tag.
The best solution in this case would be to contact your network administrators and explain to them why you need to access this one URL. :)
As a workaround:
If it's not also restricted, a VPN might help: you could connect to a remote computer outside your network and use Docker from there.
I've got a situation where I've got a .json file that I want to persist between runs of a given container. In addition, this file needs to be appended to by the container while it runs.
The syntax which I've been using so far is as follows:
docker run -d -p 3001:3001 -v /usr/bob/database.json:/app/data/database.json:rw --name=myapp appImage
Nothing gets inserted into the file (though I can cat the contents inside and outside the container to confirm they're the same). I have ensured that the root user (yes, not best practice) who is running Docker owns all of the files in that folder and has full rwx.
What DOES work is if I bind at the folder level, e.g.:
docker run -d -p 3001:3001 -v /usr/bob:/app/data --name=myapp appImage
Can anyone explain the difference?
I feel that sharing access to a folder instead of a single file is a lot less precise, and it also causes structural changes in the app (e.g. source control with multiple files, plus the .json file mentioned, in the same folder).
Thanks in advance for any pointers.
Thanks,
Andrew
Mounting a file as a volume mounts a specific inode inside the container. Many tools that modify a file will change the inode when writing a new copy of the file; this new inode is then stored in the directory as the new pointer to that filename. When the directory is mounted you see the change on your host, but otherwise you only see it inside the container, since the inode on the host and the pointer to it in the host directory are unchanged.
There are more details on this behavior in Docker's tutorial on volumes: https://docs.docker.com/engine/tutorials/dockervolumes
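You can watch the inode swap happen on any Linux host, independent of Docker (a standalone demo):

# Create a file and note its inode number
echo '{"runs": 0}' > database.json
stat -c %i database.json    # prints some inode number, e.g. 1054321

# sed --in-place writes a new copy and renames it over the original...
sed -i 's/0/1/' database.json

# ...so the same filename now points at a different inode
stat -c %i database.json    # prints a different number than before

A bind mount of the file itself keeps referring to the old inode, which is why the write never shows up on the other side; mounting the parent directory follows the name instead of the inode.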
I'm having a slightly annoying issue while using a Docker container (I'm on Ubuntu, so no virtualization like VMWare or b2d). I've built my image, and have a running container that has one shared (mounted) directory from my host, and one shared (mounted) file from my host. Here's the docker run command in full:
docker run -dit \
-p 80:80 \
--name my-container \
-v $(pwd)/components:/var/www/components \
-v $(pwd)/index.php:/var/www/index.php \
my-image
This works great, and both /components (and its contents) and the file are shared appropriately. However, when I want to make changes to either the directory (e.g. adding a new file or folder) or edit the mounted file (or any file in the directory), I'm unable to do so due to incorrect permissions. Running ls -lFh shows that the owner and group for the mounted items have been changed to libuuid:libuuid. Modifying either the file or the parent directory requires root permissions, which impedes my workflow (as I'm working from Sublime Text, not the terminal, I'm presented with a popup for admin privileges).
Why does this occur? How can I work around this / handle this properly? From Managing Data Volumes: Mount a Host File as a Data Volume:
Note: Many tools used to edit files including vi and sed --in-place may result in an inode change. Since Docker v1.1.0, this will produce an error such as “sed: cannot rename ./sedKdJ9Dy: Device or resource busy”. In the case where you want to edit the mounted file, it is often easiest to instead mount the parent directory.
This would seem to suggest that instead of mounting /components and /index.php, I should mount the parent directory of both. Sounds great in theory, but based on the behavior of the -v option and how it interacts with /directory, it would seem that every file in my parent directory would be altered to be owned by libuuid:libuuid. Additionally, I have lots of things inside the parent directory that are not needed in the container: things like build tools, various files, some compressed folders, etc. Mounting the whole parent directory would seem to be wasteful.
Running chown user:group on /components and /index.php on my host machine allows me to work around this, and the files seem to continue to sync with the container. Is this something I'll need to do every time I run a container with mounted host volumes? I'm guessing that there is a more efficient way to do this, and I'm just not finding an explanation for my particular use-case anywhere.
I am using this container for development of a module for another program, and have no desire to manage a data-only container - the only files that matter are from my host; persistence isn't needed elsewhere (like a database, etc).
Dockerfile and /setup (both created on Pastebin to avoid an even longer post; never expires).
After creating the image, this is the run command I'm using:
docker run -dit \
-p 80:80 \
--name my-container \
-v $(pwd)/components:/var/www/wp-content/plugins/my-plugin-directory/components \
-v $(pwd)/index.php:/var/www/wp-content/plugins/my-plugin-directory/index.php \
my-image
It looks like your chown -R nginx:nginx ... commands inside your container are changing the ownership bits on your files. Ownership is stored as a numeric UID, and the nginx user inside the container happens to have the same UID as the libuuid user on your host, so the files show up as owned by libuuid on your host machine.
See Understanding user file ownership in docker: how to avoid changing permissions of linked volumes for a basic explanation on how file ownership bits work between your host and your docker containers.
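A quick way to confirm that UID mapping on your own machine (using the container name from the run command above):

# Numeric UID of the nginx user inside the container
docker exec my-container id -u nginx    # e.g. prints 100

# Host user that owns that same UID (here it happens to be libuuid)
getent passwd 100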