I'm trying to set up an environment through a Docker container on my Windows 7 computer. I have problems with mounting a folder into my container. When I run
docker run -it --volume /C/Users/Public/docker_share:/home --rm ubuntu bash
I get no errors, but I can't see the files from my Windows host inside the container:
cd /home
ls
is empty, but on my Windows computer I have a text file and a folder in C:/Users/Public/docker_share.
I have found multiple threads on this topic, but none of the suggested solutions solved my problem.
(Docker version 17.03.1-ce, build c6d412e)
I had the same problem, and after searching many forums I found a post with the solution that worked for me.
When defining your volume, write the "c" in lowercase (/c/Users/Public/doc). This seems odd, but it worked in my case, so give it a try if you haven't found a solution yet.
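Applied to the command from the question, that would be:
docker run -it --volume /c/Users/Public/docker_share:/home --rm ubuntu bash
(I can't verify this on your machine, but the lowercase drive letter matches the path convention the Docker Toolbox VM uses for folders shared from Windows.)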
I'm having issues now with my Portainer. I run Ubuntu with Docker and Portainer, and I ran the apt-get upgrade and install commands through the terminal to update some things. Now when I go to Portainer, all my containers are gone, and when I try to deploy them again I get the following:
failed to deploy a stack: Creating Container xxxxx Starting Error response from daemon: error while creating mount source path '/docker/ghost/mysql': mkdir /docker: read-only file system
At the time, the only thing I could think of that might have caused this issue was that Ubuntu said there were updates to be installed, so I let it install them and then also ran the apt-get commands in the terminal:
apt-get upgrade
apt-get install
My Ubuntu installation has 80 GB of storage free (someone said it may be a storage issue).
I was on the Portainer Slack channel trying to get help from a staff member. He had me try "docker ps", which didn't work; I had to use "sudo docker ps" instead, which gave me this second error message:
permission denied while trying to connect to the docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock: connect: permission denied
I also tried the docker group setup from the Docker docs (https://docs.docker.com/engine/install/linux-postinstall/) and added $USER to the docker group. That lets me run hello-world from the terminal, but it still doesn't let me deploy from Portainer.
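For reference, the group setup commands from that docs page are:
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker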
I still get the same deploy error from Portainer:
failed to deploy a stack: Creating Container xxxxx Starting Error response from daemon: error while creating mount source path '/docker/ghost/mysql': mkdir /docker: read-only file system
This was from one of the stacks I tried to redeploy, since it uses the /docker/ghost/mysql path. It also doesn't let me redeploy any other stacks; I tried that too.
I'm really unsure what to do and how to fix this, since it basically doesn't let me use any of those containers now. Any help would be really appreciated; I'm quite on edge right now! Thanks.
I wasn't really expecting to have any of these issues. I'm not entirely sure how it happened; I'm assuming it was maybe when I was using the "apt-get" commands, but I don't really know myself. I would just like this fixed so I can get my data and containers back up and running in Portainer. Yes, I also know Portainer and Docker are different things and that Portainer is only a utility for Docker.
Edit: to add to this, I have also re-installed Portainer and Docker. It was not a full Docker refresh, but the standard one that keeps some of the files, since I don't want to remove the directories where some containers keep their configs and data files.
I am writing software on macOS. As a subroutine I would like to call certain Linux-only CLI tools, e.g., > mytool inputfile. Can I use Docker for Mac to compile the Linux tool inside a container and call it from outside the container (after copying input files into the container)? And if I can, is it a good idea, or will there be issues installing and compiling Linux packages?
From my understanding of Docker as basically a lightweight VM that uses a stripped-down Linux distribution, this approach seems to make sense, but the stripped-down aspect might be an impediment.
Can Docker be used to run Linux CLI tools from macOS?
Docker supports macOS, according to the documentation.
Can I use Docker for Mac to compile the Linux tool inside a container and call it from outside the container (after copying input files into the container)?
Yes.
And if I can, is it a good idea
That depends on what "good" means; it's subjective and depends heavily on the specific case.
or will there be issues installing and compiling Linux packages?
No.
From my understanding of docker as basically a lightweight VM
Yes.
that uses a stripped-down Linux distribution, this approach seems to make sense, but the stripped-down aspect might be an impediment.
What is inside a Docker container depends on the image. Usually, man pages and the system package manager's repository metadata are removed from images, but otherwise I would disagree: most Docker images come with essentially full Linux distributions and can be used as such.
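As a minimal sketch of the compile-and-call workflow from the question (mytool and its apt-get install step are placeholders for whatever the real tool actually needs):
# one-time: set up an Ubuntu container with the Linux-only tool, then snapshot it as an image
docker run --name toolbox ubuntu bash -c "apt-get update && apt-get install -y mytool"
docker commit toolbox mytool-image
# afterwards: call the tool from macOS, sharing the current directory instead of copying files in
docker run --rm -v "$PWD":/work -w /work mytool-image mytool inputfile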
You should do as follows:
docker run --rm -v /:/host -ti ubuntu ... your command referring to /host...
Here is an explanation of the command's parameters:
--rm : removes the container after it exits (but keeps the image cached for subsequent calls).
-t : allocates a pseudo-terminal.
-i : runs in interactive mode (keeps STDIN open).
-v /:/host : maps your root folder to the container's /host folder.
ubuntu : uses the ubuntu image (pulling it on first use), which you can replace with any other you prefer.
As the last parameter, put the command to run inside the container, with paths expressed relative to /host.
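For example, to list the host's /Users directory from inside the container:
docker run --rm -v /:/host -ti ubuntu ls /host/Users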
I recently started using Docker (the Desktop version for Windows) for my Node project development. I have a docker-compose file with a volume configuration that shares the project source files between my host machine and the Docker container.
When I need to install a new node module, I can't do that on my host machine, of course, because it's Windows and Docker is Linux or something, so I run docker exec -it my-service bash to "get into" the Docker container and then run yarn add something from inside it. The problem is that yarn runs extremely slowly and freezes almost every time. The Docker container then becomes unresponsive: I cannot cancel the yarn command or stop the container using docker-compose stop. The only way I've found to recover is to restart the whole Docker engine. Then, to finally install the new module after the Docker engine restarts, I delete the node_modules folder and repeat the same steps. This time it's still extremely slow, but somehow it doesn't freeze and actually installs the new module. But after some time, when I need to do it again, it freezes again and I have to delete node_modules again...
I would like to find out why the yarn command is so slow and why it freezes.
I'm new to Docker, so maybe my workflow is incorrect.
I tried increasing the Docker engine's RAM limit from 2 GB to 8 GB and the CPU limit from 1 to 8, but it had absolutely no effect on the yarn command's behavior.
My project was using file watching with chokidar, so I thought maybe that could cause the problem, but disabling it had no effect either.
I also thought the problem could be the file sharing mechanism between the host machine (Windows) and the Docker container, but if that is the case, I do not know how to fix it. I suppose I should somehow separate node_modules from the source directory and make them private to the Docker container, so that they are not shared with the host machine (see the sketch below).
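If I understand that idea correctly, it would mean masking node_modules with a container-only anonymous volume, something like this (the /app path and node:lts image are placeholders, not my actual setup):
# share the source tree, but mount an anonymous volume over node_modules
# so it stays on the Linux side instead of the slow Windows share
docker run -it -v "$PWD":/app -v /app/node_modules -w /app node:lts bash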
This is quite a severe problem, as it slows development down a lot. Please share any ideas about what could be wrong. I would even consider switching my development environment to Linux if the problem is caused by the file sharing mechanism between Windows and the Docker container.
I'm a complete beginner in bioinformatics. Recently, I started learning it with the book "Bioinformatics with Python Cookbook" (by Tiago Antao). I ran into some issues while setting up Docker for Linux. Please see the issues below:
I was trying to set up the Docker image following the author's instructions, but some files "failed to download" when I ran:
docker build -t bio https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
Then I still went ahead and set up the container following the instructions:
“Now, you are ready to run the container, as follows: docker run -ti -p 9875:9875 -v YOUR_DIRECTORY:/data bio”
I typed: docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data bio
However, it gave me an error saying “Unable to find image “bio:latest” locally”.
Can anyone give me any suggestions on this? My thought is that in the first step I missed downloading some of the files needed to set up the Docker image, but I am not sure how to fetch these files.
Thank you so much for any comments!
Best regards
Johnny
I tried downloading the Docker files a few times, but the error still appears:
docker build -t bio https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data bio
In the first issue, I found that some files "failed to download".
In the second issue, an error saying "Unable to find image "bio:latest" locally" appears.
Here you have a couple of problems:
1) It looks like you did not download that Dockerfile and build the required Docker image locally.
2) You are getting the error about not finding the image locally because of the previous problem.
So, you should do the following:
1) Download that Dockerfile (https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile). If you can't download the file for some reason, just open it on GitHub, select all of its content, copy it, then create a new file named "Dockerfile" in some folder on your computer and paste the content into it.
2) Build the image locally: go to the folder where you downloaded the Dockerfile and execute the following command:
docker build -t bio .
3) Run your container with the docker run ... command.
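Putting it all together, the sequence could look like this (the host path comes from your post; note the colon before /data, which was missing from the command you tried):
# download the Dockerfile into the current folder
curl -O https://raw.githubusercontent.com/tiagoantao/bioinf-python/master/docker/2/Dockerfile
# build the image from the current folder (note the trailing dot)
docker build -t bio .
# run the container, mapping the host directory to /data inside it
docker run -ti -p 9875:9875 -v C:/Users/guangliang/Desktop/Bioinformation/data:/data bio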
The user guide states that an image should be run as follows:
docker run -t -i ubuntu /bin/bash
I understand that -t creates the pseudo-terminal and -i makes it interactive. But it seems that the /bin/bash part is unnecessary: whether I run it with or without /bin/bash, I'm given an interactive prompt that I can read from and write to both times.
root@77eeb1f4ac2a:/#
Why do we need /bin/bash?
Part 2
I'm running Docker for Mac. When I download the hello-world binary and run it, it's only 1 kB. Obviously a Linux image wasn't downloaded with it. Is the small hello-world binary running off my Mac kernel, or off a small Linux kernel that comes with Docker for Mac?
Why do we need /bin/bash?
Because while the ubuntu image may be configured to run /bin/bash by default, that's not going to be true of every image. If you have an image that starts a webserver by default and you want to run bash instead, you need to make that explicit. Some images don't specify any default command at all, leading to:
$ docker run -it alpine
docker: Error response from daemon: No command specified.
It never hurts to be explicit when starting a container, especially when using an image that you didn't build yourself.
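For example, the nginx image starts a webserver by default, so to get a shell instead you state the command explicitly:
docker run -it nginx /bin/bash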
When I download the hello-world binary and run it...
Which hello-world binary?
but is a VM of Linux executing it or is my mac executing it?
Docker only runs under Linux. When you use Docker under OS X or Windows, you are running containers inside a Linux VM spawned for that purpose by docker-machine (or, previously, boot2docker). Under Windows, Docker uses Hyper-V, and on OS X it previously used VirtualBox and in more recent versions may be using something else (it's been a while since I've run Docker under OS X).
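You can see this from inside any container: uname reports the Linux kernel of that VM rather than your host's kernel.
docker run --rm ubuntu uname -a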
Part 1:
Whatever you pass after docker run -t -i ubuntu is the first command that your container will run. You can try /bin/bash, /bin/sh, or even echo hello and see it in action. Ubuntu uses bash by default, but other images use other default commands based on their Dockerfiles.
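For example, echo hello becomes the container's first (and only) process; it prints hello and then the container exits:
docker run -t -i ubuntu echo hello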
Part 2:
When you run hello-world, a Docker container is created from the hello-world image. Containers "include the application and all of its dependencies, but share the kernel with other containers, running as isolated processes in user space on the host operating system."
hello-world specifically is built from scratch (https://hub.docker.com/_/scratch/).
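For the curious, a scratch-based image like hello-world is described by a Dockerfile of essentially this shape (a sketch of the pattern, not a verified copy of the actual file):
FROM scratch
COPY hello /
CMD ["/hello"]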