How to pass a file to a docker image? - python-3.x

I have an application which I have dockerized. The application receives a configuration file which can change per user. I used the following command to pass the config file to the image.
docker run --rm --name $(PROJECT_NAME) -v $(pwd)/config.yaml:/root/config.yaml
I don't want to build the image again for a new config file, but it doesn't seem to work: when I inspected the contents of the file in the container it showed me the same file. It seems that when I change the file on my local system and pass it to the container in the run command, the change is not reflected and it loads the previous config file. Any ideas what I might be doing wrong, or any best practices to deal with such a scenario?
Dockerfile
# base image
FROM amazonlinux:2
# copy requirements file to working directory
COPY requirements.txt ./
RUN yum -y install python3 \
&& pip3 install --no-cache-dir -r requirements.txt
COPY app/. ./
ENTRYPOINT python3 run.py config.yaml

I was able to resolve it. Here is my updated Dockerfile: I have put my config file in a config/ folder and mount that folder. Now I can build the image once, change the config file inside the config/ folder, and the changes will be reflected inside the container.
# base image
FROM amazonlinux:2
# copy requirements file to working directory
COPY requirements.txt ./
COPY app/. ./
RUN yum -y install python3 \
&& pip3 install --no-cache-dir -r requirements.txt
ENTRYPOINT python3 run.py config/config.yaml
This is the run command:
docker run --rm --name $(PROJECT_NAME) -v $(CONFIG_PATH):/config $(DOCKER_REPO)
CONFIG_PATH = $(PWD)/config

What you see is "normal".
When you mount a file or directory, you actually mount a pointer (an inode) inside the container. That's the way Docker works with volumes.
Some/most editors use a "safe write" method that consists of writing a new file and then replacing the old file with it. When the editor does that, the new, updated file has a new pointer (inode), and the old one still exists, pointing to the old content.
On Linux, a file is only really deleted when nobody is using it any more; that's why you can't see it, but the container still has it.
You will have the same problem if you mount a directory, then delete it and recreate one with the same name: the container will not see the new directory.
It's a known limitation of bind mounts, and the best you can do is to mount a directory with your file inside it. The other way is to restart your container: Docker will find the new inode and use it.
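If you want to see this effect for yourself, here is a small sketch to run on the host (assuming Linux with GNU coreutils, where stat -c %i prints a file's inode number):
stat -c %i config.yaml     # note the inode number
# ...save config.yaml from an editor that does a "safe write" (write + rename)...
stat -c %i config.yaml     # a different number: the bind mount in the container
                           # still points at the old, now-detached inode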

Related

How to output my files to the host machine on docker

I'm new, very new to this whole docker thing and also the programming thing.
I built a simple script that outputs a series of files (namely a .tsv and a .txt) from an API.
Now, this works fine running the script in the regular Linux terminal (I know, that's not the same thing).
However, with docker, it never seems to output my files to my host machine.
The interesting thing is, when running 'ls' in the docker terminal, all the files show up!
sudo docker run -it -v /home/USER/DESKTOP/PROJECT:/CONTAINERWORKINGDIR IMAGENAME
Also, I tried mounting it via the Dockerfile, which also doesn't seem to do anything.
I don't get an error, the host path is just empty, always...
Running 'ls' shows my files sitting neatly in the docker directory, but when checking the host path, it's empty. I also verified that all the files are accessible.
Docker File:
# Use an official Python runtime as a base image
FROM python:3.11.1-alpine
# Set the working directory to /AudiblePy
WORKDIR /AudiblePy
# Copy the current directory contents into the container at /AudiblePy
COPY main.py /AudiblePy
COPY additional.py /AudiblePy
COPY requirements.txt /AudiblePy
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Run app.py when the container launches
CMD ["python", "main.py"]
Would love some help; I'm stumped and it seems that I can't find any information on why this is...
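One hedged guess, not stated anywhere in the post above: with a bind mount, only files written under the mounted container path appear on the host, so the container side of -v has to be the directory the script actually writes into. A sketch with placeholder paths and image name as in the question, assuming the script writes its output into an output/ subfolder of the working directory /AudiblePy:
sudo docker run -it -v /home/USER/DESKTOP/PROJECT/output:/AudiblePy/output IMAGENAME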

Docker File and Python

Apologies, I am very new to Docker. I have the following Dockerfile, which contains the commands below. I am not sure I understand all the commands and I would appreciate some explanation. I commented all the lines I understood but put a question mark on the others. Please see below.
# I think this line means that python will be our base. Can someone please comment here and explain this line more?
FROM python:3.9 as base
#create a working directory in the virtual machine (VM)
WORKDIR /code
# copy all the python requirements stored in requirements.txt into the new directory (in the VM)
COPY ./requirements.txt /code/requirements.txt
# activate the package manager pip. But why use no-cache-dir?
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
# copy all files to the new directory (in the VM)
COPY ./ /code/
# I don't understand the line below. Please explain? why uvicorn? app.main:app is the
#location of the fastapi
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "180"]
Thank you
A Dockerfile states all the steps that Docker will execute when creating your image. From that image, a container can be created.
# I think this line means that python will be our base. Can someone please comment here and explain this line more?
FROM python:3.9 as base
This is very basic Docker stuff; follow a (beginner's) tutorial and you will learn a lot more than from someone spoon-feeding you little bits of knowledge.
#create a working directory in the virtual machine (VM)
WORKDIR /code
You are creating a container image, not a VM. Those are similar but very different concepts and should not be mixed up.
# copy all the python requirements stored in requirements.txt into the new directory (in the VM)
COPY ./requirements.txt /code/requirements.txt
This copies the requirements.txt file into the image.
# activate the package manager pip. But why use no-cache-dir?
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
RUN is an image build step, and the outcome will be committed to the Docker image. So, in this step, you are telling Docker that you want an image that has everything installed as outlined in requirements.txt, using pip. As for --no-cache-dir: by default pip saves the wheels of the packages you are installing, but those would only increase the image size and are no longer required once installation is done. So, no cache.
# copy all files to the new directory (in the VM)
COPY ./ /code/
Again, not a VM but an image: an image that will later be used to create a container.
# I don't understand the line below. Please explain? why uvicorn? app.main:app is the
#location of the fastapi
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "180"]
Because you are trying to run a FastAPI project, and FastAPI is just the app; you need a server to actually be able to fire requests at FastAPI. This is explained on the very first page of the FastAPI documentation, actually.
"app.main:app" expects your project to contain a Python file laid out like this:
<Project Root Dir>
  app/          -- folder
    main.py     -- python file
In main.py, you initialize a FastAPI instance and name it app, like this:
# main.py
....
app = FastAPI()
...
uvicorn uses the above rule to find the FastAPI instance app and load it.
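To make the CMD line a little more concrete, here is a sketch (the image name and port publishing are assumptions, not from the question). Inside the container the CMD amounts to running uvicorn by hand, and --host 0.0.0.0 together with a published port is what makes the API reachable from outside the container:
# roughly what the CMD executes when the container starts:
uvicorn app.main:app --host 0.0.0.0 --port 180
# from the host: publish the port when starting the container, then call the API
docker run -p 180:180 my-fastapi-image
curl http://localhost:180/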

How to dynamically change content in node project run through docker

I have an AngularJS application which I'm running using Docker.
The Dockerfile looks like this:
FROM node:6.2.2
RUN npm install --global gulp-cli && \
npm install --global bower
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
COPY bower.json /usr/src/app/
RUN npm install && \
bower install -F --allow-root --config.interactive=false
COPY . /usr/src/app
ENV GULP_COMMAND serve:dist
ENTRYPOINT ["sh", "-c"]
CMD ["gulp $GULP_COMMAND"]
Now when I make any changes in, say, an HTML file, it doesn't dynamically load up on the web page. I have to stop the container, remove it, build the image again, remove the earlier image, and then restart the container from the new image. Do I have to do this every time? (I'm new to Docker, and I guess this issue is because my source code is not put into a volume, but I don't know how to do that using the Dockerfile.)
You are correct, you should use volumes for stuff like this. During development, give it the same volumes as the COPY directories. It'll override them with whatever is on your machine; no need to rebuild the image, or even restart the container. Perfect for development.
When actually baking your images for production, you remove the volumes, leave the COPY in, and you'll get a deterministic container. I would recommend you read through this article here: https://docs.docker.com/storage/volumes/.
In general, there are 3 ways to do volumes.
Define them in your dockerfile using VOLUME.
Personally, I've never done this. I don't really see the benefits of this against the other two methods. I believe it would be more common to do this when your volume is meant to act as a permanent data-store. Not so much when you're just trying to use your live dev environment.
Define them when calling docker run.
docker run ... -v $(pwd)/src:/usr/src/app ...
This is great, because if the COPY in your Dockerfile is ./src /usr/src/app then the mount temporarily overrides that directory while running the image, but the copied content is still there for deployment when you don't use -v.
Use docker-compose.
My personal recommendation. Docker Compose massively simplifies running containers. For the sake of simplicity, think of it as just calling docker run ... but automating the arguments based on a given docker-compose.yml config.
Create a dev service specifying the volumes you want to mount, other containers you want it linked to, etc. Then bring it up using docker-compose up ... or docker-compose run ... depending on what you need.
Smart use of volumes will DRAMATICALLY reduce your development cycle. Would really recommend looking into them.
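A minimal docker-compose sketch of the dev service described above, mirroring the earlier docker run example; the service name and the ./src path are assumptions, not taken from the question:
version: "2"
services:
  dev:
    build: .
    volumes:
      - ./src:/usr/src/app
With this in place, docker-compose up dev builds the image once and then runs the container with whatever is currently in ./src mounted over /usr/src/app.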
Yes, you need to rebuild every time the files change, since you only modify the files that are outside of the container. In order to apply the changes to the files IN the container, you need to rebuild the container.
Depending on the use case, you could either make the Docker Container dynamically load the files from another repository, or you could mount an external volume to use in the container, but there are some pitfalls associated with either solution.
If you want to keep your container running as you add files, you could also use a variation (a combined sketch follows these steps).
Mount a volume to any other location, e.g. /usr/src/staging.
While the container is running, if you need to copy new files into the container, copy them into the location of the mounted volume.
Run docker exec -it <container-name> bash to open a bash shell inside the running container.
Run cp /usr/src/staging/* /usr/src/app to copy all new files into the target folder.
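Putting those steps together, a sketch in which the container and image names are placeholders:
docker run -d --name my-app -v $(pwd)/staging:/usr/src/staging my-image
# drop new files into ./staging on the host, then sync them into the app folder:
docker exec -it my-app bash -c 'cp /usr/src/staging/* /usr/src/app/'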

Cannot find module 'express' (node app with docker)

I'm a newbie with Docker and I'm trying to start with NodeJS so here is my question..
I have this Dockerfile inside my project:
FROM node:argon
# Create app directory
RUN mkdir -p /home/Documents/node-app
WORKDIR /home/Documents/node-app
# Install app dependencies
COPY package.json /home/Documents/node-app
RUN npm install
# Bundle app source
COPY . /home/Documents/node-app
EXPOSE 8080
CMD ["npm", "start"]
When I run a container with docker run -d -p 49160:8080 node-container it works fine..
But when I try to map my host project with the container directory (docker run -p 49160:8080 -v ~/Documentos/nodeApp:/home/Documents/node-app node-cont) it doesn't work.
The error I get is: Error: Cannot find module 'express'
I've tried other solutions from related questions but nothing seems to work for me (or, I know... I'm just too much of a rookie at this).
Thank you !!
When you run your container with the -v flag, which means mounting a directory from your Docker engine's host into the container, it overlays what you did in /home/Documents/node-app during the image build, such as the npm install.
So you cannot see the node_modules directory in the container.
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
That is how you mount a host directory as a data volume. As the docs say, the pre-existing content of the host directory will not be removed, but there is no information about what happens to the existing directory inside the container.
Here is an example to support my opinion.
Dockerfile
FROM alpine:latest
WORKDIR /usr/src/app
COPY . .
I create a test.t file in the same directory as the Dockerfile.
Proving
Run docker build -t test-1 .
Run docker run --name test-c-1 -it test-1 /bin/sh; your container will then open a shell.
Run ls -l in that container shell; it will show the test.t file.
Just use the same image.
Run docker run --name test-c-2 -v /home:/usr/src/app -it test-1 /bin/sh. You cannot find the file test.t in your test-c-2 container.
That's all. I hope it will help you.
I recently faced a similar issue.
Upon digging into the Docker docs I discovered that when you run the command
docker run -p 49160:8080 -v ~/Documentos/nodeApp:/home/Documents/node-app node-cont
the directory on your host machine (the left side of the ':' in the -v option argument) will be mounted onto the target directory in the container, /home/Documents/node-app,
and since your target directory is the working directory, and therefore non-empty,
"the directory’s existing contents are obscured by the bind mount."
I faced a similar problem recently. It turned out the problem was my package-lock.json: it was outdated relative to package.json, and that was causing my packages not to be downloaded when running npm install.
I just deleted it and the build went ok.
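A common workaround for the bind-mount behaviour described above, not mentioned in any of the answers, so treat it as a suggestion: add a second, anonymous volume for node_modules so the container keeps the modules installed at build time instead of having them hidden by the (empty) host directory.
docker run -p 49160:8080 \
  -v ~/Documentos/nodeApp:/home/Documents/node-app \
  -v /home/Documents/node-app/node_modules \
  node-cont
Docker initializes the anonymous volume from the image's content, so the node_modules created by npm install at build time stays available even though the surrounding directory is bind-mounted from the host.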

npm package.json and docker (mounting it...)

I am using Docker, so this case might look weird. But I want my whole /data directory to be mounted inside my docker container when developing.
My /data folder contains my package.json file, an app directory and a bunch of other stuff.
The problem is that I want my node_modules folder to NOT be persistent, only the package.json file.
I have tried a couple of things, but package.json and npm are giving me a hard time here...
Mounting the package.json file directly will break npm. npm tries to rename the file on save, which is not possible when it's a mounted file.
Mounting the parent folder (/data) will mount the node_modules folder.
I can't find any configuration option to put node_modules in another folder outside /data, for example /dist.
Putting package.json in /data/conf and mounting /data/conf as a volume instead won't work. I can't find any way to specify the package.json path in npmrc.
Putting package.json in /data/conf and symlinking it to /data/package.json won't work; npm breaks the symlink and replaces it with a file.
Copying data back and forth to/from inside the Docker container is how I am doing it now... A little tedious... I also want a cleaner solution...
As you have already answered, I think that might be the only solution right now.
When you are building your Docker image, do something like:
COPY data/package.json /data/
RUN mkdir -p /dist/node_modules && ln -s /dist/node_modules /data/node_modules && cd /data && npm install
And for other stuff (like bower, do the same thing)
COPY data/.bowerrc /data/
COPY data/bower.json /data/
RUN mkdir /dist/vendor && ln -s /dist/vendor /data/vendor && cd /data && bower install --allow-root
And COPY data/ /data at the end (so you are able to use Docker's caching and don't have to redo the npm/bower installation when there is a change to data).
You will also need to create the symlinks you need and store them in your git repo. They will be invalid on the outside, but will happily work on the inside of your container.
Using this solution, you are able to mount your $PWD/data:/data without getting the npm/bower "junk" outside your container. And you will still be able to build your image as a standalone deployment of your service.
A similar, alternative way is to use the NODE_PATH variable instead of creating a symlink.
RUN mkdir -p /dist/node_modules
RUN cp -r node_modules/* /dist/node_modules/
ENV NODE_PATH /dist/node_modules
Here you first create a new directory for node_modules, copy all modules there, and have Node read the modules from there.
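A sketch of where those three lines could sit in a complete Dockerfile; the base image, entry point and paths here are assumptions, not part of the answer:
FROM node:argon
WORKDIR /data
# install dependencies from the build context, then park a copy outside /data
COPY package.json /data/
RUN npm install \
 && mkdir -p /dist/node_modules \
 && cp -r node_modules/* /dist/node_modules/
# Node also searches NODE_PATH when resolving non-relative require() calls
ENV NODE_PATH /dist/node_modules
COPY . /data
CMD ["node", "app/index.js"]
When /data is later bind-mounted from the host, the modules in /dist/node_modules are untouched and are still found through NODE_PATH.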
I've been having this problem for some time now, and the accepted solution didn't work for me*
I found this link, which had an edit pointing here and this indeed worked for me:
volumes:
- ./:/data
- /data/node_modules
In this case the Engine creates a volume (see Compose reference on volumes) which is not mounted to your source directory. This was the easiest solution and didn't require me to do any symlinking, setting paths, etc.
For reference, my simple Dockerfile just looks like this:
# install node requirements
WORKDIR /data
COPY ./package.json ./package.json
RUN npm install -qq
# add source code
COPY ./ ./
# run watch script
CMD npm run watch
(The watch script is just webpack --watch -d)
Hope this is able to help someone and save hours of time like it did for me!
'*' = I couldn't get webpack to work from my package.json scripts, and installing anything while inside the container created the node_modules folder with whatever I had just installed (I run npm i --save [packages] from inside the container to get the package and update package.json until the next rebuild).
The solution I went with was placing the node_modules folder in /dist/node_modules and making a symlink to it from /data/node_modules. I can do this in my Dockerfile so it is used when building, and I can commit my symlinks to my git repo. Everything worked out nicely.
Maybe you can save your container, and then rebuild it regularly with a minimal dockerfile
FROM my_container
and a .dockerignore file containing
/data/node_modules
See the doc
http://docs.docker.com/reference/builder/#the-dockerignore-file
