Dockerfile and Python - python-3.x

Apologies, I am very new to Docker. I have the following Dockerfile, which contains the commands shown below. I am not sure I understand all of them and would appreciate some explanation. I commented every line I understood and put a question mark in the others. Please see below.
# I think this line means that Python will be our base. Can someone explain this line in more detail?
FROM python:3.9 as base
#create a working directory in the virtual machine (VM)
WORKDIR /code
# copy all the python requirements stored in requirements.txt into the new directory (in the VM)
COPY ./requirements.txt /code/requirements.txt
# run the package manager pip. But why use --no-cache-dir?
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
# copy all files to the new directory (in the VM)
COPY ./ /code/
# I don't understand the line below. Please explain: why uvicorn? And is app.main:app
# the location of the FastAPI app?
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "180"]
Thank you

A Dockerfile states all the steps Docker will execute when building your image. From that image, a container can be created.
# I think this line means that Python will be our base. Can someone explain this line in more detail?
FROM python:3.9 as base
This line means your image is built on top of the official python:3.9 base image, so Python 3.9 and pip are already available. The as base part names this build stage so later stages can reference it in a multi-stage build. This is very basic Docker knowledge; follow a beginners' tutorial and you will learn a lot more than from someone spoon-feeding you little bits of it.
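For illustration, a minimal sketch of how a named stage can be reused later in the same Dockerfile (the test stage and its commands are hypothetical additions, not part of the original file):

FROM python:3.9 as base
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

# A second stage that starts from the "base" stage instead of pulling
# python:3.9 again; useful for running tests or building slimmer images
FROM base as test
RUN pip install --no-cache-dir pytest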
#create a working directory in the virtual machine (VM)
WORKDIR /code
You are creating a container image, not a VM. These are similar but very different concepts and should not be confused.
# copy all the python requirements stored in requirements.txt into the new directory (in the VM)
COPY ./requirements.txt /code/requirements.txt
This copies only the requirements.txt file into the image, not all files; copying it separately from the rest of the code lets Docker cache the dependency-install layer that follows.
# run the package manager pip. But why use --no-cache-dir?
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
RUN is an image build step; its outcome is committed to the Docker image. So in this step, you are telling Docker that you want an image that has everything from requirements.txt installed with pip. As for the cache: by default, pip saves the wheel files (.whl) of the packages you install, but inside an image those would only increase its size and are never needed again. So, no cache.
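As an aside, a minimal sketch assuming a recent pip: the same effect can be achieved image-wide by setting pip's environment variable once, so every later pip invocation in the build skips the cache:

ENV PIP_NO_CACHE_DIR=1
RUN pip install --upgrade -r /code/requirements.txt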
# copy all files to the new directory (in the VM)
COPY ./ /code/
Again, not a VM but an image; an image that will later be used to create a container.
# I don't understand the line below. Please explain: why uvicorn? And is app.main:app
# the location of the FastAPI app?
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "180"]
Because you are trying to run a FastAPI project, and FastAPI is just the application: you need a server (here, uvicorn) to actually be able to fire requests at FastAPI. This is explained on the very first page of the FastAPI documentation, actually.

"app.main:app" express your project has such python file:
<Project Root Dir>
app - folder
main.py -- python file
In main.py, you create a FastAPI instance and name it app, like this:
# main.py
from fastapi import FastAPI  # import needed to create the instance

app = FastAPI()
uvicorn uses this module:variable rule to find the FastAPI instance app, then loads it.
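To make the rule concrete, the CMD above is equivalent to running the following in a shell inside the container (the curl check is illustrative, not from the original question):

uvicorn app.main:app --host 0.0.0.0 --port 180
# app.main  -> the module app/main.py
# :app      -> the variable named app inside that module

# then, from another shell inside the container, the API should answer:
curl http://localhost:180/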

Related

How to output my files to the host machine on docker

I'm new, very new, to this whole Docker thing and also to programming.
I built a simple script that outputs a series of files (namely a .tsv and a .txt) from an API.
Now, this works fine running the script in a regular Linux terminal (I know, that's not the same thing).
However, with Docker, it never seems to output my files to my host machine.
The interesting thing is, when running 'ls' in the container's shell, all the files show up!
sudo docker run -it -v /home/USER/DESKTOP/PROJECT:/CONTAINERWORKINGDIR IMAGENAME
I also tried mounting it via the Dockerfile, which also seems to do nothing.
I don't get an error; the host path is just empty, always...
Running 'ls' shows my files sitting neatly in the container's directory, but the host path is empty whenever I check. I also verified that all the files are accessible.
Docker File:
# Use an official Python runtime as a base image
FROM python:3.11.1-alpine
# Set the working directory to /AudiblePy
WORKDIR /AudiblePy
# Copy the current directory contents into the container at /AudiblePy
COPY main.py /AudiblePy
COPY additional.py /AudiblePy
COPY requirements.txt /AudiblePy
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Run main.py when the container launches
CMD ["python", "main.py"]
Would love some help; I'm stumped, and I can't seem to find any information on why this is...
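One likely cause, as a hedged sketch: relative paths inside the container resolve against the working directory (/AudiblePy here), not against the bind-mount target. Assuming the script can write into a dedicated output directory (the /AudiblePy/output path is hypothetical), mounting the host folder onto that exact directory makes the files land on the host:

sudo docker run -it -v /home/USER/DESKTOP/PROJECT:/AudiblePy/output IMAGENAME
# main.py must then write its .tsv/.txt files into /AudiblePy/output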

Unable to run (Linux container) or create image (Windows container) a Gatsby React site (win binaries error, matching manifest error) through Docker

I have my website wrapped up and wanted to containerize it for experience, as I've never used Docker before. It's built on Gatsby. I did a fresh install of Docker and am running into two issues:
If I try to create an image in a Linux container, it seems to work, but I can't actually run it. I get the following error: "Error in "/app/node_modules/gatsby-transformer-sharp/gatsby-node.js": 'win32-x64' binaries cannot be used on the 'linuxmusl-x64' platform. Please remove the 'node_modules/sharp' directory and run 'npm install' on the 'linuxmusl-x64' platform."
I tried the above, uninstalling and reinstalling sharp in my project, to no avail. I'm not even using sharp, nor do I know what it is, though.
If I switch to Windows containers, I can't even create an image as I get the following:
"no matching manifest for windows/amd64 10.0.18363 in the manifest list entries"
My Dockerfile is as follows:
FROM node:13.12.0-alpine
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
# add app
COPY . ./
# start app
CMD ["npm", "start"]
and my .dockerignore contains
node_modules
build
Dockerfile
Dockerfile.prod
.git
Things I've tried:
This tutorial > https://mherman.org/blog/dockerizing-a-react-app/ (Where I got the Dockerfile text)
This tutorial >https://www.robinwieruch.de/docker-create-react-app-development (And its Dockerfile at one point)
Changing the FROM node tag to 14.4.0 or 14, with or without -alpine.
Uninstalling and re-installing sharp
Uninstalling sharp entirely and trying to run it that way (I still get the sharp error for some reason)
Reading the documentation, which for whatever reason only tells you how to launch a default application (such as create-react-app) or one pulled from somewhere, but not how to do so for your own website.
Thanks
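As a hedged diagnostic for the second error: docker manifest inspect lists the platforms a tag was published for, which shows whether a Windows variant exists at all (alpine-based images are Linux-only; depending on the Docker version, this command may require enabling experimental CLI features):

docker manifest inspect node:13.12.0-alpine
# the "platform" entries in the output list linux/amd64, linux/arm, etc.,
# but no windows/amd64, hence the "no matching manifest" error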

How to pass a file to a Docker image?

I have an application which I have dockerized. The application receives a configuration file which can change per user. I used the following command to pass the config file to the image:
docker run --rm --name $(PROJECT_NAME) -v $(pwd)/config.yaml:/root/config.yaml
I don't want to build the image again for a new config file, but this doesn't seem to work: when I inspected the contents of the file in the container, it showed me the same file. It seems that when I change the file on my local system and pass it to the container in the run command, the change is not reflected and it loads the previous config file. Any ideas what I might be doing wrong, or any best practices to deal with such a scenario?
Dockerfile
# base image
FROM amazonlinux:2
# copy requirements file to working directory
COPY requirements.txt ./
RUN yum -y install python3 \
&& pip3 install --no-cache-dir -r requirements.txt
COPY app/. ./
ENTRYPOINT python3 run.py config.yaml
I was able to resolve it. Here is my updated Dockerfile: I have put my config file in the config/ folder and mounted it. Now I can build the image once, change the config file inside the config/ folder, and the changes will be reflected inside the container.
# base image
FROM amazonlinux:2
# copy requirements file to working directory
COPY requirements.txt ./
COPY app/. ./
RUN yum -y install python3 \
&& pip3 install --no-cache-dir -r requirements.txt
ENTRYPOINT python3 run.py config/config.yaml
This is the run command:
docker run --rm --name $(PROJECT_NAME) -v $(CONFIG_PATH):/config $(DOCKER_REPO)
CONFIG_PATH = $(PWD)/config
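To double-check the mount at runtime, a small sketch (container name and paths taken from the setup above; the appended key is just an example):

# edit the file on the host while the container is running...
echo "log_level: debug" >> config/config.yaml
# ...and confirm the container sees the change:
docker exec $(PROJECT_NAME) cat /config/config.yaml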
What you see is "normal".
When you mount a file or directory, you actually mount a pointer (inode) into the container. That is the way Docker works with volumes.
Some (most) editors use a "safe write" method that consists of writing a new file and then replacing the old file with the new one. When the editor does that, the new, updated file has a new pointer (inode), and the old one still exists, pointing to the old content.
On Linux, a file is only deleted once no one is using it any more; that is why you can't see it but the container still has it.
You will have the same problem if you mount a directory, delete it, and recreate one with the same name: the container will not see the new directory.
It's a known limitation of bind mounts, and the best you can do is mount a directory with your file inside it. The other way is to restart your container; Docker will then find the new inode and use it.
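You can watch the inode change on the host with ls -i, a minimal sketch of the "safe write" effect described above:

ls -i config.yaml   # note the inode number
vim config.yaml     # save with an editor that does safe writes
ls -i config.yaml   # the inode number changed; the container still holds the old one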

No such file or directory error while running Docker image

I am learning Docker concepts and trying to make a Docker image of my project. I have installed Docker Desktop for Windows and built the image successfully using the command below:
docker build -t ${IMAGE_NAME} .
But when I run the following command, docker run ${IMAGE_NAME}:${TAG}, I get the following file-not-found error:
D:\Projects\AI360\deep_auto_backbar_api>docker run dsbyprateekg:prateek_gupta
python3: can't open file '/Prepare_Dataset/server_engine/server.py': [Errno 2] No such file or directory
My project structure keeps the code under Prepare_Dataset/server_engine/, with requirements.txt at the root.
And my Dockerfile.txt has the following instructions:
FROM python:3.6-stretch
MAINTAINER PrateekG
# install build utilities
RUN apt-get update && \
apt-get install -y gcc make apt-transport-https ca-certificates build-essential
# check our python environment
RUN python3 --version
RUN pip3 --version
# Installing python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy all the files from the project's root to the working directory
COPY Prepare_Dataset/ .
# Running Python Application
CMD ["python3", "/Prepare_Dataset/server_engine/server.py"]
I suspect I am missing something related to the file path. Please see my Dockerfile and my project structure and help me find out what I am doing wrong here.
When you use COPY Prepare_Dataset/ ., this copies the contents of the directory, not the directory itself, so the CMD path /Prepare_Dataset/server_engine/server.py becomes invalid.
You need to use
COPY Prepare_Dataset/ ./Prepare_Dataset/
so during the build you can verify:
Step 5/7 : COPY Prepare_Dataset/ ./Prepare_Dataset/
---> Using cache
---> 2c5c15c23f65
Step 6/7 : RUN ls | grep "Prepare_Dataset"
---> Running in 54147bd4740c
Prepare_Dataset
It is better to stick to this convention to avoid such errors in the future:
# Set working directory
WORKDIR /app
# Now it will copy to /app/
COPY Prepare_Dataset/ ./Prepare_Dataset
CMD ["Prepare_Dataset/server_engine/server.py"]
You can verify the problem using the steps below:
COPY Prepare_Dataset/ .
# You will see the contents but not the directory
RUN ls /
You will not be able to see the directory itself, though the files that were inside it are there; grepping for the directory name fails:
Step 5/7 : COPY Prepare_Dataset/ .
---> Using cache
---> e4eec046c860
Step 6/7 : RUN ls | grep "Prepare_Dataset"
---> Running in 23e4b2aab3d1
The command '/bin/sh -c ls | grep "Prepare_Dataset"' returned a non-zero code: 1
In my case, I had to change the line separators from CRLF (Windows) to LF (Unix/Linux/macOS). To do this in IntelliJ, select your root folder in the Project window and then go to File -> File Properties -> Line Separators -> LF - Unix and macOS (\n).
Also see this answer
Make sure the value of TAG is correct. Check whether the container launches using the following command:
docker ps -a
Use an ENTRYPOINT instead of CMD; then you can pass command-line options in docker run, as in your example.
ENTRYPOINT ["python3", "Prepare_Dataset/server_engine/server.py"]
Reference: link
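The practical difference, as a brief sketch: with an exec-form ENTRYPOINT, anything placed after the image name in docker run is appended as extra arguments (the image name and --debug flag here are hypothetical):

ENTRYPOINT ["python3", "Prepare_Dataset/server_engine/server.py"]
# docker run my-image --debug
# executes: python3 Prepare_Dataset/server_engine/server.py --debug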
If others stumble across this: I hit the same error, but my issue was different:
# Error `No such file or directory`
ENTRYPOINT ["/bin/bash", "-c", "{$APP}"]
# Success
ENTRYPOINT ["/bin/bash", "-c", "$APP"]

Access raspistill / pi camera inside a Docker container

I've been trying out my Node.js app on a Raspberry Pi 3 Model B using Docker, and it runs without any trouble.
The problem comes when an app dependency (raspicam) requires raspistill to use the camera to take a photo. The Raspberry is running Debian Stretch, and the Pi camera is configured and tested. But I can't access it when running the app via Docker.
Basically, I build the image with Docker Desktop on a Win10 64-bit machine using this Dockerfile:
FROM arm32v7/node:10.15.1-stretch
ENV PATH /opt/vc/bin:/opt/vc/lib:$PATH
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf \
&& ldconfig
# Create the app directory
ENV APP_DIR /home/app
RUN mkdir $APP_DIR
WORKDIR $APP_DIR
# Copy both package.json and package-lock.json
COPY package*.json ./
# Install app dependencies
RUN npm install
# Bundle app source
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Then in the Raspberry, if I pull the image and run it with:
docker run --privileged --device=/dev/vchiq -p 3000:3000 [my/image:latest]
I get:
Error: spawn /opt/vc/bin/raspistill ENOENT
After some researching, I also tried running with:
docker run --privileged -v=/opt/vc/bin:/opt/vc/bin --device=/dev/vchiq -p 3000:3000 [my/image:latest]
And with that command, I get:
stderr: /opt/vc/bin/raspistill: error while loading shared libraries: libmmal_core.so: cannot open shared object file: No such file or directory
Can someone share some thoughts on what changes I have to make to the Dockerfile so that I'm able to access the Pi camera from inside the Docker container? Thanks in advance.
I've had the same problem trying to work with the camera interface from a Docker container. With the suggestions in this thread, I've managed to get it working with the Dockerfile below.
FROM node:12.12.0-buster-slim
EXPOSE 3000
ENV PATH="$PATH:/opt/vc/bin"
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf
COPY "node_modules" "/usr/src/app/node_modules"
COPY "dist" "/usr/src/app"
CMD ldconfig && node /usr/src/app/app.js
There are 3 main points here:
Add /opt/vc/bin to your PATH so that you can call raspistill without referencing the full path.
Add /opt/vc/lib to the loader config file so that raspistill can find all the dependencies it needs.
Reload the config file (ldconfig) at the container's runtime rather than at build time.
The last point is the main reason why Anton's solution didn't work: ldconfig needs to be executed in a running container, so either use an approach similar to mine or go with an entrypoint.sh file instead (see the sketch below).
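For the entrypoint.sh route mentioned above, a minimal sketch (file name and paths assumed from the Dockerfile earlier in this answer):

#!/bin/sh
# entrypoint.sh: refresh the shared-library cache at runtime, then start the app
ldconfig
exec node /usr/src/app/app.js

and in the Dockerfile:

COPY entrypoint.sh /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]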
Try replacing this in the Dockerfile:
RUN echo "/opt/vc/lib" > /etc/ld.so.conf.d/00-vcms.conf \
&& ldconfig
With the following:
ADD 00-vmcs.conf /etc/ld.so.conf.d/
RUN ldconfig
And create the file 00-vmcs.conf:
/opt/vc/lib
Edit:
If it still doesn't work, try a Raspbian-based Docker image, for example balenalib/rpi-raspbian:
FROM balenalib/rpi-raspbian
