I have a prediction application with the below folder structure:
Docker
├── dataset
│   └── fastText
│       └── crawl-300d-2M.vec
├── Dockerfile
├── encoder
│   └── sentencoder2.pkl
├── pyt_models
│   └── actit1.pt
├── requirements.txt
└── src
    ├── action_items_api.py
    ├── infer_predict.py
    ├── model.py
    ├── models.py
    └── sent_enc.py
Dockerfile:
FROM python:3.6
EXPOSE 80
# copy and install packages for flask
COPY /requirements.txt /tmp/
RUN cd /tmp && \
pip3 install --no-cache-dir -r ./requirements.txt
WORKDIR /Docker
COPY src src
CMD gunicorn -b 0.0.0.0:80 --chdir src action_items_api:app
In the Dockerfile I try to copy only the src folder, where all the Python files are placed. I want to keep the fastText, encoder, and pyt_models folders outside the image, to be accessed from the container via mounts.
When I tried:
docker run -p8080:80 -v /encoder/:/encoder/;/pyt_models/:/pyt_models/;/dataset/:/dataset/ -it actit_mount:latest
But by doing this, my code fails with FileNotFoundError: No such file or directory: 'encoder/sentencoder2.pkl'.
But with the same folder structure, if I run the following from the Docker folder, it works:
gunicorn --chdir src --bind 0.0.0.0:80 action_items_api:app
What is wrong with the Dockerfile or the docker run?
Because you set WORKDIR /Docker, the gunicorn process will have its working directory set to /Docker, which implies that relative file paths in your Python app (such as 'encoder/sentencoder2.pkl') will be resolved from /Docker. The volumes therefore need to be mounted under /Docker inside the container.
Give this a try:
docker run -p8080:80 \
-v $(pwd)/encoder/:/Docker/encoder/ \
-v $(pwd)/pyt_models/:/Docker/pyt_models/ \
-v $(pwd)/dataset/:/Docker/dataset/ \
-it actit_mount:latest
Note that the host paths must be absolute (hence $(pwd) above); passing a relative path to -v makes Docker treat it as a named volume and fail with an error like:
docker: Error response from daemon: create ./folder: "./folder" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
Consider the following file structure of yarn workspaces:
.
├── docker-compose.yaml
├── package.json
├── packages
│   └── pkg-1
│       ├── dist
│       ├── package.json
│       ├── src
│       └── tsconfig.json
├── services
│   ├── api-1
│   │   ├── dist
│   │   ├── Dockerfile
│   │   ├── package.json
│   │   ├── src
│   │   ├── tsconfig.json
│   │   └── yarn.lock
│   └── client-1
│       ├── package.json
│       ├── src
│       └── yarn.lock
├── tsconfig.json
└── yarn.lock
I have written a Dockerfile to create an image for api-1:
ARG APP_DIR=/usr/app
# Build stage
FROM node:16.2-alpine AS build
ARG APP_DIR
WORKDIR ${APP_DIR}
COPY package.json ./
COPY yarn.lock ./
COPY tsconfig.json ./
WORKDIR ${APP_DIR}/packages/pkg-1
COPY packages/pkg-1/package.json ./
RUN yarn --pure-lockfile --non-interactive
COPY packages/pkg-1/tsconfig.json ./
COPY packages/pkg-1/src/ ./src
RUN yarn build
WORKDIR ${APP_DIR}/services/api-1
COPY services/api-1/package.json ./
COPY services/api-1/yarn.lock ./
RUN yarn --pure-lockfile --non-interactive
COPY services/api-1/tsconfig.json ./
COPY services/api-1/src/ ./src
RUN yarn build
# Production stage
FROM node:16.2-alpine AS prod
ARG APP_DIR
WORKDIR ${APP_DIR}
COPY --from=build ${APP_DIR}/package.json ./
COPY --from=build ${APP_DIR}/yarn.lock ./
WORKDIR ${APP_DIR}/packages/pkg-1
COPY --from=build ${APP_DIR}/packages/pkg-1/package.json ./
RUN yarn --pure-lockfile --non-interactive --production
COPY --from=build ${APP_DIR}/packages/pkg-1/dist ./dist
WORKDIR ${APP_DIR}/services/api-1
COPY --from=build ${APP_DIR}/services/api-1/package.json ./
COPY --from=build ${APP_DIR}/services/api-1/yarn.lock ./
RUN yarn --pure-lockfile --non-interactive --production
COPY --from=build ${APP_DIR}/services/api-1/dist ./dist
CMD ["node", "dist"]
The build is run from the root docker-compose.yaml to get the proper context:
services:
  api-1:
    image: project/api-1
    container_name: api-1
    build:
      context: ./
      dockerfile: ./services/api-1/Dockerfile
      target: prod
    ports:
      - 3000:3000
It works, but this way there will be a lot of repetition as the application grows. The problem is the way the packages are built.
A package can be, for example, a collection of normalized components used among client services, or a collection of normalized errors used among API services.
Whenever I build a service, I first need to build the packages it depends on, which is an unnecessarily repetitive task. Not to mention that the build steps of each package are defined over and over again in the Dockerfile of every single service that uses it.
So my question is: is there a way to create, for example, an image of a package that can be used when building a service, to avoid defining the package's build steps in every service's Dockerfile?
A while ago I posted an answer detailing how I structured a monorepo with multiple services and packages.
The "trick" is to copy all the packages that your service depends on, as well as the project root package.json. Then running yarn --pure-lockfile --non-interactive --production once will install the dependencies for the all the sub-packages since they are part of the workspace.
The example linked isn't using typescript, but I believe this could be easily achieved with a postinstall script in every package.json that would run yarn build.
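For illustration, here is a rough sketch of that idea applied to api-1 from the structure above. It is only a starting point, not a drop-in replacement for the Dockerfile shown earlier: it assumes the root package.json declares packages/* and services/* as workspaces, and that pkg-1 either ships its dist folder or builds itself via a postinstall script.
# Sketch: one workspace-wide install instead of per-package installs
FROM node:16.2-alpine
WORKDIR /usr/app
# Workspace root files
COPY package.json yarn.lock tsconfig.json ./
# The shared package and the service, kept in the same relative layout
COPY packages/pkg-1 ./packages/pkg-1
COPY services/api-1 ./services/api-1
# One install resolves the whole workspace, including pkg-1 for api-1
# (assumes pkg-1 has a postinstall script that runs yarn build)
RUN yarn --pure-lockfile --non-interactive
# Build only the service itself
RUN yarn --cwd services/api-1 build
CMD ["node", "services/api-1/dist"]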
It seems like you are looking for something that gives you the option to have a "parent" package.json, so you only have to invoke "build" on one package and with that build the whole dependency tree.
e.g.:
- package.json // root package
| - a
|   - package.json // module a package
| - b
|   - package.json // module b package
You might want to look into the following:
npm workspaces
lerna
Both support structures like the one mentioned; lerna just has a lot more features. To get a quick grasp on the differences, look here: Is Lerna needed anymore with NPM 7.0.0's workspaces?
I'm facing the following error when I do:
docker build -t web_app .
My web_app structure is:
web-app
├── Dockerfile
└── src
    ├── server.py
    └── requirements.txt
ERROR: No matching distribution found for aiohttp (from -r requirements.txt (line 1))
Dockerfile:
FROM python:3.6
# Create app directory
WORKDIR /app
# Install app dependencies
COPY src/requirements.txt ./
RUN pip install -r requirements.txt
# Bundle app source
COPY src /app
EXPOSE 8080
CMD [ "python", "server.py" ]
Please help.
I need to install a local package (my own shared package) in a Docker container, but it doesn't work without the -e option.
I have the following:
Docker folder tree:
./
├── Dockerfile
├── mypackage-lib
│   ├── MANIFEST.in
│   ├── mypackagelib
│   └── setup.py
├── requirements.txt
Dockerfile:
# pull official base image
FROM python:3.8.3
# set work directory
WORKDIR /usr/src/
# copy requirements file
COPY ./requirements.txt /usr/src/requirements.txt
COPY ./mypackage-lib /usr/src/mypackage-lib
# install dependencies
RUN pip install --no-cache-dir -r requirements.txt
RUN pip install -e /usr/src/mypackage-lib
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
pip freeze from the docker container:
# Editable install with no version control (mypackagelib==0.1)
-e /usr/src/mypackage-lib
I would like to have it in the site-packages directory and not linked from /usr/src/mypackage-lib. Without the -e option, the main application (which uses the library) doesn't work.
setup.py looks like:
from setuptools import setup

install_requires = [
    'pydantic'
]

setup(name='mypackagelib',
      version='0.1',
      author='aemilc',
      packages=['mypackagelib'],
      install_requires=install_requires,
      include_package_data=True,
      python_requires='>3.8')
What did I forget?
Thank you!
E.
I have a project that basically has a structure like this: (*)
my_project/
├── server/
│   ├── node_modules/
│   └── server.js
├── src/
├── node_modules/
├── Dockerfile
└── {multiple important config files for webpack and typescript etc}.json
I build the project with npm run build. This creates a dist/ folder from the src/ folder.
This is my package.json:
"scripts": {
"prebuild": "npm run install:client && npm run install:server",
"build": "webpack",
"install:client": "npm install",
"install:server": "cd server/ && npm install"
}
The final project only needs this: (**)
my_project/
├── server/
│   ├── node_modules/
│   └── server.js
└── dist/
    ├── webapp/
    └── assets/
Now I want to create a docker image out of this.
I have a Dockerfile that's working now. It looks like this:
FROM node:boron
WORKDIR /usr/src/app
COPY package.json .
COPY . .
RUN npm run build
EXPOSE 9090
CMD [ "node", "server/server.js" ]
But from my understanding, it copies everything I have in my directory and then creates the dist/ folder, so the final Docker image contains all of this: (***)
my_project/
├── server/
│   ├── node_modules/
│   └── server.js
├── src/
├── node_modules/
├── Dockerfile
├── {multiple important config files for webpack and typescript etc}.json
└── dist/
    ├── webapp/
    └── assets/
How can I configure the Docker image to contain only the things in (**)?
Running npm run build will create the dist folder, which is what you want. After that you can remove the stuff that you don't need from the image by adding the following to the Dockerfile:
FROM node:boron
WORKDIR /usr/src/app
COPY package.json .
COPY . .
RUN npm run build && /bin/bash -c "find . -mindepth 1 -maxdepth 1 -not -name 'server' -not -name 'dist' -exec rm -rf {} +"
EXPOSE 9090
CMD [ "node", "server/server.js" ]
The find command deletes everything at the top level except the server and dist folders, so only those two (and their contents) remain in the image.
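If you would rather not delete files in place, another option (not what the answer above does, but the same multi-stage pattern shown elsewhere on this page) is to build in one stage and copy only server/ and dist/ into a clean final image. A rough sketch, assuming the layout from the question:
FROM node:boron AS build
WORKDIR /usr/src/app
COPY . .
# prebuild installs the client and server dependencies, then webpack builds dist/
RUN npm run build

FROM node:boron
WORKDIR /usr/src/app
# Only the pieces listed in (**): the server (with its node_modules) and dist/
COPY --from=build /usr/src/app/server ./server
COPY --from=build /usr/src/app/dist ./dist
EXPOSE 9090
CMD [ "node", "server/server.js" ]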
We want to start containerizing our applications, but we have stumbled upon some issues with local dependencies.
We have a single git repository, in which we have numerous Node packages under a "shared" folder, and applications that require these packages.
So let's say our folder structure is as follows:
src/
├── apps
│   └── my_app
└── shared
    └── shared_module
In my_app's package.json we have the following dependency:
{
  "dependencies": {
    "shared-module": "file:../../shared/shared_module"
  }
}
The issue here is that because we want to move "my_app" to run in a container, we need to npm install our local dependency.
Can this be done?
Yes, it's possible but a little bit ugly. The problem for you is that Docker is very restrictive when it comes to its build context. I'm not sure how familiar you are already with that concept, so here is the introduction from the documentation:
The docker build command builds an image from a Dockerfile and a context.
For example, docker build . uses . as its build context, and since it's not specified otherwise, ./Dockerfile as the Dockerfile. Files or paths outside the build context cannot be referenced in the Dockerfile (so no COPY ..).
The issue for you is that during a Docker build, the build context cannot be left. If you have multiple applications that you want to build, you would normally add a Dockerfile for each app.
src/
├── apps
│   ├── my_app
│   │   └── Dockerfile
│   └── my_other_app
│       └── Dockerfile
└── shared
    └── shared_module
Naturally, you would cd into my_app and use docker build . to build the application's Docker image. The issue with this is that you can't access ../../shared from the build, since it's outside of the context.
So you need to make sure both apps and shared are in the build context. One way would be to place all Dockerfiles in src like so:
src/
├── Dockerfile.my_app
├── Dockerfile.my_other
├── apps
│   ├── my_app
│   └── my_other_app
└── shared
    └── shared_module
You can then build the applications by explicitly specifying the context and the Dockerfile:
src$ docker build -f Dockerfile.my_app .
Alternatively, you can keep the Dockerfiles inside my_app and my_other_app, and point to them:
src$ docker build -f apps/my_app/Dockerfile .
That should also work. In both cases, the build is executed from within src, which means you need to pay a little attention to the paths in the Dockerfile. The working directory is still src:
COPY ./apps/my_app /src/apps/my_app
By mirroring the folder structure you have locally, you should be able to make your dependencies work without any changes:
RUN mkdir -p /src
COPY ./shared /src/shared
COPY ./apps/my_app /src/apps/my_app
RUN cd /src/apps/my_app && npm install
Hope that helps you get started.
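For completeness, a minimal Dockerfile sketch for my_app along those lines could look like the following. The base image and the start command are assumptions, since the question doesn't show my_app's contents; it would be built from src with docker build -f apps/my_app/Dockerfile . as described above.
FROM node:16-alpine
# Mirror the local folder structure so "file:../../shared/shared_module" resolves
RUN mkdir -p /src
COPY ./shared /src/shared
COPY ./apps/my_app /src/apps/my_app
WORKDIR /src/apps/my_app
RUN npm install
# Hypothetical start command; adjust to whatever my_app actually runs
CMD ["npm", "start"]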