I need to install a local package (my own shared package) in a Docker container, but it doesn't work without the -e option.
I have the following:
Docker folder tree:
./
├── Dockerfile
├── mypackage-lib
│   ├── MANIFEST.in
│   ├── mypackagelib
│   └── setup.py
└── requirements.txt
Dockerfile:
# pull official base image
FROM python:3.8.3
# set work directory
WORKDIR /usr/src/
# copy requirements file
COPY ./requirements.txt /usr/src/requirements.txt
COPY ./mypackage-lib /usr/src/mypackage-lib
# install dependencies
RUN pip install --no-cache-dir -r requirements.txt
RUN pip install -e /usr/src/mypackage-lib
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
pip freeze from the docker container:
# Editable install with no version control (mypackagelib==0.1)
-e /usr/src/mypackage-lib
I would like to have it in the site-packages directory and not linked from /usr/src/mypackage-lib.
Without the -e option, the main application (which uses the library) doesn't work.
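For reference, the plain (non-editable) install would be the following (a sketch; this is the variant that currently breaks the application):
RUN pip install /usr/src/mypackage-lib
With it, pip freeze should report mypackagelib==0.1 and the package should land in site-packages.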
setup.py looks like:
from setuptools import setup

install_requires = [
    'pydantic'
]

setup(name='mypackagelib',
      version='0.1',
      author='aemilc',
      packages=['mypackagelib'],
      install_requires=install_requires,
      include_package_data=True,
      python_requires='>3.8')
What did I forget?
Thank you!
E.
Related
I have a monorepo that holds various Go services and libraries. The directory structure is like the following:
monorepo
├── services
│   └── service-a
│       └── Dockerfile
├── go.mod
└── go.sum
This go.mod file resides in the root of the monorepo directory and the services use the dependencies stated in that file.
I build the Docker image with this command:
docker build -t some:tag ./services/service-a/
When I try to build my Docker image from the root of monorepo directory with the above docker command I get the following error:
COPY failed: Forbidden path outside the build context: ../../go.mod ()
Below is my Dockerfile
FROM golang:1.14.1-alpine3.11
RUN apk add --no-cache ca-certificates git
# Enable Go Modules
ENV GO111MODULE=on
# Set the Current Working Directory inside the container
WORKDIR /app
# Copy go mod and sum files
COPY ../../go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o service-a
ENTRYPOINT ["/app/service-a"]
Is there something I have to do to be able to add files into my Docker image that aren't in the current directory without having to have a separate go.mod and go.sum in each service within the monorepo?
Docker only allows adding files to the image from the context, which is by default the directory containing the Dockerfile. You can specify a different context when you build, but again, it won't let you include files outside that context:
docker build -f ./services/service-a/Dockerfile .
This should use the current directory as the context.
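With the context at the repository root, the COPY instructions inside the Dockerfile must also be written relative to that root instead of climbing out of it. A minimal sketch of the adjusted Dockerfile, assuming the service's main package lives under ./services/service-a:
FROM golang:1.14.1-alpine3.11
RUN apk add --no-cache ca-certificates git
ENV GO111MODULE=on
WORKDIR /app
# go.mod and go.sum sit at the root of the build context
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# build the service by its path within the monorepo
RUN CGO_ENABLED=0 go build -o service-a ./services/service-a
ENTRYPOINT ["/app/service-a"]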
Alternatively, you can create a temp directory, copy all the artifacts there, and use that as the build context. This can be automated by a makefile or build script, for example the sketch below.
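A hypothetical shell script for that approach (file names taken from the question's tree; the staging layout is an assumption):
#!/bin/sh
# stage only the needed artifacts into a temporary build context
ctx="$(mktemp -d)"
cp go.mod go.sum "$ctx/"
cp -r services/service-a "$ctx/"
docker build -t some:tag -f "$ctx/service-a/Dockerfile" "$ctx"
rm -rf "$ctx"
The COPY lines in the Dockerfile would then refer to the files at the context root, e.g. COPY go.mod go.sum ./.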
You can build and manage your Docker containers using docker-compose; then this problem can be solved with the help of the context directive, for example:
project_folder
├── src
│   └── folder1
│       └── folder2
│           └── Dockerfile
├── docker-compose.yaml
└── copied_file.ext
docker-compose.yaml
version: '3'
services:
  your_service_name:
    build:
      context: ./  # project_folder for this case
      dockerfile: ./src/folder1/folder2/Dockerfile
Dockerfile
FROM xxx
COPY copied_file.ext /target_folder/
build or rebuild services:
docker-compose build
run a one-off command on a service:
docker-compose run your_service_name <command> [arguments]
Consider the following file structure of yarn workspaces:
.
├── docker-compose.yaml
├── package.json
├── packages
│   └── pkg-1
│       ├── dist
│       ├── package.json
│       ├── src
│       └── tsconfig.json
├── services
│   ├── api-1
│   │   ├── dist
│   │   ├── Dockerfile
│   │   ├── package.json
│   │   ├── src
│   │   ├── tsconfig.json
│   │   └── yarn.lock
│   └── client-1
│       ├── package.json
│       ├── src
│       └── yarn.lock
├── tsconfig.json
└── yarn.lock
I have written a Dockerfile to create an image for api-1:
ARG APP_DIR=/usr/app
# Build stage
FROM node:16.2-alpine AS build
ARG APP_DIR
WORKDIR ${APP_DIR}
COPY package.json ./
COPY yarn.lock ./
COPY tsconfig.json ./
WORKDIR ${APP_DIR}/packages/pkg-1
COPY packages/pkg-1/package.json ./
RUN yarn --pure-lockfile --non-interactive
COPY packages/pkg-1/tsconfig.json ./
COPY packages/pkg-1/src/ ./src
RUN yarn build
WORKDIR ${APP_DIR}/services/api-1
COPY services/api-1/package.json ./
COPY services/api-1/yarn.lock ./
RUN yarn --pure-lockfile --non-interactive
COPY services/api-1/tsconfig.json ./
COPY services/api-1/src/ ./src
RUN yarn build
# Production stage
FROM node:16.2-alpine AS prod
ARG APP_DIR
WORKDIR ${APP_DIR}
COPY --from=build ${APP_DIR}/package.json ./
COPY --from=build ${APP_DIR}/yarn.lock ./
WORKDIR ${APP_DIR}/packages/pkg-1
COPY --from=build ${APP_DIR}/packages/pkg-1/package.json ./
RUN yarn --pure-lockfile --non-interactive --production
COPY --from=build ${APP_DIR}/packages/pkg-1/dist ./dist
WORKDIR ${APP_DIR}/services/api-1
COPY --from=build ${APP_DIR}/services/api-1/package.json ./
COPY --from=build ${APP_DIR}/services/api-1/yarn.lock ./
RUN yarn --pure-lockfile --non-interactive --production
COPY --from=build ${APP_DIR}/services/api-1/dist ./dist
CMD ["node", "dist"]
The build is run from the root docker-compose.yaml to have the proper context:
services:
  api-1:
    image: project/api-1
    container_name: api-1
    build:
      context: ./
      dockerfile: ./services/api-1/Dockerfile
      target: prod
    ports:
      - 3000:3000
It is working, but this way there will be a lot of repetition as the application grows. The problem is the way the packages are built.
A package can be, for example, a collection of normalized components shared among client services, or a collection of normalized errors shared among API services.
Whenever I build some service, I first need to build the packages it depends on, which is an unnecessarily repetitive task. Not to mention that the build steps of each package are defined over and over again in the Dockerfile of every single service that uses it.
So my question is: is there a way to create, for example, an image of a package that can be used when building a service, so that the package's build steps don't have to be defined in the service's Dockerfile?
A while ago I posted an answer detailing how I structured a monorepo with multiple services and packages.
The "trick" is to copy all the packages that your service depends on, as well as the project root package.json. Then running yarn --pure-lockfile --non-interactive --production once will install the dependencies for all the sub-packages, since they are part of the workspace.
The example linked isn't using TypeScript, but I believe this could be easily achieved with a postinstall script in every package.json that would run yarn build, as sketched below.
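A hypothetical packages/pkg-1/package.json with such a hook (the script bodies are assumptions):
{
  "name": "pkg-1",
  "version": "1.0.0",
  "scripts": {
    "build": "tsc",
    "postinstall": "yarn build"
  }
}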
Seems like you are looking for something that gives you the option to have a "parent" package.json, so you only have to invoke "build" on one package and with that build the whole dependency tree, e.g.:
- package.json // root package
|- a
|  |- package.json // module a package
|- b
|  |- package.json // module b package
You might want to look into the following:
npm workspaces
lerna
Both support structures like the one mentioned; lerna just has a lot more features. To get a quick grasp of the differences, look here: Is Lerna needed anymore with NPM 7.0.0's workspaces?
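For npm workspaces (npm 7+), a minimal root package.json for the layout above might look like this (all names are placeholders):
{
  "name": "root",
  "private": true,
  "workspaces": ["a", "b"],
  "scripts": {
    "build": "npm run build --workspaces"
  }
}
Running npm run build at the root would then invoke the build script of each workspace in turn.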
I'm facing the following error when I do:
docker build -t web_app .
My web_app structure is :
web-app
├── Dockerfile
└── src
└── server.py
└── requirements.txt
ERROR: No matching distribution found for aiohttp (from -r requirements.txt (line 1))
Dockerfile:
FROM python:3.6
# Create app directory
WORKDIR /app
# Install app dependencies
COPY src/requirements.txt ./
RUN pip install -r requirements.txt
# Bundle app source
COPY src /app
EXPOSE 8080
CMD [ "python", "server.py" ]
Please help.
I have a prediction application with the below folder structure:
Docker
├── dataset
│   └── fastText
│       └── crawl-300d-2M.vec
├── Dockerfile
├── encoder
│   └── sentencoder2.pkl
├── pyt_models
│   └── actit1.pt
├── requirements.txt
└── src
    ├── action_items_api.py
    ├── infer_predict.py
    ├── model.py
    ├── models.py
    └── sent_enc.py
Dockerfile:
FROM python:3.6
EXPOSE 80
# copy and install packages for flask
COPY /requirements.txt /tmp/
RUN cd /tmp && \
pip3 install --no-cache-dir -r ./requirements.txt
WORKDIR /Docker
COPY src src
CMD gunicorn -b 0.0.0.0:80 --chdir src action_items_api:app
In the Dockerfile I try to copy only the src folder, where all the Python files are placed. I want the fastText, encoder, and pyt_models directories to be accessible from outside the container.
When I tried:
docker run -p8080:80 -v /encoder/:/encoder/;/pyt_models/:/pyt_models/;/dataset/:/dataset/ -it actit_mount:latest
But by doing this my code gives me FileNotFoundError No such file or directory: 'encoder/sentencoder2.pkl'
But, keeping the same folder structure, if I run the following from the Docker folder, it works:
gunicorn --chdir src --bind 0.0.0.0:80 action_items_api:app
What is wrong with the Dockerfile or the docker run?
Because you set WORKDIR /Docker, the gunicorn process will have its working directory set to /Docker, which implies that relative file paths in your Python app will be resolved from /Docker.
Give this a try:
docker run -p8080:80 \
-v $(pwd)/encoder/:/Docker/encoder/ \
-v $(pwd)/pyt_models/:/Docker/pyt_models/ \
-v $(pwd)/dataset/:/Docker/dataset/ \
-it actit_mount:latest
If a relative host path such as ./folder is passed instead of an absolute one, Docker rejects it:
docker: Error response from daemon: create ./folder: "./folder" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
Here is an example:
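A minimal sketch (the host path folder is a placeholder; $(pwd) expands it to an absolute path as required, and the image name is taken from the question):
docker run -v "$(pwd)/folder:/folder" -it actit_mount:latest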
I'm trying to launch puppeteer in an express app that's run in a docker container, using docker-compose.
The line that should launch puppeteer const browser = await puppeteer.launch({args: ['--no-sandbox']}); throws the following error:
(node:28) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2): AssertionError [ERR_ASSERTION]: Chromium revision is not downloaded. Run "npm install"
I've tried adding a yarn add puppeteer after the yarn install, and also replacing yarn install in the Dockerfile with npm install.
What needs to change, so that I can use puppeteer with chromium as expected?
Express app's Dockerfile:
FROM node:8
RUN apt-get update
# for https
RUN apt-get install -yyq ca-certificates
# install libraries
RUN apt-get install -yyq libappindicator1 libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libnss3 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6
# tools
RUN apt-get install -yyq gconf-service lsb-release wget xdg-utils
# and fonts
RUN apt-get install -yyq fonts-liberation
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY code/package.json /usr/src/app
COPY code/index.js /usr/src/app
RUN mkdir -p /usr/src/app/views
COPY code/views/ /usr/src/app
# install the necessary packages
RUN yarn install
CMD npm run start:dev
docker-compose.yml:
app:
  restart: always
  build: ${REPO}
  volumes:
    - ${REPO}/code:/usr/src/app:ro
  working_dir: /usr/src/app
  ports:
    - "8087:5000"
index.js route:
app.post('/img', function (req, res) {
  const puppeteer = require('puppeteer');
  (async () => {
    const browser = await puppeteer.launch({ args: ['--no-sandbox'] });
  })();
});
The docker volume I was using mapped the entire local code directory to the docker container's /usr/src/app directory.
This is great for allowing quick code updates during development.
However, it also overwrites the version of Chromium previously installed in the Docker container (via the yarn install in the Dockerfile) with the version of Chromium installed on my machine (via yarn install on the command line).
Each machine needs its own, correct, OS-specific version of Chromium. The Docker container needs a Linux-specific Chromium (linux-515411), while my laptop needs a Mac-specific Chromium (mac-508693). Simply running yarn install (or npm install) with puppeteer in your package.json will handle installing the correct version of Chromium.
Previous project structure:
.
├── Dockerfile
├── README.md
└── code
    ├── index.js
    ├── package.json
    └── node_modules
        ├── lots of other node packages
        └── puppeteer
            ├── .local-chromium
            │   └── mac-508693 <-------- good for macs, bad for linux!
            ├── package.json
            └── all the other puppeteer files
Partial Dockerfile
This is where the container gets its own version of .local-chromium:
FROM node:8
RUN apt-get update
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY code/package.json /usr/src/app
COPY code/index.js /usr/src/app
# install the necessary packages <------ including puppeteer, with the correct chromium
RUN yarn install
CMD npm run start:dev
Previous volumes from docker-compose.yml
This copies everything from the local ${REPO}/code to the docker container's /usr/src/app directory. Including the wrong version of chromium.
volumes:
  - ${REPO}/code:/usr/src/app:ro
Updated project structure:
.
├── Dockerfile
├── README.md
└── code
    ├── src
    │   ├── index.js
    │   └── package.json
    └── node_modules
        ├── lots of other node packages
        └── puppeteer
            ├── .local-chromium
            ├── package.json
            └── all the other puppeteer files
The updated docker volume maps the entire contents of the local ./code/src to the docker container's /usr/src/app. This does NOT include the node_modules directory:
volumes:
  - ${REPO}/code/src:/usr/src/app:ro
I ran into this problem and I wanted to leave the simple solution. The reason it couldn't find my Chrome install is that I had mounted my local volume into the container for testing. I use a Mac, so the local npm install gave me the Mac version of Chromium. When that node_modules folder was mounted into the Linux container, it expected to find the Linux version, which was not there.
To get around this, you need to exclude your node_modules folder when doing a volume mount into the container. I was able to do that by passing another volume parameter, as shown below.
docker run --rm \
  --volume ~/$project:/dist/$project \
  --volume /dist/$project/node_modules \
  <your-image>
Here <your-image> stands for whatever image you are running; the second, anonymous volume masks the node_modules directory inside the bind mount.
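If you are using docker-compose, the same masking can be expressed with an extra anonymous volume; a minimal sketch, reusing the paths from the compose file above:
volumes:
  - ${REPO}/code:/usr/src/app:ro
  - /usr/src/app/node_modules
The second entry is an anonymous volume that shadows the container's node_modules, so the host's Mac-built Chromium never reaches the Linux container.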
Hey guys, please visit this link. It is quite helpful for anyone who wants to launch headless Puppeteer inside a Docker container:
https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md
This is my project code structure
config
migrations
models
utils
.dockerignore
app.js
docker-compose.yml
Dockerfile
package.json
Make sure you have installed Docker. Once installed, you can follow this procedure:
Run docker-compose up --build inside your docker directory
And this is my repo regarding headless Chrome inside Docker.