I'm trying to launch puppeteer in an express app that's run in a docker container, using docker-compose.
This line should launch Puppeteer:
const browser = await puppeteer.launch({args: ['--no-sandbox']});
Instead, it throws the following error:
(node:28) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2): AssertionError [ERR_ASSERTION]: Chromium revision is not downloaded. Run "npm install"
I've tried adding yarn add puppeteer after the yarn install, and also replacing yarn install in the Dockerfile with npm install.
What needs to change, so that I can use puppeteer with chromium as expected?
Express app's Dockerfile:
FROM node:8
RUN apt-get update
# for https
RUN apt-get install -yyq ca-certificates
# install libraries
RUN apt-get install -yyq libappindicator1 libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libnss3 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6
# tools
RUN apt-get install -yyq gconf-service lsb-release wget xdg-utils
# and fonts
RUN apt-get install -yyq fonts-liberation
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY code/package.json /usr/src/app
COPY code/index.js /usr/src/app
RUN mkdir -p /usr/src/app/views
COPY code/views/ /usr/src/app/views
# install the necessary packages
RUN yarn install
CMD npm run start:dev
docker-compose.yml:
app:
  restart: always
  build: ${REPO}
  volumes:
    - ${REPO}/code:/usr/src/app:ro
  working_dir: /usr/src/app
  ports:
    - "8087:5000"
index.js route:
app.post('/img', function (req, res) {
  const puppeteer = require('puppeteer');
  (async () => {
    const browser = await puppeteer.launch({args: ['--no-sandbox']});
    // ... generate the image here, then clean up and respond ...
    await browser.close();
    res.sendStatus(200);
  })().catch(err => res.status(500).send(err.message));
});
The docker volume I was using mapped the entire local code directory to the docker container's /usr/src/app directory.
This is great for allowing quick code updates during development.
However, it also overwrites the chromium that the Dockerfile's yarn install downloaded inside the container with the chromium that yarn install downloaded on my machine.
Each machine needs its own, correct, os-specific version of chromium. The docker container needs a linux-specific chromium (linux-515411), my laptop needs a mac-specific chromium (mac-508693). Simply running yarn install (or npm install) with puppeteer in your package.json will handle installing the correct version of chromium.
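You can check which revision a given install downloaded by listing puppeteer's cache (the revision numbers below are the ones from this project):
# on the mac, from the project root
ls code/node_modules/puppeteer/.local-chromium          # -> mac-508693
# inside the container
ls /usr/src/app/node_modules/puppeteer/.local-chromium  # -> linux-515411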
Previous project structure:
.
├── Dockerfile
├── README.md
└── code
    ├── index.js
    ├── package.json
    └── node_modules
        ├── lots of other node packages
        └── puppeteer
            ├── .local-chromium
            │   └── mac-508693 <--------good for macs, bad for linux!
            ├── package.json
            └── all the other puppeteer files
Partial Dockerfile
This is where the container gets its own version of .local-chromium:
FROM node:8
RUN apt-get update
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY code/package.json /usr/src/app
COPY code/index.js /usr/src/app
# install the necessary packages <------ including puppeteer, with the correct chromium
RUN yarn install
CMD npm run start:dev
Previous volumes from docker-compose.yml
This mounts everything from the local ${REPO}/code into the docker container's /usr/src/app directory, including the wrong version of chromium:
volumes:
- ${REPO}/code:/usr/src/app:ro
Updated project structure:
.
├── Dockerfile
├── README.md
└── code
    ├── src
    │   ├── index.js
    │   └── package.json
    └── node_modules
        ├── lots of other node packages
        └── puppeteer
            ├── .local-chromium
            ├── package.json
            └── all the other puppeteer files
The updated docker volume maps the entire contents of the local ./code/src to the docker container's /usr/src/app. This does NOT include the node_modules directory:
volumes:
- ${REPO}/code/src:/usr/src/app:ro
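Putting it together, the updated service definition looks like this (the same compose file as above; only the volume path changes):
app:
  restart: always
  build: ${REPO}
  volumes:
    - ${REPO}/code/src:/usr/src/app:ro
  working_dir: /usr/src/app
  ports:
    - "8087:5000"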
I ran into this problem and wanted to leave the simple solution. The reason it couldn't find my chrome install is that I had mounted my local volume into the container for testing. I use a mac, so the local npm install gave me the mac version of chromium. When that node_modules folder was mounted into the linux container, it expected to find the linux version, which was not there.
To get around this, you need to exclude your node_modules folder when you mount your code into the container. I was able to do that by passing another volume parameter:
docker run --rm \
  --volume ~/$project:/dist/$project \
  --volume /dist/$project/node_modules
Because the second --volume has no host path, Docker creates an anonymous volume at /dist/$project/node_modules, which masks the host's node_modules and leaves the container's own modules in place.
Please visit this link; it is quite helpful if you want to launch headless Puppeteer inside a Docker container:
https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md
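In short: Chromium's sandbox generally cannot run inside a default Docker container, so that document suggests disabling it when launching (as the question above already does). A minimal sketch:
const puppeteer = require('puppeteer');

(async () => {
  // sandbox flags commonly required when Chromium runs as root in a container
  const browser = await puppeteer.launch({
    args: ['--no-sandbox', '--disable-setuid-sandbox'],
  });
  // ... do your work ...
  await browser.close();
})();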
This is my project's code structure:
config
migrations
models
utils
.dockerignore
app.js
docker-compose.yml
Dockerfile
package.json
Make sure you have installed Docker. Once it is installed, you can follow this procedure:
Run docker-compose up --build inside your docker directory
And this is my repo regarding headless chrome inside docker
Related
I have dockerized a PHP application that requires some npm dependencies, so I've installed Node.js and the required packages in the Docker container using:
FROM php:8.0.2-fpm-alpine
WORKDIR /var/www/html
RUN docker-php-ext-install pdo_mysql
RUN docker-php-ext-install mysqli
RUN apk add icu-dev
RUN docker-php-ext-configure intl && docker-php-ext-install intl
RUN apk add --update libzip-dev curl-dev && \
    docker-php-ext-install curl && \
    apk del gcc g++ && \
    rm -rf /var/cache/apk/*
COPY docker/php-fpm/config/php.ini /usr/local/etc/php/
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
RUN apk add --update nodejs nodejs-npm
RUN npm install gulp-cli -g
RUN npm install
COPY src src/
CMD ["php-fpm"]
EXPOSE 9000
this is my docker-compose.yml:
version: '3.7'
services:
  php-fpm:
    container_name: boilerplate_app
    restart: always
    build:
      context: .
      dockerfile: ./docker/php-fpm/Dockerfile
    volumes:
      - ./src:/var/www/html
The problem is that when I enter the container using docker exec -ti boilerplate_app sh and run ls -la, I can't see any node_modules folder. In fact, if I try to execute the installed dependency gulp, I get:
Local modules not found in /var/www/html
Try running: npm install
What did I do wrong?
There are two issues:
You are running npm install in a folder that does not contain any package.json listing the required node modules. If you inspect the build logs, you should see something like:
no such file or directory, open '/var/www/html/package.json'
Moreover, when you mount your local src folder, you replace the content of /var/www/html/ with the content of src, which might not include any node_modules folder:
volumes:
- ./src:/var/www/html
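A sketch of one way to fix both issues, assuming package.json lives in your local src folder (adjust the COPY path if it sits elsewhere): give npm install a manifest to read in the Dockerfile, and mask node_modules with an anonymous volume so the bind mount does not hide it.
# Dockerfile: copy the manifest first, then install, then copy the source
COPY src/package.json ./
RUN npm install
COPY src src/

# docker-compose.yml: keep the image's node_modules visible despite the bind mount
volumes:
  - ./src:/var/www/html
  - /var/www/html/node_modules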
I'm facing the following error when I run:
docker build -t web_app .
My web_app structure is:
web-app
├── Dockerfile
└── src
    ├── server.py
    └── requirements.txt
ERROR: No matching distribution found for aiohttp (from -r requirements.txt (line 1))
Dockerfile:
FROM python:3.6
# Create app directory
WORKDIR /app
# Install app dependencies
COPY src/requirements.txt ./
RUN pip install -r requirements.txt
# Bundle app source
COPY src /app
EXPOSE 8080
CMD [ "python", "server.py" ]
Please help.
I need to install a local package (my own shared package) in a docker container, but it doesn't work without the -e option.
I have the following:
Docker folder tree:
./
├── Dockerfile
├── mypackage-lib
│   ├── MANIFEST.in
│   ├── mypackagelib
│   └── setup.py
├── requirements.txt
Dockerfile:
# pull official base image
FROM python:3.8.3
# set work directory
WORKDIR /usr/src/
# copy requirements file
COPY ./requirements.txt /usr/src/requirements.txt
COPY ./mypackage-lib /usr/src/mypackage-lib
# install dependencies
RUN pip install --no-cache-dir -r requirements.txt
RUN pip install -e /usr/src/mypackage-lib
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
pip freeze from the docker container:
# Editable install with no version control (mypackagelib==0.1)
-e /usr/src/mypackage-lib
I would like to have it in the site-packages directory and not linked from /usr/src/mypackage-lib.
Without the -e option, the main application (which uses the library) doesn't work.
setup.py looks like:
from setuptools import setup

install_requires = [
    'pydantic'
]

setup(name='mypackagelib',
      version='0.1',
      author='aemilc',
      packages=['mypackagelib'],
      install_requires=install_requires,
      include_package_data=True,
      python_requires='>3.8')
What did I forget?
Thank you!
E.
I have a prediction application with the below folder structure:
Docker
├── dataset
│   └── fastText
│       └── crawl-300d-2M.vec
├── Dockerfile
├── encoder
│   └── sentencoder2.pkl
├── pyt_models
│   └── actit1.pt
├── requirements.txt
└── src
    ├── action_items_api.py
    ├── infer_predict.py
    ├── model.py
    ├── models.py
    └── sent_enc.py
Dockerfile:
FROM python:3.6
EXPOSE 80
# copy and install packages for flask
COPY /requirements.txt /tmp/
RUN cd /tmp && \
pip3 install --no-cache-dir -r ./requirements.txt
WORKDIR /Docker
COPY src src
CMD gunicorn -b 0.0.0.0:80 --chdir src action_items_api:app
In the Dockerfile I copy only the src folder, where all the Python files are placed. I want to keep the fastText, encoder, and pyt_models directories accessible from outside the container.
When I tried:
docker run -p8080:80 -v /encoder/:/encoder/;/pyt_models/:/pyt_models/;/dataset/:/dataset/ -it actit_mount:latest
But by doing this, my code raises FileNotFoundError: No such file or directory: 'encoder/sentencoder2.pkl'.
But keeping the same folder structure, if I run the following from the Docker folder, it works:
gunicorn --chdir src --bind 0.0.0.0:80 action_items_api:app
What is wrong with the Dockerfile or the docker run?
Because you set WORKDIR /Docker, the gunicorn process has its working directory set to /Docker, which implies that relative file paths in your Python app are resolved from /Docker.
Give this a try:
docker run -p8080:80 \
-v $(pwd)/encoder/:/Docker/encoder/ \
-v $(pwd)/pyt_models/:/Docker/pyt_models/ \
-v $(pwd)/dataset/:/Docker/dataset/ \
-it actit_mount:latest
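If you run the app through docker-compose instead, the equivalent bind mounts would look something like this (a sketch; the image name is taken from the command above, and the service name app is an assumption):
services:
  app:
    image: actit_mount:latest
    ports:
      - "8080:80"
    volumes:
      - ./encoder:/Docker/encoder
      - ./pyt_models:/Docker/pyt_models
      - ./dataset:/Docker/dataset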
Note that the host paths must be absolute (hence the $(pwd) above). With a relative path such as ./folder, the daemon rejects the mount:
docker: Error response from daemon: create ./folder: "./folder" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
I'm building this dockerfile using docker-compose, and I need it to build native modules in docker (not just copy them from local). This only works when my local modules are built (npm install). As soon as I delete them, the build runs, but there is no node_modules directory and the app fails with: Error: Cannot find module 'express'
FROM mhart/alpine-node:6
MAINTAINER Me
COPY package.json index.js lib /app/
WORKDIR /app
RUN apk add --no-cache make gcc g++ python && \
    addgroup -S app && adduser -S -g app app && \
    npm install && \
    npm cache clean && \
    apk del make gcc g++ python
USER app
And here is the app directory:
.dockerignore
.eslintignore
.eslintrc.js
Dockerfile
docker-compose.yml
index.js
lib
npm-debug.log
package.json
The problem was with the way docker binds the app folder from the host to the container. The second line in the volumes section of my docker-compose.yml fixed it: it declares an anonymous volume at /app/node_modules, which masks the host's node_modules (or its absence) and preserves the modules installed during the image build.
volumes:
  - .:/app
  - /app/node_modules
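To confirm the container now uses its own modules, list them inside the running container (the service name app here is an assumption; use yours):
docker-compose exec app ls /app/node_modules   # should now include express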