My Puppeteer is not working in a Docker container - node.js

It was working fine in my development environment, but not in the Docker container. How do I set up Puppeteer in a Docker container?
I have looked at a lot of questions and answers, but nothing has worked. I think my code is wrong.
My Env:
OS: macOS, M1
Here is my Dockerfile:
FROM node:18-alpine
WORKDIR /usr/src/app
RUN apk update && apk upgrade
RUN apk add --no-cache udev ttf-freefont chromium
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true \
    CHROME_PATH=/usr/bin/chromium-browser \
    PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 8120
CMD ["npm", "run", "start:dev"]
And docker-compose.yml:
version: "3.7"
services:
node:
container_name: node-test
build:
context: .
dockerfile: Dockerfile
restart: always
platform: linux/amd64
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
- test-net
env_file:
- .env
ports:
- "8010:8010"
depends_on:
- mongo
mongo:
image: mongo
container_name: mongo
restart: always
networks:
- test-net
ports:
- "27017:27017"
environment:
MONGO_INITDB_ROOT_USERNAME: test
MONGO_INITDB_ROOT_PASSWORD: test
MONGO_INITDB_DATABASE: test
volumes:
mongo:
networks:
test-net:
name: test-net
external: true
crawler.ts:
import puppeteer from 'puppeteer';

export const crawler = async (url: string) => {
  console.log(url); // <- [v] working
  const browser = await puppeteer.launch({
    ignoreHTTPSErrors: false,
    executablePath: '/usr/bin/chromium-browser',
    headless: true,
    args: [
      '--disable-dev-shm-usage',
      '--no-sandbox',
      '--disable-setuid-sandbox'
    ],
    slowMo: 30
  });
  console.log(browser); // [v] prints something
  const page = await browser.newPage();
  console.log(page); // <- [x] not working; prints nothing
  await page.setViewport({ width: 1920, height: 1080 });
  await page.goto("https://www.example.com");
  await page.close();
  await browser.close();
};
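When newPage() fails with "Target closed", Chromium's own stderr is usually the most useful signal. As a debugging aid (not a fix), Puppeteer's dumpio launch option forwards the browser process's stdout/stderr to the Node console; a minimal sketch reusing the launch arguments above:
import puppeteer from 'puppeteer';

// Debugging variant: dumpio pipes Chromium's own output into this process,
// which usually reveals why the browser process dies right after launch.
export const debugLaunch = async () => {
  const browser = await puppeteer.launch({
    executablePath: '/usr/bin/chromium-browser',
    headless: true,
    dumpio: true,
    args: ['--disable-dev-shm-usage', '--no-sandbox', '--disable-setuid-sandbox'],
  });
  const page = await browser.newPage();
  await page.goto('https://www.example.com');
  await browser.close();
};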
The error message changes whenever I modify the code, but this is the latest one in the terminal:
node-test | ProtocolError: Protocol error (Target.createTarget): Target closed.
node-test | at /usr/src/app/node_modules/puppeteer/src/common/Connection.ts:119:16
node-test | at new Promise (<anonymous>)
node-test | at Connection.send (/usr/src/app/node_modules/puppeteer/src/common/Connection.ts:115:12)
node-test | at Browser._createPageInContext (/usr/src/app/node_modules/puppeteer/src/common/Browser.ts:525:47)
node-test | at BrowserContext.newPage (/usr/src/app/node_modules/puppeteer/src/common/Browser.ts:886:26)
node-test | at Browser.newPage (/usr/src/app/node_modules/puppeteer/src/common/Browser.ts:518:33)
node-test | at crawler (/usr/src/app/src/crawler/crawler.ts:16:30)
node-test | at processTicksAndRejections (node:internal/process/task_queues:95:5)
node-test | at async CrawlerController (/usr/src/app/src/crawler/crawler.controller.ts:10:5) {
node-test | originalMessage: ''
node-test | }

Related

Docker - Files Contain Bad Line Terminators

I set up a development environment using Docker on Windows 10. My Dockerfile and docker-compose.yml use php:8.2.2-apache, mysql:8.0.32, composer:2.5.3, and phpMyAdmin:5.2.1.
I will admit that getting Docker up and running to basically mimic my old XAMPP development environment has been incredibly frustrating.
Recently, I added robmorgan/phinx 0.13 to my composer.json. Initially, I ran vendor/bin/phinx init from the docker container's terminal, and it successfully created a phinx.php file. I stopped the container and modified my phinx.php file to use values from my .env file. When I reran Docker and went back into the container's terminal to run vendor/bin/phinx create <name>, I got this error:
usr/bin/env: 'php\r': No such file or directory
I have read in several places that this happens because files have Windows line terminators instead of Unix line terminators.
The issue is that I do not understand which file is affected. How can I audit my files to find the culprit?
In case you are curious, these are my docker-compose.yml and phinx.php:
version: '3.9'
services:
  webserver:
    build: ./docker
    image: -redacted-
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./www:/var/www/html
    links:
      - db
  db:
    image: mysql:8.0.32
    ports:
      - "3306:3306"
    volumes:
      - ./database:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
  composer:
    image: composer:2.5.3
    command: ["composer", "install"]
    volumes:
      - ./www:/app
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin:5.2.1
    restart: always
    ports:
      - 8080:80
    environment:
      PMA_HOST: db
<?php
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();

$databaseName = $_ENV['MYSQL_DATABASE'];
$username = $_ENV['MYSQL_USER'];
$password = $_ENV['MYSQL_PASSWORD'];

return
[
    'paths' => [
        'migrations' => '%%PHINX_CONFIG_DIR%%/db/migrations',
        'seeds' => '%%PHINX_CONFIG_DIR%%/db/seeds'
    ],
    'environments' => [
        'default_migration_table' => 'phinxlog',
        'default_environment' => 'development',
        'production' => [
            'adapter' => 'mysql',
            'host' => 'localhost',
            'name' => $databaseName,
            'user' => $username,
            'pass' => $password,
            'port' => '3306',
            'charset' => 'utf8',
        ]
    ],
    'version_order' => 'creation'
];
And I am running this to load Docker: docker-compose --env-file=./www/.env up
This should find files with CRLFs:
find . -type f -print0 | xargs -0 file | grep CRLF
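If the file utility is not available in the container, the same audit can be done from Node. A rough sketch (a hypothetical find-crlf.ts, assuming it is run from the project root) that walks the tree and flags files containing CRLF line endings:
import { readdirSync, readFileSync, statSync } from 'fs';
import { join } from 'path';

// Recursively list files, skipping directories that are never the culprit.
function walk(dir: string): string[] {
  return readdirSync(dir).flatMap((name) => {
    if (name === 'vendor' || name === 'node_modules' || name === '.git') return [];
    const full = join(dir, name);
    return statSync(full).isDirectory() ? walk(full) : [full];
  });
}

for (const file of walk('.')) {
  // A CRLF on the shebang line is exactly what trips /usr/bin/env with 'php\r'.
  if (readFileSync(file).includes('\r\n')) {
    console.log(`CRLF: ${file}`);
  }
}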

Differences in code between local project and Dockerized project break the app

I'm trying to dockerize my current pet project, which uses NodeJS (ExpressJS) as a backend, React as a frontend, and PostgreSQL as a database. On both backend and frontend I use TypeScript instead of JavaScript. I'm also using Prisma as the ORM for my database. I decided on a standard three-container architecture: one for the backend, one for the database, and one for the frontend app. My Dockerfiles are as follows:
Frontend's Dockerfile
FROM node:alpine
WORKDIR /usr/src/frontend
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "start"]
Backend's Dockerfile
FROM node:lts
WORKDIR /usr/src/backend
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8000
RUN npx prisma generate
CMD ["npm", "run", "dev"]
there's also a .dockerignore file in the backend folder:
node_modules/
and my docker-compose.yml looks like this:
version: '3.9'
services:
  db:
    image: 'postgres'
    ports:
      - '5432:5432'
    environment:
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'postgres'
      POSTGRES_DB: 'hucuplant'
  server:
    build:
      context: ./backend_express
    ports:
      - "8000:8000"
    environment:
      DATABASE_URL: 'postgresql://postgres:postgres@localhost:5432/hucuplant?schema=public'
  client:
    build:
      context: ./frontend
    ports:
      - "3000:3000"
After doing a docker-compose up --build everything starts well, but when I try to register a new user on my site I get the following error:
Error:
hucuplant-server-1 | Invalid `prisma.user.findUnique()` invocation in
hucuplant-server-1 | /usr/src/backend/src/routes/Auth.ts:44:57
hucuplant-server-1 |
hucuplant-server-1 | 41 auth.post("/register", async (req: Request, res: Response) => {
hucuplant-server-1 | 42 const { email, username, password } = req.body;
hucuplant-server-1 | 43
hucuplant-server-1 | → 44 const usernameResult: User | null = await prisma.user.findUnique({
hucuplant-server-1 | where: {
hucuplant-server-1 | ? username?: String,
hucuplant-server-1 | ? id?: Int,
hucuplant-server-1 | ? email?: String
hucuplant-server-1 | }
hucuplant-server-1 | })
However, the existing code in my Auth.ts file on line 44 looks like this:
auth.post("/register", async (req: Request, res: Response) => {
  const { email, username, password } = req.body;

  const usernameResult: User | null = await prisma.user.findUnique({
    where: {
      username: username,
    },
  });
When I run the project locally everything works just fine, but in the containerized app the code Prisma reports differs from what is actually on disk and things break. What is causing this? How do I fix it?
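One way to rule out a mismatch between the schema and the generated client is to query with findFirst, which filters on any field rather than requiring one marked @unique. A minimal sketch, assuming the usual @prisma/client setup (findUserByUsername is a hypothetical helper, not code from the project):
import { PrismaClient, User } from '@prisma/client';

const prisma = new PrismaClient();

// findFirst accepts any filterable field, so it works even when the
// generated client inside the container predates a @unique on `username`.
export async function findUserByUsername(username: string): Promise<User | null> {
  return prisma.user.findFirst({ where: { username } });
}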

amqplib Error: "Frame size exceeds frame max" inside docker container

I am trying to build a simple application with a backend on node.js + ts and rabbitmq, based on docker. There are 2 containers: a rabbitmq container and a backend container with 2 servers running, producer and consumer. Now I am trying to get access to the rabbitmq server, but I get the error "Frame size exceeds frame max".
The full code is:
My producer server code is:
import express from 'express';
import amqplib, { Connection, Channel, Options } from 'amqplib';

const producer = express();

const sendRabbitMq = () => {
  amqplib.connect('amqp://localhost', function(error0: any, connection: any) {
    if (error0) {
      console.log('Some error...');
      throw error0;
    }
  });
};

producer.post('/send', (_req, res) => {
  sendRabbitMq();
  console.log('Done...');
  res.send("Ok");
});

export { producer };
It is imported into the main file index.ts and run from there.
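As an aside, the callback signature above matches amqplib's amqplib/callback_api entry point, while the default amqplib import exposes a promise API. A promise-style sketch, assuming the compose service name rabbitmq and the user/user credentials from the docker-compose file below (test-queue is a made-up queue name):
import amqplib from 'amqplib';

const sendRabbitMq = async () => {
  // 'rabbitmq' is the compose service name; user/user match the
  // RABBITMQ_DEFAULT_USER / RABBITMQ_DEFAULT_PASS values below.
  const connection = await amqplib.connect('amqp://user:user@rabbitmq:5672');
  const channel = await connection.createChannel();
  await channel.assertQueue('test-queue');
  channel.sendToQueue('test-queue', Buffer.from('hello'));
  await channel.close();
  await connection.close();
};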
Maybe I also have some bad configuration inside docker. My Dockerfile is:
FROM node:16
WORKDIR /app/backend/src
COPY *.json ./
RUN npm install
COPY . .
And my docker-compose.yml includes this:
version: '3'
services:
  backend:
    build: ./backend
    container_name: 'backend'
    command: npm run start:dev
    restart: always
    volumes:
      - ./backend:/app/backend/src
      - ./conf/myrabbit.conf:/etc/rabbitmq/rabbitmq.config
    ports:
      - 3000:3000
    environment:
      - PRODUCER_PORT=3000
      - CONSUMER_PORT=5672
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3.9.13
    container_name: 'rabbitmq'
    ports:
      - 5672:5672
      - 15672:15672
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=user
I would really appreciate your help.

(Docker-Compose) UnhandledPromiseRejectionWarning when connecting node and postgres

I am trying to connect the containers for postgres and node. Here is my setup:
yml file:
version: "3"
services:
postgresDB:
image: postgres:alpine
container_name: postgresDB
ports:
- "5432:5432"
environment:
- POSTGRES_DB=myDB
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=Thisisngo1995!
express-server:
build: ./
environment:
- DB_SERVER=postgresDB
links:
- postgresDB
ports:
- "3000:3000"
Dockerfile:
FROM node:12
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
COPY ormconfig.docker.json ./ormconfig.json
EXPOSE 3000
CMD ["npm", "start"]
connect to postgres:
let { Pool, Client } = require("pg");

let postgres = new Pool({
  host: "postgresDB",
  port: 5432,
  user: "postgres",
  password: "Thisisngo1995!",
  database: "myDB",
});

module.exports = postgres;
and here is how I handled my endpoint:
exports.postgres_get_controller = (req, resp) => {
  console.log("Reached Here");

  postgres
    .query('SELECT * FROM public."People"')
    .then((results) => {
      console.log(results);
      resp.send({ allData: results.rows });
    })
    .catch((e) => console.log(e));
};
Whenever I try to hit the endpoint above, I get this error in the container:
Reasons why?
Note: I am able to have everything functioning on my local machine (without docker) simply by changing "host: localhost"
Your postgres database name and username should be the same
You can use docker-compose-wait to make sure interdependent services are launched in the proper order.
See below for how to use it in your case.
Update the final part of your Dockerfile as below:
# ...
# this will be used to check if DB is up
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait ./wait
RUN chmod +x ./wait
CMD ./wait && npm start
Update some parts of your docker-compose.yml as below:
express-server:
  build: ./
  environment:
    - DB_SERVER=postgresDB
    - WAIT_HOSTS=postgresDB:5432
    - WAIT_BEFORE_HOSTS=4
  links:
    - postgresDB
  depends_on:
    - postgresDB
  ports:
    - "3000:3000"

elasticsearch check indices exist returns true on first run

I have a docker-compose setup running 2 containers, one for each service: node and elasticsearch.
app.js
...
const isElasticReady = await elastic.checkConnection();

if (isElasticReady) {
  const elasticIndex = await elastic.esclient.indices.exists({ index: elastic.index });

  if (!elasticIndex.body) {
    await elastic.createIndex(elastic.index);
    await elastic.setMapping();
    await data.populateDatabase();
  }
}
...
Whenever I run docker-compose up, esclient.indices.exists always returns false, even though the index already exists. As a result, a resource_already_exists_exception is always thrown.
The strange thing is that I am using nodemon for development, and whenever I make changes during development, esclient.indices.exists returns true. So the problem only happens on docker-compose up. I suspect something is happening asynchronously, but I am not sure what.
*docker-compose.yml - depends_on has been set.
version: '3.6'
services:
  api:
    image: nodeservice/node:10.15.3-alpine
    container_name: nodeservice
    build: .
    ports:
      - 3000:3000
    environment:
      - NODE_ENV=local
      - ES_HOST=elasticsearch
      - NODE_PORT=3000
      - ELASTIC_URL=http://elasticsearch:9200
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    command: npm run dev
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
    networks:
      - esnet
  elasticsearch:
    container_name: my_elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    logging:
      driver: none
    ports:
      - 9300:9300
      - 9200:9200
    networks:
      - esnet
volumes:
  esdata:
networks:
  esnet:
Any hints?
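One pattern that sidesteps the startup race entirely is to attempt the creation and treat resource_already_exists_exception as success. A rough TypeScript sketch against the v7 JavaScript client (ensureIndex is a hypothetical helper; esclient mirrors elastic.esclient above):
import { Client } from '@elastic/elasticsearch';

// Returns true if this call created the index, false if it already existed.
async function ensureIndex(esclient: Client, index: string): Promise<boolean> {
  try {
    await esclient.indices.create({ index });
    return true;
  } catch (err: any) {
    // An earlier or concurrent run already created the index; swallow only that case.
    if (err?.body?.error?.type === 'resource_already_exists_exception') {
      return false;
    }
    throw err;
  }
}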
