Webpack HMR hot reload not working in a Docker container on WSL 2 - node.js

To preface, I am new to Docker.
I created my own Docker container for the local development environment of a project I am working on, using Inertia.js + Laravel Jetstream, in Ubuntu on WSL 2.
I am having trouble getting webpack's HMR (hot reloading) to work; when I run npm run hot, it also does not start a proxy server.
When I run docker-compose exec php npm run hot I get the following message:
> @ hot /var/www/html
> cross-env NODE_ENV=development node_modules/webpack-dev-server/bin/webpack-dev-server.js --inline --hot --disable-host-check --config=node_modules/laravel-mix/setup/webpack.config.js
ℹ 「wds」: Project is running at http://localhost:8080/
ℹ 「wds」: webpack output is served from http://localhost:8080/
ℹ 「wds」: Content not from webpack is served from /var/www/html/public
ℹ 「wds」: 404s will fallback to /index.html
DONE Compiled successfully in 27200ms
When I make a change to a Vue template, for example, the change is not reflected on the site running in Chrome (localhost:8080), not even after a hard refresh. The changes are never picked up, even though compilation succeeds every time.
I tried tweaking my webpack config following other suggestions online, for example adding hmrOptions and devServer options, but to no avail.
Here is my webpack.mix.js for reference:
const cssImport = require('postcss-import')
const cssNesting = require('postcss-nesting')
const mix = require('laravel-mix')
const path = require('path')
const purgecss = require('@fullhuman/postcss-purgecss')
const tailwindcss = require('tailwindcss')

mix.js('resources/js/app.js', 'public/js')
    .postCss('resources/css/app.css', 'public/css/app.css')
    .options({
        hmrOptions: {
            host: 'localhost',
            port: '8080',
        },
        postCss: [
            cssImport(),
            cssNesting(),
            tailwindcss('tailwind.config.js'),
            ...mix.inProduction() ? [
                purgecss({
                    content: ['./resources/views/**/*.blade.php', './resources/js/**/*.vue'],
                    defaultExtractor: content => content.match(/[\w-/:.]+(?<!:)/g) || [],
                    whitelistPatternsChildren: [/nprogress/],
                }),
            ] : [],
        ],
    })
    .webpackConfig({
        mode: 'development',
        output: { chunkFilename: 'js/[name].js?id=[chunkhash]' },
        resolve: {
            alias: {
                vue$: 'vue/dist/vue.runtime.esm.js',
                '@': path.resolve('resources/js'),
            },
        },
        devServer: { // I have tried with & without this
            proxy: {
                host: '0.0.0.0',
                port: 8080,
            },
            watchOptions: {
                aggregateTimeout: 200,
                poll: 5000,
            },
        },
    })

if (mix.inProduction()) {
    mix.version()
    mix.sourceMaps()
}
Here is my docker-compose.yml:
version: "3.7"
services:
  php:
    build:
      args:
        user: admin
        uid: 1000
      context: ./
      dockerfile: Dockerfile
    image: app
    container_name: app-php
    restart: unless-stopped
    working_dir: /var/www/html
    volumes:
      - ./:/var/www/html
    networks:
      - app
  mysql:
    image: mysql:5.7
    container_name: app-mysql
    restart: unless-stopped
    environment:
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_USER: ${DB_USERNAME}
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - ./docker-compose/mysql:/docker-entrypoint-initdb.d
      - nodemodules:/node_modules
    networks:
      - app
  nginx:
    image: nginx:alpine
    container_name: app-nginx
    restart: unless-stopped
    ports:
      - 8080:80
    volumes:
      - ./:/var/www/html
      - ./docker-compose/nginx:/etc/nginx/conf.d/
    networks:
      - app
    links:
      - php
    depends_on:
      - php

networks:
  app:
    driver: bridge

volumes:
  nodemodules: {}
Here is my Dockerfile:
FROM php:7.4-fpm

# Arguments defined in docker-compose.yml
ARG user
ARG uid

# Install system dependencies
RUN curl -sL https://deb.nodesource.com/setup_13.x | bash
RUN apt-get update && \
    apt-get install -y -q --no-install-recommends \
        nano apt-utils curl zip unzip default-mysql-client nodejs build-essential git \
        libcurl4-gnutls-dev libmcrypt-dev libmagickwand-dev \
        libwebp-dev libjpeg-dev libpng-dev libxpm-dev \
        libonig-dev \
        libxml2-dev \
        libfreetype6-dev libaio-dev zlib1g-dev libzip-dev && \
    echo 'umask 002' >> /root/.bashrc && \
    apt-get clean

# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*

# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd

# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer

# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
    chown -R $user:$user /home/$user

# Set working directory
WORKDIR /var/www/html
USER $user
EXPOSE 8080
Let me know if you need more information. Thank you!

Add the following to your Dockerfile:
ENV CHOKIDAR_USEPOLLING=true
Then remove node_modules and reinstall.
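If the environment variable alone does not help, polling can also be forced from the webpack side. Below is a minimal sketch of a webpack.mix.js, assuming webpack-dev-server 3 (which the --inline/--disable-host-check flags in the output above suggest); the host, port and poll interval are illustrative, and the rest of the original configuration is omitted for brevity.
// webpack.mix.js (sketch) - force polling so file changes made on the Windows
// host are noticed inside the WSL 2 / Docker container.
const mix = require('laravel-mix')

mix.js('resources/js/app.js', 'public/js')
    .options({
        hmrOptions: {
            host: 'localhost', // host the browser uses to reach the dev server
            port: 8080,
        },
    })
    .webpackConfig({
        devServer: {
            host: '0.0.0.0', // listen on all interfaces inside the container
            port: 8080,
            watchOptions: {
                poll: 1000, // check for changes every second instead of relying on inotify events
                ignored: /node_modules/,
            },
        },
    })
Note also that in the compose file above only the nginx service publishes port 8080 (mapped to nginx's port 80), so the browser at localhost:8080 reaches nginx rather than the dev server; publishing or proxying the dev-server port from the php service is assumed here.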

Related

Docker: Cannot find module after building

Dockerfile:
FROM node:lts-slim AS base

# Install dependencies
RUN apt-get update \
    && apt-get install --no-install-recommends -y openssl

# Create app directory
WORKDIR /usr/src

FROM base AS builder

# Files required by npm install
COPY package*.json ./

# Files required by prisma
COPY prisma ./prisma

# Install app dependencies
RUN npm ci

# Bundle app source
COPY . .

# Build app
RUN npm install -g prisma --force
RUN prisma generate
RUN npm run build \
    && npm prune --omit=dev

FROM base AS runner

# Copy from build image
COPY --from=builder /usr/src/node_modules ./node_modules
COPY --from=builder /usr/src/dist ./dist
COPY --from=builder /usr/src/package*.json ./
COPY prisma ./prisma

RUN apt-get update \
    && apt-get install --no-install-recommends -y procps openssl

RUN chown -R node /usr/src/node_modules
RUN chown -R node /usr/src/dist
RUN chown -R node /usr/src/package*.json

USER node

# Start the app
EXPOSE 80
CMD ["node", "dist/index.js"]
docker-compose.yml
version: '3'
services:
  mysql:
    image: mysql:latest
    container_name: mysql
    ports:
      - 3306:3306
  bot:
    container_name: bot
    build:
      context: .
    depends_on:
      - mysql
docker-compose.prod.yml
version: '3'
services:
  mysql:
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: '123123'
      MYSQL_DATABASE: 'test'
      MYSQL_USER: 'test'
      MYSQL_PASSWORD: '123123'
  bot:
    ports:
      - "3000:80"
    env_file:
      - docker-compose.prod.bot.env
volumes:
  mysql:
For some reason, after running these commands:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml run bot npx prisma migrate deploy
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
I get an error when the bot container starts up, saying it cannot find a node module.
I am using Ubuntu 20.04 to run Docker. For some reason only this part is not working; when I run the build on a normal machine without Docker, the build works fine.
The only problem is with Docker.
error:
bot | node:internal/modules/cjs/loader:936
bot | throw err;
bot | ^
bot |
bot | Error: Cannot find module 'envalid'
bot | Require stack:
bot | - /usr/src/dist/config.js
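A quick way to confirm whether the module actually made it into the runtime image is a small require check. This is only a diagnostic sketch; the file name check-module.js is hypothetical.
// check-module.js (hypothetical) - run it inside the runner image, e.g.:
//   docker-compose -f docker-compose.yml -f docker-compose.prod.yml run bot node check-module.js
try {
  // resolve without executing the module; this throws if it is missing from node_modules
  console.log('envalid resolves to', require.resolve('envalid'));
} catch (err) {
  console.error('envalid is not installed in this image:', err.message);
}
If it turns out to be missing, one common cause with a multi-stage build like this is that the package sits in devDependencies and is removed by npm prune --omit=dev.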

Docker compose - my images repository tag is <none> when building

I am a newbie with Docker, and while going through an online course, I stumbled into a problem. While trying to build separate dev, prod and test images, only my dev one seems to build correctly.
My prod and test images build with their tag as "<none>", even though I run the build with the following command:
sudo docker build -t ultimatenode:test --target test .
Also, my prod image is supposed to be smaller in size, because I removed the original node_modules and the ./test folder, but there seems to be some mistake on my part.
Would anyone be kind enough to check the problem in the following Dockerfile?
FROM node:16 as base
EXPOSE 80
WORKDIR /app
COPY package*.json ./
RUN npm config list
RUN npm ci \
    && npm cache clean --force
ENV PATH /app/node_modules/.bin:$PATH
CMD ["node", "server.js"]

# DEVELOPMENT
FROM base as dev
ENV NODE_ENV=development
# NOTE: these apt dependencies are only needed
# for testing. they shouldn't be in production
RUN apt-get update -qq \
    && apt-get install -qy --no-install-recommends \
        bzip2 \
        ca-certificates \
        curl \
        libfontconfig \
    && rm -rf /var/lib/apt/lists/*
RUN npm config list
RUN npm install --only=development \
    && npm cache clean --force
COPY . /app
CMD ["nodemon", "server.js"]

# TEST
FROM dev as test
COPY . .
RUN npm audit

# PREPROD
FROM test as preprod
# Removing unnecessary folders
RUN rm -rf ./tests && rm -rf ./node_modules

FROM base as prod
COPY --from=pre-prod /app /app
WORKDIR /app
HEALTHCHECK CMD curl http://127.0.0.1/ || exit 1
CMD ["node", "server.js"]
Also, here is my docker-compose.yml
version: '2.4'
services:
  redis:
    image: redis:alpine
  db:
    image: postgres:9.6
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
    volumes:
      - db-data:/var/lib/postgresql/data
  vote:
    image: bretfisher/examplevotingapp_vote
    ports:
      - '5000:80'
    depends_on:
      - redis
  result:
    build:
      context: .
      target: dev
    ports:
      - '5001:80'
    volumes:
      - .:/app
    environment:
      - NODE_ENV=development
    depends_on:
      - db
  worker:
    image: bretfisher/examplevotingapp_worker
    depends_on:
      - redis
      - db
volumes:
  db-data:

How to configure webpack hot reload to work inside Docker?

I've been working on a Symfony 4 project for months, and I want to Dockerize it.
I got everything working except Webpack, which I use to compile my .scss and .js files with the npm run watch or npm run dev command.
Currently webpack does not pick up the changes I make to a .scss or .js file, for example.
Here is my config; I'm surely missing something in my files.
My docker-compose.yml:
version: '3.8'
services:
  mysql:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    restart: on-failure
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: on-failure
    depends_on:
      - mysql
    ports:
      - '8004:80'
    environment:
      PMA_HOSTS: mysql
  php:
    build:
      context: .
      dockerfile: php/Dockerfile
    volumes:
      - '../.:/usr/src/app'
    restart: on-failure
    env_file:
      - .env
  nginx:
    image: nginx:1.19.0-alpine
    restart: on-failure
    volumes:
      - '../public:/usr/src/app'
      - './nginx/default.conf:/etc/nginx/conf.d/default.conf:ro'
    ports:
      - '80:80'
    depends_on:
      - php
  node:
    build:
      context: .
      dockerfile: node/Dockerfile
    volumes:
      - '../.:/usr/src/app'
    command: npm run watch
My Dockerfile for the Node image:
FROM node:12.10.0
RUN apt-get update && \
    apt-get install -y \
    curl
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
WORKDIR /usr/src/app
CMD ["npm", "run", "watch"]
My webpack.config.js:
var Encore = require('@symfony/webpack-encore');
var CopyWebpackPlugin = require('copy-webpack-plugin');

if (!Encore.isRuntimeEnvironmentConfigured()) {
    Encore.configureRuntimeEnvironment(process.env.NODE_ENV || 'dev');
}

Encore
    .setOutputPath('public/build/')
    .setPublicPath('/build')
    .addEntry('app', './assets/js/app.js')
    .splitEntryChunks()
    .disableSingleRuntimeChunk()
    .enableSassLoader()
    .cleanupOutputBeforeBuild()
    .enableBuildNotifications()
    .enableSourceMaps(!Encore.isProduction())
    .enableVersioning(Encore.isProduction())
    .configureBabel(() => {}, {
        useBuiltIns: 'usage',
        corejs: 3
    })
    .addPlugin(new CopyWebpackPlugin([
        { from: './assets/pictures', to: 'pictures' }
    ]))
;

module.exports = Encore.getWebpackConfig();

// module.exports = {
//     mode: 'development',
//     devServer: {
//         port: 80,
//         host: '0.0.0.0',
//         disableHostCheck: true,
//         watchOptions: {
//             ignored: /node_modules/,
//             poll: 1000,
//             aggregateTimeout: 1000
//         }
//     }
// }
As you can see, I already tried some things in webpack.config.js. I saw many mentions of watchOptions but I didn't get it working.
And here is my project's organisation:
[screenshot: project structure]
I want to be able to launch my Docker setup with webpack listening for any change I make in real time.
Here is the console output after running docker-compose up:
[screenshot: docker-compose up output]
If you have any advice to improve my Docker environment, I'll take it all!
Thank you!
I just use this:
docker-compose.yml:
node:
  image: node:16-alpine3.13
  working_dir: /var/www/app
  user: "$USERID"
  volumes:
    - .:/var/www/app
  tty: true
and then docker-compose exec node yarn watch.
It works as expected.
Okay, I think I solved my issue.
I followed @Rufinus' answer; I had to run docker-compose up in a first console, then open a second console and execute winpty docker-compose exec node yarn watch. But for some reason I had an issue with node-sass compatibility: I had mounted my node_modules folder (built on Windows 10) into the container (Linux).
So I opened a shell in my node container and executed npm rebuild node-sass to solve this, and it finally worked!
But I don't know why; my current solution is to execute npm run watch on my local folders (like I used to do before Dockerizing my application), and it re-builds the assets when I change a .scss or .js file.
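For reference, when file-change events do not reach the watcher inside a container (a common situation with bind mounts from Windows or macOS hosts), Encore can be told to poll instead. This is a minimal sketch assuming a recent @symfony/webpack-encore that provides configureWatchOptions; the entry points are taken from the question and the 1000 ms interval is illustrative.
// webpack.config.js (sketch) - poll the bind-mounted files instead of relying on
// filesystem events, which often do not propagate from the host into the container.
const Encore = require('@symfony/webpack-encore');

Encore
    .setOutputPath('public/build/')
    .setPublicPath('/build')
    .addEntry('app', './assets/js/app.js')
    .disableSingleRuntimeChunk()
    .configureWatchOptions(watchOptions => {
        watchOptions.poll = 1000;
        watchOptions.ignored = /node_modules/;
    });

module.exports = Encore.getWebpackConfig();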

How to fix PSQL connection error with Docker Compose

I'm trying to connect my Python-Flask app to a Postgres database in a Docker environment. I am using a docker-compose file to build my web and db services.
However, I am getting the following error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Here is my Dockerfile:
FROM ubuntu:16.04 as base
RUN apt-get update -y && apt-get install -y python3-pip python3-dev postgresql libpq-dev libffi-dev jq
ENV LC_ALL=C.UTF-8 \
    LANG=C.UTF-8
ENV FLASK_APP=manage.py \
    FLASK_ENV=development \
    APP_SETTINGS=config.DevelopmentConfig \
    DATABASE_URL=postgresql://user:pw@postgres/database
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt

FROM base as development
EXPOSE 5000
CMD ["bash"]
Here is my Docker-compose file:
version: "3.6"
services:
development_default: &DEVELOPMENT_DEFAULT
build:
context: .
target: development
working_dir: /app
volumes:
- .:/app
environment:
- GOOGLE_CLIENT_ID=none
- GOOGLE_CLIENT_SECRET=none
web:
<<: *DEVELOPMENT_DEFAULT
ports:
- "5000:5000"
depends_on:
- db
command: flask run --host=0.0.0.0
db:
image: postgres:10.6
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=db

docker-compose with container mongo ECONNREFUSED

I'm new at Docker, so I tried to connect multiple containers:
- mongo
- my app
- redis
and I get this error in Chrome: { code: "ECONNREFUSED", errno: "ECONNREFUSED", syscall: "connect", address: "127.0.0.1", port: 8080 }
Here is my docker-compose file:
version: "2"
services:
mongo:
image: "mongo"
restart: always
ports:
- "27017:27017"
networks:
- all
redis:
image: "redis:3.2.1"
networks:
- all
node:
image: "project"
links:
- mongo
ports:
- "8080:8080"
networks:
- all
backoffice:
image: "back"
links:
- node
- mongo
- redis
depends_on:
- mongo
- node
- redis
ports:
- "8181:8181"
networks:
- all
networks:
all:
driver: bridge
My different Dockerfiles:
For mongo:
FROM mongo:2.6
COPY ./data ./
EXPOSE 27017
CMD ["mongod"]
For the node service:
FROM node:4.4.7
WORKDIR /app
COPY /api ./
RUN npm install
RUN apt-get -q update && apt-get install -y -qq \
    git \
    curl
EXPOSE 8080
CMD ["node","index.js"]
For the back service:
FROM node:4.4.7
WORKDIR /api
COPY . ./
RUN npm install && npm install bower -g && npm install gulp -g
RUN bower install --allow-root && gulp build
RUN apt-get -q update && apt-get install -y -qq \
    git \
    curl
EXPOSE 8181
CMD ["node","index.js"]
Can you please help me figure this out?
Probably your port 8080 is already in use. Open your cmd and type netstat -a to check port availability.
I solved my issue: I was using version 2 of docker-compose, but links are available only from version 3.
Just upgrade and it works fine.
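As a side note, inside a compose network containers reach each other by service name, not by 127.0.0.1 (which refers to the calling container itself). A minimal sketch of what the node service's Mongo connection might look like is below; the use of the official mongodb driver (v4+) and the database name mydb are assumptions, and the syntax would need adapting for the very old node:4.4.7 base image shown above.
// index.js (sketch) - connect to the mongo service by its compose service name
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://mongo:27017'); // "mongo" is the service name in docker-compose.yml

async function main() {
  await client.connect();
  const db = client.db('mydb'); // hypothetical database name
  console.log('connected, collections:', await db.listCollections().toArray());
  await client.close();
}

main().catch(err => {
  console.error('Mongo connection failed:', err);
  process.exit(1);
});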
