Docker and Node .mjs files

I have an Express application with all the JS files using the *.mjs extension.
So, to start the server I do node index.mjs and it works as expected.
Now I'm trying to containerize the app.
I have this basic Dockerfile
FROM mhart/alpine-node:14
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD node index.mjs
EXPOSE 80
After building (with no errors) and tagging, I try to run my application with docker run my-app:latest. The command just moves to a new line in the console, and I don't see the console logs of my server.
If I try to hit localhost at port 80, it doesn't work.
I check the containers with docker container ls and I see the container
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ce7ca2a0db96 my-app:latest "/bin/sh -c 'node in…" 6 minutes ago Up 6 minutes 80/tcp clever_bhabha
If I look for logs, nothing.
Does anyone have this issue? Could it be related to .mjs files? If so, is there a way to use them in Docker?
Thanks

I think you need to publish the container's port 80 to a port on your local machine. You should try
docker run -p 8080:80 my-app
Then at localhost:8080 you should reach your app.
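One more thing worth checking: the server inside the container must listen on all interfaces, not just 127.0.0.1, or the published port won't reach it. A minimal index.mjs sketch (my own illustration, not the asker's code, assuming Express on port 80 as the Dockerfile suggests):

import express from 'express';

const app = express();
app.get('/', (req, res) => res.send('ok'));

// Bind to 0.0.0.0 so traffic forwarded by `docker run -p` can reach the server;
// a server bound only to 127.0.0.1 is unreachable from outside the container.
app.listen(80, '0.0.0.0', () => console.log('Server listening on port 80'));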

Related

frontend (React) call backend (express) fails to bind to HOST environment variable when trying to Dockerize

I am trying to dockerize both a frontend made with create-react-app and its Express API backend.
The Docker containers sit on EC2 instances. I have tried several tutorials and also threads here on Stack Overflow, but I can't work out what is going wrong.
The error I get is:
Could not find an open port at ec2-x-xx-xx-xxx.eu-west-2.compute.amazonaws.com.
This refers to my .env file, which contains only
HOST= ec2-x-xx-xx-xxx.eu-west-2.compute.amazonaws.com
My Dockerfile for the front end is as follows:
FROM node:12-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --silent
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
whereas the backend's:
FROM node:12-alpine as builder
WORKDIR /app
COPY package.json /app/package.json
RUN npm install
COPY . /app
EXPOSE 3001
CMD ["node", "app.js" ]
I need the backend to run on port 3001 and the frontend to run on 3000. The frontend is bound to the backend API via the proxy line in package.json, where I have placed:
"proxy":"http://ec2-x-xx-xx-xxx.eu-west-2.compute.amazonaws.com:3001",
Both containers build fine. However, running the backend with docker run -p 3001:3001 server and then the frontend with docker run -p 3000:3000 client spits out the error
Attempting to bind to HOST environment variable: ec2-x-xx-xx-xxx.eu-west-2.compute.amazonaws.com
If this was unintentional, check that you haven't mistakenly set it in your shell.
Learn more here: https://xxx.xx/xxx-advanced-config
Could not find an open port at ec2-x-xx-xx-xxx.eu-west-2.compute.amazonaws.com.
Network error message: listen EADDRNOTAVAIL: address not available 10.xx.0.xxx
I have tried running only the server side and then running npm start from my local machine, and it works. The problem seems to have to do with Docker networking between containers.
I also tried running the server side with the command
docker run -p 10.xx.0.xxx:3000:3000 client
to make sure the client pings the right IP address, but this didn't work.
Could anyone give me some direction please?
If you need more info on the source code, please just leave a comment; I didn't want to clutter the thread by making it longer than it already is.
Thank you
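One thing worth checking (a guess from the error text, since create-react-app's dev server binds to whatever HOST contains, as the error itself says): the EC2 public DNS name resolves to the instance's public IP, which is not assigned to any interface inside the instance or the container, hence listen EADDRNOTAVAIL. A sketch of a .env that binds to all interfaces instead (values illustrative, not taken from the question):

# .env: bind the CRA dev server to all interfaces rather than the EC2 hostname
HOST=0.0.0.0
PORT=3000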

Unable to reach angular running in Docker container [duplicate]

This question already has answers here:
ng serve not working in Docker container
(7 answers)
Closed 2 years ago.
I am brand new to Docker so please bear with me.
Dockerfile:
FROM node:alpine
WORKDIR '/app'
COPY ./package.json .
EXPOSE 4200
RUN npm i
COPY . .
CMD ["npm","start"]
Commands:
docker build -t angu .
docker run -p 4300:4200 angu
I am not sure if I need to include EXPOSE 4200 in the Dockerfile. But it is not working either way.
Your server inside the container is listening on localhost, which is different from localhost on your machine. The container has its own localhost.
To change the app to listen for outside traffic, you need to add --host 0.0.0.0 to the ng serve command in the npm start script in package.json, like so:
"start": "ng serve --host 0.0.0.0"
You don't need to add EXPOSE in your Dockerfile since it doesn't do much in practice; it's mostly for documentation purposes. You're already publishing the port with -p. You can read more about this in this article.
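For reference, the scripts section of package.json would then look roughly like this (a sketch; the surrounding scripts are typical Angular CLI defaults, not taken from the question):

{
  "scripts": {
    "ng": "ng",
    "start": "ng serve --host 0.0.0.0",
    "build": "ng build"
  }
}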

Run node Docker without port mapping

I am very new to Docker so please pardon me if this is a very silly question. Googling hasn't really produced anything I am looking for. I have a very simple Dockerfile which looks like the following
FROM node:9.6.1
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install --silent
COPY . /usr/src/app
RUN npm start
EXPOSE 8000
In the container the app is running on port 8000. Is it possible to access port 8000 without the -p 8000:8000? I just want to be able to do
docker run imageName
and access the app in my browser at localhost:8000.
By default, when you create a container, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container's network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host.
Read more: Container networking - Published ports
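For example, publishing container port 8000 to the same host port would look like this (imageName as in the question):

docker run -p 8000:8000 imageName

After that, the app would be reachable in the browser at localhost:8000.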
But you can also use docker-compose to configure and run your Docker images easily.
First, install docker-compose: Install Docker Compose.
Second, create a docker-compose.yml beside the Dockerfile and copy this code into it:
version: '3'
services:
  web:
    build: .
    ports:
      - "8000:8000"
Now you can start your container with this command:
docker-compose up
If you want to run your services in the background, you can pass the -d flag (for "detached" mode) as docker-compose up -d, and use docker-compose ps to see what is currently running.
Docker Compose Tutorial
Old question but someone might find it useful:
First get the IP of the docker container by running
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then connect to it from the browser or using curl, using the container IP and the exposed port:
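For example, assuming the inspect command printed 172.17.0.2 and the app listens on port 8000 (both values illustrative):

curl http://172.17.0.2:8000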
Note that you will not be able to access the container via the host's localhost/0.0.0.0, because the port is not mapped to the host.

Why is my Docker container not running my Nodejs app?

End goal: to spin up a Docker container running my Express.js application on port 3000 (as if I am using npm start).
Details:
I am using Windows 10 Enterprise.
This is a very basic, front-end Express.js application.
It runs fine using npm start – no errors.
Dockerfile I am using:
FROM node:8.11.2
WORKDIR /app
COPY package.json .
RUN npm install
COPY src .
CMD node src/index.js
EXPOSE 3000
Steps:
I am able to create an image, using the basic docker build command:
docker build -t portfolio-img .
Running the image (I am using this command from a tutorial www.katacoda.com/courses/docker/deploying-first-container):
docker run -d --name portfolio-container -p 3000:3000 portfolio-img
The container is not running. It is created, since I can inspect it, but it has exited after the command. I am guessing I did something wrong with the last command, or I am not giving the correct instructions in the Dockerfile.
If anyone can point me in the right direction, I'd greatly appreciate it.
I have already searched a lot in the Docker documentation and on here.
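A likely lead, judging only from the Dockerfile above: with WORKDIR /app and COPY src ., the contents of src land directly in /app, so the entry point ends up at /app/index.js while CMD node src/index.js points one level too deep. The container's own logs should show what actually failed:

docker logs portfolio-container

If that prints a Cannot find module error for src/index.js, changing the CMD to node index.js (or copying with COPY src ./src) should line the paths up again.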

Google Sheets API v4, How to authenticate from Node.js (Docker Container)

I'm learning how to use Google Sheets API v4 to download data from a sheet into my Node.js server. I'm using Docker containers for my Node app. It works fine on localhost outside a container, but it fails in a Docker container, whether on localhost or online at a server. I've whitelisted the IP address at the Google API console. (Note: I'm easily able to use the Firebase API from this Node server, just not the Google Sheets v4 API.)
ref: https://developers.google.com/sheets/api/quickstart/nodejs#step_4_run_the_sample
The first time you run the app, the command line on the Node server displays:
Authorize this app by visiting this url:
https://accounts.google.com/o/oauth2/auth?access_type=offline&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fspreadsheets.readonly&response_type=code&client_id=xxx.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob
You go to that URL, and that Google page displays:
Sign in
Please copy this code, switch to your application and paste it there.
4/xxxxxxxxxxxx
And here's the rub. No way will that work. I can copy and paste the 4/xxx token into the command line, but it's a fail. No error message, no nothing. No function either. Is there a way to get there from here? I know this works fine in a standalone Node server on my desktop computer, but not in a Docker container (either on localhost or online). Is there a manual method for the authentication?
-----------Edit---------------------------------------------------------
I started looking at the code again, and the issue is that Node's readline fails when running inside a Docker container.
const readline = require('readline');

const rl = readline.createInterface({
  input: process.stdin,   // with `docker-compose up`, nothing is attached to stdin
  output: process.stdout
});
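For context, the quickstart then waits on this interface for the pasted code, roughly like this (a paraphrased sketch, not the exact quickstart source):

rl.question('Enter the code from that page here: ', (code) => {
  rl.close();
  // exchange `code` for an OAuth token here
});

With no stdin attached to the container, that callback simply never fires: no error, no output, exactly the symptom described above.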
And that issue already exists here on StackOverflow.
Unable to get standard input inside docker container
duplicate of:
how to get docker container to read from stdin?
You need to run the container in interactive mode with --interactive or -i.
Whoa... and how do you do that in a docker-compose deployment?
Interactive shell using Docker Compose
Ouch. No go on that posting. Didn't work at all for me. See the answer provided below.
Info provided here in case anybody else hits this bump in the road.
So it turns out the solution was nowhere near that provided by Interactive shell using Docker Compose
I'm running a node server in a Docker container. I wanted to use the terminal to insert a token upon container startup, in response to a Google Sheets API call, using the Node readline method.
Instead, the solution I came up with was the result of a note I saw in a Docker Compose GitHub issue. A long slow read of the Docker Compose docs got me to a better solution. It was as simple as:
$ docker-compose build
$ docker-compose run -p 8080:80 node
One important issue here: the word node is the name of my service as called out in the docker-compose.yml file below. Unlike docker-compose up, docker-compose run starts a one-off container with your terminal's stdin attached, which is why readline can finally receive the pasted token. This solution worked fine on both my localhost and at an online server via an SSH terminal.
Dockerfile:
FROM node:8
RUN mkdir -p /opt/app
# set our node environment, either development or production
ARG NODE_ENV=production
ENV NODE_ENV $NODE_ENV
# default to port 80 for node, and 5858 or 9229 for debug
ARG PORT=80
ENV PORT $PORT
EXPOSE $PORT 5858 9229
# install dependencies first, in a different location for easier app bind mounting for local development
WORKDIR /opt
COPY package.json package-lock.json* ./
RUN npm install && npm cache clean --force
ENV PATH /opt/node_modules/.bin:$PATH
# copy in our source code last, as it changes the most
WORKDIR /opt/app
COPY . /opt/app
CMD [ "node", "./bin/www" ]
docker-compose.yml
version: '3.1'

services:
  node:   # <---- name of the service, as used in `docker-compose run -p 8080:80 node`
    build:
      context: .
      args:
        - NODE_ENV=development
    command: ../node_modules/.bin/nodemon ./bin/www --inspect=0.0.0.0:9229
    ports:
      - "80:80"
      - "5858:5858"
      - "9229:9229"
    volumes:
      - .:/opt/app
      # this is a workaround to prevent host node_modules from accidentally getting mounted in container
      # in case you want to use node/npm both outside container for test/lint etc. and also inside container
      # this will overwrite the default node_modules dir in container so it won't conflict with our
      # /opt/node_modules location. Thanks to PR from #brnluiz
      - notused:/opt/app/node_modules
    environment:
      - NODE_ENV=development
    # tty: true ## tested, not needed
    # stdin_open: true ## tested, not needed

volumes:
  notused:
Many thanks to Bret Fisher for his work on Node Docker defaults.
