Can app in docker container proxy to host machine - node.js

I've got a Docker container running an app that needs to proxy requests from itself to my host machine, which is running an instance of a theme that shares data between the theme and the app. That way the app can be used across multiple themes, picking up the relevant theme's data.
Anyway, I've got an image built using the below Dockerfile
FROM node:12-alpine
EXPOSE 3006
RUN apk update && apk upgrade
RUN apk add \
bash \
git \
openssh
RUN npm install -g -y lerna
Then I create a container using
docker create -it -p 3006:3006 --name=testing -v $(pwd)/build:/build
The container is then started and the app is started using
PORT=3006 react-scripts-ts start screen
Which exposes the app through the container on port 3006. To proxy requests to itself, the package.json file is set as such
{
  "proxy": "http://192.168.1.101:3000"
  // Rest of file
}
The above setup works, but this project is worked on by multiple developers. My concern is that when it's run on a different setup, or the host machine's local IP isn't 192.168.1.101, it'll fail.
I've tried setting it to the below
{
  "proxy": "http://localhost:3000"
  // Rest of file
}
but this fails, presumably because it proxies the requests to the container's own localhost, not the host machine. So essentially my question, in simple terms, is: can I tell a Docker container to proxy/reroute a request to localhost (or localhost:3000) to my host machine's localhost?
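One possible approach, sketched here on the assumption that the team runs Docker Desktop (or Docker Engine 20.10+ on Linux), is to point the proxy at the special host.docker.internal name instead of a hard-coded IP. On Linux the name has to be mapped in explicitly when the container is created, e.g. (the trailing image name is a placeholder):

docker create -it -p 3006:3006 --name=testing \
  --add-host=host.docker.internal:host-gateway \
  -v $(pwd)/build:/build <image>

with "proxy": "http://host.docker.internal:3000" in package.json. This is only a sketch, not a verified fix for this exact setup.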

Related

Accessing A Web App Running In Docker From Another Machine

I've cloned the following dockerized MEVN app and would like to access it from another PC on the local network.
The box that Docker is running on has an IP of 192.168.0.111, but going to http://192.168.0.111:8080/ from another PC just says it can't be reached. I run other services like Plex and a Minecraft server that can be reached with this IP, so I assume it is a Docker config issue. I am pretty new to Docker.
Here is the Dockerfile for the portal. I made a slight change from the repo, adding -p 8080:8080, because I read elsewhere that it would open it up to LAN access.
FROM node:16.15.0
RUN mkdir -p /usr/src/www && \
    apt-get -y update && \
    npm install -g http-server
COPY . /usr/src/vue
WORKDIR /usr/src/vue
RUN npm install
RUN npm run build
RUN cp -r /usr/src/vue/dist/* /usr/src/www
WORKDIR /usr/src/www
EXPOSE 8080
CMD http-server -p 8080:8080 --log-ip
Don't put -p 8080:8080 in the Dockerfile!
You should first build your Docker image using the docker build command.
docker build -t myapp .
Once you've built the image, and confirmed it exists using docker images, you can run it using the docker run command
docker run -p 8080:8080 myapp
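For consistency, the CMD inside the Dockerfile then only needs the server's own port flag; a sketch of what the last line could look like instead (http-server's -p takes just a port number, while the host-to-container mapping belongs on docker run -p):

CMD ["http-server", "-p", "8080", "--log-ip"]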
Docker listens on the 0.0.0.0 IP address, so other machines on the same network can use your IP address, plus whatever port you published, to reach your website. For example, if you use 8080, you are actually listening on 0.0.0.0:8080, and other machines can reach that website at http://192.168.0.111:8080/ using your IP address. Even without Docker, you can listen on 0.0.0.0 to share your app on the network.
The box that docker is running on
What do you mean by saying "box"? Is it some kind of virtual box, or an actual computer running Linux, Windows, or maybe macOS?
Have you checked that box's firewall? (You may need to set up NAT through the firewall to the service running in the box for incoming requests from outside of it.)
I'll be happy to help you out if you provide more detailed information about your environment...

Deploy and run Docker in AWS

I just finished my React and NodeJS project and created a Dockerfile for each one, then created a docker-compose file to build the Docker image for each one (frontend and backend).
I also pushed my images to my repository on Docker Hub.
What should I do now? I want to run my Docker project on AWS EC2, so I created a new EC2 instance in my AWS dashboard, installed Docker there, and also managed to pull my images from Docker Hub...
But now I'm pretty stuck. Do I need to create one container to run both of them? Do I need to run each one on its own?
Also, I'm using Nginx to use port 80 instead of the default 3000.
I'm really lost now (first time working with Docker and AWS).
Thanks!
EDIT 1
My dockerfile for React is:
# build environment
FROM node:13.12.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm ci --silent
RUN npm install react-scripts@3.4.1 -g --silent
COPY . ./
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
# new
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
My dockerfile for Nodejs is:
FROM node:8
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 5000
CMD [ "npm", "start" ]
and my Nginx config file is:
server {
    listen 80;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
my docker-compose.yml file is:
version: "2"
services:
  frontend:
    image: itaik/test-docker:frontend
    build: ./frontend
    ports:
      - "80:80"
    depends_on:
      - backend
  backend:
    image: itaik/test-docker:backend
    build: ./backend
    ports:
      - "5000:5000"
On my computer (Windows) I downloaded the Docker Desktop app, and using the docker-compose up command it creates containers from both of my images and everything is great. This is what I'm trying to achieve on my AWS EC2 (Linux).
Hope things more clear now.
EDIT 2
OK, so somehow I managed to run both my images in different containers and they are now both online like I wanted.
I use the command:
docker run -d -p 5000:5000 itaik/test-docker:backend
docker run -d -p 80:80 itaik/test-docker:frontend
But now, for some reason all my API calls to "localhost:5000" are getting an error:
GET http://localhost:5000/user/ net::ERR_CONNECTION_REFUSED
Is this related to my React/Nodejs code or to the way I setup my docker images?
Thanks!
EDIT 3
Even after I used docker-compose and both images are running on the same bridge network, I still get that error.
The only solution I can think of is to manually edit my code and change "localhost" to the AWS public IP, but I guess I'm just missing something that needs to be done...
Maybe my network bridge is not visible to the public IP somehow?
Because I do get a response when I go to the public IP on port 80 and on port 5000.
But the API call to localhost:5000 is getting an error.
Probably the shortest path to get this working is to
Merge the frontend with the backend into the same Docker image (because you have to serve your bundle from somewhere, and the backend is the nearest already-prepared place for it)
Make sure hosts and ports are set up and the container works properly on your machine
Push the image to Docker Hub (or e.g. AWS ECR)
Get a machine at AWS EC2 and install Docker there (and, if needed, Docker Compose)
Make sure the security groups for your machine allow incoming traffic to the port your application will serve on (right-click on the instance in the AWS Console); for your example you should open port :80 (see the CLI sketch after this list)
Pull the image and start the container (or two via docker-compose) with the port set up as 80:3000 (let's assume your app serves on :3000 in the container)
On the same AWS EC2 console, click on your instance and you will see the public address of your machine (something like ec2-203-0-113-25.compute-1.amazonaws.com)
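For reference, opening the port can also be done from the AWS CLI; a rough sketch, where the security group ID is a hypothetical placeholder:

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0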
I think those are the most common pitfalls
I would recommend deploying your app as a single image (e.g. Express can easily serve your frontend app after you build it into a static bundle), because it may take some time to make the containers communicate with each other.
But if you still want to serve your app as 2 components - frontend from Nginx, and backend from NodeJS - then you can use docker-compose. Just make sure that the hosts for each component are set up properly (they won't see each other on localhost).
UPD 2
There are actually 2 ways to make the frontend communicate with the backend (I suppose you are probably using axios or something similar for API calls):
1st way - explicitly set http://<host>:<port> as the axios baseURL, so all your API calls become interpolated into http://ec2-203-....amazonaws.com:3000/api/some-data. But you need to know the exact host your backend is serving on. Maybe this is the most explicit and easiest way :)
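A minimal sketch of that 1st way with axios (the hostname below is just the example public address mentioned above, and the port assumes the backend from this question listening on 5000):

// api.js - every API call goes through this instance, so the backend host lives in one place
import axios from 'axios';

const api = axios.create({
  baseURL: 'http://ec2-203-0-113-25.compute-1.amazonaws.com:5000',
});

export default api;

// elsewhere: api.get('/user/') then hits http://ec2-203-0-113-25.compute-1.amazonaws.com:5000/user/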
2nd way - make the browser locate your server. I'm not deeply familiar with how that works under the hood, but at a high level it works like this: the user gets your application as a bundle from yousite.com. The app needs to fetch data from the backend, but in code those calls are "relative" - only /api/some-data is stated.
As there is no backend host set up, the app asks the browser to locate its "home" and, I suppose, location.hostname becomes the host for such calls, so the full URL becomes http://yousite.com/api/some-data.
But, as you can see, if the user makes this call, it will hit Nginx instead of the actual backend, because Nginx is the one serving the frontend.
So the next thing you have to do is proxy such calls from Nginx to NodeJS. And there is another thing - only API calls should be proxied. Other calls should, as usual, return your bundle. So you have to set up the proxy this way:
location /api/ {
    proxy_pass http://backend:5000;
}
Note that the NodeJS app is at backend:5000, not localhost:5000 - this is how docker-compose sets up DNS.
So now Nginx does 2 things:
Serves static content
Reverse-proxies API calls to your backend
As you may have noticed, there are many tricky parts to making your app work in 2 containers. And this is how you can serve static content (from the build directory) with Express, so you won't need to use Nginx:
const express = require('express');
const path = require('path');
const app = express();
app.use('/', express.static(path.join(__dirname, 'build')));
I would recommend trying the 1st way - explicitly setting up host:port - and enabling all the logging you can so you know what's going on. And after you are done with the 1st part, update Nginx to make it work in the proper way.
Good luck!
First of all, to interact with another Docker service running on the same EC2 instance, you don't need to go through the public internet.
Second, your React application cannot access the backend service via http://localhost:5000, because the backend service is not running in the same Docker container.
You should be able to access the backend service through http://backend:5000 from the React container. Please check and get back to me.
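A quick way to check that container-to-container DNS actually works (assuming the compose services are named frontend and backend as in the compose file above) is to run a request from inside the frontend container:

docker-compose exec frontend wget -qO- http://backend:5000/user/

If that returns data while the browser call fails, the Docker network is fine and the problem is the browser trying to resolve localhost.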
Not related to your problem, but a suggestion,
Is your React application public-facing or an internal application? If it's public-facing, you can easily run your React application from Amazon S3 itself, since Amazon S3 supports static website hosting. You could simply copy the React application into an S3 bucket and run it as a static website. This also allows you to set up SSL via CloudFront.
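A rough sketch of that approach with the AWS CLI (the bucket name is a hypothetical placeholder; the bucket has to exist already and its policy must allow public reads):

aws s3 website s3://my-react-app-bucket --index-document index.html --error-document index.html
aws s3 sync build/ s3://my-react-app-bucket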
Hope this helps.
Reference:
how to setup the s3 static website

How to set up a NodeJs Development environment with docker?

I am trying to set up a NodeJS development environment within Docker. I also want hot reloading and source files to be in sync both locally and in the container. Any help is appreciated, thanks.
Here is a good article on hot reloading source files in a docker container for development environments.
source files to be in sync in both local and container
To achieve that, you basically just need to mount your project directory into your container, as the official documentation says. For example:
docker run -v $PWD:/home/node node:alpine node index.js
What it does is:
It will run a container based on the node:alpine image;
The node index.js command will be executed as soon as the container is ready;
The console output will come from the container to your host console, so you can debug things. If you don't want to see the output but want control returned to your console, you can use the -d flag.
And, most valuably, your current directory ($PWD) is fully synchronized with the /home/node/ directory of the container. Any file update will be immediately reflected in the container's files.
I also want hot reloading
It depends on the approach you are using to serve your application.
For example, you could use webpack-dev-server with a hot reload setting. After that, all you need is to map a port to your webpack dev server's port.
docker run \
-v $PWD:/home/node \
-p 8080:8080 \
node:alpine \
webpack-dev-server \
--host 0.0.0.0 \
--port 8080
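For a plain Node backend without webpack, a similar sketch using nodemon can work (this assumes nodemon is in your devDependencies, so it is available in the mounted node_modules, and that the app listens on port 3000; on Mac/Windows you may need nodemon's -L/--legacy-watch flag for file watching across the bind mount):

docker run \
  -v $PWD:/home/node/app \
  -w /home/node/app \
  -p 3000:3000 \
  node:alpine \
  npx nodemon index.js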

NodeJS in Docker doesn't see connection

I have a NodeJS/Vue app that I can run fine until I try to put it in a Docker container.
When I do npm run dev I get the output:
listmymeds#1.0.0 dev /Users/.../projects/myproject
webpack-dev-server --inline --progress --config build/webpack.dev.conf.js
and then it builds many modules before giving me the message:
DONE Compiled successfully in 8119ms
I Your application is running here: http://localhost:8080
then I am able to connect via browser at localhost:8080
Here is my Dockerfile:
FROM node:9.11.2-alpine
RUN mkdir -p /app
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD npm run dev
EXPOSE 8080
I then create a docker image with docker build -t myproject . and see the image listed via docker images
I then run docker run -p 8080:8080 myproject and get a message that my application is running here: localhost:8080
However, when I either use a browser or Postman to GET localhost:8080 there is no response.
Also, when I run the container from the command line, it appears to lock up so I have to close the terminal. Not sure if that is related or not though...
UPDATE:
I tried following the Docker logs with
docker logs --follow
and there is nothing other than the last line saying that my application is running on localhost:8080.
This would seem to indicate that my HTTP requests are never making it into my container, right?
I also tried the suggestion to
CMD node_modules/.bin/webpack-dev-server --host 0.0.0.0
but that failed to even start.
It occurred to me that perhaps there is a Docker network issue, perhaps left over from an earlier attempt at learning the Kong API. So I run docker network ls and see
NETWORK ID      NAME      DRIVER    SCOPE
1f11e97987db    bridge    bridge    local
73e3a7ce36eb    host      host      local
423ab7feaa3c    none      null      local
I have been unable to stop, disconnect or remove any of these networks. I think the 'bridge' might be one Kong created, but it won't let me whack it. There are no other containers running, and I have deleted all images other than the one I am using here.
Answer
It turns out that I had this in my config/index.js:
module.exports = {
  dev: {
    // Various Dev Server settings
    host: 'localhost',
    port: 8080,
Per Joachim Schirrmacher's excellent help, I changed the host from localhost to 0.0.0.0 and that allowed the container to receive the requests from the host.
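For reference, the relevant part of config/index.js after that change looks roughly like this (only the host line changes; the rest of the dev block stays as before):

module.exports = {
  dev: {
    // Various Dev Server settings
    host: '0.0.0.0', // was 'localhost'; 0.0.0.0 also accepts connections from outside the container
    port: 8080,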
With a plain vanilla express.js setup this works as expected. So, it must have something to do with your Vue application.
Try the following steps to find the source of the problem:
Check if the container is started or if it exits immediately (docker ps)
If the container runs, check if the port mapping is set up correctly. It needs to be 0.0.0.0:8080->8080/tcp
Check the logs of the container (docker logs <container_name>)
Connect to the container (docker exec -it <container_name> sh) and check if node_modules exists and contains all required modules
EDIT
Seeing the last change to your question, I recommend starting the container with the -dit options: docker run -dit -p 8080:8080 myproject to make it go to the background, so that you don't need to hard-stop it by closing the terminal.
Make sure that only one container of your image runs by inspecting docker ps.
EDIT2
After discussing the problem in chat, we found that in the Vue.js configuration there was a restriction to 'localhost'. After changing it to '0.0.0.0', connections from the container's host system are accepted as well.
With Docker version 18.03 and above it is also possible to set the host to 'host.docker.internal' to prevent connections other than from the host system.

Azure Linux App Service: Just start one container

I am using this Dockerfile on Azure Linux App Service:
FROM ruby:2.3.3
ENV "GEM_HOME" "/home/gems"
ENV "BUNDLE_PATH" "/home/gems"
EXPOSE 3000
WORKDIR /home/webapp
CMD bundle install && bundle exec rails server puma -b 0.0.0.0 -e production
As you can see, the gems folder is located in the home folder. The home folder is shared with the host system of the App Service. Now my problem: the App Service LogFiles/docker/docker_***_out.log indicates that bundle install is called multiple times (probably from different containers). This leads to some gems never being installed successfully.
Is there some setting which runs just one container, so that my gems can be installed successfully without interfering with each other? Or am I making wrong assumptions here? Maybe the problem isn't that there are multiple containers started?
Is there an easier way to install the gems the first time in the shared folder of the host system?
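One common alternative, sketched here rather than offered as a verified App Service fix, is to install the gems into the image at build time instead of at container start, so every container already has them and nothing is written concurrently to the shared home folder. This assumes a Gemfile and Gemfile.lock at the project root:

FROM ruby:2.3.3
WORKDIR /home/webapp
# install gems while the image is built, not when each container starts
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
EXPOSE 3000
CMD bundle exec rails server puma -b 0.0.0.0 -e production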
