Creating two Docker containers for a React app with a Node back end. How do I adjust the proxy settings so the app doesn't break when it's run natively? - node.js

I have a client/server React + Node.js app. The front end communicates with the API via the proxy setting in package.json:
"proxy": "http://localhost:5000/"
I can get both the client and API to come up by running them in two separate docker containers via docker-compose. This allows an alias to be used in place of localhost:
"proxy": "http://server:5000/"
That fixes Docker, but breaks the app if it is run natively outside of Docker, because it cannot resolve server to localhost (or an IP).
Is there a way for the app to detect whether it is running in Docker and use a different proxy? Or a way for it to fail over to a second proxy if the first one times out?

If you are running the webpack build in your Docker container, you can provide the proxy URL by passing in an environment variable from Docker to webpack using the -e flag:
docker run -e "PROXY_URL=http://server:5000/"
Then you can provide PROXY_URL to react using webpack's DefinePlugin:
plugins: [
  new webpack.DefinePlugin({
    PROXY_URL: JSON.stringify(process.env.PROXY_URL)
  })
]
Then you can just read PROXY_URL as a variable inside your app.
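For example, a minimal sketch of reading the injected value in client code (the fallback URL and the endpoint path here are illustrative assumptions, not from the original question):

// a minimal sketch of reading the injected value in the client code
const baseUrl = typeof PROXY_URL !== 'undefined'
  ? PROXY_URL                  // injected by DefinePlugin when built in Docker
  : 'http://localhost:5000/';  // illustrative fallback for a native (non-Docker) run

fetch(`${baseUrl}api/health`)  // illustrative endpoint
  .then(res => res.json())
  .then(console.log);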

Related

How can I add a response header when running a SPA in Azure Web Apps using PM2

I have a React SPA which runs in Azure Web Apps, Node stack (Node 16 LTS), via the startup command:
pm2 serve /home/site/wwwroot --no-daemon --spa
I would like to add a response header (specifically, a Content-Security-Policy header) for every outgoing request.
Things I have tried:
adding a .htaccess file
adding a web.config file
looking for an environment setting, or a way to configure either pm2 or node
I don't have any node code to change (so I can't add headers there), and doing it in React feels "too late" - I think it's something that node on the server, or Azure needs to do.
My nuclear option is to wrap Front Door around it and do it there, but I'm hoping there is way to achieve this without that.
In App Service on Linux, a React application can be started with PM2, npm start, or a custom command.
The container automatically starts your app with PM2 when one of the common Node.js files is found in your project.
Also note that starting from Node 14 LTS, the container doesn't automatically start your app with PM2. To start your app with PM2, set the startup command to pm2 start <.js-file-or-PM2-file> --no-daemon.
Be sure to use the --no-daemon argument because PM2 needs to run in the foreground for the container to work properly.
Reference: https://learn.microsoft.com/en-us/azure/app-service/configure-language-nodejs?pivots=platform-linux
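If you go the pm2 start <PM2-file> route, a minimal sketch of such a process file, assuming the bundle lives in /home/site/wwwroot as in the question, could look like this (the PM2_SERVE_* variables are PM2's static-serve options); note this only reproduces the pm2 serve --spa behaviour and does not by itself add a Content-Security-Policy header:

// ecosystem.config.js - a minimal sketch, assuming the bundle is in /home/site/wwwroot
module.exports = {
  apps: [
    {
      name: 'spa',
      script: 'serve',                // PM2's built-in static file server
      env: {
        PM2_SERVE_PATH: '/home/site/wwwroot',
        PM2_SERVE_SPA: 'true',        // fall back to index.html for client-side routes
      },
    },
  ],
};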

Can app in docker container proxy to host machine

I've got a Docker container running an app that needs to proxy requests from itself to my host machine, which runs an instance of a theme that shares data between the theme and the app, so the app can be used across multiple themes with the relevant theme data.
Anyway, I've got an image built using the below Dockerfile
FROM node:12-alpine
EXPOSE 3006
RUN apk update && apk upgrade
RUN apk add \
    bash \
    git \
    openssh
RUN npm install -g -y lerna
Then I create a container using
docker create -it -p 3006:3006 --name=testing -v $(pwd)/build:/build
The container is then started and the app is started using
PORT=3006 react-scripts-ts start screen
Which exposes the app through the container on port 3006. To proxy requests to itself, the package.json file is set as such
{
  "proxy": "http://192.168.1.101:3000"
  // Rest of file
}
This setup works, but the project is worked on by multiple developers. My concern is that when this is run on a different setup, or the host machine's local IP isn't 192.168.1.101, it will fail.
I've tried setting it to the below
{
  "proxy": "http://localhost:3000"
  // Rest of file
}
but this fails, presumably because it proxies the requests to the container's own localhost, not the host machine. So, in simplistic terms, my question is: can I tell a Docker container to proxy/reroute a request to localhost (or localhost:3000) to my host machine's localhost?

Deploy and run Docker in AWS

I just finished my React and NodeJS project, created a Dockerfile for each one, and then created a docker-compose file to build a Docker image for each (frontend and backend).
I also pushed my images to my repository on Docker Hub.
What should I do now? I want to run my Docker project on AWS EC2, so I created a new EC2 instance from my AWS dashboard, installed Docker there, and managed to download my images from Docker Hub...
But now I'm pretty stuck: do I need to create one container to run both of them, or do I need to run each one on its own?
Also, I'm using Nginx to use port 80 instead of the default 3000.
I'm really lost now (first time working with Docker and AWS).
Thanks!
EDIT 1
My dockerfile for React is:
# build environment
FROM node:13.12.0-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm ci --silent
RUN npm install react-scripts@3.4.1 -g --silent
COPY . ./
RUN npm run build
# production environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html
# new
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
My dockerfile for Nodejs is:
FROM node:8
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 5000
CMD [ "npm", "start" ]
and my Nginx config file is:
server {
  listen 80;

  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri $uri/ /index.html;
  }

  error_page 500 502 503 504 /50x.html;

  location = /50x.html {
    root /usr/share/nginx/html;
  }
}
my docker-compose.yml file is:
version: "2"
services:
frontend:
image: itaik/test-docker:frontend
build: ./frontend
ports:
- "80:80"
depends_on:
- backend
backend:
image: itaik/test-docker:backend
build: ./backend
ports:
- "5000:5000"
On my computer (Windows) I downloaded the Docker Desktop app, and using the docker compose up command it creates a container with both of my images and everything is great. This is what I'm trying to achieve on my AWS EC2 (Linux).
Hope things are clearer now.
EDIT 2
OK, so somehow I managed to run both my images in different containers, and they are now both online like I wanted.
I used the commands:
docker run -d -p 5000:5000 itaik/test-docker:backend
docker run -d -p 80:80 itaik/test-docker:frontend
But now, for some reason all my API calls to "localhost:5000" are getting an error:
GET http://localhost:5000/user/ net::ERR_CONNECTION_REFUSED
Is this related to my React/NodeJS code or to the way I set up my Docker images?
Thanks!
EDIT 3
Even after I use docker-compose and both images are running on the same network bridge, I still get that error.
The only solution I can think of is to manually edit my code and change "localhost" to the AWS public IP, but I guess I'm just missing something that needs to be done...
Maybe my network bridge is not visible to the public IP somehow?
I do get a response when I browse to the public IP (port 80) and to port 5000,
but the API call to localhost:5000 gets an error.
Probably the shortest path to get this working is to:
Merge the frontend with the backend into the same Docker image (because you have to serve your bundle from somewhere, and the backend is the nearest already-prepared place for it)
Make sure hosts and ports are set up and the container works properly on your machine
Push the image to Docker Hub (or e.g. AWS ECR)
Get a machine at AWS EC2 and install Docker there (and, if needed, Docker Compose)
Make sure the security groups for your machine allow incoming traffic to the port your application will serve on (right-click on the instance in the AWS Console); for your example you should open port 80
Pull the image and start the container (or 2 via docker-compose) with the port mapping set up as 80:3000 (let's assume your app serves on :3000 inside the container)
On the same AWS EC2 console, click on your instance and you will see the public address of your machine (something like ec2-203-0-113-25.compute-1.amazonaws.com)
I think those are the most common pitfalls.
I would recommend deploying your app as a single image (e.g. Express can easily serve your frontend app after you have built it into a static bundle), because it may take some time to make the containers communicate with each other.
But if you still want to serve your app with 2 components - the frontend from Nginx and the backend from NodeJS - then you can use docker-compose. Just make sure that the hosts for each component are set up properly (they won't see each other on localhost).
UPD 2
There are actually 2 ways to make the frontend communicate with the backend (I suppose you are probably using axios or something similar for API calls):
The 1st way is to explicitly set http://<host>:<port> as the axios baseURL, so all your API calls become interpolated into http://ec2-203-....amazonaws.com:3000/api/some-data. But you need to know the exact host your backend is serving on. Maybe this is the most explicit and easiest way :) (a minimal sketch follows below)
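A minimal sketch of this 1st way, assuming axios and a build-time variable named REACT_APP_API_URL (the variable name and host are illustrative, not from the original setup; the /user/ endpoint is taken from EDIT 2):

// api.js - a minimal sketch of the 1st way
import axios from 'axios';

const api = axios.create({
  // e.g. REACT_APP_API_URL=http://ec2-203-0-113-25.compute-1.amazonaws.com:5000
  baseURL: process.env.REACT_APP_API_URL || 'http://localhost:5000',
});

export const getUsers = () => api.get('/user/');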
The 2nd way is to make the browser locate your server. I'm not deeply familiar with how that works under the hood, but at a high level it works like this: the user gets your application as a bundle from yoursite.com, and the app needs to fetch data from the backend, but in the code those calls are "relative" - only /api/some-data is stated.
As there is no backend host set up, the app asks the browser to locate its "home" and, I suppose, location.hostname becomes the host for such calls, so the full URL becomes http://yoursite.com/api/some-data.
But, as you can see, if the user makes this call, it will hit Nginx instead of the actual backend, because Nginx is the one serving the frontend.
So the next thing you have to do is proxy such calls from Nginx to NodeJS. And there is another thing - only API calls should be proxied. Other calls should, as usual, return your bundle. So you have to set up the proxy like this:
location /api/ {
  proxy_pass http://backend:5000;
}
Note that the NodeJS app is at backend:5000, not localhost:5000 - this is how docker-compose sets up DNS.
So now Nginx does 2 things (a combined config is sketched below):
Serves static content
Reverse-proxies API calls to your backend
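A combined Nginx config, merging the static-file server block from the question with the /api/ proxy location above, could look roughly like this (the /api/ prefix and the backend service name are assumptions carried over from the snippets above):

server {
  listen 80;

  # serve the static React bundle
  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    try_files $uri $uri/ /index.html;
  }

  # reverse-proxy API calls to the NodeJS container (docker-compose service "backend")
  location /api/ {
    proxy_pass http://backend:5000;
  }
}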
As you may have noticed, there are many tricky parts to making your app work in 2 containers. And this is how you can serve static content (from the build directory) with Express, so you won't need Nginx at all:
const express = require('express');
const path = require('path');
const app = express();
app.use('/', express.static(path.join(__dirname, 'build')));
I would recommend trying the 1st way - explicitly setting host:port - and enabling all the logging you can, so you know what's going on. And after you are done with the 1st part, update Nginx to make it work the proper way.
Good luck!
First of all, to interact with another Docker service running on the same EC2 instance, you don't need to go through the public internet.
Second, your React application cannot access the backend service via http://localhost:5000, because the backend service is not running in the same Docker instance.
You should be able to access the backend service through http://backend:5000 from the React Docker container. Please check and get back to me.
Not related to your problem, but a suggestion:
Is your React application public-facing or an internal application? If it's public-facing, you can easily run your React application from Amazon S3 itself, since S3 supports static website hosting. You could simply copy the React application into an S3 bucket and run it as a static website. This also allows you to set up SSL via CloudFront.
Hope this helps.
Reference:
how to setup the s3 static website

Local IP and external IP setup for React

I am connecting to Node from React. Every time I am outside of my home network, I need to change the local IP config to the external IP config. Is there any way that I can set two IP addresses (one for the local IP address and another for the external IP address)?
I believe you mean that when you are outside, you need to change the IP in your React app to access your Node server. If I am right, then you can move forward.
You can make use of an environment variable via process.env.
In your package.json, under scripts, add something like this:
"office": "IP=\"THE_IP\" npm start". Then from your app, you can access the IP value like this: process.env.IP.
Now, when you run your app from outside, you would run this command: npm run office.
If you are using Windows: set "IP=abcdef" && npm start.
Hope it makes sense.
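On the client side, a minimal sketch of reading that value could look like this (note that with create-react-app only variables prefixed with REACT_APP_ are exposed to the browser bundle, so the variable may need renaming; the port and endpoint below are illustrative):

// a minimal sketch - assumes the IP variable from the npm script above reaches the bundle
const API_HOST = process.env.IP || 'localhost';
const API_URL = `http://${API_HOST}:5000`;   // port is illustrative

fetch(`${API_URL}/api/data`)                 // illustrative endpoint
  .then(res => res.json())
  .then(console.log);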
For a Node.js application:
You can add an ecosystem.config.js file to your project and define the development and production URLs and other variables there. Then you can start your server via pm2.
For development you can use the command:
pm2 start
For production you can use the command:
pm2 start --env production
For a detailed explanation you can follow this link: https://pm2.keymetrics.io/docs/usage/environment/
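A minimal sketch of such an ecosystem.config.js, with illustrative local and external URLs (the env_production block is what pm2 start --env production picks up):

// ecosystem.config.js - a minimal sketch with illustrative values
module.exports = {
  apps: [
    {
      name: 'api',
      script: './index.js',                    // illustrative entry point
      env: {
        NODE_ENV: 'development',
        API_URL: 'http://192.168.1.10:5000',   // local IP while on the home network
      },
      env_production: {
        NODE_ENV: 'production',
        API_URL: 'http://203.0.113.25:5000',   // external IP
      },
    },
  ],
};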
And for a React application:
You can simply make a file named .env and define your URL and other environment variables there.

What's the best way to debug a NodeJs app running on a docker container in a remote host?

I have a NodeJS app running in a Docker container on a remote server. I can access the app in the browser. I'm also able to deploy my app using PhpStorm and its remote server connection.
However, I tried to use PhpStorm's remote NodeJS debug tool and it doesn't work. I always get connection refused.
I know the debug port is open, because I check the Docker containers and 5858 is open. This port is also opened on the host, and it is the port I set for debugging.
package.json:
"scripts": {
"start": "nodemon --debug=5858 index.js myApp"
}
I don't know if PhpStorm is the best solution to debug this kind of app. So if someone has a better idea please let me know.
Thanks!
After further searching I found this great repository:
https://github.com/seelio/node-inspector-docker
It seems to me the easiest way to get the app running and debug it.
Definitely node-inspector.
I had to do the same for an app with microservices and clusters/workers.
Just in case you need it: clustered apps with node-inspector
You can use IntelliJ IDEA as your IDE.
It supports running the app directly from Docker and allows you to debug apps easily.
Once configured with your Docker image, it's done.
Next time just click Run and it will quickly start Node.js inside your Docker container and show logs etc., just like we do with a local Node instance.
https://www.jetbrains.com/help/idea/2016.3/running-and-debugging-node-js.html#node_docker_run_debug
Its EAP and Community editions are always free.
