Is it possible to run nodejs server and react server on single port? [duplicate] - node.js

This question already has answers here:
Can two applications listen to the same port?
(17 answers)
Closed 3 months ago.
I want to run both the React client and the Node server with a single command, because I don't want to start them separately. When I run npm start, both should start, since I want to run the project with Electron.
I tried running npm start client.js server.js but it didn't work.

You can't run two servers on the same port and the same IP; you should run each one on a different port. React's dev server has a default port (3000), so you should set your Node.js server to another one, like 8000.

You cannot do that; you'll need two different port numbers, e.g. 3000 for the server and 3001 for the client...
Otherwise, you can use Docker so each app listens on the same port inside its container, mapped to different ports on your host (note the -p flag goes before the image name):
docker run -p 3000:3000 CLIENT_IMAGE
docker run -p 3001:3000 SERVER_IMAGE
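One common way to get the single-command startup the question asks for is the concurrently package. A sketch of the package.json scripts (the script names and server.js path are assumptions, not from the original answers):

```json
{
  "scripts": {
    "client": "react-scripts start",
    "server": "node server.js",
    "start": "concurrently \"npm run server\" \"npm run client\""
  }
}
```

With this, npm start launches both processes in one terminal, which also works as the startup hook for an Electron project. The two apps still listen on their own ports.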

Related

Is this dockerfile set correctly to serve a React.js build on port 5000?

I have a React.js app which I dockerized, and it was working fine until yesterday, when it started failing with an error that I found out is caused by Node version 17. So I decided to take the Docker image back to Node 16. All good, but since that change, I cannot get the Docker image to run on the specified port.
Here is my dockerfile:
FROM node:16.10-alpine as build
RUN mkdir /app
WORKDIR /app
COPY /front-end/dashboard/package.json /app
RUN npm install
COPY ./front-end/dashboard /app
RUN npm run build
# Install `serve` to run the application.
RUN npm install -g serve
# Set the command to start the node server.
CMD serve -s -n build
# Tell Docker about the port we'll run on.
EXPOSE 5000
As you can see, I make a build which I then serve on port 5000, but for some reason that no longer works, even though it used to.
All I can see as an output in docker is:
Serving!
- Local:           http://localhost:3000
- On Your Network: http://172.17.0.2:3000
When I go to localhost:3000 nothing happens which is fine but it should be working on port 5000 and it does not run there. Any idea why I cannot run the docker image's build on port 5000 as I used to do before?
I use docker run -p 5000:5000 to run it on port 5000 but this does not solve the problem.
I faced issues at work due to this exact same scenario. After a few hours of digging through our company's deployment pipeline, I discovered the culprit...
The serve package.
They changed the default port from 5000 to 3000.
Source: serve GitHub releases
So, to fix your issue, I recommend adding -l 5000 to your serve command.
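With that flag, the tail of the Dockerfile above would become (a sketch; only the CMD line actually changes):

```dockerfile
# Install `serve` to run the application.
RUN npm install -g serve
# Start the server, pinning serve to port 5000
# (newer versions of serve default to 3000).
CMD serve -s -n -l 5000 build
# Tell Docker about the port we'll run on.
EXPOSE 5000
```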
From the logs you can see that your application is listening on localhost:3000. Your EXPOSE 5000 line does not change that behaviour; it only tells Docker (and other users) that port 5000 is supposed to be important. Since nothing is actually listening on port 5000, you get a 'connection refused'. You may want to look up https://docs.docker.com/engine/reference/builder/#expose
To get out of that situation:
ensure your dockerized process is listening on 0.0.0.0:5000. You will have to add -l tcp://0.0.0.0:5000 to your CMD line (see https://github.com/vercel/serve/blob/main/bin/serve.js#L117)
When running the container, ensure you expose the port by using docker run -p 5000:5000 ...
If need be, tell your Docker host's firewall to allow traffic to <ip>:5000
Now if you connect to http://<ip>:5000 you should see the application's response.
Your app is listening on port 3000. So you need to map whatever port you want to use on the host to port 3000. If you want to use port 5000, you should use -p 5000:3000. Then you should be able to access it using localhost:5000 on the host machine.
You should think of containers as separate machines from the host. So when the container says that it's listening on localhost:3000, that means localhost in the context of the container. Not the host.

Starting a simple HTTP-server using "npm" without installing npm

Which command do I need to use to start a simple HTTP server using "npm"? The port to be used is 8080. Also, I shouldn't have to download the npm package. The only hint I got is that I can download a basic web server package, and from there I can specify port 8080.
Once an HTTP server package has been installed under the name x-server, you can start it on port xxxx with the command:
x-server -p xxxx
For example, if my server's name is simple-http-server and the port is 8080:
simple-http-server -p 8080
would start it.
For the next exercise:
php -S 127.0.0.1:8080
Once npm is installed, the command is simply: http-server -p 8080 (without writing 'npm' at the beginning of the line).

ERR_NAME_NOT_RESOLVED using Docker network to communicate between a backend container and a React container

I have a backend app (Node.js) which is listening on some port (2345 in this example).
The client application is a React app. Both are leveraging socket.io in order to communicate between themselves. Both are containerized.
I then run them both on the same network, using the "network" flag:
docker run -it --network=test1 --name serverapp server-app
docker run -it --network=test1 client-app
In the client, I have this code (in the server the code is pretty standard, almost copy-paste from socket.io page):
const socket = io.connect('http://serverapp:2345');
socket.emit('getData');
socket.on('data', function(data) {
console.log(`got data: ${data}`);
})
What I suspect is that the problem has to do with having the client (React) app served by the http-server package, and then in the browser context, the hostname is not understood and therefore cannot be resolved. When I go into the browser console, I then see the following error: GET http://tbserver:2345/socket.io/?EIO=3&transport=polling&t=MzyGQLT net::ERR_NAME_NOT_RESOLVED
Now if I switch (in the client-app) the hostname serverapp to localhost (which the browser understands but is not recommended to use in docker as it is interpreted differently), when trying to connect to the server socket, I get the error: GET http://localhost:2345/socket.io/?EIO=3&transport=polling&t=MzyFyAe net::ERR_CONNECTION_REFUSED.
Another piece of information is that we currently build the React app (using npm run build), and then we build and run the Docker container using the following Dockerfile:
FROM mhart/alpine-node
RUN npm install -g http-server
WORKDIR /app
COPY /app/build/. /app/.
EXPOSE 2974 2326 1337 2324 7000 8769 8000 2345
CMD ["http-server", "-p", "8000"]
(So no build of the React app takes place while building the container; we rather rely on a prebuilt one.)
Not sure what I am missing here and if it has to do with the http-server or not, but prior to that I managed to get a fully working socket.io connection between 2 NodeJS applications using the same Docker network, so it shouldn't be a code issue.
Your browser needs to know how to resolve the SocketIO server address. You can expose port 2345 of serverapp and bind port 2345 of your host to port 2345 of serverapp
docker run -it --network=test1 -p 2345:2345 --name serverapp server-app
That way you can use localhost in your client code
const socket = io.connect('http://localhost:2345');
Also, remove 2345 from the EXPOSE list in your client Dockerfile; it belongs to the server.
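One way to avoid hard-coding either hostname in the client is to derive the server URL from the page the browser loaded. A sketch (buildServerUrl is a hypothetical helper, not part of socket.io):

```javascript
// Hypothetical helper: build the Socket.IO server URL from the hostname the
// browser itself used to load the page. Name resolution then happens in the
// browser's context, not against Docker's internal DNS names.
function buildServerUrl(hostname, port) {
  return `http://${hostname}:${port}`;
}

// In the browser, the call site would look like:
//   const socket = io.connect(buildServerUrl(window.location.hostname, 2345));
```

This keeps localhost working in local development and automatically uses the server's public hostname when the app is deployed, as long as port 2345 is published on that host.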

docker container cannot connect to localhost mongodb [duplicate]

This question already has answers here:
Connect to host mongodb from docker container
(4 answers)
Closed 4 years ago.
A. I have a container that includes the following
1. NodeJS version 8.11.4
2. Rocketchat meteor app
B. This is my Dockerfile
FROM node:8.11.4
ADD . /app
RUN npm install -g node-gyp
RUN set -x \
&& cd /app/programs/server/ \
&& npm install \
&& npm cache clear --force
WORKDIR /app/
ENV PORT=3000 \
ROOT_URL=http://localhost:3000
EXPOSE 3000
CMD ["node", "main.js"]
C. This command is executed well
docker build -t memo:1.0 .
When I try to run the container, it encounters the following error in containers log
{"log":"MongoNetworkError: failed to connect to server [localhost:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017]\n","stream":"stderr","time":"2019-01-24T21:56:42.222722362Z"}
So the container cannot run.
The mongodb is running and I've added 0.0.0.0 to bindIp in the mongod.conf file.
# network interfaces
net:
port: 27017
bindIp: 127.0.0.1,0.0.0.0 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
My mongodb is installed in host(outside the container)
The problem was not resolved and my container status is Exited.
I put the host's IP instead of localhost, but it encounters the following error:
{"log":"MongoNetworkError: failed to connect to server [192.168.0.198:27017] on first connect [MongoNetworkError: connect EHOSTUNREACH
The problem here is that you're starting a docker container (a self contained environment) and then trying to reach localhost:27017. However, localhost inside your container is not the same localhost as outside your container (on your host). There are two approaches you could take from this point:
Instead of attempting to connect to localhost:27017, connect to your host's ip (something like 192.x.x.x or 10.x.x.x)
(Better option imo) dockerize your mongodb, then your services will be able to communicate with each other using docker dns. To do this, you would create a docker-compose.yml with one service being your app and the other being mongodb.
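A minimal docker-compose.yml for that second option might look like this (a sketch; the service names and the mongo version are assumptions, and memo:1.0 is the image built above):

```yaml
version: "3"
services:
  app:
    image: memo:1.0
    ports:
      - "3000:3000"
    environment:
      # "mongo" resolves via Docker's internal DNS to the service below,
      # instead of localhost, which points at the app container itself.
      MONGO_URL: mongodb://mongo:27017/rocketchat
    depends_on:
      - mongo
  mongo:
    image: mongo:4.0
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data:
```

With this, docker-compose up starts both containers on a shared network and the connection-refused error goes away, because the database is reachable by service name.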
When you try with MONGO_URL=mongodb://192.168.0.198:27017/rocketchat add that IP to bindIp as well.
With localhost MONGO_URL=mongodb://127.0.0.1:27017/rocketchat
I also recommend enabling security authorization in mongo config. Then set a user and password for your database.
Keep in mind that any change to config file requires a mongo restart

How to access several ports of a Docker container inside the same container?

I am trying to put an application that listens to several ports inside a Docker image.
At the moment, I have one docker image with a Nginx server with the front-end and a Python app: the Nginx runs on the port 27019 and the app runs on 5984.
The index.html file makes requests to localhost:5984, but those seem to resolve outside the container (to the localhost of my computer).
The only way I can make it work at the moment is by using the -p option twice in the docker run:
docker run -p 27019:27019 -p 5984:5984 app-test.
Doing so, I generate two localhost ports on my computer. If I don't put the -p 5984:5984 it doesn't work.
I plan on using more ports for the application, so I'd like to avoid adding -p xxx:xxx for each new port.
How can I make an application inside the container (in this case the index.html at 27019) talk to another port inside the same container, without having to publish both of them? Can it be generalized to more than two ports? The final objective would be to have a complete application running on a single port on a server/computer, while listening to several ports inside Docker container(s).
If you want to expose two virtual hosts on the same outgoing port then you need a proxy, for example https://github.com/jwilder/nginx-proxy .
It's not good practice to put a lot of applications into one container; normally you should split them, one container per app. That's the way Docker is meant to be used.
But if you absolutely want to run many apps in one container, you can use a proxy, or write a Dockerfile that opens the ports itself.
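As a sketch of the proxy approach (the path prefixes are assumptions; the upstream ports come from the question), an nginx server block could expose both internal services on one published port:

```nginx
server {
    listen 80;

    # Front-end served by the internal Nginx on 27019.
    location / {
        proxy_pass http://127.0.0.1:27019;
    }

    # Python app on 5984, reachable under a path prefix instead
    # of a second published port.
    location /api/ {
        proxy_pass http://127.0.0.1:5984/;
    }
}
```

Then only one -p 80:80 mapping is needed, and index.html would request /api/... instead of localhost:5984, so adding more internal ports just means adding more location blocks.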
