ERR_NAME_NOT_RESOLVED using Docker network to communicate between a backend container and a React container - node.js

I have a backend app (Node.js) which is listening on some port (2345 in this example).
The client application is a React app. Both use socket.io to communicate with each other, and both are containerized.
I then run them both on the same network, using the "network" flag:
docker run -it --network=test1 --name serverapp server-app
docker run -it --network=test1 client-app
In the client, I have this code (in the server the code is pretty standard, almost copy-paste from socket.io page):
const socket = io.connect('http://serverapp:2345');
socket.emit('getData');
socket.on('data', function(data) {
  console.log(`got data: ${data}`);
});
What I suspect is that the problem has to do with the client (React) app being served by the http-server package: the code then runs in the browser context, where the Docker hostname is not known and therefore cannot be resolved. When I go into the browser console, I see the following error: GET http://tbserver:2345/socket.io/?EIO=3&transport=polling&t=MzyGQLT net::ERR_NAME_NOT_RESOLVED
Now if I switch (in the client-app) the hostname serverapp to localhost (which the browser understands but is not recommended to use in docker as it is interpreted differently), when trying to connect to the server socket, I get the error: GET http://localhost:2345/socket.io/?EIO=3&transport=polling&t=MzyFyAe net::ERR_CONNECTION_REFUSED.
Another piece of information is that we currently build the React app (using npm run build), and then we build and run the Docker container using the following Dockerfile:
FROM mhart/alpine-node
RUN npm install -g http-server
WORKDIR /app
COPY /app/build/. /app/.
EXPOSE 2974 2326 1337 2324 7000 8769 8000 2345
CMD ["http-server", "-p", "8000"]`
(So, no build of the React app takes place while building the container; we rather rely on a prebuilt one.)
Not sure what I am missing here and if it has to do with the http-server or not, but prior to that I managed to get a fully working socket.io connection between 2 NodeJS applications using the same Docker network, so it shouldn't be a code issue.

Your browser needs to know how to resolve the Socket.IO server address. The React code runs in the browser on your host machine, not inside the Docker network, so the Docker hostname serverapp means nothing to it. You can expose port 2345 of serverapp and bind port 2345 of your host to it:
docker run -it --network=test1 -p 2345:2345 --name serverapp server-app
That way you can use localhost in your client code:
const socket = io.connect('http://localhost:2345');
Also, remove 2345 from the EXPOSE line of your client Dockerfile; the client container does not listen on that port.
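If you would rather not hard-code the address, one option (a sketch of my own, not from the question) is to read the server URL from a build-time environment variable. REACT_APP_SERVER_URL is a hypothetical name and assumes the app is built with Create React App (the question does not say), which inlines such variables when npm run build runs:
// hypothetical env var; falls back to the published host port for local use
const serverUrl = process.env.REACT_APP_SERVER_URL || 'http://localhost:2345';
const socket = io.connect(serverUrl);
socket.emit('getData');
socket.on('data', function(data) {
  console.log(`got data: ${data}`);
});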

Related

Is this dockerfile set correctly to serve a React.js build on port 5000?

I have a React.js app which I have dockerized, and it was working fine until yesterday, when an error appeared that turned out to be caused by Node version 17, so I decided to move the Docker image back to Node 16. All good, but since I did this, I cannot get the Docker image to run on the specified port.
Here is my dockerfile:
FROM node:16.10-alpine as build
RUN mkdir /app
WORKDIR /app
COPY /front-end/dashboard/package.json /app
RUN npm install
COPY ./front-end/dashboard /app
RUN npm run build
# Install `serve` to run the application.
RUN npm install -g serve
# Set the command to start the node server.
CMD serve -s -n build
# Tell Docker about the port we'll run on.
EXPOSE 5000
As you can see, I am making a build which I then serve on port 5000 but for some reason that does not work anymore and it used to work fine.
All I can see as an output in docker is:
Serving!

- Local:            http://localhost:3000
- On Your Network:  http://172.17.0.2:3000
When I go to localhost:3000 nothing happens, which is fine, but it should be working on port 5000 and it does not run there. Any idea why I cannot run the Docker image's build on port 5000 as I used to before?
I use docker run -p 5000:5000 to run it on port 5000 but this does not solve the problem.
I faced issues at work due to this exact same scenario. After a few hours of looking through our company's deployment pipeline, I discovered the culprit...
The serve package.
They changed the default port from 5000 to 3000.
Source: serve github releases
So, to fix your issue, I recommend adding -l 5000 to your serve command.
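With that flag, the CMD line from the Dockerfile above becomes (only this line changes):
CMD serve -s -n -l 5000 build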
From the logs you can see that your application is listening for traffic on localhost:3000. Your EXPOSE 5000 line does not change that behaviour; it only makes Docker (and other users) think port 5000 is important. Since nothing on the host is listening on port 3000 (your mapping is 5000:5000) and nothing inside the container listens on port 5000, you get a 'connection refused' either way. You may want to look up https://docs.docker.com/engine/reference/builder/#expose
To get out of that situation:
Ensure your dockerized process listens on 0.0.0.0:5000. You will have to add -l tcp://0.0.0.0:5000 to your CMD line (see https://github.com/vercel/serve/blob/main/bin/serve.js#L117).
When running the container, ensure you publish the port by using docker run -p 5000:5000 ...
If need be, tell your Docker host's firewall to allow traffic to <ip>:5000.
Now if you connect to http://<ip>:5000 you should see the application's response.
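Once the container process listens on 0.0.0.0:5000, a quick way to verify from the host (image name is a placeholder):
docker build -t dashboard .
docker run -p 5000:5000 dashboard
# in another terminal on the host:
curl http://localhost:5000/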
Your app is listening on port 3000. So you need to map whatever port you want to use on the host to port 3000. If you want to use port 5000, you should use -p 5000:3000. Then you should be able to access it using localhost:5000 on the host machine.
You should think of containers as separate machines from the host. So when the container says that it's listening on localhost:3000, that means localhost in the context of the container. Not the host.
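For example (image name is a placeholder):
docker run -p 5000:3000 my-react-app
# the app still logs that it listens on 3000 inside the container,
# but from the host it is reachable at:
curl http://localhost:5000/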

Failed to connect to duckling http server. Make sure the duckling server is running and the proper host and port are set in the configuration

I have created a workspace on Slack and registered an app there, from which I get the necessary things like the Slack token and channel to put into Rasa's credentials.yml file. After getting all the credentials, I tried to connect the Rasa bot to Slack using the command:
rasa run
and my credentials.yml contains:
slack:
  slack_token: "xoxb-****************************************"
  slack_channel: "#ghale"
Here I have used ngrok to expose the web server running on my local machine to the internet, but I am getting this error:
rasa.nlu.extractors.duckling_http_extractor - Failed to connect to duckling http server. Make sure the duckling server is running and the proper host and port are set in the configuration. More information on how to run the server can be found on github: https://github.com/facebook/duckling#quickstart Error: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /parse (Caused by NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it',))
Are you using Duckling? Duckling is a rule-based component for extracting entities (docs).
If you are not using it, you can remove it from your NLU pipeline.
If you want to use it, the easiest way to do so is using docker:
docker run -p 8000:8000 rasa/duckling
The command above will run duckling and expose it on port 8000 of your host.
Just to add to Tobias's answer:
If you are running some other service on port 8000, then you can bind any other host port to the container's port and specify that in your pipeline config.
Example:
docker run -p <desired port, e.g. 1000>:8000 rasa/duckling
Change config file to reflect that. Your pipeline should include
- name: DucklingHTTPExtractor
  # https://rasa.com/docs/rasa/nlu/components/#ducklinghttpextractor
  url: http://localhost:<desired port, e.g. 1000>
Retrain your model with the changed config.
After training, simply run: rasa run
If you are doing numerical extraction (e.g. "six" to 6), I presume this will be helpful for form filling in Rasa: you need to install Docker and then expose Duckling on port 8000.
First, install docker (assumes Fedora but you can lookup other distros)
sudo dnf install docker
Second, activate docker
sudo systemctl start docker
Last, activate Rasa's docker
docker run -p 8000:8000 rasa/duckling
Your response should be:
Status: Downloaded newer image for docker.io/rasa/duckling:latest
Listening on http://0.0.0.0:8000
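Once it is running, you can sanity-check the Duckling server with the example request from its quickstart:
curl -XPOST http://0.0.0.0:8000/parse --data 'locale=en_GB&text=tomorrow at eight'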

Can not access Nodejs app which deployed on container of Bluemix

I create an image from the cf ic command line, expose port 4000, then install Node.js and npm via apt-get and deploy a Node.js app on the image listening on port 4000.
I then create a container from this image, assign a public IP to it, and run it.
But I found that I cannot access the Node.js app at http://public_ip:4000.
When I log into the container with cf ic exec -it container_id bash, I find that the Node.js app is running and I can access it with curl -GET http://localhost:4000/
The error message is: net::ERR_CONNECTION_TIMED_OUT
Q: How can I access my Node.js app from outside the container?
Port 4000 is not open in the IBM Containers firewall.
There is a limited set of ports exposed, so I suggest you try a different port like 3000 or 5000. The complete list is not published for security reasons.
Alternatively you can create a Container Group with a single container. In that case you can define a route (domain) for your container that will automatically route all requests internally to your container port (like 4000).
You can find more details about creating container groups in Bluemix documentation:
https://console.ng.bluemix.net/docs/containers/container_creating_ov.html#container_group_ov
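If you go the different-port route, it also helps not to hard-code 4000 in the app. A minimal sketch using plain Node's http module (PORT is just a conventional variable name, not something from the question):
const http = require('http');

// read the port from the environment so it can be switched (e.g. to 3000)
// without rebuilding the image
const port = process.env.PORT || 4000;

http.createServer(function (req, res) {
  res.end('ok');
}).listen(port, '0.0.0.0', function () {
  console.log(`listening on port ${port}`);
});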

docker connection refused nodejs app

I launch a Docker container:
docker run --name node-arasaac -p 3000:3000 juanda/arasaac
And my node.js app works ok.
If I want to change the host port:
docker run --name node-arasaac -p 8080:3000 juanda/arasaac
The web page does not load; the browser console logs:
Failed to load resource: net::ERR_CONNECTION_REFUSED
http://localhost:3000/app.318b21e9156114a4d93f.js Failed to load resource: net::ERR_CONNECTION_REFUSED
Do I need to use the same port on both the host and the container? It knows how to resolve http://localhost:8080, so it loads my website, but internal links in the webpage go to port 3000, which is not as good :-(
When you are running your node.js app in a docker container it will only expose the ports externally that you designate with your -p (lowercase) command. The first instance with, "-p 3000:3000", maps the host port 3000 to port 3000 being exposed from within your docker container. This provides a 1 to 1 mapping, so any client that is trying to connect to your node.js service can do so through the HOST port of 3000.
When you do "-p 8080:3000", docker maps the host port of 8080 to the node.js container port of 3000. This means any client making calls to your node.js app through the host (meaning not within the same container as your node.js app or not from a linked or networked docker container) will have to do so through the HOST port of 8080.
So if you have external services that expect to reach your node.js app at host port 3000, they won't be able to.
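As for the question's "internal links go to port 3000" symptom: that usually means the page references its assets with absolute URLs that bake in the container port. A hedged sketch of the usual fix, assuming the bundle is built with webpack (the question does not show its build config): use a root-relative public path so assets are fetched from whatever host port served the page.
// webpack.config.js (sketch)
module.exports = {
  // ...existing entry, module and plugin options...
  output: {
    // emit asset URLs like /app.<hash>.js instead of http://localhost:3000/app.<hash>.js
    publicPath: '/',
  },
};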

Docker node.js app not listening on port

I'm working on a node.js web application and use localhost:8080 to test it by sending requests from Postman. Whenever I run the application (npm start) without using Docker, the app works fine and listens on port 8080.
When I run the app using Docker, the app seems to be running correctly (it displays that it is running and listening on port 8080), but it is unreachable from Postman (I get the error: Could not get any response). Do you know what the reason for this could be?
Dockerfile:
FROM node:8
WORKDIR /opt/service
COPY package.json .
COPY package-lock.json .
RUN npm i
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
I build and run the application in Docker using:
docker build -t my-app .
docker run my-app
I have tried binding the port as described below, but I also wasn't able to reach the server on port 8181.
docker run -p 8181:8080 my-app
In my application, the server listens in the following way (using Express):
app.listen(8080, () => {
console.log('listening on port 8080');
})
I have also tried using:
app.listen(8080, '0.0.0.0', () => {
console.log('listening on port 8080');
})
The docker port command returns:
8080/tcp -> 0.0.0.0:8181
Do you guys have any idea what the reason for this could be?
UPDATE: Using the IP I obtained from the docker-machine (192.168.99.100:8181) I was able to reach the app. However, I want to be able to reach it from localhost:8181.
The way you have your port assignment set up requires you to use the Docker machine's IP address, not your localhost. You can find your Docker machine's IP using:
docker-machine ip dev
If you want to map the container port to your localhost, you should specify the localhost IP before the host port, like this:
docker run -p 127.0.0.1:8181:8080 my-app
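For example, to check the first option (dev is the machine name used above; substitute your own):
docker run -p 8181:8080 my-app
# from the host, hit the VM's address on the published port
curl http://$(docker-machine ip dev):8181/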