Running under macOS, I am connecting from a Node.js app with net.Socket() to a Docker container running on the same host, which contains a C++ socket server under CentOS. The Docker run command is:
docker run -it --rm -p 14000-14010:14000-14010 -v /Users/me/Development/spdz:/spdz spdz/spdzdev
When the C++ server in Docker is not running, I see a successful connection in Node followed 3 ms later by a socket-closed message.
It appears as if a proxy in front of the container is accepting the request and passing it through to Docker, where it is rejected. However, this leads to erroneous messages in my front-end application, which thinks the connection was successful, only to find out later that it was not.
I would like to see the connection simply refused. Any suggestions as to how this might be remedied or better understood would be helpful.
I am confident that the behaviour is introduced by Docker, as running the components outside Docker gives the expected immediate failure on connection. Also, I have tried mapping the published ports to an external network interface rather than localhost, but I see the same behaviour.
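For illustration, a minimal sketch of the kind of client involved (not my exact app code; the port and host are taken from the docker run command above). The 'connect' event fires even with the container's server down, and 'close' follows shortly after:

const net = require('net');

const socket = new net.Socket();

socket.connect(14000, 'localhost', () => {
  // Fires even when the C++ server inside the container is not running,
  // apparently because the proxy on the host side accepts the TCP connection.
  console.log('connected');
});

socket.on('close', (hadError) => {
  // A few milliseconds later the connection is dropped.
  console.log('closed, hadError =', hadError);
});

socket.on('error', (err) => {
  // Outside Docker this is where ECONNREFUSED shows up immediately.
  console.log('error:', err.code);
});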
I suggest you check whether the error is coming from your server application.
You can use the netcat command-line tool to open a listening socket in your Docker container:
nc -l 14000
This will create a TCP server socket listening on port 14000.
Then, from your host computer (macOS), open a terminal and try to connect with telnet:
telnet -e q localhost 14000
[screenshot: response from testing]
[screenshot: code hello.js]
I am trying to create a web service with Node.js. Unfortunately, after implementing the application hello.js, I am only able to run "node hello.js" successfully; when I type "curl http://localhost:3000" I get "curl: (7) Failed to connect to localhost port 3000: connection establishment rejected".
This also happens with other ports, or when I type 127.0.0.1 directly. I also disabled my firewall with ufw and allowed port 3000 with ufw.
I am working with VirtualBox for a uni project. I am running everything in the VM. The goal is to cluster with Docker, and I need the web server for load balancing in the end.
Thanks!
The issue here is that you use the same terminal for multiple processes. As soon as you start the server, it holds the terminal. That means when you press Ctrl + C (^C in the terminal output), you stop the server.
In your case you should use two terminals. In the first one you start the server (and do nothing else with that terminal); in the second one you make the request. Or you can change the second step and make the request from a browser (it will be a GET request by default), Postman, Insomnia, etc.
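For reference, a minimal hello.js of the sort described in the question (a sketch only; the real file may differ) that you would keep running in the first terminal while issuing curl from the second:

const http = require('http');

// Minimal HTTP server listening on port 3000.
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World\n');
});

server.listen(3000, () => {
  // Terminal 1: node hello.js  (leave it running)
  // Terminal 2: curl http://localhost:3000
  console.log('Server running at http://localhost:3000/');
});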
I'm running a webpack-dev-server application inside a Docker container (node:4.2.1). If I try to connect to the server port from within the container, it works fine. However, trying to connect to it from the host computer results in a reset connection (the port is published, of course). How can I fix it?
This issue is not a Docker problem.
Add --host=0.0.0.0 to your webpack command.
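If you prefer configuration over the CLI flag, the equivalent in webpack.config.js is roughly the following (the entry path and port are assumptions; adjust to your project):

// webpack.config.js (sketch)
module.exports = {
  entry: './src/index.js',
  devServer: {
    // Bind to all interfaces so the port published by Docker is
    // reachable from the host, not only from inside the container.
    host: '0.0.0.0',
    port: 8080
  }
};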
You need to connect to your page like this:
http://host:port/webpack-dev-server/index.html
Look into the iframe mode.
You need to make sure:
your Docker container has mapped the EXPOSE'd port to a host port:
docker run -p x:y
your VM (if you are using Docker Machine with a VM) has forwarded that mapped port to the actual host (the host of the VM).
See "How to access tomcat running in docker container from browser?"
I am running an Angular app on a Node server, and in server.js I have specified app.listen(8084, 'localhost'). So when I ran npm start in the Docker container with -p 8084:8084 in docker run, I was not able to get anything, even though running curl localhost:8084 inside my container gave the right result.
So I changed it to app.listen(8084), and the -p 8084:8084 mapping started working. I am not sure why?
When you open a socket, you need to bind it to some interface on your system. There are predefined values:
0.0.0.0 - all interfaces; your service will be available from any interface
localhost, 127.0.0.1 - bind locally; the service is NOT available from outside -- this is your case
You can also specify a particular interface IP address to bind to.
When you start your container, Docker by default attaches it to the default bridge network, so your container is put into a separate network; to access it, you need to allow incoming remote connections in the container.
You bound your service to localhost inside the container, so no communication is possible from outside. localhost for your Node server inside the container is not the same as localhost on the host.
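As a sketch of the fix on the Node side (assuming Express, which the app.listen call in server.js suggests; 8084 is the port from the question):

const express = require('express');
const app = express();

app.get('/', (req, res) => res.send('ok'));

// Binding to 0.0.0.0 (or omitting the host argument entirely) makes the
// server reachable through the port published with `docker run -p 8084:8084`.
// Binding to 'localhost' keeps it reachable only from inside the container.
app.listen(8084, '0.0.0.0', () => {
  console.log('listening on 8084');
});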
Running a Node.js Docker container on an EC2 instance, I'm trying to remote-debug it, but I keep getting "connection refused" from the instance.
What I've tried -
Opening ports in EC2 security groups
Exposing ports in Dockerfile, both the port the app is listening on and the debug port
Forwarding the port within the Docker run command using the -p flag
Making sure the app is accessible directly through the port it's configured to listen to
After trying all of these, the debug port is still inaccessible to the remote debugger, or even to telnet.
Any ideas what could cause this?
So, I'm trying to get Jenkins working inside of Docker as an exercise to gain experience using Docker. I have a small Linux server running Ubuntu 14.04 in my house (a computer I wasn't using for anything else), and I have no issues getting the container to start up and connecting to Jenkins over my local network.
My issue comes in when I try to connect to it from outside of my local network. I have port 8080 forwarded to the server running the container, and if I run a port checker it says the port is open. However, when I actually try to go to my-ip:8080, I either get nothing if I started the container just with -p 8080:8080, or "Error: Invalid request or server failed. HTTP_Proxy" if I run it with -p 0.0.0.0:8080:8080.
I wanted to make sure it wasn't Jenkins, so I tried getting just a simple hello-world Flask application to work, and had the exact same issue. Any recommendations? Do I need to add anything extra inside Ubuntu to get it to allow outside connections to reach my containers?
EDIT: I'm also just using the official Jenkins image from Docker Hub.
If you are running this:
docker run -p 8080:8080 jenkins
Then to connect to Jenkins you will have to connect to (in essence you are doing port forwarding):
http://127.0.0.1:8080 or http://localhost:8080
If you are just running this:
docker run jenkins
You can connect to Jenkins using the container's IP:
http://<containers-ip>:8080
The Dockerfile from which the Jenkins container is built already exposes port 8080.
The Docker site has a great deal of information on container networking:
https://docs.docker.com/articles/networking
"By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers."
You will need to provide special options when invoking docker run in order for containers to accept incoming connections.
Use the -P or --publish-all=true|false flag for containers to accept incoming connections.
The command below should allow you to access it from another network:
docker run -P -p 8080:8080 jenkins
If you can connect to Jenkins over the local network from a machine different from the one Docker is running on, but not from outside your local network, then the problem is not Docker. In this case the problem is that whatever device receives the outside connection (normally your router, modem, etc.) does not know to which machine the outside request should be forwarded.
You have to make sure you are forwarding the proper port on your external IP to the proper port on the machine that is running Docker. This can normally be done on your internet modem/router.