Docker cannot access mariadb server - node.js

I am a newbie with Docker.
I want to migrate my Node.js app to Docker. The database already exists on a server on my LAN (172.17.2.1), so I set the MariaDB host to 172.17.2.1 in my Node.js config.
After that, I built an image and ran it with:
docker run -p 3009:3009 -d my-node
The container is running, but when I open the app in a browser I get an error saying it cannot connect to 172.17.2.1 (the database connection fails).
I tried creating a bridge IP (172.17.2.135) on the same subnet, but I still get the same error.
From inside the container, my image does not know how to reach 172.17.2.1 on my LAN.
Please help me.
I am using a Windows 10 environment.

You have two options to allow your container to reach an external server:
Run your Docker container on the host network:
docker run --network host -d my-node
(Note that -p is ignored in host mode, since the container shares the host's network stack, and host networking is only available on Linux hosts.)
This way your container will be able to reach anything reachable from your machine.
Create a network bridge: in this case Docker will route the traffic from the container to the external server. The bridge IP cannot be your Docker machine's IP, as you tried to do.
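A rough sketch of the second option; the network name and subnet below are placeholders, chosen so they do not collide with the LAN's 172.17.2.0/24:
# create a user-defined bridge on a separate subnet
docker network create --driver bridge --subnet 172.28.0.0/16 my-bridge
# attach the container to it; Docker will NAT its outbound traffic to 172.17.2.1
docker run -p 3009:3009 --network my-bridge -d my-node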

Related

Accessing docker container running in remote linux machine from a windows browser

I have a remote Ubuntu machine with Docker installed and a container running on it. I want to access the container from my Windows machine through a browser. I can already connect to the remote Ubuntu machine from Windows through PuTTY. Is there any way to achieve this? Any help or leads would be highly appreciated.
When you start the container, you'll need to publish the port that you want to connect to using the -p flag. Here's an example from the Docker documentation that publishes port 80 in the container to port 80 on the host (you can map to a different port if you'd like):
$ docker run -d -p 80:80 my_image service nginx start
See https://docs.docker.com/engine/reference/run/#expose-incoming-ports
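Once the port is published, you reach the container through the Ubuntu host's address rather than the container's. A quick check from the Windows side (the host address below is a placeholder):
curl http://<ubuntu-host-ip>:80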

"The connection was reset" after starting my server [duplicate]

I'm running a webpack-dev-server application inside a Docker container (node:4.2.1). If I try to connect to the server port from within the container, it works fine. However, trying to connect to it from the host computer results in a reset connection (the port is published, of course). How can I fix it?
This issue is not a Docker problem.
Add --host=0.0.0.0 to your webpack command. By default webpack-dev-server binds to localhost, which inside a container is not reachable through the published port.
You then need to connect to your page like this:
http://host:port/webpack-dev-server/index.html
See the documentation on iframe mode.
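A minimal sketch of the fix, assuming webpack-dev-server is on the container's PATH (the image name is a placeholder):
# bind the dev server to all interfaces so the published port can reach it
docker run -p 8080:8080 my-webpack-image webpack-dev-server --host 0.0.0.0 --port 8080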
You need to make sure:
your Docker container has mapped the EXPOSEd port to a host port:
docker run -p x:y
your VM (if you are using docker-machine with a VM) has forwarded that mapped port to the actual host (the host of the VM); see the sketch below.
See "How to access tomcat running in docker container from browser?"

Can't get docker to accept request over the internet

So, I'm trying to get Jenkins working inside of Docker as an exercise to gain experience using Docker. I have a small Linux server running Ubuntu 14.04 in my house (a computer I wasn't using for anything else), and I have no issues getting the container to start up and connecting to Jenkins over my local network.
My issue comes in when I try to connect to it from outside my local network. I have port 8080 forwarded to the server with the container, and if I run a port checker it says the port is open. However, when I actually try to go to my-ip:8080, I either get nothing, if I started the container just with -p 8080:8080, or "Error: Invalid request or server failed. HTTP_Proxy" if I run it with -p 0.0.0.0:8080:8080.
I wanted to make sure it wasn't Jenkins, so I tried getting a simple hello-world Flask application to work, and had the exact same issue. Any recommendations? Do I need to add anything extra inside Ubuntu to get it to allow outside connections to reach my containers?
EDIT: I'm also just using the official Jenkins image from Docker Hub.
If you are running this:
docker run -p 8080:8080 jenkins
then to connect to Jenkins you will have to connect to the host (in essence you are doing port forwarding):
http://127.0.0.1:8080 or http://localhost:8080
If you are just running this:
docker run jenkins
you can connect to Jenkins using the container's IP:
http://<containers-ip>:8080
The Dockerfile used to build the Jenkins image already exposes port 8080.
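To find that IP on the default bridge network (the container name "jenkins" is an assumption):
docker inspect -f '{{ .NetworkSettings.IPAddress }}' jenkins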
The Docker Site has a great amount of information on container networks.
https://docs.docker.com/articles/networking
"By default Docker containers can make connections to the outside world, but the outside world cannot connect to containers."
You will need to provide special options when invoking docker run in order for containers to accept incoming connections.
Use -p to publish a specific port, or -P (--publish-all) to publish all EXPOSEd ports to random host ports.
The below should allow you to access it from another network:
docker run -p 8080:8080 jenkins
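After starting it, docker port shows the active mappings, so you can confirm the publish worked:
docker port jenkins
# e.g. 8080/tcp -> 0.0.0.0:8080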
If you can connect to Jenkins over the local network from a machine other than the one Docker is running on, but not from outside your local network, then the problem is not Docker. In that case, whatever device receives the outside connection (normally your router or modem) does not know which machine the request should be forwarded to.
You have to make sure you are forwarding the proper port on your external IP to the proper port on the machine running Docker. This can normally be done on your internet modem/router.

External access to Node.JS app, within Docker container

I have a Node app running within a Docker container, hosted on Elastic Beanstalk (single instance). The container has port 3000 exposed to access the app, and I can 'curl 172.17.0.32:3000/test' from the host, which returns the expected response.
The problem I have is accessing this port externally using the Elastic Beanstalk URL, i.e.
http://XXXXXX-env.elasticbeanstalk.com:3000/test
This will time out. Can anyone recommend how to gain access to this port externally?
Thanks
Check this for reference:
http://victorlin.me/posts/2014/11/26/running-docker-with-aws-elastic-beanstalk
See what your docker ps command returns.
The IP you have shared looks like the private IP address Docker assigns on its internal network. You have to publish the port between your host and the Docker container by supplying -p 3000:3000 to the run command, and finally enable the app in your Elastic Beanstalk console.
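A minimal local check before involving Elastic Beanstalk; the image name is a placeholder:
docker run -d -p 3000:3000 my-node-app
docker ps    # the PORTS column should show 0.0.0.0:3000->3000/tcp
curl http://localhost:3000/test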

Send request from one docker container to another

I'm trying to move some existing servers to be housed within docker containers. I have two: an app server and an api server but developed with node.js. I have them both working within an ubuntu vm and can hit both apps from outside the vm which is great.
Each server has its own domain. The app server uses the app domain and the api server uses the api domain; clever, I know. Locally I added both domains to my hosts file, pointing to the IP assigned to the Ubuntu VM.
The only issue I'm having is that there is a request sent from the app server that needs to be routed to the api server. I tried editing the hosts file of both the app server container (via the Dockerfile) and the Ubuntu VM; however, the request fails.
Is there a simple way to get that request to not go out and try to resolve the api domain but get it to point to the api container?
A typical solution to this would be to use Docker's --link option to link the containers. That is, if you do:
docker run -d --name api myapi
docker run -d --name app --link api:api myapp
Then within the app container, the hostname api will map to the api container. You will also have a set of environment variables available that describe the exposed ports on the linked container. E.g., if your "api" container exposed port 80, the variables would look like:
API_PORT_80_TCP=tcp://172.17.0.10:80
API_PORT_80_TCP_PORT=80
API_PORT_80_TCP_PROTO=tcp
API_PORT=tcp://172.17.0.10:80
API_NAME=/app/api
API_PORT_80_TCP_ADDR=172.17.0.10
There are some disadvantages to the link option:
This only works for containers hosted on the same physical host
If you restart the "api" container, you have to restart the "app" container, too.
Both of these particular problems can probably be resolved by the orchestration tool of your choice if you are operating in a multi-host environment.
The linking feature (--link) is a legacy feature.
You should always prefer using Docker network drivers over linking.
Example: run a Redis container with Redis bound to localhost, then run the redis-cli command and connect to the Redis server over the localhost interface.
$ docker run -d --name redis example/redis --bind 127.0.0.1
$ # use the redis container's network stack to access localhost
$ docker run --rm -it --network container:redis example/redis-cli -h 127.0.0.1
See the docs for details.
https://docs.docker.com/compose/link-env-deprecated/
https://docs.docker.com/engine/reference/run/#network-settings
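For the original question (the app container reaching the api container by name), a minimal sketch using a user-defined bridge network; the network and image names are placeholders:
docker network create app-net
docker run -d --name api --network app-net myapi
docker run -d --name app --network app-net myapp
# containers on a user-defined network resolve each other by container name,
# so inside "app" the hostname "api" now points at the api container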
