I'm trying to run a Vue dashboard in a Docker container and a second service on my local machine, both bound to "localhost:80". When both the dashboard and the service run in Docker, everything works fine, but when the dashboard runs in Docker and the second service runs on the host machine, the host service somehow overrides access to the Docker localhost. The expected behaviour:
- http://localhost -> should load the dashboard
- http://localhost/graphql -> should load the api for graphql from the second service
These work when both services run in Docker, but not when one is in Docker and the other runs on the host.
Any idea how to solve the issue? The reason I need the second service running on the host is so I can debug and iterate quicker, instead of building an image and updating the stack after each code change.
Thanks, Zoli.
localhost inside Docker refers to the Docker container itself. You can't reach the actual host from inside a container that way.
However, your actual host has an IP address on the Docker network, and you can access it through that. You can find it with ifconfig: look for a docker interface. On my machine the actual host has the IP 172.17.0.1.
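If it helps, a quick way to find that address from inside a container, plus the Docker Desktop shortcut (the port here is just a placeholder for your service):

    # inside the container: the default gateway is usually the host (e.g. 172.17.0.1)
    ip route | awk '/default/ { print $3 }'

    # on Docker Desktop, or on Linux when started with --add-host=host.docker.internal:host-gateway
    curl http://host.docker.internal:4000/graphql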
Problem solved: on the host I had to change the port, and it started to work. My explanation: when both services run in Docker, each container has its own network namespace, so both can listen on port 80 and requests are resolved correctly. But when one runs in Docker and the other on the host, both are competing for the host's port 80, so it doesn't work. Maybe someone can give a better explanation, but the problem is solved now. Thanks.
Intro:
I have an app that, when run, connects to a server and shares its bandwidth (basically it gives out its public IP for the server to use).
Development:
Now I wanted the app to use a transparent socks5 proxy for sending and receiving requests. This was possible when I downloaded the app (the non-containerized version) on Ubuntu and configured redsocks and iptables for it; the server the app connected to saw the app's public IP as the one I specified (socks5://user:pass@ip:port — I own these proxy IPs too).
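For reference, a minimal sketch of that non-containerized setup (the proxy address, credentials, and ports are placeholders):

    # /etc/redsocks.conf
    base {
        log_info = on;
        daemon = on;
        redirector = iptables;
    }
    redsocks {
        local_ip = 127.0.0.1;
        local_port = 12345;
        ip = 1.2.3.4;      // the socks5 proxy IP
        port = 1080;
        type = socks5;
        login = "user";
        password = "pass";
    }

    # redirect outbound TCP through redsocks, excluding traffic to the proxy itself
    iptables -t nat -N REDSOCKS
    iptables -t nat -A REDSOCKS -d 1.2.3.4 -j RETURN
    iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 12345
    iptables -t nat -A OUTPUT -p tcp -j REDSOCKS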
Requirement:
Next, I wanted to scale it out, so I looked for a Docker image of this app. It was available, so I downloaded and ran it, and it worked fine, but the IP the server received was the non-proxy IP (obviously, as I had not configured any proxy).
My proposed idea is to run multiple Docker containers,
but I still don't know how to configure a different socks5 proxy for each of these containers.
Can someone advise me on how to go forward with this?
Thank you in advance :D
Edit: I had originally mentioned trying to do what I did on the Ubuntu system inside an Ubuntu Docker container; I removed that and reorganized the whole situation.
Bit of a weird one this.
Basically I'm setting up an HAProxy Docker container on a VM (Ubuntu) in Azure. The VM has a public address, and the container is configured to listen on port 80 as normal.
It works some of the time. When it does, everything is grand and the container works as expected.
However, when I run my automation, which by default deletes and recreates the Docker container to pick up the latest image, the automation always passes, but the container times out when I try to load the public address. This seems to happen around half of the time.
Nothing else has changed in this scenario. The firewall is correct and the port itself is open, but it just times out. Restarting the Docker container solves the issue most of the time.
This doesn't happen when I try the same thing on GCP, so I assume it's something Azure-related. Any help would be awesome, thanks!
It appears this was due to needing the net.ipv4.ip_unprivileged_port_start sysctl configured for the HAProxy container (recent official HAProxy images run as a non-root user, which can't bind to port 80 unless unprivileged processes are allowed to bind low ports).
Since adding that I've not seen the issue. Apologies for the noise!
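For anyone else hitting this, a sketch of the flag that fixed it (the image tag is arbitrary):

    docker run -d --name haproxy \
      --sysctl net.ipv4.ip_unprivileged_port_start=0 \
      -p 80:80 \
      haproxy:2.6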
I'm deploying a bunch of Node apps in Docker containers and trying to use N|Solid to monitor them. However, the process in the container is using the container's internal IP address (172.17.0.1). That makes sense technically, but those IPs are not resolvable and the UI never picks them up.
Is there a way to tell the process which IP address to use? An environment variable or something?
Will with NodeSource here.
Yes. This is a bit of a problem. We have a set of N|Solid Docker Images baking in the oven that address this.
For now, the best way to get N|Solid to work with Docker is to create a network using docker network create nsolid, and run the N|Solid proxy, console, and etcd all in docker containers on that network using docker run --net nsolid.
When you add a container to the network, it will grab the ip address and register it with etcd. Since everything is on the same network, the proxy will be able to use that ip address to reach the N|Solid agent.
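Roughly, the setup looks like this (the image names are placeholders, not the final published ones):

    docker network create nsolid
    docker run -d --net nsolid --name etcd <etcd-image>
    docker run -d --net nsolid --name nsolid-proxy <nsolid-proxy-image>
    docker run -d --net nsolid --name nsolid-console <nsolid-console-image>
    # app containers join the same network so the proxy can reach their agents
    docker run -d --net nsolid --name my-app <your-nsolid-app-image>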
If you want to try out the N|Solid Docker Images we are baking, shoot me an email at wblankenship@nodesource.com
I'm a Docker newbie. I'm running MongoDB in one container and Redis in another, and I'm linking these two databases to my Node.js project, which runs in a third container. To connect to my databases I put the IPs of my containers in my source code, but every time I restart a container the IP changes, so I have to change it in my source code. How can I deal with this problem?
As Michael just said, you can specify an IP address via the "--ip" parameter.
Example:
docker run -d --name="mongoDB" --ip=172.10.0.1 -p=12720:12720 imageIdOrTagName
(Don't forget it is "--ip" and not "-ip".)
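One caveat: --ip only takes effect on a user-defined network; on the default bridge it is silently ignored. A sketch (the network name, subnet, and ports are made up):

    docker network create --subnet=172.18.0.0/16 mynet
    docker run -d --name mongoDB --net mynet --ip 172.18.0.10 -p 27017:27017 mongo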
For further information, please consider reading the "Docker Networking Documentation" page.
If you have any other questions, feel free to ask.
EDIT For Docker < 1.10:
This GitHub issue references what you are asking:
Allow user to choose the IP address for the container
It has been integrated in Docker 1.10.0 through the "docker run --ip=..." command.
For older versions, itoffshore presented a temporary solution right here.
Hope it will help.
Have a good day,
Nicolas.
You can specify the IP address of the container in the docker run command line with --ip="<ip address>"
How can I connect two containers on different host machines in Docker? I need a Node.js application on one host to use data from MongoDB on another host. Can anyone give me an example of this?
You could use the ambassador pattern for container linking:
http://docs.docker.com/articles/ambassador_pattern_linking/
Flocker is also addressing this issue, but needs more time for infrastructure setup:
https://docs.clusterhq.com/en/0.3.2/gettingstarted/
You might also want to check out Kontena (http://www.kontena.io). Kontena supports multicast (provided by Weave) and DNS service discovery. Because of DNS discovery you can predict, before the deploy, what addresses each container will get.
Like Flocker, Kontena also needs some time for infrastructure setup: https://github.com/kontena/kontena/tree/master/docs#getting-started
But you will get service scaling and deploy automation as a bonus.
You can connect containers on different hosts by creating an overlay network.
Docker Engine supports multi-host networking out-of-the-box through the overlay network driver.
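A sketch of that approach (overlay networks require swarm mode; the names are placeholders):

    # on a manager node
    docker swarm init
    docker network create --driver overlay --attachable app-net

    # on host 1, joined to the swarm: run the database
    docker run -d --name mongo --network app-net mongo

    # on host 2, joined to the swarm: the app reaches it as mongodb://mongo:27017
    docker run -d --name api --network app-net <your-node-image>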
It doesn't matter which machine the other container is on; all you need to do is ensure the port is published on that machine, then point the container on the first machine at the second machine's IP.
Machine 1: Postgres on port 5432, IP 172.25.8.10 (from ifconfig)
Machine 2: web server on port 80, IP 172.25.8.11 -> point its DB connection to 172.25.8.10:5432
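As a sketch with those example values (the image names and environment variables are just illustrative):

    # Machine 1 (172.25.8.10): publish Postgres on the host
    docker run -d --name db -p 5432:5432 postgres

    # Machine 2 (172.25.8.11): point the web app at Machine 1's IP
    docker run -d --name web -p 80:80 \
      -e DB_HOST=172.25.8.10 -e DB_PORT=5432 \
      <your-web-image>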