I'm a Docker newbie. I'm running MongoDB in one container and Redis in another, and I'm linking these two databases to my Node.js project, which runs in a third container. To connect to my databases I put the IPs of the containers in my source code, but every time I restart a container the IP changes, so I have to change it in my source code. How can I deal with this problem?
As Michael just said, you can specify an IP address via the "--ip" parameter.
Example:
docker run -d --name="mongoDB" --ip=172.10.0.1 -p 12720:12720 imageIdOrTagName
(Don't forget it is "--ip" and not "-ip".)
For further information, please consider reading the "Docker Networking Documentation" page.
If you have any other questions, feel free to ask.
EDIT: For Docker < 1.10:
This GitHub issue covers what you are asking:
Allow user to choose the IP address for the container
It has been integrated in Docker 1.10.0 through the "docker run --ip=..." command.
For older versions, itoffshore presented a temporary solution right here.
Hope it helps.
Have a good day,
Nicolas.
You can specify the IP address of the container in the docker run command line with --ip="<ip address>"
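Note that current Docker versions only honor --ip on a user-defined network with an explicit subnet, so you typically create one first. A minimal sketch (the network name and subnet here are just examples):
$ docker network create --subnet=172.25.0.0/16 mynet
$ docker run -d --name mongoDB --net mynet --ip 172.25.0.10 mongo
As a side benefit, containers on the same user-defined network can also reach each other by container name through Docker's embedded DNS, so the Node.js code could connect to mongoDB instead of a hard-coded IP.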
I am new to the tech world and want to get to know Eclipse hawkBit. I am a total noob and beginner, so any help is much appreciated.
My scenario:
I have a Windows 10 OS with VMware Workstation on it, and I have installed Ubuntu on it to run Eclipse hawkBit and, of course, Docker.
My internet connection is a high-speed mobile connection; I use my smartphone for tethering (mobile hotspot).
What I want to do:
I want to start Eclipse hawkBit, run it, simulate some devices, and get to know how to use hawkBit.
My problem: to run hawkBit, according to the official documentation, I just need to enter this in the terminal: docker run -p 8080:8080 hawkbit/hawkbit-update-server:latest
I have seen a YouTube video where, if hawkBit works, it shows the localhost port info and says it connected successfully (or something similar to that),
but the error I keep encountering is this: 2021-06-17 11:35:19.312 INFO 1 --- [tContainer#1-56] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:5672]
I would really appreciate it if you could help me with this; there is not much support and there are few videos about this problem. In fact, it's hard to find decent reference material for hawkBit.
Any help is much appreciated.
The exception you see here is due to a missing (or misconfigured) RabbitMQ service.
You will need to configure the environment variable SPRING_RABBITMQ_HOST to point to the running RabbitMQ instance.
docker run -p 8080:8080 -e SPRING_RABBITMQ_HOST=ip-of-rabbitmq-service hawkbit/hawkbit-update-server:latest
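If you don't have a RabbitMQ instance yet, a minimal sketch is to run one in its own container and put both containers on the same user-defined network, so hawkBit can reach it by name (the network and container names here are just examples):
$ docker network create hawkbit-net
$ docker run -d --name rabbitmq --net hawkbit-net rabbitmq:3-management
$ docker run -d --net hawkbit-net -p 8080:8080 -e SPRING_RABBITMQ_HOST=rabbitmq hawkbit/hawkbit-update-server:latest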
For an easier setup, you could use the docker compose file for hawkBit (Option B or C), which already includes preconfigured RabbitMQ and MySQL containers.
I'm trying to run a Vue dashboard in a Docker container and a service on my local machine. Both are bound to "localhost:80". When running both the dashboard and the service in Docker, everything works fine, but when running the dashboard in Docker and the second service on the host machine, the host machine somehow overrides access to the Docker localhost. So the expected behaviour:
- http://localhost -> should load the dashboard
- http://localhost/graphql -> should load the api for graphql from the second service
These work when both are running in Docker, but not when one is in Docker and the other is running on the host.
Any idea how to solve this issue? The reason I need the second service running on the host is so I can debug and code more quickly, instead of rebuilding the image and updating the stack after each code change.
Thanks, Zoli.
localhost inside Docker refers to the Docker container itself. You can't access the actual host from inside Docker that way.
However, your actual host has an IP address on the Docker network, and you can reach it at that IP. You can find it by running ifconfig and looking for a docker interface. On my machine the actual host has the IP 172.17.0.1.
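On Docker 20.10 and newer you can also have Docker map a stable name to the host's gateway IP instead of hard-coding 172.17.0.1. A sketch, assuming the dashboard image is called my-dashboard (a placeholder):
$ docker run -d --add-host=host.docker.internal:host-gateway -p 80:80 my-dashboard
# inside the container, a service listening on the host (say on port 4000)
# is then reachable at http://host.docker.internal:4000/graphql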
Problem solved: on the host I had to change the port, and it started to work. When both services run in Docker, port 80 can be assigned to the two containers and requests are resolved correctly; but when one runs in Docker and the other on the host, it does not work. That is my explanation; maybe someone can give a better one, but the problem is solved now. Thanks.
I'm trying to run a Ghost Docker image on Azure within a Linux Docker container. This is incredibly easy to get up and running using a custom Docker image for Azure Web App on Linux and pointing it at the official Docker Hub image for Ghost.
Unfortunately, the official Docker image stores all data on the /var/lib/ghost path, which isn't persisted across restarts, so whenever the container is restarted all my content gets deleted and I end up back at a default Ghost install.
Azure won't let me execute arbitrary commands; you basically point it at a Docker image and it fires off from there, so I can't use the -v command-line param to map a volume. The Docker image does have an entry point configured, if that helps.
Any suggestions would be great. Thanks!
Set WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the app settings, and the home directory will be mapped from your outer Kudu instance:
https://learn.microsoft.com/en-us/azure/app-service/containers/app-service-linux-faq
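If you prefer the Azure CLI over the portal, the setting can be applied with something like this (the resource group and app name are placeholders):
$ az webapp config appsettings set \
    --resource-group myResourceGroup --name my-ghost-app \
    --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true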
You have a few options:
You could mount a file share inside the Docker container by creating a custom image, then storing data there. See these docs for more details.
You could switch to the new container instances, as they provide volume support.
You could switch to the Azure Container Service. This requires an orchestrator, like Kubernetes, and might be more work than you're looking for, but it also offers more flexibility, provides better reliability and scaling, and other benefits.
You have to use a shared volume that maps the container's /var/lib/ghost directory to a host directory. This way, your data will persist in the host directory.
To do that, use the following command:
$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content ghost:1-alpine
I have never worked with Azure, so I'm not 100 percent sure the following applies. But if you interface with Docker via the CLI, there is a good chance it does.
Persistence in Docker is handled with volumes. They are basically mounts inside the container's file-system tree that point to a directory on the outside. From your text I understand that you want to store the content of the container's /var/lib/ghost path in /home/site/wwwroot on the outside. Since -v takes the host path first and the container path second, you would call Docker like this:
$ docker run [...] -v /home/site/wwwroot:/var/lib/ghost ghost
Unfortunately, setting the persistent storage (or bring-your-own storage) to a specific path is currently not supported in Azure Web Apps on Linux.
That said, you can play with SSH and try to configure Ghost to point to /home/ instead of /var/lib/.
I have prepared a Docker image here: https://hub.docker.com/r/elnably/ghost-on-azure that adds the SSH capability; the Dockerfile and code can be found here: https://github.com/ahmedelnably/ghost-on-azure/tree/master/1/alpine.
Try it out by configuring your web app to use elnably/ghost-on-azure:latest, browse to the site (to start the container), and go to the SSH page at .scm.azurewebsites.net. To learn more about SSH, check this link: https://aka.ms/linux-ssh.
I'm deploying a bunch of Node apps in Docker containers and trying to use N|Solid to monitor them. However, the process in each container is using the internal IP address of the container (172.17.0.1). Which makes sense technically, but those IPs are not resolvable, and the UI never picks them up.
Is there a way to tell the process which IP address to use? An environment variable or something?
Will with NodeSource here.
Yes. This is a bit of a problem. We have a set of N|Solid Docker Images baking in the oven that address this.
For now, the best way to get N|Solid to work with Docker is to create a network using docker network create nsolid, and run the N|Solid proxy, console, and etcd all in docker containers on that network using docker run --net nsolid.
When you add a container to the network, it will grab the IP address and register it with etcd. Since everything is on the same network, the proxy will be able to use that IP address to reach the N|Solid agent.
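Put together, the setup looks roughly like this; the image names below are placeholders rather than official ones, so substitute whatever etcd and N|Solid images you deploy:
$ docker network create nsolid
# etcd for service registration (image is an example)
$ docker run -d --net nsolid --name etcd quay.io/coreos/etcd
# N|Solid proxy and console on the same network (placeholder images)
$ docker run -d --net nsolid --name nsolid-proxy <proxy-image>
$ docker run -d --net nsolid --name nsolid-console <console-image>
# your instrumented app joins the same network so the proxy can reach its agent
$ docker run -d --net nsolid --name my-app <your-app-image>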
If you want to try out the N|Solid Docker images we are baking, shoot me an email at wblankenship@nodesource.com
Currently, I have a single machine on which I'm starting to run Docker images. What I want is for every container to be accessible through a hostname based on the container name. So, if I have containers C1 and C2, and the host for the Docker server is mydocker.local, then c1.mydocker.local will point to container C1, and if I were to run C3, it would become available as c3.mydocker.local.
docker-dns seems similar to what I'm trying to do, but the project hasn't been updated for 7 months and documentation was not enough for me to get it running.
This seems like a very common use case, but I have not been able to create the appropriate google query magic to find anything.
You can try docker-gen to automatically update your host's hosts file, or create a DNS service such as Dnsmasq.
I also found a Docker image called dns-gen, which uses just docker-gen and dnsmasq and might solve your problem.
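If you go the Dnsmasq route, the heart of it is a single wildcard rule that resolves every *.mydocker.local name to the Docker host, where a name-based reverse proxy (such as the nginx/docker-gen combination) routes each request to the right container. A sketch, assuming the host's IP is 192.168.1.10:
# /etc/dnsmasq.d/docker.conf
# resolve every *.mydocker.local hostname to the Docker host (IP is an example)
address=/mydocker.local/192.168.1.10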
You should use explicit linking if containers are going to depend on each other. This will update the /etc/hosts file inside the container, adding IPs for each of the linked containers.
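For example (the container names and images here are only illustrative):
$ docker run -d --name db mongo
$ docker run -d --name web --link db:db my-web-app
# inside "web", the hostname "db" now resolves to the db container's IP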
If you do want something more, something like Rancher will give you a complete virtual private network in which each of the containers has its own IP. Rancher also supports services, which will maintain DNS for all running containers. A bonus of using Rancher is that it will work in a multi-host scenario as well.