Hello, I have set up GitLab over Docker, created a repository, and added a simple README file. I am trying to access the repo from other computers on the same network but I cannot. I set GitLab up at this URL: http://gitlab.local:30080/. What should I do to clone the repo onto other computers and work against the local server?
Where did you specify the DNS entry for gitlab.local?
You need a DNS server that can resolve gitlab.local to the IP of the host your Docker container is running on.
Did you expose the Port from the container to the Host?
You must publish the port from the Docker container to a port on the host.
After doing this, if you use a Linux OS, add a record like this to the /etc/hosts file:
192.168.1.10 gitlab.local
If you use a Windows OS, add the record to the hosts file at C:\Windows\System32\drivers\etc\hosts.
Now you can access GitLab with this URL on any computer on the network whose hosts file you have edited to add the record above:
http://gitlab.local:30080/
Note: the firewall must either be off, or you must add a firewall rule for GitLab and its port, on every computer you use.
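For the publishing step, here is a minimal sketch of how the container could be started so that host port 30080 maps to GitLab's HTTP port inside the container; the image tag and the assumption that GitLab listens on port 80 internally are mine, not taken from your setup:
$ docker run -d --hostname gitlab.local -p 30080:80 gitlab/gitlab-ce:latest
With that mapping in place, other machines that have the hosts entry can reach http://gitlab.local:30080/ as long as the firewall allows port 30080.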
Related
I'm running a webpack-dev-server application inside a Docker container (node:4.2.1). If I try to connect to the server's port from within the container, it works fine. However, trying to connect to it from the host computer results in a connection reset (the port is published, of course). How can I fix it?
This issue is not a Docker problem.
Add --host=0.0.0.0 to your webpack command.
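The reason is that webpack-dev-server binds to localhost by default, which is unreachable through Docker's published-port forwarding. A sketch of the command, with an assumed port:
$ webpack-dev-server --host=0.0.0.0 --port 8080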
You need to connect to your page like this:
http://host:port/webpack-dev-server/index.html
Have a look at the iframe mode in the webpack-dev-server docs.
You need to make sure:
your Docker container has mapped the EXPOSEd port to a host port (see the sketch after this list)
docker run -p x:y
your VM (if you are using docker machine with a VM) has forwarded that mapped port to the actual host (the host of the VM).
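As a sketch of those two steps, with assumed port numbers, a hypothetical image name (my-webpack-image), and docker-machine's default VM name:
$ docker run -d -p 8080:8080 my-webpack-image
$ curl http://$(docker-machine ip default):8080/
The first line publishes the container port on the VM; the second reaches it from the actual host by asking docker-machine for the VM's IP, as an alternative to forwarding the port out of the VM.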
See "How to access tomcat running in docker container from browser?"
I installed Gitlab on a VMWare VM, using NAT, where the VM is running Ubuntu 16.04. Everything installed OK, but I can't access it via the browser. It says I need to configure an external URL. I only need to access the VM from my Mac (where the VM is running). How do I configure a URL so I can access it from my Mac?
Thanks!
When the VM is running locally on the Mac in NAT network config, this means that the ports are available directly on the Mac IP. If you only need to access it from the Mac itself, you could access the application at the port via the loopback (local only) IP 127.0.0.1
If GitLab is running on port 80 in the VM, you should be able to access it from the Mac at http://127.0.0.1
If this doesn't work, there are a few options:
Confirm no other service/webserver is running on port 80 locally on the Mac. If there is, you should change the port of the gitlab webserver in your VM, and access using http://127.0.0.1:port
Confirm that port 80 is allowed in the VM firewall and that the webserver is running: https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-gitlab-on-ubuntu-16-04
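Since the error message mentions configuring an external URL, here is a minimal sketch for an Omnibus GitLab install; the loopback URL is an assumption that matches the local-only access described above:
# /etc/gitlab/gitlab.rb
external_url 'http://127.0.0.1'
$ sudo gitlab-ctl reconfigure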
So I want to connect to my mongodb running on my host machine (DO droplet, Ubuntu 16.04). It is running on the default 27017 port on localhost.
I then use mup to deploy my Meteor app on my DO droplet, which is using docker to run my Meteor app inside a container. So far so good.
A standard mongodb://... connection url is used to connect the app to the mongodb.
Now I have the following problem:
mongodb://...#localhost:27017... obviously does not work inside the docker container, as localhost is not the host's localhost.
I have already read many Stack Overflow posts on this, and I already tried:
--network="host" - did not work as it said 0.0.0.0:80 is already in use or something like that (nginx proxy)
--add-host="local:<MY-DROPLET-INTERNET-IP>" and connect via mongodb://...#local:27017...: also not working as I can access my mongodb only from localhost, not from the public IP
This has to be a common problem!
tl;dr - What is the proper way to expose the hosts localhost inside a docker container so I can connect to services running on the host? (including their ports, e.g. 27017).
I hope someone can help!
You can use 172.17.0.1, as that is the host's IP on the default Docker bridge, which the containers can see. But you need to configure Mongo to listen on 0.0.0.0.
From Docker 18.03 onwards, the recommendation is to connect to the special DNS name host.docker.internal.
For previous versions you can use the DNS names docker.for.mac.localhost or docker.for.windows.localhost.
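A sketch of what the connection string could then look like; the user, password, and database name are placeholders, not values from your setup:
mongodb://appUser:appPass@host.docker.internal:27017/myAppDb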
Change bindIp from 127.0.0.1 to 0.0.0.0 in /etc/mongod.conf and restart mongod. Then it will work.
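For reference, this is roughly what the relevant section of /etc/mongod.conf looks like after the change; note that binding to 0.0.0.0 makes mongod reachable on all interfaces, so on a public droplet keep port 27017 firewalled or enable authentication:
net:
  port: 27017
  bindIp: 0.0.0.0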
Or start mongod on Ubuntu with a flag that binds all IP addresses, as a temporary workaround (for dev/learning purposes):
$ mongod --bind_ip_all
I tried countless variants of this on Windows (using Docker Desktop), but without any result...
Unfortunately, Windows (at least Docker Desktop) currently does not support --net=host.
Quoted from: https://docs.docker.com/network/network-tutorial-host/#prerequisites
The host networking driver only works on Linux hosts, and is not supported on Docker for Mac, Docker for Windows, or Docker EE for Windows Server.
You can try Docker Toolbox instead: https://docs.docker.com/toolbox/
1 - I created a new container service in Azure.
2 - The creation was done following the portal step by step.
3 - I have not changed any configuration of any service, VM, balancer, master, or agent.
4 - I can connect with PuTTY normally.
5 - I can open a tunnel by redirecting port 80 to port 80.
Following this tutorial, I can get the container running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ffe6a1c890e4 yeasy/simple-web "/bin/sh -c 'pytho..." 31 minutes ago Up 31 minutes 0.0.0.0:80->80/tcp vibrant_morse
If I access localhost from my browser I can reach port 80 of the container and see the same "Real Visit Results" page as in the tutorial.
But the tutorial says that if I use the load balancer's DNS name I should see the same result, and that's my problem: I cannot access the container through that DNS name, I only get a timeout.
To reiterate: I created a container service and did not change any configuration; I just connected with PuTTY and started the container.
According to your description, it seems that you have not set your DOCKER_HOST environment variable to the local port configured for the tunnel. When you SSH to your master VM, you need to execute the command below:
export DOCKER_HOST=:2375
Run the Docker commands that tunnel to the Docker Swarm cluster. For example:
docker info
If you don't set the environment variable for the tunnel, the Docker container is created on the master VM, so you cannot reach the web app through the agents' public IP.
Alternatively, you can skip setting the environment variable, but then you need to point to the host each time you execute a docker command. For more information, please refer to this link.
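A sketch of the whole flow, assuming the default ACS ports (2200 for SSH to the master, 2375 for the Swarm endpoint) and a hypothetical master address:
$ ssh -fNL 2375:localhost:2375 -p 2200 azureuser@mymaster.westus.cloudapp.azure.com
$ export DOCKER_HOST=:2375
$ docker run -d -p 80:80 yeasy/simple-web
$ docker -H :2375 info
The first command opens the tunnel in the background, the export points the docker client at the tunnel so containers are scheduled onto the Swarm agents, and the last line shows the -H alternative if you prefer not to set the environment variable.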
I am running docker on OSX via boot2docker. I am using docker remotely, via the API.
I create several containers from a web server image. Docker assigns a different IP address to each container, like 172.17.0.61. Each web server is running on port 8080.
Inside VM, I can ping the server on this address.
How can I map these different container IP addresses (inside the VM) to the same IP in the VM, but on different ports? E.g.:
<local.ip>:9001 -> 172.17.0.61:8080
<local.ip>:9002 -> 172.17.0.62:8080
where local.ip may be either ip from boot2docker or anything else.
A possible solution is to define port bindings when creating the container and bind each container to a different port. However, I would like to avoid that, since this config becomes part of the container and only exists because I am running on OSX. If I did all of the above on Linux, we would not have this issue.
How can I map the inner containers to different ports?
Publishing ports is the right solution. You have the same problem whether you're running remotely or locally, just the IP address changes.
For example, say I start the following web servers:
$ docker run -d -p 8000:80 nginx
$ docker run -d -p 8001:80 nginx
From inside the VM (run boot2docker ssh), I can then run curl localhost:8000 or curl localhost:8001 to reach the website. This is the normal way of working with Docker on Linux. From the Mac command line, it becomes curl $(boot2docker ip):8000 because of the VM, but we've not done anything different with regards to starting the web servers because of boot2docker.
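Restating that last step as commands, purely for completeness (the ports are the ones from the example above):
$ curl $(boot2docker ip):8000
$ curl $(boot2docker ip):8001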