I'm trying to set up a Spark cluster using this link - https://github.com/actionml/docker-spark
When I created my containers (2 workers and 1 master), I see all the ports being mapped to the same ports on the host.
I'm wondering how to access the Spark master's web UI?
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b54c5fd1442c actionml/spark "/entrypoint.sh wo..." 2 minutes ago Up 2 minutes 4040/tcp, 6066/tcp, 7001-7006/tcp, 7077/tcp, 8080-8081/tcp spark-worker1
2c987a057223 actionml/spark "/entrypoint.sh wo..." 3 minutes ago Up 3 minutes 4040/tcp, 6066/tcp, 7001-7006/tcp, 7077/tcp, 8080-8081/tcp spark-worker0
b1d34441507e actionml/spark "/entrypoint.sh ma..." 9 minutes ago Up 9 minutes 4040/tcp, 6066/tcp, 7001-7006/tcp, 7077/tcp, 8080-8081/tcp spark-master
As stated in the README file of the repository, you can specify the web UI port when starting the master:
docker run --rm -it actionml/docker-spark master --webui-port PORT
--webui-port PORT Port for web UI (default: 8080)
As you can see, the default is 8080.
However, you also need to publish the port so that it is accessible from the host:
docker run -p 8080:8080 --rm -it actionml/docker-spark master
You can now open the browser and see the UI at localhost:8080.
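If you also want to reach the worker UIs (Spark workers default to 8081), publish each one on a distinct host port so they don't clash. A rough sketch, assuming the image accepts a worker argument with the master URL as in the repo's README (the network name and master URL here are illustrative):

```shell
# Shared network so workers can resolve the master by name
docker network create spark-net

# Master UI on host port 8080
docker run -d --name spark-master --network spark-net \
  -p 8080:8080 actionml/docker-spark master

# Worker UIs (container port 8081) on distinct host ports 8081 and 8082
docker run -d --name spark-worker0 --network spark-net \
  -p 8081:8081 actionml/docker-spark worker spark://spark-master:7077
docker run -d --name spark-worker1 --network spark-net \
  -p 8082:8081 actionml/docker-spark worker spark://spark-master:7077
```

With this layout, localhost:8080 shows the master UI and localhost:8081/8082 show the two workers.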
I use a Docker container to interact with my Kubernetes cluster and run kubectl from inside the container. All works fine except when I want to port-forward: I can use kubectl port-forward to forward from the pod to my container, but then I can't access the site from my laptop's browser; I can only curl from inside the container.
Is there any way I can access the site from my browser? Docker's host networking mode isn't supported on macOS, and I use a Mac. Any suggestions?
Here is a scenario you may adapt to your needs:
On my laptop, I run:
docker run -it --rm -p 2222:2222 ubuntu bash
From inside the container, I run:
kubectl port-forward --address 0.0.0.0 pod/my-pod-7f66c99ddd-6c429 2222:22
Now this is the diagram for the port-forwarding chain:
laptop:2222 ---> docker-container:2222 ---(kubectl port-forward)---> k8s-pod:22
Now, in another terminal on my laptop, I can do
ssh -p 2222 user-on-pod@laptop-hostname
to arrive at the pod.
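The same pattern works for plain HTTP, so the laptop browser can reach a pod directly (the pod name, image name, and ports below are hypothetical):

```shell
# On the laptop: publish the container port you will forward through
docker run -it --rm -p 8080:8080 my-kubectl-image bash

# Inside the container: bind on 0.0.0.0, otherwise the forward only
# listens on the container's own localhost and is unreachable from the host
kubectl port-forward --address 0.0.0.0 pod/my-web-pod 8080:80

# Back on the laptop: http://localhost:8080 now reaches the pod's port 80
```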
Is there any way to stop two Docker containers on the same machine from conflicting when they both use the same internal (container) port 80?
I have one nginx container that runs on port 9000 from the browser.
916aa1f58ca3 nginx:1.21.6-alpine "/docker-entrypoint.…" 3 minutes ago Up 3 minutes 0.0.0.0:9000->80/tcp
and a second Apache container that is mapped to external port 88.
4a3ba2c4847b apache-master_php "docker-php-entrypoi…" 3 hours ago Up 3 seconds 80/tcp, 0.0.0.0:88->88
They both run together, except when I send a request to the Apache container: accessing Apache's index.html on port 88 in the browser crashes the machine.
Is there a way around this?
docker-compose.yml looks like this:
apache:
  ports:
    - 88:88
  volumes:
    - ./src:/var/www/html/
and for nginx:
nginx:
  ports:
    - "$SENTRY_BIND:80/tcp"
  image: "nginx:1.21.6-alpine"
A conflict can never occur between the containers themselves: Docker assigns them to a private network created specifically for these services, and each container gets its own private IP.
The configuration should be as below, assuming Apache is running with its default configuration (listening on port 80 inside the container):
apache:
  ports:
    - 88:80
  volumes:
    - ./src:/var/www/html/
The compose ports syntax is HOST:CONTAINER, so your configuration tells Docker to map host port 88 to container port 88, which is wrong: nothing inside the container listens on 88; Apache listens on 80.
I think you should read the docs:
https://docs.docker.com/network/
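Putting both services in one compose file with distinct host ports might look like the sketch below (image names are taken from the docker ps output in the question; the nginx host port 9000 stands in for the $SENTRY_BIND variable, purely for illustration):

```yaml
services:
  apache:
    image: apache-master_php        # image name from the question's docker ps output
    ports:
      - "88:80"                     # host 88 -> container 80 (Apache's default)
    volumes:
      - ./src:/var/www/html/
  nginx:
    image: nginx:1.21.6-alpine
    ports:
      - "9000:80"                   # host 9000 -> container 80
```

Both containers can listen on their internal port 80 without conflict, because only the host ports (88 and 9000) must be unique.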
I have a Postgres database that was deployed as a Docker container on a Linux server. How can I add it as a data source in Grafana? I tried finding the Docker host IP address using sudo ip addr show docker0 and got the result 172.17.255.255. I added this in the Host field and set the port to 5432, but Grafana is still unable to fetch data from the DB. How can I do this?
If your database's port is not published, you should use a Docker network; if the port is published, try using your host's IP address instead of the Docker gateway address.
With docker ps, check whether your Postgres container publishes the port, e.g.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e25459aa0ed6 postgres:10.17 "docker-entrypoint.s…" 12 seconds ago Up 11 seconds 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp jovial_merkle
If not, stop the service, then restart it publishing the port with docker run -d -p 5432:5432 ....
Check whether any firewall rule blocks access to the port from outside the server.
Now you can reach your DB using the server IP.
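Before touching Grafana, it can help to verify that the port is reachable at all; a quick sketch (SERVER_IP, the user, and the database name are placeholders):

```shell
# TCP-level check: succeeds only if Postgres is published and reachable
nc -vz SERVER_IP 5432

# Application-level check (requires the psql client wherever you test from)
psql -h SERVER_IP -p 5432 -U postgres -d postgres -c 'SELECT 1;'
```

If nc fails from the machine running Grafana, the problem is the port mapping or a firewall, not the Grafana configuration.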
I currently have two docker containers running:
ab1ae510f069 471b8de074c4 "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:3001->3001/tcp hopeful_bassi
2d4797b77fbf 5985005576a6 "nginx -g 'daemon of…" 25 minutes ago Up 25 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp wizardly_cori
One is my client and the other (port 3001) is my server.
The issue I'm facing is I've just added SSL to my site, but now I can't access the server. My theory is that the server needs both port 443 and port 3001 open, but I can't have port 443 open on both containers. I can run the server via HTTP locally, so I think that also points to this conclusion.
Is there anything I can do to have both using https? The client won't talk to the server if the server uses http (for obvious reasons).
Edit:
I'm now not sure if it is to do with port 443, as I killed my client and tried to just run the server, but it still gave me connection refused:
docker run -dit --restart unless-stopped -p 3001:3001 -p 443:443 471b8de074c4
If you publish port 443 for a Docker container, Docker starts a (admittedly, somewhat suboptimal) proxy process that forwards TCP requests arriving at your host's port 443 to the container.
If you want two containers to use port 443, Docker would have to start this port forwarder twice on the same port. As your docker output shows, that can only happen once. By digging (deeply) into the (nearly non-existent) Docker logs, you may also find the relevant error message.
The problem you've found is not Docker-specific; it is the same problem you would face in a container-less environment: you simply can't have multiple service processes listening on the same TCP port.
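This is easy to demonstrate with plain sockets, no Docker involved (a self-contained sketch):

```shell
# Two sockets cannot listen on the same TCP address and port
python3 - <<'EOF'
import socket

s1 = socket.socket()
s1.bind(("127.0.0.1", 0))        # let the OS pick a free port
s1.listen()
port = s1.getsockname()[1]

s2 = socket.socket()
try:
    s2.bind(("127.0.0.1", port)) # second bind on the same port
    s2.listen()
    print("unexpected: second bind succeeded")
except OSError:
    print("second bind refused: address already in use")
EOF
```

The second bind fails with EADDRINUSE, which is exactly what Docker's port forwarder runs into when two containers publish the same host port.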
The solution also comes from the world before containers: you need a central proxy service. It listens on your port 443 and forwards each request, depending on the requested virtual host, to the corresponding container.
Dig through the available Docker images; it is nearly certain that such an HTTPS reverse proxy already exists. This third container will forward the requests where you want; of course you will need to configure it.
From that moment on, you don't even need HTTPS inside your containers (although you can keep it if you want), which helps a lot in production, correctly certified SSL environments: only your proxy needs the certificates. So:
           /---> container A (tcp:80)
tcp:443 -- proxy
           \---> container B (tcp:80)
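As a concrete sketch of this setup (the nginxproxy/nginx-proxy image and all container names, image names, and hostnames below are assumptions, not from the question; any reverse proxy configured for your two virtual hosts would do):

```shell
# Put both apps and the proxy on one user-defined network;
# only the proxy publishes 443 on the host
docker network create webnet

docker run -d --name proxy --network webnet -p 443:443 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  -v "$PWD/certs:/etc/nginx/certs:ro" \
  nginxproxy/nginx-proxy

# The apps declare which hostname they serve; they publish no host ports
docker run -d --name client --network webnet \
  -e VIRTUAL_HOST=example.com my-client-image
docker run -d --name server --network webnet \
  -e VIRTUAL_HOST=api.example.com my-server-image
```

The proxy routes by the requested hostname, so the client and server can both sit behind the single published 443.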
I tried this on multiple machines (Win10 and server 2016), same result
using this tutorial:
https://docs.docker.com/docker-for-windows/#set-up-tab-completion-in-powershell
This works
docker run -d -p 80:80 --name webserver nginx
Any other port fails with
docker run -d -p 8099:8099 --name webserver nginx --> ERR_EMPTY_RESPONSE
Looks like Docker/nginx is listening on this port, but failing. Telnetting to the port shows that the request goes through but disconnects right away; this is different from when a port is not being listened on at all.
There are two ports in that mapping. The first is the port Docker publishes on the host for you to connect to remotely; the second is where that traffic is sent inside the container. Docker doesn't modify the application, so the application itself needs to be listening on that second port. By default, nginx listens on port 80. Therefore, you could run:
docker run -d -p 8099:80 --name webserver nginx
to publish on port 8099 and send that traffic to the app inside the container, which is listening on port 80.
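A quick way to verify the mapping works (a sketch; assumes Docker and curl are available on the host):

```shell
docker run -d -p 8099:80 --name webserver nginx

# host port 8099 -> container port 80, where nginx listens;
# a 200 here confirms the mapping is correct
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8099

docker rm -f webserver   # clean up
```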