For a couple of weeks now I have been trying to fix an issue on my new laptop running the Fedora 28 KDE desktop.
I have two issues:
The container can't connect to the internet
The container doesn't see my hosts in /etc/hosts
I have tried many solutions: disabling firewalld, flushing iptables, accepting all connections in iptables, and re-enabling firewalld while changing the network zones to "trusted". I also disabled Docker's iptables manipulation via daemon.json. It is still not working.
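For reference, disabling Docker's iptables handling is done in /etc/docker/daemon.json like this:

{
  "iptables": false
}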
Can anyone help? This is becoming a nightmare for me.
UPDATE #1:
Even when I try to build an image, it can't access the internet, so the problem seems to be at the level of Docker itself, not only the containers.
I tried disabling the firewall and changing zones, and I also set all connections to the "trusted" zone.
Can anyone help?
UPDATE #2:
When I turn the firewalld service on and set the Wi-Fi connection's zone to 'external', the containers (and Docker builds) are able to access the internet, but the services can't reach each other.
Here is my YAML file:
version: "3.4"
services:
nginx:
image: nginx
ports:
- "80:80"
- "443:443"
deploy:
mode: replicated
replicas: 1
networks:
nabed: {}
volumes:
- "../nginx/etc/nginx/conf.d:/etc/nginx/conf.d"
- "../nginx/etc/nginx/ssl:/etc/nginx/ssl"
api:
image: nabed_backend:dev
hostname: api
command: api
extra_hosts:
- "nabed.local:172.17.0.1"
- "cms.nabed.local:172.17.0.1"
deploy:
mode: replicated
replicas: 1
env_file: .api.env
networks:
nabed: {}
cms:
image: nabedd/cms:master
hostname: cms
extra_hosts:
- "nabed.local:172.17.0.1"
- "api.nabed.local:172.17.0.1"
deploy:
mode: replicated
replicas: 1
env_file: .cms.env
volumes:
- "../admin-panel:/admin-panel"
networks:
nabed: {}
networks:
nabed:
driver: overlay
Inside the API container:
$ curl cms.nabed.local
curl: (7) Failed to connect to cms.nabed.local port 80: Connection timed out
Inside the CMS container:
$ curl api.nabed.local
curl: (7) Failed to connect to api.nabed.local port 80: Connection timed out
UPDATE #3:
I was able to fix the issue by putting my hosts into the extra_hosts option in my YAML file, then switching all of my networks to the 'trusted' zone, and then restarting Docker and NetworkManager.
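For anyone hitting the same problem, the commands were roughly the following (a sketch rather than an exact transcript; the connection name "Wi-Fi" and the docker0 interface are illustrative and may differ on your machine):

# move the NetworkManager connection into the trusted zone
nmcli connection modify "Wi-Fi" connection.zone trusted
# also trust the Docker bridge interface
firewall-cmd --permanent --zone=trusted --change-interface=docker0
firewall-cmd --reload
# restart both daemons so the new zones take effect
systemctl restart NetworkManager docker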
Note: to the people who voted to close this question, please try to help instead.
Try a very dirty solution: start your container on the host network with the docker run argument --net=host.
There is probably a better solution, but you didn't provide details about how you start your containers and which networks are available to them.
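For example (a quick connectivity test; the alpine image and the ping target are only illustrations):

docker run --rm --net=host alpine ping -c 1 8.8.8.8

If this succeeds while the same command without --net=host times out, the problem lies in Docker's bridge networking or in the firewall rules in front of it.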
I used kartoza's Docker images for GeoServer and PostGIS and started them in two Docker containers using the provided docker-compose.yml:
version: '2.1'

volumes:
  geoserver-data:
  geo-db-data:

services:
  db:
    image: kartoza/postgis:12.0
    volumes:
      - geo-db-data:/var/lib/postgresql
    ports:
      - "25434:5432"
    env_file:
      - docker-env/db.env
    restart: on-failure
    healthcheck:
      test: "exit 0"
  geoserver:
    image: kartoza/geoserver:2.17.0
    volumes:
      - geoserver-data:/opt/geoserver/data_dir
    ports:
      - "8600:8080"
    restart: on-failure
    env_file:
      - docker-env/geoserver.env
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: curl --fail -s http://localhost:8080/ || exit 1
      interval: 1m30s
      timeout: 10s
      retries: 3
The referenced .env files are:
db.env
POSTGRES_DB=gis,gwc
POSTGRES_USER=docker
POSTGRES_PASS=docker
ALLOW_IP_RANGE=0.0.0.0/0
geoserver.env
GEOSERVER_DATA_DIR=/opt/geoserver/data_dir
ENABLE_JSONP=true
MAX_FILTER_RULES=20
OPTIMIZE_LINE_WIDTH=false
FOOTPRINTS_DATA_DIR=/opt/footprints_dir
GEOWEBCACHE_CACHE_DIR=/opt/geoserver/data_dir/gwc
GEOSERVER_ADMIN_PASSWORD=myawesomegeoserver
INITIAL_MEMORY=2G
MAXIMUM_MEMORY=4G
XFRAME_OPTIONS='false'
STABLE_EXTENSIONS=''
SAMPLE_DATA=false
GEOSERVER_CSRF_DISABLED=true
docker-compose up brings both containers up and running with no errors, giving them the names backend_db_1 (PostGIS) and backend_geoserver_1 (GeoServer). I can access the GeoServer running in backend_geoserver_1 under http://localhost:8600/geoserver/ as expected. I can connect an external, AWS-based PostGIS as a data store to my Docker-based GeoServer instance without any problems. I can also access the PostGIS running in the Docker container backend_db_1 from pgAdmin, with psql from the command line, and from the WebStorm IDE.
However, if I try to use the PostGIS running in backend_db_1 as a data store for the GeoServer running in backend_geoserver_1, I get the following error:
> Error creating data store, check the parameters. Error message: Unable
> to obtain connection: Cannot create PoolableConnectionFactory
> (Connection to localhost:25434 refused. Check that the hostname and
> port are correct and that the postmaster is accepting TCP/IP
> connections.)
So, my GeoServer in backend_geoserver_1 can connect to the PostGIS on AWS, but not to the one running in another Docker container on the same localhost. The PostGIS in backend_db_1, in turn, can be accessed from many other local apps and tools, but not from the GeoServer running in a Docker container.
Any ideas what I am missing? Thanks!
Just add network_mode to both the db and geoserver services in the YAML and set it to host:
network_mode: host
Note that this will ignore the ports mappings and will use the host network as the containers' network.
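A minimal sketch of that change, trimmed to the relevant keys (volumes, env_file and healthcheck stay as in the compose file above):

services:
  db:
    image: kartoza/postgis:12.0
    network_mode: host
    # volumes, env_file, healthcheck unchanged
  geoserver:
    image: kartoza/geoserver:2.17.0
    network_mode: host
    # volumes, env_file, healthcheck unchanged

With both services on the host network, GeoServer reaches PostGIS at localhost:5432 (the container's native port) rather than at the mapped 25434.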
I have successfully set up Docker Desktop for Windows and installed my first Linux containers from Docker Hub. Network-wise, the containers can communicate with each other on the internal Docker network. I am even able to communicate with the host network via host.docker.internal.
Now I am at the point where I want to access the outside network (just some other server on the network of the Docker host) from within a Docker container.
I have read on multiple websites that network_mode: host does not seem to work with Docker Desktop for Windows.
I have not configured any switches within Hyper-V Manager and have not added any routes in Docker, as I am confused by the overall networking concept of Docker Desktop for Windows in combination with Hyper-V and Linux containers.
Below you can see my current docker-compose.yaml with NiFi and Zookeeper installed. NiFi is able to see Zookeeper, and NiFi is able to query data from a database installed on the Docker host. However, I need to query data from a server other than the host.
version: "3.4"
services:
zookeeper:
restart: always
container_name: zookeeper
ports:
- 2181:2181
hostname: zookeeper
image: 'bitnami/zookeeper:latest'
environment:
- ALLOW_ANONYMOUS_LOGIN=yes
nifi:
restart: always
container_name: nifi
image: 'apache/nifi:latest'
volumes:
- D:\Docker\nifi:/data # Data directory
ports:
- 8080:8080 # Unsecured HTTP Web Port
environment:
- NIFI_WEB_HTTP_PORT=8080
- NIFI_CLUSTER_IS_NODE=false
- NIFI_CLUSTER_NODE_PROTOCOL_PORT=8082
- NIFI_ZK_CONNECT_STRING=zookeeper:2181
- NIFI_ELECTION_MAX_WAIT=1 min
depends_on:
- zookeeper
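For reference, this is how I test outside connectivity from within the NiFi container (the target hostname is a placeholder, and this assumes ping is available in the image):

docker exec -it nifi ping -c 1 some-other-server.mynetwork.local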
Check whether the connection type of the DockerNAT switch is set to the appropriate external network, and set its IPv4 configuration to automatic.
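To inspect the Hyper-V switches from an elevated PowerShell before changing anything (a read-only check):

Get-VMSwitch | Select-Object Name, SwitchType, NetAdapterInterfaceDescription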
I have Ansible running on one of my Ubuntu virtual machines on Azure. I am trying to host a website in an Nginx Docker container on a remote machine (the host machine).
I've done everything described in this link:
http://www.inanzzz.com/index.php/post/6138/setting-up-a-nginx-docker-container-on-remote-server-with-ansible
When I run the curl command, it displays all the content of index.html in the terminal as output, but when I try to access the website (the "Welcome to nginx" page) in the browser, it doesn't show anything.
I'm not sure what IP address to assign to the NGINX_IP variable in the docker/.env file shown in this tutorial.
Is there any other tutorial that can help me achieve what I want?
Thanks in advance.
For your issue, the problem is that you do not map the container port to a host port, so the container can only be accessed from inside the host.
The solution is to map the port in the docker-compose file like this:
version: '3'

services:
  nginx_img:
    container_name: ${COMPOSE_PROJECT_NAME}_nginx_con
    build:
      context: ./nginx
    ports:
      - "80:80"
    networks:
      public_net:
        ipv4_address: ${NGINX_IP}

networks:
  public_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: ${NETWORK_SUBNET}
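The ${...} variables come from the docker/.env file. Illustrative values might look like this (the IP must lie inside the subnet; both values here are only examples):

COMPOSE_PROJECT_NAME=mysite
NGINX_IP=172.20.0.5
NETWORK_SUBNET=172.20.0.0/16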
The Docker container then runs like this: [screenshot of the running container in the original post]
As the last step, you need to allow port 80 in the NSG (network security group) associated with the VM on which you run nginx. Then you can access nginx from outside the VM in the browser.
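With the Azure CLI, opening the port can look like this (the resource group and VM name are placeholders):

az vm open-port --resource-group myResourceGroup --name myVM --port 80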
I have a VPS (Ubuntu 16.04) on which I deploy a website with docker-compose; it worked fine before.
My docker-compose.yml file looks like:
version: '2'

services:
  backend:
    build: ./backend
    restart: always
    command: uwsgi --ini /opt/workspace/backend/uwsgi.ini
  nginx:
    image: nginx:latest
    expose:
      - "80:80"
    restart: always
  redis:
    image: redis:latest
    volumes:
      - redis-data:/data
    environment:
      - ALLOW_EMPTY_PASSWORD=yes

volumes:
  redis-data:
However, recently it has been suffering intermittent DNS failures (every 2-3 days).
The MySQL client raises an error:
Can't connect to MySQL server on 'xxx.xxx.com' (that host is on the internet)
The Redis client raises an error:
ConnectionError: Error -3 connecting to redis:6379. Temporary failure in name resolution.
When the problem happens, pinging the VPS's IP works, but SSH does not.
What's wrong?
This is not a DNS issue. Check the logs on your server; the server might be too busy to answer at any given point in time. There can be multiple reasons for the server being busy, e.g. it could be kept busy by bots, or some other process might be running.
And since you have a publicly open MySQL port, that will most likely be the culprit.
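For example, to check along those lines (the service names come from the compose file above):

# recent logs from the compose services
docker-compose logs --tail=100 backend nginx redis
# recent messages from the Docker daemon itself
journalctl -u docker --since "1 hour ago"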
I wrote a docker-compose.yml file which downloads the images from the Docker Store. I had already subscribed to that image in the Docker Store and I am able to pull it. The following are the services I am using in my compose file:
store/datastax/dse-server:5.1.6
datastax/dse-studio
The link I followed to write the compose file is datastax/docker-images.
I am running Docker from Docker Toolbox because I am using Windows 7.
version: '2'

services:
  seed_node:
    image: "store/datastax/dse-server:5.1.6"
    environment:
      - DS_LICENSE=accept
    # Allow DSE to lock memory with mlock
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock: -1
  node:
    image: "store/datastax/dse-server:5.1.6"
    environment:
      - DS_LICENSE=accept
      - SEEDS=seed_node
    links:
      - seed_node
    # Allow DSE to lock memory with mlock
    cap_add:
      - IPC_LOCK
    ulimits:
      memlock: -1
  studio:
    image: "datastax/dse-studio"
    environment:
      - DS_LICENSE=accept
    ports:
      - 9091:9091
When I go to http://192.168.99.100:9091/ in the browser and try to create a connection, I get the following errors:
TEST FAILED
All host(s) tried for query failed (tried: /192.168.99.100:9042 (com.datastax.driver.core.exceptions.TransportException: [/192.168.99.100:9042] Cannot connect))
Docker Compose creates a default internal network where all your containers get IP addresses and can communicate. The IP address you're using there (192.168.99.100) is the address of the host that's running the containers, not one of the internal IP addresses where the containers communicate with each other on that default internal network. Port 9091, where you're running Studio, is available on that external IP address because you exposed it in the studio service of your YAML:
ports:
  - 9091:9091
For Studio to make a connection to one of your nodes, you need to be using an IP on that internal network where they communicate, not on that external IP. The cool thing with Docker Compose is that instead of trying to figure out those internal IPs, you can just use a hostname that matches the name of your service in the docker-compose.yaml file.
So to connect to the service you named node (i.e. the DSE node), you should just use the hostname node (instead of an IP) when creating the connection in Studio.
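A quick way to verify that the internal hostname resolves before trying it in Studio (a sketch; this assumes ping is available in the Studio image):

docker-compose exec studio ping -c 1 node

In Studio's connection dialog you would then enter node as the host and keep the default native-protocol port 9042.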