I am new to Docker; this is my first attempt at using it.
I have set up a Docker container on an AWS Debian 9 host and started it:
#docker-compose up -d
This is the section related to the web app:
waweb:
  image: docker.whatsapp.biz/web:v${WA_API_VERSION:?Run docker-compose with env var WA_API_VERSION (ex. WA_API_VERSION=2.29.2 docker-compose <command> <options>)}
  command: ["/opt/whatsapp/bin/wait_on_mysql.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
  ports:
    - "9090:443"
  volumes:
    - whatsappMedia:/usr/local/wamedia
  env_file:
    - db.env
  environment:
    WACORE_HOSTNAME: wacore
    # This is the version of the docker templates being used to run WhatsApp Business API
    WA_RUNNING_ENV_VERSION: v2.2.3
    ORCHESTRATION: DOCKER-COMPOSE
  depends_on:
    - "db"
    - "wacore"
  links:
    - db
    - wacore
  network_mode: bridge
When I test this, everything appears to be correct and the container is listening on 9090:
# docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------------------
wabiz_db_1 docker-entrypoint.sh mysqld Up 0.0.0.0:33060->3306/tcp, 33060/tcp
wabiz_wacore_1 /opt/whatsapp/bin/wait_on_ ... Up 6250/tcp, 6251/tcp, 6252/tcp, 6253/tcp
wabiz_waweb_1 /opt/whatsapp/bin/wait_on_ ... Up 0.0.0.0:9090->443/tcp
and:
# netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 18818/sshd
tcp6 0 0 :::22 :::* LISTEN 18818/sshd
tcp6 0 0 :::9090 :::* LISTEN 32144/docker-proxy
tcp6 0 0 :::33060 :::* LISTEN 32361/docker-proxy
I can test-connect to this locally:
# telnet localhost 9090
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
Yet when I attempt to connect to it remotely, the connection is refused.
The firewall ports are all open to my IP (1-65535); I can remotely telnet to port 22, and I can also start a simple Python HTTP server and connect to that remotely.
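For reference, this is the kind of quick sanity check I mean (a minimal sketch; the port number and the public-IP placeholder are just examples, and on older Pythons the module is SimpleHTTPServer):

# on the server: serve the current directory over HTTP
python3 -m http.server 8000

# from a remote machine: this connects fine, so plain TCP to the host works
telnet <server-public-ip> 8000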
I thought that maybe IPv6 was being forced, but it is not:
# sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
Any suggestions on what the issue may be?
I have a Linux server with PostgreSQL installed (psql (15.2 (Ubuntu 15.2-1.pgdg22.04+1))), running on Oracle Cloud.
I am trying to connect using the command
psql -h 129.213.17.88 -p 5432 -d breedingdb -U postgres
Where 129.213.17.88 is the public IP of the server in Oracle.
Error message:
psql: error: connection to server at "129.213.17.88", port 5432 failed: No route to host
Is the server running on that host and accepting TCP/IP connections?
sudo ufw status
5432 ALLOW Anywhere
5432/tcp ALLOW Anywhere
5432 (v6) ALLOW Anywhere (v6)
5432/tcp (v6) ALLOW Anywhere (v6)
sudo systemctl status postgresql
I have changed postgresql.conf to include:
listen_addresses = '*'
port = 5432
I have changed pg_hba.conf to include:
host all all 0.0.0.0/0 md5
host all all ::1/128 md5
After that I ran sudo systemctl restart postgresql.
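A quick way to confirm the restarted server actually picked up the new settings (a minimal sketch, run locally on the server):

# ask the running server which settings are in effect
sudo -u postgres psql -c 'SHOW listen_addresses;'
sudo -u postgres psql -c 'SHOW port;'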
(screenshot: inbound rules on Oracle Cloud)
sudo netstat -plunt |grep postgres
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 41326/postgres
tcp6 0 0 :::5432 :::* LISTEN 41326/postgres
I have no problems connecting locally.
sudo nmap -sS 129.213.17.88 -p 5432
Starting Nmap 7.80 ( https://nmap.org ) at 2023-02-18 00:14 UTC
Nmap scan report for 129.213.17.88
Host is up (0.00045s latency).
PORT STATE SERVICE
5432/tcp filtered postgresql
Nmap done: 1 IP address (1 host up) scanned in 0.12 seconds
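Note that filtered (rather than closed) in the nmap output usually means a packet filter is dropping the probes before they reach the port. One thing I am not sure about is whether some firewall other than ufw is involved; some cloud images ship iptables rules that ufw does not display, which could be checked with something like:

# list the raw INPUT chain; ufw only shows its own rules
sudo iptables -L INPUT -n --line-numbers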
I set up Docker for a Django project on a Linux server. However, when I run the Django project, it cannot be accessed over the Internet. The project is built with a Docker automated build and pulled using docker-compose. The docker-compose ps command gives the following output, indicating the project is running.
~/otree-docker$ docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------
otreedocker_database_1 docker-entrypoint.sh postgres Up 5432/tcp
otreedocker_otree_1 /bin/sh -c /opt/otree/entr ... Up 0.0.0.0:80->80/tcp
otreedocker_redis_1 docker-entrypoint.sh redis ... Up 6379/tcp
When I use nginx to set up a test webpage, however, it can be accessed without any issue.
Most importantly, I used the same Docker automated build on another server and it ran without any issue, so the problem must be the setup of this server, not the Django project or the Docker build. Can anyone suggest where to look? I have been struggling with this for several days and have no idea where to check.
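One comparison that might help narrow it down (a sketch; the public-IP placeholder is hypothetical): if the page answers locally but not remotely, the container is fine and the problem sits in front of it (firewall, security group, or routing).

# on the server itself
curl -I http://localhost:80
# from a remote machine
curl -I http://<server-public-ip>:80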
BTW, when I check port usage while the Django project is running, I get the following:
~$ netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:49471 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:10050 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:4410 0.0.0.0:* LISTEN
tcp6 0 0 :::4000 :::* LISTEN
tcp6 0 0 :::10050 :::* LISTEN
tcp6 0 0 :::9390 :::* LISTEN
tcp6 0 0 :::111 :::* LISTEN
tcp6 0 0 :::80 :::* LISTEN
tcp6 0 0 :::5555 :::* LISTEN
tcp6 0 0 :::46361 :::* LISTEN
tcp6 0 0 :::25 :::* LISTEN
tcp6 0 0 :::4410 :::* LISTEN
When I put up a test webpage using nginx, I get the following:
~$ netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:49471 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:10050 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:4410 0.0.0.0:* LISTEN
tcp6 0 0 :::4000 :::* LISTEN
tcp6 0 0 :::10050 :::* LISTEN
tcp6 0 0 :::9390 :::* LISTEN
tcp6 0 0 :::111 :::* LISTEN
tcp6 0 0 :::5555 :::* LISTEN
tcp6 0 0 :::46361 :::* LISTEN
tcp6 0 0 :::25 :::* LISTEN
tcp6 0 0 :::4410 :::* LISTEN
Django shows up under tcp6 while nginx shows up under tcp. I don't know much about server setup, but might this issue be caused by some restriction on IPv6 on the server?
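For what it's worth, a tcp6 socket bound to :::80 normally accepts IPv4 connections too, as long as the kernel's dual-stack behaviour is enabled; that can be checked with:

# 0 means IPv6 sockets also accept IPv4 connections (dual-stack)
sysctl net.ipv6.bindv6only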
My docker-compose.yml is as follows:
version: "2"
services:
  database:
    image: postgres:9.5
    environment:
      POSTGRES_DB: ${POSTGRES_DATABASE}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DATA: /var/lib/postgresql/data/pgdata
    logging:
      options:
        max-size: "10m"
        max-file: "3"
    restart: always
    read_only: true
    volumes:
      - "database:/var/lib/postgresql/data"
    tmpfs:
      - "/tmp"
      - "/run"
    networks:
      - db-net
  otree:
    # if using Docker Hub, leave "build: ./" commented out.
    # if you want to build an image locally, uncomment it.
    # build: ./
    image: myusername/otree_experiment:latest
    environment:
      DATABASE_URL: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@database/${POSTGRES_DATABASE}"
      OTREE_ADMIN_PASSWORD: ${OTREE_ADMIN_PASSWORD}
      OTREE_PRODUCTION: ${OTREE_PRODUCTION}
      OTREE_AUTH_LEVEL: ${OTREE_AUTH_LEVEL}
    ports:
      - ${OTREE_PORT}:80
    volumes:
      - "otree-resetdb:/opt/init"
      # Uncomment for live editing
      # - ./:/opt/otree
    restart: always
    logging:
      options:
        max-size: "10m"
        max-file: "3"
    networks:
      - db-net
      - redis-net
  redis:
    image: redis
    command: "redis-server"
    logging:
      options:
        max-size: "10m"
        max-file: "3"
    restart: always
    read_only: true
    networks:
      - redis-net
volumes:
  database:
  otree-resetdb:
networks:
  db-net:
  redis-net:
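Since the file references variables such as ${OTREE_PORT}, docker-compose needs them at run time, typically from a .env file next to docker-compose.yml. A sketch with placeholder values (not my real ones):

# .env
POSTGRES_DATABASE=otree
POSTGRES_USER=otree
POSTGRES_PASSWORD=change-me
OTREE_ADMIN_PASSWORD=change-me
OTREE_PRODUCTION=1
OTREE_AUTH_LEVEL=STUDY
OTREE_PORT=80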
I'm not sure if this is 100% a programming or a sysadmin question.
I'm trying to set up a docker-compose file (version 3) for Docker Swarm (Docker 1.13) to test Spark in my local workflow.
Sadly, port 7077 only gets bound to localhost on my swarm cluster, so it is not reachable from the outside world, where my Spark app is trying to connect to it.
Does anyone have an idea how to get docker-compose in swarm mode to bind to all interfaces?
I publish my ports, and this works fine for, say, 8080, but not for 7077.
nmap output:
Starting Nmap 7.01 ( https://nmap.org ) at 2017-03-02 11:27 PST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000096s latency).
Other addresses for localhost (not scanned): ::1
Not shown: 994 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
443/tcp open https
8080/tcp open http-proxy
8081/tcp open blackice-icecap
8888/tcp open sun-answerbook
Explanation of ports:
8081 is my spark worker
8080 is my spark master frontend
8888 is the spark hue frontend
nmap does not list 7077
Using netstat:
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1641/sshd
tcp6 0 0 :::4040 :::* LISTEN 1634/dockerd
tcp6 0 0 :::2377 :::* LISTEN 1634/dockerd
tcp6 0 0 :::7946 :::* LISTEN 1634/dockerd
tcp6 0 0 :::80 :::* LISTEN 1634/dockerd
tcp6 0 0 :::8080 :::* LISTEN 1634/dockerd
tcp6 0 0 :::8081 :::* LISTEN 1634/dockerd
tcp6 0 0 :::6066 :::* LISTEN 1634/dockerd
tcp6 0 0 :::22 :::* LISTEN 1641/sshd
tcp6 0 0 :::8888 :::* LISTEN 1634/dockerd
tcp6 0 0 :::443 :::* LISTEN 1634/dockerd
tcp6 0 0 :::7077 :::* LISTEN 1634/dockerd
And I can connect to 7077 over telnet on localhost without any issues, but outside of localhost I receive a connection refused error.
At this point (please bear with me, I'm not a sysadmin, I'm a software guy), I'm starting to suspect this is somehow related to the Docker mesh network.
Docker compose section for my master configuration:
#the spark master, having to run on the frontend of the cluster
master:
  image: eros.fiehnlab.ucdavis.edu/spark
  command: bin/spark-class org.apache.spark.deploy.master.Master -h master
  hostname: master
  environment:
    MASTER: spark://master:7077
    SPARK_CONF_DIR: /conf
    SPARK_PUBLIC_DNS: blonde.fiehnlab.ucdavis.edu
  ports:
    - 4040:4040
    - 6066:6066
    - 8080:8080
    - 7077:7077
  volumes:
    - /tmp:/tmp/data
  networks:
    - spark
    - frontends
  deploy:
    placement:
      #only run on manager node
      constraints:
        - node.role == manager
The networks spark and frontends are both overlay networks.
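One way to see what the swarm actually published (a sketch; the service name depends on the stack name used at deploy time):

# list services and their published ports
docker service ls
# show the endpoint ports for the master service
docker service inspect --format '{{json .Endpoint.Ports}}' <stack>_master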
The issue was a configuration error in the docker-compose file: the -h master in the original configuration always bound to the localhost interface, even after specifying the SPARK_LOCAL_IP value. Removing it fixed the problem:
master:
  image: eros.fiehnlab.ucdavis.edu/spark:latest
  command: bin/spark-class org.apache.spark.deploy.master.Master
  hostname: master
  environment:
    SPARK_CONF_DIR: /conf
    SPARK_PUBLIC_DNS: blonde.fiehnlab.ucdavis.edu
    SPARK_LOCAL_IP: 0.0.0.0
  ports:
    - 4040:4040
    - 6066:6066
    - 8080:8080
    - 7077:7077
  volumes:
    - /tmp:/tmp/data
  deploy:
    placement:
      #only run on manager node
      constraints:
        - node.role == manager
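After redeploying, a quick check from a machine outside the cluster confirms the port is reachable (a sketch, using the public DNS from the config):

# should now connect instead of being refused
telnet blonde.fiehnlab.ucdavis.edu 7077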
I created a CentOS instance on GCE and installed dsc-cassandra 3.0. Then I changed rpc_address in cassandra.yaml from localhost to the internal IP or the external IP.
On the VM, I started Cassandra and used cqlsh to access it successfully. But I couldn't use cqlsh internal_ip or cqlsh external_ip.
I also opened port tcp:9042 for this instance.
But I still couldn't access Cassandra from my local Java app; it fails with NoHostAvailableException (Cannot connect).
By the way, I did the same thing on a local VM running under VirtualBox, and I could access that one.
Running sudo netstat -lntp | grep <pid> displayed:
tcp 0 0 127.0.0.1:33743 0.0.0.0:* LISTEN 1207/java
tcp 0 0 127.0.0.1:7000 0.0.0.0:* LISTEN 1207/java
tcp 0 0 127.0.0.1:7199 0.0.0.0:* LISTEN 1207/java
tcp6 0 0 127.0.0.1:9042 :::* LISTEN 1207/java
The IP address was still 127.0.0.1. I think this is the problem.
How do I configure the cassandra.yaml file?
I found out where I went wrong.
I used sudo service cassandra restart to restart Cassandra after editing cassandra.yaml. The terminal showed:
Restarting cassandra (via systemctl): [ OK ]
Actually, I don't think it really restarted it. I then used nodetool stopdaemon to stop Cassandra and started it again, and the cassandra.yaml configuration took effect.
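Roughly, the sequence that worked (a sketch; service name as on my CentOS install):

# stop the daemon cleanly instead of relying on systemctl restart
nodetool stopdaemon
# then start it again
sudo service cassandra start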
Helpful commands:
1. Verify the IP/port the Cassandra service is bound to on the remote VM:
ps aux | grep cassandra
sudo netstat -lntp | grep <cassandra_pid>
tcp 0 0 127.0.0.1:7000 0.0.0.0:* LISTEN 5928/java
tcp 0 0 127.0.0.1:42682 0.0.0.0:* LISTEN 5928/java
tcp 0 0 127.0.0.1:7199 0.0.0.0:* LISTEN 5928/java
tcp6 0 0 10.138.0.2:9042 :::* LISTEN 5928/java
2. Verify from the local machine that the Cassandra IP/port is reachable:
telnet <cassandra_ip> 9042
This is my problem:
# docker exec -ti root_web_1 bash
[root@ca32f79bdc14]# curl couchdb:5984
curl: (7) Failed to connect to couchdb port 5984: Connection refused
[root@ca32f79bdc14]# curl redis:6379
-ERR wrong number of arguments for 'get' command
-ERR unknown command 'Host:'
-ERR unknown command 'User-Agent:'
-ERR unknown command 'Accept:'
^C
Question
Why can't I access couchdb:5984?
Background
When I am in my couchdb container, I can curl localhost:5984 and it responds; netstat -nl gives me
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.1:5984 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.11:35300 0.0.0.0:* LISTEN
udp 0 0 127.0.0.11:51267 0.0.0.0:*
and the Dockerfile contains EXPOSE 5984, but I get connection refused when doing curl couchdb:5984 from the web container.
When I do the same with redis (curl redis:6379), it responds, and netstat -nl gives
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.11:46665 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN
tcp6 0 0 :::6379 :::* LISTEN
udp 0 0 127.0.0.11:49518 0.0.0.0:*
This is the couchdb Dockerfile
FROM fedora:25
RUN dnf -y update
RUN dnf -y install couchdb
EXPOSE 5984
CMD ["/usr/bin/couchdb"]
This is the docker-compose.yml.
version: '2'
networks:
  revproxynet:
    external: true
services:
  web:
    image: nginx
    networks:
      - revproxynet
  redis:
    image: redis
    networks:
      - revproxynet
  couchdb:
    build: /docker/couchdb/
    networks:
      - revproxynet
The network is created with docker network create revproxynet.
By default CouchDB binds only to 127.0.0.1 (as the netstat output above shows), so other containers cannot reach it. In /etc/couchdb/local.ini you need to have
[httpd]
bind_address = 0.0.0.0
and it will work.
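After changing the setting, rebuilding and retrying from the web container should return CouchDB's welcome JSON (a sketch, using the container names from the question):

docker-compose build couchdb
docker-compose up -d
docker exec -ti root_web_1 curl couchdb:5984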