I have a server running on a Linux/Debian machine. I can GET/PUT to it correctly from within the same machine.
$ curl -v -X PUT -d "blabla" 127.0.1.1:5678
* About to connect() to 127.0.1.1 port 5678 (#0)
* Trying 127.0.1.1...
* connected
* Connected to 127.0.1.1 (127.0.1.1) port 5678 (#0)
> PUT / HTTP/1.1
> User-Agent: curl/7.26.0
> Host: 127.0.1.1:5678
> Accept: */*
> Content-Length: 6
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 6 out of 6 bytes
* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 405 Unsupported request method: "PUT"
< Connection: close
< Server: My Server v1.0.0
<
* Closing connection #0
However, if I try from another machine on the same local network, here is what it says:
$ curl -v -X PUT -d "blabla" 192.168.0.21:5678
* About to connect() to 192.168.0.21 port 5678 (#0)
* Trying 192.168.0.21...
* Connection refused
* couldn't connect to host
* Closing connection #0
curl: (7) couldn't connect to host
From the server side, no firewall is running:
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Here is what netstat reveals:
$ netstat -alnp | grep 5678
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.1.1:5678 0.0.0.0:* LISTEN -
Is there a way to debug what could be going on?
The webapp is listening on 127.0.1.1, which is a loopback address. To be reachable from other machines it would have to listen on 192.168.0.21:5678, or on *:5678, i.e. all interfaces.
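As a rough illustration (a sketch, assuming ss is available; column spacing varies by version), the first line below is the shape of the socket shown in the netstat output above, and the second is the shape of a socket that would be reachable from the LAN:

$ ss -lnt | grep 5678
LISTEN 0 128 127.0.1.1:5678 0.0.0.0:*    <-- bound to loopback only, LAN clients get "connection refused"
LISTEN 0 128   0.0.0.0:5678 0.0.0.0:*    <-- bound to all interfaces, reachable from the LAN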
I had to comment out the following line:
$ cat /etc/hosts
#127.0.1.1 [...]
This matches my Debian Squeeze setup and now appears to be working as expected. I have no clue why this extra line messed things up so badly. Apparently it is caused by an issue in a GNOME package, but this server does not even have X installed.
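For reference, a minimal sketch of the relevant part of /etc/hosts after the change ("myserver" is a hypothetical hostname, not taken from the machine above):

127.0.0.1      localhost
#127.0.1.1     myserver       <-- commented out so the hostname no longer resolves to 127.0.1.1
192.168.0.21   myserver       <-- optionally map the hostname to the LAN address instead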
Related
I'm trying to make a GET request to an old Linux machine using cURL inside WSL2/Debian. The connection between my Windows PC and the remote Linux machine goes over a VPN. The VPN is working: I can ping the IP and also VNC to it (from Windows).
The curl command I'm using on WSL2/Debian is:
curl -k --header 'Host: 10.48.1.3' --max-time 4 --location --request GET 'https://10.48.1.3/path/to/API/json/get?id=Parameter1&id=Parameter2'
Using the verbose option, I get:
Note: Unnecessary use of -X or --request, GET is already inferred.
* Expire in 0 ms for 6 (transfer 0x555661293fb0)
* Expire in 4000 ms for 8 (transfer 0x555661293fb0)
* Trying 10.48.1.3...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x555661293fb0)
* Connection timed out after 4001 milliseconds
* Closing connection 0
curl: (28) Connection timed out after 4001 milliseconds
After the max-time of 4 seconds, the command is cancelled.
When I execute the same command on the same computer, but using Windows PowerShell, it works:
curl.exe -k --max-time 4 --location --request GET 'https://10.48.1.3/path/to/API/json/get?id=Parameter1&id=Parameter2' -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.48.1.3:443...
* Connected to 10.48.1.3 (10.48.1.3) port 443 (#0)
* schannel: disabled automatic use of client certificate
* schannel: using IP address, SNI is not supported by OS.
* schannel: ALPN, offering http/1.1
* ALPN, server did not agree to a protocol
> GET /path/to/API/json/get?id=Parameter1&id=Parameter2 HTTP/1.1
> Host: 10.48.1.3
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Tue, 17 May 2022 15:39:00 GMT
< Server: IosHttp/0.0.1 (MANUFACTURER)
< Content-Type: application/json
< Content-Length: 239
< Strict-Transport-Security: max-age=15768000
<
PAYLOAD OF API* Connection #0 to host 10.48.1.3 left intact
Using Postman inside Windows works also.
Inside WSL2/Debian I'm able to ping the machine, but ssh does not work either; the cursor just sits there blinking without ever getting an answer from the remote machine:
$ ping 10.48.1.3 -c 2
PING 10.48.1.3 (10.48.1.3) 56(84) bytes of data.
64 bytes from 10.48.1.3: icmp_seq=1 ttl=61 time=48.9 ms
64 bytes from 10.48.1.3: icmp_seq=2 ttl=61 time=28.4 ms
--- 10.48.1.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 28.353/38.636/48.919/10.283 ms
$ ssh -c aes256-cbc root@10.48.1.3
^C # Cancelled as nothing happened for several minutes
In Windows PowerShell, both ping and ssh work:
> ssh -c aes256-cbc root@10.48.1.3
The authenticity of host '10.48.1.3 (10.48.1.3)' can't be established.
ECDSA key fingerprint is SHA256:FINGERPRINTDATAOFMACHINE.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
I have about 100 similar machines in the field that I need to cURL, and this error occurs on about 10 of them; the other 90 work fine (also from WSL2/Debian).
I suspect the problem may come from the SSL version on my WSL2/Debian setup... Does anyone have an idea how to solve this?
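A couple of extra checks I can run from WSL2 to narrow this down (these commands are my own diagnostics, not output I already have): whether a plain TCP connection to port 443 gets through at all, and whether large, non-fragmentable packets survive the VPN path, since a too-small MTU inside WSL2 is one possible explanation for small pings working while TCP sessions stall.

$ nc -vz 10.48.1.3 443            # does a bare TCP connection to the HTTPS port succeed?
$ ping -M do -s 1400 10.48.1.3    # do large packets with the DF bit set make it through?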
General context
The Docker daemon comes with an embedded DNS server. It resolves local Docker Swarm and network records and forwards queries for external records to an upstream nameserver configured with --dns.
Docs say you can set an IP address for this upstream nameserver with --dns=[IP_ADDRESS...]. The default port used is 53.
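For reference, the two usual places this upstream address gets set (values taken from this question): daemon-wide in /etc/docker/daemon.json as

{ "dns": ["10.99.0.1"] }

or per container, for example

$ docker run --rm --dns 10.99.0.1 busybox nslookup accounts.google.com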
My question
Can I configure the port used as well?
My host's /etc/docker/daemon.json contains "dns": ["10.99.0.1"]. Is there a way for me to specify something like "dns": ["10.99.0.1:53"], so that dockerd always knows to forward DNS queries to port 53?
My use case
In my case, 10.99.0.1 is the IP of a bridge interface on the host. I run a local DNS caching server on this host, so DNS queries sent to 10.99.0.1:53 work. But dockerd forwards queries originating from containers connected to user-defined bridge networks (created with docker network create) to non-standard ports that it picks. See the terminal output below.
Detailed terminal output and debugging info
"toogle" is a Docker container connected to a Docker network I created with docker network create. 127.0.0.11 is another loopback address. DNS queries originating from within Docker containers connected to user-defined Docker networks are destined for this IP.
Is Docker's embedded DNS server actually running?
DNS queries are routed by toogle's firewall rules as follows:
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 1 packets, 60 bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 1 packets, 60 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER_OUTPUT all -- * * 0.0.0.0/0 127.0.0.11
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER_POSTROUTING all -- * * 0.0.0.0/0 127.0.0.11
Chain DOCKER_OUTPUT (1 references)
pkts bytes target prot opt in out source destination
0 0 DNAT tcp -- * * 0.0.0.0/0 127.0.0.11 tcp dpt:53 to:127.0.0.11:37619 <-- look at this rule
0 0 DNAT udp -- * * 0.0.0.0/0 127.0.0.11 udp dpt:53 to:127.0.0.11:58552 <-- look at this rule
Chain DOCKER_POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 SNAT tcp -- * * 127.0.0.11 0.0.0.0/0 tcp spt:37619 to::53
0 0 SNAT udp -- * * 127.0.0.11 0.0.0.0/0 udp spt:58552 to::53
Whatever is listening on those ports accepts both TCP and UDP connections:
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) nc 127.0.0.11 37619 -vz
127.0.0.11: inverse host lookup failed: Host name lookup failure
(UNKNOWN) [127.0.0.11] 37619 (?) open
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) nc 127.0.0.11 58552 -vzu
127.0.0.11: inverse host lookup failed: Host name lookup failure
(UNKNOWN) [127.0.0.11] 58552 (?) open
But there is no DNS reply from either:
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) dig @127.0.0.11 -p 58552 accounts.google.com
; <<>> DiG 9.11.3-1ubuntu1.14-Ubuntu <<>> @127.0.0.11 -p 58552 accounts.google.com
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) dig @127.0.0.11 -p 37619 accounts.google.com +tcp
; <<>> DiG 9.11.3-1ubuntu1.14-Ubuntu <<>> @127.0.0.11 -p 37619 accounts.google.com +tcp
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
dockerd is listening for DNS queries at that IP and those ports inside toogle's network namespace.
$ sudo nsenter -n -p -t $(docker inspect --format {{.State.Pid}} toogle) ss -utnlp
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp UNCONN 0 0 127.0.0.11:58552 0.0.0.0:* users:(("dockerd",pid=10984,fd=38))
tcp LISTEN 0 128 127.0.0.11:37619 0.0.0.0:* users:(("dockerd",pid=10984,fd=40))
tcp LISTEN 0 128 *:80 *:* users:(("toogle",pid=12150,fd=3))
But dockerd is trying to forward the DNS query to 10.99.0.1, which is my docker0 bridge network interface.
$ sudo journalctl --follow -u docker
-- Logs begin at Tue 2019-11-05 18:17:27 UTC. --
Apr 22 15:43:12 my-host dockerd[10984]: time="2021-04-22T15:43:12.496979903Z" level=debug msg="[resolver] read from DNS server failed, read udp 172.20.0.127:37928->10.99.0.1:53: i/o timeout"
Apr 22 15:43:13 my-host dockerd[10984]: time="2021-04-22T15:43:13.496539033Z" level=debug msg="Name To resolve: accounts.google.com."
Apr 22 15:43:13 my-host dockerd[10984]: time="2021-04-22T15:43:13.496958664Z" level=debug msg="[resolver] query accounts.google.com. (A) from 172.20.0.127:51642, forwarding to udp:10.99.0.1"
It looks as if dockerd forwards the DNS query that arrived at 127.0.0.11:58552 on to 10.99.0.1, changing only the IP and not the port. So the query ends up at 10.99.0.1:58552, where nothing is listening.
$ dig @10.99.0.1 -p 58552 accounts.google.com
[NO RESPONSE]
$ nc 10.99.0.1 58552 -vz
10.99.0.1: inverse host lookup failed: Unknown host
(UNKNOWN) [10.99.0.1] 58552 (?) : Connection refused
A DNS query to 10.99.0.1:53 works as expected.
$ dig @10.99.0.1 -p 53 accounts.google.com
; <<>> DiG 9.11.3-1ubuntu1.14-Ubuntu <<>> @10.99.0.1 -p 53 accounts.google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53674
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;accounts.google.com. IN A
;; ANSWER SECTION:
accounts.google.com. 235 IN A 142.250.1.84
;; Query time: 0 msec
;; SERVER: 10.99.0.1#53(10.99.0.1)
;; WHEN: Thu Apr 22 17:20:09 UTC 2021
;; MSG SIZE rcvd: 64
I don't think there's a way to do this. I also misread the output: the Docker daemon was in fact forwarding to port 53, as this log line shows:
read udp 172.20.0.127:37928->10.99.0.1:53: i/o timeout
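A quick way to confirm this on the host, for anyone hitting the same confusion, is to watch the traffic toward the upstream resolver while a container performs a lookup (the tcpdump invocation is my own, not part of the original debugging); the forwarded queries show up with destination port 53:

$ sudo tcpdump -ni any udp port 53 and host 10.99.0.1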
I'm creating resources using Terraform. I have a node in the subnet 172.1.0.0 with no security group or rules assigned to it; the node has two endpoints, on ports 22 and 80.
I used nmap to confirm that the ports are open and they are:
nmap -Pn -sT -p T:22 some-ingress.cloudapp.net
Starting Nmap 6.47 ( http://nmap.org ) at 2016-07-08 19:02 EAT
Nmap scan report for some-ingress.cloudapp.net (1.2.3.4)
Host is up (0.31s latency).
PORT STATE SERVICE
22/tcp open ssh
Nmap done: 1 IP address (1 host up) scanned in 1.34 seconds
nmap -Pn -sT -p T:80 some-ingress.cloudapp.net
Starting Nmap 6.47 ( http://nmap.org ) at 2016-07-08 19:02 EAT
Nmap scan report for some-ingress.cloudapp.net (1.2.3.4)
Host is up (0.036s latency).
PORT STATE SERVICE
80/tcp open http
Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds
I've installed nginx on the node, and running curl 172.1.0.4, curl 127.0.0.1, or curl 0.0.0.0 gets me the default nginx page.
However, running curl <public ip> or curl <dns name> hangs and eventually gives me:
curl: (56) Recv failure: Connection reset by peer
My iptables rules are:
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT udp -- anywhere anywhere udp dpt:bootpc
ACCEPT tcp -- anywhere anywhere tcp dpt:domain
ACCEPT udp -- anywhere anywhere udp dpt:domain
ACCEPT tcp -- anywhere anywhere tcp dpt:bootps
ACCEPT udp -- anywhere anywhere udp dpt:bootps
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Why can't I access nginx at the node's public IP?
The node is in a virtual network 172.1.0.0/16, in the subnet 172.1.0.0/24. I can post the Terraform configs if needed.
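In case it helps, a few additional checks I can run on the node itself to see whether the resets come from the node or from something in front of it (commands chosen by me; the interface name is an assumption):

$ sudo ss -tlnp | grep ':80'          # is nginx bound to 0.0.0.0:80 or only to a private address?
$ curl -v http://<public ip>/         # the same request from the node itself, to rule out the network path
$ sudo tcpdump -ni eth0 tcp port 80   # do the external requests even reach the node?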
I am installing a MusicBox frontend on a Debian server.
Everything works on the server itself, by accessing 127.0.0.1:6680.
From other machines in the same subnet I can't reach this webpage using 192.168.0.50:6680.
I added the port to iptables; this is the output I now have:
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:6600
ACCEPT tcp -- anywhere anywhere tcp dpt:6680
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:6680
ACCEPT tcp -- anywhere anywhere tcp dpt:6600
When I use nmap to inspect the ports, port 6680 doesn't appear to be reachable:
Starting Nmap 6.47 ( http://nmap.org ) at 2015-02-08 03:32 Romance Standard Time
NSE: Loaded 118 scripts for scanning.
NSE: Script Pre-scanning.
Initiating ARP Ping Scan at 03:32
Scanning 192.168.0.50 [1 port]
Completed ARP Ping Scan at 03:32, 0.13s elapsed (1 total hosts)
Initiating Parallel DNS resolution of 1 host. at 03:32
Completed Parallel DNS resolution of 1 host. at 03:32, 0.02s elapsed
Initiating SYN Stealth Scan at 03:32
Scanning 192.168.0.50 [1000 ports]
Discovered open port 22/tcp on 192.168.0.50
Discovered open port 3389/tcp on 192.168.0.50
Completed SYN Stealth Scan at 03:32, 0.21s elapsed (1000 total ports)
Initiating Service scan at 03:32
Scanning 2 services on 192.168.0.50
Completed Service scan at 03:33, 6.01s elapsed (2 services on 1 host)
Initiating OS detection (try #1) against 192.168.0.50
NSE: Script scanning 192.168.0.50.
Initiating NSE at 03:33
Completed NSE at 03:33, 1.12s elapsed
Nmap scan report for 192.168.0.50
Host is up (0.0023s latency).
Not shown: 998 closed ports
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 6.0p1 Debian 4+deb7u2 (protocol 2.0)
3389/tcp open ms-wbt-server xrdp
MAC Address: XXXXXXXXXXXX
Device type: general purpose
Running: Linux 3.X
OS CPE: cpe:/o:linux:linux_kernel:3
OS details: Linux 3.11 - 3.14
Uptime guess: 0.031 days (since Sun Feb 08 02:47:48 2015)
Network Distance: 1 hop
TCP Sequence Prediction: Difficulty=262 (Good luck!)
IP ID Sequence Generation: All zeros
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
TRACEROUTE
HOP RTT ADDRESS
1 2.27 ms 192.168.0.50
Is the MusicBox listen address 127.0.0.1:6680? If so, you can't reach the webpage via 192.168.0.50:6680, because the service only listens on the loopback interface. You can check with netstat -anop | grep 6680.
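If the frontend is Mopidy-based (which MusicBox usually is, but that is an assumption here), the listen address can normally be changed in the [http] section of mopidy.conf, roughly like this, and then the service restarted (for example with sudo service mopidy restart):

[http]
# listen on all interfaces instead of only 127.0.0.1
hostname = 0.0.0.0
port = 6680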
If I try:
telnet localhost 143
I get access to IMAP.
If I try:
telnet server.name 143
I get
telnet: Unable to connect to remote host: Connection timed out
See the output of my netstat:
netstat --numeric-ports -l | grep 143
tcp 0 0 0.0.0.0:143 0.0.0.0:* LISTEN
tcp6 0 0 :::143 :::* LISTEN
What does the above output mean?
I am at my wits' end: I cannot get IMAP to work remotely, although it works perfectly with webmail on the server.
I'm accessing the server remotely from my laptop's terminal, and locally for the localhost connection.
The output you quote:
tcp 0 0 0.0.0.0:143 0.0.0.0:* LISTEN
tcp6 0 0 :::143 :::* LISTEN
means that you have a program (presumably your IMAP server) listening on port 143 via TCP, on both IPv4 and IPv6. The "0.0.0.0" part means it should accept connections from any source. (If it said "127.0.0.1:143" for the local address, only local connections would be accepted.)
So, since it looks like you have the server listening correctly, I'd first check that server.name actually resolves to the correct IP address. Can you contact any other services on that server to make sure that part works?
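For example (substituting the real hostname), something like this would check both parts:

$ dig +short server.name     # does the name resolve to the server's public IP?
$ nc -vz server.name 143     # can a TCP connection to the IMAP port be opened at all?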
Assuming that works, the next thing I'd check is your firewall. You might look at http://www.cyberciti.biz/faq/howto-display-linux-iptables-loaded-rules/, but you could probably start by just running:
sudo iptables -L -v
On my machine which has no firewall rules I get this:
$ sudo iptables -L -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
If you get something much different, then I'd take a closer look to see whether that's blocking your traffic.
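If the rules look clean, one more thing worth trying is to watch port 143 on the server while you attempt the remote connection (a rough sketch; the interface selection may need adjusting). If no incoming SYN packets show up at all, the traffic is being dropped before it reaches the machine, for example by a hosting provider's or upstream network firewall:

$ sudo tcpdump -ni any tcp port 143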