debian port 80 does not accept remote connections - node.js

I have a Node.js Express website that was listening on port 9000, which worked fine until I changed the port to 80. Now it only accepts local connections:
wget http://127.0.0.1/ -O -
curl 127.0.0.1:80
Locally both commands work and return the HTML page, but the server does not accept remote connections from a browser, whether using the external IP address or the domain name.
# iptables -nL
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80

Well, I figured out the issue: my iptables rules were forwarding remote connections from outside to another port.
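For anyone with the same symptom: such redirects live in the nat table, which a plain iptables -L does not show. A quick check looks like this (a sketch; the rule number is hypothetical):
# List nat PREROUTING rules with line numbers; look for REDIRECT/DNAT on dpt:80
iptables -t nat -L PREROUTING -n --line-numbers
# Delete the offending rule by its line number (2 here is hypothetical)
iptables -t nat -D PREROUTING 2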

Related

iptables TPROXY gets hit but doesn't redirect to port

I'm running Debian 8 with iptables.
I have the following rule:
iptables -t mangle -A PREROUTING -p tcp --dport 5000 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 4000
I simply want to redirect all traffic with destination port 5000 to port 4000.
The standard iptables REDIRECT is not usable in my case, as it alters the packet and changes the original destination port.
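(For reference, the ruled-out REDIRECT approach is sketched below; it rewrites the destination to a local port, which is exactly what would lose the original port 5000.)
iptables -t nat -A PREROUTING -p tcp --dport 5000 -j REDIRECT --to-ports 4000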
Looking at iptables -t mangle -nvL I can see the rule being hit:
Chain PREROUTING (policy ACCEPT 5056 packets, 13M bytes)
pkts bytes target prot opt in out source destination
12 720 TPROXY tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:5000 TPROXY redirect 0.0.0.0:4000 mark 0x1/0x1
But my service running on port 4000 doesn't intercept the packets.
I have a simple Node.js application listening for all TCP traffic on port 4000, which doesn't get any packets:
const server = require('net').createServer();
server.listen(4000, () => { console.log('listening on 4000'); });
Also, running wireshark on TCP port 4000 on all interfaces doesn't show anything.
You also need to set up the routing rules:
# 1 is the --tproxy-mark value from the iptables command above
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
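Putting both halves together (a sketch consolidating the rule from the question with the routing policy above; note that TPROXY only hands connections to sockets with the IP_TRANSPARENT socket option set, which a plain Node.js server.listen() does not enable):
# Mark TCP traffic to port 5000 and divert it to local port 4000
iptables -t mangle -A PREROUTING -p tcp --dport 5000 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 4000
# Deliver marked packets to the local stack (table 100 assumed unused)
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100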

Centos 6.4 Nodejs external not responding

I am new to SSH and CentOS 6.4, and I want to run Node.js on port 80, but I couldn't make it work externally.
When I type netstat -anp | grep 8080 I can see that my node process is listening:
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 7976/node
But it is not reachable externally.
I tried adding rules to iptables, but the result is the same: it is not working.
[root@culturalinfluences ~]# iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
ACCEPT tcp -- anywhere anywhere tcp dpt:http
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http /* node.js port */
ACCEPT tcp -- anywhere anywhere tcp dpt:http
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:webcache /* node.js port */
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Here is my Node.js code:
var port = 8080;
app.listen(port, "0.0.0.0", function() {
  console.log("Listening on " + port);
});
Thank you for understanding; I am really new to Linux and its iptables system. I am sure people like me will search for the same issue, and I hope they will find the answer in this question.
Thank you for your help.
You have a
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
right before the "http" ports you're allowing, so those rules will never be reached. Move the REJECT all rule to the bottom of the list instead.
Additionally, you may want to use -n on the iptables command line to make sure the port numbers are right and aren't, say, 80 instead of 8080.
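Something along these lines (a sketch; rule number 5 is hypothetical and will differ on your system):
# Show INPUT rules with numeric ports and line numbers
iptables -L INPUT -n --line-numbers
# Delete the early REJECT rule by its number, then re-append it last
iptables -D INPUT 5
iptables -A INPUT -j REJECT --reject-with icmp-host-prohibited
# Persist the ordering across reboots on CentOS 6
service iptables save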

how to connect a kubernetes pod to the outside world without a forwarding rule (google container engine)

I'm using Google's Container Engine service, and got a pod running a server listening on port 3000. I set up the service to connect port 80 to that pod's port 3000. I am able to curl the service using its local and public ip from within the node, but not from outside. I set up a firewall rule to allow port 80 and send it to the node, but I keep getting 'connection refused' from outside the network. I'm trying to do this without a forwarding rule, since there's only one pod and it looked like forwarding rules cost money and do load balancing. I think the firewall rule works, because when I add the createExternalLoadBalancer: true to the service's spec, the external IP created by the forwarding rule works as expected. Do I need to do something else? Set up a route or something?
controller.yaml
kind: ReplicationController
apiVersion: v1beta3
metadata:
  name: app-frontend
  labels:
    name: app-frontend
    app: app
    role: frontend
spec:
  replicas: 1
  selector:
    name: app-frontend
  template:
    metadata:
      labels:
        name: app-frontend
        app: app
        role: frontend
    spec:
      containers:
        - name: node-frontend
          image: gcr.io/project_id/app-frontend
          ports:
            - name: app-frontend-port
              containerPort: 3000
              targetPort: 3000
              protocol: TCP
service.yaml
kind: Service
apiVersion: v1beta3
metadata:
  name: app-frontend-service
  labels:
    name: app-frontend-service
    app: app
    role: frontend
spec:
  ports:
    - port: 80
      targetPort: app-frontend-port
      protocol: TCP
  publicIPs:
    - 123.45.67.89
  selector:
    name: app-frontend
Edit (additional details):
Creating this service adds these additional rules, found when I run iptables -L -t nat
Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere 10.247.247.206 /* default/app-frontend-service: */ tcp dpt:http redir ports 56859
REDIRECT tcp -- anywhere 89.67.45.123.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http redir ports 56859
Chain KUBE-PORTALS-HOST (1 references)
target prot opt source destination
DNAT tcp -- anywhere 10.247.247.206 /* default/app-frontend-service: */ tcp dpt:http to:10.241.69.28:56859
DNAT tcp -- anywhere 89.67.45.123.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http to:10.241.69.28:56859
I don't fully understand iptables, so I'm not sure how the destination port matches my service. I found that the DNS for 89.67.45.123.bc.googleusercontent.com resolves to 123.45.67.89.
kubectl get services shows the IP address and port I specified:
NAME IP(S) PORT(S)
app-frontend-service 10.247.243.151 80/TCP
123.45.67.89
Nothing recent from external IPs is showing up in /var/log/kube-proxy.log
TL;DR: Use the Internal IP of your node as the public IP in your service definition.
If you enable verbose logging on the kube-proxy you will see that it appears to be creating the appropriate IP tables rule:
I0602 04:07:32.046823 24360 roundrobin.go:98] LoadBalancerRR service "default/app-frontend-service:" did not exist, created
I0602 04:07:32.047153 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 10.119.244.130/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970]
I0602 04:07:32.048446 24360 proxier.go:606] Opened iptables from-host portal for service "default/app-frontend-service:" on TCP 10.119.244.130:80
I0602 04:07:32.049525 24360 iptables.go:186] running iptables -C [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j REDIRECT --to-ports 36970]
I0602 04:07:32.050872 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-CONTAINER -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j REDIRECT --to-ports 36970]
I0602 04:07:32.052247 24360 proxier.go:595] Opened iptables from-containers portal for service "default/app-frontend-service:" on TCP 23.251.156.36:80
I0602 04:07:32.053222 24360 iptables.go:186] running iptables -C [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970]
I0602 04:07:32.054491 24360 iptables.go:186] running iptables -A [KUBE-PORTALS-HOST -t nat -m comment --comment default/app-frontend-service: -p tcp -m tcp -d 23.251.156.36/32 --dport 80 -j DNAT --to-destination 10.240.121.42:36970]
I0602 04:07:32.055848 24360 proxier.go:606] Opened iptables from-host portal for service "default/app-frontend-service:" on TCP 23.251.156.36:80
Listing the iptables entries using -L -t nat shows the public IP turned into the reverse DNS name, like you saw:
Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere 10.119.240.2 /* default/kubernetes: */ tcp dpt:https redir ports 50353
REDIRECT tcp -- anywhere 10.119.240.1 /* default/kubernetes-ro: */ tcp dpt:http redir ports 54605
REDIRECT udp -- anywhere 10.119.240.10 /* default/kube-dns:dns */ udp dpt:domain redir ports 37723
REDIRECT tcp -- anywhere 10.119.240.10 /* default/kube-dns:dns-tcp */ tcp dpt:domain redir ports 50126
REDIRECT tcp -- anywhere 10.119.244.130 /* default/app-frontend-service: */ tcp dpt:http redir ports 36970
REDIRECT tcp -- anywhere 36.156.251.23.bc.googleusercontent.com /* default/app-frontend-service: */ tcp dpt:http redir ports 36970
But adding the -n option shows the IP address (by default, -L does a reverse lookup on the IP address, which is why you see the DNS name):
Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- 0.0.0.0/0 10.119.240.2 /* default/kubernetes: */ tcp dpt:443 redir ports 50353
REDIRECT tcp -- 0.0.0.0/0 10.119.240.1 /* default/kubernetes-ro: */ tcp dpt:80 redir ports 54605
REDIRECT udp -- 0.0.0.0/0 10.119.240.10 /* default/kube-dns:dns */ udp dpt:53 redir ports 37723
REDIRECT tcp -- 0.0.0.0/0 10.119.240.10 /* default/kube-dns:dns-tcp */ tcp dpt:53 redir ports 50126
REDIRECT tcp -- 0.0.0.0/0 10.119.244.130 /* default/app-frontend-service: */ tcp dpt:80 redir ports 36970
REDIRECT tcp -- 0.0.0.0/0 23.251.156.36 /* default/app-frontend-service: */ tcp dpt:80 redir ports 36970
At this point, you can access the service from within the cluster using both the internal and external IPs:
$ curl 10.119.244.130:80
app-frontend-5pl5s
$ curl 23.251.156.36:80
app-frontend-5pl5s
Without adding a firewall rule, attempting to connect to the public IP remotely times out. If you add a firewall rule, then you will reliably get connection refused:
$ curl 23.251.156.36
curl: (7) Failed to connect to 23.251.156.36 port 80: Connection refused
If you enable some iptables logging:
sudo iptables -t nat -I KUBE-PORTALS-CONTAINER -m tcp -p tcp --dport 80 -j LOG --log-prefix "WTF: "
Grepping the output of dmesg for WTF then makes it clear that the packets are arriving at the VM's internal 10.x address rather than the ephemeral external IP address that had been set as the public IP on the service.
It turns out that the problem is that GCE has two types of external IPs: ForwardingRules (which forward with the DSTIP intact) and 1-to-1 NAT (which actually rewrites the DSTIP to the internal IP). The external IP of the VM is the latter type, so when the node receives the packets the iptables rule doesn't match.
The fix is actually pretty simple (but non-intuitive): Use the Internal IP of your node as the public IP in your service definition. After updating your service.yaml file to set publicIPs to the Internal IP (e.g. 10.240.121.42) you will be able to hit your application from outside of the GCE network.
If you add the node's external IP address to the service's publicIPs field, you should then be able to access it at the node's IP address. If your cluster has multiple nodes, you can put more than one of their IP addresses into the field if you'd like to enable accessing the pod on any of them.
In an upcoming release there will be a simpler built-in option for setting up an external service without a load balancer. If you're curious or reading this sometime in the future, check out the updated "External Services" section of this doc to see how you'll be able to use NodePort to accomplish the same thing much more easily.
Answer by @Robert Bailey is absolutely correct. publicIPs has been deprecated in Kubernetes 1.5.1; you can use externalIPs instead.
Get the internal IP of the node: kubectl describe node | grep Address
Addresses: 10.119.244.130,101.192.150.200,gke-...
or you can run ifconfig eth0 in the node's terminal to get the internal IP.
Set the IP in service.yaml:
spec:
  type: NodePort
  externalIPs:
    - 10.119.244.130
You can test with curl using --resolve:
curl --resolve 'example.com:443:23.251.156.36' https://example.com -k

Can access NFS to and from at most 3 (of 5) computers [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
I have 5 computers which we will label as such:
Ubuntu 13.10 Desktop --> U13.10
Ubuntu 11.10 Desktop --> U11.10
Raspberry Pi Raspbian --> R1
Raspberry Pi Raspbian --> R2
Raspberry Pi Raspbian --> R3
I have NFS shares set up like so:
U13.10 (192.168.7.1)
exporting to U11.10
U11.10 (192.168.7.10)
importing from U13.10
importing from R1 (FAILS)
importing from R2
importing from R3 (FAILS)
exporting to R1
exporting to R2
exporting to R3
R1 (192.168.7.104)
importing from U11.10
exporting to U11.10
R2 (192.168.7.105)
importing from U11.10
exporting to U11.10
R3 (192.168.7.106)
importing from U11.10
exporting to U11.10
Finally here is the output of my iptables on the server (U13.10) acting as a router:
U13.10$ sudo iptables -nL
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:111
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:111
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:2049
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:32803
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:32769
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:892
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:892
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:875
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:875
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:662
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:662
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10000
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10000
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10001
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10001
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10002
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10002
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10003
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10003
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10004
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10004
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10005
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10005
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10006
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10006
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10007
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10007
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10008
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10008
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10009
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 state NEW udp dpt:10009
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10000
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10001
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10001
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10002
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10002
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10003
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10003
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10004
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10004
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10005
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10005
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10006
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10006
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10007
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10007
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10008
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10008
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10009
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:10009
DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x3F/0x00
DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:!0x17/0x02 state NEW
DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp flags:0x3F/0x3F
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:443
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:25
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:465
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:110
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:995
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:143
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:993
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT udp -- 0.0.0.0/0 192.168.7.10 udp dpt:6000
ACCEPT udp -- 0.0.0.0/0 192.168.7.11 udp dpt:6001
ACCEPT udp -- 0.0.0.0/0 192.168.7.12 udp dpt:6002
ACCEPT udp -- 0.0.0.0/0 192.168.7.13 udp dpt:6003
ACCEPT udp -- 0.0.0.0/0 192.168.7.14 udp dpt:6004
ACCEPT udp -- 0.0.0.0/0 192.168.7.15 udp dpt:6005
ACCEPT udp -- 0.0.0.0/0 192.168.7.16 udp dpt:6006
ACCEPT udp -- 0.0.0.0/0 192.168.7.17 udp dpt:6007
ACCEPT udp -- 0.0.0.0/0 192.168.7.18 udp dpt:6008
ACCEPT udp -- 0.0.0.0/0 192.168.7.19 udp dpt:6009
ACCEPT tcp -- 0.0.0.0/0 192.168.7.10 tcp dpt:6000
ACCEPT tcp -- 0.0.0.0/0 192.168.7.11 tcp dpt:6001
ACCEPT tcp -- 0.0.0.0/0 192.168.7.12 tcp dpt:6002
ACCEPT tcp -- 0.0.0.0/0 192.168.7.13 tcp dpt:6003
ACCEPT tcp -- 0.0.0.0/0 192.168.7.14 tcp dpt:6004
ACCEPT tcp -- 0.0.0.0/0 192.168.7.15 tcp dpt:6005
ACCEPT tcp -- 0.0.0.0/0 192.168.7.16 tcp dpt:6006
ACCEPT tcp -- 0.0.0.0/0 192.168.7.17 tcp dpt:6007
ACCEPT tcp -- 0.0.0.0/0 192.168.7.18 tcp dpt:6008
ACCEPT tcp -- 0.0.0.0/0 192.168.7.19 tcp dpt:6009
ACCEPT udp -- 0.0.0.0/0 192.168.7.10 udp dpt:7000
ACCEPT udp -- 0.0.0.0/0 192.168.7.10 udp dpt:7001
ACCEPT udp -- 0.0.0.0/0 192.168.7.10 udp dpt:7002
ACCEPT udp -- 0.0.0.0/0 192.168.7.10 udp dpt:7003
ACCEPT udp -- 0.0.0.0/0 192.168.7.10 udp dpt:7004
ACCEPT udp -- 0.0.0.0/0 192.168.7.10 udp dpt:7005
ACCEPT udp -- 0.0.0.0/0 192.168.7.10 udp dpt:7006
ACCEPT udp -- 0.0.0.0/0 192.168.7.10 udp dpt:7007
ACCEPT udp -- 0.0.0.0/0 192.168.7.10 udp dpt:7008
ACCEPT udp -- 0.0.0.0/0 192.168.7.10 udp dpt:7009
ACCEPT tcp -- 0.0.0.0/0 192.168.7.10 tcp dpt:7000
ACCEPT tcp -- 0.0.0.0/0 192.168.7.10 tcp dpt:7001
ACCEPT tcp -- 0.0.0.0/0 192.168.7.10 tcp dpt:7002
ACCEPT tcp -- 0.0.0.0/0 192.168.7.10 tcp dpt:7003
ACCEPT tcp -- 0.0.0.0/0 192.168.7.10 tcp dpt:7004
ACCEPT tcp -- 0.0.0.0/0 192.168.7.10 tcp dpt:7005
ACCEPT tcp -- 0.0.0.0/0 192.168.7.10 tcp dpt:7006
ACCEPT tcp -- 0.0.0.0/0 192.168.7.10 tcp dpt:7007
ACCEPT tcp -- 0.0.0.0/0 192.168.7.10 tcp dpt:7008
ACCEPT tcp -- 0.0.0.0/0 192.168.7.10 tcp dpt:7009
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
As indicated above, I fail to mount from either R1 or R3. Observe the following output as well, as I think it may be helpful:
U11.10$ rpcinfo -p R1
rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused
U11.10$ showmount -e R1
clnt_create: RPC: Port mapper failure - Unable to receive: errno 111 (Connection refused)
U11.10$ rpcinfo -p R2
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 39036 status
100024 1 tcp 35998 status
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 2 tcp 2049
100227 3 tcp 2049
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 2 udp 2049
100227 3 udp 2049
100021 1 udp 55799 nlockmgr
100021 3 udp 55799 nlockmgr
100021 4 udp 55799 nlockmgr
100021 1 tcp 50119 nlockmgr
100021 3 tcp 50119 nlockmgr
100021 4 tcp 50119 nlockmgr
100005 1 udp 49361 mountd
100005 1 tcp 48407 mountd
100005 2 udp 37991 mountd
100005 2 tcp 47634 mountd
100005 3 udp 41386 mountd
100005 3 tcp 35740 mountd
U11.10$ showmount -e R2
Export list for R2:
/ U11.10
U11.10$ rpcinfo -p R3
rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused
U11.10$ showmount -e R3
clnt_create: RPC: Port mapper failure - Unable to receive: errno 111 (Connection refused)
I can ping R1-R3 from U11.10, and as alluded to earlier I can mount onto R1 and R3 from U11.10. I suspect there is something wrong with my iptables; I just can't figure out why it would let one Raspberry Pi through but not the other two.
Better to ask that on Server Fault than on Stack Overflow. But to make it short: if I were you, I'd drop ALL my iptables rules first, then check whether it works. When you have everything running, check netstat -nap on U11.10 to make sure each process is using the ports you expect it to. Then re-enable your iptables rules one by one.
Also, when you try something like the rpcinfo call that doesn't work, you might want to have a tcpdump running on U11.10 and examine the result with Wireshark. This shows you whether packets are sent and received, and which ports are used, as well.
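A minimal sketch of that procedure (assumes iptables-save/iptables-restore are installed, as on stock Ubuntu):
# Back up the current rules, then open everything up for the test
iptables-save > /tmp/iptables.backup
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
# Retest the mounts; capture portmapper/NFS traffic while reproducing
tcpdump -i any -w /tmp/nfs.pcap port 111 or port 2049
# Restore the original rules afterwards
iptables-restore < /tmp/iptables.backup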

can't open PORT on IPTABLES firewall

I'm struggling to understand why I can't open port 61616 by adding an iptables rule. Here is the listing of all rules, obtained via the iptables -L command.
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:61616
ACCEPT udp -- anywhere anywhere udp dpt:cslistener
ACCEPT tcp -- anywhere anywhere tcp dpt:cslistener
ACCEPT tcp -- anywhere anywhere tcp dpt:webcache
ACCEPT tcp -- anywhere anywhere tcp dpt:smtp
RH-Firewall-1-INPUT all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
RH-Firewall-1-INPUT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain RH-Firewall-1-INPUT (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:61616
ACCEPT tcp -- anywhere anywhere tcp dpt:http
ACCEPT all -- anywhere anywhere
ACCEPT icmp -- anywhere anywhere icmp any
ACCEPT esp -- anywhere anywhere
ACCEPT ah -- anywhere anywhere
ACCEPT udp -- anywhere 224.0.0.251 udp dpt:mdns
ACCEPT udp -- anywhere anywhere udp dpt:ipp
ACCEPT tcp -- anywhere anywhere tcp dpt:ipp
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Ignorant as I am about iptables, what confuses me is that the http port is visible from the outside, yet port 61616 still isn't. To me, the rules look the same. Anyway, all help is appreciated.
Best
Maybe you are trying to open the port for a host in the network behind the CentOS host (i.e., the CentOS host is the firewall for the network)?
If so, you must add a rule to the FORWARD chain of the filter table, and you should
add a DNAT rule to some IP x.x.x.x in that network:
iptables -A FORWARD -p tcp --dport 61616 -j ACCEPT
iptables -A PREROUTING -t nat -p tcp --dport 61616 -j DNAT --to-destination x.x.x.x
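Note that DNAT through the firewall also requires IP forwarding to be enabled on the CentOS host (an assumption the rules above rely on):
# Enable routing between interfaces; set in /etc/sysctl.conf to persist
sysctl -w net.ipv4.ip_forward=1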
