Cannot ftp to localhost but can ftp from a remote server - Linux

I have installed vsftpd on the server and it is running now, but when I use the command ftp localhost, it returns 421 Service not available. Could you advise what is wrong? Thanks.
bash-3.2# /etc/rc.d/init.d/vsftpd status
vsftpd (pid 580) is running...
bash-3.2# ps -ef |grep vsftpd
root 580 1 0 15:44 ? 00:00:00 /usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf
root 607 467 0 15:45 pts/0 00:00:00 grep vsftpd
bash-3.2# ftp localhost
Connected to localhost (127.0.0.1).
421 Service not available.
Below is the output. What else can I check? Thanks.
bash-3.2# netstat -tanp |grep ftp
tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN 3205/vsftpd
bash-3.2# ps -ef |grep ftp
root 3205 1 0 Feb02 ? 00:00:00 /usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf
root 6075 5626 0 21:35 pts/0 00:00:00 grep ftp
bash-3.2# /sbin/iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
bash-3.2# /sbin/chkconfig --list |grep ip
(all iptables entries are off)

There is a parameter called tcp_wrappers=YES in vsftpd.conf. When it is enabled, vsftpd checks your /etc/hosts.deny file. You can either change it to tcp_wrappers=NO or check your /etc/hosts.deny file.
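For example (a sketch; paths assumed to match the question's setup):
bash-3.2# grep tcp_wrappers /etc/vsftpd/vsftpd.conf
tcp_wrappers=NO
bash-3.2# /etc/rc.d/init.d/vsftpd restart
Alternatively, keep tcp_wrappers=YES and explicitly allow local connections in /etc/hosts.allow:
vsftpd : 127.0.0.1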

bash-3.2# ftp localhost
Connected to localhost (127.0.0.1).
421 Service not available.
It looks like you can connect; that is, the server listens on this address (as you also see with netstat) and there are no iptables rules blocking the connection.
Instead, the server itself refuses to serve you. Therefore you need to look at the server logs and the server configuration for help. Maybe you should look at the tcp_wrappers option and the related configuration in /etc/hosts.allow and /etc/hosts.deny.
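A quick way to check both (a sketch; the log path is an assumption and is controlled by the xferlog_file setting in vsftpd.conf):
bash-3.2# tail /var/log/vsftpd.log
bash-3.2# cat /etc/hosts.deny
An entry in hosts.deny matching vsftpd (or ALL) for 127.0.0.1 would explain the 421 on localhost.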

Is there a way to configure Docker's embedded DNS server's upstream nameserver's port?

General context
Docker daemon comes with an embedded DNS server. It resolves local Docker swarm and network records and forwards queries for external records to an upstream nameserver configured with --dns.
Docs say you can set an IP address for this upstream nameserver with --dns=[IP_ADDRESS...]. The default port used is 53.
My question
Can I configure the port used as well?
My host's /etc/docker/daemon.json contains "dns": ["10.99.0.1"]. Is there a way for me to specify something like "dns": ["10.99.0.1:53"], so that dockerd always knows to forward DNS queries to port 53?
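For reference, the relevant fragment of /etc/docker/daemon.json as it stands (the port-qualified variant is the hypothetical form being asked about):
{
  "dns": ["10.99.0.1"]
}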
My use case
In my case, 10.99.0.1 is the IP of a localhost bridge interface. I run a local DNS caching server on this host. So DNS queries sent to 10.99.0.1:53 work. But dockerd forwards queries originating from containers connected to user-defined bridge networks (created with docker network create) to non-standard ports it picks. See terminal output below.
Detailed terminal output and debugging info
"toogle" is a Docker container connected to a Docker network I created with docker network create. 127.0.0.11 is another loopback address. DNS queries originating from within Docker containers connected to user-defined Docker networks are destined for this IP.
Is Docker's embedded DNS server actually running?
DNS queries are routed by toogle's firewall rules this way.
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) iptables -t nat -nvL
Chain PREROUTING (policy ACCEPT 1 packets, 60 bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 1 packets, 60 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER_OUTPUT all -- * * 0.0.0.0/0 127.0.0.11
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER_POSTROUTING all -- * * 0.0.0.0/0 127.0.0.11
Chain DOCKER_OUTPUT (1 references)
pkts bytes target prot opt in out source destination
0 0 DNAT tcp -- * * 0.0.0.0/0 127.0.0.11 tcp dpt:53 to:127.0.0.11:37619 <-- look at this rule
0 0 DNAT udp -- * * 0.0.0.0/0 127.0.0.11 udp dpt:53 to:127.0.0.11:58552 <-- look at this rule
Chain DOCKER_POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 SNAT tcp -- * * 127.0.0.11 0.0.0.0/0 tcp spt:37619 to::53
0 0 SNAT udp -- * * 127.0.0.11 0.0.0.0/0 udp spt:58552 to::53
Whatever's listening on those ports is accepting TCP and UDP connections
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) nc 127.0.0.11 37619 -vz
127.0.0.11: inverse host lookup failed: Host name lookup failure
(UNKNOWN) [127.0.0.11] 37619 (?) open
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) nc 127.0.0.11 58552 -vzu
127.0.0.11: inverse host lookup failed: Host name lookup failure
(UNKNOWN) [127.0.0.11] 58552 (?) open
But there's no DNS reply from either
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) dig @127.0.0.11 -p 58552 accounts.google.com
; <<>> DiG 9.11.3-1ubuntu1.14-Ubuntu <<>> @127.0.0.11 -p 58552 accounts.google.com
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
$ sudo nsenter -n -t $(docker inspect --format {{.State.Pid}} toogle) dig @127.0.0.11 -p 37619 accounts.google.com +tcp
; <<>> DiG 9.11.3-1ubuntu1.14-Ubuntu <<>> @127.0.0.11 -p 37619 accounts.google.com +tcp
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
dockerd is listening for DNS queries at that IP and port from within toogle.
$ sudo nsenter -n -p -t $(docker inspect --format {{.State.Pid}} toogle) ss -utnlp
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp UNCONN 0 0 127.0.0.11:58552 0.0.0.0:* users:(("dockerd",pid=10984,fd=38))
tcp LISTEN 0 128 127.0.0.11:37619 0.0.0.0:* users:(("dockerd",pid=10984,fd=40))
tcp LISTEN 0 128 *:80 *:* users:(("toogle",pid=12150,fd=3))
But dockerd is trying to forward the DNS query to 10.99.0.1, which is my docker0 bridge network interface.
$ sudo journalctl --follow -u docker
-- Logs begin at Tue 2019-11-05 18:17:27 UTC. --
Apr 22 15:43:12 my-host dockerd[10984]: time="2021-04-22T15:43:12.496979903Z" level=debug msg="[resolver] read from DNS server failed, read udp 172.20.0.127:37928->10.99.0.1:53: i/o timeout"
Apr 22 15:43:13 my-host dockerd[10984]: time="2021-04-22T15:43:13.496539033Z" level=debug msg="Name To resolve: accounts.google.com."
Apr 22 15:43:13 my-host dockerd[10984]: time="2021-04-22T15:43:13.496958664Z" level=debug msg="[resolver] query accounts.google.com. (A) from 172.20.0.127:51642, forwarding to udp:10.99.0.1"
dockerd forwards the DNS query that asks for nameserver 127.0.0.11:58552 to 10.99.0.1 but only changes the IP and not the port. So the DNS query is forwarded to 10.99.0.1:58552 and nothing is listening at that port.
$ dig @10.99.0.1 -p 58552 accounts.google.com
[NO RESPONSE]
$ nc 10.99.0.1 58552 -vz
10.99.0.1: inverse host lookup failed: Unknown host
(UNKNOWN) [10.99.0.1] 58552 (?) : Connection refused
A DNS query to 10.99.0.1:53 works as expected.
$ dig @10.99.0.1 -p 53 accounts.google.com
; <<>> DiG 9.11.3-1ubuntu1.14-Ubuntu <<>> @10.99.0.1 -p 53 accounts.google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53674
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;accounts.google.com. IN A
;; ANSWER SECTION:
accounts.google.com. 235 IN A 142.250.1.84
;; Query time: 0 msec
;; SERVER: 10.99.0.1#53(10.99.0.1)
;; WHEN: Thu Apr 22 17:20:09 UTC 2021
;; MSG SIZE rcvd: 64
I don't think there's a way to do this. I also misread the log output: the Docker daemon was in fact forwarding to port 53, as the error line shows.
read udp 172.20.0.127:37928->10.99.0.1:53: i/o timeout
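If you want to double-check where dockerd really sends its upstream queries, capturing on the host shows the destination port directly (a sketch; run it while a container does a lookup):
$ sudo tcpdump -ni any host 10.99.0.1 and port 53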

Netcat server and intermittent UDP datagram loss

The client on enp4s0 (192.168.0.77) is continuously sending short text messages to 192.168.0.1:6060. The server on 192.168.0.1 listens on port 6060 via nc -ul4 --recv-only 6060.
A ping (ping -s 1400 192.168.0.77) from server to client works fine. Wireshark is running on 192.168.0.1 (enp4s0) and shows that all datagrams arrive correctly; no packets are missing.
But netcat (and likewise a simple UDP server) receives datagrams only sporadically.
Any idea what's going wrong?
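For reference, a minimal sender loop matching this setup might look like this (an assumption; the question does not show the client command):
$ while true; do echo "hello $(date +%s)" | nc -u -w1 192.168.0.1 6060; done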
System configuration:
# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
# ip route
default via 192.168.77.1 dev enp0s25 proto dhcp metric 100
default via 192.168.0.1 dev enp4s0 proto static metric 101
192.168.0.0/24 dev enp4s0 proto kernel scope link src 192.168.0.1 metric 101
192.168.77.0/24 dev enp0s25 proto kernel scope link src 192.168.77.25 metric 100
192.168.100.0/24 dev virbr0 proto kernel scope link src 192.168.100.1
# uname -a
Linux nadhh 5.1.20-300.fc30.x86_64 #1 SMP Fri Jul 26 15:03:11 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

iptables --gid-owner works only for user's main group

I am trying to disable access to IP 1.2.3.4 for all users except members of the group "neta". This is a new group which I created solely for this purpose.
iptables -I OUTPUT -o eth0 -p tcp -d 1.2.3.4 -m owner ! --gid-owner neta -j REJECT
This disables access to 1.2.3.4 for all users, even if they are member of group "neta".
I have a user xx who is a member of the groups xx (main group) and neta. If I change the rule to:
iptables -I OUTPUT -o eth0 -p tcp -d 1.2.3.4 -m owner \! --gid-owner xx -j REJECT
everyone except user xx is unable to access 1.2.3.4.
I added root to this group xx:
usermod -a -G xx root
but root was still not able to access this IP. If I put a user's main group (root, xx) in the rule, everything works as expected.
I tried splitting it into two rules just to be sure (and logging rejected packets):
iptables -A OUTPUT -o eth0 -p tcp -d 1.2.3.4 -m owner --gid-owner neta -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp -d 1.2.3.4 -m limit --limit 2/s --limit-burst 10 -j LOG
iptables -A OUTPUT -o eth0 -p tcp -d 1.2.3.4 -j REJECT
but there is no difference. Everything is being rejected.
There are no other iptables rules.
root@vm1:~# iptables -nvL
Chain INPUT (policy ACCEPT 19 packets, 1420 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 10 packets, 1720 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- * eth0 0.0.0.0/0 1.2.3.4 owner GID match 1001
0 0 LOG tcp -- * eth0 0.0.0.0/0 1.2.3.4 limit: avg 2/sec burst 10 LOG flags 0 level 4
0 0 REJECT tcp -- * eth0 0.0.0.0/0 1.2.3.4 reject-with icmp-port-unreachable
I want to be able to (dis)allow access to this IP by adding/removing users from this "neta" group instead of adding iptables rules for every user.
OK, to be honest, I know too little about Linux and iptables to be sure about my theory, but since I wanted to do the same for a VPN, here we go.
I assume that the match is done against the process from which the packets originate, and that a Linux process doesn't get all of a user's groups assigned; instead, a process runs with one uid and one gid.
That means you have to execute the command explicitly under that specific group, or else the command/process is executed with the user's default group.
Writing this, I had an idea to test whether there is such a possibility.
I had restricted access to a certain IP range using the group "vpn". This never worked. Now I tested with the following command, and it works:
sg vpn -c "ssh user@10.15.1.1"
So I hope my theory was correct.
Old post, but chiming in since I have run into this exact problem on Ubuntu 16.04.3 LTS server.
Ubuntu's implementation of the iptables owner extension examines the owner of the current network packet and queries only the primary group ID of that user. It doesn't dig deeper and enumerate all group memberships; only the primary group is compared to the --gid-owner value.
What the OP was trying to accomplish would work if he/she changed the primary/default group of all relevant usernames to "neta". Those users would then be captured by the rule.
To use a supplementary group, you need to add the --suppl-groups flag to your iptables command.
From the man page:
--suppl-groups
Causes group(s) specified with --gid-owner to be also checked in the supplementary groups of a process.
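With that flag, the original rule could be written like this (a sketch; it requires an iptables and kernel recent enough to support --suppl-groups):
iptables -I OUTPUT -o eth0 -p tcp -d 1.2.3.4 -m owner ! --gid-owner neta --suppl-groups -j REJECT
Anyone whose effective or supplementary groups include neta would then escape the REJECT.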

Unable to connect to imap via telnet remotely

If I try:
telnet localhost 143
I can get access to IMAP.
If I try:
telnet server.name 143
I get
telnet: Unable to connect to remote host: Connection timed out
See the output of my netstat:
netstat --numeric-ports -l | grep 143
tcp 0 0 0.0.0.0:143 0.0.0.0:* LISTEN
tcp6 0 0 :::143 :::* LISTEN
What does the above output mean?
I am at my wits' end; I cannot get IMAP to work remotely, though it works perfectly with webmail on the server.
I'm accessing the server remotely from my laptop's terminal, and locally for the localhost connection.
The output you quote:
tcp 0 0 0.0.0.0:143 0.0.0.0:* LISTEN
tcp6 0 0 :::143 :::* LISTEN
means that you have a program (presumably your IMAP server) listening on port 143 via TCP (on both IPv4 and IPv6). The "0.0.0.0" local address means it is bound to all interfaces, so it should accept connections from any source. (If it said "127.0.0.1:143" for the local address, only local connections would be accepted.)
So, since it looks like you have the server listening correctly, I'd first check that server.name actually resolves to the correct IP address. Can you contact any other services on that server to make sure that part works?
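For example (a sketch; port 22 is an assumption, substitute any service you know is running):
$ dig +short server.name
$ nc -vz server.name 22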
Assuming that works, the next thing I'd check would be your firewall. You might look at http://www.cyberciti.biz/faq/howto-display-linux-iptables-loaded-rules/ but you could probably start by just running:
sudo iptables -L -v
On my machine which has no firewall rules I get this:
$ sudo iptables -L -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
If you get something much different, then I'd take a closer look to see if it's blocking your traffic.
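If a rule there does reject port 143, something like this (a sketch for testing, not a hardened firewall configuration) would let IMAP through:
$ sudo iptables -I INPUT -p tcp --dport 143 -j ACCEPT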

snmpd is not listening on port 161 on Ubuntu server

I have installed snmpd on my Ubuntu server via apt-get install snmpd snmp. Then I changed this line in /etc/default/snmpd:
SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid 0.0.0.0'
After that, I restarted the snmpd service (/etc/init.d/snmpd restart). However, when I ran netstat -an | grep "LISTEN ", I didn't see snmpd listening on port 161.
I don't have any firewall which blocks that port.
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
User "nos" is correct; UDP bindings do not show up as "LISTEN" under "netstat". Instead, you will see a line or two like the following, showing that "snmpd" is indeed ready to receive data on UDP port 161:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:161 0.0.0.0:* 1785/snmpd
udp6 0 0 ::1:161 :::* 1785/snmpd
The "netstat" manpage has this to say about the "State" column:
The state of the socket. Since there are no states in raw mode and usually no states used in UDP, this column may be left blank.
Thus, you would not expect to see the word "LISTEN" here.
From a practical perspective, however, there is one more thing I'd like to note. Often, the default Net-SNMP snmpd.conf configuration file limits incoming connections to local processes only.
Default /etc/snmp/snmpd.conf:
# Listen for connections from the local system only
agentAddress udp:127.0.0.1:161
# Listen for connections on all interfaces (both IPv4 *and* IPv6)
#agentAddress udp:161,udp6:[::1]:161,tcp:161,tcp6:[::1]:161
Usually, the point of setting up "snmpd" is so that another machine can monitor it. To accomplish this, make sure that the first line is commented out and that the second line is enabled.
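After editing the file, restart snmpd and verify the binding (a sketch; the output line is abridged from the example above):
$ sudo /etc/init.d/snmpd restart
$ sudo netstat -anup | grep :161
udp 0 0 0.0.0.0:161 0.0.0.0:* 1785/snmpd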
Looks like it is listening on 161/UDP. From the man page:
By default, snmpd listens for incoming SNMP requests on UDP port 161 on all IPv4 interfaces. However, it is possible to modify this behaviour by specifying one or more listening addresses as arguments to snmpd. A listening address takes the form: [<transport-specifier>:]<transport-address>
Read the man page for more details.
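For example, following the man page's [<transport-specifier>:]<transport-address> form, starting the daemon so it listens on a single specific IPv4 address would look like this (a sketch; the address is hypothetical):
snmpd udp:10.0.0.5:161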
