RPC: Unable to receive; errno = Connection refused

I am trying to write a simple Sun RPC program, a phonebook. The client sends commands such as adding or removing a person, and the server responds with a message.
When I run both programs (server and client), the client closes unexpectedly after the first command with this error:
RPC: Unable to receive; errno = Connection refused
I have already checked that rpcbind and portmap are installed.
And here is my rpcinfo output:
program version netid address service owner
100000 4 tcp6 ::.0.111 portmapper superuser
100000 3 tcp6 ::.0.111 portmapper superuser
100000 4 udp6 ::.0.111 portmapper superuser
100000 3 udp6 ::.0.111 portmapper superuser
100000 4 tcp 0.0.0.0.0.111 portmapper superuser
100000 3 tcp 0.0.0.0.0.111 portmapper superuser
100000 2 tcp 0.0.0.0.0.111 portmapper superuser
100000 4 udp 0.0.0.0.0.111 portmapper superuser
100000 3 udp 0.0.0.0.0.111 portmapper superuser
100000 2 udp 0.0.0.0.0.111 portmapper superuser
100000 4 local /run/rpcbind.sock portmapper superuser
100000 3 local /run/rpcbind.sock portmapper superuser
553523285 1 udp 0.0.0.0.3.222 - superuser
553523285 1 tcp 0.0.0.0.3.223 - superuser
The server procedures run normally; I put printf statements on the server side and they show the server is running, but it cannot send the message back to the client!
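As an aside, the last two dot-separated fields of the "address" column in this rpcinfo output encode the port number (port = hi * 256 + lo). A quick sketch to decode them (the helper name is my own):

```python
def uaddr_to_port(uaddr: str) -> int:
    """Decode the port from an rpcinfo universal address.

    The last two dot-separated fields are the high and low bytes
    of the port number (port = hi * 256 + lo).
    """
    hi, lo = (int(x) for x in uaddr.rsplit(".", 2)[-2:])
    return hi * 256 + lo

print(uaddr_to_port("0.0.0.0.0.111"))  # portmapper -> 111
print(uaddr_to_port("0.0.0.0.3.222"))  # the registered UDP service -> 990
```

So the service with program number 553523285 above is registered on UDP port 990 and TCP port 991; if the client gets "Connection refused" there, the server may have re-registered on different ports since the client looked them up.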

Linux: who is listening on tcp port 22?

I have an AST2600 EVB board. After power-on (with RJ45 connected), it boots into an OpenBMC kernel. From the serial port, using the ip command I can obtain its IP address. From my laptop, I can ssh into the board using the account root/0penBmc:
bruin@gen81:/$ ssh root@192.168.6.132
root@192.168.6.132's password:
Then I want to find out which TCP ports are open. As there are no ss/lsof/netstat utilities, I cat /proc/net/tcp:
root@AMIfa7ba648f62e:/proc/net# cat /proc/net/tcp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 00000000:14EB 00000000:0000 0A 00000000:00000000 00:00000000 00000000 997 0 9565 1 0c202562 100 0 0 10 0
1: 3500007F:0035 00000000:0000 0A 00000000:00000000 00:00000000 00000000 997 0 9571 1 963c8114 100 0 0 10 0
The strange thing that puzzles me is that TCP port 22 is not listed in /proc/net/tcp, which would suggest that no process is listening on TCP port 22. If this is true, how is the ssh connection established?
Btw, as tested with ps, it's a dropbear process that handles the ssh connection, and dropbear is spawned dynamically (i.e., with no ssh connection, no such process exists; when I made two ssh connections, two dropbear processes were spawned).
PS: as suggested by John in his reply, I added the ss utility to the image, and it shows what I expected:
root@AMI8287361b9c6f:~# ss -antp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 0 0.0.0.0:5355 0.0.0.0:* users:(("systemd-resolve",pid=239,fd=12))
LISTEN 0 0 127.0.0.1:5900 0.0.0.0:* users:(("obmc-ikvm",pid=314,fd=5))
LISTEN 0 0 127.0.0.53:53 0.0.0.0:* users:(("systemd-resolve",pid=239,fd=17))
LISTEN 0 0 *:443 *:* users:(("bmcweb",pid=325,fd=3),("systemd",pid=1,fd=41))
LISTEN 0 0 *:5355 *:* users:(("systemd-resolve",pid=239,fd=14))
LISTEN 0 0 *:5900 *:* users:(("obmc-ikvm",pid=314,fd=6))
LISTEN 0 0 *:22 *:* users:(("systemd",pid=1,fd=49))
LISTEN 0 0 *:2200 *:* users:(("systemd",pid=1,fd=50))
ESTAB 0 0 [::ffff:192.168.6.89]:22 [::ffff:192.168.6.98]:34906 users:(("dropbear",pid=485,fd=2),("dropbear",pid=485,fd=1),("dropbear",pid=485,fd=0),("systemd",pid=1,fd=20))
Good question.
First, it is pretty straightforward to add common tools/utilities to an image.
They can be added (for local testing only) by adding the line
OBMC_IMAGE_EXTRA_INSTALL:append = " iproute2 iproute2-ss"
to the https://github.com/openbmc/openbmc/blob/master/meta-aspeed/conf/machine/evb-ast2600.conf file (or to your own testing/development layer). Adding useful tools is often worth it.
Second, if you are using IPv6 you will need to check /proc/net/tcp6.
Third, you can also look for a port by looking up the pid of your application with ps | grep <application name>, then reading the ports used by that pid from cat /proc/<pid>/net/tcp.
Last, if you have any more questions or these steps don't work, please reach out to us on Discord https://discord.com/invite/69Km47zH98 or by email https://lists.ozlabs.org/listinfo/openbmc (they are the preferred places to ask questions).
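One detail that is easy to miss when reading /proc/net/tcp directly: the addresses are hex-encoded, with the IPv4 part stored as a little-endian 32-bit integer on little-endian machines. A rough decoder, assuming that IPv4 layout:

```python
import socket
import struct

def decode(hex_addr: str) -> str:
    """Decode a /proc/net/tcp address field like '3500007F:0035'."""
    ip_hex, port_hex = hex_addr.split(":")
    # On little-endian hosts the IPv4 address appears byte-swapped
    # in the hex dump, so unpack it as a little-endian 32-bit int.
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
    return f"{ip}:{int(port_hex, 16)}"

print(decode("3500007F:0035"))  # 127.0.0.53:53
print(decode("00000000:14EB"))  # 0.0.0.0:5355
```

Applied to the output above, the two listeners are 0.0.0.0:5355 and 127.0.0.53:53, so port 22 really is absent from the IPv4 table; in the ss output it shows up as *:22 held by systemd (pid 1), i.e. a v6 socket that also accepts v4-mapped connections, which is why /proc/net/tcp6 is the place to look.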

Use multiple interfaces to create outbound TCP connections to a single IP and port (TCP server)

A TCP connection is determined by a 5-tuple: [local IP, local port, remote IP, remote port, protocol]. I can't change the remote IP, remote port, or protocol. How can I create more than 65K (the plan is more than 200K) concurrent TCP connections (held open for a long time) from the same client, when the ephemeral port range is fixed (1024-65535) on the client side?
Question:
Can multiple interfaces (multiple IPs on the same client instance) use the same ephemeral port to create an outbound TCP connection?
I have written one TCP client which creates 15 concurrent TCP connections (held open for a long time) using the eth1 (10) and eth2 (5) interfaces, and restricted the ephemeral range to 10 ports (49001-49010 via the ip_local_port_range file). The default interface eth0 doesn't use any port from 49001-49010 except 49001.
Now when I try to send a curl command I get an error:
curl http://google.com -v
* Rebuilt URL to: http://google.com/
* Trying XXX.XXXX.XXXX.46...
* TCP_NODELAY set
* Immediate connect fail for XXX.XXX.XXX.46: Cannot assign requested address
* Trying XXXX:XXXX:XXXX:XXXX::XXXX...
* TCP_NODELAY set
* Immediate connect fail for XXXX:XXXX:XXXX:XXXX::XXXX: Network is unreachable
* Closing connection 0
curl: (7) Couldn't connect to server
tcp 0 0 xxx.xxx.xxx.245:49001 xxx.xxx.xxx.xxx:443 ESTABLISHED XXXX
tcp 0 0 xxx.xxx.xxx.116:49010 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.116:49006 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49002 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.116:49008 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49010 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49009 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49006 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.116:49004 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49001 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49008 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49005 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.116:49002 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49003 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49004 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
tcp 0 0 xxx.xxx.xxx.248:49007 xxx.xxx.xxx.41:9999 ESTABLISHED 21805/client
In Linux you can have multiple sockets using the same source address and source port if you set SO_REUSEPORT on your sockets using setsockopt. You need to control the socket-creation code for this to work, however.
As you noted, you are still restricted in that the 5-tuple must be unique across all TCP sockets on your system.
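A minimal sketch of the SO_REUSEPORT idea (addresses and ports here are arbitrary): two sockets bind the same local (address, port) pair, which would fail with EADDRINUSE without the option. Each socket can then connect() to a different remote endpoint, keeping every 5-tuple unique.

```python
import socket

def reuseport_socket(addr: str, port: int) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Allow several sockets to share the same (addr, port) pair.
    # The option must be set on every socket before bind().
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind((addr, port))
    return s

a = reuseport_socket("127.0.0.1", 0)      # let the kernel pick a port
port = a.getsockname()[1]
b = reuseport_socket("127.0.0.1", port)   # second bind to the same port succeeds
print(a.getsockname(), b.getsockname())
```

Binding each socket to a specific interface address before connecting (as in the question's eth1/eth2 setup) multiplies the usable ephemeral range by the number of local IPs, since the local-IP part of the 5-tuple differs.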

Why are some connections dropped when the TCP connection count reaches 1000?

I wrote two programs using sockets. The server listens on a port and opens a thread per connection. The client initiates the connections.
When I run the server and client locally, I can create 5000 connections. However, when I put the server program on a cloud server and use watch ss -s to watch the TCP connection count, some of the connections are closed automatically while the 5000 connections are being established.
Even if I only set up 1300 connections, some TCP connections are still closed. I have used the ulimit -n 65536 command to raise the open-file limit to 65536 on both the local and remote machines.
The result is also unstable: out of the next 1300 connections, perhaps only 500 will succeed while 800 fail.
Why would this happen? Is it because of network speed? Or because of too many threads? My local test shows no problem.
The following is the output of the watch ss -s command:
TCP: 1310 (estab 953, closed 348, orphaned 0, synrecv 0, timewait 0/0), ports 0
Transport Total IP IPv6
* 0 - -
RAW 0 0 0
UDP 10 7 3
TCP 962 959 3
INET 972 966 6
FRAG 0 0 0
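One variable worth ruling out when results differ between local and cloud runs is the server side's listen backlog and accept speed: if the accept loop falls behind a connection burst, queued connections can be dropped or reset. A minimal local baseline, assuming nothing about the cloud environment (N and BACKLOG are arbitrary test values):

```python
import socket
import threading

N = 200          # number of concurrent client connections (kept small here)
BACKLOG = 128    # listen() backlog; a small value can drop connection bursts

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(BACKLOG)
host, port = server.getsockname()

accepted = []
def accept_loop():
    # Accept until all expected connections are in; a slow loop here
    # is exactly what lets the backlog overflow under a real burst.
    while len(accepted) < N:
        conn, _ = server.accept()
        accepted.append(conn)

t = threading.Thread(target=accept_loop, daemon=True)
t.start()

clients = [socket.create_connection((host, port)) for _ in range(N)]
t.join()
print(f"established {len(accepted)} of {N} connections")
```

If this holds up locally but the same client fails against the cloud host, the drops are more likely coming from something in between (provider connection limits, conntrack table size, SYN-flood protection) than from the application code.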

mount.nfs: Connection timed out on ubuntu 14.04.1 LTS

I am trying to create an NFS server for the first time. When trying to mount from the client, I get the error "mount.nfs: Connection timed out". My server is Ubuntu 14.04.5 LTS while my client is Ubuntu 14.04.1 LTS. Following are the steps I have performed.
On Server Side:
# vi /etc/exports
/home/nfs 192.168.13.81(rw,async,no_root_squash)
# sudo service nfs-kernel-server restart
# sudo exportfs -u
/home/nfs 192.168.13.81
On Client Side:
# sudo mount 192.168.13.80:/home/nfs /home/nfs
mount.nfs: Connection timed out
On trying:
# sudo mount -t nfs4 -o proto=tcp,port=2049 192.168.13.80:/home/nfs /home/nfs
mount.nfs4: mounting 192.168.13.80:/home/nfs failed, reason given by server: No such file or directory
"# rpcinfo -p" gives:
program vers proto port service
100000 4 tcp 111 portmapper
100000 3 tcp 111 portmapper
100000 2 tcp 111 portmapper
100000 4 udp 111 portmapper
100000 3 udp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 39340 status
100024 1 tcp 49970 status
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 2 tcp 2049
100227 3 tcp 2049
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
I am not sure what else to try. Any help would be highly appreciated. Thanks.
You don't need to restart the nfs service. After you add the line below to /etc/exports:
/home/nfs 192.168.13.81(rw,async,no_root_squash)
just re-export the NFS directories:
exportfs -ra
Then use nfs4 to mount it:
mount -t nfs4 192.168.13.80:/home/nfs /home/nfs
You may also want to check whether the client has NFS access to the server with the following command:
rpcinfo -p 192.168.13.80
You will get a result if the client has access, with lines such as portmapper, nfs, mountd, nlockmgr, nfs_acl and rquotad.

Weird TCP connection on Oracle Linux

On Oracle Linux "Linux bjzv0880 3.8.13-16.2.1.el6uek.x86_64 #1 SMP Thu Nov 7 17:01:44 PST 2013 x86_64 x86_64 x86_64 GNU/Linux"
I have 1 TCP Server and 2 TCP Clients running with the connection status below:
[root]# netstat -anp | grep 58000
tcp 0 0 192.168.250.102:58000 0.0.0.0:* LISTEN 3614/AppServer
tcp 0 0 192.168.250.102:44500 192.168.250.102:58000 ESTABLISHED 3673/AppClient1
tcp 0 0 192.168.250.102:44488 192.168.250.102:58000 ESTABLISHED 3631/AppClient2
tcp 0 0 192.168.250.102:58000 192.168.250.102:44500 ESTABLISHED 3614/AppServer
tcp 0 0 192.168.250.102:58000 192.168.250.102:44488 ESTABLISHED 3614/AppServer
Then I forcefully stopped the AppServer without cleaning up the socket, and made the AppClient* processes try to reconnect to the AppServer very quickly. After a little while, I got a weird connection:
[root]# netstat -anp | grep 58000
tcp 0 0 192.168.250.102:58000 192.168.250.102:58000 ESTABLISHED 3673/AppClient1
Note: I captured the TCP communication with Wireshark, and from the traffic log:
1. there were 2 rounds of connection retries from a source port selected by the OS
2. in the 1st round, 58000 was not selected by the OS
3. in the 2nd round, 58000 was selected, and the connection happened to be established
How could this be possible? I'd appreciate your advice.
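For what it's worth, this looks like the well-known TCP "self-connect" case: once the listener is gone, a reconnecting client whose ephemeral port happens to be chosen as 58000 sends a SYN to 127.0.0.1:58000 that matches its own socket, and the kernel completes it as a TCP simultaneous open. It can be reproduced directly (the port here is whatever the kernel assigns):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))        # pin our local port in advance
port = s.getsockname()[1]
s.connect(("127.0.0.1", port))  # connect to our own address: simultaneous open
print(s.getsockname(), "<->", s.getpeername())
s.send(b"hello")
print(s.recv(5))                # the socket talks to itself: b'hello'
```

Restricting the ephemeral range (ip_local_port_range) so it cannot include the server's listening port is a common way to rule this out.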
