Server Connecting to Itself - Netstat Results - linux

I've been having a problem with my server, and the host is refusing to look into the issue.
It's a dedicated CentOS machine with DirectAdmin, nothing out of the ordinary, with a PHP/MySQL site running on it.
So I ran a netstat command on the box and got this (x's in place to mask live data):
netstat -plan | grep :80 | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nk 1
1 xx.xx.xx.xx
1 xx.xx.xx.xx
1 xx.xx.xx.xx
109
163 xx.xx.xx.xx
344 xx.xx.xx.xx
The 163 connections, for some reason, are coming from Facebook Ireland. The 344 are from my own server itself - I'm not sure why, and I can't get to the root of the problem either - at times it balloons up to 500 or 600 connections.
Any ideas? I'm not sure if I should block the Facebook one, as I don't see why it would need to crawl the site with that many connections.
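One way to narrow this down is to keep netstat's PID/program column instead of cutting it away (a sketch; assumes the net-tools netstat with -p available and root privileges):
netstat -plan | grep ':80' | awk '{print $5, $7}' | sort | uniq -c | sort -n
The seventh column shows PID/program name, so you can see whether the 344 self-connections belong to httpd itself or to some other local process.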
Thanks a lot!

Related

monitoring linux server sockets or files

I have the famous SocketException 'too many open files' bug.
I am running an Apache HTTP server, a Tomcat server, and a MySQL database on my server.
I checked the limit of open files with ulimit -n, which gave me 1024.
When I check how many files are open with lsof -u tomcat, it gives me 5;
the same for mysql. I'm not sure what the problem is, but I also get a 'readlink: permission denied' error.
I want to monitor the socket connections and open files on my server. I thought about running the described Linux commands in a shell script and mailing the output to myself.
The other option, I think, is to use netstat and count the connections, but it loads very slowly and gives me 'getnameinfo failed' errors.
What would be the better command to monitor the bug I have?
EDIT:
SHOW GLOBAL STATUS LIKE '%open%';
Variable_name Value
Com_ha_open 0
Com_show_open_tables 0
Open_files 8
Open_streams 0
Open_table_definitions 87
Open_tables 64
Opened_files 673
Opened_table_definitions 87
Opened_tables 628
Slave_open_temp_tables 0
SHOW GLOBAL VARIABLES LIKE '%open%';
Variable_name Value
have_openssl DISABLED
innodb_open_files 300
open_files_limit 2000
table_open_cache 64
SHOW GLOBAL VARIABLES LIKE '%connect%';
Variable_name Value
character_set_connection latin1
collation_connection latin1_swedish_ci
connect_timeout 10
init_connect
max_connect_errors 10
max_connections 400
max_user_connections 0
SHOW GLOBAL STATUS LIKE '%connect%';
Variable_name Value
Aborted_connects 1
Connections 35954
Max_used_connections 102
Ssl_client_connects 0
Ssl_connect_renegotiates 0
Ssl_finished_connects 0
Threads_connected 11
You may check ulimit values with 'ulimit -a' to determine the Open Files capacity.
From the OS command prompt, run ulimit -n 8192 and press Enter to raise the open-files limit dynamically.
To make this change persist across an OS restart, the following URL can be your guide:
https://glassonionblog.wordpress.com/2013/01/27/increase-ulimit-and-file-descriptors-limit/
Where their example uses a capacity of 500000, use 8192 for your system.
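The persistent form of that change usually lives in /etc/security/limits.conf; a minimal sketch, assuming the Tomcat process runs as the user tomcat and 8192 is the limit you settled on:
# /etc/security/limits.conf - raise the per-process open-file limit
tomcat soft nofile 8192
tomcat hard nofile 8192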
Suggestions to consider for your my.cnf [mysqld] section:
thread_cache_size=100 # to support your max_used_connections of 102
max_user_connections=400 # from 0 to match max_connections requested
table_open_cache=800 # from 64 to reduce Opened_tables count
innodb_open_files=800 # from 300 to match table_open_cache requested
Implementing these details should avoid the 'too many open files' message.
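As for the monitoring idea from the question, a minimal cron-able sketch (assuming a working local mail command; the service user names and the address admin@example.com are placeholders):
#!/bin/sh
# report open-file and connection counts by mail
{
  echo "== open files per service user =="
  for u in tomcat mysql apache; do
    printf '%s: ' "$u"
    lsof -u "$u" 2>/dev/null | wc -l
  done
  echo "== established TCP connections =="
  netstat -ant | grep -c ESTABLISHED
} | mail -s "open files / sockets report" admin@example.com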

SMTP between two linux machines

Is there a way to use SMTP to pass messages between two Linux servers? Even if they are not dedicated mail servers, I was wondering if I could just use SMTP to communicate between them.
I have two ubuntu servers: 111.111.111.111 and 222.222.222.222. On each server I have set up user accounts master and node, respectively.
On 111.111.111.111, the file /var/spool/mail/master exists.
On 222.222.222.222, the file /var/spool/mail/node exists.
On 111.111.111.111, /etc/hosts has the line 222.222.222.222 node.us
On 222.222.222.222, /etc/hosts has the line 111.111.111.111 master.us
Assume that sudo iptables --list shows that port 25 is accepted from all addresses, and that it is listening:
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN
tcp6 0 0 :::25 :::* LISTEN
Could I get something like this to work from the master (111.111.111.111) server?
sendmail -s "subject" node@node.us < sometextfile.txt
or some equivalent using sendEmail or mutt, etc.?
James -
Ideally you should be able to do what you are suggesting. You need to make sure that name resolution is working for those hosts-file entries, though - I did a quick test of this and kept getting undeliverable messages because no AAAA (IPv6) record was found.
Also, the command to send your message should use the mail command instead of sendmail, like this:
mail -s "subject" node@node.us < sometextfile.txt
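For reference, the exchange that mail performs can also be driven by hand over port 25 with netcat, which shows that any listening MTA will do (a sketch; the names come from the question and assume the receiving MTA accepts mail for node.us):
nc node.us 25
HELO master.us
MAIL FROM:<master@master.us>
RCPT TO:<node@node.us>
DATA
Subject: test from master

message body goes here
.
QUIT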

Large amount of http connections from self

I have a relatively high-traffic Linux/Apache web server running Wordpress (oh, the headaches). I think our developer configured the memcache settings incorrectly, because of what I see when I run this command to look at all incoming httpd connections:
sudo netstat -anp |grep 'tcp\|udp' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
I get:
1 68.106.x.x
1 74.125.x.x
1 74.125.x.x
1 74.125.x.x
1 74.125.x.x
15 0.0.0.0
70 173.0.x.x
194 127.0.0.1
...I see that I have 194 connections from 127.0.0.1 and very few from actual public IPs. Looking at netstat further, I can see those are going to port 11211 (memcached). Even if I restart httpd, it only takes a few seconds for the open memcached connections from 127.0.0.1 to skyrocket again, and almost immediately we are pushing our max httpd process limit (currently MaxClients = 105).
Here are the details for those connections:
tcp 0 0 127.0.0.1:26210 127.0.0.1:11211 ESTABLISHED -
cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS=""

iptables redirect from external interface to loopback's port?

I am trying to redirect a port from my LXC container to the loopback interface.
My LXC container is configured with the lxcbr1 bridge at 11.0.3.1.
First I test connecting with netcat from host to LXC and from LXC to host. Success.
localhost:
# nc -l 1088
lxc:
# nc 11.0.3.1 1088
Hello!
And localhost sees the message "Hello!". Success!
Then I redirect the port this way:
# iptables -t nat -A PREROUTING -i lxcbr1 -p tcp -d 11.0.3.1 --dport 1088 -j DNAT --to-destination 127.0.0.1:1088
# nc -l 127.0.0.1 1088
Then I try to connect from the LXC container:
# nc 11.0.3.1 1088
Hello !
But localhost doesn't see this message.
Where am I wrong?
I found this answer: https://serverfault.com/questions/211536/iptables-port-redirect-not-working-for-localhost
It says that loopback traffic doesn't go through PREROUTING. What should I do?
DNAT for loopback traffic is not possible.
I found a lot of similar questions (1, 2, 3, etc.).
According to RFC 5735, the network 127.0.0.0/8 should not be routed outside the host itself:
127.0.0.0/8 - This block is assigned for use as the Internet host loopback address. A datagram sent by a higher-level protocol to an address anywhere within this block loops back inside the host. This is ordinarily implemented using only 127.0.0.1/32 for loopback. As described in [RFC1122], Section 3.2.1.3, addresses within the entire 127.0.0.0/8 block do not legitimately appear on any network anywhere.
RFC 1700, page 5, «Should never appear outside a host».
One way out is to use inetd.
There are many inetd-style servers: xinetd, etc.
My choice was rinetd.
I used this manual: http://www.howtoforge.com/port-forwarding-with-rinetd-on-debian-etch
My config looks like this:
$ cat /etc/rinetd.conf
# bindadress bindport connectaddress connectport
11.0.3.1 1081 127.0.0.1 1081
11.0.3.1 1088 127.0.0.1 1088
I restart rinetd:
$ /etc/init.d/rinetd restart
Stopping internet redirection server: rinetd.
Starting internet redirection server: rinetd.
And redirection works like a charm.
I will not close this question myself, because I am still looking for a more elegant solution to this task. Having to bolt on an extra daemon - netcat, inetd, whatever - feels wrong to me. This is my opinion.
Just for reference, if someone stumbles upon this: on newer kernel versions (probably >= 3.6), all you need to do in addition is:
~# echo 1 | sudo tee /proc/sys/net/ipv4/conf/all/route_localnet
REFERENCE: ipv4: Add interface option to enable routing of 127.0.0.0/8
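Putting that together with the original rule, the whole recipe on such kernels would look roughly like this (a sketch using the bridge name and addresses from the question):
# allow 127.0.0.0/8 destinations to be routed on the bridge interface
sysctl -w net.ipv4.conf.lxcbr1.route_localnet=1
# then the original DNAT rule applies as-is
iptables -t nat -A PREROUTING -i lxcbr1 -p tcp -d 11.0.3.1 --dport 1088 -j DNAT --to-destination 127.0.0.1:1088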

How to tie a network connection to a PID without using lsof or netstat?

Is there a way to tie a network connection to a PID (process ID) without forking to lsof or netstat?
Currently lsof is being used to poll which connections belong to which process ID. However, lsof and netstat can be quite expensive on a busy host, and I would like to avoid forking to these tools.
Is there someplace similar to /proc/$pid where one can look to find this information? I know what the network connections are by examining /proc/net, but I can't figure out how to tie them back to a PID. Under /proc/$pid there doesn't seem to be any network information.
The target hosts are Linux 2.4 and Solaris 8 to 10. If possible, I'd like a solution in Perl, but I am willing to do C/C++.
Additional notes:
I would like to emphasize that the goal here is to tie a network connection to a PID. Getting one or the other is trivial, but putting the two together in a low-cost manner appears to be difficult. Thanks for the answers so far!
I don't know how often you need to poll, or what you mean by "expensive", but with the right options both netstat and lsof run a lot faster than in the default configuration.
Examples:
netstat -ltn
shows only listening TCP sockets and omits the (slow) name resolution that is on by default.
lsof -b -n -i4tcp:80
omits all blocking operations and name resolution, and limits the selection to IPv4 TCP sockets on port 80.
On Solaris you can use pfiles(1) to do this:
# ps -fp 308
UID PID PPID C STIME TTY TIME CMD
root 308 255 0 22:44:07 ? 0:00 /usr/lib/ssh/sshd
# pfiles 308 | egrep 'S_IFSOCK|sockname: '
6: S_IFSOCK mode:0666 dev:326,0 ino:3255 uid:0 gid:0 size:0
sockname: AF_INET 192.168.1.30 port: 22
For Linux, this is more complex (gruesome):
# pgrep sshd
3155
# ls -l /proc/3155/fd | fgrep socket
lrwx------ 1 root root 64 May 22 23:04 3 -> socket:[7529]
# fgrep 7529 /proc/3155/net/tcp
6: 00000000:0016 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 7529 1 f5baa8a0 300 0 0 2 -1
00000000:0016 is 0.0.0.0:22. Here's the equivalent output from netstat -a:
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
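The inode-to-PID step can be automated by scanning every process's fd directory for the socket inode, so the whole lookup stays inside /proc (a sketch; reading other users' /proc/$pid/fd requires root):
# find which PID holds the socket with inode 7529
for pid in /proc/[0-9]*; do
  ls -l "$pid/fd" 2>/dev/null | grep -q 'socket:\[7529\]' && echo "${pid#/proc/}"
done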
Why don't you look at the source code of netstat and see how it gets the information? It's open source.
For Linux, have a look at the /proc/net directory (for example, cat /proc/net/tcp lists your TCP connections). Not sure about Solaris.
Some more information here.
I guess netstat basically uses this exact same information, so I don't know if you will be able to speed it up a whole lot. Be sure to try netstat's '-an' flags so it does NOT resolve IP addresses to hostnames in real time (as this can take a lot of time due to DNS queries).
The easiest thing to do is
strace -f netstat -na
on Linux (I don't know about Solaris). This will give you a log of all the system calls made. It's a lot of output, some of which will be relevant. Take a look at the files in the /proc file system that it opens; this should lead you to how netstat does it. Incidentally, ltrace will allow you to do the same thing through the C library. Not useful for you in this instance, but it can be useful in other circumstances.
If it's not clear from that, then take a look at the source.
Take a look at these answers, which thoroughly explore the options available:
How I can get ports associated to the application that opened them?
How to do like "netstat -p", but faster?
