list all open files by ip - linux

Is there a way to show all open files by IP address on linux?
I use this:
netstat -atun | awk '{print $5}' | cut -d: -f1 | sed -e '/^$/d' |sort | uniq -c | sort -n
to show all connections from IP sorted by number of connections.
How do I know what these IPs are hitting?
Thanks in advance!

If you can find a way to identify the process that has the socket open in netstat, you can use ls -l /proc/<pid>/fd to see what files that process has open. Of course, many of those files may not be accessed from the network - for example, a typical Apache server will have /var/log/httpd/access_log and /var/log/httpd/error_log open, and quite possibly some other files too. And it will only be a moment in time: what files that process has open five seconds or a millisecond from now may be quite different.
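To tie the two steps together, here is a minimal sketch. It assumes the net-tools netstat (the -p flag shows the owning PID/program), and the PID 1234 is purely illustrative:

```shell
# -t TCP, -n numeric addresses, -p show the owning PID/program
# (root is needed to see other users' sockets)
sudo netstat -tnp | awk 'NR > 2 && $7 != "-" { split($7, a, "/"); print $5, a[1] }'

# then, for a PID of interest (1234 is illustrative), list its open files:
sudo ls -l /proc/1234/fd
```

The awk step skips the two header lines and prints the remote address alongside the PID extracted from the PID/Program column.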
I presume you don't just let anything/anyone access the server, so I'm guessing it's a webserver or some such, in which case it would probably be easier to put some code into your web interface to track who does what.

Related

Tcp connection leak issue

Observed an issue in which the system throws "too many open files" after working fine for a few hours.
Observed that many TCP connections are stuck in the CLOSE_WAIT state:
sudo lsof | grep ":http (CLOSE_WAIT)" | wc -l -> 16215
The number increases with time, and in a few hours it crosses the maximum allowed limit.
Also ran a netstat command,
netstat -ant | awk '{print $6}' | sort | uniq -c | sort -n, and its output is -> 122 CLOSE_WAIT.
Why is the output from the netstat command so much lower than from the lsof command? Both return CLOSE_WAIT connections and should give approximately the same value.
Once I know that connections to a specific service are causing this issue, how should I identify the exact code where this happens? I went through the client code that connects to the service and I don't see any connection leak.
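One way to cross-check the two numbers, as a sketch: lsof can print an entry more than once for the same socket (for instance when forked children or multiple descriptors share it), so count unique sockets rather than raw lines, and compare against the kernel's own view via ss:

```shell
# unique (PID, connection) pairs in CLOSE_WAIT, instead of raw lsof line counts
sudo lsof -nP -iTCP -sTCP:CLOSE_WAIT | tail -n +2 | awk '{print $2, $9}' | sort -u | wc -l

# the kernel's own socket table, no file-descriptor enumeration involved
ss -tn state close-wait | tail -n +2 | wc -l
```

If the deduplicated lsof count and the ss count agree, the 16215 figure was inflated by duplicate entries rather than by 16215 distinct sockets.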

How can I ask Ant Media Server for the number of current viewers

I tried in this way:
netstat -a | grep EST|grep -v localhost| grep \:5080| cut -d' ' -f 16-17|cut -d':' -f1|sort|uniq|wc -l
But it obviously can't distinguish multiple viewers behind the same IP.
Surely there is a better way to do this...
Thank you!
Do you want to access your total viewer count?
You can reach the viewer numbers with the REST service. Could you please check https://antmedia.io/rest/#/BroadcastRestService/getBroadcastStatistics
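A sketch of that REST call with curl; the URL path, app name (LiveApp), stream id (mystream), and the JSON field name are assumptions based on the linked REST documentation, so adjust them for your installation:

```shell
# fetch broadcast statistics for one stream and extract the WebRTC viewer count
# (field name totalWebRTCWatchersCount is an assumption; check the linked docs)
curl -s "http://localhost:5080/LiveApp/rest/v2/broadcasts/mystream/broadcast-statistics" \
  | sed -n 's/.*"totalWebRTCWatchersCount":\([0-9-]*\).*/\1/p'
```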

Nested for loops in Shellscript

I need help with a shell script.
I have a for loop in my script which creates files named with a variable, like file.$variable.
For example, I have a list of servers in a servers.txt file. From there I read each server name, connect to it, and get some data from it. The filenames will be file.$server.
Using a for loop, I create one file per server:
for server in `cat servers.txt`; do
    ssh $server ls | awk '{print $2}' | tee -a files.$server.txt
done
This one works fine.
Now, from those generated files, I need to run one more for loop that reads each file and gives its content as input to another command.
ex:
for file in `cat files.$servers.txt`; do
    cat $file | awk '{print $2}' | tee -a column.$file.txt
done
But it is not working for me in the second loop. Please help.
In a nutshell, it's a nested loop. Excuse my English.
Thanks in advance.
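For reference, one way the nested loops could be written, as a sketch. It assumes the first loop produced files matching files.<server>.txt; the second loop iterates over those files directly, since $servers is never set in the original:

```shell
#!/usr/bin/env bash
# first loop: one output file per server listed in servers.txt
# (ssh -n keeps ssh from swallowing the rest of the input when used in loops)
for server in $(< servers.txt); do
    ssh -n "$server" ls | awk '{print $2}' | tee -a "files.$server.txt"
done

# second loop: iterate over the generated files themselves
for f in files.*.txt; do
    awk '{print $2}' "$f" | tee -a "column.$f"
done
```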

Is it possible to find which process is using OpenSSL in Linux?

Suppose a process is running and accessing the OpenSSL shared library to perform some operation. Is there any way to find the PID of this process?
Is there any way to find which core this process is running on?
If possible, does it require any special privileges, like sudo?
OS: Debian/Ubuntu
Depending on what exactly you want, something like this might do:
lsof | grep /usr/lib64/libcrypto.so | awk '{print $1, $2}' | sort -u
This essentially:
uses lsof to list all open files on the system
searches for the OpenSSL library path (which also catches versioned names like libcrypto.so.1.0)
selects the process name and PID
removes any duplicate entries
Note that this will also output processes using previous instances of the shared library file that were e.g. updated to a new version and then deleted. It also has the minor issue of outputting duplicates when a process has multiple threads with different names.
And yes, this may indeed require elevated privileges, depending on the permissions on your /proc directory.
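An alternative sketch that avoids lsof entirely: scan each process's /proc/<pid>/maps for the library name (again, root may be needed to read other users' map files):

```shell
# print the PID of every process whose memory maps reference libcrypto
for pid in /proc/[0-9]*; do
    grep -q 'libcrypto' "$pid/maps" 2>/dev/null && echo "${pid#/proc/}"
done
```

This reads the same information lsof reports for mapped files, just straight from the kernel's per-process view.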
If you really do need the processor core(s), you could try something like this (credit to dkaz):
lsof | grep /usr/lib64/libcrypto.so | awk '{print $2}' |
xargs -r ps -L --no-headers -o pid,psr,comm -p | sort -u
Adding the lwp variable to the ps command would also show the thread IDs:
lsof | grep /usr/lib64/libcrypto.so | awk '{print $2}' |
xargs -r ps -L --no-headers -o pid,lwp,psr,comm -p
PS: The what-core-are-the-users-of-this-library-on requirement still sounds a bit unusual. It might be more useful if you mentioned the problem that you are trying to solve in broader terms.
thkala's answer is almost right. The problem is that it is only half the answer, since it doesn't give the core.
I would run that:
$ lsof | grep /usr/lib64/libcrypto.so |awk '{print $2}' | xargs ps -o pid,psr,comm -p

Bash script to get server health

I'm looking to monitor some aspects of a farm of servers that are necessary for the application that runs on them.
Basically, I'm looking to have a file on each machine which, when accessed via HTTP (on a VLAN) with curl, will spit out the information I'm looking for, which I can then log into the database with a daemon that sits in a loop and checks the health of all the servers one by one.
The info I'm looking to get is:
<load>server load</load>
<free>md0 free space in MB</free>
<total>md0 total space in MB</total>
<processes># of nginx processes</processes>
<time>timestamp</time>
What's the best way of doing that?
EDIT: We are using Cacti and OpenNMS; however, what I'm looking for here is data that is necessary for the application that runs on these servers. I don't want to complicate things by relying on any third-party software to fetch this basic data, which can be gotten with a few Linux commands.
Make a cron entry that:
executes a shell script every few minutes (or whatever frequency you want)
saves the output in a directory that's published by the web server
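For example, the crontab entry could look like this (the script and web-root paths are illustrative):

```
# m h dom mon dow  command - run every five minutes, publish under the web root
*/5 * * * * /usr/local/bin/server-health.sh > /var/www/html/health.xml
```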
Assuming your text is literally what you want, this will get you 90% of the way there:
#!/usr/bin/env bash
LOAD=$(uptime | cut -d: -f5 | cut -d, -f1)
FREE=$(df -m / | tail -1 | awk '{ print $4 }')
TOTAL=$(df -m / | tail -1 | awk '{ print $2 }')
PROCESSES=$(ps aux | grep '[n]ginx' | wc -l)
TIME=$(date)
cat <<-EOF
<load>$LOAD</load>
<free>$FREE</free>
<total>$TOTAL</total>
<processes>$PROCESSES</processes>
<time>$TIME</time>
EOF
Sample output:
<load> 0.05</load>
<free>9988</free>
<total>13845</total>
<processes>6</processes>
<time>Wed Apr 18 22:14:35 CDT 2012</time>
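On the collector side, the daemon loop could be sketched like this (hostnames and the health.xml path are assumptions; the sed pulls one field out of the XML snippet above):

```shell
#!/usr/bin/env bash
# poll each server and extract the load value from its health output
while read -r host; do
    xml=$(curl -s "http://$host/health.xml")
    load=$(printf '%s\n' "$xml" | sed -n 's|.*<load> *\([0-9.]*\)</load>.*|\1|p')
    echo "$host $load"   # insert into the database here instead of echoing
done < servers.txt
```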
