A little assistance needed regarding a Linux shell script

Actually this script is well known: DDoS Deflate.
But after using it, I notice I'm getting some emails without an IP, like:
Banned the following ip addresses on Thu Mar 21 21:19:01 CET 2013
138 with 138 connections
From the source, and from the "netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr" command, I notice we may need to skip the first line, because it looks like the first line is just the total number of open connections.
Can someone who knows this scripting language check whether I'm right and fix it? Maybe some additional basic check, like if ip == number of connections, break?

If you'd like to do what you ask:
netstat -ntu |
awk 'NR>1{sub(/:.*/, "", $5); print $5}' |
sort |
uniq -c |
sort -nr
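As for the extra sanity check you mention, here is a minimal sketch (not the actual DDoS Deflate code) that reads the count/address pairs and skips anything whose second field doesn't look like an IPv4 address, which also catches any leftover header junk:
netstat -ntu |
awk 'NR>1{sub(/:.*/, "", $5); print $5}' |
sort | uniq -c | sort -nr |
while read -r count ip; do
    # act only on fields that look like a dotted-quad IPv4 address
    echo "$ip" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' || continue
    echo "$count connections from $ip"
done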

Calculating Awk Output divide by mega=1048576

Hi, can someone please let me know how I can convert the output field from this command to MB?
The command below shows the 20 largest files in the directory and its subdirectories, but I need to convert the output to MB. In my script I use an array, but if you could show me how to use awk to divide the output by mega=1048576, I would really appreciate it. Please explain the options!
ls -1Rs | sed -e "s/^ *//" | grep "^[0-9]" | sort -nr | head -n20 | awk '{print $1}'
Thanks
You don't show any sample input or expected output, so this is a guess, but this MAY be what you want (assuming you can't follow all the other good advice about not parsing ls output, and that you don't have GNU awk for internal sorting). Note that on Linux, ls -s reports sizes in 1K blocks by default, so dividing by 1024 gives MB:
ls -1Rs | awk '/^ *[0-9]/' | sort -nr | awk 'NR<21{print $1/1024}'
Note that you don't need all those other commands and pipes when you're already using awk.
To turn it into MB, you have to divide it by 1024:
ls -1Rs | sed -e "s/^ *//" | grep "^[0-9]" | sort -nr | head -n20 | awk '{print $1 / 1024}'
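If you specifically want to divide by mega=1048576, the sizes need to be in bytes rather than blocks. Here is a sketch using GNU find (-printf is a GNU extension), which also sidesteps parsing ls; note this simple version truncates filenames containing spaces:
# %s = size in bytes, %p = path
find . -type f -printf '%s %p\n' | sort -nr | head -n 20 |
    awk '{printf "%.2f MB\t%s\n", $1/1048576, $2}'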

WHM server access logs for all accounts

I've had a lot of issues with hacks and DDoS attacks on a few servers, though this is usually caused by some very simple things. However, I've found it invaluable to be able to look through an account's access log and list the hit pages in order of lowest to highest, using the following through SSH:
cat example.co.uk | cut -d\" -f2 | awk '{print $1 " " $2}' | cut -d? -f1 | sort | uniq -c | sort -n
However, this means I need to run this against every single account's access log. Is there a server-wide version or script out there to scan all access logs for activity?
You can use your command in a for loop to check all the domain access log files:
for i in $(awk '{print $1}' /etc/trueuserdomains | cut -d: -f1); do
    echo "Pages list of $i"
    cat /usr/local/apache/domlogs/$i* | grep GET | cut -d\" -f2 |
        awk '{print $1 " " $2}' | cut -d? -f1 | sort | uniq -c | sort -n
done > /root/report.txt
Once it's done, please check your /root/report.txt file.
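If you'd rather avoid the word-splitting pitfalls of command substitution entirely, a while read variant works too (just a sketch with the same logic, assuming the domain: user format of /etc/trueuserdomains):
cut -d: -f1 /etc/trueuserdomains | while read -r domain; do
    echo "Pages list of $domain"
    cat /usr/local/apache/domlogs/"$domain"* 2>/dev/null |
        grep GET | cut -d\" -f2 | awk '{print $1 " " $2}' |
        cut -d? -f1 | sort | uniq -c | sort -n
done > /root/report.txt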

How to grep only one of each address (Linux)

Okay, so let's say I have a list of addresses in a text file like this:
https://www.amazon.com
https://www.google.com
https://www.msn.com
https://www.google.com
https://www.netflix.com
https://www.amazon.com
...
There is a whole bunch of other stuff in there, but basically the issue I'm having is that after running this:
grep "https://" addresses.txt | cut -d"/" -f3
I get www.amazon.com and www.google.com twice, and I want to get each of them only once. I don't know how to make the search only grep for things that are unique.
Pipe your output to sort and uniq:
grep "https://" addresses.txt | cut -d"/" -f3 | sort | uniq
You can use sort for this purpose.
Just add another pipe to your command and use sort's unique feature to remove duplicates:
grep 'https://' addresses.txt | cut -d"/" -f3 | sort -u
EDIT: you can use sed instead of grep and cut, which reduces the command to
sed -n 's#https://\([^/]*\).*#\1#p' < addresses.txt | sort -u
I would filter the results post-grep, e.g. using sort -u to sort and then produce a set of unique entries.
You can also use uniq for this, but the input has to be sorted in advance.
This is the beauty of being able to pipe these utilities together: rather than a single grepping/sorting/uniq(ing) tool, you get distinct executables that you can chain together however you wish.
grep "https://" addresses.txt | cut -d"/" -f3 | sort | uniq is what you want
With awk you can use a single Unix command instead of four with three pipes:
awk 'BEGIN {FS="://"}; { myfilter = match($1,/https/); if (myfilter) loggeddomains[$2]=0} END {for (mydomains in loggeddomains) {print mydomains}}' addresses.txt
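If ordering by first appearance matters, a shorter awk idiom (a sketch, equivalent in spirit to the above) uses the common !seen[...]++ trick to keep only the first occurrence of each domain:
awk -F'://' '$1 == "https" && !seen[$2]++ {print $2}' addresses.txt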

Bash script to get server health

I'm looking to monitor some aspects of a farm of servers that are necessary for the application that runs on them.
Basically, I'm looking to have a file on each machine which, when accessed over HTTP (on a VLAN) with curl, will spit out the information I'm looking for, which I can then log into the database with a daemon that sits in a loop and checks the health of all the servers one by one.
The info I'm looking to get is:
<load>server load</load>
<free>md0 free space in MB</free>
<total>md0 total space in MB</total>
<processes># of nginx processes</processes>
<time>timestamp</time>
What's the best way of doing that?
EDIT: We are using Cacti and OpenNMS; however, what I'm looking for here is data that is necessary for the application that runs on these servers. I don't want to complicate it by having it rely on any third-party software to fetch this basic data, which can be gotten with a few Linux commands.
Make a cron entry that:
executes a shell script every few minutes (or whatever frequency you want)
saves the output in a directory that's published by the web server
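For example, a crontab entry along these lines (the script path and web root here are just placeholders, not from the answer itself):
# run every 5 minutes and publish the result under the web root
*/5 * * * * /usr/local/bin/server-health.sh > /var/www/html/health.xml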
Assuming your text is literally what you want, this will get you 90% of the way there:
#!/usr/bin/env bash
# 1-minute load average, taken from uptime's "load average:" field
LOAD=$(uptime | cut -d: -f5 | cut -d, -f1)
# free and total space in MB for the filesystem mounted at /
FREE=$(df -m / | tail -1 | awk '{ print $4 }')
TOTAL=$(df -m / | tail -1 | awk '{ print $2 }')
# the [n]ginx bracket trick keeps the grep process itself out of the count
# (quoted so the shell doesn't glob it)
PROCESSES=$(ps aux | grep '[n]ginx' | wc -l)
TIME=$(date)
cat <<-EOF
<load>$LOAD</load>
<free>$FREE</free>
<total>$TOTAL</total>
<processes>$PROCESSES</processes>
<time>$TIME</time>
EOF
Sample output:
<load> 0.05</load>
<free>9988</free>
<total>13845</total>
<processes>6</processes>
<time>Wed Apr 18 22:14:35 CDT 2012</time>
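On the polling side, the daemon's check could be as simple as this sketch (the hostnames and the health.xml path are assumptions for illustration):
for host in web01 web02 web03; do
    # fetch each server's health file; flag hosts that don't respond
    curl -s --max-time 5 "http://$host/health.xml" || echo "<error>$host unreachable</error>"
done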

Find the IP address of the client in an SSH session

I have a script that is to be run by a person that logs in to the server with SSH.
Is there a way to find out automatically what IP address the user is connecting from?
Of course, I could ask the user (it is a tool for programmers, so no problem with that), but it would be cooler if I just found out.
Check if there is an environment variable called
$SSH_CLIENT
or
$SSH_CONNECTION
(or any other environment variable) that gets set when the user logs in, then process it in the user's login script.
Extract the IP:
$ echo $SSH_CLIENT | awk '{ print $1}'
1.2.3.4
$ echo $SSH_CONNECTION | awk '{print $1}'
1.2.3.4
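If you are already in bash, plain parameter expansion also works, with no awk needed; this strips everything from the first space onward:
echo "${SSH_CLIENT%% *}"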
You could use the command:
server:~# pinky
that will give you something like this:
Login Name TTY Idle When Where
root root pts/0 2009-06-15 13:41 192.168.1.133
Try the following to get just the IP address:
who am i | awk '{print $5}'
Just type the following command on your Linux machine:
who
or, to extract just the address between the parentheses:
who | cut -d"(" -f2 | cut -d")" -f1
Improving on a prior answer: this gives the IP address instead of the hostname. Note that --ips is not available on OS X.
who am i --ips|awk '{print $5}' #ubuntu 14
More universal (change $5 to $6 for OS X 10.11):
WORKSTATION=$(who -m | awk '{print $5}' | sed 's/[()]//g')
WORKSTATION_IP=$(dig +short "$WORKSTATION")
if [[ -z "$WORKSTATION_IP" ]]; then WORKSTATION_IP="$WORKSTATION"; fi
echo "$WORKSTATION_IP"
who am i | awk '{print $5}' | sed 's/[()]//g' | cut -f1 -d "." | sed 's/-/./g'
export DISPLAY=`who am i | awk '{print $5}' | sed 's/[()]//g' | cut -f1 -d "." | sed 's/-/./g'`:0.0
I use this to determine my DISPLAY variable for the session when logging in via ssh and need to display remote X.
netstat -tapen | grep ssh | awk '{ print $4}'
A simple command to get a list of recent users logged in to the machine is last. This is ordered most recent first, so last | head -n 1 will show the last login. This may not be the currently logged in user though.
Sample output:
root pts/0 192.168.243.99 Mon Jun 7 15:07 still logged in
admin pts/0 192.168.243.17 Mon Jun 7 15:06 - 15:07 (00:00)
root pts/0 192.168.243.99 Mon Jun 7 15:02 - 15:06 (00:03)
root pts/0 192.168.243.99 Mon Jun 7 15:01 - 15:02 (00:00)
root pts/0 192.168.243.99 Mon Jun 7 13:45 - 14:12 (00:27)
root pts/0 192.168.243.99 Mon May 31 11:20 - 12:35 (01:15)
...
You can get it programmatically via an SSH library (https://code.google.com/p/sshxcute):
public static String getIpAddress() throws TaskExecFailException {
    ConnBean cb = new ConnBean(host, username, password);
    SSHExec ssh = SSHExec.getInstance(cb);
    ssh.connect();
    CustomTask sampleTask = new ExecCommand("echo \"${SSH_CLIENT%% *}\"");
    String result = ssh.exec(sampleTask).sysout;
    ssh.disconnect();
    return result;
}
An older thread with a lot of answers, but none are quite what I was looking for, so I'm contributing mine:
sshpid=$$
sshloop=0
while [ "$sshloop" = "0" ]; do
    if [ "$(strings /proc/${sshpid}/environ | grep ^SSH_CLIENT)" ]; then
        read sshClientIP sshClientSport sshClientDport <<< $(strings /proc/${sshpid}/environ | grep ^SSH_CLIENT | cut -d= -f2)
        sshloop=1
    else
        sshpid=$(grep PPid /proc/${sshpid}/status | awk '{print $2}')
        [ "$sshpid" = "0" ] && sshClientIP="localhost" && sshloop=1
    fi
done
This method is compatible with direct SSH, sudoed users, and screen sessions. It walks up through the process tree until it finds a PID with the SSH_CLIENT variable, then records its IP as $sshClientIP. If it gets too far up the tree, it records the IP as 'localhost' and leaves the loop.
I'm getting the following output from who -m --ips on Debian 10:
root pts/0 Dec 4 06:45 123.123.123.123
It looks like a new column was added, so {print $5} or "take the 5th column" attempts no longer work.
Try this:
who -m --ips | egrep -o '([0-9]{1,3}\.){3}[0-9]{1,3}'
Sources:
@Yvan's comment on @AlexP's answer
@Sankalp's answer
netstat -tapen | grep ssh | awk '{ print $10}'
Output:
two # in my experiment
netstat -tapen | grep ssh | awk '{ print $4}'
gives the IP address.
Output:
127.0.0.1:22 # in my experiment
But the results are mixed with other users and stuff. It needs more work.
Assuming the user opens an interactive session (that is, allocates a pseudo-terminal) and you have access to stdin, you can call an ioctl on that device to get the device number (/dev/pts/4711) and then look it up in /var/run/utmp (where you will also find the username and the IP address the connection originated from).
Usually there is also a log entry in /var/log/messages (or similar, depending on your OS) which you could grep for the username.
netstat will work (at the top you'll see something like this):
tcp 0 0 10.x.xx.xx:ssh someipaddress.or.domainame:9379 ESTABLISHED
Linux: who am i | awk '{print $5}' | sed 's/[()]//g'
AIX: who am i | awk '{print $6}' | sed 's/[()]//g'
Search for SSH connections for the "myusername" account;
take the first result string;
take the 5th column;
split by ":" and return the 1st part (the port number isn't needed, we just want the IP):
netstat -tapen | grep "sshd: myusername" | head -n1 | awk '{split($5, a, ":"); print a[1]}'
Another way:
who am i | awk '{l = length($5) - 2; print substr($5, 2, l)}'
One thumbs-up for @Nikhil Katre's answer:
The simplest command to get the last 10 users logged in to the machine is last | head.
To get all the users, simply use the last command.
The answers using who or pinky do what was basically asked, but they don't give historical session info. That might also be interesting if you want to know about someone who logged in and out again before you started checking.
If it is a multiuser system, I recommend adding the user account you are looking for:
last | grep $USER | head
EDIT:
In my case, both $SSH_CLIENT and $SSH_CONNECTION do not exist.
