WHM server access logs for all accounts - linux

I've had a lot of issues with hacks and DDoS attacks on a few servers, though this is usually caused by some very simple things. However, I've found it invaluable to be able to look through an account's access log and list the hit pages in order of lowest to highest count, using the following over SSH:
cat example.co.uk | cut -d\" -f2 | awk '{print $1 " " $2}' | cut -d? -f1 | sort | uniq -c | sort -n
However, this means I need to run it against every single account's access log. Is there a server-wide version or script out there to scan all access logs for activity?

You can use your command in a for loop to check all domains' access log files.
for i in `cat /etc/trueuserdomains | awk {'print $1'} | cut -d":" -f1`;do echo "Pages list Of $i" ; cat /usr/local/apache/domlogs/$i* | grep GET | cut -d\" -f2 | awk '{print $1 " " $2}' | cut -d? -f1 | sort | uniq -c | sort -n;done > /root/report.txt
Once it's done, please check your /root/report.txt file.
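For readability, here is the same logic written out as a short script with comments. This is a minimal sketch and assumes the standard cPanel locations used above (/etc/trueuserdomains and /usr/local/apache/domlogs); adjust the paths if your server lays them out differently.
#!/bin/bash
# Sketch: build a per-domain hit report across every cPanel access log.
# /etc/trueuserdomains lines look like "example.co.uk: username".
for domain in $(cut -d: -f1 /etc/trueuserdomains); do
    echo "Pages list of $domain"
    cat /usr/local/apache/domlogs/"$domain"* 2>/dev/null \
        | grep GET \
        | cut -d\" -f2 \
        | awk '{print $1 " " $2}' \
        | cut -d? -f1 \
        | sort | uniq -c | sort -n
done > /root/report.txt
Each block of the report then lists method and path pairs for one domain, least-hit first, exactly as the single-account command does.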

Related

List all Linux users without system users

I would like to list all users in Linux without showing system users.
How can I make it show only the usernames?
For example, with cut -d: -f1 /etc/passwd I can see all users plus the system users.
This shows all users with a UID greater than 999:
awk -F':' '$3>999 {print $1 " uid: " $3}' /etc/passwd | column -t | grep -v nobody
EDIT:
With cut showing only human users:
cut -d: -f1,3 /etc/passwd | egrep ':[0-9]{4}$' | cut -d: -f1
You can try this: awk -F: '$6 ~ /\/home/ {print}' /etc/passwd
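If you'd rather not hard-code a digit count or UID cutoff, one sketch (assuming a Linux system where /etc/login.defs defines UID_MIN, usually 1000) reads the threshold from the system itself:
# Pull the minimum regular-user UID from login.defs, then list only accounts at or above it.
uid_min=$(awk '/^UID_MIN/ {print $2; exit}' /etc/login.defs)
awk -F: -v min="${uid_min:-1000}" '$3 >= min && $1 != "nobody" {print $1}' /etc/passwd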

Calculating Awk Output divide by mega=1048576

Hi, can someone please let me know how I can convert the output field from this command to MB?
The command below shows the 20 largest files in a directory and its subdirectories,
but I need to convert the output to MB. In my script I use an array, but if you can show me how to use awk to divide the output by mega=1048576,
I would really appreciate it. Please explain the options!
ls -1Rs | sed -e "s/^ *//" | grep "^[0-9]" | sort -nr | head -n20 | awk {'print $1'}
Thanks
You don't show any sample input or expected output, so this is a guess, but this MAY be what you want (assuming you can't follow all the other good advice about not parsing ls output, and you don't have GNU awk for internal sorting):
ls -1Rs | awk '/^ *[0-9]/' | sort -nr | awk 'NR<21{print $1/1024}'
Note that you don't need all those other commands and pipes when you're already using awk.
To turn it into MB, you have to divide by 1024:
ls -1Rs | sed -e "s/^ *//" | grep "^[0-9]" | sort -nr | head -n20 | awk '{print $1 / 1024}'
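If you can follow the advice about not parsing ls output, a sketch using find and du (assuming GNU or BSD versions of both) lists the 20 largest files already converted to MB:
# du -k prints each file's size in KB; divide by 1024 to get MB.
# (Runs of whitespace inside filenames get collapsed, which is fine for a quick report.)
find . -type f -exec du -k {} + | sort -nr | head -n 20 \
    | awk '{kb=$1; $1=""; sub(/^ +/, ""); printf "%.2f MB\t%s\n", kb/1024, $0}'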

Display users on Linux with tabbed output

I am working with Linux and I am trying to display and count the users on the system. I am currently using who -q, which gives me a count and the users, but I am trying not to list one person more than once. At the same time, I would like the output to show each user on a separate line, or at least tabbed better than it currently is.
The following will show the number of unique users logged in, ignoring the number of times they are each logged in individually:
who | awk '{ print $1; }' | sort -u | awk '{print $1; u++} END{ print "users: " u}'
If the output of who | awk '{ print $1 }' is :
joe
bunty
will
sally
will
bunty
Then the one-liner will output:
bunty
joe
sally
will
users: 4
Previous answers have involved uniq, but that command only removes duplicates if the input is sorted (which who does not guarantee), hence we use sort -u to achieve the same result.
The awk command at the end prints the results whilst counting the number of unique users, and it outputs this count at the end.
I think you want
who | awk '{print $1}' | uniq && who -q | grep "\# " | cut -d' ' -f2
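The sort step and the counting can also be folded into a single awk pass (a small sketch; usernames come out in first-seen order rather than sorted):
# Print each username only the first time it appears, then report the unique count.
who | awk '!seen[$1]++ {print $1; n++} END {print "users: " n}'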

How to only grep one of each address. Linux

Okay, so let's say I have a list of addresses in a text file like this:
https://www.amazon.com
https://www.google.com
https://www.msn.com
https://www.google.com
https://www.netflix.com
https://www.amazon.com
...
There is a whole bunch of other stuff in there, but basically the issue I am having is that after running this:
grep "https://" addresses.txt | cut -d"/" -f3
I get amazon.com and google.com twice. I want to only get them once. I don't know how to make the search only grep for things that are unique.
Pipe your output to sort and uniq:
grep "https://" addresses.txt | cut -d"/" -f3 | sort | uniq
You can use sort for this purpose.
Just add another pipe to your command and use the unique option of sort to remove duplicates.
grep 'https://' addresses.txt | cut -d"/" -f3 | sort -u
EDIT: you can use sed instead of grep and cut which would reduce your command to
sed -n 's#https://\([^/]*\).*#\1#p' < addresses.txt | sort -u
I would filter the results post-grep.
e.g. using sort -u to sort and then produce a set of unique entries.
You can also use uniq for this, but the input has to be sorted in advance.
This is the beauty of being able to pipe these utilities together. Rather than have a single grepping/sorting/uniq(ing) tool, you get the distinct executables, and you can chain them together how you wish.
grep "https://" addresses.txt | cut -d"/" -f3 | sort | uniq is what you want
With awk you can use only one Unix command instead of four commands with three pipes:
awk 'BEGIN {FS="://"}; { myfilter = match($1,/https/); if (myfilter) loggeddomains[$2]=0} END {for (mydomains in loggeddomains) {print mydomains}}' addresses.txt
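For comparison, a shorter single-pass awk sketch of the same idea (splitting on "/" so field 3 is the host, and printing each host only on its first appearance):
# $1 is "https:" for https URLs; !seen[$3]++ is true only the first time a host is seen.
awk -F/ '$1 == "https:" && !seen[$3]++ {print $3}' addresses.txt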

Under the Mac terminal: list all of the users who have at least one running process?

How do I list all of the users who have at least one running process?
The user names should not be duplicated.
The user names should be sorted.
$ ps xau | cut -f1 -d " "| sort | uniq | tail -n +2
You may want to weed out names starting with _ as well, like so:
ps xau | cut -f1 -d " "| sort | uniq | grep -v ^_ | tail -n +2
users does what is requested. From the man page:
users lists the login names of the users currently on the system, in sorted order, space separated, on a single line.
Try this:
w -h | cut -d' ' -f1 | sort | uniq
w -h displays the logged-in users, without the header line. The cut part strips away everything except the username, and uniq removes the duplicate lines (which sort has already placed next to each other).
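If the goal really is users that own at least one running process (rather than logged-in users), a shorter sketch, assuming a BSD-style ps as shipped with macOS where -o user= prints just the user column with no header, would be:
# One line per process owner, then de-duplicate and sort.
ps -axo user= | sort -u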
