How would I disable accounts that have been inactive for 90 days in Linux? - linux

I'm working on a script that disables accounts that have been inactive for 90 days. I couldn't really find an answer after researching my problem for a few days, but I did find this command on a forum:
lastlog -t 10000 > temp1; lastlog -t 90 > temp2; diff temp1 temp2; rm temp1; rm temp2
This command outputs the users that have been inactive for 90 days. I think the solution to my problem would be to:
Filter the output of this command so only the usernames are displayed (in a list, with 1 username per line).
Take this output and write it to a text file.
Run a for loop that, for each line in the file, stores the contents of the line (which should be just a single username) in a variable called "inactiveUser" and then executes the command usermod -L $inactiveUser.
Would my proposed solution work? If so, how could it be achieved? Is there a much easier method to lock inactive accounts that I am not aware of?

You can simplify this with:
lastlog -b 90
which directly lists users who have not logged in in the past 90 days.
However, it also has a header row and lists lots of system users.
Use tail to skip the header row:
lastlog -b 90 | tail -n+2
Then you could use grep to filter out system users:
lastlog -b 90 | tail -n+2 | grep -v 'Never log'
There is perhaps a safer way to find real, non-system users, though, e.g.:
cd /home; find * -maxdepth 0 -type d
That issue aside, you can get just the usernames out with awk:
lastlog -b 90 | tail -n+2 | grep -v 'Never log' | awk '{print $1}'
Then either output the list to a file, or run usermod directly via a while read loop or xargs:
lastlog -b 90 | tail -n+2 | grep -v 'Never log' | awk '{print $1}' |
xargs -I{} usermod -L {}
Perhaps you should also log what you've done:
lastlog -b 90 | tail -n+2 | grep -v 'Never log' | awk '{print $1}' |
tee -a ~/usermod-L.log | xargs -I{} usermod -L {}
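Putting the pieces together, a minimal sketch of a full script might look like the following. The UID check stands in for the safer non-system-user filter mentioned above; the UID_MIN value of 1000 and the log file path are assumptions, so adjust them for your system, run it as root, and test with the usermod line commented out first:
#!/bin/bash
# Sketch: lock accounts with no login in the last 90 days.
# Assumes regular users start at UID 1000; check UID_MIN in /etc/login.defs.
uid_min=1000
logfile=/var/log/lock-inactive.log   # hypothetical log location

lastlog -b 90 | tail -n +2 | grep -v 'Never log' | awk '{print $1}' |
while IFS= read -r inactiveUser; do
    uid=$(id -u "$inactiveUser" 2>/dev/null) || continue
    if [ "$uid" -ge "$uid_min" ]; then      # skip system accounts
        echo "$(date): locking $inactiveUser" >> "$logfile"
        usermod -L "$inactiveUser"
    fi
done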

While the other answer works, it can be made much cleaner by using a single awk instead of tail | grep | awk:
lastlog -b 90 | awk '!/Never log/ {if (NR > 1) print $1}' | xargs -I{} usermod -L {}
The awk command checks for lines that don't contain the expression 'Never log' (!/Never log/).
NR > 1 emulates tail -n +2.
print $1 prints the first column.

Related

different shell behaviour: bash omits newline, zsh keeps it

I have a script which searches source files that contain a "TODO"
note inside the comments. Furthermore I use a concatenation of grep, git blame, uniq and sort to get
the list ordered by the person who wrote the TODO comment.
The following works fine in bash and zsh:
#!/bin/bash
for FILE in $(grep -r -i "todo" apps/business | awk '{print $1}' | sed 's/://' | sed 's/\#//')
do
git blame $FILE | grep -i "todo"
done | sort -k2 | uniq
Now I want to count all the entries. Instead of calling the (time expensive)
grep/git blame again, I want to save everything into $MATCHES to count it
without evaluating it again.
MATCHES=$(for FILE in $(grep -r -i "todo" apps/business | awk '{print $1}' | sed 's/://' | sed 's/\#//')
do
git blame $FILE | grep -i "todo"
done | sort -k2 | uniq)
echo $MATCHES
That's where I experience different behaviour in bash/zsh:
zsh: Returns the same as the first script (as expected)
bash: Ignores the newlines of git blame, puts everything on one line. wc -l counts 1 line.
What am I missing here? Why is bash behaving differently here?
And how do I get bash to not-ignore the newline?
zsh doesn't perform word-splitting on the unquoted parameter expansion $MATCHES by default, so the newlines survive; bash does split it, and echo then joins the resulting words with spaces. Quote the expansion, echo "$MATCHES" | wc -l, and bash will work as well.
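A quick way to see the word-splitting difference in bash (the variable here is just an illustration):
var=$(printf 'one\ntwo\nthree')
echo $var | wc -l     # unquoted: bash word-splits, echo joins with spaces -> 1
echo "$var" | wc -l   # quoted: the newlines are preserved -> 3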
Note this is the wrong way to iterate over the output of a command; use a while loop and the read command instead.
grep -ri "todo" apps/business | awk '{print $1}' | sed -e 's/://' -e 's/\#//' |
while IFS= read -r FILE; do
git blame "$FILE" | grep -i todo
done | sort -k2 | uniq

Linux - About sorting shell output

I have output from a customised log file like this:
8 24 yum
8 24 yum
8 24 make
8 24 make
8 24 cd
8 24 cd
8 25 make
8 25 make
8 25 make
8 26 yum
8 26 yum
8 26 make
8 27 yum
8 27 install
8 28 ./linux
8 28 yum
I'd like to know if there's any way to count the number of occurrences of specific values in the third field. For example, I may want to count only cd, yum and install.
You can use awk to match those third-field values and wc -l to count the matching lines.
awk '$3=="cd"||$3=="yum"||$3=="install"||$3=="cat" {print $0}' file | wc -l
You can also use egrep, but this will look for these words not only in the third field, but anywhere in the line.
egrep "(cd|yum|install|cat)" file | wc -l
If you want to count a specific word in the third field, you can do the above without the multiple comparisons.
awk '$3=="cd" {print $0}' file | wc -l
A classic shell script to do the job is:
awk '{print $3}' "$file" | sort | uniq -c | sort -n
Extract values from column 3 with awk, sort the identical names together, count the repeats, sort the output in increasing order of count. The sort | uniq -c | sort -n part is a common meme.
If you're using GNU awk, you can do it all in the awk script; it might be more efficient, but for really humungous files, it can run out of memory where the pipeline doesn't (sort spills to disk when necessary; writing code to spill to disk in awk is not sensible).
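For instance, with GNU awk (gawk) the counting and the final sort can both happen inside awk via PROCINFO["sorted_in"]; a rough sketch, assuming gawk 4.0 or later:
gawk '{ count[$3]++ }
      END {
          PROCINFO["sorted_in"] = "@val_num_asc"  # gawk-only: iterate by ascending count
          for (cmd in count) print count[cmd], cmd
      }' "$file"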
Use cut, sort and uniq:
$ cut -d" " -f3 inputfile | sort | uniq -c
2 cd
1 install
1 ./linux
6 make
6 yum
For your input, this
awk '{++a[$3]}END{for(i in a)print i "\t" a[i];}' file
Would print:
cd 2
install 1
./linux 1
make 6
yum 6
Using awk to count the occurrences of field three and sort to order the output:
$ awk '{a[$3]++}END{for(k in a)print a[k],k}' file | sort -n
1 install
1 ./linux
2 cd
6 make
6 yum
So filter by command:
$ awk '/cd|yum|install/{a[$3]++}END{for(k in a)print a[k],k}' file | sort -n
1 install
2 cd
6 yum
To stop partial matches (the way a pattern like grep would also match egrep), use the word boundaries \< and \>, so the filter would be /\<cd\>|\<yum\>|\<install\>/.
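Applied to the command above, that would look something like this (\< and \> are GNU extensions, so this assumes gawk):
$ awk '/\<cd\>|\<yum\>|\<install\>/{a[$3]++}END{for(k in a)print a[k],k}' file | sort -n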
You can use grep to filter by multiple terms at the same time:
cut -f3 -d' ' file | grep -x -e yum -e make -e install | sort | uniq -c
Explanation:
The -x flag is to match only the lines that match exactly, as if with ^pattern$
The cut extracts the 3rd column only
We sort and count with uniq -c at the end, after all the junk has been removed from the input, for efficiency
I guess you want to count the values of yum, install and cd separately. If so, you should go for three separate awk statements:
awk '$3=="cd" {print $0}' file | wc -l
awk '$3=="yum" {print $0}' file | wc -l
awk '$3=="install" {print $0}' file | wc -l

Why don't bc and xargs work together in one line?

I need help using xargs(1) and bc(1) in the same line. I can do it in multiple lines, but I really want to find a solution in one line.
Here is the problem: the following line will print the size of file.txt
ls -l file.txt | cut -d" " -f5
And, the following line will print 1450 (which is obviously 1500 - 50)
echo '1500-50' | bc
Trying to add those two together, I do this:
ls -l file.txt | cut -d" " -f5 | xargs -0 -I {} echo '{}-50' | bc
The problem is, it's not working! :)
I know that xargs is probably not the right command to use, but it's the only command I could find that lets me decide where to put the argument I get from the pipe.
This is not the first time I've had issues with this kind of problem, so any help would be much appreciated.
Thanks
If you do
ls -l file.txt | cut -d" " -f5 | xargs -0 -I {} echo '{}-50'
you will see this output:
23
-50
This means that bc does not see a complete expression.
Just use -n 1 instead of -0:
ls -l file.txt | cut -d" " -f5 | xargs -n 1 -I {} echo '{}-50'
and you get
23-50
which bc will process happily:
ls -l file.txt | cut -d" " -f5 | xargs -n 1 -I {} echo '{}-50' | bc
-27
So your basic problem is that -0 expects \0-terminated strings, not lines. Hence the newline(s) left by the previous commands in the pipe garble the expression that bc sees.
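Since -I already makes xargs treat each input line as a single item, simply dropping the -0 should also work:
ls -l file.txt | cut -d" " -f5 | xargs -I {} echo '{}-50' | bc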
This might work for you:
ls -l file.txt | cut -d" " -f5 | sed 's/.*/&-50/' | bc
In fact you could remove the cut:
ls -l file.txt | sed -r 's/^(\S+\s+){4}(\S+).*/\2-50/' | bc
Or use awk:
ls -l file.txt | awk '{print $5-50}'
Parsing output from the ls command is not the best idea (really).
You can use many other solutions, like:
find . -name file.txt -printf "%s\n"
or
stat -c %s file.txt
or
wc -c <file.txt
and you can use bash arithmetic to avoid unnecessary slow process forks, like:
find . -type f -print0 | while IFS= read -r -d '' name
do
size=$(wc -c <"$name")
s50=$(( $size - 50 ))
echo "the file=$name= size:$size minus 50 is: $s50"
done
Here is another solution, which only use one external command: stat:
file_size=$(stat -c "%s" file.txt) # Get the file size
let file_size=file_size-50 # Subtract 50
If you really want to combine them into one line:
let file_size=$(stat -c "%s" file.txt)-50
The stat command gets you the file size in bytes. The syntax above is for Linux (I tested against Ubuntu). On the Mac the syntax is a little different:
let file_size=$(stat -f "%z" file.txt)-50
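Equivalently, plain arithmetic expansion avoids let altogether (same idea, just different syntax):
file_size=$(( $(stat -c "%s" file.txt) - 50 ))   # Linux; use stat -f "%z" on macOS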

The number of processes a user is running using bash

I would like to know how I could get the number of processes for each user that is currently logged in.
You could try some variation of this:
ps haux Ou | cut '-d ' -f1 | uniq -c
It gives you the number of processes for each user (logged in or not). Now you could filter those results using the output of the w command or another way of determining who is logged in.
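One rough sketch of that filtering (it assumes the usernames printed by w are not truncated; see the note further down about long usernames):
for u in $(w -h | awk '{print $1}' | sort -u); do
    printf '%s %s\n' "$u" "$(ps -u "$u" -o pid= | wc -l)"
done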
Give this a try:
ps -u "$(echo $(w -h | cut -d ' ' -f1 | sort -u))" o user= | sort | uniq -c | sort -rn
In order to properly handle usernames that may be longer than eight characters, use users instead of w. The latter truncates usernames.
ps -u "$(echo $(printf '%s\n' $(users) | sort -u))" o user= | sort | uniq -c | sort -rn
ps -u aboelnour | awk 'END {print NR}'
will show the number of processes which the user aboelnour is running.
If you are ever concerned about nearing the user process limit shown by ulimit -a, then you want to get ALL the processes (including LWPs). In such a case you should use:
ps h -Led -o user | sort | uniq -c | sort -n
On one system doing this:
ps haux Ou | cut '-d ' -f1 | uniq -c
yields:
# ps haux Ou | cut '-d ' -f1 | uniq -c
30 user1
1 dbus
3 user2
1 ntp
1 nut
1 polkitd
2 postfix
124 root
2 serv-bu+
whereas the ps h -Led command given above yields the true process count:
# ps h -Led -o user | sort | uniq -c | sort -n
1 ntp
1 nut
2 dbus
2 postfix
2 serv-builder
3 user2
6 polkitd
141 root
444 user1
Just try:
lslogins -o USER,PROC
If you just want a count of processes you can use procfs directly like this:
(requires linux 2.2 or greater)
you can use wc:
number_of_processes=`echo /proc/[0-9]* | wc -w`
or do it in pure bash (no external commands) like this
procs=( /proc/[0-9]* )
number_of_proccesses=${#procs[*]}
If you only want the processes of the current user (this works because /proc/<pid>/fd is only accessible to the owner):
procs=( /proc/[0-9]*/fd/. )
number_of_proccesses=${#procs[*]}
userlist=$(w|awk 'BEGIN{ORS=","}NR>2{print $1}'|sed 's/,$//' )
ps -u "$userlist"
The following links contain useful ps command options, including your requirements:
Displaying all processes owned by a specific user
Show All Running Processes in Linux
Here is my solution, for Linux:
$ find /proc -maxdepth 1 -user "$USER" -name '[0-9]*' | wc -l
This solution will not fail when the number of processes is larger than the command line limit.

Sorting in bash

I have been trying to get the unique values in each column of a tab delimited file in bash. So, I used the following command.
cut -f <column_number> <filename> | sort | uniq -c
It works fine and I can get the unique values in a column and its count like
105 Linux
55 MacOS
500 Windows
What I want to do, instead of sorting by the column value names (which in this example are OS names), is sort them by count, and possibly have the count in the second column of the output. So it will have to look like:
Windows 500
MacOS 105
Linux 55
How do I do this?
Use:
cut -f <col_num> <filename> |
  sort |
  uniq -c |
  sort -r -k1 -n |
  awk '{print $2" "$1}'
The sort -r -k1 -n sorts in reverse order, using the first field as a numeric value. The awk simply reverses the order of the columns. You can test the added pipeline commands thus (with nicer formatting):
pax> echo '105 Linux
55 MacOS
500 Windows' | sort -r -k1 -n | awk '{printf "%-10s %5d\n",$2,$1}'
Windows 500
Linux 105
MacOS 55
Mine:
cut -f <column_number> <filename> | sort | uniq -c | awk '{ print $2" "$1}' | sort
This will alter the column order (awk) and then just sort the output.
Hope this will help you
Using sed based on Tagged RE:
cut -f <column_number> <filename> | sort | uniq -c | sort -r -k1 -n | sed 's/ *\([0-9]*\)[ ]*\(.*\)/\2 \1/'
Doesn't produce output in a neat format though.

Resources