How do I get the five most recently created users, in order?
I can sort them by user ID with this command, but is there a file where you can see the date when each user was created?
tail -5 /etc/passwd | sort -r | grep '/home' | cut -d: -f1
You can look at the files on the system and sort the users by the oldest (least recently modified) file each one owns; the last five lines of the output will then be your five most recently created users:
find /home -exec stat -c '%Y %U' {} + |
    awk '$1 < d[$2] || !d[$2] { d[$2] = $1 }
         END { for (u in d) print d[u], u }' |
    sort -n -k1
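Note that /etc/passwd does not store a creation timestamp at all, so any file-based approach is an approximation. As an alternative sketch (assuming root access and that passwords were never changed after account creation), field 3 of /etc/shadow records the last password change in days since the epoch, which you can sort on:
sort -t: -k3 -n /etc/shadow | tail -5 | cut -d: -f1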
If a torrent has a problem (for example, its data was deleted from the hard drive), the ID column shows its number with a trailing asterisk, like "ID*".
I want to filter the torrent IDs in the torrent list that have a "*" at the end of the ID (like 1*, 2*, 25*) and remove those torrents from the list.
The full command is:
/usr/bin/transmission-remote 127.0.0.1:9091 --auth ts:ts -l | grep "*" | awk '{print $1}' \
| xargs -n 1 -I % /usr/bin/transmission-remote 127.0.0.1:9091 --auth ts:ts -t% -r
The result I expected:
/usr/bin/transmission-remote 127.0.0.1:9091 --auth ts:ts -t ID* -r
But something went wrong. Transmission said:
127.0.0.1:9091/transmission/rpc/ responded: "success"
But the torrent wasn't deleted from the list.
How can I see the final command that actually runs, so I can compare it with what I expected?
To get the IDs (grep -o '[0-9]*' strips the trailing *, which your original command was passing straight through to -t):
transmission-remote -l | grep '*' | awk '{print $1}' | grep -o '[0-9]*'
The full command:
transmission-remote -l | grep '*' | awk '{print $1}' | grep -o '[0-9]*' | tr '\n' ',' | xargs -n 1 -I % transmission-remote -t % -r
Done and done (:
With the added improvement of using tr to join all the torrent IDs, which avoids running a separate remote call per torrent (calling the Transmission RPC repeatedly is expensive).
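For example, if torrents 1, 2 and 25 are flagged, the joined list makes the final stage expand to a single call roughly like this:
transmission-remote -t 1,2,25 -r
One caveat: tr leaves a trailing comma on the list; if your transmission-remote version rejects that, trim it with an extra sed 's/,$//' before the xargs stage.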
I currently have many text files spread over several directories that I am sorting, storing the results in a text file. The issue is not the sorting but the formatting of the output that gets placed in the text file. I want each line to look like file '/path/to/file1'; currently it shows /path/to/file1. I want to do this all within one pipeline (without having to run an additional loop; changing the format inline is fine).
$ target=~/tmp/shuf
$ destination=/filepath/
$ find "$target" -iname "*.txt" -type f | shuf \
    | awk -F- '{printf("%s:%s\n", $0, $NF)}' \
    | sort -t : -k 2 -s | cut -d : -f 1 \
    | xargs -n1 basename | sed "s,^,$destination," > "$destination/results.txt"
Current results.txt:
/path/to/cs650-software_methodologies-fname_lname-001.txt
/path/to/s630-linux_research_paper-fname_lname-001.txt
Desired results.txt:
file '/path/to/cs650-software_methodologies-fname_lname-001.txt'
file '/path/to/s630-linux_research_paper-fname_lname-001.txt'
I find awk is often easier for this kind of formatting, as long as you don't need substitutions. It also lets us skip the basename call and leave that part to awk as well. Just note that this will not work if you have any forward slashes in your actual filenames.
find "$target" -type f -iname "*.txt" \
| shuf \
| awk -F- '{printf("%s:%s\n", $0, $NF)}' \
| sort -t : -k 2 -s \
| cut -d : -f 1 \
| awk -F / '{printf("file '\''%s'\''\n", $0)}' \
> "$destination/results.txt"
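Alternatively, if you would rather leave the original pipeline untouched, a final sed substitution can do the same wrapping; this is a sketch assuming none of the paths contain single quotes:
find "$target" -type f -iname "*.txt" \
    | shuf \
    | awk -F- '{printf("%s:%s\n", $0, $NF)}' \
    | sort -t : -k 2 -s \
    | cut -d : -f 1 \
    | sed "s/.*/file '&'/" \
    > "$destination/results.txt"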
I am trying to get the max version number from a directory where I have several versions of one program.
For example, if the output of ls is:
something01_1.sh
something02_0.1.2.sh
something02_0.1.sh
something02_1.1.sh
something02_1.2.sh
something02_2.0.sh
something02_2.1.sh
something02_2.3.sh
something02_3.1.2.sh
something.sh
I am getting the max version number with the following:
ls somedir | grep some_prefix | cut -d '_' -f2 | sort -t '.' -k1 -r | head -n 1
Now if, at the same time, I want to check it against the version number I already have on the system, what's the best way to do it?
In bash I got this working (if 2.5 is the current version):
(ls somedir | grep some_prefix | cut -d '_' -f2; echo 2.5) | sort -t '.' -k1 -r | head -n 1
Is there any other correct way to do it?
EDIT: In the above example, some_prefix is something02.
EDIT: The actual problem here is:
(ls smthing; echo more) | sort
Is this the best way to merge the output of two commands/programs for piping into a third?
I have found the solution. It seems the best way is to use process substitution.
cat <(ls smthing) <(echo more) | sort
For my version example:
cat <(ls somedir | grep some_prefix | cut -d '_' -f2) <(echo 2.5) | sort -t '.' -k1 -r | head -n 1
For the benefit of future readers, I recommend resisting the lure of the one-liner and using a glob, as chepner suggested.
An almost identical question was asked on Super User.
More info about process substitution is in the Bash documentation.
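As a side note: if your sort comes from GNU coreutils, its -V (version sort) flag compares dotted version strings properly, which is more robust than sorting on the first dot-separated field. A sketch of the same pipeline using it:
cat <(ls somedir | grep some_prefix | cut -d '_' -f2) <(echo 2.5) | sort -V | tail -n 1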
Is the following code closer to what you're looking for?
#!/bin/bash
highest_version=$(ls something* | sort -V | tail -1 | sed "s/something02_\|\.sh//g")
# note: this assumes the running script is itself named something02_<version>.sh
current_version=$(echo "$0" | sed "s/something02_\|\.sh//g")
# compare with sort -V: if the highest version on disk sorts after the
# current one, an update is available
if [ "$current_version" != "$highest_version" ] && \
   [ "$(printf '%s\n' "$current_version" "$highest_version" | sort -V | tail -1)" = "$highest_version" ]; then
    echo "Uh oh! Looks like we need to update!"
fi
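As an aside, the printf | sort -V pipeline used above is a handy general idiom for picking the larger of two version strings (GNU sort assumed):
printf '%s\n' 2.5 2.10 | sort -V | tail -n 1   # prints 2.10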
You can try something like this:
#!/bin/bash
lastversion() { # prefix
    local prefix="$1" a=0 b=0 c=0 r f v vmax=0
    for f in "$prefix"* ; do
        test -f "$f" || continue
        # strip the prefix, turn every non-digit into a space, pad with zeros
        read a b c r <<< "$(echo "${f#$prefix} 0 0 0" | tr -C '0-9' ' ')"
        # encode x.y.z as one comparable integer (assumes each component < 100)
        v=$(( (a*100 + b)*100 + c ))
        if (( v > vmax )); then vmax=$v; fi
    done
    echo $vmax
}
lastversion "something02"
It will print 30102, which encodes version 3.1.2.
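A sketch of how you might use it to compare against a version you already have, encoding 2.5 the same way the function does:
current=$(( (2*100 + 5)*100 + 0 ))   # 2.5 -> 20500
if (( $(lastversion "something02") > current )); then
    echo "newer version available"
fi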
I have a jar file and I need to execute the files in it on Linux.
So I need to get the output of the unzip -l command line by line.
I have managed to extract the file names with this command:
unzip -l package.jar | awk '{print $NF}' | grep 'com/tests/[A-Za-Z]' | cut -d "/" -f3
But I can't figure out how to obtain the file names one after another so I can execute them.
How can I do it, please?
Thanks a lot.
If all you need is the first row of a column, add a pipe and get the first line using head -1.
So your one-liner will look like:
unzip -l package.jar | awk '{print $NF}' | grep 'com/tests/[A-Za-Z]' | cut -d "/" -f3 | head -1
That will give you the first line.
Now combine head and tail to get the second line:
unzip -l package.jar | awk '{print $NF}' | grep 'com/tests/[A-Za-Z]' | cut -d "/" -f3 | head -2 | tail -1
But from a scripting point of view this is not a good approach. What you need is a loop, as below:
for class in $(unzip -l el-api.jar | awk '{print $NF}' | grep 'javax/el/[A-Za-Z]' | cut -d "/" -f3); do echo "$class"; done
You can replace echo "$class" with whatever command you wish, using $class to get the current class name.
HTH
Here is my attempt, which also takes into account Daddou's request to remove the .class extension:
unzip -l package.jar | \
    awk -F'/' '/com\/tests\/[A-Za-z]/ {sub(/\.class/, "", $NF); print $NF}' | \
    while read baseName
    do
        echo " $baseName"
    done
Notes:
The awk command also handles the tasks of grep and cut
The awk command also handles the removal of the .class extension
The result of the awk command is piped into the while read... command
baseName represents the name of the class file, with the .class extension removed
Now, you can do something with that $baseName
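For example, to run each class from the jar you could replace the echo inside the loop with a java invocation (this assumes each class has a main method and lives in the com.tests package, which is a guess based on the paths):
java -cp package.jar "com.tests.$baseName"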
I need some help combining elements of scripts to form readable output.
Basically, I need to get the username from the folder structure listed below and then count the number of non-blank lines across that user's *.ano files.
This is shown in the extract below; note that the position of the username in the path is not always the same counting from the front.
/home/user/Drive-backup/2010 Backup/2010 Account/Jan/usernameneedtogrep/user.dir/4.txt
/home/user/Drive-backup/2011 Backup/2010 Account/Jan/usernameneedtogrep/user.dir/3.ano
/home/user/Drive-backup/2010 Backup/2010 Account/Jan/usernameneedtogrep/user.dir/4.ano
awk -F/ '{print $(NF-2)}'
This will give me the username I need, but I also need to know how many non-blank lines there are in that user's folder for file type *.ano. I have the grep below that works, but I don't know how to put it all together so it outputs a file that makes sense.
grep -cv '^[[:space:]]*$' *.ano | awk -F: '{ s+=$2 } END { print s }'
Example output needed
UserA 500
UserB 2
UserC 20
find /home -name '*.ano' | awk -F/ '{print $(NF-2)}' | sort | uniq -c
That ought to give you the number of "*.ano" files per user, given your awk is correct. I often use sort | uniq -c to count the number of instances of a string (in this case the username), as opposed to wc -l, which only counts input lines.
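To illustrate the sort | uniq -c idiom on its own:
$ printf '%s\n' alice bob alice | sort | uniq -c
      2 alice
      1 bob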
Enjoy.
Have a look at wc (word count).
To count the number of *.ano files in a directory you can use
find "$dir" -iname '*.ano' | wc -l
If you want to do that for all directories in some directory, you can just use a for loop:
for dir in * ; do
    echo "user $dir"
    find "$dir" -iname '*.ano' | wc -l
done
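Since the question asks for non-blank line counts rather than file counts, a variant of the same loop could concatenate each user's files and count with grep (a sketch):
for dir in * ; do
    lines=$(find "$dir" -iname '*.ano' -exec cat {} + | grep -cv '^[[:space:]]*$')
    echo "$dir $lines"
done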
Execute the bash script below from the folder
/home/user/Drive-backup/2010 Backup/2010 Account/Jan
and it will report the number of non-blank lines per user.
#!/bin/bash
# save where we start
base=$(pwd)
# get all top-level dirs, skip '.'
D=$(find . \( -type d ! -name . -prune \))
for d in $D; do
    cd "$base"
    cd "$d"
    # find all files named *.ano and count their non-blank lines
    sum=$(find . -type f -name '*.ano' -exec grep -cv '^[[:space:]]*$' {} \; | awk '{sum+=$0}END{print sum}')
    echo "$d" "$sum"
done
This might be what you want (untested); it requires bash version 4 for associative arrays:
declare -A count
cd /home/user/Drive-backup
for userdir in */*/*/*; do
    username=${userdir##*/}
    lines=$(grep -cv '^[[:space:]]*$' "$userdir"/user.dir/*.ano | awk -F: '{sum += $2} END {print sum}')
    (( count[$username] += lines ))
done
for user in "${!count[@]}"; do
    echo "$user ${count[$user]}"
done
Here's yet another way of doing it (on Mac OS X 10.6):
find -x "$PWD" -type f -iname "*.ano" -exec bash -c '
ar=( "${#%/*}" ) # perform a "dirname" command on every array item
printf "%s\000" "${ar[#]%/*}" # do a second "dirname" and add a null byte to every array item
' arg0 '{}' + | sort -uz |
while IFS="" read -r -d '' userDir; do
# to-do: customize output to get example output needed
echo "$userDir"
basename "$userDir"
find -x "${userDir}" -type f -iname "*.ano" -print0 |
xargs -0 -n 500 grep -hcv '^[[:space:]]*$' | awk '{ s+=$0 } END { print s }'
#xargs -0 -n 500 grep -cv '^[[:space:]]*$' | awk -F: '{ s+=$NF } END { print s }'
printf '%s\n' '----------'
done