tr "[1-9]" "['01'-'09']" not working properly - linux

I'm trying to cut only the date part from the output of ls -lrth | grep TRACK:
-rw-r--r-- 1 ins ins 0 Dec  3 00:00 TRACK_1_20121203_01010014.LOG
-rw-r--r-- 1 ins ins 0 Dec  3 00:00 TRACK_0_20121203_01010014.LOG
-rw-r--r-- 1 ins ins 0 Dec 13 15:10 TRACK_9_20121213_01010014.LOG
-rw-r--r-- 1 ins ins 0 Dec 13 15:10 TRACK_8_20121213_01010014.LOG
But, doing this:
ls -lrth | grep TRACK | tr "\t" " " | cut -d" " -f 9
only gives me the double-digit dates; the single-digit dates come out as empty fields:
13
13
So I tried something with the tr command, to translate all single-digit dates to double digits:
ls -lrth | grep TRACK | tr "\t" " " | tr "[1-9]" "['01'-'09']" | cut -d" " -f 9
But it's giving some weird results, and evidently doesn't serve my purpose. Any ideas on how to get the correct output?

Don't parse ls output.
ls is a tool for interactively looking at file information. Its output is formatted for humans and will cause bugs in scripts. Use globs or find instead. Understand why: http://mywiki.wooledge.org/ParsingLs
I recommend this way:
If you want the date and the file path:
find . -name 'TRACK*' -printf '%TY-%Tm-%Td %p\n'
If you want only the date:
find . -name 'TRACK*' -printf '%TY-%Tm-%Td\n'
(%T with a format letter prints the modification time, which is what ls -l shows; %a would give the access time instead.)

You could try another approach with something like
find . -name 'TRACK*' -exec stat -c %y {} \; | sort
You can add something like | cut -f1 -d' ' if you only need the date.
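Putting the two steps together, a sketch (GNU stat's %y prints something like 2012-12-03 00:00:00.000000000 +0000, so the first space-separated field is the date):
find . -name 'TRACK*' -exec stat -c %y {} \; | sort | cut -f1 -d' '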

I guess this does suffice:
ls -lhrt | grep TRACK | awk '{print $6, $7, $8}'

That kind of substitution would be better handled through sed:
ls -lrth | grep TRACK | sed 's/ \+/ /g;s/ \([0-9]\) / 0\1 /g' | cut -d" " -f 7
The first substitution squeezes each run of spaces down to one; the second zero-pads any single-digit field, so the day of the month always lands in field 7.

As already said, never parse the output of ls!
Since you only want the modification time, the command date has a cool option for that: option -r (man date for more info).
Hence, you probably want this instead of your line:
for i in TRACK*; do date -r "$i"; done
I don't know how you want the format of the date, so play with the options, e.g.,
for i in TRACK*; do date -r "$i" "+%D"; done
(the formats are in man date).

Use stat to get information about a file.
Also, tr only does one-to-one character translation. It won't replace one-character sequences with two-character ones.
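For example (the letters are arbitrary):
$ echo 3 | tr '1-9' 'ABCDEFGHI'
C
Each character of the first set maps to the character at the same position in the second set; there is no way to map 3 to the two characters 03. A sketch of the stat route instead (assuming GNU stat, and that your files match TRACK*.LOG):
stat -c '%y' TRACK*.LOG | cut -d' ' -f1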

Related

Extracting the user with the most amount of files in a dir

I am currently working on a script that should read a directory name from standard input and output the user with the highest number of files in that directory.
I've written this so far:
#!/bin/bash
while read DIRNAME
do
ls -l $DIRNAME | awk 'NR>1 {print $4}' | uniq -c
done
and this is the output I get when I enter /etc for an instance:
26 root
1 dip
8 root
1 lp
35 root
2 shadow
81 root
1 dip
27 root
2 shadow
42 root
Now obviously root is winning in this case, but I don't want to output only this; I also want to sum the number of files and output only the user with the highest total.
Expected output for entering /etc:
root
Is there a simple way to filter the output I get now, so that only the user with the highest sum is kept?
ls -l /etc | awk 'BEGIN{FS=OFS=" "}{a[$4]+=1}END{ for (i in a) print a[i],i}' | sort -g -r | head -n 1 | cut -d' ' -f2
This snippet returns the group with the highest number of files in the /etc directory.
What it does:
ls -l /etc lists all the files in /etc in long form.
awk 'BEGIN{FS=OFS=" "}{a[$4]+=1}END{ for (i in a) print a[i],i}' sums the number of occurrences of unique words in the 4th column and prints the number followed by the word.
sort -g -r sorts the output descending based on numbers.
head -n 1 takes the first line
cut -d' ' -f2 takes the second column while the delimiter is a white space.
Note: In your question, you are saying that you want the user with the highest number of files, but in your code you are referring to the 4th column which is the group. My code follows your code and groups on the 4th column. If you wish to group by user and not group, change {a[$4]+=1} to {a[$3]+=1}.
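For completeness, a sketch of the user-based variant; the added NR>1 (my addition) skips the total line that ls -l prints first, which would otherwise contribute an empty key:
ls -l /etc | awk 'NR>1 {a[$3]+=1} END{ for (i in a) print a[i],i}' | sort -g -r | head -n 1 | cut -d' ' -f2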
Without unreliably parsing the output of ls:
read -r dirname
# List user owner of files in dirname
stat -c '%U' "$dirname"/* |
# Sort the list of users by name
sort |
# Count occurrences of user
uniq -c |
# Sort by higher number of occurrences numerically
# (first column numerically reverse order)
sort -k1nr |
# Get first line only
head -n1 |
# Keep only starting at character 9 to get user name and discard counts
cut -c9-
I have an awk script to read standard input (or command line files) and sum up the unique names.
summer:
awk '
    # sum column 1, keyed by the name in column 2
    { sum[ $2 ] += $1 }
    END {
        for ( v in sum ) {
            print v, sum[v]
        }
    }
' "$@"
Let's say we are using your example of /etc. summer expects count name pairs, like the ones your while loop already prints, so feed it those:
ls -l /etc | awk 'NR>1 {print $4}' | uniq -c | summer
yields:
dip 2
shadow 4
root 219
lp 1
I like to keep utilities general so I can reuse them for other purposes. Now you can just use sort and head to get the maximum result output by summer:
ls -l /etc | awk 'NR>1 {print $4}' | uniq -c | summer | sort -r -k2,2 -n | head -1 | cut -f1 -d' '
Yields:
root

How to only display owner of file when using ls command with special edge case

My objective is to find all files in a directory recursively and display only the file owner name so I'm able to use uniq to count the # of files a user owns in a directory. The command I am using is the following:
command = "find " + subdirectory.directoryPath + "/ -type f -exec ls -lh {} + | cut -f 3 -d' ' | sort | uniq -c | sort -n"
This command successfully displays only the owner of the file on each line, and allows me to count the # of times an owner name is repeated, hence getting the # of files they own in a subdirectory. cut uses ' ' as a delimiter and keeps only the 3rd column of ls, which is the owner of the file.
However, for my purpose there is this special edge case, where I'm not able to obtain the owner name if the following occurs.
-rw-r----- 1 31122918 group 20169510233 Mar 17 06:02
-rw-r----- 1 user1 group 20165884490 Mar 25 11:11
-rw-r----- 1 user1 group 20201669165 Mar 31 04:17
-rwxr-x--- 1 user3 group 20257297418 Jun 2 13:25
-rw-r----- 1 user2 group 20048291543 Mar 4 22:04
-rw-r----- 1 14235912 group 20398346003 Mar 10 04:47
The special edge cases are the #s as the owner that you see above. The current command I'm using can detect user1, user2, and user3 perfectly, but because the numeric owners are right-aligned in their column (preceded by extra spaces), the command above doesn't detect them and simply displays nothing. Example output is shown here:
1
1 user3
1 user2
1
2 user1
Can anyone help me parse the ls output so I'm able to detect these #'s when trying to only print the file owner column?
cut -d' ' won't capture the third field when it contains leading spaces -- each space is treated as the separator of another field.
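A quick demonstration with two adjacent spaces: field 2 is the empty string between them (so -f2 prints a blank line), and it is field 3 that holds b:
$ printf 'a  b\n' | cut -d' ' -f3
b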
Alternatives:
cut -c
123456789X123456789X123456789X123456789X123456789X
-rw-r----- 1 31122918 group 20169510233 Mar 17 06:02
-rw-r----- 1 user1    group 20165884490 Mar 25 11:11
The owner field sits in a fixed column range on each line -- here characters 14 through 21 -- so you can say
cut -c14-21
perl/awk: other tools are adept at extracting data out of a line. Try one of
perl -lane 'print $F[2]'
awk '{print $3}'
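Wired into your find, that could look like this sketch (subdirectory/ stands in for your subdirectory.directoryPath):
find subdirectory/ -type f -exec ls -lh {} + | awk '{print $3}' | sort | uniq -c | sort -n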
Don't try to parse the output of ls. Use the stat command.
find dirname ! -user root -type f -exec stat --format=%U {} + | sort | uniq -c | sort -n
%U prints the owner username.
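If some files are owned by raw UIDs with no passwd entry (like the numeric owners in your sample), %u always prints the numeric user ID, which may be the safer key to count on. A sketch, with the ! -user root filter dropped:
find dirname -type f -exec stat --format=%u {} + | sort | uniq -c | sort -n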
Merging multiple spaces:
tr -s ' '
Get file users:
ls -hl | tr -s ' ' | cut -f 3 -d' '
ls -hl | awk '{print $3}'
sudo find ./ ! -user root -type f -exec ls -lh {} + | tr -s ' ' | cut -f 3 -d' ' | sort | uniq -c | sort -n
You can use the below command to display only the owner of a directory or a file.
stat -c "%U" /path/of/the/file/or/directory
If you also want to print the group of a file or directory you can use %G as well.
stat -c "%U %G" /path/of/the/file/or/directory

Obtaining the total of coincidences with multiple pattern using grep command

I have a file in Linux that contains these strings:
CALLTMA
Starting
Starting
Ending
Starting
Ending
Ending
CALLTMA
Ending
I need the count of each string (e.g. #Ending, #Starting, #CALLTMA). For my example I need to obtain:
CALLTMA : 2
Starting : 3
Ending : 4
I can obtain this output by executing 3 commands:
grep -i "Starting" "/myfile.txt" | wc -l
grep -i "Ending" "/myfile.txt" | wc -l
grep -i "CALLTMA" "/myfile.txt" | wc -l
I want to know if it is possible to obtain the same output using only one command.
I tried running this command:
grep -iE "CALLTMA|Starting|Ending" "/myfile.txt" | wc -l
But this returned the total of all matches combined. I appreciate your help.
Use sort and uniq:
sort myfile.txt | uniq -c
The -c adds the counts to the unique lines. If you want to sort the output by frequency, add
| sort -n
to the end (and change to -nr if you want the descending order).
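For your sample file, that full pipeline gives:
$ sort myfile.txt | uniq -c | sort -n
      2 CALLTMA
      3 Starting
      4 Ending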
A simple awk way to handle this:
awk '{counts[$1]++} END{for (c in counts) print c, counts[c]}' file
Starting 3
Ending 4
CALLTMA 2
grep -c will work. You can put it all together in a short script:
for i in Starting CALLTMA Ending; do
    printf "%-8s : %d\n" "$i" "$(grep -c "$i" file.txt)"
done
(to pass the search terms as arguments instead, just loop over the arguments array, e.g. for i in "$@"; do -- a fuller sketch follows the output below)
Output
Starting : 3
CALLTMA : 2
Ending : 4
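A sketch of that argument-driven variant (count.sh is a hypothetical name; the first argument is the file to search, the rest are the patterns):
#!/bin/bash
# Usage: ./count.sh file.txt Starting CALLTMA Ending
file=$1
shift
for i in "$@"; do
    printf "%-8s : %d\n" "$i" "$(grep -c "$i" "$file")"
done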

Getting the total size of a directory as a number with du

Using the command du, I would like to get the total size of a directory.
Output of the command du -c myfolder:
5454 kkkkk
666 aaaaa
3456788 total
I'm able to extract the last line, but not to remove the string total:
du -c myfolder | grep total | cut -d ' ' -f 1
Results in:
3456788 total
Desired result
3456788
I would like to have all the command in one line.
That's probably because it's tab delimited (which is the default delimiter of cut):
~$ du -c foo | grep total | cut -f1
4
~$ du -c foo | grep total | cut -d' ' -f1
4
(to insert a literal tab at the shell prompt, type Ctrl+V, then Tab)
Alternatively, you could use awk to print the first field of the line ending with total:
~$ du -c foo | awk '/total$/{print $1}'
4
First off, you probably want to use tail -n1 instead of grep total ... Consider what happens if you have a directory named local? :-)
Now, let's look at the output of du with hexdump:
$ du -c tmp | tail -n1 | hexdump -C
00000000 31 34 30 33 34 34 4b 09 74 6f 74 61 6c 0a |140344K.total.|
That's the character 0x09 after the K; man ascii tells us:
011   9     09    HT  '\t' (horizontal tab)
It's a tab, not a space :-)
The tab character is already the default delimiter (this is specified in the POSIX spec, so you can safely rely on it), so you don't need -d at all.
So, putting that together, we end up with:
$ du -c tmp | tail -n1 | cut -f1
140344K
Why don't you use -s to summarize it? This way you don't have to grep "total", etc.
$ du .
24 ./aa/bb
...
# many lines
...
2332 .
$ du -hs .
2.3M .
Then, to get just the value, pipe to awk. This way you don't have to worry about the delimiter being a space or a tab:
du -s myfolder | awk '{print $1}'
From man du:
-h, --human-readable
print sizes in human readable format (e.g., 1K 234M 2G)
-s, --summarize
display only a total for each argument
I would suggest using awk for this:
value=$(du -c myfolder | awk '/total/{print $1}')
This simply extracts the first field of the line that matches the pattern "total".
If it is always the last line that you're interested in, an alternative would be to use this:
value=$(du -c myfolder | awk 'END{print $1}')
The values of the fields in the last line are accessible in the END block, so you can get the first field of the last line this way.

linux bash command separate by space

So I'm trying to display only one column at a time.
First, ls -l gives me this:
drwxr-xr-x 11 stuff stuff     4096 2009-08-22 06:45 lyx-1.6.4
-rw-r--r--  1 stuff stuff 14403778 2009-10-26 02:37 lyx.tar.gz
I'm using this:
ls -l |cut -d " " -f 1
to get this
drwxr-xr-x
-rw-r--r--
and it displays my first column just fine. Then I want to see the second column:
ls -l |cut -d " " -f 2
I only get this
11
Shouldn't I get
11
1
?
Why is it doing this?
if I try
ls -l |cut -d " " -f 2-3
I get
11 stuff
There's gotta be an easier way to display columns, right?
This should show the second column:
ls -l | awk '{print $2}'
cut considers two sequential delimiters to have an empty field in between. So the second line:
-rw-r--r--  1 stuff stuff
has fields:
1: -rw-r--r--
2: --empty field--
3: 1
etc.
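One common workaround is to squeeze the runs of spaces first with tr -s, so that cut sees a single delimiter between fields:
ls -l | tr -s ' ' | cut -d' ' -f2
which prints the link counts 11 and 1 for the two sample lines above.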
You can also cut on character positions:
ls -l | cut -c12-13
Or you can use awk to separate fields (unlike cut, awk treats runs of delimiters as a single delimiter).
