How to return a substring from a Linux command

I'm connecting to an Exadata machine and want to get information about the "ORACLE_HOME" variable on it. So I'm using this command:
ls -l /proc/<pid>/cwd
This is the output:
2 oracle oinstall 0 Jan 23 21:20 /proc/<pid>/cwd -> /u01/app/database/11.2.0/dbs/
I need to get the last part:
/u01/app/database/11.2.0 (I don't want the "/dbs/" part)
I will be using this command several times on different machines. How can I get this substring from the whole output?

Awk and grep are good for these types of issues.
New:
ls -l /proc/<pid>/cwd | awk '{print ($NF) }' | sed 's#/dbs/##'
Old:
ls -l /proc/<pid>/cwd | awk '{print ($NF) }' | egrep -o '^.+[.0-9]'
Awk prints the last column of the input (the output of your ls command), and then grep grabs the beginning of that string up to the last occurrence of numbers and dots. This is a situational solution and perhaps not the best.
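Applied to the sample output in the question, the stages of the newer pipeline would look roughly like this (illustrative):
# awk '{print $NF}' keeps only the symlink target:
/u01/app/database/11.2.0/dbs/
# sed 's#/dbs/##' then strips the trailing "/dbs/":
/u01/app/database/11.2.0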

Parsing the output of ls is generally considered sub-optimal. I would use something more like this instead:
dirname $(readlink -f /proc/<pid>/cwd)
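For the path shown in the question, readlink -f canonicalises the symlink target and dirname drops the final /dbs component, so this should print (illustrative):
/u01/app/database/11.2.0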

Related

Remove multiple spaces in ls -l output

I need to display the filesize and the filename. Like this:
4.0K Desktop
I'm extracting these two fields using cut from the ls -l output:
ls -lhS | cut -d' ' -f5,9
Due to multiple spaces in the ls -l output, I'm getting a few erroneous outputs, like:
4.0K 19:54
4.0K 19:55
6
18:39
31
25
How should I fix this?
I need to accomplish this task using pipes only and no bash scripting (the output may go through multiple pipes), and preferably no sed or awk.
If there is no alternative to sed or awk, using sed is OK.
You can avoid parsing ls output and use the stat command, which is part of GNU coreutils, for detailed file information.
# -c --format=FORMAT
# use the specified FORMAT instead of the default; output a newline after each use of FORMAT
# %n File name
# %s Total size, in bytes
stat -c '%s %n' *
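If you also want the largest-first ordering of ls -lhS, one sketch is to sort numerically on the size column; note that %s prints plain bytes rather than the human-readable sizes of -h:
stat -c '%s %n' * | sort -rn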
You can use the tr (translate characters) command to squeeze the repeated spaces before using cut.
ls -lhS | tr -s ' ' | cut -d' ' -f 5,9
Or you could just submit to awk:
$ ls -lhS | awk '$0=$5 OFS $9'
i.e. replace the whole record $0 with fields $5 and $9, separated by the output field separator OFS.
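As a quick illustration with a made-up ls -lhS line in the usual nine-column layout (and assuming file names without spaces), $5 is the size and $9 the name:
$ echo '-rw-r--r-- 1 root root 4.0K Sep 21 23:11 Desktop' | awk '$0=$5 OFS $9'
4.0K Desktop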

Sum out of grep -c

I am trying to find the number of times an event occurred in my log files.
Command:
grep -Eo "2016-08-30" applciationLog.log* -c
Output:
applciationLog.log.1:0
applciationLog.log.2:0
applciationLog.log.3:0
applciationLog.log.4:0
applciationLog.log.5:7684
applciationLog.log.6:9142
applciationLog.log.7:8699
applciationLog.log.8:0
What I actually need is the sum of all these values: 7684 + 9142 + 8699 = 25525. Any suggestions on how I can do that? Is there anything I can append to the grep command to achieve it?
Any help or pointers are welcome and appreciated.
If you want to keep your grep command, pipe its output to awk; the quick and dirty way is shown here:
grep -c "aaa" aaa.txt bbb.txt | awk 'BEGIN {cnt=0;FS=":"}; {cnt+=$2;}; END {print cnt;}'
Or use awk's regex matching directly:
awk 'BEGIN {cnt=0}; {if(/aaa/) {cnt+=1;}}; END {print cnt;}' aaa.txt bbb.txt
As an addition to the answer already given by ghoti:
You can avoid awk -F: by using grep -h:
grep -c -h -F "2016-08-30" applciationLog.log* | awk '{n+=$0} END {print n}'
This means grep prints only the counts (no filenames), so we can use the first field for the addition in awk.
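For the sample logs above, the intermediate grep output would just be the bare per-file counts, one per line, which awk then sums:
0
0
0
0
7684
9142
8699
0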
See if this works for you:
grep -Eo "2016-08-30" applciationLog.log* -c | awk -F':' 'BEGIN {sum = 0;} {sum += $2;} END {print sum;}'
We use awk to split each line on the delimiter :, sum up the numbers from each line, and print the result at the end.
The grep command doesn't do arithmetic, it just finds lines that match regular expressions.
To count the output you already have, I'd use awk.
grep -c -F "2016-08-30" applciationLog.log* | awk -F: '{n+=$2} END {print n}'
Note that your grep options didn't make sense -- -E tells the command to use Extended regular expressions, but you're just looking for a fixed string (the date). So I swapped in the -F option instead. And -o tells grep to print the matched text, which you've overridden with -c, so I dropped it.
An alternative using for-loop and arithmetic expansion could be:
x=0
for i in $(grep -hc "2016-08-30" applciationLog.log*); do
x=$((x+i))
done
echo "$x"
An easy alternative is to merge all the files before grep sees them:
cat applciationLog.log* | grep -Eo "2016-08-30" -c
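Because cat merges the files into one stream, grep prints a single combined count instead of per-file counts, so with the sample numbers from the question the output would simply be:
25525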
In my directory I have hundreds of files; each file contains a lot of text along with lines similar to this:
Job_1-Run.log:[08/27/20 01:28:40] Total Jobs Cancelled for Job_1_set0 = 10
I do
grep '^Total Jobs Cancelled' ./*
to get the above line.
Then I pipe it to
| awk 'BEGIN {cnt=0;FS="="}; {cnt+=$2;}; END {print cnt;}'
so my final command is:
grep '^Total Jobs Cancelled' ./* | awk 'BEGIN {cnt=0;FS="="}; {cnt+=$2;};END {print cnt;}'
and the result is the sum, e.g.:
900
I am using Cmder (https://cmder.net/).
Thanks to the answers by @alagner and @john above.
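For illustration, with FS="=" each matched line splits at the equals sign, so $2 is the trailing number, which awk coerces to a numeric value when adding; a standalone sketch using the sample line:
$ echo 'Total Jobs Cancelled for Job_1_set0 = 10' | awk 'BEGIN {FS="="}; {cnt+=$2}; END {print cnt}'
10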

Count lines of CLI output in linux

I have the following command:
lsscsi | grep HITACHI | awk '{print $6}'
I want the output to be the number of lines of the original output.
For example, if the original output is:
/dev/sda
/dev/sdb
/dev/sdc
The final output will be 3.
Basically the command wc -l can be used to count the lines in a file or a pipe. However, since you want to count the number of lines after a filter has been applied, I would recommend using grep for that:
lsscsi | grep -c 'HITACHI'
-c just prints the number of matching lines.
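For completeness, the wc -l approach mentioned above, applied to the original pipeline, would look like this:
lsscsi | grep HITACHI | awk '{print $6}' | wc -l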
Another thing: in your example you are using grep ... | awk. That's a useless use of grep. It should be:
lsscsi | awk '/HITACHI/{print $6}'
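If you prefer to stay entirely in awk, a sketch of counting the matches directly (the +0 just forces a 0 when nothing matches):
lsscsi | awk '/HITACHI/{n++} END{print n+0}'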

get the first word as result of ls -l

I need to use ls -l and I would like the result to be just the first word of the file name. For instance, for a result like this:
-rw-r--r-- 1 root root 9 Sep 21 23:11 best file 1.txt
I would like to have only
best
as the result, because I need to put this value into a variable. It is also fine if there is another way instead of using ls -l.
...sorry to bother you again...if the file is under a sub-directory, how can I hide the folder from the result? Thanks
You don't need to use ls -l (L).
Instead, use ls -1 (number one), which just outputs the names of the files, and then extract the first column with cut:
ls -1 | cut -d' ' -f1
^
number one, not letter L
To store the value into a variable, do:
var=$(ls -1 | cut -d' ' -f1)
Note it is not a good idea to parse ls: the number of columns may vary, etc. You can read more about the topic in Why you shouldn't parse the output of ls.
Update
Note there is not even a need to use -1 (one); ls alone suffices:
ls | cut -d' ' -f1
As BroSlow comments below, this works "because they are EOL (end of line) separated across a pipe".
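As a rough sketch of avoiding ls altogether, in line with the parsing warning above, a shell glob plus parameter expansion can pull the first word of each name; the loop and variable names here are only illustrative:
for f in *; do
  first=${f%% *}   # strip everything from the first space onward
  echo "$first"
done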
If you have only one row to output, this will work fine:
var=`ls -l | awk '{ print $9 }'`
echo ${var}
Otherwise you need to use grep to filter your output for the correct file.
set -- $(ls -l)
echo ${11} # Assumes the file is the FIRST one listed; the two words of the "total N" header occupy $1 and $2, so the first file name lands at ${11}
Should do the trick. But I'm not sure if that's really what you want. For one thing, ls -l also prints an extra header line. Why do you say that you need to use ls -l? If you could state the actual problem, maybe we can find a much better solution together...
awk can pick the first word for you;
ls | awk '{print $1}'
Try:
ls -al | awk 'NR==4 { print $9 }'
Row number 4 has the first real file: line 1 is the total line, and lines 2 and 3 are the . and .. entries. $9 indicates column 9, which holds the desired word.

Omitting the first line from any Linux command output

I have a requirement where I'd like to omit the first line from the output of ls -latr "some path", since I need to remove the total 136 line from the output below.
So I wrote ls -latr /home/kjatin1/DT_901_linux//autoInclude/system | tail -q, which excluded the first line, but when the folder is empty it does not omit it. Please tell me how to omit the first line from any Linux command's output.
The tail program can do this:
ls -lart | tail -n +2
The -n +2 means “start passing through on the second line of output”.
Pipe it to awk:
awk '{if(NR>1)print}'
or sed
sed -n '1!p'   # -n suppresses automatic printing; 1!p prints every line except line 1
ls -lart | tail -n +2   # the +2 argument means start output at line 2
This is a quick hacky way: ls -lart | grep -v ^total.
Basically, remove any lines that start with "total", which in ls output should only be the first line.
A more general way (for anything):
ls -lart | sed "1 d"
sed "1 d" means only print everything but first line.
You can use the awk command:
For command output, use a pipe: | awk 'NR>1'
For the output of a file: awk 'NR>1' file.csv
