Let's say my file /etc/passwd contains
ntp:x:38:40::/etc/ntp:/sbin/nologin
avahi:x:70:70:Avahi mDNS/DNS-SD Stack:/var/run/avahi-daemon:/sbin/nologin
haldaemon:x:38:68:HAL daemon:/:/sbin/nologin
pulse:x:497:495:PulseAudio System Daemon:/var/run/pulse:/sbin/nologin
gdm:x:42:38::/var/lib/gdm:/sbin/nologin
sshd:x:388:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
tcpdump:x:38:72::/:/sbin/nologin
What I'm trying to do is print the lines containing "38" in the third column, something which will print this:
ntp:x:38:40::/etc/ntp:/sbin/nologin
haldaemon:x:38:68:HAL daemon:/:/sbin/nologin
gdm:x:42:38::/var/lib/gdm:/sbin/nologin
tcpdump:x:38:72::/:/sbin/nologin
I tried something like
cat "/etc/passwd" | cut -d ":" -f3 | grep "38"
but it only shows the "38", not the entire line.
Thanks
You may test this:
awk -F: '$3~/38/' /etc/passwd
Note that a 3rd column containing 338 or 838 will be printed as well, since ~ is a regex (substring) match.
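If you need an exact match on the third column instead, a small variant of the same awk approach anchors the regex (or you can compare numerically, as another answer below does):
awk -F: '$3 ~ /^38$/' /etc/passwd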
You could use grep
grep ^.*:.*:38: /etc/passwd
Improved version after tripleee's comment:
egrep ^[^:]*:[^:]*:38: /etc/passwd
You can use awk:
awk -F: '$3==38{print}' file
In general, I would suggest you avoid parsing /etc/passwd directly. Instead you can use getent passwd to read the passwd database.
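For example, a sketch that combines getent with the awk filter from above (getent passwd output uses the same colon-separated format):
getent passwd | awk -F: '$3 == 38'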
You can do this:
cat /etc/passwd | egrep "^[[:alnum:]]*:[[:alnum:]]*:38:.*"
Using the alphanumeric character class.
In pure bash (awk is the way to go though!):
$ while read line; do array=(${line//:/ }); [ ${array[2]} -eq 38 ] && echo $line; done < input
ntp:x:38:40::/etc/ntp:/sbin/nologin
haldaemon:x:38:68:HAL daemon:/:/sbin/nologin
Only sed was remaining :)
sed -n '/^[^:]*:[^:]*:38:/p' /etc/passwd
I want to format the output of a user account from /etc/passwd to display only the name, role, and directory path, all separated by commas. I know this should be easy, but for the life of me I cannot figure out how to display the text between certain colons. (Note this should work with any username, not just the example.)
Example of grep joe /etc/passwd:
joe:x:1001:1001:System Admin:/home/joe:/bin/bash
Desired Output:
joe, System Admin, /home/joe
Thank you!
awk 'BEGIN{ FS=":"; OFS=", " } $1=="joe" { print $1,$5,$6; exit }' /etc/passwd
(but you should show a little more effort next time -- your question is very downvotable :))
With cut using comma as --output-delimiter:
cut -d: -f1,5,6 --output-delimiter=, /etc/passwd
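Note that this prints those fields for every entry; to restrict it to joe you could filter with grep first. A sketch, assuming GNU cut for --output-delimiter:
grep '^joe:' /etc/passwd | cut -d: -f1,5,6 --output-delimiter=', '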
Try:
grep joe /etc/passwd | cut -d: -f1,5,6
If you really need "," as a delimiter:
grep "^joe:" /etc/passwd | cut -d: -f1,5,6 | tr : ,
The "^" ensures only matches to "joe" at the beginning of the line are intercepted (Thanks PSkocik for the reminder). grep by defaults accepts a regex.
Translation in bash, but two lines:
x=`grep joe /etc/passwd | cut -d: -f1,5,6`
echo "${x//:/,}"
Adding to the variations, in bash itself you can pipe the output of grep directly to a brace-enclosed read and separate the fields using IFS as required. Using short names for the fields, you could do:
grep '^joe:' /etc/passwd |
{ IFS=: read -r u x uid gid n h s; echo "$u, $n, $h"; }
Which would give the output you seek (if /etc/passwd contains the entry).
If you want to do it purely in shell (POSIX sh is enough, Bash not needed), then read is your friend:
while IFS=: read user pw uid gid gecos home shell ; do
    if [ "$user" = "joe" ] ; then
        echo "$user, $gecos, $home"
    fi
done < /etc/passwd
I have a file xxx.conf in text format. It contains the text "disablelog = 1".
When I use
grep -r "disablelog" oscam.conf
output is
disablelog = 1
But I need only the value 1.
Do you have any ideas, please?
One way is to use awk to print just the value:
grep -r "disablelog" oscam.conf | awk '{print $3}'
You could also use sed to replace "disablelog = " with an empty string:
grep -r 'disablelog' oscam.conf | sed -e 's/disablelog = //'
If you also want to match lines with or without spaces before and after the =, use
grep -r 'disablelog' oscam.conf | sed 's/disablelog\s*=\s*//'
The above command will also match
disablelog=1
Assuming you need it as a var in a script:
#!/bin/bash
DISABLELOG=$(awk -F= '/^.*disablelog/{gsub(/ /,"",$2);print $2}' /path/to/oscam.conf)
echo $DISABLELOG
When calling this script, the output should be 1.
Edit: Whether or not there is whitespace between the equals sign and the value, the above will handle it. The regex should be anchored either way to improve performance.
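For instance, an anchored variant of the same command (a sketch, assuming the key starts at the beginning of the line, path as above):
DISABLELOG=$(awk -F= '/^disablelog/{gsub(/ /,"",$2);print $2}' /path/to/oscam.conf)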
Try:
grep -r "disablelog" oscam.conf | awk -F= '{print $2}'
Just for fun, a solution without awk:
grep -r disablelog oscam.conf | cut -d= -f2 | xargs
xargs is used here to trim the whitespace.
I have text files with a line like this in them:
MC exp. sig-250-0 events & $0.98 \pm 0.15$ & $3.57 \pm 0.23$ \\
sig-250-0 is something that can change from file to file (but I always know what it is for each file). There are lines before and after this one, but the string "MC exp. sig-250-0 events" is unique in the file.
For a particular file, is there a good way to extract the second number 3.57 in the above example using bash?
Use awk for this:
awk '/MC exp. sig-250-0/ {print $10}' your.txt
Note that this will print $3.57, with the leading $; if you don't like that, pipe the output to tr:
awk '/MC exp. sig-250-0/ {print $10}' your.txt | tr -d '$'
In comments you wrote that you need to call it in a script like this:
while read p ; do
echo $p,awk '/MC exp. sig-$p/ {print $10}' filename | tr -d '$'
done < grid.txt
Note that you need a subshell $() for the awk pipe, like this:
echo "$p",$(awk '/MC exp. sig-$p/ {print $10}' filename | tr -d '$')
If you want to pass a shell variable to the awk pattern, use the following syntax (a dynamic pattern has to be matched with ~ rather than written between slashes):
awk -v p="MC exp. sig-$p" '$0 ~ p {print $10}' a.txt | tr -d '$'
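Putting it together, your loop could look something like this (a sketch, assuming grid.txt holds the sig identifiers one per line and filename is your data file, as in your comment):
while read -r p ; do
    echo "$p",$(awk -v p="MC exp. sig-$p" '$0 ~ p {print $10}' filename | tr -d '$')
done < grid.txt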
More lines would've been nice, but I guess you would like a simple awk approach.
awk '{print $N}' $file
If you don't tell awk which field separator to use, it splits on whitespace by default. Then you just have to count the fields to find the one you want. In your case it is field 10.
awk '{print $10}' file.txt
$3.57
Don't want the $?
Pipe your awk result to cut:
awk '{print $10}' foo | cut -d'$' -f2
-d will use the $ as the field separator and -f will select the second field.
If you know you always have the same number of fields, then
#!/bin/bash
file=$1
key=$2
while read -ra f; do
    if [[ "${f[0]} ${f[1]} ${f[2]} ${f[3]}" == "MC exp. $key events" ]]; then
        echo "${f[9]}"
    fi
done < "$file"
Suppose I have a log which has data in the format given below
Time number status
2013-5-10 19:18:43.430 123456 success
2013-5-10 19:28:13.430 134324 fail
2013-5-10 19:58:33.430 456456 success
I want to extract the numbers having success status.
Is there any way in Linux, using the command line (grep, sed), to extract the data as mentioned?
Thanks all.
A grep-only solution:
grep -Po '\d+(?= success)' file
or with awk only:
awk '$4=="success"&&$0=$3' input
This prints the numbers that have success status:
awk '$4 ~ /success/ {print $3}' logfile
You could do
(grep 'success' | cut -d ' ' -f 3) < "$file"
cat file | grep success | awk '{print $3}'
Using perl:
perl -ne '/success/ && split && print "$_[2]\n"' inputfile
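Since the question also mentions sed, a sed-only sketch that keeps the third field of lines ending in success:
sed -n 's/.* \([0-9][0-9]*\) success$/\1/p' file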
I'm writing a script that queries my JBoss server for some database related data. The thing that is returned after the query looks like this:
ConnectionCount=7
ConnectionCreatedCount=98
MaxConnectionsInUseCount=10
ConnectionDestroyedCount=91
AvailableConnectionCount=10
InUseConnectionCount=0
MaxSize=10
I would like to tokenize this data so the numbers on the right-hand side are stored in a variable in the format 7,98,10,91,10,0,10. I tried to use IFS with the equals sign, but that still keeps the parameter names (only the equals signs are eliminated).
I put your input data into file d.txt. The one-liner below extracts the numbers, comma-delimits them and assigns all that to variable TAB (tested with Korn shell):
$ TAB=$(awk -F= '{print $2}' d.txt | xargs echo | sed 's/ /,/g')
$ echo $TAB
7,98,10,91,10,0,10
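Since you mentioned trying IFS with the equals sign, here is a pure-shell sketch along those lines (assuming the query output is in d.txt); it reads each line, discards the name to the left of the =, and appends the value:
TAB=
while IFS='=' read -r name value ; do
    TAB=${TAB:+$TAB,}$value
done < d.txt
echo "$TAB"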
Or just use cut/tr:
F=($(cut -d'=' -f2 input | tr '\n' ' '))
You can do it with one sed command too:
sed -n 's/^.*=\(.*\)/\1,/;H;${g;s/\n//g;s/,$//;p;}' file
7,98,10,91,10,0,10
A simple cut without any pipes:
arr=( $(cut -d'=' -f2 file) )
Output:
printf '%s\n' "${arr[@]}"
7
98
10
91
10
0
10
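If you then want the comma-separated string from the question, you could join the array by setting IFS in a subshell (a sketch):
( IFS=, ; echo "${arr[*]}" )
which prints 7,98,10,91,10,0,10.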