Is \d not supported by grep's basic expressions? [closed]

This does not generate any output. How come?
$ echo 'this 1 2 3' | grep '\d\+'
But these do:
$ echo 'this 1 2 3' | grep '\s\+'
this 1 2 3
$ echo 'this 1 2 3' | grep '\w\+'
this 1 2 3

As specified by POSIX, grep uses basic regular expressions (BRE), and \d is a Perl-compatible regular expression (PCRE) escape, not part of BRE. (GNU grep does accept \w and \s even in BRE mode, as GNU extensions, which is why your other two examples match; it has no such extension for \d.)
If you are using GNU grep, you can pass the -P option to enable PCRE matching. Otherwise, use the POSIX-specified [[:digit:]] character class in place of \d.
echo 1 | grep -P '\d'
# output: 1
echo 1 | grep '[[:digit:]]'
# output: 1
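For example, the original command matches once \d is replaced with the POSIX class (the \+ works here because GNU grep accepts it as a BRE extension):
echo 'this 1 2 3' | grep '[[:digit:]]\+'
# output: this 1 2 3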

Try this:
$ echo 'this 1 2 3' | grep '[0-9]\+'

Related

How do I exit tail --follow on ERROR or FATAL log level [closed]

I tail -f the log file of one of my services, but I would like the tail process to stop automatically once the logger prints any line starting with or containing ERROR or FATAL.
How can I achieve this?
To terminate tail -f immediately after the output has a line containing ERROR or FATAL, try:
tail -f file.log | awk '{print} /ERROR|FATAL/{exit}'
Example
$ cat file.log
abc
def
ghi
ERROR
jkl
mno
pqr
$ tail -f file.log | awk '{print} /ERROR|FATAL/{exit}'
abc
def
ghi
ERROR
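Note that awk exits as soon as it prints the matching line, but tail itself only terminates the next time it tries to write to the now-closed pipe and receives SIGPIPE, i.e. when another line is appended to the log. If you prefer sed, a roughly equivalent sketch (same caveat applies) is:
$ tail -f file.log | sed -e '/ERROR/q' -e '/FATAL/q'
sed prints each line by default and quits right after printing the first line that matches either pattern.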

List only numerical directory names in Linux [closed]

How can I list only numerical directory names in Linux, i.e. only directories whose names consist solely of digits?
There are multiple ways to do it.
1. List just the directories, strip the leading ./ and the trailing / from the names, then grep for purely numeric ones:
ls -d ./*/ | sed 's/\.\///g' | sed 's/\///g' | grep -E '^[0-9]+$'
2. Use ls, grep, and awk: list with details, grep for directories (lines starting with d), print the 9th column (the name), and keep only the numeric names:
ls -llh | grep '^d' | awk '{print $9}' | grep -E '^[0-9]+$'
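A quick check of both pipelines, assuming a directory that contains subdirectories named 123, 456, and abc (hypothetical names):
$ mkdir 123 456 abc
$ ls -d ./*/ | sed 's/\.\///g' | sed 's/\///g' | grep -E '^[0-9]+$'
123
456
$ ls -llh | grep '^d' | awk '{print $9}' | grep -E '^[0-9]+$'
123
456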
In bash, you can benefit from extended globbing:
shopt -s extglob
ls -d +([0-9])/
Where
+(pattern-list)
Matches one or more occurrences of the given patterns
The / at the end limits the list to directories, and -d prevents ls from listing their contents.
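Using the same hypothetical directories (123, 456, abc), this would print:
$ shopt -s extglob
$ ls -d +([0-9])/
123/  456/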

grep -1 freezes, but it should report the invalid flag [closed]

grep -1 on its own gives a usage error, as it should. But
$ touch foo
$ grep -1 foo
freezes. It doesn't report the invalid flag. Why is this happening? Is it a bug?
I've tested it on Mac (El Capitan) and Ubuntu (14.04).
With modern GNU and macOS/BSD implementations, grep -1 foo is not frozen: it is reading from stdin and filtering that input for lines containing foo -- foo was interpreted as a pattern, not a filename. The only difference from plain grep foo is that the amount of context printed around each match is set to a single line, making it equivalent to grep -C1 foo.
Reading the GNU grep source, it explicitly allows the digits as short options:
static char const short_options[] =
"0123456789A:B:C:D:EFGHIPTUVX:abcd:e:f:hiLlm:noqRrsuvwxyZz";
These digits are stored in DEFAULT_CONTEXT, which determines how many lines of context to print around each match unless overridden by the more explicit -A or -B (how many lines to print after or before a match). This is the same value set with -C.
Thus, in the GNU implementation and in BSD implementations extended to behave similarly to it,
grep -C3 foo
...and...
grep -3 foo
...behave identically, printing three lines of context surrounding each match.
To demonstrate this behavior:
$ printf '%s\n' 3 2 1 foo 1 2 3 | grep -0 foo
foo
$ printf '%s\n' 3 2 1 foo 1 2 3 | grep -1 foo
1
foo
1
$ printf '%s\n' 3 2 1 foo 1 2 3 | grep -2 foo
2
1
foo
2
1
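To confirm the equivalence with -C claimed above, the same input filtered with grep -C1 produces identical output:
$ printf '%s\n' 3 2 1 foo 1 2 3 | grep -C1 foo
1
foo
1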

What is the equivalent of "grep -e pattern1 -e pattern2 <file> " in Solaris? [closed]

What is the equivalent of grep -e pattern1 -e pattern2 "$file" in Solaris?
On Linux it works fine, but on Solaris I get this error:
grep: illegal option -- e
Usage: grep -hblcnsviw pattern file . . .
Can anyone help, please?
Instead of:
# rejected by Solaris's legacy /usr/bin/grep
grep -e pattern1 -e pattern2 file
...you can pass both patterns as a single newline-separated pattern list:
# POSIX pattern lists may contain several patterns separated by newlines
grep -e 'pattern1
pattern2' file
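As a quick sanity check of the newline-separated pattern list (shown here with GNU grep; the input lines are made up):
$ printf '%s\n' apple banana cherry | grep -e 'apple
> cherry'
apple
cherry
On Solaris itself, the XPG4 utilities under /usr/xpg4/bin also include a POSIX-conforming grep that accepts multiple -e options, if that path exists on your system.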

Linux/Unix command for checking specific logs [closed]

How can I extract log entries from a specific time frame? Let's say an issue started between 4pm and 5pm; how can I get the log lines from just that window? I can use less or cat or grep, but that alone would not give me the details of the error. Sample command:
grep "2013-08-26 16:00:00" sample.log
What is a more precise Linux/Unix command that can do the trick?
To view ERROR log messages between 16:00:00 and 17:00:00, use:
grep -nP '2013-08-26 16:.+ERROR' sample.log | less
If the log contains multiline messages, you can use the -A n and -B n options to print n lines after or before each matching line.
3 lines before and after each match:
grep -A 3 -B 3 -nP '2013-08-26 16:.+ERROR' sample.log | less
Shorthand for the same:
grep -3 -nP '2013-08-26 16:.+ERROR' sample.log | less
If you know that the issue happened between 4 and 5 pm, you can use this:
grep "2013-08-26 16:" sample.log | less
If you need some lines of context around the issue, add the option -N to grep (a context of N lines), something like this:
grep -3 "2013-08-26 16:" sample.log | less
If you know that your event contained some specific word, you can filter further with one more grep:
grep -3 "2013-08-26 16:" sample.log | grep somethingelse
