What is the equivalent of "grep -e pattern1 -e pattern2 <file> " in Solaris? [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 6 years ago.
What is the equivalent of grep -e pattern1 -e pattern2 "$file" in Solaris?
It works fine on Linux, but on Solaris I get:
grep: illegal option -- e
Usage: grep -hblcnsviw pattern file . . .
Can anyone help please?

Instead of:
# GNU grep only
grep -e pattern1 -e pattern2 file
...you can use:
# POSIX-compliant
grep -e 'pattern1
pattern2' file
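Another portable option is extended regular expressions with alternation, via egrep (present on classic Solaris) or /usr/xpg4/bin/grep -E. A small self-contained sketch (the sample lines are illustrative, not from the question):

```shell
# Build a throwaway sample file, then match either pattern with ERE alternation.
# egrep ships on Solaris; on a POSIX system you could also use grep -E.
printf 'pattern1 here\nno match\npattern2 here\n' > /tmp/demo.$$
egrep 'pattern1|pattern2' /tmp/demo.$$   # prints the first and third lines
rm -f /tmp/demo.$$
```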

Related

What do terminal commands ls > wc and ls | wc show? [closed]

Closed 1 year ago.
I know what the commands ls and wc do, but I cannot work out what ls > wc and ls | wc will show. Can someone please help me flesh out the meaning of these commands?
ls | wc The output of ls is piped into wc. With no options, wc prints line, word, and byte counts; since ls writes one entry per line when its output is not a terminal, the line count equals the number of entries ls listed.
ls > wc This redirects the output of ls into a new file named wc in your current working directory. The wc program is never run; a file that merely shares its name is created. You can look into this new file with your favorite editor or simply use cat.
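The difference can be demonstrated end to end in a scratch directory (all names below are made up for the demo):

```shell
# Pipe vs. redirect: | feeds output to a program, > writes it to a file.
dir=$(mktemp -d)
cd "$dir" || exit 1
touch a b c

ls | wc -l    # wc counts the three entries: prints 3
ls > wc       # creates a file literally named "wc" -- wc(1) never runs
ls | wc -l    # now prints 4: a, b, c, and the new file "wc"

cd / && rm -rf "$dir"
```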

List only numerical directory names in linux [closed]

Closed 4 years ago.
How do I list only numerical directory names in Linux, i.e. only directories whose names consist solely of numeric characters?
There are multiple solutions to do it.
1. List just the directories, strip the ./ and trailing / from the names, then grep the purely numerical ones:
ls -d ./*/ | sed 's/\.\///g' | sed 's/\///g' | grep -E '^[0-9]+$'
2. With ls, grep, and awk: list with details, grep the directories (lines starting with d), print the 9th column (the name), and keep only the numeric ones:
ls -lh | grep '^d' | awk '{print $9}' | grep -E '^[0-9]+$'
Good luck In Arbaeen.
In bash, you can benefit from extended globbing:
shopt -s extglob
ls -d +([0-9])/
Where
+(pattern-list)
Matches one or more occurrences of the given patterns
The / at the end limits the list to directories, and -d prevents ls from listing their contents.
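An alternative that avoids parsing ls output entirely is find with a regular expression; -maxdepth, -regex, and -printf are GNU findutils extensions, so this sketch assumes GNU find (the directory names are made up for the demo):

```shell
# Match directories whose final path component is all digits.
# -regex matches against the whole path, hence the leading .*/
d=$(mktemp -d)
mkdir -p "$d/123" "$d/abc" "$d/45x" "$d/7"
find "$d" -maxdepth 1 -type d -regex '.*/[0-9]+' -printf '%f\n'
# prints 123 and 7 (order may vary)
rm -rf "$d"
```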

Grep: show only what matched in a regex group [closed]

Closed 9 years ago.
How can I grep only what matched in a regex group?
for example, get from:
some text ... <a href='...'/user/9082/>... </a>
only numbers from /user/9082/:
9082
What I've tried:
echo "some text ... <a href='...'/user/9082/>3435435345345</a>" | grep -Eo "/user/([0-9]+)/"
Use sed.
$ echo "some text ... <a href='...'/user/9082/>3435435345345</a>" |
> sed -E 's|^.*/user/([0-9]+)/.*$|\1|'
9082
You say "I can use also sed and other methods" implying you are aware sed is the right tool, but that you don't want to use it. Can you elaborate on why? grep is for searching, sed is for formatting.
You could use a bash regex:
str="some text ... <a href='...'/user/9082/>... </a>"
re="/user/([0-9]+)/"
[[ $str =~ $re ]] && echo ${BASH_REMATCH[1]}
Using grep (note that -o prints the entire match, so the surrounding /user/ and / are included in the output; also, forward slashes need no escaping, and \+ is a GNU BRE extension):
echo "some text ... <a href='...'/user/9082/>3435435345345</a>" | grep -o '/user/[0-9]\+/'
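If GNU grep was built with PCRE support, -P with lookarounds can print only the digits, without the surrounding /user/…/; this is a GNU extension and will not work with BSD or stock Solaris grep:

```shell
# Lookbehind/lookahead keep /user/ and / out of the printed match.
echo "some text ... <a href='...'/user/9082/>3435435345345</a>" |
  grep -oP '(?<=/user/)[0-9]+(?=/)'
# prints: 9082
```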

Count number of files within a directory in Linux? [closed]

Closed 9 years ago.
To count the number of files in a directory, I typically use
ls directory | wc -l
But is there another command that doesn't use wc ?
This is one way; it counts only regular files (lines from ls -l that start with -):
ls -l . | egrep -c '^-'
Note:
ls -1 | wc -l
Which means:
ls: list files in dir
-1: (that's a ONE) only one entry per line. Change it to -1A if you want hidden files too (-A, unlike -a, omits the . and .. entries, which would otherwise inflate the count)
|: pipe output onto...
wc: "wordcount"
-l: count lines.
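Since the question asks for a command that avoids wc, one sketch is to let find enumerate the regular files and count the lines with grep -c instead (GNU find's -maxdepth keeps it from recursing; the file names are made up for the demo):

```shell
# grep -c . counts non-empty lines, standing in for wc -l.
d=$(mktemp -d)
touch "$d/a" "$d/b" "$d/c"
find "$d" -maxdepth 1 -type f | grep -c .   # prints 3
rm -rf "$d"
```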

Trimming linux log files [closed]

Closed 9 years ago.
It seems like a trivial issue, but I did not find a solution.
I have a number of log files in a php installation on Debian/Linux that tend to grow quite a bit and I would like to trim nightly to the last 500 lines or so.
How do I do it, possibly in shell and applying a command to *log?
For this, I would suggest to use logrotate with a configuration to your liking instead of programming your own script.
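A minimal logrotate sketch along those lines (the path, schedule, and rotation count below are placeholders, not taken from the question; note that logrotate rotates whole files rather than trimming them to a line count):

```
# /etc/logrotate.d/myapp  -- hypothetical path and app name
/var/www/myapp/*.log {
    daily          # rotate every night
    rotate 7       # keep 7 rotated copies
    compress       # gzip rotated copies
    missingok      # no error if a log is absent
    notifempty     # skip empty logs
}
```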
There might be a more elegant way to do this programmatically, but it is possible to use tail and a for-loop for this:
for file in *.log; do
tail -n 500 "$file" > "$file.tmp"
mv -- "$file.tmp" "$file"
done
If you want to save history of older files, you should check out logrotate.
Otherwise, this can be done trivially with the command line:
LOGS="/var/log"
MAX_LINES=500
find "$LOGS" -type f -name '*.log' -print0 | while IFS= read -r -d '' file; do
tmp=$(mktemp)
tail -n "$MAX_LINES" "$file" > "$tmp"
mv -- "$tmp" "$file"
done
