My grep command looks like this:
zgrep -B bb -A aa "pattern" *
I would like to have the output as:
file1:line1
file1:line2
file1:line3
file1:pattern
file1:line4
file1:line5
file1:line6
</blank line>
file2:line1
file2:line2
file2:line3
file2:pattern
file2:line4
file2:line5
file2:line6
The problem is that it's hard to distinguish when the lines corresponding to the first found result end and the lines corresponding to the second found result start.
Note that although man grep says that "--" is added between contiguous groups of matches, this only works when multiple matches are found in the same file; in my search (as above) I am searching multiple files.
Also note that adding a blank line after every bb+aa+1 lines won't work, because a file may have fewer than bb lines before the pattern.
Pipe the grep output through
awk -F: '{if(f!=$1)print ""; f=$1; print $0;}'
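For example, with two lines of context on each side (the 2s here only stand in for your bb and aa values), the full pipeline would be:
zgrep -B 2 -A 2 "pattern" * | awk -F: '{if(f!=$1)print ""; f=$1; print $0;}'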
Pipe any output to:
sed G
Example:
ls | sed G
If you run man sed you will see:
G      Appends a newline character followed by the contents of the hold space to the pattern space.
If you don't mind a -- in lieu of a </blank line>, add the -0 parameter to your grep/zgrep command. This should allow for the -- to appear even when searching multiple files. You can still use the -A and -B flags as desired.
You can also use the --group-separator parameter with an empty value, so it'd just add a newline.
some-stream | grep --group-separator=
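Applied to the original command, that would be something along these lines (assuming your zgrep forwards the option to GNU grep; the 2s only stand in for bb and aa):
zgrep -B 2 -A 2 --group-separator= "pattern" *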
I can't test it with the -A and -B parameters so I can't say for sure, but you could try using sed G as mentioned here on Unix StackEx. You'll lose coloring though, if that's important.
There is no option for this in grep and I don't think there is a way to do it with xargs or tr (I tried), but here is a for loop that will do it (for f in *; do grep -H "1" $f && echo; done):
[ 11:58 jon@hozbox.com ~/test ]$ for f in *; do grep -H "1" $f && echo; done
a:1

b:1

c:1

d:1

[ 11:58 jon@hozbox.com ~/test ]$ ll
-rw-r--r-- 1 jon people 2B Nov 25 11:58 a
-rw-r--r-- 1 jon people 2B Nov 25 11:58 b
-rw-r--r-- 1 jon people 2B Nov 25 11:58 c
-rw-r--r-- 1 jon people 2B Nov 25 11:58 d
The -H is to display file names for grep matches. Change the * to your own file glob/path expansion string if necessary.
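If you need the surrounding context as well, the same idea works with the -A/-B flags (the 2s below are only placeholders for your aa and bb values):
for f in *; do grep -H -B 2 -A 2 "pattern" "$f" && echo; done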
Try with -C 2; when printing context I see that grep separates the groups of output it finds.
How do I list files that have only numeric names, e.g.
1111
2342
763
71
I have tried ls -l [0-9]*, but this seems to bring up all file names that start with a digit and can have anything in their name after that digit.
First, turn on Bash's extglob option:
shopt -s extglob
Then use this pattern:
ls -l +([0-9])
Read more about extended pattern matching in the Bash Reference Manual.
Using GNU find:
find . -type f -regex './[0-9]*$'
This uses a regular expression rather than a filename globbing pattern. The expression ./[0-9]*$ matches ./ (the current directory) followed by only digits until the end ($).
If you need to stop find from going into subdirectories, add -maxdepth 1.
If you want ls -l -like behaviour, add -ls at the very end.
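Putting those options together (same regex as above), the whole command might look like:
find . -maxdepth 1 -type f -regex './[0-9]*$' -ls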
With the opposite operation: ignoring filenames that contain non-digits ([^0-9]):
ls -1 --ignore='*[^0-9]*'
--ignore=PATTERN
do not list implied entries matching shell PATTERN
This command will always be the safest option!
ls | grep -E '^[0-9]+$'
Adding a few sample test cases to make it clear.
$ ls
1 12222e 2 B.class Test.java file.sh y.csv
1.txt 1a A.class Test.class db_2017-08-15 file.txt
$ ls | grep -E '^[0-9]+$'
1
2
For example, if we have txt as below:
drwxr-sr-x 7 abcdefgetdf
drwxr-sr-x 7 abcdef123123sa
drwxr-sr-- 7 abcdefgetdf
drwxr-sr-- 7 abcdeadfvcxvxcvx
drwxr-sr-x 7 abcdef123ewlld
To answer the question in the title strictly:
awk 'substr($0, 1, 9) ~ /x/' txt
Though if you're interested in files with at least one execute permission bit set, then perhaps find -perm /0111 would be something to look into.
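A sketch of that find alternative (GNU find syntax; the -maxdepth 1 is my addition, to stay in the current directory like ls -l does):
find . -maxdepth 1 -perm /0111
The /0111 form matches entries with any of the three execute bits set.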
awk solution:
ls -l | awk '$1~/x/'
$1~/x/ - regular expression match; accepts only lines whose 1st field contains the character x
I couldn't figure out a one-liner that just checked the first 9 (or 10) characters for 'x'. But this little script does the trick. It uses "ls -l" as input instead of a file, but it's trivial to pass in a file to filter instead.
#!/bin/bash
ls -l | while IFS= read -r line
do
    perms="${line:0:10}"
    if [[ "${perms}" =~ "x" ]]; then
        echo "${line}"
    fi
done
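For what it's worth, a grep one-liner that checks only the first ten characters for an x is possible (a sketch, not part of the original answer; the regex matches an x anywhere in positions 1-10 of each line):
ls -l | grep -E '^.{0,9}x'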
Was just wondering because I whipped this up last month.
#!/usr/bin/bash
# Collects all of the args; make sure to separate with ','
IN="$*"
# Splits on ',' and places each piece on its own line of a tmp file
echo $IN | sed 's/,/\n/g' > /tmp/pick.a.random.word.or.phrase
# Obvious vars are obvious
WORDFILE="/tmp/pick.a.random.word.or.phrase"
# Pick only one of the vars
NUMWORDS=1
## Picks a random line from tmp file
#Number of lines in $WORDFILE
tL=`awk 'NF!=0 {++c} END {print c}' $WORDFILE`
# Expand random
RANDOM_CMD='od -vAn -N4 -tu4 /dev/urandom'
for i in `seq $NUMWORDS`
do
rnum=$((`${RANDOM_CMD}`%$tL+1))
sed -n "$rnum p" $WORDFILE | tr '\n' ' '
done
printf "\n"
rm /tmp/pick.a.random.word.or.phrase
Mainly I ask:
Do I need to have a tmp file?
Is there a way to do this in one line with another program?
How to condense as much as possible?
The command-line argument handling is, to my mind, bizarre. Why not just use normal command line arguments? That makes the problem trivial:
#!/usr/bin/bash
shuf -en1 "$#"
Of course, you could just use shuf -en1, which is only nine keystrokes:
$ shuf -en1 word another_word "random phrase"
another_word
$ shuf -en1 word another_word "random phrase"
word
$ shuf -en1 word another_word "random phrase"
another_word
$ shuf -en1 word another_word "random phrase"
random phrase
shuf command-line flags:
-e Shuffle command line arguments instead of lines in a file/stdin
-n1 Produce only the first random line (or argument in this case)
If you really insist on running the arguments together and then separating them with commas, you can use the following. As with your original, it will exhibit unexpected behaviour if some word in the arguments could be glob-expanded, so I really don't recommend it:
#!/usr/bin/bash
IFS=, read -ra args <<<"$*"
echo $(shuf -en1 "${args[@]}")
The first line combines the arguments and then splits the result at commas into the array args. (The -a option to read.) Since the string is split at commas, spaces (such as those automatically inserted by the argument concatenation) are preserved; to remove the spaces, I word-split the result of shuf by not quoting the command substitution.
You could use shuf to shorten your script and remove the temporary file.
#!/usr/bin/bash
# Collects all of the args; make sure to separate with ','
IN="$*"
# Takes everything between the ','s and places it in an array
words=($(echo $IN | sed 's/,/ /g'))
# Get a random index between 0 and the last index of the array words
index=$(shuf -i 0-$((${#words[@]} - 1)) -n 1)
# Print the word at the random index
echo ${words[$index]}
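An invocation would then look something like this (hypothetical script name):
./pick_random.sh apple,banana,cherry     # prints one of: apple, banana, cherry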
If you don't want to use shuf, you could also use $RANDOM:
#!/usr/bin/bash
# Collects all of the args; make sure to separate with ','
IN="$*"
# Takes everything between the ','s and places it in an array
words=($(echo $IN | sed 's/,/ /g'))
# Print a random element of the array
echo ${words[$RANDOM % ${#words[@]}]}
shuf in coreutils does exactly this, but with multiple command arguments instead of a single comma separated argument.
shuf -n1 -e arg1 arg2 ...
The -n1 option says to choose just one element. The -e option indicates that elements will be passed as arguments (as opposed to through standard input).
Your script then just needs to replace commas with spaces in $*. We can do this using bash parameter substitution:
#!/usr/bin/bash
shuf -n1 -e ${*//,/ }
This won't work with elements with embedded spaces.
Isn't it as simple as generating a number at random between 1 and $# and simply echoing the corresponding argument? It depends on what you have; your comment about 'collect arguments; make sure to separate with commas' isn't clear, because the assignment does nothing with commas, and you don't show how you invoke your command.
I've simply cribbed the random number generation from the question: it works OK on my Mac, generating the values 42,405,691 and 1,817,261,076 on successive runs.
n=$(( $(od -vAn -N4 -tu4 /dev/urandom) % $# + 1 ))
eval echo "\${$n}"
You could even reduce that to a single line if you were really determined:
eval echo "\${$(( $(od -vAn -N4 -tu4 /dev/urandom) % $# + 1 ))}"
This use of eval is safe as it involves no user input. The script should check that it is provided at least one argument to prevent a division-by-zero error if $# is 0. The code does an absolute minimum of data movement — in contrast to solutions which shuffle the data in some way.
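A minimal version of that argument check might look like this (the message wording is mine):
[ "$#" -gt 0 ] || { echo "usage: random_selection word..." >&2; exit 1; }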
If that's packaged in a script random_selection, then I can run:
$ bash random_selection Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
Feb
$ bash random_selection Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
Oct
$ bash random_selection Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
Nov
$
If the total number of arguments is big enough that you run out of argument space, then you need to think again, but that restriction is present in the existing code.
The selection is marginally biased towards the earlier entries in the list; you have to do a better job of rejecting random numbers that are very near the maximum value in the range. For a random 32-bit unsigned value, if it is larger than $# * (0xFFFFFFFF / $#) you should generate another random number.
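A sketch of that rejection step, bolted onto the same od-based generator (the structure and variable names here are mine, not from the original script):
#!/usr/bin/bash
# Assumes at least one argument, as discussed above.
limit=$(( $# * (0xFFFFFFFF / $#) ))          # cutoff suggested in the text above
while :; do
    r=$(( $(od -vAn -N4 -tu4 /dev/urandom) ))
    (( r <= limit )) && break                # redraw anything above the cutoff
done
eval echo "\${$(( r % $# + 1 ))}"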
I have some files that have names like this: 'abcdefg_y_zz.jpg'
The 'abcdefg' is a sequence of digits, while the 'y' and 'zz' are letters.
I need to get all the files that have the sequence of digits ending with a number greater than 10, i.e. the files that have 'fg' greater than 10.
Does anyone have an idea on how to do that in a bash script?
Ok, technically, based on all your info...
ls | grep -E '[0-9]{5}[1-9][0-9]_[[:alpha:]]_[[:alpha:]]{2}\.jpg'
How about this? Just exclude ones which have 0 in position f.
ls -1 | grep -v "?????0?_?_??.jpg"
Update
Since you want > 10 and not >= 10, you'll need to exclude 10 too. So do this:
ls -1 | grep -v "?????0*_?_??.jpg" | grep -v "??????10_?_??.jpg"
With more scripting:
#!/bin/bash
for i in $(seq FIRST INCREMENT LAST)
do
    cp abcde${i}_y_zz.jpg /your_new_dir   # or whatever you want to do with those files
done
So in your example the line with seq will be:
for i in $(seq 11 1 100000000)
If the filenames are named consistently, this awk solution works:
ls | awk 'BEGIN { FIELDWIDTHS = "5 2" } $2 > 10'
Explanation
FIELDWIDTHS = "5 2" means that $1 will refer to the first 5 characters and $2 the next 2.
$2 > 10 matches when field 2 is greater than 10 and implicitly invokes the default code block, i.e. '{ print }'
Just one process:
ls ?????+(1[1-9]|[2-9]?)_?_??.jpg
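Note that the +( ) construct is an extended glob, so if the pattern isn't recognized you may need to enable extglob first, as in the earlier answer:
shopt -s extglob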
All the solutions provided so far are fine, but anybody who's had some experience with shell programming knows that parsing ls is never a good idea and must be avoided. This actually doesn't apply in this case, where we can assume that the names of the files follow a certain pattern, but it's a rule that should be remembered. More explanation here.
What you want can be achieved much more safely with GNU find - assuming that you run the command in the directory where the files are, it would look something like this:
find . -regextype posix-egrep -regex '\./[0-9]{5}[1-9][0-9]_[[:alpha:]]_[[:alpha:]]{2}\.jpg$'
I was wondering if someone could help me out.
I'm writing a bash script and I want to delete the last 12 lines of a specific file.
I have had a look around and somehow came up with the following:
head -n -12 /var/lib/pgsql/9.6/data/pg_hba.conf | tee /var/lib/pgsql/9.6/data/pg_hba.conf >/dev/null
But this wipes the file completely.
All I want to do is permanently delete the last 12 lines of that file so I can overwrite it with my own rules.
Any help on where I'm going wrong?
There are a number of methods, depending on your exact situation. For small, well-formed files (say, less than 1M, with regular sized lines), you might use Vim in ex mode:
ex -snc '$-11,$d|x' smallish_file.txt
-s -> silent; this is batch processing, so no UI necessary (faster)
-n -> No need for an undo buffer here
-c -> the command list
'$-11,$d' -> Select the 11 lines from the end to the end (for a total of 12 lines) and delete them. Note the single quote so that the shell does not interpolate $d as a variable.
x -> "write and quit"
For a similar, perhaps more authentic throw-back to '69, the ed line-editor could do this for you:
ed -s smallish_file.txt <<< $'-11,$d\nwq'
Note the $ outside of the single quote, which is different from the ex command above.
If Vim/ex and Ed are scary, you could use sed with some shell help:
sed -i "$(($(wc -l < smallish_file.txt) - 11)),\$d" smallish_file.txt
-i -> inplace: write the change to the file
The line count less 11 for a total of 12 lines. Note the escaped dollar symbol (\$) so the shell does not interpolate it.
But using the above methods will not be performant for larger files (say, more than a couple of megs). For larger files, use the intermediate/temporary file method, as the other answers have described. A sed approach:
tac some_file.txt | sed '1,12d' | tac > tmp && mv tmp some_file.txt
tac to reverse the line order
sed to remove the last (now first) 12 lines
tac to reverse back to the original order
More efficient than sed is a head approach:
head -n -12 larger_file.txt > tmp_file && mv tmp_file larger_file.txt
-n NUM show only the first NUM lines. Negated as we've done, it shows everything except the last NUM lines.
But for real efficiency -- perhaps for really large files or for where a temporary file would be unwarranted -- truncate the file in place. Unlike the other methods, which involve variations of overwriting the entire old file with the entire new content, this one will be near instantaneous no matter the size of the file.
# In readable form:
BYTES=$(tail -12 really_large.txt | wc -c)
truncate -s -$BYTES really_large.txt
# Inline, perhaps as part of a script
truncate -s -$(tail -12 really_large.txt | wc -c) really_large.txt
The truncate command makes files exactly the specified size in bytes. If the file is too short, it will make it larger, and if the file is too large, it will chop off the excess really efficiently. It does this with filesystem semantics, so it involves writing usually no more than a couple of bytes. The magic here is in calculating where to chop:
-s -NUM -> Note the dash/negative; says to reduce the file by NUM bytes
$(tail -12 really_large.txt | wc -c) -> returns the number of bytes to be removed
So, you pays your moneys and takes your choices. Choose wisely!
Like this:
head -n -12 test.txt > tmp.txt && cp tmp.txt test.txt
You can use a temporary file to store the intermediate result of head -n.
I think the code below should work:
head -n -12 /var/lib/pgsql/9.6/data/pg_hba.conf > /tmp/tmp.pg.hba.$$ && mv /tmp/tmp.pg.hba.$$ /var/lib/pgsql/9.6/data/pg_hba.conf
If you are putting it in a script, more readable and easier-to-maintain code would be:
SRC_FILE=/var/lib/pgsql/9.6/data/pg_hba.conf
TMP_FILE=/tmp/tmp.pg.hba.$$
head -n -12 $SRC_FILE > $TMP_FILE && mv $TMP_FILE $SRC_FILE
I would suggest backing up /var/lib/pgsql/9.6/data/pg_hba.conf before running any script.
A simple and clear script:
declare -i start
declare -i cnt
cat dummy
1
2
3
4
5
6
7
8
9
10
11
12
13
cnt=`wc -l dummy|awk '{print $1}'`
start=$((cnt - 12 + 1))
sed "${start},\$d" dummy
OUTPUT (only the first line remains):
1