How to GREP a string between two date ranges? [duplicate] - linux

I am trying to grep all the lines that fall within a date range, where the dates are formatted like this:
date_time.strftime("%Y%m%d%H%M")
so say between [201211150821 - 201211150824]
I am trying to write a script which involves looking for lines between these dates:
cat <somepattern>*.log | grep [201211150821 - 201211150824]
I am trying to find out if something exists in unix where I can look for a range in date.
I could convert the dates in the logs to seconds since the epoch and then do a numeric comparison against [time1 - time2], but that means reading each line, extracting the time value, converting it, and so on.
Maybe something simple already exists, so that I can specify date/timestamp ranges the way I can provide a numeric range to grep?
Thanks!
P.S:
Also, I can pass in a pattern like 2012111511(27|28|29|[3-5][0-9]), but that is specific to the range I want, it is tedious to work out for different dates each time, and it gets trickier to do at runtime.

Use awk. Assuming the first token in the line is the timestamp:
awk '
BEGIN { first = ARGV[1]; last = ARGV[2]; ARGV[1] = ARGV[2] = ""; }  # blank out the range arguments so awk does not treat them as input files
$1 > first && $1 < last { print; }
' 201211150821 201211150824 <somepattern>*.log

A Perl solution:
perl -wne 'print if m/(?<!\d)(20\d{10})(?!\d)/
&& $1 >= 201211150821 && $1 <= 201211150824'
(It finds the first twelve-digit integer that starts with 20, and prints the line if that integer is within your range of interest. If it doesn't find any such integer, it skips the line. You can tweak the regex to be more restrictive about valid months and hours and so on.)
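For example, a stricter variant (a sketch, untested) that also constrains the month, day, hour, and minute fields:
perl -wne 'print if m/(?<!\d)(20\d\d(?:0[1-9]|1[0-2])(?:0[1-9]|[12]\d|3[01])(?:[01]\d|2[0-3])[0-5]\d)(?!\d)/
&& $1 >= 201211150821 && $1 <= 201211150824'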

You are looking for the somewhat obscure 'csplit' (context split) command:
csplit file '%201211150821%' '/201211150824/'
will split out the lines between the first and second regexps from file: the %...% pattern suppresses everything before the first timestamp, and the resulting piece xx00 holds the lines from there up to (but not including) the line matching the second regexp. It is likely to be the fastest and shortest option if your files are sorted on the dates (you said you were grepping logs).
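A minimal usage sketch (app.log is a hypothetical file name; csplit writes the pieces to files named xx00, xx01, ...):
csplit app.log '%201211150821%' '/201211150824/'
cat xx00    # the lines from the first timestamp up to (not including) the line matching the second
rm xx0*     # clean up the split pieces when done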

Bash + coreutils' expr only:
export cmp=201211150823
cat file.txt | while read line; do
    range=$(expr match "$line" '.*\[\(.*\)\].*')
    [ "x$range" = "x" ] && continue
    start=${range:0:12}
    end=${range:15:12}
    [ $start -le $cmp -a $end -ge $cmp ] && echo "match: $line"
done
cmp is your comparison value.

I wrote a specific tool for similar searches - http://code.google.com/p/bsearch/
In your example, the usage will be:
$ bsearch -p '$[YYYYMMDDhhmm]' -t 201211150821 -t 201211150824 logfile

Related

Count occurrences of string in logfile in last 5 minutes in bash

I have a log file containing lines like this:
[Oct 13 09:28:15] WARNING.... Today is good day...
[Oct 13 09:28:15] Info... Tommorow will be...
[Oct 13 09:28:15] WARNING.... Yesterday was...
I need shell command to count occurrences of certain string in last 5 minutes.
I have tried this:
$(awk -v d1="$(date --date="-5 min" "+%b %_d %H:%M:%S")" -v d2="$(date "+%b %_d %H:%M:%S")" '$0 > d1 && $0 < d2 || $0 ~ d2' "$1" |
grep -ci "$2")
and calling the script like this: sh ${script} /var/log/message "day", but it does not work.
Your immediate problem is that you are comparing dates in random string format. To Awk (and your computer generally) a string which starts with "Dec" is "less than" a string which starts with "Oct" (this is what date +%b produces). Generally, you would want both your log files and your programs to use dates in some standard computer-readable format, usually ISO 8601.
Unfortunately, though, sometimes you can't control that, and need to adapt your code accordingly. The solution then is to normalize the dates before comparing them.
awk -v d1="$(date -d '-5 min' +'%F-%T')" -v d2="$(date +'%F-%T')" '
BEGIN { split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec", m, ":")
        for (i=1; i<=12; ++i) mon["[" m[i]] = i }
{ timestamp = substr(d1, 1, 5) sprintf("%02d-%02d-", mon[$1], $2) substr($3, 1, 8) }
timestamp > d1 && timestamp <= d2' "$1" | grep -ci "$2"
This will not work across New Year boundaries, but should hopefully at least help get you started in the right direction. (I suppose you could check if the year in d2 is different, and then check if the month in $1 is January, and then add 1 to the year from d1 in timestamp; but I leave this as an exercise for the desperate. This still won't work across longer periods of time, but the OP calls for a maximum period of 5 minutes, so the log can't straddle multiple years. Or if it does, you have a more fundamental problem.)
Perhaps note as well that date -d is a GNU extension which is not portable to POSIX (so this will not work e.g. on MacOS without modifications).
(Also, for production use, I would refactor the grep -ci into the Awk script; see also useless use of grep.)
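A rough sketch of that refactoring (untested; it keeps the same positional arguments, log file as $1 and search string as $2, and approximates grep -i by lowercasing both sides of the match):
awk -v d1="$(date -d '-5 min' +'%F-%T')" -v d2="$(date +'%F-%T')" -v pat="$2" '
BEGIN { split("Jan:Feb:Mar:Apr:May:Jun:Jul:Aug:Sep:Oct:Nov:Dec", m, ":")
        for (i=1; i<=12; ++i) mon["[" m[i]] = i }
{ timestamp = substr(d1, 1, 5) sprintf("%02d-%02d-", mon[$1], $2) substr($3, 1, 8) }
timestamp > d1 && timestamp <= d2 && tolower($0) ~ tolower(pat) { n++ }
END { print n+0 }' "$1"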
Finally, the command substitution $(...) around your entire command line is wrong; this would instruct your shell to use the output from Awk and run it as a command.

How can I truncate a line of text longer than a given length?

How would you go about removing everything after x number of characters? For example, cut everything after 15 characters and add ... to it.
This is an example sentence should turn into This is an exam...
GNU coreutils' head can count characters rather than lines:
head -c 15 <<<'This is an example sentence'
Although consider that head -c only deals with bytes, so this is incompatible with multi-byte characters like the UTF-8 umlaut ü.
Bash built-in string indexing works:
str='This is an example sentence'
echo "${str:0:15}"
Output:
This is an exam
And finally something that works with ksh, dash, zsh…:
printf '%.15s\n' 'This is an example sentence'
Even programmatically:
n=15
printf '%.*s\n' $n 'This is an example sentence'
If you are using Bash, you can directly assign the output of printf to a variable and save a sub-shell call with:
trim_length=15
full_string='This is an example sentence'
printf -v trimmed_string '%.*s' $trim_length "$full_string"
Use sed:
echo 'some long string value' | sed 's/\(.\{15\}\).*/\1.../'
Output:
some long strin...
This solution has the advantage that short strings do not get the ... tail added:
echo 'short string' | sed 's/\(.\{15\}\).*/\1.../'
Output:
short string
So it's one solution for all sized outputs.
Use cut:
echo "This is an example sentence" | cut -c1-15
This is an exam
This selects characters 1-15 rather than bytes (the -c option is meant to handle multi-byte characters, although in current GNU coreutils it still behaves like -b); cf. cut(1):
-b, --bytes=LIST
select only these bytes
-c, --characters=LIST
select only these characters
Awk can also accomplish this:
$ echo 'some long string value' | awk '{print substr($0, 1, 15) "..."}'
some long strin...
In awk, $0 is the current line. substr($0, 1, 15) extracts characters 1 through 15 from $0. The trailing "..." appends three dots.
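If you want the awk version, like the sed one above, to leave short strings without the ... tail, one possible tweak:
$ echo 'short string' | awk '{ print (length($0) > 15 ? substr($0, 1, 15) "..." : $0) }'
short string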
Todd actually has a good answer; however, I chose to change it up a little to make the function better and remove unnecessary parts :p
trim() {
    if (( "${#1}" > "$2" )); then
        echo "${1:0:$2}$3"
    else
        echo "$1"
    fi
}
In this version, the text appended to longer strings is chosen by the third argument, the maximum length by the second argument, and the text itself by the first argument.
No need for variables :)
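For example (the arguments are the text, the maximum length, and the suffix to append when truncating):
trim 'This is an example sentence' 15 '...'   # This is an exam...
trim 'short string' 15 '...'                  # short string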
Using Bash Shell Expansions (No External Commands)
If you don't care about shell portability, you can do this entirely within Bash using a number of different shell expansions in the printf builtin. This avoids shelling out to external commands. For example:
trim () {
    local str ellipsis_utf8
    local -i maxlen
    # use explaining variables; avoid magic numbers
    str="$*"
    maxlen="15"
    ellipsis_utf8=$'\u2026'
    # only truncate $str when longer than $maxlen
    if (( "${#str}" > "$maxlen" )); then
        printf "%s%s\n" "${str:0:$maxlen}" "${ellipsis_utf8}"
    else
        printf "%s\n" "$str"
    fi
}
trim "This is an example sentence." # This is an exam…
trim "Short sentence." # Short sentence.
trim "-n Flag-like strings." # Flag-like strin…
trim "With interstitial -E flag." # With interstiti…
You can also loop through an entire file this way. Given a file containing the same sentences above (one per line), you can use the read builtin's default REPLY variable as follows:
while read; do
    trim "$REPLY"
done < example.txt
Whether or not this approach is faster or easier to read is debatable, but it's 100% Bash and executes without forks or subshells.

Filter a very large, numerically sorted CSV file based on a minimum/maximum value using Linux?

I'm trying to output lines of a CSV file which is quite large. In the past I have tried different things and ultimately come to find that Linux's command line interface (sed, awk, grep, etc) is the fastest way to handle these types of files.
I have a CSV file like this:
1,rand1,rand2
4,randx,randy,
6,randz,randq,
...
1001,randy,randi,
1030,rando,randn,
1030,randz,randc,
1036,randp,randu
...
1230994,randm,randn,
1230995,randz,randl,
1231869,rande,randf
Although the first column is numerically increasing, the space between each number varies randomly. I need to be able to output all lines that have a value between X and Y in their first column.
Something like:
sed ./csv -min --col1 1000 -max --col1 1400
which would output all the lines that have a first column value between 1000 and 1400.
The lines are different enough that in a >5 GB file there might only be ~5 duplicates, so it wouldn't be a big deal if it counted the duplicates only once -- but it would be a big deal if it threw an error due to a duplicate line.
I may not know whether particular line values exist (e.g. 1000 is a rough estimate and should not be assumed to exist as a first column value).
Optimizations matter when it comes to large files; the following awk command:
- is parameterized (uses variables to define the range boundaries),
- performs only a single comparison for records that come before the range, and
- exits as soon as the last record of interest has been found.
awk -F, -v from=1000 -v to=1400 '$1 < from { next } $1 > to { exit } 1' ./csv
Because awk performs numerical comparison (with input fields that look like numbers), the range boundaries needn't match field values precisely.
You can easily do this with awk, though it won't take full advantage of the file being sorted:
awk -F , '$1 > 1400 { exit(0); } $1 >= 1000 { print }' file.csv
If you know that the numbers are increasing and unique, you can use addresses like this:
sed '/^1000,/,/^1400,/!d' infile.csv
which does not print any line that is outside of the lines between the one that matches /^1000,/ and the one that matches /^1400,/.
Notice that this doesn't work if 1000 or 1400 don't actually exist as values, i.e., it wouldn't print anything at all in that case.
In any case, as demonstrated by the answers by mklement0 and that other guy, awk is the better choice here.
Here's a bash-version of the script:
#! /bin/bash
fname="$1"
start_nr="$2"
end_nr="$3"
while IFS=, read -r nr rest || [[ -n $nr && -n $rest ]]; do
    if (( $nr < $start_nr )); then continue;
    elif (( $nr > $end_nr )); then break; fi
    printf "%s,%s\n" "$nr" "$rest"
done < "$fname"
You would then call it as script.sh foo.csv 1000 2000
The script will start printing when the number is large enough and then immediately stops when the number gets above the limit.

Shell Extract Text Before Digits in a String

I've found several examples of extractions before a single character and examples of extracting numbers, but I haven't found anything about extracting characters before numbers.
My question:
Some of the strings I have look like this:
NUC320 Syllabus Template - 8wk
SLA School Template - UL
CJ101 Syllabus Template - 8wk
TECH201 Syllabus Template - 8wk
Test Clone ID17
In cases where the string doesn't contain the data I want, I need it to be skipped. The desired output would be:
NUC-320
CJ-101
TECH-201
SLA School Template - UL & Test Clone ID17 would be skipped.
I imagine the process being something to the effect of (a rough sketch follows this list):
Extract text before " "
Condition - Check for digits in the string
Extract text before digits and assign it to a variable x
Extract digits and assign to a variable y
Concatenate $x"-"$y and assign to another variable z
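A minimal Bash sketch of those steps (the sample line is taken from above; the assumption that the ID is letters immediately followed by digits is mine):
line='CJ101 Syllabus Template - 8wk'
token=${line%% *}                                 # 1. text before the first space -> CJ101
if [[ $token =~ ^([A-Za-z]+)([0-9]+)$ ]]; then    # 2. only proceed if it ends in digits
    x=${BASH_REMATCH[1]}                          # 3. text before the digits -> CJ
    y=${BASH_REMATCH[2]}                          # 4. the digits -> 101
    z="$x-$y"                                     # 5. concatenate with a dash -> CJ-101
    echo "$z"
fi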
More information:
The strings are extracted from a line in a couple thousand text docs using a loop. They will be used to append to a hyperlink and rename a file during the loop.
Edit:
#!/bin/sh
# my files are named 1.txt through 9999.txt; i both
# increments the loop and sets the filename to be searched
i=1
while [ $i -lt 10000 ]
do
    x=$(head -n 31 $i.txt | tail -1 | cut -c 7-)
    if [ ! -z "$x" -a "$x" != " " ]; then
        # I'd like to insert the hyperlink with the output on the
        # same line (1.txt;cj101 Syllabus Template - 8wk;www.link.com/cj101)
        echo "$i.txt;$x" >> syllabus.txt
    # else
    #     rm $i.txt
    fi
    i=`expr $i + 1`
    sleep .1
done
A sed command for printing lines starting with capital letters followed by digits; it also adds a - between them:
sed -n 's/^\([A-Z]\+\)\([0-9]\+\) .*/\1-\2/p' input
Gives:
NUC-320
CJ-101
TECH-201
A POSIX-compliant awk solution:
awk '{ if (match($1, /[0-9]+$/)) print substr($1, 1, RSTART-1) "-" substr($1, RSTART) }' \
file |
while IFS= read -r token; do
# Process token here (append to hyperlink, ...)
echo "[$token]"
done
awk is used to extract the reformatted tokens of interest, which are then processed in a shell while loop.
match($1, /[0-9]+$/) matches the 1st whitespace-separated field ($1) against extended regex [0-9]+$, i.e., matches only if the field ends in one or more digits.
substr($1, 1, RSTART-1) "-" substr($1, RSTART) joins the part before the first digit with the run of digits using -, via the special RSTART variable, which indicates the 1-based character position where the most recent match() invocation matched.
awk '$1 ~/[0-9]/{sub(/...$/,"-&",$1);print $1}' file
NUC-320
CJ-101
TECH-201
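A slightly more general variant of the same idea (a sketch, not tied to the ID being exactly three digits; it inserts the dash before a trailing run of digits):
awk '$1 ~ /[0-9]+$/ { sub(/[0-9]+$/, "-&", $1); print $1 }' file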

Use grep to remove words from dictionary whose roots are already present

I am trying to write a random passphrase generator. I have a dictionary with a bunch of words and I would like to remove words whose root is already in the dictionary, so that a dictionary that looks like:
ablaze
able
abler
ablest
abloom
ably
would end up with only
ablaze
able
abloom
ably
because abler and ablest contain able which was previously used.
I would prefer to do this with grep so that I can learn more about how that works. I am capable of writing a program in c or python that will do this.
If the list is sorted so that shorter strings always precede longer strings, you might be able to get fairly good performance out of a simple Awk script.
awk '$1~r && p in k { next } { k[$1]++; print; r= "^" $1; p=$1 }' words
If the current word matches the prefix regex r (defined in a moment) and the prefix p (ditto) is in the list of seen keys, skip. Otherwise, add the current word to the prefix keys, print the current line, create a regex which matches the current word at beginning of line (this is now the prefix regex r) and also remember the prefix string in p.
If all the similar strings are always adjacent (as they would be if you sort the file lexically), you could do away with k and p entirely too, I guess.
awk 'NR>1 && $1~r { next } { print; r="^" $1 }' words
This is based on the assumption that the input file is sorted. In that case, when looking up each word, all matches after the first one can be safely skipped (because they will correspond to "the same word with a different suffix").
#!/bin/bash
input=$1
while read -r word ; do
    # ignore short words
    if [ ${#word} -lt 4 ] ; then continue; fi
    # output this line
    echo $word
    # skip next lines that start with $word as prefix
    skip=$(grep -c -E -e "^${word}" $input)
    for ((i=1; i<$skip; i++)) ; do read -r word ; done
done <$input
Call as ./filter.sh input > output
This takes somewhat less than 2 minutes on all words of 4 or more letters found in my /usr/share/dict/american-english dictionary. The algorithm is O(n²), and therefore unsuitable for large files.
However, you can speed things up a lot if you avoid using grep at all. This version takes only 4 seconds to do the job (because it does not need to scan the whole file almost once per word). Since it performs a single pass over the input, its complexity is O(n):
#!/bin/bash
input=$1
while true ; do
    # use already-read word, or fail if cannot read new
    if [ -n "$next" ] ; then word=$next; unset next;
    elif ! read -r word ; then break; fi
    # ignore short words
    if [ ${#word} -lt 4 ] ; then continue; fi
    # output this word
    echo ${word}
    # skip words that start with $word as prefix
    while read -r next ; do
        unique=${next#$word}
        if [ ${#next} -eq ${#unique} ] ; then break; fi
    done
done <$input
Supposing you want to start with words that share the same first four (up to ten) letters, you could do something like this:
cp /usr/share/dict/words words
str="...."
for num in 4 5 6 7 8 9 10; do
    for word in `grep "^$str$" words`; do
        grep -v "^$word." words > words.tmp
        mv words.tmp words
    done
    str=".$str"
done
You wouldn't want to start with 1 letter, unless 'a' is not in your dictionary, etc.
Try this BASH script:
a=()
while read -r w; do
    [[ ${#a[@]} -eq 0 ]] && a+=("$w") && continue
    grep -qvf <(printf "^%s\n" "${a[@]}") <<< "$w" && a+=("$w")
done < file
printf "%s\n" "${a[@]}"
ablaze
able
abloom
ably
It seems like you want to group adverbs together. Some adverbs, including those that can also be adjectives, use er and est to form comparisons:
able, abler, ablest
fast, faster, fastest
soon, sooner, soonest
easy, easier, easiest
This procedure is known as stemming in natural language processing, and it can be achieved using a stemmer or lemmatizer. There are popular implementations in Python's NLTK module, but the problem is not completely solved. The best out-of-the-box stemmer is the Snowball stemmer, but it does not stem adverbs to their root.
import nltk
initial = '''
ablaze
able
abler
ablest
abloom
ably
fast
faster
fastest
'''.splitlines()
snowball = nltk.stem.snowball.SnowballStemmer("english")
stemmed = [snowball.stem(word) for word in initial]
print set(stemmed)
output...
set(['', u'abli', u'faster', u'abl', u'fast', u'abler', u'abloom', u'ablest', u'fastest', u'ablaz'])
The other option is to use a regex stemmer, but this has its own difficulties, I'm afraid.
patterns = "er$|est$"
regex_stemmer = nltk.stem.RegexpStemmer(patterns, 4)
stemmed = [regex_stemmer.stem(word) for word in initial]
print set(stemmed)
output...
set(['', 'abloom', 'able', 'abl', 'fast', 'ably', 'ablaze'])
If you just want to weed out some of the words, this gross command will work. Note that it'll throw out some legit words like best, but it's dead simple. It assumes you have a test.txt file with one word per line
egrep -v "er$|est$" test.txt >> results.txt
egrep is the same as grep -E. -v means throw out matching lines. x|y means if x or y match, and $ means end of line, so you'd be looking for words that end in er or est.
