Removing lines matching a pattern - linux

I want to search for patterns in a file and remove the lines containing those patterns. To do this, I am using:
originalLogFile='sample.log'
outputFile='3.txt'
temp=$originalLogFile
while read line
do
    echo "Removing"
    echo $line
    grep -v "$line" $temp > $outputFile
    temp=$outputFile
done < $whiteListOfErrors
This works fine for the first iteration. On the second run, it throws:
grep: input file ‘3.txt’ is also the output
Any solutions or alternate methods?

The following should be equivalent:
grep -v -f "$whiteListOfErrors" "$originalLogFile" > "$outputFile"
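If the whitelist entries are literal strings rather than regular expressions, adding -F (fixed-string matching) avoids surprises with regex metacharacters; a variant of the same command, assuming the same file variables:
grep -v -F -f "$whiteListOfErrors" "$originalLogFile" > "$outputFile"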

originalLogFile='sample.log'
outputFile='3.txt'
tmpfile='tmp.txt'
temp=$originalLogFile
while read line
do
    echo "Removing"
    echo $line
    grep -v "$line" $temp > $outputFile
    cp $outputFile $tmpfile
    temp=$tmpfile
done < $whiteListOfErrors

Use sed for this:
sed '/.*pattern.*/d' file
If you have multiple patterns, you may use the -e option:
sed -e '/.*pattern1.*/d' -e '/.*pattern2.*/d' file
If you have GNU sed (typical on Linux), the -i option is convenient because it modifies the original file in place instead of writing to a new file. (But handle it with care so you do not clobber your original.)
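As a small example of the in-place form (GNU sed; the .bak suffix is arbitrary), you can keep a backup while deleting matching lines:
sed -i.bak '/pattern/d' file    # edits 'file' in place, keeping the original as 'file.bak'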

I used this to fix the problem:
while read line
do
    echo "Removing"
    echo $line
    grep -v "$line" $temp | tee $outputFile
    temp=$outputFile
done < $falseFailures

A trivial solution might be to work with alternating files, e.g.:
idx=0
while ...
let next='(idx+1) % 2'
grep ... $file.$idx > $file.$next
idx=$next
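Fleshed out with the variable names from the question above (my assumption; the original sketch left these parts elided), the alternating-file idea might look like this:
cp "$originalLogFile" "$outputFile.0"
idx=0
while read -r line
do
    next=$(( (idx + 1) % 2 ))
    grep -v "$line" "$outputFile.$idx" > "$outputFile.$next"
    idx=$next
done < "$whiteListOfErrors"
mv "$outputFile.$idx" "$outputFile"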
A more elegant approach might be to build one large grep command:
args=( )
while read line; do args=( "${args[@]}" -e "$line" ); done < $whiteList
grep -v "${args[@]}" $origFile

Related

replacement in a file only in a fixed line

I am writing a shell script in which I will read a file and modify it.
There will be occurrences of some string "ABC_1" on multiple lines.
I need to replace it with "XYZ_1" only when "OPQ_3" is also present on the line; otherwise the line should not be modified.
Please help: how can I do the replacement if I read the file line by line?
for FILE in $FILES
do
    echo $FILE
    while read line
    do
        if grep -n "OPQ_3" $line
        then
            sed -i 's/ABC_1/XYZ_2/'
        fi
    done < $FILE
done
You can use this sed:
sed -i '/OPQ_3\|OPQ_4/s/ABC_1/XYZ_2/' file
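To illustrate how the address restricts the substitution, here is a quick run on made-up input (GNU sed assumed for the \| alternation; OPQ_4 comes from this answer, not the question):
printf 'ABC_1 foo\nABC_1 OPQ_3 bar\n' | sed '/OPQ_3\|OPQ_4/s/ABC_1/XYZ_2/'
# ABC_1 foo          <- left alone: no OPQ_3 on the line
# XYZ_2 OPQ_3 bar    <- substituted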
Anubhava has the better answer. Here's how you'd write it in bash:
for file in $FILES; do
    echo "$file"
    tmpfile=$(mktemp)
    while IFS= read -r line; do
        [[ $line == *OPQ_3* ]] && line=${line/ABC_1/XYZ_2}
        echo "$line"
    done < "$file" > "$tmpfile"
    mv "$tmpfile" "$file"
done
Note that IFS= read -r line is the only way to read a line from stdin exactly, without losing any whitespace or backslash characters.
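A small demonstration on a made-up input line, showing what gets lost without them:
printf '  two  spaces and a \\backslash\n' | { read line; echo "[$line]"; }
# [two  spaces and a backslash]        <- leading whitespace stripped, backslash eaten
printf '  two  spaces and a \\backslash\n' | { IFS= read -r line; echo "[$line]"; }
# [  two  spaces and a \backslash]     <- preserved exactly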

grep filenames from an exclude log

I have a problem with my bash script. I want to exclude from processing the files that are listed in exclude.log. After a file is processed, it is appended to the exclude log.
for I in `ls $1 | grep ./exclude.log -v`
do
    echo "Processing ...."
    echo $I >> ./exclude.log
done
$I is not assigned a value.
Also, your grep is not formulated correctly.
You possibly want:
LIST=$( grep -v -f /path/to/exclude.log * )
for I in $LIST
do
    echo "Processing ...."
    echo $I >> /path/to/exclude.log
done
Make sure you don't have any empty lines in exclude.log
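A quick illustration of why an empty pattern line is a problem (the file names are made up):
printf 'a.txt\nb.txt\n' > files.txt
printf '\n' > exclude.log               # a single empty line
grep -v -f exclude.log files.txt        # prints nothing: the empty pattern matches every line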
You can use this while loop:
while read -r l; do
    echo "$l";
done < <(fgrep -v -wf exclude.log <(printf "%s\n" "$1"/*))

How to clean csv by another csv while in a 'for' loop?

I'm not a Linux expert, and usually in this situation PHP would be much more suitable... but due to the circumstances I ended up writing it in Bash :)
I have the following .sh which runs over all .csv files in the current folder and executes a bunch of commands.
The goal: cleaning email lists in .csv files (not actually .csv, just a .txt file in practice).
for file in $(find . -name "*.csv" ); do
    echo "====================================================" >> db_purge_log.txt
    echo "$file" >> db_purge_log.txt
    echo "----------------------------------------------------" >> db_purge_log.txt
    echo "Contacts BEFORE purge:" >> db_purge_log.txt
    wc -l $file | cut -d " " -f1 >> db_purge_log.txt
    echo " " >> db_purge_log.txt
    cat $file | egrep -v "xxx|yyy|zzz" | grep -v -E -i '([0-z])\1{2,}' | uniq | sort -u > tmp_file
    mv tmp_file $file ;
    echo "Contacts AFTER purge:" >> db_purge_log.txt
    wc -l $file | cut -d " " -f1 >> db_purge_log.txt
done
Now the trouble is:
I want to add a command, somewhere in the middle of this loop, to use another .csv file as a suppression list, meaning that every line found as a perfect match in that suppression list should be deleted from $file.
At this point my brain is stuck and I can't think of a solution. To be honest, I didn't manage to use sort or grep on 2 different files and export to a 3rd file without completely eliminating the lines duplicated across both files, so I ended up with much less data.
Any help would be much appreciated!
Clean up
Before adding functionality to the script, the existing script needs to be cleaned up — a lot.
I/O Redirection — Don't Repeat Yourself
When I see wall-to-wall I/O redirections like that, I want to cry — that isn't how you do it! You have three options to avoid all that:
for file in $(find . -name "*.csv" )
do
    echo "===================================================="
    echo "$file"
    echo "----------------------------------------------------"
    echo "Contacts BEFORE purge:"
    wc -l $file | cut -d " " -f1
    echo " "
    cat $file | egrep -v "xxx|yyy|zzz" | grep -v -E -i '([0-z])\1{2,}' | uniq | sort -u > tmp_file
    mv tmp_file $file ;
    echo "Contacts AFTER purge:"
    wc -l $file | cut -d " " -f1
done >> db_purge_log.txt
Or:
{
    for file in $(find . -name "*.csv" )
    do
        echo "===================================================="
        echo "$file"
        echo "----------------------------------------------------"
        echo "Contacts BEFORE purge:"
        wc -l $file | cut -d " " -f1
        echo " "
        cat $file | egrep -v "xxx|yyy|zzz" | grep -v -E -i '([0-z])\1{2,}' | uniq | sort -u > tmp_file
        mv tmp_file $file ;
        echo "Contacts AFTER purge:"
        wc -l $file | cut -d " " -f1
    done
} >> db_purge_log.txt
Or even:
exec >>db_purge_log.txt   # By default, standard output will now go to db_purge_log.txt
for file in $(find . -name "*.csv" )
do
    echo "===================================================="
    echo "$file"
    echo "----------------------------------------------------"
    echo "Contacts BEFORE purge:"
    wc -l $file | cut -d " " -f1
    echo " "
    cat $file | egrep -v "xxx|yyy|zzz" | grep -v -E -i '([0-z])\1{2,}' | uniq | sort -u > tmp_file
    mv tmp_file $file ;
    echo "Contacts AFTER purge:"
    wc -l $file | cut -d " " -f1
done
The first form is adequate for this script which has a single loop in it to provide I/O redirection to. The second form, using { and } would handle more general sequences of commands. The third form, using exec, is 'permanent'; you can't recover the original standard output, whereas with the { ... } form you can have different sections of the script writing to different places.
One other advantage of all these variations is that you can trivially send errors to the same place that you're sending standard output if that's what you desire. For example:
exec >>db_purge_log.txt 2>&1
Other issues
Suppressing file name from wc — instead of:
wc -l $file | cut -d " " -f1
use:
wc -l < $file
UUOC — Useless use of cat — instead of:
cat $file | egrep -v "xxx|yyy|zzz" | grep -v -E -i '([0-z])\1{2,}' | uniq | sort -u > tmp_file
use:
egrep -v "xxx|yyy|zzz" $file | grep -v -E -i '([0-z])\1{2,}' | uniq | sort -u > tmp_file
UUOU — Useless use of uniq
It is not at all clear why you need uniq and sort -u; in context, sort -u is sufficient, so:
egrep -v "xxx|yyy|zzz" $file | grep -v -E -i '([0-z])\1{2,}' | sort -u > tmp_file
UUOG — Useless use of grep
egrep is equivalent to grep -E, and both are capable of handling multiple regular expressions. The second expression matches whatever the parenthesized expression matches, repeated 3 or more times (we really only need to match three times), so in fact the second expression will do the job of the first. And the [0-z] match is dubious: it probably matches sundry punctuation characters as well as the digits and the upper- and lower-case letters, but you're already doing a case-insensitive search because of the -i, so we can regularize all that to:
grep -Eiv '([0-9a-z]){3}' $file | sort -u > tmp_file
File names with spaces
The code is not going to handle file names with spaces, tabs or newlines because of the for file in $(find ...) notation. It probably isn't necessary to deal with that now — be aware of the issue.
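For reference, a common way to handle such names safely is find -print0 with a while read -d '' loop (a sketch assuming bash and GNU find; not needed for the rest of this answer):
find . -name '*.csv' -print0 |
while IFS= read -r -d '' file
do
    echo "$file"    # safe even if the name contains spaces, tabs or newlines
done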
Final clean up
for file in $(find . -name "*.csv" )
do
    echo "===================================================="
    echo "$file"
    echo "----------------------------------------------------"
    echo "Contacts BEFORE purge:"
    wc -l < $file
    echo " "
    grep -Evi '([0-9a-z]){3}' $file | sort -u > tmp_file
    mv tmp_file $file
    echo "Contacts AFTER purge:"
    wc -l < $file
done >> db_purge_log.txt
Add the extra functionality
I want to add a command, somewhere in the middle of this loop, to use another .csv file as suppression list — meaning that every line found as perfect match in that suppression list should be deleted from $file.
Since we're already sorting the input files ($file), we can sort the suppression file (call it suppfile='suppressions.txt') too, if it is not already sorted. Given that, we use comm to eliminate the lines that appear in both $file and $suppfile. We're interested in the lines that appear only in $file (or, as will be the case here, in the edited and sorted version of the file), so we want to suppress the common entries and the entries from $suppfile that do not appear in $file. The comm -23 - "$suppfile" command reads the edited, sorted file from standard input (-) and leaves out the entries from "$suppfile".
suppfile='suppressions.txt'  # Must be in sorted order
for file in $(find . -name "*.csv" )
do
    echo "===================================================="
    echo "$file"
    echo "----------------------------------------------------"
    echo "Contacts BEFORE purge:"
    wc -l < "$file"
    echo " "
    grep -Evi '([0-9a-z]){3}' "$file" | sort -u | comm -23 - "$suppfile" > tmp_file
    mv tmp_file "$file"
    echo "Contacts AFTER purge:"
    wc -l < "$file"
done >> db_purge_log.txt
If the suppression file is not in sorted order, simply sort it into a temporary file first. Beware of using the .csv suffix on the suppression file in the current directory; the loop will catch the file and empty it, because every line in the suppression file matches a line in the suppression file, which is not helpful for any files processed after it.
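Here is a tiny illustration of comm -23 together with that pre-sort step, using throwaway file names:
printf 'b\n' | sort -u > supp.sorted          # pre-sort the suppression list if needed
printf 'a\nb\nc\n' > data.sorted              # already-sorted data to be purged
comm -23 data.sorted supp.sorted              # prints a and c: the lines unique to the first file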
Oops — I over-simplified the grep regex. It should (probably) be:
grep -Evi '([0-9a-z])\1{2}' $file
The difference is considerable. My original rewrite will look for any three adjacent digits or letters (e.g. 123 or abz); the revision (actually very similar to one of the original commands) looks for a character from [0-9A-Za-z] followed by two occurrences of the same character (e.g. 111 or aaa, but not 123 or abz).
If perchance the alternatives xxx|yyy|zzz were really not 3 repeated characters, you might need two invocations of grep in sequence.
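A quick way to see the difference on made-up input (shown without -v so you can see which lines each pattern matches; with -v those are the lines that get removed):
printf '123\nabz\n111\naaa\n' | grep -Ei '([0-9a-z]){3}'     # matches all four lines
printf '123\nabz\n111\naaa\n' | grep -Ei '([0-9a-z])\1{2}'   # matches only 111 and aaa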
If I understand you correctly, assuming a recent 'nix, grep should do most of the trick for you. The command grep -vf filterfile input.csv will output the lines in input.csv that do NOT match any regular expression found in filterfile.
A couple of other comments ... uniq needs the input sorted in order to remove dups, so you might want the sort before it in the pipe (unless your input data is sorted).
Or, if the input is sorted to start with, uniq alone will omit the duplicates.
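A small demonstration of that point with throwaway data:
printf 'a\nb\na\n' | uniq            # prints a, b, a - the duplicates are not adjacent, so uniq keeps them
printf 'a\nb\na\n' | sort | uniq     # prints a, b   - sorting first makes the duplicates adjacent
printf 'a\nb\na\n' | sort -u         # prints a, b   - same result in one step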
Small suggestion -- you might add a #!/bin/bash as the first line in order to ensure that the script is run by bash rather than the user's login shell (it might not be bash).
HTH.
b

Shell script for remote copy and then processing the file

The script below works fine. But when I try to add a command to remote-copy a file and then assign the variable FILENAME to the file received from the remote copy, the while loop doesn't work. I am quite new to scripting, so I'm not able to find out what I'm missing. Please help!
#!/bin/sh
#SCRIPT: File processing
#PURPOSE: Process a file line by line with redirected while-read loop.
SSID=$1
ASID=$2
##rcp server0:/oracle/v11//dbs/${SSID}_ora_dir.lst /users/global/rahul/${ASID}_clone_dir.lst
##FILENAME=/users/global/rahul/${ASID}_clone_dir.lst
count=0
while read LINE
do
    echo $LINE | sed -e "s/${SSID}/${ASID}/g"
    count=`expr $count + 1`
done < $FILENAME
echo -e "\nTotal $count Lines read"
grep -v -e "pattern3" -e "pattern5" -e "pattern6" -e "pattern7" -e "pattern8" -e "pattern9" -e "pattern10" -e "pattern11" -e "pattern12" ${ASID}_.lst > test_remote.test
When you say, "the while loop doesn't work", if you get an error message you should include that in your question to give us a clue.
Are you sure the rcp command is successful? The file /users/global/rahul/${ASID}_clone_dir.lst exists after the rcp is completed?
By the way, your while loop is inefficient. This should be equivalent:
sed -e "s/${SSID}/${ASID}/g" < "$FILENAME"
count=$(wc -l "$FILENAME" | awk '{print $1}')
echo -e "\nTotal $count Lines read"

Find and highlight text in linux command line

I am looking for a Linux command that searches for a string in a text file
and highlights (colors) it at every occurrence in the file, WITHOUT omitting text lines (like grep does).
I wrote this handy little script. It could probably be expanded to handle args better
#!/bin/bash
if [ "$1" == "" ]; then
    echo "Usage: hl PATTERN [FILE]..."
elif [ "$2" == "" ]; then
    grep -E --color "$1|$" /dev/stdin
else
    grep -E --color "$1|$" "$2"
fi
It's useful for stuff like highlighting users running processes:
ps -ef | hl "alice|bob"
Try
tail -f yourfile.log | egrep --color 'DEBUG|'
where DEBUG is the text you want to highlight.
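The trick in both of these is an alternative that matches every line ($ in the script above, the empty pattern here), so nothing is filtered out while the real pattern still gets coloured. The same idea works with plain grep on a file (pattern and file name are just placeholders):
grep -E --color 'ERROR|' yourfile.log    # prints every line, highlighting ERROR where it occurs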
command | grep -iz -e "keyword1" -e "keyword2" (omit the -e switches if just searching for a single word; -i ignores case, -z treats the input as a single file)
Alternatively, while reading files:
grep -iz -e "keyword1" -e "keyword2" 'filename'
OR
command | grep -A 99999 -B 99999 -i -e "keyword1" -e "keyword2" (omit the -e switches if just searching for a single word; -i ignores case, -A and -B give the number of lines before/after the keyword to display)
Alternatively, while reading files:
grep -A 99999 -B 99999 -i -e "keyword1" -e "keyword2" 'filename'
The ack command with the --passthru switch:
ack --passthru pattern path/to/file
I take it you meant "without omitting text lines" (instead of emitting)...
I know of no such command, but you can use a script such as this (this one is a simple solution that takes the filename (without spaces) as the first argument and the search string (also without spaces) as the second):
#!/usr/bin/env bash
ifs_store=$IFS;
IFS=$'\n';
for line in $(cat $1);
do
    if [ $(echo $line | grep -c $2) -eq 0 ]; then
        echo $line;
    else
        echo $line | grep --color=always $2;
    fi
done
IFS=$ifs_store
Save it as, for instance, colorcat.sh, set permissions appropriately (so you can execute it) and call it as:
colorcat.sh filename searchstring
I had a requirement like this recently and hacked up a small program to do exactly this. Link
Usage: ./highlight test.txt '^foo' 'bar$'
Note that this is very rough, but could be made into a general tool with some polishing.
Using dwdiff, you can output differences with colors and line numbers:
echo "Hello world # $(date)" > file1.txt
echo "Hello world # $(date)" > file2.txt
dwdiff -c -C 0 -L file1.txt file2.txt
