Grep command not working within a bash script - linux

I have a file testtns.txt which has numbers like below:
123
456
I am then passing the path of the input files, like "/var/www/batchfiles/files/test.csv". The test.csv file has the following records:
1,123,On its way to warehouse,20230131
2,456,On its way to warehouse,20230201
3,777,Pickedup,20230201
4,888,Pickedup,20230202
I have created the script printgrep.bash to read the numbers from the file testtns.txt and then grep the CSV files in the folder "/var/www/batchfiles/files/". I am running the script using the command ./printgrep.bash /var/www/batchfiles/files/* and expecting the output to be in output.txt as follows:
1,123,On its way to warehouse,20230131
2,456,On its way to warehouse,20230201
However, when I run the above command, output.txt is empty and doesn't have any results. If I run the grep command on its own, it does return results as expected.
Can someone let me know why the grep command is not working in the below script printgrep.bash:
#!/bin/bash
cat testtns.txt | while read line
do
grep -i "^ $line" $1 >> output.txt
done
I even tried the below grep command, but it still didn't work:
#!/bin/bash
cat testtns.txt | while read line
do
grep -i "$line" $1 >> output.txt
done
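For what it's worth, here is a minimal sketch of how the loop could be written so that it anchors the number to the second CSV field and searches every file passed on the command line; the field anchoring and the use of "$@" are my assumptions, not part of the original script:
#!/bin/bash
# Read each number from testtns.txt and grep it out of all CSV files given as arguments.
while read -r line; do
    # "^[^,]*,$line," anchors the number to the second comma-separated field
    grep -i "^[^,]*,$line," "$@" >> output.txt
done < testtns.txt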

Related

How do I execute some line in a file as a command in the terminal?

I have written some commands row by row in a file, and I want to execute one of them through grep and a pipe;
for example:
1. There is a file a.txt, whose content is like below:
echo "hello world"
ls -l
2. Then I want to execute the first line in my terminal, so I want it like this:
cat a.txt | grep echo | execute the output of previous commands
so that I can finally execute the command, which is the first line of a.txt.
(I could not find any answer for this, so I came here for some help.)
You can either pipe the command to bash (or any other shell) to execute it:
sed -n 1p a.txt | bash
or you can use eval with command substitution:
eval $(head -n1 a.txt)
BTW, this also shows you two other ways to extract the line from the file.
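If you specifically want to go through grep, as in your example, the same idea applies, assuming the pattern matches only the one line you want to run:
grep '^echo' a.txt | bash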

How to capture a file name when using unzip -c and doing multiple greps

I am running the following command:
for file in 2017120[1-9]/54100_*.zip; do unzip -c "$file" | grep "3613825" | grep '3418665' ; done
This does a great job of pulling the data that matches my grep parameters, but I can't figure out how to capture which file the results came from.
I have tried adding grep -H but the result comes back with (standard input).
How can I capture the file name?
When I need to do something like this I just add an echo of the file name to the for loop like this:
for file in 2017120[1-9]/54100_*.zip; do echo $file; unzip -c "$file" | grep "3613825" | grep '3418665' ; done
This prints out the list of files, and the grep line that matches will print immediately after the file that the match is in, like this:
file_1
file_2
file_3
matching line
file_4
file_5
another matching line
file_6
...
Thus I know the matching lines occurred in file_3 and file_5.
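As an alternative sketch (assuming GNU grep), the --label option together with -H makes grep attach a chosen name to matches read from standard input, so the file name appears on the matching line itself instead of (standard input):
for file in 2017120[1-9]/54100_*.zip; do unzip -c "$file" | grep "3613825" | grep -H --label="$file" '3418665' ; done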

Grep No such file or directory Error In Bash Script, Should I Insert Wait Command?

I am running a script that has been working fine. However, yesterday, I got a couple errors. These errors are after several loops of the script:
sed: can't read file3.txt: No such file or directory
grep: file3.txt: No such file or directory
grep: file3.txt: No such file or directory
sed: can't read file3.txt: No such file or directory
grep: file3.txt: No such file or directory
Keep in mind, these errors do not happen consistently. They occur once in a while, somewhere near this part of the script. file3.txt is the file not being found:
cat file1.txt | while read LINE; do grep -m 1 $LINE file2.txt >> file3.txt; done
sed -i 's/string//g' file3.txt
grep 'string' file3.txt | cut -d '|' -f1-2 > file4.txt
grep -v 'string' file3.txt | cut -d '|' -f1-2 >> file5.txt
sed -i 's/string//' file3.txt
grep -Fvf file3.txt file1.txt > file6.txt
Now, I'm thinking that since file3.txt is being appended to, and later operated on by sed, sometimes the next command starts too soon and can't find the file? Should I put a wait command in between?
I have looked up many pages with this error, but was unable to find anything:
cat file_name | grep "something" results "cat: grep: No such file or directory" in shell scripting
Pipe multiple commands to a single command with no EOF signal wait
grep command works in command line, but not in bash script: get no such file or directory erro
https://serverfault.com/questions/169539/sed-cant-find-a-file-that-obviously-exists
"No such file or directory" but it exists
If you think that putting a wait or sleep command will help, please let me know. Or, if you think there's a better solution, that would be great too. I'm running in a Cygwin terminal. Any insight is greatly appreciated.
Instead of redirecting to file3.txt inside the while loop, redirect the whole loop. Then the file will be created even if the loop never runs because the input file is empty.
while read LINE; do
grep -m 1 $LINE file2.txt
done < file1.txt > file3.txt
With the redirection inside the loop, if file1.txt is ever empty then file3.txt won't be created, which is why the later commands sometimes can't find it.
Also, grep -m 1 $LINE file2.txt will cause problems if the line contains special characters (a space is the simplest example).
Let's assume that the $LINE variable contains more than one word separated by spaces: hello world.
Now the command looks like this: grep -m 1 hello world file2.txt - grep will interpret it as: search for hello in a file named world and a file named file2.txt in the current folder.
Using "$LINE" instead of $LINE will lead you to a whole different scenario.
Look at the difference between the following two:
grep -m 1 $LINE file2.txt
grep -m 1 "$LINE" file2.txt

UNIX shell script to run a list of grep commands from a file and getting result in a single delimited file

I am a beginner in UNIX programming, looking for a way to automate my work.
I want to run a list of grep commands and get the output of all the grep commands in a single delimited file.
I am using the following bash script, but it's not working.
Mockup sh file:
#!/bin/sh
grep -l abcd123
grep -l abcd124
grep -l abcd125
And while running, I used the following command:
$ ./Mockup.sh > output.txt
Is it the right command?
How can I get both the grep command and output in the output file?
How can I delimit the output after each command and result?
How can I get both the grep command and output in the output file
You can use bash -v (verbose) to print each command on stderr before it is executed; the script's output will, as usual, be available on stdout:
bash -v ./Mockup.sh > output.txt 2>&1
cat output.txt
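If you would rather not change how the script is invoked, the same effect can be had from inside the script with set -v (a sketch, not part of the original answer; *.txt stands in for whatever files you are actually searching):
#!/bin/sh
set -v                   # print each command to stderr before it runs
grep -l abcd123 *.txt
grep -l abcd124 *.txt
grep -l abcd125 *.txt
Running ./Mockup.sh > output.txt 2>&1 then captures both the commands and their results.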
A suitable shell script could be
#!/bin/sh
grep -l 'abcd123\|abcd124\|abcd125' "$@"
provided that the filenames you pass on the invocation of the script are "well behaved", that is, no whitespace in them. (Edit: using the "$@" expansion takes care of generic whitespace in the filenames, tx to triplee for his/her comment)
This kind of invocation (with alternative matching strings, as per the \| syntax) has the added advantage that you have exactly one occurrence of a filename in your final list, because grep -l prints once the filename as soon as it finds the first occurrence of one of the three strings in a file.
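An invocation could then look something like this (the file names are just examples):
./Mockup.sh *.txt > output.txt
Each file that contains at least one of the three strings appears exactly once in output.txt.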
Addendum about "$@"
% ff () { for i in "$@" ; do printf "[%s]\n" "$i" ; done ; }
% # NB "a s d" below is indeed "a SPACE s TAB d"
% ff "a s d" " ert " '345
345'
[a s d]
[ ert ]
[345
345]
%
cat myscript.sh
########################
#!/bin/bash
echo "Trying to find the file contenting the below string, relace your string with below string"
grep "string" /path/to/folder/* -R -l
########################
Save the above file and run it as below:
sh myscript.sh > output.txt
Once the command prompt returns, you can check output.txt for the required output.
Another approach, less efficient, that tries to address the OP's question:
How can I get both the grep command and output in the output file?
% cat Mockup
#!/bin/sh
grep -o -e string1 -e string2 -e string3 "$#" 2> /dev/null | sort -t: -k2 | uniq
Output: (mocked up as well)
% sh Mockup file{01..99}
file01:string1
file17:string1
file44:string1
file33:string2
file44:string2
file48:string2
%
Looking at the output from the POV of a consumer, one foresees problems with search strings and/or file names containing colons... oh well, that's maybe another question.

'No such file or directory' error in sh, but the file exists?

I'm running a command to retrieve the location of a logfile from a .sh bash script:
rig=`forever list | grep 'server.*root.*\.log' | awk '{print $8}'`
echoing it prints:
echo $rig
/root/.forever/1cFY.log
But when I try to read the file (which exists) like so:
less $rig
I get:
/root/.forever/1cFY.log: No such file or directory
However, if I manually enter the file name outside my .sh script, it works.
Any ideas?
Looks like you have grep configured to output color. Just drop the grep and use awk:
rig=$(forever list | awk '/server.*root.*\.log/{print $8}')
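Alternatively, if you want to keep the grep, forcing color off in the pipeline should also work (assuming GNU grep, whose --color=never option suppresses the escape codes that would otherwise end up in the captured path):
rig=$(forever list | grep --color=never 'server.*root.*\.log' | awk '{print $8}')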
