How do I execute some line in a file as a command in the terminal? - linux

I have written some commands, one per line, in a file, and I want to execute them by selecting them with grep and a pipe;
for example:
1. There is a file a.txt whose content looks like this:
echo "hello world"
ls -l
2. Then I want to execute the first line in my terminal, so I want something like this:
cat a.txt | grep echo | execute the output of the previous commands
so that I can finally execute the command that is on the first line of a.txt.
(I could not find any answer to this, so I came here for help.)

You can either pipe the command to bash (or any other shell) to execute it:
sed -n 1p a.txt | bash
or you can use eval with command substitution:
eval $(head -n1 a.txt)
BTW, these also show you two other ways to extract the line from the file.
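If you want to stick with grep to select the line, as in the question, the same idea applies. Here is a minimal sketch, with the caveat that every line matching the pattern will be executed (add | head -n 1 to keep only the first match):
grep '^echo' a.txt | bash
or, with eval and command substitution:
eval "$(grep '^echo' a.txt | head -n 1)"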

Related

Grep command not working within a bash script

I have a file testtns.txt which has numbers like below:
123
456
I am then passing the folder path as input, containing files like "/var/www/batchfiles/files/test.csv". The file test.csv has the following records:
1,123,On its way to warehouse,20230131
2,456,On its way to warehouse,20230201
3,777,Pickedup,20230201
4,888,Pickedup,20230202
I have created the script printgrep.bash to read the numbers from the file testtns.txt and then grep the csv files in the folder "/var/www/batchfiles/files/". I am running the script using the command ./printgrep.bash /var/www/batchfiles/files/* and expecting the output in output.txt to be as follows:
1,123,On its way to warehouse,20230131
2,456,On its way to warehouse,20230201
However, when I run the above command, output.txt is empty and doesn't have any results. If I run the grep command on its own, though, it does return results as expected.
Can someone let me know why the grep command is not working in the script printgrep.bash below:
#!/bin/bash
cat testtns.txt | while read line
do
grep -i "^ $line" $1 >> output.txt
done
I even tried the grep command below, but it still didn't work:
#!/bin/bash
cat testtns.txt | while read line
do
grep -i "$line" $1 >> output.txt
done
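No answer is included here, but two plausible causes are visible in the script itself (treat this as a hedged guess, not a confirmed fix): the pattern "^ $line" anchors the number to the start of the line followed by a literal space, whereas in test.csv the number is the second comma-separated field, and the script only greps "$1", i.e. the first file of the expanded glob. A sketch of a corrected script under those assumptions:
#!/bin/bash
# match the number as its own comma-delimited field, in every file passed on the command line
while read -r line
do
grep -i ",$line," "$@" >> output.txt
done < testtns.txt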

Using Xargs max-procs with multiple arguments from a file

I have a script that is getting me the results I want. I want to improve the performance of the script.
My script takes argument from file file1.txt.
The contents are below:
table1
table2
table3
and so on
Now, when I use the while statement like below, the script runs in sequential order.
while statement is below:
while IFS=',' read -r a; do import.sh "$a"; done < file1.txt
Now, when I use xargs with --max-procs in bash, the scripts run in parallel based on the number given to --max-procs.
xargs statement is below:
xargs --max-procs 10 -n 1 sh import.sh < file1.txt
Now I have another script
This script takes arguments from file file2.txt.
The contents are below:
table1,db1
table2,db2
table3,db3
and so on
When I use the while statement, the script works fine.
while IFS=',' read -r a b; do test.sh "$a" "$b"; done < file2.txt
But when I use the xargs statement, the script gives me a usage error.
xargs statement is below.
xargs --max-procs 10 -n 1 sh test.sh < file2.txt
The error statement is below:
Usage : test.sh input_file
Why is this happening?
How can I rectify this?
Your second script, test.sh, expects two arguments, but xargs is feeding it only one (one word, in this case the complete line). You can fix it by first converting commas , to newlines (with a simple sed script) and then passing two arguments (now two lines) per call to test.sh (with -n2):
sed s/,/\\n/g file2.txt | xargs --max-procs 10 -n2 sh test.sh
Note that xargs supports a custom delimiter via the -d option, and you could use it if each line in file2.txt ended with a , (but then you would probably have to strip the newline prefixed to each first field).
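An equivalent sketch that avoids the sed step, assuming GNU xargs as above, converts the commas to newlines with tr and again hands test.sh two arguments per invocation:
tr ',' '\n' < file2.txt | xargs --max-procs 10 -n 2 sh test.sh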

Make a variable containing all digits from the stdout of the command run directly before it

I am trying to make a bash shell script that launches some jobs on a queuing system. After a job is launched, the launch command prints the job-id to stdout, which I would like to 'trap' and then use in the next command. The job-id digits are the only digits in the stdout message.
#!/bin/bash
./some_function
>>> this is some stdout text and the job number is 1234...
and then I would like to get to:
echo $job_id
>>> 1234
My current method is using a tee command to pipe the original command's stdout to a tmp.txt file and then making the variable by grepping that file with a regex filter...something like:
echo 'pretend this is some dummy output from a function 1234' 2>&1 | tee tmp.txt
job_id=`cat tmp.txt | grep -o '[0-9]'`
echo $job_id
>>> pretend this is some dummy output from a function 1234
>>> 1 2 3 4
...but I get the feeling this is not really the most elegant or 'standard' way of doing this. What is a better way to do this?
And for bonus points, how do I remove the spaces from the grep+regex output?
You can still use grep -o, but with an extended (-E) pattern that matches the whole run of digits at the end of the line:
jobid=$(echo 'pretend this is some dummy output from a function 1234' 2>&1 |
tee tmp.txt | grep -Eo '[0-9]+$')
echo "$jobid"
1234
Something like this should work:
$ JOBID=`./some_function | sed 's/[^0-9]*\([0-9]*\)[^0-9]*/\1/'`
$ echo $JOBID
1234
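A variant that skips the temporary file entirely, sketched here on the assumption that the launcher is ./some_function as in the question, captures the output with command substitution and extracts the digits with bash's built-in regex matching:
out=$(./some_function 2>&1)
[[ $out =~ [0-9]+ ]] && job_id=${BASH_REMATCH[0]}
echo "$job_id"
BASH_REMATCH[0] holds the whole match, so no temporary file or external grep/sed is needed.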

Pass a list of files to perl script via pipe

I am having a problem where my Perl script fails when input is piped to it, but it works fine when I just list all the file names individually.
For reference, the input of the Perl script is read with while(<>).
Example:
script.pl file1.tag file2.tag file3.tag
runs fine.
But the following all fail.
find ./*.tag | chomp | script.pl
ls -l *.tag | perl -pe 's/\n/ /g' | script.pl
find ./*.tag | perl -pe 's/\n/ /g' | script.pl
I also tested dumping the list into a text file and catting that into the Perl script:
cat files.text | script.pl
All of them fail the same way. It is as if the script is passed no input arguments and the program just finishes.
From perldoc perlop:
The null filehandle <> is special [...] Input from <> comes either from standard input, or from each file listed on the command line. Here's how it works: the first time <> is evaluated, the @ARGV array is checked, and if it is empty, $ARGV[0] is set to -, which when opened gives you standard input. The @ARGV array is then processed as a list of filenames.
You're not passing any command line arguments to your Perl scripts, so everything you pipe into them is read from STDIN instead of being treated as filenames:
$ echo foo > foo.txt
$ echo bar > bar.txt
$ ls | perl -e 'print "<$_>\n" while <>'
<bar.txt
>
<foo.txt
>
Notice that the files foo.txt and bar.txt are not actually read; all we get is the file names. If you want the files to be opened and read, you have to pass them as command line arguments or explicitly set @ARGV:
$ perl -e 'print "<$_>\n" while <>' *
<bar
>
<foo
>
If you have a large number of files, like you're likely to get from find, you should use xargs as Dyno Hongjun Fu suggested.
However, you don't need find, ls, cat, or your Perl one-liner to run your script on all the .tag files in the current directory. Simply do:
script.pl *.tag
You need xargs, e.g.
find ./ -type f -name "*.tag" | xargs -i script.pl {}
Also, what is chomp supposed to do here? chomp is a Perl function, not a shell command, so the pipeline find ./*.tag | chomp | script.pl cannot work.
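If the file names might contain spaces, a safer sketch of the same xargs approach (assuming GNU find and xargs) is to use null-delimited output, which also lets xargs pass many files per invocation:
find . -type f -name '*.tag' -print0 | xargs -0 script.pl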

UNIX shell script to run a list of grep commands from a file and getting result in a single delimited file

I am a beginner in Unix programming and am looking for a way to automate my work.
I want to run a list of grep commands and get the output of all of them in a single delimited file.
I am using the following bash script, but it's not working.
Mockup sh file:
#!/bin/sh
grep -l abcd123
grep -l abcd124
grep -l abcd125
and while running it I used the following command:
$ ./Mockup.sh > output.txt
Is it the right command?
How can I get both the grep command and output in the output file?
How can I delimit the output after each command and its result?
How can I get both the grep command and output in the output file
You can use bash -v (verbose) to print each command on stderr before it is executed, and its output will, as usual, be available on stdout:
bash -v ./Mockup.sh > output.txt 2>&1
cat output.txt
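A closely related option, mentioned here as an aside, is bash -x (or set -x inside the script), which prints each command to stderr with a + prefix after expansions have been applied, so you can also see what each pattern expanded to:
bash -x ./Mockup.sh > output.txt 2>&1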
A suitable shell script could be
#!/bin/sh
grep -l 'abcd123\|abcd124\|abcd125' "$@"
provided that the filenames you pass on invocation of the script are "well behaved", that is, have no whitespace in them. (Edit: using the "$@" expansion takes care of generic whitespace in the filenames; thanks to triplee for the comment.)
This kind of invocation (with alternative matching strings, as per the \| syntax) has the added advantage that each filename occurs exactly once in your final list, because grep -l prints the filename once, as soon as it finds the first occurrence of any of the three strings in a file.
Addendum about "$@"
% ff () { for i in "$@" ; do printf "[%s]\n" "$i" ; done ; }
% # NB "a s d" below is indeed "a SPACE s TAB d"
% ff "a s d" " ert " '345
345'
[a s d]
[ ert ]
[345
345]
%
cat myscript.sh
########################
#!/bin/bash
echo "Trying to find the file contenting the below string, relace your string with below string"
grep "string" /path/to/folder/* -R -l
########################
save above file and run it as below
sh myscript.sh > output.txt
once the command prmpt get return you can check the output.txt for require output.
Another approach, less efficient, that tries to address the OP question
How can I get both the grep command and output in the output file?
% cat Mockup
#!/bin/sh
grep -o -e string1 -e string2 -e string3 "$@" 2> /dev/null | sort -t: -k2 | uniq
Output: (mocked up as well)
% sh Mockup file{01..99}
file01:string1
file17:string1
file44:string1
file33:string2
file44:string2
file48:string2
%
Looking at the output from the point of view of a consumer, one foresees problems with search strings and/or file names containing colons... oh well, that's another question, maybe.
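To address the remaining part of the question, printing each grep command next to its own result with a delimiter between blocks, a minimal sketch reusing the patterns from the mockup (the strings and the delimiter line are placeholders to adapt) could be:
#!/bin/sh
# one block per pattern: a header naming the command, then its matches
for pattern in abcd123 abcd124 abcd125
do
printf '===== grep -l %s =====\n' "$pattern"
grep -l "$pattern" "$@"
done
Invoked as ./Mockup.sh followed by whatever files you want to search, redirected to output.txt, this gives one clearly delimited section per grep command.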
