How to grep an external txt file in a bash script? - linux

I would like to run a script that can be accessed from terminals other than my own, like bash <(curl -s http://domain.com/scripts/hello.sh), but I need this script to grep a txt file that is also located on that server, for example domain.com/scripts/stuff.txt. What would be the best method?

Download a text file, and pipe it into grep:
curl http://domain.com/scripts/stuff.txt | grep foo

Try:
curl -s http://domain.com/scripts/stuff.txt | grep "$Hi"
You can redirect the output to a local text file, like so:
curl -s http://domain.com/scripts/stuff.txt | grep "$Hi" > stuff_that_starts_with_hi.txt

You can also use wget
wget -qO - http://domain.com/scripts/stuff.txt | grep dat
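Putting the pieces together, here is a minimal sketch of what hello.sh itself could look like, grepping a companion file on the same server (domain.com, stuff.txt, and the default pattern foo are just the example names from the question):
#!/bin/bash
# hello.sh -- run remotely via: bash <(curl -s http://domain.com/scripts/hello.sh)
# Pattern to search for; defaults to "foo" when no argument is given.
pattern="${1:-foo}"
# Fetch the companion file from the same server and grep it on the fly.
curl -s http://domain.com/scripts/stuff.txt | grep -- "$pattern"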

Related

How do I make my bash script automatically turn into a terminal command on download? [duplicate]

Say I have a file at the URL http://mywebsite.example/myscript.txt that contains a script:
#!/bin/bash
echo "Hello, world!"
read -p "What is your name? " name
echo "Hello, ${name}!"
And I'd like to run this script without first saving it to a file. How do I do this?
Now, I've seen the syntax:
bash < <(curl -s http://mywebsite.example/myscript.txt)
But this doesn't seem to work as it would if I saved it to a file and then executed it. For example, readline doesn't work, and the output is just:
$ bash < <(curl -s http://mywebsite.example/myscript.txt)
Hello, world!
Similarly, I've tried:
curl -s http://mywebsite.example/myscript.txt | bash -s --
With the same results.
Originally I had a solution like:
timestamp=`date +%Y%m%d%H%M%S`
curl -s http://mywebsite.example/myscript.txt -o /tmp/.myscript.${timestamp}.tmp
bash /tmp/.myscript.${timestamp}.tmp
rm -f /tmp/.myscript.${timestamp}.tmp
But this seems sloppy, and I'd like a more elegant solution.
I'm aware of the security issues regarding running a shell script from a URL, but let's ignore all of that for right now.
source <(curl -s http://mywebsite.example/myscript.txt)
ought to do it. Alternatively, leave off the initial redirection on yours, which is redirecting standard input; bash takes a filename to execute just fine without redirection, and the <(command) syntax provides a path.
bash <(curl -s http://mywebsite.example/myscript.txt)
It may be clearer if you look at the output of echo <(cat /dev/null)
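On a typical Linux system, process substitution expands to a path under /dev/fd (the exact descriptor number varies):
$ echo <(cat /dev/null)
/dev/fd/63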
This is the way to execute a remote script, passing it some arguments (arg1 arg2):
curl -s http://server/path/script.sh | bash /dev/stdin arg1 arg2
For bash, Bourne shell and fish:
curl -s http://server/path/script.sh | bash -s arg1 arg2
Flag "-s" makes shell read from stdin.
Use:
curl -s -L URL_TO_SCRIPT_HERE | bash
For example:
curl -s -L http://bit.ly/10hA8iC | bash
Using wget, which is usually part of default system installation:
bash <(wget -qO- http://mywebsite.example/myscript.txt)
You can also do this:
wget -O - https://raw.github.com/luismartingil/commands/master/101_remote2local_wireshark.sh | bash
The best way to do it is
curl http://domain/path/to/script.sh | bash -s arg1 arg2
which is a slight change of the answer by @user77115
You can use curl and send it to bash like this:
bash <(curl -s http://mywebsite.example/myscript.txt)
I often find that the following is enough:
curl -s http://mywebsite.example/myscript.txt | sh
But on an old system (kernel 2.4) it ran into problems. I tried many alternatives, and only the following works:
curl -s http://mywebsite.example/myscript.txt -o a.sh && sh a.sh && rm -f a.sh
Examples
$ curl -s someurl | sh
Starting to insert crontab
sh: _name}.sh: command not found
sh: line 208: syntax error near unexpected token `then'
sh: line 208: ` -eq 0 ]]; then'
$
The problem may be caused by a slow network, or by a bash version too old to handle a slow network gracefully.
However, the following solves the problem
$ curl -s someurl -o a.sh && sh a.sh && rm -f a.sh
Starting to insert crontab
Insert crontab entry is ok.
Insert crontab is done.
okay
$
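A slightly safer variant of the same download-then-run idea uses mktemp, so concurrent runs don't collide on the same file name (a sketch, assuming mktemp is available):
tmp=$(mktemp /tmp/myscript.XXXXXX) &&
  curl -s http://mywebsite.example/myscript.txt -o "$tmp" &&
  sh "$tmp"
rm -f "$tmp"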
Also:
curl -sL https://.... | sudo bash -
Just combining amra and user77115's answers:
wget -qO- https://raw.githubusercontent.com/lingtalfi/TheScientist/master/_bb_autoload/bbstart.sh | bash -s -- -v -v
It executes the remote script bbstart.sh, passing it the -v -v options.
In some unattended scripts I use the following command:
sh -c "$(curl -fsSL <URL>)"
I recommend avoiding executing scripts directly from URLs. You should be sure the URL is safe and check the content of the script before executing it; you can use a SHA256 checksum to validate the file before executing.
Instead of executing the script directly, first download it and then execute it:
SOURCE='https://gist.githubusercontent.com/cci-emciftci/123123/raw/123123/sample.sh'
curl $SOURCE -o ./my_sample.sh
chmod +x my_sample.sh
./my_sample.sh
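A minimal sketch of that checksum step, assuming you already know the expected digest (EXPECTED_SHA256 below is a placeholder, not a real value):
SOURCE='https://gist.githubusercontent.com/cci-emciftci/123123/raw/123123/sample.sh'
EXPECTED_SHA256='0000000000000000000000000000000000000000000000000000000000000000'
curl -s $SOURCE -o ./my_sample.sh
# sha256sum -c reads "digest  filename" pairs from stdin and verifies them
if echo "${EXPECTED_SHA256}  ./my_sample.sh" | sha256sum -c --quiet -; then
  chmod +x my_sample.sh
  ./my_sample.sh
else
  echo "checksum mismatch, refusing to run" >&2
fi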
This way is good and conventional:
17:04:59#itqx|~
qx>source <(curl -Ls http://192.168.80.154/cent74/just4Test) Lord Jesus Loves YOU
Remote script test...
Param size: 4
---------
17:19:31#node7|/var/www/html/cent74
arch>cat just4Test
echo Remote script test...
echo Param size: $#
If you want the script run using the current shell, regardless of what it is, use:
${SHELL:-sh} -c "$(wget -qO - http://mywebsite.example/myscript.txt)"
if you have wget, or:
${SHELL:-sh} -c "$(curl -Ls http://mywebsite.example/myscript.txt)"
if you have curl.
This command will still work if the script is interactive, i.e., it asks the user for input.
Note: OpenWRT has a wget clone but not curl, by default.
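As a quick sanity check, ${SHELL:-sh} falls back to sh only when SHELL is unset or empty; on a normal login session it resolves to your current shell:
$ echo "${SHELL:-sh}"
/bin/bash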
curl http://your.url.here/script.txt | bash
actual example:
juan@juan-MS-7808:~$ curl -s https://raw.githubusercontent.com/JPHACKER2k18/markwe/master/testapp.sh | bash
Oh, wow im alive
juan@juan-MS-7808:~$

Add some text to each line of a txt file and pass it to wget

I have a file called filename.txt that contains file names with extensions.
I want to add a URL prefix like www.abc.com/ before each line
and pass it to wget like:
cat filename.txt | xargs -n 1 -P 16 wget -q -P /location
Thanks
Sounds like you want to prefix each line in filename.txt with a string:
sed -e 's#^#www.abc.com/#' filename.txt
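If you want to feed that straight into wget, pipe the sed output into the xargs invocation from the question (www.abc.com/ is the example prefix):
sed -e 's#^#http://www.abc.com/#' filename.txt | xargs -n 1 -P 16 wget -q -P /location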
I got my answer; thanks to all for your valuable responses:
awk '{print "https://<URL>" $0;}' filename.txt | xargs -n 1 -P 16 wget -q -P /location

Need response time and download time for URLs, and to write a shell script for the same

I have used this command to get the response time:
curl -s -w "%{time_total}\n" -o /dev/null https://www.ziffi.com/suggestadoc/js/ds.ziffi.https.v308.js
and I also need the download time of the js file linked below, so I used wget to download it, but I get multi-field output when I just need the download time from it:
$ wget --output-document=/dev/null https://www.ziffi.com/suggestadoc/js/ds.ziffi.https.v307.js
Please suggest.
I think what you are looking for is this:
wget --output-document=/dev/null https://www.ziffi.com/suggestadoc/js/ds.ziffi.https.v307.js 2>&1 >/dev/null | grep = | awk '{print $5}' | sed 's/^.*\=//'
Explanation:
2>&1 >/dev/null | --> Makes sure stderr gets piped instead of stdout
grep = --> selects the line that contains the '=' symbol
sed 's/^.*\=//' --> deletes everything from the start of the line to the '=' symbol
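Alternatively, curl can report both numbers itself; a sketch using its built-in timing variables, where time_starttransfer is the time to first byte, so the difference from time_total approximates the pure download time:
curl -s -o /dev/null \
  -w "TTFB: %{time_starttransfer}s  total: %{time_total}s\n" \
  https://www.ziffi.com/suggestadoc/js/ds.ziffi.https.v307.js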

How do you use wget to download the most up-to-date file on a site?

Hello, I am trying to use wget to download the most up-to-date McAfee patch, and I am having issues singling out the .tar file. This is what I have:
wget -q -O - ftp://ftp.mcafee.com/pub/antivirus/datfiles/4.x/ | grep -o -m 2 "avvdat-[^\']*"
However, when I run the above command it gives me:
avvdat-8065.tar">avvdat-8065.tar</a> (95191040 bytes)
avvdat-8066.tar">avvdat-8066.tar</a> (95385600 bytes)
when I need it to be just the most recent .tar file between the <a> </a> tags, which in this case would be avvdat-8066.tar. Can someone please help me out with grepping the correct .tar? I am not too good with regex or sed.
Try this,
wget $(wget -q -O - ftp://ftp.mcafee.com/pub/antivirus/datfiles/4.x/ | grep -Eo "ftp://[^\"\]+" | sort | tail -n1)
I'd suggest modifying your grep regex so it retrieves only the file name, then using sort to sort the results and tail to discard all but the last one.
wget -q -O - ftp://ftp.mcafee.com/pub/antivirus/datfiles/4.x/ | grep -o -m 2 "avvdat-[^\'\"]*" | sort | tail -1
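Note that plain sort compares lexically, which breaks if the number ever changes width (avvdat-9999 vs avvdat-10000); if your sort supports it, a version sort is safer:
wget -q -O - ftp://ftp.mcafee.com/pub/antivirus/datfiles/4.x/ | grep -o "avvdat-[0-9]*\.tar" | sort -V | tail -1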

How do I pipe or redirect the output of curl -v?

For some reason the output always gets printed to the terminal, regardless of whether I redirect it via 2> or > or |. Is there a way to get around this? Why is this happening?
Add the -s (silent) option to remove the progress meter, then redirect stderr to stdout to get the verbose output on the same fd as the response body:
curl -vs google.com 2>&1 | less
Your URL probably has ampersands in it. I had this problem, too, and I realized that my URL was full of ampersands (from CGI variables being passed) and so everything was getting sent to background in a weird way and thus not redirecting properly. If you put quotes around the URL it will fix it.
The answer above didn't work for me; what eventually did was this syntax:
curl https://${URL} &> /dev/stdout | tee -a ${LOG}
tee puts the output on the screen, but also appends it to my log.
If you need the output in a file you can use a redirect:
curl https://vi.stackexchange.com/ -vs >curl-output.txt 2>&1
Please be sure not to flip the >curl-output.txt and 2>&1, which will not work due to bash's redirection behavior.
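For example, written in the other order the 2>&1 is processed first, while stdout still points at the terminal, so the verbose output never reaches the file:
# wrong: stderr is duplicated onto the terminal; only the response body lands in the file
curl https://vi.stackexchange.com/ -vs 2>&1 >curl-output.txt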
Just my 2 cents.
The below command should do the trick, as answered earlier
curl -vs google.com 2>&1
However, if you need to get the output to a file,
curl -vs google.com > out.txt 2>&1
should work.
I found the same thing: curl by itself would print to STDOUT, but could not be piped into another program.
At first, I thought I had solved it by using xargs to echo the output first:
curl -s ... <url> | xargs -0 echo | ...
But then, as pointed out in the comments, it also works without the xargs part, so -s (silent mode) is the key to preventing extraneous progress output to STDOUT:
curl -s ... <url> | perl -ne 'print $1 if /<sometag>([^<]+)/'
The above example grabs the simple <sometag> content (containing no embedded tags) from the XML output of the curl statement.
The following worked for me:
Put your curl statement in a script named abc.sh
Now run:
sh abc.sh 1>stdout_output 2>stderr_output
You will get your curl's results in stdout_output and the progress info in stderr_output.
This simple example shows how to capture curl output, and use it in a bash script
test.sh
function main
{
    # -v verbose, -s no progress meter; merge stderr into stdout
    \curl -vs 'http://google.com' 2>&1
    # note: add -o /tmp/ignore.png if you want to discard binary output by saving it to a file
}

# capture the output of curl in a variable
OUT=$(main)

# search the output for something using grep
echo
echo "$OUT" | grep 302
echo
echo "$OUT" | grep title
Solution = curl -vs google.com 2>&1 | less
BUT if you redirect the output to a file and output still appears on the screen, the URL response may contain a newline character \n that confuses your shell.
To avoid this, put everything in a variable:
result=$(curl -v . . . . )
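and then write the variable out explicitly if you need it in a file, for example:
result=$(curl -vs google.com 2>&1)
printf '%s\n' "$result" > curl-output.txt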
