How can I use WGET to get only status info and save it somewhere? - linux

Can I use wget to get, say, the status 200 OK and save that status somewhere? If not, how can I do that on Ubuntu Linux?
Thanks!

With curl you can
curl -L -o /dev/null -s -w "%{http_code}\n" http://google.com >> status.txt

You use --save-headers to add the headers to the output, send the output to stdout using -O -, discard the error stream using 2>/dev/null, and keep only the status line using grep HTTP/.
You can then output that into a file using >status_file
$ wget --save-headers -O - http://google.com/ 2>/dev/null | grep HTTP/ > status_file

The question asks that the output of the wget command be stored somewhere. As another alternative, the following example stores the HTTP status of the wget execution in a shell variable (wget_status), which is then displayed in the console using echo.
$ wget_status=$(wget --server-response ${URL} 2>&1 | awk '/^  HTTP/{print $2}')
$ echo $wget_status
200
After wget has run, the result can then be acted on by testing the value of the wget_status variable.
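For example, a minimal sketch building on the same extraction (the tail -n 1 and -O /dev/null here are additions: the first keeps only the final status when wget follows redirects, the second discards the downloaded body):
URL='http://google.com/'
wget_status=$(wget -O /dev/null --server-response "${URL}" 2>&1 | awk '/^  HTTP/{print $2}' | tail -n 1)
if [ "${wget_status}" = "200" ]; then
    echo "OK (${wget_status})"
else
    echo "request failed (status: ${wget_status})" >&2
fi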
For more information consult the following link as a reference:
https://www.unix.com/shell-programming-and-scripting/148595-capture-http-response-code-wget.html
The tests were executed using Cloud Shell on a Linux system:
Linux cs-335831867014-default 5.10.90+ #1 SMP Wed Mar 23 09:10:07 UTC 2022 x86_64 GNU/Linux

Related

The script sometimes doesn't run after wget

The script sometimes doesn't run after wget. Perhaps it is necessary to wait for the completion of wget?
#!/usr/bin/env bash
set -Eeuo pipefail
# Installing tor-browser
echo -en "\033[1;33m Installing tor-browser... \033[0m \n"
URL='https://tor.eff.org/download/' # Official mirror https://www.torproject.org/download/, may be blocked
LINK=$(wget -qO- $URL | grep -oP -m 1 'href="\K/dist.+?ALL.tar.xz')
URL='https://tor.eff.org'${LINK}
curl --location $URL | tar xJ --extract --verbose --preserve-permissions
sudo mv tor-browser /opt
sudo chown -R $USER /opt/tor-browser
cd /opt/tor-browser
./start-tor-browser.desktop --register-app
There are pitfalls associated with set -e (aka set -o errexit). See BashFAQ/105 (Why doesn't set -e (or set -o errexit, or trap ERR) do what I expected?).
If you decide to use set -e despite the problems then it's a very good idea to set up an ERR trap to show what has happened, and use set -E (aka set -o errtrace) so it fires in functions and subshells etc. A basic ERR trap can be set up with
trap 'echo "ERROR: ERR trap: line $LINENO" >&2' ERR
This avoids the classic set -e failure mode: the program stopping suddenly, at an unknown place, and for no obvious reason.
Under set -e, the script stops on any error.
set -Eeuo pipefail
#     ^ the 'e' (errexit)
Maybe the site is sometimes unavailable, or the fetched page doesn't match the expression grep is searching for.
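A sketch of one way to surface that instead of letting set -e abort silently, reusing the script's URL and grep pattern (the error message is illustrative):
# || true stops set -e from aborting here, so the failure can be reported below
LINK=$(wget -qO- "$URL" | grep -oP -m 1 'href="\K/dist.+?ALL.tar.xz') || true
if [ -z "$LINK" ]; then
    echo "ERROR: no download link found at $URL (site unavailable or page changed?)" >&2
    exit 1
fi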
You are doing
wget -qO- $URL
According to the wget man page:
-q
--quiet
Turn off Wget's output.
This is counterproductive for finding the objective cause of the malfunction. By default wget is verbose and writes information to stderr; if you wish to store that in a file, you might redirect stderr to some file. Consider the following simple example:
wget -O - http://www.example.com 2>>wget_out.txt
It downloads the Example Domain page and writes its content to standard output (-), while stderr is appended to a file named wget_out.txt. Therefore, if you run that command, e.g., 3 times, you will have information from 3 runs in wget_out.txt.

Wait until curl command has finished

I'm using curl to grab a list of subscribers. Once this has been downloaded the rest of my script will process the file.
How could I make the script wait until the file has been downloaded and error if it failed?
curl "http://mydomain/api/v1/subscribers" -u
'user:pass' | json_pp >>
new.json
Thanks
As noted in the comment, curl will not return until the request is completed (or has failed). I suspect you are looking for a way to identify errors in the curl call, which are currently getting lost. Consider the following:
If you just need the error status, you can use the bash pipefail option, set -o pipefail. This will allow you to check for a failure in curl:
set -o pipefail
if curl ... | json_pp >> new.json ; then
    # All good
else
    # Something wrong.
fi
Also, you might want to save the "raw" response before trying to pretty-print it, either using a temporary file or using tee:
set -o pipefail
if curl ... | tee raw.json | json_pp >> new.json ; then
    # All good
else
    # Something wrong - look into raw.json
fi
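One caveat worth adding: by default curl exits 0 even when the server answers with an HTTP error status, so pipefail alone won't catch a 404 or 500. The --fail (-f) flag makes curl return a non-zero exit code on such responses; a sketch combining it with the check above (URL and credentials are the question's placeholders):
set -o pipefail
if curl --fail -s "http://mydomain/api/v1/subscribers" -u 'user:pass' | tee raw.json | json_pp >> new.json ; then
    echo "subscribers saved to new.json"
else
    echo "download or pretty-print failed - inspect raw.json" >&2
fi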

How do I make my bash script automatically turn into a terminal command on download? [duplicate]

Say I have a file at the URL http://mywebsite.example/myscript.txt that contains a script:
#!/bin/bash
echo "Hello, world!"
read -p "What is your name? " name
echo "Hello, ${name}!"
And I'd like to run this script without first saving it to a file. How do I do this?
Now, I've seen the syntax:
bash < <(curl -s http://mywebsite.example/myscript.txt)
But this doesn't seem to work like it would if I saved it to a file and then executed it. For example, readline doesn't work, and the output is just:
$ bash < <(curl -s http://mywebsite.example/myscript.txt)
Hello, world!
Similarly, I've tried:
curl -s http://mywebsite.example/myscript.txt | bash -s --
With the same results.
Originally I had a solution like:
timestamp=`date +%Y%m%d%H%M%S`
curl -s http://mywebsite.example/myscript.txt -o /tmp/.myscript.${timestamp}.tmp
bash /tmp/.myscript.${timestamp}.tmp
rm -f /tmp/.myscript.${timestamp}.tmp
But this seems sloppy, and I'd like a more elegant solution.
I'm aware of the security issues regarding running a shell script from a URL, but let's ignore all of that for right now.
source <(curl -s http://mywebsite.example/myscript.txt)
ought to do it. Alternately, leave off the initial redirection on yours, which is redirecting standard input; bash takes a filename to execute just fine without redirection, and <(command) syntax provides a path.
bash <(curl -s http://mywebsite.example/myscript.txt)
It may be clearer if you look at the output of echo <(cat /dev/null)
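On a typical Linux system it prints the path bash substitutes for the command, e.g. (the descriptor number varies):
$ echo <(cat /dev/null)
/dev/fd/63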
This is the way to execute a remote script, passing it some arguments (arg1 arg2):
curl -s http://server/path/script.sh | bash /dev/stdin arg1 arg2
For bash, Bourne shell and fish:
curl -s http://server/path/script.sh | bash -s arg1 arg2
Flag "-s" makes shell read from stdin.
Use:
curl -s -L URL_TO_SCRIPT_HERE | bash
For example:
curl -s -L http://bit.ly/10hA8iC | bash
Using wget, which is usually part of default system installation:
bash <(wget -qO- http://mywebsite.example/myscript.txt)
You can also do this:
wget -O - https://raw.github.com/luismartingil/commands/master/101_remote2local_wireshark.sh | bash
The best way to do it is
curl http://domain/path/to/script.sh | bash -s arg1 arg2
which is a slight change of the answer by @user77115
You can use curl and send it to bash like this:
bash <(curl -s http://mywebsite.example/myscript.txt)
I often find the following is enough:
curl -s http://mywebsite.example/myscript.txt | sh
But on an old system (kernel 2.4) it ran into problems; I tried many alternatives, and only the following worked:
curl -s http://mywebsite.example/myscript.txt -o a.sh && sh a.sh && rm -f a.sh
Examples
$ curl -s someurl | sh
Starting to insert crontab
sh: _name}.sh: command not found
sh: line 208: syntax error near unexpected token `then'
sh: line 208: ` -eq 0 ]]; then'
$
The problem may be caused by a slow network, or a bash version too old to handle a slow network gracefully.
However, the following solves the problem
$ curl -s someurl -o a.sh && sh a.sh && rm -f a.sh
Starting to insert crontab
Insert crontab entry is ok.
Insert crontab is done.
okay
$
Also:
curl -sL https://.... | sudo bash -
Just combining amra and user77115's answers:
wget -qO- https://raw.githubusercontent.com/lingtalfi/TheScientist/master/_bb_autoload/bbstart.sh | bash -s -- -v -v
It executes the remote bbstart.sh script, passing it the -v -v options.
In some unattended scripts I use the following command:
sh -c "$(curl -fsSL <URL>)"
I recommend avoiding executing scripts directly from URLs. You should be sure the URL is safe, and check the content of the script before executing it; you can use a SHA256 checksum to validate the file before executing.
Instead of executing the script directly, first download it and then execute it:
SOURCE='https://gist.githubusercontent.com/cci-emciftci/123123/raw/123123/sample.sh'
curl $SOURCE -o ./my_sample.sh
chmod +x my_sample.sh
./my_sample.sh
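A minimal sketch of the checksum idea on top of that (EXPECTED_SHA256 is a placeholder for a hash you obtained out of band from a trusted source):
SOURCE='https://gist.githubusercontent.com/cci-emciftci/123123/raw/123123/sample.sh'
EXPECTED_SHA256='<known-good-hash>'  # placeholder: publish or fetch this separately
curl -s $SOURCE -o ./my_sample.sh
# sha256sum -c reads "HASH  FILE" pairs from stdin (note the two spaces)
if echo "${EXPECTED_SHA256}  ./my_sample.sh" | sha256sum -c -; then
    chmod +x my_sample.sh
    ./my_sample.sh
else
    echo "checksum mismatch - refusing to run my_sample.sh" >&2
fi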
This way is good and conventional:
17:04:59#itqx|~
qx>source <(curl -Ls http://192.168.80.154/cent74/just4Test) Lord Jesus Loves YOU
Remote script test...
Param size: 4
---------
17:19:31#node7|/var/www/html/cent74
arch>cat just4Test
echo Remote script test...
echo Param size: $#
If you want the script run using the current shell, regardless of what it is, use:
${SHELL:-sh} -c "$(wget -qO - http://mywebsite.example/myscript.txt)"
if you have wget, or:
${SHELL:-sh} -c "$(curl -Ls http://mywebsite.example/myscript.txt)"
if you have curl.
This command will still work if the script is interactive, i.e., it asks the user for input, because the script text is passed as an argument rather than on stdin, leaving stdin attached to the terminal.
Note: OpenWRT has a wget clone but not curl, by default.
curl http://your.url.here/script.txt | bash
actual example:
juan@juan-MS-7808:~$ curl https://raw.githubusercontent.com/JPHACKER2k18/markwe/master/testapp.sh | bash
Oh, wow im alive
juan@juan-MS-7808:~$

Output is not been captured in shell script

I am trying to capture the output in a variable but I am not able to do so. I tried the scenarios below:
verify=$(su - omc -c "ldapsearch -x -n -D "uid=rac3gp,ou=people,ou=accounts,dc=netact,dc=net" -w hee_120" 2> /dev/null)
When I do echo $verify, it displays blank output
su - omc -c "ldapsearch -x -n -D "uid=rac3gp,ou=people,ou=accounts,dc=netact,dc=net" -w hee_120" >>dd.txt
The output is not captured in the other file either. The expected output is
ldap_bind: Invalid credentials (49)
which should be displayed after successful execution.
This sounds like an error to me.
ldap_bind: Invalid credentials (49)
So this is likely being printed to stderr. If you change the 2> /dev/null to 2>&1 in your first attempt to store it in a variable, then that should work.
As you can see, that's an error message. Error messages are typically written to stderr.
So do a redirection before capturing: 2>&1 (and don't send it to /dev/null).
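Putting that together, a sketch of the corrected capture (the inner double quotes around the DN are dropped here: they terminated the outer quotes early, and the DN contains no spaces, so it needs no quoting):
verify=$(su - omc -c "ldapsearch -x -n -D uid=rac3gp,ou=people,ou=accounts,dc=netact,dc=net -w hee_120" 2>&1)
echo "$verify"
# expected: ldap_bind: Invalid credentials (49)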

How do I pipe or redirect the output of curl -v?

For some reason the output always gets printed to the terminal, regardless of whether I redirect it via 2> or > or |. Is there a way to get around this? Why is this happening?
Add the -s (silent) option to remove the progress meter, then redirect stderr to stdout to get the verbose output on the same fd as the response body:
curl -vs google.com 2>&1 | less
Your URL probably has ampersands in it. I had this problem, too, and I realized that my URL was full of ampersands (from CGI variables being passed) and so everything was getting sent to background in a weird way and thus not redirecting properly. If you put quotes around the URL it will fix it.
The answer above didn't work for me; what eventually did was this syntax:
curl https://${URL} &> /dev/stdout | tee -a ${LOG}
tee puts the output on the screen, but also appends it to my log.
If you need the output in a file you can use a redirect:
curl https://vi.stackexchange.com/ -vs >curl-output.txt 2>&1
Please be sure not to flip >curl-output.txt and 2>&1: redirections are processed left to right, so if 2>&1 comes first, stderr is duplicated to the terminal before stdout is pointed at the file.
Just my 2 cents.
The below command should do the trick, as answered earlier
curl -vs google.com 2>&1
However, if you need to get the output to a file,
curl -vs google.com > out.txt 2>&1
should work.
I found the same thing: curl by itself would print to STDOUT, but could not be piped into another program.
At first, I thought I had solved it by using xargs to echo the output first:
curl -s ... <url> | xargs -0 echo | ...
But then, as pointed out in the comments, it also works without the xargs part, so -s (silent mode) is the key to preventing extraneous progress output to STDOUT:
curl -s ... <url> | perl -ne 'print $1 if /<sometag>([^<]+)/'
The above example grabs the simple <sometag> content (containing no embedded tags) from the XML output of the curl statement.
The following worked for me:
Put your curl statement in a script named abc.sh
Now run:
sh abc.sh 1>stdout_output 2>stderr_output
You will get your curl's results in stdout_output and the progress info in stderr_output.
This simple example shows how to capture curl output and use it in a bash script:
test.sh
function main
{
    \curl -vs 'http://google.com' 2>&1
    # note: add -o /tmp/ignore.png if you want to ignore binary output, by saving it to a file.
}

# capture output of curl to a variable
OUT=$(main)

# search the output for something using grep
echo
echo "$OUT" | grep 302
echo
echo "$OUT" | grep title
Solution: curl -vs google.com 2>&1 | less
BUT if you want to redirect the output to a file and the output still shows up on the screen, then the URL response contains a newline character \n, which messes up your shell.
To avoid this, put everything in a variable:
result=$(curl -v . . . . )
