HTTP Request: Is there a way to do a GET within a GET in Linux?

I am attempting to do an HTTP request within another HTTP request. Is there a way to do this via the command line in Linux?
wget http://request another wget http://request

Use $() to substitute the output of a command:
wget http://someURL?param="$(wget -O - http://otherURL)"
The -O - option tells wget to write the output to standard output instead of a file.
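The nesting above can be sketched offline; here printf stands in for the inner `wget -O -` call, and both URLs are placeholders:

```shell
# Command substitution: the inner command's output becomes part of the
# outer URL. printf simulates the inner "wget -qO - http://otherURL".
inner_output=$(printf 'token123')
outer_url="http://someURL?param=${inner_output}"
echo "$outer_url"   # http://someURL?param=token123
```

In a real invocation you would also want to quote the substitution, as in the answer above, so that whitespace or special characters in the inner response do not break the outer command.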

Related

How to resume interrupted download of specific part of file automatically in curl?

I'm working with curl in Linux. I'm downloading part of a file from MediaFire over a bad internet connection; the download always stops after a few minutes. When I use the parameter -C -, instead of resuming only the part of the file I specified from where the download stopped, it starts downloading the whole file.
This is the command I used:
curl -v -o file.part8 -r3000000001-3200000000 --retry 999 --retry-max-time 0 -C - http://download2331.mediafire.com/58gp2u3yjuzg/6upaohqwd8kdi9n/Olarila+High+Sierra+2020.raw.bz2
It's clear that the server doesn't support byte ranges. I also tried:
curl -L -k -C - -O --header "Range: bytes=0-1000000" https://cdimage.ubuntu.com/kubuntu/releases/20.10/release/kubuntu-20.10-desktop-amd64.iso
and I got:
curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume.
It seems that the problem is on the server's side.
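A server that supports resuming advertises it with an Accept-Ranges header. The check can be sketched with a canned response; against a live server you would feed `curl -sI "$url"` into the same grep:

```shell
# Look for "Accept-Ranges: bytes" in the response headers. The headers
# below are a canned example so the snippet runs offline.
headers='HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 3654957016'

if printf '%s\n' "$headers" | grep -qi '^accept-ranges: *bytes'; then
    echo "server supports byte ranges"
else
    echo "no byte-range support: -C - and -r will not work"
fi
```

If the header is missing (or says "none"), range requests like -r and resume with -C - cannot work, which matches the curl error 33 above.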

How to use spaces within cron?

While trying to run a cron job (I don't have SSH access; I can only set up crons via my hosting provider's cPanel), I need to include a space inside the URL passed to the command:
wget -o https://abc.de/aaaaa/bbb ccc/ddd >/dev/null 2>&1
However, the cron job fails reporting:
wget: Unable to find directory https://abc.de/aaaaa/bbb
So how can I use a space there?
In this case, URL encoding should do the trick:
https://abc.de/aaaaa/bbb%20ccc/ddd
but
wget -o "https://abc.de/aaaaa/bbb ccc/ddd"
should work as well.
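The percent-encoding can be done in the shell itself. This is a minimal sketch using bash parameter expansion; it only handles spaces, not other reserved characters:

```shell
# Replace every space in the URL with its percent-encoding, %20.
url="https://abc.de/aaaaa/bbb ccc/ddd"
encoded=${url// /%20}
echo "$encoded"   # https://abc.de/aaaaa/bbb%20ccc/ddd
```

Quoting the URL, as in the second wget command above, avoids the problem entirely by keeping the shell from splitting the argument at the space.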

How to get http status code and content separately using curl in linux

I have to fetch some data using the curl Linux utility. There are two cases: either the request succeeds or it fails. I want to save the output to a file if the request succeeds; if it fails for some reason, only the error code should be saved to a log file. I have searched a lot on the web but could not find an exact solution, which is why I posted a new question about curl.
One option is to get the response code with -w, so you could do something like:
code=$(curl -s -o file -w '%{response_code}' http://example.com/)
if test "$code" != "200"; then
echo "$code" >> response-log
else
echo "wohoo 'file' is fine"
fi
curl -I -s -L <Your URL here> | grep "HTTP/1.1"
curl + grep is your friend; you can then extract the status code from that line as needed. (Note that the match string depends on the protocol version: an HTTP/2 response prints "HTTP/2" rather than "HTTP/1.1".)
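Extracting the numeric code from such a status line is a one-field awk job. The status line here is canned so the snippet runs offline; in practice it would come from the `curl -I -s -L ... | grep` pipeline above:

```shell
# The status code is the second whitespace-separated field of the
# HTTP status line.
status_line='HTTP/1.1 301 Moved Permanently'
code=$(printf '%s\n' "$status_line" | awk '{print $2}')
echo "$code"   # 301
```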

grep and curl commands

I am trying to find the instances of the word (pattern) "Zardoz" in the output of this command:
curl http://imdb.com/title/tt0070948
I tried using: curl http://imdb.com/title/tt0070948 | grep "Zardoz"
but it just returned "file not found".
Any suggestions? I would like to use grep to do this.
You need to tell curl to use the -L (--location) option:
curl -L http://imdb.com/title/tt0070948 | grep "Zardoz"
(HTTP/HTTPS) If the server reports that the requested page has
moved to a different location (indicated with a Location: header
and a 3XX response code), this option will make curl redo the
request on the new place.
When curl follows a redirect and the request is not a plain GET
(for example POST or PUT), it will do the following request with
a GET if the HTTP response was 301, 302, or 303. If the response
code was any other 3xx code, curl will re-send the following
request using the same unmodified method.
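The behavior above amounts to a small decision: if the first response is a 3xx, repeat the request with -L. A sketch, with the curl call shown in a comment and a canned code substituted so it runs offline:

```shell
# In practice: code=$(curl -s -o /dev/null -w '%{response_code}' "$url")
code=301
case $code in
    3??) echo "redirect ($code): re-run with curl -L" ;;
    *)   echo "no redirect" ;;
esac
```

With -L, curl performs this loop itself, so grep sees the final page rather than the short redirect body.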

curl usage to get header

Why does this not work:
curl -X HEAD http://www.google.com
But these both work just fine:
curl -I http://www.google.com
curl -X GET http://www.google.com
You need to add the -i flag to the first command to include the HTTP headers in the output:
curl -X HEAD -i http://www.google.com
More here: https://serverfault.com/questions/140149/difference-between-curl-i-and-curl-x-head
curl --head https://www.example.net
I was pointed to this by curl itself; when I issued the command with -X HEAD, it printed:
Warning: Setting custom HTTP method to HEAD with -X/--request may not work the
Warning: way you want. Consider using -I/--head instead.
google.com is not responding to HTTP HEAD requests, which is why you are seeing a hang for the first command.
It does respond to GET requests, which is why the third command works.
As for the second, curl just prints the headers from a standard request.
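With -i, headers and body arrive in one stream separated by a blank line, so they can be split apart afterwards. The response below is canned so the snippet runs offline; in practice it would come from `curl -i -s "$url"`:

```shell
# Everything before the first blank line is the header block.
response='HTTP/1.1 200 OK
Content-Type: text/html

<html>body here</html>'

headers=$(printf '%s\n' "$response" | sed '/^$/q')
printf '%s\n' "$headers" | tail -n +2 | head -n 1   # Content-Type: text/html
```

This is why -I/--head is the simpler tool when only the headers are wanted: it sends a real HEAD request and does not wait for a body at all.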
