How do I get cURL to not show the progress bar?

I'm trying to use cURL in a script and get it to not show the progress bar.
I've tried the -s, -silent, -S, and -quiet options, but none of them work.
Here's a typical command I've tried:
curl -s http://google.com > temp.html
I only get the progress bar when pushing it to a file, so curl -s http://google.com doesn't have a progress bar, but curl -s http://google.com > temp.html does.

curl -s http://google.com > temp.html
works for curl version 7.19.5 on Ubuntu 9.10 (no progress bar). But if for some reason that does not work on your platform, you could always redirect stderr to /dev/null:
curl http://google.com 2>/dev/null > temp.html

In curl version 7.22.0 on Ubuntu and 7.24.0 on OSX the solution to not show progress but to show errors is to use both -s (--silent) and -S (--show-error) like so:
curl -sS http://google.com > temp.html
This works for redirected output (> /some/file), piped output (| less), and output directly to the terminal, for me.
Update: Since curl 7.67.0 there is a new option, --no-progress-meter, which does precisely this and nothing else; see clonejo's answer for more details.

I found that with curl 7.18.2 the download progress bar is not hidden with:
curl -s http://google.com > temp.html
but it is with:
curl -ss http://google.com > temp.html

Since curl 7.67.0 (2019-11-06) there is --no-progress-meter, which does exactly this, and nothing else. From the man page:
--no-progress-meter
Option to switch off the progress meter output without muting or
otherwise affecting warning and informational messages like -s,
--silent does.
Note that this is the negated option name documented. You can
thus use --progress-meter to enable the progress meter again.
See also -v, --verbose and -s, --silent. Added in 7.67.0.
It's available in Ubuntu ≥20.04 and Debian ≥11 (Bullseye).
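With the question's example, that looks like:
curl --no-progress-meter http://google.com > temp.html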
For a bit of history on curl's verbosity options, you can read Daniel Stenberg's blog post.

Not sure why it's doing that. Try -s with the -o option to set the output file instead of >.
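For example (a minimal sketch, reusing the output file name from the question):
curl -s -o temp.html http://google.com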

This could help:
curl 'http://example.com' > /dev/null

On macOS 10.13.6 (High Sierra), the -sS option works. It is especially useful inside Perl, in a command like curl -sS --get {someURL}, which frankly is a whole lot simpler than any of the LWP or HTTP wrappers for just getting a website or web page's contents.

Related

How to resume interrupted download of specific part of file automatically in curl?

I'm working with curl in Linux. I'm downloading part of a file from MediaFire over a bad internet connection. The download always stops after a few minutes, and when I use the parameter -C -, instead of resuming the part of the file I specified from where the download stopped, it starts downloading the whole file.
This is the command I used:
curl -v -o file.part8 -r3000000001-3200000000 --retry 999 --retry-max-time 0 -C - http://download2331.mediafire.com/58gp2u3yjuzg/6upaohqwd8kdi9n/Olarila+High+Sierra+2020.raw.bz2
It's clear that the server doesn't support byte ranges.
I tried with:
curl -L -k -C - -O --header "Range: bytes=0-1000000" https://cdimage.ubuntu.com/kubuntu/releases/20.10/release/kubuntu-20.10-desktop-amd64.iso
and I get:
curl: (33) HTTP server doesn't seem to support byte ranges. Cannot resume.
It seems that the problem is on the server side.
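One quick way to check whether a server advertises range support is to look for an Accept-Ranges header in its response. A sketch (the header is advisory, and a few servers honor ranges without sending it):
curl -sI 'http://download2331.mediafire.com/58gp2u3yjuzg/6upaohqwd8kdi9n/Olarila+High+Sierra+2020.raw.bz2' | grep -i '^accept-ranges'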

How to get http status code and content separately using curl in linux

I have to fetch some data using the curl Linux utility. There are two cases: either the request is successful, or it is not. I want to save the output to a file if the request is successful, and if the request fails for some reason, only the error code should be saved to a log file. I have searched a lot on the web but could not find an exact solution, which is why I have posted a new question about curl.
One option is to get the response code with -w, so you could do something like:
code=$(curl -s -o file -w '%{response_code}' http://example.com/)
if test "$code" != "200"; then
  echo "$code" >> response-log
else
  echo "wohoo 'file' is fine"
fi
curl -I -s -L <Your URL here> | grep "HTTP/1.1"
curl + grep is your friend; you can then extract the status code for whatever you need.
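To pull out just the numeric code from that approach, you can extend the pipeline. A sketch (with -L each redirect produces its own status line, so this keeps the last one; note that servers speaking HTTP/2 answer with "HTTP/2" rather than "HTTP/1.1"):
curl -sIL http://example.com/ | grep -E '^HTTP/' | tail -1 | awk '{print $2}'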

bash redirect output to file but result is incomplete

The question of redirecting a command's output has already been asked many times; however, I am seeing some strange behavior. I am using a bash shell (Debian), version 4.3.30(1)-release, and tried to redirect output to a file, but not everything is logged in the file.
The binary I am trying to run is sauce-connect v4.4.1 for Linux (the Sauce Labs client, publicly available on the internet).
If I run
#sudo ./bin/sc --doctor
it shows me the complete lines; it prints:
INFO: resolved to '23.42.27.27'
INFO: resolving 'g2.symcb.com' using DNS server '10.0.0.5'...
(followed by other lines)
INFO: 'google.com' is not in hosts file
INFO: URL https://google.com can be reached
However, if I redirect the same command to a file with the following command
#sudo ./bin/sc --doctor > alloutput.txt 2>&1
and do
#cat alloutput.txt
the same command's output is logged, but truncated, as follows:
INFO: resolved to '23.42.2me#mymachine:/opt/$
The line is incomplete, and the lines that should follow are missing entirely.
I have tried >> for appending; it has the same problem. Using &> alloutput.txt also does not capture everything. Can anyone point out how to get all lines of the above command logged completely to the text file?
UPDATE
In the end I managed to use the binary's native logging with --log alloutput.txt, which gives me the complete, correct output.
However, I am leaving this question open, as I am still wondering why output redirection misses some information/lines.
You should try stdbuf -o0, like this:
stdbuf -o0 ./bin/sc --doctor 2>&1 | tee -a alloutput.txt
(stdbuf -o0 makes the program's stdout unbuffered, so each line is written out immediately instead of sitting in a block buffer until the process exits.)
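If you want to see the buffering effect in isolation, here is a hypothetical demo (app.log is an assumed file that something else is appending to; grep uses stdio and block-buffers its output when it is not writing to a terminal):
tail -f app.log | grep ERROR > errors.txt
# matches can lag far behind, because grep's output sits in a block buffer
tail -f app.log | stdbuf -o0 grep ERROR > errors.txt
# with stdbuf -o0, matches appear in errors.txt immediately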
That is a funny problem; I've never seen that happen before. I am going to go out on a limb here and suggest this; see how it works:
sudo ./bin/sc --doctor 2>&1 | tee -a alloutput.txt
#commandtorun &> alloutput.txt
This command redirects both the error output and the standard output to the same file.

curl usage to get header

Why does this not work:
curl -X HEAD http://www.google.com
But these both work just fine:
curl -I http://www.google.com
curl -X GET http://www.google.com
You need to add the -i flag to the first command to include the HTTP headers in the output:
curl -X HEAD -i http://www.google.com
More here: https://serverfault.com/questions/140149/difference-between-curl-i-and-curl-x-head
curl --head https://www.example.net
I was pointed to this by curl itself; when I issued the command with -X HEAD, it printed:
Warning: Setting custom HTTP method to HEAD with -X/--request may not work the
Warning: way you want. Consider using -I/--head instead.
google.com is not responding to HTTP HEAD requests, which is why you are seeing a hang for the first command.
It does respond to GET requests, which is why the third command works.
As for the second, curl just prints the headers from a standard request.
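To see the difference side by side, a sketch (--head sends a proper HEAD request, while -X HEAD only overrides the method string, so curl may keep waiting for a body that never arrives):
curl -sI https://www.example.com
curl -si -X HEAD https://www.example.com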

Using wget in a crontab to run a PHP script

I set up a cron job on my Ubuntu server. Basically, I just want this job to call a PHP page on another server. This PHP page will then clean up some stuff in a database. So I thought it was a good idea to call this page with wget and send the result to /dev/null, because I don't care about the output of this page at all; I just want it to do its database-cleaning job.
So here is my crontab:
0 0 * * * /usr/bin/wget -q --post-data 'pass=mypassword' http://www.mywebsite.com/myscript.php > /dev/null 2>&1
(I post a password to make sure no one but me can run the script.) It works like a charm, except that each time wget writes an empty page into my user directory: the result of downloading the PHP page.
I don't understand why the result isn't sent to /dev/null. Any idea what the problem is here?
Thank you very much!
What wget writes to STDOUT is its status output: making the connection, showing progress, and so on.
If you don't want it to store the downloaded file, use the -O parameter:
/usr/bin/wget -q --post-data 'pass=mypassword' -O /dev/null http://www.mywebsite.com/myscript.php > /dev/null 2>&1
Check out the wget manpage. You'll also find the -q option for completely disabling output to STDOUT (but of course, redirecting the output as you do works too).
wget -O /dev/null ....
should do the trick
You can mute wget's output with the --quiet option:
wget --quiet http://example.com
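Putting it together for the crontab entry (same URL and password parameter as in the question):
0 0 * * * /usr/bin/wget -q -O /dev/null --post-data 'pass=mypassword' http://www.mywebsite.com/myscript.php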
