How to get HTTP status code and content separately using curl in Linux

I have to fetch some data using the curl utility on Linux. There are two cases: either the request succeeds or it fails. If the request succeeds I want to save the output to a file, and if it fails I want only the error code written to a log file. I have searched a lot on the web but could not find an exact solution, which is why I am posting a new question about curl.

One option is to get the response code with -w, so you could do something like:
code=$(curl -s -o file -w '%{response_code}' http://example.com/)
if test "$code" != "200"; then
    echo "$code" >> response-log
else
    echo "wohoo 'file' is fine"
fi
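An alternative sketch, if you do not need the exact HTTP code in the log: curl's -f/--fail option makes it exit non-zero on HTTP errors (status >= 400), so the exit status alone can drive the branching (the file names and URL are placeholders):
# -f makes curl fail on HTTP errors; -s keeps it quiet; -o saves the body only on success
if curl -sf -o file http://example.com/; then
    echo "wohoo 'file' is fine"
else
    # $? holds curl's exit status (22 for most HTTP errors), not the HTTP status code itself
    echo "request failed with curl exit status $?" >> response-log
fi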

curl -I -s -L <Your URL here> | grep "HTTP/1.1"
curl + grep is your friend; you can then extract the status code from that line as needed.
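If you only need the numeric code from that kind of pipeline, a minimal sketch (the URL is a placeholder; matching on ^HTTP rather than the literal "HTTP/1.1" also covers HTTP/2 responses, which is an assumption on my part):
# Keep the status code of the final response after any redirects
curl -sIL http://example.com/ | awk '/^HTTP/ {code=$2} END {print code}'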

Related

Get/fetch a file with a bash script using /dev/tcp over https without using curl, wget, etc

I am trying to read/fetch this file:
https://blockchain-office.com/file.txt with a bash script over /dev/tcp, without using curl, wget, etc.
I found this example:
exec 3<>/dev/tcp/www.google.com/80
echo -e "GET / HTTP/1.1\r\nhost: http://www.google.com\r\nConnection: close\r\n\r\n" >&3
cat <&3
I changed it to fit my needs:
exec 3<>/dev/tcp/www.blockchain-office.com/80
echo -e "GET / HTTP/1.1\r\nhost: http://www.blockchain-office.com\r\nConnection: close\r\n\r\n" >&3
cat <&3
When I try to run it I receive:
400 Bad Request
Your browser sent a request that this server could not understand
I think this is because strict SSL / HTTPS-only connections are enabled.
So I changed it to:
exec 3<>/dev/tcp/www.blockchain-office.com/443
echo -e "GET / HTTP/1.1\r\nhost: https://www.blockchain-office.com\r\nConnection: close\r\n\r\n" >&3
cat <&3
When I try to run it I receive:
400 Bad Request
Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
So I can't even get a normal connection, let alone fetch the file!
None of these posts fit my case; it looks like SSL/TLS is the problem and only http/80 works if I don't use curl, wget, lynx, openssl, etc.:
how to download a file using just bash and nothing else (no curl, wget, perl, etc.)
Using /dev/tcp instead of wget
How to get a response from any URL?
Read file over HTTP in Shell
I need a solution to get/read/fetch a plain txt file from a domain over HTTPS using only /dev/tcp (no other tools like curl or wget), and to output it in my terminal or save it in a variable. Is this possible, and how? Or is there another solution using only the standard terminal utilities?
You can use openssl s_client to perform the equivalent operation but delegate the SSL part:
#!/bin/sh
host='blockchain-office.com'
port=443
path='/file.txt'
# Build a CR+LF string to use as IFS, so read strips the trailing CR from header lines
crlf="$(printf '\r\n_')"
crlf="${crlf%?}"
{
  printf '%s\r\n' \
    "GET ${path} HTTP/1.1" \
    "host: ${host}" \
    'Connection: close' \
    ''
} |
openssl s_client -quiet -connect "${host}:${port}" 2>/dev/null | {
  # Skip headers by reading up until encountering a blank line
  while IFS="${crlf}" read -r line && [ -n "$line" ]; do :; done
  # Output the raw body content
  cat
}
Instead of cat to output the raw body, you may want to check some headers like Content-Type, Content-Transfer-Encoding and even maybe navigate and handle recursive MIME chunks, then decode the raw content to something.
After all the comments and research, the answer is no: we can't get/fetch files over HTTPS using only the standard shell tools like /dev/tcp, because we can't handle SSL/TLS without handling the complete handshake.
It is only possible over plain http/80.
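For reference, plain HTTP over /dev/tcp does work; a minimal sketch (it assumes the host actually serves the path unencrypted on port 80, which many sites, including this one, may not):
#!/bin/bash
# Plain-HTTP fetch over bash's /dev/tcp; no TLS, so this only works on port 80
host='www.blockchain-office.com'
path='/file.txt'
exec 3<>"/dev/tcp/${host}/80"
printf 'GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n' "${path}" "${host}" >&3
# Drop the response headers (everything up to the first blank line), print the body
sed '1,/^\r\{0,1\}$/d' <&3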
I don't think bash's /dev/tcp supports SSL/TLS.
If you use /dev/tcp for an HTTP/HTTPS connection you have to manage the complete exchange yourself, including SSL/TLS, HTTP headers, chunked encoding and more. Alternatively, use curl/wget, which manage all of that for you.
Then the shell is the wrong tool, because it cannot perform any of the SSL handshake without external resources/commands. Take what you want and can from what I show here, which is the cleanest and most portable POSIX-shell implementation of a minimal HTTP session through SSL that I can offer. Beyond that, it may be time to consider alternative options (not using HTTPS, or using languages with built-in or standard-library SSL support).
We will use curl, wget and openssl in separate Docker containers for now.
I think future requirements will determine whether we keep only one of them or all of them.
We will use the script from @Léa Gris in a Docker container too.

Unable to POST a request to a server using CURL in BASH

I have been trying to run a bash script which posts a request to an SMS server; on successful execution a message is received on the given mobile number. The script is shown below:
curl -k -X POST "http://192.168.10.3/u=admin&h=452ba065ebd1723598a51c7eca11d362&op=pv&to=1234567891&msg=Hello+to+all"
The above script is working fine. The message "Hello to all" is being received on the mobile number 1234567891. This number is however hard coded in the URL. In the actual scenario the mobile number would be available in a variable and the SMS would be sent to the mobile number available in this variable.
I have tried scripts like:
mobile_number="1234567891"
curl -k -X POST "http://192.168.10.3/u=admin&h=452ba065ebd1723598a51c7eca11d362&op=pv&to=$mobile_number&msg=Message+From+world"
and
x="http://192.168.10.3/u=admin&h=452ba065ebd1723598a51c7eca11d362&op=pv&to="
x+="1234567891
x+=&msg=Hello+to+all"
curl -k -X POST $x
However, I have been unable to execute them successfully. It would be of great help if someone could assist me with the syntax.
Try this approach; bash is a different language than C++ and the like :-):
#!/bin/bash
to="1234567891"
msg="Hello+to+all"
u="admin"
hash="452ba065ebd1723598a51c7eca11d362"
op="pv"
ip="192.168.10.3"
url="http://${ip}/u=${u}&h=${hash}&op=${op}&to=${to}&msg=${msg}"
echo "${url}"
Then curl -k -X POST "$url" should work fine (quoting the URL keeps the shell from mangling the & characters).
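Putting it together, a minimal sketch of the full script with the mobile number in a variable (the IP, hash and parameter names are copied from the question and may differ in your setup):
#!/bin/bash
# All values below come from the question; replace them with your gateway's details
mobile_number="1234567891"
msg="Message+From+world"                  # spaces already URL-encoded as '+'
hash="452ba065ebd1723598a51c7eca11d362"
ip="192.168.10.3"
url="http://${ip}/u=admin&h=${hash}&op=pv&to=${mobile_number}&msg=${msg}"
curl -k -X POST "$url"                    # quotes keep '&' from being interpreted by the shell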

grep and curl commands

I am trying to find the instances of the word (pattern) "Zardoz" in the output of this command:
curl http://imdb.com/title/tt0070948
I tried using: curl http://imdb.com/title/tt0070948 | grep "Zardoz"
but it just returned "file not found".
Any suggestions? I would like to use grep to do this.
You need to tell curl to use the -L (--location) option:
curl -L http://imdb.com/title/tt0070948 | grep "Zardoz"
(HTTP/HTTPS) If the server reports that the requested page has
moved to a different location (indicated with a Location: header
and a 3XX response code), this option will make curl redo the
request on the new place
When curl follows a redirect and the request is not a plain GET
(for example POST or PUT), it will do the following request with
a GET if the HTTP response was 301, 302, or 303. If the response
code was any other 3xx code, curl will re-send the following
request using the same unmodified method
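To actually count the instances rather than just print the matching lines, a small sketch (still using the URL from the question; grep -o prints each match on its own line so wc -l can count them):
# -s hides the progress meter, -L follows the redirect to the canonical title page
curl -sL http://imdb.com/title/tt0070948 | grep -o "Zardoz" | wc -l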

Check Whether a Web Application is Up or Not

I would like to write a script to check whether the application is up or not using Unix shell scripting.
From googling I found the command wget -O /dev/null -q http://mysite.com, but I am not sure how this works. Can someone please explain? It would be helpful for me.
Run the wget command
the -O option tells where to put the data that is retrieved
/dev/null is a special UNIX file that is always empty. In other words the data is discarded.
-q means quiet. Normally wget prints lots of info telling its progress in downloading the data so we turn that bit off.
http://mysite.com is the URL of the exact web page that you want to retrieve.
Many programmers create a special page for this purpose that is short and contains status data. In that case, do not discard it but save it to a file by replacing /dev/null in -O /dev/null with a real file name such as mysite.log (see the sketch after this list).
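A small sketch of that variant (the /status.txt path and mysite.log name are hypothetical):
# Fetch a short status page and append it to a log file instead of discarding it
wget -q -O - http://mysite.com/status.txt >> mysite.log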
Check whether you can connect to your web server.
Connect to the port where your web server listens.
If it connects properly your web server is up, otherwise it is down.
You can check further (e.g. whether the index page is correct).
See this shell script:
if wget -O /dev/null -q http://shiplu.mokadd.im; then
    echo "Site is up"
else
    echo "Site is down"
fi
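A variant that checks reachability without downloading the body at all, assuming your wget supports the --spider option (mysite.com stands in for your real host):
# --spider asks wget to check the URL without saving the document
if wget -q --spider http://mysite.com; then
    echo "Site is up"
else
    echo "Site is down"
fi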

Using wget in a crontab to run a PHP script

I set up a cron job on my Ubuntu server. Basically, I just want this job to call a PHP page on another server. This PHP page will then clean up some stuff in a database. So I thought it was a good idea to call this page with wget and then send the result to /dev/null, because I don't care about the output of this page at all; I just want it to do its database cleaning job.
So here is my crontab:
0 0 * * * /usr/bin/wget -q --post-data 'pass=mypassword' http://www.mywebsite.com/myscript.php > /dev/null 2>&1
(I post a password to make sure no one but me can run the script.) It works like a charm, except that wget writes an empty file in my home directory each time: the result of downloading the PHP page.
I don't understand why the result isn't sent to /dev/null. Any idea what the problem is here?
Thank you very much!
wget's output to STDOUT is its connection attempts, progress reporting, etc. The downloaded document itself is saved to a file, so redirecting STDOUT does not stop it from being written to your directory.
If you don't want it to store the retrieved file, use the -O parameter to send it to /dev/null:
/usr/bin/wget -q --post-data 'pass=mypassword' -O /dev/null http://www.mywebsite.com/myscript.php > /dev/null 2>&1
Check out the wget manpage. You'll also find the -q option for completely disabling output to STDOUT (but of course, redirecting the output as you do works too).
wget -O /dev/null ....
should do the trick
you can mute wget output with the --quiet option
wget --quiet http://example.com
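Putting the pieces together, the corrected crontab entry might look like this (the password, URL and timing are copied from the question):
# Run at midnight; discard both the fetched page (-O /dev/null) and wget's own messages
0 0 * * * /usr/bin/wget -q -O /dev/null --post-data 'pass=mypassword' http://www.mywebsite.com/myscript.php > /dev/null 2>&1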
