Why does curl send POST data as {data=}? - linux

I was writing a simple REST service in Apache Camel and testing it by invoking its endpoint with the curl command.
The service receives a simple plain-text String like "ABC123-D-FE", but across multiple tests the data always arrived as "{ABC123-D-FE=}", with the extra "{ =}" added.
At first I thought my service was mangling the data, but every other method I tried (e.g. REST clients, Postman, invoking the service from other services) never reproduced that result; the service always received just the plain-text data.
It was only formatted like that when using curl.
The command was:
curl -X POST -d ABC123-F-DE http://host/service
I can't find any reference to this behaviour, so my only conclusion is that it is something curl does by default (though I don't understand why, or how to disable it).
I was using curl on Ubuntu MATE 20.04.

Edit: per Nick ODell's comment below, this almost certainly means the body is being parsed into a map whose key "ABC123-F-DE" has an empty value, like this JSON:
{
  "ABC123-F-DE": ""
}
My guess is that some stringify function for the parsed object is adding the { to mark the start of the map, the = to separate the key from its (empty) value, and the } to mark the end of the map. Notably, {ABC123-F-DE=} is exactly how a Java Map with that single key and an empty value prints via toString(), which fits a Java framework like Camel.
Let's check what curl actually sends with a little netcat server:
$ nc -l 1111
followed by
$ curl -X POST -d ABC123-F-DE http://localhost:1111
yields:
$ nc -l 1111
POST / HTTP/1.1
Host: localhost:1111
User-Agent: curl/7.84.0
Accept: */*
Content-Length: 11
Content-Type: application/x-www-form-urlencoded

ABC123-F-DE
Conclusion: it's definitely not curl.
My best guess: something in your server's application/x-www-form-urlencoded parser is misbehaving.
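Note that with -d, curl defaults to Content-Type: application/x-www-form-urlencoded (visible in the capture above), so a form-aware server will parse the body as key=value pairs, and a bare "ABC123-F-DE" becomes a key with an empty value. A hedged workaround, assuming your endpoint accepts plain text (host and payload here are just the question's placeholders): override the header so the form parser never runs:
curl -X POST -H 'Content-Type: text/plain' -d 'ABC123-F-DE' http://host/service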

Related

Get/fetch a file with a bash script using /dev/tcp over https without using curl, wget, etc

I am trying to read/fetch this file:
https://blockchain-office.com/file.txt
with a bash script over /dev/tcp, without using curl, wget, etc.
I found this example:
exec 3<>/dev/tcp/www.google.com/80
echo -e "GET / HTTP/1.1\r\nhost: http://www.google.com\r\nConnection: close\r\n\r\n" >&3
cat <&3
I changed it to my needs like this:
exec 3<>/dev/tcp/www.blockchain-office.com/80
echo -e "GET / HTTP/1.1\r\nhost: http://www.blockchain-office.com\r\nConnection: close\r\n\r\n" >&3
cat <&3
When I run it, I receive:
400 Bad Request
Your browser sent a request that this server could not understand
I think this is because the server enforces SSL/HTTPS-only connections.
So I changed it to:
exec 3<>/dev/tcp/www.blockchain-office.com/443
echo -e "GET / HTTP/1.1\r\nhost: https://www.blockchain-office.com\r\nConnection: close\r\n\r\n" >&3
cat <&3
When I run that, I receive:
400 Bad Request
Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
So I can't even establish a normal connection, let alone get the file!
None of these posts fit my case; SSL/TLS seems to be the problem, and only HTTP on port 80 works if I don't use curl, wget, lynx, openssl, etc.:
how to download a file using just bash and nothing else (no curl, wget, perl, etc.)
Using /dev/tcp instead of wget
How to get a response from any URL?
Read file over HTTP in Shell
I need a way to get/read/fetch a plain txt file from a domain over HTTPS using only /dev/tcp (no other tools like curl), and to print it in my terminal or save it in a variable without wget, etc. Is that possible, and how? Or is there another solution using only the standard terminal utilities?
You can use openssl s_client to perform the equivalent operation while delegating the SSL part to it:
#!/bin/sh
host='blockchain-office.com'
port=443
path='/file.txt'

# Build a CRLF string; the trailing '_' protects the newline from
# command-substitution stripping and is then removed.
crlf="$(printf '\r\n_')"
crlf="${crlf%?}"

{
  printf '%s\r\n' \
    "GET ${path} HTTP/1.1" \
    "host: ${host}" \
    'Connection: close' \
    ''
} |
openssl s_client -quiet -connect "${host}:${port}" 2>/dev/null | {
  # Skip headers by reading up until encountering a blank line
  while IFS="${crlf}" read -r line && [ -n "$line" ]; do :; done
  # Output the raw body content
  cat
}
Instead of using cat to dump the raw body, you may want to inspect headers like Content-Type and Content-Transfer-Encoding, perhaps even walk nested MIME parts, and then decode the raw content accordingly.
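For completeness, here is a usage sketch, assuming you saved the script above as fetch.sh (the filename is just an example):
sh fetch.sh > file.txt     # save the body to a file
body="$(sh fetch.sh)"      # or capture it in a variable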
After all the comments and research, the answer is no: we can't fetch files over HTTPS using only standard shell facilities like /dev/tcp, because we can't do SSL/TLS without handling the complete handshake.
It is only possible over plain HTTP on port 80.
I don't think bash's /dev/tcp supports SSL/TLS.
If you use /dev/tcp for an HTTP/HTTPS connection, you have to manage the complete exchange yourself, including the SSL/TLS handshake, HTTP headers, chunked transfer encoding and more. Or you use curl/wget, which manage it for you.
Then the shell is the wrong tool, because it cannot perform any part of the SSL handshake without external commands. Take what you want and can from what I showed you here; it is about the cleanest and most portable POSIX-shell implementation of a minimal HTTP session over SSL. Beyond that, it may be time to consider alternative options (not using HTTPS, or using a language with SSL support built in or in its standard library).
We will use curl, wget and openssl in separate Docker containers for now.
Future requirements will determine whether we keep only one of them or all of them.
We will also use the script from @Léa Gris in a Docker container.

ngrok retrieve assigned subdomain

I've got a NodeJS script which spins up an ngrok instance by starting the ngrok binary.
However, I need to get back the automatically generated URL, and I can't find anything in the documentation about how to do this.
For example, when you run ngrok http 80, it spins up and generates a random unique URL each time it starts.
This question is kinda old; however, I thought I'd give another, more generic option, as it doesn't require NodeJS:
curl --silent --show-error http://127.0.0.1:4040/api/tunnels | sed -nE 's/.*public_url":"https:..([^"]*).*/\1/p'
This one just calls api/tunnels and applies text processing (sed) to the response to extract the public URL.
ngrok serves tunnel information at http://localhost:4040/api/tunnels.
curl + jq
curl -Ss http://localhost:4040/api/tunnels | jq -r '.tunnels[0].public_url'
=> https://719c933a.ap.ngrok.io
curl + ruby
curl -Ss http://localhost:4040/api/tunnels | \
ruby -e 'require "json"; puts JSON.parse(STDIN.read).dig("tunnels", 0, "public_url")'
=> https://719c933a.ap.ngrok.io
curl + node
json=$(curl -Ss http://127.0.0.1:4040/api/tunnels);
node -pe "var data = $json; data.tunnels[0].public_url"
=> https://719c933a.ap.ngrok.io
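In the same spirit, a python variant (a sketch assuming python3 is on the PATH; it parses the same tunnels JSON):
curl + python
curl -Ss http://localhost:4040/api/tunnels | \
python3 -c 'import json,sys; print(json.load(sys.stdin)["tunnels"][0]["public_url"])'
=> https://719c933a.ap.ngrok.io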

grep and curl commands

I am trying to find the instances of the word (pattern) "Zardoz" in the output of this command:
curl http://imdb.com/title/tt0070948
I tried using: curl http://imdb.com/title/tt0070948 | grep "Zardoz"
but it just returned "file not found".
Any suggestions? I would like to use grep to do this.
You need to tell curl to use the -L (--location) option:
curl -L http://imdb.com/title/tt0070948 | grep "Zardoz"
(HTTP/HTTPS) If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option will make curl redo the request on the new place.
When curl follows a redirect and the request is not a plain GET (for example POST or PUT), it will do the following request with a GET if the HTTP response was 301, 302, or 303. If the response code was any other 3xx code, curl will re-send the following request using the same unmodified method.
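One extra tip: when piping curl into grep, the progress meter clutters the terminal via stderr, so -s (--silent) is commonly added as well (same command, just quieter):
curl -sL http://imdb.com/title/tt0070948 | grep "Zardoz"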

curl command output has wrong encoding

When I execute
curl "http://weather.yahooapis.com/forecastrss?w=1225955&u=c"
it returns a response with incorrect encoding:
khan@khan-P55A-UD3P:~$ curl "http://weather.yahooapis.com/forecastrss?w=1225955&u=c"
[... many lines of binary, gzip-compressed output elided ...]
khan@khan-P55A-UD3P:~$
However, the same command works just fine on another computer.
Is there anything I need to set in the shell to get this in the correct format?
I am using Ubuntu 14.04 64-bit (Linux khan-P55A-UD3P 3.13.0-40-generic #69-Ubuntu SMP Thu Nov 13 17:53:56 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux).
Any ideas? A screenshot of the command can be seen here as well: http://i.imgur.com/QDy7F7i.png
curl will automatically decompress the response if you set the --compressed flag:
curl --compressed "http://example.com"
--compressed (HTTP) Request a compressed response using one of the algorithms libcurl supports, and save the uncompressed document. If this option is used and the server sends an unsupported encoding, curl will report an error.
Reference for this answer.
If you need this only for gzip content encoding, use this command:
curl -sH 'Accept-encoding: gzip' http://example.com/ | gunzip -
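To confirm that compression is actually the culprit before reaching for --compressed or gunzip, a hedged diagnostic (not part of the original answers) is to dump just the response headers and look for Content-Encoding:
curl -s -D - -o /dev/null "http://weather.yahooapis.com/forecastrss?w=1225955&u=c" | grep -i content-encoding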
I think it is connected with the default encoding of your terminal (which is UTF-8 by default). You can try redirecting the stream to a file, for instance:
curl "http://weather.yahooapis.com/forecastrss?w=1225955&u=c" > response
I had the same problem with a REST web service when passing bytes (PDF content inside a DataHandler). Without redirecting the stream, I received data rendered as UTF-8 in the terminal, and the same when using SoapUI.
Try setting the terminal's charset to UTF-8. A Google search got me this:
https://unix.stackexchange.com/questions/28136/set-gnome-terminals-default-charset-to-utf8
Before you change the encoding, check that this is indeed the issue by determining the current charset, as in:
How to get terminal's Character Encoding
One more solution is to add an Accept-Charset header:
curl -vvv -H "Accept-Charset: utf-8" http://example.com/ > tmpFile
Sometimes it is as easy as removing Accept-Encoding: gzip from your request.

curl and cookies - command line tool

I am trying to get more familiar with curl, so I am sending a POST request to a page on a site I am already logged into:
curl --data "name=value" http://www.mysite.com >> post_request.txt
The output consisted of the whole HTML page, with a message telling me I was not logged in, so I retrieved the site's cookie (called PHPSESSID, if that matters), stored it in /tmp/cookies.txt, and then ran:
curl --cookie /tmp/cookies.txt --cookie-jar /tmp/cookies.txt --data "name=value" http://www.mysite.com > post_request.txt
But I still get the same message. Can someone help?
I suggest you take a packet capture tool (Wireshark or tcpdump) and sniff the communication between your shell and the server. Check whether the cookie is sent and whether it has the right content.
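A lighter-weight alternative to a packet capture, as a sketch reusing the question's placeholder URL and cookie file: curl's -v (verbose) flag prints every request header it sends, so you can see whether a Cookie header goes out at all:
curl -v --cookie /tmp/cookies.txt --data "name=value" http://www.mysite.com -o /dev/null
Look for a line like "> Cookie: PHPSESSID=..." in the output; if it is absent, the cookie file's path or format is the problem.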
