cURL --resolve issues - linux

I am trying to make a cURL request that resolves to a specific IP. From everything I've read, this looks syntactically correct to me, but I am still seeing the 'could not resolve host' error. Can someone point me in the right direction? I'm seeing a variety of errors:
curl —-resolve e-dinar.io:443:42.81.15.75 "https://e-dinar.io"
IDN support not present, can't parse Unicode domains
* getaddrinfo(3) failed for —-resolve:80
curl —-resolve e-dinar.io:443:42.81.15.75 "https://e-dinar.io:443"
curl: (6) Couldn't resolve host '—-resolve'
curl: (6) Couldn't resolve host 'e-dinar.io:443'
curl "https://e-dinar.io:443" —-resolve e-dinar.io:443:42.81.15.75
curl: (6) Couldn't resolve host '—-resolve'
curl: (6) Couldn't resolve host 'e-dinar.io:443'
Any thoughts where I am going wrong? Thanks.

You are using the wrong dash character in the curl command (—); change —- to --.
You have a hint in the curl response here:
curl: (6) Couldn't resolve host '—-resolve'
curl tries to fetch data from a host named '—-resolve'; it doesn't parse it as an option because of the wrong dash.
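With plain ASCII double hyphens, the command from the question should look like this (same host and IP, only the dashes changed):
curl --resolve e-dinar.io:443:42.81.15.75 "https://e-dinar.io"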

Related

curl: (51) SSL Issue [duplicate]

I have an x.example server which serves traffic for both a.example and b.example.
x.example has certificates for both a.example and b.example. The DNS for a.example and b.example is not yet set up.
If I add an /etc/hosts entry for a.example pointing to x.example's IP and run curl -XGET https://a.example, I get a 200.
However if I run curl --header 'Host: a.example' https://x.example, I get:
curl: (51) SSL: no alternative certificate subject name matches target
host name x.example
I would think it would use a.example as the host. Maybe I'm not understanding how SNI/TLS works.
Is it that, because a.example is only in an HTTP header, the TLS handshake doesn't have access to it yet, while it does have access to the URL itself?
Indeed, SNI in TLS does not work like that. SNI, like everything related to TLS, happens before any kind of HTTP traffic, hence the Host header is not taken into account at that step (but it will be useful later on for the webserver to know which host you are connecting to).
So to enable SNI you need a specific switch in your HTTP client to tell it to send the appropriate TLS extension during the handshake with the hostname value you need.
In the case of curl, you need at least version 7.18.1 (based on https://curl.haxx.se/changes.html), and then it seems to automatically use the value provided in the Host header. It also depends on which OpenSSL (or equivalent library on your platform) version it is linked to.
See point 1.10 of https://curl.haxx.se/docs/knownbugs.html that speaks about a bug but explains what happens:
When given a URL with a trailing dot for the host name part: "https://example.com./", libcurl will strip off the dot and use the name without a dot internally and send it dot-less in HTTP Host: headers and in the TLS SNI field.
The --connect-to option could also be useful in your case. Or --resolve as a substitute for /etc/hosts; see https://curl.haxx.se/mail/archive-2015-01/0042.html for an example, or https://makandracards.com/makandra/1613-make-an-http-request-to-a-machine-but-fake-the-hostname
You can add --verbose in all cases to see in more detail what is happening. See this example: https://www.claudiokuenzler.com/blog/693/curious-case-of-curl-ssl-tls-sni-http-host-header ; you will also see there how to test directly with openssl.
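As a rough sketch of that openssl test (assuming x.example still resolves normally, since only a.example and b.example lack DNS), this sends the SNI value explicitly and prints the subject of the certificate the server picks:
openssl s_client -connect x.example:443 -servername a.example </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
If the server selects the right certificate for a.example, the subject (or the SAN list, visible with -text instead of -subject) should mention a.example rather than x.example.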
If you have a.example in your /etc/hosts you should just run curl with https://a.example/ and it should take care of the Host header and hence SNI (or use --resolve instead)
So to answer your question directly, replace
curl --header 'Host: a.example' https://x.example
with
curl --connect-to a.example:443:x.example:443 https://a.example
and it should work perfectly.
The accepted answer helped me find the solution, even though it does not contain the solution itself. The answer in the mail/archive link Patrick Mevzek provided has the wrong port number, so even following that answer will cause it to continue to fail.
I used this container to run a debugging server to inspect the requests. I highly suggest anyone debugging this kind of issue do the same.
Here is how to address the OP's question.
# Instead of this:
# curl --header 'Host: a.example' https://x.example
# Do:
host=a.example
target=x.example
# take the first IPv4 address returned for the real server (skips any CNAME lines)
ip=$(dig +short "$target" | grep -m1 -E '^[0-9.]+$')
curl -sv --resolve "$host:443:$ip" "https://$host"
If you want to ignore bad certificate matches, use -svk instead of -sv:
curl -svk --resolve "$host:443:$ip" "https://$host"
Note: since you are using https, you must use 443 in the --resolve argument instead of the 80 that was stated in the mail/archive post.
I had a similar need and didn't have sudo access to update the hosts file.
I used the --resolve parameter and also added the DNS host name as a Host header:
--resolve <dns name>:<port>:<ip addr>
curl --request POST --resolve dns_name:443:a.b.c.d 'https://dns_name/x/y' --header 'Host: dns_name' ....
Cheers..
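For what it's worth, the explicit Host header should be redundant in that command: --resolve only changes how dns_name resolves, and the URL already names dns_name, so curl sends that name in both the Host header and the TLS SNI field on its own. A trimmed-down form would be:
curl --request POST --resolve dns_name:443:a.b.c.d 'https://dns_name/x/y'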

curl always connects to a specific ip:port pair

I'm running into this scenario on one of our linux boxes.
$ curl 10.200.20.66:8087/ping
curl: (7) Failed to connect to 219.135.102.36 port 8118: Connection timed out
$ curl 114.114.114.114:80/x
curl: (7) Failed to connect to 219.135.102.36 port 8118: Connection timed out
As you can see, curl keeps trying to connect to 219.135.102.36:8118.
I've tried using nc and telnet and both of them give correct results.
Finally I turned to strace curl 10.200.20.66:8087/ping, and here's the output.
Can anybody help explain why this happened?
To be sure what is happening, turn on verbosity with the -v switch:
$ http_proxy=1.2.3.4:8080 curl -v http://google.com
* About to connect() to proxy 1.2.3.4 port 8080 (#0)
* Trying 1.2.3.4...
* Connection timed out
* couldn't connect to host
* Closing connection #0
curl: (7) couldn't connect to host
In your case, I'd guess that curl is trying to use a proxy. If that is the case, you should check the following (a quick way to rule the proxy out is shown after this list):
http_proxy environment variable:
Check:
env | grep -i proxy
The curl configuration file ~/.curlrc (unlikely, since it doesn't show up in strace)
A proxy can also be provided on the command line (-x or --proxy), so check whether curl is aliased in your shell
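If one of those checks does turn up a proxy, a minimal way to rule it out (using the address from the question) is to bypass it for a single request, or to clear the variables in the current shell:
curl --noproxy '*' -v 10.200.20.66:8087/ping
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY all_proxy ALL_PROXY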
curl should be used with a URL scheme such as http, https or ftp, for example:
curl "http://www.google.com"
It returns the output in HTML format.
Kindly check and retry with the correct format.

curl: (7) Failed to connect to port 80, and 443 - on one domain

I have checked and cURL is not working properly.
When I run the command curl -I https://www.example.com/sitemap.xml I get:
curl: (7) Failed to connect
It fails to connect on every port. This error occurs only on one domain; all other domains work fine: curl: (7) Failed to connect to port 80, and 443
Thanks...
First, check your /etc/hosts file entries; maybe the URL you're requesting points to your localhost.
If the URL is not listed in your /etc/hosts file, then try executing the following command to see how curl handles that particular URL:
curl --ipv4 -v "https://example.com/"
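To double-check what name resolution is actually in effect (getent consults /etc/hosts as well as DNS, so it reflects what the system resolver will hand to curl):
grep -i example /etc/hosts
getent hosts www.example.com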
After much searching, I found that the hosts settings were not correct.
I checked with nano /etc/hosts.
The domain pointed to the wrong IP in the hosts file.
I changed the wrong IP and it's working fine.
This is a new error related to curl: (7) Failed to connect:
curl: (7) Failed to connect
The above error message means that your web server (at least the one specified with curl) is not running at all: no web server is listening on the specified host and the specified (or implied) port. (So, XML doesn't have anything to do with that.)
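One quick way to confirm whether anything is listening at all (a sketch, assuming shell access to the server) is:
sudo ss -tlnp | grep -E ':(80|443) '
If nothing shows up, the problem is on the server side (the web server is down or bound elsewhere), not with curl.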
You can download the key with a browser, then open a terminal in your Downloads folder and type:
sudo apt-key add <key_name>.asc
Mine is a Red Hat Enterprise Linux (RHEL) virtual machine and I was getting something like the following:
Error "curl: (7) Failed to connect to localhost port 80: Connection refused"
I stopped the firewall by running the following commands and it started working.
sudo systemctl stop firewalld
sudo systemctl disable firewalld
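Disabling the firewall entirely is a blunt fix; assuming firewalld really was what blocked the request, opening just the needed port (80 here) is usually enough:
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --reload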
If the curl request is to the outside world, like:
curl www.google.com
I have to restart my cntlm service:
systemctl restart cntlm
If it's within my network:
curl inside.server.local
Then a Docker network is overlapping with something my CNTLM proxy uses, and I just remove all Docker networks to fix it (you can also just remove the last network you created, but I'm lazy).
docker network rm $(docker network ls -q)
And then I can work again.
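A less drastic variant, if removing every network feels too broad, is to prune only the networks that no container is using:
docker network prune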

I want to make a cron job for a Node.js URL

In Postman my URL "http://localhost:1000/api/coupon/coupondeactivate" works fine. I want to call this URL from a cron job, so I used it in the form below.
wget http://localhost:1000/api/coupon/coupondeactivate --header "Referer: localhost:1000"
But when I run the above command, I get the error below, so please kindly help me solve this issue.
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:1000... connected.
HTTP request sent, awaiting response... 404 Not Found
2016-12-27 12:27:19 ERROR 404: Not Found.
Hello, please try the following curl request for your URL:
watch -n 5 curl --request POST urlname
Run the above in the command line; it works fine for me.
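Since the goal is a cron job rather than a foreground watch loop, a crontab entry along these lines may be closer to what was asked (POST is an assumption carried over from the answer above, and the five-minute schedule is only an example):
*/5 * * * * curl -s -X POST "http://localhost:1000/api/coupon/coupondeactivate" --header "Referer: localhost:1000" >/dev/null 2>&1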

curl trying to connect to a port other than the one mentioned in the URL

I'm starting to build a Node app, and for this I am using curl. However, I'm facing an issue which I believe is related to curl and my system configuration; I just can't pinpoint the exact problem.
In my Node.js app I set the app to listen on port 3000, and whenever I type in the command line:
curl -H "Content-Type: application/json" -X POST -d '{"url":"localhost:3000"}' http://localhost:3000/doAction
I'm getting the following error:
curl: (7) Failed to connect to localhost port 1080: Connection refused
It seems like curl is forcing a connection to port 1080, although I was specifying port 3000; I couldn't find a solution for this in the documentation.
If someone has met this issue before and can assist, it would be great. Thanks :)
I think your curl is trying to connect to a proxy (when no port is given in a proxy setting, curl assumes 1080). Why don't you run the command with --noproxy '*', i.e.
curl --noproxy '*' -H "Content-Type: application/json" -X POST -d '{"url":"localhost:3000"}' http://localhost:3000/doAction
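To track down where that proxy setting comes from in the first place, a few likely places to look (not an exhaustive list):
env | grep -i proxy           # http_proxy / https_proxy / all_proxy variables
cat ~/.curlrc 2>/dev/null     # a "proxy = ..." line here applies to every curl run
type curl                     # a shell alias or wrapper could be adding -x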

Resources