How to download a Bing Static Map in Linux

I'm trying to download a static map using the Bing Maps API. It works when I load the URL from Chrome, but when I try to fetch it with curl or wget from Linux, I get an Auth Failed error.
The URLs are identical, so is Bing for some reason blocking calls from Linux?
Here are the commands I tried:
wget -O map.png http://dev.virtualearth.net/REST/V1/Imagery/Map/Road/...
curl -o map.png http://dev.virtualearth.net/REST/V1/Imagery/Map/Road/...
Error:
Resolving dev.virtualearth.net (dev.virtualearth.net)... 131.253.14.8
Connecting to dev.virtualearth.net (dev.virtualearth.net)|131.253.14.8|:80... connected.
HTTP request sent, awaiting response... 401 Unauthorized
Username/Password Authentication Failed.
--2016-10-24 15:42:30-- http://dev.virtualearth.net/REST/V1/Imagery/Map/Road/.../12?mapSize=340,500
Reusing existing connection to dev.virtualearth.net:80.
HTTP request sent, awaiting response... 401 Unauthorized
Username/Password Authentication Failed.
I'm not sure if it has anything to do with the key type; I've tried several, from Public Website to Dev/Test, but it still didn't work.

The URL needs to be wrapped in quotes, because the & symbol in the query string would otherwise be interpreted by the shell:
wget 'http://dev.virtualearth.net/REST/V1/Imagery/Map/Road/...'
Examples
Via wget:
wget -O map.jpg 'http://dev.virtualearth.net/REST/V1/Imagery/Map/Road/Bellevue%20Washington?mapLayer=TrafficFlow&key=<key>'
Via curl:
curl -o map.jpg 'http://dev.virtualearth.net/REST/V1/Imagery/Map/Road/Bellevue%20Washington?mapLayer=TrafficFlow&key=<key>'
Both have been verified under Ubuntu 16.04.
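For what it's worth, the reason the unquoted URL fails: the shell treats the & as a command separator, so wget only ever requests the part of the URL before the first &, and the key parameter never reaches the server, hence the 401. If you prefer not to quote the whole URL, escaping the ampersand works too; a minimal sketch, with YOUR_KEY as a placeholder for a real Bing Maps key:
# escape the & so the shell passes the full query string to wget
wget -O map.jpg http://dev.virtualearth.net/REST/V1/Imagery/Map/Road/Bellevue%20Washington?mapLayer=TrafficFlow\&key=YOUR_KEY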

Related

Wget error: HTTP request sent, awaiting response... 401 Unauthorized Authorization failed

So I have to download a whole webpage as a worst-case fallback in case our network collapses. But with wget I only get a 401 Unauthorized error. I suspect Kerberos.
I've tried curl; at first it only printed the index page's source, then I added --output and it downloaded only the index page. But only the index, because everything beyond it is password protected.
wget --header="Authorization: Basic XXXXXXXXXXXXXXXXXXX" --recursive --wait=5 --level=2 --execute="robots = off" --page-requisites --html-extension --convert-links --restrict-file-names=windows --domains my.domain --no-parent https://webpage.internal
and curl
curl --anyauth --negotiate -u admin https://webpage.internal --output index.html
Is there any way to use curl for the whole website, or is there a simple fix for my wget command?
Thanks.
Okay, I solved it myself. I just needed to change:
-header="Authorization: Basic XXXXXXXXXXXXXXXXXXX"
to
-header="Authorization: OAuth XXXXXXXXXXXXXXXXXXX"
and it started to clone.
Edit: this did not solve it after all.
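If the Kerberos suspicion is right, then as far as I know wget (1.x) cannot do SPNEGO/Negotiate at all, while curl built with GSS-API can. A rough single-page sketch under that assumption (host and account name taken from the question; you need a valid ticket first), keeping in mind that curl has no recursive mode, so a full mirror would still need a list of URLs or a session cookie handed back to wget:
# hypothetical: get a Kerberos ticket for the account first
kinit admin
# '-u :' tells curl to take the credentials from the ticket cache and negotiate
curl --negotiate -u : -o index.html https://webpage.internal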

Nodejs headers not matching actual request

There is a problem with Node.js v7.9.0 where a request like the following can be made:
curl -i -H Accept:application/json -H range:bytes=1-8 -X GET http://localhost:8080/examples/text.txt
However, Node's logged request header doesn't match what was actually sent:
console.log(req.headers.range)
The logged value varies for the exact same request
(some values logged for that request: bytes=1-2, bytes=1-3, bytes=1-4, bytes=1-5, bytes=1-6, bytes=1-7, bytes=1-8)
Is this a problem with Node.js or with something else in the computer's setup? And how does one fix it?
Note that the requests are being made with "Rest Web service client" (a Chrome plugin), and the request above is the equivalent curl command.
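One way to rule the client in or out is to compare what actually goes over the wire with what Node logs. The verbose curl call below (same port and path as in the question) prints the outgoing Range header; if curl consistently sends bytes=1-8 and Node still logs varying values, the problem is on the Node side, otherwise it is most likely the Chrome plugin:
curl -v -H 'Range: bytes=1-8' -o /dev/null http://localhost:8080/examples/text.txt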

Docker - How to check if curl command inside Dockerfile had response code 200

Inside a Dockerfile I try to download an artifact using curl. I have noticed that even though the artifact doesn't exist (so the request gets a 404), the docker build keeps running.
RUN curl -H 'Cache-Control: no-cache' ${STANDALONE_LOCATION} -o $JBOSS_HOME/standalone/configuration/standalone.xml
Is there a way to check that the curl response code is 200 and throw an error otherwise?
You can add -f (or --fail) to the curl call, which makes curl fail silently on server errors and exit with a non-zero status, so the RUN step (and with it the docker build) aborts. From the curl manpage:
-f/--fail
(HTTP) Fail silently (no output at all) on server errors. This is mostly done to better enable scripts etc to better deal with failed attempts. In normal cases when a HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why and more). This flag will prevent curl from outputting that and return error 22.
This method is not fail-safe and there are occasions where non-successful response codes will slip through, especially when authentication is involved (response codes 401 and 407).
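If you want to insist on exactly 200 rather than rely on -f, one option is to capture the status code with curl's -w '%{http_code}' and fail the RUN step yourself. A rough sketch using the variables from the question's Dockerfile (not a drop-in; adjust to taste):
# hypothetical check: abort the build unless the download returns HTTP 200
RUN status=$(curl -s -w '%{http_code}' -H 'Cache-Control: no-cache' -o "$JBOSS_HOME/standalone/configuration/standalone.xml" "$STANDALONE_LOCATION"); \
    if [ "$status" -ne 200 ]; then echo "expected HTTP 200, got $status"; exit 1; fi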

How to use wget on a page with authentication

I've been searching the internet for information about wget, and found many posts on how to use it to log into a site that has a login page.
The site uses HTTPS, and the form fields the login page looks for are "userid" and "password". I've verified this by checking the Network tab in Chrome's developer tools (F12).
I've been using the following posts as guidelines:
http://www.unix.com/shell-programming-and-scripting/131020-using-wget-curl-http-post-authentication.html
And
wget with authentication
What I've tried:
testlab:/lua_curl_tests# wget --save-cookies cookies.txt --post-data 'userid=myid&password=123123' https://10.123.11.22/cgi-bin/acd/myapp/controller/method1
wget: unrecognized option `--save-cookies'
BusyBox v1.21.1 (2013-07-05 16:54:31 UTC) multi-call binary.
And also
testlab/lua_curl_tests# wget
http://userid=myid:123123@10.123.11.22/cgi-bin/acd/myapp/controller/method1
Connecting to 10.123.11.22 (10.123.11.22:80) wget: server returned
error: HTTP/1.1 403 Forbidden
Can you tell me what I'm doing wrong? Ultimately, what I'd like to do is log in, post data, and then grab the resulting page.
I'm also currently looking at curl, to see if I should really be doing this with curl (lua-curl) instead.
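Since the error above shows this is BusyBox wget, which simply doesn't have --save-cookies, one rough alternative is to do the form login with curl and keep the session cookie in a jar. This is a sketch under the assumption that the login form really posts userid/password to that controller URL (as observed in Chrome); the second URL is a placeholder for whatever protected page you actually want, and -k skips certificate verification in case the HTTPS host uses a self-signed certificate:
# log in and store the session cookie
curl -k -c cookies.txt -d 'userid=myid&password=123123' https://10.123.11.22/cgi-bin/acd/myapp/controller/method1
# reuse the cookie to fetch the protected page (placeholder path)
curl -k -b cookies.txt -o result.html https://10.123.11.22/some/protected/page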

Cronjob with password protected site (.htaccess)

I want to create a cronjob that opens a webpage at a regular interval.
This webpage is password protected by .htaccess (user=admin, password=pass). The command I run is the following:
wget --user=admin --password='pass' http://www.mywebsite.com/test.php
But cron gives me the following error:
--2012-05-02 10:14:01-- http://www.mywebsite.com/test.php
Resolving www.mywebsite.com... IP
Connecting to www.mywebsite.com|IP|:80... connected.
HTTP request sent, awaiting response... 401 Authorization Required
Reusing existing connection to www.mywebsite.com:80.
HTTP request sent, awaiting response... 403 Forbidden
2012-05-02 10:14:01 ERROR 403: Forbidden.
I have also tried doing:
wget admin:pass@http://www.mywebsite.com/test.php
but with similar errors. How can I solve this? Thank you in advance for your help.
You are making a small mistake: the http:// has to come first, before the credentials.
You have
admin:pass@http://www.mywebsite.com/test.php
Change it to
http://admin:pass@www.mywebsite.com/test.php
Hope that works.
wget --user admin --password pass http://www.mywebsite.com/test.php
Opens a website protected by an htaccess password every minute:
*/1 * * * * wget -O /dev/null --user admin --password pass "http://www.mywebsite.com/test.php" > /dev/null 2>&1
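If curl happens to be more convenient than wget in that environment, a roughly equivalent cron entry (same URL and credentials as above) would pass the credentials with -u and discard the output:
*/1 * * * * curl -s -u admin:pass -o /dev/null "http://www.mywebsite.com/test.php" > /dev/null 2>&1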
Add an auth parameter to the URL. This works for me when calling the URL directly:
http://yoururl.ext?auth=id:psw
I don't know how secure it is, though...
