Node.js headers not matching actual request

There is a problem with Node.js v7.9.0 where a request like the following is made:
curl -i -H Accept:application/json -H range:bytes=1-8 -X GET http://localhost:8080/examples/text.txt
However, the request header Node.js logs does not match what was sent:
console.log(req.headers.range)
The logged value varies for the exact same repeated request
(some values logged from that request: bytes=1-2, bytes=1-3, bytes=1-4, bytes=1-5, bytes=1-6, bytes=1-7, bytes=1-8).
Is this a problem with Node.js, or something else in the computer's setup? And how does one fix it?
Note: the requests are being made with the "Rest Web service client" Chrome plugin; the request above is the equivalent curl command.
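To narrow down whether the Chrome plugin or Node.js is altering the header, one quick check (a diagnostic sketch, not from the original post) is to capture the raw request bytes with netcat instead of the Node server and replay the same request:
# Terminal 1: listen on the app's port and print whatever the client sends, verbatim
# (BSD netcat syntax; traditional netcat needs "nc -l -p 8080" instead)
nc -l 8080
# Terminal 2: replay the request and inspect the Range header as it appears on the wire
curl -i -H Accept:application/json -H range:bytes=1-8 http://localhost:8080/examples/text.txt
If the Range header printed by netcat is already wrong, the client is at fault; if it is correct there, the problem is on the Node.js side.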

Related

Header param with underscore in HTTP requests not available at server side when requesting via Postman

Following is the curl export of the API call which is failing -
curl -X GET \
'http://endpoint.in/dummy/path?mobile=777777777' \
-H 'Content-Type: application/json' \
-H 'auth_token: iubsaicbsaicbasiucbsa'
The header param auth_token is not available at all on the server side, as checked from the logs.
The same curl, however, works when issued directly as a command. I have the latest Postman version, v6.2.3, installed.
Also, the same API endpoint works when requested via other tools like the Advanced REST Client for Chrome.
Previously, I had also read this thread: http://nginx.org/en/docs/http/ngx_http_core_module.html#underscores_in_headers
Many servers, like nginx, have a config setting which controls whether headers with an underscore in the name are discarded.
However, I could not verify this because I could not find out exactly how the server is deployed. It is a node application, and we run this command to start it -
nohup /bin/forever start -o logs/out.log -e logs/err.log app.js
ps -ef | grep node shows the following -
root 5981 1 0 Jul19 ? 00:00:00 /root/.nvm/v7.2.1/bin/node /usr/lib/node_modules/forever/bin/monitor app.js
root 5991 5981 0 Jul19 ? 00:00:04 /root/.nvm/v7.2.1/bin/node /usr/local/another/path/to/app.js
Update
This is interfering in our automated testing as well, via Jmeter.
Update
We have nginx running on the server, and it seems to be proxying requests to the node process. We observed that on the server where this works fine, the nginx config file has this setting -
underscores_in_headers on;
But this is not present in the config file of the server where it is not working.
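One way to confirm which configuration the running nginx actually loaded, and whether the directive is present (a sketch; it assumes a reasonably recent nginx where the -T option is available):
# Dump the full effective configuration and look for the directive
nginx -T 2>/dev/null | grep underscores_in_headers
# After adding "underscores_in_headers on;" to the relevant http or server block, reload nginx
nginx -s reload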
Another observation - I am using the latest Postman version, 6.2.5, where the issue is present. However, when I sent the same Postman collection to another teammate and he hit it after installing Postman, it worked for him. I am still not sure whether the issue is with Postman or the server setup.
Underscores are not explicitly forbidden in headers, but in the past, for CGI, underscores were converted to dashes. Because of this legacy, NGINX and Apache HTTPD treat underscores in headers as potentially problematic.
https://stackoverflow.com/a/22856867/2955337
You can explicitly set underscores_in_headers on;, but the default is off, so by default NGINX does not accept underscores
http://nginx.org/en/docs/http/ngx_http_core_module.html#underscores_in_headers
curl apparently converts underscores to dashes to circumvent this issue.
https://github.com/requests/requests/issues/1292#issuecomment-15997612
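A quick way to confirm this against the failing server (a sketch; the endpoint and token are the placeholders from the question) is to send the same request twice, once with the underscore and once with a dash, and compare what the application logs:
# Header with an underscore: silently dropped by nginx unless underscores_in_headers is on
curl -s -H 'auth_token: iubsaicbsaicbasiucbsa' 'http://endpoint.in/dummy/path?mobile=777777777'
# Same header spelled with a dash: passed through with the default nginx configuration
curl -s -H 'auth-token: iubsaicbsaicbasiucbsa' 'http://endpoint.in/dummy/path?mobile=777777777'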

How to download Bing Static Map in Linux

I'm trying to download a static map using the Bing Maps API. It works when I load the URL from Chrome, but when I tried curl or wget from Linux, I got an Auth Failed error.
The URLs are identical, but for some reason Bing is blocking calls from Linux?
Here's the commands I tried:
wget -O map.png http://dev.virtualearth.net/REST/V1/Imagery/Map/Road/...
curl -O map.png http://dev.virtualearth.net/REST/V1/Imagery/Map/Road/...
Error:
Resolving dev.virtualearth.net (dev.virtualearth.net)... 131.253.14.8
Connecting to dev.virtualearth.net (dev.virtualearth.net)|131.253.14.8|:80... connected.
HTTP request sent, awaiting response... 401 Unauthorized
Username/Password Authentication Failed.
--2016-10-24 15:42:30-- http://dev.virtualearth.net/REST/V1/Imagery/Map/Road/.../12?mapSize=340,500
Reusing existing connection to dev.virtualearth.net:80.
HTTP request sent, awaiting response... 401 Unauthorized
Username/Password Authentication Failed.
I'm not sure if it has anything to do with the key type; I've tried several, from Public Website to Dev/Test, but it still didn't work.
The URL needs to be wrapped in quotes, because otherwise the shell interprets the & in the query string as a command separator (everything after it becomes a separate, backgrounded command, so the key parameter never reaches the server):
wget 'http://dev.virtualearth.net/REST/V1/Imagery/Map/Road/...'
Examples
Via wget:
wget -O map.jpg 'http://dev.virtualearth.net/REST/V1/Imagery/Map/Road/Bellevue%20Washington?mapLayer=TrafficFlow&key=<key>'
Via curl:
curl -o map.jpg 'http://dev.virtualearth.net/REST/V1/Imagery/Map/Road/Bellevue%20Washington?mapLayer=TrafficFlow&key=<key>'
This has been verified under Ubuntu 16.04.
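If the 401 persists, one way to verify exactly what is being sent (a sketch, reusing the example URL above) is to let curl print the outgoing request, which shows whether the key parameter survived shell parsing:
# -v prints the request line; the full query string including key=<key> should appear after the path
curl -v -o map.jpg 'http://dev.virtualearth.net/REST/V1/Imagery/Map/Road/Bellevue%20Washington?mapLayer=TrafficFlow&key=<key>'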

Docker - How to check if curl command inside Dockerfile had response code 200

Inside a Dockerfile I try to download an artifact using curl. I have noticed that although the artifact doesn't exist (so the request gets a 404), the docker build keeps running.
RUN curl -H 'Cache-Control: no-cache' ${STANDALONE_LOCATION} -o $JBOSS_HOME/standalone/configuration/standalone.xml
Is there a way to check that the curl response code is 200 and throw an error otherwise?
You can add -f (or --fail) to the curl call, which causes curl to silently fail on server errors. From the curl manpage:
-f/--fail
(HTTP) Fail silently (no output at all) on server errors. This is mostly done to better enable scripts etc to better deal with failed attempts. In normal cases when a HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why and more). This flag will prevent curl from outputting that and return error 22.
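Applied to the RUN instruction from the question, that would look roughly like this (a sketch, using the same variables as the original Dockerfile):
RUN curl -f -H 'Cache-Control: no-cache' ${STANDALONE_LOCATION} -o $JBOSS_HOME/standalone/configuration/standalone.xml
With -f, curl exits with a non-zero status on a 404, so the RUN step, and therefore the build, fails.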
This method is not fail-safe and there are occasions where non-successful response codes will slip through, especially when authentication is involved (response codes 401 and 407).
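If you need the build to fail on anything other than an exact 200, including the cases -f lets through, a shell-level check of the status code is one possible alternative (a sketch, again using the question's variables):
# Write the body to the target file and capture only the HTTP status code on stdout
RUN code=$(curl -s -H 'Cache-Control: no-cache' -o $JBOSS_HOME/standalone/configuration/standalone.xml -w '%{http_code}' ${STANDALONE_LOCATION}); \
    if [ "$code" != "200" ]; then echo "Download failed with HTTP status $code"; exit 1; fi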

I receive a 502 Bad Gateway occasionally during the execution of my node.js app on IBM Bluemix

I am running a node.js app on Bluemix which basically is a REST API for read/write operations on a Cloudant (CouchDb) database.
Incoming requests are authenticated by a passport-http-bearer strategy injected as middleware into the Express framework. My app uses the bearer token to receive information about the user.
This is not very performant at the moment (we are working on a caching mechanism), but in general it works. When I send many requests in parallel (e.g. in functional tests), I sometimes receive a 502 Bad Gateway response instead of the expected results, which fails nearly every test suite run. On my local deployment it works without problems.
Maybe you have a scaling issue? When you say many parallel requests - is it possible that with that many requests the service reply times go beyond what the router expects (120 seconds, I think)?
Can you try to push your app a little harder with Apache Bench maybe?
ab -n 10000 -c 100 -s 120 -H "Authorization: Bearer <token>" https://your-app/
And then in parallel check responses with something like:
#!/bin/bash
BEARER=<your-token>
URL=<your-app>
TIMEFORMAT="TIME: %E"
while true; do R=$(time echo -e REQUEST: $(date)\\nREPLY: $(curl -X GET -s --insecure --header "Accept: application/json" --header "Authorization: Bearer $BEARER" "https://$URL") 2>>trace); echo "$R" >>trace; echo $R|grep -q "502" && echo -e "Found 502 reply\n$R"; done
@Jeff Sloyer is correct in suggesting retry logic in your push script. Additionally, you should check the status of the runtime and any services being used on the Bluemix status page. For more information on why you may be receiving this error, please see the information noted below:
https://www.ng.bluemix.net/docs/troubleshoot/managingapps.html
I have seen this with unreliable networking. I would use some retry logic in your push script to guarantee a deploy.
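A minimal form of that retry logic for the push step could look like this (a sketch; it assumes the standard cf CLI and an arbitrary limit of three attempts):
#!/bin/bash
# Retry "cf push" a few times to ride out transient network or gateway errors during deploy
for attempt in 1 2 3; do
  if cf push; then
    exit 0
  fi
  echo "cf push failed (attempt $attempt), retrying in 10s..." >&2
  sleep 10
done
echo "cf push failed after 3 attempts" >&2
exit 1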

Terminal - How to run the HTTP request 'PUT'

So, what I am trying to do is run an HTTP request, 'PUT', from the Terminal in Linux. Not POST, not GET, 'PUT'.
I know that in the Terminal you can just type 'GET http://example.com/', but when I did 'PUT http://example.com' (and a bunch of other variables after that...), Terminal said that PUT is not a command.
Here's what I tried:
:~$ PUT http://example.com
PUT: command not found
Well, is there a substitute for the command 'PUT', or some way of sending that HTTP request from terminal?
I don't want to use any external programs.... I don't want to download or install anything. Any other ways?
I would use curl to achieve this: curl -X PUT http://example.com
curl -X PUT -d arg=val -d arg2=val2 http://sssss.zzzz
will work, or use Postman (www.getpostman.com) for HTTP requests if the terminal is not your main concern; otherwise, curl is always there.
You are getting
Terminal said that PUT is not a command.
because the information is not being redirected via a network connection (to something that understands HTTP). bash has limited support by itself for communicating over a network, as discussed in
Tech Tip: TCP/IP Access Using bash
More on Using Bash's Built-in /dev/tcp File (TCP/IP)
Advanced Bash-Scripting Guide: Example 29-1. Using /dev/tcp for troubleshooting
Besides that, the HTTP specification says of PUT:
The PUT method requests that the enclosed entity be stored under the supplied Request-URI. If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server. If the Request-URI does not point to an existing resource, and that URI is capable of being defined as a new resource by the requesting user agent, the origin server can create the resource with that URI.
Clarifying: if you are PUTing to an existing URI, you may be able to do this, and the command implicitly needs some data to reflect a modification.
The example in HTTP - Methods (TutorialsPoint) shows a PUT command used to store an HTML body on a URI. Your script has to redirect the data (as well as the initial request) onto the network connection.
You could do all of that using a here-document, or redirecting a file, e.g., (using that example to show how it might be adapted):
cat >/dev/tcp/example.com/80 <<EOF
PUT /hello.htm HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
Host: www.tutorialspoint.com
Accept-Language: en-us
Connection: Keep-Alive
Content-type: text/html
Content-Length: 182

<html>
<body>
<h1>Hello, World!</h1>
</body>
</html>
EOF
But your script should also provide for reading the server's response.
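A bidirectional variant using a bash file descriptor could look roughly like this (a sketch; it keeps the same example request, switches to Connection: close so the read terminates, and assumes bash was built with /dev/tcp support):
# Open a read/write TCP connection to the server on file descriptor 3
exec 3<>/dev/tcp/www.tutorialspoint.com/80
# Send the request: headers, a blank line, then the body
cat >&3 <<EOF
PUT /hello.htm HTTP/1.1
Host: www.tutorialspoint.com
Connection: close
Content-Type: text/html
Content-Length: 53

<html>
<body>
<h1>Hello, World!</h1>
</body>
</html>
EOF
# Read the server's response, then close the descriptor
cat <&3
exec 3<&-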
Using the -X flag with whatever HTTP verb you want:
curl -X PUT -H "Content-Type: multipart/form-data;" -d arg=val -d arg2=val2 localhost:8080
This example also uses the -d flag to provide arguments with your PUT request.
