I am trying to create a multi-threaded downloader using Node.js. Currently I can only download the file over a single connection, with a simple http.get request.
To build the multi-threaded downloader I need to send some HTTP headers in my request, but I can't figure out which ones. What headers should I send so that I can download a range of bytes starting from a given offset?
var http = require('http');

var options = {
  host: 'hostname.com',
  path: '/path/to/a/large/file.zip',
  headers: {
    //Some headers which will help me download only a part of the file.
  }
};

callback = function(response) {
  response.on('data', function (chunk) {
    //write chunk to a file
  });
}

http.request(options, callback).end();
You need the Range header. An example is given in the Wikipedia article on HTTP header fields:

Range: bytes=500-999

For more detail, see 14.35 Range in the HTTP/1.1 Header Field Definitions (RFC 2616).
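For illustration, a minimal sketch of the request from the question with a Range header added (host and path are the placeholders from the question); a server that supports byte ranges should answer with 206 Partial Content:

var http = require('http');

var options = {
  host: 'hostname.com',
  path: '/path/to/a/large/file.zip',
  headers: {
    // Request only bytes 500 through 999 of the file (500 bytes starting at offset 500).
    'Range': 'bytes=500-999'
  }
};

http.get(options, function (response) {
  // 206 Partial Content means the server honoured the range;
  // a plain 200 means it ignored the header and sent the whole file.
  console.log('Status: ' + response.statusCode);
  console.log('Content-Range: ' + response.headers['content-range']);
  response.on('data', function (chunk) {
    // write the chunk to the file at the correct offset
  });
});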
I have a node.js GET API endpoint that calls some backend services to get data.

app.get('/request_backend_data', function(req, res) {
  ---------------------
});

When there is a delay getting a response back from the backend services, this endpoint (request_backend_data) gets triggered again exactly after 2 minutes. I have checked my application code, and there is no retry logic written anywhere for the delay case.

Does a node.js API endpoint get called twice in any case (like a delay or timeout)?
There might be a few reasons:

Some Chrome extensions can cause bugs; they have been causing a lot of issues recently. Run your app in a different browser, and if there is no issue there, it is a Chrome-specific problem.

The browser might be making an extra request for favicon.ico that hits your Express routes. To prevent this, use this module: https://www.npmjs.com/package/serve-favicon

Add a CORS policy. Your browser might be sending preflight (OPTIONS) requests. Use this npm package: https://www.npmjs.com/package/cors (a minimal wiring of these two middlewares is sketched after this list).
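A minimal sketch, assuming an Express app and a favicon file at public/favicon.ico (both hypothetical here), of wiring up the two middlewares mentioned above:

var express = require('express');
var favicon = require('serve-favicon'); // npm install serve-favicon
var cors = require('cors');             // npm install cors
var path = require('path');

var app = express();

// Serve the favicon from memory so the browser's favicon.ico request
// never reaches your own route handlers.
app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));

// Answer preflight (OPTIONS) requests and add the CORS headers.
app.use(cors());

app.get('/request_backend_data', function (req, res) {
  // ... call the backend services here ...
  res.json({ ok: true });
});

app.listen(3000);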
No, there is no default timeout-and-retry behaviour like that in Node.js itself.

Look for the issue in your frontend code:

it can be the JavaScript fetch API with a 'retry' option set
it can be a messed-up RxJS operator chain that implicitly emits events and triggers another REST request
it can be a full page reload on timeout, which re-fetches all necessary data from the backend
it can be request interceptors (in axios, Angular, etc.) which modify something and re-send the request
... many potential reasons, but almost certainly not in the backend (Node.js)

Just make a simple example and invoke your Node.js 'request_backend_data' endpoint with axios or XMLHttpRequest - you will see that the problem is not on the backend side.
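For example, a bare-bones check with axios, run once from the command line so no browser code is involved (the host and port are placeholders):

// npm install axios
var axios = require('axios');

// Hit the endpoint once, outside of any browser, and watch the server logs.
axios.get('http://localhost:3000/request_backend_data')
  .then(function (response) {
    console.log(response.status, response.data);
  })
  .catch(function (error) {
    console.error(error.message);
  });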
Try checking the API call with the code below, which follows redirects. Add headers as needed (e.g. 'Authorization': 'bearer dhqsdkhqd...etc').
var https = require('follow-redirects').https;
var fs = require('fs');

var options = {
  'method': 'GET',
  'hostname': 'foo.com',
  'path': '/request_backend_data',
  'headers': {
  },
  'maxRedirects': 20
};

var req = https.request(options, function (res) {
  var chunks = [];

  // Collect the response body as it streams in.
  res.on("data", function (chunk) {
    chunks.push(chunk);
  });

  // Print the full body once the response has ended.
  res.on("end", function (chunk) {
    var body = Buffer.concat(chunks);
    console.log(body.toString());
  });

  res.on("error", function (error) {
    console.error(error);
  });
});

req.end();
Paste into a file called test.js then run with node test.js.
I'm experimenting with migrating an ASP.NET REST backend to Azure Functions. My possibly naive approach was to create a catch-all function that proxies HTTP requests via Node's http module, then slowly replace endpoints with native Azure Functions with more specific routes. I'm using the CLI, and created a Node function like so:
var http = require("http")
module.exports = function (context, req) {
var body = new Buffer([])
var headers = req.headers
headers["host"] = "my.proxy.target"
var proxy = http.request({
hostname: "my.proxy.target",
port: 80,
method: req.method,
path: req.originalUrl,
headers: headers
}, function(outboundRes) {
console.log("Got response")
context.res.status(outboundRes.statusCode)
for(header in outboundRes.headers) {
console.log("Header", header)
if(header != "set-cookie")
context.res.setHeader(header, outboundRes.headers[header])
else
console.log(outboundRes.headers[header])
}
outboundRes.addListener("data", function(chunk) {
body = Buffer.concat([body, chunk])
})
outboundRes.addListener("end", function() {
console.log("End", context.res)
context.res.raw(body)
})
})
proxy.end(req.body)
}
This almost seems to work, but my backend sets several cookies using several Set-Cookie headers. Node hands these back as an array of cookie values, but it seems like Azure Functions doesn't accept arrays as values, or doesn't permit setting multiple headers with the same name, as seems to be allowed for Set-Cookie.
Is this supported? I've googled and have checked out the TypeScript source for Response, but it doesn't appear to be.
If this isn't supported, what Azure platform services should I use to fail over 404s from one app to another, so I can slowly replace the monolith with Functions? Function proxies would work if I could use them as fallbacks, but that doesn't appear possible.
Thanks!
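For reference, a self-contained sketch in plain Node (no Azure involved) showing that the http client hands repeated Set-Cookie headers back as a single array value, which is what the header-copying loop above has to special-case:

var http = require('http');

// Toy upstream that sends two Set-Cookie headers, like the real backend does.
var server = http.createServer(function (req, res) {
  res.setHeader('Set-Cookie', ['a=1; Path=/', 'b=2; Path=/']);
  res.end('ok');
});

server.listen(0, function () {
  http.get({ port: server.address().port, path: '/' }, function (res) {
    // Node collapses the repeated Set-Cookie headers into one array value.
    console.log(Array.isArray(res.headers['set-cookie'])); // true
    console.log(res.headers['set-cookie']);                // [ 'a=1; Path=/', 'b=2; Path=/' ]
    res.resume();
    server.close();
  });
});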
I have a file in memory (buffer) - there is no file on the file system.
I want to send that buffer to another server that talks HTTP.
For example, some API A creates a file in memory, SignServer manipulates such files, and responds with a new buffer. My API takes the file from A and feeds it to SignServer.
I tried sending the file to SignServer in multiple ways, but it keeps responding with status 400 (missing field 'data' in request).
What I tried:
var http = require('http');
var querystring = require('querystring');

var data = querystring.stringify({
  workerName: 'PDFSigner',
  data: file_buffer
});

var request = new http.ClientRequest({
  hostname: 'localhost',
  port: 8080,
  path: '/signserver/process',
  method: 'GET',
  headers: {
    'Content-Type': 'application/x-www-form-urlencoded',
    // I also tried 'multipart/form-data'
    'Content-Length': Buffer.byteLength(data)
  }
});

request.end(data);
I tried printing data, and it showed:
workerName=PDFSigner&data=
Which is bad because data wasn't set to file_buffer.
I tried printing file_buffer, and it does have content (not null, not undefined, actually has bytes inside).
So stringifying the buffer gave an empty string.
I tried doing the same thing with the request module and it didn't work either.
Note that SignServer isn't written in Node or JavaScript. It's a Java application, so it probably doesn't work with JSON (which is why I'm trying to do it with querystring). Yes, I tried sending JSON.
The reason why data is set to an empty string is described in this issue and the solution is given in this issue.
escape and stringify the buffer like so:
var data = querystring.stringify({
  workerName: 'PDFSigner',
  data: escape(file_buffer).toString('binary')
});
As #robertklep mentioned, your other problem is that you can't send a big file using application/x-www-form-urlencoded. You'd need to do it with multipart/form-data.
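A sketch of what that could look like with the form-data npm package (the filename and content type passed to append are placeholders; file_buffer is the in-memory Buffer from the question):

var http = require('http');
var FormData = require('form-data'); // npm install form-data

// file_buffer is the in-memory Buffer from the question.
var form = new FormData();
form.append('workerName', 'PDFSigner');
// filename and contentType are placeholders for whatever SignServer expects.
form.append('data', file_buffer, {
  filename: 'document.pdf',
  contentType: 'application/pdf'
});

var request = http.request({
  hostname: 'localhost',
  port: 8080,
  path: '/signserver/process',
  method: 'POST',
  headers: form.getHeaders() // multipart/form-data with the generated boundary
});

request.on('response', function (res) {
  console.log('SignServer answered with status ' + res.statusCode);
  res.resume();
});

// Stream the multipart body into the request; pipe ends the request when done.
form.pipe(request);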
My code is as follows
var http = require('http');

var host = ...

var postData = ({
  //some fun stuff
})

var postOptions = {
  host: host,
  path: '/api/dostuff',
  method: 'POST',
  headers: {
    AppID: "some stuff",
    Authorization: "OAuth token",
    "Content-Type": "application/json"
  },
};

var req = http.request(postOptions, function(res) {
  var data = '';
  res.on('data', function (chunk) {
    data += chunk;
  });
  res.on('end', function () {
    //sanitize data stuff here
    console.log("DATA HERE: " + data);
    return data;
  });
});

req.write(JSON.stringify(postData));
req.end();
It's a basic HTTP post to a C# server. The important stuff is in the headers. I send the app ID (which is ~50 characters) and the OAuth token (which can be thousands of characters). Right now, the server isn't set up to do anything with the Authorization header. It doesn't even check if its there.
My problem is that when I populate the Authorization header (or any header) with a few random characters as a test, the post succeeds. When I try it again with a full, valid Authorization token (which, to reiterate, is very long), it fails. No matter which part of the headers I fill, once they get too large the request returns an error. The error I receive is "Processing of the HTTP request resulted in an exception. Please see the HTTP response returned by the 'Response' property of this exception for details". I was fairly certain this was a server issue, but when I ran the exact same body and headers in Postman, I got a valid response.
Does anyone have any idea what is causing this?
There's a compiled constant that's defined to be 80k for Node HTTP headers. Are you running into that? I'd recommend seeing how big the header is with your OAuth token. It shouldn't exceed 80k though, and FWIW, even a kilobyte is huge for OAuth... But regardless... Try dumping the size of the headers (in bytes).
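A quick way to dump that size, reusing the postOptions object from the question (a rough approximation of what each header takes on the wire):

// Rough size on the wire of the request headers, in bytes.
function headerBytes(headers) {
  return Object.keys(headers).reduce(function (total, name) {
    // Each header line is sent roughly as "Name: value\r\n".
    return total + Buffer.byteLength(name + ': ' + headers[name] + '\r\n');
  }, 0);
}

// postOptions is the options object from the question above.
console.log('Approx. request header size:', headerBytes(postOptions.headers), 'bytes');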
I am trying to do a GET request from node.js. This request is to a REST server which will access Hbase and return the data. The GET request contains all necessary Hbase info (table, key, column-family etc.) in it. Following is the code.
var http = require('http');

var url = {
  host: '127.0.0.1',
  port: 8000,
  path: '/table-name/key/column-family',
  headers: {
    'Content-Type': 'application/octet-stream'
  },
};

http.get(url, function(resp) {
  console.log("Status: " + resp.statusCode);
  console.log("Header: " + JSON.stringify(resp.headers));
  resp.setEncoding('utf8');
  var completeResponse = '';
  resp.on('data', function (chunk) {
    completeResponse += chunk;
  });
  resp.on('end', function(chunk) {
    console.log(completeResponse);
  });
});
My problem is that the response I get is not always an octet-stream as requested. Most of the time the data is in a valid format, with response headers like the following.
{"content-length":"454","x-timestamp":"1395469504346","content-type":"application/octet-stream"}
But, say, 1 out of 10 times the response is an XML string, with headers like the following.
{"content-type":"text/xml","content-length":"793"}
The status code is 200 in both cases and I am always requesting for an existing key. This behavior is seemingly random and not caused by any particular key.
How do I ensure that the response is always an octet-stream and not XML / JSON?
As stated in the comments, you need to set the Accept header to specify the content type you expect (will accept) in the response from the server. Note that you may accept more than one type of response.

The Content-Type header specifies the type of what is in the body of the message. It can be set by your client in the case of a POST/PATCH request, or by the server in its response. On the receiving side, it is used to know how to handle the body content.

For more detail, you can refer to the comprehensive MDN documentation on content negotiation.
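For illustration, the request from the question with the Content-Type header swapped for an Accept header (a sketch; whether the REST gateway honours it depends on its content negotiation):

var http = require('http');

var url = {
  host: '127.0.0.1',
  port: 8000,
  path: '/table-name/key/column-family',
  headers: {
    // Tell the server which representation the client is willing to receive.
    'Accept': 'application/octet-stream'
  }
};

http.get(url, function (resp) {
  console.log('Status: ' + resp.statusCode);
  console.log('Content-Type: ' + resp.headers['content-type']);
  var chunks = [];
  resp.on('data', function (chunk) {
    chunks.push(chunk);
  });
  resp.on('end', function () {
    console.log('Received ' + Buffer.concat(chunks).length + ' bytes');
  });
});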