This question may be a little overwhelming. I feel close to understanding how video seeking works in Google Chrome, but it is still very confusing to me and support is difficult to find.
If I am not mistaken, Chrome initially sends a request with the header Range: bytes=0- to test whether the server understands partial content requests, and expects the server to respond with status code 206.
I have read the following answers to get a better understanding. I need more rep to link them; their topics are:
can't seek html5 video or audio in chrome
HTML5 video will not loop
HTTP Range header
My server is powered by Node.js, and I am having trouble getting continuous range requests out of Chrome during playback. When a video is requested, the server receives bytes=0-, the server responds with status code 206, and then the media player breaks.
My confusion is with the response header, because I am not sure how to construct it and handle subsequent range requests:
Do I respond with a status code 200 or 206 initially?
When I respond with 206 I only receive bytes=0-, but when I respond with
200 I receive bytes=0- and after that bytes=355856504-.
If I were to subtract 355856504 from the total Content-Length of the video file, the result is 58, and bytes=0-58 seems like a valid Content-Range?
But after those two requests, I receive no more range requests from Chrome.
I am also unsure whether the Content-Range in the response header should look like "bytes=0-58" or like "bytes=0-58/355856562", for example.
Here is the code:
if (req.headers.range) console.info(req.headers.range); // prints bytes=0-

const type = rc.sync(media, 0, 32);      // determines mime type
const size = fs.statSync(media)["size"]; // determines content length

// String range, initially "bytes=0-" according to Chrome
var Strange = req.headers.range;

res.set({
    "Accept-Ranges": "bytes",
    "Content-Type": ft(type).mime,
    "Content-Length": size - Strange.replace(/bytes=/, "").split("-")[0],
    "Content-Range": Strange + size + "/" + size
});

//res.status(206); // one request from chrome, then breaks
res.status(200);   // two requests from chrome, then breaks

// this prints 35585604-58, whereas i expect something like 0-58
console.log("should serve range: " +
    parseInt(Strange.replace(/bytes=/, "").split("-")[0]) + "-" +
    parseInt(size - Strange.replace(/bytes=/, "").split("-")[0])
);

// this function reads some bytes from 'media', and then streams it:
fs.createReadStream(media, {
    start: 0,
    end: parseInt(size - Strange.replace(/bytes=/, "").split("-")[0]) // 58
}).pipe(res);
Screenshots of the request and response headers when status code is 200:
first response and request headers
second response and request headers
Screenshot of the request and response header when status code is 206:
Need more rep to show another screenshot.
Essentially the request is:
"Range: bytes=0-"
and the Content-Range response is:
"bytes=0-355856562/355856562"
One apparent error is that you are returning an invalid value in the Content-Range header. See the specification: it should be bytes 0-355856561/355856562, since the second value after the dash is the last byte position, not the length.
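For reference, here is a minimal sketch of a handler that serves such a range correctly. It assumes an Express-style req/res like the code in the question; serveRange, media and mimeType are placeholder names, not part of the original code.

const fs = require("fs");

// Minimal sketch, assuming an Express-style req/res as in the question.
// "media" is the file path and "mimeType" the already-determined MIME type.
function serveRange(req, res, media, mimeType) {
    const size = fs.statSync(media).size;
    const range = req.headers.range;

    if (!range) {
        // No Range header: plain 200 with the whole file.
        res.set({ "Content-Type": mimeType, "Content-Length": size, "Accept-Ranges": "bytes" });
        res.status(200);
        fs.createReadStream(media).pipe(res);
        return;
    }

    // Range looks like "bytes=START-END"; END may be missing ("bytes=0-").
    const [startStr, endStr] = range.replace(/bytes=/, "").split("-");
    const start = parseInt(startStr, 10);
    const end = endStr ? parseInt(endStr, 10) : size - 1; // last byte position, inclusive

    res.set({
        "Accept-Ranges": "bytes",
        "Content-Type": mimeType,
        "Content-Length": end - start + 1,                          // length of this part, not of the whole file
        "Content-Range": "bytes " + start + "-" + end + "/" + size  // note "bytes ", not "bytes="
    });
    res.status(206); // Partial Content
    fs.createReadStream(media, { start: start, end: end }).pipe(res);
}

The key points are that the response status is 206, Content-Range uses the syntax bytes start-end/total (the end value is an inclusive byte position), and Content-Length is the number of bytes in the part being sent, not the size of the whole file.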
Related
Recently we decided to play some video in the browser at my company. We want to support Safari, Firefox and Chrome. To stream video, Safari requires that we implement HTTP range requests in ServiceStack. Our server supports range requests, as indicated by the 'Accept-Ranges: bytes' header being returned in the response.
Looking at previous questions, it seems we would want to add a pre-request filter, but I don't understand the details of doing so. Adding this to our AppHost.cs's Configure function does do something:
PreRequestFilters.Add((req, res) => {
    if (req.GetHeader(HttpHeaders.Range) != null) {
        var rangeHeader = req.GetHeader(HttpHeaders.Range);
        rangeHeader.ExtractHttpRanges(req.ContentLength, out var rangeStart, out var rangeEnd);
        if (rangeEnd > req.ContentLength - 1) {
            rangeEnd = req.ContentLength - 1;
        }
        res.AddHttpRangeResponseHeaders(rangeStart, rangeEnd, req.ContentLength);
    }
});
Setting a breakpoint, I can see that this code is hit. However, rangeEnd always equals -1, and ContentLength always equals 0. A rangeEnd of -1 is invalid per the spec, so something is wrong. Most importantly, adding this code breaks video playback in Chrome as well, so I'm not sure I'm on the right track. It does not break the loading of pictures.
If you would like the details of the Response/Request headers via the network let me know.
Using the following code:
var net = require("net");

var client = new net.Socket();
client.connect(8080, "localhost", function() {
    client.write("GET /edsa-jox/testjson.json HTTP/1.1\r\n");
    client.write("Accept-Encoding: gzip\r\n");
    client.write("Host: localhost:8080\r\n\r\n");
});

client.on("data", function(data) {
    console.log(data.toString("utf-8", 0, data.length));
});
I get the following response:
HTTP/1.1 200 OK
Date: Thu, 20 May 2021 22:45:26 GMT
Server: Apache/2.4.25 (Win32) PHP/5.6.30
Last-Modified: Thu, 20 May 2021 20:14:17 GMT
ETag: "1f-5c2c89677c5c7"
Accept-Ranges: bytes
Content-Length: 31
Content-Type: application/json
{"message":"message from json"}
And this response is shown in the console immediately. But since it is coming from the "data" event I guess it would have been coming in chunks if the response was bigger.
So I also tested with the following (all else equal):
var data = "";

client.on("data", function(d) {
    console.log("1");
    data += d.toString("utf-8", 0, d.length);
});

client.on("end", function(d) {
    console.log(data);
});
I was thinking that I could use the "end" event to be sure that I had the full set of data before doing something else. That worked, I guess, but the unexpected thing was that the "1" was shown immediately, while it took a couple of seconds before the "end" event was triggered.
Question 1) Why is there such a delay between the last executed "data" event and the "end" event? Is there a better way to do it?
Question 2) Given the above response, which contains both a bunch of headers as well as a content body, what is the best approach to extract the body part?
Note: I want to do this with the net library, not fetch or the http library (or any other abstraction). I want it to be as fast as possible.
I can only see two valid reasons to do it all by hand:
extreme speed needs => then you should consider using Go or another compiled language
learning (always interesting)
I would recommend you use Express, or any other npm package, rather than reinventing the wheel.
However, I'll help you with what I know.
The first thing is to properly decode UTF-8 strings. You need to use string_decoder, because if a data chunk is incomplete and you call data.toString('utf8'), you will get a mangled character appended. It doesn't happen often, but it is hard to debug.
Here is a valid way to do it:
const { StringDecoder } = require('string_decoder');

var decoder = new StringDecoder('utf8');
var stdout = '';
stream.on('data', (data) => {
    stdout += decoder.write(data);
});
https://blog.raphaelpiccolo.com/post/827
Then, to answer your questions:
1) I don't know; it may be related to gzip. The server can be slow to close the connection, or it's the client's fault, or the network itself. I would try with other clients/servers to be sure, and start profiling.
2) You need to read the HTTP specifications to handle all the edge cases (HTTP/1, WebSockets, HTTP/2). But I think you are lucky: the headers are always separated from the body by a blank line (CRLF CRLF, i.e. \r\n\r\n). So if you loop through the decoded data coming from the stream and look for that pattern, anything coming after it is the body.
One special case to think about is keep-alive: if the client and server are in keep-alive mode, the connection won't be closed between calls, so you may need to parse the Content-Length header to know how many bytes of body to wait for.
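As a rough illustration of the second point (and of the keep-alive note), here is one way the header/body split could be done on the decoded text. It is only a sketch: it assumes a single, plain HTTP/1.1 response without chunked transfer encoding, and splitResponse is a made-up helper name.

// Sketch only: assumes one plain HTTP/1.1 response, no chunked encoding.
function splitResponse(raw) {
    // Headers end at the first blank line (CRLF CRLF per the HTTP spec).
    const separator = raw.indexOf("\r\n\r\n");
    if (separator === -1) return null; // headers are not complete yet

    const headerBlock = raw.slice(0, separator);
    const body = raw.slice(separator + 4);

    // Parse the header lines into an object (lower-cased names), skipping the status line.
    const headers = {};
    for (const line of headerBlock.split("\r\n").slice(1)) {
        const i = line.indexOf(":");
        if (i > -1) headers[line.slice(0, i).trim().toLowerCase()] = line.slice(i + 1).trim();
    }

    // With keep-alive the socket stays open, so compare the body length against
    // Content-Length instead of waiting for the "end" event.
    const expected = parseInt(headers["content-length"] || "0", 10);
    const complete = Buffer.byteLength(body, "utf8") >= expected;

    return { headers: headers, body: body, complete: complete };
}

Calling splitResponse(data) on the accumulated, decoded string after each "data" event and checking the complete flag is usually a more reliable trigger than waiting for "end" when the connection is kept alive.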
When a Node HTTP/2 server creates a new push stream, what is the purpose of the request header vs. the response header?
Server code:
http2Server.on('stream', (stream) => {
    stream.pushStream({ ':path': '/' }, (err, pushStream, headers) => { // send request headers
        pushStream.respond({ ':status': 200 }); // send response headers
        pushStream.end('some pushed data');
    });
    stream.end('some data');
});
Client code:
clientHttp2Session.on('stream', (pushedStream, requestHeaders) => { // receive request headers
    pushedStream.on('push', (responseHeaders) => { // receive response headers
        /* Process response headers */
    });
    pushedStream.on('data', (chunk) => { /* handle pushed data */ });
});
Both of these must be sent before any data is sent, so it seems one of them is redundant?
MDN states:
Request headers contain more information about the resource to be fetched, or about the client requesting the resource.
Response headers hold additional information about the response, like its location or about the server providing it.
However, that seems to be slanted towards a client-request/server-response model, which doesn't apply to push.
The "request header" as you call it, maps to the PUSH_PROMISE frame in HTTP/2 (you can see this in the NodeJS source code).
A PUSH_PROMISE frame is defined in the HTTP/2 spec, and is used to tell the client "hey I'm going to pretend you sent this request, and send you a response to that 'fake request' next."
It is used to inform the client this response is on it's way, and if it needs it, then not to bother making another request for it.
It also allows the client to cancel this push request with a RST_STREAM frame to say "No thanks, I don't want that." This may be because the server is pushing a resource that the client already has in it's cache or for some other reason.
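To make the cancellation path concrete, here is a small sketch of a client that rejects a push it does not want. The alreadyCached check is hypothetical; clientHttp2Session is the session from the question, and NGHTTP2_CANCEL comes from Node's http2.constants.

const http2 = require("http2");

clientHttp2Session.on("stream", (pushedStream, requestHeaders) => {
    // Hypothetical check: pretend we already have this resource in a local cache.
    if (alreadyCached(requestHeaders[":path"])) {
        // Sends RST_STREAM with the CANCEL error code: "No thanks, I don't want that."
        pushedStream.close(http2.constants.NGHTTP2_CANCEL);
        return;
    }

    pushedStream.on("push", (responseHeaders) => { /* process response headers */ });
    pushedStream.on("data", (chunk) => { /* handle pushed data */ });
});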
My Node server has some strange behaviour when it comes to a GET endpoint that replies with a big JSON payload (30-35 MB).
I am not using any npm package. Just the core API.
The unexpected behaviour only happens when the server is queried from the Internet; it behaves fine when queried from the local network.
The problem is that the server stops writing to the response after the first 1260 bytes of the content body. It does not close the connection, nor does it throw an error. Insomnia (the REST client I use for testing) just states that it received a 1260 B chunk. If I query the same endpoint from a local machine, it says that it received more and bigger chunks (a few KB each).
I don't even think the problem is caused by Node, but since I am on a clean Raspberry Pi (I installed Raspbian and then just Node v13.0.1) and the only process I run is node.js, I don't know how to find the source of the problem; there is no load balancer or web server to blame. The public IP also seems OK, and every other endpoint is working fine (they reply with less than 1260 B per request).
The code for that endpoint looks like this:
const text = url.parse(req.url, true).query.text;
if (text.length > 4) {
    let results = await models.fullTextSearch(text);
    results = await results.map(async result => {
        result.Data = await models.FindData(result.ProductID, 30);
        return result;
    });
    results = await Promise.all(results);
    results = JSON.stringify(results);

    res.writeHead(200, {'Content-Type': 'application/json', 'Transfer-Encoding': 'chunked', 'Access-Control-Allow-Origin': '*', 'Cache-Control': 'max-age=600'});
    res.write(results);
    res.end();
    break;
}
res.writeHead(403, {'Content-Type': 'text/plain', 'Access-Control-Allow-Origin': '*'});
res.write("You made an invalid request!");
break;
Here are a number of things to do in order to debug this (a combined sketch follows the list):
Add console.log(results.length) to make sure the length of the data is what you expect it to be.
Add a callback to res.end(function() { console.log('finished sending response')}) to see if the http library thinks it is done sending the response.
Check the return value from res.write(). If it is false (indicating that not all data has yet been sent), add a handler for the drain event and see if it gets called.
Try increasing the sending timeout with res.setTimeout() in case it's just taking too long to send all the data.
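A rough sketch combining these checks might look like the following; the log messages are illustrative only, and it reuses the results and res variables from the question's handler.

console.log("payload length:", results.length);    // 1. is the data as big as expected?

res.setTimeout(5 * 60 * 1000);                     // 4. allow more time to send everything

res.writeHead(200, {'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*', 'Cache-Control': 'max-age=600'});

const flushed = res.write(results);                // 3. false means the data is still buffered
if (!flushed) {
    res.once('drain', () => console.log('buffer drained, data is flowing again'));
}

res.end(() => console.log('finished sending response')); // 2. the http library thinks it is done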
I am trying to get data from the Bing search API, and since the existing libraries seem to be based on old, discontinued APIs, I thought I'd try it myself using the request library, which appears to be the most common choice.
My code looks like:
var SKEY = "myKey...." ,
ServiceRootURL = 'https://api.datamarket.azure.com/Bing/Search/v1/Composite';
function getBingData(query, top, skip, cb) {
var params = {
Sources: "'web'",
Query: "'"+query+"'",
'$format': "JSON",
'$top': top, '$skip': skip
},
req = request.get(ServiceRootURL).auth(SKEY, SKEY, false).qs(params);
request(req, cb)
}
getBingData("bookline.hu", 50, 0, someCallbackWhichParsesTheBody)
Bing returns some JSON and I can work with it sometimes, but if the response body contains a large amount of non-ASCII characters, JSON.parse complains that the string is malformed. I tried switching to an ATOM content type, but there was no difference; the XML was invalid. Inspecting the response body as available in the request() callback actually shows garbled content.
So I tried the same request with some python code, and that appears to work fine all the time. For reference:
r = requests.get(
    'https://api.datamarket.azure.com/Bing/Search/v1/Composite?Sources=%27web%27&Query=%27sexy%20cosplay%20girls%27&$format=json',
    auth=HTTPBasicAuth(SKEY, SKEY))
stuffWithResponse(r.json())
I am unable to reproduce the problem with smaller responses (e.g. limiting the number of results) and unable to identify a single result which causes the issue (by stepping up the offset).
My impression is that the response gets read in chunks, transcoded somehow and reassembled back in a bad way, which means the json/atom data becomes invalid if some multibyte character gets split, which happens on larger responses but not small ones.
Being new to node, I am not sure if there is something I should be doing (setting the encoding somewhere? Bing returns UTF-8, so this doesn't seem needed).
Does anyone have any idea what is going on?
FWIW, I'm on OSX 10.8, node is v0.8.20 installed via macports, request is v2.14.0 installed via npm.
I'm not sure about the request library, but the default Node.js one works well for me. It also seems a lot easier to read than your library, and the response does indeed come back in chunks.
http://nodejs.org/api/http.html#http_http_request_options_callback
or, for HTTPS (like your request): http://nodejs.org/api/https.html#https_https_request_options_callback (the same really, though)
For the options, a little tip: use url.parse:
var url = require('url');

var params = '{}';
var dataURL = url.parse(ServiceRootURL);
var post_options = {
    hostname: dataURL.hostname,
    port: dataURL.port || 443, // the ServiceRootURL is https, so default to 443
    path: dataURL.path,
    method: 'GET',
    headers: {
        'Content-Type': 'application/json; charset=utf-8',
        'Content-Length': params.length
    }
};
Obviously, params needs to be the data you want to send.
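To complete the picture, the options object above would then be used with https.request roughly like this; the response handling sets an explicit UTF-8 encoding, which also addresses the multibyte concern from the question.

var https = require('https');

var req = https.request(post_options, function (res) {
    res.setEncoding('utf8');            // decode chunks as UTF-8 so split multibyte characters survive
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        var parsed = JSON.parse(body);  // parse only once the full body has arrived
        // ... use parsed ...
    });
});
req.on('error', function (e) { console.error(e); });
req.end();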
I think your request authentication is incorrect. Authentication has to be provided before request.get.
See the documentation for request HTTP authentication. qs is an object that has to be passed to the request options, just like url and auth.
Also, you are using the same req for a second request. You should know that request.get returns a stream for the GET of the given URL, so your next request using req will go wrong.
If you only need HTTPBasicAuth, this should also work
//remove req = request.get and subsequent request
request.get('http://some.server.com/', {
    'auth': {
        'user': 'username',
        'pass': 'password',
        'sendImmediately': false
    }
}, function (error, response, body) {
});
The callback argument gets three arguments. The first is an error when applicable (usually from the http.Client object, not the http.ClientRequest object). The second is an http.ClientResponse object. The third is the response body, a String or Buffer.
The second object is the response stream. To use it you must listen for the 'data', 'end', 'error' and 'close' events.
Be sure to use the arguments correctly.
You have to pass the option {json: true} to enable JSON parsing of the response.
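For example, a sketch using the request API from the question (SKEY, params and ServiceRootURL are the question's own names):

request.get(ServiceRootURL, {
    qs: params,
    auth: { user: SKEY, pass: SKEY, sendImmediately: false },
    json: true   // parses the JSON response body automatically
}, function (error, response, body) {
    // body is already a parsed object here, no JSON.parse needed
});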