Can an RTSP TEARDOWN request be received before a SETUP request? - rtsp

I was under the impression that since a TEARDOWN request releases resources that are normally allocated when a SETUP is made, a TEARDOWN request was only necessary after the SETUP request.
However, I just had an Android device that sent a TEARDOWN immediately after receiving the response to a DESCRIBE request (before the SETUP request, the Session: parameter of the request was empty).
This was unexpected, and I was not able to confirm, even by re-reading the RFC, whether this is legitimate or not.
Can anybody provide information on this? Ideally with an official reference...

Servers should typically be prepared to talk to a variety of clients, and it is a good idea to design servers defensively: clients might send weird commands, and servers should respond reasonably. TEARDOWN stops streaming, so it makes no sense to issue it before SETUP; however, it is still legitimate to send this command without a prior SETUP. The server receiving it would simply have nothing to do and no resources to free. It is up to the server to decide whether to respond with 200 OK, or with another status indicating that the command makes no sense in this context (e.g. 454 Session Not Found when the provided session identifier is not valid).
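The decision described above can be sketched as a small handler. This is only an illustration, assuming a hypothetical in-memory session map; the names here are made up, not from any RTSP library:

```javascript
// Sketch: deciding how to answer a TEARDOWN, with a hypothetical
// in-memory session map standing in for the server's session state.
const sessions = new Map(); // sessionId -> resources allocated by SETUP

function handleTeardown(sessionId) {
  if (sessionId && sessions.has(sessionId)) {
    sessions.delete(sessionId);          // free what SETUP allocated
    return { status: 200, reason: 'OK' };
  }
  // No SETUP was ever made (or the id is stale): nothing to free.
  // RFC 2326 defines 454 "Session Not Found" for an invalid session id;
  // answering 200 OK is also defensible, since there is nothing to do.
  return { status: 454, reason: 'Session Not Found' };
}
```

Either way, the important part is that the server answers rather than erroring out on the unexpected request.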

Related

Sending a response after jobs have finished processing in Express

So, I have an Express server that accepts a request. The request triggers web scraping that takes 3-4 minutes to finish. I'm using Bull to queue the jobs and process them as and when they are ready. The challenge is to send the results from the processed jobs back as the response. Is there any way I can achieve this? I'm running the app on Heroku, but Heroku has a request timeout of 30 sec.
You don't have to wait until the back end has finished. Identify who is making the request, authenticate the user, and then do a res.status(202).send({ message: "text" });
Even though the response has been sent to the client, you can keep processing.
NOTE: Do not put a return keyword before res.status...
The HyperText Transfer Protocol (HTTP) 202 Accepted response status code indicates that the request has been accepted for processing, but the processing has not been completed; in fact, processing may not have started yet. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place.
202 is non-committal, meaning that there is no way in HTTP to later send an asynchronous response indicating the outcome of processing the request. It is intended for cases where another process or server handles the request, or for batch processing.
You always need to send a response quickly because of the timeout. Since your process takes about 3-4 minutes, it is better to send a response immediately mentioning that the request was successfully received and will be processed.
Now, when the task is completed, you can use socket.io or web sockets to notify the client from the server side. You can also pass a response.
The client side also can check continuously if the job was completed on the server side, this is called polling and is required with older browsers which don't support web sockets. socket.io falls back to polling when browsers don't support web sockets.
Visit socket.io for more information and documentation.
The best approach to this problem is the socket.io library. It can send data to the client whenever you want, triggering a function on the client side which receives the data. Socket.io supports different languages and it is really easy to use.
Create a jobs table in a database or persistent storage like Redis.
Save each job in the table upon request, with a unique id.
Update the status to "running" when starting the job.
Send HTTP 202 Accepted.
At the client, implement a polling script; at the server, implement a job-status route/API. The API accepts a job id, queries the jobs table, and responds with the status.
When the job is finished, update the jobs table with status "completed"; when the job errors, update the jobs table with status "failed", and maybe add a description column to store the cause of the error.
This solution makes your system horizontally scalable and distributed. It also protects against the consequences of unexpected connection drops. The polling interval depends on the average job completion duration; I would recommend an average interval of 5 seconds.
This can even be improved by storing job completion progress in the jobs table, so that the client can display a progress bar.
Request timeouts occur when your connection is idle; different servers implement this differently, so the timeout duration varies.
The solution to this timeout problem is to keep the connection open, that is, the connection between client and server should remain open.
So for such scenarios use WebSockets, which ensure that after the initial request/response handshake between client and server, the connection stays open.
There are many libraries for implementing realtime connections, e.g. PubNub or socket.io. This is the same technology used for live streaming.
Node.js can handle many concurrent connections, it is lightweight, and it won't use many resources.

ForEach Bulk Post Requests are failing

I have this script where I'm taking a large dataset and calling a remote API with a POST method, using request-promise. If I do the requests individually, they work just fine. However, if I loop through a sample set of 200 records using forEach and async/await, only about 6-15 of the requests come back with a status of 200; the others return a 500 error.
I've worked with the owner of the API, and their logs only show the 200 requests. So I don't think Node is actually sending out the ones that come back as 500.
Has anyone run into this, and/or know how I can get around this?
To my knowledge, there's no code in node.js that automatically makes a 500 http response for you. Those 500 responses are apparently coming from the target server's network. You could look at a network trace on your server machine to see for sure.
If they are not in the target server logs, then it's probably coming from some defense mechanism deployed in front of their server to stop misuse or overuse of their server (such as rate limiting from one source) and/or to protect its ability to respond to a meaningful number of requests (proxy, firewall, load balancer, etc...). It could even be part of a configuration in the hosting facility.
You will likely need to find out how many simultaneous requests the target server will accept without error, and then modify your code to never send more than that number of requests at once. They could also be measuring requests/sec, so it might not only be an in-flight count; it could also be the rate at which requests are sent.
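One way to cap the number of in-flight requests is a small concurrency limiter. This is a sketch, not a library: tasks are functions returning promises (e.g. () => rp.post(...) with request-promise, as in the question), and the right limit has to be found by experimenting against the target server:

```javascript
// Run `tasks` with at most `limit` promises pending at any moment,
// preserving result order.
async function runLimited(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    // Each worker pulls the next unclaimed task until none remain.
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, tasks.length) },
    () => worker()
  );
  await Promise.all(workers);
  return results;
}
```

With limit set below the server's threshold, the 500s from its rate-limiting defenses should disappear; libraries like p-limit implement the same idea.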

can an http client lie about a request's content-length?

Specifically, I'm talking about this from the point of view of a Node.js server. It's hard to test this in Node.js because http.client validates the content-length.
Can a client lie about a body's content-length, at least by the time it reaches http.createServer().on('request')?
Can a client send a body that is larger than the content-length? I don't think this is possible, as it is most likely checked at the parser level, but I want proof.
Can a client send a body that is smaller than the content-length? I think this may be true.
I'm worried about malicious users that may not use a well-behaved HTTP client.
Of course it's possible. You could simply open a TCP socket connection to whatever IP/port a web server is running on and write anything you'd like there. Of course well-behaved clients don't do this, but there's nothing stopping a client from doing so.
However, this tends to be the problem of whatever HTTP stack you're using on the server, in this case Node. It needs to 1) not blindly read in (huge) content-length bytes, as that could crash the server miserably, and 2) make sure (for reasonable-sized requests) that the client isn't lying.
In the case of node, it's visible right about here: https://github.com/joyent/node/blob/master/deps/http_parser/http_parser.c#L1471
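To make the "nothing stops a client" point concrete, here is a raw HTTP request assembled by hand, with a Content-Length that does not match the body. A malicious client would write exactly these bytes straight to a TCP socket (e.g. with net.connect), bypassing any well-behaved client library:

```javascript
const body = 'short';
const lyingLength = 1000; // claims far more bytes than will ever arrive

// Nothing in the protocol framing forces the header to match the body;
// it is just text on the wire.
const rawRequest =
  'POST /upload HTTP/1.1\r\n' +
  'Host: example.com\r\n' +
  `Content-Length: ${lyingLength}\r\n` +
  '\r\n' +
  body; // ...and then the connection stalls or closes early

// A server that trusts the header would wait for 995 bytes that never
// come, which is why servers need read timeouts and body-size limits.
```

Sending the opposite lie (a body larger than the declared length) is just as easy to write; what differs is how each server's parser reacts, which is the question the answers above address.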
Just try it and see ;-)
It depends.
Of course you can send a wrong content-length; the question is what the client does with it.
There are scripted or server-side clients that may attempt to download your content and get completely messed up.
Most browsers seem to have some error-tolerant behaviour implemented, but there are many different implementations.
I remember an old IE keeping the socket open and never closing it; this ended up in a never-ending page load.
Some Netscape browsers seemed to be totally dependent on the right content-length.
A good idea is to leave the Content-Length header out altogether; this should work in every browser.

Node.js options to push updates to some microcontrollers that have an HTTP 1.1 stack

The title pretty well says it.
I need the microcontrollers to stay connected to the server to receive updates within a couple of seconds and I'm not quite sure how to do this.
The client in this case is very limited to say the least and it seems like all the solutions I've found for polling or something like socket.io require dropping some significant JavaScript to the client.
If I'm left having to reimplement one of those libraries in C on the micro I could definitely use some pointers on the leanest way to handle it.
I can't just pound the server with constant requests because this is going to increase to a fair number of connected micros.
Just use ordinary long polling: each controller initially makes an HTTP request and waits for a response, which happens when there's an update. Once the controller receives the response, it makes another request. Lather, rinse, repeat. This won't hammer the server because each controller makes only one request per update, and node's architecture is such that you can have lots of requests pending, since you aren't creating a new thread or process for each active connection.

In NodeJS, if I don't respond to a request, can anything negative happen to my server?

I have a nodeJS server running. There are some requests that the server will receive that don't need a response (just updating in the server). If the update fails, it isn't something that the client will need to worry about. In order to save bandwidth, I'd like to not respond to said requests. Can not responding to requests somehow affect my server's performance?
Assuming you are using HTTP, you have to at least return an HTTP response code. If you don't, you are violating HTTP: the client is going to wait for a response, and will die trying (i.e. it will time out after a while).
According to the documentation for end, you must call end for every response. That will send a response code for you if you don't specify one. So yes, you need to respond.