How to limit stream length in node.js

My Node.js web service will accept user uploads. I need to limit the maximum number of bytes a user can upload. I don't want to rely solely on the Content-Length header, since an invalid value can be provided. Is there a way to limit the length of the request stream I pipe to disk or a database?
I thought about a stream.Transform that throws an error if the stream grows longer than the Content-Length header value. But perhaps there is a built-in function?
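As far as I know there is no built-in limiter in Node's core stream module, but the stream.Transform idea works well. A minimal sketch in TypeScript, assuming a fixed byte budget (the class name and limit are illustrative):

import { Transform, TransformCallback } from "stream";

// A Transform that errors out once more than `limit` bytes pass through.
class LengthLimiter extends Transform {
  private seen = 0;

  constructor(private readonly limit: number) {
    super();
  }

  _transform(chunk: Buffer, _enc: BufferEncoding, done: TransformCallback): void {
    this.seen += chunk.length;
    if (this.seen > this.limit) {
      // Erroring the callback destroys the stream and surfaces the error.
      done(new Error(`Upload exceeds limit of ${this.limit} bytes`));
      return;
    }
    done(null, chunk); // under the limit: pass the chunk through unchanged
  }
}

// Usage: req.pipe(new LengthLimiter(10 * 1024 * 1024)).pipe(fileWriteStream);

Wiring it up with stream.pipeline instead of pipe also tears down the destination stream when the limiter errors.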

Related

Is there any limit on the size of data that can be stored in a Spring Integration message header

I have a requirement where I need to store a list of records in a Spring Integration message header, so that it can be used later in the flow. This list can grow to up to 100,000 records.
I would like to know: is there a limit on the size of data that can be stored in a Spring Integration header?
Also, is there an alternative approach that could fulfill this requirement, e.g. using a Claim Check?
Thanks.
If you don't do any persistence in between and don't propagate the message over messaging middleware (Kafka, JMS, etc.), then all the data stays in memory and you are limited only by what you have dedicated to the JVM heap. So, if keeping those huge objects in memory is the problem, then a Claim Check is indeed a good pattern to follow. This way the whole message is serialized to an external MessageStore and can be restored later via the returned claim check key.
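The specifics above are Spring's, but the Claim Check pattern itself is language-agnostic. A minimal sketch of the idea in TypeScript, with a hypothetical in-memory map standing in for the external MessageStore:

import { randomUUID } from "crypto";

// Hypothetical stand-in for an external MessageStore (JDBC, MongoDB, ...).
const store = new Map<string, unknown>();

// Check in: persist the heavy payload and keep only a small key.
function checkIn(payload: unknown): string {
  const claimCheck = randomUUID();
  store.set(claimCheck, payload);
  return claimCheck; // only this key travels in the message header
}

// Check out: restore the payload later in the flow via the key.
function checkOut(claimCheck: string): unknown {
  const payload = store.get(claimCheck);
  store.delete(claimCheck);
  return payload;
}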

How to handle requests larger than 1MB on EventHub

I'm working on a NestJS project that receives data from SAP MII and then sends it to EventHub. Unfortunately, EventHub supports a maximum of 1MB (https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-quotas), and in my case SAP MII sometimes returns 4MB+ that I still need to send to EventHub.
I have a few ideas in mind, but I'm not sure whether there's a better approach, or even a way to raise the EventHub size limit.
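One common answer, if the limit can't be raised, is to split the payload into sub-limit chunks and let the consumer reassemble them. A hedged sketch in TypeScript, assuming JSON events with base64-encoded slices (all names and sizes are illustrative; base64 inflates data by roughly a third, so the raw slice stays well under 1MB):

// Split an oversized payload into chunks that fit under the event-size cap.
const MAX_CHUNK_BYTES = 512 * 1024; // base64 adds ~33%, so stay well under 1MB

interface Chunk {
  correlationId: string; // same id for every chunk of one payload
  index: number;         // position of this chunk
  total: number;         // how many chunks the consumer should expect
  data: string;          // base64 slice of the original body
}

function toChunks(payload: Buffer, correlationId: string): Chunk[] {
  const total = Math.ceil(payload.length / MAX_CHUNK_BYTES);
  const chunks: Chunk[] = [];
  for (let i = 0; i < total; i++) {
    const slice = payload.subarray(i * MAX_CHUNK_BYTES, (i + 1) * MAX_CHUNK_BYTES);
    chunks.push({ correlationId, index: i, total, data: slice.toString("base64") });
  }
  return chunks;
}

The other common design is a claim check: write the oversized body to blob storage and publish only its URL to EventHub.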

Does IIS Request Content Filtering load the full request before filtering?

I'm looking into IIS Request Filtering by Content-Length. I've set the maximum allowed content length:
appcmd set config /section:requestfiltering /requestlimits.maxallowedcontentlength:30000000
My question is about when the filter will occur.
Will IIS first read the entire request into memory and then throw an error, or will it raise an error as soon as the threshold is reached?
The IIS Request Filtering module is processed very early in the request pipeline. Unwanted requests are quickly discarded before proceeding to application code, which is slower and has a much larger attack surface. For this reason, some have reported performance increases after implementing Request Filtering settings.
Limitations
Request Filtering Limitations include the following:
Stateless - Request Filtering has no knowledge of application or session state. Each request is processed individually, regardless of whether a session has been established.
Request Header Only - Request Filtering can only inspect the request header. It has no visibility into the request body or any part of the response.
Basic Logic - Regular expressions and wildcard matches are not available. Most settings consist of establishing size constraints while others perform simple string matching.
maxAllowedContentLength
Request Filtering checks the value of the Content-Length request header. If the value exceeds what is set for maxAllowedContentLength, the client will receive an HTTP 404.13.
The IIS 8.5 STIG recommends a value of 30000000 or less.
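For reference, the appcmd command in the question maps to the following standard web.config section (the value is in bytes):

<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="30000000" />
    </requestFiltering>
  </security>
</system.webServer>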
IISRFBaseline
The above information is based on my PowerShell module IISRFBaseline. It helps establish an IIS Request Filtering baseline by leveraging Microsoft Logparser to scan a website's content directory and IIS logs.
Many of the settings have a dedicated markdown file providing more information about the setting. The one for maxAllowedContentLength can be found at the following:
https://github.com/phbits/IISRFBaseline/blob/master/IISRFBaseline-maxAllowedContentLength.md
Update - in response to @johnny-5's comment
The filtering happens immediately, which makes sense because Request Filtering only has visibility into the request header. This was confirmed via the following methods:
Failed Request Tracing - the Request Filtering module responded to the request with an HTTP 413 Request entity too large.
http.sys event tracing - the request is accepted and handed off to the IIS website. Shortly thereafter is an entry showing the HTTP 413 response. The time between was not nearly long enough for the upload to complete.
Packet capture - Using Microsoft Network Monitor, the HTTP conversation shows IIS immediately responded with an HTTP 413 Request entity too large.
The part you're rightfully concerned with is that IIS still accepts the upload regardless of file size. I found the limiting factor to be connectionTimeout, which has a default setting of 120 seconds. If the upload "completes" before the timeout, an HTTP 413 error message is displayed. When a timeout occurs, the browser shows a connection reset, since the TCP connection is destroyed by IIS after sending a TCP ACK/RST.
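For reference, connectionTimeout is a site-level limit (a timespan, 00:02:00 being the 120-second default); in applicationHost.config it sits under the site or siteDefaults element:

<sites>
  <siteDefaults>
    <limits connectionTimeout="00:02:00" />
  </siteDefaults>
</sites>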
To test this further, the timeout was increased to connectionTimeout=6000. Then a large upload was submitted, and the following IIS components were stopped one at a time. After each stop, the upload was checked via Network Monitor and confirmed to still be running.
Website
Application Pool (Stop-WebAppPool -Name AppPoolName)
World Wide Web Publishing Service (Stop-Service -Name W3SVC)
With all three stopped, I verified there was no IIS process still running, and yet bytes were still being uploaded. This leads me to conclude that the connection is maintained by http.sys. The fact that connectionTimeout is closely tied to http.sys seems to support this. I do not know whether the uploaded bytes go to a buffer or are simply discarded. The event tracing messages didn't provide anything helpful in this context.
Leaving out the Content-Length request header results in an RFC protocol error (i.e. HTTP 400 Bad Request) generated by http.sys, since the size of the HTTP payload isn't being declared.

How to set Azure Webapp Websocket output buffer/frame size?

We're sending rather large chunks of data over WebSockets from an Azure Web App. It all works fine, but for some reason the outgoing buffer size is 4096 bytes, which adds a lot of overhead for large data transmissions.
On a local developer machine this IIS/.NET buffer seems to be 16384 (or possibly 16383, because I'm getting the stream as one frame with 1 byte, then the next frame with 16383 bytes, and so on). The read buffer in the client is 65536 for each frame.
All code is written in .NET, so the sending side simply creates a large ArraySegment and sends it with ClientWebSocket.SendAsync, which is much too high up the chain to actually decide how it's sent.
My question is: is it possible to change the size of the frames/buffers on the Azure Web App?
Clearly it's possible to change it at either the OS or IIS level (http.sys?), since our Windows 10 dev machines have a different send buffer, but I really can't find where and how.

Optimizing a humbly large node.js response

I am responding with a large (10 MB) payload from node.js to an Akka spray.io-based actor, and getting a chunk-size error from spray/akka: [akka://spray-actor-system/user/IO-HTTP/group-0/3] Received illegal response: HTTP chunk size exceeds the configured limit of 1048576 bytes.
My node.js code just sends off the response in a single res.end call. (It does so because the response is generated in a non-streamed way, so there was no inherent gain in streaming it further along, at least until the spray.io client side was added.)
I am wondering what the simplest way is to chunk the response using the node.js API, in case I prefer not to handle very large HTTP responses by increasing the spray.io size limit. Also, is there any performance downside to sending responses of this size from node.js, i.e. does node.js block on the res.end operation?
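For what it's worth, the node.js response object is a writable stream, so the simplest chunking is a loop of res.write calls that respects backpressure. A minimal sketch in TypeScript (the 512KB slice size is arbitrary):

import { ServerResponse } from "http";

// Send a large in-memory payload in smaller writes, pausing whenever
// the socket buffer is full.
async function sendInChunks(res: ServerResponse, payload: Buffer): Promise<void> {
  const CHUNK = 512 * 1024;
  for (let off = 0; off < payload.length; off += CHUNK) {
    const ok = res.write(payload.subarray(off, off + CHUNK));
    if (!ok) {
      // Backpressure: wait for the socket buffer to drain before continuing.
      await new Promise<void>((resolve) => res.once("drain", resolve));
    }
  }
  res.end();
}

When no Content-Length header is set, node uses chunked transfer encoding and each write should become its own HTTP chunk, keeping every chunk under spray's 1048576-byte limit. As for blocking: res.end does not block; node copies the payload into user-space buffers and flushes them asynchronously, so the cost of a 10 MB res.end is memory rather than event-loop time.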
