Does AWS ApiGateway Socket Split Messages at 4 KB? - node.js

I'm trying to implement a WebRTC signalling server using AWS API Gateway with JavaScript in order to connect 2 clients (Client A: a web browser; Client B: Unreal Engine's Pixel Streaming plugin).
(Note: for this implementation I'm not using the signalling server implementation provided by Unreal; I'm creating a replacement using AWS API Gateway.)
First, I successfully connect both clients to the API Gateway WebSocket server and set up the required configuration.
Then I generate the WebRTC "offer" in the browser (Client A) and send it via Lambda to the receiver's ConnectionID (Client B) using ApiGatewayManagementApiClient::PostToConnectionCommand (JS AWS SDK v3). I expect the receiver (Client B) to generate an "answer" when it gets the offer. Instead of an answer from Client B, I get the following error:
Error: Failed to parse SS message ...<message-chunk>...
In the logs I get this error twice, but with different chunks of my offer (first the beginning of the JSON, and second somewhere in the middle). This led me to believe that the message got split, so I tried removing parts of the JSON (shortening it) until I stopped getting the error. Doing this, I found that the error disappears when the exact length of the JSON message is 4096 chars (4 KB). When I send even one byte over this, I get the error (twice).
According to the AWS documentation, there is a maximum frame size of 32 KB. My full message is around 7 KB, so I expect it to be sent at once. Unfortunately, this doesn't seem to be the case; my message is split at the 4 KB mark.
I have a working implementation of the exact same case in NodeJS, building a custom server with the ws library. Inside this implementation everything works correctly (my 7 KB offer gets delivered in one piece, I receive an answer, and the connection is initiated without errors). I send the "offer" and then receive the "answer" back.
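For comparison, the working ws-based relay is conceptually something like this (a minimal sketch, not my exact code; the real server also handles the Pixel Streaming configuration messages):

const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });
const peers = new Set();

wss.on('connection', (ws) => {
  peers.add(ws);
  ws.on('message', (data) => {
    // Forward the offer/answer to the other peer as a single text frame
    for (const peer of peers) {
      if (peer !== ws) peer.send(data.toString());
    }
  });
  ws.on('close', () => peers.delete(ws));
});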
I've used wscat to connect to both servers (the NodeJS one and the API Gateway one), assuming the role of Client B, and I got both "offers". Then I compared them and they are exactly the same. (In wscat the message doesn't look split, but I assume wscat concatenates the chunks before showing it.)
Based on that, I can assert that I'm configuring the clients properly and that the "offer" is good enough to receive an "answer". The only difference is that if I send the offer from NodeJS I receive an answer and all works, but if I send exactly the same offer from AWS API Gateway I get an error that the data cannot be parsed (because it is split into 4 KB chunks).
I have spent 3 days trying to resolve this issue and I'm certain that everything I send, and the way I send it, is correct. So I've come to the conclusion that for some reason I'm hitting some limit in AWS API Gateway or Lambda (I saw that Lambda has a 4 KB limit on environment variables, but I'm not using them, so it should not be related) which causes my message to be split into parts. This causes Unreal's Pixel Streaming plugin to try to deserialize the JSON in chunks instead of the whole JSON message at once, and this results in 2 errors that the data cannot be parsed.
I cannot change the code of the client to first wait for all the chunks to arrive and then process them. I have to use it "as is". (It is a plugin and it is written in C++.)
I also tried to relay the API Gateway WebSocket message via a NodeJS client: I created a NodeJS WebSocket server, connected the Unreal Engine plugin to it, and forwarded the message through it to the plugin (AWS -> NodeJS client -> NodeJS server -> Unreal instead of AWS -> Unreal). This variant works correctly without problems and I receive an answer. The problem is that it introduces an additional NodeJS client and server, which defeats the purpose of using API Gateway in the first place...
So my question is: is it possible that AWS API Gateway (or Lambda) is splitting my message into 4 KB chunks, and if so, is it possible to somehow increase this limit so I receive the whole message at once instead?
I can provide code in case it is needed, but I don't think the problem is related to the concrete implementation.
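For reference, the send path is roughly the following (a minimal sketch, not my exact code; the payload shape with receiverConnectionId and offer is a placeholder for how my Lambda looks up the target):

const { ApiGatewayManagementApiClient, PostToConnectionCommand } = require('@aws-sdk/client-apigatewaymanagementapi');

exports.handler = async (event) => {
  // Hypothetical payload shape: the routed message carries the target
  // connection ID and the WebRTC offer
  const { receiverConnectionId, offer } = JSON.parse(event.body);
  // Callback endpoint of the WebSocket API, derived from the incoming event
  const client = new ApiGatewayManagementApiClient({
    endpoint: `https://${event.requestContext.domainName}/${event.requestContext.stage}`,
  });
  await client.send(new PostToConnectionCommand({
    ConnectionId: receiverConnectionId,
    Data: JSON.stringify(offer), // the ~7 KB offer, sent as one message
  }));
  return { statusCode: 200 };
};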

Related

How is it possible that node.js creates 2 instances of a singleton class?

Setup: TypeScript 4.9.4, Next.js 13.0.6
What I'm trying to do: a messenger app. The messaging system is based on Server-Sent Events (SSE), not WebSockets.
What's the idea: incoming messages are handled by the SSE endpoint: https://github.com/sincerely-manny/messenger.nextjs/blob/main/pages/api/messenger/incoming.ts
Outgoing messages are accepted as POST requests here: https://github.com/sincerely-manny/messenger.nextjs/blob/main/pages/api/messenger/outgoing.ts
A singleton class collects the list of clients/connections and response objects: https://github.com/sincerely-manny/messenger.nextjs/blob/main/lib/sse/serverSentEvents.ts
Whenever anything needs to send a message, it grabs the instance of the SSE class and triggers the "send" method.
Front-end part: https://github.com/sincerely-manny/messenger.nextjs/blob/main/app/messenger/page.tsx
Expected behaviour: upon establishing the first connection, an instance of the SSE class is created. Then every call of the send method finds the response object corresponding to the client and puts the message into the stream.
Actual behaviour: upon connecting to the SSE endpoint, instance (1) of the class is created. The client is registered in the list. But (!) sending a message creates another (2) instance of the singleton class with an empty clients list. Hence the sent message is lost. But (!!) after refreshing the page and creating a new connection, the app takes this second (2) instance, puts the client there, and everything starts working as expected.
The question: how is that possible, and what should I do to avoid this unwanted behaviour?
Update: turns out the problem persists only in dev mode, while compiling pages on-the-fly. That makes it easier, but doesn’t explain why it happens.
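For context, a common workaround in Next.js dev mode (where on-the-fly compilation can give each compiled page its own module registry, duplicating module-level state) is to park the instance on globalThis. A sketch with invented names, not the repo's actual code:

class SSEService {
  clients = new Map(); // clientId -> response object

  static instance() {
    // Module-level state can be duplicated across dev-mode recompiles;
    // globalThis is shared, so the singleton survives
    if (!globalThis.__sseService) {
      globalThis.__sseService = new SSEService();
    }
    return globalThis.__sseService;
  }

  send(clientId, message) {
    const res = this.clients.get(clientId);
    if (res) res.write(`data: ${JSON.stringify(message)}\n\n`); // one SSE event
  }
}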

Cloud Run 502 Bad Gateway - Protocol Error

Currently working on a microservice architecture where I have one particular microservice that will have a specific mechanism:
Receiving a request saying it needs some data
Sending status 202 - Accepted to the client
Generating the data and saving it to a Redis instance
Receiving a request to see if the data is ready
If the data is not ready in the Redis instance: sending status 102 to the client
If the data is ready in the Redis instance: sending it back
The first point works fine with this kind of code:
res.sendStatus(202)
processData(req)
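In context, the two endpoints described above might look roughly like this (a sketch, assuming Express and a node-redis v4 client; the route names and processData are placeholders):

const express = require('express');
const { createClient } = require('redis');

const app = express();
const redis = createClient();
redis.connect(); // node-redis v4 requires an explicit connect

// Hypothetical stand-in for the data generation step
async function processData(req) {
  // ... generate the data, then: await redis.set(key, JSON.stringify(data));
}

app.post('/data', (req, res) => {
  res.sendStatus(202); // accepted: respond to the client immediately
  processData(req);    // then generate and store the data
});

app.get('/data/:id', async (req, res) => {
  const value = await redis.get(req.params.id);
  // Note: 102 is an informational (1xx) status; some proxies may not
  // accept it as a final response
  if (value === null) return res.sendStatus(102);
  res.type('application/json').send(value); // ready: send it back
});

app.listen(8080);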
But I have different behavior locally and when hosted on Cloud Run for the second point.
Locally, the 2nd request is not handled while the first one's processing has not finished, and I presumed that was normal from a threading perspective.
Is there something that might be used to make Express still handle other requests while the first response has been sent to the client but its processing has not ended?
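If the local blocking is simply processData running synchronously on the event loop (an assumption, since the code isn't shown), one option is to move the heavy work off the main thread. A sketch reusing the app from above, where process-data-worker.js is a hypothetical worker file:

const { Worker } = require('node:worker_threads');

app.post('/data', (req, res) => {
  res.sendStatus(202); // accept immediately
  // The CPU-heavy generation runs in a separate thread, so the event loop
  // stays free to serve the status-check requests
  new Worker('./process-data-worker.js', { workerData: req.body });
});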
But considering that Google Cloud Run is based on instances and auto-scaling, I thought: well, the first one is locked because its processing has not ended? No problem! A new one will come and handle the other request, which will then check the Redis key's status.
It seems that I was wrong! When I make the call to check the status of the data, if the data is not yet done, Cloud Run sends me back this error (502 Bad Gateway):
upstream connect error or disconnect/reset before headers. reset reason: protocol error
However, I don't set any response status to 502, so it seems that either Cloud Run or Express sends this itself.
My only option would be to split my Cloud Run instance into a Cloud Function + Cloud Run: the Cloud Run service would trigger the process in a Cloud Function. I'm pretty short on time, so if I don't have any other option I will have to do that, but I would hope to be able to manage it without introducing a new Cloud Function.
Do you have any explanation for the fact that it doesn't work either locally or on Cloud Run?
My considerations are not convincing me and I can't find the truth:
Maybe a client can't make 2 requests at the same time: which seems not logical
Maybe Express can't handle several requests at the same time: which does not seem logical to me either
Any clues that seem more plausible?

Generate ClientHello and ServerHello using node

I would like to generate ClientHello and ServerHello messages using NodeJS for testing purposes.
Is there an easy way to generate these messages using the NodeJS API? (The HTTPS module is already doing it, but I can't find where.)
If I can't use an existing API, then how do I create the simplest messages according to the RFC using NodeJS (i.e., how do I build the messages in a Buffer correctly)?
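One way to get a real ClientHello without building it by hand (a sketch: the node:tls module produces the handshake, and a plain TCP server captures the raw record bytes):

const net = require('node:net');
const tls = require('node:tls');

// Plain TCP server: whatever arrives first is the raw ClientHello record
const server = net.createServer((socket) => {
  socket.once('data', (clientHello) => {
    // 0x16 = handshake record type, followed by version and length
    console.log(clientHello.toString('hex'));
    socket.destroy();
    server.close();
  });
});

server.listen(0, () => {
  const { port } = server.address();
  // tls.connect emits a ClientHello; the handshake will never complete
  // against this plain server, so swallow the resulting error
  tls.connect({ port, rejectUnauthorized: false }).on('error', () => {});
});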

Poco::Net::HTTPSClientSession receiveResponse always truncated abnormally

Encountered a very weird issue.
I have two VMs, running CentOS Linux.
The server side has a REST API (using a non-Poco socket), and one of the APIs responds to a POST.
On the client side, I use the POCO library to call the REST API.
If the returned message is long, it gets truncated at 176 KB, 240 KB, or 288 KB.
Same code, same environment, running on the server side: good.
On the client VM, using Python to do the REST call: good.
It ONLY fails if I use the same good code on the client VM.
When the message gets truncated, the HTTPS status code still returns 200.
On the server side, I logged the response message that I sent every time. Everything looks normal.
I have tried a whole bunch of things, like:
setting the socket timeout and receiveResponse timeout to an hour
waiting for 2 seconds after I send the request but before I call receive
setting the receive buffer big enough
trying a whole bunch of approaches to make sure the receive stream is empty and there is no more data
It just does not work.
Does anyone have a similar issue? I've started pulling my hair out... Please talk to me, anything... before I am bald.

ASIHttpRequest / node.js / heroku: "Client closed connection before receiving entire response"

I have an iPhone app using ASIHttpRequest. The server code is on Heroku in node.js.
From time to time, a single request is sent from the iPhone app (only one trace), but it is received twice on the Heroku app (I can see the same request twice in the Heroku logs).
I thought at the beginning that the request was sent twice because of an error in the first attempt, but that's not the case, as both requests (the one I need and the second one I don't need) are performed on the server side.
Any idea?
Are you starting the queue with accurate progress turned on? If so, ASIHTTPRequest makes one request (HEAD) to get the total size of the data to be downloaded, then it makes the real request. Hope that helps.
If that's not the case, try setting the persistent connection to NO, like so:
[asiRequest setShouldAttemptPersistentConnection:NO];
From my understanding, the latest version of ASIHTTPRequest defaults the persistent connection to NO. You can read more here:
https://github.com/pokeb/asi-http-request/issues/94
