ServiceWorker is not fetching all requests - requirejs

I'm playing around with ServiceWorkers.
In the current case I'm trying an example worker: https://github.com/GoogleChrome/samples/blob/gh-pages/service-worker/prefetch/service-worker.js
After running it in my context (a complex framework with jQuery and RequireJS), only a small part of the requests go through the ServiceWorker.
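For reference, a fetch handler that does nothing but log each intercepted request can show which requests actually reach the worker. This is a minimal debugging sketch (not part of the Google sample; written in TypeScript against the webworker lib, plain JavaScript works the same):

declare const self: ServiceWorkerGlobalScope;

self.addEventListener('fetch', (event: FetchEvent) => {
  // Every request routed through this worker gets logged; requests issued
  // before the worker controls the page, or from outside its registration
  // scope, never show up here.
  console.log('SW handling fetch:', event.request.url);
  event.respondWith(fetch(event.request));
});

A common reason for missing requests is that the page that registered the worker is not yet controlled by it: requests fired before the worker takes control (or from outside its scope) bypass the fetch handler entirely.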

Related

FMETP WebGL Unity Build. Emscripten error when FMNetworkManager activated in Hierarchy

So I have been working on a project that streams a video feed from an Oculus quest to a WebGL build running on a remote server (Digital Ocean)
I have two issues currently...
1. When I build to WebGL and push the update online, it will only run if I disable the FMNetworkManager.
If I run the app locally, it has no issues and I have been able to have video sent from the Quest headset to the receiver app.
Part of the response is as follows:
An error occurred running the Unity content on this page. See your browser JavaScript console for more info. The error was:
uncaught exception: abort("To use dlopen, you need to use Emscripten's linking support, see https://github.com/kripken/emscripten/wiki/Linking") at jsStackTrace (Viewer.wasm.framework.unityweb:8:15620)
stackTrace (Viewer.wasm.framework.unityweb:8:15791)
onAbort@https://curtin-cooking-control-nr9un.ondigitalocean.app/Build/UnityLoader.js:4:11199
abort (Viewer.wasm.framework.unityweb:8:500966)
_dlopen (Viewer.wasm.framework.unityweb:8:181966)
@blob:https://***/de128118-3923-4c88-8092-7a9945d90746 line 8 > WebAssembly.instantiate:wasm-function[60882]:0x1413efb (blob:***/de128118-3923-4c88-8092-7a9945d90746 line 8 > WebAssembly.instantiate:wasm-function[62313]:0x1453761)
...
...
...WebAssembly.instantiate:wasm-function[63454]:0x148b9a9)
UnityModule [UnityModule/Module.dynCall_v] (Viewer.wasm.framework.unityweb:8:484391)
browserIterationFunc (Viewer.wasm.framework.unityweb:8:186188)
runIter (Viewer.wasm.framework.unityweb:8:189261)
Browser_mainLoop_runner (Viewer.wasm.framework.unityweb:8:187723)
So I understand there is an issue relating to (wasm) Emscripten, and I have scoured the internet looking for solutions to no avail.
As I mentioned, I have had video streaming from one device to another, but only locally, with a node.js server also running on Digital Ocean. The server appears to be functioning, since both devices are registered by it at runtime. In each app I see what appears to be data transferring (Last Sent Time keeps updating, and FM Web Socket Network_debug pushes [connected: True] to a text UI), yet the IsConnected and Found Server checkboxes inside FM Client (script) fail to show as connected.
[screenshot: FMNetworkManager]
I'm by no means an expert in Unity programming, WebGL, or web server setup, so my limited understanding has left me looking at many irrelevant solutions, attempting the little changes some solutions suggest, and staring blank-eyed into space at others, wondering where I would even implement that.
Any guidance would be great, a step-by-step solution would be fantastic.
[Edit - Detailed Error]
UnityLoader.js:1150 wasm streaming compile failed: TypeError: Could not download wasm module
printErr @ UnityLoader.js:1150
Promise.catch (async)
doNativeWasm @ 524174d7-d893-4b91-8…0-aa564a23702d:1176
(anonymous) @ 524174d7-d893-4b91-8…0-aa564a23702d:1246
(anonymous) @ 524174d7-d893-4b91-8…-aa564a23702d:20166
UnityLoader.loadCode.Module @ UnityLoader.js:889
script.onload @ UnityLoader.js:854
load (async)
loadCode @ UnityLoader.js:849
processWasmFrameworkJob @ UnityLoader.js:885
job.callback @ UnityLoader.js:475
setTimeout (async)
job.complete @ UnityLoader.js:490
(anonymous) @ UnityLoader.js:951
decompressor.worker.onmessage @ UnityLoader.js:89
Thanks in advance
Aaron
You are wrongly combining FMNetworkUDP and FMWebSocket.
UDP is not allowed in a WebGL build, which causes the error, as expected.
Your WebSocket server is reachable because it is exposed via your public IP.
Please try not to expose your server IP on a public forum like Stack Overflow, where anyone can connect to your server at any time in the future.
You should remove FMNetworkManager completely, keeping only the FMWebSocket components for WebGL streaming.
You can test it with their WebSocket streaming example scene in a WebGL build.

Nestjs Request and Application Lifecycle

I am looking for information about the request and application life-cycle for The NestJS framework. Specifically:
What is the order of execution of the following processes in a request, for a route that implements: middleware, pipes, guards, interceptors, and any other potential request process
What is the lifespan of modules and providers in a NestJS application? Do they last for the lifespan of a request, or the application, or something else?
Are there any lifecycle hooks, in addition to OnModuleInit and OnModuleDestroy?
What causes a Module to be destroyed (and trigger the OnModuleDestroy event)?
What is the order of execution of the following processes in a request, for a route that implements: middleware, pipes, guards, interceptors, and any other potential request process
The common order is:
Middlewares
Guards
Interceptors (before the stream is manipulated)
Pipes
Interceptors (after the stream is manipulated)
Exception filters (if any exception is caught)
What is the lifespan of modules and providers in a NestJS application? Do they last for the lifespan of a request, or the application, or something else?
They do last for the lifespan of the application. Modules are destroyed when a NestApplication or a NestMicroservice is being closed (see close method from INestApplication).
Are there any lifecycle hooks, in addition to OnModuleInit and OnModuleDestroy?
No, there aren't at the moment.
What causes a Module to be destroyed (and trigger the OnModuleDestroy event)?
See my answer to the second point. As you seem interested in lifecycle hooks, you may also want to follow issues #938 and #550.
What is the order of execution of the following processes in a request, for a route that implements: middleware, pipes, guards, interceptors, and any other potential request process
Middleware -> Guards -> Interceptors (code before next.handle()) -> Pipes -> Route Handler -> Interceptors (e.g. next.handle().pipe(tap(() => changeResponse()))) -> Exception Filters (if an exception is thrown)
With all three of them, you can inject other dependencies (like services,...) in their constructor.
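As an illustration of the interceptor stages in that chain, here is a minimal logging interceptor (a sketch with an invented class name and log messages; NestInterceptor, ExecutionContext, CallHandler and RxJS tap are the actual NestJS APIs):

import { Injectable, NestInterceptor, ExecutionContext, CallHandler } from '@nestjs/common';
import { Observable } from 'rxjs';
import { tap } from 'rxjs/operators';

@Injectable()
export class LoggingInterceptor implements NestInterceptor {
  intercept(context: ExecutionContext, next: CallHandler): Observable<any> {
    // Code before next.handle(): runs after middleware and guards,
    // before pipes and the route handler.
    console.log('Before handler...');
    return next
      .handle()
      // Runs after the route handler has produced its response stream.
      .pipe(tap(() => console.log('After handler...')));
  }
}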
What is the lifespan of modules and providers in a NestJS application? Do they last for the lifespan of a request, or the application, or something else?
A provider can have any of the following scopes:
SINGLETON - A single instance of the provider is shared across the entire application. The instance lifetime is tied directly to the application lifecycle. Once the application has bootstrapped, all singleton providers have been instantiated. Singleton scope is used by default.
REQUEST - A new instance of the provider is created exclusively for each incoming request. The instance is garbage-collected after the request has completed processing.
TRANSIENT - Transient providers are not shared across consumers. Each consumer that injects a transient provider will receive a new, dedicated instance.
Using singleton scope is recommended for most use cases. Sharing providers across consumers and across requests means that an instance can be cached and its initialization occurs only once, during application startup.
Example
import { Injectable, Scope } from '@nestjs/common';
@Injectable({ scope: Scope.REQUEST })
export class CatsService {}
Are there any lifecycle hooks, in addition to OnModuleInit and OnModuleDestroy?
OnApplicationBootstrap - Called once the application has fully started and is bootstrapped
OnApplicationShutdown - Responds to system signals (when the application gets shut down by e.g. SIGTERM). Use this hook to gracefully shut down a Nest application. This feature is often used with Kubernetes, Heroku or similar services.
Both OnModuleInit and OnApplicationBootstrap hooks allow you to defer the application initialization process (return a Promise or mark the method as async).
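For example, a service implementing both hooks might look like this (a sketch; the class name and log lines are invented, while the hook interfaces and enableShutdownHooks() are the documented NestJS API):

import { Injectable, OnApplicationBootstrap, OnApplicationShutdown } from '@nestjs/common';

@Injectable()
export class AppLifecycleService implements OnApplicationBootstrap, OnApplicationShutdown {
  async onApplicationBootstrap() {
    // Runs once the application has fully started; returning a Promise
    // (or being async) defers initialization until it settles.
    console.log('Application bootstrapped');
  }

  onApplicationShutdown(signal?: string) {
    // Runs on system signals such as SIGTERM, but only if shutdown hooks
    // are enabled in main.ts: app.enableShutdownHooks();
    console.log('Shutting down on ' + signal);
  }
}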
What causes a Module to be destroyed (and trigger the OnModuleDestroy event)?
Usually a shutdown signal from Kubernetes, Heroku or a similar service.

TypeScript: Large memory consumption while using ZeroMQ ROUTER/DEALER

We have recently started working with the TypeScript language for an application where queued communication is expected between a server and one or more clients.
To achieve the queued communication, we are trying to use the ZeroMQ library version 4.6.0 as an npm package: npm install -g zeromq and npm install -g @types/zeromq.
The exact scenario :
The client is going to send thousands of messages to the server over ZeroMQ. The server in turn will respond with an acknowledgement message for each incoming message from the client. Based on the acknowledgement message, the client will send the next message.
ZeroMQ pattern used :
The ROUTER/DEALER pattern (we cannot use any other pattern).
Client side code :
import Zmq = require('zeromq');

let clientSocket: Zmq.Socket;
let messageQueue = [];

export class ZmqCommunicator
{
    constructor(connString: string)
    {
        clientSocket = Zmq.socket('dealer');
        clientSocket.connect(connString);
        clientSocket.on('message', this.ReceiveMessage);
    }

    // Arrow function so that `this` stays bound when used as an event handler.
    public ReceiveMessage = (msg: Buffer) => {
        const json = JSON.parse(msg.toString('utf8'));
        if (json.type != 'error' && json.type == 'ack') {
            if (messageQueue.length > 0) {
                // An ack arrived: dequeue and send the next pending message.
                this.Dispatch(messageQueue.splice(0, 1)[0]);
            }
        }
    }

    public Dispatch(message) {
        clientSocket.send(JSON.stringify(message));
    }

    public SendMessage(msg: Message, isHandshakeMessage: boolean) {
        // Called only once, for the first handshake message.
        // All other messages take the else branch and are queued.
        if (isHandshakeMessage == true) {
            clientSocket.send(JSON.stringify(msg));
        }
        else {
            messageQueue.push(msg);
        }
    }
}
On the server side, we already have a ROUTER socket configured.
The above code is pretty straightforward. The SendMessage() function is called for thousands of messages, and the code works successfully, but with heavy memory consumption.
Problem :
Because the behaviour of ZeroMQ is asynchronous, the client has to wait for the ReceiveMessage() callback before it can send a new message to the ZeroMQ ROUTER (which is evident from the flow into the Dispatch() method).
Based on our limited knowledge of TypeScript and of using ZeroMQ from TypeScript, the problem is this: the default thread running the TypeScript code creates the 1000+ messages and passes them to SendMessage(), and after sending the first (handshake) message it simply keeps creating and queuing more. SendMessage() does not actually send the data; it queues it, because we want to interpret the acknowledgement sent by the ROUTER socket and only send the next message once an acknowledgement arrives. As a result, control does not reach the ReceiveMessage() callback until all 1000+ messages have been created and passed to SendMessage().
That is to say, the call reaches ReceiveMessage() only after the default thread is done creating and calling SendMessage() for the 1000+ messages and has no other task left to do.
Because ZeroMQ does not provide any synchronous mechanism for sending/receiving data over ROUTER/DEALER, we had to use a queue, as in the code above, via the messageQueue object.
This mechanism loads a huge messageQueue (with 1000+ messages) into memory and dequeues it only once the default thread finally gets to the ReceiveMessage() callback at the end. The situation only worsens if we have 10000+ or even more messages to send.
Questions :
We have validated this behaviour, so we are sure of the understanding explained above. Is there any gap in our understanding of TypeScript and/or of our ZeroMQ usage?
Is there any concept like a blocking queue or bounded array in TypeScript that would accept a limited number of entries and block any new additions until existing entries have been dequeued (which effectively means the default thread pauses until the ReceiveMessage() callback has drained entries from the queue)?
Is there any synchronous ZeroMQ methodology? (We have used one in a similar setup for C#, where we poll on ZeroMQ and receive the data synchronously.)
Any leads on using multi-threading for such a scenario? We are not sure TypeScript supports multi-threading to any good extent.
Note: We have searched many forums and have not found any leads. The above description may contain multiple questions in one (against the rules of the Stack Overflow forum), but for us all of these questions are interlinked aspects of using ZeroMQ effectively from TypeScript.
Looking forward to getting some leads from the community.
Welcome to ZeroMQ
If this is your first read about ZeroMQ, feel free to first take a five-second read about the main conceptual differences in the [ ZeroMQ hierarchy in less than a five seconds ] section.
1 ) ... Is there any gap in our understanding of either/or TypeScript or ZeroMQ usage ?
Whereas I cannot speak for the TypeScript part, let me mention a few details that may help you move forward. While ZeroMQ is principally a broker-less, asynchronous signalling/messaging framework, it has many flavours of use, and there are tools to enforce both synchronous and asynchronous cooperation between the application code and the ZeroMQ Context()-instance, which is the cornerstone of all the services design.
The native API provides means to define whether a respective call ought to block until a message has completed its processing across the Context()-instance's boundary, or, on the very contrary, whether the call ought to obey the ZMQ_DONTWAIT flag and asynchronously return control to the caller, irrespective of the operation's (in-)completion.
As an additional trick, one may opt to configure ZMQ_SNDHWM + ZMQ_RCVHWM and other related .setsockopt() options, so as to achieve specific blocking / silent-dropping behaviours.
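For instance, with the zeromq npm binding used in the question, the high-water marks can be set before connecting. A sketch (the values are arbitrary; verify that your installed binding version exposes the ZMQ_SNDHWM / ZMQ_RCVHWM constants and setsockopt() this way):

import Zmq = require('zeromq');

const sock = Zmq.socket('dealer');

// Bound the queues inside the Context()-engine. Once a high-water mark is
// reached, the socket blocks or silently drops (depending on socket type)
// instead of buffering without limit.
sock.setsockopt(Zmq.ZMQ_SNDHWM, 100);  // outgoing queue limit, in messages
sock.setsockopt(Zmq.ZMQ_RCVHWM, 100);  // incoming queue limit, in messages

sock.connect('tcp://127.0.0.1:5555');

Note that the HWM bounds only ZeroMQ's own queues; the application-level messageQueue in the question still grows unbounded unless the application throttles SendMessage() itself.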
Because ZeroMQ does not provide any synchronous mechanism of sending/receiving data
Well, the ZeroMQ API does provide means for a synchronous call to the .send()/.recv() methods, where the caller is blocked until any feasible message could get delivered into / from a Context()-engine's domain of control.
Obviously, the TypeScript language binding/wrapper is responsible for exposing these native API services to your hands.
3 ) Is there any synchronous ZeroMQ methodology (we have used one in a similar setup for C#, where we poll on ZeroMQ and receive the data synchronously)?
Yes, there are several such :
- the native API, if not instructed by a ZMQ_DONTWAIT flag, blocks until a message can get served
- the native API provides a Poller()-object, which can .poll(); if given -1 as the duration specifier, it waits for the sought-for events, blocking the caller until any such event arrives and appears to the Poller()-instance
Again, the TypeScript language binding/wrapper is responsible for exposing these native API services to your hands.
... Large memory consumption ...
Well, this may signal poor resources-management care. ZeroMQ messages, once allocated, ought also to be freed where appropriate. Check your TypeScript code and the TypeScript language binding/wrapper sources to see whether the resources systematically get disposed of and freed from memory.
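Regarding 2 ), Node.js runs the application code on a single thread, so a queue that truly blocks that thread is not available. An application-level alternative (my own sketch, not part of ZeroMQ or of the question's code) is to gate each send on the previous acknowledgement with a Promise, so the producer loop pauses on await and no large backlog accumulates:

import Zmq = require('zeromq');

const sock = Zmq.socket('dealer');
sock.connect('tcp://127.0.0.1:5555');

// Resolves when the next { type: 'ack' } message arrives.
function waitForAck(): Promise<void> {
  return new Promise((resolve) => {
    const onMessage = (msg: Buffer) => {
      const json = JSON.parse(msg.toString('utf8'));
      if (json.type === 'ack') {
        sock.removeListener('message', onMessage);
        resolve();
      }
    };
    sock.on('message', onMessage);
  });
}

// The loop itself is suspended on await, so messages are produced only as
// fast as acknowledgements come back; no messageQueue is needed.
async function sendAll(messages: object[]): Promise<void> {
  for (const message of messages) {
    sock.send(JSON.stringify(message));
    await waitForAck();
  }
}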

Requests and connections double on node 4.1.2

We're currently in the process of updating from Node 0.10 to Node 4.1.2 and we're seeing some weird patterns. The number of connections to our Postgres database doubles, and we're seeing the same pattern with requests to external services. We are running a clustered app using the native cluster API, and the number of workers is the same for both versions.
I'm failing to understand why upgrading the runtime language would apparently change application behaviour by doubling requests to external services.
One of the interesting things I've noticed with 0.12 and 4.x is the change in garbage collection. I've not used the pg module before, so I don't know how it maintains its pools internally or whether it would be affected by memory or garbage collection. If you haven't set an explicit memory limit for Node, you could try giving that a shot and see whether you get different results:
node --max_old_space_size=<some sane value in MB>
I ran into something similar, but I was getting double file writes. I don't know your exact case, but I've seen a scenario where requests could almost exactly double:
in the update to 4.1.2, process.send and child.send have gone from synchronous to asynchronous.
I found an issue like this:
var fork = require('child_process').fork;

var child = fork('./request.js');
var test = null;

child.send(smallRequest);
child.send(largeRequest);

child.on('response', function (val) {
  console.log('small request came back: ' + val);
  test = val;
});

// Under the old blocking semantics, test was already set by this point;
// under the new asynchronous semantics it is still null.
if (!test) {
  //retry request
} ...
So whereas the previously blocking sends allowed this code to work, the non-blocking version assumes an error has occurred and retries. No error actually occurred, so double the requests come in.
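A hedged sketch of adapting such code to the asynchronous semantics: move the logic that depends on the response into the response handler, or use the optional callback that child.send() accepts since Node 4 (smallRequest is the same placeholder as above):

var fork = require('child_process').fork;
var child = fork('./request.js');

child.on('response', function (val) {
  // Everything that depends on the response lives in the handler, so the
  // asynchronous send can no longer race a check on shared state.
  console.log('request came back: ' + val);
});

// Node 4+: the callback fires once the message has been handed off,
// with an Error argument only if sending actually failed.
child.send(smallRequest, function (err) {
  if (err) { /* retry the request here, only on a real error */ }
});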

(Google App Engine Python 2.7) webapp2 RequestHandler on main thread?

I started with a simple Python 2.5 App Engine app and migrated it to Python 2.7 in hopes of taking advantage of its multithreaded abilities. After migrating, I noticed that webapp2.RequestHandler instances are all being called from the Main Thread.
I have an AJAX client firing off multiple asynchronous requests. I'd like one of the requests to be answered only when a certain event occurs on the server side; let's just say that event sleeps for 10 seconds for now. The problem is that the sleep occurs on the Main Thread and occupies it before my second async request from AJAX is processed. What am I missing?
Here's a stack trace:
PyDevDebug [PyDev Google App Run]
dev_appserver.py
MainThread - pid4276_seq4
post [test1.py:53]
dispatch [webapp2.py:570]
call [webapp2.py:1102]
default_dispatcher [webapp2.py:1278]
call [webapp2.py:1529]
Handle [wsgi.py:223]
HandleRequest [wsgi.py:298]
HandleRequest [runtime.py:151]
ExecutePy27Handler [dev_appserver.py:1525]
ExecuteCGI [dev_appserver.py:1701]
Dispatch [dev_appserver.py:1803]
Dispatch [dev_appserver.py:719]
_Dispatch [dev_appserver.py:2870]
_HandleRequest [dev_appserver.py:3001]
do_POST [dev_appserver.py:2794]
handle_one_request [BaseHTTPServer.py:328]
handle [BaseHTTPServer.py:340]
init [SocketServer.py:638]
init [dev_appserver.py:2780]
finish_request [SocketServer.py:323]
process_request [SocketServer.py:310]
_handle_request_noblock [SocketServer.py:284]
handle_request [dev_appserver.py:3991]
serve_forever [dev_appserver.py:4028]
main [dev_appserver_main.py:721]
[dev_appserver_main.py:747]
run_file [dev_appserver.py:167]
[dev_appserver.py:171]
run [pydevd.py:1090]
[pydevd.py:1397]
Thread-4 - pid4276_seq5
dev_appserver.py
It seems that in the dev (local) server, parallel threads are not executed in parallel but serially.
There is an "experimental dev server" here which you just reminded me of:
An experimental new development server for Google App Engine.
Multithreaded serving for better performance for complex applications and more correct semantics e.g. accessing your own application through urlfetch no longer deadlocks.
That might solve it locally, but I've not tried it personally.
