FMETP WebGL Unity Build. Emscripten error when FMNetworkManager activated in Hierarchy - node.js

So I have been working on a project that streams a video feed from an Oculus Quest to a WebGL build running on a remote server (Digital Ocean).
I have two issues currently...
1. When I build to WebGL and push the update online, it will only run if I disable the FMNetworkManager.
If I run the app locally, it has no issues, and I have been able to stream video from the Quest headset to the receiver app.
Part of the response is as follows:
An error occurred running the Unity content on this page. See your browser JavaScript console for more info. The error was:
uncaught exception: abort("To use dlopen, you need to use Emscripten's linking support, see https://github.com/kripken/emscripten/wiki/Linking") at jsStackTrace (Viewer.wasm.framework.unityweb:8:15620)
stackTrace (Viewer.wasm.framework.unityweb:8:15791)
onAbort#https://curtin-cooking-control-nr9un.ondigitalocean.app/Build/UnityLoader.js:4:11199
abort (Viewer.wasm.framework.unityweb:8:500966)
_dlopen (Viewer.wasm.framework.unityweb:8:181966)
#blob:https://***/de128118-3923-4c88-8092-7a9945d90746 line 8 > WebAssembly.instantiate:wasm-function[60882]:0x1413efb (blob:***/de128118-3923-4c88-8092-7a9945d90746 line 8 > WebAssembly.instantiate:wasm-function[62313]:0x1453761)
...
...
...WebAssembly.instantiate:wasm-function[63454]:0x148b9a9)
UnityModule [UnityModule/Module.dynCall_v] (Viewer.wasm.framework.unityweb:8:484391)
browserIterationFunc (Viewer.wasm.framework.unityweb:8:186188)
runIter (Viewer.wasm.framework.unityweb:8:189261)
Browser_mainLoop_runner (Viewer.wasm.framework.unityweb:8:187723)
So I understand there is an issue relating to Emscripten (wasm), and I have scoured the internet looking for solutions, to no avail.
2. While I mentioned that I have had video streaming from one device to another, this has only worked locally, with a node.js server also running on Digital Ocean. The server appears to be functioning: I can see both devices being registered by it at runtime. In each app, data appears to be transferring (Last Sent Time keeps updating, and FM Web Socket Network_debug also pushes [connected: True] to a text UI), yet the IsConnected and Found Server checkboxes inside the FM Client (script) never show as connected.
(screenshot of the FMNetworkManager inspector)
I'm by no means an expert in Unity programming, WebGL, or web-server setup, so in trying to get this working I have been looking at many irrelevant solutions, making small changes that some answers suggest, while others leave me staring blankly into space wondering where I would even implement that.
Any guidance would be great; a step-by-step solution would be fantastic.
[Edit - Detailed Error]
UnityLoader.js:1150 wasm streaming compile failed: TypeError: Could not download wasm module
printErr # UnityLoader.js:1150
Promise.catch (async)
doNativeWasm # 524174d7-d893-4b91-8…0-aa564a23702d:1176
(anonymous) # 524174d7-d893-4b91-8…0-aa564a23702d:1246
(anonymous) # 524174d7-d893-4b91-8…-aa564a23702d:20166
UnityLoader.loadCode.Module # UnityLoader.js:889
script.onload # UnityLoader.js:854
load (async)
loadCode # UnityLoader.js:849
processWasmFrameworkJob # UnityLoader.js:885
job.callback # UnityLoader.js:475
setTimeout (async)
job.complete # UnityLoader.js:490
(anonymous) # UnityLoader.js:951
decompressor.worker.onmessage # UnityLoader.js:89
Thanks in advance
Aaron

You are using FMNetwork (UDP) and FMWebSocket together incorrectly.
UDP is not allowed in WebGL builds, which is exactly what causes this error.
Your websocket server itself is reachable via your IP.
But please try not to expose your server IP on a public forum like Stack Overflow, where anyone could connect to your server at any time in the future.
You should remove FMNetworkManager completely and keep only the FMWebSocket components for WebGL streaming.
You can test this with their WebSocket streaming example scene in a WebGL build.
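If the IsConnected flag still never turns on after that, it can help to rule out the droplet itself by connecting to its websocket port with a minimal Node.js client. This is only a generic connectivity check using the npm ws package, not FMETP's own server code; HOST and PORT below are placeholders for your server's address:

// Generic websocket connectivity check (npm install ws).
// HOST and PORT are placeholders for the Digital Ocean droplet's address.
const WebSocket = require('ws');

const socket = new WebSocket('ws://HOST:PORT');

socket.on('open', () => {
  console.log('websocket handshake succeeded');
  socket.send('ping'); // send a small test payload
});

socket.on('message', (data) => console.log('received:', data.toString()));
socket.on('error', (err) => console.error('connection failed:', err.message));
socket.on('close', (code) => console.log('closed with code', code));

If this handshake fails from outside the droplet, the problem likely lies in firewall or server configuration rather than in the Unity scene.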

Related

Duplicating debug configuration for multiple schemes is not working in React Native iOS

I've developed my project using React Native, and now I am trying to implement multiple schemes for my dev, UAT, and prod environments.
For this, I've set up schemes and duplicated the release and debug configurations for each scheme, specifying different bundle IDs, app names, and user-defined variables. Now, if I run a scheme locally (with the debug configuration set for Run), I hit the error below:
Thread 6: "Unhandled JS Exception: Invariant Violation: TurboModuleRegistry.getEnforcing(...): 'DevSettings' could not be found. Verify that a module by this name is registered in the na..., stack:\ngetEnforcing#4725:28\n#41349:50\nloadModuleImplementation#271:14\n#41308:40\nloadModuleImplementation#271:14\n#35877:18\nloadModuleImplementation#271:14\n#28987:16\nloadModuleImplementation#271:14\nguardedLoadModule#163:47\nglobal code#326655:4\n"
Terminating app due to uncaught exception 'RCTFatalException: Unhandled JS Exception: Invariant Violation: TurboModuleRegistry.getEnforcing(...): 'DevSettings' could not be found. Verify that a module by this name is registered in the native binary.', reason: 'Unhandled JS Exception: Invariant Violation: TurboModuleRegistry.getEnforcing(...): 'DevSettings' could not be found. Verify that a module by this name is registered in the na..., stack:
I've attached a screenshot of my Development scheme settings, where I've used the release build config (Development) and the debug build config (DevelopmentDebug).
Check this image
FYI:
If I choose the release build configuration for Run, Test, and Analyze and run the scheme, it works fine, but in that case I can't use the debugger for development.
Also, I don't have any issues with archiving or releasing the archive to TestFlight using CI/CD, as I've selected the release configuration for Archive and Profile under the scheme settings.
Please help me out; without access to the debugger window, development becomes very difficult.
Looking forward to your help.
Thanks
Please try to add some logs; that would be very helpful.

"Error: Key not loaded" in h2o deployed through a K3s cluster, using python3 client

I can confirm the 3-replica h2o cluster inside K3s is correctly deployed, as executing h2o.init(ip="x.x.x.x") in the Python 3 interpreter works as expected. I followed the instructions noted here: https://www.h2o.ai/blog/running-h2o-cluster-on-a-kubernetes-cluster/
Nevertheless, I had to modify the service.yaml and comment out the line which says clusterIP: None, as K3s was complaining about its inability to set the clusterIP to None. Even so, I can confirm it is working correctly, and I am able to use an external IP to connect to the cluster.
If I try to load the dataset using the h2o cluster inside the K3s cluster using the exact same steps as described here http://docs.h2o.ai/h2o/latest-stable/h2o-docs/automl.html, this is the output that I get:
>>> train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
...
h2o.exceptions.H2OResponseError: Server error java.lang.IllegalArgumentException:
Error: Key not loaded: Key<Frame> https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv
Request: POST /3/ParseSetup
data: {'check_header': '0', 'source_frames': '["https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv"]'}
The same error occurs if I use the h2o.upload_file("x.csv") method.
There is a clue about what may be happening here: Key not loaded: Key<Frame> while POSTing source frame through ParseSetup in H2O API call. But I am not using curl, and I cannot find any parameter that could help me overcome this issue: http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/h2o.html?highlight=import_file#h2o.import_file
I need to use the Python client inside the same K3s cluster for various technical reasons, so I am not able to launch either Flow or Firebug to see what may be happening.
I can confirm it works correctly when I simply issue an h2o.init() against the local Java instance.
UPDATE 1:
I have tried in different K3s clusters without success. I changed the service.yaml to a NodePort, and now this is the error traceback:
>>> train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
...
h2o.exceptions.H2OResponseError: Server error java.lang.IllegalArgumentException:
Error: Job is missing
Request: GET /3/Jobs/$03010a2a016132d4ffffffff$_a2366be93ec99a78d7bc161de8c54d67
UPDATE 2:
I have tried using different service types (NodePort, LoadBalancer, ClusterIP) and none of them work. I have also tried using Minikube, both with the official image and with a custom image made by me, without success. I suspect this is related to either h2o itself or the clustering between pods. I will keep digging, hoping there is some gold in it.
UPDATE 3:
I also found out that the post about running H2O in Docker (https://www.h2o.ai/blog/h2o-docker/) is really outdated, and the Dockerfile on GitHub does not work either (I changed it to uncomment the ENTRYPOINT section, without success): https://github.com/h2oai/h2o-3/blob/master/Dockerfile
Even so, the custom image I built for h2o-k8s works seamlessly in pure Docker. I am wondering why it still does not work in K8s...
UPDATE 4:
I have tried modifying the environment variable called H2O_KUBERNETES_SERVICE_DNS without success.
In the meantime, the cluster became unavailable; that is, the readinessProbes would not complete successfully. No matter what I change now, it does not work.
I spun up a local k3d cluster to see what happened, and surprisingly the readinessProbes were not failing with v3.30.0.6. I then started testing with R instead of Python, and I am glad I did, because it may have pinpointed what was wrong: there is a version mismatch between the client and the server. So I updated the image accordingly to v3.30.0.1.
But now, again, the readinessProbe is not working in my k3d cluster, so I am unable to test it.
It seems to be working now: R client version 3.30.0.1 with server version 3.30.0.1. I also tried Python client version 3.30.0.7 with server version 3.30.0.7, and it started working. Marvelous. The problem was caused by a version mismatch between the client and the server, as the Python client had been updated to 3.30.0.7 while the latest server image for Docker was 3.30.0.6.

Stackdriver-trace on Google Cloud Run failing, while working fine on localhost

I have a Node server running on Google Cloud Run. Now I want to enable Stackdriver tracing. When I run the service locally, I am able to get the traces in GCP. However, when I run the service on Cloud Run, I am getting an error:
"#google-cloud/trace-agent ERROR TraceWriter#publish: Received error with status code 403 while publishing traces to cloudtrace.googleapis.com: Error: The request is missing a valid API key."
I made sure that the service account has the tracing agent role.
The first line in my app.js is:
require('@google-cloud/trace-agent').start();
When running locally, I am using a .env file containing
GOOGLE_APPLICATION_CREDENTIALS=<path to credentials.json>
According to https://github.com/googleapis/cloud-trace-nodejs, these values are auto-detected if the application is running on Google Cloud Platform, so I don't have these credentials on the GCP image.
There are two challenges to using this library with Cloud Run:
1. Despite the note about auto-detection, Cloud Run is an exception: it is not yet auto-detected. This can be addressed for now with some explicit configuration.
2. Because Cloud Run services only have CPU while they respond to a request, queued-up trace data may not be sent before CPU resources are withdrawn. This can be addressed for now by configuring the trace agent to flush as soon as possible:
const tracer = require('@google-cloud/trace-agent').start({
  serviceContext: {
    service: process.env.K_SERVICE || "unknown-service",
    version: process.env.K_REVISION || "unknown-revision"
  },
  flushDelaySeconds: 1,
});
On a quick review I couldn't see how to trigger the trace flush, but the shorter timeout should help avoid some delays in seeing the trace data appear in Stackdriver.
EDIT: While nice in theory, in practice there's still significant race conditions with CPU withdrawal. Filed https://github.com/googleapis/cloud-trace-nodejs/issues/1161 to see if we can find a more consistent solution.

Chrome run fails in Azure Functions: An attempt was made to access a socket in a way forbidden by its access permissions

I wrote a web bot that uses the Selenium framework to crawl. I installed ChromeDriver 72.0.3626.69 and also downloaded Chromium 72.0.3626.121. The app initializes ChromeDriver with this included Chromium binary (and NOT a locally installed Chrome binary). All this works perfectly on my local machine.
I have now been attempting to port the app to Azure Functions. I wrote a function, tested it, and it works fine locally. But once I publish it to Azure Functions, it fails with about 182 errors of this type:
An attempt was made to access a socket in a way forbidden by its
access permissions
I know this happens when the TCP connection limits of the Azure sandbox are exceeded, but all that happened here was creating an instance of ChromeDriver (it hadn't even navigated anywhere yet!).
Here is a screenshot of the Azure Function call log.
That error appears about 182 times in a row for what is basically just an attempt to create a browser instance (or a ChromeDriver instance, to be precise; I can't be sure whether Chromium or ChromeDriver causes the issue).
The question: has anyone experienced issues with ChromeDriver/Chromium creating so many (obviously excessive) connections when launching? And what might help to avoid this?
If that's of any help, this is basically a piece of code that crashes on the last line:
ChromeOptions options = new ChromeOptions();
options.BinaryLocation = this.chromePath;
options.AddArgument("no-sandbox");
options.AddArgument("disable-infobars");
options.AddArgument("--disable-extensions");
if (this.headlessMode)
{
    options.AddArgument("headless");
}
options.AddUserProfilePreference("profile.default_content_setting_values.images", 2);

Log.LogInformation("Chrome options compiled. Creating ChromeDriverService...");
var driverService = ChromeDriverService.CreateDefaultService(this.driverPath);
driver = new ChromeDriver(driverService, options, timeout);
I believe you are running this function in a Windows Function App, which is subject to quite a few limitations, as described in this wiki.
When running on Linux, however, functions basically run in a Docker container, which removes most of the restrictions Windows has. I believe what you are trying should be possible there.
You could either just deploy your function to a Linux Function App or even build a container and use that directly as well.

Random 'ECONNABORTED' error when using sendFile in Express/Node

I have set up a Node server with Express middleware. I randomly get the ECONNABORTED error on some files when loading an HTML file that triggers about 10 other loads (JS, CSS, etc.). The exact error is:
{ [Error: Request aborted] code: 'ECONNABORTED' }
Generated by this simplified code (after I tried to debug the issue):
res.sendFile(res.locals.physicalUrl, function (err) {
  if (err)
    console.log(err);
  ...
});
Many posts say this error results from not specifying the full path name. That is not the situation here: I do specify the full path, and the error really is generated randomly. Sometimes the page and all its subsequent links load perfectly, and sometimes they do not. I tried flushing the cache and did not find any pattern connecting it to this.
This specific error appears to be a generic term for a socket connection getting aborted, and it is discussed in the context of other applications like FTP.
Having realized that the number of Node worker threads can be increased, I tried to do so using:
process.env.UV_THREADPOOL_SIZE = 20;
However, my understanding is that even without this, the file transfer should at most have to wait for a worker thread to become free, not get aborted. And I am not talking about big files here; all the files are less than 1 MB.
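If you do go down that route, note that libuv reads UV_THREADPOOL_SIZE when the thread pool is first used, so it has to be set before anything touches the filesystem or network. A minimal sketch (the ordering relative to requires is the point; this is not a confirmed fix for the ECONNABORTED error):

// Set this before any module initializes libuv's thread pool.
process.env.UV_THREADPOOL_SIZE = 20;

// Only then pull in modules that do I/O.
const express = require('express');
const app = express();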
I have a gut feeling that this has nothing to do with node directly.
Please point out any other possibilities (Node or otherwise) for handling this error. Also, are there any indirect solutions? Retrying a few times could be one, but that would be clumsy. EDIT: No, I cannot retry; the headers have already been sent when the error occurs!
A SIDE NOTE:
Many examples of using sendFile skip the callback, giving the impression that it is a synchronous call. It is not. Do use the callback at all times; check for success, and only then move on to the "next" middleware, or take appropriate steps if the send fails for whatever reason. Not doing so can make the consequences difficult to debug in an asynchronous environment. A sketch of this pattern follows the link below.
See https://stackoverflow.com/a/36949631/2798152
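As an illustration, here is a minimal sketch of that pattern in an Express route; res.locals.physicalUrl is assumed to be an absolute path set by earlier middleware, as in the question:

const express = require('express');
const app = express();

app.get('*', function (req, res, next) {
  // Assume earlier middleware resolved the request to an absolute path.
  res.sendFile(res.locals.physicalUrl, function (err) {
    if (err) {
      // By this point the headers may already have been sent, so
      // retrying is not an option; log and delegate to the
      // error-handling middleware instead.
      console.log(err);
      return next(err);
    }
    // Send succeeded; safe to do any post-send bookkeeping here.
  });
});

app.listen(3000);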
Could it be possible that in some cases you terminate the connection by calling res.end before the asynchronous call to res.sendFile ends?
If that's not the case - can you pastebin more of your application code?
Uninstalling and Re-installing MongoDB solved this for me.
I was facing the same problem. It started happening when I had to force-restart my laptop because it became unresponsive. After restarting, trying to connect to the Mongo server using Node.js always threw the ECONNABORTED error.
