The V8 platform used by this instance of Node does not support creating Workers

With my current project I run into the following error message when creating a worker:
ERROR Error: The V8 platform used by this instance
of Node does not support creating Workers
I found a variety of posts here on SO with comments like this one: "It was added in nodejs v10.5.0."
Does anyone know what's going on?
$ process.versions
ares:'1.16.0'
brotli:'1.0.7'
chrome:'85.0.4183.39'
electron:'10.0.0-beta.14'
http_parser:'2.9.3'
icu:'67.1'
llhttp:'2.0.4'
modules:'82'
napi:'5'
nghttp2:'1.41.0'
node:'12.16.3'
openssl:'1.1.0'
unicode:'13.0'
main.ts
win = new BrowserWindow({
  webPreferences: {
    nodeIntegrationInWorker: true,
    nodeIntegration: true,
    allowRunningInsecureContent: (serve) ? true : false,
  },
});

Launch Workers from main.js
I had my Workers launching from my renderer.js, and then I noticed a comment here (https://www.giters.com/nrkno/sofie-atem-connection/issues/125) mentioning a bug that produces this V8 platform message when Workers are kicked off outside of main.js.
In my particular case, it's not working fully yet, but I don't get this message any more and I think my outstanding problems are unrelated.

I had a similar error when I tried to use a worker pool in Electron. I solved it by passing { workerType: 'process' } when creating the worker pool, as follows.
const pool = workerpool.pool('', { workerType: 'process' });

Related

How to run a job through Queue in arangodb

I am moving from ArangoDB 2.5.7 to ArangoDB 3.1.7. I have managed to make everything work except the jobs. I looked at the documentation and I don't understand if I have to create a separate service just for that.
So, I have a foxx application myApp
manifest.json
{
  "name": "myApp",
  "version": "0.0.1",
  "author": "Deepak",
  "files": {
    "/static": "static"
  },
  "engines": {
    "arangodb": "^3.1.7"
  },
  "scripts": {
    "setup": "./scripts/setup.js",
    "myJob": "./scripts/myJob.js"
  },
  "main": "index.js"
}
index.js
'use strict';
module.context.use('/one', require('./app'));
app.js
const createRouter = require('@arangodb/foxx/router');
const controller = createRouter();
module.exports = controller;

const queues = require('@arangodb/foxx/queues');
const queue = queues.create('myQueue', 2);
queue.push({mount: "/myJob", name: "myJob"}, {"a": 4}, {"allowUnknown": true});
myJob.js
const argv = module.context.argv;
var obj = argv[0];
console.log('obj:'+obj);
I get following error:
Job failed:
ArangoError: service not found
Mount path: "/myJob".
I am not sure if I have to extract myJob as an external service. Can you help me? I don't see a complete example of how to do it.
To answer your question:
You do not have to extract the job script into a new service. You can specify the mount point of the current service by using module.context.mount.
You can find more information about the context object in the documentation: https://docs.arangodb.com/3.1/Manual/Foxx/Context.html
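For example, a minimal sketch of the queue call using the current service's own mount point instead of the hard-coded "/myJob" (the other arguments are kept from the question):

```javascript
// app.js — sketch of the suggested fix: reference this service's
// own mount path via module.context.mount instead of "/myJob".
const queues = require('@arangodb/foxx/queues');
const queue = queues.create('myQueue', 2);

queue.push(
  { mount: module.context.mount, name: 'myJob' }, // resolves to this service's mount point
  { a: 4 },
  { allowUnknown: true }
);
```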
By the way, it's probably not a good idea to arbitrarily create jobs at mount-time. The common use case for the queue is to create jobs in route handlers as a side-effect of incoming requests (e.g. to dispatch a welcome e-mail on signup).
If you create a job at mount-time (e.g. in your main file or a file required by it), the job will be created whenever the file is executed, which will be at least once for each Foxx thread (by default ArangoDB uses multiple Foxx threads to handle parallel requests), or, when development mode is enabled, once per request(!).
Likewise if you create a job in your setup script it will be created whenever the setup script is executed, although this will only happen in one thread each time (but still once per request when development mode is active).
If you need e.g. a periodic job that lives alongside your service, you should put it in a unique queue and only create it in your setup script after checking whether it already exists.
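A minimal sketch of such a guarded setup script (the queue name and the hourly delay are illustrative):

```javascript
// scripts/setup.js — create the periodic job only if its queue doesn't
// exist yet, so repeated setup runs don't create duplicate jobs.
const queues = require('@arangodb/foxx/queues');

let queue;
try {
  queue = queues.get('myUniqueQueue'); // throws if the queue doesn't exist yet
} catch (e) {
  queue = queues.create('myUniqueQueue', 1);
  queue.push(
    { mount: module.context.mount, name: 'myJob' },
    {},
    { repeatTimes: Infinity, repeatDelay: 60 * 60 * 1000 } // re-run hourly
  );
}
```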
On the changes in the queue API:
The queue API changed in 2.6 due to a serious issue with the old API that would frequently result in pending jobs not being properly rescheduled when the ArangoDB daemon was restarted after a job had been pushed to the queue.
Specifically ArangoDB 2.6 introduced so-called script-based (rather than function-based) job types: https://docs.arangodb.com/3.1/Manual/ReleaseNotes/UpgradingChanges26.html#foxx-queues
Support for the old function-based job types was dropped in ArangoDB 2.7 and the cookbook recipe was updated to reflect script-based job types: https://docs.arangodb.com/2.8/cookbook/FoxxQueues.html
A more detailed description of the new queue can be found in the documentation: https://docs.arangodb.com/3.1/Manual/Foxx/Scripts.html

Spawn new child process with own console window

I've got a parent application in node.js which needs to spawn multiple worker applications (also in node.js) according to need.
I've already got communication working between them - don't need to use any of the built-in node stuff.
Now the problem is that I'd like each worker process to have its own console window, since I do a lot of writing to the console and I want to keep an eye on it.
I've looked through the Node child_process documentation, and it says that by setting options to detached:
On Windows, setting options.detached to true makes it possible for the child process to continue running after the parent exits. The child will have its own console window.
However when I use my own code
const Process = require("child_process").spawn;
Process(process.argv[0], ["myApplicationPath","otherArgs"],{detached: true,stdio: ['ignore']});
It doesn't work. The child application does spawn, but no console window turns up.
I'm a bit late here, but I just had to figure this out as well, so here is the answer for anyone else who is struggling with this:
I managed to spawn my child application in its own console using this:
childProcess.spawn("<cmd>", [], {shell: true, detached: true});
In addition to the {detached: true} what OP is using, I used {shell: true}. With the combination of both, I managed to spawn my child application with its own console.

Requests and connections double on node 4.1.2

We're currently in the process of updating from node 0.10 to node 4.1.2 and we're seeing some weird patterns. The number of connections to our postgres database doubles, and we're seeing the same pattern with requests to external services. We are running a clustered app using the native cluster API, and the number of workers is the same for both versions.
I'm failing to understand why upgrading the runtime language would apparently change application behaviour by doubling requests to external services.
One of the interesting things I've noticed with 0.12 and 4.x is the change in garbage collection. I've not used the pg module before, so I don't know how it maintains its pools internally or whether it would be affected by memory pressure or garbage collection. If you haven't set a memory limit for node, you could try giving that a shot and see if you get different results.
node --max-old-space-size=<some sane value in MB>
I ran into something similar, but I was getting double file writes. I don't know your exact case, but I've seen a scenario where requests could almost exactly double.
In the update to 4.1.2, process.send() and child.send() went from synchronous to asynchronous.
I found an issue like this:
var child = fork('./request.js');
var test = null;

child.send(smallRequest);
child.send(largeRequest);

child.on('response', function (val) {
  console.log('small request came back: ' + val);
  test = val;
});

if (!test) {
  // retry request
}
...
So whereas the blocking sends previously allowed this code to work, the non-blocking version assumes an error has occurred and retries. No error actually occurred, so double the requests come in.

Starting a scheduling service in sails.js with forever from within sails with access to all waterline models

I have a standalone scheduling service set to execute some logic every hour. I want to start this service with forever right after sails starts, and I am not sure what's the best way to do that.
// services/Scheduler.js
sails.load(function () {
  setInterval(logicFn, config.schedulingInterval);
});
Sails can execute bootstrap logic in the config.bootstrap module, and I'll be using the forever-monitor node module:
var forever = require('forever-monitor'),
    scheduler = new (forever.Monitor)(schedulerPath, {
      max: 20,
      silent: true,
      args: []
    });

module.exports.bootstrap = function (cb) {
  scheduler.start();
  cb();
};
What if the service failed and restarted for whatever reason? Would it still have access to all waterline models, and how do I ensure it works as intended every time?
As brittonjb said in the comments, a simple solution is to use the cron module for scheduling.
You can specify a function for it to call at whatever interval you wish; this function could be defined within /config/bootstrap.js or it could be defined somewhere else (e.g. mail.dailyReminders() if you have a mail service with a dailyReminders method).
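A minimal sketch of that approach, assuming the `cron` npm package is installed and using the hypothetical mail service above (all names are placeholders):

```javascript
// config/bootstrap.js — sketch, assuming `npm install cron`
var CronJob = require('cron').CronJob;

module.exports.bootstrap = function (cb) {
  // Six-field cron pattern: fire at second 0, minute 0 of every hour.
  new CronJob('0 0 * * * *', function () {
    mail.dailyReminders(); // hypothetical service method
  }, null, true); // 4th argument starts the job immediately

  cb();
};
```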
Please please please, always share your sails.js version number! This is really important for people googling questions/answers!
There are many ways to go about doing this. However, for those that want the "sails.js" way, there are hooks for newer sails.js versions.
See this issue thread in github, specifically, after the issue gets closed some very helpful solutions get provided by some users. The latest is shared by "scott-wyatt", commented on Dec 28, 2014:
https://github.com/balderdashy/sails/issues/2092

Channels keep increasing for every exchange.publish() in RabbitMQ with node-amqp library

I'm using the node-amqp library for my nodejs project. I also posted the issue to its GitHub project page.
It keeps creating new channels, and they stay idle forever. After an hour there were ~12000 channels. I checked the options for exchange and publish, but so far I'm not even close to a solution.
What's wrong with the code and/or is there any options/settings for rabbitmq server for the issue?
Here is the sample code:
connection.exchange("brcks-wfa", {type: 'direct', durable: true}, function (exchange) {
  setInterval(function () {
    ...
    awS.forEach(function (wc) {
      ...
      nstbs.forEach(function (br) {
        ...
        BUpdate(brnewinfo, function (st) {
          if (st) {
            exchange.publish(route, brnewinfo, {contentType: "application/json"});
          }
        });
      });
      ...
    });
  }, 4000);
});
There is a bug in node-amqp where channels are not closed. The RabbitMQ team no longer recommends using this library; instead they recommend amqp.node (amqplib), which is a bit more low-level and lets (and requires) you to handle channels manually.
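A rough sketch of the equivalent publisher with amqplib, creating one channel up front and reusing it for every publish (exchange name and payload are taken from the question; the URL and routing key are placeholders):

```javascript
// Sketch using amqplib: one channel, reused for all publishes.
const amqp = require('amqplib');

async function start() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel(); // create the channel once
  await ch.assertExchange('brcks-wfa', 'direct', { durable: true });

  setInterval(() => {
    const brnewinfo = { /* ... */ };
    ch.publish('brcks-wfa', 'some.route',
      Buffer.from(JSON.stringify(brnewinfo)),
      { contentType: 'application/json' });
  }, 4000);
}

start().catch(console.error);
```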
