In my Electron project, I'm trying to make a module a singleton by setting it as a global. Since I use jQuery in this module, I require it in the renderer process and then send it to the main process via IPC to set it as a global there. Here is the relevant part of my code:
main.js:
ipcMain.on("setGlobal", (event, global_var) => {
  global[global_var[0]] = global_var[1];
  console.log(global_var);
  event.returnValue = 1;
});
renderer.js:
const favourites = require("./components/favourites");
console.log(favourites);
ipcRenderer.sendSync("setGlobal", ["favourites", favourites]);
console.log(remote.getGlobal("favourites"));
The output of the console.log calls in the renderer process is shown in the image below:
And the output of the main process is:
[ 'favourites', { favourites: [] } ]
As you can see, the object (module) I sent from the ipcRenderer changed in ipcMain: it lost its add and init functions. Do you have any idea what causes this behavior and how to fix it?
PS: To be sure, I tested it with simple objects that contain functions instead of require("favourites"); they behave the same way. As a workaround, I used only the data entities as globals and passed them to all of the functions as arguments, but that hurts code readability.
You cannot use IPC like that. As noted in the docs (e.g. sendSync):
Send a message to the main process synchronously via channel, you can also send arbitrary arguments. Arguments will be serialized in JSON internally and hence no functions or prototype chain will be included.
Your functions are simply not making it to the main process.
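You can reproduce the stripping with plain JSON, which is roughly what the IPC layer does internally (a minimal sketch):
const obj = {
  favourites: [],
  add(item) { this.favourites.push(item); }
};

// the JSON round-trip drops the function, just like sendSync does
console.log(JSON.parse(JSON.stringify(obj))); // { favourites: [] } -- add() is gone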
To make a module a singleton you should just use the singleton pattern in your module, then require it in the main process and remote.require it in the renderer. Since require uses a cache (at least by default), the same module instance should be returned (more or less; further reading can be found here).
For example if you export a class:
let _instance = null

class MyClass {
  constructor() {
    if (_instance === null) _instance = this
    return _instance
  }
  ...
}

module.exports = MyClass
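A usage sketch, assuming both processes resolve the same file (paths are illustrative):
// main.js (main process) -- the first require creates and caches the instance
const MyClass = require('./MyClass');
const instance = new MyClass();

// renderer.js -- remote.require resolves against the main process's module
// cache, so constructing again returns the same instance
const RemoteMyClass = require('electron').remote.require('./MyClass');
const sameInstance = new RemoteMyClass();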
After #pergy's answer, I decided to drop IPC and use only globals. Here is the workaround I found:
main process:
global.provider = {};
renderer process:
const favourites = require("./components/favourites");
remote.getGlobal("provider").favourites = favourites;
other modules:
const favourites = remote.getGlobal("provider").favourites;
I've created a Rust lib with wasm-bindgen, compiled it to WASM using wasm-pack, and am trying to import it inside an AudioWorkletProcessor's thread.
On my main thread, I fetch and compile the file, then pass the resulting compiled module to the worklet:
const audioContext = new AudioContext();
await audioContext.audioWorklet.addModule("./my-custom-audio-processor.js");
const processorHandle = new AudioWorkletNode(
  audioContext,
  "my-custom-audio-processor"
);

WebAssembly.compileStreaming(fetch("./my-wasm-lib_bg.wasm")).then(compiledModule => {
  processorHandle.port.postMessage({
    compiledModule,
  });
});
Inside the Worker's thread, we have this:
this.port.onmessage = async event => {
  const instance = await WebAssembly.instantiate(event.data.compiledModule, /* here's the problem */);
};
The problem is that the WASM file specifies an import, which is one of the .js files that got generated along with it:
(import "./my-wasm-lib_bg.js" "__wbindgen_throw" (func (;0;) (type 0)))
This is the object it expects me to pass as the .instantiate() method's second argument:
let imports = {};
imports['./my-wasm-lib_bg.js'] = require('my-wasm-lib/my-wasm-lib_bg.js');
However, this only works if I run .instantiate() from the main thread. If I try those last lines inside the Worker's thread, I get "require is not defined".
Passing the instance to the Worker's thread doesn't work either, nor does passing the require function.
Is there any configuration I can use to keep that import out of my WASM lib? Or is there any way to call "require" from within the Worker's thread?
We are in the process of embedding JS in our application, and we will use a few dozen scripts, each assigned to an event. Inside these scripts we provide a minimal callback API,
function onevent(value)
{
  // user JavaScript code here
}
which is called whenever that event happens. The scripts have to have their own global scope, since this function always has the same name and we access it from C++ code with
duk_get_global_string(js_context_duk, "onevent");
duk_push_number(js_context_duk, val);
if (duk_pcall(js_context_duk, 1) != 0)
{
    printf("Duk error: %s\n", duk_safe_to_string(js_context_duk, -1));
}
duk_pop(js_context_duk); /* ignore result */
At the same time, we want to allow minimal communication between the scripts, e.g.
Script 1
var a = 1;
function onevent(val)
{
  log(a);
}
Script 2
function onevent(val)
{
  a++;
}
Is there a way to achieve this? Maybe by introducing our own 'ueber-global' object that is defined once and referenceable everywhere? It should be possible to add properties to this ueber-global object from any script, like
Script 1
function onevent(val)
{
  log(ueber.a);
}
Script 2
function onevent(val)
{
  ueber.a = 1;
}
Instead of simple JS files you could use modules. Duktape comes with a code example implementing a Node.js-like module system (including its code isolation). With that in place, you can export the variables that should be shareable.
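For illustration, a sketch of what the sharing could look like once such a module system is in place (file names and resolution are assumptions):
// shared.js -- loaded once and cached by the module system
exports.a = 1;

// script1.js
var shared = require('shared');
function onevent(val)
{
  log(shared.a);
}

// script2.js
var shared = require('shared');
function onevent(val)
{
  shared.a++;
}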
We have an approach that seems to work now. After creating the new context with
duk_push_thread_new_globalenv(master_ctx);
new_ctx = duk_require_context(master_ctx, -1);
duk_copy_element_reference(master_ctx, new_ctx, "ueber");
we issue this call sequence for all properties/objects/functions created in the main context:
void duk_copy_element_reference(duk_context* src, duk_context* dst, const char* element)
{
    duk_get_global_string(src, element);
    duk_require_stack(dst, 1);
    duk_xcopy_top(dst, src, 1);
    duk_put_global_string(dst, element);
}
It seems to work (because everything is in the same heap and everything is single-threaded). Maybe someone with deeper insight into Duktape can comment on this: is it a feasible solution with no side effects?
Edit: marking this as the answer. Works as expected, no memory leaks or other issues.
How can I run a single method multiple times, multi-threaded, when it is called as a method of a class?
At first I tried to use the cluster module, but I realized it just re-runs the whole process from the start, and rightfully so.
How can I achieve something like what's outlined below?
I want a class's method to spawn n processes, and when the parallel tasks are completed, I can resolve a promise which the method returns.
The problem with the code below is that calling cluster.fork() forks the whole index.js process.
index.js
const Person = require('./Person.js');
var Mary = new Person('Mary');
Mary.run(5).then(() => {...});
console.log('I should only run once, but I am called 5 times too many');
Person.js
const cluster = require('cluster');

class Person {
  run(distance) {
    var completed = 0;
    return new Promise((resolve, reject) => {
      for (var i = 0; i < distance; i++) {
        // run a separate process for each
        // (send() returns a boolean, so keep the worker in a variable
        // instead of chaining)
        const worker = cluster.fork();
        worker.send(i);
        worker.on('message', message => {
          if (message === 'completed') { ++completed; }
          if (completed === distance) { resolve(); }
        });
      }
    });
  }
}

module.exports = Person;
I think the short answer is that it's impossible. It's even worse: this has nothing to do with JS. To go multi-process (or multi-threaded) on your particular problem, you essentially need a copy of the object in every thread, since it (maybe) needs access to fields; that means you would have to either initialize it in every thread or share memory. The latter is not provided by cluster, as far as I know, and is not trivial in other languages in every use case.
If the calculation is independent of the Person, I suggest you extract it and use the usual pattern (in index.js):
if (cluster.isWorker) {
  // use the i for the calculation
} else {
  // create Person, then fork children in a for loop
}
You then collect the results and change the Person as needed. You will be copying index.js, but this is standard, and you only run what you need.
The problem is if the results depend on the Person. If they are constant for all i, you can still send them to your forks independently. Otherwise, what you have is the only way to fork. In general, forking in cluster is not meant for methods but for the app itself, which is the standard forking behavior. A fuller sketch of the extraction approach follows below.
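Here is a minimal sketch of that split; the 'completed' message convention is taken from the question, everything else is illustrative:
// index.js
const cluster = require('cluster');
const Person = require('./Person.js');

if (cluster.isWorker) {
  process.on('message', i => {
    // ... use i for the calculation ...
    process.send('completed');
    process.exit(0);
  });
} else {
  const Mary = new Person('Mary');
  const distance = 5;
  let completed = 0;
  for (let i = 0; i < distance; i++) {
    const worker = cluster.fork();
    worker.send(i);
    worker.on('message', message => {
      if (message === 'completed' && ++completed === distance) {
        console.log('all runs finished'); // update Mary here as needed
      }
    });
  }
}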
Another solution
Following your comment, I suggest you check out child_process.execFile or child_process.exec on the same file.
This way you can spawn a totally independent process on the fly. Instead of calling cluster.fork, you call execFile. You can use either the exit code or stdout as return values (stderr etc.). The Promise is now replaced with:
var results = []
for (var i = 0; i < distance; i++) {
  // run a separate process for each
  results.push(child_process.execFile('node', ['mymethod.js', i]));
}
// ... catch the exit event from all results or return a callback using results.
Inside mymethod.js, have your code take i and return what you want, either in the exit code or through stdout (both available from the returned child_process). This is a bit un-node.js-y since you're waiting on asynchronous calls, but your requirements are non-standard. Since I'm not sure how you use this, perhaps returning a callback with the array is a better idea.
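A sketch of what mymethod.js could look like (the argument position and output convention are assumptions):
// mymethod.js -- receives i as a command-line argument
const i = Number(process.argv[2]);
const result = i * 2; // placeholder for the real per-i work
process.stdout.write(String(result)); // or signal via process.exit(code)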
Note: My question is about the way of including/passing the dispatcher instance around, not about how the pattern is useful.
I am studying the Flux Architecture and I cannot get my head around the concept of the dispatcher (instance) potentially being included everywhere...
What if I want to trigger an Action from my Model Layer? It feels weird to me to include an instance of an object in my Model files... I feel like this is missing some injection pattern...
I have the impression that the exact PHP equivalent is something that feels horribly similar to:
<?php
$dispatcher = require '../dispatcher_instance.php';

class MyModel {
    ...
    public function someMethod() {
        ...
        $dispatcher->...
    }
}
I think my question is not exactly only related to the Flux Architecture but more to the NodeJS "way of doing things"/practices in general.
TLDR:
No, it is not bad practice to pass around the instance of the dispatcher in your stores
All data stores should have a reference to the dispatcher
The invoking/consuming code (in React, this is usually the view) should only have references to the action-creators, not the dispatcher
Your code doesn't quite align with React because you are creating a public mutable function on your data store.
The ONLY way to communicate with a store in Flux is via message passing which always flows through the dispatcher.
For example:
var Dispatcher = require('MyAppDispatcher');
var ExampleActions = require('ExampleActions');

var _data = 10;

var ExampleStore = assign({}, EventEmitter.prototype, {
  getData() {
    return _data;
  },

  emitChange() {
    this.emit('change');
  },

  dispatcherKey: Dispatcher.register(payload => {
    var {action} = payload;
    switch (action.type) {
      case ACTIONS.ADD_1:
        _data += 1;
        ExampleStore.emitChange();
        ExampleActions.doThatOtherThing();
        break;
    }
  })
});

module.exports = ExampleStore;
By closing over _data instead of having a data property directly on the store, you can enforce the message passing rule. It's a private member.
Also important to note: although you can call Dispatcher.dispatch() directly, it's not a good idea.
There are two main reasons to go through the action-creators (a sketch of one follows the list):
Consistency - This is how your views and other consuming code interacts with the stores
Easier Refactoring - If you ever remove the ADD_1 action from your app, this code will throw an exception rather than silently failing by sending a message that doesn't match any of the switch statements in any of the stores
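A sketch of such an action-creator, with names following the store example above (ACTIONS is assumed to be defined elsewhere):
// ExampleActions.js
var Dispatcher = require('MyAppDispatcher');

module.exports = {
  add1() {
    // the payload shape matches what the store destructures above
    Dispatcher.dispatch({
      action: {type: ACTIONS.ADD_1}
    });
  }
};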
Main Advantages to this Approach
Loose coupling - Adding and removing features is a breeze. Stores can respond to any event in the system by adding one line of code.
Less complexity - One-way data flow makes wrapping your head around the data flow a lot easier. Fewer interdependencies.
Easier debugging - You can debug every change in your system with a few lines of code.
debugging example:
var MyAppDispatcher = require('MyAppDispatcher');

MyAppDispatcher.register(payload => {
  console.debug(payload);
});
What is the best approach for listening for and emitting events in node.js?
I've been testing event emitting and listening in node.js by extending the model with EventEmitter, and I'm wondering whether this approach makes sense, since the events are only listened for when there is an instance of the model.
How can I make sure events are listened for as long as the node app is alive?
Example of extending the model using EventEmitter:
// myModel.js
var util = require('util');
var events2 = require('events').EventEmitter;

var MyModel = function() {
  events2.call(this);

  // create an event listener
  this.on('myEvent', function(value) {
    console.log('hi!');
  });
};

// inherit before adding prototype methods: util.inherits replaces the prototype
util.inherits(MyModel, events2);

MyModel.prototype.dummyFunction = function(params) {
  // just a dummy function.
};

module.exports = MyModel;
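For context, a minimal sketch of how such a model gets used, which shows why the listener only exists while an instance does:
// app.js
var MyModel = require('./myModel.js');
var model = new MyModel(); // the 'myEvent' listener exists from here on
model.emit('myEvent'); // logs "hi!"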
EDIT: A clearer version of this question would be: how do I keep a permanent process that listens for events during the app's execution and has global scope (something like a running event manager that listens for events produced anywhere in the app)?
Would requiring myModel.js in app.js be a solution? How are these kinds of things solved in node.js?
I'm not entirely sure what you mean about events only being active when there is an instance of a model, since without something to listen and react to them, events cannot occur.
Having said that, it is certainly reasonable to:
// util.inherits expects a constructor, so graft the emitter onto the global object instead:
Object.setPrototypeOf(global, EventEmitter.prototype);
EventEmitter.call(global);
global.on('myEvent', function() {
  /* do something useful */
});
which would allow you to:
global.emit('myEvent')
or even:
var ee = new EventEmitter();
ee.on('myEvent', ...);
As for how to properly use EventEmitter: it's defined as
function EventEmitter() {}
which does not provide for initialization, so it should be sufficient to:
var Thing = function() {};
util.inherits(Thing, EventEmitter);
which will extend instances of Thing with:
setMaxListeners(num)
emit(type,...)
addListener(type,listener) -- aliased as on()
once(type,listener)
removeListener(type,listener)
removeAllListeners()
listeners(type)
The only possible "gotcha" is that EventEmitter adds its own _events object property to any extended object (this), which means you should not give any of your own object properties that name unless you want unexpected behavior.
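A quick usage sketch of the extended Thing:
var util = require('util');
var EventEmitter = require('events').EventEmitter;

var Thing = function() {};
util.inherits(Thing, EventEmitter);

var thing = new Thing();
thing.on('myEvent', function(value) {
  console.log('received:', value);
});
thing.emit('myEvent', 42); // logs "received: 42"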