Passing data from server to HTML with Electron/Node.js

I'm using a preload script and renderer JS to pass data from the HTML to the server. I have a main window, and I open another window (the add window). I take data from the add window and pass it to the server. I receive the data on the server, but I don't know how to send a callback with data from the server back to the main window's HTML.
In preload I have:
contextBridge.exposeInMainWorld(
  'windowControls',
  {
    add: (data) => ipcRenderer.send('item:add', data),
    received: (data) => ipcRenderer.send('item:received', data)
  }
)
In rendererAddwindow:
var input = document.getElementById("inputItem").value;
windowControls.add(input);
In app.js:
// Catch item:add
ipcMain.on('item:add', (e, item) => {
  console.log('item', item); // Here I can read item
  mainWindow.webContents.on('did-finish-load', () => {
    mainWindow.webContents.send('item:received', item)
  });
  addWindow.close();
})
What should I write in rendererMain to receive the data as a callback in the main window? The main renderer runs only at startup, not when the callback fires (if these lines trigger a callback at all).

The did-finish-load event is not what you are looking for. It is fired once the webpage has loaded, and it is emitted only once if you stay on the same page.
You have 2 solutions to answer a message received in the main process.
Invoke the message instead of sending it
You should refer to the documentation to learn about invoking the message.
Here is the example from the documentation:
// Renderer process
ipcRenderer.invoke('some-name', someArgument).then((result) => {
  // ...
})

// Main process
ipcMain.handle('some-name', async (event, someArgument) => {
  const result = await doSomeWork(someArgument)
  return result
})
Here is what it should look like in your example:
// Renderer process
ipcRenderer.invoke('item:add', item) // Sends the item to the main process and waits for the answer
  .then((data) => { // Callback triggered once the result comes back
    console.log(data) // Do what you want with the data
  })

// Main process
ipcMain.handle('item:add', async (event, item) => {
  console.log(item)
  return item // Or return whatever you want
})
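Note that since your renderer only sees what the preload script exposes, the invoke call belongs in the preload. A minimal sketch, reusing the windowControls bridge from the question (the Promise returned by invoke crosses the bridge):
// preload.js
contextBridge.exposeInMainWorld(
  'windowControls',
  {
    add: (data) => ipcRenderer.invoke('item:add', data) // returns a Promise
  }
)

// rendererAddwindow.js
windowControls.add(input).then((data) => {
  console.log('answer from main process:', data) // data is whatever the handler returned
})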
Send a new message
This is not the best solution since it can become very complex as the app grows. But you can send a new message from main to renderer:
// app.js file
ipcMain.on('item:add', (e, item) => {
  console.log({item})
  if (yourWindow) { // guards against yourWindow being null or undefined, which would throw
    yourWindow.webContents.send('item:received', item)
  }
})

In app.js (when data is received from the add window input):
// Catch item:add
ipcMain.on('item:add', (e, item) => {
  console.log('item', item); // Here I can read item
  mainWindow.webContents.send('itemreceived', item)
  addWindow.close();
})
In preload.js (outside contextBridge.exposeInMainWorld()):
const { contextBridge, ipcRenderer } = require('electron')

// Set up the context bridge between the renderer process and the main process
contextBridge.exposeInMainWorld(
  'windowControls',
  {
    close: () => ipcRenderer.send('windowControls:close'),
    maximize: () => ipcRenderer.send('windowControls:maximize'),
    minimize: () => ipcRenderer.send('windowControls:minimize'),
    add: (data) => ipcRenderer.send('item:add', data),
  }
)
ipcRenderer.on('itemreceived', (event, message) => {
  console.log('item received message', message);
})
A similar example is here: https://gist.github.com/malept/3a8fcdc000fbd803d9a3d2b9f6944612
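To get the message into the main window's renderer itself (rather than only logging it from the preload script), you can expose a subscription function over the bridge. A sketch; onItemReceived is a name chosen here for illustration:
// preload.js -- added to the object passed to exposeInMainWorld
onItemReceived: (callback) =>
  ipcRenderer.on('itemreceived', (event, message) => callback(message)),

// rendererMain.js
windowControls.onItemReceived((message) => {
  console.log('item received in main window', message);
});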

Related

Verifying a text on a dialog box with Cypress and Cucumber

I'm trying to verify a text message on a dialog box using Cypress and Cucumber. The test cases work perfectly fine when they are within the "it function". Here is the sample code:
it('Verify if the Login is successful', function() {
  cy.visit('loginTest.html')
  cy.get('#username').type('shahin')
  cy.get('#password').type('tala')
  cy.contains('Login').click()
  cy.on('window:alert', (str) => {
    expect(str).to.equal(`Login Successfully`)
  })
})
However, when I add the BDD keywords, it looks like the function doesn't get evaluated at all. It works for the When step but not the Then step. I think it needs to be handled differently in JS. I have uploaded the Cypress log as well. Below is the code:
When('I click on the login button', () => {
  cy.contains('Login').click()
})

Then('Successful POP up message should be displayed', () => {
  cy.on('window:alert', (str) => {
    expect(str).to.equal(`Login Successfully`)
  })
})
The first thing: cy.on('window:alert', ...) is a passive event listener; it does not do anything until an event is emitted by the app. This means you need to set it up before the event is triggered (e.g. the Login click):
When('I click on the login button', () => {
  cy.on('window:alert', ...something here...) // set up the event listener
  cy.contains('Login').click() // action that triggers the event
})
If you do your expect() within the callback of the event listener, it mucks up your BDD flow (the Then() becomes redundant). Instead, use a stub to catch the event, and assert the stub's properties inside Then().
let stub // declare outside so it's visible in both When and Then

When('I click on the login button', () => {
  stub = cy.stub() // set the stub here (must be inside a test)
  cy.on('window:alert', stub) // capture the call
  cy.contains('Login').click()
})

Then('message is displayed', () => {
  expect(stub).to.have.been.calledWith('Login Successfully')
})
Why does it() work?
Essentially, with it() all the code is within one block vs two blocks for When() and Then(). The async commands are queued for later execution, but the synchronous cy.on() is executed immediately; even though it's the last line, it runs first.
it('...', () => {
  // Queued and executed (slightly) later
  cy.visit('loginTest.html')
  cy.get('#username').type('shahin')
  cy.get('#password').type('tala')
  cy.contains('Login').click()

  // Executed immediately (so actually the first line to run)
  cy.on('window:alert', (str) => {
    expect(str).to.equal(`Login Successfully`)
  })
})
The When() and Then() blocks are executed in sequence, so you don't get the same pattern as with it().

Socket.io async/await for .on()

I'm building a socket.io Node.js application, and my socket.io server will be listening for data from many socket.io clients. I need to save data to an API via my socket.io server as quickly as possible, and figure that async/await is the best way forward.
Right now, I've got a function inside my .on('connection'), but is there a way I can make this an async function rather than have a nested function inside?
io.use((socket, next) => {
  if (!socket.handshake.query || !socket.handshake.query.token) {
    console.log('Authentication Error 1')
    return
  }
  jwt.verify(socket.handshake.query.token, process.env.AGENT_SIGNING_SECRET, (err, decoded) => {
    if (err) {
      console.log('Authentication Error 2')
      return
    }
    socket.decoded = decoded
    next()
  })
}).on('connection', socket => {
  socket.on('agent-performance', data => {
    async function savePerformance () {
      const saved = await db.saveToDb('http://127.0.0.1:8000/api/profiler/save', data)
      console.log(saved)
    }
    savePerformance()
  })
})
Sort of, but you'll probably want to keep your current code if there can be multiple agent-performance events. You could modify it as shown below, but it'd be messy and less readable. Event emitters still exist for a reason; they're not made obsolete by the introduction of promises. If it's performance you're after, your current code is probably faster, more resistant to backpressure, and easier to error-handle.
events.on is a utility function that takes an event emitter (like socket) and returns an iterator that yields promises. You can await those with for await...of.
events.once is a utility function that takes an event emitter (like socket) and returns a promise that resolves when the specified event is emitted.
const { on, once } = require('events');

(async function() {
  // This is an iterator that can emit an infinite number of times.
  const iterator = on(io, 'connection');
  // Yield a promise, await it, run what is between `{ }`, and repeat.
  for await (const socket of iterator) {
    const data = await once(socket, 'agent-performance');
    const saved = await db.saveToDb(/* etc */);
  }
})();
As the names imply, on is similar to socket.on and once is similar to socket.once. In the above example:
connected user 1, first agent-performance event: OK
connected user 1, second agent-performance event: not handled, there's no more event handler, since once is "used up".
connected user 2, first agent-performance event: OK
The documentation for on has a note about concurrency when using for await (x of on(...)), but I don't know if that would be a problem in your usecase.
// The execution of this inner block is synchronous and it
// processes one event at a time (even with await). Do not use
// if concurrent execution is required.
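As an aside, if you keep the emitter-based version, the nested named function in the question isn't needed: socket.on accepts an async function directly. A minimal sketch, assuming db.saveToDb returns a promise as in the question:
io.on('connection', socket => {
  socket.on('agent-performance', async data => {
    try {
      const saved = await db.saveToDb('http://127.0.0.1:8000/api/profiler/save', data)
      console.log(saved)
    } catch (err) {
      console.error(err) // rejected promises must be caught here
    }
  })
})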

nodejs setTimeout and recursive calls context (this/self)

I'm currently working on a project and got stuck at the setTimeout() function in a recursive function. I'm rather new to promises, so I may not have implemented that part correctly either.
The program should:
add a listener to the stream's 'readable' event
write a request to the stream if specific periodic data is read
resolve the promise and remove the listener after some other data (the answer) is received
send the message from the stream to another process
repeat by calling the same method recursively with a delay of 10 seconds
Basically, I'm trying to poll a stream every 10 seconds.
Simplified, the code looks like this:
class XYZ {
  myFunction(commands, intervall, i) {
    var self = this;
    var promise = new Promise((resolve, reject) => {
      // I have to write to a Stream and listen for an answer
      self.dataStream.write(someData, () => {
        self.dataStream.addListener('readable', handleStuff);
      });
      // Function that handles incoming data from the Stream
      var handleStuff = function () {
        if (self.dataStream == someFormat) {
          self.dataStream.write(commands[i]);
        } else {
          self.dataStream.removeListener('readable', handleStuff);
          resolve(self.dataStream.read());
        }
      }
    });
    // Resolve by sending the msg and calling recursively
    promise.then((message) => {
      self.send(message);
      if (i + 1 > resetValue) {
        setTimeout(() => {
          self.myFunction(commands, intervall, 0);
        }, intervall);
      } else {
        self.myFunction(commands, intervall, i + 1);
      }
    });
  }
};
And I call it like this:
var myXYZ = new XYZ();
myXYZ.myFunction(myCommands, 10000, 0);
Now when I run this, the initial call works just fine and sends the message from the dataStream to another process. But when setTimeout() fires, the function gets "stuck" after writing data to the stream for the first time, and the promise is neither resolved nor rejected.
My guess is that I'm mixing up the context (this/self) in my code. There's sadly no error message, so I think my logic is faulty. It also works fine if I just remove the setTimeout() call. My question now is: how does setTimeout() change the context from which the code operates?
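For reference, here is a minimal illustration (separate from the code above) of how setTimeout interacts with this: an arrow callback keeps the lexical this, while a plain function callback loses it (in Node it is rebound to the Timeout object), which is why the var self = this pattern or arrow functions are used at all:
class Demo {
  constructor() { this.name = 'demo'; }
  run() {
    setTimeout(() => {
      console.log(this.name); // 'demo' -- an arrow function keeps the lexical `this`
    }, 10);
    setTimeout(function () {
      console.log(this.name); // undefined -- `this` is the Timeout object, not the Demo instance
    }, 10);
  }
}
new Demo().run();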

Why is my RxJS Observable completing right away?

I'm a bit new to RxJS and it is kicking my ass, so I hope someone can help!
I'm using RxJS (5) on my Express server to handle behaviour where I have to save a bunch of Document objects and then email each of them to their recipients. The code in my documents/create endpoint looks like this:
// Each element in this stream is an array of `Document` model objects: [<Document>, <Document>, <Document>]
const saveDocs$ = Observable.fromPromise(Document.handleCreateBatch(docs, companyId, userId));

const saveThenEmailDocs$ = saveDocs$
  .switchMap((docs) => sendInitialEmails$$(docs, user))
  .do(x => {
    // Here x is the `Document` model object
    debugger;
  });
// First saves all the docs, and then begins to email them all.
// The reason we want to save them all first is because, if an email fails,
// we can still ensure that the document is saved
saveThenEmailDocs$
  .subscribe(
    (doc) => {
      // This never hits
    },
    (err) => {},
    () => {
      // This hits immediately.. Why though?
    }
  );
The sendInitialEmails$$ function returns an Observable and looks like this:
sendInitialEmails$$ (docs, fromUser) {
  return Rx.Observable.create((observer) => {
    // Emails each document to their recipients
    docs.forEach((doc) => {
      mailer.send({...}, (err) => {
        if (err) {
          observer.error(err);
        } else {
          observer.next(doc);
        }
      });
    });
    // When all the docs have finished sending, complete the stream
    observer.complete();
  });
}
The problem is that when I subscribe to saveThenEmailDocs$, my next handler is never called and it goes straight to complete. I have no idea why... Conversely, if I remove the observer.complete() call from sendInitialEmails$$, the next handler is called every time and the complete handler in subscribe is never called.
Why isn't the expected behaviour of next, next, complete happening? Instead it's one or the other... Am I missing something?
I can only assume that mailer.send is an asynchronous call. Your observer.complete() is called when all the asynchronous calls have been launched, but before any of them have completed.
In such cases I would either make a stream of observable values from the docs array rather than wrap it like this, or, if you would like to wrap it manually into an observable, I suggest you look into the async library and use:
async.each(docs, function(doc, callback) {...}, function finalized(err){...})
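For the first suggestion, here is a sketch of what "a stream of observable values from the docs array" could look like in RxJS 5, assuming mailer.send takes (options, callback(err)) as in the question:
sendInitialEmails$$ (docs, fromUser) {
  const send$ = Rx.Observable.bindNodeCallback(mailer.send.bind(mailer));
  return Rx.Observable.from(docs)
    .mergeMap((doc) => send$({ /* ... */ }).mapTo(doc));
  // `next` fires once per sent doc; `complete` fires only after every
  // inner send$ observable has completed, which is the ordering the
  // question expects.
}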

EventEmitter memory leak detected: Proper way to pass CSV data to multiple modules?

I am dipping my toe into using different npm modules my own way, whereas before I just executed already-created gulpfiles. The npm module penthouse loads a webpage and determines the above-the-fold CSS for that page. I am trying to take that module and use it with a site crawler so I can get the above-the-fold CSS for all pages, and store that CSS in a table.
So essentially I am:
crawling the site to get all the URLs
capturing the page id from each URL
storing pages and their ids in a CSV
loading the CSV and passing each URL to penthouse
taking the penthouse output and storing it in a table
So I am fine up until the last two steps. When I am reading the CSV, I get the error: possible EventEmitter memory leak detected. 11 exit listeners added. Use emitter.setMaxListeners() to increase limit.
The stack trace points here at line 134. After reading about the error, it makes sense because I see a bunch of event listeners being added, but I don't see penthouse ever really executing and closing the event listeners.
It works just fine standalone as expected (running penthouse against a single page, then exiting). But when I execute the code below to try and loop through all URLs in the CSV, it spits out the memory-leak error twice and just hangs. None of my console.log statements in the following script are executed.
However, I added a console.log to the end of the penthouse index.js file, and it is executed multiple times (where it adds event listeners), but it never times out or exits.
So it's clear I am not integrating this properly, but not sure how to proceed. What would be the best way to force it to read one line in the CSV at a time, process the URL, then take the output and store it in the DB before moving onto the next line?
const fs = require('fs');
var csv = require('fast-csv');
var penthouse = require('penthouse'),
    path = require('path');

var readUrlCsv = function() {
  var stream = fs.createReadStream("/home/vagrant/urls.csv");
  var csvStream = csv()
    // returns a single line from the CSV
    .on("data", function(data) {
      // data[0]: table id, data[1]: page type, data[2]: url
      penthouse({
        url : data[2],
        css : './dist/styles/main.css'
      }, function(err, criticalCss) {
        if (err) {
          console.log(err);
        }
        console.log('do we ever get here?'); // answer is no
        if (data[1] === 'post') {
          wp.posts().id( data[0] ).post({
            inline_css: criticalCss
          }).then(function( response ) {
            console.log('saved to db');
          });
        } else {
          wp.pages().id( data[0] ).page({
            inline_css: criticalCss
          }).then(function( response ) {
            console.log('saved to db');
          });
        }
      });
    })
    .on("end", function(){
      console.log("done");
    });
  return stream.pipe(csvStream);
};
UPDATE
Changed my method to look like the below so it processes all rows first, but it still throws the same error. It writes "done" to the console, then immediately spits out the memory warning twice.
var readUrlCsv = function() {
  var stream = fs.createReadStream("/home/vagrant/urls.csv");
  var urls = [];
  var csvStream = csv()
    .on("data", function(data) {
      // data[0]: table id, data[1]: page type, data[2]: url
      urls.push(data);
    })
    .on("end", function(){
      console.log("done");
      buildCriticalCss(urls);
    });
  return stream.pipe(csvStream);
};
var buildCriticalCss = function(urls) {
  //console.log(urls);
  urls.forEach(function(data, idx) {
    //console.log(data);
    penthouse({
      url : data[2],
      css : './dist/styles/main.css',
      // OPTIONAL params
      width : 1300,   // viewport width
      height : 900,   // viewport height
      timeout: 30000, // ms; abort critical css generation after this timeout
      strict: false,  // set to true to throw on css errors (will run faster if no errors)
      maxEmbeddedBase64Length: 1000 // characters; strip out inline base64 encoded resources larger than this
    }, function(err, criticalCss) {
      if (err) {
        console.log(err);
      }
      console.log('do we ever finish one?');
      if (data[1] === 'post') {
        console.log('saving post ' + data[0]);
        wp.posts().id( data[0] ).post({
          inline_css: criticalCss
        }).then(function( response ) {
          console.log('saved post to db');
        });
      } else {
        console.log('saving page ' + data[0]);
        wp.pages().id( data[0] ).page({
          inline_css: criticalCss
        }).then(function( response ) {
          console.log('saved page to db');
        });
      }
    });
  });
};
UPDATE 2
I took the simple approach of controlling the number of concurrent processes spawned.
var readUrlCsv = function() {
  var stream = fs.createReadStream("/home/vagrant/urls.csv");
  var urls = [];
  var csvStream = csv()
    .on("data", function(data) {
      // data[0]: table id, data[1]: page type, data[2]: url
      urls.push(data);
    })
    .on("end", function(){
      console.log("done");
      //console.log(urls);
      buildCriticalCss(urls);
    });
  return stream.pipe(csvStream);
};

function buildCriticalCss(data) {
  var row = data.shift();
  console.log(row);
  penthouse({
    url : row[2],
    css : './dist/styles/main.css',
    // OPTIONAL params
    width : 1300,   // viewport width
    height : 900,   // viewport height
    timeout: 30000, // ms; abort critical css generation after this timeout
    strict: false,  // set to true to throw on css errors (will run faster if no errors)
    maxEmbeddedBase64Length: 1000 // characters; strip out inline base64 encoded resources larger than this
  }, function(err, criticalCss) {
    if (err) {
      console.log(err);
    }
    // handle your criticalCSS
    console.log('finished');
    console.log(row[2]);
    // now start the next job, if we have more urls
    if (data.length !== 0) {
      buildCriticalCss(data);
    }
  });
}
The error message you're seeing is a default warning printed to the console by Node's events library when more than the allowed number of event listeners are defined for an instance of EventEmitter. It does not indicate an actual memory leak. Rather, it is displayed to make sure you're aware of the possibility of a leak.
You can see this by checking the events.EventEmitter source code at lines 20 and 244.
To stop EventEmitter from displaying this message, and since penthouse does not expose its specific EventEmitter, you'll need to set the default allowed event emitter listeners to something larger than its default value of 10 using:
var EventEmitter = require('events').EventEmitter;
EventEmitter.defaultMaxListeners = 20;
Note that according to Node's documentation for EventEmitter.defaultMaxListeners, this will change the maximum number of listeners for all instances of EventEmitter, including those that have already been defined previous to the change.
Or you could simply ignore the message.
Further to the hanging of your code, I'd advise gathering all the results from the parsing of your CSV into an array, and then processing the array contents separately from the parsing process.
This would accomplish two things. It would allow you to:
be assured the entire CSV file was valid before you started processing, and
instrument debugging messages while processing each element, which would give you deeper insight into how each element of the array was processed.
UPDATE
As noted below, depending on how many URLs you're processing, you're probably overwhelming Node's ability to handle all of your requests in parallel.
One easy way to proceed would be to use eventing to marshal your processing so your URLs are processed sequentially, as in:
var assert = require('assert'),
    events = require('events'),
    fs = require('fs'),
    csv = require('fast-csv'),
    penthouse = require('penthouse');

var emitter = new events.EventEmitter();
/** Container for URL records read from CSV file.
 *
 * @type {Array}
 */
var urls = [];
/** Reads urls from file and triggers processing
 *
 * @emits processUrl
 */
var readUrlCsv = function() {
  var stream = fs.createReadStream("/home/vagrant/urls.csv");
  stream.on('error', function(e){ // always handle errors!!
    console.error('failed to createReadStream: %s', e);
    process.exit(-1);
  });
  var csvStream = csv()
    .on("data", function(data) {
      // data[0]: table id, data[1]: page type, data[2]: url
      urls.push(data);
    })
    .on("end", function(){
      console.log("done reading csv");
      //console.log(urls);
      emitter.emit('processUrl'); // start processing URLs
    })
    .on('error', function(e){
      console.error('failed to parse CSV: %s', e);
      process.exit(-1);
    });
  // no return required since we don't do anything with the result
  stream.pipe(csvStream);
};
/** Event handler to process a single URL
 *
 * @emits processUrl
 */
var onProcessUrl = function(){
  // always check your assumptions
  assert(Array.isArray(urls), 'urls must be an array');
  var urlRecord = urls.shift();
  if (urlRecord) {
    assert(Array.isArray(urlRecord), 'urlRecord must be an array');
    assert(urlRecord.length > 2, 'urlRecord must have at least three elements');
    penthouse(
      {
        // ...
      },
      function(e, criticalCss){
        if (e) {
          console.error('failed to process record %s: %s', urlRecord, e);
          return; // IMPORTANT! do not drop through to rest of func!
        }
        // do what you need with the result here
        if (urls.length === 0) { // ok, we're done
          console.log('completed processing URLs');
          return;
        }
        emitter.emit('processUrl');
      }
    );
  }
}
/**
 * processUrl event - triggers processing of the next URL
 *
 * @event processUrl
 */
emitter.on('processUrl', onProcessUrl); // assign the handler
// start everything going...
readUrlCsv();
The benefit of using events here rather than your solution is the lack of recursion, which can easily overwhelm your stack.
Hint: You can use events to handle all program flow issues normally addressed by Promises or modules like async.
And since events are at the very heart of Node (the "event loop"), it's really the best, most efficient way to solve such problems.
It's both elegant and "The Node Way"!
Here is a gist that illustrates the technique, without relying on streams or penthouse, the output of which is:
url: url1
RESULT: RESULT FOR url1
url: url2
RESULT: RESULT FOR url2
url: url3
RESULT: RESULT FOR url3
completed processing URLs
Besides using console.logs, which is usually enough, you can also use the built-in debugger: https://nodejs.org/api/debugger.html
Another thing you can do is go into the node_modules/penthouse directory and add your console.logs or a debugger statement into the code for that module. That way you can debug your program there rather than treating the module as a black box.
Also make sure there isn't some kind of race condition where, for example, the CSV doesn't always get written before your code tries to read it in.
I think that the memory leak issue is probably a red herring as far as making your code function.
From your comment it sounds like you want to do something like the following with async.mapSeries: http://promise-nuggets.github.io/articles/15-map-in-series.html. You could also use promises as it shows, or even, after getting promises set up, use async/await with a regular for loop after compiling with Babel. In the long run I recommend doing that sort of thing with async/await and Babel, but that might be overkill just to get this working.
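A sketch of the async.mapSeries shape with the data from this question (assuming the async library is installed and urls is the array built from the CSV):
var async = require('async');

async.mapSeries(urls, function (row, callback) {
  // row[0]: table id, row[1]: page type, row[2]: url
  penthouse({
    url: row[2],
    css: './dist/styles/main.css'
  }, callback); // penthouse's (err, criticalCss) callback matches mapSeries
}, function (err, results) {
  if (err) return console.error(err);
  console.log('generated critical css for %d urls', results.length);
});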
