Is it possible to close a Puppeteer Browser using its contextId? - node.js

This is an update to a question I asked previously; I wasn't thinking straight when I asked it (I was taking a very backwards approach to my solution). I'm currently working with Puppeteer and trying to create an association between a particular task and a Puppeteer browser instance. Right now I am getting the browser's context id using:
const {browserContextId} = await browser._connection.send('Target.createBrowserContext');
and I am storing it in a file along with other details related to the task. My goal is to somehow be able to close that browser instance later using the stored context id. I've given this issue a read on Puppeteer's GitHub hoping it would help in some way, but it doesn't seem super helpful as it's not really related to what I'm doing.
The real issue is that I am going to be spawning browser instances in one file and attempting to close them in another; otherwise this wouldn't be a problem at all. Right now the only thing I've been able to do is spawn another browser instance using the context id (pretty much useless for my task), and I've had no luck closing or disposing of it.
Any help would be greatly appreciated, thanks!
P.S. If there is a better approach to solving this association issue I'm all ears!

Turns out I was thinking about it way too much and trying to make it too complex. For anyone trying to do something similar I'll leave my solution here for you.
Instead of using the browser's context id I found it much easier to just grab the browser's process id (pid). From there I could kill the process using different strategies based on where I was running the close command.
For Node.js:
const puppeteer = require('puppeteer');
const fs = require('fs');

// Let's say, for example, we're instantiating our browser like this
const browser = await puppeteer.launch({ headless: false });

// You can simply run this to get the browser's pid
const browserPID = browser.process().pid;

// Then you can either store it for use later
// (`file` and `jsondata` stand in for whatever path and task data you're tracking)
fs.writeFile(file, JSON.stringify(jsondata, null, 4), (err) => {
  if (err) console.log(err);
});

// Or just kill the process right away
process.kill(browserPID);
Keep in mind that if you are storing the PID you need to read the file and parse the data to pass into the process.kill command.
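For example, reading it back in the other file and killing the browser could look something like this (the file name and JSON shape are just placeholders for however you stored it):

const fs = require('fs');

// Assuming the task file contains something like { "browserPID": 12345, ... }
const jsondata = JSON.parse(fs.readFileSync('task.json', 'utf8'));

try {
  process.kill(jsondata.browserPID);
} catch (err) {
  // the process may already have exited, or you may lack permission to kill it
  console.log(err);
}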
For React via Electron:
// In the Electron renderer you can require process through window.require
const process = window.require('process');

// Then it's the same as before
process.kill(yourbrowserPID);
Hopefully my stupidity can help someone in the future if they are trying to do something similar. It was way easier than I was making it out to be.

Strapi & react-admin : I'd like to set 'Content-Range' header dynamically when any fetchAll query fires

I'm still a novice web developer, so please bear with me if I miss something fundamental!
I'm creating a backoffice for a Strapi backend, using react-admin.
The react-admin library uses a 'data provider' to link itself with an API. Luckily someone already wrote a data provider for Strapi. I had no problem with steps 1 and 2 of this README, and I can authenticate to Strapi within my React app.
I now want to fetch and display my Strapi data, starting with Users. In order to do that, quoting Step 3 of this readme: 'In controllers I need to set the Content-Range header with the total number of results to build the pagination'.
So far I tried to do this in my User controller, with no success.
What I try to achieve:
First, I'd like it to simply work with the ctx.set('Content-Range', ...) hard-coded in the controller, as in the aforementioned Step 3.
Second, it would be very dirty to copy/paste this logic into every controller (not to mention any future controllers), instead of having some callback function dynamically append the Content-Range header to any fetchAll request. Ultimately that's what I aim for, because with ~40 Strapi objects to administrate already and plenty more to come, it has to scale.
Technical info
node -v: 11.13.0
npm -v: 6.7.0
strapi version: 3.0.0-alpha.25.2
uname -r output: Linux 4.14.106-97.85.amzn2.x86_64
DB: mySQL v2.16
So far I've tried accessing the count() method of the User model as in the aforementioned Step 3, but my controller doesn't look like the example because I'm working with the users-permissions plugin.
This is the action I've tried to edit (located in project/plugins/users-permissions/controllers/User.js)
find: async (ctx) => {
  let data = await strapi.plugins['users-permissions'].services.user.fetchAll(ctx.query);
  data.reduce((acc, user) => {
    acc.push(_.omit(user.toJSON ? user.toJSON() : user, ['password', 'resetPasswordToken']));
    return acc;
  }, []);

  // Send 200 `ok`
  ctx.send(data);
},
From what I've gathered in the Strapi documentation (here and also here), context is a sort of wrapper object. I had only worked with Express-generated APIs before, so I understood this snippet as 'use the fetchAll method of the User model object, with ctx.query as an argument', but I had no luck logging this ctx.query. And as I can't log stuff, I'm kinda blocked.
In my exploration, I naively tried to log the full ctx object and work from there:
  // Send 200 `ok`
  ctx.send(data);
  strapi.log.info(ctx.query, ' were query');
  strapi.log.info(ctx.request, 'were request');
  strapi.log.info(ctx.response, 'were response');
  strapi.log.info(ctx.res, 'were res');
  strapi.log.info(ctx.req, 'were req');
  strapi.log.info(ctx, 'is full context');
},
Unfortunately, I fear I'm missing something obvious, as it gives me no output at all. Making a fetchAll request from my React app with these logs in place prints this in my terminal:
[2019-09-19T12:43:03.409Z] info were query
[2019-09-19T12:43:03.410Z] info were request
[2019-09-19T12:43:03.418Z] info were response
[2019-09-19T12:43:03.419Z] info were res
[2019-09-19T12:43:03.419Z] info were req
[2019-09-19T12:43:03.419Z] info is full context
[2019-09-19T12:43:03.435Z] debug GET /users?_sort=id:DESC&_start=0&_limit=10& (74 ms)
While in my frontend I get the good ol' 'The Content-Range header is missing in the HTTP Response' message I'm trying to solve.
After writing this wall of text I realize the logging issue is separated from my original problem, but if I was able to at least log ctx properly, maybe I'd be able to find the solution myself.
Trying to summarize:
Actual problem: how do I set my Content-Range header properly in my Strapi controller? (partially answered, cf. edit 3)
Collateral problem n°1: Can't even log ctx object (cf. edit 2)
Collateral problem n°2: Once I figure out the actual problem, is it feasible to address it dynamically (basically some callback function for index/fetchAll routes, in which the model is a variable, on which I'd call the appropriate count() method, and finally append the result to my response header)? I'm not asking for the code here, just if you think it's feasible and/or know a more elegant way.
Thank you for reading through, and excuse me if it was confusing; I wasn't sure which info would be relevant, so I thought the more the better.
/edit1: Forgot to mention, in my controller I also tried to log the strapi.plugins['users-permissions'].services.user object to see if it actually has a count() method, but had no luck with that either. I also tried the original snippet (Step 3 of the aforementioned README), but it failed as expected since, as far as I can tell, the User model isn't imported anywhere (the only import in User.js being lodash).
/edit2: About the logs, my bad, I just misunderstood the documentation. I now do:
ctx.send(data);
strapi.log.info('ctx should be : ', { ctx });
strapi.log.info('ctx.req = ', { ...ctx.req });
strapi.log.info('ctx.res = ', { ...ctx.res });
strapi.log.info('ctx.request = ', { ...ctx.request });
strapi.log.info('ctx.response = ', { ...ctx.response });
ctx logs this way; also, it seems the spread operator is needed to display nested objects ({ctx.req} crashes the server, {...ctx.req} is okay). Cool, because it narrows the question down to what's interesting.
/edit3: As expected, having logs helps big time. I've managed to display my users (although in the dirty way). I couldn't find any count() method, but looking at the data object that is passed to ctx.send(), it's equivalent to your typical 'res.data', i.e. plain JSON with my user list. So a simple .length did the trick:
let data = await strapi.plugins['users-permissions'].services.user.fetchAll(ctx.query);
data.reduce((acc, user) => {
  acc.push(_.omit(user.toJSON ? user.toJSON() : user, ['password', 'resetPasswordToken']));
  return acc;
}, []);

ctx.set('Content-Range', data.length); // <-- it did the trick

// Send 200 `ok`
ctx.send(data);
Now starting to work on the hard part: the dynamic callback function that will do that for any index/fetchAll call. Will update once I figure it out
I'm using React Admin and Strapi together and installed ra-strapi-provider.
It was a little tedious to paste the Content-Range header into all of my controllers, so I searched for a better solution. Then I found the middleware concept and created one that fits my needs. It's probably not the best solution, but it does its job well:
const _ = require("lodash");

module.exports = strapi => {
  return {
    // can also be async
    initialize() {
      strapi.app.use(async (ctx, next) => {
        await next();
        if (_.isArray(ctx.response.body))
          ctx.set("Content-Range", ctx.response.body.length);
      });
    }
  };
};
I hope it helps
For people still landing on this page:
Strapi has been updated from #alpha to #beta. Be careful, as some of the code in my OP is no longer valid; some of their documentation is also not up to date.
I failed to find a "clever" way to solve this problem; in the end I copy/pasted the ctx.set('Content-Range', data.length) bit in all relevant controllers and it just worked.
If somebody comes up with a clever solution to this problem I'll happily accept their answer. With the current Strapi version I don't think it's doable with policies or lifecycle callbacks.
The "quick & easy fix" is still to customize each relevant Strapi controller.
With strapi#beta you don't have direct access to the controllers' code: you'll first need to "rewrite" one with the help of this doc, then add the ctx.set('Content-Range', data.length) bit. Test it properly with RA; then for the other controllers you'll just have to create the folder, name the file, copy/paste your code, and "Search & Replace" the model name.
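To give an idea, a rewritten find action could look roughly like this (a sketch only: the extensions/ path and the fetchAll service call follow my understanding of the strapi@beta layout, so adapt it to your version):

// project/extensions/users-permissions/controllers/User.js
module.exports = {
  async find(ctx) {
    const data = await strapi.plugins['users-permissions'].services.user.fetchAll(ctx.query);

    ctx.set('Content-Range', data.length); // the bit react-admin needs for pagination

    ctx.send(data);
  },
};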
The "longer & cleaner fix" would be to dive into the react-admin source code and refactorize so the lack of "Content-Range" header doesn't break pagination.
You'd then have to maintain your own react-admin fork, so make sure you're already committed to this library and have A LOT of tables to manage through it (so many that customizing every Strapi controller would be too tedious).
Before forking RA, please remember all the stuff you can do with the Strapi backoffice alone (including embedding your custom React app into it) and ensure it will be worth the trouble.

My Discord.js bot uses a command handler. How can I then create play/skip/pause/resume/etc commands in different files?

I set up the command handler for my bot using the Discord.js guide (I am relatively new to Discord.js, as well as JavaScript itself, I'd say). However, as all my commands are in different files, is there a way that I can share variables between the files? I've tried experimenting with exporting modules, but sadly could not get it to work.
For example (I think it's somewhat understandable, but still), to skip a song you must first check if there is actually any audio streaming (which is all done in the play file), then end the current stream and move on to the next one in the queue (the variable for which is also in the play file).
I have gotten a separate music bot up and running, but all the code is in one file, linked together by if/else if/else chains. Perhaps I could just copy this code into the main file for my other bot instead of using the command handler for those specific commands?
I assume that there is a way to do this that is quite obvious, and I apologize if I am wasting people's time.
Also, I don't believe code is required for this question but if I'm wrong, please let me know.
Thank you in advance.
EDIT:
I have also read this question multiple times beforehand and have tried the solution, although I haven't gotten it to work.
A simple way to "carry over" variables without exporting anything is to assign them to a property of your client. That way, wherever you have your client (or bot) variable, you also have access to the needed information without requiring a file.
For example...
ready.js (assuming you have an event handler; otherwise your ready event)
client.queue = {};
// set up an empty queue array for every guild the bot is in
for (const guild of client.guilds.values()) client.queue[guild.id] = [];
play.js
const queue = client.queue[message.guild.id];
queue.push({ song: 'Old Town Road', requester: message.author.id });
queue.js
const queue = client.queue[message.guild.id];
message.channel.send(`**${queue.length}** song${queue.length !== 1 ? 's' : ''} queued.`)
.catch(console.error);
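One small note if you're using the command handler from the guide: command files usually receive message (and args) rather than client directly, but every message holds a reference to the client, so the same queue is still reachable there. A sketch (the file and command names are just examples):

// skip.js - inside your command's execute(message, args)
const queue = message.client.queue[message.guild.id];
if (!queue || queue.length === 0) {
  return message.channel.send('There is nothing to skip.');
}
const skipped = queue.shift(); // remove the current entry; your play logic then moves on to the next one
message.channel.send(`Skipped **${skipped.song}**.`);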

Nightmare doesn't run twice in a row - NodeJS

EDIT
I have noticed that removing the .end() call appears to solve the issue, but the Nightmare docs on .end() say: 'Completes any queue operations, disconnect and close the electron process.'
Now while this does solve the problem, am I now just opening more and more electron processes each time the route is called, which will eventually cause the server to run out of memory, or is this a safe way to fix the issue?
ORIGINAL TEXT
Please consider the following problem:
I am developing a Node based service that will allow the user to request screenshot of a particular URL.
For this I am using Nightmare to visit the URL, wait 2 seconds, take a screenshot, which is saved to the disk, convert it to base64, delete the image and then return the base64 string.
console.log('Nightmare starts');
nightmare
  .goto(url)
  .wait(2000)
  .screenshot(filename)
  .end()
  .then(function (result) {
    fs.exists(filename, function (exists) {
      if (exists) {
        var data = fs.readFileSync(filename);
        var base64 = data.toString('base64');
        fs.unlink(filename);
        var output = { 'message': 'success', 'map_image': base64 };
        res.send(output);
      }
    });
  })
  .catch(function (error) {
    console.error('Search failed:', error);
  });
console.log("Nightmare Finished");
The above code works just fine the first time it runs. However, any subsequent calls just log "Nightmare starts" and "Nightmare Finished" instantly, with the actual code in between not running. I don't get any errors displayed, and nothing is caught if I wrap it in a try/catch. The Node process requires a restart before it will work again.
Something worth noting is that I am running on a headless Ubuntu machine. As Electron (one of Nightmare's dependencies) appears to need a GUI, I am using xvfb to launch Node with the following command:
xvfb-run --auto-servernum --server-num=1 node server.js
I'm assuming this may be an issue with some resource not being released correctly on the first run, but any assistance would be appreciated.
I'm also open to any constructive criticism of my code; I'm very new to Node and I'm sure I'm not writing things in the most optimal way (sync file loading, etc.).
It appears that you are simply misplacing where you create the Nightmare instance. I can't help much more without additional code snippets and information, but broadly there are two approaches.
Way 1
Create a Nightmare instance for every request and close it once you are done with your task. It will take some time to boot up each instance, but it will also lessen the memory load, and you can have multiple Nightmare instances for different users.
Way 2
Don't call .end() and instead re-use the same Nightmare instance. Have multiple Nightmare instances and queue the screenshot calls. The websites will load fast and you won't pay the instance boot-up time, but you will have longer wait times as the queue grows.
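To illustrate Way 1, here is a rough sketch of creating and ending a fresh instance per request inside an Express-style route handler (the route, variable names, and the buffer-based screenshot are assumptions, not your exact setup):

const Nightmare = require('nightmare');

app.get('/screenshot', function (req, res) {
  // create a fresh instance per request instead of re-using one that has already been .end()-ed
  const nightmare = Nightmare({ show: false });

  nightmare
    .goto(req.query.url)
    .wait(2000)
    .screenshot()   // with no filename this resolves with a Buffer, so no temp file is needed
    .end()          // closes this request's Electron process once the queue completes
    .then(function (buffer) {
      res.send({ message: 'success', map_image: buffer.toString('base64') });
    })
    .catch(function (error) {
      console.error('Screenshot failed:', error);
      res.status(500).send({ message: 'error' });
    });
});

Because each request gets its own Electron process and ends it, you shouldn't accumulate orphaned processes the way dropping .end() would.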

running node.js and selenium on windows

I just picked up node.js and selenium the other day, so I apologize for this introductory question, but I haven't been able to find an answer. I've written a .js script that uses webdriverio. To use it I open 2 cmd windows (I'm running Windows 7): in one I type selenium-standalone start to get Selenium running, then in the other I run node ..../script.js . This gets me a beautiful browser that does what it's supposed to about 1 time in 10. The other 9 times out of 10 I get a "Session deleted due to client timeout". Since this is meant to be quick and easy I don't really care if it times out; I just want it to restart the process. Any suggestions on how to do this?
From the sounds of it, your node.js program may be trying to connect to the Selenium server without allowing enough time for the browser session to be established reliably. Perhaps a case for using .pause(10000), as in:
var Selenium = function () {
  this.client = webdriverio.remote(options);
};

Selenium.prototype.refreshURL = function (url, cb) {
  var self = this;
  this.client
    .init()
    .url(url)
    .pause(10000)
    // etc.
};
A good alternative to a fixed pause is to use the waitFor* utilities - there are multiple options, like
http://webdriver.io/api/utility/waitForVisible.html
or
http://webdriver.io/api/utility/waitForExist.html
.waitForVisible('body', 20000000).then(function (isVisible) {
  // .. you can also add a small timeout here to dodge low-hardware lag
});
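As for the original ask of simply restarting when the session times out, one rough option is to wrap the run in a small retry function and kick it off again from the .catch handler (a sketch; the retry limit and URL are placeholders, and options is the same config object as in the snippet above):

function runScript(attempt) {
  attempt = attempt || 1;
  var client = webdriverio.remote(options);

  return client
    .init()
    .url('http://example.com')
    // ... the rest of your script ...
    .end()
    .catch(function (err) {
      console.error('Attempt ' + attempt + ' failed: ' + err.message);
      if (attempt < 5) return runScript(attempt + 1); // start the whole run again
      throw err;
    });
}

runScript();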

is it possible to read /proc/{pid}/stat then transfer it to a client in real time?

Suppose I want to monitor MySQL. As far as I know, MySQL's runtime info is stored in /proc/{mysql_pid}/stat. So is it possible to read and parse the MySQL stat info via node.js and have a client display a chart in real time?
Nagios and its alternatives are heavyweight, and sometimes I just want to monitor some process info, so I want a lightweight solution.
I tried using Node Inotify, which is an excellent library. Yet, it seems the proc filesystem doesn't emit inotify events when the stat files change. If you're watching a file on a normal filesystem, though, this is how you can do it using that library:
var sys = require('sys');
var fs = require('fs');
var Inotify = require('inotify').Inotify;

var inotify = new Inotify();

function callback(ev) {
  console.log(sys.inspect(ev));
}

var home_dir = {
  path: '/proc/5499/stat'
  , watch_for: Inotify.IN_ALL_EVENTS
  , callback: callback
};

var home_watch_descriptor = inotify.addWatch(home_dir);
Just change Inotify.IN_ALL_EVENTS to whatever action you want to watch for, which is documented on the github page I linked.
Sorry this doesn't solve your particular problem, but I thought I'd post it informationally.
I'm assuming you haven't tried implementing a solution to your problem. What exactly do you mean by "real time"? What sort of client are you talking about?
If you're talking about a web browser client, there's no reason you couldn't update some sort of display every half-second with loads of clients (or much faster, if the charts aren't too intricate).
You should be more specific if you want a more specific answer than that.
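As a rough illustration of that kind of polling approach, since procfs doesn't cooperate with inotify, you can read the stat file on an interval, parse the fields you care about, and push them to the browser (a sketch only: the pid, interval, and stat-field indices are assumptions based on proc(5), and pushing via Socket.IO or similar is left out):

var fs = require('fs');

function readStat(pid) {
  // NB: splitting on spaces is fine as long as the process name (field 2)
  // contains no spaces, which is true for something like "(mysqld)"
  var fields = fs.readFileSync('/proc/' + pid + '/stat', 'utf8').split(' ');
  return {
    utime: Number(fields[13]), // user-mode CPU time, in clock ticks
    stime: Number(fields[14]), // kernel-mode CPU time, in clock ticks
    rss:   Number(fields[23])  // resident set size, in pages
  };
}

setInterval(function () {
  var stat = readStat(5499); // 5499 is just the example pid from the snippet above
  console.log(stat);         // replace with io.emit('stat', stat) or similar
}, 500);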
