SlimerJS extensions.getAddons.cache.enabled - node.js

I'm having an issue running SlimerJS through CasperJS; I get the following message: "1414441945905 addons.repository WARN cacheEnabled: Couldn't get pref: extensions.getAddons.cache.enabled".
I have a Node.js scraper running with CasperJS and SlimerJS (0.9.3) as the engine. This process is executed by another process (as a child_process.spawn) and is scheduled with PM2. It runs okay, but it sometimes throws this error and hangs. Any ideas?

I had the same error. To fix it, you should add the line:
pref("extensions.getAddons.cache.enabled", true);
into your *pref*.js file. (In your Firefox folder, execute this:
nano `find ./ -name "*pref*.js"`
)
See this for details: http://bugzilla.mozilla.org/show_bug.cgi?id=953998#c4

Related

Open localhost:3000 in kiosk mode after the Node.js server has finished spinning up

I'm working on a raspberry pi project that involves running a node server in kiosk mode.
I'm using BROWSER=none to suppress the default opening of localhost when the server runs.
I'm thinking I should be able to use wait-on to force the bash script that runs the kiosk mode to wait until the server is fully up. Would I use something like this?
"scripts": {
...
"kiosk": "concurrently -n \"npm start\" \"wait-on http://localhost:3000 & /home/pi/kiosk.sh\""
},
It gives me the following error(s) which I'm not quite able to decipher:
[npm start] server does not have extension for -dpms option
[npm start] libEGL warning: DRI2: failed to authenticate
[npm start] [1498:1498:1125/180040.467781:ERROR:gpu_init.cc(441)] Passthrough is not supported, GL is egl
[npm start] [1498:1498:1125/180040.786918:ERROR:viz_main_impl.cc(162)] Exiting GPU process due to errors during initialization
[npm start] [1558:1558:1125/180041.392714:ERROR:gpu_init.cc(441)] Passthrough is not supported, GL is swiftshader
[npm start] [1443:1590:1125/180042.359030:ERROR:object_proxy.cc(622)] Failed to call method: org.freedesktop.DBus.Properties.Get: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
[npm start] [1443:1590:1125/180042.364570:ERROR:object_proxy.cc(622)] Failed to call method: org.freedesktop.UPower.GetDisplayDevice: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
[npm start] [1443:1590:1125/180042.367155:ERROR:object_proxy.cc(622)] Failed to call method: org.freedesktop.UPower.EnumerateDevices: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
[npm start] Fontconfig error: Cannot load default config file: No such file: (null)
I'm now realizing the error in my code has more to do with kiosk.sh than it does with the npm commands. Here's the code to kiosk.sh:
#!/bin/bash
# disable screen blanking and display power management
xset s noblank
xset s off
xset -dpms
# hide the mouse cursor when idle
unclutter -root &
# clear chromium's crash flags so it doesn't show a restore prompt
sed -i 's/"exited_cleanly":false/"exited_cleanly":true/' /home/pi/.config/chromium/Default/Preferences
sed -i 's/"exit_type":"Crashed"/"exit_type":"Normal"/' /home/pi/.config/chromium/Default/Preferences
# launch chromium in kiosk mode, backgrounded
/usr/bin/chromium-browser --noerrdialogs --disable-infobars --kiosk http://localhost:3000/ &
& and && mean different things: && means AND (only run the next command if the previous one succeeded), while & means background process: run that service in the background and continue immediately with the next command.
I think what you're trying to do is wait-on service && example, not wait-on service & example.
With what you've written, it will run wait-on, immediately background it, and then immediately run the shell script without waiting for anything. Your script will run before the server is up.
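For example, the script entry from the question would become (a sketch that keeps the question's command as-is apart from the operator):
"kiosk": "concurrently -n \"npm start\" \"wait-on http://localhost:3000 && /home/pi/kiosk.sh\""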
That's not really your issue though, I believe your issue is with chromium itself. There's an open issue for it here: https://bugs.chromium.org/p/chromium/issues/detail?id=1221905&q=Passthrough%20is%20not%20supported%2C%20GL%20is%20swiftshader&can=1. That issue was last updated earlier this year and seems to still be unresolved.
There was also another answer for it here: Passthrough is not supported, GL is disabled.
I've seen quite a few people suggest using --headless, --disable-gpu and --disable-software-rasterizer. People have mentioned that some of those options are only required on Windows and that some have already been fixed; I don't know which of those are actually required.
This answer here: Force headless chromium/chrome to use actual gpu instead of Google SwiftShader, mentioned that you can force webgl using --enable-webgl to prevent it from loading swiftshader and use the gpu. You can do this if you need to force it in headless mode.
It seems to have something to do with WebGL or hardware acceleration. Apparently it happens if you've disabled GPU acceleration and Chromium is then forced to fall back on SwiftShader.
I don't know which one of those is actually going to help you; you'll have to play around with it. However, I have seen over 10 different Chromium and other related issues, all filed during 2021, because of this bug in Chromium.
What's more, I'm not sure it's actually a critical error; some people mention it just prints the error and can be ignored. I don't know if that's the case.
I assume that you are using the package "wait-on" (https://www.npmjs.com/package/wait-on). The wait-on command is used without npm in front of it.
Try to use
wait-on http://localhost:3000 && /home/pi/kiosk.sh
You could use Node's built-in "child_process" module to execute your bash script once the server is ready. Assuming you use Express.js in your backend, this should work with little modification:
const { exec } = require('child_process');

// ...all your other codes and whatevers...

app.listen(3000, () => {
  // run the kiosk script once the server is actually listening
  exec('sh kiosk.sh', (error, stdout, stderr) => {
    if (error) {
      console.log(`exec error: ${error}`);
    }
  });
});
wait-on waits until the process is closed. You are not closing Chromium, so it never continues. If you want to wait until the server is running, you can log the server's status to a text file and have your bash script read it in a loop until it contains the ready text you specify.
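For instance, a sketch of that polling loop (the status-file path and the ready text are assumptions):
#!/bin/bash
# wait until the server has written its ready marker, then start the kiosk
until grep -q "ready" /tmp/server-status.txt 2>/dev/null; do
    sleep 1
done
/home/pi/kiosk.sh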
If you want to confirm beyond a reasonable doubt that the server is running as needed, you can install the npm package puppeteer, then create and run a Node script from bash that uses the page.goto command to load the web page in an instance of Chromium and waitForSelector to check that a DOM element of your web page exists.
Then call process.exit() with whatever exit codes you want to use to confirm that the page is live and running.
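A minimal sketch of that check (the URL and the #app selector are assumptions; point them at your page):
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  try {
    await page.goto('http://localhost:3000');
    // assumed selector; use an element your page is known to render
    await page.waitForSelector('#app', { timeout: 10000 });
    process.exitCode = 0; // page is live and running
  } catch (err) {
    process.exitCode = 1; // server not ready or element missing
  } finally {
    await browser.close();
  }
})();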

How to run two shell scripts at startup?

I am working with Ubuntu 16.04 and I have two shell scripts:
run_roscore.sh : This one fires up a roscore in one terminal.
run_detection_node.sh : This one starts an object detection node in another terminal and should start up once run_roscore.sh has initialized the roscore.
I need both the scripts to execute as soon as the system boots up.
I made both scripts executable and then added the following command to cron:
@reboot /path/to/run_roscore.sh; /path/to/run_detection_node.sh, but it is not running.
I have also tried adding both scripts to Startup Applications, using this command for roscore: sh /path/to/run_roscore.sh and this command for the detection node: sh /path/to/run_detection_node.sh. It still does not work.
How do I get these scripts to run?
EDIT: I used the following command to see the system log for the CRON process: grep CRON /var/log/syslog and got the following output:
CRON[570]: (CRON) info (No MTA installed, discarding output).
So I installed an MTA, and then the syslog shows:
CRON[597]: (nvidia) CMD (/path/to/run_roscore.sh; /path/to/run_detection_node.sh)
I am still not able to see the output (which is supposed to be a camera stream with detections, as I see it when I run the scripts directly in a terminal). How should I proceed?
Since I got this working eventually, I am gonna answer my own question here.
I did the following steps to get the script running from startup:
Changed the type of the script from shell to bash (extension .bash).
Changed the shebang statement to be #!/bin/bash.
In Startup Applications, give the command bash path/to/script to run the script.
Basically, once I changed the shell type from sh to bash, the script started running as soon as the system boots up.
Note, in case this helps someone: my intention in having run_roscore.bash as a separate script was to run roscore as a background process. One can run it directly from a single script (which also runs the detection node) by putting roscore & as a command before the rosnode starts. This command fires up the master as a background process and leaves the same terminal open for the following commands to be executed.
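A sketch of that combined script (the 5-second delay is an assumption; adjust it to however long your master takes to initialize):
#!/bin/bash
roscore &                          # fire up the ROS master in the background
sleep 5                            # assumed delay; give the master time to come up
/path/to/run_detection_node.sh     # then start the detection node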
If you can install immortal, you can use the require option to start your services in sequence. For example, this could be the run config for /etc/immortal/script1.yml:
cmd: /path/to/script1
log:
    file: /var/log/script1.log
wait: 1
require:
    - script2
And for /etc/immortal/script2.yml:
cmd: /path/to/script2
log:
    file: /var/log/script2.log
What this will do is try to start both scripts at boot time; script1 will wait 1 second before starting and will also wait for script2 to be up and running. See more about the wait and require options here: https://immortal.run/post/immortal/
Depending on your operating system, you will need to configure/set up immortaldir; here is how to do it for Linux: https://immortal.run/post/how-to-install/
Going deeper into the topic of supervisors, there are more alternatives; you can find some here: https://en.wikipedia.org/wiki/Process_supervision
If you want to make sure that "Roscore" (whatever it is) gets started when your Ubuntu starts up, then you should start it as a service (not via cron).
See this question/answer.
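For example, a minimal systemd unit along those lines (a sketch; the unit name and paths are assumptions):
# /etc/systemd/system/roscore.service
[Unit]
Description=ROS master
After=network.target

[Service]
ExecStart=/path/to/run_roscore.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
Then sudo systemctl enable roscore.service makes it start at boot.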

nodejs require() fails when called from php script (linux)

I have this script, minimathjax.js
console.log('toto');
var mjAPI = require("/home/pi/node_modules/MathJax-node/lib/mj-page.js");
console.log('titi');
....
It works fine when called from console ('node minimathjax.js' in its folder).
But when I try to call it from a PHP file:
$string = 'node /home/pi/node_modules/MathJax-node/minimathjax.js';
$res = exec ($string);
echo $res;
I just get 'toto', indicating that the require() fails.
How can I solve that? It worked when I wrote it on Windows and fails on Linux (Raspbian).
Is it related to permissions?
You can use the ls command to check file permissions.
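For example (path taken from the question):
ls -l /home/pi/node_modules/MathJax-node/lib/mj-page.js
The user PHP runs as (often www-data) needs read access to that file and execute access to every directory above it.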
I solved this: I had installed a deprecated package (MathJax-node has become mathjax-node); the new version doesn't fail in its require...
But I still don't understand why it works directly yet fails when called from PHP.

mocha-phantomjs-core - slimerjs hangs without any error

Using mocha-phantomjs-core with slimerjs
I manage to run my tests successfully from CMD:
slimerjs mocha-phantomjs-core.js tests.html tap
The SlimerJS window opens, I see a browser window, and all seems good, but the CMD doesn't finish (it seems to wait for something). Nothing happens until I close the SlimerJS window. I want to output the test results (using the TAP reporter) to a file.
Is that possible?
https://github.com/nathanboktae/mocha-phantomjs-core/issues/25
system.stderr.writeLine doesn't work on CMD or Git Bash... I changed the mocha-phantomjs-core.js fail function to write to stdout instead of stderr. Now I get the error:
Likely due to external resource loading and timing, your tests require
calling window.initMochaPhantomJS() before calling any mocha setup
functions. See #12
So I had to add window.initMochaPhantomJS() before the setup function... how silly! All this because I couldn't see the error, since it went to stderr and was never printed.
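For reference, a sketch of what that looks like in the test HTML (the typeof guard keeps the page working when opened outside mocha-phantomjs-core):
<script>
  // must run before any mocha setup functions
  if (typeof window.initMochaPhantomJS === 'function') {
    window.initMochaPhantomJS();
  }
  mocha.setup('bdd');
</script>
And since the reporter now writes to stdout, a shell redirect should capture the TAP output to a file:
slimerjs mocha-phantomjs-core.js tests.html tap > results.tap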

Gulp.js process working on dev but not test/prod

I have a gulp.js process using the gulp-phantom plugin that works perfectly on my dev setup (Mac OS X 10.10); however, on my test/prod environment (EC2 Amazon Linux) it doesn't work at all. It also isn't giving any error message or other helpful output; the task just starts and finishes again almost straight away:
Dev environment output:
$ gulp crawlSite
[17:39:19] Using gulpfile ~/Documents/dev/mysite.co.uk/gulpfile.js
[17:39:19] Starting 'crawlSite'...
[17:40:15] Finished 'crawlSite' after 57 s
Test environment output:
$ gulp crawlSite
[17:34:27] Using gulpfile /var/www/html/mysite.co.uk/gulpfile.js
[17:34:27] Starting 'crawlSite'...
[17:34:27] Finished 'crawlSite' after 715 ms
As you can see, on the dev environment the process takes 57 seconds, but on test it takes only 715 milliseconds and does not create the files that my phantom script should be creating. My gulp task is very simple:
gulp.task('crawlSite', function() {
    return gulp.src("phantom-crawl-website.js")
        .pipe(phantom());
});
and my phantom script "phantom-crawl-website.js" file is in the same directory as the gulpfile.js file.
I have checked that all the node modules are installed and that PhantomJS is installed globally on the test environment, and everything checks out OK. If I run:
$ phantomjs phantom-crawl-website.js
from the command prompt on the test environment, it works fine: it crawls the site and creates the files.
I have tried to use the gulp-phantom option for "debug", however I can never seem to see any output from it. I have tried using gulp-debug as well, as follows:
gulp.task('crawlSite', function() {
    return gulp.src("phantom-crawl-website.js")
        .pipe(phantom({debug: true}))
        .pipe(debug());
});
However all this does is give me the gulp-phantom output filename ("phantom-crawl-website.txt"). I have also tried to write the gulp-phantom output file in the following way:
gulp.task('crawlSite', function() {
    return gulp.src("phantom-crawl-website.js")
        .pipe(phantom({debug: true}))
        .pipe(gulp.dest("./phantomOutput/"));
});
But all I get from this is a blank file created in the "phantomOutput" directory called "phantom-crawl-website.txt".
Can anyone advise what I am doing wrong and how I can see the PhantomJS debug output, so I can work out what the problem is?
Thanks so much in advance.
UPDATE
I've managed to get some output from the gulp-phantom process by adding the following to the gulp-phantom index.js file:
program.stderr.on('data', function (data) {
    console.log('stderr: ' + data);
});
Once this was added I'm now getting the following error message:
stderr: Can't open '/dev/stdin'
But still no luck actually getting it to work.
Found the issue. In the gulp-phantom module there appears to be an error: it uses /dev/stdin where phantomjs expects the phantom script filename to be passed. On Mac OS X /dev/stdin contains the contents of the file, but on Linux the process is denied permission to read it.
To fix it, I removed the line that pushes '/dev/stdin' onto the arguments stack and added one a bit further down, in the "through" function call, to pass the full path and filename to the phantomjs process instead.
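A sketch of the kind of change (variable names are assumptions; the actual gulp-phantom internals may differ):
// before: args.push('/dev/stdin');
args.push(file.path); // pass the gulp file's full path to phantomjs instead
var program = spawn(phantomjsPath, args);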
I will issue a pull request to the gulp-phantom module creator and see if they accept this as a fix for the issue.
