mocha-phantomjs-core - slimerjs hangs without any error

Using mocha-phantomjs-core with SlimerJS, I can run my tests successfully from CMD:
slimerjs mocha-phantomjs-core.js tests.html tap
The SlimerJS window opens, I see a browser window, and all seems good, but CMD doesn't finish (it seems to wait for something). Nothing happens until I close the SlimerJS window. I want to output the test results (using the TAP reporter) to a file.
Is that possible?

https://github.com/nathanboktae/mocha-phantomjs-core/issues/25
system.stderr.writeLine doesn't work on CMD or Git Bash... I've changed the fail function in mocha-phantomjs-core.js to write to stdout instead of stderr. Now I get the error:
Likely due to external resource loading and timing, your tests require
calling window.initMochaPhantomJS() before calling any mocha setup
functions. See #12
So I had to add window.initMochaPhantomJS() before the setup functions. How silly! All this because I couldn't see the error, since stderr was never printed.
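For anyone who hits the same wall, the fix in the test page looks roughly like this; a minimal sketch, assuming tests.html loads mocha before this snippet runs:
// In tests.html, before any mocha setup call. The guard keeps the page
// usable in a normal browser, where initMochaPhantomJS is not defined.
if (window.initMochaPhantomJS) { window.initMochaPhantomJS() }
mocha.setup('bdd')
And since the TAP reporter writes to stdout, getting the results into a file is plain shell redirection: slimerjs mocha-phantomjs-core.js tests.html tap > results.tap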

Related

Execution via node js script with problems

Good morning. I have a problem running a command in the Linux terminal and returning its output to an API. For some reason I get a response on the server that has nothing to do with the command, as if I had just run "show route". The answer comes back in an array, which is normal; the problem is that its contents have nothing to do with what the manual command returns.
[The code, the manual command, and the result the server receives were attached as screenshots.]
Try:
const { exec } = require('child_process');
// Log the command's real output (or the error) so you can compare it
// with what the API returns.
exec('/usr/sbin/birdc show route', (err, stdout, stderr) => console.log(err || stdout));

Open localhost:3000 in kiosk mode after the Node.js server has finished spinning up

I'm working on a raspberry pi project that involves running a node server in kiosk mode.
I'm using BROWSER=none to suppress the default opening of localhost when the server starts.
I'm thinking I should be able to use wait-on to force the bash script that runs the kiosk mode to wait until the server is fully up. Would I use something like this?
"scripts": {
...
"kiosk": "concurrently -n \"npm start\" \"wait-on http://localhost:3000 & /home/pi/kiosk.sh\""
},
It gives me the following error(s) which I'm not quite able to decipher:
[npm start] server does not have extension for -dpms option
[npm start] libEGL warning: DRI2: failed to authenticate
[npm start] [1498:1498:1125/180040.467781:ERROR:gpu_init.cc(441)] Passthrough is not supported, GL is egl
[npm start] [1498:1498:1125/180040.786918:ERROR:viz_main_impl.cc(162)] Exiting GPU process due to errors during initialization
[npm start] [1558:1558:1125/180041.392714:ERROR:gpu_init.cc(441)] Passthrough is not supported, GL is swiftshader
[npm start] [1443:1590:1125/180042.359030:ERROR:object_proxy.cc(622)] Failed to call method: org.freedesktop.DBus.Properties.Get: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
[npm start] [1443:1590:1125/180042.364570:ERROR:object_proxy.cc(622)] Failed to call method: org.freedesktop.UPower.GetDisplayDevice: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
[npm start] [1443:1590:1125/180042.367155:ERROR:object_proxy.cc(622)] Failed to call method: org.freedesktop.UPower.EnumerateDevices: object_path= /org/freedesktop/UPower: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.UPower was not provided by any .service files
[npm start] Fontconfig error: Cannot load default config file: No such file: (null)
I'm now realizing the error has more to do with kiosk.sh than with the npm commands. Here's the code for kiosk.sh:
#!/bin/bash
xset s noblank
xset s off
xset -dpms
unclutter -root &
sed -i 's/"exited_cleanly":false/"exited_cleanly":true/' /home/pi/.config/chromium/Default/Preferences
sed -i 's/"exit_type":"Crashed"/"exit_type":"Normal"/' /home/pi/.config/chromium/Default/Preferences
/usr/bin/chromium-browser --noerrdialogs --disable-infobars --kiosk http://localhost:3000/ &
& and && mean different things: && means AND (run the next command only if the previous one succeeded), while & means background process (run that command in the background and immediately continue with the next).
I think what you're trying for is wait-on service && example, not wait-on service & example.
With what you've written, the shell starts wait-on, immediately backgrounds it, and then runs the shell script without waiting for anything, so your script runs before the server is up.
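In other words, the corrected scripts entry keeps everything else from your question the same and only swaps the & for &&, so kiosk.sh runs once wait-on succeeds:
"scripts": {
...
"kiosk": "concurrently -n \"npm start\" \"wait-on http://localhost:3000 && /home/pi/kiosk.sh\""
},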
That's not really your issue, though; I believe your issue is with Chromium itself. There's an open issue for it here: https://bugs.chromium.org/p/chromium/issues/detail?id=1221905&q=Passthrough%20is%20not%20supported%2C%20GL%20is%20swiftshader&can=1. That issue was last updated earlier this year and still seems to be unresolved.
There was also another answer for it here: Passthrough is not supported, GL is disabled.
I've seen quite a few people suggest using --headless, --disable-gpu, and --disable-software-rasterizer. People have mentioned that some of those options are only required on Windows and that some of the underlying problems have already been fixed; I don't know which of them are actually required.
This answer here: Force headless chromium/chrome to use actual gpu instead of Google SwiftShader, mentions that you can force WebGL with --enable-webgl to stop Chromium from loading SwiftShader and make it use the GPU. You can do this if you need to force it in headless mode.
It seems to have something to do with WebGL or hardware acceleration. Apparently it happens when GPU acceleration is disabled and Chromium is forced to fall back on SwiftShader.
I don't know which of those will actually help you; you'll have to play around with it. However, I have seen over 10 different Chromium and related issues, all filed during 2021, because of this bug in Chromium.
What's more, I'm not sure it's actually a critical error; some people mention that it just prints the error and can be ignored. I don't know if that's the case.
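If you want to experiment with those flags, the chromium-browser line in kiosk.sh is the place; a sketch, with the caveat above that nobody seems sure which flags are actually needed:
# Variation of the kiosk.sh launch line with the GPU-related flags
# mentioned above; drop whichever turn out to be unnecessary on your hardware.
/usr/bin/chromium-browser --noerrdialogs --disable-infobars \
  --disable-gpu --disable-software-rasterizer \
  --kiosk http://localhost:3000/ &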
I assume that you are using the package "wait-on" (https://www.npmjs.com/package/wait-on). The wait-on command is used without npm in front of it.
Try:
wait-on http://localhost:3000 && /home/pi/kiosk.sh
You could use Node's built-in "child_process" module to execute your bash script once the server is ready. Assuming you use Express.js in your backend, this should work with little modification:
const { exec } = require('child_process');

// ...all your other code and whatevers...

app.listen(3000, () => {
  // The server is now accepting connections, so launch the kiosk script.
  exec('sh kiosk.sh', (error, stdout, stderr) => {
    if (error) {
      console.log(`exec error: ${error}`);
    }
  });
});
wait-on waits until the process is closed. You are not closing Chromium, so it never continues. If you want to wait until the server is running, you can log the server's status to a text file and have your bash script read it in a loop until it contains the ready text you specify.
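A sketch of that loop, assuming the server writes a line such as READY to a status file; the path /tmp/server-status.log is a made-up example:
#!/bin/bash
# Poll the status file until the ready marker appears, then start the kiosk.
until grep -q "READY" /tmp/server-status.log 2>/dev/null; do
  sleep 1
done
/home/pi/kiosk.sh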
If you want to confirm beyond a reasonable doubt that the server is running as needed, you can install the npm package puppeteer, then create and run a Node script from bash that uses page.goto to load the web page in an instance of Chromium and waitForSelector to check that the DOM element of your web page exists.
Then call process.exit() with whatever exit codes you want to use to confirm that the page is live and running.
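A sketch of that check, where '#app' stands in for whatever selector identifies your page (a hypothetical placeholder):
// check-server.js - exits 0 if the page is live, 1 otherwise.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  let ok = true;
  try {
    await page.goto('http://localhost:3000');
    // '#app' is a hypothetical selector; substitute one your page renders.
    await page.waitForSelector('#app', { timeout: 5000 });
  } catch (err) {
    ok = false;
  }
  await browser.close();
  process.exit(ok ? 0 : 1);
})();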

Gulp.js process working on dev but not test/prod

I have a gulp.js process using the gulp-phantom plugin that works perfectly on my dev setup (Mac OS X 10.10). However, on my test/prod environment (EC2 Amazon Linux) it doesn't work at all, and it isn't giving any error message or other helpful output; the task just starts and finishes again almost straight away:
Dev environment output:
$ gulp crawlSite
[17:39:19] Using gulpfile ~/Documents/dev/mysite.co.uk/gulpfile.js
[17:39:19] Starting 'crawlSite'...
[17:40:15] Finished 'crawlSite' after 57 s
Test environment output:
$ gulp crawlSite
[17:34:27] Using gulpfile /var/www/html/mysite.co.uk/gulpfile.js
[17:34:27] Starting 'crawlSite'...
[17:34:27] Finished 'crawlSite' after 715 ms
As you can see, on the dev environment the process takes 57 seconds, whereas on test it takes only 715 milliseconds and does not create the files that my phantom script should be creating. My gulp task is very simple:
gulp.task('crawlSite', function() {
return gulp.src("phantom-crawl-website.js")
.pipe(phantom());
});
and my phantom script "phantom-crawl-website.js" file is in the same directory as the gulpfile.js file.
I have checked that all the node modules are installed and that PhantomJS is installed globally on the test environment, and everything checks out OK. If I run:
$ phantomjs phantom-crawl-website.js
from the command prompt on the test environment, it works fine: it crawls the site and creates the files.
I have tried the gulp-phantom "debug" option, but I never see any output from it. I have tried using gulp-debug as well, as follows:
gulp.task('crawlSite', function() {
return gulp.src("phantom-crawl-website.js")
.pipe(phantom({debug: true}))
.pipe(debug());
});
However, all this does is give me the gulp-phantom output filename ("phantom-crawl-website.txt"). I have also tried to write the gulp-phantom output file in the following way:
gulp.task('crawlSite', function() {
return gulp.src("phantom-crawl-website.js")
.pipe(phantom({debug:true}))
.pipe(gulp.dest("./phantomOutput/"));
});
But all I get from this is a blank file created in the "phantomOutput" directory called "phantom-crawl-website.txt".
Can anyone advise what I am doing wrong and how I can see the PhantomJS debug output so I can work out what the problem is?
Thanks so much in advance.
UPDATE
I've managed to get some output from the gulp-phantom process by adding the following to the gulp-phantom index.js file:
// Log anything the phantomjs child process writes to stderr.
program.stderr.on('data', function (data) {
  console.log('stderr: ' + data);
});
Once this was added I'm now getting the following error message:
stderr: Can't open '/dev/stdin'
But still no luck actually getting it to work.
Found the issue. In the gulp-phantom module there appears to be a bug: it passes /dev/stdin where phantomjs expects the phantom script's filename. On Mac OS X, /dev/stdin contains the contents of the file, but on Linux the process is denied permission to read it.
To fix it, I removed the line that pushes '/dev/stdin' onto the arguments stack and added one a bit further down, in the "through" function call, to pass the full path and filename to the phantomjs process instead.
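In spirit, the change looks something like this; a hypothetical sketch rather than the actual gulp-phantom source, so the names (args, file, phantomPath) are assumptions based on how gulp plugins typically handle vinyl files:
// Before: the script was handed to phantomjs via /dev/stdin
// args.push('/dev/stdin');

// After, inside the through() callback: pass the vinyl file's real path,
// which phantomjs on Linux can open directly.
args.push(file.path);
var program = spawn(phantomPath, args);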
I will issue a pull request to the gulp-phantom module creator and see if they accept this as a fix for the issue.

SlimerJS extensions.getAddons.cache.enabled

I'm having an issue running SlimerJS through CasperJS; I get the following message: "1414441945905 addons.repository WARN cacheEnabled: Couldn't get pref: extensions.getAddons.cache.enabled".
I have a nodejs scraper running with CasperJS and SlimerJS (0.9.3) as the engine. This process is executed by another process (as a child_process.spawn) and is scheduled with PM2. It runs okay, but sometimes it throws this error and hangs. Any ideas?
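For reference, the kind of spawn described might look roughly like this; scraper.js is a hypothetical filename, and --engine=slimerjs is the standard CasperJS flag for selecting the SlimerJS engine:
const { spawn } = require('child_process');

// Run the CasperJS scraper under SlimerJS and surface its output, so a hang
// like the one above at least leaves a trace in the parent process's logs.
const proc = spawn('casperjs', ['--engine=slimerjs', 'scraper.js']);
proc.stdout.on('data', (d) => console.log(d.toString()));
proc.stderr.on('data', (d) => console.error(d.toString()));
proc.on('close', (code) => console.log(`casperjs exited with code ${code}`));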
I had the same error. To fix it, you should add the line:
pref("extensions.getAddons.cache.enabled", true);
to your *pref*.js file. (In your Firefox folder, run:
nano `find ./ -name "*pref*.js"`
)
See for details: http://bugzilla.mozilla.org/show_bug.cgi?id=953998#c4

How can I run mocha tests remotely on IntelliJ IDEA 13 (or WebStorm)?

IntelliJ IDEA 13 has really excellent support for Mocha tests through the Node.js plugin: https://www.jetbrains.com/idea/webhelp/running-mocha-unit-tests.html
The problem is, while I edit code on my local machine, I have a VM (vagrant) in which I run and test the code, so it's as production-like as possible.
I wrote a small bash script to run my tests remotely on this VM whenever I invoke "Run" from within IntelliJ, and the results show up in the console well enough; however, I'd love to use the excellent interface that appears whenever the Mocha test runner is invoked.
Any ideas?
Update: There's a much better way to do this now. See https://github.com/TechnologyAdvice/fake-mocha
Success!!
Here's how I did it. This is specific to connecting back to vagrant, but can be tweaked for any remote server to which you have key-based SSH privileges.
1) Somewhere on your remote machine, or even within your codebase, store the NodeJS plugin's mocha reporter (6 .js files at the time of this writing). These are found in NodeJS/js/mocha under your main IntelliJ config folder, which on OSX is ~/Library/Application Support/IntelliJIdea13. Know the absolute path to where you put them.
2) Edit your 'Run Configurations'.
3) Add a new one using 'Mocha'.
4) Set 'Node interpreter' to the full path to your ssh executable. On my machine, it's /usr/bin/ssh.
5) Set the 'Node options' to this behemoth, tweaking as necessary for your own configuration:
-i /Users/USERNAME/.vagrant.d/insecure_private_key vagrant@MACHINE_IP "cd /vagrant; node_modules/mocha/bin/_mocha --recursive --timeout 2000 --ui bdd --reporter /vagrant/tools/mocha_intellij/mochaIntellijReporter.js test" #
REMEMBER! The # at the end is IMPORTANT, as it cancels out everything else the Mocha run config adds to this command. Also, remember to use an absolute path everywhere that I have one.
6) Set 'Working directory', 'Mocha package', and 'Test directory' to exactly what they would be if you were running mocha tests locally. These will not impact the test execution, but this interface WILL check that they are valid paths.
7) Name it, save, and run!
Fully integrated, remote testing bliss.
1) In Webstorm, create a "Remote Debug" configuration, using port 5858.
2) Make sure that port is open on your server or VM.
3) On the remote server, execute Mocha with the --debug-brk option: mocha test --debug-brk
4) Back in Webstorm, start the remote debug you created in Step 1, and execution should pause on set breakpoints.
