I have a gulp.js process using the gulp-phantom plugin that works perfectly on my dev setup (Mac OS X 10.10). However, on my test/prod environment (EC2 Amazon Linux) it doesn't work at all, and it also doesn't give any error message or other helpful output; the task starts and finishes again almost straight away:
Dev environment output:
$ gulp crawlSite
[17:39:19] Using gulpfile ~/Documents/dev/mysite.co.uk/gulpfile.js
[17:39:19] Starting 'crawlSite'...
[17:40:15] Finished 'crawlSite' after 57 s
Test environment output:
$ gulp crawlSite
[17:34:27] Using gulpfile /var/www/html/mysite.co.uk/gulpfile.js
[17:34:27] Starting 'crawlSite'...
[17:34:27] Finished 'crawlSite' after 715 ms
As you can see, on the dev environment the process takes 57 seconds, whereas on test it takes only 715 milliseconds, and on test it does not create the files that my phantom script should be creating. My gulp task is very simple:
var gulp = require('gulp');
var phantom = require('gulp-phantom');

gulp.task('crawlSite', function() {
  return gulp.src("phantom-crawl-website.js")
    .pipe(phantom());
});
My phantom script, "phantom-crawl-website.js", is in the same directory as the gulpfile.js file.
I have checked that all the node modules are installed and that PhantomJS is installed globally on the test environment, and everything checks out OK. If I run:
$ phantomjs phantom-crawl-website.js
from the command prompt on the test environment, it works fine: it crawls the site and creates the files.
I have tried using gulp-phantom's "debug" option, but I never see any output from it. I have also tried gulp-debug, as follows:
var debug = require('gulp-debug');

gulp.task('crawlSite', function() {
  return gulp.src("phantom-crawl-website.js")
    .pipe(phantom({debug: true}))
    .pipe(debug());
});
However, all this gives me is the gulp-phantom output filename ("phantom-crawl-website.txt"). I have also tried writing the gulp-phantom output to a file in the following way:
gulp.task('crawlSite', function() {
  return gulp.src("phantom-crawl-website.js")
    .pipe(phantom({debug: true}))
    .pipe(gulp.dest("./phantomOutput/"));
});
But all I get from this is a blank file created in the "phantomOutput" directory called "phantom-crawl-website.txt".
Can anyone advise what I am doing wrong, and how I can see the PhantomJS debug output so I can work out what the problem is?
Thanks so much in advance.
UPDATE
I've managed to get some output from the gulp-phantom process by adding the following to the gulp-phantom index.js file:
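// log the spawned PhantomJS process's stderr so its errors become visible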
program.stderr.on('data', function (data) {
  console.log('stderr: ' + data);
});
Once this was added I'm now getting the following error message:
stderr: Can't open '/dev/stdin'
But still no luck actually getting it to work.
Found the issue. In the gulp-phantom module there appears to be an error: it passes /dev/stdin where phantomjs expects the script filename. On Mac OS X, /dev/stdin contains the contents of the file, but on Linux the process is denied permission to read it.
To fix it, I removed the line that pushes '/dev/stdin' onto the arguments stack and instead, a bit further down in the "through" function call, passed the full path and filename to the phantomjs process.
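For reference, here is a rough sketch of what the fixed plugin boils down to. This is my own illustration, not gulp-phantom's exact code, and the debug option handling is an assumption:
var spawn = require('child_process').spawn;
var through = require('through2');

// sketch: spawn phantomjs with the script's real path instead of /dev/stdin
function phantomFixed(opts) {
  opts = opts || {};
  return through.obj(function (file, enc, cb) {
    var program = spawn('phantomjs', [file.path]);
    var output = '';
    program.stdout.on('data', function (data) { output += data; });
    program.stderr.on('data', function (data) {
      if (opts.debug) console.log('stderr: ' + data);
    });
    program.on('close', function () {
      file.contents = new Buffer(output);
      cb(null, file);
    });
  });
}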
I will issue a pull request to the gulp-phantom module creator and see if they accept this as fix for the issue.
beforeAll(async () => {
  mongo = new MongoMemoryServer();
  const mongoURI = await mongo.getConnectionString();
  await mongoose.connect(mongoURI, {
    useNewUrlParser: true,
    useUnifiedTopology: true
  });
});
For some reason mongodb-memory-server doesn't work, and it seems to be because it's downloading MongoDB. Wasn't MongoDB supposed to be included with the package? What is the package downloading? How do we prevent mongodb-memory-server from downloading it every time I use it? Is there a way to make it work as intended?
$ npm run test
> auth@1.0.0 test C:\Users\admin\Desktop\projects\react-node-docker-kubernetes-app-two\auth
> jest --watchAll --no-cache
2020-06-06T03:12:45.207Z MongoMS:MongoMemoryServer Called MongoMemoryServer.ensureInstance() method:
2020-06-06T03:12:45.207Z MongoMS:MongoMemoryServer - no running instance, call `start()` command
2020-06-06T03:12:45.207Z MongoMS:MongoMemoryServer Called MongoMemoryServer.start() method
2020-06-06T03:12:45.214Z MongoMS:MongoMemoryServer Starting MongoDB instance with following options: {"port":51830,"dbName":"b67a9bfd-d8af-4d7f-85c7-c2fd37832f59","ip":"127.0.0.1","storageEngine":"ephemeralForTest","dbPath":"C:\\Users\\admin\\AppData\\Local\\Temp\\mongo-mem-205304KB93HW36L9ZD","tmpDir":{"name":"C:\\Users\\admin\\AppData\\Local\\Temp\\mongo-mem-205304KB93HW36L9ZD"},"uri":"mongodb://127.0.0.1:51830/b67a9bfd-d8af-4d7f-85c7-c2fd37832f59?"}
2020-06-06T03:12:45.217Z MongoMS:MongoBinary MongoBinary options: {"downloadDir":"C:\\Users\\admin\\Desktop\\projects\\react-node-docker-kubernetes-app-two\\auth\\node_modules\\.cache\\mongodb-memory-server\\mongodb-binaries","platform":"win32","arch":"ia32","version":"4.0.14"}
2020-06-06T03:12:45.233Z MongoMS:MongoBinaryDownloadUrl Using "mongodb-win32-i386-2008plus-ssl-4.0.14.zip" as the Archive String
2020-06-06T03:12:45.233Z MongoMS:MongoBinaryDownloadUrl Using "https://fastdl.mongodb.org" as the mirror
2020-06-06T03:12:45.235Z MongoMS:MongoBinaryDownload Downloading: "https://fastdl.mongodb.org/win32/mongodb-win32-i386-2008plus-ssl-4.0.14.zip"
2020-06-06T03:14:45.508Z MongoMS:MongoMemoryServer Called MongoMemoryServer.stop() method
2020-06-06T03:14:45.508Z MongoMS:MongoMemoryServer Called MongoMemoryServer.ensureInstance() method:
FAIL src/test/__test___/Routes.test.ts
● Test suite failed to run
Error: Status Code is 403 (MongoDB's 404)
This means that the requested version-platform combination dosnt exist
at ClientRequest.<anonymous> (node_modules/mongodb-memory-server-core/src/util/MongoBinaryDownload.ts:321:17)
Test Suites: 1 failed, 1 total
Tests: 0 total
Snapshots: 0 total
Time: 127.136s
Ran all test suites.
Seems you have the same issue I had.
https://github.com/nodkz/mongodb-memory-server/issues/316
Specify the binary version in package.json.
E.g.:
"config": {
"mongodbMemoryServer": {
"version": "latest"
}
},
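Depending on the version of the package you have installed, the same can apparently also be set via an environment variable rather than package.json, along the lines of (Linux/macOS syntax):
MONGOMS_VERSION=4.0.14 npm test
Check the project's README for the exact variable your version supports.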
I hope it helps.
For me, "latest" (as in accepted answer) did not work, the latest current version "4.4.1" worked:
"config": {
"mongodbMemoryServer": {
"version": "4.4.1"
}
}
For anyone getting the dreaded:
Error: Status Code is 403 (MongoDB's 404)
This means that the requested version-platform combination doesn't exist
I found an easy fix.
In the package.json file we need to add an "arch" entry to the mongo memory server config:
"config": {
"mongodbMemoryServer": {
"debug": "1",
"arch": "x64"
}
},
The error occurs because the URL that mongo memory server constructs to download a binary version of mongo is wrong or inaccessible.
By adding "debug" we are now able to get a console log of the mongo memory server process, and the download should now succeed because we changed the arch variable to one that worked for me. **You might need to change the arch depending on your system.
Without adding the arch, I was able to see in the console log why it was crashing:
MongoMS:MongoBinaryDownloadUrl Using "mongodb-win32-i386-2008plus-ssl-latest.zip" as the Archive String +0ms
MongoMS:MongoBinaryDownloadUrl Using "https://fastdl.mongodb.org" as the mirror +1ms
MongoMS:MongoBinaryDownload Downloading: "https://fastdl.mongodb.org/win32/mongodb-win32-i386-2008plus-ssl-latest.zip" +0ms
MongoMS:MongoMemoryServer Called MongoMemoryServer.stop() method +2s
MongoMS:MongoMemoryServer Called MongoMemoryServer.ensureInstance() method: +0ms
If you notice, it is trying to download "https://fastdl.mongodb.org/win32/mongodb-win32-i386-2008plus-ssl-latest.zip". If you visit the link you will see it is a BROKEN LINK, and that is the reason mongo memory server is failing to download.
For some reason mongo memory server was defaulting to the i386 arch, which didn't work in my case because the link was broken/inaccessible when I visited it. *Normally a download should start right away when visiting a link like that.
I was able to configure the correct arch manually in the package.json file. Once I did that, it started to download the mongo binary and ran all my tests no problem. You will even see a console log of the download displaying the correct download link.
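With "arch": "x64" in place, the debug log should instead show an x86_64 archive string, presumably along the lines of:
MongoMS:MongoBinaryDownloadUrl Using "mongodb-win32-x86_64-2008plus-ssl-latest.zip" as the Archive String
and that URL should resolve to a real download.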
You can find your system arch by going to the command prompt and typing
WINDOWS
SET Processor
MAC
uname -a
** EDIT **
The reason I was running into this was that I was running a 32-bit version of Node.js on a 64-bit Windows machine. After installing a 64-bit version of Node.js, I no longer have to specify the arch type in the package.json file.
You can find your Node.js architecture by typing the following in your terminal:
node -p "process.arch"
Status 403 usually means that your IP is restricted by the server (for example, your country may be on a sanctions list, like Iran, Syria, ...).
The best solution for this is to change your DNS to the DNS of a VPN provider.
On Linux, just type:
sudo nano /etc/resolv.conf
and then put your DNS in the nameserver entry.
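After editing, resolv.conf would then contain a line like the following, where the address is a placeholder for the DNS server your provider gives you:
nameserver <DNS-IP-from-your-provider>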
Try this version: mongodb-memory-server@6.5.1
I found a solution for this problem that worked for me.
I just set execute permissions on the mongod binary that mongo-memory uses, which is saved in the .cache path of your computer or in the node_modules folder.
Just locate the mongod file and set the permission with chmod +x mongod.
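If you prefer doing this from Node (for example in a test setup script), here is a small sketch; the binary path below is hypothetical, so check node_modules/.cache for the actual version on your machine:
// make the cached mongod binary executable (rwxr-xr-x); path is hypothetical
const fs = require('fs');
const mongodPath = './node_modules/.cache/mongodb-memory-server/mongodb-binaries/4.0.14/mongod';
fs.chmodSync(mongodPath, 0o755);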
I am having an issue with creating a new file while my snap is running. Example:
1) The snap starts and checks for the config.json file at ./config/config.json.
2) If that file is not found (it never is the first time the application runs), it creates it with fs.writeFile('./config/config.json', 'My Data', 'utf8', (err) => {....}).
3) I then look for that file later to use it.
I am able to run my node app and all works as expected when using node index.js.
I am also able to run it using snap try prime/ --devmode and all works.
However, when running snap try prime/ I get this error in the syslog:
Error: ENOENT: no such file or directory, open './config/config.json'
It is erroring at the point of creation.
Any help with this would be awesome!! Thanks in advance.
I was able to solve this by NOT creating and checking for the config files in NodeJS, and moving all of that logic to an install hook (https://docs.snapcraft.io/build-snaps/hooks).
So now my install hook checks for the config file and creates it if it's not there; I then let NodeJS write to that file later, so I can still make all the HTTP requests in NodeJS and not in Bash. Below is my install hook; don't forget to make it executable.
This file is located at snap/hooks/install
#!/bin/sh
set -e

CONFIG_FILE="$SNAP_COMMON/config.json"

if [ ! -f "$CONFIG_FILE" ]; then
  # File not found, create it
  echo '{}' > "$CONFIG_FILE"
fi
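On the NodeJS side, reading and updating the file the hook created then looks roughly like this (a sketch; the field name is just an example):
// $SNAP_COMMON is set by snapd for hooks and for the running app
const fs = require('fs');
const path = require('path');

const configFile = path.join(process.env.SNAP_COMMON, 'config.json');
const config = JSON.parse(fs.readFileSync(configFile, 'utf8'));
config.myData = 'My Data'; // example field
fs.writeFileSync(configFile, JSON.stringify(config));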
Hope this helps someone!
I am developing a library/tool which lets the user execute arbitrary commands via SSH. Overall, this lib/tool will serve as a software deployment tool, which needs access to remote machines to execute several commands (cd, mkdir, git ..., npm install, scp, etc.).
While executing remote commands via SSH basically works, it seems that every time the command npm install is executed, the SSH connection gets terminated. I cannot tell what is causing this, but this very simple Node.js script demonstrates it:
const spawn = require('child_process').spawn;
const bat = spawn('ssh', ['-T', '-oRequestTTY=no', '-oBatchMode=yes', 'user@host']);

bat.stdout.on('data', (data) => console.log('STDOUT: ' + data.toString()));
bat.stderr.on('data', (data) => console.log('STDERR: ' + data.toString()));
bat.on('exit', (code) => console.log(`Child exited with code ${code}`));

setTimeout(() => {
  bat.stdin.write('pwd -P\n');
  bat.stdin.write('cd someDir\n');
  bat.stdin.write('npm install\n');
  setTimeout(() => bat.stdin.write('pwd -P\n'), 2000);
}, 2000);
This will break/terminate the forked SSH process after npm install, so the delayed pwd -P will also fail. Removing the npm install command will make the SSH process stay intact until the app is terminated by the user.
I actually faced this problem before when working with the C library libssh, which had the very same issue, although back then I failed to notice that the npm install command was what triggered the problem. See this related post: Channel in libssh gets closed for no obvious reason
What I found out is:
1. I am using a non-pty SSH shell
2. The libssh packet-level debug output shows a packet 98 sent by the server just before the connection is closed
3. According to the RFC, packet 98 is SSH_MSG_CHANNEL_REQUEST, which can also be used to request a pty
So my assumption is that I am working on a non-pty shell over SSH, and that something in the npm program directly or indirectly leads to a server-side request for a pty, which cannot be handled by the non-pty shell I am working on; thus, the SSH connection is closed at the protocol level.
Now my question is: what could be causing this, and is there a way to get rid of this problem?
Update May 29th
Actually, I was able to investigate this issue further. I started editing npm-cli.js and commented out everything, then uncommented the lines step by step to see what triggers the above behaviour.
At first it seemed like the set-blocking module included via npmlog was causing the issue, but after also commenting out the actual code of set-blocking in index.js:4 (which is only stream._handle.setBlocking(blocking)), the bad behaviour still occurred, which thoroughly confused me. Experimenting further revealed that merely touching process.stdout causes the whole issue.
To verify, I did the following:
1. Comment out lines 22-94 in npm-cli.js. Executing npm install now essentially does nothing, and running my example program above does not produce the error.
2. Add the following code at npm-cli.js:19: if (process.stdout) ;. This also essentially does nothing, but it makes the error occur again when executing the above test program.
3. Do the opposite test and change if (process.stdout) ; to if (process) ; and the test program runs again without error.
So basically, the above Node.js test program that demonstrates the error can be changed so that instead of npm install, a simple one-line script is invoked, which results in the same error:
const spawn = require('child_process').spawn;
const bat = spawn('ssh', ['user@host']);

setTimeout(() => {
  bat.stdin.write('node test.js\n'); // this breaks it
  setTimeout(() => bat.stdin.write('pwd -P\n'), 2000);
}, 2000);
With test.js containing:
if (process.stdout) ; // only this line
I don't know how the process global object is constructed, or what is going on here, but something in Node.js's process/process.stdout object is somehow breaking SSH connections. I therefore think this is not an actual issue of npm itself, only indirectly.
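One experiment that might narrow this down further (just an idea, I have not verified it): detach the remote process from the channel's stdio and see whether the connection still drops:
// hypothetical check: if the remote node process cannot touch the SSH
// channel's stdio, does the connection survive?
bat.stdin.write('node test.js < /dev/null > /dev/null 2>&1\n');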
Could anyone help to clarify?
Using mocha-phantomjs-core with SlimerJS, I manage to run my tests successfully from CMD:
slimerjs mocha-phantomjs-core.js tests.html tap
The SlimerJS window opens, I see a browser window and all seems good, but the CMD doesn't finish (it seems to wait for something). Nothing happens until I close the SlimerJS window. I want to output the test result (using the TAP reporter) as a file.
Is that possible?
https://github.com/nathanboktae/mocha-phantomjs-core/issues/25
system.stderr.writeLine doesn't work on CMD or Git Bash... I've changed the fail function in mocha-phantomjs-core.js to write to stdout instead of stderr. Now I get the error:
Likely due to external resource loading and timing, your tests require
calling window.initMochaPhantomJS() before calling any mocha setup
functions. See #12
So I had to add window.initMochaPhantomJS() before the setup functions... how silly! All this because I couldn't see the error, since the stderr output was never printed.
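With the reporter output now going to stdout, writing the TAP results to a file should simply be a matter of shell redirection:
slimerjs mocha-phantomjs-core.js tests.html tap > results.tap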
I have a node.js project that compiles LESS files to CSS when I start the app. I do this by modifying the start script in package.json like so:
{
  // omitted for brevity
  "scripts": {
    "start": "lessc public/stylesheets/styles.less > public/stylesheets/styles.css; node app.js"
  }
}
This works nicely locally, but not at all on my Windows Azure instance. Either because less needs to be installed globally on the machine for this to work, or because Azure doesn't run npm start. Or both. Either way, I need another solution!
I thought a custom deployment was the answer (I'm using git remote deployment), so I tried modifying the deploy.cmd to include
call "lessc public/stylesheets/styles.less > public/stylesheets/styles.css;"
No joy. I even tried
call "%SITE_ROOT%/node_modules/less/bin/lessc %SITE_ROOT%/public/stylesheets/styles.less > %SITE_ROOT%/public/stylesheets/styles.css;
Am I coming at this the wrong way? How can I keep the compiled css files out of my source control and compile them on the server after deployment to Azure?
Thanks!
OK, I finally have this going, I think.
For some reason, even though the physical file is on the disk (I can see it with my FTP client), Azure is not letting me run lessc in the \node_modules\less\bin folder, but it does let me run the version in the \node_modules\.bin folder.
In the end, I added the following lines to my deploy.cmd file, and it worked!
IF NOT DEFINED LESS_COMPILER (
  SET LESS_COMPILER=%DEPLOYMENT_TARGET%\node_modules\.bin\lessc
)
call %LESS_COMPILER% %DEPLOYMENT_TARGET%\public\stylesheets\styles.less > %DEPLOYMENT_TARGET%\public\stylesheets\styles.css