Package.on_test runs even when not testing - node.js

This is what our app looks like when starting:
=> Started proxy.
=> Started MongoDB.
>>>>> IN ON_TEST
meteor-collection-management: updating npm dependencies -- mongodb...
Argh. In meteor-collection-management/package.js (our own package), there is this:
Package.on_test(function (api) {
  console.log(">>>>> IN ON_TEST");
  Npm.depends({
    mongodb: "1.4.1"
  });
  api.use(['meteor-collection-management', 'tinytest', 'test-helpers']);
  api.add_files('tests/dbobject-test.js', ['client', 'server']);
  api.add_files('tests/enums-test.js', ['client', 'server']);
});
Why is Package.on_test running? I am not running in test mode, nor even in node debug mode.

The on_test function runs just to build a dependency map, even though it's not actually used. I see you've opened an issue on it too. There's more info on what it does here: https://github.com/meteor/meteor/blob/a40a6273953c0e18eddcd67919754814461c5dd4/tools/packages.js#L1434
So it builds out .test, and it needs to run the function to find out which files are required. Meteor needs to know what it needs before the project can run, which is probably why both run. (Packages need to be built into single files, which is handled slightly differently from the rest of Meteor.)
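If the goal is simply to keep build-time side effects out of on_test, one option is to keep the test block purely declarative. A minimal sketch against the pre-0.9 package API used above (assuming Npm.depends belongs at the top level of package.js, where Meteor reads it once per package):
// package.js -- declare the npm dependency once at the top level, and leave
// on_test with nothing but api declarations, so evaluating it at build time
// (to compute the dependency map) produces no visible side effects.
Npm.depends({
  mongodb: "1.4.1"
});

Package.on_test(function (api) {
  api.use(['meteor-collection-management', 'tinytest', 'test-helpers']);
  api.add_files('tests/dbobject-test.js', ['client', 'server']);
  api.add_files('tests/enums-test.js', ['client', 'server']);
});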

Related

Worker thread postMessage() vs command line command

I recently learned about worker threads in Node.js. I was trying to create a worker thread to run the Stockfish chess engine.
The npm package I am using for this is called stockfish. I tried node-stockfish before this, but it would not install with npm: it apparently used an older version of the type definition for the "AbortSignal" variable, causing compatibility issues.
The current npm package installed successfully, but I could find very little documentation on how to use it, so I tried out a few ideas.
import { Worker } from "worker_threads";
const engine = new Worker("./node_modules/stockfish/src/stockfish.js")
engine.on('message', (data) => console.log(data))
engine.postMessage('position startpos move e2e4 e7e5')
engine.postMessage('go movetime 3000')
Here I tried to run the stockfish.js as a worker thread and send commands to it with the postMessage() function. This however did not work and it gave the following output:
worker.js received unknown command undefined
position startpos move e2e4 e7e5
worker.js received unknown command undefined
go movetime 3000
But I know these are valid commands if I run the same script directly from the command line.
It might be because I am using the flags --experimental-wasm-threads and --experimental-wasm-simd when running it from the command line. I found that command in the little documentation that was present, but I don't know how to pass these flags when running it through a worker thread.
Otherwise, it could also be that I don't yet understand how worker threads work, and that postMessage() is not the same as sending the engine a command from the command line.
Any help is greatly appreciated.
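One avenue worth exploring for the flag question (a sketch only, not verified for these particular flags): the Worker constructor accepts an execArgv option for passing Node CLI flags to a worker, although V8-level options are not always accepted there.
import { Worker } from "worker_threads";

// Sketch: pass the WASM flags to the worker via execArgv. Whether these
// experimental flags are honoured here depends on the Node version.
const engine = new Worker("./node_modules/stockfish/src/stockfish.js", {
  execArgv: ["--experimental-wasm-threads", "--experimental-wasm-simd"]
});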
I switched to using the stockfish.wasm library instead. With this library I was able to achieve what I wanted, and I don't need any worker threads for now. Maybe I can move it into a worker thread later if required. Here is a simple example:
const Stockfish = require("stockfish.wasm")
Stockfish().then((engine) => {
  engine.addMessageListener((output) => {
    console.log(output);
    // Do something with the output data here
  })
  engine.postMessage("uci");
  engine.postMessage("ucinewgame");
  engine.postMessage("position startpos");
  engine.postMessage("go depth 20");
});
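If the engine ever does need to move off the main thread, a sketch along these lines (untested; engineWorker.js is a hypothetical file name) could relay the commands and output over worker_threads messages:
// engineWorker.js (hypothetical) -- loads stockfish.wasm inside the worker and
// relays commands/output between the engine and the parent thread.
const { parentPort } = require("worker_threads");
const Stockfish = require("stockfish.wasm");

Stockfish().then((engine) => {
  // Forward engine output to the parent thread.
  engine.addMessageListener((output) => parentPort.postMessage(output));
  // Forward commands from the parent thread to the engine.
  parentPort.on("message", (command) => engine.postMessage(command));
});

// main.js -- talk to the worker exactly as you would talk to the engine.
const { Worker } = require("worker_threads");
const worker = new Worker("./engineWorker.js");
worker.on("message", (output) => console.log(output));
worker.postMessage("uci");
worker.postMessage("position startpos");
worker.postMessage("go depth 20");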

How can I run some code in Node prior to running a browser test with Intern?

With Intern, how can I run some setup code in Node prior to running browser tests, but not when running Node tests? I know that I could do that outside of Intern completely, but is there anything that's a part of Intern that could handle that?
For a more concrete example: I'm running tests for an HTTP library that communicates with a Python server. When running in Node, I can run spawn("python", ["app.py"]) to start the server. However, in the browser, I would need to run that command before the browser begins running the tests.
Phrased another way: is there a built-in way with Intern to run some code in the Node process prior to launching the browser tests?
By default, Intern will run the plugins configured for node regardless of which environment you're running in.
So, you could create a plugin that hooks into the runStart and runEnd events like this:
intern.on("runStart", () => {
  console.log("Starting...");
  // Setup code here
});

intern.on("runEnd", () => {
  console.log("Ending...");
  // Teardown code here
});
These handlers will run inside the Node process, and thus have access to all the available Node APIs.
Additionally, you can detect which environments are being tested by looking at intern.config.environments:
{
  environments: [
    {
      browserName: 'chrome',
      browserVersion: undefined,
      version: undefined
    }
  ]
}
By looking at the environments, you can determine whether or not you need to run your setup code.
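Putting the two together, a plugin sketch like the following (assuming the Python server from the question; the exact shape of the environment objects may vary between Intern versions) starts the server only when a browser environment is configured:
// start-server.js (hypothetical plugin file registered in the Intern config)
const { spawn } = require("child_process");

let server;

intern.on("runStart", () => {
  // Only start the Python backend if something other than Node is being tested.
  const environments = intern.config.environments || [];
  const needsServer = environments.some((env) => env.browserName !== "node");
  if (needsServer) {
    server = spawn("python", ["app.py"]);
  }
});

intern.on("runEnd", () => {
  // Tear the server down once all tests have finished.
  if (server) {
    server.kill();
  }
});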

child_process.fork not starting an express server inside of packaged electron app

I have an Electron app where I need not only to run the interface for the user but also to start an Express server that serves files to people connected over the network.
I have everything working if I start both Electron and the Express server normally, but I'm pretty confident that I will need the server running in a separate process to avoid a sluggish interface and even problems with the server.
To that end I tried to run my Express server using child_process.fork, and it worked when I used npm start, but when I use electron-builder to create an .exe, the installed program doesn't start the Express server.
I tried to run my server right away using:
require('child_process').fork('app/server/mainServer.js')
I tried several changes: prefixing the file path with __dirname or process.resourcesPath, and even hard-coding the generated file path; changing the fork options to pass cwd: __dirname, detached: true, and stdio: 'ignore'; and even using spawn with process.execPath, which also works with npm start but not when packaged (it keeps opening new instances of my app, which seems obvious in hindsight).
Note: If I don't fork and instead require the server script directly, using require('server/mainServer.js'), it works in the packaged app, so the problem most likely isn't Express itself.
Note 2: I have asar: false to solve other problems, so that is not the solution here.
I put up a small git project to show my problem:
https://github.com/victorivens05/electron-fork-error
Any help will be highly appreciated.
With great help from Samuel Attard (https://github.com/MarshallOfSound) I was able to solve the problem (he actually solved it for me).
As he said:
the default electron app will launch the first file path provided to it
so `electron path/to/thing` will work
in a packaged state, that launch logic is not present
it will always run the app you have packaged regardless of the CLI args passed to it
you need to handle the argument manually yourself
and launch that JS file if it's passed in as the 1st argument
The first argument to fork simply calls `process.execPath` with the first
argument being the path provided afaik
The issue is that when packaged Electron apps don't automatically run the
path provided to them
they run the app that is packaged within them
In other words, fork is actually spawn executed with process.execPath, passing fork's first argument as spawn's second argument.
What happens in a packaged app is that process.execPath isn't Electron but the packaged app itself. So if you try to spawn, the app will be opened over and over again.
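Roughly speaking (a simplified sketch, not Node's actual implementation, and ignoring the IPC channel fork sets up), the fork call above amounts to:
// process.execPath is the node binary under `npm start`, but the packaged
// .exe itself once the app is built -- so this relaunches the whole app.
require('child_process').spawn(process.execPath, ['app/server/mainServer.js'])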
So, what Samuel suggested was implemented like this:
// The spawned copy of the app gets this flag as its first argument: only run the server.
if (process.argv[1] === '--start-server') {
  require('./server/mainServer.js')
  return
}

// Normal startup: run the Electron part, then relaunch this executable with the
// flag so the server runs in a separate process.
require('./local/mainLocal.js')
require('child_process').spawn(process.execPath, ['--start-server'])
That way, the first time the packaged app is executed, process.argv[1] will be empty, so the server won't start. It will then run the Electron part (mainLocal in my case) and launch the app again, this time passing the argument. In that second instance the check matches, so the server starts and execution stops there; the app won't open again because the spawn call is never reached.
Huge thanks to Samuel.

How to correct a Bluemix Node.js app that can't accept connections

I created a new Node.js app on Bluemix this morning and downloaded the boilerplate code. I worked on it locally and then pushed it up. On Bluemix, it refuses to start. The error according to the logs is:
Instance (index 0) failed to start accepting connections
So I Googled for that, and in every case where I found the error, the answer was that the application was trying to use a specific port instead of letting Bluemix set it.
Ok, but I'm setting the host/port with the exact code the boilerplate uses:
var appEnv = cfenv.getAppEnv();

// start server on the specified port and binding host
app.listen(appEnv.port, function() {
  // print a message when the server starts listening
  console.log("server starting on " + appEnv.url);
});
So if this is incorrect, it means the code that Bluemix itself told me to download is incorrect as well, and I can't imagine that is the issue.
To identify whether cfenv is at fault, I've tested that piece of code with a number of more complex Node.js apps I have, and they work perfectly on Bluemix.
That message can also appear when an application you've deployed to Bluemix fails to start at all. Here are a few things you can do to troubleshoot your Node.js application on Bluemix:
Tail logs in another terminal while pushing with "cf logs <app-name>". Inspect the logs after the failure to see if something failed during the staging process.
Check that your start command is in one of the two recommended places: scripts.start in package.json, or a Procfile with web: node <start-script> (see the example below).
Check that your application works locally. First (optional), create a .cfignore file containing "/node_modules", so that when you push the app to Bluemix the CF CLI doesn't upload your entire node_modules folder (the dependencies will be installed during staging). Next, wipe out your node_modules directory and do an npm install --production followed by npm start (or your custom start command). This is what Bluemix does when trying to start your application, so you should double-check that it works locally.
Finally, try bumping up your memory, although it is very unlikely that this is the reason your application fails to start.
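To make the start-command check above concrete (app.js is an illustrative entry-point name, not from the original question), the two recommended places look like this. In package.json:
{
  "scripts": {
    "start": "node app.js"
  }
}
Or in a Procfile:
web: node app.js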

How do I successfully notify Airbrake of a deployment when using capistrano to deploy a Node.js project?

This is a bit of an oddball question.
Capistrano 2.14.2
I'm using Capistrano to deploy a couple of Node.js projects, and this works fine (from within the same rvm and gemset Ruby installation). However, I'd like Airbrake to be notified of these deployments.
Using the 'airbrake' Node.js module, and calling
airbrake.trackDeployment({repo: '...'});
works, but I'm not sure how to reliably call this just once at deploy time. If I call it within my server, then Airbrake is notified of a "deployment" every time my server starts, which is obviously not correct.
Adding
require 'airbrake/capistrano'
to deploy.rb definitely does not work.
How do others successfully use airbrake.trackDeployment?
You could create a simple JS file that you run locally (on your machine, for example) to notify Airbrake as a last deploy task. You could, for example, use the backtick operator to run it from a task:
deploy.task :notify_airbrake do
  `node notify_airbrake.js`
end
If you don't have Node installed locally, you could also pick one of the servers and run the notification script over ssh:
deploy.task :notify_airbrake do
  `ssh yourserver "node notify_airbrake.js"`
end
Based on this solution http://dwradcliffe.com/2011/09/26/using-airbrake-with-node.html (which is clearly embedded in a Rails app), I came up with the following, which depends solely on JavaScript:
In my Node.js root directory, create a deploy.js file, like so:
var airbrake = require('airbrake').createClient("AIRBRAKE_API_KEY");

var deployment = {
  rev: process.argv[2],
  repo: process.argv[3],
  env: process.argv[4],
  user: process.argv[5]
};

airbrake.trackDeployment(deployment, function(err, params) {
  if (err) { throw err; }
  console.log('Tracked deployment of %s to %s', params.rev, params.env);
});
In config/deploy.rb, add
require 'airbrake/capistrano'
and
namespace :airbrake do
  desc "Notify Airbrake of a new deploy."
  task :deploy do
    system "node deploy.js #{current_revision} #{repository} #{stage} #{user}"
  end
end
