Did anyone set up something like this for themselves using the existing
node.js REPL? I couldn't think of a quick way to do it.
The way I do it today is using emacs and this:
https://github.com/ivan4th/swank-js
This module is composed of:
- A SLIME-js addon to emacs which, in combination with js2-mode, lets you simply issue a C-M-x somewhere in the body of a function def - and off goes the function's string to the ..
- Swank-js server (yes, you could eval from your local machine directly to a remote process) written in Node.js - it receives the string of the function you eval'ed and actually evals it
- A whole part that lets you connect to another port on that server with your BROWSER and then lets you manipulate the DOM on that browser (which is pretty amazing but not relevant here)
My solution uses SLIME-js on the emacs side AND I require('swank-js') in my app.js file
Now.. I have several issues and questions regarding my solution or
other possible ones:
Q1: Is this overdoing it? Does someone have a secret way to eval stuff
from nano into their live process?
Q2: I had to change the way swank-js is EVALing.. it used some
kind of black magic like this:
var Script = process.binding('evals').Script;
var evalcx = Script.runInContext;
....
this.context = Script.createContext();
for (var i in global) this.context[i] = global[i];
this.context.module = module;
this.context.require = require;
...
r = evalcx("CODECODE", this.context, "repl");
which, as far as I understand, just copies the global variables to the
new context, and upon eval doesn't change the original function
definitions - SOOO.. I am just using plain "eval" and IT WORKS.
Do you have any comments regarding this?
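To illustrate the difference I mean, here's a minimal sketch using only core node APIs (greet is just a made-up function):
var vm = require('vm');

function greet() { return 'v1'; }

// re-evaluating inside a separate context only rebinds that context's copy:
var ctx = vm.createContext({ greet: greet });
vm.runInContext("greet = function () { return 'v2'; };", ctx);
console.log(greet());     // 'v1' - the original binding is untouched
console.log(ctx.greet()); // 'v2'

// plain eval runs in the calling scope, so the live binding does change:
eval("greet = function () { return 'v3'; };");
console.log(greet());     // 'v3'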
Q3: In order to re-eval a function, it needs to be a GLOBAL function -
Is it bad practice to have all function definitions as global (clojure-like)? Do you think there is another way to do this?
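For example, this is what I mean by defining everything globally (fetchUser is a made-up name):
global.fetchUser = function (id) {
  // ... re-evaling this whole expression swaps the live definition
};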
Actually, swank.js is getting much better, and it is now much easier to set up swank js with your project using NPM. I'm in the process of writing the documentation right now, but the functionality is there!
Check this out http://nodejs.org/api/vm.html
var util = require('util'),
    vm = require('vm'),
    sandbox = {
      animal: 'cat',
      count: 2
    };

vm.runInNewContext('count += 1; name = "kitty"', sandbox, 'myfile.vm');
console.log(util.inspect(sandbox));
// { animal: 'cat', count: 3, name: 'kitty' }
Should help you a lot; all of the sandboxing things for node use it :) but you can use it directly :)
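For the live-process use case, the key point is that a context object can be reused across evals, so state persists between them (a minimal sketch):
var vm = require('vm');
var context = vm.createContext({ counter: 0 });

vm.runInContext('counter += 1', context);
vm.runInContext('counter += 1', context);
console.log(context.counter); // 2 - state persisted between the two evals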
You might take a look at jsapp.us, which runs JS in a sandbox, and then exposes that to the world as a quick little test server. Here's the jsapp.us github repo.
Also, stop into #node.js and ask questions for a quicker response :)
I'm new to Node and to the sandboxing module vm2. In the documentation for the latter, it gives an example of its usage:
let functionInSandbox = vm.run("module.exports = function(who) { console.log('hello '+ who); }");
functionInSandbox('world');
Question: what is this actually doing?
Firstly, why is module.exports used here at all? I.e., why not omit it, as below?
let functionInSandbox = vm.run("function(who) { console.log('hello '+ who); }");
functionInSandbox('world');
Secondly, another way of looking at it: in regular node programming, it is beginner's knowledge that require(inc) is used in one file to assign to a variable whatever, in another file (chosen by inc), was assigned to module.exports. How is that different from the above usage with vm2?
Specifically: is require(...) being implicitly called in the above? How could multiple modules be defined (as above) and referred to, within one sandbox?
It's hard to know what questions to even ask - really I'm just hoping for an explanation of the ways in which module.exports can be used with vm2 that differ from regular node programming, highlighting the differences.
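For reference, here is my current understanding in code form (a sketch; I'm assuming vm2's NodeVM class, which is what the docs use for module.exports-style evaluation):
const { NodeVM } = require('vm2');
const vm = new NodeVM();

// vm2 wraps the string in a CommonJS-style module, runs it in the sandbox,
// and returns whatever the sandboxed code assigned to module.exports:
let functionInSandbox = vm.run("module.exports = function(who) { console.log('hello ' + who); }");
functionInSandbox('world'); // prints "hello world"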
I am using the environment variable and arguments parsing module called nconf for my node.js Express web server.
https://github.com/indexzero/nconf
I decided that the best way to make the nconf data global was to simply attach it to the process variable (as in process.env). Is this a good idea or a bad idea? Will it slow down execution by weighing down "process"?
Here is my code:
var nconf = require('nconf');

nconf.argv()
     .env()
     .file({ file: './config/config.json' });

nconf.defaults({
  'http': {
    'port': 3000
  }
});

process.nconf = nconf;

// now I can retrieve config settings anywhere, like so:
// process.nconf.get('key');
frankly, I kind of like this solution. Now I can retrieve the config data anywhere, without having to require a module. But there may be downsides to this...and it could quite possibly be a very bad idea. IDK.
It won't slow down the execution, but feels "smelly". It's hard to discover, and it will be difficult to test, if you ever decide you need to.
A better solution would be to attach settings to a module and use require() to import it wherever needed.
The best solution would be to just pass your settings object to the classes or modules that need it. Either directly, or as part of some kind of "global context".
Eg.
// note: this "global" is just a plain context object passed around explicitly
// (the name shadows node's built-in global, so a name like "context" may be clearer)
var global = {
  settings: {
    port: 8080
  }
};

//...
global.api = new Api(global);
//...

function Api(global) {
  var port = global.settings.port;
}
UPDATE: more info on why the original pattern is bad:
1) Discoverability
You attach your settings to the process object and go off to a different project. A year later, someone else takes over, or you need to update things. Will you remember that you attached your settings as process.nconf? Or was it process.settings?
Now imagine you have 10 different global things, attached under different names, on different places.
It's not as bad as attaching directly to the global context, but it's certainly better to clearly see where the stuff you're using is coming from (constructor or module).
2) Testing
You decide you need to test your module. So now you need to tweak your settings for each test instead of loading them from a file or argv. How do you do that?
In the case of the global process.nconf or require("settings") patterns, you need to do something like this:
function canOpenAPIOnTheConfiguredPort(done) {
  // save the global setting, override it for the test, restore it after
  var savedApiPort = process.nconf.get('api:port');
  process.nconf.set('api:port', '1234');

  var api = new Api();
  test.assertEqual(api.port, '1234');

  process.nconf.set('api:port', savedApiPort);
  done();
}
As your application grows, this quickly becomes annoying (e.g. imagine having to mock 10 things). In comparison, here's how you do it using the dependency injection (constructor) pattern:
function canOpenAPIOnTheConfiguredPort(done) {
  var api = new Api({
    port: '1234'
  });
  test.assertEqual(api.port, '1234');
  done();
}
Notice that nconf is a singleton.
I usually configure it at the very beginning of the program, and then when I need a setting in another file I do:
var nconf = require('nconf');
nconf.get('x');
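Because require() caches modules, the instance you configure at startup is the same one every other file gets back; a minimal two-file sketch (file names are made up):
// app.js - configure once at startup
var nconf = require('nconf');
nconf.argv().env().file({ file: './config/config.json' });

// routes.js - gets the same, already-configured singleton
var nconf = require('nconf');
console.log(nconf.get('http:port'));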
I want to make sure that in case the code is running in test mode, that it does not (accidentally) access the wrong database. What is the best way to detect if the code is currently running in test mode?
As already mentioned in the comments, it is bad practice to make your code aware of tests. I can't even find this topic mentioned on SO, or anywhere else.
However, I can think of ways to detect the fact that you were launched by a test runner.
For me mocha doesn't add itself to global scope, but adds global.it.
So your check may be
var isInTest = typeof global.it === 'function';
To be sure you don't false-detect, I would suggest also checking for global.sinon and global.chai, which you most likely use in your node.js tests.
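Putting it together (a sketch; the sinon/chai checks only make sense if you actually load them in your tests):
var isInTest = typeof global.it === 'function'
            && typeof global.sinon !== 'undefined'
            && typeof global.chai !== 'undefined';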
Inspecting process.argv is a good approach in my experience.
For instance if I console.log(process.argv) during a test I get the following:
[
'node',
'/usr/local/bin/gulp',
'test',
'--file',
'getSSAI.test.unit.js',
'--bail',
'--watch'
]
From which you can see that gulp is being used. Using yargs makes interpreting this a whole lot easier.
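For example, a hypothetical check built on the dump above (adjust it to your own runner):
// yargs parses process.argv.slice(2); positional args land in argv._
var argv = require('yargs').argv;
var isInTest = argv._.indexOf('test') > -1;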
I strongly agree with Kirill and in general that code shouldn't be aware of the fact that it's being tested (in your case perhaps you could pass in your db binding / connection via a constructor?), for things like logging I can see why you might want to detect this.
Easiest option is to just use the detect-mocha npm package.
var detectMocha = require('detect-mocha');
if (detectMocha()) {
  // doSomethingFancy
}
If you don't want to do that, the relevant code is just
function isMochaRunning(context) {
  return ['afterEach', 'after', 'beforeEach', 'before', 'describe', 'it'].every(function (functionName) {
    return context[functionName] instanceof Function;
  });
}
Where context is the current window or global.
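Usage would then be (a sketch):
if (isMochaRunning(global)) {
  // doSomethingFancy
}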
I agree with @Joshua's answer that inspecting process.argv is a good approach.
So, I've written a simple mocha-detecting snippet:
// matches ".../node_modules/mocha/bin/_mocha", with either / or \ as the separator
const _MOCHA_PATH = new RegExp('(\\\\|/)node_modules\\1mocha\\1bin\\1_mocha$');

var isMochaRunning = process.argv.findIndex(arg => _MOCHA_PATH.test(arg)) > -1;
In a small project with no logging infrastructure, I use
if (process.env.npm_lifecycle_event !== 'test')
  console.error(e);
to avoid logging expected errors during testing, as they would interfere with test output. (npm sets npm_lifecycle_event to the name of the script being run, so it equals 'test' whenever the suite was started via npm test.)
I'm writing a node module to consume a REST API for a service. For all intents and purposes we might as well say it's twitter (though it's not).
The API is not small. Over a dozen endpoints. Given that I want to offer convenience methods for each of the endpoints I need to split up the code over multiple files. One file would be far too large.
Right now I am testing the pattern I will outline below, but would appreciate any advice as to other means by which I might break up this code. My goal essentially is to extend the prototype of a single object, but do so using multiple files.
Here's the "model" I'm using so far, but I don't think it's really a good idea:
TwitterClient.js
function TwitterClient() {
  this.foo = "bar";
}

// load every endpoint file and let it extend the prototype;
// __dirname keeps this working regardless of the process's cwd
require("fs").readdirSync(__dirname + "/endpoints").forEach(function (file) {
  require("./endpoints/" + file)(TwitterClient);
});

module.exports = TwitterClient;
endpoints/endpointA.js etc
module.exports = function (TwitterClient) {
  TwitterClient.prototype.someMethod = function () {
    //do things here
  };
};
The basic idea obviously is that any file in the endpoints folder is automatically loaded and the TwitterClient is passed in to it, so that its prototype can be accessed/extended.
I don't plan to stick with this pattern because for some reason it seems like a bad idea to me.
Any suggestions of better patterns are very much appreciated, cheers
One of the pleasures of frameworks like Rails is being able to interact with models on the command line. Being very new to node.js, I often find myself pasting chunks of app code into the REPL to play with objects. It's dirty.
Is there a magic bullet that more experienced node developers use to get access to their app specific stuff from within the node prompt? Would a solution be to package up the whole app, or parts of the app, into modules to be require()d? I'm still living in one-big-ol'-file land, so pulling everything out is, while inevitable, a little daunting.
Thanks in advance for any helpful hints you can offer!
One-big-ol'-file land is actually a good place to be in for what you want to do. Nodejs can also require its REPL in the code itself, which will save you copying and pasting.
Here is a simple example from one of my projects. Near the top of your file do something similar to this:
function _cb() {
  console.log(arguments);
}

var repl = require("repl");
var context = repl.start("$ ").context;
context.cb = _cb;
Now just add to the context throughout your code. The _cb is a dummy callback to play with function calls that require one (and see what they'll return).
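Once the script is running, anything you attached to the context is callable at the prompt; a hypothetical session (exact output formatting varies by node version):
$ cb(1, 2)
{ '0': 1, '1': 2 }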
Seems like the REPL API has changed quite a bit; this code works for me:
var repl = require("repl");

var replServer = repl.start({
  prompt: "node > ",
  input: process.stdin,
  output: process.stdout,
  useGlobal: true
});

replServer.on('exit', function () {
  console.log("REPL DONE");
});
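To expose your app's objects at that prompt, attach them to the server's context (a sketch; myApp is a placeholder for whatever you want to inspect):
replServer.context.myApp = myApp;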
You can also take a look at this answer https://stackoverflow.com/a/27536499/1936097. This code will automatically load a REPL if the file is run directly from node AND add all your declared methods and variables to the context automatically.