I can instrument a file/folder and write it to disk like so:
$ istanbul instrument public --output public-coverage --embed-source true
However, I am wondering if there is a way to instrument files on the fly and serve them to the browser without ever writing the instrumented files to disk. Something like this:
app.use(function (req, res, next) {
  const file = req.path; // whatever
  const k = cp.spawn('istanbul', ['instrument']);
  fs.createReadStream(file).pipe(k.stdin); // feed the source into istanbul
  k.stdout.pipe(res);                      // stream the instrumented output back
});
Does anyone know if that's possible, and how?
Actually, it is.
Below is an example where I intercept the request for a normal "main.js" file and return the "instrumented" version instead.
It's just a proof of concept, without any error handling and only for a specific file, but I think you get the point.
Alternatively, you can load istanbul in your code with require("istanbul") and perform the instrumentation in-process, without spawning anything.
app.get("/main.js", (req, res, next) => {
const cmd = path.join(__dirname, "node_modules", ".bin", "istanbul");
const file = path.join(__dirname, "public/main.js");
const s = spawn(cmd, ["instrument", file, "--embed-source", "--no-compact", "--preserve-comments"]);
s.stdout.on("data", (data) => {
res.send(data);
});
});
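For the in-process alternative mentioned above, here is a minimal sketch. It assumes the legacy istanbul 0.x API (istanbul.Instrumenter); newer toolchains have moved to istanbul-lib-instrument, whose API differs.

const fs = require("fs");
const path = require("path");
const istanbul = require("istanbul"); // legacy 0.x API (assumption)

const instrumenter = new istanbul.Instrumenter({ embedSource: true });

app.get("/main.js", (req, res, next) => {
  const file = path.join(__dirname, "public/main.js");
  fs.readFile(file, "utf8", (err, code) => {
    if (err) return next(err);
    // instrument(code, filename, callback) runs in-process, no spawn needed
    instrumenter.instrument(code, file, (err, instrumented) => {
      if (err) return next(err);
      res.type("application/javascript").send(instrumented);
    });
  });
});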
I'm using NodeJS on a VM. One part of it serves up pages, and another part is an API. I've run into a problem where fs.createReadStream attempts to access a different path than the one being passed into the function. I made a small test server to see if something else in the server was affecting path usage, but it's happening on my test server as well. First, here's the code:
const fs = require('fs');
const path = require('path');
const csv = require('csv-parser');
const readCSV = (filename) => {
  console.log('READ CSV GOT ' + filename); // show me what you got
  return new Promise((resolve, reject) => {
    const arr = [];
    fs.createReadStream(filename)
      .pipe(csv())
      .on('data', row => {
        arr.push(row);
      })
      .on('error', err => {
        console.log(err);
      })
      .on('end', () => {
        resolve(arr);
      });
  });
};
// tried this:
// const dir = path.relative(
//   path.join('path', 'to', 'this', 'file'),
//   path.join('path', 'to', 'CONTENT.csv')
// );
// tried a literal relative path:
// const dir = '../data/CONTENT.csv';
// tried a literal absolute path:
// const dir = '/repo/directory/server/data/CONTENT.csv';
// tried an absolute path:
const dir = path.join(__dirname, 'data', 'CONTENT.csv');
const content = readCSV(dir)
  .then(result => { console.log(result[0]); })
  .catch(err => { console.log(err); });
...but any way I slice it, I get the following output:
READ CSV GOT /repo/directory/server/data/CONTENT.csv
throw er; // Unhandled 'error' event
^
Error: ENOENT: no such file or directory, open '/repo/directory/data/CONTENT.csv'
That is, why is fs.createReadStream somehow stripping the server directory out of the path? I suppose I could hard-code the directory into the call to createReadStream, but I mainly want to know why this is happening.
Some extra details: I'm stuck on Node v8.11 and can't go any higher. On the server itself I believe I'm using the older function(param) {...} syntax instead of arrow functions, but the behavior is exactly the same.
Please help!!
The code itself is working correctly. I think your file CONTENT.csv needs to be in the data folder at the path being opened, i.e. "/repo/directory/data/CONTENT.csv".
I'm answering my own question because I found an answer, even though I'm not entirely sure why it works, and it's at least interesting. To the best of my estimation, it has something to do with the call stack and what Node.js treats as the origin of the function call. My server is set up in an MVC pattern: the main app.js is in the root dir, the function being called lives in the /controllers folder, and I had been writing relative paths from that folder. I'm still not sure why absolute paths didn't work.
The call stack goes:
app.js:
app.use('/somepath', endpointRouter);
...then in endpointRouter.js:
router.get('/request/file', endpointController.getFile);
...then finally in endpointController.js:
const readCSV = filename => {
//the code I shared
}
exports.getFile = (req, res, next) => {
// code that calls readCSV(filename)
}
...and I believe that because Node views the chain as originating from app.js, it treats all relative paths as relative to app.js in my root folder. When I switched to the unintuitive single-dot relative path './data/CONTENT.csv', it worked with no issue.
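For what it's worth, the general rule is that fs resolves relative paths against the process's current working directory (process.cwd(), i.e. wherever node was launched), not against the file that contains the call. A small illustration (the paths shown are hypothetical):

const path = require('path');

console.log(process.cwd()); // where `node app.js` was launched, e.g. /repo/directory
console.log(__dirname);     // the folder containing this file, e.g. /repo/directory/controllers

// './data/CONTENT.csv' therefore resolves against process.cwd():
// /repo/directory/data/CONTENT.csv

// To be independent of the launch directory, anchor on __dirname:
const csvPath = path.join(__dirname, '..', 'data', 'CONTENT.csv');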
I want to get the console contents of the current running Node.js script.
I've tried listening for this event, but it doesn't work:
setInterval(function() { console.log("Hello World!") }, 1000);
process.stdout.on('message', (message) => {
  console.log('stdout: ' + message.toString());
});
The handler never receives the event.
This is not a pure Node.js solution, but it works very well if you're running Linux.
Create a start.sh file.
Put the following into it:
start.sh:
#!/bin/bash
touch ./console.txt
# |& pipes both stdout and stderr into tee, which mirrors everything
# to console.txt while still printing it to the terminal
node ./MyScript.js |& tee console.txt &
wait
Now open your Node.js script (MyScript.js) and add this Express.js route:
MyScript.js:
const fs = require('fs');
const express = require('express');
const app = express();

app.get('/console', function(req, res) {
  // Return whatever tee has mirrored into console.txt so far
  var console2 = fs.readFileSync('./console.txt', 'utf8');
  res.send(console2);
});
Always start your Node.js application by calling start.sh.
Now calling http://example.com/console should output the console!
A part of this answer was used.
NOTE: To format the line breaks of the console output to be shown correctly in the browsers, you can use a module like nl2br.
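If you'd rather not pull in a module, a one-line helper achieves the same effect (the helper name here is just for illustration):

// Convert newlines to <br> so browsers render the log's line breaks
function nl2br(text) {
  return String(text).replace(/\r\n|\n|\r/g, '<br>\n');
}

res.send(nl2br(console2));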
A piece of advice: problems aren't always solved the direct way; most are solved indirectly. Keep exploring the possible ways to achieve what you want rather than searching only for the exact thing you have in mind.
There's no 'message' event on process.stdout.
I want to make a GET in my Express.js app called /getconsole .. it should return the console of the current running Node.js script (which is running the Express.js app too)
What you should use is a custom logger. I recommend winston with a file transport; you can then read from that file when a request hits your endpoint.
const express = require('express');
const fs = require('fs');
const winston = require('winston');
const path = require('path');

const logFile = path.join(__dirname, 'out.log');
const app = express();

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [
    new winston.transports.Console({
      format: winston.format.simple()
    }),
    new winston.transports.File({
      filename: logFile
    })
  ]
});

// Don't use console.log anymore.
logger.info('Hi');

app.get('/console', (req, res) => {
  // Secure this endpoint somehow
  fs.createReadStream(logFile)
    .pipe(res);
});

app.get('/log', (req, res) => {
  logger.info('Log: ' + req.query.message);
  res.sendStatus(204); // acknowledge, otherwise the request would hang
});

app.listen(3000);
You can also use a websocket connection, and create a custom winston transport to emit the logs.
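A custom transport is a small subclass. Here is a minimal sketch assuming socket.io carries the websocket connection; the class and event names are made up for illustration:

const Transport = require('winston-transport');

class SocketTransport extends Transport {
  constructor(opts) {
    super(opts);
    this.io = opts.io; // a socket.io server instance (assumption)
  }

  log(info, callback) {
    setImmediate(() => this.emit('logged', info));
    this.io.emit('log', info); // push each log entry to connected clients
    callback();
  }
}

// logger.add(new SocketTransport({ io }));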
process.stdout, when attached to a tty (terminal), is an instance of a writable stream. The same is true of process.stderr, to which Node writes error messages. Neither stream has a 'message' event; the on() method will happily subscribe to any named event, even one that will never fire.
Your requirement is not clear from your question. If you want to intercept and inspect console.log operations, you can pipe stdout to some other stream. Similarly, you can pipe stderr to some other stream to intercept and inspect errors.
Or, in a burst of ugliness and poor maintainability, you can redefine the console.log and console.error functions to something that does what you need.
It sounds like you want to buffer up the material written to the console, and then return it to an http client in response to a GET operation. To do that you would either
stop using console.log for that output, and switch over to a high-quality logging npm package like winston.
redefine console.log (and possibly console.error) to save its output in some kind of simple express-app-scope data structure, perhaps an array of strings. Then implement your GET to read that array of strings, format it, and return it.
My first suggestion is more scalable.
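For completeness, the second option can be as small as this (a sketch; the buffer is unbounded, so cap it in real use):

const logBuffer = [];
const originalLog = console.log;

console.log = function (...args) {
  logBuffer.push(args.join(' '));   // keep a copy for the GET endpoint
  originalLog.apply(console, args); // still write to the real stdout
};

app.get('/getconsole', (req, res) => {
  res.type('text/plain').send(logBuffer.join('\n'));
});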
By the way, please consider the security implications of making your console log available to malicious strangers.
I have an app designed as follows:
//server.js =====================================================
var restify = require('restify'),
    route1 = require('./routes/route1'),
    route2 = require('./routes/route2');
    // ...dozens more route files

var server = restify.createServer({
  name: 'xyz_server'
});

route1(server);
route2(server);
Now each route file looks like this:
//route1.js =====================================================
module.exports = function(server) {
  server.get('/someRoute', function(req, res, next) {
    //.. do something
  });
  server.get('/anotherRoute', function(req, res, next) {
    //..something else
  });
};
Now the issue is that we have dozens of route files and hundreds of routes in total.
There are multiple developers working on this project and several routes are being added daily.
Is there a function in restify that gives me a list of all routes in the system?
What I am looking for is something like:
server.listAllRoutes();
Is anyone aware of this?
Try something like this:
function listAllRoutes(server) {
  console.log('GET paths:');
  server.router.routes.GET.forEach(function(value) {
    console.log(value.spec.path);
  });

  console.log('PUT paths:');
  server.router.routes.PUT.forEach(function(value) {
    console.log(value.spec.path);
  });
}

listAllRoutes(server);
This should list all GET and PUT paths, adding POST and DELETE should be easy :)
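To avoid repeating yourself per method, you can also iterate over whatever methods the router has registered (this assumes the same old-router layout as above):

function listAllRoutes(server) {
  Object.keys(server.router.routes).forEach(function(method) {
    console.log(method + ' paths:');
    server.router.routes[method].forEach(function(value) {
      console.log('  ' + value.spec.path);
    });
  });
}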
2019 update: server.router.routes is no longer available; instead we have server.router.getRoutes(), which returns a map of the registered routes. So we can log all the routes using:
function listAllRoutes(server) {
  Object.values(server.router.getRoutes()).forEach(value =>
    console.log(
      `ENDPOINT REGISTERED :: ${value.method} :: ${server.url}${value.path}`
    )
  );
}
http://restify.com/docs/server-api/#server
There is a router.getRoutes() method, but it returns an object, which is not the best shape for listing things. You could fiddle around with it to turn it into an array of the shape you like.
Alternatively, you can access all the routes as an array and then map them; even better if you use a lib like better-console to get console.table in Node. The following is working nicely for me in restify@8.3.0:
import console from 'better-console';

function listRoutes(server) {
  // Beware: _registry and _findMyWay are probably private APIs, subject to change
  const { routes } = server.router._registry._findMyWay;
  const mapped = routes.map(({ method, path }) => ({ method, path }));
  console.table(mapped.sort((a, b) => a.method.localeCompare(b.method)));
}
Need help.
I use gulp-connect and its livereload method, but if I build a few templates at a time I get a lot of page refreshes. Is there a solution that lets me build several templates with a single page refresh?
So, I reproduced the problem you have and came across this working solution.
First, let's check the gulp plugins you need:
gulp-jade
gulp-livereload
optional: gulp-load-plugins
In case you need any of them, go to:
http://gulpjs.com/plugins/
Search for them and install them.
Strategy: I created a gulp task (called webserver below) that watches your *.jade files; as you work on a file and save it, gulp will compile it into HTML and refresh the browser.
To accomplish that, we define a function called compileAndRefresh that takes the file returned by the watcher, compiles it into HTML, and refreshes the browser (tested with the livereload plugin for Chrome).
Notes:
I always use gulp-load-plugins to load plugins, which is why the code references them as plugins.jade, plugins.webserver, and so on.
This only compiles files as they are saved, while the task is executing on the command line; it will not compile files that are not being worked on. To compile everything, you need a task that processes all files, not only the ones that changed (see the sketch after the gulpfile below).
Assume the .jade files live in /jade and the HTML output goes to /html.
So, here is the gulpfile.js:
var gulp = require('gulp'),
    gulpLoadPlugins = require('gulp-load-plugins'),
    plugins = gulpLoadPlugins();

gulp.task('webserver', function() {
  gulp.src('./html')
    .pipe(plugins.webserver({
      livereload: true
    }));

  gulp.watch('./jade/*.jade', function(event) {
    compileAndRefresh(event.path);
  });
});

function compileAndRefresh(file) {
  gulp.src(file)
    .pipe(plugins.jade())
    .pipe(gulp.dest('./html'));
}
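As noted above, the watcher only rebuilds files as they change. A compile-everything task, under the same /jade to /html assumption, could look like this:

gulp.task('templates', function() {
  return gulp.src('./jade/*.jade')
    .pipe(plugins.jade())
    .pipe(gulp.dest('./html'));
});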
Post edit notes:
Removed the liveReload call from compileAndRefresh (the webserver plugin handles that).
Use the gulp-webserver plugin instead of gulp-connect, as suggested on its repository: "New plugin based on connect 3 using the gulp.src() API. Written in plain javascript." https://github.com/schickling/gulp-webserver
Something you can do is watch only the files that change, and then apply a function to just those files, something like this:
gulp.task('live', function() {
  gulp.watch('templates/folder', function(event) {
    refresh_templates(event.path);
  });
});

function refresh_templates(file) {
  // Keep gulp.src on the same line as return; otherwise automatic
  // semicolon insertion makes the function return undefined
  return gulp.src(file)
    .pipe(plugins.embedlr())
    .pipe(plugins.livereload());
}
PS: this is not a working example, and I don't know if you are using embedlr, but the point is that you can watch files and use the callback to hand the changed paths to another function, manipulating only those files. Also, I assume your goal is to refresh the templates in your browser, but you can manipulate them however you like: save them to dest or do whatever you want.
The key point here is how to handle the files that change: the watch callback plus a custom function.
// Note: `loc` (a path-config object) and `json_array` (template locals)
// are defined elsewhere in the original gulpfile.
var gulp = require('gulp'),
    changed = require('gulp-changed'),
    jade = require('gulp-jade'),
    connect = require('gulp-connect');

var jadeTask = function(path) {
  path = path || loc.jade + '/*.jade';
  if (/source/.test(path)) {
    path = loc.jade + '/**/*.jade';
  }
  return gulp.src(path)
    .pipe(changed(loc.markup, { extension: '.html' }))
    .pipe(jade({
      locals: json_array,
      pretty: true
    }))
    .pipe(gulp.dest(loc.markup))
    .pipe(connect.reload());
};
First, install the required plugins:
gulp
express
gulp-jade
connect-livereload
tiny-lr
connect
Then write the code:
var gulp = require('gulp');
var express = require('express');
var path = require('path');
var connect = require('connect');
var jade = require('gulp-jade');

var app = express();

gulp.task('express', function() {
  app.use(require('connect-livereload')({ port: 8002 }));
  app.use(express.static(path.join(__dirname, '/dist')));
  app.listen(8000);
});

var tinylr;
gulp.task('livereload', function() {
  tinylr = require('tiny-lr')();
  tinylr.listen(8002);
});

function notifyLiveReload(event) {
  var fileName = require('path').relative(__dirname, event.path);
  tinylr.changed({
    body: {
      files: [fileName]
    }
  });
}

gulp.task('jade', function() {
  gulp.src('src/*.jade')
    .pipe(jade())
    .pipe(gulp.dest('dist'));
});

gulp.task('watch', function() {
  gulp.watch('dist/*.html', notifyLiveReload);
  gulp.watch('src/*.jade', ['jade']);
});

gulp.task('default', ['livereload', 'express', 'watch', 'jade'], function() {});
find the example here at GitHub
I'm a beginner in Node.js, and was having trouble with this piece of code.
var fs = require('fs');

Framework.Router = function() {
  this.run = function(req, res) {
    fs.exists(global.info.controller_file, function(exists) {
      if (exists) {
        // Here's the problem
        res.writeHead(200, {'Content-Type': 'text/html'});
        var cname = App.ucfirst(global.info.controller) + 'Controller';
        var c = require(global.info.controller_file);
        var c = new App[cname]();
        var action = global.info.action;
        c[action].apply(global.info.action, global.info.params);
        res.end();
      } else {
        App.notFound();
        return false;
      }
    });
  };
};
The problem lies in the part after checking whether global.info.controller_file exists; I can't get the code inside if (exists) { ... } to work properly.
I tried logging the values of all the variables in that section, and they have their expected values; however, the line c[action].apply(global.info.action, global.info.params); is not running as expected.
It is supposed to call a function in the controller file that does a simple res.write('hello world');. I wasn't having this problem before I started checking for the file with fs.exists; everything inside the if statement worked perfectly fine before that check.
Why is the code not running as expected? Why does the request just time out?
Does it have something to do with the whole synchronous vs asynchronous thing? (Sorry, I'm a complete beginner)
Thank you
Like others have commented, I would suggest you rewrite your code to bring it more in-line with the Node.js design patterns, then see if your problem still exists. In the meantime, here's something which may help:
The advice about not using require dynamically at "run time" should be heeded, and calling fs.exists() on every request is tremendously wasteful. However, say you want to load all *.js files in a directory (perhaps a "controllers" directory). This is best accomplished using an index.js file.
For example, save the following as app/controllers/index.js
var fs = require('fs');

var files = fs.readdirSync(__dirname);
var dotJs = /\.js$/;

for (var i in files) {
  if (files[i] !== 'index.js' && dotJs.test(files[i])) {
    exports[files[i].replace(dotJs, '')] = require('./' + files[i]);
  }
}
Then, at the start of app/router.js, add:
var controllers = require('./controllers');
Now you can access the app/controllers/test.js module by using controllers.test. So, instead of:
fs.exists(controllerFile, function(exists) {
  if (exists) {
    ...
  }
});
simply:
if (controllers[controllerName]) {
  ...
}
This way you can retain the dynamic functionality you desire without unnecessary disk IO.
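Tying that back to the original router, a minimal sketch could look like the following. Names such as App, ucfirst and global.info come from the question; note that apply() is given the controller instance as this, which the original code (passing the action string) likely got wrong:

var controllers = require('./controllers'); // preloads every controller at startup

Framework.Router = function() {
  this.run = function(req, res) {
    var name = global.info.controller;
    if (controllers[name]) {
      res.writeHead(200, {'Content-Type': 'text/html'});
      var c = new App[App.ucfirst(name) + 'Controller']();
      c[global.info.action].apply(c, global.info.params);
      res.end();
    } else {
      App.notFound();
    }
  };
};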