Curl command is not processing changes I make to file - node.js

I am trying to test parsing of a zip file in node.js using curl from the command line. Originally, I had a route that looks like this:
app.post('/processZip', (req, res) => {
  const zip = req.file
  console.log(req)
  extractCSVFilesFromZip(zip, '/tmp/connections', '/tmp/messages')
  const connectionsOutputPath = '/tmp/connections'
  const messagesOutputPath = '/tmp/messages'
  console.log(`Size of Parsed Connections File: ${connectionsOutputPath.size}`)
  console.log(`Size of Parsed Messages File: ${messagesOutputPath.size}`)
  res.send('success!')
})
which calls a function that looks like this:
const extractCSVFilesFromZip = (zipFilePath, connectionsCSVOutputPath, messagesCSVOutputPath) => {
  console.log(zipFilePath)
  fs.createReadStream(zipFilePath)
    .pipe(unzip.Parse())
    .on('entry', entry => {
      const [fileName, size] = [entry.path, entry.size]
      if (fileName === 'Connections.csv') {
        console.log(`Size of Connections File to Parse: ${size}`)
        entry.pipe(fs.createWriteStream(connectionsCSVOutputPath))
      } else if (fileName === 'Messages.csv') {
        console.log(`Size of Messages File to Parse: ${size}`)
        entry.pipe(fs.createWriteStream(messagesCSVOutputPath))
      } else {
        entry.autodrain()
      }
    })
}
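(For context: Express only populates req.file when an upload middleware such as multer parses the multipart body. A minimal sketch of that wiring, with multer as an assumed dependency and 'file' matching the field name in the curl command below:)

const multer = require('multer')
const upload = multer({ dest: '/tmp/uploads' }) // temp dir is an arbitrary choice

app.post('/processZip', upload.single('file'), (req, res) => {
  // req.file.path now points to the uploaded zip on disk
  extractCSVFilesFromZip(req.file.path, '/tmp/connections', '/tmp/messages')
  res.send('success!')
})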
I am using this curl command to test the request:
curl -F file=@../../../Downloads/Basic_LinkedInDataExport_09-14-2018.zip http://localhost:5000/processZip/
Originally, it gave me an error pointing to the first instance of createReadStream in the function, so I commented out all the code and just tried to console.log(zipFilePath) to see what was being sent. But I still get the same error. In fact, I can comment out, remove, or change any of the code in either the route or the file, and it makes no difference: I still get the same error. It's as if curl is sending the request to a cached version of the files and not picking up the changes I am making. But if I examine the files from the command line with sudo nano, I can see the updated versions. What could be causing this issue? I have saved the files and restarted the server each time. Could it be that I need to wait longer than usual for the changes to be processed because this is a larger codebase than I am used to working in, or is something else to blame? For what it is worth, the servers are being run by forever. Thanks in advance for any help!

Okay, I figured it out: there was a ghost process running on port 5000. killall -9 node did the trick!
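If you hit this again, a quicker way to confirm a stale listener is to look up whatever is bound to the port and kill just that PID (a sketch; port 5000 is from the question):

# Show the process currently listening on port 5000, then kill it by PID
lsof -i :5000
kill -9 <PID>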

Related

Slash command registers command from wrong folder - discord.js 14

I'm tired of trying to solve this. First off, here is my deployment code:
const { REST, Routes } = require('discord.js');
const fs = require('node:fs');
const { client_id } = require('./config.json');

const commands = [];
// Grab all the command files from the commands directory you created earlier
const commandFiles = fs.readdirSync('./slashCommands').filter(file => file.endsWith('.js'));
// Grab the SlashCommandBuilder#toJSON() output of each command's data for deployment
for (const file of commandFiles) {
  const command = require(`./slashCommands/${file}`);
  commands.push(command.data.toJSON());
}

// Construct and prepare an instance of the REST module
const rest = new REST({ version: '10' }).setToken(process.env.TOKEN);

// and deploy your commands!
(async () => {
  try {
    console.log(`Started refreshing ${commands.length} application (/) commands.`);
    // The put method is used to fully refresh all commands in the guild with the current set
    const data = await rest.put(
      Routes.applicationCommands(client_id),
      { body: commands },
    );
    console.log(`Successfully reloaded ${data.length} application (/) commands.`);
  } catch (error) {
    // And of course, make sure you catch and log any errors!
    console.error(error);
  }
})();
It is supposed to get the commands from the "slashCommands" folder. So I run node deploy-commands.js and it works.
The problem is when I do the slash command '/ping', I get this error:
/home/runner/Nocinel/commands/ping.js:8
message.reply('🏓 **Ball is going over the net...**').then(m => { m.edit(`**🏓 Pong!\n:stopwatch: Uptime: ${Math.round(message.client.uptime / 60000)} minutes\n:sparkling_heart: Websocket Heartbeat: ${message.client.ws.ping}ms\n:round_pushpin: Rountrip Latency: ${m.createdTimestamp - message.createdTimestamp}ms**`) });
^
TypeError: m.edit is not a function
at /home/runner/Nocinel/commands/ping.js:8:73
repl process died unexpectedly: exit status 1
Now this error indicates that I am running a command from my "commands" folder rather than my "slashCommands" folder, which doesn't make sense, because I explicitly coded it to only get commands from the "slashCommands" folder.
I have restarted, deleted, waited for an hour, and tested it multiple times; it always gives the same disappointing result. I see absolutely nothing wrong with my code.
There is no problem with registering the commands (deploy-commands.js only registers commands; it doesn't make them work). The problem has to be in your index.js: you have to route incoming interactions to the commands in your slashCommands folder. The registration itself was successful.
Documentation:
https://discordjs.guide/creating-your-bot/command-handling.html#loading-command-files
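As a rough sketch of what that index.js handling can look like (following the guide above; it assumes each file in ./slashCommands exports a data builder and an execute function):

const fs = require('node:fs');
const path = require('node:path');
const { Client, Collection, GatewayIntentBits, Events } = require('discord.js');

const client = new Client({ intents: [GatewayIntentBits.Guilds] });
client.commands = new Collection();

// Load every slash command from the slashCommands folder, not from commands
const commandsPath = path.join(__dirname, 'slashCommands');
for (const file of fs.readdirSync(commandsPath).filter(f => f.endsWith('.js'))) {
  const command = require(path.join(commandsPath, file));
  client.commands.set(command.data.name, command);
}

client.on(Events.InteractionCreate, async interaction => {
  if (!interaction.isChatInputCommand()) return;
  const command = client.commands.get(interaction.commandName);
  if (!command) return;
  try {
    await command.execute(interaction);
  } catch (error) {
    console.error(error);
  }
});

client.login(process.env.TOKEN);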

Electron app in production mode fails to run external script via child_process.spawnSync API in Mac but works perfectly in Linux

The app computes the sum of the exponentials of the two entered integers using an R script. The inputs are passed in the form of a JSON object to the R script via the child_process.spawnSync API of Node.js.
The app was packaged using electron-packager (v15.2.0) and its structure is as shown in the screenshot below. Source code to reproduce this issue can be obtained from this GitHub folder: https://github.com/wasimaftab/Utils/tree/master/test_js_r_interaction
The index.js file contains the code to interact with R. Important note: you need to install the rjson R package before attempting to run the Electron app, as it is used on the R side to extract the arguments from the JSON object.
In Ubuntu (18.04) the output is as expected, see the screenshot below.
The same code fails on Mac (Catalina 10.15.7) after packaging, but works perfectly in development mode, see the screenshot below.
The actual error is as follows:
Error: spawnSync Rscript ENOENT
at Object.spawnSync (internal/child_process.js:1041:20)
at Object.spawnSync (child_process.js:625:24)
at callSync (file:///Users/admin/Desktop/test_js_r_interaction/release-builds-mac/test_js_r_interaction-darwin-x64/test_js_r_interaction.app/Contents/Resources/app.asar/src/index.js:25:23)
at HTMLButtonElement.<anonymous> (file:///Users/admin/Desktop/test_js_r_interaction/release-builds-mac/test_js_r_interaction-darwin-x64/test_js_r_interaction.app/Contents/Resources/app.asar/src/index.js:84:20)
The JS code to interact with R is as follows:
const path = require("path");
const child_process = require('child_process');

const RSCRIPT = 'Rscript';
const defaultOptions = {
  verboseResult: false
};

function parseStdout(output) {
  try {
    output = output.substr(output.indexOf('"{'), output.lastIndexOf('}"'));
    return JSON.parse(JSON.parse(output));
  } catch (err) {
    return err;
  }
}

function callSync(script, args, options) {
  options = options || defaultOptions;
  const result = args ?
    child_process.spawnSync(RSCRIPT, [script, JSON.stringify(args)]) :
    child_process.spawnSync(RSCRIPT, [script]);
  if (result.status == 0) {
    const ret = parseStdout(result.stdout.toString());
    if (!(ret instanceof Error)) {
      if (options.verboseResult) {
        return {
          pid: result.pid,
          result: ret
        };
      } else {
        return ret;
      }
    } else {
      return {
        pid: result.pid,
        error: ret.message
      };
    }
  } else if (result.status == 1) {
    return {
      pid: result.pid,
      error: result.stderr.toString()
    };
  } else {
    return {
      pid: result.pid,
      error: result.stderr.toString()
      //error: result.stdout.toString()
    };
  }
}
I would appreciate any suggestions to fix this issue; thanks in advance.
The error you're getting, ENOENT, suggests that your OS cannot find Rscript. This can result from the following scenarios:
Rscript is not even installed.
Rscript is installed, but not executable without a "shell". To check whether it is installed, open a Terminal and execute the command as your Electron application would.
If Rscript can be executed from within a Terminal, there could be something wrong with how the installation has set your paths up. There shouldn't be, really, but it might be necessary to execute spawnSync with additional options, such as { shell: true } to get the correct value of PATH.
If Rscript cannot be executed from within a Terminal, forcing Electron to spawn a shell via the above options (which really is what a Terminal does) will not solve this problem. In this case, try to use the complete path to Rscript as the command instead, if you happen to know where it should be installed.
If neither of those solutions help, try reinstalling Rscript altogether and try again. As I can see nothing which would be wrong with your code, I believe it is a problem of installation.
For more information on child_process.spawnSync(command, args, options), see its documentation.
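As a sketch of those two workarounds (the script name and the Rscript location are assumptions; check yours with `which Rscript` in a Terminal):

const child_process = require('child_process');

const script = 'compute.R'; // hypothetical script name

// Workaround 1: spawn via a shell so PATH is resolved the way a Terminal would
const viaShell = child_process.spawnSync('Rscript', [script], { shell: true });

// Workaround 2: skip the PATH lookup entirely and use an absolute path
// (assumed location; verify on your machine)
const viaAbsolutePath = child_process.spawnSync('/usr/local/bin/Rscript', [script]);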

Get console output of current script

I want to get the console contents of the current running Node.js script.
I've tried listening for an event like this, but it doesn't work:
setInterval(function() { console.log("Hello World!") }, 1000);

process.stdout.on('message', (message) => {
  console.log('stdout: ' + message.toString())
})
It doesn't listen to the event.
This is not a pure Node.js solution, but it works very well if you run Linux.
Create a start.sh file.
Put the following into it:
start.sh:
#!/bin/bash
touch ./console.txt
node ./MyScript.js |& tee console.txt &
wait
Now open your Node.js script (MyScript.js) and add this Express.js route:
MyScript.js:
const fs = require('fs');

// Assumes the Express `app` is already created in MyScript.js
app.get('/console', function(req, res) {
  var console2 = fs.readFileSync("./console.txt", 'utf8');
  res.send(console2);
});
Always start your Node.js application by calling start.sh.
Now requesting http://example.com/console should output the console!
A part of this answer was used.
NOTE: To format the line breaks of the console output so they display correctly in the browser, you can use a module like nl2br.
A word of advice: problems aren't always solved the direct way; most problems are solved indirectly. Keep searching for all the possible ways to achieve what you want, not just for the exact thing you're looking for.
There's no 'message' event on process.stdout.
I want to make a GET in my Express.js app called /getconsole .. it
should return the console of the current running Node.js script (which
is running the Express.js app too)
What you should use is a custom logger, I recommend winston with a file transport, and then you can read from that file when you issue a request to your endpoint.
const express = require('express');
const fs = require('fs');
const winston = require('winston');
const path = require('path');

const logFile = path.join(__dirname, 'out.log');
const app = express();

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(),
  transports: [
    new winston.transports.Console({
      format: winston.format.simple()
    }),
    new winston.transports.File({
      filename: logFile
    })
  ]
});

// Don't use console.log anymore.
logger.info('Hi');

app.get('/console', (req, res) => {
  // Secure this endpoint somehow
  fs.createReadStream(logFile)
    .pipe(res);
});

app.get('/log', (req, res) => {
  logger.info('Log: ' + req.query.message);
  res.sendStatus(204); // respond so the request doesn't hang
});

app.listen(3000);
You can also use a websocket connection, and create a custom winston transport to emit the logs.
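A rough sketch of such a transport (winston-transport is the base class winston v3 provides for this; the socket.io server instance named io is an assumption):

const Transport = require('winston-transport');

class SocketTransport extends Transport {
  constructor(opts) {
    super(opts);
    this.io = opts.io; // assumed: a socket.io server instance
  }
  log(info, callback) {
    setImmediate(() => this.emit('logged', info));
    this.io.emit('log', info); // push each log entry to connected clients
    callback();
  }
}

// logger.add(new SocketTransport({ io }));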
stdout, when going to a TTY (terminal), is an instance of a writable stream. The same is true of stderr, to which Node.js writes error messages. Those streams don't have 'message' events. The on() method allows subscribing to any named event, even ones that will never fire.
Your requirement is not clear from your question. If you want to intercept and inspect console.log operations, you can pipe stdout to some other stream. Similarly, you can pipe stderr to some other stream to intercept and inspect errors.
Or, in a burst of ugliness and poor maintainability, you can redefine the console.log and console.error functions to something that does what you need.
It sounds like you want to buffer up the material written to the console, and then return it to an HTTP client in response to a GET operation. To do that, you would either:
stop using console.log for that output, and switch over to a high-quality logging npm package like winston.
redefine console.log (and possibly console.error) to save its output in some kind of simple express-app-scope data structure, perhaps an array of strings. Then implement your GET to read that array of strings, format it, and return it.
My first suggestion is more scalable.
By the way, please consider the security implications of making your console log available to malicious strangers.
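For completeness, a minimal sketch of the second option (redefining console.log to keep an in-memory copy); the /getconsole path comes from the question:

const express = require('express');
const app = express();

const logBuffer = [];
const origLog = console.log;
console.log = (...args) => {
  logBuffer.push(args.map(String).join(' ')); // keep a copy in memory
  origLog.apply(console, args);               // still write to stdout
};

app.get('/getconsole', (req, res) => {
  res.type('text/plain').send(logBuffer.join('\n'));
});

app.listen(3000);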

pre-load / pre-require directories of .js route files

Using Express with Node.js, we might do something like this:
app.use('/api/:controller/:action/:id', function(req, res, next) {
  var controller = req.params.controller;
  var action = req.params.action;
  var route = require('./routes/' + controller + '/' + action);
  route(req, res, next);
});
Now this is all fine and well, except for at least one problem: the route file is dynamically loaded at runtime if it has not been require'd yet, which means it's at least a little bit slower.
Does someone have a script that recurses through a directory and pre-loads/pre-requires all the .js files when a server first starts up?
I have a similar problem for the front-end as well, using RequireJS. The solution seems to be to write a bash script that writes out all the .js filepaths in a directory and its subdirectories to a text file. Then, when the server starts up, it reads that text file and requires all the files listed in it. Is that the best way to do it?
If you can use io.js, it can preload modules using command-line -r or --require:
iojs -r <module_name> server.js
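If you can't switch runtimes, a plain Node.js sketch that walks a directory tree at startup and primes the require cache looks like this (the ./routes folder is the one from the question):

const fs = require('fs');
const path = require('path');

// Recursively require every .js file under dir so it is cached before requests arrive
function preloadDir(dir) {
  for (const name of fs.readdirSync(dir)) {
    const full = path.join(dir, name);
    if (fs.statSync(full).isDirectory()) {
      preloadDir(full);
    } else if (full.endsWith('.js')) {
      require(full);
    }
  }
}

preloadDir(path.join(__dirname, 'routes'));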
I created an NPM module that does this for the front-end; doing it for Node.js / CommonJS is another story.
https://www.npmjs.com/package/requirejs-metagen
You can use it like so:
var grm = require('requirejs-metagen'); // you can use it with Gulp

var controllersOpts = {
  inputFolder: './public/static/app/js/controllers/all',
  appendThisToDependencies: 'app/js/controllers/',
  appendThisToReturnedItems: '',
  eliminateSharedFolder: true,
  output: './public/static/app/js/meta/allControllers.js'
};

grm(controllersOpts, function(err) {
  // handle errors your own way
});
It generates a corresponding AMD/RequireJS module like so:
define(
  [
    "app/js/controllers/all/jobs",
    "app/js/controllers/all/users"
  ],
  function() {
    return {
      "jobs": arguments[0],
      "users": arguments[1]
    };
  });
You can also require subdirectories and so on, like so:
var allViewsOpts = {
  inputFolder: './public/static/app/js/jsx',
  appendThisToDependencies: 'app/js/',
  appendThisToReturnedItems: '',
  eliminateSharedFolder: true,
  output: './public/static/app/js/meta/allViews.js'
};

grm(allViewsOpts);
which generates output like so:
define([
  "app/js/jsx/BaseView",
  "app/js/jsx/reactComponents/FluxCart",
  "app/js/jsx/reactComponents/FluxCartApp",
  "app/js/jsx/reactComponents/FluxProduct",
  "app/js/jsx/reactComponents/Item",
  "app/js/jsx/reactComponents/Job",
  "app/js/jsx/reactComponents/JobsList",
  "app/js/jsx/reactComponents/listView",
  "app/js/jsx/reactComponents/Picture",
  "app/js/jsx/reactComponents/PictureList",
  "app/js/jsx/reactComponents/RealTimeSearchView",
  "app/js/jsx/reactComponents/Service",
  "app/js/jsx/reactComponents/ServiceChooser",
  "app/js/jsx/reactComponents/todoList",
  "app/js/jsx/relViews/getAll/getAll",
  "app/js/jsx/relViews/jobs/jobsView",
  "app/js/jsx/standardViews/dashboardView",
  "app/js/jsx/standardViews/overviewView",
  "app/js/jsx/standardViews/pictureView",
  "app/js/jsx/standardViews/portalView",
  "app/js/jsx/standardViews/registeredUsersView",
  "app/js/jsx/standardViews/userProfileView"
],
function() {
  return {
    "BaseView": arguments[0],
    "reactComponents/FluxCart": arguments[1],
    "reactComponents/FluxCartApp": arguments[2],
    "reactComponents/FluxProduct": arguments[3],
    "reactComponents/Item": arguments[4],
    "reactComponents/Job": arguments[5],
    "reactComponents/JobsList": arguments[6],
    "reactComponents/listView": arguments[7],
    "reactComponents/Picture": arguments[8],
    "reactComponents/PictureList": arguments[9],
    "reactComponents/RealTimeSearchView": arguments[10],
    "reactComponents/Service": arguments[11],
    "reactComponents/ServiceChooser": arguments[12],
    "reactComponents/todoList": arguments[13],
    "relViews/getAll/getAll": arguments[14],
    "relViews/jobs/jobsView": arguments[15],
    "standardViews/dashboardView": arguments[16],
    "standardViews/overviewView": arguments[17],
    "standardViews/pictureView": arguments[18],
    "standardViews/portalView": arguments[19],
    "standardViews/registeredUsersView": arguments[20],
    "standardViews/userProfileView": arguments[21]
  };
});
I need to update the library so it returns the stream, so you can tell when it completes; otherwise it works great.

How to use jasmine with gulp.watch

I'm trying to make my tests run each time I save certain files. Here is the gulp watch setup:
gulp.task('jasmine', function() {
  gulp.src('spec/nodejs/*Spec.js')
    .pipe(jasmine({verbose: true, includeStackTrace: true}));
});

gulp.task('watch', function() {
  gulp.watch(['app/*.js', 'app/!(embed)**/*.js', 'spec/nodejs/*.js'], ['jasmine']);
});
To test, for example, app/maps.js, I'm creating a spec/nodejs/mapsSpec.js file like this:
'use strict';
var maps = require('../../app/maps');

describe('/maps related routes', function() {
  it('should ...', function() {...});
  ...
If I change a spec file, everything works well; if I modify the app/maps.js file, the change triggers the tests. But if I modify it again, the tests are triggered, yet the modifications don't take effect. For example, if I add a console.log('foo') the second time, I will not see it until I relaunch gulp watch and save the file again. So only one run of jasmine works when using it with gulp.watch.
I guess it's because require is cached by Node.js in the gulp process. So what should I do?
I took a look at the code of gulp-jasmine. The problem is that the only file removed from the cache is the *Spec.js file itself. The cache entries for its children (the required files under test) aren't cleared.
Within the index.js of gulp-jasmine there is a line which deletes the cache:
delete require.cache[require.resolve(path.resolve(file.path))];
If you put the next block of code before that delete, you will clear all the children from the cache as well, and the tests will run correctly every time you save your file.
var files = require.cache[require.resolve(path.resolve(file.path))];
if (typeof files !== 'undefined') {
  for (var i in files.children) {
    delete require.cache[files.children[i].id];
  }
}
You can change this directly in node_modules.
I will open a pull request, so maybe in the near future this will be solved permanently.
I also wrote a post about it: http://navelpluisje.nl/entry/fix-cache-problem-jasmine-tests-with-gulp
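A blunter alternative, if you'd rather not patch gulp-jasmine at all, is to drop the entire require cache before each run (a sketch; it trades precision for simplicity):

// Run this before piping specs into jasmine so every module is re-required fresh
Object.keys(require.cache).forEach(function(key) {
  delete require.cache[key];
});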
I haven't found a fix for this issue, but you can work around it via the gulp-shell task.
npm install gulp-shell --save-dev
then
var shell = require('gulp-shell');
...

gulp.task('jasmine', function() {
  gulp.src('spec/nodejs/*Spec.js')
    .pipe(shell('minijasminenode spec/*Spec.js'));
});
You'll also need jasmine installed as a direct dependency (gulp-jasmine uses minijasminenode).
