Node.js - write CSV file creates empty file in production, while OK in Mocha testing - node.js

This gist shows a code snippet that dumps an object into a CSV file.
File writing is done with the csv-write-stream module and returns a promise.
This code works flawlessly in all the Mocha tests I have written.
However, when the code is invoked by the main Node.js app (a server-side REPL application that interacts with the user through raw process.stdin and console.log calls), the CSV file is created, but it is always empty and no error or warning seems to be thrown.
I have debugged the REPL code extensively with node-debug and the Chrome dev tools, and I am sure the event handlers of the WriteStream are working properly: no 'error' is emitted, all 'data' seems to be handled, 'close' fires, and the promise resolves as expected.
Nevertheless, in the latter case the file is always 0 bytes. I have checked several Q&As and cannot find anything as specific as this.
My questions are:
Could I be missing some errors? How can I be sure to track all communication about the file write?
In which direction could I intensify my investigation? What other setup could help me isolate the problem?
Since the problem may be due to the presence of process.stdin in the equation, what is a way to create a simple, lightweight interaction with the user without having to write a webapp?
I am working on Windows 7. Node 6.9.4, npm 3.5.3, csv-write-stream 2.0.0.

I managed to fix this issue in two ways, either by:
resolving the promise on the 'finish' event of the FileWriteStream rather than on the 'end' event of the CSVWriteStream
removing the process.exit() call I was using at the end of my operations with process.stdin (which implies that this tutorial page may need some corrections)

Related

Node fluent-ffmpeg killing process kills server - How to start and stop recorder?

I'm using fluent-ffmpeg in a Node application to record from the screen/camera to an mp4 file. I would like one server request to start recording and another to stop it (linked to a web interface; I'm testing some tech with a view to making an Electron app with it later).
Starting is fine, but I cannot figure out how to stop it.
This is the code to start (running on macOS):
recordingProcessVideo = ffmpeg(`${screenID}:none`)
  .inputFormat('avfoundation')
  .native()
  .videoFilters(`crop=${width}:${height}:${x}:${y}`)
  .save(filePath);
This is what I thought would stop it, based on the documentation and reading around the subject:
recordingProcessVideo.kill('SIGINT');
However, when I call this command, the server quits with the following ambiguous message:
code ELIFECYCLE
errno 1
Also, the video file produced will not open, as if recording quit before it completed. From the docs and what people have written, starting and stopping the recorder should just be a matter of creating the process and then killing it when ready. Does anyone know the correct way? I've been looking for ages but can't find any answers.
Using Node v10.15.2 and FFmpeg version V92718-g092cb17983, running on macOS 10.14.3.
Thanks for any help.
I solved the issue by tracing out all the messages FFmpeg writes to the terminal. For some unknown reason, my installation of FFmpeg throws an error when finalizing the video and does not correctly close the file. This happens in the terminal as well, though the error doesn't really display, and it produces an MP4 that actually works in all video players, even the browser, with the exception of QuickTime, which is what I was using on this occasion.

To prevent the error from crashing my Node application, I just needed to add an error handler to the video call. I was in fact adding a handler in my original code, but I was adding it to the process and NOT to the original call to FFmpeg. The code that works looks like this (I catch all of the end events and log them in this example):
recordingProcessVideo = ffmpeg(`${screenID}:none`)
  .inputFormat('avfoundation')
  .videoFilters(`crop=${width}:${height}:${x}:${y}`)
  .native()
  .on('error', error => console.log(`Encoding Error: ${error.message}`))
  .on('exit', () => console.log('Video recorder exited'))
  .on('close', () => console.log('Video recorder closed'))
  .on('end', () => console.log('Video Transcoding succeeded !'))
  .save(file.video);
I have two versions of FFmpeg on my laptop and both fail: the official downloaded release installed on my computer (V4.1.1) and the Node-packaged version my app is using (@ffmpeg-installer/ffmpeg), which will make distribution via Electron easier, since it avoids the dependency of installing FFmpeg on the local machine running the app. So the reason the video export fails is some mystery to do with my laptop, which I still have to figure out; but importantly, my code works now and is resilient to this failure.
Maybe it will help someone in the future.
To complete the ffmpeg conversion process, you need to launch the conversion as follows:
recordingProcessVideo = ffmpeg(`${screenID}:none`)
  .inputFormat('avfoundation')
  .native()
  .videoFilters(`crop=${width}:${height}:${x}:${y}`)
  .save(filePath);
recordingProcessVideo.run();
And then you can stop your ffmpeg conversion command:
recordingProcessVideo.kill();
The key point is the .run() launch method: call it only once the command template has already been assigned to the variable recordingProcessVideo. After launching with recordingProcessVideo.run(), you will be able to stop it with recordingProcessVideo.kill().
The bottom line is that ffmpeg() only assigns the command template to your variable recordingProcessVideo. If you call .run() immediately while creating the template, for example:
ffmpeg(`${screenID}:none`)
  .inputFormat('avfoundation')
  .save(filePath)
  .run();
Then the variable recordingProcessVideo will be empty.
This is my first answer on this site, so please don't judge my mistakes too harshly :)

How to get Flow type checker to detect changes in my files?

So Flow only works correctly the first time I run it, and then I have to restart my computer before it will work correctly again.
Specifically: we are using Flow to add type annotations to our JS code, and our linter script runs Flow type checking among other things. However, when I fix an issue in my code and rerun the linter script, it still comes back with the exact same errors... BUT when it shows the piece of code where the error is supposed to be, it actually shows my updated, fixed code.
As an example, I copied a file into the project that I didn't think I really needed, but might. It produced a bunch of linter errors, so I decided to just delete the file since I didn't really need it. Then I ran "yarn lint --fix" again, but it still complained about that file, EVEN THOUGH THE FILE DOESN'T EXIST! Interestingly, where the linter output is supposed to show the code for those errors, it's now just blank.
Or another example, let's say I had a couple of functions in my code:
100: function foo() {}
...
150: function bar() {}
And foo has a lot of errors because it was some throwaway code I don't need anymore, so I just delete it. The new code looks like:
100: function bar() {}
Well I rerun the linter and get an error like:
Error ------------------------ function foo has incorrect
something...blah blah
src/.../file.js
100| function bar() {}
I also tested this on a coworker's machine and they got the same behavior, so it's not specific to my machine, although it could be specific to our project.
Note: there doesn't appear to be a tag for Flow, but I couldn't post without including at least one tag, so I used flowlang even though that's actually a different language :-( I'm assuming anyone looking for Flow would also use that tag, since it's the closest.
The first time you launch Flow, it starts a background process that is used for subsequent type checking. Unfortunately, this background process is extremely slow, and buggy to boot. On Linux you can run:
killall flow
to stop the background process. If you then rerun the Flow type checker, it will actually see all your latest changes.

Log statements in Node Module not getting printed

I am new to Node.js. I have included a module via npm install <module_name>. It built correctly and there were no errors. Now I want to debug it, so I placed a few console.log('some text') statements in code blocks of the module to see whether execution passes through those lines. However, none of the log statements were displayed.
I am wondering if I have to compile the module or something after adding the log statements. Am I missing something here?
Your console.log statements are not being run; this could be caused by many things.
Assuming you have added the console.log statements to the module code in the node_modules directory of your app:
does the module have src and dist directories, and have you edited code that is not actually the code being run? (this relates to needing to recompile, but editing the code the module actually runs will be quicker and easier)
if this is a server or long-running script, it will need to be restarted to load the changes
is this in a browser that might be caching the code? (turn off the browser cache)
is the code where you added the log statements actually being hit?
I would make sure I had a console.log statement in a part of the code guaranteed to be hit, just as a sanity check.
For anyone coming here in the future, try console.error instead of console.log. For some reason or another, console.log was being overridden by the library I was monkey-patching. It took me way too long to find the culprit.
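The failure mode is easy to reproduce (the no-op override here is a stand-in for whatever the library did): console.log is just an assignable property, so a library can silently replace it, while console.error, which writes to stderr, keeps working.

```javascript
// A misbehaving library can silently replace console.log, swallowing output,
// while console.error (bound to stderr) is unaffected.
const originalLog = console.log;

console.log = function () {};            // what the misbehaving library did
console.log('you will never see this');  // swallowed by the override
console.error('but this gets through');  // stderr is untouched

console.log = originalLog;               // restore for the rest of the app
console.log('back to normal');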

nodejs - jade ReferenceError: process is not defined

The project was generated by WebStorm's Express template.
The npm dependencies have been installed.
The resulting page is OK when I run the application, but the console always says:
'ReferenceError: process is not defined'
Why does this happen? I am on Win7 64-bit.
I finally found where the issue originates. It's not Jade or Express; it's Uglify-JS, which is a dependency of transformers, which is a dependency of Jade, which is often a dependency of Express.
I have experienced this issue in WebStorm IDE by JetBrains on Windows 7 and Windows 8 (both 64-bit).
I already went ahead and fixed the issue in this pull request.
All I needed to do was include process in the node vm context object. After doing that I received a new error:
[ReferenceError: Buffer is not defined]
All I had to do was include Buffer in the vm context object as well and I no longer get those silly messages.
I still don't fully understand why it only happens during debugging but in my extremely limited experience I've come to find that the node vm module is a fickle thing, or at least the way some people use it is.
Edit: It's a bug in the node vm module itself. I figured out how to reproduce it and I figured out why it only happens during debugging.
This bug only happens if you include the third (file) argument to vm.runInContext(code, context, file);. All the documentation says about this argument is that it is optional and is only used in stack traces. Right off the bat you can now see why it only happens during debugging. However, when you pass in this argument some funny behavior begins to occur.
To reproduce the error (note that the file argument must be passed in or this error never occurs at all):
The file argument must end with ".js" and must contain at least one forward slash or double backslash. Since this argument is expected to be a file path it makes sense that the presence of these might trigger some other functionality.
The code you pass in (first argument) must not begin with a function. If it begins with a function then the error does not occur. So far it seems that beginning the code with anything but a function will generate the reference error. Don't ask me why this argument has any effect on whether or not the error shows up because I have no idea.
You can fix the error by including process in the context object that you pass to vm.createContext(contextObject);.
var context = vm.createContext({
  console: console,
  process: process
});
If your file path argument is well-formed (matches the requirements in #1) then including process in the context will get rid of the error message; that is, unless your file path does not point to an actual file in which case you see the following:
{ [Error: ENOENT, no such file or directory 'c:\suatils.js']
errno: 34,
code: 'ENOENT',
path: 'c:\\test.js',
syscall: 'open' }
Pointing it to an actual file will get rid of this error.
I'm going to fork the node repository and see if I can improve this function and the way it behaves, then maybe I'll submit a pull request. At the very least I will open a ticket for the node team.
Edit 2: I've determined that it is an issue with WebStorm specifically. When WebStorm starts the node process we get this issue. If you debug from the command line there is no problem.
Video: http://youtu.be/WkL9a-TVHNY?hd=1
Try this...
Set the WebStorm debugger to break on all unhandled exceptions, then run the app in debug mode. I think you will find the ReferenceError is being thrown from the referenced fs.js. More specifically, fs.js line 684:
fs.statSync = function(path) {
  nullCheck(path);
  return binding.stat(pathModule._makeLong(path));
};
These were my findings using the same dev environment as you (Win 64, WebStorm, Node, etc.).
From there you can use WebStorm's Evaluate Expression to re-run that line of code and see exactly why it fails.
I had the same error come up, and it was because I had the following at the top of my file:
const argv = require("minimist")(process.argv.slice(2));
const process = require("child_process");
The const process declaration on the second line shadows the global process throughout the scope, so on the first line it hadn't been initialized yet.
Changing the second line to use a different variable name resolved the issue.
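The shadowing can be reproduced in isolation (a minimal sketch; the require on the second line is only there to mirror the original code):

```javascript
// A later `const` declaration shadows the global binding for the whole block,
// and the temporal dead zone makes the earlier reference throw.
try {
  const argv = process.argv.slice(2); // throws: cannot access 'process' before initialization
  const process = require('child_process');
} catch (e) {
  console.log(e.name); // ReferenceError
}
```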

node.js -- execute command synchronously and get result

I'm trying to execute a child_process synchronously in node.js (yes, I know this is bad; I have a good reason) and retrieve any output on stdout, but I can't quite figure out how...
I found this SO post: node.js execute system command synchronously, which describes how to use a library (node-ffi) to execute the command. This works great, but the only thing I'm able to get back is the process exit code. Any data the command outputs is sent directly to stdout -- how do I capture it?
> run('whoami')
username
0
In other words, username is echoed to stdout, and the result of run is 0.
I'd much rather figure out how to read stdout.
So I have a working solution, but I don't exactly like it... just posting here for reference:
I'm using the node-ffi library referenced in the other SO post. I have a function that:
takes in a given command
appends >> run-sync-output
executes it
reads run-sync-output synchronously and stores the result
deletes this tmp file
returns result
There's an obvious issue where if the user doesn't have write access to the current directory, it will fail. Plus, it's just wasted effort. :-/
I have built a node.js module that solves this exact problem. Check it out :)
exec-plan
Update
The above module solves your original problem, because it allows for the synchronous chaining of child processes. Each link in the chain gets the stdout from the previous process in the chain.
I had a similar problem and I ended up writing a node extension for this. You can check out the git repository. It's open source and free and all that good stuff !
https://github.com/aponxi/npm-execxi
ExecXI is a node extension written in C++ to execute shell commands
one by one, outputting the command's output to the console in
real-time. Optional chained, and unchained ways are present; meaning
that you can choose to stop the script after a command fails
(chained), or you can continue as if nothing has happened !
Usage instructions are in the ReadMe file. Feel free to make pull requests or submit issues!
However, it doesn't return the stdout yet... Well, I just released it today. Maybe we can build on it.
Anyway, I thought it was worth mentioning. I also posted this to a similar question: node.js execute system command synchronously.
Since Node version v0.11.12, there is a child_process.execSync function for this.
Other than writing code a little differently, there's actually no reason to do anything synchronously.
What don't you like about this? (docs)
var exec = require('child_process').exec;

exec('whoami', function (error, username) {
  console.log('stdout: %s', username);
  continueWithYourCode();
});
