fs.watch unexpected behavior - node.js

If I run the program below as node watcher.js file.txt, it works as expected when I touch file.txt. But if I open file.txt in vim and save, it ceases to detect future modifications to the file. This seems really weird to me; why does this behavior occur?
var fs = require('fs');
var args = process.argv;

if (args.length <= 2) {
  console.log('USAGE: ' + args[1] + ' filename');
  process.exit(1);
}

var filename = args[2];
fs.watch(filename, function(event, filename) {
  console.log('file ' + filename + ' changed!');
});

It is important to inspect the first argument, not just the filename: event can be either 'change' or 'rename'.
In this case vim is not modifying the file in place: on save it renames the old file aside and writes a new one, so your watcher is left attached to the old file and never sees changes to the replacement.

How to find file extension change with node.js?

How do you detect a file extension change and save the new extension or full file path to a variable?
Code I have so far (don't mind if you don't use it):
const puppeteer = require('C:/Users/user1/Desktop/puppeteer_automation/node_modules/puppeteer');

(async () => {
  var filename = "C:/Users/user1/Downloads/file.crdownload";
  var downloadanduploadpath = "C:/Users/user1/Downloads";
  var fs = require('fs');
  var event1 = "change";
  var currentstat = fs.stat();
  const WATCH_TARGET = filename;
  fs.watch(WATCH_TARGET, function(event1, downloadanduploadpath) {
    console.log('File "' + filename + '" was changed: ' + eventType);
  });
  /*
  fs.watch(downloadanduploadpath, (event1, filename) => {
    console.log('event is: ' + event1);
    if (filename) {
      console.log('filename provided: ' + filename);
    } else {
      console.log('filename not provided');
    }
  });*/
  //console.log(lastdownloadedimage);
})();
Error:
(node:9876) UnhandledPromiseRejectionWarning: TypeError [ERR_INVALID_CALLBACK]: Callback must be a function. Received undefined
As described in the docs,
The listener callback gets two arguments (eventType, filename). eventType is either 'rename' or 'change', and filename is the name of the file which triggered the event.
You can use the eventType argument to determine whether the filename has been changed.
UPDATE: I updated my example code to observe the directory instead of the file itself, in order to get the new filename in the filename argument.
Example:
let fs = require('fs');

(async () => {
  const WATCH_DIR = "C:/Users/user1/Downloads";
  let target_file = "file.crdownload";
  let renameTriggered = false;

  fs.watch(WATCH_DIR, function(eventType, filename) {
    if (eventType == 'rename') {
      // Check if the target filename was changed (in the first event
      // the old filename disappears, which marks the beginning of a renaming process)
      if (filename == target_file) {
        // Toggle renaming status
        renameTriggered = true;
      }
      // The second event captures the new filename, which completes the process
      else if (renameTriggered) {
        // Toggle renaming status
        renameTriggered = false;
        console.log('File "' + target_file + '" was renamed to: "' + filename + '"');
        // Update target filename
        target_file = filename;
      }
    }
  });
})();
Observing the directory, however, will trigger the callback on any filename change that occurs (including deleting and creating files), which is why we check that eventType equals 'rename'.
From the perspective of fs.watch() the renaming takes place in two steps: in the first step the 'disappearing' of the old filename is detected, which triggers the event with the old filename as the filename argument. In the second step the 'appearance' of the new filename is detected, which triggers the rename event again, this time with the new filename as the second argument; that is what we are looking for.
Your code example is calling fs.watch() wrong: you don't pass arguments to the callback yourself, you just name its parameters.
The program below, all by itself, works on Windows. I pass it a directory and watch for changes to any of the files in that directory. You can also pass it the filename of an existing file and watch for changes to just that file:
const fs = require('fs');

fs.watch("./watcherDir", function(eventType, filename) {
  console.log(`eventType=${eventType}, filename=${filename}`);
});
eventType and filename are passed by fs.watch() to the callback; they are not something you pass. You just choose the parameter names that you then use inside the callback.
When I rename a file in that directory from example.json to example.xxx, I get these two callbacks:
eventType=rename, filename=example.json
eventType=rename, filename=example.xxx
The filenames are, of course, relative to the directory I passed to fs.watch().
Note: As documented, this feature does not work the same on all platforms so for any further debugging we would need to know what platform you're running it on.

nodejs: each line in separate file

I want to split a file: each line into a separate file. The initial file is really big. I ended up with the code below:
var fileCounter = -1;

function getWritable() {
  fileCounter++;
  var writable = fs.createWriteStream('data/part' + fileCounter + '.txt', { flags: 'w' });
  return writable;
}

var readable = fs.createReadStream(file).pipe(split());
readable.on('data', function (line) {
  var flag = getWritable().write(line, function () {
    readable.resume();
  });
  if (!flag) {
    readable.pause();
  }
});
It works, but it is ugly. Is there a more nodish way to do that? Maybe with piping and without pause/resume.
NB: this is not really a question about lines/files/etc. The question is about streams; I'm just illustrating it with this problem.
You can use Node's built-in readline module.
var fs = require('fs');
var readline = require('readline');

var fileCounter = -1;
var file = "foo.txt";

readline.createInterface({
  input: fs.createReadStream(file),
  terminal: false
}).on('line', function (line) {
  fileCounter++; // increment first, so the first file is part0.txt rather than part-1.txt
  var writable = fs.createWriteStream('data/part' + fileCounter + '.txt', { flags: 'w' });
  writable.end(line); // end() flushes and closes the per-line file
});
Note that this will lose the last line of the file if there is no newline at the end, so make sure your last line of data is followed by a newline.
Also note that the docs indicate that it is Stability index 2, meaning:
Stability: 2 - Unstable The API is in the process of settling, but has
not yet had sufficient real-world testing to be considered stable.
Backwards-compatibility will be maintained if reasonable.
How about the following? Did you try? Pause and resume logic isn't really needed here.
var split = require('split');
var fs = require('fs');

var fileCounter = -1;
var readable = fs.createReadStream(file).pipe(split());
readable.on('data', function (line) {
  fileCounter++;
  var writable = fs.createWriteStream('data/part' + fileCounter + '.txt', { flags: 'w' });
  writable.end(line); // end() rather than write()+close(), so the data is flushed
});
Piping dynamically would be hard...
EDIT: You could create a writable (hence pipe()able) object that would, on each 'data' event, do the "create file, open it, write the data, close it" dance, but it:
wouldn't be reusable
wouldn't follow the KISS principle
would require special and specific logic for file naming (it would accept a string pattern with a placeholder for the number as a constructor argument, etc.)
I really don't recommend that path, or you're going to spend ages implementing a not-really-reusable module. Though it would make a good Writable implementation exercise.

Is 7zip stdout broken? Is there a way to capture the progress in nodejs? [Windows]

I am trying to get the stdout of 7zip when it processes files and read the percentage in nodeJs, but it doesn't behave as expected: 7zip doesn't output anything to stdout until the very end of the execution. That is not very helpful, especially when I have large files being compressed and no feedback is shown for a very long time.
The code I am using (simplified):
// 7zip test; place 7z.exe in the same dir if it's not on %PATH%
var cp = require('child_process');

var inputFile = process.argv[2];
if (inputFile == null) return;

var regProgress = /(\d{1,3})%\s*$/; // get the last percentage of the string, 3 digits
var proc = cp.spawn("7z.exe", ["a", "-t7z", "-y", inputFile + ".7z", inputFile]);

proc.stdout.setEncoding("utf8");
proc.stdout.on("data", function(data) {
  if (regProgress.test(data))
    console.log("Progress = " + regProgress.exec(data)[1] + "%");
});
proc.once("exit", function(exit, sig) { console.log("Complete"); });
I have used the same code to get the percentage with WinRar successfully, and I am beginning to think that 7zip might be buggy. Or am I doing it wrong? Can I forcefully read the stdout of a process with a timer, perhaps?
The same code above, with the exception of the following line replaced, works as expected with WinRar.
var proc = cp.spawn("Rar.exe",["a","-s","-ma5","-o+",inputFile+".rar",inputFile]);
If anyone knows why this happens and if it is fixable, I would be grateful! :-)
p.s. I have tried 7za.exe, the command-line version of 7zip, in the stable, beta and alpha versions; they all have the same issue.
It is no longer necessary to use a terminal emulator like pty.js: you can pass the -bsp1 switch to 7z to force it to output its progress to stdout.
7-zip only outputs progress when stdout is a terminal.
To trick 7-zip, you need to npm install pty.js (requires Visual Studio or VS Express with Windows SDK) and then use code like:
var pty = require('pty.js');

var inputFile = process.argv[2],
    pathTo7zip = 'c:\\Program Files\\7-Zip\\7z.exe';
if (inputFile == null)
  return;

var term = pty.spawn(process.env.ComSpec, [], {
  name: 'ansi',
  cols: 200,
  rows: 30,
  cwd: process.env.HOME,
  env: process.env
});

var rePrg = /(\d{1,3})%\r\n?/g,
    reEsc = /\u001b\[\w{2}/g,
    reCwd = new RegExp('^' + process.cwd().replace(/\\/g, '\\\\'), 'm'),
    prompts = 0,
    buffer = '';

term.on('data', function(data) {
  var m, idx;
  buffer += data;
  // remove terminal escape sequences
  buffer = buffer.replace(reEsc, '');
  // check for multiple progress indicators in the current buffer
  while (m = rePrg.exec(buffer)) {
    idx = m.index + m[0].length;
    console.log(m[1] + ' percent done!');
  }
  // check for the cmd.exe prompt
  if (m = reCwd.exec(buffer)) {
    if (++prompts === 2) {
      // command is done
      return term.kill();
    } else {
      // first prompt is before we started the actual 7-zip process
      if (idx === undefined) {
        // we didn't see a progress indicator, so make sure to truncate the
        // prompt from our buffer so that we don't accidentally detect the same
        // prompt twice
        buffer = buffer.substring(m.index + m[0].length);
        return;
      }
    }
  }
  // truncate the part of our buffer that we're done processing
  if (idx !== undefined)
    buffer = buffer.substring(idx);
});

term.write('"' + pathTo7zip + '" a -t7z -y "' + inputFile + '.7z" "' + inputFile + '"\r');
It should be noted that 7-zip does not always output 100% at finish. If the file compresses quickly, you may just see only a single 57% for example, so you will have to handle that however you want.

fs.unlink doesn't delete the file

On my Express server I want to take the file uploaded by the user and rename it to match their username. If the user uploads a new file, the previous one is replaced.
Here's the code:
var newPath = 'uploads/' + user.username + '.' + (file.extension).toLowerCase();
var basePath = path.resolve(__dirname + '../../../') + '/';

// Copy, rename and delete temp file
var is = fs.createReadStream(basePath + file.path);
var os = fs.createWriteStream(basePath + newPath);
is.pipe(os);
is.on('end', function (error) {
  if (err) return res.send(500);
  fs.unlink(basePath + file.path);
});
Problem is that fs.unlink(basePath + file.path); doesn't actually delete the old file on my machine (OSX 10.9.2). How can i make sure the temp file is deleted?
The file at basePath + file.path is still referenced by the read stream is. Removal of the file's contents is postponed until all references to it are closed, so you might want to call fs.unlink in the stream's close event handler instead.
I just use fs.writeFile, which overwrites a file. I have code in my app to do this synchronously, and it looks like this:
if( !fs.existsSync( filename ) ) fs.openSync( filename, "wx" );
fs.writeFileSync( filename, data );
Basically I check if the file exists, and if it doesn't I open it. Then, I just write to the file. If it is new or already existing, only my data is present in the file when I look at it, overwriting what was there.

Meteor/Node writeFile crashes server

I have the following code:
Meteor.methods({
  saveFile: function(blob, name, path, encoding) {
    var path = cleanPath(path), fs = __meteor_bootstrap__.require('fs'),
        name = cleanName(name || 'file'), encoding = encoding || 'binary',
        chroot = Meteor.chroot || 'public';
    // Clean up the path. Remove any initial and final '/' -we prefix them-,
    // any sort of attempt to go to the parent directory '..' and any empty directories in
    // between '/////' - which may happen after removing '..'
    path = chroot + (path ? '/' + path + '/' : '/');

    // TODO Add file existence checks, etc...
    fs.writeFile(path + name, blob, encoding, function(err) {
      if (err) {
        throw (new Meteor.Error(500, 'Failed to save file.', err));
      } else {
        console.log('The file ' + name + ' (' + encoding + ') was saved to ' + path);
      }
    });

    function cleanPath(str) {
      if (str) {
        return str.replace(/\.\./g, '').replace(/\/+/g, '').
          replace(/^\/+/, '').replace(/\/+$/, '');
      }
    }
    function cleanName(str) {
      return str.replace(/\.\./g, '').replace(/\//g, '');
    }
  }
});
Which I took from this project
https://gist.github.com/dariocravero/3922137
The code works fine and it saves the file; however, it repeats the call several times, and each time it causes Meteor to reset (this is on the Windows version, 0.5.4). The F12 console ends up full of 503 errors, and the Meteor console loops over the startup code each time the 503 happens, repeating the console logs in the saveFile function.
Furthermore, in the target directory the image thumbnail keeps displaying, then shows as broken, then as a valid thumbnail again, as if the fs is writing it multiple times.
Here is the code that calls the function:
"click .savePhoto": function(e, template) {
  e.preventDefault();
  var MAX_WIDTH = 400;
  var MAX_HEIGHT = 300;
  var id = e.srcElement.id;
  var item = Session.get("employeeItem");
  var file = template.find('input[name=' + id + ']').files[0];
  // $(template).append("Loading...");
  var dataURL = '/.bgimages/' + file.name;
  Meteor.saveFile(file, file.name, "/.bgimages/", function() {
    if (id == "goodPhoto") {
      EmployeeCollection.update(item._id, { $set: { good_photo: dataURL }});
    } else {
      EmployeeCollection.update(item._id, { $set: { bad_photo: dataURL }});
    }
    // Update an image on the page with the data
    $(template.find('img.' + id)).delay(1000).attr('src', dataURL);
  });
},
What's causing the server to reset?
My guess would be that since Meteor has a built-in automatic directory scan that watches for file changes (in order to relaunch the application on the newest code base), the file you are creating is actually causing the server reset.
Meteor doesn't scan directories beginning with a dot (so-called "hidden" directories, such as .git), so you can use this behaviour to your advantage by writing your files to a dot-prefixed directory of your own.
You should also consider using writeFileSync, since Meteor methods are intended to run synchronously (inside node fibers), contrary to the usual node style of asynchronous calls. In this code it's no big deal, but you couldn't, for example, use any Meteor mechanics inside the writeFile callback.
asynchronousCall(function (error, result) {
  if (error) {
    // handle error
  } else {
    // do something with result
    Collection.update(id, result); // error! Meteor code must run inside a fiber
  }
});

var result = synchronousCall();
Collection.update(id, result); // good to go!
Of course there is a way to turn any asynchronous call into a synchronous one using fibers/Future, but that's beyond the scope of this question; I recommend watching this EventedMind episode on node Futures to understand this specific area.
