I am using the following call to delete existing files in my Node.js app running on Linux (RHEL).
fs.unlink(downloadsFolder + '/' + file)
However, after a few days I noticed the files were still on the system because the file handles had not been released. I restarted the Node server and those files were eventually gone. How do I fix this issue programmatically?
dzdo lsof -L | grep -i deleted
node 48782 root 600743243 403197165 /mnt/downloads/file_1516312894734.csv (deleted)
node 48782 root 14999 403197166 /mnt/downloads/file_1516729327306.csv (deleted)
I also get this warning in the logs for the fs.unlink() call. Could this be causing it?
(node:48782) [DEP0013] DeprecationWarning: Calling an asynchronous function without callback is deprecated.
As the warning says, the fs.unlink method is asynchronous, which means you have to provide a callback function that will be executed when the delete operation completes:
fs.unlink(downloadsFolder + '/' + file, (err) => {
  if (err) throw err;
  console.log('File successfully deleted');
});
Or you could use the synchronous version, fs.unlinkSync:
fs.unlinkSync(downloadsFolder + '/' + file);
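If you're on a Node.js version that ships fs.promises (Node 10+), a promise-based variant avoids the deprecation warning as well. A minimal sketch, assuming downloadsFolder and file are defined as in the question:

const fsp = require('fs').promises;
const path = require('path');

async function removeDownload(downloadsFolder, file) {
  try {
    // path.join avoids manual '/' concatenation
    await fsp.unlink(path.join(downloadsFolder, file));
    console.log('File successfully deleted');
  } catch (err) {
    console.error('Failed to delete file:', err);
  }
}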
I bumped into the same problem, and since this question has no proper solution yet, here's how I handled it.
In my case, I changed this code:
const uploaded = await s3.send(new PutObjectCommand({
  Bucket: `myBucketName-${module}`,
  Key: pusher === 'storage'
    ? user + '/' + filename
    : `${user}/${fName}`,
  Body: fileStream
}));
to this one:
const uploaded = ctx === 'p'
  ? await s3.send(new PutObjectCommand({
      Bucket: `ek-knote-${module}`,
      Key: pusher === 'storage'
        ? user + '/' + filename
        : `${user}/${fName}`,
      Body: fileStream
    }))
  : { $metadata: { httpStatusCode: 200 } };
And that's when the bug appeared.
This piece of logic only uploads a file to AWS S3. The only difference is that the second version only does so when ctx (the context) has the value 'p' (production). I rewrote it that way so that in 'd' (development) I don't actually upload the file (because AWS charges for resources used). The body of the object I'm sending is a constant named fileStream, which is just a read stream:
const fileStream = fs.createReadStream(fileRoute);
So this didn't happen before. What was the bug?
For some reason unknown to me, when the fileStream const is consumed by the upload, it somehow gets destroyed; but when I skip that part, the file, once unlinked, still hangs around until the server is rebooted. I think it's because the stream keeps an open handle to the file (you can't actually erase it by right-clicking and selecting delete either), but don't quote me on that.
So I fixed it by actually destroying, for lack of a better word, the fileStream:
fork.on('close', async code => {
  obj.doc.destroy();
  await unlink(obj.file);
});
obj is an object of mine that holds, among other things, the file path (file) and the actual read stream (doc). Just destroying the stream before unlinking the file fixed the problem.
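For reference, a minimal standalone sketch of the same idea, assuming a hypothetical fileRoute pointing at an existing file:

const fs = require('fs');
const { unlink } = require('fs').promises;

async function skipUploadAndRemove(fileRoute) {
  const fileStream = fs.createReadStream(fileRoute);
  // ...the stream is never consumed because the upload was skipped...
  fileStream.destroy(); // release the open file descriptor
  await unlink(fileRoute); // now the file is actually removed from disk
}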
Related
I am trying to write a file in Node.js using fs.writeFile. I use the following code:
const fs = require('filer');
const jsonString = JSON.stringify(myObj);
fs.writeFile('/myFile.txt', jsonString, function (err) {
  if (err) throw err;
  console.log('Saved!');
});
I am sure the file is created, because I can read it with fs.readFile using the same address, but I cannot find it on the disk with Windows search. From what I understood, if I change the localhost port it saves the files in another location. I already tried process.cwd(), but it didn't work.
I would really appreciate it if someone could help.
Try using __dirname instead of process.cwd():
const fs = require('fs');
const path = require('path');

const filePath = path.join(__dirname, 'myFile.txt');
console.log(filePath);

const jsonString = JSON.stringify({ name: "kalo" });
fs.writeFile(filePath, jsonString, (err) => {
  if (err) throw err;
  console.log('The file has been saved!');
});
Also, I would like to know why you are using 'filer' instead of the default fs module.
The fs module is a native module that provides file handling in Node.js, so you don't need to install it separately. This code works and prints the absolute location of the file as well. Just run this code; if it doesn't work, I think you should reinstall Node.js. I have updated the answer. You can also use the fs.writeFileSync method.
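A minimal synchronous sketch of the same write, assuming the same myFile.txt name:

const fs = require('fs');
const path = require('path');

// writeFileSync blocks until the file is on disk, so no callback is needed
fs.writeFileSync(path.join(__dirname, 'myFile.txt'), JSON.stringify({ name: "kalo" }));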
From documentation: "String form paths are interpreted as UTF-8 character sequences identifying the absolute or relative filename. Relative paths will be resolved relative to the current working directory as determined by calling process.cwd()."
So in order to determine your working directory (i.e. where fs creates files by default), call (works for me):
console.log(process.cwd());
Then if you would like to change your working directory, you can call (works for me as well):
process.chdir('path_to_new_directory');
The path can be relative or absolute.
This is also from documentation: "The process.chdir() method changes the current working directory of the Node.js process or throws an exception if doing so fails (for instance, if the specified directory does not exist)."
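Putting both together, a minimal sketch (the '/tmp' target directory is just an example):

console.log('cwd before:', process.cwd());
try {
  process.chdir('/tmp'); // relative or absolute path
  console.log('cwd after:', process.cwd());
} catch (err) {
  // thrown e.g. if the directory does not exist
  console.error('chdir failed:', err);
}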
I am using this code in my Electron app to connect to an SFTP server where I need to collect some data. I have no problem listing the files in the /out folder, but it fails to get the SFTP file with a 'permission denied' error. Ideally I would like to be able to get() the file and access the text data within directly in the function, without storing it to a file.
let Client = require('ssh2-sftp-client');
let sftp = new Client();

var root = '/out';
var today = new Date();
var mon = ((today.getMonth() + 1) < 10) ? "0" + (today.getMonth() + 1) : (today.getMonth() + 1);
var date = (today.getDate() < 10) ? "0" + today.getDate() : today.getDate();
var fileDate = mon + date;

sftp.connect({
  host: '<server-address>',
  port: 2222,
  username: 'XXXXXXXX',
  password: 'xxxxxxxx',
  privateKey: fs.readFileSync(path.join(__dirname, '../rsa/<file-name-here>.pem'))
})
  .then(() => {
    return sftp.list(root, 'SN5M' + fileDate);
  })
  .then((fileInfo) => {
    if (fileInfo) {
      var filePath = root + '/' + fileInfo[fileInfo.length - 1].name;
      return sftp.get(filePath).then((file) => {
        console.log(file);
        event.returnValue = file;
        sftp.end();
      })
      .catch((err) => {
        console.log('File get error', err);
        event.returnValue = err;
        sftp.end();
      });
    }
  })
  .catch((err) => {
    console.log('File info error', err);
    event.returnValue = err;
    sftp.end();
  });
Try this and see if it works; get() returns (String|Stream|Buffer):
let dst = fs.createWriteStream('/local/file/path/data.txt');
sftp.get(filePath, dst);
Refer to https://www.npmjs.com/package/ssh2-sftp-client#orga0dfcd5
Looking at your code, you have two problems.
If you call get() with only 1 argument, it returns a buffer, not a file. To get the file, just do
client.get(sourceFilePath, localFilePath)
and the file will be saved locally as localFilePath. Both arguments are strings and need to be full paths, i.e. include the filename, not just the directory. The filename in the second argument can be different from the first. However, if all you want is to retrieve the file, you are better off using fastGet() rather than get(). The get() method is good for when you want to do something with the data in code, e.g. buffer or write-stream piping/processing. The fastGet() method is faster than get() as it does the transfer using concurrent processes, but it does not permit the use of buffers or streams for further processing.
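A minimal sketch of the fastGet() variant, with hypothetical remote and local paths:

sftp.fastGet('/out/SN5M0118.csv', '/local/downloads/SN5M0118.csv')
  .then(() => sftp.end())
  .catch((err) => {
    console.log('fastGet error', err);
    sftp.end();
  });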
The error message you are seeing is either due to the way you are calling get(), or it is an indication that you don't have permission to read the file you're trying to access (as the user you're connected as). The easiest way to check this is to use the OpenSSH sftp program (available on Linux, macOS, and Windows) with the key you're using (use the -i switch) to try to download the file. If it fails with a permission error, then you know it is a permission error and not a problem with your code or the ssh2-sftp-client module.
EDIT: I just noticed you are also using both a password and a key file. You don't need both; either one will work. I tend to use a key file when possible, as it avoids having to store a password somewhere. Make sure not to add a passphrase to your key. Alternatively, you can use something like the dotenv module and store your credentials and other config in a .env file which you do not check into version control.
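A minimal sketch of the dotenv approach, assuming a .env file defining hypothetical SFTP_HOST, SFTP_USER, and SFTP_KEY_PATH variables:

require('dotenv').config();

sftp.connect({
  host: process.env.SFTP_HOST,
  port: 2222,
  username: process.env.SFTP_USER,
  // key file only; no password stored in code
  privateKey: fs.readFileSync(process.env.SFTP_KEY_PATH)
}).then(() => sftp.list('/out'));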
I have a file (data.file, an image) that I would like to save. An image with the same name could already exist; I would like to overwrite it if so, or create it if it does not exist. I read that the flag "w" should do this.
Code:
fs.writeFile('/avatar/myFile.png', data.file, {
  flag: "w"
}, function (err) {
  if (err) {
    return console.log(err);
  }
  console.log("The file was saved!");
});
Error:
[Error: ENOENT: no such file or directory, open '/avatar/myFile.png']
errno: -2,
code: 'ENOENT',
syscall: 'open',
path: '/avatar/myFile.png'
This is probably because you are trying to write to the root of the file system instead of your app directory. Changing '/avatar/myFile.png' to __dirname + '/avatar/myFile.png' should do the trick. Also check that the folder exists; Node.js won't create the parent folder for you.
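A minimal sketch of that fix, reusing the question's data.file and assuming an avatar folder already exists next to the script:

const fs = require('fs');
const path = require('path');

// resolve the path relative to this script, not the filesystem root
fs.writeFile(path.join(__dirname, 'avatar', 'myFile.png'), data.file, function (err) {
  if (err) {
    return console.log(err);
  }
  console.log("The file was saved!");
});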
Many of us are getting this error because the parent path does not exist, e.g. you have the /tmp directory available but there is no "foo" folder and you are writing to /tmp/foo/bar.txt.
To solve this, you can use mkdirp - adapted from How to write file if parent folder doesn't exist?
Option A) Using Callbacks
const mkdirp = require('mkdirp');
const fs = require('fs');
const getDirName = require('path').dirname;

// note: the callback form of mkdirp requires mkdirp < 1.0; v1+ returns a promise
function writeFile(path, contents, cb) {
  mkdirp(getDirName(path), function (err) {
    if (err) return cb(err);
    fs.writeFile(path, contents, cb);
  });
}
Option B) Using Async/Await
Or if you have an environment where you can use async/await:
const mkdirp = require('mkdirp');
const fs = require('fs');
const getDirName = require('path').dirname;

const writeFile = async (path, content) => {
  // create the parent directory, not a directory named after the file itself
  await mkdirp(getDirName(path));
  fs.writeFileSync(path, content);
};
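Hypothetical usage of the async/await variant, assuming /tmp exists but /tmp/foo does not yet:

writeFile('/tmp/foo/bar.txt', 'hello')
  .then(() => console.log('written'))
  .catch((err) => console.error(err));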
I solved a similar problem when I was trying to create a file whose name contained characters that are not allowed. Watch out for that as well, because it gives the same error message.
I ran into this error when creating some nested folders asynchronously right before creating the files. The destination folders wouldn't always be created before the promises to write the files started. I solved this by using mkdirSync instead of mkdir in order to create the folders synchronously.
try {
  fs.mkdirSync(DestinationFolder, { recursive: true });
} catch (e) {
  console.log('Cannot create folder ', e);
}

fs.writeFile(path.join(DestinationFolder, fileName), 'File Content Here', (err) => {
  if (err) throw err;
});
Actually, file names containing characters that are not allowed on Linux/Unix systems produce this same error message, which is extremely confusing. Please check whether the file name contains any of the reserved characters: /, >, <, |, :, & are reserved on Linux/Unix systems. For a good read, follow this link.
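If you can't control the incoming names, a small hypothetical helper that strips those reserved characters before writing might look like this:

// replace reserved Linux/Unix characters with underscores (hypothetical helper)
function sanitizeFilename(name) {
  return name.replace(/[\/<>|:&]/g, '_');
}

console.log(sanitizeFilename('report: 2020|final')); // 'report_ 2020_final'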
It tells you that the avatar folder does not exist.
Before writing a file into this folder, you need to check that a directory called "avatar" exists and, if it doesn't, create it:
if (!fs.existsSync('/avatar')) {
  fs.mkdirSync('/avatar', { recursive: true });
}
You can use './' as a prefix for your path. In your example, you would write:
fs.writeFile('./avatar/myFile.png', data.file, (err) => {
  if (err) {
    return console.log(err);
  }
  console.log("The file was saved!");
});
I had this error because I tried to run:
fs.writeFile(file)
fs.unlink(file)
...lots of code... probably not async issue...
fs.writeFile(file)
in the same script. The exception occurred on the second writeFile call. Removing the first two calls solved the problem.
In my case, I used the async fs.mkdir() and then, without waiting for that task to complete, tried to create a file with fs.writeFile()...
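A minimal sketch of the fix, awaiting mkdir before writeFile (fs.promises API, hypothetical names):

const fsp = require('fs').promises;
const path = require('path');

async function saveInto(dir, name, content) {
  await fsp.mkdir(dir, { recursive: true }); // wait until the folder exists
  await fsp.writeFile(path.join(dir, name), content);
}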
As SergeS mentioned, using / attempts to write to your system's root folder. Instead of __dirname, which points to the directory of the file where writeFile is invoked, you can use process.cwd() to point to the project's directory. Example:
writeFile(`${process.cwd()}/pictures/myFile.png`, data, (err) => {...});
If you want to avoid string concatenation/interpolation, you may also use path.join(process.cwd(), 'pictures', 'myFile.png') (more details, including directory creation, in this DigitalOcean article).
I am using Node.js on Heroku.
I read a file from another server and save it into my application in the /temp directory.
Next, I read the same file to pass it on to my client.
The code that saves the file and subsequently reads it is:
http.request(options, function (pdfResponse) {
  var filename = Math.random().toString(36).slice(2) + '.pdf',
      filepath = nodePath.join(process.cwd(), 'temp/' + filename);

  pdfResponse.on('end', function () {
    fs.readFile(filepath, function (err, contents) {
      //Stuff to do after reading
    });
  });

  //Read the response and save it directly into a file
  pdfResponse.pipe(fs.createWriteStream(filepath));
});
This works well on my localhost.
However, when deployed to Heroku, I get the following error:
events.js:72
throw er; // Unhandled 'error' event
Error: ENOENT, open '/app/temp/nvks0626yjf0qkt9.pdf'
Process exited with status 8
State changed from up to crashed
I am using process.cwd() to ensure that the path is correct, but even that did not help. As per the Heroku documentation, I am free to create files in the application's directory, which I am doing. But I can't figure out why it is failing to read the file...
The error you describe there is consistent with /app/temp/ not existing. You need to create it before you start writing to it. The idea is:
var fs = require("fs");
var path = require("path");

var temp_dir = path.join(process.cwd(), 'temp/');
if (!fs.existsSync(temp_dir))
  fs.mkdirSync(temp_dir);
I've used the sync version of the calls for illustration purposes only. This code should be part of the startup code for your app (instead of being called for each request), and how you structure it depends on your specific application.
I have the following code:
Meteor.methods({
  saveFile: function (blob, name, path, encoding) {
    var path = cleanPath(path), fs = __meteor_bootstrap__.require('fs'),
        name = cleanName(name || 'file'), encoding = encoding || 'binary',
        chroot = Meteor.chroot || 'public';
    // Clean up the path. Remove any initial and final '/' -we prefix them-,
    // any sort of attempt to go to the parent directory '..' and any empty directories in
    // between '/////' - which may happen after removing '..'
    path = chroot + (path ? '/' + path + '/' : '/');

    // TODO Add file existence checks, etc...
    fs.writeFile(path + name, blob, encoding, function (err) {
      if (err) {
        throw (new Meteor.Error(500, 'Failed to save file.', err));
      } else {
        console.log('The file ' + name + ' (' + encoding + ') was saved to ' + path);
      }
    });

    function cleanPath(str) {
      if (str) {
        return str.replace(/\.\./g, '').replace(/\/+/g, '').
          replace(/^\/+/, '').replace(/\/+$/, '');
      }
    }
    function cleanName(str) {
      return str.replace(/\.\./g, '').replace(/\//g, '');
    }
  }
});
Which I took from this project
https://gist.github.com/dariocravero/3922137
The code works fine and saves the file; however, it repeats the call several times, and each time it causes Meteor (Windows version 0.5.4) to reset. The F12 console ends up looking like this (screenshot missing). The Meteor console loops over the startup code each time the 503 happens and repeats the console logs in the saveFile function.
Furthermore, in the target directory the image thumbnail keeps displaying, then displays as broken, then as a valid thumbnail again, as if fs were writing it multiple times.
Here is the code that calls the function:
"click .savePhoto":function(e, template){
e.preventDefault();
var MAX_WIDTH = 400;
var MAX_HEIGHT = 300;
var id = e.srcElement.id;
var item = Session.get("employeeItem");
var file = template.find('input[name='+id+']').files[0];
// $(template).append("Loading...");
var dataURL = '/.bgimages/'+file.name;
Meteor.saveFile(file, file.name, "/.bgimages/", function(){
if(id=="goodPhoto"){
EmployeeCollection.update(item._id, { $set: { good_photo: dataURL }});
}else{
EmployeeCollection.update(item._id, { $set: { bad_photo: dataURL }});
}
// Update an image on the page with the data
$(template.find('img.'+id)).delay(1000).attr('src', dataURL);
});
},
What's causing the server to reset?
My guess would be that since Meteor has built-in automatic directory scanning for file changes (in order to implement automatic relaunching of the application on a newer code base), the file you are creating is actually causing the server reset.
Meteor doesn't scan directories beginning with a dot (so-called "hidden" directories), such as .git for example, so you could use this behaviour to your advantage by setting the path of your files to a .directory of your own.
You should also consider using writeFileSync, insofar as Meteor methods are intended to run synchronously (inside Node fibers), contrary to the usual Node way of asynchronous calls. In this code it's no big deal, but, for example, you couldn't use any Meteor mechanics inside the writeFile callback:
asynchronousCall(function (error, result) {
  if (error) {
    // handle error
  } else {
    // do something with result
    Collection.update(id, result); // error! Meteor code must run inside a fiber
  }
});

var result = synchronousCall();
Collection.update(id, result); // good to go!
Of course there is a way to turn any asynchronous call into a synchronous one using fibers/future, but that's beyond the scope of this question; I recommend watching this EventedMind episode on node Future to understand this specific area.
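For completeness, a minimal sketch of that fibers/future approach in server code; the writeFileInFiber helper name is hypothetical, and the details may vary across Meteor versions:

var Future = Npm.require('fibers/future');
var fs = Npm.require('fs');

function writeFileInFiber(path, data, encoding) {
  var fut = new Future();
  fs.writeFile(path, data, encoding, function (err) {
    if (err) fut.throw(err);
    else fut.return();
  });
  return fut.wait(); // blocks the current fiber, not the event loop
}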