Delete an empty folder right after deletion of all the images inside - node.js

I'm writing a function to delete a user in my application, which is powered by Node.js, Mongoose, and Cloudinary, and I want to erase all the pictures the user has uploaded to their personal folder, along with the folder itself. This is the code I wrote to delete a single user (note that the username is also the name of the user's folder):
module.exports = (id, callback) => {
    User.findByIdAndDelete(id, function (err, user) {
        if (err)
            return callback(err);
        if (!user)
            return callback(null, null);
        cloudinary.api.delete_resources_by_prefix(`${user.username}/`, function (err) {
            if (err && err.http_code !== 404) {
                return callback(err);
            }
            cloudinary.api.delete_folder(`${user.username}`, function (error, result) {
                if (error && error.http_code !== 404) {
                    return callback(error);
                }
                return callback(null, `${user.username}`);
            });
        });
    });
};
The issue is that when I run it, the second API request fails with this error:
{ message: 'Folder is not empty', http_code: 400 }
This is obviously not true, because I deleted the files in the API call above. I also checked through the Cloudinary UI that the first call behaves correctly, and everything works right except for the last call. So what I'm asking is:
Is there an undocumented method to delete a folder and its contents in one single call?
If not, how can I do that without getting this error?
Is there any workaround that does not involve folders? I have looked at tags, but I don't know whether they could degrade performance.

Do you have backup enabled? Can you search the media library for the deleted images?
You can delete the folder's contents by using delete by prefix (the folder name) in the bulk delete option.
Here is a script to delete the empty folders:
import cloudinary.api

to_delete = []
res = cloudinary.api.subfolders("Top Folder")
for resource in res['folders']:
    to_delete.append(resource['path'])

for path in to_delete:
    print(path)
    cloudinary.api.delete_folder(path)
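In Node.js, one possible workaround for the 'Folder is not empty' error is to retry delete_folder after a short delay, on the assumption that the bulk deletion by prefix has not finished propagating when the folder delete is issued. A minimal sketch (the one-second delay and three attempts are guesses, not documented values):

const cloudinary = require('cloudinary').v2;

// Retry delete_folder a few times, since delete_resources_by_prefix
// may not have fully propagated when the folder delete is issued.
function deleteFolderWithRetry(folder, attempts, callback) {
    cloudinary.api.delete_folder(folder, function (error, result) {
        if (error && error.http_code === 400 && attempts > 1) {
            // Folder still reported as non-empty: wait and try again.
            return setTimeout(function () {
                deleteFolderWithRetry(folder, attempts - 1, callback);
            }, 1000);
        }
        return callback(error || null, folder);
    });
}

In the question's code this would replace the direct delete_folder call, e.g. deleteFolderWithRetry(user.username, 3, callback).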
If you need help, you can also contact us at support@cloudinary.com with your cloud name and we can find more information.

If backup is enabled, the folder is not considered empty.
You can open a ticket at support@cloudinary.com and we can check it out.

Related

Node.js fs file handling: Getting the error type in order to handle it more efficiently

I have a very simple function that reads a JSON file:
const loadJsonContentFromFile = function (path, callback) {
    fs.readFile(path, 'utf8', function (err, data) {
        if (err) {
            return callback(err);
        }
        try {
            const obj = JSON.parse(data);
            return callback(null, obj);
        } catch (error) {
            return callback(error);
        }
    });
}
But I want more detail from the err object about why the file read failed. In other words, I want to know why fs.readFile failed so that I can pass a tailor-made message to the callback instead of the default one Node.js provides. For example, if the system user does not have permission to read the file, I want to provide a message like:
Your user does not have the rights to read the file ./somefile.txt; please run sudo chmod +r ./somefile.txt to grant the right permissions.
Whereas if the file does not exist, I want to provide an error message like:
The file ./somefile.txt does not exist
It sounds trivial, but I think it is a good example of fine-grained handling of a returned error. To achieve that, I need to be able to identify the error that the readFile callback receives as an argument.
In PHP I would use the error object's class name to figure out what type of error it is. How can I do that in Node.js?
NOTE:
I know that one approach is to check whether the file exists and has the correct permissions before reading it. But I believe that is not the only solution, so I am looking for an alternative one.
You can check against err.code and return a custom error that suits your needs.
const loadJsonContentFromFile = function (path, callback) {
    fs.readFile(path, 'utf8', function (err, data) {
        if (err) {
            if (err.code === 'EACCES') {
                return callback(
                    // Or create your custom error: ForbiddenError...
                    new Error('Your user does not have permission to read the file...')
                );
            }
            if (err.code === 'ENOENT') {
                return callback(
                    new Error(`The file ${path} does not exist`)
                );
            }
            // Fall back to the original error for any other code
            return callback(err);
        }
        /** ... **/
    });
}
You can check the docs for more error codes.
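If you want to keep the original error code and path while customizing the message, one option is a small custom error class; this is just a sketch, and the FileReadError name and the message table are made up for illustration:

class FileReadError extends Error {
    constructor(message, code, path) {
        super(message);
        this.name = 'FileReadError';
        this.code = code; // original fs error code, e.g. 'EACCES'
        this.path = path;
    }
}

const messages = {
    EACCES: (path) => `Your user does not have permission to read ${path}`,
    ENOENT: (path) => `The file ${path} does not exist`
};

function describeFsError(err, path) {
    const makeMessage = messages[err.code];
    return makeMessage
        ? new FileReadError(makeMessage(path), err.code, path)
        : err; // unknown code: pass the original error through
}

Inside the readFile callback you would then just do return callback(describeFsError(err, path));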

node ncp filter not working

I am trying to filter files with the node ncp library, but its filter is not working.
Once the filter returns false, it breaks the whole copying process.
var options = {
    filter: function (file) {
        console.log("copying file:", file);
        var res = file.toString().indexOf("\\testdrive") !== -1;
        console.log("res:", res);
        return !res;
    },
    //filter: new RegExp("\\testdrive"), // Or a RegExp instance
};

ncp(source, destination, options, function (err) {
    if (err) {
        console.error("backup error:", err);
    }
    console.log("Backup done!");
});
So as soon as the filter function or RegExp instance returns false, the whole copy breaks.
options.filter - a RegExp instance, against which each file name is tested to determine whether to copy it or not, or a function taking single parameter: copied file name, returning true or false, determining whether to copy file or not.
Just found the solution:
It seems the filter RegExp/function is called not only for the filenames that ncp is supposed to copy, but also for the folder names.
The first folder name it filters is apparently the one you passed to ncp as source. If that one fails the test, ncp just stops copying anything in that folder.
See: https://github.com/AvianFlu/ncp/issues/130
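Given that behavior, a function filter can let every directory through and only test files. A rough sketch (synchronous stat for brevity; the path test is the one from the question):

const fs = require('fs');

const options = {
    filter: function (file) {
        // Always return true for directories, including the source
        // directory itself, so ncp keeps descending into them.
        if (fs.statSync(file).isDirectory()) {
            return true;
        }
        // Only actual files are excluded by the path test.
        return file.indexOf('\\testdrive') === -1;
    }
};

Note that parent directories of excluded files may still be created (empty) in the destination, since directories themselves always pass this filter.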
For those coming late to the party (like me):
ncp traverses the directory tree in such a way that full directory paths are subjected to the filter as well; on top of that, the source directory itself is tested too. In my case I wanted to copy a bunch of SVG files (let's call them one.svg, zwei.svg, tres.svg) from a single-level directory called images, which resulted in the following code:
ncp(srcImages, outImages, { filter: /.*(images|one\.svg|zwei\.svg|tres\.svg)$/ }, err => {
    if (err) return console.error(err);
    console.log('done!');
});
PS: please note that there is a $ at the end of the regex, meaning that we try to match the end of the string.

Can a node.js server know if a server file is created?

What is the most elegant way or technology to let a Node.js server know that a file has been created on the server?
The idea is: a new image has been created (from a webcam or similar) -> dispatch an event!
UPDATE: The name of the new file in the directory is not known a priori, and the file is generated by external software.
You should take a look at fs.watch(). It allows you to "watch" a file or directory and receive events when things change.
Note: The documentation states that fs.watch is not consistent across platforms, so you should take that into account before using it.
fs.watch(fileOrDirectoryPath, function (event, filename) {
    // Something changed with filename, trigger event appropriately
});
Also something to be aware of from the docs:
Providing filename argument in the callback is not supported on every
platform (currently it's only supported on Linux and Windows). Even on
supported platforms filename is not always guaranteed to be provided.
Therefore, don't assume that filename argument is always provided in
the callback, and have some fallback logic if it is null.
If filename is not available on your platform and you're watching a directory you may need to do something where you initially read the directory and cache the list of files in it. Then, when you get an event from fs.watch, read the directory again and compare it to the cached list of files to see what was added (if anything).
Update 1: There's a good module called watch, on github, which makes it easy to watch a directory for new files.
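For reference, usage of that module looks roughly like this (check its README for the exact API; this sketch is from memory):

var watch = require('watch');

watch.createMonitor(someDirectoryPath, function (monitor) {
    monitor.on('created', function (f, stat) {
        // A new file f has appeared in the directory
    });
});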
Update 2: I threw together an example of how to use fs.watch to get notified when new files are added to a directory. I think the module I linked to above is probably the better way to go, but I thought it would be nice to have a basic example of how it might work if you were to do it yourself.
Note: This is a fairly simplistic example just to show how it could work in general. It could almost certainly be done more efficiently, and it's far from thoroughly tested.
var fs = require('fs');

function watchForNewFiles(directory, callback) {
    // Get a list of all the files in the directory
    fs.readdir(directory, function (err, files) {
        if (err) {
            callback(err);
        } else {
            var originalFiles = files;
            // Start watching the directory for new events
            var watcher = fs.watch(directory, function (event, filename) {
                // Get the updated list of all the files in the directory
                fs.readdir(directory, function (err, files) {
                    if (err) {
                        callback(err);
                    } else {
                        // Filter out any files we already knew about
                        var newFiles = files.filter(function (f) {
                            return (originalFiles.indexOf(f) < 0);
                        });
                        // Reset our list of "original" files
                        originalFiles = files;
                        // If there are new files detected, call the callback
                        if (newFiles.length) {
                            callback(null, newFiles);
                        }
                    }
                });
            });
        }
    });
}
Then, to watch a directory you'd call it with:
watchForNewFiles(someDirectoryPath, function (err, files) {
    if (err) {
        // handle error
    } else {
        // handle any newly added files
        // "files" is an array of filenames that have been added to the directory
    }
});
I came up with my own solution using this code here:
var fs = require('fs');
var intID = setInterval(check, 1000);

function check() {
    fs.exists('file.txt', function check(exists) {
        if (exists) {
            console.log("Created!");
            clearInterval(intID);
        }
    });
}
You could add a parameter to the check function for the file name and use it in the path.
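For example, a parameterized version might look like this (still polling once per second, as above):

var fs = require('fs');

function waitForFile(filename, callback) {
    var intID = setInterval(function () {
        fs.exists(filename, function (exists) {
            if (exists) {
                clearInterval(intID);
                callback(filename);
            }
        });
    }, 1000);
}

waitForFile('file.txt', function (name) {
    console.log(name + " created!");
});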
I did some tests on fs.watch() and it does not work if the file has not been created yet. fs.watch() has multiple issues anyway and I would never suggest using it... It does work for checking whether the file was deleted, though...
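If you want to avoid writing the polling loop yourself but still wait for a specific file, fs.watchFile (which polls internally) may be an option. My understanding is that for a file that does not exist yet the listener initially receives zeroed stats, so creation shows up as the inode changing from 0, but verify this on your platform:

var fs = require('fs');

fs.watchFile('file.txt', { interval: 1000 }, function (curr, prev) {
    // For a file that did not exist, prev.ino stays 0 until it is created.
    if (prev.ino === 0 && curr.ino !== 0) {
        console.log('Created!');
        fs.unwatchFile('file.txt');
    }
});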

How can I remove temporary files created by fs.lstat?

I have a simple method below which checks whether a directory exists. I have noticed that when fs.lstat gets called, what looks like a temporary file is created, with a name along the lines of '12116-ocskz3'.
Why does lstat create these temporary files and how can I remove them?
self.checkDirectory = function (callback) {
    fs.lstat(uploadDir, function (err, stats) {
        // Linux filesystem manual - http://linux.die.net/man/2/lstat
        if (!err && stats.isDirectory()) {
            // Directory exists
            console.log('This directory already exists!');
            if (typeof(callback) == 'function') {
                callback(true, uploadDir);
            }
        } else if (err.code === 'ENOENT') {
            // ENOENT - A component of path does not exist, or path is an empty string.
            console.log(err.code + ': This directory doesn\'t exist!');
            if (typeof(callback) == 'function') {
                callback(false, uploadDir);
            }
        }
    });
};
lstat does not create any temporary files.
Edit: okay, as discovered in the comments, the multipart module is creating them. It has been blogged about several times; just search for it.
The easiest solution is to not use bodyParser (it's deprecated anyway, for precisely this reason); use express.json() and express.urlencoded() instead. If you really need to upload files, read the docs about how to deal with them. The uploads should be somewhere in req.files, as far as I recall.
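A minimal sketch of that setup (assuming Express 4.16+, where these parsers are built in):

const express = require('express');
const app = express();

// Replaces bodyParser for JSON and form bodies; neither writes temp files.
app.use(express.json());
app.use(express.urlencoded({ extended: false }));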
The issue was caused by using the enctype attribute on the form element with the value seen below:
enctype="multipart/form-data"
I think multipart is being replaced with something more favorable in a future release, the issues with temporary files being one of the reasons.
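If you do stay on multipart for now, the usual cleanup is to unlink the temp file once you are done with it. A sketch, assuming an Express app with multipart-capable body parsing mounted, and a hypothetical myFile field name (the exact shape of req.files depends on the middleware version):

const fs = require('fs');
const express = require('express');
const app = express();
// ... multipart-capable body parsing middleware mounted here ...

app.post('/upload', function (req, res) {
    const upload = req.files.myFile; // 'myFile' is a hypothetical field name
    // ... process upload.path (the temp file multipart created) ...
    fs.unlink(upload.path, function (err) {
        if (err) console.error('Could not remove temp file:', err);
        res.send('done');
    });
});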

Delete multiple entities at once using Azure Table Storage Node.js interface

The only delete operation I can find deletes one entity at a time: https://www.windowsazure.com/en-us/develop/nodejs/how-to-guides/table-services/#delete-entity
What I want is equivalent to the SQL statement
DELETE FROM MyTable WHERE PartitionKey = 'something'
Also on that page is a way to send a batch (although I could not get this to work with delete; does anyone know why?). However, I'd first have to do a select to get the RowKeys of the entities I want to delete. I was wondering whether it's possible to do it in only one request to Azure.
Thanks in advance.
UPDATE: Here's the code I tried and it doesn't work. I have confirmed that all arguments are correct when the function is called.
// subAccts all have PartitionKey = pKey
function deleteAccount(pKey, rKey, subAccts, callback) {
    var tasks = subAccts; // rename for readability
    tasks.push({ PartitionKey: pKey, RowKey: rKey });
    tableService.beginBatch();
    async.forEach(tasks, function (task, callback) {
        tableService.deleteEntity(myTable, task, function (error) {
            if (!error) {
                callback(null);
            }
            else {
                console.log(error);
                callback(error);
            }
        });
    }, function (error) {
        if (error) {
            console.log(error);
            callback(error);
            return;
        }
        tableService.commitBatch(callback);
    });
}
If you don't know the list of entities you want to delete in advance, you'll have to query first to find them.
You might consider restructuring your table, though. If, instead of just putting these entities in the same partition, you could put them all in the same table (by themselves), you could delete the table. This is commonly used for deleting old logs... create tables like log_january, log_february, and then you can just delete an entire month at a time with a single command.
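With that structure, dropping a month of logs becomes a single call. A sketch using the legacy azure SDK that the question's code appears to use (note that real Azure table names must be alphanumeric, so the underscores above would have to go):

var azure = require('azure');
var tableService = azure.createTableService();

// Deleting the whole table removes every entity in it in one request.
tableService.deleteTable('LogJanuary', function (error) {
    if (error) {
        console.log(error);
    }
});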
EDIT
There appears to be a bug in the library. Try this edit as a workaround:
BatchServiceClient.prototype.addOperation = function (webResource, outputData) {
    if (azureutil.objectIsNull(outputData)) {
        outputData = '';
    }
    if (webResource.httpVerb !== 'GET') {
        webResource.headers[HeaderConstants.CONTENT_ID] = this.operations.length + 1;
        if (webResource.httpVerb !== 'DELETE') {
            webResource.headers[HeaderConstants.CONTENT_TYPE] = 'application/atom+xml;type=entry';
        } else {
            delete webResource.headers[HeaderConstants.CONTENT_TYPE];
        }
    ...
I've created a pull request to fix this in the dev branch: https://github.com/WindowsAzure/azure-sdk-for-node/pull/300.
Until this is fixed, you can always clone my fork (https://github.com/smarx/azure-sdk-for-node), checkout the dev branch, and npm install that, rather than hand-editing the code.
