Flag to Create Missing Directories During fs.promises.writeFile - node.js

As I review these file system flags, am I correct in concluding that there is no flag you can pass to fs.promises.writeFile that will automatically create all missing directories leading up to a filename? If not, which flag does this?
I don't like solutions that check for the existence of the folders before attempting writeFile, because once the folders are created that check still happens on every write to a file in that folder.
In my program, after the folders are created once, they should always be there, so it seems more efficient to only create the folders if there is an exception. However, I'm hoping there is a flag that avoids all this micro-management.
If a flag for auto-creating the folders doesn't exist for writeFile, then I'd like to attempt writeFile first, and then (only if there is an exception) create the folders recursively.

fs.promises.writeFile() does not automatically create the directory structure for you. That must exist first.
If you want to create the path automatically because you received an error indicative of a path problem, you can use fs.promises.mkdir() with the { recursive: true } option.
And you could, of course, create your own wrapper function that calls fs.promises.writeFile() and, if it fails with whatever error you get when the path doesn't exist (you'd have to test to see exactly what that error is), calls fs.promises.mkdir() and then retries fs.promises.writeFile().
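A minimal sketch of such a wrapper (the name writeFileEnsuringDir is made up here, and it assumes the missing-path failure surfaces as an error with code 'ENOENT', which is the usual case):

const fs = require('fs');
const path = require('path');

// Try the write first; only build the directory tree if the write
// failed because part of the path was missing, then retry once.
async function writeFileEnsuringDir(file, data, options) {
  try {
    await fs.promises.writeFile(file, data, options);
  } catch (err) {
    if (err.code !== 'ENOENT') throw err;  // some other failure: rethrow
    await fs.promises.mkdir(path.dirname(file), { recursive: true });
    await fs.promises.writeFile(file, data, options);
  }
}

On the happy path this costs nothing extra, since the mkdir only runs after a failed write.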

Related

How to open or create a directory atomically if it doesn't exist

Is there a way to open a directory and create it if it doesn't exist, atomically?
My use case is simple: I use a directory to watch all the public keys that are allowed to connect to my server, but if the directory doesn't exist I want to create it. Unfortunately, for now I only see a two-step solution: create the path with create_dir_all() and then open it with read_dir(), but this creates a window where the directory could be deleted between the two calls (very unlucky in my Docker container... but anyway!).
I didn't find any way to do this on Linux, and I'm quite surprised, because it seems like a very common operation on files.
I found a related question, Create a directory and return a dirfd with `open`, but it focuses on file descriptors and so is more specific. The answer seems to say this doesn't prevent the race condition; I don't really understand the context. My only concern is to avoid creating the directory, then trying to open it and having that fail. It's more for convenience and robustness of the code.

How do I get the filename of an open std::fs::File in Rust?

I have an open std::fs::File, and I want to get its filename, e.g. as a PathBuf. How do I do that?
The simple solution would be to just save the path used in the call to File::open. Unfortunately, this does not work for me. I am trying to write a program that reads log files, and the program that writes the logs keeps changing the filenames as part of its log rotation. So the file may very well have been renamed since it was opened. This is on Linux, so renaming open files is possible.
How do I get around this issue, and get the current filename of an open file?
On a typical Unix filesystem, a file may have multiple filenames at once, or even none at all. The file metadata is stored in an inode, which has a unique inode number, and this inode number can be linked from any number of directory entries. However, there are no reverse links from the inode back to the directory entries.
Given an open File object in Rust, you can get the inode number from its metadata via the ino() method (std::os::unix::fs::MetadataExt). If you know the directory the log file is in, you can use std::fs::read_dir() to iterate over all entries in that directory; each entry also has an ino() method (std::os::unix::fs::DirEntryExt), so you can find the one(s) matching your open file. Of course this approach is subject to race conditions – the directory entry may already be gone again by the time you try to do anything with it.
On Linux, file handles held by the current process can be found under /proc/self/fd. These look and act like symlinks to the original files (though I think they may technically be something else - perhaps someone who knows more can chip in).
You can therefore recover the (possibly changed) file name by constructing the correct path in /proc/self/fd using your file descriptor, and then following the symlink back to the filesystem.
This snippet shows the steps:
use std::fs::read_link;
use std::os::unix::io::AsRawFd;
use std::path::PathBuf;
// if f is your std::fs::File
// first construct the path to the symlink under /proc
let path_in_proc = PathBuf::from(format!("/proc/self/fd/{}", f.as_raw_fd()));
// ...and follow it back to the original file
let new_file_name = read_link(path_in_proc).unwrap();

Error of "encountered a second time" by find.pm

Hi everyone,
When I deploy my package to a Linux environment, I get this error:
.../Linux-2.6c2.5-i686/Ncurses/Ncurses-15766.0-0/lib/libncurses.so.5 is encountered a second time at /apollo/_env/FBAMerchantAutoRemovalDaemon-swit1na.1755067.237551097.1107633519/perl/lib/perl5.8-dist/File/Find.pm line 542.
Though I have read the Perl script, I have no idea what is wrong. I suspect my environment is tainted. Does anyone have an idea what is wrong and how I can debug this problem? Thanks a lot in advance!
Zhe
From perldoc File::Find
follow
Causes symbolic links to be followed. Since directory trees with symbolic links (followed) may contain files more than once and may even have cycles, a hash has to be built up with an entry for each file. This might be expensive both in space and time for a large directory tree. See "follow_fast" and "follow_skip" below. If either follow or follow_fast is in effect:
It is guaranteed that an lstat has been called before the user's wanted() function is called. This enables fast file checks involving _. Note that this guarantee no longer holds if follow or follow_fast are not set.
There is a variable $File::Find::fullname which holds the absolute pathname of the file with all symbolic links resolved. If the link is a dangling symbolic link, then fullname will be set to undef.
So, if it is OK for the purposes of your application to follow symlinks, invoke find with the follow option set:
find({ wanted => \&process, follow => 1 }, $dir);
Or consider whether one of the other follow_skip behaviors is more appropriate for your application:
follow_skip
follow_skip==1, which is the default, causes all files which are neither directories nor symbolic links to be ignored if they are about to be processed a second time. If a directory or a symbolic link are about to be processed a second time, File::Find dies.
follow_skip==0 causes File::Find to die if any file is about to be processed a second time.
follow_skip==2 causes File::Find to ignore any duplicate files and directories but to proceed normally otherwise.
It may be that follow_skip => 2 is more appropriate for your application. Only you can make that decision.

When I create a Temporary File/Directory, when will it be removed?

Julia contains a number of methods for making temporary files and directories.
I'm making fairly heavy use of them (and /dev/shm) to interface with libraries that really want to work with actual files (JLD/HDF5, and OpenStack Swift).
I had been assuming they would be deleted when the finalisers on the pointers to their names were called.
But then, after exiting Julia, it seemed like they were all still there.
Will Linux delete them?
If the app didn't clean up after itself, the OS will delete the files eventually. When temp files are deleted depends on system settings; for example, it can happen on boot, or nightly via a cron job, or in some other way.
See this answer, for example: How is the /tmp directory cleaned up?
What you are likely looking for, given your surprise that they were not removed when going out of scope, is the do-block versions of mktemp and mktempdir, in the very documentation you linked:
mktemp(f::Function[, parent=tempdir()])
Apply the function f to the result of mktemp(parent) and remove the temporary file upon completion.
mktempdir(f::Function[, parent=tempdir()])
Apply the function f to the result of mktempdir(parent) and remove the temporary directory upon completion.
Which you can use like:
mktempdir("/dev/shm") do tdir
    fname = joinpath(tdir, name)  # `name` is whatever filename you want inside the temp dir
    # Do some things with your new temp file `fname` in your temp dir `tdir`
end
# The directory referenced by `tdir`, and `fname` inside it, have now been deleted.

node.js - check if file exists before creating temp file

I want to create a temporary file/directory in node.js. To do this, I'm attempting a simple algorithm:
1. Generate a file name based on pid, time, and random chars
2. Check if the file exists:
   if yes: return to step 1 and repeat
   if not: create the file and return it
Here's the problem: The node.js documentation for fs.exists explicitly states that fs.exists should not be used, and instead one should just use fs.open and catch a potential error:
http://nodejs.org/docs/latest/api/fs.html#fs_fs_exists_path_callback
In my case, I am not interested in opening the file if it exists, I am strictly trying to find a filename that does not yet exist. Is there a way I can go about this that does not use fs.exists? Alternatively, if I do use fs.exists, should I be worried that this method will become deprecated in the future?
Use fs.open with the 'wx' flag instead, so that the file is created if it doesn't exist and an error is returned if it already exists.
That way you eliminate the (albeit minute) possibility that the file is created between an fs.exists check and the fs.open call that creates it.
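A rough sketch of that approach, using the promise-based API for brevity (the helper name makeTempFile and the naming scheme are just illustrative):

const fs = require('fs');
const os = require('os');
const path = require('path');
const crypto = require('crypto');

// Keep generating candidate names until open() with 'wx' succeeds,
// i.e. until we hit a name that did not exist at the moment of creation.
async function makeTempFile() {
  for (;;) {
    const name = path.join(os.tmpdir(),
      `tmp-${process.pid}-${Date.now()}-${crypto.randomBytes(4).toString('hex')}`);
    try {
      const handle = await fs.promises.open(name, 'wx');  // fails with EEXIST if the name is taken
      return { name, handle };                            // caller closes and removes the file
    } catch (err) {
      if (err.code !== 'EEXIST') throw err;               // only retry on "already exists"
    }
  }
}

The existence check and the creation happen in a single open() call, so there is no window for another process to grab the same name in between.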
