I have been using the Node.js file system module to perform various file-related operations. I need to check whether a file name already exists in a directory, and if it does, append a suffix to the new file's name — the way Windows handles duplicate file names.
If TestFile.txt already exists and another file with the same name comes in during processing, the new file should be renamed to TestFile (1).txt, and the next file with the same name should become TestFile (2).txt.
What would be the best way to achieve this? Do I have to keep all the file names in a temporary array and traverse it for each incoming file? This is a multi-threaded environment, and there could be 50,000+ documents coming in for processing.
Thanks a ton.
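One way to do it without keeping all 50,000+ names in memory is to probe for the next free suffix at write time. A rough Node.js sketch, assuming a single target directory and that a synchronous existence check per incoming file is acceptable (the function and directory names are illustrative):

const fs = require('fs');
const path = require('path');

// Return the first name that does not yet exist in dir:
// "TestFile.txt", then "TestFile (1).txt", "TestFile (2).txt", ...
function resolveName(dir, fileName) {
  const ext = path.extname(fileName);        // ".txt"
  const base = path.basename(fileName, ext); // "TestFile"
  let candidate = fileName;
  let counter = 0;
  while (fs.existsSync(path.join(dir, candidate))) {
    counter += 1;
    candidate = `${base} (${counter})${ext}`;
  }
  return candidate;
}

Because an exists-then-write check can race when many files are processed concurrently, one option is to open the chosen path with the 'wx' flag and retry with the next suffix on an EEXIST error, so the final step is atomic.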
I want to use fetch to gather a line of info from multiple nodes, and store them in the same txt file.
Right now I have:
fetch:
  src: /path/to/file.txt
  dest: /ansible/path/to/file.txt
  flat: yes
Instead of adding the info to the existing txt file, it overwrites the file and deletes the old info.
According to the official documentation of the fetch module:
Files that already exist at dest will be overwritten if they are different than the src.
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/fetch_module.html
You could maybe use the lineinfile or blockinfile module instead.
1. Fetch all the files that you want to append, renaming each one with some node-specific identifier in the destination name. All the files need to land in the same folder, and the destination name is what keeps them apart; since you get the same file name from every source, they would otherwise end up overwriting each other. Use a changing variable like ansible_hostname, or some other node identifier like the IP address, when building the file name for each fetched file.
2. Get a list of all the fetched files in a variable.
3. Iterate through that variable and use a file lookup for each one:
block={{ lookup('file', 'sourceFile') }}
You can also iterate over all the files in a folder while appending to the end of the destination file. In your case, I believe blockinfile is appropriate for this operation; a rough sketch follows.
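A sketch of that flow, assuming the fetched files are collected under an illustrative /ansible/path/collected directory and combined on the control node:

- name: Fetch the file from every node under a host-specific name
  fetch:
    src: /path/to/file.txt
    dest: "/ansible/path/collected/{{ ansible_hostname }}.txt"
    flat: yes

- name: Append the contents of each fetched file to the combined file
  blockinfile:
    path: /ansible/path/to/file.txt
    create: yes
    marker: "# {mark} {{ item | basename }}"
    block: "{{ lookup('file', item) }}"
  loop: "{{ lookup('fileglob', '/ansible/path/collected/*.txt', wantlist=True) }}"
  delegate_to: localhost
  run_once: true

The per-file marker keeps the blocks separate and makes the task idempotent, so re-running the play updates each host's block instead of duplicating it.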
I have an open std::fs::File, and I want to get its filename, e.g. as a PathBuf. How do I do that?
The simple solution would be to just save the path used in the call to File::open. Unfortunately, this does not work for me. I am trying to write a program that reads log files, and the program that writes the logs keeps changing the filenames as part of its log rotation. So the file may very well have been renamed since it was opened. This is on Linux, so renaming open files is possible.
How do I get around this issue, and get the current filename of an open file?
On a typical Unix filesystem, a file may have multiple filenames at once, or even none at all. The file metadata is stored in an inode, which has a unique inode number, and this inode number can be linked from any number of directory entries. However, there are no reverse links from the inode back to the directory entries.
Given an open File object in Rust, you can get the inode number from its metadata, using the ino() method from std::os::unix::fs::MetadataExt. If you know the directory the log file is in, you can use std::fs::read_dir() to iterate over all entries in that directory; each entry also has an ino() method, so you can find the one(s) matching your open file object. Of course this approach is subject to race conditions – the directory entry may already be gone again once you try to do anything with it.
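A small sketch of that scan, assuming you know the directory to search (the helper name is illustrative):

use std::fs::{self, File};
use std::io;
use std::os::unix::fs::MetadataExt; // brings ino() into scope on Metadata
use std::path::{Path, PathBuf};

/// Return every name in `dir` that currently points at the same inode as `file`.
fn current_names(file: &File, dir: &Path) -> io::Result<Vec<PathBuf>> {
    let target = file.metadata()?.ino();
    let mut names = Vec::new();
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        if entry.metadata()?.ino() == target {
            names.push(entry.path());
        }
    }
    Ok(names)
}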
On Linux, file handles held by the current process can be found under /proc/self/fd. These look and act like symlinks to the original files (though I think they may technically be something else - perhaps someone who knows more can chip in).
You can therefore recover the (possibly changed) file name by constructing the correct path in /proc/self/fd using your file descriptor, and then following the symlink back to the filesystem.
This snippet shows the steps:
use std::fs::read_link;
use std::os::unix::io::AsRawFd;
use std::path::PathBuf;
// if f is your std::fs::File
// first construct the path to the symlink under /proc
let path_in_proc = PathBuf::from(format!("/proc/self/fd/{}", f.as_raw_fd()));
// ...and follow it back to the original file
let new_file_name = read_link(path_in_proc).unwrap();
I have multiple zip files in a folder with names like below:
"abc.zip-20181002084936558425"
How to rename all of them with one command to get result like below:
"abc-20181002084936558425.zip"
I want the timestamp to come before the extension for multiple filenames. Every file has a different timestamp, so the renaming should take that into account. Can I rename multiple files like this using a single command?
Provided your files really all follow the same naming convention and you are in the right directory:
for i in *.zip-*; do newName=${i//.zip}; mv "$i" "${newName}.zip"; done
should do the trick.
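To see what the expansion does, using one of the names from the question:

i='abc.zip-20181002084936558425'
echo "${i//.zip}.zip"
# prints: abc-20181002084936558425.zip

${i//.zip} strips the .zip occurrence from the middle of the name, and the appended ".zip" puts the extension back at the end.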
I have a process that periodically gets files from a server and copies them with SFTP to a local directory. It should not overwrite a file if it already exists. I know that with something like Winston I can automatically rotate the log file when it fills up, but in this case I need similar functionality to rotate files if they already exist.
An example:
The routine copies a remote file called testfile.txt to a local directory. The next time it's run the same remote file is found and copied. But now I want to rename the first testfile.txt to testfile.txt.0 so it's not overwritten. And so on - after a while I'd have a directory of files with the name testfile.txt.N and the most recent testfile.txt.
What you can do is append the date and time to the file name; that gives every file a unique name and also helps you archive it.
For example, your test.txt could become either 20170202_181921_test.txt or test_20170202_181921.txt.
You can use a JavaScript Date object to get the date and time.
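A rough sketch of building such a name with a Date object (the pad helper and the exact pattern are just one possible choice):

const path = require('path');

function timestampedName(fileName, date = new Date()) {
  const pad = (n) => String(n).padStart(2, '0');
  const stamp = `${date.getFullYear()}${pad(date.getMonth() + 1)}${pad(date.getDate())}` +
                `_${pad(date.getHours())}${pad(date.getMinutes())}${pad(date.getSeconds())}`;
  const ext = path.extname(fileName);          // ".txt"
  const base = path.basename(fileName, ext);   // "test"
  return `${base}_${stamp}${ext}`;             // e.g. "test_20170202_181921.txt"
}

console.log(timestampedName('test.txt'));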
P.S. Show your code for downloading the files so that I can add more to it.
I want to implement a file system using FUSE. When the contents of a directory are requested, only the types of the files in that directory should be reported, as subdirectories. For example, if there are ugur.PDF, guler.JPG and devatate.PNG files in the directory, the files themselves will not be reported, but the types (PDF, JPG and PNG) will be reported instead. I tried to implement this file system. My problem is that I cannot see how to report this without changing the ls -l command. How does ls -l work? (I have a readdir function that loads file_type and file_name into the buffer. I tried to change it, but I couldn't achieve the result.)
How does ls -l work? That seems to be the crux of the issue. It calls stat(2) on each directory entry, and that system call fills the stat structure. When you implement a FUSE provider, you are providing the data that ends up in that stat structure. If you want to change the directory structure, you return the necessary fabricated information for the various items.
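As a minimal sketch (libfuse 2 style; is_type_dir() is a hypothetical helper you would implement against your own list of types), this is roughly what the getattr side of that fabrication looks like:

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>

/* hypothetical helper: does `path` name one of the fabricated type directories? */
int is_type_dir(const char *path);

static int myfs_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof *st);
    if (strcmp(path, "/") == 0 || is_type_dir(path)) {
        st->st_mode  = S_IFDIR | 0755;  /* this mode is what ls -l ends up printing */
        st->st_nlink = 2;
        return 0;
    }
    return -ENOENT;                     /* everything else does not exist in this view */
}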
I can think of two approaches to this:
1. Use a database like SQLite
You can store the files, paths and file types in the database. Then, when the user enters some directory, you can run a query like select file_types where path="" and populate the results as directories using filler().
2. Recursively traverse the original path
You can create a list of all the file types in the current directory, then use filler() to post them as directories. Then, when the user enters any one of those directories, you can again check whether cur_path is within last_path (the original directory), and select those file types and display them.
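A rough sketch of the second approach as a libfuse 2 style readdir, with an illustrative BACKING_DIR and only the root directory handled:

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <dirent.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>

#define BACKING_DIR "/tmp/backing"   /* illustrative path to the real files */

/* Report each distinct extension found in BACKING_DIR as a subdirectory of "/". */
static int myfs_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                        off_t offset, struct fuse_file_info *fi)
{
    (void) offset; (void) fi;

    if (strcmp(path, "/") != 0)
        return -ENOENT;              /* only the root is handled in this sketch */

    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);

    DIR *dp = opendir(BACKING_DIR);
    if (dp == NULL)
        return -errno;

    char seen[64][16];               /* extensions already reported */
    int nseen = 0;
    struct dirent *de;
    while ((de = readdir(dp)) != NULL) {
        const char *dot = strrchr(de->d_name, '.');
        if (dot == NULL || dot == de->d_name || dot[1] == '\0')
            continue;                /* skip entries without an extension */

        int dup = 0;
        for (int i = 0; i < nseen; i++)
            if (strcmp(seen[i], dot + 1) == 0) { dup = 1; break; }
        if (dup || nseen == 64)
            continue;

        strncpy(seen[nseen], dot + 1, sizeof seen[0] - 1);
        seen[nseen][sizeof seen[0] - 1] = '\0';

        struct stat st;
        memset(&st, 0, sizeof st);
        st.st_mode = S_IFDIR | 0755; /* report the type name as a directory */
        filler(buf, seen[nseen], &st, 0);
        nseen++;
    }
    closedir(dp);
    return 0;
}

For ls -l to show these entries as directories, getattr must report the same fabricated S_IFDIR mode for each type name, as described in the answer above.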