How to find all extensions of a file in a directory - python-3.x

I am trying to list all the extensions of a file, e.g. file.txt, file.png, file.xml, file.pdf.
I know that the file is in the directory, but I don't know what kind of extensions it might have.
Also, files might have a custom extension, for instance file.source_1, so building a list of candidate extensions and checking each one might be very inefficient.
The result should be a list/tuple: (txt, png, xml, pdf, ...)

This is operating system specific.
For Linux, you want to use opendir with readdir. Both are wrapped in Python by its os module (e.g. os.listdir or os.scandir).
On computers with real rotating hard disks, the bottleneck would be the disk access time.
Finally, some other process (outside of your Python script) might change the file system while your script is running.
For huge directories or file systems (e.g. terabytes), consider caching and memoizing that information (e.g. in some sqlite database).
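Under those caveats, a minimal sketch of the single-pass approach with os.scandir (the function name and parameters are illustrative; the base name of the file is assumed to be known):

import os

def extensions_of(directory, stem):
    # Collect the extension of every entry named <stem>.<ext> in directory,
    # in one readdir()-style pass, without a pre-built list of candidates.
    exts = []
    with os.scandir(directory) as entries:
        for entry in entries:
            if entry.is_file():
                name, ext = os.path.splitext(entry.name)
                if name == stem and ext:
                    exts.append(ext.lstrip("."))
    return exts

# e.g. extensions_of(".", "file") might return ['txt', 'png', 'xml', 'pdf']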

Related

Copy/move semantics on FUSE

I have a hash-value database with tags and I want to implement a FUSE interface for it. Because values are indexed by their hashes they must be read-only.
Native interface for this database is very simple:
You can download, upload or tag a file.
You can get the set of all defined tags.
You can search for files tagged in accordance to a boolean combination of tags.
FUSE interface semantics are simple:
The database is viewed as a big synthetic directory hierarchy where values are files named by their hash and tags are directories.
cd-ing into a directory is semantically equivalent to searching for a given tag (naming conventions on paths can be used to implement boolean operations).
read-ing a file is semantically equivalent to downloading (part of) a value (FUSE allows a stateless read, so open and close can be no-ops).
Copying/moving a nonexistent file into a given path is equivalent to uploading and tagging it. Copying/moving an existing file into a given path is equivalent to adding new tags.
Any other operation throws an error.
This FUSE interface is quite usable and allows you to easily embed a tag file system inside a hierarchical one without the need of external tools like TagSpaces or Evernote.
My problem is identifying a file copy or move, as opposed to any other forbidden operation, through the FUSE interface: there are endless possible combinations of operations with equivalent semantics.
What is the most reliable way to identify a file copy or move with FUSE interface?
Hooking the rename of a file should be straightforward by implementing the rename() fuse call (sketched below). In this call you get the paths of both the old and new locations, so you can check whether the file comes from outside or not. That said, this works only if the user-space tool renames the file by invoking the rename(2) kernel call.
On the other hand, hooking a file copy operation is harder: it can't be done directly, as there is no such fuse call; copying happens entirely in user space and so is not directly detectable in kernel space.
You could try some heuristics and process the incoming fuse operations to detect a copy of an already stored file (e.g. by hashing the content of the new file and comparing it with already existing files), but I'm not sure how much sense that makes in your case or whether it would actually be practical.
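A rough sketch of the rename() hook with the fusepy bindings; the tag-database helpers (db.contains, db.add_tags, db.tags_from_path) are hypothetical placeholders for your native interface:

import errno
from fuse import FUSE, Operations, FuseOSError   # fusepy

class TagFS(Operations):
    def __init__(self, db):
        self.db = db   # your hash/tag database (hypothetical interface)

    def rename(self, old, new):
        # FUSE hands us both paths, so we can tell whether the source
        # already lives inside the tag hierarchy.
        if self.db.contains(old):
            # moving an existing value under another tag directory: add tags
            self.db.add_tags(old, self.db.tags_from_path(new))
        else:
            # any other rename is a forbidden operation
            raise FuseOSError(errno.EPERM)

# FUSE(TagFS(my_db), "/mnt/tags", foreground=True)   # typical mount call

As noted above, this only catches tools that actually issue rename(2); a copy made with read/write will instead surface as create and write calls on the destination path.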

How to check if a file is opened in Linux?

The thing is, I want to track whether a user tries to open a file on a shared account. I'm looking for any record/technique that helps me know, at run time, if the file in question is open.
I want to create a script which monitors if the file is open, and if it is, I want it to send an alert to a particular email address. The file I'm thinking of is a regular file.
I tried using lsof | grep filename for checking if a file is open in gedit, but the command doesn't return anything.
Actually, I'm trying this for a pet project, and thus the question.
The command lsof -t filename shows the IDs of all processes that have the particular file opened. lsof -t filename | wc -w gives you the number of processes currently accessing the file.
The fact that a file has been read into an editor like gedit does not mean that the file is still open. The editor most likely opens the file, reads its contents and then closes the file. After you have edited the file you have the choice to overwrite the existing file or save as another file.
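A small polling sketch in Python built around lsof -t (the watched path is a placeholder and the alert is left as a print):

import subprocess
import time

WATCHED_FILE = "/path/to/watched/file"   # placeholder

def pids_holding(path):
    # Return the PIDs of processes that currently have `path` open.
    result = subprocess.run(["lsof", "-t", path],
                            capture_output=True, text=True)
    return [int(pid) for pid in result.stdout.split()]

while True:
    pids = pids_holding(WATCHED_FILE)
    if pids:
        print(f"{WATCHED_FILE} is open by PIDs {pids}")   # replace with your email alert
    time.sleep(5)

Because of the open-read-close behaviour described above, polling like this can easily miss short-lived opens; the inotify-based suggestions below are better suited to that.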
You could (in addition to the other answers) use the Linux-specific inotify(7) facilities.
My understanding is that you want to track one (or a few) particular given files, with a fixed file path (actually a given i-node). E.g. you would want to track when /var/run/foobar is accessed or modified, and do something when that happens.
In particular, you might want to install and use incrond(8) and configure it through incrontab(5).
If you want to run a script when some given file (on a native local file system, e.g. Ext4, BTRFS, ..., but not NFS) is accessed or modified, use inotify; incrond is designed exactly for that purpose.
PS. AFAIK, inotify doesn't work well for remote network files, e.g. NFS filesystems (in particular when another NFS client machine is modifying the file).
If the files you care about are source files, you might be interested in revision control systems (like git) or build systems (like GNU make); in a certain way these tools are related to file modification.
You could also have the particular files sit in some FUSE filesystem, and write your own FUSE daemon.
If you can restrict and modify the programs accessing the file, you might want to use advisory locking, e.g. flock(2), lockf(3).
Perhaps the data sitting in the file should be in some database (e.g. sqlite, or a real DBMS like PostgreSQL or MongoDB). ACID properties are important...
Notice that the filesystem and the mount options may matter a lot.
You might want to use the stat(1) command.
It is difficult to help more without understanding the real use case and the motivation. You should avoid the XY problem.
Probably the workflow is wrong (having a shared file that several users are able to write), and you should approach the overall issue in some other way. For a pet project I would at least recommend using some advisory lock, and accessing & modifying the information only through your own programs (perhaps setuid) using flock, as sketched below; this excludes ordinary editors like gedit or commands like cat. However, your implicit use case seems well suited to a DBMS approach (a database does not have to contain a lot of data, it might be tiny), or to an indexed, locked file such as the GDBM library handles.
Remember that on POSIX systems and Linux, several processes can access (and even modify) the same file simultaneously (unless you use some locking or synchronization).
Reading the Advanced Linux Programming book (freely available) would give you a broader picture (but it does not mention inotify, which appeared after the book was written).
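A minimal sketch of the advisory-locking suggestion using Python's fcntl module (the path is a placeholder; the lock only helps if every cooperating program takes it):

import fcntl

SHARED_FILE = "/path/to/shared/file"   # placeholder

with open(SHARED_FILE, "r+") as f:
    fcntl.flock(f, fcntl.LOCK_EX)      # blocks until we hold an exclusive flock(2)
    try:
        data = f.read()
        # ... modify the data and write it back ...
    finally:
        fcntl.flock(f, fcntl.LOCK_UN)  # also released when f is closed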
You can use ls -lrt; it lists files sorted by their last modification time in the shell, so you can see which file was written most recently and infer from that whether it has been used. Make sure that you are in the correct directory.

Level 2 I/O in Linux using readdir() possible?

I am trying to traverse a directory structure and open every file in that structure. To traverse, I am using opendir() and readdir(). Since I already have the entity, it seems stupid to build a path and open the file -- that presumably forces Linux to find the directory and file I just traversed.
Level 2 I/O (open, creat, read, write) requires a path. Is there any call to either open a filename inside a directory, or open a file given an inode?
You probably should use nftw(3) to recursively traverse a file tree.
Otherwise, in a portable way, construct your directory + filename path using e.g.
snprintf(pathbuf, sizeof(pathbuf), "%s/%s", dirname, filename);
(or perhaps using asprintf(3) but don't forget to later free the result)
And to answer your question about opening a file in a directory, you could use openat(2), which is specified by POSIX 2008 and available on Linux. But I believe that you should really use nftw or construct your path as suggested above. Read also about O_PATH and O_TMPFILE in open(2).
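For comparison, the same pattern is reachable from Python: os.fwalk plays the role of nftw and the dir_fd argument of os.open maps to openat(2) (a sketch; the root path is a placeholder):

import os

# Walk a tree and open each regular file relative to the directory's fd,
# which uses openat(2) under the hood.
for dirpath, dirnames, filenames, dirfd in os.fwalk("/some/tree"):   # placeholder root
    for name in filenames:
        fd = os.open(name, os.O_RDONLY, dir_fd=dirfd)
        try:
            header = os.read(fd, 64)   # e.g. peek at the first bytes
        finally:
            os.close(fd)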
BTW, the kernel has to access the directory several times anyway (in practice the metadata is cached by the file system code in the kernel), if only because another process could have written inside it while you are traversing it.
Don't even think of opening a file through its inode number: this would violate several file system abstractions! (It might be barely possible with insane and disgusting tricks, e.g. debugfs, and it could very badly harm your filesystem!)
Remember that files are generally inodes, and can have zero (a process did open then unlink(2) a file while keeping the open file descriptor), one (this is the usual case), or several (e.g. /foo/bar1 and /gee/bar2 could be hard-linked using link(2)) file names.
Some file systems (e.g. FAT ...) don't have real inodes. The kernel fakes something in that case.

I/O Performance in Linux

File A is in a directory which has 10000 files, and file B is in a directory which has 10 files. Would reading/writing file A be slower than file B?
Would it be affected by a different journaling file system?
No.
Browsing the directory and opening a file will be slower (whether or not that's noticeable in practice depends on the filesystem). Input/output on the file is exactly the same.
EDIT:
To clarify, the "file" in the directory is not really the file, but a link ("hard link", as opposed to symbolic link), which is merely a kind of name with some metadata, but otherwise unrelated to what you'd consider "the file". That's also the historical reason why deleting a file is done via the unlink syscall, not via a hypothetical deletefile call. unlink removes the link, and if that was the last link (but only then!), the file.
It is perfectly legal for one file to have a hundred links in different directories, and it is perfectly legal to open a file and then move it to a different place or even unlink it (while it remains open!). It does not affect your ability to read/write on the file descriptor in any way, even when a file (to your knowledge) does not even exist any more.
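A quick way to observe this from Python (a throwaway sketch using a temporary file):

import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"still readable")
os.unlink(path)                # the last link is gone; the name no longer exists
os.lseek(fd, 0, os.SEEK_SET)
print(os.read(fd, 100))        # b'still readable': the inode lives until close
os.close(fd)                   # only now is the storage reclaimed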
In general, once a file has been opened and you have a handle to it, the performance of accessing that file will be the same no matter how many other files are in the same directory. You may be able to detect a small difference in the time it takes to open the file, as the OS will have to search for the file name in the directory.
Journaling aims to reduce the recovery time after a file system crash; IMHO, it will not affect the read/write speed of files. See: Journaling ext2.

File information on Unix-based file systems

When I create a new file (e.g. touch file.txt), its size equals 0 B.
I'm wondering where its information (size, last modification date, owner name, file name) is stored.
This information is stored on the hard disk and managed by the kernel, of course, but I'd love to know a bit more about it:
Where and how can I get it, using a programming language such as C, for example, and how can I change it?
Is this information changeable simply by using a programming language, or does the kernel prevent such operations?
I'm working on Unix-based file systems, and I'm asking especially about these file systems.
On Unix systems, it is traditionally stored in the metadata part of a file's representation, called an inode.
You can fetch this information with the stat() call (see its struct stat fields), and you can change the owner and permissions with chown() and chmod().
This information is retrievable using the stat() function (and others in its family). Where it's stored is up to the specific file system, and for what should be obvious reasons you cannot change that on-disk layout directly unless you have raw access to the drive -- and that should be avoided unless you're ok with losing everything on that drive.
The metadata such as owner, size and dates is usually stored in a structure called an index node (inode), which the file system keeps on disk in its inode table (the superblock describes the file system as a whole, not individual files).
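The same information is reachable from Python through the os module (a small sketch assuming a file named file.txt exists in the current directory):

import os
import pwd
import time

st = os.stat("file.txt")                    # wraps the stat(2) system call
print("size :", st.st_size)
print("mtime:", time.ctime(st.st_mtime))
print("owner:", pwd.getpwuid(st.st_uid).pw_name)

os.chmod("file.txt", 0o644)                 # chmod(2): change permissions
# os.chown("file.txt", uid, gid)            # chown(2): needs the right privileges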
