I need a program to list all the files that are accessed/opened by a process in Linux.
It should work like this:
o/p: the full paths of the files that the process is accessing.
I don't want to use the 'lsof' utility or any other external utility.
Is there any way to achieve this programmatically?
If you want just the files which are accessible through opened file descriptors of the process with pid 1234, list the /proc/1234/fd/ directory (most of the entries are symlinks). You'll also get additional details through /proc/1234/fdinfo/
Try
ls -l /proc/self/fd/
to get an idea of what these files contain.
Programmatically you could use readdir(3) after opendir(3) on these directories (and also readlink(2), at least for entries in /proc/1234/fd/). See also proc(5).
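A minimal sketch of that approach (using the example pid 1234 from above; you would normally need to be that process's owner or root to read its fd directory):

    #include <dirent.h>
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *fddir = "/proc/1234/fd";    /* example pid; build this from your target pid */
        DIR *dir = opendir(fddir);
        if (!dir) {
            perror("opendir");
            return 1;
        }
        struct dirent *ent;
        while ((ent = readdir(dir)) != NULL) {
            if (strcmp(ent->d_name, ".") == 0 || strcmp(ent->d_name, "..") == 0)
                continue;
            char link[PATH_MAX], target[PATH_MAX];
            snprintf(link, sizeof link, "%s/%s", fddir, ent->d_name);
            ssize_t n = readlink(link, target, sizeof target - 1);
            if (n >= 0) {
                target[n] = '\0';               /* readlink(2) does not NUL-terminate */
                printf("fd %s -> %s\n", ent->d_name, target);
            }
        }
        closedir(dir);
        return 0;
    }

Each target is already an absolute path, or a pseudo-name such as pipe:[...] or socket:[...] for descriptors that don't refer to regular files.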
Notice that /proc/ is Linux-specific. Some other Unixes have it (e.g. Solaris), with very different contents, properties, and semantics.
If you care also about files which have been opened and closed in the past by some process, it is much more difficult. See also inotify(7) and ptrace(2)...
To convert a file path to a "canonical" absolute file path, use realpath(3).
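For example, a tiny sketch that canonicalizes a path given on the command line:

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <path>\n", argv[0]);
            return 1;
        }
        char resolved[PATH_MAX];
        if (realpath(argv[1], resolved) == NULL) {  /* resolves symlinks, "." and ".." */
            perror("realpath");
            return 1;
        }
        printf("%s\n", resolved);
        return 0;
    }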
With more and more programs installed on my computer, I am tired of seeing lots of dotfiles, since I have to access my files often. For some reason I won't hide dotfiles when browsing files. Is there a way to move them to a better place where I want them to stay (e.g. ~/.config/$PROGCONF) without affecting programs that are running?
Symlinks still leave entries in the directory, which is far from what I expect. I expect that operations like listdirs() won't show the files, while opening them is redirected.
"For some reason it won't hide dotfiles when browsing files.":
That depends on the file manager you use. Nautilus hides them by default, and most file managers have an option to "show/hide hidden files". The ls command by default omits hidden files (files starting with a dot); it lists all files with the option -a.
"Is there a way to move them to a better place":
Programs which support the XDG Base Directory specification can store their config files in ~/.config/$PROGRAM_NAME/. If the program doesn't support that and expects the config file to be present in the home directory, there is little you can do (maybe you can give us a list of which programs' config files you want to move). The process differs for each program.
Let me give an example with vim. Its config file is ~/.vimrc. Let's say you move the file to ~/.config/vim/.vimrc. You can make vim read that file by launching vim with the following command.
vim -u ~/.config/vim/.vimrc
You can modify the .desktop entry or create a new shell script to launch vim using the above command and put it inside /usr/local/bin/ or create shell functions / aliases. You can read more about changing vim's config file location in this SO question.
This Arch Wiki article has application-specific information.
"without affecting programs while running":
It depends on a few factors, namely the file system used, the program we are dealing with, and so on.
Generally, deleting or moving a file only unlinks the file name from an inode, and programs read and write files using inodes. Read more here. Most programs read their config file at startup and load the values into memory; they rarely read the config file again. So if you move your config file while the program is running (assuming the program supports the config in both places), you won't see a difference until the program is restarted.
"I expect that operations like listdirs() won't show the files"
I am assuming you are talking about os.listdir() in Python. If files are present, os.listdir() will list them; there is little you can change about that. But you can write custom functions to omit the hidden files from the listing.
This SO question can help with that.
I am developing a Qt application on Linux. I wanted to pass Linux commands to a terminal. That worked, but now I also want to get a response from the terminal for a specific command.
For example,
ls -a
As you know this command lists the directories and files of the current working directory. I now want to pass the returned values from the ls call to my application. What is a correct way to do this?
QProcess is the Qt class that will let you spawn a process and read the result. There's an example of usage for reading the result of a command on that page.
popen(), a POSIX/Linux API, returns a FILE * that you can read like an ordinary file stream; it may help you, perhaps.
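For instance, a rough sketch along those lines ("ls -a" is just the command from the question; any command line the shell accepts would do):

    #include <stdio.h>

    int main(void)
    {
        FILE *fp = popen("ls -a", "r");     /* the command is run through the shell */
        if (!fp) {
            perror("popen");
            return 1;
        }
        char line[4096];
        while (fgets(line, sizeof line, fp) != NULL)
            fputs(line, stdout);            /* hand each line to your application instead of printing it */
        return pclose(fp) == -1 ? 1 : 0;
    }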
Parsing ls(1) output is dangerous -- make a few files with funny names in a directory and test it out:
touch "one file"
touch "`printf "\x0a\x0a\x0ahello\x0a world"`"
That creates two files in the current working directory. I expect your attempts to parse ls(1) output won't work. This might be alright if you're showing the results to a human, (though a human will be immensely confused if a filename includes output that looks just like ls(1) output!) but if you're trying to present something like an explorer.exe or Finder.app representation of files in the filesystem, this is horribly broken.
Instead, use opendir(3), readdir(3), and closedir(3) to read directory entries yourself. This will be safer, more portable, and (as a side benefit) slightly better performing.
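A minimal sketch of that approach, hard-coding the current directory "." purely for illustration:

    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        DIR *dir = opendir(".");            /* "." is just an example; pass any directory path */
        if (!dir) {
            perror("opendir");
            return 1;
        }
        struct dirent *ent;
        while ((ent = readdir(dir)) != NULL) {
            /* d_name is the exact file name, even if it contains spaces or newlines,
               so there is nothing to parse and nothing to get wrong. */
            printf("[%s]\n", ent->d_name);
        }
        closedir(dir);
        return 0;
    }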
I am trying to write a script or a piece of code to archive files, but I do not want to archive anything that is currently open. I need to find a way to determine which files in a directory are open. I want to use either Perl or a shell script, but can try other languages if needed. It will be in a Linux environment, and I do not have the option to use lsof. I have also had inconsistent results with fuser. Thanks for any help.
I am trying to take log files in a directory and move them to another directory. If the files are open however, I do not want to do anything with them.
You are approaching the problem incorrectly. You wish to keep files from being modified underneath you while you are reading them, and you cannot do that without operating system support. The best that you can hope for in a multi-user system is to keep your archive metadata consistent.
For example, if you are creating the archive directory, make sure that the number of bytes stored in the archive matches the directory. You can checksum the file contents before and after reading the filesystem and compare that with what you wrote to the archive and perhaps flag it as "inconsistent".
What are you trying to accomplish?
Added in response to comment:
Look at logrotate to steal ideas about how to handle this consistently, or just have it do the work for you. If you are concerned that renaming files will break processes that are currently writing to them, take a look at man 2 rename:
rename() renames a file, moving it between directories if required. Any other hard links to the file (as created using link(2)) are unaffected. Open file descriptors for oldpath are also unaffected.
If newpath already exists it will be atomically replaced (subject to a few conditions; see ERRORS below), so that there is no point at which another process attempting to access newpath will find it missing.
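If that atomic-replacement guarantee is what you need, the rotation step itself can be a single call. Here is a rough sketch (the file names app.log and archive/app.log.1 are made up, and both paths must be on the same file system); a writer that already has app.log open keeps writing to the renamed file until it reopens the path:

    #include <stdio.h>

    int main(void)
    {
        /* Atomically move the current log aside; any existing archive/app.log.1 is replaced
           atomically. Note that rename(2) cannot cross file systems (it fails with EXDEV). */
        if (rename("app.log", "archive/app.log.1") != 0) {
            perror("rename");
            return 1;
        }
        return 0;
    }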
Try ls -l /proc/*/fd/* as root.
msw has answered the question correctly, but if you want to find the list of open files, the lsof command will give it to you.
Under Linux I can open a directory using opendir and then use readdir to get the filenames.
I have been experimenting with scandir and thought, "great, I can search for the files I want in this directory by passing in a custom filter", and sort using a custom sort where I want to sort by creation date. But then I realised how limited the dirent structure is; it contains only minimal information.
Is this the only API possible? i.e. do I have to stat every single file to get its size for sorting? Is this how ls -t works?
That is, indeed, how ls -t works, as 'strace ls -t' will confirm. Historically, a UNIX directory was just a special file containing a list of file names, and applications were expected to read and parse that "file" themselves. Naturally, that led to problems when newer file systems were developed that expanded the fixed length of file names, so the opendir/readdir/closedir interface was developed to abstract away the filesystem directory implementation. But the limitation on what is directly available in a directory listing remains.
POSIX does not have any facility for storing creation time, much less retrieving it.
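A rough sketch of that approach, using plain readdir(3) plus one stat(2) per entry and sorting by st_mtime, newest first, the way ls -t does (a scandir filter could handle the name matching, but the stat calls are unavoidable either way):

    #include <dirent.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <time.h>

    struct entry {
        char name[256];
        time_t mtime;
    };

    static int by_mtime_desc(const void *a, const void *b)
    {
        const struct entry *x = a, *y = b;
        if (x->mtime == y->mtime) return 0;
        return x->mtime < y->mtime ? 1 : -1;    /* newest first, like ls -t */
    }

    int main(void)
    {
        DIR *dir = opendir(".");
        if (!dir) { perror("opendir"); return 1; }

        struct entry *list = NULL;
        size_t n = 0;
        struct dirent *ent;
        while ((ent = readdir(dir)) != NULL) {
            struct stat st;
            if (stat(ent->d_name, &st) != 0)    /* one stat per entry, just like ls -t does */
                continue;
            struct entry *tmp = realloc(list, (n + 1) * sizeof *list);
            if (!tmp) break;
            list = tmp;
            snprintf(list[n].name, sizeof list[n].name, "%s", ent->d_name);
            list[n].mtime = st.st_mtime;
            n++;
        }
        closedir(dir);

        if (n)
            qsort(list, n, sizeof *list, by_mtime_desc);
        for (size_t i = 0; i < n; i++)
            printf("%s\n", list[i].name);
        free(list);
        return 0;
    }

That is modification time; as noted above, there is no portable creation time to sort by.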
How could I track changes to a specific directory in UNIX? For example, I launch some utility which creates some files during its execution. I want to know exactly which files were created during one particular launch. Is there any simple way to get such information? The problem is that:
I cannot flush the directory contents after the script executes.
Files are created with names that contain a hash as a component, and there is no way to get this hash from the script for a subsequent search.
Several scripts could be executed simultaneously, and I do not want to see files created by another process in the same folder.
Please note that I do not want to know merely whether the directory has been changed, as stated here; I need filenames, which ideally could be grepped to match a specific pattern.
You need to subscribe to file system change notifications.
You should use something like FAM, gamin, or inotify to detect when a file has been created, closed, etc.
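For instance, a minimal inotify sketch that watches a single directory (the path /tmp/watched-dir is just a placeholder) and prints the name of every file created or moved into it:

    #include <stdio.h>
    #include <sys/inotify.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = inotify_init();
        if (fd < 0) { perror("inotify_init"); return 1; }

        /* Watch for files being created in, or moved into, the directory. */
        int wd = inotify_add_watch(fd, "/tmp/watched-dir", IN_CREATE | IN_MOVED_TO);
        if (wd < 0) { perror("inotify_add_watch"); return 1; }

        /* Buffer aligned for struct inotify_event, as in the inotify(7) example. */
        char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
        for (;;) {
            ssize_t len = read(fd, buf, sizeof buf);    /* blocks until events arrive */
            if (len <= 0) break;
            for (char *p = buf; p < buf + len; ) {
                struct inotify_event *ev = (struct inotify_event *) p;
                if (ev->len > 0)
                    printf("created: %s\n", ev->name);  /* grep/match this name against your pattern */
                p += sizeof(struct inotify_event) + ev->len;
            }
        }
        close(fd);
        return 0;
    }

Note that inotify does not tell you which process created a file, so separating simultaneous runs still has to be done by the file-name pattern (or by giving each run its own directory).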
You could use strace -f myscript to trace all system calls made by the script, and use grep to filter the system calls that create new files.
You could use the Linux Auditing System. Here is a howto link:
http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html
You can use the script command to track the commands launched.