Where to find recent files and how to manage them? - linux

I am aware that in distributions such as Ubuntu it is very easy to clear recent files, but I have three questions about recent files:
Does the window manager handle these, or Linux itself?
Where can I find the history, and how can I manage it manually?
Are they usually in the same place across different distributions?
I am on Arch Linux with the i3 window manager.

It is the desktop environment that handles recent files (KDE uses Baloo, for example, and Nautilus uses ~/.local/share/recently-used.xbel). There is no uniform way of handling recent files.
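For the GTK/Nautilus case, managing the list by hand can be as simple as inspecting or deleting that file (a minimal sketch, assuming a GTK-based file manager; applications recreate the file as needed):
# inspect the GTK recent-files list
less ~/.local/share/recently-used.xbel
# clear it (GTK applications will recreate the file)
rm ~/.local/share/recently-used.xbel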
Potential candidates for what you are looking for are:
the D-Bus file manager interface, but I have not found anything relevant in the Dolphin implementation
the Recent File Storage Specification, but I'm not aware of any implementations
the st_atime field of the struct stat structure reported by the stat system call, but it reflects any access, not only a user opening the file, and it is not guaranteed to be maintained on every filesystem (see the noatime option when mounting a filesystem, under the Filesystem Independent Mount Options in mount(8)); a sketch of this approach follows the list
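As a rough sketch of the st_atime approach (only meaningful on filesystems not mounted with noatime; the directory is a placeholder):
# list files by last access time, newest first
find ~/Documents -type f -printf '%A@ %p\n' | sort -rn | head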
Your best bet is to write your own library which would then use the KDE/GNOME libraries (or any other backend, if other desktop environments implement these features) to get the data.
The i3 window manager does not implement this, however, since it handles window management and almost nothing else.

Related

Using the same git repository between Windows and Linux results in an extra commit

I have an NTFS partition that contains my data, shared between two operating systems (I am dual-booting Linux and Windows). I have a repository that I have been working on under Linux for some time and all was good, until I tried opening the repository on Windows. I noticed that I had unstaged changes even though I hadn't changed anything. If I commit them and open the repository from Linux, I get another set of unstaged changes, and the cycle goes on. When committing, the changes that appear are a list of mode change [some #] => [some other #] [file name] for all tracked files.
I have seen some people saying that it is not a good idea to share a repository between different operating systems, but without saying why. Can someone explain why this happens, and whether it can be solved (without using a different repository, if possible)?
P.S. Yes, I am using GitHub to host the repo, if that makes any difference.
Git stores information about the files in the working tree in the index. Part of the data stored in the index is information about the device and inode of each file. This information differs based on the operating system, since different operating systems number their devices differently. Consequently, sharing a working tree across operating systems will, at the very least, result in all files needing to be re-read when running git status or certain other commands after having switched operating systems.
In addition, Linux keeps executable permissions in the file system and Windows does not. Because NTFS is a Windows file system, it does not maintain executable permissions. Linux can only assume that every file is executable, and so your commits result in many files that could not be usefully executed being marked as executable. That's why the permissions seem to change.
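If you do keep the shared working tree on NTFS, one common workaround (not part of the original answer, but a standard Git setting) is to tell Git to ignore executable-bit changes in that repository:
# run inside the repository; affects only this clone
git config core.fileMode false
Note that this only hides the symptom; the modes already recorded in earlier commits remain as they are.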
In general, NTFS is not a good file system for Linux. You are better off using a UDF file system, which will work both on Linux and Windows, but can keep and use POSIX permissions.
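As a sketch of that route (the device name is a placeholder; mkudffs comes from the udftools package, and a 512-byte block size matches the sector size most disks expose to Windows):
# format the shared data partition as UDF
sudo mkudffs --media-type=hd --blocksize=512 /dev/sdX1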
As previously mentioned, you are going to have problems sharing a working tree across operating systems. UDF may make it functional and avoid the current problems with switching permissions, but it is still not a recommended solution and you should avoid it.

Linux configuration data: standard way of storing application settings?

I am currently working on a configuration application for an embedded device and was wondering if there is a standard and accepted way of keeping application settings in Linux, something analogous to the Registry system in Windows.
GNOME seems to have a gnome-settings system that some graphical applications use, but I am going to be working on a headless, embedded device. The best advice I could find so far seems to be that I should just keep it under /etc.
Is there a universally accepted way of keeping app/user settings in Linux or is it simply a case of keeping it in a file under /etc?
Thanks.
Generally, global settings live in /etc, and per-user settings go in a dot-file or dot-directory under the user's home directory. So you have (from memory) /etc/bashrc and ~/.bashrc for bash.
Oh, and not all applications put their configuration under /etc. That's usually for applications that need some level of system management, as opposed to purely user applications like a word processor or game.
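As a minimal sketch of that convention for a hypothetical application called myapp (all file names here are assumptions, not a standard):
# read system-wide defaults first, then per-user overrides
[ -r /etc/myapp.conf ] && . /etc/myapp.conf
[ -r "${XDG_CONFIG_HOME:-$HOME/.config}/myapp.conf" ] && . "${XDG_CONFIG_HOME:-$HOME/.config}/myapp.conf"
Many modern applications follow this pattern, with the per-user file living under ~/.config rather than as a bare dot-file.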

Best way to monitor file system changes in linux

I'm looking at building a file system sync utility that monitors file system activity, but it appears that some of the file system monitoring features in the Linux kernel are obsolete or not fully featured.
What my research has found:
dnotify came first; it can notify on delete, modify, access, attrib, create and move, and can determine the file descriptor, but it is now outdated by inotify and fanotify.
inotify came second; it notifies on access, modify, attrib, close, move, delete, create, etc., but it does not give you a file descriptor or process, and will be outdated by fanotify.
fanotify is the latest; it informs of access, modify and close, but not of delete or attrib changes, though it does provide a file descriptor.
I need a way of determining the process (e.g. from the fd) and events like delete, modify and attrib changes in order to sync everything. Any suggestions? Unfortunately dnotify seems the best fit but is the most outdated.
You should use a library instead of inotify and friends: something like FAM or Gamin (the API is the same for both). This will make your program portable to other Unixes.
There's a good library for working with inotify and its file descriptors: it has its own C API and the inotifywatch utility (good for scripts), all in the inotify-tools package.
http://www.ibm.com/developerworks/linux/library/l-ubuntu-inotify/index.html
http://www.infoq.com/articles/inotify-linux-file-system-event-monitoring
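As a quick sketch of the inotify-tools approach (the watched path is a placeholder):
# print one line per event, recursively, until interrupted
inotifywait -m -r -e create,delete,modify,move,attrib /path/to/watch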
I strongly disagree that fanotify will outdate inotify.
FAM and Gamin are very good server/client options. Both of them use inotify as the first choice over the outdated dnotify and polling. I prefer Gamin.
incron is a useful tool for operations like this. You can create a configuration file for the directory or file that you want to watch.
http://inotify.aiken.cz/?section=incron&page=about&lang=en
On Ubuntu:
sudo apt-get install incron
Then create a configuration file, e.g. /etc/incron.d/mynotification.conf:
# notification for user creation
/home IN_ALL_EVENTS /opt/notify_user_created.sh $#
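Here $# is incron's wildcard for the name of the file that triggered the event ($@ would give the watched path and $% the event flags).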

Real-Time File Mirroring in Linux to a NAS

Can anyone tell me how I might best mirror selected files and folders to a NAS (Network Attached Storage) box from a Linux workstation, in real time?
These are very large files (>50 GB) and are continually being modified, so I would like to transfer only those portions of the files that have been changed, added or deleted.
FYI: These files are actually VirtualBox virtual hard disk (VDI) files.
I discovered that my Synology DS211J NAS can run an rsync service, so I enabled that and used lsyncd for the live mirror of the VirtualBox VMs. It all works very well.
Rsync only synchronises the parts of files that have changed, and so is very efficient at synchronising large files.
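As a sketch of the kind of transfer lsyncd ends up driving (the host name and paths are assumptions):
# mirror the VM image directory, updating changed parts of files in place
rsync -av --inplace --partial "$HOME/VirtualBox VMs/" user@nas:/volume1/vm-backup/
The --inplace flag matters for huge images: without it, rsync writes a complete temporary copy of each changed file on the destination. Note that copying a VDI while its VM is running can produce an inconsistent image.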
Of the solutions that @awm mentioned, only drbd provides block-level, real-time synchronization. The other tools will meet your goal of only propagating deltas, but they operate asynchronously. In fact, rsync will work just as well in this case, since you're not trying to provide bi-directional synchronization.
For drbd to provide block-level replication, you need to install the drbd kernel modules and userspace tools on both the workstation and the NAS... which means this solution is only appropriate if your NAS is actually a fairly generic Linux box over which you have a great deal of control.
Before you start, I just want to suggest that you don't do this. You can easily bottleneck your network and NAS and cause all sorts of problems on your host.
That being said, these claim they can do it:
Unison can be found at: http://www.cis.upenn.edu/~bcpierce/unison/
Peer Software's PeerSync can do it too: http://www.peersoftware.com/products/peersync/peersyncserver/overview.aspx
Maybe - http://www.drbd.org/

Load time of a binary in linux

I have a general "feeling" that applications open faster on Windows than on Linux. I know this is too vague/non-scientific, but if I were to compare the load time of an application, e.g. VLC, on Windows and Linux, how would I go about it? Also, I would like to study the differences in the loading mechanisms used by Windows and Linux for binaries, so any reference would be much appreciated.
The Linux loader can give you lots of information about the binding process.
LD_DEBUG=help ls
See the ld.so(8) man page for more details.
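For example, to get timing statistics from the dynamic linker (ls stands in for any dynamically linked binary; the output goes to stderr):
LD_DEBUG=statistics ls >/dev/null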
To really measure this you'd need to be able to flush the file cache on each OS before measuring.
One thing Windows does is that, immediately after bootup, it begins loading a list of frequently used DLLs and applications into the file cache. This is called SuperFetch, and it works pretty well.
Linux distros sometimes have a similar list that is preloaded into file cache by a program called readahead. The problem with the Linux distros is that this list is fixed at install time and isn't automatically updated, so it usually only includes programs such as the default user desktop, web browser, email application, etc.
To flush the file cache on Linux, do the following command as root:
echo 3 > /proc/sys/vm/drop_caches
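Combining that with time gives a rough cold-cache measurement (a sketch; the vlc invocation is an assumption):
sync
echo 3 > /proc/sys/vm/drop_caches
time vlc --version >/dev/null 2>&1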
To flush the file cache on Windows? I don't know, I will need to look.
