Linux console / standard output saved by default?

I have been attempting to find an answer to the following question everywhere:
Does the standard output / console on Linux save its contents to a file by default?
I am not looking to save the contents or redirect the output (I already know about that); I am just wondering whether it already happens through some default process included with Linux and run by root. Finding an answer has been difficult because search results are dominated by redirection questions.
Thanks.

Unix-based systems do not save console output anywhere by default.
As you may know, hardware terminals (tty) and pseudo-terminals (pty) are just channels through which a process emits bytes, and there is no system process that captures and logs those bytes.
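You can check this for yourself: find your own pseudo-terminal device and see what has it open (this assumes lsof is installed; /dev/pts/0 below is just an example, use whatever tty prints for you):

    tty                # prints your pty, e.g. /dev/pts/0
    lsof /dev/pts/0    # typically only your shell and its foreground children

If anything were quietly logging your console system-wide, you would expect an extra reader to show up here.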
See also: What is stored in /dev/pts files and can we open them?


How to check if a file is opened in Linux?

The thing is, I want to track whether a user tries to open a file on a shared account. I'm looking for any record/technique that helps me know, at run time, whether the file in question is open.
I want to create a script which monitors whether the file is open, and if it is, sends an alert to a particular email address. The file I'm thinking of is a regular file.
I tried using lsof | grep filename to check whether a file open in gedit shows up, but the command doesn't return anything.
Actually, I'm trying this for a pet project, and thus the question.
The command lsof -t filename shows the IDs of all processes that have the particular file opened. lsof -t filename | wc -w gives you the number of processes currently accessing the file.
The fact that a file has been read into an editor like gedit does not mean that the file is still open. The editor most likely opens the file, reads its contents, and then closes it; that is why your lsof | grep filename showed nothing. After you have edited the file, you have the choice to overwrite the existing file or save it as another file.
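If polling is acceptable, a minimal sketch of such a monitor could look like this (the file path and email address are hypothetical; it assumes lsof and a configured mail(1) are available):

    #!/bin/sh
    FILE=/shared/account/watched.txt    # hypothetical path
    while sleep 10; do
        # lsof -t prints only PIDs and exits non-zero when nothing has the file open
        if pids=$(lsof -t "$FILE") && [ -n "$pids" ]; then
            echo "$FILE is open by PID(s): $pids" | mail -s "file opened" admin@example.com
        fi
    done

Note that polling can miss short-lived opens such as gedit's read-then-close, which is where the inotify-based answer below comes in.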
You could (in addition to the other answers) use the Linux-specific inotify(7) facilities.
My understanding is that you want to track one (or a few) particular given file, with a fixed file path (actually a given i-node). E.g. you would want to track when /var/run/foobar is accessed or modified, and do something when that happens.
In particular, you might want to install and use incrond(8) and configure it through incrontab(5).
If you want to run a script when some given file (on a native local filesystem such as Ext4 or BTRFS, but not NFS) is accessed or modified, use inotify; incrond exists exactly for that purpose.
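For example, here is a minimal event-driven sketch using inotifywait from the inotify-tools package (not incrond itself; the path and address are hypothetical):

    #!/bin/sh
    FILE=/var/run/foobar                 # the watched file, as above
    while inotifywait -e open -e modify "$FILE"; do
        echo "$FILE was opened or modified" | mail -s "inotify alert" admin@example.com
    done

Unlike polling with lsof, this catches even a very short open-read-close sequence.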
PS. AFAIK, inotify does not work well for remote network filesystems such as NFS (in particular when another NFS client machine is modifying the file).
If the files you care about are source files, you might be interested in revision control systems (like git) or build systems (like GNU make); in a certain way these tools are all about file modification.
You could also have the particular file sit in some FUSE filesystem, and write your own FUSE daemon.
If you can restrict and modify the programs accessing the file, you might want to use advisory locking, e.g. flock(2) or lockf(3).
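With the flock(1) utility from util-linux, cooperating shell scripts can take the same advisory lock; this is only a sketch, and the lock file path is hypothetical:

    (
        flock -x 9 || exit 1     # block until we hold an exclusive lock on fd 9
        # ... read or modify the shared file safely here ...
    ) 9>/var/lock/mydata.lock

Remember that advisory locks only constrain programs that bother to take them; gedit or cat will ignore the lock entirely.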
Perhaps the data sitting in the file should instead be in some database (e.g. SQLite, or a real DBMS like PostgreSQL or MongoDB). ACID properties are important...
Notice that the filesystem and the mount options may matter a lot.
You might want to use the stat(1) command.
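For instance, GNU stat can print the access, modification, and status-change times in one line (whether atime is meaningful depends on mount options such as relatime or noatime, as just noted):

    stat -c 'atime: %x  mtime: %y  ctime: %z' /var/run/foobar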
It is difficult to help more without understanding the real use case and the motivation; beware of the XY problem.
Probably the workflow itself is wrong (having a shared file that several users are able to write), and you should approach the overall issue in some other way. For a pet project I would at least recommend using some advisory lock, and accessing & modifying the information only through your own programs (perhaps setuid) using flock; this excludes ordinary editors like gedit and commands like cat. However, your implicit use case seems well suited to a DBMS approach (a database does not have to contain a lot of data; it might be tiny), or to an indexed, locked file such as the GDBM library handles.
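As an illustration of how small such a database can be, here is a sketch with the sqlite3 command-line shell (the path and schema are made up); SQLite does its own locking, so concurrent writers are serialized without any flock on your part:

    sqlite3 /srv/petproject/data.db \
        'CREATE TABLE IF NOT EXISTS note(id INTEGER PRIMARY KEY, body TEXT);'
    sqlite3 /srv/petproject/data.db \
        "INSERT INTO note(body) VALUES('hello'); SELECT * FROM note;"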
Remember that on POSIX systems and Linux, several processes can access (and even modify) the same file simultaneously (unless you use some locking or synchronization).
Reading the Advanced Linux Programming book (freely available) would give you a broader picture (but it does not mention inotify, which appeared after the book was written).
You can use ls -lrt; it lists files sorted by modification time, with the most recently written last, so you can see when the file was last changed. Make sure that you are in the right directory. Note that this only shows past writes, not whether the file is currently open.

Unix and Linux /proc PID system

For my intro to operating systems class we were introduced to the /proc directory and many of the features that can be used to access the data stored under the process IDs available in /proc.
When I was trying out some commands I had learned (and a few I looked up) on the UNIX server hosted by my school, I noticed that some files under a process I created were reported as the file type "TeX font metric data" (a .tfm file). I figured that was the file type in play when my professor showed us how to get data from files like status and map.
When I entered the command cat /proc/(PID)/status to look into the status file, I got a random assortment of characters and whitespace. When I tried the same command on a process I created on my school's Linux server, I was shown the information I expected to see in the status and map files.
My question is:
Why did the Unix server produce random characters from my process's /proc/(PID)/status file, while the Linux server gave me the data I would expect from the same command? Also, is there a way to access the Unix /proc data through the /proc directory?
The Linux procfs you are familiar with, aka /proc, is not a POSIX thing. It is OS-specific, and multiple OSes just happen to implement similar things that are also called /proc.
Because no formal standard covers it, it is allowed to be (and in practice is) different on every *nix-like system that implements it.
My guess about /proc/(PID)/status is that your UNIX is dumping the process status in a binary form instead of easy-to-read plain text.
See also:
Knowing the process status using procfs/<pid>/status
If you can determine WHAT Unix you're on (odds are Solaris, since there's a free variant), you should be able to find a more specific answer.
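A quick way to identify the flavor from a shell:

    uname -sr                      # e.g. "SunOS 5.11" on Solaris, "Linux 4.19.0" on Linux
    cat /etc/release 2>/dev/null   # present on Solaris and its derivatives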

How to automatically load a given so into any newly-started process under Linux?

Under Windows, there are several ways to automatically load a given dll into any newly-started process.
Is it possible to do the same thing under Linux?
There is /etc/ld.so.preload, but that only works for dynamically-linked program binaries; see the ld.so(8) man page for the details.
You also need to be extremely careful: if you specify something that can't be preloaded, you may make your system unbootable, or you may no longer be able to log in.
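A safe way to experiment is to test your shared object per-process with the LD_PRELOAD environment variable before ever touching /etc/ld.so.preload. A minimal constructor-only library (all names here are made up) could be built like this:

    cat > hello_preload.c <<'EOF'
    #include <stdio.h>
    #include <unistd.h>
    /* runs automatically when the dynamic linker loads this object */
    __attribute__((constructor))
    static void on_load(void) {
        fprintf(stderr, "preloaded into pid %ld\n", (long)getpid());
    }
    EOF
    gcc -shared -fPIC -o "$PWD/hello_preload.so" hello_preload.c
    LD_PRELOAD=$PWD/hello_preload.so ls >/dev/null    # harmless per-process test
    # echo "$PWD/hello_preload.so" | sudo tee -a /etc/ld.so.preload   # system-wide; see the warning above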

Monitor STDERR of all processes running on my linux machine

I would like to monitor the STDERR channel of all the processes running on my Linux machine. Monitoring should preferably happen in real time (i.e. while the process is running), but post-processing will also do. It should work without requiring root permissions and without breaking any security features.
I have done a good bit of searching, and found some utilities such as reptyr and screenify, and a few explanations of how to do this with gdb. However, all of these seem to do both too much and too little. Too much in the sense that they take full control of the process's stream handles (i.e. closing the original one and opening a new one). Too little in the sense that they have serious limitations, such as requiring security features like ptrace_scope to be disabled.
Any advice would be highly appreciated!
Maybe this question would get more answers on Super User. The only thing I can think of is to monitor the files and devices already opened as STDERR. Of course, this will not work if STDERR is redirected to /dev/null.
You can get all the file descriptors for STDERR with:
ls -l /proc/[0-9]*/fd/2
If you own the process, accessing its STDERR file descriptor or output file should be possible in the language of your choice without being root.
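For example, without root you can at least enumerate where the stderr of each of your own processes points (other users' /proc/PID/fd directories are not readable to you):

    for fd in /proc/[0-9]*/fd/2; do
        pid=${fd#/proc/}; pid=${pid%%/*}                  # extract the PID from the path
        target=$(readlink "$fd" 2>/dev/null) || continue  # skip processes we may not inspect
        printf '%s\t%s\n' "$pid" "$target"
    done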

Retrieving a list of all file descriptors (files) that a process ever opened in linux

I would like to be able to get a list of all of the file descriptors (now considering this question to pertain to actual files) that a process ever opened during its run. The problem with polling /proc/(PID)/fd/ is that you only get a snapshot in time of what is currently open. Is there a way to force Linux to keep this information around long enough to log it for the entire run of the process?
First, notice that a file descriptor which is opened and then closed by the application is recycled by the kernel (a future open could yield the same file descriptor number). See open(2) and close(2), and read Advanced Linux Programming.
Then consider using strace(1); you'll be able to log all the syscalls (or perhaps just open, socket, close, accept, ...: that is, the syscalls that change the file descriptor table). Of course strace is built on the ptrace(2) syscall (which you probably don't want to use directly).
The simplest way would be to run strace -o /tmp/mytrace.tr yourprog arguments... and to look, e.g. with some pager like less, into the quite big /tmp/mytrace.tr file.
As Gearoid Murphy commented you could restrict the output of strace using e.g. -e trace=file.
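Putting it together, a sketch (yourprog and its arguments are placeholders): trace only file-related syscalls, follow child processes, then post-process the log for every path that was ever opened:

    strace -f -e trace=file -o /tmp/mytrace.tr yourprog arguments...
    grep -oE 'open(at)?\("[^"]*"' /tmp/mytrace.tr | sort -u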
BTW, to debug Makefiles this is the wrong approach; learn about remake instead.
