How do I determine the commands being issued when I use a GUI? - linux

I am working on a Linux machine (running openSUSE 13.1 w/ KDE, specifically) and I would like to determine what commands are actually being issued in the background when I do something with an application's GUI.
My question is very similar to the following one, which has received no answer:
https://stackoverflow.com/questions/20930239/how-can-i-see-the-commands-being-passed-in-backend-of-a-gui-application
If it helps at all, the specific task I am trying to accomplish is figuring out the command-line equivalent of sending a file to the Trash in KDE's Dolphin utility. I would like to make an alias for this functionality in my .bashrc so that I have a "gentler" alternative to rm. But I would rather know the answer to my more general question so that I can do similar things in the future.
My naive guess was that a log file might exist somewhere. Then I could do a task with a GUI and just tail that log file afterward to see what the underlying commands were for what I just did in the GUI. As far as I can tell, however, no such log exists.

To move a file foo to your trash bin, try
mv foo ~/.local/share/Trash/files/
(on desktops that follow the freedesktop.org trash specification, which includes KDE, the trash directory is ~/.local/share/Trash/files/ rather than $HOME/Trash/). You could make that a shell function in your .bashrc:
movetotrash() {
    mv -- "$@" ~/.local/share/Trash/files/
}
Using "$@" rather than $* keeps file names containing spaces intact, and -- protects names that start with a dash.
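Since the question is specifically about KDE's Dolphin, a closer command-line equivalent (assuming kioclient, which ships with KDE, is available) goes through KIO, so the trash metadata needed for "Restore" gets written too:
kioclient move foo trash:/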
AFAIK, most GUI applications don't have log files. They are generally free software (and use free software libraries), so you could study their source code and improve it. Try to interact with their communities (and use strace, as I commented).
BTW, not every GUI application issues commands. Some do (e.g. IDEs do fork commands like gcc), but others make system calls directly: a file manager probably won't fork an mv, but will instead copy the contents itself or call the rename(2) syscall.
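For example, to watch what Dolphin actually does when you trash a file, you could run it under strace (a sketch; it assumes dolphin is in your PATH):
strace -f -e trace=file,process -o dolphin.strace dolphin
Here -f follows child processes, -e trace=file,process restricts logging to file-related syscalls and process creation (execve), and -o writes the log to dolphin.strace. Grepping that log for rename or execve tells you whether the GUI forked a command or made the syscall itself.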

Related

Linux ~/.bashrc export most recent directory

I have several environment variables in my ~/.bashrc that point to different directories. I am running a program that creates a new folder every time it runs and puts a time stamp in the directory name. For example, baseline_2015_11_10_15_40_31-model-stride_1-type_1. Is there a way of making a variable that always points to the last created directory?
cd $CURRENT_DIR
Your mileage may vary a lot depending on what exactly you need to accomplish. However, in almost all cases I would advise against doing something as weird and unreliable as what's described below; revise your architecture instead, to avoid hunting for directories.
Method 1
If your program creates a subdirectory inside current directory, and you always know that nothing else happens in that directory and you want a subdirectory with latest creation timestamp, then you can do something like:
your_complex_program_that_creates_dir
TARGET_DIR=$(ls -t1 --group-directories-first | head -n1)
cd "$TARGET_DIR"
Method 2
If a lot of stuff happens on the system, then you'll end up monitoring what your program does with the filesystem and reacting when it creates a directory. There are two ways to do that, using strace or inotify; both are relatively complex. Here's the way to do it with strace:
strace -o some_temp_file.strace your_complex_program_that_creates_dir
TARGET_DIR=$(sed -ne '/^mkdir(/ { s/^mkdir("\(.*\)", .*).*$/\1/; p }' some_temp_file.strace)
cd "$TARGET_DIR"
This snippet runs your_complex_program_that_creates_dir under the control of strace, which essentially logs every system call your program makes into a file. Afterwards, this file is searched for a line like
mkdir("target_dir", 0777) = 0
and the value "target_dir" is extracted into a variable. Note that:
if your program creates more than one directory (even for temporary purposes, deleting them afterwards, or whatever), there's really no way to determine which of them to grab
running a program under strace is much slower than normal, due to the huge overhead of logging all the syscalls
it's super non-portable: facilities like strace exist on most modern OSes, but implementations vary a lot
A solution with inotify works the same way but uses a different mechanism: an OS hook logs all the operations the process performs on the file system, and you react to them (remembering the created directory).
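A rough sketch with inotifywait (from the inotify-tools package, assumed installed), watching the current directory:
# Start watching before the program runs so the event is not missed;
# directory creation shows up in the event field as CREATE,ISDIR.
inotifywait -m -q -e create --format '%e %f' . > events.txt &
WATCH_PID=$!
your_complex_program_that_creates_dir
kill "$WATCH_PID"
TARGET_DIR=$(awk '/ISDIR/ { print $2; exit }' events.txt)
cd "$TARGET_DIR"
(This assumes the created directory name contains no spaces.)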
However, I repeat: I'd strongly advise against using any of these solutions beyond research interest.

How to check if a file is opened in Linux?

The thing is, I want to track whether a user tries to open a file on a shared account. I'm looking for any record/technique that helps me know, at run time, if the file in question is open.
I want to create a script which monitors if the file is open, and if it is, I want it to send an alert to a particular email address. The file I'm thinking of is a regular file.
I tried using lsof | grep filename for checking if a file is open in gedit, but the command doesn't return anything.
Actually, I'm trying this for a pet project, and thus the question.
The command lsof -t filename lists the IDs of all processes that have the particular file open. lsof -t filename | wc -w gives you the number of processes currently accessing the file.
The fact that a file has been read into an editor like gedit does not mean that the file is still open. The editor most likely opens the file, reads its contents and then closes the file. After you have edited the file you have the choice to overwrite the existing file or save as another file.
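Given that, one crude but workable approach for your script is to poll with lsof and alert when the file shows up as open. A minimal sketch, assuming a working mail command; the path and address are placeholders:
#!/bin/sh
WATCHED=/path/to/secure_file
ALERT_ADDR=admin@example.com
while sleep 5; do
    # lsof exits 0 when at least one process has the file open
    if lsof -t "$WATCHED" >/dev/null 2>&1; then
        printf '%s is currently open\n' "$WATCHED" | mail -s "file opened" "$ALERT_ADDR"
    fi
done
Note that this mails repeatedly while the file stays open and can miss short-lived opens between polls, which is exactly why the inotify-based approach below is more robust.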
You could (in addition to the other answers) use the Linux-specific inotify(7) facilities.
My understanding is that you want to track one (or a few) particular given file(s), with a fixed file path (actually a given i-node). E.g. you would want to track when /var/run/foobar is accessed or modified, and do something when that happens.
In particular, you might want to install and use incrond(8) and configure it through incrontab(5).
If you want to run a script when some given file (on a native local file system, e.g. Ext4, BTRFS, ..., but not NFS) is accessed or modified, use inotify; incrond is made for exactly that purpose.
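For illustration, an incrontab(5) entry might look like this (the watched path and the script are placeholders; $@ expands to the watched path and $# to the event-related file name):
/var/run/foobar IN_ACCESS,IN_MODIFY /usr/local/bin/alert.sh $@ $#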
PS. AFAIK, inotify doesn't work well for remote network files, e.g. NFS filesystems (in particular when another NFS client machine is modifying a file).
If the files you care about are source files, you might be interested in revision control systems (like git) or build systems (like GNU make); in a certain way these tools are all about file modification.
You could also have the particular file sit in some FUSE filesystem, and write your own FUSE daemon.
If you can restrict and modify the programs accessing the file, you might want to use advisory locking, e.g. flock(2), lockf(3).
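For example, with the flock(1) utility from util-linux, cooperating scripts can serialize access like this (the lock-file path is a placeholder):
(
    flock -n 9 || { echo "file is busy"; exit 1; }
    # ... read or modify the shared file safely here ...
) 9>/var/lock/myfile.lock
Every writer must use the same lock file; programs that ignore it (like gedit) are not constrained, which is what "advisory" means.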
Perhaps the data sitting in the file should be in some database instead (e.g. sqlite, or a real DBMS like PostgreSQL or MongoDB). ACID properties are important...
Notice that the filesystem and the mount options may matter a lot.
You might want to use the stat(1) command.
It is difficult to help more without understanding the real use case and the motivation. You should avoid the XY problem.
Probably the workflow is wrong (having a shared file that several users are able to write), and you should approach the overall issue in some other way. For a pet project I would at least recommend using some advisory lock, and accessing & modifying the information only through your own programs (perhaps setuid) using flock; this excludes ordinary editors like gedit or commands like cat. However, your implicit use case seems well suited to a DBMS approach (a database does not have to contain a lot of data; it might be tiny), or to some locked indexed file such as the GDBM library handles.
Remember that on POSIX systems and Linux, several processes can access (and even modify) the same file simultaneously (unless you use some locking or synchronization).
Reading the Advanced Linux Programming book (freely available) would give you a broader picture (but it does not mention inotify, which appeared after the book was written).
You can use ls -lrt: it lists files sorted by modification time, most recently written last, so it shows recent write activity rather than whether the file is currently open. Make sure that you are in the exact directory.

Linux equivalent of Windows "Startup" folder

I want to run a program when my embedded Linux's desktop has started up, in the same way as Windows runs programs in the "Startup" folder. How can I do this?
Specifically, my target hardware is Beaglebone Black, the Debian variant (rev C board). The Window Manager is the default one.
In Linux these are called init scripts and usually sit in /etc/init.d. How they should be defined varies between distros, but today many use the Linux Standard Base (LSB) init script format.
Good readings on this:
https://wiki.debian.org/LSBInitScripts
https://www.debian-administration.org/article/28/Making_scripts_run_at_boot_time_with_Debian
There are multiple ways to start a program, it turns out. LXDE (the desktop environment) supports auto-starting .desktop files placed in either ~/.config/autostart or /etc/xdg/autostart - hooray!
http://wiki.lxde.org/en/Autostart
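A minimal autostart entry is just a .desktop file, saved for example as ~/.config/autostart/myprog.desktop (the name and Exec path are placeholders):
[Desktop Entry]
Type=Application
Name=myprog
Exec=/home/debian/bin/myprog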
Except... though I can run a simple program as proof-of-concept in this way, when I try to run mine, it fails. I can't figure out why. The file
.xsession-errors.old
contains X server errors ("resource temporarily unavailable").
I am now using another mechanism: running the code from a shell script (this is necessary because I need to specify a working directory for the program). This uses the "autostart" file in /etc/xdg/lxsession/, and at least it works. Well, kind of. I either have to "sleep 5" before running, or prefix the command with an @ symbol, which makes lxsession retry it if it fails. It looks a little like something my code depends on is not in place at the precise moment the autostart mechanism runs it, and I can find no way of ensuring startup order. This is plainly a crock of stinky stuff.
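For reference, that autostart file (typically /etc/xdg/lxsession/LXDE/autostart on a stock LXDE image, though the profile directory may differ) is just a list of commands, one per line, with @ marking commands that lxsession should restart on failure; the last line here is a placeholder for my wrapper script:
@lxpanel --profile LXDE
@pcmanfm --desktop --profile LXDE
@/home/debian/bin/start_myprog.sh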

the Role of "/etc" in Zen of Unix / GNU Linux

I have a question on the Zen of Unix / GNU Linux, which feels like an aha moment for me...
Apparently ANY standard Unix/Linux program has a config file, and most of these are located in the /etc directory.
Can we derive the concept as follows:
1- As an application developer, you should design your software to have a customization file (possibly located in /etc).
2- Then, admins or users can SET these configs based on their needs and run your program.
3- Changing the behavior of your program should ONLY depend on its config file.
If you're asking whether this is true, that tends to be the convention, yes. Keep in mind that developers are free to design their programs to run however they want to and that they tend to follow this pattern only for convenience of similarity.
Other patterns you may see:
Programs with no global settings and only per-user settings may store their settings in ~/.[something], or maybe somewhere else entirely. Many programs do this AND use /etc. Bash is a good example, using /etc/profile (and, on many distros, /etc/bash.bashrc) for default settings and ~/.bashrc for user settings.
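The usual lookup order is easy to sketch in shell (myprog.conf and .myprogrc are hypothetical names):
# system-wide defaults first, per-user overrides last so they win
[ -r /etc/myprog.conf ] && . /etc/myprog.conf
[ -r "$HOME/.myprogrc" ] && . "$HOME/.myprogrc"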
Very large standalone installations of some programs may package all of their files into their own .../etc, .../bin, etc. directories, and will not use the typical system directories at all. An example of this is charybdis, an ircd, which stores everything in a folder specified at compile time (mine lives in /var/ircd, so I have /var/ircd/etc, /var/ircd/bin, /var/ircd/lib, ...).
OS X is a certified Unix and tries not to use /etc: in effect, only Apple-supplied programs should change /etc; Apple supplies alternatives.
However, for all OSes, including Windows, you do have a separate configuration/customisation file (or, in Windows, the registry), and there probably need to be two of these: one that is set and controlled by admins, and one for changes the user makes. On Linux the former can use /etc; see the Filesystem Hierarchy Standard.

Homework: How can I log processes for auditing using the bash shell?

I am very new to Linux and am sorry for the newbie questions.
I had a homework extra-credit question that I was trying to do but failed to get.
Q. Write a security shell script that logs the following information for every process: User ID, time started, time ended (0 if the process is still running), and whether the process has tried to access a secure file (stored as either yes or no). The log created is called process_security_log, where each of the above pieces of information is stored on a separate line and each entry follows immediately (that is, there are no blank lines). Write a shell script that will examine this log and output the User ID of any process that is still running that has tried to access a secure file.
I started by trying to just capture the user and echo it, but failed.
output=`ps -ef | grep [*]`
set -- $output
User=$1
echo $User
The output of ps is both insufficient for, and incapable of, producing the data required by this question.
You need something like auditd, SELinux, or straight-up kernel hacks (i.e. in fork.c) to do anything remotely in the realm of security logging.
Update
Others have made suggestions to use shell command logging, and ps and friends (proc or sysfs). They can be useful, and do have their place (obviously). I would argue that they shouldn't be relied on for this purpose, especially in an educational context.
... whether the process has tried to access a secure file (stored as either yes or no)
This seems to be the one requirement that the other answers are ignoring. I stand by my original answer, but as Daniel points out, there are other interesting ways to gather this data:
systemtap
perf
LTTng
For an educational exercise these tools will help provide a more complete answer.
Since this is homework, I'm assuming that the scenario isn't a real-world scenario, and is merely a learning exercise. The shell is not really the right place to do security auditing or process accounting. However, here are some pointers that may help you discover what you can do at the shell prompt.
You might set the bash PROMPT_COMMAND to do your process logging.
You can tail or grep your command history for use in logging.
You can use /usr/bin/script (usually found in the bsdutils package) to create a typescript of your session.
You can run ps in a loop, using subshells or the watch utility, to see what processes are currently running.
You can use pidof or pgrep to find processes more easily.
You can modify your .bashrc or other shell startup file to set up your environment or start your logging tools.
As a starting point, you might begin with something trivial like this:
$ export PROMPT_COMMAND='history | tail -n1'
56 export PROMPT_COMMAND='history | tail -n1'
$ ls /etc/passwd
/etc/passwd
57 ls /etc/passwd
and build in any additional logging data or process information that you think necessary. Hope that gets you pointed in the right direction!
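Putting a couple of these together, a first cut at a logger might be (a sketch; the log path and format are one arbitrary choice):
# append user, epoch timestamp, and the last history entry after every command
export PROMPT_COMMAND='echo "$USER $(date +%s) $(history 1)" >> "$HOME/process_security_log"'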
Take a look at the /proc pseudo-filesystem.
Inside of this, there is a subdirectory for every process that is currently running: process [pid] has its information available in /proc/[pid]/. Inside of that directory, you might make use of /proc/[pid]/stat or /proc/[pid]/status to get information about which user started the process and when.
I'm not sure what the assignment means by a "secure file," but if you have some way of determining which files are secure, you can get information about open files (including their names) through /proc/[pid]/fd/ and /proc/[pid]/fdinfo/.
Is /proc enough for true security logging? No, but /proc is enough to get information about which processes are currently running on the system, which is probably what you need for a homework assignment about shell scripting. Also, outside of this class you'll probably find /proc useful later for other purposes, such as seeing the mapped pages for a process. This can come in handy if you're writing a stack trace utility or want to know how they work, or if you're debugging code that uses memory-mapped files.
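As a small illustration of reading /proc from a script, this prints the PIDs of processes that currently have a given file open (the path is a placeholder, and you can only read other users' fd entries as root):
# every open file descriptor appears as a symlink under /proc/<pid>/fd/
for fd in /proc/[0-9]*/fd/*; do
    if [ "$(readlink "$fd" 2>/dev/null)" = "/path/to/secure_file" ]; then
        pid=${fd#/proc/}; echo "${pid%%/*}"
    fi
done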
