How to check when a file has been changed in Linux?

I have a Linux command line program.
It produces output to a file.
The output file is modified continuously by the program at short time intervals.
Every time the program changes the file, I want to be notified.
Is there any command-line tool for that, or any script that could help me?

I think incrond is what you need.
incrond (the inotify cron daemon) is a daemon which monitors filesystem events (such as adding a new file, deleting a file, and so on) and executes commands or shell scripts. Its use is generally similar to cron.
Take a look here for some examples http://www.cyberciti.biz/faq/linux-inotify-examples-to-replicate-directories/
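For illustration, a minimal incrontab entry (edited with incrontab -e) might look like the following sketch; the watched path is a placeholder, and $@ and $% are incron's wildcards for the watched path and the event name:

# Hypothetical entry: log a message whenever the file is written and closed.
# incron runs the command directly (no shell), so use a program, not redirection.
/path/to/output.txt IN_CLOSE_WRITE /usr/bin/logger incron: $@ $%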

I think you need one of:
Linux: inotify, or
File Alteration Monitor, or
incron, or
Linux audit.
For a script, you might use something like the following, built on the inotify-tools package:
while true; do
    # wait for the next write/move/create event in the current directory
    change=$(inotifywait -e close_write,moved_to,create .)
    # strip the "./ EVENTS " prefix that inotifywait prints, leaving the file name
    change=${change#./ * }
    if [ "$change" = "myfile" ]; then
        echo "my file changed"
    fi
done

Related

How to log every single command executed from a shell script

I am trying to find a way to record every single command that is executed by any user on the system.
Things that I have come across earlier:
It is possible to view shell commands executed from the terminal using the ~/.bash_history file.
There is a catch here: it logs only those commands which were executed interactively from a bash shell/terminal.
This solves one of my problems. But in addition to it, I would like to log those commands also which were executed as a part of the shell script.
Note: I don't have control over shell script. Therefore, adding verbose mode like #!/bin/bash -xe is not possible.
However, this can be assumed that I have root access as a system administrator.
E.g.: I have another user that has access to the system, and he runs the following shell script from his account:
#!/bin/sh
nmap google.com
and runs it as "$ sh script.sh".
Now, what I want is for the "nmap google.com" command to be logged somewhere once this file is executed.
Thanks in advance. Even a small help is appreciated.
Edit: I would like to clarify that users are unaware that they are being monitored, so I need a solution at the system level (maybe an agent running as root). I cannot depend on users to log suspicious activity. Of course, everyone will avoid such tricks to put the blame on someone else if they do something fishy or wrong.
I am aware that you were asking about Bash and shell scripting and tagged your question accordingly, but with respect to your requirements:
Record every single command that is executed by any user on the system
Users are unaware that they are being monitored
A solution something at system level
I am under the assumption that you are looking for Audit Logging.
So you may take advantage from articles like
Log all commands run by Admins on production servers
Log every command executed by a User
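As a sketch of what such audit logging can look like in practice (assuming auditd is installed; the key name "cmds" is an arbitrary label I made up):

# Log every execve() system call on a 64-bit system, tagged with key "cmds".
auditctl -a always,exit -F arch=b64 -S execve -k cmds

# Later, search the audit log for the recorded commands:
ausearch -k cmds --interpret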
You can run the script in this way:
execute it with bash -x (this overrides the shebang and traces every command),
pipe through ts to prefix every line with a timestamp,
and use tee to log both to the terminal and to a file:
bash -x script.sh |& ts | tee -a /tmp/$(date +%F).log
You may ask the other user to create an alias.
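Since aliases don't take arguments, a small shell function may serve better. A sketch (the name runlogged is made up; ts comes from the moreutils package):

runlogged() {
    # trace the given script, timestamp each line, and log to both
    # the terminal and a dated file under /tmp
    bash -x "$1" |& ts | tee -a "/tmp/$(date +%F).log"
}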
Edit:
You may also add this to /etc/profile (sourced when users log in):
exec > >(tee -a /tmp/$(date +%F).log)
Do the same for error output if needed; keep them split.
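For example, a sketch that keeps stdout and stderr in separate dated files (the file names are arbitrary):

# stdout and stderr each go to their own dated file, still visible on the terminal
exec > >(tee -a "/tmp/$(date +%F).out.log")
exec 2> >(tee -a "/tmp/$(date +%F).err.log" >&2)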

How to count number of times a file was executed on linux

I have an executable file and I would like to know how many times it is executed. The file is located on a network file system. Is there a way to do this with a script using standard Linux utilities? The limitation I have is that I would like to avoid changing the file itself. For example, I will not add a counter file that would be updated by a wrapper script, and I will not make the executable call some API to increment a counter in, e.g., a database.
I don't know of a way to watch a file specifically for execution, but you can construct something with inotify by watching how many times it is opened.
You could have a script like this:
#!/bin/bash
EXEC_CNT=0
FILE_TO_WATCH=/path/to/your/file

# each time inotifywait returns, the file was opened once
while inotifywait -e open "$FILE_TO_WATCH"
do
    ((EXEC_CNT++))
    echo "$FILE_TO_WATCH opened $EXEC_CNT times"
    # Or to store in a file:
    # echo "$FILE_TO_WATCH opened $EXEC_CNT times" >> "$FILE_TO_WATCH.log"
done
In the case of a network share, this script must be run on the computer that shares its file system.

detect when a file opens in bash

I am extremely new to bash scripting, and I need to create a script that will run a function whenever the user opens a given file (/etc/hosts) with any program.
How can I make my script detect when the file is opened?
If you have the inotify-tools package installed (as @TobySpeight mentions in a comment above), then you have the inotifywait command available, so you can do something like this:
while inotifywait -e open /etc/hosts
do
    echo 'hosts was opened!'
done
There are lots of options ... RTFM ... to choose files to watch, etc.
I'm guessing, but I suspect there's a race condition in my code above: if something opens the file while the script is running the echo command, it won't notice the event and could miss it while it loops back. Maybe that doesn't matter, though.
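One way around that race, as a sketch, is inotifywait's monitor mode (-m), which keeps a single watch alive and streams events instead of exiting after the first one:

# -m never exits, so no events are lost between loop iterations
inotifywait -m -e open /etc/hosts | while read -r path events file
do
    echo "hosts was opened! ($events)"
done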

IO Redirection in Linux Bash shell scripts not recreating moved/deleted file?

I am quite new to shell programming on Linux. In my Linux instance, I am redirecting the stdout and stderr of a program to two files in the following manner, and running it in the background:
myprog > run.log 2>> err.log &
This works fine, and I get my desired behavior.
Now there is another background process that monitors run.log and err.log, and moves them to other file names if the log files grow beyond a certain threshold.
e.g. mv err.log err[date-time].log
My expectation is that after this file move happens, err.log will be created again by the myprog output redirection and new output will be written to that new file. However, after my log-monitoring process moves the file, err.log and run.log are never created again, although myprog continues to run without any issues.
Is this the normal behavior in Linux? If it is, what should I do to get my expected behavior working?
Yes, it is. Unless your program reopens the files, it will keep writing to the old file, even if you can't access it by name anymore. In fact, the space used by that removed file only becomes available after every process closes it. If reopening is not possible (i.e., you can't change the executable nor restart it), then a tool like http://httpd.apache.org/docs/2.4/programs/rotatelogs.html is your best bet.
It can rotate logs based on filesize or time, and even call a custom script after a rotation.
Example usage:
myprog | rotatelogs logname.log 50M
This way the log will be rotated whenever the size reaches 50 megabytes.
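As a variant (a sketch; the program and path are placeholders), rotatelogs can also rotate on a time interval and capture stderr alongside stdout:

# rotate daily (86400 seconds); rotatelogs expands the strftime pattern
myprog 2>&1 | rotatelogs /var/log/myprog.%Y-%m-%d.log 86400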
[EDIT: pointed to a newer version of rotatelogs]
If I had to guess, the logging process is associated with a file descriptor, not a file name. When you rename the file, you only change the file name, so the process just keeps logging to the same file. Just a guess. If I were tasked with fixing it, I would stop the logging process and restart it at that point to re-associate it with the right file.
Software that supports log rotation has this behavior explicitly written in. If you look at man logrotate, you'll notice that a typical configuration looks like this:
"/var/log/httpd/access.log" /var/log/httpd/error.log {
rotate 5
mail www#my.org
size 100k
sharedscripts
postrotate
/usr/bin/killall -HUP httpd
endscript
}
...which is to say that it sends a HUP signal to the program whose log has been rotated; that program has a signal handler that reopens its output files.
You can do this in your shell scripts too:
reopen_logs() {
    exec >>run.log 2>>err.log
}
trap reopen_logs HUP
...then, after rotating your logs, run kill -HUP pid_of_yourscript; on the next occasion when the script itself is executing a command (since signal handlers only run between foregrounded executables), it will reopen its output to recreate the log file without needing to restart.
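Tying the two together, a hypothetical logrotate stanza for such a script might look like this (it assumes the script writes its PID to /var/run/myscript.pid at startup, which is an assumption, not something shown above):

/path/to/run.log /path/to/err.log {
    size 100k
    rotate 5
    sharedscripts
    postrotate
        # hypothetical PID file written by the script itself
        kill -HUP "$(cat /var/run/myscript.pid)"
    endscript
}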

Edit shell script while it's running

Can you edit a shell script while it's running and have the changes affect the running script?
I'm curious about the specific case of a csh script I have that batch-runs a bunch of different build flavors and runs all night. If something occurs to me mid-operation, I'd like to go in and add additional commands, or comment out un-executed ones.
If not possible, is there any shell or batch-mechanism that would allow me to do this?
Of course I've tried it, but it will be hours before I see if it worked or not, and I'm curious about what's happening or not happening behind the scenes.
It does affect the running script, at least with bash in my environment, but in a very unpleasant way. See these scripts. First, a.sh:
#!/bin/sh
echo "First echo"
read y
echo "$y"
echo "That's all."
b.sh:
#!/bin/sh
echo "First echo"
read y
echo "Inserted"
echo "$y"
# echo "That's all."
Do
$ cp a.sh run.sh
$ ./run.sh
$ # open another terminal
$ cp b.sh run.sh # while 'read' is in effect
$ # Then type "hello."
In my case, the output is always:
hello
hello
That's all.
That's all.
(Of course it's far better to automate it, but the above example is readable.)
[edit] This is unpredictable, thus dangerous. The best workaround is, as described here, to put everything in a brace group and, just before the closing brace, put an exit. Read the linked answer well to avoid pitfalls.
[added] The exact behavior depends on one extra newline, and perhaps also on your Unix flavor, filesystem, etc. If you just want to see some influence, simply add an "echo foo/bar" line to b.sh before and/or after the "read" line.
Try this... create a file called bash-is-odd.sh:
#!/bin/bash
echo "echo yes i do odd things" >> bash-is-odd.sh
That demonstrates that bash is, indeed, interpreting the script "as you go". Editing a long-running script has unpredictable results: inserted random characters, etc. Why? Because bash resumes reading from its last byte position in the file, so editing shifts the location of the current character being read.
Bash is, in a word, very unsafe because of this "feature". svn and rsync, when used with bash scripts, are particularly troubling, because by default they "merge" the results... editing in place. rsync has a mode that fixes this; svn and git do not.
I present a solution. Create a file called /bin/bashx:
#!/bin/bash
source "$1"
Now use #!/bin/bashx on your scripts and always run them with bashx instead of bash. This fixes the issue - you can safely rsync your scripts.
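Usage might look like this (assuming /bin/bashx has been made executable), the idea being that source reads the whole file before executing it, rather than incrementally:

$ chmod +x /bin/bashx
$ bashx long-running-job.sh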
Alternative (in-line) solution proposed/tested by @AF7:
{
    # your script
    exit $?
}
Curly braces protect against edits (bash must read the whole brace group before executing any of it), and the exit protects against appends. Of course, we'd all be much better off if bash came with an option like -w (whole file), or something, that did this.
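A minimal sketch of a script protected this way (the body is a placeholder):

#!/bin/bash
{
    echo "starting long batch..."
    sleep 3600   # stand-in for the real work
    echo "done"
    exit $?
}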
Break your script into functions, and each time a function is called, source it from a separate file. Then you can edit the files at any time, and your running script will pick up the changes the next time the function is sourced.
foo() {
    source foo.sh
}
foo
Good question!
Hope this simple script helps
#!/bin/sh
echo "Waiting..."
echo "echo \"Success! Edits to a .sh while it executes do affect the executing script! I added this line to myself during execution\" " >> ${0}
sleep 5
echo "When I was run, this was the last line"
It does seem, under Linux, that changes made to an executing .sh are enacted by the executing script, if you can type fast enough!
An interesting side note - if you are running a Python script it does not change. (This is probably blatantly obvious to anyone who understands how shell runs Python scripts, but thought it might be a useful reminder for someone looking for this functionality.)
I created:
#!/usr/bin/env python3
import time
print('Starts')
time.sleep(10)
print('Finishes unchanged')
Then in another shell, while this is sleeping, edit the last line. When this completes it displays the unaltered line, presumably because Python compiles the whole script to bytecode in memory before running it. The same happens on Ubuntu and macOS.
I don't have csh installed, but
#!/bin/sh
echo Waiting...
sleep 60
echo Change didn't happen
Run that, quickly edit the last line to read
echo Change happened
Output is
Waiting...
/home/dave/tmp/change.sh: 4: Syntax error: Unterminated quoted string
Hrmph.
I guess edits to the shell scripts don't take effect until they're rerun.
If this is all in a single script, then no, it will not work. However, if you set it up as a driver script calling sub-scripts, then you might be able to change a sub-script before it's called, or before it's called again if you're looping, and in that case I believe those changes would be reflected in the execution.
I'm hearing no... but what about with some indirection:
BatchRunner.sh:
    Command1.sh
    Command2.sh
Command1.sh:
    runSomething
Command2.sh:
    runSomethingElse
Then you should be able to edit the contents of each command file before BatchRunner gets to it, right?
OR
A cleaner version would have BatchRunner read a single file and consecutively run it one line at a time. Then you should be able to edit this second file while the first is running, right? A sketch of that idea follows.
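A minimal sketch of that "cleaner version" (the file name commands.txt is made up): it re-reads the file on every iteration, so edits to lines that haven't run yet take effect. It stops at the first empty line.

#!/bin/sh
n=1
while :; do
    # fetch the n-th line of the command file, re-reading it each time
    cmd=$(sed -n "${n}p" commands.txt)
    [ -z "$cmd" ] && break
    sh -c "$cmd"
    n=$((n + 1))
done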
Use Zsh instead for your scripting.
AFAICT, Zsh does not exhibit this frustrating behavior.
Usually, it is uncommon to edit your script while it's running. All you have to do is put in control checks for your operations. Use if/else statements to check for conditions: if something fails, do this; else do that. That's the way to go.
Scripts don't work that way; the executing copy is independent from the source file that you are editing. Next time the script is run, it will be based on the most recently saved version of the source file.
It might be wise to break this script into multiple files and run them individually. This will reduce the execution time to failure (i.e., split the batch into one script per build flavor, running each one individually to see which one is causing the trouble).
