What script/user/process is writing to a file? - Linux

I'm trying to find out what script/user/process is writing to a file.
I have 4 hosts that all have the same NFS share mounted.
I have made a script and put it on all of the hosts, with no success.
Can somebody please help with this?
The script runs from 5:50 to 6:10, which is the period when my file gets written to.
This is the script that I made:
#!/bin/sh
log=~/file-access.log

check_time_to_run() {
    tempTime=$1
    # %k%M yields e.g. 555 at 05:55 and 1005 at 10:05
    if [ $tempTime -gt 555 ] && [ $tempTime -lt 610 ]; then
        # Between 5:55 and 6:10
        lsof /cdpool/Xprint/Liste_Drucker >> "$log"
    else
        # Outside the interval
        exit 1
    fi
}

while true; do
    currTime=`date +%k%M`
    check_time_to_run $currTime
    sleep 0.1s # fractional sleep requires GNU sleep
done

Don't use a shell script for this at all. Instead, install sysdig, and run:
sysdig 'fd.filename=/cdpool/Xprint/Liste_Drucker'
...leave that open, and whenever anything writes to or reads from that file, an appropriate log message will be printed.
If you want to print both the username and the process name (with arguments) for the job printing to the file, the following format string will do so:
sysdig \
-p '%user.name %proc.name - %evt.dir %evt.type %evt.args' \
'fd.filename=/cdpool/Xprint/Liste_Drucker'
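If installing sysdig is not an option, the Linux audit subsystem can watch the file instead. A minimal sketch, assuming auditd is installed and running; note that on an NFS mount each client only sees its own accesses, so the watch has to be placed on every host, and audit path watches may behave differently on network filesystems (the key name liste-drucker is just an illustrative label):
# watch the file for writes and attribute changes, tagged with a key
auditctl -w /cdpool/Xprint/Liste_Drucker -p wa -k liste-drucker
# later, list the recorded events (process, PID, UID, syscall, ...)
ausearch -k liste-drucker -i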

Related

How to check if file size is not incrementing, and if not, kill the $$ of the script

I am trying to figure out a way to monitor the files I am dumping from my script. If no growth is seen in the child files, then kill my script.
I am doing this to free up the resources when they are not needed. Here is what I have come up with, but I think my approach is going to add burden to the CPU. Can anyone please suggest a more efficient way of doing this?
The script below is supposed to poll every 15 seconds and collect two sizes of the same file; if the two samples are the same, it exits.
checkUsage() {
    while true; do
        sleep 15
        fileSize=$(stat -c%s "$1")
        sleep 10
        fileSizeNew=$(stat -c%s "$1")
        if [ "$fileSize" == "$fileSizeNew" ]; then
            echo "[Error]: No activity noted on this file in the last 10 sec. Exiting..."
            kill -9 $$
        fi
    done
}
And I am planning to call it as follows (in the background):
checkUsage /var/log/messages &
I would also accept a solution that monitors a tail command and exits if nothing is being printed. NOT SURE WHY PEOPLE ARE CONFUSED: the end goal of this question is to check whether a file has been edited in the last 15 seconds. If not, exit or throw some error.
I have achieved this with the script above, but I don't know if it is the smartest way of achieving it. I have asked this question to hear from others whether there is an alternative or better way of doing it.
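For the tail-based variant mentioned above, here is a minimal sketch using bash's read -t timeout (the 15-second window comes from the question; note the while loop runs in a pipeline subshell, so it reports staleness through its exit status rather than killing the parent script directly):
#!/bin/bash
# exit the subshell with status 1 if the file produces no new line for 15 seconds
tail -f /var/log/messages | while :; do
    # read -t fails when no line arrives within the timeout
    if ! read -t 15 -r line; then
        echo "[Error]: no new lines in 15 sec" >&2
        exit 1
    fi
done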
I would base the check on the file's modification time instead of its size, so something like this (untested code):
checkUsage() {
    while true; do
        # Fail if the file's mtime is more than 'second arg' seconds old, defaulting to 10 seconds
        if [ $(( $(date +"%s") - $(stat -c%Y /var/log/messages) )) -gt ${2-10} ]; then
            echo "[Error]: No activity noted on this file in the last ${2-10} sec. Exiting..."
            return 1
        fi
        # Sleep 'first arg' seconds, 15 seconds by default
        sleep ${1-15}
    done
}
The idea is to compare the file's mtime with the current time; if the difference is greater than the second argument (in seconds), print the message and return.
And then I would call it like this later (or with no args to use the defaults):
checkUsage 20 10 || exit 1
This exits the script with code 1 when the function returns from its otherwise infinite loop, which only happens once the file stops being modified.
Edit: reading me again, the target file could be a parameter too, to allow a better reuse of the function, left as an exercise to the reader.
If you are on Linux, on a local file system (ext4, Btrfs, ...), not a network file system, then you could consider the inotify(7) facilities: something can be triggered when a file or directory changes or is accessed.
In particular, you might have some incron job through an incrontab(5) file; maybe it could communicate with some other job...
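A minimal sketch of such an incrontab entry, assuming incron is installed and /var/log/messages is the file being watched (the touched marker file is only an illustrative choice):
# incrontab -e
# <path>  <event mask>  <command>
/var/log/messages  IN_MODIFY  /usr/bin/touch /tmp/messages-last-activity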
PS. I am not sure I understand what you really want to do...
I suppose an external program is modifying /var/log/messages.
If this is the case, below is my script (with minor changes to yours):
#!/bin/bash
# Bash script to monitor changes to a file
checkUsage() # Note that U is in caps
{
    while true
    do
        sleep 15
        fileSize=$(stat -c%s "$1")
        sleep 10
        fileSizeNew=$(stat -c%s "$1")
        if [ "$fileSize" == "$fileSizeNew" ]
        then
            echo "[Notice:] no changes noted in $1 : gracefully exiting"
            exit # previously this was kill -9 $$
                 # changing this to exit ends the program gracefully;
                 # use kill -9 to kill a process which is not under your control,
                 # since kill -9 sends the SIGKILL signal
        fi
    done
}
checkUsage "$1" # I have added this to your script
# End of the script
Save the script as checkusage and run it like this:
./checkusage /var/log/messages &
Edit:
Since you're looking for better solutions, I would suggest inotifywait; thanks to the other answerer for the suggestion.
Below would be my code :
while inotifywait -t 10 -q -e modify "$1" >/dev/null
do
    sleep 15 # as you said, the polling should happen every 15 seconds
done
echo "Script exited gracefully : $1 has not been changed"
Below are the details from the inotifywait manpage:
-t <seconds>, --timeout <seconds>
    Exit if an appropriate event has not occurred within <seconds> seconds.
    If <seconds> is zero (the default), wait indefinitely for an event.
-e <event>, --event <event>
    Listen for specific event(s) only. The events which can be listened for
    are listed in the EVENTS section. This option can be specified more than
    once. If omitted, all events are listened for.
-q, --quiet
    If specified once, the program will be less verbose. Specifically, it
    will not state when it has completed establishing all inotify watches.
modify (event)
    A watched file or a file within a watched directory was written to.
Notes
You might have to install inotify-tools first to make use of the inotifywait command (on Debian/Ubuntu, for example, sudo apt-get install inotify-tools). Check the inotify-tools page on GitHub.

Cron job running while files are still being copied

I have a script that runs every 2 minutes, looking into a folder and checking if new files have been delivered.
The problem is that sometimes the script starts while the files are still being copied (the files being quite big), so an email is sent to the customer saying "file x is empty", even though the file is in fact good; the correct email is received afterwards.
To avoid overlap between different processing runs, I have set up a CRON_RUN file like this: if files are currently being processed, my script exits; when the program finishes, the CRON_RUN file is removed, allowing the next process to start, if one exists of course.
Now the processing no longer overlaps, but I have observed that
if I deliver a file to the input folder and the processing starts during the copy, the program still identifies an EMPTY file, so it is still not working as expected.
Is there a command like this?
if copy process is running or file checksum is not complete then
exit
else
do my program
fi
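There is no single built-in test for "copy in progress", but one common heuristic is to ask lsof whether any process still has the file open before touching it; a minimal sketch (the glob path is only illustrative):
for f in /server/oracle/apps/delivery/*/*.txt; do
    # lsof exits 0 if some process still has the file open; skip such files for now
    if lsof -- "$f" >/dev/null 2>&1; then
        continue
    fi
    # safe to process "$f" here
done
If you also control the delivering side, a more robust convention is to copy to a temporary name and mv the file into place once the copy completes, since a rename within one filesystem is atomic.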
My current implementation so far is:
#!/bin/bash
MYDIR=`dirname $0`
ext_dir="/server/oracle/apps/delivery"
if test -f $ext_dir/CRON_RUN
then
echo $0 CRON RUN >&2
exit 1
fi
touch $ext_dir/CRON_RUN
export CRON_RUN=CRON_RUN
for......
my program....
export CRON_RUN=""
rm -f $ext_dir/CRON_RUN
Could you please tell me if I understood correctly how to use the flock command?
#!/bin/bash
MYDIR=`dirname $0`
ext_dir="/server/oracle/apps/delivery"
if test -f $ext_dir/CRON_RUN
then
echo $0 CRON RUN >&2
exit 1
fi
touch $ext_dir/CRON_RUN
export CRON_RUN=CRON_RUN
# Running the program for each input folder.
# Inside the input, some files might still be being copied;
# if a copy is in progress the program should exit,
# otherwise the load-input script should start.
for input_folder in electro food comp rof
do
(
# Wait for lock on /server/oracle/apps/delivery/$input_folder/.load-input.exclusivelock(fd 200) for 10 seconds
flock -x -w 10 200 || exit 1
# Do stuff
$MYDIR/load-input $input_folder $mylog > $mylog 2>&1
) 200>/server/oracle/apps/delivery/$input_folder/.load-input.exclusivelock
done
export CRON_RUN=""
rm -f $ext_dir/CRON_RUN
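The flock usage above matches the boilerplate from the flock(1) man page. If the goal is simply to keep two cron invocations of the same script from overlapping, a simpler variant (a sketch; the lock path is an illustrative choice) is to let flock wrap the whole job, which also makes the hand-rolled CRON_RUN file unnecessary, since the lock is released automatically when the process exits:
# in the crontab entry: run the job only if no other instance holds the lock
*/2 * * * * flock -n /var/lock/load-input.lock /path/to/your-script.sh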

Run file for game server

Alright, so I have a .sh file that I run to launch my server with the specifics I'm looking for. It launches the server through screen into its own screen session. Here's the code for my run.sh file.
#!/bin/bash
# run.sh
# conversion of run.bat to shell script.
echo "Protecting srcds from random crashes"
echo "Now launching Garrys Mod RequiemRP"
sleep 5
screen -A -m -d -S gmserver ./srcds_run -console -game garrysmod +maxplayers 32 +map rp_downtown_v6 -autoupdate
echo "Server initialized. Type screen -x to resume"
Usually I use a batch file to do this, but I'm now using Linux for my server hosting. Part of that batch file was that if srcds (the server itself) were to crash, the run.bat file would restart the server automatically. I'm looking to do the same with my run.sh file, but I'm unsure how.
Perhaps you could make a service or script that periodically checks whether the process is running. The following checks if it is up and, if it isn't, starts it when executed.
#!/bin/bash
# ps cax prints bare command names, so grep won't match its own command line
if ps cax | grep -q srcds; then
    exit   # server already running
else
    bash /path/to/run.sh   # start it again
fi
I tested the command on my virtualized Debian 9 system, and it works.
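To make the check run periodically, one option (a sketch, with illustrative paths) is a crontab entry:
# crontab -e: run the watchdog every minute
* * * * * /bin/bash /path/to/check_srcds.sh
Alternatively, run.sh itself can restart the server in a loop inside the screen session, so a crash is handled immediately rather than at the next cron tick:
#!/bin/bash
# keep relaunching srcds whenever it exits
while true; do
    ./srcds_run -console -game garrysmod +maxplayers 32 +map rp_downtown_v6 -autoupdate
    echo "srcds exited; restarting in 5 seconds" >&2
    sleep 5
done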

Condor job - running shell script as executable

I’m trying to run a Condor job where the executable is a shell script which invokes certain Java classes.
Universe = vanilla
Executable = /script/testingNew.sh
requirements = (OpSys == "LINUX")
Output = /locfiles/myfile.out
Log = /locfiles/myfile.log
Error = /locfiles/myfile.err
when_to_transfer_output = ON_EXIT
Notification = Error
Queue
Here's the content of the /script/testingNew.sh file
(just because I'm getting the error, I have removed the Java commands for now):
#!/bin/sh
inputfolder=/n/test_avp/test-modules/data/json
srcFolder=/n/test_avp/test-modules
logsFolder=/n/test_avp/test-modules/log
libFolder=/n/test_avp/test-modules/lib
confFolder=/n/test_avp/test-modules/conf
twpath=/n/test_avp/test-modules/normsrc
dataFolder=/n/test_avp/test-modules/data
scriptFolder=/n/test_avp/test-modules/script
locFolder=/n/test_avp/test-modules/locfiles
bakUpFldr=/n/test_avp/test-modules/backupCurrent
cd $inputfolder
filename=`date -u +"%Y%m%d%H%M"`.txt
echo $filename $(date -u)
mkdir $bakUpFldr/`date -u +"%Y%m%d"`
dirname=`date -u +"%Y%m%d"`
flnme=current_json_`date -u +"%Y%m%d%H%M%S"`.txt
echo DIRNameis $dirname Filenameis $flnme
cp $dataFolder/current_json.txt $bakUpFldr/`date -u +"%Y%m%d"`/current_json_$filename
cp $dataFolder/current_json.txt $filename
mkdir $inputfolder/`date -u +"%Y%m%d"`
echo Creating Directory $(date -u)
mv $filename $filename.inprocess
echo Created Inprocess file $(date -u)
Also, here’s the error log from Condor –
000 (424639.000.000) 09/09 16:08:18 Job submitted from host: <135.207.178.237:9582>
...
001 (424639.000.000) 09/09 16:08:35 Job executing on host: <135.207.179.68:9314>
...
007 (424639.000.000) 09/09 16:08:35 Shadow exception!
Error from slot1#marcus-8: Failed to execute '/n/test_avp/test-modules/script/testingNew.sh': (errno=8: 'Exec format error')
0 - Run Bytes Sent By Job
0 - Run Bytes Received By Job
...
012 (424639.000.000) 09/09 16:08:35 Job was held.
Error from slot1#marcus-8: Failed to execute '/n/test_avp/test-modules/script/testingNew.sh': (errno=8: 'Exec format error')
Code 6 Subcode 8
...
Can anyone explain what's causing this error, and how to resolve it?
The testingNew.sh script runs fine on the Linux box if executed on a network machine separately.
Thanks a lot!! - GR
The cause, in our case, was the shell script using DOS line endings instead of Unix ones.
The Linux kernel will happily try to feed the script not to /bin/sh (as you intend) but to /bin/sh^M. (Do you see that trailing carriage-return character? Neither do I, but the Linux kernel does.) That interpreter doesn't exist, so as a last resort the kernel tries to execute the script as a binary executable, which fails with the given error.
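A quick way to confirm and fix this (file and sed are standard tools; dos2unix may need to be installed separately):
# "with CRLF line terminators" in the output confirms DOS endings
file testingNew.sh
# strip the carriage returns in place
sed -i 's/\r$//' testingNew.sh
# or: dos2unix testingNew.sh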
You need to specify input as:
input = /dev/null
Source: Submitting a job to Condor

Redirecting Output of Bash Child Scripts

I have a basic script that outputs various status messages. e.g.
~$ ./myscript.sh
0 of 100
1 of 100
2 of 100
...
I wanted to wrap this in a parent script in order to run a sequence of child scripts and send an email upon overall completion, e.g. topscript.sh:
#!/bin/bash
START=$(date +%s)
/usr/local/bin/myscript.sh
/usr/local/bin/otherscript.sh
/usr/local/bin/anotherscript.sh
RET=$? # note: this captures only anotherscript.sh's exit status
END=$(date +%s)
echo -e "Subject:Task Complete\nBegan on $START and finished at $END and exited with status $RET.\n" | sendmail -v group#mydomain.com
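As an aside, since RET=$? only reflects the last child, a sketch that remembers whether any child failed (same script names as above) might look like:
#!/bin/bash
STATUS=0
for s in /usr/local/bin/myscript.sh /usr/local/bin/otherscript.sh /usr/local/bin/anotherscript.sh; do
    "$s" || STATUS=$?   # keep the most recent non-zero exit status
done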
I'm running this like:
~$ topscript.sh >/var/log/topscript.log 2>&1
However, when I run tail -f /var/log/topscript.log to inspect the log, I see nothing, even though top shows that myscript.sh is currently executing and therefore presumably outputting status messages.
Why isn't the stdout/stderr from the child scripts being captured in the parent's log? How do I fix this?
EDIT: I'm also running these on a remote machine, connected via ssh using pseudo-tty allocation, e.g. ssh -t user#host. Could the pseudo-tty be interfering?
I just tried the following: I have three files t1.sh, t2.sh, and t3.sh, all with the following content:
#!/bin/bash
for((i=0;i<10;i++)) ; do
echo $i of 9
sleep 1
done
And a script called myscript.sh with the following content:
#!/bin/bash
./t1.sh
./t2.sh
./t3.sh
echo "All Done"
When I run ./myscript.sh > topscript.log 2>&1 and then in another terminal run tail -f topscript.log I see the lines being output just fine in the log file.
Perhaps the things being run in your subscripts use a large output buffer? I know that when I've run Python scripts before, the output buffer is big enough that you don't see any output for a while. Do you actually see the entire output in the email that gets sent out at the end of topscript.sh? Is it just that you don't see the output while the processes are running?
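If buffering is the culprit, one way to test (a sketch; stdbuf ships with GNU coreutils, and only affects programs that use C stdio buffering) is to force line-buffered stdout on a child:
# run the child with line-buffered stdout instead of the default block buffering
stdbuf -oL /usr/local/bin/myscript.sh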
Try:
unbuffer topscript.sh >/var/log/topscript.log 2>&1
Note that unbuffer is not always available as a standard binary on old-style Unix platforms and may require finding and installing a package that provides it (it ships with expect).
I hope this helps.
