I have an executable file and I would like to know how many times it is executed. The file is located on a network file system. Is there a way to do this with a script using one of the Linux utilities? The limitation I have is that I would like to avoid changing the file itself. For example, I will not add a counter file that would be updated by the executable script, and I will not make the executable script call some API to increment a counter in, e.g., a database.
I don't know of a way to watch a file specifically for execution, but you can construct something with inotify that counts how many times the file is opened.
You could use a script like this:
#!/bin/bash
EXEC_CNT=0
FILE_TO_WATCH=/path/to/your/file
while inotifywait -e open "$FILE_TO_WATCH"
do
    ((EXEC_CNT++))
    echo "$FILE_TO_WATCH opened $EXEC_CNT times"
    # Or to store in a file:
    # echo "$FILE_TO_WATCH opened $EXEC_CNT times" >> "$FILE_TO_WATCH.log"
done
In the case of a network share, this script must be run on the computer that shares (exports) the file system; an inotify watch set on a client machine will not see accesses made by other machines.
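If opens can happen in quick succession, a variant worth considering (just a sketch, still based on inotify-tools) keeps a single inotifywait running in monitor mode, so opens that occur while the loop body runs are not missed:
#!/bin/bash
# Sketch: -m keeps inotifywait running permanently and prints one line per event.
FILE_TO_WATCH=/path/to/your/file
EXEC_CNT=0
inotifywait -m -e open --format '%w' "$FILE_TO_WATCH" |
while read -r _
do
    ((EXEC_CNT++))
    echo "$FILE_TO_WATCH opened $EXEC_CNT times"
done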
I am currently working on a project to automate a manual task in my office. We have a process where we have to re-trigger some of our IDs when they fall into repair. As part of the process, we have to extract those IDs from an Oracle DB table, put them in a file on our Linux server, and run a command like this:
Example file:
$ cat /task/abc_YYYYMMDD_1.txt
23456
45678
...and so on
cat abc_YYYYMMDD_1.txt | scripttoprocess -args
I am using an existing Java-based program called 'scripttoprocess'. I can't see what's inside it, as it seems to be encrypted. I simply go to the location where my files are present and then use it like this:
cd /export/incoming/task
for i in abc_YYYYMMDD*.txt; do
    cat "$i" | scripttoprocess -args
    if [ $? -eq 0 ]; then
        mv "$i" /export/incoming/HIST/
    fi
done
scripttoprocess is an existing script; I am just calling it from my own script. My script runs continuously in a loop in the background. It simply searches for an abc_YYYYMMDD_1.txt file in the /task directory and, if it detects such a file, starts processing it. But I have noticed that my script starts processing the file well before it is fully written, and sometimes moves the file to HIST without fully processing it.
How can I handle this situation? I want to be fully sure that the file is completely written before I start processing it. Secondly, is there any way to take control of the files, for example by preparing a control file that contains the list of files present in the /task directory, so that I can cat this control file and pick up the file names from it? Your guidance will be much appreciated.
I used
iwatch -e close_write -c "/usr/bin/pdflatex -interaction batchmode %f" document.tex
to run a command (LaTeX to PDF conversion) when a file (document.tex) is closed after writing to it, which you could do as well.
However, there is a caveat: This was only meant to catch manual edits to the file and failure was not critical. Therefore, this ignores the case that immediately after closing, it is opened and written again. Ask yourself if that is good enough for you.
I agree with @TenG: normally you shouldn't move a file until it is fully written. If you know for sure that the file is finished (like a file from yesterday) then you can move it safely; otherwise you can process it, but not move it. You can, for example, process part of it and remember the number of processed rows so that you don't restart from scratch next time.
If you really want to work with files that are "in progress", sometimes tail -F works for this case, but then your bash script is an ongoing process as well, not a job, and you have to manage it.
You can also check whether a file is currently open (and thus unfinished) using lsof (see https://superuser.com/questions/97844/how-can-i-determine-what-process-has-a-file-open-in-linux).
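As a rough sketch of that idea (assuming lsof is installed and that the writer keeps the file open until it is done), the processing loop could skip files that are still open:
#!/bin/bash
# Only process files that no other process currently has open.
cd /export/incoming/task || exit 1
for i in abc_YYYYMMDD*.txt; do
    if lsof -- "$i" > /dev/null 2>&1; then
        continue    # still open, so presumably still being written
    fi
    scripttoprocess -args < "$i" && mv "$i" /export/incoming/HIST/
done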
Change the process that extracts the IDs from the Oracle DB table.
You can use the mv approach commented by @TenG, or put something special in the file that shows the work is done:
#!/bin/bash
source file_that_runs_sqlcommands_with_credentials
output=$(your_sql_function "select * from repairjobs")
# Something more here to remove them from the table and to check the number of deleted records
printf "%s\nFinished\n" "${output}" >> /task/abc_YYYYMMDD_1.txt
or
#!/bin/bash
source file_that_runs_sqlcommands_with_credentials
output=$(your_sql_function "select * from repairjobs union select 'EOF' from dual")
# Something more here to remove them from the table and to check the number of deleted records
printf "%s\n" "${output}" >> /task/abc_YYYYMMDD_1.txt
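On the consumer side, the loop can then pick up only files whose last line carries the marker; a minimal sketch (matching the "Finished" marker from the first variant):
#!/bin/bash
# Process only files that end with the "Finished" marker line.
cd /export/incoming/task || exit 1
for i in abc_YYYYMMDD*.txt; do
    if [ "$(tail -n 1 "$i")" = "Finished" ]; then
        scripttoprocess -args < "$i" && mv "$i" /export/incoming/HIST/
    fi
done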
I have a Linux command line program.
It writes its output to a file.
The program modifies the output file continuously, at short time intervals.
Every time the program changes the file, I want to be notified.
Is there any command-line tool for that, or any script that could help me?
I think incrond is what you need.
The incrond (inotify cron daemon) is a daemon which monitors filesystem events (such as adding a new file, deleting a file, and so on) and executes commands or shell scripts. Its use is generally similar to cron.
Take a look here for some examples http://www.cyberciti.biz/faq/linux-inotify-examples-to-replicate-directories/
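For illustration, an incrontab entry (edited with incrontab -e) could look like the line below; the watched directory and the handler script are only placeholders:
/home/user/output IN_MODIFY /usr/local/bin/on-change.sh $@/$#
Here $@ expands to the watched path and $# to the name of the file that triggered the event.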
I think you need one of the following:
inotify
File Alteration Monitor (FAM)
incron
Linux audit
Also, for a script you could use something like the following, based on the inotify tools:
while true; do
    change=$(inotifywait -e close_write,moved_to,create .)
    # inotifywait prints "./ EVENT filename"; strip everything before the filename
    change=${change#./ * }
    if [ "$change" = "myfile" ]; then
        echo "my file changed"
    fi
done
I have a bash script that I'm running from a DVD. This script copies multi-volume tar files from the DVD to the local machine. Part-way through the copy, the script prompts the user to insert a second DVD, at which point the remaining files are copied. The script exists on the first DVD but not on the second.
This script is simply stopping after the last file is copied, but prior to starting the tar multi-volume extract operation and subsequent processing. There are no errors or messages reported. I've tried running bash with '-x' but there's nothing suspicious - not even an exit statement. Even more unfortunate is the fact that this behavior is inconsistent. Sometimes the script will stop, but other times it will continue with no problems.
I have run strace on the script. Following the conclusion of the copy operations, I see this:
read(255, "\0\0\0\0\0\0\0\0\0\0"..., 5007) = 1302
read(255, "", 5007) = 0
exit_group(0) = ?
I know that bash reads the script file into memory and executes it from there, but is it possible that it's trying to re-read the script file at some point and failing (since it no longer exists)? The tar files are quite large, and it takes approximately 10-15 minutes from the time the script starts to the time the last file is copied (from the second DVD).
I see you have already found a workaround, so I will just try to uncover what's happening:
bash isn't reading the whole script into memory; it does buffered reads on it, only as much as necessary each time (presumably for code sharing with the handling of terminal input). Before any external command is launched, bash seeks to the exact position in the script and continues to read from there after the command finishes. You can see this if you edit the script file while it's running:
term1$ cat > test.sh
sleep 8
echo DONE
term1$ bash test.sh
While the sleep is executing, change the script from another terminal:
term2$ cat > test.sh
echo HAHA
Observe how bash becomes confused when the sleep is complete:
test.sh: line 2: A: command not found
It remembers that the position in the input file was 8 before the sleep, so it tries to read from there and is confronted with the last A from the overwritten script.
Now to your case. Normally, having a file open from a DVD locks the drive and prohibits a disc change. If you nevertheless manage to change the disc, that should definitely involve an unmount, which should then invalidate the script's fd. That's clearly not happening, according to your strace output, which is a little strange. In any case, bash won't be able to read the rest of the script.
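A common workaround (a sketch, not necessarily the one you found) is to force bash to parse the whole body before anything runs, by wrapping it in a compound command that ends with exit; bash then never needs to seek back into the script file after the DVD has been swapped:
#!/bin/bash
{
    # ... copy the files from the first DVD (hypothetical steps) ...
    read -rp "Insert the second DVD and press Enter " _
    # ... copy the remaining files, then run the multi-volume tar extract ...
    exit
}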
I'm logged into a remote server via SFTP at the command line. The folder I'm in contains hundreds of thousands of files. I need to get a list of these files in a text file so I can access them programmatically, as none of the PHP SFTP clients are able to return such a large list of files.
When I run an ls on the directory (within the SFTP session), it takes about 20 minutes for the file list to finally display.
I don't have write access on this server, so I can't pipe the output to a file on the remote server.
How can I pipe the output to a text file on my local machine ... or get a list of the files to my local machine some other way?
If you're willing to wait the 20 minutes for the data to scroll across your screen you can capture all the output using "script".
Call 'script' before you start your ssh or sftp session and it will capture all terminal output to your local disk. Type 'exit' to finish the capture.
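For example, a capture session could look like this (the file and host names are only placeholders):
script filelist.txt      # start capturing everything printed to the terminal
sftp user@remote-host    # run ls inside the session, then quit sftp
exit                     # ends the shell started by script and stops the capture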
NAME
script -- make typescript of terminal session
SYNOPSIS
script [-akq] [-t time] [file [command ...]]
DESCRIPTION
The script utility makes a typescript of everything printed on your ter-
minal. It is useful for students who need a hardcopy record of an inter-
active session as proof of an assignment, as the typescript file can be
printed out later with lpr(1).
If the argument file is given, script saves all dialogue in file. If no
file name is given, the typescript is saved in the file typescript.
If the argument command is given, script will run the specified command
with an optional argument vector instead of an interactive shell.
The following options are available:
-a Append the output to file or typescript, retaining the prior con-
tents.
-k Log keys sent to program as well as output.
-q Run in quiet mode, omit the start and stop status messages.
-t time
Specify time interval between flushing script output file. A
value of 0 causes script to flush for every character I/O event.
The default interval is 30 seconds.
The script ends when the forked shell (or command) exits (a control-D to
exit the Bourne shell (sh(1)), and exit, logout or control-D (if
ignoreeof is not set) for the C-shell, csh(1)).
Certain interactive commands, such as vi(1), create garbage in the type-
script file. The script utility works best with commands that do not
manipulate the screen. The results are meant to emulate a hardcopy ter-
minal, not an addressable one.
ENVIRONMENT
The following environment variable is utilized by script:
SHELL If the variable SHELL exists, the shell forked by script will be
that shell. If SHELL is not set, the Bourne shell is assumed.
(Most shells set this variable automatically).
SEE ALSO
csh(1) (for the history mechanism).
HISTORY
The script command appeared in 3.0BSD.
BUGS
The script utility places everything in the log file, including linefeeds
and backspaces. This is not what the naive user expects.
It is not possible to specify a command without also naming the script
file because of argument parsing compatibility issues.
When running in -k mode, echo cancelling is far from ideal. The slave
terminal mode is checked for ECHO mode to check when to avoid manual echo
logging. This does not work when in a raw mode where the program being
run is doing manual echo.
Wu's answer is good if you do it remotely. Here is another option if you are logged onto the remote server and want to send the file back home to yourself:
The proper answer is here: http://scratching.psybermonkey.net/2011/02/ssh-how-to-pipe-output-from-local-to.html
your_command | ssh username@server "cat > filename.txt"
If you have ssh access, that would be very easy:
ssh user@server ls > foo.txt
Otherwise, you can just redirect sftp's STDOUT and STDERR to a file. You have to type the password and commands blindly, though.
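A rough sketch of that blind-redirection approach (host name is a placeholder):
# The sftp> prompt and the ls listing go to files.txt, so you type
# the commands without seeing the prompts.
sftp user@server > files.txt 2>&1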
In my case, the following worked:
ssh user@server ls /path/to/source/folder/ > /path/to/destination/folder/filenames.txt
I wrote it in Git Bash. This will first ssh, then list all files of the source folder, and then save the file names to the destination text file.
In this way you can also save the output to a json file; just change the file extension to json instead of txt.
To append output instead of overwriting, use ">>" instead of ">".
I want to write a script for Linux that will first copy a movie/series file into the cache with something like:
cat /filepath/filename > /dev/null
and then open the same file in VLC.
The problem is getting the file name and path into the script. I would like to simply double-click a file, or somehow make this a faster process than typing it manually (especially because the file names of some series are just inconsistent and hard to type, even with auto-complete).
This is useful for watching movies or series on a laptop/netbook, since it allows the disk to spin down.
You should be able to create your own 'program' as a bash script that takes the filename as its first argument, referenced by the convention "$1".
The bash script should look something like the one below. I tested it, storing the script in the file cachedvlc.sh. The quotes help handle whitespace and weird characters...
#!/bin/bash
# Read the file once so it ends up in the page cache, then play it
cat "$1" > /dev/null
vlc "$1"
...and will need to be made executable by changing its permissions through the file manager or running this in the terminal...
chmod u+x cachedvlc.sh
Then within your operating system, associate your bash script with the type of file you want to launch. For example on Ubuntu, you could add your script and call it 'Cached VLC' to the Menu using the 'Main Menu' application, then right-click on the file in Nautilus and choose 'Open with' to select your bash script.
After this, double-clicking or right-clicking on a file within your filemanager should be good enough to launch a cached view. This assumes what you say about caching is in fact correct, which I can't easily check.
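On desktops that follow the freedesktop.org conventions, the association can also be made with a small .desktop file; the paths and MIME types below are only examples:
# ~/.local/share/applications/cachedvlc.desktop
[Desktop Entry]
Type=Application
Name=Cached VLC
Exec=/home/you/bin/cachedvlc.sh %f
MimeType=video/mp4;video/x-matroska;
Terminal=false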