Processing files in Perl and retrying a few times if execution fails - Linux

I am processing files present in a directory.
I am making REST calls for each file, and if the REST response is successful then I move the file to a succeeded directory and if it is a failure then I move it to a failed directory.
This is working fine: the curl call from Perl works, and the directories live on a Linux system.
Now I want to process each file in the failed directory and make a REST call. Each file is given three opportunities, and if all three attempts fail then the file is moved to some other directory.
My question is how to implement the part that processes the files in the failed directory and makes up to three attempts per file. How can I keep a count of how many times each file has been processed, and know when to stop retrying?
I am asking for suggestions or a design approach that could be taken here. I want the simplest possible solution.
I have thought about some solutions, such as:
Appending the number of attempts to the front of the file name (first, second, etc.) - a rough sketch of this is shown below.
Moving files through a folder structure like first, second, third, depending on which attempt failed.
Let me know if any experts have better suggestions or ideas.
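For illustration, a rough sketch of the first idea might look something like this. The directory names are placeholders, and process_file stands in for the real curl/REST call, so this is not working code, just the attempt-counting part:

use strict;
use warnings;
use File::Copy qw(move);

my $failed_dir = '/data/failed';     # placeholder paths
my $ok_dir     = '/data/succeeded';
my $dead_dir   = '/data/gave_up';
my $max_tries  = 3;

opendir my $dh, $failed_dir or die "Cannot open $failed_dir: $!";
my @entries = grep { -f "$failed_dir/$_" } readdir $dh;
closedir $dh;

for my $entry (@entries) {
    # the attempt count is encoded as a numeric prefix, e.g. "2_invoice.xml"
    my ($tries, $name) = $entry =~ /^(\d+)_(.+)$/ ? ($1, $2) : (0, $entry);

    if (process_file("$failed_dir/$entry")) {
        move("$failed_dir/$entry", "$ok_dir/$name");
    }
    elsif ($tries + 1 >= $max_tries) {
        move("$failed_dir/$entry", "$dead_dir/$name");   # third failure: give up
    }
    else {
        my $next = $tries + 1;
        move("$failed_dir/$entry", "$failed_dir/${next}_$name");
    }
}

sub process_file {
    my ($path) = @_;
    # placeholder for the real REST/curl call; return true on success
    return 0;
}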

Related

Want to store a value in local ../usr from shell script

I just want to store some values while running a shell script.
Scenario: when I run the shell script, it does some operation and should record the results / what was done.
Then when I run the same script again, it should recognize what has already been executed and continue from there. That is roughly what I need. How can I do that? Can we use a .lock file, or are there other, better ways?
.lock files are by convention used to identify running services, so I would vote against that.
It just sounds like you want to keep track of your progress.
If you do not mind the data being erased after a reboot, I'd suggest you simply use /tmp for that (this remains in memory); do keep in mind that if we are talking about very large amounts of data, this will drain your available memory.
Without knowing your use case it's hard to tell you what the best solution is.
But I would suggest writing an empty file that just indicates that your script is in progress (very similar to lock behaviour) and a second file that just keeps track of which items you have processed.
Then just loop over the items and skip until you hit a 'new' item.
If we are talking about very large amounts, you should consider using a local database or a database server.
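For what it's worth, a rough sketch of that bookkeeping, shown in Perl rather than shell to match the main question of this thread (the file and directory names are made up):

use strict;
use warnings;

my $done_log = '/tmp/myscript.done';    # made-up name: one finished item per line
my @items    = glob '/data/inbox/*';    # made-up source of items to process

# load what previous runs already handled
my %done;
if (open my $in, '<', $done_log) {
    chomp(my @seen = <$in>);
    $done{$_} = 1 for @seen;
    close $in;
}

open my $out, '>>', $done_log or die "Cannot append to $done_log: $!";
for my $item (@items) {
    next if $done{$item};               # skip anything a previous run finished
    # ... do the real work for $item here ...
    print {$out} "$item\n";             # record it only once it is done
}
close $out;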

"Just in time" read only filesystem using mkfifo and inotifywait

I am writing some gross middleware - basically, I have some old code that needs to open 100,000 files for reading only, expecting them all to be in one folder. It never writes. It is multiprocess so it can try to open ~30 files at the same time. The old way, I would have to actually copy the files into that folder (or use links, NFS, etc.). Worth noting I have no ability to change this old code - it's just a binary.
I have some new, fancy code that can retrieve a file almost instantly. I want to tie these things together, so when the old code tries to open the file, it is actually, in real time, running the new code.
So I thought of mkfifo and inotifywait. Instead of a folder of 100,000 files, I can make a folder of 100,000 named pipes. So far so good. The legacy code goes to open the files, not knowing that they are indeed named pipes. The problem is, I don't know what order the legacy code is going to open the files (nice, right?). So I would like to TRIGGER the named pipe WRITE (from my fancy new code) when the legacy code goes in for the read. I can't spawn 100,000 writes and have them all block. So I thought hey - inotifywait makes sense. Every time the legacy goes to open the pipe, it triggers a read event, which can then be used to spawn the pipe writer in the background. The problem is.. inotifywait doesn't trigger the read event until AFTER the writer has been spawned!
Any ideas of how to solve this? Basically - I want to intercept a file open, block for a couple hundred ms while I retrieve the contents of the file, then return those contents. Ideally I don't have to create a custom FUSE filesystem to do this... it's just a read-only file open. The problem is this needs to run fast and in parallel... and I don't know which files are going to be opened in what order. Gotta be a quick and dirty way!
Thanks in advance for everyone's time.
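For reference, the setup step described above (a folder of named pipes standing in for the real files) is the easy part; a sketch in Perl with made-up paths could look like this. It does not solve the triggering problem, it only creates the pipes:

use strict;
use warnings;
use POSIX qw(mkfifo);

my $pipe_dir = '/srv/legacy_inbox';                 # made-up folder the legacy binary reads from
my @names    = ('file_0001.dat', 'file_0002.dat');  # stand-in for the ~100,000 expected names

for my $name (@names) {
    my $path = "$pipe_dir/$name";
    next if -p $path;                               # already a FIFO
    mkfifo($path, 0644) or die "mkfifo $path failed: $!";
}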

Running a script twice at the same time

I'm making a simple little script to improve the efficiency of my work team.
The script simply searches for a file that the user gives as a parameter.
./check_file test_file.xml
I use only the ls and cp commands, and there are no log or temporary files.
My question is: should I put a .lock file to be sure that the script runs only once at a time, or can I avoid this control?
Usually I create a lock file, because my scripts write temporary files, and if two users run the script at the same moment it blows up.
Thanks!
Generally speaking, no. I would recommend avoiding temporary files as much as possible, preferring pipes instead. However, I doubt it's always possible to avoid temporary files, so when I have to, I use $$ in the filename (current process ID or PID). So if you're using /tmp/check_file.tmp as your temporary filename, instead use /tmp/check_file.$$.tmp - then two processes can run at a time, each with their own PID, and not overlap.
Slightly more advanced is to also use ${TMP:-/tmp} as the temporary directory instead of just /tmp - that way users can specify a different directory for each run, and thereby also avoid any overlaps.
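Spelled out in Perl, to match the main question of this thread (check_file and the scratch data are just examples):

use strict;
use warnings;

my $tmpdir  = $ENV{TMP} // '/tmp';            # honour $TMP if the user set it
my $tmpfile = "$tmpdir/check_file.$$.tmp";    # $$ is this process's PID, so parallel runs don't collide

open my $fh, '>', $tmpfile or die "Cannot create $tmpfile: $!";
print {$fh} "scratch data\n";
close $fh;
unlink $tmpfile;                              # clean up when done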

Best practice for using Kiba as a batch process on files

We'd like to run Kiba as a batch process on a series of files. What would be the best structure to give a file mask, download the files from FTP, and then run the ETL job on each, sending a success or failure notification on a per file basis?
Is there a way to do this from within Kiba, or is the best practice just to handle all the non-ETL stuff externally, and then just call kiba on each file?
I would initially start with the simplest possible thing, which is, like you said, using external files and then calling Kiba on each one. E.g.:
Build a rake task to download the files locally (and remove them from the FTP, or at least move them to a separate folder to avoid double-processing), inside a well-known folder which will act as an inbox. See here for interesting links on how to do that.
Then build another rake task to iterate over the inbox folder and process a given file (using Dir[pattern].each).
Make sure to use a helper such as:
def system!(command)
  fail "Command #{command} failed" unless system(command)
end
to make sure you detect failures in execution when making system calls.
For your ETL file itself, you would use an at_exit block to capture failure and notify accordingly (see an example here with Bugsnag), and a post_process block to capture success and notify in that case.
This will definitely work and is simple; that said, there are other possibilities, such as a single ETL file which downloads the files in a pre_process block, then has a source which yields one filename per downloaded file, and maybe a transform which could itself call kiba on the command line, or even more advanced solutions.
I would stick to the simplest possible solution to get started, as always!

"find" command cannot detect files added during execution

Stack Overflow has saved my life on countless occasions over the years. Now, it's time for me to post my first question ever, the answer to which I have been unable to find so far.
I have a tool (language/implementation is irrelevant) which accepts a text file as input. This text file (let's call it file_list.txt) contains a long list of file paths, one per line. The tool then iterates over the lines in file_list.txt and does something with every file path. This needs to be done continuously and file_list.txt needs to always contain the latest file paths because users continuously upload or delete files from the share being monitored. To achieve this, I have set up a cron job which calls a script. First the script calls the find utility with the search parameters required and pipes the output to a temporary file. When the file is fully populated, it is moved to file_list.txt. Then, once this is done, the tool is invoked with file_list.txt as an input parameter.
So far, so good. The share being monitored is VERY LARGE (~60 TB) and the find command takes around 5 hours to execute. This is not a problem since we have multiple overlapping find commands running in parallel (triggered once per hour). The entire setup runs on a compute farm, so CPU utilization, etc. is also not an issue.
The problem arises in the lag time for file detection. Ideally, I want a user to add a file and I want one of the already running, overlapping find commands to detect this file within a matter of minutes. However, I have noticed that none of the already-running find commands will detect this file. Only a find command started AFTER this file was added will detect it. This means that generally, I need to wait around 5 hours for a newly added file to be detected. This leads me to believe that the find utility somehow acts on a "cached" version of the share state when it was triggered. Is this true? Can anyone confirm this? And if so, what can I do to improve the detection lag?
Please let me know if further clarification is required. I am happy to provide any further details.
To summarize: you have a gigantic filesystem volume (60 TB) which contains a huge number of files, and you use find(1) to name a large number of those files and put those names into a text file for analysis. You have discovered that files are not listed if they are created after find(1) was started but before it finished.
I think the best solution is to stop thinking of this as a batch job, and do it "online" using inotify(7). You can use the inotify API to be immediately informed of changes to your filesystem, including new files being created. There is of course the original C API, as well as the excellent pyinotify.
With inotify, you can start a watcher program once and leave it running continuously (under a supervisor if needed for restarts). The operating system can then notify you whenever a relevant filesystem event occurs, and you can respond immediately rather than waiting for the next scan.
The one downside for your use case might be that the watcher program does need to run on a machine which has the filesystem mounted locally. But the overall compute resources required are probably much less than your current approach of repeated linear scans.
Executing find commands and piping the output to temporary files might work up to a certain scale, but it is far from optimal. If you want a less resource-intensive, more reactive solution, I would recommend considering reimplementing your software using the inotify interface:
The inotify API provides a mechanism for monitoring filesystem events. Inotify can be used to monitor individual files, or to monitor directories. When a directory is monitored, inotify will return events for the directory itself, and for files inside the directory.
So an event will be raised for each file change, or for each file being added.
Note that you can then keep an internal list of files up to date, which only needs to be changed when you get an event.
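To stay with the Perl used elsewhere in this thread, a minimal watcher along those lines could use the Linux::Inotify2 module from CPAN; the directory path below is just an example, and the same idea maps directly onto pyinotify or the C API mentioned above:

use strict;
use warnings;
use Linux::Inotify2;

my $inotify = Linux::Inotify2->new
    or die "Unable to create inotify object: $!";

# watch one directory for files being created or moved in;
# note that inotify watches are per-directory, so subdirectories need their own watches
$inotify->watch('/data/share', IN_CREATE | IN_MOVED_TO, sub {
    my $event = shift;
    print "New file: ", $event->fullname, "\n";   # e.g. append this to file_list.txt
});

# block and dispatch events forever
1 while $inotify->poll;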
