Best practice for using Kiba as a batch process on files - kiba-etl

We'd like to run Kiba as a batch process on a series of files. What would be the best structure to give a file mask, download the files from FTP, and then run the ETL job on each, sending a success or failure notification on a per file basis?
Is there a way to do this from within Kiba, or is the best practice just to handle all the non-ETL stuff externally, and then just call kiba on each file?

I would initially start with the simplest possible thing, which is, like you said, handling the non-ETL steps externally and then calling Kiba on each file. For example:
Build a rake task to download the files locally into a well-known folder which will act as an inbox (and remove them from the FTP server, or at least move them to a separate folder, to avoid double-processing). See here for interesting links on how to do that.
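For illustration, here is a minimal sketch of such a download task; the FTP host, credentials, remote folder and file pattern are all hypothetical, so adjust them to your actual setup:

# Rakefile (sketch)
require "net/ftp"
require "fileutils"

desc "Download pending files from the FTP server into the inbox"
task :download do
  FileUtils.mkdir_p("inbox")
  Net::FTP.open("ftp.example.com") do |ftp|
    ftp.login("user", "secret")
    ftp.chdir("/outgoing")
    # NLST with a glob is accepted by most servers; list-and-filter otherwise
    ftp.nlst("*.csv").each do |remote_file|
      ftp.getbinaryfile(remote_file, File.join("inbox", remote_file))
      ftp.rename(remote_file, "processed/#{remote_file}")   # avoid double-processing
    end
  end
end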
Then build another rake task to iterate over the inbox folder and process each file (using Dir[pattern].each).
Make sure to use a helper such as:
def system!(command)
  fail "Command #{command} failed" unless system(command)
end
so that you reliably detect failures when making system calls.
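Putting the two together, the processing task could look roughly like this; the etl/process_file.etl script name and the INPUT_FILE environment variable are just assumptions for the sketch:

# Rakefile (sketch, continued)
desc "Run the ETL job on every file waiting in the inbox"
task :process do
  Dir["inbox/*.csv"].each do |file|
    # system! (defined above) raises if the kiba run exits with a failure status
    system!("INPUT_FILE=#{file} bundle exec kiba etl/process_file.etl")
  end
end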
For your ETL file itself, you would use an at_exit block to capture failure and notify accordingly (see the example here with Bugsnag), and a post_process block to capture success and notify in that case.
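As a rough sketch of what that ETL file could look like: the CSV source below is a stand-in, and the warn/puts calls are placeholders for your real notifier (Bugsnag, email, Slack, etc.).

# etl/process_file.etl (sketch)
require "csv"

input_file = ENV.fetch("INPUT_FILE")

at_exit do
  # $! holds the exception (if any) that is currently aborting the script
  warn "FAILURE processing #{input_file}: #{$!.message}" if $!   # e.g. Bugsnag.notify($!)
end

# Minimal source: yields each CSV row as a hash
class CsvSource
  def initialize(filename)
    @filename = filename
  end

  def each
    CSV.foreach(@filename, headers: true) { |row| yield row.to_h }
  end
end

source CsvSource, input_file

transform do |row|
  # row-level processing goes here
  row
end

post_process do
  puts "SUCCESS processing #{input_file}"   # notify success here instead
end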
This will definitely work and is simple. That said, there are other possibilities, such as a single ETL file which downloads the files in a pre_process block, then uses a source yielding one filename per downloaded file, and maybe a transform which itself calls kiba on the command line, or even more advanced setups.
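A hedged sketch of that variant, with hypothetical class and file names, might look like this:

# single_job.etl (sketch of the all-in-one variant)
pre_process do
  # download the files into inbox/ here, e.g. with Net::FTP as in the rake task above
end

# A source yielding one local filename per downloaded file
class InboxSource
  def initialize(pattern)
    @pattern = pattern
  end

  def each
    Dir[@pattern].sort.each { |path| yield path }
  end
end

source InboxSource, "inbox/*.csv"

transform do |path|
  # either process the file inline here, or shell out to the per-file job,
  # assuming the system! helper above is made available in this context
  system!("INPUT_FILE=#{path} bundle exec kiba etl/process_file.etl")
  path
end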
I would stick to the simplest possible solution to get started, as always!

Related

Want to store a value in local ../usr from shell script

I just want to store some values while running a shell script.
Scenario: when I run the script, it does some operations and records the results/activity done.
Then, when I run the same script again, it should recognize what has already been executed and continue from there. That is roughly what I need. How can I do that? Can we use a .lock file, or are there better ways?
.lock files are by convention used to identify running services, so I would therefore vote against using one here.
It just sounds like you want to keep track of your progress.
If you do not mind the data being erased after a reboot, I'd suggest you simply use /tmp for that (this remains in memory); do mind that if we are talking about very large amounts of data, this will drain your available memory.
Without knowing your use case it's hard to tell you what is the best solution.
But I would suggest writing an empty file that just indicates that your script is in progress (very similar to lock behaviour), and a second file that just keeps track of which items you have processed.
Then just loop over the items and skip until you hit a 'new' item.
If we are talking very large amounts you should consider using a local database or database server.
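The question is about shell scripting, but the bookkeeping itself is tiny; here is the same idea sketched in Ruby for concreteness (a progress file under /tmp plus a skip-if-seen loop), with hypothetical item names:

# progress_tracking.rb (sketch)
PROGRESS_FILE = "/tmp/myscript.progress"   # erased on reboot, as discussed above

processed = File.exist?(PROGRESS_FILE) ? File.readlines(PROGRESS_FILE, chomp: true) : []

items = %w[item1 item2 item3]              # whatever your script iterates over

items.each do |item|
  next if processed.include?(item)         # skip until we hit a 'new' item
  # ... do the actual work for this item here ...
  File.open(PROGRESS_FILE, "a") { |f| f.puts(item) }   # record it as done
end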

"find" command cannot detect files added during execution

Stack Overflow has saved my life on countless occasions over the years. Now, it's time for me to post my first question ever, the answer to which I have been unable to find so far.
I have a tool (language/implementation is irrelevant) which accepts a text file as input. This text file (let's call it file_list.txt) contains a long list of file paths, one per line. The tool then iterates over the lines in file_list.txt and does something with every file path. This needs to be done continuously and file_list.txt needs to always contain the latest file paths because users continuously upload or delete files from the share being monitored. To achieve this, I have set up a cron job which calls a script. First the script calls the find utility with the search parameters required and pipes the output to a temporary file. When the file is fully populated, it is moved to file_list.txt. Then, once this is done, the tool is invoked with file_list.txt as an input parameter.
So far, so good. The share being monitored is VERY LARGE (~60 TB) and the find command takes around 5 hours to execute. This is not a problem since we have multiple overlapping find commands running in parallel (triggered once per hour). The entire setup runs on a compute farm, so CPU utilization, etc. is also not an issue.
The problem arises in the lag time for file detection. Ideally, I want a user to add a file and I want one of the already running, overlapping find commands to detect this file within a matter of minutes. However, I have noticed that none of the already-running find commands will detect this file. Only a find command started AFTER this file was added will detect it. This means that generally, I need to wait around 5 hours for a newly added file to be detected. This leads me to believe that the find utility somehow acts on a "cached" version of the share state when it was triggered. Is this true? Can anyone confirm this? And if so, what can I do to improve the detection lag?
Please let me know if further clarification is required. I am happy to provide any further details.
To summarize: you have a gigantic filesystem volume (60 TB) which contains a huge number of files, and you use find(1) to name a large number of those files and put those names into a text file for analysis. You have discovered that files are not listed if they are created after find(1) was started but before it finished.
I think the best solution is to stop thinking of this as a batch job, and do it "online" using inotify(7). You can use the inotify API to be immediately informed of changes to your filesystem, including new files being created. There is of course the original C API, as well as the excellent pyinotify.
With inotify, you can start a watcher program once and leave it running continuously (under a supervisor if needed for restarts). The operating system can then notify you whenever a relevant filesystem event occurs, and you can respond immediately rather than waiting for the next scan.
The one downside for your use case might be that the watcher program does need to run on a machine which has the filesystem mounted locally. But the overall compute resources required are probably much less than your current approach of repeated linear scans.
Executing find commands and piping the output to temporary files might work up to a certain scale, but it is far from optimal. If you want a less resource-intensive, more reactive solution, I would recommend considering reimplementing your software using the inotify interface:
The inotify API provides a mechanism for monitoring filesystem events. Inotify can be used to monitor individual files, or to monitor directories. When a directory is monitored, inotify will return events for the directory itself, and for files inside the directory.
So an event will be raised for each file change, or for each file being added. Note that you can then keep an internal list of files up to date, which only needs to be updated when you receive an event.
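The answers above point at the C API and pyinotify; purely as an illustration, here is a minimal watcher sketched in Ruby with the rb-inotify gem (an assumption, not part of the original answers). Keep in mind that inotify watches are per-directory, so covering a whole 60 TB tree means adding a watch for each subdirectory.

# watcher.rb (sketch) - gem install rb-inotify
require "rb-inotify"

WATCHED_DIR = "/path/to/share"   # hypothetical; must be a locally mounted filesystem

notifier = INotify::Notifier.new

# :create and :moved_to cover files created in, or moved into, the directory
notifier.watch(WATCHED_DIR, :create, :moved_to) do |event|
  # React immediately instead of waiting for the next find(1) scan,
  # e.g. append the new path to file_list.txt or feed it to the tool directly
  File.open("file_list.txt", "a") { |f| f.puts(event.absolute_name) }
end

notifier.run   # blocks and dispatches events as they arrive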

Processing files in Perl and retrying a few times if execution fails

I am processing files present in a directory.
I am making REST calls for each file, and if the REST response is successful then I move the file to a succeeded directory and if it is a failure then I move it to a failed directory.
This is working fine. The curl implementation in Perl works, and the directory structure is on a Linux OS.
Now I want to process each file in the failed directory and make a REST call. Each file is given three opportunities, and if all three attempts fail then the file is moved to some other directory.
My question is how to implement the part which processes the files in the failed directory and makes up to three attempts for each file. How can I maintain a count of how many times each file has been processed, and know when to stop retrying?
I am asking for suggestions or a design approach that can be taken here. I want the simplest possible solution.
I have thought about some solutions, like:
Append the number of attempts to the front of the file name (first, second, etc.).
Move files through a folder structure like first, second, third, depending on which attempt failed.
Let me know if any experts have better suggestions or ideas.
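The question is Perl-specific, but to make the first idea concrete, here is a minimal sketch in Ruby of encoding the attempt count in the file name; the process_file REST call and the directory names are hypothetical:

# retry_failed.rb (sketch)
require "fileutils"

MAX_ATTEMPTS = 3

# Hypothetical stand-in for the REST call; returns true on success
def process_file(path)
  false
end

Dir["failed/*"].each do |path|
  base     = File.basename(path)
  attempts = base[/\A(\d+)__/, 1].to_i          # attempt count prefix, e.g. "2__orders.csv"
  name     = base.sub(/\A\d+__/, "")

  if process_file(path)
    FileUtils.mv(path, File.join("succeeded", name))
  elsif attempts + 1 >= MAX_ATTEMPTS
    FileUtils.mv(path, File.join("gave_up", name))          # out of retries
  else
    FileUtils.mv(path, File.join("failed", "#{attempts + 1}__#{name}"))
  end
end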

Loop until data set is not in use with JCL

I am working on a mainframe and I need to wait until a dataset is released in order to execute a JOB automatically. Do you know any simple way to loop in JCL until a dataset is no longer in use? I was looking on the web and I found some solutions with REXX, but they seemed too complicated for such a simple thing as I need. Also, I have never used REXX.
Regards!
P.S. Also, the data set might not exist.
Edit: I need this because I run an XCOM job which transfers a file from another system to a mainframe dataset. The problem is that when this JOB finishes, the file may still be being transferred, and I would like to wait for the transfer to complete before starting the next JOB. Maybe by editing the statement of the next JOB associated with the dataset.
The easy way to do this is to ensure that your file transfer package allocates the dataset with an OLD disposition; that will create a system-level enqueue on the dataset and prevent your job from running until the enqueue is released.
Many file transfer packages offer some sort of 'file complete' exit that can also trigger a job once a dataset transmission is fully complete.
But you can't loop in JCL. You can in REXX, but that comes with a host of issues you have to deal with; it is not at all simple.

Generating a file in one Gradle task to be operated on by another Gradle task

I am working with Gradle for the first time on a project, having come from Ant on my last project. So far I like what I have been seeing, though I'm trying to figure out how to do some things that have kind of been throwing me for a loop. I am wondering what is the right pattern to use in the following situation.
I have a series of files on which I need to perform a number of operations. Each task operates on the newly generated output files of the task that came before it. I will try to contrive an example to demonstrate my question, since my project is somewhat more complicated and internal.
Stage One
First, let's say I have a task that must write out 100 separate text files with a random number in each. The name of the file doesn't matter, and let's say they all will live under parentFolder.
Example:
parentFolder
|
|-file1
|-file2
...
|-file100
I would think my initial take would be to do this in a loop inside the doLast (shortcutted with <<) closure of a custom task -- something like this:
task generateNumberFiles << {
    File parentFolder = mkdir(buildDir.toString() + "/parentFolder")
    // write file1..file100, each containing a random number
    for (int x = 1; x <= 100; x++)
    {
        File currentFile = file(parentFolder.toString() + "/file" + String.valueOf(x))
        currentFile.write(String.valueOf(Math.random()))
    }
}
Stage Two
Next, let's say I need to read each file generated in the generateNumberFiles task and zip each file into a separate archive in a second folder, zipFolder. I'll put it under the parentFolder for simplicity, but the location isn't important.
Desired output:
parentFolder
|
|-file1
|-file2
...
|-file100
|-zipFolder
    |
    |-file1.zip
    |-file2.zip
    ...
    |-file100.zip
This seems problematic because, in theory, it's like I need to create a Zip task for each file (to generate a separate archive per file). So I suppose this is the first part of the question: how do I create a separate task to act on a bunch of files generated during a prior task and have that task run as part of the build process?
Adding the task at execution time is definitely possible, but getting the tasks to run seems more problematic. I have read that using .execute() is inadvisable and technically it is an internal method.
I also had the thought to add dependsOn to a subsequent task with a .matching { Task task -> task.name.startsWith("blah") } block. This seems like it won't work though, because task dependency is resolved during the Gradle configuration phase [1]. So how can I create tasks to operate on these files, since they didn't exist at configuration time?
Stage Three
Finally, let's complicate it a little more and say I need to perform some other custom action on the ZIP archives generated in Stage Two, something not built in to Gradle. I can't think of a realistic example, so let's just say I have to read the first byte of each ZIP and upload it to some server -- something that involves operating on each ZIP independently.
Stage Three is somewhat just a continuation of my question in Stage Two. I feel like the Gradle-y way to do this kind of thing would be to create tasks that perform a unit of work on each file and use some dependency to cause those tasks to execute. However, if the tasks don't exist when the dependency graph is built, how do I accomplish this sort of thing? On the other hand, am I totally off and is there some other way to do this sort of thing?
[1]: "Gradle builds the complete dependency graph before any task is executed." http://www.gradle.org/docs/current/userguide/build_lifecycle.html
You cannot create tasks during the execution phase. As you have probably figured out, since Gradle constructs the task execution graph during the configuration phase, you cannot add tasks later.
If you are simply trying to consume the output of one task as the input of another, then that becomes a simple dependsOn relationship, just like in Ant. I believe where you may be going down the wrong path is in thinking you need to dynamically create a Gradle Zip task for every archive you intend to create. In this case, since the number of archives you will be creating is dynamic and based on the output of another task (i.e. determined during execution), you could simply create a single task which creates all those zip files. The easiest way of accomplishing this would simply be to use Ant's zip task via Gradle's Ant support.
We do something similar. While Mark Vieira's answer is correct, there may be a way to adjust things on both ends a bit. Specifically:
1. You could discover, as we do, all the zip files you need to create during the configuration phase. This will allow you to create any number of zip tasks, name them appropriately and relate them correctly. This will also allow you to individually build them as needed and take advantage of the incremental build support with up-to-date checks.
2. If you have something you need to do before you can discover what you need for (1), and if that is relatively simple, you could code that specifically not as a task but as a configuration step.
Note that "Gradle-y" way is flexible but don't do this just because you may feel this is "Gradle-y". Do what is right. You need individual tasks if you want to be able to invoke and relate them individually, perhaps optimize build performance by skipping up-to-date ones. If this is not what you care about, don't worry about making each file into its own task.
