Quartz Scheduling to delete files - cron

I am using the file component with a quartz scheduler argument in order to pull some files from a given directory every hour. I then transform the data from the files and move the content to other files in another directory. After that I move the input files to an archive directory. When a file is moved to this directory it should stay there for only a week and then be deleted automatically. The problem is that I'm not really sure how to start a new cron job, because I don't know when any of the files is moved to that archive directory. Maybe it is something really trivial, but I am pretty new to Camel and I don't know the solution. Thank you in advance.

Use option "filterFile"
Every file has a modified timestamp, and you can use this timestamp to filter files that are older than one week. The file component has an option filterFile:
filterFile=${date:file:yyyyMMdd}<${date:now-7d:yyyyMMdd}
The expression above uses the File Language: ${date:file:yyyyMMdd} denotes the modified timestamp of the file in the form (year)(month)(day), and ${date:now-7d:yyyyMMdd} denotes the current time minus 7 days in the same form.
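For the deletion itself, one option is a second route: a file consumer on the archive directory combining filterFile with delete=true, driven by a quartz cron, so you never need to know when individual files arrive. A sketch of the endpoint URI (the archive path and the cron expression are assumptions; adjust the scheduler options to match however you already configured the hourly poll):

```
file:/path/to/archive?scheduler=quartz&scheduler.cron=0+0+0+*+*+?&filterFile=${date:file:yyyyMMdd}<${date:now-7d:yyyyMMdd}&delete=true
```

Once a day, files matching the filter (older than a week) are consumed and removed by delete=true; everything newer is left untouched.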

Related

rename a file from multiple directories in linux

I am trying to rename a file from multiple directories
I have a file called slave.log in multiple directories, like slave1, slave2, ... slave17. Daily log rotation happens and creates a new file with the current date format in its name, whereas the file's data contains the previous day's data. I want to rename those files with the previous day's date format.
I have written a shell script which works fine, but the problem is that I need to pass the path as a parameter, and since I have 17 directories I can't schedule 17 cron entries. I have only basic knowledge of scripting. Please help me with the best solution for this scenario.
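Rather than 17 cron entries, one script can loop over the directories itself. A minimal sketch, assuming the slaveN directories share one parent directory and the rotated file carries today's date in its name (both the layout and the date format are assumptions):

```shell
#!/bin/sh
# Sketch: in every slaveN directory under the given parent, rename the
# log file stamped with today's date to yesterday's date instead.
base=$1
today=$(date +%Y-%m-%d)
yesterday=$(date -d yesterday +%Y-%m-%d)   # GNU date syntax
for dir in "$base"/slave*; do
    if [ -f "$dir/slave.log.$today" ]; then
        mv "$dir/slave.log.$today" "$dir/slave.log.$yesterday"
    fi
done
```

A single cron entry can then run this script with the parent directory as its only argument.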

Rotate logfiles on an hourly basis by appending date and hour

I want to implement a log rotation option in Linux. I have a *.trc file where all the logs are written, and I want a new log file to be created every hour. From my analysis I learned about the logrotate option, where you configure the rotation details for a specific file in the logrotate.conf file.
I want to know if there is an option that does not use logrotate: rotating the log files on an hourly basis, for example by appending date and hour information to the file name and creating new files based on the current hour.
I'm looking for suggestions on how to implement log rotation using this second option. Any details would be really helpful.
If you have control over the process that creates the logs, you could just timestamp the file at the moment of creation. This will remove the need to rename the log.
Before you write each line, check the time. If an hour has passed since the file was created, close the current file and open a new one with a new timestamp.
If you do not have control over the process, you can pipe its output (stdout, stderr) to multilog, a binary that is part of the daemontools package in most Linux distros.
https://cr.yp.to/daemontools/multilog.html
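The first approach, naming the file after the hour at which each line is written, can be sketched in shell; a new file starts automatically when the hour rolls over (the app_ prefix and .trc extension are assumptions):

```shell
#!/bin/sh
# Sketch: append each line from stdin to a log file named after the
# current date and hour; the file name changes when the hour does.
while IFS= read -r line; do
    printf '%s\n' "$line" >> "app_$(date +%Y%m%d_%H).trc"
done
```

Piping the producing process into this loop gives hourly files without logrotate, at the cost of one date call per line.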

Node.js file rotation

I have a process that periodically gets files from a server and copy them with SFTP to a local directory. It should not overwrite the file if it already exists. I know with something like Winston I can automatically rotate the log file when it fills up, but in this case I need a similar functionality to rotate files if they already exist.
An example:
The routine copies a remote file called testfile.txt to a local directory. The next time it's run the same remote file is found and copied. But now I want to rename the first testfile.txt to testfile.txt.0 so it's not overwritten. And so on - after a while I'd have a directory of files with the name testfile.txt.N and the most recent testfile.txt.
What you can do is append the date and time to the file name; that gives every file a unique name and also helps you archive it.
For example, your test.txt can become either 20170202_181921_test.txt or test_20170202_181921.txt.
You can use a JavaScript Date object to get the date and time.
P.S. Show your code for downloading the files so that I can add more to it.
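If the numeric-suffix scheme from the question is still preferred over timestamps, the shift can be sketched in shell, run before each SFTP copy lands (the function name is an assumption):

```shell
#!/bin/sh
# Sketch: before overwriting $1, shift any existing copies up one
# suffix (file.txt -> file.txt.0, file.txt.0 -> file.txt.1, ...),
# so file.txt.0 is always the previous download.
rotate() {
    f=$1
    if [ ! -e "$f" ]; then return 0; fi
    n=0
    while [ -e "$f.$n" ]; do n=$((n + 1)); done   # find first free suffix
    while [ "$n" -gt 0 ]; do
        mv "$f.$((n - 1))" "$f.$n"
        n=$((n - 1))
    done
    mv "$f" "$f.0"
}
```

Calling rotate testfile.txt before each copy leaves the most recent file as testfile.txt and older ones as testfile.txt.0, testfile.txt.1, and so on.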

How to retrieve files generated in the past 120 minutes in Linux and move them to another location

For one of my projects I have a certain challenge: I need to take all the reports generated in a certain path, and I want this to be an automated process in Linux. I know how to get the names of the files which have been updated in the past 120 minutes, but not the files directly. My requirements are:
1. Take the files that have been updated in the past 120 minutes from the path /source/folder/which/contains/files
2. Run some business logic on these files, which I can take care of
3. Move these files to /destination/folder/where/files/should/go
I know how to achieve #2 and #3 but not #1. Can someone help me with this?
Thanks in Advance.
Write a shell script. Sample below. I haven't provided the commands to get the actual list of file names as you said you know how to do that.
#!/bin/sh
files="<my file list>"
for file in $files; do
    cp "$file" "<destination_directory>"
done
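For step #1, find with -mmin can select the files themselves rather than just their names, and can move them in the same command. A sketch using the paths from the question (the -maxdepth 1 restriction to the top-level directory is an assumption):

```shell
#!/bin/sh
# Sketch: move regular files modified within the last 120 minutes
# from the source directory to the destination directory.
find /source/folder/which/contains/files -maxdepth 1 -type f -mmin -120 \
    -exec mv {} /destination/folder/where/files/should/go/ \;
```

The business logic of step #2 can run on the files in the destination, or be inserted between the find and the mv by listing the files first.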

Uploading delta changes to perforce and label

Every month I need to sync a big chunk of files from git, upload it to Perforce, and create a p4 label with all the uploaded files; from p4 I do builds. This is odd, but we cannot change it now. So far I have been uploading the files to a new directory in p4 every time and creating a label with all the uploaded files, so I get a clean label for "this month's build", even though most of the files didn't change compared to last month's upload. This happens through a shell script. Is there a way to upload only the delta changes and create a label with the old files (not uploaded, since they didn't change) plus the changed/brand-new files? Any high-level directions will help, thanks.
Do I understand correctly that "I am uploading the files to new directory in p4 every time" means that, each time you perform a 'p4 submit', the files in your changelist are actually different file names, not just newer revisions of the same files?
That is, if you did 'p4 describe' of the first changelist you submitted, would it look like:
//depot/dir/r1/A.c#1
//depot/dir/r1/B.c#1
//depot/dir/r1/C.c#1
while the next time you ran your tool, and did a 'p4 describe' of the changelist, it would look like:
//depot/dir/r2/A.c#1
//depot/dir/r2/B.c#1
//depot/dir/r2/C.c#1
If that is what you are doing, you could instead copy the files to the same location each time, so that the first time your tool ran you got:
//depot/dir/A.c#1
//depot/dir/B.c#1
//depot/dir/C.c#1
and the second time your tool ran you got:
//depot/dir/A.c#2
//depot/dir/B.c#2
//depot/dir/C.c#2
If you did things that way, then you could use the 'revertunchanged' feature of Perforce, and the files that were unchanged from the previous run of your tool would not be submitted, and the revision number would not change, so that over time you might get:
//depot/dir/A.c#7
//depot/dir/B.c#4
//depot/dir/C.c#19
Each time you run your tool, you can still create a label, because the label includes not just the file name but also the revision number, so the first label you created might include:
//depot/dir/A.c#1
//depot/dir/B.c#1
//depot/dir/C.c#1
while the 19th label you created might include:
//depot/dir/A.c#7
//depot/dir/B.c#4
//depot/dir/C.c#19
Hopefully this will get you pointed in a direction that is more suited to your goals.
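At the command level, the same-location approach might look like the following sketch (the workspace path, changelist description, and label name are assumptions; 'p4 reconcile' opens changed, added, and deleted files, and 'p4 revert -a' reverts the opened files whose content is unchanged before the submit):

```
cd /workspace/depot/dir              # same depot location every month
cp -r /path/to/git/export/. .        # overwrite with this month's files
p4 reconcile //depot/dir/...         # open changed/new/deleted files
p4 revert -a                         # revert opened-but-unchanged files
p4 submit -d "monthly drop"
p4 tag -l monthly-build-label //depot/dir/...
```

The label then records each file at whatever revision it currently has, whether or not it was part of this month's submit.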
