There is a file that already has default content in it. I need to append a set of commands to this file and keep them there permanently. But my changes are wiped twice every hour by the Puppet configuration. I need some way to keep my changes persistent in the file. Cron cannot be used, because the cron file also gets flushed by Puppet.
The first time we run the sync cronjob (product/content sync), it runs properly and creates a media dump in the admin tab.
From the next run onwards, it just shows as successful, but the sync does not actually happen.
When I go back and clear the media dump from the admin tab, it starts working again and creates a new media dump.
So every time, I am forced to manually clear the media dump to make this sync job work.
Please advise.
CatalogVersionSyncJob is designed to run only once per instance. So if we create a sync job instance via ImpEx/HMC, it will work the first time, but on the second execution it won't pick up any new or modified items and nothing will be synced. This means the system needs a new instance for each sync execution!
If we execute the catalog sync from the Catalog Management Tool (HMC/Backoffice), then each time it internally creates a new instance of the selected sync job. Hence, it works.
To solve this, write a custom job that does the same thing the HMC/Backoffice does internally: create a new instance, assign the sync job, and execute it.
For more information, refer to configure-catalog-sync-cronjob-Hybris.
I've encountered this issue, and the workaround was to create another CronJob that would remove those media dumps before the sync runs.
At a high level, we have a CompositeCronJob that does two things in sequence (there are actually more, but I'll just mention two for the sake of this issue):
Remove the media dump from the Sync CronJob
Sync CronJob
Alternate title: How can I prevent the deletion of a file if rsync (or any other app) currently has the file open for reading?
I have a cron on my live system that dumps a database to a backup file every 30 min.
On my backup system I have a cron that runs rsync to fetch that backup file, also every 30 min.
I thought about creating lock files and using them to tell the remote side whether it's safe to fetch the file, or vice versa. I also thought about having the dump script create file names in rotation and create a "name" file that the remote fetches to know which file is "safe" to retrieve. I know I can synchronize the cron jobs, but I'm wondering if there is a better, safer way.
Is there some OS feature I can use to accomplish this?
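For example, the rotation idea I have in mind would look something like the following sketch in Python (the dump command, paths, and file names are all placeholders): the dump script writes to a timestamped file and then atomically updates a small "name" file, so the remote side only ever fetches a dump that is already complete.

# dump_rotate.py -- sketch of the "rotation + name file" idea.
# The dump command and the paths are placeholders for illustration.
import os
import subprocess
import time

BACKUP_DIR = "/var/backups/db"                 # placeholder backup directory
POINTER = os.path.join(BACKUP_DIR, "latest.txt")

def dump():
    name = "dump-%s.sql" % time.strftime("%Y%m%d-%H%M%S")
    path = os.path.join(BACKUP_DIR, name)
    tmp = path + ".partial"
    # Write the dump under a temporary name first (command is a placeholder).
    with open(tmp, "wb") as out:
        subprocess.check_call(["mysqldump", "mydb"], stdout=out)
    os.rename(tmp, path)                        # atomic on the same filesystem

    # Atomically publish which file is now safe to fetch.
    ptr_tmp = POINTER + ".tmp"
    with open(ptr_tmp, "w") as f:
        f.write(name + "\n")
    os.rename(ptr_tmp, POINTER)

if __name__ == "__main__":
    dump()

The backup system would then fetch latest.txt first and rsync only the file it names.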
I have a two-part question about making our cluster more secure against accidental changes and deletions by running cluster jobs. I don't care about restricting the job's read or execute permissions; basically, I want the job to not be able to delete or change any files outside of its initial directory.
1) Is it possible to chroot a job submitted to Sun Grid so the job can only write to the directory specified when the job is submitted?
2) Is it possible to do the above but also allow read-only access to the rest of the system with all paths intact? Essentially, to have the filesystem appear normal and complete, but the job can only write to and delete from the directory it started in.
I'm writing a Rails 3 application that needs to be able to trigger modifications to Unix system config files. I'd like to insulate the file modifications from the consumer side by running them in a background process. I've considered writing out a temp file in Rails and then copying the file with a bash script, but that doesn't really insulate the system. I've also considered pulling from the database manually with a cron-based script and updating the configs.
But what I would really like is a component that can hook into the Rails environment, read out what is needed from the database, and update the config files. This process needs to be run as root, because the config files mostly live in /etc/whatever.
Any suggestions?
Thanks!
My brother, the network admin, says you should write a Ruby/Perl/whatever script that validates the input and actually makes the modification. You could call this script from Rails with something like System.exec("/usr/bin/sudo", ["/path/to/script"] + parameters). You will still want to sanitize parameters.
As far as getting the data from the database, you could make a new SQL connection in the script, getting the connection info from database.yml in Rails. Note that data coming from the database should also be validated, since you are running as root.
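As a rough sketch of what that privileged helper might look like (in Python here; the script name, the config path, and the validation rules are all made up), it validates its arguments before touching anything under /etc and swaps the new file in atomically:

#!/usr/bin/env python
# update_config.py -- hypothetical privileged helper, run via sudo.
# The config path and the key/value validation rules are assumptions.
import os
import re
import sys

CONFIG = "/etc/whatever/app.conf"       # placeholder: file being managed
VALID_KEY = re.compile(r"^[a-z_]+$")
VALID_VALUE = re.compile(r"^[\w.\-]+$")

def main(argv):
    if len(argv) != 2:
        sys.exit("usage: update_config.py KEY VALUE")
    key, value = argv
    if not VALID_KEY.match(key) or not VALID_VALUE.match(value):
        sys.exit("refusing to write suspicious input")

    # Rewrite the file to a temporary copy, then swap it in atomically.
    tmp = CONFIG + ".tmp"
    with open(CONFIG) as src, open(tmp, "w") as dst:
        for line in src:
            if not line.startswith(key + "="):
                dst.write(line)
        dst.write("%s=%s\n" % (key, value))
    os.rename(tmp, CONFIG)

if __name__ == "__main__":
    main(sys.argv[1:])

Rails would invoke it through sudo with the sanitized key/value as arguments, and the sudoers entry can be restricted to just that one script.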
The situation is as follows:
A series of remote workstations collect field data and upload it to a server via FTP. The data is sent as a CSV file, which is stored in a unique directory for each workstation on the FTP server.
Each workstation sends a new update every 10 minutes, overwriting the previous data. We would like to somehow concatenate or store this data automatically. The workstations' processing capability is limited and cannot be extended, as they are embedded systems.
One suggestion offered was to run a cron job on the FTP server; however, there is a terms-of-service restriction that only allows cron jobs at 30-minute intervals, since it's shared hosting. Given the number of workstations uploading and the 10-minute interval between uploads, the cron job's 30-minute limit between runs looks like it might be a problem.
Is there any other approach that might be suggested? The available server-side scripting languages are perl, php and python.
Upgrading to a dedicated server might be necessary, but I'd still like to get input on how to solve this problem in the most elegant manner.
Most modern Linux systems support inotify to let your process know when the contents of a directory have changed, so you don't even need to poll.
Edit: with regard to the comment below from Mark Baker:
"Be careful though, as you'll be notified as soon as the file is created, not when it's closed. So you'll need some way to make sure you don't pick up partial files."
That will happen with the inotify watch you set on the directory level - the way to make sure you then don't pick up the partial file is to set a further inotify watch on the new file and look for the IN_CLOSE event so that you know the file has been written to completely.
Once your process has seen this, you can delete the inotify watch on this new file, and process it at your leisure.
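A minimal sketch of this in Python with pyinotify (the upload path is a placeholder): watching the directory for IN_CLOSE_WRITE gives you the "file has been completely written" signal directly, which avoids even the second per-file watch described above.

# Minimal pyinotify sketch: react only once an uploaded file has been
# closed after writing, so partial files are never picked up.
# The watch path is a placeholder for your upload directory.
import pyinotify

class Handler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        # event.pathname is the file that has just been completely written
        print("ready to process:", event.pathname)

wm = pyinotify.WatchManager()
notifier = pyinotify.Notifier(wm, Handler())
wm.add_watch("/home/ftp/uploads", pyinotify.IN_CLOSE_WRITE, rec=True)
notifier.loop()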
You might consider a persistent daemon that keeps polling the target directories:
grab_lockfile() or exit();      # only one copy of the daemon may run at a time
while (1) {
    if (new_files()) {          # placeholder: look for newly uploaded files
        process_new_files();    # placeholder: concatenate/store them
    }
    sleep(60);                  # poll once a minute
}
Then your cron job can just try to start the daemon every 30 minutes. If the daemon can't grab the lockfile, it just dies, so there's no worry about multiple daemons running.
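If you'd rather do this in Python (one of the available languages), the lockfile grab can be a non-blocking flock. A rough sketch, where the lock path, directories, and the "processing" step are placeholders:

# poll_daemon.py -- sketch of the lockfile + poll loop in Python.
# Paths and the processing step are placeholders for illustration.
import fcntl
import glob
import os
import shutil
import sys
import time

UPLOAD_DIR = "/home/ftp/uploads"       # placeholder: per-workstation dirs live here
ARCHIVE_DIR = "/home/ftp/archive"      # placeholder: where processed CSVs go

def process_new_files():
    # Placeholder processing: move each CSV aside under a timestamped name.
    for path in glob.glob(os.path.join(UPLOAD_DIR, "*", "*.csv")):
        stamp = time.strftime("%Y%m%d-%H%M%S")
        dest = os.path.join(ARCHIVE_DIR, "%s-%s" % (stamp, os.path.basename(path)))
        shutil.move(path, dest)

lock = open("/var/run/poll_daemon.lock", "w")
try:
    # Non-blocking exclusive lock: if another daemon holds it, exit quietly.
    fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    sys.exit(0)

while True:
    process_new_files()
    time.sleep(60)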
Another approach to consider would be to submit the files via HTTP POST and then process them via a CGI. This way, you guarantee that they've been dealt with properly at the time of submission.
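A bare-bones sketch of that CGI receiver in Python, using the standard cgi module (the field names and the storage directory are assumptions): each workstation POSTs its CSV, and the script appends it to a per-workstation file before replying.

#!/usr/bin/env python
# upload.cgi -- hypothetical CGI receiver for the workstation data.
# Field names ("station", "data") and the storage path are assumptions.
import cgi
import os
import re

STORE = "/home/user/data"   # placeholder: where concatenated CSVs are kept

form = cgi.FieldStorage()
station = form.getfirst("station", "")
payload = form.getfirst("data", "")

print("Content-Type: text/plain")
print()

if not re.match(r"^[A-Za-z0-9_-]+$", station) or not payload:
    print("rejected")
else:
    # Append this upload to the workstation's running CSV file.
    with open(os.path.join(STORE, station + ".csv"), "a") as f:
        f.write(payload)
        if not payload.endswith("\n"):
            f.write("\n")
    print("ok")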
The 30 minute limitation is pretty silly really. Starting processes in linux is not an expensive operation, so if all you're doing is checking for new files there's no good reason not to do it more often than that. We have cron jobs that run every minute and they don't have any noticeable effect on performance. However, I realise it's not your rule and if you're going to stick with that hosting provider you don't have a choice.
You'll need a long-running daemon of some kind. The easy way is to just poll regularly, and that's probably what I'd do. Inotify, so that you get notified as soon as a file is created, is a better option.
You can use inotify from perl with Linux::Inotify, or from python with pyinotify.
Be careful though, as you'll be notified as soon as the file is created, not when it's closed. So you'll need some way to make sure you don't pick up partial files.
With polling it's less likely you'll see partial files, but it will happen eventually and will be a nasty hard-to-reproduce bug when it does happen, so better to deal with the problem now.
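One common way to reduce that risk when polling (just a sketch; the settle time is arbitrary) is to only pick up files whose modification time is older than some settle period, on the assumption that a file still being written will have been touched recently:

# Skip files that are (probably) still being written: only pick up files
# whose mtime is older than a settle period. The 90-second threshold is
# arbitrary and only for illustration.
import os
import time

SETTLE_SECONDS = 90

def completed_files(directory):
    now = time.time()
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > SETTLE_SECONDS:
            yield path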
If you're looking to stay with your existing FTP server setup, then I'd advise using something like inotify or a daemonized process to watch the upload directories. If you're OK with moving to a different FTP server, you might take a look at pyftpdlib, which is a Python FTP server library.
I've been a part of the dev team for pyftpdlib for a while, and one of the more common requests has been for a way to "process" files once they've finished uploading. Because of that, we created an on_file_received() callback method that's triggered on completion of an upload (see issue #79 on our issue tracker for details).
If you're comfortable in Python then it might work out well for you to run pyftpdlib as your FTP server and run your processing code from the callback method. Note that pyftpdlib is asynchronous and not multi-threaded, so your callback method can't be blocking. If you need to run long-running tasks I would recommend a separate Python process or thread be used for the actual processing work.
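A rough sketch of that setup, assuming a recent pyftpdlib (the credentials, port, and home directory are placeholders), handing each completed upload to a separate process so the callback never blocks:

# Sketch: run pyftpdlib and hand each completed upload to a worker
# process so the asynchronous FTP loop is never blocked.
# Credentials, port, and paths are placeholders.
import multiprocessing

from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

def process_upload(path):
    # Placeholder: concatenate/import the CSV, move it aside, etc.
    print("processing", path)

class Handler(FTPHandler):
    def on_file_received(self, file):
        # Called when an upload completes; hand off instead of blocking here.
        multiprocessing.Process(target=process_upload, args=(file,)).start()

authorizer = DummyAuthorizer()
authorizer.add_user("station", "secret", "/home/ftp/uploads", perm="elradfmw")
Handler.authorizer = authorizer
server = FTPServer(("0.0.0.0", 2121), Handler)
server.serve_forever()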