Good afternoon! There is a site on Kohana 3.3 that handles sending email, SMS, etc.
The question: if we launch a task (such as an email newsletter) every minute, will Minion Task start it again while it is already running? For example, if it has not completed since the previous start and is currently executing?
Minion Task does not have that functionality.
You have two options:
1. Lock from the Linux side: wrap your task in a shell script that first checks whether a lock file exists (and exits without doing anything if it does), then creates the lock file, runs your task, and deletes the lock file at the end.
2. Use some app-context storage (e.g. the Kohana file cache, the database, etc.) to hold a flag saying the task is running: in your PHP code, check for this flag, set it if the process is not already running, and clear it after the process finishes, as sketched below.
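A minimal sketch of option 2 as a Kohana 3.3 Minion task, assuming the file cache group is configured; the task name, cache key, and one-hour TTL are placeholders (the TTL is a safety net so a crashed run cannot leave the flag set forever):

    <?php
    class Task_Newsletter extends Minion_Task
    {
        protected function _execute(array $params)
        {
            $cache = Cache::instance('file');

            // Bail out if a previous run is still executing.
            if ($cache->get('newsletter_running', FALSE))
                return;

            $cache->set('newsletter_running', TRUE, 3600);

            // ... send the emails ...

            $cache->delete('newsletter_running');
        }
    }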
Hope this helps; let me know if you have any doubts about this solution.
I am using non-containerized Jenkins on a server. It gets terminated automatically each time I try to restart it.
The process killing Jenkins is /var/tmp/bbb/bbb. It gets triggered by Jenkins, and even if I kill the process and restart the Jenkins service, Jenkins triggers it again, which eventually terminates Jenkins.
I've also searched on Google but couldn't find anything useful. Please help.
(The server's htop report was attached here.)
Check your root crontab (run crontab -l as root).
I found the same process and investigated. My instance was being used for mining, via scripts in the crontab.
These scripts stop processes, stop all security services and syslog, create new users, look for SSH keys, change iptables rules, etc.
My team and I are using a shared hosting service with a limited Linux container (no root or privileged user), and we need to develop a new feature that involves long-running tasks (> 600 ms).
We thought of two possible solutions:
Breaking the task apart and having the frontend make a separate HTTP request to the server for each piece.
Using GNU screen to run a bash script with an infinite loop calling php artisan schedule:run (mimicking a cron job).
I don't feel very comfortable with the first solution; moving server logic to the browser seems wrong in my opinion.
The second solution is only a supposition (not tested); we are not sure whether the screen session might randomly stop at any time.
What would be the least unstable way to achieve our goal? Thanks.
You may already have explored this, since you imply that cron is not an option, but even unprivileged users can usually set up a crontab, which is the simplest solution in combination with the Laravel scheduler.
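For reference, the standard scheduler entry from the Laravel docs (the project path is a placeholder):

    * * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1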
If an actual cron using the scheduler is really out of the question, I do think an HTTP endpoint you can call from the browser is the next best thing. Just don't assume that an endpoint you can call from a browser can only be called from a browser ;)
Take for example https://www.easycron.com/ (no affiliation, just the first Google result). You can set up a cron job there that calls a URL to trigger those tasks on a cron interval. Internally at my company this is called the "poor man's cron" :)
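A rough sketch of such an endpoint, assuming Laravel; the route, token, and config key are hypothetical, and the token is just there so strangers can't trigger your jobs:

    <?php
    // routes/web.php -- trigger endpoint for an external cron service.
    use Illuminate\Support\Facades\Artisan;
    use Illuminate\Support\Facades\Route;

    Route::get('/cron/run/{token}', function ($token) {
        abort_unless($token === config('services.cron_token'), 403);

        Artisan::call('schedule:run'); // or dispatch a specific job here

        return 'ok';
    });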
I would agree that running a screen session is the most unreliable option, since those sessions are not started again after a server reboot, and if your infinite loop crashes it will not automatically restart.
If you go that route (or any cron route) you can add some monitoring using, for example, https://healthchecks.io/ (again, no affiliation; found via Google). It lets you define a cron schedule and gives you a URL to call after the job finishes; if your job does not call that URL according to the schedule, you will be notified. Good to have as insurance.
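For example, a monitored crontab entry might look like this (the script path is made up, and the UUID comes from the ping URL healthchecks.io gives you):

    30 4 * * * /usr/local/bin/run-tasks && curl -fsS --retry 3 https://hc-ping.com/your-uuid-here > /dev/null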
I have a shell script scheduled in cron that fires emails depending on a condition.
I have modified the sender email address.
Now, the issue: while I tested the script in different test environments, it is probably still active somewhere and firing emails from a test environment. I have checked the crontabs in the test environments, but nowhere did I find it scheduled.
Can you help me track down where those emails are being triggered from? Which instance, which cron, etc.?
Thanks in advance.
I suggest consulting the cron log file. This file records when and what programs cron starts on behalf of which user.
For FreeBSD this is /var/log/cron. On Linux, as always, it may depend on your distro, the cron implementation, and the phase of the moon :-) On Debian/Ubuntu, cron lines typically end up in /var/log/syslog (try grep CRON /var/log/syslog); running man cron might also point you at the right file in its FILES section.
We are using a dedicated Amazon Ubuntu EC2 instance as a cron server, which executes 16 cron jobs at different times: 10 cron jobs in the morning between 4:15 and 7:15, and the rest between 23:00 and 23:50. I get the results via email. I want to configure something that sends an email at the end of the day listing the cron jobs that executed successfully and the ones that failed.
I have a Jenkins-configured Ubuntu instance for auto-building the Dev, Beta, Staging & Live environments. Can I add these cron jobs (shell scripts) as external jobs in Jenkins and monitor them? Is that possible?
Definitely possible! You can monitor external cron jobs as described here:
https://wiki.jenkins-ci.org/display/JENKINS/Monitoring+external+jobs
You can also add cron-job-like behavior to Jenkins by creating a freestyle software project and adding "Execute shell" as the build step.
It's a bit more convenient since you can also trigger the execution via Jenkins ("Build now").
You might be able to combine the Jenkins "monitor an external job" project type with a matrix project. At the very least, the former will enable you to monitor the cron jobs individually.
Alternatively, you could have the last monitored cron job of the day trigger a build of a project that checks the status of all the cron jobs (for example, by retrieving and comparing the build numbers of the last and the last successful builds) and sends an email accordingly. The email plugin might be useful for the latter.
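As a rough illustration of that check in PHP (the Jenkins host and job name are placeholders; Jenkins exposes both build numbers as plain-text endpoints):

    <?php
    // A job is "green" if its last build is also its last successful build.
    $base = 'http://jenkins.example.com/job/nightly-report';
    $last = trim(file_get_contents($base . '/lastBuild/buildNumber'));
    $good = trim(file_get_contents($base . '/lastSuccessfulBuild/buildNumber'));
    echo ($last === $good) ? "nightly-report: OK\n" : "nightly-report: FAILED\n";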
Check CPAN or do some web digging for a shell or Perl script for managing cron jobs, and extend its behaviour to do some reporting that you can render as HTML. Alternatively, write a servlet and some function calls to do just that.
This becomes your own standalone monitoring application, which can sit in Jenkins or be deployed independently. If you choose to add it to Jenkins, add the reporting HTML file and its scripts to the container holding Jenkins' deployed web files; a word of advice: place your files and scripts in a separate container.
Add a hyperlink to the Jenkins index HTML that loads your reporter. Now restart Tomcat and go from there.
Another option could be to take a look at Cronitor (https://cronitor.io). It basically boils down to a tracking beacon that uses HTTP requests to ping when a cron job or scheduled task starts and ends.
You'll be notified if your job doesn't run on schedule, or if it runs too long or too short, etc. You can also configure it to send alerts via email or SMS, as well as Slack, HipChat, PagerDuty, and others.
The situation is as follows:
A series of remote workstations collect field data and upload it to a server via FTP. The data is sent as a CSV file, which is stored in a unique directory for each workstation on the FTP server.
Each workstation sends a new update every 10 minutes, overwriting the previous data. We would like to somehow concatenate or store this data automatically. The workstations' processing power is limited and cannot be extended, as they are embedded systems.
One suggestion was to run a cron job on the FTP server; however, the hosting terms of service only allow cron jobs at 30-minute intervals, as it is shared hosting. Given the number of workstations uploading and the 10-minute interval between uploads, the 30-minute limit between cron runs looks like a problem.
Is there any other approach that might be suggested? The available server-side scripting languages are Perl, PHP, and Python.
Upgrading to a dedicated server might be necessary, but I'd still like to get input on how to solve this problem in the most elegant manner.
Most modern Linux systems support inotify, which lets your process know when the contents of a directory have changed, so you don't even need to poll.
Edit: with regard to the comment below from Mark Baker:
"Be careful though, as you'll be notified as soon as the file is created, not when it's closed. So you'll need some way to make sure you don't pick up partial files."
That will happen with the inotify watch you set at the directory level; the way to make sure you don't then pick up a partial file is to set a further inotify watch on the new file and look for the IN_CLOSE event, so you know the file has been written completely.
Once your process has seen this, you can delete the inotify watch on this new file, and process it at your leisure.
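If PHP is an option on the server, here is a minimal sketch using the PECL inotify extension; watching the directory for IN_CLOSE_WRITE events gives the same guarantee (the file has been completely written) without needing a second per-file watch. The directory path and handler are placeholders:

    <?php
    $fd = inotify_init();
    inotify_add_watch($fd, '/home/ftp/uploads', IN_CLOSE_WRITE);

    while (true) {
        // inotify_read() blocks until at least one event is available.
        foreach (inotify_read($fd) as $event) {
            $path = '/home/ftp/uploads/' . $event['name'];
            process_csv($path); // hypothetical handler; the file is complete here
        }
    }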
You might consider a persistent daemon that keeps polling the target directories:
    use Fcntl qw(:flock);

    # Take an exclusive, non-blocking lock; if another copy of the daemon
    # already holds it, exit immediately.
    open(my $lock, '>', '/tmp/poller.lock') or exit;
    flock($lock, LOCK_EX | LOCK_NB) or exit;

    while (1) {
        if (new_files()) {          # scan the upload directories
            process_new_files();    # concatenate/store the new CSVs
        }
        sleep(60);
    }
Then your cron job can just try to start the daemon every 30 minutes. If the daemon can't grab the lockfile, it just dies, so there's no worry about multiple daemons running.
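The matching crontab entry can be as simple as (the script path is a placeholder):

    */30 * * * * /usr/bin/perl /home/user/poller.pl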
Another approach to consider would be to submit the files via HTTP POST and then process them via a CGI. This way, you guarantee that they've been dealt with properly at the time of submission.
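A bare-bones PHP receiver for that approach might look like this (the form field name and destination directory are made up for the example):

    <?php
    // The upload is already complete by the time PHP sees it.
    if (isset($_FILES['csv']) && $_FILES['csv']['error'] === UPLOAD_ERR_OK) {
        $dest = '/home/data/' . uniqid('station_', TRUE) . '.csv';
        move_uploaded_file($_FILES['csv']['tmp_name'], $dest);
        // ... process or append $dest here ...
    }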
The 30-minute limitation is pretty silly really. Starting processes in Linux is not an expensive operation, so if all you're doing is checking for new files there's no good reason not to do it more often than that. We have cron jobs that run every minute and they don't have any noticeable effect on performance. However, I realise it's not your rule, and if you're going to stick with that hosting provider you don't have a choice.
You'll need a long-running daemon of some kind. The easy way is to just poll regularly, and that's probably what I'd do. inotify, which notifies you as soon as a file is created, is a better option.
You can use inotify from Perl with Linux::Inotify, or from Python with pyinotify.
Be careful though, as you'll be notified as soon as the file is created, not when it's closed. So you'll need some way to make sure you don't pick up partial files.
With polling it's less likely you'll see partial files, but it will happen eventually and will be a nasty hard-to-reproduce bug when it does happen, so better to deal with the problem now.
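One way to deal with it under polling is to treat a file as complete only once its size has stopped changing between polls; a sketch in PHP, where the caller keeps the $last_sizes array across polls:

    <?php
    // Returns TRUE only once a file's size is unchanged since the previous poll.
    function is_stable($path, array &$last_sizes)
    {
        clearstatcache(TRUE, $path);
        $size = filesize($path);
        $stable = isset($last_sizes[$path]) && $last_sizes[$path] === $size;
        $last_sizes[$path] = $size;
        return $stable;
    }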
If you're looking to stay with your existing FTP server setup, then I'd advise using something like inotify or a daemonized process to watch the upload directories. If you're OK with moving to a different FTP server, you might take a look at pyftpdlib, which is a Python FTP server library.
I've been part of the pyftpdlib dev team for a while, and one of the more common requests was for a way to "process" files once they've finished uploading. Because of that, we created an on_file_received() callback method that is triggered on completion of an upload (see issue #79 on our issue tracker for details).
If you're comfortable in Python, it might work out well to run pyftpdlib as your FTP server and run your processing code from the callback method. Note that pyftpdlib is asynchronous, not multi-threaded, so your callback must not block. If you need long-running tasks, I would recommend using a separate Python process or thread for the actual processing work.