Is it possible to run a cron job every 86410 seconds, i.e. every day and 10 seconds?
I have a service that takes 24 hours to process the data from the moment it is called. To make sure I am giving the service enough time to finish, I don't want to call it every 24 hours exactly; I need to call it every 24 hours and a few seconds.
Is this possible with cron?
It is worth noting that if you do restart the processing every 86410 seconds, your start times will drift over the days, 10 seconds later each run: if you originally scheduled the process to start at 08:00, after about a year it would be starting at 09:00, and after roughly 23.7 years it would go all the way around the clock and start at 08:00 again.
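For concreteness, the drift arithmetic works out like this (a throwaway Python check, nothing to do with cron itself):

```python
# Each 86410-second cycle starts 10 seconds later on the wall clock.
drift_per_run = 86410 - 86400                 # 10 seconds of drift per run
runs_per_hour = 3600 // drift_per_run         # runs needed to drift one hour
full_circle_runs = 24 * 3600 // drift_per_run # runs (roughly days) to drift 24 h

print(runs_per_hour)                   # 360 runs, i.e. roughly a year per hour of drift
print(full_circle_runs / 365.25)       # roughly 23.7 years to come full circle
```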
Cron was definitely not designed for that kind of thing :-)
But if you are running a recent Linux distribution, you can use systemd timer units to do exactly that. You may be familiar with systemd service units, as this is how you write services for modern Linux systems, but systemd can do a lot more, and one of those things is handling schedules that cron can't express.
Supposing you run your processing job as a systemd service, it might look something like this:
/etc/systemd/system/data-processing.service
[Unit]
Description=Process some data
[Service]
# simple is the default Type, but let's be explicit
# (note that systemd does not allow comments on the same line as a directive)
Type=simple
ExecStart=/usr/bin/my-data-processor
You can then set up a timer unit to launch this service every 86410 seconds very simply - create a timer unit file in /etc/systemd/system/data-processing.timer with this content:
[Unit]
Description=start processing every day and 10 seconds
[Timer]
# Start immediately after boot
OnBootSec=0
# Start the next processing run 86410 seconds after the last start
OnUnitActiveSec=86410
# Tighten the accuracy from the default of 1 minute; otherwise the
# service might start up to a minute after the 86410 seconds elapse
AccuracySec=1
[Install]
WantedBy=timers.target
Then just enable and start the timer unit - but not the service. If the service itself is enabled, you probably want to disable it; the timer will take care of running it as needed.
systemctl daemon-reload
systemctl enable data-processing.timer
systemctl start data-processing.timer
Looking at it a bit more: you mentioned that you want to start the next run of the service after the previous run has completed. But what happens if processing doesn't take exactly 86400 seconds to finish? If we restate the requirement as "restart the data processing service after it finishes running, but give it 10 seconds to cool down first", then you don't need a timer at all - you just need to have systemd restart the service after a 10-second cooldown, whenever it is done.
We can change the service unit above to do exactly that:
[Unit]
Description=Process some data
[Service]
Type=simple
ExecStart=/usr/bin/my-data-processor
Restart=always
RestartSec=10
Related
I have to debug an application that always gets killed with a SIGABRT signal due to some mysterious watchdog timeout in systemd after exactly 3 minutes. Is there any logging that would help me find out which of the many systemd parameters triggers the abort?
The application needs to send watchdog keep-alive notifications to systemd. There are several ways of doing this.
The watchdog interval is set in the systemd service file, and the line looks like
WatchdogSec=4s
3 minutes seems like a long time, so it looks like the app is not feeding the watchdog.
See https://www.freedesktop.org/software/systemd/man/sd_notify.html for documentation on how to feed the watchdog.
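sd_notify is a tiny datagram protocol: the service writes short messages such as WATCHDOG=1 to the Unix socket that systemd passes in the NOTIFY_SOCKET environment variable. If you cannot (or don't want to) link against libsystemd, a minimal Python sketch of the same protocol could look like this (the function name and structure are my own):

```python
import os
import socket

def sd_notify(message: bytes) -> bool:
    """Send an sd_notify message (e.g. b"WATCHDOG=1") to systemd.

    Returns False when not running under systemd (NOTIFY_SOCKET unset).
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):
        # A leading "@" denotes an abstract-namespace socket
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, addr)
    return True

# A service with WatchdogSec=4s must call sd_notify(b"WATCHDOG=1")
# more often than every 4 seconds (half the interval is customary).
```

From a shell script the same message can be sent with the systemd-notify tool, e.g. systemd-notify WATCHDOG=1.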
I've made a program that communicates with hardware over CAN bus. When I start my program via CLI, everything seems to run fine, but starting the process via a Systemd service leads to paused traffic
I'm making a system that communicates with hardware over CAN bus. When I start my program via CLI, everything seems to run fine, I'll quantify this in a second.
Then I created systemd services, like below, to autostart the process on system power up.
By plotting log timestamps, we noticed that there are periodic pauses in the CAN traffic, anywhere between 250ms to a few seconds, every 5 or so minutes (not a regular rate), within a 30 minute window. If we switch back to starting up via CLI, we might get one 100ms drop over a 3 hour period, essentially no issue.
Technically, we can tolerate pauses like this in the traffic, but the issue is that we don't understand the cause of these dropped messages (run via systemd vs starting up manually via command line).
Does anyone have an inkling what's going on here?
Other notes:
- We don't use any environment variables or parameters (read in via config file).
- We've just watched CAN traffic with nothing running, no drops, so we're pretty confident it's not our hardware/socketCAN driver
- We've tried starting via services on an Arch laptop and didn't see this pausing behavior.
[Unit]
Description=Simple service to start CAN C2 process
[Service]
Type=simple
User=dzyne
WorkingDirectory=/home/thisguy/canProg/build/bin
ExecStart=/home/thisguy/canProg/build/bin/piccolo
Restart=on-failure
# or always, on-abort, etc
RestartSec=5
[Install]
WantedBy=multi-user.target
I'd expect no pauses between messages larger than about 20-100 ms (our tolerance) when run via a systemd service.
I have an Upstart service job that has many (~100) instances that need to be started. Each of them is a resource-heavy process that does a lot of disk reading/writing during startup. When all of them start or respawn at the same time, they cause trouble due to excessive disk read/write requests.
I need a way to limit the number of instances that Upstart tries to start or respawn simultaneously. For example, is there a way to make Upstart hold off launching another instance until, say, 30 seconds after the startup or respawn of the previous instance has begun?
You can start them in sequence by using
start on started otherUpstartService
You can use pre-start or post-stop to pause around each job, e.g. post-stop exec sleep 5
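Putting the two together, a sketch of what such an instance job could look like (the file name, job name, command and the 30-second figure are all made up for illustration):

```
# /etc/init/heavy-worker.conf -- hypothetical instance job
instance $N
respawn
exec /usr/bin/heavy-worker --id "$N"
# pause after each instance exits, so a mass respawn is spread out over time
post-stop exec sleep 30
```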
I am starting with Heroku and I have a webapp with a part that needs to run once a week (preferably on Mondays). I have been reading about workers: here and here and here... But I still have many doubts:
1) These workers run in the background without strict control and can't be scheduled to run once a week - or am I wrong? If I am wrong, how can I schedule them?
2) To make them work, what exactly do I need to do? Just type
web: node webApp.js
worker: node worker.js
in the Procfile (where worker.js is the part of the program that needs to run only once a week)? And that is all? Nothing else? That easy?
3) And the last one, but the most important: the thorny matter of money. One dyno equals one worker, so if you have a dyno running the web process you need to buy another one for the worker, no? And on the price list an extra dyno costs $34.50 (€27.87). That isn't cheap, so I want to know if I am right: is it necessary to buy a second dyno if you want to run a worker?
You might find that the Heroku Scheduler add-on (https://devcenter.heroku.com/articles/scheduler) is a 'good enough' low-cost option. You are charged only for the hours your scheduled tasks actually run, so if you have a regular job that finishes quickly, it works out much cheaper than a continuous worker process.
It's not as flexible with scheduling as other options: it can be set up to run a task at a specific time every day, or hourly. So if your task should run only on Mondays, have the scheduler run it daily, check the day of the week inside worker.js, and exit immediately on the other days.
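The day-of-week guard is a one-liner in any language; here is a Python sketch of the pattern (the worker in the question is Node, where the equivalent test is new Date().getDay() === 1, since JavaScript counts Sunday as 0):

```python
import datetime

def should_run_today(today=None):
    """The Scheduler fires daily; the job itself proceeds only on Mondays."""
    today = today or datetime.date.today()
    return today.weekday() == 0  # in Python, Monday is weekday 0

def main():
    if not should_run_today():
        return  # not Monday: exit immediately, using only seconds of dyno time
    # ... the actual weekly work would go here ...
```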
I have a timer ticking every minute in a C# Windows service. The PC runs 24 hours a day and is never shut down. My workflow is that the user sets a time at which my service should run some processes, e.g. 14:20; when the timer hits 14:20, I run some SQL functions. Will there be any performance impact if I run the timer like that 24 hours a day?
Is there a better way?
You could use something like Quartz.NET: http://quartznet.sourceforge.net/