I just can't figure out how to configure a cron job in Quartz with an initial delay. I need something that runs every hour with an initial delay of 10 minutes. This is what I have so far:
"* * 0/1 * * ?"
Here's a late answer; hopefully it helps others. I solved the issue by having two scheduled methods in my service class:
@EnableScheduling
public class DeviceService {

    @Scheduled(initialDelayString = "${devices.update.initial}", fixedDelay = 2592000000L)
    public void initialUpdateDevices() {
        updateDevices();
    }

    @Scheduled(cron = "${devices.update.cron}")
    public void cronUpdateDevices() {
        updateDevices();
    }

    private void updateDevices() {
        ...
    }
}
The initial delay and the cron expression are set in application.properties. The fixedDelay is there because Spring doesn't allow initialDelay on its own. I set it to 2592000000 ms, which is 30 days; in our application, the potential extra update doesn't do any harm.
In application.properties:
devices.update.initial = 600000
devices.update.cron = 0 30 1 * * *
Initially run after 10 minutes (600000 ms) and then every night at 01:30.
In application-test.properties for unit testing:
devices.update.initial = 86400000
devices.update.cron = 0 30 1 24 12 *
None of our unit tests takes a day to execute, so 86400000 milliseconds is a safe bet. The cron expression "0 30 1 24 12 *" fires on Christmas Eve night, when people should be dreaming of nice things.
Please help me run the following loop precisely every 10 seconds in Windows VC++.
It should initially start at a time like 12:12:40.000, ignore the milliseconds taken by the work in the commented section, and start the next iteration at 12:12:50.000, and so on, every 10 seconds precisely.
#include <windows.h>
#include <sys/timeb.h>

void controlloop()
{
    struct timeb start, end;
    int elapsedtime = 0;
    int sleeptime = 0;

    // Wait for the wall clock to reach a 10-second boundary.
    while(1)
    {
        ftime(&start);
        if(start.time % 10 == 0)
            break;
        else
            Sleep(100);
    }

    while(1)
    {
        ftime(&start);
        if(start.time % 10 == 0)
        {
            // some work here which will roughly take 100 ms
            ftime(&end);
            elapsedtime = (int)(1000.0 * (end.time - start.time) + (end.millitm - start.millitm));
            if(elapsedtime > 10000)
            {
                sleeptime = 0;
            }
            else
            {
                sleeptime = 10000 - elapsedtime;
            }
        }
        Sleep(sleeptime);
    } // while(1)
}
The Sleep approach only guarantees that you sleep at least 10 seconds. After that, your thread becomes eligible for scheduling and will be considered again on the next quantum. You are still subject to the priority of any other threads on the system, the number of logical cores, etc. You are also subject to the resolution of the scheduler's quantum, which is ~15 ms by default. You can change it with timeBeginPeriod, but that has system-wide power implications.
For more information on Windows scheduling see Microsoft Docs. For more on the power issues, see this blog post.
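If you do go the timeBeginPeriod route, here is a minimal sketch of how it is usually scoped (this assumes a Windows build linking winmm.lib; the 1 ms value is just an example):

#include <windows.h>
#include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod
#pragma comment(lib, "winmm.lib")

void sleep_with_finer_granularity()
{
    // Request ~1 ms scheduler granularity. This is a system-wide setting with
    // power implications, so keep its scope as small as possible.
    timeBeginPeriod(1);

    Sleep(10 * 1000);   // still "at least" 10 seconds, but the wake-up is closer to it

    // Always restore the previous timer resolution.
    timeEndPeriod(1);
}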
On Windows, the best option is to use the high-frequency performance counter via QueryPerformanceCounter. Use QueryPerformanceFrequency to convert between counter ticks and seconds.
LARGE_INTEGER qpcFrequency;
QueryPerformanceFrequency(&qpcFrequency);

LARGE_INTEGER startTime;
QueryPerformanceCounter(&startTime);

LARGE_INTEGER tenSeconds;
tenSeconds.QuadPart = startTime.QuadPart + qpcFrequency.QuadPart * 10;

while (true)
{
    LARGE_INTEGER currentTime;
    QueryPerformanceCounter(&currentTime);
    if (currentTime.QuadPart >= tenSeconds.QuadPart)
        break;
}
The timer resolution for QPC is typically close to the clock speed of your CPU.
If you want to run a thread for as close to 10 seconds as you can while still yielding the processor, use:
LARGE_INTEGER qpcFrequency;
QueryPerformanceFrequency(&qpcFrequency);

LARGE_INTEGER startTime;
QueryPerformanceCounter(&startTime);

LARGE_INTEGER tenSeconds;
tenSeconds.QuadPart = startTime.QuadPart + qpcFrequency.QuadPart * 10;

while (true)
{
    LARGE_INTEGER currentTime;
    QueryPerformanceCounter(&currentTime);
    if (currentTime.QuadPart >= tenSeconds.QuadPart)
    {
        // do a thing
        tenSeconds.QuadPart = currentTime.QuadPart + qpcFrequency.QuadPart * 10;
    }
    SwitchToThread();   // yield the rest of the time slice so other threads can run
}
This is not really the most efficient way to do a periodic timer, but you asked for precision, not efficiency.
If you are using VS 2015 or later, you can use the C++11 type std::chrono::high_resolution_clock, which uses QPC for its implementation. Older versions of Visual C++ used 'file system time', which brings you back to the same resolution problem you had with ftime.
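As a rough sketch of what that looks like with <chrono> (in the Microsoft STL, high_resolution_clock is an alias for steady_clock, which is backed by QPC):

#include <chrono>
#include <thread>

void wait_ten_seconds()
{
    using clock = std::chrono::high_resolution_clock;
    const auto deadline = clock::now() + std::chrono::seconds(10);

    // Spin until the deadline, yielding the rest of the time slice on each pass
    // (the same idea as the SwitchToThread loop above).
    while (clock::now() < deadline)
    {
        std::this_thread::yield();
    }
}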
I am dealing with some Rust code that works with durations of days, but the implementation of Duration::days(n) is, per the documentation, n * 24 * 60 * 60 seconds, which isn't n days, because not all days are 24 * 60 * 60 seconds long.
This behaviour is well documented:
pub fn days(days: i64) -> Duration
Makes a new Duration with given number of days. Equivalent to
Duration::seconds(days * 24 * 60 * 60) with overflow checks. Panics
when the duration is out of bounds.
Is there a way with Rust Chrono to get a duration that is, strictly, 1 day, rather than a fixed number of seconds, and that is compatible with the DateTime types? Not all days contain the same number of seconds; seconds and days are quite different units. If there were such a function, the following would always give a result that is the same time of day on the following day:
let start = Local::now();
let one_day_later = start + function_that_returns_a_duration_of_days(1);
Again, Duration::days(1) is not such a function, because it returns 1 * 24 * 60 * 60 seconds rather than 1 day.
For example, with TZ set to America/Denver, the following:
let start = Local.ymd(2019, 3, 10).and_hms(0, 0, 0);
println!("start: {}", start);
let end = Local.ymd(2019, 3, 11).and_hms(0, 0, 0);
println!("end: {}", end);
let elapsed_seconds = end.timestamp() - start.timestamp();
println!("elapsed_seconds: {}", elapsed_seconds);
let end2 = start + Duration::days(1);
println!("end2: {}", end2);
let elapsed_seconds2 = end2.timestamp() - start.timestamp();
println!("elapsed_seconds2: {}", elapsed_seconds2);
Returns:
start: 2019-03-10 00:00:00 -07:00
end: 2019-03-11 00:00:00 -06:00
elapsed_seconds: 82800
end2: 2019-03-11 01:00:00 -06:00
elapsed_seconds2: 86400
It adds 86400 seconds, rather than 1 day.
I can get the correct result with:
let one_day_later =
(start.date() + Duration::days(1)).and_hms(start.hour(), start.minute(), start.second());
But I would prefer a function that returns a duration of days, and in general I would like to know more about Rust Chrono's capabilities for handling durations. Does it have durations with units other than seconds? What about weeks, months and years, which also have variable numbers of seconds?
I should probably say that I don't know Rust well, having only worked with it for a few days, and I haven't read much of the source code. I did look at it, but found it difficult to understand due to my limited familiarity with the language.
A Duration is an amount of time. There is no amount of time that when added to an instant, always yields the same time on the next day, because as you have noticed, calendar days may have different amounts of time in them.
Not only years, weeks and days, but even hours and minutes do not always comprise the same amount of time (Leap second). A Duration is an amount of time, not a "calendar unit". So no, a Duration is not capable of expressing an idea like "same time next week".
The easiest way to express "same time next day" is with the succ and and_time methods on Date:
let one_day_later = start.date().succ().and_time(start.time());
and_time will panic if the time does not exist on the new date.
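For completeness, a minimal sketch (chrono 0.4) that applies this to the America/Denver example from the question. It combines succ() with the and_hms workaround the question already showed; note that and_hms panics rather than returning an Option if the resulting time is invalid.

use chrono::{Local, TimeZone, Timelike};

fn main() {
    // Run with TZ=America/Denver to reproduce the DST example from the question.
    let start = Local.ymd(2019, 3, 10).and_hms(0, 0, 0);

    // "Same wall-clock time on the next calendar day": advance the date, keep the time.
    let one_day_later = start
        .date()
        .succ()
        .and_hms(start.hour(), start.minute(), start.second());

    println!("start:         {}", start);
    println!("one day later: {}", one_day_later);

    // On this particular day only 23 hours (82800 seconds) elapse.
    println!("elapsed seconds: {}", one_day_later.timestamp() - start.timestamp());
}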
My scheduler looks like this:
@Scheduled(cron = "* 30 11 * * *")
That's nice. It runs every day at 11:30 and works great! But if the scheduler finishes its work at 11:30:10, it runs again. How can I add seconds here? Is this correct?
@Scheduled(cron = "0 30 11 * * *")
Yes. Spring's cron expression has six fields (second, minute, hour, day of month, month, day of week). With * in the seconds field, your original expression matches every second during the 11:30 minute; with 0 there, it fires only at 11:30:00:
@Scheduled(cron = "0 30 11 * * *")
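For reference, a minimal sketch of the corrected expression in context (the class and method names are made up for illustration):

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class DailyJob {

    // Fields: second minute hour day-of-month month day-of-week.
    // "0 30 11 * * *" fires once per day, at 11:30:00.
    @Scheduled(cron = "0 30 11 * * *")
    public void run() {
        // ...
    }
}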
I want Logstash, when it's processing input entries, to simply drop entries that are older than N days.
I assume I'll use the date filter and, obviously, drop, but I don't know how to connect them.
The only way that I know to do date-level comparison is via Ruby code. You need the date filter to parse the timestamp (that's its own issue).
Once you parse the date into a field (e.g., event["@timestamp"]), then you can use it to determine whether you want to ignore the event or not:
5.0:
ruby {
code => "event.cancel if (Time.now.to_f - event.get('#timestamp').to_f) > (60 * 60 * 24 * 5)"
}
Pre-5.x:
ruby {
code => "event.cancel if (Time.now.to_f - event['#timestamp'].to_f) > (60 * 60 * 24 * 5)"
}
In this case, 5 is N.
Also, it's worth pointing out that this is relative to the machine time where Logstash happens to be running. If it's inaccurate, then it will impact date math. Similarly, if the source machine's system clock is wrong, then it too can be a problem.
Drawing on Alain's good point, you could use this to store the lag time, in addition to just dropping based on it.
5.0:
ruby {
code => "event.set('lag_seconds', Time.now.to_f - event.get('#timestamp').to_f))"
}
# 5 represents the number of days to allow
if [lag_seconds] > (60 * 60 * 24 * 5) {
drop { }
}
Pre-5.x:
ruby {
code => "event['lag_seconds'] = Time.now.to_f - event['#timestamp'].to_f)"
}
# 5 represents the number of days to allow
if [lag_seconds] > (60 * 60 * 24 * 5) {
drop { }
}
Using this approach, you would then be indexing lag_seconds, which is a fractional amount, thereby allowing you to analyze lag in your index if this goes into ES or some other data store.
Having source like this:
void foo() {
    func1();
    if(qqq) {
        func2();
    };
    func3();
    func4();
    for(...) {
        func5();
    }
}
I want to obtain info like this:
void foo() {
5 ms; 2 times; func1();
0 ms; 2 times; if(qqq) {
0 ms; 0 times; func2();
0 ms; 2 times; };
20 ms; 2 times; func3();
5 s ; 2 times; func4();
0 ms; 60 times; for(...) {
30 ms; 60 times; func5();
0 ms; 60 times; }
}
I.e., information about how long on average it took to execute each line (real wall-clock time, including waiting in syscalls) and how many times it was executed.
What tools should I use?
I expect the tool to instrument each function to measure its running time, which is then used by instrumentation inside the calling function that writes a log file (or accumulates counts in memory and dumps them later).
gprof is pretty standard for GNU-built (gcc, g++) programs: http://www.cs.utah.edu/dept/old/texinfo/as/gprof_toc.html
Here is what the output looks like: http://www.cs.utah.edu/dept/old/texinfo/as/gprof.html#SEC5
Take a trial run of Zoom. You won't be disappointed.
P.S. Don't expect instrumentation to do the job. For either line-level or function-level information, a wall-time stack sampler delivers the goods, assuming you don't absolutely need precise invocation counts (which have little relevance to performance).
ADDED: I'm on Windows, so I just ran your code with LTProf. The output looks like this:
void foo(){
 5      func1();
        if(qqq){
 5          func2();
        }
 5      func3();
 5      func4();
        // I made this 16, not 60, so the total time would be 20 sec.
        for(int i = 0; i < 16; i++){
80          func5();
        }
}
where each func() does a Sleep(1000) and qqq is true, so the whole thing runs for 20 seconds. The numbers on the left are the percent of samples (out of 6,667 samples) that have that line on them. So, for example, a single call to one of the func functions takes 1 second, or 5% of the total time. You can see that the line where func5() is called accounts for 80% of the total time (that is, 16 out of the 20 seconds). All the other lines were on the stack so little, comparatively, that their percentages are zero.
I would present the information differently, but this should give a sense of what stack sampling can tell you.
Either Zoom or Intel VTune.