Here is the function:
#include <stdio.h>
#include <string.h>
#include <time.h>

time_t time_from_string(const char* timestr)
{
    if (!timestr)
        return 0;

    struct tm t1;
    memset(&t1, 0, sizeof(t1));

    int nfields = sscanf(timestr, "%04d:%02d:%02d %02d:%02d:%02d",
                         &t1.tm_year, &t1.tm_mon, &t1.tm_mday, &t1.tm_hour,
                         &t1.tm_min, &t1.tm_sec);
    if (nfields != 6)
        return 0;

    t1.tm_year -= 1900;   // tm_year counts years since 1900
    t1.tm_mon--;          // tm_mon is zero-based
    t1.tm_isdst = -1;     // mktime should try itself to figure out what DST was
    time_t result = mktime(&t1);
    return result;
}
When I call it with the argument "2007:11:14 11:19:07", it returns 1195028347 on Linux (Ubuntu 12.04, gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3) and 1195024747 on Windows (Windows 7, Visual Studio 2010).
As can be seen, the difference is 3600 seconds.
I run both operating systems on the same computer (dual-boot), which is in the MSK time zone.
Both OSes are synchronized with internet time, and their system clocks show the correct time.
When I call the function with another argument, "2012:08:21 18:20:40", I get 1345558840 on both systems.
Why do the results differ in some cases?
EDIT
I forgot to mention: I inspected the contents of the t1 variable after the call to mktime().
In both systems:
t1.tm_sec = 7;
t1.tm_min = 19;
t1.tm_hour = 11;
t1.tm_mday = 14;
t1.tm_mon = 10;
t1.tm_year = 107;
t1.tm_wday = 3;
t1.tm_yday = 317;
t1.tm_isdst = 0;
Please note the last line: both systems determine that no daylight saving time is in effect.
Linux additionally shows the following fields in struct tm:
t1.tm_gmtoff = 10800;
t1.tm_zone = "MSK";
From Wikipedia: Moscow Time
Until 2011, during the winter, between the last Sunday of October and the last Sunday of March, Moscow Standard Time (MSK, МСК) was 3 hours ahead of UTC, or UTC+3; during the summer, Moscow Time shifted forward an additional hour ahead of Moscow Standard Time to become Moscow Summer Time (MSD), making it UTC+4.
In 2011, the Russian government proclaimed that daylight saving time would in future be observed all year round, thus effectively displacing standard time—an action which the government claimed emerged from health concerns attributed to the annual shift back-and-forth between standard time and daylight saving time. On 27 March 2011, Muscovites set their clocks forward for a final time, effectively observing MSD, or UTC+4, permanently.
Since Moscow observed winter time (UTC+3) on 2007-11-14, 11:19:07 MSK was 08:19:07 UTC, and the Unix timestamp was 1195028347.
So the value you get on Linux is correct, while the value you get on Windows assumes UTC+4, which is wrong for that date. The likely reason: Linux consults the historical tz database, so it knows Moscow was on UTC+3 in November 2007, whereas the Windows C runtime applies the current time-zone rules to past dates as well. Since 2011 Moscow has been permanently on UTC+4, which explains both the 3600-second discrepancy for the 2007 date and the agreement for the 2012 date.
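As a quick cross-check (a sketch, not part of the original code): build the expected UTC time 2007-11-14 08:19:07 directly and convert it with timegm(), which ignores the local time zone entirely (timegm() is a GNU/BSD extension; MSVC offers _mkgmtime()):

#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
    struct tm t;
    memset(&t, 0, sizeof(t));
    t.tm_year = 2007 - 1900;
    t.tm_mon  = 11 - 1;
    t.tm_mday = 14;
    t.tm_hour = 8;    /* 11:19:07 MSK minus the UTC+3 offset */
    t.tm_min  = 19;
    t.tm_sec  = 7;
#ifdef _WIN32
    time_t utc = _mkgmtime(&t);
#else
    time_t utc = timegm(&t);
#endif
    printf("%lld\n", (long long)utc);   /* prints 1195028347 */
    return 0;
}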
Related
In the following example, I would like to format EPOCH (1/1/1970) in different time zones; for example, using the Los Angeles time zone and/or the New York time zone.
#include <iostream>
#include <string>
#include <unicode/utypes.h>
#include <unicode/locid.h>
#include <unicode/timezone.h>
#include <unicode/calendar.h>
#include <unicode/datefmt.h>
#include <unicode/unistr.h>

UErrorCode uErrorCode = U_ZERO_ERROR;
UnicodeString unicodeString;
UDate uDate;
icu::Locale locale = icu::Locale("en");
TimeZone* timeZone = TimeZone::createTimeZone("America/Los_Angeles");
Calendar* calendar = Calendar::createInstance(timeZone, uErrorCode);

// setting calendar to EPOCH, e.g. zero ms from 1/1/1970
calendar->setTime(0, uErrorCode);

// get calendar time as milliseconds (UDate)
uDate = calendar->getTime(uErrorCode);

DateFormat* dateFormat = DateFormat::createDateTimeInstance(
    icu::DateFormat::MEDIUM,  // date style
    icu::DateFormat::SHORT,   // time style
    locale);
unicodeString = dateFormat->format(uDate, unicodeString, uErrorCode);

std::string str;
unicodeString.toUTF8String(str);
std::cout << "Date: " << str;

// Use getOffset to get the stdOffset and dstOffset for the given time
int32_t stdOffset, dstOffset;
timeZone->getOffset(uDate, true, stdOffset, dstOffset, uErrorCode);
std::cout << " | ";
std::cout << "Time zone STD offset: " << stdOffset / (1000 * 60 * 60) << " | ";
std::cout << "Time zone DST offset: " << dstOffset / (1000 * 60 * 60) << std::endl;
The problem I have is that the output is not formatted according to the time zone.
Here is the output when using the Los Angeles time zone:
Date: Dec 31, 1969, 6:00 PM | Time zone STD offset: -8 | Time zone DST offset: 0
Here is the output when using the New York time zone:
Date: Dec 31, 1969, 6:00 PM | Time zone STD offset: -5 | Time zone DST offset: 0
Notice that the date shown is not EPOCH, and also that the dates and times in both outputs are identical. The offsets are correct, but the date/time display is not.
UPDATE
It is important to note that the displayed date/time is 6 hours behind EPOCH because my local zone is currently UTC-6; adding 6 hours to Dec 31, 1969 6:00 PM gives EPOCH, Jan 1, 1970 12:00 AM.
ICU is using my PC's time zone automatically, since I have found no way to specify a time zone when formatting the date/time using DateFormat::format(...). If format() accepted a time zone argument to override my PC's local time zone, I would not be having this issue.
You should call dateFormat->setTimeZone(*timeZone) to specify the time zone.
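Applied to the question's code, that is one extra line before calling format() (a minimal sketch):

// make the formatter render in the zone of interest instead of the PC's local zone
dateFormat->setTimeZone(*timeZone);
unicodeString = dateFormat->format(uDate, unicodeString, uErrorCode);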
@earts had it right: you are formatting based on the scalar time value, not on the calendar.
Alternatively, you can format the Calendar object itself, which will use the time zone and time from that calendar:
unicodeString = dateFormat->format(*calendar,
                                   unicodeString,
                                   (FieldPositionIterator*) nullptr,  // ignored
                                   uErrorCode);
Note, though, that when using the above function, the calendar type had better match that of the date format. An easy way to do that is to make sure you pass in the locale parameter when creating the calendar:
// above:
Calendar* calendar = Calendar::createInstance(timeZone, locale, uErrorCode);
I am using Qt5 on the Windows 7 platform.
I have an app running 24/7 that is supposed to connect to some remote devices in order to open or close the service on them. The connection is done via TCP.
For each day of the week there should be the possibility to set the hour and minute for both operations/tasks, open-service and close-service, as in the code below:
#define SUNDAY 0
#define MONDAY 1
//...
#define SATURDAY 6
struct Day_OpenCloseService
{
    bool automaticOpenService;
    int  openHour;
    int  openMinute;
    bool automaticCloseService;
    int  closeHour;
    int  closeMinute;
};
QVector<Day_OpenCloseService> Week_OpenCloseService(7);
Week_OpenCloseService[SUNDAY].automaticOpenService = true;
Week_OpenCloseService[SUNDAY].openHour = 7;
Week_OpenCloseService[SUNDAY].openMinute = 0;
Week_OpenCloseService[SUNDAY].automaticCloseService = false;
//
Week_OpenCloseService[MONDAY].automaticOpenService = true;
Week_OpenCloseService[MONDAY].openHour = 4;
Week_OpenCloseService[MONDAY].openMinute = 30;
Week_OpenCloseService[MONDAY].automaticCloseService = true;
Week_OpenCloseService[MONDAY].closeHour = 23;
Week_OpenCloseService[MONDAY].closeMinute = 0;
// ...
Week_OpenCloseService[SATURDAY].automaticOpenService = true;
Week_OpenCloseService[SATURDAY].openHour = 6;
Week_OpenCloseService[SATURDAY].openMinute = 15;
Week_OpenCloseService[SATURDAY].automaticCloseService = false;
Week_OpenCloseService[SATURDAY].closeHour = 23;
Week_OpenCloseService[SATURDAY].closeMinute = 59;
If automaticOpenService is true for a day, then an open-service task will be executed at the specified hour and minute, in a new thread (I suppose).
If automaticOpenService is false, then no open-service is executed for that day of the week.
The same goes for automaticCloseService...
Now, the question is:
How to start the open-service and close-service tasks, based on the above "scheduler"?
OK, the open-service and close-service tasks are not implemented yet, but they will just be some simple commands sent over a TCP connection to the remote devices (which are listening on a certain port).
I'm still weighing how to implement that, too (single-threaded, multi-threaded, concurrent, etc.).
A basic implementation of a scheduler will hold a list of upcoming tasks (maybe with just two items in the list in your case) that is kept sorted by the time at which those tasks need to be executed. Since you are using Qt, you could use QDateTime objects to represent the times at which your upcoming tasks need to be done.
Once you have that list set up, it's just a matter of calculating how many seconds remain between the current time and the timestamp of the first item in the list, and then waiting that number of seconds. The QDateTime::secsTo() method is very useful here as it will do just that calculation for you. You can then call QTimer::singleShot() to make it so that a signal will be emitted in that-many seconds.
When the QTimer's signal is emitted and your slot method is called, the slot method will check the QDateTime of the first item in the list; if the current time is greater than or equal to that item's QDateTime, then it is time to execute the task and pop that item off the head of the list (and perhaps reschedule a new task for tomorrow). Repeat until either the list is empty or the first item in the list has a QDateTime that is still in the future, in which case you go back to step 1 again. Repeat indefinitely.
Note that multithreading isn't required to accomplish this task under Qt (and using multithreading wouldn't make the task any easier, either, so I'd avoid it if possible).
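Below is a minimal sketch of that approach; the Scheduler class and the runTask() hook are illustrative names, not part of any existing API:

#include <QDateTime>
#include <QList>
#include <QObject>
#include <QTimer>
#include <algorithm>

class Scheduler : public QObject
{
    Q_OBJECT
public:
    // Add an upcoming task time and (re)arm the timer.
    void addTask(const QDateTime &when)
    {
        tasks.append(when);
        std::sort(tasks.begin(), tasks.end());   // keep earliest-first order
        scheduleNext();
    }

private slots:
    void onTimeout()
    {
        const QDateTime now = QDateTime::currentDateTime();
        // Execute every task whose time has arrived.
        while (!tasks.isEmpty() && tasks.first() <= now) {
            runTask();             // send the open/close command over TCP here
            tasks.removeFirst();   // optionally re-append tomorrow's occurrence
        }
        scheduleNext();
    }

private:
    void scheduleNext()
    {
        if (tasks.isEmpty())
            return;
        // secsTo() computes the seconds from now until the next task.
        qint64 secs = QDateTime::currentDateTime().secsTo(tasks.first());
        if (secs < 0)
            secs = 0;
        // Fine for intervals up to ~24 days; a weekly schedule stays well below that.
        QTimer::singleShot(int(secs * 1000), this, SLOT(onTimeout()));
    }

    void runTask() { /* TCP command to the remote device */ }

    QList<QDateTime> tasks;   // kept sorted, earliest first
};

Note this sketch may arm more than one single-shot timer if addTask() is called repeatedly; that stays correct because onTimeout() re-checks the list, but a production version would keep one QTimer member and restart it instead.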
How can I calculate the amount of processing time used by a process in C on Linux? Specifically, I want to determine how much time elapses when encrypting a file using OpenSSL.
The easiest way to do this is with the clock() function from <time.h>, which reports the amount of CPU time used by the calling process.
From SUSv4:

The clock() function shall return the implementation's best approximation to the processor time used by the process since the beginning of an implementation-defined era related only to the process invocation.

RETURN VALUE

To determine the time in seconds, the value returned by clock() should be divided by the value of the macro CLOCKS_PER_SEC. If the processor time used is not available or its value cannot be represented, the function shall return the value (clock_t)-1.
Try the following (note that clock() returns clock_t, not time_t):

#include <time.h>

clock_t start, end;
double cpu_time_used;

start = clock();
/* Do encrypting ... */
end = clock();
cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
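Note that clock() measures CPU time, which can differ from wall-clock time (for example, when the process sleeps, waits on I/O, or runs several threads). If what you actually want is elapsed wall-clock time, one common option on Linux is clock_gettime() with CLOCK_MONOTONIC (link with -lrt on older glibc); a minimal sketch:

#include <stdio.h>
#include <time.h>

struct timespec t0, t1;
clock_gettime(CLOCK_MONOTONIC, &t0);
/* Do encrypting ... */
clock_gettime(CLOCK_MONOTONIC, &t1);
double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
printf("elapsed: %f s\n", elapsed);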
I have a question about how the glibc ctime() works.
Here is my snippet:
#include <stdio.h>
#include <time.h>
#include <stdlib.h>

int main(int argc, char** argv)
{
    int ret = EXIT_SUCCESS;
    time_t tm1;
    time_t tm2;

    tm1 = time(NULL);
    tm2 = tm1 + 60;   // 60 seconds later

    puts("1st method-------");
    printf("tm1 = %stm2 = %s", ctime(&tm1), ctime(&tm2));

    puts("2nd method-------");
    printf("tm1 = %s", ctime(&tm1));
    printf("tm2 = %s", ctime(&tm2));

    return ret;
}
I got:
1st method-------
tm1 = Sat Jan 14 01:13:28 2012
tm2 = Sat Jan 14 01:13:28 2012
2nd method-------
tm1 = Sat Jan 14 01:13:28 2012
tm2 = Sat Jan 14 01:14:28 2012
As you can see, in the first method both times have the same value, which is not correct. With the second method I get the correct values.
I know that ctime() puts those strings in a static buffer, and that a successive call to ctime() overwrites it.
Q: Am I not making successive calls in the 1st method?
Thank you for replying.
You've provided all the info necessary to solve the problem.
The second method works as you'd expect: ctime gets called, fills the buffer, and the result gets printed; this process is then repeated. So you get the two distinct times printed.
For the first method, the order is different: ctime is called, then it is called again, and only then do the results get printed. The result of each call to ctime is the same, at least as far as printf is concerned: the address of the static buffer. But the contents of that buffer were changed by each call, and since printf doesn't look at the buffer until both ctime calls are done, it ends up printing the newer contents twice.
So you ARE making both calls in the first method; it's just that the result of the first call gets overwritten before it is printed.
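If you need both strings in a single printf call, copy each result out of the static buffer first, or use the POSIX reentrant variant ctime_r(), which writes into a caller-supplied buffer of at least 26 bytes. A minimal sketch (the evaluation order of printf's arguments no longer matters, because each call writes to its own buffer):

char buf1[26], buf2[26];
printf("tm1 = %stm2 = %s", ctime_r(&tm1, buf1), ctime_r(&tm2, buf2));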
I want to open a new log file each time the program runs, so I create a filename from the current time.
FILE * fplog;

void OpenLog()
{
    boost::posix_time::ptime now = boost::posix_time::second_clock::local_time();
    char buf[256];
    sprintf(buf, "ecrew%d%02d%02d_%02d%02d%02d.log",
            now.date().year(), now.date().month(), now.date().day(),
            now.time_of_day().hours(), now.time_of_day().minutes(), now.time_of_day().seconds());
    fplog = fopen(buf, "w");
}
This works perfectly in a debug build, producing files with names such as
ecrew20110309_141506.log
However, the same code fails strangely in a release build, producing names such as
ecrew198619589827196617_141338.log
By the way, this also fails in the same way:
boost::posix_time::ptime now = boost::posix_time::second_clock::local_time();
char buf[256];
boost::gregorian::date day(boost::gregorian::day_clock::local_day());
sprintf(buf, "ecrew%d%02d%02d_%02d%02d%02d.log",
        day.year(), day.month(), day.day(),
        now.time_of_day().hours(), now.time_of_day().minutes(), now.time_of_day().seconds());
fplog = fopen(buf, "w");
This works:
boost::posix_time::ptime now = boost::posix_time::second_clock::local_time();
char buf[256];
sprintf(buf, "ecrew%s_%02d%02d%02d.log",
        to_iso_string(boost::gregorian::day_clock::local_day()).c_str(),
        now.time_of_day().hours(), now.time_of_day().minutes(), now.time_of_day().seconds());
fplog = fopen(buf, "w");
I'd still be curious why the previous two versions fail in a release build but work in debug.
Okay, I'm a bit late, but since I stumbled onto your question when looking for the answer myself (day_clock::local_day() gives weird results when compiled as Release, here on Win XP + Boost 1.46), I thought I should come back with what worked for me.
The data seems to be stored (I only use year, month and day) in a 16-bit manner, but when you read it you get a 32-bit integer, and whatever the bug is, it writes garbage into the top bits or doesn't clean them out before writing the lower bytes.
So my workaround is just to zero out the topmost 16 bits:
date todaysdate(day_clock::local_day());
int year = todaysdate.year() & 0xFFFF;
instead of, say:
date todaysdate(day_clock::local_day());
int year = todaysdate.year();
and it works well for me anyway.
Valmond
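A likely root cause for the debug/release difference (an educated guess, not stated in the answers above): year(), month() and day() return Boost's wrapper types (greg_year, greg_month, greg_day) rather than plain integers, and passing a class type through sprintf's variadic arguments with %d is undefined behavior; a debug build may happen to lay out the right bytes while an optimized build does not. Casting each value to int before passing it sidesteps the problem, as in this sketch of the original call:

boost::posix_time::ptime now = boost::posix_time::second_clock::local_time();
char buf[256];
sprintf(buf, "ecrew%04d%02d%02d_%02d%02d%02d.log",
        (int)now.date().year(), (int)now.date().month(), (int)now.date().day(),
        (int)now.time_of_day().hours(), (int)now.time_of_day().minutes(),
        (int)now.time_of_day().seconds());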