Time Zone code translation from Windows to Linux in FreePascal

I have this code that works in FreePascal under Windows and need to translate it to Linux but I'm completely lost on the Time Zone Bias value:
function DateTimeToInternetTime(const aDateTime: TDateTime): String;
{$IFDEF WIN32}
var
  LocalTimeZone: TTimeZoneInformation;
{$ENDIF ~WIN32}
begin
  {$IFDEF WIN32}
  // e.g. Sun, 06 Nov 1994 08:49:37 GMT (RFC 822, updated by RFC 1123)
  Result := FormatDateTime('ddd, dd mmm yyyy hh:nn:ss', aDateTime);
  // Get the local Time Zone Bias and report it as GMT +/-Bias
  GetTimeZoneInformation(LocalTimeZone);
  Result := Result + ' GMT ' + IntToStr(LocalTimeZone.Bias div 60);
  {$ELSE}
  // !!!! Here I need the above code translated !!!!
  Result := 'Sat, 06 Jun 2009 18:00:00 GMT 0000';
  {$ENDIF ~WIN32}
end;

This guy has the answer: http://www.mail-archive.com/fpc-pascal@lists.freepascal.org/msg08467.html
So you'll want to add the uses clause:
uses Unix, SysUtils, BaseUnix;
variables to hold the time / timezone:
var
  TimeVal: TTimeVal;
  TimeZone: TTimeZone;
...and get the 'minutes west':
{$ELSE}
Result := FormatDateTime('ddd, dd mmm yyyy hh:nn:ss', aDateTime);
// fpGetTimeOfDay fills in both records; tz_minuteswest is the bias in minutes
fpGetTimeOfDay(@TimeVal, @TimeZone);
Result := Result + ' GMT ' + IntToStr(TimeZone.tz_minuteswest div 60);
{$ENDIF ~WIN32}

I haven't done a lot of Pascal lately, so this is just a hint rather than a complete answer.
But check your compiler's documentation for how to call and link C code. Then you can use time.h much as in this C example:
/* localtime example */
#include <stdio.h>
#include <time.h>

int main ()
{
  time_t rawtime;
  struct tm * timeinfo;

  time ( &rawtime );
  timeinfo = localtime ( &rawtime );
  printf ( "Current local time and date: %s", asctime (timeinfo) );
  return 0;
}
This program will output something like
Current local time and date: Sat Jun 06 18:00:00 2009
You can use sprintf instead of printf to "print" into an array of characters, use strftime with a format string similar to 'ddd, dd mmm yyyy hh:nn:ss' (probably "%a, %d %b %Y %H:%M:%S"), and use the 'long int timezone' global variable instead of 'LocalTimeZone.Bias'.
I guess the main hurdle is figuring out how to call the C code. Maybe you can even use time.h directly from Pascal; I would investigate that.
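Putting those pieces together, a minimal C sketch might look like the following. With glibc, the global 'timezone' holds seconds west of UTC (the same sense as the Windows Bias, only in seconds), and tzset() initialises it:

#include <stdio.h>
#include <time.h>

int main(void)
{
    char buf[64];
    time_t rawtime = time(NULL);
    struct tm *timeinfo = localtime(&rawtime);

    /* roughly 'ddd, dd mmm yyyy hh:nn:ss' */
    strftime(buf, sizeof buf, "%a, %d %b %Y %H:%M:%S", timeinfo);

    tzset();  /* initialise the global 'timezone' (seconds west of UTC) */
    printf("%s GMT %ld\n", buf, timezone / 3600);  /* hours west, like Bias div 60 */
    return 0;
}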

Related

Why is the tm_gmtoff field of struct tm not documented in the man page?

I need to get the difference between UTC and the local time using GCC on Linux.
It seems that the preferred way is to examine the tm_gmtoff field of a struct tm returned by the localtime function.
https://stackoverflow.com/a/47218792
However, tm_gmtoff is not documented in the man page of localtime, but
only tm_zone is.
https://man7.org/linux/man-pages/man3/localtime.3.html
It looks like tm_gmtoff and tm_zone exist in the header file:
# ifdef __USE_MISC
  long int tm_gmtoff;     /* Seconds east of UTC.  */
  const char *tm_zone;    /* Timezone abbreviation.  */
# else
  long int __tm_gmtoff;   /* Seconds east of UTC.  */
  const char *__tm_zone;  /* Timezone abbreviation.  */
# endif
https://sourceware.org/git/?p=glibc.git;a=blob;f=time/bits/types/struct_tm.h;h=b13b631228d0ec36691b25db2e1f9b1d66b54bb0;hb=HEAD
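For what it's worth, a minimal program reading both fields builds and runs fine with glibc (this sketch assumes _DEFAULT_SOURCE, which enables __USE_MISC):

#define _DEFAULT_SOURCE  /* exposes tm_gmtoff / tm_zone in glibc */
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);
    struct tm local;
    localtime_r(&now, &local);
    /* tm_gmtoff is seconds east of UTC; tm_zone is the abbreviation */
    printf("%s is %+ld seconds east of UTC\n", local.tm_zone, local.tm_gmtoff);
    return 0;
}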
I'm not sure why tm_gmtoff is omitted in the man page. Could it be a man-page bug introduced in the following commit?
https://git.kernel.org/pub/scm/docs/man-pages/man-pages.git/commit/man3/ctime.3?id=ba39b288ab07149417867533821300256f310615&h=master
I reported this to the maintainers. It has been fixed by the following commit.
https://git.kernel.org/pub/scm/docs/man-pages/man-pages.git/commit/?id=20f1ee93171895341877b8c5679a33823c4ca582

Display ICU UDate formatted for different timezones

In the following example, I would like to format EPOCH (1/1/1970) in different time zones. For example, I may wish to format EPOCH using the Los Angeles time zone and/or format EPOCH using the New York timezone.
UErrorCode uErrorCode = U_ZERO_ERROR;
UnicodeString unicodeString;
UDate uDate;
icu::Locale locale = icu::Locale("en");
TimeZone* timeZone = TimeZone::createTimeZone("America/Los_Angeles");
Calendar* calendar = Calendar::createInstance(timeZone, uErrorCode);

// setting calendar to EPOCH, e.g. zero ms from 1/1/1970
calendar->setTime(0, uErrorCode);
// get calendar time as milliseconds (UDate)
uDate = calendar->getTime(uErrorCode);

DateFormat* dateFormat = DateFormat::createDateTimeInstance(
    icu::DateFormat::MEDIUM,  // date style
    icu::DateFormat::SHORT,   // time style
    locale);
unicodeString = dateFormat->format(uDate, unicodeString, uErrorCode);

std::string str;
unicodeString.toUTF8String(str);
std::cout << "Date: " << str;

// Use getOffset to get the stdOffset and dstOffset for the given time
int32_t stdOffset, dstOffset;
timeZone->getOffset(uDate, true, stdOffset, dstOffset, uErrorCode);
std::cout << " | ";
std::cout << "Time zone STD offset: " << stdOffset / (1000 * 60 * 60) << " | ";
std::cout << "Time zone DST offset: " << dstOffset / (1000 * 60 * 60) << std::endl;
The problem that I have is that the output is not formatted respective to the time zone.
Here is the output when using the Los Angeles time zone:
Date: Dec 31, 1969, 6:00 PM | Time zone STD offset: -8 | Time zone DST offset: 0
Here is the output when using the New York time zone:
Date: Dec 31, 1969, 6:00 PM | Time zone STD offset: -5 | Time zone DST offset: 0
Please notice, first, that the date is not EPOCH, and second, that the dates and times in both outputs are identical. The offsets are correct, but the date/time display is not.
UPDATE
It is important to note that the displayed date/time is 6 hours behind because my local zone is currently UTC-6, meaning that you ADD 6 hours to Dec 31, 1969 6:00 PM, which then equals EPOCH, Jan 1, 1970 12:00 AM.
ICU is using my PC's time zone automatically, since I have found no way to specify the time zone when formatting a date/time with DateFormat::format(...). If format() accepted a time zone argument to override my PC's local time zone, I would not be having this issue.
You should call dateFormat->setTimeZone(*timeZone) to specify the time zone.
@earts had it right: you are formatting based on the scalar time value and not the calendar.
Alternatively, you can format the Calendar object itself, which will use the timezone and time from that calendar:
unicodeString = dateFormat->format(*calendar,
                                   unicodeString,
                                   (FieldPositionIterator*) nullptr, // ignored
                                   uErrorCode);
Note, though, that when using the above function, the calendar type had better match that of the DateFormat. An easy way to do that is to make sure you pass in the locale parameter when creating the calendar:
// above:
Calendar* calendar = Calendar::createInstance(timeZone, locale, uErrorCode);
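If the C++ API feels awkward here, ICU's C API sidesteps the problem, because udat_open() takes the time zone ID directly when the formatter is created. A sketch, assuming ICU4C is installed (error handling omitted):

/* C API: the zone is fixed when the formatter is created */
#include <stdio.h>
#include <unicode/udat.h>
#include <unicode/ustring.h>

int main(void)
{
    UErrorCode status = U_ZERO_ERROR;
    UChar tzId[64], out[64];
    char str[128];

    u_uastrcpy(tzId, "America/Los_Angeles");
    /* time style SHORT, date style MEDIUM, as in the C++ example */
    UDateFormat *fmt = udat_open(UDAT_SHORT, UDAT_MEDIUM, "en",
                                 tzId, -1, NULL, 0, &status);
    udat_format(fmt, 0.0, out, 64, NULL, &status);  /* 0.0 ms = the epoch */
    u_austrcpy(str, out);
    printf("Date: %s\n", str);  /* expected: something like Dec 31, 1969, 4:00 PM */
    udat_close(fmt);
    return 0;
}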

Sometimes different results of mktime in Windows and in Linux

Here is the function
time_t time_from_string(const char* timestr)
{
    if (!timestr)
        return 0;

    struct tm t1;
    memset(&t1, 0, sizeof(t1));

    int nfields = sscanf(timestr, "%04d:%02d:%02d %02d:%02d:%02d",
                         &t1.tm_year, &t1.tm_mon, &t1.tm_mday,
                         &t1.tm_hour, &t1.tm_min, &t1.tm_sec);
    if (nfields != 6)
        return 0;

    t1.tm_year -= 1900;
    t1.tm_mon--;
    t1.tm_isdst = -1;  // mktime should try itself to figure out what DST was

    time_t result = mktime(&t1);
    return result;
}
When I call it with the argument "2007:11:14 11:19:07", it returns 1195028347 in Linux (Ubuntu 12.04, gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3) and 1195024747 in Windows (windows 7, Visual Studio 2010).
As can be seen, the difference is 3600 seconds (one hour).
I run both operating systems on the same computer (dual-boot), which is in the MSK time zone.
Both OSes are synchronized with internet time, and their system clocks show the correct time.
When I call this function with another argument, "2012:08:21 18:20:40", I get 1345558840 in both systems.
Why do the results differ in some cases?
EDIT
Forgot to mention: I inspected the contents of the t1 variable after the call to mktime().
In both systems:
t1.tm_sec = 7;
t1.tm_min = 19;
t1.tm_hour = 11;
t1.tm_mday = 14;
t1.tm_mon = 10;
t1.tm_year = 107;
t1.tm_wday = 3;
t1.tm_yday = 317;
t1.tm_isdst = 0;
Please note the last line: both systems determine that no daylight saving time is in effect.
Linux additionally shows the following fields in struct tm:
t1.tm_gmtoff = 10800;
t1.tm_zone = "MSK";
From Wikipedia: Moscow Time
Until 2011, during the winter, between the last Sunday of October and the last Sunday of March, Moscow Standard Time (MSK, МСК) was 3 hours ahead of UTC, or UTC+3; during the summer, Moscow Time shifted forward an additional hour ahead of Moscow Standard Time to become Moscow Summer Time (MSD), making it UTC+4.
In 2011, the Russian government proclaimed that daylight saving time would in future be observed all year round, thus effectively displacing standard time—an action which the government claimed emerged from health concerns attributed to the annual shift back-and-forth between standard time and daylight saving time. On 27 March 2011, Muscovites set their clocks forward for a final time, effectively observing MSD, or UTC+4, permanently.
Since Moscow observed winter time (UTC+3) on 2007-11-14, 11:19:07 MSK was 08:19:07 UTC, and the Unix timestamp was 1195028347.
It looks like the value you get on Linux is correct, and the value you get on Windows assumes UTC+4, which is incorrect.
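One way to verify this independently of the machine's own zone setting (a sketch; it assumes the IANA tz database is installed, as on any normal Linux) is to evaluate the same broken-down time explicitly under Europe/Moscow:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    struct tm t;
    memset(&t, 0, sizeof t);
    t.tm_year = 2007 - 1900;
    t.tm_mon  = 11 - 1;
    t.tm_mday = 14;
    t.tm_hour = 11;
    t.tm_min  = 19;
    t.tm_sec  = 7;
    t.tm_isdst = -1;  /* let mktime decide about DST */

    setenv("TZ", "Europe/Moscow", 1);  /* interpret the time as MSK */
    tzset();
    printf("%lld\n", (long long)mktime(&t));  /* expect 1195028347 */
    return 0;
}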

pthread_cond_timedwait returns one second early

The program below produces this output:
$ ./test_condvar 9000
1343868189.623067126 1343868198.623067126 FIRST
1343868197.623132345 1343868206.623132345 TIMEOUT
1343868205.623190120 1343868214.623190120 TIMEOUT
1343868213.623248184 1343868222.623248184 TIMEOUT
1343868221.623311549 1343868230.623311549 TIMEOUT
1343868229.623369718 1343868238.623369718 TIMEOUT
1343868237.623428856 1343868246.623428856 TIMEOUT
Note that reading across rows shows a time delta of the intended 9 seconds, but reading down columns shows that pthread_cond_timedwait returns ETIMEDOUT in 8 seconds.
The pthread lib is glibc 2.12, running on Red Hat EL6. uname -a shows 2.6.32-131.12.1.el6.x86_64 #1 SMP Tue Aug 23 11:13:45 CDT 2011 x86_64 x86_64 x86_64 GNU/Linux
It looks like pthread_cond_timedwait relies on lll_futex_timed_wait for the timeout behavior.
Any ideas on where else to search for an explanation?
#include <time.h>
#include <sys/time.h>
#include <pthread.h>
#include <errno.h>
#include <stdlib.h>
#include <stdio.h>

int main ( int argc, char *argv[] )
{
    pthread_mutexattr_t mtx_attr;
    pthread_mutex_t mtx;
    pthread_condattr_t cond_attr;
    pthread_cond_t cond;
    int milliseconds;
    const char *res = "FIRST";

    if ( argc < 2 )
    {
        fputs ( "must specify interval in milliseconds", stderr );
        exit ( EXIT_FAILURE );
    }
    milliseconds = atoi ( argv[1] );

    pthread_mutexattr_init ( &mtx_attr );
    pthread_mutexattr_settype ( &mtx_attr, PTHREAD_MUTEX_NORMAL );
    pthread_mutexattr_setpshared ( &mtx_attr, PTHREAD_PROCESS_PRIVATE );
    pthread_mutex_init ( &mtx, &mtx_attr );
    pthread_mutexattr_destroy ( &mtx_attr );

#ifdef USE_CONDATTR
    pthread_condattr_init ( &cond_attr );
    if ( pthread_condattr_setclock ( &cond_attr, CLOCK_REALTIME ) != 0 )
    {
        fputs ( "pthread_condattr_setclock failed", stderr );
        exit ( EXIT_FAILURE );
    }
    pthread_cond_init ( &cond, &cond_attr );
    pthread_condattr_destroy ( &cond_attr );
#else
    pthread_cond_init ( &cond, NULL );
#endif

    for (;;)
    {
        struct timespec now, ts;
        clock_gettime ( CLOCK_REALTIME, &now );
        ts.tv_sec = now.tv_sec + milliseconds / 1000;
        ts.tv_nsec = now.tv_nsec + (milliseconds % 1000) * 1000000;
        if (ts.tv_nsec > 1000000000)
        {
            ts.tv_nsec -= 1000000000;
            ++ts.tv_sec;
        }
        printf ( "%ld.%09ld %ld.%09ld %s\n", now.tv_sec, now.tv_nsec,
                 ts.tv_sec, ts.tv_nsec, res );
        pthread_mutex_lock ( &mtx );
        if ( pthread_cond_timedwait ( &cond, &mtx, &ts ) == ETIMEDOUT )
            res = "TIMEOUT";
        else
            res = "OTHER";
        pthread_mutex_unlock ( &mtx );
    }
}
There was a Linux kernel bug triggered by the insertion of a leap second on July 1st this year, which resulted in futexes expiring one second too early until either the machine was rebooted or you ran the workaround:
# date -s "`date`"
It sounds like you've been bitten by that.
I'm not sure that this is related to the specific issue but your line:
if (ts.tv_nsec > 1000000000)
should really be:
if (ts.tv_nsec >= 1000000000)
And, in fact, if you do something unexpected and pass in 10000 (for example), you may want to consider making it:
while (ts.tv_nsec >= 1000000000)
though at some point it's better to use modulus arithmetic so that the loop doesn't run too long; see the sketch below.
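Something along those lines, say (timespec_add_ms is a made-up name for illustration):

#include <time.h>

/* add 'ms' milliseconds to '*ts', normalising with division and
   modulus so no loop is needed no matter how large 'ms' is */
static void timespec_add_ms(struct timespec *ts, long ms)
{
    ts->tv_sec  += ms / 1000;
    ts->tv_nsec += (ms % 1000) * 1000000L;
    ts->tv_sec  += ts->tv_nsec / 1000000000L;
    ts->tv_nsec %= 1000000000L;
}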
Other than that, this appears to be some sort of issue with your environment. The code works fine for me under Debian, Linux MYBOX 2.6.32-5-686 #1 SMP Sun May 6 04:01:19 UTC 2012 i686 GNU/Linux:
1343871427.442705862 1343871436.442705862 FIRST
1343871436.442773672 1343871445.442773672 TIMEOUT
1343871445.442832158 1343871454.442832158 TIMEOUT
:
One possibility is the fact that the system clock is not sacrosanct - it may be modified periodically by NTP or other time synchronisation processes. I mention that as a possibility but it seems a little strange that it would happen in the short time between the timeout and getting the current time.
One test would be to use a different timeout (such as alternating seven and thirteen seconds) to see if the effect is the same (those numbers were chosen to be prime and unlikely to be a multiple of any other activity on the system).

Understanding successive calls to ctime()

I have a question about how the glibc ctime() works.
Here is my snippet:
#include <stdio.h>
#include <time.h>
#include <stdlib.h>

int main (int argc, char** argv)
{
    int ret = EXIT_SUCCESS;
    time_t tm1;
    time_t tm2;

    tm1 = time(NULL);
    tm2 = tm1 + 60; // 60 seconds later

    puts("1st method-------");
    printf("tm1 = %stm2 = %s", ctime(&tm1), ctime(&tm2));

    puts("2nd method-------");
    printf("tm1 = %s", ctime(&tm1));
    printf("tm2 = %s", ctime(&tm2));

    return(ret);
}
I got:
1st method-------
tm1 = Sat Jan 14 01:13:28 2012
tm2 = Sat Jan 14 01:13:28 2012
2nd method-------
tm1 = Sat Jan 14 01:13:28 2012
tm2 = Sat Jan 14 01:14:28 2012
As you can see, in the first method both times have the same value, which is not correct. In the 2nd method I get the correct values.
I know that ctime() puts the string into a static buffer, and that a successive call to ctime() overwrites it.
Q: Am I not making a successive call in the 1st method?
Thank you for any reply.
You've provided all the info necessary to solve the problem.
The second method works as you'd expect: ctime gets called, fills the buffer, and the result gets printed; this process is then repeated. So you get the two distinct times printed.
For the first method, the order is different: ctime is called, then it is called again, and only then do the results get printed. The result of each call to ctime is the same, at least as far as printf is concerned: the address of the static buffer. But the contents of that buffer were changed by each call, and since printf doesn't look in the buffer until both ctime calls are done, it ends up printing the newer contents twice.
So you ARE making both calls in the first method; it's just that the result of the first call gets overwritten before it is printed.
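If you want the first method's single printf to work, one option is ctime_r(), which writes into a caller-supplied buffer instead of the shared static one (a minimal sketch):

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t tm1 = time(NULL);
    time_t tm2 = tm1 + 60;   /* 60 seconds later */
    char buf1[26], buf2[26]; /* ctime_r requires at least 26 bytes */

    /* each result lives in its own buffer, so nothing is overwritten */
    printf("tm1 = %stm2 = %s", ctime_r(&tm1, buf1), ctime_r(&tm2, buf2));
    return 0;
}

(Strictly speaking, the evaluation order of the two ctime_r calls within one printf is still unspecified, but since they no longer share a buffer it doesn't matter.)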
