Understanding successive calls to ctime()

I have a question about how the glibc ctime() works.
Here is my snippet:
#include <stdio.h>
#include <time.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int ret = EXIT_SUCCESS;
    time_t tm1;
    time_t tm2;

    tm1 = time(NULL);
    tm2 = tm1 + 60; /* 60 seconds later */

    puts("1st method-------");
    printf("tm1 = %stm2 = %s", ctime(&tm1), ctime(&tm2));

    puts("2nd method-------");
    printf("tm1 = %s", ctime(&tm1));
    printf("tm2 = %s", ctime(&tm2));

    return ret;
}
I got:
1st method-------
tm1 = Sat Jan 14 01:13:28 2012
tm2 = Sat Jan 14 01:13:28 2012
2nd method-------
tm1 = Sat Jan 14 01:13:28 2012
tm2 = Sat Jan 14 01:14:28 2012
As you can see, in the first method both times print the same value, which is not correct. With the second method I get the correct values.
I know that ctime() puts the string in a static buffer, and that each successive call to ctime() overwrites it.
Q: Am I not making successive calls in the 1st method?
Thank you for replying.

You've provided all the info necessary to solve the problem.
The second method works as you'd expect: ctime gets called, fills the buffer, and the results get printed; this process is then repeated. So you get the two distinct times printed.
For the first method, the order is different: ctime is called, then it is called again, and only then do the results get printed. The result of each call to ctime is the same, at least as far as printf is concerned: the address of the static buffer. But the contents of that buffer are changed by each call, and since printf doesn't look in the buffer until both ctime calls are done, it ends up printing the newer contents twice.
So you ARE making both calls in the first method; it's just that the result of the first call gets overwritten before it is printed.
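If you do want both strings in a single printf call, one option is the reentrant ctime_r() (POSIX), which writes into a caller-supplied buffer of at least 26 bytes instead of the shared static one. A minimal sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t tm1 = time(NULL);
    time_t tm2 = tm1 + 60;  /* 60 seconds later */
    char buf1[26];          /* ctime_r requires a buffer of at least 26 bytes */
    char buf2[26];

    /* Each call writes into its own buffer, so neither result is overwritten. */
    printf("tm1 = %stm2 = %s", ctime_r(&tm1, buf1), ctime_r(&tm2, buf2));
    return 0;
}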

Related

C Function to return a String resulting in corrupted top size

I am trying to write a program that calls upon an [external library (?)] (I'm not sure that I'm using the right terminology here) that I am also writing to clean up a provided string. For example, if my main.c program were to be provided with a string such as:
asdfFAweWFwseFL Wefawf JAWEFfja FAWSEF
it would call upon a function in externalLibrary.c (let's call it externalLibrary_Clean for now) that would take in the string and return all characters in upper case without spaces:
ASDFFAWEWFWSEFLWEFAWFJAWEFFJAFAWSEF
The crazy part is that I have this working... so long as my string doesn't exceed 26 characters in length. As soon as I add a 27th character, I end up with an error that says
malloc(): corrupted top size.
Here is externalLibrary.c:
#include "externalLibrary.h"
#include <ctype.h>
#include <malloc.h>
#include <assert.h>
#include <string.h>
char * restrict externalLibrary_Clean(const char* restrict input) {
// first we define the return value as a pointer and initialize
// an integer to count the length of the string
char * returnVal = malloc(sizeof(input));
char * initialReturnVal = returnVal; //point to the start location
// until we hit the end of the string, we use this while loop to
// iterate through it
while (*input != '\0') {
if (isalpha(*input)) { // if we encounter an alphabet character (a-z/A-Z)
// then we convert it to an uppercase value and point our return value at it
*returnVal = toupper(*input);
returnVal++; //we use this to move our return value to the next location in memory
}
input++; // we move to the next memory location on the provided character pointer
}
*returnVal = '\0'; //once we have exhausted the input character pointer, we terminate our return value
return initialReturnVal;
}
int * restrict externalLibrary_getFrequencies(char * ar, int length){
static int freq[26];
for (int i = 0; i < length; i++){
freq[(ar[i]-65)]++;
}
return freq;
}
the header file for it (externalLibrary.h):
#ifndef LEARNINGC_EXTERNALLIBRARY_H
#define LEARNINGC_EXTERNALLIBRARY_H

#ifdef __cplusplus
extern "C" {
#endif

char * restrict externalLibrary_Clean(const char* restrict input);
int * restrict externalLibrary_getFrequencies(char * ar, int length);

#ifdef __cplusplus
}
#endif

#endif //LEARNINGC_EXTERNALLIBRARY_H
and my main.c file, where all the action happens:
#include <stdio.h>
#include "externalLibrary.h"

int main() {
    char * unfilteredString = "ASDFOIWEGOASDGLKASJGISUAAAA"; // if this exceeds 26 characters, the program breaks
    char * cleanString = externalLibrary_Clean(unfilteredString);
    //int * charDist = externalLibrary_getFrequencies(cleanString, 25); // this works just fine... for now

    printf("\nOutput: %s\n", unfilteredString);
    printf("\nCleaned Output: %s\n", cleanString);

    /*for (int i = 0; i < 26; i++) {
        if (charDist[i] == 0) {
        }
        else {
            printf("%c: %d \n", (i + 65), charDist[i]);
        }
    }*/
    return 0;
}
I'm extremely well versed in Java programming and I'm trying to translate my knowledge over to C as I wish to learn how my computer works in more detail (and have finer control over things such as memory).
If I were solving this problem in Java, it would be as simple as creating two class files: one called main.java and one called externalLibrary.java, where I would have static String Clean(String input) and then call it in main.java with String cleanString = externalLibrary.Clean(unfilteredString).
Clearly this isn't how C works, but I want to learn how it does work (and why my code is crashing with corrupted top size).
The bug is this line:
char * returnVal = malloc(sizeof(input));
The reason it is a bug is that it requests an allocation only large enough to store a pointer, meaning 8 bytes in a 64-bit program: sizeof(input) is the size of the pointer, not of the string it points to. What you want is to allocate enough space to store the modified string, which you can do with the following line:
char *returnVal = malloc(strlen(input) + 1);
So the other part of your question is why the program doesn't crash when your string is shorter than 26 characters. The reason is that malloc is allowed to give the caller slightly more than the caller requested.
In your case, the message "malloc(): corrupted top size" suggests that you are using glibc malloc, the default on Linux. That variant of malloc, in a 64-bit process, always gives you at least 0x18 (24) usable bytes (minimum chunk size 0x20, minus 8 bytes for the size/status field). In the specific case where the allocation immediately precedes the "top" chunk, writing past the end of the allocation clobbers the "top" size.
If your string is longer than 23 (0x17) characters, you start to clobber the size/status field of the subsequent chunk, because you also need 1 byte to store the trailing '\0'. Any string of 23 characters or fewer will not cause a problem.
As to why you didn't get an error with a 26-character string: one would have to see the exact program with the exact 26-character string that does not crash to give a precise answer. For example, if the program provided a 26-character input containing 3 blanks, the result would require only 26 + 1 - 3 = 24 bytes in the allocation, which would fit.
If you are not interested in that level of detail, fixing the malloc call to request the proper amount will fix your crash.
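Putting it together, a sketch of the corrected function (the same logic as the original, with the allocation sized from strlen, a NULL check added, and the restrict qualifiers dropped for simplicity):

#include <ctype.h>
#include <stdlib.h>
#include <string.h>

char *externalLibrary_Clean(const char *input)
{
    /* Allocate room for every input character plus the terminating '\0';
       the cleaned string can never be longer than the input. */
    char *returnVal = malloc(strlen(input) + 1);
    if (returnVal == NULL)
        return NULL;

    char *out = returnVal;
    while (*input != '\0') {
        if (isalpha((unsigned char)*input))
            *out++ = (char)toupper((unsigned char)*input);
        input++;
    }
    *out = '\0';
    return returnVal;
}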

Why is the tm_gmtoff field of struct tm not documented in the man page?

I need to get the difference between UTC and the local time using GCC on Linux.
It seems that the preferred way is to examine tm_gmtoff field of a struct tm returned by localtime function.
https://stackoverflow.com/a/47218792
However, tm_gmtoff is not documented in the man page of localtime; only tm_zone is.
https://man7.org/linux/man-pages/man3/localtime.3.html
It looks like tm_gmtoff and tm_zone exist in the header file.
# ifdef __USE_MISC
  long int tm_gmtoff;       /* Seconds east of UTC.  */
  const char *tm_zone;      /* Timezone abbreviation.  */
# else
  long int __tm_gmtoff;     /* Seconds east of UTC.  */
  const char *__tm_zone;    /* Timezone abbreviation.  */
# endif
https://sourceware.org/git/?p=glibc.git;a=blob;f=time/bits/types/struct_tm.h;h=b13b631228d0ec36691b25db2e1f9b1d66b54bb0;hb=HEAD
I'm not sure why tm_gmtoff is omitted in the man page. Could it be a man-page bug introduced in the following commit?
https://git.kernel.org/pub/scm/docs/man-pages/man-pages.git/commit/man3/ctime.3?id=ba39b288ab07149417867533821300256f310615&h=master
I reported this to the maintainers. It has been fixed by the following commit.
https://git.kernel.org/pub/scm/docs/man-pages/man-pages.git/commit/?id=20f1ee93171895341877b8c5679a33823c4ca582
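For completeness, a minimal sketch of the approach from the linked answer, assuming glibc with _DEFAULT_SOURCE defined so that tm_gmtoff and tm_zone are visible:

#define _DEFAULT_SOURCE  /* expose tm_gmtoff/tm_zone in glibc */
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);
    struct tm local;
    localtime_r(&now, &local);

    /* tm_gmtoff is the offset from UTC in seconds, positive east of UTC. */
    printf("UTC offset: %ld seconds (%s)\n", local.tm_gmtoff, local.tm_zone);
    return 0;
}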

Sometimes different results of mktime on Windows and on Linux

Here is the function:
time_t time_from_string(const char* timestr)
{
    if (!timestr)
        return 0;

    struct tm t1;
    memset(&t1, 0, sizeof(t1));

    int nfields = sscanf(timestr, "%04d:%02d:%02d %02d:%02d:%02d",
                         &t1.tm_year, &t1.tm_mon, &t1.tm_mday,
                         &t1.tm_hour, &t1.tm_min, &t1.tm_sec);
    if (nfields != 6)
        return 0;

    t1.tm_year -= 1900;
    t1.tm_mon--;
    t1.tm_isdst = -1; // mktime should try itself to figure out whether DST was in effect

    time_t result = mktime(&t1);
    return result;
}
When I call it with the argument "2007:11:14 11:19:07", it returns 1195028347 in Linux (Ubuntu 12.04, gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3) and 1195024747 in Windows (windows 7, Visual Studio 2010).
As can be seen, the difference is 3600 seconds.
I run both operating systems on the same computer (dual-boot), which is in the MSK time zone.
Both OSes are synchronized with internet time, and their system clocks show the correct time.
When I call this function with another argument, "2012:08:21 18:20:40", I get 1345558840 in both systems.
Why do the results differ in some cases?
EDIT
Forgot to mention: I inspected the contents of the t1 variable after the call to mktime().
In both systems:
t1.tm_sec = 7;
t1.tm_min = 19;
t1.tm_hour = 11;
t1.tm_mday = 14;
t1.tm_mon = 10;
t1.tm_year = 107;
t1.tm_wday = 3;
t1.tm_yday = 317;
t1.tm_isdst = 0;
Please note the last line: both systems determine that daylight saving time is not in effect.
Linux additionally fills in the following fields of struct tm:
t1.tm_gmtoff = 10800;
t1.tm_zone = "MSK";
From Wikipedia: Moscow Time
Until 2011, during the winter, between the last Sunday of October and the last Sunday of March, Moscow Standard Time (MSK, МСК) was 3 hours ahead of UTC, or UTC+3; during the summer, Moscow Time shifted forward an additional hour ahead of Moscow Standard Time to become Moscow Summer Time (MSD), making it UTC+4.
In 2011, the Russian government proclaimed that daylight saving time would in future be observed all year round, thus effectively displacing standard time—an action which the government claimed emerged from health concerns attributed to the annual shift back-and-forth between standard time and daylight saving time. On 27 March 2011, Muscovites set their clocks forward for a final time, effectively observing MSD, or UTC+4, permanently.
Since Moscow observed winter time (UTC+3) on 2007-11-14, 11:19:07 MSK was 08:19:07 UTC, and the Unix timestamp was 1195028347.
It looks like the value you get on Linux is correct, while the value you get on Windows assumes UTC+4, which is incorrect for that date.
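You can check this independently of either system's time zone database by converting both results back to UTC with gmtime(); a quick sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t linux_result = 1195028347;    /* returned on Linux */
    time_t windows_result = 1195024747;  /* returned on Windows */
    char buf[64];

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&linux_result));
    printf("Linux:   %s\n", buf);   /* 08:19:07 UTC, i.e. 11:19:07 MSK (UTC+3) */

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&windows_result));
    printf("Windows: %s\n", buf);   /* 07:19:07 UTC, i.e. the input read as UTC+4 */
    return 0;
}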

SIGSEGV segmentation fault at strftime_l lib64/libc.so.6

I'm porting Pro*C code from UNIX to Linux. The code compiles and links into executables successfully, but at run time it raises a segmentation fault. I debugged the code step by step, and below is the output of the GDB session.
Breakpoint 4 at 0x3b19690f50
(gdb) n
525 strftime (buf, MAX_STRING_LEN, "%d/%b/%Y:%H:%M:%S", dummy_time);
(gdb) n
Breakpoint 4, 0x0000003b19690f50 in strftime () from /lib64/libc.so.6
(gdb) n
Single stepping until exit from function strftime,
which has no line number information.
0x0000003b19690f70 in strftime_l () from /lib64/libc.so.6
(gdb) n
Single stepping until exit from function strftime_l,
which has no line number information.
Program received signal SIGSEGV, Segmentation fault.
0x0000003b19690f8b in strftime_l () from /lib64/libc.so.6
In the code, the function strftime() is called, but I have no idea why it ends up in strftime_l() in /lib64/libc.so.6.
This issue does not occur on UNIX. Please help. The code is:
static void speed_hack_libs(void)
{
    time_t dummy_time_t = time(NULL);
    struct tm *dummy_time = localtime(&dummy_time_t);
    struct tm *other_dummy_time = gmtime(&dummy_time_t);
    char buf[MAX_STRING_LEN];

    strftime(buf, MAX_STRING_LEN, "%d/%b/%Y:%H:%M:%S", dummy_time);
}
struct tm *dummy_time = localtime (&dummy_time_t);
struct tm *other_dummy_time = gmtime (&dummy_time_t);
This is not gonna work. From the man page:
The localtime() function converts the calendar time timep to broken-down time representation, expressed relative to the user's specified time zone. ... The return value points to a statically allocated struct which might be overwritten by subsequent calls to any of the date and time functions.
The gmtime() function converts the calendar time timep to broken-down time representation, expressed in Coordinated Universal Time (UTC). It may return NULL when the year does not fit into an integer. The return value points to a statically allocated struct which might be overwritten by subsequent calls to any of the date and time functions.
So, *dummy_time will probably be overwritten by the time you use it and contain unpredictable garbage. You should copy the data into your own buffer like this:
struct tm dummy_time;
memcpy(&dummy_time, localtime(&dummy_time_t), sizeof(struct tm));
Although I'm not sure how this could cause a SIGSEGV (it might be something to do with getting the month names etc.; check whether the problem persists with LC_ALL=C), you must fix this before you can move on. Also check (in the debugger) the contents of *dummy_time.
It is calling strftime_l because you compiled 64-bit: that is the 64-bit library entry point for strftime. strftime takes two pointers, a string and a struct tm pointer, and one of them is pointing to invalid memory. jpalacek gave you where to look first.
Did you include the time.h header? I think you may have missed it. Without the prototype, localtime() is implicitly declared as returning int, and in a 64-bit build the returned pointer can be truncated, which would explain the crash.
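For what it's worth, a sketch of the function with the header included and the shared static buffer avoided entirely via the POSIX localtime_r() (the real value of MAX_STRING_LEN isn't shown, so a placeholder is assumed):

#include <time.h>

#define MAX_STRING_LEN 256  /* assumed; the original macro's value isn't shown */

static void speed_hack_libs(void)
{
    time_t dummy_time_t = time(NULL);
    struct tm dummy_time;
    char buf[MAX_STRING_LEN];

    /* localtime_r fills a caller-supplied struct tm, so the result cannot be
       overwritten by a later gmtime()/localtime() call. */
    localtime_r(&dummy_time_t, &dummy_time);
    strftime(buf, MAX_STRING_LEN, "%d/%b/%Y:%H:%M:%S", &dummy_time);
}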

boost::posix_time fails in release build

I want to open a new log file each time the program runs, so I create a filename from the current time.
FILE * fplog;

void OpenLog()
{
    boost::posix_time::ptime now = boost::posix_time::second_clock::local_time();
    char buf[256];
    sprintf(buf, "ecrew%d%02d%02d_%02d%02d%02d.log",
            now.date().year(), now.date().month(), now.date().day(),
            now.time_of_day().hours(), now.time_of_day().minutes(),
            now.time_of_day().seconds());
    fplog = fopen(buf, "w");
}
This works perfectly in a debug build, producing files with names such as
ecrew20110309_141506.log
However, the same code fails strangely in a release build, producing names like
ecrew198619589827196617_141338.log
BTW, this also fails in the same way:
boost::posix_time::ptime now = boost::posix_time::second_clock::local_time();
char buf[256];
boost::gregorian::date day(boost::gregorian::day_clock::local_day());
sprintf(buf, "ecrew%d%02d%02d_%02d%02d%02d.log",
        day.year(), day.month(), day.day(),
        now.time_of_day().hours(), now.time_of_day().minutes(),
        now.time_of_day().seconds());
fplog = fopen(buf, "w");
This works:
boost::posix_time::ptime now = boost::posix_time::second_clock::local_time();
char buf[256];
sprintf(buf, "ecrew%s_%02d%02d%02d.log",
        to_iso_string(boost::gregorian::day_clock::local_day()).c_str(),
        now.time_of_day().hours(), now.time_of_day().minutes(),
        now.time_of_day().seconds());
fplog = fopen(buf, "w");
I'd still be curious to know why the previous two versions fail in a release build but work in debug.
Okay, I'm a bit late, but I stumbled onto your question while looking for the answer myself (day_clock::local_day() gives weird results when compiled as Release, here on Win XP + Boost 1.46), so I thought I should come back with what worked for me.
The data seems to be stored (I just use year, month and day) in a 16-bit manner, but when you read it you get a 32-bit integer, and whatever the bug is, it writes garbage into the top bits or doesn't clear them out before writing the lower bytes.
So my workaround is just to zero out the topmost 16 bits:
date todaysdate(day_clock::local_day());
int year = todaysdate.year() & 0xFFFF;
instead of say:
date todaysdate(day_clock::local_day());
int year = todaysdate.year();
and it works well for me anyway.
Valmond
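A likely explanation for the debug/release difference: year(), month() and day() return small wrapper objects (greg_year, greg_month, greg_day) rather than plain integers, and passing class objects through sprintf's variadic ... is not well-defined, so an optimized build is free to put garbage where %d expects an int. A sketch that converts each field to int before formatting (using Boost.Date_Time's as_number() accessors for month and day):

#include <cstdio>
#include <boost/date_time/posix_time/posix_time.hpp>

int main()
{
    boost::posix_time::ptime now = boost::posix_time::second_clock::local_time();
    char buf[256];

    // Casting everything to int hands sprintf plain integers, which is
    // well-defined in both debug and release builds.
    std::sprintf(buf, "ecrew%04d%02d%02d_%02d%02d%02d.log",
                 static_cast<int>(now.date().year()),
                 static_cast<int>(now.date().month().as_number()),
                 static_cast<int>(now.date().day().as_number()),
                 static_cast<int>(now.time_of_day().hours()),
                 static_cast<int>(now.time_of_day().minutes()),
                 static_cast<int>(now.time_of_day().seconds()));
    std::puts(buf);
    return 0;
}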
