Should I use CString::Format or sprintf_s?

I have two pieces of code like this.
Code 1: CString::Format method
CString cStr;
char* ToString(CString pszFormat, ...)
{
    va_list argList;
    va_start(argList, pszFormat);
    cStr.FormatV(pszFormat, argList); // _T() only works on literals, so pass the format through directly
    va_end(argList);
    return (LPTSTR)(LPCTSTR)cStr;
    // Note that this returns a pointer to the CString's internal content
}
Code 2: sprintf_s method
char strChar[100];
char* ToString(char const* const _Format, ...)
{
    va_list argList;
    va_start(argList, _Format);
    vsprintf_s(strChar, _Format, argList);
    va_end(argList);
    return strChar;
    // Note that this returns a pointer to the string content
}
In code 1 I feel totally safe: I don't have to worry about the formatted result being too long. But I'm afraid code 1 may hurt performance. I don't know whether it can leak memory, and since CString has dynamic length, I suspect it may allocate and free memory like nobody's business.
So I came up with code 2. But in code 2 I run the risk that if the formatted result is too long - say a string of length 1000 - the program will crash with a 'buffer too small' error.
I don't know which one is better: CString::Format or sprintf_s? If sprintf_s really improves performance and CString::Format is bad for performance, then I'll put in the extra effort to prevent 'buffer too small' in sprintf_s. But if sprintf_s is not worth it, I'll take CString::Format.
Thanks for reading.

If you are worried that the buffer in strChar might overflow, then why not simply use vsnprintf_s()? Its second argument restricts the number of characters written to the output buffer. You can modify your ToString() function to receive this extra sizeOfBuffer argument and pass it on to vsnprintf_s().
See https://msdn.microsoft.com/en-us/library/d3xd30zz.aspx for the details and other ways to prevent a buffer overrun.
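For illustration, a minimal sketch of that modification (the bufferSize parameter and the _TRUNCATE count are illustrative additions, not the asker's original code):

#include <cstdarg>
#include <cstddef>
#include <cstdio>

// Sketch: the caller supplies the buffer and its size; _TRUNCATE tells
// vsnprintf_s to cut the output short instead of treating an oversized
// result as an invalid-parameter error.
char* ToString(char* buffer, size_t bufferSize, char const* format, ...)
{
    va_list argList;
    va_start(argList, format);
    vsnprintf_s(buffer, bufferSize, _TRUNCATE, format, argList);
    va_end(argList);
    return buffer;
}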


Debugging in Threading Building Blocks

I would like to program with tasks in Threading Building Blocks, but how does one do the debugging in practice?
In general, printing is a solid technique for debugging programs.
In my experience with MPI parallelization, the right way to do logging is for each rank to print its debugging information to its own file (say "debug_irank", with irank the rank in MPI_COMM_WORLD) so that logical errors can be found.
How can something similar be achieved with TBB? It is not clear how to access the thread number in the thread pool, as this is obviously something internal to TBB.
Alternatively, one could add an extra index specifying the rank when a task is generated, but this makes the code rather complicated, since the whole program has to take care of that.
First, get the program working with 1 thread. To do this, construct a task_scheduler_init as the first thing in main, like this:
#include "tbb/tbb.h"
int main() {
tbb::task_scheduler_init init(1);
...
}
Be sure to compile with the macro TBB_USE_DEBUG set to 1 so that TBB's checking will be enabled.
If the single-threaded version works, but the multi-threaded version does not, consider using Intel Inspector to spot race conditions. Be sure to compile with TBB_USE_THREADING_TOOLS so that Inspector gets enough information.
Otherwise, I usually first start by adding assertions, because the machine can check assertions much faster than I can read log messages. If I am really puzzled about why an assertion is failing, I use printfs and task ids (not thread ids). Easiest way to create a task id is to allocate one by post-incrementing a tbb::atomic<size_t> and storing the result in the task.
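As a sketch of that task-id idea (the names here are illustrative, not from the original answer):

#include "tbb/atomic.h"
#include <cstddef>

tbb::atomic<size_t> next_task_id;  // zero-initialized at namespace scope

struct my_task {
    size_t id;
    my_task() : id(next_task_id++) {}  // post-increment hands out a unique id
};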
If I'm having a really bad day and the printfs are changing program behavior so that the error does not show up, I use "delayed printfs". Stuff the printf arguments in a circular buffer, and run printf on the records later after the failure is detected. Typically for the buffer, I use an array of structs containing the format string and a few word-size values, and make the array size a power of two. Then an atomic increment and mask suffices to allocate slots. E.g., something like this:
const size_t bufSize = 1024;

struct record {
    const char* format;
    void *arg0, *arg1;
};

tbb::atomic<size_t> head;
record buf[bufSize];

void recf(const char* fmt, void* a, void* b) {
    record* r = &buf[head++ & (bufSize - 1)];
    r->format = fmt;
    r->arg0 = a;
    r->arg1 = b;
}

void recf(const char* fmt, int a, int b) {
    record* r = &buf[head++ & (bufSize - 1)];
    r->format = fmt;
    r->arg0 = (void*)a;
    r->arg1 = (void*)b;
}
The two recf routines record the format and the values. The casting is somewhat abusive, but on most architectures you can, in practice, print a record correctly with printf(r->format, r->arg0, r->arg1) even if the 2nd overload of recf created it.
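For completeness, a minimal sketch (an illustrative addition, relying on the buf, head, and record definitions above) of replaying the buffer once a failure is detected:

#include <cstdio>

void dump_records() {
    // The slot at head holds the oldest surviving record (or is empty);
    // walk one full lap of the circular buffer and print what was recorded.
    size_t start = head;
    for (size_t i = start; i < start + bufSize; ++i) {
        const record& r = buf[i & (bufSize - 1)];
        if (r.format)
            printf(r.format, r.arg0, r.arg1);
    }
}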

VC++: how-to convert CString to TCHAR*

VC++: how to convert a CString value to TCHAR*. One method is the GetBuffer() function. Is there any other way to convert a CString to TCHAR*?
CString::GetBuffer() doesn't perform any conversion; it gives direct access to the string's internal buffer.
To make a copy of a CString:
TCHAR* buf = _tcsdup(str);
// ... use buf ...
free(buf);
or
TCHAR* buf = new TCHAR[str.GetLength() + 1];
_tcscpy_s(buf, str.GetLength() + 1, str);
// ... use buf ...
delete[] buf;
However, the above code is usually not useful. You might want to modify it like so:
TCHAR buf[300];
_tcscpy_s(buf, TEXT("text"));
Usually you need to do this to read data into the buffer, so you want to make the buffer larger than the current string.
Or you can just use CString::GetBuffer(); again, you might want to make the buffer size bigger.
GetWindowText(hwnd, str.GetBuffer(300), 300);
str.ReleaseBuffer(); //release immediately
TRACE(TEXT("%s\n"), str);
In other cases you need only the const conversion: const TCHAR* cstr = str;
Lastly, TCHAR is not very useful. If your code is compatible with both ANSI and Unicode, then you might as well make it Unicode-only. But that's just a suggestion.
This depends on why you need a non-const TCHAR*. There are two main scenarios:
1. Manual update of the contents of a CString object: in that case you will have to call CSimpleStringT::GetBuffer (specifying the minimal length of the final string), update the contents, and call CSimpleStringT::ReleaseBuffer. Calling ReleaseBuffer is mandatory, as it updates internal state; failure to call it can lead to the string exposing unexpected behavior.
2. An interface that fails to expose const-correctness: if this is the case, you can either update the interface to take a const TCHAR* instead of a TCHAR* and invoke CSimpleStringT::operator PCXSTR by simply passing the CString object, or, if you cannot update the interface, make a copy into a TCHAR array and pass a pointer to that copy (see the sketch below). If you can make sure that the implementation will never modify the contents referenced through the TCHAR* parameter, you could use a const_cast instead; this is not recommended, as a later change to seemingly unrelated code can then introduce bugs.
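A short sketch of the second scenario (LegacyApi is a hypothetical non-const-correct function, not part of the original answer):

#include <tchar.h>
#include <atlstr.h>  // CString (ATL/MFC)

void LegacyApi(TCHAR* text);  // hypothetical interface we cannot change

void CallLegacyApi(const CString& str)
{
    // Pass a copy so LegacyApi cannot corrupt CString's internal buffer.
    TCHAR* buf = new TCHAR[str.GetLength() + 1];
    _tcscpy_s(buf, str.GetLength() + 1, str);
    LegacyApi(buf);
    delete[] buf;
}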

Variant type storage and alignment issues

I've made a variant type to use instead of boost::variant. Mine works by storing an index of the current type in a list of the possible types, and by storing the data in a byte array with enough space to hold the biggest type.
unsigned char data[my_types::max_size];
int type;
Now, the trouble comes when I write a value to this variant type. I use the following:
template<typename T>
void set(T a) {
    int t = type_index(T);
    if (t != -1) {
        type = t;
        puts("writing atom data");
        *((T *) data) = a;  // THIS PART CRASHES!!!!
        puts("did it!");
    } else {
        throw atom_bad_assignment;
    }
}
The line that crashes is the one that stores data to the internal buffer. As you can see, I just cast the byte array directly to a pointer of the desired type. This gives me bad address signals and bus errors when trying to write some values.
I'm using GCC on a 64-bit system. How do I set the alignment for the byte array to make sure the address of the array is 64-bit aligned? (or properly aligned for any architecture I might port this project to).
EDIT: Thank you all, but the mistake was somewhere else. Apparently, Intel doesn't really care about alignment: aligned access is faster, but it is not mandatory, and the program works fine this way. My problem was that I didn't clear the data buffer before writing, and this caused trouble with the constructors of some types. I will not, however, mark the question as answered, so more people can give me tips on alignment ;)
See http://gcc.gnu.org/onlinedocs/gcc-4.0.4/gcc/Variable-Attributes.html
unsigned char data[my_types::max_size] __attribute__ ((aligned));
int type;
I believe
#pragma pack(64)
will work on all modern compilers; it definitely works on GCC.
A more correct solution (that doesn't mess with packing globally) would be:
#pragma pack(push, 64)
// define union here
#pragma pack(pop)
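As an aside (an addition of mine, not part of either answer): since C++11 the portable way to express this is alignas, for example:

#include <cstddef>

struct my_variant {
    // Align the raw storage as strictly as any scalar type, portably.
    alignas(std::max_align_t) unsigned char data[64];  // 64 stands in for my_types::max_size
    int type;
};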

Two structs, one references another

typedef struct Radios_Frequencia {
    char tipo_radio[3];
    int qt_radio;
    int frequencia;
} Radiof;

typedef struct Radio_Cidade {
    char nome_cidade[30];
    char nome_radio[30];
    char dono_radio[3];
    int numero_horas;
    int audiencia;
    Radiof *fre;
} R_cidade;
void Cadastrar_Radio(R_cidade **q) {
    printf("%d\n", i);
    q[0] = (R_cidade*) malloc(sizeof(R_cidade));
    printf("informa a frequencia da radio\n");
    scanf("%d", &q[0]->fre->frequencia);   // problem here
    printf("%d\n", q[0]->fre->frequencia); // problem here
}
I want to know why this function, void Cadastrar_Radio(R_cidade **q), does not print the data.
You allocated storage for your primary structure but not the secondary one. Change
q[0]=(R_cidade*)malloc(sizeof(R_cidade));
to:
q[0]=(R_cidade*)malloc(sizeof(R_cidade));
q[0]->fre = malloc(sizeof(Radiof));
which will allocate both. Without that, there's a very good chance that fre will point off into never-never land (as in "you can never, never tell what's going to happen", since it's undefined behaviour).
You've allocated some storage, but you've not properly initialized any of it.
You won't get anything reliable to print until you put reliable values into the structures.
Additionally, as PaxDiablo also pointed out, you've allocated the space for the R_cidade structure, but not for the Radiof component of it. You're using scanf() to read a value into space that has not been allocated; that is not reliable - undefined behaviour at best, but usually core dump time.
Note that although the two types are linked, the C compiler most certainly doesn't do any allocation of Radiof simply because R_cidade mentions it. It can't tell whether the pointer in R_cidade is meant to point to a single structure or to the start of an array of structures, for example, so it cannot tell how much space to allocate. Besides, you might not want to initialize that structure every time - you might be happy to leave it pointing nowhere (a null pointer) except in some special circumstances known only to you.
You should also verify that the memory allocation succeeded, or use a memory allocator that guarantees never to return a null or invalid pointer. Classically, that might be a cover function for the standard malloc() function:
#undef NDEBUG
#include <assert.h>
#include <stdlib.h>  /* malloc() */

void *emalloc(size_t nbytes)
{
    void *space = malloc(nbytes);
    assert(space != 0);
    return space;
}
That's crude but effective. I use non-crashing error reporting routines of my own devising in place of the assert:
#include "stderr.h"
void *emalloc(size_t nbytes)
{
void *space = malloc(nbytes);
if (space == 0)
err_error("Out of memory\n");
return space;
}
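Putting it together (a sketch that combines the two allocations from the first answer's fix with the emalloc cover function):

void Cadastrar_Radio(R_cidade **q) {
    q[0] = (R_cidade*) emalloc(sizeof(R_cidade));   /* primary structure */
    q[0]->fre = (Radiof*) emalloc(sizeof(Radiof));  /* secondary structure */
    printf("informa a frequencia da radio\n");
    scanf("%d", &q[0]->fre->frequencia);
    printf("%d\n", q[0]->fre->frequencia);
}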

Identifying memory leaks in C++

I've got the following bit of code, which I've narrowed down as the cause of a memory leak (that is, in Task Manager, the Private Working Set of memory increases with the same repeated input string). I understand the concepts of heaps and stacks for memory, as well as the general rules for avoiding memory leaks, but something somewhere is still going wrong:
while (!quit) {
    char* thebuffer = new char[210];
    // checked the function, it isn't creating the leak
    int size = FuncToObtainInputTextFromApp(thebuffer); // stored in thebuffer
    string bufferstring = thebuffer;
    int startlog = bufferstring.find("$");
    int endlog = bufferstring.find("&");
    string str_text = "";
    str_text = bufferstring.substr(startlog, endlog - startlog + 1);
    String^ str_text_m = gcnew String(str_text_m.c_str());
    // some work done
    delete str_text_m;
    delete [] thebuffer;
}
The only thing I can think of is that it might be the creation of 'string str_text', since it never goes out of scope: it just re-loops in the while. If so, how would I resolve that? Defining it outside the while loop wouldn't solve it, since it would remain in scope then too. Any help would be greatly appreciated.
You should use scope-bound resource management (also known as RAII); it's good practice in any case. Never allocate memory manually: keep it in an automatically allocated class that cleans up the resource for you in its destructor.
Your code might read:
while (!quit)
{
    // completely safe, no leaks possible
    std::vector<char> thebuffer(210);
    int size = FuncToObtainInputTextFromApp(&thebuffer[0]);
    // you never used size; this should be better
    string bufferstring(&thebuffer[0], size);
    // find does not return an int, but a size_t
    std::size_t startlog = bufferstring.find("$");
    std::size_t endlog = bufferstring.find("&");
    // why was this split across two lines?
    // there are also no checks that the above find
    // calls succeeded, so be careful
    string str_text = bufferstring.substr(startlog, endlog - startlog + 1);
    // why copy the string into a String? why not construct
    // this directly?
    String^ str_text_m = gcnew String(str_text.c_str());
    // ...
    // don't really need to do that, I think,
    // it's garbage collected for a reason
    // delete str_text_m;
}
The point is, you won't get memory leaks if you make sure your resources free themselves. Maybe the garbage collector is causing your leak detector to misfire.
On a side note, your code seems to do a lot of unnecessary copying; you might want to rethink how many times you copy the string around. (For example, find "$" and "&" while the data is still in the vector, and copy from there straight into str_text; there is no need for an intermediate copy. See the sketch below.)
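A sketch of that suggestion (ExtractLog is an illustrative helper name; it assumes the same hypothetical FuncToObtainInputTextFromApp fills the vector):

#include <algorithm>
#include <string>
#include <vector>

// Locate the '$'...'&' range while the data is still in the vector,
// then make a single copy into the final string.
std::string ExtractLog(const std::vector<char>& buf, int size)
{
    std::vector<char>::const_iterator end = buf.begin() + size;
    std::vector<char>::const_iterator first = std::find(buf.begin(), end, '$');
    std::vector<char>::const_iterator last = std::find(first, end, '&');
    if (first == end || last == end)
        return std::string();             // markers not found
    return std::string(first, last + 1);  // one copy, includes '$' and '&'
}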
Are you using namespace std, so that str_text's type is std::string? Maybe you meant to write -
String^ str_text_m = gcnew String(str_text.c_str());
(and not gcnew String(str_text_m.c_str()) ) ?
Most importantly, allocating a String (or any object) with gcnew declares that you will not be delete'ing it explicitly - you leave that to the garbage collector. I'm not sure what happens if you do delete it (technically it's not even a pointer, and it definitely does not reference anything on the CRT heap, where new/delete have power).
You can probably safely comment out str_text_m's deletion. You can expect gradual memory increases (as the gcnew'ed objects accumulate) and sudden decreases (when garbage collection kicks in) at some intervals.
Even better, you can probably reuse str_text_m, along the lines of -
String^ str_text_m = String::Empty;  // System::String has no default constructor
while (!quit) {
    ...
    str_text_m = gcnew String(str_text.c_str());
    ...
}
I know it's recommended to set the freed variable to NULL after deleting it, just to prevent any invalid memory reference. It may help, it may not.
delete [] thebuffer;
thebuffer = NULL; // clear it to prevent using an invalid memory reference
There is a tool called DevPartner which can catch memory leaks at runtime. If you have the PDB for your application, it will give you the line numbers in your application where each memory leak was observed.
It is best used for really big applications.
