Writing buffer to BYTE* using CURL - visual-c++

How do I write the response buffer to a variable of type BYTE*? I have already tried writing my own CURLOPT_WRITEFUNCTION callback - https://pastebin.com/UBrp7Wyx - and setting CURLOPT_WRITEDATA to:
BYTE* raw_data = nullptr;
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &raw_data);

Actually, why do you need to store the data as BYTE? If you are using CURL for websites and want to accept extended symbols (like Cyrillic), you will not get the behaviour you expect (for example, UTF-8 is backward compatible with ASCII, so a simple reinterpret, or adding 128 to each symbol, will not help). It will behave the same as if you were converting some char buffer to something else.
So code like this, with plain chars, works fine:
...
// your write function
size_t curlWriteFunc(char* data, size_t size, size_t nmemb, std::string* buffer)
{
    size_t result = 0;
    if (buffer != NULL)
    {
        buffer->append(data, size * nmemb);
        result = size * nmemb;
    }
    return result;
}

int main()
{
    ... // some preparation, setting up the easy handle and so on
    std::string buffer;
    // set the write function and the pointer to the buffer
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, &curlWriteFunc);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buffer);
    ... // perform the request and other things
}
However, if you are using CURL not for websites but for your own client, which receives binary data where BYTE is required, you will have to use your own struct that holds a BYTE*, keeps track of the allocated size and reallocates memory when needed... or just use std::vector<BYTE>.
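A minimal sketch of the vector<BYTE> variant (curlWriteBytes and the surrounding names are only illustrative, not from the code above):

#include <windows.h>      // BYTE is the Windows typedef for unsigned char
#include <curl/curl.h>
#include <vector>

// write callback that appends each incoming chunk to a std::vector<BYTE>
size_t curlWriteBytes(char* data, size_t size, size_t nmemb, void* userdata)
{
    auto* buf = static_cast<std::vector<BYTE>*>(userdata);
    buf->insert(buf->end(),
                reinterpret_cast<BYTE*>(data),
                reinterpret_cast<BYTE*>(data) + size * nmemb);
    return size * nmemb;
}

// ... in the setup code:
std::vector<BYTE> raw;
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, &curlWriteBytes);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &raw);
// after curl_easy_perform(), raw.data() is your BYTE* and raw.size() its length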

You need a data structure to accumulate the data as it comes in; a std::string suffices for that.
At the end (assuming you have a std::string s), you can then simply do:
BYTE * raw_data = (BYTE *)s.data();
Note that the raw_data pointer is only valid for as long as the string is alive. If you need the contents to survive for longer than that, use a heap allocation and memcpy:
BYTE * raw_data = new BYTE[s.size()];
memcpy(raw_data, s.data(), s.size());
Note that you need to delete[] raw_data yourself at some point.
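If you'd rather avoid the manual delete[], a std::vector<BYTE> can own the copy instead (a small sketch):

// the vector owns the copy; raw_data stays valid for as long as 'raw' is alive
std::vector<BYTE> raw(s.begin(), s.end());
BYTE* raw_data = raw.data();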

Related

Converting between WinRT HttpBufferContent and unmanaged memory in C++/CX

As part of a WinRT C++/CX component, what's the most efficient way to convert an unmanaged buffer of bytes (say, expressed as a std::string) back and forth to a Windows::Web::Http::HttpBufferContent?
This is what I ended up with, but it doesn't seem very optimal:
std::string to HttpBufferContent:
std::string m_body = ...;
auto writer = ref new DataWriter();
writer->WriteBytes(ArrayReference<unsigned char>(reinterpret_cast<unsigned char*>(const_cast<char*>(m_body.data())), m_body.length()));
auto content = ref new HttpBufferContent(writer->DetachBuffer());
HttpBufferContent to std::string:
HttpBufferContent^ content = ...;
auto operation = content->ReadAsBufferAsync();
auto task = create_task(operation);
if (task.wait() == task_status::completed) {
    auto buffer = task.get();
    size_t length = buffer->Length;
    if (length > 0) {
        unsigned char* storage = static_cast<unsigned char*>(malloc(length));
        DataReader::FromBuffer(buffer)->ReadBytes(ArrayReference<unsigned char>(storage, length));
        auto m_body = std::string(reinterpret_cast<char*>(storage), length);
        free(storage);
    }
} else {
    abort();
}
UPDATE: Here's the version I ended up using (you can trivially create a HttpBufferContent^ from a Windows::Storage::Streams::IBuffer^):
void IBufferToString(IBuffer^ buffer, std::string& string) {
    Array<unsigned char>^ array = nullptr;
    CryptographicBuffer::CopyToByteArray(buffer, &array); // TODO: Avoid copy
    string.assign(reinterpret_cast<char*>(array->Data), array->Length);
}

IBuffer^ StringToIBuffer(const std::string& string) {
    auto array = ArrayReference<unsigned char>(reinterpret_cast<unsigned char*>(const_cast<char*>(string.data())), string.length());
    return CryptographicBuffer::CreateFromByteArray(array);
}
I think you are making at least one unnecessary copy of your data in your current HttpBufferContent-to-std::string approach. You could improve this by accessing the IBuffer data directly; see the accepted answer here: Getting an array of bytes out of Windows::Storage::Streams::IBuffer
I think it's better to use a smart pointer (no manual memory management needed):
#include <wrl.h>
#include <robuffer.h>
#include <string>
using namespace Windows::Storage::Streams;
using namespace Microsoft::WRL;

IBuffer^ buffer = ...;
ComPtr<IBufferByteAccess> byte_access;
reinterpret_cast<IInspectable*>(buffer)->QueryInterface(IID_PPV_ARGS(&byte_access));
byte* raw_buffer = nullptr;
byte_access->Buffer(&raw_buffer); // returns a pointer into the IBuffer's own storage
std::string str(reinterpret_cast<char*>(raw_buffer), buffer->Length); // just 1 copy

Qt4: how to send QString inside a struct via QSharedMemory

I have a struct
struct control_data{
int column_number;
QString cell;
};
I need to send it to another thread with the help of QSharedMemory. I read that you can't do this because QString contains pointers internally. Are there any other ways?
You have to serialize your struct into a byte array. You can always convert your QString to a const char* like this:
myString.toStdString().c_str();
(note that this pointer is only valid for the lifetime of the temporary std::string). But serializing the QString should work.
The first step is to serialize your struct to a QDataStream using Qt; there is an example here.
Then, once your struct can be read and written, you can pass it through shared memory.
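For illustration, the stream operators for the control_data struct from the question might look like this (a sketch, not taken from the linked example):

#include <QDataStream>
#include <QString>

// serialize control_data field by field
QDataStream& operator<<(QDataStream& out, const control_data& d)
{
    out << qint32(d.column_number) << d.cell;
    return out;
}

// deserialize in the same order
QDataStream& operator>>(QDataStream& in, control_data& d)
{
    qint32 column;
    in >> column >> d.cell;
    d.column_number = column;
    return in;
}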
A complete example of using QSharedMemory can be found here.
Here is the relevant code:
// First, test whether a shared memory segment is already attached to the process.
// If so, detach it.
if (sharedMem.isAttached())
{
    sharedMem.detach();
}
...
QBuffer buffer;
buffer.open(QBuffer::ReadWrite);
QDataStream out(&buffer);
out << yourStruct;
int size = buffer.size(); // size of the int + size of the QString, in bytes
if (!sharedMem.create(size)) {
    return;
}
// Write into the shared memory
sharedMem.lock();
char* to = (char*)sharedMem.data();
const char* from = buffer.data().data();
memcpy(to, from, qMin(sharedMem.size(), size));
sharedMem.unlock();
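The reading side is symmetric (again only a sketch; the key and variable names are illustrative, and it relies on an operator>> overload for your struct like the one sketched above):

// attach to the same segment, copy the bytes out, then deserialize
QSharedMemory sharedMem("same_key_as_writer"); // the key must match the writer's
if (!sharedMem.attach())
    return;
sharedMem.lock();
QByteArray raw(static_cast<const char*>(sharedMem.constData()), sharedMem.size());
sharedMem.unlock();
sharedMem.detach();

QDataStream in(&raw, QIODevice::ReadOnly);
control_data received;
in >> received;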

Malloc fails and the logging, isn't this foolish?

Here's the code. Isn't this pointless?
unsigned char *rngBuf = malloc(nBytes);
if(!rngBuf) {
    DDLogError(@"Unable to allocate buffer for random bytes.");
    [self displayHUDMessage:@"Memory Error."];
    return;
}
If the malloc fails, why would logging or displaying a HUD succeed?
What's the best way to deal with this situation?
It would be foolish only if displaying the error required nBytes or more memory from the heap (as opposed to the stack, static or quoted strings, or compile-time resolved string macros like __FILE__ and __FUNCTION__).
I'm not familiar with your API, but I make sure to use statically allocated print buffers in my code. Here's an example.
// ... file management data structures ...
typedef struct {
    FILE *fp;
    size_t knt;       // record counter for use when needed
    ULLNG bytes;      // total number of bytes written to the .fp file handle
    char name[128];
    char mode_str[8];
} FILE_DESC, *pFILE_DESC;

typedef struct {
    FILE_DESC *fargs; // for the price of a FILE pointer we get all the details :)
    size_t bytes;
    char buff[1<<20];
} SMART_PRN_BUFF, *pSMART_PRN_BUFF;

// ---- allocate 4 static/global smart write-buffers here
static SMART_PRN_BUFF FiPrn[4];
Paired with an initializer near the top of main() like...
FILE_DESC File[4] = {{NULL,0,0,"","w+t"}, {NULL,0,0,"","w+t"}, {NULL,0,0,"","r+b"}, {NULL,0,0,"","w"}};
printf("\nProcessing %i file names in \n%s", argc - 1, argv[0]);
for(int i = 0; i < argc; i++) {
    if(NULL != argv[i+1]) {
        strcpy(File[i].name, argv[i+1]);
        File[i].fp = fopen(File[i].name, File[i].mode_str);
        if(NULL == File[i].fp) {
            printf("\nFile %s could not be opened in %s mode at line %i in %s\n--- ABORTING PROGRAM ---\n\n",
                   File[i].name, File[i].mode_str, __LINE__, __FILE__);
            exit(EXIT_FAILURE);
        } else {
            printf("\nFile %s opened in %s mode", File[i].name, File[i].mode_str);
        }
    }
    FiPrn[i].fargs = &File[i];
}
Usage is like ...
FiPrn[ERR].bytes += sprintf(FiPrn[ERR].buff,
"\nSize of FiPrn structure is %llu", (ULLNG)sizeof(FiPrn));
WriteToFile(&FiPrn[ERR]);
As you might have noticed, I have to use printf() rather than sprintf() until I get the files opened and can write to them, but the static memory is allocated as SMART_PRN_BUFF.buff[1<<20], which gives me 1 megabyte for each of the 4 files I've provisioned here, before I even attempt to open the files they will write to.
So I am completely confident that I can log the return values of malloc() and calloc() a few lines later, like this...
if(NULL == Sequencer)
    FiPrn[ERR].bytes += sprintf(FiPrn[ERR].buff + FiPrn[ERR].bytes,
        "\nCalloc() for Sequencer returned %p at line %i in file %s in function %s()\n", STD_ERR(Sequencer));
Where STD_ERR() is a macro that helps ensure the code emits uniform error messages, like this...
#define STD_ERR(rtn) rtn,__LINE__,__FILE__,__FUNCTION__ // uniform error handling
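In the simplest case the same idea needs no machinery at all: a message built only from string literals, stack values and __FILE__/__LINE__, written to stderr, needs no heap memory (a sketch, independent of the Objective-C logging API in the question):

#include <cstdio>
#include <cstdlib>

// report an allocation failure without allocating anything new:
// the format string and __FILE__ live in static storage, stderr is typically unbuffered
unsigned char* allocRandomBuffer(size_t nBytes)
{
    unsigned char* rngBuf = static_cast<unsigned char*>(std::malloc(nBytes));
    if (!rngBuf) {
        std::fprintf(stderr, "Unable to allocate %zu random bytes at %s:%d\n",
                     nBytes, __FILE__, __LINE__);
        return nullptr;
    }
    return rngBuf;
}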

Transform an operation into a generic method

I am working in Visual C++ (I usually work in .NET) because I need a method that is only available in this language. What I want to do is obtain the frames per second of a video file. The best I could manage was creating a project with this main() method, in which (after debugging) I could see that the result is saved correctly in the res variable.
void main()
{
    // initialize the COM library
    CoInitialize(NULL);
    // get a property store for the video file
    IPropertyStore* store = NULL;
    SHGetPropertyStoreFromParsingName(L"C:\\Users\\Public\\Videos\\Sample Videos\\Wildlife.wmv",
        NULL, GPS_READWRITE, __uuidof(IPropertyStore), (void**)&store);
    // get the frame rate
    PROPVARIANT variant;
    store->GetValue(PKEY_Video_FrameRate, &variant);
    int res = variant.intVal;
    store->Release();
}
Now I want to make this method generic, so that it can obtain the frame rate of any video. For example, if the method's name is frameRate:
char* path = "C:\\Users\\Public\\Videos\\Sample Videos\\Wildlife.wmv";
int fps = frameRate(path);
Thanks
Does this not work?
int getFrameRate(std::wstring path)
{
    // initialize the COM library
    CoInitialize(NULL);
    // get a property store for the video file
    IPropertyStore* store = NULL;
    SHGetPropertyStoreFromParsingName(path.c_str(),
        NULL, GPS_READWRITE, __uuidof(IPropertyStore), (void**)&store);
    // get the frame rate
    PROPVARIANT variant;
    store->GetValue(PKEY_Video_FrameRate, &variant);
    int res = variant.intVal;
    store->Release();
    return res;
}
The assumption here is that SHGetPropertyStoreFromParsingName takes a wide string as its first parameter. In C++ I recommend staying away from char*; std::string (or std::wstring, as here) is preferable in almost all situations. The only difficulty I see is making sure path is the correct type.
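If the path arrives as a char* or std::string, one way to get the wide string this function expects is MultiByteToWideChar (a sketch; CP_ACP is assumed for the input encoding, use CP_UTF8 if the input is UTF-8):

#include <string>
#include <windows.h>

// convert a narrow path to the std::wstring that getFrameRate() takes
std::wstring toWide(const std::string& narrow)
{
    if (narrow.empty())
        return std::wstring();
    int len = ::MultiByteToWideChar(CP_ACP, 0, narrow.data(),
                                    static_cast<int>(narrow.size()), NULL, 0);
    std::wstring wide(len, L'\0');
    ::MultiByteToWideChar(CP_ACP, 0, narrow.data(),
                          static_cast<int>(narrow.size()), &wide[0], len);
    return wide;
}

// usage:
// int fps = getFrameRate(toWide(path));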
If you don't want to recompile your code for every video path, you can read the path from the program arguments. To do that, modify your main() as follows:
#include <iostream>

int main(int argc, char* argv[])
{
    if (argc != 2)
    {
        std::cout << "You have to specify the video path!" << std::endl;
        return 1;
    }
    const char* path = argv[1];
    // Rest of the program logic
    return 0;
}
You can pass more than one parameter if you want to. Note that there is always at least one argument (argv[0] is the program name). For further reading on the topic, go here.

How to free up memory after converting a managed string to UTF-8 encoded unmanaged char*?

I'm not familiar with C++/CLI, so I'm not sure how to free the memory when using the code below (I got the solution here and modified it a little):
char* ManagedStringToUnmanagedUTF8Char( String^ s )
{
    array<unsigned char> ^bytes = Encoding::UTF8->GetBytes( s );
    pin_ptr<unsigned char> pinnedPtr = &bytes[0];
    return (char*)pinnedPtr;
}
The above code worked when I tested it by writing the chars to a text file. Please let me know if I'm missing something (do I need to clean up pinnedPtr?).
Now when I use it:
char* foobar = ManagedStringToUnmanagedUTF8Char("testing");
//do something with foobar
//do I need to free up memory by deleting foobar here?
//I tried 'delete foobar' or free(foobar) but it crashes my program
Hans Passant's comment is correct that the returned pointer to the buffer can be moved in memory by the garbage collector. This is because, when the function stack unwinds, pin_ptr will unpin the pointer.
The solution is to:
1. Obtain the System::String buffer and pin it so that the GC cannot move it.
2. Allocate memory on the unmanaged heap (or just "the heap"), where it is not under the GC's jurisdiction and cannot be moved by the GC.
3. Copy the memory (converting it to the desired encoding) from the System::String buffer to the buffer allocated on the unmanaged heap.
4. Unpin the pointer so the GC can once again move the System::String in memory. (This happens when pin_ptr goes out of the function scope.)
Sample code:
#include <windows.h>   // WideCharToMultiByte, ZeroMemory
#include <vcclr.h>     // PtrToStringChars
#include <cassert>

char* ManagedStringToUnmanagedUTF8Char(String^ str)
{
    // obtain the buffer from System::String and pin it
    pin_ptr<const wchar_t> wch = PtrToStringChars(str);
    // get the number of bytes required
    int nBytes = ::WideCharToMultiByte(CP_UTF8, 0, wch, -1, NULL, 0, NULL, NULL);
    assert(nBytes > 0);
    // allocate memory on the unmanaged heap, where the GC cannot move it
    char* lpszBuffer = new char[nBytes];
    // initialize the buffer to null
    ZeroMemory(lpszBuffer, nBytes * sizeof(char));
    // convert wchar_t* to char*, specifying UTF-8 encoding
    nBytes = ::WideCharToMultiByte(CP_UTF8, 0, wch, -1, lpszBuffer, nBytes, NULL, NULL);
    assert(nBytes > 0);
    // return the buffer
    return lpszBuffer;
}
Now, when using:
char* foobar = ManagedStringToUnmanagedUTF8Char("testing");
//do something with foobar
//when foobar is no longer needed, you need to delete[] it yourself,
//because ManagedStringToUnmanagedUTF8Char allocated it with new[] on the unmanaged heap.
delete[] foobar;
I'm not familiar with Visual-C++ either, but according to this article
Pinning pointers cannot be used as: [...] the return type of a function
I'm not sure whether the pointer will be valid when the function ends (even though it's disguised as a char*).
It seems that you declare some local variables in the function that you want to pass to the calling scope. However, these will possibly be out of scope anyway when you return from the function.
Maybe you should reconsider what you are trying to achieve in the first place?
Note that in the article you referenced, a std::string (returned by value, i.e. as a copy) is used as the return type.
std::string managedStringToStlString( System::String ^s )
{
    Encoding ^u8 = Encoding::UTF8;
    array<unsigned char> ^bytes = u8->GetBytes( s );
    pin_ptr<unsigned char> pinnedPtr = &bytes[0];
    // construct with an explicit length: the byte array is not null-terminated
    return std::string( (char*)pinnedPtr, bytes->Length );
}
This way, no local variables are passed out of their scope. The string is handed over by copy as an unmanaged std::string. This is exactly what this post suggests.
When you need a const char* later, you can use the string::c_str() method to get one. Note, that you can also write std::string to a file using file streams.
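For example (a small usage sketch; managedStr and out.txt are just illustrative):

#include <fstream>
#include <string>

std::string utf8 = managedStringToStlString(managedStr); // managedStr is some System::String^
const char* p = utf8.c_str(); // valid until 'utf8' is modified or destroyed
std::ofstream("out.txt", std::ios::binary).write(utf8.data(), utf8.size());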
Is this an option for you?
