Unicode conversion in C++17

A bit of background: my task required converting a UTF-8 XML file to UTF-16 (with the proper header, of course). I searched for the usual ways of converting UTF-8 to UTF-16, and found that one should use the templates from <codecvt>.
But now that it is deprecated, I wonder what the new common way of doing the same task is.
(I don't mind using Boost at all, but other than that I prefer to stay as close to the standard library as possible.)

Don't worry about that.
According to the same information source:
this library component should be retired to Annex D, along side <strstream>, until a suitable replacement is standardized.
So you can still use it until a new, standardized, more secure version is done.
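For reference, the deprecated route still compiles under C++17 and looks like the following minimal sketch (the function name is mine, and compilers will emit deprecation warnings):

#include <codecvt>
#include <locale>
#include <string>

// Sketch of the deprecated-but-still-available C++17 approach.
std::u16string utf8_to_utf16_deprecated(const std::string& utf8)
{
    std::wstring_convert<std::codecvt_utf8_utf16<char16_t>, char16_t> converter;
    return converter.from_bytes(utf8); // throws std::range_error on invalid input
}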

The std::codecvt template from <locale> itself isn't deprecated. For UTF-8 to UTF-16, there is still the std::codecvt<char16_t, char, std::mbstate_t> specialization.
However, since std::wstring_convert and std::wbuffer_convert are deprecated along with the standard conversion facets, there isn't any easy way to convert strings using the facets.
So, as Bolas already answered: implement it yourself (or use a third-party library, as always) or keep using the deprecated API.
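For illustration, here is a minimal sketch of driving that facet by hand (assuming well-formed input; real code should check the result code returned by in()):

#include <locale>
#include <string>

// Sketch: convert UTF-8 to UTF-16 using the non-deprecated facet directly.
std::u16string utf8_to_utf16_facet(const std::string& in)
{
    if (in.empty())
        return {};

    using facet_t = std::codecvt<char16_t, char, std::mbstate_t>;
    const facet_t& facet = std::use_facet<facet_t>(std::locale());

    std::u16string out(in.size(), u'\0'); // UTF-16 never needs more code units than UTF-8 has bytes
    std::mbstate_t state{};
    const char* from_next = nullptr;
    char16_t* to_next = nullptr;

    facet.in(state,
             in.data(), in.data() + in.size(), from_next,
             &out[0], &out[0] + out.size(), to_next);
    out.resize(to_next - &out[0]);
    return out;
}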

Since nobody has really answered the question with usable replacement code, here is one, though it is Windows-only (it uses the Win32 conversion API directly):
#include <string>
#include <stdexcept>
#include <Windows.h>

std::wstring string_to_wide_string(const std::string& string)
{
    if (string.empty())
    {
        return L"";
    }

    // Ask for the required UTF-16 length first, then convert into place.
    const auto size_needed = MultiByteToWideChar(CP_UTF8, 0, string.data(), static_cast<int>(string.size()), nullptr, 0);
    if (size_needed <= 0)
    {
        throw std::runtime_error("MultiByteToWideChar() failed: " + std::to_string(size_needed));
    }

    std::wstring result(size_needed, 0);
    MultiByteToWideChar(CP_UTF8, 0, string.data(), static_cast<int>(string.size()), result.data(), size_needed);
    return result;
}

std::string wide_string_to_string(const std::wstring& wide_string)
{
    if (wide_string.empty())
    {
        return "";
    }

    // Same two-step pattern in the opposite direction (UTF-16 to UTF-8).
    const auto size_needed = WideCharToMultiByte(CP_UTF8, 0, wide_string.data(), static_cast<int>(wide_string.size()), nullptr, 0, nullptr, nullptr);
    if (size_needed <= 0)
    {
        throw std::runtime_error("WideCharToMultiByte() failed: " + std::to_string(size_needed));
    }

    std::string result(size_needed, 0);
    WideCharToMultiByte(CP_UTF8, 0, wide_string.data(), static_cast<int>(wide_string.size()), result.data(), size_needed, nullptr, nullptr);
    return result;
}

The new way is... you write it yourself. Or just rely on deprecated functionality. Hopefully, the standards committee won't actually remove codecvt until there is a functioning replacement.
But at present, there isn't one.
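Since "write it yourself" is the current state of affairs, here is a hedged sketch of a minimal, portable UTF-8 to UTF-16 decoder. It assumes well-formed input; production code must reject bad lead bytes, truncated sequences, overlong encodings, and surrogate code points:

#include <cstddef>
#include <string>

// Minimal sketch: decode UTF-8 to UTF-16. No validation of malformed input.
std::u16string utf8_to_utf16(const std::string& utf8)
{
    std::u16string out;
    for (std::size_t i = 0; i < utf8.size(); )
    {
        const unsigned char byte = static_cast<unsigned char>(utf8[i]);
        char32_t cp = 0;
        int extra = 0;
        if      (byte < 0x80) { cp = byte;        extra = 0; } // ASCII
        else if (byte < 0xE0) { cp = byte & 0x1F; extra = 1; } // 2-byte sequence
        else if (byte < 0xF0) { cp = byte & 0x0F; extra = 2; } // 3-byte sequence
        else                  { cp = byte & 0x07; extra = 3; } // 4-byte sequence
        ++i;
        for (int j = 0; j < extra && i < utf8.size(); ++j, ++i)
            cp = (cp << 6) | (static_cast<unsigned char>(utf8[i]) & 0x3F);

        if (cp <= 0xFFFF)
        {
            out.push_back(static_cast<char16_t>(cp)); // BMP: one code unit
        }
        else
        {
            cp -= 0x10000; // encode as a surrogate pair
            out.push_back(static_cast<char16_t>(0xD800 | (cp >> 10)));
            out.push_back(static_cast<char16_t>(0xDC00 | (cp & 0x3FF)));
        }
    }
    return out;
}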

Related

C6386 warning passing WCHAR param to StringCchCopy - how to resolve?

This code is giving me the following code analysis warning:
C6386 Buffer overrun while writing to m_NID.szTip.
bool CCenterCursorOnScreenDlg::SetTrayIconTipText()
{
    if (StringCchCopy(m_NID.szTip, sizeof(m_NID.szTip), m_strTrayIconTip) != S_OK)
        return FALSE;

    m_NID.uFlags |= NIF_TIP;
    return Shell_NotifyIcon(NIM_MODIFY, &m_NID);
}
The structure in question, m_NID, is of type NOTIFYICONDATA. How can I resolve this warning?
The C++17 std::size(m_NID.szTip) from <iterator> is the "modern C++" equivalent of the old sizeof(m_NID.szTip)/sizeof(m_NID.szTip[0]). For MSVC, it is supported by the Standard C++ Library even in the default C++11/C++14 compile mode.
https://en.cppreference.com/w/cpp/iterator/size
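Applied to the code in the question, the fix might look like this (a sketch: std::size yields the character count StringCchCopy expects, whereas sizeof yields a byte count):

#include <iterator>

// Sketch: pass the element count, not the byte count, to StringCchCopy.
if (StringCchCopy(m_NID.szTip, std::size(m_NID.szTip), m_strTrayIconTip) != S_OK)
    return FALSE;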
With that said, strsafe.h is very much the first-generation, old-school approach here. A better option is to use the "Safer CRT" functions, which are already part of the Visual Studio runtime. You could instead just use strcpy_s or wcscpy_s, both of which have C++ template forms that automatically determine the size of fixed-size buffers. Note that they return an errno_t (0 on success) rather than an HRESULT.
If using MBCS:
if (strcpy_s(m_NID.szTip, m_strTrayIconTip) != 0)
or using UNICODE:
if (wcscpy_s(m_NID.szTip, m_strTrayIconTip) != 0)

Visual C++ lambdas always output debug information

If I instantiate a lambda somewhere (and the compiler doesn't inline it), I can find a string showing where the lambda is located in my C++ code, like this:
... ?AV<lambda_1>@?0??MyFunction@MyScopedClass@MyNamespace@@SAXXZ@ ...
I don't want this information in the executable, as it could give away important names of classes and functions.
All kinds of debug-information output are turned off. If I use a normal function instead, the final executable doesn't contain this information, so manually converting all lambdas into normal functions would "fix" it. But what's the best way to handle this? Can we tell the compiler to transform lambdas into normal functions?
UPDATE: I tested with other compilers, g++ and clang; they both leave the same references. I also found another unanswered question about this: "Gcc - why are lambdas not stripped during release build". Please don't come with the "why do you care about a few symbols anyway".
Here's some code you can test with:
#include <iostream>
#include <functional>
class MyUnscopedClass
{
public:
    MyUnscopedClass(const std::function<void(int x)>& f) :
        f(f)
    {
    }

    std::function<void(int x)> f;
};

namespace MyNamespace
{
    class MyScopedClass
    {
    public:
        static void temp1(int x)
        {
            std::cout << x * (x + 1);
        }

        static void MyFunction()
        {
            //MyUnscopedClass obj(temp1); // no symbols
            MyUnscopedClass obj([](int x) // ?AV<lambda_1>@?0??MyFunction@MyScopedClass@MyNamespace@@SAXXZ@
            {
                std::cout << x;
            });
            obj.f(23);
        }
    };
}

int main()
{
    MyNamespace::MyScopedClass::MyFunction();
}
With the help of @dxiv in the comments, I found the problematic setting:
Configuration Properties > General > C++ Language Standard
For some reason it can't be
Preview - Features from the Latest C++ Working Draft (std:c++latest)
so I set it to the next most recent one,
ISO C++17 Standard (std:c++17)
and now I get a random identifier instead:
AV<lambda_f65614ace4683bbc78b79ad57f781b7f>@@
I'm still curious how this identifier is chosen, though.

Linux Rendering offscreen with OpenGL 3.2+ w/ FBOs

I have an Ubuntu machine and a command-line application, written on OS X, which renders something offscreen using FBOs. This is part of the code:
this->systemProvider->setupContext(); //be careful with this one. to add thingies to identify if a context is set up or not
this->systemProvider->useContext();
glewExperimental = GL_TRUE;
glewInit();
GLuint framebuffer, renderbuffer, depthRenderBuffer;
GLuint imageWidth = _viewPortWidth,
       imageHeight = _viewPortHeight;
//Set up a FBO with one renderbuffer attachment
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glGenRenderbuffers(1, &renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB, imageWidth, imageHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbuffer);
//Now bind a depth buffer to the FBO
glGenRenderbuffers(1, &depthRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, _viewPortWidth, _viewPortHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer);
The "system provider" is a C++ wrapper around OS X's NSOpenGLContext, which is used just to create a rendering context and make it current, without associating it with a window. All the rendering happens in the FBOs.
I am trying to use the same approach for Linux (Ubuntu) using GLX, but I am having a hard time doing it, since I see that GLX requires a pixel buffer.
I am trying to follow this tutorial:
http://renderingpipeline.com/2012/05/windowless-opengl/
At the end it uses a pixel buffer to make the context current, but I hear pixel buffers are deprecated and should be abandoned in favour of framebuffer objects. Is that right? (I may be wrong about this.)
Does anyone have a better approach, or idea?
I don't know if it's the best solution, but it surely works for me.
Binding the functions to local variables that we can use:
typedef GLXContext (*glXCreateContextAttribsARBProc)(Display*, GLXFBConfig, GLXContext, Bool, const int*);
typedef Bool (*glXMakeContextCurrentARBProc)(Display*, GLXDrawable, GLXDrawable, GLXContext);
static glXCreateContextAttribsARBProc glXCreateContextAttribsARB = NULL;
static glXMakeContextCurrentARBProc glXMakeContextCurrentARB = NULL;
Our objects as class properties:
Display *display;
GLXPbuffer pbuffer;
GLXContext openGLContext;
Setting up the context:
glXCreateContextAttribsARB = (glXCreateContextAttribsARBProc) glXGetProcAddressARB( (const GLubyte *) "glXCreateContextAttribsARB" );
glXMakeContextCurrentARB = (glXMakeContextCurrentARBProc) glXGetProcAddressARB( (const GLubyte *) "glXMakeContextCurrent");
display = XOpenDisplay(NULL);
if (display == NULL) {
    std::cout << "error getting the X display";
}
static int visualAttribs[] = {None};
int numberOfFrameBufferConfigurations;
GLXFBConfig *fbConfigs = glXChooseFBConfig(display, DefaultScreen(display), visualAttribs, &numberOfFrameBufferConfigurations);
int context_attribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 2,
    GLX_CONTEXT_FLAGS_ARB, GLX_CONTEXT_DEBUG_BIT_ARB,
    GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
    None
};
std::cout << "initialising context...";
this->openGLContext = glXCreateContextAttribsARB(display, fbConfigs[0], 0, True, context_attribs);
int pBufferAttribs[] = {
    GLX_PBUFFER_WIDTH, (int)this->initialWidth,
    GLX_PBUFFER_HEIGHT, (int)this->initialHeight,
    None
};
this->pbuffer = glXCreatePbuffer(display, fbConfigs[0], pBufferAttribs);
XFree(fbConfigs);
XSync(display, False);
Using the context:
if (!glXMakeContextCurrent(display, pbuffer, pbuffer, openGLContext)) {
    std::cout << "error with context creation\n";
} else {
    std::cout << "made a context the current context\n";
}
After that, one can use FBOs normally, as one would on any other occasion. To this day my question is technically unanswered (whether there is a better alternative), so I am just offering a solution that worked for me. It seems to me that GLX does not use the notion of pixel buffers the same way OpenGL does, hence my confusion. The preferred way to render offscreen is FBOs, but for an OpenGL context to be created on Linux, a pixel buffer (the GLX kind) must be created first. After that, using FBOs with the code I provided in the question works as expected, the same way it does on OS X.
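For completeness, a minimal sketch of what typically follows once the pbuffer context is current and the FBO from the question is bound: check completeness, draw, and read the pixels back (imageWidth/imageHeight as in the question):

#include <vector>

// Sketch: verify the FBO, render, then read the offscreen result back.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    std::cout << "framebuffer is not complete\n";
}
glViewport(0, 0, imageWidth, imageHeight);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... issue draw calls here ...
std::vector<unsigned char> pixels(imageWidth * imageHeight * 3); // GL_RGB, 8 bits per channel
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, imageWidth, imageHeight, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());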

WinRT Asynchronous File operations in C++

I'm currently working on a metro app that requires a few textual resources. Part of the build process is copying all of these resources to a folder inside of the app's installation directory. What I'd like to do is gather a list of these resource files, and process each one accordingly. Unfortunately, my attempts to do so have been less than successful.
Since I'm building for WinRT, I can't use the very useful "FindFirstFile" and "FindNextFile" functions. I've been trying to get the job done using the WinRT Asynchronous file IO operations.
auto getResourceFolder = installedLocation->GetFolderFromPathAsync( folderPath );
getResourceFolder->Completed = ref new Windows::Foundation::AsyncOperationCompletedHandler< Windows::Storage::StorageFolder^ >(
    [this]( Windows::Foundation::IAsyncOperation< Windows::Storage::StorageFolder^ >^ operation ) {
        if( operation->Status == Windows::Foundation::AsyncStatus::Completed ) {
            auto resourceFolder = operation->GetResults();

            auto getResourceFiles = resourceFolder->GetFilesAsync();
            getResourceFiles->Completed = ref new Windows::Foundation::AsyncOperationCompletedHandler< IVectorView< Windows::Storage::IStorageFile^ >^ >(
                [this]( Windows::Foundation::IAsyncOperation< IVectorView< Windows::Storage::IStorageFile^ >^ >^ operation ) {
                    if( operation->Status == Windows::Foundation::AsyncStatus::Completed ) {
                        auto resourceFiles = operation->GetResults();
                        for( unsigned int i = 0; i < resourceFiles->Size; ++i ) {
                            // Process File
                        }
                    }
                });
        }
    });
Which fails to compile:
error C2664: 'Windows::Foundation::IAsyncOperation<TResult>::Completed::set' : cannot convert parameter 1 from 'Windows::Foundation::AsyncOperationCompletedHandler<TResult> ^' to 'Windows::Foundation::AsyncOperationCompletedHandler<TResult> ^'
The error isn't making any sense to me. I've tried rewriting the above code so that the lambda handler functions are not inline, but it hasn't made a difference. I'm not sure what's wrong.
Any ideas? Thanks in advance.
[Note: I have omitted most namespace qualification from the code and error messages for brevity.]
The Visual Studio Error List pane only shows the first line of each error (this is a very useful feature, especially when programming in C++, because some error messages from the compiler are exceedingly long). You need to look at the Output window to see the rest of the error message:
error C2664: 'IAsyncOperation<TResult>::Completed::set' :
cannot convert parameter 1
from 'AsyncOperationCompletedHandler<TResult> ^'
to 'AsyncOperationCompletedHandler<TResult> ^'
with
[
TResult=IVectorView<StorageFile ^> ^
]
and
[
TResult=IVectorView<IStorageFile ^> ^
]
and
[
TResult=IVectorView<StorageFile ^> ^
]
No user-defined-conversion operator available, or
Types pointed to are unrelated;
conversion requires reinterpret_cast, C-style cast or function-style cast
This is still a bit confusing because all three templates use a parameter named TResult. To decipher the error, note that the order of the templates in the first line matches the order of the template argument lists in the rest of the line.
The issue here is that you are mixing usage of StorageFile and IStorageFile. On both of these lines, you need to use StorageFile where IStorageFile currently appears:
getResourceFiles->Completed = ref new Windows::Foundation::AsyncOperationCompletedHandler< IVectorView< Windows::Storage::IStorageFile^ >^ >(
[this]( Windows::Foundation::IAsyncOperation< IVectorView< Windows::Storage::IStorageFile^ >^ >^ operation ) {
Note that once you fix this issue, you'll get another pair of errors because your lambdas need to have two parameters; the second is an AsyncStatus. In the end, they should both be declared as:
// Namespaces omitted for brevity
[this](IAsyncOperation<StorageFolder^>^ operation, AsyncStatus status) { }
Since I'm building for WinRT, I can't use the very useful FindFirstFile and FindNextFile functions.
Note that you can, in fact, use both FindFirstFileEx and FindNextFile in a Metro style app. (The non-Ex FindFirstFile is not usable).
You should use the asynchronous WinRT functions wherever you can and wherever it is practical, but that doesn't mean there isn't still a use for these other functions.
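For anyone who wants the synchronous route, a hedged sketch of enumerating a directory with the permitted functions might look like this (FindExInfoBasic is the info level app-container code is expected to use; the helper name is mine):

#include <Windows.h>
#include <string>

// Sketch: enumerate files with FindFirstFileEx/FindNextFile in a Metro-style app.
void enumerate_directory(const std::wstring& directory)
{
    WIN32_FIND_DATAW findData{};
    const std::wstring pattern = directory + L"\\*";

    HANDLE handle = FindFirstFileExW(pattern.c_str(),
                                     FindExInfoBasic,      // basic info; short names unavailable
                                     &findData,
                                     FindExSearchNameMatch,
                                     nullptr, 0);
    if (handle == INVALID_HANDLE_VALUE)
        return;

    do
    {
        // findData.cFileName holds the file name; process it here.
    } while (FindNextFileW(handle, &findData));

    FindClose(handle);
}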
A far simpler solution is to use PPL for your async operations. Instead of manually rolling the async operation, try:
create_task(installedLocation->GetFolderFromPathAsync(folderPath))
    .then([this](Windows::Storage::StorageFolder^ folder) {
        return folder->GetFilesAsync();
    })
    .then([this](IVectorView<Windows::Storage::StorageFile^>^ files) {
        for (unsigned int i = 0; i < files->Size; ++i) {
            // Process File
        }
    });
I'm not 100% sure on the syntax (this was written in the SO code editor), but it shows how PPL dramatically reduces the complexity of this kind of code: basically, you use create_task to create a task, then use the .then method on the task to specify a lambda to run on async completion.
EDIT: Updated to remove the nested lambda.

RBuf8 to char* in Symbian C++

I am downloading a text string from a web service into an RBuf8 using this kind of code (it works..)
void CMyApp::BodyReceivedL( const TDesC8& data )
{
    int newLength = iTextBuffer.Length() + data.Length();
    if (iTextBuffer.MaxLength() < newLength)
    {
        iTextBuffer.ReAllocL(newLength);
    }
    iTextBuffer.Append(data);
}
I then want to convert the RBuf8 into a char* string that I can display in a label or whatever, or, for debugging purposes, display with
RDebug::Printf("downloading text %S", charstring);
Edit for clarity: my conversion function looks like this.
void CMyApp::DownloadCompleteL()
{
    RBuf16 buf;
    buf.CreateL(iTextBuffer.Length());
    buf.Copy(iTextBuffer);
    RDebug::Printf("downloaded text %S", buf);
    iTextBuffer.SetLength(0);
    iTextBuffer.ReAlloc(0);
}
But this still causes a crash. I am using S60 3rd Edition FP2 v1.1.
What you may need is something to the effect of:
RDebug::Print( _L( "downloaded text %S" ), &buf );
This tutorial may help you.
void RBuf16::Copy(const TDesC8&) will take an 8-bit descriptor and convert it into a 16-bit descriptor.
You should be able to display any 16-bit descriptor on the screen. If it doesn't seem to work, post the specific API you're using.
When an API takes a variable number of parameters (like void RDebug::Printf(const char*, ...)), %S is used for "pointer to 16-bit descriptor". Note the uppercase S in %S.
Thanks, the %S is a helpful reminder. However, this doesn't seem to work: my conversion function (shown in the edit to the question above) still causes a crash. I am using S60 3rd Edition FP2 v1.1.
You have to supply a pointer to the descriptor in RDebug::Printf, so it should be:
RDebug::Print(_L("downloaded text %S"), &buf);
Although use of _L is discouraged; the _LIT macro is preferred.
As stated by quickrecipesonsymbainosblogspotcom, you need to pass a pointer to the descriptor.
RDebug::Printf("downloaded text %S", &buf); //note the address-of operator
This works because RBuf8 is derived from TDes8 (and the same with the 16-bit versions).
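Putting the answers together, a corrected version of the asker's function might look like the following sketch (the cleanup-stack usage is my addition, to make sure the 16-bit buffer is released even if a leave occurs):

void CMyApp::DownloadCompleteL()
{
    _LIT(KFormat, "downloaded text %S");

    RBuf16 buf;
    buf.CreateL(iTextBuffer.Length());
    CleanupClosePushL(buf);           // release buf even if something leaves below
    buf.Copy(iTextBuffer);            // widens the 8-bit data to 16-bit code units
    RDebug::Print(KFormat, &buf);     // note the address-of operator for %S
    CleanupStack::PopAndDestroy(&buf);

    iTextBuffer.SetLength(0);
    iTextBuffer.ReAlloc(0);           // release the 8-bit buffer's storage, as in the original
}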
