#include <iostream>
#include <string>
using namespace std;
int main() {
string str {5, 'c'};
cout << str; // "\005c"
}
Output: c
With gdb, it confirms that str contains "\005c" with
str[0] = '\005'
str[1] = 'c'
Why is str[0] not printed to the output console?
C++ version used: C++11
ASCII 5 (ENQ) represents a signal intended to trigger a response at the receiving end. It is not visible on the console.
Reference: http://ascii.cl/
For example, try 65 instead of 5 and you will see 'A'.
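To convince yourself that str[0] is really there, you can print the numeric value of each character instead of the character itself. This is just a small sketch, not part of the original program:
#include <iostream>
#include <string>
using namespace std;
int main() {
    string str {5, 'c'};
    // Print each byte's numeric code; the invisible ENQ character shows up as 5
    for (char ch : str)
        cout << static_cast<int>(ch) << ' '; // prints: 5 99
    cout << '\n';
}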
The character with ASCII code 5 is non-printable (see an ASCII table).
The character '5', whose ASCII code is 53, is printable:
string str {53, 'c'};
cout << str; // 5c
That is because 53 is the ASCII code of the character '5'.
So you can do it this way:
#include <iostream>
#include <string>
using namespace std;
int main() {
string str {53, 'c'};
cout << str; // prints "5c"
}
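As an aside (not covered in the original answers): the braces matter here. string str {5, 'c'} selects the initializer-list constructor and builds a two-character string, while string str(5, 'c') with parentheses selects the (count, char) constructor and builds "ccccc". A minimal sketch illustrating the difference:
#include <iostream>
#include <string>
using namespace std;
int main() {
    string a {53, 'c'}; // initializer list: two characters, "5c"
    string b (5, 'c');  // count + char: five characters, "ccccc"
    cout << a << '\n' << b << '\n';
}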
Related
I have a string with the value 788597.31 and I am converting this value to double, but when I print the variable only 788597 is displayed. I have used std::stod(string) and even stringstream, but every time I get the same value as before. Can anybody help me with this?
I want to store this string value in a double variable.
#include<iostream>
#include<string>
#include<sstream>
using namespace std;
int main()
{
string s="788597.31";
double d=stod(s);
cout<<d<<" ";
stringstream g;
double a;
g<<s; g>>a;
cout<<a;
return 0;
}
The problem is in how you are printing your result, not in the string parsing. This program:
#include <iostream>
using namespace std;
int main() {
cout << 788597.31 << endl;
return 0;
}
also prints 788597.
#include <iostream>
#include <iomanip>
using namespace std;
int main() {
cout << setprecision(10) << 788597.31 << endl;
return 0;
}
prints 788597.31
If you want to see more than the default six significant digits, your program needs to say so.
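Applied to the program from the question, the fix might look like this (a sketch, keeping the original variable names):
#include <iostream>
#include <iomanip>
#include <string>
using namespace std;
int main()
{
    string s = "788597.31";
    double d = stod(s);                    // the parsing was never the problem
    cout << setprecision(10) << d << endl; // prints 788597.31
    return 0;
}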
For a vector of strings, return the sum of each string's size.
I tried to use accumulate together with a lambda function (is this the best way of calculating what I want in one line?).
The code was written in Wandbox (https://wandbox.org/permlink/YAqXGiwxuGVZkDPT):
#include <iostream>
#include <numeric>
#include <string>
#include <vector>
using namespace std;
int main() {
vector<string> v = {"abc", "def", "ghi"};
size_t totalSize = accumulate(v.begin(), v.end(), [](string s){return s.size();});
cout << totalSize << endl;
return 0;
}
I expect to get a number (9), however, errors are returned:
/opt/wandbox/gcc-head/include/c++/10.0.0/bits/stl_numeric.h:135:39: note: 'std::__cxx11::basic_string' is not derived from 'const __gnu_cxx::__normal_iterator<_Iterator, _Container>'
135 | __init = _GLIBCXX_MOVE_IF_20(__init) + *__first;
I want to know how to fix my code. Thanks.
That's because you are not using std::accumulate properly. Namely, you 1) did not specify the initial value and 2) provided a unary function instead of a binary one. Please check the docs.
The proper way to write what you want would be:
#include <iostream>
#include <numeric>
#include <string>
#include <vector>
using namespace std;
int main() {
vector<string> v = {"abc", "def", "ghi"};
size_t totalSize = accumulate(v.begin(), v.end(), 0,
[](size_t sum, const std::string& str){ return sum + str.size(); });
cout << totalSize << endl;
return 0;
}
Both issues are fixed in this code:
0 is specified as the initial value, because std::accumulate needs to know where to start, and
The lambda now accepts two parameters: the accumulated value and the next element.
Also note how std::string is passed by const reference into the lambda, whereas you passed it by value, which led to a string copy on each invocation.
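Not part of the original answer, but if C++17 is available, std::transform_reduce expresses the same computation with a unary projection, which is closer to the lambda the question originally tried to pass:
#include <functional>
#include <iostream>
#include <numeric>
#include <string>
#include <vector>
using namespace std;
int main() {
    vector<string> v = {"abc", "def", "ghi"};
    // Each element is mapped to its size, and the sizes are summed with plus<>
    size_t totalSize = transform_reduce(v.begin(), v.end(), size_t{0}, plus<>{},
                                        [](const string& s) { return s.size(); });
    cout << totalSize << endl; // 9
    return 0;
}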
Consider the following code:
#include <string>
#include <fstream>
#include <iomanip>
int main() {
std::string s = "\xe2\x82\xac\u20ac";
std::ofstream out("test.txt");
out << s.length() << ":" << s << std::endl;
out << std::endl;
out.close();
}
Under GCC 4.8 on Linux (Ubuntu 14.04), the file test.txt contains this:
6:€€
Under Visual C++ 2013 on Windows, it contains this:
4:€\x80
(By '\x80' I mean the single 8-bit character 0x80).
I've been completely unable to get either compiler to output a € character using std::wstring.
Two questions:
What exactly does the Microsoft compiler think it's doing with the char* literal? It's obviously doing something to encode it, but what is not clear.
What is the right way to rewrite the above code using std::wstring and std::wofstream so that it outputs two € characters?
This is because you are using \u20ac, which is a Unicode character literal, inside a narrow (char) string.
MSVC encodes "\xe2\x82\xac\u20ac" as 0xe2, 0x82, 0xac, 0x80, which is 4 narrow characters. It essentially encodes \u20ac as 0x80 because it maps the euro character to 0x80 in the Windows-1252 codepage.
GCC converts the Unicode literal \u20ac to the 3-byte UTF-8 sequence 0xe2, 0x82, 0xac, so the resulting string ends up as 0xe2, 0x82, 0xac, 0xe2, 0x82, 0xac.
If you use std::wstring s = L"\xe2\x82\xac\u20ac", it gets encoded by MSVC as 0xe2, 0x00, 0x82, 0x00, 0xac, 0x00, 0xac, 0x20, which is 4 wide characters; but since you are mixing hand-crafted UTF-8 with UTF-16, the resulting string doesn't make much sense. If you use std::wstring s = L"\u20ac\u20ac", you get 2 Unicode characters in a wide string, as you'd expect.
The next problem is that MSVC's ofstream and wofstream write narrow ANSI output by default. To get them to write UTF-8 you should use <codecvt> (VS 2010 or later):
#include <string>
#include <fstream>
#include <iomanip>
#include <codecvt>
int main()
{
std::wstring s = L"\u20ac\u20ac";
std::wofstream out("test.txt");
std::locale loc(std::locale::classic(), new std::codecvt_utf8<wchar_t>);
out.imbue(loc);
out << s.length() << L":" << s << std::endl;
out << std::endl;
out.close();
}
and to write UTF-16 (or more specifically UTF-16LE):
#include <string>
#include <fstream>
#include <iomanip>
#include <codecvt>
int main()
{
std::wstring s = L"\u20ac\u20ac";
std::wofstream out("test.txt", std::ios::binary );
std::locale loc(std::locale::classic(), new std::codecvt_utf16<wchar_t, 0x10ffff, std::little_endian>);
out.imbue(loc);
out << s.length() << L":" << s << L"\r\n";
out << L"\r\n";
out.close();
}
Note: With UTF-16 you have to use a binary mode rather than text mode to avoid corruption, so we can't use std::endl and have to use L"\r\n" to get the correct end-of-line text file behavior.
I'm using CryptoPP to Base64-encode the string "abc"; the problem is that the encoded string always has a trailing '0x0a'.
Here's my code:
#include <iostream>
#include <string>
using namespace std;
#include "crypto/base64.h"
using namespace CryptoPP;
int main() {
string in("abc");
string encoded;
CryptoPP::StringSource ss(
in,
true,
new CryptoPP::Base64Encoder(
new CryptoPP::StringSink(encoded)
)
);
cout << encoded.length() << endl;// outputs 5, should be 4
cout << encoded;
}
the string "abc" should be encoded to "YWJj", but the result is YWJj\n, (\n == 0x0a), and the length is 5 .
Changing the source string doesn't help, any string will be encrypted with a trailing \n
Why is that?
Thanks
From the Base64Encoder docs, the signature of the constructor is
Base64Encoder (BufferedTransformation *attachment=NULL,
bool insertLineBreaks=true,
int maxLineLength=72)
So you get line breaks by default in your encoded string. To avoid this, just do:
CryptoPP::StringSource ss(
in,
true,
new CryptoPP::Base64Encoder(
new CryptoPP::StringSink(encoded),
false
)
);
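With insertLineBreaks set to false, encoded becomes "YWJj" and encoded.length() is 4, as expected.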
I programmed a little application in C++. There is a ListBox in the UI, and I want to use the selected item of the ListBox in an algorithm that can only work with wstrings.
All in all I have two questions:
- How can I convert my
String^ curItem = listBox2->SelectedItem->ToString();
to a wstring test?
- What does the ^ in the code mean?
Thanks a lot!
It should be as simple as:
std::wstring result = msclr::interop::marshal_as<std::wstring>(curItem);
You'll also need header files to make that work:
#include <msclr\marshal.h>
#include <msclr\marshal_cppstd.h>
What this marshal_as specialization looks like inside, for the curious:
#include <vcclr.h>
pin_ptr<WCHAR> content = PtrToStringChars(curItem);
std::wstring result(content, curItem->Length);
This works because System::String is stored as wide characters internally. If you wanted a std::string, you'd have to perform a Unicode conversion with e.g. WideCharToMultiByte; it's convenient that marshal_as handles all the details for you.
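For the std::string direction mentioned above, marshal_cppstd.h also provides a std::string specialization of marshal_as; a small sketch, reusing curItem from the question:
#include <msclr\marshal_cppstd.h>
// Converts the managed String^ to a narrow std::string (handles the Unicode conversion)
std::string narrow = msclr::interop::marshal_as<std::string>(curItem);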
I flagged this as a duplicate, but here's the answer on how to get from System::String^ to a std::string.
String^ test = L"I am a .Net string of type System::String";
IntPtr ptrToNativeString = Marshal::StringToHGlobalAnsi(test);
char* nativeString = static_cast<char*>(ptrToNativeString.ToPointer());
std::string nativeStdString(nativeString); // copy into a std::string
Marshal::FreeHGlobal(ptrToNativeString);   // release the unmanaged buffer once copied
The trick is to make sure you use interop and marshalling, because you have to cross the boundary from managed code to unmanaged code.
My version is:
Platform::String^ str = L"my text";
std::wstring wstring = str->Data();
With Visual Studio 2015, just do this:
String^ s = "Bonjour!";
C++/CLI
#include <vcclr.h>
pin_ptr<const wchar_t> ptr = PtrToStringChars(s);
C++/CX
const wchar_t* ptr = s->Data();
According to Microsoft:
Reference: How to: Convert System::String to Standard String
You can convert a String to std::string or std::wstring, without using PtrToStringChars in Vcclr.h.
// convert_system_string.cpp
// compile with: /clr
#include <string>
#include <iostream>
using namespace std;
using namespace System;
void MarshalString ( String ^ s, string& os ) {
using namespace Runtime::InteropServices;
const char* chars =
(const char*)(Marshal::StringToHGlobalAnsi(s)).ToPointer();
os = chars;
Marshal::FreeHGlobal(IntPtr((void*)chars));
}
void MarshalString ( String ^ s, wstring& os ) {
using namespace Runtime::InteropServices;
const wchar_t* chars =
(const wchar_t*)(Marshal::StringToHGlobalUni(s)).ToPointer();
os = chars;
Marshal::FreeHGlobal(IntPtr((void*)chars));
}
int main() {
string a = "test";
wstring b = L"test2";
String ^ c = gcnew String("abcd");
cout << a << endl;
MarshalString(c, a);
c = "efgh";
MarshalString(c, b);
cout << a << endl;
wcout << b << endl;
}
output:
test
abcd
efgh