VC6 \r\n and Write works; Visual Studio 2013 does not work - visual-c++

The following code:
if (!cfile.Open(fileName, CFile::modeCreate | CFile::modeReadWrite)) {
    return;
}
ggg.Format(_T("0 \r\n"));
cfile.Write(ggg, ggg.GetLength());
ggg.Format(_T("SECTION \r\n"));
cfile.Write(ggg, ggg.GetLength());
produces the following:
0 SECTI
Clearly this is wrong: (a) the \r\n is ignored, and (b) the word SECTION is cut off.
Can someone please tell me what I am doing wrong?
The same code without _T() in VC6 produces the correct results.
Thank you
a.

Apparently, you are building a Unicode build; CString (presumably that's what ggg is) holds a sequence of wchar_t characters, each two bytes large. ggg.GetLength() is the length of the string in characters.
However, CFile::Write takes the length in bytes, not in characters. You are passing half the number of bytes actually taken by the string, so only half the number of characters gets written.

Have you considered changing lines like:
cfile.Write(ggg, ggg.GetLength());
to:
cfile.Write(ggg, ggg.GetLength() * sizeof(TCHAR));
Write needs the number of bytes (not characters). Since each Unicode character takes 2 bytes, you need to account for that. sizeof(TCHAR) is the number of bytes each character takes for a given build: 1 for an ANSI build, 2 for a Unicode build. Multiply that by the string length and the byte count will be correct.
Information on TCHAR can be found in the MSDN documentation here. In particular it is defined as:
The _TCHAR data type is defined conditionally in Tchar.h. If the symbol _UNICODE is defined for your build, _TCHAR is defined as wchar_t; otherwise, for single-byte and MBCS builds, it is defined as char. (wchar_t, the basic Unicode wide-character data type, is the 16-bit counterpart to an 8-bit signed char.)
TCHAR and _TCHAR in your usage should be synonymous. However, I believe these days Microsoft recommends including <tchar.h> and using _TCHAR. What I can't tell you is whether _TCHAR existed in VC 6.0.
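As a quick sanity check (a hypothetical snippet, not from the original answer), you can print the size of _TCHAR to see which kind of build you are in:

#include <tchar.h>
#include <stdio.h>

int main()
{
    // Prints 2 in a Unicode build (_TCHAR is wchar_t)
    // and 1 in an ANSI/MBCS build (_TCHAR is char).
    printf("sizeof(_TCHAR) = %u\n", (unsigned)sizeof(_TCHAR));
    return 0;
}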
With the method above, a Unicode build will produce Unicode output files, while an ANSI build will produce 8-bit ASCII output.
Want CFile::Write to output ASCII no matter what? Read on...
If you want all text written to the file as 8-bit ASCII you are going to have to use one of the conversion macros, in particular CT2A. More on the macros can be found in this MSDN article. Each macro's name can be decoded: CT2A converts a generic T character string (equivalent to W when _UNICODE is defined, equivalent to A otherwise) to ASCII, per the chart at the link. So whether the build uses Unicode or ASCII, the output will be ASCII. Your code would look something like:
ggg.Format(_T("0 \r\n"));
cfile.Write(CT2A(ggg), ggg.GetLength());
ggg.Format(_T("SECTION \r\n"));
cfile.Write(CT2A(ggg), ggg.GetLength());
Since the macro converts everything to ASCII (one byte per character), CString's GetLength() will suffice for the byte count.

How to remove the b (bytes object) prefix after packing hex in little endian?

I currently pack hex numbers in little endian with struct.pack() or p32() from pwnlib, and I always get bytes object output:
b'\xde\xad\xbe\xef'
I tried str.decode('utf-8') but in some cases I get an error.
Is there a way to decode this?
I'm using Python 3 and pwntools 4.3.
The bytes object prefix is important; it would be wrong to delete it.
It is just part of Python's representation of the object. If you need to write a C string containing the same bytes, you should write a function for it, or encode the data using hex or octal escapes.
In Python 3, bytes is not text. It is a sequence of octets, an array of bytes.
Text is a sequence of Unicode code points, like the unicode type in Python 2.
'\xaa' is just shorthand for '\u00aa', and the shorthand only creates confusion, so avoid it if possible. Use bytes objects where you mean binary data and unicode str objects where you mean text.
See https://github.com/Gallopsled/pwntools-tutorial/blob/master/bytes.md
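For example (a minimal sketch using only the standard library; pwntools' p32 would produce the same bytes):

import struct

# Little-endian pack, as in the question; yields b'\xde\xad\xbe\xef'.
packed = struct.pack('<I', 0xEFBEADDE)

# The b'' prefix is only part of the repr of a bytes object, not the data.
# To display the octets as text, render them explicitly:
print(packed.hex())                                    # deadbeef
print(''.join('\\x{:02x}'.format(b) for b in packed))  # \xde\xad\xbe\xef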

Converting Scandinavian letters from wstring to string

Goal
Converting a wstring containing ÅåÄäÖöÆæØø into a string in C++.
Environment
C++17, Visual Studio Community 2017, Windows 10 Pro 64-bit
Description
I'm trying to convert a wstring to a string and have implemented the solution suggested at
https://stackoverflow.com/a/3999597/1997617
// This is the code I use:
// Convert a wide Unicode string to an UTF8 string
std::string toString(const std::wstring &wstr)
{
    if (wstr.empty()) return std::string();
    int size_needed = WideCharToMultiByte(CP_UTF8, 0, &wstr[0], (int)wstr.size(), NULL, 0, NULL, NULL);
    std::string strTo(size_needed, 0);
    WideCharToMultiByte(CP_UTF8, 0, &wstr[0], (int)wstr.size(), &strTo[0], size_needed, NULL, NULL);
    return strTo;
}
So far so good.
My problem is that I have to handle Scandinavian letters (ÅåÄäÖöÆæØø) in addition to the English ones. Consider the input wstring below.
L"C:\\Users\\BjornLa\\Å-å-Ä-ä-Ö-ö Æ-æ-Ø-ø\\AEther Adept.jpg"
When returned it has become...
"C:\\Users\\BjornLa\\Å-å-Ä-ä-Ö-ö Æ-æ-Ø-ø\\AEther Adept.jpg"
... which causes me some trouble.
Question
So I would like to ask an often asked question, but with a small addition:
How do I convert a wstring to a string when it contains Scandinavian characters?
So, I did some additional reading and experimenting based on the comments I got.
Turns out the solution is quite simple. Just change CP_UTF8 to CP_ACP!
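For reference, here is a minimal sketch of that variant (toAnsiString is a hypothetical name; it is the same WinAPI call as above with only the code page changed, so the result depends on the machine's active ANSI code page):

// Convert a wide string using the system ANSI code page (CP_ACP).
// Lossy for characters outside that code page.
std::string toAnsiString(const std::wstring &wstr)
{
    if (wstr.empty()) return std::string();
    int size_needed = WideCharToMultiByte(CP_ACP, 0, &wstr[0], (int)wstr.size(), NULL, 0, NULL, NULL);
    std::string strTo(size_needed, 0);
    WideCharToMultiByte(CP_ACP, 0, &wstr[0], (int)wstr.size(), &strTo[0], size_needed, NULL, NULL);
    return strTo;
}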
However...
Microsoft suggests that one actually should use CP_UTF8, if you read between the lines of the MSDN documentation for the method. The note for CP_ACP reads:
This value can be different on different computers, even on the same
network. It can be changed on the same computer, leading to stored
data becoming irrecoverably corrupted. This value is only intended for
temporary use and permanent storage should use UTF-16 or UTF-8 if
possible.
Also, the note for the entire method reads:
The ANSI code pages can be different on different computers, or can be
changed for a single computer, leading to data corruption. For the
most consistent results, applications should use Unicode, such as
UTF-8 or UTF-16, instead of a specific code page, unless legacy
standards or data formats prevent the use of Unicode. If using Unicode
is not possible, applications should tag the data stream with the
appropriate encoding name when protocols allow it. HTML and XML files
allow tagging, but text files do not.
So even though this CP_ACP solution works fine for my test cases, it remains to be seen whether it is a good solution overall.

Converting bits to string (data)

I have a file which contains some data (text copied and pasted from the "What You Will Learn" portion of this PDF). Firstly, I have converted the contents of the file to bits successfully. However, when I try to convert it back to the original format, some of the characters are not correctly converted, as shown below:
Cisco has
developed the Cisco Open Network Environment (ONE)
architecture as a multifaceted approach to network
programmability delivered across three pillars:
??)É¥ Í?н??ÁÁ±¥?Ñ¥½¸ÁɽÉ?µµ¥¹?¥¹Ñ?É???Ì?¡A%̤?)?áÁ½Í??¥É?ѱ佸Íݥѡ?Ì?¹É½ÕÑ?ÉÌѼ?Õµ?¹Ð?)?á¥ÍÑ¥¹?=Á?¹±½ÜÍÁ?¥?¥?Ñ¥½¹Ì* ¤&öGV7F?öâ×&VG?÷VäfÆ÷r6öçG&öÆÆW"æB÷VäfÆ÷r ¦vVçG0¨?HÝZ]HÙ??ÙXÝÈÈ[]?\??\X[Ý?\?^\Ë?\X[?Ù\?XÙ\Ë[??\ÛÝ\?ÙHÜ?Ú\Ý?][Û?Ø\X?[]Y\È[?H?]HÙ[
As you can see here some characters are converted successfully, others are not.
My code is below:
file = open("test.txt", 'r')
myfile = ''.join(map(str, file))
l = []
for i in myfile:
    asc11 = ord(i)
    b = "{0:08b}".format(asc11)
    l.extend(int(y) for y in b)
string_bin = ''.join(map(str, l))
mydata = ''.join(chr(int(string_bin[i:i+8], 2)) for i in range(0, len(string_bin), 8))
print(mydata)
What's wrong with my code? What do I need to change to make it work properly?
What's Going On?
You are running into an encoding issue because some characters in the PDF are non-ASCII characters. For example, the bullet points are U+2022, which requires 3 bytes of storage in UTF-8.
When Python reads from your file, it doesn't know what encoding you used to write that data. Thus it reads bytes from the file and uses a character encoding to translate them into strs, which are stored using Python's own internal unicode format. (This differs from Python 2, where open() returned raw bytes stored in a str which you could then manually decode to unicode.)
Thus, in Python 3, open() accepts a named encoding parameter. For example open("test.txt",'r', encoding='ascii'). Because you don't specify the encoding when you call open(), you end up using your system's default encoding. For instance, on my laptop, the default encoding is CP1252 (LATIN-1). Yours may differ.
Whatever encoding Python uses to interpret your file, it then internally uses its own unicode format to store your string. This means that your string may internally use multi-byte characters even if the original encoding did not. For example, my laptop uses CP1252 to interpret the bytes of U+2022 as â€¢, which is internally stored as U+00E2, U+20AC and U+00A2 -- € is stored using a multi-byte character even though it was just one byte in the original file.
Let's assume your computer is sane and uses UTF-8 by default (this explanation is similar for many multi-byte characters). When you reach a bullet point, it is stored as U+2022. When you call ord('\u2022') the result is 8226. When you then call "{0:08b}".format(8226) this returns "10000000100010". That's a 14-character string. Your parsing code assumes all of the ordinals will generate 8-character strings. Because of this, the "binary" output becomes misaligned. This means that when you then parse the binary string in 8-character segments, it gets thrown off and starts interpreting things as control characters and all sorts of foreign-language characters.
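You can see the misalignment directly (a small illustrative snippet, not from the original code):

print("{0:08b}".format(ord('A')))       # 01000001 -> 8 bits, as the code assumes
print("{0:08b}".format(ord('\u2022')))  # 10000000100010 -> 14 bits; everything after it misaligns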
If you call open(..., encoding='ascii'), Python will actually throw an exception when it reads characters that are not valid ASCII.
Possible Solutions
I'm not sure why exactly you are converting the input string into the representation that you are using. It's not binary, as your question title would suggest. Rather, you've converted the data into a textual representation of its binary encoding.
Technically speaking, when you store encoded text to a file, it's stored using a binary representation. Python, and any text editor, has to decode those bytes into its internal character representation before it can display them as text. Thus, calling open("test.txt", "r", encoding="utf-8") reads the binary data out of your text file and converts it into Python's internal unicode format. Similarly, calling myfile.encode('utf-8') will return the UTF-8 encoded bytes, which can then be written to a file, network socket, etc.
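A minimal round-trip sketch of that idea:

text = '\u2022 bullet'
data = text.encode('utf-8')       # b'\xe2\x80\xa2 bullet' -- bytes, ready for a file or socket
round_trip = data.decode('utf-8')
assert round_trip == text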
If, however, you do need to use a format similar to what you are currently using, first, I still recommend you specify an encoding when you call open() (I recommend UTF-8). Then you can consider these options:
Detect and omit non-ASCII characters. They will have an ordinal >= 128.
Mimic UTF-16 or UTF-32 and emit fixed-width output for all characters. For example, use "{0:032b}".format(asc11) and then parse the result in 32-character chunks, as sketched below. It's memory- and storage-inefficient, but it will preserve multi-byte characters.
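A minimal sketch of that second option (assuming a UTF-8 encoded input file):

with open("test.txt", "r", encoding="utf-8") as f:
    text = f.read()

# 32 bits per character: wasteful, but every code point fits.
bits = ''.join("{0:032b}".format(ord(ch)) for ch in text)

decoded = ''.join(chr(int(bits[i:i+32], 2)) for i in range(0, len(bits), 32))
assert decoded == text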
Regardless, I highly recommend reading the Dive Into Python 3 chapter about strings.

Extract single unicode character from string

The problem begins when I stumble upon Unicode characters. For example, árbol. Right now I handle this by asking whether the character at position i, that is, string(i:i), is less than 127. That means it belongs to the ASCII table, so I know for sure that string(i:i) is a complete single character. In the other case (>= 127), and for my example 'árbol', string(1:2) is the complete character.
I think the way I'm handling the strings solves the problem for my practical purposes (handling files in Spanish, Polish and Russian), but in the case of Chinese characters, which may take up to 4 bytes, I would have problems.
Is there a way in Fortran to single out Unicode characters inside a string?
gfortran does not currently support non-ASCII characters in UTF-8 encoded files, see here. You can find the corresponding bug report here.
As a workaround, you can specify the Unicode character in hex notation: char(int(z'00E1'), ucs4), or '\u00E1'. The latter requires the compile option -fbackslash to enable evaluation of the backslash.
program character_kind
  use iso_fortran_env
  implicit none
  integer, parameter :: ucs4 = selected_char_kind('ISO_10646')
  character(kind=ucs4, len=20) :: string
  ! string = ucs4_'árbol' ! This does not work
  ! string = char(int(z'00E1'), ucs4) // ucs4_'rbol' ! This works
  string = ucs4_'\u00E1rbol' ! This is also working
  open (output_unit, encoding='UTF-8')
  print *, string(1:1)
  print *, string
end program character_kind
ifort seems not to support ISO_10646 at all; selected_char_kind('ISO_10646') returns -1. With ifort 15.0.0 I get the same message as described here.

How to store binary data in a Lua string

I needed to create a custom file format with embedded meta information. Instead of whipping up my own format I decided to just use Lua.
texture
{
    format=GL_LUMINANCE_ALPHA;
    type=GL_UNSIGNED_BYTE;
    width=256;
    height=128;
    pixels=[[
<binary-data-here>]];
}
texture is a function that takes a table as its sole argument. It then looks up the various parameters by name in the table and forwards the call on to a C++ routine. Nothing out of the ordinary I hope.
Occasionally the files fail to parse with the following error:
my_file.lua:8: unexpected symbol near ']'
What's going on here?
Is there a better way to store binary data in Lua?
Update
It turns out that storing binary data in a Lua string is non-trivial. But it is possible if you take care with three sequences.
Long-format-string-literals cannot have an embedded closing-long-bracket (]], ]=], etc).
This one is pretty obvious.
Long-format-string-literals cannot end with something like ]== which would match the chosen closing-long-bracket.
This one is more subtle. Luckily the script will fail to compile if done wrong.
The data cannot embed \n or \r.
Lua's built-in line-end processing messes these up. This problem is much more subtle: the script will compile fine, but it will yield the wrong data (CR 0x0D becomes LF 0x0A, the pair 0x0D 0x0A becomes a single 0x0A, etc.).
To get around these limitations, I split the binary data on \r and \n, pick a long-bracket level that works for each piece, and finally emit Lua that concatenates the various parts back together. I use a script that does this for me. For example:
input: XXXX\nXX]]XX\r\nXX]]XX]=
texture
{
    --other fields omitted
    pixels = '' ..
        [[XXXX]] ..
        '\n' ..
        [=[XX]]XX]=] ..
        '\r\n' ..
        [==[XX]]XX]=]==];
}
Lua is able to encode most characters in long-bracket format, including nulls. However, Lua opens the script file in text mode, and this causes some problems. On my Windows system the following characters are problematic:
Char code(s) Problem
-------------- -------------------------------
13 (CR) Is translated to 10 (LF)
13 10 (CR LF) Is translated to 10 (LF)
26 (EOF) Causes "unfinished long string near '<eof>'"
If you are not using Windows then these may not cause problems, but there may be different text-mode-based problems.
I was only able to produce the error you received by encoding multiple close brackets:
a=[[
]]] --> a.lua:2: unexpected symbol near ']'
But, this was easily fixed with the following:
a=[==[
]]==]
The binary data needs to be encoded into printable characters. The simplest method for decoding purposes would be to use C-like escape sequences for all bytes. For example, hex bytes 13 41 42 1E would be encoded as '\19\65\66\30' (Lua's numeric escapes are decimal). Of course, the encoded data is then three to four times larger than the source binary.
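A minimal sketch of such an encoder (quote_bytes is a hypothetical helper; the zero-padding to three digits avoids ambiguity when an escape is followed by a digit):

-- Turn an arbitrary byte string into a quoted Lua literal using
-- zero-padded decimal escapes, so it survives text-mode loading.
local function quote_bytes(s)
  return "'" .. s:gsub(".", function(c)
    return string.format("\\%03d", string.byte(c))
  end) .. "'"
end

print(quote_bytes("\19\65\66\30"))  --> '\019\065\066\030'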
Alternatively, you could use something like Base64, but that would have to be decoded at runtime instead of relying on the Lua interpreter. Personally, I'd probably go the Base64 route. There are Lua examples of Base64 encoding and decoding.
Another alternative would be to have two files. Use a well-defined image format file (e.g. TGA) that is pointed to by a separate Lua script with the additional metadata. If you don't want two files to move around, they could be combined in an archive.
