I have binary data stored in a file. I am doing this:
byte[] fileBytes = File.ReadAllBytes(@"c:\carlist.dat");
string ascii = Encoding.ASCII.GetString(fileBytes);
This is giving me the following result, with a lot of invalid characters. What am I doing wrong?
?D{F ?x#??4????? NBR-OF-CARSNUMBER-OF-CARS!"#??? NBR-OF-CARS$%??1y0#123?G??#$ NBR-OF-CARS%45??1y# NUMBER-OF-CARSd?
Hmm... it seems the file was saved from a byte buffer in which some numeric data was written right after NBR-OF-CARS. If you have access to the code that saves the file, could you check whether numbers are written there, and if they are, whether the code converts them to strings before writing the values into the binary stream?
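To see why raw numeric bytes show up as invalid characters in an ASCII dump, here is a small sketch of the difference, illustrated in Python rather than the original C# (the value 12345 and the file names are made up):
import struct

# Writing a label followed by a raw 32-bit integer vs. by its text form.
with open("cars_binary.dat", "wb") as f:
    f.write(b"NBR-OF-CARS")
    f.write(struct.pack("<i", 12345))        # raw bytes, not readable as ASCII

with open("cars_text.dat", "wb") as f:
    f.write(b"NBR-OF-CARS")
    f.write(str(12345).encode("ascii"))      # the text "12345"

print(open("cars_binary.dat", "rb").read())  # b'NBR-OF-CARS90\x00\x00' - garbage when shown as text
print(open("cars_text.dat", "rb").read())    # b'NBR-OF-CARS12345'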
Using Python 3.6. I have a bytes object coming over a socket (a FIX message) that uses the byte 0x01 (SOH, start of header) as a delimiter. The raw bytes look like
b'8=FIX.4.4\x019=65\x0135=0\x0152=20220809-21:37:06.893\x0149=TRADEWEB\x0156=ABFIXREPO\x01347=UTF-8\x0110=045\x01'
I want to store this byte string in a file for logging. I have seen FIX messages where this delimiter is rendered as the ^A control character. The final string I would like to have is -
8=FIX.4.4^A9=65^A35=0^A52=20220809-21:37:06.893^A49=TRADEWEB^A56=ABFIXREPO^A347=UTF-8^A10=045^A
I have tried various ways to achieve this, for example repr(bytes) and (ord(b) in bytes), but could not get it to work.
Any pointers are highly appreciated.
Thanks.
One way I can think of doing this is by splitting the decoded byte string on the delimiter and re-inserting the control character using a loop.
data = b'8=FIX.4.4\x019=65\x0135=0\x0152=20220809-21:37:06.893\x0149=TRADEWEB\x0156=ABFIXREPO\x01347=UTF-8\x0110=045\x01'
data = data.decode().split("\x01")
print(data)
ctl_char = "^A"
string = ""
for i in data:
    if i:  # skip the empty element left by the trailing \x01
        string += i + ctl_char
print(string)
Please note there can be a better way of doing it.
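For instance, since the delimiter is a single known byte, a minimal sketch of the same idea is to decode and replace it directly:
data = b'8=FIX.4.4\x019=65\x0135=0\x0152=20220809-21:37:06.893\x0149=TRADEWEB\x0156=ABFIXREPO\x01347=UTF-8\x0110=045\x01'

# Replace every SOH (0x01) with the printable "^A" marker before logging.
log_line = data.decode("ascii").replace("\x01", "^A")
print(log_line)
# 8=FIX.4.4^A9=65^A35=0^A52=20220809-21:37:06.893^A49=TRADEWEB^A56=ABFIXREPO^A347=UTF-8^A10=045^A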
I'm using an API which returns a Base64 encoded file that I want to parse and harvest data from. I'm having trouble decoding the Base64, as it comes back with garbled characters. The code I have is below.
Base64 decoder = new Base64()
def jsonSlurper = new JsonSlurper()
def json = jsonSlurper.parseText(Requests.getInventory(app).toString())
String stockB64 = json.getAt("stock")
byte[] decoded = decoder.decode(stockB64)
println(new String(decoded, "US-ASCII"))
I've also tried println(new String(decoded, "UTF-8")) and this returns the same garbled output. I've pasted an example snippet of the output below for reference.
� ���v���
��W`�C�:�f��y�z��A��%J,S���}qF88D q )��'�C�c�X��������+n!��`nn���.��:�g����[��)��f^���c�VK��X�W_����������?4��L���D�������i�9|�X��������\���L�V���gY-K�^����
��b�����~s��;����g���\�ie�Ki}_������
What am I doing wrong here?
You don't need the Base64 class, wherever you took it from; in Groovy you can simply call stockB64.decodeBase64() to get the decoded byte array. Are you sure that what you have there is actually text that was encoded? Usually base64-encoded content is binary, like an image; if it were text, it could simply have been put into the JSON as a string. Maybe save the resulting byte array to a file and then investigate the file type by its content.
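As a rough sketch of that last step (shown in Python rather than Groovy, with a made-up payload standing in for json["stock"]), you can decode the field, save it, and peek at the leading bytes:
import base64

# A made-up base64 value standing in for the real "stock" field.
stock_b64 = base64.b64encode(b"\x1f\x8b\x08\x00example gzip-like payload").decode("ascii")

decoded = base64.b64decode(stock_b64)
with open("stock.bin", "wb") as f:
    f.write(decoded)

# A few common magic numbers; a real tool such as `file` knows many more.
signatures = {b"\x1f\x8b": "gzip", b"PK\x03\x04": "zip (also xlsx/docx/jar)",
              b"%PDF": "pdf", b"\x89PNG": "png"}
for magic, name in signatures.items():
    if decoded.startswith(magic):
        print("looks like:", name)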
I have an array of integers which I want to dump into a binary file (a HEX file, to be specific) using a Python script.
I have written the code as:
MemDump = Debug.readMemory(ic.IConnectDebug.fRealTime, 0, 0xB0009CC4, 0xCFF, 1)
MemData = MemDump[:3321]
# Create New file in binary mode and open for writing
fp = open("MON.dmp", 'w')
sys.stdout = fp
for byte in MemData:
    print(byte)
Here MemDump contains an array of integer values. From this array I want to dump the first 3321 bytes into the file.
I am getting the output in the file MON.dmp, but in ASCII format.
And if I create the file in binary mode using
fp = open("MON.dmp", 'wb')
the print(byte) command gives me an error saying
'str' does not support the buffer interface
Thank you in Advance.
You need to convert the integers to bytes before you can write them to a file opened in 'wb' mode, and write them with fp.write() rather than print(). This can be done using the bytearray() function, which accepts a list of integers. So in this case you should use:
fp = open("MON.dmp", 'wb')
fp.write(bytearray(MemData))
fp.close()
My Code -
var utf8 = require('utf8');
var y = utf8.encode('एस एम एस गपशप');
console.log(y);
Input -
एस एम एस गपशप
Expecting Output - \xE0\xA4\x8F\xE0\xA4\xB8\x20\xE0\xA4\x8F\xE0\xA4\xAE\x20\xE0\xA4\x8F\xE0\xA4\xB8\x20\xE0\xA4\x97\xE0\xA4\xAA\xE0\xA4\xB6\xE0\xA4\xAA
Example Encoding using utf8.js
Output -
à¤à¤¸ à¤à¤® à¤à¤¸ à¤à¤ªà¤¶à¤ª
What am I doing wrong? Please help!
That code appears to be working. That output looks like UTF-8 bytes interpreted as some 8-bit character set, most likely ISO-8859-1, which is easily recognisable by the repeating patterns.
The expected output you show is just how you would represent that string in source code.
Try this:
var utf8 = require('utf8');
var y = utf8.encode('एस');
console.log(y);
console.log('\xE0\xA4\x8F\xE0\xA4\xB8');
You will probably see the same output twice.
You can easily write some code to get those hexadecimal escapes back using a lookup table and the charCodeAt function, but it is a rather unusual way to represent a string in JavaScript. JSON, for example, either uses the literal characters or \uXXXX escapes.
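As a rough sketch of producing those \xNN escapes (written in Python rather than JavaScript, purely to illustrate the idea):
# Encode the text as UTF-8 and print each byte as a \xNN escape.
text = 'एस एम एस गपशप'
escaped = ''.join('\\x{:02X}'.format(b) for b in text.encode('utf-8'))
print(escaped)
# \xE0\xA4\x8F\xE0\xA4\xB8\x20\xE0\xA4\x8F... (matches the expected output above)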
I'm receiving data as utf8 from a source, and this data was originally in binary form (it was a Buffer). I have to convert this data back to a Buffer. I'm having a hard time figuring out how to do this.
Here's a small sample that shows my problem:
var hexString = 'e61b08020304e61c09020304e61d0a020304e61e65';
var buffer1 = new Buffer(hexString, 'hex');
var str = buffer1.toString('utf8');
var buffer2 = new Buffer(str, 'utf8');
console.log('original content:', hexString);
console.log('buffer1 contains:', buffer1.toString('hex'));
console.log('buffer2 contains:', buffer2.toString('hex'));
prints
original content: e61b08020304e61c09020304e61d0a020304e61e65
buffer1 contains: e61b08020304e61c09020304e61d0a020304e61e65
buffer2 contains: efbfbd1b08020304efbfbd1c09020304efbfbd1d0a020304efbfbd1e65
Here, I would like buffer2 to be the exact same thing as buffer1.
How can I convert a utf8 string back to its original binary Buffer?
You cannot expect binary data converted to utf8 and back again to be the same as the original binary data, because of the way UTF-8 works (in particular, invalid UTF-8 sequences are replaced with the replacement character U+FFFD).
You have to use another format that correctly preserves the data. This could be 'hex', 'base64', 'binary', or some other binary-safe format provided by a third-party module. Ideally, though, you should keep it as a Buffer if you can.
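The effect is easy to reproduce; here is a minimal sketch of the principle in Python rather than Node, using the hex string from the question:
import base64

original = bytes.fromhex('e61b08020304e61c09020304e61d0a020304e61e65')

# Lossy: decoding arbitrary bytes as UTF-8 replaces invalid sequences with U+FFFD.
lossy = original.decode('utf-8', errors='replace').encode('utf-8')
print(lossy.hex())                             # efbfbd1b08020304... not the original bytes

# Lossless: a binary-safe text encoding such as base64 (or hex) round-trips exactly.
as_text = base64.b64encode(original).decode('ascii')
print(base64.b64decode(as_text) == original)   # True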
The accepted answer is misleading. Your main problem is that you're dealing with invalid UTF-8. If the data were valid, the conversion would not cause issues.
Specifically, take the first two bytes: e61b.
In binary, that's: 11100110, 00011011. This is invalid. Take a look at the diagram on the UTF-8 Wikipedia page.
It says that if a byte starts with 1110, it must be followed by two continuation bytes, each starting with 10. That is not the case here.
Whenever JavaScript hits an invalid sequence, it replaces it with �, the Unicode replacement character. Its code point is U+FFFD, and the UTF-8 encoding of that code point is efbfbd. Notice that this shows up in your output several times.
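A quick way to confirm those byte values (sketched in Python, just as a check):
# U+FFFD (the replacement character) encodes to the bytes ef bf bd in UTF-8.
print('\ufffd'.encode('utf-8').hex())                                  # efbfbd

# Decoding the invalid lead byte e6 followed by 1b substitutes U+FFFD for the e6.
print(repr(bytes.fromhex('e61b').decode('utf-8', errors='replace')))   # '�\x1b'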