Need help debugging a LISP script error / octet sequence 141 - Excel

I'm running an Excel macro that calls a LISP script, which has always worked fine in the past, but now it's coming up with this error:
decoding error on stream
#<SB-SYS:FD-STREAM for "file Y:\...\FILE0617.CMT" {27B22531}>
(:EXTERNAL-FORMAT :CP1252):
the octet sequence (141) cannot be decoded.
What specifically should I be looking for that might be causing this error? The input file is formatted the same as the ones that worked in the past without error.
What does octet sequence 141 refer to?

Byte 141 (0x8D) is unassigned in CP1252, which is exactly why it cannot be decoded; in some other encodings it is a real character. In Mac Roman, for example, 141 is a c-cedilla (ç). I'm guessing that you got someone with a ç in their name for the first time, the file was produced in one of those other encodings, and the Lisp side, reading it as CP1252, can't handle the byte.
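You can sanity-check what byte 141 means in various encodings from any language; a minimal sketch in Python (the byte-level facts are the same no matter what does the decoding):

raw = bytes([141])                # the offending octet, 0x8D

try:
    raw.decode('cp1252')          # 0x8D is one of CP1252's unassigned bytes
except UnicodeDecodeError as e:
    print(e)                      # "... character maps to <undefined>"

print(raw.decode('mac_roman'))    # prints: ç

If the file really was produced in another encoding, reading it with that external format on the Lisp side (or with a byte-transparent one such as Latin-1, where every byte maps to some character) avoids the error, though accented characters will come out wrong if you guess the encoding incorrectly.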

Related

How to detect encoding errors in a Node.js Buffer

I'm reading a file in Node.js, into a Buffer object, and I'm decoding the UTF-8 content of the Buffer using Buffer.toString('utf8'). If there are encoding errors, I want to report a failure.
The toString() method handles decoding errors by substituting an xFFFD character, which I can detect by searching the result. But xFFFD is a legal character in the input file, and I don't want to report an error if the xFFFD was present and correctly encoded in the input.
Is there any way I can distinguish a Buffer that contains a legitimately-encoded xFFFD character from one that contains an encoding error?
The solution proposed by @eol in a comment on the question appears to meet the requirements.
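The requirement boils down to a decoder that fails loudly on malformed bytes but passes a legitimately encoded U+FFFD through untouched. In Node, the WHATWG TextDecoder created with { fatal: true } throws on malformed input; the same distinction, sketched here in Python, whose default error handler is already strict:

good = '\ufffd'.encode('utf8')   # a real, correctly encoded U+FFFD
bad = b'\x80\x80'                # malformed UTF-8: stray continuation bytes

print(good.decode('utf8'))       # succeeds, yields the replacement character itself

try:
    bad.decode('utf8')           # strict decoding raises instead of substituting
except UnicodeDecodeError:
    print('encoding error detected')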

bytes().decode provides neither Error nor Output

print(bytes(97).decode('utf8'))
This line is part of my project, and it's been giving me a headache for a while now. If I run it, the output is just an empty space, with no error or anything else. The 97 came from encoding 'a' with UTF-8. I want to work with the encoded numbers, so I converted them to integers, but I can't get them to decode once I'm done working with them.
If you run bytes(97), you get a bytes object of 97 zero bytes, b'\x00\x00\x00\x00..., because bytes(n) with an integer argument creates a zero-filled object of length n. Decoding those 97 NUL bytes yields 97 invisible NUL characters, which is why the print looked like empty space.
But if you run bytes([97]) you get b'a', which I assume is what you expected.
EDIT:
I just found this question, which explains why:
Converting int to bytes in Python 3
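To make the int/bytes round trip concrete, a short sketch:

print(bytes(97))                    # 97 NUL bytes: b'\x00\x00\x00...'
print(bytes([97]))                  # one byte with value 97: b'a'
print(bytes([97]).decode('utf8'))   # 'a'

n = 'a'.encode('utf8')[0]           # indexing a bytes object yields an int: 97
print(bytes([n]).decode('utf8'))    # back to 'a'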

EncodingGroovyMethods and decodeBase64

I have run into an issue on Windows where an encoded file is read and decoded using EncodingGroovyMethods#decodeBase64:
getClass().getResourceAsStream('/endoded_file').text.decodeBase64()
This gives me:
bad character in base64 value
The file itself has CRLF line endings, and the Groovy decodeBase64 implementation contains this snippet:
} else if (sixBit == 66) {
    // RFC 2045 says that I'm allowed to take the presence of
    // these characters as evidence of data corruption
    // So I will
    throw new RuntimeException("bad character in base64 value"); // TODO: change this exception type
}
I looked up RFC 2045, and a CRLF pair is supposed to be legal. I have tried the same with org.apache.commons.codec.binary.Base64#decodeBase64 and it works. Is this a bug in Groovy, or was it intentional?
I am using Groovy 2.4.7.
This is not a bug, but a different way of handling corrupt data. Looking at the source code of Base64 in Apache Commons, you can see the documentation:
* Ignores all non-base64 characters. This is how chunked (e.g. 76 character) data is handled, since CR and LF are
* silently ignored, but has implications for other bytes, too. This method subscribes to the garbage-in,
* garbage-out philosophy: it will not check the provided data for validity.
So, while the Apache Base64 decoder silently ignores the corrupt data, the Groovy one will complain about it. The RFC documentation is a bit fuzzy about it:
In base64 data, characters other than those in Table 1, line breaks, and other
white space probably indicate a transmission error, about which a
warning message or even a message rejection might be appropriate
under some circumstances.
While warning messages are hardly useful (who checks for warnings anyway?), the Groovy authors decided to go down the path of 'message rejection'.
TL;DR: both are fine, just different ways of handling corrupt data. If you can, try to fix or reject the incorrect data.
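Python's standard library happens to expose both behaviors behind a single flag, which makes the trade-off easy to see side by side:

import base64
import binascii

data = b'aGVsbG8=\r\n'   # valid base64 for b'hello', plus a trailing CRLF

# Lenient, the Apache Commons style: non-alphabet bytes are discarded.
print(base64.b64decode(data))              # b'hello'

# Strict, the Groovy style: any non-alphabet byte is rejected.
try:
    base64.b64decode(data, validate=True)
except binascii.Error as e:
    print('rejected:', e)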

Google Protocol Buffers (protobuf) in Python3 - trouble with ParseFromString (encoding?)

I've got Google Protocol buffers 80% working in Python3. My .proto file works, I'm encoding data, life is almost good.
The problem is that I can't ParseFromString the result of SerializeToString.
When I print SerializeToString it looks like what I'd expect, a fairly compact binary representation (preceded by b').
My guess is that perhaps this is a difference in the way Python 2 and Python 3 handle strings. The output of SerializeToString is bytes, not a string.
Printed output of SerializeToString (the Python type is <class 'bytes'>):
b'\x10\xd7\xeb\x8e\xcd\x04\x1a\x0cnamegoeshere2#\x08\x80\xf8\xde\xc3\x9f\xb0\x81\x89\x14\x11\x00\x00\x00\x00\x00\x80d\xc0\x19\x00\x00\x00\x00\x00\xc0m#!\x00\x00\x00\x00\x00\x80R\xc0)\x00\x00\x00\x00\x00x\xb7\xc01\x00\x00\x00\x00\x00\x8c\x95#9\x00\x00\x00\x00\x00\x16\xb2#'
result of ParseFromString(message):
None
No error is provided...
So my best guess is that all I need to do is .decode() the bytes object generated; the problem is that I have no clue what the encoding is. I've tried UTF-8, UTF-16, Latin-1, and a few others without success. My Google-Fu is strong but I haven't found anything on this.
Any help would be appreciated.
ParseFromString is a method -- it does not return anything, but rather fills in self with the parsed content. Use it like:
message = MyMessageType()        # an empty message of your generated type
message.ParseFromString(data)    # fills in `message` in place
print(message.some_field)
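A full round trip makes the pattern clearer (MyMessageType and some_field are stand-ins for whatever your .proto actually defines):

original = MyMessageType()
original.some_field = 'namegoeshere'

data = original.SerializeToString()   # bytes, but not text: never .decode() it

copy = MyMessageType()
copy.ParseFromString(data)            # the parsed content lands in `copy`
print(copy.some_field)                # 'namegoeshere'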

How to store binary data in a Lua string

I needed to create a custom file format with embedded meta information. Instead of whipping up my own format, I decided to just use Lua.
texture
{
    format=GL_LUMINANCE_ALPHA;
    type=GL_UNSIGNED_BYTE;
    width=256;
    height=128;
    pixels=[[
<binary-data-here>]];
}
texture is a function that takes a table as its sole argument. It then looks up the various parameters by name in the table and forwards the call on to a C++ routine. Nothing out of the ordinary I hope.
Occasionally the files fail to parse with the following error:
my_file.lua:8: unexpected symbol near ']'
What's going on here?
Is there a better way to store binary data in Lua?
Update
It turns out that storing binary data in a Lua string is non-trivial. But it is possible if you take care with three problem sequences.
Long-format-string-literals cannot have an embedded closing-long-bracket (]], ]=], etc).
This one is pretty obvious.
Long-format-string-literals cannot end with something like ]== which would match the chosen closing-long-bracket.
This one is more subtle. Luckily the script will fail to compile if done wrong.
The data cannot embed \n or \r.
Lua's built-in line-end processing messes these up. This problem is much more subtle: the script will compile fine, but it will yield the wrong data. 13 becomes 10, the pair 13 10 becomes a single 10, and so on.
To get around these limitations I split the binary data on \r and \n, pick a long-bracket level that works for each piece, and finally emit Lua that concatenates the parts back together. I used a script that does this for me; a sketch of one follows the example below.
input: XXXX\nXX]]XX\r\nXX]]XX]=
output:
texture
{
    --other fields omitted
    pixels= '' ..
        [[XXXX]] ..
        '\n' ..
        [=[XX]]XX]=] ..
        '\r\n' ..
        [==[XX]]XX]=]==];
}
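A sketch of such a splitting script in Python (working on str for readability; a real one would operate on bytes). re.split keeps the line-end separators because of the capturing group, and the bracket chooser raises the level until the closing bracket neither occurs inside the piece nor collides with its tail. Running it on the input above reproduces the pixels expression shown:

import re

def lua_long_bracket(piece):
    # Find a bracket level whose closer does not appear in the piece and
    # cannot be formed by the piece's trailing characters.
    level = 0
    while (']' + '=' * level + ']') in piece or piece.endswith(']' + '=' * level):
        level += 1
    eq = '=' * level
    return '[' + eq + '[' + piece + ']' + eq + ']'

def to_lua_expr(data):
    parts = ["''"]
    for chunk in re.split(r'(\r\n|\r|\n)', data):
        if chunk in ('\r\n', '\r', '\n'):
            # Line ends travel as ordinary escaped strings, immune to the
            # loader's text-mode translation.
            parts.append("'" + chunk.replace('\r', '\\r').replace('\n', '\\n') + "'")
        elif chunk:
            parts.append(lua_long_bracket(chunk))
    return ' ..\n'.join(parts)

print(to_lua_expr('XXXX\nXX]]XX\r\nXX]]XX]='))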
Lua is able to encode most characters in long bracket format including nulls. However, Lua opens the script file in text mode and this causes some problems. On my Windows system the following characters have problems:
Char code(s)    Problem
--------------  -----------------------------------------------
13 (CR)         Is translated to 10 (LF)
13 10 (CR LF)   Is translated to 10 (LF)
26 (EOF)        Causes "unfinished long string near '<eof>'"
If you are not using Windows, then these may not cause problems, but there may be different text-mode based problems.
I was only able to produce the error you received by encoding multiple close brackets:
a=[[
]]] --> a.lua:2: unexpected symbol near ']'
But, this was easily fixed with the following:
a=[==[
]]==]
The binary data needs to be encoded into printable characters. The simplest method for decoding purposes would be to use C-like escape sequences for all bytes. For example, hex bytes 13 41 42 1E would be encoded as '\19\65\66\30'. Of course, then the encoded data is three to four times larger than the source binary.
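A generator for that escape encoding is tiny; a sketch in Python (the function name is mine):

def to_lua_escapes(data):
    # Emit a quoted Lua string literal with one decimal escape per byte.
    return "'" + ''.join('\\' + str(b) for b in data) + "'"

print(to_lua_escapes(bytes([0x13, 0x41, 0x42, 0x1E])))   # '\19\65\66\30'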
Alternatively, you could use something like Base64, but that would have to be decoded at runtime instead of relying on the Lua interpreter. Personally, I'd probably go the Base64 route. There are Lua examples of Base64 encoding and decoding.
Another alternative would be to have two files. Use a well-defined image format file (e.g. TGA) that is pointed to by a separate Lua script containing the additional metadata. If you don't want two files to move around, they could be combined in an archive.
