MS Visual C++ 2008: char buffer longer than defined

I've got a char* buffer to hold a file that I read in binary mode. I know the length of the file is 70 bytes, and this is the value being used to produce a buffer of the correct size. The problem is that there are 17 or 18 extra positions in the array, so some random characters are being added to the end. Could this be a Unicode issue?
ulFLen stores the size of the file in bytes and has the correct value (70 for the file I'm testing on).
//Set up a buffer to store the file
pcfBuffer = new char[ulFLen];
//Reading the file
cout<<"Inputting File...";
fStream.seekg(0,ios::beg);
fStream.read(pcfBuffer,ulFLen);
if(!fStream.good()){cout<<"FAILED"<<endl;}else{cout<<"SUCCESS"<<endl;}

As it is a char array, you probably forgot a terminating NUL character.
The right way in this case would be:
//Set up a buffer to store the file and a terminating NUL character
pcfBuffer = new char[ulFLen+1];
//Reading the file
cout<<"Inputting File...";
fStream.seekg(0,ios::beg);
fStream.read(pcfBuffer,ulFLen);
if(!fStream.good()){cout<<"FAILED"<<endl;}else{cout<<"SUCCESS"<<endl;}
// Add NUL character
pcfBuffer[ulFLen] = 0;
But note that you only need a terminating NUL character for routines that depend on it, like string routines or printf with %s. Routines that use the fact that you know the length (70 characters) will work without a NUL character, too.
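For instance (a minimal sketch, assuming the buffer and length from the snippet above):
// Length-aware output: no NUL terminator needed
cout.write(pcfBuffer, ulFLen);
// String-style output: relies on pcfBuffer[ulFLen] == 0
cout << pcfBuffer;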

Add the following snippet after the data has been read; it will add the terminating zero, which is needed.
And by the way, the allocation should be pcfBuffer = new char[ulFLen+1];
size_t read_count = fStream.gcount();
if (read_count <= ulFLen)
    pcfBuffer[read_count] = 0;
This will work no matter how much data was actually read (in your case gcount() should always return 70, so you could do the following instead: pcfBuffer[70] = 0;).

Related

How do I convert a string of escape sequences to bytes?

Creating a TCP server that needs to process some data. I have a net.Conn instance "connection" from which I will read said data. The lower part of the snippet produces an error noting that it cannot use the 'esc' value as a byte value.
const (
    esc = "\a\n"
)
....
c := bufio.NewReader(connection)
data, err := c.ReadBytes(esc)
Clearly, some conversion is needed but when I try
const (
    esc = "\a\n"
)
....
c := bufio.NewReader(connection)
data, err := c.ReadBytes(byte(esc))
The compiler notes that I cannot convert esc to byte. Is this because I declared "\a\n" as a const value at the package level? Or is there something else about how I'm framing the bytes to be read?
You can't convert esc to byte because you can't convert strings into single bytes. You can convert a string into a byte slice ([]byte).
bufio.Reader only supports single-byte delimiters; for a multi-byte delimiter you should use a bufio.Scanner with a custom split function instead.
Perhaps a modified version of https://stackoverflow.com/a/37531472/1205448
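A minimal sketch of such a split function, assuming the delimiter is the two-byte sequence "\a\n" and using strings.NewReader as a stand-in for the net.Conn:
package main

import (
    "bufio"
    "bytes"
    "fmt"
    "strings"
)

// splitOnEsc splits the input on the two-byte delimiter "\a\n",
// stripping the delimiter from each token.
func splitOnEsc(data []byte, atEOF bool) (advance int, token []byte, err error) {
    if i := bytes.Index(data, []byte("\a\n")); i >= 0 {
        return i + 2, data[:i], nil
    }
    if atEOF && len(data) > 0 {
        return len(data), data, nil // final, undelimited token
    }
    return 0, nil, nil // request more data
}

func main() {
    scanner := bufio.NewScanner(strings.NewReader("one\a\ntwo\a\nthree"))
    scanner.Split(splitOnEsc)
    for scanner.Scan() {
        fmt.Println(scanner.Text()) // prints "one", "two", "three"
    }
}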

Node.js buffer with é character

I've created a simple buffer and put my string "Héllo" into it.
Here's my code:
var str = "Héllo";
var buff = new Buffer(str.length);
buff.write(str);
console.log(buff.toString("utf8"));
However, it returns "Héll" and not "Héllo". Why?
How can I fix that?
UTF-8 characters can have different lengths, from 1 byte to 4 bytes; look at this answer: https://stackoverflow.com/a/9533324/4486609
So it is not OK to assume one byte per character, as str.length does.
As for getting the right length, look at this: https://stackoverflow.com/a/9864762/4486609
.length reports the number of characters, not the number of bytes, but new Buffer() expects a number of bytes. The 'é' requires two bytes in UTF-8, so the last character falls off the end of the buffer and is truncated.
If you don't need to support anything older than Node.js 4.x.x, you can use Buffer.from():
let buffer = Buffer.from('Héllo');
console.log(buffer.toString()); // 'Héllo'
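On older Node versions where Buffer.from() is unavailable, a sketch using Buffer.byteLength to size the buffer correctly:
var str = 'Héllo';
// Buffer.byteLength counts UTF-8 bytes (6 here), not characters (5)
var buff = new Buffer(Buffer.byteLength(str, 'utf8'));
buff.write(str);
console.log(buff.toString('utf8')); // 'Héllo'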

Read system call on text files

So I have the code below that I received in an examination, and there were two parts I didn't really know how to solve.
#define MAX_LINE 4096
char line[MAX_LINE];
int fd = open("../.././test.txt", O_RDONLY);
read(fd, line, MAX_LINE);
read(fd, line, MAX_LINE);
Explain which is the minimum and the maximum number of text lines that could be read by the given code.
Change the code to read exactly one line of text.
Thank you !
Explain which is the minimum and the maximum number of text lines that could be read by the given code.
Minimum is 0 lines, or a fraction of one. Maximum per read() call is probably 2047 or 2048 if the line terminator is assumed to be \n alone (a minimal non-empty line is two bytes), 4096/3 ± 1 if it is \r\n, or 4096 if the lines are allowed to be empty; the two read() calls can double those figures. I would say the question is radically underspecified, and complain.
Change the code to read exactly one line of text.
Again this is radically underspecified. If use of stdio is allowed, which isn't stated (stdio functions aren't system calls), there are several choices. If it isn't, you would have to write a loop and concatenate byte by byte.
I think fgets() is appropriate to read one line:
#include <stdio.h>

#define MAX_LINE 4096
char line[MAX_LINE];
FILE *fp = fopen("../.././test.txt", "r");
if (fp && fgets(line, sizeof(line), fp)) {
    // line now holds one line, including the trailing '\n' if it fit
}
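If stdio is not allowed, a sketch of the loop alluded to above, using only the read() system call and fetching one byte at a time (inefficient, but strictly system-call based):
#include <fcntl.h>
#include <unistd.h>

#define MAX_LINE 4096

int main(void) {
    char line[MAX_LINE];
    int fd = open("../.././test.txt", O_RDONLY);
    size_t i = 0;
    char c;
    // One byte per read() call, until newline, EOF, or a full buffer
    while (i < MAX_LINE - 1 && read(fd, &c, 1) == 1) {
        line[i++] = c;
        if (c == '\n')
            break;
    }
    line[i] = '\0'; // NUL-terminate so the result is usable as a C string
    write(1, line, i); // echo the line to stdout
    return 0;
}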

Reading stream line by line without knowing its encoding

I have a situation where I need to process some data from a stream line by line. The problem is that the encoding of the data is not known in advance; it might be UTF-8 or any legacy single-byte encoding (e.g. Latin-1, ISO-8859-5, etc.). It will not be UTF-16 or exotics like EBCDIC, so I can reasonably expect \n to be unambiguous, which in theory lets me split it into lines. At some point, when I encounter an empty line, I will need to feed the rest of the stream somewhere else (without splitting it into lines, but still without any reencoding); think in terms of HTTP-style headers followed by an opaque body.
Here is what I got:
function processStream(stream) {
    var buffer = '';
    function splitLines(data) {
        // 'this' is the stream, since splitLines runs as a 'data' listener
        buffer += data;
        var lf = buffer.indexOf('\n');
        while (lf >= 0) {
            var line = buffer.substr(0, lf - 1); // assumes CRLF endings: also drops the '\r'
            buffer = buffer.substr(lf + 1);
            this.emit('line', line);
            lf = buffer.indexOf('\n');
        }
    }
    function processHeader(line) {
        if (line.length) {
            // do something with the line
        } else {
            // end of headers, stop splitting lines and start processing the body
            this
                .removeListener('data', splitLines)
                .removeAllListeners('line')
                .on('data', processBody);
            if (buffer.length) {
                // process leftover buffer as part of the body
                processBody(buffer);
                buffer = '';
            }
        }
    }
    function processBody(data) {
        // do something with the body chunks
    }
    stream.setEncoding('binary');
    stream
        .on('data', splitLines)
        .on('line', processHeader);
}
It does the job, but the problem is that the binary encoding is deprecated and will probably disappear in the future, leaving me without that option. All other Buffer encodings will either mangle the data or fail to decode it altogether if (most likely, when) it does not match the encoding. Working with Uint8Array instead will mean slow and inconvenient Javascript loops over the data just to find a newline.
Any suggestions on how to split a stream into lines on the fly, while remaining encoding-agnostic without using binary encoding?
Disclaimer: I'm not a Javascript developer.
At some point, when I encounter an empty line, I will need to feed the rest of the stream somewhere else (without splitting it into lines, but still without any reencoding)
Right. In that case, it sounds like you really don't want to think about the data as text at all. Treat it as you would any binary data, and split it on byte 0x0A. (Note that if it came from Windows to start with, you might want to also remove any trailing 0x0D value.)
I know it's text really, but without any encoding information, it's dangerous to impose any sort of interpretation on the data.
So you should keep two pieces of state:
A list of byte arrays
The current buffer
When you receive data, you logically want to create a new array with the current buffer prepended to the new data. (For efficiency you may not want to actually create such an array, but I would do so to start with, until you've got it working.) Look for any 0x0A bytes, and split the array accordingly (create a new byte array as a "slice" of the existing array, and add the slice to the list). The new "current buffer" will be whatever data you have left after the final 0x0A.
If you see two 0x0A values in a row, then you'd go into your second mode of just copying the data.
This is all assuming that the Javascript / Node combination allows you to manipulate binary data as binary data, but I'd be shocked if it didn't. The important point is not to interpret it as text at any point.
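A minimal sketch of this approach in Node, assuming Buffer.indexOf and Buffer.alloc are available (Node 4.5+) so no per-byte JavaScript loop is needed; processHeader and processBody stand in for the OP's handlers:
function processStream(stream) {
    var buffer = Buffer.alloc(0);
    var inBody = false;
    stream.on('data', function (chunk) {
        if (inBody) { processBody(chunk); return; }
        buffer = Buffer.concat([buffer, chunk]);
        var lf;
        while (!inBody && (lf = buffer.indexOf(0x0a)) >= 0) {
            var line = buffer.slice(0, lf);   // raw bytes, no reencoding
            buffer = buffer.slice(lf + 1);
            if (line.length && line[line.length - 1] === 0x0d)
                line = line.slice(0, -1);     // strip a trailing '\r'
            if (line.length === 0) {          // empty line: headers are done
                inBody = true;
                if (buffer.length) processBody(buffer);
            } else {
                processHeader(line);          // still an undecoded Buffer
            }
        }
    });
}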

Lua: Read hex values from binary

I'm trying to read hex values from a binary file. I don't have a problem with extracting a string and converting letters to hex values, but how can I do that with control characters and other non-printable characters? Is there a way of reading the string directly in hex values without the need of converting it?
Have a look over here:
As a last example, the following program makes a dump of a binary file. Again, the first program argument is the input file name; the output goes to the standard output. The program reads the file in chunks of 10 bytes. For each chunk, it writes the hexadecimal representation of each byte, and then it writes the chunk as text, changing control characters to dots.
local f = assert(io.open(arg[1], "rb"))
local block = 10
while true do
    local bytes = f:read(block)
    if not bytes then break end
    for b in string.gmatch(bytes, ".") do   -- string.gfind in Lua 5.0
        io.write(string.format("%02X ", string.byte(b)))
    end
    io.write(string.rep("   ", block - string.len(bytes) + 1))
    io.write(string.gsub(bytes, "%c", "."), "\n")
end
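If you just want the whole file as one hex string, a shorter sketch (same arg[1] input file assumed):
local f = assert(io.open(arg[1], "rb"))
local data = f:read("*all")
f:close()
-- '.' matches any byte, including control characters and embedded zeros
local hex = data:gsub('.', function(c) return string.format('%02X ', c:byte()) end)
print(hex)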
From your question it's not clear exactly what you aim to do, so I'll give two approaches.
Either you have a file full of hex values, and you read it like this:
s = 'ABCDEF1234567890'
t = {}
for val in s:lower():gmatch('%x%x') do
    -- convert each pair of hex digits to the corresponding byte
    t[#t+1] = string.char(tonumber(val, 16))
end
Or you have a binary file, and you convert it to hex values:
s = 'kl978331asdfjhvkasdf'
t = { s:byte(1, -1) }   -- t now holds the numeric byte values
-- format them as hex strings if that's what you need:
for i, v in ipairs(t) do t[i] = string.format('%02X', v) end
