NodeJS Buffer: read little-endian buffer

I have a 256-bit value, but it is written as little-endian:
<Buffer 21 a2 bc 03 6d 18 2f 11 f5 5a bd 5c b4 32 a2 7b 22 79 7e 53 9b cb 44 5b 0e 00 00 00 00 00 00 00>
How can I correctly print it as a hexadecimal value?
buf.toString('hex')
buf.toString('hex').split("").reverse().join("") gives 0x00000000000000e0b544bcb935e79722b72a234bc5dba55f11f281d630cb2a12 instead of 0x000000000000000e5b44cb9b537e79227ba232b45cbd5af5112f186d03bca221

You can use match instead of split to get an array of two-character groups (whole bytes instead of single hex digits), then reverse the array and join it:
buf.toString('hex').match(/.{2}/g).reverse().join("")
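For instance, a minimal sketch that rebuilds the buffer from the question (the hex literal is just the bytes shown above) and reverses the byte order:
const buf = Buffer.from('21a2bc036d182f11f55abd5cb432a27b22797e539bcb445b0e00000000000000', 'hex');
// group the hex string into byte pairs, reverse the pairs, and rejoin
console.log('0x' + buf.toString('hex').match(/.{2}/g).reverse().join(''));
// 0x000000000000000e5b44cb9b537e79227ba232b45cbd5af5112f186d03bca221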

Actually, Buffer objects support the reverse() method (inherited from Uint8Array), and it may be simpler to reverse the bytes before converting to a hex string:
buf.reverse().toString('hex')
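Note that reverse() reverses the buffer in place. If the original byte order still matters, a small sketch that copies first:
// Buffer.from(buf) copies the data, so the original buffer is untouched
const hex = '0x' + Buffer.from(buf).reverse().toString('hex');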

Related

Update row with Buffer into bytea type column, using Postgres and NodeJS

I'm trying to store a Buffer in a bytea type column. I'm using a Postgres database and I have successfully connected to it with node-postgres. I am able to update any other field, but I just can't find out the syntax to properly store a Buffer.
At the moment, there are already images in that database that were written with a different system and language. I am not able to re-use that system to achieve what we need.
The output of those existing images is also a Buffer:
<Buffer 89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 00 00 04 38 00 00 04 38 08 06 00 00 00 ec 10 6c 8f 00 00 00 04 73 42 49 54 08 08 08 08 7c 08 64 88 00 ... 13315 more bytes>
And I have prepared an image that should overwrite this value:
<Buffer 75 ab 5a 8a 66 a0 7b fa 67 81 b6 ac 7b ae 22 54 13 91 c3 42 86 82 80 00 00 03 52 52 11 14 80 00 00 2a 00 00 00 2a 02 00 00 00 00 14 48 3e 9a 00 00 00 ... 3153 more bytes>.
All good, so far.
I now need to use the proper SQL UPDATE statement, but I have not been able to figure that out. I have found some answers suggesting converting it using .toString('hex') and prepending it with \x, but this does not result in the same value format.
My update statement now looks something like this (where imageData is the second Buffer example above):
await pool.query(
  `UPDATE image
   SET data = '${imageData}'::bytea
   WHERE id = '00413567-fdd7-4765-be30-7f80c2d8ce57'`
)
Some requirements:
I cannot use an external file
I cannot use a different value format
I cannot use a different tech stack
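A minimal sketch of one common approach, assuming node-postgres parameterized queries: the driver serializes a Buffer parameter to bytea directly, so no hex conversion or string interpolation is needed.
// Sketch: pass the Buffer as a query parameter; node-postgres handles the bytea encoding.
await pool.query(
  'UPDATE image SET data = $1 WHERE id = $2',
  [imageData, '00413567-fdd7-4765-be30-7f80c2d8ce57']
)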

What happens when a Transform stream is paused?

If you have a flowing readable stream piped into a Transform stream (for instance stream.PassThrough), what happens if you pause the duplex stream? Will it pause the original readable stream as well? If not, does the stream data coming from the readable stream "leak out"? Or does the data accumulate somewhere in program memory?
EDIT: Apparently, the readable stream does indeed pause. However, there seems to be a delay before the pause.
In the REPL:
Welcome to Node.js v14.5.0.
Type ".help" for more information.
> const fs = require('fs');
undefined
> const stream = require('stream');
undefined
> const transform = new stream.PassThrough();
undefined
> const readable = fs.createReadStream('yes.jpg');
undefined
> readable.readableFlowing;
null
> readable.on('data', console.log);
console.log(readable.readableFlowing);
console.log(readable.isPaused());
readable.pipe(transform);
console.log(readable.readableFlowing);
console.log(readable.isPaused());
true
false
true
false
undefined
> <Buffer ff d8 ff e1 00 22 45 78 69 66 00 00 4d 4d 00 2a 00 00 00 08 00 01 01 12 00 03 00 00 00 01 00 01 00 00 00 00 00 00 ff db 00 43 00 02 01 01 02 01 01 02 ... 65486 more bytes>
<Buffer f2 5b ab 7d ff 00 bd 5c 15 6a 46 99 b4 70 bc de fc 4f 9c 2e 63 b7 d6 13 6c 2a ab 54 74 7b 35 d2 f5 56 5b 85 5f f8 15 49 6d a7 c9 e1 f7 89 a6 68 b6 a3 ... 65486 more bytes>
> readable.isPaused();
true
> readable.readableEnded;
false
what happens if you pause the duplex stream? Will it pause the original readable stream as well?
Yes, it will pause any other streams in the chain. When streams are piped together or when one is a properly implemented transform, they implement flow control so nobody is forced to buffer more than they want. This is one of the nice features of using piped/transform streams (they automatically do flow control).
But all .pipe() does is register some event handlers. Those event handlers don't actually get any events until your block of code finishes executing and returns to the event loop. So your large block of code that starts with readable.on() runs in its entirety before any events fire from either the readable.on('data', ...) or the readable.pipe().
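As an illustration, a minimal sketch of that flow control, assuming a file large enough to exceed the PassThrough's highWaterMark (nothing consumes the transform, so it backpressures):
const fs = require('fs');
const { PassThrough } = require('stream');

const readable = fs.createReadStream('yes.jpg');
const transform = new PassThrough();

readable.pipe(transform); // nothing reads from `transform`

// pipe() calls readable.pause() once transform.write() returns false,
// i.e. once the transform's internal buffer reaches its highWaterMark.
readable.on('pause', () => {
  console.log('source paused:', readable.isPaused()); // true
});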

Show NUL character in Sublime Text 3

I'm attempting to copy/paste ASCII characters from a hex editor into a Sublime Text 3 plain-text document, but NUL characters are not displayed and the string is truncated:
Hexadecimal:
48 65 6C 6C 6F 2C 20 57 6F 72 6C 64 21 00 66 6F
6F 62 61 72 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00
ASCII:
Hello, World!�foobar�������������������������
Sublime Text: Truncates copied string and does not show NUL characters
TextMate: Shows NUL via "Show Invisibles"
I've tried the suggestion mentioned here of adding "draw_white_space": "all" to my preferences, but still no luck! Is this possible with Sublime Text 3?
You're not alone in having this problem; others have posted bug reports about this behaviour: https://github.com/SublimeTextIssues/Core/issues/393
However, it's not consistent: the behaviour seems to depend on the file and on where the NUL characters occur.
There is a similar issue with the console: https://github.com/SublimeTextIssues/Core/issues/1939

nodejs add null-terminated string to buffer

I am trying to replicate a packet.
This packet:
2C 00 65 00 03 00 00 00 00 00 00 00 42 4C 41 5A
45 00 00 00 00 00 00 00 00 42 4C 41 5A 45......
2c 00 is the size of the packet...
65 00 is the packet id 101...
03 00 is the number of elements in the array...
Now here comes my problem: 42 4C 41 5A 45 is a string (ASCII "BLAZE"). There are exactly 3 instances of that string in the packet if it is complete. But it is not just null-terminated; there are 00 00 00 00 gaps between those instances.
My code:
function channel_list(channels) {
    var packet = new SmartBuffer();
    packet.writeUInt16LE(101); // response packet for list of channels
    packet.writeUInt16LE(channels.length);
    channels.forEach(function (key) {
        console.log(key);
        packet.writeStringNT(key);
    });
    packet.writeUInt16LE(packet.length + 2, 0);
    console.log(packet.toBuffer());
}
But how do I add the padding?
I am using this package, https://github.com/JoshGlazebrook/smart-buffer/
Smart-Buffer keeps track of its position for you, so you do not need to specify an offset to know where to insert the data to pad your string. You could do something like this with your existing code:
channels.forEach(function (key) {
    console.log(key);
    packet.writeString(key); // this is the string, with no padding added
    packet.writeUInt32BE(0); // four 0x00 bytes are added after the string itself
});
I'm assuming you want: 42 4C 41 5A 45 00 00 00 00 42 4C 41 5A 45 00 00 00 00 etc.
Editing based on comments:
There is no built-in way to do what you want, but you could do something like this:
channels.forEach(function (key) {
    console.log(key);
    packet.writeString(key);
    for (var i = 0; i <= (9 - key.length); i++)
        packet.writeInt8(0);
});
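An alternative sketch, assuming the goal is to pad every entry to a fixed width: write each key into a zero-filled buffer of that width and append it (the width of 10 is an assumption based on the loop above):
channels.forEach(function (key) {
    var field = Buffer.alloc(10);  // 10 zero bytes; adjust to the real field width
    field.write(key, 0, 'ascii');  // string bytes at the front, zeros after
    packet.writeBuffer(field);     // smart-buffer appends the raw bytes
});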

When using hexdump to check /dev/mem, why are some addresses missing?

Here is the command I used:
sudo hexdump -C /dev/mem | less
And part of the result it dumped:
00000070 53 ff 00 f0 a4 f0 00 f0 c7 ef 00 f0 e0 ba 00 c0 |S...............|
00000080 ef 27 00 f0 ef 27 00 f0 ef 27 00 f0 ef 27 00 f0 |.'...'...'...'..|
*
00000100 99 1b 32 e7 01 e4 00 f0 65 f0 00 f0 e0 be 00 c0 |..2.....e.......|
00000110 ef 27 00 f0 ef 27 00 f0 ef 27 00 f0 ef 27 00 f0 |.'...'...'...'..|
*
00000180 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
The interesting thing is that the addresses in [0x00000120, 0x0000017f] are shown as "*" instead of the values I expected to see.
My best guess is that those parts are protected from being read, but why? Or am I missing something?
hexdump is suppressing duplicate lines to make the output easier to read.
From the hexdump man page:
-v Cause hexdump to display all input data. Without the -v option,
any number of groups of output lines, which would be identical to
the immediately preceding group of output lines (except for the
input offsets), are replaced with a line comprised of a single
asterisk.
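So to see every line, including the suppressed duplicates:
sudo hexdump -v -C /dev/mem | less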
