I have a PGP-encrypted file called file.gpg, which is binary rather than ASCII-armored. It looks like this:
�P��3E��Q� �i`p���
����&�9
�ֻ�<P�+�[����R0��$���q����VJ��hu���bE"2��M1r��j�K�v�#6�3E�Ҳ�A�W{Z
��FEԭ�YV��6g�V���e�,I�Zpw�r��8׆
�mc��h��n���k�p�>JH\�G�6��M1|>�G�fl�J���6��
ج��
�_��y8�..{���_⮵���F���~�vt
�8AB;z����m^��Xp���VӅCzD�ճn
����{+d�3�"��N�1p�
When I use the GNU base64 encoder, I get this string:
$ cat file.gpg | base64
hQEMA1DujfGcM0WiAQgAvcIMUfydsSDmaWBwnoWACrsapePpJpU5Co68276SK2XVBqY2YyNUgzAF
oawkpMjfcQS+7+nJVkrb7Gh1h4L9YkUiMo+dTTFyzs5qskuECNZ25UA2rzNF+NKyq0HZV3sXWg3P
AwZNZbNJIAc4xWlBNfsNoda7zhk8UJArj1sAiKPw5VIKjahGRdSt2FlWurs2Z5EXVriLG0aHZbAs
SeCjWnB3Aalyoo8414aGbWOr5WjU7rpugBLw52uAcJgcPkpIXMJjCEf4gTbc1k0xfD4YjUejZmyH
H0rYAAHw3DbjyQrYrLmHC9Vfm655HBU40xceLi5/e4n2Dxge+F/irrW9o9JGAfCf5OZ+gXZ0Ggv9
t620m704QUI7eryy0ddtXoGsWHCxu4gaVtOFQ3pEp9WzZghuC5j1/c57K2T4lzP+IvEfo07fMRFw
tg==
With the GNU base64 tool, I can convert this string back to the original .gpg file and decrypt it.
I want to implement a similar tool in Node.js. I can successfully convert ASCII text, but not binary content. My provisional code looks like this:
var stdin = process.openStdin();
var data = "";

stdin.on('data', function(chunk) {
  data += chunk;
});

stdin.on('end', function() {
  console.log(new Buffer(data, 'binary').toString('base64'));
});
Usage: $ cat file.gpg | node base64.js
The output looks different from what GNU base64 produces. Also, I can't convert it back to the original file.gpg - GnuPG can't find anything to decrypt.
This happens because you pass a string and not a Buffer, as theGleep pointed out in their comment.
You can do it like this:
let stdin = process.openStdin();
let data = [];

stdin.on('data', chunk => {
  data.push(chunk);
});

stdin.on('end', () => {
  console.log(Buffer.concat(data).toString('base64'));
});
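The reason your version corrupts the data is that data += chunk implicitly decodes each Buffer as UTF-8; any byte sequence that isn't valid UTF-8 is replaced with U+FFFD, so the binary content is mangled before it ever reaches the base64 step. A small sketch with arbitrary bytes (not your actual file) shows the difference:

var original = Buffer.from([0x85, 0x01, 0x0c, 0x03]); // not valid UTF-8
var viaString = Buffer.from('' + original, 'binary');  // what the string-concat approach effectively produces
console.log(original.toString('base64'));  // hQEMAw==
console.log(viaString.toString('base64')); // /QEMAw== - the 0x85 byte was lost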
Related
I'm reading a binary file from my file system using Node.
When I compare the streamed data to a hexdump that I've done, I get different results.
My node code looks like this:
this.readStream = fs.createReadStream(file, {
  encoding: 'binary',
});

this.readStream.on('data', (data: string | Buffer) => {
  if (!(data instanceof Buffer)) {
    data = Buffer.from(data);
  }
  let dataString = "";
  data.forEach(v => {
    dataString += v.toString(2).padStart(8, '0');
  });
  console.log(dataString);
});
The output from above is:
000000000000000000000000000000000000000000000010001000100010001000100010001000100010001000100010001000100010001000100010001000110001100111000010101000100010001100100010001000100010001100100010001000100010001100100010001000100010001000110010001000110010001000100010001000110010001000100010001000110010001000100010001000110000011100100010001000110010001000100010001000110010001000100010001000110010001000100010001000100010001000100011001000100010001000100011001000100010001000100011001000100010001000100011000001011100001010100010001000110010001000100010001000110010001000100010001000110010001000100010001000011100001010110110001000011100001010100010001100100010000111000010101000100011001000100011001000100010001000100011000001110010001000100011001000100010001000100000011000100010001000100011001000100010001000100011110000101010001000100011001000100010001000100011001000100010001000100011001000100010001000100000010001000110001000100011001000100010001000100011001000100010001000100011001000100010001000100011001000100010001100100010001000100010000001100010001000100010001100100010001000100010001100000110001000100010001100100010001000100010001100100010001000100010001100100010001000100010001100100010001000110010001000100010001000110010001000100010001000110010001000100010001000000100010111000010101000100010001100100010001000100010001100100010001000100010001100100010001000100010001100000110001000110010001000100010001000000110001000100010001000110010001000100010001000110000011000100010001000110010001000100010001000110010001000100010001000110010001000100010001000000110001000100011001000100010001000100011001000100010001000100011001000100010001000100011000111101100001010100010001000110010001000100010001000000110001000100010001000100010001000011010001000011100001110100010001000110010001000100010001000110010001000100010001000110010001000100010001000000100011000100010001000110010001000100010001000110010001000100010001000110010001000100010001000110001101000100011001000100010001000100000011000100010001000100011001000100010001000100011000001101100001010100010001000110010001000100010001000110010001000100010001000110010001000100010001000000100010100100011001000100001001000100011001000100001001000100011001000100001001000100011000001011100001010010010001000110010001000010010001000110010001000010010001000110010001000010010001000100010000100100011001000100001001000100000110000111010001000010010001000110010001000010010001000110000010001010010001000110010001000010010001000110010001000010010001000110010001000010010001000000110000100100011001000100001001000100011001000100001001000100011001000100001001000100011000001100001001000100011001000100001001000100000110000111010001000010010001000110010001000010010001000110011000100100011001000100001001000100011001000100001001000100011001000100001001000100000110000111001110111000010100100100010000100100010001010100010001011000010101000100011001000100011001000100001001000100011110000111011111111000011101111100000000000000000110000101000000100000001110000101000001000000010110000101000001100000011110000101000010000000100110000101000010100000101110000101000011000000110110000101000011100000111110000101000100000001000110000101000100100001001110000101000101000001010110000101000101100001100000011010000111000001111000100000001000100010010000100110111111111000011101111111100001110111111110000111011111111000011101111111100001110111111110000111011111111000011101111111100001110111111110000111011111111000011101111111100001110111111110000111011111111000011101111111100001110111100
The Hexdump that I used is: xxd -b -g 0 ${file}
And its output is:
0000000000000000000000000000000000000000000000100010001000100010001000100010001000100010001000100010001000100010001000100010001100011001101000100010001100100010001000100010001100100010001000100010001100100010001000100010001000110010001000110010001000100010001000110010001000100010001000110010001000100010001000110000011100100010001000110010001000100010001000110010001000100010001000110010001000100010001000100010001000100011001000100010001000100011001000100010001000100011001000100010001000100011000001011010001000100011001000100010001000100011001000100010001000100011001000100010001000100001101101100010000110100010001100100010000110100010001100100010001100100010001000100010001100000111001000100010001100100010001000100010000001100010001000100010001100100010001000100010001110100010001000110010001000100010001000110010001000100010001000110010001000100010001000000100010001100010001000110010001000100010001000110010001000100010001000110010001000100010001000110010001000100011001000100010001000100000011000100010001000100011001000100010001000100011000001100010001000100011001000100010001000100011001000100010001000100011001000100010001000100011001000100010001100100010001000100010001100100010001000100010001100100010001000100010000001000101101000100010001100100010001000100010001100100010001000100010001100100010001000100010001100000110001000110010001000100010001000000110001000100010001000110010001000100010001000110000011000100010001000110010001000100010001000110010001000100010001000110010001000100010001000000110001000100011001000100010001000100011001000100010001000100011001000100010001000100011000111101010001000100011001000100010001000100000011000100010001000100010001000100001101000100001111000100010001100100010001000100010001100100010001000100010001100100010001000100010000001000110001000100010001100100010001000100010001100100010001000100010001100100010001000100010001100011010001000110010001000100010001000000110001000100010001000110010001000100010001000110000011010100010001000110010001000100010001000110010001000100010001000110010001000100010001000000100010100100011001000100001001000100011001000100001001000100011001000100001001000100011000001011001001000100011001000100001001000100011001000100001001000100011001000100001001000100010001000010010001100100010000100100010000011100010000100100010001100100010000100100010001100000100010100100010001100100010000100100010001100100010000100100010001100100010000100100010000001100001001000110010001000010010001000110010001000010010001000110010001000010010001000110000011000010010001000110010001000010010001000001110001000010010001000110010001000010010001000110011000100100011001000100001001000100011001000100001001000100011001000100001001000100000110111011001001000100001001000100010101000100010101000100011001000100011001000100001001000100011111111111111111000000000000000001000000100000001100000100000001010000011000000111000010000000100100001010000010110000110000001101000011100000111100010000000100010001001000010011000101000001010100010110000110000001101000011100000111100010000000100010001001000010011011111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111100
I don't believe the difference is due to big- vs little-endianness.
I don't think the issue is my use of padStart, since each element of the buffer should be 8 bits (1 byte).
The file I'm reading is only 398 bytes.
So the question is: how do I get Node to read a binary file from disk and produce a binary string representation identical to what I would get from a binary dump of the same file?
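A minimal sketch of one way to get a bit string that matches xxd -b: skip the encoding option entirely so Node hands back a Buffer, and never let the data pass through a string (the file path here is hypothetical):

const fs = require('fs');

// With no encoding option, readFileSync returns a Buffer,
// so the bytes are never run through a text codec.
const bytes = fs.readFileSync('some.bin'); // hypothetical path

let bits = '';
for (const b of bytes) {
  bits += b.toString(2).padStart(8, '0');
}
console.log(bits);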
I'm trying to compress an image with pngquant. Here is the code:
const cp = require('child_process');
const fs = require('fs');

let output = '';
const quant = cp.spawn('pngquant', ['256', '--speed', '10'], {
  stdio: [null, null, 'ignore'],
});

quant.stdout.on('data', data => output += data);

quant.on('close', () => {
  fs.writeFileSync('image.png', output);
  fs.writeFileSync('image_original.png', image);
  process.exit(0);
});

quant.stdin.write(image);
image is a Buffer with pure PNG data.
The code works; however, for some reason it generates an incorrect PNG. Not only that, but its size is larger than the original's.
When I execute this from the terminal, I get excellent output file:
pngquant 256 --speed 10 < image_original.png > image.png
I have no idea what's going on; the data in the output file looks pretty PNG-ish.
EDIT: I have managed to make it work:
let output = [];

quant.stdout.on('data', data => output.push(data));
quant.stdin.write(image);

quant.on('close', () => {
  const image = Buffer.concat(output);
  fs.writeFileSync('image.png', image);
});
I assume this is related to how strings are represented in Node.js. I'd be happy to get an explanation.
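What happens in the first version is that the 'data' events deliver Buffers, and output += data converts every chunk to a UTF-8 string; bytes that aren't valid UTF-8 (and compressed PNG data is full of them) get replaced, so the written file is both corrupted and bigger than the original. Collecting the chunks and joining them with Buffer.concat, as in the edit, never converts to a string. Another option is to pipe the streams directly, roughly mirroring the shell command; a sketch (untested against pngquant's exact stdin/stdout behaviour):

const cp = require('child_process');
const fs = require('fs');

// Equivalent in spirit to: pngquant 256 --speed 10 < image_original.png > image.png
const quant = cp.spawn('pngquant', ['256', '--speed', '10']);
fs.createReadStream('image_original.png').pipe(quant.stdin);
quant.stdout.pipe(fs.createWriteStream('image.png'));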
I am creating an application with Node.js and I am trying to read a file called "datalog.txt." I use the "append" function to write to the file:
const fs = require('fs');

//Appends buffer data to a given file
function append(filename, buffer) {
  let fd = fs.openSync(filename, 'a+');
  fs.writeSync(fd, str2ab(buffer));
  fs.closeSync(fd);
}

//Converts a string to an ArrayBuffer of UTF-16 code units
function str2ab(str) {
  var buf = new ArrayBuffer(str.length * 2); // 2 bytes for each char
  var bufView = new Uint16Array(buf);
  for (var i = 0, strLen = str.length; i < strLen; i++) {
    bufView[i] = str.charCodeAt(i);
  }
  return buf;
}

append("datalog.txt", "12345");
This seems to work great. However, now I want to use fs.readFileSync to read from the file. I tried using this:
const data = fs.readFileSync('datalog.txt', 'utf16le');
I changed the encoding parameter to all of the encoding types listed in the Node documentation, but all of them resulted in this error:
TypeError: Argument at index 2 is invalid: Invalid encoding
All I want to be able to do is be able to read the data from "datalog.txt." Any help would be greatly appreciated!
NOTE: Once I can read the data of the file, I want to be able to get a list of all the lines of the file.
Pass the encoding inside an options object:
const data = fs.readFileSync('datalog.txt', {encoding:'utf16le'});
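Since the append function in the question writes two bytes (one UTF-16 code unit) per character, reading the file back with that encoding returns the text as a string, and splitting it covers the note about getting the individual lines; a small sketch:

const fs = require('fs');

// readFileSync with an encoding option returns a string instead of a Buffer
const data = fs.readFileSync('datalog.txt', { encoding: 'utf16le' });
const lines = data.split(/\r?\n/);
lines.forEach(line => console.log(line));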
Okay, after a few hours of troubleshooting and looking at the docs, I figured out a way to do this.
try {
  // get metadata on the file (we need the file size)
  let fileData = fs.statSync("datalog.txt");
  // create an ArrayBuffer to hold the file contents
  let dataBuffer = new ArrayBuffer(fileData.size);
  // read the contents of the file into the ArrayBuffer
  // (readSync expects a view such as a Uint8Array, not the raw ArrayBuffer)
  fs.readSync(fs.openSync("datalog.txt", 'r'), new Uint8Array(dataBuffer), 0, fileData.size, 0);
  // interpret the bytes as UTF-16 code units and convert them into a string
  let data = String.fromCharCode.apply(null, new Uint16Array(dataBuffer));
  // split the contents into lines
  let dataLines = data.split(/\r?\n/);
  // print out each line
  dataLines.forEach((line) => {
    console.log(line);
  });
} catch (err) {
  console.error(err);
}
Hope it helps someone else with the same problem!
This works for me:
index.js
const fs = require('fs');
// Write
fs.writeFileSync('./customfile.txt', 'Content_For_Writing');
// Read
const file_content = fs.readFileSync('./customfile.txt', {encoding:'utf8'}).toString();
console.log(file_content);
node index.js
Output:
Content_For_Writing
Process finished with exit code 0
We have a Ruby instance that sends a message to a Node instance via RabbitMQ (bunny and amqplib), like below:
{ :type => data, :data => msg }.to_bson.to_s
This seems to be going pretty well, but the messages are sometimes long and we are sending them across data centers, so zlib compression would help a lot.
Doing something like this in the Ruby sender:
encoded_data = Zlib::Deflate.deflate(msg).force_encoding(msg.encoding)
and then reading it inside Node:
data = zlib.inflateSync(encoded_data)
returns
"\x9C" from ASCII-8BIT to UTF-8
Is what I'm trying to do possible?
I am not a Ruby dev, so I will write the Ruby part in more or less pseudo code.
Ruby code (run online at https://repl.it/BoRD/0)
require 'json'
require 'zlib'
car = {:make => "bmw", :year => "2003"}
car_str = car.to_json
puts "car_str", car_str
car_byte = Zlib::Deflate.deflate(car_str)
# If you try to `puts car_byte`, it will crash with the following error:
# "\x9C" from ASCII-8BIT to UTF-8
#(repl):14:in `puts'
#(repl):14:in `puts'
#(repl):14:in `initialize'
car_str_dec = Zlib::Inflate.inflate(car_byte)
puts "car_str_dec", car_str_dec
# You can check that the decoded message is the same as the source.
# somehow send `car_byte`, the encoded bytes to RabbitMQ.
Node code
var zlib = require('zlib');

// somehow get the message from RabbitMQ.
var data = '...';

zlib.inflate(data, function (err, buffer) {
  if (err) {
    // Handle the error.
  } else {
    // If source didn't have any encoding,
    // no need to specify the encoding.
    console.log(buffer.toString());
  }
});
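For the "somehow get the message from RabbitMQ" placeholder, a hedged sketch using amqplib's promise API (the connection URL and queue name are made up); the message's content property is already a Buffer, which is exactly what zlib.inflate expects:

var amqp = require('amqplib');
var zlib = require('zlib');

amqp.connect('amqp://localhost')
  .then(conn => conn.createChannel())
  .then(ch => ch.consume('my_queue', msg => {
    // msg.content is a Buffer containing the deflated payload
    zlib.inflate(msg.content, function (err, buffer) {
      if (err) {
        console.error(err);
      } else {
        console.log(buffer.toString());
      }
    });
    ch.ack(msg);
  }));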
I also suggest you stick with the async functions in Node instead of their sync alternatives.
I'm using a Latin1-encoded DB and can't change it to UTF-8, which means I run into issues with certain application data. I'm using Tesseract to OCR a document (Tesseract encodes in UTF-8) and tried to use iconv-lite; however, it creates a buffer, and I need to convert that buffer into a string. But again, buffer-to-string conversion does not allow "latin1" encoding.
I've read a bunch of questions/answers; however, all I get is setting client encoding and stuff like that.
Any ideas?
Since Node.js v7.1.0, you can use the transcode function from the buffer module:
https://nodejs.org/api/buffer.html#buffer_buffer_transcode_source_fromenc_toenc
For example:
const buffer = require('buffer');
const latin1Buffer = buffer.transcode(Buffer.from(utf8String), "utf8", "latin1");
const latin1String = latin1Buffer.toString("latin1");
You can create a buffer from the UTF-8 string you have, and then decode that buffer to Latin1 using iconv-lite, like this:
var iconv = require('iconv-lite');

var buff = new Buffer(tesseract_string, 'utf8');
var DB_str = iconv.decode(buff, 'ISO-8859-1');
I've found a way to convert any encoded text file to UTF-8:
var fs = require('fs'),
    charsetDetector = require('node-icu-charset-detector'),
    iconvlite = require('iconv-lite');

/* Having different encodings
 * on text files in a git repo
 * but need to serve always on
 * standard 'utf-8'
 */
function getFileContentsInUTF8(file_path) {
  var content = fs.readFileSync(file_path);
  var original_charset = charsetDetector.detectCharset(content);
  var jsString = iconvlite.decode(content, original_charset.toString());
  return jsString;
}
It's also in a gist here: https://gist.github.com/jacargentina/be454c13fa19003cf9f48175e82304d5
Maybe you can try this approach, where content should be your database buffer data (in Latin1 encoding).
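For example, a minimal sketch along those lines (the bytes below are just a stand-in for whatever Buffer you actually read from the database):

var iconvlite = require('iconv-lite');

var content = Buffer.from([0x63, 0x61, 0x66, 0xe9]); // "café" as Latin1 bytes - stand-in for the DB data
var jsString = iconvlite.decode(content, 'latin1');
console.log(jsString); // café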