How to convert a crypto::sha2::Sha256 hash into a &[u8] representation? - rust

I'm currently trying to generate an ED25519 keypair from a SHA256 hash (via rust-crypto crate):
extern crate crypto; // rust-crypto = "0.2.36"

use crypto::ed25519;
use crypto::sha2::Sha256;
use crypto::digest::Digest;

fn main() {
    let phrase = "purchase hobby popular celery evil fantasy someone party position gossip host gather";
    let mut seed = Sha256::new();
    seed.input_str(&phrase);
    let (_priv, _publ) = ed25519::keypair(&seed); // expects slice
}
However, I totally fail to understand how to correctly pass the SHA-256 hash to the ed25519::keypair() function. I traced it down and found that &seed.result_str() results in:
"fc37862cb425ca4368e8e368c54bb6ea0a1f305a225978564d1bdabdc7d99bdb"
This is the correct hash, while &seed.result_str().as_bytes() results in:
[102, 99, 51, 55, 56, 54, 50, 99, 98, 52, 50, 53, 99, 97, 52, 51, 54, 56, 101, 56, 101, 51, 54, 56, 99, 53, 52, 98, 98, 54, 101, 97, 48, 97, 49, 102, 51, 48, 53, 97, 50, 50, 53, 57, 55, 56, 53, 54, 52, 100, 49, 98, 100, 97, 98, 100, 99, 55, 100, 57, 57, 98, 100, 98]
This is something I do not want; it is entirely different data. Also, the code above does not compile:
   |
36 |     let (_priv, _publ) = ed25519::keypair(&seed);
   |                                           ^^^^^ expected slice, found struct `crypto::sha2::Sha256`
   |
   = note: expected type `&[u8]`
              found type `&crypto::sha2::Sha256`
The question now breaks down to: how do I correctly convert the crypto::sha2::Sha256 hash into a &[u8] representation?

The Sha256 API may be a little confusing at first because it is designed not to allocate any new memory for the output. That is to avoid a wasted allocation in case you want to allocate the buffer yourself. Instead, you give it a buffer to write to:
// Create a buffer in which to write the bytes, making sure it's
// big enough for the size of the hash
let mut bytes = vec![0; seed.output_bytes()];
// Write the raw bytes from the hash into the buffer
seed.result(&mut bytes);
// A reference to a Vec can be coerced to a slice
let (_priv, _publ) = ed25519::keypair(&bytes);
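Putting the pieces together, a complete version of the program from the question with that fix applied could look like this (an untested sketch, still using rust-crypto 0.2.36 as above):
extern crate crypto; // rust-crypto = "0.2.36"

use crypto::ed25519;
use crypto::sha2::Sha256;
use crypto::digest::Digest;

fn main() {
    let phrase = "purchase hobby popular celery evil fantasy someone party position gossip host gather";

    // Hash the phrase with SHA-256
    let mut seed = Sha256::new();
    seed.input_str(phrase);

    // Copy the raw 32-byte digest into a buffer we own
    let mut bytes = vec![0; seed.output_bytes()];
    seed.result(&mut bytes);

    // &Vec<u8> coerces to &[u8], which is what keypair() expects
    let (_private, _public) = ed25519::keypair(&bytes);
}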

Related

What data format is this alongside ascii and decimal?

Consider the following:
use sha2::{Sha256, Digest};

fn main() {
    let mut hasher = Sha256::new();
    hasher.update(b"hello world");
    let result = hasher.finalize();
    let str_result = format!("{:x}", result);
    println!("A string is: {:x}", result);
    println!("ASCII decimal maps: {:?}", str_result.bytes());
    println!("What data coding is this?: {:?}", result);
}
The SHA256 hash as a string is: b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9
ASCII decimal maps: Bytes(Copied { it: Iter([98, 57, 52, 100, 50, 55, 98, 57, 57, 51, 52, 100, 51, 101, 48, 56, 97, 53, 50, 101, 53, 50, 100, 55, 100, 97, 55, 100, 97, 98, 102, 97, 99, 52, 56, 52, 101, 102, 101, 51, 55, 97, 53, 51, 56, 48, 101, 101, 57, 48, 56, 56, 102, 55, 97, 99, 101, 50, 101, 102, 99, 100, 101, 57]) })
What data coding is this?: [185, 77, 39, 185, 147, 77, 62, 8, 165, 46, 82, 215, 218, 125, 171, 250, 196, 132, 239, 227, 122, 83, 128, 238, 144, 136, 247, 172, 226, 239, 205, 233]
The first two make sense: we have the hex string representation, followed by the ASCII-to-decimal map of that string's characters. What is the third format: [185, 77, 39, 185, 147, 77, 62, 8, 165, 46, 82, 215, 218, 125, 171, 250, 196, 132, 239, 227, 122, 83, 128, 238, 144, 136, 247, 172, 226, 239, 205, 233]?
It's the bytes of the hash represented as an array of decimals instead of as a hexadecimal string.
b94d27... -> [185, 77, 39 ...]
0xb9 -> 185
0x4d -> 77
0x27 -> 39
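To see that these really are the same data, here is a small sketch (plain Rust, no external crates; the byte values are copied from the output above) that hex-encodes the first four bytes of that array:
fn main() {
    // First four bytes of the digest, as printed by {:?}
    let bytes: [u8; 4] = [185, 77, 39, 185];

    // Format each byte as two lowercase hex digits
    let hex: String = bytes.iter().map(|b| format!("{:02x}", b)).collect();

    assert_eq!(hex, "b94d27b9"); // matches the start of the hex string
    println!("{}", hex);
}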

How to print an object's element?

I have this small program in Rust, not in a Cargo project.
use std::process::Command;

fn main() {
    let result = Command::new("git").arg("status").output().expect("Ok");
    println!("{:?}", result);
}
but after building and running it I get
Output { status: ExitStatus(unix_wait_status(0)), stdout: "On branch main\nYour branch is up to date with 'origin/main'.\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n\tbasics/guessing_game/\n\tcmdtest\n\tcmdtest.rs\n\nnothing added to commit but untracked files present (use \"git add\" to track)\n", stderr: "" }
If I try:
use std::process::Command;

fn main() {
    let result = Command::new("git").arg("status").output().expect("Ok");
    println!("{:?}", result.stdout);
}
I get
[79, 110, 32, 98, 114, 97, 110, 99, 104, 32, 109, 97, 105, 110, 10, 89, 111, 117, 114, 32, 98, 114, 97, 110, 99, 104, 32, 105, 115, 32, 117, 112, 32, 116, 111, 32, 100, 97, 116, 101, 32, 119, 105, 116, 104, 32, 39, 111, 114, 105, 103, 105, 110, 47, 109, 97, 105, 110, 39, 46, 10, 10, 85, 110, 116, 114, 97, 99, 107, 101, 100, 32, 102, 105, 108, 101, 115, 58, 10, 32, 32, 40, 117, 115, 101, 32, 34, 103, 105, 116, 32, 97, 100, 100, 32, 60, 102, 105, 108, 101, 62, 46, 46, 46, 34, 32, 116, 111, 32, 105, 110, 99, 108, 117, 100, 101, 32, 105, 110, 32, 119...
How can I print the string inside stdout and not numbers, in a user friendly format?
stdout may not be valid UTF-8, in which case you cannot print it as a string. If you're sure it will be (which is probably the case with git), you can use String::from_utf8().unwrap():
use std::process::Command;

fn main() {
    let result = Command::new("git").arg("status").output().expect("Ok");
    println!("{}", String::from_utf8(result.stdout).unwrap());
}
If you don't know, you can either check the Result from_utf8() returns, or use String::from_utf8_lossy() to turn invalid characters into U+FFFD REPLACEMENT CHARACTER �:
use std::process::Command;

fn main() {
    let result = Command::new("git").arg("status").output().expect("Ok");
    println!("{}", String::from_utf8_lossy(&result.stdout));
}

Can't access JSON data, even JSON.parse is not working

I have parsed a file using lambda-multipart-parser and got results like the following. My code is:
let result = await parser.parse(event);
let a = result.files[0].content;
and the output is:
{
  "type": "Buffer",
  "data": [65, 99, 99, 111, 117, 110, 116, 110, 117, 109, 98, 101, 114, 44, 85, 115, 101, 114, 110, 97, 109, 101, 44, 80, 97, 115, 115, 119, 111, 114, 100, 44, 76, 99, 111, 110, 97, 109, 101, 44, 83, 116, 97, 116, 117, 115]
}
So, to get the data, if I use
let a = result.files[0].content.data;
I get blank output (i.e. 1 in Postman).
Even though result.files[0].content is written out as a plain object, it is actually a Buffer object. So you'll have to use one of the methods on that object to retrieve the information, e.g., result.files[0].content.toString('utf-8') to get the UTF-8 string represented by the buffer.

Unexpected result when calling toString on a buffer in Node

I'm in a situation where I need to revert data back to a buffer that has had toString called on it. For example:
const buffer // I need this, or equivalent
const bufferString = buffer.toString() // This is all I have
The Node documentation implies that .toString() defaults to 'utf8' encoding and that I can revert this with Buffer.from(bufferString, 'utf8'), but this doesn't work and I get different data (maybe there is some data loss when it is converted to a string, although the documentation doesn't seem to mention this).
Does anyone know why this is happening or how to fix it?
Here is the data I have to reproduce this:
const intArr = [31, 139, 8, 0, 0, 0, 0, 0, 0, 0, 170, 86, 42, 201, 207, 78, 205, 83, 178, 82, 178, 76, 78, 53, 179, 72, 74, 51, 215, 53, 54, 51, 51, 211, 53, 49, 78, 50, 210, 77, 74, 49, 182, 208, 53, 52, 178, 180, 72, 75, 76, 52, 75, 180, 76, 50, 81, 170, 5, 0, 0, 0, 255, 255, 3, 0, 29, 73, 93, 151, 48, 0, 0, 0]
const buffer = Buffer.from(intArr) // The buffer I want!
const bufferString = buffer.toString() // The string I have!, note .toString() and .toString('utf8') are equivalent
const differentBuffer = Buffer.from(bufferString, 'utf8')
You can get the initial intArr from a buffer by doing this:
JSON.parse(JSON.stringify(Buffer.from(buffer)))['data']
Edit: interestingly calling .toString() on differentBuffer gives the same initial string.
I think the important part of the documentation you linked is: "When decoding a Buffer into a string that does not exclusively contain valid UTF-8 data, the Unicode replacement character U+FFFD � will be used to represent those errors." When you convert your buffer into a UTF-8 string, not all of its bytes form valid UTF-8, as you can see by doing console.log(bufferString); almost all of it comes out as gibberish. Therefore you irretrievably lose data when converting the buffer into a UTF-8 string, and you can't get that lost data back when converting back into a buffer.
In your example, if you use UTF-16 instead of UTF-8 you don't lose information, and thus your buffer is the same after converting back, i.e.:
const intArr = [31, 139, 8, 0, 0, 0, 0, 0, 0, 0, 170, 86, 42, 201, 207, 78, 205, 83, 178, 82, 178, 76, 78, 53, 179, 72, 74, 51, 215, 53, 54, 51, 51, 211, 53, 49, 78, 50, 210, 77, 74, 49, 182, 208, 53, 52, 178, 180, 72, 75, 76, 52, 75, 180, 76, 50, 81, 170, 5, 0, 0, 0, 255, 255, 3, 0, 29, 73, 93, 151, 48, 0, 0, 0]
const buffer = Buffer.from(intArr);
const bufferString = buffer.toString('utf16le');
const differentBuffer = Buffer.from(bufferString, 'utf16le');
console.log(buffer); // same as the below log
console.log(differentBuffer); // same as the above log
Use the 'latin1' or 'binary' encoding with Buffer.toString() and Buffer.from(). Those two encodings are the same and map bytes to the Unicode characters U+0000 to U+00FF.

Ruby equivalent of Node .toString('ascii')

I am struggling with converting a Node application to Ruby. I have a Buffer of integers that I need to encode as an ASCII string.
In Node this is done like this:
const a = Buffer([53, 127, 241, 120, 57, 136, 112, 210, 162, 200, 111, 132, 46, 146, 210, 62, 133, 88, 80, 97, 58, 139, 234, 252, 246, 19, 191, 84, 30, 126, 248, 76])
const b = a.toString('hex')
// b = "357ff178398870d2a2c86f842e92d23e855850613a8beafcf613bf541e7ef84c"
const c = a.toString('ascii')
// c = '5qx9\bpR"Ho\u0004.\u0012R>\u0005XPa:\u000bj|v\u0013?T\u001e~xL'
I want to get the same output in Ruby but I don't know how to convert a to c. I used b to validate that a is parsed the same in Ruby and Node and it looks like it's working.
a = [53, 127, 241, 120, 57, 136, 112, 210, 162, 200, 111, 132, 46, 146, 210, 62, 133, 88, 80, 97, 58, 139, 234, 252, 246, 19, 191, 84, 30, 126, 248, 76].pack('C*')
b = a.unpack('H*')
# ["357ff178398870d2a2c86f842e92d23e855850613a8beafcf613bf541e7ef84c"]
# c = ???
I have tried several things, virtually all of the unpack options, and I also tried using the encode function, but I lack the understanding of what the problem is here.
Okay, well, I am not that familiar with Node.js, but you can get fairly close with some basic understanding:
Node states:
'ascii' - For 7-bit ASCII data only. This encoding is fast and will strip the high bit if set.
Update: After rereading the Node.js description, I think it just means it will drop 127 and only focus on the first 7 bits, so this can be simplified to:
def node_js_ascii(bytes)
  bytes.map { |b| b % 128 }
       .reject(&127.method(:==))
       .pack('C*')
       .encode(Encoding::UTF_8)
end

node_js_ascii(a.bytes) # `a` is the packed string from the question
#=> "5qx9\bpR\"Ho\u0004.\u0012R>\u0005XPa:\vj|v\u0013?T\u001E~xL"
Now the only differences are that Node.js uses "\u000b" to represent a vertical tab while Ruby uses "\v", and that Ruby uses uppercase hex digits in Unicode escapes rather than lowercase ("\u001E" vs "\u001e"); you could handle this if you so chose.
Please note: this form of encoding is not reversible, because your byte array contains values that do not fit into 7 bits.
TL;DR (previous explanation and solution only works up to 8 bits)
Okay, so we know the maximum supported decimal value is 127 ("1111111".to_i(2)) and that Node will strip the high bit if it is set, meaning (I am assuming) that an 8-bit value like 241 will become 113 once we strip the high bit.
With that understanding we can use:
a = [53, 127, 241, 120, 57, 136, 112, 210, 162, 200, 111, 132, 46, 146, 210, 62, 133, 88, 80, 97, 58, 139, 234, 252, 246, 19, 191, 84, 30, 126, 248, 76].map do |b|
b < 128 ? b : b - 128
end.pack('C*')
#=> "5\x7Fqx9\bpR\"Ho\x04.\x12R>\x05XPa:\vj|v\x13?T\x1E~xL"
Then we can encode that as UTF-8 like so:
a.encode(Encoding::UTF_8)
#=> "5\u007Fqx9\bpR\"Ho\u0004.\u0012R>\u0005XPa:\vj|v\u0013?T\u001E~xL"
but there is still an issue here.
It seems Node.js also ignores the Delete (127) when it converts to 'ascii' (I mean the high bit is set but if we strip it then it is 63 ("?") which doesn't match the output) so we can fix that too
a = [53, 127, 241, 120, 57, 136, 112, 210, 162, 200, 111, 132, 46, 146, 210, 62, 133, 88, 80, 97, 58, 139, 234, 252, 246, 19, 191, 84, 30, 126, 248, 76].map do |b|
b < 127 ? b : b - 128
end.pack('C*')
#=> "5\xFFqx9\bpR\"Ho\x04.\x12R>\x05XPa:\vj|v\x13?T\x1E~xL"
a.encode(Encoding::UTF_8, undef: :replace, replace: '')
#=> "5qx9\bpR\"Ho\u0004.\u0012R>\u0005XPa:\vj|v\u0013?T\u001E~xL"
Now, since 127 - 128 = -1, which packs to "\xFF", a byte that has no mapping in UTF-8, we add undef: :replace (when a character is undefined, replace it) and replace: '' (replace it with nothing).
