I want a function like blake2AsHex in Rust. This function exists in JavaScript, and I am looking for a corresponding function in Rust. So far, using this Substrate primitive:
pub fn blake2_256(data: &[u8]) -> [u8; 32]
// Do a Blake2 256-bit hash and return result.
I am getting a different value.
When I execute this in the console:
util_crypto.blake2AsHex("0x0000000000000000000000000000000000000000000000000000000000000001")
I get the desired value: 0x33e423980c9b37d048bd5fadbd4a2aeb95146922045405accc2f468d0ef96988. However, when I execute this Rust code:
let res = hex::encode(&blake2_256("0x0000000000000000000000000000000000000000000000000000000000000001".as_bytes()));
println!("File Hash encoding: {:?}", res);
I get a different value:
47016246ca22488cf19f5e2e274124494d272c69150c3db5f091c9306b6223fc
Hence, how can I implement blake2AsHex in Rust?
Again, you have an issue with data types here.
"0x0000000000000000000000000000000000000000000000000000000000000001".as_bytes()
is converting the string literal itself to UTF-8 bytes, not to the byte sequence that the hex string represents.
You need to build the byte array you actually mean to hash, and then it should work.
You are already using hex::encode to go from bytes to a hex string... you should be using hex::decode to go from a hex string to bytes:
https://docs.rs/hex/0.3.1/hex/fn.decode.html
Decodes a hex string into raw bytes.
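For example, a minimal sketch that should reproduce the JavaScript result, assuming blake2_256 is Substrate's sp_core::hashing::blake2_256 and the hex crate is available:

use sp_core::hashing::blake2_256;

fn main() {
    // Decode the hex string (without the "0x" prefix) into its 32 raw bytes.
    let input = "0000000000000000000000000000000000000000000000000000000000000001";
    let bytes = hex::decode(input).expect("valid hex");
    // Hash the raw bytes, not the UTF-8 bytes of the string literal.
    let hash = blake2_256(&bytes);
    // Prints 0x33e423980c9b37d048bd5fadbd4a2aeb95146922045405accc2f468d0ef96988
    println!("0x{}", hex::encode(hash));
}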
Really important to mention: I'm working in a no_std environment: elrond_wasm
https://docs.rs/elrond-wasm/0.17.1/elrond_wasm/api/trait.CryptoApi.html#tymethod.sha256
I'm trying to get a u32 => sha256 => String
let hash = self.crypto().sha256(&[1u8, 2u8, 3u8]);
if (String::from_utf8(hash.to_vec()).is_err()) {
uri.append_bytes("error".as_bytes());
}
Am I doing something wrong? It's always giving an error. When printed, I get some gibberish like: D�z�G��a�w9��M��y��;oȠc��!
&[1u8, 2u8, 3u8] is just an example; I tried a bunch of options:
let mut serialized_attributes = Vec::new();
"123".top_encode(&mut serialized_attributes).unwrap();
or 123u32.to_be_bytes()
or 123u32.to_string().as_bytes()
all with the same result.
You should not try to print the raw hash bytes directly (they are basically binary garbage); instead, convert them into a meaningful representation like hex.
You can try to use the hex crate for that:
https://docs.rs/hex/0.3.1/hex/fn.encode.html
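The hex crate can work in no_std (it has a no_std mode), but encoding by hand is only a few lines anyway. A minimal sketch, assuming a Vec type is in scope (elrond_wasm re-exports one) and that the hash wrapper exposes its bytes via as_ref():

const HEX_DIGITS: &[u8; 16] = b"0123456789abcdef";

fn to_hex(bytes: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(bytes.len() * 2);
    for &b in bytes {
        // Emit the high nibble, then the low nibble, as ASCII hex digits.
        out.push(HEX_DIGITS[(b >> 4) as usize]);
        out.push(HEX_DIGITS[(b & 0x0f) as usize]);
    }
    out
}

// Hypothetical usage inside the contract:
// let hash = self.crypto().sha256(&[1u8, 2u8, 3u8]);
// uri.append_bytes(&to_hex(hash.as_ref()));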
What's the most straightforward way to convert a hex string into a float? (without using 3rd party crates).
Does Rust provide an equivalent of Python's struct.unpack('!f', bytes.fromhex('41973333'))?
See this question for Python & Java, mentioned here for reference.
This is quite easy without external crates:
fn main() {
    // Parse the hex string into 4 bytes, i.e. a u32
    let bytes = u32::from_str_radix("41973333", 16).unwrap();
    // Reinterpret the 4 bytes as an f32:
    let float = unsafe { std::mem::transmute::<u32, f32>(bytes) };
    // Prints 18.9
    println!("{}", float);
}
There's f32::from_bits which performs the transmute in safe code. Note that transmuting is not the same as struct.unpack, since struct.unpack lets you specify endianness and has a well-defined IEEE representation.
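A safe version of the same program, plus the explicit big-endian route that matches struct.unpack('!f') (from_bits needs Rust 1.20+, from_be_bytes 1.32+):

fn main() {
    // Safe reinterpretation of the bits; no unsafe needed.
    let bits = u32::from_str_radix("41973333", 16).unwrap();
    println!("{}", f32::from_bits(bits)); // 18.9

    // Explicit byte order, closest to struct.unpack('!f'):
    let bytes = [0x41, 0x97, 0x33, 0x33];
    println!("{}", f32::from_be_bytes(bytes)); // 18.9
}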
I'm getting into Rust programming to build a small program, and I'm a little bit lost in string conversions.
In my program, I have a vector as follows:
let mut name: Vec<winnt::WCHAR> = Vec::new();
WCHAR is the same as a u16 on my Windows machine.
I hand the Vec<u16> over to a C function (as a pointer), which fills it with data. I then need to convert the string contained in the vector into a &str. However, no matter what I try, I cannot get this conversion working.
The only thing I managed to get working is to convert it to a WideString:
let widestr = unsafe { WideCString::from_ptr_str(name.as_ptr()) };
But this seems to be a step into the wrong direction.
What is the best way to convert the Vec<u16> to an &str, under the assumption that the vector holds a valid and null-terminated string?
I then need to convert the string contained in the vector into a &str. However, no matter what I try, I cannot get this conversion working.
There's no way of making this a "free" conversion.
A &str is a Unicode string encoded with UTF-8. This is a byte-oriented encoding. If you have UTF-16 (or the different but common UCS-2 encoding), there's no way to read one as the other. That's equivalent to trying to read a JPEG image as a PDF. Both chunks of data might be a string, but the encoding is important.
The first question is "do you really need to do that?". Many times, you can take data from one function and shovel it back into another function, never looking at it. If you can get away with that, that might be the best answer.
If you do need to transform it, then you have to deal with the errors that can occur. An arbitrary array of 16-bit integers may not be valid UTF-16 or UCS-2. These encodings have edge cases that can easily produce invalid strings. Null-termination is another aspect - Unicode actually allows for embedded NUL characters, so a null-terminated string can't hold all possible Unicode characters!
Once you've ensured that the encoding is valid 1 and figured out how many entries in the input vector comprise the string, then you have to decode the input format and re-encode to the output format. This is likely to require some kind of new allocation, so you are most likely to end up with a String, which can then be used almost anywhere a &str can be used.
There is a built-in method to convert UTF-16 data to a String: String::from_utf16. Note that it returns a Result to allow for these error cases. There's also String::from_utf16_lossy, which replaces invalid encoded parts with the Unicode replacement character.
let name = [0x68, 0x65, 0x6c, 0x6c, 0x6f]; // UTF-16 code units for "hello"
let a = String::from_utf16(&name); // Ok("hello")
let b = String::from_utf16_lossy(&name); // "hello"
println!("{:?}", a);
println!("{:?}", b);
If you are starting from a pointer to a u16 or WCHAR, you will need to convert to a slice first by using slice::from_raw_parts. If you have a null-terminated string, you need to find the NUL yourself and slice the input appropriately.
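Putting those pieces together, a sketch for the null-terminated pointer case (the helper name and its safety contract are mine, not part of the question):

use std::slice;

/// Safety: `ptr` must point to a valid, NUL-terminated UTF-16 buffer.
unsafe fn u16_ptr_to_string(ptr: *const u16) -> Option<String> {
    // Walk the buffer to find the NUL terminator.
    let mut len = 0;
    while *ptr.add(len) != 0 {
        len += 1;
    }
    // View the data as a slice (excluding the NUL) and decode it.
    let units = slice::from_raw_parts(ptr, len);
    String::from_utf16(units).ok()
}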
1: This is actually a great way of using types; a &str is guaranteed to be UTF-8 encoded, so no further check needs to be made. Similarly, the WideCString is likely to perform a check once upon construction and then can skip the check on later uses.
This is my simple hack for this case. It probably has bugs; adapt it for your own case:
let mut v = vec![0u16; MAX_PATH as usize];
// imaginary win32 function that fills the buffer
win32_function(v.as_mut_ptr());
let mut path = String::new();
for val in v.iter() {
    // Keep only the low byte; this is only correct for ASCII data.
    let c: u8 = (*val & 0xFF) as u8;
    if c == 0 {
        // Stop at the NUL terminator.
        break;
    } else {
        path.push(c as char);
    }
}
I'm writing a small client/server program for encrypted network communications and have the following struct to allow the endpoints to negotiate capabilities.
struct KeyExchangePacket {
    kexinit: u8,
    replay_cookie: [u8; 32],
    kex_algorithms: String,
    kgen_algorithms: String,
    encryption_algorithms: String,
    mac_algorithms: String,
    compression_algorithms: String,
    supported_languages: String,
}
I need to convert the fields into bytes in order to send them over a TcpStream, but I currently have to convert them one at a time.
send_buffer.extend_from_slice(kex_algorithms.as_bytes());
send_buffer.extend_from_slice(kgen_algorithms.as_bytes());
etc...
Is there a way to iterate over the fields and push their byte values into a buffer for sending?
Is there a way to iterate over the fields
No. You have to implement it yourself, or find a macro / compiler plugin that will do it for you.
See How to iterate or map over tuples? for a similar question.
Think about how iterators work. An iterator has to yield a single type for each iteration. What would that type be for your struct composed of at least 3 different types?
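For this particular struct, writing it out by hand is not much code. A sketch, where the length-prefix framing is my own assumption about the wire format rather than anything from the question:

impl KeyExchangePacket {
    fn to_bytes(&self) -> Vec<u8> {
        let mut buf = Vec::new();
        buf.push(self.kexinit);
        buf.extend_from_slice(&self.replay_cookie);
        for s in [
            &self.kex_algorithms,
            &self.kgen_algorithms,
            &self.encryption_algorithms,
            &self.mac_algorithms,
            &self.compression_algorithms,
            &self.supported_languages,
        ] {
            // Length-prefix each string so the receiver can split them apart.
            buf.extend_from_slice(&(s.len() as u32).to_be_bytes());
            buf.extend_from_slice(s.as_bytes());
        }
        buf
    }
}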
Bincode does this.
// KeyExchangePacket must implement serde's Serialize for this to compile.
let packet = KeyExchangePacket { /* ... */ };
let size_limit = bincode::SizeLimit::Infinite;
let encoded: Vec<u8> = bincode::serde::serialize(&packet, size_limit).unwrap();
From the readme:
The encoding (and thus decoding) proceeds unsurprisingly -- primitive types are encoded according to the underlying Writer, tuples and structs are encoded by encoding their fields one-by-one, and enums are encoded by first writing out the tag representing the variant and then the contents.
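Decoding is symmetric; a sketch against the same pre-1.0 bincode API as above (the struct also needs serde's Deserialize):

// Round-trip: bytes back into the struct.
let decoded: KeyExchangePacket = bincode::serde::deserialize(&encoded).unwrap();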
I want to perform a very simple task, but I cannot manage to stop the compiler from complaining.
fn transform(s: String) -> String {
    let bytes = s.as_bytes();
    format!("{}/{}", bytes[0..2], bytes[2..4])
}
[u8] does not have a constant size known at compile-time.
Any tips for making this operation work as intended?
Indeed, the size of a [u8] isn't known at compile time. The size of &[u8], however, is known at compile time, because it's just a pointer plus a usize representing the length of the sequence.
format!("{:?}/{:?}", &bytes[0..2], &bytes[2..4])
Rust strings are encoded in UTF-8, so working with strings in this way is generally a bad idea: a single Unicode character may consist of multiple bytes.
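To make that caveat concrete (the string here is just an illustrative example):

fn main() {
    let s = "héllo"; // 'é' is two bytes in UTF-8
    // Byte slicing happily splits the character in half:
    println!("{:?}", &s.as_bytes()[0..2]); // [104, 195]
    // Slicing the &str itself panics at a non-character boundary:
    // let _ = &s[0..2]; // panic: byte index 2 is not a char boundary
}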