Trying to figure out why MulAssign is considered not implemented - rust

I'm quite new to Rust and, to familiarize myself with it, I'm trying to implement my own matrix structs.
Another user made me realize that I may not have given enough information, so here is a minimal example that reproduces the error: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=0dcd5a8c2ebdecb80867addfe7d38ccc
MatData is a trait implemented by f32 and f64 to limit the generic parameter to only these two types.
pub trait MatData<T> {}
impl MatData<f64> for f64 {}
impl MatData<f32> for f32 {}
The matrix Struct and Implementation:
pub struct Mat4<T: MatData<T>> {
    data: [T; 16],
}
impl<T: MatData<T>> Index<usize> for Mat4<T> {
    type Output = T;
    fn index(&self, i: usize) -> &T {
        &self.data[i]
    }
}
impl<T: MatData<T>> IndexMut<usize> for Mat4<T> {
    fn index_mut(&mut self, i: usize) -> &mut T {
        &mut self.data[i]
    }
}
impl<T> MulAssign for Mat4<T>
where
T: MatData<T> + Mul + Add + Mul<Output = T> + Add<Output = T> + Copy,
{
fn mul_assign(&mut self, rhs: Self) {
self[0] = self[0] * rhs[0] + self[1] * rhs[4] + self[2] * rhs[8] + self[3] * rhs[12];
self[1] = self[0] * rhs[1] + self[1] * rhs[5] + self[2] * rhs[9] + self[3] * rhs[13];
self[2] = self[0] * rhs[2] + self[1] * rhs[6] + self[2] * rhs[10] + self[3] * rhs[14];
self[3] = self[0] * rhs[3] + self[1] * rhs[7] + self[2] * rhs[11] + self[3] * rhs[15];
self[4] = self[4] * rhs[0] + self[5] * rhs[4] + self[6] * rhs[8] + self[7] * rhs[12];
self[5] = self[4] * rhs[1] + self[5] * rhs[5] + self[6] * rhs[9] + self[7] * rhs[13];
self[6] = self[4] * rhs[2] + self[5] * rhs[6] + self[6] * rhs[10] + self[7] * rhs[14];
self[7] = self[4] * rhs[3] + self[5] * rhs[7] + self[6] * rhs[11] + self[7] * rhs[15];
self[8] = self[8] * rhs[0] + self[9] * rhs[4] + self[10] * rhs[8] + self[11] * rhs[12];
self[9] = self[8] * rhs[1] + self[9] * rhs[5] + self[10] * rhs[9] + self[11] * rhs[13];
self[10] = self[8] * rhs[2] + self[9] * rhs[6] + self[10] * rhs[10] + self[11] * rhs[14];
self[11] = self[8] * rhs[3] + self[9] * rhs[7] + self[10] * rhs[11] + self[11] * rhs[15];
self[12] = self[12] * rhs[0] + self[13] * rhs[4] + self[14] * rhs[8] + self[15] * rhs[12];
self[13] = self[12] * rhs[1] + self[13] * rhs[5] + self[14] * rhs[9] + self[15] * rhs[13];
self[14] = self[12] * rhs[2] + self[13] * rhs[6] + self[14] * rhs[10] + self[15] * rhs[14];
self[15] = self[12] * rhs[3] + self[13] * rhs[7] + self[14] * rhs[11] + self[15] * rhs[15];
}
}
When using it like so:
mat_a *= mat_b;
I get this from the compiler:
binary assignment operation *= cannot be applied to type matrix::Mat4<T>
the trait std::ops::MulAssign is not implemented for matrix::Mat4<T>

This happens because your MulAssign implementation:
impl<T> MulAssign for Mat4<T>
where
T: MatData<T> + Mul + Add + Mul<Output = T> + Add<Output = T> + Copy,
is not as general as your Transform implementation, which is:
impl<T> Transform<T>
where
T: MatData<T> + Add + Mul + Copy,
The difference is that the MulAssign implementation explicitly constrains the associated Output types of Mul and Add to T. This makes the MulAssign impl less general than the Transform impl, and as a result you cannot use the MulAssign behaviour inside Transform.
To use it, you either need to make MulAssign more general (which is not possible here, since the multiplications and additions in mul_assign require Output = T) or make Transform less general, like:
impl<T> Transform<T>
where
    T: MatData<T> + Mul<Output = T> + Add<Output = T> + Copy,
{
    // ...
}
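For context, here is a minimal sketch of how those bounds get used; the Transform struct below is hypothetical (the real one only appears in the linked playground), but the where clause is the one from the snippet above, and it is what lets *= resolve inside the impl:
use std::ops::{Add, Mul};

// Hypothetical wrapper type, only to illustrate where the bounds matter.
// Assumes MatData and Mat4 from above are in scope.
pub struct Transform<T: MatData<T>> {
    mat: Mat4<T>,
}

impl<T> Transform<T>
where
    // Same bounds as the MulAssign impl, so the compiler can prove Mat4<T>: MulAssign.
    T: MatData<T> + Mul<Output = T> + Add<Output = T> + Copy,
{
    pub fn apply(&mut self, other: Mat4<T>) {
        self.mat *= other; // resolves to the MulAssign impl above
    }
}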


Ray Tracing In One Weekend - Refraction issues

I'm currently working through Ray Tracing in One Weekend to get familiar with Rust. Adding the dielectric material (glass) is causing me some headaches: my refraction isn't flipping upside down!
Here's the code I'm using for my Vec3 struct:
impl Vec3 {
pub fn new(x: f64, y: f64, z: f64) -> Vec3 { Vec3 {x, y, z} }
pub fn x(&self) -> f64 { self.x }
pub fn y(&self) -> f64 { self.y }
pub fn z(&self) -> f64 { self.z }
pub fn length_squared(&self) -> f64 {
self.x * self.x + self.y * self.y + self.z * self.z
}
pub fn length(&self) -> f64 { self.distance(&Vec3::default()) }
pub fn unit_vector(&self) -> Vec3 {
let length = self.length();
Vec3::new(self.x / length, self.y / length, self.z / length)
}
pub fn dot(&self, v:&Vec3) -> f64 {
self.x * v.x + self.y * v.y + self.z * v.z
}
pub fn cross(&self, v:&Vec3) -> Vec3 {
Vec3::new(
self.y * v.z - self.z * v.y,
self.z * v.x - self.x * v.z,
self.x * v.y - self.y * v.x
)
}
pub fn distance(&self, other: &Vec3) -> f64 {
let dx = self.x - other.x();
let dy = self.y - other.y();
let dz = self.z - other.z();
(dx * dx + dy * dy + dz * dz).sqrt()
}
pub fn random(min: f64, max:f64) -> Self {
let between = Uniform::from(min..max);
let mut rng = rand::thread_rng();
Vec3::new(
between.sample(&mut rng),
between.sample(&mut rng),
between.sample(&mut rng))
}
pub fn random_in_unit_sphere() -> Self {
loop {
let v = Vec3::random(-1.0, 1.0);
if v.length_squared() < 1.0 {
return v;
}
}
}
pub fn random_in_hemisphere(normal: &Vec3) -> Self {
let vec = Vec3::random_in_unit_sphere();
if vec.dot(normal) > 0.0 {
vec
} else {
-vec
}
}
pub fn random_unit_vector() -> Self { Vec3::random_in_unit_sphere().unit_vector() }
pub fn near_zero(&self) -> bool {
const MAXIMUM_DISTANCE_FROM_ZERO:f64 = 1e-8;
self.x.abs() < MAXIMUM_DISTANCE_FROM_ZERO &&
self.y.abs() < MAXIMUM_DISTANCE_FROM_ZERO &&
self.z.abs() < MAXIMUM_DISTANCE_FROM_ZERO
}
pub fn reflected(&self, normal: &Vec3) -> Vec3 {
let dp = self.dot(normal);
let dp = dp * 2.0 * (*normal);
*self - dp
}
pub fn refract(&self, normal: &Vec3, etai_over_etat: f64) -> Vec3 {
let dot = (-(*self)).dot(normal);
let cos_theta = dot.min(1.0);
let out_perp = etai_over_etat * ((*self) + cos_theta * (*normal));
let inner = 1.0 - out_perp.length_squared();
let abs = inner.abs();
let r = -(abs.sqrt());
let out_parallel = r * (*normal);
out_perp + out_parallel
}
}
And this is my scatter function for the material:
fn scatter(&self, ray: &Ray, hit_record: &HitRecord) -> Option<(Option<Ray>, Color)> {
let refraction_ratio = if hit_record.front_face {
1.0/self.index_of_refraction
} else {
self.index_of_refraction
};
let unit_direction = ray.direction().unit_vector();
let cos_theta = ((-unit_direction).dot(&hit_record.normal)).min(1.0);
let sin_theta = (1.0 - cos_theta*cos_theta).sqrt();
let cannot_refract = refraction_ratio * sin_theta > 1.0;
let reflectance = Dielectric::reflectance(cos_theta, refraction_ratio);
let mut rng = rand::thread_rng();
let color = Color::new(1.0, 1.0, 1.0);
if cannot_refract || reflectance > rng.gen::<f64>() {
let reflected = unit_direction.reflected(&hit_record.normal);
let scattered = Ray::new(hit_record.point, reflected);
Some((Some(scattered), color))
} else {
let direction = unit_direction.refract(&hit_record.normal, refraction_ratio);
let scattered = Ray::new(hit_record.point, direction);
Some((Some(scattered), color))
}
}
It sort of works if I negate x and y of the refract result, but it still looks obviously wrong. Additionally, if I go a few steps back in the book and implement the 100% refracting glass, my spheres are solid black, and I have to negate the z axis to see anything at all. So something is amiss with my refraction code, but I can't figure out what.
Full code at: https://phlaym.net/git/phlaym/rustracer/src/commit/89a2333644a82f2645e4ad554eadf7d4f142f2c0/src
In the method in src/hittable.rs that checks whether a sphere is hit, the original C++ code from the book looks like this:
// Find the nearest root that lies in the acceptable range.
auto root = (-half_b - sqrtd) / a;
if (root < t_min || t_max < root) {
root = (-half_b + sqrtd) / a;
if (root < t_min || t_max < root)
return false;
}
You ported it to Rust with the following listing:
let root = (-half_b - sqrtd) / a;
if root < t_min || t_max < root {
    let root = (-half_b + sqrtd) / a;
    if root < t_min || t_max < root {
        return None;
    }
}
The problem here is the second let root. It creates a new variable scoped to the inner block instead of changing the variable defined before it. To update the outer variable, make it mutable and assign to it:
let mut root = (-half_b - sqrtd) / a;
if root < t_min || t_max < root {
    root = (-half_b + sqrtd) / a;
    if root < t_min || t_max < root {
        return None;
    }
}
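A minimal standalone illustration of the difference between shadowing and assignment (the values are arbitrary):
fn main() {
    let root = 1.0;
    {
        let root = 2.0; // a new, shadowing variable, visible only in this block
        assert_eq!(root, 2.0);
    }
    assert_eq!(root, 1.0); // the outer `root` still holds its original value

    let mut adjusted = 1.0;
    if adjusted < 1.5 {
        adjusted = 2.0; // plain assignment updates the existing binding
    }
    assert_eq!(adjusted, 2.0);
}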
Additionally I changed the following in src/ray.rs
return match scattered {
    Some((scattered_ray, albedo)) => {
        match scattered_ray {
            Some(sr) => {
                albedo * sr.pixel_color(world, depth-1)
            },
            None => albedo
        }
    },
    None => { return Color::default() }
};
to match the corresponding C++ code. Be aware of the unwrap that is used.
let scattered = rect.material.scatter(self, &rect);
if let Some((scattered_ray, albedo)) = scattered {
    return albedo * scattered_ray.unwrap().pixel_color(world, depth - 1)
}
return Color::default()
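If you would rather keep the original behaviour, where a scatter without a ray yields the plain albedo, and avoid the unwrap, the nested Options can be matched in one place; a sketch using the same names as above:
return match rect.material.scatter(self, &rect) {
    // A scatter that produced a ray recurses, one without a ray yields the albedo,
    // and no scatter at all falls back to the default colour.
    Some((Some(scattered_ray), albedo)) => albedo * scattered_ray.pixel_color(world, depth - 1),
    Some((None, albedo)) => albedo,
    None => Color::default(),
};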
And remove your attempts to correct the reflections:
let reflected = Vec3::new(-reflected.x(), reflected.y(), -reflected.z());

Use map object as function input

I have a function that calculates the variance of an iterator of floats. I would like to be able to call this function on an iterator after using the map method to transform it.
use num::Float;
fn compute_var_iter<'a, I, T>(vals: I) -> T
where
I: Iterator<Item = &'a T>,
T: 'a + Float + std::ops::AddAssign,
{
// online variance function
// Var = E[X^2] - E[X]^2
// corrects for 1/n -> 1/(n-1)
let mut x = T::zero();
let mut xsquare = T::zero();
let mut len = T::zero();
for &val in vals {
x += val;
xsquare += val * val;
len += T::one();
}
((xsquare / len) - (x / len) * (x / len)) / (len - T::one()) * len
}
fn main() {
let a: Vec<f64> = (1..100001).map(|i| i as f64).collect();
let b: Vec<f64> = (0..100000).map(|i| i as f64).collect();
dbg!(compute_var_iter(&mut a.iter())); // this works
dbg!(compute_var_iter(a.iter().zip(b).map(|(x, y)| x * y))); // this does not work
}
Is there a performant way to get the map output back to an iterator or to make the function take the map object as an input so that we can avoid having to .collect() and keep the execution lazy?
You can use the iterator objects directly without collect:
use num::Float;
fn compute_var_iter<I, T>(vals: I) -> T
where
I: Iterator<Item = T>,
T: Float + std::ops::AddAssign,
{
// online variance function
// Var = E[X^2] - E[X]^2
// corrects for 1/n -> 1/(n-1)
let mut x = T::zero();
let mut xsquare = T::zero();
let mut len = T::zero();
for val in vals {
x += val;
xsquare += val * val;
len += T::one();
}
((xsquare / len) - (x / len) * (x / len)) / (len - T::one()) * len
}
fn main() {
let a = (1..100001).map(|i| i as f64);
let b = (0..100000).map(|i| i as f64);
let c: Vec<f64> = (0..10000).map(|i| i as f64).collect();
dbg!(compute_var_iter(a.clone())); // this works
dbg!(compute_var_iter(c.iter().map(|i| *i))); // this works
dbg!(compute_var_iter(a.zip(b).map(|(x, y)| x * y)));
}
Playground
Notice that you would need to clone the iterator if you intend to use it several times. Also, you do not really need to use references, since numbers are usually Copy and copying them costs no more than creating the references themselves.
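A small follow-up sketch (not part of the original answer): widening the bound to IntoIterator keeps the same body but also accepts ranges, adapters, and borrowed data via .copied(), still without any collect.
use num::Float;

fn compute_var_iter<I, T>(vals: I) -> T
where
    I: IntoIterator<Item = T>,
    T: Float + std::ops::AddAssign,
{
    // Same online variance computation as above.
    let mut x = T::zero();
    let mut xsquare = T::zero();
    let mut len = T::zero();
    for val in vals {
        x += val;
        xsquare += val * val;
        len += T::one();
    }
    ((xsquare / len) - (x / len) * (x / len)) / (len - T::one()) * len
}

fn main() {
    let c: Vec<f64> = (0..10000).map(|i| i as f64).collect();
    dbg!(compute_var_iter(c.iter().copied())); // .copied() instead of .map(|i| *i)
    dbg!(compute_var_iter((1..100001).map(|i| i as f64)));
}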

My test fails at "attempt to subtract with overflow"

use itertools::Itertools;
#[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct Runner {
sec: u16,
}
impl Runner {
fn from(v: (u8, u8, u8)) -> Runner {
Runner {
sec: v.0 as u16 * 3600 + v.1 as u16 * 60 + v.2 as u16
}
}
}
fn parse_runner(strg: &str) -> Vec<Runner> {
strg.split(", ")
.flat_map(|personal_result| personal_result.split('|'))
.map(|x| x.parse::<u8>().unwrap())
.tuples::<(_, _, _)>()
.map(|x| Runner::from(x))
.sorted()
.collect::<Vec<Runner>>()
}
fn parse_to_format(x: u16) -> String {
let h = x / 3600;
let m = (x - 3600)/60;
let s = x % 60;
format!("{:02}|{:02}|{:02}", h, m, s)
}
fn return_stats(runners: &[Runner]) -> String {
let range: u16 = runners.last().unwrap().sec - runners.first().unwrap().sec;
let average: u16 = runners.iter().map(|&r| r.sec).sum::<u16>()/(runners.len() as u16);
let median: u16 = if runners.len()%2 != 0 {
runners.get(runners.len()/2).unwrap().sec
} else {
runners.get(runners.len()/2).unwrap().sec/2 + runners.get((runners.len()/2) + 1).unwrap().sec/2
};
format!("Range: {} Average: {} Median: {}", parse_to_format(range), parse_to_format(average), parse_to_format(median))
}
fn stati(strg: &str) -> String {
let run_vec = parse_runner(strg);
return_stats(&run_vec)
}
I can't find the subtraction mistake that keeps my code from passing the test. Basically, I'm trying to start with a &str like "01|15|59, 1|47|6, 01|17|20, 1|32|34, 2|3|17" and end up with another one like "Range: 00|47|18 Average: 01|35|15 Median: 01|32|34".
Sorry in advance if my mistake is really stupid; I've been trying to fix it for quite a while.
https://www.codewars.com/kata/55b3425df71c1201a800009c/train/rust
let m = (x - 3600) / 60;
As Peter mentioned, that will indeed overflow if x is less than 3600, since a u16 cannot be negative (and even when x is at least 3600, subtracting it once is not the right way to extract the minutes).
Using integer arithmetic, here's another way of formatting seconds as hh|mm|ss that does not overflow:
fn seconds_to_hhmmss(mut s: u64) -> String {
    let h = s / 3600;
    s -= h * 3600;
    let m = s / 60;
    s -= m * 60;
    format!("{:02}|{:02}|{:02}", h, m, s)
}
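For comparison, a sketch of the same formatting written with only division and remainder, so no subtraction can underflow:
fn seconds_to_hhmmss(s: u64) -> String {
    // No subtraction, so nothing can overflow or underflow.
    format!("{:02}|{:02}|{:02}", s / 3600, (s % 3600) / 60, s % 60)
}

fn main() {
    assert_eq!(seconds_to_hhmmss(2838), "00|47|18");
}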

How to initialize immutable globals with non-const initializer in Rust?

I am trying to get a variable that is only initialized once, at runtime.
In C/C++ static would be the keyword that I would be looking for, yet in Rust, it must be initialized by a constant.
static mut is unsafe, and I can see why, but it doesn't conceptually capture what I want: I want an immutable variable.
Take this trivial example of a tribonacci function:
static sqrt_33_mul_3: f64 = 3.0 * 33.0f64.powf(0.5);
static tribonacci_constant: f64 = 1.0
+ (19.0 - sqrt_33_mul_3).powf(1.0 / 3.0)
+ (19.0 + sqrt_33_mul_3).powf(1.0 / 3.0);
fn tribonacci(n: f64) -> f64 {
return (
(tribonacci_constant / 3.0).powf(n)
/ (
(4.0 / 3.0)
* tribonacci_constant
- (1.0 / 9.0)
* tribonacci_constant.powf(2.0) - 1.0
)
).round();
}
I want the two static variables outside of the function to be initialized only once, and powf not to be called on every run of the function.
I am incredibly new to Rust and do not know what may be common knowledge to the average, experienced user.
Is this possible? If so, how can it be done?
If f64::powf were a const fn, the compiler could reduce expressions like 3.0 * 33.0f64.powf(0.5) to a single fixed value at compile time; currently it is not const.
While lazy_static can be used to solve this problem, there is a runtime cost to it, because lazy_static is designed to support more than just simple floating-point constants.
You can see this cost by benchmarking the two implementations using Criterion:
pub mod ls {
use lazy_static::lazy_static; // 1.4.0
lazy_static! {
//TODO: Should this be a pow(1.0/3.0)?
pub static ref cbrt_33_mul_3: f64 = 3.0 * 33.0f64.powf(0.5);
pub static ref tribonacci_constant: f64 = 1.0
+ (19.0 - *cbrt_33_mul_3).powf(1.0 / 3.0)
+ (19.0 + *cbrt_33_mul_3).powf(1.0 / 3.0);
}
pub fn tribonacci(n: f64) -> f64 {
return (
(*tribonacci_constant / 3.0).powf(n)
/ (
(4.0 / 3.0)
* *tribonacci_constant
- (1.0 / 9.0)
* tribonacci_constant.powf(2.0) - 1.0
)
).round();
}
}
pub mod hc {
pub fn tribonacci(n: f64) -> f64 {
let p = 1.839286755214161;
let s = 0.3362281169949411;
return (s * p.powf(n)).round();
}
}
fn criterion_benchmark(c: &mut Criterion) {
c.bench_function("trib 5.1 ls", |b| b.iter(|| ls::tribonacci(black_box(5.1))));
c.bench_function("trib 5.1 hc", |b| b.iter(|| hc::tribonacci(black_box(5.1))));
}
criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
The cost is small, but may be significant if this is in your core loops.
On my machine, I get (after removing unrelated lines)
trib 5.1 ls time: [47.946 ns 48.832 ns 49.796 ns]
trib 5.1 hc time: [38.828 ns 39.898 ns 41.266 ns]
This is about a 20% difference.
If you don't like having hardcoded constants in your code, you can actually generate these at build time using a build.rs script.
My complete example for benchmarking looks like this:
build.rs
use std::env;
use std::fs;
use std::path::Path;
fn main() {
let out_dir = env::var_os("OUT_DIR").unwrap();
let dest_path = Path::new(&out_dir).join("constants.rs");
//TODO: Should this be a pow(1.0/3.0)?
let cbrt_33_mul_3: f64 = 3.0 * 33.0f64.powf(0.5);
let tribonacci_constant: f64 = 1.0
+ (19.0 - cbrt_33_mul_3).powf(1.0 / 3.0)
+ (19.0 + cbrt_33_mul_3).powf(1.0 / 3.0);
let p = tribonacci_constant / 3.0;
let s = 1.0 / (
(4.0 / 3.0)
* tribonacci_constant
- (1.0 / 9.0)
* tribonacci_constant.powf(2.0) - 1.0
);
fs::write(
&dest_path,
format!("\
pub mod tribonacci {{\n\
pub const P: f64 = {:.32};\n\
pub const S: f64 = {:.32};\n\
}}\n", p, s)
).unwrap();
println!("cargo:rerun-if-changed=build.rs");
}
src/lib.rs
pub mod constants {
include!(concat!(env!("OUT_DIR"), "/constants.rs"));
}
pub mod ls {
use lazy_static::lazy_static; // 1.4.0
lazy_static! {
//TODO: Should this be a pow(1.0/3.0)?
pub static ref cbrt_33_mul_3: f64 = 3.0 * 33.0f64.powf(0.5);
pub static ref tribonacci_constant: f64 = 1.0
+ (19.0 - *cbrt_33_mul_3).powf(1.0 / 3.0)
+ (19.0 + *cbrt_33_mul_3).powf(1.0 / 3.0);
}
pub fn tribonacci(n: f64) -> f64 {
return (
(*tribonacci_constant / 3.0).powf(n)
/ (
(4.0 / 3.0)
* *tribonacci_constant
- (1.0 / 9.0)
* tribonacci_constant.powf(2.0) - 1.0
)
).round();
}
}
pub mod hc {
pub fn tribonacci(n: f64) -> f64 {
let p = super::constants::tribonacci::P;
let s = super::constants::tribonacci::S;
return (s * p.powf(n)).round();
}
}
benches/my_benchmark.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use rust_gen_const_vs_lazy_static::ls;
use rust_gen_const_vs_lazy_static::hc;
fn criterion_benchmark(c: &mut Criterion) {
c.bench_function("trib 5.1 ls", |b| b.iter(|| ls::tribonacci(black_box(5.1))));
c.bench_function("trib 5.1 hc", |b| b.iter(|| hc::tribonacci(black_box(5.1))));
}
criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
Cargo.toml
[package]
name = "rust_gen_const_vs_lazy_static"
version = "0.1.0"
edition = "2018"
[dependencies]
"lazy_static" = "1.4.0"
[dev-dependencies]
criterion = "0.3"
[[bench]]
name = "my_benchmark"
harness = false
$OUTDIR/constants.rs (generated)
pub mod tribonacci {
pub const P: f64 = 1.83928675521416096216853475198150;
pub const S: f64 = 0.33622811699494109527464047459944;
}
As suggested by Dilshod Tadjibaev, it is possible to achieve a similar result using proc macros, though it requires a little more work in this case. This gives exactly the same speed as build-time generation.
To set this up, I created a new crate for the macros, trib_macros, as proc macros need to be in their own crate. This new crate contains just two files, Cargo.toml and src/lib.rs:
Cargo.toml
[package]
name = "trib_macros"
version = "0.1.0"
edition = "2018"
[lib]
proc-macro = true
src/lib.rs
extern crate proc_macro;
use proc_macro::TokenStream;
#[proc_macro]
pub fn tp(_item: TokenStream) -> TokenStream {
let cbrt_33_mul_3: f64 = 3.0 * 33.0f64.powf(0.5);
let tribonacci_constant: f64 = 1.0
+ (19.0 - cbrt_33_mul_3).powf(1.0 / 3.0)
+ (19.0 + cbrt_33_mul_3).powf(1.0 / 3.0);
let p = tribonacci_constant / 3.0;
format!("{}f64",p).parse().unwrap()
}
#[proc_macro]
pub fn ts(_item: TokenStream) -> TokenStream {
let cbrt_33_mul_3: f64 = 3.0 * 33.0f64.powf(0.5);
let tribonacci_constant: f64 = 1.0
+ (19.0 - cbrt_33_mul_3).powf(1.0 / 3.0)
+ (19.0 + cbrt_33_mul_3).powf(1.0 / 3.0);
let s = 1.0 / (
(4.0 / 3.0)
* tribonacci_constant
- (1.0 / 9.0)
* tribonacci_constant.powf(2.0) - 1.0
);
format!("{}f64",s).parse().unwrap()
}
Then we need to adjust the Cargo.toml of the original crate to pull this in.
[dependencies]
...
trib_macros = { path = "path/to/trib_macros" }
And finally using it is relatively clean:
pub mod mc {
use trib_macros::{ts,tp};
pub fn tribonacci(n: f64) -> f64 {
return (ts!() * tp!().powf(n)).round();
}
}
There's definitely a neater way to output the float literal tokens, but I couldn't find it.
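One possibly neater route (an untested sketch, not taken from the linked repository) is to emit the literal token directly with proc_macro::Literal instead of formatting and re-parsing a string:
use proc_macro::{Literal, TokenStream, TokenTree};

#[proc_macro]
pub fn tp(_item: TokenStream) -> TokenStream {
    let cbrt_33_mul_3: f64 = 3.0 * 33.0f64.powf(0.5);
    let tribonacci_constant: f64 = 1.0
        + (19.0 - cbrt_33_mul_3).powf(1.0 / 3.0)
        + (19.0 + cbrt_33_mul_3).powf(1.0 / 3.0);
    // Build a single suffixed f64 literal token from the computed value.
    TokenTree::Literal(Literal::f64_suffixed(tribonacci_constant / 3.0)).into()
}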
You can find a complete repository for these tests at https://github.com/mikeando/rust_code_gen_example
You can use lazy_static:
use lazy_static::lazy_static; // 1.4.0
lazy_static! {
static ref cbrt_33_mul_3: f64 = 3.0 * 33.0f64.powf(0.5);
static ref tribonacci_constant: f64 = 1.0
+ (19.0 - *cbrt_33_mul_3).powf(1.0 / 3.0)
+ (19.0 + *cbrt_33_mul_3).powf(1.0 / 3.0);
}
fn tribonacci(n: f64) -> f64 {
return (
(*tribonacci_constant / 3.0).powf(n)
/ (
(4.0 / 3.0)
* *tribonacci_constant
- (1.0 / 9.0)
* tribonacci_constant.powf(2.0) - 1.0
)
).round();
}
fn main() {
println!("Hello, world!");
}
Note the use of static ref in the macro and the dereference (*) needed when reading the lazy statics.
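On a recent toolchain, std::sync::LazyLock (stabilized in Rust 1.80) gives the same lazy, once-only initialization without an external crate; a minimal sketch reusing the names from the question:
use std::sync::LazyLock;

static SQRT_33_MUL_3: LazyLock<f64> = LazyLock::new(|| 3.0 * 33.0f64.powf(0.5));
static TRIBONACCI_CONSTANT: LazyLock<f64> = LazyLock::new(|| {
    1.0 + (19.0 - *SQRT_33_MUL_3).powf(1.0 / 3.0) + (19.0 + *SQRT_33_MUL_3).powf(1.0 / 3.0)
});

fn tribonacci(n: f64) -> f64 {
    // Same formula as above; the statics are computed on first access only.
    ((*TRIBONACCI_CONSTANT / 3.0).powf(n)
        / ((4.0 / 3.0) * *TRIBONACCI_CONSTANT
            - (1.0 / 9.0) * TRIBONACCI_CONSTANT.powf(2.0)
            - 1.0))
        .round()
}

fn main() {
    println!("{}", tribonacci(5.1));
}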

Problem when writing a set of general math tool functions for f64/f32/i32/i64 arrays

I want to write a set of general math tool functions for f64/f32/i32/i64 arrays.
Some functions are simple to write, such as addition functions:
pub fn add_val<T: std::ops::Add<Output=T> + Copy>(val1: &[T; 2], val2: &[T; 2]) -> [T; 2]{
[val1[0] + val2[0], val1[1] + val2[1]]
}
But I have encountered some problems when implementing some functions. For example, there is a normalize function:
pub fn normalize(val: &[f64; 2], len: &f64) -> [f64; 2] {
    let real_length = (val[0] * val[0] + val[1] * val[1]).sqrt();
    if real_length == 0.0 { // fix: NaN when 0
        return [val[0], val[1]]
    } else {
        let scale = len / real_length;
        [val[0] * scale, val[1] * scale]
    }
}
It is easy to understand. Now I want it to support more data types.
Here is one of my attempts:
pub fn normalize<T>(val: &[T; 2], len: &T) -> [T; 2]
where
    T: std::ops::Add<Output = T> + std::ops::Mul<Output = T> + std::ops::Div<Output = T>
        + std::convert::From<f32> + std::convert::Into<f32> + Copy + std::cmp::PartialEq,
{
    let pow_num: f32 = (val[0] * val[0] + val[1] * val[1]).into();
    let real_length: f32 = (pow_num).sqrt();
    if real_length == 0.0 { // fix: without special-casing 0 this may return NaN
        return [val[0], val[1]]
    } else {
        let scale = *len / T::from(real_length);
        [val[0] * scale, val[1] * scale]
    }
}
This is the error:
58 | let c = normalize(&a, &3.0);
| ^^^^^^^^^ the trait `std::convert::From<f64>` is not implemented for `f32`
|
= help: the following implementations were found:
<f32 as std::convert::From<i16>>
<f32 as std::convert::From<i8>>
<f32 as std::convert::From<u16>>
<f32 as std::convert::From<u8>>
= note: required because of the requirements on
I don't understand the reason for this error.
I also tried to use as, but I was told that as can only be used on primitive types.
I found that the Primitive trait has been removed, so I don't know how to write the bound...
If you have any good ideas, I hope you can share them with me. Thank you very much.
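One common approach, shown here as a sketch that reuses the num crate from the variance question above: bound T by ToPrimitive and NumCast and do the length math in f64, which sidesteps the missing From<f64> impl for f32.
use num::{NumCast, ToPrimitive};
use std::ops::{Div, Mul};

pub fn normalize<T>(val: &[T; 2], len: &T) -> [T; 2]
where
    T: Mul<Output = T> + Div<Output = T> + ToPrimitive + NumCast + Copy,
{
    // Compute the length in f64 regardless of T.
    let x = val[0].to_f64().unwrap();
    let y = val[1].to_f64().unwrap();
    let real_length = (x * x + y * y).sqrt();
    if real_length == 0.0 {
        // fix: without special-casing 0 this would return NaN
        [val[0], val[1]]
    } else {
        // NumCast::from returns an Option; for integer T the cast truncates.
        let scale = *len / T::from(real_length).unwrap();
        [val[0] * scale, val[1] * scale]
    }
}

fn main() {
    dbg!(normalize(&[3.0f64, 4.0], &10.0)); // [6.0, 8.0]
    dbg!(normalize(&[3.0f32, 4.0], &10.0)); // [6.0, 8.0]
    dbg!(normalize(&[3i32, 4], &10));       // integer division: [6, 8]
}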
