I want to interpolate values between multiple points with Bezier curve interpolation using the splines crate.
My current implementation looks like this:
let mut output_buffer: Vec<f32> = vec![0.0; buffer.len()];
let mut points: Vec<Key<f32, f32>> = Vec::new();
for val in pos_index.iter() {
    let x = val.0;
    let y = val.1;
    points.push(Key::new(x, y, Interpolation::Bezier(0.5)));
}
let spline = Spline::from_vec(points);
for i in 0..output_buffer.len() {
    let v = spline.clamped_sample(i as f32).unwrap_or(0.0);
    output_buffer[i] = v;
}
output_buffer
But I do not know what value to put into Interpolation::Bezier('value').
The documentation does not help me either, and I cannot find any examples that use Bezier interpolation.
One important thing to note is that the points, or keys, are not evenly spaced out; the distance between them is much larger for the first values, which are closer to zero.
Line #170 of spline.rs shows that the V values inside Interpolation::Bezier(V) of cp0 (cp stands for control point) and cp1 are passed to the function V::cubic_bezier as the third and fourth parameters.
Here is the documentation of the function cubic_bezier.
fn cubic_bezier(t: T, a: Self, u: Self, v: Self, b: Self) -> Self
Cubic Bézier interpolation.
a is the first point; b is the second point; u is the output tangent of a to the curve and v is the input tangent of b to the curve.
We can see that u and v stand for tangents and, from the function signature, that their type is Self, the type that implements the Interpolate trait. The section Implementations on Foreign Types of the Interpolate documentation lists the types that implement the trait. Here are some of them as examples:
impl Interpolate<f32> for Vector1<f32>
impl Interpolate<f32> for Vector2<f32>
impl Interpolate<f64> for Vector1<f64>
impl Interpolate<f64> for Vector2<f64>
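For Key<f32, f32> keys like yours, that means the value inside Interpolation::Bezier(...) is itself an f32: a control value on the y axis that shapes how the curve leaves one key and enters the next. Here is a minimal sketch; the control values are arbitrary, and it assumes a splines version where clamped_sample returns an Option, as in your snippet:

use splines::{Interpolation, Key, Spline};

fn main() {
    // Each Bezier(v) carries a control value of the same type as the key's value;
    // here each one is just an offset from its key's y value, chosen by hand.
    let points: Vec<Key<f32, f32>> = vec![
        Key::new(0.0, 0.0, Interpolation::Bezier(0.5)),
        Key::new(4.0, 1.0, Interpolation::Bezier(1.5)),
        Key::new(10.0, 0.0, Interpolation::Bezier(-0.5)),
    ];
    let spline = Spline::from_vec(points);
    for i in 0..=10 {
        println!("{}: {}", i, spline.clamped_sample(i as f32).unwrap_or(0.0));
    }
}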
Hope this is helpful.
I have a point cloud in an ndarray Array2 that is 3x500.
I would like to make a kd tree from those points using the KdTree crate.
https://docs.rs/kdtree/0.6.0/kdtree/
The following example is in the KdTree documentation:
let dimensions = 2;
let mut kdtree = KdTree::new(dimensions);
But when I try that, I get:
cannot infer type for type parameter `A`
consider giving `tree` the explicit type `kdtree::kdtree::KdTree<A, T, U>`, where the type parameter `A` is specified
And I don't know what A, T, and U are supposed to be.
To add the points I tried to use :
pub fn add(&mut self, point: U, data: T) -> Result<(), ErrorKind>
But I don't know what data is supposed to be.
This is because you are only trying to run this:
let dimensions = 2;
let mut kdtree = KdTree::new(dimensions); // kdtree type is KdTree<?, ?, ?>
instead of:
let a: ([f64; 2], usize) = ([0f64, 0f64], 0);
let b: ([f64; 2], usize) = ([1f64, 1f64], 1);
let c: ([f64; 2], usize) = ([2f64, 2f64], 2);
let d: ([f64; 2], usize) = ([3f64, 3f64], 3);
let dimensions = 2;
let mut kdtree = KdTree::new(dimensions); // kdtree's type is KdTree<f64, usize, &[f64; 2]>
kdtree.add(&a.0, a.1).unwrap();
kdtree.add(&b.0, b.1).unwrap();
kdtree.add(&c.0, c.1).unwrap();
kdtree.add(&d.0, d.1).unwrap();
You can change the top code to:
let dimensions = 2;
// we provide the type ourselves
let mut kdtree: KdTree<f64, usize, &[f64; 2]> = KdTree::new(dimensions);
This comes down to how Rust infers the types of variables at compile time.
The compiler can usually infer what type we want to use based on the value and how we use it.
In the second piece of code, Rust was able to figure out the type we want kdtree to have by looking at our add calls, but this did not happen in the first case because no extra information was provided.
The third piece of code works because we explicitly annotate the type of kdtree.
As for "But I don't know what data is supposed to be": data: T is generic, so it can be anything you want to associate with the point (an index, an id, your own struct, and so on).
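For example, data could just as well be a string label instead of an index; a small sketch:

use kdtree::KdTree;

let p1 = [0.0, 0.0];
let p2 = [1.0, 1.0];
// T is &str here: each point carries a label instead of a numeric id.
let mut kdtree: KdTree<f64, &str, &[f64; 2]> = KdTree::new(2);
kdtree.add(&p1, "origin").unwrap();
kdtree.add(&p2, "far corner").unwrap();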
I'm following the Rust by Example tutorial and am on the second part of the Tuples activity which is to add a transpose function using the reverse function as a template. This will accept a matrix as an argument and return a matrix in which two elements have been swapped. For example:
println!("Matrix:\n{}", matrix);
println!("Transpose:\n{}", transpose(matrix));
Expected results:
Input Matrix:
( 1.1 1.2 )
( 2.1 2.2 )
Transposed output:
( 1.1 2.1 )
( 1.2 2.2 )
I can't find the right code; here is what I'm trying:
// this is defined in the tutorial
#[derive(Debug)]
struct Matrix(f32, f32, f32, f32);
// this is my attempt that does not compile
fn transpose(maat: Matrix) -> (Matrix) {
    let matrix = maat;
    (matrix.0, matrix.2, matrix.1, matrix.3)
}
I don't want to give you the full solution, because I would do you a disservice if you're learning Rust.
There's a key ingredient that you're missing at this point of the tutorial. Not your fault.
Matrix is a "tuple struct" (also sometimes called a newtype) and it's covered in a later section of Rust by example.
If you want to peek ahead, in the section on structs you'll learn the two pieces you're missing.
Piece one: struct Matrix(f32, f32, f32, f32); as defined in the tutorial can be destructured in a similar way as the simple tuple.
If you have let matrix = Matrix(1.1, 1.2, 2.1, 2.2); then you can do this to create names for its individual elements:
let Matrix(r1c1, r1c2, r2c1, r2c2) = matrix;
What you did (matrix.0, matrix.1...) works too, though...
Piece two: when you want to create a new instance of Matrix, you write Matrix(1.1, 1.2, 2.1, 2.2). In your attempt at writing transpose you're trying to return a tuple, but a tuple struct like Matrix is a different, incompatible type (that's why it's also called a "newtype").
Using reverse as a template, as the exercise says, and rewriting the reverse function to accept f32:
fn reverse(pair: (f32, f32)) -> (f32, f32) {
    let (a, b) = pair;
    (b, a)
}

fn transpose(mat: Matrix) -> Matrix {
    let (a, b) = reverse((mat.1, mat.2));
    Matrix(mat.0, a, b, mat.3)
}
You need to do three things here:
Tell the Display format how to structure the output so that it divides the Matrix into two rows of equal length.
impl fmt::Display for Matrix {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "( {} {} )\n( {} {} )", self.0, self.1, self.2, self.3)
    }
}
Write the transpose function that will return the matrix in transposed order.
fn transpose(mat: Matrix) -> Matrix {
    Matrix(mat.0, mat.2, mat.1, mat.3)
}
Call it inside println!:
println!("Transpose:\n{}", transpose(matrix));
I found a function to compute a mean and have been playing with it. The code snippet below runs, but if the data inside the input changes from a float to an int an error occurs. How do I get this to work with floats and integers?
use std::borrow::Borrow;

fn mean(arr: &mut [f64]) -> f64 {
    let mut i = 0.0;
    let mut mean = 0.0;
    for num in arr {
        i += 1.0;
        mean += (num.borrow() - mean) / i;
    }
    mean
}

fn main() {
    let val = mean(&mut vec![4.0, 5.0, 3.0, 2.0]);
    println!("The mean is {}", val);
}
The code in the question doesn't compile because f64 does not have a borrow() method. Also, the slice it accepts doesn't need to be mutable since we are not changing it. Here is a modified version that compiles and works:
fn mean(arr: &[f64]) -> f64 {
    let mut i = 0.0;
    let mut mean = 0.0;
    for &num in arr {
        i += 1.0;
        mean += (num - mean) / i;
    }
    mean
}
We specify &num when looping over arr so that the type of num is f64 rather than a reference to f64. This snippet would work either way, but omitting the & would break the generic version below.
For the same function to accept floats and integers alike, its parameter needs to be generic. Ideally we'd like it to accept any type that can be converted into f64, including f32 or user-defined types that define such a conversion. Something like this:
fn mean<T>(arr: &[T]) -> f64 {
    let mut i = 0.0;
    let mut mean = 0.0;
    for &num in arr {
        i += 1.0;
        mean += (num as f64 - mean) / i;
    }
    mean
}
This doesn't compile because x as f64 is not defined for x of an arbitrary type. Instead, we need a trait bound on T that defines a way to convert T values to f64. This is exactly the purpose of the Into trait: every type T that implements Into<U> provides an into(self) -> U method. Specifying T: Into<f64> as the trait bound gives us the into() method that returns an f64.
We also need to require T to be Copy, so that reading a value out of the array copies it rather than tries to move it out. Since primitive numbers such as integers implement Copy, this is fine for us. Working code then looks like this:
fn mean<T: Into<f64> + Copy>(arr: &[T]) -> f64 {
    let mut i = 0.0;
    let mut mean = 0.0;
    for &num in arr {
        i += 1.0;
        mean += (num.into() - mean) / i;
    }
    mean
}

fn main() {
    let val1 = mean(&vec![4.0, 5.0, 3.0, 2.0]);
    let val2 = mean(&vec![4, 5, 3, 2]);
    println!("The means are {} and {}", val1, val2);
}
Note that this will only work for types that define a lossless conversion to f64. Thus it will work for u32, i32 (as in the above example) and smaller integer types, but it won't accept, for example, a vector of i64 or u64, which cannot be losslessly converted to f64.
Also note that this problem lends itself nicely to functional programming idioms such as enumerate() and fold(). Although somewhat outside the scope of this already longish answer, writing out such an implementation is an exercise hard to resist; a sketch follows.
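A fold-based variant of the generic version might look like this (equivalent to the loop above; the accumulator pairs the running mean with the count):

fn mean<T: Into<f64> + Copy>(arr: &[T]) -> f64 {
    arr.iter()
        .fold((0.0, 0.0), |(mean, i), &num| {
            // Same incremental update as in the loop version.
            let i = i + 1.0;
            (mean + (num.into() - mean) / i, i)
        })
        .0
}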
If I have a vector, big_v, that represents some arbitrarily big number, I want to be able to convert it into a vector that holds the binary representation of big_v, in order to implement a division algorithm. So say:
let big_v = vec![1,3,9]; // 139
let big_binary_v = func(big_v); // 10001011
big_binary_v's value would be the equivalent of [1,0,0,0,1,0,1,1].
The trick is that I could implement this easily if it weren't for the fact that the vector can represent an arbitrarily large integer; otherwise I could simply compute binary(100) + binary(30) + binary(9). In the case that you have a number like 100000000000000001, however, you can't compute binary(100000000000000000) + binary(1). Is there any trick to convert this vector to binary without using size-dependent operations?
The most obvious way is to define some abstract operations:
fn add_binary_fixed(left: &mut Vec<u8>, right: u8);
fn multiply_binary_fixed(binary_value: &mut Vec<u8>, by: u8);
Then one can do:
let mut binary_value = vec![0];
for decimal_digit in decimal_value {
    multiply_binary_fixed(&mut binary_value, 10);
    add_binary_fixed(&mut binary_value, decimal_digit);
}
binary_value
If one uses traits and proper types, this can generalize to any (bounded, natural numbered) base conversion:
fn convert(value: &LargeInteger, base: u8) -> LargeInteger {
    let mut output = LargeInteger::new(base);
    for digit in value {
        output.multiply(base);
        output.add(digit);
    }
    output
}
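Concretely, a non-generic sketch of the same algorithm, working on Vec<u8> digit vectors with the most significant digit first (as in the question; the function names here are mine), could look like this:

// Multiply a big binary number by a small factor and add a small value,
// carrying in base 2. Digits are stored most significant first.
fn mul_add_binary(digits: &mut Vec<u8>, factor: u8, add: u8) {
    let mut carry = add as u32;
    for d in digits.iter_mut().rev() {
        let v = *d as u32 * factor as u32 + carry;
        *d = (v % 2) as u8;
        carry = v / 2;
    }
    while carry > 0 {
        digits.insert(0, (carry % 2) as u8);
        carry /= 2;
    }
}

fn to_binary(decimal_digits: &[u8]) -> Vec<u8> {
    let mut binary = vec![0];
    for &d in decimal_digits {
        mul_add_binary(&mut binary, 10, d);
    }
    binary
}

fn main() {
    // 139 = 0b10001011
    println!("{:?}", to_binary(&[1, 3, 9])); // [1, 0, 0, 0, 1, 0, 1, 1]
}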
I have a variable a of type &[T]; how can I get a reference to a subslice of a?
As a concrete example, I'd like to get the first and second halves of a, provided a.len() is even.
You use slicing syntax for that:
fn main() {
    let data: &[u8] = b"12345678";
    println!("{:?} - {:?}", &data[..data.len() / 2], &data[data.len() / 2..]);
}
The general syntax is
&slice[start_idx..end_idx]
which gives a slice derived from slice, starting at start_idx and ending at end_idx - 1 (that is, the item at the right index is not included). Either index can be omitted (even both), which means zero or the slice length, respectively.
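For example:

let s = [1u8, 2, 3, 4];
assert_eq!(&s[..2], &[1, 2]); // start omitted, defaults to 0
assert_eq!(&s[2..], &[3, 4]); // end omitted, defaults to s.len()
assert_eq!(&s[..], &s[0..4]); // both omitted: the whole slice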
Note that if you want to split a slice at some position into two slices, it is often better to use the split_at() method:
let data = b"12345678";
let (left, right): (&[u8], &[u8]) = data.split_at(4);
Moreover, this is the only way to obtain two mutable slices out of another mutable slice:
let mut data: Vec<u8> = vec![1, 2, 3, 4, 5, 6, 7, 8];
let data_slice: &mut [u8] = &mut data[..];
let (left, right): (&mut [u8], &mut [u8]) = data_slice.split_at_mut(4);
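Both halves can then be mutated independently while they are borrowed at the same time; a small self-contained example:

let mut data: Vec<u8> = vec![1, 2, 3, 4, 5, 6, 7, 8];
let (left, right) = data.split_at_mut(4);
left[0] = 10;  // writes through the first half
right[0] = 50; // writes through the second half
assert_eq!(data, vec![10, 2, 3, 4, 50, 6, 7, 8]);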
However, these basic things are explained in the official book on Rust. You should start from there if you want to learn Rust.