I'm currently reading the book Computer Graphics from Scratch and can't get the example from the first chapter about a raytracer quite right in my Rust project.
I'm using the image crate to render the picture. I convert the coordinates from the renderer (x and y, origin in the upper left corner) to my viewport (origin in the middle, with z at 10).
fn canvas_to_viewport(x: u32, y: u32, scene: &Scene) -> Point {
    let x: f64 = x as f64 - (scene.width as f64 / 2.);
    let y: f64 = y as f64 - (scene.width as f64 / 2.);
    // x * viewport width / camera width
    // y * viewport height / camera height
    Point {
        x: x * 1.,
        y: y * 1.,
        z: 10.,
    }
}
I then get the color of the pixel with my trace_ray function, which looks for the closest intersection with a sphere using the helper function intersect_ray_sphere.
fn trace_ray(camera: Point, viewport: Point, t_min: f64, _t_max: f64, spheres: &Vec<Sphere>) -> Color {
    let mut closest_t = 1000.;
    let mut closest_sphere_color = Color(255, 255, 255);
    for sphere in spheres {
        let (t1, t2) = intersect_ray_sphere(camera, viewport, &sphere);
        if t1 > t_min && t1 < closest_t {
            closest_t = t1;
            closest_sphere_color = sphere.color;
        }
        if t2 > t_min && t2 < closest_t {
            closest_t = t2;
            closest_sphere_color = sphere.color;
        }
    }
    closest_sphere_color
}
fn intersect_ray_sphere(o: Point, d: Point, sphere: &Sphere) -> (f64, f64) {
    let r = sphere.radius;
    let co = Point {
        x: o.x - sphere.center.x,
        y: o.y - sphere.center.y,
        z: o.z - sphere.center.z,
    };
    let a = dot_product(&d, &d);
    let b = 2. * dot_product(&co, &d);
    let c = dot_product(&co, &co) - r * r;
    let discriminant = b * b - 4. * a * c;
    if discriminant < 0. {
        return (1000., 1000.);
    }
    let t1 = (-b + discriminant.sqrt()) / (2. * a);
    let t2 = (-b - discriminant.sqrt()) / (2. * a);
    (t1, t2)
}
The spheres farther from the middle are warped as you can see here:
I have played with pretty much all parameters but can't seem to figure it out. Any help would be much appreciated.
I think you are missing checks for t1 < t_max and t2 < t_max, though I suppose it doesn't matter much if your t_max is infinite.
Also, in case of a negative discriminant, you can actually return inf if you like, using f64::INFINITY; using 1000 is likely the culprit for the warping effect you are seeing.
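For illustration, here is a sketch of both suggestions applied to your snippets (same types and names as in the question, untested):

// in intersect_ray_sphere: signal "no hit" with infinity instead of an arbitrary 1000
if discriminant < 0. {
    return (f64::INFINITY, f64::INFINITY);
}

// in trace_ray: start from infinity and also check against t_max
fn trace_ray(camera: Point, viewport: Point, t_min: f64, t_max: f64, spheres: &Vec<Sphere>) -> Color {
    let mut closest_t = f64::INFINITY;
    let mut closest_sphere_color = Color(255, 255, 255);
    for sphere in spheres {
        let (t1, t2) = intersect_ray_sphere(camera, viewport, &sphere);
        if t1 > t_min && t1 < t_max && t1 < closest_t {
            closest_t = t1;
            closest_sphere_color = sphere.color;
        }
        if t2 > t_min && t2 < t_max && t2 < closest_t {
            closest_t = t2;
            closest_sphere_color = sphere.color;
        }
    }
    closest_sphere_color
}

With closest_t starting at infinity, a missed sphere (which now returns infinity) can never win the comparison, so it no longer distorts the picture.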
I'm making Tic-tac-toe in Rust. I want to count the number of cells in a line when the player makes a move, like -- x --. So I call count_result twice with the same dx and dy (that's why I pass dx and dy by reference). But here's the problem: my linter in IntelliJ IDEA complains that I use a moved value, while the compiler compiles without any warnings. What is the problem? Can you help me?
IntelliJ IDEA screenshot
Just to clarify - game_data.board is [[Option<Cell>; 3]; 3]
struct Data {
    board: [[Option<Cell>; 3]; 3],
}

fn count_result(game_data: &Data, point: &Point, cell: &Cell, dx: &isize, dy: &isize) -> isize {
    let mut count: isize = 0;
    let point = (*point).clone();
    let mut x = point.x;
    let mut y = point.y;
    while (x + dx < 3 && x + dx >= 0) && (y + dy >= 0 && y + dy < 3) {
        x += 1;
        y += 1;
        if game_data.board[x as usize][y as usize] == Some(*cell) {
            count += 1;
        } else {
            break;
        }
    }
    count
}
Here is another example of the same problem.
game_data: &mut Data
IntelliJ IDEA screenshot
This is just the linter's index being in an invalid state. Invalidate caches and you'll be fine.
I am currently implementing biome generation in my game, and I want the type of a biome to depend on humidity and temperature values generated from gradual noise.
Different biomes should have different heights, and without interpolation this results in abrupt height differences at biome borders, as expected.
What I tried was to also get the two neighbouring biomes in the grid and measure the blend percentage of each biome.
I then get the three different height values from the biomes and multiply each with its respective blend value.
Here is the simplified and stripped code which I use to fetch biomes:
const BIOME_GRID_SIZE: usize = 2;
const BIOME_INDEX_SIZE: usize = BIOME_GRID_SIZE - 1;

const BIOME_GRID: [[Biomes; BIOME_GRID_SIZE]; BIOME_GRID_SIZE] = [
    [Biomes::Mountains, Biomes::Forest],
    [Biomes::Desert, Biomes::Mesa],
];

fn get_height(coord: [i64; 2], noise: &Noise) -> i64 {
    let temperature = (noise.get_2d(coord) + 0.5).clamp(0.0, 1.0);
    let humidity = (noise.get_2d(coord /* + some offset */) + 0.5).clamp(0.0, 1.0);

    let x = BIOME_GRID_SIZE as f64 * humidity;
    let y = BIOME_GRID_SIZE as f64 * temperature;

    let x_frac = (x.fract() - 0.5) * 2.0;
    let y_frac = (y.fract() - 0.5) * 2.0;

    let x_blending = x_frac.abs();
    let y_blending = y_frac.abs();
    let own_blending = 2.0 - x_blending - y_blending;

    // direction of neighbour biomes
    let x_direction = x_frac.signum() as isize;
    let y_direction = y_frac.signum() as isize;

    let x_index = (x.trunc() as isize).clamp(0, BIOME_INDEX_SIZE as isize);
    let y_index = (y.trunc() as isize).clamp(0, BIOME_INDEX_SIZE as isize);

    let biomes = get_biomes(x_index, y_index, x_direction, y_direction);

    blend(
        coord,
        noise,
        biomes,
        [own_blending, x_blending, y_blending],
    )
}
// get main and neighbour biomes
fn get_biomes(x: isize, y: isize, x_direction: isize, y_direction: isize) -> [Biomes; 3] {
    let mut biomes = [Biomes::Ocean; 3];
    for (i, (d_x, d_y)) in [(0, 0), (x_direction, 0), (0, y_direction)].iter().enumerate() {
        let x_index = (x + d_x).clamp(0, BIOME_INDEX_SIZE as isize) as usize;
        let y_index = (y + d_y).clamp(0, BIOME_INDEX_SIZE as isize) as usize;
        let biome = BIOME_GRID[x_index][y_index];
        biomes[i] = biome;
    }
    biomes
}
pub fn blend(
    coord: [i64; 2],
    noise: &Noise,
    biomes: [Biomes; 3],
    blending: [f64; 3],
) -> i64 {
    let heights: Vec<f64> = biomes
        .iter()
        .map(|x| x.get().height(coord, noise) as f64)
        .collect();
    let height = heights[0] * blending[0] + heights[1] * blending[1] + heights[2] * blending[2];
    height as i64
}
This works well in some cases; in others it fails completely.
I am unsure whether two neighbours are enough and how to properly get the blend values.
Is there a better solution to this problem?
In general, for bilinear blending you would use four points. If I understand your code correctly, that would be the four height maps, one for each biome.
You then lerp across one axis (e.g. humidity) for the two pairs that share the other axis, and then lerp the two blended values again along the other axis (e.g. temperature).
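As a rough sketch of that idea (the function and parameter names here are made up; only the bilinear math matters):

/// Bilinearly blend the heights of the four surrounding biomes.
/// h00, h10, h01, h11 are the heights at the four corners of the grid cell;
/// fx and fy are the fractional positions (0.0..1.0) along the humidity
/// and temperature axes respectively.
fn bilinear_height(h00: f64, h10: f64, h01: f64, h11: f64, fx: f64, fy: f64) -> f64 {
    // lerp along the humidity axis for both temperature rows
    let bottom = h00 * (1.0 - fx) + h10 * fx;
    let top = h01 * (1.0 - fx) + h11 * fx;
    // then lerp the two intermediate results along the temperature axis
    bottom * (1.0 - fy) + top * fy
}

In your get_height, fx and fy would come from x.fract() and y.fract(), and the four heights from the biomes at the corners of the current grid cell.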
I wrote a low-pass filter based around the FFT algorithm.
First I process the input data using forward FFT, then I decrease the "volume" of parts of the resulting spectrum, then I put the data into the inverse FFT and finally I normalize the data like this:
fn lowpass_filter(data: &[f32], sampling_rate: f32, cutoff_frequency: f32) -> Vec<f32> {
    let len = data.len();

    // step 1:
    let mut spectrum = fft::forward(&data);

    // step 2:
    let start: usize = (len as f32 * (cutoff_frequency / sampling_rate * 2.0)) as usize;
    let diff = len - start;
    // what to add to multiplikator in each iteration
    let step: f32 = PI / diff as f32;

    // reduce volume of frequencies after cutoff_frequency in spectrum
    let mut multiplikator: f32 = 0.0;
    for i in start..len {
        let mul = (multiplikator.cos() + 1.0) / 2.0;
        spectrum[i] *= mul;
        multiplikator += step;
    }

    // step 3:
    let data = fft::inverse(&spectrum);

    // step 4:
    fft::normalize(&data, true)
}
When only doing steps 1, 3 and 4 it works; the only problem there is that after normalization of the inverted data, I only get the absolute value of the data back, so a 65 Hz sine wave looks like this:
The main problem I am facing, though, is that I do not know how to reduce the volume of specific frequencies in the spectrum.
When reducing said frequencies as in step 2, the visualization of a 65 Hz sine wave put through that lowpass filter with cutoff_frequency set to 100.0 Hz looks like this:
What did I do wrong here?
More info about the defined functions:
use rustfft::FftPlanner;
pub use rustfft::num_complex::Complex;

pub fn forward(data: &[f32]) -> Vec<Complex<f32>> {
    let length = data.len();

    // conversion to complex numbers
    let mut buffer: Vec<Complex<f32>> = Vec::new();
    for d in data {
        buffer.push(Complex { re: *d, im: 0.0 });
    }

    // creates a planner
    let mut planner = FftPlanner::<f32>::new();
    // creates an FFT
    let fft = planner.plan_fft_forward(length);

    //input.append(&mut data.to_vec());
    fft.process(&mut buffer);
    buffer
}
pub fn inverse(data: &[Complex<f32>]) -> Vec<Complex<f32>> {
    let length = data.len();
    let mut data = data.to_vec();

    // creates a planner
    let mut planner = FftPlanner::<f32>::new();
    // creates an FFT
    let fft = planner.plan_fft_inverse(length);

    fft.process(&mut data);
    data
}
pub fn normalize(data: &[Complex<f32>], normalize: bool) -> Vec<f32> {
    let len: f32 = data.len() as f32;
    let norm = data
        .iter()
        .map(|x| x.norm() / len)
        .collect();
    norm
}
I am using rustfft for the actual processing.
There are two issues with your code:
Since you want to process real data (i.e. data whose imaginary part is 0), the output of the forward FFT is symmetrical and you need to apply the same coefficient to matching frequencies (i.e. spectrum[i] and spectrum[spectrum.len() - i], remembering that spectrum[0] stands alone, and so does spectrum[spectrum.len()/2] if the length is even).
If the frequencies are symmetrical, the result of the inverse FFT should be real (i.e. the imaginary part should be 0 ± small rounding errors). Therefore your normalize function should use x.re instead of x.norm(), which will allow it to retain its sign.
After fixing these issues and adding a cutoff_end_freq to the lowpass_filter function the code looks like this:
pub fn lowpass_filter(data: &[f32], sampling_rate: f32, cutoff_start_freq: f32, cutoff_end_freq: f32) -> Vec<f32> {
    let len = data.len();
    let spectrum_len = len / 2;

    let mut spectrum = fft::forward(&data);
    assert!(len == spectrum.len());

    let start: usize = (spectrum_len as f32 * (cutoff_start_freq / sampling_rate * 2.0)) as usize;
    let end: usize = (spectrum_len as f32 * (cutoff_end_freq / sampling_rate * 2.0)) as usize;
    let diff = end - start;

    // what to add to multiplikator in each iteration
    let step: f32 = PI / diff as f32;

    let mut multiplikator: f32 = 0.0;
    for i in start..=end {
        let mul = (multiplikator.cos() + 1.0) / 2.0;
        spectrum[i] *= mul;
        spectrum[len - i - 1] *= mul;
        multiplikator += step;
    }
    for i in end..spectrum_len {
        spectrum[i] *= 0.0;
        spectrum[len - i - 1] *= 0.0;
    }

    let data = fft::inverse(&spectrum);
    fft::get_real(&data)
}
The normalize function is now renamed to get_real:
pub fn get_real(data: &[Complex<f32>]) -> Vec<f32> {
    let len: f32 = data.len() as f32;
    let norm = data
        .iter()
        .map(|x| x.re / len)
        .collect();
    norm
}
Now it works very well, but there are still alternating "spikes" on each end when visualizing a frequency outside of the threshold:
I simply applied a Hann (Hanning) window to the data because I heard that the FFT expects the data to be continuous.
And it works, with the drawback that a lot of information at the sides gets lost.
Are there better methods to circumvent this problem or is there still something wrong with the lowpass?
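For context, the windowing I applied looks roughly like this (a simplified sketch, not the exact code from the project):

use std::f32::consts::PI;

/// Multiply the samples by a Hann window so both ends of the buffer
/// taper towards zero before the forward FFT is applied.
fn apply_hann_window(data: &[f32]) -> Vec<f32> {
    let n = data.len();
    let denom = n.saturating_sub(1).max(1) as f32;
    data.iter()
        .enumerate()
        .map(|(i, &x)| {
            let w = 0.5 * (1.0 - (2.0 * PI * i as f32 / denom).cos());
            x * w
        })
        .collect()
}

The taper towards zero at both ends is exactly the loss of information at the sides mentioned above.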
This is a relatively complex task for me and I am not fully able to sum it up in the title.
But the problem is this:
I created an audio visualizer that converts raw audio data to a Vec<f32>, where the elements in the vector are ordered by ascending frequency, starting at 0 Hz and ending at 20,000 Hz.
But now I have to normalize the vector so that the frequencies are not spaced linearly but logarithmically, which is closer to how human hearing works. Here is the function that does this:
fn normalize(buffer: Vec<f32>, volume: f32) -> Vec<f32> {
    let mut output_buffer: Vec<f32> = vec![0.0; buffer.len()];
    let mut start_pos: usize = 0;
    let mut end_pos: usize = 0;

    for i in 0..buffer.len() {
        // FIRST HALF
        let offset: f32 = (buffer.len() as f32 / (i + 1) as f32).sqrt();
        if ((i as f32 * offset) as usize) < output_buffer.len() {
            // normalized position
            let pos: usize = (i as f32 * offset) as usize;

            // stores positions needed for filling
            start_pos = end_pos;
            end_pos = pos;

            let y = buffer[i];

            // prevent volume loss, that could occur because of 'crunching' of higher freqs,
            // by only setting the value of buffer if y is bigger
            if output_buffer[pos] < y {
                output_buffer[pos] = y;
            }
        }

        // SECOND HALF
        // linear filling of the values between
        if end_pos - start_pos > 1 && (end_pos - 1) < output_buffer.len() {
            for s_p in (start_pos + 1)..end_pos {
                let percentage: f32 = (s_p - start_pos) as f32 / ((end_pos - 1) - start_pos) as f32;
                let mut y: f32 = 0.0;
                //(output_buffer[s_p] * (1.0 - percentage) ) + (output_buffer[end_pos] * percentage);
                y += output_buffer[start_pos] * (1.0 - percentage);
                y += output_buffer[end_pos] * percentage;
                output_buffer[s_p] = y;
            }
        }
    }
    output_buffer
}
In the first half I am reallocating the values of the buffer so they are spaced logarithmically, but with this method a lot of values, especially in the low frequency range, get skipped, and then it looks like this: unfilled
|
| |
| |
| | | |
| | | |||
| | | | |||
+----+---+--+-+++
Because of that, I found a way to fill in the gaps in the second half.
Now it looks like this: filled
|
:|: |
::|:: :|:
:::|::: ::|:| |
::::|:::|::|:|||
|::::|:::|::|:|||
+----+---+--+-+++
I reduced the number of bars for the sake of visualisation; the real implementation has about 10 times more 'bars', so the linearity is much more visible there.
So my final problem: instead of straight lines between the points, I want to create curves, which represent sound much better.
I need to be able to access the 'y' coordinate value of any point of the curve.
Is there any way to do this, or am I doing this totally wrong?
I created audioviz, which does all of this processing and is where the code is from, and audiolizer, an application that makes use of this library combined with a GUI.
The splines crate solves my exact problem.
Here is my implementation with added resolution control and volume normalisation, which may not be necessary:
use splines::{Interpolation, Key, Spline};

fn normalize(buffer: Vec<f32>, volume: f32, resolution: f32) -> Vec<f32> {
    let mut output_buffer: Vec<f32> = vec![0.0; (buffer.len() as f32 * resolution) as usize];
    let mut pos_index: Vec<(usize, f32)> = Vec::new();

    for i in 0..buffer.len() {
        let offset: f32 = (output_buffer.len() as f32 / (i + 1) as f32 * resolution).sqrt();
        if ((i as f32 * offset) as usize) < output_buffer.len() {
            // space normalisation
            let pos: usize = (i as f32 * offset) as usize;

            // volume normalisation
            let volume_offset: f32 = (output_buffer.len() as f32 / (pos + 1) as f32).sqrt();
            let y = buffer[i] / volume_offset.powi(3) * 0.01;

            pos_index.push((pos, y));
        }
    }

    // Interpolation
    let mut points: Vec<Key<f32, f32>> = Vec::new();
    for val in pos_index.iter() {
        let x = val.0 as f32;
        let y = val.1 * volume;
        points.push(Key::new(x, y, Interpolation::Bezier(0.5)));
    }
    let spline = Spline::from_vec(points);

    for i in 0..output_buffer.len() {
        let v = match spline.sample(i as f32) {
            Some(v) => v,
            None => 0.0,
        };
        output_buffer[i] = v;
    }

    output_buffer
}
Trying to convert this for loop from C++ to Rust, and I'm having a hard time figuring it out, as I'm very new to Rust syntax.
double sinError = 0;
for (float x = -10 * M_PI; x < 10 * M_PI; x += M_PI / 300) {
    double approxResult = sin_approx(x);
    double libmResult = sinf(x);
    sinError = MAX(sinError, fabs(approxResult - libmResult));
}
Iterate over integers
As #trentcl already pointed out, it's usually better to iterate over integers instead of floats, to prevent numerical errors from adding up:
use std::f32::consts::PI;

let mut sin_error = 0.0;
for x in (-3000..3000).map(|i| (i as f32) * PI / 300.0) {
    sin_error = todo!();
}
Just replace todo!() with the code that computes the next sin_error.
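For example, assuming your sin_approx takes and returns an f32 (that signature is a guess), a complete version could look like this:

use std::f32::consts::PI;

// placeholder approximation so the snippet compiles on its own;
// substitute your real sin_approx here
fn sin_approx(x: f32) -> f32 {
    x - x.powi(3) / 6.0
}

fn main() {
    let mut sin_error: f32 = 0.0;
    for x in (-3000..3000).map(|i| (i as f32) * PI / 300.0) {
        // mirror of MAX(sinError, fabs(approxResult - libmResult))
        sin_error = sin_error.max((sin_approx(x) - x.sin()).abs());
    }
    println!("max sin error: {sin_error}");
}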
A more functional way
use std::f32::consts::PI;

let sin_error = (-3000..3000)
    .map(|i| (i as f32) * PI / 300.0)
    .fold(0.0, |sin_error, x| todo!());
In case you don't care about numerical errors, or want to iterate over something else, here are some other options:
Use a while loop
It's not as nice, but does the job!
use std::f32::consts::PI;

let mut sin_error = 0.0;
let mut x = -10.0 * PI;
while x < 10.0 * PI {
    sin_error = todo!();
    x += PI / 300.0;
}
Create your iterator with successors()
The successors() function creates a new iterator where each successive item is computed based on the preceding one:
use std::f32::consts::PI;
use std::iter::successors;

let mut sin_error = 0.0;
let iter = successors(Some(-10.0 * PI), |x| Some(x + PI / 300.0));
for x in iter.take_while(|&x| x < 10.0 * PI) {
    sin_error = todo!();
}
A more functional way
use std::f32::consts::PI;
use std::iter::successors;

let sin_error = successors(Some(-10.0 * PI), |x| Some(x + PI / 300.0))
    .take_while(|&x| x < 10.0 * PI)
    .fold(0.0, |sin_error, x| todo!());