ConvexShape vertices are not accurate (SFML Rust)

So I have been using the ConvexShape struct to create triangles with the Rust bindings for SFML.
My code is as follows:
let mut shape2 = ConvexShape::new(3);
shape2.set_point(0, Vector2f::new(0.0, 0.0));
shape2.set_point(1, Vector2f::new(0.0, 100.0));
shape2.set_point(2, Vector2f::new(100.0, 0.0));

for point in shape2.points() {
    println!("x:{} y:{}", point.x, point.y);
}
However, when looping through the points of a triangle, I would sometimes get output like this:
x:0 y:100
x:100 y:0
x:434583.44 y:-0.000000000000000000000000000000000000000047353
I am not sure what is causing this problem; however, I assume it has to do with the f32 overflowing.
Are there any fixes for this problem or am I doing something wrong here?

This appears to be a bug in the iterator returned by .points(). You can fall back to fetching each point directly by index instead (via the Shape trait):
use sfml::graphics::{ConvexShape, Shape};
use sfml::system::Vector2f;
...
let mut shape2 = ConvexShape::new(3);
shape2.set_point(0, Vector2f::new(0.0, 0.0));
shape2.set_point(1, Vector2f::new(0.0, 100.0));
shape2.set_point(2, Vector2f::new(100.0, 0.0));

for i in 0..shape2.point_count() {
    let point = shape2.point(i);
    println!("x:{} y:{}", point.x, point.y);
}

Related

How to use Bezier curve interpolation?

I want to interpolate values between multiple points with Bezier curve interpolation using the splines crate.
My current implementation looks like this:
let mut output_buffer: Vec<f32> = vec![0.0; buffer.len()];
let mut points: Vec<Key<f32, f32>> = Vec::new();

for val in pos_index.iter() {
    let x = val.0;
    let y = val.1;
    points.push(Key::new(x, y, Interpolation::Bezier(0.5)));
}

let spline = Spline::from_vec(points);

for i in 0..output_buffer.len() {
    let v = spline.clamped_sample(i as f32).unwrap_or(0.0);
    output_buffer[i] = v;
}

output_buffer
But I do not know what value to put into Interpolation::Bezier(value).
The documentation does not help me either, and I cannot find any examples that use Bezier interpolation.
One important thing to note is that the points (keys) are not evenly spaced, so the distance between them is much larger for the first values, which are closer to zero.
Line #170 of spline.rs shows that the V values of Interpolation::Bezier(V) for cp0 and cp1 (cp stands for control point) are passed to the function V::cubic_bezier as the third and fourth parameters.
Here is the documentation of the function cubic_bezier.
fn cubic_bezier(t: T, a: Self, u: Self, v: Self, b: Self) -> Self
Cubic Bézier interpolation.
a is the first point; b is the second point; u is the output tangent of a to the curve and v is the input tangent of b to the curve.
We can see that u and v stand for tangents, and from the function signature we can see their type is Self, the type that implements the Interpolate trait. The section Implementations on Foreign Types of the Interpolate documentation lists the types that implement the trait. Here are some examples:
impl Interpolate<f32> for Vector1<f32>
impl Interpolate<f32> for Vector2<f32>
impl Interpolate<f64> for Vector1<f64>
impl Interpolate<f64> for Vector2<f64>
Hope this is helpful.
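For example, here is a minimal sketch of how those tangent values might be supplied, assuming plain f32 keys (the control-point values here are arbitrary, made-up numbers):

use splines::{Interpolation, Key, Spline};

fn main() {
    // The value inside Interpolation::Bezier is the control point
    // (outgoing tangent) for that key, expressed in the same space
    // as the key's value.
    let keys: Vec<Key<f32, f32>> = vec![
        Key::new(0.0, 0.0, Interpolation::Bezier(1.0)),
        Key::new(1.0, 2.0, Interpolation::Bezier(3.0)),
        Key::new(2.0, 0.0, Interpolation::Bezier(-1.0)),
    ];
    let spline = Spline::from_vec(keys);

    // clamped_sample clamps t to the range covered by the keys.
    for i in 0..=20 {
        let t = i as f32 * 0.1;
        println!("t = {:.1}, value = {:?}", t, spline.clamped_sample(t));
    }
}

Moving a key's control point further from its value makes the curve leave that key more steeply, which is one way to deal with unevenly spaced keys.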

Rusttype: how to position glyphs vertically

I'm trying to use the rusttype crate to render text. So far it's been fantastic, but I am running into a wall when trying to correctly position individual glyphs.
I render text by drawing each glyph to an RgbaImage (from the image crate, as in the rusttype image example) and then mapping that image onto a quad mesh.
pub struct Glyph {
    pub image: image::RgbaImage,
    pub glyph: rusttype::PositionedGlyph<'static>,
    pub vertical_offset: f32,
}

// &self.font is a rusttype::Font
pub fn draw_glyph(&self, ch: char, font_size: f32) -> Option<Glyph> {
    // Set size
    let scale = rusttype::Scale::uniform(font_size);
    let v_metrics = self.font.v_metrics(scale);

    // Position and scale glyph
    let offset = rusttype::point(0.0, v_metrics.ascent);
    let glyph = self.font.glyph(ch).scaled(scale).positioned(offset);

    // Get glyph dimensions
    let bounds = glyph.pixel_bounding_box()?;
    let glyph_height = (v_metrics.ascent - v_metrics.descent).ceil() as u32;
    let glyph_width = (bounds.max.x - bounds.min.x) as u32;

    // Try to align glyphs on a baseline
    let offset_y = bounds.height() as f32 - v_metrics.ascent;

    // Generate image
    let mut image =
        image::ImageBuffer::from_pixel(glyph_width, glyph_height, image::Rgba([200; 4]));
    glyph.draw(|x, y, v| {
        image.put_pixel(x, y, image::Rgba([(v * 255.0) as u8; 4]));
    });

    Some(Glyph {
        glyph,
        image,
        vertical_offset: offset_y,
    })
}
However, without the vertical_offset calculation, the rendered text comes out with the characters all over the place.
I need to take into account the ascent and descent size of the font. However, this is where I run into issues.
I can compensate for the ascent using the following calculation:
let bounds = glyph.pixel_bounding_box()?;
let offset_y = bounds.height() as f32 - v_metrics.ascent;
And then shifting the quad meshes downwards by offset_y.
This improves the output when there are no descenders like "g" in the text, but it does not help when descending characters are involved.
I cannot figure out how the rusttype examples handle this. They use the Font::layout method, but this doesn't have any special vertical align code.
I'm definitely doing something wrong. Please help!
Try compensating for the descent as well (note that v_metrics.descent is negative in rusttype, since it measures the extent below the baseline):
let offset_y = bounds.height() as f32 - v_metrics.ascent + v_metrics.descent;

Convert Vec<[f64;1]> to Vec<f64>

I am new to Rust and I am playing with it. While following a tutorial, I came across a vector of type Vec<[f64; 1]>. I thought it would be simple to transform it into a Vec<f64>, but I could not find a simpler way than a for loop. Is there another way?
let y: Vec<[f64;1]> = [[1],[2],[3],[4]];
let mut y2: Vec<f64> = Vec::new();
for each in &y {
    y2.push(each[0]);
}
y in your example is not a Vec; you probably forgot vec! in front.
Furthermore, float literals should be written 1.0, not 1.
I don't know why you find the for loop not simple enough, but if you want other ways:
Using the iterator pattern
let y: Vec<[f64; 1]> = vec![[1.0], [2.0], [3.0], [4.0]];
let y2: Vec<f64> = y.iter().map(|&[f]| f).collect();
Using unsafe
Since [f64; 1] and f64 have the same size (8 bytes) and alignment, we can reinterpret the Vec's buffer directly:
let y: Vec<[f64; 1]> = vec![[1.0], [2.0], [3.0], [4.0]];
let y2 = unsafe {
    // Ensure the original vector is not dropped.
    let mut y = std::mem::ManuallyDrop::new(y);
    Vec::from_raw_parts(y.as_mut_ptr() as *mut f64, y.len(), y.capacity())
};
This is more complex, but it will reuse the same memory without copying.
You could use the flatten() method:
let y_vec: Vec<f64> = y.iter().flatten().cloned().collect();
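If I am not mistaken, the slice concat() method gives another one-liner, since a [f64; 1] array borrows as a slice (a small sketch):

let y: Vec<[f64; 1]> = vec![[1.0], [2.0], [3.0], [4.0]];
// concat() copies the arrays' elements into one flat Vec<f64>.
let y2: Vec<f64> = y.concat();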

wgpu-rs: Putting a Matrix3 into a vertex shader results in odd behavior but using a Matrix4 works fine

Using wgpu-rs, I'm trying to get a 3x3 cgmath matrix into a shader (compiled using glsl-to-spirv). However, the resulting mat3 in the shader has incorrect data. When I replace the mat3 and Matrix3 with mat4 and Matrix4, everything works fine and the matrix has correct data.
Vertex Shader:
layout(set = 0, binding = 0) uniform Uniforms {
    mat3 camera_transform;
};
Render Loop:
let mut encoder = self.device.create_command_encoder(&wgpu::CommandEncoderDescriptor {
    label: Some("update encoder"),
});

let staging_buffer = self.device.create_buffer_with_data(
    bytemuck::cast_slice(&[self.uniforms]),
    wgpu::BufferUsage::COPY_SRC,
);

encoder.copy_buffer_to_buffer(
    &staging_buffer,
    0,
    &self.uniform_buffer,
    0,
    std::mem::size_of::<Uniforms>() as wgpu::BufferAddress,
);

self.queue.submit(&[encoder.finish()]);

// ...

render_pass.set_bind_group(0, &self.uniform_bind_group, &[]);
Uniforms:
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct Uniforms {
    pub camera_transform: Matrix3<f32>,
}

unsafe impl bytemuck::Pod for Uniforms {}
unsafe impl bytemuck::Zeroable for Uniforms {}

impl Uniforms {
    pub fn new() -> Self {
        Self {
            camera_transform: Matrix3::identity(),
        }
    }
}
This is an open issue in wgpu-rs. Indeed the simplest workaround may be to make your mat3 into a mat4 until it is resolved.
The problem seems to be an alignment mismatch in the generated SPIR-V. The std140 layout rules state:

1. If the member is a scalar consuming N basic machine units, the base alignment is N.
2. If the member is a two- or four-component vector with components consuming N basic machine units, the base alignment is 2N or 4N, respectively.
3. If the member is a three-component vector with components consuming N basic machine units, the base alignment is 4N.
4. If the member is an array of scalars or vectors, the base alignment and array stride are set to match the base alignment of a single array element, according to rules (1), (2), and (3), and rounded up to the base alignment of a vec4. The array may have padding at the end; the base offset of the member following the array is rounded up to the next multiple of the base alignment.
You are in case 4: a matrix is laid out like an array of column vectors, so each vec3 column of a mat3 gets padded to the alignment of a vec4, which no longer matches the tightly packed Matrix3<f32> on the Rust side. A mat4 leaves no extra padding at the end and gives no opportunity for misalignment.
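Until then, a minimal sketch of the mat4 workaround, assuming cgmath (whose From<Matrix3> impl embeds the 3x3 matrix in the upper-left corner of an identity 4x4), might look like this:

use cgmath::{Matrix3, Matrix4};

#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct Uniforms {
    // A 4x4 matrix of f32 matches std140 layout exactly, with no
    // implicit padding between columns.
    pub camera_transform: Matrix4<f32>,
}

unsafe impl bytemuck::Pod for Uniforms {}
unsafe impl bytemuck::Zeroable for Uniforms {}

impl Uniforms {
    pub fn from_mat3(m: Matrix3<f32>) -> Self {
        Self {
            camera_transform: Matrix4::from(m),
        }
    }
}

On the GLSL side, declare the uniform as mat4 camera_transform; and use mat3(camera_transform) wherever the 3x3 form is needed.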

How do I use read_scanline from the piston image crate?

I just started learning Rust and I really like to learn by building something 'real'. So I read through the Book, installed Rust, played around with the language and got cargo run working.
I then decided to try and read an image from disk and convert that image into a Vec. In this case, I want to detect the color of the pixels and store that somehow.
I broke it into multiple parts to learn Rust and its syntax:
Starting with:
Read an image from disk
Get the pixels of an image
Show colour code
This led me to the following code using the image crate:
extern crate image;

use std::path::Path;
use image::GenericImage;

fn main() {
    let img = image::open(&Path::new("src/maze.gif")).unwrap();
    let pixels = img.pixels();

    for e in pixels {
        let (_, _, color) = e;
        println!("Pixel colour {:?}", color);
    }

    println!("Dimensions {:?}", img.dimensions());
}
So proud as I am, I see some information popping up:
* snip *
Pixel colour Rgba { data: [255, 255, 255, 255] }
Pixel colour Rgba { data: [0, 0, 0, 255] }
Pixel colour Rgba { data: [255, 255, 255, 255] }
* snip *
Now I want to store the pixel information for each line of the image. I would like to have a Vec (is that correct?) with the info. As a PHP array it would look like this:
$a = [
    0 => [Color, Color, Color, Color],
    1 => [Color, Color, Color, Color],
];
Therefore my assumption is to use read_scanline. This is where reading AND understanding the documentation fails me completely.
What I think I need to do is:
Get the decoder for the current opened image
Call read_scanline on that decoder
Iterate over the results
But how?!
The code reads this:
read_scanline(&mut self, buf: &mut [u8]) -> ImageResult<u32>
I break this down as follows:
First argument, an ImageDecoder object.
Second argument, a mutable 8-bit value.
It returns an ImageResult object.
So I tried adapting the code slightly:
extern crate image;

use std::path::Path;
use image::GenericImage;
use image::ImageDecoder;

fn main() {
    let img = image::open(&Path::new("src/maze.gif")).unwrap();
    let pixels = img.pixels();

    let something: &mut [u8];
    let result = image::ImageDecoder::read_scanline(img, something);

    for e in pixels {
        let (_, _, color) = e;
        println!("Pixel colour {:?}", color);
    }

    println!("Dimensions {:?}", img.dimensions());
}
Which, as you might have guessed, fails miserably:
error: mismatched types:
expected `&mut _`,
found `image::dynimage::DynamicImage`
(expected &-ptr,
found enum `image::dynimage::DynamicImage`) [E0308]
src/main.rs:13 let result = image::ImageDecoder::read_scanline(img, something);
Obviously this is due to the fact that I didn't pass an ImageDecoder object. But how can I? How should I read and understand the documentation? I think my problem comes from not understanding &mut self, and I also do not understand how I should instantiate this.
I hope someone can explain what it is I am missing and point me into the right direction.
Note I'm having trouble actually opening a GIF as each line says it is 0 bytes, so there's the possibility I've missed something important... I'll use a JPEG to demonstrate instead.
ImageDecoder is a trait. You need to use a concrete implementation of the trait. The documentation lists all known implementors of the trait, one of which is image::gif::Decoder, another is image::jpeg::JPEGDecoder.
read_scanline takes &mut self, which means you will normally call it with method syntax: object.method(arg1). The first non-self argument is a mutable slice of bytes. The documentation states:
Reads one row from the image into buf and returns the row index
So the pixel data will be stored in the buffer. The next trick is to figure out how many bytes we need for a row and how many rows there are. ImageDecoder::row_len and ImageDecoder::dimensions address that respectively. Put together, we get something like this:
extern crate image;

use std::fs::File;

use image::jpeg;
use image::{GenericImage, ImageDecoder};
use image::{ColorType, Rgb, Pixel};

fn main() {
    let f = File::open("/tmp/cat.jpg").unwrap();
    let mut decoder = jpeg::JPEGDecoder::new(f);

    let (width, height) = decoder.dimensions().unwrap();
    let row_len = decoder.row_len().unwrap();
    println!("{} x {}, {}", width, height, row_len);

    let rows: Vec<_> = (0..height).map(|_| {
        let mut row = vec![0; row_len];
        decoder.read_scanline(&mut row).unwrap();
        row
    }).collect();
}
This loads the JPEG, row-by-row, into a Vec<Vec<u8>> - a vector of vectors of bytes.
To convert the raw data into pixels, we can use Pixel::from_slice, another trait method. In this case, I only handle one type of pixel - 8-bit RGB:
let colortype = decoder.colortype().unwrap();
assert_eq!(colortype, ColorType::RGB(8)); // Others are left to you!

let row_colors: Vec<Vec<_>> = rows.iter().map(|r| {
    r.chunks(3).map(|p| Rgb::from_slice(p)).collect()
}).collect();
I'm not in love with this because of the hard-coded 3. There should be some way of knowing that this pixel type only takes 3 bytes, but I'm not seeing anything obvious.
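One hedged way around the hard-coded 3 is to derive the bytes-per-pixel from values the decoder already reported, instead of from the color type (a sketch reusing width and row_len from above):

// row_len is the byte length of one row, so dividing by the image
// width gives the number of bytes per pixel (3 for 8-bit RGB).
let bytes_per_pixel = row_len / width as usize;

let row_colors: Vec<Vec<_>> = rows.iter().map(|r| {
    r.chunks(bytes_per_pixel).map(|p| Rgb::from_slice(p)).collect()
}).collect();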
