How to set the Rust Piston window spawn position?

How do I set the start coordinates for a Piston window at spawn?
I can move it afterwards with .set_position(), but that is not exactly what I want.
fn main() {
    ...
    let mut window: GlutinWindow = WindowSettings::new("spinning-square", [0, 0])
        .graphics_api(opengl)
        .exit_on_esc(true)
        .build()
        .unwrap();
    ...
    window.set_position((settings.window_position.0, settings.window_position.1));
    window.set_size([size.0, size.1]);
Also, do you perhaps know how to spawn it without switching mouse focus away from the IDE, for debugging?
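For reference, a minimal sketch of the usual workaround, assuming the same glutin_window/piston setup as above: WindowSettings does not appear to expose a start position, so the window is repositioned with set_position (from the AdvancedWindow trait) immediately after build(), before the event loop runs. The coordinates below are placeholders, and the window may still flash briefly at its default position:
use glutin_window::GlutinWindow;
use piston::window::{AdvancedWindow, WindowSettings};

fn main() {
    // [300, 300] is only the window size; WindowSettings itself
    // does not seem to take spawn coordinates.
    let mut window: GlutinWindow = WindowSettings::new("spinning-square", [300, 300])
        .exit_on_esc(true)
        .build()
        .unwrap();

    // Reposition before any events are processed, so the move is
    // effectively part of start-up. (100, 100) is a placeholder.
    window.set_position((100, 100));

    // ... run the usual event loop here ...
}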

Related

Display ImageBuffer on application window with Rust

What is the best way in Rust to display a dynamically generated image?
I have
let image: ImageBuffer<Rgb<u8>, Vec<u8>> = ...
How do I show it in a GUI?
I tried with fltk, but didn't find anything usable for ImageBuffer.
use fltk::...
let app = App::default().with_scheme(AppScheme::Gleam);
let mut wind = Window::new(100, 100, 400, 300, "Hello from rust");
wind.set_image(Some(image));
won't work because:
the trait `ImageExt` is not implemented for `image::buffer_::ImageBuffer<image::color::Rgb<u8>, Vec<u8>>`
Do I have to encode my ImageBuffer image to some specific format, so that the GUI library can decode it back and display it afterward?
Or maybe I should use some other UI library than fltk?
You can create an fltk RgbImage from an ImageBuffer (from the image crate) using:
let rgbimage = RgbImage::new(&image.to_rgb8(), image.width() as i32, image.height() as i32, ColorDepth::Rgb8).unwrap();
I was able to solve it with the help of mo_al_'s answer. I just had to use into_raw(), because to_rgb8() and into_rgb8() were not implemented for the type.
let w = image.width() as i32;
let h = image.height() as i32;
let image_rgb = RgbImage::new(&domain_image.into_raw(), w, h, ColorDepth::Rgb8).unwrap();
...
let mut frame = Frame::new(0, 0, w, h, "");
frame.set_image(Some(image_rgb));
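Put together, a minimal self-contained sketch of the whole flow, assuming fltk 1.x and the image crate; the generated gradient and the window dimensions are placeholders:
use fltk::{app::App, enums::ColorDepth, frame::Frame, image::RgbImage, prelude::*, window::Window};
use image::{ImageBuffer, Rgb};

fn main() {
    // Placeholder: generate a 400x300 gradient instead of a real image.
    let img: ImageBuffer<Rgb<u8>, Vec<u8>> =
        ImageBuffer::from_fn(400, 300, |x, y| Rgb([x as u8, y as u8, 128]));

    let (w, h) = (img.width() as i32, img.height() as i32);
    // into_raw() yields the flat Vec<u8> of RGB bytes that fltk expects.
    let rgb = RgbImage::new(&img.into_raw(), w, h, ColorDepth::Rgb8).unwrap();

    let app = App::default();
    let mut wind = Window::new(100, 100, w, h, "ImageBuffer in fltk");
    let mut frame = Frame::new(0, 0, w, h, "");
    frame.set_image(Some(rgb));
    wind.end();
    wind.show();
    app.run().unwrap();
}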

CPU time sleep instead of wall-clock time sleep

Currently, I have the following Rust toy program:
use rayon::prelude::*;
use std::{env, thread, time};

/// Sleeps 1 second n times in parallel using rayon
fn rayon_sleep(n: usize) {
    let millis = vec![0; n];
    millis
        .par_iter()
        .for_each(|_| thread::sleep(time::Duration::from_millis(1000)));
}

fn main() {
    let args: Vec<String> = env::args().collect();
    let n = args[1].parse::<usize>().unwrap();
    let now = time::Instant::now();
    rayon_sleep(n);
    println!("rayon: {:?}", now.elapsed());
}
Basically, my program accepts one input argument n. Then, I sleep for 1 second n times. The program executes the sleep tasks in parallel using rayon.
However, this is not exactly what I want. As far as I know, thread::sleep sleeps according to wall-clock time. However, I would like to keep a virtual CPU busy for 1 second in CPU time.
Is there any way to do this?
EDIT
I would like to make this point clear: I don't mind if the OS preempts the tasks. However, if this happens, then I don't want to consider the time the task spends in the ready/waiting queue.
EDIT
This is a simple, illustrative example of what I need to do. In reality, I have to develop a benchmark for a crate that allows defining and simulating models using the DEVS formalism. The benchmark aims to compare DEVS-compliant libraries with each other, and it explicitly says that the models must spend a fixed, known amount of CPU time. That is why I need to make sure of it; I can use neither a plain busy loop nor a simple sleep.
I followed Sven Marnach's suggestions and implemented the following function:
use cpu_time::ThreadTime;
use rayon::prelude::*;
use std::{env, time};

/// Busy-waits for 1 second of CPU time, n times in parallel, using rayon
fn rayon_sleep(n: usize) {
    let millis = vec![0; n];
    millis.par_iter().for_each(|_| {
        let duration = time::Duration::from_millis(1000);
        let mut x: u32 = 0;
        let now = ThreadTime::now(); // get current thread time
        while now.elapsed() < duration {
            // active sleep
            std::hint::black_box(&mut x); // to avoid compiler optimizations
            x = x.wrapping_add(1);
        }
    });
}

fn main() {
    let args: Vec<String> = env::args().collect();
    let n = args[1].parse::<usize>().unwrap();
    let now = time::Instant::now();
    rayon_sleep(n);
    println!("rayon: {:?}", now.elapsed());
}
If I set n to 8, it takes roughly 2 seconds. I'd have expected better performance (1 second, as I have 8 vCPUs), but I guess the overhead comes from the OS scheduling policy.
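To verify the CPU-time accounting, a small self-contained sketch, assuming the cpu_time crate; ProcessTime measures CPU time summed over all threads of the process:
use cpu_time::{ProcessTime, ThreadTime};
use rayon::prelude::*;
use std::time;

fn main() {
    let wall = time::Instant::now();
    let cpu = ProcessTime::now();
    // Same busy-wait as above: 8 tasks of 1 s of thread CPU time each.
    (0..8).into_par_iter().for_each(|_| {
        let start = ThreadTime::now();
        let mut x: u32 = 0;
        while start.elapsed() < time::Duration::from_secs(1) {
            std::hint::black_box(&mut x);
            x = x.wrapping_add(1);
        }
    });
    // ProcessTime sums CPU time over all threads, so this should print
    // a CPU time close to 8 s even though wall time is only ~1-2 s.
    println!("wall: {:?}, cpu: {:?}", wall.elapsed(), cpu.elapsed());
}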

Why is my LED (stm32f3-discovery board) not glowing after applying a delay?

Is there any mistake I am making in applying the delay?
This is the code I am working with to blink LEDs 3 and 4 with a delay.
use cortex_m_rt::entry;
use stm32f30x_hal as hal;
use hal::delay::Delay;
use hal::prelude::*;
use hal::stm32f30x;
use panic_halt;

#[entry]
fn main() -> ! {
    let device_p = stm32f30x::Peripherals::take().unwrap();
    let core_periphs = cortex_m::Peripherals::take().unwrap();
    let mut reset_clock_control = device_p.RCC.constrain();
    let mut gpioe = device_p.GPIOE.split(&mut reset_clock_control.ahb);
    let mut flash = device_p.FLASH.constrain();
    let clocks = reset_clock_control.cfgr.freeze(&mut flash.acr);
    let mut delay = Delay::new(core_periphs.SYST, clocks);
    let mut led_3 = gpioe
        .pe9
        .into_push_pull_output(&mut gpioe.moder, &mut gpioe.otyper);
    let mut led_4 = gpioe
        .pe8
        .into_push_pull_output(&mut gpioe.moder, &mut gpioe.otyper);
    loop {
        led_3.set_high();
        delay.delay_ms(2_000_u16);
        led_4.set_high();
    }
}
If I leave out the delay part, it works fine.
I think you set up your clocks wrong. For the delay to work correctly you should use the system clock.
This is how to create the Delay for STM32 based on this sample (stm32f4xx, but should work for you, too):
// Set up the system clock. We want to run at 48 MHz for this one.
let rcc = dp.RCC.constrain();
let clocks = rcc.cfgr.sysclk(48.mhz()).freeze();

// Create a delay abstraction based on SysTick
let mut delay = hal::delay::Delay::new(cp.SYST, clocks);
where dp are my device peripherals (e.g. let dp = stm32::Peripherals::take().unwrap()) and cp are the core peripherals.
So this uses the sysclk.
Alternatively, you could try replacing your delay with cortex_m::asm::delay(8_000_000);, where the delay is given as a number of clock cycles.
In the loop you set the LED high with led_3.set_high(); however, you never set led_3 low again, so it would never blink. Change your loop to:
led_3.set_high();
led_4.set_low();
delay.delay_ms(2_000_u16);
led_4.set_high();
led_3.set_low();
delay.delay_ms(2_000_u16); // without this second delay the first state returns immediately

How do I get a vector of u8 RGB values when using the image crate?

I need to take an image and get a list of RGB byte values. I am using the image crate. This is what I have:
extern crate image;

fn main() {
    let im = image::open("wall.jpg").unwrap().to_rgb();
    let data: Vec<[u8; 3]> = im.pixels().flat_map(|p| vec![p.data]).collect();
    let rgb: Vec<&u8> = data.iter().flat_map(|p| p.iter()).collect();
    println!("First Pixel: {} {} {}", rgb[0], rgb[1], rgb[2]);
}
This seems pretty ugly: I have to introduce an intermediate variable, and I get a vector of references to the values I really need, so before I can do anything else I would have to map over it again to get the actual values.
All I want is a vector of u8. How do I get that?
As of 0.23.12, to_rgb has been deprecated; use DynamicImage::to_rgb8 or DynamicImage::to_bytes instead:
let im = image::open("wall.jpg").unwrap().to_rgb8();
let rgb: Vec<u8> = im.into_raw();
// Alternatively
let bytes = image::open("wall.jpg").unwrap().to_bytes();
Prior to 0.23.12, if you just want the raw data itself, you can just call DynamicImage::raw_pixels:
let im = image::open("wall.jpg").unwrap().to_rgb();
let rgb: Vec<u8> = im.raw_pixels();
If all you're actually interested in is the first pixel though, I'd recommend calling GenericImage::get_pixel:
let im = image::open("wall.jpg").unwrap();
let first_pixel = im.get_pixel(0, 0);
Which you can then turn into a [u8; 3] array of RGB data:
let rgb = first_pixel.to_rgb();
println!("First Pixel: {} {} {}", rgb.data[0], rgb.data[1], rgb.data[2]);

How do I use read_scanline from the piston image crate?

I just started learning Rust and I really like to learn by building something 'real'. So I read through the Book, installed Rust, played around with the language and got cargo run working.
I then decided to try and read an image from disk and convert that image into a Vec. In this case, I want to detect the color of the pixels and store that somehow.
I broke it into multiple parts to learn Rust and its syntax:
Starting with:
Read an image from disk
Get the pixels of an image
Show colour code
This led me to the following code using the image crate:
extern crate image;

use std::path::Path;
use image::GenericImage;

fn main() {
    let img = image::open(&Path::new("src/maze.gif")).unwrap();
    let pixels = img.pixels();
    for e in pixels {
        let (_, _, color) = e;
        println!("Pixel colour {:?}", color);
    }
    println!("Dimensions {:?}", img.dimensions());
}
Proud as I am, I see some information popping up:
* snip *
Pixel colour Rgba { data: [255, 255, 255, 255] }
Pixel colour Rgba { data: [0, 0, 0, 255] }
Pixel colour Rgba { data: [255, 255, 255, 255] }
* snip *
Now I want to store for each line of the image its pixel information. I would like to have a Vec (is that correct?) with the info. A PHP array would look like this:
$a = [
    0 => [Color, Color, Color, Color],
    1 => [Color, Color, Color, Color],
];
Therefore my assumption is to use read_scanline. This is where reading AND understanding the documentation fails me completely.
What I think I need to do is:
Get the decoder for the current opened image
Call read_scanline on that decoder
Iterate over the results
But how?!
The code reads this:
read_scanline(&mut self, buf: &mut [u8]) -> ImageResult<u32>
I break this down as follows:
First argument: an ImageDecoder object.
Second argument: a mutable 8-bit value.
It returns an ImageResult object.
So I tried adapting the code slightly:
extern crate image;

use std::path::Path;
use image::GenericImage;
use image::ImageDecoder;

fn main() {
    let img = image::open(&Path::new("src/maze.gif")).unwrap();
    let pixels = img.pixels();
    let something: &mut [u8];
    let result = image::ImageDecoder::read_scanline(img, something);
    for e in pixels {
        let (_, _, color) = e;
        println!("Pixel colour {:?}", color);
    }
    println!("Dimensions {:?}", img.dimensions());
}
Which, as you might have guessed, fails miserably:
error: mismatched types:
expected `&mut _`,
found `image::dynimage::DynamicImage`
(expected &-ptr,
found enum `image::dynimage::DynamicImage`) [E0308]
src/main.rs:13 let result = image::ImageDecoder::read_scanline(img, something);
Obviously this is because I didn't pass an ImageDecoder object. But how can I? How should I read and understand the documentation? I think my problem is due to not understanding &mut self, and I also do not understand how I should instantiate this.
I hope someone can explain what it is I am missing and point me into the right direction.
Note I'm having trouble actually opening a GIF as each line says it is 0 bytes, so there's the possibility I've missed something important... I'll use a JPEG to demonstrate instead.
ImageDecoder is a trait. You need to use a concrete implementation of the trait. The documentation lists all known implementors of the trait, one of which is image::gif::Decoder, another is image::jpeg::JPEGDecoder.
read_scanline accepts a mutable self, which means that you normally will call it with method syntax: object.method(arg1). The first non-self argument is a mutable slice of bytes. The documentation states:
Reads one row from the image into buf and returns the row index
So the pixel data will be stored in the buffer. The next trick is to figure out how many bytes we need for a row and how many rows there are. ImageDecoder::row_len and ImageDecoder::dimensions address that respectively. Put together, we get something like this:
extern crate image;

use std::fs::File;
use image::jpeg;
use image::{GenericImage, ImageDecoder};
use image::{ColorType, Rgb, Pixel};

fn main() {
    let f = File::open("/tmp/cat.jpg").unwrap();
    let mut decoder = jpeg::JPEGDecoder::new(f);
    let (width, height) = decoder.dimensions().unwrap();
    let row_len = decoder.row_len().unwrap();
    println!("{} x {}, {}", width, height, row_len);

    let rows: Vec<_> = (0..height).map(|_| {
        let mut row = vec![0; row_len];
        decoder.read_scanline(&mut row).unwrap();
        row
    }).collect();
}
This loads the JPEG, row-by-row, into a Vec<Vec<u8>> - a vector of vectors of bytes.
To convert the raw data into pixels, we can use Pixel::from_slice, another trait method. In this case, I only handle one type of pixel - 8-bit RGB:
let colortype = decoder.colortype().unwrap();
assert_eq!(colortype, ColorType::RGB(8)); // Others are left to you!

let row_colors: Vec<Vec<_>> = rows.iter().map(|r| {
    r.chunks(3).map(|p| Rgb::from_slice(p)).collect()
}).collect();
I'm not in love with this because of the hard-coded 3. There should be some way of knowing that this pixel type only takes 3 bytes, but I'm not seeing anything obvious.
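One way around the hard-coded 3, as a sketch continuing the example above and assuming rows are tightly packed (no padding), is to derive the per-pixel byte count from values the decoder already returned:
// row_len = width * bytes_per_pixel for a tightly packed row.
let bytes_per_pixel = row_len / width as usize;
assert_eq!(bytes_per_pixel, 3); // sanity check for ColorType::RGB(8)

let row_colors: Vec<Vec<_>> = rows.iter().map(|r| {
    r.chunks(bytes_per_pixel).map(|p| Rgb::from_slice(p)).collect()
}).collect();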
