I wanted to try to make a game in Rust using Piston; this is the first time I have used this library. I took this code from the official documentation to test it. However, as soon as my mouse touches the application window, the window closes immediately and I don’t understand why.
extern crate glutin_window;
extern crate graphics;
extern crate opengl_graphics;
extern crate piston;
use glutin_window::GlutinWindow as Window;
use opengl_graphics::{GlGraphics, OpenGL};
use piston::event_loop::{EventSettings, Events};
use piston::input::{RenderArgs, RenderEvent, UpdateArgs, UpdateEvent};
use piston::window::WindowSettings;
pub struct App {
gl: GlGraphics, // OpenGL drawing backend.
rotation: f64, // Rotation for the square.
}
impl App {
fn render(&mut self, args: &RenderArgs) {
use graphics::*;
const GREEN: [f32; 4] = [0.0, 1.0, 0.0, 1.0];
const RED: [f32; 4] = [1.0, 0.0, 0.0, 1.0];
let square = rectangle::square(0.0, 0.0, 50.0);
let rotation = self.rotation;
let (x, y) = (args.window_size[0] / 2.0, args.window_size[1] / 2.0);
self.gl.draw(args.viewport(), |c, gl| {
// Clear the screen.
clear(GREEN, gl);
let transform = c
.transform
.trans(x, y)
.rot_rad(rotation)
.trans(-25.0, -25.0);
// Draw a box rotating around the middle of the screen.
rectangle(RED, square, transform, gl);
});
}
fn update(&mut self, args: &UpdateArgs) {
// Rotate 2 radians per second.
self.rotation += 2.0 * args.dt;
}
}
fn main() {
// Change this to OpenGL::V2_1 if not working.
let opengl = OpenGL::V3_2;
// Create a Glutin window.
let mut window: Window = WindowSettings::new("spinning-square", [200, 200])
.graphics_api(opengl)
.exit_on_esc(true)
.build()
.unwrap();
// Create a new game and run it.
let mut app = App {
gl: GlGraphics::new(opengl),
rotation: 0.0,
};
let mut events = Events::new(EventSettings::new());
while let Some(e) = events.next(&mut window) {
if let Some(args) = e.render_args() {
app.render(&args);
}
if let Some(args) = e.update_args() {
app.update(&args);
}
}
}
The error:
thread 'main' panicked at 'attempted to leave type `platform::platform::x11::util::input::PointerState` uninitialized, which is invalid', /home/zenmoa/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/mem/mod.rs:658:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
This is apparently a bug in an old version of winit, which is fixed in the latest winit release. However, various crates (e.g. amethyst, piston, ggez) still depend on winit 0.19, and with Rust 1.48.0 the issue manifests as this panic.
On the amethyst issue tracker a comment mentions that, for now, a possible workaround is to revert to Rust 1.47.0. If you're using rustup, you can do that by executing the following command:
rustup default 1.47.0
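If you'd rather not change your global default toolchain, rustup can also pin the older compiler to a single project. As a sketch, run the following inside the project directory (it can be undone later with rustup override unset):
rustup override set 1.47.0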
At the moment, my nodes are made of a struct which hold a name and a value.
Later the graph is created.
I want to change the value of certain nodes. I also want the struct to hold many other categories at some point.
The problem is that I'm not able to change the values of the struct in the node after the nodes have been put together in the graph.
In the code below, I have 3 nodes, two of which are connected to a main node. I find the children of the main node, and then for each child I try to change the values of its struct to an arbitrary value (for testing).
use petgraph::stable_graph::StableGraph;
#[derive(Clone, Debug, Default)]
struct ControlBloc {
name: String,
value: u32,
}
#[derive(Clone, PartialEq, PartialOrd, Debug, Default)]
struct Arcs {
speed_arcs: f32,
bandwidth: f32,
state_bandwidth: f32,
}
fn create_bloc(name: String, value: u32) -> ControlBloc {
ControlBloc { name, value }
}
fn main() {
let mut graph = StableGraph::<&ControlBloc, &Arcs>::new();
let m = &create_bloc(String::from("Main"), 10);
let b1 = &create_bloc(String::from("sub1"), 20);
let b2 = &create_bloc(String::from("sub2"), 30);
let main = graph.add_node(m);
let sub1 = graph.add_node(b1);
let sub2 = graph.add_node(b2);
graph.add_edge(
main,
sub1,
&Arcs {
speed_arcs: 1.0,
bandwidth: 1.02,
state_bandwidth: 0.0,
},
);
graph.add_edge(
main,
sub2,
&Arcs {
speed_arcs: 2.0,
bandwidth: 2.02,
state_bandwidth: 0.0,
},
);
let neighb = graph.neighbors(main).collect::<Vec<_>>();
for i in 0..neighb.len() {
let &mut nod = graph.node_weight_mut(neighb[i]).unwrap();
let ar = *graph
.edge_weight(graph.find_edge(main, neighb[i]).unwrap())
.unwrap();
println!("{:?}", nod);
nod.value = 10; // ########## Problem Here :( ################
println!("{:?}", nod);
println!("{:?}", ar);
}
}
Whatever I try, I get this error:
error[E0594]: cannot assign to `nod.value`, which is behind a `&` reference
--> src/main.rs:56:9
|
51 | let &mut nod = graph.node_weight_mut(neighb[i]).unwrap();
| --- consider changing this binding's type to be: `&mut ControlBloc`
...
56 | nod.value = 10; // ########## Problem Here :( ################
| ^^^^^^^^^^^^^^ `nod` is a `&` reference, so the data it refers to cannot be written
Is it telling me that it is actually impossible to change the values once they have been put in the graph? Printing them works fine, but that is of little use if I can't modify them.
One solution I could think of would be to create a copy of the node with the corrected value and then replace the old node with the new one, rewiring its edges in the graph. That, however, seems like a long-winded solution if there is a way to fix the original code.
Your problem is that your graph does not own the values; it only holds references to them.
You can't mutate data through a shared reference. You would either have to store &mut references, which is often problematic, or transfer ownership of the values to the graph.
Here's my code, which works:
fn main() {
let mut graph = StableGraph::<ControlBloc, Arcs>::new();
let m = create_bloc(String::from("Main"), 10);
let b1 = create_bloc(String::from("sub1"), 20);
let b2 = create_bloc(String::from("sub2"), 30);
let main = graph.add_node(m);
let sub1 = graph.add_node(b1);
let sub2 = graph.add_node(b2);
graph.add_edge(
main,
sub1,
Arcs {
speed_arcs: 1.0,
bandwidth: 1.02,
state_bandwidth: 0.0,
},
);
graph.add_edge(
main,
sub2,
Arcs {
speed_arcs: 2.0,
bandwidth: 2.02,
state_bandwidth: 0.0,
},
);
let neighb = graph.neighbors(main).collect::<Vec<_>>();
for i in 0..neighb.len() {
let nod = graph.node_weight_mut(neighb[i]).unwrap();
println!("{:?}", nod);
nod.value = 10;
println!("{:?}", nod);
let ar = graph
.edge_weight(graph.find_edge(main, neighb[i]).unwrap())
.unwrap();
println!("{:?}", ar);
}
}
Please note how I changed StableGraph::<&ControlBloc, &Arcs>::new() to StableGraph::<ControlBloc, Arcs>::new(), which could ultimately be shortened to StableGraph::new(), because Rust is able to infer the types.
Also note that I moved the let ar = ... lookup below the mutation: node_weight_mut borrows the graph mutably, so calling edge_weight while nod is still in use would produce another borrow error.
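For illustration, this is roughly what the rejected ordering would look like (a fragment reusing graph, neighb and main from the code above, only to show the conflict):
// `nod` is a mutable borrow of `graph` that stays alive until its last use below,
// so the immutable borrow needed by `edge_weight` is rejected with error[E0502].
let nod = graph.node_weight_mut(neighb[i]).unwrap();
let ar = graph
    .edge_weight(graph.find_edge(main, neighb[i]).unwrap())
    .unwrap();
nod.value = 10; // last use of `nod`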
I'm sure there's an easy way to do this, but I don't know what it is. I have a very basic gtk::{Application, ApplicationWindow, DrawingArea} setup. I want the DrawingArea::connect_draw closure to be triggered repeatedly on a timer, so it updates according to some changing state. (It would also be cool if it could be actively triggered by other threads, but a timer is fine.)
So far everything I've found that would work on a timer fails because it would mean moving the ApplicationWindow to another thread (it fails with NonNull<GObject> cannot be shared between threads safely). What I have currently triggers a redraw on generic events, so clicking the mouse on the window redraws it, but it does not happen automatically.
That code is below; please show me how to make this work.
//BOILER PLATE SCROLL DOWN
extern crate cairo;
extern crate rand;
extern crate gtk;
extern crate gdk;
extern crate glib;
use std::{thread, time};
use gtk::prelude::*;
use gtk::{Application, ApplicationWindow, DrawingArea};
use std::sync::mpsc;
use std::sync::mpsc::{Receiver, Sender};
fn main(){
let app = Application::builder()
.application_id("org.example.HelloWorld")
.build();
let (tx, rx ) : (Sender<f64>, Receiver<f64>)= mpsc::channel();
gtk::init().expect("GTK init failed");
let draw_area = DrawingArea::new();
let _id = draw_area.connect_draw(move |_unused, f| {
let red = rx.recv().unwrap();
f.set_source_rgb(red,0.5, 0.5);
f.paint().expect("Painting failed");
Inhibit(false)
});
app.connect_activate(move |app| {
let win = ApplicationWindow::builder()
.application(app)
.default_width(320)
.default_height(200)
.title("Hello, World!")
.build();
win.add(&draw_area);
win.show_all();
//IMPORTANT STUFF STARTS HERE
win.connect_event(|w, _g|{ //HORRIBLE HACK HELP FIX
w.queue_draw();
Inhibit(false)
});
glib::timeout_add_seconds(1, ||{
println!("I wish I could redraw instead of printing this line");
Continue(true)
});
//fails with "`NonNull<GObject>` cannot be shared between threads safely" :
// glib::timeout_add_seconds(1, ||{
// win.queue_draw();
// Continue(true)
// });
//IMPORTANT STUFF ENDS HERE
});
thread::spawn(move || {
loop {
thread::sleep(time::Duration::from_millis(100));
tx.send(rand::random::<f64>()).unwrap();
}
});
app.run();
}
EDIT: I tried a mutex version; maybe I have implemented it wrong. The following code gives the same error (NonNull<GObject> cannot be shared between threads safely):
let mut_win = Mutex::new(win);
let arc_win = Arc::new(mut_win);
glib::timeout_add_seconds(1, move ||{
let mut w = arc_win.lock().unwrap();
(*w).queue_draw();
Continue(true)
});
Use glib::timeout_add_seconds_local() instead of the non-local version if you're doing everything on the same thread.
The generic version requires a Send-able closure: it can be registered from any thread at any time, and the closure is then invoked on your main thread. The local version can only be called from the main thread and panics otherwise.
By not requiring a Send-able closure, you can move a reference to your widgets into the closure.
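As a minimal sketch of what that looks like in the question's activate handler (assuming the same gtk/glib versions as the question; win is the ApplicationWindow built in connect_activate), the timer closure can simply own a clone of the window handle:
// Inside app.connect_activate, after win.show_all():
let win_ref = win.clone();
glib::timeout_add_seconds_local(1, move || {
    // Runs on the main thread, so touching the widget is allowed here.
    win_ref.queue_draw();
    glib::Continue(true)
});
This should be able to replace both the connect_event hack and the commented-out non-local timeout.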
Okay, I eventually made it work after stumbling onto gtk-rs: how to update view from another thread. The key is to stash the window in a thread-local global (TBH I don't really understand what that means, but it works) and then access it through a static function.
I had to modify the linked answer a bit because of scope disagreements between my channel and my window. Eventually I just decided to deal with them separately.
I strongly suspect this is not the right way to do this, but at least it runs.
extern crate cairo;
extern crate rand;
extern crate gtk;
extern crate gdk;
extern crate glib;
use std::sync::{Arc, Mutex};
use std::{thread, time, u32};
use gtk::prelude::*;
use gtk::{Application, ApplicationWindow, DrawingArea};
use std::sync::mpsc;
use std::sync::mpsc::{Receiver, Sender};
use std::cell::RefCell;
const SIZE : usize = 400;
type Message = (usize, usize);
type Grid = [[bool; SIZE]; SIZE];
thread_local!(
static GLOBAL: RefCell<Option<ApplicationWindow>> = RefCell::new(None);
);
fn check_update_display(){
GLOBAL.with(|global|{
if let Some(win) = &*global.borrow() {
win.queue_draw();
}
})
}
fn main(){
let app = Application::builder()
.application_id("org.example.HelloWorld")
.build();
let (tx, rx ) : (Sender<Message>, Receiver<Message>) = mpsc::channel();
gtk::init().expect("GTK init failed");
let draw_area = DrawingArea::new();
let grid_mut = Arc::new(Mutex::new([[false; SIZE]; SIZE]));
let draw_grid_mut = Arc::clone(&grid_mut);
let _id = draw_area.connect_draw(move |_unused, f| {
let grid = *(draw_grid_mut.lock().unwrap());
f.set_source_rgb(0.0,0.0, 0.0);
f.paint().expect("Painting failed");
f.set_source_rgb(1.0,1.0, 1.0);
let mut count = 0;
for i in 0 .. SIZE{
for j in 0 .. SIZE {
if grid[i][j] {
count = count + 1;
f.move_to(i as f64, j as f64);
f.rectangle(i as f64 * 3.0, j as f64 * 3.0 , 1.0, 1.0);
}
}
}
f.stroke().unwrap();
Inhibit(false)
});
let reader_grid = Arc::clone(&grid_mut);
thread::spawn(move ||{
loop{
let mut g = reader_grid.lock().unwrap();
let (x, y) = rx.recv().unwrap();
g[x][y] = true;
drop(g);
thread::sleep(time::Duration::from_millis(10));
}
});
app.connect_activate(move |app| {
let win =ApplicationWindow::builder()
.application(app)
.default_width(320)
.default_height(200)
.title("steveburg")
.build();
win.add(&draw_area);
win.show_all();
GLOBAL.with(|global|{
*global.borrow_mut() = Some(win);
});
glib::timeout_add_seconds(1, move ||{
check_update_display();
Continue(true)
});
});
thread::spawn(move || {
steveburg(tx);
});
app.run();
}
fn random_pair() -> (i32, i32) {
let (x, y) = ((rand::random::<u32>() % 3) as i32 - 1, (rand::random::<u32>() % 3) as i32 - 1);
(x, y)
}
fn steveburg(tx : Sender<Message>){
let mut grid : Grid = [[false; SIZE]; SIZE];
loop{
let (mut x, mut y) = (SIZE/2, SIZE/2);
'drift: loop {
if x == 0 || x == SIZE - 1 || y == 0 || y == SIZE - 1 {
break 'drift;
}
for nx in 0 .. 3 {
for ny in 0 .. 3 {
if grid[x + nx -1][y + ny -1] {break 'drift}
}
}
let (xa, ya) = random_pair();
(x, y) = ((x as i32+ xa) as usize, (y as i32 + ya) as usize);
}
grid[x][y] = true;
tx.send((x, y)).unwrap();
thread::sleep(time::Duration::from_millis(10));
}
}
I want to update a cairo drawing inside a DrawingArea. I tried to achieve this by calling DrawingArea::connect_draw(...) with a new function as the parameter. My issue is that it does not replace the original drawing function, but calls both when showing the window.
Here is an example
extern crate cairo;
extern crate gio;
extern crate gtk;
use gio::prelude::*;
use gtk::prelude::*;
fn main() {
let application = gtk::Application::new(Some("com.example"), Default::default())
.expect("Initialization failed...");
application.connect_activate(|app| {
build_ui(app);
});
application.run(&vec![]);
}
fn build_ui(application: &gtk::Application) {
let window = get_window(application);
let drawing_area = Box::new(gtk::DrawingArea::new)();
// Set drawing function
drawing_area.connect_draw(|_, ctx| draw(ctx, 0.5, 2.0));
// Change drawing function
drawing_area.connect_draw(|_, ctx| draw(ctx, 0.9, 1.0)); // <-- Why is this not working as expected?
window.add(&drawing_area);
window.show_all();
}
fn get_window(application: &gtk::Application) -> gtk::ApplicationWindow {
let window = gtk::ApplicationWindow::new(application);
window.set_default_size(500i32, 500i32);
// Set transparency
set_visual(&window, None);
window.connect_screen_changed(set_visual);
window.set_app_paintable(true);
window
}
fn draw(ctx: &cairo::Context, param1: f64, param2: f64) -> gtk::Inhibit {
ctx.scale(500f64, 500f64);
ctx.set_source_rgba(1.0, 0.2, 0.2, param1);
ctx.arc(0.5, 0.5, 0.2, 0.0, 3.1414 * param2);
ctx.fill();
Inhibit(false)
}
fn set_visual(window: &gtk::ApplicationWindow, _screen: Option<&gdk::Screen>) {
if let Some(screen) = window.get_screen() {
if let Some(ref visual) = screen.get_rgba_visual() {
window.set_visual(Some(visual));
}
}
}
I expect half a circle to be shown. Yet the old, full circle is still there, even though I tried to replace connect_draw. How can I properly replace this?
Using the connect_draw function attaches another signal handler; it will not replace existing handlers. However, the function returns a SignalHandlerId which you can then use to disconnect your original signal handler via the signal_handler_disconnect function (see https://gtk-rs.org/docs/glib/signal/fn.signal_handler_disconnect.html).
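As a sketch, build_ui from the question could keep the returned id and disconnect the first handler before installing the second one (exact paths depend on the glib/gtk versions in use):
// Set the drawing function and remember the handler id
let first_handler = drawing_area.connect_draw(|_, ctx| draw(ctx, 0.5, 2.0));
// Remove the old handler, then install the replacement
glib::signal::signal_handler_disconnect(&drawing_area, first_handler);
drawing_area.connect_draw(|_, ctx| draw(ctx, 0.9, 1.0));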
Consider two programs, and the difference between them:
$ diff flashes/src/main.rs doesnt_flash/src/main.rs
22,23c22
<
< let mut i = 0;
---
> let mut cursor_poses: Vec<(f64, f64)> = Vec::new();
28c27
< mx = x; my = y;
---
> cursor_poses.push((x,y));
32,33c31,33
< if i == 0 {
< graphics::clear([1.0; 4], g);
---
> graphics::clear([1.0; 4], g);
> for &(x, y) in cursor_poses.iter() {
> draw_cursor_pos([x, y], &c, g);
35,36d34
< draw_cursor_pos([mx, my], &c, g);
< i+=1;
Video demonstration of the two programs.
The program is an extremely basic paint program: one brush width, one stroke color, a fixed canvas size, no saving, and so on. Also, to stop drawing, move your mouse out of the window, since every time the cursor passes over the window it counts as drawing ;-)
flashes.rs does not draw every pixel every time e.render_args() is reached, except the first time. doesnt_flash.rs does draw every pixel every time e.render_args() is reached. This is the only difference between the two programs.
Generating the content in this program does not take long, so it is acceptable to re-generate it hundreds of times as the mouse moves over the window, but it still seems inefficient. In theory, as more and more points are added to the screen, each iteration of gl.draw takes longer and longer; in practice, the difference between calling graphics::ellipse once and ten thousand times is not significant on modern hardware.
Other programs I'd want to write won't have that luxury as it will take longer to generate the result to put on the screen.
While perusing the API, I came up with no obvious way to just "do nothing". I assume that I would have to write my screen changes to some buffer object, then feed that buffer object back to GlGraphics when e.render_args() is reached but I don't need to update the screen.
The problem is, I can't seem to find this buffer object. :-(
How can I "do nothing" without getting screen flashing? If my theory is correct, how can I draw to a GlGraphics buffer instead of the screen, then feed my buffer back to the screen when I don't have anything new to draw?
Cargo.toml
[package]
name = "stackoverflow-piston-example"
version = "0.0.0"
authors = ["Fred"]
description = "Note: This program can be used for both of the programs below. Simply use `cargo new` and save either of the below files as `src/main.rs`"
keywords = []
[dependencies]
piston = "0.35.0"
piston2d-opengl_graphics = "0.50.0"
piston2d-graphics = "0.24.0"
piston2d-touch_visualizer = "0.8.0"
pistoncore-sdl2_window = "0.47.0"
doesnt_flash.rs
extern crate piston;
extern crate opengl_graphics;
extern crate graphics;
extern crate touch_visualizer;
extern crate sdl2_window;
use opengl_graphics::{ GlGraphics, OpenGL };
use graphics::{ Context, Graphics };
use piston::input::*;
use piston::event_loop::*;
use sdl2_window::Sdl2Window as AppWindow;
static CURSOR_POS_COLOR: [f32; 4] = [0.0, 0.0, 0.0, 1.0];
fn main() {
let opengl = OpenGL::V3_2;
let mut window: AppWindow = piston::window::WindowSettings::new("Example for StackOverflow", [600, 600])
.exit_on_esc(true).opengl(opengl).build().unwrap();
let ref mut gl = GlGraphics::new(opengl);
let (mut mx, mut my) = (0., 0.);
let mut cursor_poses: Vec<(f64, f64)> = Vec::new();
let mut events = Events::new(EventSettings::new().lazy(true));
while let Some(e) = events.next(&mut window) {
e.mouse_cursor(|x, y| {
cursor_poses.push((x,y));
});
if let Some(args) = e.render_args() {
gl.draw(args.viewport(), |c, g| {
graphics::clear([1.0; 4], g);
for &(x, y) in cursor_poses.iter() {
draw_cursor_pos([x, y], &c, g);
}
}
);
}
}
}
fn draw_cursor_pos<G: Graphics>(
cursor: [f64; 2],
c: &Context,
g: &mut G,
) {
graphics::ellipse(
CURSOR_POS_COLOR,
graphics::ellipse::circle(cursor[0], cursor[1], 4.0),
c.transform,
g
);
}
flashes.rs
extern crate piston;
extern crate opengl_graphics;
extern crate graphics;
extern crate touch_visualizer;
extern crate sdl2_window;
use opengl_graphics::{ GlGraphics, OpenGL };
use graphics::{ Context, Graphics };
use piston::input::*;
use piston::event_loop::*;
use sdl2_window::Sdl2Window as AppWindow;
static CURSOR_POS_COLOR: [f32; 4] = [0.0, 0.0, 0.0, 1.0];
fn main() {
let opengl = OpenGL::V3_2;
let mut window: AppWindow = piston::window::WindowSettings::new("Example for StackOverflow", [600, 600])
.exit_on_esc(true).opengl(opengl).build().unwrap();
let ref mut gl = GlGraphics::new(opengl);
let (mut mx, mut my) = (0., 0.);
let mut i = 0;
let mut events = Events::new(EventSettings::new().lazy(true));
while let Some(e) = events.next(&mut window) {
e.mouse_cursor(|x, y| {
mx = x; my = y;
});
if let Some(args) = e.render_args() {
gl.draw(args.viewport(), |c, g| {
if i == 0 {
graphics::clear([1.0; 4], g);
}
draw_cursor_pos([mx, my], &c, g);
i+=1;
}
);
}
}
}
fn draw_cursor_pos<G: Graphics>(
cursor: [f64; 2],
c: &Context,
g: &mut G,
) {
graphics::ellipse(
CURSOR_POS_COLOR,
graphics::ellipse::circle(cursor[0], cursor[1], 4.0),
c.transform,
g
);
}
I think the flashing is caused by buffer swapping: in flashes.rs only the first buffer to be drawn into is cleared. The second one will be all zeros, or leftover GPU memory if you're unlucky. According to the OpenGL wiki, there's no good way around calling graphics::clear:
A modern OpenGL program should always use double buffering. [...] The buffers should always be cleared. On much older hardware, there was a technique to get away without clearing the scene, but on even semi-recent hardware, this will actually make things slower. So always do the clear.
Instead, the usual method is to accumulate your changes to a texture or renderbuffer, and then draw that to the screen, exactly as you described.
I couldn't find any way to do this from within opengl_graphics either (there are no calls to gl::GenFramebuffers anywhere in it) but it's relatively straightforward to set up using raw gl calls. (I've used textures instead of renderbuffers because they have the significant advantage of being supported by high-level methods like Image::draw.)
extern crate piston;
extern crate opengl_graphics;
extern crate graphics;
extern crate sdl2_window;
extern crate gl;
use opengl_graphics::{ GlGraphics, OpenGL, Texture, TextureSettings };
use graphics::{ Context, Graphics, Transformed };
use graphics::image::Image;
use piston::input::*;
use piston::event_loop::*;
use piston::window::Window;
use sdl2_window::Sdl2Window as AppWindow;
use gl::types::GLuint;
static CURSOR_POS_COLOR: [f32; 4] = [0.0, 0.0, 0.0, 1.0];
fn main() {
let opengl = OpenGL::V3_2;
let mut window: AppWindow = piston::window::WindowSettings::new("Example for StackOverflow", [600, 600])
.exit_on_esc(true).opengl(opengl).build().expect("window");
let ref mut gl = GlGraphics::new(opengl);
let (mut mx, mut my) = (0., 0.);
let draw_size = window.draw_size();
// It would also be possible to create a texture by hand using gl::GenTextures and call
// gl::TexImage2D with a null pointer for the data argument, which would require another unsafe
// block but would save this allocation
let texture_buf = vec![0u8; draw_size.width as usize * draw_size.height as usize];
let texture = Texture::from_memory_alpha(&texture_buf, draw_size.width, draw_size.height,
&TextureSettings::new()).expect("texture");
let fbo;
unsafe {
let mut fbos: [GLuint; 1] = [0];
// Create a Framebuffer Object that we can draw to later
gl::GenFramebuffers(1, fbos.as_mut_ptr());
fbo = fbos[0];
// Switch to it as the active framebuffer
gl::BindFramebuffer(gl::FRAMEBUFFER, fbo);
// Set up the framebuffer object so that draws to it will go to the texture
gl::FramebufferTexture2D(gl::FRAMEBUFFER,
gl::COLOR_ATTACHMENT0, // draw colors, not depth or stencil data
gl::TEXTURE_2D, // the texture's type
texture.get_id(),
0); // mipmap level
}
let mut events = Events::new(EventSettings::new().lazy(true));
while let Some(e) = events.next(&mut window) {
e.mouse_cursor(|x, y| {
mx = x; my = y;
});
e.render(|args| {
// Switch to the texture framebuffer and draw the cursor
unsafe {
gl::BindFramebuffer(gl::FRAMEBUFFER, fbo);
}
gl.draw(args.viewport(), |c, g| {
draw_cursor_pos([mx, my], &c, g);
});
// Switch to the window framebuffer and draw the texture
unsafe {
gl::BindFramebuffer(gl::FRAMEBUFFER, 0);
}
gl.draw(args.viewport(), |c, g| {
graphics::clear([1f32, 1f32, 1f32, 0f32], g);
// I can't entirely explain this. We already applied the viewport transform when
// we were rendering the cursor, so I think the texture is right-side-up for GL,
// but piston::Image is expecting an image laid out in screen coordinates.
// Since there is an offset in the viewport transform, the flip has to be applied
// first, otherwise it would flip across the origin.
let flipped = c.transform.prepend_transform(graphics::math::scale(1., -1.));
Image::new().draw(&texture, &c.draw_state, flipped, g);
});
});
}
}
fn draw_cursor_pos<G: Graphics>(
cursor: [f64; 2],
c: &Context,
g: &mut G,
) {
graphics::ellipse(
CURSOR_POS_COLOR,
graphics::ellipse::circle(cursor[0], cursor[1], 4.0),
c.transform,
g
);
}
Alternatively, the gfx backend has the promising-sounding Factory::create_render_target method. My hardware doesn't support it, but I believe using it would look approximately like this:
extern crate piston;
extern crate graphics;
extern crate piston_window;
extern crate gfx_core;
use graphics::{ Context, Graphics, Transformed };
use graphics::image::Image;
use piston::input::*;
use piston::event_loop::*;
use piston::window::Window;
use piston_window::{ PistonWindow, OpenGL, G2dTexture };
use gfx_core::factory::Factory;
use gfx_core::texture::{ SamplerInfo, FilterMethod, WrapMode, Size };
static CURSOR_POS_COLOR: [f32; 4] = [0.0, 0.0, 0.0, 1.0];
fn main() {
let opengl = OpenGL::V2_1;
let window_settings =
piston::window::WindowSettings::new("Example for StackOverflow", [600, 600])
.opengl(opengl)
.exit_on_esc(true);
let mut window: PistonWindow = window_settings.build().expect("window");
window.set_lazy(true);
let size = window.draw_size();
let (texture_handle, shader_view, target) = window.factory.create_render_target(size.width as Size, size.height as Size)
.expect("render target");
let sampler = window.factory.create_sampler(SamplerInfo::new(FilterMethod::Scale, WrapMode::Tile));
let texture = G2dTexture {
surface: texture_handle,
sampler: sampler,
view: shader_view,
};
let stencil = window.factory.create_depth_stencil_view_only(size.width as Size, size.height as Size)
.expect("stencil");
let (mut mx, mut my) = (0., 0.);
while let Some(e) = window.next() {
e.mouse_cursor(|x, y| {
mx = x; my = y;
});
if let Some(args) = e.render_args() {
window.g2d.draw(&mut window.encoder, &target, &stencil, args.viewport(), |c, g| {
draw_cursor_pos([mx, my], &c, g);
});
window.draw_2d(&e, |c, g| {
graphics::clear([1f32, 1f32, 1f32, 0f32], g);
let flipped = c.transform.prepend_transform(graphics::math::scale(1., -1.));
Image::new().draw(&texture, &c.draw_state, flipped, g);
});
}
}
}
fn draw_cursor_pos<G: Graphics>(
cursor: [f64; 2],
c: &Context,
g: &mut G,
) {
graphics::ellipse(
CURSOR_POS_COLOR,
graphics::ellipse::circle(cursor[0], cursor[1], 4.0),
c.transform,
g
);
}
I'd like to change the resolution of a PistonWindow after it has been created. This would allow a user to change the resolution of the window while playing the game.
WindowSettings has a set_size method, but it doesn't seem to be accessible after the PistonWindow type is created.
A simplified example:
extern crate piston_window;
use piston_window::*;
fn main() {
let mut window: PistonWindow = WindowSettings::new("Game Title", [200, 200])
.exit_on_esc(true)
.build()
.unwrap();
// Do some logic here, and change resolution to 400 x 400.
while let Some(e) = window.next() {
window.draw_2d(&e, |c, g| {
ellipse([1.0, 1.0, 1.0, 1.0], [0.0, 0.0, 100.0, 100.0], c.transform, g)
});
}
}
As far as I know, you can't, at least not for an existing window.
However, you can store the WindowSettings and re-create the window from them when the user changes the resolution.
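A rough, untested sketch of that idea, based on the example in the question (whether set_size takes a Size or a [u32; 2], and whether build() borrows or consumes the settings, depends on the piston version in use):
extern crate piston_window;
use piston_window::*;

fn main() {
    let mut settings = WindowSettings::new("Game Title", [200, 200]).exit_on_esc(true);
    let mut window: PistonWindow = settings.build().unwrap();

    // Later, when the user picks a new resolution, rebuild from the stored settings;
    // the old window is dropped when it is replaced.
    settings.set_size([400, 400].into());
    window = settings.build().unwrap();

    while let Some(e) = window.next() {
        window.draw_2d(&e, |c, g| {
            ellipse([1.0, 1.0, 1.0, 1.0], [0.0, 0.0, 100.0, 100.0], c.transform, g)
        });
    }
}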