I'm trying to write code using conrod with a winit/glium backend. Conrod is an immediate-mode graphical user interface library for Rust, while winit provides the event model and glium the OpenGL bindings.
My code draws a grid of values onto the screen by (row, col), i.e. "text1" at (0, 0), "text2" at (1, 0), "text3" at (0, 1), etc., where each (row, col) is translated to an absolute (x, y) coordinate.
Basic flow is this:
For each value I want to render as text:
Calculate the column and row of the value, i.e. (col, row) = (idx % num_cols, idx / num_cols).
Calculate an x and y from the column and row, i.e. (x, y) = (col * WIDTH, row * HEIGHT).
Create a widget::Text at (x, y) containing the value.
The code that does this is shown below in a slightly simplified form. The entire file is online here.
This should be simple code to implement, but when I run it the text widgets are not drawn starting at x=0, y=0; instead they start somewhere around, though not exactly at, the centre of the view in both axes.
As far as I can tell this shouldn't happen because I am explicitly setting the absolute position and have confirmed the coordinates are correct. So I would expect them to render from the top left of the screen. I don't understand what causes the offset.
Any ideas?
The code that calculates position is this:
fn draw_ui(ui: &mut Ui, model: &mut UiModel) {
    let ui = &mut ui.set_widgets();

    // Canvas is the backdrop to the view
    widget::Canvas::new()
        .color(BACKGROUND_COLOUR)
        .set(model.static_ids.canvas, ui);

    // Create text widgets for each value
    let state = model.session_state.read().unwrap();

    // Create / update the widgets
    if state.values.is_empty() {
        // ... removed
    } else {
        let num_cols: usize = 2;
        state.values.iter().enumerate().for_each(|(i, v)| {
            // Create / update the cell and its state
            let (node_id, value) = v;
            if let Some(id) = model.value_ids.get(node_id) {
                let (col, row) = (i % num_cols, i / num_cols);
                let valid = value.is_valid();
                let value = if let Some(ref value) = value.value {
                    value.to_string()
                } else {
                    "None".to_string()
                };
                // Turn the value into a string to render it
                let (x, y) = (col as f64 * (CELL_WIDTH + PADDING), row as f64 * (CELL_HEIGHT + PADDING));
                value_widget(&value, valid, x, y, CELL_WIDTH, CELL_HEIGHT, model.static_ids.canvas)
                    .set(*id, ui);
            }
        });
    }
}

fn value_widget(value: &str, valid: bool, x: f64, y: f64, w: f64, h: f64, canvas_id: conrod::widget::Id) -> widget::Text {
    let color = if valid { GOOD_COLOUR } else { BAD_COLOUR };
    widget::Text::new(value)
        .x_y(x, y)
        .w(w).h(h)
        .color(color)
}
I think the (0, 0) position is in the center of the window. The slight offset you are seeing comes from the text box position specifying the corner of the text box. To get something in the top left of the window you would want to do something like x_position - 0.5 * window_width + x_margin (and similarly for y).
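A minimal sketch of that shift, assuming win_w and win_h are locals holding the current window dimensions (however your backend exposes them) and that conrod's y axis increases upward from the window centre:

// win_w / win_h are assumed to hold the current window dimensions.
// Move the origin from the window centre to the top-left corner, and flip y
// so that rows laid out downward end up below the top edge.
let screen_x = x - 0.5 * win_w;
let screen_y = 0.5 * win_h - y;

value_widget(&value, valid, screen_x, screen_y, CELL_WIDTH, CELL_HEIGHT, model.static_ids.canvas)
    .set(*id, ui);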
I have an OLS fitting function that returns an OMatrix. The return type should always be a one-dimensional vector of coefficients.
use std::f64::NAN;
use nalgebra::{DMatrix, Dynamic, MatrixSlice, OMatrix, RowVector};

fn ols(
    x: MatrixSlice<f64, Dynamic, Dynamic>,
    y: MatrixSlice<f64, Dynamic, Dynamic>,
) -> OMatrix<f64, Dynamic, Dynamic> {
    (x.transpose() * x).pseudo_inverse(0.00001).unwrap() * x.transpose() * y
}
The output of ols will always have the same number of elements, equal to the number of columns of the input x (I'm not sure how I can change the return signature to represent this; I'm new to Rust).
The output of ols should then be copied to a single row of an output matrix out. I am trying to do this with the set_row function, but I get the error expected struct 'Const', found struct 'Dynamic'.
fn my_func(
    x: &DMatrix<f64>, // data matrix
    y: &DMatrix<f64>, // target matrix, actually a matrix with only 1 column
) -> DMatrix<f64> {
    let nrows = x.shape().0;
    let ncols = x.shape().1;

    // initialize out matrix to all NAN's
    let mut out = DMatrix::from_element(nrows, ncols, NAN);

    let i: usize = 100;
    let tmp_x: MatrixSlice<f64, Dynamic, Dynamic> = x.slice((i, 0), (50, ncols));
    let tmp_y: MatrixSlice<f64, Dynamic, Dynamic> = y.slice((i, 0), (50, 1));

    // the next two lines are where I need help
    let ols_coefs = ols(tmp_x, tmp_y);
    out.set_row(i, &ols_coefs); // error occurs here

    return out;
}
I suspect I need to convert the type of the output of ols somehow, but I am not sure how.
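One possible direction (just a sketch, not verified against this exact nalgebra version) is to skip set_row and copy the transposed coefficients into a mutable row view instead, which checks the dynamic shapes at runtime rather than at compile time:

// ols_coefs has shape (ncols, 1); its transpose is (1, ncols), which matches
// the shape of out.row_mut(i), so copy_from can perform the assignment.
out.row_mut(i).copy_from(&ols_coefs.transpose());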
Using Res<Events<CursorMoved>> I can get the mouse position change in screen space coordinates (zero in bottom-left corner), e.g.
#[derive(Default)]
struct State {
    cursor_moved_reader: EventReader<CursorMoved>,
}

fn set_mouse_pos(mut state: ResMut<State>, cursor_moved: Res<Events<CursorMoved>>) {
    for event in state.cursor_moved_reader.iter(&cursor_moved) {
        println!("cursor: {:?}", event.position);
    }
}
The question now is: if I set a SpriteComponents' transform.translation to the cursor position while using a camera from Camera2dComponents::default(), the coordinates don't line up, because a sprite at position (0, 0) is rendered in the center of the screen. What's the idiomatic way of converting between screen-space mouse coordinates and the camera's world-space coordinates?
The other answer seems to be on the right track; however, the transformation is not fully implemented and is actually over-engineered. I stumbled on the same problem and figured out that this is exactly what the camera transform is for! This works for me:
fn window_to_world(
    position: Vec2,
    window: &Window,
    camera: &Transform,
) -> Vec3 {
    // Center in screen space
    let norm = Vec3::new(
        position.x - window.width() / 2.,
        position.y - window.height() / 2.,
        0.,
    );

    // Apply camera transform
    *camera * norm

    // Alternatively:
    // camera.mul_vec3(norm)
}
I had a very similar confusion when I first started hacking with bevy. Here's what I am currently using to translate from window to world coordinates (for a 2D game):
fn window_to_world(
    window: &Window,
    camera: &Transform,
    position: &Vec2,
) -> Vec3 {
    let center = camera.translation.truncate();
    let half_width = (window.width() / 2.0) * camera.scale.x;
    let half_height = (window.height() / 2.0) * camera.scale.y;
    let left = center.x - half_width;
    let bottom = center.y - half_height;

    Vec3::new(
        left + position.x * camera.scale.x,
        bottom + position.y * camera.scale.y,
        0.0, // I'm working in 2D
    )
}
In bevy, the Window coordinate system has its origin (0, 0) at the bottom left, is measured in pixels, and increases as you go up the screen. The Camera's coordinates default to (0, 0) at the center of the window, measured in whichever unit makes the most sense for your game.
This function is limited because it doesn't take rotation of the camera into account, only the scale.
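A small usage sketch, where window, camera_transform and cursor_position are hypothetical bindings assumed to come from your own system (the window resource, a query for the camera's Transform, and a CursorMoved event, respectively):

// Convert the latest cursor position into world space and use it as a sprite translation.
let world_pos = window_to_world(&window, &camera_transform, &cursor_position);
sprite_transform.translation = world_pos;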
Please forgive my English!
I have confirmed that the sarray values are changed by my mycallback1 routine (another function), but table.draw_cell does not show the change.
How can I draw my string array, and have the string array be changed through my function?
//..omitted
fn mycallback1(sarray: &mut [&str]) {
    let max = 200 * 11;
    for n in 0..max {
        sarray[n] = "Array ^^^";
    }
}

fn main() {
    //..omitted;
    let mut sarray = [""; 200 * 11];
    let max = 200 * 11;
    for n in 0..max {
        sarray[n] = "array ^";
    }

    butStock1.set_callback(Box::new(move || mycallback1(&mut sarray)));

    // Called when the table is drawn then when it's redrawn due to events
    table.draw_cell(Box::new(move |ctx, row, col, x, y, w, h| match ctx {
        table::TableContext::StartPage => draw::set_font(Font::Helvetica, 14),
        table::TableContext::ColHeader => {
            let idx = col as usize;
            draw_header(&format!("{}", tuple[idx]), x, y, w, h);
        } // Column titles
        table::TableContext::RowHeader => draw_header(&format!("{}", row + 1), x, y, w, h), // Row titles
        table::TableContext::Cell => {
            let idx = (row * 11 + col) as usize;
            println!("Drawn {}", idx);
            draw_data(&format!("{} {}", sarray[idx], idx), x, y, w, h, table_c.is_selected(row, col)); // Data in cells
        }
        _ => (),
    }));

    app.run().unwrap();
}
The way fltk works is that it only redraws a widget when a "damage" flag is set. For example, when a button is clicked, the click sets a damage flag within the button, so fltk redraws it in the next frame.
A table is likewise only redrawn when it's "damaged". When you just change a value in the array, fltk has no way of knowing that it has to redraw the table.
So you can add an explicit redraw call, either of the table or of the whole app:
butStock1.set_callback(Box::new(move || {
    mycallback1(&mut sarray);
    fltk::app::redraw();
}));
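If you would rather repaint only the table, a variation of the same idea (a sketch; it assumes a clone of the table handle is moved into the callback, since fltk-rs widgets are cheaply cloneable handles):

let mut table_for_cb = table.clone();
butStock1.set_callback(Box::new(move || {
    mycallback1(&mut sarray);
    // Mark only the table as damaged so it is repainted on the next frame
    table_for_cb.redraw();
}));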
I am trying to create a FluidView shape in SwiftUI which acts like a fluid in a container, so that when the device is tilted to a particular angle, the shape / 'fluid' tilts with it. The shape also has a specific capacity, percentFilled, which indicates how much of the parent view should be filled.
Using these constraints, the invariant for the class is
lines.area == rect.area * percentFilled
where lines is the quadrilateral and rect is the bounding rectangle. This invariant implies that the 'volume' of the shape remains constant for each percentFilled irrespective of the tilt angle.
Here is what I have so far:
/// A View made using a specified angle and amount to fill
/// - Invariant: The area of the view is exactly equal to the area of the rectangle of the parent view times `percentFilled`
struct FluidView: Shape {
    var angle: CGFloat = 0.0
    var percentFilled: CGFloat = 0

    /// Creates a new FluidView
    /// - Parameters:
    ///   - angle: A value in the range `0...1`. A value of `0` indicates the view is horizontal, and an angle of `1` indicates the view is vertical (horizontal if viewed as landscape)
    ///   - percentFilled: the amount of the view bounds to fill represented as a value in the range `0...1`. A value of `x` indicates that `x * 100`% of the parent view is covered by this view
    init(angle: CGFloat = 0, percentFilled: CGFloat = 0) {
        precondition(0...1 ~= angle)
        precondition(0...1 ~= percentFilled)
        self.angle = angle
        self.percentFilled = percentFilled
    }

    func path(in rect: CGRect) -> Path {
        var path = Path()
        path.move(to: CGPoint(x: 0, y: rect.height * (1 - percentFilled))) // top left

        let lines = [
            (0, rect.height),                                                                          // bottom left
            (rect.width * 1 / (1 + angle - percentFilled), rect.height),                               // bottom right
            (rect.width * 1 / (1 + angle - percentFilled), rect.height * (1 + angle - percentFilled)), // top right
            (0, rect.height * (1 - angle - percentFilled))                                             // top left
        ].map { x, y in
            // make sure no points exceed the bounds
            CGPoint(x: min(rect.width, x), y: min(rect.height, y))
        }

        // invariant
        assert(lines.area == rect.area * percentFilled)

        path.addLines(lines)
        return path
    }
}
I feel like what I have currently is somewhat close to the goal, however the invariant fails. I believe that my y-coordinates are correct, however I think my calculations for the x-coordinates have to change, but I'm not sure to what they should change.
Any help would be really appreciated, thanks!
Try something like this:

struct FilledShape<S: Shape>: View {
    let shape: S
    @State var angle: Angle = .zero
    @State var percentFull: CGFloat

    var gradient: Gradient {
        Gradient(stops: [Gradient.Stop(color: .red, location: 0),
                         Gradient.Stop(color: .red, location: percentFull),
                         Gradient.Stop(color: .blue, location: percentFull),
                         Gradient.Stop(color: .blue, location: 1)])
    }

    var body: some View {
        shape.rotation(angle)
            .fill(LinearGradient(gradient: gradient, startPoint: .bottom, endPoint: .top))
    }
}

struct ContentView: View {
    @State var angle: Angle = .degrees(30)

    var body: some View {
        FilledShape(shape: Rectangle(), angle: angle, percentFull: 0.3).frame(width: 100, height: 100)
    }
}
The thing is, percentFull here is really the percentage up along the y axis, not the percentage of the area filled. You could use some kind of numeric method with GeometryReader to get the area and read the y-value at the appropriate filled-area sum (or, if you only use quadrilaterals, it's easier). By brute force:
extension Shape {
    func area(in box: CGRect) -> Int {
        var area = 0
        for x in 0..<Int(box.width) {
            for y in 0..<Int(box.height) {
                let point = CGPoint(x: x, y: y)
                if self.contains(point) {
                    area += 1
                }
            }
        }
        return area
    }
}
As a different approach, look into SpriteKit and SKPhysicsBody.
I am trying to draw a minimap from a randomly generated matrix that represents my level.
To do so, I am drawing little black or white squares one by one to represent the matrix visually (I don't know if that is the best way to do it with Phaser; I am a beginner with this framework).
The map draws correctly, but its position is bound to the world, not to the camera, so when I move it is no longer visible.
Here is the code I use to draw the map:
generate() {
    let wallsGraphics = this._scene.add.graphics({fillStyle : {color : LabyrinthConfig.MAPS.MINI_MAP.WALLS_COLOR}});
    let pathGraphics = this._scene.add.graphics({fillStyle : {color : LabyrinthConfig.MAPS.MINI_MAP.PATH_COLOR}});

    // Draw the map
    let y = 0;
    for (let line of this._matrix) {
        let x = 0;
        for (let cell of line) {
            let rect = new Phaser.Geom.Rectangle();
            rect.width = LabyrinthConfig.MAPS.MINI_MAP.CELL_WIDTH;
            rect.height = LabyrinthConfig.MAPS.MINI_MAP.CELL_HEIGHT;
            rect.x = LabyrinthConfig.MAPS.MINI_MAP.POSITION_X + x * LabyrinthConfig.MAPS.MINI_MAP.CELL_WIDTH;
            rect.y = LabyrinthConfig.MAPS.MINI_MAP.POSITION_Y + y * LabyrinthConfig.MAPS.MINI_MAP.CELL_HEIGHT;
            cell === 0 ? wallsGraphics.fillRectShape(rect) : pathGraphics.fillRectShape(rect);
            x++;
        }
        y++;
    }
}
Any help on how to fix this map to the camera view?
Set the scroll factor of your graphics objects to 0:
wallsGraphics.setScrollFactor(0);
pathGraphics.setScrollFactor(0);