I want to use the image and imageproc crates to draw triangles that blend wherever they overlap. Currently my code looks like this:
use image::{Rgba, RgbaImage};
use imageproc::drawing;
use imageproc::drawing::Blend;
use imageproc::point::Point;

struct Triangle([Point<i32>; 3]);

fn main() {
    let triangle = Triangle([
        Point { x: 0, y: 0 },
        Point { x: 100, y: 400 },
        Point { x: 400, y: 100 },
    ]);
    let mut image = Blend(RgbaImage::new(400, 400));
    drawing::draw_polygon_mut(&mut image, &triangle.0, Rgba([255, 255, 0, 255]));
    drawing::draw_polygon_mut(&mut image, &triangle.0, Rgba([0, 0, 255, 255]));
    image.save("test.png").unwrap();
}
This should produce a single white triangle, but there is no save method for Blend canvases. Is there an easy way to save this image?
The actual ImageBuffer object is inside the Blend wrapper; you can extract it either by destructuring or via .0.
Destructuring:
let Blend(image_buffer) = image;
image_buffer.save("test.png").unwrap();
.0:
image.0.save("test.png").unwrap();
https://i.stack.imgur.com/2aZm2.png
https://codepen.io/1gelistirici/pen/KKRjgoj
// Initiate a Canvas instance
var canvas = new fabric.Canvas("canvas");

// Initiate a polygon instance
var polygon = new fabric.Polygon([
    { x: 200, y: 10 },
    { x: 250, y: 50 },
    { x: 250, y: 180 },
    { x: 150, y: 180 },
    { x: 150, y: 50 }
], {
    fill: 'green'
});

// Render the polygon in canvas
canvas.add(polygon);
Hi, I have created a polygon in fabric.js and then tried your intersection check. When the other object moves over an area that is not part of the polygon itself but is still treated as the polygon by fabric.js (its bounding box), a hit is detected. How can I prevent this? Can you help me?
Check for intersection using object.containsPoint(). You can define the point as anywhere associated with the moving object coordinates.
var targetPoint = new fabric.Point(moveEventTarget.left, moveEventTarget.top);

// detect if move event target intersects with another object
if (intersectionObject.containsPoint(targetPoint)) {
    if (!canvas.isTargetTransparent(intersectionObject, targetPoint.x, targetPoint.y)) {
        // do something
    }
}
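For context, here is a rough sketch of where moveEventTarget and intersectionObject could come from. This wiring is my own assumption, not part of the original answer; it hooks the check into the canvas's object:moving event, where the event's target is the object being dragged:

// sketch only: run the check while an object is being dragged
canvas.on('object:moving', function (options) {
    var moveEventTarget = options.target; // the object currently being dragged
    var targetPoint = new fabric.Point(moveEventTarget.left, moveEventTarget.top);

    canvas.forEachObject(function (intersectionObject) {
        if (intersectionObject === moveEventTarget) {
            return; // skip the dragged object itself
        }
        if (intersectionObject.containsPoint(targetPoint) &&
            !canvas.isTargetTransparent(intersectionObject, targetPoint.x, targetPoint.y)) {
            // hit on the actual (non-transparent) shape, not just the bounding box
        }
    });
});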
I want to load an image-collection tileset into my Phaser game. I know that for tilesets consisting of a single image you can simply load that image into Phaser, but what about an image collection? In Tiled I saw the options to export that tileset as either .tsx or .json, but I don't know whether that helps in my case. The reason I need this is that some of my objects are too big to be used as tiles. I can load them into Tiled and place them the way I want, but obviously they don't show up in my game unless I can import that tileset into Phaser. Does anyone know how to do that, or maybe a better option than using an image collection?
Well, after some tests and updating my Tiled version to 1.9.2, it seems there is a pretty simple way:
As long as the tileset collection is marked as "Embedded in map"
(I could have sworn this checkbox was hidden/deactivated when selecting "Collection of Images" in my earlier Tiled version)
Export the map as JSON
Load the map and the tile images:
preload() {
    this.load.tilemapTiledJSON('map', 'export.tmj');
    this.load.image('tile01', 'tile01.png');
    this.load.image('tile02', 'tile02.png');
    ...
}
Create the Phaser tilesets; just use the filename from the JSON as the tilesetName (this is the "tricky" part, at least for me):
create() {
    var map = this.make.tilemap({ key: 'map' });
    var img1 = map.addTilesetImage('tile01.png', 'tile01');
    var img2 = map.addTilesetImage('tile02.png', 'tile02');
    ...
    // create the layer with all tilesets
    map.createLayer('Tile Layer 1', [img1, img2, ...]);
    ...
}
This should work, at least with a "Collection of Images" whose images have a size of 8x8 pixels (since I don't know the needed/intended image size, I didn't want to waste time testing various image sizes needlessly).
Here is a small demo:
(due to CORS issues, the map data is inserted as a JSON object and the textures are generated rather than loaded)
const mapJsonExport = {"compressionlevel":-1,"height":10,"infinite":false,"layers":[{"compression":"","data":"AQAAAAEAAAACAAAAAgAAAAEAAAACAAAAAgAAAAIAAAACAAAAAgAAAAIAAAACAAAAAgAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAIAAAABAAAAAQAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAEAAAACAAAAAQAAAAEAAAACAAAAAgAAAAEAAAABAAAAAgAAAAIAAAABAAAAAgAAAAIAAAACAAAAAgAAAAEAAAACAAAAAQAAAAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAIAAAACAAAAAgAAAAEAAAACAAAAAgAAAAEAAAACAAAAAQAAAAEAAAACAAAAAQAAAAEAAAABAAAAAQAAAAEAAAACAAAAAgAAAAEAAAABAAAAAgAAAAEAAAABAAAAAQAAAAEAAAACAAAAAQAAAAIAAAACAAAAAgAAAAEAAAABAAAAAgAAAAEAAAACAAAAAgAAAAIAAAACAAAAAgAAAAEAAAABAAAAAgAAAAIAAAACAAAAAgAAAAEAAAACAAAAAQAAAA==","encoding":"base64","height":10,"id":1,"name":"Tile Layer 1","opacity":1,"type":"tilelayer","visible":true,"width":10,"x":0,"y":0}],"nextlayerid":2,"nextobjectid":1,"orientation":"orthogonal","renderorder":"right-down","tiledversion":"1.9.2","tileheight":8,"tilesets":[{"columns":0,"firstgid":1,"grid":{"height":1,"orientation":"orthogonal","width":1},"margin":0,"name":"tiles","spacing":0,"tilecount":2,"tileheight":8,"tiles":[{"id":0,"image":"tile01.png","imageheight":8,"imagewidth":8},{"id":1,"image":"tile02.png","imageheight":8,"imagewidth":8}],"tilewidth":8}],"tilewidth":8,"type":"map","version":"1.9","width":10};
var config = {
    width: 8 * 10,
    height: 8 * 10,
    zoom: 2.2,
    scene: { preload, create }
};

function preload() {
    // loading inline JSON due to CORS-issues with the code Snippets
    this.load.tilemapTiledJSON('map', mapJsonExport);

    // generating textures instead of loading them due to CORS-issues with the code Snippets
    let graphics = this.make.graphics({ add: false });
    graphics.fillStyle(0xff0000);
    graphics.fillRect(0, 0, 8, 8);
    graphics.generateTexture('tile01', 8, 8);
    graphics.fillStyle(0x000000);
    graphics.fillRect(0, 0, 8, 8);
    graphics.generateTexture('tile02', 8, 8);
}

function create() {
    let map = this.make.tilemap({ key: 'map' });
    let img1 = map.addTilesetImage('tile01.png', 'tile01');
    let img2 = map.addTilesetImage('tile02.png', 'tile02');
    map.createLayer('Tile Layer 1', [img1, img2], 0, 0);
}

new Phaser.Game(config);
<script src="https://cdn.jsdelivr.net/npm/phaser@3.55.2/dist/phaser.js"></script>
I would like to render a live wallpaper into the X root window. This is currently not possible directly through the API glutin exposes.
I found a post where the asker seems to eventually figure it out: How can I make a window override-redirect with glutin?
However, it seems that the code snippet from the answer is not enough to achieve the desired effect: once the flag is set, the previously-created window disappears, but nothing is rendered to the root window. What am I missing?
Here's what I have so far:
let mut events_loop = glium::glutin::EventsLoop::new();
let context = glium::glutin::ContextBuilder::new();
let window = glium::glutin::WindowBuilder::new()
    .with_dimensions(800, 600)
    .with_title("TEST");
let display = glium::Display::new(window, context, &events_loop).unwrap();

unsafe {
    use glium::glutin::os::unix::WindowExt;
    use winit::os::unix::x11::XConnection;
    use winit::os::unix::x11::ffi::{Display, XID, CWOverrideRedirect, XSetWindowAttributes};

    let x_connection = std::sync::Arc::<XConnection>::into_raw(display.gl_window().get_xlib_xconnection().unwrap());
    let x_display = display.gl_window().get_xlib_display().unwrap() as *mut Display;
    let x_window = display.gl_window().get_xlib_window().unwrap() as XID;

    ((*x_connection).xlib.XChangeWindowAttributes)(
        x_display,
        x_window,
        CWOverrideRedirect,
        &mut XSetWindowAttributes {
            background_pixmap: 0,
            background_pixel: 0,
            border_pixmap: 0,
            border_pixel: 0,
            bit_gravity: 0,
            win_gravity: 0,
            backing_store: 0,
            backing_planes: 0,
            backing_pixel: 0,
            save_under: 0,
            event_mask: 0,
            do_not_propagate_mask: 0,
            override_redirect: 1,
            colormap: 0,
            cursor: 0,
        },
    );

    ((*x_connection).xlib.XUnmapWindow)(x_display, x_window);
    ((*x_connection).xlib.XMapWindow)(x_display, x_window);
}
I'm creating a program that uses glutin, and I want to provide a command-line flag to make the window override-redirect so it can be used as a desktop wallpaper for certain window managers that don't support the desktop window type.
I've done a lot of research and managed to cobble together a block of code that I thought would work, using the provided xlib display and window from glutin. Here is my existing code:
unsafe {
    use glutin::os::unix::WindowExt;

    let x_connection = std::sync::Arc::<glutin::os::unix::x11::XConnection>::into_raw(display.gl_window().get_xlib_xconnection().unwrap());

    ((*x_connection).xlib.XChangeWindowAttributes)(
        display.gl_window().get_xlib_display().unwrap() as *mut glutin::os::unix::x11::ffi::Display,
        display.gl_window().get_xlib_window().unwrap() as glutin::os::unix::x11::ffi::XID,
        glutin::os::unix::x11::ffi::CWOverrideRedirect,
        &mut glutin::os::unix::x11::ffi::XSetWindowAttributes {
            background_pixmap: 0,
            background_pixel: 0,
            border_pixmap: 0,
            border_pixel: 0,
            bit_gravity: 0,
            win_gravity: 0,
            backing_store: 0,
            backing_planes: 0,
            backing_pixel: 0,
            save_under: 0,
            event_mask: 0,
            do_not_propagate_mask: 0,
            override_redirect: 1,
            colormap: 0,
            cursor: 0,
        },
    );
}
It doesn't give me any errors, and it compiles and runs fine with the rest of the code, but it doesn't make the window override-redirect like I want it to.
I figured it out: the override-redirect attribute only takes effect when the window is mapped, so if I unmap it and map it again, it works!
Here is the code now:
unsafe {
    use glutin::os::unix::WindowExt;
    use glutin::os::unix::x11::XConnection;
    use glutin::os::unix::x11::ffi::{Display, XID, CWOverrideRedirect, XSetWindowAttributes};

    let x_connection = std::sync::Arc::<XConnection>::into_raw(display.gl_window().get_xlib_xconnection().unwrap());
    let x_display = display.gl_window().get_xlib_display().unwrap() as *mut Display;
    let x_window = display.gl_window().get_xlib_window().unwrap() as XID;

    ((*x_connection).xlib.XChangeWindowAttributes)(
        x_display,
        x_window,
        CWOverrideRedirect,
        &mut XSetWindowAttributes {
            background_pixmap: 0,
            background_pixel: 0,
            border_pixmap: 0,
            border_pixel: 0,
            bit_gravity: 0,
            win_gravity: 0,
            backing_store: 0,
            backing_planes: 0,
            backing_pixel: 0,
            save_under: 0,
            event_mask: 0,
            do_not_propagate_mask: 0,
            override_redirect: 1,
            colormap: 0,
            cursor: 0,
        },
    );

    // remap the window so the new override-redirect attribute takes effect
    ((*x_connection).xlib.XUnmapWindow)(x_display, x_window);
    ((*x_connection).xlib.XMapWindow)(x_display, x_window);
}
I am trying to zoom and pan a text which is already draggable. All the examples are about images or shapes, and it seems I cannot adapt them to a text object. My questions are:
Do I have to use anchors, or is there a simpler way to zoom a text with KineticJS?
I found an example regarding zooming a shape and the code crashes here:
var layer = new Kinetic.Layer({
    drawFunc: drawTriangle // drawTriangle is a function defined already
});
Can we call a function while we are creating a layer?
I usually create a layer and then add the outcome of the function to it.
Any idea would be great, thanks.
I thought of many ways you could do this, but this is the one I ended up implementing: jsfiddle
Basically, you have an anchor (it doesn't always have to be there; you can hide and show it if you like, let me know if you need help with that). If you drag the anchor down it increases the fontSize, and if you drag the anchor up it decreases the fontSize.
I followed the exact same anchor tutorial but instead I added a dragBoundFunc to limit dragging to the Y-axis:
var anchor = new Kinetic.Circle({
    x: x,
    y: y,
    stroke: '#666',
    fill: '#ddd',
    strokeWidth: 2,
    radius: 8,
    name: name,
    draggable: true,
    dragOnTop: false,
    dragBoundFunc: function (pos) {
        return {
            x: this.getAbsolutePosition().x,
            y: pos.y
        };
    }
});
And then I updated the updateAnchor() function to only detect the single anchor I added to the group named sizeAnchor:
var mY = 0;

function update(activeAnchor, event) {
    var group = activeAnchor.getParent();
    var sizeAnchor = group.get('.sizeAnchor')[0];
    var text = group.get('.text')[0];

    if (event.pageY < mY) {
        text.setFontSize(text.getFontSize() - 1);
    } else {
        text.setFontSize(text.getFontSize() + 1);
    }

    sizeAnchor.setPosition(-10, 0);
    mY = event.pageY;
}
Basically, mY is compared to event.pageY to see whether the mouse is moving up or down. Once we know the mouse direction, we can decide whether to increase or decrease the fontSize!
Alternatively, you can use the mousewheel to do the exact same thing! I didn't implement it myself, but it's definitely doable (see the sketch below); something like:
Mousewheel down and the fontSize decreases
Mousewheel up and the fontSize increases
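Here is a rough sketch of that mousewheel idea. It is not from the original fiddle: it assumes the same text and layer objects used above plus a Kinetic.Stage named stage, and it listens for the standard DOM wheel event on the stage's container element:

// rough sketch: zoom the text with the mouse wheel instead of the anchor
// (assumes `stage`, `layer` and `text` already exist, as in the fiddle above)
stage.getContent().addEventListener('wheel', function (e) {
    e.preventDefault();
    if (e.deltaY > 0) {
        // wheel down: decrease the fontSize (clamped so the text doesn't vanish)
        text.setFontSize(Math.max(5, text.getFontSize() - 1));
    } else {
        // wheel up: increase the fontSize
        text.setFontSize(text.getFontSize() + 1);
    }
    layer.draw();
});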
Hopefully this emulates "Zooming" a text for you. And I guess being able to drag the text acts as "panning" right?
UPDATE (based on comment below)
This is how you would limit dragging to the Y-Axis using dragBoundFunc:
var textGroup = new Kinetic.Group({
    x: 100,
    y: 100,
    draggable: true,
    dragBoundFunc: function (pos) {
        return {
            x: this.getAbsolutePosition().x,
            y: pos.y
        };
    }
});
See the updated jsfiddle (same jsfiddle as above)