Raise event when rectangle movement starts - jointjs

I want to implement an undo action for the movement of the rectangles.
For that I need the initial position of the rectangle.
I tried "pointerdown", but this also fires when a rectangle is just selected and not moved.
Is there a way to save the position only when movement starts?
Thank you!

You can use Rappid's dia.CommandManager to travel the history. This includes element movements.
CommandManager keeps track of graph changes and allows you to travel
the history of those changes back and forth. There is no limitation
put into the number of levels one can undo/redo.
Installation: include joint.dia.command.js in your HTML:
<script src="joint.dia.command.js"></script>
Creating CommandManager
var graph = new joint.dia.Graph;
var paper = new joint.dia.Paper({ el: $('#paper'), width: 500, height: 500, model: graph });
var commandManager = new joint.dia.CommandManager({ graph: graph });
$('#undo-button').click(function() { commandManager.undo(); });
$('#redo-button').click(function() { commandManager.redo(); });
(source: http://resources.jointjs.com/docs/rappid/v2.1/dia.html)
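If you only need the element's starting position and don't want to pull in Rappid, here is a plain JointJS/Backbone sketch (the movedFrom map and the event wiring are my own assumption, not part of the docs quoted above): remember the position on the element's first 'change:position' event and read it back on pointer up.
var movedFrom = {};

graph.on('change:position', function(element) {
  // remember the position the element had before its first move step
  if (!(element.id in movedFrom)) {
    movedFrom[element.id] = element.previous('position');
  }
});

paper.on('cell:pointerup', function(cellView) {
  var start = movedFrom[cellView.model.id];
  if (start) {
    // the element was actually dragged; 'start' is where it began
    console.log('moved from', start, 'to', cellView.model.get('position'));
  }
  delete movedFrom[cellView.model.id];
});
A plain click never fires 'change:position', so nothing is recorded for it.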

How to repeat Paths in Free Drawing?

Please look at
https://stackblitz.com/edit/js-wxidql
Here is my code.
import { fabric } from "fabric";
const canvas = new fabric.Canvas("c", { isDrawingMode: true });
canvas.setBackgroundColor("rgb(255,73,64)", canvas.renderAll.bind(canvas));
canvas.on("path:created", e => {
let mousePath = e.path;
let offsetPath = new fabric.Path();
let offsetLeft = mousePath.left + 60;
let offsetTop = mousePath.top + 60;
offsetPath.setOptions({
path: mousePath.path,
left: offsetLeft,
top: offsetTop,
width: mousePath.width,
height: mousePath.height,
fill: '',
stroke: 'black'
});
canvas.add(offsetPath);
canvas.requestRenderAll();
});
Here is a resulting image of my drawing session.
I drew the happy face in the top left corner of the canvas with my mouse.
The offset image was added by my code.
How can I change my code to make the offset drawing look like the one drawn with my mouse?
Edit: from https://github.com/fabricjs/fabric.js/wiki/When-to-call-setCoords
I tried using
offsetDraw.setCoords();
but I was unable to find a way to make it work.
Edit: What I have presented here is a minimized example. I am working on a larger project where I save each path drawn by the user. Later I recreate those paths in an animation-like sequence.
Edit: I made some changes to my code in an effort to understand fabricjs.
const canvas = new fabric.Canvas("c", { isDrawingMode: true });
canvas.setBackgroundColor("rgb(255,73,64)", canvas.renderAll.bind(canvas));
canvas.on("path:created", e => {
let mousePath = e.path;
let offsetPath = new fabric.Path();
let offsetLeft = mousePath.left + 60;
let offsetTop = mousePath.top + 60;
offsetPath.setOptions({
path: mousePath.path,
//left: offsetLeft,
//top: offsetTop,
width: mousePath.width,
height: mousePath.height,
fill: '',
stroke: 'black'
});
canvas.add(offsetPath);
canvas.requestRenderAll();
console.log("mousePath left " + mousePath.left + " top " + mousePath.top);
console.log("offsetPath left " + offsetPath.left + " top " + offsetPath.top);
});
In that code, I commented out the setting of the left and top properties of the offsetPath and added console.log lines. I drew a circle in the top left corner of the canvas with my mouse. The resulting image was the following.
The following was printed in the console.
mousePath left 7.488148148148148 top 10.5
offsetPath left -0.5 top -0.5
I don't understand the results. Why was the offset circle rendered in that position?
Edit: I drew another test with my mouse.
It seems that the code repeats the paths of concentric circles rather well. But, the vertical lines are moved out of their correct position. Their displacements increase as the distance from the center increases. Why?
I found a solution for my own question. But, if someone has a better one, then please post it. Look at
https://stackblitz.com/edit/js-ncxvpg
The following is the code.
const canvas = new fabric.Canvas("c", { isDrawingMode: true });
canvas.setBackgroundColor("rgb(255,73,64)", canvas.renderAll.bind(canvas));
canvas.on("path:created", e => {
let mousePath = e.path;
mousePath.clone(clone => {
clone.setOptions({
left: mousePath.left + 100,
top: mousePath.top + 100
});
canvas.add(clone);
});
});
If someone has an explanation as to why my original code didn't work, then please post it.
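One likely explanation (an assumption on my part, not verified against the fabric.js source): fabric.Path computes its width, height and internal pathOffset from the path data given to its constructor, and assigning path afterwards through setOptions() skips that calculation, so the copy is positioned using stale dimensions. A sketch that passes the path data straight to the constructor instead:
canvas.on("path:created", e => {
  let mousePath = e.path;
  // Constructing the path with its data lets fabric compute dimensions and pathOffset
  let offsetPath = new fabric.Path(mousePath.path, {
    left: mousePath.left + 60,
    top: mousePath.top + 60,
    fill: '',
    stroke: 'black'
  });
  canvas.add(offsetPath);
  canvas.requestRenderAll();
});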
Thanks for sharing your comments on my question. I am facing a similar issue, except that if I click anywhere on the canvas once, the offset issue vanishes. This happens for any object that I programmatically insert into the canvas.
In my case, I suspect that the canvas pointer has some offset from the current mouse position. However, when I click and add one very small path of 1 pixel, the pointer offset gets corrected. After this, all subsequent paths or objects created programmatically are shown at the correct positions.
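If the stale offset described above is fabric's cached position of the canvas element (which it uses to translate mouse coordinates), refreshing that cache explicitly might remove the need for the dummy click. This is a guess about the cause, but calcOffset() itself is a standard fabric.Canvas method:
// Recompute fabric's cached canvas offset after the layout may have changed
// (assumption: the stale pointer offset described above is this cache)
canvas.calcOffset();
window.addEventListener('resize', () => canvas.calcOffset());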

Using Konvajs, I need to drag an object that is under another object WITHOUT bringing it to the top

Using Konvajs, I need to drag an object that is under another object WITHOUT bringing the bottom object to the top; the top object is NOT draggable.
Would appreciate any help, thank you.
Needing to detect clicks in stacked objects is a common requirement. Fortunately Konva makes this easy. Every shape will be 'listening' for events by default. So mouse over, mouse exit, click, double click, and their touch counterparts are all available to us (see docs).
In your case you need to make the shape that is higher up the stack stop listening so that the one further down the stack can receive the event.
In the example below I am using the shape.listening(value) property which controls whether the shape is listening for events (true value) or not (false value).
Looking at the snippet, if we click the green spot (where the rectangles overlap) with the upper shape set to listening(false), events pass through it to the next shape in the stack and we are informed that the click was on the lower shape. With the upper shape set to listening(true), the upper shape receives and eats the events.
I am also using an efficient approach to event detection, which is to listen for events on the layer rather than setting listeners per shape. In the case of a few shapes on the layer, using event listeners per shape is not going to be an issue, but when you have hundreds of shapes on the layer it will affect performance.
Of course delegating the listener to the layer leaves a requirement to recognise which shape was clicked. To do this I could have used logic to match on the shape names.
// when the user starts to drag shape...
layer.on('click', function(evt) {
  if (evt.target.hasName('upperRect')) {
    alert('Click on upperRect');
  }
  if (evt.target.hasName('lowerRect')) {
    alert('Click on lowerRect');
  }
  ...
})
but since all I need to do here is show the name of the clicked shape, I chose to simply use
// when the user starts to drag shape...
layer.on('click', function(evt) {
  showEvent('Click on ' + evt.target.name());
})
Finally it is worth noting that another approach to getting the clicked shape is to use the layer.getIntersection() method which requires a point parameter and will return the first shape (not a list of shapes!) that is on the layer at that point.
layer.on("click", function() {
var target = layer.getIntersection(stage.getPointerPosition());
if (target) {
alert('clicked on ' + target.name());
}
});
Bonus note: layer.on('click') does not detect clicks on the empty part of the layer.
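If you do need to catch clicks on empty space as well, one option (a sketch, separate from the demo snippet below) is to listen on the stage instead; when nothing on any layer is hit, Konva sets evt.target to the stage itself.
// Hypothetical addition to the demo below: catch clicks on empty space too
stage.on('click', function(evt) {
  if (evt.target === stage) {
    showEvent('Click on empty space');
  } else {
    showEvent('Click on ' + evt.target.name());
  }
});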
// Set up a stage
stage = new Konva.Stage({
    container: 'container',
    width: window.innerWidth,
    height: window.innerHeight
  }),
  // add a layer to draw on
  layer = new Konva.Layer(),
  // make the lower rect
  rectLower = new Konva.Rect({
    name: 'Lower rect',
    x: 40,
    y: 20,
    width: 100,
    height: 40,
    stroke: 'cyan',
    fill: 'cyan',
    fillEnabled: true
  }),
  // Make the upper rect
  rectUpper = rectLower.clone({
    name: 'Upper rect',
    stroke: 'magenta',
    fill: 'magenta',
    x: 0,
    y: 0
  }),
  // add a click target for user
  spot = new Konva.Circle({
    x: 80,
    y: 30,
    fill: 'lime',
    radius: 10,
    listening: false
  });
// Add the layer to the stage and shapes to layer
stage.add(layer);
layer.add(rectLower, rectUpper, spot);
// when the user clicks a shape...
layer.on('click', function(evt) {
  showEvent('Click on ' + evt.target.name());
})
// Start / stop listening for events on the rect when checkbox changes.
$('#upperListening').on('change', function() {
  switch ($(this).is(':checked')) {
    case true:
      rectUpper.listening(true);
      showEvent('Upper rect <b>is</b> listening...');
      break;
    case false:
      rectUpper.listening(false);
      showEvent('Upper rect <b>is not</b> listening...');
      break;
  }
  // Important - if we change listening shapes we have to refresh the hit checking list.
  layer.drawHit();
})
stage.draw();
function showEvent(msg) {
  $('#info').html(msg)
}
body {
  margin: 10px;
  padding: 10px;
  overflow: hidden;
  background-color: #f0f0f0;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://unpkg.com/konva#^3/konva.min.js"></script>
<p>Click the green dot. Watch the events. Click the tickbox to stop upper rect catching events.</p>
<p><input type='checkbox' id='upperListening' checked='checked' ><label for='upperListening'>Upper is listening</label>
<p id='info'>Events show here</p>
<div id="container"></div>

Using OpenSeadragon, how can I set it to load the image at a specific set of coordinates in the upper left corner?

I am using OpenSeadragon to display a large image so that it scrolls infinitely as a mosaic. This is working fine, with the code listed below. However, the initial zoom level varies when opened in Chrome, Firefox or Opera, and the image is displayed from a random position within the image, instead of having the desired coordinates in the upper left corner.
Two related questions:
Are there properties to be set to specify the coordinates of the image that should be displayed in the upper left corner when the image is first loaded?
Is there a property to specify the zoom level when the image is first loaded? I set defaultZoomLevel to 0.6, but each browser seems to react differently to it, with Chrome being the only one to get it about right and the other two showing a much more zoomed-out image.
Thanks for any help!
<body>
  <div id="openseadragon1" style="width: 6560px; height: 3801px;"></div>
  <script src="/openseadragon/openseadragon.min.js"></script>
  <script type="text/javascript">
    var viewer = OpenSeadragon({
      id: "openseadragon1",
      showNavigationControl: false,
      wrapHorizontal: true,
      wrapVertical: true,
      visibilityRatio: 0,
      zoomPerScroll: 1.2,
      defaultZoomLevel: 0.6,
      minZoomImageRatio: 0.6,
      maxZoomPixelRatio: 2.5,
      prefixUrl: "/openseadragon/images/",
      tileSources: {
        type: 'image',
        url: '/myBigImage.png'
      }
    });
    // -------------------------------------
    // Edit based on iangilman's reply
    // -------------------------------------
    viewer.addHandler('open', function() {
      var tiledImage = viewer.world.getItemAt(0);
      var imageRect = new OpenSeadragon.Rect(10, 50, 1000, 1000);
      var viewportRect = tiledImage.imageToViewportRectangle(imageRect);
      viewer.viewport.fitBounds(viewportRect, true);
    });
  </script>
</body>
Yes, the zoom levels in OpenSeadragon are relative to the width of the viewport (a zoom of 1 means the image is exactly filling the viewport width-wise), which is probably why you're getting different results on different devices. If you want to choose a specific portion of the image, you'll have to convert from image coordinates to viewport coordinates. Either way, your best bet for reliable results is to explicitly choose the location after the image loads. For example:
viewer.addHandler('open', function() {
  // Assuming you are interested in the first image in the viewer (or you only have one image)
  var tiledImage = viewer.world.getItemAt(0);
  var imageRect = new OpenSeadragon.Rect(0, 0, 1000, 1000); // Or whatever area you want to focus on
  var viewportRect = tiledImage.imageToViewportRectangle(imageRect);
  viewer.viewport.fitBounds(viewportRect, true);
});
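If you specifically want a given image coordinate pinned to the upper left corner at a chosen zoom, rather than fitting a rectangle, one way to sketch it (with the coordinates and the 0.6 zoom as placeholders) is to set the zoom first and then pan so the top left of the viewport bounds lands on that point:
viewer.addHandler('open', function() {
  var tiledImage = viewer.world.getItemAt(0);

  // Image coordinates you want in the upper left corner (placeholder values)
  var topLeft = tiledImage.imageToViewportCoordinates(0, 0);

  viewer.viewport.zoomTo(0.6, null, true);

  // panTo() takes the desired center, so shift by half the viewport bounds
  var bounds = viewer.viewport.getBounds();
  viewer.viewport.panTo(new OpenSeadragon.Point(
    topLeft.x + bounds.width / 2,
    topLeft.y + bounds.height / 2
  ), true);
});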

OpenLayers polygon change color

I'm using OL3 and JavaScript to draw several polygons on a map. Each polygon comes from a database in WKT format like "POLYGON((39 -9, ....))". I can draw them on the map, but I want to change the fill color of each one and don't know how to do it.
Here is my code:
//WKTpoly -> this is my array of POLYLINES
var format = new ol.format.WKT();
var vectorArea = new ol.source.Vector({});
for (var i = 0; i < WKTpoly.length; i++) {
  var featureGeom = format.readFeature(WKTpoly[i]);
  featureGeom.getGeometry().transform('EPSG:4326', 'EPSG:3857');
  vectorArea.addFeature(featureGeom);
}
VectorMap = new ol.layer.Vector({
  name: map,
  source: vectorArea,
});
map.addLayer(VectorMap);
Well, after LessThanJake's response and some more Google searching, I found the solution: I had to create a style and call setStyle() before addFeature():
(...)
var style = new ol.style.Style({
  fill: new ol.style.Fill({
    color: FillColor,
    weight: 1
  }),
  stroke: new ol.style.Stroke({
    color: LineColor,
    width: 1
  })
});
featureGeom.setStyle(style);
(...)
Thanks LessThanJake for pointing me in the right direction.
Or you can set up the layer to use a function as its style.
The function signature is:
var makeStyle = function(feature, resolution) {
  return [styles];
};
You can use this to manage style by feature and resolution (zoom level).
As the function is called on every feature render, you'll want to cache the styles in a JS object for performance.
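A minimal sketch of that cached style function, assuming each feature carries a 'fillColor' property (the property name is an assumption about your data, not part of the OL3 API):
var styleCache = {};

var makeStyle = function(feature, resolution) {
  // Pick the colour stored on the feature, with a fallback for features without one
  var color = feature.get('fillColor') || 'rgba(0, 128, 255, 0.4)';
  if (!styleCache[color]) {
    styleCache[color] = new ol.style.Style({
      fill: new ol.style.Fill({ color: color }),
      stroke: new ol.style.Stroke({ color: '#333', width: 1 })
    });
  }
  return [styleCache[color]];
};

// Pass the function as the layer's style instead of calling setStyle() per feature
VectorMap = new ol.layer.Vector({
  source: vectorArea,
  style: makeStyle
});
Each polygon then gets its own fill while features of the same colour share one style object.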

Detect Graphical Regions in an Image on a Web Page

I've been beating around the bush a lot, so I'll explain my problem here and hope that, with the whole picture, somebody has some ideas. With the following image:
I need to detect a mouseover on the blobs over her eyes and mouth, and solve this problem in a general form. The model and blobs are on two different layers, so I can produce one image with only the blobs, and one with only the model, and somehow synchronise a virtual cursor over the blobs while it actually hovers over the model.
I can also make the blobs polygons, for hit testing, but I think a colour hit test would be much easier. If I hit blue, I am on her mouth and I show lipstick images; if I hit pink, I'm over her eyes, and display eye makeup images.
What are the suggestions and conversation of the learned ones here?
The simplest way to do it would be to load the layer image into a canvas, then get all its pixel data. When the mouse is hovering over the model image, find out what color is currently under the cursor and, if it is different from the previous one, trigger an event to indicate that the selection has changed.
Here is an example, feel free to toy with it; but be aware that it doesn't handle all cases:
what if the layer and the model image are not the same size
what if the layer and the model image are not the same width/height ratio
what if you want to use some alpha channel (the example doesn't take it into account)
$(function() {
  /* we load all the image data first */
  var imageData = null;
  var layerImage = new Image();
  layerImage.onload = function() {
    var canvas = document.createElement("canvas");
    canvas.width = this.width;
    canvas.height = this.height;
    context = canvas.getContext('2d');
    context.drawImage(this, 0, 0, canvas.width, canvas.height);
    imageData = context.getImageData(0, 0, canvas.width, canvas.height).data;
  };
  /* it's easier to set the image data for example as base64 data */
layerImage.src = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAYAAACqaXHeAAAABGdBTUEAALGPC/xhBQAAAAlwSFlzAAAOwgAADsIBFShKgAAAABp0RVh0U29mdHdhcmUAUGFpbnQuTkVUIHYzLjUuMTFH80I3AAAA5klEQVR4Xu3WMQ7CMAwF0IxcgCtw/xsGdWApItWHKorLQ8qESe2HbWjNiwABAgQIECBAgAABAgQIECBAgMAUgd4e/ehMSeTMh4wK2j/nqPjt/TNzm3IXgEFb64CdgBG44hKcsmg8pKDAvbf+7SlY7nvK3xb/+lx5BAA/jMCGpwOqC/z9CFT/ApfP/9Z6T87yBaUJJsVvsen9y8cDMAJ2gCWY7IHll1qaYFL84a9AelnF+CFwxYLSnAGMBFLNivE6QAcMBCq2dJqzETACRuCzQDpPFePTv9riCRAgQIAAAQIECBC4nsATagY67TVyuhAAAAAASUVORK5CYII=";
  var pColor = null;
  /* on mouse over the model image */
  $("#model").mousemove(function(event) {
    /* we correct the offset */
    var offset = $(this).offset();
    var relX = event.pageX - offset.left;
    var relY = event.pageY - Math.round(offset.top);
    /* and get the pixel values at this place (note we are not keeping the alpha channel; it's your decision whether or not it is valuable) */
    var pixelIndex = relY * layerImage.width + relX;
    var dataIndex = pixelIndex * 4;
    var color = [imageData[dataIndex], imageData[dataIndex + 1], imageData[dataIndex + 2]];
    if (pColor == null) {
      /* we trigger when first entering the image */
      $(this).trigger("newColor", {
        message: "Initial layer color",
        data: color
      });
    } else if (pColor[0] != color[0] || pColor[1] != color[1] || pColor[2] != color[2]) {
      /* we trigger if the new position is a new color in the layer image */
      $(this).trigger("newColor", {
        message: "Changed layer color",
        data: color
      });
    }
    pColor = color;
  });
  /* some small help to convert rgb to css colors */
  function rgb2hex(red, green, blue) {
    var rgb = blue | (green << 8) | (red << 16);
    return '#' + (0x1000000 + rgb).toString(16).slice(1)
  }
  /* there you have the new layer color event management; for the example sake we change the color of some text */
  $("#model").on("newColor", function(event, eventData) {
    $("#selector").css("color", rgb2hex(eventData.data[0], eventData.data[1], eventData.data[2]));
  });
});
img {
  border: 1px solid silver
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.0/jquery.min.js"></script>
<body>
<h4>Model image</h4>
<img id="model" src="data:image/png;base64, iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAAadEVYdFNvZnR3YXJlAFBhaW50Lk5FVCB2My41LjExR/NCNwAAAtdJREFUaEPtmS2PwkAQhhGQEASEBIECFAJBgoCAxpDgsWBxSP4lPwGJRCLv5m4uk6Xt7s5He9dLFst2+z7vfG3bxsc//zX+uf6PBPDXEUwRSBEwOpBSyGig+fIUAbOFxg3KjMD1em3EfqvV6vl8GkW7l5cJsF6vY/p//i8RQwbwer22221U5W63K/T4fr93u1263LdMFB8ZAEc96BsOhwER5WIIAMB+NO92u4lMKlw8m80oFJaMEgCg/ZADdvW0A2GMRiPdtgKAEu13tUJG4c667sQFoBap8yl8FdiPDIqyZgFQ9oerU83mlrV0ExaAJfshdKfTiSMLg8BZKR5k4ewPS4TpNplMOLKqAqD88YngSwxjVAVgyR+O8bSmKoCKumeerVoAkZe6xQnA45vCGH7rfGuIFbVRBUDgvO2bCefzWXEj4I8PDtyXOYzQ0cVi4WMonAmWSR8HIDWbzUZ33grXNKifTqcA3Ov1FNUfB3AdhdOv4h6+S0D6fr/HWLVaLd1jBgsAFNChV5RLAVpKelDfbrd16lk1QCLczLZjoPEW6SiMG4FAdUphXO/tCSkD8GEwi9uVDvZLny6gZg6HQ8YvDYBr23g8DnR9319S6XjH5XIJGw4Gg7fxZwwiFDfsyGfQSQeRvnci1ghE+XXz1d3W7bb5WVF3ABpzvpZVdwDM+8CYqyOAmzPRx6l6ARyPx3w/CNd9jQDgNEHq+RO6LgCgvt/vA8Dlcol2tjLnQPRmhW00n+W4rNPpPB6P6J5cAHofqnhlSffIAIDT8/k8n+jNZhO8l6qPHOYKPxlJ3+Wj1swpqJRzKPc06n6JIOf4GBmz1U778kpWxJmvQ9EjEOQergEX1KegcEnIAHAvPgaIRgDpMwO/jjUA/N1hJT3Hia7iL64c4KtRfP/4mkQrq9r3rVUngEBMUgQYCZtqIGRSSqGUQgwHUgoZTSrrQ3KhjN8oYiN/+PJPqpb83Htu7qcAAAAASUVORK5CYII="
/>
<p>You are pointing at some <strong><span id="selector">color</span></strong>
</p>
<hr/>
<h4>Layer image (reference only, not displayed in page)</h4>
<img id="layer" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAYAAACqaXHeAAAABGdBTUEAALGPC/xhBQAAAAlwSFlzAAAOwgAADsIBFShKgAAAABp0RVh0U29mdHdhcmUAUGFpbnQuTkVUIHYzLjUuMTFH80I3AAAA5klEQVR4Xu3WMQ7CMAwF0IxcgCtw/xsGdWApItWHKorLQ8qESe2HbWjNiwABAgQIECBAgAABAgQIECBAgMAUgd4e/ehMSeTMh4wK2j/nqPjt/TNzm3IXgEFb64CdgBG44hKcsmg8pKDAvbf+7SlY7nvK3xb/+lx5BAA/jMCGpwOqC/z9CFT/ApfP/9Z6T87yBaUJJsVvsen9y8cDMAJ2gCWY7IHll1qaYFL84a9AelnF+CFwxYLSnAGMBFLNivE6QAcMBCq2dJqzETACRuCzQDpPFePTv9riCRAgQIAAAQIECBC4nsATagY67TVyuhAAAAAASUVORK5CYII="
/>
</body>
If you can have both images (the one with the blob and the one without), I think you can do this using HTML5 canvas.
draw the image normally
draw the blob image beneath the master image so it is invisible
copy the blob to a Canvas
onMouseOver, retrieve pixel data (R,G,B and alpha) for the Canvas at the appropriate coordinates
profit
Twist: you might be able to do this with only one image and its alpha channel, if you don't need it for anything else - give the pixels a full opacity (A=255) everywhere except in blobs 1, 2 and 3, which will have opacity equal to 255-(1,2,3...). You can't have too many different blobs or the transparency will become noticeable. Haven't tried, but it should work. Given the likely compressibility of a "blob-only" image, a pair of images (one without transparency, one also without transparency and with only N+1 colours, PNG compressed) should yield better results.
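A tiny sketch of that alpha-channel idea (untested, as the answer says), assuming blob number k is painted with alpha 255 - k and the blob layer has already been drawn to a canvas context:
// Returns the blob id encoded in the alpha channel at (x, y); 0 means "not on a blob"
function blobIdAt(ctx, x, y) {
  var pix = ctx.getImageData(x, y, 1, 1).data; // [r, g, b, a]
  return 255 - pix[3];
}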
More-or-less-pseudo code with two images, using jQuery (can be done without):
var image = document.getElementById('mainImage');
var blobs = document.getElementById('blobImage');
// Create a canvas
canvas = $('<canvas/>')[0];
canvas.width = image.width;
canvas.height = image.height;
// IMPORTANT: for this to work, this script and blobImage.src must be both
// in the same security domain, or you'll get "this operation is insecure"
canvas.getContext('2d').drawImage(blobs, 0, 0, image.width, image.height);
// Now wait for it.
$('#mainImage').mouseover(function(event) {
  // TO DO: offset clientX, clientY by margin on mainImage
  var ctx = canvas.getContext('2d');
  // Get one pixel
  var pix = ctx.getImageData(event.clientX, event.clientY, 1, 1);
  // Retrieve the red component
  var red = pix.data[0];
  if (red > 128) {
    // ... do something for red
  }
});
You could use SVG graphics to layer over the image.
My example uses an ellipse, but you could use polygons just as easily.
You could use the colour as you stated in your question, or add an extra property to the svg element. The example uses onclick, but mouseover works as well.
example js:
function svg_clicked(objSVG) {
  alert(objSVG.style.fill);
  alert(objSVG.getAttribute('data-category'));
}
example svg:
<svg xmlns="http://www.w3.org/2000/svg" version="1.1">
  <ellipse cx="110" cy="80" rx="100" ry="50" style="fill:red;" onclick="svg_clicked(this);" data-category="lipstick" />
</svg>
Here's a fiddle (move the mouse over the O's in the picture)
It still works if you make the svg element transparent (using fill:transparent).
You can change the overlay to a colour or outline quickly for testing.
I highly recommend the time-tested method.
The easiest way to both create blobs and to detect if the mouse is over them is to use SVG graphics on top of the other image. SVG supports mouseover events and allows vector shapes, which will give you far greater precision than using <map> or <area>.
I found this question that might also shed some light on where I am coming from: Hover only on non-transparent part of image. Read the second answer down because it will likely be preferred in your situation.
The svg elements on your image would be transparent (or whatever you would want), and you could easily detect the mouse over events.
The library from that question is called Raphael. Hopefully this proves to be useful.
