I have just upgraded from Fabric.js 1.7 to 2 and now the image object behaves differently. See the screenshot: the image the arrow points at completely ignores the width I set on it; it looks like it is actually scaling based on the given height to keep the image's aspect ratio. I don't want this to happen, the image needs to stretch to the size I tell it to.
Does anyone have any idea how to stop it doing this? If I set a width in the options for the image object, I expect it to respect those dimensions. It should stretch to fill where the red box is.
This happens when loading the image (initially a square) and setting, for example, {width: 1000, height: 400}; instead it looks like it takes the height and scales the width down to keep the image square.
You need to set scaleX for width and scaleY for height. It's a breaking change for v2.
DEMO
var canvas = new fabric.Canvas('c');
var index = 0,
    json;
var url = '//fabricjs.com/assets/pug.jpg';
fabric.Image.fromURL(url, function(img) {
  var elWidth = img.naturalWidth || img.width;
  var elHeight = img.naturalHeight || img.height;
  img.set({
    scaleX: 200 / elWidth,
    scaleY: 200 / elHeight
  });
  canvas.add(img);
});
canvas {
  border: 2px solid #000;
}
<script type="text/javascript" src="
https://rawgit.com/kangax/fabric.js/master/dist/fabric.js"></script>
<canvas id="c" width="400" height="400"></canvas>
I am using OpenSeadragon to display a large image so that it scrolls infinitely as a mosaic. This works fine with the code listed below. However, the initial zoom level varies when opened in Chrome, Firefox or Opera, and the image is displayed from a random position within the image, instead of having the desired coordinates in the upper left corner.
Two related questions:
Are there properties to be set to specify the coordinates of the image that should be displayed in the upper left corner when the image is first loaded?
Is there a property to specify the zoom level when the image is first loaded? I set defaultZoomLevel to 0.6 but each browser seems to react differently to it, with Chrome being the only one to get it about right and the other two showing a much more zoomed-out image.
Thanks for any help!
<body>
  <div id="openseadragon1" style="width: 6560px; height: 3801px;"></div>
  <script src="/openseadragon/openseadragon.min.js"></script>
  <script type="text/javascript">
    var viewer = OpenSeadragon({
      id: "openseadragon1",
      showNavigationControl: false,
      wrapHorizontal: true,
      wrapVertical: true,
      visibilityRatio: 0,
      zoomPerScroll: 1.2,
      defaultZoomLevel: 0.6,
      minZoomImageRatio: 0.6,
      maxZoomPixelRatio: 2.5,
      prefixUrl: "/openseadragon/images/",
      tileSources: {
        type: 'image',
        url: '/myBigImage.png'
      }
    });

    // -------------------------------------
    // Edit based on iangilman's reply
    // -------------------------------------
    viewer.addHandler('open', function() {
      var tiledImage = viewer.world.getItemAt(0);
      var imageRect = new OpenSeadragon.Rect(10, 50, 1000, 1000);
      var viewportRect = tiledImage.imageToViewportRectangle(imageRect);
      viewer.viewport.fitBounds(viewportRect, true);
    });
  </script>
</body>
Yes, the zoom levels in OpenSeadragon are relative to the width of the viewport (a zoom of 1 means the image is exactly filling the viewport width-wise), which is probably why you're getting different results on different devices. If you want to choose a specific portion of the image, you'll have to convert from image coordinates to viewport coordinates. Either way, your best bet for reliable results is to explicitly choose the location after the image loads. For example:
viewer.addHandler('open', function() {
  // Assuming you are interested in the first image in the viewer (or you only have one image)
  var tiledImage = viewer.world.getItemAt(0);
  var imageRect = new OpenSeadragon.Rect(0, 0, 1000, 1000); // Or whatever area you want to focus on
  var viewportRect = tiledImage.imageToViewportRectangle(imageRect);
  viewer.viewport.fitBounds(viewportRect, true);
});
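If you would rather pin a specific zoom level and position rather than fitting a rectangle, roughly this should work too (a sketch only; the 0.6 zoom and the image point are placeholders):

viewer.addHandler('open', function() {
  var tiledImage = viewer.world.getItemAt(0);

  // Convert an image-pixel point (here the image's top-left corner, 0,0)
  // into viewport coordinates.
  var topLeft = tiledImage.imageToViewportCoordinates(0, 0);

  // Zoom is relative to the viewport width: 1 means filling it width-wise.
  viewer.viewport.zoomTo(0.6, null, true);

  // panTo centers the given point; offset it if you want it in the upper-left
  // corner of the viewport instead.
  viewer.viewport.panTo(topLeft, true);
});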
I'm using loadSVGFromURL to fill a canvas with this SVG.
As you can see in the link, I have a heather effect on my shirt, along with some shadows. Plus, my SVG style applies mix-blend-mode: multiply; to my paths.
Unfortunately, once rendered in my canvas, it seems like the paths' CSS is not taken into account:
How can I make sure that this style is applied?
Here is an example. Basically you need to map mix-blend-mode to globalCompositeOperation.
var site_url = 'http://s3.eu-central-1.amazonaws.com/balibart-s3/SVGMockups2/59f32980b5d8493ef7f29904/front/Layer.svg';
canvas = new fabric.Canvas('canvas');
fabric.loadSVGFromURL(site_url, function(objects) {
  var group = new fabric.Group(objects, {
    left: 165,
    top: 100,
  });
  canvas.add(group);
  group._objects[3].globalCompositeOperation = 'multiply';
  group._objects[2].globalCompositeOperation = 'multiply';
  group._objects[4].globalCompositeOperation = 'multiply';
  group._objects[5].globalCompositeOperation = 'multiply';
  group._objects[6].globalCompositeOperation = 'multiply';
  /*
  for (var i = 0; i < objects.length; i++) {
    canvas.add(objects[i]);
  }
  canvas.getObjects()[5].globalCompositeOperation = 'multiply';
  */
  // canvas.add(objects);
  canvas.renderAll();
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/fabric.js/2.4.6/fabric.js"></script>
<canvas id='canvas' width="900" height="900"></canvas>
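If every path should get the same blend mode, the hard-coded indices can be replaced by a loop; a minimal sketch, assuming all parsed objects are meant to be multiplied (which may not hold for every SVG):

fabric.loadSVGFromURL(site_url, function(objects) {
  // Map the SVG's mix-blend-mode: multiply onto each parsed object.
  objects.forEach(function(obj) {
    obj.globalCompositeOperation = 'multiply';
  });
  var group = new fabric.Group(objects, { left: 165, top: 100 });
  canvas.add(group);
  canvas.renderAll();
});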
I am using fabric.js to implement image editing, and I am trying to use a fabric.Image object as the background image of the canvas to store the data. The following is the code:
var canvas = new fabric.Canvas('canvasId');
var imageObject = new fabric.Image($originImage);
canvas.add(imageObject);
but I found that the $originImage's size is much larger than both the canvas's size and the imageObject's size, so the canvas can only show part of the image. How can I stretch the $originImage to fit the canvas so that the canvas displays all of it?
Here is what I have done:
$canvas.width = $originImage.clientWidth;
$canvas.height = $originImage.clientHeight;
var fabricCanvas = new fabric.Canvas('canvasId');
// canvas.setFabricCanvas(fabricCanvas);
var imageObject = new fabric.Image($originImage);
// fabricCanvas.add(imageObject);
// fabricCanvas.isDrawingMode = true;
fabricCanvas.setBackgroundImage(imageObject, fabricCanvas.renderAll.bind(fabricCanvas), {
  scaleX: fabricCanvas.width / $originImage.naturalWidth,
  scaleY: fabricCanvas.height / $originImage.naturalHeight
});
Above is my related code, and below is the display:
It is solved now. The previous problem was that I resized the $originImage first; when I pass the image src as setBackgroundImage's parameter instead, it displays normally.
In version 2.0 of fabric.js, the way width/height are handled has changed compared to previous versions. If you apply width/height, it basically crops the image to that particular size relative to the original image size. In order to resize it to a proper size, you have to switch to scaleX/scaleY, as others have suggested.
For example, I use fabric.Image.fromURL to load an image and set it as the background:
fabric.Image.fromURL(imgUrl, function(img) {
  img.set({
    scaleX: currentWidth / img.width,
    scaleY: currentHeight / img.height,
    top: topPosition,
    left: leftPosition,
    originX: 'left', originY: 'top'
  });
  canvas.setBackgroundImage(img, canvas.renderAll.bind(canvas));
});
where currentWidth/currentHeight can be your canvas size if you want a full background, or a specific width/height of your choice (in my project I wanted a letter-boxed image, so I set them based on my letter-boxing algorithm), and top/left is where you want to place the image in the canvas (leave them out for a full background image). Do not set any width and height, since that will crop the scaled image rather than setting the image to the exact size.
If this does not work, check the backgroundImage object on the canvas and see whether its dimension and scale properties differ from a manual calculation. If you look, you will see that the width and height properties are the natural image width and height, but scaleX and scaleY will be less than 1 if your canvas is smaller than the image, and greater than 1 if it is bigger.
Also, I see you are loading the image directly into an image object. That might behave differently from loading the data and setting it directly with setBackgroundImage.
canvas.setBackgroundImage(img, canvas.renderAll.bind(canvas), {
  scaleX: canvas.width / $originalImg.naturalWidth,
  scaleY: canvas.height / $originalImg.naturalHeight
});
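For the letter-boxing mentioned above, a common approach is to take the smaller of the two ratios so the whole image fits without distortion; a rough sketch, assuming img is an already-loaded fabric.Image:

// Fit the image inside the canvas while preserving its aspect ratio.
var scale = Math.min(canvas.width / img.width, canvas.height / img.height);

canvas.setBackgroundImage(img, canvas.renderAll.bind(canvas), {
  scaleX: scale,
  scaleY: scale,
  // center the letter-boxed image inside the canvas
  top: (canvas.height - img.height * scale) / 2,
  left: (canvas.width - img.width * scale) / 2,
  originX: 'left',
  originY: 'top'
});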
Use scaleX and scaleY to resize:
(function() {
  var $originalImg = $('#originalImage')[0];
  console.dir($originalImg);
  var canvas = this.__canvas = new fabric.Canvas('c');
  var img = new fabric.Image($originalImg);
  canvas.setBackgroundImage(img, canvas.renderAll.bind(canvas), {
    scaleX: canvas.width / $originalImg.naturalWidth,
    scaleY: canvas.height / $originalImg.naturalHeight
  });
})();
canvas {
  border-width: 1px;
  border-style: solid;
  border-color: #000;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/fabric.js/2.0.0-rc.3/fabric.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<img id='originalImage' src='http://fabricjs.com/assets/pug_small.jpg'>
<canvas id='c' width=200 height=200></canvas>
I am using the following script to ensure the image is no larger than the canvas width:
var aspect = image.width / image.height; // Aspect ratio of the image
new fabric.Image(image, {
  scaleX: canvas.width / image.width,
  scaleY: canvas.width / (image.height * aspect)
});
When I rotate an image in fabric.js, the top-left corner's coordinates are not updated after the rotation. Instead, the top-left corner of the image still refers to the old point. I believe it should recalculate the top-left corner based on the image's new position. Is there a way to achieve this? Any help is appreciated!
Below is the code for image rotation:
function rotate() {
  activeCanvas.forEachObject(function(obj) {
    if (obj instanceof fabric.Image) {
      curAngle = obj.getAngle();
      obj.setAngle(curAngle - 90);
    }
  });
  activeCanvas.renderAll();
}
Now, after the rotation, I want the top-left coordinates of the newly rotated image, but it still returns the top and left from the old image state.
For example, let's say the image's top-left corner was originally at (100, 200) and the image's dimensions are 500x600. If I rotate the image by 90 degrees, the new dimensions are 600x500 and the top-left corner changes as well, since the image is rotated around its center. But the fabric.js image still refers to the old top-left corner. Is there a method, like setCoords(), to get the new upper-left corner point as its top-left?
As you can see from the snippet below, if you only rotate your object, only the bounding box is updated; you have to move your object for its top/left position to be updated.
var canvas = this.__canvas = new fabric.Canvas('c');
var rect = new fabric.Rect({
  left: 120,
  top: 30,
  width: 100,
  height: 100,
  fill: 'green',
  angle: 20
});
canvas.on("object:rotating", function() {
  var ao = canvas.getActiveObject();
  if (ao) {
    console.log('top and left are the same after rotation');
    console.log('top:' + ao.top);
    console.log('left:' + ao.left);
    console.log('but not the bounding box');
    var bound = ao.getBoundingRect();
    console.log('bounding box - top:' + bound.top);
    console.log('bounding box - left:' + bound.left);
  }
});
canvas.add(rect);
canvas.renderAll();
<script src="https://rawgit.com/kangax/fabric.js/master/dist/fabric.js"></script>
<script src="https://seikichi.github.io/tmp/PDFJS.0.8.715/pdf.min.js"></script>
<canvas id="c" style="border:1px solid black"></canvas>
I guess you are looking for something like this:
canvas.on('object:rotating', function(options) {
  options.target.setCoords();
  var left = options.target.oCoords.tl.x,
      top = options.target.oCoords.tl.y;
  console.log(left, top);
});
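Applied to the rotate() function from the question, that would look roughly like this (a sketch; it just calls setCoords() after changing the angle and reads the updated corner from oCoords):

function rotate() {
  activeCanvas.forEachObject(function(obj) {
    if (obj instanceof fabric.Image) {
      obj.setAngle(obj.getAngle() - 90);
      obj.setCoords(); // recompute the corner coordinates for the new angle
      console.log('new top-left:', obj.oCoords.tl.x, obj.oCoords.tl.y);
    }
  });
  activeCanvas.renderAll();
}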
I've been beating around the bush a lot, so I'll explain my problem here and hope that, with the whole picture, somebody has some ideas. With the following image:
I need to detect a mouseover on the blobs over her eyes and mouth, and solve this problem in a general form. The model and blobs are on two different layers, so I can produce one image with only the blobs, and one with only the model, and somehow synchronise a virtual cursor over the blobs while it actually hovers over the model.
I can also make the blobs polygons, for hit testing, but I think a colour hit test would be much easier. If I hit blue, I am on her mouth and I show lipstick images; if I hit pink, I'm over her eyes, and display eye makeup images.
What suggestions do the learned ones here have?
The simplest way to do it would be to load the layer image into a canvas, then get all of its pixel data. When the mouse hovers over the model image, find out which color is currently under it, and if it differs from the previous one, trigger an event to indicate that the selection has changed.
Here is an example, feel free to toy with it; but be aware that it doesn't handle all cases:
what if the layer and the model image are not the same size
what if the layer and the model image are not the same width/height ratio
what if you want to use some alpha channel (the example doesn't take it into account)
$(function() {
  /* we load all the image data first */
  var imageData = null;
  var layerImage = new Image();
  layerImage.onload = function() {
    var canvas = document.createElement("canvas");
    canvas.width = this.width;
    canvas.height = this.height;
    var context = canvas.getContext('2d');
    context.drawImage(this, 0, 0, canvas.width, canvas.height);
    imageData = context.getImageData(0, 0, canvas.width, canvas.height).data;
  };
  /* it's easier to set the image data for example as base64 data */
layerImage.src = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAYAAACqaXHeAAAABGdBTUEAALGPC/xhBQAAAAlwSFlzAAAOwgAADsIBFShKgAAAABp0RVh0U29mdHdhcmUAUGFpbnQuTkVUIHYzLjUuMTFH80I3AAAA5klEQVR4Xu3WMQ7CMAwF0IxcgCtw/xsGdWApItWHKorLQ8qESe2HbWjNiwABAgQIECBAgAABAgQIECBAgMAUgd4e/ehMSeTMh4wK2j/nqPjt/TNzm3IXgEFb64CdgBG44hKcsmg8pKDAvbf+7SlY7nvK3xb/+lx5BAA/jMCGpwOqC/z9CFT/ApfP/9Z6T87yBaUJJsVvsen9y8cDMAJ2gCWY7IHll1qaYFL84a9AelnF+CFwxYLSnAGMBFLNivE6QAcMBCq2dJqzETACRuCzQDpPFePTv9riCRAgQIAAAQIECBC4nsATagY67TVyuhAAAAAASUVORK5CYII=";
  var pColor = null;
  /* on mouse move over the model image */
  $("#model").mousemove(function(event) {
    /* we correct the offset */
    var offset = $(this).offset();
    var relX = event.pageX - offset.left;
    var relY = event.pageY - Math.round(offset.top);
    /* and get the pixel values at this place (note we are not keeping the alpha channel; it's your decision whether or not it is valuable) */
    var pixelIndex = relY * layerImage.width + relX;
    var dataIndex = pixelIndex * 4;
    var color = [imageData[dataIndex], imageData[dataIndex + 1], imageData[dataIndex + 2]];
    if (pColor == null) {
      /* we trigger when first entering the image */
      $(this).trigger("newColor", {
        message: "Initial layer color",
        data: color
      });
    } else if (pColor[0] != color[0] || pColor[1] != color[1] || pColor[2] != color[2]) {
      /* we trigger if the new position is a new color in the layer image */
      $(this).trigger("newColor", {
        message: "Changed layer color",
        data: color
      });
    }
    pColor = color;
  });

  /* some small help to convert rgb to css colors */
  function rgb2hex(red, green, blue) {
    var rgb = blue | (green << 8) | (red << 16);
    return '#' + (0x1000000 + rgb).toString(16).slice(1);
  }

  /* there you have the new layer color event management; for the example's sake we change the color of some text */
  $("#model").on("newColor", function(event, eventData) {
    $("#selector").css("color", rgb2hex(eventData.data[0], eventData.data[1], eventData.data[2]));
  });
});
img {
border: 1px solid silver
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.0/jquery.min.js"></script>
<body>
<h4>Model image</h4>
<img id="model" src="data:image/png;base64, iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAAadEVYdFNvZnR3YXJlAFBhaW50Lk5FVCB2My41LjExR/NCNwAAAtdJREFUaEPtmS2PwkAQhhGQEASEBIECFAJBgoCAxpDgsWBxSP4lPwGJRCLv5m4uk6Xt7s5He9dLFst2+z7vfG3bxsc//zX+uf6PBPDXEUwRSBEwOpBSyGig+fIUAbOFxg3KjMD1em3EfqvV6vl8GkW7l5cJsF6vY/p//i8RQwbwer22221U5W63K/T4fr93u1263LdMFB8ZAEc96BsOhwER5WIIAMB+NO92u4lMKlw8m80oFJaMEgCg/ZADdvW0A2GMRiPdtgKAEu13tUJG4c667sQFoBap8yl8FdiPDIqyZgFQ9oerU83mlrV0ExaAJfshdKfTiSMLg8BZKR5k4ewPS4TpNplMOLKqAqD88YngSwxjVAVgyR+O8bSmKoCKumeerVoAkZe6xQnA45vCGH7rfGuIFbVRBUDgvO2bCefzWXEj4I8PDtyXOYzQ0cVi4WMonAmWSR8HIDWbzUZ33grXNKifTqcA3Ov1FNUfB3AdhdOv4h6+S0D6fr/HWLVaLd1jBgsAFNChV5RLAVpKelDfbrd16lk1QCLczLZjoPEW6SiMG4FAdUphXO/tCSkD8GEwi9uVDvZLny6gZg6HQ8YvDYBr23g8DnR9319S6XjH5XIJGw4Gg7fxZwwiFDfsyGfQSQeRvnci1ghE+XXz1d3W7bb5WVF3ABpzvpZVdwDM+8CYqyOAmzPRx6l6ARyPx3w/CNd9jQDgNEHq+RO6LgCgvt/vA8Dlcol2tjLnQPRmhW00n+W4rNPpPB6P6J5cAHofqnhlSffIAIDT8/k8n+jNZhO8l6qPHOYKPxlJ3+Wj1swpqJRzKPc06n6JIOf4GBmz1U778kpWxJmvQ9EjEOQergEX1KegcEnIAHAvPgaIRgDpMwO/jjUA/N1hJT3Hia7iL64c4KtRfP/4mkQrq9r3rVUngEBMUgQYCZtqIGRSSqGUQgwHUgoZTSrrQ3KhjN8oYiN/+PJPqpb83Htu7qcAAAAASUVORK5CYII="
/>
<p>You are pointing at some <strong><span id="selector">color</span></strong>
</p>
<hr/>
<h4>Layer image (reference only, not displayed in page)</h4>
<img id="layer" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAYAAACqaXHeAAAABGdBTUEAALGPC/xhBQAAAAlwSFlzAAAOwgAADsIBFShKgAAAABp0RVh0U29mdHdhcmUAUGFpbnQuTkVUIHYzLjUuMTFH80I3AAAA5klEQVR4Xu3WMQ7CMAwF0IxcgCtw/xsGdWApItWHKorLQ8qESe2HbWjNiwABAgQIECBAgAABAgQIECBAgMAUgd4e/ehMSeTMh4wK2j/nqPjt/TNzm3IXgEFb64CdgBG44hKcsmg8pKDAvbf+7SlY7nvK3xb/+lx5BAA/jMCGpwOqC/z9CFT/ApfP/9Z6T87yBaUJJsVvsen9y8cDMAJ2gCWY7IHll1qaYFL84a9AelnF+CFwxYLSnAGMBFLNivE6QAcMBCq2dJqzETACRuCzQDpPFePTv9riCRAgQIAAAQIECBC4nsATagY67TVyuhAAAAAASUVORK5CYII="
/>
</body>
If you can have both images (the one with the blob and the one without), I think you can do this using HTML5 canvas.
draw the image normally
draw the blob image beneath the master image so it is invisible
copy the blob to a Canvas
onMouseOver, retrieve pixel data (R,G,B and alpha) for the Canvas at the appropriate coordinates
profit
Twist: you might be able to do this with only one image and its alpha channel, if you don't need it for anything else. Give the pixels full opacity (A=255) everywhere except in blobs 1, 2 and 3, which get opacity equal to 255-(1, 2, 3...). You can't have too many different blobs or the transparency will become noticeable. I haven't tried it, but it should work. Given the likely compressibility of a "blob-only" image, a pair of images (one without transparency, one also without transparency and with only N+1 colours, PNG compressed) should yield better results.
More-or-less pseudocode with two images, using jQuery (can be done without):
var image = document.getElementById('mainImage');
var blobs = document.getElementById('blobImage');

// Create a canvas
var canvas = $('<canvas/>')[0];
canvas.width = image.width;
canvas.height = image.height;

// IMPORTANT: for this to work, this script and blobImage.src must both be
// in the same security domain, or you'll get "this operation is insecure"
canvas.getContext('2d').drawImage(blobs, 0, 0, image.width, image.height);

// Now wait for it.
$('#mainImage').mouseover(function(event) {
  // TO DO: offset clientX, clientY by margin on mainImage
  var ctx = canvas.getContext('2d');
  // Get one pixel
  var pix = ctx.getImageData(event.clientX, event.clientY, 1, 1);
  // Retrieve the red component
  var red = pix.data[0];
  if (red > 128) {
    // ... do something for red
  }
});
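The single-image "twist" above would look roughly like this; a sketch only, assuming the blob IDs have been encoded into the alpha channel as described (the element IDs and blob numbering are illustrative):

// Hit-test using the alpha channel of a single image whose blobs were
// exported with alpha = 255 - blobId (255 everywhere else).
var img = document.getElementById('mainImage');
var hitCanvas = document.createElement('canvas');
hitCanvas.width = img.width;
hitCanvas.height = img.height;
var hitCtx = hitCanvas.getContext('2d');
hitCtx.drawImage(img, 0, 0, img.width, img.height);

$('#mainImage').mousemove(function(event) {
  var offset = $(this).offset();
  var x = Math.round(event.pageX - offset.left);
  var y = Math.round(event.pageY - offset.top);
  var alpha = hitCtx.getImageData(x, y, 1, 1).data[3];
  var blobId = 255 - alpha; // 0 means "not over any blob"
  if (blobId === 1) {
    // over blob 1, e.g. the mouth: show lipstick images
  } else if (blobId === 2) {
    // over blob 2, e.g. an eye: show eye makeup images
  }
});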
You could use SVG graphics to layer over the image.
My example uses an ellipse, but you could use polygons just as easily.
You could use the colour, like you stated in your question, or add an extra property to the SVG element. The example uses onclick, but mouseover works as well.
example js:
function svg_clicked(objSVG) {
  alert(objSVG.style.fill);
  alert(objSVG.getAttribute('data-category'));
}
example svg:
<svg xmlns="http://www.w3.org/2000/svg" version="1.1">
<ellipse cx="110" cy="80" rx="100" ry="50" style="fill:red;" onclick="svg_clicked(this);" data-category="lipstick" />
</svg>
Here's a fiddle (move the mouse over the O's in the picture)
It still works if you make the svg element transparent (using fill:transparent).
You can change the overlay to a colour or outline quickly for testing.
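As a rough sketch of the mouseover variant mentioned above, the same handler can be attached from script instead of the inline onclick (the selector and data-category are just the ones from the example):

// Reuse the handler from the example, but trigger it on hover instead of click.
var blob = document.querySelector('ellipse[data-category="lipstick"]');
blob.addEventListener('mouseover', function() {
  svg_clicked(this);
});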
I highly recommend the time-tested method.
The easiest way both to create blobs and to detect whether the mouse is over them is to use SVG graphics on top of the other image. SVG supports mouseover events and allows vector shapes, which will give you far greater precision than using <map> or <area>.
I found this question that might also shed some light on where I am coming from: Hover only on non-transparent part of image. Read the second answer down, because it will likely be preferred in your situation.
The SVG elements on your image would be transparent (or whatever you want), and you could easily detect the mouseover events.
The library from that question is called Raphael. Hopefully this proves to be useful.