Flutter: draw SVG in CustomPaint (Canvas)

I have something like this:
CustomPaint(
  painter: CurvePainter(),
)
In this class I am doing my painting:
import 'package:flutter/material.dart';
import 'package:flutter_svg/flutter_svg.dart';
import './myState.dart';
import './models/mode.dart';

final String rawSvg = '''<svg viewBox="...">...</svg>''';

class CurvePainter extends CustomPainter {
  MyState _myState;
  DrawableRoot svgRoot;

  CurvePainter(MyState myState) {
    this._myState = myState;
    this.loadAsset();
  }

  void loadAsset() async {
    this.svgRoot = await svg.fromSvgString(rawSvg, rawSvg); // The canvas that is your board.
  }

  @override
  void paint(Canvas canvas, Size size) {
    canvas.translate(_myState.translateX, _myState.translateY);
    if (this.svgRoot != null) {
      svgRoot.scaleCanvasToViewBox(canvas, size);
      svgRoot.clipCanvasToViewBox(canvas);
      // svgRoot.draw(canvas, size);
    }
  }

  @override
  bool shouldRepaint(CurvePainter oldDelegate) => true;
}
Does somebody know how to draw an SVG inside the paint method?
I found this library: https://pub.dev/packages/flutter_svg#-readme-tab- .
With my code I get the error: Unhandled Exception: Bad state: viewBox element must be 4 elements long
It would be nice if I could scale and rotate the SVG inside the canvas, but this is optional.

From the README:
import 'package:flutter_svg/flutter_svg.dart';
final String rawSvg = '''<svg viewBox="...">...</svg>''';
final DrawableRoot svgRoot = await svg.fromSvgString(rawSvg, rawSvg);
// If you only want the final Picture output, just use
final Picture picture = svgRoot.toPicture();
// Otherwise, if you want to draw it to a canvas:
// Optional, but probably normally desirable: scale the canvas dimensions to
// the SVG's viewbox
svgRoot.scaleCanvasToViewBox(canvas);
// Optional, but probably normally desirable: ensure the SVG isn't rendered
// outside of the viewbox bounds
svgRoot.clipCanvasToViewBox(canvas);
svgRoot.draw(canvas, size);
Which you could adapt as:
class CurvePainter extends CustomPainter {
  CurvePainter(this.svg);

  final DrawableRoot svg;

  @override
  void paint(Canvas canvas, Size size) {
    canvas.drawLine(...);
    svg.scaleCanvasToViewBox(canvas);
    svg.clipCanvasToViewBox(canvas);
    svg.draw(canvas, size);
  }
}
I'd advise finding some way to get the asynchronous part earlier on in your app, perhaps using a FutureBuilder or a ValueListenableBuilder.
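For example, a minimal FutureBuilder sketch, assuming the DrawableRoot API and the adapted CurvePainter(this.svg) constructor shown above (ideally you would create the future once, e.g. in initState, rather than inline in build):
FutureBuilder<DrawableRoot>(
  // Parse the SVG; rawSvg is the string from the snippet above.
  future: svg.fromSvgString(rawSvg, rawSvg),
  builder: (context, snapshot) {
    if (!snapshot.hasData) {
      // Nothing to paint until the SVG has been parsed.
      return const SizedBox.shrink();
    }
    return CustomPaint(
      painter: CurvePainter(snapshot.data),
    );
  },
)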
Disclosure: I'm the author/primary maintainer of Flutter SVG.

Ultimately I found drawing SVGs directly in Canvas to be cumbersome. Instead, I copied the SVG paths and transforms to Dart code using path_drawing and rendered them as Paths with Canvas.drawPath. This has the advantage of not even being an asset at all; the SVG data is literally code at this point. And you can convert back to an SVG easily. The process goes a bit like this:
Add path_drawing: 0.4.1 to pubspec.yaml, run flutter pub get, and in the file you're rendering from add import 'package:path_drawing/path_drawing.dart';.
Copy-paste all the paths from your SVG and parse them with parseSvgPathData into Path constants. (Path strings look something like M 86.102000,447.45700 L 86.102000,442.75300 .....)
You can combine several paths if there is more than one in the SVG:
static final Path complexPathToDraw = parseSvgPathData("path_1")..addPath(parseSvgPathData("path_2"), Offset.zero);
Usually the SVG will be wrapped in some translations (<g transform='translate(...)'>), and drawPath can only render the Path from the top-left position, so you have to translate to the appropriate position. When rendering, translate the canvas before drawing: (1) first to correct for the translations in the SVG, (2) next to scale to the size you want, (3) finally to move to the position you really want on the Canvas. Then draw, and restore the Canvas to its untransformed state. But keep in mind, these matrices are added in reverse order to how we logically break it down because linear algebra is stupid.
canvas.save();
canvas.translate(dxToRenderPosition, dyToRenderPosition);
canvas.scale(sxFromSvgSizeToDesiredRenderSize);
canvas.translate(dxFromSvg, dyFromSvg);
canvas.drawPath(complexPathToDraw, Paint());
canvas.restore();
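Put together, a minimal sketch of this approach; the path string, scale factor and offsets below are just placeholders for whatever your own SVG needs:
import 'package:flutter/material.dart';
import 'package:path_drawing/path_drawing.dart';

class SvgPathPainter extends CustomPainter {
  // Placeholder path data copied out of an SVG file.
  static final Path complexPathToDraw =
      parseSvgPathData('M 86.102000,447.45700 L 86.102000,442.75300');

  @override
  void paint(Canvas canvas, Size size) {
    canvas.save();
    canvas.translate(size.width / 2, size.height / 2); // (3) position on the canvas
    canvas.scale(0.25);                                // (2) scale to the desired size
    canvas.translate(-86.102, -442.753);               // (1) undo the SVG's <g> translation
    canvas.drawPath(complexPathToDraw, Paint()..color = Colors.black);
    canvas.restore();
  }

  @override
  bool shouldRepaint(SvgPathPainter oldDelegate) => false;
}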

I faced a problem similar to this, where I wanted to draw an SVG scaled down on a small part of a canvas.
To make it work, I had to use this code:
Size desiredSize = Size(60, 40);
// get the svg from a preloaded array of DrawableRoot corresponding to all the SVGs I might use
final DrawableRoot svgRoot = drawables[i];
canvas.save();
// [center] below is the Offset of the center of the area where I want the Svg to be drawn
canvas.translate(center.dx - desiredSize.width / 2, center.dy - desiredSize.height / 2);
Size svgSize = svgRoot.viewport.size;
var matrix = Matrix4.identity();
matrix.scale(desiredSize.width / svgSize.width, desiredSize.height / svgSize.height);
canvas.transform(matrix.storage);
svgRoot.draw(canvas, Rect.zero); // the second argument is not used in DrawableRoot.draw() method
canvas.restore();
This way, you can have complex SVGs rendered on the canvas and still do some work on them.
For example, you can draw multiple SVGs on the same canvas and draw over them.
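A rough sketch of how such a preloaded array could be built up front (the function name and the list of raw SVG strings are made up for illustration):
import 'package:flutter_svg/flutter_svg.dart';

// Parse every raw SVG string once, outside of paint(), and keep the
// resulting List<DrawableRoot> around for the painter to index into.
Future<List<DrawableRoot>> preloadDrawables(List<String> rawSvgs) {
  return Future.wait(
    rawSvgs.map((raw) => svg.fromSvgString(raw, raw)),
  );
}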

Related

Threejs - Best way to render image texture only in a portion of the mesh

I have the .obj of a T-shirt; it contains a few meshes and materials, and I'm coloring it using a CanvasTexture fed by an inline SVG.
Now I need to add a logo at a specific location (more or less above the heart), but I'm struggling to understand the best/proper way of doing it (I'm quite new to 3D graphics and Three.js). This is what I have tried so far:
Since I'm coloring the T-shirt through a CanvasTexture fed by an inline SVG, I thought it would be easy to just draw the logo into the SVG at specific coordinates. It was easy indeed, but the logo does not get rendered (or is not visible in some way) on the texture/mesh, although it is visible in the inline SVG. So CanvasTexture probably doesn't work with embedded images (I tried both base64 and URL).
So I started looking into more 3D-"native" ways of doing it, but I haven't found one that really makes sense to me. I know there's ShaderMaterial in Three.js, which I could use to selectively render pixels of the logo or pixels of the cloth, but that means a lot of complex computation to figure out where the logo should be, and I can't believe that drawing a simple JPEG or PNG at specific coordinates and size can be so complex... I must have missed an obvious solution.
EDIT
Here is how I'm adding the image to the inline svg (option 1 above).
Add the image to the inline svg
const groups = Array.from(svg.querySelectorAll('g'));
// this is the "g" tag where I want to add the logo into
const targetGroup = groups.find((group: SVGGElement) => group.getAttribute('id') === "logo_placeholder");
const image = document.createElement('image');
image.setAttribute('width', '64');
image.setAttribute('height', '64');
image.setAttribute('x', '240');
image.setAttribute('y', '512');
image.setAttribute('xlink:href', `data:image/png;base64,${base64}`);
targetGroup.appendChild(image);
Draw inline svg to 2d canvas
static drawSvgToCanvas = async (canvas: HTMLCanvasElement, canvasSize: TSize, svgString: string) => {
return new Promise((resolve, reject) => {
canvas.width = canvasSize.width;
canvas.height = canvasSize.height;
const ctx = canvas.getContext('2d');
const image = new Image(); // eslint-disable-line no-undef
image.src = `data:image/svg+xml;base64,${btoa(svgString)}`;
image.onload = () => {
if (ctx) {
ctx.drawImage(image, 0, 0);
resolve();
} else {
reject(new Error('2D context is not set on canvas'));
}
};
image.onerror = () => {
reject(new Error('Could not load svg image'));
}
});
};
Draw 2d canvas to threejs Texture
const texture = new Three.CanvasTexture(canvas);
texture.mapping = Three.UVMapping; // it's the default
texture.wrapS = Three.RepeatWrapping;
texture.wrapT = Three.RepeatWrapping; // it's the default
texture.magFilter = Three.LinearFilter; // it's the default
texture.minFilter = Three.LinearFilter;
texture.needsUpdate = true;
[...add texture to material...]
For some reason, canvases don't like SVGs with embedded images, so for a similar project I had to do this in two steps, rendering the SVG and the image separately:
First, render the SVG on the canvas, and then render the image on top of that (on the same canvas).
const ctx = canvas.getContext('2d');
ctx.drawImage(imgSVG, 0, 0);
ctx.drawImage(img2, 130, 10, 65, 90);
const texture = new THREE.CanvasTexture(canvas);
Example: https://jsfiddle.net/0f9hm7gx/

Threejs - How to correctly apply svg texture to .obj file

I've just started playing around with Three.js, and the first thing I'm trying to do is apply textures to models. I am able to load a PNG texture with something like:
var material = new THREE.MeshBasicMaterial({
map: THREE.ImageUtils.loadTexture('textures/wood.png'),
});
object.material = material;
and it fits the 3D model perfectly without doing anything else. I assume I exported the .obj with the correct options (as the PNG fits perfectly), so I attempted to create the same texture from an SVG instead.
var canvasElement = document.getElementById('svgTexture');
var img = new Image();
img.onload = function() {
canvasElement.getContext('2d').drawImage(img, 0, 0);
var texture = new THREE.CanvasTexture(
canvasElement,
THREE.UVMapping,
THREE.ClampToEdgeWrapping, THREE.ClampToEdgeWrapping,
THREE.LinearFilter, THREE.LinearFilter,
);
material = new THREE.MeshBasicMaterial({
map: texture
});
}
img.src = 'textures/wood.svg';
Unfortunately, the SVG texture gets loaded but the mapping is all messed up. It seems Three.js is able to "use" the exported UV maps if I use a PNG, whereas it can't with the SVG.
NOTE: both files are the same texture, with the same dimensions. Indeed, the PNG was created right from the SVG with SVG Converter (OS X).

SetFillPattern using SVG with ability to change color of svg using fabricJS

I am using fabric.js and I have the issue below.
I have added a circle on my canvas. Now I want to fill the circle with an SVG pattern image, and I want to be able to change the color of the pattern (SVG) dynamically.
I have tried the following:
fabric.util.loadImage('../images/cross.svg', function (img)
{
newObj = img;
currentObject.setPatternFill({ source: img, repeat: 'repeat' });
canvas.renderAll();
});
And the code is working fine for filling the circle with the pattern, but I am not able to change the color of the SVG.

Generating a working svg gradient with d3

I'm trying to generate an SVG gradient (for a stroke) with D3 (the rest of the project uses D3, so using D3 for this seemed to make sense...)
Here is the code that generates the gradient:
function generateBlindGradient(svg, color, side) {
// can't have a hash mark in the id or bad things will happen
idColor = color.replace('#', '');
side = side || 'right';
// this is a sneaky d3 way to select the element if present
// or create the element if it isn't
var defs = svg.selectAll('defs').data([0]);
defs.enter().append('svg:defs');
var id = 'gradient-' + idColor + '-' + side;
var gradient = defs.selectAll('lineargradient#'+id).data([0]);
gradient.enter().append('svg:lineargradient')
.attr('id', id);
var colors = [
{ offset : '50%', color : '#DFE2E6' },
{ offset : side === 'left' ? '100%' : '0%', color : color }
];
var stops = gradient.selectAll('stop').data(colors);
stops.enter().append('svg:stop');
stops.attr('stop-color', function(d) { return d.color; })
.attr('offset', function(d) { return d.offset; });
return id;
}
This works... almost right. It generates gradients like this:
<lineargradient id="gradient-a8d4a1-left">
<stop stop-color="#DFE2E6" offset="50%"></stop>
<stop stop-color="#a8d4a1" offset="100%"></stop>
</lineargradient>
That gradient does not work (as either a fill or a stroke): the element it's applied to gets no stroke or fill.
If I use the web inspector to "edit the HTML" of the lineargradient element, even if I don't change anything, the gradients suddenly work — so I'm guessing there's something weird going on within Chrome's SVG parsing or d3's element generation.
I think it might be down to a confusion between lineargradient and linearGradient—d3 seems to have some issues with camelCased elements and when I have it create linearGradient elements, it doesn't select them (and I get lots and lots of copies). Also, while in Chrome's inspector, these elements show up as lineargradient; when I edit as HTML, they are linearGradient. I'm not sure what's going on here or how to fix it.
SVG is case sensitive, so it's linearGradient rather than lineargradient for creation.
I think Chrome has a selector bug where you can't select camelCased elements, though.
The common workaround seems to be to assign a class to all your linearGradient elements and select by class rather than by tag name.
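A rough sketch of that workaround applied to generateBlindGradient above (the class name here is made up):
// Tag every gradient with a class and select on ".class#id" instead of the
// camelCased tag name, which Chrome's selector engine trips over.
var gradient = defs.selectAll('.blind-gradient#' + id).data([0]);
gradient.enter().append('svg:linearGradient')
    .attr('class', 'blind-gradient')
    .attr('id', id);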

Detect Graphical Regions in an Image on a Web Page

I've been beating around the bush a lot, so I'll explain my problem here and hope that, with the whole picture, somebody has some ideas. With the following image:
I need to detect a mouseover on the blobs over her eyes and mouth, and solve this problem in a general form. The model and blobs are on two different layers, so I can produce one image with only the blobs, and one with only the model, and somehow synchronise a virtual cursor over the blobs while it actually hovers over the model.
I can also make the blobs polygons, for hit testing, but I think a colour hit test would be much easier. If I hit blue, I am on her mouth and I show lipstick images; if I hit pink, I'm over her eyes, and display eye makeup images.
What are the suggestions and conversation of the learned ones here?
The simplest way to do it would be to load the layer image into a canvas, then get all its pixel data. When the mouse is hovering over the model image, find out which color is currently under it, and if it is different from the previous one, trigger an event to indicate that the selection has changed.
Here is an example, feel free to toy with it; but be aware that it doesn't handle all cases:
what if the layer and the model image are not the same size
what if the layer and the model image are not the same width/height ratio
what if you want to use some alpha channel (the example doesn't take it into account)
$(function() {
/* we load all the image data first */
var imageData = null;
var layerImage = new Image();
layerImage.onload = function() {
var canvas = document.createElement("canvas");
canvas.width = this.width;
canvas.height = this.height;
var context = canvas.getContext('2d');
context.drawImage(this, 0, 0, canvas.width, canvas.height);
imageData = context.getImageData(0, 0, canvas.width, canvas.height).data;
};
/* it's easier to set the image data for example as base64 data */
layerImage.src = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAYAAACqaXHeAAAABGdBTUEAALGPC/xhBQAAAAlwSFlzAAAOwgAADsIBFShKgAAAABp0RVh0U29mdHdhcmUAUGFpbnQuTkVUIHYzLjUuMTFH80I3AAAA5klEQVR4Xu3WMQ7CMAwF0IxcgCtw/xsGdWApItWHKorLQ8qESe2HbWjNiwABAgQIECBAgAABAgQIECBAgMAUgd4e/ehMSeTMh4wK2j/nqPjt/TNzm3IXgEFb64CdgBG44hKcsmg8pKDAvbf+7SlY7nvK3xb/+lx5BAA/jMCGpwOqC/z9CFT/ApfP/9Z6T87yBaUJJsVvsen9y8cDMAJ2gCWY7IHll1qaYFL84a9AelnF+CFwxYLSnAGMBFLNivE6QAcMBCq2dJqzETACRuCzQDpPFePTv9riCRAgQIAAAQIECBC4nsATagY67TVyuhAAAAAASUVORK5CYII=";
var pColor = null;
/* on mouse over the model image */
$("#model").mousemove(function(event) {
/* we correct the offset */
var offset = $(this).offset();
var relX = event.pageX - offset.left;
var relY = event.pageY - Math.round(offset.top);
/* and get the pixel values at this place (note we are not keeping the alpha channel; it's your decision whether or not it is valuable */
var pixelIndex = relY * layerImage.width + relX;
var dataIndex = pixelIndex * 4;
var color = [imageData[dataIndex], imageData[dataIndex + 1], imageData[dataIndex + 2]];
if (pColor == null) {
/* we trigger when first entering the image */
$(this).trigger("newColor", {
message: "Initial layer color",
data: color
});
} else if (pColor[0] != color[0] || pColor[1] != color[1] || pColor[2] != color[2]) {
/* we trigger if the new position is a new color in the layer image */
$(this).trigger("newColor", {
message: "Changed layer color",
data: color
});
}
pColor = color;
});
/* some small help to convert rgb to css colors */
function rgb2hex(red, green, blue) {
var rgb = blue | (green << 8) | (red << 16);
return '#' + (0x1000000 + rgb).toString(16).slice(1)
}
/* there you have the new layer color event management; for the example sake we change the color of some text */
$("#model").on("newColor", function(event, eventData) {
$("#selector").css("color", rgb2hex(eventData.data[0], eventData.data[1], eventData.data[2]));
});
});
img {
border: 1px solid silver
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.0/jquery.min.js"></script>
<body>
<h4>Model image</h4>
<img id="model" src="data:image/png;base64, iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAIAAAAlC+aJAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAAAadEVYdFNvZnR3YXJlAFBhaW50Lk5FVCB2My41LjExR/NCNwAAAtdJREFUaEPtmS2PwkAQhhGQEASEBIECFAJBgoCAxpDgsWBxSP4lPwGJRCLv5m4uk6Xt7s5He9dLFst2+z7vfG3bxsc//zX+uf6PBPDXEUwRSBEwOpBSyGig+fIUAbOFxg3KjMD1em3EfqvV6vl8GkW7l5cJsF6vY/p//i8RQwbwer22221U5W63K/T4fr93u1263LdMFB8ZAEc96BsOhwER5WIIAMB+NO92u4lMKlw8m80oFJaMEgCg/ZADdvW0A2GMRiPdtgKAEu13tUJG4c667sQFoBap8yl8FdiPDIqyZgFQ9oerU83mlrV0ExaAJfshdKfTiSMLg8BZKR5k4ewPS4TpNplMOLKqAqD88YngSwxjVAVgyR+O8bSmKoCKumeerVoAkZe6xQnA45vCGH7rfGuIFbVRBUDgvO2bCefzWXEj4I8PDtyXOYzQ0cVi4WMonAmWSR8HIDWbzUZ33grXNKifTqcA3Ov1FNUfB3AdhdOv4h6+S0D6fr/HWLVaLd1jBgsAFNChV5RLAVpKelDfbrd16lk1QCLczLZjoPEW6SiMG4FAdUphXO/tCSkD8GEwi9uVDvZLny6gZg6HQ8YvDYBr23g8DnR9319S6XjH5XIJGw4Gg7fxZwwiFDfsyGfQSQeRvnci1ghE+XXz1d3W7bb5WVF3ABpzvpZVdwDM+8CYqyOAmzPRx6l6ARyPx3w/CNd9jQDgNEHq+RO6LgCgvt/vA8Dlcol2tjLnQPRmhW00n+W4rNPpPB6P6J5cAHofqnhlSffIAIDT8/k8n+jNZhO8l6qPHOYKPxlJ3+Wj1swpqJRzKPc06n6JIOf4GBmz1U778kpWxJmvQ9EjEOQergEX1KegcEnIAHAvPgaIRgDpMwO/jjUA/N1hJT3Hia7iL64c4KtRfP/4mkQrq9r3rVUngEBMUgQYCZtqIGRSSqGUQgwHUgoZTSrrQ3KhjN8oYiN/+PJPqpb83Htu7qcAAAAASUVORK5CYII="
/>
<p>You are pointing at some <strong><span id="selector">color</span></strong>
</p>
<hr/>
<h4>Layer image (reference only, not displayed in page)</h4>
<img id="layer" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAABACAYAAACqaXHeAAAABGdBTUEAALGPC/xhBQAAAAlwSFlzAAAOwgAADsIBFShKgAAAABp0RVh0U29mdHdhcmUAUGFpbnQuTkVUIHYzLjUuMTFH80I3AAAA5klEQVR4Xu3WMQ7CMAwF0IxcgCtw/xsGdWApItWHKorLQ8qESe2HbWjNiwABAgQIECBAgAABAgQIECBAgMAUgd4e/ehMSeTMh4wK2j/nqPjt/TNzm3IXgEFb64CdgBG44hKcsmg8pKDAvbf+7SlY7nvK3xb/+lx5BAA/jMCGpwOqC/z9CFT/ApfP/9Z6T87yBaUJJsVvsen9y8cDMAJ2gCWY7IHll1qaYFL84a9AelnF+CFwxYLSnAGMBFLNivE6QAcMBCq2dJqzETACRuCzQDpPFePTv9riCRAgQIAAAQIECBC4nsATagY67TVyuhAAAAAASUVORK5CYII="
/>
</body>
If you can have both images (the one with the blob and the one without), I think you can do this using HTML5 canvas.
draw the image normally
draw the blob image beneath the master image so it is invisible
copy the blob to a Canvas
onMouseOver, retrieve pixel data (R,G,B and alpha) for the Canvas at the appropriate coordinates
profit
Twist: you might be able to do this with only one image and its alpha channel, if you don't need it for anything else: give the pixels full opacity (A=255) everywhere except in blobs 1, 2 and 3, which will have opacity equal to 255-(1,2,3...). You can't have too many different blobs or the transparency will become noticeable. I haven't tried it, but it should work (a sketch of that lookup follows the two-image code below). Given the likely compressibility of a "blob-only" image, a pair of images (one without transparency, one also without transparency and with only N+1 colours, PNG compressed) should yield better results.
More-or-less-pseudo code with two images, using jQuery (can be done without):
var image = document.getElementById('mainImage')
var blobs = document.getElementById('blobImage');
// Create a canvas
canvas = $('<canvas/>')[0];
canvas.width = image.width;
canvas.height = image.height;
// IMPORTANT: for this to work, this script and blobImage.src must be both
// in the same security domain, or you'll get "this operation is insecure"
canvas.getContext('2d').drawImage(blobs, 0, 0, image.width, image.height);
// Now wait for it.
$('#mainImage').mouseover(function(event) {
// TO DO: offset clientX, clientY by margin on mainImage
var ctx = canvas.getContext('2d');
// Get one pixel
var pix = ctx.getImageData(event.clientX, event.clientY, 1, 1);
// Retrieve the red component
var red = pix.data[0];
if (red > 128) {
// ... do something for red
}
});
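And for the single-image alpha-channel twist mentioned above, the lookup inside the handler would read the alpha byte instead of the red one; a rough sketch, assuming the same canvas setup as the code above:
var pix = ctx.getImageData(event.clientX, event.clientY, 1, 1);
// Alpha is the fourth byte: 255 means "no blob", 254 is blob 1, 253 is blob 2, ...
var blobId = 255 - pix.data[3];
if (blobId > 0) {
  // ... do something for blob number blobId
}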
You could use SVG graphics to layer over the image.
My example uses an ellipse, but you could use polygons just as easily.
You could use the colour like you stated in your question or add an extra property to the svg element. The example uses onclick but mouseover works as well.
example js:
function svg_clicked(objSVG)
{
alert(objSVG.style.fill);
alert(objSVG.getAttribute('data-category'));
}
example svg:
<svg xmlns="http://www.w3.org/2000/svg" version="1.1">
<ellipse cx="110" cy="80" rx="100" ry="50" style="fill:red;" onclick="svg_clicked(this);" data-category="lipstick" />
</svg>
Here's a fiddle (move the mouse over the O's in the picture)
It still works if you make the svg element transparent (using fill:transparent).
You can change the overlay to a colour or outline quickly for testing.
I highly recommend the time-tested method.
The easiest way to both create the blobs and detect whether the mouse is over them is to use SVG graphics on top of the other image. SVG supports mouseover events and allows vector shapes, which will give you far greater precision than using <map> or <area>.
I found this question that might also shed some light on where I am coming from: Hover only on non-transparent part of image. Read the second answer down because it will likely be preferred in your situation.
The svg elements on your image would be transparent (or whatever you would want), and you could easily detect the mouse over events.
The library from that question is called raphael. Hopefully this proves to be useful.
