I have two SVG files: initial.svg and final.svg. I want to morph initial.svg into final.svg on a button click event. I have gone through the libraries suggested in this question, but there is no clear documentation or example of how to achieve this specific morph. I have exported these animations from an XD prototype. I want to achieve a simple ease-in animation by specifying the initial state of an SVG and the final state of the same SVG. Any recommendations would be highly appreciated.
If the SVGs are (or can be) drawn from the same paths, then I would suggest the NPM library svg-path-morph. It allows you to interpolate freely between an arbitrary number of SVG paths.
An example of its usage:
import { compile, morph } from 'svg-path-morph'
// Get the d attributes of the <path> elements you want to morph between
const happy = document.getElementById('happy').getAttribute('d')
const angry = document.getElementById('angry').getAttribute('d')
// Compile the morph base (average path embedding)
const compiled = compile([
happy,
angry
])
// Morph between the happy/angry faces
const slightlyAngry = morph(
compiled,
[
0.80, // 80% happy
0.20 // 20% angry
]
)
// Use the result as the d attribute of a <path> element
document.getElementById('the-face').setAttribute('d', slightlyAngry)
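To get the ease-in, on-click behaviour described in the question, you can drive morph() from a small requestAnimationFrame loop. A minimal sketch, assuming the compiled variable from above and a hypothetical #morph-button element; the 1000 ms duration is an arbitrary choice:

const button = document.getElementById('morph-button')
const face = document.getElementById('the-face')

button.addEventListener('click', () => {
  const duration = 1000 // ms
  const start = performance.now()

  const tick = (now) => {
    // Normalised progress in [0, 1]
    const t = Math.min((now - start) / duration, 1)
    // Quadratic ease-in: starts slow, accelerates
    const eased = t * t
    // Weights must sum to 1: fade from 100% of the first path to 100% of the second
    face.setAttribute('d', morph(compiled, [1 - eased, eased]))
    if (t < 1) requestAnimationFrame(tick)
  }
  requestAnimationFrame(tick)
})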
I am analysing solar farms and have defined two areas of geometry. In the example below, for a site called 'Stateline', I have drawn the boundary of the site and saved the geometry as a variable 'Stateline_boundary'. I have drawn around the solar panels within the boundary, which exist in two distinct groups and saved the geometry as a variable 'Stateline_panels'.
Stateline_panels has two coordinate lists (as there are two areas of panels).
When I try to subtract the area covered by the panels from the area within the boundary, only the first of the two lists in the 'Stateline_panels' geometry is used (see code below and attached image).
var mask = Stateline_boundary;
var mask_no_panels = mask.difference(Stateline_panels);
Map.addLayer(mask_no_panels,{},'Stateline_mask_no_panels',false);
I don't understand the behaviour of the geometry. Specifically, why the 'Stateline_panels' geometry displays in its entirety when added to the map, but breaks when used as a mask, with only the first of its two lists of coordinates being used.
I was going to write a longer question asking why the geometry variables seem to behave differently when they are imported into the script rather than defined within it (which I don't think should make a difference, but it does). However, I think this is an earlier manifestation of whatever is going on.
The methodology that I found worked in the end was to create geometry assets individually with the polygon tool in the Earth Engine IDE, ensuring that each is on a different layer (using the line tool and then converting to polygons never worked).
Not only was this more flexible, it was also easier to manage in Earth Engine, as editing geometries is not easy. I read about the importance of winding clockwise, though I never determined whether that was part of the issue here. If I always drew polygons clockwise, the issue never occurred.
I ended up with my AOI covered in polygons like this (each colour a different named layer/geometry object):
Once this was done, manipulating each geometry object in the code editor was relatively straightforward. They can be converted to FeatureCollections and merged (or subtracted) using simple code - see below for my final code.
It was also then easy to share them between scripts by importing the generated code.
I hope that helps someone - first attempt at answering a question (even if it's my own). :)
// Convert panel geometries to Feature Collections and merge to create one object.
var spw = ee.FeatureCollection(stateline_panels_west);
var spe = ee.FeatureCollection(stateline_panels_east);
var stateline_panels = spw.merge(spe);
// Convert 'features to mask' geometries to Feature Collections.
var gc = ee.FeatureCollection(golf_course);
var sp = ee.FeatureCollection(salt_pan);
var sc = ee.FeatureCollection(solar_concentrator);
var h1 = ee.FeatureCollection(hill_1);
var h2 = ee.FeatureCollection(hill_2);
var h3 = ee.FeatureCollection(hill_3);
var mf = ee.FeatureCollection(misc_features);
// Merge geometries to create mask
var features_to_mask = gc.merge(sp).merge(sc).merge(h1).merge(h2).merge(h3).merge(mf);
// Convert 'features_to_mask' to geometry (needed to create mask)
var mask = features_to_mask.geometry();
///// If the site has other solar panels nearby, these need to be added separately & buffered by 1km
var extra_mask = ee.Feature(solar_concentrator).buffer(1000).geometry();
///// Join mask & extra mask into single feature using .union()
// Geometry objects
mask = mask.union(extra_mask);
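As a usage example (my assumption of the final step, mirroring the difference() call from the question), the combined mask can then be subtracted from the site boundary:

// Subtract the merged mask and the panel areas from the site boundary
var mask_no_panels = Stateline_boundary.difference(mask).difference(stateline_panels.geometry());
Map.addLayer(mask_no_panels, {}, 'Stateline_mask_no_panels', false);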
Is there a way to dynamically build an SVG using sprites in amCharts 4?
Example: screenshot
There are 20 different types which are represented by colors.
Each pin can contain a multitude of types.
For example, a pin can have 3 types and will consist of 3 colors.
I have an SVG path which is a circle.
With regular JS and SVG I can create a path for each type and change the stroke color, stroke-dasharray and stroke-dashoffset.
This results in a nice circle with 3 colors.
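For reference, a minimal sketch of that plain-JS/SVG approach (my illustration: the centre/radius come from the circle path quoted later in the question, the colors from the data object, and the equal three-way split is an assumption):

const svgNS = 'http://www.w3.org/2000/svg'
const circumference = 2 * Math.PI * 239 // radius of the pin's circle path
const colors = ['#FFB783', '#FD9797', '#77A538']

colors.forEach((color, i) => {
  const ring = document.createElementNS(svgNS, 'circle')
  ring.setAttribute('cx', '291')
  ring.setAttribute('cy', '291')
  ring.setAttribute('r', '239')
  ring.setAttribute('fill', 'none')
  ring.setAttribute('stroke', color)
  ring.setAttribute('stroke-width', '100')
  // One dash per ring, a third of the circle long; the offset rotates it into place
  ring.setAttribute('stroke-dasharray', circumference / 3 + ',' + circumference)
  ring.setAttribute('stroke-dashoffset', String(-i * circumference / 3))
  document.querySelector('svg').appendChild(ring)
})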
However, this seems to be impossible to do with amCharts 4.
For starters, strokeDashoffset is not even a supported property for a Sprite. Why would you bother supporting strokeDasharray and then ignore strokeDashoffset?!
The second problem is finding out how to pass data to the sprite.
This is an example of a data object I pass to the MapImageSeries class.
[{
amount: 3,
client: undefined,
colorsArr: {0: "#FFB783", 1: "#FD9797", 2: "#77A538"},
dashArray: "500,1000",
dashOffset: 1500,
divided: 500,
global: true,
groupId: "minZoom-1",
hcenter: "middle",
id: "250",
latitude: 50.53398,
legendNr: 8,
longitude: 9.68581,
name: "Fulda",
offsetsArr: {0: 0, 1: 500, 2: 1000},
scale: 0.5,
title: "Fulda",
typeIds: ["4", "18", "21"],
typeMarker: " type-21 type-18 type-4",
vcenter: "bottom",
zoomLevel: 5
}]
It seems impossible to pass the colors down to the sprite.
var svgPath = 'M291,530C159,530,52,423,52,291S159,52,291,52s239,107,239,239c0,131.5-106.3,238.3-237.7,239'
var mainPin1 = single.createChild(am4core.Sprite)
mainPin1.strokeWidth = 100
mainPin1.fill = am4core.color('#fff')
mainPin1.stroke = am4core.color('#ff0000')
mainPin1.propertyFields.strokeDasharray = 'dashArray'
mainPin1.propertyFields.strokeDashoffset = 'dashOffset'
mainPin1.path = svgPath
mainPin1.scale = 0.04
mainPin1.propertyFields.horizontalCenter = 'hcenter'
mainPin1.propertyFields.verticalCenter = 'vcenter'
With what you've provided, simulating your custom SVGs is beyond the scope of what can be answered, so I'll try tackling:
applying stroke-dashoffset despite the lack of innate library support. (I see you've added a feature request on GitHub for it, so why the library doesn't include it, and when/if it will, can be left for discussion there.)
passing data/colors to the Sprite
For both, we're going to have to wait until the instances of the Sprites are ready along with their data. Presuming your single variable is a reference to a MapImageSeries.mapImages.template, we can set up an "inited" event like so:
single.events.once("inited", function(event){
// ...
});
Our data and data placeholders don't really support nested arrays/objects in general; since your colors are nested within a field, we can find them via:
event.target.dataItem.dataContext.colorsArr
You can then set the fill and stroke on the Sprite or event.target.children.getIndex(0) manually from there (in my demo below, the index will be 1 because mainPin1 is not the first/only child created on the MapImage template).
As for stroke-dashoffset, you can access the actual rendered SVGElement via sprite.group.node and just use setAttribute.
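Putting both together, a sketch of what the "inited" handler might look like (the index 1, colorsArr and dashOffset fields come from the question and the demo below; the rest is an assumption, not the library's official recipe):

single.events.once("inited", function(event) {
  var data = event.target.dataItem.dataContext;
  // mainPin1 is the second child created on the MapImage template in the demo
  var pin = event.target.children.getIndex(1);
  // Apply the first color from the nested colorsArr field
  pin.stroke = am4core.color(data.colorsArr[0]);
  // stroke-dashoffset has no innate Sprite property, so set it
  // directly on the rendered SVG node
  pin.group.node.setAttribute("stroke-dashoffset", data.dashOffset);
});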
I forked our map image demo from our map image data guide and added all the above to it here:
https://codepen.io/team/amcharts/pen/6a3d87ff3bdee7b85000fe775af9e583
I have a canvas.
In this canvas I have a closed path, and I am trying to morph this path into a different path.
Paths can have any number of points(>=3).
I have two paths:
var path1 = "M50,50L200,50,200,200,50,200z"
var path2 = "M300,200L50,200,50,50,200,50z"
This is what I'm using to animate the morphing:
var path = paper.path(path1).attr({'stroke': 'black', 'fill': 'white'});
var currentPath = 1;
path.click(function () {
    if (currentPath == 1) {
        path.stop().animate({d: path2}, 2000, function () {
            currentPath = 2;
        });
    } else {
        path.stop().animate({d: path1}, 2000, function () {
            currentPath = 1;
        });
    }
});
This is the situation I want to achieve:
http://jsfiddle.net/MichaelSel/vgw3qxpg/7/
This is the situation I want to avoid:
http://jsfiddle.net/MichaelSel/vgw3qxpg/6/
Is there any way to tell Snap to do the animation by using the shortest distance to each point?
Note: I cannot just 'rewrite' the paths in reverse order (which would fix them), because it is the client who positions the points arbitrarily.
What can I do?
I would love to add more details if my question is unclear.
Thank you all.
No, there is no way to do this (other than coding your own, more complex morphing with checks).
Snap uses basic interpolation between the points, so it's important to get the devs to move the points in their SVG creation rather than creating a new SVG from scratch whose points could start from any position.
To my knowledge, though, there is no reason why you couldn't still rotate the path points with a bit of clever code, even if the client has positioned them arbitrarily, but I don't think that's simple.
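For illustration only, here is a rough sketch of that idea for simple polygon paths like the ones in the question (it assumes 'M x,y L x,y,...z' paths with matching point counts; hypothetical helper names, not production code):

// Parse "M50,50L200,50,200,200,50,200z" into [[50,50], [200,50], ...]
function toPoints(d) {
    return d.replace(/[MLz]/gi, ' ').trim().split(/[\s,]+/)
        .reduce(function (pts, v, i, arr) {
            if (i % 2 === 0) pts.push([+v, +arr[i + 1]]);
            return pts;
        }, []);
}

// Total squared distance travelled between two equally long point lists
function cost(a, b) {
    return a.reduce(function (sum, p, i) {
        var dx = p[0] - b[i][0], dy = p[1] - b[i][1];
        return sum + dx * dx + dy * dy;
    }, 0);
}

// Rotate the target's point order so the points travel the least overall
function bestRotation(fromD, toD) {
    var from = toPoints(fromD), to = toPoints(toD);
    var best = to, bestCost = Infinity;
    for (var r = 0; r < to.length; r++) {
        var rotated = to.slice(r).concat(to.slice(0, r));
        var c = cost(from, rotated);
        if (c < bestCost) { bestCost = c; best = rotated; }
    }
    return 'M' + best.map(function (p) { return p.join(','); }).join('L') + 'z';
}

A fuller version would also have to try the reversed point order, since the client may wind the polygon in either direction; path1 could then be animated to bestRotation(path1, path2) instead of path2.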
What are paper and set in Raphael.js?
Are they references to some external library?
What are they used for, and how do you use them?
A Paper is Raphael's reference to the main SVG element that it uses, a bit like a container (you can have several). It also has extra methods and variables, so it's not 'just' an SVG element, but you can roughly think of it as the main SVG element.
A Set is like an array that's used to store Raphael elements.
It's useful when iterating over a large number of Raphael elements.
So you may do something like:
var mySet = paper.set();
mySet.push( myCircle, myRect, myOtherShapeCreatedEarlier);
mySet.forEach( function( el ) { doSomethingWithEachElement( el ) } );
Also you may do something like...
var mySet = paper.selectAll('path');
mySet.attr({ opacity: "0" });
Which would make all the paths vanish.
So really, a set is just a way of dealing with multiple elements in an easy way.
I have a view that has multiple views inside it, plus an image presentation (aka 'cover flow') in there too... and I need to take a screenshot programmatically!
Since the docs say that "renderInContext:" will not render 3D animations:
"Important The Mac OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of Mac OS X may add support for rendering these layers and properties."
source: https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CALayer_class/Introduction/Introduction.html
I have searched a lot, and my 'best' solution (which is not good at all) is to create my own CGContext and record all CG animations into it. But I really do not want to do that, because I would need to rewrite most of my animation code and it would be very expensive in memory... I found other solutions (some of them unworkable), such as using OpenGL or capturing through AVSessions, but none that can help me...
What are my options? Anyone else with this problem?
Thanks for your time!
Have you actually tried it? I'm currently working on a project with several 3D transforms, and when I programmatically take this screenshot it works just fine :)
Here is the code I use:
-(UIImage *)getScreenshot
{
CGFloat scale = 1.0;
if([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
{
CGFloat tmp = [[UIScreen mainScreen]scale];
if (tmp > 1.5)
{
scale = 2.0;
}
}
if(scale > 1.5)
{
UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, scale);
}
else
{
UIGraphicsBeginImageContext(self.frame.size);
}
//SELF HERE IS A UIVIEW
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return screenshot;
}
I got it working with protocols... I'm implementing a protocol in all UIView classes that perform 3D transforms. So when I request a screenshot, it takes a screenshot of all the subviews and generates one UIImage. Not so good for lots of views, but I'm only doing it on a few views.
#pragma mark - Protocol implementation 'TDITransitionCustomTransform'
//Conforms to the 'TDITransitionCustomTransform' protocol; returns the current image view state, rendered from the current layer
- (UIImage*)imageForCurrentState {
//Render the current layer into an image context
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Return the rendered image
return screenShot;
}
I think it works now because I'm doing the render on the transformed view's layer, which has been transformed itself...
And it wasn't working before because "renderInContext:" doesn't capture the layers of its subviews; could that be it?
Anyone interested in a bit more code for this solution can find it here, in the Apple dev forum.
It may be a bug in the function, or it may just not have been designed for this purpose...
Maybe you can use Core Graphics instead of CATransform3DMakeRotation :)
CGAffineTransform flip = CGAffineTransformMakeScale(1.0, -1.0);
which does take effect in renderInContext.