I resize the paper by moving a model:
paper.on('cell:pointermove', function(cellView, evt, x, y) {
    var width = x + cellView.model.prop('size/width');
    var height = y + cellView.model.prop('size/height');
    if (width >= 650 && height >= 200) {
        paper.setDimensions(width, height);
    }
});
Is there any way I can change the properties of the paper, just like element.prop(properties)?
You can get the width/height of the paper with paper.options.width and paper.options.height. The paper is a view, not a model, so it does not have set()/get()/prop()/attr() methods, but you can always store properties on it just like on any other object if you want: paper.foo = 'bar'.
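For example, a minimal sketch (assuming a standard JointJS paper) that reads the current dimensions from the view's options and grows the paper:

// Read the current paper dimensions from the view's options...
var width = paper.options.width;
var height = paper.options.height;
// ...and grow the paper by 100px in each direction.
paper.setDimensions(width + 100, height + 100);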
Why don't you just call paper.fitToContent()?
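For reference, a minimal sketch of that approach; the option names are from the JointJS docs, so treat the exact set as an assumption for your version:

paper.on('cell:pointermove', function() {
    // Resize the paper to fit the content whenever a cell moves,
    // keeping the 650x200 floor from the question.
    paper.fitToContent({
        minWidth: 650,
        minHeight: 200,
        padding: 10 // assumed padding, adjust to taste
    });
});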
I have two SVG files: initial.svg and final.svg. I want to morph initial.svg into final.svg on a button click event. I have gone through the libraries suggested in this question, but there is no clear documentation or example of how to achieve this specific morph. I have exported these animations from an XD prototype. I want to achieve a simple ease-in animation by specifying the initial state of an SVG and the final state of the same SVG. Any recommendations would be highly appreciated.
If the SVGs are (or can be) drawn from the same paths, then I would suggest the NPM library svg-path-morph. It allows you to interpolate freely between an arbitrary number of SVG paths.
An example of its usage:
import { compile, morph } from 'svg-path-morph'

// Get the d attributes of the <path> elements you want to morph between
const happy = document.getElementById('happy').getAttribute('d')
const angry = document.getElementById('angry').getAttribute('d')

// Compile the morph base (average path embedding)
const compiled = compile([
  happy,
  angry
])

// Morph between the happy/angry faces
const slightlyAngry = morph(
  compiled,
  [
    0.80, // 80% happy
    0.20  // 20% angry
  ]
)

// Use the result as the d attribute of a <path> element
document.getElementById('the-face').setAttribute('d', slightlyAngry)
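To get the ease-in, button-triggered animation from the question, you can drive the morph weights over time. A minimal sketch, assuming the compiled base from above and hypothetical the-face/play element ids:

// Animate from 100% initial to 100% final over `duration` ms
// with a simple ease-in curve.
function playMorph (duration) {
  const start = performance.now()
  function frame (now) {
    const t = Math.min((now - start) / duration, 1)
    const eased = t * t // quadratic ease-in
    const d = morph(compiled, [1 - eased, eased])
    document.getElementById('the-face').setAttribute('d', d)
    if (t < 1) requestAnimationFrame(frame)
  }
  requestAnimationFrame(frame)
}

document.getElementById('play').addEventListener('click', () => playMorph(300))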
I have been running the SpatialUnderstandingExample scene from HoloToolkit. I couldn't figure out how to place my objects into the scene. I want to replace those small boxes that come by default with my own objects. How can I do that?
Thanks
edit: found the DrawBox call, but how do I push my object there?
edit2: finally pushed an object to the position, but the code is still very complicated and it's messing up the size and shape of my object. Will try to make it clean and neat.
It's been a while since I've looked at that example, so hopefully I remember its method names correctly. It contains a "DrawBox" method that is called after a successful call to get a location from spatial understanding. The call that creates the box looks something like this:
DrawBox(toPlace, Color.red);
Replace this call with the following (assuming "toPlace" contains the result from the spatial understanding call and "model" contains the model you are trying to place there):
var rotation = Quaternion.LookRotation(toPlace.Normal, Vector3.up);

// Stay centered in the square but move down to the ground
var position = toPlace.Position - new Vector3(0, RequestedSize.y * .5f, 0);

// Instantiate the hologram from the model
GameObject newObject = Instantiate(model, position, rotation) as GameObject;
if (newObject != null)
{
    // Set the parent of the new object to the GameObject it was placed on
    newObject.transform.parent = gameObject.transform;
}
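If parenting distorts the model's size and shape (as in edit2 above), one thing to check is that the world transform is preserved when reparenting. A hedged sketch of one variant, using Unity's SetParent overload; with worldPositionStays set to true, the child keeps its world position, rotation, and scale instead of having the parent's transform applied on top:

if (newObject != null)
{
    // Explicitly keep the hologram's world position/rotation/scale
    // when placing it under the (possibly scaled) anchor GameObject.
    newObject.transform.SetParent(gameObject.transform, worldPositionStays: true);
}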
.NET 4.5, C#, Npgsql 3.1.0
I have a query which retrieves a Postgis geometry field - the only way I could see of doing this was:
public class pgRasterChart
{
    ...
    public NpgsqlTypes.PostgisGeometry GEOMETRY;
    ...
}
...
NpgsqlDataReader reader = command.ExecuteReader();
try
{
    while (reader.Read())
    {
        pgRasterChart chart = new pgRasterChart();
        chart.GEOMETRY = (PostgisGeometry)reader.GetValue(21);
        ...
This works, but I need to get at the coordinates of the GEOMETRY field and I can't find a way of doing that. I want to use the coordinates to display the results on an OpenLayers map.
Any answers most gratefully received. This is my first post, so my apologies if the etiquette is clumsy or the question unclear.
Providing another answer because the link above to the documentation for the Postgis types is now broken.
PostgisGeometry is an abstract base class that does not contain anything more exciting than the SRID. Instead, you want to cast the object obtained by your data reader to the appropriate type (any of the following):
PostgisLineString
PostgisMultiLineString
PostgisMultiPoint
PostgisMultiPolygon
PostgisPoint
PostgisPolygon
These classes have ways of getting to the coordinates, e.g.:
...
NpgsqlDataReader reader = command.ExecuteReader();
try
{
    while (reader.Read())
    {
        var geom = (PostgisLineString)reader.GetValue(0);
        var firstCoordinate = geom[0]; // Coordinate in the linestring at index 0
        var X = firstCoordinate.X;
        var Y = firstCoordinate.Y;
        ...
As you can see here:
https://github.com/npgsql/npgsql/blob/dev/src/Npgsql.LegacyPostgis/PostgisTypes.cs
the PostgisGeometry types are sets of x/y pairs. For example, a linestring is an array of points, a polygon is an array of rings, and so on. You could traverse those structures and get the coordinates.
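For example, a minimal sketch of traversing a linestring (assuming PostgisLineString enumerates its coordinates, as in the linked PostgisTypes.cs):

var line = (PostgisLineString)reader.GetValue(0);

// A linestring enumerates its coordinate pairs; each has an X and a Y.
foreach (var coordinate in line)
{
    Console.WriteLine("{0}, {1}", coordinate.X, coordinate.Y);
}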
However, if you just want to display geometries using OpenLayers, I suggest you use the WKT format. Change your query to select st_astext(geometry) instead of geometry, then treat the result as a string and hand it back to OpenLayers, using OpenLayers.Geometry.fromWKT to parse the WKT into an OpenLayers.Geometry.
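A minimal sketch of that approach on the C# side (the table and column names are placeholders, and conn is assumed to be an open NpgsqlConnection):

// Select the geometry as WKT text instead of a binary Postgis type.
using (var command = new NpgsqlCommand(
    "SELECT st_astext(geometry) FROM my_charts", conn))
using (var reader = command.ExecuteReader())
{
    while (reader.Read())
    {
        // e.g. "LINESTRING(30 10, 10 30, 40 40)" -- pass this string
        // to OpenLayers.Geometry.fromWKT on the client.
        string wkt = reader.GetString(0);
    }
}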
I have a view that has multiple views inside it, and an image presentation (a.k.a. 'cover flow') in there too... and I need to take a screenshot programmatically!
Since the docs say that renderInContext: will not render 3D transforms:
"Important The Mac OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of Mac OS X may add support for rendering these layers and properties."
source: https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CALayer_class/Introduction/Introduction.html
I have searched a lot, and my 'best' solution (which is not good at all) is to create my own CGContext and record all the Core Graphics drawing into it. But I really do not want to do that, because I would need to rewrite most of my animation code, and it would be very expensive in memory... I found other solutions (some of them unworkable), such as using OpenGL or capturing through an AVCaptureSession, but none that helped.
What are my options? Has anyone else hit this problem?
Thanks for your time!
Have you actually tried it? I'm currently working on a project with several 3D transforms, and when I programmatically take this screenshot it works just fine :)
Here is the code I use:
- (UIImage *)getScreenshot
{
    // Use a retina-scale context on retina screens.
    CGFloat scale = 1.0;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    {
        CGFloat tmp = [[UIScreen mainScreen] scale];
        if (tmp > 1.5)
        {
            scale = 2.0;
        }
    }

    if (scale > 1.5)
    {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, scale);
    }
    else
    {
        UIGraphicsBeginImageContext(self.frame.size);
    }

    // self here is a UIView
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenshot;
}
I got it working with protocols... I'm implementing a protocol in all UIView classes that apply 3D transforms. So when I request a screenshot, each of those subviews takes its own screenshot, and they are combined into one UIImage. Not so good for lots of views, but I'm only doing it for a few.
#pragma mark - Protocol implementation 'TDITransitionCustomTransform'

// Conforms to the "TDITransitionCustomTransform" protocol; returns the
// current image state of the view by rendering its current layer.
- (UIImage *)imageForCurrentState {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Return the rendered image
    return screenShot;
}
I think it may be working now because I'm doing the render on the transformed view's layer, which has been transformed itself...
And could it be that it wasn't working before because renderInContext: doesn't capture the layers of subviews? Anyone interested in a bit more code for this solution can find it in the Apple dev forum.
It may be a bug in the function, or it may just not be designed for this purpose...
Maybe you can use Core Graphics instead of CATransform3DMakeRotation :)
CGAffineTransform flip = CGAffineTransformMakeScale(1.0, -1.0);
An affine transform like this does take effect in renderInContext:.
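One way to read that suggestion (a sketch, with view standing in for the transformed view): apply the flip to the graphics context itself before rendering, which renderInContext: does honor:

UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();

// Flip vertically: translate down by the height, then scale by (1, -1).
CGContextTranslateCTM(ctx, 0, view.bounds.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);

[view.layer renderInContext:ctx];
UIImage *flipped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();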
I've got an MKPolyline made from a CLLocationCoordinate2D array (Points). That's all fine.
I added this line to the Map as an overlay, like so: Map.AddOverlay(line);
I even set this: Map.SetVisibleMapRect(line.BoundingMapRect, true);
But the line does not show up, although the map bounds are correct.
I'm looking into MKPolylineView, but can't get it to work.
Does anyone know how to set the colour and line width?
Thanks
After much head scratching, here is how to display a MKPolyline on a MKMapView:
Step 1: Create a delegate method for the map's GetViewForOverlay
Map.GetViewForOverlay = Map_GetViewForOverlay;
Where Map is the MKMapView.
MKOverlayView Map_GetViewForOverlay(MKMapView mapView, NSObject overlay)
{
    if (overlay.GetType() == typeof(MKPolyline))
    {
        MKPolylineView p = new MKPolylineView((MKPolyline)overlay);
        p.LineWidth = 2.0f;
        p.StrokeColor = UIColor.Green;
        return p;
    }

    return null;
}
Step 2: Create a new instance of MKPolyline
MKPolyline line = MKPolyline.FromCoordinates(polyPoints);
Where polyPoints is an Array of CLLocationCoordinate2D.
Step 3: Add the overlay to the map
Map.AddOverlay(line);
Step 4: Use code below to zoom and change Map bounds to fit route
Map.SetVisibleMapRect(line.BoundingMapRect, true);
If your intent is to dynamically draw a route over the MapView, given a backing model object that indicates two coordinates, you will want to take a look at my project here:
https://github.com/anujb/MapWithRoutes
This will allow you to overlay a path, and it will update as the map changes. It's a modified version of an Obj-C port that makes use of background threads, so it's not blocking.
Thanks,
Anuj