Applying multiple AffineTransformations in Batik (SVG)

I am trying to scale and translate an SVG image in Batik.
To do the zooming I use
AffineTransform at= new AffineTransform();
at.scale(sx, sy);
at.translate(tx, ty);
canvas.setRenderingTransform(at, true);
That works fine (after I found out that the sx, sy, tx and ty values must be screen coordinates, not SVG coordinates).
But I want to allow multiple scaling operations.
The problem is that I cannot "add" another transformation to the existing one.
I tried first reverting the old transformation and then applying the new one, but that leads to another problem: the reversion doesn't work! It results in an image that is smaller than the original one (thus zooming out).
I experimented a bit and tried to apply a transformation, then apply the inverse and then apply the original one again:
final AffineTransform at= new AffineTransform();
at.scale(zoom.sx, zoom.sy);
at.translate(zoom.tx, zoom.ty);
canvas.setRenderingTransform(at, true);
...
final AffineTransform reverseAt = at.createInverse();
canvas.setRenderingTransform(reverseAt, true);
...
final AffineTransform reverseBackAt= reverseAt.createInverse();
canvas.setRenderingTransform(reverseBackAt, true);
The first transformation is correct. The second one leads to rubbish, but applying the original one (or the inverse of the inverse) again leads to the correct result.
So actually, there are two questions:
What is the best way to apply multiple zooming operations?
Why is the result of the inverse transformation not what I expected?

To answer your first question, use AffineTransform.concatenate():
AffineTransform firstTransform = new AffineTransform();
firstTransform.scale(sx, sy);
firstTransform.translate(tx, ty);
// Example: double all sizes
AffineTransform secondTransform = AffineTransform.getScaleInstance(2, 2);
secondTransform.concatenate(firstTransform);
canvas.setRenderingTransform(secondTransform, true);
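If you want each new zoom gesture to build on whatever zoom is already applied, one option (a minimal sketch; `applyAdditionalZoom` is just an illustrative helper name) is to pre-concatenate the new step onto the transform the canvas is currently rendering with:
// Sketch: compose a new zoom step with the canvas' current rendering transform.
// Assumes `canvas` is the org.apache.batik.swing.JSVGCanvas from the question and
// that sx, sy, tx, ty are expressed in screen coordinates, as in the question.
void applyAdditionalZoom(double sx, double sy, double tx, double ty) {
    AffineTransform step = new AffineTransform();
    step.scale(sx, sy);
    step.translate(tx, ty);

    // Start from the transform currently in use and apply the new step after it.
    AffineTransform combined = new AffineTransform(canvas.getRenderingTransform());
    combined.preConcatenate(step);

    canvas.setRenderingTransform(combined, true);
}
preConcatenate applies the new step after the existing transform, which is the same ordering as secondTransform.concatenate(firstTransform) above.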

Related

Unexpected behaviour of geometry in Earth Engine

I am analysing solar farms and have defined two areas of geometry. In the example below, for a site called 'Stateline', I have drawn the boundary of the site and saved the geometry as a variable 'Stateline_boundary'. I have drawn around the solar panels within the boundary, which exist in two distinct groups and saved the geometry as a variable 'Stateline_panels'.
Stateline_panels has two coordinate lists (as there are two areas of panels).
When I try to subtract the area covered by the panels from the area within the boundary, only the first of the two lists in the 'Stateline_panels' geometry is used (see code below).
var mask = Stateline_boundary;
var mask_no_panels = mask.difference(Stateline_panels);
Map.addLayer(mask_no_panels,{},'Stateline_mask_no_panels',false);
I don't understand the behaviour of the geometry, specifically why the 'Stateline_panels' geometry displays in its entirety when added to the map, but when used as a mask only the first of its two lists of coordinates is used.
I was going to write a longer question asking why the geometry variables seem to behave differently when they are imported into the script rather than defined within the script (which I don't think should make a difference, but it does). However, I think this is an earlier manifestation of whatever is going on.
The methodology that I found worked in the end was to create geometry assets individually with the polygon tool in the Earth Engine IDE - ensuring that each is on a different layer (using the line tool, then converting to polygons never worked).
Not only was this more flexible, it was also easier to manage in Earth Engine, as editing geometries is not easy. I read about the importance of winding clockwise, though I never determined whether that was part of the issue here. If I always drew polygons clockwise the issue never occurred.
I ended up with my AOI covered in polygons, each colour a different named layer/geometry object.
Once this was done, manipulating each geometry object in the code editor was relatively straightforward. They can be converted to FeatureCollections and merged (or subtracted) using simple code - see below for my final code.
It was also then easy to share them between scripts by importing the generated code.
I hope that helps someone - my first attempt at answering a question (even if it's my own). :)
// Convert panel geometries to Feature Collections and merge to create one object.
var spw = ee.FeatureCollection(stateline_panels_west);
var spe = ee.FeatureCollection(stateline_panels_east);
var stateline_panels = spw.merge(spe);
// Convert 'features to mask' geometries to Feature Collections.
var gc = ee.FeatureCollection(golf_course);
var sp = ee.FeatureCollection(salt_pan);
var sc = ee.FeatureCollection(solar_concentrator);
var h1 = ee.FeatureCollection(hill_1);
var h2 = ee.FeatureCollection(hill_2);
var h3 = ee.FeatureCollection(hill_3);
var mf = ee.FeatureCollection(misc_features);
// Merge geometries to create mask
var features_to_mask = gc.merge(sp).merge(sc).merge(h1).merge(h2).merge(h3).merge(mf);
// Convert 'Features_to_mask' to geometry (needed to create mask)
var features_to_mask = features_to_mask.geometry();
// Change name
var mask = features_to_mask;
///// If site has other solar panels nearby need to add these separately & buffer by 1km
var extra_mask = ee.Feature(solar_concentrator);
var extra_mask = extra_mask.buffer(1000);
var extra_mask = extra_mask.geometry();
///// Join mask & extra mask into single feature using .union()
// Geometry objects
var mask = mask.union(extra_mask);
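Tying this back to the original subtraction problem: once the panel geometries are merged into a single FeatureCollection, its combined geometry can be subtracted from the boundary in one call. A minimal sketch, assuming 'Stateline_boundary' from the question is still available:
// Sketch: subtract the merged panel geometry from the site boundary.
var stateline_no_panels = Stateline_boundary.difference(stateline_panels.geometry());
Map.addLayer(stateline_no_panels, {}, 'Stateline_no_panels', false);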

Add edge gremlin query in Nodejs

Here is the code for adding the tribe vertex:
let addTribe = g.addV('tribe')
addTribe.property('tname', addTribeInput.tribename)
addTribe.property('tribeadmin', addTribeInput.tribeadmin)
const newTribe = await addTribe.next()
and here is the code for adding the edge:
const addMember = await
g.V(addTribeInput.tribeadmin).addE('member').
to(g.V(newTribe.value.id)).next()
Is this the correct way of adding edges?
I am just confused about what I should pass to the .to() method.
Gremlin is meant to be chained, so unless you have an explicit reason to break things up, it's much nicer to just do:
g.addV('tribe').
property('tname', addTribeInput.tribename).
property('tribeadmin', addTribeInput.tribeadmin).as('x').
V(newTribe.value.id).as('y').
addE('member').
from('x').
to('y')
Given your variable names I'm not completely sure that I'm doing what you want exactly (e.g. getting the edge direction right), but the point here is that for adding edges you just need to specify the direction of the edge "from" one vertex (i.e. the starting vertex) "to" another vertex (i.e. the ending vertex).
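In gremlin-javascript that chained form might look roughly like the sketch below (untested; it keeps the edge direction from your original code, i.e. from the admin vertex to the new tribe, and 't' is just an illustrative step label):
// Sketch: create the tribe vertex and the 'member' edge in one traversal.
const addedEdge = await g.addV('tribe').
  property('tname', addTribeInput.tribename).
  property('tribeadmin', addTribeInput.tribeadmin).as('t').
  V(addTribeInput.tribeadmin).
  addE('member').to('t').
  next();
So .to() can take a step label pointing at a vertex earlier in the traversal, another traversal (as in your original g.V(newTribe.value.id)), or a vertex object; in every case it just identifies the vertex the edge should end at.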

Revit API. ReferenceIntersector with TopographySurfaces

Does anyone know if ReferenceIntersector works with TopographySurfaces? I cannot make it work. I need to find a point on the surface based on an intersection with a line.
Did you solve it? If not, I gave it a try and for me the code here works fine:
public XYZ ProjectPointOnTopographySurface(XYZ point, int direction)
{
// For getting the 3D view
View3D view3D = new FilteredElementCollector(Document)
.OfClass(typeof(View3D))
.Cast<View3D>()
.Where(v => v.Name == "{3D}")
.FirstOrDefault();
XYZ vectorDirection = new XYZ(0, 0, direction);
ElementClassFilter intersectionFilter = new ElementClassFilter(typeof(TopographySurface));
ReferenceIntersector referenceIntersector = new ReferenceIntersector(intersectionFilter, FindReferenceTarget.All, view3D);
ReferenceWithContext referenceWithContext = referenceIntersector.FindNearest(point, vectorDirection);
return referenceWithContext.GetReference().GlobalPoint;
}
Regardless of whether the ReferenceIntersector does or does not work with topography surfaces, you can pretty easily solve the problem you describe yourself using other means. Simply ask the surface for its tessellated representation. That will return a bunch of triangles. Then, implement your own algorithm to intersect a triangle with the line. That should give you all you need, really.
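To illustrate that second suggestion, here is a rough, untested sketch (illustrative names; assumes the usual Autodesk.Revit.DB and System usings) that tessellates the TopographySurface and runs a standard ray/triangle test against each triangle:
// Sketch: find where a line (origin + t * direction) hits a TopographySurface
// by testing its tessellated triangles. Returns null if there is no hit.
XYZ IntersectTopography(TopographySurface topo, XYZ origin, XYZ direction)
{
    Options options = new Options();
    foreach (GeometryObject obj in topo.get_Geometry(options))
    {
        Mesh mesh = obj as Mesh;
        if (mesh == null) continue;

        for (int i = 0; i < mesh.NumTriangles; i++)
        {
            MeshTriangle tri = mesh.get_Triangle(i);
            XYZ p0 = tri.get_Vertex(0);
            XYZ p1 = tri.get_Vertex(1);
            XYZ p2 = tri.get_Vertex(2);

            // Moeller-Trumbore ray/triangle intersection.
            XYZ e1 = p1 - p0;
            XYZ e2 = p2 - p0;
            XYZ h = direction.CrossProduct(e2);
            double a = e1.DotProduct(h);
            if (Math.Abs(a) < 1e-9) continue;      // line parallel to triangle
            double f = 1.0 / a;
            XYZ s = origin - p0;
            double u = f * s.DotProduct(h);
            if (u < 0 || u > 1) continue;
            XYZ q = s.CrossProduct(e1);
            double v = f * direction.DotProduct(q);
            if (v < 0 || u + v > 1) continue;
            double t = f * e2.DotProduct(q);
            if (t > 1e-9) return origin + direction * t;   // intersection point
        }
    }
    return null;
}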

Create Dimensions from edge of wall, to sides of openings, to other edge of wall

I've been struggling with this issue off and on for the better part of a year.
As the title says, I wish to dimension from one side of a wall to both sides of openings (door openings), then terminate at the other end of the wall (vertically and horizontally). I also wish to dimension to all families hosted in the wall, but I have been able to accomplish this using Scott Wilson's voodoo magic helper class, found here: http://thebuildingcoder.typepad.com/blog/2016/04/stable-reference-string-magic-voodoo.html
foreach (ElementId ele in selSet) {
FamilyInstance fi = doc.GetElement(ele) as FamilyInstance;
Reference reference = ScottWilsonVoodooMagic.GetSpecialFamilyReference(fi,ScottWilsonVoodooMagic.SpecialReferenceType.CenterLR,doc);
refs.Append(reference);
pts[i] = ( fi.Location as LocationPoint ).Point;
i++;
}
XYZ offset = new XYZ(0,0,4);
Line line = Line.CreateBound( pts[0]+offset, pts[selSet.Count - 1]+offset );
using( Transaction t = new Transaction( doc ) )
{
t.Start( "dimension embeds" );
Dimension dim = doc.Create.NewDimension(doc.ActiveView, line, refs );
t.Commit();
}
The problem lies in determining the appropriate stable references to the wall faces. I am able to find all faces on a wall, but this gives me 100+ faces to sort through.
If anyone can help me it would be greatly appreciated!
Side note: The closest I've gotten is casting a ray through my panel, then using a ReferenceIntersector to determine references. But I'm really struggling with the implementation: http://thebuildingcoder.typepad.com/blog/2015/12/retrieving-wall-openings-and-sorting-points.html
These two posts should provide more than enough to solve all your issues:
Dimension walls by iterating their faces
Dimension walls by shooting a ray
Basically, you need to obtain references to the faces or edges that you wish to attach the dimensioning to. These references can be obtained in several ways. Two common and easy approaches are:
Retrieve the element geometry using the option ComputeReferences set to true and extract the specific face required.
Shoot a ray through the model to determine the required element and its face using the ReferenceIntersector class (Revit 2017 API).
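As a rough illustration of the first approach (untested sketch; assumes a Wall in a variable named wall and the usual Autodesk.Revit.DB usings), you can extract a reference to one of the wall's side faces like this:
// Sketch: get a reference to the wall face whose normal matches the wall's
// exterior orientation, with ComputeReferences enabled so the reference is
// valid for dimensioning.
Options opt = new Options();
opt.ComputeReferences = true;
Reference sideFaceRef = null;
foreach (GeometryObject obj in wall.get_Geometry(opt))
{
    Solid solid = obj as Solid;
    if (solid == null) continue;
    foreach (Face face in solid.Faces)
    {
        PlanarFace pf = face as PlanarFace;
        if (pf != null && pf.FaceNormal.IsAlmostEqualTo(wall.Orientation))
        {
            sideFaceRef = pf.Reference;   // add this to the ReferenceArray
            break;
        }
    }
}
The opposite wall face can be found the same way by comparing against the negated orientation.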

iOS 5 + GLKView: How to access pixel RGB data for colour-based vertex picking?

I've been converting my own personal OGLES 2.0 framework to take advantage of the functionality added by the new iOS 5 framework GLKit.
After pleasing results, I now wish to implement the colour-based picking mechanism described here. For this, you must access the back buffer to retrieve a touched pixel RGBA value, which is then used as a unique identifier for a vertex/primitive/display object. Of course, this requires temporary unique coloring of all vertices/primitives/display objects.
I have two questions, and I'd be very grateful for assistance with either:
1. I have access to a GLKViewController, GLKView, CAEAGLLayer (of the GLKView) and an EAGLContext. I also have access to all OGLES 2.0 buffer-related commands. How do I combine these to identify the colour of a pixel in the EAGLContext I'm tapping on-screen?
2. Given that I'm using Vertex Buffer Objects to do my rendering, is there a neat way to override the colour provided to my vertex shader which firstly doesn't involve modifying buffered vertex (colour) attributes, and secondly doesn't involve the addition of an IF statement into the vertex shader?
I assume the answer to (2) is "no", but for reasons of performance and non-arduous code revamping I thought it wise to check with someone more experienced.
Any suggestions would be gratefully received. Thank you for your time
UPDATE
Well I now know how to read pixel data from the active frame buffer using glReadPixels. So I guess I just have to do the special "unique colours" render to the back buffer, briefly switch to it and read pixels, then switch back. This will inevitably create a visual flicker, but I guess it's the easiest way; certainly quicker (and more sensible) than creating a CGImageContextRef from a screen snapshot and analyzing that way.
Still, any tips as regards the back buffer would be much appreciated.
Well, I've worked out exactly how to do this as concisely as possible. Below I explain how to achieve this and list all the code required :)
In order to allow touch interaction to select a pixel, first add a UITapGestureRecognizer to your GLKViewController subclass (assuming you want tap-to-select-pixel), with the following target method inside that class. You must make your GLKViewController subclass a UIGestureRecognizerDelegate:
@interface GLViewController : GLKViewController <GLKViewDelegate, UIGestureRecognizerDelegate>
After instantiating your gesture recognizer, add it to the view property (which in GLKViewController is actually a GLKView):
// Inside GLKViewController subclass init/awakeFromNib:
[[self view] addGestureRecognizer:[self tapRecognizer]];
[[self tapRecognizer] setDelegate:self];
Set the target action for your gesture recognizer; you can do this when creating it using a particular init... however I created mine using Storyboard (aka "the new Interface Builder in Xcode 4.2") and wired it up that way.
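If you'd rather create it in code than in a Storyboard, a minimal sketch (assuming the tapRecognizer property used above and the onTapGesture: action shown below) is:
// Sketch: create the tap recognizer programmatically instead of in a Storyboard.
UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc]
    initWithTarget:self action:@selector(onTapGesture:)];
[self setTapRecognizer:tap];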
Anyway, here's my target action for the tap gesture recognizer:
-(IBAction)onTapGesture:(UIGestureRecognizer*)recognizer {
const CGPoint loc = [recognizer locationInView:[self view]];
[self pickAtX:loc.x Y:loc.y];
}
The pick method called in there is one I've defined inside my GLKViewController subclass:
-(void)pickAtX:(GLuint)x Y:(GLuint)y {
GLKView *glkView = (GLKView*)[self view];
UIImage *snapshot = [glkView snapshot];
[snapshot pickPixelAtX:x Y:y];
}
This takes advantage of a handy new method snapshot that Apple kindly included in GLKView to produce a UIImage from the underlying EAGLContext.
What's important to note is a comment in the snapshot API documentation, which states:
This method should be called whenever your application explicitly needs the contents of the view; never attempt to directly read the contents of the underlying framebuffer using OpenGL ES functions.
This gave me a clue as to why my earlier attempts to invoke glReadPixels to access pixel data generated an EXC_BAD_ACCESS, and it was the indicator that sent me down the right path instead.
You'll notice in my pickAtX:Y: method defined a moment ago I call a pickPixelAtX:Y: on the UIImage. This is a method I added to UIImage in a custom category:
@interface UIImage (NDBExtensions)
-(void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y;
@end
Here is the implementation; it's the final code listing required. The code came from this question and has been amended according to the answer received there:
@implementation UIImage (NDBExtensions)
- (void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y {
CGImageRef cgImage = [self CGImage];
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
if ((x < width) && (y < height))
{
CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
CFDataRef bitmapData = CGDataProviderCopyData(provider);
const UInt8* data = CFDataGetBytePtr(bitmapData);
size_t offset = ((width * y) + x) * 4;
UInt8 b = data[offset+0];
UInt8 g = data[offset+1];
UInt8 r = data[offset+2];
UInt8 a = data[offset+3];
CFRelease(bitmapData);
NSLog(#"R:%i G:%i B:%i A:%i",r,g,b,a);
}
}
@end
I originally tried some related code found in an Apple API doc entitled "Getting the pixel data from a CGImage context", which required two method definitions instead of this one, but needed much more code and returned data of type void * that I was unable to interpret correctly.
That's it! Add this code to your project, then upon tapping a pixel it will output it in the form:
R:24 G:46 B:244 A:255
Of course, you should write some means of extracting those RGBA int values (which will be in the range 0 - 255) and using them however you want. One approach is to return a UIColor from the above method, instantiated like so:
UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
