I've got a problem with my MKAnnotationViews when MKUserTrackingModeFollowWithHeading is enabled on the MKMapView.
I positioned my images using the centerOffset property of the MKAnnotationView. Specifying the coordinates of the pin's tip relative to a coordinate system at the center of the image is somewhat counter-intuitive, but I came up with the following formula:
annotationView.centerOffset = CGPointMake(imageWidth/2.0 - tipXCoordinate, imageHeight/2.0 - tipYCoordinate);
This works fine for zooming the map in and out. The tips of the pins keep their relative position on the map.
However, when I enable MKUserTrackingModeFollowWithHeading, it no longer works. The pins rotate around the center of the image instead of around the tip, so when the map rotates, the tips do not point at the locations they are supposed to annotate.
I've played around a bit with the frame and center properties of the MKAnnotationView, but they seem to have no effect on the alignment of the pins whatsoever.
Interestingly, the MKPinAnnotationView does not seem to use centerOffset at all, but a shifted frame instead. However, I was unable to reproduce this. Changing the frame of my custom view did not move it at all.
Thanks for any insights you can provide :-)
Solution:
Don't use centerOffset! Use annotationView.layer.anchorPoint instead. The coordinate system of the anchor point is much nicer, too: coordinates range from 0.0 (top/left) to 1.0 (bottom/right) of the image rectangle:
annotationView.layer.anchorPoint = CGPointMake(tipXCoordinate/imageWidth, tipYCoordinate/imageHeight);
A friend asks me to let you know that you should "try this for instance":
self.layer.anchorPoint = CGPointMake (0.5f, 1.0f);
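To put the formula in context, here is a minimal sketch of a delegate method using the anchor-point approach. The reuse identifier, image name, and the 32 x 40 pt image with its tip at (16, 40) are assumptions for illustration, not taken from the question:
- (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id<MKAnnotation>)annotation {
    static NSString * const reuseId = @"pin";
    MKAnnotationView *view = [mapView dequeueReusableAnnotationViewWithIdentifier:reuseId];
    if (!view) {
        view = [[MKAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:reuseId];
        view.image = [UIImage imageNamed:@"pin"]; // assumed 32 x 40 pt image, tip at (16, 40)
    }
    view.annotation = annotation;
    // Anchoring the layer at the tip makes heading rotation pivot around the tip, not the image center.
    view.layer.anchorPoint = CGPointMake(16.0 / 32.0, 40.0 / 40.0);
    return view;
}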
TL;DR:
I can't draw an image exactly onto the full screen on wide-aspect (13:6) phones. If I observe the safe area, the error is (predictably) underscan. Using .edgesIgnoringSafeArea() goes (unexpectedly) too far in the other direction.
Update
Apple DTS have suggested this is a bug, refunded me one support incident, and invited me to submit a bug report. It is in the pipeline at https://feedbackassistant.apple.com/feedback/8192204
Caveat Lector
My presumptions about .scaledToFill might be wrong. I address that at the end.
Code
So elementary I can put it here and it won't even slow you down
struct ContentView: View {
    var body: some View {
        Image("testImage").resizable().scaledToFill()
        // .edgesIgnoringSafeArea(.all)
    }
}
Test Image
The Test Image is a landscape rectangle, proportioned at 13:6, like the wide phone. (E.g. the 812:375 proportion of the original iPhone X.) The gray periphery is not part of the image.
It has sub-frames marked that correspond to the narrower (older) phones (16:9) and to pads (4:3).
Runtime Results
The Xcode project settings are explicitly landscape-only, for both pads and phones.
For narrow phones and all pads, the code above, observing safe areas, renders the Test Image like I expect:
But on wide phones, I can't get the red rect to coincide with the screen edges.
Wide Phones
With no call to .edgesIgnoringSafeArea(), that is, observing the safe area, the image is naturally mapped to a subset of the full screen.
With the call to .edgesIgnoringSafeArea(), I expected this to exactly fill the screen, but it overscans:
Here is the Xcode view-hierarchy debugger's perspective on the previous: the image is being mapped to a rect larger than the full screen. Why?
Order of Events
If I reverse the order of modifiers, and call .edgesIgnoringSafeArea() before .scaledToFill(), I get aspect ratio distortion, which .scaledToFill() is supposed to prevent. (See circle become ellipse in screen shot.) An explanation of how these operations compose, and why they do not commute, might go a long way to answering my primary questions.
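For concreteness, the reversed order described above looks like this (same test image; this is the variant that distorts):
Image("testImage")
    .resizable()
    .edgesIgnoringSafeArea(.all)
    .scaledToFill()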
Workaround
I think the above should work, and I don't see why it doesn't. What does work, on wide phones, is to eliminate the .scaledToFill modifier. Then you get this. But it only works because the test image already has the exact aspect ratio of the display, so it is not a very general solution.
Scale to Fill
In the restricted domain of landscape images and displays, I expect the operation of scale-to-fill on the 13:6 test image to be equivalent to (to have the semantics of) the following, sketched in code below:
Center the test image in the destination (container) rect, sized to fit entirely in the container.
I have been expecting that ignoring safe areas means the "destination" will be the full screen, but that may be where I err.
Expand the test image, maintaining proportion and center, until one pair of sides coincides with those of the container.
For narrower displays, the left and right edges will meet first, and the top and bottom will be inside the destination rect.
But don’t stop now. That would be scale to fit, or letterboxing.
Expand until your top and bottom coincide with those of the container.
For narrower displays this means content will be cropped on both sides.
For 13:6 displays, all four image edges will coincide with the display edges at the same time.
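Here is that rule as a small sketch in code. This is my own hypothetical helper, not SwiftUI's implementation; it just makes the "take the larger axis ratio" semantics explicit:
import CoreGraphics

// Scale factor that makes `image` cover `container` while preserving aspect ratio.
func scaleToFillFactor(image: CGSize, container: CGSize) -> CGFloat {
    max(container.width / image.width, container.height / image.height)
}
// For a 13:6 image in a 13:6 container both ratios are equal,
// so all four edges should reach the container edges at the same time.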
I do not know why .edgesIgnoringSafeArea() does not work as it should but here is a workaround that should help you.
GeometryReader { geo in
    Image("testImage")
        .resizable()
        .scaledToFill()
        .frame(width: geo.size.width, height: geo.size.height)
}
.edgesIgnoringSafeArea(.all)
Update:
Here is another way to do the same thing without GeometryReader:
Image("testImage")
.resizable()
.scaledToFill()
.frame(minWidth: 0, maxWidth: .infinity, minHeight: 0, maxHeight: .infinity)
.edgesIgnoringSafeArea(.all)
I am using SVGPanZoom to manage the zooming of an SVG image in my hybrid Android app (for all intents and purposes the same behavior as in Chrome). While zooming works well, I have found a strange issue. My original inline SVG element goes like this:
<svg id='puzzle' viewBox='0 0 1600 770' preserveAspectRatio='none'
     width='100vw' height='85.5vh' fill-rule='evenodd' clip-rule='evenodd'
     stroke-linejoin='round' stroke-miterlimit='1.414'
     xmlns='http://www.w3.org/2000/svg'
     xmlns:xlink='http://www.w3.org/1999/xlink'>
Initially this SVG element is empty and gets populated programmatically from JavaScript at run time after which I initiate SVGPanZoom as follows
var panZoom = svgPanZoom('#puzzle', {
  panEnabled: false,
  controlIconsEnabled: false,
  zoomEnabled: true,
  dblClickZoomEnabled: true,
  onZoom: postZoom
});
panZoom.refreshRate = 10;
panZoom.zoomScaleSensitivity = 0.02;
The problem I have run into is this: I want my SVG image to fill the available area, 100vw x 85.5vh, completely, which I instruct it to do via the preserveAspectRatio='none' attribute above along with the viewBox='0 0 1600 770' attribute. I have found that this works, so long as I don't use SVGPanZoom. As soon as I initiate panZoom, the viewBox attribute gets stripped out and I end up with an image that no longer shows its default stretching/filling behavior.
SVGPanZoom is widely used, so I assume this behavior is down to me not setting it up properly. Dipping into the code, I have found that SVGPanZoom creates a cacheViewBox and then proceeds to remove the original viewBox attribute.
Which would be fine if zooming then worked and the original behavior of the application did not change, but that is not what I find. What am I doing wrong here?
I've also run into this issue recently. From my research, this is just how the library works. I chose to live with this limitation for now but I found a couple other libraries that may work the way you intend (I haven't tried them yet):
jquery.panzoom is a jquery library that provides this functionality and also has some nice features. I know many people try to avoid jquery but it's pretty small and may do what you want. It handles SVG but I don't know what it does with the viewBox attribute.
react-svg-pan-zoom is a react component which may be useful if you are working in react.
I've also tried the PanZoom library but this also suffers the same viewBox limitation.
A note for anyone running into this thread. In the end I abandoned SVGPanZoom and decided to eschew the route of using any pan/zoom library at all. At the same time I decided to completely stop using the SVG viewBox and handle all zooming/panning entirely on my own through SVG transforms. The core steps involved:
Wrap the entire SVG contents in a group to make it easier to manage the transform. I use the id attribute gOuter for this group.
Set an initial scale for the SVG to occupy the desired client rectangle. In my case I had an original viewBox of 0 0 1600 770 intended to occupy 100% of screen width and 85% of screen height, so my scaling was scaleX = 1600/window.innerWidth and scaleY = 770/(0.85*window.innerHeight).
Apply this initial transform to the wrapping outer group: gOuter.setAttribute('transform', `scale(${scaleX},${scaleY})`)
Now, in order to zoom to an object whose virtual top-left coordinates in the original viewBox were Ox,Oy, you would use the transform
gOuter.setAttribute('transform',
  `scale(${scaleX},${scaleY}) translate(${-Ox},${-Oy}) scale(${2 * scaleX},${2 * scaleY}) translate(${Ox},${Oy})`)
to zoom in by a factor of 2. The important things to understand here:
In SVG, transformations are applied right to left.
Here we are translating the zoom point to the top left, scaling, and then translating it back to its original location.
We also need to allow for the original level of zoom through the initial scaling, so we tag that on as one last transform.
This leaves you in complete control of the zooming process and as a fringe benefit the operation becomes considerably more smooth than when using a pan/zoom library.
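For anyone who wants the general "scale about a point" composition as a self-contained snippet, here is a sketch. It is my own illustration with hypothetical names, and it folds the zoom factor into a single scale rather than composing it with the base scale exactly as above:
// Zoom the wrapped group by factor k about the point (ox, oy), given in the
// group's own user coordinates. SVG applies the rightmost transform first.
function zoomAbout(group, ox, oy, k) {
  group.setAttribute('transform',
    'translate(' + ox + ',' + oy + ') ' +      // 3. move the pivot back
    'scale(' + k + ',' + k + ') ' +            // 2. scale about the origin
    'translate(' + (-ox) + ',' + (-oy) + ')'); // 1. move the pivot to the origin
}

// e.g. zoomAbout(document.getElementById('gOuter'), 800, 385, 2);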
I'm struggling with how to properly implement simultaneous movement and rotation using Kivy (in Python, not kv lang). My goal is to rotate an object so it's facing its destination, then move it towards the destination using Animation. Using the code below I typically get movement relative to the rotated frame instead of relative to my general playing area. For example, without rotation the animation might move an image to point [1,1], whereas with a rotation of 180° the animation moves the image to [-1,-1]. The image is properly rotated in this scenario, meaning it's facing the right way but going the wrong way.
My understanding is that the push/pop matrix functions should provide the appropriate context for the animation rather than the rotated element's context. Because the PopMatrix function happens in canvas.after, it seems to have no effect: my animation completes before the original matrix is restored.
I'm lacking some key piece of information here that's causing a lot of headache. Can someone explain why the code below causes an image to move to (-1,-1) rather than the (1,1) indicated, and how to properly implement this?
I threw this code together as an example, my game code is far more complex. That's why I'm hoping for an explanation rather than a direct solution for my code. Thanks.
with self.image.canvas.before:
    PushMatrix()
    self.rot = Rotate()
    self.rot.axis = (0, 0, 1)
    self.rot.origin = self.center
    self.rot.angle = 180
with self.image.canvas.after:
    PopMatrix()

self.anim = Animation(pos=(1, 1), duration=1)
self.anim.start(self)

self.image.pos = self.pos
self.image.size = self.size
In case others are interested in how to make this work consistently: I've found that setting origin and angle on each frame, along with binding the image widget to any pos change on its parent, will ensure the widget moves with its parent and in the proper direction. I implemented that like this:
Instantiate the image like this:
with self.image.canvas.before:
    PushMatrix()
    self.rot = Rotate()
    self.rot.axis = (0, 0, 1)
    self.rot.origin = self.center
    self.rot.angle = 0
with self.image.canvas.after:
    PopMatrix()
Bind it like this:
self.bind(pos=self.binding)

def binding(self, *args):
    self.image.center = self.center
    self.image.size = self.size
On each frame, call a function that does something similar to the below:
self.rot.origin = self.center
self.rot.angle = getangle()  # you can use a set angle or generate a new angle every frame
Rotate effectively changes the coordinate system used by the entire canvas, so after you've rotated by 180 degrees the position [1, 1] really is in the opposite direction to what it was before, as far as any canvas instruction is concerned.
I don't know what self.image is (maybe an Image widget?), but presumably whatever you see is something like a Rectangle drawn on its canvas, whose pos and size match those of the widget. When you update that pos and size, the Rectangle is positioned according to the current coordinate system, which is in the rotated frame.
Thinking about it, I'm not sure if there's a neat way to combine Rotate instructions with Kivy's high level widget coordinates in quite this way. Of course you can work around it in various ways, such as by accounting for the rotation when setting the position of the Rectangle, but that's kind of fiddly, and inconvenient when working with prebuilt widgets. You can also look at what the Scatter widget does to enable arbitrary transformations.
If you just want to rotate by 180 degrees, you can instead adjust the image being displayed, either before displaying it or by adjusting the tex_coords of the Rectangle to change the displayed orientation. However, this won't work for arbitrary rotations, which it looks like you may want.
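To make the coordinate-system point concrete, here is a small standalone sketch (plain Python, hypothetical helper name, not Kivy API) that expresses a target position in the rotated canvas frame; it reproduces the (1, 1) to (-1, -1) behavior from the question:
import math

def to_rotated_coords(pos, origin, angle_deg):
    """Express `pos` in the frame produced by rotating the canvas by `angle_deg` about `origin`."""
    ox, oy = origin
    px, py = pos
    a = math.radians(-angle_deg)  # undo the canvas rotation
    dx, dy = px - ox, py - oy
    return (ox + dx * math.cos(a) - dy * math.sin(a),
            oy + dx * math.sin(a) + dy * math.cos(a))

print(to_rotated_coords((1, 1), (0, 0), 180))  # approximately (-1, -1)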
I'm using AndEngine to create a physics simulation via Box2D.
The bodies are created through PhysicsFactory using Sprites.
My idea is to procedurally position these sprites, following this pattern:
basically one central sprite, which represents my world coordinate center, and a series of cloned sprites created by rotating the base sprite around the world center (the "X" inside the circle).
I've tried the OpenGL way inside AndEngine (translate, rotate, back-translate):
super(stamiRadious, 0, image); // stamiRadious is the distance from the radix (world center) to the "petal" attach point
this.setRotationCenter(0, 0);
this.setRotation((float) Math.toDegrees(angleRad));
this.setPosition(this.getX()+radixX, this.getY()+radixY);
but I failed: the results are not right (wrong final position, and wrong Box2D body properties, as if the sprite were much larger than the image).
I believe part of the problem lies in my interpretation of setRotation and setRotationCenter, and in general in my understanding of the AndEngine coordinate system plus the Box2D coordinate system.
Any thoughts/links to doc/explanation?
Once you have created a physics representation (Body) of a Sprite, you should be very careful about how you modify the Sprite! Usually you don't modify the Sprite anymore at all, but instead modify the Body, by calling
someBody.setTransform(); // Note that positions must be divided by PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT!
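As a rough sketch of what that looks like (assuming the AndEngine Box2D extension's PhysicsConstants and a Body obtained from PhysicsFactory; the pixel coordinates px/py and angleRad are placeholders):
// Position the Body (not the Sprite): Box2D works in meters and radians,
// so divide pixel coordinates by the pixel-to-meter ratio.
final float bodyX = px / PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT;
final float bodyY = py / PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT;
someBody.setTransform(bodyX, bodyY, angleRad);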
Hope that helped :)
I am trying to get DirectX (DX9) to grab a screenshot of the desktop and immediately draw it back out (in smaller dimensions) to my form.
I have DirectX working to the capacity that the device is created along with a few surfaces and I can render them to screen. I am using one surface, F3D3Surf9_SS, to get the desktop screenshot.
Here is my declaration and initialization of variables:
F3D3Surf9_SS : IDirect3DSurface9; //Surface SS
F3D3Surf9_A : IDirect3DSurface9; //Surface A
F3D3Surf9_B : IDirect3DSurface9; //Surface B
...
FDirect3D9.CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, Form1.Handle,
  D3DCREATE_SOFTWARE_VERTEXPROCESSING, @D3DPresentParams,
  FDirect3DDevice9);
FDirect3DDevice9.CreateOffscreenPlainSurface(1360,768,D3DFMT_X8R8G8B8,
D3DPOOL_DEFAULT,F3D3Surf9_A,nil);
D3DXLoadSurfaceFromFile(F3D3Surf9_A,nil,nil,'D:\Images\Pillar.bmp',nil,
D3DX_DEFAULT,0,nil);
FDirect3DDevice9.CreateOffscreenPlainSurface(1360,768,D3DFMT_X8R8G8B8,
D3DPOOL_DEFAULT,F3D3Surf9_B,nil);
D3DXLoadSurfaceFromFile(F3D3Surf9_B,nil,nil,'D:\Images\Niagra.bmp',nil,
D3DX_DEFAULT,0,nil);
FDirect3DDevice9.CreateOffscreenPlainSurface(Screen.Width,Screen.Height,D3DFMT_A8R8G8B8,
D3DPOOL_SCRATCH,F3D3Surf9_SS,nil);
Here is the code I use to grab and then render the screenshot
FDirect3DDevice9.BeginScene;
FDirect3DDevice9.Clear(0,0,D3DCLEAR_TARGET,D3DCOLOR_XRGB(0,0,255),0,0);
FDirect3DDevice9.GetBackBuffer(0,0,D3DBACKBUFFER_TYPE_MONO, BackBuffer);
FDirect3DDevice9.GetFrontBufferData(0,F3D3Surf9_SS); //Get the screen shot
FDirect3DDevice9.StretchRect(F3D3Surf9_SS,nil,BackBuffer,nil,D3DTEXF_NONE); //Draw it
FDirect3DDevice9.EndScene;
FDirect3DDevice9.Present(nil,nil,0,nil);
However this does not work.
The image does not get drawn to screen. If I draw surface A or B to screen, that works, but it doesn't work for Surface SS. However, I know Surface SS has the screenshot in it, since if I call D3DXSaveSurfaceToFile the resulting bitmap I put on the hard disk is a valid screenshot.
Any thoughts on the proper way to do this?
The reason this would not work is that the F3D3Surf9_SS was declared in system memory by D3DPOOL_SCRATCH and cannot be drawn directly to the back buffer as I was trying to.
So my solution was to use the F3D3Surf9_A surface and use UpdateSurface to copy the screenshot in system memory into the surface A in video memory.
The only other change I had to make to get this to work was to create Surface A in the same format as the screenshot surface: D3DFMT_A8R8G8B8. I also had to make sure that the destination surface in UpdateSurface was larger than the source surface.
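Putting that together, a rough sketch of the working path (my reconstruction, not verbatim code; it assumes the screenshot surface is created in D3DPOOL_SYSTEMMEM, which GetFrontBufferData and UpdateSurface require, and that Surface A is at least as large and lives in D3DPOOL_DEFAULT):
// Screenshot surface in system memory, matching the front buffer format
FDirect3DDevice9.CreateOffscreenPlainSurface(Screen.Width, Screen.Height,
  D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, F3D3Surf9_SS, nil);
...
FDirect3DDevice9.GetFrontBufferData(0, F3D3Surf9_SS);                 // desktop -> system memory
FDirect3DDevice9.UpdateSurface(F3D3Surf9_SS, nil, F3D3Surf9_A, nil);  // system -> video memory
FDirect3DDevice9.StretchRect(F3D3Surf9_A, nil, BackBuffer, nil, D3DTEXF_NONE);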
NOTE:
This is slow since we are reading from video memory to system memory and then right back to video memory.
I needed this for my application since I want to capture everything the OS and other applications put on screen, but if you are just worried about your own application then there are better alternatives.
If you know of a way to GetFrontBufferData without putting it to system memory (which is the only way I could see it working) please let me know.