SwiftUI - Filling a wide-aspect screen - layout

TL;DR:
I can't draw an image exactly onto the full screen on wide-aspect (13:6) phones. If I observe the safe area, the error is (predictably) underscan. Using .edgesIgnoringSafeArea() goes (unexpectedly) too far in the other direction.
Update
Apple DTS have suggested this is a bug, refunded me one support incident, and invited me to submit a bug report. It is in the pipeline at https://feedbackassistant.apple.com/feedback/8192204
Caveat Lector
My presumptions about .scaledToFill might be wrong. I address that at the end.
Code
So elementary I can put it here and it won't even slow you down
struct ContentView: View {
    var body: some View {
        Image("testImage").resizable().scaledToFill()
        // .edgesIgnoringSafeArea(.all)
    }
}
Test Image
The Test Image is a landscape rectangle, proportioned at 13:6, like the wide phone. (E.g. the 812:375 proportion of the original iPhone X.) The gray periphery is not part of the image.
It has sub-frames marked that correspond to the narrower (older) phones (16:9) and the pads (4:3).
Runtime Results
The Xcode project settings are explicitly landscape-only, for both pads and phones.
For narrow phones and all pads, the code above, observing safe areas, renders the Test Image like I expect:
But on wide phones, I can't get the red rect to coincide with the screen edges.
Wide Phones
With no call to .edgesIgnoringSafeArea(), that is, while observing the safe area, our image is naturally mapped to a subset of the full screen.
With the call to .edgesIgnoringSafeArea(), I expected this to fill the screen exactly, but it overscans:
Here is the Xcode view-hierarchy debugger's perspective on the previous: the image is being mapped to a rect larger than the full screen. Why?
Order of Events
If I reverse the order of modifiers, and call .edgesIgnoringSafeArea() before .scaledToFill(), I get aspect ratio distortion, which .scaledToFill() is supposed to prevent. (See circle become ellipse in screen shot.) An explanation of how these operations compose, and why they do not commute, might go a long way to answering my primary questions.
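For reference, the reversed order is simply this (a minimal sketch; the view name is only illustrative):
import SwiftUI

struct ReversedContentView: View {
    var body: some View {
        // .edgesIgnoringSafeArea() applied *before* .scaledToFill():
        // this is the ordering that produces the aspect-ratio distortion.
        Image("testImage")
            .resizable()
            .edgesIgnoringSafeArea(.all)
            .scaledToFill()
    }
}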
Workaround
I think the above should work, and I don't see why it doesn't. What does work on wide phones is to eliminate the .scaledToFill modifier (see the sketch below). Then you get this. But it only works because the test image already has the exact aspect ratio of the display, so it is not a very general solution.
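For clarity, the .scaledToFill-free variant described above is just (a minimal sketch; the view name is only illustrative):
import SwiftUI

struct NoFillContentView: View {
    var body: some View {
        // Without .scaledToFill() the resizable image stretches to whatever
        // size is proposed; that is only correct here because the test image
        // is already 13:6, the same aspect ratio as the display.
        Image("testImage")
            .resizable()
            .edgesIgnoringSafeArea(.all)
    }
}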
Scale to Fill
In the restricted domain of landscape images and displays, I expect the operation of scale-to-fill on the 13:6 test image to be equivalent to (to have the semantics of) the following steps; a small worked sketch follows the list.
1. Center the test image in the destination (container) rect, sized to fit entirely in the container. (I have been expecting that ignoring safe areas means the "destination" is the full screen, but that may be where I err.)
2. Expand the test image, maintaining proportion and center, until one pair of sides coincides with those of the container. For narrower displays, the left and right edges meet first, and the top and bottom remain inside the destination rect. But don't stop there; that would be scale to fit, or letterboxing.
3. Keep expanding until the top and bottom coincide with those of the container. For narrower displays this means content is cropped on both sides; for 13:6 displays all four image edges coincide with the display edges at the same time.
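As a sanity check of those steps, here is the arithmetic I have in mind as a small sketch (the helper function is hypothetical, not anything SwiftUI exposes; 812x375 points and a 13:6 image are just the numbers from this question):
import CoreGraphics

// Scale-to-fill as I understand it: scale `content` uniformly until it covers
// `container` completely, centered, cropping whichever axis overshoots.
func scaledToFillRect(content: CGSize, container: CGSize) -> CGRect {
    // Fill uses the *larger* of the two axis ratios; fit would use the smaller.
    let scale = max(container.width / content.width,
                    container.height / content.height)
    let size = CGSize(width: content.width * scale,
                      height: content.height * scale)
    // Centered in the container, so any overshoot is cropped equally on both sides.
    return CGRect(x: (container.width - size.width) / 2,
                  y: (container.height - size.height) / 2,
                  width: size.width,
                  height: size.height)
}

// A 13:6 test image in an 812x375 container: the two aspect ratios are nearly
// identical, so the result is essentially the full screen, with no overscan.
let fullScreen = scaledToFillRect(content: CGSize(width: 1300, height: 600),
                                  container: CGSize(width: 812, height: 375))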

I do not know why .edgesIgnoringSafeArea() does not work as it should, but here is a workaround that should help you.
GeometryReader { geo in
    Image("testImage")
        .resizable()
        .scaledToFill()
        .frame(width: geo.size.width, height: geo.size.height)
}
.edgesIgnoringSafeArea(.all)
Update:
Here is another way to do the same thing without GeometryReader:
Image("testImage")
.resizable()
.scaledToFill()
.frame(minWidth: 0, maxWidth: .infinity, minHeight: 0, maxHeight: .infinity)
.edgesIgnoringSafeArea(.all)

Related

SVGPanZoom discards original viewBox

I am using SVGPanZoom to manage the zooming of an SVG image in my hybrid Android (for all intents and purposes the same behavior as in Chrome) app. While zooming works well I have found a strange issue. My original inline SVG element goes like this
<svg id='puzzle' viewBox='0 0 1600 770' preserveAspectRatio='none'
  width='100vw' height='85.5vh' fill-rule='evenodd' clip-rule='evenodd'
  stroke-linejoin='round' stroke-miterlimit='1.414'
  xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'>
Initially this SVG element is empty and gets populated programmatically from JavaScript at run time after which I initiate SVGPanZoom as follows
var panZoom = svgPanZoom('#puzzle',
{panEnabled:false,controlIconsEnabled:false,
zoomEnabled:true,dblClickZoomEnabled:true,onZoom:postZoom});
panZoom.refreshRate = 10;
panZoom.zoomScaleSensitivity = 0.02;
The problem I have run into is this: I want my SVG image to fill the available area, 100vw x 85.5vh, completely, which I instruct it to do via the preserveAspectRatio="none" attribute above along with the viewBox="0 0 1600 770" attribute. I have found that this works, so long as I don't use SVGPanZoom. As soon as I initiate panZoom, the viewBox attribute gets stripped out and I end up with an image that does not quite behave in terms of its default stretching/filling behavior.
SVGPanZoom is widely used, so I assume this behavior is down to me not quite setting it up properly. Dipping into the code, I have found that SVGPanZoom creates a cacheViewBox and then proceeds to remove the original viewBox attribute.
Which is fine if, after that, zooming works and the original behavior of the application does not change; but that is not what I find. What am I doing wrong here?
I've also run into this issue recently. From my research, this is just how the library works. I chose to live with this limitation for now but I found a couple other libraries that may work the way you intend (I haven't tried them yet):
jquery.panzoom is a jquery library that provides this functionality and also has some nice features. I know many people try to avoid jquery but it's pretty small and may do what you want. It handles SVG but I don't know what it does with the viewBox attribute.
react-svg-pan-zoom is a react component which may be useful if you are working in react.
I've also tried the PanZoom library but this also suffers the same viewBox limitation.
A note for anyone running into this thread. In the end I abandoned SVGPanZoom and decided to eschew the route of using any pan/zoom library at all. At the same time I decided to completely stop using the SVG viewBox and handle all zooming/panning entirely on my own through SVG transforms. The core steps involved
Wrap the entire SVG contents in a group to make it easier to manage the transform. I use the id attribute gOuter for this group
Set an initial scale for the SVG to occupy the desired client rectangle. In my case I had an original viewBox of 0 0 1600 770 intended to occupy 100% of the screen width and 85% of the screen height. So my scaling was scaleX = 1600/window.innerWidth and scaleY = 770/(0.85*window.innerHeight).
Apply this initial transform to the wrapping outer group: gOuter.setAttribute('transform', `scale(${scaleX},${scaleY})`).
Now, in order to zoom to an object whose virtual top-left coordinates in the original viewBox were Ox,Oy, you would use the transform
gOuter.setAttribute('transform',
  `scale(${scaleX},${scaleY}) translate(${-Ox},${-Oy}) scale(${2*scaleX},${2*scaleY}) translate(${Ox},${Oy})`);
to zoom in by a factor of 2. The important things to understand here:
In SVG, transformations are applied right to left.
Here we are translating the zoom point to the top left, scaling, and then translating it back to its original location.
The catch is that we also need to allow for the original level of zoom through the initial scaling, so we tag that on as one last transform (the leftmost scale above).
This leaves you in complete control of the zooming process, and as a fringe benefit the operation becomes considerably smoother than when using a pan/zoom library.

Pixi.js Container.width does not return width

I'm having a bit of a problem: I'm trying to access the width of a container to which I've added a sprite, but it seems to return as 1. However, when I inspect the object in the console, it gives me the proper width.
I wrote up a code pen showing the issue, but it goes something like this:
var container = new PIXI.Container();
app.stage.addChild(container);
var sprite = PIXI.Sprite.fromImage('https://i2.wp.com/techshard.com/wp-content/uploads/2017/05/pay-1036469_1920.jpg?ssl=1&w=200');
container.addChild(sprite);
console.log(container.height);
console.log(container);
The first console log returns 1, while if I go into the object in the second log it gives me 141.
I'm trying to center the container like in the demo. The demo container returns the proper width, unless you try to do it for only one "bunny" (replacing the bunny texture with the internet image, with the for loop commented out).
Any suggestions on a proper approach for this?
Cheers
There are a few things to address here.
Firstly, what the problem in your codepen is:
You're creating a texture from an image that has yet to be loaded.
Until the image loads, Pixi cannot give you its dimensions, so the container reports a width and height of 1 when you query them immediately. If you put the same console.log statement in a timeout, it runs after the image has loaded and the dimensions it reports will be accurate.
Logging the object itself seems to work because, by the time you expand its contents in the console, the image has loaded and the values have been updated.
If the texture is already in the cache at the point that you create a new sprite using it then you won't have to wait before you can access its true dimensions.
Secondly, why the bunny example on pixi's site doesn't have the same problem:
Actually, it does. You just don't notice it.
The magic is in the bunny.anchor.set(0.5);. The example lays out 25 sprites, each with a width and height of 1, in a grid. Because they are spaced apart, their container ends up with a width and height of 160.
The container is centered immediately based on those dimensions; then, when the sprite textures finish loading, the sprites are updated with their real dimensions. Because their anchor is set to 0.5, they remain centered even though their container is now larger.
You can play around with using a larger image than the bunny to exaggerate things and changing the anchor value, along with using the rerun code button rather than just refreshing the page. If you rerun the code the image being used for the texture remains cached by pixi so you get different results.
Lastly, how you would resolve this issue:
Load your assets before creating sprites with them (or at least wait until they have loaded before querying their dimensions to position things).
You can find an example of the resource loader that pixi has here: http://pixijs.io/examples/?v=next-interaction#/basics/spritesheet.js

How to set the rotation center of an MKAnnotationView

I've got a problem with my MKAnnotationViews when MKUserTrackingModeFollowWithHeading is enabled on the MKMapView.
I positioned my images using the centerOffset property of the MKAnnotationView. Specifying the coordinates of the pin's tip relative to the coordinate system at the center of the image is somewhat counter-intuitive, but I came up with the following formula:
annotationView.centerOffset = CGPointMake(imageWidth/2.0 - tipXCoordinate, imageHeight/2.0 - tipYCoordinate);
This works fine for zooming the map in and out. The tips of the pins keep their relative position on the map.
However, when I enable MKUserTrackingModeFollowWithHeading, it won't work anymore. The pins rotate around the center of the image instead of the tip, so when the map rotates, the tips do not point to the locations they are supposed to annotate.
I've played around a bit with the frame and center properties of the MKAnnotationView, but as far as I can tell they have no effect on the alignment of the pins whatsoever.
Interestingly, the MKPinAnnotationView does not seem to use centerOffset at all, but a shifted frame instead. However, I was unable to reproduce this. Changing the frame of my custom view did not move it at all.
Thanks for any insights you can provide :-)
Solution:
Don't use centerOffset! Use annotationView.layer.anchorPoint instead. The coordinate system of the anchor point is much nicer, too: coordinates range from 0.0 (top/left) to 1.0 (bottom/right) of the image rectangle:
annotationView.layer.anchorPoint = CGPointMake(tipXCoordinate/imageWidth, tipYCoordinate/imageHeight);
A friend asks me to let you know that you should "try this for instance":
self.layer.anchorPoint = CGPointMake (0.5f, 1.0f);
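If you are working in Swift rather than Objective-C, the same idea reads roughly like this (a minimal sketch; the function name and parameters are only illustrative):
import MapKit

// Rotate the annotation about the pin's tip rather than the image centre by
// moving the layer's anchor point; anchorPoint coordinates are fractions of
// the image rectangle, from (0, 0) at the top-left to (1, 1) at the bottom-right.
func setRotationCenter(of annotationView: MKAnnotationView,
                       tip: CGPoint, imageSize: CGSize) {
    annotationView.layer.anchorPoint = CGPoint(x: tip.x / imageSize.width,
                                               y: tip.y / imageSize.height)
}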

Coordinate system and sprite transformation

I'm using AndEngine to create a physics simulation via Box2D.
The bodies are created through PhysicsFactory using Sprites.
My idea is to procedurally position these sprites, following this pattern:
basically one central sprites which represent my world coordinates center, and a series of cloned sprites that are created by rotating the base sprite around myWorld center (the "X" inside the circle).
I've tried to use the OpenGL way inside AndEngine (translate, rotate, back-translate):
super(stamiRadious, 0, image); // stamiRadious is the distance from the radix (world center) to the "petal" attach point
this.setRotationCenter(0, 0);
this.setRotation((float) Math.toDegrees(angleRad));
this.setPosition(this.getX()+radixX, this.getY()+radixY);
but I failed: the results are not right (wrong final position, and wrong Box2D body properties, as if the sprite were much larger than the image).
I believe part of the problem lies in my interpretation of setRotation and setRotationCenter, and in general in my understanding of the AndEngine coordinate system plus the Box2D coordinate system.
Any thoughts/links to doc/explanation?
Once you have created a physics representation (Body) of a Sprite, you should be very careful about how you modify the Sprite! Usually you don't modify the Sprite at all anymore, but instead modify the Body, by calling
someBody.setTransform(); // Note that positions must be divided by PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT!
Hope that helped :)

How to blend color of two sprites with constant alpha in DirectX?

Essentially, what I want to do (in DirectX) is to take two partially-transparent images and blend them together. This works fine with default blending, insofar as they both show up as overlapping, etc. However, the problem is that the opacity goes up markedly where the two intersect. This causes increasing problems as more sprites overlap. What I'd like to do is keep the blending the same, except keep a global opacity for all these sprites being blended, regardless of how they overlap.
Seems like there would be a render setting for this (all of these sprites are alone in their sprite batch, which keeps that part easy), but if so I don't know it. Right now I'm kind of shooting in the dark, and I've tried a lot of different things and none of them have looked right at all. I know I probably need some sort of variant of D3DBLENDOP, but I just don't know what sort of settings there I really need (I have tried many things, but it is all guessing at this stage).
Here is a screenshot of what is actually happening with standard blending (the best I can get it): http://arcengames.com/share/FFActual.png Here is a screenshot with a mockup of how I would want the blending to turn out (the forcefields were added to the same layer in Photoshop, then given a shared alpha value): http://arcengames.com/share/FFMockup.png
This is how I did it in Photoshop:
1. Take the two images, and remove all transparency (completely transparent pixels excepted).
2. Combine them into one layer, which blends the color but which has no partial alpha at all.
3. Now set the global transparency for that layer to (say) 40%.
The result is something that looks kind of blended together color-wise, but which has no increase in opaqueness on the overlapped sections.
UPDATE: Okay, thanks very much to Goz below, who suggested using the Z-Buffer. That works! The blending, by and large, is perfect and just what I would want. The only remaining problem? Using that new method, there is a huge artifact around the edge of the force field image that is rendered last. See this: http://www.arcengames.com/share/FFZBuffer.png
UPDATE: Below is the final solution in C# (SlimDX)
Clearing the ZBuffer to black, transparent, or white once per frame all has the same effect (this is right before BeginScene is called)
Direct3DWrapper.ClearDevice( SlimDX.Direct3D9.ClearFlags.ZBuffer, Color.Transparent, 0 );
All other sprites are drawn at Z=1, with the ZBuffer disabled for them:
device.SetRenderState( RenderState.ZEnable, ZBufferType.DontUseZBuffer );
The force field sprites are drawn at Z=2, with the ZBuffer enabled and ZWrite enabled and ZFunc as Less:
device.SetRenderState( RenderState.ZEnable, ZBufferType.UseZBuffer );
device.SetRenderState( RenderState.ZWriteEnable, true );
device.SetRenderState( RenderState.ZFunc, Compare.Less );
The following flags are also set at this time, to prevent the black border artifact I encountered:
device.SetRenderState( RenderState.AlphaTestEnable, true );
device.SetRenderState( RenderState.AlphaFunc, Compare.GreaterEqual );
device.SetRenderState( RenderState.AlphaRef, 55 );
Note that AlphaRef is at 55 because of the alpha levels set in the specific source image I was using. If my source image had a higher alpha value, then the AlphaRef would also need to be higher.
Best I can tell, the forcefields are a whole object. Why not render them last, in front-to-back order, with Z-buffering enabled? That will give you the effect you are after.
I.e. it's not your blending settings that are the problem at all.
Edit: Can you use render-to-texture then? If so you could easily do what you did in Photoshop: render them all together into the texture and then blend the texture back over the screen.
Edit2: How about
ALPHATESTENABLE = TRUE;
ALPHAFUNC = LESS
ALPHABLENDENABLE = TRUE;
SRCBLEND = SRCALPHA;
DESTBLEND = INVSRCALPHA;
SEPARATEALPHABLENDENABLE = TRUE;
SRCBLENDALPHA = ONE;
DESTBLENDALPHA = ZERO;
You need to make sure the alpha is cleared to 0xff in the frame buffer each frame. You then do the standard alpha blend while passing the alpha value straight through to the backbuffer. This is, though, where the alpha test comes in. You test the final alpha value against the one in the back buffer. If it is less than what's in the backbuffer then that pixel has not been blended yet and will be put into the frame buffer. If it is equal (or greater) then it HAS been blended already and the alpha value will be discarded.
That said ... using a Z-Buffer would cost you a load of RAM but would be faster overall, as it would be able to throw away the pixels far earlier in the pipeline. Seeing as all the shields would just need to be written to a given Z-plane, you wouldn't even need to go through the hell I suggested earlier. If the Z value it receives is less than what's there already then it will render the pixel; if it is greater or equal then it will discard it, fortunately before the blend calculation is ever performed.
That said ... you could also do it by using the stencil buffer which would require a Z-buffer anyway.
Anyway ... hope one of those methods is of some help.
Edit3: Do you render the forcefield with some form of feathering around the edge? Most likely that edge is caused by the fact that the alpha fades off slightly, and then the "slightly alpha" pixels are getting written to the z-buffer, so any subsequent draw doesn't overwrite them.
Try the following settings
ALPHATESTENABLE = TRUE
ALPHAFUNC = GREATEREQUAL // if this doesn't work try LESS .. I may have it backwards
ALPHAREF = 255
To fine-tune the feathering around the edge, adjust the ALPHAREF, but I'd suspect you need to keep it as above.
You can specify the D3DBLENDOP used when blending the two images together for the alpha channel. It sounds like you're using D3DBLENDOP_ADD currently - try switching this to D3DBLENDOP_MAX, as that will just use the opacity of the "most opaque" image.
It is hard to tell exactly what you are trying to accomplish from your mock up since both forcefields are the same color; do you want to blend the colors and cap the alpha? Just take one of the colors?
Based on the above discussion it isn't clear whether you are setting all the relevant render states:
D3DRS_ALPHABLENDENABLE = TRUE (default: FALSE)
D3DRS_BLENDOP = D3DBLENDOP_MAX (default: D3DBLENDOP_ADD)
D3DRS_SRCBLEND = D3DBLEND_ONE (default: D3DBLEND_ONE)
D3DRS_DESTBLEND = D3DBLEND_ONE (default: D3DBLEND_ZERO)
It sounds like you are setting the first two, but what about the last two?

Resources