I manage the map with image views placed side by side to form a grid. How could I implement map zoom-in and zoom-out so that all frames and elements are loaded without getting distorted?
Without knowing any details about your rendering method, I would suggest using a Camera. The OrthographicCamera class provides a zoom attribute, whereas the PerspectiveCamera can "zoom" by altering its position.z.
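To illustrate the idea in a framework-neutral way, here is a minimal Swift sketch (OrthoCamera and visibleRect are made-up names for this example, not a real camera API): zoom scales the world-space region the camera shows, and the renderer culls the tile grid against that region, so the tiles themselves are never stretched.

import simd

// Minimal sketch, not a real framework API: a 2D orthographic "camera"
// whose zoom scales the visible region instead of scaling the tiles.
struct OrthoCamera {
    var position = SIMD2<Float>(0, 0)         // world-space center of the view
    var viewportSize = SIMD2<Float>(800, 480) // screen size in points
    var zoom: Float = 1.0                     // 1 = 100%, 2 = zoomed out 2x

    // World-space rectangle currently on screen. The renderer draws only
    // the grid tiles intersecting this rect, at their native resolution.
    var visibleRect: (origin: SIMD2<Float>, size: SIMD2<Float>) {
        let size = viewportSize * zoom
        return (origin: position - size / 2, size: size)
    }
}

// Usage: pinch-to-zoom just mutates zoom; the tile images never change.
var camera = OrthoCamera()
camera.zoom = 0.5 // half the world area now fills the same viewport: zoomed in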
I've read the documentation of this attribute:
An additional flag to be used with 'snap'. If set, the view will be snapped to its top and bottom margins, as opposed to the edges of the view itself.
https://developer.android.com/reference/com/google/android/material/appbar/AppBarLayout.LayoutParams.html#scroll_flag_snap
But I can't observe any actual effect in my app. What margins are they talking about? Any margin I set on the CollapsingToolbarLayout (on which this attribute is set) completely breaks the layout.
Not sure if it's useful yet, but here is the difference:
If you pass snap as the scroll flag, it snaps your view (in my case a search bar, a LinearLayout) to its edge, NOT including the layout_margin.
Then I tried snap|snapMargins and it snapped including the margin.
P.S. snapMargins has no effect without snap; per the docs quoted above, it is "an additional flag to be used with 'snap'" ¯\_(ツ)_/¯
This attribute controls the scrolling behavior of the AppBarLayout and its children. You apply it to the views inside the AppBarLayout, in the XML layout of your AppCompatActivity (the activity has to be an instance of AppCompatActivity if you want to use AppBar features). Also, the Design library must be included in your Gradle dependencies, like so: implementation 'com.android.support:design:26.1.0'
Please refer to this link: https://medium.com/@tonia.tkachuk/appbarlayout-scroll-behavior-with-layout-scrollflags-2eec41b4366b
I've implemented shadow mapping to generate shadows on a terrain.
I render the scene (or the objects that cast shadows) from the light's perspective and then generate a depth map to be sampled during the second render pass (as all the tutorials on the web explain).
It seemed to work fine but then I noticed that objects on a hill cast more than one shadow:
I think that's the expected behaviour, since I don't render the terrain during depth-map generation (the small quad at the top right shows the depth buffer from the first render pass), so additional terrain fragments appear to be behind the object from the light's point of view.
No tutorial on shadow mapping seems to mention this issue though.
Am I missing something, or is this shadow-generation technique so basic that issues like this one are simply likely to occur?
EDIT
Here's my rendering code:
// in my Render() method:
mShadowRenderer.SetLight(*mLights[0]);
mShadowRenderer.Render();
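// (Per the answer below, the terrain itself also needs to be drawn into
// the depth map during this shadow pass; with a hypothetical method name
// on the same class, that could look like:)
// mShadowRenderer.RenderTerrain(mTerrains[0]);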
RenderSkyBox();
RenderEntities();
RenderGeometryTerrain(mTerrains[0],
                      mShadowRenderer.GetLightViewProjectionMatrix());
mShadowRenderer is a class that is in charge of rendering to an off-screen buffer and returning a depth map as a shader resource view. The depth map is bound to the terrain when I call the render method, so it can be sampled in the terrain pixel shader to generate shadows.
Also, the terrain needs to be rendered to the shadow map/depth map. Otherwise the depth map only contains the object, so every terrain fragment that lies behind the object from the light's point of view fails the depth comparison and falls into shadow, which is exactly the repeated shadow you're seeing.
I want to have a custom MKOverlay that's a circle anchored to the user location annotation that the user can resize by pinching. I was able to successfully achieve this using MKOverlayPathRenderer and a custom MKOverlay object by overriding the createPath method and making an arc. The resizing and moving of the overlay was handled by using KVO on the radius and coordinate properties of my overlay. However the resizing was incredibly choppy and the boundingMapRect wasn't correctly calculated.
I've also tried using an image and, instead of subclassing MKOverlayPathRenderer, subclassing MKOverlayRenderer directly, overriding - (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context, but when I resize, my CPU usage jumps to 160% (not great, yeah?) and the bounding rect is again drawn incorrectly.
I really think the way to do it is with MKOverlayPathRenderer and maybe having an atomic counter of some kind so that a redraw only gets called say every 5 or 10 times the pinch gesture is triggered.
Does anyone have any suggestions? I've also considered (but haven't tried) making a UIView, adding it as a subview of the map view, and putting the pinch gesture on that, but that seems hacky and dirty.
When you compute a new boundingMapRect on the overlay, you must invoke invalidatePath on your renderer. After that, the system will invoke createPath for you when appropriate.
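For concreteness, here is a minimal Swift sketch of that flow, assuming a custom circle overlay (ResizableCircleOverlay and CircleRenderer are illustrative names, not from the question's code): the pinch handler updates the radius, the overlay recomputes its boundingMapRect, and invalidatePath() discards the cached path so createPath() runs again on the next draw.

import MapKit

// Sketch only: a circle overlay whose radius can change after creation.
final class ResizableCircleOverlay: NSObject, MKOverlay {
    let coordinate: CLLocationCoordinate2D
    private(set) var boundingMapRect: MKMapRect
    var radius: CLLocationDistance {
        didSet { // keep the bounding rect in sync with the new radius
            boundingMapRect = Self.mapRect(center: coordinate, radius: radius)
        }
    }

    init(center: CLLocationCoordinate2D, radius: CLLocationDistance) {
        self.coordinate = center
        self.radius = radius
        self.boundingMapRect = Self.mapRect(center: center, radius: radius)
    }

    private static func mapRect(center: CLLocationCoordinate2D,
                                radius: CLLocationDistance) -> MKMapRect {
        let c = MKMapPoint(center)
        let r = radius * MKMapPointsPerMeterAtLatitude(center.latitude)
        return MKMapRect(x: c.x - r, y: c.y - r, width: 2 * r, height: 2 * r)
    }
}

final class CircleRenderer: MKOverlayPathRenderer {
    override func createPath() {
        // Rebuild the arc from the overlay's current bounding rect,
        // converted into this renderer's drawing coordinates.
        path = CGPath(ellipseIn: rect(for: overlay.boundingMapRect), transform: nil)
    }
}

// In the pinch handler, after updating the overlay's radius:
//     renderer.invalidatePath() // discards the path; createPath() runs on the next draw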
I have been trying to find an example or some hints on how to create an app where I can drag, resize, and rotate images on a UIView, then save the individual pieces (including their size, rotation and placement) and the entire UIView into Core Data. Kind of like the OmniGraffle app.
Any tutorials, examples or anything on any piece of the app would be greatly appreciated.
For dragging a view: http://www.cocoacontrols.com/platforms/ios/controls/tkdragview
For rotating a view: http://www.cocoacontrols.com/platforms/ios/controls/ktonefingerrotationgesturerecognizer
For resizing a view: http://www.cocoacontrols.com/platforms/ios/controls/spuserresizableview
As for Core Data, it's actually pretty straightforward: gather the classes in one view, work out which properties you need to save, plus the new ones you will need for your app, and that's it.
Like:
A Canvas object with a to-many relationship to morphlingViews, which contain all the properties such as center, color, width, height, angle, UIPath (if you plan to create custom shapes), and layer position (so each one gets drawn correctly). If you plan to connect the views the way OmniGraffle does, add a to-many relationship to self (in morphlingViews) so you can take the centers of different morphlingViews and draw a simple line between them. (And add a string if you plan to add a drawInRect method so users can write in the objects; then it's a good idea to add the text properties as well.)
You can also add Quartz drawing-style properties to the object, such as shadow, shadowColor and shadowOffset, or add a patternColor to get a resizable background.
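If you'd rather not depend on those third-party controls, plain UIKit gesture recognizers cover all three interactions. Here is a minimal Swift sketch (MorphlingView is an illustrative name echoing the entity above, not an existing class):

import UIKit

// Sketch only: one subview that can be dragged, rotated and pinch-resized.
final class MorphlingView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        isUserInteractionEnabled = true
        addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(pan(_:))))
        addGestureRecognizer(UIRotationGestureRecognizer(target: self, action: #selector(rotate(_:))))
        addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: #selector(pinch(_:))))
    }
    required init?(coder: NSCoder) { fatalError("not used in this sketch") }

    @objc private func pan(_ g: UIPanGestureRecognizer) {
        let t = g.translation(in: superview)
        center = CGPoint(x: center.x + t.x, y: center.y + t.y) // drag
        g.setTranslation(.zero, in: superview)
    }
    @objc private func rotate(_ g: UIRotationGestureRecognizer) {
        transform = transform.rotated(by: g.rotation) // rotate
        g.rotation = 0
    }
    @objc private func pinch(_ g: UIPinchGestureRecognizer) {
        transform = transform.scaledBy(x: g.scale, y: g.scale) // resize
        g.scale = 1
    }
}

// To persist, read center, bounds.size and the angle (atan2(transform.b,
// transform.a)) back out and store them on the morphlingViews entity.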
I am attempting to rotate an MKMapView using MapKit.
I can display a map and rotate it, however not very efficiently. I create an MKMapView larger than the view and rotate it using CGAffineTransformMakeRotation so that the grey areas behind the view are not visible. Although I have "clip subviews" checked in Interface Builder, I still have the feeling this is not the correct implementation.
This method does allow me to rotate any annotations displayed, since MKPinAnnotationView responds to the CGAffineTransformMakeRotation function, but I run into problems when trying to add an overlay to the map.
I can place an overlay using the boundingMapRect property in the class declaration, but the image remains unrotated on the display. Is there a way to achieve this? Or, alternatively, should I be rotating the MKMapView and annotations with a different method?
Thanks in advance for any advice or information.
Are you rotating the map so that it is facing the same way the user is (i.e. not just North = Up)? If so, you don't need to do any transformation at all; just set the MKUserTrackingMode to MKUserTrackingModeFollowWithHeading.
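A minimal Swift sketch of that (assuming location permission has already been granted):

import MapKit

// Let MapKit rotate the map to the device heading instead of applying a
// CGAffineTransform to an oversized MKMapView; annotations and overlays
// are rotated by the framework automatically.
func followHeading(on mapView: MKMapView) {
    mapView.showsUserLocation = true // heading mode requires the user-location dot
    mapView.setUserTrackingMode(.followWithHeading, animated: true)
}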