I manage the map through image views placed side by side to form a grid. How could I implement map zoom-in and zoom-out so that all frames and elements are loaded and scaled without getting distorted?
Without knowing any details about your rendering method, I would suggest using a Camera. The OrthographicCamera class provides a zoom attribute, whereas the PerspectiveCamera can "zoom" by altering its position.z.
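For illustration, here is a minimal sketch in libGDX (an assumption on my part, since OrthographicCamera and PerspectiveCamera suggest that framework; the viewport size is a placeholder):

import com.badlogic.gdx.graphics.OrthographicCamera;

// libGDX is assumed here; adjust the viewport size to your own grid.
OrthographicCamera camera = new OrthographicCamera(800, 480);
camera.zoom = 2f;   // > 1 zooms out (shows more of the map), < 1 zooms in
camera.update();    // recompute the projection/view matrix after any change

// A PerspectiveCamera has no zoom field; move it along z instead:
// perspectiveCamera.position.z += 10f;
// perspectiveCamera.update();

Because the camera scales the whole scene uniformly, the image views keep their aspect ratio and are not distorted.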
As I understand it, if I create a vtk render window, I can add different renderers to it, and each renderer renders the scene from a different perspective. Now, to actually render the scene, I use the vtk render window's Render() method to render all renderers in parallel. There is also a vtk render window method called GetZbufferData, which apparently returns an array containing the z-buffer. So my question is: to which renderer does this z-buffer correspond?
Thanks for any clarification.
If you have all renderers in the same window, they share the same framebuffer, and therefore also the same z-buffer. So the simple answer to your question is "to all of them". How to get the individual z-values depends on what exactly you are doing with the renderers.
If you are doing some kind of "tiled view", you want to assign a different viewport (vtkRenderer::SetViewport()) to each of the renderers. Then you can access the z data for a given "tile" (renderer) by passing the appropriate x,y coordinates to the GetZbufferData function. For example, to get the whole part of the z-buffer that belongs to renderer ren1 of vtkRenderWindow renWin:
// Convert ren1's normalized viewport [xmin, ymin, xmax, ymax]
// into pixel coordinates within the window:
double x1 = ren1->GetViewport()[0] * (renWin->GetSize()[0] - 1);
double y1 = ren1->GetViewport()[1] * (renWin->GetSize()[1] - 1);
double x2 = ren1->GetViewport()[2] * (renWin->GetSize()[0] - 1);
double y2 = ren1->GetViewport()[3] * (renWin->GetSize()[1] - 1);

// Read back that rectangle of the z-buffer; the returned array is
// allocated for the caller, so delete[] it when you are done.
float *ren1Z = renWin->GetZbufferData(
    static_cast<int>(x1), static_cast<int>(y1),
    static_cast<int>(x2), static_cast<int>(y2));
If the renderers share the same viewport, it is more complicated. You can have a render window with multiple "layers" by setting vtkRenderWindow::SetNumberOfLayers(int) and then assigning each renderer to a different layer (vtkRenderer::SetLayer(), with a 0-based layer index). The window then renders the layers over each other, from layer 0 to the last. If you are interested in getting only one specific renderer's z-data, you should get it if that renderer draws in the last layer; see the sketch below. However, I am not sure whether the z-buffer is cleared between the individual renderers' renders; I would actually bet that it is not, so you might also get some inconsistent mess.
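As a minimal sketch of the layered setup (reusing the renWin/ren1 names from above, plus a hypothetical second renderer ren2):

renWin->SetNumberOfLayers(2);
ren1->SetLayer(0);          // drawn first (background layer)
ren2->SetLayer(1);          // drawn last, on top of layer 0
renWin->AddRenderer(ren1);
renWin->AddRenderer(ren2);
renWin->Render();

// Read the z-buffer right after this render if you want ren2's values,
// bearing in mind the caveat above about the buffer possibly not being
// cleared between layers.
float *z = renWin->GetZbufferData(0, 0,
    renWin->GetSize()[0] - 1, renWin->GetSize()[1] - 1);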
I would like to complement tomj's answer:
Any of the vtkRenderWindow::GetZbufferData() methods queries the framebuffer, which is contained in the vtkRenderWindow, for its Z-values, but there is one caveat:
You need to set this on your renderers: vtkRenderer::PreserveDepthBufferOn(). This is because, as the documentation says:
"By default, the depth buffer is reset for each renderer.
If this flag is true, this renderer will use the existing depth buffer for its rendering."
So, that brings us to the vtkRenderers. There is a layering of vtkRenderers, which determines the order ("precedence") in which they draw. Check the method vtkRenderer::SetLayer().
So, you first need to set up your layered vtkRenderers, attach them to the vtkRenderWindow, and then correctly set up which depth buffers you want preserved and which not.
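Continuing the hypothetical ren1/ren2 layering sketch from the previous answer, the only addition is:

ren2->PreserveDepthBufferOn(); // ren2 reuses the depth values ren1 left behind,
                               // instead of resetting the buffer for its layer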
Notice that if the z-buffer has not been written to yet (e.g. on the first draw of the first vtkRenderer), GetZbufferData() returns 1.0; that is simply the value the depth buffer is cleared to, corresponding to the far clipping plane.
I'm new to the Google Cardboard SDK. I need to draw a slightly different image for the left eye compared to the right (I know distortion correction is taken care of). I looked at the Eye class spec in the docs (an instance of which is passed to onDrawEye()), but it does not seem to say which eye is being referred to. How do I tell whether the image is being rendered for the right or left eye, and code accordingly?
In the class where you implement CardboardView.StereoRenderer, you have to define the method onDrawEye(). In this method you receive a parameter of type Eye; from that parameter you can tell which eye is being rendered at the moment by calling getType() and comparing the result against the constants defined in Eye.Type.
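A minimal sketch (the drawing calls are placeholders, and I am assuming the Android vrtoolkit package for the import):

import com.google.vrtoolkit.cardboard.Eye;

@Override
public void onDrawEye(Eye eye) {
    if (eye.getType() == Eye.Type.LEFT) {
        // render the left-eye image
    } else if (eye.getType() == Eye.Type.RIGHT) {
        // render the right-eye image
    } else {
        // Eye.Type.MONOCULAR: VR mode is off, a single full-screen view
    }
}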
I want to have a custom MKOverlay that's a circle anchored to the user location annotation, which the user can resize by pinching. I was able to achieve this using MKOverlayPathRenderer and a custom MKOverlay object by overriding the createPath method and making an arc. The resizing and moving of the overlay were handled by using KVO on the radius and coordinate properties of my overlay. However, the resizing was incredibly choppy, and the boundingMapRect wasn't correctly calculated.
I've also tried using an image and, instead of subclassing MKOverlayPathRenderer, subclassing MKOverlayRenderer directly, overriding - (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context, but when I resize, my CPU usage jumps to 160% (not great, yeah?) and the bounding rect is again drawn incorrectly.
I really think the way to do it is with MKOverlayPathRenderer, maybe with an atomic counter of some kind so that a redraw only gets triggered every 5 or 10 times the pinch gesture fires.
Does anyone have any suggestions? I've also considered, but haven't tried, making a UIView and adding it as a subview of the map view with the pinch gesture on it, but that seems hacky and dirty.
When you have computed a new boundingMapRect on the overlay, you must invoke invalidatePath on your renderer. After that, the system will invoke createPath for you when appropriate.
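For example, a hedged Objective-C sketch (circleOverlay and its radius stand in for the custom overlay from the question, and the gesture wiring is assumed):

- (void)handlePinch:(UIPinchGestureRecognizer *)pinch {
    // Update the model first: grow or shrink the circle's radius.
    self.circleOverlay.radius *= pinch.scale;
    pinch.scale = 1.0; // reset so each callback applies an incremental scale

    // Then invalidate the cached path; MapKit will call createPath again.
    MKOverlayPathRenderer *renderer = (MKOverlayPathRenderer *)
        [self.mapView rendererForOverlay:self.circleOverlay];
    [renderer invalidatePath];
}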
I am attempting to rotate an MKMapView using MapKit.
I can display a map and rotate it, but not very efficiently. I create an MKMapView larger than the view and rotate it using CGAffineTransformMakeRotation, so that the grey areas behind the view are not visible. Although I have Clip Subviews checked in Interface Builder, I still have the feeling this is not the correct implementation.
This method does allow me to rotate any displayed annotations, since MKPinAnnotationView is a UIView and can be given the same CGAffineTransformMakeRotation transform, but I run into problems when trying to add an overlay to the map.
I can place an overlay using the boundingMapRect property in the class declaration, but the image remains unrotated on the display. Is there a way to achieve this? Or, alternatively, should I be rotating the MKMapView and annotations by a different method?
Thanks in advance for any advice or information.
Are you rotating the map so that it faces the same way the user is (i.e. not just north = up)? If so, you don't need to do any transform stuff at all; just set the map view's userTrackingMode to MKUserTrackingModeFollowWithHeading.
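For example, in Objective-C to match the question (assuming a mapView outlet):

[self.mapView setUserTrackingMode:MKUserTrackingModeFollowWithHeading animated:YES];

The map then rotates to follow the device heading, and annotation views stay screen-aligned without any manual transforms.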