iOS iBooks-like store animation - ios4

Anyone know how Apple does the transition to the store in the iBooks app? I know they are using a Modal View Controller with the UIModalTransitionStyleFlipHorizontal transition style set, but I don't see how they are showing the "depth" of the "bookcase" when they do the transition.

I'm supposing that the animation works something like this:

1. A 'snapshot' image is taken of the bookshelf and set as the contents of a CALayer.
2. The bookshelf view is removed from its superview.
3. The newly created CALayer (the bookshelf layer) with the image of the bookshelf is added to the layer of some existing (or newly created) UIView.
4. Another layer (the side layer) is created, and an image of the side of the bookshelf is used as its contents. This layer is added to the UIView and positioned at the right edge of the bookshelf layer.
5. The side layer's transform is set to rotate it M_PI_2 radians about the y-axis. This essentially makes it invisible, as it is perpendicular to the device screen.
6. The bookshelf and side layers have an animation added to them that transforms each with an M_PI_2 rotation about the y-axis. At the conclusion of this animation, the bookshelf will be perpendicular to the device screen and the side layer will be fully in view.
7. A new layer (the store layer) is created, and a snapshot image of the store is set as its contents; it is transformed and positioned so that it will appear correctly when . . .
8. The side layer and store layer have an animation added that transforms each with an M_PI_2 rotation about the y-axis.
9. All of the layers are removed and the store UIView is displayed.
I don't have any inside knowledge of the iBooks app, but if I were trying to duplicate this effect, this is how I'd go about it. FWIW, once you get the hang of positioning and transforming CALayers, it's kind of fun to mess around with animations like this.
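
For illustration, here's a minimal Objective-C sketch of the kind of layer work described in steps 4-6 above: it hinges a snapshot layer on its right edge and flips it M_PI_2 about the y-axis under a perspective transform. The names bookshelfImage and containerView are hypothetical stand-ins, not anything taken from the actual iBooks app.

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// A sketch only: `bookshelfImage` is assumed to be a pre-rendered snapshot
// of the shelf, and `containerView` the hosting view (both hypothetical).
- (void)flipBookshelfLayerWithImage:(UIImage *)bookshelfImage
                        inContainer:(UIView *)containerView
{
    // Give sublayers perspective so the rotation reads as 3D depth.
    CATransform3D perspective = CATransform3DIdentity;
    perspective.m34 = -1.0 / 500.0;
    containerView.layer.sublayerTransform = perspective;

    // The bookshelf layer, hinged on its right edge like a door.
    CALayer *bookshelfLayer = [CALayer layer];
    bookshelfLayer.contents = (id)bookshelfImage.CGImage;
    bookshelfLayer.bounds = containerView.bounds;
    bookshelfLayer.anchorPoint = CGPointMake(1.0, 0.5);
    bookshelfLayer.position = CGPointMake(CGRectGetMaxX(containerView.bounds),
                                          CGRectGetMidY(containerView.bounds));
    [containerView.layer addSublayer:bookshelfLayer];

    // Rotate M_PI_2 about the y-axis; at the end, the shelf sits edge-on to
    // the screen, which is where the side and store layers would take over.
    CABasicAnimation *flip =
        [CABasicAnimation animationWithKeyPath:@"transform.rotation.y"];
    flip.fromValue = [NSNumber numberWithDouble:0.0];
    flip.toValue = [NSNumber numberWithDouble:-M_PI_2];
    flip.duration = 0.4;
    flip.fillMode = kCAFillModeForwards;
    flip.removedOnCompletion = NO;
    [bookshelfLayer addAnimation:flip forKey:@"flip"];
}

The side and store layers would get the same treatment with their own anchor points and starting rotations.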

Related

React leaflet draw - deleting layer doesn't work if overlapped with another layer of higher z-index

I am using React leaflet draw and facing an issue when deleting a drawn layer that is overlapped by another layer with a higher z-index. Is it possible to control the z-index of react-leaflet-draw's FeatureGroup/EditControl so that the drawn layer renders on top of everything else?
I can delete the layers programmatically, as below, or by clicking "clear all layers", but I want to be able to click to delete individual layers as required.
// To delete programmatically
// (drawnLayers = array storing the layers received in the onCreated handler):
const mapObject = useMap();
drawnLayers.forEach((layer) => {
  mapObject.removeLayer(layer);
});

Steps to integrate fabricjs into existing application

There is a legacy application with three canvases: the first canvas displays objects stored in a container, the second is used when an object is selected, and the third when an object is moved or added for editing. I want to use the text and image features of fabricJS in my existing application. When I created the fabricJS canvas and provided the editing canvas object, all the mouse events bound to the editing canvas stopped working. Is there a way to use fabricJS in an existing application only for its text and image features?
You should be able to dynamically assign a z-index value to a canvas layer to expose it to, or hide it from, mouse events.
For the layers you want to hide from mouse events, set the z-index to -1. For the layer you want to expose to mouse events, set the z-index to 1.
var textLayer = document.getElementById("text-layer");
textLayer.style.zIndex = "1"; // expose to mouse events
var editingLayer = document.getElementById("editing-layer"); // hypothetical id of a layer to hide
editingLayer.style.zIndex = "-1"; // hide from mouse events

How do I set the center of rotation of a Qt3D scene to the center of the active .obj file in the scene?

I have a Qt3D scene in which I have only one 3D object. I would like to set the center of rotation of the scene (camera) to the center of this 3D object. Currently, the 3D model goes out of view when the scene is rotated with the mouse.
There is also an OrbitCameraController, whose purpose is to look at a certain position. You could let the camera position track your object's position.
QML example code:
Camera {
    id: myCamera
    viewCenter: YOUROBJECTPOSITION
}
OrbitCameraController { camera: myCamera }
// FirstPersonCameraController { camera: myCamera }
I'm not using PyQt like you do. Hope this helps.
I am using a Mesh with a custom 3D model as "source"; the Qt docs don't really indicate any property of the Mesh object that will return its 3D vector position.
If you import your obj file into a scene, the origin of your mesh is placed at the origin of the scene. If you didn't transform it, that origin is where you want the camera to look.
If you used a transform, then use that new position to look at.
Qt3D uses an ECS (Entity Component System). Basically, you make an entity and add components to it, like a mesh and a transform in your case. That's why the mesh doesn't have a property that reflects its position; the transform component holds that information.
I suggest you read the following in the Qt docs: Qt3D Architecture
The above solution only applies if you have smaller objects and your camera is far enough away. But if you import a larger mesh, e.g. a spaceship, you might need to get the coordinates of the spot you want to look at. You can get those coordinates by using an object picker.

Is it possible to draw a feature on a MapContent object (layer) in Geotools at runtime?

I am working on Geotools and trying to edit an existing layer at run time.
Basically, I have a layer which I have added to a MapContent object, which I then display in a JFrame. What I want to know is whether I can manually draw some features on what is shown in the JFrame to edit this layer (the features could be either points or polygons).
Yes, that is perfectly possible. You will need a Swing tool to handle the drawing, some way to add any needed attributes to the geometry to make a new feature (see SimpleFeatureBuilder), and then you can add that feature to the layer.

iPhone SDK: building an OmniGraffle-like app

I have been trying to find an example or some hints on how to create an app in which I could drag, resize, and rotate images on a UIView, and then save the individual pieces (including their size, rotation, and placement) and the entire UIView into Core Data. Kind of like the OmniGraffle app.
Any tutorials, examples or anything on any piece of the app would be greatly appreciated.
For dragging a view: http://www.cocoacontrols.com/platforms/ios/controls/tkdragview
For rotating a view: http://www.cocoacontrols.com/platforms/ios/controls/ktonefingerrotationgesturerecognizer
For resizing a view: http://www.cocoacontrols.com/platforms/ios/controls/spuserresizableview
As for Core Data, it's actually pretty straightforward: gather the classes in one view, identify the properties you need to save and the new ones your app will need, and that's it.
For example: a Canvas object containing a to-many relationship to morphlingViews, which hold all the properties such as center, color, width, height, angle, UIPath (if you plan to create custom shapes), and layer position (so each gets drawn correctly). If you plan to connect the views as OmniGraffle does, add a to-many relationship to self (in morphlingViews) so you can take the centers of different morphlingViews and draw a simple line between them. (And add a string if you plan to implement a drawInRect method that lets users write in the objects; then it would be a good idea to add the text properties as well.)
You can also add Quartz drawing-style properties to the object, such as shadow, shadowColor, and shadowOffset, or add patternColor to create a resizable background.
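
As a rough illustration of that model, saving one of those views might look something like this in Objective-C. The MorphlingView entity, its attribute names, and the canvas's to-many morphlingViews relationship are hypothetical, matching the description above rather than any real schema.

#import <UIKit/UIKit.h>
#import <CoreData/CoreData.h>

// A sketch, assuming a hypothetical MorphlingView entity (center, width,
// height, and angle attributes) and a Canvas entity with a to-many
// "morphlingViews" relationship.
- (void)saveView:(UIView *)view
       withAngle:(double)angle
        toCanvas:(NSManagedObject *)canvas
       inContext:(NSManagedObjectContext *)context
{
    NSManagedObject *shape =
        [NSEntityDescription insertNewObjectForEntityForName:@"MorphlingView"
                                      inManagedObjectContext:context];
    [shape setValue:[NSValue valueWithCGPoint:view.center] forKey:@"center"];
    [shape setValue:[NSNumber numberWithDouble:view.bounds.size.width]
             forKey:@"width"];
    [shape setValue:[NSNumber numberWithDouble:view.bounds.size.height]
             forKey:@"height"];
    [shape setValue:[NSNumber numberWithDouble:angle] forKey:@"angle"];

    // Attach the shape to its canvas via the to-many relationship.
    [[canvas mutableSetValueForKey:@"morphlingViews"] addObject:shape];

    NSError *error = nil;
    [context save:&error];
}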
