I'm using Open3D to visualise a point cloud and a mesh inside a SceneWidget.
The problem is that the mesh and the point cloud randomly disappear when I change the camera angle or distance.
Is there a workaround?
I want to have a reference point, or to know the coordinates of any point, on an image exported from Revit (from any view).
For example in the attached image exported from Revit, I'd like to know the bounding box of the picture or the middle point of the picture (in X,Y coordinates) or any other reference point.
Plan image
Is there a way to extract the bounding box coordinates of the picture?
I would suggest defining two diagonally opposite points in your image file that you can identify precisely in your Revit model. Determine their image pixel coordinates, export their Revit model coordinates, and use this information to determine the appropriate scaling and translation.
The RoomEditorApp Revit add-in and its corresponding roomedit CouchDb web interface demonstrate exporting an SVG image from Revit, scaling it for display in a web browser, and transforming exact coordinates back and forth between the two environments.
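The scaling-and-translation step from the two reference points can be sketched as follows. This is a minimal illustration, not code from the add-in; the coordinates are hypothetical, and it assumes the exported view is axis-aligned (no rotation). Note the sign flip on the y scale, since pixel y grows downward while model Y typically grows upward:

```python
# Fit a pixel -> model mapping from two diagonally opposite
# reference points identified in both the image and the model.

def fit_image_to_model(p1_px, p1_model, p2_px, p2_model):
    """Return a function mapping pixel (x, y) -> model (X, Y),
    assuming the exported view is axis-aligned (no rotation)."""
    sx = (p2_model[0] - p1_model[0]) / (p2_px[0] - p1_px[0])
    sy = (p2_model[1] - p1_model[1]) / (p2_px[1] - p1_px[1])
    tx = p1_model[0] - sx * p1_px[0]
    ty = p1_model[1] - sy * p1_px[1]
    return lambda x, y: (sx * x + tx, sy * y + ty)

# Two hypothetical reference points as (pixel, model) pairs:
to_model = fit_image_to_model((100, 900), (0.0, 0.0),
                              (1100, 100), (50.0, 40.0))
print(to_model(600, 500))  # image midpoint -> model midpoint (25.0, 20.0)
```

With the mapping fitted, the bounding box or midpoint of the picture follows by feeding in the image's corner or centre pixels.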
I have been able to successfully detect an object using a Haar cascade classifier in Python with OpenCV.
When the object is detected, a rectangle is drawn around it. The x and y position of this rectangle are known, as are its width and height (x, y, w, h).
The problem I have is trying to get the 3D coordinates (x, y, z) from this rectangle.
I have not been able to find any resources on getting the z coordinate of the detected object.
I calibrated my camera and found the camera matrix and the distortion coefficients, but I'm not sure what to do next. I looked into pose estimation but do not think it will give me the 3D coordinates of this object.
Any help would be appreciated. I just need to be pointed in the right direction so I can continue my project.
Thank you.
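A single calibrated camera cannot recover depth on its own; you need one extra constraint. One common trick, not the only one, is to assume the real-world width of the detected object is known, in which case the pinhole model gives z = fx * real_width / pixel_width by similar triangles. A sketch, where the calibration numbers and object size are hypothetical stand-ins for your own:

```python
# Back-project a detection rectangle to camera coordinates, assuming
# the object's true width (in metres) is known. fx, fy, cx, cy come
# from the camera matrix; the values used below are hypothetical.

def rect_to_3d(x, y, w, h, fx, fy, cx, cy, real_width):
    """3D camera-space coordinates of the rectangle centre."""
    z = fx * real_width / w            # similar triangles: depth
    u = x + w / 2.0                    # rectangle centre in pixels
    v = y + h / 2.0
    X = (u - cx) * z / fx              # back-project through the
    Y = (v - cy) * z / fy              # pinhole model
    return X, Y, z

# A 100 px-wide detection of a 0.2 m-wide object, centred on the image:
print(rect_to_3d(270, 190, 100, 100,
                 fx=800.0, fy=800.0, cx=320.0, cy=240.0,
                 real_width=0.2))      # (0.0, 0.0, 1.6)
```

If the object's size is unknown, the usual alternatives are a second camera (stereo), a depth sensor, or a known planar reference via solvePnP.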
How can I get, in Maya, the surface positions and normals of the entire scene for every pixel?
I don't want to stop at the first surface the camera sees. I need information about all objects; I want to traverse the whole scene.
For example, a cube is in front of a sphere. From the camera's position only the cube is visible, because the sphere is occluded by the cube. For every pixel of my rendered camera image, I want the world-space surface position and the normal of the cube at the first hit, then again for the far side of the cube, and then for the two surfaces of the sphere.
How can that be achieved?
Thanks
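Conceptually this is a ray cast per pixel that records every crossing instead of stopping at the first. In Maya itself this is, as far as I know, what OpenMaya's MFnMesh.allIntersections exposes (one ray per pixel against each mesh). The toy scene below is a stand-in to show the idea, not Maya code: a 1D "scene" of z-aligned surfaces, with all hits returned nearest first:

```python
# Sketch: march one ray through EVERY surface it crosses, recording
# position and outward normal for each crossing, sorted nearest first.

def ray_hits_z(origin_z, surfaces):
    """All crossings of a +Z ray with z-aligned surfaces.
    Each surface is (z_position, outward_normal_z)."""
    hits = []
    for z, nz in surfaces:
        if z > origin_z:                        # surface lies ahead of the ray
            hits.append((z, (0.0, 0.0, nz)))    # world position + normal
    return sorted(hits)                         # nearest hit first

# Cube occupying z in [0, 2], sphere behind it spanning z in [4, 6]:
scene = [(0.0, -1.0), (2.0, 1.0), (4.0, -1.0), (6.0, 1.0)]
for z, normal in ray_hits_z(-10.0, scene):
    print(z, normal)   # four hits: front/back of cube, then front/back of sphere
```

Repeating this for a ray through each pixel of the camera image gives the full front-and-back traversal described above.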
I have an object in Blender that has sharp corners and easily distinguishable faces, exactly what I want. However, when I place it in Unity all of the vertices smooth out, and it is impossible to tell what you are looking at. How do I get the same object that I have in Blender to show up in Unity?
This is tackled here: blender-normal-smoothing-import-problem
You can also have the normals recalculated on import via the 'Smoothing Angle' setting, which will split edges (phong break) based on the angle between adjacent faces.
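A rough illustration of what the smoothing-angle setting decides: if the angle between two adjacent face normals exceeds the threshold, the edge stays hard (flat shaded); otherwise the normals are averaged (smooth shaded). The vectors below are hypothetical, and the 60-degree threshold is, to my knowledge, Unity's default:

```python
import math

def edge_is_hard(n1, n2, smoothing_angle_deg):
    """True if the edge between two faces with unit normals n1, n2
    should stay sharp under the given smoothing angle."""
    dot = sum(a * b for a, b in zip(n1, n2))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle > smoothing_angle_deg

# A 90-degree corner stays sharp at a 60-degree smoothing angle:
print(edge_is_hard((0.0, 1.0, 0.0), (1.0, 0.0, 0.0), 60.0))   # True
# Faces only 10 degrees apart get smoothed together:
n2 = (math.sin(math.radians(10)), math.cos(math.radians(10)), 0.0)
print(edge_is_hard((0.0, 1.0, 0.0), n2, 60.0))                # False
```

So with a low smoothing angle the Blender corners survive the import, while gently curved surfaces still shade smoothly.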
I have followed the RajawaliVuforia tutorial and integrated Rajawali with Vuforia CloudReco. I am able to get the 3D model, but the model is not positioned properly at the centre of the target image, and if I move the camera closer or further away, the model drifts out of the target image. Can someone let me know what the issue could be?
Vuforia passes the position (Vector3) and orientation (Quaternion) to Rajawali. Rajawali then uses this to position and rotate the model. This might interfere with animations applied to the model. If you're using animations or if you're setting the position manually you'll get unpredictable results. The reason for this is that the position is set twice on each frame.
The way to fix this is to put your 3D model in a container (an empty BaseObject3D). Vuforia's position and orientation will be applied to this container and not your 3D model. This way you can animate the model without getting unpredictable results.
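The container pattern can be sketched language-agnostically as below. In Rajawali the container would be the empty BaseObject3D; the minimal Node class here is a stand-in. The point is that the tracker writes only to the container and the animation writes only to the child, so neither overwrites the other:

```python
# Tracker pose -> container; animation -> child. The child's world
# position is the composition of the two, so neither fights the other.

class Node:
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.children = []

    def world_position(self, child):
        """Child's world position: container offset + local offset."""
        return tuple(p + c for p, c in zip(self.position, child.position))

container = Node()                     # Vuforia drives this node
model = Node()                         # your animation drives this node
container.children.append(model)

container.position = (1.0, 2.0, 3.0)   # per frame: tracker pose
model.position = (0.0, 0.5, 0.0)       # per frame: local animation (a bob)
print(container.world_position(model)) # (1.0, 2.5, 3.0)
```

Since the tracker pose is re-applied every frame, any animation written directly to the tracked object is clobbered; parenting it under the container is what keeps both transforms intact.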