I'm looking to track images on a curved surface. I have seen Vuforia's image tracking (the Vuforia cylinder target) and was wondering whether a similar effect could be produced in ARCore, possibly with multiple images in a database. Anyone with experience in this area, or even any views or advice, would be a big help.
I am working on a 2d fantasy map displayed in browser via WebGL. Here is what it looks like:
It is procedurally generated, so you can move wherever you want, and you can also zoom in and out without losing quality. I would like to add assets in some places, especially mountains where the altitude is high. I have those assets as vector images (.svg) so that you can still zoom in without losing quality. The thing is, I have no idea how to draw them on screen. I think I would need to convert those vectors into triangle vertices, but I am wondering if there is an automatic way to do it. I have heard of something called SVGLoader, but I think that is only for three.js, and I am using WebGL alone. What would you advise me to do?
Edit: I just found https://github.com/MoeYc/svg-webgl-loader, which looks interesting.
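For what it's worth, here is a minimal offline sketch of the path-to-triangles step in Python, assuming the svgpathtools package and convex outlines only (concave outlines need proper ear clipping, which is what loaders like the one linked above do for you). The file name mountain.svg is just a placeholder:

    import numpy as np
    from svgpathtools import svg2paths  # pip install svgpathtools

    def path_to_polygon(path, samples_per_path=64):
        """Sample an SVG path into a closed 2D polyline."""
        ts = np.linspace(0.0, 1.0, samples_per_path, endpoint=False)
        pts = np.array([path.point(t) for t in ts])  # complex numbers
        return np.column_stack([pts.real, pts.imag])

    def fan_triangulate(polygon):
        """Fan triangulation: only valid for convex outlines."""
        tris = []
        for i in range(1, len(polygon) - 1):
            tris.append([polygon[0], polygon[i], polygon[i + 1]])
        return np.array(tris, dtype=np.float32)

    paths, _ = svg2paths("mountain.svg")  # placeholder file name
    triangles = np.concatenate([fan_triangulate(path_to_polygon(p)) for p in paths])
    # Flatten to an interleaved x,y buffer you could upload with gl.bufferData().
    vertex_buffer = triangles.reshape(-1, 2).astype(np.float32)
    print(vertex_buffer.shape)

The key point is that sampling density replaces the SVG's infinite resolution, so you would re-tessellate (or pick a denser sampling) when the user zooms in far enough.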
I am an undergraduate student working on detecting defects on the surface of an object in a digital image, using image processing techniques. I am planning to use the OpenCV library for the image processing functions. Currently I am trying to decide which defect detection algorithm to use. This is one of my very first projects in this field, so any help related to this issue would be appreciated. The reference image with a defect (missing teeth on the gear) that I am currently working with is linked below ("defective gear image").
defective gear image
Get the convex hull of the gear (which is a polygon) and shrink it slightly so that it crosses the teeth. Make sure that the centroid of the gear is the fixed point of the shrink.
Then sample the pixels along the hull, preferably using equidistant points (divide the perimeter by a multiple of the number of teeth). The unwrapped profile will be a dashed line, with missing dashes corresponding to missing teeth, and the problem is reduced to 1D.
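A rough sketch of that pipeline in Python/OpenCV (gear.png is a placeholder file name; the 0.9 shrink factor and the sample count are guesses you would tune to your gear):

    import cv2
    import numpy as np

    img = cv2.imread("gear.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # The largest contour is assumed to be the gear.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    gear = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(gear).reshape(-1, 2).astype(np.float64)

    # Centroid of the gear = fixed point of the shrink.
    m = cv2.moments(gear)
    c = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

    shrunk = c + 0.9 * (hull - c)  # pull the hull 10% toward the centroid

    # Resample the shrunk hull at equidistant arc-length positions.
    closed = np.vstack([shrunk, shrunk[:1]])
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    n_samples = 720  # make this a multiple of the tooth count
    s = np.linspace(0.0, arc[-1], n_samples, endpoint=False)
    xs = np.interp(s, arc, closed[:, 0])
    ys = np.interp(s, arc, closed[:, 1])

    # 1D profile: dashes = teeth, long gaps = missing teeth.
    profile = binary[ys.astype(int), xs.astype(int)] > 0
    print(profile.astype(int))

From the boolean profile, a run of zeros longer than the nominal gap between teeth flags a missing tooth.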
You can also try a polar unwarping, making the outline straight, but you will need an accurate location of the center.
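For the polar route, OpenCV has this built in. Continuing from the sketch above (the output size and maxRadius are guesses to tune):

    # Unwrap the gear around its centroid. In warpPolar output the x axis is
    # radius and the y axis is angle, so the circular outline becomes a
    # (roughly) straight vertical edge and teeth become horizontal bumps.
    h, w = binary.shape
    unwrapped = cv2.warpPolar(
        binary,
        (400, 1440),                 # (radial samples, angular samples)
        (float(c[0]), float(c[1])),  # center: must be accurate, as noted
        min(h, w) / 2,               # maxRadius guess; should cover the teeth
        cv2.WARP_POLAR_LINEAR,
    )
    cv2.imwrite("unwrapped.png", unwrapped)

As the answer says, any error in the center estimate turns the straight edge into a wave, which makes the tooth profile harder to threshold.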
I am new to MeshLab and am trying to repair an STL file that has a number of issues: over 700 self-intersecting faces, non-manifold edges, and flipped triangles. The part I am trying to fix is a sunglasses frame, just to give you some perspective. I was able to remove the flipped triangles using Netfabb, which reduced the number of self-intersecting faces. I then attempted to fix the remaining problems with the filters under MeshLab's "Cleaning and Repairing" tab, such as removing non-manifold edges and intersecting faces; however, I was unable to fix everything that way. So I decided to convert the mesh into a point cloud, compute normals from the "Sampling" tab, and try Poisson surface reconstruction. This gave me a mesh that looked like a big blob instead of the detailed part I was trying to achieve.
Can anyone please give me a step-by-step outline of how I can convert the point cloud back into a mesh with surface reconstruction while maintaining the part's dimensional integrity and avoiding self-intersecting faces? Or if you have any other suggestions, I'd be more than happy to listen.
Thank you!
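Not MeshLab, but in case it helps: the "big blob" is the classic symptom of Poisson hallucinating surface in regions where the cloud is sparse, and the usual fix is to delete low-density vertices after reconstruction. A sketch of that route using Open3D instead (file names and the depth/quantile values are placeholders to tune):

    import numpy as np
    import open3d as o3d  # pip install open3d

    pcd = o3d.io.read_point_cloud("frame_points.ply")  # placeholder file name

    # Poisson needs consistent normals; tune radius to your sampling density.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30)
    )
    pcd.orient_normals_consistent_tangent_plane(k=30)

    # Screened Poisson; higher depth = more detail (and more memory).
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=10
    )

    # Trim the low-density vertices responsible for the "blob".
    densities = np.asarray(densities)
    mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.05))
    o3d.io.write_triangle_mesh("frame_repaired.ply", mesh)

The same idea works inside MeshLab: Screened Poisson there also exposes the density as a per-vertex quality you can select on and delete.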
I'm researching the possibility of performing occlusion culling in voxel/cube-based games like Minecraft, and I've come across a challenging sub-problem. I'll give the 2D version of it.
I have a bitmap which infrequently has pixels either added to or removed from it.
Image Link
What I want to do is maintain an arbitrarily small set of geometry primitives covering an arbitrarily large area, such that the area covered by the primitives lies entirely within the colored part of the bitmap.
Image Link
Is there a smart way to maintain these sets? Please note that this is different from typical image tracing in that the primitives cannot go outside the lines. If it helps, I already have the bitmap organized into a quadtree.
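One cheap scheme that exploits the quadtree you already have: every quadtree node whose block is entirely colored is itself a square that, by construction, can never stick outside the region, and on each pixel edit you only need to re-evaluate the nodes on the path from the changed leaf up to the root. A sketch over a numpy bitmap (recomputing from scratch for brevity; bitmap shape assumed square and power-of-two):

    import numpy as np

    def solid_squares(bitmap, x=0, y=0, size=None, out=None):
        """Emit (x, y, size) squares covering only fully-colored quadtree blocks."""
        if size is None:
            size = bitmap.shape[0]  # assumes a square, power-of-two bitmap
            out = []
        block = bitmap[y:y + size, x:x + size]
        if block.all():
            out.append((x, y, size))        # whole node is inside the region
        elif block.any() and size > 1:
            half = size // 2                # recurse into the four children
            for dy in (0, half):
                for dx in (0, half):
                    solid_squares(bitmap, x + dx, y + dy, half, out)
        return out

    bitmap = np.zeros((8, 8), dtype=bool)
    bitmap[1:7, 2:8] = True
    for sq in solid_squares(bitmap):
        print(sq)
    # For incremental edits you would only redo the O(log n) ancestors of the
    # changed pixel instead of calling this from the root each time.

The trade-off is that axis-aligned quadtree squares fragment near non-aligned boundaries; merging sibling squares into rectangles afterward shrinks the set further.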
I am trying to create an application similar in UI to Sketchup. As a first step, I need to display the ground surface stretching out in all directions. What is the best way to do this?
Options:
1. Create a sufficiently large regular polygon stretching out in all directions from the origin. Here there is a possibility of the user hitting the edges and falling off the surface of the earth.
2. Model the surface of the earth as a sphere/spheroid. Here I will be limiting my vertex coordinates to very large values prone to round-off errors. (The radius of the earth is 6371000000 millimetres.)
3. Same as 1, but dynamically extend the ends of the earth as the user gets close to them.
What is the usual practice?
I guess you would do neither of these, but instead use a virtual ground.
So you just find out what portion of the ground is visible in the viewport and then create a plane large enough to fill it, with some reasonable maximum distance, which simulates the end of the line of sight, a.k.a. the horizon as we know it.
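A sketch of that idea (numpy, y-up, ground plane at y = 0; the max_distance horizon and the example camera values are placeholders): cast a ray through each viewport corner, intersect it with the ground plane, clamp rays that miss or hit too far away, and rebuild the quad from the four hit points each frame.

    import numpy as np

    def ground_quad(cam_pos, corner_dirs, max_distance=10_000.0):
        """Intersect the four frustum corner rays with the plane y = 0.

        Rays that point at or above the horizon (or hit beyond max_distance)
        are clamped to max_distance, which acts as the simulated horizon.
        """
        corners = []
        for d in corner_dirs:
            d = d / np.linalg.norm(d)
            if d[1] < -1e-6:                        # ray actually hits the ground
                t = min(-cam_pos[1] / d[1], max_distance)
            else:                                   # looking at/above the horizon
                t = max_distance
            hit = cam_pos + t * d
            corners.append([hit[0], 0.0, hit[2]])   # snap onto the plane
        return np.array(corners)

    cam = np.array([0.0, 1.7, 0.0])                 # placeholder eye height
    dirs = [np.array(v, dtype=float) for v in
            [(-1, -0.3, 1), (1, -0.3, 1), (1, 0.2, 1), (-1, 0.2, 1)]]
    print(ground_quad(cam, dirs))

Because the quad is rebuilt per frame, the user can never reach its edge, which sidesteps the "falling off the earth" problem of option 1 without option 2's huge coordinates.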