Random string from randomly placed circles - security

I have this fun idea for a project I'd like to do, but I'm not really sure about the math part of it. Here is the idea:
Make a plastic card that would simulate a 9-finger multitouch gesture when it is held against a capacitive screen
Based on the "9 finger" placement, determine some sort of unique string and use it as an encryption/decryption key for an app
This way I could just open an app, touch the screen with the card, and it would get authorized.
But here's the problem:
It shouldn't matter where you place the card on the screen, because the card would be small enough to fit various screen sizes
The rectangle in which we can randomly position the 9 "fingers" would optimally be 4.5 cm × 3 cm
A "finger" is only recognized as a touch if it is roughly a 6mm circle (not sure if this can be made smaller)
I figured we could find the top-left "finger" and get every other "finger's" X and Y difference from it, then concatenate the resulting numbers into a string and use it as an encryption/decryption key. So basically:
key = concat(X2 - X1, Y2 - Y1, X3 - X1, Y3 - Y1, ...)
But I think such an approach would have very few possible combinations (given a relatively small card and a relatively big "finger"), and one could easily write a program that generates all possible combinations and breaks the key in no time. Am I right about this? If so, how could I improve this?
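Just to make the idea concrete, here is a rough Python sketch of the key derivation I have in mind (the point format and names are only my assumptions about what the touch API would report):

    # Rough sketch of the key derivation described above.
    # Assumes the touch API reports the 9 contacts as (x, y) pixel coordinates.
    def derive_key(points):
        # Use the top-left "finger" as the reference point.
        ref = min(points, key=lambda p: (p[1], p[0]))
        # Offsets of every other finger from the reference, sorted so the key
        # does not depend on the order in which the OS reports the touches.
        offsets = sorted((x - ref[0], y - ref[1]) for (x, y) in points if (x, y) != ref)
        return "".join("%d,%d;" % (dx, dy) for dx, dy in offsets)

    # Example with made-up coordinates:
    key = derive_key([(120, 80), (260, 95), (180, 210), (90, 300), (310, 150),
                      (200, 60), (150, 350), (330, 280), (250, 330)])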
Thanks for your thoughts
UPDATE 1: I actually tried it out on iOS. The result is not promising, since the "fingers" get detected differently each time. The distance between them varies significantly (by as much as 40 pixels!). So I guess this is not as easy as I expected, since the OS seems to detect the touch differently each time for the same two circles.

Your question is lacking some relevant information: how far apart need the circles be so that the system can still distinguish them? What resolution can you realistically expect for the circle centers? And by “6mm circle”, do you mean 6mm diameter or radius (or even circumference)?
Lacking details, I'll make some pretty rough approximations. I'll start by requiring that two of the circles be placed in opposite corners of the card. That way, you can find them by looking for the pair with maximal distance, and from that compute the orientation and size of the card and correct for it. This leaves 7 fingers to be placed randomly.
I'll assume 1 mm resolution, and restrict myself to a 45×30 mm area. That means 39×24 = 936 positions per circle, for a total of 936^7 ≈ 6.3×10^20 ≈ 2^69 combinations. OK, this does not exclude overlapping circles, but since the card is still rather sparsely covered, that shouldn't amount to too much. I'd say 64 bits of entropy (i.e. 2^64 possible combinations) should be reasonable even if you enforce non-overlapping circles. If you can really detect the circle centers with the required resolution, that is.
This should be sufficient security for most applications. Far better than 8-letter passwords, but worse than the symmetric keys usually used for e.g. AES.
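A quick sanity check of that arithmetic (a throwaway Python snippet, ignoring the overlap correction):

    import math

    positions = 39 * 24                # 936 candidate centers per circle
    combos = positions ** 7            # seven freely placed circles
    print(combos)                      # about 6.3e20
    print(math.log2(combos))           # about 69.1 bits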
Since all of this depends very much on the resolution, it might be worthwhile to investigate that aspect first. Usually you'll get pixel coordinates for your finger positions, but it would be expecting too much to assume that you'd always get the pixel coordinate closest to the center of your circle. So you might start by writing a small application which draws a 6mm circle and records the coordinates it receives. Then place a 6mm artificial circle in the drawn one a large number of times. Look at how far the recorded positions differ from the center of the circle. Take the maximum of those differences, perhaps after removing outliers. I'd add a pixel or two to that, to account for rounding errors due to the rotation of the card. Then turn that pixel count back into a metric length. That is the resolution you can expect. You might have to do this for several devices. If you do perform these experiments, let me know what you find and I'll update my answer accordingly.
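If it helps, here is a sketch of the post-processing step for that experiment. The inputs are my assumptions about how you would log the data: "samples" would be the recorded touch coordinates in pixels, "center" the drawn circle's center.

    import math

    # samples: touch coordinates (in pixels) recorded for repeated placements
    # center:  pixel coordinates of the drawn circle's center
    def estimate_resolution_px(samples, center, outlier_fraction=0.05):
        cx, cy = center
        deviations = sorted(math.hypot(x - cx, y - cy) for x, y in samples)
        # Drop the worst few percent as outliers, then take the maximum deviation.
        keep = deviations[:max(1, int(len(deviations) * (1 - outlier_fraction)))]
        return keep[-1] + 2          # a couple of extra pixels for rotation/rounding

    # Convert the pixel count back into a metric length using the device's density:
    def px_to_mm(px, pixels_per_inch):
        return px / pixels_per_inch * 25.4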

Related

"Inverting" a concave polygon

I'm building a 2D game where the player can only see things that are not blocked by other objects. Consider this example of how it looks now:
I've implemented a ray-tracing algorithm for this and it seems to work just fine (I've reduced the boundaries for the demo to make all edges visible).
As you can see, the lighter area is built from a bunch of triangles, each of them having a common point at the player's position. Each pair of neighbours has two common points.
However, I want to calculate the bounds of the external part of the polygon, to fill it with black triangles "hiding" what the player cannot see.
One way to do it is to "mask" the black rectangle with the current polygon, but I'm afraid that's very inefficient.
Any ideas about an effective algorithm to achieve this?
Thanks!
A non-analytical, rough solution.
Cast rays with gradually increasing polar angle
Record when a ray first hits an object (and the point where it hits)
Keep going until it no longer hits the same object (and record where it last hit)
Using the two recorded points, construct a trapezoid that extends to infinity (or wherever); see the sketch after the caveats
Caveats:
Doesn't work too well with concavities - need to include all points in-between as well. May need Delaunay triangulation etc... messy!
May need extra states to account for objects tucked in behind each other.
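Here is a rough Python sketch of that sweep, under the assumption that you already have a ray-cast routine; hit(player, angle, scene) is a placeholder that should return the id of the obstacle the ray hits (or None) together with the hit point:

    import math

    def occluded_quads(player, scene, hit, steps=720, far=1e6):
        quads = []
        current_id, first_hit, last_hit = None, None, None
        for i in range(steps + 1):
            angle = 2 * math.pi * i / steps
            obj_id, point = hit(player, angle, scene)
            if obj_id != current_id:
                if current_id is not None:
                    # The previous object just ended: emit its "shadow" trapezoid,
                    # pushing the two recorded hit points away from the player.
                    quads.append((first_hit, last_hit,
                                  extend(player, last_hit, far),
                                  extend(player, first_hit, far)))
                current_id, first_hit = obj_id, point
            if obj_id is not None:
                last_hit = point
        # Concave objects and objects split across the 0-degree wrap-around still
        # need the extra handling mentioned in the caveats above.
        return quads

    def extend(player, point, far):
        dx, dy = point[0] - player[0], point[1] - player[1]
        length = math.hypot(dx, dy) or 1.0
        return (player[0] + dx / length * far, player[1] + dy / length * far)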

How do I check if a set of plane polygons forms a watertight polyhedron

I am currently wondering if there is a common algorithm to check whether a set of plane polygons, not necessarily triangles, constructs a watertight polyhedron. Each polygon has an orientation (normal vector). A simple solution would just be to say yes or no. A more advanced version would point out the edges where the polyhedron is "open". I am not really interested in how to close the polyhedron.
I would like to point out that my "holes" are not necessarily small, e.g., one face of a cube might be missing. Thus, the "undersampling correction" algorithms don't seem to be the correct approach. Furthermore, I am talking about 100–1000 polygons, not 1,000,000, so computation time should not really be a problem.
Any hints or tips?
kind regards,
curator
I believe you can use a simple topological test -- count the number of times each edge appears in the full list of polygons.
If the set of polygons defines the surface of a closed volume, each edge should have count >= 2, indicating that each edge is shared by (at least) two adjacent polygons. If the surface is manifold, count == 2 exactly.
Edges with count==1 indicate open regions of the surface.
The above answer does not cover many cases. A more correct (but not necessarily complete: I wouldn't know) algorithm is to ensure that every edge of every polygon (or of the mesh/polyhedron) has an even number of faces connected to it. Consider the following mesh:
The segment (line) between the closest vertex and the one below it is attached to 3 faces (one of the outer triangle and two of the inner triangle), which is greater than two. However, this mesh is clearly not closed.
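For what it's worth, here is a minimal Python sketch of the edge-count test, assuming each polygon is given as an ordered list of vertex indices:

    from collections import Counter

    def open_edges(polygons):
        counts = Counter()
        for face in polygons:
            for i in range(len(face)):
                a, b = face[i], face[(i + 1) % len(face)]
                counts[tuple(sorted((a, b)))] += 1   # ignore edge direction
        # Edges used by only one face are where the polyhedron is "open";
        # any count other than 2 also flags the non-manifold cases discussed above.
        return [edge for edge, n in counts.items() if n == 1]

    # Example: a cube with one face left out reports the four edges of the hole.
    cube_faces = [
        [0, 1, 2, 3], [4, 5, 6, 7],    # bottom, top
        [0, 1, 5, 4], [1, 2, 6, 5],    # two sides
        [2, 3, 7, 6],                  # third side; [3, 0, 4, 7] omitted on purpose
    ]
    print(open_edges(cube_faces))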

How heavy is hardware tessellation?

If tessellation gives a bonus over just using high-poly models, then why do modern 2012 games still use gigantic models that take a lot of hard disk space, instead of tessellating everything and adjusting the tessellation factor based on distance from the camera, creating a nice level of detail?
You can't get back detail by tessellation that was not there in the first place. It just means those models would be even bigger without it being available.
In its most basic form, tessellation is a method of breaking down polygons into finer pieces. For example, if you take a square and cut it across its diagonal, you’ve “tessellated” this square into two triangles. By itself, tessellation does little to improve realism. For example, in a game, it doesn’t really matter if a square is rendered as two triangles or two thousand triangles—tessellation only improves realism if the new triangles are put to use in depicting new information.
(Figure: when a displacement map (left) is applied to a flat surface, the resulting surface (right) expresses the height information encoded in the displacement map.)
The simplest and most popular way of putting the new triangles to use is a technique called displacement mapping. A displacement map is a texture that stores height information. When applied to a surface, it allows vertices on the surface to be shifted up or down based on the height information. For example, the graphics artist can take a slab of marble and shift the vertices to form a carving. Another popular technique is to apply displacement maps over terrain to carve out craters, canyons, and peaks.
http://www.nvidia.com/object/tessellation.html
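To make the displacement idea concrete, here is a toy Python/NumPy sketch (an illustration only, not how a GPU tessellator actually works; the height field is made up):

    import numpy as np

    def displace_grid(heightmap, scale=1.0):
        # Tessellate a unit quad into one vertex per heightmap texel, then shift
        # each vertex along the surface normal (+Z for a flat quad) by the
        # sampled height.
        h, w = heightmap.shape
        xs, ys = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
        zs = heightmap * scale
        return np.stack([xs, ys, zs], axis=-1)     # (h, w, 3) displaced vertices

    # A random 64x64 "slab" height field stands in for a real displacement map:
    vertices = displace_grid(np.random.rand(64, 64), scale=0.1)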
I think the reason why nobody uses hardware tessellation in games is that roughly 60% of all game players are console players, and as long as the consoles don't support Shader Model 5, there is no reason to make games that use hardware tessellation. Even if developers do, they may have to build the game in both DX9 and DX11, because it is not really backward compatible... but maybe there is another reason, too!
With the new consoles coming out this year, maybe HW tessellation gets another chance ;)

How to construct ground surface of infinite size in a 3D CAD application?

I am trying to create an application similar in UI to Sketchup. As a first step, I need to display the ground surface stretching out in all directions. What is the best way to do this?
Options:
Create a sufficiently large regular polygon stretching out in all directions from the origin. Here there is a possibility of the user hitting the edges and falling off the surface of the earth.
Model the surface of the earth as a sphere/spheroid. Here I will be limiting my vertex coordinates to very large values, prone to round-off errors. (The radius of the earth is 6,371,000,000 millimeters.)
Same as 1 but dynamically extend the ends of the earth as the user gets close to them.
What is the usual practice?
I guess you would do neither of these, but instead use a virtual ground.
So you just find out what portion of the ground is visible in the viewport and then create a plane large enough to fill it, with some reasonable maximum, which simulates the end of the line of sight, a.k.a. the horizon as we know it.
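A Python sketch of that idea, assuming your camera can give you the four normalized corner rays of the view frustum (that part is a placeholder):

    def ground_quad(cam_pos, corner_rays, max_dist=10000.0):
        # Intersect each frustum corner ray with the ground plane y = 0 and clamp
        # the hit distance, so the "virtual ground" always ends at a fake horizon.
        corners = []
        for dx, dy, dz in corner_rays:
            if dy < -1e-6:                     # ray actually points downward
                t = min(-cam_pos[1] / dy, max_dist)
            else:                              # looking at or above the horizon
                t = max_dist
            corners.append((cam_pos[0] + dx * t, 0.0, cam_pos[2] + dz * t))
        return corners                         # four corners of the ground quad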

Contact area size in MultitouchSupport private framework

I've been playing around with the Carbon multitouch support private framework and I've been able to retrieve various types of data.
Among these, each contact seems to have a size and is also described by an ellipse (angle, minor axis, major axis). However, I haven't been able to identify the frame of reference used for the size and the minor and major axes.
If anybody has been able to find it out, I'm interested in your information.
Thanks in advance
I've been using the framework for two years now and I've found that the ellipse is not in standard units (e.g. inches, millimeters). You could approximate millimeters by doubling the values you get for the ellipse.
Here's how I derived the ellipse information.
First, my best guess for how it works is that it's close to Synaptics "units per mm": http://ccdw.org/~cjj/l/docs/ACF126.pdf But since Apple has not released any of that information for developers, I'm relying on information that I print to the console.
You may get slightly different values based on the dimensions of the device (e.g. native trackpad vs. Magic Mouse) you're using with the MultiTouchSupport.framework. This might also be caused by differences in the surface (the Magic Mouse is curved).
The code on http://www.steike.com/code/multitouch/ has a parameter called mm. This gives you the raw (non-normalized) position and velocity for the device.
Based on the width's observed min & max values from mm (-47.5, 52.5), the trackpad is ~100 units wide (~75 units the other way). The trackpad is about 100mm wide by 80mm deep. But no, it's not a direct unit-to-millimeter translation. I think the parameter being named 'mm' may have just been a coincidence.
My forearm can cover about 90% of the surface of the trackpad. After laying it across the trackpad, the output reads about 58 units wide by 36 units long, with a size of 55. If you double the units you get 116 by 72, which is really close to 100mm by 80mm. So that's why I say just double the units to approximate the millimeters. I've done this with my forearm the other way and with my palm, and the approximations still seem to work.
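A tiny sanity check of that "double it" rule in Python (the numbers are the forearm measurements quoted above):

    def units_to_mm(value):
        # Empirical: the framework's units appear to be roughly 0.5 mm each.
        return value * 2.0

    print(units_to_mm(58), units_to_mm(36))   # 116 x 72, close to the ~100 x 80 mm pad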
The size of 55 doesn't seem to coincide with the values of the ellipse. I'm inclined to believe that the ellipse is an approximation of the surface dimensions and that size is the actual surface area (probably in decimeters).
Sorry there's no straight answer (this is after all a reverse engineering project) but maybe this information can help you find the answer yourself.
(Note: I'd like to know what you're working on?)
