Where can I find the calculations of Google Cardboard?

Where can I find the calculations behind Cardboard?
Example:
A calculation of the distance from the lenses to my eyes
A calculation of the radius of the lenses
A calculation of the field of view
A calculation of the focal length of the lenses
....
I downloaded Google's documentation, but it doesn't provide what I want. It is too "step-by-step", and I can't see any line explaining why I have to select a given parameter for a component. What if I don't want to use a 45 mm focal length for the lenses? What if I want to use a tablet?

You can find the exact specifications on the Google Cardboard website, in the Manufacturers section. There you can download the kit which provides the details you're looking for.

AB is the phone screen and O is the lens.
The relevant relation is the thin-lens equation: 1/OA = 1/f + 1/OA', where OA is the lens-to-screen distance, OA' the distance to the virtual image, and f the focal length.
You can use this relation to calculate the distances between the lens, the eyes, and the phone screen.
You may ask: "Where can I find OA'?"
For a person with normal vision, OA' is infinite, so OA = f. For a myope, OA' is the far point, i.e. 1000/D mm for a prescription of D dioptres.
Example:
I use a lens with f = 45 mm. I am a myope and my glasses prescription is 3 dioptres => OA' = 333 mm => OA ≈ 39.6 mm.
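A minimal sketch of that arithmetic in Python, assuming the thin-lens relation above (the function name is mine, not from any Cardboard documentation):

def screen_distance_mm(focal_length_mm, diopters=0.0):
    # Lens-to-screen distance OA from the thin-lens equation.
    # For normal vision the virtual image sits at infinity, so OA = f.
    # For a myope the image must sit at the far point OA' = 1000/D mm,
    # and 1/OA = 1/f + 1/OA' gives OA.
    if diopters == 0.0:
        return focal_length_mm
    far_point_mm = 1000.0 / diopters  # OA' = 333 mm for 3 dioptres
    return 1.0 / (1.0 / focal_length_mm + 1.0 / far_point_mm)

print(screen_distance_mm(45.0))       # 45.0  (normal vision: OA = f)
print(screen_distance_mm(45.0, 3.0))  # ~39.6 (3-dioptre myope)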


Simultaneous Subset sums

I am dealing with a problem which is a variant of the subset-sum problem, and I am hoping that the additional constraint makes it easier to solve than the classical subset-sum problem. I have searched for a problem with this constraint, but I have been unable to find a good example with an appropriate algorithm, either on StackOverflow or elsewhere.
The problem:
Assume you have two lists of positive numbers A1,A2,A3... and B1,B2,B3... with the same number of elements N, and two target sums Sa and Sb. The problem is to find a single index set Q such that |sum(A{Q}) - Sa| <= epsilon and |sum(B{Q}) - Sb| <= epsilon. So, if Q is {1, 5, 7}, then |A1 + A5 + A7 - Sa| <= epsilon and |B1 + B5 + B7 - Sb| <= epsilon. Epsilon is an arbitrarily small positive constant.
Now, I could solve this as two completely separate subset-sum problems, but removing the simultaneity constraint opens the door to erroneous solutions (where Qa != Qb). I also suspect that the additional constraint should make this problem easier than two separate NP-complete problems. I would like to solve instances with 18+ elements in both lists, and most subset-sum algorithms have a long run time at that size. I have investigated the pseudo-polynomial dynamic programming algorithm, but it has two problems: a) its speed relies on a small bit-depth of the numbers in the lists (which does not necessarily hold for my instances), and b) it does not take the simultaneity constraint into account.
Any advice on how to use the simultaneity constraint to reduce the run time? Is there a dynamic programming approach I could use to take into account this constraint?
If I understand your description of the problem correctly (I'm confused about why you have the absolute-value bars around "sum (A{Q}) - Sa" and "sum (B{Q}) - Sb"; they don't seem to fit the rest of the explanation), then it is NP-hard.
You can see this by making a reduction from subset sum (SUB) to simultaneous subset sum (SIMSUB).
Suppose you have a SUB problem consisting of a set X = {x1,x2,...,xn} and a target t, and an algorithm that solves SIMSUB when given two sets A = {a1,a2,...,an} and B = {b1,b2,...,bn}, two integers Sa and Sb, and a value for epsilon. Then you can solve SUB like this:
Let A = X and let B be a set of length n consisting only of 0's. Set Sa = t, Sb = 0 and epsilon = 0. You can now run the SIMSUB algorithm on this instance and get the solution to your SUB problem.
This shows that SIMSUB is at least as hard as SUB and is therefore NP-hard.
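To make the reduction concrete, here is a rough Python sketch. The brute-force SIMSUB solver stands in for whatever algorithm you have; it is exponential and for illustration only, and all names are mine:

from itertools import combinations

def solve_simsub(A, B, Sa, Sb, eps):
    # Brute force: try every index set Q and test both constraints.
    n = len(A)
    for r in range(1, n + 1):
        for Q in combinations(range(n), r):
            if (abs(sum(A[i] for i in Q) - Sa) <= eps and
                    abs(sum(B[i] for i in Q) - Sb) <= eps):
                return Q
    return None

def solve_sub_via_simsub(X, t):
    # The reduction above: A = X, B = all zeros, Sa = t, Sb = 0, eps = 0.
    return solve_simsub(X, [0] * len(X), t, 0, 0)

print(solve_sub_via_simsub([3, 34, 4, 12, 5, 2], 9))  # (2, 4): 4 + 5 = 9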

Record linkage using String similarity Techniques

We are working on a record linkage project.
We are observing strange behavior from all of the standard techniques: Jaro-Winkler, Levenshtein, N-Gram, Damerau-Levenshtein, Jaccard index, and Sorensen-Dice.
Say,
String 1= MINI GRINDER KIT
String 2= Weiler 13001 Mini Grinder Accessory Kit, For Use With Small Right Angle Grinders
String 3= Milwaukee Video Borescope, Rotating Inspection Scope, Series: M-SPECTOR 360, 2.7 in 640 x 480 pixels High-Resolution LCD, Plastic, Black/Red
In the above case, string 1 and string 2 are related; the scores from all the methods are shown below.
Jaro Winkler -> 0.391666651
Levenshtein -> 75
N-Gram -> 0.9375
Damerau -> 75
Jaccard index -> 0
Sorensen-Dice -> 0
Cosine -> 0
But string 1 and string 3 are not related at all, yet the distance methods give very high scores.
Jaro Winkler -> 0.435714275
Levenshtein -> 133
N-Gram -> 0.953571439
Damerau -> 133
Jaccard index -> 1
Sorensen-Dice -> 0
Cosine -> 0
Any thoughts?
All the distance scores are case sensitive, so bring all the strings to the same case first. Then the scores are calculated appropriately.
I believe your goal here is to check whether the two products are the same or not. The data are from different sources, I guess; with data like this you'll need to find out which parts are worth comparing: the brand name, the specs, etc.
These metrics follow a very crude notion of similarity, so don't just feed in the data raw.
So first clean (remove punctuation and unimportant words) and tokenize (break the text into single words); then maybe you can use fuzzywuzzy to help find a better match, as in the sketch below.
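A rough sketch of that clean-then-compare pipeline, using fuzzywuzzy's token_set_ratio (the stopword list is mine and purely illustrative):

import re
from fuzzywuzzy import fuzz  # pip install fuzzywuzzy

STOPWORDS = {"for", "use", "with", "the", "a", "an", "of", "in"}  # illustrative

def clean(text):
    # Lowercase, strip punctuation, drop unimportant words.
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return " ".join(t for t in tokens if t not in STOPWORDS)

s1 = "MINI GRINDER KIT"
s2 = "Weiler 13001 Mini Grinder Accessory Kit, For Use With Small Right Angle Grinders"
s3 = "Milwaukee Video Borescope, Rotating Inspection Scope, Series: M-SPECTOR 360"

# token_set_ratio ignores word order and duplicated tokens, so a short
# title scores high against a long description that contains it.
print(fuzz.token_set_ratio(clean(s1), clean(s2)))  # high (near 100)
print(fuzz.token_set_ratio(clean(s1), clean(s3)))  # much lower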

How to generate a value for sorting similar text phrases?

I have a list of texts which need to be sorted by similarity, based on a value obtained exclusively from each text itself. Hence, no pairwise comparison is allowed, as comparing against thousands of other texts could take too long.
The idea is to generate values (numeric or not), from arbitrary text phrases, like in the next example:
42334220 = "A white horse is running across the field"
42334229 = "The white horse is running across that field"
42334403 = "A white animal is running across the green field"
Notice that the first and second phrases sort together because they're more similar to each other than to the last, even though the first and third start with the same letter.
I have used the Soundex function in other scenarios, but it is oriented to pronunciation, works on single words, and depends on the first letter.
So, how can I generate such a value (i.e., what algorithms exist to compute one) that represents a phrase for sorting purposes?
I suggest looking at the Fourier transform. It has been successfully applied to find similarities in images and audio samples, and I think it could be well suited to your problem.
You could view a string as an int array, i.e. a signal function from position => char value.
Do a Fourier transform on this function.
Return a sublist of the k indexes of the highest elements in the result (greater k means greater precision).
Perhaps the signal function has to be adjusted to get a better match to the common understanding of similarity:
use a function position => word index (look up the word in a dictionary and get its index)
use a function position => n-gram index (= weighted sum of the n chars)
if the sequence of words is not relevant, use a function char => frequency of the character (ordered alphabetically)
Maybe other transforms (e.g. wavelet transform) would do better.
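A minimal Python sketch of the position => char-value variant, using numpy; whether the resulting ordering matches human intuition is untested, and the signature function is my own:

import numpy as np

def fft_signature(text, k=8):
    # Magnitudes of the first k Fourier coefficients of the char-code
    # signal: a crude fixed-length sort key. Larger k keeps more detail.
    signal = np.array([ord(c) for c in text.lower()], dtype=float)
    spectrum = np.abs(np.fft.rfft(signal))
    return tuple(np.round(spectrum[:k], 1))

phrases = [
    "A white horse is running across the field",
    "The white horse is running across that field",
    "A white animal is running across the green field",
]
for p in sorted(phrases, key=fft_signature):
    print(fft_signature(p)[:3], p)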

Given a phrase without spaces add spaces to make proper sentence

This is what I have in mind, but it's O(n^2):
For example: the input is "Thisisawesome", and we need to check whether adding the current character makes a previously found set longer and meaningful. But in order to see how far we need to back up, we have to traverse all the way to the beginning. For example: "awe" and "some" are proper words, but "awesome" makes the bigger word. Please suggest how we can improve the complexity. Here is the code:
#include <algorithm>
#include <set>
#include <string>
#include <vector>
using namespace std;

set<string> dict;  // dictionary of valid words

// DS[k] holds the length of the longest dictionary word ending at index k.
void update(const string& in)
{
    int len = in.length();
    vector<int> DS(len, 0);
    for (int i = 0; i < len; i++)
        for (int j = i + 1; j <= len; j++)
        {
            string word = in.substr(i, j - i);
            if (dict.find(word) != dict.end())
                DS[j - 1] = max(DS[j - 1], (int)word.length());
        }
}
There is a dynamic programming solution which at first looks like it is going to be O(n^2), but which turns out to be only O(n) for sufficiently large n and a fixed-size dictionary.
Work through the string from left to right. At the ith stage you need to work out whether there is a solution for the first i characters. To solve this, consider every possible way to break those i characters into two chunks. If the second chunk is a word and the first chunk can be broken up into words then there is a solution. The first requirement you can check with your dictionary. The second requirement you can check by looking to see if you found an answer for the first j characters, where j is the length of the first chunk.
This would be O(n^2), because for each of the 1,2,3,...,n lengths you consider every possible split. However, if you know the length of the longest word in your dictionary, you know there is no point considering splits which make the second chunk longer than that. So for each of the 1,2,3,...,n lengths you consider at most w possible splits, where w is the length of the longest dictionary word, and the cost is O(n).
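A Python sketch of this DP, with a toy dictionary (ignoring the O(w) cost of hashing each substring):

def can_split(s, dictionary):
    # ok[i] is True when the first i characters split into dictionary words.
    # Only last chunks no longer than the longest word are tried, so the
    # inner loop is bounded by w and the whole scan is O(n * w).
    w = max(map(len, dictionary))
    ok = [True] + [False] * len(s)
    for i in range(1, len(s) + 1):
        for j in range(max(0, i - w), i):
            if ok[j] and s[j:i] in dictionary:
                ok[i] = True
                break
    return ok[len(s)]

words = {"this", "is", "a", "awe", "some", "awesome"}  # toy dictionary
print(can_split("thisisawesome", words))  # True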
I coded my solution today, and will put it on a web site tomorrow. Anyway, the method is as follows:
Arrange the dictionary in a trie.
The trie can help to do multiple matches quickly, because all dictionary words starting with the same letters can be matched at the same time.
(e.g. "chairman" matches "chair" and "chairman" in a trie.)
Use Dijkstra's algorithm to find the best match.
(e.g. for "chairman", if we count "c" as position 0, then we have the relationships 0->5, 0->8, 1->5, 2->5, 5->8. These relationships form a network perfect for Dijkstra's algorithm.)
(Note: Where's the weights of the edges? See the next point.)
Assign weights to dictionary words.
Without weighting, bad matches outweigh good matches (e.g. "iamahero" becomes "i ama hero" instead of "i am a hero").
The SCOWL dictionary at http://app.aspell.net/create serves the purpose well, because it comes in different sizes, and these sizes (10, 20, etc.) are a good choice for weighting.
After some tries I found a need to reduce the weight of words ending with "s", so that "eyesandme" becomes "eyes and me" instead of "eye sand me".
I have been able to split a paragraph in milliseconds. The algorithm has linear complexity in the length of the string to be split, so it scales well as long as memory is sufficient.
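Here is a rough Python sketch of the same idea. For brevity it replaces the trie with direct prefix matching, and the word costs below are toy values standing in for the SCOWL-size weighting (common words = cheaper edges):

import heapq

WORD_COST = {"i": 1, "am": 1, "a": 1, "hero": 2, "ama": 5}  # toy weights

def best_split(s):
    # Dijkstra over positions 0..len(s): an edge u->v exists when s[u:v]
    # is a word, weighted by that word's cost; the cheapest path wins.
    n = len(s)
    dist, prev, heap = {0: 0}, {}, [(0, 0)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == n:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for word, cost in WORD_COST.items():
            v = u + len(word)
            if s.startswith(word, u) and d + cost < dist.get(v, float("inf")):
                dist[v], prev[v] = d + cost, (u, word)
                heapq.heappush(heap, (d + cost, v))
    if n not in prev:
        return None  # no full segmentation found
    words, v = [], n
    while v:
        u, word = prev[v]
        words.append(word)
        v = u
    return " ".join(reversed(words))

print(best_split("iamahero"))  # "i am a hero" beats "i ama hero" on cost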
Here's the dump (sorry for bragging). (The passage selected is "Novel" in Wikipedia.)
D:\GoogleDrive\programs\WordBreaker>"word breaker"<novelnospace.txt>output.txt
D:\GoogleDrive\programs\WordBreaker>type output.txt
Number of words after reading words-10.txt : 4101
Number of words after reading words-20.txt : 11329
Number of words after reading words-35.txt : 43292
Number of words after reading words-40.txt : 49406
Number of words after reading words-50.txt : 87966
Time elapsed in reading dictionary: 0.956782s
Enter the string to be broken into words:
Result:
a novel is along narrative normally in prose which describes fictional character
s and events usually in the form of a sequential story while i an watt in the ri
se of the novel 1957 suggests that the novel came into being in the early 18 th
century the genre has also been described as possessing a continuous and compreh
ensive history of about two thousand years with historical roots in classical gr
eece and rome medieval early modern romance and in the tradition of the novel la
the latter an italian word used to describe short stories supplied the present g
eneric english term in the 18 th century miguel de cervantes author of don quixo
te is frequently cited as the first significant europe an novelist of the modern
era the first part of don quixote was published in 1605 while a more precise de
finition of the genre is difficult the main elements that critics discuss are ho
w the narrative and especially the plot is constructed the themes settings and c
haracterization how language is used and the way that plot character and setting
relate to reality the romance is a related long prose narrative w alter scott d
efined it as a fictitious narrative in prose or verse the interest of which turn
s upon marvellous and uncommon incidents whereas in the novel the events are acc
ommodated to the ordinary train of human events and the modern state of society
however many romances including the historical romances of scott emily brontes w
u the ring heights and her man melvilles mo by dick are also frequently called n
ovels and scott describes romance as a kind red term romance as defined here sho
uld not be confused with the genre fiction love romance or romance novel other e
urope an languages do not distinguish between romance and novel a novel isle rom
and err o ma nil roman z o
Time elapsed in splitting: 0.00495095s
D:\GoogleDrive\programs\WordBreaker>type novelnospace.txt
Anovelisalongnarrativenormallyinprosewhichdescribesfictionalcharactersandeventsu
suallyintheformofasequentialstoryWhileIanWattinTheRiseoftheNovel1957suggeststhat
thenovelcameintobeingintheearly18thcenturythegenrehasalsobeendescribedaspossessi
ngacontinuousandcomprehensivehistoryofabouttwothousandyearswithhistoricalrootsin
ClassicalGreeceandRomemedievalearlymodernromanceandinthetraditionofthenovellaThe
latteranItalianwordusedtodescribeshortstoriessuppliedthepresentgenericEnglishter
minthe18thcenturyMigueldeCervantesauthorofDonQuixoteisfrequentlycitedasthefirsts
ignificantEuropeannovelistofthemodernerathefirstpartofDonQuixotewaspublishedin16
05Whileamoreprecisedefinitionofthegenreisdifficultthemainelementsthatcriticsdisc
ussarehowthenarrativeandespeciallytheplotisconstructedthethemessettingsandcharac
terizationhowlanguageisusedandthewaythatplotcharacterandsettingrelatetorealityTh
eromanceisarelatedlongprosenarrativeWalterScottdefineditasafictitiousnarrativein
proseorversetheinterestofwhichturnsuponmarvellousanduncommonincidentswhereasinth
enoveltheeventsareaccommodatedtotheordinarytrainofhumaneventsandthemodernstateof
societyHowevermanyromancesincludingthehistoricalromancesofScottEmilyBrontesWuthe
ringHeightsandHermanMelvillesMobyDickarealsofrequentlycallednovelsandScottdescri
besromanceasakindredtermRomanceasdefinedhereshouldnotbeconfusedwiththegenreficti
onloveromanceorromancenovelOtherEuropeanlanguagesdonotdistinguishbetweenromancea
ndnovelanovelisleromanderRomanilromanzo
D:\GoogleDrive\programs\WordBreaker>

Ways to calculate similarity

I am doing a community website that requires me to calculate the similarity between any two users. Each user is described with the following attributes:
age, skin type (oily, dry), hair type (long, short, medium), lifestyle (active outdoor lover, TV junky) and others.
Can anyone tell me how to go about this problem or point me to some resources?
Another way is to compute (in R) all the pairwise dissimilarities (distances) between the observations in the data set. The original variables may be of mixed types; nominal, ordinal, and (a)symmetric binary data are handled via the general dissimilarity coefficient of Gower (Gower, J. C. (1971), "A general coefficient of similarity and some of its properties", Biometrics 27, 857-874). For more, check out this on page 47. If x contains any columns of these data types, Gower's coefficient will be used as the metric.
For example
library(cluster)

# Note: cbind() coerces the factors to their integer codes, so daisy()
# computes euclidean distances on those codes; pass a data.frame instead
# if you want daisy() to pick Gower's coefficient for mixed types.
x1 <- factor(c(10, 12, 25, 14, 29))
x2 <- factor(c("oily", "dry", "dry", "dry", "oily"))
x3 <- factor(c("medium", "short", "medium", "medium", "long"))
x4 <- factor(c("active outdoor lover", "TV junky", "TV junky", "active outdoor lover", "TV junky"))
x <- cbind(x1, x2, x3, x4)
daisy(x, metric = "euclidean")
you'll get :
Dissimilarities :
         1        2        3        4
2 2.000000
3 3.316625 2.236068
4 2.236068 1.732051 1.414214
5 4.242641 3.741657 1.732051 2.645751
If you are interested in a method for dimensionality reduction for categorical data (which is also a way to arrange variables into homogeneous clusters), check this.
Give each attribute an appropriate weight, and add the differences between values.
enum SkinType
    Dry, Medium, Oily

enum HairLength
    Bald, Short, Medium, Long

UserDifference(user1, user2)
    total := 0
    total += abs(user1.Age - user2.Age) * 0.1
    total += abs((int)user1.Skin - (int)user2.Skin) * 0.5
    total += abs((int)user1.Hair - (int)user2.Hair) * 0.8
    # etc...
    return total
If you really need similarity instead of difference, use 1 / UserDifference(a, b) (guarding against division by zero when two users are identical).
You should probably take a look at:
Data Mining and Data Warehousing (essential)
Machine Learning (extra)
Artificial Neural Networks (especially SOM)
Pattern Recognition (related)
These topics will let your program recognize similarities and clusters in your user collection and adapt to them.
You can then discover hidden common groups of related users (e.g. users with green hair usually do not like watching TV).
As advice, try to use readily implemented tools for this feature instead of implementing it yourself.
Take a look at the Open Directory Data Mining Projects.
Three steps to achieve a simple subjective metric for the difference between two data points that might work fine in your case:
Capture all your variables in a representative numeric variable, for example: skin type (oily = -1, dry = 1), hair type (long = 2, medium = 1, short = 0), lifestyle (active outdoor lover = 1, TV junky = -1); age is already a number.
Scale all numeric ranges so that they fit the relative importance you give them for indicating difference. For example: an age difference of 10 years is about as different as the difference between long and medium hair, or the difference between oily and dry skin. So 10 on the age scale is as different as 1 on the hair scale and as 2 on the skin scale; scale the difference in age by 0.1, that in hair by 1, and that in skin by 0.5.
Use an appropriate distance metric to combine the differences between two people on the various scales into one overall difference. The smaller this number, the more similar they are. I'd suggest the simple quadratic difference as a first attempt at your distance function.
Then the difference between two people could be calculated with (I assume Person.age, .skin, .hair, etc. have already gone through step 1 and are numeric):
double Difference(Person p1, Person p2) {
    double agescale = 0.1;
    double skinscale = 0.5;
    double hairscale = 1;
    double lifestylescale = 1;
    double agediff = (p1.age - p2.age) * agescale;
    double skindiff = (p1.skin - p2.skin) * skinscale;
    double hairdiff = (p1.hair - p2.hair) * hairscale;
    double lifestylediff = (p1.lifestyle - p2.lifestyle) * lifestylescale;
    // square by multiplying: ^ is XOR in C-family languages, not power
    double diff = sqrt(agediff * agediff + skindiff * skindiff
                       + hairdiff * hairdiff + lifestylediff * lifestylediff);
    return diff;
}
Note that diff in this example is not on a nice scale like (0..1). Its value can range from 0 (no difference) to something large (high difference). Also, this method is almost completely unscientific; it is just designed to quickly give you a working difference metric.
Look at algorithms for computing string difference. It's very similar to what you need. Store your attributes as a bit string and compute the (e.g. Hamming) distance between the strings, as sketched below.
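A tiny Python sketch of that idea, with made-up one-hot encodings for each attribute:

def to_bits(user):
    # Encode each categorical attribute as a fixed group of one-hot bits.
    skin = {"oily": "10", "dry": "01"}[user["skin"]]
    hair = {"short": "100", "medium": "010", "long": "001"}[user["hair"]]
    life = {"outdoor": "10", "tv": "01"}[user["lifestyle"]]
    return skin + hair + life

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

u1 = {"skin": "oily", "hair": "long", "lifestyle": "tv"}
u2 = {"skin": "dry", "hair": "long", "lifestyle": "tv"}
print(hamming(to_bits(u1), to_bits(u2)))  # 2: they differ only in skin type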
You should read about these two topics:
the most popular clustering algorithm, k-means,
and similarity matrices, which are essential in clustering.
