I can't understand some parameters - geometry

I found a paper that discusses the camera field of view, but I didn't understand the parameters used in the given formula. I just want to understand what the parameters mean and how the formula works. The definitions of all the terms are in the figure.


What does 's' represent in cells of this dataset?

I need to use this public dataset:
https://catalog.data.gov/dataset/2015-2016-demographic-data-grades-k-8-school
You can view data in this table viewer:
https://data.cityofnewyork.us/Education/2015-2016-Demographic-Data-Grades-K-8-School/7yc5-fec2
Many cells contain "No data", which is clear to me. However, many others contain "s", which is unclear. I couldn't find any explanation.
I guess it may be some standard way of indicating why data is absent in that cell, or maybe not, and only the authors of the data know.
Please tell me what "s" means, or may mean.
There is no "universal" meaning behind this.
It could be a number of things:
Data not present
Not applicable
Use case specific information
Error in the dataset itself
...
If you want to create value out of this data, don't make assumptions; look for documentation or a description of the dataset that describes its columns, types, and expected content. The owner or creator of this dataset should of course also be able to tell you what it represents.

Handling different document layouts using Kofax

I am new to the Kofax TotalAgility solution, but I am well aware of OCR, OMR, and recognition mechanisms.
I have two forms in one folder, A and B.
Both of them are identical, but due to manual scanning there is a slight axis shift, say a 20-pixel right shift, so the layout differs slightly.
The layouts of Image A and Image B are different; the position of the form on the page is not fixed.
I know other solutions like ABBYY FineReader provide FlexiLayout, where this can be handled by finding text and setting up right/left/top/bottom offsets to automatically identify zones.
As I have only just started learning Kofax TotalAgility, I am unaware of all the options provided by Kofax Transformation Designer.
My question is: which locator should I use? I am currently working with the Advanced Zone Locator, and for the document I set as a reference (Image A), extraction works properly. But for the other (Image B), the text/box fields are not extracted because of the layout mismatch.
Can anyone point me in the right direction so I can handle this case properly?
I know I am asking for a direct option/solution; any help is highly appreciated.
In general, Kofax Transformations has two groups of locators:
Deterministic. You tell the locator precisely what to do, and how to do it (similar to an imperative approach when programming)
Probabilistic. You just tell your locator what to extract, and it works out the rest (based on AI).
Here's a (non-exhaustive) diagram I created the other day:
When working with forms, you might be tempted to rely on forms-specific locators such as the Advanced Zone Locator. While this locator can account for fields "moving around", for example due to images being jolted, zoomed, or distorted, there are certain limitations. Other locators don't have these limitations - the format locator for example allows you to define a certain pattern (a Regular Expression) that should be matched along with a keyword that has to be found somewhere around that pattern.
For your example, you could create a regex like M|F|X, and then define "Gender" as the keyword that needs to be present on the left.
However, any locator that's ruled by determinism follows Murphy's law - at some point that keyword might change. There could be different languages. And maybe additional letters for certain genders might be added; ultimately breaking your extraction logic.
Enter AI - while Murphy's law still applies when using Group Locators, the difference here is that users can train the system to pick up the new data. Said locator will automatically work out the best way to extract that piece of data. If you used a format locator, the customer would need to get back to you to add additional expressions, or have the keywords changed.
In your particular case, I'd try to use a Trainable Group Locator first. If you already know what you're looking for - for example SSNs that you have somewhere in a database, go for the Database Locator. Use Format Locators as a last resort, as tempting as they may be. Advanced Zone Locators are useful when you deal with forms, but I find myself using them almost exclusively for handprint or checkbox recognition.
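To make the format-locator idea more concrete outside of Kofax: conceptually it is just a pattern (regular expression) anchored by a nearby keyword. The small C++ sketch below only illustrates that matching logic with std::regex; it is not Kofax Transformation scripting, and the sample line, the keyword "Gender", and the values M|F|X are taken from the example above.

#include <iostream>
#include <regex>
#include <string>

// Conceptual illustration of a format locator: anchor on a keyword
// ("Gender") and match an expected pattern (M, F or X) to its right.
int main() {
    std::string ocrLine = "Name: Jane Doe   Gender: F   DOB: 1980-01-01"; // sample OCR output
    std::regex pattern(R"(Gender\s*:?\s*(M|F|X)\b)");
    std::smatch match;
    if (std::regex_search(ocrLine, match, pattern)) {
        std::cout << "Extracted gender: " << match[1] << '\n';
    } else {
        std::cout << "Keyword or pattern not found\n";
    }
    return 0;
}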

MKMapView showsUserLocation and load custom annotations

We are using MKMapView with showsUserLocation = YES so that we see a blue dot and accuracy ring. We've also implemented mapView:didUpdateUserLocation: to capture updates. From this method we get the user's location and use it to make a web service call, the results of which end up as map annotations.
The problem
As long as showsUserLocation = YES, the method mapView:didUpdateUserLocation: is called periodically. We only need the user's location at specific times, e.g. in viewDidAppear or when the user touches a button. If we set showsUserLocation to NO after the first update, we lose the blue dot, which we'd like to keep.
Ideas
One idea is to check the MKUserLocation value received by mapView:didUpdateUserLocation: against the previous value to see if there has been a change; if there has, check how large the change is before deciding to load fresh data.
Another idea is to use CLLocationManager and manually place a user pin on the map; the issue with this one is how to simulate the blue dot and accuracy ring.
Anyone know any examples? Or have thoughts on how to tackle this?
Thanks
The simplest approach is to ignore all callbacks after the first good one that you get. Alternatively, your first idea of checking how far the new location is from the previous one is also good. Simulating the blue dot and the pulsating accuracy ring is much more complicated.
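If it helps, here is a rough, platform-agnostic sketch of that distance check. In CoreLocation you would simply use CLLocation's distanceFromLocation: to get the distance in meters; the plain C++ below, with a hand-rolled haversine and an assumed 500 m threshold, just illustrates the gating logic before reloading annotations.

#include <cmath>
#include <iostream>

// Great-circle (haversine) distance in meters between two coordinates.
// In CoreLocation, CLLocation's distanceFromLocation: gives you this directly.
static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
    const double kEarthRadius = 6371000.0;            // mean Earth radius in meters
    const double kDegToRad = 3.14159265358979323846 / 180.0;
    double dLat = (lat2 - lat1) * kDegToRad;
    double dLon = (lon2 - lon1) * kDegToRad;
    double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
               std::cos(lat1 * kDegToRad) * std::cos(lat2 * kDegToRad) *
               std::sin(dLon / 2) * std::sin(dLon / 2);
    return 2.0 * kEarthRadius * std::asin(std::sqrt(a));
}

int main() {
    // Location used for the last web service call, and the latest update.
    double lastLat = 51.5007, lastLon = -0.1246;      // sample coordinates
    double newLat  = 51.5033, newLon  = -0.1195;
    const double kThresholdMeters = 500.0;            // assumed threshold; tune to your use case

    if (distanceMeters(lastLat, lastLon, newLat, newLon) > kThresholdMeters) {
        std::cout << "Moved far enough - reload annotations from the web service\n";
    } else {
        std::cout << "Small change - keep the existing annotations\n";
    }
    return 0;
}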
The approach taken was the first idea. As there has been no further activity or answers, and given the age of the question, this will be the accepted answer.

Correlation function in OpenCV

Help me learn about the correlation function in OpenCV.
I have read some references, but I am unable to get a clear idea.
Using correlation, can I match two images and assign a weight to them by considering the relation between the original and the other image?
(Because I want to match two images that are similar but not 100% identical.)
Is it a kind of template matching?
I wonder if someone can point me to sample C++ code somewhere on the net.
Thank you
You can use Image Correlation to find subimages inside an image.
This is how it works, looking for zeroes inside a textbox:
Also, take a look at this answer
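For a concrete starting point, here is a minimal C++ sketch using OpenCV's cv::matchTemplate with a normalized correlation mode. The file names are placeholders, and whether TM_CCORR_NORMED or TM_CCOEFF_NORMED suits you better depends on your images; the peak score (closer to 1 means more similar) can serve as the similarity "weight" you mention.

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Load the full image and the template (sub-image) to look for.
    // "scene.png" and "pattern.png" are placeholder file names.
    cv::Mat image = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);
    cv::Mat templ = cv::imread("pattern.png", cv::IMREAD_GRAYSCALE);
    if (image.empty() || templ.empty()) {
        std::cerr << "Could not load images\n";
        return 1;
    }

    // Slide the template over the image and compute a normalized
    // correlation score at every position.
    cv::Mat result;
    cv::matchTemplate(image, templ, result, cv::TM_CCOEFF_NORMED);

    // The location of the highest score is the best match; the score
    // itself tells you how similar the match is.
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    std::cout << "Best match at (" << maxLoc.x << ", " << maxLoc.y
              << ") with score " << maxVal << "\n";
    return 0;
}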

Freebase: Format search result to list all properties of object of unknown type(s)

I'm trying to write a MQL query to format a search result in freebase (the "output" parameter in the search API). I essentially want to find the (simple) values of all the properties of a given search result (without knowing anything about the types of the result a priori). By "simple", I mean only the default properties if the values are complex objects.
E.g., if I search for "Yo La Tengo" and this takes me to the result for "/en/yo_la_tengo", I want to be able to get the group's members (I just need names, not instruments or dates started), albums (again, just names), films contributed to (again, just names), etc.
Is there a simple way to do this with a search output query, given that I know nothing about the types? I imagine there's some sort of reflection magic I can use, and I've tried mucking about with "/type/reflect", but I'm not getting anywhere. I'm brand-new to MQL (though I have extensive SQL experience), so this is a little daunting. Any ideas?
Edit: So to clarify, I think the problem I'm seeing is due to mediator types like "performance" (an actor in a film) or "marriage". E.g., with a query about Yo La Tengo, I can see most (all?) of the information that I'm interested in, but with a similar query about [The Muppet Movie]( freebase.com/api/service/search?limit=1&mql_output=%5B%7B%22%2Ftype%2Freflect%2Fany_reverse%22%3A%5B%7B%7D%5D%2C%22%2Ftype%2Freflect%2Fany_master%22%3A%5B%7B%7D%5D%2C%22%2Ftype%2Freflect%2Fany_value%22%3A%5B%7B%7D%5D%7D%5D&query=The%20Muppet%20Movie -- sorry, SO thinks I'm a spammer so I can't make this a link), I don't see Frank Oz referenced at all (probably because his performance is referenced instead). Is there a generic way for me to "follow" mediator types to get all their properties? E.g., is there a single MQL output query that would allow me to get the actor in a performance (when linked from a film search result) and give me the spouse in a marriage (when linked from a person)?
Querying not only every property, but then following those properties another ply deep in the graph for all search results is going to be an incredibly expensive operation. What is the use case for this? Do you really have a UI where the user can see and effectively absorb all this information? To answer the question directly though, it's not possible to unpack mediator types automatically using mql_output on the search API.
I'd suggest combining a basic set of information on the search query with a deeper set of information on a topic that the user has expressed interest in (e.g. by hovering over it). This UI experience would be similar to that of Freebase Suggest.
In the years since the question was originally asked there have been some additional useful things added such as the "notable" pseudo-property which lets you see what the topic is notable for.
Of course everyone also needs to be moving to the new API, so the queries would be:
https://www.googleapis.com/freebase/v1/search?query=%22the%20muppet%20movie%22&limit=1&indent=true
https://www.googleapis.com/freebase/v1/topic/en/the_muppet_movie
AFAIK there is no way to do this in outright MQL, but you can:
Get all the properties of an object or type of object, then
Programmatically construct another MQL query to get those objects you want to know more about.
Look at this example:
[{
  "type|=": [
    "/film/actor",
    "/tv/tv_actor",
    "/celebrities/celebrity"
  ],
  "*": [{}]
}]
It grabs all the properties of all objects that have the type actor, tv_actor, or celebrity. When you run it, you'll see all the possible "follow" points you can explore.
This is not exactly what you want, but it should get you closer.
