Where is the algorithm or lookup table that "resolves" an OLC short code?

Many geocodes, such as Geohash and OLC (Open Location Code), can be shortened using a context reference, as described here and here.
For example:
Being able to say WF8Q+WF, Cape Verde, Praia is significantly easier than remembering and using 796RWF8Q+WF.
The resolver software takes "Cape Verde, Praia" (or the ISO abbreviation CV instead of Cape Verde) and transforms it into a code prefix. The resolver presumably makes use of something like a lookup table:
Prefix | Country | Name (replaces prefix) | Reference (is it?)
-------+---------+------------------------+------------------
796R | CV | Praia | 796RWFMP ?
796R | CV | Joao Varela | 796RXC4C ?
797R | CV | Cruz do Gato | 797R3F38 ?
... | ... | ... | ...
I am supposing that the hidden (black-box) algorithm does something simple, based on an official lookup table like the one illustrated above. It uses the prefix column of the lookup table to translate a short code into a complete code, or the inverse:
Translating short code to complete code. To retrieve the location from the OLC short code, you only need to know the prefix. Example: "WF8Q+WF, CV, Praia" will use the CV | Praia row of the lookup table, which supplies the prefix 796R to resolve the code, concatenating prefix with suffix: "796R" with "WF8Q+WF". It is like a function recoverNearest('WF8Q+WF', getReferencePoint_byText(lookup, "CV", "Praia")), but Google/PlusCodes has not published the lookup dataset for Cape Verde.
Translating complete code to short code. To show the short code for a location (e.g. from 796RWF8Q+WF), it is necessary to find the "nearest reference" to resolve the spatial query — the Joao Varela and Praia rows have the same prefix, but Praia's reference, through 796RWF, matches better. It is like a function shorten('796RWF8Q+WF', getReferencePoint_byNearFrom(lookup, '796RWF8Q+WF')), but Google/PlusCodes has not published the lookup dataset for Cape Verde.
Question: where is the official lookup table for Cape Verde?
NOTES
We can split this into more questions to generalize:
Is plus.codes really a black box? (Perhaps I am using some wrong hypothesis in my explanation.)
Does the lookup table for a country like Cape Verde exist, and can we download it? Where is Google publishing it?
The official lookup table of Cape Verde exists and Google is respecting it... where is the Cape Verde government publishing it?
More illustrations for readers who do not understand the central problem:
Translation from complete code to short code. Given the prefix 796R, when is a complete code 796Rxxxx+yy translated to "Praia xxxx+yy", and when to "Joao Varela xxxx+yy"? It is an arbitrary choice if you do not have a table with the official PlusCodes references.
Translation from short code to complete code. Suppose that I am developing JavaScript software. The inputs are the short code xxxx+yy and a name (country and city, or country/state/city/district). Considering only Cabo Verde place names, how do I convert names into prefixes exactly as PlusCodes does?
(Edit after discussions.) A preliminary conclusion. There are only two possible answers to the question:
show a link where PlusCodes published its name-to-prefix table;
show the source code of an algorithm, developed by reverse engineering, that reproduces PlusCodes exactly. I suppose the simplest such algorithm uses the ordinary OLC encode/decode plus a parser that translates names into prefixes (or vice versa), based on an "official lookup table".

Open Location Code is just another representation of standard geographic coordinates: latitude and longitude. So, to get the OLC for any place you need only the geographic coordinates of that place (see the encoding section), and vice versa.
With a database of Cape Verde towns and their coordinates you can build your own lookup table for quick OLC transformation at any required precision (starting from Wikipedia's List of cities and towns in Cape Verde, or using any of the free world-cities databases), or you can simply convert the OLC to latitude and longitude and then work with those coordinates.
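Such a self-built lookup table can be sketched in a few lines of Python. The prefixes and reference codes below come from the table in the question; the matching logic (longest shared leading substring as a stand-in for "nearest reference") is only a guess at what a resolver could do, not PlusCodes' actual implementation.

```python
# Hypothetical lookup table; prefixes/reference codes taken from the question.
LOOKUP = [
    # (prefix, country, name, reference code)
    ("796R", "CV", "Praia",        "796RWFMP"),
    ("796R", "CV", "Joao Varela",  "796RXC4C"),
    ("797R", "CV", "Cruz do Gato", "797R3F38"),
]

def _shared_prefix_len(a, b):
    """Length of the common leading substring of a and b."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def recover(short_code, country, name):
    """Short code -> complete code: prepend the prefix of (country, name)."""
    for prefix, c, n, _ref in LOOKUP:
        if (c, n) == (country, name):
            return prefix + short_code
    raise KeyError((country, name))

def shorten(full_code):
    """Complete code -> short code: drop the prefix of the reference point
    whose code shares the longest leading substring with the input."""
    prefix, country, name, _ref = max(
        LOOKUP, key=lambda row: _shared_prefix_len(row[3], full_code))
    return full_code[len(prefix):], country, name
```

Here recover("WF8Q+WF", "CV", "Praia") yields "796RWF8Q+WF", and shorten("796RWF8Q+WF") selects Praia because its reference 796RWFMP shares the longest leading substring (796RWF) with the input; a real resolver would compare geographic distances rather than code strings.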

Restart the numbering of the reference labels in the appendix body in overleaf

I have created a supplementary section in my journal manuscript using the following script:
\appendix
%%%
\renewcommand{\appendixname}{S}
\renewcommand{\thesection}{S}
\renewcommand\thefigure{\thesection.\arabic{figure}}
\setcounter{figure}{0}
\renewcommand*{\thepage}{S\arabic{page}}
\setcounter{page}{1}
%%%
\begin{center}
\section*{Supplementary Material}
\end{center}
%%%
\subsection{Sub-heading1}
A separate bibliography also has been generated for the appendix using the multibib package as follows:
\usepackage[resetlabels]{multibib}
\newcites{supp}{Supplementary References}
and declaring
%% Loading supplementary bibliography style file
\bibliographystylesupp{unsrt}
% Loading supplementary bibliography database
\bibliographysupp{cas-sc-template-refs.bib}
\end{document}
resulting in a reference section that looks like this:
Supplementary References
However, the reference labels in the body text do not change:
S.2. Discussion
S.2.1. Subheading2
The role of the structural of squares and the circles is clearly seen in the
interdependence of property on the values of energy and density as
shown in Figures S.4a and S.4b. There is a clear clustering of data
points based on the primary property as viewed against its dependence
on secondary property in Figures S.4c. The high-value compositions
are observed to be all apples and the medium value ones are observed
to be oranges. The values thus predicted placed most of them in the
low– and medium–value range [66].
The reference numbers are still from the main document's bibliography.
I have tried the \DeclareOption{resetlabels}{\continuouslabelsfalse} option given in the multibib package documentation at http://tug.ctan.org/tex-archive/macros/latex/contrib/multibib/multibib.pdf, but to no avail.
Is there any way to renumber these reference labels as well?

How can I display multi-line scenario text in Extent reports?

In my feature file, I am checking more than one requirement using the same scenario. I have written the scenario like below:
Scenario: My first requirement ID
My second requirement ID
My third requirement ID
Etc
After execution, the Extent report shows only the result as
Scenario: My first requirement ID
How can I get all three IDs in the Extent report?
NOTE: Each of my scenario titles is lengthy.
Can you explain your scenario text a little bit more? According to the documentation, the scenario should describe in human terms what we expect the software to do. It is quite unusual to include expected data in that scenario text. Are you using the ID from an enum? If so, it would be better to spell out the enum in human-readable terms: Scenario: UserType is Administrator, for example. Another option would be to use a Scenario Outline, something like
Scenario Outline: My generic requirement statement
Given Id <whateverId> is provided
When I do <activity>
Then I expect to see <result>
Examples:
| whateverId | activity | result |
| 12 | firstMethod | MyResult |
| 20 | secondActivity | anotherResult |
| 42 | thirdExample | thirdResult |
The variable names provided in the outline in angle brackets become the column headers in the examples grid. Just be sure to indent the grid below the Examples: line and also include the pipe | on both the left and right boundaries of the grid. Hopefully that helps.

Combining phrases from a list of words in Python 3

I am doing my best to grab information out of a lot of PDF files. I have them in dictionary format, where the key is a given date and the value is a list of occupations.
It looks like this when proper:
'12/29/2014': [['COUNSELING',
'NURSING',
'NURSING',
'NURSING',
'NURSING',
'NURSING']]
However, occasionally there are occupations of several words which cannot be reliably understood in single-word form, such as this:
'11/03/2014': [['DENTISTRY',
'OSTEOPATHIC',
'MEDICINE',
'SURGERY',
'SOCIAL',
'SPEECH-LANGUAGE',
'PATHOLOGY']]
Notice that "osteopathic medicine & surgery" and "speech-language pathology" are the full text of two of these entries. This gets hairier when we also have examples of just "osteopathic medicine" or even "medicine."
So my question is this: how should I go about testing combinations of these words to see if they match more complex occupational titles? I can rely on the original order of the words, as I have maintained that from the source.
Thanks!

Finding Related Topics using Google Knowledge Graph API

I'm currently working on a behavioral-targeting application and I need a considerably large keyword database/tool/provider that enables my app to reach similar keywords from a given keyword. I recently found Freebase, which had been providing a similar service before Google acquired it and integrated it into their Knowledge Graph. I was wondering if it's possible to get a list of related topics/keywords for a given entity.
import json
import urllib.parse
import urllib.request

api_key = 'API_KEY_HERE'
query = 'Yoga'
service_url = 'https://kgsearch.googleapis.com/v1/entities:search'
params = {
    'query': query,
    'limit': 10,
    'indent': True,
    'key': api_key,
}
url = service_url + '?' + urllib.parse.urlencode(params)
response = json.loads(urllib.request.urlopen(url).read())
for element in response['itemListElement']:
    print(element['result']['name'] + ' (' + str(element['resultScore']) + ')')
The script above returns the results below, though I'd like to receive topics related to yoga, such as health, fitness, gym and so on, rather than things that have the word "Yoga" in their name.
Yoga Sutras of Patanjali (71.245544)
Yōga, Tokyo (28.808222)
Sri Aurobindo (28.727333)
Yoga Vasistha (28.637642)
Yoga Hosers (28.253984)
Yoga Lin (27.524054)
Patanjali (27.061115)
Yoga Journal (26.635073)
Kripalu Center (26.074436)
Yōga Station (25.10318)
I'd really appreciate any suggestions, and I'm also open to using any other API if there is any that I could make use of. Cheers.
I see your point :) So here's the script I use for that, built on Serpstat's API. Here's how it works:
The script collects the keywords from Serpstat's database
Then, it collects search suggestions from Serpstat's database
Finally, it collects search suggestions from Google's suggestions
Note that to make the script work correctly, it's preferable to fill in all the input boxes, but not all of them are required.
Keyword — required keyword
Search Engine — the search engine for which the analysis will be carried out. For example, for US Google you need to set g_us. The entire list of available search engines can be found here.
Limit — the maximum number of phrases from the organic results which will participate in the analysis. You cannot set more than 1000 here.
Default keys — a list of keywords. You should give each of them some "weight" so you still receive some kind of result if something goes wrong.
Format: type; keyword; weight. Every keyword should be written on a new line.
Types:
w — one word
p — two words
Examples:
"w; bottle; 50" — initial weight of word bottle is 50.
"p; plastic bottle; 30" — initial weight of phrase plastic bottle is 30.
"w; plastic bottle; 20" — incorrect. You cannot use a two-word phrase for the "w" type.
Bad words — comma-separated list of words you want the script to exclude from the results.
Token — here you need to enter your token for API access. It can be found on your profile page.
You can download the source code of the script here

What spatial SRID is this? (trying to convert a .shp file to WGS84)

I'm trying to import some Shapefile mapping data into Sql2008. Before I do that, I need to convert it to WGS84 / SRID 4326, because all my existing data is in this format.
This is the source file info:
GEOGCS["GCS_GDA_1994",DATUM["D_GDA_1994",
SPHEROID["GRS_1980",6378137,298.257222101]],
PRIMEM["Greenwich",0],UNIT["Degree",0.017453292519943295]]
I've tried googling for this and haven't had too much luck.
Secondly, I've tried to check the spatial_reference_systems table and I can't see it in there.
eg. SELECT * from sys.spatial_reference_systems
So, can anyone help me? I can't convert it to SRID 4326 if I don't know its current SRID.
UPDATE 1
I found this page which explains the tech specs of GDA 1994, but it doesn't hint at any SRID number... ???
UPDATE 2
This search result page also has some interesting results. From there, if you click on the SR-ORG:6643: Australia Albers Equal Area Conic link, it explains that datum, and it's pretty much identical to the one I'm searching for. This would mean the SRID is 6643.
So is that the answer?
Using FME as my reference, this (GDA94) maps to EPSG:4283, which means that you need to use SRID 4283 (assuming that you're using EPSG-compliant SRID values).
Using this link, GDA94 can be mapped to SRID 4283, covering the Australian continent. If one knows, for example, that the data is in Western Australia, it may be better to use SRID 28350 and preserve greater accuracy.