Divide routes in OpenStreetMap into segments of a certain length - geospatial

I have never worked with any map SDK before, and now I am going to develop a map-based application.
I would like to use OpenStreetMap for this. There are 2 main questions I have been trying to answer:
1. Is it possible to divide a route into parts of a certain distance? Let's say the generated route is about 1500 miles long. I would like to divide it (i.e., get the coordinates) into segments of 500 miles each.
2. Is it possible to divide each part into tiles of a predefined length and width?
I appreciate any help.
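To make question 1 concrete, here is a minimal sketch (independent of any particular map SDK) of cutting a route geometry into fixed-length segments with shapely, assuming the route is available as an ordered list of coordinates in a metric projection; the function name and the 100 m sampling step are arbitrary choices:
from shapely.geometry import LineString

def split_route(coords, segment_length):
    """Split a route (a list of (x, y) points in meters) into pieces of
    roughly segment_length meters and return each piece's coordinates."""
    route = LineString(coords)
    segments = []
    start = 0.0
    while start < route.length:
        end = min(start + segment_length, route.length)
        # sample intermediate points roughly every 100 m along this piece
        n = max(int((end - start) // 100), 1)
        pts = [route.interpolate(start + k * (end - start) / n) for k in range(n + 1)]
        segments.append([(p.x, p.y) for p in pts])
        start = end
    return segments

# e.g. split a projected route into 500-mile pieces (1 mile is about 1609.34 m)
# pieces = split_route(route_coords, 500 * 1609.34)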


Problems with template matching and pyrDown

I am trying to make a normal template matching search more efficient by first doing the search on downscaled representations of the image. Basically I do a double pyrDown -> quarter resolution.
For most images and templates this works beautifully, but for some others I get really bad matching results. It seems to be especially bad for thin fonts or low contrast.
Look at this example image and this template (the original files are linked below):
At 100% resolution I get a matching probability of 99.9%
At 50% resolution I get 90%
At 25% resolution I get 87%
I don't really know why it's so bad for some images/templates. I tried to recreate and test this in Photoshop by hiding/showing the 25% downscaled template on top of the 25% downscaled image, and as you can see, it's not 100% congruent:
https://giphy.com/gifs/coWDjcvHysKgn95IFa
I need a way to get a higher matching probability for those matches at low resolution, because the search needs to be fast.
Any ideas on how to improve my algorithm?
Here are the original files:
https://www.dropbox.com/s/llbdj9bx5eprxbk/images.zip?dl=0
This is not unusual and those scores seem perfectly fine. However, here are some ideas that might help you improve the situation:
You mentioned that it seems to be especially bad for thin fonts. This could be happening because some of the pixels in the lines are being smoothed out or distorted by the Gaussian filter that is applied in pyrDown. It could also be an indication that you have reduced the resolution too much. Unfortunately, I think the pyrDown function in OpenCV only reduces the resolution by a factor of 2, so it does not give you the ability to fine-tune other scale factors. Another thing you could try is resize() with interpolation set to INTER_LINEAR or INTER_CUBIC. The resize() function allows any scale factor, so you have more control over the performance vs. accuracy trade-off.
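For example, a minimal OpenCV sketch of that idea in Python (the file names and the 0.35 scale are placeholders to tune, not values from the question):
import cv2

# load scene and template in grayscale (paths are placeholders)
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
tpl = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

scale = 0.35  # anything between 0.25 and 0.5; tune for speed vs. accuracy
small_img = cv2.resize(img, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
small_tpl = cv2.resize(tpl, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)

res = cv2.matchTemplate(small_img, small_tpl, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)
print(max_val, max_loc)  # best score and location at the reduced scale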
Use multiple templates of the same objects. If you come to a scene and can only achieve an 87% score, create a template out of that scene. Then add it to a database of templates to be used. Obviously, as the number of templates increases, so does the time it takes to complete the search.
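A rough sketch of that multi-template idea, assuming the templates are simply kept in a Python list (all names here are hypothetical):
import cv2

def best_match(image, templates, method=cv2.TM_CCOEFF_NORMED):
    """Try every stored template and keep the best score, location and index."""
    best = (0.0, None, None)
    for i, tpl in enumerate(templates):
        res = cv2.matchTemplate(image, tpl, method)
        _, max_val, _, max_loc = cv2.minMaxLoc(res)
        if max_val > best[0]:
            best = (max_val, max_loc, i)
    return best

# if a scene only reaches ~87%, crop the matched region out of that scene
# and append it to the template list so similar scenes match better later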
The best way to deal with this scenario is to perform an exhaustive match on the highest level of the pyramid and then track it down to the lowest level using a reduced search space on the lower levels. By exhaustive I mean you search all rows and all columns across the entire top pyramid level image. You keep track of the locations (row, col) of the highest matches on the highest level (you are probably already doing that). Then you multiply those locations by a factor of 2 and perform a restricted search on the next lower level (e.g., a 5 x 5 shift centered on the rough location). You keep doing this until you reach the bottom level. This gives you the best overall accuracy and performance, and it is also the way most industrial computer vision packages do it.
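A minimal sketch of that coarse-to-fine strategy with OpenCV in Python, assuming grayscale inputs, two pyrDown levels, and a 5 x 5 refinement window (the helper name and defaults are my own, not from the answer):
import cv2

def pyramid_match(img, tpl, levels=2, search_radius=5):
    # build image and template pyramids (index 0 = full resolution)
    imgs, tpls = [img], [tpl]
    for _ in range(levels):
        imgs.append(cv2.pyrDown(imgs[-1]))
        tpls.append(cv2.pyrDown(tpls[-1]))

    # exhaustive search over the entire top (smallest) level
    res = cv2.matchTemplate(imgs[-1], tpls[-1], cv2.TM_CCOEFF_NORMED)
    _, score, _, (x, y) = cv2.minMaxLoc(res)

    # track the match down, refining in a small window around 2 * (x, y)
    for lvl in range(levels - 1, -1, -1):
        h, w = tpls[lvl].shape[:2]
        x = min(x * 2, imgs[lvl].shape[1] - w)
        y = min(y * 2, imgs[lvl].shape[0] - h)
        x0, y0 = max(x - search_radius, 0), max(y - search_radius, 0)
        x1 = min(x + w + search_radius, imgs[lvl].shape[1])
        y1 = min(y + h + search_radius, imgs[lvl].shape[0])
        roi = imgs[lvl][y0:y1, x0:x1]
        res = cv2.matchTemplate(roi, tpls[lvl], cv2.TM_CCOEFF_NORMED)
        _, score, _, rel = cv2.minMaxLoc(res)
        x, y = x0 + rel[0], y0 + rel[1]
    return (x, y), score  # location and score at full resolution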

OSMnx: Divide a cached ox.graph into equal squares without re-downloading each

I am trying to divide a city into n squares.
Right now, I'm calculating the coordinates for all square centres and using the ox.graph_from_point function to extract the OSM data for each of them.
However, this gets quite slow for large n due to the API pause times.
My question:
Is there a way to download all city data from OSM, and then divide the cache file into squares (using ox.graph_from_point or other) without making a request for each?
Thanks
Using OSMnx directly - no, there isn't. You would have to script your own solution using the existing tools OSMnx provides.
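One way to script that yourself (a rough sketch: the place name is just an example, the grid is a plain lat/lon grid rather than true equal-area squares, and the truncate_graph_bbox signature differs between OSMnx versions):
import numpy as np
import osmnx as ox

# download the whole city once
G = ox.graph_from_place("Berlin, Germany", network_type="drive")

# bounding box of the downloaded graph
nodes = ox.graph_to_gdfs(G, edges=False)
west, south, east, north = nodes.total_bounds

# cut the bounding box into an n x n grid and truncate the cached graph
# locally for each cell -- no further Overpass requests are needed
n = 4
lons = np.linspace(west, east, n + 1)
lats = np.linspace(south, north, n + 1)

subgraphs = []
for i in range(n):
    for j in range(n):
        # OSMnx 1.x style arguments (north, south, east, west); newer
        # versions take a single bbox tuple instead
        sub = ox.truncate.truncate_graph_bbox(
            G, lats[j + 1], lats[j], lons[i + 1], lons[i],
            truncate_by_edge=True, retain_all=True)
        subgraphs.append(sub)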

Excel Geocoding -- extract coordinates from map

I've been trying to get longitude and latitude coordinates from Japanese addresses. There are a number of ways to do this, but most of them (such as Google Maps) only allow a limited number of queries a day (I have ~15000), and many do not support Japanese addresses.
Here is an example form of the addresses that I am using:
東京都千代田区丸の内1-9-1
Recently, however, I found that the 3D Maps tool in Excel 365 can plot addresses on a map, and it's fast enough to handle all of my addresses. But although I can see these points in Excel, I don't know whether there's a way to export them as longitude-latitude coordinate pairs.
Does anyone know a way to get the longitude-latitude pairs from Excel's 3D maps feature?
I've been working on exactly the same issue for weeks, and I think the best solution is the Google API.
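If you do go that route, a minimal sketch of calling the Google Geocoding API from Python (you need your own API key, and the daily quota and billing limits mentioned in the question still apply):
import requests

API_KEY = "YOUR_API_KEY"  # your own Google Cloud API key

def geocode(address):
    """Return (lat, lng) for one address, or None if geocoding fails."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"address": address, "key": API_KEY, "language": "ja"},
        timeout=10,
    )
    data = resp.json()
    if data["status"] != "OK":
        return None
    loc = data["results"][0]["geometry"]["location"]
    return loc["lat"], loc["lng"]

print(geocode("東京都千代田区丸の内1-9-1"))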

Interact with five million dots on a screen

We need to display 5 million dots (or very simple graphics objects) on a screen at the same time, and we want to interact with each of the dots (e.g., change their colors or drag/drop them).
To achieve this, we currently run a for-loop over the 5 million items, which is O(N) in the worst case, to find and change the state of the dot under the mouse coordinates (x, y). Due to the huge number of objects, this approach causes a lot of overhead (we have to run the five-million-item loop whenever the user selects a dot). I have already tested this approach, and it was almost impossible to make an interactive tool with it. Is there any way to access the dots rapidly and efficiently without running the full loop and causing this performance problem?
You really haven’t given many details.
These questions quickly come to mind:
Are dots the same size?
Are dots uniformly distributed on the canvas?
If one dot is “selected”, is only that one dot recolored or moved?
Why are you violating good data visualization rules by overwhelming the user? :)
With this lack of specificity in mind...
...Divide and conquer:
Divide your dot array into multiple parts.
Divide your dots onto multiple overlaying canvases.
Divide your dot array into multiple parts
This will allow you to examine far fewer array elements when searching for the one you need.
Create a container object with 1980 elements representing the 1980 “x” coordinates on the screen.
var container={};
for(var x=1;x<=1980;x++){
    container[x]=[];
}
Each container element is an array of dot objects with their dot centers on that x-coordinate.
Every dot object has enough info to locate and redraw itself.
A dot at x-coordinate == 125 might be defined like this:
{x:125,y:100,r:2,color:"red",canvas:1};
When you want to add a dot, push a dot object to the appropriate "x" element of the container object.
// add a dot with x screen coordinate == 952
container[952].push({x:952,y:100,r:2,color:"red",canvas:1});
Dots can be drawn based on the dot objects:
function drawDot(dot,context){
    context.beginPath();
    context.fillStyle=dot.color;
    context.arc(dot.x,dot.y,dot.r,0,Math.PI*2,false);
    context.closePath();
    context.fill();
}
When the user selects a dot, you can find it quickly by pulling the few container elements around the X where the user clicked:
function getDotsNearX(x,radius){
    // pull arrays from "x" plus/minus "radius"
    var dotArrays=[];
    for(var i=x-radius;i<=x+radius;i++){
        if(container[i]){dotArrays.push(container[i]);}
    }
    return(dotArrays);
}
Now you can process the dots in these highly targeted arrays instead of all 5 million array elements.
When the user moves a dot to a new position, just pull the dot object out of its current container element and push it into the appropriate new "x" container element.
Divide your dots onto multiple overlaying canvases
To improve drawing performance, you will want to distribute your dots across multiple canvases overlaid on each other.
Each dot object includes a canvas property that identifies which canvas the dot will be drawn on.
Have you already taken a look at the KineticJS framework? There is a very impressive stress test with exactly the same drag-and-drop functionality you're looking for. If you use KineticJS, you can access every single dot with the following event listener, and of course change its color, size, etc.:
stage.on('mousedown', function(evt) {
    var circle = evt.targetNode;
});

Search different audio files for equal short samples

Consider multiple (at least two) different audio files, like several different mixes or remixes. Naively, I would say it must be possible to detect samples, especially the vocals, that are almost identical in two or more of the files, at least as long as the vocal samples aren't modified, stretched, pitched, or reverbed too much.
So with what kind of algorithm or technique could this be done? Let's say the user tries to set time markers in all files, as accurately as possible, describing the data windows to compare that contain the presumably equal sounds, vocals, etc.
I know that no direct approach of comparing the raw WAV data in any way is useful. But even if I have frequency-domain data (e.g. from an FFT), I would have to use a comparison algorithm that shifts the comparison windows along the time axis, since I cannot assume that the samples I want to find are time-synced across all files.
Thanks in advance for any suggestions.
Hi, this is possible!
You can use a technique called LSH (locality-sensitive hashing), which is very robust.
Another way to do this is to run a spectrogram analysis on your audio files:
Construct the song database
1. Record your full song.
2. Transform the sound into a spectrogram.
3. Slice your spectrogram into chunks and get the three or four highest frequencies in each chunk.
4. Store all the points.
Match the song
1. Record one short sample.
2. Transform the sound into another spectrogram.
3. Slice your spectrogram into chunks and get the three or four highest frequencies in each chunk.
4. Compare the collected frequencies with your song database.
5. Your match is the song with the highest number of hits!
You can see how to do it here:
http://translate.google.com/translate?hl=EN&sl=pt&u=http://ederwander.wordpress.com/2011/05/09/audio-fingerprint-em-python/
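A rough sketch of the peak-based fingerprinting described above, using scipy (the window size and number of peaks per chunk are arbitrary choices, not taken from the linked post):
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def fingerprint(path, peaks_per_chunk=4):
    """Return one frozenset of the strongest frequency bins per spectrogram chunk."""
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # mix down to mono
    freqs, times, spec = spectrogram(audio, fs=rate, nperseg=4096)
    prints = []
    for t in range(spec.shape[1]):
        # keep the three or four highest-energy frequency bins of this chunk
        top = np.argsort(spec[:, t])[-peaks_per_chunk:]
        prints.append(frozenset(top.tolist()))
    return prints

def match_score(sample_prints, song_prints):
    """Count how many chunks of the short sample reappear anywhere in the song."""
    song_set = set(song_prints)
    return sum(1 for p in sample_prints if p in song_set)

# the best match is the song in the database with the highest score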
ederwander
