Speed up getting distance between two lat and lon - python-3.x

I have two DataFrames containing Lat and Lon columns. I want to find the distance from each (Lat, Lon) pair in one DataFrame to ALL (Lat, Lon) pairs in the other DataFrame and take the minimum. The package I am using is geopy. The code is as follows:
from geopy import distance
import numpy as np

distanceMiles = []
count = 0
for id1, row1 in df1.iterrows():
    target = (row1["LAT"], row1["LON"])
    count = count + 1
    print(count)
    for id2, row2 in df2.iterrows():
        point = (row2["LAT"], row2["LON"])
        distanceMiles.append(distance.distance(target, point).miles)
    closestPoint = np.argmin(distanceMiles)
    distanceMiles = []
The problem is that df1 has 168K rows and df2 has 1200 rows. How do I make it faster?

geopy.distance.distance uses the geodesic algorithm by default, which is rather slow but more accurate. If you can trade accuracy for speed, you can use great_circle, which is ~20 times faster:
In [4]: %%timeit
...: distance.distance(newport_ri, cleveland_oh).miles
...:
236 µs ± 1.67 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [5]: %%timeit
...: distance.great_circle(newport_ri, cleveland_oh).miles
...:
13.4 µs ± 94.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
You can also use multiprocessing to parallelize the computation:
from multiprocessing import Pool
from geopy import distance
import numpy as np

def compute(points):
    target, point = points
    return distance.great_circle(target, point).miles

with Pool() as pool:
    for id1, row1 in df1.iterrows():
        target = (row1["LAT"], row1["LON"])
        distanceMiles = pool.map(
            compute,
            (
                (target, (row2["LAT"], row2["LON"]))
                for id2, row2 in df2.iterrows()
            )
        )
        closestPoint = np.argmin(distanceMiles)

Leaving this here in case anyone needs it in the future:
If you need only the minimum distance, then you don't have to brute-force all the pairs. There are data structures that can solve this in O(n*log(n)) time, which is much faster than the brute-force method.
For example, you can use a generalized k-nearest-neighbors (with k=1) algorithm to do exactly that, given that you pay attention to your points being on a sphere, not a plane. See this SO answer for an example implementation using sklearn (a minimal sketch along those lines follows below).
There also seem to be a few libraries that solve this directly, like sknni and GriSPy.
Here's also another question that talks a bit about the theory.
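As a rough illustration of the k-nearest-neighbor idea (a minimal sketch, not the linked answer's code), scikit-learn's BallTree with the haversine metric can answer the original question directly; it assumes the df1/df2 frames and LAT/LON columns from the question, uses great-circle rather than geodesic distances, and the Earth-radius constant is my own choice:
import numpy as np
from sklearn.neighbors import BallTree

EARTH_RADIUS_MILES = 3958.8  # assumed mean Earth radius; pick the convention you need

# BallTree with metric="haversine" expects [lat, lon] in radians
candidates = np.radians(df2[["LAT", "LON"]].to_numpy())
queries = np.radians(df1[["LAT", "LON"]].to_numpy())

tree = BallTree(candidates, metric="haversine")
dist_rad, idx = tree.query(queries, k=1)  # nearest df2 row for every df1 row

closest_miles = dist_rad[:, 0] * EARTH_RADIUS_MILES  # distances in miles
closest_rows = idx[:, 0]                             # positional row numbers in df2
This builds the tree once over the 1200 candidate points and answers all 168K queries against it, instead of recomputing 168K x 1200 pairwise distances.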

This should run much faster if you use itertools instead of explicit for loops. The inline comments should help you understand what's happening at each step.
import numpy as np
import pandas as pd
import itertools
from geopy import distance

# Creating 2 sample dataframes with 10 and 5 rows of lat, lon columns respectively
df1 = pd.DataFrame({'LAT': np.random.random(10,), 'LON': np.random.random(10,)})
df2 = pd.DataFrame({'LAT': np.random.random(5,), 'LON': np.random.random(5,)})

# Zip the 2 columns to get (lat, lon) tuples for targets in df1 and points in df2
target = list(zip(df1['LAT'], df1['LON']))
point = list(zip(df2['LAT'], df2['LON']))

# itertools.product does a cross product between the 2 iterables
# You get tuples of the form ((lat, lon), (lat, lon)) where the first is the target
# and the second is the point. Feel free to change the order if needed
product = list(itertools.product(target, point))

# starmap(function, parameters) maps the distance function over the list of tuples;
# .miles converts each result to miles
geo_dist = [i.miles for i in itertools.starmap(distance.distance, product)]
len(geo_dist)
50
geo_dist = [42.430772028845716,
44.29982320107605,
25.88823239877388,
23.877570442142783,
29.9351451072828,
...]
Finally,
If you are working with a massive dataset, I would recommend using the multiprocessing library to spread the itertools.starmap work across cores and compute the distance values asynchronously; multiprocessing.Pool now supports starmap.
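For instance, a hedged sketch of that idea, reusing the product list built above (the worker function name and chunksize are my own choices):
from multiprocessing import Pool
from geopy import distance

def geo_miles(target, point):
    # swap in distance.distance for geodesic accuracy at the cost of speed
    return distance.great_circle(target, point).miles

if __name__ == "__main__":
    with Pool() as pool:
        geo_dist = pool.starmap(geo_miles, product, chunksize=1000)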

If you need to check all the pairs by brute force, I think the following approach is the best you can do.
Looping directly over the columns is usually slightly faster than iterrows, and replacing the inner loop with an apply over df2 saves time too.
count = 0
for lat1, lon1 in zip(df1["LAT"], df1["LON"]):
    target = (lat1, lon1)
    count = count + 1
    # print(count)  # printing is also time-expensive
    df2['dist'] = df2.apply(lambda row: distance.distance(target, (row['LAT'], row['LON'])).miles, axis=1)
    closestpoint = df2['dist'].min()     # if you want the minimum distance
    closestpoint = df2['dist'].idxmin()  # if you want the position (index) of the minimum
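If you can accept great-circle instead of geodesic distances, a fully vectorized NumPy haversine removes the per-row apply entirely; this is a sketch under that assumption (the Earth-radius constant and helper name are mine):
import numpy as np

EARTH_RADIUS_MILES = 3958.8  # assumed mean Earth radius

def haversine_miles(lat1, lon1, lats2, lons2):
    # lat/lon in degrees; lats2/lons2 are arrays covering every row of df2
    lat1, lon1, lats2, lons2 = map(np.radians, (lat1, lon1, lats2, lons2))
    dlat = lats2 - lat1
    dlon = lons2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lats2) * np.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * np.arcsin(np.sqrt(a))

lat2 = df2["LAT"].to_numpy()
lon2 = df2["LON"].to_numpy()
for lat1, lon1 in zip(df1["LAT"], df1["LON"]):
    dists = haversine_miles(lat1, lon1, lat2, lon2)
    closestpoint = dists.min()      # minimum distance in miles
    closestindex = dists.argmin()   # positional index of the closest df2 row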

Related

How to efficiently calculate squared distance of each element to every other in a matrix?

I have a matrix of the below format:
matrix = array([[-0.2436986 , -0.25583658, -0.16579486, ..., -0.04291612,
-0.06026303, 0.08564489],
[-0.08684622, -0.21300158, -0.04034272, ..., -0.01995692,
-0.07747065, 0.06965207],
[-0.34814256, -0.20597479, 0.06931241, ..., -0.1236965 ,
-0.1300714 , -0.110122 ],
...,
[-0.04154776, -0.07538085, 0.01860147, ..., -0.01494173,
-0.08960884, -0.21338603],
[-0.34039265, -0.24616522, 0.10838407, ..., 0.22280858,
-0.03465452, 0.04178255],
[-0.30251586, -0.23072125, -0.01975435, ..., 0.34529492,
-0.03508861, 0.00699677]], dtype=float32)
Since I want to calculate the squared distance of each element to every other, I am using the code below:
def sq_dist(a, b):
    """
    Returns the squared distance between two vectors
    Args:
      a (ndarray (n,)): vector with n features
      b (ndarray (n,)): vector with n features
    Returns:
      d (float): distance
    """
    d = np.sum(np.square(a - b))
    return d

dim = len(matrix)
dist = np.zeros((dim, dim))
for i in range(dim):
    for j in range(dim):
        dist[i, j] = sq_dist(matrix[i, :], matrix[j, :])
I get the correct result, but it takes 17 minutes for just 5000 elements (if I use 5000 elements instead of 100k).
Since I have a 100k x 100k matrix, the cluster fails after 5 hours.
How can I do this efficiently for a large matrix?
I am using Python 3.8 and PySpark.
The output matrix should look like:
dist = array([[0. , 0.57371938, 0.78593194, ..., 0.83454031, 0.58932155,
0.76440328],
[0.57371938, 0. , 0.66285896, ..., 0.89251578, 0.76511419,
0.59261483],
[0.78593194, 0.66285896, 0. , ..., 0.60711896, 0.80852598,
0.73895919],
...,
[0.83454031, 0.89251578, 0.60711896, ..., 0. , 1.01311994,
0.84679914],
[0.58932155, 0.76511419, 0.80852598, ..., 1.01311994, 0. ,
0.5392195 ],
[0.76440328, 0.59261483, 0.73895919, ..., 0.84679914, 0.5392195 ,
0. ]])
You can make it significantly faster by using numba:
import numpy as np
import numba as nb

@nb.njit(parallel=True)
def square_dist(matrix):
    dim = len(matrix)
    assert dim > 0
    dist = np.zeros((dim, dim))
    for i in nb.prange(dim):
        for j in nb.prange(dim):
            dist[i][j] = np.square(matrix[i, :] - matrix[j, :]).sum()
    return dist
Test and time it (op below refers to the question's original double-loop implementation, wrapped in a function):
rng = np.random.default_rng()
matrix = rng.random((200, 10))
assert np.allclose(op(matrix),square_dist(matrix))
%timeit op(matrix)
%timeit square_dist(matrix)
Output:
181 ms ± 556 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
947 µs ± 43.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
First, let's do a reality check. Computing N² distances where each one takes 3N−1 operations (N subtractions, N multiplications and N−1 additions) means you have to perform about 3N³ arithmetic operations. When N is 100k, that totals 3×10¹⁵ operations. A modern CPU with AVX-512 running at 3 GHz (3×10⁹ Hz) can perform 3×10⁹ [cycles/sec] × (512 / 32) [float32 entries in a vector] × 2 [vector ALUs per core] = 10¹¹ float32 operations/second. Therefore, to compute all entries in your distance matrix it will take no less than 3×10¹⁵ / 10¹¹ = 30000 seconds, or 8 hrs and 20 mins. This is a hard lower limit, only achievable if all operations are perfectly vectorisable, which they are not (e.g. the horizontal sum after the squaring). If the CPU isn't AVX-512 capable but only supports AVX2, the vector length is half as long and the time goes up to about 17 hrs. All this assumes the data fits in the CPU cache; it actually doesn't, so it needs proper prefetching.
The first thing you can do is cut the compute time in half by noticing that d[i,j] = d[j,i] and also d[i,i] = 0:
for i in range(dim):
    dist[i, i] = 0
    for j in range(i + 1, dim):
        dist[i, j] = dist[j, i] = np.sum(np.square(matrix[i, :] - matrix[j, :]))
Notice the loop here runs only for i < j, and the call to sq_dist has been inlined to save you 5×10⁹ unnecessary function calls!
But even then, you still need more than 4 hrs on that AVX-512 CPU (more than 8 hrs with AVX2 only).
If you really must cut down that compute time, you need to run it in parallel. With PySpark that means you have to store the vectors in a dataset, perform a self-join, and write a UDF that uses the BLAS implementation that ships with Spark (or install a native one) to compute the distance metric. Unfortunately, this is a low-level interface of Spark and it's only exposed to UDFs written in JVM languages - check this question for a Scala-based solution.
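For reference, when a block of rows does fit in memory, the usual way to hand the work to BLAS is the identity ||a − b||² = ||a||² + ||b||² − 2·a·b, which turns all pairwise squared distances into one matrix product. A minimal single-machine sketch (not a Spark UDF; the function name is mine):
import numpy as np

def squared_dist_block(A, B):
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b for every pair of rows in A and B
    sq_a = np.einsum("ij,ij->i", A, A)[:, None]
    sq_b = np.einsum("ij,ij->i", B, B)[None, :]
    d = sq_a + sq_b - 2.0 * (A @ B.T)   # the matrix product is the BLAS-heavy part
    np.maximum(d, 0.0, out=d)           # clamp tiny negatives caused by round-off
    return d
The same idea is what a per-partition UDF would do on each block of the self-join.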

How to calculate the orthogonal vector of a unit vector with numpy?

I have a set of unit vectors in a numpy array u:
import numpy as np
a = np.arange(12).reshape(2,6) # generate some vectors
u = a/np.linalg.norm(a, axis=0) # turn them into unit vectors
print(u)
[[0. 0.5547002 0.62469505 0.65079137 0.66436384 0.67267279]
[1. 0.83205029 0.78086881 0.7592566 0.74740932 0.73994007]]
Now I want to generate the vectors orthogonal to each vector (just by flipping the components of the vectors like (x,y) -> (-y,x) ):
ortogonal_u = np.array(-u[1,:], u[0,:])
and get the error
TypeError: data type not understood
What am I doing wrong? How do I fix it?
Is there a better way to find the orthogonal vectors of such a set of vectors? I would like it to be performant.
If you want this to be fast for large arrays, it helps to do things in place. The following will do that:
a = np.arange(12).reshape(2,6)
a = a[::-1, :]  # reverse the row order to swap x and y (note that this doesn't copy any data)
np.negative(a[0, :], out=a[0, :])  # negate one axis in place
# [[ -6  -7  -8  -9 -10 -11]
#  [  0   1   2   3   4   5]]
Speed testing this and some of the other posted answers:
N = 10000000
a0 = np.arange(2*N).reshape(2, N)

def f0(a):
    x = a[::-1, :]
    np.negative(x[0, :], out=x[0, :])
    return x

def f1(a):
    x = np.array([-a[1, :], a[0, :]])
    return x

def f2(a):
    x = np.flip(a, axis=0) * np.array([[1], [-1]])
    return x
%timeit f0(a0)
# 6.69 ms ± 81.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
%timeit f1(a0)
# 103 ms ± 1.41 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
%timeit f2(a0)
# 81.6 ms ± 1.76 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
So the in-place operation is more than 10x faster for a very large array. This is particularly fast since all it does is change the indexing direction (a single operation on the array header, independent of array size) and then change the sign of one row, so it's an unusual speed gain. The sign change still has to touch every element of that row; there may be a way to avoid even that, but I don't know of one. Also note that if you do the operation in place, the original array is overwritten, so this may not work for your use case.
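If you want to convince yourself that the reversed indexing really is only a view and that the negation happens in place, a small check (a sketch using np.shares_memory) looks like this:
import numpy as np

a = np.arange(12).reshape(2, 6)
view = a[::-1, :]                        # reversed row order, no data copied
print(np.shares_memory(a, view))         # True: both refer to the same buffer
np.negative(view[0, :], out=view[0, :])  # in-place sign change, no new array allocated
print(a)                                 # the change is visible in the original array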
You're passing two data arguments to the array constructor, but it only expects one. When a second argument is passed, array expects it to be a description of the datatype of the array, and u[0, :] is not a valid type descriptor.
The minimal change needed to get the expected result is to place the two slices in a list.
np.array([-u[1,:], u[0,:]])
You can use flip and broadcast operations:
import numpy as np
a = np.arange(12).reshape(2,6) # generate some vectors
u = a/np.linalg.norm(a, axis=0) # turn them into unit vectors
print(u)
print(np.flip(u, axis=0) * np.array([[1], [-1]])) # NEW LINE HERE
[[0. 0.14142136 0.24253563 0.31622777 0.37139068 0.41380294]
[1. 0.98994949 0.9701425 0.9486833 0.92847669 0.91036648]]
[[ 1. 0.98994949 0.9701425 0.9486833 0.92847669 0.91036648]
[-0. -0.14142136 -0.24253563 -0.31622777 -0.37139068 -0.41380294]]

What is the fastest way to find value that appears only once in a list?

I have tried using
nums = [...]  # 10000 items of type int
arr = numpy.array(nums)
for value, count in collections.Counter(arr).items():
    if count == 1:
        return value
but it is too slow. Is there any faster way to solve this?
It currently takes about 200 ms.
I got this using np.unique(..., return_counts=True):
import numpy as np
# Synthesise array...
arr = np.random.randint(0,8, (10000), np.int32)
# ... with one unique value
arr[5000] = 9
And now time the code:
%%timeit
...: v,c = np.unique(arr, return_counts=True)
...: np.argwhere(c==1)
164 µs ± 672 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
The array c looks like this:
array([1284, 1224, 1311, 1185, 1207, 1278, 1233, 1277, 1])
and the index of the unique value:
np.argwhere(c==1)
Out[62]: array([[8]])
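If you want the value itself rather than its position in the unique array, the same counts can be used as a boolean mask (a small follow-up sketch using the v and c computed above):
v, c = np.unique(arr, return_counts=True)
print(v[c == 1])   # -> array([9]) for the synthesised data above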
You can create a reverse dictionary based on frequency:
import numpy as np
import collections
nums=np.random.randint(2,100,10000)
nums[np.random.randint(0,10000-1)] = 1  # plant the value 1 so it appears exactly once
freq = collections.Counter(nums) # get frequencies
revFreq = {v: k for k, v in freq.items()} # reverse the dictionary so each frequency points to its value
revFreq[1] # get the value that appeared one time
1

how to find the area of shaded region from the plot using matplotlib [duplicate]

I have a set of points and would like to know if there is a function (for the sake of convenience and probably speed) that can calculate the area enclosed by a set of points.
for example:
x = np.arange(0,1,0.001)
y = np.sqrt(1-x**2)
points = zip(x,y)
given points the area should be approximately equal to (pi-2)/4. Maybe there is something from scipy, matplotlib, numpy, shapely, etc. to do this? I won't be encountering any negative values for either the x or y coordinates... and they will be polygons without any defined function.
EDIT:
The points will most likely not be in any specified order (clockwise or counterclockwise) and may be quite complex, as they are a set of UTM coordinates from a shapefile under a set of boundaries.
An implementation of the Shoelace formula can be done in NumPy. Assuming these vertices:
import numpy as np
x = np.arange(0,1,0.001)
y = np.sqrt(1-x**2)
We can redefine the function in numpy to find the area:
def PolyArea(x, y):
    return 0.5*np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
And getting results:
print(PolyArea(x, y))
# 0.26353377782163534
Avoiding the for loop makes this function ~50X faster than PolygonArea:
%timeit PolyArea(x,y)
# 10000 loops, best of 3: 42 µs per loop
%timeit PolygonArea(zip(x,y))
# 100 loops, best of 3: 2.09 ms per loop.
Timing is done in Jupyter notebook.
The most optimized solution that covers all possible cases, would be to use a geometry package, like shapely, scikit-geometry or pygeos. All of them use C++ geometry packages under the hood. The first one is easy to install via pip:
pip install shapely
and simple to use:
from shapely.geometry import Polygon
pgon = Polygon(zip(x, y)) # Assuming the OP's x,y coordinates
print(pgon.area)
To build it from scratch or understand how the underlying algorithm works, check the shoelace formula:
# e.g. corners = [(2.0, 1.0), (4.0, 5.0), (7.0, 8.0)]
def Area(corners):
    n = len(corners)  # number of corners
    area = 0.0
    for i in range(n):
        j = (i + 1) % n
        area += corners[i][0] * corners[j][1]
        area -= corners[j][0] * corners[i][1]
    area = abs(area) / 2.0
    return area
Note that this works for simple polygons only:
If you have a polygon with holes: calculate the area of the outer ring and subtract the areas of the inner rings (a shapely sketch for this case follows below).
If you have self-intersecting rings: you have to decompose them into simple sectors.
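For the polygon-with-holes case, shapely can do the subtraction for you; a small sketch with made-up coordinates:
from shapely.geometry import Polygon

outer = [(0, 0), (10, 0), (10, 10), (0, 10)]   # exterior ring, area 100
hole = [(2, 2), (4, 2), (4, 4), (2, 4)]        # interior ring, area 4
print(Polygon(outer, holes=[hole]).area)       # 96.0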
By analysing Mahdi's answer, I concluded that the majority of the time was spent in np.roll(). By removing the need for the roll, and still using numpy, I got the execution time down to 4-5 µs per loop, compared to Mahdi's 41 µs (for comparison, Mahdi's function took an average of 37 µs on my machine).
def polygon_area(x, y):
    correction = x[-1] * y[0] - y[-1] * x[0]
    main_area = np.dot(x[:-1], y[1:]) - np.dot(y[:-1], x[1:])
    return 0.5*np.abs(main_area + correction)
By calculating the correctional term, and then slicing the arrays, there is no need to roll or create a new array.
Benchmarks:
10000 iterations
PolyArea(x,y): 37.075µs per loop
polygon_area(x,y): 4.665µs per loop
Timing was done using the time module and time.clock()
maxb's answer gives good performance but can easily lead to loss of precision when coordinate values or the number of points are large. This can be mitigated with a simple coordinate shift:
def polygon_area(x, y):
    # coordinate shift
    x_ = x - x.mean()
    y_ = y - y.mean()
    # everything else is the same as maxb's code
    correction = x_[-1] * y_[0] - y_[-1] * x_[0]
    main_area = np.dot(x_[:-1], y_[1:]) - np.dot(y_[:-1], x_[1:])
    return 0.5*np.abs(main_area + correction)
For example, a common geographic reference system is UTM, which might have (x,y) coordinates of (488685.984, 7133035.984). The product of those two values is 3485814708748.448. You can see that this single product is already at the edge of precision (it has the same number of decimal places as the inputs). Adding just a few of these products, let alone thousands, will result in loss of precision.
A simple way to mitigate this is to shift the polygon from large positive coordinates to something closer to (0,0), for example by subtracting the centroid as in the code above. This helps in two ways:
It eliminates a factor of x.mean() * y.mean() from each product
It produces a mix of positive and negative values within each dot product, which will largely cancel.
The coordinate shift does not alter the total area, it just makes the calculation more numerically stable.
It's faster to use shapely.geometry.Polygon rather than to calculate yourself.
from shapely.geometry import Polygon
import numpy as np
def PolyArea(x, y):
    return 0.5*np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
coords = np.random.rand(6, 2)
x, y = coords[:, 0], coords[:, 1]
With that code, run %timeit:
%timeit PolyArea(x,y)
46.4 µs ± 2.24 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit Polygon(coords).area
20.2 µs ± 414 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
cv2.contourArea() in OpenCV gives an alternative method.
Example:
import cv2
import numpy as np

points = np.array([[0,0],[10,0],[10,10],[0,10]], dtype=np.int32)
area = cv2.contourArea(points)
print(area)  # 100.0
The argument (points, in the above example) is a numpy array with dtype int32 (or float32), representing the vertices of a polygon: [[x1,y1],[x2,y2], ...]
There's an error in the code above as it doesn't take absolute values on each iteration. The above code will always return zero. (Mathematically, it's the difference between taking signed area or wedge product and the actual area
http://en.wikipedia.org/wiki/Exterior_algebra.) Here's some alternate code.
def area(vertices):
    n = len(vertices)  # number of corners
    a = 0.0
    for i in range(n):
        j = (i + 1) % n
        a += abs(vertices[i][0] * vertices[j][1] - vertices[j][0] * vertices[i][1])
    result = a / 2.0
    return result
A bit late here, but have you considered simply using sympy?
A simple example is:
from sympy import Polygon
a = Polygon((0, 0), (2, 0), (2, 2), (0, 2)).area
print(a)
I compared every solution offered here to Shapely's area method result; they had the right integer part but the decimals differed. Only #Trenton's solution provided the correct result.
Now improving on #Trenton's answer to process coordinates as a list of tuples, I came up with the following:
import numpy as np

def polygon_area(coords):
    # get x and y as vectors
    x = [point[0] for point in coords]
    y = [point[1] for point in coords]
    # shift coordinates
    x_ = x - np.mean(x)
    y_ = y - np.mean(y)
    # calculate area
    correction = x_[-1] * y_[0] - y_[-1] * x_[0]
    main_area = np.dot(x_[:-1], y_[1:]) - np.dot(y_[:-1], x_[1:])
    return 0.5 * np.abs(main_area + correction)
Example output:
coords = [(385495.19520441635, 6466826.196947694), (385496.1951836388, 6466826.196947694), (385496.1951836388, 6466825.196929455), (385495.19520441635, 6466825.196929455), (385495.19520441635, 6466826.196947694)]
Shapely's area method: 0.9999974610685296
#Trenton's area method: 0.9999974610685296
This is much simpler, for regular polygons:
import math

def area_polygon(n, s):
    return 0.25 * n * s**2 / math.tan(math.pi/n)
since the formula is ¼ n s² / tan(π/n), given the number of sides n and the length of each side s.
Based on
https://www.mathsisfun.com/geometry/area-irregular-polygons.html
def _area_(coords):
    t = 0
    for count in range(len(coords)-1):
        y = coords[count+1][1] + coords[count][1]
        x = coords[count+1][0] - coords[count][0]
        z = y * x
        t += z
    return abs(t/2.0)

a = [(5.09,5.8), (1.68,4.9), (1.48,1.38), (4.76,0.1), (7.0,2.83), (5.09,5.8)]
print(_area_(a))
The trick is that the first coordinate should also be last.
def find_int_coordinates(n: int, coords: list[list[int]]) -> float:
    rez = 0
    x, y = coords[n - 1]
    for coord in coords:
        rez += (x + coord[0]) * (y - coord[1])
        x, y = coord
    return abs(rez / 2)

Extract the max value of a set of pixels, from a vector list

I'm working in Python and I need to extract the max or min values from a set of specific pixels in an image. Let's say my image is a 40 by 40 image. I have a list with some given vector coordinates, for example: vectorlist=[[10,15],[13,14],[15,23]]. I need to extract the pixel values at those coordinates from the list and calculate the min and max. I am looking for a fast way to do it, because a for loop is too slow.
a = []
for i in range(0, len(vectorlist)):
    a.append(image[vectorlist[i][0], vectorlist[i][1]])
max1 = max(a)
min1 = min(a)
If there is a faster way to do it, that would be great!
Thanks!
I agree with Mark that creating a mask array is probably a good idea because you can reuse this mask for other operations on this array.
import numpy as np
#create test data with random but reproducible data
np.random.seed(54321)
arr = np.random.randint(0, 255, (40, 40), dtype = "uint8")
vectorlist = [[10, 15], [13, 14], [15, 23]]
#extracting rows and columns of the vectorlist
rows, cols = zip(*vectorlist)
#create mask at points defined by vectorlist
mask = np.zeros(arr.shape, dtype = bool)
mask[rows, cols] = True
print(arr[mask])
#output
#[ 49 245 197]
print(np.max(arr[mask]))
#245
print(np.min(arr[mask]))
#49
Please note that indexing starts at 0, not 1; your question does not make clear whether your vectorlist takes this into account. Also make sure that the first value in each pair represents the row; if not, just switch rows and cols in the script when unpacking the zip object.
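As an aside (assuming all coordinates are within bounds and you don't need to reuse a mask), the same pixel values can also be pulled out with plain integer-array indexing, a small sketch using the arr and vectorlist defined above:
coords = np.array(vectorlist)             # shape (3, 2): row, column pairs
values = arr[coords[:, 0], coords[:, 1]]  # fancy indexing picks exactly those pixels
print(values.min(), values.max())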
I am an absolute beginner in Python, but it seems my comment was wrong, so I'll "man up", admit it, and explain how I worked it out; maybe someone will know why. Before anyone says it's not an answer: it is, because it improves on the original code (in function loop()) by introducing the improved loop2() function:
#!/usr/local/bin/python3
import numpy as np

# Generate an array 40x40 of random integers <100
image = np.random.randint(100, size=(40, 40))

# List of pixels we like
vectorlist = [[0,0],[1,1],[1,0],[39,39]]

# Boolean mask of elements we like
mask = np.reshape(np.zeros(1600, dtype=bool), (40, 40))
mask[0,0] = mask[1,1] = mask[1,0] = mask[39,39] = True

# OP's suggested method
def loop():
    a = []
    for i in range(0, len(vectorlist)):
        a.append(image[vectorlist[i][0], vectorlist[i][1]])
    mi = min(a)
    ma = max(a)
    return (mi, ma)

# Slight improvement on OP's method
def loop2():
    # Don't add a bunch of items to a list we don't need
    mi = ma = image[vectorlist[0][0], vectorlist[0][1]]
    for i in range(1, len(vectorlist)):
        this = image[vectorlist[i][0], vectorlist[i][1]]
        if this > ma:
            ma = this
        elif this < mi:
            mi = this
    return (mi, ma)

# My very own slow method using a masked array
def masked():
    selpix = image[mask]
    mi = np.amin(selpix)
    ma = np.amax(selpix)
    return (mi, ma)

print(loop())
print(loop2())
print(masked())
Sample Output
(22, 91)
(22, 91)
(22, 91)
I pasted all the above into IPython and then ran the following timing tests:
In [178]: %timeit loop()
1.96 µs ± 6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [179]: %timeit loop2()
1.4 µs ± 2.21 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
In [180]: %timeit masked()
4.64 µs ± 32.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Disappointed of Cheltenham :-)
