Finding the relationship between two variables for RE Investment Analysis - Excel

I've been struggling with how to attack this problem for the better part of a week. I'll give some quick background to the situation. Basically, I'm trying to figure out a formula to find the average per-square-foot value of below-ground square footage (basement) and above-ground square footage independently within specified areas. That way, I can divide the two averages to determine the ratio of below-ground value to above-ground value per square foot. This will help me make certain adjustments for a few different real estate investment analytics. It is commonly known that a square foot in a basement is not as valuable as a square foot above ground. The national average is roughly half, meaning a square foot in the basement holds half the value of a square foot on the main floor or upstairs.
If I have a spreadsheet with columns for the sold price of all homes in an area, the above-ground square footage, and the below-ground square footage, is there a way to properly isolate the value of above-ground square footage from below-ground square footage and figure out, on average, what a square foot in the basement is worth relative to a square foot on the main floor/upstairs?
I've tried a lot of different approaches. I thought I had solved it a few different times, until realizing upon testing that I was finding solutions for different things... not what I wanted. I tried creating a system of linear equations... but realized quickly that there was no solution that way. Then I also tried to run regressions... but in all honesty, some of that went over my head. I know there has to be a way to figure this out, and I'm looking for any assistance I can get at this point. Any suggestions or resources would be much appreciated, thanks!
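For what it's worth, one standard way to tease the two rates apart is a two-variable linear regression of sold price on the two square-footage columns: the fitted coefficients are the implied dollars per above-ground and per below-ground square foot, and their ratio is the adjustment factor described above. Excel's LINEST accepts both x-columns at once and does exactly this multiple regression; below is a minimal sketch of the same fit in Python (all numbers are made-up placeholder data).

import numpy as np

# Hypothetical comps: sold price, above-ground sqft, below-ground (basement) sqft
price = np.array([310000, 255000, 410000, 365000, 290000], dtype=float)
above = np.array([1800, 1500, 2400, 2100, 1700], dtype=float)
below = np.array([900, 600, 1100, 1000, 650], dtype=float)

# Design matrix with an intercept column: price ~ b0 + b1*above + b2*below
X = np.column_stack([np.ones_like(above), above, below])
(b0, per_sf_above, per_sf_below), *_ = np.linalg.lstsq(X, price, rcond=None)

print(f"$/sqft above ground: {per_sf_above:.2f}")
print(f"$/sqft below ground: {per_sf_below:.2f}")
print(f"basement-to-main ratio: {per_sf_below / per_sf_above:.2f}")

The ratio printed at the end is the area-specific analogue of the "roughly half" national figure.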

Related

Accurate direct distance between two points (latitude, longitude, and altitude)

First of all, thanks for reading. I have Vincenty working in Excel(VBA) already, and want to do this in Excel, but this is a math question, not a coding question. By the way, I'll readily cop up front to the fact that ellipsoids are way over my head.
I'm looking to calculate accurate direct distance between two objects, given their latitude, longitude and altitude. Vincenty was an interesting start, but two issues:
(a) Vincenty is the distance along the ellipsoid, and I would need the chord length.
(b) Vincenty doesn't account for elevation, and the distance between points increases as elevation increases.
It would be easy to take Vincenty as my horizontal distance and use the elevation difference to solve for the slope (hypotenuse) distance, but that doesn't seem accurate.
Maybe this should just be solving for the line between points on concentric circles (i.e. the lower elevation versus the higher elevation) except what Earth radius to use? I mean, it's an ellipsoid, so...?
My distances will typically be 10 - 40 miles, but millimeter precision is required.
Point me in the right direction? Thanks! ~Mike
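One way to get a true chord (straight-line) distance that also accounts for altitude is to convert each point from geodetic coordinates (latitude, longitude, ellipsoidal height) to Earth-centered, Earth-fixed XYZ on the WGS84 ellipsoid and take the ordinary 3D distance, which sidesteps the question of which radius to use. A minimal sketch in Python, assuming the heights are ellipsoidal heights in metres (if they are heights above sea level, a geoid correction would be needed first):

import math

A = 6378137.0              # WGS84 semi-major axis (m)
F = 1 / 298.257223563      # WGS84 flattening
E2 = F * (2 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Latitude/longitude in degrees, ellipsoidal height in metres -> ECEF XYZ in metres."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime-vertical radius of curvature
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

def chord_distance(p1, p2):
    """Straight-line distance in metres between two (lat, lon, height) points."""
    return math.dist(geodetic_to_ecef(*p1), geodetic_to_ecef(*p2))

The conversion itself is an exact closed form, so at 10-40 miles whether millimetre results are achievable depends on the accuracy of the input coordinates and heights rather than on the math.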

Heuristics for blokus game

I'm working on a variant of the Blokus game where you are a single player trying to cover all corners of a given board using A* search. I'm trying to figure out what kind of heuristic would be good for this, so that, for example, on an 8x8 board the search finishes fast without expanding too many nodes.
I want an admissible heuristic. So far I've ruled out:
Manhattan and Euclidean distances, because in Blokus you place pieces diagonally adjacent to other pieces, which doesn't fit what Manhattan distance measures.
Information about the game:
It's a board game with an n x n board, and you are given pieces of various sizes and shapes (like Tetris pieces) that you can place on the board.
The rules are: each piece is usable only once, and you start from coordinate (0,0). You can only place a piece so that it touches one of your existing pieces diagonally (corner to corner). Two pieces cannot share an edge; they may only touch at corners.
The task is to finish the game with the lowest score possible (the score is the total number of tiles your placed pieces are composed of); you want to leave the board as vacant as possible.
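Not an answer to which heuristic is admissible here, but as a reference point, below is a bare-bones A* skeleton in Python (all names hypothetical) showing where the heuristic plugs in. Whatever h you try only has to never overestimate the true remaining number of tiles for the result to stay optimal; h(state) = 0 is a trivially admissible baseline (plain uniform-cost search) that you can use to sanity-check stronger candidates against.

import heapq
import itertools

def a_star(start, is_goal, successors, h):
    """Generic A*: successors(state) yields (next_state, step_cost); h must never overestimate."""
    counter = itertools.count()                         # tie-breaker so the heap never compares states
    open_heap = [(h(start), next(counter), 0, start)]
    best_g = {start: 0}
    while open_heap:
        _, _, g, state = heapq.heappop(open_heap)
        if is_goal(state):
            return g                                    # tiles used by the cheapest covering found
        if g > best_g.get(state, float("inf")):
            continue                                    # stale heap entry
        for nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(open_heap, (new_g + h(nxt), next(counter), new_g, nxt))
    return None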

Excel, Determine where data takes a dive

I'm trying to:
- determine where, in a set of measurement data, the data takes a dive...
- ... so I can plot a vertical line and
- ... plot a horizontal line in the graph.
I have no problem doing the 2nd and 3rd bullet points above on my own, so that's taken care of.
The problem I need help with is the first bullet point - determining WHERE the data takes a dive - WHERE the data crosses a threshold that basically says, "Whatever it is you're measuring is no longer performing as it is expected to."
Here's what I'm doing:
I am taking measurements with a measuring device that logs them in its internal memory and lets me download the measurement data to my computer as a CSV when the test session is complete.
I pull that CSV into an XLS and plot the data on a graph (see attached image).
Here's what I want to do:
If you look at the attached image I would like to find the value where the data DEFINITELY crosses BELOW the horizontal line so I can say, "Here is where the device being tested 'gave up the ghost' and was no longer able to perform as desired."
What the data roughly looks like:
Each measurement set will have the rough look and feel of the attached image but will be slightly different each time (because each object I am testing has roughly the same performance characteristics, but they all have their own manufacturing defects and variations).
The data set for the attached image is a data set of 7000 measurements.
I never really know where the horizontal line will be.
Examples of the data sets I have gotten in the past several tests look like this:
(394 to 0)
(390000 to 0)
(3.88 to 0)
(375000 to 0)
(39.55 to 0)
(59200 to 0)
and each data set will have about 1,000 to 7,000 measurements each.
Here's how I was trying to solve this issue:
I was using SLOPE() and trying to latch onto where the slope of the line took a dive / started heading toward a near-vertical drop, figuring that once the slope becomes steeply negative the data MUST be taking a dive. That didn't really work.
I was looking at using STDEV.P() in Excel and feeding it the entire data set. Then I was looking at doing the same thing but feeding it only the first 10, 30, 60 measurements but then I thought - we never really know just how many measurements will come through. Then I thought I would use the first 10% of the measurements and feed that to STDEV.P().
Please let me know what you think of this and please let me know of any ideas you may have.
Thanks.
H
Something like this should work to flag when the decay rate increases.
To find what 'direction' your data is going in you need the derivative.
Excel doesn't have a derivative formula but you can set it up pretty easily by using the (change in y)/(change in x) as demonstrated here:
http://faculty.educ.ubc.ca/sanderson/lab/CLFbiom/demo/diff.htm
I would then add a formula which counts how many data rows you have (=COUNTA(A:A) or similar),
then use that to get a step of 10% of your data.
Then check the value of the derivative in a cell against the cell 10% further down. If both are still negative (to account for the slight downhill at first) then you'll know the data has genuinely started its dive.
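For illustration, a rough sketch of that derivative-plus-10%-lookahead check outside Excel (Python; y is assumed to be the measurement column and x the time/index column):

import numpy as np

def find_dive(x, y, step_frac=0.10):
    """Return the first index where the slope is negative both here and 10% of the data further on."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    slope = np.diff(y) / np.diff(x)                 # (change in y) / (change in x)
    step = max(1, int(len(slope) * step_frac))
    for i in range(len(slope) - step):
        if slope[i] < 0 and slope[i + step] < 0:    # still negative further down -> a real dive, not a blip
            return i
    return None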
The right way to go about this is to model the data with an unknown discontinuity, something like "if time < break_time then (some constant plus noise) else (decaying exponential)". A maximum likelihood estimation for that model might require iteration or other operations which are clumsy in Excel -- maybe you should consider VB or Python or some other programming language. I.e. choose the tool to fit the problem and not the other way around.
See Seber and Wild, "Nonlinear Regression", for an extensive discussion of models with discontinuities.
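As a concrete illustration of that kind of break-point model (not full maximum likelihood, just least squares with a grid search over the break index, and assuming the tail really is roughly a decaying exponential), a rough Python sketch:

import numpy as np
from scipy.optimize import curve_fit

def fit_breakpoint(t, y, min_seg=10, stride=1):
    """Grid-search a break index k: constant-plus-noise before k, decaying exponential after k."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    best_sse, best_k = np.inf, None
    for k in range(min_seg, len(y) - min_seg, stride):    # use stride > 1 to speed up long series
        left, right_t, right_y = y[:k], t[k:] - t[k], y[k:]
        sse_left = np.sum((left - left.mean()) ** 2)
        try:
            (a, b), _ = curve_fit(lambda tt, a, b: a * np.exp(-b * tt),
                                  right_t, right_y, p0=(max(right_y[0], 1.0), 0.01), maxfev=2000)
        except RuntimeError:
            continue                                      # this candidate break didn't fit; skip it
        sse_right = np.sum((right_y - a * np.exp(-b * right_t)) ** 2)
        if sse_left + sse_right < best_sse:
            best_sse, best_k = sse_left + sse_right, k
    return best_k                                         # estimated index of the discontinuity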
If your data can be generally characterized as having:
(A) a more or less flat plateau region, followed by
(B) a downward trending region
then a basic strategy could be to start at the end of the data and march towards the beginning one point at a time, checking to see that the values are increasing. Once they stop increasing, you've found the break point.
The strategy assumes (unwisely?) that the downward trending region is smooth/noiseless. To make the solution more robust to noise, you could compare values that are 5 apart, or 10 apart, or whatever interval works to filter out the noise. Or you could use a moving average.
This strategy could potentially be made more efficient by starting the search somewhere in the middle of the data but still in the downward trending portion. If you know (based on experience) that any value that is (say) 0.5X the maximum is in the downward trending portion, you could start the search there.
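A sketch of that backward march in Python, with the "compare values N apart" idea as the noise filter:

import numpy as np

def find_break_from_end(y, gap=10):
    """Walk from the end toward the start; the dive region keeps increasing right-to-left, the plateau doesn't."""
    y = np.asarray(y, dtype=float)
    for i in range(len(y) - 1 - gap, 0, -1):
        if y[i] <= y[i + gap]:        # values stopped increasing (right-to-left): we've reached the plateau
            return i + gap            # approximate index where the plateau ends and the dive begins
    return None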
Hope that helps.
It appears as though you want to detect when the slope changes from something near zero to something negative. One way to detect this is to calculate the 2nd derivative of the values (calculate the slope of the slope). The 2nd derivative should be near zero in the flat portion of the data AND in the downward trending portion of the data. It should go negative at the break point. So finding the minimum (most negative) value of the 2nd derivative should locate the break point.
To implement this, you probably will need to filter noise. So calculate the first derivative (slope) over some suitable window of data:
=SLOPE(moving window of say 25 raw values)
Then calculate the second derivative (slope of slope):
=SLOPE(moving window of say 25 slope values)
Then look for the minimum.
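The same recipe sketched in Python (np.polyfit over a moving window stands in for Excel's SLOPE; the 25-point window is just the answer's suggestion and would need tuning):

import numpy as np

def break_by_second_derivative(y, window=25):
    """Slope over a moving window, then slope of those slopes; the minimum of the latter marks the bend."""
    y = np.asarray(y, dtype=float)
    x = np.arange(window, dtype=float)

    def rolling_slope(values):
        return np.array([np.polyfit(x, values[i:i + window], 1)[0]
                         for i in range(len(values) - window + 1)])

    d1 = rolling_slope(y)        # like =SLOPE(moving window of 25 raw values)
    d2 = rolling_slope(d1)       # like =SLOPE(moving window of 25 slope values)
    return int(np.argmin(d2))    # roughly the break point (offset from the raw data by about one window)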
Hope that helps.

Averaging many curves with different x and y values

I have several curves that contain many data points. The x-axis is time and let's say I have n curves with data points corresponding to times on the x-axis.
Is there a way to get an "average" of the n curves, despite the fact that the data points are located at different x-points?
I was thinking maybe something like using a histogram to bin the values, but I am not sure which code to start with that could accomplish something like this.
Can Excel or MATLAB do this?
I would also like to plot the standard deviation of the averaged curve.
One concern: the distribution of the x-values is not uniform. There are many more values close to t=0, but at t=5 (for example) the frequency of data points is much lower.
Another concern: what happens if two values fall within one bin? I assume I would need to average those values before calculating the averaged curve.
I hope this conveys what I would like to do.
Any ideas on what code I could use (MATLAB, EXCEL etc) to accomplish my goal?
Since your series are not uniformly sampled, interpolating prior to computing the mean is one way to avoid biasing towards times where you have more frequent samples. Note that interpolation will likely reduce the range of your values, since the interpolated points aren't likely to fall exactly at the times of your measured points. This has a greater effect on the extreme statistics (e.g. 5th and 95th percentiles) than on the mean. If you plan on going this route, you'll need the interp1 and mean functions.
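If it helps, a sketch of that interpolate-then-average route in Python rather than MATLAB (np.interp playing the role of interp1; curves is assumed to be a list of (t, y) pairs with t sorted ascending):

import numpy as np

def average_curves(curves, num_points=200):
    """Resample every curve onto one common time grid, then average (and take the std) point-by-point."""
    t_start = max(min(t) for t, _ in curves)      # stay inside the overlap so nothing is extrapolated
    t_end = min(max(t) for t, _ in curves)
    grid = np.linspace(t_start, t_end, num_points)
    resampled = np.array([np.interp(grid, t, y) for t, y in curves])
    return grid, resampled.mean(axis=0), resampled.std(axis=0)

The returned standard deviation gives the spread around the averaged curve that the question asks to plot.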
An alternative is to do a weighted mean. This way you avoid truncating the range of your measured values. Assuming x is a vector of measured values and t is a vector of measurement times in seconds from some reference time, you can compute the weighted mean by:
timeStep = diff(t);                                          % time gap between consecutive samples
weightedMean = sum(timeStep .* x(1:end-1)) / sum(timeStep);  % duration-weighted average of x
As mentioned in the comments above, a sample of your data would help a lot in suggesting the appropriate method for calculating the "average".

Way to reduce geopoints?

Does anyone have any handy algorithms that could be used to reduce the number of geo-points?
I am using a list of 2,000,000 postcodes which come with their own geo-point. I am using them to collect data from an API to be used offline. The program is written in C++.
I have to go through each postcode, calculate a bounding box based on the postcodes location, and then send it to the API which gives me some data near to that postcode.
However 2,000,000 is a lot to process and some of the postcodes are next to each other or close enough to each other that they would share some of the same data.
So far I've come up with two ways I could reduce them, but I am not sure if they would work:
1 - The program uses a data structure to record which postcodes overlap, then runs a routine a few times to remove the overlapping ones one by one until we are left with only non-overlapping postcodes.
2 - Start at the top-left geo-point of the UK and slowly increment it by the rough size of a postcode area until we have covered the entire UK.
Is there an easy way to reduce the number of postcodes so that as few of them overlap as possible, whilst still making sure I get data covering as much of the UK as possible? I was thinking there may be a handy algorithm for this that people use elsewhere.
You can use a quadtree, specifically a quadkey. A quadkey plots the points along a space-filling curve, which is similar to sorting the points into a grid. Then you can traverse the grid to search deeper in the tree. You can also search around a center point. You can also use a database with a spatial index. It depends how much the data overlap, but with a quadtree you can choose the size of the grid.
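As a very rough illustration of the grid/quadkey idea (Python rather than C++, and the cell size in degrees is a made-up knob you would tune to roughly one postcode area): snap every point to a grid cell and keep one representative per cell, so nearby postcodes that would request overlapping data collapse to a single API call.

def thin_points(points, cell_deg=0.01):
    """Keep one representative (lat, lon) per grid cell of roughly cell_deg x cell_deg degrees."""
    kept = {}
    for lat, lon in points:
        key = (int(lat // cell_deg), int(lon // cell_deg))   # cheap quadkey-style grid index
        kept.setdefault(key, (lat, lon))                     # first postcode seen in a cell wins
    return list(kept.values())

A real quadkey would interleave the bits of the two cell indices so that nearby cells sort next to each other, but for thinning alone the plain grid is enough.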
