I am fairly new to AE, though I am aware of JS expressions and have a programming background.
I have the following type of data that I want to visualise.
I have about 10,000 different elements (locations in a city). Each element occurs at some point during the last 10 years (which I will compress into 60 seconds).
When each element occurs, I want a small sphere to appear. This sphere will appear somewhere in the X,Y space (depending on its latitude and longitude).
To provide some context: the data is a series of house sales. The larger the price of the sale, the larger the sphere.
Because of the large number of elements, it is not possible (nor desirable) to do this manually in AE.
So my question is: how can I do this programmatically in AE?
Is it possible?
Or should I write a program to automatically create some type of SVG that I could then import into AE?
Or another approach entirely?
Any ideas on a basic approach would be welcome.
Thanks,
Mark
Q1: How can I do this programmatically in AE?
A1: You can use the ExtendScript API to run a JavaScript script that creates your elements.
Q2: Is it possible?
A2: Yes.
Q3: Should I write a program to automatically create some type of SVG that I could then import into AE?
A3: You can't import SVG into AE (as far as I know).
Q4: Or another approach entirely?
A4: No, you are in the right spot (IMHO).
You could use my two scripts (warning: shameless self-promotion) for creating maps and location markers in an AE comp.
Locations (http://aescripts.com/locations/) creates the markers for you from CSV files.
AEMaps (http://aescripts.com/aemap/) creates the map for you from GeoJSON files.
Or check out GeoLayers by Markus Bergelt (http://aescripts.com/geolayers/), which does the same but in a different way.
To add the price/size to your spheres you need to do some additional scripting. I suggest hacking on Locations for that.
The function add_projected_marker at line 1814 should be a good entry point for adding additional expressions to your newly created layer.
You could add an expression like this to the scale property (where v is the value you read from your CSV):
layer.transform.scale.expression = "var v = 50;\n[v,v];"
To get the data into the locdata object, you need to hack on the function win.read_button.onClick at line 1181.
I am using the Neo library for linear algebra in Nim, and I would like to extract arbitrary rows from a matrix.
I can explicitly select a contiguous sequence of rows as per the examples in the README, but I can't select a disjoint subset of rows.
import neo
let x = randomMatrix(10, 4)
let some_rows = @[1, 3, 5]
echo x[2..4, All]      # works fine
echo x[some_rows, All] # error
The first echo works because you are creating a Slice object, which neo has defined a proc for. The second echo uses a sequence of integers, and that kind of access is not defined in the neo library. Unfortunately, slices define contiguous closed ranges (you can't even specify a step to iterate in increments bigger than one), so there is no way to accomplish what you want with the current API.
Looking at the structure of a Matrix, it seems that it is highly optimised to avoid copying data: matrix transformation operations reuse the data of the previous matrix and change only the access pattern and dimensions. As such, a transformation selecting arbitrary rows would not be possible; the indexes in your example access non-contiguous data, and this would somehow need to be encoded in the new structure. Moreover, if you wrote @[1, 5, 3], that would defeat any kind of normal iterative looping.
An alternative, of course, is to write a proc which accepts a sequence instead of a slice and builds a new matrix by copying data from the old one. This implies a performance penalty, but if you think this would be a good addition to the library, please request it in the project's issue tracker. If it is not accepted, you will need to write such a proc yourself for use in your own programs.
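For illustration, here is a sketch in Python (plain nested lists standing in for Neo's Matrix type) of what such a row-copying proc would do: accept any sequence of row indices, in any order, and build a fresh matrix from copies of those rows.

```python
def select_rows(matrix, rows):
    """Build a new matrix by copying the requested rows.

    Unlike a slice-based view, this works for disjoint and
    reordered row subsets, at the cost of copying the data.
    """
    return [list(matrix[i]) for i in rows]

# A 10x4 matrix whose entry (r, c) is r*10 + c, so rows are easy to spot.
x = [[r * 10 + c for c in range(4)] for r in range(10)]
some_rows = [1, 3, 5]
print(select_rows(x, some_rows))  # rows 1, 3 and 5, copied in order
```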
I've seen examples using the NewDimension method to create a dimension between two points and two lines, I assume in the family editor, but I want to add a dimension between two family instances in the model, such as a pipe tap's centerline and a pipe end. The dimension would then 'drive' the distance if the user edits it, moving the outlet along the pipe, just as it does when a user creates the dimension through the Revit UI.
I just don't know which way Revit wants me to do this:
Find the family instance IDs, go into each family, and find a line/plane/point in the family to use as a dimension reference with NewDimension. Hopefully this would work outside the family editor when making a dimension between two different family instances (pipe end and pipe tap).
Find the X,Y,Z locations of the points you want to snap to and create a dimension (using the NewDimension method, for example) between those two locations; if the locations fall on appropriate points, like a pipe end and the centerline of a pipe tap, then perhaps Revit automatically makes it a 'smart' dimension that 'drives' the location of the pipe tap.
Here are some promising methods I found in the API; I'm not sure which of them I should be using, though.
NewDimension
AlignedDimension
AddListeningDimensionBendToBend
AddListeningDimensionSegmentToBend
AddListeningDimensionSegmentToSegment
SetElementsToDimension
Look at the two Building Coder samples showing how to Dimension Walls by Iterating Faces and Dimension Walls using FindReferencesByDirection.
The approach used for walls works with standard family instances as well.
Note that the FindReferencesByDirection method has now been replaced by the ReferenceIntersector class.
The job of the layout is to place vertices at given locations. If the layout is iterative, then the layout's job is to iterate through an algorithm, moving the vertices with each step, until the final layout configuration is achieved.
I have a multi-level graph - say 100 objects of type A, where each A object has 10 children of type B.
I would like the layout's placement algorithms to operate on the objects of type A only (let's say) and ignore the B objects.
The cleanest way to achieve this might be to define a transform that exposes only those elements that should participate in the placement algorithm via the step method.
Currently the step methods, assuming they respect the lock flag at all, include the locked vertices in their calculations, so lock/unlock won't work in this case.
Is it possible to do this somehow without resorting to multiple graph objects?
If you want to ignore the B objects entirely, then the simplest option is to create a graph consisting only of the A objects, lay it out, and use the locations from that layout.
That said, it's not clear how you intend to assign locations to the B objects. And if the A objects aren't connected to each other at all, then this approach won't make much sense. (OTOH, if they aren't connected to each other then you're really just laying out a bunch of trees.)
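To make the two-phase idea concrete, here is a small sketch in Python (the graph, the fixed A positions, and the circular-offset scheme are all made up for illustration; in a real JUNG application, phase 1 would be your iterative layout run on the A-only graph):

```python
import math

# Hypothetical two-level graph: each A node maps to its B children.
children = {"a0": ["b0", "b1"], "a1": ["b2"]}

# Phase 1: positions of the A nodes only. In practice these would come
# from running the layout algorithm on a graph containing just A nodes.
a_pos = {"a0": (0.0, 0.0), "a1": (100.0, 0.0)}

def place_children(a_pos, children, radius=10.0):
    """Phase 2: place each B child on a small circle around its parent A."""
    b_pos = {}
    for a, kids in children.items():
        ax, ay = a_pos[a]
        for i, b in enumerate(kids):
            angle = 2 * math.pi * i / max(len(kids), 1)
            b_pos[b] = (ax + radius * math.cos(angle),
                        ay + radius * math.sin(angle))
    return b_pos

print(place_children(a_pos, children))
```

The B placement rule is arbitrary; any scheme that derives B positions from the parent's final position would do.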
I am working on a project based on OptiX. I need progressive photon mapping, so I am trying to use the Progressive Photon Mapping sample, but the transparent material is not implemented there.
I've googled a lot and also tried to understand other samples that contain transparent materials (e.g. Glass, Tutorial, Whitted). In the end, I arrived at the following approach:
1. Find the hit point (intersection point) (h below)
2. Generate another ray from that point
3. Use the color returned by the newly generated ray
Below you can find the code for that part, but I do not understand why I get a black color (0.f, 0.f, 0.f) for the newly generated ray (step 3 above).
optix::Ray ray( h, t, rtpass_ray_type, scene_epsilon );
HitPRD refr_prd;
refr_prd.ray_depth = hit_prd.ray_depth+1;
refr_prd.importance = importance;
rtTrace( top_object, ray, refr_prd );
result += (1.0f - reflection) * refraction_color * refr_prd.attenuation;
Any ideas will be appreciated.
Please note that refr_prd.attenuation should contain some color after the call to rtTrace(). I've mentioned reflection and refraction_color to help you better understand the procedure; you can simply ignore them.
There are a number of methods to diagnose your problem.
Isolate the contribution of the refracted ray by removing any contribution of the reflected ray.
Make sure you have a miss program. HitPRD::attenuation needs to be written by all of your closest-hit programs and your miss programs. If you suspect the miss program is being called, set your miss color to something obviously wrong ([1,0,1] is my favorite).
Use rtPrintf in combination with rtContextSetPrintLaunchIndex (or setPrintLaunchIndex) to print the individual values of the product and see which term is zero for a given pixel. If you don't restrict the output to a given launch index, you will get too much output. You probably also want to print the ray depth as well.
Suppose I have a list of numbers and I've computed the q-quantile (using Quantile).
Now a new datapoint comes along and I want to update my q-quantile, without having stored the whole list of previous datapoints.
What would you recommend?
Perhaps it can't be done exactly without, in the worst case, storing all previous datapoints.
In that case, can you think of something that would work well enough?
One idea I had, if you can assume normality, is to use the inverse CDF instead of the q-quantile.
Keep track of the sample mean and variance as you go; then you can compute InverseCDF[NormalDistribution[sampleMean, Sqrt[sampleVariance]], q] (note that NormalDistribution takes the standard deviation, not the variance), which should be the value such that a fraction q of the values are smaller, which is what the q-quantile is.
(I see belisarius was thinking along the same lines.
Here's the link he pointed to: http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#On-line_algorithm )
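As a sketch of this idea in Python rather than Mathematica (the class name and structure are mine): Welford's online algorithm from the link above maintains the running mean and variance in one pass, and the normal inverse CDF then supplies the quantile estimate.

```python
from statistics import NormalDist

class RunningQuantile:
    """Streaming q-quantile estimate under a normality assumption.

    Tracks mean and variance with Welford's online algorithm,
    then evaluates the normal inverse CDF at q.
    """
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def quantile(self, q):
        var = self.m2 / (self.n - 1)  # sample variance
        return NormalDist(self.mean, var ** 0.5).inv_cdf(q)

rq = RunningQuantile()
for x in [1.0, 2.0, 3.0, 4.0, 5.0]:
    rq.add(x)
print(rq.quantile(0.5))  # for q = 0.5 this is just the mean, 3.0
```

Each new datapoint costs O(1) time and O(1) memory, but the estimate is only as good as the normality assumption.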
Unless you know that your underlying data comes from some distribution, it is not possible to update arbitrary quantiles without retaining the original data. You can, as others suggested, assume that the data has some sort of distribution and store the quantiles this way, but this is a rather restrictive approach.
Alternatively, have you thought of implementing this somewhere besides Mathematica? For example, you could create a class for your datapoints that contains (1) the double value and (2) a timestamp for when the data came in. With a SortedList of these datapoint objects (comparing on value), you could get the quantile very fast by simply indexing into the list. Want a historical quantile? Simply filter on the timestamps in your sorted list.
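A minimal sketch of that sorted-list idea in Python (using the standard library's bisect module; the class name is mine and timestamps are omitted for brevity):

```python
import bisect
import math

class ExactStreamingQuantile:
    """Keep every value in a sorted list; read any quantile by index."""
    def __init__(self):
        self.values = []

    def add(self, x):
        bisect.insort(self.values, x)  # binary-search insert keeps the list sorted

    def quantile(self, q):
        # Nearest-rank definition: the smallest stored value whose
        # rank is at least q * n.
        idx = max(0, math.ceil(q * len(self.values)) - 1)
        return self.values[idx]

esq = ExactStreamingQuantile()
for x in [9, 1, 8, 2, 7, 3, 6, 4, 5, 10]:
    esq.add(x)
print(esq.quantile(0.5))  # -> 5
```

This trades memory (all values are retained) for exactness: each insert is a binary search plus an O(n) shift, and any quantile is a single index lookup.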