What approach should I take to implement a dynamic graph using Xamarin.iOS graphics, without a plugin or framework?
An example of what I mean is illustrated in the image below. When the top graph in the image is swiped right, more nodes are dynamically added from the left side, as shown in the lower graph in the image.
Please note that the nodes are not all the same distance from each other, and the graph can also contain a shaded region.
I want to have a reference point, i.e. to know the coordinates of any point on an image exported (from any view) from Revit.
For example, in the attached image exported from Revit, I'd like to know the bounding box of the picture, or its middle point (in X,Y coordinates), or any other reference point.
Plan image
Is there a way to extract the bounding box coordinates of the picture?
I would suggest defining two diagonally opposite points in your image file that you can identify precisely in your Revit model. Determine their image pixel coordinates, export their Revit model coordinates, and use this information to determine the appropriate scaling and translation.
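A minimal sketch of that scaling-and-translation step in plain JavaScript (all names and numbers here are hypothetical; it assumes a uniform scale and no rotation between image and model, and note that image pixel Y typically grows downward, so a real mapping may also need to flip that axis):

```javascript
// Given two reference points with known pixel coordinates (p1, p2) and
// known Revit model coordinates (m1, m2), derive per-axis scale and
// translation, then map any pixel coordinate into model space.
function makePixelToModel(p1, m1, p2, m2) {
  const sx = (m2.x - m1.x) / (p2.x - p1.x);  // scale along X
  const sy = (m2.y - m1.y) / (p2.y - p1.y);  // scale along Y
  return (p) => ({
    x: m1.x + (p.x - p1.x) * sx,
    y: m1.y + (p.y - p1.y) * sy,
  });
}

// Hypothetical numbers for illustration:
const toModel = makePixelToModel(
  { x: 100, y: 50 },  { x: 0,  y: 0 },   // first reference pair
  { x: 900, y: 650 }, { x: 40, y: 30 }   // second, diagonally opposite
);
console.log(toModel({ x: 500, y: 350 })); // → { x: 20, y: 15 }
```

Choosing the two reference points diagonally opposite maximizes the pixel distance between them, which keeps rounding error in the derived scale small.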
The RoomEditorApp Revit add-in and its corresponding roomedit CouchDB web interface demonstrate exporting an SVG image from Revit, scaling it for display in a web browser, and transforming and calculating exact coordinates back and forth between the two environments.
Is it possible to implement a model-view architecture with pyqtgraph? It seems like pyqtgraph forces you into a widget-centric approach (https://doc.qt.io/qt-5/modelview.html) in which each PlotItem has its own scene.
It would be great if the ViewBox (see the image below, from the docs: http://www.pyqtgraph.org/documentation/plotting.html) were an actual "view" of a scene, with the axis items around it dependent on that particular view. Then, if a number of PlotWidgets/PlotItems were displaying the same PlotDataItems at different zoom levels, changing the z-order or scale of an item would show up on all the PlotItems without resorting to sending signals between them.
To do this, does the ViewBox have to be an embedded GraphicsView? Or is it possible to show a PlotItem on multiple GraphicsViews?
Thanks!
Some other reference links:
https://doc.qt.io/qt-5/graphicsview.html
https://doc.qt.io/qt-5/model-view-programming.html
I am using HelixToolkit.Wpf.SharpDX to display a mesh in the 3D viewport. A requirement I have is to display any given mesh as solid, wireframe and point cloud.
The solid and wireframe implementation is simple, since the GeometryModel3D object provides the FillMode property in order to switch between them.
However, I cannot find a simple way to switch the display to a point cloud; what I mean by this is that each vertex should be displayed as a small point. Does anyone know of a way to do this? I need the switching of the display to occur very quickly, just as switching between solid and wireframe is extremely quick.
Example images below:
As far as I know, you cannot simply toggle between mesh and point representation. You have to convert your mesh model into a PointGeometryModel3D and use its point collection for visualization.
I have created a simple Leaflet map and added Draw based on the documentation here (using 1.0). I downloaded and am using all of the different js files in the example. I would like to change the tooltip for polylines to show the distance from the last vertex instead of the total line length and (if possible) the angle from the last vertex. I am going to be mapping buildings with walls that are usually at right angles. Unfortunately, my js skills are mediocre at best and I do not know where to start. Any suggestions? Thank you.
My goal is to create an interactive web visualization of data from motion tracking experiments.
The trajectories of the moving objects are rendered as points connected by lines. The visualization allows the user to pan and zoom the data.
My current prototype uses Processing.js because I am familiar with Processing, but I have run into performance problems when drawing data with more than 10,000 vertices or lines. I pursued a couple of strategies for implementing the pan and zoom; the current implementation, which I think is the best, is to save the data as an SVG image and use the PShape data type in Processing.js to load, draw, scale and translate the data. A cleaned version of the code:
/* @pjs preload="nanoparticle_trajs.svg"; */
PShape trajs;

// Pan offset and display size; updated by the mouse-event
// handlers mentioned below.
float centerX, centerY, imgW, imgH;

void setup()
{
  size(900, 600);
  trajs = loadShape("nanoparticle_trajs.svg");
}

//function that repeats and draws elements to the canvas
void draw()
{
  shape(trajs, centerX, centerY, imgW, imgH);
}
//...additional functions that get mouse events
Perhaps I should not expect snappy performance with so many data points, but are there general strategies for optimizing the display of complex SVG elements with Processing.js? What would I do if I wanted to display 100,000 vertices and lines? Should I abandon Processing altogether?
Thanks
EDIT:
Upon reading the following answer, I thought an image would help convey the essence of the visualization:
It is essentially a scatter plot with >10,000 points and connecting lines. The user can pan and zoom the data and the scale bar in the upper-left dynamically updates according to the current zoom level.
Here's my pitch:
Zoom-level grouping: break the data down as your user focuses/zooms in
I would suggest you group some of the data together and present it as a simple node.
On zooming in to a particular node, you can break the node down and release the group, thus showing its details.
This way you limit the amount of data you need to show in zoomed-out views (where all the nodes would be shown), and you add details as the user zooms in to a region, at which point not all nodes would be showing, since zooming in focuses on only one area of your graph.
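The grouping idea above can be sketched in plain JavaScript (all names and thresholds are illustrative, not from any particular library): bucket points into a coarse grid and show one aggregate node per occupied cell when zoomed out, switching to the raw points past a zoom threshold.

```javascript
// Group points into grid cells of size cellSize, returning one
// representative node (centroid + member count) per occupied cell.
function groupByCell(points, cellSize) {
  const cells = new Map();
  for (const p of points) {
    const key = Math.floor(p.x / cellSize) + "," + Math.floor(p.y / cellSize);
    if (!cells.has(key)) cells.set(key, { x: 0, y: 0, count: 0 });
    const c = cells.get(key);
    c.x += p.x; c.y += p.y; c.count += 1;  // accumulate for the centroid
  }
  return [...cells.values()].map(c =>
    ({ x: c.x / c.count, y: c.y / c.count, count: c.count }));
}

// Below the zoom threshold, draw the aggregates; above it, the raw data.
function visibleNodes(points, zoom, zoomThreshold, cellSize) {
  return zoom < zoomThreshold ? groupByCell(points, cellSize) : points;
}
```

The `count` field lets the renderer size or label each aggregate node by how many raw points it stands in for.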
Viewport limit
Detect what is in the current view area and draw just that. Avoid drawing the whole node-graph structure if the user cannot see it in the viewport: show only what is necessary. I suspect Processing.js already does this, but I don't know whether your zooming functionality takes advantage of it.
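A minimal viewport-culling sketch (names are illustrative): with the pan offset and zoom factor known, compute the world-space rectangle currently on screen and filter out everything else before drawing.

```javascript
// Convert the on-screen canvas rectangle back into world coordinates,
// assuming screen = world * zoom + pan on each axis.
function worldViewport(canvasW, canvasH, panX, panY, zoom) {
  return {
    left:   -panX / zoom,
    top:    -panY / zoom,
    right:  (canvasW - panX) / zoom,
    bottom: (canvasH - panY) / zoom,
  };
}

// Keep only the points that fall inside the visible rectangle.
function cull(points, vp) {
  return points.filter(p =>
    p.x >= vp.left && p.x <= vp.right && p.y >= vp.top && p.y <= vp.bottom);
}
```

For lines rather than points, the same test would be applied to each segment's bounding box so that a segment crossing the viewport edge is still drawn.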
Consider bitmap caching if your nodes are interactive/clickable
If your elements are clickable/interactive, you might want to consider grouping data and showing it as bitmaps (large groups of data shown as a single image) until the user clicks on a bitmap, at which point the bitmap is removed and the original shape is redrawn in its place. This minimizes the number of points/lines the engine has to draw on each redraw cycle.
For bitmap caching, see this link (this is Fabric.js, a canvas-and-SVG library, but the concept/idea is the same), and also this answer I posted to one of my own questions about interactive vector/bitmap caching.
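The caching pattern can be sketched library-free (all names here are illustrative; in a browser, `createBuffer` would return `document.createElement("canvas")` sized to the group's bounds):

```javascript
// Wrap an expensive vector render so it runs only when the cache is
// stale; every other frame is a single cheap drawImage blit.
function makeCachedLayer(renderGroup, createBuffer) {
  let buffer = null;
  return {
    invalidate() { buffer = null; },     // call when the group's contents change
    draw(ctx, x, y) {
      if (buffer === null) {             // expensive vector render, done once
        buffer = createBuffer();
        renderGroup(buffer.getContext("2d"));
      }
      ctx.drawImage(buffer, x, y);       // cheap blit thereafter
    },
  };
}
```

When the user clicks into a group, calling `invalidate()` (or bypassing the cache entirely for that group) restores the live, interactive vector shapes.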
As a side note:
Do you really need to use Processing?
If there's no interaction or animation happening and you just want to blit pixels (draw once) on a canvas, consider abandoning a vector-based library altogether. A plain old canvas just blits pixels, and that's all. The initial drawing of the data might have some delay, but since no internal references to the points/shapes/lines are kept after they are drawn, nothing is eating up your resources or clogging your memory.
So if this is the case, consider making the switch to plain canvas.
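A sketch of that draw-once approach with the plain 2D canvas API (the data layout is an assumption: each trajectory as an array of `{x, y}` points):

```javascript
// Draw all trajectories once: one path for every line segment, then a
// small rect per vertex. After this returns, nothing retains the data.
function blitTrajectories(ctx, trajectories) {
  ctx.beginPath();
  for (const traj of trajectories) {
    ctx.moveTo(traj[0].x, traj[0].y);
    for (let i = 1; i < traj.length; i++) ctx.lineTo(traj[i].x, traj[i].y);
  }
  ctx.stroke();                              // one stroke call for all lines
  for (const traj of trajectories)
    for (const p of traj)
      ctx.fillRect(p.x - 1, p.y - 1, 2, 2);  // 2x2 px marker per vertex
}
```

Batching all segments into one path before a single `stroke()` call is itself a meaningful optimization over stroking each segment individually.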
However, data visualisations are all about animation and interactivity, so I doubt you'll want to give them up.