Using multiple BLoCs with RxDart to calculate derived values - bloc

I searched the web for something to read as an example explaining the BLoC pattern and streams with rxdart (not the bloc library), and not the counter example or Firebase login: an example of how to combine values from multiple BLoCs to calculate some other values. I found almost nothing in this direction. Maybe you have an idea; that would be great.

Try checking out my repo (a hand-rolled BLoC pattern with RxDart) :)) https://github.com/hoc081098/find_room_flutter_BLoC_pattern_RxDart/blob/master/lib/pages/home/home_bloc.dart

Let me explain what I meant more simply:
I have multiple BLoCs, and from multiple TextFields I add streams to the sinks: say, a scale stream to blocA, and a width stream and a height stream to blocB.
blocA controls and calculates the screen scale.
blocB draws a rectangle with the width and height, but it also needs the scale from blocA to show the rectangle at the right scale.
So how can I handle this? I could also have a blocC that draws a circle and has a radius stream, which depends on the scale from blocA too.
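The core idea you need here is Rx's combineLatest: blocB subscribes to blocA's exposed scale stream and combines it with its own width and height streams (in rxdart that would be Rx.combineLatest3). Here is a minimal plain-Python sketch of that combine-latest wiring, just to show the mechanics; the Stream class and names are illustrative, not from any library:

```python
# Minimal sketch of the combineLatest idea behind wiring two BLoCs together.
# In rxdart this is Rx.combineLatest3(scale$, width$, height$, combiner).

class Stream:
    """A tiny push-based stream: listeners are called on every new value."""
    def __init__(self):
        self._listeners = []
    def listen(self, fn):
        self._listeners.append(fn)
    def add(self, value):  # the "sink" side
        for fn in self._listeners:
            fn(value)

def combine_latest(streams, combiner, out):
    """Re-emit combiner(latest values) whenever any input stream emits,
    once every input has produced at least one value."""
    latest = [None] * len(streams)
    def make_listener(i):
        def on_value(v):
            latest[i] = v
            if all(x is not None for x in latest):
                out.add(combiner(*latest))
        return on_value
    for i, s in enumerate(streams):
        s.listen(make_listener(i))

# blocA owns the scale stream; blocB owns width/height and the derived rect.
scale, width, height = Stream(), Stream(), Stream()
scaled_rect = Stream()  # blocB's output: (scaled width, scaled height)
combine_latest([scale, width, height],
               lambda s, w, h: (w * s, h * s), scaled_rect)

results = []
scaled_rect.listen(results.append)
scale.add(2.0)   # e.g. from the scale TextField via blocA's sink
width.add(10)
height.add(5)    # first emission: all three inputs have a value now
scale.add(0.5)   # scale changes -> the rectangle re-emits automatically
print(results)   # [(20.0, 10.0), (5.0, 2.5)]
```

A blocC with a radius stream works the same way: combine its radius stream with blocA's scale stream, so any scale change re-emits the scaled circle too.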

Related

How can I selectively render VTK PolyData without deleting points or lines

I have a pipeline for rendering a PolyData. The PolyData consists of points and lines only (specifically, no faces). I have normals for the points, which would allow me to do a point-based version of backface culling, but I can't see how to apply some sort of filter to the pipeline to hide these points. I'd like to do this so that I can pan, tilt and scroll the view using an interactor without having to rebuild the PolyData.
It seems like this ought to be possible. Can someone direct me to the appropriate part of the API docs?
You can look at the vtkClipPolyData filter. It clips the cells of the PolyData, so it will work for the lines in your PolyData. If you want it to work for the points as well, your points need to be stored as vtkVertex cells in your PolyData; vtkVertexGlyphFilter can be used to create a vtkVertex for every point. Looking at this post, it seems that backface culling is not possible for lines even if the points have normals.
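If you end up doing the point-based culling yourself, outside any VTK filter, the per-point test is just the sign of the dot product between the point normal and the view direction. A pure-Python sketch of that test (no VTK; function and variable names are illustrative):

```python
# Point-based "backface" test: keep a point when its normal faces the camera,
# i.e. the normal makes an obtuse angle with the camera-to-point direction.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def visible_points(points, normals, camera_pos):
    """Return indices of points whose normals face the camera."""
    keep = []
    for i, (p, n) in enumerate(zip(points, normals)):
        # direction from the camera to the point
        view_dir = tuple(pc - cc for pc, cc in zip(p, camera_pos))
        if dot(n, view_dir) < 0:   # normal points back toward the camera
            keep.append(i)
    return keep

points  = [(0, 0, 0), (0, 0, 1)]
normals = [(0, 0, -1), (0, 0, 1)]   # one faces the camera, one faces away
camera  = (0, 0, -5)
print(visible_points(points, normals, camera))  # [0]
```

Rebuilding the visible subset on each camera move is cheap if only the vertex cells change, which keeps the original PolyData untouched.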

Turn an image into lines and circles

I need to be able to turn a black-and-white image into a series of lines (start and end points) and circles (start point, radius). I have a "pen width" that's constant.
(I'm working with a screen that can only work with this kind of graphics.)
The problem is, I don't want to overcomplicate things. I could represent any image with loads of small lines, but it would take a lot of time to draw, so I basically want to "approximate" the image using those lines and circles.
I've tried several approaches (guessing lines, working area by area, etc.), but none had any reasonable results without using a lot of lines and circles.
Any idea on how to approach this problem?
Thanks in advance!
You don't specify what language you are working in, but I'd suggest OpenCV if possible. If not, most decent CV libraries ought to support the features I'm about to describe.
You don't say if the input is already composed of simple shapes (lines and polygons) or not. Assuming it's not, i.e. it's a photo or a frame from a video, you'll need to do some edge extraction to find the lines you are going to model. Use a Canny or other edge detector to convert the image into a series of lines.
I suggest you then extract circles, as they are the richest feature you can model directly. Consider using a Hough circle transform to locate circles in your edge image. Once you've located them, remove them from the edge image (to avoid duplicating them in the line-processing stage below).
Now, for each pixel in the edge image that's 'on', you want to find the longest line segment it's part of. There are a number of algorithms for doing this; the simplest would be the probabilistic Hough transform (also available in OpenCV) to extract line segments, which gives you control over the minimum length, allowed gaps, etc. You may also want to examine alternatives like LSWMS, which has OpenCV source code freely available.
Once you have extracted the lines and circles you can plot them into a new image or save the coordinates for your output device.
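To make the Hough line step concrete, here is a minimal pure-Python version of the classical Hough accumulator. In practice you would use OpenCV's cv2.HoughLinesP; this only shows the voting idea behind it, with illustrative names:

```python
import math

def hough_lines(edge_pixels, n_theta=180):
    """Classical Hough transform: each edge pixel votes for every line
    (rho, theta) it could lie on; peaks in the accumulator are lines,
    using the normal form rho = x*cos(theta) + y*sin(theta)."""
    acc = {}  # (rho, theta_index) -> votes
    for x, y in edge_pixels:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc

# A perfect horizontal row of edge pixels at y = 3:
pixels = [(x, 3) for x in range(10)]
acc = hough_lines(pixels)
(rho, t), votes = max(acc.items(), key=lambda kv: kv[1])
# the peak collects all 10 votes at rho = 3, theta near 90 degrees
print(rho, t, votes)
```

The probabilistic variant samples pixels instead of using all of them and walks along supporting pixels to recover actual segment endpoints, which is why it can enforce minimum length and maximum gap.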

ProcessingJS performance with large data

My goal is to create an interactive web visualization of data from motion tracking experiments.
The trajectories of the moving objects are rendered as points connected by lines. The visualization allows the user to pan and zoom the data.
My current prototype uses Processing.js because I am familiar with Processing, but I have run into performance problems when drawing data with more than 10,000 vertices or lines. I pursued a couple of strategies for implementing the pan and zoom, but the current implementation, which I think is the best, is to save the data as an SVG image and use the PShape data type in Processing.js to load, draw, scale and translate the data. A cleaned version of the code:
/* @pjs preload="nanoparticle_trajs.svg"; */
PShape trajs;
// pan/zoom state, updated by the mouse-event functions below
float centerX, centerY, imgW, imgH;

void setup() {
  size(900, 600);
  trajs = loadShape("nanoparticle_trajs.svg");
}

// function that repeats and draws elements to the canvas
void draw() {
  shape(trajs, centerX, centerY, imgW, imgH);
}

// ...additional functions that get mouse events and update the pan/zoom state
Perhaps I should not expect snappy performance with so many data points, but are there general strategies for optimizing the display of complex SVG elements with Processing.js? What would I do if I wanted to display 100,000 vertices and lines? Should I abandon Processing altogether?
Thanks
EDIT:
Upon reading the following answer, I thought an image would help convey the essence of the visualization:
It is essentially a scatter plot with >10,000 points and connecting lines. The user can pan and zoom the data and the scale bar in the upper-left dynamically updates according to the current zoom level.
Here's my pitch:
Zoom-level grouping: break down data as your user focuses/zooms in
I would suggest you group some data together and present it as a single node.
On zooming in to a particular node, you can break the node down and release the group, showing its details.
This way you limit the amount of data you need to show on zoomed-out views (where all the nodes would be shown), and you add detail as the user zooms in to a region, in which case not all nodes would be showing, since zooming in only focuses on one area of your graph.
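The grouping idea can be sketched as grid clustering whose cell size depends on the zoom level; one representative node is drawn per occupied cell. A pure-Python illustration (the cell-size formula and names are assumptions, not from any library):

```python
# Level-of-detail grouping: bucket points into grid cells whose size shrinks
# as the user zooms in, and draw one representative node per cell.

from collections import defaultdict

def group_for_zoom(points, zoom):
    """Higher zoom -> smaller cells -> more (finer-grained) nodes to draw."""
    cell = 100.0 / (2 ** zoom)          # cell size halves per zoom level
    buckets = defaultdict(list)
    for x, y in points:
        buckets[(int(x // cell), int(y // cell))].append((x, y))
    # one centroid per occupied cell stands in for the whole group
    return [(sum(p[0] for p in b) / len(b), sum(p[1] for p in b) / len(b))
            for b in buckets.values()]

points = [(10, 10), (12, 11), (80, 80), (82, 79)]
print(len(group_for_zoom(points, 0)))   # zoomed out: 1 merged node
print(len(group_for_zoom(points, 6)))   # zoomed in: 4 individual nodes
```

The buckets only need to be rebuilt when the zoom level crosses a threshold, not on every frame.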
Viewport limit
Detect what is in the current view area and draw just that. Avoid drawing the whole node-graph structure if your user cannot see it in the viewport: show only what is necessary. I suspect Processing.js already does this, but I don't know if your zooming functionality takes advantage of it.
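Viewport limiting is just a rectangle test applied before drawing; a minimal sketch, with the viewport given as position plus size:

```python
def in_viewport(points, vx, vy, vw, vh):
    """Keep only the points that fall inside the current view rectangle."""
    return [(x, y) for x, y in points
            if vx <= x <= vx + vw and vy <= y <= vy + vh]

points = [(5, 5), (50, 50), (500, 500)]
# viewport at (0, 0), 100 x 100 units: the point at (500, 500) is culled
print(in_viewport(points, 0, 0, 100, 100))  # [(5, 5), (50, 50)]
```

For 100,000 points a spatial index (grid or quadtree) makes this lookup sublinear instead of a full scan per frame.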
Consider bitmap caching if your nodes are interactive/clickable
If your elements are clickable/interactive, you might want to consider grouping data and showing it as bitmaps (large groups of data shown as a single image), until the user clicks on a bitmap, in which case the bitmap is removed and the original shape is redrawn in its place. This minimizes the number of points/lines the engine has to draw on each redraw cycle.
For bitmap caching see this link (this is Fabric.js, a canvas-and-SVG library, but the concept/idea is the same), and also this answer I posted to one of my own questions about interactive vector/bitmap caching.
As a side note:
Do you really need to use Processing?
If there's no interaction or animation happening and you just want to blit pixels (draw once) onto a canvas, consider abandoning a vector-based library altogether. Plain old canvas just blits pixels and that's all. The initial drawing of the data might have some delay, but since no internal reference to the points/shapes/lines is kept after they're drawn, there's nothing eating up your resources or clogging your memory.
So if this is the case, consider making the switch to plain canvas.
However, data visualisations are all about animation and interactivity, so I doubt you'll want to give them up.

fast 2D texture line sample

Imagine you have a chessboard textured triangle shown in front of you.
Then imagine you move the camera so that you can see the triangle from one side, when it nearly looks as a line.
You will probably see the line as grey, because this is the average color of the texels shown in a straight line from the camera to the end of the triangle. The GPU does this all the time.
Now, how is this implemented? Should I sample every texel in a straight line and average the result to get the same output? Or is there another more efficient way to do this? Maybe using mipmaps?
It does not matter whether you look at the object from the side, the front, or the back; the implementation remains exactly the same.
The exact implementation depends on the required results. A typical graphics API such as Direct3D has many different texture-sampling techniques, which all have different properties. Have a look at the documentation for some common sampling techniques and an explanation.
If you start looking at objects from an oblique angle, the texture on the triangle might look distorted with most sampling techniques; anisotropic filtering is often used in these scenarios.
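The "line looks grey" behaviour falls out of mipmapping rather than per-pixel averaging along the view ray: each mip level box-filters 2x2 texel blocks of the level below, so a black-and-white chessboard converges to mid-grey at coarse levels, and the GPU simply samples a coarse level when many texels project onto one pixel. A minimal sketch of building such a chain (pure Python, no graphics API):

```python
def next_mip(tex):
    """Box-filter one mip level: average each 2x2 block of texels."""
    h, w = len(tex), len(tex[0])
    return [[(tex[y][x] + tex[y][x+1] + tex[y+1][x] + tex[y+1][x+1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def mip_chain(tex):
    """Repeatedly downsample until a single texel remains."""
    chain = [tex]
    while len(chain[-1]) > 1:
        chain.append(next_mip(chain[-1]))
    return chain

# 4x4 black/white chessboard (0 = black, 255 = white)
board = [[0, 255, 0, 255],
         [255, 0, 255, 0],
         [0, 255, 0, 255],
         [255, 0, 255, 0]]
chain = mip_chain(board)
print(chain[-1])  # [[127.5]] -- the coarsest level is mid-grey
```

Selecting the level from the screen-space footprint of a pixel, and anisotropic filtering taking several samples along the squashed axis instead of one square footprint, are the pieces the hardware adds on top of this.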

Three.js ParticleSystem flickering with large data

Back story: I'm creating a Three.js based 3D graphing library. Similar to sigma.js, but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js and using a single particle representing a single node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling it to a sphere of radius 5 around the point (0,0,0). Since the points are then guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) involves not touching the coordinates of the inputted data, but instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values to fit my dataset, the problem went away. Since my dataset is user-dependent, I need an algorithm to generate these values dynamically.
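A common way to derive those clip planes dynamically is from the data's bounding sphere relative to the camera: near just in front of the sphere, far just behind it, so the depth-buffer range is no wider than the data needs. A hedged pure-Python sketch (not Three.js API; the centroid-plus-max-distance sphere and the margin factor are simplifying assumptions):

```python
import math

def clip_planes(points, camera_pos, margin=1.05):
    """Fit near/far to the data: distances from the camera to the front and
    back of an approximate bounding sphere, with a small safety margin.
    A tight near/far range preserves depth precision and avoids z-fighting."""
    n = len(points)
    center = (sum(p[0] for p in points) / n,
              sum(p[1] for p in points) / n,
              sum(p[2] for p in points) / n)
    radius = max(math.dist(p, center) for p in points)
    d = math.dist(camera_pos, center)
    near = max((d - radius) / margin, 0.01)   # never let near reach zero
    far = (d + radius) * margin
    return near, far

points = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2)]
near, far = clip_planes(points, camera_pos=(0, 0, 10))
print(near, far)
```

In Three.js these two values would then be assigned to the camera's near and far properties, followed by updating its projection matrix, whenever the camera or dataset changes.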
I noticed that in Solution 2 the flickering only occurs while the camera is moving. One possible reason is that, when the camera position changes rapidly, different transforms get applied to different particles: if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the cause. That means applying the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.
