Free drawing with FabricJS

Is it possible to have all the free-drawing items put into one layer using the fabric.js API? I have a UI for managing layers (moving layers up and down), but if someone draws a lot in free-drawing mode the number of layers will be huge and the UI won't look great.
A group seems to keep them all as separate layers. I need to effectively flatten the free drawing into one layer.
Thank you

Related

Is it possible to cut parts out of a picture and analyze them separately with Python?

I am doing some studies on eye vascularization. My project involves a machine which can detect the different blood vessels in the retinal membrane at the back of the eye. What I am looking for is a possibility to segment the picture and analyze each segment on its own. The segmentation consists of six squares which I want to analyze separately for the density of white pixels.
I would be very thankful for every kind of input; I am pretty new to the programming world and I actually just have a bare concept of how it should work.
Thanks and Cheerio
Sam
[Images: concept drawing, OCTA picture]
You could probably accomplish this by using numpy to load the image and split it into sections. You could then analyze the sections using scikit-image or opencv (though this could be difficult to get working). To view the image, you can either save it to a file using numpy, or use matplotlib to open it in a new window.
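A minimal sketch of that approach, assuming a grayscale image split into a 2x3 grid of equal squares (the filename and the "white" threshold below are placeholders):

```python
import numpy as np
from skimage import io  # the scikit-image reader mentioned above

# Placeholder filename; the question describes six squares, so use a 2x3 grid.
image = io.imread("retina_scan.png", as_gray=True)  # grayscale floats in [0, 1]

rows, cols = 2, 3
h, w = image.shape[0] // rows, image.shape[1] // cols

for r in range(rows):
    for c in range(cols):
        tile = image[r * h:(r + 1) * h, c * w:(c + 1) * w]
        density = np.mean(tile > 0.5)  # fraction of "white" pixels; threshold is a guess
        print(f"square ({r}, {c}): white-pixel density = {density:.3f}")
```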
First of all, please note that in image processing "segmentation" describes the process of grouping neighbouring pixels by context.
https://en.wikipedia.org/wiki/Image_segmentation
What you want to do can be done in various ways.
The most common way is by using ROIs or AOIs (region/area of interest). That's basically some geometric shape like a rectangle, circle, polygon or similar defined in image coordinates.
The image processing is then restricted to only process pixels within that region. So you don't slice your image into pieces but you restrict your evaluation to specific areas.
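For illustration, a tiny Python/numpy sketch of restricting evaluation to a rectangular ROI (the coordinates and threshold here are made up):

```python
import numpy as np

def white_density_in_roi(image, roi, threshold=0.5):
    """Evaluate only the pixels inside a rectangular ROI given as
    (x, y, width, height) in image coordinates; the slice is a view,
    so the image is not copied or cut into pieces."""
    x, y, w, h = roi
    view = image[y:y + h, x:x + w]
    return float(np.mean(view > threshold))
```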
Another way, as you suggested, is to cut the image into pieces and process them one by one. Those sub-images are usually created using ROIs.
A third option which is rather limited but sufficient for simple tasks like yours is accessing pixels directly using coordinate offsets and several nested loops.
Just google "python image processing" in combination with "library" "roi" "cropping" "sliding window" "subimage" "tiles" "slicing" and you'll get tons of information...

How to compute a 3d miniature model from a large set of 3d geometric models

I want to import a set of 3D geometries into the current scene; the imported geometries contain tons of basic components which may represent an entire building. The product manager wants the entire building to be displayed as a 3D miniature (colors and textures must correspond to the original building).
The problem: are there any algorithms which can handle this large amount of data at a reasonable time and memory cost?
// worst case: there may be a billion triangle surfaces in the imported data
And, by the way, I am considering another solution: using a type of texture mapping:
1. Take enough snapshots of the imported objects with the software renderer.
2. Apply the images to a surface.
3. Use some shader tricks to perform effects like bump mapping: when the view position changes, the texture alters and makes the viewer feel as if he were looking at a 3D scene.
My modeller and renderer are ACIS and HOOPS. Any ideas?
An option is to generate side views of the building at a suitable resolution using the rendering engine, and map them as textures onto a parallelepiped.
The next level of refinement is to obtain a bump or elevation map that you can use for embossing. Not the easiest to do.
If the modeler allows it, you can slice the volume using a 2D grid of "voxels" (actually prisms). You can do that by repeatedly cutting the model in two with a plane. And in every prism, find the vertex closest to the observer. This will give you a 2D map of elevations, with the desired resolution.
Alternatively, intersect parallel "rays" (linear objects) with the solid and keep the first endpoint.
It can also be that your modeler includes a true voxel model, or that the rendering can be done with a Z-buffer that you can access.
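A rough sketch of the elevation-map idea in Python/numpy, assuming you can export the model's vertices as an N x 3 array and that the observer looks down the -Z axis (the ACIS/HOOPS export step is not shown and the names are placeholders):

```python
import numpy as np

def elevation_map(vertices: np.ndarray, grid_w: int, grid_h: int) -> np.ndarray:
    """Bucket mesh vertices (an N x 3 array) into a grid_w x grid_h grid over
    X/Y and keep, per cell, the vertex closest to an observer looking down
    the -Z axis, i.e. the maximum Z. Empty cells stay at -inf."""
    xs, ys, zs = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    span_x = max(xs.max() - xs.min(), 1e-9)
    span_y = max(ys.max() - ys.min(), 1e-9)
    cx = np.minimum(((xs - xs.min()) / span_x * grid_w).astype(int), grid_w - 1)
    cy = np.minimum(((ys - ys.min()) / span_y * grid_h).astype(int), grid_h - 1)
    heights = np.full((grid_h, grid_w), -np.inf)
    # np.maximum.at handles repeated cell indices, keeping the largest Z per cell.
    np.maximum.at(heights, (cy, cx), zs)
    return heights
```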

Conservatively cover bitmap with small number of primitives?

I'm researching the possibility of performing occlusion culling in voxel/cube-based games like Minecraft, and I've come across a challenging sub-problem. I'll give the 2D version of it.
I have a bitmap, which infrequently has pixels get either added to or removed from it.
What I want to do is maintain some arbitrarily small set of geometry primitives that cover an arbitrarily large area, such that the area covered by all the primitives is within the colored part of the bitmap.
Is there a smart way to maintain these sets? Please note that this is different from typical image tracing in that the primitives cannot go outside the lines. If it helps, I already have the bitmap organized into a quadtree.

Supporting a large number of layers in image editor?

I'm writing a photoshop-like program for a mobile device and I want to support the use of layers. At most, I can store about 7 bitmaps in memory at a time. I'm trying to see if I can come up with a way of supporting lots of layers (e.g. 10 or 20) while not using much memory.
My current idea is:
Use one bitmap as the active layer that the user can currently paint on and manipulate.
Use one bitmap that stores a flattened version of all the layers below the active layer.
Use one bitmap that stores a flattened version of all the layers above the active layer.
When a layer is not the active layer, I can write it to disk and remove it from memory. When the user switches active layer, I then retrieve the layer from disk and recreate the flattened images.
This idea appears sound if each layer only has opacity settings, but I don't think it will work if layers can have different blending modes like screen and multiply. The flattened bottom layers would work fine but it seems as if I would need to rerender all the top layers again if one of these used a blend mode and the active layer was changed.
What approach can I use? I've seen various paint programs supporting 100 or more layers, so there must be some trick to it.
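For the opacity-only case, the three-bitmap scheme described above might look roughly like this (a numpy sketch with straight-alpha RGBA floats in [0, 1]; all names are placeholders):

```python
import numpy as np

def over(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """Standard alpha 'over' compositing for straight-alpha RGBA images in [0, 1]."""
    a_top, a_bot = top[..., 3:4], bottom[..., 3:4]
    a_out = a_top + a_bot * (1.0 - a_top)
    rgb = (top[..., :3] * a_top + bottom[..., :3] * a_bot * (1.0 - a_top)) / np.maximum(a_out, 1e-6)
    return np.concatenate([rgb, a_out], axis=-1)

# Only three full-size bitmaps are needed to show the current state:
# 'over' is associative, so the layers below and above the active one
# can each be pre-flattened into a single bitmap.
def compose_visible(flattened_below, active, flattened_above):
    return over(flattened_above, over(active, flattened_below))
```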
Well, I think you've already got a reasonable approach for the case where the layers just have simple opacity, but I can see the problem if they had different blending modes.
One suggestion could be to chop the image into sub-blocks, e.g. 32x32 in size, and only re-blend the ones where something has changed. You could have a sort of cache of sub-blocks in main memory so that if the user is editing only a small region, you'd have the data you need most of the time. It would be complex, but you could still keep contiguous layers that only need opacity blending, to potentially improve performance.
I seem to remember that Photoshop at least used to do this - they might have had a hierarchy, but the image was split into blocks.
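A sketch of that block idea, assuming each layer provides a full-size RGBA array plus a blend function, and that edits mark 32x32 tiles as dirty (all names here are hypothetical):

```python
TILE = 32  # block size suggested above

def reblend_dirty_tiles(layers, composite, dirty):
    """Re-blend only the 32x32 tiles whose (row, col) indices are in `dirty`.
    `layers` is a bottom-to-top list of (rgba_array, blend_fn) pairs, where
    blend_fn(top_tile, bottom_tile) implements that layer's blend mode;
    `composite` is the cached full-size result, patched in place."""
    for ty, tx in dirty:
        ys = slice(ty * TILE, (ty + 1) * TILE)
        xs = slice(tx * TILE, (tx + 1) * TILE)
        block = None
        for image, blend in layers:
            block = image[ys, xs] if block is None else blend(image[ys, xs], block)
        composite[ys, xs] = block
    dirty.clear()
```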

Non-Affine image transformations in .NET

Are there any classes, methods in the .NET library, or any algorithms in general, to perform non-affine transformations? (i.e. transformations that involve more than just rotation, scale, translation and shear)
e.g.:
[example image] (source: last100.com)
Is there another term for non-affine transformations?
I am not aware of anything integrated in .NET that lets you do non-affine transforms.
I guess you are trying to do some sort of 3D texture mapping? If that's the case, you need a homogeneous transform, which is not available in .NET. I'm also not aware of any integrated way to do pixel-displacement transforms in .NET.
However, the currently voted solution might be good for what you are trying to do, just be aware that it won't do perspective correction out of the box.
For instance:
The picture on the left was generated using the single quad distort library provided by Neil N. The picture on the right was generated using a single quad (two triangles actually) in DirectX.
This may not have any impact on what you are trying to do, but it is something to keep in mind if you want to do 3D stuff: it will look very weird without perspective-correct mapping.
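Not .NET-specific, but the underlying math is compact enough to sketch. A projective (homography) mapping, shown here in Python/numpy, is the kind of non-affine transform behind perspective-correct warping; the division by w at the end is exactly what a purely affine two-triangle quad is missing (all numbers below are arbitrary examples):

```python
import numpy as np

def homography(src, dst):
    """3x3 projective transform mapping four src (x, y) points onto four dst
    points. Unlike an affine matrix, the last row is not (0, 0, 1), which is
    what produces perspective foreshortening."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply(H, pts):
    """Apply H to (x, y) points; the division by w is the non-affine part."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Unit square -> arbitrary convex quadrilateral
H = homography([(0, 0), (1, 0), (1, 1), (0, 1)],
               [(10, 10), (90, 20), (80, 95), (5, 70)])
# The square's centre maps to the quad's diagonal intersection,
# not to the affine average of its corners.
print(apply(H, np.array([[0.5, 0.5]])))
```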
All of the example images you posted can be done with a quadrilateral distortion, though I can't say for certain that a quad distort will cover ALL non-affine transforms.
Here's a link to a not-so-good implementation of it in C#... it works, but it is slow. Poke around Wikipedia for the many different optimizations available for these kinds of calculations:
http://www.vcskicks.com/image-distortion.html
-Neil
You can do this in WPF using the Viewport3D control and a non-affine transform matrix. Rendering this to a bitmap again may be interesting... which I "fixed" by including an invisible <Image> control with the same image as on my textured plane. (Also, I've had to work around the max texture size issues by splitting up the plane and cropping images...)
http://www.charlespetzold.com/blog/2007/08/060605.html
In my case I wanted the reverse of this (a transform such that arbitrary points on the warped image become the corners of my rectangular window), which is just the inverse of the matrix that does the opposite.
