I'm trying to implement a Gaussian blur filter on a Graphics object, but I can't find a function to get pixel information or to transform the Graphics object into a byte array (with RGB data).
That isn't supported, since hardware-accelerated surfaces might not provide that information.
However, you can do something else: paint the current form onto a mutable image, get the RGB of that mutable image, run the blur over it, and then create a new Image from the resulting RGB, e.g. something close to this:
Display d = Display.getInstance();
Image img = Image.createImage(d.getDisplayWidth(), d.getDisplayHeight());
Graphics g = img.getGraphics();
d.getCurrent().paintBackgrounds(g);
d.getCurrent().paintComponent(g, false);
int[] bufferArray = img.getRGB();
// blur...
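// A minimal sketch of the blur step. Assumption: a naive 3x3 box blur
// stands in for a true Gaussian kernel (which would use weighted taps);
// border pixels are left untouched for brevity.
int w = img.getWidth(), h = img.getHeight();
int[] blurred = bufferArray.clone();
for (int y = 1; y < h - 1; y++) {
    for (int x = 1; x < w - 1; x++) {
        int r = 0, gr = 0, b = 0;
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                int p = bufferArray[(y + dy) * w + (x + dx)];
                r += (p >> 16) & 0xff;
                gr += (p >> 8) & 0xff;
                b += p & 0xff;
            }
        }
        blurred[y * w + x] = 0xff000000 | ((r / 9) << 16) | ((gr / 9) << 8) | (b / 9);
    }
}
bufferArray = blurred;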
Image blurredImage = Image.createImage(bufferArray, img.getWidth(), img.getHeight());
Here I have an image:
Then I generated a threshold image using the code below.
import cv2
import numpy as np

img = cv2.imread('Image_Original.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower_gr = np.array([40, 0, 0])
upper_gr = np.array([90, 255, 255])
mask = cv2.inRange(hsv, lower_gr, upper_gr)
mask = ~mask
res = cv2.bitwise_and(img, img, mask=~mask)
cv2.imshow('Masked', mask)
cv2.imshow('Result', res)
cv2.waitKey(0)
Then the following images (masked):
and (result):
Now what I want is to remove the black pixels (FROM THE ORIGINAL IMAGE ONLY) by making them zero, and I want to extract only patches of size 32x32 px or more.
Use cv2.findContours() to find the boundaries of the white patches in your mask image.
Each boundary is returned as a list of 2D points.
Use cv2.boundingRect() to get the width/height of each patch and filter accordingly.
You could also use cv2.minAreaRect() or cv2.contourArea() to filter based on the actual area of the patch.
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html
Once you have determined which patches should be discarded, overwrite them with black on the colour image using cv2.fillPoly(); a sketch tying these steps together follows.
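A minimal sketch, assuming the img and mask variables from the question above (note that cv2.findContours returns two values in OpenCV 4.x and three in 3.x):

import cv2

# Outer boundaries of the white patches in the mask.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    # Alternatively, filter on area: if cv2.contourArea(cnt) < 32 * 32: ...
    if w < 32 or h < 32:
        # Patch is smaller than 32x32 px: overwrite it with black
        # on the colour image.
        cv2.fillPoly(img, [cnt], (0, 0, 0))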
I am trying to use seamless cloning to blend two images together, but I notice that after using the seamless clone function, the area in the mask that I want to transfer is shifted upward. So my question is: is this normal behaviour of the seamless clone function, or is it a bug in my implementation?
Here is the source photo
Here is the destination photo
Here is the result photo
I encountered a similar situation. Moreover, as #JoshuaCWebDeveloper noted, the shift disappears when an all-ones mask is used. Nevertheless, I found a fix: I cropped the valid mask (the non-zero sub-section) out using cv2.boundingRect, so my source image and mask image are reduced to a smaller size, while the center is now calculated from the boundingRect outputs (since the reference point is marked on the destination image). This way the shift is gone.
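A minimal sketch of that crop-based fix (the variable names srcImage, dstImage and maskImage are taken from the snippet below; cv2.NORMAL_CLONE is an assumed flag choice):

import cv2

# Crop the source and mask to the mask's non-zero bounding rect, then
# place the crop at the rect's center on the destination image.
x, y, w, h = cv2.boundingRect(cv2.split(maskImage)[0])
srcCrop = srcImage[y:y + h, x:x + w]
maskCrop = maskImage[y:y + h, x:x + w]
center = (x + w // 2, y + h // 2)  # reference point lies on the destination
result = cv2.seamlessClone(srcCrop, dstImage, maskCrop, center, cv2.NORMAL_CLONE)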
(Based on the answer posted by Fractalic Forieu) You can achieve the same result without reducing the image size.
Instead of using the image center:
center = (width // 2, height // 2)
poissonImage = cv2.seamlessClone(srcImage, dstImage, maskImage, center, cv2.NORMAL_CLONE)
use the center of the bounding rect:
monoMaskImage = cv2.split(maskImage)[0]  # reduce the mask to a single channel
br = cv2.boundingRect(monoMaskImage)  # bounding rect (x, y, width, height)
centerOfBR = (br[0] + br[2] // 2, br[1] + br[3] // 2)
poissonImage = cv2.seamlessClone(srcImage, dstImage, maskImage, centerOfBR, cv2.NORMAL_CLONE)
I am trying to convert an .svg file to 3D (an .obj file) using JavaFX.
I am able to convert primitives like Shape (Cylinder, Box, etc.) to a Mesh. Is it possible to convert an SVGPath to any particular Mesh?
The open source library FXyz has exactly what you are looking for: an SVG3DMesh class that, given a 2D SVGPath (or a string with its content), will return a 3D TriangleMesh, extruding the 2D shape to a certain height.
Later on you can export that mesh to an obj file.
This is a code snippet of how you can use it:
SVG3DMesh svg3DMesh = new SVG3DMesh("M40,60 C42,48 44,30 25,32", 10);
You can show the mesh:
svg3DMesh.setDrawMode(DrawMode.LINE);
svg3DMesh.setCullFace(CullFace.NONE);
or show a solid 3D object with the color you want:
svg3DMesh.setTextureModeNone(Color.RED);
For exporting the mesh to obj:
OBJWriter writer = new OBJWriter((TriangleMesh) ((TexturedMesh) svg3DMesh.getMeshFromLetter("")).getMesh(), "svg");
writer.setMaterialColor(Color.RED);
writer.exportMesh();
It will generate svg.obj and svg.mtl.
I have a question/problem with fabric.js: in my code the user can upload a picture and convert it to a black/white image with filters. When I export the picture with canvas.toSVG(), it exports an SVG image, but it is not a real vector graphic: it loses quality when scaled up.
function handleImage(e) {
    var reader = new FileReader();
    reader.onload = function (event) {
        var img = new Image();
        img.onload = function () {
            var imgInstance = new fabric.Image(img, {
                scaleX: 0.7,
                scaleY: 0.7
            });
            canvas.add(imgInstance);
        };
        img.src = event.target.result;
    };
    reader.readAsDataURL(e.target.files[0]);
}

$('saveBtn').onclick = function () {
    var filedata = canvas.toSVG(); // the SVG file is now in filedata
    var locfile = new Blob([filedata], {type: "image/svg+xml;charset=utf-8"});
    var locfilesrc = URL.createObjectURL(locfile);
    var dwn = document.getElementById('dwn');
    dwn.innerHTML = "<a href=" + locfilesrc + " download='mysvg.svg'>Download</a>";
};
What am I doing wrong?
There is no easy way to "parse" raster graphics to a vector image. Vector graphics include information for how to draw an image, while raster images only include the pixel data for how an image appears at a given size and resolution. That's enough for many purposes, but it means that while it's easy to go from vector to raster (just execute the instructions), it's not easy to go from raster to vector.
It is possible to "trace" the edges of a raster image to obtain vectors that can approximate the raster: in other words, a set of vector instructions that, for that particular resolution and depth, yields an image that is the same as the original raster (or something very like it). But there is no guarantee that these actually correspond in any way to the original vectors (if there are any original vectors at all). Usually there is no correspondence, in fact, unless your tracing algorithm is very specialized: for example, tracing images of a font to make a vector copy of that font. Because they don't correspond, there's no guarantee that the image will scale up the way you want it to: it'll scale, but things may enlarge in strange ways.
It is possible to implement tracing algorithms in JavaScript, by drawing the image into a <canvas> element, using getImageData() to grab the pixel information from that, and performing your operations on the pixel information. Doing this, though, is beyond the scope of this question.
Here is the breakdown: I load a model from an obj file and store it into buffers as follows: a VBO for vertices, an IBO for indices, a VAO for state, and num_indices, an int holding the total number of indices. To get the model color I exported a separate off file and extracted color information per vertex, so I have an array with 4 values for each vertex (the size of the array is vertexno*4). My draw function looks like this:
glUniformMatrix4fv(location_model_matrix, 1, false, glm::value_ptr(model_matrix));
glUniformMatrix4fv(location_view_matrix, 1, false, glm::value_ptr(view_matrix));
glUniformMatrix4fv(location_projection_matrix, 1, false, glm::value_ptr(projection_matrix));
//glUniform4f(location_object_color, 1, 1, 1, 1);
glBindVertexArray(mesh_vao);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawElements(GL_TRIANGLES, mesh_num_indices, GL_UNSIGNED_INT, 0);
And the model renders black. I also have some cubes drawn in the draw function, which I color using glUniform4f(location_object_color, r, g, b, a); if I uncomment that line, the loaded mesh takes the same color as the last drawn cube.
In my constructor I have something like this:
glClearColor(0.5, 0.5, 0.5, 1);
glClearDepth(1);
glEnable(GL_DEPTH_TEST);
gl_program_shader = lab::loadShader("shadere\\shader_vertex.glsl", "shadere\\shader_fragment.glsl");
location_model_matrix = glGetUniformLocation(gl_program_shader, "model_matrix");
location_view_matrix = glGetUniformLocation(gl_program_shader, "view_matrix");
location_projection_matrix = glGetUniformLocation(gl_program_shader, "projection_matrix");
location_object_color = glGetUniformLocation(gl_program_shader, "object_color");
If need be I can provide my shader_vertex and shader_fragment; I considered them a possible cause, but I'm not so sure, so if anyone knows why my model isn't being colored, please lend a hand.