pixel tracing not working properly - python-3.x

So I have an image containing a curve, and a function that finds one pixel and then connects the adjacent pixels to it (forming a path). But somehow, when the path heads south-west or west, it does not find any pixels even when one is there.
The function goes like this:
pixels = img.getpixels()
window = ((1, 0), (0, -1), (-1, 0), (0, 1), (1, 1), (1, -1), (-1, -1), (-1, 1))
objs = []
obj = [pixels.pop()]
while pixels:
    p = obj[-1]
    neighbours = tuple((x + p[0], y + p[1]) for x, y in window)
    for n in neighbours:
        if n in pixels:
            obj.append(n)
            pixels.remove(n)
            break
    if p == obj[-1]:  # no neighbour found: close this object, start a new one
        objs.append(obj)
        obj = [pixels.pop()]
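For reference, here is a self-contained sketch of the same neighbour walk (not the poster's exact code; it assumes `pixels` is a list of (x, y) coordinates of the curve's pixels). A set replaces the list for O(1) membership tests. Note that, like the original, the path only ever grows from its tail, so a walk that happens to start mid-curve will split that curve into two objects:

```python
# Sketch of the neighbour walk, assuming `pixels` holds (x, y) coordinates.
WINDOW = ((1, 0), (0, -1), (-1, 0), (0, 1), (1, 1), (1, -1), (-1, -1), (-1, 1))

def trace(pixels):
    remaining = set(pixels)  # set gives O(1) membership tests
    if not remaining:
        return []
    objs = []
    obj = [remaining.pop()]
    while remaining:
        px, py = obj[-1]
        for dx, dy in WINDOW:
            n = (px + dx, py + dy)
            if n in remaining:
                obj.append(n)
                remaining.remove(n)
                break
        else:  # no unvisited neighbour: close this object, start a new one
            objs.append(obj)
            obj = [remaining.pop()]
    objs.append(obj)
    return objs
```

Every pixel ends up in exactly one object, so comparing the number of returned objects against the number of visually connected curves is an easy sanity check.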
And an example of the failure:
the original image
the objects image
Each different color represents a different "object". Should it not be showing just one object?
I appreciate any thoughts on this, tyvm :D

Related

Python rendering 3D, 2D images within a same window

I am trying to create a simple robot simulator with a 3D view plus a 2D bird's-eye mini-map, like the image below.
My map file is just a list of vertices for polygons and a center/radius for circles (all objects have a height of 1, with z = 0).
I found that Python's VTK plotter makes it really easy to visualize simple objects, but there is a lack of documentation for multi-view windows. I also tried OpenCV, but it creates the 2D image in a separate window.
What would be the easiest way to achieve a simulator like the one below? There would be very few objects on the map, so efficiency is not my concern.
My strategy for making a 2D mini-map overlay like this is to use glWindowPos2d and glDrawPixels, and I have found it to be very successful. You'll want to turn off common OpenGL features like texturing, lighting, and the depth test. In the following example, minimap_x and minimap_y are the window coordinates of the upper-left corner of the minimap.
For example:
glDisable(GL_TEXTURE_2D)
glDisable(GL_LIGHTING)
glDisable(GL_DEPTH_TEST)
glWindowPos2d(minimap_x, window_height - (minimap_y + minimap_height))
glDrawPixels(minimap_width, minimap_height, GL_RGBA, GL_UNSIGNED_BYTE, minimap_image)
glEnable(GL_TEXTURE_2D)
glEnable(GL_LIGHTING)
glEnable(GL_DEPTH_TEST)
You'll need to provide the minimap_image data.
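As a side note, the y-flip in the glWindowPos2d call above converts the minimap's top-left corner from window coordinates (where y grows downward) to OpenGL's bottom-left raster origin (where y grows upward). A minimal sketch of that arithmetic, with illustrative numbers:

```python
# Convert a minimap's top-left window position (y down) into the
# bottom-left raster position expected by glWindowPos2d (y up).
def raster_pos(minimap_x, minimap_y, minimap_height, window_height):
    return minimap_x, window_height - (minimap_y + minimap_height)

raster_pos(10, 10, 128, 600)  # → (10, 462)
```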
In my applications, I'm typically using PyGame, and so the minimap is on a PyGame Surface. Converting the Surface to raw image data usable by glDrawPixels looks like this:
minimap_image = pygame.image.tostring(minimap_surface, "RGBA", True)
The final True argument flips the image vertically, which matches OpenGL's bottom-up row order.

Kinect V2 vs PyKinect2 : difference between depth image

I'm currently working with the Kinect V2 to get access to the depth image, using the Python library PyKinect2.
My first problem: I run KinectStudio 2 and look at the depth image, then look at my PyKinect2 implementation, and I get a different image. How is that possible?
Second: to get the depth of a specific point X(x, y), I use the method MapColorFrameToDepthSpace, and I manage to get coordinates that give me the distance on the depth frame. Is this approach correct?
To get the depth image:
def draw_depth_frame(self, frame):
    depth_frame = frame.reshape((424, 512, -1)).astype(np.uint8)
    return depth_frame
the image from Kinect 2:
the image from Kinect 2 with a color ramp:
the image from PyKinect2
Sincerely,
I found my mistake. This line is in fact wrong:
depth_frame = frame.reshape((424, 512,-1)).astype(np.uint8)
The depth is not mapped onto a uint8, but onto a uint16. By casting to uint8 I lose the distance information; I mostly got values of 255, which is not very useful. A distance of 1600, for example, does not fit into 8 bits, and because of this line I got the third depth image in my previous post. So the correction is simply:
depth_frame = frame.reshape((424, 512,-1)).astype(np.uint16)
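A small NumPy illustration of the loss described above (the depth values here are made up, and note that astype wraps modulo 256 rather than clamping):

```python
import numpy as np

depth_mm = np.array([500, 1600, 4000], dtype=np.uint16)  # 16-bit depths
as_uint8 = depth_mm.astype(np.uint8)    # high bits discarded: [244, 64, 160]
as_uint16 = depth_mm.astype(np.uint16)  # unchanged: [500, 1600, 4000]
```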

How can I rotate an image (loaded by the PhotoImage class) on a canvas, without PIL or pillow?

I would like to rotate an image to follow my mouse and my school computers don't have PIL.
Bryan's answer is technically correct in that the PhotoImage class doesn't contain rotation methods, nor does tk.Canvas. But that doesn't mean we can't fix that.
def copy_img(img):
    newimg = tk.PhotoImage(width=img.width(), height=img.height())
    for x in range(img.width()):
        for y in range(img.height()):
            rgb = '#%02x%02x%02x' % img.get(x, y)
            newimg.put(rgb, (x, y))
    return newimg
The above function creates a blank PhotoImage object, then fills each pixel of a new PhotoImage with data from the image passed into it, making a perfect copy, pixel by pixel... Which is a useless thing to do.
But! Let's say you wanted a copy of the image that was upside-down. Change the last line in the function to:
newimg.put(rgb, (x, img.height()-1 - y))
And voila! The function still reads from the top down, but it writes from the bottom up, resulting in a vertically flipped image. Want it rotated 90 degrees to the right instead?:
newimg.put(rgb, (img.height()-1 - y, x))
Swapping the roles of x and y makes it write columns for rows, effectively rotating the image. (For a non-square image, create newimg with its width and height swapped.)
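The index arithmetic can be checked without Tk at all. Here is a minimal sketch on a nested-list "image" (grid[y][x]) using the same mapping as the put call above; note the rotated result has its width and height swapped:

```python
def rotate90_cw(grid):
    """Rotate a grid[y][x] 'image' 90 degrees clockwise.

    Pixel (x, y) moves to (h - 1 - y, x), so the result
    has swapped dimensions.
    """
    h, w = len(grid), len(grid[0])
    return [[grid[h - 1 - c][r] for c in range(h)] for r in range(w)]

rotate90_cw([[1, 2],
             [3, 4],
             [5, 6]])  # → [[5, 3, 1], [6, 4, 2]]
```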
How deep you go into image-processing PhotoImage objects is up to you. If you can get access to PIL (Python Imaging Library)... someone has basically already done this work, refined it to be optimal for speed and memory consumption, and packaged it into convenient classes and functions for you. But if you can't, or don't want, PIL, you absolutely CAN rotate PhotoImages. You'll just have to write the methods yourself.
Thanks to acw1668's post that hipped me to the basics of PhotoImage manipulation here:
https://stackoverflow.com/a/41254261/9710971
You can't. The canvas doesn't support the ability to rotate images, and neither does the built-in PhotoImage class.
From the official Canvas documentation:
Individual items may be moved or scaled using widget commands described below, but they may not be rotated.

Basic importing coordinates into R and setting projection

Ok, I am trying to read a .csv file, get it into a SpatialPointsDataFrame, and set the projection to WGS 84. I then want to determine the distance between each point. This is what I have come up with:
cluster <- read.csv(file = "cluster.csv", stringsAsFactors = FALSE)
coordinates(cluster) <- ~Latitude+Longitude
cluster <- CRS("+proj=longlat +datum=WGS84")
d <- dist2Line(cluster)
This returns an error that says
Error in .pointsToMatrix(p) :
points should be vectors of length 2, matrices with 2 columns, or inheriting from a SpatialPoints* object
But this isn't working and I will be honest that I don't fully comprehend importing and manipulating spatial data in R. Any help would be great. Thanks
I was able to determine the issue I was running into. With WGS 84, the longitude comes before the latitude. This is just backwards from how all the GPS data I download is formatted (e.g. lat-long). Hope this helps anyone else who runs into this issue!
thus the code should have been
cluster <- read.csv(file = "cluster.csv", stringsAsFactors = FALSE)
coordinates(cluster) <- ~Longitude+Latitude
proj4string(cluster) <- CRS("+proj=longlat +datum=WGS84")
(Note that the CRS has to be assigned with proj4string(cluster) <- ...; writing cluster <- CRS(...), as in my original attempt, replaces the points with the CRS object itself.)

GraphViz set page width

I'm using GraphViz to determine the control locations in my C# application, but I'm not able to give the GraphViz dot generator the width of the output.
These are the parameters of the dot file I'm using:
digraph x {
    autosize=false;
    size="25.7,8.3!";
    resolution=100;
    node [shape=rect];
    (edge definitions come here)
    ...
But this seems to have no effect on the generated plaintext file.
Am I missing something to set the page width?
Regards
I added a -> b to your example. Here's the plaintext output I get:
digraph x {
    graph [autosize=false, size="25.7,8.3!", resolution=100];
    node [label="\N", shape=rect];
    graph [bb="0,0,54,108"];
    a [pos="27,90", width=0.75, height=0.5];
    b [pos="27,18", width=0.75, height=0.5];
    a -> b [pos="e,27,36.104 27,71.697 27,63.983 27,54.712 27,46.112"];
}
As you can see, the size and resolution attributes are included in the output.
You may change the values of size and resolution; this won't change anything other than those attributes in the plaintext output. The positions of all nodes and edges are relative to the bounding box (bb) of the graph.
However, if you decide for example to output a png, graphviz will use this information to scale the bounding box according to your size and resolution attributes and calculate the final image size.
In this example, the resulting png will be 444 by 831 pixels (8.3 inches with a resolution of 100 dpi result in 830 pixels, the pixel on top is probably due to a rounding error).
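The dpi arithmetic above can be written out explicitly (numbers taken from the answer's example; this only reproduces the height calculation, not graphviz's full layout scaling):

```python
size_height_inches = 8.3  # from size="25.7,8.3!"
resolution_dpi = 100      # from resolution=100

# Output height in pixels is roughly size height times resolution.
print(round(size_height_inches * resolution_dpi))  # 830
```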
You may find more detailed examples about size attribute and the resulting image size in this answer.
