I'm trying to include OpenCV (version 2.3.1) in a project I'm working on. A camera is sending my program (in Microsoft Visual C++ 2008 on a Windows 7 64-bit machine) an image stream, which the program stores in an unsigned 8-bit integer buffer. I would like to display this stream in a window using OpenCV. Right now, I can't seem to get any images to display in my OpenCV windows, so I'm not using my image stream yet; just a JPEG file.
First I declare my window:
namedWindow( "Window", CV_WINDOW_AUTOSIZE );
Then I try to fill it:
char* imgName = "C:\\...\\Jellyfish.jpg";
Mat imgMat = imread(imgName, 1);
if(imgMat.data)
{
imshow( "Window", imgMat );
}
When my program gets to the point where the window gets declared, a tiny gray window appears. When it reaches the point where it is supposed to display the image, the window's dimensions change to that of the image (I've tested this with different images) but the inside of the window remains a plain gray box.
What is causing this strange behavior? The program has obviously found the image, or it would not have been able to resize the window correctly.
You need to add a waitKey(2) call after the imshow.
From OpenCV documentation for waitKey:
This function is the only method in HighGUI that can fetch and handle events, so it needs to be called periodically for normal event processing unless HighGUI is used within an environment that takes care of event processing.
Without this call, Windows is unable to handle the PAINT event and redraw your window.
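As a minimal illustration of the pattern (sketched here in Python for brevity; the C++ API exposes the same calls, and the file name is a placeholder):
import cv2

# Load a test image; the path is a placeholder for your own file.
img = cv2.imread("Jellyfish.jpg", cv2.IMREAD_COLOR)

if img is not None:
    cv2.namedWindow("Window", cv2.WINDOW_AUTOSIZE)
    cv2.imshow("Window", img)
    # waitKey pumps the GUI event queue; without it the window is
    # never repainted and stays gray. 0 means "block until a key press".
    cv2.waitKey(0)
    cv2.destroyAllWindows()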
I'm working on generating an SVG image to represent a graph. For each node, I would like to display an image. As written in the documentation, to use an image I need to call svgaddfile and svgaddimage.
I wrote this code (I copy only the relevant lines):
svgsetgraphviewbox(0, 0, max(i in V_zero_n_plus_one) X(i)+10, max(i in V_zero_n_plus_one) Y(i)+10)
svgsetgraphscale(5)
svgsetgraphpointsize(5)
svgaddgroup("Customers", "Customers", SVG_BLACK)
svgaddgroup("Depot", "Depot", SVG_BROWN)
svgaddpoint(X(0), Y(0))
svgaddtext(X(0)+0.5, Y(0)-0.5, "Depot")
svgaddfile("./city2.jpg", "city.png")
svgaddimage("city.png", X(0)+0.5, Y(0)-0.5, 20, 20)
svgaddgroup("Routes", "Delivery routes")
svgsave("vrp.svg")
svgrefresh
svgwaitclose("Close browser window to terminate model execution.", 1)
I obtain the following image:
The image is 512x512. What am I doing wrong? Thanks!
There seems to be a timing issue with the upload of the graphic file when you use option '1' of 'svgwaitclose' while running from Workbench (this option means that the underlying HTTP server run by mmsvg is stopped immediately once the SVG file has been uploaded).
You could either work with this form:
svgwaitclose("Close browser window to terminate model execution.") ! NB: the second argument defaults to value 0
or add a small delay before this statement:
sleep(2000) ! Wait for 2 seconds
svgwaitclose("Close browser window to terminate model execution.", 1)
I am working on an application in PyQt5 which has two docks, one on either side, with an OCC 3D viewer and a TextEdit in the middle. The .ExportToImage() method of the OCC viewer allows taking a screenshot of the viewer. But since the application has a responsive design, the viewer can be resized to be thin (on certain display resolutions), and the screenshot then also comes out thin.
I've tried resizing the window to a particular size and then hiding everything except the 3D viewer. This enlarges the viewer, which should avoid the cropped screenshot. But when I hide, resize, and then take the screenshot, it still comes out thin. Here's the code:
def take_screenshot(self):
    Ww = self.frameGeometry().width()
    Wh = self.frameGeometry().height()
    self.resize(700, 500)
    self.outputDock.hide()  # Dock on the right
    self.inputDock.hide()   # Dock on the left
    self.textEdit.hide()    # TextEdit on the bottom-middle
    self.display.ExportToImage(fName)  # display is the 3D viewer's display on the top-middle
    self.resize(Ww, Wh)
    self.outputDock.show()
    self.inputDock.show()
    self.textEdit.show()
I guess this happens because the .show(), .hide(), and .resize() methods of PyQt5 are multithreaded, so as soon as I call them they don't run consecutively but in parallel, and thus the screenshot is taken before the other operations complete.
Is there a way to resolve this? Or is there a better way?
There are no multiple threads here. Qt processes events in a loop, the so-called event loop: hide() and resize() only schedule events, and the widgets are actually updated when those events are processed. Deferring the capture with QTimer.singleShot(0, ...) pushes it behind the pending events, so it runs after the hides and the resize have taken effect.
Try this:
from PyQt5.QtCore import QTimer

def take_screenshot(self):
    # Remember the original size so _capture can restore it later;
    # locals would be out of scope by then, so store them on self.
    self._Ww = self.frameGeometry().width()
    self._Wh = self.frameGeometry().height()
    self.resize(700, 500)
    self.outputDock.hide()  # Dock on the right
    self.inputDock.hide()   # Dock on the left
    self.textEdit.hide()    # TextEdit on the bottom-middle
    # Let the event loop process the pending hide/resize events
    # before taking the screenshot.
    QTimer.singleShot(0, self._capture)

def _capture(self):
    # app.processEvents()  # open this line if it still doesn't work, I don't know why.
    self.display.ExportToImage(fName)  # display is the 3D viewer's display on the top-middle
    self.resize(self._Ww, self._Wh)
    self.outputDock.show()
    self.inputDock.show()
    self.textEdit.show()
The purpose is to take data from a virtual camera (a camera in a Gazebo simulation, updating every second) and use Detectron2 (which requires data to come from cv2.VideoCapture) to recognize other objects in the simulation. The virtual camera of course does not appear in lspci, so I can't simply use cv2.VideoCapture(0).
So my code is
bridge = CvBridge()
cv_image = bridge.imgmsg_to_cv2(data, desired_encoding='bgr8') #cv_image is numpy.ndarray, size (100,100,3)
cap = cv2.VideoCapture()
ret, frame = cap.read(image=cv_image)
print(ret, frame)
but it just prints False None, I assume because there's nothing being captured in cap. If I replace the cap = cv2.VideoCapture() line with cap = cv2.VideoCapture(cv_image), I get the error
TypeError: only size-1 arrays can be converted to Python scalars
since I believe it requires either an integer (representing the webcam number) or a string (representing a video file).
And for reference,
cv_image = bridge.imgmsg_to_cv2(data, desired_encoding='bgr8') # cv_image is numpy.ndarray
cv2.imshow('image', cv_image)
cv2.waitKey(1)
displays the image perfectly fine. Could there be a way to use imshow() or something similar as input for VideoCapture()?
However, cap = cv2.VideoCapture(cv2.imshow('image', cv_image)) opens a blank window and gives me
[ERROR:0] global /io/opencv/modules/videoio/src/cap.cpp (116) open VIDEOIO(CV_IMAGES): raised OpenCV exception:
OpenCV(4.2.0) /io/opencv/modules/videoio/src/cap_images.cpp:293: error: (-215:Assertion failed) !_filename.empty() in function 'open'
How can I create a cv2.VideoCapture() object that can use the image data that I have? Or what's something that might point me in the right direction?
Ubuntu 18.04 and Python 3.6 with opencv-python 4.2.0.34
From what I found on the Gazebo tutorials page:
In Rviz, add a "Camera" display and under "Image Topic" set it to /rrbot/camera1/image_raw.
In your case it probably won't be the /rrbot/camera1/ name, but the one you set in the .gazebo file:
<cameraName>rrbot/camera1</cameraName>
<imageTopicName>image_raw</imageTopicName>
<cameraInfoTopicName>camera_info</cameraInfoTopicName>
So you can create a subscriber and use cv2.VideoCapture() for every single image from that topic.
My solution was to rewrite the handling of Detectron2's --input flag in the demo so that it constantly runs a ROS2 callback with demo.run_on_image(cv_data). So instead of making it process a video, it just quickly processes each new image, one at a time. This is a workaround so that cv2.VideoCapture() is not needed.
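A minimal sketch of that callback approach, assuming an rclpy node, a sensor_msgs/Image topic called /camera/image_raw (a placeholder for your own topic), and a demo object exposing run_on_image() as in the Detectron2 demo script:
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

class InferenceNode(Node):
    def __init__(self, demo):
        super().__init__('detectron_inference')
        self.bridge = CvBridge()
        self.demo = demo  # e.g. the demo object from Detectron2's demo script
        self.create_subscription(Image, '/camera/image_raw', self.callback, 10)

    def callback(self, msg):
        # Convert the ROS image message to an OpenCV ndarray and run
        # inference on this single frame; no cv2.VideoCapture() involved.
        cv_image = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        predictions, vis_output = self.demo.run_on_image(cv_image)
        self.get_logger().info(str(predictions))

# Usage (after constructing `demo` as in Detectron2's demo script):
# rclpy.init(); rclpy.spin(InferenceNode(demo))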
Information:
I have a simple ToF (Time of Flight) camera module provided by a vendor that only contains a Depth Node.
I've already set up the PCL environment and can compile and execute the sample code it provides.
The ToF camera module comes with source code that shows how to get the raw depth data (the x, y, z values) from the hardware, but not how to stream it as both a point cloud image and a depth image.
Win 7 64-bit, Visual Studio 2008, PCL all-in-one 32-bit.
As a result, I plan to use PCL to show the point cloud image and depth image from the x, y, z data I can get from that camera module, and furthermore to stream it if possible.
However, as far as I know, PCL tends to store all the point cloud data in a .pcd file and then read it back to produce a point cloud image and a depth image.
It is obviously too slow for streaming if I have to call savePCD() and readPCD() for every single frame. So I studied the "openni grabber" and "openni range image visualization" sample code and tried to execute them; sadly, "No device connected." is all I got.
I have a few questions I'd like advice on before I try:
Is there a way to use OpenNI with a device other than Kinect, Xtion, and PrimeSense? Even a device with no name that only has a depth node?
Can PCL show a point cloud image and depth image without accessing a .pcd file? In other words, can I just assign the value of each vertex and construct an image?
Can I just normalize all the vertices and construct an image with OpenCV alone?
Is there any other method to stream that ToF camera module as a point cloud image and depth image?
1) Changing the OpenNI grabber to use your own ToF camera would be much more work than simply using the example from the camera in a loop, as shown below.
2) Yes, PCL can show a point cloud image and depth without accessing a .pcd file. All the .pcd loader does is parse the pcd file and place the values into the cloud structure. You can do this directly with your camera data, as shown below.
3) Not sure what you mean here. I propose you try the PCL visualizer or cloud viewer, as suggested below.
You can do something like:
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
cloud->is_dense = true;
cloud->width = widthOfTOFsensor;
cloud->height = heightOfTOFsensor;
cloud->points.resize(cloud->width * cloud->height);
// Create some loop:
//   grab a new frame from the ToF sensor
for (size_t pointIndex = 0; pointIndex < cloud->points.size(); pointIndex++)
{
    // I don't know the ToF data format, so the field names are just a guess.
    cloud->points[pointIndex].x = tofSensorData[pointIndex].x;
    cloud->points[pointIndex].y = tofSensorData[pointIndex].y;
    cloud->points[pointIndex].z = tofSensorData[pointIndex].z;
}
// Plot the data using pcl visualizer or cloud viewer, see:
http://pointclouds.org/documentation/tutorials/cloud_viewer.php#cloud-viewer
http://pointclouds.org/documentation/tutorials/pcl_visualizer.php#pcl-visualizer
I have a WinForms application in which a PictureBox displays a pretty big image (2550 by 4500). This bitmap image is created from a byte array using an unsafe pointer, like this:
Bitmap img;
unsafe
{
    fixed (Byte* intPtr = &outBuffer[0])
    {
        // Note: this Bitmap constructor does not copy the pixel data;
        // the Bitmap keeps referencing outBuffer, which is pinned only
        // while this fixed block is active.
        img = new Bitmap(_width, _height, _width * 3,
            System.Drawing.Imaging.PixelFormat.Format24bppRgb, new IntPtr(intPtr));
    }
}
So far, no problems. After displaying the image, I saved the pixel values into a Matlab .mat file using this DLL (http://www.mathworks.com/matlabcentral/fileexchange/16319). Still no problem with the saving itself.
However, the image in the PictureBox then turned into a noisy black-and-white image; the original image was completely lost.
Things I tried:
Added the Bitmap to the watch window and found that the pixel values had all changed; the Bitmap is corrupted.
Redid the unsafe transformation every time after saving; however, this brings another problem: an "AccessViolationException in Drawing.dll".
It must have something to do with the .mat saving part, because if I skip the saving there is no problem at all. But I don't know how they are related; memory? I tried a smaller image and had no problem, so I'm assuming the "save .mat" process corrupted the Bitmap?
Any idea would be helpful! Thank you