I want to install the OV7251 camera driver to work with a module I recently purchased, the Arducam OV7251 MIPI. I need it for a SLAM-like technique called Visual Inertial Navigation (VIN), and global shutter cameras are preferred for this. As for my system, I'm running ROS Kinetic on an RPi 3B+ under Ubuntu 16.04. I chose this camera because it is near my price point (under $20) and connects through the RPi's CSI port, which sources say is easier and faster than cameras that go through USB.
I want to take this camera and publish its data to a topic so that the repository I'm using for VIN, OpenVINS, can track the camera's position. The camera doesn't have much documentation beyond the manufacturer's GitHub page, which does not work on Ubuntu and cannot connect to ROS. I'm fairly inexperienced with RPis and with ROS, since I originally wanted to do this on an Arduino (which turned out to be impossible), so I doubt I could write even a simple ROS node, let alone one that talks to the CSI port.
Currently I am unable to find many libraries for this, and the help given to me has proved insubstantial. The camera does not have a driver supported on the RPi out of the box, which is why no /dev/video device shows up, Cheese turns up nothing, and the command vcgencmd get_camera reports no detected devices. Someone suggested kernel hacking in order to enable the module in menuconfig, using sources like the ones here. While I do not know much about kernel hacking, he recommended that I follow this guide, and that after running the defconfig line I should search for "OV7251" in menuconfig and modularize the only entry that comes up. Despite flashing and repeating this process multiple times to make sure I had not chosen the wrong branch (rpi-5.4.y) or the wrong model (RPi 3B+), I ended up stuck on the rainbow screen after every reboot. I know the rainbow screen means either low power, which it isn't, since the board ran fine before, or a kernel error, which seems most likely.
Now, while I would definitely like to fix the rainbow screen error, I would also like to know: once the OV7251 driver is installed, how do I get it working with ROS so it can publish data to topics? Since I doubt I could write my own node, is there a library I should look for, would libraries that previously failed because of the missing driver suddenly work, or would I have to take an existing one and modify it? In any case, a fairly low-level tutorial for accomplishing this would be quite handy, seeing as I am new to this.
But in case this is not a software problem, and there is a good reason this camera is not supported, is there any other cheap global shutter camera I could work with? I couldn't find many in my searches, but maybe you all have better luck or experience in this field. I did manage to find another library by the same manufacturer that supports my camera model and even has a ROS node that works on Ubuntu. However, I believe that if it can be done that way, it should also be doable over just the CSI port rather than by buying an additional $40 USB camera HAT for the Pi, and on top of that I am starting to doubt the validity of this company's repositories.
Still, the fact that I can find so little information about using this camera on an RPi's CSI port, given how well-known this company is, makes me worry that it could be impossible. If it is, do link me some other good and hopefully well-documented cameras, which could very well be a lot to ask for. And if it is simply impossible to get the results I want with the constraints I have set, how badly would a rolling shutter camera affect VIN's performance, and is there any special dataset designed for rolling shutter that could minimize the drop in quality? This terrain is all new to me.
OK, so I got an RPi engineer to add a dtoverlay for the OV7251 to the RPi firmware, and the most recent rpi-update includes the overlay in the kernel.
I ran sudo rpi-update to install the update, then added dtoverlay=ov7251 to /boot/config.txt (editing it with sudo nano /boot/config.txt) to enable the overlay. The repository only has one dependency, v4l-utils, which is installed easily enough with sudo apt-get install v4l-utils. Finally, I ran sudo reboot to apply the changes.
To pull the images into ROS, I edited a V4L2 node called usb_cam so that it accepts the pixel format the OV7251 uses (Y10). My fork can be found here. To install it (since the docs for the original repo say very little about installation), I ran:
cd ~/catkin_ws/src
git clone https://github.com/ai-are-better-than-humans/usb_cam.git
cd ..
catkin_make
After that, all you have to do is roslaunch usb_cam usb_cam-test.launch to start the node. Mine started out dark, so I had to go into the launch file and play with the brightness for a bit. While you're there, make sure the pixel_format parameter is set to Y10.
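In case you're wondering what handling Y10 involves: V4L2's Y10 format stores each 10-bit greyscale pixel LSB-aligned in a 16-bit little-endian word, so the core of it is just shifting each sample down to 8 bits before packing a mono8 image. This is only a rough, hypothetical sketch of that idea, not the actual code from my fork:

// Hypothetical sketch: convert a V4L2 Y10 buffer (10-bit greyscale,
// LSB-aligned in 16-bit words) down to 8-bit mono pixels.
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint8_t> y10_to_mono8(const uint16_t* src, std::size_t pixel_count)
{
    std::vector<uint8_t> out(pixel_count);
    for (std::size_t i = 0; i < pixel_count; ++i)
        out[i] = static_cast<uint8_t>(src[i] >> 2);  // keep the top 8 of the 10 bits
    return out;
}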
You should get a sensor_msgs::Image message published on a topic named "<camera_name>/image_raw"; you can run rqt_graph to visualize it. Big thanks to 6by9 over at the Raspberry Pi forums, I don't think I could have gotten it done without him; he did a lot of work that I'm very thankful for. I thought I'd share the knowledge back here in case anyone finds it useful.
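If you just want to check the images from code, a minimal subscriber sketch looks roughly like this (it assumes the default camera name usb_cam, so the topic is /usb_cam/image_raw, and a made-up node name image_listener; adjust both to your setup):

// Minimal ROS 1 (Kinetic) subscriber sketch for the published images.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
  // Just report what arrived; OpenVINS or your own code would consume it here.
  ROS_INFO("Got %ux%u image, encoding %s",
           msg->width, msg->height, msg->encoding.c_str());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "image_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/usb_cam/image_raw", 1, imageCallback);
  ros::spin();
  return 0;
}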
EDIT: I hear you can also compile with catkin_make --pkg usb_cam -DCMAKE_BUILD_TYPE=Release instead of plain catkin_make if the node takes too much CPU. Also, if you see a ton of error messages while compiling, it's fine, it should still work; but if you want to get rid of them, you can refer to this answer from a ROS thread:
It looks like you need to install libavcodec. I don't know the exact command to install it off the top of my head, but the format will look like this:
sudo apt-get install libavcodec
The exact package name might not be libavcodec. It may look something like libavcodec-VERSION-NUMBER or libavcodec-dev. In these situations you can search for packages with a command like this:
apt-cache search libavcodec
This will find all packages whose text contains "libavcodec". This should find the correct package for you to install.
I'm using FunGEn to create a game in Haskell, and it runs without problems on my PC with Windows XP. The problem is when I try to run it on another PC. I've tried running it on three other PCs with Windows XP, but when the game window opens it is all white and I can't do anything in the game. The strange thing is that I can run the classic FunGEn examples (Pong and Worms) on those PCs without problems. I have a friend who can run my example on Windows 7 with a PC similar to mine.
The other PCs have worse hardware than mine, so I think that's the problem here, but I want to know if it could be something else.
I'm hoping someone can point me in the right direction.
I'm trying to modify an Ubuntu 10.10 distro by hiding the entire desktop, so that after the user boots it up all they see is a solid colour. The reason is that I am currently writing a Glade application manager which will be the user's only interface with the OS and will sit on top of this background. I think I'm looking for a way to create a kiosk distro. I have looked and found no really good tutorials. I've not really messed with Linux much in the past, so if anyone has any pointers or ideas it would be a real help.
Cheers in advance
Chris
It seems that you're not really ready for such a huge project...
Anyway, Ubuntu ships, as far as I know, with the KDE or GNOME desktop environment. If you don't need it, then don't launch it: just start X11 with a solid background color. It really doesn't make sense to fire up KDE or GNOME and then hide everything but a solid background color.
That's like saying "I want a seat", then buying a jumbo jet and removing everything but one single seat.
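For what it's worth, painting the root window a solid color is all that a tool like xsetroot does; the sketch below shows the same thing in raw Xlib (assuming libx11-dev is installed, compile with g++ solid.cpp -lX11 and run it inside your bare X session). The color name is just an example.

// Minimal Xlib sketch: fill the root window with a solid color.
#include <X11/Xlib.h>

int main()
{
    Display* dpy = XOpenDisplay(NULL);          // connect to the running X server
    if (!dpy)
        return 1;

    int screen = DefaultScreen(dpy);
    Window root = DefaultRootWindow(dpy);
    Colormap cmap = DefaultColormap(dpy, screen);

    XColor color, exact;
    XAllocNamedColor(dpy, cmap, "steel blue", &color, &exact);  // pick any X color name

    XSetWindowBackground(dpy, root, color.pixel);  // set and repaint the root window
    XClearWindow(dpy, root);
    XFlush(dpy);

    XCloseDisplay(dpy);
    return 0;
}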
I use gnome-terminal (Ubuntu 10.10). I like it, though I'd be willing to switch to another for this feature.
Can anyone tell me how I can broadcast keystrokes to multiple windows? The closest thing I've found is the "Terminator" program, which allows broadcasting to multiple tabs, but not to multiple windows. Apparently Konsole had a similar feature in v3 that was removed in v4 (no idea why). There are also similar capabilities in screen, but not between windows, as far as I can tell.
I've spent a number of hours looking, but no joy.
I'd also be willing for a general solution (input to multiple windows of any kind) that I could adapt for use with terminal windows.
Thanks.
Try a program called keyboardcast: apt-get install keyboardcast
For the source: http://web.archive.org/web/20100130104001/http://desrt.mcmaster.ca/code/keyboardcast/
I have had this problem for 3-4 months: my OpenGL code does not run as well on Linux as it does on Windows. I have a project that I need to run on Linux, but it uses timers, pipes, ... from the Windows API. I need to migrate the code, but it doesn't look good; for example, the graphics are flashing on the screen! Is that caused by my graphics card (or its driver) on Linux, or is it some other difficulty?
Also, I have an ATI HD3470 in a VAIO FW13GU/H laptop running Debian 5. Are there any good drivers for the ATI HD series? (I have seen some drivers, but they're not so good :-S)
Try creating a simple demo program that uses the OpenGL features you're using in your code, and try to isolate which feature causes the problem. If all of them work as you expect, there is a chance that the bug is in your own code: you may be assuming some platform-specific behavior that gets borked on Linux.
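A minimal skeleton for such a demo (using GLUT, e.g. freeglut3-dev on Debian; link with -lglut -lGL) could look like this; swap the triangle for whichever feature you suspect. Note that requesting a double-buffered window and calling glutSwapBuffers is also the usual fix for drawing that flashes on screen.

// Minimal isolated OpenGL test: one flat triangle in a double-buffered window.
#include <GL/glut.h>

static void display()
{
    glClear(GL_COLOR_BUFFER_BIT);

    glBegin(GL_TRIANGLES);                       // replace with the feature under test
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
    glEnd();

    glutSwapBuffers();                           // double buffering avoids flicker
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB); // request a double-buffered window
    glutInitWindowSize(640, 480);
    glutCreateWindow("isolated OpenGL test");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}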
I once had a bug when porting Windows C++ code where the 3D mesh parsing code didn't correctly handle Windows-style line endings, and that caused the mesh to come out with ugly colors: it passed a number string to a home-brewed string-to-int function (which I promptly replaced with atoi()), and that function silently broke when it met the extra line-ending character.
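To illustrate the pitfall (this is a made-up example, not the original parser): reading a CRLF-encoded file with std::getline on Linux leaves a trailing '\r' on every line, which a hand-rolled parser can choke on, so strip it (or use a tolerant function like atoi/strtol) before converting.

// Sketch: strip the Windows '\r' from each line before parsing numbers.
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream in("mesh.txt");                // hypothetical input file
    std::string line;
    while (std::getline(in, line)) {
        std::string::size_type cr = line.find('\r');
        if (cr != std::string::npos)             // drop the stray carriage return
            line.erase(cr);
        int value = std::atoi(line.c_str());     // atoi also stops at non-digit characters
        std::cout << value << '\n';
    }
    return 0;
}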