I am using hector_mapping to create a map of my room. I used the openni node to get depth/image_raw from the Kinect sensor, converted it to laser scan data with "depthimage_to_laserscan", and used the resulting "scan" topic as input to hector_mapping. When I run all of this on one machine it works fine and creates a map, but when I run "openni_launch" and "depthimage_to_laserscan" on the Odroid and "hector_mapping" on my machine, I get the following error: "lookupTransform base_link to camera_depth_frame timed out. Could not transform laser scan into base frame". What does this error mean, and why did it not occur when everything was running on the same machine?
My Odroid and my machine communicate over a wireless network. My machine runs ROS Indigo on Ubuntu Trusty.
roscore
Assuming your roscore is running on your machine, you will have to set the ROS_MASTER_URI environment variable on the Odroid to http://yourmachine:11311 before launching anything on the Odroid. This tells the ROS nodes on the Odroid to connect to the roscore on your machine.
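As a rough sketch (the hostnames and IP addresses below are placeholders, not values from the question), the exports would look like this; setting ROS_IP as well helps when hostnames do not resolve across the wireless network:
On the Odroid:
$ export ROS_MASTER_URI=http://yourmachine:11311
$ export ROS_IP=<odroid-ip>
On your machine:
$ export ROS_IP=<your-machine-ip>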
Check frame rates and delays
There is a tool in the tf package to view the TF tree. It will also show you the delays between the links:
$ rosrun tf view_frames
This will collect TFs for 5 seconds and generate a file named frames.pdf which will contain all the details.
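If generating a PDF is inconvenient (for example over SSH), tf_monitor from the same package prints the delay between two specific frames; the frame names below are the ones from your error message:
$ rosrun tf tf_monitor base_link camera_depth_frame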
Machine time synchronization
If you see large delays between the TF links, you can try to synchronize your machines. To do that, synchronize the clock of each machine to the one that is running the roscore:
$ sudo ntpdate <your-roscore-machine>
Sometimes it takes several synchronization attempts to bring down the time difference between the machines.
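For example (the hostname is a placeholder), you can first query the offset without touching the clock, then apply the correction and repeat until the reported offset is small:
$ ntpdate -q yourmachine      # query only, prints the current offset
$ sudo ntpdate yourmachine    # actually adjust the clock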
Related
I've had issues setting up the DeepStream SDK IoT Edge module on the Jetson Nano for the last week and I cannot get past them. I've installed the IoT Edge runtime and everything IoT Edge needs to run. It runs perfectly, including other modules such as the simulated temperature sensor. However, when I deploy the DeepStream SDK v4.02 module on the Jetson Nano running JetPack 4.3, it starts and runs for a couple of minutes, then fails unexpectedly, starts up again after a while, and fails again. Sometimes I restart IoT Edge and it will start up again and then fail. When I use the IoT Edge developer extension in VS Code to see what messages are being sent up to the cloud, I can see the temperature sensor module's messages, but none from the NvidiaDeepstream module.
I've had a look at the logs for the NvidiaDeepstream container, and they show that it is printing out results (messages to the cloud), but it eventually exits with error code 1 and some sort of message at the end: INT8 is not supported, try INT16. All the Azure checks, connectivity, and configuration are correct. It is only the DeepStream SDK module that doesn't run properly.
Does anyone have any suggestions? What info should I provide to make this clearer? I am following the tutorial in the GitHub repository for NVIDIA DeepStream + Azure IoT Edge on an NVIDIA Jetson Nano: Link to tutorial
Link to logs of containers
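For reference, this is roughly how I have been pulling the module logs on the device itself (the module name is whatever the deployment manifest calls it, so NVIDIADeepStreamSDK below is a placeholder):
$ sudo iotedge list
$ sudo iotedge logs NVIDIADeepStreamSDK --tail 200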
I have a couple of x86 machines connected through a direct 10GbE connection. Interfaces are up and working (i.e. the machines can ping each other). Both machines run CentOS Linux.
I need to set up ptpd to synchronize the machines in order to get timestamps with microsecond resolution.
I have:
installed ptpd with yum
edited the /etc/ptpd2.conf file (setting one machine as masteronly and the other as slaveonly; see the sketch after this list)
run the service with service ptpd2 start
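A minimal sketch of what the two config files might contain (the key names assume the ptpd 2.3.x configuration format, and the interface name is a placeholder):
On the master:
ptpengine:interface = eth0
ptpengine:preset = masteronly
On the slave:
ptpengine:interface = eth0
ptpengine:preset = slaveonly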
The ptpd components communicate (verified with both tcpdump and the ptpd log files). Moreover, the /var/log/ptpd2_kernelclock.drift file shows the measured drift.
However, date shows that the times are not synchronized, and a simple client-server test confirms that the timestamps are not synchronized.
Any idea what is wrong?
The only solution we've found has been to reinstall ptpd through yum. For some (very weird) reason, re-installing ptpd with the same configuration file fixed the incorrect behavior.
What I am looking for: I need help debugging system crashes that happen consistently on my Jetson TK1.
System: I am using a Jetson TK1 board from NVIDIA, updated to the 21.3.4 Grinch kernel. All drivers are installed, and libopencv4tegra is installed alongside ROS (using hacked deb packages so that OpenCV is not overwritten). Everything used to work perfectly in this exact setup.
When the crashes happen: I am running a VSLAM program that uses a camera connected to the USB port. The program makes heavy use of OpenCV. It used to run for over a month without problems in the current setup. Now I am getting consistent system crashes that result in a total system freeze. When I am connected over ssh, I lose the connection. When I connect a monitor to see what happens on the system while it crashes, I can see everything freeze. The USB port also seems to turn off, since not even a USB mouse and keyboard work anymore after the crash. The Jetson stays on, though.
Crash Logs: I have looked into the logs under /var/log/, but none of them show any messages from the time of the crash.
I have run memtester before; it didn't report any bad memory. While the program is running and crashing, onboard memory usage is about 60-75% (as shown by "top"), and CPU usage is around 60%.
The weird thing is that this exact setup has been running just like this for over a month now.
I need to know: are there any other logs in which I could find information about the crash? How can I find out whether this is a hardware failure or a software issue?
Thanks
-Marc
I'm using my Raspberry Pi 2 Model B as a small and super simple LAMP development server. However, it is on the edge of acceptable performance, especially when it comes to bulk copying or handling large MySQL databases.
The set-up:
The Pi has no display attached
I access the device via SSH and WinSCP
I changed the GUI boot behavior via raspi-config to command line only
What makes me curious is that whenever I connect to the Pi via the Windows Remote Control tool I still get a GUI.
Therefore I'm wondering whether there are any negative performance implications, or whether Raspbian does not load the GUI until it is explicitly requested by the remote control tool.
If there are negative implications, what configurations should I change? (PS: I like to have a GUI from time to time but I could do without it.)
Unless the RPi is very starved of memory, there will be no performance difference as long as the graphical interface isn't actively being used.
Having said that, I would not try to run a large database on it unless I was using a class 10 card or better and the database configuration was heavily tuned.
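A quick way to verify that on the Pi itself (standard Linux tools, nothing Raspbian-specific) is to check free memory and whether an X server is running at all:
$ free -h          # how much memory is actually free
$ pgrep -a Xorg    # prints a line if an X server is running, nothing otherwise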
All, forgive me, I am a beginner in WDK development. I was reading a tutorial from MSDN here,
and it says:
Typically when you test and debug a driver, the debugger and driver
run on separate computers. The computer that runs the debugger is
called the host computer, and the computer that runs the driver is
called the target computer. The target computer is also called the
test computer.
So I was wondering: can the host computer and the target computer be the same one? Thanks.
Live debugging with the Windows kernel debugger is possible. Not all of the debugger's commands will be available (http://msdn.microsoft.com/en-us/library/windows/hardware/ff553382(v=vs.85).aspx).
Another option is to use two virtual machines and redirect the serial ports of those VMs through a named pipe or TCP/IP. If you are just beginning and mostly playing with the Toaster sample driver, this is more than enough.
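As a rough sketch (the pipe name is a placeholder): on the target VM you enable kernel debugging over its first serial port, and on the host you attach WinDbg to the named pipe that the hypervisor maps that COM port to.
On the target VM (elevated command prompt):
bcdedit /debug on
bcdedit /dbgsettings serial debugport:1 baudrate:115200
On the host, after pointing the target's COM1 at a named pipe in the VM settings:
windbg -k com:pipe,port=\\.\pipe\com_1,resets=0,reconnect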
It is always advisable to keep the host PC and the test PC separate: when you are developing a device driver you might end up crashing the system multiple times, which might lead to hard disk failure, and if the host PC is the same as the test PC you would lose all your data.