I'm experiencing very high CPU usage (~100%) with the Qt version of T32 on Linux, even when the program is waiting for user interaction. The executable is t32marm-qt.
This does not happen when I use the standard Tcl-based t32marm executable.
A strace shows that the executable continuously cycles on the
clock_gettime(CLOCK_REALTIME,...)
syscall.
The Linux distribution is Mint 14 32-bit (a derivative of Ubuntu 12.10).
Has anybody experienced this behavior?
If so, is it a bug or just a misconfiguration?
Yes, I have just received confirmation that it is a software bug, fixed in more recent versions of the tool. If you encounter this problem, update your version.
Related
For: QNX Software Development Platform 6.5.0
I have run into a problem on a QNX 6.5.0 system where my program silently exits; I have traced it to a race condition similar to the one in this post:
Thread stops randomly in the middle of a while loop
I have done some research and found that QNX has some built-in tools to monitor memory and detect any leaks present in a program. However, the instructions I have come across are for the QNX 6.5.0 IDE GUI, and I am running QNX on a server from the command line.
example: http://www.qnx.com/developers/docs/6.5.0/index.jsp?topic=%2Fcom.qnx.doc.ide.userguide%2Ftopic%2Fmemory_DetecMemLeaks_.html
I'm kind of stuck, as there isn't really a simple way to do this: the software is designed for logging, takes thousands of entries per second, and exits silently after a few hours. So I can't sit around waiting two hours for each round.
Has anyone had experience with debugging memory leaks in QNX?
Edit: I am also using boost::lockfree::spsc_queue which may be causing the crash.
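For what it's worth, spsc_queue is only safe with exactly one producer thread and one consumer thread; pushing from two threads (or popping from two) violates its contract and can corrupt the queue, which would match a crash that only shows up under load. A minimal sketch of conforming usage (the capacity and payload type here are made up for illustration):

#include <boost/lockfree/spsc_queue.hpp>
#include <iostream>
#include <thread>

// Exactly ONE producer thread and ONE consumer thread may touch the queue.
boost::lockfree::spsc_queue<int, boost::lockfree::capacity<1024>> queue;

int main() {
    std::thread producer([] {
        for (int i = 0; i < 100000; ++i)
            while (!queue.push(i)) { }   // spin until a slot frees up
    });
    std::thread consumer([] {
        int value;
        for (int received = 0; received < 100000; )
            if (queue.pop(value))
                ++received;
    });
    producer.join();
    consumer.join();
    std::cout << "done\n";
}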
I was able to solve this by utilising Valgrind. I compiled my program for Linux, ran it under Valgrind, and was able to debug my issue this way.
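For anyone following the same route, this is the kind of defect Valgrind's memcheck reports; a deliberately leaky toy program (names made up):

#include <cstdlib>

int main() {
    // 1 MiB allocated and never freed; running this binary under
    // valgrind --leak-check=full reports it as "definitely lost".
    char *buf = static_cast<char *>(std::malloc(1 << 20));
    buf[0] = 'x';   // touch the block so it isn't optimized away
    return 0;       // missing std::free(buf)
}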
I had some problems with a Cilk++ program that works well on Windows but not on Linux:
on Windows, increasing the number of threads decreases the execution time,
but on Linux, increasing the number of threads increases the execution time.
I am using Ubuntu Linux, kernel 2.6.35-22-generic x86_64 GNU/Linux.
I can't identify the source of the problem, so can someone please help?
Without sources, there's no way to know. There may be a resource that has a per-thread implementation on Windows and a shared implementation on Linux.
I'd recommend using a performance analyzer like Intel's VTune Amplifier to figure out where your application is spending its time.
- Barry Tannenbaum
Intel Cilk Plus Runtime Development
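To make the contention hypothesis above concrete, here is a plain std::thread sketch (not Cilk code; the workload and iteration counts are invented) in which adding threads increases wall-clock time because all work funnels through one lock:

#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <mutex>
#include <thread>
#include <vector>

std::mutex shared_lock;  // stands in for a lock hidden inside a runtime/libc
long counter = 0;

void worker(long iters) {
    for (long i = 0; i < iters; ++i) {
        std::lock_guard<std::mutex> guard(shared_lock);  // serializes all threads
        ++counter;
    }
}

int main(int argc, char **argv) {
    const int nthreads = argc > 1 ? std::atoi(argv[1]) : 4;
    const long total = 8000000;
    const auto start = std::chrono::steady_clock::now();

    std::vector<std::thread> pool;
    for (int i = 0; i < nthreads; ++i)
        pool.emplace_back(worker, total / nthreads);
    for (auto &t : pool)
        t.join();

    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start).count();
    std::printf("%d threads: %ld ms\n", nthreads, (long)ms);
}

Running it with 1, 8, 32 threads shows the time growing with the thread count; a hidden shared lock inside a runtime or libc function would produce exactly the inverted scaling described in the question.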
I'm running the x64 version of a simulation app on a very nice IBM xSeries server (4 eight-core CPUs). The OS is Linux: Red Hat 5.6 with an x64 kernel.
The app crashes exactly when it needs more than 2 GB of memory (as is evident from its own log files).
My question really is how to debug this issue - what relevant environment settings should I look at? Is 'ulimit' (or sysctl.conf) relevant to this issue? What additional info can I post in order for you to help me?
This would be an application problem. Although the application is compiled as a 64-bit application, it still uses signed 32-bit integers for some things instead of proper pointers or the appropriate *_t types.
If you compile the application yourself, look for any "unsigned" or "truncated" warnings in the compilation output, and fix them.
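As an illustration of that failure mode (variable names invented), a size computed in a signed 32-bit integer goes wrong exactly at the 2 GB boundary, even in a 64-bit binary:

#include <cstdint>
#include <cstdio>

int main() {
    int32_t elements = 600000000;   // ~600 million records
    int32_t bad = elements * 4;     // 2.4e9 exceeds INT32_MAX (~2.1e9): signed
                                    // overflow, undefined; wraps negative in practice
    std::printf("32-bit size: %d\n", bad);

    size_t good = static_cast<size_t>(elements) * 4;  // 64-bit arithmetic is fine
    std::printf("64-bit size: %zu\n", good);
    return 0;
}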
The shmmax value defines the maximum size of a single System V shared memory segment that applications can allocate; you should check the value with this command:
cat /proc/sys/kernel/shmmax
If you need to increase it, you can use:
echo 4096000000 > /proc/sys/kernel/shmmax
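If you want to verify what shmmax actually gates, a quick test (sizes invented) is to ask for a single System V segment bigger than the limit; shmget() fails with EINVAL when the request exceeds shmmax:

#include <sys/ipc.h>
#include <sys/shm.h>
#include <cerrno>
#include <cstdio>
#include <cstring>

int main() {
    size_t request = 3UL * 1024 * 1024 * 1024;   // ask for a 3 GB segment
    int id = shmget(IPC_PRIVATE, request, IPC_CREAT | 0600);
    if (id == -1) {
        std::printf("shmget failed: %s\n", std::strerror(errno));
        return 1;
    }
    shmctl(id, IPC_RMID, 0);   // mark for removal so it doesn't linger
    std::printf("a %zu-byte segment is allowed\n", request);
    return 0;
}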
Bye
I am experiencing strange behavior from GDB. When running a post-mortem analysis of a core dumped from a heavily multithreaded C++ application, the debugger commands
bt
where
info threads
never tell me the thread in which the program actually crashed; they keep showing me thread number 1. As I am used to seeing this work on other systems, I am curious whether this is a bug in GDB or whether the behavior was changed somehow. Can anyone point me to a solution? It is a pain to search through 75 threads just to find out something the debugger already knows.
By the way, I am on Debian Squeeze (6.0.1), the version of GDB is 7.0.1-debian, and the system is x86, entirely 32-bit. On my older Debian (5.x) installation, debugging a core dumped by the exact same source gives me a backtrace of the correct thread, as does GDB on an Ubuntu 10.04 installation.
Thanks!
GDB does not know which thread caused the crash, and simply shows the first thread that it sees in the core.
The Linux kernel usually dumps the faulting thread first, and that is why on most systems you end up in exactly the correct thread once you load the core into GDB.
I've never seen a kernel where this was broken, but I've never used Debian 6 either.
My guess would be that this was broken at some point and then got fixed, and Debian 6 shipped with a kernel from the broken period.
You could try upgrading the kernel on your Debian 6 machine to match e.g. your Ubuntu 10.04, and see if the problem disappears.
Alternatively, Google's user-space coredumper does this correctly. You can link it in and call it from a SIGSEGV handler.
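A sketch of what "call it from a SIGSEGV handler" can look like, assuming the WriteCoreDump() entry point from the google-coredumper library (the output file name is invented):

#include <csignal>
#include <google/coredumper.h>   // google-coredumper; link with -lcoredumper

extern "C" void segv_handler(int sig) {
    WriteCoreDump("core.crash");  // snapshot all threads of the live process
    std::signal(sig, SIG_DFL);    // restore the default action and re-raise,
    std::raise(sig);              // so the process still terminates as usual
}

int main() {
    std::signal(SIGSEGV, segv_handler);
    // ... application code ...
    volatile int *p = 0;
    *p = 42;                      // deliberate fault to exercise the handler
    return 0;
}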
I'm considering doing some Linux kernel and device driver development under a VMware VM for testing (Ubuntu 9.04 as a guest under VMware Server 2.0), while doing the compiles on the Ubuntu 8.04 host.
I don't want to take the performance hit of doing the compiles under the VM.
I know that the kernel obviously doesn't link to anything outside itself, so there shouldn't be any problems in that regard, but:
Are there any special gotchas I need to watch out for when doing this?
Beyond still having a running computer when the kernel crashes, are there any other benefits to this setup?
Are there any guides to using this kind of setup?
Edit
I've seen numerous references to remote debugging in VMware via Workstation 6.0 using GDB on the host. Does anyone know if this works with any of the free versions of VMware, such as Server 2.0?
I'm not sure about the Ubuntu specifics. Given that you are not doing a real cross-compilation (i.e. x86 to ARM), I would consider using the make-kpkg tool. This should produce an installable .deb archive containing the kernel for your system. It works for me on Debian; it might work for you on Ubuntu.
More about make-kpkg:
http://www.debianhelp.co.uk/kernel2.6.htm
I'm not aware of any gotchas. But basically it depends on what kind of kernel component you are working with. The more specialized the hardware/driver you need, the more likely the VM won't work for you.
Probably faster boots, and my favorite: the possibility to take a screenshot (and cut and paste) of the panic message.
Try browsing the VMware communities. This thread looks very promising, although it discusses the topic for Mac OS:
http://communities.vmware.com/thread/185781
The edit-compile cycle is quite quick anyway; you don't recompile your whole kernel each time you modify the driver.
Short of an outright crash, you can hit deadlocks, resource misuse that leaves a module unremovable, memory leaks, etc.: all kinds of things that require a reboot even though the machine did not crash. So yes, this can be a good idea.
The gotchas can come in the form of the install step and module dependency generation, since you don't want to install your driver on the host, but on the target machine.