Invoke kernel failure through cuda-gdb?

Is there a way to invoke kernel failure using cuda-gdb? I've tried stepping through the kernel code and setting invalid index positions and odd values for variables, but I'm unable to trigger a "kernel execution failed" error after continuing from an erroneous setting.
Does anyone know of a proper way to do this through cuda-gdb? I've read through the cuda-gdb documentation twice but might have missed some clues on how to achieve this, if it is at all possible. If anyone knows of any tools/techniques, that would be most appreciated. Thanks.
I'm on CentOS 7 and my device's compute capability is 2.1. See below for the output of the uname -a command.
Linux john 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Is there a way to invoke kernel failure using cuda-gdb?
Yes, it's possible. Here is a fully worked example:
$ cat t678.cu
#include <stdio.h>

__global__ void kernel(int *data){
  int idx = 0; // line 4
  idx += data[0];
  int tval = data[idx];
  data[1] = tval;
}

int main(){
  int *d_data;
  cudaMalloc(&d_data, 32*sizeof(int));
  cudaMemset(d_data, 0, 32*sizeof(int));
  kernel<<<1,1>>>(d_data);
  cudaDeviceSynchronize();
  cudaError_t err = cudaGetLastError();
  if (err != cudaSuccess) printf("kernel fail %s\n", cudaGetErrorString(err));
}
$ nvcc -g -G -o t678 t678.cu
$ cuda-gdb ./t678
NVIDIA (R) CUDA Debugger
7.5 release
Portions Copyright (C) 2007-2015 NVIDIA Corporation
GNU gdb (GDB) 7.6.2
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /home/user2/misc/t678...done.
(cuda-gdb) break t678.cu:4
Breakpoint 1 at 0x4026d5: file t678.cu, line 4.
(cuda-gdb) run
Starting program: /home/user2/misc/./t678
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[New Thread 0x7ffff700a700 (LWP 8693)]
[Switching focus to CUDA kernel 0, grid 2, block (0,0,0), thread (0,0,0), device 0, sm 14, warp 2, lane 0]
Breakpoint 1, kernel<<<(1,1,1),(1,1,1)>>> (data=0x13047a0000) at t678.cu:4
4 int idx = 0; // line 4
(cuda-gdb) step
5 idx += data[0];
(cuda-gdb) print idx
$1 = 0
(cuda-gdb) set idx=1000000
(cuda-gdb) step
6 int tval = data[idx];
(cuda-gdb) print idx
$2 = 1000000
(cuda-gdb) step
CUDA Exception: Device Illegal Address
The exception was triggered in device 0.
Program received signal CUDA_EXCEPTION_10, Device Illegal Address.
kernel<<<(1,1,1),(1,1,1)>>> (data=0x13047a0000) at t678.cu:7
7 data[1] = tval;
(cuda-gdb)
In the cuda-gdb output above, you can see that after the idx variable is set to a large value, an index-out-of-bounds (illegal address) error results when the debugger executes the following line. Since only 32 int values were allocated, an index of 1000000 reads about 4 MB past the end of the allocation:
int tval = data[idx];
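As an aside, the same class of failure can be surfaced without a debugger by checking every CUDA runtime call on the host. Below is a minimal sketch of a conventional error-checking macro; the macro name and wording are my own, not part of the original answer:

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Wrap each runtime call; after a kernel launch, check both the launch
   (cudaGetLastError) and the execution (cudaDeviceSynchronize). */
#define CUDA_CHECK(call)                                           \
    do {                                                           \
        cudaError_t err_ = (call);                                 \
        if (err_ != cudaSuccess) {                                 \
            fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__,     \
                    cudaGetErrorString(err_));                     \
            exit(EXIT_FAILURE);                                    \
        }                                                          \
    } while (0)

Used as, for example, CUDA_CHECK(cudaDeviceSynchronize()); after the launch, this reports the same illegal-address class of error that cuda-gdb surfaces as CUDA_EXCEPTION_10.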

Related

openmp core assignation fails

My Centos 6 VM shows four cores when displaying the content of /proc/cpuinfo, and /sys/devices/system/cpu/online shows 0-3.
I am trying to run the following code on cores 2 and 3 using KMP_AFFINITY="explicit,proclist=[2-3]"
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>
#include <sched.h>

int main (int argc, char *argv[]) {
    int nthreads, tid, cid;

    #pragma omp parallel private(nthreads, tid)
    {
        tid = omp_get_thread_num();
        cid = sched_getcpu();
        printf("Hello from thread %d on core %d\n", tid, cid);
        if (tid == 0) {
            nthreads = omp_get_num_threads();
            printf("Number of threads = %d\n", nthreads);
        }
    }
}
When compiled with icc (ICC) 16.0.1 20151021, it fails to detect the available cores and executes everything on core 0.
$ OMP_NUM_THREADS=4 ./a.out
OMP: Warning #123: Ignoring invalid OS proc ID 2.
OMP: Warning #123: Ignoring invalid OS proc ID 3.
OMP: Warning #124: No valid OS proc IDs specified - not using affinity.
Hello from thread 0 on core 0
Number of threads = 1
Hello from thread 0 on core 0
Number of threads = 1
Hello from thread 0 on core 0
Number of threads = 1
Hello from thread 0 on core 0
Number of threads = 1
Whereas gcc (GCC) 4.4.7 20120313, with GOMP_CPU_AFFINITY="2-3", executes properly on cores 2 and 3, as configured.
I used strace to check what's going on under the hood, and I noticed something strange:
[...]
open("/sys/devices/system/cpu/online", O_RDONLY|O_CLOEXEC) = 3
read(3, "0-3\n", 8192) = 4
[...]
sched_getaffinity(0, 1048576, { 1 }) = 8
sched_setaffinity(0, 8, { 4521c26fbb1c38c1 }) = -1 EFAULT (Bad address)
[...]
Could this be a bug in Intel's OpenMP implementation?
I cannot upgrade my compiler to fix it in this case. Is it possible to use the GCC OpenMP library instead of the Intel one when compiling with icc?
Update:
I managed to compile the code with gcc and link it against iomp using the following command:
gcc omp.c -L/opt/intel/compilers_and_libraries_2016/linux/lib/intel64_lin/ -liomp5
The execution outputs no warnings but is still not correct:
$ OMP_NUM_THREADS=4 ./a.out
Hello from thread 0 on core 0
Number of threads = 1
The same sched_setaffinity error as shown previously appears in strace.
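To rule out the kernel itself (independent of any OpenMP runtime), one can try pinning the process to cores 2 and 3 directly with sched_setaffinity(2). This is my own minimal sketch, not code from the original post; if it succeeds, the EFAULT above points at the Intel runtime rather than the system:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);   /* allow core 2 */
    CPU_SET(3, &set);   /* allow core 3 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");  /* an EFAULT/EINVAL here would confirm a system-level problem */
        return 1;
    }
    printf("pinned; now running on core %d\n", sched_getcpu());
    return 0;
}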

What is the precision of the bash builtin time command?

I have a script that measures the execution time of a program using the bash builtin command time.
I am trying to understand the precision of this command: as far as I know it reports times with millisecond resolution, yet it uses the getrusage() function, which returns values in microseconds. But according to this paper, the real precision is only 10 ms, because getrusage samples the time relying on ticks (= 100 Hz). The paper is really old (it mentions Linux 2.2.14 running on a Pentium 166 MHz with 96 MB of RAM).
Is time still using getrusage() and 100 Hz ticks, or is it more precise on modern systems?
The testing machine is running Linux 2.6.32.
EDIT: This is a slightly modified version (which should also compile on old versions of GCC) of muru's code: modifying the value of the variable 'v' also changes the delay between measurements, which helps spot the minimum granularity. A value around 500,000 should trigger changes of 1 ms on a relatively recent CPU (first generations of i5/i7 at ~2.5 GHz).
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

void dosomething(){
    long v = 1000000;
    while (v > 0)
        v--;
}

int main()
{
    struct rusage r1, r2;
    long t1, t2, min, max;
    int i;
    printf("t1\tt2\tdiff\n");
    for (i = 0; i < 5; i++){
        getrusage(RUSAGE_SELF, &r1);
        dosomething();
        getrusage(RUSAGE_SELF, &r2);
        t1 = r1.ru_stime.tv_usec + r1.ru_stime.tv_sec*1000000 + r1.ru_utime.tv_usec + r1.ru_utime.tv_sec*1000000;
        t2 = r2.ru_stime.tv_usec + r2.ru_stime.tv_sec*1000000 + r2.ru_utime.tv_usec + r2.ru_utime.tv_sec*1000000;
        printf("%ld\t%ld\t%ld\n", t1, t2, t2-t1);
        /* test i == 0 first (with short-circuit ||) so min/max are never read uninitialized */
        if ((i == 0) || (t2-t1 < min))
            min = t2-t1;
        if ((i == 0) || (t2-t1 > max))
            max = t2-t1;
        dosomething();
    }
    printf("Min = %ldus Max = %ldus\n", min, max);
    return 0;
}
However, the precision is bound to the Linux version: with Linux 3 and above the precision is on the order of microseconds, while on Linux 2.6.32 it can be around 1 ms, probably depending also on the specific distro. I suppose this difference is related to the use of high-resolution timers (HRT) instead of tick-based sampling on recent Linux versions.
In any case, the output resolution of time is 1 ms on all recent and not-so-recent machines.
The bash builtin time still uses getrusage(2). On an Ubuntu 14.04 system:
$ bash --version
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ strace -o log bash -c 'time sleep 1'
real 0m1.018s
user 0m0.000s
sys 0m0.001s
$ tail log
getrusage(RUSAGE_SELF, {ru_utime={0, 0}, ru_stime={0, 3242}, ...}) = 0
getrusage(RUSAGE_CHILDREN, {ru_utime={0, 0}, ru_stime={0, 530}, ...}) = 0
write(2, "\n", 1) = 1
write(2, "real\t0m1.018s\n", 14) = 14
write(2, "user\t0m0.000s\n", 14) = 14
write(2, "sys\t0m0.001s\n", 13) = 13
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
exit_group(0) = ?
+++ exited with 0 +++
As the strace output shows, it calls getrusage.
As for precision, the rusage struct that getrusage uses includes timeval objects, and timeval has microsecond precision. From the manpage of getrusage:
ru_utime
       This is the total amount of time spent executing in user mode,
       expressed in a timeval structure (seconds plus microseconds).

ru_stime
       This is the total amount of time spent executing in kernel
       mode, expressed in a timeval structure (seconds plus
       microseconds).
I think it does better than 10 ms. Take the following example file:
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

int main()
{
    struct rusage now, then;
    getrusage(RUSAGE_SELF, &then);
    getrusage(RUSAGE_SELF, &now);
    printf("%ld %ld\n",
           then.ru_stime.tv_usec + then.ru_stime.tv_sec*1000000 + then.ru_utime.tv_usec + then.ru_utime.tv_sec*1000000,
           now.ru_stime.tv_usec + now.ru_stime.tv_sec*1000000 + now.ru_utime.tv_usec + now.ru_utime.tv_sec*1000000);
}
Now:
$ make test
cc test.c -o test
$ for ((i=0; i < 5; i++)); do ./test; done
447 448
356 356
348 348
347 347
349 350
The reported differences between successive calls to getrusage are 1 µs and 0 (the minimum). Since it does show a 1 µs gap, the tick must be at most 1 µs.
If it had a 10 ms tick, the differences would be zero, or at least 10000.
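One can also ask the kernel directly what resolution it advertises for process CPU time, using clock_getres(2). This sketch is mine, not part of the original answer; on systems with high-resolution timers it typically reports 1 ns (link with -lrt on older glibc):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec res;
    /* CLOCK_PROCESS_CPUTIME_ID is the per-process CPU clock underlying
       the utime/stime figures that getrusage reports */
    if (clock_getres(CLOCK_PROCESS_CPUTIME_ID, &res) != 0) {
        perror("clock_getres");
        return 1;
    }
    printf("resolution: %ld s %ld ns\n", (long)res.tv_sec, res.tv_nsec);
    return 0;
}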

How to get right MIPS libc toolchain for embedded device

I've run into a problem (repeatedly) with various companies' embedded Linux products where the GPL source code they provide does not match what is actually running on the system. It's "close", but not quite right, especially with respect to the standard C library they use.
Isn't that a violation of the GPL?
Often this mismatch results in a programmer (like me) cross-compiling a program only to have the device reply cryptically "file not found" or something similar when the program is run.
I'm not alone with this kind of problem -- many people have threads directly and indirectly related to it, e.g.:
Compile parameters for MIPS based codesourcery toolchain?
And I've run into the problem on Sony devices, D-link, and many others. It's very common.
Making a new library is not a good solution, since most systems are ROMFS only, and LD_LIBRARY_PATH is sometimes broken -- so that installing a new library on the device wastes very limited memory and often won't work.
If I knew what the right source code version of the library was, I could go around the manufacturer's carelessness and compile it from the original developer's tree; but how can I find out which version I need when all I have is the binary of the library itself?
For example: I ran readelf -a libc.so.0 on a DSL modem's libc (see below), but I don't see anything here that could tell me exactly which libc it was...
How can I find the name of the source code, or an identifier from the library's binary so I can create a cross compiler using that library? eg: Can anyone tell me what source code this library came from, and how they know?
ELF Header:
Magic: 7f 45 4c 46 01 02 01 00 00 00 00 00 00 00 00 00
Class: ELF32
Data: 2's complement, big endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: DYN (Shared object file)
Machine: MIPS R3000
Version: 0x1
Entry point address: 0x5a60
Start of program headers: 52 (bytes into file)
Start of section headers: 0 (bytes into file)
Flags: 0x1007, noreorder, pic, cpic, o32, mips1
Size of this header: 52 (bytes)
Size of program headers: 32 (bytes)
Number of program headers: 4
Size of section headers: 0 (bytes)
Number of section headers: 0
Section header string table index: 0
There are no sections in this file.
There are no sections to group in this file.
Program Headers:
Type Offset VirtAddr PhysAddr FileSiz MemSiz Flg Align
REGINFO 0x0000b4 0x000000b4 0x000000b4 0x00018 0x00018 R 0x4
LOAD 0x000000 0x00000000 0x00000000 0x2c9ee 0x2c9ee R E 0x1000
LOAD 0x02c9f0 0x0006c9f0 0x0006c9f0 0x009a0 0x040b8 RW 0x1000
DYNAMIC 0x0000cc 0x000000cc 0x000000cc 0x0579a 0x0579a RWE 0x4
Dynamic section at offset 0xcc contains 19 entries:
Tag Type Name/Value
0x0000000e (SONAME) Library soname: [libc.so.0]
0x00000004 (HASH) 0x18c
0x00000005 (STRTAB) 0x3e9c
0x00000006 (SYMTAB) 0x144c
0x0000000a (STRSZ) 6602 (bytes)
0x0000000b (SYMENT) 16 (bytes)
0x00000015 (DEBUG) 0x0
0x00000003 (PLTGOT) 0x6ce20
0x00000011 (REL) 0x5868
0x00000012 (RELSZ) 504 (bytes)
0x00000013 (RELENT) 8 (bytes)
0x70000001 (MIPS_RLD_VERSION) 1
0x70000005 (MIPS_FLAGS) NOTPOT
0x70000006 (MIPS_BASE_ADDRESS) 0x0
0x7000000a (MIPS_LOCAL_GOTNO) 11
0x70000011 (MIPS_SYMTABNO) 677
0x70000012 (MIPS_UNREFEXTNO) 17
0x70000013 (MIPS_GOTSYM) 0x154
0x00000000 (NULL) 0x0
There are no relocations in this file.
The decoding of unwind sections for machine type MIPS R3000 is not currently supported.
Histogram for bucket list length (total of 521 buckets):
Length Number % of total Coverage
0 144 ( 27.6%)
1 181 ( 34.7%) 27.1%
2 130 ( 25.0%) 66.0%
3 47 ( 9.0%) 87.1%
4 12 ( 2.3%) 94.3%
5 5 ( 1.0%) 98.1%
6 1 ( 0.2%) 99.0%
7 1 ( 0.2%) 100.0%
No version information found in this file.
Primary GOT:
Canonical gp value: 00074e10
Reserved entries:
Address Access Initial Purpose
0006ce20 -32752(gp) 00000000 Lazy resolver
0006ce24 -32748(gp) 80000000 Module pointer (GNU extension)
Local entries:
Address Access Initial
0006ce28 -32744(gp) 00070000
0006ce2c -32740(gp) 00030000
0006ce30 -32736(gp) 00000000
0006ce34 -32732(gp) 00010000
0006ce38 -32728(gp) 0006d810
0006ce3c -32724(gp) 0006d814
0006ce40 -32720(gp) 00020000
0006ce44 -32716(gp) 00000000
0006ce48 -32712(gp) 00000000
Global entries:
Address Access Initial Sym.Val. Type Ndx Name
0006ce4c -32708(gp) 000186c0 000186c0 FUNC bad section index[ 6] __fputc_unlocked
0006ce50 -32704(gp) 000211a4 000211a4 FUNC bad section index[ 6] sigprocmask
0006ce54 -32700(gp) 0001e2b4 0001e2b4 FUNC bad section index[ 6] free
0006ce58 -32696(gp) 00026940 00026940 FUNC bad section index[ 6] raise
...
truncated listing
....
Note:
The rest of this post is a blog showing how I came to ask the question above, and puts useful information about the subject in one place.
Don't bother reading it unless you want to know that I actually did research the question... in gory detail... and how NOT to answer it.
The proper (theoretical) way to get a libc program running on (for example) a D-link modem would simply be to get the TRUE source code for the product from the manufacturer, and compile against those libraries.... (It's GPL !? right, so the law is on our side, right?)
For example: I just bought a D-link DSL-520B modem and a 526B modem -- but found out after the fact that the manufacturer "forgot" to supply linux source code for the 520B, though it does have it for the 526B. I checked all of the DSL-5xxB devices online for source code & toolchains, finding to my delight that ALL of them (including the 526B) contain the SAME pre-compiled libc.so.0 with an MD5 sum of 6ed709113ce615e9f170aafa0eac04a6. So in theory, all supported modems in the DSL-5xxB family seemed to use the same libc library... and I hoped I might be able to use that library.
But after I figured out how to get the DSL modem itself to send me a copy of the installed /lib/libc.so.0 library, I found to my disgust that they ALL use a library with an MD5 sum of b8d492decc8207e724a0822641205078. In NEITHER of the modems I bought (supported or not) was the installed library the same as the one contained in the source code toolchain.
To verify the toolchain from D-link was defective, I didn't compile a program (the toolchain wouldn't run on my PC anyway, as it was the wrong binary format) -- but I found the toolchain already had some pre-compiled MIPS binaries in it; so I simply downloaded one to the modem, ran chmod +x -- and (surprise) I got the message "file not found" when I tried to run it... It won't run.
So, I knew the toolchains were no good immediately, but not exactly why.
I decided to get a newer version of MIPS GCC (binary version) that should have fewer bugs and more features, and which is supported on most PC platforms. This is the way to GO!
See: Kernel.org pre-compiled binaries
I upgraded to gcc 4.9.0 after selecting the older "mips" version from the above site to get the right FTP page, and set my shell's PATH variable to the /bin directory of the cross compiler once installed.
Then I copied all the header files and libraries from the D-link source code package into the new cross compiler just to verify that it could compile D-link libc binaries. And it did on the first try, compiling "hello world!" with no warnings or errors into a MIPS 32 big-endian binary.
( START EDIT: ) @ChrisStratton points out in the comments (after this post) that my test of the toolchain is inadequate, and that using a newer GCC with an older library -- even though it links properly -- is flawed as a test. I wish there was a way to give him points for his comments -- I've become convinced that he's right; although that makes what D-link did an even worse problem -- for there's no way to know from the binaries on the modem which GCC they actually used. The GCC used for the kernel isn't necessarily the same as the one used in user space.
In order to test the new compiler's compatibility with the modems and also make tools so I could get a copy of the actual libraries found on the modem: ( END EDIT ) I wrote a program that doesn't use the C library at all (but in two parts): It ran just fine... and the code is attached to show how it can be done.
The first listing is an assembly language program to bypass linking the standard C libraries on MIPS; and the second listing is a program meant to create an octal number dump of a binary file/stream using only the linux kernel. eg: It enables copying/pasting or scripting of binary data over telnet, netcat, etc... via ash/bash or busybox :) like a poor man's uucp.
// stubstart.S MIPS assembly language bypass of libc startup code
// it just calls main, and then jumps to the exit function
        .text
        .globl __start
__start: .ent __start
        .frame $29, 32, $31
        .set noreorder
        .cpload $25
        .set reorder
        .cprestore 16
        jal main
        j exit
        .end __start
// end stubstart.S
...and...
// octdump.c
// To compile w/o libc:
//   mips-linux-gcc stubstart.S octdump.c -nostdlib -o octdump
// To compile with working libc (eg: x86 system):
//   gcc octdump.c -o octdump_x86
#include <syscall.h>
#include <errno.h>
#include <sys/types.h>

int* __errno_location(void) { return &errno; }

#ifdef _syscall1
// define three unix functions (exit, read, write) in terms of unix syscall macros.
_syscall1( void, exit, int, status );
_syscall3( ssize_t, read, int, fd, void*, buf, size_t, count );
_syscall3( ssize_t, write, int, fd, const void*, buf, size_t, count );
#endif

#include <unistd.h>

void oct( unsigned char c ) {
    unsigned int n = c;
    int m = 6;
    static unsigned char oval[6] = {'\\','\\','0','0','0','0'};
    if (n < 64) { m -= 1; n <<= 3; }
    if (n < 64) { m -= 1; n <<= 3; }
    if (n < 64) { m -= 1; n <<= 3; }
    oval[5] = '0' + (n & 7);
    oval[4] = '0' + ((n >> 3) & 7);
    oval[3] = '0' + ((n >> 6) & 7);
    write( STDOUT_FILENO, oval, m );
}

int main(void) {
    char buffer[255];
    int count = 1;
    int i;
    while (count > 0) {
        count = read( STDIN_FILENO, buffer, 17 );
        if (count > 0) write( STDOUT_FILENO, "echo -ne $'", 11 );
        for (i = 0; i < count; ++i) oct( buffer[i] );
        if (count > 0) write( STDOUT_FILENO, "'\n", 2 );
    }
    write( STDOUT_FILENO, "#\n", 2 );
    return 0;
}
Once the MIPS octdump binary was saved (chmod +x) as /var/octdump on the modem, it ran without errors.
(use your imagination about how I got it on there... Dlink's TFTP, & friends are broken.)
I was able to use octdump to copy all the dynamic libraries off the DSL modem and examine them, using an automated script to avoid copy/pasting by hand.
#!/bin/env python
# octget.py
# A program to upload a file off an embedded linux device via telnet
import socket
import time
import sys
import string

if len( sys.argv ) != 4 :
    raise ValueError, "Usage: octget.py IP_OF_MODEM passwd path_to_file_to_get"

o = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
o.connect((sys.argv[1],23)) # The IP address of the DSL modem.
time.sleep(1)
sys.stderr.write( o.recv(1024) )
o.send("admin\r\n")
time.sleep(0.1)
sys.stderr.write( o.recv(1024) )
o.send(sys.argv[2]+"\r\n")
time.sleep(0.1)
o.send("sh\r\n")
time.sleep(0.1)
sys.stderr.write( o.recv(1024) )
o.send("cd /var\r\n")
time.sleep(0.1)
sys.stderr.write( o.recv(1024) )
o.send("./octdump.x < "+sys.argv[3]+"\r\n" )
sys.stderr.write( o.recv(21) )

get = "y"
while get and not ('#' in get):
    get = o.recv(4096)
    get = get.translate( None, '\r' )
    sys.stdout.write( get )
    time.sleep(0.5)

o.close()
The DSL520B modem had the following libraries...
libcrypt.so.0 libpsi.so libutil.so.0 ld-uClibc.so.0 libc.so.0 libdl.so.0 libpsixml.so
... and I thought I might cross compile using these libraries since (at least in theory) -- GCC could link against them; and my problem might be solved.
I made very sure to erase all the incompatible .so libraries from gcc-4.9.0/mips-linux/mips-linux/lib, but kept the generic crt..o files; then I copied the modem's libraries into the cross compiler directory.
But even though the kernel version of the source code and the kernel version of the modem matched, GCC found undefined symbols in the crt files.... So, either the generic crt files or the modem libraries themselves are somehow defective... and I don't know why. Without knowing the exact version of the (presumably uClibc) library, I'm not sure how I can get the CORRECT source code to recompile the libraries and the crt files from scratch.
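Since readelf shows no section headers at all, scanning the binary for embedded version banners (which uClibc and GCC usually leave behind) is about the only lead left. Here is a minimal strings(1)-like scanner of my own; it is a sketch, and the banner substrings it searches for are assumptions, not guaranteed to be present:

#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Print printable runs that look like version banners ("uClibc", "GCC:"). */
int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s <library>\n", argv[0]); return 2; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    char buf[4096];
    int n = 0, c;
    while ((c = fgetc(f)) != EOF) {
        if (isprint(c) && n < (int)sizeof buf - 1) {
            buf[n++] = (char)c;     /* accumulate a printable run */
        } else {
            buf[n] = '\0';
            if (n >= 6 && (strstr(buf, "uClibc") || strstr(buf, "GCC:")))
                puts(buf);          /* a candidate version banner */
            n = 0;
        }
    }
    fclose(f);
    return 0;
}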

SIGSEGV when using pthreads in Stop-and-Wait Protocol implementation

I'm a college student and as part of a Networks Assignment I need to do an implementation of the Stop-and-Wait Protocol. The problem statement requires using 2 threads. I am a novice to threading but after going through the man pages for the pthreads API, I wrote the basic code. However, I get a segmentation fault after the thread is created successfully (on execution of the first line of the function passed to pthread_create() as an argument).
typedef struct packet_generator_args
{
    int max_pkts;
    int pkt_len;
    int pkt_gen_rate;
} pktgen_args;

/* generates and buffers packets at a mean rate given by the
   pkt_gen_rate field of its argument; runs in a separate thread */
void *generate_packets(void *arg)
{
    pktgen_args *opts = (pktgen_args *)arg; // error occurs here
    buffer = (char **)calloc((size_t)opts->max_pkts, sizeof(char *));
    if (buffer == NULL)
        handle_error("Calloc Error");
    //front = back = buffer;
    ........
    return 0;
}
The main thread reads packets from this buffer and runs the stop-and-wait algorithm.
pktgen_args thread_args;
thread_args.pkt_len = DEF_PKT_LEN;
thread_args.pkt_gen_rate = DEF_PKT_GEN_RATE;
thread_args.max_pkts = DEF_MAX_PKTS;
/* initialize sockets and other data structures */
.....
pthread_t packet_generator;
pktgen_args *thread_args1 = (pktgen_args *)malloc(sizeof(pktgen_args));
memcpy((void *)thread_args1, (void *)&thread_args, sizeof(pktgen_args));
retval = pthread_create(&packet_generator, NULL, &generate_packets, (void *)thread_args1);
if (retval != 0)
handle_error_th(retval, "Thread Creation Error");
.....
/* send a fixed no. of packets to the receiver, waiting for an ack for each. If
   the ack is not received before the timeout occurs, resend the pkt */
.....
I have tried debugging using gdb but am unable to understand why a segmentation fault occurs at the first line of my generate_packets() function. Hopefully, one of you can help. If anyone needs additional context, the entire code can be obtained at http://pastebin.com/Z3QtEJpQ. I am in a real jam here, having spent hours on this. Any help will be appreciated.
You initialize your buffer as NULL:
char **buffer = NULL;
and then in main(), without further ado, you try to dereference it:
while (!buffer[pkts_ackd]); /* wait as long as the next pkt has not
Basically, my semi-educated guess is that your thread hasn't generated any packets yet, and you crash trying to access an element through a NULL pointer.
[162][04:34:17] vlazarenko@alluminium (~/tests) > cc -ggdb -o pthr pthr.c 2> /dev/null
[163][04:34:29] vlazarenko@alluminium (~/tests) > gdb pthr
GNU gdb 6.3.50-20050815 (Apple version gdb-1824) (Thu Nov 15 10:42:43 UTC 2012)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "x86_64-apple-darwin"...Reading symbols for shared libraries .. done
(gdb) run
Starting program: /Users/vlazarenko/tests/pthr
Reading symbols for shared libraries +............................. done
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x0000000000000000
0x000000010000150d in main (argc=1, argv=0x7fff5fbffb10) at pthr.c:205
205 while (!buffer[pkts_ackd]); /* wait as long as the next pkt has not
(gdb)
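A conventional fix (my own sketch, not the poster's code) is to allocate the shared buffer before pthread_create and to replace the busy-wait on buffer[pkts_ackd] with a mutex and condition variable, so main() never dereferences a pointer the generator thread hasn't published yet:

#include <pthread.h>
#include <stdlib.h>

static char **buffer;                 /* allocated in main() BEFORE the thread starts */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ready = PTHREAD_COND_INITIALIZER;
static int pkts_buffered = 0;

/* generator thread: publish each new packet under the lock */
void *generate_packets(void *arg)
{
    (void)arg;
    /* ... build buffer[pkts_buffered] ... */
    pthread_mutex_lock(&lock);
    pkts_buffered++;
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* main thread: call this instead of  while (!buffer[pkts_ackd]);  */
void wait_for_packet(int pkts_ackd)
{
    pthread_mutex_lock(&lock);
    while (pkts_buffered <= pkts_ackd)
        pthread_cond_wait(&ready, &lock);
    pthread_mutex_unlock(&lock);
}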

How do I get gdb working with D programs under linux?

I have a patched gdb 6.8, but I can't get any debugging to work. Given this test file:
import std.stdio;

void main()
{
    float f = 3.0;
    int i = 1;
    writeln(f, " ", i);
    f += cast(float)(i / 10.0);
    writeln(f, " ", i);
    i++;
    f += cast(float)(i / 10.0);
    writeln(f, " ", i);
    i += 2;
    f += cast(float)(i / 5.0);
    writeln(f, " ", i);
}
And attempting to debug on the command line:
bash-4.0 [d]$ dmd -g test.d # '-gc' shows the same behaviour.
bash-4.0 [d]$ ~/src/gdb-6.8/gdb/gdb test
GNU gdb 6.8
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "i686-pc-linux-gnu"...
(gdb) list
1 ../sysdeps/i386/elf/start.S: No such file or directory.
in ../sysdeps/i386/elf/start.S
And when debugging a project with Eclipse:
Using -gc:
Dwarf Error: Cannot find DIE at 0x134e4 referenced from DIE at 0x12bd4 [in module /home/bernard/projects/drl/drl.i386]
(gdb) Dwarf Error: Cannot find DIE at 0x1810 referenced from DIE at 0x1b8 [in module /home/bernard/projects/drl/drl.i386]
Using -g:
(gdb) Die: DW_TAG_<unknown> (abbrev = 7, offset = 567)
has children: FALSE
attributes:
DW_AT_byte_size (DW_FORM_data1) constant: 4
DW_AT_type (DW_FORM_ref4) constant ref: 561 (adjusted)
DW_AT_containing_type (DW_FORM_ref4) constant ref: 539 (adjusted)
Dwarf Error: Cannot find type of die [in module /home/bernard/projects/drl/drl.i386]
I've seen quite a few posts like this on the Digital Mars newsgroup, but all are seemingly ignored. Can anyone shed some light on the situation?
I know of ZeroBUGS, but I really want to get gdb working.
Update:
Thanks to luca_, on IRC (freenode, #D), I got the simple case (one file) working:
(gdb) list Dmain
1 void main()
2 {
3 float f = 3.0;
4 int i = 1;
5 f += cast(float)(i / 10.0);
6 i++;
7 f += cast(float)(i / 10.0);
8 i += 2;
9 f += cast(float)(i / 5.0);
10 }
(gdb) break 3
Unfortunately, my project made up of multiple files dies with a DWARF error.
EDIT:
As of 2.036 (I think), the GDB debugging information produced by DMD is correct, and should work as expected.
You may have stumbled on a GDB bug, which was recently fixed here.
To get the fix, you would have to build GDB from CVS Head. Instructions on how to get it are here.
If that doesn't fix the problem, it may be due to another bug in GDB, or it may be that dmd emits incorrect DWARF debug info. I suggest opening a bug in GDB bugzilla and attaching your small executable (and all runtime libraries it requires).
The answer, it seems, is to use GDC, if you can stand going back to D 2.015 (this is for D2, I have no idea how old the D1 stuff is). GDB works great.
