I have the following code in C:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
int main()
{
    int i;
    double x;
    double pi;
    pi = M_PI;
    double increment;
    FILE *fp;
    fp = fopen("FILE", "w");
    x = 0;
    increment = 4 * pi / 500;
    for (i = 0; i <= 500; i++)
    {
        x = i * increment;
        printf("i: %d, tan(x): %f\n", i, tanf(x));
        fprintf(fp, "%d, %f\n", i, tanf(x));
    }
    fclose(fp);
    exit(0);
}
And the following Python code:
import matplotlib.pyplot as plt
import os
f = open("FILE", "r")
lines=f.readlines()
first =[]
second=[]
for x in lines:
    first.append(int(x.split(', ')[0]))
    second.append(float(x.split(', ')[1]))
f.close()
plt.plot(first, second)
plt.show()
os.remove("FILE")
The C code generates data that is saved in a text file. The Python code reads that text file and makes a plot. After making the plot, Python deletes the data file.
I need to make a bash file that executes both pieces of code, like a sort of glue. I've read tutorials about bash, but it's still unclear to me how to compile and execute the C program and then run the Python script.
Question: How do I make a bash file that runs both pieces of code in linux?
This is pretty trivial once your C program is compiled into an executable (binary). Let's assume your C program compiles into an executable named table, and your Python script is called plot.py:
#!/bin/bash
./table && python plot.py
This will just run these two programs in sequence. The && means that the second program will only run if the first one completes successfully (exit code == 0).
PS: In case you still need to compile your C code, use gcc filename.c -o table -lm. The -lm makes sure the math library, where tanf is defined, gets linked.
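If a shell script turns out not to be a hard requirement, the same glue can be written in Python itself. Here is a minimal sketch using subprocess; the source file name table.c is an assumption, use whatever your file is actually called:
import subprocess

# check=True makes each step abort the script on failure,
# mirroring the && behaviour of the shell one-liner above.
subprocess.run(["gcc", "table.c", "-o", "table", "-lm"], check=True)
subprocess.run(["./table"], check=True)              # writes the FILE data file
subprocess.run(["python3", "plot.py"], check=True)   # plots and deletes FILE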
Related
I have a simple C extension (see example below) that sometimes prints using the printf function.
I'm looking for a way to wrap the calls to the functions from that C extension so that all those printfs will be redirected to my Python logger.
hello.c:
#include <Python.h>
static PyObject* hello(PyObject* self)
{
    printf("example print from a C code\n");
    return Py_BuildValue("");
}

static char helloworld_docs[] =
    "helloworld(): Any message you want to put here!!\n";

static PyMethodDef helloworld_funcs[] = {
    {"hello", (PyCFunction)hello,
     METH_NOARGS, helloworld_docs},
    {NULL}
};

static struct PyModuleDef cModPyDem =
{
    PyModuleDef_HEAD_INIT,
    "helloworld",
    "Extension module example!",
    -1,
    helloworld_funcs
};

PyMODINIT_FUNC PyInit_helloworld(void)
{
    return PyModule_Create(&cModPyDem);
};
setup.py:
from distutils.core import setup, Extension
setup(name = 'helloworld', version = '1.0', \
ext_modules = [Extension('helloworld', ['hello.c'])])
To use it, first run
python3 setup.py install
and then:
import helloworld
helloworld.hello()
I want to be able to do something like this:
with redirect_to_logger(my_logger):
    helloworld.hello()
EDIT: I saw a number of posts showing how to silence the prints from C, but I wasn't able to figure out from them how I can capture the prints in Python instead.
Example of such post: Redirect stdout from python for C calls
I assume this question didn't get much traction because maybe I'm asking for too much, so I don't care about logging anymore... how can I capture the C prints in Python? Into a list or whatever.
EDIT
So I was able to get somewhat working code that does what I want: redirect C printf output to the Python logger:
import select
import threading
import time
import logging
import re
from contextlib import contextmanager
from wurlitzer import pipes
from helloworld import hello

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)


class CPrintsHandler(threading.Thread):
    def __init__(self, std, poll_std, err, poll_err, logger):
        super(CPrintsHandler, self).__init__()
        self.std = std
        self.poll_std = poll_std
        self.err = err
        self.poll_err = poll_err
        self.logger = logger
        self.stop_event = threading.Event()

    def stop(self):
        self.stop_event.set()

    def run(self):
        while not self.stop_event.is_set():
            # How can I poll both std and err at the same time?
            if self.poll_std.poll(1):
                line = self.std.readline()
                if line:
                    self.logger.debug(line.strip())
            if self.poll_err.poll(1):
                line = self.err.readline()
                if line:
                    self.logger.debug(line.strip())


@contextmanager
def redirect_to_logger(some_logger):
    handler = None
    try:
        with pipes() as (std, err):
            poll_std = select.poll()
            poll_std.register(std, select.POLLIN)
            poll_err = select.poll()
            poll_err.register(err, select.POLLIN)
            handler = CPrintsHandler(std, poll_std, err, poll_err, some_logger)
            handler.start()
            yield
    finally:
        if handler:
            time.sleep(0.1)  # why do I have to sleep here for the foo prints to finish?
            handler.stop()
            handler.join()


def foo():
    logger.debug('logger print from foo()')
    hello()


def main():
    with redirect_to_logger(logger):
        # I don't want the logs from here to be redirected as well, only printf.
        logger.debug('logger print from main()')
        foo()


main()
But I have a couple of issues:
The python logs are also being redirected and caught by the CPrintsHandler. Is there a way to avoid that?
The prints are not exactly in the correct order:
python3 redirect_c_example_for_stackoverflow.py
2020-08-18 19:50:47,732 - root - DEBUG - example print from a C code
2020-08-18 19:50:47,733 - root - DEBUG - 2020-08-18 19:50:47,731 - root - DEBUG - logger print from main()
2020-08-18 19:50:47,733 - root - DEBUG - 2020-08-18 19:50:47,731 - root - DEBUG - logger print from foo()
Also, the logger prints all go to err; perhaps the way I poll them causes this ordering.
I'm not that familiar with select in python and not sure if there is a way to poll both std and err at the same time and print whichever has something first.
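For what it's worth, one select.poll object can register both pipes, and poll() then reports whichever descriptor is readable first. A sketch of how the run() method above could be rewritten that way (the rest of the class unchanged):
    def run(self):
        poller = select.poll()
        poller.register(self.std, select.POLLIN)
        poller.register(self.err, select.POLLIN)
        streams = {self.std.fileno(): self.std, self.err.fileno(): self.err}
        while not self.stop_event.is_set():
            # poll() returns (fd, event) pairs for whichever pipes have data ready
            for fd, _event in poller.poll(100):
                line = streams[fd].readline()
                if line:
                    self.logger.debug(line.strip())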
On Linux you could use wurlitzer, which captures the output from printf, e.g.:
from wurlitzer import pipes
with pipes() as (out, err):
helloworld.hello()
out.read()
#'example print from a C code\n'
wurlitzer is based on this article by Eli Bendersky; you can use the code from the article if you don't want to depend on third-party libraries.
Sadly, wurlitzer and the code from the article work only on Linux (and possibly macOS).
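For completeness, the core of that approach fits in a few lines. A simplified, Linux-only sketch (not the article's exact code) that swaps file descriptor 1 for a pipe around the call and flushes the C-level stdio buffers before restoring it:
import ctypes
import os

libc = ctypes.CDLL(None)  # the C library of the current process (Linux)

def capture_c_stdout(func):
    saved_fd = os.dup(1)          # keep a copy of the real stdout
    read_fd, write_fd = os.pipe()
    os.dup2(write_fd, 1)          # fd 1 now points at the pipe
    try:
        func()                    # e.g. helloworld.hello()
    finally:
        libc.fflush(None)         # flush C stdio so nothing stays buffered
        os.dup2(saved_fd, 1)      # put the real stdout back
        os.close(write_fd)
        os.close(saved_fd)
    data = os.read(read_fd, 65536)  # fine for short output; long output needs a reader thread
    os.close(read_fd)
    return data.decode()

Usage would be capture_c_stdout(helloworld.hello).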
Here is a prototype for Windows (an improved version of the prototype can be installed from my GitHub) using Eli's approach as a Cython extension (which could probably be translated to ctypes if needed):
%%cython

import io
import os

cdef extern from *:
    """
    #include <windows.h>
    #include <io.h>
    #include <stdlib.h>
    #include <stdio.h>
    #include <fcntl.h>

    int open_temp_file() {
        TCHAR lpTempPathBuffer[MAX_PATH+1]; // path+NULL

        // Gets the temp path env string (no guarantee it's a valid path).
        DWORD dwRetVal = GetTempPath(MAX_PATH,          // length of the buffer
                                     lpTempPathBuffer); // buffer for path
        if(dwRetVal > MAX_PATH || (dwRetVal == 0))
        {
            return -1;
        }

        // Generates a temporary file name.
        TCHAR szTempFileName[MAX_PATH + 1]; // path+NULL
        DWORD uRetVal = GetTempFileName(lpTempPathBuffer, // directory for tmp files
                                        TEXT("tmp"),      // temp file name prefix
                                        0,                // create unique name
                                        szTempFileName);  // buffer for name
        if (uRetVal == 0)
        {
            return -1;
        }

        HANDLE tFile = CreateFile((LPTSTR)szTempFileName,       // file name
                                  GENERIC_READ | GENERIC_WRITE, // first we write than we read
                                  0,                            // do not share
                                  NULL,                         // default security
                                  CREATE_ALWAYS,                // overwrite existing
                                  FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE, // "temporary" temporary file, see https://learn.microsoft.com/en-us/archive/blogs/larryosterman/its-only-temporary
                                  NULL);                        // no template
        if (tFile == INVALID_HANDLE_VALUE) {
            return -1;
        }
        return _open_osfhandle((intptr_t)tFile, _O_APPEND | _O_TEXT);
    }

    int replace_stdout(int temp_fileno)
    {
        fflush(stdout);
        int old;
        int cstdout = _fileno(stdout);
        old = _dup(cstdout); // "old" now refers to "stdout"
        if (old == -1)
        {
            return -1;
        }
        if (-1 == _dup2(temp_fileno, cstdout))
        {
            return -1;
        }
        return old;
    }

    int restore_stdout(int old_stdout){
        fflush(stdout);
        // Restore original stdout
        int cstdout = _fileno(stdout);
        return _dup2(old_stdout, cstdout);
    }

    void rewind_fd(int fd) {
        _lseek(fd, 0L, SEEK_SET);
    }
    """
    int open_temp_file()
    int replace_stdout(int temp_fileno)
    int restore_stdout(int old_stdout)
    void rewind_fd(int fd)
    void close_fd "_close" (int fd)


cdef class CStdOutCapture():
    cdef int tmpfile_fd
    cdef int old_stdout_fd

    def start(self):  # start capturing
        self.tmpfile_fd = open_temp_file()
        self.old_stdout_fd = replace_stdout(self.tmpfile_fd)

    def stop(self):  # stops capturing, frees resources and returns the content
        restore_stdout(self.old_stdout_fd)
        rewind_fd(self.tmpfile_fd)  # need to read from the beginning
        buffer = io.TextIOWrapper(os.fdopen(self.tmpfile_fd, 'rb'))
        result = buffer.read()
        close_fd(self.tmpfile_fd)
        return result
And now:
b = CStdOutCapture()
b.start()
helloworld.hello()
out = b.stop()
print("HERE WE GO:", out)
# HERE WE GO: example print from a C code
This is what I would do if I were free to edit the C code. Open a memory map in C and write to its file descriptor using fprintf(). Expose the file descriptor to Python either as an int, and then use the mmap module to open it or os.fdopen() to wrap it in a simpler file-like object, or wrap it in a file-like object in C and let Python use that.
Then I would create a class that lets me write to sys.stdout through the usual interface, i.e. its write() method (for the Python side's usage), and that uses the select module in a thread to poll the file that acts as the C side's stdout. Then I would switch sys.stdout with an object of this class. So, when Python calls sys.stdout.write(...), the string is forwarded to the original stdout's write(), and when the loop in the thread detects output on the file from C, it writes it out through that same original write(). So everything gets written to the screen and is available to loggers as well.
In this model, the strictly C part will never actually be writing to the file descriptor connected to the terminal.
You can even do much of this in C itself and leave little for the Python side, but it's easier to influence the interpreter from the Python side, since the extension is a shared library, which involves some kind of, let's call it, IPC and the OS in the whole story. That's why stdout is not shared between the extension and Python in the first place.
If you want to keep using printf() on the C side, you can look into how to redirect it in C before programming this whole mess.
This answer is strictly theoretical because I have had no time to test it, but it should be doable to the best of my knowledge. If you try it, please let me know in a comment how it went. Perhaps I missed something, but I am certain the theory is sane.
The beauty of this idea is that it is OS independent, although the part with shared memory, or connecting a file descriptor to allocated space in RAM, can sometimes be a PITA on Windows.
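A rough sketch of the class described above, to make the idea concrete. All names here are made up and the code is untested, in the spirit of the answer itself; c_fd is assumed to be the descriptor the C side writes to:
import os
import select
import sys
import threading

class StdoutTee:
    """Stands in for sys.stdout: forwards Python writes and drains the C-side fd."""

    def __init__(self, c_fd, logger):
        self.c_fd = c_fd
        self.logger = logger
        self.real_stdout = sys.__stdout__
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._drain_c_output, daemon=True)
        self._thread.start()

    def write(self, text):                 # Python's side of the interface
        self.real_stdout.write(text)
        return len(text)

    def flush(self):
        self.real_stdout.flush()

    def _drain_c_output(self):             # C's output is picked up here
        poller = select.poll()
        poller.register(self.c_fd, select.POLLIN)
        while not self._stop.is_set():
            for fd, _event in poller.poll(100):
                chunk = os.read(fd, 4096).decode(errors="replace")
                self.real_stdout.write(chunk)
                self.logger.debug(chunk.rstrip())

    def close(self):
        self._stop.set()
        self._thread.join()

# sys.stdout = StdoutTee(c_fd, some_logger) would install it.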
If you are not constrained to using printf in C, it would be easier to use the print equivalent from the Python C API and pass the object you want to redirect the message to as an argument.
For example, your hello.c would be:
#include <Python.h>
static PyObject* hello(PyObject* self, PyObject *args)
{
    PyObject *file = NULL;
    if (!PyArg_ParseTuple(args, "O", &file))
        return NULL;

    PyObject *pystr = PyUnicode_FromString("example print from a C code\n");
    PyFile_WriteObject(pystr, file, Py_PRINT_RAW);
    return Py_BuildValue("");
}

static char helloworld_docs[] =
    "helloworld(): Any message you want to put here!!\n";

static PyMethodDef helloworld_funcs[] = {
    {"hello", (PyCFunction)hello,
     METH_VARARGS, helloworld_docs},
    {NULL}
};

static struct PyModuleDef cModPyDem =
{
    PyModuleDef_HEAD_INIT,
    "helloworld",
    "Extension module example!",
    -1,
    helloworld_funcs
};

PyMODINIT_FUNC PyInit_helloworld(void)
{
    return PyModule_Create(&cModPyDem);
};
We can check if it is working with the program below:
import sys
import helloworld
helloworld.hello(sys.stdout)
helloworld.hello(sys.stdout)
helloworld.hello(sys.stderr)
In the command line we redirect each output separately:
python3 example.py 1> out.txt 2> err.txt
out.txt will have two print calls, while err.txt will have only one, as expected from our python script.
You can check Python's print implementation to get some more ideas of what you can do: cpython print source code
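To connect this back to the original goal of logging: anything with a write() method can be passed in, since PyFile_WriteObject ends up calling that method. A small sketch with a hypothetical LoggerWriter adapter (not part of any library):
import logging
import sys

import helloworld

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("c_ext")

class LoggerWriter:
    """File-like adapter that forwards write() calls to a logger method."""
    def __init__(self, log_func):
        self.log_func = log_func

    def write(self, text):
        if text.strip():              # skip bare newlines
            self.log_func(text.rstrip())

    def flush(self):
        pass

helloworld.hello(sys.stdout)                  # goes to the terminal
helloworld.hello(LoggerWriter(logger.debug))  # ends up in the logger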
I needed help regarding the subprocess module. This question might sound like a repeat, and I have seen a number of articles related to it in a number of ways. But even so I am unable to solve my problem. It goes as follows:
I have a C program, 2.c; its contents are as follows:
#include <stdio.h>

int main()
{
    int a;
    scanf("%d", &a);
    while (1)
    {
        if (a == 0) // Specific case for the first input
        {
            printf("%d\n", (a+1));
            break;
        }
        scanf("%d", &a);
        printf("%d\n", a);
    }
    return 0;
}
I need to write a python script which first compiles the code using subprocess.call() and then opens two processes using Popen to execute the respective C program. Now the output of the first process must be the input of the second and vice versa. So essentially, if my initial input was 0, then the first process outputs 2, which is taken by the second process. It in turn outputs 3 and so on infinitely.
The below script is what I had in mind, but it is flawed. If someone can help me I would very much appreciate it.
from subprocess import *
call(["gcc","2.c"])
a = Popen(["./a.out"],stdin=PIPE,stdout=PIPE) #Initiating Process
a.stdin.write('0')
temp = a.communicate()[0]
print temp
b = Popen(["./a.out"],stdin=PIPE,stdout=PIPE) #The 2 processes in question
c = Popen(["./a.out"],stdin=PIPE,stdout=PIPE)
while True:
    b.stdin.write(str(temp))
    temp = b.communicate()[0]
    print temp
    c.stdin.write(str(temp))
    temp = c.communicate()[0]
    print temp
a.wait()
b.wait()
c.wait()
If you want the output of the first command a to go as the input of the second command b, and b's output in turn to be a's input (in a circle, like a snake eating its tail), then you can't use .communicate() in a loop: .communicate() doesn't return until the process is dead and all the output is consumed.
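To make that concrete: once the process has to stay alive across iterations, the loop has to use write() plus readline() on the pipes directly instead of .communicate(). A minimal single-process sketch, assuming the child reads and writes one number per line and flushes its stdout (e.g. via the setvbuf() change shown further down):
from subprocess import Popen, PIPE

proc = Popen(['./a.out'], stdin=PIPE, stdout=PIPE, bufsize=0)
value = 10
for _ in range(10):
    proc.stdin.write(('%d\n' % value).encode())
    value = int(proc.stdout.readline())   # blocks until the child answers
    print(value)
proc.stdin.close()
proc.wait()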
One solution is to use a named pipe (if open() doesn't block in this case on your system):
#!/usr/bin/env python3
import os
from subprocess import Popen, PIPE
path = 'fifo'
os.mkfifo(path) # create named pipe
try:
    with open(path, 'r+b', 0) as pipe, \
            Popen(['./a.out'], stdin=PIPE, stdout=pipe) as b, \
            Popen(['./a.out'], stdout=b.stdin, stdin=pipe) as a:
        pipe.write(b'10\n')  # kick-start it
finally:
    os.remove(path)  # clean up
It emulates the a < fifo | b > fifo shell command from @alexander barakin's answer.
Here's a more complex solution that funnels the data via the Python parent process:
#!/usr/bin/env python3
import shutil
from subprocess import Popen, PIPE
with Popen(['./a.out'], stdin=PIPE, stdout=PIPE, bufsize=0) as b, \
        Popen(['./a.out'], stdout=b.stdin, stdin=PIPE, bufsize=0) as a:
    a.stdin.write(b'10\n')  # kick-start it
    shutil.copyfileobj(b.stdout, a.stdin)  # copy b's stdout to a's stdin
This code connects a's output to b's input using redirection via OS pipe (as a | b shell command does).
To complete the circle, b's output is copied to a's input in the parent Python code using shutil.copyfileobj().
This code may have buffering issues; there are multiple buffers in between the processes: the C stdio buffers and the buffers in the Python file objects wrapping the pipes (controlled by bufsize).
bufsize=0 turns off the buffering on the Python side and the data is copied as soon as it is available. Beware, bufsize=0 may lead to partial writes—you might need to inline copyfileobj() and call write() again until all read data is written.
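An inlined version of that copy loop might look like this (a sketch; raw pipe objects are assumed because of bufsize=0, so write() can return a short count):
def pump(src, dst, chunk_size=4096):
    """Copy src to dst, retrying short writes, until src hits EOF."""
    while True:
        data = src.read(chunk_size)
        if not data:
            break
        view = memoryview(data)
        while view:                  # keep writing until every byte is delivered
            written = dst.write(view)
            view = view[written:]

# pump(b.stdout, a.stdin) would replace the shutil.copyfileobj() call above.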
Call setvbuf(stdout, (char *) NULL, _IOLBF, 0) to make stdout line-buffered inside your C program:
#include <stdio.h>

int main(void)
{
    int a;
    setvbuf(stdout, (char *) NULL, _IOLBF, 0); /* make line buffered stdout */
    do {
        scanf("%d", &a);
        printf("%d\n", a - 1);
        fprintf(stderr, "%d\n", a); /* for debugging */
    } while (a > 0);
    return 0;
}
Output
10
9
8
7
6
5
4
3
2
1
0
-1
The output is the same.
Due to the way the C child program is written and executed, you might also need to catch and ignore BrokenPipeError exception at the end on a.stdin.write() and/or a.stdin.close() (a process may be already dead while there is uncopied data from b).
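In code, that could look roughly like this (wrapping the copy from the earlier snippet; a sketch, not a complete program):
try:
    shutil.copyfileobj(b.stdout, a.stdin)
    a.stdin.close()
except BrokenPipeError:
    pass  # 'a' already exited; whatever is left from b has nowhere to go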
The problem is here:
while True:
    b.stdin.write(str(temp))
    temp = b.communicate()[0]
    print temp
    c.stdin.write(str(temp))
    temp = c.communicate()[0]
    print temp
Once communicate has returned, it does nothing more. You have to run the process again. Also, you don't need 2 processes open at the same time.
Plus, the init phase is no different from the running phase, except that you provide the input data.
Here is what you could do to simplify it and make it work:
from subprocess import *
call(["gcc","2.c"])
temp = str(0)
while True:
    b = Popen(["./a.out"],stdin=PIPE,stdout=PIPE) #The 2 processes in question
    b.stdin.write(temp)
    temp = b.communicate()[0]
    print temp
    b.wait()
Otherwise, to see 2 processes running in parallel, proving that you can do that, just fix your loop as follows (by moving the Popen calls into the loop):
while True:
    b = Popen(["./a.out"],stdin=PIPE,stdout=PIPE) #The 2 processes in question
    c = Popen(["./a.out"],stdin=PIPE,stdout=PIPE)
    b.stdin.write(str(temp))
    temp = b.communicate()[0]
    print temp
    c.stdin.write(str(temp))
    temp = c.communicate()[0]
    print temp
Better yet, have b's output feed c's input:
while True:
    b = Popen(["./a.out"],stdin=PIPE,stdout=PIPE) #The 2 processes in question
    c = Popen(["./a.out"],stdin=b.stdout,stdout=PIPE)
    b.stdin.write(str(temp))
    temp = c.communicate()[0]
    print temp
quick question;
I'm using Ubuntu as my coding environment, and I am trying to write a C program for Windows for school.
The assignment says I have to do something using the system clock, and I decided to make a quick benchmarking program. Here it is:
#include <stdio.h>
#include <unistd.h>
#include <time.h>
int main () {
    int i = 0;
    int p = (int) getpid();
    int n = 0;
    clock_t cstart = clock();
    clock_t cend = 0;

    for (i = 0; i < 100000000; i++) {
        long f = (((i+9)*99)%4)+(8+i*999);
        if (i % p == 0)
            printf("i=%d, f=%ld\n", i, f);
    }

    cend = clock();
    printf("%.3f cpu sec\n", ((double)cend - (double)cstart) * 1.0e-6);
    return 0;
}
When I cross compile from Ubuntu to Windows using mingw32, it's fine. However, when I run the program in Windows, two issues happen:
The benchmark runs as expected and takes roughly 5 seconds, yet the timer says it took 0.03 seconds. (This doesn't happen when testing in my Ubuntu VM: if the benchmark takes 5 seconds in real time, the timer will say 5 seconds. So obviously, this is an issue.)
Then, once the program is done, the Windows terminal will close immediately.
How do I make the program stay open so you can look at your time for more than about 10 milliseconds, and how can I make the timer reflect the benchmark's actual runtime like it does when I test in Ubuntu?
Thanks!
I would like to use ld's --build-id option in order to add build information to my binary. However, I'm not sure how to make this information available inside the program. Assume I want to write a program that writes a backtrace every time an exception occurs, and a script that parses this information. The script reads the symbol table of the program and searches for the addresses printed in the backtrace (I'm forced to use such a script because the program is statically linked and backtrace_symbols is not working). In order for the script to work correctly I need to match build version of the program with the build version of the program which created the backtrace. How can I print the build version of the program (located in the .note.gnu.build-id elf section) from the program itself?
How can I print the build version of the program (located in the .note.gnu.build-id elf section) from the program itself?
You need to read the ElfW(Ehdr) (at the beginning of the file) to find program headers in your binary (.e_phoff and .e_phnum will tell you where program headers are, and how many of them to read).
You then read program headers, until you find PT_NOTE segment of your program. That segment will tell you offset to the beginning of all the notes in your binary.
You then need to read the ElfW(Nhdr) and skip the rest of the note (total size of the note is sizeof(Nhdr) + .n_namesz + .n_descsz, properly aligned), until you find a note with .n_type == NT_GNU_BUILD_ID.
Once you find NT_GNU_BUILD_ID note, skip past its .n_namesz, and read the .n_descsz bytes to read the actual build-id.
You can verify that you are reading the right data by comparing what you read with the output of readelf -n a.out.
P.S.
If you are going to go through the trouble to decode build-id as above, and if your executable is not stripped, it may be better for you to just decode and print symbol names instead (i.e. to replicate what backtrace_symbols does) -- it's actually easier to do than decoding ELF notes, because the symbol table contains fixed-sized entries.
Basically, this is the code I've written based on the answer given to my question. In order to compile the code I had to make some changes, and I hope it will work for as many types of platforms as possible. However, it was tested only on one build machine. One of the assumptions I used was that the program was built on the machine which runs it, so there is no point in checking endianness compatibility between the program and the machine.
user#:~/$ uname -s -r -m -o
Linux 3.2.0-45-generic x86_64 GNU/Linux
user#:~/$ g++ test.cpp -o test
user#:~/$ readelf -n test | grep Build
Build ID: dc5c4682e0282e2bd8bc2d3b61cfe35826aa34fc
user#:~/$ ./test
Build ID: dc5c4682e0282e2bd8bc2d3b61cfe35826aa34fc
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#if __x86_64__
# define ElfW(type) Elf64_##type
#else
# define ElfW(type) Elf32_##type
#endif
/*
detecting build id of a program from its note section
http://stackoverflow.com/questions/17637745/can-a-program-read-its-own-elf-section
http://www.scs.stanford.edu/histar/src/pkg/uclibc/utils/readelf.c
http://www.sco.com/developers/gabi/2000-07-17/ch5.pheader.html#note_section
*/
int main (int argc, char* argv[])
{
    char *thefilename = argv[0];
    FILE *thefile;
    struct stat statbuf;

    ElfW(Ehdr) *ehdr = 0;
    ElfW(Phdr) *phdr = 0;
    ElfW(Nhdr) *nhdr = 0;

    if (!(thefile = fopen(thefilename, "r"))) {
        perror(thefilename);
        exit(EXIT_FAILURE);
    }

    if (fstat(fileno(thefile), &statbuf) < 0) {
        perror(thefilename);
        exit(EXIT_FAILURE);
    }

    ehdr = (ElfW(Ehdr) *)mmap(0, statbuf.st_size,
                              PROT_READ|PROT_WRITE, MAP_PRIVATE, fileno(thefile), 0);

    phdr = (ElfW(Phdr) *)(ehdr->e_phoff + (size_t)ehdr);
    while (phdr->p_type != PT_NOTE)
    {
        ++phdr;
    }

    nhdr = (ElfW(Nhdr) *)(phdr->p_offset + (size_t)ehdr);
    while (nhdr->n_type != NT_GNU_BUILD_ID)
    {
        nhdr = (ElfW(Nhdr) *)((size_t)nhdr + sizeof(ElfW(Nhdr)) + nhdr->n_namesz + nhdr->n_descsz);
    }

    unsigned char *build_id = (unsigned char *)malloc(nhdr->n_descsz);
    memcpy(build_id, (void *)((size_t)nhdr + sizeof(ElfW(Nhdr)) + nhdr->n_namesz), nhdr->n_descsz);

    printf(" Build ID: ");
    for (int i = 0 ; i < nhdr->n_descsz ; ++i)
    {
        printf("%02x", build_id[i]);
    }
    free(build_id);
    printf("\n");

    return 0;
}
Yes, a program can read its own .note.gnu.build-id. The important piece is the dl_iterate_phdr function.
I've used this technique in Mesa (the OpenGL/Vulkan implementation) to read its own build-id for use with the on-disk shader cache.
I've extracted those bits into a separate project[1] for easy use by others.
[1] https://github.com/mattst88/build-id
I am trying to generate a comprehensive callgraph (complete with low level calls to Linux, runtime, the lot).
I have statically compiled my source files with "-fdump-rtl-expand" and created RTL files, which I passed to a Perl script called egypt (which, I believe, uses Graphviz/Dot) and generated a PDF file of the callgraph. This works perfectly, no problems at all.
Except, there are calls being made into some libraries that are getting shown as < built-in >. I was looking to see if there is a way for the callgraph not to print them as < built-in > and instead show the real calls made into the libraries?
Please let me know if the question is unclear.
http://i.imgur.com/sp58v.jpg
Basically, I am trying to stop the callgraph from generating < built-in > nodes.
Is there a way to do that ?
-------- CODE ---------
#include <cilk/cilk.h>
#include <stdio.h>
#include <stdlib.h>
unsigned long int t0, t5;
unsigned int NOSPAWN_THRESHOLD = 32;

int fib_nospawn(int n)
{
    if (n < 2)
        return n;
    else
    {
        int x = fib_nospawn(n-1);
        int y = fib_nospawn(n-2);
        return x + y;
    }
}

// spawning fibonacci function
int fib(long int n)
{
    long int x, y;
    if (n < 2)
        return n;
    else if (n <= NOSPAWN_THRESHOLD)
    {
        x = fib_nospawn(n-1);
        y = fib_nospawn(n-2);
        return x + y;
    }
    else
    {
        x = cilk_spawn fib(n-1);
        y = cilk_spawn fib(n-2);
        cilk_sync;
        return x + y;
    }
}

int main(int argc, char *argv[])
{
    int n;
    long int result;
    long int exec_time;

    n = atoi(argv[1]);
    NOSPAWN_THRESHOLD = atoi(argv[2]);

    result = fib(n);
    printf("%ld\n", result);

    return 0;
}
I compiled the Cilk Library from source.
I might have found a partial solution to the problem:
You need to pass the following option to egypt
--include-external
This produced a slightly more comprehensive callgraph, although the < built-in > node is still visible:
http://i.imgur.com/GWPJO.jpg?1
Can anyone suggest how I can get more depth in the callgraph?
You can use the GCC VCG Plugin: A gcc plugin, which can be loaded when debugging gcc, to show internal structures graphically.
gcc -fplugin=/path/to/vcg_plugin.so -fplugin-arg-vcg_plugin-cgraph foo.c
Call-graph is place to store data needed for inter-procedural optimization. All datastructures are divided into three components: local_info that is produced while analyzing the function, global_info that is result of global walking of the call-graph on the end of compilation and rtl_info used by RTL back-end to propagate data from already compiled functions to their callers.