Application using LTTng: compile errors with aarch64-xilinx-linux-g++ - linux

I am trying to port LTTng to a Xilinx MPSoC running Linux. I have written a demo following LTTng's "Record user application events" example, and it runs perfectly on Ubuntu:
g++ -c -I. hello-tp.c
g++ -c hello.c
g++ -o hello hello-tp.o hello.o -llttng-ust -ldl
but when I compile it for the ARM Linux platform I get errors:
aarch64-xilinx-linux-g++ -mcpu=cortex-a72.cortex-a53 -march=armv8-a+crc -fstack-protector-strong -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/home/david/project/zcu102/images/linux/sdk/sysroots/cortexa72-cortexa53-xilinx-linux -O2 -pipe -g -feliminate-unused-debug-types -c -I. hello-tp.c
In file included from hello-tp.c:4:
hello-tp.h:16:27: error: expected constructor, destructor, or type conversion before ‘(’ token
16 | LTTNG_UST_TRACEPOINT_EVENT(hello_world, my_first_tracepoint, LTTNG_ARGS, LTTNG_FIELDS)
| ^
make: *** [Makefile:14: hello-tp.o] Error 1
Here is the code.
hello-tp.h:
#undef LTTNG_UST_TRACEPOINT_PROVIDER
#define LTTNG_UST_TRACEPOINT_PROVIDER hello_world
#undef LTTNG_UST_TRACEPOINT_INCLUDE
#define LTTNG_UST_TRACEPOINT_INCLUDE "./hello-tp.h"
#if !defined(_HELLO_TP_H) || defined(LTTNG_UST_TRACEPOINT_HEADER_MULTI_READ)
#define _HELLO_TP_H
#include <lttng/tracepoint.h>
#define LTTNG_ARGS LTTNG_UST_TP_ARGS(int, my_integer_arg, char *, my_string_arg)
#define LTTNG_FIELDS LTTNG_UST_TP_FIELDS(lttng_ust_field_string(my_string_field, my_string_arg) lttng_ust_field_integer(int, my_integer_field, my_integer_arg))
LTTNG_UST_TRACEPOINT_EVENT(hello_world, my_first_tracepoint, LTTNG_ARGS, LTTNG_FIELDS)
#endif /* _HELLO_TP_H */
#include <lttng/tracepoint-event.h>
hello-tp.c
#define LTTNG_UST_TRACEPOINT_CREATE_PROBES
#define LTTNG_UST_TRACEPOINT_DEFINE
#include "hello-tp.h"
hello.c
#include <stdio.h>
#include "hello-tp.h"

int main(int argc, char *argv[])
{
    unsigned int i;

    puts("Hello, World!\nPress Enter to continue...");

    /*
     * The following getchar() call only exists for the purpose of this
     * demonstration, to pause the application in order for you to have
     * time to list its tracepoints. You don't need it otherwise.
     */
    getchar();

    /*
     * An lttng_ust_tracepoint() call.
     *
     * Arguments, as defined in `hello-tp.h`:
     *
     * 1. Tracepoint provider name (required)
     * 2. Tracepoint name (required)
     * 3. `my_integer_arg` (first user-defined argument)
     * 4. `my_string_arg` (second user-defined argument)
     *
     * Notice the tracepoint provider and tracepoint names are
     * C identifiers, NOT strings: they're in fact parts of variables
     * that the macros in `hello-tp.h` create.
     */
    lttng_ust_tracepoint(hello_world, my_first_tracepoint, 23,
                         "hi there!");

    for (i = 0; i < argc; i++) {
        lttng_ust_tracepoint(hello_world, my_first_tracepoint,
                             i, argv[i]);
    }

    puts("Quitting now!");
    lttng_ust_tracepoint(hello_world, my_first_tracepoint,
                         i * i, "i^2");

    return 0;
}
Makefile
APP = hello

# Add any other object files to this list below
APP_OBJS = hello-tp.o hello.o

all: build

build: $(APP)

$(APP): $(APP_OBJS)
	$(CXX) -o $@ $(APP_OBJS) $(LDFLAGS) -llttng-ust -ldl

hello-tp.o: hello-tp.c hello-tp.h
	$(CXX) $(CXXFLAGS) -c -I. $<

hello.o: hello.c
	$(CXX) $(CXXFLAGS) -c $<

clean:
	rm -f $(APP) *.o
Has anyone met such an issue? I guess the problem is caused by the compiler, but I can't find any clue...

I just ran into this problem. Check your LTTng version. The 2.13 release (current) uses LTTNG_UST_TRACEPOINT_PROVIDER; older releases use TRACEPOINT_PROVIDER. The LTTNG_UST prefix has been added all over the place. See https://lttng.org/man/3/lttng-ust/v2.13/#doc-_compatibility_with_previous_apis
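If the lttng-ust shipped in the Xilinx sysroot is older than 2.13, the same provider can be written with the older, unprefixed macro names. The sketch below follows the pre-2.13 documentation; verify the exact names against the lttng-ust version your SDK actually provides:
/* hello-tp.h, pre-2.13 naming */
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER hello_world

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./hello-tp.h"

#if !defined(_HELLO_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _HELLO_TP_H

#include <lttng/tracepoint.h>

TRACEPOINT_EVENT(
    hello_world,
    my_first_tracepoint,
    TP_ARGS(int, my_integer_arg, char *, my_string_arg),
    TP_FIELDS(
        ctf_string(my_string_field, my_string_arg)
        ctf_integer(int, my_integer_field, my_integer_arg)
    )
)

#endif /* _HELLO_TP_H */

#include <lttng/tracepoint-event.h>
hello-tp.c would then define TRACEPOINT_CREATE_PROBES and TRACEPOINT_DEFINE instead of the LTTNG_UST_ versions, and hello.c would call tracepoint(hello_world, my_first_tracepoint, ...) instead of lttng_ust_tracepoint(...).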

Related

Compile error on cygwin with strerror_r

I am getting a build error using make:
g++ -std=c++11 -DHAVE_CONFIG_H -I. -I../src/config -I. -I./obj -DBOOST_SP_USE_STD_ATOMIC -DBOOST_AC_USE_STD_ATOMIC -pthread -I/usr/include -I./leveldb/include -I./leveldb/helpers/memenv -I./secp256k1/include -I./univalue/include -DHAVE_BUILD_INFO -D__STDC_FORMAT_MACROS -std=c99 -D_XOPEN_SOURCE=500 -g -O2 -Wall -Wextra -Wformat -Wvla -Wformat-security -Wno-unused-parameter -MT libbitcoin_common_a-netbase.o -MD -MP -MF .deps/libbitcoin_common_a-netbase.Tpo -c -o libbitcoin_common_a-netbase.o `test -f 'netbase.cpp' || echo './'`netbase.cpp
cc1plus: warning: command line option `-std=c99' is valid for C/ObjC but not for C++
In file included from /usr/include/boost/assert.hpp:58:0,
from /usr/include/boost/range/size.hpp:23,
from /usr/include/boost/range/functions.hpp:20,
from /usr/include/boost/range/iterator_range_core.hpp:38,
from /usr/include/boost/range/iterator_range.hpp:13,
from /usr/include/boost/range/as_literal.hpp:22,
from /usr/include/boost/algorithm/string/case_conv.hpp:19,
from netbase.cpp:25:
netbase.cpp: In function `bool LookupIntern(const char*, std::vector<CNetAddr>&, unsigned int, bool)':
netbase.cpp:95:39: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
assert(aiTrav->ai_addrlen >= sizeof(sockaddr_in));
~~~~~~~~~~~~~~~~~~~^~~~~~~~~~
netbase.cpp:101:39: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
assert(aiTrav->ai_addrlen >= sizeof(sockaddr_in6));
~~~~~~~~~~~~~~~~~~~^~~~~~~~~~
netbase.cpp: In function `std::string NetworkErrorString(int)':
netbase.cpp:720:41: error: `strerror_r' was not declared in this scope
if (strerror_r(err, buf, sizeof(buf)))
^
Evidently Cygwin does support strerror_r, as per https://cygwin.com/cygwin-api/compatibility.html#std-susv4
Code snippet where the code is breaking:
#ifdef STRERROR_R_CHAR_P /* GNU variant can return a pointer outside the passed buffer */
    s = strerror_r(err, buf, sizeof(buf));
#else /* POSIX variant always returns message in buffer */
    s = buf;
    if (strerror_r(err, buf, sizeof(buf)))
        buf[0] = 0;
#endif
Can someone guide me as to how I can fix this?
TIA
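One possible direction (an untested sketch, assuming the declaration is hidden because -std=c++11 puts Cygwin's newlib headers into strict-ANSI mode): request the POSIX declarations explicitly before any system header is included, or build with -std=gnu++11 / -D_GNU_SOURCE instead of plain -std=c++11. The function name below is a made-up stand-in for the real one:
/* Sketch only */
#define _POSIX_C_SOURCE 200112L   /* expose the POSIX strerror_r declaration in newlib */

#include <cstring>
#include <string>

static std::string NetworkErrorStringSketch(int err)
{
    char buf[256];
    buf[0] = '\0';
#ifdef STRERROR_R_CHAR_P          /* GNU variant can return a pointer outside buf */
    const char *s = strerror_r(err, buf, sizeof(buf));
#else                             /* POSIX variant always writes into buf */
    const char *s = buf;
    if (strerror_r(err, buf, sizeof(buf)))
        buf[0] = '\0';
#endif
    return std::string(s);
}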

OpenMPI runtime error : Hello World

I'm able to successfully compile my code when I execute the make command. However, when I run the code as:
mpirun -np 4 test
The error generated is:
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[63067,1],2]
Exit code: 1
--------------------------------------------------------------------------
I don't have multiple MPI installations, so I don't expect that to be a problem.
I've been having trouble with my Hello World OpenMPI program. My main file is:
#include <iostream>
#include "mpi.h"

using namespace std;

int main(int argc, const char * argv[]) {
    MPI_Init(NULL, NULL);

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cout << "The number of spawned processes are " << size << "And this is the process " << rank;

    MPI_Finalize();
    return 0;
}
My makefile is:
# Compiler
CXX = mpic++

# Compiler flags
CFLAGS = -Wall -lm

# Header and Library Paths
INCLUDE = -I/usr/local/include -I/usr/local/lib -I..
LIBRARY_INCLUDE = -L/usr/local/lib
LIBRARIES = -l mpi

# the build target executable
TARGET = test

all: $(TARGET)

$(TARGET): main.cpp
	$(CXX) $(CFLAGS) -o $(TARGET) main.cpp $(INCLUDE) $(LIBRARY_INCLUDE) $(LIBRARIES)

clean:
	rm $(TARGET)
The output of: mpic++ --version is:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 8.1.0 (clang-802.0.42)
Target: x86_64-apple-darwin16.5.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
And that for mpirun --version is:
mpirun (Open MPI) 2.1.1
Report bugs to http://www.open-mpi.org/community/help/
What could be causing the issue?
This is now resolved. It turns out that I have to execute with
mpirun -np 4 ./test
Ref: users-request#lists.open-mpi.org
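Most likely, without the leading ./, mpirun resolves test through the PATH and launches the system test(1) utility, which exits with status 1 when given no arguments, instead of the freshly built binary. The commands below are an illustrative check, not from the original answer:
$ which test            # typically prints /usr/bin/test: the shell utility shadows the local binary
$ mpirun -np 4 ./test   # an explicit path bypasses the PATH lookup and runs the executable built by make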

dylib dynamic library calling a dylib : Undefined symbols for architecture i386

Under Mac OS X, with g++ from GCC 5.2, I am trying to do the following: create a dylib exporting a class defined by the header tmp8bis_dylib.h and the source tmp8bis_dylib.cpp, and then create another dylib out of a source file tmp8bis.cpp that uses and links to the previous dylib. The header and sources are in the same directory. I compile as follows:
g++-5.2.0 -m32 -Wall -g -c ./tmp8bis_dylib.cpp
g++-5.2.0 -m32 -dynamiclib ./tmp8bis_dylib.o -o ./tmp8bis_dylib.dylib
g++-5.2.0 -m32 -Wall -g -c ./tmp8bis.cpp
g++-5.2.0 -m32 -dynamiclib ./tmp8bis.o -o ./tmp8bis.dylib
and get this:
Undefined symbols for architecture i386:
"complex::cmodule(double, double)", referenced from:
_mymodule in tmp8bis.o
"complex::complex(double, double)", referenced from:
_mymodule in tmp8bis.o
"complex::~complex()", referenced from:
_mymodule in tmp8bis.o
ld: symbol(s) not found for architecture i386
collect2: error: ld returned 1 exit status
make: *** [all] Error 1
Obviously, I tried passing various include and library paths with -I and -L flags respectively, with the very same result... Any idea?
Files are below :
For tmp8bis_dylib.h :
#ifndef TMP_8_BIS_DYLIB_H
#define TMP_8_BIS_DYLIB_H

class complex
{
public:
    double real;
    double imag;
public:
    complex();
    complex(double x);
    complex(double x, double y);
    double cmodule(double x, double y);
    ~complex();
};

#endif
For tmp8bis_dylib.cpp :
#include "./tmp8bis_dylib.h"
#include <math.h>
extern "C"
{
complex::complex()
{
real = 0.0 ;
imag = 0.0 ;
}
complex::complex(double x)
{
real = x ;
imag = 0.0 ;
}
complex::complex(double x,double y)
{
real = x ;
imag = y ;
}
double complex::cmodule(double x, double y)
{
double res = sqrt(x*x+y*y);
return res ;
}
complex::~complex()
{
}
}
For tmp8bis.cpp :
#include <math.h>
#include "./tmp8bis_dylib.h"
extern "C"
{
double mymodule(double x, double y)
{
complex z(x,y);
double ret = z.cmodule(x,y);
return ret;
}
}
Note: -m32 is there because I need a 32-bit dylib, as the final dylib will be plugged into Excel 2011 (for Mac) VBA, which is 32-bit.
EDIT. Following Brett Hale's comment about Apple's advice on dylibs, I added
#define EXPORT __attribute__((visibility("default")))
after the #includes in tmp8bis.cpp, and EXPORTs for all its member functions, and compiled as follows:
g++-5.2.0 -m32 -Wall -g -c ./tmp8bis_dylib.cpp
g++-5.2.0 -m32 -dynamiclib ./tmp8bis_dylib.o -fvisibility=hidden -o ./tmp8bis_dylib.dylib
did a sudo cp ./tmp8bis_dylib.dylib /opt/lib/libtmp8bis_dylib.dylib, and then compiled:
g++-5.2.0 -m32 -Wall -g -c ./tmp8bis.cpp
g++-5.2.0 -m32 -dynamiclib ./tmp8bis.o -o ./tmp8bis.dylib -L/opt/lib
and got the same result as before... Nor did
g++-5.2.0 -m32 -dynamiclib ./tmp8bis.o -o ./tmp8bis.dylib -ltmp8bis_dylib.dylib
make my day.
Without resorting to #define EXPORT __attribute__((visibility("default"))) or any -fvisibility=hidden
g++-5.2.0 -m32 -Wall -fpic -g -c ./tmp8bis_dylib.cpp
g++-5.2.0 -m32 -shared ./tmp8bis_dylib.o -o ./libtmp8bis_dylib.dylib
g++-5.2.0 -m32 -Wall -g -c ./tmp8bis.cpp
g++-5.2.0 -m32 -shared ./tmp8bis.o -o ./tmp8bis.dylib -L. -ltmp8bis_dylib
finally worked. I did not manage to succeed without -fpic, naming the library libtmp8bis_dylib.dylib, and using -ltmp8bis_dylib.
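For anyone comparing the failing and working command lines: the original tmp8bis.dylib link never referenced the first library at all (no -L/-l), and wrapping C++ member function definitions in extern "C" does not stop their names from being mangled, so the complex:: symbols stayed unresolved. Two standard macOS checks (commands are illustrative):
$ nm -gU ./libtmp8bis_dylib.dylib | c++filt   # defined, exported symbols: the complex:: members should appear here
$ nm -gu ./tmp8bis.dylib | c++filt            # undefined symbols this dylib expects another library to provide
$ otool -L ./tmp8bis.dylib                    # the dylibs it was actually linked against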

Linking cuda object file

I have one .cu file that contains my cuda kernel, and a wrapper function that calls the kernel. I have a bunch of .c files as well, one of which contains the main function. One of these .c files calls the wrapper function from the .cu to invoke the kernel.
I compile these files as follows:
LIBS=-lcuda -lcudart
LIBDIR=-L/usr/local/cuda/lib64
CFLAGS = -g -c -Wall -Iinclude -Ioflib
NVCCFLAGS =-g -c -Iinclude -Ioflib
CFLAGSEXE =-g -O2 -Wall -Iinclude -Ioflib
CC=gcc
NVCC=nvcc
objects := $(patsubst oflib/%.c,oflib/%.o,$(wildcard oflib/*.c))

table-hash-gpu.o: table-hash.cu table-hash.h
	$(NVCC) $(NVCCFLAGS) table-hash.cu -o table-hash-gpu.o

main: main.c $(objects) table-hash-gpu.o
	$(CC) $(CFLAGSEXE) $(objects) table-hash-gpu.o -o udatapath udatapath.c $(LIBS) $(LIBDIR)
So far everything is fine. table-hash-gpu.cu calls a function from one of the .c files. When linking for main, I get the error that the function is not present. Can someone please tell me what is going on?
nvcc compiles the host-side code in a .cu file with the host C++ compiler, which implies C++ name mangling. If you need to call a function compiled with a C compiler from C++, you must tell the C++ compiler that it uses C linkage. I presume that the errors you are seeing are analogous to this:
$ cat cfunc.c
float adder(float a, float b, float c)
{
    return a + 2.f*b + 3.f*c;
}
$ cat cumain.cu
#include <cstdio>

float adder(float, float, float);

int main(void)
{
    float result = adder(1.f, 2.f, 3.f);
    printf("%f\n", result);
    return 0;
}
$ gcc -m32 -c cfunc.c
$ nvcc -o app cumain.cu cfunc.o
Undefined symbols:
"adder(float, float, float)", referenced from:
_main in tmpxft_0000b928_00000000-13_cumain.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
Here we have code compiled with nvcc (so the host C++ compiler) trying to call a C function and getting a link error, because the C++ code expects a mangled name for adder in the supplied object file. If the main is changed like this:
$ cat cumain.cu
#include <cstdio>

extern "C" float adder(float, float, float);

int main(void)
{
    float result = adder(1.f, 2.f, 3.f);
    printf("%f\n", result);
    return 0;
}
$ nvcc -o app cumain.cu cfunc.o
$ ./app
14.000000
It works. Because extern "C" qualifies the declaration of the function to the C++ compiler, it does not apply C++ mangling and linkage rules when referencing adder, and the resulting code links correctly.
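For a project like the one in the question, the usual way to keep this in one place is a shared header whose declarations carry C linkage for both compilers. The file and function names below are made up for illustration:
/* table_hash_helpers.h -- hypothetical shared header */
#ifndef TABLE_HASH_HELPERS_H
#define TABLE_HASH_HELPERS_H

#ifdef __cplusplus
extern "C" {
#endif

/* Implemented in one of the plain .c files, called from the .cu wrapper. */
void insert_into_table(unsigned int key, void *value);

#ifdef __cplusplus
}
#endif

#endif /* TABLE_HASH_HELPERS_H */
Both the .c files (compiled by gcc) and the .cu file (whose host side nvcc compiles as C++) can then include the same header, and the call resolves against the unmangled C symbol.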

Why am I not getting a "multiple definition" error from g++?

I tried to link my executable program with 2 static libraries using g++. The 2 static libraries define the same function name. I was expecting a "multiple definition" error from the linker, but I did not receive one. Can anyone help explain why this is so?
staticLibA.h
#ifndef _STATIC_LIBA_HEADER
#define _STATIC_LIBA_HEADER
int hello(void);
#endif
staticLibA.cpp
#include "staticLibA.h"
#include <stdio.h>

int hello(void)
{
    printf("\nI'm in staticLibA\n");
    return 0;
}
output:
g++ -c -Wall -fPIC -m32 -o staticLibA.o staticLibA.cpp
ar -cvq ../libstaticLibA.a staticLibA.o
a - staticLibA.o
staticLibB.h
#ifndef _STATIC_LIBB_HEADER
#define _STATIC_LIBB_HEADER
int hello(void);
#endif
staticLibB.cpp
#include "staticLibB.h"
#include <stdio.h>

int hello(void)
{
    printf("\nI'm in staticLibB\n");
    return 0;
}
output:
g++ -c -Wall -fPIC -m32 -o staticLibB.o staticLibB.cpp
ar -cvq ../libstaticLibB.a staticLibB.o
a - staticLibB.o
main.cpp
extern int hello(void);

int main(void)
{
    hello();
    return 0;
}
output:
g++ -c -o main.o main.cpp
g++ -o multipleLibsTest main.o -L. -lstaticLibA -lstaticLibB -lstaticLibC -ldl -lpthread -lrt
The linker does not look at staticLibB, because by the time staticLibA is linked, there are no unfulfilled dependencies.
That's an easy one. An object is only pulled out of a library if the symbol referenced hasn't already been defined. Only one of the hellos is pulled (from A). You'd get errors if you linked with the .o files.
When the linker tries to link main.o into multipleLibsTest and sees that hello() is unresolved, it starts searching the libraries in the order given on the command line. It will find the definition of hello() in staticLibA and will terminate the search.
It will not look in staticLibB or staticLibC at all.
If staticLibB.o contained another symbol not in staticLibA and that symbol was pulled into the final executable, you would then get a multiple definition of hello() error, as individual .o files are pulled out of the library and two of them would have hello(). Reversing the order of staticLibA and staticLibB on the link command line would then make that error go away.
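To see the difference in practice, either link the object files directly (so no archive member selection happens) or force every archive member to be loaded; both make the duplicate visible. These commands are a sketch using GNU ld options and assume all objects are built for the same architecture:
# Linking the .o files themselves always loads both definitions of hello():
g++ -o multipleLibsTest main.o staticLibA.o staticLibB.o
#   => multiple definition of `hello()'

# Forcing the linker to take every member of each archive has the same effect:
g++ -o multipleLibsTest main.o -L. -Wl,--whole-archive -lstaticLibA -lstaticLibB -Wl,--no-whole-archive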
