#include "rtest.h"
#include <iostream>
SEXP rcpp_hello_world()
{
    using namespace Rcpp;
    CharacterVector x = CharacterVector::create("foo", "bar");
    NumericVector y = NumericVector::create(0.0, 1.0);
    List z = List::create(x, y);
    return z;
}

void funcA()
{
    std::cout << "\nsdfsdfsdf\n";
}

int main() { return 0; }
How do I place
library(RgoogleMaps)
and
png(filename="Rg.png", width=480, height=480)
inside the above code?
I run it as: R CMD SHLIB rtest.cpp
> sessionInfo()
R version 2.15.1 (2012-06-22)
Platform: x86_64-unknown-linux-gnu (64-bit)
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=C LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
>
Rcpp's version is 0.9.13
I tried:
R CMD SHLIB -lRgoogleMaps rtest.cpp
It resulted in:
anisha#linux-y3pi:~/> R CMD SHLIB -lRgoogleMaps rtest.cpp
g++ -I/usr/lib64/R/include -DNDEBUG -I/usr/local/include -I/usr/lib64/R/library/Rcpp/include -fpic -fmessage-length=0 -O2 -Wall -D_FORTIFY_SOURCE=2 -fstack-protector -funwind-tables -fasynchronous-unwind-tables -c rtest.cpp -o rtest.o
g++ -shared -L/usr/local/lib64 -o rtest.so rtest.o -lRgoogleMaps -L/usr/lib64/R/lib -lR
/usr/lib64/gcc/x86_64-suse-linux/4.5/../../../../x86_64-suse-linux/bin/ld: cannot find -lRgoogleMaps
collect2: ld returned 1 exit status
make: *** [rtest.so] Error 1
I think you have a bit of a conceptual problem here between what
library(RgoogleMaps)
does in R, and what a library is for a compiler and linker:
-lfoo -Lpath/to/library
The two are not the same, despite the fact that we use the English noun "library" in both cases.
You may need to brush up a little with a text on programming, compilers, linkers, ...
Why would you want to do this? Rcpp is designed for interfacing C++ code within an R session so that you can exploit faster computation or re-use existing C++ libraries.
Write an R wrapper that calls the Rcpp code, and have that wrapper arrange for the package to be made available (using require(RgoogleMaps)).
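To make that division of labour concrete, here is a minimal sketch reusing the question's rcpp_hello_world as a stand-in for the real computation; the wrapper shown in the comments, including the name hello_wrapper, is purely illustrative and not taken from the question. The C++ side only computes and returns an R object, while loading RgoogleMaps and opening the graphics device stay in R.
#include <Rcpp.h>

// C++ side: computation only; it knows nothing about RgoogleMaps or png().
RcppExport SEXP rcpp_hello_world()
{
    using namespace Rcpp;
    CharacterVector x = CharacterVector::create("foo", "bar");
    NumericVector y = NumericVector::create(0.0, 1.0);
    return List::create(x, y);
}

// The R wrapper (a separate .R file) would then look roughly like:
//   hello_wrapper <- function() {
//       require(RgoogleMaps)              # package loading happens in R
//       png(filename = "Rg.png", width = 480, height = 480)
//       z <- .Call("rcpp_hello_world")    # after dyn.load("rtest.so")
//       ## ... plot with RgoogleMaps using z ...
//       dev.off()
//       invisible(z)
//   }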
Second, you don't want to hard-code the plotting device. Again, you could do this in R:
png(filename="Rg.png", width=480, height=480)
##
## call Rcpp function
## in here
dev.off()
Rcpp isn't meant for writing standalone C++ applications; you still want to interact with it from an R session.
Related
case 1. gcc -o program main.o file1.o file2.o
case 2. ar crv foo.a file1.o file2.o
then, gcc -o program main.o foo.a
At link time, linking against a static library (case 2) is generally faster than linking the individual object files (case 1).
Why, and in what cases?
Any help is appreciated.
/* Filename: lib.h */
void file2(char *);
void file1(int);
main.c
#include "lib.h"
int main()
{
file1(3);
file2("Hello World");
return 0;
}
file1.c
#include <stdio.h>
#include "lib.h"
void file1(int arg) {
printf("you passed %d\n", arg);
}
file2.c
#include <stdio.h>
#include "lib.h"
void file2(char* arg) {
printf("you passed %s\n", arg);
}
The only case I can think of is when your object files are rather large.
Linking against a library will only "pull in" referenced items, so your executable can be smaller and the functions closer together in memory for shorter jumps. But that is unlikely to matter much on a modern platform.
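As a rough illustration of that "pull in" behaviour, suppose a third source file were added to the question's example (file3.c and its contents are hypothetical, not part of the original question); the comments sketch how the archive link treats it:
/* file3.c -- hypothetical extra archive member, never referenced by main.c */
#include <stdio.h>

void file3(void)
{
    printf("this function is never called\n");
}

/*
 * Sketch of the build:
 *   gcc -c main.c file1.c file2.c file3.c
 *   ar crv foo.a file1.o file2.o file3.o
 *   gcc -o program main.o foo.a
 *
 * When the linker scans foo.a it extracts only the members that satisfy
 * undefined references from main.o (here file1.o and file2.o); file3.o is
 * skipped, which is what "pull in referenced items" means. By contrast,
 * object files named directly on the command line (case 1) are always
 * included in the executable, referenced or not.
 */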
Given the following program:
#include <math.h>
#include <stdio.h>
int
main(void)
{
    double x = sqrt(2);
    printf("The square root of two is %f\n", x);
    return 0;
}
and compiling with:
gcc calc.c -o calc
succeeds. Why doesn't it require -lm or /usr/lib/blah/libm.so.x?
Inspecting the resulting binary with ldd produces:
linux-vdso.so.1 (0x00007fff4f5e5000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007feeffd1b000)
/lib64/ld-linux-x86-64.so.2 (0x00007fef000e1000)
No libm is referenced. However, if I look at the libc.so.6 library or the ld-linux-x86-64.so.2 library using nm -D, no sqrt function is in those libraries.
What's going on here? Is gcc magically including a default set of common functions or something else?
No, gcc treats sqrt as a built-in function, and since its argument here is a compile-time constant it simply folds sqrt(2) to a constant at compile time.
To trigger a use of the sqrt() library function, use code like this:
volatile double y = 2;
double x = sqrt(y);
One could also use the -ffreestanding gcc option (which disables the built-in treatment of standard functions such as sqrt), but this is not recommended.
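Put together as a complete program (a sketch based on the snippet above; the file name calc2.c is made up), the call can no longer be folded away, and the link then typically fails without -lm on a GNU/Linux toolchain:
#include <math.h>
#include <stdio.h>

int
main(void)
{
    volatile double y = 2;   /* volatile hides the constant from the compiler */
    double x = sqrt(y);      /* now an actual call into libm */
    printf("The square root of two is %f\n", x);
    return 0;
}

/*
 *   gcc calc2.c -o calc2        -> usually: undefined reference to `sqrt'
 *   gcc calc2.c -o calc2 -lm    -> links, and ldd now lists libm
 */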
This is the standard Hello World CUDA file:
#include <stdio.h>
#include "hello.h"
const int N = 7;
const int blocksize = 7;
__global__ void hello_kernel(char *a, int *b) {
    a[threadIdx.x] += b[threadIdx.x];
}
#define cudaCheckError() { \
    cudaError_t e=cudaGetLastError(); \
    if(e!=cudaSuccess) { \
        printf("Cuda failure %s:%d: '%s'\n",__FILE__,__LINE__,cudaGetErrorString(e)); \
        exit(0); \
    } \
}
void hello() {
    char a[N] = "Hello ";
    int b[N] = {15, 10, 6, 0, -11, 1, 0};
    char *ad;
    int *bd;
    const int csize = N*sizeof(char);
    const int isize = N*sizeof(int);
    printf("%s", a);
    cudaMalloc( (void**)&ad, csize );
    cudaMemcpy( ad, a, csize, cudaMemcpyHostToDevice );
    cudaCheckError();
    cudaMalloc( (void**)&bd, isize );
    cudaMemcpy( bd, b, isize, cudaMemcpyHostToDevice );
    cudaCheckError();
    dim3 dimBlock( blocksize, 1 );
    dim3 dimGrid( 1, 1 );
    hello_kernel<<<dimGrid, dimBlock>>>(ad, bd);
    cudaMemcpy( a, ad, csize, cudaMemcpyDeviceToHost );
    cudaCheckError();
    cudaFree( ad );
    cudaCheckError();
    printf("%s\n", a);
}
And its header:
// hello.h
extern "C"
void hello();
This is a Haskell file that calls that function:
-- test.hs
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C
import Foreign.Ptr (Ptr,nullPtr)
foreign import ccall "hello" hello :: IO ()
main = hello
I'm compiling it with:
nvcc hello.c -c -o hello.o
ghc test.hs -o test hello.o -L/usr/local/cuda/lib -optl-lcudart
Running that program with ./test results in:
Hello Cuda failure hello.cu:32: 'no CUDA-capable device is detected'
Running the same program with a C main() that just calls hello produces Hello World, as expected.
How do I make Haskell detect the device correctly?
Maybe unrelated, but I was able to reproduce your error on a Mac with separate on-board and discrete graphics cards. When "Automatic graphics switching" is enabled in System Preferences (and no 3D graphics applications are running), I get the same "no CUDA-capable device is detected" error.
When I turn off automatic graphics switching, it forces the Mac to use the discrete graphics card, and then the program runs as expected.
The purely C/CUDA-based version of the code doesn't seem to be affected by this preference and always works whether automatic switching is enabled or not.
Using ghc 7.8.3 and nvcc V6.5.12, I found that your code works as expected. The only thing I did differently was to name hello.c as hello.cu.
/:cuda_haskell> nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2014 NVIDIA Corporation
Built on Thu_Jul_17_19:13:24_CDT_2014
Cuda compilation tools, release 6.5, V6.5.12
/:cuda_haskell> nvcc -o hello.o -c hello.cu
/:cuda_haskell> ghc main.hs -o hello_hs hello.o -L/usr/local/cuda/lib -optl-lcudart
Linking hello_hs ...
/:cuda_haskell> ./hello_hs
Hello World!
/:cuda_haskell> cat main.hs
-- main.hs
{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C
import Foreign.Ptr (Ptr,nullPtr)
foreign import ccall "hello" hello :: IO ()
main = hello
I'm building an application with C++11 threads, but I can't seem to get it to work with clang++ on MacOSX 10.9. Here is the simplest example I can find that causes the issues:
#include <thread>
#include <iostream>
class Functor {
public:
Functor() = default;
Functor (const Functor& ) = delete;
void execute () {
std::cerr << "running in thread\n";
}
};
int main (int argc, char* argv[])
{
Functor functor;
std::thread thread (&Functor::execute, std::ref(functor));
thread.join();
}
This compiles and runs fine on Arch Linux using g++ (version 4.9.2) with the following command-line:
$ g++ -std=c++11 -Wall -pthread test_thread.cpp -o test_thread
It also compiles and runs fine using clang++ (version 3.5.0, also on Arch Linux):
$ clang++ -std=c++11 -Wall -pthread test_thread.cpp -o test_thread
But it fails on MacOSX 10.9.5, using XCode 6.1 (regardless of whether I include the -stdlib=libc++ option):
$ clang++ -std=c++11 -Wall -pthread test_thread.cpp -o test_thread
In file included from test_thread.cpp:1:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/thread:332:5: error: attempt to use a deleted function
__invoke(_VSTD::move(_VSTD::get<0>(__t)), _VSTD::move(_VSTD::get<_Indices>(__t))...);
^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/thread:342:5: note: in instantiation of function template specialization
'std::__1::__thread_execute<void (Functor::*)(), std::__1::reference_wrapper<Functor> , 1>' requested here
__thread_execute(*__p, _Index());
^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/thread:354:42: note: in instantiation of function template specialization
'std::__1::__thread_proxy<std::__1::tuple<void (Functor::*)(), std::__1::reference_wrapper<Functor> > >' requested here
int __ec = pthread_create(&__t_, 0, &__thread_proxy<_Gp>, __p.get());
^
test_thread.cpp:19:15: note: in instantiation of function template specialization 'std::__1::thread::thread<void (Functor::*)(), std::__1::reference_wrapper<Functor> , void>'
requested here
std::thread thread (&Functor::execute, std::ref(functor));
^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/type_traits:1001:5: note: '~__nat' has been explicitly marked deleted
here
~__nat() = delete;
^
1 error generated.
I can't figure out how to get around this; it seems like a compiler bug to me. For reference, the version of clang on that Mac is:
$ clang++ --version
Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin13.4.0
Thread model: posix
Any ideas what it is I'm doing wrong?
Thanks!
Donald.
The standard does not require the std::thread constructor - or the similar std::async, for that matter - to unwrap a reference_wrapper passed as the object argument for a pointer-to-member-function, the way std::bind does. Pass a pointer to the Functor object instead of a reference_wrapper. (See DR 2219 on the Library Active Issues list.)
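A minimal sketch of that workaround, applied to the example from the question (only the std::thread construction changes):
#include <thread>
#include <iostream>

class Functor {
public:
    Functor() = default;
    Functor(const Functor&) = delete;
    void execute() {
        std::cerr << "running in thread\n";
    }
};

int main()
{
    Functor functor;
    // Pass the address of the object: the thread dereferences the pointer,
    // so the non-copyable Functor is never copied and libc++ accepts it.
    std::thread thread(&Functor::execute, &functor);
    thread.join();
}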
I am working through a sample program that uses both C++ source code and CUDA. This is the essential content from my four source files.
matrixmul.cu (main CUDA source code):
#include <stdlib.h>
#include <cutil.h>
#include "assist.h"
#include "matrixmul.h"
int main (int argc, char ** argv)
{
...
computeGold(reference, hostM, hostN, Mh, Mw, Nw); //reference to .cpp file
...
}
matrixmul_gold.cpp (C++ source code, single function, no main method):
void computeGold(float * P, const float * M, const float * N, int Mh, int Mw, int Nw)
{
...
}
matrixmul.h (header for matrixmul_gold.cpp file)
#ifndef matrixmul_h
#define matrixmul_h
extern "C"
void computeGold(float * P, const float * M, const float * N, int Mh, int Mw, int Nw);
#endif
assist.h (helper functions)
I am trying to compile and link these files so that they, well, work. So far I can get matrixmul_gold.cpp compiled using:
g++ -c matrixmul_gold.cpp
And I can compile the CUDA source code without errors using:
nvcc -I/home/sbu/NVIDIA_GPU_Computing_SDK/C/common/inc -L/home/sbu/NVIDIA_GPU_Computing_SDK/C/lib matrixmul.cu -c -lcutil_x86_64
But I just end up with two .o files. I've tried a lot of different ways to link the two .o files, but so far it's a no-go. What's the proper approach?
UPDATE: As requested, here is the output of:
nm matrixmul_gold.o matrixmul.o | grep computeGold
nm: 'matrixmul.o': No such file
0000000000000000 T _Z11computeGoldPfPKfS1_iii
I think the missing 'matrixmul.o' error is because I am not actually getting a successful compile when running the suggested compile command:
nvcc -I/home/sbu/NVIDIA_GPU_Computing_SDK/C/common/inc -L/home/sbu/NVIDIA_GPU_Computing_SDK/C/lib -o matrixmul matrixmul.cu matrixmul_gold.o -lcutil_x86_64
UPDATE 2: I was missing an extern "C" at the beginning of matrixmul_gold.cpp. I added it, and the suggested compilation command works great. Thank you!
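For reference, a sketch of what the fixed matrixmul_gold.cpp looks like after that change (the body is elided, as in the question): the definition now has C linkage, so its symbol becomes the plain computeGold that matrixmul.cu expects via matrixmul.h, rather than the mangled _Z11computeGoldPfPKfS1_iii shown by nm.
// matrixmul_gold.cpp (after UPDATE 2)
extern "C"
void computeGold(float * P, const float * M, const float * N, int Mh, int Mw, int Nw)
{
    // ... reference implementation as in the question ...
}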
Conventionally, you link the application with whichever compiler you used to compile the code containing the main routine. In this case main is in the .cu file, so use nvcc to do the linking. Something like this:
$ g++ -c matrixmul_gold.cpp
$ nvcc -I/home/sbu/NVIDIA_GPU_Computing_SDK/C/common/inc \
-L/home/sbu/NVIDIA_GPU_Computing_SDK/C/lib \
-o matrixmul matrixmul.cu matrixmul_gold.o -lcutil_x86_64
This will link an executable binary called matrixmul from matrixmul.cu, matrixmul_gold.o and the cutil library (nvcc will implicitly link the CUDA runtime and CUDA driver libraries as well).