I am trying to install a C++ library called FastAD (https://github.com/JamesYang007/FastAD#user-guide) for use with Rcpp, but the installation instructions are generic (not specific to Rcpp).
It would be greatly appreciated if someone could give me some guidance on how to install the library and #include its files.
FastAD is a header-only library which only depends on Eigen3. That makes for a pretty straightforward application for Rcpp and friends.
First, rely on the RcppEigen.package.skeleton() function to create the barebones RcppEigen-using package.
Second, we copy the FastAD library into inst/include. We add a PKG_CPPFLAGS variable in src/Makevars so that R picks the headers up from there; that is all the compiler needs with a header-only library. (Edit: We also set CXX_STD = CXX17, which is needed unless one has a new enough compiler or R version (currently: r-devel) that already defaults to C++17. See the sketch after this list.)
Third, we create a simple example in src/ based on an example in FastAD. We picked the Black-Scholes example.
Fourth, minor cleanups like removing the hello* stanza files.
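Concretely, the src/Makevars mentioned in the second step ends up with just two lines; this is a sketch assuming the FastAD headers were copied to inst/include:

## src/Makevars -- headers vendored in inst/include, C++17 requested
PKG_CPPFLAGS = -I../inst/include
CXX_STD = CXX17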
That is pretty much it. In its embryonic form the package is now here on GitHub. An example run:
> library(RcppFastAD)
> blackScholesExamples()
56.5136
0.773818
51.4109
-0.226182
>
I'm trying to write a wrapper for a C++ function I've written that makes use of the Point Cloud Library (PCL). This is my first try at interfacing R and C++, so I apologise if the solution is trivial. My goal is to make a few functions available to myself and my colleagues directly in R, on Mac and Windows. My example function cloudSize is included at the bottom of the text. I will try to be as clear as possible.
I've installed PCL with the vcpkg package manager for winx64 at C:\src\vcpkg\vcpkg.
This is added to the Path environment variable for my user.
I created an empty R-package with Rcpp.package.skeleton():
C:/User/csvi0001/Desktop/GitHub/RPCLpackage/PCLR
PCL is a massive library, but thankfully modular, and so I only #include the headers that are needed to compile the executable: pcl/io/pcd_io.h, pcl/point_types.h, pcl/registration/icp.h.
Now, since I'd like this to work on more than one OS (and therefore compile on install?), I should presumably use a dynamic library? I'll presume that the person installing my package already has a compiled copy of PCL. However, I do not know how to detect that PCL is installed, or how to find the flags to put in Makevars. CMake must find them when testing the C++ function in VSCode after adding an include path. In lieu of this:
I copied the pcl folder installed by vcpkg to ./src. When I tried copying all the .h files directly, they lost track of one another, since they refer to each other through the module they are placed in; e.g. <pcl/memory.h> cannot be found if memory.h is placed directly in ./src. Flattening the module structure, however, means that every single dependency and #include must be changed manually, and in some cases there are files with the same name in different folders, e.g. pcl/kdtree.h and pcl/search/kdtree.h. After this, it must all be done again when replacing < > with " " for each header.
Is there any way of telling Rcpp that the library included in /src has a directory structure?
I'm working on Win 10 winx64.
Since I'm making use of the dependencies RcppEigen and BH, and I must have C++14 or higher (choice: C++17), I add to my DESCRIPTION file:
LinkingTo: Rcpp, RcppEigen, BH
SystemRequirements: C++17
My actual C++ function:
// PCL requires at least C++14
// [[Rcpp::plugins(cpp17)]]
// [[Rcpp::depends(RcppEigen)]]
// [[Rcpp::depends(BH)]]
#include <Rcpp.h>
#include <iostream>
#include "pcl/io/pcd_io.h"
#include "pcl/point_types.h"
#include "pcl/registration/icp.h"

// [[Rcpp::export]]
int cloudSize(Rcpp::DataFrame x)
{
    // Extract the columns up front; a DataFrame cannot be indexed as x[0][i].
    Rcpp::NumericVector xs = x[0], ys = x[1], zs = x[2];
    pcl::PointCloud<pcl::PointXYZ> sourceCloud;
    for (int i = 0; i < xs.size(); i++) {
        sourceCloud.push_back(pcl::PointXYZ(xs[i], ys[i], zs[i]));
    }
    // sourceCloud is a value, not a pointer, so use '.' rather than '->'.
    return static_cast<int>(sourceCloud.size());
}
That is a non-trivial question. In the simplest case, use a 'hook' offered by configure and configure.win to pre-build a (static) library you ship in your sources and then link your package to that.
That said, the Writing R Extensions manual and/or the CRAN Repository Policy (both of which are the references here) express more of a preference for an external library -- which may not be an option here if PCL is too exotic.
As the topic comes up with Rcpp, I wrote a short paper about it (at arXiv here) which is also included as a vignette in the package. It requires a few pages to cover the common cases but even then it cannot cover all.
Your main source of reference may be CRAN. There are lots of packages in this space. A few of mine use external libraries; I contributed to the package nloptr, which uses a hybrid approach ("use the system library if found, else build"), and some, like httpuv, always build (a small-ish library).
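To make the configure-hook idea concrete, a src/Makevars.win for the pre-built case might look roughly like this; the paths and PCL component names are assumptions for illustration, not something the tooling mandates:

## src/Makevars.win -- sketch; configure.win is assumed to have built
## a static PCL into inst/ before compilation starts
PKG_CPPFLAGS = -I../inst/include
PKG_LIBS = -L../inst/lib -lpcl_registration -lpcl_io -lpcl_common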
I am using Asio in an Rcpp package, and am therefore using the package AsioHeaders.
I have added BH and AsioHeaders in the "LinkingTo" part of the DESCRIPTION file of my package. I have also added comments
// [[Rcpp::depends(BH)]]
// [[Rcpp::depends(AsioHeaders)]]
in my code. So normally, the linking should be fine when compiling the package.
And it is when I compile it on Linux. But when trying to compile it on Windows, I get linking errors that are solved by linking -lws2_32 and -lwsock32.
I am thus wondering, whether I should edit the Makevars file so that these are linked on Windows but ignored on Linux, or if I have done something wrong using AsioHeaders?
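(For reference: R reads src/Makevars.win on Windows and src/Makevars everywhere else, so flags placed only in the former would be ignored on Linux. A minimal sketch of what I have in mind:

## src/Makevars.win -- only consulted on Windows builds
PKG_LIBS = -lws2_32 -lwsock32

)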
AsioHeaders maintainer here. Quick questions:
Which version of AsioHeaders? It was just updated at CRAN. Is this a change from the new version (which would surprise me...)?
Make sure you are not accidentally using Asio functionality from Boost which will require linking. See the three packages using AsioHeaders.
If your package is truly header-only then LinkingTo: is all you need. R will find the header directories for you. In particular, you do not need link instructions in src/Makevars* because, well, header-only.
Also, you probably meant // forward slashes for your C++ comments above...
I created a simple hierarchical C++ project to help me learn the use of scons, as I want to get away from cmake and qmake. I have registered it in a GitHub repository at https://github.com/pleopard777/SConsEx . The project is organized into two primary subdirs: packages contains two libraries and testing contains two apps. The packages dir needs to be built first; when it is complete, the testing dir is built. Under packages, the core library must be compiled first and the numerics library second; numerics depends on core. Under testing, the core_tests app depends on the core library and the numerics_tests app depends on core and numerics.
I am struggling with what seems to be limited documentation and examples for scons, so I am posting here in search of some guidance. Here are the initial problems I am having; any guidance will be greatly appreciated:
1) [Edit/FIXED]
2) In the packages/numerics/ dir, the source files depend on the core library. The file numerics_config.h requires the file ../core/core_config.h; however, when building, that core file cannot be found. The following SConstruct lines don't help:
include = '../../packages'
env = Environment(CPPPATH=include)
Again, this is just a start to the project and I am using it to learn scons. Any guidance will be appreciated ... I'm sure I will be asking lots more questions as this project progresses.
Thanks!
P
Fixed in pull request to your repo.
Note you had some C++ issues as well. I've fixed those too.
See:
https://github.com/pleopard777/SConsEx/pull/1
(Please don't delete your repo so others can find the solution as well)
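Independent of the pull request: the usual SCons idiom for the include-path problem above is to anchor CPPPATH with '#', which refers to the directory holding the top-level SConstruct, so the same path works from every subdirectory. A sketch, assuming the layout described in the question:

# SConstruct (sketch)
env = Environment(CPPPATH=['#/packages'])
# SCons derives the actual build order from the dependency graph
SConscript('packages/core/SConscript', exports='env')
SConscript('packages/numerics/SConscript', exports='env')
SConscript('testing/SConscript', exports='env')

With that in place the sources can say #include <core/core_config.h> instead of fragile relative paths.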
I am building a custom binary of NodeJS from the latest code base for an embedded system. There are a couple of modules that I would like to ship as standard with the binary - or even a custom script that is compiled into the binary and can be invoked through a command-line option.
So two questions:
1) I vaguely remember that node allowed including custom modules at build time, but I went through the latest 5.9.0 configure script and I can't see anything related - or maybe I am missing it.
2) Did someone already do something similar? If yes, what were the best practices you came up with?
I am not looking for something like Electron or other binary bundlers, but for actually building modules into the node binary.
Thanks,
Andy
So I guess I figured it out much faster than I thought.
For anyone else: you can add any NPM module to it by just adding the actual source files to the node.gyp configuration file.
Compile it and run the custom binary. It's all in there now.
> var cmu = require("cmu");
undefined
> cmu
{ version: [Function] }
> cmu.version()
'It worked!'
>
After studying this for quite a while, I have to say that flyandi's answer is not quite true. You cannot add any NPM module just by adding it to node.gyp.
You can only add pure JavaScript modules this way. Embedding a C++ module (I deliberately don't use the word "native", because that term is quite ambiguous in nodeJS terminology - just look at the sources) takes considerably more work.
To summarize this:
To embed a JS module in your custom nodejs, just add it to the library_files section of the node.gyp file. Note that it should be placed within the lib folder, otherwise you'll have trouble requiring the module. That's because the name/path listed in node.gyp's library_files is used to encode the id of the module in the node_javascript.cc intermediate file, which is then consulted when searching for built-in modules.
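As a concrete sketch (lib/mymodule.js is a hypothetical path), the change amounts to one extra entry:

# node.gyp (excerpt)
'library_files': [
  # ... existing core modules ...
  'lib/mymodule.js',
],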
Embedding a native module is much more difficult. The best way I have found so far is to build the module as a static library instead of a dynamic one, which, for a cmake(-js) based module, you can achieve by changing the SHARED parameter to STATIC like this:
add_library(${PROJECT_NAME} STATIC ${SRC})
instead of:
add_library(${PROJECT_NAME} SHARED ${SRC})
And also changing the suffix:
set_target_properties(
    ${PROJECT_NAME}
    PROPERTIES
    PREFIX ""
    SUFFIX ".lib")  # instead of ".node"
Then you can link it from node.gyp by adding this section:
'link_settings': {
    'libraries': [
        "path/to/my/library.lib",
        # ... add other static dependencies
    ],
},
(How to do this with a node-gyp based project should be quite easy to google.)
This allows you to build the module, but you won't be able to require it, because the require() function in node can only load built-in JS modules, external JS modules, or external dynamic node modules. But now we have a built-in C++ module. Well, a lot of node's integrated modules are C++, but they always have a JS wrapper in /lib, and those wrappers use process.binding() to load the C++ module. That is, process.binding() is sort of a require() function for integrated C++ modules.
That said, we need to call process.binding() instead of require() to load our integrated module. To be able to do that, we first have to make our module "built-in".
We can do that by replacing
NODE_MODULE(mymodule, InitAll)
in the module definition with
NODE_BUILTIN_MODULE_CONTEXT_AWARE(mymodule, InitAll)
which will register it as an internal module; from then on we can process.binding() it.
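With that in place, a wrapper in /lib can be as small as this sketch (mymodule being the hypothetical name used throughout):

// lib/mymodule.js -- thin JS wrapper around the integrated C++ module
'use strict';
module.exports = process.binding('mymodule');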
Note that NODE_BUILTIN_MODULE_CONTEXT_AWARE is not defined in node.h the way NODE_MODULE is, but in node_internals.h, so you either have to include that one or copy the macro definition over to your cpp file (the first is of course better, because the nodejs API tends to change quite often...).
The last thing we need to do is list our newly integrated module among the others so that node knows to initialize them (that is, to include them in the list of modules searched when process.binding() is called). In node_internals.h there is this macro:
#define NODE_BUILTIN_STANDARD_MODULES(V) \
V(async_wrap) \
V(buffer) \
V(cares_wrap) \
...
So just add your module to the list the same way as the others: V(mymodule).
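The edited macro would then end up looking roughly like this:

#define NODE_BUILTIN_STANDARD_MODULES(V) \
  V(async_wrap) \
  V(buffer) \
  V(cares_wrap) \
  ... \
  V(mymodule)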
I might have forgotten some step, so ask in the comments if you think I have missed something.
If you wonder why anyone would even want to do this... you can come up with several reasons, but here's the most important one to me: those packagers used to pack your project into a single executable (like pkg or nexe) work only with node-gyp based modules. If you, like me, need to use a cmake based module, the final executable won't work...
Suppose there are three projects A, B and C. The modules in C will be shared by A and B. Because there is no equivalent of Java's CLASSPATH, do we need to use absolute paths when importing modules from C?
Any suggestion is appreciated !
I urge you to cabalise your projects.
But if for some reason you don't want to do this, then the nearest thing to the Java CLASSPATH is the -i switch to ghc. Note that the current directory needs to appear explicitly in the list.
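For example, with hypothetical paths, compiling a module in project A against the source tree of C:

ghc --make -i.:../C/src Main.hs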
The canonical way is to choose carefully where they should sit in the module hierarchy, make each project into a fully featured Cabal package, and then install them locally so that their modules become part of your compiler's local namespace.
This way the modules of each are available to any source code you're writing.
If (for example) you use the leksah IDE it'll do a lot of the work for you.
If by "projects" you mean Cabal packages, the standard thing to do would be to export whatever modules are necessary from C and then have A and B depend on the C and import them. It's not a good idea to have source files in a package directly depend on files that are aren't part of that package.