What's the canonical way to share modules between projects? - haskell

Suppose there are three projects A, B, and C, where the modules in C are shared by A and B. Since Haskell has no equivalent of Java's CLASSPATH, do we need to use absolute paths when importing modules from C?
Any suggestions are appreciated!

I urge you to cabalise your projects.
But if for some reason you don't want to do this, then the nearest thing to the Java CLASSPATH is the -i switch to GHC. Note that the current directory needs to appear explicitly in the list.
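For example, to compile A's Main.hs against C's modules (the ../C/src path here is hypothetical):

ghc -i. -i../C/src Main.hs

The -i. keeps the current directory explicitly on the search path.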

The canonical way is to choose carefully where the modules should sit in the module hierarchy, make each project into a fully featured Cabal package, and then install the packages locally so that they become part of your compiler's package namespace.
This way the modules of each are available to any source code you're writing.
If (for example) you use the Leksah IDE, it'll do a lot of the work for you.
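For example (a sketch, assuming each project already has a .cabal file):

$ cd C
$ cabal install   # builds C and registers it in the local package database

After that, A and B can simply list C in their build-depends.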

If by "projects" you mean Cabal packages, the standard thing to do would be to export whatever modules are necessary from C and then have A and B depend on the C and import them. It's not a good idea to have source files in a package directly depend on files that are aren't part of that package.

Related

How to install the "FastAD" C++ library in Rcpp?

I am trying to install a C++ library called FastAD (https://github.com/JamesYang007/FastAD#user-guide) in Rcpp, but the installation instructions are generic (not specifically for Rcpp).
It would be greatly appreciated if someone could give me some guidance on how to install the library and be able to #include its files.
FastAD is a header-only library which only depends on Eigen3. That makes for a pretty straightforward application for Rcpp and friends.
First, rely on the RcppEigen.package.skeleton() function to create the barebones RcppEigen-using package.
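In R, that first step is a one-liner (using the package name that appears in the example below):

library(RcppEigen)
RcppEigen.package.skeleton("RcppFastAD")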
Second, we copy the FastAD library into inst/include. We add a PKG_CPPFLAGS variable in src/Makevars to let R know to source it from there. That is all the compiler needs for a header-only library. (Edit: We also set CXX_STD=CXX17, unless one has a new enough compiler or R version (currently: r-devel) that already defaults to C++17.)
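A minimal src/Makevars along those lines might look like this (a sketch, assuming the FastAD headers were copied to inst/include):

PKG_CPPFLAGS = -I../inst/include
CXX_STD = CXX17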
Third, we create a simple example in src/ based on an example in FastAD. We picked the Black-Scholes example.
Fourth, minor cleanups like removing the hello* stanza files.
That is pretty much it. In its embryonic form the package is now here on GitHub. An example is
> library(RcppFastAD)
> blackScholesExamples()
56.5136
0.773818
51.4109
-0.226182
>

Node.js/npm - dynamic service discovery in packages

I was wondering whether Node.js/npm includes any kind of extension mechanism comparable to Python setuptools' "entry points".
So, in short:
is there any way I can do dynamic discovery of services provided by other packages using npm?
if not, what would be the best way to implement something similar? Specifying the extension name in the main module's configuration file seems to be the logical solution, but I wonder whether something "automatic" can be done.
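For instance, something like this (field and package names are hypothetical):

{
  "name": "my-app",
  "extensions": ["my-app-foo", "my-app-bar"]
}

// at startup:
require('./package.json').extensions.forEach(function (name) {
  var ext = require(name); // each extension registers its services here
});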
I'm not aware of any built-in mechanism to do this.
One viable way of doing it yourself:
I made a small tool (Jumpstart) to quickly create project scaffolding from templates with placeholders, and I used a kind of plugin mechanism for that. It basically comes down to this: the Jumpstart script searches for modules named jumpstart-* "adjacent" to where the module itself is installed, so it works for both local and global installations. If installed locally, it searches the other local modules (on the same level); if global, it searches the other global modules.
Note that here, "search" comes down to a simple fs.exists check to see if there's a Jumpstart template module with a particular name installed. However, nothing would prevent actually getting a full list of all installed packages matching the jumpstart-* pattern and loading them all at once. I could also search up the entire directory tree for node_modules directories and do the same. There's no point in doing that for this particular program, however.
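A sketch of that "full list" variant in Node (the jumpstart- prefix is from the tool above; the exact directory layout is an assumption):

var fs = require('fs');
var path = require('path');

// Look for sibling modules named jumpstart-* next to this module.
function findPlugins(prefix) {
  var modulesDir = path.resolve(__dirname, '..'); // the shared node_modules
  return fs.readdirSync(modulesDir).filter(function (name) {
    return name.indexOf(prefix) === 0;
  }).map(function (name) {
    return require(path.join(modulesDir, name));
  });
}

var templates = findPlugins('jumpstart-');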
See https://npmjs.org/package/jumpstart for docs.
The only limitation of this technique is that all modules must be named in a consistent fashion: start with some string, end with some string, something like that. Any rogue packages polluting the namespace could be detected by doing further checks on a package's contents: What files does it contain? What kind of object does its main module export? Etc.
Brunch also uses a plugin mechanism. This one actually deals with file extensions, so it is more relevant: https://github.com/brunch/brunch/wiki/Plugins . See, for example, the source of the CoffeeScript plugin: https://github.com/brunch/coffee-script-brunch/blob/master/src/index.coffee .

make one static library from whole project with cmake

A C++ project, say foo, is maintained with CMake.
One wants to create a single library libfoo.a (containing all the classes/methods/functions built from the whole source tree) so that programs can be linked against it with -lfoo.
OK, let's consider a toy example, and the problem will be clear. The directory foo (the root of the project) contains the directories a and b. Two CMakeLists.txt files are created:
# a/CMakeLists.txt
add_library(A <a_sources>)
# b/CMakeLists.txt
add_library(B <b_sources>)
And one CMakeLists.txt for root directory:
add_subdirectory(a)
add_subdirectory(b)
add_library(foo <foo_sources>)
target_link_libraries(foo A B)
That was a surprise for me: after building, libfoo.a contains only the methods from foo_sources; a_sources and b_sources are excluded.
That is OK when the executables are built within the same project: while creating the executables, CMake "guesses" that a and b must be linked in whenever something links against foo.
But when an executable is created "outside" the project, to use library foo one must link with -lfoo -la -lb. Now imagine a project with lots of subdirectories; how do you deal with that? So the question is: "how does one create a single library, aggregating the methods of the whole project, using CMake?"
Googling led me to the OBJECT library feature (introduced in CMake 2.8.8). A nice example of its use is shown here. Now the problem above can be solved like this:
# a/CMakeLists.txt
add_library(A OBJECT <a_sources>)
# b/CMakeLists.txt
add_library(B OBJECT <b_sources>)
# foo/CMakeLists.txt
add_subdirectory(a)
add_subdirectory(b)
add_library(foo <foo_sources> $<TARGET_OBJECTS:A> $<TARGET_OBJECTS:B>)
The problem seems to be solved; unfortunately, not quite.
If the dependency chain is longer than 2 (for example, foo depends on A, which depends on B), the problem still remains.
That is because,
Object libraries may contain only sources (and headers) that compile to object files.
and
Object libraries cannot be imported, exported, installed, or linked.
(quotes are taken from the same link)
I've tried several combinations of target_link_libraries(), add_library(), and add_library(... OBJECT ...), trying to link A and B into foo, without success (errors during the cmake step).
I must be missing something simple; please help, thank you!
I am not sure whether it matters: the project is maintained on Linux.
I think you're getting tangled up in the term "depends on". If you're building a library named foo and it has two parts, A and B, it doesn't matter whether A depends on B; the library should contain both. The CMake code you've shown will build foo properly.
Yep, I second Pete Becker's answer. But it should also be said that $<TARGET_OBJECTS:A> and $<TARGET_OBJECTS:B> are not actually libraries at all, but rather CMake-internal lists of object files. There are no dependencies between compilations of object files (except for auto-generated sources), so they can be compiled in any order and in parallel.
I guess a more correct term for your intention is gathering several TARGET_OBJECTS together under a single object library. It's a real pity that you can't write add_library(B OBJECT b.cpp $<TARGET_OBJECTS:A>). But you can always implement this yourself:
add_library(A OBJECT a.cpp)
set(A_OBJECTS $<TARGET_OBJECTS:A>)
add_library(B OBJECT b.cpp)
set(B_OBJECTS $<TARGET_OBJECTS:B> ${A_OBJECTS})
add_library(foo ${B_OBJECTS})
I.e., just create special *_OBJECTS variables and use them whenever you want to include those object files in a library, an executable, or another object library (via the *_OBJECTS flavor).

On GNU/Linux systems, where should I load application data from?

In this instance I'm using C with autoconf, but the question applies elsewhere.
I have a glade xml file that is needed at runtime, and I have to tell the application where it is. I'm using autoconf to define a variable in my code that points to the "specified prefix directory"/app-name/glade. But that only begins to work once the application is installed. What if I want to run the program before that point? Is there a standard way to determine what paths should be checked for application data?
Thanks
Thanks for the responses. To clarify, I don't need to know where the app data is installed (e.g. by searching in /usr, /usr/local, etc.); the configure script does that. The problem was more about determining whether the app has been installed yet. I guess I'll just check the install location first, and if the file isn't there, fall back to "./src/foo.glade".
I don't think there's any standard way to locate such data.
Personally, I'd keep a list of paths and check whether the file can be found in any of them; the list should contain DATADIR+APPNAME as defined by autoconf, and CURRENTDIRECTORY+POSSIBLE_PREFIX, where the prefix might be some folder from your build root.
But in any case, don't forget to use those defines from autoconf for your data files; they make your software easier to package (as deb/rpm).
There is no prescription for how this should be done in general, but Debian packagers usually install the application data somewhere in /usr/share, /usr/lib, et cetera. They may also patch the software to make it read from the appropriate locations. See the Debian policy for more information.
I can however say a few words how I do it. First, I don't expect to find the file in a single directory; I first create a list of directories that I iterate through in my wrapper around fopen(). This is the order in which I believe the file reading should be done:
current directory (obviously)
~/.program-name
$(datadir)/program-name
$(datadir) is a variable you can use in Makefile.am. Example:
AM_CPPFLAGS = $(ASSERT_FLAGS) $(DEBUG_FLAGS) $(SDLGFX_FLAGS) $(OPENGL_FLAGS) -DDESTDIRS=\"$(prefix):$(datadir)/:$(datadir)/program-name/\"
This of course depends on your output from configure and what your configure.ac looks like.
So, just make a wrapper that will iterate through the locations and get the data from those dirs. Something like a PATH variable, except you implement the iteration.
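A minimal sketch of such a wrapper in C (assuming DESTDIRS expands to a colon-separated string literal, as in the Makefile.am line above; the home-directory entry is omitted for brevity):

#include <stdio.h>
#include <string.h>

/* Try each candidate directory in turn; return the first stream that opens. */
FILE *find_and_open(const char *name, const char *mode)
{
    char dirs[] = ".:" DESTDIRS;  /* current directory first, then the rest */
    char path[4096];
    char *dir;

    for (dir = strtok(dirs, ":"); dir != NULL; dir = strtok(NULL, ":")) {
        snprintf(path, sizeof path, "%s/%s", dir, name);
        FILE *fp = fopen(path, mode);
        if (fp)
            return fp;
    }
    return NULL;  /* not found in any candidate directory */
}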
After writing this post, I noticed I need to clean up our implementation in this project, but it can serve as a nice start. Take a look at our Makefile.am for using $(datadir) and our util.cpp and util.h for a simple wrapper (yatc_fopen()). We also have yatc_find_file() in case some third-party library is doing the fopen()ing, such as SDL_image or libxml2.
If the program is installed globally:
/usr/share/app-name/glade.xml
If you want the program to work without being installed (i.e. just extract a tarball), put it in the program's directory.
I don't think there is a standard way of placing files. I build it into the program, and I don't limit it to one location.
It depends on how much customising of the config file is going to be required.
I start by constructing a list of default directories and work through them until I find an instance of glade.xml, then stop looking; if I don't find it, I exit with an error. Good candidates for the default list are /etc, /usr/share/app-name, and /usr/local/etc.
If the file is designed to be customizable, then before looking through the default directories I work through a list of user files and paths. If none of the user versions is found, I then look in the list of default directories. Good candidates for the user config files are ~/.glade.xml, ~/.app-name/glade.xml, or ~/.app-name/.glade.xml.

Link libraries with dependencies in Visual C++ without getting LNK4006

I have a set of statically-compiled libraries, with fairly deep-running dependencies between the libraries. For example, the executable X uses libraries A and B, A uses library C, and B uses libraries C and D:
X -> A
A -> C
X -> B
B -> C
B -> D
When I link X with A and B, I don't want to get errors if C and D were not also added to the list of libraries—the fact that A and B use these libraries internally is an implementation detail that X should not need to know about. Also, when new dependencies are added anywhere in the dependency tree, the project file of any program that uses A or B would have to be reconfigured. For a deep dependency tree, the list of required libraries can become really long and hard to maintain.
So, I am using the "Additional Dependencies" setting of the Librarian section in the A project, adding C.lib. And in the same section of B's project, I add C.lib and D.lib. The effect of this is that the librarian bundles C.lib into A.lib, and C.lib and D.lib into B.lib.
When I link X, however, both A.lib and B.lib contain their own copy of C.lib. This leads to tons of warnings along the lines of
A.lib(c.obj) : warning LNK4006: "symbol" (_symbol) already defined in B.lib(c.obj); second definition ignored.
How can I accomplish this without getting warnings? Is there a way to simply disable the warning, or is there a better way?
EDIT: I have seen more than one answer suggesting that, for the lack of a better alternative, I simply disable the warning. Well, this is part of the problem: I don't even know how to disable it!
As far as I know, you can't disable linker warnings.
However, you can ignore some of them using the linker's command-line parameters, e.g. /ignore:4006.
Put it in your project properties under Linker -> Command Line (I don't remember the exact location).
Also read this:
Link /ignore
MSDN Forum - hiding LNK warnings
Wacek
Update: If you can build all the involved projects in a single solution, try this:
Put all the projects in one .sln.
Remove all references to static libraries from the projects' Linker or Librarian properties.
There is a "Project Dependencies..." option in the context menu for each project in Solution Explorer. Use it to define the dependencies between projects.
It should work. This doesn't invalidate anything I said before; the basic model of building C/C++ programs stays the same. VS (at least 2005 and newer) is simply smart enough to add all the needed static libraries to the linker command line. You can see it in the project properties.
Of course this method won't help if you need to use already-compiled static libraries. Then you need to add them all to the exe or dll project that directly or indirectly uses them.
I don't think you can do anything about that. You should remove the references to other static libs from the static lib projects, and add all the needed static lib projects as dependencies of the exe or dll projects. You will just have to live with the fact that any project that includes A.lib or B.lib also needs to include C.lib.
As an alternative you can turn your libraries into dlls which provide a richer model.
Statically compiled libraries simply aren't real libraries with dependency information, etc., like DLLs. Notice how, when you build them, you don't really need to provide the libraries they depend on; headers are all that's needed. You can't even really say that static libraries depend on something.
A static library is just an archive of compiled, not-yet-linked object code. It's not a consistent whole. Each object file is compiled separately and remains a separate entity inside the library. Linking happens when you build the exe or dll; that's when you need to provide all the object code, and that's when all the symbol and dependency resolution happens.
If you add other static libraries to a static library's dependencies, the librarian will simply copy all the code together. Then, when building the exe, the linker will give you lots of warnings about duplicate symbols. You might be able to suppress those warnings (I don't know how), but be careful: it may conceal real problems, like genuinely duplicate symbols with differing definitions. And if you have static data defined in the libraries, it probably won't work anyway.
Microsoft (R) Incremental Linker Version 9.00.x (link.exe) accepts the argument /ignore:4006.
You could create one library which contains A, B, C & D and then link X against that.
Since it's a library, only object modules which are actually referenced will get linked into the final executable.
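With already-built libraries, that combined library can be produced with the librarian tool; a sketch using the names from the question:

lib.exe /OUT:foo.lib A.lib B.lib C.lib D.lib

X then links against foo.lib alone.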
Note that one way of getting this warning is to define a member function in a header without the inline keyword:
// Foo.h
class Foo
{
void someFunction();
};
void Foo::someFunction() // Warning! - should be "inline void Foo::someFunction()"
{
// do stuff
}
The problem is that you are not localizing library C's symbols, so you have an ODR[1] violation when you link in A and B. You need a way to make these symbols private. By default, all symbols are exported. One way to do this is to have a special linker definition file for both A and B that explicitly lists which symbols need to be exported.
[1] ODR = One Definition Rule.
I think the best course of action here will be to ignore/disable the linker warning (LNK4006), since C.lib needs to be part of both A.lib and B.lib, and A.lib does not need to know that B.lib itself uses C.lib.
This may not fix your link error, but it might help with your dependency tree issue.
What I do is just use a #pragma to include a lib in the .cpp file that needs it. For example:
#pragma comment(lib, "wsock32")
Like I said, I'm not sure it would keep the symbols in that object file; I'd have to whip up an example to try it out.
Poor flodin seems frustrated that nobody will explain how to disable the linker warnings. Well, I've had a similar problem, and for years I have simply lived with the fact that several hundred warnings were displayed. Now, however, thanks to the info from Link /ignore, I figured out how to disable the linker warnings.
I'm using Visual Studio 2008. In Project -> Settings -> Configuration Properties -> Librarian -> Command Line -> Additional Options, I added "/ignore:4006" (without the quotes). Now my warnings are gone!
