Can't get V8 monolith to generate shared libraries

I've been trying to use -v8_monolith in my makefiles, but so far it's not working.
I got the V8 code from Google's depot_tools, then followed every tutorial on the web to try to compile the shared libs for V8. I'm assuming that ninja automatically puts the libs into the arch lib folder, since no tutorial says to do anything with the files.
As of right now, I get no errors from ninja.
Maybe I have to change something in args.gn to push into my libs folder. Possibly it's trying to push to the wrong location.
This is what it looks like:
is_debug = false
target_cpu = "x64"
use_custom_libcxx = false
v8_monolithic = true
v8_use_external_startup_data = false
is_component_build = false
v8_static_library = true
Here is the entire process from the terminal:
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
export PATH=depot_tools:$PATH
mkdir v8
cd v8
fetch v8
gclient sync
tools/dev/v8gen.py x64.release
# then this is the point where I manually go into args.gn and edit it
ninja -C out.gn/x64.release
I've also tried getting the v8 package from the AUR, but it does not compile correctly.

compile the shared libs for V8
In that case, you need is_component_build = true. You'll then have to drop v8_monolithic, because that forces a static (i.e. non-shared) library. I don't think v8_static_library matters; at least I've never used it.
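For reference, a minimal args.gn sketch for such a component (shared) build, assuming the same x64 release setup as in the question:
is_debug = false
target_cpu = "x64"
use_custom_libcxx = false
is_component_build = true
v8_use_external_startup_data = false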
For the record, we (V8 team) tend to recommend static linking, mostly because of the rate of change to V8's API: any time you update the shared library, you'll have to recompile all programs using it anyway, so there's not a lot of benefit to be had from it being a shared library (but some room for accidental breakage, which a statically linked build avoids).
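For comparison, with the v8_monolithic = true config from the question, the build produces a single static library instead (typically obj/libv8_monolith.a under the output directory), and a link line would look something like this sketch (paths are assumptions for illustration):
g++ main.cc -I/path/to/v8/include -L/path/to/v8/out.gn/x64.release/obj -lv8_monolith -pthread -o main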
puts the libs into the arch lib folder
I don't know what you mean by "the arch lib folder"; libv8.so will be in out.gn/x64.release/ (or whatever directory you pass to ninja -C <dir>). If you want it elsewhere, you'll have to put it elsewhere. Building V8 doesn't touch the rest of your system (there's no make install-like target).
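(A quick way to check, assuming the build directory above: ls out.gn/x64.release/*.so)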
trying to use -v8_monolith in my makefiles
Once everything else is correct, that'll be -lv8 -lv8_libbase -lv8_libplatform, and the corresponding library files will be called libv8.so, libv8_libbase.so, and libv8_libplatform.so.
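For illustration, a Makefile-style sketch under those assumptions (V8_DIR is a placeholder for your checkout; note that at run time the dynamic loader must also be able to find the .so files, e.g. via LD_LIBRARY_PATH or an rpath):
V8_DIR := /path/to/v8
main: main.cc
	$(CXX) main.cc -I$(V8_DIR)/include -L$(V8_DIR)/out.gn/x64.release -lv8 -lv8_libbase -lv8_libplatform -o main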

Related

rust libraries with cargo (rlib)

I am trying to create a library in Rust to be used with Rust executables. In C you can just create your .a or .so (or .lib or .dll on Windows) and use tools like CMake to link everything; however, Rust does not seem to have this kind of infrastructure.
It is possible to make an executable with cargo (cargo new) and to create a library by adding the --lib flag (cargo new --lib), but then how would you use the resulting .rlib file (from the library cargo project)? I managed to link the .rlib file as follows:
rustc main.rs --extern foo=libfoo.rlib
and that works beautifully. However, I am not interested in writing a thousand rustc commands to build the final executable (which depends on the .rlib) when cargo can do that for you. I tried working with a build script (which works perfectly for any C library, static or dynamic), but if I try it with the .rlib file, cargo says that it cannot find "foo" (-lfoo). The build script:
fn main() {
    println!("cargo:rustc-link-search=.");
    println!("cargo:rustc-link-lib=foo");
}
I tried pointing the search path at different directories (while also moving the .rlib file to the matching directory), and also tried different combinations of libfoo, libfoo.rlib, ... (note that for the C libraries, foo is sufficient).
So my question really is: how can you create a Rust library for private use, and how do you use it with a Rust executable in a proper way, avoiding manual rustc commands? Are there tools that do this? Am I missing something in the build script? Perhaps there exists something like CMake for Rust?
I suppose it is possible to just create a C interface over the Rust code and compile it as another C project, since that does work with cargo.
I do NOT want to publish the code to crates.io as I want this library strictly for private use.
Cargo does not support using pre-compiled .rlibs. Cargo is designed to compile programs fully from source (not counting native libraries).
How can you create a rust library for private use … I do NOT want to publish the code to crates.io as I want this library strictly for private use.
To use a private library, you write a dependency using a path or git dependency (or use a private package registry, but that's more work to set up).
[dependencies]
my-lib-a = { path = "../my-lib-a/" }
my-lib-b = { git = "https://my-git-host.example/my-lib-b", branch = "stable" }
Your private library is now compiled exactly like a “public” one.
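For example, a sketch of how the executable side then looks (my_lib_a and its answer function are hypothetical names for illustration):
// my-app/src/main.rs
fn main() {
    // a path dependency is used like any other crate
    println!("{}", my_lib_a::answer());
}
// ../my-lib-a/src/lib.rs
pub fn answer() -> i32 {
    42
}
Running cargo build in the executable's directory compiles the library from source as part of the build; no .rlib files are handled manually.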

"undefined reference" error from static lib of external parquet project

I want to link the external 'parquet' project ( https://github.com/apache/arrow/tree/master/cpp ) into my current project on Linux.
For this purpose, I ran cmake for parquet with these parameters:
cd build_parquet
cmake -DCMAKE_BUILD_TYPE=Release -DARROW_PARQUET=ON \
-DBoost_NO_BOOST_CMAKE=TRUE -DBoost_NO_SYSTEM_PATHS=TRUE -DBOOST_ROOT=${BOOST_BUILD_DIR}/include -DBOOST_LIBRARYDIR=${BOOST_BUILD_DIR}/lib/boost -DARROW_BOOST_USE_SHARED=OFF -DBOOST_INCLUDEDIR=${BOOST_BUILD_DIR}/include/boost ..
cmake --build . --config Release
# There are a lot of dependencies besides boost, but only boost needs to be installed on the system, since the others can be downloaded and installed by the cmake script
The project compiled successfully. I got an executable which launches, plus the generated static libs libarrow.a and libparquet.a and the shared libraries libarrow.so and libparquet.so.
In my main project I want to use these libraries, and I use these commands in cmake to find them:
find_path(PARQUET_INCLUDE_DIR NAMES arrow/api.h PATHS ${PARQUET_DIR}/src)
find_library(PARQUET_LIBRARY_RELEASE NAMES parquet.a
             PATHS build_parquet/release/Release/)
find_library(ARROW_LIBRARY_RELEASE NAMES arrow.a
             PATHS build_parquet/release/Release/)
set(PARQUET_LIBRARIES_RELEASE ${PARQUET_LIBRARY_RELEASE} ${ARROW_LIBRARY_RELEASE})
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Parquet DEFAULT_MSG PARQUET_INCLUDE_DIR
                                  ${PARQUET_LIBRARIES_RELEASE})
That works okay; the libraries and includes are found.
Then I link these libraries into my project:
target_link_libraries(${myExe} ${PARQUET_LIBRARIES_RELEASE} ${mySomeOtherLibraries})
After this I get an enormous number of linker errors, such as:
libparquet.a(column_writer.cc.o): In function `apache::thrift::transport::TMemoryBuffer::~TMemoryBuffer()':
column_writer.cc:(.text._ZN6apache6thrift9transport13TMemoryBufferD0Ev[_ZN6apache6thrift9transport13TMemoryBufferD5Ev]+0x3): undefined reference to `vtable for apache::thrift::transport::TMemoryBuffer'
.....
So that's what I don't understand: why did the lib compile fine in the parquet project itself, but have a lot of unresolved symbols when I link it in my own project? Moreover, I compiled the project for Windows, and when I did the same things, but with arrow.lib and parquet.lib (instead of libparquet.a and libarrow.a), things worked fine! I only needed to put arrow.dll and parquet.dll next to the executables to run the project. But on Linux I've been banging my head against this.
So why doesn't it work, and what should I do to finally link the project with the library?
Update
I found the problem: I had to link the libraries by adding the .so files (not only the .a files), like this:
find_library(PARQUET_LIBRARY_RELEASE NAMES parquet.so parquet.a
             PATHS build_parquet/release/Release/)
find_library(ARROW_LIBRARY_RELEASE NAMES arrow.so arrow.a
             PATHS build_parquet/release/Release/)
set(PARQUET_LIBRARIES_RELEASE ${PARQUET_LIBRARY_RELEASE} ${ARROW_LIBRARY_RELEASE})
The project now builds. So the question is: why do I need to give the .so files to the linker (on Windows the static .lib alone is enough)? Is this always the case when building a project on Linux? Is the order of linkage important (.so files first and .a files next)?
As Uwe wrote in a comment, the https://github.com/apache/parquet-cpp repository is deprecated, and the Parquet C++ library is now developed as part of the Apache Arrow C++ codebase at https://github.com/apache/arrow/tree/master/cpp. Can you try building based on that, and if you have trouble, post on the dev@arrow.apache.org mailing list?
You succeeded in building the project when linking with the shared (.so) libraries instead of the static (.a) ones.
(The find_library command actually looks for a single library whose name is listed in the NAMES option. In your case it found the .so library because its name comes before the .a one.)
Actually, both the shared and the static parquet libraries contain the same set of symbols, and both sets are insufficient for linking. The difference is that the shared library contains information about where to find the remaining symbols (in the thrift library, in your case), but the static library doesn't.
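(You can inspect this recorded information with, for example, readelf -d libparquet.so | grep NEEDED, which lists the shared libraries the .so declares it depends on.)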
To link correctly with the static libraries, you need to list the dependent libraries manually, as sketched below.
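For example, a sketch of what the manual list might look like (the exact set and names of the transitive dependencies depend on your build options; thrift here is an assumption standing in for however the Thrift library is named on your system):
target_link_libraries(${myExe}
    ${PARQUET_LIBRARY_RELEASE}   # libparquet.a
    ${ARROW_LIBRARY_RELEASE}     # libarrow.a
    thrift                       # would resolve the TMemoryBuffer references above
    ${mySomeOtherLibraries})
Order also matters with static libraries: each library must come after the objects and libraries that reference its symbols.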
On Windows, a .lib file may be either a static library or an import file for a shared (.dll) library. It seems that you linked against the dynamic one (it has no lib prefix), which succeeded just as on Linux.

What is the preferred way to write quick Haskell test programs that depend on Stack libraries in local directories?

I have a Haskell library that I am developing using Stack. As I am developing the library, I like to write small test/experimentation programs that use the library. I keep a collection of these test programs for myself in a directory locally. These test modules are very quick and informal, and not appropriate to include as unit tests in the committed library code. Typically, most of them aren't even maintained and won't compile against the latest version of the library, but I keep them around in case I want to update them later. When I'm working on a test program, I want it to build against my working copy of the library, with any changes that I've made to the library locally.
How should I set up my Stack build environment for this situation? Here are some options I've tried, and the problems with each option.
Option 1: Two Cabal packages, one Stack configuration. The stack.yaml file lists both packages and defines the build environment for both at once.
Problem: The stack.yaml file needs to be included as part of the committed library source code, so that other developers can build the library from source reproducibly. I don't want the public stack.yaml file for my library to include build information for my local test projects.
Problem: As far as I know, to make this work I need to have a .cabal file that lists all the executables and modules for my test programs. This is annoying to update whenever I want to throw together a quick experimental script, and will fail to build any of the test programs if I have even a single module that doesn't compile. I can't have a .cabal file with no sections, because Cabal gives "No executables, libraries, tests, or benchmarks found. Nothing to do.", and because this offers nowhere to list build-depends.
Option 2: Create a Cabal sandbox for the test programs. Use cabal sandbox add-source to add the local library as a package. See also this answer.
Problem: Using Cabal sandboxes instead of Stack reintroduces a lot of the dependency problems that Stack is supposed to fix, such as using the system-global GHC instead of the GHC defined by the resolver.
Option 3: Have a separate stack.yaml for the test programs. Add the library under packages as location: 'C:\Path\To\Local\Library' and set extra-dep: true for that dependency. (See here for more info on this feature.) Don't put any other Cabal packages under packages in the stack.yaml for the test programs. Use stack runghc to invoke test programs within the scope of their stack.yaml.
Problem: I just can't get this one to work. Running stack build inside the test program directory gives "Error parsing targets: The project contains no local packages (packages not marked with 'extra-dep')". Running stack runghc acts as if no dependencies are present at all. I don't want to add a Cabal package for the test programs because this has the same problem as option 1 with needing to construct an explicit .cabal file describing the modules to build.
Problem: Stack build configuration info that I want to be identical between the library and the test programs has to be copied manually. For example, if I change the resolver in my library's stack.yaml, I also need to change it in the stack.yaml for my test programs separately.
Option 4: Have a directory inside my working copy of the library that contains all of my test programs. Use stack runghc to invoke test programs in the context of the library.
Problem: I'd like the directory with my test programs to be outside of the directory containing my library source code and build configuration, so that I don't have to tell the version control for my library to ignore my test code, and can have my own local version control just for the test programs.
Problem: Only works with a single local library dependency. If my test programs need to depend on local working copies of two different libraries with their own stack.yaml files, I'm out of luck.
Option 5: Add a symbolic link inside my working copy of the library to a separate directory that contains all of my test programs. Navigate through the symlink and use stack runghc to invoke test programs in the context of the library.
Problem: Super awkward to use, especially since I'm on Windows and Windows has terrible symlink support.
Problem: Still need to tell my version control system to ignore the symlink.
Problem: Still only works with a single library dependency.
If only one local library is involved, I use option 4. You can put your tests outside the directory of your library, and either invoke stack from the directory of your library or use --stack-yaml path/to/library/stack.yaml.
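For example (a sketch, assuming the library lives in ../my-library and the test program is Test1.hs):
stack --stack-yaml ../my-library/stack.yaml runghc Test1.hs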
Otherwise, I use option 3, creating a separate stack project without setting extra-dep:
...
packages:
- 'path/to/package1'
- 'path/to/package2'
...
I can't think of a good workaround for the issue of configuration duplication. There would otherwise be conflicts if multiple packages specified different resolvers/package versions.
Edit: Actually a stub library works better, so edited to add.
I think the way to get option 3 to work is -- under your scratch program directory -- (1) add . under packages in stack.yaml alongside the location/extra-dep: true package:
packages:
- '.'
- location: ../mylib
  extra-dep: true
(2) create an executable clause in scratch.cabal that points to a stub main program (i.e., a "Hello World" program that compiles but need not do anything) which depends on your library:
executable main
  hs-source-dirs:   src
  main-is:          Stub.hs
  build-depends:    base
                  , mylib
  default-language: Haskell2010
or a library clause with no exposed modules, again that depends on your mylib library:
library
  hs-source-dirs:   src
  build-depends:    base >= 4.7 && < 5
                  , mylib
  default-language: Haskell2010
and (3) run stack build in the scratch directory. This should build and register mylib, and now stack runghc Prog1.hs should work fine for running programs that depend on mylib modules.
If you use the executable approach, the stub program is compiled as a side effect but otherwise ignored. If you use the library approach, it looks like the stub library isn't even built; and you then have the option of actually building a scratch library by adding some exposed modules of shared code for your test programs to use, if it's convenient, so the stub library might be best.
None of this solves the problem of keeping stack.yaml info like the resolver version in sync, but it seems to address all the problems you list in 1, 2, 4, and 5. In particular, it should work fine for test programs that depend on multiple local libraries you're developing.

Reusing custom makefile for static library with cmake

I guess this would be a generic question on including libraries with existing makefiles within cmake; but here's my context -
I'm trying to include scintilla in another CMake project, and I have the following problem:
On Linux, scintilla has a makefile in (say) the ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/gtk directory; if you run make in that directory (as usual), you get a ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/bin/scintilla.a file - which (I guess) is the static library.
Now, if I'd try to use cmake's ADD_LIBRARY, I'd have to manually specify the sources of scintilla within cmake - and I'd rather not mess with that, given I already have a makefile. So, I'd rather call the usual scintilla make - and then instruct CMAKE to somehow refer to the resulting scintilla.a. (I guess that this then would not ensure cross-platform compatibility - but note that currently cross-platform is not an issue for me; I'd just like to build scintilla as part of this project that already uses cmake, only within Linux)
So, I've tried a bit with this:
ADD_CUSTOM_COMMAND(
OUTPUT scintilla.a
COMMAND ${CMAKE_MAKE_PROGRAM}
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/gtk
COMMENT "Original scintilla makefile target" )
... but then, add_custom_command adds a "target with no output", so I'm trying several approaches to build upon that, all of which fail (errors given as comments):
ADD_CUSTOM_TARGET(scintilla STATIC DEPENDS scintilla.a) # Target "scintilla" of type UTILITY may not be linked into another target.
ADD_LIBRARY(scintilla STATIC DEPENDS scintilla.a) # Cannot find source file "DEPENDS".
ADD_LIBRARY(scintilla STATIC) # You have called ADD_LIBRARY for library scintilla without any source files.
ADD_DEPENDENCIES(scintilla scintilla.a)
I'm obviously quite a noob with cmake - so, is it possible at all to have cmake run a pre-existing makefile and "capture" its output library file, such that other components of the cmake project can link against it?
Many thanks for any answers,
Cheers!
EDIT: possible duplicate: CMake: how do I depend on output from a custom target? - Stack Overflow - however, there the breakage seems to be due to the need to specifically have a library that the rest of the cmake project would recognize...
Another related one: cmake - adding a custom command with the file name as a target - Stack Overflow; however, it specifically builds an executable from source files (which I wanted to avoid).
You could also use imported targets and a custom target like this:
# set the output destination
set(SCINTILLA_LIBRARY ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/gtk/scintilla.a)
# create a custom target called build_scintilla that is part of ALL
# and will run each time you type make
add_custom_target(build_scintilla ALL
COMMAND ${CMAKE_MAKE_PROGRAM}
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/gtk
COMMENT "Original scintilla makefile target")
# now create an imported static target
add_library(scintilla STATIC IMPORTED)
# Import target "scintilla" for configuration ""
set_property(TARGET scintilla APPEND PROPERTY IMPORTED_CONFIGURATIONS NOCONFIG)
set_target_properties(scintilla PROPERTIES
IMPORTED_LOCATION_NOCONFIG "${SCINTILLA_LIBRARY}")
# now you can use scintilla as if it were a regular cmake built target in your project
add_dependencies(scintilla build_scintilla)
add_executable(foo foo.c)
target_link_libraries(foo scintilla)
# note: this will only work on linux/unix platforms; also, it builds in the
# source tree, which is sort of bad style and keeps out-of-source builds
# from working.
OK, I think I have it, somewhat; basically, in the CMakeLists.txt that builds scintilla, I used only this:
ADD_CUSTOM_TARGET(
scintilla.a ALL
COMMAND ${CMAKE_MAKE_PROGRAM}
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/scintilla/gtk
COMMENT "Original scintilla makefile target" )
... and then, the slightly more complicated part was to find the correct cmake file elsewhere in the project, where ${PROJECT_NAME} is defined, so as to add a dependency:
ADD_DEPENDENCIES(${PROJECT_NAME} scintilla.a)
... and finally, the library needs to be linked.
Note that in the commands above, scintilla.a is merely a name/label/identifier/string (so it could be anything else, like scintilla--a or something); but for linking, the full path to the actual scintilla.a file is needed (which in this project ends up in a variable, ${SCINTILLA_LIBRARY}). In this project, the linking basically occurs through a form of
list(APPEND PROJ_LIBRARIES ${SCINTILLA_LIBRARY} )
... and I don't really know how cmake handles the actual linking afterwards (but it seems to work).
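For what it's worth, a list accumulated this way typically reaches the linker via something like the following (a sketch; the actual plumbing in this project may differ):
target_link_libraries(${PROJECT_NAME} ${PROJ_LIBRARIES})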
For consistency, I tried to use ${SCINTILLA_LIBRARY} instead of scintilla.a as the identifier in ADD_CUSTOM_TARGET, but got the error: "Target names may not contain a slash. Use ADD_CUSTOM_COMMAND to generate files". So this could probably be solved smarter/more correctly with ADD_CUSTOM_COMMAND - however, I read that it "defines a new command that can be executed during the build process. The outputs named should be listed as source files in the target for which they are to be generated."... And by now I'm totally confused as to what is a file, what is a label, and what is a target - so I think I'll leave it at this (and not fix it if it ain't broken :) )
Well, it'd still be nice to know a more correct way to do this eventually,
Cheers!

How to work around "scons: warning: Two different environments were specified for target"

Suppose I have an SConstruct file that looks like this:
env = Environment()
env.Program("a", ["a.c", "util.c"])
env.Program("b", ["b.c", "util.c"])
This build works properly with no SCons warning messages. However, if I modify this to specify different libraries for each Program build (the actual libraries are not relevant):
env.Program("a", ["a.c", "util.c"], LIBS="m")
env.Program("b", ["b.c", "util.c"], LIBS="c")
then I get the warning:
scons: warning: Two different environments were specified for target util.o,
but they appear to have the same action: $CC -o $TARGET -c $CFLAGS $CCFLAGS $_CCCOMCOM $SOURCES
This appears to be caused by the Program builder automatically creating a new environment for building the sources, even though it is just the LIBS variable that is different (and so only the link step needs to have a different environment). I can work around this by doing something like:
util = env.Object("util.c")
env.Program("a", ["a.c"] + util, LIBS="m")
env.Program("b", ["b.c"] + util, LIBS="c")
This uses a single Object builder to build util.c, then uses the precompiled object file in each Program build, thus avoiding the warning. However, this should not really be necessary. Is there a more elegant way to work around this problem? Or is this actually a bug in SCons that should be fixed?
Context: I have nearly 2000 C source files compiled into about 20 libraries and 120 executables with lots of shared sources. I created the SConstruct file from the previous proprietary build system using a conversion script I wrote. There are about 450 "Two different environments" warning messages produced by SCons for a full build using my current SConstruct.
I found a workaround that doesn't involve creating extra variables to hold the object file nodes:
env.Program("a", ["a.c", env.Object("util.c")], LIBS="m")
env.Program("b", ["b.c", env.Object("util.c")], LIBS="c")
This isolates the build of util.c within a single environment. Although it is specified twice, once for each Program, SCons doesn't warn about this because it's the same source built with the same env object. Of course SCons only compiles the source once in this case.
You may use the Split function and a custom helper to simplify the build process for large projects:
def create_objs(SRCS, path=""):
    return [env.Object(path + src + ".cpp") for src in SRCS]
prg1 = Split("file_1 file_2 file_N")
prg2 = Split("file_2 file_5 file_8")
env.Program("a", create_objs(prg1), LIBS="x")
env.Program("b", create_objs(prg2), LIBS="y")
The object files are created only once, and they can be used in multiple builds. Hope this helps...
One issue I found in my code was that I was not using the target object path correctly. In other words, I had a variant dir directive, but instead of using the BUILDPATH I ended up using my original source code path, so SCons was finding the object generated in both the target BUILDPATH and the source path.
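For example, a variant-dir sketch (names are illustrative; build/ mirrors src/):
env = Environment()
VariantDir("build", "src", duplicate=0)
# refer to sources through the variant dir, not through src/ directly,
# so each object has a single, unambiguous target path under build/
env.Program("a", ["build/a.c", "build/util.c"])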
Creating a static library out of the first set of files and linking the library to the next set of files (which have some files in common with the first set) to create a target works as well.
env.StaticLibrary("a", ["a.c", "util.c"], LIBS="m")
env.Program("b", ["b.c", "util.c"], LIBS=["c", "a"])
