What's the purpose of the .bin directory within node_modules?
In another question, the answerer stated:
"it's where your binaries (executables) from your node modules are located."
So additionally can someone explain to me the following: What are binaries/executables?
Any help would be greatly appreciated!
Binary or executable files are files which have already been compiled for your specific computer architecture and, once installed, can be run directly on your computer. Common instruction set architectures are x86 and ARM, which most computer processors are based on. In contrast to binary files, source files contain the actual source code, and they need to be compiled before they can be installed.
As for the .bin directory: ./node_modules/.bin stores the executables of the node modules your project depends on. This lets your project just 'run' the libraries it needs, without you having to worry about compiling these files yourself. By compiling I mean transforming the source code into executable code (machine code) which can be understood by your computer's underlying processor.
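As a rough illustration (the package name here is just an example, assuming a project that has mocha installed locally):
# list the executables npm linked for your locally installed dependencies
ls ./node_modules/.bin
# run one of them directly, without needing a global install
./node_modules/.bin/mocha --version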
Hopefully that helps!
According to the npm documentation:
When in global mode, executables are linked into {prefix}/bin on Unix, or directly into {prefix} on Windows.
When in local mode, executables are linked into ./node_modules/.bin so that they can be made available to scripts run through npm. (For example, so that a test runner will be in the path when you run npm test.)
Binaries are executable files (the compiled version of your code for a specific computer architecture), and once installed they can be run directly on your machine.
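To see the second point from the documentation in practice, here is a small sketch (the "lint" script and eslint are only examples, not something your project necessarily has):
# With a "scripts" entry such as  "lint": "eslint ."  in package.json,
# npm prepends ./node_modules/.bin to PATH before running the script,
# so the bare command name resolves to the locally installed executable:
npm run lint
# Outside an npm script you need the explicit path (or npx):
./node_modules/.bin/eslint .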
Related
I'm porting an SDK written in C++ from Windows to Linux. There are other binaries, but at its simplest, our SDK is this:
core.dll - implicitly loaded DLL ("libcore.so" shared library on Linux)
tests.exe - an app used to test the DLL (uses Google Test)
All of my binaries must live in one folder somewhere that apps can find. I've achieved that on Windows. I wanted to achieve the same thing in Linux, and I'm failing miserably.
To illustrate, here's the basic project tree. We use CMake. After I build, I've got:
mysdk
|---CMakeLists.txt (has add_subdirectory() statements for "tests" and "core")
|---/tests (source code + CMakeLists.txt)
|---/core (source code + CMakeLists.txt)
|---/build (all build output, CMake output, etc.)
|   |---tests (build output)
|   |---core (build output)
The goal is to "flatten" the "build" tree and put all the binary outputs of tests, core, etc into one folder.
I tried adding CMake's install command to each of my CMakeLists.txt files (e.g. install(TARGETS core DESTINATION bin)). I then executed sudo make install after my normal build. This put all my binaries in /usr/local/bin with no errors. But when I ran tests from there, it failed to find libcore.so, even though it was sitting right there in the same folder:
tests: error while loading shared libraries: libcore.so: Cannot open shared object file: No such file or directory
I read up on the LD_LIBRARY_PATH environment variable and so tried adding that folder (/usr/local/bin) into it and running. I can see I've properly altered LD_LIBRARY_PATH but it still doesn't work. "tests" still can't find libcore.so. I even tried changing the PATH environment variable as well. Same result.
In frustration, I tried brute-force copying the output binaries to a temporary subfolder (of /mysdk/build) and running tests from there. To my surprise it ran.
Then I realized why: Instead of loading the local copy of libcore.so it had loaded the one from the build output folder (as if the full path were "baked in" to the app at build time). Subsequently deleting that build-output copy of libcore.so made "tests" fail altogether as before, instead of loading the local copy. So maybe the path really was baked in.
I'm at a loss. I've read the CMake tutorial and reference. It makes this sound so easy. Aside from the obvious (What am I doing wrong?) I would appreciate if anyone could answer any of the following questions:
What is the correct way to control where my app looks for my shared libraries?
Is there a relationship between my project build structure and how my binaries must then appear when installed?
Am I even close to the right way of doing this?
Is it possible I've somehow inadvertently "baked" (into my app) full paths to my shared libraries? Is that a thing? I use CMake variables throughout my CMakeLists files.
You can run ldd file to print the shared object dependencies for file. It will tell you where its dependencies are being read from.
You can export the environment variable LD_LIBRARY_PATH with the paths you want the dynamic linker to look in. If a dependency is not found, try adding the path where that dependency is located to LD_LIBRARY_PATH and then run ldd again (make sure you export the variable).
Also, make sure the dependencies have the right permissions.
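For example, using the paths from the question (a quick sketch, not specific to any one distro):
# Print the shared object dependencies and where the loader resolves them from
ldd ./tests
# If libcore.so shows up as "not found", point the loader at its directory
export LD_LIBRARY_PATH=/usr/local/bin:$LD_LIBRARY_PATH
# Re-check: libcore.so should now resolve
ldd ./tests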
Updating LD_LIBRARY_PATH is an option. Another option is using RPATH. Please check the example.
https://github.com/mustafagonul/cmake-examples/blob/master/005-executable-with-shared-library/CMakeLists.txt
cmake_minimum_required(VERSION 2.8)
# Project
project(005-executable-with-shared-library)
# Directories
set(example_BIN_DIR bin)
set(example_INC_DIR include)
set(example_LIB_DIR lib)
set(example_SRC_DIR src)
# Library files
set(library_SOURCES ${example_SRC_DIR}/library.cpp)
set(library_HEADERS ${example_INC_DIR}/library.h)
set(executable_SOURCES ${example_SRC_DIR}/main.cpp)
# Setting RPATH
# See https://cmake.org/Wiki/CMake_RPATH_handling
set(CMAKE_INSTALL_RPATH ${CMAKE_INSTALL_PREFIX}/${example_LIB_DIR})
# Add library to project
add_library(library SHARED ${library_SOURCES})
# Include directories
target_include_directories(library PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/${example_INC_DIR})
# Add executable to project
add_executable(executable ${executable_SOURCES})
# Linking
target_link_libraries(executable PRIVATE library)
# Install
install(TARGETS executable DESTINATION ${example_BIN_DIR})
install(TARGETS library DESTINATION ${example_LIB_DIR})
install(FILES ${library_HEADERS} DESTINATION ${example_INC_DIR})
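A rough way to try this out and confirm the result (the install prefix here is just an example):
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/opt/example ..
make
sudo make install
# Check the RPATH that was baked into the installed executable
readelf -d /opt/example/bin/executable | grep -E 'RPATH|RUNPATH'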
I just installed CMake from source (wget http://www.cmake.org/files/v2.8/cmake-2.8.3.tar.gz) in a new folder on a Linux server. The compilation worked, but the cmake command is not recognized from other paths. Should I copy the entire contents of the cmake-2.8.0 folder to /usr/local/bin? Or is it the contents of the bin folder that need to be copied?
Thanks
On Linux and other Unix-based systems, a common arrangement is to install packages to /opt and add relevant entries to the PATH environment variable to make them available. This is intended for packages not provided by the native package manager or distribution. By choosing an appropriate directory structure, this can be done in a way which also allows different versions to be installed simultaneously and the user can pick which one they want by adding the relevant directory to the PATH.
For the specific case of CMake asked about in the question, you can use a directory structure like /opt/cmake/<version> and then add the relevant /opt/cmake/<version>/bin directory to your PATH (e.g. /opt/cmake/3.8.2/bin for the 3.8.2 CMake release). You can even just download the official pre-built CMake tarballs, unpack them and move the top level directory into the /opt/cmake area as the particular version you downloaded. I've used this successfully on Linux, MacOS and Solaris, as I'm sure have many others.
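A sketch of that arrangement, using the 3.8.2 release mentioned above (the tarball name follows the usual release naming; adjust for your platform):
# Unpack an official release tarball under /opt
sudo mkdir -p /opt/cmake
sudo tar -xzf cmake-3.8.2-Linux-x86_64.tar.gz -C /opt/cmake
sudo mv /opt/cmake/cmake-3.8.2-Linux-x86_64 /opt/cmake/3.8.2
# Pick that version by putting its bin directory on PATH
export PATH=/opt/cmake/3.8.2/bin:$PATH
cmake --version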
Note that once you've run CMake on a particular source tree, the cmake executable doesn't need to be on the PATH any more. If cmake needs to be re-run, the build will do so itself and it records the full path to the cmake executable in its own cache, so the PATH isn't even consulted (this is essential in ensuring the same version of CMake continues to be used for all builds regardless of the PATH, since PATH can change between login sessions, etc.). You would only need cmake on your PATH if you intend to invoke cmake manually or for the first time you run it on a source tree, but in both of these cases you can always just use the full path to the cmake executable if you preferred.
I should also add that the entire set of files provided in the CMake package are required, not just the bin directory. CMake makes extensive use of files in its other directories, such as the various modules it comes with. If you are building CMake from source, you may want to build the package target so you get a relocatable tarball or similar which will contain everything that should be included when you provide a CMake package on your system.
After the build, use 'sudo make install'. This will make sure the correct libraries and binaries are copied to their proper places.
Usually this will install the binary to /usr/local/bin.
Make sure the PATH variable has this included.
sudo make install did not copy to /usr/local/bin/ for some reason, so I copied the contents of CMake's bin/ directory to /usr/local/bin and it worked:
cp -a bin/. /usr/local/bin/
I have downloaded the source code from git for some third-party libraries, which contains a makefile, and so I have run make to compile the libraries. During compilation, this seems to add various libraries to directories such as /usr/bin. My question is: now that I have compiled the code and the libraries have been written to other locations on my machine, can I delete the original folder with the source code in it? Or will I still need it to run these libraries?
Generally, a source code installation needs 3 steps: ./configure; make; make install
make install copies the compiled binaries and libraries to the target system directories. After that, removing the source code folder will not affect the installed binaries and libraries, since they have already been deployed on your system.
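For instance (the directory name is only an example):
# Typical source install
cd somelib-1.0
./configure
make
sudo make install
# The installed copies no longer depend on the build tree, so it can be removed
cd .. && rm -rf somelib-1.0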
In short: This question is basically about telling Linux to load the development version of the .so file for executables in the dev directory and the installed .so file for others.
In long: Imagine a shared library, let's call it libasdf.so. And imagine the following directories:
/home/user/asdf/lib: libasdf.so
/home/user/asdf/test: ... perform_test
/opt/asdf/lib: libasdf.so
/home/user/jkl: ... use_asdf
In other words, you have a development directory for your library (/home/user/asdf) and you have an installed copy of its previous stable version (/opt/asdf) and some other programs using it (/home/user/jkl).
My question is, how can I tell Linux to load /home/user/asdf/lib/libasdf.so when executing /home/user/asdf/test/perform_test, and to load /opt/asdf/lib/libasdf.so when executing /home/user/jkl/use_asdf? Note that, even though I specify the directory with -L during linking, Linux uses other methods (for example /etc/ld.so.conf and $LD_LIBRARY_PATH) to find the .so file.
The reason I need such a thing is that, of course the executables in the development directory need to link with the latest version of the library, while the other programs, would want to use the stable version.
Putting ../lib in the library path doesn't seem like a secure idea, not to mention not completely correct since you can't run the test from a different directory.
One solution I thought about is to have perform_test link with libasdf-dev.so and upon install, copy libasdf-dev.so as libasdf.so and have others link with that. This solution has one problem though. Imagine the following additional directory:
/home/user/asdf/tool: ... use_asdf_too
Which gets installed to:
/opt/asdf/bin: use_asdf_too
In my solution, it is unknown what use_asdf_too should be linked against. If linked against libasdf.so, it wouldn't work properly if invoked from the dev directory and if linked against libasdf-dev.so, it wouldn't work properly if invoked from the installed location.
What can I do? How is this managed by other people?
Installed shared objects usually don't just end with ".so". Usually they also include their soname, such as libasdf.so.42.1. The .so file for development is typically a symlink to the fully-versioned filename. The linker will look for the .so file and resolve it to the full filename, and the loader will then load the fully-versioned library instead.
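A small sketch of that layout, assuming the library was built with an explicit soname (e.g. -Wl,-soname,libasdf.so.42); the version numbers just follow the example above:
# Typical installed layout:
#   libasdf.so.42.1   real file
#   libasdf.so.42  -> libasdf.so.42.1   (resolved by the loader at run time)
#   libasdf.so     -> libasdf.so.42     (found by the linker via -lasdf at build time)
ln -s libasdf.so.42.1 libasdf.so.42
ln -s libasdf.so.42 libasdf.so
# The soname recorded in the library can be inspected with:
readelf -d libasdf.so.42.1 | grep SONAME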
I have some shared/dynamic libraries installed in a sandbox directory. I'm building some applications which link against the libraries. I'm running into what appears to be a difference between OSX and Linux in this regard, and I'm not sure what the (best) solution is.
On OSX the location of library itself is recorded into the library, so that if your applications links against it, the executable knows where to look for the library at runtime. This works like expected with my sandbox, because the executable looks there instead of system wide install paths.
On Linux I can't get this to work. Apparently the library location is not present in the library itself. As I understand it you have to add the folders which contain libraries to /etc/ld.so.conf and regenerate the ld cache by running ldconfig.
This doesn't seem to do the trick for me because my libraries are located inside a users home directory. It looks like ldconfig doesn't like that, which makes sense actually.
How can I solve this? I don't want to move the libraries out of my sandbox.
On Linux, run your program with the environment variable LD_LIBRARY_PATH set to your sandbox dir.
(I remember having used a flag -R to include library paths in the binary, but either it has been removed from gcc or it was only available on BSD systems.)
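For example (the sandbox location and program name are just illustrations):
# Run the program with the sandbox prepended to the loader's search path
LD_LIBRARY_PATH=$HOME/sandbox/lib:$LD_LIBRARY_PATH ./myapp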
On Linux you should set LD_RUN_PATH to your sandbox dir. This is better than setting LD_LIBRARY_PATH because you're telling the linker where the library is at link time, rather than telling the shared library loader at run time.
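A quick sketch of the difference (paths, library and program names are illustrative; note that LD_RUN_PATH is only honoured by the linker when no explicit -rpath is given):
# Bake the sandbox path into the binary at link time
LD_RUN_PATH=$HOME/sandbox/lib gcc main.o -L$HOME/sandbox/lib -lmylib -o myapp
# Equivalent with an explicit rpath flag:
gcc main.o -L$HOME/sandbox/lib -lmylib -Wl,-rpath,$HOME/sandbox/lib -o myapp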