I've been installing several Node.js modules/apps lately to be able to start a new web project. I use NPM for installing the modules, but every time I face the same problem: The modules are not accessible globally.
None of the installation guides mention the need to change or add anything to .bash_profile, but I have found out through some tutorials that this is needed.
I have managed to get some modules working this way, but not all of them, and I could really use some help here. The latest one I have problems with is Expresso. What should I put in .bash_profile to be able to access it globally?
The executable Expresso file is in the following folder:
/Users/toby/node/imapp/imagebridge/node_modules/expresso/bin/expresso
The following doesn't work:
export PATH="/Users/toby/node/imapp/imagebridge/node_modules/expresso/bin/expresso/:$PATH"
Remove expresso from your path, e.g.
export PATH="/Users/toby/node/imapp/imagebridge/node_modules/expresso/bin/:$PATH"
since expresso is the executable. The PATH is a list of directories to search for executables, not a list of executables.
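As a quick sanity check (not part of the original answer), you can reload the profile and confirm the shell now resolves the binary:
source ~/.bash_profile
which expresso
The second command should print /Users/toby/node/imapp/imagebridge/node_modules/expresso/bin/expresso.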
This is also a good addition for locally installed (bundled) modules:
export PATH="./node_modules/.bin:$PATH"
It allows you to run binaries from the current working directory's node_modules/.bin subdirectory.
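With that in place, a sketch of how a project-local binary would be invoked (expresso here is just the example from the question; npm puts a launcher into node_modules/.bin for any package that declares a bin entry):
cd /Users/toby/node/imapp/imagebridge
expresso
The shell finds ./node_modules/.bin/expresso because that relative entry is searched first.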
Related
I am not root on the Linux server, so I chose to install software into $HOME/local/bin. I have already added the $HOME/local/bin directory to the PATH environment variable in my .bashrc.
Some software installs this way:
tar xvzf ncurses-5.9.tar.gz
cd ncurses-5.9
./configure --prefix=$HOME/local
make
make install
cd ..
So it installs directly into my $HOME/local/bin.
But some software, for example sbt-1.2.1.zip (Java-based), just unpacks after download into a single folder sbt containing three subfolders: bin, conf, and lib. Its bin contains one executable file named sbt together with java9-rt-export.jar, sbt-launch-lib.bash, sbt-launch.jar, and sbt.bat.
Here I wonder:
Should I just symlink this executable sbt file into my $HOME/local/bin and then source my .bashrc?
Or, after decompression, should I add a line to my .bashrc such as export PATH="downloadpath/sbt/bin:$PATH"?
Since downloadpath/sbt/bin contains just one executable, I am not sure it is right to add the whole bin folder to the PATH. When a software's bin folder contains executable files (one or many), it seems more convenient to just add its bin to .bashrc, but even so, I am not sure that is right.
I am not familiar with installing software; I usually know how but not why. I have shown two ways to install here (there are more that I have not shown). Are executables always placed in bin or src? Some software has no bin, just src, with no executable files in it...
Slurm can also provide software through environment modules, and conda is yet another way, but I want to confirm that the two traditional ways I mentioned can still be used alongside Slurm or conda.
Any suggestion, even a reminder about just one aspect, would be appreciated!
For precompiled software, or, in general, software that does not offer configure scripts or (C)Make files, it is often better to leave it in its target directory and adapt the *PATH environment variables (PATH for binaries, but also LD_LIBRARY_PATH and LIBRARY_PATH for libraries, CPATH for include files, and MANPATH for the man pages).
The reason is that the software might be configured to read files, such as libraries, through paths hardcoded relative to the position of the executable.
In your case, you might also need to set the CLASSPATH environment variable to the directory with the jar files.
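A minimal sketch of what this could look like in .bashrc, assuming the archive was unpacked to $HOME/local/sbt (a hypothetical location; adjust the paths to your layout):
# executables of the unpacked software
export PATH="$HOME/local/sbt/bin:$PATH"
# jar files, for Java-based tools
export CLASSPATH="$HOME/local/sbt/lib/*${CLASSPATH:+:$CLASSPATH}"
# only needed if the software ships native shared libraries
export LD_LIBRARY_PATH="$HOME/local/sbt/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"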
To ease software installation, you can use tools such as EasyBuild, which can help and even create user modules, just like the system modules installed by the system administrators.
There is something wrong, in my opinion, with your setup. If you don't have a root account on your server, isn't it better to test what you have to test in a safer environment, for example a VM/container on your development machine?
However, in your situation it may be better to start sbt through a separate bash script rather than modifying your .bashrc.
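A rough sketch of such a wrapper script, assuming sbt was unpacked to $HOME/downloads/sbt (a hypothetical path; adjust to your actual location):
#!/bin/sh
# run-sbt.sh: launch the unpacked sbt without touching .bashrc
SBT_HOME="$HOME/downloads/sbt"
exec "$SBT_HOME/bin/sbt" "$@"
Make it executable with chmod +x run-sbt.sh and place it somewhere already on your PATH, for example $HOME/local/bin.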
I am working off of a cluster, and since I do not have sudo privileges there, I had to install a toolkit at a different path, ~/bin/tool_kit. This path now contains the following directories: bin, include and lib. This may be a very newbie question, but what changes do I make to my .bashrc so that I am able to use this toolkit?
For example, the $PATH variable might be augmented like:
export PATH = ~/bin/tool_kit/bin:$PATH. How do I include lib and include?
bin is the only location you definitely need to do anything with:
# using end of the PATH, unless you know you want to override like-named system binaries
PATH=$PATH:$HOME/bin/tool_kit/bin
No export is needed here, as the PATH variable is already in the environment.
If and only if your software didn't compile in an rpath pointing to the anticipated runtime library locations, you may also wish to set LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH+$LD_LIBRARY_PATH:}$HOME/bin/tool_kit/lib
include files are used only when compiling other software, and are not generally needed at runtime.
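If you later compile something against this toolkit yourself, here is a sketch of how the include and lib directories are typically passed to the compiler (example.c and the library name tool_kit are placeholders):
gcc example.c -I$HOME/bin/tool_kit/include -L$HOME/bin/tool_kit/lib -ltool_kit -o example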
In short: This question is basically about telling Linux to load the development version of the .so file for executables in the dev directory and the installed .so file for others.
In long: Imagine a shared library, let's call it libasdf.so. And imagine the following directories:
/home/user/asdf/lib: libasdf.so
/home/user/asdf/test: ... perform_test
/opt/asdf/lib: libasdf.so
/home/user/jkl: ... use_asdf
In other words, you have a development directory for your library (/home/user/asdf) and you have an installed copy of its previous stable version (/opt/asdf) and some other programs using it (/home/user/jkl).
My question is, how can I tell Linux to load /home/user/asdf/lib/libasdf.so when executing /home/user/asdf/test/perform_test and to load /opt/asdf/lib/libasdf.so when executing /home/user/jkl/use_asdf? Note that, even though I specify the directory with -L during linking, Linux uses other methods (for example /etc/ld.so.conf and $LD_LIBRARY_PATH) to find the .so file.
The reason I need such a thing is that, of course, the executables in the development directory need to link with the latest version of the library, while the other programs want to use the stable version.
Putting ../lib in the library path doesn't seem like a secure idea, not to mention not completely correct since you can't run the test from a different directory.
One solution I thought about is to have perform_test link with libasdf-dev.so and upon install, copy libasdf-dev.so as libasdf.so and have others link with that. This solution has one problem though. Imagine the following additional directory:
/home/user/asdf/tool: ... use_asdf_too
Which gets installed to:
/opt/asdf/bin: use_asdf_too
In my solution, it is unknown what use_asdf_too should be linked against. If linked against libasdf.so, it wouldn't work properly if invoked from the dev directory and if linked against libasdf-dev.so, it wouldn't work properly if invoked from the installed location.
What can I do? How is this managed by other people?
Installed shared objects usually don't just end with ".so"; usually they also include their soname, such as libasdf.so.42.1. The .so file used for development is typically a symlink to the fully-versioned filename. The linker will look for the .so file and resolve it to the full filename, and the loader will then load the fully-versioned library instead.
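A minimal sketch of how that layout is usually produced; the version numbers and the soname libasdf.so.42 are assumptions for illustration:
# compile position-independent code, then link a versioned library with an embedded soname
gcc -fPIC -c asdf.c
gcc -shared -Wl,-soname,libasdf.so.42 -o libasdf.so.42.1 asdf.o
# soname symlink used by the loader, plain .so symlink used by the linker
ln -s libasdf.so.42.1 libasdf.so.42
ln -s libasdf.so.42 libasdf.so
# linking a dev binary with an rpath keeps it bound to the dev copy of the library
gcc perform_test.o -L/home/user/asdf/lib -Wl,-rpath,/home/user/asdf/lib -lasdf -o perform_test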
Suppose I have written a Node.js application and I now would like to distribute it. Of course, I want to make it easy for the user, hence I do not want them to install Node.js, run npm install and then manually type node app.js.
What I'd prefer is a single executable file, e.g. an .exe file on Windows.
How could I approach this?
I am aware of this thread, anyway this is only about Windows. How could I achieve this in a platform-independent manner? Any ideas? Best practices? ...?
The perfect solution would be a "compiler" I can give a source folder to. The source folder contains the app itself in various .js files, the node_modules folder and some metadata, such as package.json. The output should be binaries for various platforms, such as Windows, OS X and Linux.
Oh, and what's important: I do not want to make any changes to the source code, so calls to require with relative paths should still work, even if this relative path is now inside the packaged app.
Any ideas?
PS: I do not want the user to install Node.js independently, it should be included inside the executable as well.
Meanwhile I have found the (for me) perfect solution: nexe, which creates a single executable from a Node.js application including all of its modules.
It's the next best thing to an ideal solution.
First, we're talking about packaging a Node.js app for workshops, demos, etc., where it can be handy to have an app "just running" without the end user needing to care about installation and dependencies.
You can try the following setup:
Get your app's source code
npm install all dependencies (via package.json) to the local node_modules directory. It is important to perform this step on each platform you want to support separately, in case of binary dependencies.
Copy the Node.js binary (node.exe on Windows, probably /usr/local/bin/node on OS X/Linux) to your project's root folder. On OS X/Linux you can find the location of the Node.js binary with which node.
For Windows:
Create a self extracting archive, 7zip_extra supports a way to execute a command right after extraction, see: http://www.msfn.org/board/topic/39048-how-to-make-a-7-zip-switchless-installer/.
For OS X/Linux:
You can use tools like makeself or unzipsfx (I don't know if this is compiled with CHEAP_SFX_AUTORUN defined by default).
These tools will extract the archive to a temporary directory, execute the given command (e.g. node app.js) and remove all files when finished.
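As a rough sketch of the OS X/Linux route with makeself (the directory and file names are assumptions, and the command may be installed as makeself or makeself.sh):
# app-dir/ contains app.js, node_modules/ and the copied node binary
makeself ./app-dir myapp.run "My Node App" ./node app.js
Running ./myapp.run then extracts the archive to a temporary directory and starts ./node app.js from there.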
Not to beat a dead horse, but the solution you're describing sounds a lot like Node-Webkit.
From the GitHub page:
node-webkit is an app runtime based on Chromium and node.js. You can write native apps in HTML and JavaScript with node-webkit. It also lets you call Node.js modules directly from the DOM and enables a new way of writing native applications with all Web technologies.
These instructions specifically detail the creation of a single file app that a user can execute, and this portion describes the external dependencies.
I'm not sure if it's the exact solution, but it seems pretty close.
Hope it helps!
JXcore will allow you to turn any Node.js application into a single executable, including all dependencies, on Windows, Linux, or Mac OS X.
Here is a link to the installer:
https://github.com/jxcore/jxcore-release
And here is a link to how to set it up:
http://jxcore.com/turn-node-applications-into-executables/
It is very easy to use and I have tested it in both Windows 8.1 and Ubuntu 14.04.
FYI: JXcore is a fork of Node.js, so it is 100% Node.js compatible, with some extra features.
In addition to nexe, browserify can be used to bundle up all your dependencies as a single .js file. This does not bundle the actual node executable, just handles the javascript side. It too does not handle native modules. The command line options for pure node compilation would be browserify --output bundle.js --bare --dg false input.js.
There are a number of steps you have to go through to create an installer and it varies for each Operating System. For Example:
on Mac OS X you need to create a .pkg, there are instructions on how to do that here: https://coolaj86.com/articles/how-to-create-an-osx-pkg-installer.html
on Ubuntu Linux you need to create a .deb; there are instructions on how to do that here: https://coolaj86.com/articles/how-to-create-a-debian-installer.html
on Microsoft Windows you need to create a .exe or .msi; there are instructions on how to do that using the Inno Setup installer here: https://coolaj86.com/articles/how-to-create-an-innosetup-installer.html
You could create a git repo and set up a link to the Node git repo as a dependency (a submodule). Then any user who clones the repo would also get Node.
#git submodule [--quiet] add [-b branch] [-f|--force]
git submodule add /var/Node-repo.git common
You could easily package up a script to automatically clone the git repo you have hosted somewhere and "install" from that one script file.
#!/bin/sh
# clone the git repo (Node is pulled in as a submodule)
git clone your-repo.git
cd your-repo && git submodule update --init
I have a lib I installed by hand (to /usr/local) on a Linux system (Eigen3, by the way). There is a FindEigen3.cmake bundled with the lib but that is not installed anywhere by default.
There is /usr/share/cmake-x.y/Modules where CMake looks for additional modules, but putting these files there doesn't seem like the right way to do things. Is there an equivalent place under /usr/local that is also scanned by default? Or what is the standard way of creating custom library modules?
(Although the question isn't strictly connected to programming, I think library authors may also encounter the same question from the other side: where to put these files when installing manually.)
In our project we place FindXXX.cmake modules in the folder <project root dir>/cmake/modules. For this to work you have to specify the following in <project root dir>/CMakeLists.txt (similar to what DLRdave has already said):
set(CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}/cmake/modules)
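A minimal sketch of how the pieces fit together in the top-level CMakeLists.txt (the project name and target are placeholders; EIGEN3_INCLUDE_DIR is the variable set by the bundled FindEigen3.cmake):
cmake_minimum_required(VERSION 3.0)
project(MyProject)
# tell CMake where the bundled FindEigen3.cmake lives
set(CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}/cmake/modules)
find_package(Eigen3 REQUIRED)
add_executable(my_app main.cpp)
# Eigen is header-only, so only the include path is needed
target_include_directories(my_app PRIVATE ${EIGEN3_INCLUDE_DIR})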
See the comments in the CMake documentation for the "find_package" command:
http://cmake.org/cmake/help/v2.8.8/cmake.html#command:find_package
It speaks of writing a "project-config" file, and where to install it, such that find_package(Eigen3) will work without having a FindEigen3.cmake find module... It is verbose, but the information is in there.
See also user contributed wiki pages such as this one:
https://gitlab.kitware.com/cmake/community/wikis/doc/tutorials/How-to-create-a-ProjectConfig.cmake-file
You need to set the CMAKE_MODULE_PATH to include the directory that the FindEigen3.cmake file is in before calling find_package. I believe that:
set( CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} <your path> )
will do the trick, but I do not have a setup to test that available at the moment so you may have to massage that technique a bit.