nodejs Install issue on my BeagleBone Green - node.js

I want to develop my ReactJS and Node.js application on my BeagleBone Green board using Cloud9, but I get this error every time I install Node.js or run a command with npm.
https://i.stack.imgur.com/Ie095.png
Any help would be greatly appreciated.
Thanks.

Your node executable has been built to require newer versions of libstdc++ (the GNU standard C++ library) and libc (the GNU standard C library) than the ones that are installed on your BeagleBone.
To fix, you'll need to download newer versions of those libraries and make them available to node. From the information at https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html it appears that you need at least libstdc++.so.6.0.20 and at least libc-2.16.
There's some risk involved in changing the system libraries on a running system. The way to do it is to put the new libraries next to the old ones (do not delete or rename the old ones) and then remake the existing libstdc++.so.6 and libc.so.6 symlinks to point to the new libraries. The symlinks are what programs follow to get to the actual libraries. (If you look at those symlinks you'll see that right now they point to the old libraries.) You have to remake the symlinks in one command, and if remaking the libc symlink fails then you'll be in deep trouble.
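To make that concrete, the libstdc++ half of the relink might look like the sketch below; the directory and version number are assumptions (on a Debian-based BeagleBone image the libraries typically live under /usr/lib/arm-linux-gnueabihf, so check with ls -l first), and the same idea applies to libc.so.6, where a failed relink is far more serious:
cd /usr/lib/arm-linux-gnueabihf            # assumed location; verify on your image
ln -sf libstdc++.so.6.0.20 libstdc++.so.6  # one command: replace the symlink in place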
If you can get the newer libraries from properly-built packages then that should be a safer approach than trying to do it manually, because the packages should take care of the relinking for you.
Alternatively you can put the new libraries in some directory separate from the system libraries, make new libstdc++.so.6 and libc.so.6 symlinks in that directory, and then use the LD_LIBRARY_PATH environment variable to cause node and npm to look for them in that location. That's much safer, but a little ugly.
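A sketch of that safer layout, with made-up directory and file names:
mkdir -p ~/newlibs                          # any directory outside the system library paths
# copy the newer libstdc++ and libc builds into ~/newlibs, then create the symlinks next to them
cd ~/newlibs
ln -s libstdc++.so.6.0.20 libstdc++.so.6
ln -s libc-2.16.so libc.so.6
# run node and npm with the override in effect
LD_LIBRARY_PATH=$HOME/newlibs node -v
LD_LIBRARY_PATH=$HOME/newlibs npm -v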

Related

What is the correct way to upgrade the versions of Haskell programs installed on /usr/bin?

I have the 3.0.1 version of Alex installed on my /usr/bin. I think the Haskell Platform originally put it there (although I'm not 100% sure...).
Unfortunately, version 3.0.1 is bugged, so I need to upgrade it to 3.0.5. I tried using cabal to install the latest version of Alex, but cabal install alex-3.0.5 installed the executable in .cabal/bin in my home folder instead of in /usr/bin.
Do I just manually copy the executable to /usr/bin? (That sounds like a lot of trouble to do all the time.)
Do I change my PATH environment variable so that .cabal/bin comes before /usr/bin? (I'm afraid that an "ls" executable or similar over on the cabal folder might end up messing up my system)
Or is there a simpler way to go at it in general?
I want to first point out the layout that works well for me, and then suggest how you might proceed in your particular situation.
What works well for me
In general, I think that a better layout is to have the following search path:
1. directories with important non-Haskell related binaries
2. the directory that cabal install installs to
3. the directory that binaries from the Haskell platform are in
This way, you can use cabal install to update binaries from the Haskell platform, but they cannot accidentally shadow some non-Haskell related binary.
(On my Windows machine, this layout is easy to achieve, because the binaries from the Haskell platform are installed in a separate directory by default. So I just manually adapt the search path and that's it. I don't know how to achieve it on other platforms).
Suggestion for your particular situation
In your specific situation with the Haskell platform binaries already installed together with the non-Haskell related binaries, maybe you can use the following layout for the search path:
1. a directory containing links to some of the binaries in 3
2. the directory with important non-Haskell related binaries and Haskell platform binaries
3. the directory that cabal install installs to.
This way, binaries from cabal install cannot accidentally shadow the important stuff in 2. But if you decide you want to shadow something from the Haskell platform, you can manually add a link in 1, as sketched below. If it's a soft link, I think you only have to do that once per program name, and then you can call cabal install for that program to update it. You could even look up what executables are bundled with the Haskell platform and do that once and for all.
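For example (a sketch only; ~/bin-shadow is a made-up stand-in for directory 1, and alex is the program from the question):
mkdir -p ~/bin-shadow
ln -s ~/.cabal/bin/alex ~/bin-shadow/alex   # shadows the platform's alex
# from now on, `cabal install alex` updates the binary the link resolves to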
On second thought, putting ~/.cabal/bin in front of /usr/bin in the PATH is simpler and is what most people already do.
It's also not a big deal, since only cabal will put files in ~/.cabal/bin, so it should be predictable, with little risk of overwriting anything.
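A minimal sketch of that simpler setup, assuming a bash-style shell and cabal's default ~/.cabal/bin location; add the line to ~/.bashrc (or your shell's init file):
export PATH="$HOME/.cabal/bin:$PATH"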

How to manage development and installed versions of a shared library?

In short: This question is basically about telling Linux to load the development version of the .so file for executables in the dev directory and the installed .so file for others.
In long: Imagine a shared library, let's call it libasdf.so. And imagine the following directories:
/home/user/asdf/lib: libasdf.so
/home/user/asdf/test: ... perform_test
/opt/asdf/lib: libasdf.so
/home/user/jkl: ... use_asdf
In other words, you have a development directory for your library (/home/user/asdf) and you have an installed copy of its previous stable version (/opt/asdf) and some other programs using it (/home/user/jkl).
My question is, how can I tell Linux to load /home/user/asdf/lib/libasdf.so when executing /home/user/asdf/test/perform_test and to load /opt/asdf/lib/libasdf.so when executing /home/user/jkl/use_asdf? Note that, even though I specify the directory with -L during linking, Linux uses other methods (for example /etc/ld.so.conf and $LD_LIBRARY_PATH) to find the .so file.
The reason I need such a thing is that, of course the executables in the development directory need to link with the latest version of the library, while the other programs, would want to use the stable version.
Putting ../lib in the library path doesn't seem like a secure idea, not to mention it's not completely correct, since you can't run the test from a different directory.
One solution I thought about is to have perform_test link with libasdf-dev.so and upon install, copy libasdf-dev.so as libasdf.so and have others link with that. This solution has one problem though. Imagine the following additional directory:
/home/user/asdf/tool: ... use_asdf_too
Which gets installed to:
/opt/asdf/bin: use_asdf_too
In my solution, it is unknown what use_asdf_too should be linked against. If linked against libasdf.so, it wouldn't work properly if invoked from the dev directory and if linked against libasdf-dev.so, it wouldn't work properly if invoked from the installed location.
What can I do? How is this managed by other people?
Installed shared objects usually don't just end with ".so". Usually they also include their soname, such as libasdf.so.42.1. The .so file for development is typically a symlink to a fully-versioned filename. The linker will look for the .so file and resolve it to the full filename, and the loader will then load the fully-versioned library instead.
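A sketch of how that layout is typically produced when the library is built and installed (the version numbers here are made up):
gcc -fPIC -c asdf.c
gcc -shared -Wl,-soname,libasdf.so.42 -o libasdf.so.42.1 asdf.o
ln -s libasdf.so.42.1 libasdf.so.42   # what the loader resolves via the soname
ln -s libasdf.so.42 libasdf.so        # what -lasdf finds at link time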

Installation and maintenance of multiple versions of OpenCV (applicable to any other 3rd party library as well)

I have been trying to build and use OpenCV 2.3.0 on my Fedora 15 (Lovelock) 64-bit machine.
Background:
First, on my 64-bit Fedora 15, OpenCV 2.2.0 seems to be installed in the following locations:
/usr/share/opencv
/usr/doc
/usr/lib64 &
/usr/bin
I do not find the include files, though (in /usr/include). This means that the development package wasn't installed. My package manager does not list the development packages when I try to add/remove software.
I need to create applications, some of which link against 2.2 and others against 2.3.0 of the OpenCV library. So I thought the best solution would be to have a separate location for the 3rd-party libraries that I use for my development. I created a directory named soft in /local and created an OpenCV directory inside it, with a directory structure like this one:
/local/soft/
    OpenCV/
        OpenCV2.2.0/
            source-files
            build
        OpenCV2.3.0/
            source-files
            build
            installation
                share/opencv
                doc
                include
                lib
Now, I tried building OpenCV 2.3.0 and succeeded. I configured CMake to set CMAKE_INSTALL_PREFIX to the directory named "installation" (see above) instead of the default /usr/local/. Clean, huh?
I tried building and installing OpenCV 2.2.0 the same way. Alas, 2.2.0 complains about something during the build. So I thought I'd link against the already existing version in the standard locations. BUT, when I try to install the dev packages for 2.2 using my package manager, the development files for x86_64 are not found :-) which means I don't have the headers to link against the libraries in the standard location.
I can't build my executable, since the linker ld will not find the OpenCV that I installed in the non-standard location (although I point it to the exact location using the -L and -l options with gcc in my Eclipse).
Question 1: Am I doing the right thing in maintaining installations in non-standard locations? Is /usr/ the standard location where the package manager will always do the installation?
Question 2: What is the right way of linking against these libraries installed in non-standard locations? Why would ld not recognize my .so files in the lib folder?
sudo g++ logpolar.cpp -o logpolar.o -I /local/soft/OpenCV/opencv2.3.1/installation/include/ -l/local/soft/OpenCV/opencv2.3.1/build/lib/libopencv_core.so
But ld cannot find -l/local/soft/OpenCV/opencv2.3.1/build/lib/libopencv_core.so
I checked the lib folder and there sure is a beautiful symbolic link to libopencv_core.so.2.3
The standard approach is to use /usr/local directory structure that already has predefined paths like /usr/local/bin, /usr/local/sbin, /usr/local/include, /usr/local/lib.
You put your software there and everything will JustWork(TM). Every Linux distro (incl. Fedora) is set up so that it will load programs (libraries, headers) from these directories.
If you were using the GNU toolchain (autoconf, automake => autotools) you would be fine. With CMake you probably need to set up the paths for /usr/local/include and /usr/local/lib yourself.
On the other hand, this approach won't let you use multiple versions. You can only have one. The one in /usr/local overrides the system one (installed under /usr) because those paths go first.
You can keep your approach; there is nothing incorrect about it. We usually put such software in the /opt folder, so you would go for /opt/opencv/X.Y, where X.Y are the version numbers.
Q2: Read the gcc man page and search for the -L option. You need something like:
gcc ... -I/opt/opencv/2.0/include -lsystem_lib -L/opt/opencv/2.0/lib -lopencv ... ...
Do not forget to set LD_LIBRARY_PATH when running programs in multiple versions to properly load correct version:
LD_LIBRARY_PATH=/opt/opencv/2.0/lib /opt/opencv/2.0/bin/opencv

What's the accepted method for deploying a Linux application that relies on shared libraries?

I have an application that relies on Qt, GDCM, and VTK, with the main build environment being Qt. All of these libraries are cross-platform and compile on Windows, Mac, and Linux. I need to deploy the application to Linux after deploying on Windows. The versions of VTK and GDCM I'm using are trunk versions from git (about a month old), more recent than what I can get via apt-get on Ubuntu 11.04, which is my current (and only) Linux deployment target.
What is the accepted method for deploying an application that relies on these kinds of libraries?
Should I be statically linking here, to avoid LD_LIBRARY_PATH? I see conflicting reports on LD_LIBRARY_PATH; tutorials like this one suggest that it's the 'right way' to modify the library path to use shared libraries through system reboots. Others suggest that I should never set LD_LIBRARY_PATH. In the default version of GDCM, the installation already puts libraries into the /usr/local/lib directory, so those libraries get seen when I run ldd <my program>. VTK, on the other hand, puts its libraries into /usr/local/lib/vtk-5.9, which is not part of the LD_LIBRARY_PATH on most user's machines, and so is not found unless some change is made to the system. Copying the VTK files into '/usr/local/lib' does not allow 'ldd' to see the files.
So, how can I make my application see VTK to use the libraries?
On Windows, deploying the DLLs is very straightforward: I can just include them in the installer, and the application finds them because they are in the local directory. That approach does not work on Linux, so I was going to have the users install Qt, GDCM, and VTK from whatever appropriate source, use the default locations, and then have the application point to those default locations. However, since VTK is putting things into a non-standard location, should I also expect users to modify LD_LIBRARY_PATH? Or should I include the specific versions of the libraries that I want and then figure out how to make the executable look in the local directory for those libraries and ignore the ones it finds in the library path?
Every "serious" commercial application I have ever seen uses LD_LIBRARY_PATH. They invariably include a shell script that looks something like this:
#!/bin/sh
here="${0%/*}" # or you can use `dirname "$0"`
LD_LIBRARY_PATH="$here"/lib:"$LD_LIBRARY_PATH"
export LD_LIBRARY_PATH
exec "$0".bin "$#"
They name this script something like .wrapper and create a directory tree that looks like this:
.wrapper
lib/ (directory full of .so files)
app1 -> .wrapper (symlink)
app1.bin (executable)
app2 -> .wrapper (symlink)
app2.bin (executable)
Now you can copy this whole tree to wherever you want, and you can run "/path/to/tree/app1" or "/path/to/tree/app2 --with --some --arguments" and it will work. So will putting /path/to/tree in your PATH.
Incidentally, this is also how Firefox and Chrome do it, more or less.
Whoever told you not to use LD_LIBRARY_PATH is full of it, IMHO.
Which system libraries you want to put in lib depends on which Linux versions you want to officially support.
Do not even think about static linking. The glibc developers do not like it, they do not care about supporting it, and they somehow manage to break it a little harder with every release.
Good luck.
In general, you're best off depending on the 'normal' versions of the libraries for whatever distribution you're targeting (and saying you don't support distros that don't ship recent enough versions of the lib), but if you REALLY need to depend on a bleeding-edge version of some shared lib, you can link your app with -Wl,-rpath,'$ORIGIN' and then install a copy of the exact version you want in the same directory as your executable.
Note that if you use make, you'll need $$ in the makefile to get a single $ into the argument that is actually sent to the linker. The single quotes are needed so the shell doesn't munge things...
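A minimal sketch of what that looks like in a makefile (the target, object, and library names are placeholders, not anything from the question):
# recipe line must start with a tab
myapp: main.o
	$(CC) -o $@ $^ -L./lib -lfoo -Wl,-rpath,'$$ORIGIN'
Make turns $$ORIGIN into $ORIGIN, and the single quotes keep the shell from touching it, so the literal string $ORIGIN reaches the linker.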
Well, there are two options for deploying a Linux application.
The correct way:
make a package for your app and for the libraries, if they are so special, that they can't be installed from standard repositories
There are two major package formats: RPM and DEB.
The easy way:
make a self-extracting file that installs the "Windows way" into /opt.
You can have libraries in the same directory as the executable, it's just not the preferred way.

Building a Win32 DLL from a Linux library source

I'm trying to build a Win32 DLL from an audio-DSP related Linux library (http://breakfastquay.com/rubberband/). There are makefiles and config scripts for Linux, but no help for Windows. The author provides a Win32 binary of a sample app using the library, and I see a number of "#ifdef MSVC" and "#ifdef WIN32" scattered around, so I don't think I'm starting completely from scratch but I'm stuck nevertheless.
As my programming knowledge in either platform is rather limited, I'd appreciate any help.
First of all, what is the right way to get started here? Visual Studio? Cygwin? Initially I started off creating a Win32 DLL project in Visual Studio, adding the source files, thinking about adding a .def file, etc., but at some point I felt like this was going nowhere.
As for Cygwin, this was the first time using it, and I don't even know if this is the sort of thing that Cygwin is designed for. Is it?
On Cygwin, I ran ./configure and got stuck at something like this:
"checking for SRC... configure: error: Package requirements (samplerate) were not met: No package 'samplerate' found"
After looking through the log, it appears that pkg-config is looking for samplerate.pc. How do I handle packages in Windows? libsamplerate is just an open source library, and I have source and a DLL for this. But I'm not sure how to use them to satisfy the dependency requirements for librubberband (which is what I'm trying to build)
I'm completely lost at this point and if anyone can give me a nudge in the right direction... and, is there an easier way to do this?
Many thanks in advance.
If you're still stuck on this I can throw a little light.
You may have to build everything from sources (or have the libraries installed in your environment). You're using Cygwin; I would recommend MinGW and MSYS as well, but sometimes it's just not possible to use that combination to build the program or library.
So if you are using Cygwin, first ensure that you have a proper environment installed, that is, that you have the correct development headers installed.
Then download libsndfile. Extract the sources to a directory and from the Cygwin bash shell navigate to that directory. There perform:
./configure
make
make install prefix=/cygdrive/c/cygwin
Notice that I use a prefix; it should point to the directory where Cygwin is installed, so that the libraries are installed correctly (the same applies to MinGW and MSYS: the prefix should point to the MinGW installation directory). Maybe using the usr directory in the prefix works too; I've never tried it.
Now download FFTW, as it will be needed for libsamplerate and rubberband. Same procedure as with libsndfile: extract, configure, make & make install using the prefix. Now copy the header files of FFTW (in the example they'd be in /cygdrive/c/cygwin/include) to the include directory in the usr directory (in the example /cygdrive/c/cygwin/usr/include).
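For example (assuming the prefix used above and the standard FFTW 3 header name):
cp /cygdrive/c/cygwin/include/fftw3.h /cygdrive/c/cygwin/usr/include/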
Next SRC (libsamplerate), same procedure.
Then the Vamp plugin SDK. In order to compile it you may need to edit the file src\vamp-hostsdk\PluginLoader.cpp, deleting RTLD_LOCAL from a dlopen() call (it's safe; it's already the default behaviour).
Also, you may need to install it by hand (in my experience it didn't like the prefix). Or set the environment variable PKG_CONFIG_PATH to point to the pkgconfig directories, e.g.:
export PKG_CONFIG_PATH=/cygdrive/c/cygwin/lib/pkgconfig:/usr/local/lib/pkgconfig
Now, create a file called ladspa.h in the include directory with the contents of the LADSPA header.
Finally, configure and build rubberband, it should find everything it needs.
To build in MSYS using MinGW follow the same procedure, using the according prefix. Using Visual Studio is another alternative, but you may need to use some of the pre-built libraries (for example for libsndfile) as building Linux libraries natively in Windows may be complicated or even impossible (without hacking the source code) in VS.
Anyway, the author of rubberband provides binaries; I think you should consider using them instead of going through all of this.
Porting from Linux to Win32 is usually tricky.
For each of your dependencies, download the source and:
./configure
make
sudo make install
Also, I recommend using MinGW + MSYS in place of Cygwin (as the latter produces executables that depend on its libraries). However, in your situation, use the VS approach -- it will save you a lot of time.
