Often I need to install supporting libraries on Linux to get my (or a third-party) app working. The process usually involves running configure and apt/yum install multiple times. Is it possible, after the installation is over and the app has been successfully compiled, to get a list of all packages that were installed during the process by yum/apt? Or just yum or just apt, if only one of them supports this.
Right now I have to use history | grep apt, but this usually causes many invalid attempts to be included in the output.
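For illustration, the workaround currently looks like this; the grep also matches commands that failed or were abandoned halfway, which is exactly the noise I'd like to avoid:

history | grep 'apt-get install'    # also lists attempts that errored out or were never completed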
Environment
I use GitLab CI/CD to bundle my application.
I use node:14-alpine as the image and run yarn to build my app.
After the build is finished, I deploy my app via rsync to the target server, which runs Ubuntu 20.04.
On this server, I use pm2 to start the app and keep it running.
Issue
If I look into the logs, I see an error like this:
I've searched a bit and found that the issue might be caused by musl-dev being missing.
I've installed it on my server and in the Docker container, but with the same result.
BUT, if I delete the node_modules directory from the server and run yarn install right on the server, the app runs as expected.
Question
So why does this issue happen here? Must I have the same distribution & version of Linux in my Docker container to satisfy all dependencies?
Don't use an Alpine image if you're deploying on Ubuntu.
So why does this issue happen here?
The fundamental C standard library implementation is different on the two distributions (Alpine uses musl libc; Ubuntu and more or less all other distros use the GNU C Library, glibc).
Trying to move binaries (such as those that might appear in node_modules for native modules) built against one libc implementation to a system using the other will likely be painful or not work at all (as you noticed).
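As a quick check, you can run ldd on one of the compiled addons that yarn built inside the Alpine container; the module path below is purely illustrative:

ldd node_modules/some-native-module/build/Release/binding.node
# on Ubuntu, an Alpine-built addon will typically report something like:
#   libc.musl-x86_64.so.1 => not found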
Must I have the same distribution & version of Linux in my Docker container to satisfy all dependencies?
If none of the dependencies use native code, then you should be able to just move things over without issues, but otherwise it'll be easiest (e.g. considering the versions of other libraries your dependencies may link against) to just use the same version as your target OS – or, if you don't want to think about that, just deploy your application as a Docker container.
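For example, a minimal way to side-step the mismatch without touching the CI image is to rebuild the native dependencies on the target server after each deploy; this is only a sketch, and the application path and pm2 process name are made up:

cd /var/www/myapp                # wherever rsync puts the app (hypothetical path)
rm -rf node_modules
yarn install --production        # recompiles native addons against Ubuntu's glibc
pm2 restart myapp                # hypothetical pm2 process name

Alternatively, switching the CI image from node:14-alpine to a Debian-based tag such as node:14 would build the addons against glibc in the first place.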
Even though the suggestion from AKX is a good answer, I've played around a bit to figure out how to solve this special case.
Here is my solution:
install musl-dev on the server
link it to /lib
apt-get install musl-dev
ln -s /usr/lib/x86_64-linux-musl/libc.so /lib/libc.musl-x86_64.so.1
In my case it's only this single dependency that causes the trouble. If I run into more of these, I will follow AKX's suggestion and choose a Debian/Ubuntu-like distribution to bundle it.
I’ve written a very small application in Go and configured an AWS Linux AMI to host it. The application is a very simple web server. I’ve installed Go on the Linux VM by following the instructions in the official documentation to the letter. My application runs as expected when invoked with the “go run main.go” command.
However, I receive an “Invalid argument” error when I attempt to manually launch the binary file generated as a result of running “go install”. However, if I run “go build” (which I understand to be essentially the same thing, with a few exceptions) and then invoke the resulting binary, the application launches as expected.
I’m invoking the file from within the $GOPATH/bin/ folder as follows:
./myapp
I’ve also added $GOPATH/bin to the $PATH variable.
I have also moved the binary from $GOPATH/bin/ to the src folder, and successfully run it from there.
The Linux instance is a 64-bit instance, and I have installed the corresponding Go 64-bit installation.
go build builds everything (that is, all dependent packages), then produces the resulting executable files and then discards the intermediate results (see this for an alternative take; also consider carefully reading outputs of go help build and go help install).
go install, on the contrary, uses precompiled versions of the dependent packages if it finds them; otherwise it builds them as well and installs them under $GOPATH/pkg. Hence I might suggest that go install sees some outdated packages which break the resulting build.
Consider running go install ./... in your $GOPATH/src.
Or maybe just run a selective go install uri/of/the/package for each dependent package, and then retry building the executable.
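A minimal sequence along those lines might look like this, assuming your package lives somewhere under $GOPATH/src (the myapp name is made up):

cd $GOPATH/src
go clean -i ./...        # remove previously installed, possibly stale package archives and binaries
go install ./...         # rebuild and reinstall everything from source
$GOPATH/bin/myapp        # run the freshly installed binary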
I'm using vagrant shell provisioning here.
I've installed Node.js on my VM along with many other packages.
I want to avoid running parts in my provisioning script when I don't need them.
For example - I already successfully installed Node.js & nginx via my script, so when I want to add additional packages like mysql or redis, I want to add them to the script and run the script to test that it runs properly, but I DO NOT want to re-install Node.js or nginx again...
I need a simple conditional statement that would detect if a package is already installed, and install it only if it is not already installed.
Is there a generic check or will it be different from package to package?
Thanks
Ajar
dpkg -s <pkg-name> 2>/dev/null >/dev/null || sudo apt-get -y install <pkg-name>
This should be what you're looking for.
What's going on here:
This is a short-circuit conditional of the form <condition> && <command if true> || <command if false>.
The first part of the expression uses dpkg to check whether the package is installed, suppressing the output. The part after || is executed only if that check fails; the "true" case is omitted here.
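Wrapped up as a tiny helper for a Vagrant shell provisioner, it could look like this (the function name and package list are just examples):

install_if_missing() {
  # install the package only if dpkg doesn't already know about it
  dpkg -s "$1" >/dev/null 2>&1 || sudo apt-get -y install "$1"
}

install_if_missing nginx
install_if_missing nodejs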
This depends on the Linux distribution you are using. Usually, a package manager comes with some kind of mechanism to skip already installed packages.
For Ubuntu, this is built in: running apt-get install nodejs with Node.js already installed will not reinstall it; it will skip the target (unless there is a newer version available).
For Arch Linux, you can run pacman -Sy node --needed to skip already installed packages.
A platform-independent mechanism would be to check if the executable (or any other known file for that package) exists. In Bash, you can do:
which node > /dev/null && echo "Yup, this is installed"
(the > /dev/null part suppresses which's output - it prints the path where the found executable resides; we do not care about that, we only want to know whether it is installed)
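The same check can be turned directly into an install-if-missing step; the package name below assumes Ubuntu's nodejs package and that the executable is called node:

which node > /dev/null || sudo apt-get -y install nodejs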
If you want to avoid writing custom Bash scripts for such basic checks I can recommend that you configure your boxes with tools dedicated for exactly what you are trying to achieve. The usual suspects here are:
Ansible
Puppet
Chef
CFEngine
All of these are supported by Vagrant, so integrating them should not be a problem. You can find detailed guides on integrating them into your existing Vagrant recipe in the Vagrant documentation on provisioning.
PS. For a simple example you can check out my Ansible provisioning recipe for a Banana Pi machine running Arch Linux (note: it does not really follow best practices, but it might be a good starting point). There are many examples available online; check them out, too.
I installed Rmpi on my Linux machine and it successfully loads in R. There are two versions of MPICH on my machine, and I believe I have installed Rmpi with the latest version. I also had to update my LD_LIBRARY_PATH. I primarily followed the installation instructions here.
After loading Rmpi in R, I run mpi.spawn.Rslaves(nslaves=4) and get the following error message:
Error in mpi.spawn.Rslaves(nslaves = 2) :
You cannot use MPI_Comm_spawn API
Does anyone know how I can get Rmpi working?
Thanks!
You need to use MPICH2 for spawn support. If you have MPICH2 installed, you may still need to specify --with-Rmpi-type=MPICH2 when installing Rmpi. If you used --with-Rmpi-type=MPICH instead, it would disable functions such as mpi.spawn.Rslaves.
Also note that MPICH2 apparently does not support spawning workers unless the program is launched using a command such as mpiexec. This basically means that you can't execute mpi.spawn.Rslaves from an interactive R session using MPICH2, although this is possible using Open MPI. To be clear, this is not the issue that you're reporting, but you may encounter this after you have correctly installed Rmpi using MPICH2.
I was able to install Rmpi 0.6-5 using MPICH 3.1.3 with the command:
$ R CMD INSTALL Rmpi_0.6-5.tar.gz --configure-args='--with-mpi=$HOME/mpich-install --with-Rmpi-type=MPICH2'
To debug a configuration problem, you should install Rmpi from a directory rather than a tar file. That will allow you to examine the "config.log" file afterwards which will provide important information. Here is how I did that on my Linux box:
$ tar xzvf Rmpi_0.6-5.tar.gz
$ R CMD INSTALL Rmpi --configure-args='--with-mpi=$HOME/mpich-install --with-Rmpi-type=MPICH2'
In order to get spawn support, the MPI2 macro needs to be defined when compiling the C code in Rmpi. You can check if that is happening by searching for "PKG_CPPFLAGS" in config.log:
$ grep PKG_CPPFLAGS Rmpi/config.log
PKG_CPPFLAGS='-I/home/steve/mpich-install/include -DMPI2 -DMPICH2'
I have found "config.log" to be very useful for debugging configuration and build problems.
Note that you can use Rmpi without spawn support. You'll need to start all of the workers using mpirun (or mpiexec, etc) and it will be much more difficult, if not impossible, to use functions such as mpi.apply, mpi.applyLB, etc. But if you just need to initialize MPI so you can use MPI from functions implemented in C or Fortran, you will probably need to start all of the workers via mpirun.
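For completeness, a non-spawn launch looks roughly like this; the script name is made up, and the exact invocation depends on how your script initializes Rmpi:

mpirun -np 4 R --no-save -q -f my_mpi_script.R    # each of the 4 R processes loads Rmpi itself and determines its rank with mpi.comm.rank()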
I am trying to install mpich-3.1 on a Linux cluster (Ubuntu 12.04 running on all machines). Previously I installed mpich2 with sudo apt-get install mpich2 but couldn't find out how to run tests. Then I removed it with sudo apt-get remove mpich2.
So I decided to upgrade to 3.1. I downloaded and installed mpich following instructions at https://www.mpich.org/static/downloads/3.1/mpich-3.1-installguide.pdf by running:
sudo ./configure -prefix=/usr/local/mpich/
sudo make
sudo make install
and apparently it is properly installed. If I run
meteo#ventus:~/RAMS/RUN$ /usr/local/mpich/bin/mpiexec -f machinefile -n 20 hostname
ventus
ventus
ventus
ventus
ventus4
ventus4
ventus4
ventus4
ventus5
ventus5
ventus5
ventus5
ventus2
ventus2
ventus2
ventus2
ventus3
ventus3
ventus3
ventus3
I get the expected output, although I find it responds "slowly". The machinefile is:
ventus:4
ventus2:4
ventus3:4
ventus4:4
ventus5:4
The directory is exported to all nodes in the cluster; in /etc/exports:
/usr/local/mpich 192.168.1.0/24(rw,sync)
In /etc/mtab and /etc/fstab on node ventus4:
ventus:/usr/local/mpich /usr/local/mpich nfs rw,vers=4,addr=192.168.1.1,clientaddr=192.168.1.4 0 0
ventus:/usr/local/mpich /usr/local/mpich nfs
Maybe the problem comes from a prior install that was not completely removed:
meteo#ventus:~$ which mpiexec
/usr/local/bin/mpiexec
meteo#ventus:~$ which mpirun
/usr/local/bin/mpirun
meteo#ventus:~$ which mpicc
/usr/local/bin/mpicc
According to the installation instructions, which mpiexec should point to the MPICH installation bin directory, /usr/local/mpich/bin/mpiexec.
But if I move /usr/local/bin/mpiexec to /usr/local/bin/mpiexec.old
then
meteo#ventus:~$ which mpiexec
/usr/local/mpich/bin/mpiexec
points to my new mpich3 install directory. Could this be the reason for the slow performance? Which tests should I run for benchmarking? And how do I completely remove mpich2?
If you do in fact have root access to your entire machine, then you can always just delete all of the binaries, libraries, headers, etc. I'm not sure where everything is installed on your system (it's different everywhere), but the usual locations are /usr/local/bin, /usr/local/include, /usr/local/lib, etc. You should look for these files (or things that look similar):
bin/:
Anything that starts with mpi
include/:
Anything that starts with mpi
Anything that starts with opa
lib/:
Anything that includes mpich
Anything that includes mpl
Anything that includes opa
Beyond that, there's not much that would interfere (there are man pages somewhere too, but those are fine). If you delete all of those files, you should have gotten rid of your MPICH2 installation. This really should have been cleaned up when you did your apt-get remove, but that's neither here nor there...
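As a cautious way to see what's left before deleting anything, you could list the usual suspects first; the paths below assume a default /usr/local prefix:

ls -l /usr/local/bin/mpi* 2>/dev/null                                    # mpiexec, mpirun, mpicc, ...
ls -l /usr/local/include/mpi* /usr/local/include/opa* 2>/dev/null        # headers
ls -l /usr/local/lib/*mpich* /usr/local/lib/*mpl* /usr/local/lib/*opa* 2>/dev/null    # libraries

Only remove them once you're sure they belong to the old MPICH2 install and not to the new one under /usr/local/mpich.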
Now, to test your new MPICH installation (the project is called MPICH now, not MPICH3), there are lots of MPI benchmarks. I'd suggest typing mpi benchmarks into your favorite search engine and trying out a few of the ones that you find. If you want to compare, you can install a few different versions of MPI. When you do this, make sure you are correctly setting up your PATH and LD_LIBRARY_PATH environment variables so your installations can sit side by side.
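For example, to make your shell pick up the new install under /usr/local/mpich (the prefix from your configure step), something along these lines should work, e.g. in your ~/.bashrc:

export PATH=/usr/local/mpich/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/mpich/lib:$LD_LIBRARY_PATH
which mpiexec    # should now report /usr/local/mpich/bin/mpiexec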