I'm having trouble installing a library called librsync on a standard Amazon Linux instance.
I tried this:
yum install librsync-devel
but I got "No package librsync available" (fair enough, I guess!)
I also followed the install instructions, which says:
To build and test librsync from the extracted distribution do:
$ ./configure
$ make all check
I'm no Linux expert; I extracted the library files and ran these commands:
[ec2-user@ip-**-***-**-*** librsync]$ ./configure
-bash: ./configure: Permission denied
[ec2-user@ip-**-***-**-*** librsync]$ sudo ./configure
sudo: ./configure: command not found
[ec2-user@ip-**-***-**-*** librsync]$ sudo configure
sudo: configure: command not found
I changed the permissions of the configure file and ran the ./configure command again. I got a long list of "yes" (full log here) and then this:
checking whether g++ accepts -g... no
checking dependency style of g++... none
checking how to run the C++ preprocessor... /lib/cpp
configure: error: C++ preprocessor "/lib/cpp" fails sanity check
I'm totally lost. Any idea how to install this librsync library on an EC2 Linux instance?
From the error, it looks like your configure script is not set to be executable. You can check with ls -l configure. You should see a line that starts with something like -rwxr-xr-x. If not, you can run chmod +x configure to add executable permission to it.
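Concretely, that check-and-fix might look like this:
ls -l configure          # look for the execute bits, e.g. -rwxr-xr-x
chmod +x configure       # add them if they are missing
./configure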
If the permissions on that file are not right, it would be good to check the rest of the files in the distribution. How did you get the file? Did you download the tarball from SourceForge? Download the ZIP from GitHub? Check it out from GitHub? And how did you extract it? If you could add those details to your question, along with the full output of ls -l, that might help us figure out what happened.
edit to add: It looks from your configure log like cpp (the C preprocessor) is looking for cc1plus, which is part of g++. You can install that with yum install gcc-c++ (remember to run as root or with sudo).
Also, regarding your comment, I would recommend copying the .tar.gz file directly to the Linux machine and extracting it there with tar xvzf myfile.tar.gz, rather than extracting it on a Windows machine and uploading the result. There are enough filesystem differences (how permission bits work, case sensitivity) that extracting files on Windows and uploading them with something like WinSCP can cause exactly this kind of problem.
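Putting the pieces together, a rough end-to-end sketch might look like this (the tarball name and instance address are placeholders for whatever you actually have):
sudo yum install gcc gcc-c++ make                         # run on the instance; gcc-c++ provides the missing cc1plus
scp librsync-x.y.z.tar.gz ec2-user@<your-instance>:~/     # run from your local machine
tar xvzf librsync-x.y.z.tar.gz                            # on the instance; preserves the permission bits
cd librsync-x.y.z
./configure
make all check
sudo make install                                         # assuming you want it installed system-wide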
Related
I want to create a shell script that uses the Expect library; however, Expect is not installed on any of the systems where I want to run the script, and I cannot install it on those systems either. Can I build Expect from source and then put it in the same directory as the script? How would you go about this?
Yes you can, and it's not difficult.
Download Expect sources from https://sourceforge.net/projects/expect/files/latest/download?source=files
Unpack sources
gunzip expect.tar.gz
tar -xvf expect.tar
This will create a directory containing the Expect distribution. Change to that directory and configure the sources for compilation:
./configure --prefix=~/
With the --prefix parameter you specify where Expect should be installed. ~/ in my example is a shortcut for the current user's home directory, so Expect will be installed locally for your user only and you don't need root privileges to install it. If you have root privileges and want to install it system-wide, you can omit the --prefix parameter.
Compile Expect:
make
And install it:
make install
And that's it :)
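As a quick usage sketch (assuming the --prefix=~/ install above put the binary in ~/bin, and myscript.exp is a hypothetical script name):
export PATH="$HOME/bin:$PATH"    # make the locally installed expect visible to your shell
expect -v                        # should report the version you just built
expect -f myscript.exp           # run your script with it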
UPDATE – Old question title:
Docker - How to execute unzipped/unpacked/extracted binary files during docker build (add files to docker build context)
--
I've been trying (half a day :P) to execute a binary extracted during docker build.
My Dockerfile contains roughly:
...
COPY setup /tmp/setup
RUN \
unzip -q /tmp/setup/x/y.zip -d /tmp/setup/a/b
...
Within directory b is a binary file imcl
The error I got was:
/bin/sh: 1: /tmp/setup/a/b/imcl: not found
What was confusing was that listing directory b (inside the Dockerfile, during the build) before trying to execute the binary showed the correct file in place:
RUN ls -la /tmp/setup/a/b/imcl
-rwxr-xr-x 1 root root 63050 Aug 9 2012 imcl
RUN file /tmp/setup/a/b/imcl
ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
Being a Unix noob, at first I thought it was a permission issue (root on the host being different from root in the container or something), but after checking, the UID was 0 for both, so it got even weirder.
Docker asks not to use sudo so I tried with su combinations:
su - -c "/tmp/setup/a/b/imcl"
su - root -c "/tmp/setup/a/b/imcl"
Both of these returned:
stdin: is not a tty
-su: /tmp/setup/a/b: No such file or directory
Well heck, I even went and defied Docker recommendations and changed my base image from debian:jessie to the bloatish ubuntu:14.04 so I could try with sudo :D
Guess how that turned out?
sudo: unable to execute /tmp/setup/a/b/imcl: No such file or directory
Randomly googling, I happened upon a piece of the Docker docs which I believe is the reason for all this head bashing:
"Note: docker build will return a no such file or directory error if the file or directory does not exist in the uploaded context. This may happen if there is no context, or if you specify a file that is elsewhere on the Host system. The context is limited to the current directory (and its children) for security reasons, and to ensure repeatable builds on remote Docker hosts. This is also the reason why ADD ../file will not work."
So my question is:
Is there a workaround to this?
Is there a way to add extracted files to docker build context during a build (within the dockerfile)?
Oh, and the machine I'm building this on is not connected to the internet...
I guess what I'm asking is similar to this (though I see no answer):
How to include files outside of Docker's build context?
So am I out of luck?
Do I need to unzip with a shell script before sending the build context to the Docker daemon, so that all files are already in place when the build command runs?
UPDATE:
Meh, the build context actually wasn't the problem. I tested this and was able to execute unpacked binary files during docker build.
My problem is actually this one:
CentOS 64 bit bad ELF interpreter
Using debian:jessie and ubuntu:14.04 as base images only gave the No such file or directory error, but trying with centos:7 and fedora:23 gave a better error message:
/bin/sh: /tmp/setup/a/b/imcl: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
So that led me to the conclusion that this is actually the problem of running a 32-bit application on a 64-bit system.
Now the solution would be simple if I had internet access and repos enabled:
apt-get install ia32-libs
Or
yum install glibc.i686
However, I don't... :[
So the question becomes now:
What would be the best way to achieve the same result without repos or an internet connection?
According to IBM, the precise libraries I need are gtk2.i686, libXtst.i686, and possibly libstdc++:
[root#localhost]# yum install gtk2.i686
[root#localhost]# yum install libXtst.i686
[root#localhost]# yum install compat-libstdc++
UPDATE:
So the question becomes now:
What would be the best way to achieve the same result without repos or an internet connection?
You could use various non-official 32-bit images available on DockerHub, search for debian32, ubuntu32, fedora32, etc.
If you can't trust them, you can build such an image yourself, and you can find instructions on DockerHub too, e.g.:
on f69m/ubuntu32 home page, there is a link to GitHub repo used to generate images;
on hugodby/fedora32 home page, there is an example of commands used to build the image;
and so on.
Alternatively, you can prepare your own image based on some official image and add 32-bit packages to it.
Say, you can use a Dockerfile like this:
FROM debian:wheezy
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y ia32-libs
...and use the produced image as a base (with the FROM directive) for the images you're building without internet access.
You can even create an automated build on DockerHub that will rebuild your image automatically when your Dockerfile (posted, say, on GitHub) or mainline image (debian in the example above) changes.
No matter how you obtained an image with 32-bit support (whether you used an existing non-official image or built your own), you can then store it to a tar archive using the docker save command and import it on the offline machine using the docker load command.
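For example (the image and file names here are just placeholders):
docker save -o ubuntu32.tar my-32bit-image    # on the machine with internet access
docker load -i ubuntu32.tar                   # on the offline build machine, after copying the tar over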
You're in luck! You can do this using the ADD command. The docs say:
If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory... When a directory is copied or unpacked, it has the same behavior as tar -x: the result is the union of:
1. Whatever existed at the destination path and
2. The contents of the source tree, with conflicts resolved in favor of “2.” on a file-by-file basis.
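Note that ADD only auto-extracts tar archives (in the compression formats listed above), not .zip files, so as a sketch this assumes you can repackage y.zip as a tar.gz before the build:
# replaces the COPY + RUN unzip pair from the question
ADD setup/x/y.tar.gz /tmp/setup/a/b/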
Archlinux.
I downloaded mtools, which includes mcopy, which is what I'm after. The instructions in the INSTALL file say do this:
# ./configure
# make
These worked fine; now I have a bunch of .o files and, of course, executables.
What do I need to do, so I can just type
# mcopy
and have it run? Since I don't have it "installed" right now, doing that just says
-bash: mcopy: command not found
The usual Linux build sequence is:
./configure
make
make check
sudo make install
make check attempts to validate that the build completed correctly; not all Makefiles have it, but many do. Note that you will need sudo make install to install into the usual system directories if you are not root.
You can determine which of these targets is available for your particular Makefile by running
cat Makefile
and reading the target names on the left of the file.
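For example, a crude but quick way to list the target names is:
grep -E '^[A-Za-z0-9_.-]+:' Makefile    # prints the lines that define targets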
You could create a symbolic link to the application in your /usr/bin folder like
ln -s /fullpath/to/app /usr/bin/aliasnameforapp
Then you can simply call aliasnameforapp from anywhere.
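For your mcopy case that might look like the following (the source path is a placeholder for wherever you extracted and built mtools, and it assumes the build left an mcopy executable there):
sudo ln -s /fullpath/to/mtools/mcopy /usr/bin/mcopy
mcopy    # now resolves from anywhere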
Recently I downloaded the source using the following command:
git clone git://github.com/mapserver/mapcache.git
Inside the downloaded mapcache folder I cannot find a configure file to run "./configure" with. But the installation help file says:
Unix compilation instructions
If you are using a git clone rather than a tarball distribution, you
must first run autoconf in the root directory, to create the configure
file from configure.in:
$ autoconf
For unix users, the compilation process should resume to:
$ ./configure
$ make
(as root)
make install-module
The installation script takes care of putting the built module in the
apache module directory.
To run ./configure there should be a configure file, shouldn't there? Please show me how to generate one so I can get past this problem.
maintainer speaking ...
mapcache and mapserver are switching to cmake for the next release and the docs for the master branch need updating. You can either use the branch-1-0 branch to continue using autoconf builds, or use cmake with master:
$ mkdir build
$ cd build
$ cmake ..
$ make
$ sudo make install
The help file tells you exactly what you need to do:
If you are using a git clone rather than a tarball distribution, you must first run autoconf in the root directory, to create the configure file from configure.in
If you don't already have autoconf installed you'll need to install it in the normal way for your distribution.
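For example, on a Debian/Ubuntu system that might look like this (swap in your distribution's package manager otherwise):
sudo apt-get install autoconf    # or the yum/dnf/pacman equivalent
cd mapcache
autoconf                         # generates ./configure from configure.in in the repository root
./configure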
The repository seems out of sync with the documentation:
there is no configure.in as mentioned in the INSTALL file (not just in the root directory, but nowhere at all);
there is just a Makefile.vc file for MSVC++.
You should contact the maintainer.
I installed the AsteriskNOW 2.0 ISO, so after the system installation I already have a built-in Asterisk within CentOS. I did not need to download the source package and compile the source files. But right now, I have a situation that requires recompiling Asterisk.
I checked the installation tutorial, which says to go to the Asterisk source directory and execute the following commands:
# make clean
# ./configure
# make menuselect
# make install
# make samples
My question: does "the Asterisk source directory" mean the directory that has the Asterisk installation files? In my case, I never downloaded and compiled the Asterisk source. What should I do?
I answered it myself: you need to download the source and execute the following commands:
wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-1.8.17.0.tar.gz
tar xzf asterisk-1.8.17.0.tar.gz
cd asterisk-1.8.17.0
./configure
make
make install