I'm working on making an Arch Linux ISO for a custom Linux system. I read about the Calamares installer, which seems very useful for building your own installer. I have prepared my ISO and my Calamares files, but I don't understand how they fit together. I tried several searches but couldn't find an answer.
For example, assuming my ISO files are ready under the /home/user/archiso/airootfs directory (airootfs will be the root of my future system), I started by compiling and installing Calamares, following several docs; in my case:
$ git clone https://github.com/calamares/calamares.git
$ mkdir -p /home/user/calamares/build
$ cd calamares/build
$ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr -DWITH_PYTHONQT=ON -DAppStreamQt_DIR=... ..
$ make
$ sudo make install
and I get the Calamares modules in calamares/build/src/modules and the branding in calamares/build/src/branding, and I customized these files according to my needs.
Now that Calamares and my custom ISO are ready, how, in general, is the airootfs tied to the Calamares installer, so that when I boot the ISO on a new machine the Calamares installer starts automatically at login?
I tried editing the build/src/modules/unpackfs/unpackfs.conf file by adding:
unpack:
  - source: /home/user/archiso_build/iso/arch/x86_64/airootfs.sfs
    sourcefs: ext4
but I didn't get the expected behavior: when I build the ISO with mkarchiso and boot it in a new VM,
the Calamares installer does not start automatically (maybe there is a specific command that I should put somewhere);
when I start Calamares manually, I get errors in the GUI: users#users module not found, bootloader#bootloader module not found, and so on.
How do these two pieces communicate with each other when the ISO is created by mkarchiso?
Source: https://github.com/calamares/calamares/wiki/Develop-Guide
Thanks
Some background: I'm installing Node on a host server, but it's a grid server, not a server that's solely for my website.
The grid server doesn't have a root user or administrative powers, so to install Node I found this workaround: http://iantearle.com/blog/media-temple-grid-and-nodejs . It's a Linux grid server; I've never used Linux, so could someone explain to me what the commands mean, especially: ./configure --prefix=~/opt/
Lastly, I followed the steps, but when I try to run the node command on the server it says node: command not found, which is why I'm trying to understand the steps. Thanks
To explain the process:
Configure
The configure script is responsible for getting ready to build the software on your specific system. It makes sure all of the dependencies for the rest of the build and install process are available, and finds out whatever it needs to know to use those dependencies.
Unix programs are often written in C, so we’ll usually need a C compiler to build them. In these cases the configure script will establish that your system does indeed have a C compiler, and find out what it’s called and where to find it.
Make
Once configure has done its job, we can invoke make to build the software. This runs a series of tasks defined in a Makefile to build the finished program from its source code.
The tarball you download usually doesn’t include a finished Makefile. Instead it comes with a template called Makefile.in and the configure script produces a customised Makefile specific to your system.
Make Install
Now that the software is built and ready to run, the files can be copied to their final destinations. The make install command will copy the built program, and its libraries and documentation, to the correct locations.
--prefix=~/opt/ -> sets the installation prefix to /home/yourhome/opt, so make install copies the files there instead of the default /usr/local (which you can't write to without root access).
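Put together, the three steps look roughly like this (just a sketch; the ~/src/node location is an assumed example, and the prefix matches the guide you followed):
cd ~/src/node                      # wherever the Node source was cloned/extracted (assumed path)
./configure --prefix=$HOME/opt     # install under ~/opt instead of the default /usr/local
make                               # compile using the generated Makefile
make install                       # copy the built program, libraries and docs into ~/opt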
Now, if you didn't get errors while doing those 3 steps explained above, make sure you did the following:
nano ~/.bash_profile
export PATH=~/opt/bin:${PATH}
nano is a text editor, and you are opening the .bash_profile file with it.
You need to add export PATH=~/opt/bin:${PATH} to that file and save it using Ctrl+X (nano will prompt you to save on exit).
Then restart your terminal.
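To check that the shell can now find it (assuming the install went into ~/opt as above):
which node          # should print something like /home/yourhome/opt/bin/node
node --version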
The GitHub repository for Node.js specified in that guide is outdated; use the following instead:
git clone https://github.com/nodejs/node.git
P.S. node: command not found usually happens when the program is not installed correctly or its executable isn't in your terminal's PATH variable.
UPDATE – Old question title:
Docker - How to execute unzipped/unpacked/extracted binary files during docker build (add files to docker build context)
--
I've been trying (for half a day :P) to execute a binary extracted during a docker build.
My Dockerfile contains roughly:
...
COPY setup /tmp/setup
RUN \
unzip -q /tmp/setup/x/y.zip -d /tmp/setup/a/b
...
Within directory b there is a binary file, imcl.
The error I was getting:
/bin/sh: 1: /tmp/setup/a/b/imcl: not found
What was confusing was that listing directory b (inside the Dockerfile, during the build) before trying to execute the binary showed the correct file in place:
RUN ls -la /tmp/setup/a/b/imcl
-rwxr-xr-x 1 root root 63050 Aug 9 2012 imcl
RUN file /tmp/setup/a/b/imcl
ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
Being a Unix noob, at first I thought it was a permission issue (the root of the host being different from the root of the container or something), but after checking, the UID was 0 for both, so it got even weirder.
Docker asks you not to use sudo, so I tried su combinations:
su - -c "/tmp/setup/a/b/imcl"
su - root -c "/tmp/setup/a/b/imcl"
Both of these returned:
stdin: is not a tty
-su: /tmp/setup/a/b: No such file or directory
Well heck, I even went and defied Docker recommendations and changed my base image from debian:jessie to the bloatish ubuntu:14.04 so I could try with sudo :D
Guess how that turned out?
sudo: unable to execute /tmp/setup/a/b/imcl: No such file or directory
Randomly googling, I happened upon a piece of the Docker docs which I believe is the reason for all this head bashing:
"Note: docker build will return a no such file or directory error if the file or directory does not exist in the uploaded context. This may happen if there is no context, or if you specify a file that is elsewhere on the Host system. The context is limited to the current directory (and its children) for security reasons, and to ensure repeatable builds on remote Docker hosts. This is also the reason why ADD ../file will not work."
So my question is:
Is there a workaround to this?
Is there a way to add extracted files to the docker build context during a build (within the Dockerfile)?
Oh, and the machine I'm building this on is not connected to the internet...
I guess what I'm asking is similar to this (though I see no answer):
How to include files outside of Docker's build context?
So am I out of luck?
Do I need to unzip with a shell script before sending the build context to the Docker daemon, so that all files are used exactly as they were during the build command?
UPDATE:
Meh, the build context actually wasn't the problem. I tested this and was able to execute unpacked binary files during docker build.
My problem is actually this one:
CentOS 64 bit bad ELF interpreter
Using debian:jessie and ubuntu:14.04 as base images only gave the No such file or directory error, but trying centos:7 and fedora:23 gave a better error message:
/bin/sh: /tmp/setup/a/b/imcl: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
So that led me to the conclusion that this is actually the problem of running a 32-bit application on a 64-bit system.
Now the solution would be simple if I had internet access and repos enabled:
apt-get install ia32-libs
Or
yum install glibc.i686
However, I don't... :[
So the question becomes now:
What would be the best way to achieve the same result without repos or an internet connection?
According to IBM, the precise libraries I need are gtk2.i686 and libXtst.i686, and possibly libstdc++:
[root@localhost]# yum install gtk2.i686
[root@localhost]# yum install libXtst.i686
[root@localhost]# yum install compat-libstdc++
UPDATE:
So the question becomes now:
What would be the best way to achieve the same result without repos or an internet connection?
You could use various non-official 32-bit images available on DockerHub, search for debian32, ubuntu32, fedora32, etc.
If you can't trust them, you can build such an image yourself, and you can find instructions on DockerHub too, e.g.:
on f69m/ubuntu32 home page, there is a link to GitHub repo used to generate images;
on hugodby/fedora32 home page, there is an example of commands used to build the image;
and so on.
Alternatively, you can prepare your own image based on some official image and add 32-bit packages to it.
Say, you can use a Dockerfile like this:
FROM debian:wheezy
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y ia32-libs
...and use produced image as a base (with FROM directive) for images you're building without internet access.
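For example, a later Dockerfile built on the offline machine could simply start from that prepared image (the image name here is just a placeholder):
FROM my32bit-base:latest
# ...the rest of your original Dockerfile (COPY setup, unzip, run imcl, etc.)...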
You can even create an automated build on DockerHub that will rebuild your image automatically when your Dockerfile (posted, say, on GitHub) or mainline image (debian in the example above) changes.
No matter how you obtained an image with 32-bit support (whether you used an existing non-official image or built your own), you can then store it in a tar archive using the docker save command and later import it with the docker load command.
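For example (the image name is just a placeholder):
docker save my32bit-base:latest -o my32bit-base.tar    # on the machine with internet access
# copy the tar over (USB stick, scp, ...), then on the offline build machine:
docker load -i my32bit-base.tar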
You're in luck! You can do this using the ADD command. The docs say:
If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory... When a directory is copied or unpacked, it has the same behavior as tar -x: the result is the union of:
1. Whatever existed at the destination path and
2. The contents of the source tree, with conflicts resolved in favor of “2.” on a file-by-file basis.
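A minimal sketch of that approach; note that ADD only auto-extracts tar archives (optionally gzip/bzip2/xz compressed), so this assumes the setup archive is repackaged as a tarball rather than the original zip:
ADD setup/x/y.tar.gz /tmp/setup/a/b/
RUN /tmp/setup/a/b/imcl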
Consider these 4 pieces of software:
COBRA 2.05
LibSBML 5.10
MATLAB R2013a (Also known as 8.1, 64-bit; MATLAB no longer supports 32-bit Linux anyway)
A 64-bit Linux OS (such as Ubuntu 14.04 or the latest Mint but not restricted to them)
Intro
The COBRA toolbox is an optimization suite that runs on top of MATLAB, aimed at the development of MATLAB code for metabolic network modelling. Such a "network" is a system of equations that can have a very large number of equations and variables (thousands, say). Therefore, routines to read and write those large models according to some format specification are a must-have, and COBRA uses the SBML standard for that.
Problem
Unlike the Windows versions, the Linux binary packages do not integrate well out of the box: to begin with, the pre-compiled Linux binary of libSBML (open source) available for download does not come with MATLAB support. If one tries to use the pre-compiled libSBML, COBRA won't find the "MATLAB bindings" and therefore won't be able to, for example, read and write SBML XML files from disk in an m-script.
The question
What needs to be done to make COBRA 2.05, running on top of MATLAB R2013a under Linux (Ubuntu 14.04 or the latest Mint, but this is not likely to be distro-specific), able to read and write SBML XML files? In other words, what needs to be done system-wide to make COBRA pass its own testSBML test?
This is how I got it working, and what I learned from all the hassle about how my Linux box works. I hope I am not forgetting, missing, or mistaking anything.
1. MATLAB
1.1. Install MATLAB
Install it in its default location (you will need root access for this), don't be stubborn like I tried to be. Why:
1.1.1. Integration
It is very likely you will want to install some other software that uses the MATLAB framework at some point, and in the real world software doesn't always find other software, even if you know how to tell it where to look.
1.1.2. Make your life easier
Even though it seems like a good idea to install a big piece of software in a place where you have lots of available space and that you can use from multiple machines (especially in Linux, which doesn't have that abomination called the Registry, and has symbolic links), that would only perhaps be a good idea, apart from item 1.1.1, if that place is a partition with a Linux filesystem: at some point some executable or script will need execution permission, and mounting an entire partition with execution permission for all its files is rather unsafe. Therefore, do not put MATLAB in an NTFS partition of an external HD; perhaps creating a Linux partition on the external HD just for Linux-specific stuff could work, but how much hassle is that?
1.2. Setup MATLAB so people and other software can launch it
Even though I have seen somewhere that the MATLAB installer eventually shows an option to create symbolic links in the system path for convenience, it didn't in my case. But that is OK, since I would have had to replace the symbolic link /usr/local/bin/matlab with the following shell script (same path, same filename) anyway:
#!/bin/sh
export LD_LIBRARY_PATH=/usr/local/lib:/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
exec /usr/local/MATLAB/R2013a/bin/matlab "$@"   # "$@" preserves quoted arguments ($* would split them)
OBS: That LD_LIBRARY_PATH is needed for MATLAB to find SBML bindings later and to be able to use them. Also, if you install some third-party solver such as TOMLAB, you will most likely need to add some more paths in this little launcher script.
OBS 2: In my case, the installation script didn't automatically create any launchers or shortcuts, but I found an icon-less and extension-less Matlab 8.01 file and a matlab icon as a PNG file; that first file was a template .desktop file that I could edit to fit my needs and put in /usr/share/applications so the Unity Dash would find it. The contents of this Matlab.desktop file are:
[Desktop Entry]
Type=Application
Icon=/usr/local/MATLAB/R2013a/Matlab.png
Name=Matlab 8.01
Comment="Start Matlab 8.01"
Exec=/usr/local/MATLAB/R2013a/bin/matlab -desktop
Categories=Development;
GenericName=Matlab 8.01
2. libSBML
2.1. Install libSBML
libSBML is provided as a deb package specific to Ubuntu (and for CentOS), and also as versions for several flavours of Windows and Mac OS X (their home page: http://sbml.org/Software/libSBML). Guess which is the only platform whose binaries weren't compiled with MATLAB support? Linux, of course. That means you will need to compile from source (and that the deb package is therefore useless to you). To compile:
2.1.1. Install dependencies
The dependency libxml2-dev (from software manager or from a terminal):
sudo apt-get install libxml2-dev
2.1.2. Save yourself some time in the future
Usually, one would do configure, make and then make install. But the last step is not recommended, for the same reason as installing anything that doesn't come in a nice little package: you will lose control of which files went where, and will need to keep the source code around to be able to run make uninstall if you ever need to uninstall it. So, install checkinstall and use it to replace the make install step, since checkinstall creates a package for your system that can later be uninstalled or reinstalled just like any regular packaged software (from the software manager or from a terminal):
sudo apt-get install checkinstall
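The payoff comes later: removing the library is then a normal package operation, e.g. (the package name is whatever you enter in checkinstall, assumed here to be libsbml):
sudo dpkg -r libsbml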
2.1.3. Configure the compiling-process
Get the libSBML source code and extract it to some folder. From a terminal, navigate to that folder and configure the compilation:
./configure --with-matlab
OBS: with the with-matlab flag, the configure script will fail if it cannot find an executable whose filename is matlab. If it fails, it reports that the matlab file could not be found, but the test it performs actually checks both that the file exists and that it is executable. So, if the file is in an NTFS partition, configure will fail even if it finds the file, yet still tell you the file couldn't be found. You can force it to look for the executable in /path/to/matlab/root by passing that path (it will look for a bin folder inside that path, and for the executable inside that bin folder):
./configure --with-matlab=/path/to/matlab/root
OBS: This will install libSBML in the default location, /usr/local/lib. Again, it is a good idea to just let it install in its default location, but if you need to change it, you can pass the path with the flag --prefix=/your/installation/path
OBS 2: You might ask why libSBML needs to find and execute matlab to be compiled with support for it: it needs to fire up MATLAB later to build MEX-files (compiled MATLAB code), so I would speculate you wouldn't be able to install libSBML after all if your MATLAB has no compiler to generate MEX-files.
2.1.4. Build and install libSBML
make
checkinstall
VERY IMPORTANT OBS:
I) checkinstall asks for confirmation of the metadata of the package it is about to create. In my case, the string for the version field defaulted to "Source" (without the quotes), which caused checkinstall to fail because dpkg (the system tool that actually builds the deb file) complained that the version number must start with, well, a number. So, save yourself some time and make sure the string in the version field starts with a number (e.g. "5.10", without the quotes, obviously).
II) checkinstall asks if you want to exclude from the future package any files that the make install command would put in your home folder, and says it is a good idea to exclude them. libSBML has a test.xml file that needs to be in the $HOME folder later, or else it won't let you integrate with MATLAB. And even though it tells you a test.xml is missing, it doesn't tell you where that file should be or whether it was something that came with the library. I only noticed it because checkinstall had found a $HOME/test.xml reference earlier (which I excluded from the package, of course) and I had found that odd. So, save yourself some time: exclude $HOME/test.xml from the package generated by checkinstall, then search for test.xml inside the source-code folder and copy it to $HOME as soon as libSBML finishes being installed by checkinstall.
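A small sketch of that last step (the exact location of test.xml inside the source tree is an assumption here, so locate it first):
find . -name test.xml                          # run from the libSBML source folder
cp ./src/bindings/matlab/test.xml "$HOME/"     # path is an assumption; use whatever find actually printed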
2.2. Integrate libSBML to MATLAB
Fire up MATLAB, navigate to where the SBML MATLAB bindings were installed in step 2.1.4 (in my case, /usr/local/lib) and run the installSBML.m file that should be there.
2.2.1. Shared libraries problems
In my case, I had an error due to an old unresolved issue: libstdc++.so.6 not having a reference to GLIBCXX_3.4.15. It turned out that MATLAB was trying to use the libstdc++.so.6.0.13 (libstdc++.so.6 was a symbolic link pointing to this file) that came with it in /usr/local/MATLAB/R2013a/sys/os/glnxa64, which indeed didn't have that reference (one could verify that by issuing:
strings /usr/local/MATLAB/R2013a/sys/os/glnxa64/libstdc++.so.6.0.13 | grep GLIBC
). My system has a libstdc++.so.6.0.19 located in /usr/lib/x86_64-linux-gnu that does have that reference, so I forced MATLAB to use the 6.0.19 one by setting LD_LIBRARY_PATH properly (refer to step 1.2) and also by renaming the libstdc++.so.6 that came with MATLAB to something else, so that MATLAB would not find it and would keep looking until it found my system's. A friend of mine running Linux Mint didn't need to rename anything: for him, setting LD_LIBRARY_PATH was enough.
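For reference, the rename part of that workaround amounts to something like this (paths are from this particular R2013a install; keep a backup rather than deleting the file):
cd /usr/local/MATLAB/R2013a/sys/os/glnxa64
sudo mv libstdc++.so.6 libstdc++.so.6.bak    # hide MATLAB's bundled copy
# with LD_LIBRARY_PATH set in the launcher script, MATLAB then falls back to the system libstdc++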
2.2.2. Other problems
installSBML.m will fail if it doesn't find the $HOME/test.xml file mentioned in step 2.1.4, regardless of whether the library actually works. It assumes that if it could not test itself using a file that it expects to be in $HOME, the user shouldn't have the option to install it anyway.
3. COBRA / SBML toolbox
3.1. Setup COBRA
In MATLAB, navigate to <YOUR_COBRA_ROOT_FOLDER_HERE>/external/toolboxes/SBMLToolbox-4.1.0/toolbox and run the file install.m there. You should have everything set so it finds the MATLAB bindings you set up in step 2.2.
3.2. MATLAB setpaths problems
I had to manually edit the file /usr/local/MATLAB/R2013a/toolbox/local/pathdef.m as root to include the folder /usr/local/lib (where libSBML and its MATLAB bindings are) in order to make that setting persistent. Every time I restarted MATLAB, its search path had reverted to the default, no matter whether I had started MATLAB as root when setting the path via the MATLAB GUI.
3.3. Test
Now you have hopefully connected all the dots. Try it: in MATLAB, navigate to <YOUR_COBRA_ROOT_FOLDER_HERE> and issue:
initCobraToolbox
testAll
If you haven't got any third-party solvers installed and configured, it should pass 14 of the 19 tests, including the SBML test (testSBML). Now you can load SBML files into MATLAB and play with them.
I also needed to add a symbolic link from /usr/local/lib/libsbml.so.5 to the MATLAB sys folder by:
sudo ln -s /usr/local/lib/libsbml.so.5 /usr/local/MATLAB/R2014a/sys/os/glnxa64/
This finally made the installation possible.
I installed using CMake. To do this, it is necessary to find FindMatlab.cmake in the source package and insert the MATLAB path manually:
.............
elseif(EXISTS "/Applications/MATLAB_R2008a.app/")
set(MATLAB_ROOT_PATH "/Applications/MATLAB_R2008a.app/")
endif()
else()
if (EXISTS "/usr/local/MATLAB/R2014a/")
set(MATLAB_ROOT_PATH "/usr/local/MATLAB/R2014a/")
endif()
endif()
..........
FYI, to resolve the shared library issue at point 2.2.1, I needed to install the package matlab-support (from the Ubuntu repositories).
I created a binary package with this command:
dpkg-deb --build -z9 -Zlzma $(DEB_SRC_DIR) $(DEB_DEST_DIR)
and install it on my Ubuntu 12.04 with this command:
sudo dpkg -i /path/to/package
The contents of the package I think are irrelevant.
Despite the sudo command the files in the installation directory belong to the current user and not to root as I expected.
How can I fix that?
Try running the dpkg-deb command with fakeroot:
fakeroot dpkg-deb ...
(This will only help if the files in the source directory already have the correct ownership, which they probably don't. The problem you're actually trying to solve here is to create an archive whose files belong to user root, and that is where fakeroot theoretically helps.)
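A sketch of what that looks like (directory names are placeholders; inside the fakeroot session the chown appears to succeed, so dpkg-deb records root ownership in the archive):
fakeroot sh -c '
  chown -R root:root ./pkgroot
  dpkg-deb --build -z9 -Zlzma ./pkgroot ./out
'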
Let me say, though, that what you are doing is not the best way to create a binary package (far from it).
Instead, create a debian/ directory with dh_make (from the dh-make package), and edit the control file and changelog accordingly. You also need a file debian/install that lists what files you are installing and where they should go. There are various guides on the net (and on Stack Overflow) that explain this process. For example, look at the Debian New Maintainers' Guide.
You can then use dpkg-buildpackage to create a real, standard-conforming Debian package with your files in a reproducible way.
dpkg-deb is a low-level tool for manipulating existing deb files; it's not meant to be used for package creation.
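Roughly, the recommended route looks like this (package name and options are placeholders; see the guides above for the details):
cd mypackage-1.0
dh_make --single --native            # creates the debian/ skeleton
# edit debian/control, debian/changelog, debian/install ...
dpkg-buildpackage -us -uc            # builds a proper .deb in the parent directory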
Background:
I recently joined a software development company as an intern and am getting used to a new build system. The software is for an embedded system, and let's just say that all building and compiling is done on a buildbox. The build makes use of code generation from XML files, and then uses makefiles, spec files, and version files as well.
We develop on our own computers (Linux, Mandriva distro) and build using the following steps:
ssh buildserver
use mount to mount a drive from the personal computer on the buildserver
set the environment using . ./set_env (may not be exactly that)
cd app_dir/obj (where makefile is)
make spec_clean
make spec_all
make clean
make
The Question:
I am a newbie to Code::Blocks and Linux, and was wondering how to set up a project file so that it simply runs a script file executing these commands, instead of actually invoking the build process on my own computer. Sort of like a pre-build script. I want to bind the execution of this script to Ctrl-F9 (build) and capture any output from the above commands in the build log window.
In other words, there is no build configuration or target that the project needs to worry about. I don't even need a compiler installed on my computer! I wish to set this up so that I can have the full features of an IDE.
Appreciate any suggestions!
Put your commands in a shell script file, e.g.:
#!/bin/sh
mount ... /mnt/path/buildserver
. ./set_env
cd app_dir/obj
make spec_clean
make spec_all
make clean
make
Say you name it /path/to/my_build_script; then chmod 755 /path/to/my_build_script and invoke the following from your SSH client machine:
script -c 'ssh buildserver /path/to/my_build_script'
When it finishes, check the typescript file in the current directory for the captured output.
HTH