Multiple package manager [closed] - linux

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Are there pitfalls to using multiple package managers? Could I use Red Hat's yum and Debian's aptitude at the same time?
I came across this article and this infographic.
I was inclined to choose Debian, but a quick VM install showed that the kernel has not been upgraded to 3.2 in the stable repo yet. So I am planning to move to Arch Linux, but the infographic rates it low on package availability, and I was wondering if I could install .deb or .rpm files from the Fedora or Ubuntu repositories.

The short answer is, yes you can, but you really shouldn't.
Below is a list of things (in no particular order) to consider when cross-distribution installing:
Dependency hell
The primary reason tools like yum/apt/yast exist in the first place is to avoid what is known as dependency hell. By using packages from another distribution, you forfeit the packaging work that makes them install cleanly.
The secondary package manager will want to satisfy its own dependencies and install a bunch of things that are already installed. This means you have to install packages one piece at a time so that you don't overwrite packages already installed by your primary package manager, and you'll have all kinds of problems.
Do they use the same package manager?
If they do, you may even be able to install it outright, but you'll likely have dependency issues or package conflicts. If they don't, you can extract the package with various tools and just lay the binary files down onto the filesystem (have a look at alien, or this post about extracting .rpm and .deb files).
This will get you the files on the system, but there is no guarantee it'll work out of the box. Some additional manual hunting may be (and usually is) required.
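As a rough sketch of what "just extracting" a package looks like: a .deb is an ar archive wrapping tarballs, and an .rpm can similarly be unpacked with rpm2cpio piped into cpio. The snippet below builds a tiny stand-in .deb first so the extraction step has something to work on; with a real package you'd swap in the actual file (real .debs also carry a control.tar.* member, omitted here for brevity).

```shell
# --- build a tiny stand-in .deb purely for demonstration ---
rm -rf /tmp/debdemo
mkdir -p /tmp/debdemo/usr/bin && cd /tmp/debdemo
echo 'demo' > usr/bin/demo-tool
tar -czf data.tar.gz usr
echo '2.0' > debian-binary
ar rc package.deb debian-binary data.tar.gz

# --- the extraction step: a .deb is just an ar archive wrapping tarballs ---
mkdir extracted && cd extracted
ar x ../package.deb       # pulls out debian-binary and data.tar.gz
tar -xzf data.tar.gz      # the payload files, rooted at ./usr, ./etc, ...
cat usr/bin/demo-tool     # prints: demo
```

For an .rpm the equivalent extraction (if rpm2cpio is installed) is `rpm2cpio package.rpm | cpio -idmv`.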
Are the versions of base packages such as glibc the same or very close?
If so, there is less chance of an issue. The further apart the two distributions' base packages are, the more likely you are to run into missing shared libraries that aren't available in the distribution you're running on, because the version is different and the filename doesn't match what the binary is looking for.
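A quick sanity check before laying a foreign binary down is to run ldd on it and look for unresolved libraries; /bin/ls below is just a stand-in for whatever binary you extracted:

```shell
# any line containing "not found" is a shared library this system cannot supply
ldd /bin/ls | grep 'not found' && echo 'missing libraries' || echo 'all libraries resolved'
```

If a library shows up as "not found", that's exactly the filename-mismatch problem described above.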
Technically you could also extract the base dependencies from the other distribution and put them on the filesystem as well, but that will certainly cause you pain should you ever need to compile things from source. Imagine how confused gcc will be.
Does the package you're installing call for a specific kernel module?
The best way I can articulate this is a common problem I see these days with buying VMs from a web host; you get their own brand of xen or virtuozzo kernel, and iptables doesn't work outright because netfilter is in the kernel and the ABI has changed. It can be quite the headache to get it working again, and this issue isn't limited to iptables. My best advice here is pick the distribution that has the kernel you want in its own base repository.
Compiling from source
No doubt you'll have to do this should you get very deep into wanting packages from other systems. Since the various distros set up their build environments differently, you'll spend half your time figuring out paths and other configuration issues.

Related

Compiling the linux kernel source and not sure if this correct behaviour?

I'm compiling the Linux source code on a different machine from the one where I will be testing it (I'm trying to trace if/where a commit is causing some weird behaviour on my machine), and I therefore want to build packages which I can install on the test machine.
The test machine runs Fedora, so I would use make binrpm-pkg to generate those packages.
The output I get is two rpms -- one for the kernel and one for the headers. All good.
Some things I noticed at this point, however:
The kernel rpm is HUGE -- like 1G+ in size.
The generated kernel-headers package obsoletes Fedora's own kernel-headers (and the rpm generation actually warns of this). Removing this package after installation makes Fedora want to remove a TON of other packages.
There are no separate module packages (e.g. kernel-modules, kernel-modules-extra)
So here are a few questions I have regarding the compilation process (I cannot find any documentation on the web to answer these):
Why is the generated kernel rpm so big?
Why are there no separate modules rpms? Is there a switch I need to use to generate these?
Why does the kernel-headers generation throw that obsoletes message and why does removing it make Fedora want to remove so many other packages?
I did use the kernel config of the current running kernel (i.e. I copied the config from the /boot folder and renamed it .config)
Many thanks in advance for any guidance, even if it's just pointing me to a page which answers my questions (that my hours of searching didn't uncover)

Easy ways to separate DATA (Dev-Environments/apps/etc) partition from Linux System minimal sized OS partition? Docker or Overlayfs Overlayroot? Other? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 5 years ago.
Back when the powers that be didn't squeeze the middle class as much and there was time to waste "fooling around", I used to compile everything from scratch from .tgz, manually resolve dependencies, and make install to a local directory.
Sadly, there's no more time for such l(in)uxuries these days, so I need a quick, lazy way to keep my 16GB Linux boot OS partition as small as possible and keep apps/software/development environments and other data on a separate partition.
I can deal with mounting my home dir on another partition, but my remaining issue is with /var and /usr etc. and all the stuff that gets installed there. Every time I apt-get some package I end up with a trillion dependencies installed, because the author of a 5kb app decided not to include a 3kb parser and wanted me to install another 50MB package to get that 3kb library :) yay!
Of course later when I uninstall those packages, all those dependencies that got installed and never really have a need for anymore get left behind.
But anyway the point is I don't want to have to manually compile and spend hours chasing down dependencies so I can compile and install to my own paths and then have to tinker with a bunch of configuration files. So after some research this is the best I could come up with, did I miss some easier solution?
Use OVERLAYFS and overlayroot to overlay my root / partition onto my secondary drive or partition, so that my Linux OS is never written to anymore and everything is transparently written to the other partition instead.
I like the idea of this method and I want to know who uses it and if it's working out well. What I like is that this way I can continue to be lazy and just blindly apt-get install toolchain, and everything should work as normal without any special tinkering with each app's config files etc. to change paths.
It's also nice that dependencies will easily be re-used by the different apps.
Any problems I haven't foreseen with this method? Is anyone using this solution?
DOCKER or Other Application Containers, libvirt/lxc etc?
This might be THE WAY to go? With this method I assume I should install ALL the apps I want to install/try out inside ONE Docker container, otherwise I will be wasting storage space by duplicating dependencies in each container? Or do Docker or other app containers deduplicate files/libs across containers?
Does this work fine for graphical/x-windows/etc apps inside docker/containers?
If you know of something easier than overlayfs/overlayroot or Docker/LXC to accomplish what I want, and that's not any more hassle to set up, please tell me. Thanks!
After further research and testing/trying out Docker for this, I've decided that containers like Docker are the easy way to go for installing apps you may want to purge later. It seems that this technique already uses an overlayfs/overlayroot-style approach under the hood, reusing dependencies already present in shared image layers and installing other needed dependencies in the Docker image. So basically I gain the same advantages as the manual overlayroot technique I talked about, yet without having to do the work of setting it all up.
So yep, I'm a believer in application containers now! Even works for GUI apps.
Now I can keep a very light-weight small size main root partition and simply install anything I want to try out inside app containers and delete when done.
This also solves the problem of lingering no longer needed dependencies which I'd have to deal with myself if manually doing an overlayroot over the main /.

Installing a package independent of package manager [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I am trying to create an install script for my program that runs on Linux systems. I am choosing Bash for the install script, and I am wondering if there is any efficient way of installing a package independent of the system's package manager. I know that Debian uses aptitude, Red Hat uses the Yellowdog Updater, Modified (yum), etc. Given this diversity of package managers, it seems like a lot of if statements will be required, which check the distribution and match it with the default package manager.
The only thing I can think of apart from checking the distro is having the script manually download the package sources and building them. I'm not sure how popular projects do this, but I would be interested to know.
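A minimal sketch of the detection approach described above — purely illustrative, and the list of managers probed is not exhaustive:

```shell
# probe for well-known package managers, first match wins
detect_pkg_manager() {
    for pm in apt-get dnf yum zypper pacman apk; do
        if command -v "$pm" >/dev/null 2>&1; then
            echo "$pm"
            return 0
        fi
    done
    echo "unknown"
    return 1
}

PKG_MANAGER=$(detect_pkg_manager)
echo "Detected: $PKG_MANAGER"
```

In practice each manager then needs its own install invocation (`apt-get install -y`, `dnf install -y`, ...), which is exactly the pile of if statements the question anticipates.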
Your question seems inside out. Just make your script install things in /usr/local/bin with an option for those who want to create a package wrapper for their distro to easily override the destination (typically to use /usr/bin instead, ideally simply by setting DESTDIR). The Linux Standard Base should tell you where to similarly install auxiliary files, libraries, etc.
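The DESTDIR idea mentioned above can be sketched like this; `myprog` and the staging path are placeholders, not part of any real project:

```shell
# DESTDIR lets a packager stage the install into a scratch tree instead of
# the live filesystem; PREFIX chooses /usr/local vs /usr
PREFIX=${PREFIX:-/usr/local}
DESTDIR=${DESTDIR:-/tmp/stage}

# stand-in for the real compiled binary (placeholder script)
printf '#!/bin/sh\necho hello\n' > /tmp/myprog

# -D creates leading directories as needed (GNU coreutils install)
install -D -m 755 /tmp/myprog "${DESTDIR}${PREFIX}/bin/myprog"

"${DESTDIR}${PREFIX}/bin/myprog"   # runs the staged copy
```

A distro packager would run the script with `DESTDIR=/path/to/pkgroot PREFIX=/usr`, then bundle the resulting tree; end users running it bare get /usr/local as the answer suggests.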
As an aside, the core package manager on Debian is dpkg and on Red Hat it's rpm.
You are not the first to encounter this issue. Basically, there are two different solutions to it.
Use a package manager
Examples:
docker: https://get.docker.com/
https://github.com/firehol/netdata
Docker detects your OS and adds its own repository to the local package manager. This might be an option for you, depending on your project size.
Standard compile and install approach
You might have heard of the following 3 lines:
./configure
make
make install
Make is not the only option; there are other build systems that do essentially the same thing.
There are still a lot of open source projects out there where compiling locally and then moving/copying the files to the correct location is the preferred method of installation (sometimes only for development builds).
Examples:
aria2: https://github.com/aria2/aria2/

How do I manually install a program to CentOS 6, without using yum or rpm? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have created a CentOS server in a virtual machine and now I would like to know how to install programs from scratch, without using yum or rpm. Every search I've tried on how to go about finding each individual program and what commands I would use to install them has returned very useful information on why using yum would be so much better and how to install yum if I don't have it.
So basically all I want to know is how to find the download links for individual programs, how to download them (since I'm working in a text-only environment I'm unfamiliar with this whole process), and what commands I need to use to install them once I have them.
Thanks guys!
When in Rome, man. They're telling you to do it that way because CentOS really prefers RPM-based packages. They're easier to manage, upgrade, uninstall, etc.
However since this is a learning exercise, ignore all of that.
Each piece of software is unique, and you need to read the installation instructions that come with the source code for the project. A good chunk of software out there uses a system called "automake" whose commands are usually very predictable. The experience is usually something like this:
Download the source code from a website (it often comes as a .tar.gz or .zip). You can use wget to download files from websites.
Extract the source code locally (using tar or unzip)
Set some compiler variables (don't do this unless you know what you're doing -- the defaults are usually sufficient, esp. for a learning exercise). e.g. export CFLAGS="-O2 -pipe"
Run the configure script with --help to determine what kinds of options are configurable. ./configure --help
Run configure with the options you want: ./configure --prefix=/usr/local --enable-option1 --with-library=/path/to/lib --without-cowbell
This will set up the project to be compiled. Now you need to run make. Just type make
Once everything has compiled (assuming there are no compile errors) run make install. You have to run this command as root usually.
Tada. The package has been installed from source.
There are of course other compile systems out there (cmake for example) but I won't cover all of them. Things will break for you. Google is your friend when this happens. Usually it's due to (a) shitty source code, or (b) missing / out of date libraries on your system.
Also keep in mind that just because a package compiles doesn't mean it will work out of the box for you. Most packages need a certain amount of configuration to actually run properly, so be sure to read any documentation available to you.
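The steps above can be rehearsed offline with a tiny hand-made tarball standing in for a real download; every name here (foo-1.0, the Makefile contents) is a placeholder, and the configure and install steps are noted rather than run:

```shell
# simulate the "download": create a tiny source tarball locally
rm -rf /tmp/srcdemo
mkdir -p /tmp/srcdemo/foo-1.0 && cd /tmp/srcdemo
printf 'all:\n\t@echo built foo\n' > foo-1.0/Makefile
tar -czf foo-1.0.tar.gz foo-1.0 && rm -rf foo-1.0

# from here the flow mirrors the real experience:
tar -xzf foo-1.0.tar.gz     # extract (step 2)
cd foo-1.0
make                        # compile (step 7); real projects run ./configure first
# sudo make install         # step 8, skipped in this sketch
```

With a real project you'd replace the tarball creation with `wget`, and run `./configure --help` before `make` as described above.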
EDIT
Also, if you REALLY want the FULL experience, there's always linux from scratch which can, and will teach you everything you were afraid to ask about compiling things from source.
To compile from an archive like a .tar.bz2: run ./configure, then make, and afterwards sudo make install.

Clone Debian/Ubuntu installation [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Is there an easy way of cloning an entire installed Debian/Ubuntu system?
I want to have identical installation in terms of installed packages and as much as possible of settings.
I've looked into options of aptitude, apt-get, synaptic but have found nothing.
How to mirror apt-get installs.
Primary System
dpkg --get-selections > installed-software
scp installed-software $targetsystem:.
Target System
dpkg --set-selections < installed-software
dselect
Done.
+1 to this post
This guide should answer your direct question
But I would recommend rsync and simply cloning the entire root filesystem. It's only expensive the first time.
You can also create your own package repository and have all your machines run their daily updates from your repository.
Supposing you want to install Ubuntu on multiple identical systems you could try with the Automatic Install feature.
You can use rsync for that; there is an interesting thread about it on ubuntuforums.
There is rsync, which lets you synchronise files between installations. So you could just rsync your entire distro, or at least the directories that contain the programs and the configuration files.
Also, I don't know if this is what you are asking, but you could turn your existing install into an ISO image, this would allow you to install it elsewhere, thus having a duplicate.
Hope that helps
If the drives and systems are identical, you might consider using dd to copy the source machine to the target.
The only changes that would need to be made on booting the new machine would be to change the hostname.
Once the machine has been duplicated, go with what the other answers have suggested, and look at rsync. You won't want to rsync everything, though: system log files etc should be left alone.
Also, depending on how often "changes" are made to either system (from bookmarks to downloaded ISOs), you may need to run rsync in daemon mode, and have it update nearly constantly.
SystemImager
FAI
We have systemimager working great with RHEL and CentOS. Haven't tried it on Debian.
The trick linked by Luka works great with debian though.
Well, it all depends on scale and how often you want to use it. SystemImager is basically rsync on steroids: it has scripts that make image creation easy and let you carry over network settings etc. It can easily be used where you need to create a farm of web servers or a farm of mail servers with only small differences between installations, where you can boot one blank system over the network and have it completely installed. This has the advantage that it's almost completely automated: a script learns your partitioning layout and automatically applies it.
When you only need one copy of a system, keep it simple: boot from a live CD, create your partitioning, copy over the network using rsync, install your bootloader, and everything will be fine.
