Installing a package independent of package manager [closed] - linux

I am trying to create an install script for my program that runs on Linux systems. I am choosing Bash for the install script, and I am wondering whether there is any efficient way of installing a package independent of the system's package manager. I know that Debian uses aptitude, Red Hat uses the Yellowdog Updater (yum), etc. Given this diversity of package managers, it seems like a lot of if statements will be required, each checking the distribution and matching it with the default package manager.
The only thing I can think of, apart from checking the distro, is having the script download the package sources and build them. I'm not sure how popular projects do this, but I would be interested to know.

Your question seems inside out. Just make your script install things in /usr/local/bin with an option for those who want to create a package wrapper for their distro to easily override the destination (typically to use /usr/bin instead, ideally simply by setting DESTDIR). The Linux Standard Base should tell you where to similarly install auxiliary files, libraries, etc.
As an aside, the core package manager on Debian is dpkg and on Red Hat it's rpm.
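For what it's worth, a minimal sketch of such a script might look like the lines below. The program name myprogram and the default paths are placeholders; the point is simply that a packager can override PREFIX and DESTDIR without touching the script.
#!/bin/sh
# install.sh -- sketch only; a packager can run e.g.  DESTDIR=/tmp/pkgroot PREFIX=/usr ./install.sh
PREFIX="${PREFIX:-/usr/local}"
DESTDIR="${DESTDIR:-}"
install -d "$DESTDIR$PREFIX/bin"                       # create the target directory
install -m 755 myprogram "$DESTDIR$PREFIX/bin/myprogram"   # copy the binary with sane permissions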

You are not the first to encounter this issue. Basically, there are two different approaches to it.
Use a package manager
Examples:
docker: https://get.docker.com/
https://github.com/firehol/netdata
Docker detects your OS and adds its own repository to the local package manager. This might be an option for you, depending on your project size.
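For illustration, here is a rough sketch in the same spirit (not Docker's actual script): read /etc/os-release and dispatch to whatever package manager the distro family uses. The package name mypackage is a placeholder.
# sketch: detect the distro family and hand the work to its package manager
. /etc/os-release 2>/dev/null    # sets ID and ID_LIKE on most modern distros
case "$ID $ID_LIKE" in
    *debian*|*ubuntu*)        sudo apt-get update && sudo apt-get install -y mypackage ;;
    *rhel*|*centos*|*fedora*) sudo yum install -y mypackage ;;
    *arch*)                   sudo pacman -S --noconfirm mypackage ;;
    *) echo "unknown distro, please install mypackage manually" >&2 ;;
esac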
Standard compile and install approach
You might have heard of the following 3 lines:
./configure
make
make install
Make is not the only option; there are other build systems that do essentially the same thing.
There are still a lot of open source projects out there where compiling locally and then moving/copying the files to the correct location is the preferred method of installation (sometimes only for development builds).
Examples:
aria2: https://github.com/aria2/aria2/
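As a concrete (hypothetical) example of this flow, assuming a project that ships an autoconf-style configure script, you can even install into your home directory so no root access is needed. The project name is a placeholder.
tar xf myproject-1.0.tar.gz             # extract the release tarball
cd myproject-1.0
./configure --prefix="$HOME/.local"     # install under ~/.local instead of /usr/local
make
make install                            # no sudo needed for a prefix you own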

Related

setting up a linux machine on a webserver [closed]

I have a server with Hostinger, and I have SSH access.
It lacks a LOT of commands. Here's my bin folder.
https://gyazo.com/4509a9c8868e5a19c01f78ba3e0bf09e
I can use wget, meaning I can grab packages.
How can I get this up and running as an average Linux machine? My plan is to use Heroku on it (sneaky, I know) and run Django and such, but it lacks so much to start with that it's looking really hard. I'm lacking essentials, including dpkg, apt, make, etc. Tips are appreciated.
There shouldn't be a case where your Linux-based server is missing core packages like a package manager (as I understand it, you don't have apt-get).
I'm lacking essentials, including dpkg, apt, make, etc.
For me, this server is broken and needs to be reinstalled.
I guess you can try to install apt with wget:
Look for an appropriate release here: http://security.ubuntu.com/ubuntu/pool/main/a/apt/
Example: wget http://security.ubuntu.com/ubuntu/pool/main/a/apt/apt_1.7.0_amd64.deb
Install it: dpkg -i apt_1.7.0_amd64.deb
But maybe you are on a different OS than you think? Have you tried installing with yum or dnf? To check which OS you are running, type:
cat /etc/*release
or
lsb_release -a
Going back to your question on how to set up a Linux server.
1. Update
[package manager] update
If you run a Debian-based OS, use apt-get as the package manager; if CentOS-based, use yum or dnf (dnf is the updated yum); Arch uses pacman. For other distributions, look it up.
2. Install
Install packages you require. To make life easier you can install groups
yum groupinstall [name_of_group]
To my knowledge, apt doesn't have group install but uses meta packages instead (they point to a group of packages). Ex.:
apt-get install build-essential
3. Create users
Avoid using root! Create users for services, processes, etc. This is tremendously important for security reasons. More on overall security for Linux
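A minimal sketch of what that looks like in practice (myservice is a placeholder; the nologin shell lives at /sbin/nologin on some distros):
sudo useradd --system --no-create-home --shell /usr/sbin/nologin myservice
sudo mkdir -p /var/lib/myservice
sudo chown myservice:myservice /var/lib/myservice   # the service owns only its own data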
4. Configure
Maybe a silly point, but configure what needs to be configured. For instance, web servers, SSH, workspace, etc. Each use case is different.
5. Orchestrate
If you don't want to set up the Linux environment by hand from the shell each time, you can use tools like Chef or Ansible to do it for you. (Of course you need to configure them first, which will take some time, but later you will save much more, trust me.)
For setting up application environments, I really recommend using Docker. Thanks to this, your application will work on any Linux-based server that has the Docker engine installed. Not only maintenance but also deployment becomes child's play: just download your image on any server, then run a container with the necessary parameters.
In the end, all the server itself needs is security and kernel updates plus the Docker engine; the rest of the dependencies will be resolved inside your Docker image.
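For illustration, assuming you have already built and pushed an image somewhere (myregistry/myapp is a placeholder), deployment on a fresh server reduces to roughly:
docker pull myregistry/myapp:1.0
docker run -d --name myapp --restart unless-stopped -p 8000:8000 myregistry/myapp:1.0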
Hope it helps
Heroku isn't a web server in the same sense as Apache or Nginx. It's a platform as a service provider. You don't install it on your own server, you use its hosted platform (and it uses whatever web server you bundle into your slug).
I suggest you go through the getting started tutorial for Python, which walks you through deploying a simple Django app on Heroku. That should give you a good idea of how Heroku works.
If your goal is to enable some kind of deploy workflow on your own server (via shared hosting or a server where you have full administrative access) you can search the web for deploy tools. There are all kinds of them, some of which may be more suitable to your needs than others.

How do I manually install a program to CentOS 6, without using yum or rpm? [closed]

I have created a CentOS server in a virtual machine and now I would like to know how to install programs from scratch, without using yum or rpm. Every search I've tried on how to go about finding each individual program and what commands I would use to install them has returned very useful information on why using yum would be so much better and how to install yum if I don't have it.
So basically all I want to know is how to find the download links for individual programs, how to download them (since I'm working text-only I'm unfamiliar with this whole process), and what commands I need to use to install them once I have them.
Thanks guys!
When in Rome, man. They're telling you to do it that way because CentOS really prefers rpm-based packages. They're easier to manage, upgrade, uninstall, etc.
However since this is a learning exercise, ignore all of that.
Each piece of software is unique, and you need to read the installation instructions that come with the source code for the project. A good chunk of software out there uses a system called "automake" whose commands are usually very predictable. The experience is usually something like this:
Download the source code from a website (it often comes as a .tar.gz or .zip). You can use wget to download files from websites.
Extract the source code locally (using tar or unzip)
Set some compiler variables (don't do this unless you know what you're doing -- the defaults are usually sufficient, esp. for a learning exercise). e.g. export CFLAGS="-O2 -pipe"
Run the configure script with --help to determine what kinds of options are configurable. ./configure --help
Run configure with the options you want: ./configure --prefix=/usr/local --enable-option1 --with-library=/path/to/lib --without-cowbell
This will set up the project to be compiled. Now you need to run make. Just type make
Once everything has compiled (assuming there are no compile errors), run make install. You usually have to run this command as root.
Tada. The package has been installed from source.
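Put together, a typical session might look roughly like this (the URL and project name are placeholders; the exact configure options depend on the project):
wget https://example.com/foo-2.1.tar.gz        # download the release
tar xzf foo-2.1.tar.gz && cd foo-2.1           # extract and enter the source tree
./configure --help                             # see what options are available
./configure --prefix=/usr/local                # configure the build
make                                           # compile
sudo make install                              # install (root needed for /usr/local)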
There are of course other compile systems out there (cmake for example) but I won't cover all of them. Things will break for you. Google is your friend when this happens. Usually it's due to (a) shitty source code, or (b) missing / out of date libraries on your system.
Also keep in mind that just because a package compiles doesn't mean it will work out of the box for you. Most packages need a certain amount of configuration to actually run properly, so be sure to read any documentation available to you.
EDIT
Also, if you REALLY want the FULL experience, there's always Linux From Scratch, which can, and will, teach you everything you were afraid to ask about compiling things from source.
To compile from an archive like a .tar.bz2: extract it, then use ./configure, make, and afterwards sudo make install.

Multiple package manager [closed]

Is there a pitfall to using multiple package managers? Could I use Red Hat's yum with Debian's aptitude at the same time?
I came across this article and this infographic -
I was inclined to choose Debian, but a quick VM install showed that the kernel is not upgraded to 3.2 in the stable repo yet. So I am planning to move to Arch Linux, but the infographic rates it low on package availability, and I was wondering if I could install .deb or .rpm files from the Fedora or Ubuntu repositories.
The short answer is, yes you can, but you really shouldn't.
Below is a list of things (in no particular order) to consider when cross-distribution installing:
Dependency hell
The primary reason things like yum/apt/yast exist in the first place is to avoid what is known as dependency hell. By using packages from other systems, you forfeit the work that has been put into making those packages install cleanly.
The secondary package manager will want to satisfy its own dependencies and install a bunch of things that are already installed. This means you have to install packages a piece at a time so that you don't overwrite packages already installed by your primary package manager, or you'll have all kinds of problems.
Do they use the same package manager?
If they do, you may even be able to just install it outright, but you'll likely have dependency issues or package conflicts. If they don't, you can extract the package with various tools and just lay the binary files down onto the file system (have a look at alien, or this post about extracting .rpm and .deb files).
This will get you the files on the system, but there is no guarantee it'll work out of the box. Some additional manual hunting may be (and usually is) required.
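For reference, the usual extraction commands are along these lines (package names are placeholders, and the inner archive in a .deb varies, e.g. data.tar.gz/xz/zst):
dpkg-deb -x some-package.deb extracted/        # unpack a .deb into ./extracted (needs dpkg-deb)
rpm2cpio some-package.rpm | cpio -idmv         # unpack an .rpm into the current directory
ar x some-package.deb && tar xf data.tar.xz    # unpack a .deb with just ar + tar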
Are the versions of base packages such as glibc the same or very close?
If so, there is less chance of an issue. The further apart the two distributions' base packages are, the more likely you'll have missing shared libraries that aren't available in the distribution you're running on, because the version is different and the filename doesn't match what the binary is looking for.
Technically you could also extract the base dependencies from the other distribution and put them on the filesystem as well, but that will certainly cause you pain should you ever need to compile things from source. Imagine how confused gcc will be.
Does the package you're installing call for a specific kernel module?
The best way I can articulate this is a common problem I see these days with buying VMs from a web host: you get their own brand of Xen or Virtuozzo kernel, and iptables doesn't work outright because netfilter is in the kernel and the ABI has changed. It can be quite the headache to get it working again, and this issue isn't limited to iptables. My best advice here is to pick the distribution that has the kernel you want in its own base repository.
Compiling from source
No doubt you'll have to do this should you get very deep into wanting packages from other systems. Since the various distros set up their build environments differently, you'll be spending half your time figuring out pathing and other configuration issues.

What packages should I install with Cygwin to make it not bloated but also have everything I would need as a developer? [closed]

Normally, I run Linux in a VM, however, most of my VMs are on an external HDD and I might or might not have one with me. I figure Cygwin would be a good alternative for lightweight functionality when I need something Linux like and don't have a VM on my laptop. But I'm having trouble getting the configuration right - I want the bare minimum for development + X11. Has anyone used Cygwin in this manner? If so, what suggestions do you have?
Update: I've switched over to WSL since posting this answer. If you're still using Cygwin, give it a try. It's not a drop-in replacement, but it's nicer in a number of ways.
Personally, I find having to exit Cygwin just to install new packages annoying enough to try to avoid the just-in-time strategy, and fortunately there's a tool to make this much easier: apt-cyg. This way you actually can just-in-time install packages without having to quit Cygwin.
That said, here's a list of common packages you might want to install, whether via the installer or via apt-cyg (see the sketch after the list):
bash-completion
lynx (to install apt-cyg), wget and curl
vim
hg, git, and maybe svn and git-svn
diffutils and patchutils
python and python3
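Once apt-cyg is on your PATH, installing from inside a Cygwin shell is a one-liner (the exact subcommands vary a little between apt-cyg forks):
apt-cyg install vim git python3    # fetch and install packages without re-running setup.exe
apt-cyg remove lynx                # remove a package you no longer need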
There are tons of Cygwin setup posts out on the internet too; I referenced this one.
First option: don't worry about "bloat" - install everything that comes to mind.
With a permanently-available internet connection, I've also taken a "just in time" approach - the Cygwin installer makes it easy to download and install whatever you need, as and when you discover you need it.
The only way here is trial-and-error. Start with an absolute minimal installation and add things as you find that you need them.

Clone Debian/Ubuntu installation [closed]

Is there an easy way of cloning entire installed debian/ubuntu system?
I want to have an identical installation in terms of installed packages and, as far as possible, of settings.
I've looked into options of aptitude, apt-get, synaptic but have found nothing.
How to mirror apt-get installs.
Primary System
dpkg --get-selections > installed-software
scp installed-software $targetsystem:.
Target System
dpkg --set-selections < installed-software
dselect
done.
+1 to this post
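If you'd rather avoid the interactive dselect step, apt-get can act on the selections directly. A sketch of the whole thing run on the target machine (the host name primary is a placeholder, and the file is copied over as above):
scp primary:installed-software .
sudo dpkg --set-selections < installed-software
sudo apt-get dselect-upgrade    # installs everything marked "install" in the selections list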
This guide should answer your direct question
But I would recommend rsync and simply cloning the entire root filesystem. It's only expensive the first time.
You can also create your own package repository and have all your machines run their daily updates from your repository.
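A bare-bones sketch of such a repository for locally built .deb files (paths are illustrative; a real setup would serve it over HTTP and sign it):
sudo mkdir -p /srv/repo && sudo cp *.deb /srv/repo/
cd /srv/repo && dpkg-scanpackages . /dev/null | gzip -9c | sudo tee Packages.gz >/dev/null   # dpkg-scanpackages is in dpkg-dev
echo "deb [trusted=yes] file:/srv/repo ./" | sudo tee /etc/apt/sources.list.d/local.list
sudo apt-get update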
Supposing you want to install Ubuntu on multiple identical systems, you could try the Automatic Install feature.
You can use rsync for that, and there is an interesting thread about it on the Ubuntu Forums:
ubuntuforms
There is rsync, which lets you synchronise files between installations. So you could just rsync your entire distro, or at least the directories that contain the programs and the configuration files.
Also, I don't know if this is what you are asking, but you could turn your existing install into an ISO image, this would allow you to install it elsewhere, thus having a duplicate.
Hope that helps
If the drives and systems are identical, you might consider using dd to copy the source machine to the target.
The only changes that would need to be made on booting the new machine would be to change the hostname.
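A sketch of the dd approach, typically run from a live CD/USB so the source filesystem isn't changing underneath you (device and host names are placeholders; double-check them, dd is unforgiving):
sudo dd if=/dev/sda bs=4M status=progress | ssh root@target 'dd of=/dev/sda bs=4M'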
Once the machine has been duplicated, go with what the other answers have suggested, and look at rsync. You won't want to rsync everything, though: system log files etc should be left alone.
Also, depending on how often "changes" are made to either system (from bookmarks to downloaded ISOs), you may need to run rsync in daemon mode, and have it update nearly constantly.
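A sketch of that rsync invocation, skipping pseudo-filesystems and the volatile data mentioned above (target is a placeholder host; the brace expansion requires bash):
sudo rsync -aAXHv \
    --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*","/media/*","/lost+found","/var/log/*"} \
    / root@target:/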
SystemImager
FAI
We have SystemImager working great with RHEL and CentOS. Haven't tried it on Debian.
The trick linked by Luka works great with Debian, though.
Well, it all depends on scale and how often you want to use it. SystemImager is basically rsync on steroids: it has some scripts which make creating images easy and let you handle network settings etc. It can easily be used where you need to create a farm of web servers, or a farm of mail servers with only small differences between installations, where you can boot one blank system over the network and have it completely installed. This has the advantage that it's almost completely automated: a script learns your partitioning layout and automatically applies it.
When you only need one copy of a system, keep it simple: boot from a live CD, create your partitioning, copy over the network using rsync, install your bootloader, and everything will be fine.
