Running apt-get for another partition/directory?

I have booted my system from a live Ubuntu CD, and I need to fix some package problems. I have mounted my hard drive, and now I want to run apt-get as if I had booted normally, i.e. change the directory apt-get treats as root so it will work on my hard drive. I have done this before, but I can't remember the syntax. I think it was only some flag, like this:
apt-get --root-directory=/mnt/partition1 install....
But I only get "Command line option...is not understood". Any ideas?

Also this should work:
sudo apt-get -o Dir=/media/partition1 update

chroot /mnt/partition1
If your system uses several disk partitions, you may have to mount some of them in order to get the package system working (I stopped setting up multiple partitions ten years ago, when hard disks started to get too large for raw physical backup).
This wouldn't work if you don't already have a usable debian system in that location. – akostadinov
If you can't get the package system working when chrooting, perhaps it is too messed up to ever be trusted again; in my experience the effort to bring it back to life rarely pays. If that happens, be happy you can still access your HD, back up your data, and perform a clean reinstall.
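A minimal sketch of that workflow from the live session (the device name /dev/sda1 is an assumption; the bind mounts help package maintainer scripts behave inside the chroot):
sudo mkdir -p /mnt/partition1
sudo mount /dev/sda1 /mnt/partition1
sudo mount --bind /dev /mnt/partition1/dev
sudo mount --bind /proc /mnt/partition1/proc
sudo mount --bind /sys /mnt/partition1/sys
sudo chroot /mnt/partition1
apt-get update   # now runs against the installed system, not the live CD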
Some relevant comments from another answer:
apt-get -o RootDir=/tmp/test_apt sets (almost) all paths to be in the different root. By the way, on a running system, if you copy /etc/apt and /usr/lib/apt into it, run mkdir -p usr/lib etc var/cache var/lib/dpkg var/lib/apt/lists/partial var/cache/apt/archives/partial, and finally touch var/lib/dpkg/status, then apt is going to work in that root. It can even work as a non-root user if you add the option -o Debug::NoLocking=1. The no-locking option is necessary because I couldn't find a way to set the lock file inside the different root directory. – akostadinov
"Work" here means search, downloading packages, and similar operations. Actually installing is not possible if some essential packages are not already there; debootstrap can help if the goal is actually installing packages into a new root for whatever reason. – akostadinov
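Putting that comment together, a hedged sketch of the skeleton it describes (the paths are taken verbatim from the comment):
mkdir -p /tmp/test_apt
cd /tmp/test_apt
mkdir -p usr/lib etc var/cache var/lib/dpkg var/lib/apt/lists/partial var/cache/apt/archives/partial
cp -a /etc/apt etc/
cp -a /usr/lib/apt usr/lib/
touch var/lib/dpkg/status
apt-get -o RootDir=/tmp/test_apt -o Debug::NoLocking=1 update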

Running chroot /mnt/partition1 will start a new shell in which the root of the filesystem is /mnt/partition1. Assuming the apt-get on your hard drive still works correctly, you can proceed from there.
dpkg --root=/mnt/partition1 -i mypackage.deb is an option that doesn't require chroot, but does require you to download the package yourself.
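For instance, from the live session (apt-get download is available on reasonably recent Ubuntu releases; mypackage is the placeholder from the line above):
apt-get download mypackage
sudo dpkg --root=/mnt/partition1 -i mypackage_*.deb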

How to change yum install location?

Could anyone tell me how to change yum's default install directory? I have been trying to install DataStax Cassandra after creating the datastax.repo file in the yum.repos.d directory, but when installing it says there is not enough space; it is installing into the default / filesystem. Can I change it to the /data or /local/apps directory, where there is plenty of space? How can I do this? Command used: yum install dse-full
Many thanks for the help.
You don't. Not really.
If the RPM is built as a relocatable RPM (almost none that I'm aware of are), then, and only then, you can use the --prefix or --relocate arguments to rpm to do some amount of prefix replacement/path translation.
That said, that is almost certainly not the case.
If the RPM installs under a specific prefix (e.g. /opt/cassandra), then you might be able to create a symlink at that location pointing to your other partition, and that might work.
A better option (and one that might be more reliable) would be to use a bind mount at that location to somewhere on your other partition.
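A hedged sketch of the bind-mount idea, reusing the /opt/cassandra example above and assuming /data/cassandra as the roomy target:
sudo mkdir -p /data/cassandra /opt/cassandra
sudo mount --bind /data/cassandra /opt/cassandra
echo '/data/cassandra /opt/cassandra none bind 0 0' | sudo tee -a /etc/fstab   # make it persistent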
That said, the real answer here is to give your root partition more space, which, assuming you used LVM to create your partitions (and you really probably should have), is not a complicated task.
I've been stuck on a legacy server with insufficient disk space, and had to use an approach similar to this answer.
You can find out where it wants to install to using rpm commands:
rpm -q -p -l /path/to/rpmfile.rpm | less
If it installs under a common directory such as /usr/local/, you're in luck. I cannot download the RPMs from the vendor, since it requires registration, but from the docs about the .run installer for the same product, the default is /usr/local/dse.
If this is the same for the .rpm installs, then you can just symlink that directory to your large disk:
mkdir -p /local/apps/dse
ln -s /local/apps/dse /usr/local/dse
Hope that helps!

Best Approach to installing Node.js/npm without sudo

I've been looking around for the best/most appropriate way to install node.js/npm in such a way that using commands like npm install -g bower does not require sudo, as using sudo for such a command can cause issues later on. Initially I followed this answer: Installing with nvm, but this installs it into the user's home directory, and I read that having node installed in your home directory may not be a good idea in production. So I followed an expansion on the above tutorial with this: Installing with NVM (digital ocean), however that still left me requiring sudo.
On a side note: on my MacBook I installed node with Homebrew; is this a good idea, or is there a more standard approach?
Thanks for all your help; feel free to ask for clarifications.
I forgot to say: the machine I am planning on installing this on runs Xubuntu 14.04 (I also have my MacBook running Mavericks, but that is just an addition).
Sudo gives you permission to change/add/remove files not owned by your user. Those files are, as a rule, everything except /home/YOU (on macOS: /Users/YOU).
Your desire is to have Node installed as appropriate (system wide, rather than in your home directory), and that is good. As you guessed, you need sudo to initially install it on a system path.
But then you wish to have modules installed without sudo, meaning you want modules to be located in a directory where your user has write access. That would be available by default if Node were installed in your home.
To enforce your wish on a system path, you will need to give write permission to the folders where modules are located, that is, change the write permissions or ownership of the following (a sketch follows this list):
/usr/local/share/npm/lib/node_modules, so that modules can be saved on your disk.
/usr/local/share/npm/bin, to allow module executables to be reachable.
You might have to alter a few other folders as well.
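A minimal sketch of that permission change, assuming those are indeed the paths on your install (and bearing in mind the warning that follows):
sudo chown -R "$USER" /usr/local/share/npm/lib/node_modules
sudo chown -R "$USER" /usr/local/share/npm/bin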
That answers your question, but I strongly recommend you not do so. Instead, I suggest you stick to default methodologies. Everyone here will no doubt say that using sudo when you are installing modules globally is an absolutely safe approach; it is even safer not to have write permission to the global infrastructure of your install without super privileges.

Unable to boot linux due to removing the filesystem package

In my Fedora x64 system I accidentally removed the "filesystem" package while I was root, by executing this command:
rpm -e filesystem --nodeps
instead of doing this :
yum update filesystem
and unfortunately the command executed normally and the "filesystem" package was deleted entirely.
Now the system refuses to boot, showing this message:
systemd[1]: Failed to execute /bin/sh, giving up: No such file or directory
I can't do anything to fix it, so any solutions are welcome, because I don't want to reinstall the system.
I am running x64 Fedora 18 Linux on an Intel i3 processor.
I ran into the same beast on Fedora 19; after 3 hours I found a quite straightforward solution. What I did was:
Booted from a Fedora Live USB stick of the same version as the installed system
Mounted the root subvolume into a local directory (btrfs): mount -o subvol=/root /dev/sda3 /mnt
Downloaded the filesystem package, telling yum that its configuration and base directory are at my mountpoint: yum -c /mnt/etc/yum.conf --installroot=/mnt --downloadonly --downloaddir=~/ install filesystem
Since the package filesystem.x.x.x.rpm had already been gloriously removed by the rpm -e filesystem --nodeps command, I installed the downloaded filesystem.rpm, or at least I thought so. It turned out I had to force rpm because some other package from Google Earth was blocking my command:
rpm -Uvh --root=/mnt ~/filesystem.x.x.x.rpm --force
Finally, I edited /etc/selinux/config and turned SELinux off:
SELINUX=disabled
I'd take the drive out, install it in another system, mount it as a secondary drive, and fool around with RPM to install the package into the specified path.
Bear in mind you'll need to manually check that all your dependencies are installed too, and that you're using the correct package version for Fedora 18.
I guess there might be other ways to do this too, but as long as you have another system you can connect the drive to, this might be the least effort.
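A hedged sketch of that approach from the second system (the device name, mountpoint, and exact package filename are assumptions):
sudo mount /dev/sdb2 /mnt/broken
sudo rpm -ivh --root=/mnt/broken filesystem-3.2-10.fc18.x86_64.rpm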
I'd boot your broken system off a rescue disk on DVD, CD, USB, or what have you. My experience was with Knoppix (a few years back); it was regarded as the best. However, if you don't have that, google "fedora rescue" and download that. See if it can read your hard drive, perhaps allowing you to avoid losing files of value that you had on the old system: copy them out to some removable media. Or it may actually diagnose your situation and suggest fixing it for you.
Otherwise, I suspect the least-effort path back to a working system will be to install Linux from scratch. The "filesystem" package is not some separate add-on; it pretty much defines the layout of the whole Linux installation. The kernel is still present and booting, but everything else is gone.
I looked for the ISO, mounted it, and extracted the rpm package filesystem-3.2-10.fc19.x86_64.rpm. I then looked for a live CD, booted into it, mounted my former working partition, and then ran
rpm2cpio /root/filesystem-3.2-10.fc19.x86_64.rpm | cpio -idmv
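Presumably this was run from the root of the mounted partition, since cpio extracts relative to the current directory; a hedged reconstruction (device name and mountpoint are assumptions):
sudo mount /dev/sda2 /mnt/broken
cd /mnt/broken
rpm2cpio /root/filesystem-3.2-10.fc19.x86_64.rpm | sudo cpio -idmv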

Install chromium to Linux disk image?

I'm sure this has been asked before, but I have no clue what to search for.
I am trying to create a custom Linux image (for the Raspberry Pi). I am currently manipulating the filesystem of the .img, but I've discovered it's not as simple as dropping in the binary :( if only...
What is the accepted way to "pre-install" a package on a disk image where you can only manipulate the filesystem and ideally not run it first? Am I best to boot up, install, and then create the image from that, or is there a way of doing it beforehand, in the same way you can change configuration settings etc.?
Usually, when I have to change something in a disk image, I do the following:
sudo mount --bind /proc /mnt/disk_image/proc
sudo mount --bind /sys /mnt/disk_image/sys
sudo mount --bind /dev /mnt/disk_image/dev
These actions are needed because these folders are created during the boot process; mounting them into your system image will emulate a full boot. Then you can chroot into it safely:
sudo chroot /mnt/disk_image
You're now able to issue commands in the chroot environment:
sudo apt-get install chromium
Of course, change /mnt/disk_image to the path where you have mounted your filesystem. apt-get will only work on Debian-based systems; change it according to your distribution.
You could find problems connecting to the internet, which can be caused by DNS configuration. The best thing you can do is copy your /etc/resolv.conf file into the remote filesystem, as this file is usually managed by dhcp and will be empty in the chroot environment.
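For example, assuming the image is mounted at /mnt/disk_image as above:
sudo cp /etc/resolv.conf /mnt/disk_image/etc/resolv.conf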
This is the only solution that gives you full access to the command line of the system you're trying to modify.
This is an untested idea:
The dpkg tool, which installs .deb packages, has a --root option which can point it at a different filesystem root than the local /.
From the man page:
--instdir=dir
Change default installation directory which refers to the
directory where packages are to be installed. instdir is
also the directory passed to chroot(2) before running
package’s installation scripts, which means that the
scripts see instdir as a root directory. (Defaults to /)
--root=dir
Changing root changes instdir to dir and admindir to
dir/var/lib/dpkg.
If you mount your image and pass its mountpoint as --root, it should work.
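A hedged sketch (the mountpoint and the .deb filename are assumptions, and the package's dependencies would already need to be present in the image):
sudo dpkg --root=/mnt/disk_image -i chromium-browser_*_armhf.deb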
There are things like the Ubuntu Customization Kit which allow you to create your own version of the distro with your own packages.
Crunchbang, the distro I have personally selected for experimenting with my Pi, even has a utility like this.

Run time installation directory of debian package contents

I have a debian package that I built, containing a tarball of the files, a control file, and a postinst file. It's built using dpkg-deb and it installs properly using dpkg.
The modification I would like to make is to have the installation directory of the files determined at runtime, based on an environment variable that will be set when dpkg -i is run on the deb file. I echo the environment variable in the postinst script and I can see that it's set properly.
My questions:
1) Is it possible to dynamically determine the installation directory at runtime?
2) If it's possible, how would I go about it? I have read about the rules file and the mypackage.install files, but I don't know if either of these would allow me to accomplish this.
I could hack it by copying the files to the target location in the postinst script, but I would prefer to do it the right way if possible.
Thanks in advance!
So this is what I found out about this problem over the past couple of weeks.
With prepackaged binaries you can't build a debian package whose destination directory is dynamically determined at runtime. I believe this might be possible when installing a package built from source, where you can set the install directory using configure; but in this case, since these are embedded Ubuntu machines without make, I didn't pursue that option. I did work out a non-traditional method (hack) for installing that did work. Since debian packages simply contain a tarball relative to /, simply build your package relative to a directory under /tmp. In the postinst script you can then determine where to copy the files from the archive into a permanent location.
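A minimal postinst sketch of that hack (MYAPP_INSTALL_DIR, /tmp/myapp-staging, and the /opt/myapp fallback are all hypothetical names):
#!/bin/sh
set -e
# copy the payload staged under /tmp to a location chosen at install time
TARGET="${MYAPP_INSTALL_DIR:-/opt/myapp}"   # env var set before dpkg -i runs
mkdir -p "$TARGET"
cp -a /tmp/myapp-staging/. "$TARGET"/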
I expected that after rebooting, and the automatic deletion of the subdirectory under /tmp, dpkg might not know the package existed. This wasn't a problem: when I ran dpkg -l myapp, it showed as still installed. Updating the package using dpkg/apt-get also worked without a hitch.
What I did find is that if you attempted to remove the package using dpkg -r myapp, dpkg would try to remove /tmp, which wasn't good. However, /tmp isn't easily removed, so it never succeeded. Plus, in our situation we never remove packages but instead simply upgrade them.
I eventually had to abandon the universal package, because code differences in the sources forced me to recompile per platform, but otherwise I would have left it this way, and it did work.
I tried using --instdir to change the install directory of the package, and it does relocate the files, but dpkg fails since its own database can't be found relative to the new instdir; using --instdir is sort of like a chroot. I also tried --admindir and --root in various combinations to see if I could keep the dpkg database relative to / but still relocate the files, but they didn't work. I guess rpm has a relocate option that works, but dpkg on Ubuntu does not.
You can also write a script that runs dpkg-deb six times with a different environment each time, generating six different packages. When you make a modification, you simply run your script, all six packages get generated, and you can install them on your machines, avoiding postinst hacking!
Why not install to a standard location, and simply use a postinst script to create symbolic links to the desired location? This is much cleaner, and shouldn't break anything in dpkg -i.
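A sketch of that cleaner variant, under the same hypothetical names as above: the package ships its files under a fixed path, and postinst only creates the link:
#!/bin/sh
set -e
# link the runtime location to the packaged install directory
ln -sfn /usr/share/myapp "${MYAPP_INSTALL_DIR:-/opt/myapp}"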
