Install chromium to Linux disk image?

I'm sure this has been asked before, but I have no clue what to search for.
I am trying to create a custom Linux image (for the Raspberry Pi). I am currently manipulating the filesystem of the .img directly, but I've discovered it's not as simple as dropping in the binary :( if only...
What is the accepted way to "pre-install" a package on a disk image when you can only manipulate the filesystem and ideally don't want to boot it first? Am I best to boot up, install, and then create the image from that, or is there a way of doing it beforehand, in the same way you can change configuration settings etc.?

Usually, when I have to change something in a disk image, I do the following:
sudo mount --bind /proc /mnt/disk_image/proc
sudo mount --bind /sys /mnt/disk_image/sys
sudo mount --bind /dev /mnt/disk_image/dev
These actions are needed because those directories are created during the boot process; mounting them into your system image emulates a fully booted system. Then you can chroot into it safely:
sudo chroot /mnt/disk_image
You're now able to issue commands in the chroot environment:
sudo apt-get install chromium
Of course, change /mnt/disk_image to the path where you have mounted your filesystem. apt-get only works on Debian-based systems; change the command according to your distribution.
You may find you have problems connecting to the internet, which can be caused by the DNS configuration. The best thing you can do is copy your /etc/resolv.conf into the mounted filesystem, as this file is usually written by DHCP and is empty in a chroot environment.
This is the only solution that gives you full access to the command line of the system you're trying to modify.
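Putting the steps together, a minimal end-to-end sketch (assuming the image's root filesystem is already mounted at /mnt/disk_image and the target is Debian-based; note that chrooting into an ARM image from an x86 host additionally needs qemu-user-static/binfmt support):
sudo mount --bind /proc /mnt/disk_image/proc
sudo mount --bind /sys /mnt/disk_image/sys
sudo mount --bind /dev /mnt/disk_image/dev
sudo cp /etc/resolv.conf /mnt/disk_image/etc/resolv.conf
sudo chroot /mnt/disk_image apt-get install -y chromium
sudo umount /mnt/disk_image/proc /mnt/disk_image/sys /mnt/disk_image/dev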

This is an untested idea:
The dpkg tool, which can install .deb packages, has a --root option which can set a different filesystem than the local / path.
From the man page:
--instdir=dir
Change default installation directory which refers to the directory where packages are to be installed. instdir is also the directory passed to chroot(2) before running package's installation scripts, which means that the scripts see instdir as a root directory. (Defaults to /)
--root=dir
Changing root changes instdir to dir and admindir to dir/var/lib/dpkg.
If you mount your image and pass its mountpoint as --root, it should work.
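For example (untested, in the spirit of the idea above; chromium.deb stands in for a package you have already downloaded, and its dependencies would need the same treatment):
sudo dpkg --root=/mnt/disk_image -i chromium.deb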

There are things like the Ubuntu Customization Kit which allow you to create your own version of the distro with your own packages.
Crunchbang, the distro I have personally selected for experimenting with my Pi, even has a utility like this.

Related

Within lxc/docker container - what happens if apt-get upgrade includes kernel update?

I am reading a lot of Docker guides where they will often use some Ubuntu base image and, in the Dockerfile directly or in a bash script that gets copied to the container and run on start, have things like 'apt-get upgrade'.
As I understand it, the container still uses the host's kernel. So what happens when the apt-get upgrade includes a kernel upgrade? Does it create a /boot and install the files as usual, but the underlying LXC has some pass-through/whitelist mechanism for specific directories that always come from the host, so it ignores those files in the guest container?
The host's /boot is not visible to a Docker container, and the kernel image package should not be installed in such a container, since it's not needed. (Even if it is, though, it's entirely inert.)
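A quick way to check this (a sketch, assuming Docker and the stock ubuntu image): list /boot in a fresh container; base images ship no kernel, so it is typically empty.
docker run --rm ubuntu ls -la /boot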

Unable to boot linux due to removing the filesystem package

In my Fedora x64 system I accidentally removed the "filesystem" package while I was root, by executing this command:
rpm -e filesystem --nodeps
instead of doing this:
yum update filesystem
and unfortunately the command executed normally and the "filesystem" package was deleted entirely.
Now the system refuses to boot, showing this message:
systemd[1]: Failed to execute /bin/sh, giving up: No such file or directory
I can't do anything to fix it, so any solutions are welcome, because I don't want to reinstall the system.
I am running x64 Fedora 18 on an Intel i3 processor.
I ran into the same beast on Fedora 19; after 3 hours I found a quite straightforward solution. What I did was:
Booted from a Fedora Live USB stick of the same version as the installed system
Mounted root into a local directory (btrfs): mount -o subvol=/root /dev/sda3 /mnt
Downloaded the filesystem package, telling yum that its working and base directories are at my mountpoint: yum -c /mnt/etc/yum.conf --installroot=/mnt --downloadonly --downloaddir=~/ install filesystem
Since the package filesystem.x.x.x.rpm had already been gloriously removed by the rpm -e filesystem --nodeps command, I installed the downloaded filesystem.rpm - at least I thought so. It turned out I had to force rpm because some other package from Google Earth was blocking my command:
rpm -Uvh --root=/mnt ~/filesystem.x.x.x.rpm --force
Finally, I edited /etc/selinux/config and turned SELinux off:
SELINUX=disabled
I'd take the drive out, install it in another system, mount it as a secondary drive, and fool around with RPM to install the package into the specified path.
Bear in mind you'll need to manually check that all your dependencies are installed too, and that you're installing the correct version for Fedora 18.
I guess there might be other ways to do this too, but as long as you have another system you can connect the drive to, this might be the least effort.
I'd boot your broken system off a rescue disk on DVD, CD, USB or what have you. My experience was with Knoppix (a few years back); it was regarded as the best. However, if you don't have that, google "fedora rescue" and download that. See if it can read your hard drive, perhaps allowing you to avoid losing files of value from the old system by copying them out to some removable media. It may even diagnose your situation and suggest a fix.
Otherwise, I suspect the least-effort path back to a working system will be to install Linux from scratch. The "filesystem" package is not really separable; it pretty much is the Linux installation. The kernel is still present and booting, but everything else is gone.
I looked for the ISO, mounted it, and extracted the rpm package filesystem-3.2-10.fc19.x86_64.rpm. I then booted a live CD, mounted my former working partition, and ran (from the root of the mounted partition, so the archive's paths land in place):
rpm2cpio /root/filesystem-3.2-10.fc19.x86_64.rpm | cpio -idmv

How to access a shared folder in VirtualBox? Host Win7, guest Fedora 16

I'm a newbie in Linux. I installed Fedora 16 as a guest in VirtualBox on Windows 7. Now I want to access a shared folder from Fedora. Here is what I did:
Install guest additions [OK]
Make a shared folder link in VirtualBox [OK]. Shared folder path in Windows 7: D:\share_folder_vm
In a terminal in Fedora, I ran some commands:
[hoangphi@localhost ~]$ su
Password:
[root@localhost hoangphi]# cd Desktop/
[root@localhost Desktop]# mkdir share_folder
[root@localhost Desktop]# sudo mount -t vboxsf D:\share_folder_vm \share_folder
/sbin/mount.vboxsf: mounting failed with the error: Protocol error
[root@localhost Desktop]#
I got this message: /sbin/mount.vboxsf: mounting failed with the error: Protocol error
share_folder_vm is the folder in Win7 Host and share_folder is the folder in Fedora Guest.
My question: How can I fix this problem?
Install Oracle Guest Additions: insert the Guest Additions CD image with [host hotkey (usually Right Ctrl)] + [d], then:
sudo /media/VBOXADDITIONS_4.*/VBoxLinuxAdditions.run
You can now enjoy:
A guest that can run at native screen resolution
Ability to share files between host and guest
Share the clipboard (allowing you to copy and paste between host and guest).
To share folders, set them up to be shared and consider the permissions. Note that the host's file permissions carry through: in other words, if you can't write to a file on the host, the guest can't either.
After setting up the file to be shared create a destination if you don't have one:
mkdir -p ~/destination
Now mount it under the name you configured it with:
sudo mount -t vboxsf myFileName ~/destination
As an extra tip you can really exploit this feature to do things like:
- Use the guest's Subversion client to create a repository in a mounted directory (you won't have a full svn client, but the repo can be used in an IDE on the host).
- I personally use my guest to download and unpack binaries like Tomcat to a targeted mount. Yes, you can use Linux to install things on Windows!
To unmount all shares:
sudo umount -f -a -t vboxsf
This thread has some great tips. However...
@GirishB's answer isn't correct - sorry. Jartender's is best.
Also, every post in here seems to assume you're logging in to the Linux guest as root, except for @tomoguisuru's. Yuck! Don't use root; use a separate user account and sudo when you need root privileges. Then this user (or any other user who needs the shared folder) should have membership in the vboxsf group, and @tomoguisuru's command is perfect, even terser than what I use.
Forget running mount yourself. Set up the shared folder to auto-mount and you'll find it under /media in my OEL (RHEL and CentOS are probably the same). If it's not there, just run "mount" with no arguments and look for a mounted directory of type vboxsf.
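Alternatively, passing the type filters the listing for you (standard mount behaviour):
mount -t vboxsf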
For accessing a shared folder, you have to have the "Oracle VM VirtualBox Extension Pack" installed.
Look at the bottom of this link; you can download it from there.
http://www.oracle.com/technetwork/server-storage/virtualbox/downloads/index.html
I just figured it out. You need to add a shared folder in VirtualBox before you access it from the guest.
Click "Devices" in the menu bar -> Shared Folders -> add a directory and name it.
Then, in the guest terminal, use:
sudo mount -t vboxsf myFileName ~/destination
Don't refer directly to the host directory.
There's a simpler way I found when running Linux Mint.
Ensure you install the Guest Additions from the command line and that you have the folder(s) shared with the "Auto-mount" and "Make Permanent" settings selected in the "Shared Folders" tab of the machine settings.
Launch the user management application from the Application/Settings/System Settings menu (requires sudo) within the Mint menu.
In the "Privileges and Groups" tab, check the box next to the "vboxsf" group, then apply and OK your way back out.
Any user in the vboxsf group has full access to any shared folders on each boot, with no manual mounting or unmounting.
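For reference, the same group membership can be granted from a terminal (standard usermod; log out and back in for it to take effect):
sudo usermod -aG vboxsf $USER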
I usually do the following in addition to the above, just to have quick access:
Open the Dolphin file manager and navigate to /media/
Right-click on the shared folder and click "Add to Places"
You probably need to change your mount command from:
[root@localhost Desktop]# sudo mount -t vboxsf D:\share_folder_vm \share_folder
to:
[root@localhost Desktop]# sudo mount -t vboxsf share_name \share_folder
where share_name is the "Name" of the share in the VirtualBox -> Shared Folders -> Folder List list box. The argument you have ("D:\share_folder_vm") is the "Path" of the share on the host, not the "Name".
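Putting that together for this question (a sketch, assuming the share's "Name" in the VirtualBox Folder List is share_folder_vm):
mkdir ~/share_folder
sudo mount -t vboxsf share_folder_vm ~/share_folder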
Maybe this can help others:
I had the same problem, and after searching with Google I found that it can be caused by the permissions of the folder... So you first need to add permissions...
$ chmod 777 share_folder
Then run again:
$ sudo mount -t vboxsf D:\share_folder_vm \share_folder
Check the answers here: Error mounting VirtualBox shared folders in an Ubuntu guest...
Some VirtualBox versions have incompatibilities with some Linux versions, so installing via the "Guest Additions CD image" can be hard. Linux distributions frequently have a good companion guest-additions package (equivalent in function to the CD image) which can be installed with:
sudo apt-get install virtualbox-guest-dkms
After that, in the guest's window menu, go to Devices -> Shared Folders Settings -> Shared Folders and add a host folder to Machine Folders (mark the Auto-mount option); you can then see the shared folder in the guest's file manager.
There is a really simple tutorial here: http://my-wd-local.wikidot.com/otherapp:configure-virtualbox-shared-folders-in-a-windows-ho
It says to do:
sudo mkdir /mnt/vbox_share
sudo mount.vboxsf nameAddedAsShared /mnt/vbox_share
These are the steps to share a folder from Windows to a Linux VirtualBox guest:
Step 1: Install the VirtualBox Extension Pack from this link
Step 2: Install Oracle Guest Additions by pressing Right Ctrl and d together, then run the command
sudo /media/VBOXADDITIONS_4.*/VBoxLinuxAdditions.run
Step 3: Create the shared folder by clicking Settings in VBox
Then Shared Folders -> + and give the folder a name (e.g. VB_Share)
Select the shared folder's path on Windows (e.g. D:\VBox_Share)
Step 4: Create a folder named VB_Share in /home/user-name (e.g. /home/satish/VB_Share) and open its permissions:
mkdir VB_Share
chmod 777 VB_Share
Step 5: Run the following command
sudo mount -t vboxsf VB_Share VB_Share

Can oprofile be made to use a directory other than /root/.oprofile?

We're trying to use oprofile to track down performance problems on a server cluster. However, the servers in question have a read-only file system, where /var/tmp is the only writeable directory.
OProfile wants to create two directories whenever it runs: /root/.oprofile and /var/lib/oprofile, but it can't, because the filesystem is read-only. I can use the --session-dir command-line option to make it write its logs somewhere other than /var/lib, but I can't find any such option to make it use some directory other than /root/.oprofile.
The filesystem is read-only because it is on non-writeable media, not because of permissions - i.e., not even the superuser can write to those directories. We can cook a new ROM image of the filesystem (which is how we installed oprofile, obviously), but there is no way for a runtime program to write to /root, whether it is superuser or not.
I tried creating a symlink in the ROM that points /root/.oprofile -> /var/tmp/oprofile, but apparently oprofile doesn't see this symlink as a directory, and fails when run:
redacted#redacted:~$ sudo opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
mkdir: cannot create directory `/root/.oprofile': File exists
Couldn't mkdir -p /root/.oprofile
We must run our profilers on this particular system, because the performance issues we're trying to investigate don't manifest if we build and run the app on a development server. We can't just run our tests on a programmer's workstation and profile the app there, because the problem doesn't happen there.
Is there some way to configure oprofile so that it doesn't use /root ?
I guess it should be as simple as overriding the HOME environment variable:
HOME=/tmp/fakehome sudo -E opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
If that doesn't work out, you could have a look at unionfs or aufs to create a writable overlay. You might even just mount tmpfs on /root, or something simple like that, as sketched below.
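A minimal sketch of the tmpfs route (contents of /root are then lost at reboot, which is presumably fine on a read-only system):
sudo mount -t tmpfs -o size=16m tmpfs /root
sudo opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo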
It turns out that this directory is hardcoded into the opcontrol bash script:
# location for daemon setup information
SETUP_DIR="/root/.oprofile"
SETUP_FILE="$SETUP_DIR/daemonrc"
Editing those lines seemed to get it working, more or less.
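Since the root filesystem here is read-only, the edit would have to go into the cooked ROM image; a sketch of the change (the replacement path is an arbitrary writable location):
sed -i 's|^SETUP_DIR="/root/.oprofile"|SETUP_DIR="/var/tmp/oprofile-setup"|' opcontrol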

Running apt-get for another partition/directory?

I have booted my system from a live Ubuntu CD, and I need to fix some package problems. I have mounted my hard drive, and now I want to run apt-get as if I had booted normally, i.e. change the working root directory for apt-get so it will work on my hard drive. I have done this before, but I can't remember the syntax. I think it was only some flag, like this:
apt-get --root-directory=/mnt/partition1 install....
But I only get "Command line option...is not understood". Any ideas?
Also this should work:
sudo apt-get -o Dir=/media/partition1 update
chroot /mnt/partition1
If your system uses several disk partitions, you may have to mount some of them in order to get the package system working. (I stopped setting up multiple partitions 10 years ago, when hard disks started to get too large for raw physical backup.)
This wouldn't work if you don't already have a usable debian system in that location. – akostadinov
If you can't get the package system working when chrooting, perhaps it is too messed up to ever be trusted again - in my experience the effort to bring it back to life rarely pays off. If that happens, be happy you can still access your HD, back up your data, and perform a clean reinstall.
Some relevant comments from other answer:
apt-get -o RootDir=/tmp/test_apt sets (almost) all paths to be in the different root. BTW, on a running system, if you copy /etc/apt and /usr/lib/apt, run mkdir -p usr/lib etc var/cache var/lib/dpkg var/lib/apt/lists/partial var/cache/apt/archives/partial, and finally touch var/lib/dpkg/status, then apt is going to work in that root. It can even work as a non-root user if you add the option -o Debug::NoLocking=1. The no-locking option is necessary because I couldn't find a way to set the lock file inside the different root directory. – akostadinov
"Work" means using search, downloading packages, and such operations. Actually installing is not possible if some essential packages are not already there. debootstrap can help if the goal is actually installing packages into a new root for whatever reason. – akostadinov
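Spelled out as a sequence (a sketch based on akostadinov's comment; /tmp/test_apt is the example root):
mkdir -p /tmp/test_apt
cd /tmp/test_apt
mkdir -p usr/lib etc var/cache var/lib/dpkg var/lib/apt/lists/partial var/cache/apt/archives/partial
cp -a /etc/apt etc/
cp -a /usr/lib/apt usr/lib/
touch var/lib/dpkg/status
apt-get -o RootDir=/tmp/test_apt -o Debug::NoLocking=1 update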
Running chroot /mnt/partition1 will start a new shell in which the root of the filesystem is /mnt/partition1. Assuming the apt-get on your hard drive still works correctly, you can proceed from there.
dpkg --root=/mnt/partition1 -i mypackage.deb is an option that doesn't require chroot, but does require you to download the package yourself.
