I am trying to build a small OS using Buildroot and I am able to generate an ISO hybrid image that boots from USB. The generated ISO image works fine as a live CD, but I am not able to install it onto a hard disk (like a regular OS image).
I have tried modifying /init, but I need more guidance on how to do this. Any help here will be much appreciated.
To install on a hard disk, you want a regular root filesystem + kernel. Select:
BR2_LINUX_KERNEL_INSTALL_TARGET ("Install kernel image to /boot in target") to make the kernel part of the rootfs;
grub2 as the bootloader;
ext4 as the root filesystem;
host-genimage to create a partitioned hard disk image.
You also need to supply:
a grub.cfg to configure grub;
a post-build script to copy the grub.cfg to the right place (a sketch follows below);
a genimage.cfg to configure the hard disk image.
Look at configs/pc_x86_64_efi_defconfig for inspiration. You might even be able to use it directly.
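As an illustration, a post-build script could look roughly like the following. This is a minimal sketch, assuming a BIOS boot with grub2 and that your grub.cfg sits next to the script in your board directory (both are assumptions, adjust to your layout); Buildroot passes the target directory as the first argument.

    #!/bin/sh
    # Post-build script: Buildroot runs this after the root filesystem has
    # been assembled, with the target directory as the first argument.
    set -e

    TARGET_DIR="$1"
    BOARD_DIR="$(dirname "$0")"   # assumed: grub.cfg lives next to this script

    # For a BIOS boot, grub2 reads its configuration from /boot/grub/grub.cfg
    # on the root partition (an EFI setup keeps it on the EFI partition instead).
    mkdir -p "${TARGET_DIR}/boot/grub"
    cp "${BOARD_DIR}/grub.cfg" "${TARGET_DIR}/boot/grub/grub.cfg"

The genimage.cfg then only has to describe the partition layout (boot flag plus an ext4 root partition built from the rootfs image Buildroot produces); the files referenced by the defconfig mentioned above show a complete working example of that setup.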
I have a customized Linux filesystem with some binaries and new folders in the root file system. This Linux filesystem was created for a small single-board computer.
Currently, I compress the root folder of the customized filesystem into a tar.gz file. With this tar.gz file I can share it with my friends. The file then has to be extracted onto their SD card. With this method they can also update a library or binary for testing.
However, this mechanism (creating and deploying) takes a lot of time.
My questions are:
1. How can I improve creating and deploying the customized Linux image?
2. Linux distributions use the .iso or .img format. What is the reason for using .iso or .img instead of a tar.gz or zip file?
Thanks.
You have two steps to speed up:
Create an image that is shareable with your friends
Deploy the image on the board to test it
Step 1: Create a shareable image
Automate as much as you can. You could even set up a Jenkins server, which should be able to automate anything that looks like a repetitive job (compiling, compressing, uploading or sending the image to friends).
Compress your image using all your cores. AFAIK gzip only uses one core; pigz should compress/decompress faster.
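A sketch of that with tar piped through pigz (the rootfs/ directory and the /mnt/sdcard mount point are just placeholder paths):

    # Pack: tar streams the rootfs, pigz compresses it on all available cores.
    tar -cf - -C rootfs . | pigz -p "$(nproc)" > rootfs.tar.gz

    # Unpack on the receiving side, straight onto the mounted SD card:
    pigz -dc rootfs.tar.gz | tar -xpf - -C /mnt/sdcard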
Step 2: Run your rootfs from NFS
This is the real time saver, because you don't need to copy your rootfs to an SD card anymore:
If your bootloader supports it, use NFS/TFTP boot. Your board then boots directly from a network mount instead of an SD card. I don't know which bootloader you use, but U-Boot supports it, and others probably do as well.
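A rough sketch of the server side, assuming the build host runs an NFS server and that /srv/nfs/rootfs, the subnet and the host IP are placeholders; the board's kernel then gets its root filesystem via the command line shown in the comment (it needs NFS-root support built in, i.e. CONFIG_ROOT_NFS and CONFIG_IP_PNP):

    # On the development host: unpack the rootfs and export it over NFS.
    sudo mkdir -p /srv/nfs/rootfs
    sudo tar -xpf rootfs.tar.gz -C /srv/nfs/rootfs
    echo '/srv/nfs/rootfs 192.168.1.0/24(rw,no_root_squash,no_subtree_check)' | sudo tee -a /etc/exports
    sudo exportfs -ra

    # Kernel command line set from the bootloader, so the board mounts its
    # root over the network instead of from the SD card:
    #   root=/dev/nfs nfsroot=192.168.1.10:/srv/nfs/rootfs,vers=3 ip=dhcp rw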
It depends on what exactly needs to be updated. If the entire filesystem changes every time, then just distribute the compressed image (compressed with xz, for example). If only a small part of the system changes, use the package manager instead: split that part out into a package, build the package with the standard distro tools, and distribute only that.
If your SD card image has a fixed size, you can also use rsync/xdelta to distribute only the changes, but their efficiency depends on how much of the image actually changes.
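For example, a delta-based update could look like this (a sketch; the image names and the remote path are placeholders):

    # Create a binary delta between the previous and the new image...
    xdelta3 -e -s old.img new.img update.vcdiff

    # ...which the receiver applies to their copy of the old image:
    xdelta3 -d -s old.img update.vcdiff new.img

    # Alternatively, rsync transfers only the changed blocks when updating
    # an existing image in place on the remote side:
    rsync -av --inplace new.img friend-host:/path/to/sdcard.img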
First off, my intention is to create a portable, bootable USB drive containing a GNU/Linux distribution. Specifically, I want to use Arch Linux with a squashfs read-only root filesystem.
The squashfs image is based on a snapshot of a working VM. The base system with its services, like ssh, works out of the box as expected. But when trying to launch GNOME via systemd (systemctl start gdm), all I see is a black screen (supposedly the X server started but gdm fails to load). I have already tried to figure out what's happening, but failed to identify the exact problem.
Home directories are writeable
/tmp is writeable
/var/log is writeable
/var/run & /run are writeable anyway
/var/log/gdm gets created but stays empty.
Which modules may require write access to any other files? Is there any documentation? What would make sense to strace or similar?
My desire is to know the root of the problem and fix it, instead of using workarounds like unionfs. Thanks for any help or hints!
Although it's not relevant, for those who might wonder why I want to do this, here are some points to consider:
Stability - as you cannot modify system files, you cannot mess up the system (unless you write garbage directly to the drive, of course)
Storage - as files are compressed, more data fits on the drive
Performance - as I/O on most USB drives is slow, compression gives you higher I/O speed
Portability - no special treatment for read-only storage, you might copy it on a CD or any other read-only technology and it will still work the same way as it would on a writeable disk
Update
I figured out that the problem was actually at /var/lib/gdm. GDM tried to access files in there and (silently) failed to do so, giving me a black screen.
journalctl was the debugging command I was missing in the first place.
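For reference, roughly what that looks like (a sketch; mounting a tmpfs over /var/lib/gdm is only one way to make it writable, and it loses its contents on every reboot):

    # Inspect GDM's messages for the current boot; this is where the
    # otherwise silent failure shows up:
    journalctl -b -u gdm

    # Give GDM a writable state directory on top of the read-only root:
    mount -t tmpfs tmpfs /var/lib/gdm
    chown gdm:gdm /var/lib/gdm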
As part of my infrastructure I have many virtual machines running different Linux distros, under Proxmox using OpenVZ. My problem is that I need to export some of my VMs into personalized installable ISOs (installable snapshots of the current state of the VMs). Some of them are running Ubuntu, some CentOS, so my questions are:
1. Is there a way I can do this that is aware of the OS the VM is running?
2. Is exporting VMs to ISOs the way I just explained the way to go, or is there another approach?
I'm open to any advice from those who have experience with this subject, even if I have to set up a different virtualization technology to host the VMs.
Your question is pretty vague on your requirements. I'll try to give you some ideas:
What do you mean by "current state"? If you really want all the running processes, then you should use something like VirtualBox and take a snapshot. You can easily boot that up on another computer and continue running where you left off, and it's independent of the OS.
If you really mean just the filesystem, then simply copying the filesystem and burning it to a CD is unlikely to give you good results. For instance, there are many areas that are expected to be writable (/var, /tmp, even /etc for /etc/resolv.conf).
One simple idea is to just 'tar' up the filesystem and untar it on another OpenVZ distro. (I'm sure someone has made a bootable OpenVZ distro.)
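A sketch of that tar step, assuming the container's files live under /var/lib/vz/private/<CTID> (usually the OpenVZ private area on Proxmox) and that 101 is a hypothetical container ID:

    # Archive the container's filesystem, keeping permissions and numeric
    # ownership (user names on the destination host may differ).
    CTID=101
    tar --numeric-owner -czpf ct-${CTID}.tar.gz -C /var/lib/vz/private/${CTID} .

    # On the destination, unpack it into the new container's (or chroot's) root:
    tar --numeric-owner -xzpf ct-${CTID}.tar.gz -C /path/to/new/root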
If you want a real bootable ISO, there are a LOT of different options. For example, you could have the kernel mount the ISO as root. Or you could boot to a RAMDisk as root, and unpack the filesystem you need. Or you could mount the ISO as root with an AUFS overlay filesystem. Or you could mount some directories as a SquashFS filesystem onto a RAM root.
But if you really want simplicity in "moving VMs around", look into Docker. It has a simple way to push a filesystem up to a public or private server, then download it on the other side, but save bandwidth on common elements like the OS and Apache installs. (If you do it right.)
I am compiling my own Linux kernel and userland tools for a PXE environment meant for cloning and reimaging. Right now, I'm sticking to a specific kernel version and using preconfigured .config's for building the Linux kernel.
I need to change from using preconfigured .config's to automatically generating the default configuration for the specified architecture, and then enabling all ethernet, ATA, SATA, and SCSI drivers.
The reasons I want to do this are:
Updating the kernel means updating the preconfigured .config's, which takes too much time to do manually. The way I'm doing it now is using menuconfig, enabling the options I need, and saving the resulting .config to my repository.
I know the kernel I'm building is missing some drivers, because I've encountered some PCs that were not able to mount the NFS share because Linux could not find an ethernet device (which I've verified by booting an Ubuntu CD, which did find the ethernet device). I want an automated way of building any Linux kernel version that will guarantee that ALL the drivers I need are pulled in.
Using a distribution's configuration pulls in too many unnecessary drivers and features for my purposes. It lengthens the kernel build time from 10-15 minutes to an hour or more, and the resulting image is too big.
Does anyone know how to write a Bash script to accomplish this?
Have you considered using a text editor to modify the .config file?
Then you can modify it using search and replace.
Plus, there are other choices for configuring the kernel besides the menu-driven "menuconfig".
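One way to script it along those lines (a sketch; scripts/config ships with the kernel source, and the CONFIG_* symbols in the loop are only examples of the Ethernet/ATA/SATA/SCSI options you would actually want, not a complete list):

    #!/bin/bash
    # Run from the top of the kernel source tree.
    set -e

    # Start from the default configuration for the target architecture.
    make ARCH=x86_64 defconfig

    # Flip on the driver classes we care about; scripts/config edits .config in place.
    for opt in CONFIG_ETHERNET CONFIG_ATA CONFIG_SATA_AHCI CONFIG_SCSI CONFIG_BLK_DEV_SD \
               CONFIG_E1000E CONFIG_R8169 CONFIG_NFS_FS CONFIG_ROOT_NFS; do
        ./scripts/config --enable "$opt"
    done

    # Resolve dependencies for the newly enabled options, taking defaults
    # for everything that was not set explicitly.
    make ARCH=x86_64 olddefconfig

This regenerates a fresh .config for any kernel version without keeping preconfigured files around; the loop is the only part you maintain.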
Is it possible to mount an ISO image from a USB disk and use it as the filesystem at boot time (with GRUB)? I ask because I would like to put the Linux kernel image and an ISO to be used as the filesystem (with a Fedora bootstrap) onto a USB disk (without creating new partitions, etc.), as is possible with QEMU, for example.
Qemu is a virtualization/emulation environment. Grub is a bootloader, designed to get a kernel loaded into memory and start it executing. Neither program is directly related to your question, although you could certainly use Qemu to execute a VM that uses Grub to start Linux to do what you want.
Modern Linux distributions create an initrd, which the bootloader puts into memory for the kernel to use as its initial root file system. The initrd does things like loading the modules necessary to access the hard disks where the real root file system lives. In your case, you should look at having the initrd find your ISO, mount it, and use it as the root.
The contents of initrd vary based on what distro you're using. I'd grab a livecd from somewhere, dump its initrd's contents with zcat /boot/initrd-2.6.whatever.img | cpio -id, and check out what it's doing. Look for the init file, which will be the first user-space process run by the kernel.
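The init in that initrd is usually just a shell script. A very rough sketch of the piece you would add, assuming a BusyBox-based initramfs and treating the device name (/dev/sdb1) and the ISO's path on the stick (fedora.iso) as placeholders:

    #!/bin/sh
    # Fragment of an initramfs /init: mount the USB stick, loop-mount the
    # ISO that lives on it, and switch to the ISO's contents as the new root.
    mount -t proc proc /proc
    mount -t devtmpfs devtmpfs /dev

    mkdir -p /mnt/usb /mnt/iso
    mount /dev/sdb1 /mnt/usb                    # the USB stick
    mount -o loop /mnt/usb/fedora.iso /mnt/iso  # the ISO stored on it

    # Hand control to the init system inside the ISO. Real implementations
    # also keep the USB mount reachable from the new root (mount --move),
    # which is glossed over here.
    exec switch_root /mnt/iso /sbin/init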
GRUB's loopback feature should allow you to boot a kernel and initrd from within an ISO image. Unfortunately, the kernel on its own cannot mount a loop device as the root filesystem; you need an initrd/initramfs that finds and loop-mounts the ISO for you (as described in the other answer), so with GRUB alone you're out of luck.