Copy pre-configured OS image to slave local drive with PXE [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 5 years ago.
I have a handful of servers and would like to configure them as close as possible to a standard HPC cluster, currently focusing on automated node provisioning. My requirements for this are:
All nodes are identical, so I'd like to use a pre-configured install I have set up on one of the nodes
I have created an image of this install, which I would like to use to boot all nodes (this is a copy of the filesystem on the configured node)
The software image should be installed on the local disk of each node (not mounted over NFS)
I've been playing around with PXE and managed to load Ubuntu 14.04 on a slave node, with the software image provided through NFS.
Is it possible to tell PXE to copy the contents of the NFS-mounted directory to a local disk partition and then make it boot/run linux from there?

Is it possible to tell PXE to copy the contents of the NFS-mounted directory to a local disk partition and then make it boot/run Linux from there?
PXE is just a mechanism for booting a system over the network. Since you have complete control over what you feed your systems via PXE, you can do pretty much anything you want.
For your scenario, you would want to boot into some sort of minimal Linux environment that is able to:
Mount NFS filesystems
Copy files
Configure the bootloader locally
That's a pretty short shell script. Once it completes, you would reboot the system, which ideally would be configured to prefer booting from the local disk (so that once you have configured the local boot loader, the system will not attempt to PXE boot during subsequent boots).
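That script might look like the following sketch, run from the PXE-booted minimal environment. The NFS export, disk name, and single-partition layout are assumptions you would adjust for your cluster; by default it only prints the commands (dry run) rather than executing them.

```shell
#!/bin/sh
# Sketch of the copy-and-make-bootable step. Assumptions: the image is
# exported over NFS at $1, the node's disk $2 already has one partition,
# and the image contains a Debian-style GRUB. Pass "" as the third
# argument to execute for real instead of dry-running.
set -eu

provision() {
    nfs_export="$1"      # e.g. 192.168.1.10:/images/node (hypothetical)
    disk="$2"            # e.g. /dev/sda
    run="${3-echo}"      # "echo" = dry run: print each command

    $run mkfs.ext4 -F "${disk}1"                 # format the local partition
    $run mkdir -p /mnt/local /mnt/image
    $run mount "${disk}1" /mnt/local             # local target
    $run mount -o ro "$nfs_export" /mnt/image    # NFS-provided image
    $run rsync -aHAX /mnt/image/ /mnt/local/     # copy the filesystem
    for fs in dev proc sys; do                   # bind mounts for the chroot
        $run mount --bind "/$fs" "/mnt/local/$fs"
    done
    $run chroot /mnt/local grub-install "$disk"  # configure the local bootloader
    $run chroot /mnt/local update-grub
    $run reboot                                  # subsequent boots use local disk
}

# Dry run: print what would happen.
provision "192.168.1.10:/images/node" "/dev/sda"
```

The dry-run default makes it safe to iterate on the script before pointing it at real disks.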
You may want to take a look at the Clonezilla project, which does some of what you want.

Related

Firecracker microVM: how to create custom Firecracker microVM and file system images [closed]

Closed 3 years ago.
I went through the Getting Started guide of Firecracker microVM, building from source via Docker and following the steps. I have working knowledge of Docker via CLI/Visual Studio UI/ECS and remember building AWS AMIs manually before Docker's ubiquity...
However, this part is completely uncharted territory for me, and several rounds of googling over the past weeks did not help:
Next, you will need an uncompressed Linux kernel binary, and an ext4 file system image (to use as rootfs). You can use these files from our microVM image S3 bucket: kernel, and rootfs.
What is hello-vmlinux.bin, and how do I build one with my pre-installed apps? Could it be done in a simple way, similar to Docker or an AMI?
What is the hello-rootfs.ext4 file, and how do I create a custom one for the same purpose as in 1. above?
vmlinux.bin - it's the Linux kernel image the VM will boot. You can probably use the provided kernel without any modifications.
hello-rootfs.ext4 - it's a file containing the root filesystem for your VM.
You have to modify this file to add your application:
Mount the provided rootfs to make your changes:
mkdir -p /tmp/my-rootfs
sudo mount rootfs.ext4 /tmp/my-rootfs
Copy your application and all its dependencies to /tmp/my-rootfs/opt/
Add a start script for your application to /tmp/my-rootfs/etc/init.d/
The start script has to be written for the OpenRC init system.
Unmount the rootfs:
sudo umount /tmp/my-rootfs
Start Firecracker; your application will be started as part of the VM's init-system startup.
You probably also want to check how to provide network access to your VM: vm network setup doc
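A minimal OpenRC start script for the step above might look like this sketch, saved as /tmp/my-rootfs/etc/init.d/myapp and marked executable. The service name and binary path are hypothetical placeholders for your application:

```shell
#!/sbin/openrc-run
# Minimal OpenRC service for a hypothetical app installed in /opt/myapp.

name="myapp"
command="/opt/myapp/myapp"
command_background="yes"     # OpenRC backgrounds the process itself
pidfile="/run/myapp.pid"

depend() {
    need net                 # start after networking is up
}
```

Inside the rootfs it can then be enabled with rc-update add myapp default (run via chroot before unmounting).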

How to recursively get dependencies for a package, with versions? [closed]

Closed 4 years ago.
I need to install a package on a system without internet access (the package contains a driver for network card).
System A has internet connection and runs Ubuntu 14; System B has no internet connection and runs Ubuntu 16.
How can I download all dependencies recursively, with the correct versions, on System A, so that they can then be installed on System B?
I would suggest that you run a Docker container (or some other type of virtualization) with Ubuntu 16.04 on System A. After that, you can update the package index (apt update) and install the desired packages on that system. Finally, copy the package index from /var/lib/apt, and the packages themselves from /var/cache/apt/archives, to System B.
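That workflow might be sketched as follows, assuming Docker is installed on System A and using a hypothetical package name net-driver-dkms; by default it only prints the commands (dry run), so you can review them before setting RUN="" to execute:

```shell
#!/bin/sh
# Download a package and its full dependency chain for Ubuntu 16.04 on
# System A, using a container so versions match System B. The package
# name is a hypothetical placeholder. Dry run by default (RUN=echo).
set -eu

PKG="${PKG:-net-driver-dkms}"
RUN="${RUN-echo}"

fetch_debs() {
    # --download-only resolves dependencies recursively but leaves the
    # .debs in /var/cache/apt/archives instead of installing them.
    $RUN docker run --rm -v "$PWD/debs:/debs" ubuntu:16.04 \
        sh -c "apt-get update && \
               apt-get install -y --download-only $PKG && \
               cp /var/cache/apt/archives/*.deb /debs/"
}

fetch_debs

# Then copy the debs/ directory to System B (USB stick, etc.) and run:
$RUN dpkg -i debs/*.deb
```

Installing with dpkg -i on all the collected .debs at once lets dpkg order the dependencies itself.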
It's good practice to restrict hosts from internet access. However, as a patch-management solution, you should set up a local mirror - this centralizes patching for the entire organization. It's not limited to Ubuntu; you can host mirrors for multiple Linux distros. The only thing you really need is a large-capacity disk, perhaps mirrored for some non-critical resiliency. This also cuts back on bandwidth in a multi-server environment, limiting it to a single host pulling updates to its mirror once. Just make sure you have a process or script that regularly checks for updates. That way your hosts are ready for patching when you need it, assuming you stay on top of emerging threats and vulnerability management for the various *nix platforms.
I'm not a huge fan of reinventing the wheel, so here are a couple of how-to references.
How-to: Set up a local Ubuntu apt repo (can be set up to mirror Ubuntu 14/15/16 to support all your Linux hosts)
How-to: Set up a local CentOS YUM repo (just in case you have some RH-based servers)
Afterwards, what you're going to have to do is change your /etc/apt/sources.list to point to your new internal repository. You can copy the lines already there and change the server domain name to your local server. Then none of your Linux hosts need to communicate with any hosts outside your network; only the one server pulling from the mirrors does. It will definitely help you refine your security needs.
For RHEL-based systems using yum, it's configured in /etc/yum.repos.d/{reponame}.repo
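For example, the modified /etc/apt/sources.list on an Ubuntu 16.04 client might look like this, where mirror.internal.example is a hypothetical hostname for your local mirror:

```
deb http://mirror.internal.example/ubuntu xenial main restricted universe multiverse
deb http://mirror.internal.example/ubuntu xenial-updates main restricted universe multiverse
deb http://mirror.internal.example/ubuntu xenial-security main restricted universe multiverse
```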

Make a Linux machine GRUB-bootable using a bootable disk [closed]

Closed. This question is not about programming or software development. It is not currently accepting answers.
Closed 3 months ago.
I am not familiar with GRUB, and know very little about Linux.
What I currently have is the recovered disk of a Linux machine.
The source machine is GRUB-bootable, but it needs to boot on another platform (hypervisor/cloud).
My team used extlinux to make it bootable by overwriting its boot code, so that the machine boots on other platforms like cloud/hypervisor.
They did something like in this link.
I want to make that machine GRUB-bootable, so I tried and came up with the following:
Created a 1 GB disk.
Installed GRUB on a FAT32 partition using the command from the link below.
Content of grub.cfg:
menuentry 'usbboot ubuntu' {
    set root=(hd0,1)
    linux (hd1,3)/boot/vmlinuz.efi root=/dev/sdb3
    initrd (hd1,3)/boot/initrd.lz
}
After that I created a VM, attaching the 1 GB disk first and the recovered disk second.
Please help me resolve the issue.
I was able to solve the issue. The problem was that there was no such device as /dev/sdb3, which may be due to the hypervisor type. I tried the mount command in BusyBox and found the disk there as /dev/vdb3.
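Concretely, the only change needed in the grub.cfg from the question is the kernel's root= device (a sketch - the GRUB device numbers and the /dev/vdb3 name depend on the hypervisor and attach order):

```
menuentry 'usbboot ubuntu' {
    set root=(hd0,1)
    linux (hd1,3)/boot/vmlinuz.efi root=/dev/vdb3
    initrd (hd1,3)/boot/initrd.lz
}
```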

Remote File Transfer from Linux Machine to Windows Machine [closed]

Closed 6 years ago.
I am looking to remotely transfer a file from my Linux machine to a Windows machine. I have done some research, and it appears that scp is what I want to use to achieve this. However, all of the code that I'm seeing appears to use cygwin (or similar) already installed on the Windows machine - hardly "remote." My two systems are completely separate and have their own unique IP addresses.
Filezilla or WinSCP will do the job. You only need an SSH server running on your Linux machine, the SSH port (tcp/22 by default) open in the firewall, and a Windows computer that can reach the Linux host - you can run ping <Linux-machine-IP> from your Windows computer to verify.
If you want something else, you could configure Samba or perhaps WebDAV (httpd.apache.org/docs/2.4/mod/mod_dav.html), which allows you to mount your Linux directories as drives in Windows without additional tools. For example, your Linux home /home/user can be mounted as the Y: drive in Windows.
If you already have an SSH server on your Linux machine, I suggest using Filezilla, which comes with a GUI.
You only need to install the client on Windows; don't bother with the server - a classic SSH server on the Linux side does the job.
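Alternatively, Windows 10 and later ship with an OpenSSH scp client, so a single command run from the Windows side can pull the file. The host address and paths in this sketch are hypothetical; it builds and prints the command rather than running it:

```shell
#!/bin/sh
# Build the scp command to run on the Windows machine (PowerShell/cmd).
# Host address and file paths are hypothetical placeholders.
set -eu

LINUX_HOST="user@192.168.1.50"
SRC="/home/user/report.pdf"
DST='C:\Users\me\Downloads\'

SCP_CMD="scp $LINUX_HOST:$SRC $DST"
printf 'Run on the Windows machine: %s\n' "$SCP_CMD"
```

The same command with the arguments swapped pushes a file the other way, as long as the SSH server is on the Linux side.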

Why do we sometimes need to mount devices under root? [closed]

Closed 6 years ago.
I am a newbie to mount. From what I know, the root filesystem / is automatically mounted from /dev/xvda1 or /dev/sda1. In some tutorials, people mount a device like /dev/xvdb1 at a directory such as /dummy, and I don't understand the point of doing this, since the parent root filesystem / is already mounted. Could someone explain this to me?
Thanks in advance.
To throw out a very non-exclusive list of possibilities:
Sometimes / doesn't have the capacity for what you intend to use it for, so you want to use a filesystem located on a different physical device for extra storage.
Sometimes you want to mount content from a filesystem that isn't capable of being used as root -- for instance, a FAT or NTFS filesystem, which doesn't properly support UNIX semantics.
Sometimes your other block device is removable, and you're mounting it only temporarily.
Sometimes your other block device is located on media that isn't available at boot time -- requiring iSCSI setup or other operations that prevent it from being used as root without initrd / initramfs facilities your operating system doesn't provide.
Sometimes you want to use a different filesystem with different semantics -- for instance, maybe your xvdb1 is a GFS shared-block filesystem that other machines also have mounted at the same time for combined storage.
Sometimes you have a read-only block device with bulk contents that can't change, and you're mounting it to multiple VMs, vs systems having their own local read-write storage.
The number of possibilities is nearly endless.
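As a concrete sketch of the first case (the device name and mount point are hypothetical): a second data volume is formatted once, then mounted at boot through /etc/fstab:

```
# /etc/fstab - mount the extra volume /dev/xvdb1 at /data
# nofail lets the system boot even if the device is absent
/dev/xvdb1  /data  ext4  defaults,nofail  0  2
```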
This isn't a software development question, and doesn't belong on StackOverflow.
