Is it possible to script starting up a FreeBSD VM, running a program in it, and fetching the result? - freebsd

I have a library that I'd like to test on FreeBSD. My CI setup doesn't have any FreeBSD systems, and adding them would be difficult, but I can spin up a VM inside my CI script. (In fact, I already do this to test on more exotic Linux kernel versions.)
For Linux, this is pretty easy: grab a pre-built machine image from some distro site, and use cloud-init to inject a first-run script, done.
Is it possible to do the same thing with FreeBSD? I'm looking for an automated way to take a standard FreeBSD machine image (e.g. downloaded from https://freebsd.org), boot it, and inject a program to run. The tricky part is that it should be entirely automated – I don't want to have to manually click through an installer every time FreeBSD makes a new release.

Out of the box there is no option like cloud-init, but you could create your own image and use firstboot(7). For example, the following script is used to bootstrap a VM with SaltStack in AWS:
#!/bin/sh

# KEYWORD: firstboot
# PROVIDE: set_hostname
# REQUIRE: NETWORKING
# BEFORE: LOGIN

. /etc/rc.subr

name="set_hostname"
rcvar=set_hostname_enable
start_cmd="set_hostname_run"
stop_cmd=":"

# Credentials for the aws CLI call below
export AWS_ACCESS_KEY_ID=key
export AWS_SECRET_ACCESS_KEY=secret
export AWS_DEFAULT_REGION=region

TAG_NAME="Salt"

# Ask the EC2 metadata service for the instance ID and region
INSTANCE_ID=$(/usr/local/bin/curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(/usr/local/bin/curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')

# Read the value of this instance's "Salt" tag
TAG_VALUE=$(/usr/local/bin/aws ec2 describe-tags --filters "Name=resource-id,Values=${INSTANCE_ID}" "Name=key,Values=${TAG_NAME}" --region ${REGION} --output=text | cut -f5)

set_hostname_run()
{
    # Use the instance ID as the hostname and persist it across reboots
    hostname ${INSTANCE_ID}
    sysrc hostname="${INSTANCE_ID}"
    sysrc salt_minion_enable="YES"
    echo ${INSTANCE_ID} > /usr/local/etc/salt/minion_id
    pw usermod root -c "root on ${INSTANCE_ID}"
    # If the tag is set, record it as a Salt grain
    if [ -n "${TAG_VALUE}" ]; then
        echo "node_type: ${TAG_VALUE}" > /usr/local/etc/salt/grains
    fi
    service salt_minion start
}

load_rc_config $name
run_rc_command "$1"
To create your own images you could use this script as a starting point: https://github.com/fabrik-red/images/blob/master/fabrik.sh#L124. There is more information here: https://fabrik.red/post/creating-the-image/
You can also simply install FreeBSD in VirtualBox, configure your firstboot scripts, test, and export the image once you are happy with the results. Just make sure /firstboot exists (touch /firstboot) before you export: the file is removed after the first boot, and if it is missing from the exported image your scripts will never run.
After you have created the image you can use it multiple times; there is no need to create a new "custom" VM every time. It all depends on the scripts you use to bootstrap and what they load on the first boot.
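For reference, wiring a script like the one above into an image comes down to dropping it into an rc.d directory, enabling its rcvar, and arming firstboot. A minimal sketch, run inside the VM before exporting (the script name follows the set_hostname example above):
# Install the firstboot script where rc(8) will find it
cp set_hostname /usr/local/etc/rc.d/set_hostname
chmod 555 /usr/local/etc/rc.d/set_hostname
# Enable its rcvar so the script is allowed to run
sysrc set_hostname_enable=YES
# Arm the firstboot mechanism; rc(8) removes this file after the first boot
touch /firstboot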

I looked into this a bit more, and it turns out that it is possible, though it's quite awkward.
There are three kinds of official FreeBSD releases: pre-installed VMs, ISO installers, and USB stick installers.
The official pre-installed VM images don't offer any way to script them from outside by default. And they use the FreeBSD UFS filesystem, which isn't modifiable from any common OS. (Linux can mount UFS read-only, and has some code for read-write support, but the read-write support is disabled by default and requires a custom kernel.) So there's no easy way to programmatically modify them unless you already have a FreeBSD install.
The USB stick installers also use UFS filesystems, so that's out. So do the pre-built live CDs I found, like mfsBSD (the CD itself is iso9660, but it's just a container for a big UFS blob that gets unpacked into memory).
So that leaves the CD installers. It turns out that these actually use iso9660 for their file layout. And we don't need FreeBSD to work with iso9660!
So what you have to do is:
- Download the CD installer
- Modify the files on it to do the install without user interaction, apply some custom configuration to the new system, and then shut down
- Use your favorite VM runner to boot up the CD with a blank hard drive image, and let it run to install FreeBSD onto that hard drive
- Boot up the hard drive, and it will do whatever you want.
There are a ton of fiddly details that I'm glossing over, but that's the basic idea. There's also a fully-worked example here: https://github.com/python-trio/trio/pull/1509/
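The key hook is that bsdinstall(8) supports scripted installs: if the install media contains an /etc/installerconfig file, the installer runs it unattended instead of prompting. A minimal sketch of such a file (the disk name, distribution sets, and post-install commands are assumptions; see the linked PR for a fully worked version):
# /etc/installerconfig -- preamble read by bsdinstall(8)
PARTITIONS=ada0
DISTRIBUTIONS="kernel.txz base.txz"

#!/bin/sh
# Everything after the shebang runs chroot'ed into the newly installed system
sysrc sshd_enable=YES
poweroff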

To "inject" your software you generally need to be able to write to the filesystem, and the most reliable way of doing that is to run the system itself. In other words, have a FreeBSD VM to create FreeBSD VMs - you can either build them locally (man 7 release), or fetch VM images from http://download.FreeBSD.org, mount their rootfs somewhere, put your software wherever you need it, and make it execute it from mounted filesystem's /etc/rc.local.

Related

Docker: Where is "reset to factory defaults" on linux?

I've used Docker on Windows and macOS for the past couple years. Often, when things got really messed up, I found it faster to use the Reset to factory defaults option in the Docker GUI to do a clean reset than to troubleshoot whatever problem was giving me grief.
Now I'm using Ubuntu 20.04 and I can't find this option. I found a long list of commands to remove/reset individual components but where is the single command for this like Windows/macOS?
Use your OS's package manager to uninstall the Docker package; then
sudo rm -rf /var/lib/docker
That should completely undo all Docker-related things.
Note that the "Desktop" applications have many more settings (VM disk/memory size, embedded Kubernetes, ...). The native-Linux Docker installations tend to have very few, and generally the only way to set them is by directly editing the JSON configuration file in /etc. So "reset Docker" doesn't really tend to be an issue on native Linux.
As always, make sure you have an external copy of your images (in Docker Hub or a registry like ECR) or can rebuild them from Dockerfiles, that your containers are designed to tolerate being deleted and recreated, and that you have backups of any named volumes.
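On Ubuntu, for example, the whole reset might look like this sketch (the package names assume Docker's official docker-ce packages; substitute docker.io if you installed from the Ubuntu repositories):
# Remove the engine packages
sudo apt-get purge -y docker-ce docker-ce-cli containerd.io
# Delete all images, containers, volumes, networks, and other state
sudo rm -rf /var/lib/docker /var/lib/containerd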
You can use this command:
docker system prune -a
Description:
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all images without at least one container associated to them
- all build cache

How to deploy a Docker image to make changes in the local environment?

EDIT +2: Just FYI, I am the root user, which means I do not have to type superuser do (sudo) every time I run a privileged command.
Alright, so after about 24 hours of researching Docker, I am a little upset, if I have my facts straight.
As a quick recap, Docker serves as a way to apply code or configuration changes for a specific web service, runtime environment, or virtual machine, all from the cozy confines of a Linux terminal/text file. This is beyond a doubt an amazing feature: having code or builds you made on one computer work on an unlimited number of other machines is truly a breakthrough. I am annoyed, though, that the terminology is confusing with respect to what containers are and what images are (images are save points of layers of code, pulled from Docker's servers or created from containers, which themselves require a base image to start from; Dockerfiles automate the build process by running all the desired layers and rolling them into one image so it can be accessed easily).
See, the catch with Docker is that "sure, it can be deployed on a variety of different operating systems and use their respective commands". But those commands do not really carry over to, say, the local environment. While running some tests of a Docker build working with CentOS, the basic command structure goes
FROM centos
RUN yum search epel
RUN yum install -y epel-release.noarch
RUN echo epel installed!
So this works within the Docker build and says it successfully installs it.
The same can be said for Ubuntu, running apt-cache instead of yum. But going back to the CentOS VM, it DOES NOT show that epel has been installed, because when attempting to run the command
yum remove epel-release.noarch
it says "no packages were to be removed yet there is a package named ...". So then, if docker is able to be multi-platform why can it not actually create those changes on the local platform/image we are targeting? The docker builds run a simulation of what is going to happen on that particular environment but i can not seem to make it come to pass. This just defeats one of my intended purposes of the docker if it can not change anything local to the system one is using, unless i am missing something.
Please let me know if anyone has a solution to this dilemma.
EDIT +1: OK, so I figured out yesterday that what I was trying to do was to view and modify the container, which can be done with either docker logs containerID or docker run -t -i img /bin/sh, which puts me into an interactive shell where I can make container changes. Still, I want to know if there's a way to make Docker communicate with the local environment from within a container.
So, I think you may have largely missed the point behind Docker, which is the management of containers that are intentionally isolated from your local environment. The idea is that you create containerized applications that can be run on any Docker host without needing to worry about the particular OS installed or configuration of the host machine.
That said, there are a variety of ways to break this isolation if that's really what you want to do.
You can start a container with --net=host (and probably --privileged) if you want to be able to modify the host network configuration (including interface addresses, routing tables, iptables rules, etc).
You can expose parts of (or all of) the host filesystem as volumes inside the container using the -v command line option. For example, docker run -v /:/host ... would expose the root of your host filesystem as /host inside the container.
Normally, Docker containers have their own PID namespace, which means that processes on the host are not visible inside the container. You can run a container in the host PID namespace by using --pid=host.
You can combine these various options to provide as much or as little access to the host as you need to accomplish your particular task.
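Putting those flags together, a sketch (the alpine image is just an illustrative choice):
# Host networking, host PID namespace, and the host's root filesystem at /host
docker run --rm -it --net=host --pid=host --privileged -v /:/host alpine sh
# Inside the container you can then operate directly on the host, e.g.:
#   chroot /host /bin/sh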
If all you're trying to do is install packages on the host, a container is probably the wrong tool for the job.

Using RPMs for installation on embedded system images

I'm trying to use RPMs to install public and private software into disk images that are eventually written to the boot flash of Linux based embedded systems.
My current methodology is to mount the image (/mnt/foo) read/write on a CentOS 6.5 box and use the rpm --installroot=/mnt/foo option. There are two problems:
--installroot=/mnt/foo appears to chroot into /mnt/foo, meaning that when the post-install scripts run /bin/sh (etc.) they're actually using /mnt/foo/bin/sh (etc.). That's sort of workable if the target architecture is the same as the installation box, but gets very messy if it's not. I'm interested to hear if someone has solved this before.
At a higher level it would be nice to use yum or apt-get or ??? to handle package dependencies and repositories. yum is the obvious choice on CentOS but it has a weak grasp of non-native architectures and would likely require some hacking. apt-get looks more promising in that department but in truth I've never used it and my attempts to install it on CentOS 6.5 have left me in dependency hell.
This seems like a problem someone would have hit before but unfortunately everything I can find about RPMs and embedded systems assumes identical processor architectures.
Bottom line, I need to use RPMs to install software to a Linux image that will be the boot disk for an embedded system. Other than doing the rpm install as part of the image installation on the embedded system itself (our installation time is already a big problem), I'm open to just about anything.
Any suggestions will be gratefully received.
Have you tried using a continuous build system like Jenkins? You can use it to set up build hosts on any architecture/platform you like, so long as that platform has some basic tools (like ssh).
You could use a combination of the --installroot flag mentioned by other commenters, alongside some VMs set up as build hosts in Jenkins, in order to install your RPMs into a specific directory while avoiding any platform/architecture issues.
I'm not sure what your specific requirements are, but, depending on how far you are willing to go... RPMs are just compressed CPIO archives with a header, so you could use rpm2cpio piped to cpio to extract the files in the RPM. You can then extract the post-install scripts using rpm -qp --scripts filename.rpm and run them yourself. The downside, of course, is that you lose much of the benefit of using RPM/yum in the first place, like the automatic installation of dependencies.
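In practice the extraction might look like this (the package path is illustrative):
# Unpack the RPM payload directly into the mounted image; no scriptlets run
cd /mnt/foo
rpm2cpio /path/to/package.rpm | cpio -idmv
# Print the install/uninstall scriptlets so you can run or port them by hand
rpm -qp --scripts /path/to/package.rpm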

Run build script in Code::Blocks instead of building a target

Background:
I recently joined a software development company as an intern and am getting used to a new build system. The software is for an embedded system, and let's just say that all building and compiling is done on a build box. The build uses code generation driven by XML files, and then relies on makefiles, spec files, and version files as well.
We develop on our own computers (Linux, Mandriva distro) and build using the following steps:
ssh buildserver
use mount to mount drive on personal computer to the buildserver
set the environment using . ./set_env (may not be exactly that)
cd app_dir/obj (where makefile is)
make spec_clean
make spec_all
make clean
make
The Question:
I am a newbie to Code::Blocks and Linux, and was wondering how to set up a project file so that it simply runs a script executing these commands, instead of invoking a build on my actual computer. Sort of like a pre-build script. I want to bind the execution of this script to Ctrl-F9 (build) and capture any output from the above commands in the build log window.
In other words, there is no build configuration or target for the project to worry about. I don't even need a compiler installed on my computer! I want to set this up so that I still get the full features of an IDE.
Appreciate any suggestions!
Put your commands in a shell script file, e.g.:
#!/bin/sh
mount ... /mnt/path/buildserver
. ./set_env
cd app_dir/obj
make spec_clean
make spec_all
make clean
make
Say you name it /path/to/my_build_script; then chmod 755 /path/to/my_build_script and invoke the following from your SSH client machine (note that script -c takes a single command string, so quote it):
script -c 'ssh buildserver /path/to/my_build_script'
When it finishes, check the file typescript in the current directory; script(1) records the session output there.
HTH

Install chromium to Linux disk image?

I'm sure this has been asked before but I have no clue what to search for
I am trying to create a custom Linux image (for the Raspberry Pi) - I am currently manipulating the filesystem of the .img but I've discovered it's not as simple as dropping in the binary :( if only...
What is the accepted way to "pre-install" a package on a disk image where you can only manipulate the filesystem and ideally not run it first? Am I best to boot up, install, and then create the image from that, or is there a way of doing it beforehand in the same way you can change configuration settings etc?
Usually, when I have to change something in a disk image, I do the following:
sudo mount --bind /proc /mnt/disk_image/proc
sudo mount --bind /sys /mnt/disk_image/sys
sudo mount --bind /dev /mnt/disk_image/dev
These bind mounts are needed because those directories are populated during the boot process; mounting them into your system image emulates a fully booted system. Then you can chroot into it safely:
sudo chroot /mnt/disk_image
You're now able to issue commands in the chroot environment:
sudo apt-get install chromium
Of course, change /mnt/disk_image to the path where you have mounted your filesystem. apt-get only works on Debian-based systems; change the command according to your distribution.
You may find problems connecting to the internet, which can be caused by DNS configuration. The best thing you can do is copy your /etc/resolv.conf file into the mounted filesystem, as this file is usually managed by DHCP and is empty in the chroot environment.
This is the only solution that gives you full access to the command line of the system you're trying to modify.
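When you are done, exit the chroot and tear everything down in reverse order; if you attached a raw .img through a loop device, detach that too. A sketch, assuming the image was attached with losetup -fP so its partitions are exposed:
# Leave the chroot with 'exit', then undo the bind mounts in reverse order
sudo umount /mnt/disk_image/dev
sudo umount /mnt/disk_image/sys
sudo umount /mnt/disk_image/proc
sudo umount /mnt/disk_image
# Detach the loop device the image was attached to (loop0 is an assumption)
sudo losetup -d /dev/loop0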
This is an untested idea:
The dpkg tool, which can install .deb packages, has a --root option which makes it operate on a filesystem root other than the local /.
From the man page:
--instdir=dir
        Change default installation directory which refers to the
        directory where packages are to be installed. instdir is
        also the directory passed to chroot(2) before running
        package's installation scripts, which means that the
        scripts see instdir as a root directory. (Defaults to /)
--root=dir
        Changing root changes instdir to dir and admindir to
        dir/var/lib/dpkg.
If you mount your image and pass its mountpoint as --root, it should work.
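For example, something like this sketch (the .deb filename is illustrative, and as noted above, this is untested):
# Install a locally downloaded package into the mounted image instead of /
sudo dpkg --root=/mnt/disk_image -i chromium-browser.deb
Note that the package's maintainer scripts run chroot(2)'ed into the image, so for an ARM Raspberry Pi image on an x86 host they will fail unless you have qemu user-mode emulation with binfmt_misc set up.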
There are things like the Ubuntu Customization Kit which allow you to create your own version of the distro with your own packages.
Crunchbang even has a utility like this, which is the distro I have personally selected for experimenting with my Pi.
