Install Alpine in diskless mode on a non-VNC dedicated server - linux

Hello, I'm trying to figure out how to install Alpine Linux in diskless mode on my remote dedicated server, without VNC access.
The hoster only provides a few prebuilt images and a rescue system, with no VNC option.
I already tried booting the ISO via GRUB, but the Alpine installation image doesn't ship with openssh, so I couldn't connect to the server to run alpine-setup.
So I thought I could maybe edit the squashfs image.
With a Debian live CD it's easy to unsquash the image, set "PermitRootLogin yes", and squash it again, but with Alpine Linux I have absolutely no clue.
After that I tried to build a custom Alpine ISO with mkimage, but I just can't get the build right: I get "unable to load key file" and "$apks: unable to select package (or its dependencies)" errors after building.
(https://wiki.alpinelinux.org/wiki/How_to_make_a_custom_ISO_image_with_mkimage)
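(If I read the wiki's prerequisites correctly, the "unable to load key file" part usually means the abuild signing keys are missing, i.e. something like this has to be run once before building - though I'm not sure that's my only problem:)
abuild-keygen -i -a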
I used this code for the mkimage profile:
profile_nas() {
    profile_standard
    kernel_cmdline="unionfs_size=512M console=tty0 console=ttyS0,115200"
    syslinux_serial="0 115200"
    kernel_addons="zfs"
    apks="\$apks openssh"
    local _k _a
    for _k in \$kernel_flavors; do
        apks="\$apks linux-\$_k"
        for _a in \$kernel_addons; do
            apks="\$apks \$_a-\$_k"
        done
    done
    apks="\$apks linux-firmware"
}
and this command to build it:
sh mkimage.sh --tag edge \
--outdir ~/iso \
--arch x86_64 \
--repository https://dl-cdn.alpinelinux.org/alpine/edge/main/ \
--profile nas
Even if I'm able to generate the custom Alpine Linux ISO, I don't understand
this part of the guide (and even if I did understand it, I still wouldn't know how to enable remote root access, i.e. "PermitRootLogin yes" in sshd_config):
Making packages available on boot
A package may be made available in the live system by defining the generation of an apkovl which contains a corresponding /etc/apk/world file, and adding that overlay definition to the mkimg-profile, e.g. with `apkovl="genapkovl-mkimgoverlay.sh"`
The definition may be done as in the genapkovl-dhcp.sh example. Copy the relevant parts (including the rc_add lines) into a `genapkovl-mkimgoverlay.sh` file and add the package(s) that should be installed in the live system on separate lines in the file contents for /etc/apk/world.
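From the genapkovl-dhcp.sh example I gather that the overlay script would look roughly like this (an untested sketch; the openssh entry in /etc/apk/world, the sshd_config drop-in and the rc_add sshd line are my guesses for getting remote root access):

#!/bin/sh -e
# genapkovl-mkimgoverlay.sh - sketch based on the genapkovl-dhcp.sh example
HOSTNAME="$1"
if [ -z "$HOSTNAME" ]; then
    echo "usage: $0 hostname"
    exit 1
fi

cleanup() {
    rm -rf "$tmp"
}

makefile() {
    OWNER="$1"
    PERMS="$2"
    FILENAME="$3"
    cat > "$FILENAME"
    chown "$OWNER" "$FILENAME"
    chmod "$PERMS" "$FILENAME"
}

rc_add() {
    mkdir -p "$tmp"/etc/runlevels/"$2"
    ln -sf /etc/init.d/"$1" "$tmp"/etc/runlevels/"$2"/"$1"
}

tmp="$(mktemp -d)"
trap cleanup EXIT

mkdir -p "$tmp"/etc
makefile root:root 0644 "$tmp"/etc/hostname <<EOF
$HOSTNAME
EOF

# packages to be installed in the live system: one per line in /etc/apk/world
mkdir -p "$tmp"/etc/apk
makefile root:root 0644 "$tmp"/etc/apk/world <<EOF
alpine-base
openssh
EOF

# my addition: allow root login over SSH (a root password or authorized_keys
# entry would presumably also have to be set up in the overlay)
mkdir -p "$tmp"/etc/ssh
makefile root:root 0644 "$tmp"/etc/ssh/sshd_config <<EOF
PermitRootLogin yes
EOF

# the rc_add lines from genapkovl-dhcp.sh (devfs, mdev, networking, ...) go here,
# plus sshd so the daemon starts on boot
rc_add sshd default

tar -c -C "$tmp" etc | gzip -9n > $HOSTNAME.apkovl.tar.gz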
After this I tried to get SSH into the initramfs with dropbear-initramfs.
But that doesn't work either. With encrypted filesystems it has always worked for me, but for this task I can't get a connection.
Does someone have a different idea for how I can accomplish this?

Related

WSL2 distro shell can't launch a file copied from outside

The situation in short
I can't launch an executable (binary or a script) in a WSL2 distro if it wasn't created inside this distro
I can launch scripts and binaries that were created inside the distro shell (not using /mnt/c or /mnt/d in any way)
But I can't launch anything that was created outside and copied inside from Windows (using /mnt/c or /mnt/d)
I can see the copied files in the file system, can "cat" them, can look them up with "which", but I cannot launch them by entering the path into the command line
The questions I have regarding all this
How come that the shell can't see the files while utils you run from the shell can?
How do I make the shell see files that were copied from outside?
If I can't make the shell launch the files, then how do I launch them?
The situation in detail
I have Windows 10 with WSL2 and two distros
Ubuntu-20.04
Alpine
In Ubuntu I have a "Hello, World!" project written in C
It compiles and runs in Ubuntu just fine
But, when I copy it from Ubuntu to Windows
cp hello /mnt/d/
and then go to Alpine and copy it inside from Windows
cp /mnt/d/hello .
I then have trouble launching it inside Alpine
Here is the output of the `file hello` command in Ubuntu, with some extra formatting (just in case)
$ file hello
hello:
ELF 64-bit LSB shared object,
x86-64,
version 1 (SYSV),
dynamically linked,
interpreter /lib64/ld-linux-x86-64.so.2,
BuildID[sha1]=021352ab7bf244e340c3c42ce34225b74baa6618,
for GNU/Linux 3.2.0,
not stripped
Here's what I have in Alpine
$ cp /mnt/d/hello .
$ ls -l
-rwxr-xr-x 1 pavel pavel 16760 Apr 19 19:07 hello
$ ./hello
-ash: ./hello: not found
Now same with a script copied from Windows
Copy the script inside Alpine from Windows
$ cp /mnt/d/hello.sh .
Checking the contents
$ cat hello.sh
#!/bin/ash
echo Hello!
Setting the execute permission just in case
$ chmod agu+x hello.sh
Trying to run it
$ ./hello.sh
-ash: ./hello.sh: not found
But, I can launch the hello.sh by explicitly calling the ash tool and passing the script path as the argument
$ ash ./hello.sh
Hello!
At the same time, a script created inside Alpine runs just by entering its path on the command line
$ cat << EOF > hello-local.sh
> #!/bin/ash
> echo Local hello!
> EOF
$ chmod agu+x hello-local.sh
$ ./hello-local.sh
Local hello!
Also, I couldn't make a file that would run out of one that wouldn't, either by copying it with cp
cp hello.sh hello2.sh
or by copying it with cat
cat hello.sh > hello3.sh
chmod agu+x hello3.sh
Why do I need to copy things from outside
It all started when I wanted to explore how Docker for Windows uses Linux namespaces to separate containers
The distro that Docker for Windows uses is called docker-desktop
The docker-desktop distro neither has utilities that I need for my experiments, nor a package manager to get those utilities
So I tried to copy them from outside
But now studying Docker for Windows is not the only concern
I want to understand the magic that is happening here just as badly
To be fair, there really are three separate questions here, but not necessarily the questions you listed in your post:
Secondary question -- Why does the script you copied to Alpine fail?
As @MarkPlotnick covered in the comments (and you confirmed), it was due to the script having DOS/Windows line endings (CRLF). In general, try to avoid creating or editing Linux text files with Windows tools unless you are sure they produce Linux line endings.
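If you need to check or fix that from inside the distro, something like this works (dos2unix is available in the Alpine repositories; sed can strip the carriage returns as well):
file hello.sh              # CRLF files are reported as "with CRLF line terminators"
dos2unix hello.sh
sed -i 's/\r$//' hello.sh  # alternative to dos2unix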
Secondary question -- Why does your C program fail when you compile it on Ubuntu and copy the binary to Alpine?
Also as @MarkPlotnick mentioned in the comments, this is because Ubuntu uses glibc as its standard C library implementation by default, whereas Alpine uses musl. See a number of questions here for more information. The first one in the list sorted by "relevance" is actually a pretty good one to start with.
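For illustration, one workaround along those lines is to link the program statically on Ubuntu, so the binary no longer depends on glibc at runtime and also runs on Alpine:
gcc -static -o hello hello.c
file hello    # should now report "statically linked"
Compiling directly on Alpine against musl (apk add gcc musl-dev) is the other obvious option.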
Main question -- How to explore the docker-desktop distro
Really, your main goal seems to be how to gain access to certain tools inside the docker-desktop distro in order to learn more about it.
I was going to say, "don't" (with more explanation), but the reality is that I think it's a potentially good learning experience. I've done it, to some degree, so who am I to say it's "too dangerous" or recommend against it? ;-)
I will give fair warning, though -- The docker-desktop distro isn't intended to be run by users. Docker Desktop "injects" links and sockets into your other WSL2 distros (which you can enable/disable per-distro in Docker Desktop) so that its tools, processes, etc., are available to all your WSL2 (and PowerShell/CMD) instances.
I'd personally try to avoid making any changes to the docker-desktop distro itself. They'll likely be overwritten anyway by Docker Desktop when it extracts a new rootfs.
However, we can still gain access to the tools we need by accessing them from another distribution, but without copying them into docker-desktop.
First, a note -- As I think you have probably already figured out, docker-desktop is also musl-based. So you'll want to use tools from another musl-based distro like Alpine.
This can be easily accomplished by running the following line once in your Alpine instance (as root):
echo "/ /mnt/wsl/instances/Alpine none defaults,bind,X-mount.mkdir 0 0" >> /etc/fstab
That will add a bind mount of the Alpine instance's root into the tmpfs /mnt/wsl mount. You can see my Super User answer here for more details on that.
Once you wsl --terminate Alpine and restart it, you'll have access to the Alpine files from any other WSL2 distribution.
As a useful (for your intent) example, install the util-linux package in Alpine to get access to the lsns command.
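In Alpine, that's (as root; package splits can vary slightly between Alpine releases):
apk add util-linux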
Then, in the docker-desktop distro (which I assume you already know to access with wsl -u root -d docker-desktop, but I'll include that command here for other future readers), to list the namespaces:
/mnt/host/wsl/instances/Alpine/usr/bin/lsns
The docker-desktop instance automounts at a slightly different directory than default (see cat /etc/wsl.conf), so you need to adjust the path to /mnt/host/wsl instead of /mnt/wsl.
But with that in place, you can run all (most?) of your Alpine binaries directly in docker-desktop without having to modify it directly. If you have a script in your home directory that you want to run in docker-desktop, for instance:
/mnt/host/wsl/instances/Alpine/home/users/<yourusername>/hello.sh
Note that if you have a binary that requires a dynamically-linked library on Alpine, I'm assuming you'll need to adjust your LD_LIBRARY_PATH accordingly, although I haven't tested it. For instance:
LD_LIBRARY_PATH=/mnt/host/wsl/instances/Alpine/usr/lib /mnt/host/wsl/instances/Alpine/usr/bin/<whatever>

Can Docker be used to run Linux CLI tools from macOS?

I am writing software on macOS. As a subroutine I would like to call certain Linux-only CLI tools, e.g. mytool inputfile. Can I use Docker for Mac to compile the Linux tool inside a container and call it from outside the container (after copying input files into the container)? And if I can, is it a good idea, or will there be issues installing and compiling Linux packages?
From my understanding of Docker as basically a lightweight VM that uses a stripped-down version of a Linux distribution, this approach seems to make sense, but the stripped-down aspect might be an impediment.
Can Docker be used to run Linux CLI tools from macOS?
Docker supports macOS according to documentation.
Can I use Docker for Mac to compile the Linux tool inside a container and call it from outside the container (after copying input files into the container)?
Yes.
And if I can, is it a good idea
Depends on the term "good" - it's subjective and depends highly on the specific case.
or will there be issues installing and compiling Linux packages?
No.
From my understanding of docker as basically a lightweight VM
Yes.
that uses a stripped down version of a Linux distribution, this approach seems to make sense, but the stripped down aspect might be an impediment.
What is in a Docker container depends on the image. Usually man pages and the package manager's repository metadata are removed to keep images small, but apart from that I would disagree - most Docker containers come with a full Linux distribution and can be used as such.
You can do it as follows:
docker run --rm -v /:/host -ti ubuntu ... your command referring to /host...
And this is the explanation of the command parameters:
--rm : removes the container after it exits (but keeps the image cached for subsequent calls).
-t : allocates a visible shell terminal (a TTY).
-i : runs in interactive mode.
-v /:/host : maps your root folder to the container's /host folder.
ubuntu : uses the ubuntu image, which you can replace with any other image you prefer.
As the last parameter, put the command to run inside the container, with its paths referring to /host.
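For example, to run a Linux-only tool against a file in your macOS home directory (wc stands in here for the actual tool, and inputfile is a placeholder):
docker run --rm -v "$HOME":/host -ti ubuntu wc -l /host/inputfile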

Trouble converting Docker to Singularity: "Function not implemented" in Singularity, but works fine in Docker

I have an Ubuntu Docker container that works perfectly fine as is. I have a custom binary inside that executes and returns as expected. Because of security reasons, I cannot use Docker for automated testing. I created a Docker archive and then load a Singularity container from this archive. The binary that I need to run fails with the following error:
MyBinary::BinaryNameSpace::BinaryFunction[FATAL]: boost::filesystem::status: Function not implemented: "/var/tmp/username"
When I run ldd <binary_path>, I see that the Boost filesystem library is linked. I am not sure why the binary is unable to find the status function...
So far, I have used a tool called Ermine to turn the dynamically linked binary into a static one,
but I still got the same error, which I found very strange.
Any suggestions on directions to look next are very appreciated. Thank you.
Both /var/tmp and /tmp are silently automounted by default. If anything was added to /var/tmp during singularity build or in the source docker image, it will be hidden when the host's /var/tmp is mounted over it.
You can disable the automounts individually when you run a singularity command, which is probably what you want to do first to check that it is the source of the problem (e.g., singularity run --no-mount tmp ...). I'd also recommend using --writable-tmpfs or manually mounting -B /tmp to make sure that there is somewhere writable for any temp files. You are likely to get an error about a read-only filesystem if not.
The host OS environment can also cause problems in unexpected ways that are hard to debug. I recommend using --cleanenv as a general practice to minimize this.
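Putting those suggestions together, a first debugging run could look something like this (the image name is a placeholder):
singularity run --no-mount tmp --writable-tmpfs --cleanenv mycontainer.sif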
The culprit was an outdated Linux kernel. Containers still use the host's kernel.
With Docker I was using kernel 5.4.x, while the computer that runs the Singularity container runs 3.10.x.
The binary contains instructions which are not supported on 3.10.x.
There is no fix for now except running the automated tests on a different computer with a newer kernel.
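A quick way to spot such a mismatch is to compare kernel versions; the container reports the host's kernel, since containers share it:
uname -r    # compare the output on the Docker host and on the Singularity host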

Cannot get custom kernel to boot - mkinitcpio does not add any modules

1. What I am trying to achieve:
Build a custom kernel so I can install and run anbox-git from the AUR on my Arch laptop. A custom kernel is needed for the package to work.
2. What I did to achieve it:
Download Arch Linux kernel v5.8.5-arch1 from here
I followed the guidelines in the Arch wiki's traditional compilation article to create the custom kernel
Via make nconfig I applied the changes mentioned on the Anbox Arch wiki page.
Via make nconfig I changed the EFIVAR_FS option from "M" to "*" to resolve an error from earlier attempts.
Via make nconfig, under Device Drivers -> Multiple devices driver support (RAID and LVM) (MD [=y]) -> Device mapper support (BLK_DEV_DM [=y]), I enabled a few more options (*) because on earlier builds mkinitcpio gave errors about missing modules for DM_CRYPT and some other DM_ modules which I cannot easily reproduce (I will if necessary for the answer, but I hope it'll be irrelevant).
After creating the config this way I did:
sudo make modules_install
sudo cp -v arch/x86_64/boot/bzImage /boot/vmlinuz-linux58ac
sudo cp /etc/mkinitcpio.d/linux.preset /etc/mkinitcpio.d/linux58ac.preset
Adapted the preset file per the Arch wiki instructions (see the sketch after these steps)
sudo mkinitcpio -p linux58ac
Important: mkinitcpio runs fine, but keeps giving me a warning:
WARNING: No modules were added to the image. This is probably not what
you want.
sudo grub-mkconfig -o /boot/grub/grub.cfg
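For reference, a preset adapted this way looks roughly like the following (standard Arch preset layout, using the file names from the steps above):
# /etc/mkinitcpio.d/linux58ac.preset
ALL_config="/etc/mkinitcpio.conf"
ALL_kver="/boot/vmlinuz-linux58ac"
PRESETS=('default' 'fallback')
default_image="/boot/initramfs-linux58ac.img"
fallback_image="/boot/initramfs-linux58ac-fallback.img"
fallback_options="-S autodetect"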
3. Expected result:
I am able to reboot, select the new kernel from grub menu, get the usual LVM password prompt, and launch into it without problems.
4. Result I get:
I can reboot and select new kernel from grub but when I select it I get a
Warning: /lib/modules/5.8.5-arch1/modules.devname not found, ignoring.
Starting version 246.4-1-arch
ERROR device 'dev/mapper/vg0-root' not found. Skipping fsck.
mount /new_root: special device /dev/mapper/vg0-root does not exist.
You are being dropped into an emergency shell.
I checked and /lib/modules/5.8.5-arch1/modules.devname does indeed exist. But I think the actual problem is that mkinitcpio doesn't add the correct modules to the initramfs for the custom kernel, making it unbootable.
Any help appreciated!

Install/Update cifs-utils before mount smb

I'm currently trying to get Vagrant to provision a working CentOS 7 image on Windows 10, using Hyper-V. Vagrant 1.8.4, currently the latest.
I encounter a problem where the provisioning fails and I need to work around it each time. The CentOS 7 image is a minimal image and does not include cifs-utils, so the mount won't work. I therefore need cifs-utils installed before the mount.
Error:
==> default: Mounting SMB shared folders...
default: C:/Programs/vagrant_stuff/centos7 => /vagrant
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t cifs -o uid=`id -u vagrant`,gid=`getent group vagrant | cut -d: -f3`,sec=ntlm,credentials=/etc/smb_creds_4d99b2
d500a1bcb656d5a1c481a47191 //192.168.137.1/4d99b2d500a1bcb656d5a1c481a47191 /vagrant
mount -t cifs -o uid=`id -u vagrant`,gid=`id -g vagrant`,sec=ntlm,credentials=/etc/smb_creds_4d99b2d500a1bcb656d5a1c481a
47191 //192.168.137.1/4d99b2d500a1bcb656d5a1c481a47191 /vagrant
The error output from the last command was:
mount: wrong fs type, bad option, bad superblock on //192.168.137.1/4d99b2d500a1bcb656d5a1c481a47191,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so.
As it is now, the provisioning has to fail, and I need to:
vagrant ssh (powershell)
(connect to instance via putty/ssh)
sudo yum install cifs-utils -y (putty/ssh)
(wait for install...)
exit (putty/ssh)
vagrant reload --provision (powershell)
This is obviously a pain and I am trying to streamline the process.
Does anyone know a better way?
You can install the missing package in your box and repackage it, so you can distribute a new version of the box that contains the missing package.
Alternatively, you can build a Vagrant box from scratch from an ISO, and while preparing the box you can install all the packages you need. In your case the provider is Hyper-V - https://www.vagrantup.com/docs/hyperv/boxes.html
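For the repackage route the flow is roughly the following (note that vagrant package primarily targets VirtualBox; with Hyper-V you may have to export the VM and assemble the box manually as described in the link above):
vagrant ssh                                    # inside the VM: sudo yum install -y cifs-utils, then exit
vagrant package --output centos7-cifs.box
vagrant box add centos7-cifs centos7-cifs.box
Then point config.vm.box in your Vagrantfile at the new "centos7-cifs" box and vagrant up again.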
Best Regards
Apparently my original question was downvoted for some reason. #whatever
As I mentioned in one of the comments above:
I managed to repackage and upload an updated version. Thanks for the advice. It's available in Atlas as "KptnKMan/bluefhypervalphacentos7repack".
Special thanks to @frédéric-henri :)
