I'm setting up bcache on an Ubuntu virtual machine on Azure. I'm following the instructions from http://blog.rralcala.com/2014/08/using-bcache-in-ec2.html.
After running make-bcache -B /dev/sdc1, the /dev/bcache0 device is not yet available.
When running make-bcache -B /dev/sdc1 a second time, /dev/bcache0 suddenly exists, as do /sys/fs/bcache, /sys/block/bcache0, etc.
A weird side note: running make-bcache -B a second time from a script (even with sleeps in between) does not fix the issue, but running it manually does.
Once bcache has initialized properly it stays stable, even after reboots and VM relocations. You can read the configuration scripts at https://github.com/okke-formsma/azure-bcache.
Does anyone have a clue how to enable bcache on my Ubuntu 16.04 machine without having to resort to running make-bcache -B xxx twice by hand?
[edit] According to the Arch Linux wiki, a missing /sys/fs/bcache means that "The kernel you booted is not bcache enabled"; however, that is not the case here.
I figured it out!
The bcache module was not loaded. Running sudo modprobe bcache made all bcache functionality available without any weird workarounds.
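In case it helps anyone else, here is a minimal sketch of making the module load on every boot, assuming a stock Ubuntu 16.04 install with systemd's modules-load mechanism:
$ sudo modprobe bcache                                        # load the module right now
$ echo bcache | sudo tee /etc/modules-load.d/bcache.conf      # load it automatically at boot (adding it to /etc/modules also works)
$ lsmod | grep bcache                                         # verify the module is present
With the module in place, the make-bcache -B call registers the backing device and /dev/bcache0 shows up on the first run.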
I'd like to run bpftrace under Ubuntu on VirtualBox.
Unfortunately, I get
$ sudo bpftrace -e 'tracepoint:syscalls:sys_enter_open { printf("%s %s\n", comm, str(args->filename)); }'
Kernel lockdown is enabled and set to 'confidentiality'. Lockdown mode blocks
parts of BPF which makes it impossible for bpftrace to function. Please see
https://github.com/iovisor/bpftrace/blob/master/INSTALL.md#disable-lockdown
for more details on lockdown and how to disable it.
Which leads me to the following instructions:
1. Disable secure boot in UEFI.
As far as I can tell, there's no UEFI with VirtualBox.
2. Disable validation using mokutil: run the following command, reboot and follow the prompt.
$ sudo mokutil --disable-validation
This results in
EFI variables are not supported on this system
And
3. Use the SysRQ+x key combination to temporarily lift lockdown (until next boot)
Can't find how to do this under VirtualBox.
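One thing that might be worth trying from inside the guest is the sysrq interface, though it may well be ignored: the lockdown-lifting SysRq in distro kernels is usually honored only when it comes from a physical keyboard, and /sys/kernel/security/lockdown only exists on kernels that expose the lockdown LSM through securityfs.
$ cat /sys/kernel/security/lockdown        # shows which lockdown mode, if any, is active
$ echo 1 | sudo tee /proc/sys/kernel/sysrq # make sure SysRq functions are enabled
$ echo x | sudo tee /proc/sysrq-trigger    # request the lockdown lift; may be ignored unless it comes from a real keyboard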
Any tips?
[edit] Apparently, @Qeole installed the same VM I did and didn't get the issue. Something must have gone wrong in my install. I tried with another VM and it now seems to work.
Leaving the issue open so that people can find this resolution.
I have cloned the Linux kernel repo on my Arch Linux guest (the host is Ubuntu 16.04). Two weeks ago I was able to boot into the new kernel (it was 4.11-rc6 back then); then I did a git pull, recompiled everything, and now it just hangs after "loading initial ramdisk image...".
So I tried git clean -xfd, then make localmodconfig (answering the defaults for everything), then make, then make modules_install, then mkinitcpio -p linux.4.11.custom, and of course sudo cp -v arch/x86_64/boot/bzImage /boot/vmlinuz-linux.4.11.custom.
After I verified that it does indeed hang, I tried more git pulls and more cleans, but nothing changed.
Running the same kernel from the same source on a real machine boots fine.
I could not find a reported bug in VirtualBox, nor an update for Ubuntu.
Next I tried debugging it myself by adding debug earlyprintk=vga,keep to GRUB's linux command line, and even removing the initrd line and adding noinitrd to the kernel parameters, but I get no error. Just a screen with GRUB's "echo" messages that stays like that forever.
How can I debug it?
Has anyone got any idea what can be done?
To check whether the kernel even starts, I would use KDB (the kernel's built-in debugger) and see if you get a prompt at startup.
For better debugging I would try to get KGDB (GDB for the kernel) working.
You can actually activate both to have all options available. See the following link for more information:
https://www.kernel.org/doc/htmldocs/kgdb/index.html
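A minimal sketch of how that could look for a VirtualBox guest, assuming the custom kernel is built with KGDB support (the VM name and socket paths below are illustrative, not taken from your setup):
# Kernel config needed in the guest kernel (Kernel hacking -> KGDB):
#   CONFIG_KGDB=y
#   CONFIG_KGDB_SERIAL_CONSOLE=y
#   CONFIG_KGDB_KDB=y

# On the Ubuntu host: expose the guest's COM1 as a local socket
$ VBoxManage modifyvm "arch-guest" --uart1 0x3F8 4 --uartmode1 server /tmp/kgdb.sock

# Boot the guest kernel with KGDB waiting on that port by appending to its command line:
#   kgdboc=ttyS0,115200 kgdbwait

# On the host: bridge the socket to a pty and attach gdb using the unstripped vmlinux from the build tree
$ socat UNIX-CONNECT:/tmp/kgdb.sock PTY,link=/tmp/kgdb.tty &
$ gdb ./vmlinux -ex 'target remote /tmp/kgdb.tty'
For KDB on the guest's own console instead, boot with kgdboc=kbd and break in with SysRq+g.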
Perhaps this problem is specific to upgrading an Ubuntu 12.04 that is using tftp-hpa as part of a standard LTSP installation. After upgrading to 14.04 using do-release-upgrade, the tftp-hpa daemon failed to start in a confusing way.
Using either the SysV or the Upstart method, it pretended to start (it would complain if I tried to start it again without first "stopping" it), but no daemon appeared in the ps listing, and any attempt to download something via tftp localhost timed out.
(Posted as an answer on behalf of the OP).
I traced the problem to the --address flag by attempting to start the daemon by hand. It worked if I left out the --address flag altogether, but it refused to start with any of the standard "0.0.0.0:69"-style constructions.
Since "--address TFTP_ADDRESS" is baked into the the upstart script,
it did not work to remove TFTP_ADDRESS entirely from /etc/default/tftp-hpa. However setting
TFTP_ADDRESS=":69"
in /etc/default/tftp-hpa seems to have solved the problem.
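For reference, a minimal /etc/default/tftp-hpa along those lines might look like this (shell-style assignments assumed; apart from TFTP_ADDRESS, the values shown are typical defaults, not necessarily what the LTSP install uses):
# /etc/default/tftp-hpa
TFTP_USERNAME="nobody"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS=":69"
TFTP_OPTIONS="--secure"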
Also, the upgrade from 12.04 to 14.04 left /var/lib/tftpboot owned by root, which generated the complaint
in.tftpd[3084]: cannot set groups for user nobody
in syslog.
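A plausible fix for that last complaint, assuming the daemon really drops to nobody as the log suggests (adjust the user and service name to your install), is:
$ sudo chown -R nobody:nogroup /var/lib/tftpboot   # hand the tftp root back to the daemon's user
$ sudo service tftpd-hpa restart                   # or stop/start the tftp-hpa Upstart job
$ tftp localhost -c get sometestfile               # quick smoke test; sometestfile is just a placeholder name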
I know the purpose of the "biosdevname" feature in Linux, but I'm not sure how exactly it works.
I tested it with Ubuntu 14.04 and Ubuntu 14.10 (both 64-bit server editions) and it looks like they enable it by default: right after system startup my network interface has a name such as p4p1 instead of eth0, with no customization needed. As I understand it, in order for biosdevname to be enabled, BOTH of these conditions must be met:
a boot option biosdevname=1 must be passed to the kernel
the biosdevname package must be installed
As I already mentioned, both Ubuntu 14.04 and 14.10 seem to offer biosdevname as a default feature: they come with the biosdevname package already installed, and I didn't need to modify grub.cfg either - GRUB_CMDLINE_LINUX_DEFAULT has no parameters, yet my network interface still has a BIOS name (p*p*) instead of a kernel name (eth*).
Later I wanted to restore the old-style device naming, and that's where the interesting part begins. I decided to experiment a bit while trying to disable the biosdevname feature. Since it requires the biosdevname package to work (or so I read here and there), I assumed removing it would be enough to disable the feature, so I typed:
sudo apt-get purge biosdevname
To my surprise, after a reboot my network interface was still p4p1, so biosdevname clearly still worked even though the biosdevname package had been wiped out.
As a next step, I applied the appropriate changes to /etc/network/interfaces in order to restore the old name of my network interface (removed the entry for p4p1 and added an entry for eth0). As a result, after another reboot, ifconfig reported neither eth0 nor p4p1, which was further proof that the OS still understood BIOS names rather than kernel names.
It turned out that I also had to explicitly change the GRUB entry to GRUB_CMDLINE_LINUX_DEFAULT=biosdevname=0 and update GRUB to get the expected result (biosdevname disabled and the old name of the network interface restored).
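In concrete terms, that boils down to editing /etc/default/grub and regenerating the configuration (the quoting below is my assumption; merge biosdevname=0 with any parameters already in that variable):
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="biosdevname=0"

$ sudo update-grub     # regenerates /boot/grub/grub.cfg
$ sudo reboot
$ ip link              # the interface should be back to its eth* kernel name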
My question is: how could biosdevname work without the biosdevname package? Is it not required after all? If so, what exactly provides the biosdevname functionality and how does it work?
The reason biosdevname keeps annoying you even after you uninstall the package is that it installed itself into the initrd (initial ramdisk) file as well.
When uninstalling, the /usr/share/initramfs-tools/hooks/biosdevname hook is removed, but there is no postrm script in the package, so update-initramfs is not executed and biosdevname is still present in the /boot/initrd... file used in the first stage of system startup.
You can fully get rid of it like this:
$ sudo update-initramfs -u
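If you want to confirm the hook is really gone from the freshly built image, something along these lines should do (the exact initrd filename varies with the kernel version):
$ lsinitramfs /boot/initrd.img-$(uname -r) | grep biosdevname   # should print nothing once biosdevname is out of the image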
I have a Linux system based on LFS (Linux From Scratch), kernel version 2.6.29.6 #1 SMP PREEMPT. The system uses the Extlinux bootloader and boots from an SSD (Micron USB solid-state device). There is also a secondary hard drive in this system, but it is not meant for booting. We changed booting from the HDD to the SSD because we found the SSD to be faster and more reliable than the HDD.
Whenever there is a power outage the unit reboots, and the outage causes filesystem corruption on the SSD. After the reboot, fsck is run by the checkfs script. The system halts with the error message "UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY" and fails to come back up until we reboot it manually.
I checked the checkfs script and found that, in this condition, fsck -a -A -C -T is run and returns an exit value greater than 3 and less than 16, for which the prescribed action is to halt the system, reboot from a CD, run fsck manually and fix the issue.
I tried changing the checkfs script to use fsck -y, which fixed all the errors and let the unit boot normally, but many files were deleted while the issues were being fixed. Secondly, if I ignore the fsck error and go ahead with a normal boot instead of halting, it works, but since the filesystem issues are not fixed, the unit may not work properly.
At this point I would like to know whether there are any workarounds that resolve this issue, still boot the system normally, and fix the filesystem issues. Can I do something like this: if fsck fails, unmount the root filesystem from the SSD, mount it from the HDD and boot normally, then recover the SSD filesystem after boot? If yes, any pointers on how to do this? Please suggest.
You can append 'fastboot' as a kernel argument (in GRUB, or in your Extlinux configuration) to skip fsck.
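Since the box described above boots with Extlinux rather than GRUB, a sketch of what that would look like in extlinux.conf (the label, kernel path and root device are illustrative, not taken from the real configuration):
# /boot/extlinux/extlinux.conf
LABEL lfs
    KERNEL /boot/vmlinuz-2.6.29.6
    APPEND root=/dev/sda1 ro fastboot
Bear in mind that fastboot only asks the boot scripts to skip the check; it does not repair anything, so the underlying corruption still needs attention, and it only helps if the checkfs script actually looks for fastboot on the kernel command line.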