How to fix a zfs mount problem after upgrading to 12.0-RELEASE? - freebsd

So I had to upgrade my system from 11.1 to 12.0, and now the system does not boot. It stops on the error: Trying mount root zfs - Error 2 unknown filesystem.
And I no longer have the old kernel that was known to be good and worked well.
So how do I fix the mount problem?
I had tried to boot with the old kernel, but after one of the freebsd-update upgrade attempts only the new kernel was left.
Expected: no problems after the upgrade.
Actual: the system cannot boot, failing with Error 2 - unknown filesystem.
P.S.
I found that the /boot/kernel folder does not contain the opensolaris.ko module.
How can I copy this module to the /boot partition of the system from the LiveCD (the file exists on the LiveCD)?

Assuming you have a FreeBSD USB stick ready... you can import the pool into a live environment and then mount individual datasets manually.
Assuming "zroot" is your pool name:
# mount -urw /
# zpool import -fR /mnt zroot
# zfs mount zroot/ROOT/default
# zfs mount -a // in case you want the rest of the datasets mounted
# cd /mnt
Now do whatever you want...
You can also roll back to the last working snapshot (if there is one).
If your system is encrypted, you need to decrypt it first.
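Regarding the missing opensolaris.ko from the P.S.: once the pool is imported and mounted under /mnt as above, it is just a file copy from the live environment. A minimal sketch, assuming the live system ships the module in its own /boot/kernel and that /boot lives on the root dataset (the usual ZFS-on-root layout):
# cp -p /boot/kernel/opensolaris.ko /mnt/boot/kernel/
# cp -p /boot/kernel/zfs.ko /mnt/boot/kernel/ // only if zfs.ko is missing too
Keep in mind the copied modules must match the installed kernel version, so if the 12.0 upgrade was left half-finished it is worth completing it (or reinstalling the kernel set) rather than relying on copied modules alone.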

Related

Why does my hash function fail or freeze in the /sys/kernel/tracing/per_cpu/cpu45 folder?

I'm having an issue with my script that calculates integrity on this version of Ubuntu:
cyber@ubuntu:/$ hostnamectl
Static hostname: ubuntu
Icon name: computer-vm
Chassis: vm
Machine ID: 48d13c046d74421781e6c6f771f6ac31
Boot ID: 847b838897ac47eb932f6427361232d1
Virtualization: vmware
Operating System: Ubuntu 20.04.4 LTS
Kernel: Linux 5.13.0-51-generic
Architecture: x86-64
I'm wondering whether /sys/kernel/tracing/per_cpu/cpu45 might be a "live" file of some kind,
because calculating the hash of the files inside it takes infinite time.
If you want to check filesystem integrity, skip the whole /sys folder - it is an interface to the kernel.
It would also be better to skip the /proc folder (another kernel interface) and /dev (special/device files). For example, you can read from /dev/zero or /dev/urandom forever. Network mounts can give you a lot of bright moments too.
Your script can also freeze on reading pipes - if it has enough permissions, it can read from a pipe forever.
If I were building such a script, I would start from the mounts, check their filesystem types, and scan only the ones that matter (a rough sketch follows below). For example, if a mount is tmpfs, its contents live in RAM and will be wiped after a reboot.
And you should definitely check out the Filesystem Hierarchy Standard:
https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
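A minimal sketch of that "start from the mounts" approach, assuming GNU sha256sum, find and xargs are available and that the filesystem types listed below are acceptable to skip for your integrity policy (extend the list as needed):
#!/bin/sh
# Hash only regular files on real filesystems; skip kernel interfaces, RAM-backed and network mounts.
while read -r dev mountpoint fstype rest; do
    case "$fstype" in
        proc|sysfs|tracefs|debugfs|devtmpfs|devpts|tmpfs|cgroup*|nfs*|cifs)
            continue ;;   # nothing durable (or nothing local) to verify here
    esac
    # -xdev stays on this one filesystem; -type f skips pipes, sockets and device nodes
    find "$mountpoint" -xdev -type f -print0 | xargs -0 -r sha256sum
done < /proc/mounts > integrity.sha256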

What should I do after the "usebackuproot" mount option works on BTRFS?

After changing a BCache cache device, I was unable to mount my BTRFS filesystem without the "usebackuproot" option; I suspect that there is corruption without any disk failures. I've tried recreating the previous caching setup but it doesn't seem to help.
So the question is, what now? Both btrfs rescue chunk-recover and btrfs check failed, but with the "usebackuproot" option I can mount it r/w, the data seems fine, and btrfs rescue super-recover reports no issues. I'm currently performing a scrub operation but it will take several more hours.
Can I trust the data stored on that filesystem? I made a read-only snapshot shortly before the corruption occurred, but still within the same filesystem; can I trust that? Will btrfs scrub or any other operation truly check whether any of my files are damaged? Should I just add "usebackuproot" to my /etc/fstab (and update-initramfs) and call it a day?
I don't know if this will work for everyone, but I saved my filesystem with the following steps*:
Mount the filesystem with mount -o usebackuproot†
Scrub the mounted filesystem with btrfs scrub
Unmount (or remount as ro) the filesystem and run btrfs check --mode=lowmem‡
*These steps assume that you're unable to mount the filesystem normally and that btrfs check has failed. Otherwise, try that first.
†If this step fails, try running btrfs rescue super-recover, and if that alone doesn't fix it, btrfs rescue chunk-recover.
‡This command will not fix your filesystem if problems are found; it is the regular (non-lowmem) check that is very memory intensive and will be killed by the kernel if run from a live image. If problems are found, make or use a separate installation to run btrfs check --repair.
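Condensed into commands, and assuming the filesystem lives on /dev/sdX1 (a placeholder device) mounted at /mnt, the sequence above looks roughly like this (-B keeps the scrub in the foreground so you see the summary):
mount -o usebackuproot /dev/sdX1 /mnt
btrfs scrub start -B /mnt
umount /mnt
btrfs check --mode=lowmem /dev/sdX1
If the first mount fails, try btrfs rescue super-recover /dev/sdX1, and only if that alone doesn't help, btrfs rescue chunk-recover /dev/sdX1 (which is slow).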

Error mounting an NTFS partition in Ubuntu 16.04 from the terminal

Hello.
I need some help. I want to mount drive D in Ubuntu 16.04, BUT
my partitions are in NTFS format (drives C & D).
I had installed Windows 7 on my computer, but then I deleted it and installed Ubuntu 16.04; I only repartitioned drive C and did not change the drive D partition.
That is, I changed the C partitioning and split it up for the Ubuntu OS (home, swap and root); the D partition did not change (D is NTFS).
(partitioning for Ubuntu on C)
When Ubuntu was installed, I wanted to open my D drive (NTFS) but got the following error:
(this message shows when I try to open the drive)
And mounting in the terminal gives me this message:
`root@mjb:/home/mjb# mount -t "ntfs" /home
Mount is denied because the NTFS volume is already exclusively opened.
The volume may be already mounted, or another software may use it which
could be identified for example by the help of the 'fuser' command.`
and this:
sudo mount -t ntfs-3g /dev/sda5 /dummy
[sudo] password for mjb:
The disk contains an unclean file system (0, 0).
Metadata kept in Windows cache, refused to mount.
Failed to mount '/dev/sda5': Operation not permitted
The NTFS partition is in an unsafe state. Please resume and shutdown
Windows fully (no hibernation or fast restarting), or mount the volume
read-only with the 'ro' mount option.
I tried this solution:
Open a terminal.
Type this command: sudo mount -t ntfs -r /dev/sda5 /dummy and press Enter.
The partition then mounts, but I have a new problem:
the partition is read-only because I passed -r on the command line.
Ubuntu told me in the error message that I can mount the partition read-only.
My question is: is there any command for mounting the partition read/write?
Open Disks.
Select the partition you are not able to mount, turn off the automatic mounting options, unselect "Mount at system startup", and add ro after the comma in the mount options; now you should be able to mount the disk successfully.
It seems like Windows is locking your HD before shutting down.
This happens when you try to access the disk that Windows is installed on from another OS: on shutdown, Windows locks access to the disk, because doing so lets it resume faster the next time you boot it.
So simply try rebooting Windows before going to Linux; if you shut down Windows (with fast startup enabled) and then boot your PC directly into any other OS, you won't be able to access the disk/partition Windows has access to.
Try Shift+Shutdown in Windows, then boot into Ubuntu. It will mount all drives.
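Once Windows has been shut down completely (Shift+Shutdown, or with fast startup disabled), a normal read/write mount should work from the terminal too. A minimal sketch, assuming /dev/sda5 is still the NTFS partition and /mnt/d is a mount point you create for it:
sudo mkdir -p /mnt/d
sudo mount -t ntfs-3g /dev/sda5 /mnt/d
Until then, the safe fallback is the read-only mount the error message itself suggests:
sudo mount -t ntfs-3g -o ro /dev/sda5 /mnt/d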

Cassandra: moving data_file_directories

Regarding the location of Cassandra-created data files and system files: I need to move the "commitlog_directory", "data_file_directories" and "saved_caches_directory", which are set in the "cassandra.yaml" config file. They are currently at the default location, "/var/lib/cassandra". The data is only some test data, plus of course the system-generated keyspaces, which are
dse_perf
dse_system
OpsCenter
system
system_traces
There are also the commitlog and saved_caches.db to move.
I am thinking of moving the keyspace directories with linux shell commands but I'm very unsure if they will become corrupt somehow. There is simply no space in the default drive and we need to move everything to the secondary and tertiary mounted drives.
Right now I'm in the process of moving all the files and resetting the yaml settings.
I have two questions -
Regarding the cassandra.yaml file: are there any other files besides this one that depend on knowing the location of commitlog_directory, data_file_directories and saved_caches_directory, and whose 'wrong location' will cause a failure once I move all these files? I am also concerned that the files inside the tables themselves (like the db files) hold references to their own location and will cause failures once they are moved.
If I just change the three settings commitlog_directory, data_file_directories and saved_caches_directory, will DSE/Cassandra actually create all the system keyspaces (system_traces, dse_perf, system, OpsCenter, dse_system), the commitlog and the saved_caches.db, and will any other upstream config files be out of sync with that (same as the first part of question 1)?
It is a very new installation, so reinstalling would not be the end of the world, but I really don't want to because we have Kerberos and all kinds of other stuff on top of this cluster now.
The OS is Ubuntu 14.04 and the DSE version is 4.7.
I just finished doing this. My instances are in AWS EC2, so your process may vary, but in essence:
Create a new volume and attach it to the instance. My new device was /dev/xvdg.
Create a new mount point: sudo mkdir /new_data
Format the new volume: sudo mkfs -t ext4 /dev/xvdg
Edit /etc/fstab so that your mount will survive reboots, adding this line: /dev/xvdg /new_data ext4 defaults,nofail,nobootwait 0 2
Mount the new volume: sudo mount -a
Make the new directories: sudo mkdir -p /new_data/lib/cassandra/commitlog
Chown the ownership: sudo chown -R cassandra:cassandra /new_data/lib/cassandra
Change cassandra.yaml to point to the new dirs (see the yaml sketch after this list).
Drain the node. If you're moving the data dir, copy the data over from the old location to the new location. If you're moving the commitlog only, just restart Cassandra.
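For reference, the relevant cassandra.yaml settings would then look something like the sketch below; the data and saved_caches subdirectories are assumed here to follow the same /new_data/lib/cassandra layout, so adjust them to whatever paths you actually create:
data_file_directories:
    - /new_data/lib/cassandra/data
commitlog_directory: /new_data/lib/cassandra/commitlog
saved_caches_directory: /new_data/lib/cassandra/saved_caches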
I was able to move all the files and the commitlog as well. I changed the yaml and pointed it to where I wanted everything to go. Remember to run chown -R cassandra:cassandra on the new directories afterward.
And voila! Everything is reading/writing as it should. Cassandra is neato.

Solutions to resize root partition on live mounted system

I'm writing a Chef recipe to automate setting up software RAID 1 on an existing system. The basic procedure is:
1. Clear the partition table on the new disk (/dev/sdb)
2. Add new partitions and set them to RAID using parted (sdb1 for /boot and sdb2 with LVM for /)
3. Create a degraded RAID with /dev/sdb using mdadm --create ... missing
4. pvcreate /dev/md1 && vgextend VolGroup /dev/md1
5. pvmove /dev/sda2 /dev/md1
6. vgreduce VolGroup /dev/sda2 && pvremove /dev/sda2
...
...
I'm stuck on no. 5. With 2 disks of the same size I always get an error:
Insufficient free space: 10114 extents needed, but only 10106 available
Unable to allocate mirror extents for pvmove0.
Failed to convert pvmove LV to mirrored
I think it's because mdadm --create adds extra metadata to the disk, so it ends up with slightly fewer physical extents.
To remedy the issue, one would normally reboot the system off a live distro and:
e2fsck -f /dev/VolGroup/lv_root
lvreduce -L -0.5G --resizefs ...
pvresize --setphysicalvolumesize ...G /dev/sda2
etc etc
reboot
and continue with step no. 5 above.
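Spelled out, that live-distro sequence would look roughly like the sketch below; the idea is to shrink the filesystem and LV first, then shrink the PV so it fits inside the slightly smaller md device. The 39.5G value is made up for illustration; it must be no larger than what the md device can actually hold.
e2fsck -f /dev/VolGroup/lv_root
lvreduce -L -0.5G --resizefs /dev/VolGroup/lv_root
pvresize --setphysicalvolumesize 39.5G /dev/sda2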
I can't do that with Chef as it can't handle the rebooting onto a live distro and continuing where it left off. I understand that this obviously wouldn't be idempotent.
So my requirements are to be able to lvreduce (somehow) on the live system without using a live distro cd.
Anyone out there have any ideas on how this can be accomplished?
Maybe?:
Mount a remote filesystem as root and remount current root elsewhere
Remount the root filesystem as read-only (but I don't know how that's possible as you can't unmount the live system in the first place).
Or some other solution to reboot into a live distro, script the resize, reboot back, and continue the Chef run (not sure if this is even possible).
Ideas?
I'm quite unsure Chef is the right tool for this.
Not a definitive solution, but what I would do in this case:
Create a live system with Chef and the cookbook on it
Boot from it
Run Chef as chef-solo with the recipe doing the work (which should work, as the physical disks are unmounted at that point); a minimal invocation sketch follows below.
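A minimal chef-solo sketch for that live-system run (the cookbook name raid_migration and the /root paths are hypothetical, just to show the shape of the invocation):
# /root/solo.rb
cookbook_path '/root/cookbooks'
# /root/node.json
{ "run_list": [ "recipe[raid_migration]" ] }
# then, from the booted live system:
chef-solo -c /root/solo.rb -j /root/node.json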
The best way would be to write cookbooks that can rebuild the target boxes from scratch; once that's done, you can reinstall the target entirely, with the correct partitioning, at system install time and let Chef rebuild your application stack afterwards.
