Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 5 days ago.
I had a problem on a RAID1 array with 4 disks. We replaced the faulty disk and restarted the server; the rebuild completed, but two Linux CentOS 7 machines did not come up, reporting an XFS corruption error. The other machines came up normally.
I tried to mount the partition:
# mount /dev/mapper/cs_mbox_opt /mnt
It returned: XFS metadata corruption detected at xfs_dir3_leaf_check_init.....
I ran xfs_repair and got a message that the filesystem could not be fixed, suggesting the -L option. I then ran xfs_repair -L, and after many error messages it reported that it could not repair the filesystem, ending with:
Metadata CRC error detected at 0x559d9f7ac1e9. xfs_dir3_block 0x41df0c80/0x1000
corrupt block 0 in directory inode 807368306: junking block
Segmentation fault (core dumped)
I then exported the metadata and restored it to a file elsewhere, but repairing that copy also failed. Commands:
# xfs_metadump -gwa /dev/mapper/[volume] /tmp/xfsmetadata.img
# xfs_mdrestore -g /tmp/xfsmetadata.img /tmp/xfs_file
# xfs_repair -vf /tmp/xfs_file
Sorry, could not find valid secondary superblock.
See attached images.
At the moment I don't know what else to do; the steps I have tried are listed above. Any tips?
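One hedged suggestion before any further destructive attempts: image the volume with dd and run repairs against the copy, never the live device, so nothing xfs_repair does can make the original worse. The sketch below uses a scratch file as a stand-in for /dev/mapper/cs_mbox_opt so it can run without root; the real-system equivalents are shown in comments and are assumptions about your paths, not tested commands.

```shell
#!/bin/sh
# A scratch file stands in for the damaged block device so this sketch
# is runnable without root. On the real system, image the device onto a
# disk with enough free space and repair the image instead.
src=$(mktemp)    # stand-in for /dev/mapper/cs_mbox_opt
img=$(mktemp)    # raw image that further repair attempts will use
head -c 8192 /dev/urandom > "$src"
# conv=noerror keeps dd going past read errors on a failing disk:
dd if="$src" of="$img" bs=4096 conv=noerror 2>/dev/null
cmp "$src" "$img" && echo "image matches source"
# Real-system equivalents (hypothetical paths - do NOT run on the live device):
#   dd if=/dev/mapper/cs_mbox_opt of=/backup/opt.img bs=4M conv=noerror,sync
#   xfs_repair -v /backup/opt.img
#   mount -o loop,ro /backup/opt.img /mnt
```

If the repaired image mounts read-only, you can at least salvage files from it even when the original volume stays unfixable.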
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 3 years ago.
I have an Acer Aspire R laptop with 260GB SSD, UEFI, Ubuntu and Windows 10 dual boot. How can I backup / clone / image the whole drive to be reinstalled on a new drive if current drive fails?
Clonezilla: Will it back up all partitions (EFI, recovery, Ubuntu, swap, Windows) to an external drive, so that I can restore them to a new drive without problems? Which file system should the external drive have?
GParted: Or should I partition the external drive like the existing drive and copy the partitions over with GParted?
You can use Clonezilla to make a bootable copy of the whole existing SSD with all of its partitions including Windows.
The boot menu comes from Grub2 and it gets created from templates in /etc/grub.d and settings from /etc/default/grub.
So, if your Clonezilla ISO file lives at /srv/iso/clonezilla-live-disco-amd64.iso and the /srv directory lives on hard drive 0, partition 13, then create a new executable file in /etc/grub.d, such as 40_clonezilla, and put the following in it:
#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
menuentry "Clonezilla live" {
  set root=(hd0,13)
  set isofile="/iso/clonezilla-live-disco-amd64.iso"
  loopback loop $isofile
  linux (loop)/live/vmlinuz boot=live union=overlay username=user config components quiet noswap nolocales edd=on nomodeset ocs_live_run=\"ocs-live-general\" ocs_live_extra_param=\"\" keyboard-layouts= ocs_live_batch=\"no\" locales= vga=788 ip=frommedia nosplash toram=live,syslinux,EFI findiso=$isofile
  initrd (loop)/live/initrd.img
}
Then, run update-grub to regenerate your grub menu.
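One detail worth spelling out: the file must be executable, and the `exec tail -n +3 $0` header is what turns the script into its own output (it prints everything from its own third line onward, which is how the menuentry text reaches GRUB). A hypothetical dry run of that mechanism, using a scratch directory in place of /etc/grub.d so it needs no root:

```shell
#!/bin/sh
# Scratch directory stands in for /etc/grub.d.
grubd=$(mktemp -d)
cat > "$grubd/40_clonezilla" <<'EOF'
#!/bin/sh
exec tail -n +3 $0
menuentry "Clonezilla live" { ... }
EOF
chmod +x "$grubd/40_clonezilla"   # grub only runs executable scripts
"$grubd/40_clonezilla"            # prints its own menuentry text
```

Running the script prints just the `menuentry` line, which is exactly what update-grub collects into the generated grub.cfg.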
When you reboot, you will have a new boot option that boots Clonezilla, and from there you can make a bootable copy of the existing hard drive onto an external drive, overwriting whatever is already on it.
All of this, editing GRUB templates and overwriting drives, is quite dangerous, and the penalty for getting something wrong is high.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 1 year ago.
I wanted to reduce a filesystem's size, so I ran lvreduce on it, which completed successfully:
sudo lvreduce -L 40G /dev/mapper/tmp
When I try to run mkfs.xfs with the force flag, it refuses:
mkfs.xfs: /dev/tmp contains a mounted filesystem
sudo lvmdiskscan shows the correct size after the lvreduce, but when I mount the volume again it does not show the correct size.
Can anyone please help? Let me know if you need more details. Thanks in advance.
I was having a similar issue and I found this:
https://discuss.pivotal.io/hc/en-us/articles/201816273-xfs-repair-failed-with-error-message-dev-sdb-contains-a-mounted-filesystem-
Basically, comment out the mount in your fstab and then reboot. Then perform your format or repair. Re-mount as needed, either manually or in your fstab.
This worked for me.
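mkfs.xfs refuses any device that appears in the kernel's mount table, so a quick pre-flight check can tell you whether an unmount (or the fstab-comment-and-reboot route above) is still needed. A hypothetical sketch, reading /proc/mounts, which needs no root; /dev/mapper/tmp is the volume name from the question:

```shell
#!/bin/sh
# Check the kernel's mount table before attempting mkfs.xfs -f.
dev=/dev/mapper/tmp
if grep -q "^$dev " /proc/mounts; then
    echo "$dev is mounted - umount it (or comment it out of /etc/fstab and reboot) first"
else
    echo "$dev is not in /proc/mounts - mkfs.xfs -f $dev should proceed"
fi
```

Checking /proc/mounts (rather than /etc/fstab) matters because a filesystem can be mounted even when its fstab entry is commented out, and vice versa.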
I was having the same problem. The way I solved it was:
In the console, switch to the superuser: su -l
fdisk -l to list the device table
fdisk /dev/device-name
p to print the partition table of the device
d to delete partitions
n to create a new partition
If you don't know how to use it, you can press m for help
t to assign the partition type (for a Linux filesystem, use type 83)
w to write the new partition table to the device
It is very important to press w, otherwise nothing is saved
Then you can use mkfs:
mkfs.fat -F32 /dev/sdc1 (change sdc1 to the partition you want to format; -F32 selects the FAT32 file system)
exit to return to the standard user
First unmount the filesystem with umount, then perform the size reduction.
Unmount the selected partition in Disk Utility with the stop icon button.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
I hope this is the right place to ask this question.
While trying to move a big directory mydirname (about 900 GB) on a remote Linux server from /abc/source to /xyz/target, I used the following command in the source directory:
mv mydirname /xyz/target/ &
However, after a while the process was interrupted with errors:
mv: cannot stat `mydirname/GS9/set04/trans/run.3/acc': Stale file handle
mv: cannot stat `mydirname/GS9/set04/trans/run.4/amc': Stale file handle
...
and many more such messages mentioning different subdirectory locations.
The problem is that the process moved about 300 GB of data, but many directories were not fully moved. A similar problem occurred with another transfer (about 500 GB) running on the same machine.
Also, I am no longer in the same session; I have disconnected from and reconnected to the remote server.
It would be great if you help with following queries.
Is it possible that some of the files are not fully transferred? (I have seen such cases with cp, where an interrupted copy leaves a smaller file at the destination.)
How can I resume the process so that I do not lose any data? Will mv be enough, or is there a special command that can run in the background?
Alternatively, is there a command to undo the process and restore mydirname to its original location in /abc/source?
Use rsync to complete a job like this:
rsync -av --delete mydirname /xyz/target/
(No trailing slash on the source, so rsync creates /xyz/target/mydirname, matching what mv would have produced.) It will verify that all files arrive with the proper length and timestamps, and --delete removes any leftover garbage inside the transferred directory.
You can test first with a "dry run" to see what the damages are:
rsync -avn --delete mydirname /xyz/target/
This goes through the whole rsync process but doesn't actually do anything. It's usually a good idea to run this test to check your command syntax and see if it's going to do what you think it should do.
The rsync command is actually more like a copy (cp) than a move (mv): it leaves the source files in place, and you can delete them later, once you are satisfied that everything has transferred correctly.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
I would like to know how to copy a Linux partition (example: /dev/sda1) on a USB stick, and then boot on the USB stick.
I tried to just copy it with the cp command, but when I tried to boot from the stick, it booted from the original partition (/dev/sda1) and not the USB.
In short, what I want to do is create a USB stick containing my Linux partition that I can boot from on any computer.
Thank you.
cp is great for copying files, but it is too high-level for copying partitions. When you copy a partition, you read from one device file and write to another device file, a regular file, or whatever. With cp, many file attributes might be changed: modification time, owner, permissions, etc. That isn't acceptable for partition copies; e.g. files owned by root should still be owned by root, and ~/.ssh/config should still have permissions 600.
The program for this task is dd, which copies bit-by-bit. You specify an input file and an output file:
dd if=/dev/sda of=/dev/sdf bs=512
This copies the contents of /dev/sda to /dev/sdf, reading 512 bytes at a time (bs = block size). After some time it will finish and report some statistics. To get statistics during the copy, send the SIGUSR1 signal to the dd process.
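A hypothetical dry run of the same invocation against ordinary files, so the syntax can be tried without touching real disks; the temp files stand in for /dev/sda and /dev/sdf:

```shell
#!/bin/sh
# Temp files as stand-ins for the source and destination devices.
src=$(mktemp); dst=$(mktemp)
head -c 4096 /dev/zero > "$src"    # 8 "sectors" of 512 bytes
dd if="$src" of="$dst" bs=512      # same shape as dd if=/dev/sda of=/dev/sdf bs=512
cmp "$src" "$dst" && echo "bit-for-bit identical"
```

cmp confirming a bit-for-bit match is the property that makes dd suitable here: nothing about ownership, permissions, or timestamps inside the copied filesystem is touched.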
Please beware that dd is a dangerous tool if used incorrectly: for example, it won't ask permission before overwriting your 10,000-picture vacation album. It simply does it. Make sure to specify the correct device files!
You also have to make sure the sizes of source and destination fit: the destination needs to be at least as large as the source. Copying a 500 GB hard disk to a 4 GB USB stick won't work.
Copying a whole hard disk also copies the boot loader. One issue is that entries in the boot loader configuration may reference the wrong disks. However, starting the boot loader itself should be no problem (provided the architecture matches). If you use GRUB, you even get a command line, which you can use to boot the system manually.
Also change your BIOS settings so that the first boot device is USB.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
How do I edit the /etc/fstab file so that the file systems on my disk partitions are automatically mounted at system boot? I'm using CentOS 5 on a VMware virtual machine. Using an example fstab file that looks like this:
/dev/hda2 / ext2 defaults 1 1
/dev/hdb1 /home ext2 defaults 1 2
/dev/cdrom /media/cdrom auto ro,noauto,user,exec 0 0
/dev/fd0 /media/floppy auto rw,noauto,user,sync 0 0
proc /proc proc defaults 0 0
/dev/hda1 swap swap pri=42 0 0
and I wanted to add another partition /dev/hdc1, with file system jfs so that it automatically mounts on system boot, how would I add it to fstab?
/dev/hdc1 /data jfs defaults 0 0
The second field is the mount point: it must be an existing directory, and it cannot be / (which /dev/hda2 already uses above), so replace /data with whatever mount point you intend.
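For reference, the six fstab fields are: device, mount point, filesystem type, mount options, dump flag, and fsck pass number. A hedged example entry (the /data mount point is an assumption; a pass number of 2 queues it for fsck after the root filesystem, matching the /home entry above):

```text
# <device>  <mount point>  <type>  <options>  <dump>  <fsck pass>
/dev/hdc1   /data          jfs     defaults   0       2
```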
Well, if you need fine-grained options, read here: http://www.centos.org/docs/5/html/5.1/Deployment_Guide/s2-nfs-fstab.html