Where can I get fsck code? - linux

I have been trying to find the fsck source code. I cannot find it in the coreutils package in Ubuntu. Could someone please let me know where I would be able to take a look at the fsck code?

fsck has several implementations, depending on the file system in use. For ext2/ext3/ext4 you need the "e2fsprogs" package in Ubuntu.
Try:
sudo apt-get source e2fsprogs
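Once downloaded, the checker itself lives in the e2fsck/ subdirectory of the unpacked tree (the exact directory name depends on the packaged version, hence the wildcard below):
cd e2fsprogs-*/e2fsck    # pass1.c ... pass5.c and unix.c contain the actual checking logic
ls *.c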

The fsck utility is part of the "util-linux" package. The sources of "util-linux" can be downloaded from https://mirrors.edge.kernel.org/pub/linux/utils/util-linux/
fsck calls a specific utility depending on the filesystem type: fsck.minix, fsck.ext4, fsck.nfs, fsck.exfat, fsck.ext4dev, fsck.cramfs, fsck.ext3, fsck.fat, fsck.vfat, fsck.msdos, fsck.ext2.
The e2fsprogs package includes fsck.ext2, fsck.ext3, fsck.ext4 and the multi-utility fsck. If fsck is asked to check a vfat filesystem and fsck.vfat is not present on the system, fsck cannot work.
The dosfstools package includes fsck.fat and the symlinks fsck.msdos and fsck.vfat, both pointing to fsck.fat.
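A rough sketch, on a typical Ubuntu system, for seeing which backends you have and grabbing the wrapper's source (the paths are the usual ones, and the deb-src requirement is an assumption about your sources.list):
ls /sbin/fsck.* /usr/sbin/fsck.* 2>/dev/null    # backends installed on this system
apt-get source util-linux                       # fetch the wrapper's source; needs a deb-src entry
In the unpacked util-linux tree the wrapper itself should be disk-utils/fsck.c.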

Related

After updating glibc: Segmentation fault (core dumped)

I've been using CentOS 6.5, and I used yum to update my glibc:
yum update glibc
I found that both my "yum" command and my "python" command throw errors as follows:
I've tried other shell commands like ls, ll, ln, rm, mv, etc.; those commands work normally. When I check my libc link, the result is as follows:
Additionally, I have tried to print my libz config using
ldconfig -v|grep libz
The result will be as follows:
I was wondering why this happened, and I really need your help to solve this problem.
What's more, my 'gdb' throws this error too. When I use the 'dmesg' command, I get the following message:
CentOS 6 is based on glibc 2.12. The symbolic link points to glibc 2.16, so you tried to install a glibc package which is not part of the operating system. This has corrupted the system, likely beyond repair. You will have to reinstall it and restore the data from backup.
Avoiding reinstallation is a complex operation. You need to make sure that you still have all the files for glibc 2.12 (with names ending in -2.12.so). Then you can delete the glibc 2.16 files (those ending in -2.16.so) with a single rm invocation. (The single rm invocation is necessary because rm will stop working once you start deleting the glibc 2.16 files.) After that, you can run ldconfig to get back the right symbolic links.
You could also try to use sln or ln -sf to fix the symbolic links manually, but you will have to remove the glibc 2.16 files at some point. Until you do, every ldconfig invocation will bring back the glibc 2.16 symbolic links. And ldconfig is run automatically during package installation, so this can happen quite easily by accident.
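A hedged sketch of that recovery; /lib64 is the usual library directory on 64-bit CentOS 6, but verify the paths with ls before deleting anything:
ls /lib64/*-2.12.so    # confirm the original glibc 2.12 files are still present
rm /lib64/*-2.16.so    # remove every 2.16 file in a single rm invocation
ldconfig               # rebuild the symbolic links so they point back at 2.12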

How to change yum install location?

Could anyone tell me how to change yum's default install directory? I have been trying to install DataStax Cassandra after creating the datastax.repo file in the yum.repos.d directory, but when installing it says there is not enough space; it installs in the default / filesystem. Can I change it to the /data or /local/apps directory, where there is plenty of space? How can I do this? Command used: yum install dse-full
Many thanks for the help.
You don't. Not really.
If the RPM is built as a relocatable RPM (almost none are, as far as I'm aware), then, and only then, can you use the --prefix or --relocate arguments to rpm to do some amount of prefix replacement/path translation.
That said, that is almost certainly not the case.
If the RPM installs under a specific prefix (e.g. /opt/cassandra), then you might be able to create a symlink at that location pointing to your other partition, and that might work.
A better option (and one that might be more reliable) would be to use a bind mount at that location to somewhere on your other partition.
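A minimal sketch of the bind-mount idea, assuming the package installs under /opt/cassandra and the roomy partition is mounted at /data (both paths are examples):
mkdir -p /data/cassandra /opt/cassandra
mount --bind /data/cassandra /opt/cassandra
echo '/data/cassandra /opt/cassandra none bind 0 0' >> /etc/fstab    # make it survive reboots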
That said, the real answer here is to give your root partition more space, which, assuming you used LVM to create your partitions (and you really probably should have), is not a complicated task.
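If the root filesystem really is on LVM, growing it is typically just two commands; the volume group and logical volume names (vg0/root) and the ext4 filesystem type below are assumptions, so check yours with lvs and df -T first:
lvextend -L +10G /dev/vg0/root    # grow the logical volume by 10 GiB
resize2fs /dev/vg0/root           # grow the ext4 filesystem to match (XFS would need xfs_growfs instead)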
I've been stuck on a legacy server with insufficient disk space, and had to use an approach similar to this answer.
You can find out where it wants to install to using rpm commands:
rpm -q -p -l /path/to/rpmfile.rpm | less
If it installs under a common directory such as /usr/local/, you're in luck. I cannot download the RPMs from the vendor, since it requires registration, but from the docs about the .run installer for the same product, the default is /usr/local/dse.
If this is the same for the .rpm installs, then you can just symlink that directory to your large disk (create the real directory on the large disk first, then make /usr/local/dse point at it):
mkdir -p /local/apps/dse
ln -s /local/apps/dse /usr/local/dse
Hope that helps!

Files installed from debian package with dpkg do not belong to root

I created a binary package with this command:
dpkg-deb --build -z9 -Zlzma $(DEB_SRC_DIR) $(DEB_DEST_DIR)
and installed it on my Ubuntu 12.04 with this command:
sudo dpkg -i /path/to/package
The contents of the package are, I think, irrelevant.
Despite the sudo command, the files in the installation directory belong to the current user and not to root, as I expected they would.
How can I fix that?
Try to run the dpkg-deb command with fakeroot:
`fakeroot dpkg-deb ...`
(This will only help if the files in the source directory already have the correct ownership, which they probably don't. The problem you're actually trying to solve here is to create an archive with files in it that belong to user root, which is where fakeroot theoretically helps.)
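To check what ownership actually got recorded, you can list the archive's contents (the package name below is a placeholder):
dpkg-deb -c mypackage.deb    # tar-style listing; the owner/group column should read root/root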
Let me say, though, that what you are doing is not the best way of creating a binary package (far from it).
Instead, create a debian/ directory with dh_make (from the dh-make package), and edit the control file and changelog accordingly. You also need a file debian/install that lists what files you are installing and where they should go. There are various guides on the net (and on Stack Overflow) that explain this process. For example, look at the Debian New Maintainers' Guide.
You can then use dpkg-buildpackage to create a real, standard-conforming Debian package with your files in a reproducible way.
dpkg-deb is a low-level tool for manipulating existing deb files; it's not meant to be used for package creation.
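A minimal sketch of that workflow; the package name and version are placeholders, and the generated templates still need the editing described above:
# run in the top-level source directory
dh_make --native -p mypackage_1.0    # from the dh-make package; creates the debian/ skeleton
# edit debian/control, debian/changelog and debian/install to match your files
dpkg-buildpackage -us -uc            # builds ../mypackage_1.0_<arch>.deb without signing anything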

Unable to boot linux due to removing the filesystem package

On my Fedora x64 system I accidentally removed the "filesystem" package while I was root, by executing this command:
rpm -e filesystem --nodeps
instead of doing this:
yum update filesystem
Unfortunately the command executed normally and the "filesystem" package was deleted completely.
Now the system refuses to boot up, showing this message:
systemd[1]: Failed to execute /bin/sh, giving up: No such file or directory
Now I can't do anything to fix it, so any solutions are welcome, because I don't want to reinstall the system.
I am running x64 Fedora 18 Linux on an Intel i3 processor.
I ran into the same beast on Fedora 19. After 3 hours I found a quite straightforward solution; what I did was:
Booted from a Fedora Live USB stick of the same version as the installed system.
Mounted root into a local directory (btrfs): mount -o subvol=/root /dev/sda3 /mnt
Downloaded the filesystem package, telling yum that its working and base directories are at my mountpoint: yum -c /mnt/etc/yum.conf --installroot=/mnt --downloadonly --downloaddir=~/ install filesystem
Since the package filesystem.x.x.x.rpm had already been gloriously removed by the rpm -e filesystem --nodeps command, I installed the downloaded filesystem.rpm - at least I thought so. It turned out I had to force rpm because some other package from Google Earth was blocking my command:
rpm -Uvh --root=/mnt ~/filesystem.x.x.x.rpm --force
Finally I edited /etc/selinux/config and turned SELinux off:
SELINUX=disabled
I'd take the drive out, install it in another system, mount it as a secondary drive, and fool around with RPM to install the package at the specified path.
Bear in mind you'll need to manually check that all your dependencies are installed too, and that you're installing the correct version for Fedora 18.
I guess there might be other ways to do this too, but as long as you have another system you can connect the drive to, this might be the least effort.
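A hedged sketch of that approach, assuming the broken root partition shows up as /dev/sdb3 on the helper machine and you have already downloaded the matching Fedora 18 filesystem RPM into the current directory (device name and RPM version are examples):
mkdir -p /mnt/broken
mount /dev/sdb3 /mnt/broken
rpm -ivh --root=/mnt/broken filesystem-*.fc18.x86_64.rpm    # may need --force if leftover metadata blocks it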
I'd boot your broken system off a rescue disk on DVD, CD, USB, or what have you. My experience was with Knoppix (a few years back); it was regarded as the best. However, if you don't have that, google "fedora rescue" and download that. See if it can read your hard drive, perhaps allowing you to avoid losing files of value that you had on the old system; copy them out to some removable media. Or it may actually diagnose your situation and suggest fixing it for you.
Otherwise, I suspect the least-effort path back to a working system will be to install Linux from scratch. The "filesystem" is not a separate package; it pretty much is the Linux installation. The kernel is still present and booting, but everything else is gone.
I looked for the ISO, mounted it, and extracted the RPM package filesystem-3.2-10.fc19.x86_64.rpm. I then looked for a live CD, booted into it, mounted my former working partition, and then ran
rpm2cpio /root/filesystem-3.2-10.fc19.x86_64.rpm | cpio -idmv
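A sketch of that sequence with explicit paths; the ISO name, the mount points and the location of the broken root are all assumptions:
mkdir -p /mnt/iso
mount -o loop Fedora-19-x86_64-DVD.iso /mnt/iso
find /mnt/iso -name 'filesystem-3.2-10.fc19.x86_64.rpm' -exec cp {} /root/ \;
cd /mnt/sysroot    # the former working partition, mounted from the live CD
rpm2cpio /root/filesystem-3.2-10.fc19.x86_64.rpm | cpio -idmv    # unpacks straight into the broken root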

Running apt-get for another partition/directory?

I have booted my system from a live Ubuntu CD, and I need to fix some package problems. I have mounted my hard drive, and now I want to run apt-get as if I had booted normally, i.e. change the working directory for apt-get so it will work on my hard drive. I have done this before, but I can't remember the syntax. I think it was only some flag, like this:
apt-get --root-directory=/mnt/partition1 install....
But I only get "Command line option...is not understood". Any ideas?
This should also work:
sudo apt-get -o Dir=/media/partition1 update
chroot /mnt/partition1
If your system uses several disk partitions you may have to mount some of them in order to get the package system working (I stopped setting up multiple partitions 10 years ago when hard disks started to get too large for raw physical backup).
This wouldn't work if you don't already have a usable debian system in that location. – akostadinov
If you can't get the package system working when chrooting, perhaps it is too messed up to ever be trusted again - in my experience the effort to bring it back to life rarely pays off. If that happens, be happy you can still access your HD, back up your data and perform a clean reinstall.
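A common sketch of the chroot route, assuming the installed system is mounted at /mnt/partition1; the bind mounts give apt-get and the packages' maintainer scripts the virtual filesystems they usually expect:
mount --bind /dev  /mnt/partition1/dev
mount --bind /proc /mnt/partition1/proc
mount --bind /sys  /mnt/partition1/sys
cp /etc/resolv.conf /mnt/partition1/etc/resolv.conf    # so name resolution works inside the chroot
chroot /mnt/partition1 /bin/bash
apt-get update       # run inside the chroot
apt-get -f install   # attempt to repair broken or half-configured packages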
Some relevant comments from another answer:
apt-get -o RootDir=/tmp/test_apt sets (almost) all paths to be in the different root. btw on a running system, if you copy /etc/apt, /usr/lib/apt, and mkdir -p usr/lib etc var/cache var/lib/dpkg var/lib/apt/lists/partial var/cache/apt/archives/partial and finally touch var/lib/dpkg/status, then apt is going to work in that root. It can even work as a non-root user if you add the option -o Debug::NoLocking=1. The nolock option is necessary because I couldn't find a way to set the lock file inside the different root directory. – akostadinov
Work means using search and downloading packages and such operations. Actually installing is not possible if some essential packages are not already there. debootstrap can help if the goal is actually installing packages in a new root for whatever reason. – akostadinov
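A consolidated sketch of the setup akostadinov describes, with /tmp/test_apt standing in for the alternate root (that path and running it unprivileged are assumptions carried over from the comment):
R=/tmp/test_apt
mkdir -p "$R"/etc "$R"/usr/lib
cp -a /etc/apt "$R"/etc/
cp -a /usr/lib/apt "$R"/usr/lib/
cd "$R"
mkdir -p usr/lib etc var/cache var/lib/dpkg var/lib/apt/lists/partial var/cache/apt/archives/partial
touch var/lib/dpkg/status
apt-get -o RootDir="$R" -o Debug::NoLocking=1 update    # search and download work; installing still needs the essential packages in place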
Running chroot /mnt/partition1 will start a new shell in which the root of the filesystem is /mnt/partition1. Assuming the apt-get on your hard drive still works correctly, you can proceed from there.
dpkg --root=/mnt/partition1 -i mypackage.deb is an option that doesn't require chroot, but does require you to download the package yourself.
