My system's C library is /lib64/libc.so.6, which practically every dynamically linked command depends on.
I stupidly renamed it to /lib64/libc.so.6.old and now NO commands work.
I cannot run ls or mv to rename it back.
I can run ldconfig, but it says "permission denied", and I cannot run sudo or su. What on earth can I do to fix this? I am running Oracle Linux 6.7 (Red Hat compatible).
LD_PRELOAD=/lib64/libc.so.6.old mv /lib64/libc.so.6.old /lib64/libc.so.6
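This works because mv is started by the real dynamic linker (/lib64/ld-linux-x86-64.so.2, which was not touched), and the renamed file still carries the SONAME libc.so.6, so preloading it satisfies mv's libc dependency. Once the rename succeeds, a quick sanity check might look like this (a sketch; paths assume a standard x86_64 layout):
ls -l /lib64/libc.so.6
ldconfig -p | grep libc.so.6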
Boot from a recovery/install ISO and rename the file back.
If you can't reboot or don't have physical access to the machine, you could try to install a precompiled version of BusyBox (https://busybox.net/FAQ.html#getting_started) and use its su and mv applets. Since BusyBox is statically linked, it should work without libc.so.6.
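If you can get a static busybox binary onto the machine (for example over a share that is already mounted), the repair itself is a one-liner. A sketch, with a hypothetical path; note that a busybox you copied in yourself will not be setuid, so its su applet only helps if you can still become root some other way:
/var/tmp/busybox mv /lib64/libc.so.6.old /lib64/libc.so.6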
Boot into single-user mode, remount the filesystem read-write, and, since you know the location of the renamed file, move /lib64/libc.so.6.old back to /lib64/libc.so.6.
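Once you reach a working shell that way, the session is short (a sketch; assumes the root filesystem is mounted at /):
mount -o remount,rw /
mv /lib64/libc.so.6.old /lib64/libc.so.6
mount -o remount,ro /   # optional: restore the original mount state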
I would also propose a workaround with a mount point, as already mentioned by @wildplasser.
You can get the majority of command-line tools working again if you have a directory mounted from another host on your broken machine. If you are lucky enough to have one, all you need to do is upload libc-x.yz.so (which you can take from another host or from the Internet) to the share, rename it there to libc.so.6, and add the mounted directory to LD_LIBRARY_PATH.
If the version x.yz is the same as that of the one you thoughtlessly moved, then commands like ls, cp, etc. will work again in the console where you set LD_LIBRARY_PATH. Do not log out of this console, because you won't be able to log in again.
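A minimal sketch, assuming the share is mounted at /mnt/share (a hypothetical path) and a matching libc.so.6 has been placed there:
export LD_LIBRARY_PATH=/mnt/share
ls /lib64   # dynamically linked tools work again in this shell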
Be aware that setuid command-line tools won't work (see https://askubuntu.com/a/1029363/832810). Unfortunately sudo is one of them, which is why you won't be able to easily put your long-suffering .so back (unless you already have a root console open). However, this gives you the chance to save all your data and finish everything before you do some hard restore.
If you managed to pull off the above-mentioned trick and have enough time, you can try to build a statically linked version of sudo, as suggested at https://askubuntu.com/a/1030475/832810 (you could even build it on another host and copy it over through NFS), and use it to move the .so back.
I just found that Matlab (2016a) put 2.5 GB of installation files, fetched during the installation, in the root home directory (Linux Mint 18), under /root/Downloads/MathWorks. I guess it is probably because I used sudo for the installation.
My questions are:
Is it normal for a program to store data under /root when the user executes it with sudo?
Can I delete the files under /root/Downloads? (My limited Linux knowledge tells me not to touch anything in the /root folder.)
When you execute anything with sudo, you basically execute it as root.
MathWorks uses the Downloads folder (in your case /root/Downloads, since you ran the installer as root) for temporary data (according to https://de.mathworks.com/matlabcentral/answers/229835-is-the-mathworks-folder-necessary-to-run-properly?requestedDomain=www.mathworks.com).
So, yes, it seems like you can delete the folder.
Or just move it to MathWorks.bak and check whether MATLAB still works properly. If everything works fine, you can delete MathWorks.bak.
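Concretely, that test might look like this (a sketch):
sudo mv /root/Downloads/MathWorks /root/Downloads/MathWorks.bak
# ...launch MATLAB and confirm everything still works...
sudo rm -rf /root/Downloads/MathWorks.bak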
A program run with sudo can do anything; what it actually does depends only on what the program is designed to do. sudo simply elevates the permissions when running a given command.
I would have thought that the installer would download everything to /tmp instead of /root/Downloads. But as long as you didn't select /root/Downloads as your MATLAB installation directory, and it is only the temporary download location, you can certainly remove it after successfully installing MATLAB to a "typical" location such as /usr/local/MATLAB/R2016a.
I've recently had to compile a program (Riak) from source since they don't have a repo available for Ubuntu 16.04 yet.
I've compiled the program and copied it to /opt/riak where it works fine.
Since this program requires sudo privileges, I decided to symlink /opt/riak/bin/riak to /usr/local/bin/riak instead of adding the directory to the PATH via a profile.d file (for that to work with sudo I'd have to remove env_reset from /etc/sudoers, which I'd rather not do).
The error I get is the following:
/usr/local/bin/riak: 8: .: Can't open /usr/local/bin/../lib/env.sh
Shouldn't the symlink execute the file from the original's working directory? Is there a way to make it work?
The error message is almost self-explanatory. Apparently the riak executable is trying to find a file called env.sh using a path relative to its own, namely ../lib/env.sh. Originally, this resolved to /opt/riak/bin/../lib/env.sh, which is the same as /opt/riak/lib/env.sh. But now it is trying to find the file at /usr/local/bin/../lib/env.sh, which is the same as /usr/local/lib/env.sh, and obviously the file is not there.
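The launcher presumably locates env.sh relative to its own path with something along these lines (a guess at the pattern, not Riak's actual code):
# $0 is /usr/local/bin/riak when invoked through the symlink,
# so this resolves to /usr/local/lib/env.sh instead of /opt/riak/lib/env.sh
RUNNER_DIR=$(dirname "$0")
. "$RUNNER_DIR/../lib/env.sh"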
You have the following options (in order of preference):
Leave the program in /opt and invoke it from there
Leave the program in /opt and create a small wrapper shell script in /usr/local/bin that calls the original executable (see the example at the end of this post).
Recompile the program passing the right parameters to its configure script (e.g. --prefix=/usr/local) so that it works from /usr/local.
I would recommend against option 3; I prefer to let the /usr directory be managed by the distro's package manager. If I have to compile something myself, I prefer to put it in a dedicated directory below /opt. This way, if I want to remove it later, I can just delete that directory.
Example wrapper script for option 2:
#!/bin/bash
# Pass all arguments through to the real binary; exec replaces the wrapper process.
exec /opt/riak/bin/riak "$@"
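To use it, replace the symlink with the script and make it executable (riak-wrapper.sh is a placeholder name):
sudo rm /usr/local/bin/riak
sudo cp riak-wrapper.sh /usr/local/bin/riak
sudo chmod +x /usr/local/bin/riak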
I created a git repo in Windows 7 on a NTFS partition and when opening it in Linux (Ubuntu 12 x64, dual-boot setup) I get the index file open failed error. How can I figure out what's wrong? The partition is mounted read-write and I've never had any other problems. Does git store data in a different format Windows vs. Linux and I need to do either a clone or some conversion? I'd really like to be able to work on the same repo in both OSs without cloning around...
Clarification: I also get cat: index: Input/output error when running cat index in the .git dir, so it is an NTFS-related problem... but I'd never had it before until using git across the two systems, and I've run other apps from NTFS partitions and copied files around...
The .git/index file is a binary file which describes the current workdir. Perhaps a git fsck is able to fix it up (move the one you have out of the way to make sure it isn't lost while you fool around, or experiment on a copy of the repository). You might also try to clone the repository locally; the clone might get a good copy of the file, which you could then copy over the broken one.
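A minimal sketch of the "move it out of the way" approach; git can rebuild the index from the current HEAD (try this on a copy of the repository first, as noted above):
mv .git/index .git/index.broken
git read-tree HEAD   # recreate the index from HEAD's tree
git status           # uncommitted changes now show as unstaged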
Possibly permission problems? Back up what is relevant, defragment the drive, and run hardware checks (it might be a broken or breaking disk!).
Either your Linux NTFS driver is broken, or you have filesystem corruption, or both. Reboot into Windows and run the disk-checking utility (chkdsk), then see how things stand when it finishes.
I'm sure this has been asked before, but I have no clue what to search for.
I am trying to create a custom Linux image (for the Raspberry Pi). I am currently manipulating the filesystem of the .img, but I've discovered it's not as simple as dropping in the binary :( if only...
What is the accepted way to "pre-install" a package on a disk image where you can only manipulate the filesystem and ideally not run it first? Am I best to boot up, install, and then create the image from that, or is there a way of doing it beforehand in the same way you can change configuration settings etc?
Usually, when I have to change something in a disk image, I do the following:
sudo mount --bind /proc /mnt/disk_image/proc
sudo mount --bind /sys /mnt/disk_image/sys
sudo mount --bind /dev /mnt/disk_image/dev
These bind mounts are needed because those directories are created during the boot process; mounting them into your system image emulates a full boot. Then you can chroot into it safely:
sudo chroot /mnt/disk_image
You're now able to issue commands in the chroot environment:
sudo apt-get install chromium
Of course, change /mnt/disk_image to the path where you have mounted your filesystem. apt-get will only work on Debian-based systems; change the command according to your distribution.
You may run into problems connecting to the Internet from inside the chroot, which can be caused by the DNS configuration. The best thing you can do is copy your /etc/resolv.conf into the image's filesystem, as this file is usually written by dhcp and is empty in a chroot environment.
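For example (with the image mounted at /mnt/disk_image as above):
sudo cp /etc/resolv.conf /mnt/disk_image/etc/resolv.conf
When you are done, leave the chroot and undo the bind mounts in reverse order:
exit
sudo umount /mnt/disk_image/dev /mnt/disk_image/sys /mnt/disk_image/proc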
This is the only solution that gives you full access to the command line of the system you're trying to modify.
This is an untested idea:
The dpkg tool, which can install .deb packages, has a --root option which can point it at a filesystem other than the local /.
From the man page:
--instdir=dir
Change default installation directory which refers to the
directory where packages are to be installed. instdir is
also the directory passed to chroot(2) before running
package’s installation scripts, which means that the
scripts see instdir as a root directory. (Defaults to /)
--root=dir
Changing root changes instdir to dir and admindir to
dir/var/lib/dpkg.
If you mount your image and pass its mountpoint as --root, it should work.
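A sketch of that invocation, assuming the image's root filesystem is already mounted at /mnt/disk_image and some-package.deb is a placeholder:
sudo dpkg --root=/mnt/disk_image -i some-package.deb
Note that, per the man page excerpt above, the package's maintainer scripts are executed chrooted into the image, so for a Raspberry Pi (ARM) image on an x86 host they may fail unless something like qemu-user-static is set up to run ARM binaries.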
There are things like the Ubuntu Customization Kit which allow you to create your own version of the distro with your own packages.
Crunchbang, the distro I have personally selected for experimenting with my Pi, even has a utility like this.
We're trying to use oprofile to track down performance problems on a server cluster. However, the servers in question have a read-only file system, where /var/tmp is the only writeable directory.
OProfile wants to create two directories whenever it runs: /root/.oprofile and /var/lib/oprofile, but it can't, because the filesystem is read-only. I can use the --session-dir command-line option to make it write its logs somewhere other than /var/lib, but I can't find any such option to make it use a directory other than /root/.oprofile.
The filesystem is read-only because it is on non-writable media, not because of permissions; i.e., not even the superuser can write to those directories. We can cook a new ROM image of the filesystem (which is how we installed oprofile, obviously), but there is no way for a program to write to /root at runtime, whether it is the superuser or not.
I tried creating a symlink in the ROM that points /root/.oprofile -> /var/tmp/oprofile, but apparently oprofile doesn't see this symlink as a directory, and fails when run:
redacted#redacted:~$ sudo opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
mkdir: cannot create directory `/root/.oprofile': File exists
Couldn't mkdir -p /root/.oprofile
We must run our profilers on this particular system, because the performance issues we're trying to investigate don't manifest if we build and run the app on a development server. We can't just run our tests on a programmer's workstation and profile the app there, because the problem doesn't happen there.
Is there some way to configure oprofile so that it doesn't use /root?
I guess it should be as simple as overriding the HOME environment variable:
HOME=/tmp/fakehome sudo -E opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
If that doesn't work out, you could have a look at
unionfs
aufs
to create a writable overlay. You might even just mount a tmpfs on /root, or something simple like that.
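The tmpfs variant is a single command (assuming the appliance's kernel permits new mounts at runtime); the mount is writable but disappears on reboot:
sudo mount -t tmpfs tmpfs /root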
It turns out that this directory is hardcoded into the opcontrol bash script:
# location for daemon setup information
SETUP_DIR="/root/.oprofile"
SETUP_FILE="$SETUP_DIR/daemonrc"
Editing those lines seemed to get it working, more or less.
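For example, pointing it at the writable area (the exact path is just an illustration):
# location for daemon setup information
SETUP_DIR="/var/tmp/oprofile-home"
SETUP_FILE="$SETUP_DIR/daemonrc"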