On Ubuntu 12.04, open a new text file and write:
#include <stdlib.h>

int main()
{
    abort();
    return 0;
}
Now run:
g++ yourfile.cpp
Then run the executable, which will core dump:
./a.out
Now check the mtime of the file:
-rw-r----- 1 xxxxx xxxxx 228K 2012-10-01 19:20:20.752136399 -0500 core
Now run the executable again:
./a.out
Now check the mtime again:
-rw-r----- 1 xxxxx xxxxx 228K 2012-10-01 19:20:20.752136399 -0500 core
It's the same! Why doesn't a fresh core dump overwrite the old one? After a rebuild, this makes gdb complain that the core file is older than the executable.
Just to be sure it wasn't a permissions problem, I tried this in a fresh directory under /tmp and ran chmod -R 777 **/* inside it. Running the executable twice still didn't produce a new core O_o Also, ulimit -c reports 800000000, more than enough for a core this size.
I also tried running a clean bash with env - bash --noprofile --norc; still, running the binary doesn't update the mtime of the core unless I delete the core first.
As https://bugs.launchpad.net/ubuntu/+source/apport/+bug/160999 explains, this is a bug in Ubuntu: the core file is opened with O_EXCL, which prevents a fresh core from overwriting an existing one.
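If you just need a fresh core on each run, one possible workaround (a sketch, assuming the standard kernel sysctl) is to have the kernel append the crashing PID to the core file name, so each crash writes a distinct file instead of colliding with the existing core:
$ sudo sysctl -w kernel.core_uses_pid=1   # cores are now written as core.<pid>
$ ./a.out                                 # each run dumps to a new file name
$ ls core.*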
The core(5) man page lists the conditions under which a core file is not [re-]written. You probably meet one of those conditions; please read that man page carefully.
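For instance, two conditions worth checking right away (a quick sketch; the core_pattern value varies between systems) are the core size limit and where the kernel is told to write cores:
$ ulimit -c                          # 0 means no core file is written at all
$ cat /proc/sys/kernel/core_pattern  # on Ubuntu this is often a pipe to apport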
Related
I'm copying my kernel config file from an existing system to the kernel tree, and I entered this command:
/boot/config-$(uname -r)
Yet I got:
bash: /boot/config-5.15.0-46-generic: Permission denied
Does anyone know why it's saying permission denied and how to fix it? I am using Ubuntu in VirtualBox.
As @Tsyvarev mentioned in the comments, you are probably trying to execute a file that has no exec permissions:
$ ls -l /boot
...
-rw-r--r-- 1 root root 217414 Aug 20 2021 config-5.4.0-rc1+
...
Try running cat /boot/config-$(uname -r) to read the config file for the currently running kernel.
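If the goal was to copy the config into the kernel tree, the intended command was presumably something like this (the destination path here is just an example):
cp /boot/config-$(uname -r) ~/linux-source/.config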
I'm using QEMU to test Raspberry Pi before putting the image onto an SD card. I'm setting up an automated script to put some files onto the Pi, among other things, so that when I put the SD card into the Pi, it is immediately usable. I think I've run into a quirk in how permissions work, but I'm not sure.
When test -x succeeds, the file is supposed to be executable; basically, an x bit is supposed to be set for your user. However, this doesn't seem to apply to files inside mounted filesystems.
The host is Ubuntu, and the guest backing image is Raspberry Pi Buster. I created the mountpoint with guestmount, because I was mounting a snapshot, not the original, and this seems to be the only/best way to do that. The basic flow was:
qemu-img convert -Oqcow2 raspberry-pi.img raspberry-pi.qcow
qemu-img create -f qcow2 snapshot.qcow -b raspberry-pi.qcow
sudo guestmount -a 'snapshot.qcow' -i 'mountpoint/'
For example, I have a file outside the mountpoint. The file I'm testing inside the mountpoint was created by root, so I chowned the outside file to root for comparison:
$ sudo ls -l --author ~/test/file
-rw-r--r-- 1 root root root 1133 Oct 8 21:43 /home/me/test/file
$ sudo test -x ~/test/file && echo 'exists' || echo "doesn't exist"
doesn't exist
However, for a file inside the mountpoint, with the same permissions, the test is successful:
$ sudo ls -l --author mountpoint/home/pi/test/file
-rw-r--r-- 1 root root root 0 Oct 8 22:41 mountpoint/home/pi/test/file
$ sudo test -x mountpoint/home/pi/test/file && echo 'exists' || echo "doesn't exist"
exists
Why is the file inside the mountpoint executable, whereas the one outside is not executable? Is this because the mounted filesystem is a different architecture (x86 vs. ARM)? Is it because I'm using guestmount, and the filesystem isn't the real filesystem, but an amalgamation of the snapshot & the original file? Or is this just the way mounting works? Where can I find more resources on this peculiar behavior, like other permission quirks I might encounter?
If you need any more information about the host or guest, please ask.
This is a bug in libguestfs used by guestmount. You can see it here:
/* Root user should be able to access everything, so only bother
 * with these fine-grained tests for non-root. (RHBZ#1106548).
 */
if (fuse->uid != 0) {
  [...]
  if (mask & X_OK)
    ok = ok &&
      (fuse->uid == statbuf.st_uid ? statbuf.st_mode & S_IXUSR
       : fuse->gid == statbuf.st_gid ? statbuf.st_mode & S_IXGRP
       : statbuf.st_mode & S_IXOTH);
}
The FS takes a shortcut saying that since you're root you have full access, so there's no point checking the permissions.
As you've demonstrated, this is not true. Root should only have execute permissions for directories, and for files where at least one of the execute bits is set.
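You can see the correct behavior on a regular (non-FUSE) filesystem with a quick check (a sketch, using a throwaway file with no execute bits):
$ touch plain; chmod 644 plain
$ sudo test -x plain && echo executable || echo 'not executable'
not executable
$ chmod u+x plain    # any single execute bit is enough for root
$ sudo test -x plain && echo executable || echo 'not executable'
executable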
I was unable to build the project to submit a patch, but you can file a bug.
The man page of ranlib says:
DESCRIPTION
ranlib generates an index to the contents of an archive and stores it in the archive. The index lists each symbol defined by a member of an archive that is a relocatable object file.
Well, I tried to compile an archive file like below:
$ cat o.c
#include <stdio.h>

int f() {
    printf("hello\n");
    return 2;
}
$ gcc -o o.o -c o.c
$ ar rc libmyobject.a o.o
$ cp libmyobject.a libmyobject.a.keep
$ ranlib libmyobject.a
I compared the size of the library file before and after ranlib:
-rw-rw-r-- 1 a a 1626 Oct 3 12:03 libmyobject.a.keep
-rw-rw-r-- 1 a a 1626 Oct 3 12:06 libmyobject.a
They're the same size! That is not what I expected: I thought ranlib would store some extra information in the .a file, yet the file size is unchanged.
So what job does ranlib actually do, and how can I check what it has done?
Thanks.
If binutils is compiled with --enable-deterministic-archives, then by default, multiple runs of ranlib on the same input sources will produce the same output file because it nulls out the timestamp, uid and gid values.
You can force it to be non-deterministic by passing the -U flag, which stores the real timestamp, uid and gid values; because the timestamp will have changed, the files will differ:
$ ranlib -U libmyobject.a
$ diff libm*
Binary files libmyobject.a and libmyobject.a.keep differ
The file size will remain the same, though:
$ ls -l libm*
-rw-r--r-- 1 xx xx 1618 Oct 3 13:58 libmyobject.a
-rw-r--r-- 1 xx xx 1618 Oct 3 13:58 libmyobject.a.keep
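If you want to see the index itself (a sketch, assuming GNU binutils), nm can print it; ranlib's work shows up as the archive index mapping each symbol to its member:
$ nm --print-armap libmyobject.a   # the 'Archive index:' section lists f in o.o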
Bear in mind that determinism in tools like this is actually quite desirable, as it allows for features like Reproducible Builds to work without having to add convolutions to the build process.
With modern GNU tools it's hard to even end up with an archive that lacks an index: ar creates the index by default, and although the S modifier can skip building it, running ranlib (or ar s) puts it right back.
Running ranlib on an archive after you create it is something you do to make your build process work on older, crankier UNIX systems.
I'm trying to use an executable (wkhtmltopdf) on a shared Linux web server (Debian, 64-bit). I am pretty sure that I compiled everything correctly, but whenever I try to execute the file I get this response:
> ./wkhtmltopdf -H
-bash: ./wkhtmltopdf: No such file or directory
To be sure that the file is there, here the ls output :
> ls
wkhtmltoimage wkhtmltopdf
Furthermore I tested the file command on it, which outputs the following :
> file wkhtmltopdf
wkhtmltopdf: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, stripped
My question is now :
Why does bash tell me that there is no 'file or directory', when there obviously is one?
My first guess would be that the shared server does not allow executing binary files? But shouldn't that be a permissions problem then, with different bash output?
Edit :
> id
uid=2725674(p8907906) gid=600(ftpusers) groups=600(ftpusers)
> ls -l wkhtmltopdf
-rwxrwxrwx 1 p8907906 ftpusers 39745960 Jan 20 09:33 wkhtmltopdf
> ls -ld
drwx---r-x 2 p8907906 ftpusers 44 Jan 28 21:02 .
I bet you are missing the dynamic linker. Just do a:
readelf --all ./wkhtmltopdf | grep interpreter
You should get an output like this:
[Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
There is a high chance that your system lacks the interpreter (/lib64/ld-linux-x86-64.so.2 in the example). In that case bash yells No such file or directory, just as when the binary itself is missing.
You can try to use a different linker; sometimes you can succeed. Just do a:
/path/to/the/linker /path/to/your/executable
This command:
find /lib* -name ld-linux\*
will help you find the linkers on your system. Or you can run the readelf command on some binary that does run; it will show you the correct, working linker.
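For example, on a typical x86-64 Debian system (the exact path may differ on yours):
$ readelf --all /bin/ls | grep interpreter
      [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]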
OR, since you are running Debian system, just do a
sudo apt-get install wkhtmltopdf
to install the native version of the tool :)
In my case
$ readelf --all ./wkhtmltopdf | grep interpreter # readelf: Displays information about ELF files.
[Requesting program interpreter: /lib/ld-linux.so.2]
On a machine where the executable was working:
$ ls -lah /lib/ld-linux.so.2
lrwxrwxrwx 1 root root 25 Apr 16 2018 /lib/ld-linux.so.2 -> i386-linux-gnu/ld-2.27.so
$ dpkg -S /lib/ld-linux.so.2 # -S, --search filename-search-pattern: Search for a filename from installed packages.
libc6:i386: /lib/ld-linux.so.2
So to fix the problem (reference)
sudo dpkg --add-architecture i386
sudo apt update
sudo apt install libc6:i386 # GNU C Library: Shared libraries (from apt show)
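Afterwards the interpreter should exist and the binary should start (a quick check):
$ ls -l /lib/ld-linux.so.2   # should now point into i386-linux-gnu
$ ./wkhtmltopdf -H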
The linker was missing in my case as well. I could fix it with the help of nsilent22's answer, like this:
readelf --all /usr/local/myprogram | grep interpreter
[Requesting program interpreter: /lib64/ld-lsb-x86-64.so.3]
But that linker did not exist anymore.
The old situation in /lib64 was:
ld-linux-x86-64.so.2 -> /lib/x86_64-linux-gnu/ld-2.31.so
ld-linux-x86-64.so.3 -> ld-linux-x86-64.so.2
So it turned out this was just a symlink to the system's linker.
Moving over to /lib64, which itself is a symlink to /usr/lib64, and creating a symlink over there did not work. I assume that there are too many symbolic-link levels after Debian moved everything into /usr.
However creating a 'direct' symlink
ln -s /usr/lib64/ld-linux-x86-64.so.2 /lib64/ld-lsb-x86-64.so.3
did the job; /usr/lib64 now shows:
ld-linux-x86-64.so.2 -> /lib/x86_64-linux-gnu/ld-2.31.so
ld-lsb-x86-64.so.3 -> /usr/lib64/ld-linux-x86-64.so.2
I ran into this issue on my Raspberry Pi 4 running aarch64 Alpine 3.13. Using the answer provided by @vkersten, I was able to determine that I was missing /lib/ld-linux-aarch64.so.1.
I resolved this by installing gcompat with apk add gcompat.
I'm setting up a minimal chroot and want to avoid having sudo or su in it but still run my processes as non-root. This is a bit tricky, as running chroot requires root. I could write a program that does this; it would look something like:
uid = LookupUser(args[username])  // no /etc/passwd in jail
chroot(args[newroot])
chdir("/")
setuid(uid)
execve(args[exe:])
Is that my best bet or is there a standard tool that does that for me?
I rolled my own here:
If you invoke chroot from root, the chroot option --userspec=USER:GROUP will run the command under the non-root UID/GID.
By the way, the --userspec option was first introduced in coreutils-7.5, according to the git repository git://git.sv.gnu.org/coreutils.
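For example (a sketch; since the jail has no /etc/passwd, numeric IDs are the safe choice):
$ sudo chroot --userspec=1000:1000 /path/to/jail /bin/sh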
fakechroot, in combination with fakeroot, will allow you to do this. They make all programs act as if they're running in a chroot as root, while they're actually running as you.
See also fakechroot's man page.
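A minimal sketch of the combination (assuming a usable chroot tree at /path/to/jail):
$ fakechroot fakeroot chroot /path/to/jail /bin/sh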
You can make use of Linux capabilities to give your binary the ability to call chroot() without being root. As an example, you can do this to the chroot binary itself. As non-root, you'd normally get this:
$ chroot /tmp/
chroot: cannot change root directory to /tmp/: Operation not permitted
But after you run the setcap command:
sudo setcap cap_sys_chroot+ep /usr/sbin/chroot
It will let you do the chroot call.
I don't recommend doing this to the system's chroot; instead, do it to your own program that calls chroot. That way you have more control over what is happening, and you can even drop the cap_sys_chroot privilege after the call, so subsequent chroot calls in your program will fail.
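Applied to your own binary instead of the system chroot, that looks something like this (mychroot.c is a hypothetical program of your own, such as the custom chrooter shown just below):
$ gcc -o mychroot mychroot.c
$ sudo setcap cap_sys_chroot+ep ./mychroot
$ getcap ./mychroot          # verify the capability was attached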
A custom chrooter isn't at all hard to write:
#define _BSD_SOURCE     /* for chroot() in older glibc */
#include <stdio.h>
#include <unistd.h>

const char newroot[] = "/path/to/chroot";

int main(int c, char **v, char **e)
{
    int rc;
    const char *m;
    if (   (m = "chdir",  rc = chdir(newroot))   == 0  /* enter the new root first */
        && (m = "chroot", rc = chroot(newroot))  == 0  /* needs root or CAP_SYS_CHROOT */
        && (m = "setuid", rc = setuid(getuid())) == 0) /* drop back to the real user */
        m = "execve", execve(v[1], v + 2, e);          /* returns only on failure */
    perror(m);
    return 1;
}
Make that setuid root and owned by a custom group you add your favored user to (and with no 'other' access).
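A sketch of that installation (chrooter and the jailers group are example names):
$ sudo chown root:jailers chrooter
$ sudo chmod 4750 chrooter   # setuid root, group may execute, others get nothing
$ sudo usermod -aG jailers youruser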
You could use Linux Containers (LXC) to create a chroot environment that lives in a totally different namespace (IPC, filesystem, and even network).
There is even LXD, which can manage the creation of image-based containers and configure them to run as unprivileged users, so that if the untrusted code manages to somehow escape the container, it will only be able to execute code as the unprivileged user and not as the system's root.
Search 'Linux Containers' and 'LXD' on your favorite search engine ;)
These days chrooting without root permissions is possible with the unshare command, using mount and user namespaces.
Plain Unshare
Suppose you want to chroot into the ~/Projects/my-backup directory and run the ~/Projects/my-backup/bin/bash binary inside it. So you run:
$ unshare -mr chroot ~/Projects/my-backup/ /bin/bash
Here:
-m means you can use mount --bind inside the new chroot (note that mounts will not be visible to the outside world, only to your chroot)
-r makes it look as if you are root inside the chroot
chroot … is the usual chroot command.
Possible pitfalls:
Make sure you have environment variables set correctly. For example, upon chrooting into an Ubuntu system from Arch Linux I was getting errors like bash: ls: command not found. It turned out my $PATH did not contain /bin (on Arch Linux it's a symlink to /usr/bin), so I had to run:
PATH=$PATH:/bin/ unshare -mr chroot ~/Projects/my-backup/ /bin/bash
Note that /proc or /dev filesystems won't get populated, so running a binary that requires them will fail. There is a --mount-proc option, but it doesn't seem to be accessible to a non-root user.
chown won't work unless you do some complicated setup with newuidmap and newgidmap commands.
Buildah Unshare
This has an advantage over plain unshare in that it takes care of the setup needed for the chown command to work. You only need to make sure you have defined the ranges in /etc/subuid and /etc/subgid for your user, as in the example at the end.
You run it as:
$ buildah unshare chroot ~/Projects/my-backup/ /bin/bash
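The entries buildah expects in those two files look like this (a sketch; 'me' stands for your username, and the ranges shown are typical defaults):
$ grep me /etc/subuid /etc/subgid
/etc/subuid:me:100000:65536
/etc/subgid:me:100000:65536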