When learning about system security on Ubuntu 20.04 in VMware, I tried the set-uid operation and ran into the following problem:
With the executable file catcall, compiled from the source file catcall.c:
#include <string.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    char *cat = "/bin/cat";
    if (argc < 2) {
        printf("please type a file name.\n");
        return 1;
    }
    /* +2 for the separating space and the terminating NUL */
    char *command = malloc(strlen(cat) + strlen(argv[1]) + 2);
    sprintf(command, "%s %s", cat, argv[1]);
    system(command);
    return 0;
}
I set the set-uid bit with the following commands:
$ sudo chown root catcall
$ sudo chmod 4755 catcall
When executing the executable file catcall, I thought I would be able to see the contents of the file /etc/shadow, since catcall has been made a set-uid program.
But the operation is denied when trying to access /etc/shadow:
/bin/cat: /etc/shadow: Permission denied
Why did the set-uid operation fail?
Each process actually has two user ids, a real uid and an effective uid, see Difference between Real User ID, Effective User ID and Saved User ID. Normally they are equal. Setting the setuid bit causes the effective uid to be set to the owner of the binary upon exec, but the real uid remains unchanged, and will normally still be the uid of the user who actually ran the program.
So when your program starts, the effective uid will be root and the real uid will be dim_shimmer. Now you have chosen to try to start your program via system(), which executes the specified command using the shell. However, many shells have a security feature where they compare the real and effective uid, and if they are not equal, they set the effective uid equal to the real uid. (For instance, this avoids a disaster in case the sysadmin accidentally does chmod +s /bin/sh). I suspect that's what's happening here. So by the time your cat command runs, your real and effective uid are both dim_shimmer again, and that user does not have permission to read /etc/shadow.
If you run your cat command using execl instead (or one of its relatives), so as to bypass the shell, then I suspect you'll find that it works.
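For instance, a minimal sketch of that approach (my illustration, not part of the original program), which passes the filename straight to /bin/cat with no shell in between:
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc < 2) {
        printf("please type a file name.\n");
        return 1;
    }
    /* execl replaces this process image with /bin/cat directly,
       so no shell gets a chance to drop the effective uid */
    execl("/bin/cat", "cat", argv[1], (char *)NULL);
    perror("execl");  /* reached only if execl itself fails */
    return 1;
}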
I'm trying to run the following program inside a Docker container, which is started with --privileged:
root@1df00aaf673d:~# cat > sysconf_test.c
#include <unistd.h>
#include <errno.h>
#include <stdio.h>
int main() {
    long n = sysconf(_SC_CHILD_MAX);
    printf("%ld %d\n", n, errno);
    return 0;
}
root@1df00aaf673d:~# gcc sysconf_test.c ; ./a.out
-1 0
Going by the sysconf man page, "If name corresponds to a maximum or minimum limit, and that limit is indeterminate, -1 is returned and errno is not changed." Is there a way to make it determinate, perhaps by passing an option to the docker run command?
I'll answer my own question: it appears that sysconf returns -1L for "unlimited", although the man page doesn't spell it out:
[root@llg00amn ~]# ulimit -u
120996
[root@llg00amn ~]# docker run -ti debian /bin/bash
root@5668a7acb957:/# ulimit -u
unlimited
After setting the ulimit to an actual number, the program runs fine and returns the right result.
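In code, one way to tell "indeterminate" apart from a genuine error (a sketch based on the man-page contract quoted above) is to clear errno before the call, since sysconf() sets errno on error but leaves it unchanged for an indeterminate limit:
#include <unistd.h>
#include <errno.h>
#include <stdio.h>

int main() {
    errno = 0;  /* sysconf leaves errno untouched for an indeterminate limit */
    long n = sysconf(_SC_CHILD_MAX);
    if (n == -1 && errno != 0)
        perror("sysconf");  /* a genuine error */
    else if (n == -1)
        printf("_SC_CHILD_MAX is indeterminate (here: unlimited)\n");
    else
        printf("_SC_CHILD_MAX = %ld\n", n);
    return 0;
}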
command:
bigxu@bigxu-ThinkPad-T410 ~/work/lean $ sudo ls
content_shell.pak leanote libgcrypt.so.11 libnotify.so.4 __MACOSX resources
icudtl.dat leanote.png libnode.so locales natives_blob.bin snapshot_blob.bin
Most of the time it works fine, but sometimes it is very slow.
So I ran strace on it.
command:
bigxu@bigxu-ThinkPad-T410 ~/work/lean $ strace sudo ls
execve("/usr/bin/sudo", ["sudo", "ls"], [/* 66 vars */]) = 0
brk(0) = 0x7f2b3c423000
fcntl(0, F_GETFD) = 0
fcntl(1, F_GETFD) = 0
fcntl(2, F_GETFD) = 0
......
......
......
write(2, "sudo: effective uid is not 0, is"..., 140sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
) = 140
exit_group(1) = ?
+++ exited with 1 +++
other information:
bigxu-ThinkPad-T410 lean # ls /etc/sudoers -alht
-r--r----- 1 root root 745 Feb 11 2014 /etc/sudoers
bigxu-ThinkPad-T410 lean # ls /usr/bin/sudo -alht
-rwsr-xr-x 1 root root 152K Dec 14 21:13 /usr/bin/sudo
bigxu-ThinkPad-T410 lean # df `which sudo`
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 67153528 7502092 56217148 12%
For security reasons, the setuid bit and ptrace (used to run binaries under a debugger) cannot both be honored at the same time. Failure to enforce this restriction in the past led to CVE-2001-1384.
Consequently, any operating system designed with an eye to security will either stop honoring ptrace on exec of a setuid binary, or fail to honor the setuid bit when ptrace is in use.
On Linux, consider using Sysdig instead; since it can only observe behavior, not modify it, it does not run the same risks.
How to trace sudo
$ sudo strace -u <username> sudo -k <command>
sudo runs strace as root.
strace runs sudo as <username> passed via the -u option.
sudo drops cached credentials from the previous sudo with -k option (for asking the password again) and runs <command>.
The second sudo is the tracee (the process being traced).
For automatically putting the current user in the place of <username>, use $(id -u -n).
Why sudo does not work with strace
In addition to this answer by Charles, here is what execve() manual page says:
If the set-user-ID bit is set on the program file referred to by pathname, then the effective user ID of the calling process is changed to that of the owner of the program file. Similarly, when the set-group-ID bit of the program file is set the effective group ID of the calling process is set to the group of the program file.
The aforementioned transformations of the effective IDs are not performed (i.e., the set-user-ID and set-group-ID bits are ignored) if any of the following is true:
the no_new_privs attribute is set for the calling thread (see prctl(2));
the underlying filesystem is mounted nosuid (the MS_NOSUID flag for mount(2)); or
the calling process is being ptraced.
The capabilities of the program file (see capabilities(7)) are also ignored if any of the above are true.
The permissions for tracing a process, inspecting or modifying its memory, are described in subsection Ptrace access mode checking in section NOTES of ptrace(2) manual page. I've commented about this in this answer.
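As a quick illustration (my own sketch, not from the original posts): a tiny root-owned setuid test program reports euid 0 when run directly, but both ids stay equal to the invoking user when run under strace as a non-root user, because the kernel ignores the setuid bit for a ptraced exec:
#include <stdio.h>
#include <unistd.h>

int main() {
    /* For a root-owned setuid binary run normally: euid is 0, uid is the
       caller's id. Run under strace (unprivileged), the setuid bit is
       ignored and both ids remain the caller's. */
    printf("uid=%d euid=%d\n", (int)getuid(), (int)geteuid());
    return 0;
}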
On a Linux system, do the permissions of the directory in which a setuid program resides affect how the kernel launches the process? I ask because when I compiled an identical setuid program in two different directories, it only actually assumed the owner's privileges in one of them. I compiled it in /tmp and in /home/flag03, where flag03 is the user account I am attempting to gain access to. When executed from /tmp it did not escalate privileges as expected, but it worked under /home.
Some background on the problem:
I'm working on level 03 of exploit-exercises.com/nebula. The exercise requires that you gain access to the flag03 user account. The exercise is set up so that the flag03 user periodically runs a cronjob which executes any script placed in a specific directory. My plan was to write a simple bash script that compiles a setuid program which itself launches a bash shell, and then sets the setuid bit with chmod +s. The idea is that the setuid program is compiled by user flag03 via the cronjob; once this newly compiled program is executed, it will launch a shell as user flag03, and that is the goal.
Here is the simple setuid program (l3.c, based on levels 1 + 2):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>

int main(int argc, char **argv, char **envp)
{
    uid_t uid;
    gid_t gid;

    /* Copy the effective ids (granted by the setuid/setgid bits) into
       the real, effective and saved ids, so the shell below keeps them. */
    uid = geteuid();
    gid = getegid();
    setresuid(uid, uid, uid);
    setresgid(gid, gid, gid);

    printf("uid: %d\n", getuid());
    printf("gid: %d\n", getgid());
    system("/bin/bash");
    return 0;
}
In order for this to work, a bash script compiles the program as user flag03 and then chmods the setuid bit.
#!/bin/bash
#gcc -o /tmp/l3 /tmp/l3.c
#chmod +s,a+rwx /tmp/l3
gcc -o /home/flag03/l3 /tmp/l3.c
chmod +s,a+rwx /home/flag03/l3
The executable generated in /tmp does not escalate privileges as expected, but the one generated in /home/flag03 works as expected.
NOTE I just created a new bash script to move the version of the setuid program that was compiled in /tmp to /home/flag03, and then reset the setuid bit. When executed from there, that version worked as well. So it appears to me that the permissions of the directory in which the setuid program resides have some kind of impact on how the process is launched. Maybe this is related to /tmp being a somewhat "special" directory?
Thanks for any interest in this long-winded question!
If the filesystem is mounted with the nosuid option, the suid bit will be ignored when executing files located there. As I understand it, /tmp is usually a separate filesystem (often tmpfs) mounted with the nosuid option. The motivation for this configuration is preventing a compromised account that has no writable storage except /tmp (e.g. nobody) from being able to produce suid binaries, which may be used in certain elaborate multi-step attacks to elevate privilege.
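If you want to check this from code rather than by inspecting /proc/mounts, here is a minimal sketch using statvfs(3), whose f_flag field exposes the ST_NOSUID mount flag (the default path is just an example):
#include <stdio.h>
#include <sys/statvfs.h>

int main(int argc, char *argv[]) {
    const char *path = (argc > 1) ? argv[1] : "/tmp";  /* example default */
    struct statvfs vfs;

    if (statvfs(path, &vfs) != 0) {
        perror("statvfs");
        return 1;
    }
    /* ST_NOSUID set means setuid/setgid bits are ignored on this mount */
    printf("%s: %s\n", path,
           (vfs.f_flag & ST_NOSUID) ? "mounted nosuid" : "suid honored");
    return 0;
}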
I have a process in Linux that's getting a segmentation fault. How can I tell it to generate a core dump when it fails?
This depends on what shell you are using. If you are using bash, then the ulimit command controls several settings relating to program execution, such as whether you should dump core. If you type
ulimit -c unlimited
then that will tell bash that its programs can dump cores of any size. You can specify a numeric limit instead of unlimited if you want, but in practice this shouldn't be necessary, since the size of core files will probably never be an issue for you.
In tcsh, you'd type
limit coredumpsize unlimited
As explained above, the real question being asked here is how to enable core dumps on a system where they are not enabled. That question is answered here.
If you've come here hoping to learn how to generate a core dump for a hung process, the answer is
gcore <pid>
if gcore is not available on your system then
kill -ABRT <pid>
Don't use kill -SEGV, as that will often invoke a signal handler, making it harder to diagnose the stuck process.
To check where the core dumps are generated, run:
sysctl kernel.core_pattern
or:
cat /proc/sys/kernel/core_pattern
where %e is the process name and %t the system time. You can change the pattern in /etc/sysctl.conf and reload it with sysctl -p.
If the core files are not generated (test it by: sleep 10 & and killall -SIGSEGV sleep), check the limits by: ulimit -a.
If your core file size is limited, run:
ulimit -c unlimited
to make it unlimited.
Then test again, if the core dumping is successful, you will see “(core dumped)” after the segmentation fault indication as below:
Segmentation fault: 11 (core dumped)
See also: core dumped - but core file is not in current directory?
Ubuntu
In Ubuntu the core dumps are handled by Apport and can be located in /var/crash/. However, it is disabled by default in stable releases.
For more details, please check: Where do I find the core dump in Ubuntu?.
macOS
For macOS, see: How to generate core dumps in Mac OS X?
In the end, what I did was attach gdb to the process before it crashed, and when it got the segfault I executed the generate-core-file command. That forced the generation of a core dump.
Maybe you could do it this way: this program demonstrates how to trap a segmentation fault and shell out to a debugger (this is the original code used under AIX), printing the stack trace up to the point of the segmentation fault. You will need to change the sprintf call to use gdb in the case of Linux.
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <stdarg.h>
#include <unistd.h>

static void signal_handler(int);
static void dumpstack(void);
static void cleanup(void);
void init_signals(void);
void panic(const char *, ...);

struct sigaction sigact;
char *progname;

int main(int argc, char **argv) {
    char *s = NULL;  /* NULL so the write below reliably faults */
    progname = *(argv);
    atexit(cleanup);
    init_signals();
    printf("About to seg fault by assigning zero to *s\n");
    *s = 0;  /* deliberate segmentation fault; nothing below is reached */
    return 0;
}

void init_signals(void) {
    sigact.sa_handler = signal_handler;
    sigemptyset(&sigact.sa_mask);
    sigact.sa_flags = 0;
    sigaction(SIGINT, &sigact, (struct sigaction *)NULL);
    sigaddset(&sigact.sa_mask, SIGSEGV);
    sigaction(SIGSEGV, &sigact, (struct sigaction *)NULL);
    sigaddset(&sigact.sa_mask, SIGBUS);
    sigaction(SIGBUS, &sigact, (struct sigaction *)NULL);
    sigaddset(&sigact.sa_mask, SIGQUIT);
    sigaction(SIGQUIT, &sigact, (struct sigaction *)NULL);
    sigaddset(&sigact.sa_mask, SIGHUP);
    sigaction(SIGHUP, &sigact, (struct sigaction *)NULL);
    /* Note: SIGKILL can be neither caught nor ignored, so this last
       sigaction call has no effect and fails with EINVAL. */
    sigaddset(&sigact.sa_mask, SIGKILL);
    sigaction(SIGKILL, &sigact, (struct sigaction *)NULL);
}

static void signal_handler(int sig) {
    if (sig == SIGHUP) panic("FATAL: Program hung up\n");
    if (sig == SIGSEGV || sig == SIGBUS) {
        dumpstack();
        panic("FATAL: %s Fault. Logged StackTrace\n",
              (sig == SIGSEGV) ? "Segmentation" : "Bus");
    }
    if (sig == SIGQUIT) panic("QUIT signal ended program\n");
    if (sig == SIGKILL) panic("KILL signal ended program\n");
    if (sig == SIGINT) ; /* ignore */
}

void panic(const char *fmt, ...) {
    char buf[256];
    va_list argptr;
    va_start(argptr, fmt);
    vsnprintf(buf, sizeof(buf), fmt, argptr);  /* bounded, unlike vsprintf */
    va_end(argptr);
    fputs(buf, stderr);  /* never pass buf itself as a format string */
    exit(-1);
}

static void dumpstack(void) {
    /* Got this routine from http://www.whitefang.com/unix/faq_toc.html
    ** Section 6.5. Modified to redirect to file to prevent clutter.
    ** This needs to be changed to invoke gdb in the case of Linux. */
    char dbx[160];
    snprintf(dbx, sizeof(dbx), "echo 'where\ndetach' | dbx -a %d > %s.dump",
             (int)getpid(), progname);
    system(dbx);
}

void cleanup(void) {
    sigemptyset(&sigact.sa_mask);
    /* Do any cleaning up chores here */
}
You may additionally have to add a parameter to get gdb to dump the core, as shown in this blog post.
There are more things that may influence the generation of a core dump. I encountered these:
the directory for the dump must be writable. By default this is the current directory of the process, but that may be changed by setting /proc/sys/kernel/core_pattern.
in some conditions, the kernel value in /proc/sys/fs/suid_dumpable may prevent the core from being generated.
There are more situations which may prevent the generation that are described in the man page - try man core.
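Related to the suid_dumpable point above: a process that has changed credentials is marked non-dumpable by default. Here is a sketch (my illustration, not from the original answer) of how a program can turn that back on per-process with prctl(2):
#include <sys/prctl.h>
#include <stdio.h>

int main() {
    /* Setuid/setgid transitions clear the dumpable flag by default;
       re-enable it so a crash in this process can still produce a core. */
    if (prctl(PR_SET_DUMPABLE, 1, 0, 0, 0) != 0)
        perror("prctl(PR_SET_DUMPABLE)");
    printf("dumpable = %d\n", prctl(PR_GET_DUMPABLE, 0, 0, 0, 0));
    return 0;
}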
For Ubuntu 14.04
Check core dump enabled:
ulimit -a
One of the lines should be :
core file size (blocks, -c) unlimited
If not:
gedit ~/.bashrc, add ulimit -c unlimited to the end of the file, save, and re-open the terminal.
Build your application with debug information :
In Makefile -O0 -g
Run the application that creates the core dump (a core dump file named 'core' should be created next to the application_name file):
./application_name
Run under gdb:
gdb application_name core
In order to activate the core dump do the following:
1. In /etc/profile, comment out the line:
# ulimit -S -c 0 > /dev/null 2>&1
2. In /etc/security/limits.conf, comment out the line:
* soft core 0
3. Execute the command limit coredumpsize unlimited and check it with the command limit:
# limit coredumpsize unlimited
# limit
cputime unlimited
filesize unlimited
datasize unlimited
stacksize 10240 kbytes
coredumpsize unlimited
memoryuse unlimited
vmemoryuse unlimited
descriptors 1024
memorylocked 32 kbytes
maxproc 528383
#
4. To check whether the core file gets written, you can kill the relevant process with kill -s SEGV <PID> (this should not be needed; it is just a check in case no core file gets written):
# kill -s SEGV <PID>
Once the core file has been written, make sure to deactivate the coredump settings again in the relevant files (steps 1./2./3.)!
Ubuntu 19.04
None of the other answers helped me by themselves, but the following summary did the job:
Create ~/.config/apport/settings with the following content:
[main]
unpackaged=true
(This tells apport to also write core dumps for custom apps)
check: ulimit -c. If it outputs 0, fix it with
ulimit -c unlimited
Just in case, restart apport:
sudo systemctl restart apport
Crash files are now written to /var/crash/. But you cannot use them with gdb directly. To use them with gdb, run
apport-unpack <location_of_report> <target_directory>
Further information:
Some answers suggest changing core_pattern. Be aware that that file might get overwritten by the apport service on restart.
Simply stopping apport did not do the job.
The ulimit -c value might get changed automatically while you're trying other answers from the web. Be sure to check it regularly while setting up your core dump creation.
References:
https://stackoverflow.com/a/47481884/6702598
By default you will get a core file. Check to see that the current directory of the process is writable, or no core file will be created.
It is better to turn on core dumps programmatically, using the setrlimit system call.
example:
#include <sys/resource.h>
#include <stdbool.h>

bool enable_core_dump(void) {
    struct rlimit corelim;

    corelim.rlim_cur = RLIM_INFINITY;  /* soft limit */
    corelim.rlim_max = RLIM_INFINITY;  /* hard limit */
    return (0 == setrlimit(RLIMIT_CORE, &corelim));
}
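A self-contained usage sketch (the null-pointer write is only there to trigger a crash; note that an unprivileged process can only raise the soft limit up to its existing hard limit):
#include <sys/resource.h>
#include <stdbool.h>
#include <stdio.h>

static bool enable_core_dump(void) {
    struct rlimit corelim = { RLIM_INFINITY, RLIM_INFINITY };  /* soft, hard */
    return 0 == setrlimit(RLIMIT_CORE, &corelim);
}

int main() {
    if (!enable_core_dump())
        perror("setrlimit");  /* e.g. EPERM if the hard limit is lower */
    volatile int *p = NULL;
    *p = 42;  /* deliberate SIGSEGV: should now report "core dumped" */
    return 0;
}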
It's worth mentioning that if you have a systemd setup, then things are a little bit different. Such a setup typically pipes core files, by means of the core_pattern sysctl value, through systemd-coredump(8). The core file size rlimit would typically already be configured as "unlimited".
It is then possible to retrieve the core dumps using coredumpctl(1).
The storage of core dumps, etc. is configured by coredump.conf(5). There are examples of how to get the core files in the coredumpctl man page, but in short, it would look like this:
Find the core file:
[vps@phoenix]~$ coredumpctl list test_me | tail -1
Sun 2019-01-20 11:17:33 CET 16163 1224 1224 11 present /home/vps/test_me
Get the core file:
[vps@phoenix]~$ coredumpctl -o test_me.core dump 16163
This is typically sufficient:
ulimit -c unlimited
Note this will not persist between ssh sessions! To add persistence:
echo '* soft core unlimited' >> /etc/security/limits.conf
Now, if you're using Ubuntu, "apport" is probably running. Here's how to check:
sudo systemctl status apport.service
If it is, you'll probably find core dumps in one of these places:
/var/lib/apport/coredump
/var/crash
If you want to change the location of core dumps
Make sure that the directory you're sending core dumps to exists and that you have permission to create files in it!
Here's an example. Note this will not persist across reboots:
sysctl -w kernel.core_pattern=/coredumps/core-%e-%s-%u-%g-%p-%t
mkdir /coredumps
Make sure that the process that's crashing has access to write to this. The easiest way would be an example like this:
chmod 777 /coredumps
Test that core dumps work
> crash.c
gcc -Wl,--defsym=main=0 crash.c
./a.out
==output== Segmentation fault (core dumped)
If it doesn't say "core dumped" above, something isn't working.