What's the standard paradigm for exec'ing after dropping root? - security

In code like this in a daemon:
// run as root, after initgroups(...), setgid(...)
setuid(user);
char* const args[] = {"./userbinary", NULL};
execv("./userbinary", args);
_exit(1);
there's an obvious problem: between the call to setuid and the call to exec[lvpe], the now-unprivileged user can attach to the process and read out all of its memory, including sensitive variables and state.
A workaround I use is like this (obviously, all error handling omitted):
// run as root in daemon
char* const args[] = {"/usr/bin/mysetuid", uidStr, "./userbinary", NULL};
execv("/usr/bin/mysetuid", args);
_exit(1);
// mysetuid.c:
int main(int argc, char* argv[]) {
setuid(atoi(argv[1]));
execv(argv[2], argv+2);
exit(1);
}
What is the "standard" way of doing this operation? Using a helper binary seems safest, but I can't find other applications that do this. For example, OpenSSH just relies on the fact that each user's connection gets its own process, so the setuid is always done from a process that has pretty much a blank slate.

Related

GLib: Calling an iterative loop function

I have a question about timeout callbacks and GMainContext; it is really confusing to me.
Suppose I have the code below (a bit incomplete, just for demonstration). I use plain pthreads to create a thread. Within that thread, I run GLib functionality and create a GMainContext (stored in l_app.context).
I then create a source that runs the function check_cmd iteratively at about a 1-second interval. This callback (or could we call it a thread?) checks for commands from other threads (the pthreads that update the command status are not shown here). From here onwards, there are two specific commands:
One to start a looping function
The other to end the looping function
I have thought of two ways to create that function and set it to run iteratively:
Plan A: create another timeout with g_timeout_add_seconds(), or
Plan B: use the same method that created check_cmd (a GSource attached with g_source_attach()).
Both look like essentially the same method to me, but when I tried them, Plan A (as I called it) does not work, while Plan B at least runs once. So I would like to know how to fix them.
Or maybe I should use g_source_add_child_source() instead?
In summary, my questions are:
When you create a new context and push it to become the default context, do all subsequent functions that require a main context refer to this context?
In a nutshell, how do you add new sources when a loop is already running, i.e. like in my case?
Lastly, is it okay to quit the main loop from within a callback you have created?
Here is my pseudocode
#include <glib.h>
#include <dirent.h>
#include <errno.h>
#include <pthread.h>
#define PLAN_A 0
typedef struct
{
GMainContext *context;
GMainLoop *loop;
}_App;
static _App l_app;
guint gID;
gboolean
time_cycle(gpointer udata)
{
g_print("I AM THREADING");
return true;
}
gboolean
check_cmd_session(gpointer udata)
{
while(alive) /// alive is a boolean value that is shared with other threads(not shown)
{
if(start)
{
/// PLAN A
//// which context does this add to ??
#if PLAN_A
gID = g_timeout_add_seconds(10, (GSourceFunc)time_cycle, NULL);
#else
/// or should i use PLAN B
GSource* source = g_timeout_source_new(1000);
g_source_set_callback(source,
(GSourceFunc)time_cycle,
NULL,
NULL);
gID = g_source_attach(source, l_app.context);
#endif
}
else
{
#if PLAN_A
g_source_remove(gID);
#endif
}
}
g_main_loop_quit (l_app.loop);
return FALSE;
}
void*
liveService(void *arg)
{
l_app.context = g_main_context_new ();
g_main_context_push_thread_default(l_app.context);
GSource* source = g_timeout_source_new(1000);
g_source_set_callback(source,
(GSourceFunc)check_cmd_session,
NULL,
NULL);
/// make it run
g_source_attach(source, l_app.context);
g_main_loop_run (l_app.loop);
pthread_exit(NULL);
}
int main()
{
pthread_t tid[2];
int thread_counter = 0;
int err = pthread_create(&(tid[thread_counter]), NULL, liveService, NULL);
if (err != 0)
{
printf("\n can't create live thread :[%s]", strerror(err));
}
else
{
printf("--> Thread for Live created successfully\n");
thread_counter++;
}
/**** other threads are build not shown here */
for(int i = 0; i < 2; i++)
{
printf("Joining the %d threads \n", i);
pthread_join(tid[i],NULL);
}
return 0;
}
In summary, my question is:
When you create a new context and push it to become the default context, do all subsequent functions that require a main context refer to this context?
Functions that are documented as using the thread-default main context will use the GMainContext which has been most recently pushed with g_main_context_push_thread_default().
Functions that are documented as using the global default main context will not. They will use the GMainContext which is created at init time and which is associated with the main thread.
g_timeout_add_seconds() is documented as using the global default main context. So you need to go with plan B if you want the timeout source to be attached to a specific GMainContext.
In a nutshell, how do you add new sources when a loop is already running, i.e. like in my case?
g_source_attach() works even while a main context is being iterated; the newly attached source is picked up by the running loop.
Lastly, is it okay to quit the main loop from within a callback you have created?
Yes, g_main_loop_quit() can be called at any point.
From your code, it looks like you’re not creating a new GMainLoop for each GMainContext and are instead assuming that one GMainLoop will somehow work with all GMainContexts in the process. That’s not correct. If you’re going to use GMainLoop, you need to create a new one for each GMainContext you create.
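As an illustration, here is a minimal sketch (not the poster's program; tick() and worker() are made-up names) of a worker thread that owns its own GMainContext, a GMainLoop created for that context, and a timeout source attached explicitly to it:

#include <glib.h>

static gboolean tick(gpointer data)
{
    g_print("tick\n");
    return G_SOURCE_CONTINUE;                      /* keep firing every second */
}

static gpointer worker(gpointer data)
{
    GMainContext *ctx = g_main_context_new();
    g_main_context_push_thread_default(ctx);

    GMainLoop *loop = g_main_loop_new(ctx, FALSE); /* one loop per context */

    GSource *src = g_timeout_source_new_seconds(1);
    g_source_set_callback(src, tick, NULL, NULL);
    g_source_attach(src, ctx);                     /* explicit context, not the global one */
    g_source_unref(src);                           /* the context now holds a reference */

    g_main_loop_run(loop);                         /* a callback may call g_main_loop_quit(loop) */

    g_main_loop_unref(loop);
    g_main_context_pop_thread_default(ctx);
    g_main_context_unref(ctx);
    return NULL;
}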
All other things aside, you might find it easier to use GLib’s threading functions rather than using pthread directly. GLib’s threading functions are portable to other platforms and a little bit easier to use. Given that you’re already linking to libglib, using them would cost nothing extra.
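For example, starting and joining that worker with GLib's threading API (a hypothetical fragment, reusing worker() from the sketch above) would look like:

GThread *t = g_thread_new("live-service", worker, NULL);  /* spawn the worker thread */
g_thread_join(t);                                         /* wait for it to finish */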

How is bash able to kill child processes with CTRL+C

I wrote a simple program as follows -
#include <stdio.h>
#include <unistd.h>

int main(int argc, char* argv[]) {
    setuid(0);
    setgid(0);
    printf("Current uid and euid are %d, %d\n", getuid(), geteuid());
    while(1);
}
I compiled this as root and set the setuid bit using sudo chmod +s test.
When this program is run as a non-privileged user from bash, the program prints -
Current uid and euid are 0, 0
and then gets stuck in an infinite loop.
However I can still kill this process by pressing Ctrl+C. If I understand correctly, bash (running as a non-privileged user) should not be able to send SIGINT to a root process.
I also tried the same with kill <pid of test> and that fails as expected.
How is bash able to kill the process? Is there a special relationship between the parent process and the child process?
I also tried this other wrapper program -
#include <stdio.h>
#include <unistd.h>
#include <signal.h>
#include <sys/types.h>

int main(int argc, char* argv[]) {
    pid_t p = fork();
    if (p == 0) {
        char* args[] = {"./test", NULL};
        execv("./test", args);
    } else {
        sleep(4);
        int ret = kill(p, 9);
        printf("Kill returned = %d\n", ret);
        return 0;
    }
}
And I ran it as an unprivileged user (where test has the setuid bit set by root). In this case the parent is not able to kill the child: the kill call returns -1 and the test process gets orphaned.
What is happening here? What does bash do specially that lets it kill the child processes it spawns?
Bash doesn't need any permissions because bash isn't doing anything. When you hit ^C, SIGINT is sent to all processes in the foreground process group by the tty driver. The signal comes from the system, not from another process, so the permission checks relevant to one process sending a signal to another don't apply.
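If you want to see this for yourself, a small check (a hypothetical addition, not part of the original test program) shows that the setuid binary still runs in the terminal's foreground process group, which is exactly what the tty driver signals on Ctrl+C:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* The tty driver delivers the Ctrl+C SIGINT to the foreground process
     * group of the controlling terminal; show that we belong to it. */
    pid_t fg = tcgetpgrp(STDIN_FILENO);   /* foreground pgrp of the terminal */
    pid_t me = getpgrp();                 /* our own process group */
    printf("terminal foreground pgrp = %d, my pgrp = %d\n", (int)fg, (int)me);
    return 0;
}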

Dynamic pool of processes

I'm writing a client-server (TCP) program in C on a Unix system. The client sends some information and the server answers. There's only one connection per child process. New connections use pre-running processes from a pool, and the pool size is dynamic, so if the number of free processes (processes not servicing a client) drops too low, it should create new processes, and likewise if it gets too high extra processes should be terminated.
This is my server code. Every connection currently makes a new child process using fork(), so each connection runs in a new process. How can I make a dynamic pool like the one I described above?
int main(int argc, char * argv[])
{
int cfd;
int listener = socket(AF_INET, SOCK_STREAM, 0); //create listener socket
if(listener < 0){
perror("socket error");
return 1;
}
struct sockaddr_in addr;
addr.sin_family = AF_INET;
addr.sin_port = htons(PORT);
addr.sin_addr.s_addr = htonl(INADDR_ANY);
int binding = bind(listener, (struct sockaddr *)&addr, sizeof(addr));
if(binding < 0){
perror("binding error");
return 1;
}
listen(listener, 1); //listen for new clients
signal(SIGCHLD,handler);
int pid;
for(;;) // infinity loop on server
{
cfd = accept(listener, NULL, NULL); //client socket descriptor
pid = fork(); //make child proc
if(pid == 0) //in child proc...
{
close(listener); //close listener socket descriptor
... //some server actions that I do.(receive or send)
close(cfd); // close client fd
return 0;
}
close(cfd);
}
}
If you have several processes blocked in accept on the same listening socket, then a new connection that comes in will be delivered to one of them. (Depending on the kernel, several may wake up, but only one will actually get the connection.) So you need to fork several children after listen, but before accept, and after handling a request each child goes back to accept instead of exiting; see the sketch below. That covers the pre-started pool and one connection per process.
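A minimal sketch of that pre-fork structure (NUM_WORKERS and the service code are placeholders, error handling omitted): all children block in accept() on the same listening socket, and each loops back to accept() after serving a client instead of exiting.

#define NUM_WORKERS 4

for (int i = 0; i < NUM_WORKERS; i++) {
    if (fork() == 0) {                        /* child: becomes a worker */
        for (;;) {
            int cfd = accept(listener, NULL, NULL);
            if (cfd < 0)
                continue;
            /* ... serve the client (recv/send) ... */
            close(cfd);                       /* then go back to accept() */
        }
    }
}
/* parent: supervise the workers, wait() for them, resize the pool, etc. */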
Dynamically resizing the pool is harder. You need some form of IPC. Typically, you'd have a parent process that just manages having the right number of children. Your child processes use IPC to tell the parent how busy they are. The parent can then either fork more children (which go into the accept loop above) or send signals to children to tell them to finish up and exit. It should also handle waiting on children, unexpected deaths, and so on.
The IPC you want to use is probably shared memory. Your two options are SysV (shmget) and POSIX (shm_open) shared memory. You probably want the latter if available. You'll have to deal with synchronizing access (both POSIX and SysV provide semaphores to help with this; again, prefer POSIX) or use atomic access only, as in the sketch below.
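For instance, a rough sketch of a shared idle-worker counter using POSIX shared memory and C11 atomics (the name "/pool_state" and the helper names are made up; older glibc needs -lrt for shm_open):

#include <fcntl.h>
#include <stdatomic.h>
#include <sys/mman.h>
#include <unistd.h>

/* Shared idle-worker counter, created by the parent before forking.
 * Children update it atomically around each connection they serve. */
static _Atomic int *free_workers;

static void setup_shared_counter(void)
{
    int fd = shm_open("/pool_state", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof *free_workers);
    free_workers = mmap(NULL, sizeof *free_workers,
                        PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                                /* the mapping stays valid */
}

static void serve_one_client(int cfd)
{
    atomic_fetch_sub(free_workers, 1);        /* this worker is now busy */
    /* ... receive the request and send the reply ... */
    atomic_fetch_add(free_workers, 1);        /* free again */
    close(cfd);
}

The parent can read *free_workers periodically to decide whether to fork more workers or signal some to exit.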
(You probably don't actually want a process to exit the instant there are more than X free children, that'll lead to repeatedly reaping and spawning them, which is expensive. Instead you probably want some measure of how utilized they were over the last second... So your data is more complicated than a bitmap of in use/free.)
There are a lot of daemons that work like this, so you can fairly easily find code examples. Of course, if you go look at Apache, you'll probably find it more complicated, to get good performance and be portable everywhere.

sharing a UDP socket / port with node.js child processes

Sharing a TCP socket/port is easy using node.js cluster, but it does not seem possible to do this with UDP dgram.
Is there a way to do this, either using cluster, by sharing file descriptors between processes, or by some other means?
I had a lot of problems with this because node.js doesn't really "fork", it "spawns" or "fork/execs". When I used cluster for a UDP server, only ONE of the child processes would receive packets, the last one bound. If you simply fork(), the OS round-robins the incoming packets to the children. If you spawn(), you run into inheritance-rights problems with file/socket handles, options have to be set, etc., and the underlying node.js UDP server may not have applied those options.
I had to write my own extension that simply calls the underlying OS fork() and makes it work like a normal forking network server.
Windows doesn't have fork(), so this approach won't work there; that's probably why node.js doesn't have a normal, plain, garden-variety fork(), as it would make node non-portable to Windows.
1) Create a directory; I called mine "util".
2) Put these two files in that directory.
------------------- cut here, name the following "util.cc" -------
#include <v8.h> //needed for extension infrastructure
#include <node.h> //needed for extension infrastructure
#include <iostream> // not part of extension infrastructure, just for the code I'm adding and only while developing to output debugging messages
using namespace node;
using namespace v8;
// The following two functions are examples of the minimum required for a node.js extension that does anything
static Handle<Value> spoon(const Arguments& args)
{
pid_t rval = fork();
if (rval < 0)
{
return ThrowException(Exception::Error(String::New("Unable to fork daemon, pid < 0.")));
}
Handle<Value> n = v8::Number::New(rval);
return n;
}
static Handle<Value> pid(const Arguments& args)
{
pid_t rval = getpid();
Handle<Value> n = v8::Number::New(rval);
return n;
}
extern "C" void init(Handle<Object> target)
{
NODE_SET_METHOD(target, "fork", spoon);
NODE_SET_METHOD(target, "pid", pid);
}
-------- cut here, name the following "wscript" -------
def set_options(opt):
opt.tool_options("compiler_cxx")
def configure(conf):
conf.check_tool("compiler_cxx")
conf.check_tool("node_addon")
def build(bld):
obj = bld.new_task_gen("cxx", "shlib", "node_addon")
obj.cxxflags = ["-g", "-D_FILE_OFFSET_BITS=64", "-D_LARGEFILE_SOURCE", "-Wall"]
obj.target = "util"
obj.source = "util.cc"
--------- end of cutting, spare us the cutter ------
3) run "node-waf configure"
If that goes well,
4) run "node-waf"
5) a new directory called "build" will be created, and your extension "build/default/util.node" will have been created. Copy that wherever and use it from within your node program like:
var util = require("util.node");
var pid = util.fork();
Also included is a util.pid() function, because process.pid doesn't work right after forking; it keeps delivering the pid of the parent process.
I'm a beginner node extension writer so if this is a naive approach, oh well, but it has served me well so far. Any improvements, as in "simplifications" would be greatly appreciated.
This issue was resolved in node.js v0.10

Kernel module get data from user space

What is the proper way of sending some data to a loaded and running kernel module, without using netlink and without using features that may not be in place (e.g. debugfs)?
I'd like to see a clean and safe way of doing this which should work on most kernels (or preferably all modern ones), or at best an approximation of that.
The user who wants to send data to the module is the root user, the amount of data is probably under 64 kiB and consists of a series of strings.
I've already looked into trying to read files from the module, which is not only highly frowned upon for various reasons but also hard to do.
I've looked at netlink, but socket() tells me it is not supported on my kernel.
I've looked at debugfs, which is not supported on my kernel either.
Obviously I could use a different kernel but as I mentioned I'd like a proper way of doing this. If someone could show me a simple example of a module that will just do a printk() of a string sent from user space that would be great.
... a simple example of a module that will just do a printk() of a string sent from user space, printkm.c:
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/uaccess.h>

MODULE_DESCRIPTION("printk example module");
MODULE_AUTHOR("Dietmar.Schindler#manroland-web.com");
MODULE_LICENSE("GPL");

static ssize_t write(struct file *file, const char __user *buf, size_t count, loff_t *pos)
{
    char kbuf[256];

    if (count >= sizeof(kbuf))
        count = sizeof(kbuf) - 1;
    if (copy_from_user(kbuf, buf, count))   /* copy the user data into kernel memory */
        return -EFAULT;
    kbuf[count] = '\0';
    printk("%s", kbuf);
    return count;
}

static struct file_operations file_ops = {
    .owner = THIS_MODULE,
    .write = write,
};

int init_module(void)
{
    printk("init printk example module\n");
    struct proc_dir_entry *entry = proc_create("printk", 0, NULL, &file_ops);
    if (!entry)
        return -ENOENT;
    return 0;
}

void cleanup_module(void)
{
    remove_proc_entry("printk", NULL);
    printk("exit printk example module\n");
}
Example use:
root@kw:~# insmod printkm.ko
root@kw:~# echo a string >/proc/printk
root@kw:~# dmesg | tail -1
[193634.164459] a string
I think you can use a char device. Take a look at Linux Device Drivers, 3rd Edition, Chapter 3. With the functions copy_to_user() and copy_from_user() you can copy data safely to and from user space.
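Along the same lines, here is a rough sketch (device name and function names made up, not taken from the book) of a minimal misc char device whose write handler copies the user buffer into kernel memory with copy_from_user() before printk()ing it:

#include <linux/module.h>
#include <linux/init.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>
#include <linux/uaccess.h>

static ssize_t misc_write(struct file *f, const char __user *buf,
                          size_t count, loff_t *pos)
{
    char kbuf[128];
    size_t n = count < sizeof(kbuf) - 1 ? count : sizeof(kbuf) - 1;

    if (copy_from_user(kbuf, buf, n))        /* safe copy from user space */
        return -EFAULT;
    kbuf[n] = '\0';
    printk("%s", kbuf);
    return count;                            /* claim the whole write */
}

static const struct file_operations misc_fops = {
    .owner = THIS_MODULE,
    .write = misc_write,
};

static struct miscdevice printk_misc = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "printk_misc",                  /* appears as /dev/printk_misc */
    .fops  = &misc_fops,
};

static int __init printk_misc_init(void)
{
    return misc_register(&printk_misc);
}

static void __exit printk_misc_exit(void)
{
    misc_deregister(&printk_misc);
}

module_init(printk_misc_init);
module_exit(printk_misc_exit);
MODULE_LICENSE("GPL");

Once loaded, echo a string >/dev/printk_misc should show up in dmesg, much like the /proc example above.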
