I have written an LKM in Linux. The ioctl function, my_ioctl, is called by a user-level program, foo.c. I want to change the scheduling policy of foo. To do that, I am doing the following in my my_ioctl function, taken from this link:
struct sched_attr attr;
int ret;
unsigned int flags = 0;
attr.size = sizeof(attr);
attr.sched_flags = 0;
attr.sched_nice = 0;
attr.sched_priority = 0;
/* This creates a 10ms/30ms reservation */
attr.sched_policy = SCHED_DEADLINE;
attr.sched_runtime = 10 * 1000 * 1000;
attr.sched_period = attr.sched_deadline = 30 * 1000 * 1000;
ret = sched_setattr(current->pid, &attr, flags);
if (ret < 0) {
perror("sched_setattr");
exit(-1);
}
The sched_setattr function is as follows:
int sched_setattr(pid_t pid,
const struct sched_attr *attr,
unsigned int flags) {
return syscall(__NR_sched_setattr, pid, attr, flags);
}
I have changed the syscall to sys_mycall because it is an LKM. struct sched_attr is also defined as in the above-mentioned Linux kernel documentation link. However, I could not change the scheduling policy this way; it throws an error along the lines of "scheduling policy cannot be changed from kernel space".
I don't understand why this is the case. There is a utility, chrt, which does the same thing for a process from user space; why is this not possible from an LKM? Or am I missing something?
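For comparison, inside the kernel the scheduler has its own sched_setattr(), which takes a struct task_struct * rather than a PID and is not reached through syscall(). The following is only a sketch of that in-kernel counterpart, not a confirmed fix: whether sched_setattr() is exported to modules at all, and which header declares struct sched_attr, depends on the kernel version, so check your tree. Note also that perror() and exit() do not exist in kernel code.
#include <linux/sched.h>    /* on some newer kernels struct sched_attr lives in uapi/linux/sched/types.h */

/* Sketch only: set SCHED_DEADLINE on the calling task from inside the kernel. */
static int set_deadline_on_current(void)
{
    struct sched_attr attr = {
        .size           = sizeof(attr),
        .sched_policy   = SCHED_DEADLINE,
        .sched_runtime  = 10 * 1000 * 1000,   /* 10 ms */
        .sched_deadline = 30 * 1000 * 1000,   /* 30 ms */
        .sched_period   = 30 * 1000 * 1000,   /* 30 ms */
    };

    /* The in-kernel API takes the task itself, not a PID. */
    return sched_setattr(current, &attr);
}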
I have written a program to scan kernel memory for a pattern from user space. I run it as root. I expect that it will generate SIGSEGVs when it hits pages that aren't accessible; I would like to ignore those faults and just jump to the next page to continue the search. I have set up a signal handler that works fine for the first occurrence, and the program continues onward as expected. However, when a second SIGSEGV occurs, the handler is ignored (it was re-registered after the first occurrence) and the program terminates. The relevant portions of the code are:
jmp_buf restore_point;
void segv_handler(int sig, siginfo_t* info, void* ucontext)
{
longjmp(restore_point, SIGSEGV);
}
void setup_segv_handler()
{
struct sigaction sa;
sa.sa_flags = SA_SIGINFO|SA_RESTART|SA_RESETHAND;
sigemptyset (&sa.sa_mask);
sa.sa_sigaction = &segv_handler;
if (sigaction(SIGSEGV, &sa, NULL) == -1) {
fprintf(stderr, "failed to setup SIGSEGV handler\n");
}
}
unsigned long search_kernel_memory_area(unsigned long start_address, size_t area_len, const void* pattern, size_t pattern_len)
{
int fd;
char* kernel_mem;
fd = open("/dev/kmem", O_RDONLY);
if (fd < 0)
{
perror("open /dev/kmem failed");
return -1;
}
unsigned long page_size = sysconf(_SC_PAGESIZE);
unsigned long page_aligned_offset = (start_address/page_size)*page_size;
unsigned long area_pages = area_len/page_size + (area_len%page_size ? 1 : 0);
kernel_mem =
mmap(0, area_pages * page_size, /* length is in bytes, not pages */
PROT_READ, MAP_SHARED,
fd, page_aligned_offset);
if (kernel_mem == MAP_FAILED)
{
perror("mmap failed");
return -1;
}
if (!mlock((const void*)kernel_mem,area_len))
{
perror("mlock failed");
return -1;
}
unsigned long offset_into_page = start_address-page_aligned_offset;
unsigned long start_area_address = (unsigned long)kernel_mem + offset_into_page;
unsigned long end_area_address = start_area_address+area_len-pattern_len+1;
unsigned long addr;
setup_segv_handler();
for (addr = start_area_address; addr < end_area_address;addr++)
{
unsigned char* kmp = (unsigned char*)addr;
unsigned char* pmp = (unsigned char*)pattern;
size_t index = 0;
for (index = 0; index < pattern_len; index++)
{
if (setjmp(restore_point) == 0)
{
unsigned char p = *pmp;
unsigned char k = *kmp;
if (k != p)
{
break;
}
pmp++;
kmp++;
}
else
{
addr += page_size -1;
setup_segv_handler();
break;
}
}
if (index >= pattern_len)
{
return addr;
}
}
munmap(kernel_mem, area_pages * page_size);
close(fd);
return 0;
}
I realize I can use functions like memcmp to avoid programming the matching part directly (I did this initially), but I subsequently wanted to ensure the finest-grained control for recovering from the faults, so I could see exactly what was happening.
I scoured the Internet to find information about this behavior and came up empty. The Linux system I am running this under is ARM, kernel 3.12.30.
If what I am trying to do is not possible under Linux, is there some way I can get the current state of the kernel pages from user space (which would allow me to avoid trying to search pages that are inaccessible)? I searched for calls that might provide such information, but also came up empty.
Thanks for your help!
While longjmp is perfectly allowed in a signal handler (the function is async-signal-safe, see man signal-safety) and effectively exits from the signal handling, it does not restore the signal mask. When the handler is invoked, SIGSEGV is automatically added to the process's signal mask so that a new SIGSEGV cannot interrupt the handler, and longjmp leaves that mask in place, which is why the handler does not run for the second fault.
While one may restore the signal mask manually, it is better (and simpler) to use siglongjmp instead: besides the effect of longjmp, it also restores the signal mask. Of course, in that case sigsetjmp should be used instead of setjmp, with the buffer declared as sigjmp_buf:
sigjmp_buf restore_point; // note: sigjmp_buf, not jmp_buf
// ... in main() function
if (sigsetjmp(restore_point, 1)) // the non-zero second argument saves the signal mask
// ...
// ... in the signal handler
siglongjmp(restore_point, SIGSEGV); // also restores the signal mask as it was at the sigsetjmp() call
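To see the difference in isolation, here is a minimal self-contained sketch of the sigsetjmp/siglongjmp pattern (not the original scanner): it faults twice on purpose and recovers both times.
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

static sigjmp_buf restore_point;

static void segv_handler(int sig, siginfo_t *info, void *ucontext)
{
    siglongjmp(restore_point, 1);            /* also restores the signal mask */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sa.sa_sigaction = segv_handler;
    sigaction(SIGSEGV, &sa, NULL);

    for (int i = 0; i < 2; i++) {
        if (sigsetjmp(restore_point, 1) == 0) {  /* 1 => save the signal mask */
            *(volatile int *)0 = 0;              /* deliberately fault */
        } else {
            printf("recovered from SIGSEGV #%d\n", i + 1);
        }
    }
    return 0;
}
With plain setjmp/longjmp the program would die on the second fault instead, because SIGSEGV is still blocked after the first recovery.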
Recently I have been using the function sched_getcpu() from the header file sched.h on Linux.
However, I'm wondering where I could find the source code of this function?
Thanks.
Under Linux, the sched_getcpu() function is a glibc wrapper around the getcpu() system call, whose implementation is architecture-specific.
For the x86_64 architecture, it is defined in arch/x86/include/asm/vgtod.h as __getcpu() (4.x tree):
#ifdef CONFIG_X86_64
#define VGETCPU_CPU_MASK 0xfff
static inline unsigned int __getcpu(void)
{
unsigned int p;
/*
* Load per CPU data from GDT. LSL is faster than RDTSCP and
* works on all CPUs. This is volatile so that it orders
* correctly wrt barrier() and to keep gcc from cleverly
* hoisting it out of the calling function.
*/
asm volatile ("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
return p;
}
#endif /* CONFIG_X86_64 */
This function is called by __vdso_getcpu(), defined in arch/x86/entry/vdso/vgetcpu.c:
notrace long
__vdso_getcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *unused)
{
unsigned int p;
p = __getcpu();
if (cpu)
*cpu = p & VGETCPU_CPU_MASK;
if (node)
*node = p >> 12;
return 0;
}
(See vDSO for details on what the vdso prefix means.)
EDIT 1 (in reply to the question about the ARM code location):
It can be found in the arch/arm/include/asm/thread_info.h file:
static inline struct thread_info *current_thread_info(void)
{
return (struct thread_info *)
(current_stack_pointer & ~(THREAD_SIZE - 1));
}
This function is used by raw_smp_processor_id(), which is defined in arch/arm/include/asm/smp.h as:
#define raw_smp_processor_id() (current_thread_info()->cpu)
That, in turn, is called by the getcpu system call, defined in kernel/sys.c:
SYSCALL_DEFINE3(getcpu, unsigned __user *, cpup, unsigned __user *, nodep, struct getcpu_cache __user *, unused)
{
int err = 0;
int cpu = raw_smp_processor_id();
if (cpup)
err |= put_user(cpu, cpup);
if (nodep)
err |= put_user(cpu_to_node(cpu), nodep);
return err ? -EFAULT : 0;
}
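From user space, the wrapper/syscall relationship described above can be seen with a small test program. This is just a sketch: it calls the glibc wrapper and then the raw system call that the wrapper (or the vDSO fast path) ultimately corresponds to.
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    unsigned int cpu = 0, node = 0;

    /* glibc wrapper */
    printf("sched_getcpu() = %d\n", sched_getcpu());

    /* raw getcpu system call */
    if (syscall(SYS_getcpu, &cpu, &node, NULL) == 0)
        printf("getcpu: cpu=%u node=%u\n", cpu, node);

    return 0;
}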
I have a character device driver. It includes a 4 MB coherent DMA buffer, implemented as a ring buffer. I also implemented the splice_read call for the driver to improve performance, but this implementation does not work well. Below is a usage example:
(1) splice 16 pages of device buffer data to pipefd[1] (the DMA buffer is managed in page units);
(2) splice pipefd[0] to the socket;
(3) the receiving side (a TCP client) receives the data and checks its correctness.
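For reference, the user-space side of steps (1) and (2) might look roughly like the sketch below; dev_fd and sock_fd are assumed to exist, and len follows the driver's convention (described in the code further down) of counting pages rather than bytes.
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Sketch of steps (1) and (2): dev_fd is the character device,
 * sock_fd a connected TCP socket. */
static int forward_pages(int dev_fd, int sock_fd)
{
    int pipefd[2];
    loff_t pos = 1;     /* driver convention: page index, not a byte offset */
    size_t len = 16;    /* driver convention: number of pages, not bytes */
    ssize_t n;

    if (pipe(pipefd) == -1)
        return -1;

    /* (1) device -> pipe */
    n = splice(dev_fd, &pos, pipefd[1], NULL, len, SPLICE_F_MOVE);
    if (n < 0)
        return -1;

    /* (2) pipe -> socket */
    while (n > 0) {
        ssize_t m = splice(pipefd[0], NULL, sock_fd, NULL, n, SPLICE_F_MOVE);
        if (m <= 0)
            return -1;
        n -= m;
    }

    close(pipefd[0]);
    close(pipefd[1]);
    return 0;
}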
I found that the TCP client got errors. The splice_read implementation is shown below (I adapted it from the vmsplice implementation):
/* splice related functions */
static void rdma_ring_pipe_buf_release(struct pipe_inode_info *pipe,
struct pipe_buffer *buf)
{
put_page(buf->page);
buf->flags &= ~PIPE_BUF_FLAG_LRU;
}
void rdma_ring_spd_release_page(struct splice_pipe_desc *spd, unsigned int i)
{
put_page(spd->pages[i]);
}
static const struct pipe_buf_operations rdma_ring_page_pipe_buf_ops = {
.can_merge = 0,
.map = generic_pipe_buf_map,
.unmap = generic_pipe_buf_unmap,
.confirm = generic_pipe_buf_confirm,
.release = rdma_ring_pipe_buf_release,
.steal = generic_pipe_buf_steal,
.get = generic_pipe_buf_get,
};
/* In order to simplify the caller's work, the meanings of the ppos and len
 * parameters have been changed to fit the driver's internal ring buffer. The
 * ppos indicates which page is referred to (it should start from 1, as the
 * csr page is not allowed to be spliced), and len indicates how many pages
 * are needed. We also constrain the maximum page count per splice to 16
 * pages; otherwise EINVAL is returned. If a high-speed device needs a larger
 * page count, it can rework this routine. The ppos is also used to return the
 * total number of bytes that should be transferred; the caller can compare it
 * with the return value to determine whether all bytes have been transferred.
 */
static ssize_t do_rdma_ring_splice_read(struct file *in, loff_t *ppos,
struct pipe_inode_info *pipe, size_t len,
unsigned int flags)
{
struct rdma_ring *priv = to_rdma_ring(in->private_data);
struct rdma_ring_buf *data_buf;
struct rdma_ring_dstatus *dsta_buf;
struct page *pages[PIPE_DEF_BUFFERS];
struct partial_page partial[PIPE_DEF_BUFFERS];
ssize_t total_sz = 0, error;
int i;
unsigned offset;
struct splice_pipe_desc spd = {
.pages = pages,
.partial = partial,
.nr_pages_max = PIPE_DEF_BUFFERS,
.flags = flags,
.ops = &rdma_ring_page_pipe_buf_ops,
.spd_release = rdma_ring_spd_release_page,
};
/* init the spd, currently we omit the packet header, if a control
* is needed, it may be implemented by define a control variable in
* the device struct */
spd.nr_pages = len;
for (i = 0; i < len; i++) {
offset = (unsigned)(*ppos) + i;
data_buf = get_buf(priv, offset);
dsta_buf = get_dsta_buf(priv, offset);
pages[i] = virt_to_page(data_buf);
get_page(pages[i]);
partial[i].offset = 0;
partial[i].len = dsta_buf->bytes_xferred;
total_sz += partial[i].len;
}
error = _splice_to_pipe(pipe, &spd);
/* use the ppos to return the theory total bytes shoud transfer */
*ppos = total_sz;
return error;
}
/* splice read */
static ssize_t rdma_ring_splice_read(struct file *in, loff_t *ppos,
struct pipe_inode_info *pipe, size_t len, unsigned int flags)
{
ssize_t ret;
MY_PRINT("%s: *ppos = %lld, len = %ld\n", __func__, *ppos, (long)len);
if (unlikely(len > PIPE_DEF_BUFFERS))
return -EINVAL;
ret = do_rdma_ring_splice_read(in, ppos, pipe, len, flags);
return ret;
}
The _splice_to_pipe is just the same as splice_to_pipe in the kernel; since that function is not an exported symbol, I re-implemented it.
I think the main cause is that some kind of page locking is being omitted, but I don't know where and how.
My kernel version is 3.10.
Today I am getting started with developing Linux modules. It was rather hard to write, compile, and get Hello World working, but I've done it.
My second module, with open, write, and read functions, is ready, but I really don't know how to test it. The write method just calls printk(). My module is loaded; its name is iamnoob. How do I test this write(...) function and find something in /var/log/syslog?
cat > iamnoob just writes a file to the current directory; the same goes for cp and the others.
Sorry for the noob question; I've googled, but found no answer. Sorry for the poor English.
A basic kernel module would normally include registering a character device.
A simple implementation requires:
Register chrdev region with specific major & minor.
Allocate file operations structure and implement the basic read / write APIs.
Initialize and register character device with the file operations structure to the major / minor region.
See the following code snippet as a template for such a module (read/write are implemented against an internal buffer; the other file operations are stubs):
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/cdev.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#define MY_BUFFER_SIZE (1024 * 10)
#define MY_CHRDEV_MAJOR 217
#define MY_CHRDEV_MINOR 0
static struct cdev my_cdev;
static unsigned char *my_buf;
static dev_t my_dev = MKDEV(MY_CHRDEV_MAJOR, MY_CHRDEV_MINOR);
ssize_t my_read(struct file *file, char __user * buf, size_t count, loff_t * ppos)
{
int size;
size = MY_BUFFER_SIZE - 100 - (int)*ppos;
if (size > count)
size = count;
if (copy_to_user(buf, my_buf + *ppos, size)) /* copy only 'size' bytes */
return -EFAULT;
*ppos += size;
return size;
}
ssize_t my_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
{
int size;
size = MY_BUFFER_SIZE - 100 - (int)*ppos;
if (size > count)
size = count;
if (copy_from_user(my_buf + *ppos, buf, size)) /* copy only 'size' bytes */
return -EFAULT;
*ppos += size;
return size;
}
long my_unlocked_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
printk ("%s!\n", __FUNCTION__);
return 0;
}
int my_mmap(struct file *f, struct vm_area_struct *vma)
{
printk ("%s!\n", __FUNCTION__);
return 0;
}
int my_open(struct inode *i, struct file *f)
{
printk ("%s!\n", __FUNCTION__);
return 0;
}
int my_release(struct inode *i, struct file *f)
{
printk ("%s!\n", __FUNCTION__);
return 0;
}
struct file_operations my_fops =
{
.owner = THIS_MODULE,
.read = &my_read,
.write = &my_write,
.unlocked_ioctl = &my_unlocked_ioctl,
.mmap = &my_mmap,
.open = &my_open,
.release = &my_release,
};
static int __init my_module_init(void)
{
int line = 0;
unsigned char *pos;
printk ("%s!\n", __FUNCTION__);
my_buf = kzalloc(MY_BUFFER_SIZE, GFP_KERNEL);
if (my_buf == NULL) {
printk("%s - failed to kzallocate buf!\n", __FUNCTION__);
return -ENOMEM;
}
pos = my_buf;
while (pos - my_buf < MY_BUFFER_SIZE - 100) {
sprintf(pos, "Line #%d\n", line++);
pos += strlen(pos);
}
cdev_init(&my_cdev, &my_fops);
if (register_chrdev_region(my_dev, 1, "my_dev")) {
pr_err("Failed to allocate device number\n");
kfree(my_buf);
return -EBUSY;
}
cdev_add(&my_cdev, my_dev, 1);
printk ("%s - registered chrdev\n", __FUNCTION__);
return 0;
}
static void __exit my_module_exit(void)
{
printk ("my_module_exit.\n");
unregister_chrdev_region(my_dev, 1);
return;
}
module_init(my_module_init);
module_exit(my_module_exit);
MODULE_LICENSE("GPL");
This module uses an internal buffer for its file operations, so it can be tested on any machine regardless of its hardware. Make sure you avoid unnecessary printk's; printing inside loops may harm your kernel's stability.
Once this is done, in user-space shell you should create a /dev node to represent your character device:
sudo mknod /dev/[dev_name] c [major] [minor]
for example:
sudo mknod /dev/my_dev c 217 0
Then you can test your read / write APIs with:
sudo insmod my_module.ko
cat /dev/my_dev
less -f /dev/my_dev
sudo su
root> echo "This is a test" > /dev/my_dev
root> exit
cat /dev/my_dev
The shell commands listed above load the module, perform a read, then log in as root (to allow writing to the device), write to the char dev, then exit and read again to see the changes.
Now you'd normally implement ioctl and mmap if needed.
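If you prefer a small C program over the shell commands, something like the following would exercise the read and write paths. This is only a sketch and assumes the /dev/my_dev node created above.
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[128];
    const char msg[] = "This is a test\n";
    ssize_t n;

    int fd = open("/dev/my_dev", O_WRONLY);  /* node created with mknod above */
    if (fd < 0) { perror("open for write"); return 1; }
    if (write(fd, msg, sizeof(msg) - 1) < 0)
        perror("write");
    close(fd);

    fd = open("/dev/my_dev", O_RDONLY);      /* reopen so the read starts at offset 0 */
    if (fd < 0) { perror("open for read"); return 1; }
    n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("read back: %s", buf);
    }
    close(fd);
    return 0;
}
Run it with sudo (or as root), since writing to the device node requires root as noted above.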
I have a few questions on using shared memory with processes. I looked at several previous posts and couldn't glean the answers precisely enough. Thanks in advance for your help.
1. I'm using shm_open + mmap like below. This code works as intended with parent and child alternating to increment g_shared->count (the synchronization is not portable; it works only for certain memory models, but good enough for my case for now). However, when I change MAP_SHARED to MAP_ANONYMOUS | MAP_SHARED, the memory isn't shared and the program hangs since the 'flag' doesn't get flipped. Removing the flag confirms what's happening with each process counting from 0 to 10 (implying that each has its own copy of the structure and hence the 'count' field). Is this the expected behavior? I don't want the memory to be backed by a file; I really want to emulate what might happen if these were threads instead of processes (they need to be processes for other reasons).
2. Do I really need shm_open? Since the processes belong to the same hierarchy, can I just use mmap alone instead? I understand this would be fairly straightforward if there wasn't an 'exec,' but how do I get it to work when there is an 'exec' following the 'fork?'
3. I'm using kernel version 3.2.0-23 on x86_64 (Intel i7-2600). For this implementation, does mmap give the same behavior (correctness as well as performance) as shared memory with pthreads sharing the same global object? For example, does the MMU map the segment with 'cacheable' MTRR/TLB attributes?
4. Is the cleanup_shared() code correct? Is it leaking any memory? How could I check? For example, is there an equivalent of System V's 'ipcs?'
thanks,
/Doobs
shmem.h:
#ifndef __SHMEM_H__
#define __SHMEM_H__
//includes
#define LEN 1000
#define ITERS 10
#define SHM_FNAME "/myshm"
typedef struct shmem_obj {
int count;
char buff[LEN];
volatile int flag;
} shmem_t;
extern shmem_t* g_shared;
extern char proc_name[100];
extern int fd;
void cleanup_shared() {
munmap(g_shared, sizeof(shmem_t));
close(fd);
shm_unlink(SHM_FNAME);
}
static inline
void init_shared() {
int oflag;
if (!strcmp(proc_name, "parent")) {
oflag = O_CREAT | O_RDWR;
} else {
oflag = O_RDWR;
}
fd = shm_open(SHM_FNAME, oflag, (S_IREAD | S_IWRITE));
if (fd == -1) {
perror("shm_open");
exit(EXIT_FAILURE);
}
if (ftruncate(fd, sizeof(shmem_t)) == -1) {
perror("ftruncate");
shm_unlink(SHM_FNAME);
exit(EXIT_FAILURE);
}
g_shared = mmap(NULL, sizeof(shmem_t),
(PROT_WRITE | PROT_READ),
MAP_SHARED, fd, 0);
if (g_shared == MAP_FAILED) {
perror("mmap");
cleanup_shared();
exit(EXIT_FAILURE);
}
}
static inline
void proc_write(const char* s) {
fprintf(stderr, "[%s] %s\n", proc_name, s);
}
#endif // __SHMEM_H__
shmem1.c (parent process):
#include "shmem.h"
int fd;
shmem_t* g_shared;
char proc_name[100];
void work() {
int i;
for (i = 0; i < ITERS; ++i) {
while (g_shared->flag);
++g_shared->count;
sprintf(g_shared->buff, "%s: %d", proc_name, g_shared->count);
proc_write(g_shared->buff);
g_shared->flag = !g_shared->flag;
}
}
int main(int argc, char* argv[], char* envp[]) {
int status, child;
strcpy(proc_name, "parent");
init_shared(argv);
fprintf(stderr, "Map address is: %p\n", g_shared);
if (child = fork()) {
work();
waitpid(child, &status, 0);
cleanup_shared();
fprintf(stderr, "Parent finished!\n");
} else { /* child executes shmem2 */
execvpe("./shmem2", argv + 2, envp);
}
}
shmem2.c (child process):
#include "shmem.h"
int fd;
shmem_t* g_shared;
char proc_name[100];
void work() {
int i;
for (i = 0; i < ITERS; ++i) {
while (!g_shared->flag);
++g_shared->count;
sprintf(g_shared->buff, "%s: %d", proc_name, g_shared->count);
proc_write(g_shared->buff);
g_shared->flag = !g_shared->flag;
}
}
int main(int argc, char* argv[], char* envp[]) {
int status;
strcpy(proc_name, "child");
init_shared(argv);
fprintf(stderr, "Map address is: %p\n", g_shared);
work();
cleanup_shared();
return 0;
}
1. Passing MAP_ANONYMOUS causes the kernel to ignore your file descriptor argument and give you a fresh anonymous mapping that is not backed by the shm object, so the two exec'ed processes no longer share memory. That's not what you want.
2. Yes, you can create an anonymous shared mapping in a parent process, fork, and have the child process inherit the mapping, sharing the memory with the parent and any other children. That obviously doesn't survive an exec() though.
3. I don't understand this question; pthreads doesn't allocate memory. The cacheability will depend on the file descriptor you mapped. If it's a disk file or an anonymous mapping, then it's cacheable memory. If it's a video framebuffer device, it's probably not.
4. That's the right way to call munmap(), but I didn't verify the logic beyond that. All processes need to unmap; only one should call shm_unlink.
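To illustrate point 2, here is a minimal sketch of an anonymous shared mapping inherited across fork() (no exec involved, and no file or name to clean up):
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Anonymous shared mapping: no backing file, no shm name. */
    int *counter = mmap(NULL, sizeof(*counter), PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (counter == MAP_FAILED) { perror("mmap"); return 1; }
    *counter = 0;

    if (fork() == 0) {          /* the child inherits the mapping */
        *counter = 42;
        _exit(0);
    }
    wait(NULL);
    printf("parent sees %d\n", *counter);   /* prints 42 */
    munmap(counter, sizeof(*counter));
    return 0;
}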
2b) As a middle ground of sorts, it is possible to call:
int const shm_fd = shm_open(fn,...);
shm_unlink(fn);
in a parent process, and then pass the fd to a child process created by fork()/execve() via argp or envp. Since open file descriptors of this type survive the fork()/execve(), you can mmap the fd in both the parent process and any derived processes. Here's a more complete code example, copied and simplified/sanitized from code I ran successfully under Ubuntu 12.04 / Linux kernel 3.13 / glibc 2.15:
int create_shm_fd( void ) {
int oflags = O_RDWR | O_CREAT | O_TRUNC;
string const fn = "/some_shm_fn_maybe_with_pid";
int fd;
neg_one_fail( fd = shm_open( fn.c_str(), oflags, S_IRUSR | S_IWUSR ), "shm_open" );
if( fd == -1 ) { rt_err( strprintf( "shm_open() failed with errno=%s", str(errno).c_str() ) ); }
// for now, we'll just pass the open fd to our child process, so
// we don't need the file/name/link anymore, and by unlinking it
// here we can try to minimize the chance / amount of OS-level shm
// leakage.
neg_one_fail( shm_unlink( fn.c_str() ), "shm_unlink" );
// by default, the fd returned from shm_open() has FD_CLOEXEC
// set. it seems okay to remove it so that it will stay open
// across execve.
int fd_flags = 0;
neg_one_fail( fd_flags = fcntl( fd, F_GETFD ), "fcntl" );
fd_flags &= ~FD_CLOEXEC;
neg_one_fail( fcntl( fd, F_SETFD, fd_flags ), "fcntl" );
// resize the shm segment for later mapping via mmap()
neg_one_fail( ftruncate( fd, 1024*1024*4 ), "ftruncate" );
return fd;
}
It's not 100% clear to me whether it's okay spec-wise to remove FD_CLOEXEC and/or to assume that after doing so the fd really will survive the exec. The man page for exec is unclear; it says "POSIX shared memory regions are unmapped", but to me that's redundant with the earlier general statement that mappings are not preserved, and it doesn't say that an shm_open()'d fd will be closed. And of course there's the fact that, as I mentioned, the code does seem to work in at least one case.
The reason I might use this approach is that it seems to reduce the chance of leaking the shared memory segment/filename, and it makes clear that I don't need persistence of the memory segment.
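For completeness, the child side of this scheme might look roughly like the sketch below. The fd number arriving as argv[1] and the 4 MB size are assumptions for illustration; they are not part of the code above, which leaves the exact hand-off (argp or envp) to the caller.
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(int argc, char *argv[])
{
    if (argc < 2)
        return 1;

    int fd = atoi(argv[1]);          /* fd number passed down by the parent */
    size_t len = 1024 * 1024 * 4;    /* must match the parent's ftruncate() size */

    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* p now refers to the same pages the parent mapped from the same fd */
    printf("first byte: %d\n", ((unsigned char *)p)[0]);
    return 0;
}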