issue with copy_from_user in kernel - linux

I'm trying to use this function to copy a buffer from user space to one in the kernel.
Both buffers have been allocated. I'm using a while loop in case not all of the bytes are copied on the first try, but for some reason nothing is copied and the program gets stuck in the while loop.
What can be the reasons for that?
void my_copy_from_user(const char* source_buff, char* dest_buff, int size_to_copy)
{
    int not_copied = size_to_copy;
    int left = size_to_copy;

    while (not_copied) {
        not_copied = copy_from_user(dest_buff, source_buff, left);
        dest_buff += (left - not_copied);   /* advance past whatever was copied */
        source_buff += (left - not_copied);
        left = not_copied;
    }
}

It is possible that it is legitimately failing for reasons that you cannot recover from.
Please look at: http://lxr.free-electrons.com/source/arch/x86/lib/usercopy_32.c#L681
unsigned long _copy_from_user(void *to, const void __user *from, unsigned n)
{
    if (access_ok(VERIFY_READ, from, n))
        n = __copy_from_user(to, from, n);
    else
        memset(to, 0, n);
    return n;
}
This is the underlying implementation of copy_from_user for Linux on x86 processors. It first checks access_ok. If access is not allowed, it fails and immediately returns n (the number of bytes you requested to copy). Your loop then retries with the same pointers and the same left, so nothing ever changes: an infinite loop.
Two points:
I do not think you should invoke copy_from_user in a loop like that. If it fails to copy in kernel mode, there is a reason why. This is a different beast from read() functions when reading from sockets, etc, where you are encouraged to read() in a loop.
Are you sure that you are passing in the correct dest_buff to copy_from_user?
Tips:
Printk all the values and see what's happening. Is left being changed or not? It is likely not.
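For comparison, here is a minimal sketch of the conventional single-shot pattern (the function name and buffer size are illustrative, not from your code):
static ssize_t my_write(struct file *filp, const char __user *buf,
                        size_t count, loff_t *f_pos)
{
    char kbuf[64];

    if (count > sizeof(kbuf))
        count = sizeof(kbuf);
    /* one call; any nonzero return means a bad user pointer, so give up */
    if (copy_from_user(kbuf, buf, count))
        return -EFAULT;
    /* ... use kbuf ... */
    return count;
}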


Local memory for each CUDA thread

I have a simple program below. My question is: where is "temp" actually stored? Is it in global or local memory? I need an array temp for each idx, so that every thread has its own individual array temp. In this case, it is working properly. But in my actual program, when I tried to fill temp[0] from test2, the program stopped. Suppose we have 1024 threads; then it only ran the kernel for around 200 threads. So I am wondering whether temp is shared or not. If yes, maybe there is a collision there. I also did not get any error message. Please can someone explain this?
#include <cstdio>
#include <cstdlib>

__device__ void test2(int temp[], int idx)
{
    temp[0] = idx;
    printf("%d ", temp[0]);
}

__global__ void test()
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    int *temp = (int *)malloc(100 * sizeof(int));
    test2(temp, idx);
}

int main()
{
    test<<<1, 1024>>>();
    cudaDeviceSynchronize(); // wait for the kernel so the device-side printf output appears
    return 0;
}
My question is: where is "temp" actually stored?
The allocation for temp is stored in a place called the device heap. It is a form of global memory. However the temp variable itself (i.e. the pointer value) is in local memory - not shared or visible to other threads.
I need an array temp for each idx, so that every thread has its own individual array temp.
You will get that, subject to the caveats below. Each thread will have its own individual array, referenced by its local variable temp. Each thread will have a separate allocation for storage on the device heap.
People commonly have problems with in-kernel new or malloc. One of the main reasons is that the device heap is initially limited to 8MB across all of your device-heap allocations, so if enough threads make enough allocation requests, you will run out of space.
When you run out of space, the API signals it by returning a zero pointer value for the allocation (a NULL pointer). If you then attempt to use this NULL pointer, you will have trouble.
For debugging purposes (i.e. to prove this is happening), test the pointer for NULL (i.e. == 0) before using it. If it is NULL, don't use it (perhaps print an error message instead).
You can read more about this in the documentation or in many questions here on the SO cuda tag. If you read any of these sources, you will discover that you can increase the size of the device heap.
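For illustration, here is a rough sketch combining both suggestions: raising the device heap limit before launch, and checking the in-kernel malloc result for NULL (the 128MB figure is an arbitrary example value):
#include <cstdio>

__global__ void test_checked()
{
    int *temp = (int *)malloc(100 * sizeof(int));
    if (temp == NULL) {   // device heap exhausted: report and bail out
        printf("thread %d: malloc failed\n", threadIdx.x);
        return;
    }
    temp[0] = threadIdx.x;
    printf("%d ", temp[0]);
    free(temp);           // give the space back to the device heap
}

int main()
{
    // raise the device heap limit from the 8MB default before any kernel using malloc runs
    cudaDeviceSetLimit(cudaLimitMallocHeapSize, 128 * 1024 * 1024);
    test_checked<<<1, 1024>>>();
    cudaDeviceSynchronize();
    return 0;
}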

How does BPF calculate the number of CPUs for a PERCPU_ARRAY?

I have encountered an interesting issue where a PERCPU_ARRAY created on one system with 2 processors creates an array with 2 per-CPU elements and on another system with 2 processors, an array with 128 per-CPU elements. The latter was rather unexpected to me!
The way I discovered this behavior is that a program which allocated an array sized for the number of CPUs (using get_nprocs_conf(3)) and then read the PERCPU_ARRAY into it (using bpf_map_lookup_elem()) ended up writing past the end of the array and crashing.
I would like to find out the proper way for a program that reads BPF maps to determine the number of elements in a PERCPU_ARRAY used on a system.
Failing that, I think the second-best approach is to pick a read buffer that is "large enough." Here the problem is similar: what is that number, and is there a way to learn it at runtime?
The answer comes from reading the source of bpftool, which figures this out:
unsigned int get_possible_cpus(void)
{
    int cpus = libbpf_num_possible_cpus();

    if (cpus < 0) {
        p_err("Can't get # of possible cpus: %s", strerror(-cpus));
        exit(-1);
    }
    return cpus;
}

int libbpf_num_possible_cpus(void)
{
    static const char *fcpu = "/sys/devices/system/cpu/possible";
    static int cpus;
    int err, n, i, tmp_cpus;
    bool *mask;
    /* ---8<--- snip */
}
So that's how they do it: the number of per-CPU elements is the number of possible CPUs, which libbpf reads from /sys/devices/system/cpu/possible.
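A reader program can do the same: ask libbpf for the possible-CPU count and size its lookup buffer from that. A minimal sketch (the map_fd parameter, the key, and the u64 value type are assumptions for illustration):
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int read_percpu_value(int map_fd, __u32 key)
{
    int ncpus = libbpf_num_possible_cpus();
    if (ncpus < 0)
        return ncpus;
    /* one slot per *possible* CPU; each slot is padded to 8 bytes */
    __u64 *values = calloc(ncpus, sizeof(__u64));
    if (!values)
        return -ENOMEM;
    if (bpf_map_lookup_elem(map_fd, &key, values) == 0) {
        for (int i = 0; i < ncpus; i++)
            printf("cpu %d: %llu\n", i, (unsigned long long)values[i]);
    }
    free(values);
    return 0;
}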

linux wake_up_interruptible() having no effect

I am writing a "sleepy" device driver for an Operating Systems class.
The way it works is, the user accesses the device via read()/write().
When the user writes to the device like so: write(fd, &wait, size), the device is put to sleep for the number of seconds given by the value of wait. If the wait time expires, the driver's write method returns 0 and the program finishes. But if the user reads from the device while a process is sleeping on its wait queue, the driver's write method returns immediately with the number of seconds the sleeping process had left to wait before the timeout would have occurred on its own.
Another catch is that 10 instances of the device are created, and each of the 10 devices must be independent of each other. So a read to device 1 must only wake up sleeping processes on device 1.
Much code has been provided, and I have been charged with the task of mainly writing the read() and write() methods for the driver.
The way I have tried to keep the devices independent of each other is to include two global static arrays of size 10: one of type wait_queue_head_t, and one of type int (Boolean flags). Both arrays are initialized once, when I open the device via open(). The problem is that when I call wake_up_interruptible(), nothing happens, and the program terminates upon timeout. Here is my write method:
ssize_t sleepy_write(struct file *filp, const char __user *buf, size_t count, loff_t *f_pos)
{
    struct sleepy_dev *dev = (struct sleepy_dev *)filp->private_data;
    ssize_t retval = 0;
    int mem_to_be_copied = 0;

    if (mutex_lock_killable(&dev->sleepy_mutex))
        return -EINTR;

    /* check size */
    if (count != 4) { /* user must provide a 4-byte int */
        return EINVAL; /* = 22 */
    } else { /* the user provided valid-sized input... */
        if ((mem_to_be_copied = copy_from_user(&long_buff[0], buf, count)))
            return -EFAULT;
        /* check for negative wait time entered by user */
        if (long_buff[0] > -1) { /* "long_buff[]" is global, for now only holds 1 value */
            proc_read_flags[MINOR(dev->cdev.dev)] = 0; /* ****** flag array */
            retval = wait_event_interruptible_timeout(wqs[MINOR(dev->cdev.dev)],
                             proc_read_flags[MINOR(dev->cdev.dev)] == 1,
                             long_buff[0] * HZ) / HZ;
            proc_read_flags[MINOR(dev->cdev.dev)] = 0;
            /* MINOR numbers for each device correspond to array indices,
             * devices 0 - 9; "wqs" is an array of wait queues */
        } else {
            printk(KERN_INFO "user entered negative value for sleep time\n");
        }
    }
    mutex_unlock(&dev->sleepy_mutex);
    return retval;
}
Unlike the many examples on this topic, I switch the flag back to zero immediately before the call to wait_event_interruptible_timeout(), because flag values seem to linger between subsequent runs of the program. Here is the code for my read method:
ssize_t sleepy_read(struct file *filp, char __user *buf, size_t count, loff_t *f_pos)
{
    struct sleepy_dev *dev = (struct sleepy_dev *)filp->private_data;
    ssize_t retval = 0;

    if (mutex_lock_killable(&dev->sleepy_mutex))
        return -EINTR;
    /* switch the flag (again, device minor numbers correspond to array indices) */
    proc_read_flags[MINOR(dev->cdev.dev)] = 1;
    /* TODO: this is not waking up the process in write! */
    /* wake up the queue */
    wake_up_interruptible(&wqs[MINOR(dev->cdev.dev)]);
    mutex_unlock(&dev->sleepy_mutex);
    return retval;
}
The way I am trying to test the program is to have two main.c files, one for writing to the device and one for reading from it, and I run the resulting a.out binaries in separate consoles in my Ubuntu installation in VirtualBox. Another thing: the way it is set up now, neither the writing nor the reading a.out returns until the timeout occurs. I apologize for the spotty formatting of the code. I'm not sure exactly what is going on here, so any help would be much appreciated. Thanks!
Your write method holds sleepy_mutex while it waits for the event, so the read method blocks in mutex_lock_killable(&dev->sleepy_mutex) until the writer releases the mutex. That happens only when the writer's timeout expires and its write method returns, which is exactly the behaviour you observe.
Usually, wait_event* is executed outside of any critical section. That can be achieved by using the _lock-suffixed variants of those macros, or simply by wrapping the cond argument in a function that takes a spinlock around the check:
int check_cond(void)
{
    int res;

    spin_lock(&lock);
    res = <cond>;   /* evaluate the condition while holding the lock */
    spin_unlock(&lock);
    return res;
}
...
wait_event_interruptible(wq, check_cond());
Unfortunately, the wait_event family of macros cannot be used when the condition check must be protected by a mutex (the _lock-suffixed variants take spinlocks, not mutexes). In that case, you can use the wait_woken() function with manual condition-checking code, or rewrite your code so that no mutex lock/unlock is needed around the condition check.
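A rough sketch of the wait_woken() approach using the question's own names (illustrative and untested; real code needs the usual error handling):
long timeout = long_buff[0] * HZ;
DEFINE_WAIT_FUNC(wait, woken_wake_function);

add_wait_queue(&wqs[MINOR(dev->cdev.dev)], &wait);
for (;;) {
    int done;

    /* take the mutex only for the check, never across the sleep */
    mutex_lock(&dev->sleepy_mutex);
    done = (proc_read_flags[MINOR(dev->cdev.dev)] == 1);
    mutex_unlock(&dev->sleepy_mutex);

    if (done || timeout <= 0 || signal_pending(current))
        break;
    /* sleeps here; woken_wake_function marks the entry so wakeups are not lost */
    timeout = wait_woken(&wait, TASK_INTERRUPTIBLE, timeout);
}
remove_wait_queue(&wqs[MINOR(dev->cdev.dev)], &wait);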
To achieve "the reader wakes the writer if it is sleeping" functionality, you can adapt the code from this answer: https://stackoverflow.com/a/29765695/3440745.
Writer code:
// Declare a local variable at the beginning of the function
int cflag;
...
// Outside of any critical section (after mutex_unlock())
cflag = proc_read_flags[MINOR(dev->cdev.dev)];
wait_event_interruptible_timeout(wqs[MINOR(dev->cdev.dev)],
        proc_read_flags[MINOR(dev->cdev.dev)] != cflag,
        long_buff[0] * HZ);
Reader code:
// Holding the mutex protects this flag increment from a concurrent one.
proc_read_flags[MINOR(dev->cdev.dev)]++;
wake_up_interruptible_all(&wqs[MINOR(dev->cdev.dev)]);

Why does my implementation of the sbrk system call not work?

I am trying to write a very simple OS to better understand the basic principles, and I need to implement a user-space malloc. So first I want to implement and test it on my Linux machine.
At first, I implemented the sbrk() function in the following way:
void *sbrk(int increment)
{
    return (void *)syscall(__NR_brk, increment);
}
But this code does not work; when I use the sbrk provided by the OS instead, everything works fine.
I have tried another implementation of sbrk:
static void *sbrk(signed increment)
{
    size_t newbrk;
    static size_t oldbrk = 0;
    static size_t curbrk = 0;

    if (oldbrk == 0)
        curbrk = oldbrk = brk(0);
    if (increment == 0)
        return (void *)curbrk;
    newbrk = curbrk + increment;
    if (brk(newbrk) == curbrk)
        return (void *)-1;
    oldbrk = curbrk;
    curbrk = newbrk;
    return (void *)oldbrk;
}
sbrk is invoked from this function:
static Header *morecore(unsigned nu)
{
    char *cp;
    Header *up;

    if (nu < NALLOC)
        nu = NALLOC;
    cp = sbrk(nu * sizeof(Header));
    if (cp == (char *)-1)
        return NULL;
    up = (Header *)cp;
    up->s.size = nu; /* *** Segmentation fault */
    free((void *)(up + 1));
    return freep;
}
This code also does not work; on the marked line (***) I get a segmentation fault.
Where is the problem?
Thanks all. I have solved my problem using a new implementation of sbrk. The following code works fine:
void *__sbrk__(intptr_t increment)
{
    void *new, *old = (void *)syscall(__NR_brk, 0);

    new = (void *)syscall(__NR_brk, ((uintptr_t)old) + increment);
    return (((uintptr_t)new) == (((uintptr_t)old) + increment)) ? old : (void *)-1;
}
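For what it's worth, a quick user-space check of that function could look like this (a sketch; writing into the newly grown region is exactly what segfaulted before):
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>

/* __sbrk__ as defined above */

int main(void)
{
    char *p = __sbrk__(4096);      /* grow the break by one page */
    if (p == (void *)-1) {
        fprintf(stderr, "__sbrk__ failed\n");
        return 1;
    }
    p[0] = 42;                     /* the old break is the start of the new space */
    printf("old break: %p\n", (void *)p);
    return 0;
}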
The first sbrk should probably take a long increment. And you forgot to handle errors (and set errno).
The second sbrk function does not change the address space (as sbrk does). You could use mmap to change it (but using mmap instead of sbrk won't update the kernel's view of the data segment end the way sbrk does). You could run cat /proc/1234/maps to query the address space of the process with pid 1234, or even read (e.g. with fopen & fgets) /proc/self/maps from inside your program.
BTW, sbrk is obsolete (most malloc implementations use mmap), and by definition every system call (listed in syscalls(2)) is executed by the kernel (for sbrk the kernel maintains the "data segment" limit!). So you cannot avoid the kernel, and I don't even understand why you want to emulate any system call. Almost by definition, you cannot emulate syscalls since they are the only way to interact with the kernel from a user application. From the user application, every syscall is an atomic elementary operation (done by a single SYSENTER machine instruction with appropriate contents in machine registers).
You could use strace(1) to understand the actual syscalls done by your running program.
BTW, the GNU libc is free software, so you could look into its source code. musl-libc is a simpler libc, and its code is more readable.
Finally, compile with gcc -Wall -Wextra -g and use the gdb debugger (you can even query the registers if you want to). Perhaps read the x86-64 ABI specification and the Linux Assembly HOWTO.

String manipulation in Linux kernel module

I am having a hard time manipulating strings while writing a module for Linux. My problem is that I have an int Array[10] with different values in it. I need to produce a string to send to the buffer in my my_read procedure. If my array is {0,1,112,20,4,0,0,0,0,0},
then my output should be:
0:(0)
1:-(1)
2:-------------------------------------------------------------------------------------------------------(112)
3:--------------------(20)
4:----(4)
5:(0)
6:(0)
7:(0)
8:(0)
9:(0)
When I try to place the above strings in char[] arrays, somehow weird characters end up there. Here is the code:
int my_read(char *page, char **start, off_t off, int count, int *eof, void *data)
{
    int len;

    if (off > 0) {
        *eof = 1;
        return 0;
    }

    /* get process tree */
    int task_dep = 0; /* depth of a task from INIT */
    get_task_tree(&init_task, task_dep);

    char tmp[1024];
    char A[ProcPerDepth[0]], B[ProcPerDepth[1]], C[ProcPerDepth[2]],
         D[ProcPerDepth[3]], E[ProcPerDepth[4]], F[ProcPerDepth[5]],
         G[ProcPerDepth[6]], H[ProcPerDepth[7]], I[ProcPerDepth[8]],
         J[ProcPerDepth[9]];
    int i = 0;

    for (i = 0; i < 1024; i++) { tmp[i] = '\0'; }
    memset(A, '\0', sizeof(A)); memset(B, '\0', sizeof(B)); memset(C, '\0', sizeof(C));
    memset(D, '\0', sizeof(D)); memset(E, '\0', sizeof(E)); memset(F, '\0', sizeof(F));
    memset(G, '\0', sizeof(G)); memset(H, '\0', sizeof(H)); memset(I, '\0', sizeof(I)); memset(J, '\0', sizeof(J));
    printk("A:%s\nB:%s\nC:%s\nD:%s\nE:%s\nF:%s\nG:%s\nH:%s\nI:%s\nJ:%s\n", A, B, C, D, E, F, G, H, I, J);

    memset(A, '-', sizeof(A));
    memset(B, '-', sizeof(B));
    memset(C, '-', sizeof(C));
    memset(D, '-', sizeof(D));
    memset(E, '-', sizeof(E));
    memset(F, '-', sizeof(F));
    memset(G, '-', sizeof(G));
    memset(H, '-', sizeof(H));
    memset(I, '-', sizeof(I));
    memset(J, '-', sizeof(J));
    printk("A:%s\nB:%s\nC:%s\nD:%s\nE:%s\nF:%s\nG:%s\nH:%s\nI:%s\nJ:%s\n", A, B, C, D, E, F, G, H, I, J);

    len = sprintf(page, "0:%s(%d)\n1:%s(%d)\n2:%s(%d)\n3:%s(%d)\n4:%s(%d)\n"
                        "5:%s(%d)\n6:%s(%d)\n7:%s(%d)\n8:%s(%d)\n9:%s(%d)\n",
                  A, ProcPerDepth[0], B, ProcPerDepth[1], C, ProcPerDepth[2],
                  D, ProcPerDepth[3], E, ProcPerDepth[4], F, ProcPerDepth[5],
                  G, ProcPerDepth[6], H, ProcPerDepth[7], I, ProcPerDepth[8],
                  J, ProcPerDepth[9]);
    return len;
}
It worked out with this:
char s[500];
memset(s, '-', 498);
for (i = len = 0; i < 10; ++i) {
    len += sprintf(page + len, "%d:%.*s(%d)\n", i, ProcPerDepth[i], s, ProcPerDepth[i]);
}
I wonder if there is an easier flag in sprintf to repeat a character. Thanks.
Here are some issues:
You have entirely filled the A, B, C ... arrays with characters, and then you pass them to an I/O routine that expects null-terminated strings. Because your strings are not null-terminated, printk() will keep printing whatever happens to be in stack memory after your object until it finds a null byte by luck.
Multi-threaded kernels like Linux have strict and relatively small constraints on stack allocations. Every frame in the kernel call chain must fit into a fixed-size stack, or something will be overwritten. You may not get any detection of this error, just some kind of downstream crash as the memory corruption leads to a panic or a wedge. Allocating large, variable-length arrays on a kernel stack is simply not a good idea.
If you are going to write the tmp[] array and properly nul-terminate it, there is no reason to also initialize it. But if you were going to initialize it, you could let the compiler generate the code by writing char tmp[1024] = { 0 }; (under C99, a partial initialization of an aggregate zero-initializes the rest of the aggregate). A similar observation applies to the other arrays.
How about getting rid of most of those arrays and most of that code and just doing something along the lines of:
for (i = j = 0; i < n; ++i)
    j += sprintf(page + j, "...", ...);
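Putting that loop together with the %.*s trick the asker found, a self-contained user-space version might look like this (illustrative; in the module you would sprintf into page as above):
#include <stdio.h>
#include <string.h>

int main(void)
{
    int ProcPerDepth[10] = { 0, 1, 112, 20, 4, 0, 0, 0, 0, 0 };
    char dashes[128];
    char page[4096];
    int i, len = 0;

    /* no terminator needed: the "%.*s" precision bounds how much is read */
    memset(dashes, '-', sizeof(dashes));
    for (i = 0; i < 10; ++i)
        len += sprintf(page + len, "%d:%.*s(%d)\n",
                       i, ProcPerDepth[i], dashes, ProcPerDepth[i]);
    fputs(page, stdout);
    return 0;
}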
