Direct access to the Linux framebuffer - copyarea

I want to move a rectangle very quickly over a framebuffer in an embedded Linux application. I have found that the function cfb_copyarea may be useful, but I cannot find any ioctl on the /dev/fb device that calls it. Or can this function be called directly?

Here is code to initialize and close the framebuffer:
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/fb.h>

#define FB_NAME "/dev/fb0"   /* adjust to your framebuffer device */

class CFrameBuffer
{
    void* m_FrameBuffer;
    struct fb_fix_screeninfo m_FixInfo;
    struct fb_var_screeninfo m_VarInfo;
    int m_FBFD;

public:
    int InitFB()
    {
        int iFrameBufferSize;
        /* Open the framebuffer device in read/write mode */
        m_FBFD = open(FB_NAME, O_RDWR);
        if (m_FBFD < 0) {
            printf("Unable to open %s.\n", FB_NAME);
            return 1;
        }
        /* Do ioctl. Retrieve the fixed screen info. */
        if (ioctl(m_FBFD, FBIOGET_FSCREENINFO, &m_FixInfo) < 0) {
            printf("get fixed screen info failed: %s\n", strerror(errno));
            close(m_FBFD);
            return 1;
        }
        /* Do ioctl. Get the variable screen info. */
        if (ioctl(m_FBFD, FBIOGET_VSCREENINFO, &m_VarInfo) < 0) {
            printf("Unable to retrieve variable screen info: %s\n", strerror(errno));
            close(m_FBFD);
            return 1;
        }
        /* Calculate the size to mmap */
        iFrameBufferSize = m_FixInfo.line_length * m_VarInfo.yres;
        printf("Line length %d\n", m_FixInfo.line_length);
        /* Now mmap the framebuffer */
        m_FrameBuffer = mmap(NULL, iFrameBufferSize, PROT_READ | PROT_WRITE,
                             MAP_SHARED, m_FBFD, 0);
        if (m_FrameBuffer == MAP_FAILED) {
            printf("mmap failed: %s\n", strerror(errno));
            close(m_FBFD);
            return 1;
        }
        return 0;
    }

    void CloseFB()
    {
        /* The length passed to munmap() must match the mapped size */
        munmap(m_FrameBuffer, m_FixInfo.line_length * m_VarInfo.yres);
        close(m_FBFD);
    }
};

Note that this code is not entirely correct: although it will work on many Linux devices, on some it won't. To calculate the framebuffer size, do not do this:
iFrameBufferSize = m_FixInfo.line_length * m_VarInfo.yres;
Instead, do this:
iFrameBufferSize = m_FixInfo.smem_len;
And your code will be more portable.

As far as I know, after a few days of research, there is no ioctl for invoking this function. I would have to write my own system call, preferably in a kernel module, or copy the algorithm from the kernel source and use it in user space on the mmapped memory.
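If you go the user-space route, the core of copyarea is just a per-row copy over the mmapped buffer. Here is a minimal sketch of that idea (not the kernel's cfb_copyarea itself; the fb_copyarea name and the stride/bpp parameters, which you would take from the screeninfo structs above, are my own):

/* Hedged sketch: a user-space "copyarea" over the mmapped buffer.
 * fb is the pointer returned by mmap(), stride is m_FixInfo.line_length,
 * bpp is m_VarInfo.bits_per_pixel. Rows are copied in an order that is
 * safe when the source and destination rectangles overlap. */
#include <string.h>

void fb_copyarea(char *fb, int stride, int bpp,
                 int dx, int dy, int sx, int sy, int width, int height)
{
    const int bytes_pp = bpp / 8;
    int row;

    if (dy <= sy) {                       /* moving up: copy top-down */
        for (row = 0; row < height; ++row)
            memmove(fb + (dy + row) * stride + dx * bytes_pp,
                    fb + (sy + row) * stride + sx * bytes_pp,
                    (size_t)width * bytes_pp);
    } else {                              /* moving down: copy bottom-up */
        for (row = height - 1; row >= 0; --row)
            memmove(fb + (dy + row) * stride + dx * bytes_pp,
                    fb + (sy + row) * stride + sx * bytes_pp,
                    (size_t)width * bytes_pp);
    }
}

This is of course a plain CPU copy rather than an accelerated blit, but it needs no kernel changes.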

Related

Why is my code not able to create a simple input device using uinput?

I am trying to learn how to use /dev/uinput on Linux and copied the simple example from kernel.org/doc/html/v4.12/input/uinput.html, which is below:
#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/uinput.h>

void emit(int fd, int type, int code, int val)
{
    struct input_event ie;

    ie.type = type;
    ie.code = code;
    ie.value = val;
    /* timestamp values below are ignored */
    ie.time.tv_sec = 0;
    ie.time.tv_usec = 0;

    write(fd, &ie, sizeof(ie));
}

int main(void)
{
    struct uinput_setup usetup;

    int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);

    /*
     * The ioctls below will enable the device that is about to be
     * created, to pass key events, in this case the space key.
     */
    ioctl(fd, UI_SET_EVBIT, EV_KEY);
    ioctl(fd, UI_SET_KEYBIT, KEY_SPACE);

    memset(&usetup, 0, sizeof(usetup));
    usetup.id.bustype = BUS_USB;
    usetup.id.vendor = 0x1234;  /* sample vendor */
    usetup.id.product = 0x5678; /* sample product */
    strcpy(usetup.name, "Example device");

    ioctl(fd, UI_DEV_SETUP, &usetup);
    ioctl(fd, UI_DEV_CREATE);

    /*
     * On UI_DEV_CREATE the kernel will create the device node for this
     * device. We are inserting a pause here so that userspace has time
     * to detect, initialize the new device, and can start listening to
     * the event, otherwise it will not notice the event we are about
     * to send. This pause is only needed in our example code!
     */
    sleep(1);

    /* Key press, report the event, send key release, and report again */
    emit(fd, EV_KEY, KEY_SPACE, 1);
    emit(fd, EV_SYN, SYN_REPORT, 0);
    emit(fd, EV_KEY, KEY_SPACE, 0);
    emit(fd, EV_SYN, SYN_REPORT, 0);

    /*
     * Give userspace some time to read the events before we destroy the
     * device with UI_DEV_DESTROY.
     */
    sleep(1);

    //ioctl(fd, UI_DEV_DESTROY);
    close(fd);

    return 0;
}
It compiles and runs without errors, but I can't find any device created when I run the xinput command in a terminal. I also checked inside /dev/input/, but found no changes after executing my program.
I tried it on Ubuntu and Kali. What am I missing?
Maybe it helps others:
int i = 0;
while (i < 100) {
    i++;
    sleep(1);
    emit(fd, EV_KEY, KEY_SPACE, 1);
    emit(fd, EV_SYN, SYN_REPORT, 0);
    emit(fd, EV_KEY, KEY_SPACE, 0);
    emit(fd, EV_SYN, SYN_REPORT, 0);
}
I found that while the program is in the loop, the device is created; during that time you can check it with the xinput command. After the program ends, the device gets destroyed even if I don't call ioctl(fd, UI_DEV_DESTROY).
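That behaviour is expected: the virtual device only exists while the uinput file descriptor is open, so its lifetime is tied to the program. Instead of a busy loop you can simply block before closing the descriptor; a minimal sketch of the end of main() (assuming <stdio.h> is included for printf/getchar):

/* Sketch only: keep the device alive until the user presses Enter.
 * The uinput device exists exactly as long as fd stays open. */
emit(fd, EV_KEY, KEY_SPACE, 1);
emit(fd, EV_SYN, SYN_REPORT, 0);
emit(fd, EV_KEY, KEY_SPACE, 0);
emit(fd, EV_SYN, SYN_REPORT, 0);

printf("Device is up; check xinput now, press Enter to quit.\n");
getchar();                      /* block here; the device node stays visible */

ioctl(fd, UI_DEV_DESTROY);      /* explicit teardown before closing */
close(fd);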

Can I/O memory access be used inside an ISR under Linux (ARM)?

I'm creating a driver for communication with an FPGA under Linux. The FPGA is connected via the GPMC interface. When I tested reads/writes from the driver context, everything worked perfectly. But the problem is that I need to read an address on an interrupt, so I created an interrupt handler, registered it, and put the I/O memory read (the readw function) in it. But when the interrupt fires, only zeros are read. I tested every part of the driver from top to bottom, and it seems the problem is in the I/O memory access inside the ISR. When I replaced the I/O access with a constant value, it was passed to the user-level application successfully.
ARM version: armv7a (Cortex ARM-A8 (DM3730))
Compiler: CodeSourcery 2014.05
Here is some code from driver which represents performed actions:
// Request physical memory region for FPGA address IO
void* uni_PhysMem_request(const unsigned long addr, const unsigned long size)
{
    // Handle to be returned
    void* handle = NULL;

    // Check if memory region was successfully requested (mapped to module)
    if (!request_mem_region(addr, size, moduleName)) {
        printk(KERN_ERR "\t\t\t\t%s() failed to request_mem_region(0x%p, %lu)\n",
               __func__, (void*)addr, size);
    }

    // Remap physical memory
    if (!(handle = ioremap(addr, size))) {
        printk(KERN_ERR "\t\t\t\t%s() failed to ioremap(0x%p, %lu)\n",
               __func__, (void*)addr, size);
    }

    // Return virtual address
    return handle;
}

// ...

// ISR
static irqreturn_t uni_IRQ_handler(int irq, void *dev_id)
{
    ssize_t readed = 0;

    if (irq == irqNumber) {
        printk(KERN_DEBUG "\t\t\t\tIRQ handling...\n");
        printk(KERN_DEBUG "\t\t\t\tGPIO %d pin is %s\n", irqGPIOPin,
               ((gpio_get_value(irqGPIOPin) == 0) ? "LOW" : "HIGH"));
        // gUniAddr is a struct which holds the GPMC remapped virtual address
        // (from uni_PhysMem_request), the offset and the read size
        if ((readed = uni_ReadBuffer_IRQ(gUniAddr.gpmc.addr, gUniAddr.gpmc.offset, gUniAddr.size)) < 0) {
            printk(KERN_ERR "\t\t\t\tunable to read data\n");
        } else {
            printk(KERN_INFO "\t\t\t\tdata read successfully (%zd bytes)\n", readed);
        }
    }

    return IRQ_HANDLED;
}

// ...

// Read buffer by IRQ
ssize_t uni_ReadBuffer_IRQ(void* physAddr, unsigned long physOffset, size_t buffSize)
{
    size_t size = 0;
    size_t i;

    for (i = 0; i < buffSize; i += 2) {
        // The value read here is sent to the ring buffer. When readw() is
        // replaced with any constant, everything is OK.
        size += uni_RB_write(readw(physAddr + physOffset));
    }

    return size;
}
It looks like the problem was in compiler optimizations. I changed the uni_RB_write function to take the physical address and the data size, and the read is now performed via the ioread16_rep function. Now everything works just fine.
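For reference, a hedged sketch of what the revised read path might look like; ring_buf, RING_WORDS and the exact uni_RB_write signature are assumptions on my side, only the use of ioread16_rep() reflects what actually fixed it:

/* Sketch: the ring-buffer helper now receives the remapped I/O address
 * and the byte count, and pulls the data in with ioread16_rep(), which
 * performs the repeated 16-bit device reads itself and is not subject
 * to the optimization issue seen with the open-coded loop. */
ssize_t uni_RB_write(void __iomem *ioAddr, size_t byteCount)
{
    size_t words = byteCount / 2;

    if (words > RING_WORDS)                 /* hypothetical capacity check */
        words = RING_WORDS;

    ioread16_rep(ioAddr, ring_buf, words);  /* bulk 16-bit reads from the device */

    return words * 2;                       /* bytes actually stored */
}

/* ISR side: */
readed = uni_RB_write(gUniAddr.gpmc.addr + gUniAddr.gpmc.offset, gUniAddr.size);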

Call to ioctl() on a usb device using usbfs does not work

I am trying to create my own driver for my gamepad right now. I found out that the original reason why I wanted to write it no longer applies, but I still want to do it for the experience. So please don't tell me there is a better way to do this than writing my own driver.
The part in kernelspace with the ioctl function that should be called is:
static int xpad_ioctl(struct usb_interface *intf, unsigned int code, void *buf)
{
    //struct usb_xpad *xpad = usb_get_intfdata(intf);
    printk(KERN_INFO "(Ongy)IOCTL called\n");

    //if (_IOC_TYPE(code) != XPAD_IOMAGIC) return -ENOTTY;
    //if (_IOC_NR(code) > XPAD_IOMAX) return -ENOTTY;

    switch (code) {
    case XPAD_IORMAP:
        printk(KERN_INFO "(Ongy)IORMAP called\n");
        break;
    default:
        return -EINVAL;
    }
    return 0;
}

static struct usb_driver xpad_driver =
{
    .name           = "Cyborg-V5-driver",
    .probe          = xpad_probe,
    .disconnect     = xpad_disconnect,
    .unlocked_ioctl = xpad_ioctl,
    .id_table       = xpad_table,
};
The part in userspace to call it is (this is part of a Qt-application):
int openfile()
{
    char *device = "/dev/bus/usb/005/009";
    printf("Opening device %s\n", device);
    return open(device, /*O_RDONLY*/ O_WRONLY | O_NONBLOCK);
}
[...] closefile(int file_desc) is missing here; this and the openfile function only exist because I didn't know one can call "::open()" when Qt overrides function calls.
void MainContainer::callioctl()
{
    int file_desc, ret_val;
    errno = 0;
    file_desc = openfile();

    if (file_desc == -1) {
        printf("Ioctl not called because of: error %s\n", strerror(errno));
    } else {
        errno = 0;
        //struct usbdevfs_getdriver* driver = (usbdevfs_getdriver*)malloc(sizeof(struct usbdevfs_getdriver));
        struct mappingpair* pair = (mappingpair*)malloc(sizeof(struct mappingpair));

        ret_val = ioctl(file_desc, XPAD_IORMAP, pair);
        //printf("Drivername %s\n", driver->driver);

        closefile(file_desc);

        if (ret_val == -1) printf("Ioctl failed with error %s\n", strerror(errno));
        else               printf("Ioctl call successful\n");
    }
}
OK, the path of the file I open I got from a call to lsusb and changed by hand in the code; this is only for debugging, until I get the ioctl calls working.
When I call callioctl() it prints:
Ioctl failed with error Unpassender IOCTL (I/O-Control) für das Gerät
The German part means "Inappropriate ioctl (I/O control) for the device" (i.e. ENOTTY), and nothing appears in dmesg; that is why I think my ioctl function in the driver is not called.
If you look at http://www.hep.by/gnu/kernel/usb/usbfs.html it says that to send an ioctl to the usb_driver device you need to do:
struct usbdevfs_ioctl {
    int  ifno;
    int  ioctl_code;
    void *data;
};

/* user mode call looks like this.
 * 'request' becomes the driver->ioctl() 'code' parameter.
 * the size of 'param' is encoded in 'request', and that data
 * is copied to or from the driver->ioctl() 'buf' parameter.
 */
static int
usbdev_ioctl(int fd, int ifno, unsigned request, void *param)
{
    struct usbdevfs_ioctl wrapper;

    wrapper.ifno = ifno;
    wrapper.ioctl_code = request;
    wrapper.data = param;

    return ioctl(fd, USBDEVFS_IOCTL, &wrapper);
}
The documentation lists USB devices under /proc/bus, so admittedly this may have changed.
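Applied to the code in the question, that means the raw ioctl(file_desc, XPAD_IORMAP, pair) has to be wrapped in a struct usbdevfs_ioctl and sent as USBDEVFS_IOCTL so it reaches the driver's handler; a sketch (the interface number 0 is an assumption, check which interface your driver actually binds to):

#include <sys/ioctl.h>
#include <linux/usbdevice_fs.h>

/* Hedged sketch: route the request through usbfs to the bound driver. */
static int xpad_usbfs_ioctl(int fd, unsigned request, void *param)
{
    struct usbdevfs_ioctl wrapper;

    wrapper.ifno = 0;              /* interface the driver is bound to (assumed) */
    wrapper.ioctl_code = request;  /* becomes the 'code' argument of xpad_ioctl() */
    wrapper.data = param;

    return ioctl(fd, USBDEVFS_IOCTL, &wrapper);
}

/* usage inside callioctl():
 *   ret_val = xpad_usbfs_ioctl(file_desc, XPAD_IORMAP, pair);
 */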

Shared memory across processes on Linux/x86_64

I have a few questions on using shared memory with processes. I looked at several previous posts but couldn't glean the answers precisely enough. Thanks in advance for your help.
1. I'm using shm_open + mmap like below. This code works as intended, with parent and child alternating to increment g_shared->count (the synchronization is not portable; it works only for certain memory models, but is good enough for my case for now). However, when I change MAP_SHARED to MAP_ANONYMOUS | MAP_SHARED, the memory isn't shared and the program hangs since the 'flag' never gets flipped. Removing the flag confirms what's happening, with each process counting from 0 to 10 (implying that each has its own copy of the structure and hence its own 'count' field). Is this the expected behavior? I don't want the memory to be backed by a file; I really want to emulate what would happen if these were threads instead of processes (they need to be processes for other reasons).
2. Do I really need shm_open? Since the processes belong to the same hierarchy, can I just use mmap alone instead? I understand this would be fairly straightforward if there weren't an 'exec', but how do I get it to work when there is an 'exec' following the 'fork'?
3. I'm using kernel version 3.2.0-23 on x86_64 (Intel i7-2600). For this implementation, does mmap give the same behavior (correctness as well as performance) as shared memory with pthreads sharing the same global object? For example, does the MMU map the segment with 'cacheable' MTRR/TLB attributes?
4. Is the cleanup_shared() code correct? Is it leaking any memory? How could I check? For example, is there an equivalent of System V's 'ipcs'?
thanks,
/Doobs
shmem.h:
#ifndef __SHMEM_H__
#define __SHMEM_H__

//includes

#define LEN 1000
#define ITERS 10
#define SHM_FNAME "/myshm"

typedef struct shmem_obj {
    int count;
    char buff[LEN];
    volatile int flag;
} shmem_t;

extern shmem_t* g_shared;
extern char proc_name[100];
extern int fd;

void cleanup_shared() {
    munmap(g_shared, sizeof(shmem_t));
    close(fd);
    shm_unlink(SHM_FNAME);
}

static inline
void init_shared() {
    int oflag;
    if (!strcmp(proc_name, "parent")) {
        oflag = O_CREAT | O_RDWR;
    } else {
        oflag = O_RDWR;
    }
    fd = shm_open(SHM_FNAME, oflag, (S_IREAD | S_IWRITE));
    if (fd == -1) {
        perror("shm_open");
        exit(EXIT_FAILURE);
    }
    if (ftruncate(fd, sizeof(shmem_t)) == -1) {
        perror("ftruncate");
        shm_unlink(SHM_FNAME);
        exit(EXIT_FAILURE);
    }
    g_shared = mmap(NULL, sizeof(shmem_t),
                    (PROT_WRITE | PROT_READ),
                    MAP_SHARED, fd, 0);
    if (g_shared == MAP_FAILED) {
        perror("mmap");
        cleanup_shared();
        exit(EXIT_FAILURE);
    }
}

static inline
void proc_write(const char* s) {
    fprintf(stderr, "[%s] %s\n", proc_name, s);
}

#endif // __SHMEM_H__
shmem1.c (parent process):
#include "shmem.h"
int fd;
shmem_t* g_shared;
char proc_name[100];
void work() {
int i;
for (i = 0; i &lt ITERS; ++i) {
while (g_shared->flag);
++g_shared->count;
sprintf(g_shared->buff, "%s: %d", proc_name, g_shared->count);
proc_write(g_shared->buff);
g_shared->flag = !g_shared->flag;
}
}
int main(int argc, char* argv[], char* envp[]) {
int status, child;
strcpy(proc_name, "parent");
init_shared(argv);
fprintf(stderr, "Map address is: %p\n", g_shared);
if (child = fork()) {
work();
waitpid(child, &status, 0);
cleanup_shared();
fprintf(stderr, "Parent finished!\n");
} else { /* child executes shmem2 */
execvpe("./shmem2", argv + 2, envp);
}
}
shmem2.c (child process):
#include "shmem.h"
int fd;
shmem_t* g_shared;
char proc_name[100];
void work() {
int i;
for (i = 0; i &lt ITERS; ++i) {
while (!g_shared->flag);
++g_shared->count;
sprintf(g_shared->buff, "%s: %d", proc_name, g_shared->count);
proc_write(g_shared->buff);
g_shared->flag = !g_shared->flag;
}
}
int main(int argc, char* argv[], char* envp[]) {
int status;
strcpy(proc_name, "child");
init_shared(argv);
fprintf(stderr, "Map address is: %p\n", g_shared);
work();
cleanup_shared();
return 0;
}
1. Passing MAP_ANONYMOUS causes the kernel to ignore your file descriptor argument and give you a private mapping instead. That's not what you want.
2. Yes, you can create an anonymous shared mapping in a parent process, fork, and have the child process inherit the mapping, sharing the memory with the parent and any other children. That obviously doesn't survive an exec() though (see the sketch below).
3. I don't understand this question; pthreads doesn't allocate memory. The cacheability will depend on the file descriptor you mapped. If it's a disk file or an anonymous mapping, then it's cacheable memory. If it's a video framebuffer device, it's probably not.
4. That's the right way to call munmap(), but I didn't verify the logic beyond that. All processes need to unmap; only one should call shm_unlink().
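For the fork-without-exec case mentioned in point 2, a minimal sketch of an anonymous shared mapping (no synchronization beyond wait(), just to show that both processes see the same page):

#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

/* Sketch: MAP_ANONYMOUS | MAP_SHARED created before fork(); the child
 * inherits the mapping and writes through it. This does not survive an
 * exec(), which is what the shm_open() variant below addresses. */
int main(void)
{
    int *counter = mmap(NULL, sizeof(*counter), PROT_READ | PROT_WRITE,
                        MAP_ANONYMOUS | MAP_SHARED, -1, 0);
    if (counter == MAP_FAILED) { perror("mmap"); return 1; }
    *counter = 0;

    if (fork() == 0) {          /* child */
        *counter = 42;
        _exit(0);
    }

    wait(NULL);                 /* parent now sees the child's write */
    printf("counter = %d\n", *counter);
    munmap(counter, sizeof(*counter));
    return 0;
}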
2b) As a middle ground of sorts, it is possible to call:
int const shm_fd = shm_open(fn,...);
shm_unlink(fn);
in a parent process, and then pass the fd to a child process created by fork()/execve() via argp or envp. Since open file descriptors of this type survive fork()/execve(), you can mmap the fd in both the parent process and any derived processes. Here's a more complete code example, copied and simplified/sanitized from code I ran successfully under Ubuntu 12.04 / Linux kernel 3.13 / glibc 2.15:
int create_shm_fd( void ) {
    int oflags = O_RDWR | O_CREAT | O_TRUNC;
    string const fn = "/some_shm_fn_maybe_with_pid";
    int fd;
    neg_one_fail( fd = shm_open( fn.c_str(), oflags, S_IRUSR | S_IWUSR ), "shm_open" );
    if( fd == -1 ) { rt_err( strprintf( "shm_open() failed with errno=%s", str(errno).c_str() ) ); }
    // for now, we'll just pass the open fd to our child process, so
    // we don't need the file/name/link anymore, and by unlinking it
    // here we can try to minimize the chance / amount of OS-level shm
    // leakage.
    neg_one_fail( shm_unlink( fn.c_str() ), "shm_unlink" );
    // by default, the fd returned from shm_open() has FD_CLOEXEC
    // set. it seems okay to remove it so that it will stay open
    // across execve.
    int fd_flags = 0;
    neg_one_fail( fd_flags = fcntl( fd, F_GETFD ), "fcntl" );
    fd_flags &= ~FD_CLOEXEC;
    neg_one_fail( fcntl( fd, F_SETFD, fd_flags ), "fcntl" );
    // resize the shm segment for later mapping via mmap()
    neg_one_fail( ftruncate( fd, 1024*1024*4 ), "ftruncate" );
    return fd;
}
It's not 100% clear to me whether it's okay spec-wise to remove FD_CLOEXEC and/or to assume that, after doing so, the fd really will survive the exec. The man page for exec is unclear; it says "POSIX shared memory regions are unmapped", but to me that's redundant with the general comment earlier that mappings are not preserved, and it doesn't say that a shm_open()'d fd will be closed. And of course there's the fact that, as I mentioned, the code does seem to work in at least one case.
The reason I might use this approach is that it would seem to reduce the chance of leaking the shared memory segment/filename, and it makes it clear that I don't need persistence of the memory segment.
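For completeness, the receiving side after the execve() might look like the sketch below; the convention of passing the numeric fd as the first command-line argument, and the hard-coded size matching the parent's ftruncate(), are assumptions of the sketch, not part of the code above:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

/* Sketch: child process after execve(). The parent passed the numeric fd
 * as argv[1]; the segment was already sized with ftruncate() before exec. */
int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: child <shm-fd>\n"); return 1; }

    int fd = atoi(argv[1]);           /* assumed fd-as-string convention */
    size_t len = 1024 * 1024 * 4;     /* must match the parent's ftruncate() */

    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* ... use the shared region ... */

    munmap(p, len);
    return 0;
}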

Kernel Panic after changes in sys_close

I'm taking a course on operating systems and we work in Linux Red Hat 8.0.
As part of an assignment I had to change sys_open and sys_close. The changes to sys_open passed without incident, but when I introduce the changes to sys_close the OS suddenly encounters an error during booting, claiming it cannot mount the root fs, and panics. EIP is reportedly in sys_close when this happens.
Here are the changes I made (look for the "HW1 additions" comment):
In fs/open.c:
asmlinkage long sys_open(const char * filename, int flags, int mode)
{
    char * tmp;
    int fd, error;
    event_t* new_event;

#if BITS_PER_LONG != 32
    flags |= O_LARGEFILE;
#endif
    tmp = getname(filename);
    fd = PTR_ERR(tmp);
    if (!IS_ERR(tmp)) {
        fd = get_unused_fd();
        if (fd >= 0) {
            struct file *f = filp_open(tmp, flags, mode);
            error = PTR_ERR(f);
            if (IS_ERR(f))
                goto out_error;
            fd_install(fd, f);
        }
        /* HW1 additions */
        if (current->record_flag == 1) {
            new_event = (event_t*)kmalloc(sizeof(event_t), GFP_KERNEL);
            if (!new_event) {
                new_event->type = Open;
                strcpy(new_event->filename, tmp);
                file_queue_add(*new_event, current->queue);
            }
        }
        /* End HW1 additions */
out:
        putname(tmp);
    }
    return fd;

out_error:
    put_unused_fd(fd);
    fd = error;
    goto out;
}
asmlinkage long sys_close(unsigned int fd)
{
    struct file * filp;
    struct files_struct *files = current->files;
    event_t* new_event;
    char* tmp = files->fd[fd]->f_dentry->d_name.name;

    write_lock(&files->file_lock);
    if (fd >= files->max_fds)
        goto out_unlock;
    filp = files->fd[fd];
    if (!filp)
        goto out_unlock;
    files->fd[fd] = NULL;
    FD_CLR(fd, files->close_on_exec);
    __put_unused_fd(files, fd);
    write_unlock(&files->file_lock);

    /* HW1 additions */
    if (current->record_flag == 1) {
        new_event = (event_t*)kmalloc(sizeof(event_t), GFP_KERNEL);
        if (!new_event) {
            new_event->type = Close;
            strcpy(new_event->filename, tmp);
            file_queue_add(*new_event, current->queue);
        }
    }
    /* End HW1 additions */

    return filp_close(filp, files);

out_unlock:
    write_unlock(&files->file_lock);
    return -EBADF;
}
The task_struct defined in schedule.h was changed at the end to include:
unsigned int record_flag; /* when zero: do not record. when one: record. */
file_queue* queue;
And file_queue as well as event_t are defined in a separate file as follows:
typedef enum {Open, Close} EventType;

typedef struct event_t {
    EventType type;
    char filename[256];
} event_t;

typedef struct file_quque_t {
    event_t queue[101];
    int head, tail;
} file_queue;
file_queue_add works like this:
void file_queue_add(event_t event, file_queue* queue) {
    queue->queue[queue->head] = event;
    queue->head = (queue->head + 1) % 101;
    if (queue->head == queue->tail) {
        queue->tail = (queue->tail + 1) % 101;
    }
}
if (!new_event) {
    new_event->type = …
That's equivalent to if (new_event == NULL). I think you mean if (new_event != NULL), which the kernel folks typically write as if (new_event).
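So with the condition inverted, the addition would look like this (sketch only, using the same fields as in the question; note that file_queue_add() copies the event by value, so the allocation can be freed afterwards):

/* HW1 additions, corrected: fill in the event only when kmalloc() succeeded */
if (current->record_flag == 1) {
    new_event = kmalloc(sizeof(event_t), GFP_KERNEL);
    if (new_event) {
        new_event->type = Open;
        strcpy(new_event->filename, tmp);
        file_queue_add(*new_event, current->queue);
        kfree(new_event);   /* the queue stores a copy, so the buffer is no longer needed */
    }
}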
Can you please post the stack dump of the error? I don't see a place where the queue_info structure is allocated memory. One more thing: you cannot be sure that the process's record_flag will always be zero if it is left unassigned in the kernel, because the kernel is a long-running program and memory contains garbage.
It's also possible to find the exact location in the function where the error occurs by looking at the stack trace.
