I am developing a force feedback driver (Linux) for an as-yet-unsupported gamepad.
Whenever an application in userspace requests an ff-effect (e.g. rumbling), a function in my driver is called:
static int foo_ff_play(struct input_dev *dev, void *data, struct ff_effect *effect)
This is set up by the following code inside my init function:
input_set_capability(dev, EV_FF, FF_RUMBLE);
input_ff_create_memless(dev, NULL, foo_ff_play);
I'm accessing the ff_effect struct (which is passed to my foo_ff_play function) like this:
static int foo_ff_play(struct input_dev *dev, void *data, struct ff_effect *effect)
{
    u16 length;

    length = effect->replay.length;
    printk(KERN_DEBUG "length: %i\n", length);
    return 0;
}
The problem is that the reported length (in ff_effect->replay) is always zero.
That's confusing, since I am running fftest on my device, and fftest definitely sets the length attribute: https://github.com/flosse/linuxconsole/blob/master/utils/fftest.c (line 308)
/* a strong rumbling effect */
effects[4].type = FF_RUMBLE;
effects[4].id = -1;
effects[4].u.rumble.strong_magnitude = 0x8000;
effects[4].u.rumble.weak_magnitude = 0;
effects[4].replay.length = 5000;
effects[4].replay.delay = 1000;
Does this have something to do with the "memlessness"? Why does the data in ff_replay seem to be zero when it isn't?
Thank you in advance
Why is the replay struct empty?
Taking a look at https://elixir.free-electrons.com/linux/v4.4/source/drivers/input/ff-memless.c#L406 we find:
static void ml_play_effects(struct ml_device *ml)
{
    struct ff_effect effect;
    DECLARE_BITMAP(handled_bm, FF_MEMLESS_EFFECTS);

    memset(handled_bm, 0, sizeof(handled_bm));

    while (ml_get_combo_effect(ml, handled_bm, &effect))
        ml->play_effect(ml->dev, ml->private, &effect);

    ml_schedule_timer(ml);
}
ml_get_combo_effect sets the effect by calling ml_combine_effects(), but ml_combine_effects() simply does not copy replay.length to the ff_effect struct which is passed to our foo_ff_play (at least not if the effect type is FF_RUMBLE): https://elixir.free-electrons.com/linux/v4.4/source/drivers/input/ff-memless.c#L286
That's why we cannot read out the ff_replay data in our foo_ff_play function.
Okay, replay is empty; how can we determine how long we have to play the effect (e.g. FF_RUMBLE) then?
It looks like the replay structure is something we do not even need to care about. Yes, fftest sets the length and then uploads the effect to the driver, but if we take a look at ml_ff_upload (https://elixir.free-electrons.com/linux/v4.4/source/drivers/input/ff-memless.c#L481), we can see the following:
if (test_bit(FF_EFFECT_STARTED, &state->flags)) {
    __clear_bit(FF_EFFECT_PLAYING, &state->flags);
    state->play_at = jiffies +
        msecs_to_jiffies(state->effect->replay.delay);
    state->stop_at = state->play_at +
        msecs_to_jiffies(state->effect->replay.length);
    state->adj_at = state->play_at;
    ml_schedule_timer(ml);
}
That means that the duration is already handled by the input subsystem: it starts the effect and also stops it as needed.
Furthermore we can see at https://elixir.free-electrons.com/linux/v4.4/source/include/uapi/linux/input.h#L279 that
/*
* All duration values are expressed in ms. Values above 32767 ms (0x7fff)
* should not be used and have unspecified results.
*/
That means that we just have to make our effect play for at least 32767 ms. Everything else (stopping the effect earlier) is up to the scheduler, which is not our part :D
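So a minimal foo_ff_play only has to react to the magnitudes it is handed; the memless core calls it again with zeroed magnitudes when the effect is due to stop. A sketch, assuming a hypothetical foo_set_rumble() helper that programs the actual hardware:

static int foo_ff_play(struct input_dev *dev, void *data, struct ff_effect *effect)
{
    u16 strong = effect->u.rumble.strong_magnitude;
    u16 weak = effect->u.rumble.weak_magnitude;

    /* foo_set_rumble() is hypothetical: it would send the magnitudes
     * to the gamepad; zero magnitudes mean "stop rumbling". */
    foo_set_rumble(dev, strong, weak);
    return 0;
}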
Requirement:
For my tiny graphics engine, I need an array of all objects to draw. For performance reasons this array needs to be sorted on the attributes. In short:
Store a lot of attributes per struct, add the struct to an array of structs
Efficiently sort the array
Walk over the array and perform operations (modesetting and drawing) depending on the attributes
Approach: bitfields in a union (i.e.: let the compiler do the masking and shifting for me)
I thought I had an elegant plan to accomplish this, based on this article: http://realtimecollisiondetection.net/blog/?p=86. The idea is as follows: each attribute is a bitfield, which can be read and written (step 1). After writing, the sorting procedure looks at the bitfield struct as an integer and sorts on it (step 2). Afterwards (step 3), the bitfields are read again.
Sometimes code says more than a thousand words; a high-level view:
union key {
    /* useful for accessing */
    struct {
        unsigned int some_attr : 2;
        unsigned int another_attr : 3;
        /* ... */
    } bitrep;
    /* useful for sorting */
    uint64_t intrep;
};
I would just make sure that the bit-representation was as large as the integer representation (64 bits in this case). My first approach went like this:
union key {
    /* useful for accessing */
    struct {
        /* generic part: 11 bits */
        unsigned int layer : 2;
        unsigned int viewport : 3;
        unsigned int viewportLayer : 3;
        unsigned int translucency : 2;
        unsigned int type : 1;
        /* depends on type-bit: 64 - 11 bits = 53 bits */
        union {
            struct {
                unsigned int sequence : 8;
                unsigned int id : 32;
                unsigned int padding : 13;
            } cmd;
            struct {
                unsigned int depth : 24;
                unsigned int material : 29;
            } normal;
        };
    };
    /* useful for sorting */
    uint64_t intrep;
};
Note that in this case, there is a decision bitfield called type. Based on that, either the cmd struct or the normal struct gets filled in, just like in the mentioned article. However, this failed horribly: with clang 3.3 on OS X 10.9 (x86 MacBook Pro), the key union is 16 bytes, while it should be 8.
Unable to coerce clang into packing the struct better, I took another approach, based on some other Stack Overflow answers, using the preprocessor to avoid having to repeat myself:
/* 2 + 3 + 3 + 2 + 1 + 5 = 16 bits */
#define GENERIC_FIELDS \
    unsigned int layer : 2; \
    unsigned int viewport : 3; \
    unsigned int viewportLayer : 3; \
    unsigned int translucency : 2; \
    unsigned int type : 1; \
    unsigned int : 5;

/* 8 + 32 + 8 = 48 bits */
#define COMMAND_FIELDS \
    unsigned int sequence : 8; \
    unsigned int id : 32; \
    unsigned int : 8;

/* 24 + 24 = 48 bits */
#define MODEL_FIELDS \
    unsigned int depth : 24; \
    unsigned int material : 24;
struct generic {
    /* 16 bits */
    GENERIC_FIELDS
};

struct command {
    /* 16 bits */
    GENERIC_FIELDS
    /* 48 bits */
    COMMAND_FIELDS
} __attribute__((packed));

struct model {
    /* 16 bits */
    GENERIC_FIELDS
    /* 48 bits */
    MODEL_FIELDS
} __attribute__((packed));

union alkey {
    struct generic gen;
    struct command cmd;
    struct model mod;
    uint64_t intrep;
};
Without __attribute__((packed)), the command and model structs are 12 bytes. But with __attribute__((packed)), they are 8 bytes, exactly what I wanted! So it would seem that I have found my solution. However, my small experience with bitfields has taught me to be leery.
My questions are:
Can I get this to be cleaner (i.e.: more like my first big union-within-struct-within-union) and still keep it 8 bytes for the key, for fast sorting?
Is there a better way to accomplish this?
Is this safe? Will it fail on x86/ARM? (Really exotic architectures are not much of a concern; I'm targeting the two most prevalent ones.) What about setting a bitfield and then finding that an adjacent one has already been written to? Will different compilers vary wildly on this?
What issues can I expect from different compilers? Currently I'm just aiming for clang 3.3+ and gcc 4.9+ with -std=c11. However, it would be quite nice if I could use MSVC as well in the future.
Related questions and webpages I've looked up:
Variable-sized bitfields with aliasing
Unions within unions
For those (like me) scratching their heads about what happens with bitfields that are not byte-aligned, and about endianness, look no further: http://mjfrazer.org/mjfrazer/bitfields/
Sadly, no answer got me all the way there.
EDIT: While experimenting (setting some values and reading the integer representation), I noticed something that I had forgotten about: endianness. This opens up another can of worms. Is it even possible to do what I want using bitfields, or will I have to go for bit-shifting operations?
The layout of bitfields is highly implementation (= compiler) dependent. In essence, compilers are free to place consecutive bitfields in the same byte/word if they see fit, or not. Thus, without extensions like the packed attribute that you mention, you can never be sure that your bitfields are squeezed into one word.
Then, if the bitfields are not squeezed into one word, or if you have just some spare bits that you don't use, you may be in even more trouble. These so-called padding bits can have arbitrary values, so your sorting idea could never work in a portable setting.
For all these reasons, bitfields are relatively rarely used in real code. What you see more often is the use of macros for the bits of your uint64_t that you need. For each of the bitfields that you have now, you'd need two macros, one to extract the bits and one to set them. Such code would then be portable without problems to all platforms that have a C99/C11 compiler.
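A sketch of that macro approach (the field name and bit positions here are made up for illustration, mirroring the 3-bit viewport field above): one macro to extract the bits and one to set them, while the key itself stays a plain uint64_t that can be compared and sorted directly:

#include <stdint.h>

#define VIEWPORT_SHIFT 2
#define VIEWPORT_MASK  (UINT64_C(0x7) << VIEWPORT_SHIFT)

/* extract the 3 viewport bits from the key */
#define GET_VIEWPORT(key)      ((unsigned)(((key) >> VIEWPORT_SHIFT) & 0x7))
/* return a copy of the key with the viewport bits replaced by val */
#define SET_VIEWPORT(key, val) (((key) & ~VIEWPORT_MASK) | \
                                (((uint64_t)(val) << VIEWPORT_SHIFT) & VIEWPORT_MASK))

Usage is then key = SET_VIEWPORT(key, 5); and the bit layout is identical on every platform, regardless of how the compiler would have laid out bitfields.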
Minor point:
In the declaration of a union it is better to put the basic integer field first. The default initializer for a union uses the first field, so this would then ensure that your union is initialized to all-bits-zero by such an initializer. An initializer of the struct would only guarantee that the individual fields are set to 0; the padding bits, if any, would be unspecified.
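A minimal illustration of that point, with the union from the question reordered (a sketch):

union key {
    uint64_t intrep;            /* first member: used by default initialization */
    struct {
        unsigned int some_attr : 2;
        unsigned int another_attr : 3;
    } bitrep;
};

union key k = { 0 };            /* initializes intrep, hence all 64 bits, to zero */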
Here is an excerpt from ntdddisk.h
typedef struct _DISK_GEOMETRY_EX {
    DISK_GEOMETRY Geometry;  // Standard disk geometry: may be faked by driver.
    LARGE_INTEGER DiskSize;  // Must always be correct
    UCHAR Data[1];           // Partition, Detect info
} DISK_GEOMETRY_EX, *PDISK_GEOMETRY_EX;
What is the point of UCHAR Data[1];? Why not just UCHAR Data; ?
And there are a lot of structures in the DDK which have arrays of one element in their declarations.
Thanks, that's clear now. The one thing that is still not clear is the implementation of offsetof.
It's defined as
#ifdef _WIN64
#define offsetof(s,m) (size_t)( (ptrdiff_t)&(((s *)0)->m) )
#else
#define offsetof(s,m) (size_t)&(((s *)0)->m)
#endif
How does this work:
((s *)0)->m ???
This
(size_t)&((DISK_GEOMETRY_EX *)0)->Data
is like
sizeof(DISK_GEOMETRY) + sizeof(LARGE_INTEGER);
But there are two additional questions:
1) What type is this? And why should we use & on it?
((DISK_GEOMETRY_EX *)0)->Data
2) ((DISK_GEOMETRY_EX *)0)
This gives me 00000000. Is it converting 0 to an address, i.e., interpreting it as an address?
Very common in the winapi as well, these are variable-length structures. The array is always the last element in the structure, and the structure always includes a field that indicates the actual array size. A bitmap for example is declared that way:
typedef struct tagBITMAPINFO {
    BITMAPINFOHEADER bmiHeader;
    RGBQUAD bmiColors[1];
} BITMAPINFO, FAR *LPBITMAPINFO, *PBITMAPINFO;
The color table has a variable number of entries: 2 for a monochrome bitmap, 16 for a 4bpp bitmap, and 256 for an 8bpp bitmap. Since the actual length of the structure varies, you cannot declare a variable of that type; the compiler won't reserve enough space for it. So you always need the free store to allocate it, using code like this:
#include <stddef.h> // for offsetof() macro
....
size_t len = offsetof(BITMAPINFO, bmiColors) + 256 * sizeof(RGBQUAD);
BITMAPINFO* bmp = (BITMAPINFO*)malloc(len);
bmp->bmiHeader.biClrUsed = 256;
// etc...
//...
free(bmp);
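Regarding the offsetof implementation: ((s *)0)->m pretends there is an object of type s at address 0 and designates its member m (so the type of ((DISK_GEOMETRY_EX *)0)->Data is UCHAR[1]). The & takes the member's address without dereferencing anything, and because the pretend object starts at address 0, that address is numerically the member's offset from the start of the struct. A conceptual sketch (not valid standalone C, it merely spells out what the macro expands to):

DISK_GEOMETRY_EX *p = (DISK_GEOMETRY_EX *)0;   /* pretend a struct lives at address 0 */
size_t off = (size_t)&p->Data;                 /* 0 + offset of Data == offset of Data */
/* off == sizeof(DISK_GEOMETRY) + sizeof(LARGE_INTEGER), plus any padding */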
I am trying to pass a struct from user space to kernel space. I have been trying for many hours and it isn't working. Here is what I have done so far.
int device_ioctl(struct inode *inode, struct file *filep, unsigned int cmd, unsigned long arg){
    int ret, SIZE;

    switch(cmd){
    case PASS_STRUCT_ARRAY_SIZE:
        SIZE = (int *)arg;
        if(ret < 0){
            printk("Error in PASS_STRUCT_ARRAY_SIZE\n");
            return -1;
        }
        printk("Struct Array Size : %d\n", SIZE);
        break;
    case PASS_STRUCT:
        struct mesg{
            int pIDs[SIZE];
            int niceVal;
        };
        struct mesg data;
        ret = copy_from_user(&data, arg, sizeof(*data));
        if(ret < 0){
            printk("PASS_STRUCT\n");
            return -1;
        }
        printk("Message PASS_STRUCT : %d\n", data.niceVal);
        break;
    default:
        return -ENOTTY;
    }
    return 0;
}
I have trouble defining the struct. What is the correct way to define it? I want to have int pIDs[SIZE]. Will int *pIDs do it (in user space it is defined as pIDs[SIZE])?
EDIT:
With the above change I get this error: error: expected expression before 'struct'. Any ideas?
There are two variants of the structure in your question.
struct mesg1{
    int *pIDs;
    int niceVal;
};

struct mesg2{
    int pIDs[SIZE];
    int niceVal;
};
They are different; in the case of mesg1 you have a pointer to an int array (which is outside the struct). In the other case (mesg2) the int array is inside the struct.
If your SIZE is fixed (in the API of your module; the same value used in user and kernel space), you can use the second variant (mesg2).
To use the first variant of the structure (mesg1) you may add a size field to the structure itself, like:
struct mesg1{
    int pIDs_size;
    int *pIDs;
    int niceVal;
};
and fill it with the count of ints pointed to by pIDs.
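A sketch of how the kernel side could then consume this variant (the two-step copy and the error codes are the usual kernel idioms; kmalloc_array() is assumed to be available):

struct mesg1 m;
int *pids;

/* first copy: the fixed-size struct, including the userspace pointer */
if (copy_from_user(&m, (void __user *)arg, sizeof(m)))
    return -EFAULT;

pids = kmalloc_array(m.pIDs_size, sizeof(int), GFP_KERNEL);
if (!pids)
    return -ENOMEM;

/* second copy: the array that the userspace pointer refers to */
if (copy_from_user(pids, (void __user *)m.pIDs, m.pIDs_size * sizeof(int))) {
    kfree(pids);
    return -EFAULT;
}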
PS: And please, never use structures with variable-sized arrays in the middle of the struct (aka VLAIS). This is a proprietary, weird, buggy and undocumented extension to the C language by the GCC compiler. Only the last field of a struct can be an array with variable size according to the international C standard. Some examples here: 1 2
PPS:
You can declare your struct with a flexible array member (if there is only a single array with variable size):
struct mesg2{
    int niceVal;
    int pIDs[];
};
but you should be careful when allocating memory for such a struct, as shown in the sketch below.
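For example, a sketch of such an allocation on the kernel side (n here is a stand-in for the number of PIDs to store):

struct mesg2 *m = kmalloc(sizeof(*m) + n * sizeof(int), GFP_KERNEL);
if (!m)
    return -ENOMEM;
/* m->pIDs[0] .. m->pIDs[n-1] are now usable */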
I am having a hard time manipulating strings while writing a module for Linux. My problem is that I have an int Array[10] with different values in it. I need to produce a string to send to the buffer in my my_read procedure. If my array is {0,1,112,20,4,0,0,0,0,0}
then my output should be:
0:(0)
1:-(1)
2:-------------------------------------------------------------------------------------------------------(112)
3:--------------------(20)
4:----(4)
5:(0)
6:(0)
7:(0)
8:(0)
9:(0)
When I try to place the above strings in char[] arrays, somehow weird characters end up there.
Here is the code:
int my_read (char *page, char **start, off_t off, int count, int *eof, void *data)
{
    int len;

    if (off > 0){
        *eof = 1;
        return 0;
    }

    /* get process tree */
    int task_dep = 0;   /* depth of a task from INIT */
    get_task_tree(&init_task, task_dep);

    char tmp[1024];
    char A[ProcPerDepth[0]], B[ProcPerDepth[1]], C[ProcPerDepth[2]],
         D[ProcPerDepth[3]], E[ProcPerDepth[4]], F[ProcPerDepth[5]],
         G[ProcPerDepth[6]], H[ProcPerDepth[7]], I[ProcPerDepth[8]],
         J[ProcPerDepth[9]];
    int i = 0;

    for (i = 0; i < 1024; i++){ tmp[i] = '\0'; }
    memset(A, '\0', sizeof(A)); memset(B, '\0', sizeof(B)); memset(C, '\0', sizeof(C));
    memset(D, '\0', sizeof(D)); memset(E, '\0', sizeof(E)); memset(F, '\0', sizeof(F));
    memset(G, '\0', sizeof(G)); memset(H, '\0', sizeof(H)); memset(I, '\0', sizeof(I)); memset(J, '\0', sizeof(J));
    printk("A:%s\nB:%s\nC:%s\nD:%s\nE:%s\nF:%s\nG:%s\nH:%s\nI:%s\nJ:%s\n", A, B, C, D, E, F, G, H, I, J);

    memset(A, '-', sizeof(A));
    memset(B, '-', sizeof(B));
    memset(C, '-', sizeof(C));
    memset(D, '-', sizeof(D));
    memset(E, '-', sizeof(E));
    memset(F, '-', sizeof(F));
    memset(G, '-', sizeof(G));
    memset(H, '-', sizeof(H));
    memset(I, '-', sizeof(I));
    memset(J, '-', sizeof(J));
    printk("A:%s\nB:%s\nC:%s\nD:%s\nE:%s\nF:%s\nG:%s\nH:%s\nI:%s\nJ:%\n", A, B, C, D, E, F, G, H, I, J);

    len = sprintf(page,
                  "0:%s(%d)\n1:%s(%d)\n2:%s(%d)\n3:%s(%d)\n4:%s(%d)\n5:%s(%d)\n6:%s(%d)\n7:%s(%d)\n8:%s(%d)\n9:%s(%d)\n",
                  A, ProcPerDepth[0], B, ProcPerDepth[1], C, ProcPerDepth[2],
                  D, ProcPerDepth[3], E, ProcPerDepth[4], F, ProcPerDepth[5],
                  G, ProcPerDepth[6], H, ProcPerDepth[7], I, ProcPerDepth[8],
                  J, ProcPerDepth[9]);
    return len;
}
It worked out with this:
char s[500];
memset(s, '-', 498);
for (i = len = 0; i < 10; ++i){
    len += sprintf(page + len, "%d:%.*s(%d)\n", i, ProcPerDepth[i], s, ProcPerDepth[i]);
}
I wonder if there is an easy flag to repeat a character in sprintf. Thanks.
Here are some issues:
You have entirely filled the A, B, C, ... arrays with characters. Then you pass them to an I/O routine that expects null-terminated strings. Because your strings are not null-terminated, printk() will keep printing whatever is in stack memory after your object until it finds a null by luck.
Multi-threaded kernels like Linux have strict and relatively small constraints on stack allocations. All frames in the kernel call chain must fit into a specific size, or something will be overwritten. You may not get any detection of this error, just some kind of downstream crash as memory corruption leads to a panic or a wedge. Allocating large and variable arrays on a kernel stack is just not a good idea.
If you are going to write the tmp[] array and properly nul-terminate it, there is no reason to also initialize it. But if you were going to initialize it, you could do so with compiler-generated code by just writing char tmp[1024] = { 0 }; (a partial initialization of an aggregate requires, per C99, initialization of the entire aggregate). A similar observation applies to the other arrays.
How about getting rid of most of those arrays and most of that code and just doing something along the lines of:
for (i = j = 0; i < n; ++i)
    j += sprintf(page + j, "...", ...);
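Putting the pieces together, a minimal sketch of the whole routine (assuming, as in the question, that ProcPerDepth[] holds the ten per-depth counts and that each count stays below the size of the dash buffer):

static char dashes[512];
int i, len = 0;

memset(dashes, '-', sizeof(dashes) - 1);
dashes[sizeof(dashes) - 1] = '\0';   /* one properly terminated dash string */

for (i = 0; i < 10; ++i)
    /* "%.*s" prints at most ProcPerDepth[i] characters of dashes */
    len += sprintf(page + len, "%d:%.*s(%d)\n",
                   i, ProcPerDepth[i], dashes, ProcPerDepth[i]);
return len;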