How do I cast a struct to a char pointer?

Effectively, I want to make an SPI interface where I'll be able to change bits 18-22 and bits 1-16 separately (I want a one-hot address on bits 1-16 and a binary-coded decimal on bits 18-22). Here's how I intend to implement the struct:
struct spi_out
{
    unsigned int BCDADDR : 4;
    unsigned int OHADDR : 16;
    // Some other SPI bit fields making up the rest of the 3 bytes
};
So here's my problem: I want to be able to access the BCD address and encode it directly, e.g. spi_out.BCDADDR = 5; // to address the 6th cell. But I also want to use an operator function to format the bits the way I need them, since I need the variables in the order I put them in, and I can't figure out a simple way of doing this. I wouldn't want to have to put a LUT inside an operator function, but I need to be able to cast the string of bits to a char pointer so the 3 bytes of information from the function can be fed to the hardware abstraction function HAL_SPI_Transmit(). I know the data is kept as 3 bytes, so I don't see why I can't access them as such. >:/

Okay, so I have come to appreciate that my question was worded in an annoyingly confusing way, but I have actually found an answer to my own question: use the union keyword. With a union I can define a type that can be treated either as a struct, to access the individual bit fields, or as an array of chars. I did not realize this existed, but here is a link to the Stack Overflow question where I found my answer:
accessing bit fields in structure
Sorry, guys.
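For anyone landing here later, a minimal sketch of that union approach, with illustrative field names and sizes; note that bit-field ordering and padding are implementation-defined, so the resulting byte layout should be checked against what the hardware expects. The HAL_SPI_Transmit() call is left commented out because the SPI handle (hspi1 here) is board-specific:

#include <stdint.h>

union spi_frame {
    struct {
        unsigned int BCDADDR : 4;   // binary-coded decimal address
        unsigned int OHADDR  : 16;  // one-hot address
        unsigned int SPARE   : 4;   // remaining bits of the 3 bytes
    } fields;
    uint8_t bytes[3];               // same storage viewed as raw bytes
};

void send_frame(void)
{
    union spi_frame frame = {0};
    frame.fields.BCDADDR = 5;        // select the 6th cell
    frame.fields.OHADDR  = 1u << 3;  // example one-hot value
    // HAL_SPI_Transmit(&hspi1, frame.bytes, sizeof frame.bytes, 100);
}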

Related

What is happening when I declare a string as: char aString[SIZE*2+1];?

I am studying an example of a program that takes a user input, stores it in a string, and then prints two occurrences of the corresponding character in upper or lower case. For example, the input ABCdef would print aabbccDDEEFF.
I'm a little bit confused about the way the new string is declared; can anybody help explain what is happening?
char string[MAX_STRING_SIZE+1]; // MAX_STRING_SIZE is defined as 500 and +1 is for the NULL char to terminate string
char stringNew[MAX_STRING_SIZE*2+1]; // Here I do not understand *2+1
EDIT: Just after I posted this question I figured out the answer and realised it may not be useful to the Stack Overflow community, but as the question had already been answered it would be rude to delete it.
In the event that another member is interested in this specific question, I have attempted to make it useful by editing the question title and summarising the answer.
As this program will print 2 occurrences of every character from the user input, the new string needs to be twice the size of the original string. By declaring the new string size with *2 you are simply multiplying the size by 2, thus doubling it.
Very simple in hindsight; I hope this can be of use to somebody else.
No pointers are involved in that declaration. You're just declaring another array of chars that is twice the size of the first one + the extra byte that'll store \0.
It can't be a pointer because there are no identifiers between the square brackets. After the preprocessor has done its job with the source file, that expression will actually become char stringNew[500*2+1];
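As a rough sketch of the doubling idea (the function name and the exact case-flipping rule are assumptions based on the description above):

#include <ctype.h>
#include <stddef.h>

#define MAX_STRING_SIZE 500

// Each input character produces two case-flipped output characters,
// so the output buffer must hold up to MAX_STRING_SIZE*2 chars plus '\0'.
void double_and_flip(const char *in, char out[MAX_STRING_SIZE * 2 + 1])
{
    size_t j = 0;
    for (size_t i = 0; in[i] != '\0' && i < MAX_STRING_SIZE; i++) {
        unsigned char c = (unsigned char)in[i];
        char flipped = isupper(c) ? (char)tolower(c) : (char)toupper(c);
        out[j++] = flipped;
        out[j++] = flipped;
    }
    out[j] = '\0';
}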

Is there a convenient way to represent x86 instructions in a struct or other language feature?

Rust doesn't have a "bit" data type; however, x86 instructions have fields whose sizes are specified in bits. Instead of using bit-wise operations, is there any data structure that can be directly compiled to the "memory/byte alignment" required by the x86 instruction set or any other binary protocol?
OpCode        1 or 2 bytes
Mod-R/M       0 or 1 byte
    Mod           bits 7,6
    Reg/OpCode    bits 5,4,3
    R/M           bits 2,1,0
SIB           0 or 1 byte
    SS            bits 7,6
    Index         bits 5,4,3
    Base          bits 2,1,0
Displacement  0, 1, 2 or 4 bytes
Immediate     0, 1, 2 or 4 bytes
is there any data structure that can be directly compiled
No, there are no structures that correspond to this:
OpCode    1 or 2 bytes
That is, you cannot have a struct that has a value that is either one or two bytes long. Structures have a fixed size at compile time.
Your main choices are:
Use pretty Rust features like enums and structs. This is likely to not match the bit pattern of the actual instructions.
Make something like struct Instruction([u8; 4]) and implement methods that use bitwise operations. This will allow you to match the bit patterns.
Since you don't want to use bitwise operations and must match the bit representation, I do not believe your problem can currently be solved in the fashion you'd like.
Personally, I'd probably go the enum route and implement methods to parse the raw instructions from a type implementing Read and to write them back out as bytes.
It's also possible you are interested in bitfields, like this C++ example:
struct S {
    unsigned int b : 3;
};
There is no direct support for that in Rust, but a few crates appear to support macros to create them. Perhaps that would be useful.
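To make the bitwise option concrete, here is a rough sketch of decoding the ModR/M fields from the table above. It is written in C only to match the other snippets on this page; in Rust the same shifts and masks would become methods on a newtype like the Instruction([u8; 4]) suggested earlier:

#include <stdint.h>

// Field positions inside the ModR/M byte, per the table in the question:
// Mod = bits 7-6, Reg/Opcode = bits 5-3, R/M = bits 2-0.
static inline uint8_t modrm_mod(uint8_t modrm) { return (modrm >> 6) & 0x3; }
static inline uint8_t modrm_reg(uint8_t modrm) { return (modrm >> 3) & 0x7; }
static inline uint8_t modrm_rm(uint8_t modrm)  { return modrm & 0x7; }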

linux kernel function _copy_to_user, want clear understanding of that

I am using this function to copy some structures from the kernel to user space.
But the problem is that I have to copy three data structures which are part of a bigger data structure. NOTE: the 3 data structures are contiguous in the bigger data structure.
So, in my copy_to_user call I pass the pointer of the 1st data structure and give the length of all 3 data structures. But when I go to user space and print the 1st element of the 2nd data structure, it gives some other value.
So, what am I doing wrong?
As a solution I made 3 copy_to_user calls and, to my surprise, it works fine. It's only a problem when I make a single copy_to_user call.
Please let me know what could be the reason.
Hey guys, thanks for the answer, it was an alignment issue. But going further, if I want to pad an internal structure, how do I do it?
Example:
struct d {
    struct a a;
    struct b b;
    struct c c;   // I want to make this structure a padded one, how to go about it?
};
As mentioned in the comments, this really seems to be an alignment problem. GCC will probably add some padding between the structures a, b and c in struct d. Depending on how you declared the one in userland, this could be the problem. You can force GCC not to generate padding by using __attribute__((packed)) on your structure, but unless the structure maps to hardware registers it's usually a bad idea, as it will lead to worse performance when accessing fields of that structure.
Another possible problem would be if your kernel is 64-bit and your userland program is 32-bit; in that case you need to use fixed-size types to be sure of having the same layout.
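A rough sketch of both suggestions, using the GCC attribute mentioned above (the member names and types here are made up for illustration; kernel code would use the __u32/__u64 typedefs rather than <stdint.h>):

#include <stdint.h>

// Fixed-width types keep the layout identical across 32-bit and 64-bit builds.
struct a { uint32_t id; };

struct b {
    uint16_t flags;
    uint64_t offset;      // without packing, the compiler inserts padding before this
} __attribute__((packed));

struct c { uint32_t len; };

// Packing the outer struct as well keeps a, b and c strictly contiguous,
// matching the "single copy_to_user of all three" expectation in user space.
struct d {
    struct a a;
    struct b b;
    struct c c;
} __attribute__((packed));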

What is the difference between the following code samples, and why is this type casting used?

I am writing the following sizeof macro, and I wanted to know the difference.
#define my_sizeof(type) (char*)(&type+1)-(char*)(&type)
#define my_sizeof(type) (void*)(&type+1)-(void*)(&type)
My 1st question is why the type casting is required. I know that if I don't cast, it always returns 1; I checked this by running it. I want to know the significance (I mean, what it tells the compiler to do).
2ndly, what difference does it make whether I use char* or void*?
Thanks a lot.
For the second question: You cannot do that at all, since there is no pointer arithmetic for void pointers (or for pointers to incomplete types in general, for that matter).
For the first part: By definition, sizeof(char) == 1, so by casting the pointers to char pointers, you obtain the difference in units of 1 rather than in units of sizeof(type) -- in other words, you obtain precisely the value of sizeof(type).
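For illustration, a small sketch using the char* version of the macro (parenthesised a bit more defensively than in the question), showing that the difference in char units matches sizeof:

#include <stdio.h>

#define my_sizeof(var) ((char *)(&(var) + 1) - (char *)&(var))

int main(void)
{
    double d;
    int arr[4];
    // Prints the same values as sizeof: typically 8 and 16.
    printf("%td %td\n", my_sizeof(d), my_sizeof(arr));
    printf("%zu %zu\n", sizeof d, sizeof arr);
    return 0;
}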

How is integer overflow exploitable?

Does anyone have a detailed explanation of how integer overflows can be exploited? I have been reading a lot about the concept, and I understand what it is, and I understand buffer overflows, but I don't understand how one could modify memory reliably, or in a way that modifies application flow, by making an integer larger than its defined memory can hold...
It is definitely exploitable, but depends on the situation of course.
Old versions of SSH had an integer overflow which could be exploited remotely. The exploit caused the SSH daemon to create a hashtable of size zero and overwrite memory when it tried to store some values in there.
More details on the ssh integer overflow: http://www.kb.cert.org/vuls/id/945216
More details on integer overflow: http://projects.webappsec.org/w/page/13246946/Integer%20Overflows
I used APL/370 in the late 60s on an IBM 360/40. APL is a language in which essentially everything is a multidimensional array, and there are amazing operators for manipulating arrays, including reshaping from N dimensions to M dimensions, etc.
Unsurprisingly, an array of N dimensions had index bounds of 1..k, with a different positive k for each axis, and k was legally always less than 2^31 (positive values in a 32-bit signed machine word). Now, an array of N dimensions has a location assigned in memory. Any attempt to access an array slot using an index too large for an axis was checked against the array upper bound by APL. And of course this applied to an array of N dimensions where N == 1.
APL didn't check whether you did something incredibly stupid with the RHO (array reshape) operator. APL only allowed a maximum of 64 dimensions. So you could make an array of 1-64 dimensions, and APL would do it if the array dimensions were all less than 2^31. Or you could try to make an array of 65 dimensions. In this case APL goofed, and surprisingly gave back a 64-dimension array, but failed to check the axis sizes.
(This is in effect where the "integer overflow" occurred.) This meant you could create an array with axis sizes of 2^31 or more... but, being interpreted as signed integers, they were treated as negative numbers.
The right RHO operator incantation applied to such an array could reduce the dimensionality to 1, with an upper bound of, get this, "-1". Call this matrix a "wormhole" (you'll see why in a moment). Such a wormhole array has a place in memory, just like any other array. But all array accesses are checked against the upper bound... and the array bound check turned out to be done by APL with an unsigned compare. So you could access WORMHOLE[1], WORMHOLE[2], ... WORMHOLE[2^32-2] without objection. In effect, you could access the entire machine's memory.
APL also had an array assignment operation, in which you could fill an array with a value.
WORMHOLE[]<-0 thus zeroed all of memory.
I only did this once, as it erased the memory containing my APL workspace, the APL interpreter, and obviously the critical part of APL that enabled timesharing (in those days it wasn't protected from users)... The terminal room went from its normal state of being mechanically very noisy (we had 2741 Selectric APL terminals) to dead silent in about 2 seconds.
Through the glass into the computer room I could see the operator look up, startled, at the lights on the 370 as they all went out. Lots of running around ensued.
While it was funny at the time, I kept my mouth shut.
With some care, one could obviously have tampered with the OS in arbitrary ways.
It depends on how the variable is used. If you never make any security decisions based on integers computed from attacker-controlled input (where an adversary could provoke an overflow), then I can't think of how you would get into trouble (but this kind of stuff can be subtle).
Then again, I have seen plenty of code like this that doesn't validate user input (although this example is contrived):
int pricePerWidgetInCents = 3199;
int numberOfWidgetsToBuy = int.Parse(/* some user input string */);
int totalCostOfWidgetsSoldInCents = pricePerWidgetInCents * numberOfWidgetsToBuy; // KA-BOOM!
// potentially much later
int orderSubtotal = whatever + totalCostOfWidgetsSoldInCents;
Everything is hunky-dory until the day you sell 671,299 widgets for -$21,474,817.95. Boss might be upset.
A common case would be code that prevents a buffer overflow by asking for the number of inputs that will be provided, and then trying to enforce that limit. Consider a situation where I claim to be providing 2^30+10 integers. The receiving system allocates a buffer of 4*(2^30+10) bytes, which wraps around 2^32 to just 40 bytes (!). Since the memory allocation succeeded, I'm allowed to continue. The input check won't stop me when I send my 11th input, since 11 < 2^30+10. Yet I will overflow the actually allocated buffer.
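A rough C sketch of that scenario (assuming 32-bit size arithmetic and 4-byte ints):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    uint32_t claimed_count = (UINT32_C(1) << 30) + 10;   // attacker-supplied count
    uint32_t alloc_size = claimed_count * 4;             // wraps around 2^32 to 40
    printf("allocating %" PRIu32 " bytes for %" PRIu32 " ints\n",
           alloc_size, claimed_count);
    int *buf = malloc(alloc_size);                       // tiny allocation succeeds
    // Later writes buf[i] for i < claimed_count run far past the real allocation.
    free(buf);
    return 0;
}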
I just wanted to sum up everything I have found out about my original question.
The reason things were confusing to me was that I know how buffer overflows work, and I can understand how you can easily exploit them. An integer overflow is a different case: you can't exploit the integer overflow itself to inject arbitrary code and force a change in the flow of an application.
However, it is possible to overflow an integer which is used, for example, to index an array, and thereby access arbitrary parts of memory. From there, it could be possible to use that mis-indexed array to overwrite memory and alter the application's execution to your malicious intent.
Hope this helps.
