What's the exact meaning of uint8_t in TinyOS?

Hi everyone! I'm trying to become familiar with TinyOS.
I'd like to know the difference between uint8_t and uint16_t.
Thank you in advance :-)

Just for the sake of thoroughness:
Data types come in many shapes and sizes. The two you are referring to are an unsigned 8-bit integer and an unsigned 16-bit integer; in TinyOS, as in any modern C code, these are the standard fixed-width types from stdint.h.
An integer is a whole number that can be positive or negative; an unsigned integer, however, can only be non-negative, as it does not set aside a bit for a sign. "8-bit" and "16-bit" refer to the amount of space the integer occupies in memory: an unsigned 8-bit integer can hold values from 0 to 255, while an unsigned 16-bit integer can hold values from 0 to 65,535. (Side note: if you are familiar with networking, you may notice that 65,535 is the largest port number possible. That is because a port number is an unsigned 16-bit integer.)
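A quick illustration in plain C (nothing TinyOS-specific; the variable names are made up):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t  small = 255;   /* largest value an unsigned 8-bit integer can hold  */
    uint16_t big   = 65535; /* largest value an unsigned 16-bit integer can hold */

    small++; /* unsigned arithmetic wraps around, so 255 + 1 becomes 0 */
    printf("small = %u, big = %u\n", (unsigned)small, (unsigned)big);
    /* prints: small = 0, big = 65535 */
    return 0;
}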
Hope this helps.

Related

High precision floating point numbers in Haskell?

I know Haskell has native data types which allow you to have really big integers so things like
>> let x = 131242358045284502395482305
>> x
131242358045284502395482305
work as expected. I was wondering if there was a similar "large precision float" native structure I could be using, so things like
>> let x = 5.0000000000000000000000001
>> x
5.0000000000000000000000001
could be possible. If I enter this in Haskell, it truncates down to 5 if I go beyond 15 decimal places (double precision).
Depending on exactly what you are looking for:
Float and Double - pretty much what you know and "love" from Floats and Doubles in all other languages.
Rational, which is a Ratio of Integers.
FixedPoint - this package provides arbitrary-sized fixed-point values. For example, if you want a number that is represented by 64 integral bits and 64 fractional bits, you can use FixedPoint6464. If you want a number that is 1024 integral bits and 8 fractional bits, then use $(mkFixedPoint 1024 8) to generate the type FixedPoint1024_8.
EDIT: And yes, I just learned about the numbers package mentioned above - very cool.
Haskell does not have high-precision floating-point numbers natively.
For a package/module/library for this purpose, I'd refer you to this answer to another post. There's also an example there showing how to use the package, which is called numbers.
If you need high-precision and fast floating-point calculations, you may need to use the FFI and long doubles, as a native Haskell type is not implemented yet (see https://ghc.haskell.org/trac/ghc/ticket/3353).
I believe the standard package for arbitrary precision floating point numbers is now https://hackage.haskell.org/package/scientific

What do these cipher algorithm symbols mean?

Please take a look at the following image…
There are two symbols in this image.
I learned from Wikipedia's “List of logic symbols” the symbol “⊕” stands for “XOR”, but what does that cross in square symbol mean? Does that mean “XOR” too?
XOR
Means: combine the two inputs using XOR. So, this symbol indeed can be read as “⊕”.
Addition
Means: combine the two inputs using addition. This symbol indeed can be read as “+”.
Nota Bene
In the image you're asking about, it is noted that the S-boxes take an 8-bit (= unsigned char) input and return 32 bits (= unsigned int)… which means the cipher expects you to do the addition and XOR on unsigned integers.
The plus in a box is addition mod 2^32 (actually, I don't remember for sure -- it could be mod 2^32 - 1, but it's addition in any case).
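In C, on unsigned 32-bit operands, the two operations look like this (a small sketch; the input values are made up):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t a = 0xFFFFFFFFu; /* example inputs */
    uint32_t b = 0x00000002u;

    uint32_t x = a ^ b; /* circled plus: bitwise XOR */
    uint32_t s = a + b; /* boxed plus: addition mod 2^32 -- unsigned overflow
                           simply wraps, so 0xFFFFFFFF + 2 = 1 */

    printf("xor = %08" PRIX32 ", sum = %08" PRIX32 "\n", x, s);
    /* prints: xor = FFFFFFFD, sum = 00000001 */
    return 0;
}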

Is there a difference between datatypes on different bit-size OSes?

I have a C program that I know works on 32-bit systems. On 64-bit systems (at least mine) it works to a point and then stops. Reading some forums, it seems the program may not be 64-bit safe? I assume it has to do with differences in data types between 32-bit and 64-bit systems.
Is a char the same on both? What about int or long or their unsigned variants? Is there any other way a 32-bit program wouldn't be 64-bit safe? If I wanted to verify the application is 64-bit safe, what steps should I take?
Regular data types in C have minimum ranges of values rather than specific bit widths. For example, a short has to be able to represent, at a minimum, -32767 through 32767 inclusive.
So, yes, if your code depends on values wrapping around at 32768, it's unlikely to behave well if your short is some big honking 128-bit behemoth.
If you want specific-width data types, look into stdint.h for things like int64_t and so on. There is a wide variety to choose from: exact widths, "at-least" widths, and so on. The standard also mandates two's complement for the exact-width types, unlike the "regular" integral types:
integer types having certain exact widths;
integer types having at least certain specified widths;
fastest integer types having at least certain specified widths;
integer types wide enough to hold pointers to objects;
integer types having greatest width.
For example, from C11 7.20.1.1 Exact-width integer types:
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes such a signed integer type with a width of exactly 8 bits.
Provided you have followed the rules (things like not casting pointers to integers), your code should compile and run on any implementation, and any architecture.
If it doesn't, you'll just have to start debugging, then post the detailed information and the code that seems to be causing the problem on a forum site dedicated to such things. Now where have I seen one of those recently? :-)
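For reference, a short sketch showing one type from each of those five stdint.h families (C99 and later):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int64_t       exact     = INT64_C(-1234567890123); /* exactly 64 bits, two's complement     */
    int_least32_t least     = 42;                      /* at least 32 bits                      */
    int_fast16_t  fast      = 7;                       /* fastest type of at least 16 bits      */
    intptr_t      ptr_sized = (intptr_t)&exact;        /* wide enough to hold an object pointer */
    intmax_t      widest    = INTMAX_MAX;              /* greatest-width integer type           */

    printf("exact = %" PRId64 ", least = %" PRIdLEAST32 ", fast = %" PRIdFAST16 "\n",
           exact, least, fast);
    printf("widest max = %jd, pointer-sized value = %jd\n",
           (intmax_t)widest, (intmax_t)ptr_sized);
    /* sizeof(long), by contrast, is exactly the kind of thing that varies:
       typically 4 bytes on 32-bit systems and 64-bit Windows, 8 on 64-bit
       Linux and macOS. */
    printf("sizeof(long) on this machine: %zu bytes\n", sizeof(long));
    return 0;
}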

What precisely is the "silence" value in the SDL audio API?

In SDL, when you set up your audio output device, you and SDL have to agree on an audio format - e.g. 44.1 kHz stereo 16-bit signed little-endian. That's fine. But along with the final agreed format, you also get a computed "silence" value which doesn't seem well documented.
A silent sound sample obviously consists of the same sample value repeated over and over again, and you want that to be at the "zero" level. In a sense any constant value will do, but you have to agree on a value (so you don't get pops when switching to a different sound), and in a sane world you want to choose a value bang in the centre of your sample-value range.
So if you happen to use an unsigned format with a sample-value range of 0..whatever, your silence value will be (whatever/2).
EDIT - inserted "unsigned" below to avoid confusion.
That's all fine. But the silence value you are given is an unsigned 8-bit integer. That doesn't work very well if you want unsigned 16-bit samples - the logical silence value of 0x8000 requires two different byte values, and it requires them to be in the correct endian order.
So the silence value you get from SDL doesn't seem to make much sense. You can't use it to wipe your buffers, for instance, without dealing with extra complications and making inferences which pretty much make the precalculated silence value pointless anyway.
Which means, of course, that I've misunderstood the point.
So - if this isn't how the silence value is meant to be used, how should it be used?
I have no evidence to back this up, but I think the assumption here is that "silence" should be interpreted as "silence for the common soundcard formats". Those being:
Unsigned 8-bit integers
Signed 16-bit integers
Signed 32-bit integers (for 24-bit audio data)
Normalized 32-bit floating point
Normalized 64-bit floating point.
In all the cases except unsigned 8-bit, zero (0) is the "zero amplitude" value, so a silent buffer is all zero bytes. The returned unsigned 8-bit integer therefore covers every "zero amplitude" byte value you'd need for these formats: 0x80 for unsigned 8-bit, 0 for everything else.
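For what it's worth, here's a minimal sketch (SDL 1.2-style API) of how the value is typically used: fill the callback's buffer with the negotiated spec's silence byte, which is correct whether the device ended up with AUDIO_U8 (silence 0x80) or a signed format (silence 0):

#include <SDL.h>
#include <string.h>

/* Audio callback that just plays silence, using the negotiated silence byte. */
static void audio_callback(void *userdata, Uint8 *stream, int len)
{
    SDL_AudioSpec *spec = (SDL_AudioSpec *)userdata;
    memset(stream, spec->silence, len);
}

int main(int argc, char *argv[])
{
    SDL_AudioSpec desired, obtained;

    SDL_Init(SDL_INIT_AUDIO);
    memset(&desired, 0, sizeof desired);
    desired.freq     = 44100;
    desired.format   = AUDIO_S16LSB; /* 16-bit signed little-endian */
    desired.channels = 2;
    desired.samples  = 4096;
    desired.callback = audio_callback;
    desired.userdata = &obtained;    /* let the callback see the agreed format */

    if (SDL_OpenAudio(&desired, &obtained) == 0) {
        SDL_PauseAudio(0); /* start playback (of silence) */
        SDL_Delay(1000);
        SDL_CloseAudio();
    }
    SDL_Quit();
    return 0;
}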

Multiplying 23 bit datatypes in a system with no long long

I am trying to implement floating point operations in a microcontroller and so far I have had ample success.
The problem lies in the multiplication. The way I do it on my computer works fine:
unsigned long long gig,mm1,mm2;
unsigned long m,m1,m2;
mm1 = f1.float_parts.mantissa;
mm2 = f2.float_parts.mantissa;
m1 = f1.float_parts.mantissa;
m2 = f2.float_parts.mantissa;
gig = mm1*mm2; //this works fine - I get all the bits I need since they are all long long - but it won't work on the MCU
gig = m1*m2;   //this does not work: to be precise, it gives only the 32 least significant bits, but it works on the MCU
So you can see that my problem is that the microcontroller will throw an undefined reference to __muldi3 if I try gig = mm1*mm2 there.
And if I try it with the smaller data types, it keeps only the least significant bits, which I don't want. I need the 23 most significant bits of the product.
Does anyone have any ideas as to how I can do this?
Apologies for the short answer; I hope that someone else will take the time to write a fuller explanation, but basically you do exactly what you do when you multiply two big numbers by hand on paper! It's just that instead of working in base 10, you work in base 256. That is, treat your numbers as byte vectors, and do with each byte what you do to a digit when you "hand multiply".
The comments in the FreeBSD implementation of __muldi3() have a good explanation of the required procedure, see muldi3.c. If you want to go straight to the source (always a good idea!), according to the comments this code was based on an algorithm described in Knuth's The Art of Computer Programming vol. 2 (2nd ed), section 4.3.3, p. 278. (N.B. the link is for the 3rd edition.)
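Here's a minimal sketch of that idea in C, using 16-bit halves rather than single bytes (i.e. base 2^16), so a 32x32 multiply needs only 32-bit arithmetic; the function name is made up:

#include <stdint.h>

/* Multiply a and b, returning the high 32 bits of the product in *hi and
   the low 32 bits in *lo, using only 32-bit operations (schoolbook
   multiplication in base 2^16). */
void mul32x32(uint32_t a, uint32_t b, uint32_t *hi, uint32_t *lo)
{
    uint32_t a_lo = a & 0xFFFFu, a_hi = a >> 16;
    uint32_t b_lo = b & 0xFFFFu, b_hi = b >> 16;

    /* Four partial products; each is at most 0xFFFF * 0xFFFF, so each fits
       comfortably in 32 bits. */
    uint32_t p0 = a_lo * b_lo;
    uint32_t p1 = a_lo * b_hi;
    uint32_t p2 = a_hi * b_lo;
    uint32_t p3 = a_hi * b_hi;

    /* Sum the middle column and keep the carry for the top word. */
    uint32_t mid = (p0 >> 16) + (p1 & 0xFFFFu) + (p2 & 0xFFFFu);

    *lo = (p0 & 0xFFFFu) | (mid << 16);
    *hi = p3 + (p1 >> 16) + (p2 >> 16) + (mid >> 16);
}

For two 24-bit mantissas the full product is at most 48 bits, so the 23 most significant bits you're after come out of *hi and the top of *lo once you've normalized.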
Back on the Intel 8088 (the original PC CPU, and the last CPU I wrote assembly code for), when you multiplied two 16-bit numbers (32 bits? whoow) the CPU would return two 16-bit numbers in two different registers - one with the 16 MSBs and one with the LSBs.
You should check the hardware capabilities of your microcontroller; maybe it has a similar setup (obviously you'll need to code this in assembly if it does).
Otherwise you'll have to implement multiplication on your own.
