File handling in TinyOS or TOSSIM - sensors

I need to read data from a text file in a TinyOS program (a nesC file). I searched a lot on the Internet but couldn't find a way.
Is there any way?

I don't know about TOSSIM, but with a real sensor board it's possible to do so.
What you could do is write a program in Java, C#, etc. that reads the file and passes the acquired data to the serial/USB port as a SERIAL PACKET. But you are limited to a maximum of 255 bytes per packet.
So you should make a simple protocol that takes care of data chunks.
Of course, you need to know how to create a serial packet that the sensor board can read.
For that you need to read TEP 113. But the short story is that a serial packet consists of:
HEADER + CONTENT + FOOTER
the header contains the protocol byte, destination and source addresses, etc.
the content is your message_t struct
the footer has the CRC and some other fields
You also have to take care of the CRC calculation and of escaping the start/end delimiters (I believe the delimiter is byte 126, i.e. 0x7E, the marker for the start and end of a packet).
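As a rough illustration of that framing (a sketch only, not the actual TinyOS code): the delimiter value, the 0x7D escape byte, and the XOR-0x20 escaping rule below follow TEP 113, but the CRC routine is a generic CRC-16 placeholder that you should check against the TinyOS serial stack before relying on it.

#include <stdint.h>
#include <stddef.h>

#define FRAME_FLAG   0x7E   /* start/end delimiter (byte 126)            */
#define FRAME_ESCAPE 0x7D   /* escape byte; the escaped value is b ^ 0x20 */

/* Placeholder CRC-16 (assumption: verify against the TinyOS CRC code). */
static uint16_t crc16(const uint8_t *data, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Append one byte to the frame, escaping delimiter/escape values. */
static size_t put_escaped(uint8_t *out, size_t pos, uint8_t b)
{
    if (b == FRAME_FLAG || b == FRAME_ESCAPE) {
        out[pos++] = FRAME_ESCAPE;
        out[pos++] = b ^ 0x20;
    } else {
        out[pos++] = b;
    }
    return pos;
}

/* Frame a payload (header + message_t content); returns the frame length. */
size_t frame_packet(const uint8_t *payload, size_t len, uint8_t *out)
{
    uint16_t crc = crc16(payload, len);
    size_t pos = 0;

    out[pos++] = FRAME_FLAG;                          /* start delimiter */
    for (size_t i = 0; i < len; i++)
        pos = put_escaped(out, pos, payload[i]);      /* escaped content */
    pos = put_escaped(out, pos, (uint8_t)(crc & 0xFF));   /* CRC, low byte  */
    pos = put_escaped(out, pos, (uint8_t)(crc >> 8));     /* CRC, high byte */
    out[pos++] = FRAME_FLAG;                          /* end delimiter   */
    return pos;
}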

Related

Input Port vs file

Is a Scheme input port the same thing as a C FILE* or a Python file? Is that the same thing as the Unix file descriptor concept? If not, how does an input port differ from the others (and why is it called that way and not just a 'file')?
It's approximately, but only approximately, equivalent: ports are objects to and from which you can read or write other objects, usually bytes or characters.
But the approximation is not terribly close. Ports from which you can read or write characters rather than bytes are, well, ports which handle characters. That means that before they can, for instance, write some octets down on the TCP connection underlying the port, they have to translate those characters into octets using (with luck) some standard encoding. I don't think the mechanism for controlling this encoding & decoding is specified in Scheme, but it must exist. So a port is, at least sometimes, a complicated thing.
(As for why they're called 'ports': well, C made a choice to call communication endpoints 'file descriptors' which makes sense in the context of the Unix 'everything's a file' idea (calling bytes chars was always just a mistake though). But Scheme doesn't come from a Unix/C background so there's really no reason to do that. Given what we call communication endpoints in TCP/IP, 'port' seems quite a good choice of name.)

Interpreting data from RDM630 RFID reader

I am trying to build an RFID-based door opener with an ATtiny2313 and an RDM630 RFID reader. There has been no problem with programming or getting the two ICs to talk to each other via UART. The problem is the interpretation of the data.
I wasn't able to make any sense of what the RDM630 had sent to the ATtiny, so I hooked it up via an RS232/USB adapter, and this is what I get on my PC:
Display = ASCII:
Display set to HEX:
Written on the Card is:
0000714511 010,59151
Can anyone help me make sense of the Data?
Most of the bytes that the RDM630 RFID reader module sends are the ASCII characters of hex digits ('0'-'9', 'A'-'F'), i.e. byte values 0x30-0x39 and 0x41-0x46.
It looks like your RS232/USB adapter inverts the bits compared to a direct TTL connection.
(RS232 is inverted TTL. It also has different voltage levels, but that's OK as long as the TTL transmit output feeds an RS232 receive input, as in your case. The other way around is trickier.)
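If it helps, here is a C sketch of how such a frame is usually decoded, assuming the commonly documented RDM630 format: 0x02, ten ASCII hex digits of tag data, two ASCII hex digits of checksum (the XOR of the five decoded data bytes), then 0x03. Verify against your module's datasheet before relying on it.

#include <stdint.h>

static int hex_val(uint8_t c)            /* '0'-'9','A'-'F' -> 0..15, else -1 */
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

/* frame: 14 raw bytes from the UART. Fills tag[5] and returns 0 on success. */
int decode_rdm630(const uint8_t frame[14], uint8_t tag[5])
{
    if (frame[0] != 0x02 || frame[13] != 0x03)
        return -1;                        /* bad STX/ETX                     */

    uint8_t checksum = 0;
    for (int i = 0; i < 5; i++) {         /* 5 data bytes = 10 hex characters */
        int hi = hex_val(frame[1 + 2 * i]);
        int lo = hex_val(frame[2 + 2 * i]);
        if (hi < 0 || lo < 0) return -1;  /* not ASCII hex                   */
        tag[i] = (uint8_t)((hi << 4) | lo);
        checksum ^= tag[i];
    }

    int hi = hex_val(frame[11]), lo = hex_val(frame[12]);
    if (hi < 0 || lo < 0 || checksum != (uint8_t)((hi << 4) | lo))
        return -1;                        /* checksum mismatch               */
    return 0;
}

As a sanity check against what's printed on the card: 0000714511 is 0x0AE70F in hex, and 010,59151 is the same value split into one byte (0x0A = 10) and the low 16 bits (0xE70F = 59151), so the decoded tag bytes should line up with the printed number.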

Packing 20-byte chunks via BLE

I've never worked with Bluetooth before. I have to send data via BLE and I've found the limit of 20 bytes per chunk.
The sender is an Arduino and the receiver could be either an Android app or a Node.js app on a PC.
I have to send 9 values, stored as floats, so 4 bytes * 9 = 36 bytes, which means 2 chunks for all my data via BLE. The receiver needs both chunks to process them. If some data is lost, I don't care.
I'm no expert in network protocols, and I think I have to give each message an incremental timestamp so that the receiver can glue together the two chunks with the same timestamp, or discard the earlier one if a new timestamp is higher. But I'm not sure how to do a checksum, whether I really need one, or whether, for a simple beta version of my system, I can ignore all those problems.
Can anyone give me some advice, like examples of similar situations handled with BLE communication?
You can get around the size limitation using the "Read Blob Request" of ATT. It allows you to read an attribute while also giving an offset. So you can read the attribute with an offset of 0; if there are more than ATT_MTU bytes, you can request again with the offset at ATT_MTU*1; if there's still more, at ATT_MTU*2, etc. (You can read about it in section 3.4.4.5 of the Bluetooth v4.1 specification; it's in the 4.0 spec too, but I don't have that in front of me right now.)
If the value changes between requests, I'm not sure how you would detect that. You could have the attribute send a notification when there's a change, to interrupt the process in case the value changes in the middle of a read.
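Alternatively, the simpler notify-based chunking the question describes is easy to sketch. Below is one possible sender-side layout in C; ble_notify() is a hypothetical stand-in for whatever call your BLE library provides to send a notification, and the sketch assumes 4-byte floats and matching byte order on both ends.

#include <stdint.h>
#include <string.h>

#define CHUNK_SIZE   20
#define HEADER_SIZE   2                            /* [sequence, chunk index] */
#define PAYLOAD_SIZE (CHUNK_SIZE - HEADER_SIZE)    /* 18 data bytes per chunk */

void ble_notify(const uint8_t *chunk, uint8_t len);    /* hypothetical */

void send_sample(const float values[9])
{
    static uint8_t sequence = 0;                   /* shared by both chunks of one sample */
    const uint8_t *raw = (const uint8_t *)values;
    const uint8_t total = 9 * sizeof(float);       /* 36 bytes */

    for (uint8_t index = 0, offset = 0; offset < total; index++) {
        uint8_t chunk[CHUNK_SIZE];
        uint8_t n = total - offset;
        if (n > PAYLOAD_SIZE)
            n = PAYLOAD_SIZE;

        chunk[0] = sequence;                       /* receiver glues chunks with equal  */
        chunk[1] = index;                          /* sequence numbers, in index order, */
        memcpy(&chunk[2], raw + offset, n);        /* and drops incomplete sets         */
        ble_notify(chunk, (uint8_t)(HEADER_SIZE + n));
        offset += n;
    }
    sequence++;                                    /* wraps at 255, which is fine here */
}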

Serial port: not able to write a big chunk of data

I am trying to send text data from one PC to another using a serial cable. One of the PCs is running Linux and I am sending data from it using the write(2) system call. The log is approximately 65K bytes, but write(2) returns about 4K (i.e. only that much data gets transferred). I tried breaking the data into chunks of 4K, but then write(2) returns -1.
My questions are: is there a buffer limit for writing data to a serial port, or can I send data of any size? Also, do I need to continuously read data on the other PC as I write each 4K chunk?
Do I need any special configuration in the termios structure for sending (huge amounts of) data?
The transmit buffer is one page (I took a look at the Linux 2.6.18 sources), which is 4K in most (if not all) cases.
The other end must read (I don't know the size of its receive buffer), but more importantly you should not write faster than the serial port can transmit. At 115200 bps with 8-N-1 framing (10 bits per byte on the wire, including start and stop bits), you can write a 4K chunk roughly 2-3 times a second (115200 / 10 / 4096 ≈ 2.8).
Yes, there is a buffer limit - but when you reach that limit, the write() should block.
When write() returns -1, what is errno set to?
Make sure that the receiver is reading.
You should update the current position in your buffer using the return value of write(), and continue the next write from there. (This applies to all write() calls, regardless of whether the fd is a serial port, a TCP socket, or a file.)
You may get an error back from subsequent writes. Judging by the man page, it's safe to retry the write for the following errnos: EAGAIN, EINTR, and probably ENOSPC. Use perror() to see what you get (and post it, I'm curious).
EFBIG would seem to indicate that you are trying to write with a buffer (or rather a count) that is too large, but that limit is probably much larger than 64K.
If the internal buffer fills up because you are writing too fast, try to (nano)sleep a little between writes. There are several clever ways of pacing this (like TCP does), but if the rate is known, just write at a fixed rate.
If you think the receiver is actually reading but not much happens, have a look at the serial port's flow-control options and check whether the cable is wired for RTS/CTS.
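Putting the points above together, here is a minimal sketch of the sending side, assuming a Linux PC and a blocking fd; the device path and baud rate are just example values, and the retry set follows the write(2) man page.

#define _DEFAULT_SOURCE                  /* for cfmakeraw() and usleep() */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

/* Write the whole buffer, resuming after short writes. Returns 0 or -1. */
int write_all(int fd, const char *buf, size_t len)
{
    size_t done = 0;
    while (done < len) {
        ssize_t n = write(fd, buf + done, len - done);
        if (n < 0) {
            if (errno == EINTR)
                continue;                /* interrupted: just retry           */
            if (errno == EAGAIN) {       /* buffer full (with O_NONBLOCK):    */
                usleep(10000);           /* wait for the UART to drain a bit  */
                continue;
            }
            perror("write");             /* report the real errno             */
            return -1;
        }
        done += (size_t)n;               /* advance past the bytes accepted   */
    }
    return 0;
}

int main(void)
{
    int fd = open("/dev/ttyS0", O_WRONLY | O_NOCTTY);   /* example device */
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                     /* raw mode: no line-discipline mangling */
    cfsetospeed(&tio, B115200);
    cfsetispeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);

    const char msg[] = "example log data\n";
    return write_all(fd, msg, sizeof msg - 1) ? 1 : 0;
}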

To pad or not to pad - creating a communication protocol

I am creating a protocol to have two applications talk over a TCP/IP stream and am figuring out how to design a header for my messages. Using the TCP header as an initial guide, I am wondering whether I will need padding. I understand that when we're dealing with a cache, we want to make sure that the data being stored fits in a cache line so that it can be retrieved efficiently. However, I do not understand how it makes sense to pad a header considering that an application will parse a stream of bytes and store it how it sees fit.
For example: say I want to send a message header consisting of a 3-byte field followed by a 1-byte padding field for 32-bit alignment, and then the message data.
In this case, the receiver will just take 3 bytes from the stream, throw away the padding byte, and then start reading the message data. As I see it, the receiver will then store the 3 bytes and the message data however it sees fit. The whole point of alignment is so that data can be retrieved efficiently, but if the receiver doesn't care about the padding, how does it help retrieval?
Without the padding, the receiver just takes the 3 header bytes from the stream and then the data bytes. Since the receiver stores these bytes however it wants, does it matter whether or not the padding is there?
Maybe I'm missing the point of padding.
It's slightly hard to extract a question from this post, but with what I've said you guys can probably point out my misconceptions.
Please let me know what you guys think.
Thanks,
jbu
If word alignment of the message body is of some use, then by all means, pad the message to avoid other contortions. The padding will be of benefit if most of the message is processed as machine words with decent intensity.
If the message is a stream of bytes, for instance XML, then padding won't do you a whole heck of a lot of good.
As far as actually designing a wire protocol goes, consider using a plain-text protocol with compression (header included); it will probably use less bandwidth than any hand-designed binary protocol you could invent.
I do not understand how it makes sense to pad a header considering that an application will parse a stream of bytes and store it how it sees fit.
If I'm a receiver, I might pass a buffer (i.e. an array of bytes) to the protocol driver (i.e. the TCP stack) and say, "give this back to me when there's data in it".
What I (the application) get back, then, is an array of bytes which contains the data. Using C-style tricks like "casting" and so on, I can treat portions of this array as if they were words and double-words (not just bytes), provided that they're suitably aligned (which is where padding may be required).
Here's an example of a statement which reads a DWORD from an offset in a byte buffer:
#include <stdint.h>

typedef uint32_t DWORD;   /* Windows-style names, defined here so this compiles standalone */
typedef uint8_t  byte;

DWORD getDword(const byte* buffer)
{
    // we want the DWORD which starts at byte-offset 8
    buffer += 8;
    // dereference as if it were pointing to a DWORD
    // (this would fail on some machines if the pointer
    // weren't pointing to a DWORD-aligned boundary)
    return *((const DWORD*)buffer);
}
Here's the corresponding access in Intel assembly; note that it's a single instruction, i.e. quite an efficient way to access the data, more efficient than reading and accumulating separate bytes:
mov eax,DWORD PTR [esi+8]
One reason to consider padding is if you plan to extend your protocol over time: some of the padding can be intentionally set aside for future assignment.
Another reason to consider padding is to save a couple of bits in length fields: if the length is always a multiple of 4 or 8, you can drop 2 or 3 bits from the length field.
One other good reason that TCP has padding (which probably does not apply to you) is that it allows dedicated network-processing hardware to easily separate the data from the header. As the data always starts on a 32-bit boundary, it's easier to separate the header from the data when the packet gets routed.
If you have a 3-byte header and align it to 4 bytes, designate the unused byte as 'reserved for future use' and require its bits to be zero (rejecting messages where they are not as malformed). That leaves you some extensibility. Or you might decide to use the byte as a version number, initially zero, incrementing it if (when) you make incompatible changes to the protocol. Don't let the value be 'undefined' and "don't care"; you'll never be able to use it if you start out that way.
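As a concrete sketch of that advice (field names and sizes here are purely illustrative, and multi-byte fields should be serialized in an explicitly chosen byte order rather than memcpy'd straight from the struct):

#include <stdint.h>

#pragma pack(push, 1)                /* keep the on-the-wire layout exact */
typedef struct {
    uint8_t  msg_type;               /* what kind of message follows             */
    uint16_t payload_len;            /* body length in bytes; pick one byte order
                                        and stick to it on the wire              */
    uint8_t  version;                /* 0 for now; bump on incompatible changes  */
} msg_header_t;                      /* 3 bytes of fields + 1 version byte = 4   */
#pragma pack(pop)

/* Reject anything from a future, incompatible protocol revision. */
static int header_ok(const msg_header_t *h)
{
    return h->version == 0;
}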
