I'm trying to build my own wireless PPP/SLIP-style protocol, and I can't figure out whether the radios are at fault or whether my logic (presented as pseudo-code below) is at fault.
The data packet is as follows:
Byte 1: Special start flag (a reserved value that marks the beginning of every packet)
Byte 2, most significant bit: direction the packet is travelling
Byte 2, remaining 7 bits: address of the slave the master is communicating with
Byte 3: Packet Sequence number
Byte 4-n: Data
Byte n+1: Checksum
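In C terms, the frame I have in mind looks like this (just an illustrative sketch, since my real code is 8051 assembly; the field names, DATA_LEN, and what the checksum covers are placeholders):

    #include <stdint.h>

    #define DATA_LEN 8                /* placeholder for the fixed data length n */

    /* One frame, laid out exactly as listed above. */
    struct frame {
        uint8_t start;                /* special start flag */
        uint8_t addr;                 /* bit 7: direction; bits 6-0: slave address */
        uint8_t seq;                  /* packet sequence number */
        uint8_t data[DATA_LEN];       /* payload */
        uint8_t checksum;             /* checksum over the frame */
    };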
I wrote all of my code in 8051 assembly, since the target chips are also 8051-based.
I have configured the data rate to be the same throughout the whole system.
I verified my assembly code against an 8051 simulator (ucsim), using a build that is only two months old.
The final setup will be one master communicating with each client in turn, one at a time, repeating endlessly.
Since the radio modules are half-duplex, I make sure that only one device may transmit at a time.
Because I am new to designing my own SLIP/PPP-style protocol, I have some questions about my setup.
My thought is to have the master check the incoming sequence number of a packet against the last one it saw and, if they match, discard the packet as a duplicate. The client, for its part, accepts any packet at first (for sync), increments its local sequence number, sends its request to the master, and then requires the next packet it receives to match its local sequence number.
Am I on the right track with this thought, or did the people who invented SLIP/PPP have a different mindset when it came to synchronizing and receiving proper fixed-length packets?
For wireless operation in general, is three retries enough?
Note: when my system sends consecutive packets without a break, the sequence number for each packet in the group is the same.
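In C terms (again, my real code is assembly), the duplicate check I have in mind is roughly this sketch, with names of my own choosing:

    #include <stdbool.h>
    #include <stdint.h>

    static uint8_t last_seq;          /* sequence number of the last accepted packet */
    static bool synced = false;       /* accept anything until the first packet syncs us */

    /* Returns true if the packet should be processed, false if it is a duplicate. */
    static bool accept_seq(uint8_t seq)
    {
        if (!synced) {                /* first packet after startup: take it and sync */
            synced = true;
            last_seq = seq;
            return true;
        }
        if (seq == last_seq)
            return false;             /* same as the previous packet: discard */
        last_seq = seq;               /* uint8_t rolls over past 255 automatically */
        return true;
    }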
I did implement byte stuffing along with a data reception timeout.
Suppose a timeout happens. Should I automatically clear the flag (the one indicating that the byte received was an escape character), or should I wait until the receiver sees the start flag byte and clear the flag there?
Also, would there be any benefit to adding a special end byte to my packet, or would that not be beneficial?
All devices in my setup know that once the start byte is received, reception stops at the checksum byte.
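To make the escape handling concrete, here is a C sketch of my unstuffing logic (the constant values are illustrative); on_timeout() shows the first option from my question, clearing the flag in the timeout handler:

    #include <stdbool.h>
    #include <stdint.h>

    #define START_CODE    0x7E        /* illustrative values */
    #define ESCAPE_CODE   0x7D
    #define SPECIAL_START 0x5E        /* escaped stand-in for START_CODE */
    #define SPECIAL_ESC   0x5D        /* escaped stand-in for ESCAPE_CODE */

    static bool escape_bit;

    static void on_timeout(void)
    {
        escape_bit = false;           /* the frame died, so don't carry escape state over */
    }

    /* Returns the decoded byte, -1 if the byte was consumed as state,
       or -2 on an invalid escape sequence. */
    static int unstuff(uint8_t b)
    {
        if (escape_bit) {
            escape_bit = false;
            if (b == SPECIAL_START) return START_CODE;
            if (b == SPECIAL_ESC)   return ESCAPE_CODE;
            return -2;                /* anything else after an escape is an error */
        }
        if (b == ESCAPE_CODE) { escape_bit = true;  return -1; }
        if (b == START_CODE)  { escape_bit = false; return -1; }  /* new frame begins */
        return b;
    }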
So the point of all this is that I'm trying to use the same kind of logic in my PPP/SLIP setup that the designers of the original PPP protocol used.
This is my pseudo code.
All functions except "Begin Transmit" are automatically called by the system in priority sequence
when that event happens. Serial functions have highest priority.
Timeout function:
clear timer-run
clear escape-bit
set ignore-byte
If packet-received is set then
clear packet-received
set packet-official
** If device is client then
Increment sequence number (auto-roll over past 255)
** End if
end if
Exit Function
Serial Function
If byte received then clear receive-flag and run received
If byte transmitted then clear transmit-flag and run transmitted
Exit Function
Begin Transmit Function
Set packet-count to 3
Set byte to start code
clear escape-bit
Output byte to serial
Exit Function
Transmitted Function
If escape-bit set then
clear escape-bit
output saved transmit byte
Exit Function
end if
Save old variables and load new bank
If transmit-pointer is at end of packet then
Decrement packet-count
If packet-count = 0 then
Enable Receiver
** If device is host then
Enable Timeout timer
** End If
Restore old variables
Exit Function
End if
Set transmit-pointer to start of packet
Set byte to start code
goto Output-mode
ELSE
increment transmit-pointer
load byte from current ram location (defined by transmit-pointer)
End If
If byte is Escape code then
Set saved transmit byte to Special escape character
Set escape-bit
End If
If byte is Start code then
Set saved transmit byte to Special start character
Set escape-bit
Set byte to Escape code
End If
Output-mode:
Output byte to serial
Restore old variables
Exit Function
Received Function
Clear timer-run
Save old variables and load new bank
Load received byte
If escape-bit is set then
clear escape-bit
If received-byte is special start character then
set received byte to start code
goto receive-2
End If
If received-byte is special escape character then
set received byte to escape code
goto receive-2
End If
goto receive-fail
End If
If received-byte is escape code then
set escape-bit
reset timer timeout
set timer-run
Restore old variables
Exit function
End if
If received-byte is start code then
reset receive-pointer
clear ignore-byte
reset timer timeout
set timer-run
Restore old variables
Exit function
End if
receive-2:
If ignore-byte is set then
Restore old variables
Exit Function
End If
If receive-pointer is at checksum byte address then
Validate checksum
If checksum is correct then
Set Packet-received flag
End if
Restore old variables
Exit Function
End If
If receive-pointer is at sequence byte address then
If received-byte is not expected-sequence then
set ignore-byte
Restore old variables
Exit Function
End if
End If
If receive-pointer is at beginning then
reset checksum seed
End If
store byte in ram address (defined by receive-pointer)
calculate checksum on received-byte and update checksum value variable
reset timer timeout
set timer-run
Restore old variables
Exit Function
Related
I have a driver that builds on the new serdev bus in the Linux kernel.
In my driver I receive messages from an external device. All messages end with a null byte (0x00), and the protocol (COBS) ensures that there are no null bytes in my data. I try to have the TTY layer hand me full messages by scanning for zeros in my input; if there are none, I just return zero from the callback the tty layer invokes when bytes are available.
This kind of works, or rather, it works for some messages. After a while, though, it locks up and the tty layer keeps reporting the same number of received bytes indefinitely. My guess is that this happens when one half of the tty flip buffer is full and the rest of my message is in the other half.
I have two questions:
Am I correct that the tty layer can "hang" until I read out all the data in one half of the flip buffer?
If that is so, is there some way to prevent this from happening? I'd rather not implement my own buffering scheme on top of the tty buffer already available.
Looking at drivers/tty/tty_buffer.c (the function flush_to_ldisc), it appears that it is not possible to do what I attempted. When the tty buffer is about to flip over, the consumer has to do a read and buffer any half messages itself.
That is, returning zero and hoping for a larger chunk of data in the next callback only works until the end of the first part of the buffer; at that point the last bit of data must be read.
This is not a problem in userspace, because a read call takes an argument giving the most bytes you want, but read is free to return fewer bytes than requested.
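For illustration, here is a minimal sketch of a receive_buf callback written that way (my_priv, FRAME_MAX, and handle_frame are hypothetical names, and the exact callback signature has changed between kernel versions): consume every byte you are handed and buffer partial frames yourself.

    #include <linux/serdev.h>
    #include <linux/types.h>

    #define FRAME_MAX 256             /* assumed maximum encoded frame size */

    struct my_priv {                  /* hypothetical per-device state */
        u8     frame[FRAME_MAX];
        size_t frame_len;
    };

    static void handle_frame(const u8 *buf, size_t len)
    {
        /* hypothetical: COBS-decode and dispatch one complete message */
    }

    /* Always claim all of 'count' so nothing is left stuck in the flip buffer. */
    static int my_receive_buf(struct serdev_device *serdev,
                              const unsigned char *data, size_t count)
    {
        struct my_priv *priv = serdev_device_get_drvdata(serdev);
        size_t i;

        for (i = 0; i < count; i++) {
            if (data[i] == 0x00) {    /* COBS delimiter: frame complete */
                handle_frame(priv->frame, priv->frame_len);
                priv->frame_len = 0;
            } else if (priv->frame_len < FRAME_MAX) {
                priv->frame[priv->frame_len++] = data[i];
            }
            /* else: oversized frame; drop bytes until the next delimiter */
        }
        return count;
    }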
As I understand it, the term "word length" (spi_bits_per_word) in SPI defines the CS (chip select) active time.
It therefore seems that the Linux driver will function correctly with simple SPI protocols that keep the word size constant.
But how can we deal with SPI protocols that use different word sizes as part of the protocol?
For example: CS needs to stay active while sending a 9-bit SPI word and then reading 8 or 24 bits (the length of the register read differs each time, depending on the register).
How can we implement that using spi_write_then_read?
Do we need to set one bits_per_word for sending and then another bits_per_word for receiving?
"word length" means number of bits you can send in one transaction. It doesn't defines the CS (chip select) active time. You can keep it active for whatever time you want(least is for word-length).
SPI has got some format. You cannot randomly read-write whatever number of bits you want.Most of SPI supports 4-bit, 8-bit, 16-bit and 32-bit mode. If the given mode doesn't satisfy your requirement then you need to break your requirement. For eg:- To read 24-bit data, we need to use 8-bit word-length transfer for 3 times.
Generally SPI is fullduplex means it will read at same time it will write.
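If your controller does happen to support 9-bit words (check its bits_per_word_mask), the Linux SPI core also lets you set bits_per_word per spi_transfer within one spi_message, which keeps CS asserted across both halves; spi_write_then_read can't express that, so you build the message yourself. A sketch (my_read_reg and the buffers are illustrative; a real driver should use DMA-safe, i.e. kmalloc'd, buffers):

    #include <linux/spi/spi.h>

    /* One 9-bit command word, then a 24-bit read as three 8-bit words,
       all inside a single message so CS stays active throughout. */
    static int my_read_reg(struct spi_device *spi, u16 cmd, u8 *rxbuf)
    {
        struct spi_transfer xfers[2] = {
            {
                .tx_buf        = &cmd,
                .len           = 2,   /* a 9-bit word occupies 2 bytes in memory */
                .bits_per_word = 9,
            },
            {
                .rx_buf        = rxbuf,
                .len           = 3,   /* 24 bits as three 8-bit words */
                .bits_per_word = 8,
            },
        };
        struct spi_message msg;

        spi_message_init(&msg);
        spi_message_add_tail(&xfers[0], &msg);
        spi_message_add_tail(&xfers[1], &msg);
        return spi_sync(spi, &msg);
    }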
I would like to know whether it is somehow possible to handle a Serial.println() on an Arduino Uno without holding up the main program.
Basically, I'm using the Arduino to charge a 400 V capacitor; the main program opens and closes the gate of a MOSFET for 15 and 20 microseconds respectively. I also have a voltage divider connected to the capacitor, which I use to measure the voltage on the capacitor while it is being charged. I use analogRead() to get the raw value on the pin, multiply the value by the required ratio, and try to print that value to the serial console at the end of each cycle. The problem, though, is that in order for the capacitor to charge quickly, the delays need to be very small (in the range of microseconds), and the serial print command takes much longer than that to execute, so it holds up the entire program.
My question, therefore, is whether it would somehow be possible to execute the command on something like a different "thread" without holding up the main cycle. Is the 8-bit AVR capable of something like this?
Arduino does not support multithreading, but there are a few libraries that allow you to do the equivalent:
Arduino Multi-Threading Library
ArduinoThread
For more information, you can also visit this question: Does Arduino support threading?
The basic Arduino serial print functions are blocking: they watch the TX-ready flag, then load the next byte to transmit. This means that if you send "Hello World", the print function blocks for as long as it takes the UART to send those 11 characters at your selected baud rate.
One way to deal with this is to use a high baud rate, so you don't wait long.
The better way is to use interrupt-driven transmit. That way, the string of data to send is buffered, and each character is sent when the UART can accept it. The call to send data just loads the TX buffer, loads the first character into the UART to start the process, then returns.
This buffered approach will still block if you fill the TX buffer, since the send method has to wait to load more into the buffer. A good serial library will give you an easy way to check the buffer status so you can implement your own extra buffering.
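For example, here is a bare-metal AVR C sketch of interrupt-driven transmit for the Uno's ATmega328P, bypassing the Arduino core (the buffer size and the drop-when-full policy are choices I made for the sketch, not requirements):

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <stdint.h>

    #define TX_BUF_SIZE 64            /* must be a power of two */

    static volatile uint8_t tx_buf[TX_BUF_SIZE];
    static volatile uint8_t tx_head, tx_tail;

    /* Queue one byte without ever blocking; returns 0 if the buffer is full. */
    static uint8_t uart_put(uint8_t c)
    {
        uint8_t next = (tx_head + 1) & (TX_BUF_SIZE - 1);
        if (next == tx_tail)
            return 0;                 /* full: drop rather than wait */
        tx_buf[tx_head] = c;
        tx_head = next;
        UCSR0B |= _BV(UDRIE0);        /* enable the data-register-empty interrupt */
        return 1;
    }

    /* Fires whenever the UART can accept another byte. */
    ISR(USART_UDRE_vect)
    {
        if (tx_head == tx_tail) {
            UCSR0B &= ~_BV(UDRIE0);   /* queue empty: stop the interrupt */
        } else {
            UDR0 = tx_buf[tx_tail];
            tx_tail = (tx_tail + 1) & (TX_BUF_SIZE - 1);
        }
    }

The main loop then only pays for a RAM copy per byte; the UART drains the buffer in the background.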
Here is a library that claims to be interrupt driven. I've not used it myself. Let us know if it works for you.
https://code.google.com/p/arduino-buffered-serial/
I found this post: "Serial output is now interrupt driven. Serial.print() just stuffs the data in a buffer. Interrupts have to happen for that data to actually be shifted out." (posted Nov 11, 2012). I have found that to be true, BUT if you try to print more than the internal serial buffer will hold, it will block until the buffer is at least partially emptied.
I am writing a serial port application in VC++ in which I open a port on a switch device, send some commands, and display their output. I run a thread that continuously reads the open port for the output of the given command. My main thread waits until the read completes, but the problem is how to recognize that the command output has ended and that I should signal the main thread.
Almost any serial port communication requires a protocol: some way for the receiver to discover that a response has been received in full. A very simple one is a unique byte or character that can never appear in the rest of the data. A linefeed is standard, used by any modem for example.
This needs to get more elaborate when you need to transfer arbitrary binary data. A common solution is to send the length of the response first; the receiver can then count down the received bytes to know when the response is complete. This is often embellished with a specific start-byte value so that the receiver has some chance to re-synchronize with the transmitter, and often includes a checksum or CRC so that the receiver can detect transmission errors. A further embellishment is to make errors recoverable with ACK/NAK responses from the receiver; you'd then be well on your way to re-inventing TCP. The RATP protocol in RFC 916 is a good example, albeit widely ignored.
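As a sketch of the length-prefix idea in C (the start byte value, the XOR checksum, and read_byte are illustrative choices, not any particular standard):

    #include <stddef.h>
    #include <stdint.h>

    #define FRAME_START 0x7E          /* illustrative start-of-frame marker */

    extern uint8_t read_byte(void);   /* hypothetical blocking read from the port */

    /* Receive one frame: START, LEN, LEN payload bytes, XOR checksum.
       Returns the payload length, or -1 on overflow or a bad checksum. */
    static int read_frame(uint8_t *payload, size_t max)
    {
        while (read_byte() != FRAME_START)
            ;                         /* hunt for the start byte to re-synchronize */

        uint8_t len = read_byte();
        if (len > max)
            return -1;

        uint8_t sum = len;            /* checksum covers length and payload */
        for (uint8_t i = 0; i < len; i++) {
            payload[i] = read_byte();
            sum ^= payload[i];
        }
        return (read_byte() == sum) ? len : -1;
    }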
I have to write an application for a Java-programmable (J2ME) GPRS modem that must interface with an electromedical device (a glucometer).
I have an input buffer and an output buffer on the serial port of the device.
When the application starts, I listen on the serial port and receive from the glucometer one byte with the decimal value 5, which corresponds in the ASCII table to the Enquiry (ENQ) symbol; after 15 seconds I get the byte 4, which corresponds to End of Transmission (EOT).
To receive data from the glucometer I need to send an ACK (acknowledge) signal, which corresponds to the byte 6.
I tried the following forms:
outBuffer.write("ACK\r\n".getBytes()); //first without setting the charset and after I trying to set all the charset.
I tried to send a byte buffer like this:
byte[] bSend = new byte[] { 6 };
outBuffer.write(bSend); // I also tried the bytes 10 (LF) and 13 (CR)
The result is that I cannot receive data; I just keep getting the values 5 and 4 cyclically.
With all the software that can communicate with the serial port (like Serial Monitor), if I send an ACK message I receive data from the glucometer correctly.
I think my problem is due to how the ACK value is encoded in Java. Can someone suggest a solution?
As this seems to be a pretty low-level interface that uses ASCII control characters for its communication, I think you need to send these byte values verbatim, without extra stuff like newlines. This means that
byte[] bSend = new byte[] { 6 };
outBuffer.write(bSend);
is the correct approach. Now, this protocol looks a lot like ASTM E1381, so I checked, and paragraph 6.1.2 might be related to your problem:
When the meter initiates the Establishment Phase, the meter determines
if the computer is connected by initially sending an <ENQ> character.
If the computer responds within 15 seconds by sending an <ACK>
character, the meter proceeds with Data Transfer Mode. If the computer
responds within 15 seconds with a <NAK> character, the meter sends an
<EOT> then attempts to enter Remote Command Mode, by looking for an
<ENQ> character from the computer. Also see "Section 6.2 Remote
Command Mode Protocol". Any response within 15 seconds to the meter’s
<ENQ> other than an <ACK> or <NAK> character causes the meter to send
an <EOT>, delay one second, then send another <ENQ>. If the computer
does not respond within 15 seconds, then the meter sends an <EOT>,
delays one second, then sends another <ENQ> and waits again for a
response from the computer. Note: One second after sending an <ENQ>,
the meter may enter a low power mode. Thus, there is a possibility
that the first <ACK> sent by the computer is not read correctly. In
this case, the meter responds with an <EOT>, delays one second, then
sends another <ENQ>.
Emphasis mine; I guess that's what's happening. So you should simply answer the next <ENQ> with another <ACK> to get the meter into Data Transfer Mode, assuming that's what you want.
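Stripped of the J2ME specifics, the establishment phase then boils down to a loop like this C-style sketch, where read_byte and write_byte are hypothetical stand-ins for your stream I/O:

    #include <stdint.h>

    #define ENQ 0x05
    #define ACK 0x06
    #define EOT 0x04

    extern int  read_byte(void);      /* next byte, or -1 on timeout */
    extern void write_byte(uint8_t b);

    /* Answer every <ENQ> with <ACK>; per the quoted note the first <ACK>
       may be missed while the meter is in low-power mode, so be prepared
       to see <EOT> and acknowledge the retried <ENQ>. */
    static int establish(void)
    {
        for (;;) {
            int b = read_byte();
            if (b == ENQ)
                write_byte(ACK);      /* accept Data Transfer Mode */
            else if (b == EOT || b < 0)
                continue;             /* meter will retry with another <ENQ> */
            else
                return b;             /* first byte of the data transfer */
        }
    }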
It should be:
byte bSend = (byte) 0x6;
outBuffer.write(bSend);