I am writing a serial port application in VC++ in which I can open a port on a switch device, send some commands, and display their output. I run a worker thread that continuously reads the open port for the output of a given command. My main thread waits until the read completes, but the problem is: how do I recognize that the command output has ended, so that I can signal the main thread?
Almost any serial port communication requires a protocol: some way for the receiver to discover that a response has been received in full. A very simple one uses a unique byte or character that can never appear in the rest of the data. A linefeed is the standard choice, used by any modem for example.
This needs to get more elaborate when you need to transfer arbitrary binary data. A common solution is to send the length of the response first. The receiver can then count down the received bytes to know when it is complete. This often needs to be embellished with a specific start byte value so that the receiver has some chance to re-synchronize with the transmitter, and often includes a checksum or CRC so that the receiver can detect transmission errors. A further embellishment is to make errors recoverable with ACK/NAK responses from the receiver. You'd then be well on your way to re-inventing TCP. The RATP protocol in RFC-916 is a good example, albeit widely ignored.
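To make that concrete, here is a minimal sketch of a length-prefixed frame with a start byte and a checksum, written in Python so the framing logic stays readable. The start-byte value, the field sizes, and the simple additive checksum are assumptions chosen for illustration, not part of any standard; adapt them to whatever your switch actually speaks.

import struct

START = 0x7E  # assumed start-of-frame marker

def build_frame(payload: bytes) -> bytes:
    # frame = start byte + 2-byte big-endian length + payload + 1-byte checksum
    header = struct.pack(">BH", START, len(payload))
    checksum = sum(payload) & 0xFF
    return header + payload + bytes([checksum])

def read_frame(read_byte):
    # read_byte() is whatever returns the next single byte from the serial port
    while read_byte()[0] != START:            # re-synchronize on the start byte
        pass
    (length,) = struct.unpack(">H", read_byte() + read_byte())
    payload = b"".join(read_byte() for _ in range(length))
    if read_byte()[0] != (sum(payload) & 0xFF):
        raise IOError("checksum mismatch - ask the transmitter to resend")
    return payload

With pyserial, read_byte could simply be lambda: ser.read(1), a blocking one-byte read; the receiving thread then knows a response is complete exactly when read_frame() returns.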
I need to create a monitor which will log information about missing packets, using ZeroMQ ipc. Actually, I don't really understand everything about it, because there are also the LINX and TIPC protocols. Can you please explain that and answer the main question?
You could make the application self-monitoring, by including a message serial number in each message structure. The message sender keeps track of the serial number it last sent, and increments it every time it sends a message.
The recipient should then be receiving messages with ever-increasing message serial numbers embedded. If that ever jumps by 2 or more, a message has gone missing.
IPC is not lossy like a network can be - the bytes put in come out the other end. TCP is not lossy either, provided both ends are still running and the network itself hasn't failed. However, depending on the ZMQ pattern used and how it's set up, whole messages can go undelivered (for example, if the recipient hasn't connected yet). If that's what you mean by "packet missing", it would be revealed by including an incrementing message serial number.
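A sketch of that idea with pyzmq follows; the ipc endpoint name, the PUSH/PULL pattern, and the JSON message layout are assumptions made purely for illustration, not something your application has to adopt.

import zmq

ctx = zmq.Context()

# --- sender side: embed an ever-increasing serial number ---
tx = ctx.socket(zmq.PUSH)
tx.bind("ipc:///tmp/demo-feed")          # hypothetical ipc endpoint
seq = 0

def send(payload):
    global seq
    tx.send_json({"seq": seq, "data": payload})
    seq += 1

# --- receiver / monitor side: watch for gaps in the serial numbers ---
rx = ctx.socket(zmq.PULL)
rx.connect("ipc:///tmp/demo-feed")
expected = None

def receive():
    global expected
    msg = rx.recv_json()
    if expected is not None and msg["seq"] > expected:
        print("lost %d message(s)" % (msg["seq"] - expected))
    expected = msg["seq"] + 1
    return msg["data"]

The same counting logic works regardless of which ZMQ pattern or transport you end up using.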
I am developing a program that sniffs network packets using a raw socket (AF_PACKET, SOCK_RAW) and processes them in some way.
I am not sure whether my program runs fast enough to capture all packets on the socket. I am worried that the receive buffer for this socket occasionally gets full (due to traffic bursts) and some packets are dropped.
How do I know if packets were dropped due to lack of space in the socket's receive buffer?
I have tried running ss -f link -nlp.
This outputs the number of bytes that are currently stored in the receive buffer for that socket, but I cannot tell if any packets were dropped.
I am using Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-52-generic x86_64).
Thanks.
I was having a similar problem to yours. I knew that tcpdump was able to generate statistics about packet drops, so I tried to figure out how it did that. By looking at the code of tcpdump, I noticed that it does not generate those statistics by itself; it uses the libpcap library to get them. libpcap, in turn, gets those statistics through the if_packet.h header, using the PACKET_STATISTICS socket option (at least I think so, but I'm no C expert).
Therefore, I saw only two solutions to the problem:
Interact somehow with the Linux header files from my Python script to get the packet statistics, which seemed a bit complicated.
Use the Python version of libpcap, which is pypcap, to get that information.
Since I had no clue how to do the first thing, I implemented the second option. Here is an example of how to get packet statistics using pypcap and how to parse the packet data using dpkt:
import pcap
import dpkt
import socket

pc = pcap.pcap(name="eth0", timeout_ms=10000, immediate=True)

def packet_handler(ts, pkt):
    # print packet statistics: (packets received, packets dropped, packets dropped by interface)
    print(pc.stats())
    # example packet parsing using dpkt
    eth = dpkt.ethernet.Ethernet(pkt)
    if eth.type != dpkt.ethernet.ETH_TYPE_IP:
        return
    ip = eth.data
    layer4 = ip.data
    ipsrc = socket.inet_ntoa(ip.src)
    ipdst = socket.inet_ntoa(ip.dst)

pc.loop(0, packet_handler)
The tpacket_stats structure is defined in the linux/if_packet.h header file.
Create a variable of the tpacket_stats structure and pass it to getsockopt() with the SOL_PACKET level and the PACKET_STATISTICS option; this gives you the received and dropped packet counts.
-- sometimes drops can be due to buffer size
-- so if you want to decrease the drop count, try increasing the receive buffer size with the setsockopt() function (SO_RCVBUF)
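The same technique can be used from Python without libpcap by calling getsockopt() directly on the raw socket. The numeric constants below are copied from linux/if_packet.h (SOL_PACKET = 263, PACKET_STATISTICS = 6) because not every Python version exposes them, and "eth0" is just an example interface; treat this as a sketch and double-check the values against your kernel headers.

import socket
import struct

# constants from <linux/if_packet.h>; not exported by every Python version
SOL_PACKET = 263
PACKET_STATISTICS = 6
ETH_P_ALL = 0x0003

# raw packet socket capturing everything (needs root / CAP_NET_RAW)
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
sock.bind(("eth0", 0))

# ... receive some traffic with sock.recv() ...

# struct tpacket_stats { unsigned int tp_packets; unsigned int tp_drops; };
raw = sock.getsockopt(SOL_PACKET, PACKET_STATISTICS, 8)
tp_packets, tp_drops = struct.unpack("II", raw)
print("received: %d  dropped (receive buffer full): %d" % (tp_packets, tp_drops))

Note that the kernel resets these counters each time you read them, so every call reports the drops since the previous call.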
First off, switch your operating system.
You need a reliable, network oriented operating system. Not some pink fluffy "ease of use" with "security" functionality enabled. NetBSD or Gentoo/ArchLinux (the bare installations, not the GUI kitted ones).
Start a simultaneous tcpdump on a network tap and capture the traffic you're supposed to receive alongside your program, then compare the results.
There's no efficient way to check if you've received all the packets you intended to on the receiving end since the packets might be dropped on a lower level than you anticipate.
Also, this is a question for the Unix Stack Exchange; there's no programming here that I can see, or at least there's no code.
The only certain way to verify packet drops is to have a much beefier sender (perhaps a farm of machines sending packets) targeting a single client, and to record every packet sent to your receiver. Then have the statistical data analyzed and compared against your senders to see how much you dropped.
The cheaper way is to buy a network tap, or, even more ad hoc, to enable port mirroring on your switch if possible. This lets you dump as much traffic as possible onto a second machine.
This will give you a more accurate result, because your application machine will already be busy receiving and processing the incoming traffic.
Furthermore, this is why network taps are effective: they split the communication up into two channels, the receiving and sending directions of your traffic if you will. This enables you to capture traffic on two separate machines (also using tcpdump, but instead of a mirrored port you get more accurate traffic mirroring).
So either use port mirroring, or buy a dedicated network tap.
I would like to know if it would be somehow possible to handle a Serial.println() on an Arduino uno without holding up the main program.
Basically, I'm using the Arduino to charge a 400V capacitor, and the main program is opening and closing the gate on a MOSFET transistor for 15 and 20 microseconds respectively. I also have a voltage divider connected to the capacitor, which I use to measure the voltage on the capacitor while it's being charged with the Arduino. I use analogRead() to get the raw value on the pin, multiply the value by the required ratio, and try to print that value to the serial console at the end of each cycle. The problem is that in order for the capacitor to charge quickly, the delays need to be very small (in the range of microseconds), and the serial print command takes much longer than that to execute, therefore holding up the entire program.
My question therefore is whether it would somehow be possible to get the command to execute on something like a different "thread" without holding up the main cycle. Is the 8-bit AVR capable of doing something like this?
Thanks
Arduino does not support multithreading, but there are a few libraries that allow you to do the equivalent:
Arduino Multi-Threading Library
ArduinoThread
For more information, you can also visit this question: Does Arduino support threading?
The basic Arduino serial print functions are blocking: they watch the TX Ready flag, then load the next byte to transmit. This means that if you send "Hello World", the print function will block for as long as it takes the UART to send those characters at your selected baud rate.
One way to deal with this is to use a high baud rate so you don't wait long.
The better way is to use interrupt-driven transmit. This way your string of data to send is buffered, with each character sent when the UART can accept it. The call to send data just loads the TX buffer, loads the first character into the UART to start the process, then returns.
This buffered approach will still block if you fill the TX buffer; the send method would have to wait to load more into the buffer. A good serial library might give you an easy way to check the buffer status, so you could implement your own extra buffering.
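The same buffering pattern can be sketched in Python, purely to illustrate the design: a bounded buffer drained by a background consumer stands in for the UART-empty interrupt, and the producer only blocks once the buffer is full. (On the AVR itself this is done with the UDRE interrupt and a ring buffer, not a thread; the 9600-baud figure below is just an example.)

import queue
import threading
import time

tx_buffer = queue.Queue(maxsize=64)    # stands in for the driver's TX ring buffer

def uart_drain():
    # stands in for the "TX register empty" interrupt: one byte at a time
    while True:
        byte = tx_buffer.get()
        time.sleep(10 / 9600.0)        # roughly one byte time at 9600 baud (8N1 is ~10 bits)
        # a real driver would write the byte to the UART data register here

threading.Thread(target=uart_drain, daemon=True).start()

def buffered_print(text):
    # returns almost immediately; but once the buffer is full it blocks,
    # which is exactly the behaviour described above
    for b in text.encode():
        tx_buffer.put(b)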
Here is a library that claims to be interrupt driven. I've not used it myself. Let us know if it works for you.
https://code.google.com/p/arduino-buffered-serial/
I found this post: "Serial output is now interrupt driven. Serial.print() just stuffs the data in a buffer. Interrupts have to happen for that data to actually be shifted out." (posted Nov 11, 2012). I have found that to be true, BUT if you try to print more than the internal serial buffer will hold, it will block until the buffer has at least partially emptied.
I'm writing an interface under Linux which gets data from a TCP socket. The user provides a buffer in which the received data is stored. If the provided buffer is too small, I just want to return an error.
The first problem is determining whether the buffer was too small. The recv() function just returns the number of bytes actually written into the buffer. If I use the MSG_TRUNC flag described in the recv() man page, it still returns the same value.
The second problem is discarding the data still queued in the socket. If I determine that my provided buffer was too small, I just want to throw away everything that is left on the socket. Is there any way to do this other than closing and reopening the socket, or simply receiving until nothing is left?
Best Regards
Toby
As documented in the man page, MSG_TRUNC only reports the real packet length for datagram-style sockets (e.g. UDP), so it will not work the way you want on your TCP socket, which is stream based. There are literally hundreds of posts on Stack Overflow and elsewhere about preserving application message boundaries on TCP (hint: you need to do this yourself; TCP is a byte-stream interface and doesn't), so I won't go into the details here. Suffice it to say, you need a mechanism on the recv() side to know how big an application "message" or "packet" is, to enable you to do what you want over TCP (or you need to switch over to UDP).
For TCP, if you need to "drain" the socket, reading until there is no data left would work. However, again, you need to consider message boundaries as mentioned above, so that you do not read through one "message" and start eating into the next (the most important point to remember is that TCP provides a byte-stream interface and will not necessarily preserve your concept of application-level packets or messages).
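As a sketch of what that looks like in practice, here is a Python version using a hypothetical 4-byte big-endian length prefix as the application-level framing; the prefix format is an assumption for illustration, not something TCP gives you.

import socket
import struct

def recv_exact(sock, n):
    # keep calling recv() until exactly n bytes have arrived (or the peer closed)
    chunks = []
    while n > 0:
        data = sock.recv(n)
        if not data:
            raise ConnectionError("peer closed the connection mid-message")
        chunks.append(data)
        n -= len(data)
    return b"".join(chunks)

def recv_message(sock, user_buffer_size):
    # the sender is assumed to prefix every message with its length
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    if length <= user_buffer_size:
        return recv_exact(sock, length)
    # message too big for the caller's buffer: drain exactly this message
    remaining = length
    while remaining > 0:
        remaining -= len(recv_exact(sock, min(remaining, 4096)))
    raise ValueError("message of %d bytes exceeds the provided buffer" % length)

Because the length is known up front, only the bytes belonging to the oversized message are discarded; the next message in the stream stays intact.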
This seems like a simple question, but it is difficult to search for. I need to interface with a device over the serial port. In the event my program (or another) does not finish writing a command to the device, how do I ensure the next run of the program can successfully send a command?
Example:
The foo program runs and begins writing "A_VERY_LONG_COMMAND"
The user terminates the program, but the program has only written, "A_VERY"
The user runs the program again, and the command is resent. Except, the device sees "A_VERYA_VERY_LONG_COMMAND," which isn't what we want.
Is there any way to make this more deterministic? Serial port programming feels very out-of-control due to issues like this.
The required method depends on the device.
Serial ports have additional control signal lines as well as the serial data line; perhaps toggling one of them will reset the device's input. I've never done serial port programming, but I think ioctl() handles this (a sketch of this and the next option follows after this list).
There may be a single byte which will reset, e.g. some control character.
There might be a timing-based signal, e.g. Hayes command set modems use “pause +++ pause”.
It might just reset after not receiving a complete command after a fixed time.
It might be useful to know whether the device was originally intended to support interactive use (serial terminal), control by a program, or both.
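Here is a sketch of the control-line and reset-character options using pyserial; the specific line, character, port name, and timings are guesses, and what actually resets the device depends entirely on its documentation.

import time
import serial   # pyserial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

# Option 1: toggle a control line (some devices treat DTR or RTS as a reset/abort)
ser.dtr = False
time.sleep(0.1)
ser.dtr = True

# a serial "break" condition is another common attention/reset signal
ser.send_break(duration=0.25)

# Option 2: send a single control character, e.g. Ctrl-C (device-specific)
ser.write(b"\x03")

# then throw away whatever the device echoed back while it was confused
ser.reset_input_buffer()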
I would guess that if you call write("A_VERY_LONG_COMMAND"), and then the user hits Ctrl+C while the bytes are going out on the line, the driver layer should finish sending the full buffer. And if the user interrupts in the middle of the call, the driver layer will probably just ignore the whole thing.
Just in case, when you open a new COM port, it's always wise to clear the port.
Do you have control over the device end? It might make sense to implement a timeout to make the device ignore unfinished or otherwise corrupt packets.
The embedded device should be implemented such that you can either send an abort/clear/break character that will dump the contents of its command buffer and give you a clean slate on your client app startup.
Or else it should provide a software reset character which will reset the command buffer and all state.
Or else it should be designed so that you can send a command terminator (perhaps a newline, etc., depending on the command protocol), possibly have an error generated when it parses the garbled partial command that was in its buffer, query/clear the error, and then be good to go.
It wouldn't be a bad idea upon connection of your client program to send some health/status/error query repeatedly until you get a sound response, and only then commence sending configuration or operation commands. Unless you can via a query determine that the device was left in a suitable state, you probably want to assume nothing and configure it from scratch, after a configuration reset if available.
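A minimal sketch of that startup sequence with pyserial, assuming a hypothetical line-oriented protocol in which the device answers a "STATUS?" query with a line starting with "OK" (the command names, terminator, and port settings are made up for illustration):

import serial   # pyserial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.5)

# start from a clean slate: drop anything stale in both directions
ser.reset_input_buffer()
ser.reset_output_buffer()

# terminate any partial command the device may still be buffering
ser.write(b"\n")

# poll with a status query until the device gives a sound response
for _ in range(10):
    ser.write(b"STATUS?\n")
    reply = ser.readline().strip()
    if reply.startswith(b"OK"):
        break
else:
    raise RuntimeError("device did not respond sanely; reset or power-cycle it")

# only now start sending configuration / operation commands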