C#: UdpClient not sending data when calling Close() too soon

On some computers I have the strange effect that UdpClient will not send data when UdpClient.Close() is called too soon after a UdpClient.Send().
I'm using .NET 4.0, and Wireshark to verify the packet loss.
The essential part of the code is:
UdpClient sender = new UdpClient();
sender.Connect( new IPEndPoint( this.ipAddress, this.Port ) );
int bytesSent = sender.Send( data, data.Length );
sender.Close();
What is weird:
On most computers the data is sent without problems
There is no exception or other error, even when no packet is sent
bytesSent always equals data.Length
On the computers that do not send packets, a Thread.Sleep( 250 ) right before calling sender.Close() fixes the problem!
So, what could cancel the sending of packets, even after UdpClient.Send() reported the correct amount of bytes? Why is this manifesting only on certain machines? Could this be a different behaviour of some network drivers, anti-virus software or the like?
I also tried setting LingerOptions explicitly, which should be unnecessary as the default is to send all pending data before closing the underlying socket. However, when doing sender.Client.LingerState = new LingerOption( true, 10 ) (as described in http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.lingerstate.aspx) I get
SocketException (0x80004005): An unknown, invalid, or unsupported option
or level was specified in a getsockopt or setsockopt call.
Any ideas what's going on?
Regards,
Seven

OK, this has nothing to do with .NET or my software.
It turns out the virus scanner also scans the complete network traffic. The .NET library functions for sending UDP packets actually did send the packet, but the scanner discards it if UdpClient.Close() is called too soon after the Send() method.
So, there are two possible workarounds (that worked for me):
Introduce a little sleep before calling UdpClient.Close() (about 4 ms is sufficient)
Remove the virus scanner and try another one (Microsoft Security Essentials does not show this effect)
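The failing sequence, translated to a minimal generic-socket sketch (Python rather than C#, purely for illustration; the Sleep-before-close workaround corresponds to the time.sleep call, and the loopback receiver exists only to make the sketch self-contained):

```python
import socket
import time

# Hypothetical receiver on localhost so the sketch is self-contained.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # OS picks a free port
receiver.settimeout(2)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.connect(addr)
data = b"hello"
bytes_sent = sender.send(data)           # reports bytes queued, not delivered
time.sleep(0.004)                        # workaround 1: ~4 ms before close
sender.close()

payload, _ = receiver.recvfrom(1024)
print(bytes_sent, payload)
```

Note that a successful send() only means the datagram was handed to the stack; nothing in the API confirms it survived whatever (here: the virus scanner) sits between the socket and the wire.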

Reading from already open COM (serial) port?

I am trying to control a bench multimeter (GW Instek GDM-8251A) from Excel/VBA.
I could not find a description of the protocol, so I need to reverse engineer it.
The communication method is serial over USB.
So I have loaded into VBA the module from the following page, which offers functions for talking to the COM ports using the Windows API.
http://www.thescarms.com/vbasic/commio.aspx
This had been recommended in a previous SO thread.
The plan was to open the supplied DMMVIEWER application and listen in on the traffic to and from the device.
However, the issue is that when I run the CommOpen function, which itself calls the CreateFile Win32 API, I get the following error if DMMVIEWER is already running.
The exact parameters are as follows
CreateFile(strPort, GENERIC_READ Or _
GENERIC_WRITE, 0, ByVal 0&, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0)
or, with the values substituted:
CreateFile( "COM4", -1073741824, 0, 0, 3, 128, 0 )
This returns the following error.
COM Error: Error (5): CommOpen (CreateFile) - Access is denied.
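For reference, the numeric arguments decode back to the named Win32 constants; a quick sketch (constant values taken from the Windows SDK headers):

```python
GENERIC_READ = 0x80000000
GENERIC_WRITE = 0x40000000
OPEN_EXISTING = 3
FILE_ATTRIBUTE_NORMAL = 0x80

access = GENERIC_READ | GENERIC_WRITE    # desired-access mask, 0xC0000000
# VBA prints the mask as a signed 32-bit Long:
signed_access = access - (1 << 32)
print(signed_access)                     # -1073741824, as in the call above
# The third argument (dwShareMode) is 0: exclusive access. A second
# CreateFile on the same port then fails with error 5 (access denied).
```

This also explains the question in part: with dwShareMode = 0 the handle is exclusive, and serial ports generally refuse sharing regardless, which is why sniffing an already-open port typically needs a filter driver (the approach tools like Portmon take) rather than a second handle.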
I have the alternative of using Wireshark in USB capture mode, but then all the USB overhead makes it even harder to understand the underlying protocol.
Also, doing this from Excel/VBA makes it much easier to experiment with sending data and seeing what happens.
So my question is this: how can I open a COM port in a non-exclusive read mode so that I can sniff serial traffic?
Thanks!
I plan to release my findings about the protocol and the VBA code to the sigrok.org project BTW.
(BTW, I tried freeserialanalyzer.com, but the trial is extremely limited: 15 days and 5 captures, which wasn't even enough to configure the capture properly; the software is also so expensive that I couldn't even find the price, probably around $800. I also tried RealTerm, which can save the serial stream to text, but sniffing an existing connection requires a special driver that I couldn't find and that apparently requires a donation. I tried Portmon, but the capture option remained grayed out.)

Does IPC guarantee message order in Linux?

I need to create a monitor which will log information about missing packets, using ZeroMQ over ipc. I don't fully understand this area, because there are also protocols such as LINX and TIPC. Can you please explain that and answer the main question?
You could make the application self-monitoring, by including a message serial number in each message structure. The message sender keeps track of the serial number it last sent, and increments it every time it sends a message.
The recipient should then be receiving messages with ever-increasing message serial numbers embedded. If that ever jumps by 2 or more, a message has gone missing.
IPC is not lossy like a network can be - the bytes put in come out the other end. TCP is not lossy either, provided both ends are still running and the network itself hasn't failed. However, depending on the ZMQ pattern used and how it is set up, whole messages can go undelivered (for example, if the recipient hasn't connected yet). If that is what you mean by "packet missing", it would be revealed by including an incrementing message serial number.
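The self-monitoring scheme above can be sketched as follows (a minimal illustration; the `Monitor` class and the field name `seq` are made up for the example, not part of any ZeroMQ API):

```python
class Monitor:
    """Tracks embedded message serial numbers and reports gaps."""

    def __init__(self):
        self.last_seq = None
        self.missing = 0

    def on_message(self, seq):
        if self.last_seq is not None and seq > self.last_seq + 1:
            # A jump of 2 or more means messages went missing in between.
            self.missing += seq - self.last_seq - 1
        self.last_seq = seq

mon = Monitor()
for seq in [1, 2, 3, 6, 7]:   # messages 4 and 5 never arrived
    mon.on_message(seq)
print(mon.missing)            # 2
```

The sender side is just a counter incremented on every send and embedded in the message structure; the receiver feeds each extracted serial number into on_message().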

HLS playback performance and stability on Chromecast

I am having issues with HLS playback using the Media Player Library, at least since v0.3.0, and they continue in the current version (v0.5.0). I know the player library is in beta, so I am wondering if others see what I see.
Basically, the issue manifests itself in such a way that, after some time, the Chromecast device becomes unresponsive. The debugger stops showing any output, and closing it and attempting to access it again results in a timeout error. Sometimes, after a while, the device just crashes to the home screen (no brainfreeze).
I tried looking at the profiles and timeline before this happens, and I don't see any unusual spikes. I did notice some errors in the log (though they could be unrelated), saying something like:
An attempt was made to use an object that is not, or is no longer, usable
The only "unusual" thing I am doing is that I broadcast status on every video timeupdate event. This does not cause any such issues in normal playback though.
In the hope that this has been fixed since, here is a workaround for people having the same problem.
I have a receiver streaming HLS (correctly encoded, using CORS headers and AES encryption). I noticed that the Chromecast sometimes goes crazy with huge segments (>25 MB), driving it to crash (almost) randomly when appending such a segment.
Believing that I was probably asking too much of this small device, I found two solutions to lower the device load:
Disabling AES encryption (not always acceptable)
Reducing the segments quality
For solution 2, this works well:
window.host = new cast.player.api.Host( { 'mediaElement': mediaElement, 'url': url } );
window.protocol = cast.player.api.CreateHlsStreamingProtocol( host );
// Override the quality selection: always return one level lower than requested,
// unless we are already at the lowest quality.
window.host.getQualityLevel = function( streamIndex, qualityLevel ) {
    var lowestQuality = protocol.getStreamInfo()["bitrates"].length - 1;
    var plusOneQuality = (qualityLevel == lowestQuality) ? qualityLevel : qualityLevel + 1;
    console.log( "original QualityLevel: " + qualityLevel, "returned QualityLevel", plusOneQuality );
    return plusOneQuality;
};
I'd love to have some feedback about this. Has anyone else had to use such a trick to prevent HD HLS streaming from crashing the device?

Heartbleed: Payloads and padding

I am left with a few questions after reading the RFC 6520 for Heartbeat:
https://www.rfc-editor.org/rfc/rfc6520
Specifically, I don't understand why a heartbeat needs to include arbitrary payloads or even padding for that matter. From what I can understand, the purpose of the heartbeat is to verify that the other party is still paying attention at the other end of the line.
What do these variable-length custom payloads provide that a fixed request and response do not?
E.g.
Alice: still alive?
Bob: still alive!
After all, FTP uses the NOOP command to keep connections alive, which seems to work fine.
There is, in fact, a reason for this payload/padding within RFC 6520
From the document:
The user can use the new HeartbeatRequest message, which has to be answered by the peer with a HeartbeartResponse immediately. To perform PMTU discovery, HeartbeatRequest messages containing padding can be used as probe packets, as described in [RFC4821].

In particular, after a number of retransmissions without receiving a corresponding HeartbeatResponse message having the expected payload, the DTLS connection SHOULD be terminated.

When a HeartbeatRequest message is received and sending a HeartbeatResponse is not prohibited as described elsewhere in this document, the receiver MUST send a corresponding HeartbeatResponse message carrying an exact copy of the payload of the received HeartbeatRequest.

If a received HeartbeatResponse message does not contain the expected payload, the message MUST be discarded silently. If it does contain the expected payload, the retransmission timer MUST be stopped.
Credit to pwg at HackerNews. There is a good and relevant discussion there as well.
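The MUST-requirements quoted above boil down to a byte-exact echo check; a minimal sketch (the function names and dict layout are illustrative only, not the TLS wire format):

```python
import os

def make_request(payload_len=16, padding_len=16):
    # Arbitrary payload plus padding, as RFC 6520 allows.
    return {"payload": os.urandom(payload_len),
            "padding": os.urandom(padding_len)}

def respond(request):
    # The receiver MUST echo an exact copy of the payload;
    # the padding is its own and is never compared.
    return {"payload": request["payload"], "padding": os.urandom(16)}

def check_response(request, response):
    # Mismatching payload => discard silently; match => stop the
    # retransmission timer (here: just return the verdict).
    return response["payload"] == request["payload"]

req = make_request()
resp = respond(req)
print(check_response(req, resp))   # True
```

The random payload is what makes the heartbeat usable as a liveness *and* integrity probe: a stale or forged response will not carry the expected bytes, so it is silently dropped rather than resetting the timer.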
(The following is not a direct answer, but is here to highlight related comments on another question about Heartbleed.)
There are arguments against the protocol design that allowed an arbitrary-length payload - either that there should have been no payload (or even no echo/heartbeat feature) at all, or that a small, fixed-size payload would have been a better design.
From the comments on the accepted answer in Is the heartbleed bug a manifestation of the classic buffer overflow exploit in C?
(R..) In regards to the last question, I would say any large echo request is malicious. It's consuming server resources (bandwidth, which costs money) to do something completely useless. There's really no valid reason for the heartbeat operation to support any length but zero.
(Eric Lippert) Had the designers of the API believed that then they would not have allowed a buffer to be passed at all, so clearly they did not believe that. There must be some by-design reason to support the echo feature; why it was not a fixed-size 4 byte buffer, which seems adequate to me, I do not know.
(R..) .. Nobody thinking from a security standpoint would think that supporting arbitrary echo requests is reasonable. Even if it weren't for the heartbleed overflow issue, there may be cryptographic weaknesses related to having such control over the content the peer sends; this seems unlikely, but in the absence of a strong reason to support a[n echo] feature, a cryptographic system should not support it. It should be as simple as possible.
While I don't know the exact motivation behind this decision, it may have been motivated by the ICMP echo request packets used by the ping utility. In an ICMP echo request, an arbitrary payload of data can be attached to the packet, and the destination server will return exactly that payload if it is reachable and responding to ping requests. This can be used to verify that data is being properly sent across the network and that payloads aren't being corrupted in transit.

Serial port programming - Recognize end of received data

I am writing a serial port application in VC++, in which I open a port on a switch device, send some commands, and display their output. I run a thread that continuously reads the open port for the output of a given command. My main thread waits until the read completes, but the problem is: how do I recognize that the command output has ended, so that I can signal the main thread?
Almost any serial port communication requires a protocol. Some way for the receiver to discover that a response has been received in full. A very simple one is using a unique byte or character that can never appear in the rest of the data. A linefeed is standard, used by any modem for example.
This needs to get more elaborate when you need to transfer arbitrary binary data. A common solution is to send the length of the response first; the receiver can then count down the received bytes to know when the response is complete. This is often embellished with a specific start byte value, so that the receiver has some chance to re-synchronize with the transmitter, and often includes a checksum or CRC so that the receiver can detect transmission errors. A further embellishment is to make errors recoverable with ACK/NAK responses from the receiver. You'd then be well on your way to re-inventing TCP. The RATP protocol in RFC 916 is a good example, albeit widely ignored.
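The length-prefixed scheme described above can be sketched like this (start byte, one-byte length, payload, XOR checksum; every choice here is illustrative, not a standard):

```python
START = 0x02  # chosen start byte (assumption for this sketch)

def frame(payload: bytes) -> bytes:
    """Wrap a payload as: start byte, length, payload, XOR checksum."""
    checksum = 0
    for b in payload:
        checksum ^= b
    return bytes([START, len(payload)]) + payload + bytes([checksum])

def parse(stream: bytes):
    """Yield complete payloads; skip garbage until a start byte resyncs us."""
    i = 0
    while i + 2 <= len(stream):
        if stream[i] != START:
            i += 1                      # re-synchronize on the start byte
            continue
        length = stream[i + 1]
        end = i + 2 + length + 1        # header + payload + checksum
        if end > len(stream):
            break                       # incomplete frame: wait for more bytes
        payload = stream[i + 2:end - 1]
        checksum = 0
        for b in payload:
            checksum ^= b
        if checksum == stream[end - 1]:
            yield payload               # count-down complete, frame is whole
        i = end

data = frame(b"OK") + b"\xff" + frame(b"42")   # two frames plus line noise
print(list(parse(data)))                       # [b'OK', b'42']
```

In the question's setup, the reading thread would feed received bytes into such a parser and signal the main thread each time a complete frame is yielded; a real protocol would also escape the start byte inside payloads, which this sketch omits.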
