Multithreading and packet loss in UDP

Is a multi-threaded client necessary to cause packet loss if both the server and client are on the same machine? What would be the case with a remote server? Suppose I'm sending packets from the client to the server sequentially (in a for loop): is packet loss possible here?

Related

How to ensure the packet can be transferred under connected mode (FEC forward error correction??)

I have two questions about BLE technology.
1) How to ensure the packet can be transferred in connected mode (FEC, forward error correction??)
2) I want to understand the impact of packet loss (in both broadcast and connected mode). For connected mode, if a packet is lost, will the error correction / sequence number / CRC help to detect it and retransmit the packet? For broadcast mode, is there any ACK after the Gateway receives the packet, or does the client device just keep transmitting the same packet information for a while?
I expected BLE to have a correction mechanism, but I am not sure.

netem packet loss in TCP/IP protocol

I'm trying to emulate packet loss for my project, which uses the TCP/IP protocol. The netem tool provides this functionality. The delay works on the loopback interface, but I couldn't make packet loss work. According to the netem website, packet loss is activated as follows:
tc qdisc change dev lo root netem loss 5%
In a client/server app using TCP/IP sockets in C, the client sends the message "Echo this !", and the echoed message the client receives back from the server is intact. As far as I know, TCP/IP guarantees the delivery of packets. Is emulating packet loss impossible with the TCP/IP protocol?
If the packet is lost, TCP will send it again after some delay. If it gets lost again, it will send it again. And so on, up to a maximum of 10 minutes or so, after which it just gives up.
5% packet loss is not completely terrible, and your message is likely to get through after one or two resends, or none at all. Also notice that your whole message fits in one packet, so your programs only send a few packets in total (your message plus the extra ones to connect and disconnect), and it's quite likely that none of them will be lost.
You can try sending a longer message (like a megabyte), and you can try cranking the packet loss up to 25% or 50% (or even higher!). It should take a lot longer to send the message, even without any delay in the network, but your message should get through eventually, unless TCP decides to just give up and disconnect you.

TCP — delivery of packets sent before client closes connection

Let’s say a client opens a TCP connection to a server.
Say the client sends 100 packets.
10 of them reached the server and were picked up by the application.
50 of them reached the server but have not yet been picked up by the application.
40 are still sitting in the client socket buffer because the server’s receive window is full.
Let’s say the client now closes the connection.
Question —
Does the application get the 50 packets before it is told that the connection is closed?
Does the client kernel send the remaining 40 packets to the server before it sends the FIN packet?
Now to complicate matters: if there is a lot of packet loss, what happens to the remaining 40 packets and the FIN? Does the connection still close?
Does the application get the 50 packets before it is told that the connection is closed?
It does.
Does the client kernel send the remaining 40 packets to the server before it sends the FIN packet?
It does.
Now to complicate matters: if there is a lot of packet loss, what happens to the remaining 40 packets and the FIN? Does the connection still close?
The kernel will keep trying to send the outstanding data. The fact that you closed the socket doesn't change things, unless you altered socket options to change this behaviour.

Different order of packets in Wireshark vs tcpdump/libpcap?

I noticed that for the transfer of a one-packet file from a remote FTP site to localhost on Linux, Wireshark always captures the packets in the correct order, but tcpdump/libpcap (or a simple recvfrom on a raw packet socket with promiscuous mode on) does not.
In the former, the "transfer complete" response always comes before the single data packet (they are on different connections, so no TCP reordering is involved), but in the latter the data packet always arrives first - which is clearly wrong according to the protocol and the implementation of FTP servers, since "transfer complete" is sent after the data is sent out - and if the client had received it before the data, it would have stopped waiting on the data connection, which didn't happen, since I can see the data clearly. So libpcap/tcpdump actually captures packets in the wrong order, but Wireshark has no such problem?
How is this possible?? Wireshark also uses libpcap...
For the FTP protocol the payload is transferred on a separate TCP connection, and there are no promises about the order of packets across parallel TCP connections (actually there is no promise about packet order even within the same TCP connection; your host has to reorder them).
The server has two open sockets.
It writes the file to the data socket.
Immediately after that it writes "transfer complete" to the control socket.
The difference between those two writes is several microseconds.
The packets run through the internet and could even take different paths.
They arrive at your machine in effectively random order.
P.S. A tcpdump file also stores a packet number and a timestamp for each record, and records are not necessarily written sorted by timestamp. Wireshark may order them when displaying; take a look at the order field.

Linux application doesn't ACK the FIN packet and retransmission occurs

I have a server running on Linux (kernel 3.0.35) and a client running on Windows 7. Every time the client connects to the server, a TCP retransmission occurs:
Here is the traffic from wireshark:
http://oi39.tinypic.com/ngsnyb.jpg
What I think happens is this:
The client (192.168.1.100) connects to the server (192.168.1.103), and the connection succeeds. At some point the client decides to close the connection (FIN,ACK), but the server doesn't ACK the FIN.
Then the client starts a new connection; that connection is ACKed and succeeds. In the meantime the Windows kernel keeps retransmitting the FIN,ACK packet, and finally decides to send a reset.
At the moment the second connection is established, I don't receive the data the client is sending (the packet with a 16-byte payload) on the server side; I receive those bytes only after the RST packet.
On the server side I'm using the poll() function to check for POLLIN events, and I'm not notified of any data until the RST packet arrives.
So does anyone know why this situation is happening?
Your data bytes are not sent on that 52687 connection but rather the following 52690 connection. I'd guess that the server app is accepting only one connection at a time (the kernel will accept them in advance and just hold the data) and thus not seeing data from the second connection until the first connection is dead and it moves on to the next.
That doesn't explain why your FIN is not being ACK'd. It should. Perhaps there is something in the kernel that doesn't like open-then-close-with-no-data connections? Maybe some kind of attack mitigation? Firewall rules?
