SYN Denial Of Service attack - security

This may be a trivial question. It is regarding SYN cookies.
Why are only half-open connections considered a DoS attack?
It may be possible that a client completes the handshake (SYN, SYN-ACK, ACK) and never replies after that. That also takes up system resources.
So if a client floods the server with complete (SYN, SYN-ACK, ACK) sequences, why is that not considered a DoS attack?

A SYN flood attack, which is what you are describing, is a specific form of Denial of Service (DoS) attack. A DoS can take many forms, often unrelated to SYN requests.
The reason a SYN flood attack is effective is that you can forge the client IP address. This lets a single attacker send a very large number of SYN requests, but since the SYN-ACKs go to forged addresses and are never received, no ACK is ever sent, and the server is left waiting for responses, using up its available connections. A client that completes the handshake by sending the SYN and then the ACK does not tie up connections in the same way. A large number of useless (SYN, SYN-ACK, ACK) sequences would still be a DoS attack, just not such an effective one.

In a SYN flood the client does not have to track state or complete connections. The client produces a number of SYN packets with spoofed IP, which can be done very fast. The server (when not using SYN cookies) consumes resources waiting for each connection attempt to time out or complete. As a result, this is a very effective DoS with leverage of resources consumed by the (DoS-ing) client vs. attacked server in the range of 1:10000.
If the client completes each connection, the leverage disappears: the server no longer has to wait, and the client has to start tracking state. Thus we have 1:1 instead of 1:10000. There is a secondary issue here: buffers for half-open connections are (or were, when this attack was first invented) small compared to those for established connections and are (were) exhausted more easily.
SYN cookies allow the server to forget about the connection immediately after replying to the SYN packet, until it receives an ACK with the correct sequence number. Here again the resource use becomes 1:1.
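To make the cookie idea concrete, here is a minimal sketch in Python of how a server could derive the initial sequence number from the connection's addresses, ports and a coarse timestamp, so that it keeps no per-connection state until a valid ACK arrives. The hashing scheme, field layout and secret handling here are illustrative assumptions, not the exact encoding Linux uses.

    import hashlib, hmac, time

    SECRET = b"per-boot-server-secret"   # hypothetical secret key

    def make_syn_cookie(src_ip, src_port, dst_ip, dst_port, client_isn):
        """Derive an initial sequence number that encodes the connection,
        so the server can later verify the ACK without having stored state."""
        t = int(time.time()) >> 6        # coarse timestamp (64 s granularity)
        msg = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}-{t}".encode()
        digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
        cookie = int.from_bytes(digest[:4], "big")
        # Fold in the client's ISN so the returned ACK number can be checked.
        return (cookie + client_isn) & 0xFFFFFFFF

    def check_ack(src_ip, src_port, dst_ip, dst_port, client_isn, ack_number):
        """A valid third-handshake ACK acknowledges cookie + 1. A real
        implementation would also accept the previous time window."""
        expected = make_syn_cookie(src_ip, src_port, dst_ip, dst_port, client_isn)
        return ack_number == ((expected + 1) & 0xFFFFFFFF)

The point is that everything needed to validate the final ACK can be recomputed from the packet itself, so a flood of SYNs costs the server almost nothing.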

Yes, Sockstress is an old attack, from 2008, that completes TCP handshakes and then lowers the Window size to zero (or some other small value), tying up connections at layer 4, somewhat similar to the SlowLoris layer 7 attack.

Related

TCP sockets connection becomes unreliable with SO_REUSEADDR

I have an app where a single client talks to a single server. Normally, the client does a single connect, and then calls send repeatedly, and there's no problem.
However, I need to do a version where the client sets up a connection for each individual send (a bit like HTTP with and without keep-alive). In this version, the client calls socket, connect, send once, and then close.
The problem with this is that I very quickly run out of ephemeral client ports, and the connect fails. To get around this I call setsockopt with SO_REUSEADDR, and then bind to port 0, before calling connect (see here, for example).
This works, except that the TCP connection is no longer reliable. I get occasional incorrect data, presumably because there's still data around when the TCP connection is closed.
Is there any way to make this reliable (and fast)? shutdown before close doesn't help. Maybe I can get select to tell me if the socket is ready for output, but that seems like overkill.
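For reference, here is a minimal sketch (Python socket API, hypothetical server address) of the connection-per-send pattern described above: SO_REUSEADDR, bind to port 0, connect, send, close. It reproduces the workaround in the question rather than solving its reliability problem.

    import socket

    SERVER = ("192.0.2.10", 9000)   # hypothetical server address

    def send_one(payload: bytes) -> None:
        """One connection per send, as described in the question."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", 0))             # let the OS pick an ephemeral port
        s.connect(SERVER)
        s.sendall(payload)
        s.close()                   # data may still be in flight at this point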
Do you have to use TCP? If so, you will probably have to maintain an open connection and route your messages over that one connection.
There is SCTP, which may be a good fit for your use case - a reliable datagram protocol:
Like TCP, SCTP provides reliable, connection oriented data delivery with congestion control. Unlike TCP, SCTP also provides message boundary preservation, ordered and unordered message delivery, multi-streaming and multi-homing. Detection of data corruption, loss of data and duplication of data is achieved by using checksums and sequence numbers. A selective retransmission mechanism is applied to correct loss or corruption of data.
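Assuming your kernel has SCTP support and your Python build exposes IPPROTO_SCTP, a one-to-one style SCTP socket can be opened with almost the same calls as TCP (the address is hypothetical); this is only a sketch of the idea, not a drop-in replacement:

    import socket

    SERVER = ("192.0.2.10", 9000)   # hypothetical SCTP endpoint

    # One-to-one style SCTP socket: connect/send look like TCP, but each
    # send() is delivered as a single SCTP message (boundaries preserved).
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
    s.connect(SERVER)
    s.send(b"one message")
    s.close()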

Identifying remote disconnection in socket client

How do I find out from a socket client program that the remote connection is down (e.g. the server is down)? When I do a recv and the server is down, it blocks if I do not set any timeout. However, in my case I cannot set a reliable timeout to get around this, because then the recv would also time out when the server is up but the response simply takes longer than the timeout value I have set.
Unfortunately, ZeroMQ just passes this on to the next layer. So the protocol you are implementing on top of ZeroMQ will have to handle this.
Heartbeats are recommended. Basically, just have one side send a message if the connection is otherwise idle. The other side can treat the absence of such messages as a failure condition and close the connection.
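As a rough illustration of the heartbeat idea over a plain TCP socket (not ZeroMQ; the interval, timeout and message format are assumptions):

    import socket, time

    HEARTBEAT_INTERVAL = 5    # seconds between heartbeats (assumed value)
    HEARTBEAT_TIMEOUT = 15    # silence longer than this means the peer is gone

    def send_heartbeats(conn: socket.socket) -> None:
        """Sender side: emit a small heartbeat message periodically."""
        while True:
            time.sleep(HEARTBEAT_INTERVAL)
            conn.sendall(b"PING\n")

    def watch_peer(conn: socket.socket) -> None:
        """Receiver side: close the connection if nothing arrives in time."""
        conn.settimeout(HEARTBEAT_TIMEOUT)
        try:
            while True:
                data = conn.recv(4096)
                if not data:          # orderly FIN from the peer
                    break
                # ... handle application data or PING messages here ...
        except socket.timeout:
            pass                      # no traffic at all: assume the peer is dead
        finally:
            conn.close()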
You may wish to modify your higher-level protocols to be more robust. For example, you can submit a command, query its status, and only later allow the other side to forget about the command. That way, if the connection is lost, you can reconnect and query any outstanding commands. Any commands it doesn't know about you can assume didn't get through, and you can resubmit them. Once you get a reply with the result of a command, you can tell the other side that it can now forget the response.
This allows you to keep the connection active while a long-running command is ongoing. Every so often you ask, "is everything okay". The other side responds, "yes". You can use long polling where the other side delays responding for a second or so while the command is in process. This allows it to return the results immediately rather than having to wait a second for your next query.
The specifics depend on your exact requirements, but you must design this correctly into your protocol.
If the remote host goes down without sending you a TCP FIN packet, then you have no way to detect that. You can test this behaviour by firewalling a port after a connection has been established on it. Your program will "hang" forever.
However, the Linux kernel supports a mechanism called TCP keep-alives, which is meant to detect a dead connection and close it after a given timeout. If you can't specify a timeout for your application, then there is no reliable way to use that either. The last resort is to use features of the application protocol (can you name it?); if that protocol has no provisions for connection handling, you may have to invent something of your own on top of it.
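On Linux, keep-alives can be enabled and tuned per socket; a hedged sketch in Python (the numeric values are just examples):

    import socket

    def enable_keepalive(sock: socket.socket) -> None:
        """Have the kernel probe an idle connection and drop it if the peer is gone."""
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        # Linux-specific tuning (example values): start probing after 60 s of
        # idleness, probe every 10 s, give up after 5 failed probes.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

With these example values a dead peer is detected after roughly 60 + 5 × 10 seconds of silence.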

Socket.IO confirmed delivery

Before I dive into the code, can someone tell me if there is any documentation available for confirmed delivery in Socket.IO?
Here's what I've been able to glean so far:
A callback can be provided to be invoked when and if a message is acknowledged
There is a special mode "volatile" that does not guarantee delivery
There is a default mode that is not "volatile"
This leaves me with some questions:
If a message is not volatile, how is it handled? Will it be buffered indefinitely?
Is there any way to be notified if a message can't be delivered within a reasonable amount of time?
Is there any way to unbuffer a message if I want to give up?
I'm at a bit of a loss as to how Socket.IO can be used in a time sensitive application without falling back to volatile mode and using an external ACK layer that can provide failure events and some level of configurability. Or am I missing something?
TL;DR You can't have reliable confirmed delivery unless you're willing to wait until the universe dies.
The delivery confirmation you seek is related to the theoretical Two Generals Problem, which is also discussed in this SO answer.
TCP manages the reliability problem by guaranteeing delivery after infinite retries. We live in a finite universe, so the word "guarantee" is theoretically dubious :-)
Theory aside, consider this: engine.io, the underpinnings of socket.io 1.x, uses the following transports:
WebSocket
FlashSocket
XHR polling
JSONP polling
Each of those transports is based upon TCP, and TCP is reliable. So as long as connections stay connected and transports don't change, each individual socket.io message or event should be reliable. However, two things can happen on the fly:
engine.io can change transports
socket.io can reconnect in case the underlying transport disconnects
So what happens when a client or your server squirts off a few messages while the plumbing is being fiddled with like that? It doesn't say in either the engine.io protocol or the socket.io protocol (at versions 3 and 4, respectively, as of this writing).
As you suggest in your comments, there is some acknowledgement logic in the implementation. But even simple digital communication has nontrivial behavior, so I do not trust an unsupervised socket.io connection for reliable delivery in mission- or safety-critical operations. That won't change until reliable delivery is part of their protocol and their methods have been independently and formally verified.
You're welcome to adopt my policies:
Number my messages
Ask for a resend when in doubt
Do not mutate my state - client or server - unless I know I'm ready
In Short:
Guaranteed message delivery acknowledgement is proven impossible, but TCP guarantees delivery and order given "infinite" retries. I'm less confident about socket.io messages, but they're really powerful and easy to use so I just use them with care.
I ensured delivery using a few different strategies:
I send data over the socket with a nonce in each message to prevent duplicate-message errors.
The other party sends a confirmation of the received message, or I resend after x seconds.
The client also makes a REST call every 30 seconds to request all new messages sent by the server, to catch any messages dropped in transport.
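A minimal, generic sketch of that resend-until-acknowledged idea (plain Python, not Socket.IO; the message ids, timeout and transport object are assumptions):

    import time, uuid

    RESEND_AFTER = 5.0      # seconds to wait before resending (assumed value)
    pending = {}            # msg_id -> (payload, time of last send)

    def send_reliable(transport, payload: bytes) -> str:
        """Send a message with a unique id and remember it until acknowledged."""
        msg_id = uuid.uuid4().hex
        transport.send(msg_id.encode() + b"|" + payload)
        pending[msg_id] = (payload, time.monotonic())
        return msg_id

    def on_ack(msg_id: str) -> None:
        """The peer confirmed receipt; stop tracking the message."""
        pending.pop(msg_id, None)

    def resend_unacknowledged(transport) -> None:
        """Call periodically: resend anything the peer has not confirmed yet."""
        now = time.monotonic()
        for msg_id, (payload, sent_at) in list(pending.items()):
            if now - sent_at >= RESEND_AFTER:
                transport.send(msg_id.encode() + b"|" + payload)
                pending[msg_id] = (payload, now)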

a UDP socket based rateless file transmission

I'm new to socket programming and I need to implement a UDP based rateless file transmission system to verify a scheme in my research. Here is what I need to do:
I want a server S to send a file to a group of peers A, B, C, etc. The file is divided into a number of packets. At the beginning, peers send a Request message to the server to initiate transmission. Whenever S receives a request from a client, it ratelessly transmits encoded packets to that client (the encoding is my own design; it has erasure-correction capability, which is why I can transmit ratelessly over UDP). The client keeps collecting packets and tries to decode them. When it finally decodes all packets and reconstructs the file successfully, it sends back a Stop message and S stops transmitting to that client.
Peers request the file asynchronously (they may request it at different times), and the server has to serve multiple peers concurrently. The encoded packets for different clients are different (though they are all encoded from the same set of source packets).
Here is what I'm thinking about the implementation. I have not much experience with unix network programming though, so I'm wondering if you can help me assess it, and see if it is possible or efficient.
I'm going to implement the server as a concurrent UDP server with two sockets and ports (similar to TFTP, according to the UNP book). One is for receiving control messages, which in my context are the Request and Stop messages. The server will maintain a flag (initially 1) for each request. When it receives a Stop message from the client, the flag is set to 0.
When the server receives a request, it will fork() a new process that uses the second socket and port to send encoded packets to the client. The server keeps sending packets to the client as long as the flag is 1. When it turns to 0, the sending ends.
The client program is easy to do. Just send a Request, recvfrom() the server, progressively decode the file and send a Stop message in the end.
Is this design workable? My main concerns are: (1) is forking multiple processes efficient, or should I use threads? (2) If I have to use multiple processes, how can the flag bit be known by the child process? Thanks for your comments.
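For what it's worth, the client side described above is only a few lines; a sketch in Python, where `decoder` stands in for the rateless decoding logic and the address is hypothetical:

    import socket

    SERVER = ("192.0.2.10", 9000)   # hypothetical control address of the server

    def receive_file(decoder) -> None:
        """Send Request, collect encoded packets, send Stop once decoding is done."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(b"REQUEST", SERVER)
        while not decoder.done():
            packet, _addr = sock.recvfrom(2048)
            decoder.feed(packet)
        sock.sendto(b"STOP", SERVER)
        sock.close()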
Using UDP for file transfer is not the best idea. There is no way for the server or client to know whether a packet has been lost, so you would only find out during reconstruction, assuming you have some mechanism (like a counter) to detect lost packets. It would then be hard to request just the packets that got lost, and in the end you would have code that does what TCP sockets already do. So I suggest starting with TCP.
Typical design of a server involves a listener thread that spawns a worker thread whenever there is a new client request. That new thread would handle communication with that particular client and then end. You should keep a limit of clients (threads) that are served simultaneously. Do not spawn a new process for each client - that is inefficient and not needed as this will get you nothing that you can't achieve with threads.
Thread programming requires carefulness so do not cut corners. Otherwise you will have hard time finding and diagnosing problems.
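If you do keep the UDP design from the question, the per-client flag maps naturally onto a shared threading.Event once you use threads instead of fork(). A rough sketch (the ports, packet format and the `encoder` object are assumptions):

    import socket, threading

    DATA_PORT = 9001        # hypothetical second port for encoded data packets
    data_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    data_sock.bind(("", DATA_PORT))   # one shared data socket, as in the question
    stop_flags = {}                   # client address -> threading.Event

    def serve_client(client_addr, encoder, stop: threading.Event) -> None:
        """Worker thread: keep sending encoded packets until Stop is flagged."""
        while not stop.is_set():
            data_sock.sendto(encoder.next_packet(), client_addr)

    def control_loop(ctrl_sock: socket.socket, encoder_factory) -> None:
        """Listener thread: handle Request/Stop messages on the control socket."""
        while True:
            msg, addr = ctrl_sock.recvfrom(2048)
            if msg == b"REQUEST":
                stop_flags[addr] = threading.Event()
                threading.Thread(target=serve_client,
                                 args=(addr, encoder_factory(), stop_flags[addr]),
                                 daemon=True).start()
            elif msg == b"STOP" and addr in stop_flags:
                stop_flags[addr].set()    # the worker sees this and stops sending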
File transfer with UDP will be fun :(
Your struct/class for each message should contain a sequence number and a checksum. This should enable each client to detect, and ask for the retransmission of, any missing blocks at the end of the transfer.
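For example, a minimal header with a sequence number and a checksum could be packed like this (the field sizes and the use of CRC32 are assumptions):

    import struct, zlib

    HEADER = struct.Struct("!II")   # 4-byte sequence number, 4-byte CRC32

    def make_packet(seq: int, payload: bytes) -> bytes:
        """Prefix each block with its sequence number and a CRC32 of the payload."""
        return HEADER.pack(seq, zlib.crc32(payload)) + payload

    def parse_packet(packet: bytes):
        """Return (seq, payload), or None if the checksum does not match."""
        seq, checksum = HEADER.unpack_from(packet)
        payload = packet[HEADER.size:]
        return (seq, payload) if zlib.crc32(payload) == checksum else None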
Where UDP might be a huge winner is on a local LAN. You could UDP-broadcast the entire file to all clients at once and then, at the end, ask each client in turn which blocks it has missing and send just those. I wish Kaspersky etc. would use such a scheme for updating all my local boxes.
I have used such a broadcast scheme on a CANBUS network where there are dozens of microControllers that need new images downloaded. Software upgrades take minutes instead of hours.

What is the initial re-transmission timeout for TCP?

I am interested in Linux. I have a client and a server. Let's say the client is trying to connect to the server (which is down). What is the initial retransmission timeout value? Does it exponentially back off?
Now consider the server side: the server receives the SYN packet and sends a SYN-ACK, but then the client dies. What is the retransmission policy for this scenario?
What is the initial re-transmission timeout value?
It is both platform-dependent and configuration-dependent.
Does it exponentially back off?
All retransmissions in TCP do that.
What is the re-transmission policy for this scenario?
See above.
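To make that concrete on Linux: the number of SYN retransmissions is controlled by net.ipv4.tcp_syn_retries, and each retry waits twice as long as the previous one. A small sketch that reads the sysctl and prints the resulting schedule, assuming a 1-second initial RTO (the value recent kernels use per RFC 6298; older kernels used 3 seconds):

    def syn_backoff_schedule(initial_rto: float = 1.0) -> list:
        """Exponential backoff: each SYN retransmission doubles the previous wait."""
        with open("/proc/sys/net/ipv4/tcp_syn_retries") as f:
            retries = int(f.read())
        return [initial_rto * (2 ** i) for i in range(retries)]

    # With the common default of 6 retries and a 1 s initial RTO this prints
    # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]: the waits before each retransmission.
    # The kernel waits one final doubled period after the last SYN, so
    # connect() gives up after roughly two minutes in total.
    print(syn_backoff_schedule())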
