Should source IP address filtering be implemented in the application itself or delegated to the firewall? - linux

Let's say my application has a listening UDP socket, and it knows which IP addresses it could legitimately receive UDP datagrams from. Anything coming from other IP addresses is considered a malicious datagram and should be discarded as early as possible to prevent DoS attacks. The hard part is that the set of legitimate IP addresses can change dynamically over the application's lifetime (e.g. by receiving updates over a control channel).
How would you implement filtering based on the source IP address in the case above?
I see two places where this source IP filtering logic could go:
Implement it in the application itself after recvfrom() call.
Install default drop policy in the Firewall and then let the application install Firewall rules that would dynamically whitelist legit IP addresses.
There are pros and cons to each solution. Some that come to my mind:
iptables could end up with O(n) filtering complexity (con for iptables)
iptables drops packets before they even reach the socket buffer (pro for iptables)
iptables might not be very portable (con for iptables)
iptables from my application could interfere with other applications that potentially would also install iptables rules (con for iptables)
if my application installs iptables rules then it can potentially become attack vector itself (con for iptables)
Where would you implement source IP filtering and why?
Can you name any applications that follow approach #2? (An administrator manually installing static firewall rules does not count.)

My recommendation (with absolutely no authority behind it) is to use iptables to do rate-limiting to dampen any DoS attacks and do the actual filtering inside your application. This will give you the least-bad of both worlds, allowing you to use the performance of iptables to limit DoS throughput as well as the ability to change which addresses are allowed without introducing a potential security hole.
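A minimal sketch of the application-side half of that split, assuming Python; the class and function names (`DynamicAllowlist`, `recv_filtered`) are made up for illustration, and the allowlist is just a lock-guarded set that a control-channel thread can update while the receive loop runs:

```python
import socket
import threading

class DynamicAllowlist:
    """Thread-safe set of permitted source IPs. A control-channel
    thread can add or remove entries while the receive loop runs."""

    def __init__(self):
        self._lock = threading.Lock()
        self._ips = set()

    def add(self, ip):
        with self._lock:
            self._ips.add(ip)

    def remove(self, ip):
        with self._lock:
            self._ips.discard(ip)

    def allowed(self, ip):
        with self._lock:
            return ip in self._ips

def recv_filtered(sock, allowlist, bufsize=2048):
    """Block until a datagram from an allowed source arrives;
    silently drop everything else (the cheap in-app equivalent
    of a firewall DROP)."""
    while True:
        data, (ip, port) = sock.recvfrom(bufsize)
        if allowlist.allowed(ip):
            return data, (ip, port)
```

The per-datagram cost is one O(1) set lookup, and iptables rate-limiting in front of it caps how often this code even runs.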
If you do decide to go about it with iptables alone, I would create a new chain to do the application-specific filtering so that the potential for interference is lowered.
Hope this helps.

Hope this link helps you.
Network layer firewalls, or packet filters, operate at the TCP/IP protocol stack level, not allowing packets to pass through the firewall unless they match the rule set established by the administrator or applied by default. Modern firewalls can filter traffic based on many packet attributes, such as source IP address, source port, destination IP address or port, or destination service like WWW or FTP. They can also filter based on protocols, TTL values, the netblock of the originator, and many other attributes.
Application layer firewalls work on the application level of the TCP/IP stack, intercepting all packets travelling to or from an application, dropping unwanted outside traffic from reaching protected machines, without acknowledgment to the sender. The additional inspection criteria can add extra latency to the forwarding of packets to their destination.
MAC (media access control) address filtering protects vulnerable services by allowing or denying access based on the hardware addresses of the specific devices allowed to connect to a given network.
Proxy servers or services can run on dedicated hardware devices or as software on a general-purpose machine, responding to input packets such as connection requests, while blocking other packets. Abuse of an internal system would not necessarily cause a security breach, although methods such as IP spoofing could transmit packets to a target network.
Network address translation (NAT) functionality allows hiding the IP addresses of protected devices by numbering them with addresses in the "private address range", as defined in RFC 1918. This functionality offers a defence against network reconnaissance.

Related

How to transfer any type of data across two separate networks over UDP without violating cyber security

How can we share any type of data across two separate networks over UDP without violating security mechanisms?
There are a few things you'll have to remember:
Every network has its own firewall, and the firewall rules determine whether your traffic is allowed into the network. First, ask your client or receiver to change their firewall so that it accepts your IP address, and remember that most systems have an edge firewall too.
Be clear about the type of connection, i.e. peer-to-peer or client/server. It's better if it is a client/server connection.
UDP by definition is NOT a connection-oriented protocol, so there is no state to keep track of as far as OSI layers 2-4 are concerned. All incoming UDP datagrams are treated as "new", i.e. the same.
Also, check that neither system is behind NAT: the router remembers the mapping between the device's IP and port only for a while, and if there is any delay in the response from the client side, the NAT device will no longer know the IP or port of the device it is supposed to forward the traffic to.

Test setup on AWS to test TCP transparent proxy (TPROXY) and spoofing sockets

I'm developing a proof-of-concept of some kind of transparent proxy on Linux.
The transparent proxy intercepts TCP traffic and forwards it to a backend.
I use https://www.kernel.org/doc/Documentation/networking/tproxy.txt and spoofing sockets for the outgoing TCP connections.
On my dev PC I was able to emulate the network using Docker, and everything works fine.
But I need to deploy test environment on AWS.
Proposed design:
Three VMs within the same subnet:
client, 192.168.0.2
proxy, 192.168.0.3
backend, 192.168.0.4
On client I add a route to 192.168.0.4 through 192.168.0.3.
On proxy I configure TPROXY to intercept TCP packets and forward them to backend with 192.168.0.2 as the source IP address. This is where our transparent proxy does its work.
On backend I run a simple web server. I also add a route to 192.168.0.2 through 192.168.0.3, otherwise return packets would go directly back to 192.168.0.2.
The question:
Will proposed network design work as expected?
AWS uses some kind of software-defined network, and I don't know whether it will behave the same way as connecting 3 Linux boxes to one Ethernet switch.
Will proposed network design work as expected?
Highly unlikely.
The IP network in VPC that instances can access is, from all appearances, an IP network (Layer 3), not an Ethernet network (Layer 2), even though it's presented to the instances as though it were Ethernet.
The from/to address that is "interesting" to an Ethernet switch is the MAC address. The from/to address of interest to the EC2 network is the IP address. If you tweak your instance's IP stacks by spoofing the addresses and manipulating the route tables, the only two possible outcomes should be one of these: the packets will actually arrive at the correct instance according to the infrastructure's knowledge of where that IP address should exist... or the packets will be dropped by the network. Most likely, the latter.
There is an IP Source/Destination Check Flag on each EC2 instance that disables some of the network's built-in blocking of packets the network would otherwise have considered spoofed, but this should only apply to traffic with IP addresses outside the VPC supernet CIDR block -- the IP address of each instance is known to the infrastructure and not subject to the kind of tweaking you're contemplating.
You could conceivably build tunnels among the instances using the Generic Route Encapsulation (GRE) protocol, or OpenVPN, or some other tunneling solution, and then the instances would have additional network interfaces in different IP subnets where they could directly exchange traffic using a different subnet and rules they make up, since the network wouldn't see the addresses on the packets encapsulated in the tunnels, and wouldn't impose any restrictions on the inner payload.
Possibly related: In a certain cloud provider other than AWS, a provider with a network design that is far less sensible than VPC, I use inter-instance tunnels (built with OpenVPN) to build my own virtual private subnets that make more sense than what that other cloud provider offers, so I would say this is potentially a perfectly viable alternative -- the increased latency of my solution is sub-millisecond.
But this all assumes that you have a valid reason for choosing a solution involving packet mangling. There should be a better, more inside-the-box way of solving the exact problem you are trying to solve.

Open vSwitch, in-band control: How it works?

I'm trying to measure the impact of control traffic on Open vSwitch performance when using in-band connections.
For this task I need to count the messages sent from the controller to every switch in a network that uses in-band control.
I'm trying to understand how the controller installs flows into Open vSwitch over an in-band connection.
I've created an example topology using mininet and this article:
http://tocai.dia.uniroma3.it/compunet-wiki/index.php/In-band_control_with_Open_vSwitch
The topology contains 5 switches connected in a chain (as shown in the first picture of the article).
The controller is launched on the h3 host. In my case the POX controller is used. And all is pingable.
So when I sniff the traffic on the s1 ... s5 interfaces, I see that OpenFlow messages (PacketIn, PacketOut, etc.) appear only on the s3 interface. On the other interfaces I don't see any TCP or OpenFlow packets.
The question is: how does the controller install new flows on the s1, s2, s4 and s5 switches? And how are the controller's messages delivered to a switch that is not directly connected to the controller?
Thanks.
Look no further than the OVS documentation! The OpenVSwitch design document has a section describing this in detail:
Design Decisions In Open vSwitch:
Github, direct link to In-Band Control section
official HTML
official plaintext
The implementation subsection says:
Open vSwitch implements in-band control as "hidden" flows, that is,
flows that are not visible through OpenFlow, and at a higher priority
than wildcarded flows can be set up through OpenFlow. This is done so
that the OpenFlow controller cannot interfere with them and possibly
break connectivity with its switches. It is possible to see all flows,
including in-band ones, with the ovs-appctl "bridge/dump-flows"
command.
(...)
The following rules (with the OFPP_NORMAL action) are set up on any
bridge that has any remotes:
(a) DHCP requests sent from the local port.
(b) ARP replies to the local port's MAC address.
(c) ARP requests from the local port's MAC
address.
In-band also sets up the following rules for each unique next-hop MAC
address for the remotes' IPs (the "next hop" is either the remote
itself, if it is on a local subnet, or the gateway to reach the
remote):
(d) ARP replies to the next hop's MAC address.
(e) ARP requests from the next hop's MAC address.
In-band also sets up the following rules for each unique remote IP
address:
(f) ARP replies containing the remote's IP address as a target.
(g) ARP requests containing the remote's IP address as a source.
In-band also sets up the following rules for each unique remote
(IP,port) pair:
(h) TCP traffic to the remote's IP and port.
(i) TCP traffic from the remote's IP and port.
The goal of these rules is to be as narrow as possible to allow a
switch to join a network and be able to communicate with the remotes.
As mentioned earlier, these rules have higher priority than the
controller's rules, so if they are too broad, they may prevent the
controller from implementing its policy. As such, in-band actively
monitors some aspects of flow and packet processing so that the rules
can be made more precise.
In-band control monitors attempts to add flows into the datapath that
could interfere with its duties. The datapath only allows exact match
entries, so in-band control is able to be very precise about the flows
it prevents. Flows that miss in the datapath are sent to userspace to
be processed, so preventing these flows from being cached in the "fast
path" does not affect correctness. The only type of flow that is
currently prevented is one that would prevent DHCP replies from being
seen by the local port. For example, a rule that forwarded all DHCP
traffic to the controller would not be allowed, but one that forwarded
to all ports (including the local port) would.
The document also contains more information about special cases and potential problems, but is quite long so I'll omit it here.

ip masking a specific ip address

Suppose I detect a user's IP address in code in order to enforce restrictions.
Is there a way for a user to circumvent this by arbitrarily setting their IP to any address they want, any time they want (e.g. via a proxy server or something similar), thereby choosing which specific IP is displayed when I detect it?
There are several tunneling and proxy-based techniques that will effectively present a different IP address for any HTTP requests than the one belonging to the originating computer. I have mentioned several methods in this answer, as well as here. In many cases it is actually impossible to tell apart a relayed connection from the real thing...
In general you cannot forge the source of a TCP connection on the Internet to be an arbitrary address for a number of reasons, some of which are:
TCP is a stateful protocol and packets go back and forth even in order to establish the connection. Forging an IP source address means that you cannot get any packets back.
Some ISPs will drop packets generated within their own network that do not have a source IP within the proper subnet. This is usually done at the client connection level to protect the ISP network and prevent random packets from leaking client information to the Internet due to simple misconfiguration issues client-side.
ISP filters will also prevent users from just setting an arbitrary IP - if not for any other reason, then just because ISPs sell connections with static IP addresses at significantly higher prices and having users set their own IPs would spoil that. Not to mention the mess that would result if there could be IP conflicts among the clients of an ISP...
So in general you cannot just spoof the source of a TCP connection. You have to use an intermediate computer to relay the connection.
Keep in mind, however, that motivated and experienced attackers may have at their disposal botnets consisting of millions of compromised computers belonging to innocent users. Each and every one of those computers could theoretically be used as a connection relay, allowing a potential attacker quite a wide variety of IP addresses to choose from.
The bottom line is that simple IP-based checks and filters cannot in any form ensure the legitimacy of a connection. You should be using additional methods to protect your service:
HTTPS and proper user accounts.
Extensive logging and monitoring of your service.
Intrusion detection systems and automatic attack responders (be careful with those - make sure you don't lock yourself out!).
We cannot really provide a more concrete answer unless you tell us what service you are providing, what restrictions you want to apply and what kind of attacks you are so worried about...
Sort of - as you mentioned, proxies are a risk; however, IP bans make life a wee bit harder for the attacker, so they are still worth using.
Monitor your logs, automate alerts, and if attacks come from another IP, ban it too. If you make life hard enough for an attacker, they may give up.
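A toy version of that ban-after-repeated-offences logic, with hypothetical thresholds (5 suspicious events within 60 seconds) and an injectable clock so the window behaviour can be tested:

```python
import time
from collections import defaultdict

FAIL_LIMIT = 5   # hypothetical: events before banning
WINDOW = 60.0    # hypothetical: sliding window, in seconds

class BanList:
    """Track suspicious events per IP and ban repeat offenders."""

    def __init__(self, now=time.monotonic):
        self.now = now                   # injectable clock for testing
        self.events = defaultdict(list)  # ip -> event timestamps
        self.banned = set()

    def record(self, ip):
        """Record a suspicious event from ip.
        Returns True if the IP is (now) banned."""
        if ip in self.banned:
            return True
        t = self.now()
        # keep only events still inside the sliding window
        hits = [h for h in self.events[ip] if t - h < WINDOW]
        hits.append(t)
        self.events[ip] = hits
        if len(hits) >= FAIL_LIMIT:
            self.banned.add(ip)
        return ip in self.banned
```

In production the ban would feed into an actual enforcement point (an iptables rule, fail2ban, or similar) rather than just an in-memory set.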

Is authenticating a TCP connection by source IP safe?

I'm developing an application that accepts connections from clients over the internet. All these clients are on fixed IP addresses and will establish a non-encrypted TCP connection.
The current plan is for the server to check which IP each connection comes from and allow only client connections from a list of known IPs.
How safe is that against IP spoofing?
My reasoning is that since this is a TCP connection, an attacker couldn't just fake the sender IP (which is easy); he would also have to ensure that the packets travel back to him, and thus would have to compromise routers on the path, which seems rather hard.
I know I could use encryption, like SSH, but let's stick with the question of how safe the plain TCP connection would be.
Restricting connections by IP address is generally a good practice when practical, as it greatly reduces the attack surface and makes the complexity of an attack much higher. As stated in other answers, you would now have to do something like IP spoofing, or attacking the network itself (false BGP routes, etc).
That said, IP address restriction should be used as one layer of a defence-in-depth approach. Could you encrypt the TCP stream without too much rework? Maybe SSL? If you can't modify the program, how about the network? Site-to-site IPsec VPN tunnels are not difficult to establish, as almost any commercial firewall supports them. Even some SOHO routers can be modified to support IPsec (with OpenWrt plus Openswan, for example).
Lastly, could you require the client and server to mutually authenticate?
Not safe. BGP gateways are not immune to attack, and with that, false routes can be advertised and IPs can be spoofed.
First of all, using the IP you are not identifying the client, just some numbers. Even if the IP is right, there could still be a trojan on the user's computer, authenticating in place of the user (as I don't know what kind of service you provide, I assume this might be relevant).
Now, if someone has access to one of the routers through which the packets between the client and the server travel, then he can do almost anything: he can send and receive packets in the name of the client, or modify them (as the data goes unencrypted). Moreover, the attacker doesn't need to hack all the routers, or even one - he just needs access (including legitimate access) to the channel the data traverses, be it a router itself or the cable (which can be cut and a router inserted).
So to summarize: the IP can be used as one of the components that makes spoofing harder to some extent, but it can't be the main security measure.
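As a sketch of using the source IP as just one such component, here is a hypothetical TCP accept wrapper (the function name `accept_if_allowed` is made up) that drops connections from unknown addresses before any application logic runs; real authentication would still have to follow on the accepted socket:

```python
import socket

def accept_if_allowed(listener, allowed_ips):
    """Accept one connection; close it immediately if the peer's
    source IP is not in the allowlist. This only narrows the
    attack surface - it is not authentication."""
    conn, (ip, port) = listener.accept()
    if ip not in allowed_ips:
        conn.close()
        return None
    return conn
```

Anything that survives this check should still be authenticated and, ideally, encrypted.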
