Firewall status in Resource Monitor

In Resource Monitor, under the Network tab, there is a section called Listening Ports with a column called Firewall Status. It has values like:
1) Allowed, not restricted
2) Not allowed, not restricted
3) Allowed, restricted
My understanding is that the first word indicates whether incoming traffic will be allowed or not.
(When I tested after disabling the firewall, everything changed to "allowed".)
My understanding of the second word is that it indicates whether there is a rule restricting that connection. But in that case "allowed, restricted" makes no sense, because if a connection is restricted, how can it be allowed?
There is also an option to either block or allow all connections that do not match any available rule.
Could anyone please explain these values in detail?

My understanding of "Allowed, restricted":
The firewall allows access to that port, but with some restrictions.
For example, there may be a rule in the firewall blocking access to that port for certain IP addresses, or allowing access from the local subnet only.

Related

Should source IP address filtering be implemented in the Application layer itself or delegated by Application to the Firewall

Let's say my application has a listening UDP socket and it knows which IP addresses it can receive UDP datagrams from. Anything coming from other IP addresses is considered a malicious datagram and should be discarded as early as possible to prevent DoS attacks. The hard part is that the set of legitimate IP addresses can change dynamically over the application's lifetime (e.g. by receiving them over a control channel).
How would you implement filtering based on the source IP address in the case above?
I see two places to put this source IP filtering logic:
1) Implement it in the application itself, after the recvfrom() call.
2) Install a default drop policy in the firewall, and let the application dynamically install firewall rules that whitelist legitimate IP addresses.
There are pros and cons to each solution. Some that come to mind:
iptables could end up with O(n) filtering complexity (con for iptables)
iptables drops packets before they even reach the socket buffer (pro for iptables)
iptables might not be very portable (con for iptables)
iptables rules installed by my application could interfere with other applications that also install iptables rules (con for iptables)
if my application installs iptables rules, it can potentially become an attack vector itself (con for iptables)
Where would you implement source IP filtering and why?
Can you name any applications that follow approach #2 (an administrator manually installing static firewall rules does not count)?
My recommendation (with absolutely no authority behind it) is to use iptables to do rate-limiting to dampen any DoS attacks and do the actual filtering inside your application. This will give you the least-bad of both worlds, allowing you to use the performance of iptables to limit DoS throughput as well as the ability to change which addresses are allowed without introducing a potential security hole.
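The application-side half of this recommendation could be sketched as follows. This is a minimal, hypothetical example: the class and method names are my own, and the control channel that updates the allowlist is assumed to run in another thread (hence the lock).

```python
import socket
import threading

class FilteringUDPServer:
    """UDP listener that drops datagrams from senders outside a
    dynamically updated allowlist (approach #1 above)."""

    def __init__(self, host="127.0.0.1", port=0):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind((host, port))
        self._allowed = set()
        self._lock = threading.Lock()   # control channel updates run concurrently

    def allow(self, ip):
        with self._lock:
            self._allowed.add(ip)

    def revoke(self, ip):
        with self._lock:
            self._allowed.discard(ip)

    def is_allowed(self, ip):
        with self._lock:
            return ip in self._allowed

    def recv(self, bufsize=1500):
        # Return (data, addr) from the first allowed sender;
        # silently discard datagrams from unknown sources.
        while True:
            data, addr = self.sock.recvfrom(bufsize)
            if self.is_allowed(addr[0]):
                return data, addr
```

The filtering happens after the kernel has already queued the datagram, which is exactly the cost the iptables rate-limiting in front is meant to bound.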
If you do decide to go about it with iptables alone, I would create a new chain to do the application-specific filtering so that the potential for interference is lowered.
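A sketch of what such a dedicated chain might look like, assuming root privileges and a standard iptables binary. The chain name is hypothetical, and these helpers only build the argument lists (e.g. for subprocess.run) rather than execute anything:

```python
CHAIN = "MYAPP_ALLOW"   # hypothetical application-specific chain name

def setup_commands(port):
    # Create the dedicated chain, route UDP traffic for our port
    # through it, and end the chain with a default DROP.
    return [
        ["iptables", "-N", CHAIN],
        ["iptables", "-A", "INPUT", "-p", "udp",
         "--dport", str(port), "-j", CHAIN],
        ["iptables", "-A", CHAIN, "-j", "DROP"],
    ]

def allow_command(ip):
    # Insert an ACCEPT rule ahead of the chain's final DROP,
    # whitelisting one source address.
    return ["iptables", "-I", CHAIN, "1", "-s", ip, "-j", "ACCEPT"]
```

Because the application only ever touches its own chain, rules installed by other software in INPUT are left alone.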
Hope this helps.
Hope this link helps you.
Network layer firewalls, or packet filters, operate at a low level of the TCP/IP protocol stack, not allowing packets to pass through the firewall unless they match the rule set defined by the administrator or applied by default. Modern firewalls can filter traffic based on many packet attributes, such as source IP address, source port, destination IP address or port, or destination service like WWW or FTP. They can also filter based on protocols, TTL values, the netblock of the originator, and many other attributes.
Application layer firewalls work at the application level of the TCP/IP stack, intercepting all packets travelling to or from an application and dropping unwanted outside traffic before it reaches protected machines, without acknowledgment to the sender. The additional inspection criteria can add extra latency to the forwarding of packets to their destination.
MAC (media access control) address filtering protects vulnerable services by allowing or denying access based on the hardware MAC addresses of the specific devices allowed to connect to a given network.
Proxy servers or services can run on dedicated hardware devices or as software on a general-purpose machine, responding to input packets such as connection requests, while blocking other packets. Abuse of an internal system would not necessarily cause a security breach, although methods such as IP spoofing could transmit packets to a target network.
Network address translation (NAT) functionality allows hiding the IP addresses of protected devices by numbering them with addresses in the "private address range", as defined in RFC 1918. This functionality offers a defence against network reconnaissance.

Restricting daemons from opening certain ports on Linux

I want to restrict daemons from opening certain ports, and I wish to achieve this at the kernel level.
One idea I came across is to write my own bind function and then redirect to the original bind function. But a user can bypass this by invoking the system call directly. Any suggestions?
Just a thought: there's a chance that iptables could do the work for you.
Using iptables you can define a rule that denies outgoing traffic from a port.
This solution may work if you can identify the daemon's traffic using iptables options. It will not work if you can only identify the daemon's traffic by its process ID.
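As a hedged illustration of that idea, the rules might be built like this (hypothetical helper names; the commands are only constructed, not executed). The owner match with --uid-owner lets you target a daemon running as a specific user, which sidesteps the process-ID limitation, since matching by PID is not supported by modern iptables:

```python
def block_sport_rule(port, proto="tcp"):
    # Drop any outgoing packet whose source port is `port`.
    return ["iptables", "-A", "OUTPUT", "-p", proto,
            "--sport", str(port), "-j", "DROP"]

def block_user_rule(port, uid, proto="tcp"):
    # Same, but only for traffic generated by a specific user ID,
    # so daemons running as other users are unaffected.
    return ["iptables", "-A", "OUTPUT", "-p", proto,
            "--sport", str(port),
            "-m", "owner", "--uid-owner", str(uid), "-j", "DROP"]
```

Note that this blocks the traffic, not the bind() itself; the daemon can still open the port, it just cannot send anything from it.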

Two external IPs one WebServer/Website

I'm facing the following dilemma: I have a website on IIS with two internal IPs, and each of those IPs is NATed to a different external IP (each from a different ISP). I also configured a round-robin DNS service (two A records with the same name but different IPs). Basically, this balances traffic between the two ISPs, which is what we want. The thing is, this configuration (DNS round-robin) is apparently meant for a cluster of servers, where each server has its own ISP on its own NIC, so that traffic from the web server to the client goes out over that ISP.
Right now we are told that no matter where our inbound traffic comes from, the outbound traffic always goes through our main WAN link. That is also OK, because we have tested that when the primary WAN link is down, the website keeps working over the secondary link.
So, the question is: do you think there may be a problem with this configuration? Is DNS round-robin also useful in this setup?
Thanks a lot for your feedback.
Normally, when you host a web service, the responses are much bigger than the inbound traffic (you typically receive an HTTP GET and deliver the whole content back), so it would make much more sense to balance the outbound traffic over your ISPs to get value out of your additional bandwidth.
Does it make sense? Yes: you can lose one ISP and your site is still available (assuming your DNS server does health checks to determine whether the sites are reachable before handing out an IP address; if you always deliver both IPs, even when one ISP is down, it won't help you at all).
It would be better to add an additional server, or to do policy-based routing on your single server, sending each response out of the interface on which the request was received.
Hope that helps!

Referrer IP manipulation

Some applications provide security only by accepting requests from certain IPs. Is this a good way of securing an app? Is there any way to manipulate this source IP during the request?
getRemoteAddr, getRemoteHost and getRemotePort
Is there any way to set the values above when making the request?
Yes, it is possible to "spoof" the source IP of packets so that a request appears to come from a different IP address than it really does. However, this is usually not a concern, because the TCP three-way handshake will not complete if the IP address has been spoofed, with a few exceptions (such as an attacker sniffing packets on the wire and generating responses). Generally speaking, though, it is very hard to do.
Even so, relying on source IPs is not good security practice, even though they are typically reliable. The reason is that IP addresses can be assumed by anyone, and they are frequently rewritten in packets by techniques like NAT and firewalling.
Consider two users on the same private network behind NAT who both make requests to your server at the same time: your server will see the same IP address for both, with different source ports. The differentiating factor that allows routing to happen properly is the source port, not the IP address. To make this even less reliable, the source port changes on every new request, which can happen dozens of times during a single HTTP session.
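This point can be demonstrated on localhost: two connections from the same source IP are distinguished only by their source ports. A minimal sketch (in Python rather than the servlet API from the question; the server simply records each peer address as it would see it):

```python
import socket
import threading

def accept_two(server, results):
    # Record the (ip, source_port) pair the server sees for each peer.
    for _ in range(2):
        conn, addr = server.accept()
        results.append(addr)
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(2)
results = []
t = threading.Thread(target=accept_two, args=(server, results))
t.start()

# Two clients from the same host: same IP, different ephemeral ports.
c1 = socket.create_connection(server.getsockname())
c2 = socket.create_connection(server.getsockname())
t.join()
c1.close()
c2.close()
server.close()
```

Both entries in `results` share the IP 127.0.0.1; only the port column differs, which is exactly what a server behind NAT observes for two clients on one private network.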
That being said, there is some benefit to IP filtering. You can make it much harder for someone from a certain country or area to connect by filtering on IP. This should not be your only security measure, but it can help, because it is usually non-trivial to obtain a valid IP address from a different range. Some organizations block all non-US-based IPs by default, for example, in conjunction with user accounts. This makes it much more difficult for non-local attackers to reach the server.

IP masking a specific IP address

Suppose I detect a user's IP with some code in order to enforce restrictions.
Is there a way for a user to circumvent this by arbitrarily setting their IP to any IP they want, any time they want (e.g. via a proxy server or something), hence allowing them to choose the specific IP that is displayed when I detect it?
There are several tunneling and proxy-based techniques that will effectively present a different IP address for any HTTP requests than the one belonging to the originating computer. I have mentioned several methods in this answer, as well as here. In many cases it is actually impossible to tell apart a relayed connection from the real thing...
In general you cannot forge the source of a TCP connection on the Internet to be an arbitrary address for a number of reasons, some of which are:
TCP is a stateful protocol and packets go back and forth even in order to establish the connection. Forging an IP source address means that you cannot get any packets back.
Some ISPs will drop packets generated within their own network that do not have a source IP within the proper subnet. This is usually done at the client connection level to protect the ISP network and prevent random packets from leaking client information to the Internet due to simple misconfiguration issues client-side.
ISP filters will also prevent users from just setting an arbitrary IP - if for no other reason, then because ISPs sell connections with static IP addresses at significantly higher prices, and having users set their own IPs would spoil that. Not to mention the mess that would result if there were IP conflicts among the clients of an ISP...
So in general you cannot just spoof the source of a TCP connection. You have to use an intermediate computer to relay the connection.
Keep in mind, however, that motivated and experienced attackers may have at their disposal botnets that consist of millions of compromised computers belonging to innocent users. Each and every one of those computers could theoretically be used as a connection relay, thus allowing a potential attacker quite a wide variety of IP addresses to choose from.
The bottom line is that simple IP-based checks and filters cannot in any form ensure the legitimacy of a connection. You should be using additional methods to protect your service:
HTTPS and proper user accounts.
Extensive logging and monitoring of your service.
Intrusion detection systems and automatic attack responders (be careful with those - make sure you don't lock yourself out!).
We cannot really provide a more concrete answer unless you tell us what service you are providing, what restrictions you want to apply and what kind of attacks you are so worried about...
Sort of. As you mentioned, proxies are a risk; however, it makes life a wee bit harder for the attacker, so it is still worth using IP bans.
Monitor your logs, automate alerts, and if attacks come from another IP, ban it too. If you make life hard enough for an attacker, they may give up.
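A minimal sketch of such an automated ban list (the class name and threshold are hypothetical; in practice you would feed this from your logs and expire old entries):

```python
from collections import Counter

class BanList:
    """Count failed attempts per source IP; ban once a threshold is hit."""

    def __init__(self, threshold=5):
        self.threshold = threshold
        self.failures = Counter()
        self.banned = set()

    def record_failure(self, ip):
        # Called by the log monitor / alert pipeline on each failed attempt.
        if ip in self.banned:
            return
        self.failures[ip] += 1
        if self.failures[ip] >= self.threshold:
            self.banned.add(ip)

    def is_banned(self, ip):
        return ip in self.banned
```

The `banned` set would then drive whatever enforcement you use, such as firewall rules or an application-level reject.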