Allow or block access to a port in Linux

For the project that I am currently working on, the task is to read a file from disk that has the following format:
port number [in/out/both]
So, if a port number is followed by in, only inbound connections are allowed. If it is followed by out, only outbound connections are allowed, and if it is followed by both, connections are allowed in both directions. All other ports must be blocked.
One way to do this is to read the file at boot time, store each port and type in a data structure kept in memory, and, when a process tries to use a port, grant or deny access based on that data structure.
The problem is, I don't know how to actually implement this; I just need a push in the right direction. I know this can be done using iptables, but that is not allowed.
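For concreteness, this is roughly the kind of rule table and parsing I have in mind (a user-space C sketch just to illustrate; all names are placeholders, and kernel code would read the file differently):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical rule table built from the config file; names are placeholders. */
enum direction { DIR_IN, DIR_OUT, DIR_BOTH };

struct port_rule {
    unsigned short port;
    enum direction dir;
};

#define MAX_RULES 1024
static struct port_rule rules[MAX_RULES];
static int nrules;

/* Parse lines of the form "<port> in|out|both" into the rule table. */
static int load_rules(const char *path)
{
    FILE *f = fopen(path, "r");
    char dir[8];
    unsigned port;

    if (!f)
        return -1;
    while (nrules < MAX_RULES && fscanf(f, "%u %7s", &port, dir) == 2) {
        rules[nrules].port = (unsigned short)port;
        if (strcmp(dir, "in") == 0)
            rules[nrules].dir = DIR_IN;
        else if (strcmp(dir, "out") == 0)
            rules[nrules].dir = DIR_OUT;
        else
            rules[nrules].dir = DIR_BOTH;
        nrules++;
    }
    fclose(f);
    return nrules;
}
```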

As a start on Linux kernel coding and for some parts of your problem, you might find this useful:
Storing struct array in kernel space, Linux
EDIT:
For your specific problem of packet filtering, I would suggest that you use the netfilter framework from within the kernel to set up the proper rules that will do what you want. Creating your own packet-filtering framework is probably way too complex - plus it's generally not a good idea to reinvent the wheel.
The netfilter subsystem is quite modular, so you might consider just writing another module for it with your intended functionality.
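To give a feel for what such a module involves, here is a minimal sketch of a netfilter hook that drops inbound TCP traffic to ports you have not allowed. allowed_inbound() is a placeholder for a lookup into your rule table, and the hook prototype and nf_register_net_hook() call shown match recent kernels (older kernels use nf_register_hook and a different signature):

```c
#include <linux/module.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <net/net_namespace.h>

/* Placeholder: look the port up in your in-memory rule table. */
static bool allowed_inbound(__be16 port)
{
    return port == htons(22);  /* example: only allow inbound SSH */
}

static unsigned int port_filter_hook(void *priv, struct sk_buff *skb,
                                     const struct nf_hook_state *state)
{
    struct iphdr *iph = ip_hdr(skb);
    struct tcphdr *tcph;

    if (iph->protocol != IPPROTO_TCP)
        return NF_ACCEPT;

    tcph = tcp_hdr(skb);
    if (!allowed_inbound(tcph->dest))
        return NF_DROP;

    return NF_ACCEPT;
}

static struct nf_hook_ops port_filter_ops = {
    .hook     = port_filter_hook,
    .pf       = NFPROTO_IPV4,
    .hooknum  = NF_INET_LOCAL_IN,
    .priority = NF_IP_PRI_FIRST,
};

static int __init port_filter_init(void)
{
    return nf_register_net_hook(&init_net, &port_filter_ops);
}

static void __exit port_filter_exit(void)
{
    nf_unregister_net_hook(&init_net, &port_filter_ops);
}

module_init(port_filter_init);
module_exit(port_filter_exit);
MODULE_LICENSE("GPL");
```

A second hook at NF_INET_LOCAL_OUT would handle the outbound direction in the same way.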

Related

How do Transfer Protocols work?

Hypothetically, let's say that I wanted to study or create a transfer protocol such as HTTP, FTP, or PTP. How would I go about doing so? What do I need to know about the internet and servers, and what do I need to build to be able to send and receive data through my own homemade transfer protocol?
That's a little backwards.
First you have a problem you need to solve that involves multiple machines.
Then you write software to solve it, which requires communication between those machines.
The details of that communication are called a 'protocol'.
Since the protocol is the interface between machines, it's beneficial if it is generic enough to let you swap out the software on one side or the other.
In this way, HTTP was invented to serve web pages to browsers, FTP was invented to let users transfer files, etc. The details of the protocol indicate the elements of communication required to solve the problem in the desired way.
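To make that concrete, a homemade protocol can be as small as one agreed-upon request line and a reply over a TCP socket. In this toy client the "TIME" request, the port, and the address are all invented for illustration; the "protocol" is nothing more than the agreement about what those bytes mean:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* Toy protocol: client sends "TIME\n", server replies with one line. */
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(7000),              /* port chosen arbitrarily */
    };
    char reply[256];
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    ssize_t n;

    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;

    write(fd, "TIME\n", 5);                     /* the request */
    n = read(fd, reply, sizeof(reply) - 1);     /* the response */
    if (n > 0) {
        reply[n] = '\0';
        printf("server said: %s", reply);
    }
    close(fd);
    return 0;
}
```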

RPCGEN over Unix domain sockets

My requirement is to make RPC calls between different processes. By nature these calls are one-to-one, meaning a single sender and a single receiver. I am architecturally restricted to using only Unix domain sockets for this purpose.
I wanted to use 'rpcgen' towards this end. But the problem is that rpcgen works over TCP/UDP as transport mechanisms. What I want is to run the calls over domain sockets. Given that rpcgen doesn't support domain sockets, I figured I could stub out the transport routines with my own code after generation to accomplish what I need. But that does not look easy at all.
I explored an option where the generated XDR stream is written to a local buffer, which can then be transported the way I want, i.e. over domain sockets. Maybe I can retrieve it at the remote end to make it work. This might involve another copy of the data, but performance is not my concern at this point in time.
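Roughly, the buffer option I am exploring would look something like this (just a sketch; xdrmem_create comes from the SunRPC/libtirpc headers, and the socket write at the end is how I would ship the bytes over the domain socket):

```c
#include <unistd.h>
#include <rpc/rpc.h>
#include <rpc/xdr.h>

/* Sketch of the buffer idea: XDR-encode an argument into local memory,
 * then ship the encoded bytes over an already-connected AF_UNIX socket. */
int send_xdr_int(int unix_sock_fd, int value)
{
    char buf[1024];
    XDR xdrs;
    u_int len;

    xdrmem_create(&xdrs, buf, sizeof(buf), XDR_ENCODE);
    if (!xdr_int(&xdrs, &value))
        return -1;
    len = xdr_getpos(&xdrs);        /* number of encoded bytes */
    xdr_destroy(&xdrs);

    return write(unix_sock_fd, buf, len) == (ssize_t)len ? 0 : -1;
}
```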
Is there a ready-made solution for this kind of problem? What are my best options here?
Thanks
Sudarshan

File access count in Linux

Is there a way to efficiently determine the number of accesses to a specific file, and the processes that accessed it, without a third-party tool storing the access info? I'm looking for something built into Linux-based operating systems. The date of the last change is obvious enough, but I need at least the number of times the file has been accessed since it was created.
Can anyone shed some light on this file access information? Is it stored somewhere?
No, it is not stored. That would be a very odd feature.
You can monitor access to a file and count what you need yourself.
You can write your own program to do this with inotify. Here is a rather nice introduction.
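A minimal sketch of that inotify approach (user-space C; the path is just an example, and this only counts events, it cannot tell you which process did the access):

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void)
{
    /* Buffer aligned as the inotify man page recommends. */
    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
    int count = 0;
    int fd = inotify_init1(0);

    /* Watch a single file for open and read-access events; path is an example. */
    inotify_add_watch(fd, "/tmp/watched.txt", IN_OPEN | IN_ACCESS);

    for (;;) {
        ssize_t len = read(fd, buf, sizeof(buf));
        ssize_t i = 0;

        while (i < len) {
            struct inotify_event *ev = (struct inotify_event *)&buf[i];
            count++;
            printf("event mask 0x%x, total accesses so far: %d\n",
                   ev->mask, count);
            i += sizeof(*ev) + ev->len;
        }
    }
    return 0;
}
```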
Another option is the Linux audit subsystem. That way you set up rules telling the kernel which files you are interested in, and later you can check the logs to get whatever statistics you need (including which process made each access). Here is a short tutorial.

flow-based traffic classification for traffic shaping

I’m wondering if there are ways to achieve flow-based traffic shaping with Linux.
Traditional traffic shaping approaches seem to be based on creating classes for specific protocols or types of packets (such as ssh, http, SYN or ACK) that need high throughput.
Here I want to see every TCP connection as a flow characterized by a certain data-rate.
There’ll be:
- quick flows such as interactive ssh or IRC chat, and
- slow flows (bulk data) such as scp or http file transfers.
Now I’m looking for a way to characterize/classify an incoming packet into one of these classes, so I can run a tc-based traffic shaper on it. Any hints?
Since you mention a dedicated machine I'll assume that you are managing from a network bridge and, as such, have access to the entirety of the packet for the lifetime it is in your system.
First and foremost: throttling at the receiving side of a connection is meaningless when you are speaking of link saturation. By the time you see the packet it has already consumed resources. This is true even if you are a bridge; you can only realistically do anything intelligent on the egress interface.
I don't think you will find an off-the-shelf product that is going to do exactly what you want. You are going to have to modify something like dummynet to be dynamic according to rules you derive during execution, or you are going to have to program a dynamic software router using some existing infrastructure. One I am familiar with is the Click modular router, but there are others. I really don't know how things like tc and ipfw will react to being configured/reconfigured with high frequency - I suspect poorly.
There are things that you should address ahead of time, however. Things that are going to make this task difficult regardless of the implementation. For instance,
How do you plan on differentiating between scp bulk and ssh interactive behavior? Will you monitor initial behavior and apply a rule based on that?
You mention HTTP-specific throttling; this implies DPI. Will you be able to support that on this bridge/router? How many classes of application traffic will you support?
How do you plan on handling contention? (You allot 30% of the capacity to each 'bulk' flow, but then ten 'bulk' flows show up and try to consume it.)
Will you hard-code the link capacity or measure it? Is it fixed or will it vary?
In general, you can get a fairly rough idea of 'flow' by just hashing the networking 5-tuple. Once you start dealing with applications semantics, however, all bets are off and you need to plow through packet contents to get what you want.
If you had a more specific purpose it might render some of these points moot.
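As a rough illustration of the 5-tuple point, the flow key and hash could look something like this (a sketch only, not tuned for collision resistance; zero the struct before filling it so padding bytes hash deterministically):

```c
#include <stdint.h>
#include <stddef.h>

/* A TCP/UDP flow is identified by the classic 5-tuple. */
struct flow_key {
    uint32_t src_ip;
    uint32_t dst_ip;
    uint16_t src_port;
    uint16_t dst_port;
    uint8_t  protocol;
};

/* FNV-1a style hash over the key; good enough to bucket packets into
 * per-flow counters for rate estimation. */
static uint32_t flow_hash(const struct flow_key *k)
{
    const uint8_t *p = (const uint8_t *)k;
    uint32_t h = 2166136261u;
    size_t i;

    for (i = 0; i < sizeof(*k); i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}
```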

Dynamic IP-based blacklisting

Folks, we all know that IP blacklisting doesn't work - spammers can come in through a proxy, plus, legitimate users might get affected... That said, blacklisting seems to me to be an efficient mechanism to stop a persistent attacker, given that the actual list of IP's is determined dynamically, based on application's feedback and user behavior.
For example:
- someone trying to brute-force your login screen
- a poorly written bot issues very strange HTTP requests to your site
- a script-kiddie uses a scanner to look for vulnerabilities in your app
I'm wondering if the following mechanism would work, and if so, do you know if there are any tools that do it:
In a web application, developer has a hook to report an "offense". An offense can be minor (invalid password) and it would take dozens of such offenses to get blacklisted; or it can be major, and a couple of such offenses in a 24-hour period kicks you out.
Some form of web-server-level block kicks in before every page is loaded and determines if the user comes from a "bad" IP.
There's a "forgiveness" mechanism built-in: offenses no longer count against an IP after a while.
Thanks!
Extra note: it'd be awesome if the solution worked in PHP, but I'd love to hear your thoughts about the approach in general, for any language/platform
Take a look at fail2ban, a Python framework that raises iptables blocks by tailing log files for patterns of errant behaviour.
Are you on a *nix machine? This sort of thing is probably better left to the OS level, using something like iptables.
Edit:
In response to the comment, yes (sort of). However, the idea is that iptables can work independently. You can set a certain threshold to throttle (for example, block requests on port 80 TCP that exceed x requests/minute), and that is all handled transparently (i.e., your application really doesn't need to know anything about it for dynamic blocking to take place).
I would suggest the iptables method if you have full control of the box and would prefer to let your firewall handle throttling (the advantages are that you don't need to build this logic into your web app, and it can save resources, as requests are dropped before they hit your web server).
Otherwise, if you expect blocking won't be a huge component (or your app is portable and can't guarantee access to iptables), then it would make more sense to build that logic into your app.
I think it should be a combination of user-name plus IP block. Not just IP.
You're looking at custom lockout code. There are applications in the open-source world that contain various flavors of such code. Perhaps you should look at some of those, although your requirements are pretty simple: mark an IP/username combo and use that to block the IP for x amount of time. (Note I said block the IP, not the user. The user may still get in via a valid IP/username/password combo.)
In fact, you could even keep traces of user logins, and when a login from an unknown IP racks up three bad username/password attempts, lock that IP out of that username for however long you like. (Do note that a lot of ISPs share IPs, thus....)
You might also want to place a delay in authentication, so that an IP cannot attempt a login more than once every 'y' seconds or so.
I have developed a system for a client which kept track of hits against the web server and dynamically banned IP addresses at the operating system/firewall level for variable periods of time for certain offenses, so, yes, this is definitely possible. As Owen said, firewall rules are a much better place to do this sort of thing than in the web server. (Unfortunately, the client chose to hold a tight copyright on this code, so I am not at liberty to share it.)
I generally work in Perl rather than PHP, but, so long as you have a command-line interface to your firewall rules engine (like, say, /sbin/iptables), you should be able to do this fairly easily from any language which has the ability to execute system commands.
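As a sketch of that last point, the ban (and the "forgiveness" unban) can be as small as shelling out to iptables; this assumes the process has the privileges to change firewall rules and that the IP string has already been validated (never pass raw user input to a shell):

```c
#include <stdio.h>
#include <stdlib.h>

/* Insert a DROP rule for an offending address by shelling out to iptables.
 * Assumes the caller runs with sufficient privileges and that `ip` has
 * already been validated as a plain dotted-quad address. */
int ban_ip(const char *ip)
{
    char cmd[128];

    snprintf(cmd, sizeof(cmd), "/sbin/iptables -I INPUT -s %s -j DROP", ip);
    return system(cmd);
}

/* Remove the rule again once the offense has "expired". */
int unban_ip(const char *ip)
{
    char cmd[128];

    snprintf(cmd, sizeof(cmd), "/sbin/iptables -D INPUT -s %s -j DROP", ip);
    return system(cmd);
}
```

The offense counters and the forgiveness timer then live in the web app (or a small daemon) and simply decide when to call these two.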
Err, this sort of system is easy and common; I can give you mine easily enough.
It's simply and briefly explained here: http://www.alandoherty.net/info/webservers/
The scripts as written aren't downloadable (as no commentary has been added yet), but drop me an e-mail from the site above and I'll fling the code at you and gladly help with debugging/tailoring it to your server.
