I have never used a honeypot before, but I have an assignment from my lecturer: I should use a honeypot to detect hackers' attacks.
I searched journals, tutorials and articles, and I tried HoneyDrive 3 with the Kippo honeypot. When I attacked it myself, it worked: the details of the attack were recorded. But when I showed that to my lecturer, he said it was not what he wanted.
The workflow he wants is this: we deploy the honeypot and then apply it to some website. When an attacker scans or does anything to that web server's IP address, the traffic must be deflected to the honeypot, so that the attacker believes he is really attacking the real website. And I really don't know how to do that.
You either misunderstood what the lecturer wanted, or what he wants does not make sense.
You can only analyze traffic sent to your own IP (or an IP you control); it is not possible for you to "deflect the traffic" from a generic IP address.
What you did is correct: putting in place the honeypot, and then sending some traffic to it.
The next step would be to expose it to the Internet to attract malicious traffic (directed at your IP), but you must be very careful, as the whole machine is likely to get successfully attacked sooner or later. It must not have any connection to your home, university, or private network, because (I am being frank, having read your question) you stand no chance of securing it for the time being.
I would go for a cloud-hosted machine, which I would then destroy afterwards.
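If it helps to see the principle in miniature: the sketch below is my own illustration, not Kippo itself; the port, banner and log format are arbitrary choices. It is a low-interaction listener that simply records who connects and what they send first, which is essentially what Kippo does with far more realism.

import datetime
import socket

# Arbitrary high port; a real honeypot such as Kippo/Cowrie would sit on
# a port attackers actually probe (e.g. 22 or 2222 for SSH).
HOST, PORT = "0.0.0.0", 2222

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            # The whole point of a low-interaction honeypot: log the contact.
            stamp = datetime.datetime.now().isoformat()
            print(f"{stamp} connection from {addr[0]}:{addr[1]}")
            conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # fake SSH banner
            print(f"{stamp} first bytes: {conn.recv(1024)!r}")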
Can two or more domains be hosted on the same server? If yes, what IP address will we get for the two domains?
And as a user, can I find out how a server resolves the host names and assigns a unique address to each of them?
Following up on my comment above: as you are a user, not an administrator, just look at the documentation of nslookup(1). It is a tool for making DNS queries to servers. It allows you to perform DNS resolution and to investigate the ways you are getting the answer (there are many ways to answer a query, believe me).
First you need to know how the asking is done. Normally, clients make recursive queries (they want a definitive answer to a query and want the server to do the heavy work), while servers make iterative ones (they approximate the answer by asking the servers along the chain down to the final domain you are looking for). Servers and clients normally cache results for future queries and provide several forms of fault tolerance, so you normally cannot control how a query is resolved. As this is probably the most heavily requested service on the Internet, the protocol has been optimized to return quick answers even in the worst case.
Once you get an answer, it can be a partial one, it can be cached, and it can be non-authoritative (meaning the server is serving a cached entry, not a locally administered one).
When you receive several responses to a query (yes, this can happen), they normally arrive in an order that depends on where you are querying from. The server makes a best effort to order them by proximity to the client (the nearest address is served first) and/or randomly, so clients can round-robin across the addresses they receive. It depends on the client software, the server implementation, the administrator's policy, etc.
You can even receive a different response depending on who you are. Several corporate servers serve different views of the database depending on where the clients come from: if they come from inside the company, they serve addresses for servers not visible from the outside. For example, if you try to access the corporate web server, you may receive the private address used to reach it internally, not the public address accessible from the Internet. This concept is called a view, and many servers implement it. So the answer to your question is: it depends :)
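You can watch some of this from plain Python, too. Here is a minimal sketch using only the standard library (www.example.com is just a placeholder name) that prints every address the resolver hands back for one name, in the order received:

import socket

# getaddrinfo returns one tuple per (address, family, socktype) combination;
# the order reflects whatever the resolver and server decided to serve first.
for family, type_, proto, canonname, sockaddr in socket.getaddrinfo(
        "www.example.com", 80, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])

Run it a few times against a name with several A records and you may see the order rotate, which is the round-robin behaviour described above.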
Tough question. It has to do mainly with security, but also with computing in general. It has probably not been done yet.
I was wondering: is it possible to host, for example, a web application, yet be able to hide *where* the actual server is and/or who the originator is, making it very, very hard (practically impossible) for someone to track down the origin of the server and who is behind it?
I was thinking that this might be possible through a third-party server, preferably with an owner unrelated to the proxied sites. But the question then also becomes an issue of the reliability of the third party.
Does the Tor network have support for registering to receive incoming requests rather than just making outgoing ones? How secure would that be? Might it be possible that the Tor network has been infiltrated by, for example, a big government (read: the USA)? (Don't get angry; please enlighten me, as I do not know much about how the Tor network is hosted.)
How can one possibly create such a secure third-party server, one that preferably does not even know who the final recipient of the request is? Third-party companies might be subjected to pressure from governments: either directly from powerful nations such as the USA, or by the USA applying pressure on the government of the country where the server is, which in turn pressures the company behind it and forces it to enable a backdoor. (Just my wild fantasy; "think worst-case scenario" is my motto :) )
I just came up with the idea that, since the above is probably impossible, the best approach would be to have a bunch of distributed servers across several nations, making it as hard as possible to work through each and every one of them to find the next bouncing server. They would form a linked list, with one public server registered in DNS. If compromised, the public server would need to be replaced with another one.
request from user0 -> server1 -> server2 -> server3 -> final processing server -> response to user0 or through the incoming server chain.
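For what it's worth, the core trick Tor applies to exactly this kind of chain is layered ("onion") encryption, so each hop can only peel one layer. Here is a toy sketch of that idea using the third-party cryptography package; the hop names and keys are made up for illustration, since real Tor negotiates keys per circuit:

from cryptography.fernet import Fernet

# One symmetric key per hop; Tor negotiates these per circuit,
# here they are simply generated.
hops = ["server1", "server2", "server3"]
keys = {hop: Fernet(Fernet.generate_key()) for hop in hops}

message = b"GET / HTTP/1.1"

# The client wraps the payload innermost-hop-first, so server1's
# layer ends up on the outside.
onion = message
for hop in reversed(hops):
    onion = keys[hop].encrypt(onion)

# Each hop peels exactly one layer: server1 learns only that the next
# hop is server2, and only server3 ever sees the plaintext.
for hop in hops:
    onion = keys[hop].decrypt(onion)

assert onion == message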
When sending a response to someone, could it be done using UDP rather than TCP to hide who the sender was (also in a web application)? That way, a man in the middle listening to user0's incoming responses (and outgoing requests) could not figure out who the final processing server is, if we decide to respond to user0 directly from the final processing server.
The IP of server1 will be public and known to anyone. server1 will send the message on to server2, and that hop could be discovered by listening directly behind server1's network link; but perhaps server1 could hide its own origin when not being listened to directly, so that even if big governments have filters on major traffic nodes or routers, they would not be able to track whom the message came from, and therefore what the message to server2 is intended for. It would blend in with all the other requests.
Anyhow, if you have followed my thoughts this far I think you should know by now what I am thinking about.
Could this be possible through a P2P network with a central server behind it, having the P2P network deliver requests to the final server, which responds in some pattern? The idea is to have one processing server, and then "minor", "cheaper" servers that act as proxies.
The reason I keep saying "central server" is that I am thinking of the web. But any thoughts on the matter are interesting.
For those who wonder why: I am looking into creating something as secure as possible, something that could withstand government pressure (read: BlackBerry, Skype and others).
This is also a theoretical question.
PS.
I would also be interested in knowing how one could have a distributed SECURE database (for keeping usernames, friend lists and passwords, for example), but this time it is not necessary for it to be on the web. A P2P application with a distributed secure database.
Thanks!
Yes, you're reinventing Tor. You should research Tor more fully before going further. In particular, see Hidden Service Protocol. Tor is not perfect, but you should understand it before you try to reinvent it.
If you want to find an ants' nest, follow the ants. If you want to find the original server, follow the IP packets. If you meet a proxy server whose operators are not willing to reveal their path, call the server administrator and have your men in black put a gun to his head. If he does not comply, eliminate the administrator and the server. Carry on following the ants along their new path. Repeat the operation until the server is reached or the server can't communicate anymore.
So no, you can't protect the origin and keep your server up and running when the adversary's men in black can reach any physical entity.
My software house is developing an advertisement component for some of our portals. The advertising is click-based, so the source portal that originates the most clicks is the winner. My concern is about "fake clicks": malicious HTTP clients generating requests. Is it possible for an attacker to falsify the source IP address of an HTTP request? How can I prevent this? We are observing a great many random requests within short intervals of time.
It is possible to go via a proxy; there are plenty of free proxies around the net, and you can pretend to have a bunch of different IPs even while physically sitting in front of one computer. This is probably very hard to detect, although it can be considered "malicious".
As far as I know there is no simple way to change the IP without actually routing through a machine with that IP, but I am not an expert here, so probably somebody else can speak with more confidence.
I think your best bet is going to be some sort of cookie and IP technology together. Realistically, though, the other poster is right: you can clear cookies and use Tor to come out of any number of unique IP addresses.
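One concrete thing to check in your component: make sure it reads the client address from the TCP connection, not from a request header. If it trusts a header such as X-Forwarded-For, "IP spoofing" needs no network tricks at all. A sketch with the third-party requests library (the URL is a placeholder):

import requests

# Any client can claim any address in a header; only the TCP peer
# address is tied to the actual connection.
requests.get(
    "http://ads.example.com/click?id=123",
    headers={"X-Forwarded-For": "203.0.113.7"},  # forged "source IP"
)

So count clicks by the socket's peer address, and even then remember that proxies and Tor make that address cheap to vary.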
I'm planning to deploy an internal app that has sensitive data. I suggested that we put it on a machine that isn't exposed to the general internet, just our internal network. The I.T. department rejected this suggestion, saying it's not worth it to set aside a whole machine for one application. (The app has its own domain in case that's relevant, but I was told they can't block requests based on the URL.)
Inside the app, I programmed it to only honor requests that come from an internal I.P. address; otherwise it just shows a page saying "you can't look at this." Our internal addresses all have a distinct pattern, so I'm checking the request I.P. against a regex.
But I'm nervous about this strategy. It feels kind of janky to me. Is this reasonably secure?
IP filtering is better than nothing, but it's got two problems:
IP addresses can be spoofed.
If an internal machine is compromised (that includes a client workstation, e.g. via installation of a Trojan), then the attacker can use that as a jump host or proxy to attack your system.
If this is really sensitive data, it doesn't necessarily need a dedicated machine (though that is best practice), but you should at least authenticate your users somehow, and don't run less sensitive (and more easily attacked) apps on the same machine.
And if it is truly sensitive, get a security professional in to review what you're doing.
edit: incidentally, if you can, ditch the regex and use something like tcpwrappers or the firewalling features within the OS if it has any. Or, if you can have a different IP address for your app, use the firewall to block external access. (And if you have no firewall then you may as well give up and email your data to the attackers :-)
I would rather go with SSL and some certificates, or a simple username / password protection instead of IP filtering.
It depends on exactly HOW secure you really need it to be.
I am assuming your server is externally hosted and not connected via a VPN. Therefore, you are checking that the requesting addresses for your HTTPS (you are using HTTPS, aren't you??) site are within your own organisation's networks.
Using a regex to match IP addresses sounds iffy; can't you just use a network/netmask check like everyone else?
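For instance, a minimal sketch with Python's standard ipaddress module (the ranges below are placeholders for your real internal ones) does a proper network/netmask test instead of string matching:

import ipaddress

# Placeholder internal ranges; substitute your real ones.
INTERNAL_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_internal(addr: str) -> bool:
    # ip_address() parses and validates; garbage raises ValueError
    # instead of silently half-matching the way a regex might.
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in INTERNAL_NETS)

assert is_internal("192.168.1.42")
assert not is_internal("8.8.8.8")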
How secure does it really need to be? IP address spoofing is not easy: spoofed packets cannot be used to establish an HTTPS connection unless the attacker also manipulates upstream routers so that the return packets are redirected to him.
If you need it to be really secure, just get your IT department to install a VPN and route over private IP address space. Set up your IP address restrictions for those private addresses. IP address restrictions where the routing is via a host-based VPN are still secure even if someone compromises an upstream default gateway.
If your application itself is checking the IP address, then it is extremely vulnerable, because at that point you don't have any protection at the router, which is where IP filtering really needs to be. Your application is probably checking HTTP header information for the sending IP address, and that is extremely easy to spoof. If you lock the IP address down at the router, that is a different story, and it will buy you some real assurance about who can access the site from where.
If all you are doing is accessing the application internally, then SSL won't buy you much unless you are trying to secure the information from parties internal to the organization, or you require client certificates. This assumes you won't ever access the site from an external connection (VPNs don't count, because you are tunneling into the internal network and are technically part of it at that point). It won't hurt either, and it isn't that hard to set up; just don't think it is going to be the solution to all your problems.
IP Whitelisting is, as others have mentioned, vulnerable to IP spoofing and Man-in-the-Middle attacks. On an MITM, consider that some switch or router has been compromised and will see the "replies". It can either monitor or even alter them.
Consider also the vulnerabilities of SSL encryption. Depending on the attacker's effort, it can be foiled in a MITM as well, along with the well-known gaffes such as the reuse of primes, etc.
Depending on the sensitivity of your data, I would not settle for SSL, but would go with StrongSWAN or OpenVPN for more security. If handled properly, these will be a lot less vulnerable to a MITM.
Reliance on whitelisting alone (even with SSL) I would consider "low grade", but may be sufficient for your needs. Just be keenly aware of the implications and do not fall into the trap of a "false sense of security".
If it's limited by IP address, then although they can spoof the IP address, they won't be able to get the reply. Of course, if it's exposed to the internet, it can still get hit by attacks other than against the app.
Just because all your internal IPs match a given regex, that doesn't mean that all IPs that match a given regex are internal. So, your regex is a point of possible security failure.
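To make that concrete, here is a hypothetical regex of the sort that gets written for this, together with two clearly external strings it accepts:

import re

# Unescaped dots and no anchors: two classic mistakes.
pattern = re.compile(r"10.0.\d+.\d+")

assert pattern.search("10.0.1.5")    # the intended match
assert pattern.search("110.0.1.5")   # external address; no ^ anchor
assert pattern.search("10x0.1.5")    # '.' matches any character, even 'x'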
I don't know what technology you used to build your site, but if it's Windows/ASP.net, you can check the permissions of the requesting machine based on its Windows credentials when the request is made.
Like all security measures, it's useless on its own. If you do have to put it on a public-facing web server, use IP whitelisting with basic username/password auth, with SSL, with a decent monitoring setup, and with an up-to-date server application.
That said, what is the point in having the server be publicly accessible and then restricting it to internal IP addresses only? You are basically reinventing what NAT plus an internal-only server gives you for free, and on top of that you have to worry about web-server exploits and the like.
You don't seem to gain anything by having it externally accessible, and there are many benefits to having it be internal-only.
My first thought on the resource issue would be to ask whether it wouldn't be possible to work some magic with a virtual machine.
Other than that, if the IP addresses you check against are either IPs you KNOW belong to computers that are supposed to access the application, or are in the local IP range, then I cannot see how it could fail to be secure enough. (I am actually using a similar approach at the moment on a project, although there it is not critically important that the site stays "hidden".)
A proper firewall can protect against IP spoofing, and spoofing is not as easy as, say, spoofing your caller ID, so the argument /not/ to use IP filtering because of the spoofing danger is a bit antiquated. Security is best applied in layers, so that you are not relying on only one mechanism. That's why we have WAF systems, username+password, layer-3 firewalls, layer-7 firewalls, encryption, MFA, SIEM and a host of other security measures, each of which adds protection (at increasing cost).
If this is a web application you're talking about (which was not clear from your question), the solution is fairly simple without the cost of advanced security systems. Whether using IIS, Apache, etc. you have the ability to restrict connections to your app to a specific target URL as well as source IP address - no changes to your app necessary - on a per-application basis. Preventing IP-based web browsing of your app, coupled with IP source restrictions, should give you significant protection against casual browsing/attacks. If this is not a web app, you'll have to be more specific so people know whether OS-based security (as has been proposed by others) is your only option or not.
Your security is only as strong as your weakest link. In the grand scheme of things, spoofing an IP is child's play. Use SSL and require client certs.
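On the client side, presenting a client certificate is straightforward with the third-party requests library (all file names and the URL below are placeholders):

import requests

resp = requests.get(
    "https://internal-app.example.com/",
    cert=("client.crt", "client.key"),  # client certificate and private key
    verify="internal-ca.pem",           # CA bundle that signed the server cert
)
print(resp.status_code)

The server side, actually requiring and verifying the certificate, is then configured in the web server or TLS terminator rather than in your application code.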
It became useful, first, to distinguish among different kinds of IP VPNs based on the administrative relationships interconnecting the nodes, not the technology. Once the relationships were defined, different technologies could be used, depending on requirements such as security and quality of service.
Maybe this will help? I've been looking for the same answer, and found this Stack Overflow question as well as this idea from Red Hat Enterprise Linux. I'll try it soon. I hope it helps.
# Drop forwarded packets that arrive on the Internet-facing interface
# (eth0 here) but claim a source address inside the LAN:
iptables -A FORWARD -s 192.168.1.0/24 -i eth0 -j DROP

Where 192.168.1.0/24 is the LAN range you want to protect and eth0 is the Internet-facing interface. The idea is to stop devices on the Internet side from spoofing the local IP network: any forwarded packet that claims a local source but arrives from outside is dropped.
Ref: http://www.centos.org/docs/4/html/rhel-sg-en-4/s1-firewall-ipt-rule.html