How do I protect my program from packet sniffers?
E.g. I don't want packet sniffers to be able to see where my program connects.
What is the best way to counter packet sniffing?
The packet content can always be encrypted, but the destination address always needs to be visible for the packets to be routed correctly.
The only way to hide the destination would be to use a proxy and encrypt the message containing the real destination. However, this only protects the path from the source to the proxy.
You can protect the content of your communications using a scheme such as SSL. However, you can't hide the destination of your communications because all the routers along the way also need to know where to send your packets.
It's sort of like asking whether you can send a letter to your friend in London, without telling the postal service where your friend lives.
Use encryption. You can either use SSL to protect everything, or you can encrypt specific data using one of the many public key encryption schemes available.
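If you go the SSL route, here is a minimal sketch using Python's standard library (the host example.com and port 443 are placeholders; as noted above, the destination address stays visible to a sniffer no matter what):

import socket
import ssl

context = ssl.create_default_context()  # verifies the server's certificate

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # Everything sent over tls is encrypted, but the destination
        # IP address is still visible in the packet headers.
        tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls.recv(1024))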
Revision
If you want to hide where the program is connecting to, perhaps you can use an Anonymizer service.
Related
I have multiple clients sending data to a central server. Is there a way I can ensure that the server gets the data, but can in no way associate the sender with the data?
If the clients are identified by IP address, then spoofing is one way to make sure they are not traceable. To spoof, you need to identify the packets the client is sending to the server. At the network layer you will find the IP header fields, which you would need to replace (or remove, if that works).
(The Wireshark tool might be helpful here.)
That said, this would still be considered bad practice. I sincerely advise you to contact the server administrators instead, to discuss and put in place other security measures rather than spoofing.
I am trying to use ZeroMQ to communicate between 2 processes. The message contains instructions from one process for the second to execute, so from a security perspective it is quite important that only the proper messages are sent and received.
If I am worried about third parties who may try to intercept or send malicious messages to the process, am I correct in assuming that as long as my messages are sent/received on IP 127.0.0.1 I am always safe? Or is there any circumstance where this can be compromised?
Thanks for the help all!
Assumptions and security are usually two things you don't want to mix. The short answer to your question is that sending or receiving traffic to localhost (127.0.0.1) will not, under default conditions, send or receive traffic outside of the local host.
Of course if the machine itself is compromised then you are no longer secure at all.
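As an illustration, binding a listener to the loopback address in Python keeps it unreachable from other machines under default conditions (port 5555 is an arbitrary choice):

import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 5555))  # loopback only; binding to "0.0.0.0"
                                  # would accept connections from other hosts
server.listen()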
You've applied the ipc tag, which I assume means that you're using the ipc:// protocol (if not, you should be if all of the communication is happening on one box). In this case, you shouldn't be using IPv4 addresses at all (or localhost), but ipc endpoint names. See here and here.
For ipc, you're not connecting or binding to an IP or DNS address, but something much more akin to a local file name. You just need to make sure both processes refer to the same filename, and that permissions are set so that both processes can appropriately access the directory (see the ZMQ docs for a little more info there, search for ipc). The only difference between an ipc endpoint name and a filename is that you don't need to create the file, ZMQ creates the resource so both processes can communicate with the same thing.
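A minimal sketch with pyzmq, assuming both processes agree on a hypothetical endpoint name like ipc:///tmp/my_app.sock (the ipc transport requires a platform with UNIX domain sockets):

import zmq

ctx = zmq.Context()

# Process A binds to an ipc endpoint; the name is a filesystem path,
# not an IP address, and ZMQ creates the socket file itself.
responder = ctx.socket(zmq.REP)
responder.bind("ipc:///tmp/my_app.sock")

# Process B connects to the same endpoint name.
requester = ctx.socket(zmq.REQ)
requester.connect("ipc:///tmp/my_app.sock")

requester.send(b"run_task")
print(responder.recv())  # b"run_task"
responder.send(b"ok")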
As S.Richmond says, if your machine is compromised, then all bets are off, but there's no way to publish ipc endpoints to the internet if you use them appropriately.
We use WebSockets to communicate with our EC2 instances.
Our script is served using nodejs and Express, which then initializes the WebSocket.
Right now an ELB is used, which makes it harder to identify the client IP.
Using the x-forwarded-for header we can get the IP in the HTTP context, but in the WebSocket context on the server it appears not to be forwarded by Amazon.
We identified 2 options:
Communicate the WebSocket directly with the instance (using its public DNS).
Maintain some sort of sessionid and store the client IP against it while in the HTTP context. The client side will get its sessionid from the HTTP response and will use it on the WebSocket. The server will then be able to identify the client and resolve its IP from the cache.
Neither option is great: 1 is not fault tolerant and 2 is complex.
Are there more solutions? Can Amazon somehow forward the IP? What is the best practice?
Thanks
I have worked with websockets and I have worked with ELB, but I've never worked with them together, so I didn't realize that an HTTP forwarder on an Elastic Load Balancer doesn't understand websocket requests...
So I take it you must be using a TCP forwarder, which explains why you're using a different port, and of course the TCP forwarder is protocol-unaware, so it won't be adding any headers at all.
An option that seems fairly generic and uncomplicated would be for the http side of your application to advise the websocket side by pushing the information across rather than storing it in a cache for retrieval. It's scalable and lightweight, assuming there's not an obstacle in your environment that makes it difficult or impossible to implement.
While generating the web page that loads the websocket, take the string "ipv4:" and the client's IP ("192.168.1.1", for example), concatenate and encrypt them, and make the result URL-friendly:
/* pseudo-code */
base64_encode(aes_encrypt('ipv4:192.168.1.1','super_secret_key'))
Using that example key with 128-bit AES and that example IP address, I get:
/* actual value returned by pseudo-code above */
1v5n2ybJBozw9Vz5HY5EDvXzEkcz2A4h1TTE2nKJMPk=
Then when rendering the html for the page containing the websocket, dynamically build the url:
ws = new WebSocket('ws://example.com/sock?client=1v5n2ybJBozw9Vz5HY5EDvXzEkcz2A4h1TTE2nKJMPk=');
Assuming the querystring from the websocket is accessible to your code, you could base64_decode and then aes_decrypt the string found in the query parameter "client" using the super-secret key, and then verify that it begins with "ipv4:" ... if it doesn't, then it's not a legitimate value.
Of course, "ipv4:" (at the beginning of the string) and "client" (for the query parameter) were arbitrary choices and do not have any actual significance. My choice of 128-bit AES was also arbitrary.
The problem, of course, with this setup is that it is subject to replay: a given client IP address will always generate the same value. If you are only using the client IP address for "informational purposes" (such as logging or debugging), then this may be sufficient. If you are using it for anything more significant, you may want to expand this implementation -- for example, by adding a timestamp:
'ipv4:192.168.1.1;valid:1356885663;'
On the receiving end, decode the string and check the timestamp. If it is not +/- whatever interval in seconds that you deem safe, then don't trust it.
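As a concrete illustration, here is a minimal sketch of the encrypt/verify round trip in Python using the cryptography package (the key, the 5-second window, and AES-128 in CBC mode are all assumptions mirroring the arbitrary choices above):

import base64
import os
import time
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = b"super_secret_key"  # 16 bytes -> AES-128; a placeholder, not a real key

def make_token(client_ip):
    # Build "ipv4:<ip>;valid:<epoch seconds>;" and encrypt it.
    plaintext = "ipv4:%s;valid:%d;" % (client_ip, int(time.time()))
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext.encode()) + padder.finalize()
    iv = os.urandom(16)
    encryptor = Cipher(algorithms.AES(KEY), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()
    # Prepend the IV so the receiver can decrypt; urlsafe base64 keeps
    # the result query-string friendly.
    return base64.urlsafe_b64encode(iv + ciphertext).decode()

def check_token(token, max_skew=5):
    raw = base64.urlsafe_b64decode(token)
    iv, ciphertext = raw[:16], raw[16:]
    decryptor = Cipher(algorithms.AES(KEY), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    plaintext = (unpadder.update(padded) + unpadder.finalize()).decode()
    if not plaintext.startswith("ipv4:"):
        return None  # not a legitimate value
    fields = dict(p.split(":", 1) for p in plaintext.strip(";").split(";"))
    if abs(time.time() - int(fields["valid"])) > max_skew:
        return None  # outside the allowed replay window
    return fields["ipv4"]

The returned token is what you would append as the client query parameter when building the ws:// URL. Note one difference from the pseudo-code above: the random IV, included here as good practice, already makes each token look different, but only the timestamp check actually prevents replay of a captured token.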
These suggestions all hinge on your ability to dynamically generate the websocket url, the browser's ability to connect with it, and you being able to access the querystring portion of the URL in the websocket request... but if those pieces will fall into place, maybe this will help.
Additional thoughts (from comments):
The timestamp I suggested, above, is seconds from the epoch, which gives you an incrementing counter that requires no statefulness in your platform -- it only requires that all of your server clocks are correct -- so it doesn't add unnecessary complexity. If the decrypted value contains a timestamp less than (for example) 5 seconds different (+/-) from the server's current time, then you know you're dealing with an authenticated client. The time interval permitted only needs to be as long as the maximum reasonable time for a client to attempt its websocket connection after loading the original page, plus the maximum skew of all your server clocks.
It is true, of course, that with NAT, multiple different users could be behind the same source IP address. It's also true, though far less likely, that a user could actually make the websocket connection from a different source IP than the one where they originated the first http connection, and still be quite legitimate... and it sounds like the identity of the authenticated user may be a more important value for you than the actual source IP.
If you include the authenticated user's ID within the encrypted string as well, you have a value that is unique to origin IP, user account, and time, to a precision of 1 second. I think this is what you're referring to by additional salt. Adding the user account to the string should get you the information you're wanting.
'ipv4:192.168.1.1;valid:1356885663;memberid:32767;'
TLS should prevent discovery of this encrypted string by an unauthorized party, but avoiding replayability is also important because the generated URL is available in clear text in a user's browser's "view source" for the html page. You don't want a user who is authorized today but unauthorized tomorrow to be able to spoof their way in with a signed string that should be recognized as no longer valid. Keying to a timestamp and requiring it to fall in a very small valid window prevents this.
It depends on how serious the application is.
Basing any kind of decision on client IP address is a risky proposition. Basing security on it, even more so. While the suggestions offered so far work well within the given constraints, it would not be sufficient for a robust enterprise application.
Client IP addresses can be obscured by NATs, as already pointed out. So people accessing the Web from their place of work will often appear to have the same IP address. People's routers at home act as a NAT, so every family member at home accessing the Web will appear to have the same IP address. Or even the same person accessing the application from a PC and a tablet...
Whether behind a NAT or not, using the application from two browsers on the same machine will appear to have the same address. Similarly, multiple tabs in the same browser will appear to have the same address.
Other junction points like proxies or load balancers may also hide the original client IP address such that the thing behind the proxy/load balancers thinks they are the client. (More sophisticated or lower level intermediaries can prevent this, which is what makes them more sophisticated or expensive.)
Given all of the above, a serious application should not rely on client IP address for any kind of important decision, especially around security.
Tough question. It has to do mainly with security, but also with computers in general. It has probably not been done yet.
I was wondering: is it possible to host, for example, a web application, yet hide *where* the actual server is and who is behind it, making it very, very hard (practically impossible) for someone to track down the origin of the server and its operator?
I was thinking that this might be possible through a third-party server, preferably with an owner unrelated to the proxied site. But the question then also becomes an issue of the reliability of that third party.
Does the Tor network have support for registering to receive incoming requests rather than only making outgoing ones? How secure would that be? Might it be possible that the Tor network has been infiltrated by, for example, a big government (read: USA)? (Don't get angry; please enlighten me, as I do not know much about how the Tor network is hosted.)
How can one possibly create such a secure third-party server, one that preferably does not even know who the final recipient of the request is? Third-party companies might be subjected to pressure from governments, either directly from powerful nations such as the USA, or by the USA applying pressure on the government of the country where the server is, which in turn pressures the company behind it and forces it to add a backdoor. (Just my wild fantasy; "think worst case scenario" is my motto. :))
I just came up with the idea that, this being probably impossible, the best way would be to have a bunch of distributed servers across several nations, making it as hard as possible to trace through each and every one of them to find the next bouncing server. These would form a linked list, with one public server registered in DNS. If compromised, the public server would need to be replaced with another one.
request from user0 -> server1 -> server2 -> server3 -> final processing server -> response to user0 or through the incoming server chain.
When sending a response to someone, could it be done using UDP rather than TCP to hide who the sender was (also in a web application)? That way, a middleman listening to user0's incoming responses (and outgoing requests) could not figure out who the final processing server is, if we decide to respond directly to user0 from the final processing server.
The IP of server1 will be public and known to anyone. server1 will send the message on to server2, and that hop could be discovered by listening directly on server1's traffic node; but perhaps it could hide its own origin when not being listened to directly, so that even if big governments have filters on major traffic nodes and routers, they wouldn't be able to track who it came from, and therefore what the message to server2 is intended for. It would blend in with all the other requests.
Anyhow, if you have followed my thoughts this far I think you should know by now what I am thinking about.
Could this be done through a P2P network with a central server behind it, having the P2P network deliver requests to the final server and carry responses back in some pattern? The idea is to have one processing server and a number of "minor", cheaper servers that act as proxies.
The reason I keep saying central server is that I am thinking of the web. But any thoughts on the matter are interesting.
For those who wonder why: I am looking into creating something as secure as possible, something that could withstand government pressure (read: BlackBerry, Skype and others).
This is also a theoretical question.
PS.
I would also be interested in knowing how one could build a distributed SECURE database (for keeping usernames, friend lists and passwords, for example), but this time it is not necessary for it to be on the web. A piece of P2P software with a distributed secure database.
Thanks!
Yes, you're reinventing Tor. You should research Tor more fully before going further. In particular, see Hidden Service Protocol. Tor is not perfect, but you should understand it before you try to reinvent it.
If you want to find an ant's nest, follow the ants. If you want to find the original server, follow the IP packets. If you meet a proxy server not willing to reveal its path, call the server administrator and have your men in black put a gun to his head. If he does not comply, eliminate the administrator and the server. Carry on following the ants on their new path. Repeat the operation until the server is reached or the server can't communicate anymore.
So no, you can't protect the origin and keep your server up and running when your men in black can reach any physical entity.
I have a couple questions about SSL certificates.
I never used them before but my current project requires me to do so.
Question 1.
Where should you use SSL? I know places like logging in and resetting passwords are definite places to put it. How about once they are logged in? Should all requests go through SSL, even if the data in their account is not considered sensitive? Would that slow down SSL for the important parts? Or does it make no difference? (Sort of a "well, you've got SSL, might as well make everything go through it no matter what.")
Question 2.
I know that with SMTP you can enable SSL as well. I am guessing this would be pretty good to use if you're sending, say, a reset password to them.
If I enable this setting, how can I tell whether SSL is working? How do I know if it is really enabled? What happens if the mail server does not have SSL enabled and you have that boolean value enabled? Will it just send it as non-SSL then?
With an SSL connection, one of the most expensive portions (relatively speaking) is the establishment of the connection. Depending on how it is set up, for example, it might create an ephemeral (created on the fly) RSA key for establishing a session key. That can be somewhat expensive if many of them have to be created constantly. If, though, the creation of new connections is less common (and they are used for longer periods of time), then the cost may not be relevant.
Once the connection has been established, the added cost of SSL is not that great, although it does depend on the encryption type. For example, using 256-bit AES for encryption will take more time than using 128-bit RC4. I recently did some testing with communications all on the same PC, where both client and server were echoing data back and forth; in other words, the communications made up almost the entire cost of the test. Using 128-bit RC4 added about 30% to the cost (measured in time), and using 256-bit AES added nearly 50%. But remember, this was on a single PC over the loopback adapter. If the data were transmitted across a LAN or WAN, the relative cost would be significantly smaller. So if you already have an SSL connection established, I would continue to use it.
As far as verifying that SSL is actually being used? There are probably "official" ways of verifying it; using a network sniffer is the poor man's version. I ran Wireshark, sniffed network traffic, and compared a non-SSL connection with an SSL connection, looking at the raw data. I could easily see raw text data in the non-SSL version, while the SSL data "looked" encrypted. That, of course, means absolutely nothing by itself. But it does show that "something" is happening to the data. In other words, if you think you are using SSL but can recognize the raw text in a network sniff, then something is not working as you expected. The converse is not true, though: just because you can't read it does not mean it is encrypted.
Use SSL for any sensitive data, not just passwords, but credit card numbers, financial info, etc. There's no reason to use it for other pages.
Some environments, such as ASP.NET, allow SSL to be used for encryption of cookies. It's good to do this for any authentication or session-ID related cookies, as these can be used to spoof logins or replay sessions. You can turn these on in web.config; they're off by default.
ASP.NET also has an option that will require all authenticated pages to use SSL. Non-SSL requests get tossed. Be careful with this one, as it can cause sessions to appear hung. I'd recommend not turning on options like this, unless you really need them.
Sorry, can't help with the smtp questions.
First off, SSL is used to encrypt communications between client and server. It does this by using public-key cryptography to establish a shared session key, which then encrypts the traffic. In my opinion it is good practice to use it for anything that carries personally identifiable or otherwise sensitive information.
Also, it is worth pointing out that there are two types of SSL authentication:
One-way - in which there is a single certificate, the server's - this is the most common
Two-way - in which there is a server certificate and a client certificate - the client first verifies the server's identity, and then the server verifies the client's - an example is the DoD CAC
With both, it is important to have up-to-date certificates signed by a reputable CA. This verifies your site's identity. (A minimal server-side sketch of the two-way case follows below.)
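As an illustration only, here is what two-way (mutual) TLS can look like on the server side with Python's ssl module (all file paths are placeholders):

import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("server.crt", "server.key")  # the server's own certificate
context.load_verify_locations("clients_ca.crt")      # CA that signed the client certs
context.verify_mode = ssl.CERT_REQUIRED              # reject clients without a valid cert

A context configured this way can then wrap the listening socket, and the TLS handshake itself enforces the client-certificate check.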
As for question 2: yes, you should use SSL over SMTP if you can. If your emails are routed through an untrusted router, they can be eavesdropped on if sent without encryption. I am not sure about the 'boolean value enabled' question; I don't believe setting up SSL is simply as easy as checking a box, though.
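For instance, a minimal sketch with Python's smtplib (the host, port, addresses and credentials are placeholders) makes the failure mode explicit: if the server does not support STARTTLS, the call raises instead of silently falling back to plaintext.

import smtplib
import ssl

context = ssl.create_default_context()

with smtplib.SMTP("smtp.example.com", 587) as server:
    # Raises SMTPNotSupportedError if the server does not advertise
    # STARTTLS, so the message is never silently sent unencrypted.
    server.starttls(context=context)
    server.login("user", "password")
    server.sendmail("from@example.com", "to@example.com",
                    "Subject: Password reset\r\n\r\nYour reset link...")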
A couple people have already answered your Question 1.
For question 2 though, I wouldn't characterize SMTP over SSL as protecting the message. There could be plenty of points at which the message is exposed. If you want to protect the message itself, you need S/MIME, or something similar. I'd say SMTP over SSL is more useful for protecting your SMTP credentials, so that someone cannot grab your password.