How to make my API private but usable by a mobile application? - security

Here are my requirements:
Usable by any mobile application I'm developing
I'm developing the mobile application, therefore I can implement any securing strategies.
Cacheable using classical HTTP Cache strategy
I'm using Varnish with a very basic configuration and it works well
Not publicly available
I don't want people to be able to consume my API
Solutions I think of:
Use HTTPS, but it doesn't cover the last requirement, because proxying requests from the application will reveal the API key used.
Is there any way to do this, using something like a private/public key pair for example?
Ideally something that fits well with HTTP, Apache, and Varnish.

There is no way to ensure that the other end of a network link is your application. This is not a solvable problem. You can obfuscate things with certificates, keys, secrets, whatever. But all of these can be reverse-engineered by the end user because they have access to the application. It's ok to use a little obfuscation like certificates or the like, but it cannot be made secure. Your server must assume that anyone connecting to it is hostile, and behave accordingly.
It is possible to authenticate users, since they can have accounts. So you can certainly ensure that only valid users may use your service. But you cannot ensure that they only use your application. If your current architecture requires that, you must redesign. It is not solvable, and most certainly not solvable on common mobile platforms.
If you can integrate a piece of secure hardware, such as a smartcard, then it is possible to improve security in that you can be more certain that the human at the other end is actually a customer, but even that does not guarantee that your application is the one connecting to the server, only that the smartcard is available to the application that is connecting.
For more on this subject, see Secure https encryption for iPhone app to webpage.

Even though it's true that there's basically no way to guarantee your API is only consumed by your clients unless you use a hardware secure element to store the secret (which would imply making your own phone from scratch; any external device could be used by a non-official client app as well), there are some fairly effective things you can do to obscure the API.
To begin with, use HTTPS. That's a given. But the key here is to do certificate pinning in your app. Certificate pinning is a technique in which you store the valid public key certificate for the HTTPS server you are trying to connect to. Then on every connection, you validate that it's an HTTPS connection (don't accept downgrade attacks), and more importantly, validate that it's exactly the same certificate. This way you prevent a network device in your path from performing a man-in-the-middle attack, thus ensuring no one is listening in on your conversation with the server.
By doing this, and being a bit clever about the way you store the API's parameters and general design in your application (see code obfuscation, particularly how to obfuscate string constants), you can be fairly sure you are the only one talking to your server. Of course, security is only a function of how badly someone wants to break into your stuff. Doing this doesn't prevent an experienced reverse-engineer with time to spare from trying (and possibly succeeding) to decompile your source code and find what they are looking for. But doing all of this forces them to work on the binary, which is a couple of orders of magnitude more difficult than just performing a man-in-the-middle attack.
This is famously related to the recent Snapchat flurry of leaked images. Third-party clients for Snapchat exist, and they were created by reverse engineering the API by means of a sniffer looking at the traffic during a man-in-the-middle attack. Had the Snapchat developers pinned their certificate in their app, guaranteeing it was Snapchat's server they were talking to, the hackers would have needed to inspect the binary, a much more laborious task that, given the effort involved, might never have been attempted.
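For illustration, here is a minimal certificate-pinning sketch on Android. The use of OkHttp, the hostname, and the pin value are assumptions for the example, not something the answer prescribes:

import okhttp3.CertificatePinner;
import okhttp3.OkHttpClient;

// Hypothetical pinned client: OkHttp rejects any TLS connection whose certificate
// chain does not contain the pinned public-key hash, defeating simple MITM proxies.
public final class PinnedClient {
    public static OkHttpClient create() {
        CertificatePinner pinner = new CertificatePinner.Builder()
                .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=") // placeholder pin
                .build();
        return new OkHttpClient.Builder()
                .certificatePinner(pinner)
                .build();
    }
}

One design consequence: if the server's key ever rotates without an app update, pinned clients refuse to connect, so it's common to pin a backup key as well.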

We use HTTPS and assign authorized users a key which is sent in and validated with each request.
We also use HMAC hashing.
A good read on HMAC:
http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/
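As a rough illustration of the idea (not the exact scheme from the linked article; the signed fields and names are assumptions), the client and the server can both compute an HMAC over the request and compare:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch: sign method, path and a timestamp with the per-user secret; the server
// recomputes the same value and rejects the request if it does not match.
public final class RequestSigner {
    public static String sign(String secret, String method, String path, long timestamp) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String payload = method + "\n" + path + "\n" + timestamp;
        return Base64.getEncoder().encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    }
}

Including a timestamp (and checking it server-side) limits how long a captured signature can be replayed.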

Related

How to "hide" top-secret data that need to be fed to the app

Let's say I have an application that should run on a VPS. The app uses a configuration file that contains very important private keys, in the sense that no one should ever have access to them. I know VPS providers can easily access my files. So, how may I "hide" the sensitive data from malicious acts while still keeping it usable by the app?
I believe encryption will be of no help, since the decryption has to be done on the same machine. Also, I know running my own private server is a no-brainer; but that's not an option, unfortunately.
You cannot solve this problem. Whatever workaround you can find, there will be a way for someone with access to repeat the same steps. You can only solve this if you have full control over the server (both hardware and software), otherwise, it's a lost battle.
Some links:
https://cheatsheetseries.owasp.org/cheatsheets/Key_Management_Cheat_Sheet.html
https://owaspsamm.org/model/implementation/secure-deployment/stream-b/
https://security.stackexchange.com/questions/223457/how-to-store-api-keys-when-algo-trading
You can browse Security SE for some direction, and ask a more targeted question.
This problem is mitigated by using your own servers, using specialized hardware for key storage, trusting your hosting provider or cloud, and using well-designed security protocols.
But the VPS provider doesn't know how your app will decrypt the keys in the file? Perhaps your app has a decrypt key embedded in it, or maybe it is something even simpler. Without decompiling your app they are no closer to learning the secrets. Of course if your "app" is just a few scripts then they can work it out.
For example if the first key in the file is customerID, they don't know that all the other keys are simply xor'ed against a hash of your customerID - they don't even know the hashing algorithm you used.
OK, that might be too simplistic if you used one of the few well-known hashes, but if there are only a few clients, it can be enough.
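A toy sketch of that idea (purely illustrative, and obfuscation rather than real protection, since anyone who decompiles the app recovers the scheme):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Hypothetical layout: every stored key except the first is XOR'ed against a
// SHA-256 hash of the customer ID; recovering a key just repeats the XOR with the same pad.
public final class KeyFileObfuscation {
    public static byte[] deobfuscate(byte[] storedKey, String customerId) throws Exception {
        byte[] pad = MessageDigest.getInstance("SHA-256").digest(customerId.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[storedKey.length];
        for (int i = 0; i < storedKey.length; i++) {
            out[i] = (byte) (storedKey[i] ^ pad[i % pad.length]);
        }
        return out;
    }
}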
Obviously, they could be listening to the network traffic your app is sending, but then that should be end-to-end encrypted already, if you are that paranoid.

Is it possible to find the origin of a request in nestjs? [duplicate]

Is there any way to restrict post requests to my REST API only to requests coming from my own mobile app binary? This app will be distributed on Google Play and the Apple App Store so it should be implied that someone will have access to its binary and try to reverse engineer it.
I was thinking of something involving the app signatures, since every published app must be signed somehow, but I can't figure out how to do it in a secure way. Maybe a combination of getting the app signature, plus time-based hashes, plus app-generated key pairs, and good old security through obscurity?
I'm looking for something as failproof as possible. The reason is that I need to deliver data to the app based on data gathered by the phone sensors, and if people can pose as my own app and send data to my API that wasn't processed by my own algorithms, it defeats its purpose.
I'm open to any effective solution, no matter how complicated. Tin foil hat solutions are greatly appreciated.
Any credentials that are stored in the app can be exposed by the user. In the case of Android, they can completely decompile your app and easily retrieve them.
If the connection to the server does not utilize SSL, they can be easily sniffed off the network.
Seriously, anybody who wants the credentials will get them, so don't worry about concealing them. In essence, you have a public API.
There are some pitfalls and it takes extra time to manage a public API.
Many public APIs still track by IP address and implement tarpits to simply slow down requests from any IP address that seems to be abusing the system. This way, legitimate users from the same IP address can still carry on, albeit slower.
You have to be willing to shut off an IP address or IP address range despite the fact that you may be blocking innocent and upstanding users at the same time as the abusers. If your application is free, it may give you more freedom since there is no expected level of service and no contract, but you may want to guard yourself with a legal agreement.
In general, if your service is popular enough that someone wants to attack it, that's usually a good sign, so don't worry about it too much early on, but do stay ahead of it. You don't want the reason for your app's failure to be because users got tired of waiting on a slow server.
Your other option is to have the users register, so you can block by credentials rather than IP address when you spot abuse.
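A minimal sketch of the per-IP tarpit idea mentioned above (class name and threshold are illustrative; a real deployment would use a sliding window and persistent counters):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Per-IP tarpit: requests beyond a threshold are delayed rather than rejected,
// so legitimate users behind the same IP can still get through, just more slowly.
public final class Tarpit {
    private static final int THRESHOLD = 100; // requests allowed per window at full speed
    private final ConcurrentHashMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    // Returns the number of milliseconds to sleep before handling this request.
    public long delayFor(String ip) {
        int n = counts.computeIfAbsent(ip, k -> new AtomicInteger()).incrementAndGet();
        return n <= THRESHOLD ? 0 : Math.min(5000, (n - THRESHOLD) * 50L);
    }

    // Call from a scheduled task at the end of each window.
    public void resetWindow() {
        counts.clear();
    }
}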
Yes, it's public
This app will be distributed on Google Play and the Apple App Store so it should be implied that someone will have access to its binary and try to reverse engineer it.
From the moment it's on the stores it's public, therefore anything sensitive in the app binary must be considered potentially compromised.
The Difference Between WHO and WHAT is Accessing the API Server
Before I dive into your problem I would like to first clear up a misconception about who and what is accessing an API server. I wrote a series of articles about API and mobile security, and in the article Why Does Your Mobile App Need An Api Key? you can read in detail the difference between who and what is accessing your API server, but I will extract the main takeaways from it here:
The what is the thing making the request to the API server. Is it really a genuine instance of your mobile app, or is it a bot, an automated script or an attacker manually poking around your API server with a tool like Postman?
The who is the user of the mobile app that we can authenticate, authorize and identify in several ways, like using OpenID Connect or OAUTH2 flows.
Think about the who as the user your API server will be able to authenticate and authorize access to the data, and think about the what as the software making that request on behalf of the user.
So if you are not using user authentication in the app, then you are left with trying to attest what is doing the request.
Mobile apps should be as dumb as possible
The reason why is because I need to deliver data to the app based on data gathered by the phone sensors, and if people can pose as my own app and send data to my api that wasn't processed by my own algorithms, it defeats its purpose.
It sounds to me like you are saying that you have algorithms running on the phone to process data from the device sensors and then send the results to the API server. If so, you should reconsider this approach and instead just collect the sensor values, send them to the API server, and have it run the algorithm.
As I said, anything inside your app binary is public, because, as you said yourself, it can be reverse engineered:
should be implied that someone will have access to its binary and try to reverse engineer it.
Keeping the algorithms in the backend allows you to not reveal your business logic, and at the same time you may reject requests with sensor readings that do not make sense (if that is possible to determine). This also brings the benefit of not having to release a new version of the app each time you tweak the algorithm or fix a bug in it.
Runtime attacks
I was thinking something involving the app signatures, since every published app must be signed somehow, but I can't figure out how to do it in a secure way.
Anything you do at runtime to protect the request you are about to send to your API can be reverse engineered with tools like Frida:
Inject your own scripts into black box processes. Hook any function, spy on crypto APIs or trace private application code, no source code needed. Edit, hit save, and instantly see the results. All without compilation steps or program restarts.
Your Suggested Solutions
Security is all about layers of defense, thus you should add as many as you can afford and as are required by law (e.g. GDPR in Europe). Each of your proposed solutions is one more layer the attacker needs to bypass, and depending on their skill set and the time they are willing to spend on your mobile app, it may stop them from going any further, but in the end all of them can be bypassed.
Maybe a combination of getting the app signature, plus time-based hashes, plus app-generated key pairs and the good old security though obscurity?
Even when you use key pairs stored in the hardware trusted execution environment, all an attacker needs to do is use an instrumentation framework to hook into the function of your code that uses the keys, in order to extract or manipulate the function's parameters and return values.
Android Hardware-backed Keystore
The availability of a trusted execution environment in a system on a chip (SoC) offers an opportunity for Android devices to provide hardware-backed, strong security services to the Android OS, to platform services, and even to third-party apps.
While it can be defeated, I still recommend you use it, because not all hackers have the skill set or are willing to spend the time on it. I would also recommend you read this series of articles about Mobile API Security Techniques to learn about some complementary/similar techniques to the ones you described. These articles will teach you how API keys, user access tokens, HMAC and TLS pinning can be used to protect the API and how they can be bypassed.
Possible Better Solutions
Nowadays I see developers using Android SafetyNet to attest what is doing the request to the API server, but they fail to understand that it's not intended to attest that the mobile app is what is doing the request; instead it's intended to attest the integrity of the device. I go into more detail in my answer to the question Android equivalent of iOS DeviceCheck. So should you use it? Yes, you should, because it is one more layer of defense that, in this case, tells you that your mobile app is not installed on a rooted device, unless SafetyNet has been bypassed.
Is there any way to restrict post requests to my REST API only to requests coming from my own mobile app binary?
You can give the API server a high degree of confidence that it is indeed accepting requests only from your genuine app binary by implementing the Mobile App Attestation concept. I describe it in more detail in this answer I gave to the question How to secure an API REST for mobile app?, especially the sections Securing the API Server and A Possible Better Solution.
Do you want to go the Extra Mile?
In any response to a security question I always like to reference the excellent work from the OWASP foundation.
For APIs
OWASP API Security Top 10
The OWASP API Security Project seeks to provide value to software developers and security assessors by underscoring the potential risks in insecure APIs, and illustrating how these risks may be mitigated. In order to facilitate this goal, the OWASP API Security Project will create and maintain a Top 10 API Security Risks document, as well as a documentation portal for best practices when creating or assessing APIs.
For Mobile Apps
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
OWASP - Mobile Security Testing Guide:
The Mobile Security Testing Guide (MSTG) is a comprehensive manual for mobile app security development, testing and reverse engineering.
No. You're publishing a service with a public interface and your app will presumably only communicate via this REST API. Anything that your app can send, anyone else can send also. This means that the only way to secure access would be to authenticate in some way, i.e. keep a secret. However, you are also publishing your apps. This means that any secret in your app is essentially being given out also. You can't have it both ways; you can't expect to both give out your secret and keep it secret.
Though this is an old post, I thought I should share the updates from Google in this regard.
You can actually ensure that your Android application is calling the API by using the SafetyNet mobile attestation APIs. This adds a little overhead to the network calls and prevents your application from running on a rooted device.
I found nothing similar to SafetyNet for iOS. Hence in my case, I checked the device configuration first in my login API and took different measures for Android and iOS. For iOS, I decided to keep a shared secret key between the server and the application. As iOS applications are a bit more difficult to reverse engineer, I think this extra key check adds some protection.
Of course, in both cases, you need to communicate over HTTPS.
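A hedged sketch of the SafetyNet Attestation call on Android follows (the nonce handling, API key and sendToBackend helper are assumptions; the signed JWS must always be verified on the server, and the API has since been superseded by Play Integrity):

import android.content.Context;
import com.google.android.gms.safetynet.SafetyNet;

public final class Attestation {
    public static void attest(Context context, byte[] nonce, String apiKey) {
        // The nonce should be issued by the server per request to prevent replay.
        SafetyNet.getClient(context)
                .attest(nonce, apiKey)
                .addOnSuccessListener(response -> sendToBackend(response.getJwsResult()))
                .addOnFailureListener(e -> { /* attestation unavailable: degrade or reject */ });
    }

    private static void sendToBackend(String jws) {
        // Hypothetical helper: post the signed JWS to the API server, which verifies it with Google.
    }
}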
As the other answers and comments imply, you can't truly restrict API access to only your app, but you can take measures to reduce the attempts. I believe the best solution is to make requests to your API (from native code, of course) with a custom header like "App-Version-Key" (this key is decided at compile time) and have your server check for this key to decide whether to accept or reject the request. Also, when using this method you SHOULD use HTTPS/SSL, as this will reduce the risk of people seeing your key by viewing the request on the network.
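A bare-bones sketch of that idea from the client side (the header name comes from the answer; the URL and key value are placeholders):

import java.net.HttpURLConnection;
import java.net.URL;

public final class ApiClient {
    // Baked in at compile time; the server rejects requests that do not carry a valid value.
    private static final String APP_VERSION_KEY = "REPLACE_AT_BUILD_TIME";

    public static int callApi() throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL("https://api.example.com/v1/data").openConnection();
        conn.setRequestProperty("App-Version-Key", APP_VERSION_KEY);
        return conn.getResponseCode();
    }
}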
Regarding Cordova/PhoneGap apps, I will be creating a plugin to do the above-mentioned method. I will update this comment when it's complete.
There is not much you can do, because once you let someone in they can call your APIs. The most you can do is the following:
Since you want only your application (with a specific package name and signature) to call your APIs, you can get the signature key of your APK programmatically and send it to the server with every API call; if it checks out, you respond to the request. (Or you can have a token API that your app calls when it starts, and then use that token for the other APIs, though the token must be invalidated after some hours of inactivity.)
Then you need to ProGuard your code so no one sees what you are sending and how you encrypt it. If you do the encryption well, decompiling will be much harder.
Even the APK signature can be spoofed in some difficult ways, but this is the best you can do.
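A hedged sketch of reading the APK signing certificate programmatically (this uses the pre-API-28 GET_SIGNATURES flag; newer Android versions use GET_SIGNING_CERTIFICATES, and the resulting hash is what would be sent along with each API call):

import android.content.Context;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.util.Base64;
import java.security.MessageDigest;

public final class ApkSignature {
    // Returns a Base64 SHA-256 hash of the first signing certificate of this app.
    public static String signatureHash(Context context) throws Exception {
        PackageInfo info = context.getPackageManager()
                .getPackageInfo(context.getPackageName(), PackageManager.GET_SIGNATURES);
        byte[] cert = info.signatures[0].toByteArray();
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(cert);
        return Base64.encodeToString(digest, Base64.NO_WRAP);
    }
}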
Has anyone looked at Firebase App Check?
https://firebase.google.com/docs/app-check
Is there any way to restrict post requests to my REST API only to requests coming from my own mobile app binary?
I'm not sure if there is an absolute solution.
But, you can reduce unwanted requests.
Use an App Check:
The "Firebase App Check" can be used cross-platform (https://firebase.google.com/docs/app-check) - credit to #Xande-Rasta-Moura
iOS: https://developer.apple.com/documentation/devicecheck
Android: https://android-developers.googleblog.com/2013/01/verifying-back-end-calls-from-android.html
Use BasicAuth (for API requests)
Allow a user-agent header for mobile devices only (for API requests)
Use a robots.txt file to reduce bots
User-agent: *
Disallow: /
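As a rough sketch of wiring up Firebase App Check on Android (assuming the firebase-appcheck-playintegrity dependency and an app already registered in Firebase; this is not part of the original answer):

import android.app.Application;
import com.google.firebase.FirebaseApp;
import com.google.firebase.appcheck.FirebaseAppCheck;
import com.google.firebase.appcheck.playintegrity.PlayIntegrityAppCheckProviderFactory;

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        FirebaseApp.initializeApp(this);
        // After this, requests to Firebase-backed services carry an App Check token
        // that the backend (or your own server, via the Admin SDK) can verify.
        FirebaseAppCheck.getInstance()
                .installAppCheckProviderFactory(PlayIntegrityAppCheckProviderFactory.getInstance());
    }
}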

Can Mylar be hacked?

I'm interested in using Mylar for an upcoming project.
The promises that Mylar makes seem impressive. However, could a dev write a back-door attack into the code, that is allowed to run (verified by hash/signature), so that the data is compromised (likely via XSS)? Mylar documentation states:
"Mylar ensures that client-side application code is authentic, even if
the server is malicious."
The only way I can imagine this being protected against is for the browser itself to disallow outbound communication of unencrypted data. But, for that to happen, how can the app query the database, make calls back to the server (I understand that Mylar is best used with a browser side framework like Meteor, but still, Meteor needs to communicate with the server for certain tasks).
Is Mylar able to provide complete data security, even from the application developer/server admin?
Here is Mylar's claim (from http://www.mit.edu/~ralucap/mylar.pdf):
3.4 Threat model
Threats. Both the application and the database servers can be fully controlled by an adversary: the adversary may obtain all data from the server, cause the server to send arbitrary responses to web browsers, etc. This model subsumes a wide range of real-world security problems, from bugs in server software to insider attacks. Mylar also allows some user machines to be controlled by the adversary, and to collude with the server. This may be either because the adversary is a user of the application, or because the adversary broke into a user's machine. We call this adversary active, in contrast to a passive adversary that eavesdrops on all information at the server, but does not make any changes, so that the server responds to all client requests as if it were not compromised.
Guarantees. Mylar protects a data item's confidentiality in the face of arbitrary server compromises, as long as none of the users with access to that data item use a compromised machine.
In this context, 'compromised machine' means the client machine/browser.
After re-reading the Mylar white paper, I see where the document states:
Assumptions. To provide the above guarantees, Mylar makes the following assumptions. Mylar assumes that the web application as written by the developer will not send user data or keys to untrustworthy recipients, and cannot be tricked into doing so by exploiting bugs (e.g., cross-site scripting). Our prototype of Mylar is built on top of Meteor, a framework that helps programmers avoid many common classes of bugs in practice.
Does this mean the way the application was written at the time of encryption, or at the time of attack? In other words, is the encrypted data somehow tied to a specific version of the application code? Elsewhere in the referenced Mylar white paper it indicates that the app code is verified against a hash signature.
If the app code can simply be hacked at the server, this reduces the value proposition greatly, as any attacker who gains access to the source code could modify the code and leech data as it is requested (at the browser). The guarantee of "protecting confidentiality in the face of arbitrary server compromises" seems broad enough to include the idea of the attacker modifying the source code of the application, hence my confusion.
Also refer to section 6 in the white paper for more information. I believe the Mylar doc is conveying that it does mitigate compromised application code attacks. I'd really love to hear from a dev with authoritative understanding of Mylar.
... could a dev write a back-door attack into the code, that is allowed to run (verified by hash/signature), so that the data is compromised (likely via XSS)?
Yes, a developer could write a back-door into the code. There is no way to prevent that, because a developer could claim he's using Mylar when he isn't, or could be using a compromised version. Note that Mylar doesn't claim it can prevent that. It prevents attacks by server operators, for example if you host your application in a third-party cloud.
3 MYLAR ARCHITECTURE
There are three different parties in Mylar: the users, the web site owner, and the server operator. Mylar’s goal is to help the site owner protect the confidential data of users in the face of a malicious or compromised server operator.
If you don't trust the developers or the web site owner, you have to check the client-side source code every time it's loaded.
Mylar documentation states: "Mylar ensures that client-side application code is authentic, even if the server is malicious."
The only way I can imagine this being protected against is for the browser itself to disallow outbound communication of unencrypted data. But, for that to happen, how can the app query the database, make calls back to the server [...]
Is Mylar able to provide complete data security, even from the application developer/server admin?
That's right, the browser won't send unencrypted data to the server (at least the data which you marked as secret). I can't provide a full explanation of how it allows a large subset of SQL functionality on encrypted data, because it's complicated. As Raluca Ada Popa explains in one of her presentations, data is encrypted several times with different algorithms, because each algorithm allows different operations on encrypted data (equality check, ordering, text search, ...). MIT also developed CryptDB, which uses the same methodology but only protects the database server.
3.4 Threat model: Both the application and the database servers can be fully controlled by an adversary [...]
When an attacker controls the application server, he could replace the whole application with his own, which mocks the original user interface. This is where the browser plugin comes into play: the application is signed by the web site owner before it's deployed, so that the browser plugin can check the signature and alert the user if the application was modified.
You might have noticed that Mylar needs the user to check authenticity himself. Other things that a user needs to be aware of:
Mylar applications must be loaded over a secure HTTPS connection.
Retrieved data must be signed by the expected user (for example a chat room must show who created it and the user has to check if someone tries to fake an existing room).
The client machine must not be compromised.
...
Mylar assumes that the web application as written by the developer will not send user data or keys to untrustworthy recipients, and cannot be tricked into doing so by exploiting bugs (e.g., cross-site scripting).
Does this mean the way the application was written at the time of encryption, or at the time of attack?
They assume the application as delivered doesn't contain any bugs which could leak private data. Mylar doesn't prevent coding mistakes, it prevents untrusted modifications later on.
In other words, is the encrypted data somehow tied to a specific version of the application code? Elsewhere in the referenced Mylar white paper it indicates that the app code is verified against a hash signature.
If the app code can simply be hacked at the server, this reduces the value proposition greatly, as any attacker who gains access to the source code could modify the code and leach data as it is requested (at the browser).
Encrypted data isn't tied to a specific version. Each version of the application needs to be signed by the web site owner, so that the browser plugin can check its signature and attacks would be obvious to users. A common dynamic web site wouldn't allow signing, because each user's data is different and would modify the received code; therefore application code (HTML, JavaScript, ...) and data are strictly separated. After the application is loaded and its signature has been checked, data is retrieved via AJAX, and the AJAX response must not contain executable code (this is part of the Meteor framework; I can't say anything more about it).
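For illustration only (this is not Mylar's actual implementation), the kind of check the browser plugin performs amounts to verifying the site owner's signature over the delivered application code:

import java.security.PublicKey;
import java.security.Signature;

public final class CodeSignatureCheck {
    // Returns true only if signatureBytes is a valid signature over appCode
    // made with the site owner's private key.
    public static boolean isAuthentic(byte[] appCode, byte[] signatureBytes, PublicKey ownerKey) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(ownerKey);
        verifier.update(appCode);
        return verifier.verify(signatureBytes);
    }
}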
Conclusion
If the web site owner himself is dishonest, you can't be sure about privacy. This is especially the case if governments are able to force the web site owner to cooperate.
Also, Mylar doesn't prevent bugs that could leak data. For example, the simplest mistake would be a developer forgetting to mark a field as private.
When an attacker takes over the application server, users are warned, but if they ignore the warning (or never installed the browser plugin), their data could be intercepted.
If you want to outsource hosting of your application or you won't trust your own server operators, Mylar provides better security than any other framework I know of.

What security risks are posed by using a local server to provide a browser-based gui for a program?

I am building a relatively simple program to gather and sort data input by the user. I would like to use a local server running through a web browser for two reasons:
HTML forms are a simple and effective means for gathering the input I'll need.
I want to be able to run the program off-line and without having to manage the security risks involved with accessing a remote server.
Edit: To clarify, I mean that the application should be accessible only from the local network and not from the Internet.
As I've been seeking out information on the issue, I've encountered one or two remarks suggesting that local servers have their own security risks, but I'm not clear on the nature or severity of those risks.
(In case it is relevant, I will be using SWI-Prolog for handling the data manipulation. I also plan on using the SWI-Prolog HTTP package for the server, but I am willing to reconsider this choice if it turns out to be a bad idea.)
I have two questions:
What security risks does one need to be aware of when using a local server for this purpose? (Note: In my case, the program will likely deal with some very sensitive information, so I don't have room for any laxity on this issue).
How does one go about mitigating these risks? (Or, where I should look to learn how to address this issue?)
I'm very grateful for any and all help!
There are security risks with any solution. You can use tools proven over years and one day be hacked (from my own experience), or you can pay a lot for a security solution and never be hacked. So you always need to compare effort against impact.
Basically, you need to protect four "doors" in your case:
1. Authorization (password interception or, for example, improper usage of cookies)
2. The HTTP protocol
3. Application input
4. Other ways to access your database (not using HTTP, for example via an SSH port with a weak password, or by taking your computer or hard disk, etc.; in some cases you need to properly encrypt the volume)
Items 1 and 4 are not specific to Prolog, and 4 is the only one with specifics in the case of a local server.
Protecting the HTTP protocol level means not allowing requests that can take control of your SWI-Prolog server. For this purpose I recommend installing a reverse proxy like nginx, which can prevent attacks at this level, including some types of DoS. The browser will contact nginx, and nginx will forward the request to your server if it is a correct HTTP request. You can use any other server instead of nginx if it has similar features.
You need to install a proper SSL key and enable SSL (HTTPS) in your reverse proxy server, not in your SWI-Prolog server. HTTPS will encrypt all information, and the proxy will communicate with SWI-Prolog over plain HTTP.
Think about authorization. Some methods can be broken very easily. You need to study this topic; there is a lot of information available. I think it is the most important part.
Application input is the next problem; the famous example is SQL injection. Study the examples. All good web frameworks have "entry" procedures to sanitize all possible injections. Take existing code and adapt it for Prolog.
Also, test all input fields with very long strings, different charsets, etc.
As you can see, security is not so easy, but you can choose an appropriate level of effort by weighing it against the impact of being hacked.
Also, think about the likely attacker. If somebody is particularly interested in getting your information, all the mentioned methods are good, but that is a rare case. Most often hackers just scan the Internet and try to apply known hacks to every server they find. In that case your best friends are honeypots and Prolog itself, because the probability of a hacker being interested in SWI-Prolog internals is extremely low (the hacker would need to study the server code well to find a way in).
So I think you will find adequate methods to protect all sensitive data.
But please, never use passwords that are combinations of dictionary words, and never use the same password for more than one purpose; that is the most important rule of security. For the same reason, you shouldn't give your users access to all information; protection should be part of the application-level design.
The cases specific to a local server are a good firewall, proper network setup, and encryption of the hard drive partition in case your local server can be physically stolen.
But if you mean the application should be accessible only from your local network and not from the Internet, you need much less effort; mainly you need to check your router/firewall setup and the fourth "door" in my list.
If you have a very limited number of known users, you can simply ask them to use a VPN instead of protecting your server as you would for "global" access.
I'd point out that my post was about a security issue with using port forwarding in Apache to access a Prolog server.
And I do know of a successful Prolog injection DoS attack on a SWI-Prolog HTTP framework based website. I don't believe the website's author wants the details made public, but the possibility is certainly real.
Obviously this attack vector is only possible if the site evaluates Turing complete code (or code which it can't prove will terminate).
A simple security precaution is to check the Request object and reject requests from anything but localhost.
I'd point out that the pldoc server only responds by default on localhost.
- Anne Ogborn
I think the SWI-Prolog http package is an excellent choice. Jan Wielemaker put much effort into making it secure and scalable.
I don't think you need to worry about SQL injection; indeed it would be strange to rely on SQL when you have Prolog power at your fingertips...
Of course, you need to properly manage the http access in your server...
Just this morning there has been an interesting post in SWI-Prolog mailing list, about this topic: Anne Ogborn shares her experience...

Client Server socket security

Assume we have a server S and a few clients (C), and whenever a client updates the server, an internal database on the server is updated and replicated to the other clients. This is all done using sockets in an intranet environment.
I believe that an attacker can fairly easily sniff this plain text traffic. My colleagues believe I am overly paranoid because we are behind a firewall.
Am I being overly paranoid? Do you know of any exploit (link please) that took advantage of a situation such as this, and what can be done differently? The clients were rewritten in Java, but the server is still using C++.
Is there anything in code that can protect against such an attack?
Inside your company's firewall, you're fairly safe from direct hack attacks from the outside. However, statistics that I won't trouble to dig out claim that most of the damage to a business' data is done from the INside. Most of that is simple accident, but there are various reasons for employees to be disgruntled and not found out; and if your data is sensitive they could hurt your company this way.
There are also boatloads of laws about how to handle personal ID data. If the data you're processing is of that sort, treating it carelessly within your company could also open your company up to litigation.
The solution is to use SSL connections. You want to use a pre-packaged library for this. You provide private/public keys for both ends and keep the private keys well hidden with the usual file access privileges, and the problem of sniffing is mostly taken care of.
SSL provides both encryption and authentication. Java has it built in and OpenSSL is a commonly used library for C/C++.
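For the Java side, a minimal client sketch using the built-in SSL socket factory (host, port and payload handling are placeholders; the server certificate must be trusted by the client's trust store or explicitly pinned):

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.io.OutputStream;

public final class SecureClient {
    public static void send(byte[] payload) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("server.internal", 8443)) {
            socket.startHandshake(); // fails if the certificate cannot be validated
            OutputStream out = socket.getOutputStream();
            out.write(payload);
            out.flush();
        }
    }
}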
Your colleagues are naïve.
One high-profile attack occurred at Heartland Payment Systems, a credit card processor that one would expect to be extremely careful about security. Assuming that internal communications behind their firewall were safe, they failed to use something like SSL to ensure their privacy. Hackers were able to eavesdrop on that traffic, and extract sensitive data from the system.
Here is another story with a little more description of the attack itself:
Described by Baldwin as "quite a
sophisticated attack," he says it has
been challenging to discover exactly
how it happened. The forensic teams
found that hackers "were grabbing
numbers with sniffer malware as it
went over our processing platform,"
Baldwin says. "Unfortunately, we are
confident that card holder names and
numbers were exposed." Data, including
card transactions sent over
Heartland's internal processing
platform, is sent unencrypted, he
explains, "As the transaction is being
processed, it has to be in unencrypted
form to get the authorization request
out."
You can do many things to prevent a man-in-the-middle attack. For most internal data within a firewall/IDS-protected network, you really don't need to secure it. However, if you do wish to protect the data, you can do the following:
Use PGP encryption to sign and encrypt messages
Encrypt sensitive messages
Use hash functions to verify that the message sent has not been modified.
It is good standard operating procedure to secure all data; however, securing data has very large costs. With secure channels you need to have a certificate authority and allow for extra processing on all machines that are involved in the communication.
You're being paranoid. You're talking about data moving across an, ideally, secured internal network.
Can information be sniffed? Yea, it can. But it's sniffed by someone who has already breached network security and got within the firewall. That can be done in innumerable ways.
Basically, for the VAST majority of businesses, no reason to encrypt internal traffic. There are almost always far far easier ways of getting information, from inside the company, without even approaching "sniffing" the network. Most such "attacks" are from people who are simply authorized to see the data in the first place, and already have a credential.
The solution is not to encrypt all of your traffic, the solution is to monitor and limit access, so that if any data is compromised, it is easier to detect who did it, and what they had access to.
Also, consider that the sysadmins and DBAs pretty much have carte blanche over the entire system anyway, as inevitably someone always needs to have that kind of access. It's simply not practical to encrypt everything to keep it away from prying eyes.
Finally, you're making much ado about something that is just as likely written on a sticky note tacked to the bottom of someone's monitor anyway.
Do you have passwords on your databases? I certainly hope the answer to that is yes. Nobody would believe that password-protecting a database is overly paranoid. Why wouldn't you have at least the same level of security* on the same data flowing over your network? Just like an unprotected DB, unprotected data flowing over the network is vulnerable not only to sniffing but also to modification by a malicious attacker. That is how I would frame the discussion.
*By same level of security I mean use SSL as some have suggested, or simply encrypt the data using one of the many available encryption libraries around if you must use raw sockets.
Just about every "important" application I've used relies on SSL or some other encryption methodology.
Just because you're on the intranet doesn't mean you may not have malicious code running on some server or client that may be trying to sniff traffic.
The minimum requirement is an attacker who has access to a device inside your network that gives him the ability to sniff all of the traffic, or the traffic between a client and a server.
Anyway, if the attacker is already inside, sniffing should be just one of the problems you'll have to take into consideration.
There are not many companies I know of which use secure sockets between clients and servers inside an intranet, mostly because of the higher costs and lower performance.
