I am creating a Chromium/Electron based Mac app. The app is essentially a browser for my customers to use a web service that I have no control over. My requirement is that users of my app (who may have root access on their Mac) should not be able to view the URLs the app is visiting, and should be unable to gain access to the cookies the app is storing. Normally it is not hard to MITM yourself, or attach a debugger to an app and dump memory to see the URLs and cookies.
How can I prevent these types of leaks to the user? If it's impossible, it may be acceptable to make it very hard so that a very high level of sophistication is needed.
Your users have full control of their devices; it is not possible to securely prevent them from proxying or inspecting what your client-side app does. Obfuscation may seem like an option, but in the end the HTTP request that leaves your app traverses the whole OS through different layers, and your user can easily observe it, at worst in raw network packets (though usually much more easily).
The only way it is possible to prevent the user from knowing what's happening is if you have your own backend. The frontend app (Electron) would make a request to your backend, which in turn could make any request with any parameters without the user being aware.
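A minimal sketch of such a relay, assuming an Express backend; the `/relay` endpoint and the action mapping are my own illustrative names, not anything your service defines:

```typescript
// Minimal relay sketch: the Electron client only ever calls /relay with an
// opaque action name; the real URLs, cookies and parameters live here.
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical private mapping from opaque actions to the real service.
const ACTIONS: Record<string, string> = {
  listItems: "https://real-service.example/api/items",
};

app.post("/relay", async (req, res) => {
  const upstream = ACTIONS[req.body.action];
  if (!upstream) return res.status(400).end();
  const r = await fetch(upstream, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(req.body.payload),
  });
  res.status(r.status).send(await r.text());
});

app.listen(8443); // put real TLS termination in front of this in practice
```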
Note though that your backend could still be used as a proxy or oracle just like if the user was connecting to the real service. This might or might not be a problem in your case, depending on what you actually want to achieve and why.
The app is essentially a browser for my customers to use a web service that I have no control over. My requirement is that users of my app (who may have root access on their Mac) should not be able to view the URLs the app is visiting, and should be unable to gain access to the cookies the app is storing
Basically, you cannot (you could with the appropriate infrastructure, but you lack that infrastructure).
Network communications can be secured, to a point, using HTTPS (if you can't even use that, then you're completely out of luck - users wouldn't even need root access to the Mac to sniff traffic). You need to verify the server certificate to be sure you're connecting to the correct server.
One thing you might do - effective just against wannabes, I'm afraid - is to first run a test API call against some random server and verify that the connection either fully succeeds, with the proper server identification and matching IP if the server exists, or properly fails if the server never existed. Anything else is a telltale that someone has taken over the network layer; at that point you could connect to a different server, make different calls, and lament that the server isn't answering properly.
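A cheap version of that probe, sketched here under the assumption of a Node-based app: ask for a hostname that cannot exist and treat anything but a clean resolution failure as a sign the network layer is intercepted.

```typescript
import { resolve4 } from "dns/promises";

// Probe a hostname that should NOT exist. Intercepting proxies that
// resolve everything (or answer for wildcard names) give themselves away.
async function networkLooksClean(): Promise<boolean> {
  const bogus = `no-such-${Math.random().toString(36).slice(2)}.example.com`;
  try {
    await resolve4(bogus);
    return false; // a nonexistent name resolved: someone owns the resolver
  } catch {
    return true; // the expected NXDOMAIN-style failure
  }
}
```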
Strings in memory can be (air quotes) protected (end air quotes) by keeping them available only for the shortest time, and otherwise storing them in a different form - you can, for example, take a URL and a random byte sequence of the same length, then store the sequence and the XOR of the URL and the sequence. You can then reconstruct the URL every time you need it, remembering to clear it from any app caches it might find its way into. Also, just for the lols, you can keep a baker's dozen of different decoy URLs sprinkled in the clear throughout the code. A memory dump at that point will turn up nothing useful.
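A sketch of that XOR split (the function names are mine; adapt to taste):

```typescript
import { randomBytes } from "crypto";

// Store a URL as (mask, url XOR mask): neither half is readable on its own,
// so a casual memory dump shows no plaintext URL.
function splitSecret(url: string): [Buffer, Buffer] {
  const plain = Buffer.from(url, "utf8");
  const mask = randomBytes(plain.length); // same length as the secret
  const masked = plain.map((b, i) => b ^ mask[i]);
  return [mask, Buffer.from(masked)];
}

// Reconstruct only for the instant of use, then overwrite our copy.
function withSecret(mask: Buffer, masked: Buffer, use: (url: string) => void): void {
  const plain = Buffer.from(masked.map((b, i) => b ^ mask[i]));
  use(plain.toString("utf8"));
  plain.fill(0); // scrub the buffer (the string handed to `use` is still JS-managed)
}
```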
Files, of course, can be encrypted with any one of several schemes - the fact that the files reside on the same machine that has to know how to decode them makes all such schemes ultimately vulnerable, but there again, you can try and obfuscate things. I once stored some information in a ZIP file - but it was just the header of an encrypted ZIP file, with the appropriate directory entry block glued at the end. The data was actually just gzipped in the clear; there was no password whatsoever. The guys who tried to decode the file thought it was a plain encrypted ZIP with the extension changed, wasted a significant amount of time on several ZIP-cracking tools, and ended up owing me a beer.
More than that, there is not much that can realistically be done.
A big advantage would come from outsourcing the API calls and "cookie" maintenance to an external service that you control, e.g. on Amazon AWS or Azure or similar. Then you could employ all kinds of protection schemes (for example: every outbound API call could be wrapped in an opaque object, timestamped, nonced, and encrypted with your server's public key, and the responses sent back encrypted with your client's unique key). Since this is relatively simple and cost-effective, it would also be my recommendation.
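For illustration, one way the opaque object could look, assuming Node's crypto and a payload small enough for raw RSA-OAEP (a real implementation would wrap a symmetric key the same way instead):

```typescript
import { publicEncrypt, randomBytes, constants } from "crypto";

// Illustrative "opaque object": the call, a timestamp and a nonce, sealed
// with the relay server's public key. Only the server can open it.
function sealCall(serverPubPem: string, method: string, params: unknown): Buffer {
  const envelope = JSON.stringify({
    method,
    params,
    ts: Date.now(),                         // lets the server reject stale requests
    nonce: randomBytes(16).toString("hex"), // lets it reject replays
  });
  return publicEncrypt(
    { key: serverPubPem, padding: constants.RSA_PKCS1_OAEP_PADDING },
    Buffer.from(envelope)
  );
}
```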
The gist of it all is that I'm trying to fetch audio metadata from a user's google drive files to store them into firebase.
At first I intended to do this locally, entirely client-sided, because my front-facing web/iOS/Android app is in flutter;
but as it turns out, there's almost no library that handles audio metadata properly, and after dabbling with it, I realized I could probably get some formats (say, .wav and most RIFF-type audio files) to work, but writing an entire library to handle all kinds of audio metadata was a task significantly bigger than my original plans. Another option would be to create interfaces between C++ and/or JS code and my Flutter application, but I'd have almost no control over that, it's not the easiest of processes, and there would be possible inconsistencies between platforms.
I might make that library eventually, but in order to facilitate my work, I decided to use a server as a middleman that'd run with node and handle the file requests and metadata treatment, and also facilitate the interactions with firebase for me by handling them through a service account.
Now, this makes me run into one issue: how to handle the Google auth.
When my user logs into my app, I get all the required auth scopes (google drive files access and write, contacts, email, etc) for my app; it goes through the consent screen and I get authenticated.
I'm still a little confused by Google's recommendations and best practices in this case, since my app, in itself, did not require an auth system beyond getting access to the Google Drive files through Google identification, and I therefore do not have Firebase/Firestore users; I can simply store them in my (Firestore) database for identification purposes (or maybe tie the frontend flow into my Firestore app to also create a user when logging in through Google, if that is possible; I'm currently using the google_sign_in Flutter package).
To come back to my actual problem now that the situation is laid out:
Should I just transfer the auth tokens (and maybe re-verify them in some way to avoid impersonation) from my frontend app to the server through an HTTPS POST request or through headers, and use them to directly query the Google Drive API (I wouldn't even need to store them outside of memory, which would be relatively safe against any attacks on the server itself), handling the files and the possibly expired token?
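For that first option, this is roughly what I picture on the node side (an Express-style sketch; the endpoint name is mine):

```typescript
import express from "express";

const app = express();

// The Flutter app forwards its Google access token per request; the server
// uses it in memory only and never persists it.
app.get("/drive/files", async (req, res) => {
  const auth = req.headers.authorization; // e.g. "Bearer ya29...."
  if (!auth) return res.status(401).end();
  const r = await fetch("https://www.googleapis.com/drive/v3/files", {
    headers: { Authorization: auth },
  });
  // A 401 from Google here would signal an expired token back to the frontend.
  res.status(r.status).json(await r.json());
});

app.listen(3000);
```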
Or should I modify my frontend workflow so it directly grants access to my server, which would handle the session rather than getting the tokens locally?
In the first case, I would most likely simply use the users' UIDs as identifiers for the Firestore data (none of it is sensitive anyway; it would simply be playlists and some metadata). In the second case, I could probably implement stronger security on Firestore using the Firestore rules, but it'd require a significant amount of refactoring and logic changes in my frontend.
In case that wasn't clear, I want my server to make all the Drive-related requests (after getting the proper authorizations from the user, of course) and handle them without having to request the files locally in the frontend. Both solutions (and others, if available) should work, but I'm wondering what the best practice would be in the context of the OAuth2 system used by Google, given that the authorization transitions between client and server and could be subject to security issues.
I'll add code/visual representations if this isn't clear enough. It is to me, but I obviously designed the mess.
The App could have a private key hard-coded into it, my server could have the corresponding public key, and the App could sign everything. But then a hacker could identify the private key in the object code and write a malicious App that signs everything with that same key. Then that App could use my server.
The App could do a key exchange with my server but how does the server know the App is authentic when it does the key exchange?
In essence you cannot know.
The reason is simple: since anybody can get at the client and learn everything the client is and knows by reverse engineering it (and they have everything they need to perform that), there is nothing to prevent them from answering any challenge you might set with whatever the real app would answer.
You can make it harder on fake apps, though. But a well-made fake could give the right answer anyway.
E.g. how to make it harder:
The server sends a challenge to the client app to calculate, e.g., the CRC32 (or MD5, SHA-1, SHA-256, ... it doesn't matter as such) of the app itself from a given start offset to a given end offset. If you make those start and end points fully random for every challenge you send, you essentially force the fake app to have the real app's compiled code in full... so you place on it the burden of having the real app (not forcing it to actually be running the unmodified code, just to have the actual unmodified code).
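For illustration, the client's side of such a challenge could look roughly like this (a sketch assuming a Node-based client; `process.execPath` stands in for wherever the shipped app's code actually lives, and I use SHA-256 rather than CRC32):

```typescript
import { readFileSync } from "fs";
import { createHash } from "crypto";

// Client side: answer a challenge by hashing the requested byte range of
// our own binary. The server precomputes the same hash over its pristine
// copy of each released client version and compares.
function answerChallenge(start: number, end: number): string {
  const self = readFileSync(process.execPath);
  return createHash("sha256").update(self.subarray(start, end)).digest("hex");
}
```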
Take care that you would need to support multiple versions of the client on the server side, etc., or you can't upgrade the clients anymore.
Anybody distributing a fake app would hence be forced to violate your copyright on the real app (and your lawyers would have an easier case, maybe).
Alternatives:
To pick an alternative, you need to figure out why it is (so) important to have your own client.
If the client contains secrets: remove them; make the client display-only and use a 3-tier model where you only let the user run the display part and keep all secrets on your servers.
If you get your revenue from selling an app: give it away for free and sell accounts on your server instead. Use authentication to do that: you can authenticate users (login & password, real two-factor authentication, ...); you can also disallow them from dramatically changing their geo-location in a short time, disallow simultaneous logins, ...
But the price is the hoops the user has to jump through. And they might use other clients nonetheless.
If you allow logic (like that used in online games) to use the power of the user's CPU to do things, you can still keep oversight at a logic level on the server: e.g. if it takes 5 minutes at the very minimum to complete a task in the real client, and the client reports back "achieved" before those 5 minutes are up, you have a cheater... Similarly, make sure all important assets are only handed out by the server; don't trust the clients.
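As a trivial sketch of that server-side timing check (names and the 5-minute constant are illustrative):

```typescript
// Server-side plausibility check: a task that takes at least 5 minutes
// in the real client cannot honestly be reported done any sooner.
const MIN_TASK_MS = 5 * 60 * 1000;
const startedAt = new Map<string, number>(); // taskId -> server timestamp

function onTaskStarted(taskId: string): void {
  startedAt.set(taskId, Date.now()); // trust only the server's clock
}

function onTaskCompleted(taskId: string): boolean {
  const t0 = startedAt.get(taskId);
  if (t0 === undefined || Date.now() - t0 < MIN_TASK_MS) {
    return false; // impossible timing: treat the client as cheating
  }
  startedAt.delete(taskId);
  return true;
}
```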
I want to digitally sign documents and messages on a Linux server. How do I securely store the private key and a passphrase if any?
The problem is, if an application gets compromised, the keys also become compromised. If I could somehow let an app sign something without letting it touch the actual keys, that wouldn't completely solve my problem (an attacker would still be able to sign anything for some time), but it would reduce the impact (e.g., we won't have to revoke the keys).
For example, in the case of SSL servers there's no such problem, because usually there's no practical need for the application to access the keys. Hence they can be semi-securely stored in a separate location: e.g. a webserver (like nginx) would be able to read the keys, but not the application.
Am I overthinking it? Is it even worth thinking about?
Create a separate, lightweight signing application that listens on a UNIX socket and runs as a different user from the main app; when your app wants to sign something, it sends the file and any additional info down that socket, and gets back the signed file.
If the application ever gets compromised, the attacker will still be able to sign files as long as he is on the server, but unless he uses a privilege-escalation exploit to gain root privileges and copy the signing app's key, he won't be able to steal the key and then sign at will without being connected to the server.
You can replace the UNIX socket with a standard TCP socket and put the signing app on a separate server for extra security; make sure to implement some basic access control on the signing app, and of course use proper firewall rules so the signing server is never exposed to the internet. Alternatively, simplify things a bit by using a "setuid" binary for signing that gets invoked by your app; in that case the signing binary runs as a different user with the additional privileges needed to access the keys, while the webapp itself has no such privileges.
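A bare-bones sketch of such a signing daemon, assuming Node and a key path of my own choosing; the point is only that it runs as a user that can read the key while the main app cannot:

```typescript
import { createServer } from "net";
import { readFileSync } from "fs";
import { createSign } from "crypto";

// Readable only by the signer user; path is illustrative.
const PRIVATE_KEY = readFileSync("/etc/signer/key.pem");
const SOCKET_PATH = "/run/signer/signer.sock";

// allowHalfOpen lets us write the signature back after the client
// finishes sending its data and half-closes the connection.
const server = createServer({ allowHalfOpen: true }, (conn) => {
  const chunks: Buffer[] = [];
  conn.on("data", (c) => chunks.push(c));
  conn.on("end", () => {
    // Sign whatever the main app sent; return only the detached signature.
    const signer = createSign("RSA-SHA256");
    signer.update(Buffer.concat(chunks));
    conn.end(signer.sign(PRIVATE_KEY));
  });
});

server.listen(SOCKET_PATH); // the main app connects here; the key never crosses the socket
```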
Basically you should implement a rudimentary software HSM.
If you have very high security needs you could consider moving the keys to a completely independent server, or better yet a hardware security module (but those are expensive). As you mention, this can help prevent the loss of the keys, but if the app is compromised the attacker could still sign whatever they wanted.
The main reason to go to the trouble, then, is auditing. That is, if your signing server or device keeps logs of everything it signs, then if only your app is compromised you will be better able to assess the extent of the damage (assuming your signing server has not been compromised).
So yes, there are benefits, but your first focus should be on securing your main application properly, because once that's compromised you're already having a very bad, no good day, even if you have moved your keys to a separate service.
I'm building an app that stores users' potentially-private notes. It's a little weird to me that I can just go into the Firebase Forge UI and look up anything which anyone has written, and it also means that anyone who somehow gains access to my Firebase account can then go in and select "Export JSON" to get all of my users' data.
Obviously I am careful with my account and am a scrupulous human being, but it generally seems like good practice for administrators to not have access to all of our users' data.
The only way I can think of to accomplish this would be to store everything in stringified JSON that has been encrypted by the user's password, but that obviously makes dealing with Firebase much more annoying, and would prevent granular access to data below the point at which things are stringified and encrypted.
Edit: This is, on second thought, not specific to Firebase, but is the case with most/all data stores unless you go out of your way to make it otherwise.
The only way to guarantee information security is to roll your own encryption on the server. You could host your Firebase connectivity server-side: have your users send their data to your server via SSL, do your encryption there, and then store the result in Firebase over its SSL endpoint.
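As a rough sketch of that server-side step, assuming Node with AES-256-GCM (the key handling here is a placeholder; in practice it would come from an env var or a KMS):

```typescript
import { randomBytes, createCipheriv } from "crypto";

// Envelope-encrypt a note before it ever reaches Firebase, so the console
// (and an exported JSON) only ever shows ciphertext.
const KEY = randomBytes(32); // placeholder: load from secure storage instead

function encryptNote(plaintext: string) {
  const iv = randomBytes(12); // fresh IV per note; never reuse with the same key
  const cipher = createCipheriv("aes-256-gcm", KEY, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Store all three fields together (e.g. base64) as the Firebase value.
  return {
    iv: iv.toString("base64"),
    tag: cipher.getAuthTag().toString("base64"),
    data: data.toString("base64"),
  };
}
```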
On the client side, things are susceptible to cross-site scripting (XSS) attacks. If you really want to go down this route you can use JS encryption from this lib: http://code.google.com/p/crypto-js/. Note that crypto-js works well in isolation, but you will also need to be sure your web pages are not tampered with (quite hard to do IMO, because you don't know what's infected the user's machine).
I'm building a system that needs to collect some user sensitive data via a secured web connection and store it securely on the server for later automated decryption and reuse. The system should also allow the user to view some part of the secured data (e.g., *****ze) and/or change it completely via the web. The system should provide a reasonable level of security.
I was thinking of the following infrastructure:
App (Web) Server 1

1. Web server with proper TLS support for secured web connections.
2. Use a public-key algorithm (e.g. RSA) to encrypt entered user sensitive data and send it to App Server 2 via a one-way outbound secured channel (e.g. SSH-2), without storing it anywhere on either App Server 1 or DB Server 1.
3. Use a user-password-dependent symmetric-key algorithm to encrypt some part of the entered data (e.g. the last few letters/digits) and store it on DB Server 1 for later retrieval by App Server 1 during the user's web session.
4. Re-use step 2 for data modification by the user via web.

DB Server 1

1. Store unsecured non-sensitive user data.
2. Store the part of the sensitive user data encrypted on App Server 1 (see step 3 above).

App Server 2

1. Do NOT EVER send anything TO App Server 1 or DB Server 1.
2. Receive encrypted user sensitive data from App Server 1 and store it in DB Server 2.
3. Retrieve encrypted user sensitive data from DB Server 2 according to the local schedules, and decrypt it using the private key (see App Server 1, step 2) stored locally on App Server 2 with proper key management.

DB Server 2

1. Store encrypted user sensitive data (see App Server 2, step 2).
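To illustrate step 2 on App Server 1, here is roughly what I have in mind (a Node sketch; I use a hybrid RSA + AES-GCM construction because raw RSA can only seal very small payloads, and the paths and names are illustrative):

```typescript
import { publicEncrypt, randomBytes, createCipheriv, constants } from "crypto";
import { readFileSync } from "fs";

// App Server 1 side: a fresh AES key per record, wrapped with App Server 2's
// RSA public key. Only App Server 2's private key (never present on this box)
// can unwrap it.
const SERVER2_PUBKEY = readFileSync("/etc/app1/server2-pub.pem");

function sealForAppServer2(sensitive: string) {
  const aesKey = randomBytes(32);
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", aesKey, iv);
  const body = Buffer.concat([cipher.update(sensitive, "utf8"), cipher.final()]);
  return {
    wrappedKey: publicEncrypt(
      { key: SERVER2_PUBKEY, padding: constants.RSA_PKCS1_OAEP_PADDING },
      aesKey
    ),
    iv,
    tag: cipher.getAuthTag(),
    body, // push all four fields to App Server 2 over the one-way channel
  };
}
```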
If either App (Web) Server 1 or DB Server 1 (or both) is compromised, the attacker will not be able to get any user sensitive data (encrypted or not). All the attacker will have is access to the public key and the encryption algorithms, which are well known anyway. The attacker will, however, be able to modify the web server to capture currently logged-in users' passwords in plaintext and decrypt the part of the user sensitive data stored in DB Server 1 (see App Server 1, step 3), which I don't consider a big deal. The attacker will also be able (via code modification) to intercept user sensitive data entered via the web during the attack. The latter I consider a higher risk, but provided that it is hard (is it?) for an attacker to modify code without someone noticing, I guess I shouldn't worry much about it.
If App Server 2 and the private key are compromised, then the attacker will have access to everything; but App Server 2 and DB Server 2 are not web-facing, so it shouldn't be a problem.
How secure is this architecture? Is my understanding of how encryption algorithms and secured protocols work correct?
Thank you!
I don't think I can give a proper response because I'm not sure the goal of your system is clear. While I appreciate you seeking feedback on a design, it's a bit hard without a purpose.
I would suggest to you this though:
Strongly document and analyse your threat model first
You need to come up with a fixed, hard-lined list of all possible attack scenarios. Local attackers, etc.: who are you trying to protect against? You also say things like 'with proper key management', yet this is one of the hardest things to get right. So don't just assume you can do it; fully plan out how you will, with specific links to whose attacks each measure prevents.
The reason you need a threat model is that you will need to determine on which angles you will be vulnerable; because there will be some.
I will also suggest that while the theory is good, in crypto the implementation is also critical. Do not just assume that you will do things correctly; you really need to take care as to where random numbers come from, and other such things.
I know this is a bit vague, but I do think that coming up with a formal and strong threat model will, at the least, be very helpful for you.
So far so good. You are well on your way to a very secure architecture. There are other concerns, such as firewalls, password policies, logging, monitoring and alerting to consider, but everything you described so far is very solid. If the data is sensitive enough, consider a third party audit of your security.
I would not recommend using any form of public key to communicate from your web server to your app server. If you control both systems, just use a regular shared-secret encryption scheme. You know the identity of your app server, so keeping the key secure is not an issue. If you ever need to change or update the secret key, do so manually to prevent it from leaking across a connection.
What I would be most careful about is the direction of data transfer from the server in your DMZ (which should only be your web server) to the boxes residing internal to your network. It is becoming increasingly common for legitimate domains to be compromised to distribute malware to visiting users. That is bad, but if the malware were to turn inward to your network instead of only outward to your users, your business would be completely hosed.
I also did not see anything about preventing SQL injection, or about system hardening/patching to prevent malware distribution. This should be your first and most important consideration. If security is important to you, you should build your architecture to be flexible enough for minor customizations of inter-server communication and frequent patching. Most websites, even major legitimate businesses, never fix their security holes even when they are compromised. You must be continually fixing security holes and changing things to prevent holes from arising if you wish to avoid being compromised in the first place.
To prevent becoming a malware distributor, I would suggest making hard-and-fast rules about how media containing any sort of client-side scripting is served. Client-side scripting can be found in JavaScript, ActiveX, Flash, Acrobat, Silverlight, and any other code or plugin that executes on the client system. Policies for serving that content must exist so that anomalous code fragments can be immediately identified. My recommendation is to NEVER embed client-side code directly into the page, but always reference an external file. I would also suggest consolidating like media to give you better asset control and save bandwidth, such as serving one large JavaScript file instead of 8 small ones. I would also recommend forcing all such media onto an external content distribution system that references your domain in its directory structure. That way media is not served from your servers directly, and if it is served from you directly you can quickly identify it as potentially malicious and necessitating a security review.