I want to host copyrighted data in an Amazon S3 bucket (to have more bandwidth available than my servers can handle) and provide access to this copyrighted data for a large number of authorized clients.
My problem is:
I create signed, expiring HTTPS URLs for these resources on the server side
these URLs are sent to clients over an HTTPS connection
when the clients use these URLs to download the content, the URLs can be seen in the clear by any man-in-the-middle
In detail, the URLs are created by a Ruby on Rails server using the fog gem.
The mobile clients I'm talking about are iOS devices.
The proxy I've used for my test is mitmproxy.
The URL I generated looked like this:
https://mybucket.s3.amazonaws.com/myFileKey?AWSAccessKeyId=AAA&Signature=BBB&Expires=CCC
I'm not a network or security expert, but I had found resources stating that nothing goes over an HTTPS connection in the clear (for instance, see Are HTTPS headers encrypted?). Is it a misconfiguration of my test that led to this clear-text URL? Any tip on what could have gone wrong here? Is there a real chance I can prevent S3 URLs from going over the network in the clear?
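For context, here is a minimal sketch of how such a URL can be generated; it uses Python and boto3 rather than the asker's Ruby/fog setup, the bucket and key names are just placeholders, and newer SDKs emit X-Amz-... query parameters rather than the AWSAccessKeyId/Signature/Expires form shown above:

import boto3

# Credentials are picked up from the environment or ~/.aws/credentials.
s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "mybucket", "Key": "myFileKey"},  # placeholders
    ExpiresIn=3600,  # the URL stops working after one hour
)
print(url)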
So firstly, when sending a request over SSL all parameters are encrypted. If you were to look at the traffic going through a normal proxy you wouldn't be able to read them.
However, many proxies allow interception of SSL data by creating dummy certificates. This is exactly what mitmproxy does. You may well have enabled this and not realised it (although you would have had to install the proxy's CA certificate on the device to do this).
The bottom line is that your AWS URLs could be easily intercepted by somebody looking to reverse engineer your app, either through a proxy or by tapping into the binary itself. However, this isn't a 'bad thing' per se: Amazon themselves know this happens, and that's why they're not sending the secret key directly in the URL itself, but using a signature.
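To see why, here is a rough sketch (in Python) of how that kind of legacy query-string signature is derived: it is an HMAC over a description of the request, keyed with the secret key, so someone who captures the URL gets the signature but not the key that produced it. This is a simplification, not an exact reimplementation of the AWS SDKs.

import base64, hashlib, hmac

def s3_query_signature(secret_key: str, bucket: str, key: str, expires: int) -> str:
    # String-to-sign for the legacy S3 query-string authentication scheme
    # (simplified; the real SDKs also handle headers, encoding, etc.).
    string_to_sign = "GET\n\n\n{}\n/{}/{}".format(expires, bucket, key)
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()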
I don't think this is a huge problem for you: after all, you're creating URLs that expire, so even if someone can get hold of them through a proxy they'll only be able to access the URL for as long as it is valid. To access your resources post-expiry would require direct access to your secret key. Now, it actually turns out this isn't impossible (since you've probably hard-coded it into your binary), but it's difficult enough that most users won't be bothering with it.
I'd encourage you to be realistic with your security and copyright prevention: when you've got client-side native code it's not a matter of if it gets broken but when.
Related
I am creating a Chromium/Electron based Mac app. The app is essentially a browser for my customers to use a web service that I have no control over. My requirement is that users of my app (who may have root access on their Mac) should not be able to view the URLs the app is visiting, and should be unable to gain access to the cookies the app is storing. Normally it is not hard to MITM yourself, or attach a debugger to an app and dump memory to see the URLs and cookies.
How can I prevent these types of leaks to the user? If it's impossible, it may be acceptable to make it very hard so that a very high level of sophistication is needed.
Your users have full control of their devices; it is not possible to reliably prevent them from proxying or exploring what your client-side app does. Obfuscation might seem like an option, but in the end the HTTP request that leaves your app traverses the whole OS through different layers, and your user can observe it there, if nowhere else then in the network packets (though usually much more easily).
The only way it is possible to prevent the user from knowing what's happening is if you have your own backend. The frontend app (Electron) would make a request to your backend, which in turn could make any request with any parameters without the user being aware.
Note though that your backend could still be used as a proxy or oracle just like if the user was connecting to the real service. This might or might not be a problem in your case, depending on what you actually want to achieve and why.
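As a minimal sketch of that backend idea (Flask is just an example here; the endpoint name, the upstream URL and the cookie are all hypothetical): the Electron app would only ever call your own /api/lookup, while the real service URL and cookies never leave the server.

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
UPSTREAM = "https://real-service.example.com"  # hidden from the end user

@app.route("/api/lookup")
def lookup():
    # The client only sees /api/lookup; the upstream URL, parameters and
    # session cookie stay on the server.
    upstream = requests.get(
        UPSTREAM + "/search",
        params={"q": request.args.get("q", "")},
        cookies={"session": "server-side-secret"},
    )
    return jsonify(upstream.json())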
The app is essentially a browser for my customers to use a web service that I have no control over. My requirement is that users of my app (who may have root access on their Mac) should not be able to view the URLs the app is visiting, and should be unable to gain access to the cookies the app is storing
Basically, you cannot (you could with the appropriate infrastructure, but you lack that infrastructure).
Network communications can be secured, to a point, using HTTPS (if you can't even use that, then you're completely out of luck - users wouldn't even need root access to the Mac to sniff traffic). You need to verify the server certificate to be sure you're connecting to the correct server.
One thing you might do - effective only against wannabes, I'm afraid - is first run a test API call against some random server and verify that the connection either fully succeeds, with the proper server identification and matching IP, if the server exists, or that it properly fails if the server never existed. Anything else would be a telltale that someone has taken over the network layer, and at that point you could connect to a different server, making different calls, and lament that the server isn't answering properly.
Strings in memory can be (air quote) protected (end air quote) by having them available only for the shortest time, and otherwise stored in a different form - you can have, for example, a URL and a random byte sequence of the same length, then store the sequence and the XOR of the URL and the sequence. You can then reconstruct the URL every time you need it, remembering to clear it from any app caches it might find its way into. Also, just for the lols, you can keep a baker's dozen of different URLs sprinkled in the clear throughout the code. A memory dump at that point will turn up nothing useful.
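A minimal sketch of that XOR idea (in Python for illustration; it is obfuscation only, since the URL still exists in memory briefly whenever it is reconstructed):

import os

def mask(url: str):
    pad = os.urandom(len(url))  # random bytes, same length as the URL
    masked = bytes(a ^ b for a, b in zip(url.encode(), pad))
    return pad, masked          # store these two, not the URL itself

def unmask(pad: bytes, masked: bytes) -> str:
    return bytes(a ^ b for a, b in zip(masked, pad)).decode()

pad, masked = mask("https://api.example.com/secret-endpoint")
url = unmask(pad, masked)       # reconstruct only when actually needed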
Files, of course, can be encrypted with any one of several schemes - the files residing on the same machine that has to know how to decode them makes all such schemes ultimately vulnerable, but there again, you can try and obfuscate things. I once stored some information in a ZIP file - but it was just the header of an encrypted ZIP file, with the appropriate directory entry block glued at the end. The data were actually just gzipped in the clear, there was no password whatsoever. The guys that tried to decode the file thought it was a plain encrypted Zip file with the extension changed, wasted a significant amount of time with several Zip cracking tools, and ended up owing me a beer.
More than that, there is not much that can realistically be done.
A big advantage would be in outsourcing the API calls and "cookie" maintenance to an external service that you control, e.g. on Amazon AWS or Azure or similar. Then you could employ all kinds of protection schemes (for example: all outbound API calls could be stored in an opaque object, timestamped, nonced, and encrypted with your server's public key, and the responses sent encrypted with your client's unique key). Since this is relatively simple and cost-effective, it would also be my recommendation.
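As a rough sketch of what such an opaque, timestamped, nonced request object could look like (Python with the cryptography package; the key file name, payload fields and API call shown are assumptions, and for payloads larger than a couple of hundred bytes you would switch to hybrid encryption):

import json, os, time
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

with open("server_public_key.pem", "rb") as f:      # your backend's public key
    server_key = serialization.load_pem_public_key(f.read())

payload = json.dumps({
    "ts": int(time.time()),          # timestamp, so stale requests can be rejected
    "nonce": os.urandom(16).hex(),   # nonce, so captured requests cannot be replayed
    "call": "GET /profile/42",       # the actual API call being requested
}).encode()

opaque = server_key.encrypt(
    payload,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
# Send `opaque` to your backend; only the server's private key can open it.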
We have a webpage which uses the SAPUI5 framework to build a SPA. The communication between the browser and the server uses HTTPS. The interaction to log into the page is the following:
The user opens the website by entering https://myserver.com in the browser
A login dialogue with two form fields for username and password is shown.
After entering username and password and pressing the login button,
an AJAX request is sent using GET to the URL: https://myusername:myPassword#myserver.com/foo/bar/metadata
According to my understanding, using GET to send sensitive data is never a good idea. But this answer to "HTTPS is the url string secure" says the following:
HTTPS establishes an underlying SSL connection before any HTTP data is
transferred. This ensures that all URL data (with the exception of
hostname, which is used to establish the connection) is carried solely
within this encrypted connection and is protected from
man-in-the-middle attacks in the same way that any HTTPS data is.
And in another answer in the same thread:
These fields [for example form field, query strings] are stripped off
of the URL when creating the routing information in the https packaging
process by the browser and are included in the encrypted data block.
The page data (form, text, and query string) are passed in the
encrypted block after the encryption methods are determined and the
handshake completes.
But it seems that there still might be security concerns when using GET:
the URL is stored in the logs on the server
leakage through browser history (also mentioned in the same thread)
Is this the case for URLs like these?
https://myusername:myPassword#myserver.com/foo/bar/metadata
// or
https://myserver.com/?user=myUsername&pass=MyPasswort
Additional questions on this topic:
Is passsing get variables over ssl secure
Is sending a password in json over https considered secure
How to send securely passwords via GET/POST?
On security.stackexchange there is additional information:
can urls be sniffed when using ssl
ssl with get and post
But in my opinion a few aspects are still not answered
Question
In my opinion the points mentioned are valid objections to using GET. Is that the case; is using GET for sending passwords a bad idea?
Are these the attack options, or are there more?
browser history
server logs (assuming that the url is stored in the logs unencrypted or encrypted)
referer information (if this is really the case)
Which attack options exist when sending sensitive data (a password) over HTTPS using GET?
Thanks
Sending any kind of sensitive data over GET is dangerous, even if it is HTTPS. This data might end up in log files at the server and will be included in the Referer header in links to, or includes from, other sites. It will also be saved in the browser's history, so an attacker might try to guess and verify the original contents of the link with an attack against the history.
Apart from that, you'd better ask that kind of question at security.stackexchange.com.
These two approaches are fundamentally different:
https://myusername:myPassword#myserver.com/foo/bar/metadata
https://myserver.com/?user=myUsername&pass=MyPasswort
myusername:myPassword# is the "User Information" (this form is actually deprecated in the latest URI RFC), whereas ?user=myUsername&pass=MyPasswort is part of the query.
If you look at this example from RFC 3986:
   foo://example.com:8042/over/there?name=ferret#nose
   \_/   \______________/\_________/ \_________/ \__/
    |           |            |            |        |
 scheme     authority       path        query   fragment
    |   _____________________|__
   / \ /                        \
   urn:example:animal:ferret:nose
myusername:myPassword# is part of the authority. In practice, HTTP (Basic) authentication headers will generally be used to convey this information instead. On the server side, headers are generally not logged (and if they were, whether the client entered them into their location bar or via an input dialog would make no difference). In general (although it's implementation-dependent), browsers don't store it in the location bar, or at least they remove the password. It appears that Firefox keeps the userinfo in the browser history, while Chrome doesn't (and IE doesn't really support it without a workaround).
In contrast, ?user=myUsername&pass=MyPasswort is the query, a much more integral part of the URI, and it is sent as part of the HTTP Request-URI. This will be in the browser's history and the server's logs. It will also be passed in the Referer header.
To put it simply, myusername:myPassword# is clearly designed to convey information that is potentially sensitive, and browsers are generally designed to handle this appropriately, whereas browsers can't guess which part of which queries are sensitive and which are not: expect information leakage there.
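In practical terms (a sketch with Python's requests library, reusing the host and path from the question), the userinfo form ends up as a Basic Authorization header rather than part of the request line:

import requests

resp = requests.get("https://myserver.com/foo/bar/metadata",
                    auth=("myusername", "myPassword"))
# The request line is just "GET /foo/bar/metadata HTTP/1.1"; the credentials
# travel in an "Authorization: Basic <base64 of user:password>" header instead.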
The referrer information will also generally not leak to third parties, since the Referer header coming from an HTTPS page is normally only sent with other requests over HTTPS to the same host. (Of course, if you have used https://myserver.com/?user=myUsername&pass=MyPasswort, this will be in the logs of that same host, but you're not making things much worse, since it stays in the same server's logs.)
This is specified in the HTTP specification (Section 15.1.3):
Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.
Although it is just a "SHOULD NOT", Internet Explorer, Chrome and Firefox seem to implement it this way. Whether this applies to HTTPS requests from one host to another depends on the browser and its version.
It is now possible to override this behaviour, as described in this question and this draft specification, using a <meta> header, but you wouldn't do that on a sensitive page that uses ?user=myUsername&pass=MyPasswort anyway.
Note that the rest of the HTTP specification (Section 15.1.3) is also relevant:
Authors of services which use the HTTP protocol SHOULD NOT use GET based forms for the submission of sensitive data, because this will cause this data to be encoded in the Request-URI. Many existing servers, proxies, and user agents will log the request URI in some place where it might be visible to third parties. Servers can use POST-based form submission instead
Using ?user=myUsername&pass=MyPasswort is exactly like using a GET based form and, while the Referer issue can be contained, the problems regarding logs and history remain.
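For comparison, a small sketch of the POST-based alternative (the /login endpoint and field names are hypothetical): the credentials travel in the request body, so they never appear in the Request-URI, the server's access log, the browser history or the Referer.

import requests

resp = requests.post("https://myserver.com/login",
                     data={"user": "myUsername", "pass": "MyPasswort"})
# Request line: "POST /login HTTP/1.1" -- no credentials in the URL; the
# form fields are carried in the (encrypted, unlogged-by-default) body.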
Let's assume that a user clicked a button and the following request was generated by the client browser.
https://www.site.com/?username=alice&password=b0b123!
HTTPS
First things first: HTTPS is not really the issue here, because using POST or GET does not matter from an attacker's perspective when the traffic is plain HTTP. Attackers can easily grab sensitive data from the query string or directly from the POST request body; therefore it does not make any difference.
Server Logs
We know that Apache, Nginx and other servers log every single HTTP request into a log file, which means the query string ( ?username=alice&password=b0b123! ) is going to be written into log files. This can be dangerous because your system administrator can access this data too and grab all user credentials. Another case could happen when your application server is compromised. I believe you are storing passwords hashed. If you use a strong hashing algorithm like SHA-256, your clients' passwords will be more secure against hackers. But hackers can read the log files directly and get the passwords as plain text with very basic shell scripts.
Referer Information
We assumed that the client opened the above link. When the client browser gets the HTML content and tries to parse it, it will see an image tag. These images can be hosted outside of your domain (postimage or similar services, or directly on a domain under the hacker's control). The browser makes an HTTP request in order to get the image. But the current URL is https://www.site.com/?username=alice&password=b0b123!, which is going to be the Referer information!
That means alice and her password will be passed to another domain and will be accessible directly from its web logs. This is a really important security issue.
This topic reminds me of Session Fixation vulnerabilities. Please read the following OWASP article about an almost identical security flaw with sessions: https://www.owasp.org/index.php/Session_fixation . It's worth reading.
The community has provided a broad view of the considerations, and the above stands with respect to the question. However, GET requests may, in general, need authentication. As observed above, sending the user name/password as part of the URL is never correct; however, that is typically not the way authentication information is handled. When a request for a resource is sent to the server, the server generally responds with a 401 and a WWW-Authenticate header, against which the client sends an Authorization header with the authentication information (in the Basic scheme). Now, this second request from the client can be a POST or a GET request; nothing prevents that. So, generally, it is not the request type but the mode of communicating the information that is in question.
Refer to http://en.wikipedia.org/wiki/Basic_access_authentication
Consider this:
https://www.example.com/login
Javascript within login page:
$.getJSON("/login?user=joeblow&pass=securepassword123");
What would the referer be now?
If you're concerned about security, an extra layer could be:
var a = btoa(user + ':' + pass);  // Base64-encode "user:pass" (btoa is built into browsers)
$.getJSON("/login?a=" + a);
Although not encrypted, at least the data is obscured from plain sight.
I'm writing an API that will be hosted without SSL support and I need a way to authenticate the requests. Each client would have a different ID, but if requests were authorised with that, anyone with a packet sniffer could forge requests. Is it possible to make a secure system WITHOUT relying on SSL?
(Some thoughts I had included OAuth, could that be implemented?)
Many thanks
Have each client cryptographically sign its requests with a client-specific key. Verify the signature on the server.
Using cryptography is pretty simple. The main challenge is setting up the clients' keys. It'll be hard to do that securely without using SSL. There's no information in the question about how you set up client IDs, so I don't know if it's secure enough to set up keys at that point as well.
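A minimal sketch of such request signing (Python; the header names and the timestamp check are my own choices, and distributing the per-client keys securely remains the hard part noted above):

import hashlib, hmac, time

def sign_request(client_id: str, client_key: bytes, method: str, path: str, body: bytes) -> dict:
    timestamp = str(int(time.time()))
    message = "\n".join([method, path, timestamp]).encode() + b"\n" + body
    signature = hmac.new(client_key, message, hashlib.sha256).hexdigest()
    return {"X-Client-Id": client_id,
            "X-Timestamp": timestamp,
            "X-Signature": signature}

# Server side: look up the key for X-Client-Id, recompute the HMAC over the
# received method/path/timestamp/body, compare with hmac.compare_digest(),
# and reject requests whose timestamp is too old, to limit replay attacks.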
It's also going to be a problem if you serve the client code without SSL.
But hey, it's just an API you're building. Maybe the code that interacts with it is served over HTTPS. Or maybe the code is stored locally on the client.
I feel like a lot of people are going to complain about this answer though.
A classic dumb thing to do is to pass security-related info via a GET on the query string, a la:
http://foo?SecretFilterUsedForSecurity=username
...any yahoo can just use Fiddler or somesuch to see what's going on....
How safe is it to pass this info to an app server (running SSL) via a POST, however? This link from the Fiddler website seems to indicate one can decrypt HTTPS traffic:
http://fiddler2.com/documentation/Configure-Fiddler/Tasks/DecryptHTTPS
So is this equally dumb if the goal is to make sure the client can't capture / read information you'd prefer them not to? It seems like it is.
Thanks.
Yes, it's "equally dumb". SSL only protects data from being read by a third party; it does not prevent the client (or the server) from reading it. If you do not trust the client to read some data, they should not be given access to that data, even just to make a POST.
Yes, any user can easily examine the data in a POST request, even over HTTPS/SSL, using software like Burp Suite, Webscarab, or Paros Proxy. These proxies will complete the SSL transaction with the server, and then pass on the data to the client. All data passing through the proxy is stored and is visible to the client.
Perhaps you are trying to store sensitive/secret data on the client side to lighten the load on your server? The way to do this so that the user cannot look at it (or change it), even with a proxy, is to encrypt it with a strong symmetric secret key known only to the server. If you want to be sure that the encrypted data is not tampered with, throw on an HMAC. Make sure you use a sufficiently random key and a strong encryption algorithm and key length, such as AES-256.
If you do this you can offload the storage of this data to the client but still have assurance that it has not changed since the server last saw it, and the client was not able to look at it.
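A small sketch of that approach (Python with the cryptography package; it uses AES-256-GCM, an authenticated mode that covers both the encryption and the tamper check that a separate HMAC would otherwise provide):

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # server-side secret, never sent to the client

def seal(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def unseal(blob: bytes) -> bytes:
    # Raises InvalidTag if the client tampered with the stored blob.
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)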
This depends on who you're trying to protect your data from, and how much control you have over the client software. Fundamentally, in any client-server application the client must know what it is sending to the server.
If implemented properly, SSL will prevent any intermediary sniffing or altering the traffic without modifying the client. However, this relies on the connection being encrypted with a valid certificate for the server domain, and on the client refusing to act if this is not the case. Given that condition, the connection can only be decrypted by someone holding the private key for that SSL certificate.
If your "client" is just a web browser, this means that third parties (e.g. at a public wi-fi location) can't intercept the data without alerting the person using the site that something is suspicious. However, it doesn't stop a user deliberately by-passing that prompt in their browser in order to sniff the traffic themselves.
If your client is a custom, binary, application, things are a little safer against "nosy" users: in order to inspect the traffic, they would have to modify the client to by-pass your certificate checks (e.g. by changing the target URL, or tricking the app to trust a forged certificate).
In short, nothing can completely stop a determined user sniffing their own traffic (although you can make it harder) but properly implemented SSL will stop third-parties intercepting traffic.
The other, more important reason not to put confidential information into the URL with GET requests is that the web server and any proxies on the way will log it. POST parameters don't get logged by default.
You don't want your passwords to show up in server logs - logs are usually protected much, much less than, for example, the password database itself.
My API (a desktop application) communicates with my web app using basic HTTP authentication over SSL (Basically I'm just using https instead of http in the requests). My API has implemented logic that makes sure that users don't send incorrect information, but the problem I have is that someone could bypass the API and use curl to potentially post incorrect data (obtaining credentials is trivial since signing up on my web app is free).
I have thought about the following options:
Duplicate the API's logic in the web app so that even if users try to cheat the system using curl or some other tool they are presented with the same conditions.
Implement a further authentication check to make sure only my API can communicate with my web app. (Perhaps SSL client certificates?).
Encrypt the data (Base 64?)
I know I'm being a little paranoid about users spoofing my web app with curl-like tools but I'd rather be safe than sorry. Duplicating the logic is really painful and I would rather not do that. I don't know much about SSL client certificates, can I use them in conjunction with basic HTTP authentication? Will they make my requests take longer to process? What other options do I have?
Thanks in advance.
SSL protects you from man-in-the-middle attacks, but not from attacks originated on the client side of the SSL connection. A client certificate built into your client API would allow you to identify that data was crafted by the client-side API, but will not help you figure out whether the client manually modified the data just before it got encrypted. A technically savvy user on the client end can always find a way to modify data by debugging through your client-side API. The best you can do is to put roadblocks in your client-side API, to make it harder to decipher. Validation on the server side is indeed the way to go.
Consider refactoring your validation code so that it can be used on both sides.
You must validate the data on the server side. You can throw nasty errors back across the connection if the server-side validation fails — that's OK, they're not supposed to be tripped — but if you don't, you are totally vulnerable. (Think about it this way: it's the server's logic that you totally control, therefore it is the server's logic that has to make the definitive decisions about the validity of communications.)
Using client certificates won't really protect you much additionally from users who have permission to use the API in the first place; if nothing else, they can take apart the code to extract the client certificate (and it has to be readable to your client code to be usable at all). Nor will adding extra encryption; it makes things much more difficult for you (more things to go wrong) without adding much safety over that already provided by that SSL connection. (The scenario where adding encryption helps is when the messages sent over HTTPS have to go via untrusted proxies.)
Base-64 is not encryption. It's just a way of encoding bytes as easier-to-handle characters.
I would agree in general with sinelaw's comment that such validations are usually better on the server side to avoid exactly the kind of issue you're running into (supporting multiple client types). That said, you may just not be in a position to move the logic, in which case you need to do something.
To me, your options are:
Client-side certificates, as you suggest -- you're basically authenticating that the client is who (or what, in your case) you expect it to be. I have worked with these before, and mutual authentication configuration can be confusing. I would not worry about the performance, as I think the first step is getting the behavior you want (correctness first). Anyway, in general, while this option is feasible, it can be exasperating to set up, depending on your web container.
Custom HTTP header in your desktop app, checking for its existence/value on the server side, or just leveraging the existing User-Agent header. Since you're encrypting the traffic, one should not be able to easily see the HTTP header you're sending, so you can set its name and value to whatever you want. Checking for that on the server side is akin to assuring you that the client sending the request is almost certainly using your desktop app.
I would personally go the custom header route. It may not be 100% perfect, but if you're interested in doing the simplest possible thing to mitigate the most risk, it strikes me as the best route. It's not a great option if you don't use HTTPS (because then anyone can see the header if they flip on a sniffer), but given that you do use HTTPS, it should work fine.
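A small sketch of the custom-header route (the header name and value are arbitrary choices, and the Flask endpoint stands in only as an example web app): the desktop client adds the header, the web app rejects requests that lack it, and over HTTPS the header is not visible on the wire.

import requests
from flask import Flask, abort, request

APP_HEADER = {"X-My-Desktop-App": "build-1234"}   # arbitrary name/value, known to both sides

# Desktop client side:
def call_api(payload):
    return requests.post("https://example.com/api/data", json=payload,
                         headers=APP_HEADER, auth=("user", "password"))

# Web app side:
app = Flask(__name__)

@app.route("/api/data", methods=["POST"])
def data():
    if request.headers.get("X-My-Desktop-App") != APP_HEADER["X-My-Desktop-App"]:
        abort(403)   # request did not come through the desktop app
    return {"ok": True}

As the answer notes, this isn't 100% perfect: anyone who extracts the header from the binary can still replay it with curl, but it raises the bar for casual misuse.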
BTW, I think you may be confusing a few things -- HTTPS is going to give you encryption, but it doesn't necessarily involve (client) authentication. Those are two different things, although they are often bundled together. I'm assuming you're using HTTPS with authentication of the actual user (basic auth or whatever).