Penetration Testing HTML Posting Issue - security

We are planning to go for a security testing certification. For that reason we are using the Paros tool to test our system.
The front end of the system is written in GWT and database connectivity is handled through Hibernate.
When we use this tool to test our application, the following behaviour occurs, which needs to be restricted.
The tool is able to see the data which is passed to the server. This is fine, but when we make any changes to the data through the tool, the modified values get written to the database. This is a big security issue.
Can someone guide me on this?

If you're still looking for a solution to this problem, you could use request signing. The reason I didn't mention it earlier is that the only time I had seen request signing, there were certificates involved, and it was mostly using the Web Services Security standard. The other time I recommended implementing request signing was for a mobile application - it's relatively easier to do there, since you can use certificates that are on the device to perform the signing, and the server can verify this signature (essentially, a public-key signature mechanism).
As you mention in the comments, there are multiple aspects to it - one is to prevent XSRF, which essentially means including a nonce to ensure that an attacker cannot replay requests, or craft requests that might harm an authenticated user. This nonce will have to come from the server, since anything that you create using JavaScript, the attacker can create too. This nonce will make sure that your request is time-specific, and that it cannot be replayed at a later point in time.
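To make the nonce idea concrete, here is a minimal sketch using the Java Servlet API (the class, attribute and parameter names are my own illustration, not something from the original answer):

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

// Minimal anti-XSRF nonce handling (illustrative sketch only).
public class CsrfNonce {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Called when rendering a page: generate a nonce and remember it in the session.
    public static String issueNonce(HttpSession session) {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        String nonce = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        session.setAttribute("csrfNonce", nonce);
        return nonce; // embed this in the page; the client echoes it back on the next request
    }

    // Called on every state-changing request: reject it unless the echoed nonce matches.
    public static boolean isValid(HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        String expected = session == null ? null : (String) session.getAttribute("csrfNonce");
        String provided = request.getParameter("csrfNonce");
        return expected != null && expected.equals(provided);
    }
}
```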
However, a nonce isn't going to stop attacks where a user is on a hostile network and an attacker is performing a MitM attack on all traffic. The attacker can still modify a request, and since the server has never seen that nonce before, it will accept the request as valid. To prevent this, you need two countermeasures in place - one, all traffic should go via SSL, and two, all requests must be signed so as to prevent tampering. The signature part is particularly hard, especially if you have to ensure that an attacker cannot perform the same signing. The examples I have seen of it involve certificate-level authentication for the webapp, and using these certificates to then perform the signing - which might be too stringent a requirement for the application that you seem to be developing. Other methodologies involve using something that the user has/knows - maybe a token, password, secret answer, etc. - that cannot be replicated by an attacker, and using that information to sign requests.
Here's an example of how you can do this via PHP. I don't know if this mechanism can be adapted for your purposes, though. OAuth might be another possible method, but since I've never seen an application do it that way, I am not very sure.
Sorry I don't have a specific methodology or examples of code for you to look at, but most implementations I've seen are only from a design standpoint, versus an actual code standpoint.

Are breaches of JWT-based servers more damaging?

UPDATE: I have concluded my research on this problem and posted a lengthy blog entry explaining my findings: The Unspoken Vulnerability of JWTs. I explain how the big push to use JWTs for local authentication is leaving out one crucial detail: that the signing key must be protected. I also explain that unless you're willing to go to great lengths to protect the keys, you're better off either delegating authentication via OAuth or using traditional session IDs.
I have seen much discussion of the security of JSON Web Tokens -- replay, revocation, data transparency, token-specified alg, token encryption, XSS, CSRF -- but I've not seen any assessment of the risk imposed by relying on a signing key.
If someone breaches a server and acquires a JWT signing key, it seems to me that this person could thereafter use the key to forge unexpired JWTs and secretly gain access. Of course, a server could look up each JWT on each request to confirm its validity, but servers use JWTs exactly so they don't have to do this. The server could confirm the IP address, but that also involves a lookup if the JWT is not to be trusted, and apparently doing this precludes reliable mobile access anyway.
Contrast this with a breach of a server based on session IDs. If this server is hashing passwords, the attacker would have to snag and use a session ID separately for each user before it expires. If the server were only storing hashes of the session IDs, the attacker would have to write to the server to ensure access. Regardless, it seems that the attacker is less advantaged.
I have found one architecture that uses JWTs without this disadvantage. A reverse proxy sits between untrusted clients externally and a backend collection of microservices internally, described here by Nordic APIs. A client acquires an opaque token from an authorization server and uses that token to communicate with the server app for all requests. For each request, the proxy translates the opaque token into a JWT and caches their association. The external world never provides JWTs, limiting the damage wrought by stealing keys (because the proxy goes to the authentication server to confirm the opaque tokens). However, this approach requires dereferencing each client token just as session IDs require per-request dereferencing, eliminating the benefit of JWTs for client requests. In this case, JWTs just allow services to pass user data among themselves without having to fully trust one another -- but I'm still trying to understand the value of the approach.
My concern appears to apply only to the use of JWTs as authentication tokens by untrusted clients. Yet JWTs are used by a number of high-profile APIs, including Google APIs. What am I missing? Maybe server breaches are rarely read-only? Are there ways to mitigate the risk?
I believe you're thinking about this the wrong way. Don't get me wrong, it's great that you're considering security; however, the way you're approaching it - double-checking things server-side, adding additional checks that defeat the objective of stateless sessions, etc. - appears to be a one-way street towards the end of your own sanity.
To sum up the two standard approaches:
JWTs are sessionless state objects, MAC'd by a secret key held server-side (see the sketch after this summary).
Traditional Session Identifiers are stored either in memory or in a database server-side, and as you say are often hashed to prevent sessions from being hijacked should this data be leaked.
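To illustrate the first approach, here is a minimal HS256 sketch using only the JDK (illustrative only; in practice you would use a vetted JWT library). It also shows why a leaked key is fatal: anyone holding it can mint tokens the server will verify as genuine.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Minimal HS256 JWT signing sketch (illustrative; prefer a vetted JWT library in practice).
public class JwtSketch {

    public static String sign(String payloadJson, byte[] secretKey) throws Exception {
        Base64.Encoder b64 = Base64.getUrlEncoder().withoutPadding();
        String header = b64.encodeToString("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = b64.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
        String signingInput = header + "." + payload;

        // The "signature" is just an HMAC over header.payload with the server-held secret.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
        String signature = b64.encodeToString(mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));

        return signingInput + "." + signature;
    }
}
```

With the secret in hand, an attacker can simply call sign() with whatever claims they like (for example a far-future expiry), which is exactly the breach scenario the question describes.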
You are also right that write access is often harder for an attacker to achieve. The reason is that database data is often extracted from a target system via a SQL injection exploit. This almost always provides read access to data, but it is harder to insert data using this technique, although not impossible (some exploits actually result in full root access of the target machine being achieved).
If you have a vulnerability that allows access to the key when using JWTs or one that allows database tables to be written to when using session identifiers, then it's game over - you are compromised because your user sessions can be hijacked.
So it is not necessarily more damaging; it all depends on the depth of the vulnerability.
Double-check that the security of your JWT keys aligns with your risk appetite:
Where are they stored?
Who has access?
Where are backups stored?
Are different keys used in pre-production and production deployments of your app?
The ways to mitigate are as good practice dictates with any web app:
Regular security assessments and penetration testing.
Security code reviews.
Intrusion detection and prevention (IDS/IPS).
WAF.
These will help you evaluate where your real risks lie. It is pointless to concentrate so heavily on one particular aspect of your application, because this will lead to the neglect of others, which may well be a higher risk to your business model. JWTs aren't dangerous and don't necessarily carry more risk than other components of your system; however, if you've chosen to use them you should make sure you're using them appropriately. Whether you are or not comes down to the particular context of your application, and that is difficult to assess in a general sense, so I hope my answer guides you in the right direction.
When an attacker is able to get hold of the signing key in a JWT-based system, that means he is able to access the server backend itself. In that case all hope is lost. In comparison, when the same attack succeeds against a session-based system, the attacker would be able to intercept username/password authentication requests to the backend, and/or generate session IDs himself, and/or change the validation routines required to validate the session IDs, and/or modify the data to which the session ID points. Any security mechanism used to mitigate this works as well for session systems as for JWT systems.

Secure web programming - Best practices in authenticating users

Getting into web development and would like to become good at making secure websites. Any general tips/answers to any of the below would be greatly appreciated.
So got some questions on the authentication side of things:
How should the password typed on the client be encoded and sent to the server - assuming HTTPS is already in use? I have heard some suggest that only the hash is sent, for security, for example. Should it be encrypted client-side - how?
Similar but on the server side: how should the passwords be saved? Plain text, hash, etc.? Should they be encrypted - how?
Also, is there a kind of architecture that can protect the passwords in such a way that if one password is compromised, not everyone else's is? For example, if all passwords are stored in one file then access to only this one file would compromise every user on the system.
If only hashes are to be stored - how should collisions be handled?
Once authenticated, should you just rely on session IDs to maintain authenticated status throughout? I have read tips on reducing session hijacking and was therefore wondering whether it is a good idea/the only idea in the first place for keeping users authenticated.
Is there a safe way to provide an autoLogIn feature so that the browser remembers the password - similar to social network/web-email clients?
-------------
Extra - preventing attacks
Are there any tools or even just some common practices out there that must be applied to the username/password entries provided to prevent injection or any other kind of attacks?
If I use a Java development environment (using PlayFrameWork btw) how likely is it in general that attackers could include harmful code snippets of any kind in any form entries?
PS
As mentioned, I will probably be using the Java PlayFrameWork to code the website - can you suggest anything I should take into account for this?
Any tips on design patterns that must be followed for security purposes would be helpful.
Many Thanks
PPS
You could suggest passing the job on to an expert but if possible I would like to have some experience coding it myself. I hope that this is a viable option?
Will probably like to set up an e-commerce system FYI.
How should the password typed on the client be encoded and sent to the server - assuming HTTPS is already in use? I have heard some suggest that only the hash is sent, for security, for example. Should it be encrypted client-side - how?
It should not be sent to the server in a way that can be recovered. The problem with SSL/TLS and PKI is that the {username, password} (or {username, hash(password)}) is presented to nearly any server that answers with a certificate. That server could be good or bad.
The problem here is that channel setup is disjoint from user authentication, and web developers and server administrators then do dumb things like put a plain-text password on the wire in a basic_auth scheme.
It's better to integrate SSL/TLS channel setup with authentication. That's called channel binding. It provides mutual authentication and does not do dumb things like put a {username, password} on the wire where it can be easily recovered.
SSL/TLS offers nearly 80 cipher suites that don't do the dumb {username, password} on the wire. They are Preshared Key (PSK) and Secure Remote Password (SRP). Even if a bad guy answers (i.e., controls the server), the attacker cannot learn the password because it's not put on the wire for recovery. Instead, he will have to break AES (for PSK) or solve the discrete log problem (for SRP).
All of this is covered in great detail in Peter Gutmann's Engineering Security book.
Similar but on the server side: how should the passwords be saved? Plain text, hash, etc.? Should they be encrypted - how?
See the Secure Password Storage Cheat Sheet and Secure Password Storage paper John Steven wrote for OWASP. It takes you through the entire threat model, and explains why things are done in particular ways.
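As a minimal illustration (the iteration count and output length below are assumptions; the cheat sheet covers how to choose them), salted password hashing with the JDK's built-in PBKDF2 looks roughly like this:

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Minimal salted password-hashing sketch (illustrative; parameters are assumptions).
public class PasswordStorage {

    public static String hash(char[] password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        // PBKDF2 with a per-user random salt and a deliberately slow iteration count.
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        byte[] derived = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                         .generateSecret(spec)
                                         .getEncoded();

        // Store everything needed to verify later: parameters, salt, and derived key.
        Base64.Encoder b64 = Base64.getEncoder();
        return "pbkdf2-sha256:100000:" + b64.encodeToString(salt) + ":" + b64.encodeToString(derived);
    }
}
```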
Once authenticated should you just rely on session IDs to maintain authenticated status throughout?
Yes, but authorization is different than authentication.
Authentication is a "coarse grained" entitlement. It asks the question, "can a user use this application?". Authorization is a "fine grained" entitlement. It answers the question, "can a user access this resource?".
Is there a safe way to provide an autoLogIn feature so that the browser remembers the password - similar to social network/web-email clients
It depends on what you consider safe and what's in the threat model. If your threat model does not include an attacker who has physical access to a user's computer or device, then it's probably "safe" by most standards.
If the attacker has access to a computer or device, and the user does not protect it with a password or PIN, then it's probably not considered "safe".
Are there any tools or even just some common practices out there that must be applied to the username/password entries provided to prevent injection or any other kind of attacks?
Yes, user login forms suffer injection attacks. So you can perform some filtering on the way in, but you must perform HTML encoding on the output.
It's not just usernames/passwords and logins. Nearly everything should have some input filtering, and it must have output encoding in case it's malicious.
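As a minimal illustration of output encoding for the HTML body context (hand-rolled here for brevity; a maintained library such as the OWASP Java Encoder is preferable):

```java
// Minimal HTML-body output encoding sketch (illustrative; prefer a maintained encoder library).
public final class HtmlEncode {

    public static String forHtml(String untrusted) {
        StringBuilder out = new StringBuilder(untrusted.length());
        for (char c : untrusted.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```

For example, write HtmlEncode.forHtml(username) into the page rather than the raw value.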
You should definitely spend some time on the OWASP web site. If you have a local chapter, you might even consider attending meetings. You will learn a lot, and meet a lot of awesome people.
If I use a Java development environment (using PlayFrameWork btw) how likely is it in general that attackers could include harmful code snippets of any kind in any form entries?
Java is a hacker's delight. Quality and security have really dropped since Oracle bought it from Sun. The more paranoid (security-conscious?) folks recommend not signing any Java code because the sandbox is so broken; that keeps a legitimate application properly sandboxed. From http://threatpost.com/javas-losing-security-legacy:
...
“The sandbox is a huge problem for Oracle,” Jongerius told Threatpost. “Everyone is breaking in. Their solution is to code-sign and get out of the sandbox. But then, you have full permission to the machine. It doesn’t make sense.”
It's too bad the bad guys didn't get the memo. They sign their malware and break out of the sandbox.
Any tips on design patterns that must be followed for security purposes would be helpful.
You also have web server configurations, like HttpOnly and Secure cookie flags, HTTP Strict Transport Security (HSTS), Content Security Policy (CSP), Suhosin (hardened PHP), SSL/TLS algorithm selection, and the like.
There's a lot to it, and you will need to find the appropriate hardening guide.
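As one concrete example, a servlet filter can apply several of those headers in one place (the values below are assumptions; take them from the hardening guide for your stack, and note that Play has its own configuration mechanism for the same headers):

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Minimal hardening-header filter (header values are illustrative, not prescriptive).
public class SecurityHeadersFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        response.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
        response.setHeader("Content-Security-Policy", "default-src 'self'");
        response.setHeader("X-Content-Type-Options", "nosniff");
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}
```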

What types of threats are RESTful services susceptible to?

What types of vulnerabilities or threats are RESTful web services susceptible to?
I work on a project which exposes a lot of these services; however, there is a lack of any validation or security.
In a short non-exhaustive list, there are a few things that you should keep in mind:
Do not abuse GET requests when working with sensitive data
When passing sensitive data, always use POST/PUT/DELETE over a secure connection (HTTPS). Of course, a proper SSL certificate and configuration are needed to ensure the communication cannot be decoded by third parties.
For RESTful authentication, avoid passing credentials on each request.
Do not get tempted to use HTTP error codes for authentication errors
When authenticating, try to make the service behave the same way regardless of whether authentication fails or succeeds. Always return the same HTTP status code (200 OK, but with a different body depending on the authentication result). This may confuse a potential attacker as to whether their technique is working or whether they have found a weak spot; they must now learn how to interpret the response body of your API. Giving away too much information in the HTTP response helps them orient themselves faster. Leave HTTP error codes for their true purpose -- to signal HTTP communication issues. This is also good for developers who integrate with the API, as the behaviour is less ambiguous.
Allow limited attempts to authenticate
Reject connections from clients who perform too many unsuccessful authentication attempts, for a limited amount of time. Some systems will prevent you from authenticating again for 10 or 30 minutes if you were unable to log in after a small number of attempts. This could reduce the risk of a DDoS attack, and could significantly cripple any brute-force password-guessing attempts.
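A minimal in-memory sketch of such a lockout (illustrative; the thresholds are arbitrary, and a real deployment would use a shared, expiring store keyed by account as well as source address):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal login-throttling sketch (illustrative; real systems need a shared, expiring store).
public class LoginThrottle {

    private static final int MAX_FAILURES = 5;
    private static final long LOCKOUT_MILLIS = 30 * 60 * 1000L; // 30 minutes

    private final Map<String, Integer> failures = new ConcurrentHashMap<>();
    private final Map<String, Long> lockedUntil = new ConcurrentHashMap<>();

    public boolean isLocked(String clientId) {
        Long until = lockedUntil.get(clientId);
        return until != null && until > System.currentTimeMillis();
    }

    public void recordFailure(String clientId) {
        int count = failures.merge(clientId, 1, Integer::sum);
        if (count >= MAX_FAILURES) {
            lockedUntil.put(clientId, System.currentTimeMillis() + LOCKOUT_MILLIS);
            failures.remove(clientId);
        }
    }

    public void recordSuccess(String clientId) {
        failures.remove(clientId);
        lockedUntil.remove(clientId);
    }
}
```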
Password validation time matters
If hashing passwords on the server side, compare the incoming password's hash with the stored one using an algorithm that takes near-equal time regardless of whether the passwords match. Add custom delays when necessary. This helps prevent a timing attack - wrong passwords are usually rejected faster by most algorithms, and an attacker may use the response-time differences to determine whether they are getting closer to guessing the password. Combined with limited authentication attempts, this can be prevented.
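A minimal sketch of such a comparison using the JDK (MessageDigest.isEqual is documented to run in constant time on current JDKs); comparing derived hashes rather than raw strings also keeps the compared inputs fixed-length:

```java
import java.security.MessageDigest;

// Compare password hashes without leaking timing information (illustrative sketch).
public class ConstantTimeCompare {

    public static boolean matches(byte[] storedHash, byte[] computedHash) {
        // MessageDigest.isEqual runs in time independent of how many bytes match.
        return MessageDigest.isEqual(storedHash, computedHash);
    }
}
```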
CORS
By using CORS you also limit which origins are allowed to call your API from within a browser. This can be a real security improvement, as an attacker can no longer attack your RESTful API from arbitrary pages running in victims' browsers, but instead has to find ways of bypassing CORS. The latter can further be prevented by using strict enough CORS rules and having good security on the servers hosting the CORS-allowed URLs, so that an attacker cannot compromise a CORS-allowed machine that can access the API directly.
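A minimal sketch of a strict CORS whitelist on the server side (the allowed origin is a placeholder; remember CORS is enforced by browsers, not by non-browser clients such as curl):

```java
import java.util.Set;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal strict-CORS sketch (illustrative; "https://app.example.com" is a placeholder origin).
public class CorsPolicy {

    private static final Set<String> ALLOWED_ORIGINS = Set.of("https://app.example.com");

    public static void apply(HttpServletRequest request, HttpServletResponse response) {
        String origin = request.getHeader("Origin");
        if (origin != null && ALLOWED_ORIGINS.contains(origin)) {
            // Echo back only whitelisted origins; never use "*" for a credentialed API.
            response.setHeader("Access-Control-Allow-Origin", origin);
            response.setHeader("Vary", "Origin");
        }
    }
}
```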
Of course, there are other things that must be kept in mind, these are the most important I can come up with now.
You should also know that the request/response are still visible in the Network tab of Firebug (or whatever browser debugger you are using), or to any attached traffic listener, so anyone using a web page that calls the REST service can at least see the URL, the request data, and the response.
Pass and return only the data needed for the page/app to display and work correctly; never return sensitive info like passwords or other sensitive user data.
Like many services, RESTful services can be the target of a DDoS attack; still, the latter aims at shutting down the service rather than compromising data or accomplishing an authorization/authentication breach.

Security advice: SSL and API access

My API (a desktop application) communicates with my web app using basic HTTP authentication over SSL (Basically I'm just using https instead of http in the requests). My API has implemented logic that makes sure that users don't send incorrect information, but the problem I have is that someone could bypass the API and use curl to potentially post incorrect data (obtaining credentials is trivial since signing up on my web app is free).
I have thought about the following options:
Duplicate the API's logic in the web app so that even if users try to cheat the system using curl or some other tool they are presented with the same conditions.
Implement a further authentication check to make sure only my API can communicate with my web app. (Perhaps SSL client certificates?).
Encrypt the data (Base 64?)
I know I'm being a little paranoid about users spoofing my web app with curl-like tools but I'd rather be safe than sorry. Duplicating the logic is really painful and I would rather not do that. I don't know much about SSL client certificates, can I use them in conjunction with basic HTTP authentication? Will they make my requests take longer to process? What other options do I have?
Thanks in advance.
SSL protects you from man-in-the-middle attacks, but not from attacks originating on the client side of the SSL connection. A client certificate built into your client API would allow you to identify that the data was crafted by the client-side API, but will not help you figure out whether the client manually modified the data just before it got encrypted. A technically savvy user on the client end can always find a way to modify data by debugging through your client-side API. The best you can do is put roadblocks in your client-side API to make it harder to decipher. Validation on the server side is indeed the way to go.
Consider refactoring your validation code so that it can be used on both sides.
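For instance, the rules can live in a plain, dependency-free class that both the desktop client (for fast feedback) and the web app (authoritatively) can call; the field names and limits below are made up for illustration:

```java
// Validation factored into a dependency-free class so the same rules can run
// in the desktop client and on the server. Field names and limits are illustrative.
public final class OrderValidator {

    public static void validate(String customerEmail, int quantity) {
        if (customerEmail == null || !customerEmail.matches("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$")) {
            throw new IllegalArgumentException("invalid email address");
        }
        if (quantity < 1 || quantity > 100) {
            throw new IllegalArgumentException("quantity must be between 1 and 100");
        }
    }
}
```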
You must validate the data on the server side. You can throw nasty errors back across the connection if the server-side validation fails — that's OK, they're not supposed to be tripped — but if you don't, you are totally vulnerable. (Think about it this way: it's the server's logic that you totally control, therefore it is the server's logic that has to make the definitive decisions about the validity of communications.)
Using client certificates won't really protect you much additionally from users who have permission to use the API in the first place; if nothing else, they can take apart the code to extract the client certificate (and it has to be readable to your client code to be usable at all). Nor will adding extra encryption; it makes things much more difficult for you (more things to go wrong) without adding much safety over that already provided by that SSL connection. (The scenario where adding encryption helps is when the messages sent over HTTPS have to go via untrusted proxies.)
Base-64 is not encryption. It's just a way of encoding bytes as easier-to-handle characters.
I would agree in general with sinelaw's comment that such validations are usually better on the server side to avoid exactly the kind of issue you're running into (supporting multiple client types). That said, you may just not be in a position to move the logic, in which case you need to do something.
To me, your options are:
Client-side certificates, as you suggest -- you're basically authenticating that the client is who (or what, in your case) you expect it to be. I have worked with these before, and mutual-authentication configuration can be confusing. I would not worry about the performance, as I think the first step is getting the behavior you want (correctness first). Anyway, in general, while this option is feasible, it can be exasperating to set up, depending on your web container.
A custom HTTP header in your desktop app, checking for its existence/value on the server side, or just leveraging the existing User-Agent header. Since you're encrypting the traffic, one should not be able to easily see the HTTP header you're sending, so you can set its name and value to whatever you want. Checking for that on the server side is akin to assuring you that the client sending the request is almost certainly using your desktop app.
I would personally go the custom header route. It may not be 100% perfect, but if you're interested in doing the simplest possible thing to mitigate the most risk, it strikes me as the best route. It's not a great option if you don't use HTTPS (because then anyone can see the header if they flip on a sniffer), but given that you do use HTTPS, it should work fine.
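A minimal sketch of the custom-header route (the header name and value are placeholders; as noted, this only raises the bar, it is not real authentication):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.servlet.http.HttpServletRequest;

// Custom-header check sketch (illustrative; the header name and value are placeholders).
public class ClientMarker {

    private static final String HEADER = "X-My-Desktop-Client";
    private static final String EXPECTED = "desktop-v1";

    // Desktop client side: tag every HTTPS request with the marker header.
    public static HttpURLConnection open(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestProperty(HEADER, EXPECTED);
        return conn;
    }

    // Server side: reject requests that don't carry the marker.
    public static boolean isFromDesktopClient(HttpServletRequest request) {
        return EXPECTED.equals(request.getHeader(HEADER));
    }
}
```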
BTW, I think you may be confusing a few things -- HTTPS is going to give you encryption, but it doesn't necessarily involve (client) authentication. Those are two different things, although they are often bundled together. I'm assuming you're using HTTPS with authentication of the actual user (basic auth or whatever).

I need resources for API security basics. Any suggestions?

I've done a little googling but have been a bit overwhelmed by the amount of information. Until now, I've been considering asking for a valid md5 hash for every API call but I realized that it wouldn't be a difficult task to hijack such a system. Would you guys be kind enough to provide me with a few links that might help me in my search? Thanks.
First, consider OAuth. It's somewhat of a standard for web-based APIs nowadays.
Second, some other potential resources -
A couple of decent blog entries:
http://blog.sonoasystems.com/detail/dont_roll_your_own_api_security_recommendations1/
http://blog.sonoasystems.com/detail/more_api_security_choices_oauth_ssl_saml_and_rolling_your_own/
A previous question:
Good approach for a web API token scheme?
I'd like to add some clarifying information to this question. The "use OAuth" answer is correct, but also loaded (given the spec is quite long and people who aren't familiar with it typically want to kill themselves after seeing it).
I wrote up a story-style tutorial on how to go from no security to HMAC-based security when designing a secure REST API here:
http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/
This ends up being basically what is known as "2-legged OAuth". Because OAuth was originally intended for verifying client applications, the full flow has three parties: the authenticating service, the user staring at the screen, and the service that wants to use the client's credentials.
2-legged OAuth (and what I outline in depth in that article) is intended for service APIs to authenticate between each other. For example, this is the approach Amazon Web Services uses for all their API calls.
The gist is that with any request over HTTP you have to consider the attack vector where some malicious man-in-the-middle is recording and replaying or changing your requests.
For example, you issue a POST to /user/create with name 'bob', well the man-in-the-middle can issue a POST to /user/delete with name 'bob' just to be nasty.
The client and server need some way to trust each other and the only way that can happen is via public/private keys.
You can't just pass the public/private keys back and forth, NOR can you simply provide a unique token signed with the private key (which is typically what most people do and think makes them safe); while that will identify the original request as coming from the real client, it still leaves the arguments to the command open to change.
For example, if I send:
/chargeCC?user=bob&amt=100.00&key=kjDSLKjdasdmiUDSkjh
where the key is my public key signed by my private key, a man-in-the-middle can still intercept this call and re-submit it to the server with an "amt" value of "10000.00" instead.
The key is that you have to include ALL the parameters you send in the hash calculation, so when the server gets it, it re-vets all the values by recalculating the same hash on its side.
REMINDER: Only the client and server know the private key.
This style of verification is called an "HMAC"; it is a checksum verifying the contents of the request.
Because hash generation is SO touchy and must be done EXACTLY the same on both the client and server in order to get the same hash, there are super-strict rules on exactly how all the values should be combined.
For example, these two lines produce VERY different hashes when you try to sign them with SHA-1:
/chargeCC&user=bob&amt=100
/chargeCC&amt=100&user=bob
A lot of the OAuth spec is spent describing that exact method of combination in excruciating detail, using terminology like "natural byte ordering" and other non-human-readable garbage.
It is important though, because if you get that combination of values wrong, the client and server cannot correctly vet each other's requests.
You also can't take shortcuts and just concatenate everything into a huge String; Amazon tried this with AWS Signature Version 1 and it turned out to be flawed.
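A minimal sketch of the overall idea (deliberately simplified: real schemes such as OAuth 1.0a or AWS signing also percent-encode values, include the HTTP method and host, and add a timestamp/nonce to block replays):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;
import java.util.TreeMap;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Simplified HMAC request-signing sketch (illustrative; see the caveats in the lead-in).
public class RequestSigner {

    public static String sign(String path, Map<String, String> params, byte[] sharedSecret)
            throws Exception {
        // Canonicalize: sort parameters by name so client and server build the identical string.
        StringBuilder canonical = new StringBuilder(path);
        for (Map.Entry<String, String> e : new TreeMap<>(params).entrySet()) {
            canonical.append('&').append(e.getKey()).append('=').append(e.getValue());
        }

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedSecret, "HmacSHA256"));
        byte[] sig = mac.doFinal(canonical.toString().getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }
}
```

The server rebuilds the same canonical string from the parameters it actually received and recomputes the HMAC with its copy of the secret; if a man-in-the-middle changed "amt", the signatures no longer match and the request is rejected.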
I hope all of that helps, feel free to ask questions if you are stuck.
