I would like to know whether intercepting and modifying requests with Burp Suite before they reach the server is considered a vulnerability.
In our web-based and mobile applications, adequate security measures are in place to prevent replay attacks, protect data integrity, and so on.
The applications are now being evaluated by one of the application security teams. They use Burp Suite to intercept the request payload and have raised a few security vulnerabilities that aren't reproducible without Burp Suite.
So is using such a tool still a valid test case, and are its findings considered real vulnerabilities?
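For reference, the tampering in question amounts to sending a modified payload directly to the server; a rough sketch of an equivalent request (the endpoint and fields below are placeholders, not our real API):

```typescript
// Placeholder endpoint and fields -- not our real API. The point is that
// the same tampered payload Burp Suite produces can be sent by any client.
async function sendTamperedPayload(): Promise<void> {
  const response = await fetch("https://app.example.com/api/transfer", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // "amount" differs from what the legitimate UI would have submitted.
    body: JSON.stringify({ accountId: "12345", amount: 999999 }),
  });
  console.log(response.status, await response.text());
}

sendTamperedPayload();
```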
I noticed that the web tool project I'm working on has a potential vulnerability: a well-forged HTTP form request can make the internal server execute arbitrary shell commands.
However, the web tool page is only accessible to my company's internal network and users. An attacker could still build a malicious page that forges the request and trick one of our internal users into opening it, but it seems difficult for an attacker to figure out a well-forged HTTP request without direct access to the page. In that case, is this still a serious vulnerability that needs to be fixed?
Sorry I'm not very familiar with security. Please let me know if further information is needed.
This is usually a judgement call and handled by company policy.
If your company is small, the entire staff can be trusted, and it is certain that the application will never be used in a public setting, you may choose not to address this issue if it is hard to fix.
If any of these conditions does not hold, then you should fix the vulnerability. Oftentimes a formerly internal application becomes public and its vulnerabilities are forgotten. Also consider that an insider may be laid off and use the vulnerability for revenge.
It is always safer to fix the vulnerability. Make the tradeoff wisely.
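As a sketch of the kind of fix involved (the handler below and the use of Node's child_process are assumptions about the asker's stack, not known details), the root cause is request input flowing into a shell command, so the remedy is the same regardless of who can reach the page:

```typescript
import { execFile } from "node:child_process";

// Vulnerable pattern: request input interpolated into a shell string, e.g.
//   exec(`convert ${userFile} out.png`)   // userFile comes from the form
// A crafted userFile such as "a.png; rm -rf /" runs arbitrary commands.

// Safer pattern: validate against a whitelist and avoid the shell entirely.
function convertUpload(userFile: string): void {
  if (!/^[\w.-]+$/.test(userFile)) {
    throw new Error("Invalid file name");
  }
  // Arguments are passed as a list, so they are never parsed by a shell.
  execFile("convert", [userFile, "out.png"], (err, stdout) => {
    if (err) throw err;
    console.log(stdout);
  });
}
```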
I'm currently in the process of building a Browser Helper Object (BHO).
One of the things the BHO has to do is make cross-site requests that bypass the cross-domain policy.
For this, I'm exposing a __MyBHONameSpace.Request method that uses WebClient internally.
However, it has occurred to me that anyone that is using my BHO now has a CSRF vulnerability everywhere as a smart attacker can now make arbitrary requests from my clients' computers.
Is there any clever way to mitigate this?
The only way to fully protect against such attacks is to separate the execution context of the page's JavaScript and your extension's JavaScript code.
When I researched this issue, I found that Internet Explorer does provide a way to create such a context, namely via IActiveScript. I have not implemented this solution though, for the following reasons:
Lack of documentation / examples that combine IActiveScript with BHOs.
Lack of certainty about the future (e.g. https://stackoverflow.com/a/17581825).
Possible performance implications (IE is not known for its superb performance; how would two instances of a JavaScript engine per page affect browsing speed?).
Cost of maintenance: I already had an existing solution which was working well, based on very reasonable assumptions. Because I'm not certain whether the alternative method (using IActiveScript) would be bug-free and future-proof (see 2), I decided to drop the idea.
What I have done instead is:
Accept that very determined attackers will be able to access (part of) my extension's functionality.
@Benjamin asked whether access to a persistent storage API would pose a threat to the user's privacy. I consider this risk acceptable, because a storage quota is enforced, all stored data is validated before it's used, and it doesn't give an attacker any tools they don't already have. If attackers want to track the user via persistent storage, they can just use localStorage on some domain and communicate with that domain via an <iframe> using the postMessage API. This method works across all browsers, not just IE with my BHO installed, so it is unlikely that any attacker would dedicate time to reverse-engineering my BHO in order to use the API when there's a method that already works in all modern browsers (IE8+).
Restrict the functionality of the extension:
The extension should only be activated on pages where it needs to be activated. This greatly reduces the attack surface, because an attacker would have to get their code running on https://trusted.example.com and then trick the user into visiting that page, instead of being able to abuse the extension from any arbitrary site.
Create and enforce whitelisted URLs for cross-domain access at the extension level (in native code (e.g. C++) inside the BHO).
For sensitive APIs, limit their exposure to a very small set of trusted URLs (again, not in JavaScript, but in native code).
The part of the extension that handles the cross-domain functionality does not share any state with Internet Explorer. Cookies and authorization headers are stripped from the request and response. So, even if an attacker manages to get access to my API, they cannot impersonate the user at some other website, because of missing session information.
This does not protect against sites that use the requester's IP address for authentication (such as intranet sites or routers), but this attack vector is already covered by a correct implementation of the whitelist (see step 2).
"Enforce in native code" does not mean "hard-code in native code". You can still serve updates that include metadata and the JavaScript code. MSVC++ (2010) supports ECMAScript-style regular expressions <regex>, which makes implementing a regex-based whitelist quite easy.
If you want to go ahead and use IActiveScript, you can find sample code in the source code of ceee, Gears (both discontinued) or any other project that attempts to enhance the scripting environment of IE.
Here are my requirements:
Usable by any mobile application I'm developing
I'm developing the mobile application, therefore I can implement any securing strategies.
Cacheable using classical HTTP Cache strategy
I'm using Varnish with a very basic configuration and it works well
Not publicly available
I don't want other people to be able to consume my API
Solutions I've thought of:
Use HTTPS, but that doesn't cover the last requirement, because proxying requests from the application will reveal the API key used.
Is there any way to do this, for example using something like a private/public key pair?
Something that fits well with HTTP, Apache, and Varnish.
There is no way to ensure that the other end of a network link is your application. This is not a solvable problem. You can obfuscate things with certificates, keys, secrets, whatever. But all of these can be reverse-engineered by the end user because they have access to the application. It's ok to use a little obfuscation like certificates or the like, but it cannot be made secure. Your server must assume that anyone connecting to it is hostile, and behave accordingly.
It is possible to authenticate users, since they can have accounts. So you can certainly ensure that only valid users may use your service. But you cannot ensure that they only use your application. If your current architecture requires that, you must redesign. It is not solvable, and most certainly not solvable on common mobile platforms.
If you can integrate a piece of secure hardware, such as a smartcard, then it is possible to improve security in that you can be more certain that the human at the other end is actually a customer, but even that does not guarantee that your application is the one connecting to the server, only that the smartcard is available to the application that is connecting.
For more on this subject, see Secure https encryption for iPhone app to webpage.
Even though it's true that there's basically no way to guarantee your API is consumed only by your clients unless you use a hardware secure element to store the secret (which would imply making your own phone from scratch; any external device could be used by a non-official client app as well), there are some fairly effective things you can do to obscure the API.

To begin with, use HTTPS; that's a given. But the key here is to do certificate pinning in your app. Certificate pinning is a technique in which you store the valid public key certificate of the HTTPS server you are trying to connect to. Then on every connection you validate that it's an HTTPS connection (don't accept downgrade attacks) and, more importantly, that it's exactly the same certificate. This way you prevent a network device in your path from performing a man-in-the-middle attack, thus ensuring no one is listening in on your conversation with the server.

By doing this, and being a bit clever about the way you store the API's parameters and general design in your application (see code obfuscation, particularly how to obfuscate string constants), you can be fairly sure you are the only one talking to your server. Of course, security is only a function of how badly someone wants to break into your stuff. Doing this doesn't prevent an experienced reverse engineer with time to spare from trying (and possibly succeeding) to decompile your source code and find what they are looking for. But doing all of this forces them to look at the binary, which is a couple of orders of magnitude more difficult than just performing a man-in-the-middle attack.

This is famously related to the latest Snapchat flurry of leaked images. Third-party clients for Snapchat exist, and they were created by reverse-engineering the API by means of a sniffer looking at the traffic during a man-in-the-middle attack. Had the Snapchat developers been smarter, they would have pinned their certificate into their app, absolutely guaranteeing it's Snapchat's server they're talking to, and the hackers would have needed to inspect the binary, a much more laborious task that, given the effort involved, perhaps would not have been performed.
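As a rough sketch of what the pinning check amounts to (platform-neutral; shown here with Node's tls module and a placeholder fingerprint, not production mobile code):

```typescript
import * as tls from "node:tls";

// SHA-256 fingerprint of the server's certificate, baked into the client
// at build time (placeholder value).
const PINNED_FINGERPRINT =
  "AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:" +
  "AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99";

const socket = tls.connect({ host: "api.example.com", port: 443 }, () => {
  const cert = socket.getPeerCertificate();
  // Reject the connection if the certificate is not the pinned one, even
  // if it chains to a trusted CA -- this is what defeats a MITM proxy
  // presenting its own (otherwise valid) certificate.
  if (cert.fingerprint256 !== PINNED_FINGERPRINT) {
    socket.destroy(new Error("Certificate pin mismatch"));
    return;
  }
  // Only now is it safe to send the API request over this connection.
  socket.end();
});
```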
We use HTTPS and assign authorized users a key which is sent with and validated on each request.
We also use HMAC hashing.
A good read on HMAC:
http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/
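For illustration, a minimal request-signing sketch along those lines (the header names, timestamp scheme, and key handling are assumptions, not the linked article's exact design):

```typescript
import { createHmac } from "node:crypto";

// Shared secret issued to the authorized client out of band (placeholder).
const API_KEY = "client-id-123";
const API_SECRET = "super-secret-value";

// Sign the method, path, timestamp and body so the server can verify both
// who sent the request and that it was not modified in transit.
function sign(method: string, path: string, timestamp: string, body: string): string {
  const message = [method, path, timestamp, body].join("\n");
  return createHmac("sha256", API_SECRET).update(message).digest("hex");
}

async function createOrder(): Promise<void> {
  const body = JSON.stringify({ item: 42 });
  const timestamp = new Date().toISOString(); // lets the server reject replays
  await fetch("https://api.example.com/orders", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Api-Key": API_KEY,
      "X-Timestamp": timestamp,
      "X-Signature": sign("POST", "/orders", timestamp, body),
    },
    body,
  });
}
```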
I have a web application that supports a variety of clients on various platforms including desktop browsers, mobile browsers, as well as mobile and tablet native applications. I am wondering if it is possible to detect, in a secure manner, which of these platforms is being used to connect to the service.
This would be useful information to have, and would enable use cases where a security decision could be made based on the client platform. For example, I could restrict access to certain portions of the service if a user was on a mobile client, or a browser with a known vulnerability.
I am aware of EFF's Panopticlick research, which uses a variety of browser-based attributes, such as User-Agent string, installed plugins, screen dimensions, etc. to establish a unique fingerprint for a client, but this doesn't meet the "verifiable" requirement, as all the information is compiled on the client and could easily be spoofed.
I need a solution that is verifiable on the server side that the information sent by the client is accurate. Does such a solution exist?
There is no way to know which user-agent/platform you're dealing with, because ultimately any information you might use to identify them comes from the client side.
Any attribute you use to fingerprint my browser or operating system can be faked by simply sending you different HTTP headers. There are dozens of browser-level HTTP header manipulators and HTTP request code libraries that do precisely that.
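For example, claiming to be a different platform is a one-line change with any HTTP client (the URL below is a placeholder):

```typescript
// Any client can claim to be any platform; the server has no way to verify
// these headers.
fetch("https://service.example.com/api/profile", {
  headers: {
    "User-Agent":
      "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) AppleWebKit/605.1.15",
  },
}).then((res) => console.log(res.status));
```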
I would therefore highly recommend against making -any- security decision based on platform or user-agent of the client. Assume that whatever rules you may set for purposes of usability along those lines can be violated by a hacker.
Web applications on uncompromised computers are vulnerable to XSS, CSRF, and SQL injection attacks, and to cookie stealing in insecure Wi-Fi environments.
To prevent those security issues, there are the following remedies:
SQL injection: a good data mapper (like LINQ-to-SQL) does not carry the risk of SQL injection (am I naïve to believe this?)
CSRF: every form post is verified with <%:Html.AntiForgeryToken() %> (in an ASP.NET MVC environment this is a token that is stored in a cookie and verified on the server)
XSS: every form that is allowed to post HTML is converted; only BB code is allowed and the rest is encoded. All save actions are done with a POST request, so rogue img tags should have no effect
Cookie stealing: HTTPS
Am I now invulnerable to web-based hacking attempts (assuming all of this is implemented correctly)? Or am I missing other security issues in web development (apart from possible holes in the OS platform or other software)?
The easy answer is "No you're not invulnerable - nobody is!"
This is a good start, but there are a few other things you could do. The main one you haven't mentioned is validation of untrusted data against a whitelist, and this is important as it spans multiple exploits such as SQLi and XSS. Take a look at OWASP Top 10 for .NET developers part 1: Injection and in particular the section about "All input must be validated against a whitelist of acceptable value ranges".
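As a small sketch of whitelist validation (the field names and ranges are invented; the linked article shows the ASP.NET flavour of the same idea):

```typescript
// Validate untrusted input against a whitelist of acceptable values and
// ranges before it reaches a query or a page.
function validateOrder(input: { productCode: string; quantity: number }): void {
  if (!/^[A-Z]{3}-\d{4}$/.test(input.productCode)) {
    throw new Error("Invalid product code");
  }
  if (!Number.isInteger(input.quantity) || input.quantity < 1 || input.quantity > 100) {
    throw new Error("Quantity out of range");
  }
}
```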
Next up, you should apply the principle of least privilege to the accounts connecting to your SQL Server. See the heading under this name in the previous link.
Given you're working with ASP.NET, make sure Request Validation remains on and if you absolutely, positively need to disable it, just do it at a page level. More on this in Request Validation, DotNetNuke and design utopia.
For your output encoding, the main thing is to ensure that you're encoding for the right context. HTML encoding != JavaScript encoding != CSS encoding. More on this in OWASP Top 10 for .NET developers part 2: Cross-Site Scripting (XSS).
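A tiny sketch of why the output context matters (hand-rolled, simplified encoders for illustration only; use a vetted encoding library in practice):

```typescript
// HTML encoding and JavaScript string escaping are different operations;
// using the wrong one for the output context leaves an injection open.
function htmlEncode(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

function jsStringEscape(s: string): string {
  return s.replace(/[\\'"<>\u2028\u2029]/g, (c) =>
    "\\u" + c.charCodeAt(0).toString(16).padStart(4, "0"),
  );
}

const untrusted = '"><script>alert(1)</script>';
// For an HTML element body:
const safeHtml = "<span>" + htmlEncode(untrusted) + "</span>";
// For a value embedded inside a <script> string literal:
const safeJs = 'var name = "' + jsStringEscape(untrusted) + '";';
```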
For the cookies, make them HTTP only and if possible, only allow them to be served securely (if you're happy to only run over HTTPS). Try putting your web.config through the web.config security analyser which will help point you in the right direction.
Another CSRF defense - albeit one with a usability impact - is CAPTCHA. Obviously you want to use this sparingly but if you've got any really critical functions you want to protect, this puts a pretty swift stop to it. More in OWASP Top 10 for .NET developers part 5: Cross-Site Request Forgery (CSRF).
Other than that, it sounds like you're aware of many of the important principles. It won't make you invulnerable, but it's a good start.
Am I now invulnerable to web-based hacking attempts?
Because, no matter how good you are, everyone makes mistakes, the answer is no. You almost certainly forgot to sanitize some input or to use an anti-forgery token somewhere. If you haven't yet, you or another developer will as your application grows larger.
This is one of the reasons we use frameworks - MVC, for example, will automatically generate anti-CSRF tokens, while LINQ-to-SQL (as you mentioned) will sanitize input for the database. So, if you are not already using a framework which makes anti-XSS and anti-CSRF measures the default, you should begin now.
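To make the data-access point concrete, the protection comes from parameterization rather than string concatenation; a sketch using node-postgres as a stand-in (the table and column names are invented):

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings taken from environment variables

// Unsafe: attacker-controlled input concatenated into the SQL text, e.g.
//   pool.query(`SELECT * FROM users WHERE name = '${name}'`)

// Safe: the value travels as a bound parameter, never as SQL text.
async function findUser(name: string) {
  const result = await pool.query("SELECT * FROM users WHERE name = $1", [name]);
  return result.rows;
}
```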
Of course, these will protect you against these specific threats, but it's never possible to be secure against all threats. For instance, if you have a weak SQL connection password, it's possible that someone will brute-force your DB password and gain access. If you don't keep your versions of .NET/SQL Server/everything up to date, you could be the victim of an online worm (and even if you do, it's still possible to be zero-dayed).
There are even problems you can't solve in software: a script kiddie could DDoS your site. Your server company could go bankrupt. A shady competitor could simply take hedge clippers to your internet line. Your warehouse could burn down. A developer could sell the source code to a company in Russia.
The point is, again, you can't ever be secure against everything - you can only be secure against specific threats.
This is the definitive guide to web attacks. Also, I would recommend you use Metasploit against your web app.
It definitely is not enough! There are several other security issues you have to keep in mind when developing a web-app.
To get an overview you can use the OWASP Top Ten.
I think this is a very interesting post to read when thinking about web security: What should a developer know before building a public web site?
There is a section about security that contains good links for most of the threats you are facing when developing web-apps.
The most important thing to keep in mind when thinking about security is:
Never trust user input!
[I am answering this "old" question because I think it is still a relevant topic.]
About what you didn't mention:
You missed a dangerous attack in MVC frameworks: the over-posting (mass assignment) attack.
You also missed one of the most annoying threats: denial of service.
You should also pay enough attention to file uploads (if any) and many more...
About what you mentioned:
XSS is really, really broad and annoying to mitigate. There are several types of encoding, including HTML encoding, JavaScript encoding, CSS encoding, HTML attribute encoding, URL encoding, ...
Each of them should be applied to the proper content, in the proper place - i.e. just HTML-encoding the content is not enough in all situations.
And the most annoying thing about XSS is that there are some situations where you should perform combinational encoding (i.e. first JavaScript-encode and then HTML-encode!).
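For example, a value emitted inside an inline event handler sits in two contexts at once, so it needs JavaScript encoding first and then HTML attribute encoding (a simplified sketch, not a production encoder):

```typescript
// A value inside an inline event handler lives in two contexts at once:
// a JavaScript string inside an HTML attribute. Encode for the inner
// context first (JavaScript), then for the outer one (HTML attribute).
const untrusted = "'); stealCookies(); ('";

const jsEncoded = JSON.stringify(untrusted); // crude JavaScript string encoding
const htmlAttrEncoded = jsEncoded
  .replace(/&/g, "&amp;")
  .replace(/"/g, "&quot;")
  .replace(/</g, "&lt;")
  .replace(/>/g, "&gt;");

const markup = '<button onclick="showName(' + htmlAttrEncoded + ')">Click</button>';
```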
Take a look at the following link to become more familiar with the nightmare called XSS:
XSS Filter Evasion Cheat Sheet - OWASP