Security vulnerability or not?

I observed that the web tool project I'm working on has a potential vulnerability: a well-forged HTTP form request can make the internal server execute arbitrary shell commands.
However, the web tool page is only accessible to my company's internal network and users. Although an attacker could still build a malicious page that forges the request and trick one of our internal users into clicking on it, it seems difficult for an attacker to figure out a well-forged HTTP request without direct access to the web page. In that case, is this still a serious vulnerability that needs to be fixed?
Sorry, I'm not very familiar with security. Please let me know if further information is needed.

This is usually a judgement call, handled by company policy.
If your company is small, the entire staff can be trusted, and it is certain that the application will never be used in a public setting, you may choose not to address this issue if it is hard to fix.
If any of these conditions does not hold, then you should fix the vulnerability. Oftentimes a formerly internal application becomes public and its vulnerabilities are forgotten. Also consider that an insider may be laid off and use this vulnerability for revenge.
It is always safer to fix the vulnerability. Make the tradeoff wisely.
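If the fix does go ahead, the usual approach is to stop passing form input through a shell at all, or at least to validate and quote it first. Below is a minimal sketch, assuming the tool is written in PHP and shells out with a user-supplied value; the host field and the ping command are invented for illustration, not taken from the question.

<?php
// Hypothetical handler for the internal tool: runs a diagnostic for a
// hostname submitted through a form. Field name and command are illustrative.
$host = $_POST['host'] ?? '';

// Validate against a strict allow-list pattern before doing anything else.
if (!preg_match('/^[A-Za-z0-9.-]{1,253}$/', $host)) {
    http_response_code(400);
    exit('invalid host');
}

// Even after validation, quote the argument so the shell cannot reinterpret
// metacharacters such as ';', '|' or '$(...)'.
$output = shell_exec('ping -c 1 ' . escapeshellarg($host));

echo '<pre>' . htmlspecialchars((string) $output, ENT_QUOTES, 'UTF-8') . '</pre>';

Pairing this with an anti-CSRF token on the form would also close the "forged request from a malicious page" half of the problem, since such a request would be rejected before any command runs.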

Related

Remote code execution and XSS vulnerabilities. What steps should be taken to secure a server once these are discovered and patched?

I've just been notified of a remote code execution vulnerability and an XSS vulnerability on a site that I run. I've fixed the responsible code, but I'm wondering what steps should be taken afterwards to:
Ensure the server is secure
Ensure no data was compromised
Ensure no malicious files were uploaded.
The remote code execution vulnerability was particularly bad: it allowed any PHP code to be run on the server and its output to be displayed to the user.
The app is hosted on Amazon Lightsail. Would it be helpful to redeploy on a new instance?
Well, definitely ensure that the vulnerabilities have been successfully patched. Remember that block lists are particularly ineffective when it comes to patching XSS and RCE.
With regard to XSS, do not display raw user input in places like links, iframe sources, or indeed within any element. Exceptions can be made for displaying it in input boxes. Always put user input through htmlspecialchars() (or the equivalent function for whatever server-side language you're using, which I assume is PHP judging by your question).
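As a sketch of what that looks like in PHP (the helper name e() and the form field are my own choices, not from the question):

<?php
// Encode user input before placing it in HTML output.
function e(string $value): string
{
    return htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
}

$comment = $_POST['comment'] ?? '';

// Safe as element text and inside quoted attribute values:
echo '<p>' . e($comment) . '</p>';
echo '<input type="text" name="comment" value="' . e($comment) . '">';

// Still NOT safe inside <script> blocks, inline event handlers or URLs;
// those contexts need their own encoding, or should not receive user input at all.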
If the vulnerabilities were found by you, or someone reported them to you, it's pretty likely no data has been compromised. Big companies find vulnerabilities regularly.
With regard to preventative measures, check for further vulnerabilities, make sure they don't exist, and harden your server. You can also use a firewall or other security software; it won't patch the vulnerabilities, but it might block and log malicious payloads, which both lets you see that a vulnerability exists and prevents it from being exploited.
You can't really ensure no malicious files were uploaded if the vulnerability was exploited. I would definitely recommend restoring from a known-good backup, so long as that does not affect your site negatively.

Do vulnerability scans make sense for password protected websites?

Are vulnerability scans aimed at public-facing websites, or can they be run on password-protected websites as well?
Generally there are two types of vulnerability scans: static and dynamic.
Static Scan:
Testing the application from the inside out by examining code, byte code, or application binaries for probable vulnerabilities.
Dynamic Scan:
Testing the application from the outside in by examining it in its running state and trying to poke it in unexpected ways to discover probable vulnerabilities.
Regarding password protection:
Both of these scans assess the password policies used in the application. Most of the time a static scan analyses the application for hardcoded passwords and the cryptographic mechanisms it uses, while a dynamic scan assesses weak passwords that are easy to guess, etc. So a major part of password protection can be covered by using both of these scans. For more protection, you can rely on third-party tools to analyze the password policies configured in the application.
The scan will generally test your server as a whole, not just the website. It can see what other ports are open, which may indicate a vulnerability. The login page itself might have vulnerabilities.
An automated scan won't be able to spider the site and get all the pages, but that's usually not the point.
Yes, they work on both public and private sites. As user Garr Godfrey said, a scan will check the whole hosting server, so as long as the public-facing side is on the same host as the protected side, they work the same way.
There are also tools that can yield information about a server beyond port scans. These are usually known as (site) crawlers; here is a link to one:
https://portswigger.net/burp/help/spider_using.html

What damage can a website do?

Now and then I (accidentally) come across websites that my anti-virus warns me about. Out of curiosity, what kind of damage can a website do?
I've been working in web development for around 4 years now and can't think of any 'genuine' damage worth warning the user about. Maybe I'm missing something obvious, but surely browsers and the basic security measures implemented by the main operating systems prevent anything particularly invasive from going on?
I'm talking about threats aside from anything deceptive (phishing, etc.), by the way. Could taxing the browser be enough to warrant an anti-virus warning (e.g. overloading a page with resource-draining JavaScript)? Typically, cookies, caches, and localStorage all have limits, so I can't think of what could go on there.
I suspect this may be slightly off-topic, as it's less technically specific than what I'd usually ask. I'll happily delete it if this is the case.
The main risk is encountering a drive-by download.
A drive-by download isn't necessarily a file download in the usual sense; it could be a browser exploit that allows executable code to be downloaded and executed on your system (known as the payload).
One example is the Microsoft Internet Explorer colspan Element Processing Arbitrary Code Execution Vulnerability:
Microsoft Internet Explorer contains a vulnerability that could allow an unauthenticated, remote attacker to execute arbitrary code on a targeted system.
The vulnerability is due to improper processing of elements in web pages. An unauthenticated, remote attacker could exploit this vulnerability by convincing a user to view a malicious website. If successful, the attacker could exploit this vulnerability to execute arbitrary code on the system with the privileges of the user.
The vulnerability is due to improper handling of constantly changed colspan in a fixed table layout. If colspan could be increased after initialization, it could trigger a heap-based buffer overflow.
However, more recent exploits exist, such as this one from 2015 in Flash Player:
Adobe Flash Player before 13.0.0.269 and 14.x through 16.x before 16.0.0.305 on Windows and OS X and before 11.2.202.442 on Linux allows attackers to execute arbitrary code or cause a denial of service (memory corruption) via unspecified vectors.
Another attack vector from a website is a cross-domain attack such as Cross-Site Request Forgery (CSRF). Such a malicious site could make background requests to other sites you're logged into. For example, it might make AJAX requests to https://facebook.com/delete_account (a made-up URL path), and since you're logged into Facebook, your browser would send your cookies and the action would be triggered. That is, if Facebook did not have CSRF protection for the delete-account function (I'm pretty sure it does, though).
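The standard defence is a per-session anti-CSRF token that a third-party page cannot know. A minimal PHP sketch of the idea (the field and session key names are my own, not from any particular framework):

<?php
session_start();

// Issue a random token once per session and embed it in every state-changing
// form as a hidden input named csrf_token.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

// On every POST that changes state, reject requests whose token does not
// match the one stored in the session.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $sent = (string) ($_POST['csrf_token'] ?? '');
    if (!hash_equals($_SESSION['csrf_token'], $sent)) {
        http_response_code(403);
        exit('CSRF token mismatch');
    }
    // ... perform the sensitive action ...
}

A forged cross-site request carries the victim's cookies, but the attacking page cannot read or guess the token, so the check fails.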
Another example of a cross-domain attack is that the site may try to exploit an XSS flaw on another site you use. It could redirect you to another site and capture your credentials as you log in, or it could do something sneakier, like requesting a site in the background and grabbing your session cookie. This requires the target site to contain such an XSS flaw, however.
One of the main issues is that when you visit a website, it can automatically download something onto your computer. Normally a website will ask whether you are sure you want to download the item, but a malicious website can download something without your permission. And if the downloaded file is a virus, you now have a virus on your computer, and it can inflict any sort of damage.
See here (https://www.microsoft.com/security/pc-security/virus-whatis.aspx) for what a virus can do and how to remove one.

When writing an HTTP proxy, what security problems do I need to think about?

My company has written an HTTP proxy that takes the original website page and translates it. Think something along the lines of the web translation service provided by Google, Bing, etc.
I am in the middle of security testing of the service and the associated website. Of course there are going to be a million attacks or misuses of the site that I haven't yet thought of. Additionally, I don't want our site to become a vector that allows anonymous attacks against third-party sites. Since this site will be subject to many eyes from the day it opens, ensuring the security of both our service and the sites visited through it is a real concern for me.
Can anyone point me to any online or published information on security testing, e.g. good lists of attacks to worry about, or security best practices for building websites/proxies/etc.? I have a good general understanding of security issues (XSS, CSRF, SQL injection, etc.); I'm looking more for resources to help me with the specifics of creating tests for security testing.
Any pointers?
Seen:
https://www.owasp.org/index.php/Top_10
https://stackoverflow.com/questions/1267284/common-website-attack-methods-detection-and-recovery
Most obvious problems for a translation service:
Ensure that the proxy cannot access the internal network. This is obvious when you think about it, but mostly forgotten in a first release: a user should not be able to request a translation of http://127.0.0.1, etc. As you can imagine, this can cause some serious problems. A clever attack would be http://127.0.0.1/trace.axd, which will expose more than necessary because it thinks the request is coming from localhost. If you have any kind of IP-based restrictions between that system and any other systems, you should be careful about those as well (see the sketch at the end of this answer).
XSS is the obvious problem: ensure that the translation is delivered to the user from a separate domain (as Google Translate does). This is crucial; don't even think that you can filter XSS attacks successfully.
Other than that, there is plenty to do for all the other common web security issues. OWASP is the best resource to start with; for automated testing there are free tools such as Netsparker and Skipfish.
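For the first point, here is a rough sketch of rejecting translation targets that resolve to internal addresses, assuming the proxy is written in PHP; the function name is mine, and a real deployment also has to handle redirects, IPv6 and DNS rebinding, which this does not cover.

<?php
// Refuse to proxy URLs whose host resolves to a private or reserved address.
function isSafeTarget(string $url): bool
{
    $host = parse_url($url, PHP_URL_HOST);
    if (!is_string($host) || $host === '') {
        return false;
    }

    // gethostbyname() returns its input unchanged when resolution fails,
    // which then fails the IP check below, so we fail closed.
    $ip = gethostbyname($host);

    // These flags reject private and reserved IPv4 blocks such as 10/8,
    // 172.16/12, 192.168/16, 127/8 and 169.254/16.
    return filter_var(
        $ip,
        FILTER_VALIDATE_IP,
        FILTER_FLAG_NO_PRIV_RANGE | FILTER_FLAG_NO_RES_RANGE
    ) !== false;
}

if (!isSafeTarget($_GET['url'] ?? '')) {
    http_response_code(400);
    exit('target not allowed');
}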

I want to use security through obscurity for the admin interface of a simple website. Can it be a problem?

For the sake of simplicity I want to use admin links like this for a site:
http://sitename.com/somegibberish.php?othergibberish=...
So the actual URL and the parameter would be some completely random string which only I would know.
I know security through obscurity is generally a bad idea, but is it a realistic threat that someone could find out the URL? Don't take the hosting company's employees or eavesdroppers on the line into account, because it is a toy site, not something important, and the hosting company doesn't give me secure FTP anyway, so I'm only concerned about normal visitors.
Is there a way someone could find this URL? It wouldn't be anywhere on the web, so Google won't know about it either. I hope, at least. :)
Any other hole in my scheme which I don't see?
Well, if you could guarantee only you would ever know it, it would work. Unfortunately, even ignoring malicious men in the middle, there are many ways it can leak out...
It will appear in the access logs of your provider, which might end up on Google (and are certainly read by the hosting admins)
It's in your browsing history. Plugins, extensions, etc. have access to this, and often upload it elsewhere (e.g. StumbleUpon).
Any proxy servers along the line see it clearly
It could turn up as a Referer to another site
some completely random string which only I would know
Sounds like a password to me. :-)
If you're going to have to remember a secret string, I would suggest doing usernames and passwords "properly", as HTTP servers have been written not to leak password information; the same is not true of URLs.
This may only be a toy site, but why not practice setting up security properly when it won't matter if you get it wrong? Then, if you do have a site you need to secure in the future, you'll already have made all your mistakes.
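Doing it "properly" is not much code these days. A minimal sketch using PHP's built-in password API (in a real setup the hash is created once and loaded from storage; it is computed inline here only to keep the sketch self-contained):

<?php
session_start();

// Normally created once with password_hash() and stored outside the script.
$storedHash = password_hash('change-me-please', PASSWORD_DEFAULT);

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    if (password_verify((string) ($_POST['password'] ?? ''), $storedHash)) {
        session_regenerate_id(true);   // avoid session fixation
        $_SESSION['is_admin'] = true;
    } else {
        http_response_code(401);
        exit('wrong password');
    }
}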
I know security through obscurity is generally a very bad idea,
Fixed it for you.
The danger here is that you might get in the habit of "oh, it worked for Toy such-and-such site, so I won't bother implementing real security on this other site."
You would do a disservice to yourself (and any clients/users of your system) if you ignore Kerckhoffs's principle.
That being said, rolling your own security system is a bad idea. Smarter people have already created security libraries in all the major languages, and even smarter people have reviewed and tweaked those libraries. Use them.
It could appear on the web via a "Referer leak". Say your page links to my page at http://entrian.com/, and I publish my web server referer logs on the web. There'll be an entry saying that http://entrian.com/ was accessed from http://sitename.com/somegibberish.php?othergibberish=...
As long as the "login URL" is never posted anywhere, there shouldn't be any way for search engines to find it. And if it's just a small, personal toy site with no personal or really important content, I see this as a fast, decent-enough solution compared to implementing some form of proper login/authorization system.
If the site gains a large number of users and lots of content, or simply becomes more than a "toy site", I'd advise you to do it the proper way.
I don't know what your toy admin page would display, but keep in mind that when it loads external images or links to somewhere else, the Referer header is going to publicize your URL.
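If you do keep the obscure URL, one mitigation for this particular leak (my suggestion, not something the answer above relies on) is to tell browsers not to send a Referer from the admin pages at all:

<?php
// Sent from the admin pages: supporting browsers will then omit the Referer
// header when following links or loading external resources from those pages.
header('Referrer-Policy: no-referrer');

// For individual outbound links, rel="noreferrer" on the anchor tag has a
// similar effect.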
If you change http into https, then at least the URL will not be visible to anyone sniffing the network.
(The caveat is that a very obscure login system can still leave interesting traces: in network captures (MITM), somewhere on the site itself that could enable privilege escalation, or on the machine you log in from if that machine is no longer secure. For that reason, some prefer an admin login that looks no different from a standard user login.)
You could require that some action be taken a certain number of times, with some number of seconds of delay between them. Once this action-delay-action-delay-action pattern is noticed, the admin interface becomes available for login. The URLs used in the interface could be randomized each time, with a single-use URL generated after that pattern. Further, you could expose this interface only through some tunnel, and only for a minute, on a port encoded by the delays.
If you could do all that in a manner that didn't stand out in the logs, it would be "clever", but you could also open up new holes by writing all that code, and it goes against "keep it simple, stupid".

Resources