My web server received around a hundred access attempts from the Acunetix scanner within one minute!
How can I stop these attacks, given that many different end users access the website every minute?
There are two main ways you can tackle this:
Set up tools such as Fail2Ban to automatically block the IPs of clients scanning your website.
Automatically block any requests that carry the scanner's custom headers. Check your web server's access log to see whether the scanner is sending custom headers similar to:
Acunetix-Product: WVS/X (Acunetix Web Vulnerability Scanner - NORMAL)
Acunetix-Scanning-agreement: Third Party Scanning PROHIBITED
Acunetix-User-agreement: http://www.acunetix.com/wvs/disc.htm
You can then block requests matching those patterns using web server rules or a WAF (Web Application Firewall).
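If your site happens to sit behind a Node/Express front end, a minimal middleware sketch along these lines could do the blocking (the header names are taken from the samples above; Apache or nginx rules would achieve the same thing):

// Minimal Express sketch: reject any request carrying an Acunetix-* header.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Express exposes incoming header names in lower case.
  const scannerHeader = Object.keys(req.headers).find((name) =>
    name.startsWith('acunetix-')
  );
  if (scannerHeader) {
    // Optionally log req.ip here so Fail2Ban (or similar) can pick it up.
    return res.status(403).send('Forbidden');
  }
  next();
});

app.get('/', (req, res) => res.send('Hello'));
app.listen(3000);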
We use Application Insights on the front end, and we also use Azure Front Door with a WAF (Web Application Firewall) policy.
I can see in the WAF logs that a lot of requests are blocked by some of the WAF managed rules.
When I inspected the WAF logs, I found that the requests are blocked because of the values in the ai_session and ai_user cookies (the App Insights cookies).
Rules that block the requests:
(942210) Detects chained SQL injection attempts 1/2 - blocks the request because of an "OR" substring in an ai_session cookie value like this:
D/6NkwBRWBcMc4OR7+EFPs|1647504934370|1647505171554
(942450) SQL Hex Encoding Identified - blocks the request because of a "0x" substring in an ai_user cookie value like this:
mP4urlq9PZ9K0xc19D0SbK|2022-03-17T10:53:02.452Z
(932150) Remote Command Execution: Direct Unix Command Execution - blocks the request because of an ai_session cookie with the value: KkNDKlGfvxZWqiwU945/Cc|1647963061962|1647963061962
Is there a way how to force App Insights to generate "secure" cookies?
Why does Azure generate cookie values that then cause requests to be blocked by Azure's own firewall?
I know that I can set those WAF rules to allow, but is there any other solution?
We have started to encounter this error as well; disabling (or setting to allowed) the OWASP rules as you indicated will work.
I have opened a bug report on the project page that outlines this in more detail: https://github.com/microsoft/ApplicationInsights-JS/issues/1974. The gist of it, as you identified, is that the WAF rules' regexes are overzealous.
The IDs that are eventually used by the cookies are generated by this section of code: https://github.com/microsoft/ApplicationInsights-JS/blob/0c76d710a0cd465f0b8b5e143250898122511874/shared/AppInsightsCore/src/JavaScriptSDK/RandomHelper.ts#L125-L145
If the developers choose to, they have various ways to solve the problem:
Test the generated cookie values against the list of known regexes and regenerate on a match.
Remove some of the offending combinations to avoid the rules entirely.
We'll have to see how that plays out. If you cannot wait for that, in theory you could fork the project and add such changes yourself, but I would not recommend vendoring the SDK.
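For illustration only, a rough sketch of the first option might look like this - the patterns are stand-ins for the real CRS regexes, and generateId() is a simplified stand-in for the SDK's RandomHelper logic linked above:

// Regenerate an ID until it no longer matches patterns known to trip the WAF.
const SUSPICIOUS_PATTERNS = [
  /OR/,   // illustrative for 942210 (chained SQL injection, "OR" fragment)
  /0x/i,  // illustrative for 942450 (SQL hex encoding)
];

function generateId(length = 22) {
  const chars =
    'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  let id = '';
  for (let i = 0; i < length; i++) {
    id += chars.charAt(Math.floor(Math.random() * chars.length));
  }
  return id;
}

function generateSafeId() {
  let id;
  do {
    id = generateId();
  } while (SUSPICIOUS_PATTERNS.some((re) => re.test(id)));
  return id;
}

console.log(generateSafeId());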
It's not a common question, but I wonder if any tricks or upcoming standards exist.
Below is the flow and what I want to implement:
The web application is loaded from the server side.
A client-side script loads some secure content (not from #1) that needs to be protected from the web application provider. It may be shown to the user visually.
The web application provider knows where the secure content lives (its DOM path) and may try to capture it by injecting a script.
However, the secure content shouldn't be able to be hijacked by servers (even from the same origin) or by external applications (ideally not even via the developer tools).
EDIT:
For better understanding: this is for a use case where the web application doesn't hold user data in its own DB but loads the data from somewhere else. In that case, I need to protect the data from the web application itself, which is uncommon for a regular web application.
You should use a Content Security Policy (CSP), which enables the browser to deny injection attacks. These can be a little tricky to set up correctly, so I would use Report URI to help you get it going. The trick is to use report-only mode first until you have validated the settings, then switch to enforce.
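As a rough sketch (assuming a Node/Express backend; the policy and the report endpoint are placeholders), the report-only phase could look like this before you switch the header name over to Content-Security-Policy:

// Send the policy in report-only mode first; nothing is blocked yet,
// but violations are reported to the endpoint so you can tune the policy.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.setHeader(
    'Content-Security-Policy-Report-Only',
    "default-src 'self'; script-src 'self'; report-uri https://your-subdomain.report-uri.com/r/d/csp/reportOnly"
  );
  next();
});

app.listen(3000);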
In Node, if I use a library like axios and a simple async script, I can send unlimited POST requests to any web server. If I know all the parameters, headers, and cookies needed for that URL, I'll get a success response.
Also, anyone can easily make those requests using Postman.
I already use CORS in my Node servers to block requests coming from different origins, but that only works for other websites triggering requests from browsers.
I'd like to know if it's possible to completely block requests from external sources (manually created scripts, Postman, software like LOIC, etc.) in a Node server using Express.
Thanks!
I'd like to know if it's possible to completely block requests from external sources (manually created scripts, Postman, software like LOIC, etc.) in a Node server using Express.
No, it is not possible. A well-formed request from Postman, or one coded with axios in Node.js, can be made to look exactly like a request coming from a browser. Your server would not know the difference.
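For example, a script like the following (the URL, payload and cookie are made up) sends a request that is indistinguishable from one a browser would make:

// A scripted request dressed up with browser-like headers.
const axios = require('axios');

axios.post(
  'https://example.com/api/orders',
  { item: 'book', qty: 1 },
  {
    headers: {
      'User-Agent':
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36',
      'Origin': 'https://example.com',
      'Referer': 'https://example.com/shop',
      'Cookie': 'session=abc123',
      'Content-Type': 'application/json',
    },
  }
).then((res) => console.log(res.status));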
The usual scheme for an API is that you require some sort of developer credential in order to use it. You apply terms of service to that credential describing what developers are and are not allowed to do with your API.
Then, you monitor usage programmatically and slow down or ban any credentials that are misusing the APIs according to your terms (this is how Google does things with its APIs). You may also implement rate limiting and other server protections so that a runaway developer account can't harm your service. You may even blacklist IP addresses that repeatedly abuse your service.
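As a toy illustration of the rate-limiting part (in-memory, single process; in practice you would likely reach for a package such as express-rate-limit backed by a shared store), something like this:

// Count requests per IP in a fixed window and reject when the cap is hit.
const express = require('express');
const app = express();

const WINDOW_MS = 60 * 1000; // 1-minute window
const MAX_REQUESTS = 100;    // per IP per window
const hits = new Map();      // ip -> { count, windowStart }

app.use((req, res, next) => {
  const now = Date.now();
  const entry = hits.get(req.ip) || { count: 0, windowStart: now };
  if (now - entry.windowStart > WINDOW_MS) {
    entry.count = 0;
    entry.windowStart = now;
  }
  entry.count += 1;
  hits.set(req.ip, entry);
  if (entry.count > MAX_REQUESTS) {
    return res.status(429).send('Too many requests');
  }
  next();
});

app.listen(3000);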
For APIs that you wish for your own web pages to use (to make Ajax calls to), there is no real way to keep others from using those same APIs programmatically. You can monitor their usage and attempt to detect usage that is out-of-line of what your own web pages would do. There are also some schemes where you place a unique, short-use token in your web page and require your web pages to include the token with each request of the API. With some effort, that can be worked around by smart developers by regularly scraping the token out of your web page and then using it programmatically until it expires. But, it is an extra obstacle for the API thief to get around.
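A minimal sketch of that short-use token idea might look like the following - the secret, TTL, header name and /api/data route are all made up for the example:

// The server embeds a signed, expiring token in the page it serves,
// and the API rejects calls that don't present a valid one.
const crypto = require('crypto');
const express = require('express');
const app = express();

const PAGE_TOKEN_SECRET = 'replace-me';
const TOKEN_TTL_MS = 5 * 60 * 1000;

function issueToken() {
  const expires = Date.now() + TOKEN_TTL_MS;
  const sig = crypto
    .createHmac('sha256', PAGE_TOKEN_SECRET)
    .update(String(expires))
    .digest('hex');
  return `${expires}.${sig}`;
}

function tokenIsValid(token = '') {
  const [expires, sig] = token.split('.');
  if (!expires || !sig || Number(expires) < Date.now()) return false;
  const expected = crypto
    .createHmac('sha256', PAGE_TOKEN_SECRET)
    .update(expires)
    .digest('hex');
  if (sig.length !== expected.length) return false;
  return crypto.timingSafeEqual(Buffer.from(sig), Buffer.from(expected));
}

app.get('/', (req, res) => {
  // Embed the token in the page so your own scripts can send it back.
  res.send(`<script>window.API_TOKEN = "${issueToken()}";</script>`);
});

app.get('/api/data', (req, res) => {
  if (!tokenIsValid(req.get('X-Page-Token'))) return res.sendStatus(403);
  res.json({ ok: true });
});

app.listen(3000);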
Once you have identified an abuser, you can block their IP address. If they happen to be on a larger network (like say a university), their public IP address may be shared by many via NAT and you may end up blocking many more users than you want to - that's just a consequence of blocking an IP address that might be shared by many users.
As a web developer, I'm increasingly debugging issues only to find that our IT department are using our firewall to filter HTTP response headers.
They are using a whitelist of known headers, so the headers used by certain newer technologies (CORS, WebSockets, etc.) are automatically stripped until I debug the problem and request whitelisting.
The affected responses come from third-party services we are consuming - so if we have an internal site that uses Disqus, comments cannot be loaded because the response from Disqus has its headers stripped. The resources we serve ourselves are not affected, as only traffic coming into the office is filtered.
Are there genuine reasons to block certain headers? Obviously there are concerns such as man-in-the-middle attacks, redirects to phishing sites, etc., but these require more than just an errant header to be successful.
What are the security reasons to maintain a whitelist of allowed HTTP response headers?
Fingerprinting could be the main reason to strip the response headers:
https://www.owasp.org/index.php/Fingerprint_Web_Server_%28OTG-INFO-002%29
It depends on the stack you're running. Most of the time the information included in the response headers is configurable in each server, but that requires tampering with each serving application individually (and there may be cases where the software is proprietary and doesn't offer the option to set the HTTP headers).
Let's go with an easy example:
In our example datacenter, we're running a set of servers for different purposes, and we have configured them properly, so that they're returning no unnecessary metadata on the headers.
However, a new (imaginary) closed-source application for managing the print jobs is installed on one of the servers, and it offers a web interface that we want to use for whatever reason.
If this application returns an additional header such as (let's say) "x-printman-version" (and it might want to do that, to ensure compatibility with clients that use its API), it will be effectively exposing its version.
And, if this print job manager has known vulnerabilities for certain versions, an attacker just has to query it to know whether this particular install is vulnerable.
This may not seem so important, but it opens a window for automated/opportunistic attacks: scanning ports of interest and waiting for the right headers ("fingerprints") to appear in order to launch an attack with certainty of success.
Therefore, stripping most of the additional HTTP headers (while setting policies and rules for those we want to keep) can be sensible in an organisation.
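For the servers you do control, suppressing the fingerprint at the application level is often a one-liner; here is a small Express-flavoured sketch (closed-source apps like the imaginary print manager would instead have their headers stripped at a reverse proxy or firewall in front of them):

// Express adds an "X-Powered-By: Express" header by default; turn it off
// so the stack isn't advertised in every response.
const express = require('express');
const app = express();

app.disable('x-powered-by');

app.get('/', (req, res) => res.send('ok'));
app.listen(3000);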
That being said, stripping the headers from the responses to outgoing connections is overkill. It's true that they can constitute a vector, but since it's an outgoing connection, we already "trust" the endpoint. There's no straightforward reason why an attacker with control over the trusted endpoint would use the metadata instead of the payload.
I hope this helps!
I am architecting a project which uses jQuery to communicate with a single web service hosted inside SharePoint (this point is possibly moot, but I include it for background and to emphasize that session state is not a good option in a multiple-front-end environment).
The web services are ASP.NET ASMX services which return a JSON model, which is then mapped to the UI using Knockout. To save the form, the converse is true: the JSON model is sent back to the service and persisted.
The client has unveiled a requirement for confidentiality of the data being sent to and from the server:
There is data which should only be sent from the client to the server.
There is data which should only appear in specific views (solvable using ViewModels so not too concerned about this one)
The data should be immune from classic playback attacks.
Short of building proprietary security, what is the best way to secure the web service, and are there any other libraries I should be looking at to assist me, either in JavaScript or .NET?
I'll post this as an answer...for the possibility of points :)...but I don't know that you could easily say there is a "right way" to do this.
The communications sent to the service should of course be over HTTPS; that limits man-in-the-middle attacks. I always check that the sending client's IP is the same as the IP address in the host header. That can be spoofed, but it makes things a bit more annoying :). Also, I timestamp all of my JSON on the client before sending, then make sure it's within X seconds on the server. That helps to prevent playback attacks.
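The timestamp check is simple enough to sketch in a few lines of framework-agnostic JavaScript (the stack in this question is ASP.NET, but the logic is identical; sentAt and the 30-second window are arbitrary choices for the example):

// Client side: payload.sentAt = Date.now(); before sending the JSON.
// Server side: reject payloads whose timestamp is stale or from the future.
const MAX_AGE_MS = 30 * 1000; // "within X seconds" - 30s here
const CLOCK_SKEW_MS = 5 * 1000;

function isFresh(payload) {
  if (typeof payload.sentAt !== 'number') return false;
  const age = Date.now() - payload.sentAt;
  return age >= -CLOCK_SKEW_MS && age <= MAX_AGE_MS;
}

// Example:
console.log(isFresh({ sentAt: Date.now() }));          // true
console.log(isFresh({ sentAt: Date.now() - 60000 }));  // false (possible replay)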
Obviously JavaScript is insecure, and you always need to keep that in mind.
Hopefully that gives you a tiny bit of insight. I'm writing a blog post about this pattern I've been using. It could be helpful for you, but it's not done :(. I'll post a link sometime tonight or tomorrow.