XPages cluster and state variables

We are about to set up another server for XPages applications. In front of it there will be a failover/load-balancing component (Microsoft Forefront, IBM Web server) that will redirect HTTP requests to one of the two clustered servers.
I suppose that scoped variables will be reinitialized in case of failover - the user is redirected to the other server, which will initialize the XPage from scratch (GET) or from a subset of the data (POST). Anything bound to beans/scoped variables will be lost (pager state, application-specific data). This can cause odd behaviour for users: loss of entered data or the opening of an unexpected page. I am aware that this depends heavily on application design.
The situation is very similar to an expired session on a single server - how do you prevent loss of data in such a case?
Are there any coding best practices for avoiding the side effects of failing over from one server to another?

While not a coding best practice, you first need to configure your load balancer to keep users on the same server once a session has started (probably using a cookie), so failover only happens when your box really goes down.
Secondly, don't assume scoped variables are there; always test for them - which is good practice anyway, since a session can time out and lose its variables on a single server too.
A POST will fail due to the lack of an x-session, so you might resort to posting only via Ajax, which can have an error handler.
You could consider using cookies to capture state information.
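As a minimal Java sketch of the "always test for scoped variables" advice (the pagerState key and its default values are illustrative, not part of any framework API), a bean method can defensively rebuild missing state instead of assuming it survived a failover or timeout:

```java
import java.util.HashMap;
import java.util.Map;
import javax.faces.context.FacesContext;

public class PagerStateHelper {

    // Defensive read of a scoped variable: never assume it survived a
    // failover or session timeout; rebuild it with defaults when missing.
    @SuppressWarnings("unchecked")
    public static Map<String, Object> getPagerState() {
        Map<String, Object> sessionScope = FacesContext.getCurrentInstance()
                .getExternalContext().getSessionMap();
        Map<String, Object> pagerState =
                (Map<String, Object>) sessionScope.get("pagerState");
        if (pagerState == null) {
            pagerState = new HashMap<String, Object>();
            pagerState.put("page", 1);           // illustrative default
            sessionScope.put("pagerState", pagerState);
        }
        return pagerState;
    }
}
```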

Related

REST API real time Tricky Question - Need Answer

I was recently interviewed by an MNC technical panel and they asked me various questions related to REST APIs. I was able to answer all of them except the two below; I did answer them, but I am not sure whether my answers were correct. Can somebody answer my queries with real-world examples?
1) How can I secure my REST API when somebody sends a request from Postman and provides all the correct information in the headers, like session ID, token, etc.?
My answer was: the user's token sent in the request header should be associated with a successfully authenticated user; only then is access granted, whether the request comes from Postman or from the application calling the API. (The panel said no to my answer.)
2) How can I handle concurrency in a REST API? That is, if multiple users are trying to access the API at the same time (e.g. multiple POST requests coming in to update data in a table), how will you make sure one request is served at a time and the values are updated according to each user's request?
My answer was: in Entity Framework we have a class called DbUpdateConcurrencyException; this class takes care of handling concurrency so that one request is served at a time.
I am not sure about either of the above answers, and I did not find any specific answer by Googling either.
Your expert help is appreciated.
Thanks
1) It is not possible. Requests from Postman or any other client or proxy (Burp, ZAP, etc.) are indistinguishable from browser requests if the user has the appropriate credentials (for example, because they can observe and copy normal requests). It is not possible to authenticate the client application, only the client user.
2) It would be really bad if a web application could only serve one client at a time. Think of large-traffic sites like Facebook. :) In many (maybe most?) stacks, each request gets its own thread (or similar) to run on, and that thread finishes when the request-response cycle ends. These threads are not supposed to communicate directly with each other while running.
Data consistency is a responsibility of the persistence technology, i.e. if you are using a database, it must guarantee that concurrent queries behave as if they were run one after the other. Note that if an application runs multiple queries, database transactions or locks need to be used at the database level to maintain consistency. But this is not at all about client requests; it's about how you use your persistence technology to achieve consistent data. With a traditional RDBMS it's mostly easy; with other persistence technologies (like, for example, using plain-text files for storage) it's much harder, because file operations typically don't support a facility similar to transactions (though they do support locks, which you have to manage manually).
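To make the "transactions or locks on the database level" point concrete, here is a hedged JDBC sketch of optimistic locking: the item table with its version column is hypothetical, but the pattern - letting the database detect that another request committed first - is a standard way to handle concurrent REST updates without serializing all requests:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticUpdate {

    // Each HTTP request runs this independently; the database, not the web
    // framework, arbitrates between concurrent writers. If another request
    // bumped the version first, zero rows match and the caller can retry or
    // return HTTP 409 Conflict.
    public static boolean updateItem(String jdbcUrl, long id,
                                     long expectedVersion, String newValue)
            throws SQLException {
        try (Connection con = DriverManager.getConnection(jdbcUrl)) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE item SET value = ?, version = version + 1 "
                            + "WHERE id = ? AND version = ?")) {
                ps.setString(1, newValue);
                ps.setLong(2, id);
                ps.setLong(3, expectedVersion);
                int rows = ps.executeUpdate();
                con.commit();
                return rows == 1;   // false = concurrent modification detected
            } catch (SQLException e) {
                con.rollback();
                throw e;
            }
        }
    }
}
```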

Is authentication with a client-side rendered app and sessions possible?

No matter how I reason about it, it seems as if there is no secure way of implementing a client-side rendered single-page application that uses session information for authentication (e.g. via cookies) without a severe compromise in security. I was mainly looking at building a React app, but it seems as if I will need to build it with SSR to get a relatively secure version of authentication.
The use case I'm especially thinking of is one where the user logs in or registers and then gets a cookie with the session ID. From there, in a server-side implementation, I can simply set up conditional rendering depending on whether the server-stored session has an associated user ID or not, and then pull the user information from there and display it.
However, I can't think of a client-side rendered solution where the user can use the session ID from the cookie alone that isn't easily spoofable. Some of the insecure implementations would include using browser storage (local/session). Thanks.
I think the major issue here is that you are mixing the two parts of a web page (at least according to what HTML set out to achieve) and treating them both as sensitive information.
You have two major parts in a web page - the first being the display format and the second being the data. The presumption in client-side rendering / single-page applications is that the format itself is not sensitive, and only the data needs to be protected.
If that's the case, you should treat your client-side redirect-to-login behavior as a quality-of-life feature. The data endpoints on your server would still be protected - meaning that in theory an unauthenticated user could muck about in the static HTML he is being served and extract page layouts and templates - but those would be meaningless without the data to fill them, which is the protected part.
In practice, your end product would be a single-page application that makes requests to various API endpoints to fetch data and fill in the requested page templates. You wouldn't even need to go as far as storing complex session state - a simple flag telling the client whether it is authenticated or not would suffice (that is, beyond what you would normally use for server-side authentication, such as cookies or tokens).
Now let's say I'm a malicious user who is up to no good - I could "spoof" - or really just open the browser dev tools and set the isAuthenticated flag to true, letting me skip past the login screen - and now what would I do? I could theoretically navigate to my-service/super-secret without being redirected locally back to the login page on the client side - and then, as soon as the relevant page tries to load its data from the server with the nonexistent credentials, it would fail - best case displaying an error message, worst case with some internal exception and a view showing a broken template.
So just to emphasize, in short:
A. If what you want to protect is your TEMPLATE, then there is no way to achieve this client-side.
B. If what you want to protect is your DATA, then you should treat gating/preventing users from navigating to protected pages as a quality-of-life feature and not a security feature, since the real protection is implemented on the server when serving the data for that specific page.
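As a minimal sketch of what "the real protection is on the server" looks like (the endpoint path, the userId session attribute and the JSON payload are all hypothetical), a plain servlet can gate the data regardless of any flag the client flips in dev tools:

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// The client-side "isAuthenticated" flag only controls rendering;
// the actual gate for the data lives here.
@WebServlet("/api/super-secret-data")
public class SecretDataServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        HttpSession session = req.getSession(false);
        Object userId = (session != null) ? session.getAttribute("userId") : null;
        if (userId == null) {
            // Spoofing the client flag gets you an empty template and a 401 here.
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        resp.setContentType("application/json");
        resp.getWriter().write("{\"data\":\"only visible to user " + userId + "\"}");
    }
}
```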

How to duplicate Web traffic to a to-be-production application, taking care of CSRF tokens, JSF view states and differing session cookies?

I want to replicate web traffic from the production server to another instance of the application (a pre-production environment), so that I can verify that any improvements (e.g. performance-wise) that were introduced (and tested, of course) remain improvements under a production-like load.
(Obviously, something that is clearly a performance improvement during tests might well turn out not to be so in production. For example when trading time vs. memory usage.)
There are tools like
teeproxy (https://serverfault.com/questions/729368/replicating-web-application-traffic-to-another-instance-for-testing, https://github.com/chrislusf/teeproxy/blob/master/teeproxy.go)
duplicator (Duplicate TCP traffic with a proxy, https://github.com/agnoster/duplicator)
but they don't seem to address the fact that the duplicated web traffic will get
different session cookies
different CSRF tokens (in my case this is covered by JSF view state IDs)
Is there a tool that could do that, automatically?
I had a similar concern and opened an issue here.
Also, since it is an open project, I wanted to see if I could inject my own logic.
I am stuck there: Adding headers to diffy proxy before multicasting
I don't have enough reputation to comment, so I added an answer.
Twitter's Diffy is specialized in duplicating HTTP traffic.
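To illustrate what "injecting my own logic" could look like, here is a hedged Java sketch of that missing piece (the preprod.example.com host, the JSESSIONID cookie name and the session mapping are assumptions): the duplicating proxy keeps its own mapping from production session IDs to sessions it has opened against the pre-production instance and swaps the cookie before forwarding. CSRF tokens / JSF view states would need a similar rewrite based on the pre-production responses.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SessionRewritingTee {

    private final HttpClient client = HttpClient.newHttpClient();

    // Filled elsewhere, e.g. from the Set-Cookie header of the duplicated
    // login request against the pre-production instance.
    private final Map<String, String> prodToPreprodSession = new ConcurrentHashMap<>();

    public void duplicate(String method, String path, String body, String prodSessionId) {
        String preprodSession = prodToPreprodSession.getOrDefault(prodSessionId, "");

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://preprod.example.com" + path))
                .header("Cookie", "JSESSIONID=" + preprodSession)
                .method(method, body == null
                        ? HttpRequest.BodyPublishers.noBody()
                        : HttpRequest.BodyPublishers.ofString(body))
                .build();

        // Fire and forget; the duplicate must never slow down production traffic.
        client.sendAsync(request, HttpResponse.BodyHandlers.discarding());
    }
}
```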

Are Vaadin or RAP/RWT prone to denial-of-service attacks?

Vaadin and Eclipse RAP/RWT are web application frameworks with - as far as I understand - similar architectures. My question is whether an application built with Vaadin or RAP is prone to denial-of-service attacks. I am currently evaluating the frameworks and want to be sure that this is not a concern.
Here is my reasoning:
With Vaadin or RAP you don't build HTML output but instead construct a widget tree, similar to a Swing/SWT application (in the case of RAP it is SWT). The framework renders the widget tree as HTML in the browser and sends user interactions back to the server, where the application gets notified in terms of events. The events are delivered to listener objects, which have previously been registered on widgets.
Therefore the widget tree must be kept in some sort of user session and of course consumes some memory.
If the application is public (i.e. does not sit behind a login page), then there seems to be a possible denial-of-service attack against such an application:
Just fire requests at the app's landing page, and probably fake a first response. For each such request, the framework will build a new widget tree, which will live for some time on the server, until the session expires. Therefore the server memory should fill up with dangling user sessions very soon.
Or did these frameworks invent protection against this scenario?
Thanks.
A framework alone cannot protect you from DoS attacks.
Vaadin has some features built in to prevent attacks, but of course these features depend on how you code your application.
There is a long webinar about Vaadin security:
https://vaadin.com/de/blog/-/blogs/vaadin-application-security-webinar
Vaadin does some validation of the client<->server traffic to prevent XSS and other attacks.
But if you do some special things, you can open doors for such attacks.
As for the scenario you described:
The initial Vaadin session takes some memory on the server (just like any other session on any server).
How large this memory footprint is depends on the initial number of widgets and what you load into memory for them (DB connections, etc.).
Usually this is not a problem when you have a very lightweight login page.
But if you display large tables and many other fancy things, then you will have to have enough memory for the number of requests. (The same applies to all other HTTP servers/applications; they also need memory for this.)
If the number of requests surpasses the capacity of your server, any web service can be brought down by a DoS attack.
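One common mitigation, independent of the framework, is to keep anonymous sessions as cheap and short-lived as possible. The sketch below uses the plain servlet API; the two-minute timeout and the 10,000-session alert threshold are illustrative assumptions, not Vaadin or RAP defaults:

```java
import java.util.concurrent.atomic.AtomicInteger;
import javax.servlet.annotation.WebListener;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

// Expire unauthenticated sessions quickly and keep an eye on the total
// session count so a flood of first requests is at least visible.
@WebListener
public class AnonymousSessionGuard implements HttpSessionListener {

    private static final AtomicInteger LIVE_SESSIONS = new AtomicInteger();

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        int count = LIVE_SESSIONS.incrementAndGet();
        // New sessions start unauthenticated: give them a short timeout and
        // raise it again after a successful login.
        se.getSession().setMaxInactiveInterval(120);
        if (count > 10_000) {
            // Hook for alerting or shedding load (e.g. at the load balancer).
            System.err.println("Live session count is high: " + count);
        }
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent se) {
        LIVE_SESSIONS.decrementAndGet();
    }
}
```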
EDIT:
Due to the good points in the answer by Andrés, I want to sharpen my question.
First of all, of course I agree: if you put an application behind a login wall, then the DoS threat is not that big. At least you can identify the attacking user, and the login page itself can be lightweight or need not even be implemented with Vaadin/RAP.
And as Vaadin/RAP applications are most likely used in RIA-style intranet settings, the DoS scenario does not invalidate their use in those settings.
But both frameworks themselves expose demo pages without logins on the internet:
see http://www.eclipse.org/rap/demos/ and http://demo.vaadin.com/dashboard/
These are not simple pages, and they probably use quite a bit of memory.
My concern is about exactly such a scenario, a non-access-restricted internet page: once these frameworks have responded to a request, they must keep server-side memory for that request for quite some time (say the classical 30 minutes of an HTTP session - at least on a scale of minutes).
Or to express it differently: if the application retains memory per initial user request for some substantial time, it will be prone to DoS attacks.
Contrast this with an old-style round-trip web application which does not require user identification. All the information needed to decide what to return is contained in the request (path, parameters, HTTP method, ...), so a stateless server is possible.
If the user interacts with such an application, the application is still free to store session-persistent data on the client (shopping cart contents in a cookie, etc.).

How to defend against excessive login requests?

Our team has built a web application using Ruby on Rails. It currently doesn't restrict users from making excessive login requests. We want to ignore a user's login requests for a while after she has made several failed attempts, mainly to defend against automated robots.
Here are my questions:
How do I write a program or script that can make excessive requests to our website? I need it because it will help me test our web application.
How do I restrict a user who has made several unsuccessful login attempts within a period? Does Ruby on Rails have built-in solutions for identifying a requester and tracking whether she has made any recent requests? If not, is there a general way (not specific to Ruby on Rails) to identify a requester and keep track of her activities? Can I identify a user by IP address, cookies, or some other information I can gather from her machine? We also hope that we can distinguish normal users (who make infrequent requests) from automated robots (which make frequent requests).
Thanks!
One trick I've seen is including form fields on the login form that are made invisible to the user through CSS.
Automated systems/bots will still see these fields and may attempt to fill them with data. If you see any data in such a field, you immediately know it's not a legitimate user and can ignore the request.
This is not a complete security solution, but it is one trick you can add to the arsenal.
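The honeypot check itself is only a few lines on the server. Here it is sketched in Java rather than Rails purely for illustration; the hidden field name "website" is hypothetical:

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// "website" is a field hidden via CSS, so a human never fills it;
// bots that blindly populate every input usually do.
public class LoginServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String honeypot = req.getParameter("website");
        if (honeypot != null && !honeypot.isEmpty()) {
            // Reject quietly; don't tell the bot why it failed.
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        // ... normal credential check goes here ...
    }
}
```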
In regards to #1, there are many automation tools out there that can simulate large-volume posting to a given URL. Depending on your platform, something as simple as wget might suffice, or something as complex (relatively speaking) as a script that asks a UserAgent to post a given request multiple times in succession (again, depending on the platform this can be simple, and it also depends on your language of choice for task 1).
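For a concrete starting point, here is a hedged sketch of such a script in plain Java (the staging.example.com URL and the form field names are assumptions - point this at a test environment, never at production):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Fires a burst of failed login attempts so you can observe how the
// application behaves under excessive requests.
public class LoginFlood {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String form = "login=test%40example.com&password=wrong-password";

        for (int i = 0; i < 50; i++) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://staging.example.com/login"))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(form))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Attempt " + i + " -> HTTP " + response.statusCode());
        }
    }
}
```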
In regards to #2, consider first the lesser issue of someone just firing multiple attempts manually. Such instances usually share a session (that being the actual web server session); you should be able to track failed logins based on these session IDs and force an early failure if the volume of failed attempts breaks some threshold. I don't know of any plugins or gems that do this specifically, but even if there isn't one, it should be simple enough to create a solution.
If the session ID does not work, then a combination of IP and UserAgent is also a pretty safe means, although individuals who use a proxy may find themselves blocked unfairly by such a practice (whether that is an issue or not depends largely on your business needs).
If the attacker is malicious, you may need to look at using firewall rules to block their access, as they are likely going to: a) use a proxy (so IP rotation occurs), b) not use cookies during probing, and c) not play nicely with UserAgent strings.
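The tracking itself is little more than a counter with an expiry window. Below is a framework-agnostic sketch in Java (thresholds are illustrative; an in-memory map only works on a single server - when load-balanced, keep the counts somewhere shared, such as the database or a cache):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Count failed logins per key (session ID, or IP + UserAgent) and refuse
// further attempts for a cool-down period once a threshold is hit.
public class FailedLoginTracker {

    private static final int MAX_FAILURES = 5;
    private static final Duration WINDOW = Duration.ofMinutes(15);

    private static final class Entry {
        int failures;
        Instant firstFailure = Instant.now();
    }

    private final Map<String, Entry> attempts = new ConcurrentHashMap<>();

    public boolean isBlocked(String key) {
        Entry e = attempts.get(key);
        if (e == null) {
            return false;
        }
        if (Instant.now().isAfter(e.firstFailure.plus(WINDOW))) {
            attempts.remove(key);   // window expired, forget old failures
            return false;
        }
        return e.failures >= MAX_FAILURES;
    }

    public void recordFailure(String key) {
        attempts.compute(key, (k, e) -> {
            if (e == null || Instant.now().isAfter(e.firstFailure.plus(WINDOW))) {
                e = new Entry();
            }
            e.failures++;
            return e;
        });
    }
}
```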
RoR provides means for testing your applications, as described in A Guide to Testing Rails Applications. A simple solution is to write a test containing a loop that sends 10 (or whatever value you define as excessive) login requests. The framework provides means for sending HTTP requests or faking them.
Not many people will abuse your login system, so just remembering the IP addresses of failed logins (for an hour, or whatever period you think is sufficient) would be sufficient and not too much data to store. Unless some hacker has access to a great many IP addresses... but in such situations you'd need more serious security measures, I guess.
