Plone session security findings - any ideas? [closed]

From a recent audit of our Plone 4.01 instance:
Where possible encrypt the username stored in the session cookie, or use the one-time session ID to identify the user (at the server end) instead of the username.
The usernames were stored in the applications session cookie using base64 encoding, which can be easily decoded.
Invalidate the session ID immediately when the user logs out.
When logging out of the Plone application, the application deletes the cookie but does not invalidate the user session ID. It was also noted that authenticating with the application a second time did not invalidate previous session cookies.
We would prefer not to add another product to the stack to resolve these findings if possible.
Btw, we do have Beaker installed; it is used for public accounts in the ecommerce area of the site, whereas the content admins/authors use the standard Plone login/security mechanism, which is what is drawing the audit findings. Perhaps Beaker could be reused for the content authors as well? Not sure if this is a good idea though...
Btw we are also updating to Plone 4.2 soonish.

If you are concerned with cookie security, you should always use SSL encryption. The same username is included in the page output, for example, so the fact that it's included in the cookie as well isn't an information leak as such.
The cookie uses a cryptographic hash that has a limited timespan, the default is 12 hours, after which the cookie will no longer be accepted.
You can lower this timeout:
Go to the ZMI of your Plone instance.
Find the acl_users folder, then the session plugin.
Go to the Properties tab (right-most tab).
Change the "Cookie validity timeout (in seconds)" property to a new value.
Take note of the "Refresh interval (in seconds, -1 to disable refresh)" value below it though; whenever the signed cookie is older than the refresh interval, a new cookie is generated, to refresh the cookie lifetime. So, by default, once every hour, you are issued a new cookie that is valid for 12 hours.
You don't want your cookie validity timeout to fall below the refresh interval. If you set these values very low, you may want to think about using a periodic AJAX 'ping' request to keep the cookie fresh while the user is still using the site.
In fact, plone.session already includes a facility to implement this ping for you. Simply enable it by installing the "Session refresh support" add-on in the control panel ("Site setup" > "Add-ons" > check "Session refresh support 3.5", click "Activate"). This will install the javascript library for you, and it'll ping the server every 5 minutes, provided there has been some mouse or keyboard activity while the current page is loaded.
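If you do roll the timeouts down and want to see what the keep-alive amounts to, here is a rough sketch of the same ping logic; the "/@@refresh" endpoint name, the 5-minute interval, and the choice of activity events are assumptions (the bundled add-on registers its own view and intervals):

```javascript
// Sketch of a session keep-alive ping, similar to what the
// "Session refresh support" add-on provides. Endpoint name,
// interval, and activity events are assumptions.
const PING_INTERVAL_MS = 5 * 60 * 1000;
let lastActivity = Date.now();

// Only refresh while the user is actually active, so an abandoned
// tab still lets the cookie expire.
function shouldPing(now, lastActivityAt, interval) {
  return now - lastActivityAt < interval;
}

function trackActivity() {
  lastActivity = Date.now();
}

// Browser wiring; skipped when run outside a browser.
if (typeof window !== "undefined") {
  ["mousemove", "keydown"].forEach((evt) =>
    window.addEventListener(evt, trackActivity)
  );
  setInterval(() => {
    if (shouldPing(Date.now(), lastActivity, PING_INTERVAL_MS)) {
      fetch("/@@refresh", { credentials: "same-origin" });
    }
  }, PING_INTERVAL_MS);
}
```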

Related

Handling expiration & validation of 2FA codes [closed]

I am currently planning a 2FA implementation that requires users to provide a code via SMS for some actions, like a login. I will also support tools like Google Authenticator, but I do not want to force users to download the app; that's why I need to send the codes via SMS (or potentially email) as well.
My plan so far is:
User wants to login and requests a code
Backend generates a numeric code, stores it hashed in DB and returns the ID (or selector) of the database entry to the frontend
Frontend displays a code-input field next to the user's email & password
Code is sent to the user via SMS / email
The user now has 5 minutes to send the selector + code + email + password to the backend, where all of those get validated
Two questions about this:
1) Handling expiration of code
My first idea was to store the code hashed like a password in the database, but then I would have to implement the 5-minute expiration myself. Of course I could add another column with a timestamp to check the expiration, but I would rather go with something more secure.
Now I am thinking about storing the code inside the claims object of a JSON web token in the database and setting the expiration of this token to 5 minutes. After the 5 minutes are over, parsing the web token to compare it with the code the user has sent fails. This would also allow me, in an attack scenario, to just change the secret of the web tokens so that all existing codes become invalid instantly.
Is this a good approach? Or do you guys see any problems in this, or are there maybe better ways of handling it? Or is there maybe a library for hashing passwords with an expiration date as well?
2) Validation & handling brute force attacks
As I only want to send a 6 or at most 8 digit numeric code to the user, I will have to implement some sort of protection against brute force attacks (let's assume that an attacker knows the email & password of the user).
What I want to do:
If an invalid code was sent, increment the failed-tries counter on that specific code's DB entry
If the code exceeds 3 failed attempts, invalidate the code in the database and ask the user to request a new code
When a user requests a new code, make him wait 1 minute before he can request it; store the date of the last failed attempt as a timestamp in the user's DB entry, along with the 1-minute delay
If the third code fails, store the new timestamp and double the delay to 2 minutes
... and so on. After 3 failed codes a JS Challenge (Google Recaptcha) will be required as well.
After 5 retries I would lock the account and wait for the user to contact us.
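A rough sketch of that counting-and-backoff bookkeeping, assuming an in-memory per-user record (a real implementation would persist this in the users table); here the delay doubles with each burned code and the account locks after five:

```javascript
// Escalating-delay and lockout rules for code requests.
// In-memory sketch; thresholds mirror the plan above.
const users = new Map(); // userId -> { burnedCodes, nextRequestAt, locked }

function getUser(id) {
  if (!users.has(id)) {
    users.set(id, { burnedCodes: 0, nextRequestAt: 0, locked: false });
  }
  return users.get(id);
}

// Call this when a code is invalidated after 3 wrong guesses.
function onCodeBurned(id, now) {
  const u = getUser(id);
  u.burnedCodes += 1;
  if (u.burnedCodes >= 5) {
    u.locked = true; // wait for the user to contact support
    return;
  }
  // 1 minute after the first burned code, doubling: 1, 2, 4, 8 minutes.
  u.nextRequestAt = now + 60_000 * 2 ** (u.burnedCodes - 1);
}

function canRequestCode(id, now) {
  const u = getUser(id);
  return !u.locked && now >= u.nextRequestAt;
}
```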
Is this a secure approach to handling the validation of the codes?
I think you're overdoing the security of your six-digit verification codes by using JWTs.
No matter how you manage them, you must invalidate them when they expire or when they're used. A good way to do that is to give each code a row in a table including the expiration timestamp. Then DELETE the row for the code when the user presents it. Whenever you look up those codes add WHERE expires > NOW() to the query. And routinely DELETE expired rows.
Resisting brute force attacks is straightforward. By the time you're ready to send your user a code, you have already validated their password, so you know who they claim to be. So just keep track of that user's attempts to guess the code. As you suggested, give them three tries, then make them request another code. If they request more than five codes in a calendar day, lock them out until the next calendar day.
This scheme, by the way, is useful for generating all kinds of nonces. (Numbers used once.) Nonces come in handy for many purposes like password resets by email.

logout from CAS doesn't logout from bonita [closed]

I have a problem with Bonita that I've tried to work around without success. I am authenticating with CAS into Bonita, Alfresco and Liferay; the first time I authenticate with CAS everything works fine, but when I log out from Liferay (which should then log out automatically from CAS), the current Bonita session is not terminated. The next time I log in with CAS, the Liferay and Alfresco sessions are correct (they belong to the new user), but the Bonita session doesn't change (the old user is still connected). Has anyone encountered this issue, and if so, what could be a possible correction for it?
Any insight regarding the matter would be very appreciated, thanks!
I managed to do this after a couple of difficult attempts to understand what the problem was. Apparently, there was a problem with the session cookie created by Bonita: the JSESSIONID cookie with the path "/bonita" was not destroyed when CAS destroyed its session, and somehow its presence stopped it from being recreated. I changed the Bonita cookie name to something other than JSESSIONID, because there were other cookies with that name in the browser, and I changed the path of the cookie from "/bonita" to "/" in Bonita's context.xml file. Then I added JavaScript code to delete this cookie each time the Liferay theme was reloaded (on page refresh), thus ensuring that the old cookie was destroyed. From then on, whenever the Bonita URL is visited, the cookie is recreated from the new CAS session and everything seems to work fine. A better approach would be to destroy the cookie in the CAS logout JSP page, but I didn't manage to do it that way.

SSL for logged in users? [closed]

For a website which offers free content and paid content (when the user is logged in), should it operate over SSL (i.e. https)?
More to the point: on pages other than the registration page, there is sometimes third-party content like Facebook Like buttons, banners, etc. When I view those pages in various browsers, I get warnings that the page is not completely secure, since some of the content is unprotected.
Is there a standard for this, and reasons why?
I've noticed, for example, that gmail keeps it over https while facebook opts for http...
Regarding the first question: you should consider using HTTPS for your website for user authentication and their usage once authenticated, because most authentication methods (typically cookie-based, session-ID in the URL or HTTP Basic) would transmit the authentication token (e.g. the cookie) in clear otherwise. As such, an eavesdropper could impersonate the authenticated user by re-using the session-ID/cookie for themselves. This sort of attack has been around for a long time, but tools like Firesheep, in conjunction with the use of unprotected (possibly public) WiFi networks makes this quite practical unfortunately.
Regarding the second question: you get those warnings for mixed content, i.e. pages served over HTTPS that embed content from HTTP sites. If you're using secure cookies, your authentication token (in the cookie) shouldn't leak to the non-protected content embedded on the page... However, it's impossible for the user to know that. Teaching users to ignore warnings is generally bad practice.
If it's your content, turn on HTTPS. If it's someone else's and they don't have HTTPS access, it's a bit trickier. One solution may be to relay their content through your website (but you would need to rewrite their links, etc.).
As always, it's a matter of risk assessment. You can actually use Facebook over HTTPS (by typing https:// explicitly). Since posting on Facebook can have you sent to prison, you wouldn't want anyone impersonating you.
Some sites don't enable HTTPS because it's thought to be expensive, which isn't necessarily true. (Compatibility of Windows XP with Server Name Indication, needed for HTTPS on shared hosting, is also an issue.)
SSL is computationally expensive; serving pages over SSL will make your site a little slower (especially with lots of traffic). (You can replace the computational expense with financial expense by buying separate SSL hardware.)
Gmail serves everything under SSL because they believe that all of your email content is confidential.
Basically, they think that email needs to be more secure than Facebook.

How important is it to use SSL on every page of your website? [closed]

Recently I installed a certificate on the website I'm working on. I've made as much of the site as possible work with HTTP, but after you log in, it has to remain in HTTPS to prevent session hi-jacking, doesn't it?
Unfortunately, this causes some problems with Google Maps; I get warnings in IE saying "this page contains insecure content". I don't think we can afford Google Maps Premier right now to get their secure service.
It's sort of an auction site so it's fairly important that people don't get charged for things they didn't purchase because some hacker got into their account. All payments are done through PayPal though, so I'm not saving any sort of credit card info, but I am keeping personal contact information. Fraudulent charges could be reversed fairly easily if it ever came to that.
What do you guys suggest I do? Should I take the bulk of the site off HTTPS and just secure certain pages like where ever you enter your password, and that's it? That's what our competition seems to do.
Here's the issue, and why banks are still horribly vulnerable: their landing page is HTTP, so it can be man-in-the-middled. Then they have a link to the login, and the login page is HTTPS.
So if you go directly to the login page, you can't be Man-in-the-Middled. But if you go to the homepage/landing page, since I control that, I'm going to rewrite the login page link to be HTTP. Then I'll do a SSL handshake with the login page, and send you (the user) the insecure version. So now you're (the user) doing all your sensitive transactions - and the server thinks it's HTTPS - and I'm in the middle doing shenanigans.
This is a very hard problem to solve completely because it goes all the way down to the DNS level on the server-side, and all the way down to default actions in browsers on the client-side.
As a content provider, you could try putting in javascript to check that the secure areas of your site are being accessed securely (and hope that I, as a cracker, don't remove that js before forwarding it). You can also include your happy "Please make sure this site is accessed via https" banners.
As a user, NoScript has an option to make sure sites are in HTTPS.
There's a new technology (I believe it's a marker on DNS entries, maybe?) not supported by all clients/servers that lets a server opt in and say it is only accessible via HTTPS, and to die a fiery death if it's being MITM-ed. I can't for the life of me recall it or find it on Google, though...
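The technology being described is HTTP Strict Transport Security (HSTS, RFC 6797); it's a response header rather than a DNS marker. A minimal sketch of the server side, where the `res` object follows the `setHeader` shape of Node's http module:

```javascript
// Opting a host in to HTTPS-only via the HSTS header (RFC 6797).
function addHsts(res) {
  // Browsers that have seen this header refuse plain HTTP for this
  // host (for a year, below) and hard-fail on certificate errors
  // instead of offering a click-through.
  res.setHeader(
    "Strict-Transport-Security",
    "max-age=31536000; includeSubDomains"
  );
}
```

The header is only honoured when received over HTTPS, so the very first visit is still exposed; browser preload lists exist to cover that case.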
I would take the bulk of the site off HTTPS with some exceptions of course:
Any checkout or account editing screens.
Any screens that would display "sensitive" information.
To deal with the session hijacking issue, I would add another layer of authentication where you prompt them for their username and password again at checkout or whenever they try to view/update account information - basically, whenever you make a transition from HTTP to HTTPS.
Yes, I would just use SSL to secure important elements such as input fields, passwords, etc. I believe that's what most sites do, including online banking sites.

Limit client to visit a website with 1 tab and 1 browser? [closed]

I would like to do something like play.clubpenguin.com.
What it does is this: when you visit the site, say in Firefox or IE, and then open a new tab or use another browser to visit the site again, it shows something like: "Please close the other browser that opened this page".
How can this be done? (It's the client's request.)
More information: The site is Flash site
EDIT:
OK, I think this is a tough question. In general, can this be done using PHP, MySQL and JS?
Each time you serve the flash page to the user, generate a random token. Embed this token somewhere in the page, for example as a flashVar. Also, store the most recently generated token in the user's session.
Whenever the flash posts something back to the server, post the token as well. If the token does not match the token stored in the session, reject the post.
That way, only the most recently generated version of the page will have the ability to communicate with the server and if the user opens multiple versions of the page only the most recent will work.
This technique should work even if the user opens extra browsers on other machines. It doesn't use IP addresses to establish identity. And there is no chance that a user will somehow be 'locked out' permanently because every time they open the page again you reset the stored token.
It's a similar idea to the way some frameworks insert a validation token into forms to prevent Cross-site Request Forgery attacks.
Try using the code below:

window.onload = function () {
    if (document.cookie.indexOf("_instance=true") === -1) {
        document.cookie = "_instance=true";
        // Set the onunload function
        window.onunload = function () {
            document.cookie = "_instance=true;expires=Thu, 01-Jan-1970 00:00:01 GMT";
        };
        // Load the application
    } else {
        // Notify the user
    }
};
This code restricts the user to one open browser tab at a time; on a refresh of the current tab it will not show any alert, but pasting the same URL into a new tab will not be allowed.
If you want to forbid access with 2 different logins, you can enforce a rule that locks on a given resource.
The client IP could be one such lockable resource: only one session allowed for a given IP address. That would reduce the cheating to people who have multiple public IP addresses. People who share a public IP through a proxy would have problems.
I don't see what other lockable resource you could use easily.
A few options spring to mind...
When they first open the site, you'd need to store the user's current state in a cookie or similar, which you'd check every time the site is opened. If the state is Active, it means they have another window open. The problem is ensuring that the state is cleared when they leave the original site window - you'd need to listen for the window.onunload event and clear the state at that point - but you can't 100% guarantee this will happen.
Otherwise, you could place a script on the site which pings a server script every n seconds, notifying the server there is a window open for that client, and prevent new windows being opened until there is a lapse in pings.
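The server-side bookkeeping for that ping approach can be sketched as follows; the timeout (three missed 5-second pings) is an assumption, and the HTTP endpoint that would call recordPing() is omitted:

```javascript
// Track the last ping per client; a new window may open only once
// pings from the old one have lapsed. In-memory sketch.
const PING_TIMEOUT_MS = 15_000;
const lastPing = new Map(); // clientId -> timestamp of last ping

function recordPing(clientId, now) {
  lastPing.set(clientId, now);
}

function mayOpenNewWindow(clientId, now) {
  const last = lastPing.get(clientId);
  return last === undefined || now - last > PING_TIMEOUT_MS;
}
```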
To get more complex, you could maintain a persistent connection between the server and client (via sockets or similar), which would keep note of the same. Fewer calls from the client, but a bit more complex to set up. See http://www.kirupa.com/developer/flash8/php5sockets_flash8.htm for basic info on Flash + sockets.
Given you're working with Flash, you could look into Local Shared Objects (flash cookies) to store the state. Still possible to miss the unload event, but at least the cookie is persisted across all browser sessions and browser types.
Option 3 is the best IMHO.
Solution:
Respond to your client's request with a NO, because you are the web-design guru who knows what's best for the client's visitors. The website doesn't have to appeal to your client but to the client's visitors, and when they are limited this severely on their own computers they are anything but satisfied.
