I would like to cache my website with memcache as much as possible. Modifications are rare (somewhat like in a forum), and I am perfectly OK with re-caching once a change is made. My only concern is login information (similar to how Stack Overflow has a bar on top). This is how I am doing it right now:
$('div#user_bar').load('/login-info/');
(jQuery on a fully cached page loads up userinfo)
However, I think I can do without dynamic pages completely. My idea is this:
On login: create cookie `logged_in`:true
On each page: if JS finds the cookie is set, show links to logout, settings, etc.
If not: show a link to the login page
On logoff: delete cookie
No actual userinfo is stored in cookies, not even username.
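Roughly, the per-page script would be as dumb as this sketch (the link URLs shown are just placeholders):
// Runs on every fully cached page: decide what to show from the cookie alone.
$(function () {
    var loggedIn = document.cookie.indexOf('logged_in=true') !== -1;
    if (loggedIn) {
        $('#user_bar').html('<a href="/settings/">Settings</a> <a href="/logout/">Log out</a>');
    } else {
        $('#user_bar').html('<a href="/login/">Log in</a>');
    }
});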
How secure, reasonable, sane is this? Any ideas? Am I missing something? Thank you.
Disclaimer: This is more of an exercise than a production environment. But I am trying to keep security and performance in mind nonetheless.
About your main target: Caching dynamic pages is reasonable. If you work on the ASP.NET platform, you might want to have a look at the output cache feature, which does exactly this, even including dynamic substitutions. 4 Guys from Rolla have a nice starter article with links to all the details.
Regarding the non-user-specific pages: I doubt that this can work for anything but the simplest pages. Web applications usually allow different operations for different users, even if it's only changing your password. You probably have to pass specialized content to the client at some point, and that's where the dynamic substitutions of the ASP.NET output cache come into play.
Related
I'm working on a little Node.js project, and after a lot of googling I got a bit confused; maybe some of you can point me towards the road again.
Several websites are generated by DocPad (excellent piece of software), and hosted on different domains.
All these websites shall now get a "login module" (also written in Node.js, using Passport). Visually, it will look similar to the excellent login slider from Web-Kreation (here is a demo). My plan was to use nginx and route all /login requests to the login-app, which is working fine.
The problem is more about the multiple domains and the client-side implementation of it all. All logins use the same database.
Can I somehow use both together, and create the session-cookies from the Login-Module (which could use the same domain all the time)?
I'm answering my own question for reference, in case someone else comes across the same problem.
In the end, I solved my problem with a somewhat different setup. Instead of a module running under the DNS of each site, I use a central login application for all sites. The sites themselves do not need to access any personal information, so that's not a problem.
DocPad is still being used to generate the different websites statically (it works excellently - I know I say this very often, but if there's a brilliant piece of software out there, there's no reason not to mention it once in a while), and all static content is delivered to the user via a CDN.
The login system is a Node.js application using Redis as its only database. It runs at login.example.com and is integrated via a simple iframe on all pages rendered by DocPad.
After a successful login in the 'login-app', you can create an encrypted string with info about the current user. You can pass this string back as a GET/POST parameter in a redirect to the target domain. The encryption key is known only to the 'login-app' and your websites, so you can trust this encrypted data. It is necessary to make sure the string is different every time, even for the same user; for example, you can include the login time or a random value. After decrypting the data, you can set an authorization cookie for that particular domain.
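A rough sketch of that flow in Node.js (the shared-key handling, function names, and token format here are my assumptions, not part of the answer; a recent Node.js is needed for the base64url encoding):
// Key shared between the login-app and the sites; in practice load it from configuration.
const crypto = require('crypto');
const SHARED_KEY = crypto.scryptSync('secret known only to login-app and sites', 'some-salt', 32);

// login-app side: encrypt user info plus the login time, then pass it along with the redirect.
function buildLoginToken(user) {
    const iv = crypto.randomBytes(16); // fresh IV => different ciphertext for every login
    const cipher = crypto.createCipheriv('aes-256-gcm', SHARED_KEY, iv);
    const payload = JSON.stringify({ id: user.id, ts: Date.now() });
    const encrypted = Buffer.concat([cipher.update(payload, 'utf8'), cipher.final()]);
    return [iv, cipher.getAuthTag(), encrypted].map(b => b.toString('base64url')).join('.');
}

// website side: decrypt the token from the GET/POST parameter, then set its own auth cookie.
function readLoginToken(token) {
    const [iv, tag, data] = token.split('.').map(part => Buffer.from(part, 'base64url'));
    const decipher = crypto.createDecipheriv('aes-256-gcm', SHARED_KEY, iv);
    decipher.setAuthTag(tag);
    const payload = Buffer.concat([decipher.update(data), decipher.final()]); // throws if tampered with
    return JSON.parse(payload.toString('utf8'));
}
Using an authenticated mode (GCM here) also lets the receiving site reject tokens that were modified in transit.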
I have a page that contains sensitive information that I would like to require reauthentication in order to load. I am using Classic authentication mode, not forms.
The first method I looked at was the PrincipalContext.ValidateCredentials method, but that would require sending login details in plain text (I think).
I have thought about using JavaScript to turn off cookies so they would have to log back in, but I haven't thought of a good way of doing this.
Has anyone done this before with SharePoint?
What I ended up with:
A web part on the page with the sensitive material forces an HTTP 401 and then redirects to another page.
This other page has a second web part, which then redirects back to the original page after setting a session variable.
You could use something along the lines of this if you're using IE6/8, but other browsers may have issues with it (look into HTTP keep-alives).
<script type='text/javascript'>
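// IE-specific command: flushes the cached credentials so the next request forces re-authentication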
document.execCommand("ClearAuthenticationCache");
</script>
That said, forcibly clearing someone's authentication doesn't seem like the friendliest UI option. I suspect a better option would depend on the audience and whether they are on a trusted domain or coming from an external source. If they are on the trusted domain and don't normally log in anyway, this approach likely won't please them much.
I'm implementing a voting system like Stackoverflow's. How can I implement this so it is hack proof?
I've got some PHP that does database work according to the ajax request sent after the javascript parses it. Would doing a query to check the current vote state of a user be enough to avoid unauthorised votes?
It is definitely possible to implement a pretty reliable solution. But this must be done server-side.
Basic rule of security: you don't trust client data.
Move all your checks to PHP and make your JavaScript as dumb as:
$(".vote").click(function(e) {
$.post('/vote.php', vote_data, function(result) {
// update UI according to returned result
}
}
It's common, however, to still do checks on the client, but only as a way to improve usability (marking required form fields that weren't filled in) or to reduce server load (by not sending obviously incomplete data). These client-side checks are for the user's comfort, not for your security.
Answering to your updated question:
If you store a full log of when which user voted for which question, then yes, it's pretty easy to prevent multiple voting (where a user votes for the same thing several times). Assuming, of course, that anonymous votes are not allowed.
But if you have a popular site, this log can get pretty big and become a problem. Some systems try to get away with disabling voting on old articles (and removing the corresponding log entries).
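The question uses PHP, but the idea is language-agnostic; here is a minimal sketch in JavaScript, with an in-memory map standing in for the vote-log table (a real site would instead use a table with a UNIQUE (user_id, post_id) constraint):
// Server-side duplicate-vote check against the vote log; nothing here trusts the client.
const votes = new Map(); // "userId:postId" -> +1 / -1

function castVote(userId, postId, value) {
    if (!userId) {
        return { ok: false, reason: 'anonymous votes are not allowed' };
    }
    const key = userId + ':' + postId;
    if (votes.has(key)) {
        return { ok: false, reason: 'already voted' };
    }
    votes.set(key, value);
    return { ok: true };
}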
What if someone intentionally tries to hack me?
There are different types of attacks a malicious user can perform.
CSRF (cross-site request forgery)
The article lists some methods for preventing the attack. Modern Ruby on Rails has built-in protection, enabled by default. I don't know how it is in the PHP world.
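For reference, a framework-agnostic sketch of the classic synchronizer-token defence (the session object and function names here are assumptions, shown in JavaScript although the same applies to PHP):
const crypto = require('crypto');

// Generate a per-session token, store it server-side, and embed it in every form as a hidden field.
function issueCsrfToken(session) {
    session.csrfToken = crypto.randomBytes(32).toString('hex');
    return session.csrfToken;
}

// On every state-changing request, compare the submitted hidden field against the session copy.
function checkCsrfToken(session, submittedToken) {
    if (!session.csrfToken || typeof submittedToken !== 'string') return false;
    const expected = Buffer.from(session.csrfToken);
    const actual = Buffer.from(submittedToken);
    return expected.length === actual.length && crypto.timingSafeEqual(expected, actual);
}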
Clickjacking
This attack tricks users into clicking on something that isn't what they think it is. For example, they may click "Play video", but the site will intercept this click and post on the user's wall instead.
There are some articles on the Web as well.
Wiki on clickjacking
5 ways to prevent clickjacking
Javascript to prevent clickjacking
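The JavaScript approach mentioned above is usually a frame-buster along these lines; the more robust option is the X-Frame-Options response header (or the CSP frame-ancestors directive) set server-side:
// Naive frame-buster: if this page has been framed by another site, break out of the frame.
// It can be defeated (e.g. by sandboxed iframes), so treat it as a fallback, not the main defence.
if (window.top !== window.self) {
    window.top.location = window.self.location;
}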
NOTE: THIS IS AN ANSWER TO THE ORIGINAL QUESTION. Don't downvote it just because the OP radically changed his question.
It's a huge error to even think of relying on browser-side components to enforce application logic. In untrusted environments, JavaScript should be used exclusively for presentation purposes.
All application logic should be implemented, validated and enforced server-side.
So if you have a RIA version (Silverlight or Flash) and a standard HTML version (or AJAX even), should you have the same URL for both, or is it ok to have a different one for the RIA app and just redirect accordingly?
So, for instance, if you have a site (http://example.com), is it OK to have the about page URL for the RIA app be http://example.com/#/about and the HTML one be http://example.com/about? Does it matter?
Of course if you take the route with different URLs you will need to map between them.
The URLs of your pages denote the identity of the content. In my view, if the content is the same but the presentation varies (i.e RIA vs. HTML), then the URL should be the same and you should use some other mechanism to select between the different presentation forms. Choices of other mechanisms include cookies, content negotiation, session identifiers or, if your users are identified, a persistent user preferences model. Even using a URL argument would at least keep the root of the URL consistent (e.g. http://your.si.te/foobar vs. http://your.si.te/foobar?view=plain)
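As a small illustration of the "same URL, different selection mechanism" idea, here is a hypothetical Node.js handler (the parameter and cookie names are made up):
// Pick the presentation for one canonical URL: query flag first, then cookie, then the default RIA view.
function pickView(req) {
    const fromQuery = new URL(req.url, 'http://your.si.te').searchParams.get('view');
    const fromCookie = /(?:^|;\s*)view=plain(?:;|$)/.test(req.headers.cookie || '') ? 'plain' : null;
    return fromQuery || fromCookie || 'ria';
}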
If the content of the two presentations differs in some meaningful way, then you should make that difference meaningful in the URL. Exploiting the presence or absence of #, and other such hacks, would be a mistake in my view.
Try to pick URL's that do not change over time: so-called cool URL's. This will aide the long-term usefulness of your site to your users: consider what happens if they come back to a bookmarked page in a year's time. Consistency will also help you to get a better critical mass of links or reviews of your site in del.icio.us and similar bookmarking/review sites.
Ian
It's perfectly acceptable to use two different link formats. If two users are not seeing the same content, why should they be at the same URL?
I guess what I really need here is not a Question/Answer format but some kind of poll. While I agree (and accepted) that different URLs are OK because they are getting two different views of the same content, I'm thinking more about sharing these URLs out.
Thanks for the reply though!
I want to log onto Stack Overflow using OpenID, but I thought I'd set up my own OpenID provider, just because it's harder :) How do you do this in Ubuntu?
Edit: Replacing 'server' with the correct term, OpenID provider (identity provider would also be correct, according to Wikipedia).
You might also look into setting up your own site as a delegate for another OpenID provider. That way, you can use your own custom URL, but not worry about security and maintenance as mentioned already. However, it's not very difficult, so it may not meet your criteria :)
As an example, you would add this snippet of HTML to the page at your desired OpenID URL if you are using ClaimID as the OpenID provider:
<link rel="openid.server" href="http://openid.claimid.com/server" />
<link rel="openid.delegate" href="http://openid.claimid.com/USERNAME" />
So when OpenID clients access your URL, they "redirect" themselves to the actual provider.
I've actually done this (set up my own server using phpMyID). It's very easy and works quite well. One thing that annoys me to no end is the use of HTML redirects instead of HTTP redirects. I changed that manually, based on some information found in the phpMyID forum.
However, I have switched to myOpenID in the meantime. Rolling your own provider is fun and games, but it just isn't secure! There are two issues:
First, and more generally, you have to act on faith. phpMyID is great, but it's developed in someone's spare time. There could be many undetected security holes in it (and there have been some in the past). While this of course applies to all security-related software, I believe the problem is potentially more severe with software developed in spare time, especially since the code is far from perfect in my humble opinion.
Secondly, OpenID is highly susceptible to screen scraping and mock interfaces. It's just too easy for an attacker to emulate the phpMyID interface to obtain your credentials for another site. myOpenID offers two very important solutions to the problem.
The first is its use of a cookie-stored picture that is embedded in the login page. If anyone screen-scrapes the myOpenID login page, this picture will be missing and the fake can easily be identified.
Secondly, myOpenId supports sign-in using strongly signed certificates that can be installed in the web browser.
I still have phpMyID set up as an alternative provider using Yadis but I wouldn't use it as a login on sites that I don't trust.
In any case, read Sam Ruby's tutorial!
I personally used phpMyID just for Stack Overflow. It's a simple two-file PHP script to put somewhere on a subdomain. Of course, it's not as easy as installing a .deb, but since OpenID relies completely on HTTP, I'm not sure it's advisable to install a self-contained server...
Take a look over at the Run your own identity server page. Community-ID looks to be the most promising so far.
I totally understand where you're coming from with this question. I already had an OpenID at www.myopenid.com, but it feels a bit weird relying on a third party for such an important login (a.k.a. my permanent "home" on the internet).
Luckily, it is easy to move to using your own server as an OpenID server - in fact, it can be done with just two files with phpMyID.
Download "phpMyID-0.9.zip" from http://siege.org/projects/phpMyID/
Move it to your server and unzip it to view the README file which explains everything.
The zip has two files: MyID.config.php, MyID.php. I created a directory called <mydocumentroot>/OpenID and renamed MyID.config.php to index.php. This means my OpenID URL will be very cool: http://<mywebsite>/OpenID
Decide on a username and password and then create a hash of them using: echo -n '<myUserName>:phpMyID:<myPassword>' | openssl md5
Open index.php in a text editor and add the username and password hash in the placeholder. Save it.
Test by browsing to http://<mywebsite>/OpenID/
Test ID is working using: http://www.openidenabled.com/resources/openid-test/checkup/
Reference info: http://www.wynia.org/wordpress/2007/01/15/setting-up-an-openid-with-php/ , http://siege.org/projects/phpMyID/ , https://blog.stackoverflow.com/2009/01/using-your-own-url-as-your-openid/
The above answers all seem to contain dead links.
This seems to be a possible solution which is still working:
https://simpleid.org/