Liferay Clustering - liferay

I am trying to implement clustering in Liferay 6.2 using the link below:
https://www.liferay.com/en_GB/documentation/liferay-portal/6.2/user-guide/-/ai/liferay-clustering-liferay-portal-6-2-user-guide-20-en
I put the properties below in both Liferay servers' portal-ext.properties files and pointed both Liferay servers to the same database:
cluster.link.enabled=true
cluster.link.autodetect.address=localhost:3306
lucene.replicate.write=true
Now, when I add a portlet to a page from the first Liferay instance and access the same page from the second Liferay instance (using the same user ID), I get an error message stating "Invalid authentication token". It seems to be a session replication problem in the cluster, but I am not able to figure out how to resolve it.
Looking for help to figure out what's going wrong.
Thanks in Advance.

You have a severe problem in your configuration, despite the already accepted answer (which I mostly disagree with).
You'll have to set up proper clustering in Liferay. In order for Liferay to find "the other" node, it uses multicast (by default). If you have multiple network cards but want/need one specific card to be used for detecting the other node, you set cluster.link.autodetect.address: Liferay checks which network adapter your OS uses to connect to that address and then uses that adapter. If you have only one adapter, the default value www.google.com:80 is fine. If you set this to localhost, Liferay will try to communicate with your other node(s) on localhost, and will thus only succeed if you have two Liferay processes running on the same machine - otherwise you'll have no cache synchronization at all.
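As a sketch, a corrected portal-ext.properties for a two-node cluster on separate machines might look like this (the database host name is a placeholder; use any host reachable through the adapter the nodes share):

```properties
# Enable ClusterLink (cache replication between the nodes)
cluster.link.enabled=true

# Point autodetection at a host reachable through the SAME network
# adapter the nodes use to talk to each other. The database server is
# a common choice; "db-server" is a placeholder host name.
cluster.link.autodetect.address=db-server:3306

# Replicate Lucene search index writes to the other node(s)
lucene.replicate.write=true
```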
As for the principles of Liferay clustering, I'd recommend going to the User's Guide, and I'd specifically recommend not doing session replication (e.g. the Tomcat-configured session replication). 95% of users don't need it; it adds significant overhead to processing, eating the performance benefits of the second server. And while you're at it: check some common pitfalls when clustering Liferay (but beware: there are more).
Edit: To answer your "Invalid Authentication Token" question: my advice is to implement sticky sessions - i.e. balance session creation across the machines, then stick each session to the machine where it was created. The authentication token is used to mitigate CSRF attacks; its absence implies that your session replication doesn't work correctly. As I've laid out above, I don't recommend enabling session replication, so sticky sessions are your best option IMHO.
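For example, with Apache httpd in front of two Tomcat nodes, sticky sessions can be sketched with mod_proxy_balancer like this (host names and routes are assumptions; each Tomcat's jvmRoute attribute in server.xml must match its route value):

```apache
# Sticky sessions keyed on the JSESSIONID cookie: once a session is
# created on a node, later requests carrying that cookie go back to it.
<Proxy balancer://liferaycluster>
    BalancerMember http://node1:8080 route=node1
    BalancerMember http://node2:8080 route=node2
    ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>
ProxyPass        / balancer://liferaycluster/
ProxyPassReverse / balancer://liferaycluster/
```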

Session clustering is implemented in the application server, not in Liferay. You either have to configure it manually on your AS, or enable sticky sessions on your load balancer.
There are a lot of articles about how to do it in Tomcat:
In Tomcat documentation
Liferay wiki
The second article is quite heavy; as far as I remember, to get basic session replication running, all you need to do is enable SimpleTcpCluster in the Tomcat configuration and add <distributable /> to the web.xml of each of your web applications.
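A minimal sketch of that setup, using the standard Tomcat layout:

```xml
<!-- conf/server.xml, inside the <Engine> (or <Host>) element:
     the attribute-free default enables DeltaManager session
     replication over multicast between all cluster members -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
```

```xml
<!-- WEB-INF/web.xml of every application whose sessions should
     replicate (a direct child element of <web-app>) -->
<distributable/>
```

Note that everything you put into a replicated session must be Serializable, or replication will fail at runtime.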


Does Apache Cassandra provide measures that can be taken to prevent data vandalism (malicious nodes)?

We're working on a big school project with twenty people. The case is a decentralized anonymous chatting platform. So we're not allowed to set up a central server, therefore we were looking into distributed databases and found Cassandra to best fit in our project.
This means that everybody who is running the application will also be a Cassandra node. This raises many concerns for me, mainly malicious nodes. If everybody runs a Cassandra node on their computer, how can we prevent them from manipulating/vandalizing or even straight up deleting data?
I was doing some research and I'm starting to conclude that Cassandra (and other distributed databases I looked into) are meant for corporate solutions where the company owns, runs and maintains the databases. This is not true in our case, because as soon as the application launches there won't be an "owner". Every user is equally part of the system.
I know one (or maybe the only) way to prevent malicious nodes in a decentralized/distributed system is to have nodes keep each other in check. I found no way to do this in Cassandra, hence my question: can we prevent data vandalism and malicious nodes from being a threat?
As you mentioned, the design of Cassandra assumes that you'll have control of all the nodes; once any third party has access to a copy of your data, you lose control of what they can do with it, similar to anything posted on the internet.
One option to ensure that only "authorized" nodes join the cluster is to enforce SSL internode encryption, which can give you some control, but there are some caveats:
If a node goes rogue or is compromised after it was given access, it will be very difficult to kick it out.
A node that is using an expired certificate will be able to continue interacting with the cluster until its service gets restarted.
Administration of SSL certificates adds another layer of operational complexity.
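For reference, a sketch of the relevant cassandra.yaml settings (keystore paths and passwords are placeholders):

```yaml
# cassandra.yaml: require SSL for all internode (node-to-node) traffic,
# and require peers to present a certificate from the truststore
server_encryption_options:
    internode_encryption: all                        # encrypt traffic between all nodes
    keystore: /etc/cassandra/conf/node.keystore      # placeholder path
    keystore_password: changeit                      # placeholder
    truststore: /etc/cassandra/conf/node.truststore  # placeholder path
    truststore_password: changeit                    # placeholder
    require_client_auth: true                        # reject nodes without a trusted cert
```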
Regarding the statement "I know one (or maybe the only) way to prevent malicious nodes in a decentralized/distributed system is to have nodes keep each other in check": Cassandra already uses a gossip mechanism to keep each node in check with the others.

Glassfish security realm for each application

I have deployed two applications onto a GlassFish server, each of which uses its own security realm (file, JDBC). The problem is that GlassFish allows only one default realm to be set, which results in only one application being functional at a time. I'm a newbie with GlassFish, so I might be missing something fundamental or should approach this problem differently (do I need a separate domain for each of my applications to be able to set up security with a specific realm?).
Any suggestions would be appreciated.
Thank you.
It's possible to create multiple file realms in the same GlassFish domain; you simply have to specify a new file name for the keyfile storing the user/password information. You can follow this tutorial if you wish.
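As a sketch, a second file realm can be created from the command line like this (the realm and keyfile names are placeholders):

```shell
# Create an additional file realm backed by its own keyfile
asadmin create-auth-realm \
  --classname com.sun.enterprise.security.auth.realm.file.FileRealm \
  --property file=${com.sun.aas.instanceRoot}/config/app2-keyfile:jaas-context=fileRealm \
  app2-file-realm

# Add a user to the new realm (prompts for the password)
asadmin create-file-user --authrealmname app2-file-realm someuser
```

Each application then selects its realm via the <realm-name> element of <login-config> in its own web.xml, so the domain's default realm no longer matters.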
Concerning the other part of the question, you can also consider using an LDAP server, which is a scalable and more general solution because it can also be used by other applications inside the same firm. You can use OpenLDAP or OpenDJ, for example, and use the JNDI API to let your applications access the LDAP realm.
Here you can find a JNDI and OpenLDAP tutorial, but you can easily find other tutorials on the subject.

Going Live - Any best practice check list and how to increase security on an MVC Site?

I have been building quite a few MVC based websites locally and am finally ready to deploy the first, but, I am getting rather nervous.
During testing, I noticed several things that worried me - I am using the default forms authentication with a few tweaks (although nothing touching the underlying security).
I noticed that if I created a user in one application and logged in, then launched another application... it would keep me logged in* as the user from the previous application. The user doesn't even exist in the new application!
* - I used [Authorize] on controllers, and was surprised I could just get straight in without any sort of authentication
I assume it is because the cookie is being set for localhost instead of per application/port (although there's not much I can do about this in development).
Based on this, how secure is the default authentication?
1. Is there any way to check from the code that the user doesn't have a "faked" cookie / check that the user has logged in from my application?
2. Are there any checklists or anything similar I can go through before deploying?
3. (Added while writing this, regarding question 1.) I'm guessing I could add a column with a random number that is saved to the cookie, and then that number is checked every time any authentication is done... however, I didn't want to start mucking around with the membership provider - but I think this could work. Is this a good idea?
Try using IIS on your machine instead of the VS Dev Server. That solves your problem 1.
Other than that, I don't think you will need any extra effort to make the default membership mechanisms of ASP.NET more secure - unless, of course, you need really custom things in your projects. These mechanisms have been around for a while now and I think they have been well tested in terms of security.
You just need to remember to put the [Authorize] attribute in the right places. If not on your controllers, put it on the right methods.
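One related tweak worth knowing: the "logged in across applications" effect from the question happens because both apps on localhost share the default forms-authentication cookie. Giving each application its own cookie name in web.config avoids it (the name and URL here are placeholders):

```xml
<!-- web.config: a distinct forms-auth cookie name per application -->
<system.web>
  <authentication mode="Forms">
    <forms name=".APP1_AUTH" loginUrl="~/Account/LogOn"
           protection="All" timeout="30" />
  </authentication>
</system.web>
```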
Basic web authentication shouldn't be trusted for applications that contain truly sensitive information. That being said, it's sufficient for most applications. Be sure to check your application as often as possible, before and after release, for XSS vulnerabilities.
Here is Microsoft's recommended "Secure yourself" list. http://msdn.microsoft.com/en-us/library/ff649310.aspx
No matter how strong your authentication is, one small XSS mistake and a malicious user can do as they wish to your site and your users' data!
I recently read a good book, Wrox's Professional ASP.NET; it talks about these steps in more detail and shows examples of the problems. After reading it I was able to "deface and steal" my own site's information with relative ease - a good eye-opener on the importance of securing against XSS.

Security in Java EE Application with JBoss

What would be the basic and obvious security considerations and recommendations in a Java EE Web application?
Use HTTPS
Use Jasypt to simplify some stuff.
Limit external access points.
Make sure you don't have a single point of failure.
Make sure communication channels are properly secured when needed.
Secure access to components by white list (give access instead of removing access).
Make sure the state is kept on the server side.
Test test test test test...
Keep updated on security flaws.
The rest is all about good design.
Don't trust anything that's not under your control. The primary, most important aspect of this is: Don't trust that the input to your POST/GET handlers will come from the forms you design.
Validate all client input, especially before you use it to interact with SQL, HQL, other external data sources or the command line.
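A minimal sketch of that validation idea in Java (the whitelist rule itself is an assumption; adapt it to your fields):

```java
import java.util.regex.Pattern;

public class InputValidator {
    // Whitelist rule (assumed for illustration): 3-20 letters,
    // digits, or underscores. Anything else is rejected outright.
    private static final Pattern USERNAME =
            Pattern.compile("^[A-Za-z0-9_]{3,20}$");

    public static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidUsername("alice_01"));                // true
        System.out.println(isValidUsername("'; DROP TABLE users; --")); // false
    }
}
```

Even after validation, pass the value to SQL through a PreparedStatement placeholder (?) rather than string concatenation, so it can never be interpreted as SQL.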

NHibernate and shared web hosting

Has anyone been able to get an NHibernate-based project up and running on a shared web host?
NHibernate does a whole lot of fancy stuff with reflection behind the scenes but the host that I'm using at the moment only allows applications to run in medium trust, which limits what you can do with reflection, and it's throwing up all sorts of security permission errors. This is the case even though I'm only using public properties in my mapping files, though I do have some classes defined as proxies.
Which companies offer decent (and reasonably priced) web hosting that allows NHibernate to run without complaining?
Update: It seems from these answers (and my experimentation - sorry Ayende, but I still can't get it to work on my web host, even after going through the article you linked to) that the answer is to choose your hosting provider wisely and shop around. It seems that WebHost4Life is pretty good in this respect. However, has anyone tried NHibernate on Windows shared hosting with 1and1? I already have a Linux account with them and I'm fairly satisfied on that front, and if I could get NHibernate to work seamlessly on Windows I'd probably stick with them.
I have had no issues running NHibernate-based apps on WebHost4Life, although I don't like them.
Getting NHibernate to run under medium trust is possible. A full description of how this can be done is found here:
http://blechie.com/WPierce/archive/2008/02/17/Lazy-Loading-with-nHibernate-Under-Medium-Trust.aspx
I ran my own geek site off N2 (which uses NHibernate and Castle Windsor) and 4 pet NHibernate/Fluent projects on dailyrazor.com for a while.
You get a good deal for $5 a month, including unlimited SQL Server databases and subdomains and it runs off Plesk with FTP and remote SQL Server Management Studio access.
I'm using a Finnish host called Nebula that happily runs my NHibernate-leveraging applications. I had an issue once with trust levels; the machine.config on the host was configured to deny reflection but I successfully overrode it in the web.config.
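For reference, the override mentioned above can be sketched like this; note it only works if the host's machine.config has not locked the trust section down (allowOverride="false"), in which case the setting is rejected:

```xml
<!-- web.config: request a higher trust level than the host default -->
<system.web>
  <trust level="Full" originUrl="" />
</system.web>
```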
