When we go through the load balancer, login to Liferay does not work properly, but when we hit an individual server directly it works.
Any ideas? We are using SiteMinder authentication.
I guess SiteMinder and the load balancer sit in front of the Liferay servers in the network. And I guess your problem is that you want the session to stick to the same Liferay server, so the load balancer has to see the SiteMinder session cookie after a successful SiteMinder login?
If this is the case (please be more specific), then it is not a problem of Liferay, but only of SiteMinder and the load balancer.
(I know Liferay, but not SiteMinder, and I cannot guess your load balancer.)
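If stickiness is indeed the issue, it is usually solved in the load balancer configuration rather than in Liferay. As an illustration only (your load balancer may differ), here is a minimal HAProxy sketch that pins each client to one Liferay node with a persistence cookie, leaving the SiteMinder cookie untouched; node addresses are placeholders:

    backend liferay
        mode http
        balance roundrobin
        # HAProxy inserts its own persistence cookie; the SiteMinder session
        # cookie passes through untouched.
        cookie SERVERID insert indirect nocache
        server liferay1 10.0.0.11:8080 check cookie s1
        server liferay2 10.0.0.12:8080 check cookie s2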
Related
We are blocked by our security team from going live with OpenAM because of the issue below. We have OpenAM deployed in Tomcat on server1, and the web agent and HTML content on server2 (Apache httpd). The agent redirects all unauthenticated requests from server2, where the HTML is deployed, to server1, where OpenAM is deployed.
The problem is we don't want to open server1 to the public, considering the security risk. Is it possible for the web agent deployed on server2 to reach OpenAM on server1 when server1 is not an opened/externalized server? How do we externalize the OpenAM login, hide all the console pages, and block all API calls?
It is possible to externalize the login interface of OpenAM, but you would need to write a custom login application for that. This custom app could then have direct access to the OpenAM server to perform the REST calls necessary to authenticate end users.
Once you have the login UI in place on a public server, you can change the agent configuration to use that login UI for the Login and Logout URLs.
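If you go down that route, the authentication call from the custom login app is a plain REST request. A minimal sketch in Python (the endpoint path, header names, and cookie name are OpenAM 11+ defaults and vary by version; the internal host name is a placeholder, and the requests library is assumed):

    import requests

    # Placeholder for the internal, non-public OpenAM server.
    OPENAM = "http://openam.internal.example.com:8080/openam"

    def authenticate(username, password):
        # OpenAM's REST login: credentials travel in headers, the body stays empty.
        resp = requests.post(
            OPENAM + "/json/authenticate",
            headers={
                "X-OpenAM-Username": username,
                "X-OpenAM-Password": password,
                "Content-Type": "application/json",
            },
            data="{}",
        )
        resp.raise_for_status()
        token = resp.json()["tokenId"]
        # Hand the token back to the browser as the SSO cookie the agent checks.
        return {"iPlanetDirectoryPro": token}

The custom app would then set that cookie on its own response (within the right cookie domain) so the agent on the protected server recognizes the session without the user ever touching the internal OpenAM host.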
I am new to OpenAM. I have the SSO URL, a username, and a password. My question is: how can I land on the openam/idm/EndUser page from my .NET application without going through the OpenAM login page? What kind of service or API should I use for that? Is there a sample demo?
Not at all.
The EndUser page is part of the OpenAM console, which is 'protected' by OpenAM itself. Without an OpenAM SSO tracking cookie you cannot access it.
What would be the intention of hitting the OpenAM console (which should not be made public on the Internet)? Password change? Identity management?
Note that OpenAM is NOT really an identity management/provisioning tool (and is not a web front end to LDAP-based directory servers).
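To illustrate the point: any request to the console without the SSO tracking cookie is simply bounced to the login page. A rough Python sketch (cookie name and URLs are defaults/placeholders, and the token would have to come from a prior REST login):

    import requests

    # Placeholder: a tokenId obtained earlier from OpenAM's REST authenticate call.
    token_id = "AQIC5..."

    session = requests.Session()
    # "iPlanetDirectoryPro" is the default OpenAM session cookie name.
    session.cookies.set("iPlanetDirectoryPro", token_id, domain="openam.example.com")

    # Without the cookie, this request is redirected to the login page.
    page = session.get("http://openam.example.com:8080/openam/idm/EndUser")
    print(page.status_code, page.url)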
I have my agent installed in domain 1 and OpenSSO in domain 2, and there is a VPN connection established between the two. In this scenario, will normal SSO work, or CDSSO? I tested and found that normal SSO works fine and I am able to get the cookie, but with CDSSO I get a 403x error.
You can configure the agent in either CDSSO or SSO mode when the resource protected by the agent is located in the same cookie domain, but it does not make sense: CDSSO is only needed to transfer the SSO tracking cookie into the other cookie domain.
However, I suspect the '403x' error you get is due to the agent profile property 'Agent Root URL for CDSSO' not matching the value of 'ProviderID' in the request hitting the CDCServlet.
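The check that fails is essentially a string comparison; an illustrative Python sketch (not actual OpenSSO code, all names are placeholders):

    from urllib.parse import urlparse, parse_qs

    # The CDCServlet rejects a CDSSO request when the ProviderID it carries is
    # not among the agent root URLs configured in the agent profile.
    AGENT_ROOT_URLS = {"https://app.domain1.example.com:443/"}

    def cdsso_request_allowed(cdcservlet_url):
        query = parse_qs(urlparse(cdcservlet_url).query)
        provider_id = query.get("ProviderID", [""])[0]
        return provider_id in AGENT_ROOT_URLS

    # True: the ProviderID matches the configured agent root URL exactly.
    print(cdsso_request_allowed(
        "https://sso.domain2.example.com/opensso/cdcservlet"
        "?ProviderID=https://app.domain1.example.com:443/"))

So compare the two values character by character, including scheme, port, and trailing slash.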
I have two domains. One is secured and the other is not. Currently, when the user submits form data, I redirect the user to the secure website to collect further details. This redirection is made secure by means of cross-domain cookies.
Now, instead of redirecting to the secure page, I am planning to load the secure page in an iframe. But I am not aware of the security measures needed to secure this communication via iframe. How do I ensure that this communication is secured? Will setting cross-domain cookies solve the problem?
I send a pixel request from the non-secure site to the secure site, which in turn drops a cookie with its own domain and sends back the pixel as a sign of a successful response. When the real request then comes from the non-secure site, I check for the cookie and its domain, thereby creating a secure environment; I have also made the page on the secure site a one-time-visit page.
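A rough sketch of that pixel/cookie handshake on the secure side, using Flask purely as an illustration (route names and the cookie are made up for the example):

    from flask import Flask, abort, request, send_file

    app = Flask(__name__)

    @app.route("/pixel")
    def pixel():
        # First leg: the non-secure page requests the pixel; we drop a cookie
        # scoped to the secure domain and return the 1x1 image.
        resp = send_file("pixel.gif", mimetype="image/gif")
        resp.set_cookie("handshake", "1", secure=True, httponly=True)
        return resp

    @app.route("/secure-form")
    def secure_form():
        # Second leg: serve the secure page only if the handshake cookie came
        # back, then expire it to keep this a one-time-visit page.
        if request.cookies.get("handshake") != "1":
            abort(403)
        resp = app.make_response("secure page contents")
        resp.set_cookie("handshake", "", expires=0)
        return resp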
We have a Squid reverse proxy and a MOSS 2007 portal. All sites are using NTLM.
We cannot get it working with Squid as a reverse proxy.
Any ideas where to start?
Can you switch to Kerberos instead of NTLM?
You're encountering the "Double-Hop Issue", whereby NTLM credentials cannot be forwarded across a second hop, such as from a proxy or front-end server on to a back-end server.
This is outlined at this location:
http://blogs.msdn.com/knowledgecast/archive/2007/01/31/the-double-hop-problem.aspx
And over here:
http://support.microsoft.com/default.aspx?scid=kb;en-us;329986
Double-Hop Issue
The double-hop issue is when the ASPX page tries to use resources that are located on a server that is different from the IIS server. In our case, the first "hop" is from the web browser client to the IIS ASPX page; the second hop is to the AD. The AD requires a primary token. Therefore, the IIS server must know the password for the client to pass a primary token to the AD. If the IIS server has a secondary token, the NTAUTHORITY\ANONYMOUS account credentials are used. This account is not a domain account and has very limited access to the AD.
The double-hop using a secondary token occurs, for example, when the browser client is authenticated to the IIS ASPX page by using NTLM authentication. In this example, the IIS server has a hashed version of the password as a result of using NTLM. If IIS turns around and passes the credentials to the AD, IIS is passing a hashed password. The AD cannot verify the password and, instead, authenticates by using the NTAUTHORITY\ANONYMOUS LOGON.
On the other hand, if your browser client is authenticated to the IIS ASPX page by using Basic authentication, the IIS server has the client password and can make a primary token to pass to the AD. The AD can verify the password and does authenticate as the domain user.
For more information, click the following article number to view the article in the Microsoft Knowledge Base:
264921 (http://support.microsoft.com/kb/264921/) How IIS authenticates browser clients
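If Kerberos is an option, a typical first step is registering HTTP service principal names on the portal's application pool account so clients can negotiate Kerberos instead of NTLM; for example (domain, account, and host names are placeholders):

    setspn -A HTTP/portal.example.com DOMAIN\svc-mossapppool
    setspn -A HTTP/portal DOMAIN\svc-mossapppool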
If switching to Kerberos is not an option, have you investigated the Squid NTLM project?
http://devel.squid-cache.org/ntlm/
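Newer Squid releases (2.6 and later) also support connection pinning, which keeps the connection-oriented NTLM handshake glued to a single upstream connection through the reverse proxy. A sketch of the relevant squid.conf lines, with placeholder hosts and assuming a recent Squid:

    # Pin client connections so the NTLM handshake survives the proxy hop.
    http_port 80 accel defaultsite=portal.example.com connection-auth=on
    cache_peer 10.0.0.5 parent 80 0 no-query originserver connection-auth=on name=moss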
You can use HAProxy for load balancing.
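For example, a minimal round-robin setup (addresses are placeholders):

    frontend web
        mode http
        bind *:80
        default_backend app_nodes

    backend app_nodes
        mode http
        balance roundrobin
        server app1 10.0.0.21:8080 check
        server app2 10.0.0.22:8080 check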