I have my agent installed in domain 1 and OpenSSO in domain 2, and there is a VPN connection established between the two. In this scenario, should normal SSO work, or CDSSO? I tested it and found that normal SSO works fine and I am able to get the cookie, but with CDSSO I get a 403x error.
You can configure the agent in either CDSSO or SSO mode when the resource protected by the agent is located in the same cookie domain, but CDSSO does not make much sense there: it is only needed to transfer the SSO tracking cookie into another cookie domain.
However, I suspect the 403x error you get is because the agent profile property 'Agent Root URL for CDSSO' does not match the value of 'ProviderID' in the request hitting the CDCServlet.
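For example (the hostnames and ports here are purely illustrative), a mismatch like the following would produce that error:
Agent Root URL for CDSSO (agent profile): http://app.domain1.example.com:80/
ProviderID in the request to the CDCServlet: http://app.domain1.example.com:8080/
Make sure the scheme, host and port configured in the agent profile exactly match what the agent actually sends.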
If an application issues an authenticated user a session and gives them a cookie that should only be valid for a certain subdomain (say, because other customers are located on other subdomains, but all the subdomains resolve to the same running application), should the server verify the cookie's intended subdomain against the Host header before setting the user's session at the beginning of a request?
e.g.
User successfully authenticates to client.example.com
Server creates a new session for them and adds a property to the session about the originating domain
{user: "fred@gmail.com", domain: "client.example.com"}
Server sends a Set-Cookie header in the response with the session id
Set-Cookie: secure-session-id=1234-5678; Secure
The browser won't send that cookie if the user navigates to otherclient.example.com, because a Set-Cookie header without a Domain attribute scopes the cookie to the exact host that set it
There's nothing stopping the user from constructing a curl command with that cookie but pointed at otherclient.example.com.
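e.g. something along these lines, reusing the example cookie value from above:
curl --cookie "secure-session-id=1234-5678" https://otherclient.example.com/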
If the server doesn't validate that the host header of an incoming request matches the originating domain of the session for the provided session id in the cookie, then it's possible that a user with a valid account could masquerade as another customer (if the app bases any logic off of the subdomain instead of purely off of information gathered from the authentication). Prior to setting the user's session and continuing with the request I would expect the server to take the session id submitted, look up the session, see if the request host header matches the "originating domain" that was put on the stored session and if not then either return a 401 or redirect the user to the appropriate subdomain.
This seems like a generic enough scenario that I'd expect most server authentication frameworks to do this out of the box unless you turn it off (ultimately it boils down to enforcing on the server side the same behavior that browsers are relied upon to provide by default: not sending session cookies set for one subdomain to another subdomain). Are you aware of any that do this? Is there a better way of preventing this scenario? Am I misunderstanding anything?
Are you aware of any that do this?
ASP.NET has a different Application Domain per IIS application. Therefore, a session cookie from one application won't be valid on another. The only exception is if you've written a multi-tenant application that resides in the same Application Domain and you're not doing any validation on the received session cookie to ensure that the host matches the one where it was set.
PHP, on the other hand, stores all sessions in session.save_path (e.g. /var/lib/php/session), and therefore a session cookie from one application would set session variables if used for another, which, as you've rightly pointed out, is a security concern.
This can be remedied by overriding the session.save_path local value for each application, or for each host through which the application is accessed.
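For example (the path and host are illustrative), each application's vhost could set something like:
php_value session.save_path "/var/lib/php/session/client.example.com"
so that session files created under one host are never loaded under another.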
Is there a better way of preventing this scenario?
As an additional security measure, you could set the host when starting a session:
Session["host"] = HttpContext.Current.Request.ServerVariables["HTTP_HOST"];
Then validate this before any session values are used in the request, i.e. what you said in your question:
I would expect the server to take the session id submitted, look up
the session, see if the request host header matches the "originating
domain" that was put on the stored session and if not then either
return a 401 or redirect the user to the appropriate subdomain.
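A minimal sketch of that check in ASP.NET, assuming the host was stored on the session as shown above (the names and the 401 handling are illustrative), could run at the start of every request:
// Reject the request if it arrives on a different host than the one the session was created for.
var sessionHost = HttpContext.Current.Session["host"] as string;
var requestHost = HttpContext.Current.Request.ServerVariables["HTTP_HOST"];
if (sessionHost != null && !string.Equals(sessionHost, requestHost, StringComparison.OrdinalIgnoreCase))
{
    HttpContext.Current.Response.StatusCode = 401;
    HttpContext.Current.Response.End();
}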
If these measures aren't taken, then substituting a set of session variables from one application into another that resides on the same server would be an interesting attack vector. Of course, if the applications are the same (e.g. a multi-tenant scenario), then there would be exploits such as leveraging admin access on one host to gain admin access on another. If not, there may still be attack paths, depending on which variables are set and how they are used.
I was wondering how the amlbcookie and sticky sessions work with the policy agents, specifically in a CDSSO environment.
I understand that in a regular SSO implementation, where the protected application, and therefore the web agent, is in the same domain as the OpenAM deployment, the web agent has access to the amlbcookie and can read its value or simply pass the cookie on to OpenAM during session validation.
However, how does this work in a CDSSO situation? In this case, the policy agent does not have access to the amlbcookie, since that cookie is set in a different domain (the OpenAM domain). I understand that the policy agent will read the session id from the LARES POST.
Is the amlbcookie value passed as well in the LARES POST? Is this what the com.sun.identity.agents.config.postdata.preserve.lbcookie property is for?
After doing some research, and working with the vendor, I learned that right now, policy agents do not pass on the amlbcookie in the CDSSO workflow. There is a bug open for this at the following link:
https://bugster.forgerock.org/jira/browse/OPENAM-2396
However, the authoritative server that issued the token will store its server id as part of the session value.
http://blogs.forgerock.org/petermajor/2015/08/sessions/
The Policy Agent will extract the server id from the token, and can create the amlbcookie, or the OpenAM server can read the authoritative server id from the token.
So there is no need for the LARES POST to also pass the amlbcookie, since all the information required to derive it is in the session token.
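To make that concrete (assuming the default configuration, where the amlbcookie value is simply the server id): if the authoritative server's id is 01, the derived cookie would just be
amlbcookie=01
and a sticky load balancer can route on that value.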
I deployed multiple applications in Tomcat 7.0.55 and used Central Authentication Service (CAS) for single sign-on. When I access an application it gets redirected to the CAS login page, and I can see that authentication succeeds and that the TGT and ST tickets are generated.
However, after the successful authentication it does not get redirected back to the application page. I observed that the proxy callback authentication fails and the corresponding ticket is not generated; I can see this in the catalina.out file.
For additional context, my CAS server is running on HTTP instead of HTTPS. Please let me know if this could cause any problem.
Moreover, all my apps are hosted on port 8080, but the proxyCallbackUrl I gave is on port 80. The URL currently configured for proxyCallbackUrl is http://my_server_private_ip/webappcas2/proxyCallback (this was configured before I took over), but I am not sure what URL should be given for proxyCallbackUrl in the application's web.xml file.
Thanks in advance.
Running CAS on a non-secure port will not allow you to use single sign-on. Furthermore, proxy callbacks are required to be HTTPS by default, which is why you are seeing that error. Switch to HTTPS and all your problems will go away.
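If you are using the Jasig Java CAS client, the callback is set as an init-param on the proxy ticket validation filter in web.xml; for example (the host and port are placeholders, and the URL must be HTTPS and point at the port your application actually listens on, e.g. 8443 rather than 80):
<init-param>
  <param-name>proxyCallbackUrl</param-name>
  <param-value>https://my_server:8443/webappcas2/proxyCallback</param-value>
</init-param>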
I have IIS set up in domain A, on what I'll call the process network. We are using Windows authentication, and in this environment everything works as it should.
But we also have users on an office network set up in domain B. There is no trust between the domains, but there is an opening between the networks so they can reach the site. For most users in domain B everything works as expected: when they try to log in they are prompted for their domain A credentials and are then logged in.
However, some users are unable to log in. They get prompted to supply credentials as expected, but when they do they are denied (3 tries followed by a 401), due to:
Failure Reason: Unknown user name or bad password.
Status: 0xc000006d
Sub Status: 0xc000006a
The above is taken from the IIS event log.
I know for sure that the user name is valid and that the password is correct. I have not tried this for all of these users, but for some. I have tried to log in using my own credentials from the computer of a user who cannot log in, and it worked, so it doesn't seem to be a client issue.
An interesting side note is that the users having trouble are in geographically different locations than the IIS server. I have not received any problem reports from office-network users in the same region as the IIS server.
EDIT: The users have changed their passwords after the reset, so it shouldn't be because of an expired password.
You must establish a two-way domain trust in order to make Kerberos work. Everything else will fail as you see in your logs.
We have a SQUID reverse proxy and a MOSS 2007 portal. All sites are using NTLM.
We cannot get it working with SQUID as a reverse proxy.
Any ideas where to start?
Can you switch to Kerberos instead of NTLM?
You're encountering the "Double-Hop Issue", whereby NTLM authentication cannot traverse proxies or servers.
This is outlined at this location:
http://blogs.msdn.com/knowledgecast/archive/2007/01/31/the-double-hop-problem.aspx
And over here:
http://support.microsoft.com/default.aspx?scid=kb;en-us;329986
Double-Hop Issue
The double-hop issue is when the ASPX page tries to use resources that are located on a server that is different from the IIS server. In our case, the first "hop" is from the web browser client to the IIS ASPX page; the second hop is to the AD. The AD requires a primary token. Therefore, the IIS server must know the password for the client to pass a primary token to the AD. If the IIS server has a secondary token, the NTAUTHORITY\ANONYMOUS account credentials are used. This account is not a domain account and has very limited access to the AD.
The double-hop using a secondary token occurs, for example, when the browser client is authenticated to the IIS ASPX page by using NTLM authentication. In this example, the IIS server has a hashed version of the password as a result of using NTLM. If IIS turns around and passes the credentials to the AD, IIS is passing a hashed password. The AD cannot verify the password and, instead, authenticates by using the NTAUTHORITY\ANONYMOUS LOGON.
On the other hand, if your browser client is authenticated to the IIS ASPX page by using Basic authentication, the IIS server has the client password and can make a primary token to pass to the AD. The AD can verify the password and does authenticate as the domain user.
For more information, click the following article number to view the article in the Microsoft Knowledge Base:
264921 (http://support.microsoft.com/kb/264921/) How IIS authenticates browser clients
If switching to Kerberos is not an option, have you investigated the Squid NTLM project?
http://devel.squid-cache.org/ntlm/
You can use HAProxy for load balancing.