I have Varnish up and running and everything works just fine.
But I am using ESI (<esi:include src="/esi/cache/temp.phtml?id=1"/>), and while it works fine, I want to prevent external clients from accessing the esi directory directly.
For now I have it working by setting a header in Varnish with the value of req.esi_level. It will be 0 if you access an ESI fragment directly, and otherwise it will be n+1 per include level.
The only issue with this is that the request still hits the backend, while I think/hope Varnish itself can block access to the esi directory.
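Roughly what I have now in vcl_recv (the header name X-Esi-Level is just an example):
sub vcl_recv {
    # pass the ESI include depth to the backend so it can reject
    # direct external requests, where the level is 0
    set req.http.X-Esi-Level = req.esi_level;
}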
tl;dr: how can you prevent external access to an ESI directory with Varnish?
In your VCL, instead of setting a header with the value of req.esi_level, just short-circuit requests for those resources when the ESI level is 0.
For example (VCL for Varnish 4 and later):
sub vcl_recv {
    # reject direct (non-ESI) requests for ESI fragments
    if (req.esi_level == 0 && req.url ~ "^/esi/") {
        return (synth(403, "Forbidden"));
    }
}
I am getting the error IDX21323: OpenIdConnectProtocolValidationContext.Nonce was null, OpenIdConnectProtocolValidatedIdToken.Payload.Nonce was not null.
https://testing.demo.com/message=IDX21323:%20RequireNonce%20is%20'[PII%20is%20hidden]'.%20OpenIdConnectProtocolValidationContext.Nonce%20was%20null,%20OpenIdConnectProtocol.ValidatedIdToken.Payload.Nonce%20was%20not%20null.%20The%20nonce%20cannot%20be%20validated.%20If%20you%20don't%20need%20to%20check%20the%20nonce,%20set%20OpenIdConnectProtocolValidator.RequireNonce%20to%20'false'.%20Note%20if%20a%20'nonce'%20is%20found%20it%20will%20be%20evaluated.
I checked other SO links and found that this issue is usually related to a redirect URI mismatch, e.g. having one URL in the code but a different one in Azure:
IDX21323 OpenIdConnectProtocolValidationContext.Nonce was null, OpenIdConnectProtocolValidatedIdToken.Payload.Nonce was not null
For me the redirect URI is the same in both places, i.e. in the code and in Azure. Moreover, I registered one application with two redirect URIs (http://localhost:11111/ and https://testing.demo.com). When I run it locally using localhost it works fine, but when I use https://testing.demo.com I get the IDX21323 error on my machine, whereas on a different machine it goes into a loop.
https://login.microsoftonline.com/{tenantID}/oauth2/authorize?client_id={client ID}&redirect_uri=https%3a%2f%2flogin.microsoftonline.com%2fte%{tenant ID}%2foauth2%2fauthresp&response_type=id_token&response_mode=form_post&nonce={nonce 1}state=StateProperties%3deyJTSUQiOiJ4LW1zLWNwaW0tcmM6qswsdwdY2OTAtNzlk
The above URL stays the same; only the nonce changes every time.
So is it because I have kept two redirect URIs for the same application? Do I need to create two different applications, one for localhost (redirect URI: http://localhost:11111) and another for dev (redirect URI: https://testing.demo.com)?
Any guidance is appreciated. Thanks!!
It's always recommended to use different applications for development and production, mainly from a security and isolation point of view.
Your redirect_uri seems to be wrong: it's pointing to the tenant itself, hence the loop. The redirect_uri below should be your app's reply URL, i.e. http://localhost:11111/ or https://testing.demo.com:
https://login.microsoftonline.com/{tenantID}/oauth2/authorize?client_id={client ID}&redirect_uri=http://localhost:11111/&response_type=id_token&response_mode=form_post&nonce={nonce 1}state=StateProperties%3deyJTSUQiOiJ4LW1zLWNwaW0tcmM6qswsdwdY2OTAtNzlk
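Decoded, the difference is easy to see (values taken from the two URLs above):
wrong:   redirect_uri=https://login.microsoftonline.com/te/{tenant ID}/oauth2/authresp
correct: redirect_uri=http://localhost:11111/   (or https://testing.demo.com/)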
I have a ColdFusion application hosted on my IIS server. I added the Shibboleth service to IIS, and the CGI/filters are set up to use it. I added my application to the TestShib federation and was able to log in successfully. Now I'm trying to get the session variables into the ColdFusion code.
When I dump the CGI scope, I see the Shibboleth session is saved under HTTP_COOKIE, but REMOTE_USER is an empty string. This is because REMOTE_USER cannot be used, according to the docs; instead, the request header variable should be named HTTP_REMOTE_USER, but I don't see that in the CGI dump either. Does anyone know why this is? Do I have to set that up in my Shibboleth attribute-map or in ColdFusion?
index.cfm
CGI dump
<cfdump var="#cgi#">
<br>HTTP_REMOTE_USER
<cfdump var="#CGI.HTTP_REMOTE_USER#">
<br>Get Request
<cfset x = GetHttpRequestData()>
<cfdump var="#x#">
Dump result
HTTP_COOKIE:_shibsession_64656487474733a2f2f6465736f6d2f73686962626f6c657468=_ecb60f7e4bf7616ab3522;
Session
Miscellaneous
Session Expiration (barring inactivity): 479 minute(s)
Client Address: 224.61.30.228
SSO Protocol: urn:oasis:names:tc:SAML:2.0:protocol
Identity Provider: https://idp.testshib.org/idp/shibboleth
Authentication Time: 2017-11-30T14:48:48.255Z
Authentication Context Class: urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
Authentication Context Decl: (none)
Attributes
affiliation: 2 value(s)
entitlement: 1 value(s)
eppn: 1 value(s)
persistent-id: 1 value(s)
unscoped-affiliation: 2 value(s)
I believe that ColdFusion doesn't expose every possible CGI variable in a <cfdump> of the CGI scope, only the most common ones. That doesn't mean you can't access the seemingly missing CGI variables directly. Try changing your dump to specifically target the one you need, like:
<cfdump var="#CGI.HTTP_REMOTE_USER#">
If it still isn't being written to the CGI scope, you might be able to access that specific request header through the page request using getHTTPRequestData().
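A minimal sketch (assuming Shibboleth exports the attribute as a request header named REMOTE_USER; the actual name depends on your attribute-map.xml and the IIS filter configuration):
<cfset headers = getHTTPRequestData().headers>
<cfif structKeyExists(headers, "REMOTE_USER")>
    <cfoutput>#headers["REMOTE_USER"]#</cfoutput>
<cfelse>
    <!--- header not present; check the attribute mapping / filter setup --->
</cfif>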
I'm running a standalone instance of Varnish on a DigitalOcean Ubuntu VM, which basically works fine. The setup is used to take load off an older WordPress server that sits elsewhere. That works quite well, but I'm having a hard time getting content purged. When talking about purge, I mean invalidating the cache for a URL to force Varnish to fetch a fresh version from the backend (just to make sure, as I've seen some confusion about purge vs. ban).
I have set up an ACL for purging, and as far as I can see with varnishlog, the purges get accepted: both from the WordPress blog (where W3TC handles the purges) and from the local console, where I tried to purge with curl -X PURGE http://url.to.purge
The problem is that I still get the old version of the URL in the browser, no matter what I do locally.
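For completeness, the ACL is defined roughly like this (the IPs here are examples, not my real ones):
acl purge {
    "localhost";
    "127.0.0.1";
    # IP of the WordPress server that issues the purges
    "203.0.113.10";
}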
This is how i handle purge in vcl_recv:
if (req.method == "PURGE") {
    # only clients matching the purge ACL may invalidate objects
    if (!client.ip ~ purge) {
        return (synth(405, "Not allowed."));
    }
    return (purge);
}
and I get VCL_error(200, Purged) on every purge, so I guess that part is probably OK.
It looks like I'm still doing something wrong, though. After giving the varnish service a restart, the full cache refreshes and the pages refresh too; until then, Varnish keeps everything for ages, no matter how much I purge.
My Varnish version is 4.0.3.
Any idea?
Thanks,
Frank
I got the same behavior on Varnish 6 with VCL 4.1.
The only way I could solve it was to explicitly define sub vcl_purge like this:
sub vcl_purge {
    # after the object has been invalidated, restart the transaction
    # as a plain GET so Varnish immediately fetches a fresh copy
    set req.method = "GET";
    set req.http.X-Purger = "Purged";
    return (restart);
}
I didn't find the root cause, and this may not be exactly what you want, because after the purge Varnish fetches the content from the backend without waiting for a client request.
But I still haven't found another way, and it is good enough for me.
I'm trying to send an HTTP POST request to a PHP file on my web hosting server from my Android app.
The request contains data from the app, which is validated or saved on the server; the PHP file then sends a JSON response, which is received by the Android device, and actions are taken accordingly, like logging in or registering.
Now, the problem that has arisen is that until now I didn't have a paid domain, only a hosting service. That hosting service gave me a server IP address to access the index.php file I had uploaded.
So in my Android code I had written the URL to be connected to as http://10x.xxx.xx.xx/index.php/
and the request and response were working totally fine.
Now I have purchased a domain name from Godaddy.com and I'm forwarding that domain name to the server IP I had; when I open it in a browser it works perfectly fine. So I changed the URL the request is sent to in my Android code to http://www.sampleurl.com/index.php/
This is my index.php file:
<?php
if (isset($_POST['tag']) && $_POST['tag'] != '') {
    // get tag
    $tag = $_POST['tag'];
    // do other authorization stuff
} else {
    echo "Access Denied";
}
Now the problem is: when I use the server IP address to connect, it goes into the if block and functions correctly. But when I use the domain name, it always returns Access Denied.
The Logcat shows:
03-18 02:59:08.780: E/JSON(30892): Access Deniedn
03-18 02:59:08.780: E/JSON Parser(30892): Error parsing data org.json.JSONException: Value Access of type java.lang.String cannot be converted to JSONObject
03-18 02:59:08.780: W/System.err(30892): java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.String org.json.JSONObject.getString(java.lang.String)' on a null object reference
Now I don't know how a letter 'n' gets appended after 'Access Denied', nor why the request is not being served correctly.
P.S. I have used forwarding instead of updating the nameservers at the domain. Can that possibly be causing the issue?
Yes: with the DNS configured as forwarding, your POST request gets lost. Domain forwarding works as an HTTP redirect, and the client follows the redirect with a plain GET, dropping the POST body, so $_POST['tag'] is never set. Check your server logs to confirm. You need to set the domain up as a type A record pointing at your server IP instead.
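A sketch of what that DNS entry looks like in zone-file notation (reusing the placeholder server IP from your question; in GoDaddy's DNS manager this is just an A record for the host):
www.sampleurl.com.   3600   IN   A   10x.xxx.xx.xx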
My software stack is:
Liferay 6.0.6 with Tomcat 6.0.29, OpenSSO 9.5.2_RC1 Build 563 with Tomcat 6.0.35, on the CentOS 6.2 operating system
Setup:
I have set up both Liferay and OpenSSO on the same CentOS machine, making sure that their two Tomcats run on different ports. I installed and configured OpenSSO with Liferay as per the guidelines available on the Liferay forums.
Problem:
When I hit my application URL, I get redirected to the OpenSSO login page, which is what I want. When I log in with proper authentication details, it tries to redirect to my application, which is exactly how it should behave; however, this redirect goes into a loop and I never see my application dashboard. The conclusion I have come to is that the redirect tries to authenticate in Liferay, but somehow Liferay does not get what it is looking for and goes back to OpenSSO, and this repeats infinitely. I found similar issues reported here; unfortunately, the suggestions did not work.
Later I decided to debug the Liferay code, and I put breakpoints in com.liferay.portal.servlet.filters.sso.opensso.OpenSSOUtil and com.liferay.portal.servlet.filters.sso.opensso.OpenSSOFilter. The way I understand the code, it first goes to the filter's processFilter() method, which reads the OpenSSO settings I have configured in Liferay and then checks whether the user is authenticated by calling OpenSSOUtil.isAuthenticated(). That implementation basically reads the cookie information sent with the request and tries to set the cookie property in Liferay by calling OpenSSOUtil._setCookieProperty(). This is where it fails: it tries to read the cookie named iPlanetDirectoryPro via the Liferay class com.liferay.util.CookieUtil using the HttpServletRequest object, but all it gets is NULL. That value sets the authentication status to false, and hence the loop.
The following is the code from the class com.liferay.util.CookieUtil:
public static String get(HttpServletRequest request, String name) {
    Cookie[] cookies = request.getCookies();

    if (cookies == null) {
        return null;
    }

    for (int i = 0; i < cookies.length; i++) {
        Cookie cookie = cookies[i];

        String cookieName = GetterUtil.getString(cookie.getName());

        if (cookieName.equalsIgnoreCase(name)) {
            return cookie.getValue();
        }
    }

    return null;
}
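To check whether the cookie arrives at Liferay's Tomcat at all, I could temporarily dump the incoming cookies with a quick helper like this (my own throwaway debugging code, not Liferay's):
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;

public class CookieDebug {

    // print every cookie on the request so we can see whether
    // iPlanetDirectoryPro is actually being sent to Liferay's Tomcat
    public static void dump(HttpServletRequest request) {
        Cookie[] cookies = request.getCookies();

        if (cookies == null) {
            System.out.println("no cookies on this request");
            return;
        }

        for (Cookie cookie : cookies) {
            System.out.println(cookie.getName() + "=" + cookie.getValue());
        }
    }
}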
Can anyone please let me know why Liferay is not able to find the cookie that OpenSSO sent? If it's related to the OpenSSO setting for encoding the cookie value, then I have done that already, which is:
In OpenSSO go to: Configuration -> Servers and Sites -> (select your server) -> Security -> Cookie -> check Encode Cookie Value (set to Yes)
What works:
While this loop is executing, I can open another tab and log in to my application explicitly; when I then sign out from my application, it signs me out of OpenSSO as well. This is strange to me.
For more information: while this redirect loop happens, the following URLs give me these pieces of information:
http://opensso.ple.com:9090/openam/identity/getCookieNameForToken
string=iPlanetDirectoryPro
http://opensso.ple.com:9090/openam/identity/isTokenValid
boolean=true