How to disable Glimpse, the difference between turning Glimpse.axd off and defaultRuntimePolicy="Off" - glimpse

What is the difference between
1) turning Glimpse off via the web.config's setting:
<glimpse defaultRuntimePolicy="Off" endpointBaseUri="~/Glimpse.axd">
2) turning it off via Glimpse.axd
As I understand it, 1) will turn off all tracing, whereas 2) only stops the traces being returned to that particular browser session, while tracing still happens on the server. If that is right, the only way to turn Glimpse off, say for a production instance, and remove any Glimpse processing overhead, would be to use 1).
Is my understanding correct?
Thanks

In case 1), the GlimpseRuntime will detect that it should not trace anything during any request. Off is the most restrictive of the Glimpse RuntimePolicy values. Keep in mind that there will still be a little bit of overhead to make that check. If you want to take Glimpse completely out of the picture, you must make sure there are no Glimpse-related assemblies in your bin folder and that the registered HttpModule and HttpHandler are removed from the config.
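For reference, a minimal web.config sketch covering both steps; the module/handler names below match the default Glimpse ASP.NET registration, but check your own config since exact names can differ by version and pipeline mode:

```xml
<configuration>
  <!-- 1) Disable Glimpse globally via the runtime policy -->
  <glimpse defaultRuntimePolicy="Off" endpointBaseUri="~/Glimpse.axd" />

  <!-- To take Glimpse completely out of the picture, also remove its
       module and handler registrations (IIS integrated pipeline): -->
  <system.webServer>
    <modules>
      <remove name="Glimpse" />
    </modules>
    <handlers>
      <remove name="Glimpse" />
    </handlers>
  </system.webServer>
</configuration>
```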
In case 2), tracing is also prevented, but only for a particular request; this differs from case 1), where the configuration value applies to all requests.
Let me clarify that a little bit. The GlimpseRuntime determines a specific RuntimePolicy value for each request, and it does that based on IRuntimePolicy implementations. Glimpse comes with a couple of policies out of the box: some decide whether to trace requests, others whether to return the Glimpse client as part of the response. They decide based on the response content type (you don't want the Glimpse panel returned when an image is requested, for instance), the status code, the URI used, and so on. One of those policies is the ControlCookiePolicy, which checks whether a specific Glimpse cookie is part of the request; if it is not, tracing is disabled completely for that particular request. When you go to the Glimpse.axd page and turn Glimpse on or off, you are basically creating or deleting that cookie.
So in case 1) no tracing is done at all, while in case 2) tracing can be done for request A if the cookie has been set, but disabled for request B if the cookie is no longer there.
It is possible to ignore this ControlCookiePolicy, and to create your own policies that determine whether the Glimpse client should be returned or tracing should be done.
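Ignoring a policy is done in the glimpse section of web.config. A sketch of the shape, assuming the standard ignoredTypes mechanism; the exact namespace and assembly of ControlCookiePolicy vary between Glimpse versions, so treat the type reference below as illustrative and verify it against your installed assemblies:

```xml
<glimpse defaultRuntimePolicy="On" endpointBaseUri="~/Glimpse.axd">
  <runtimePolicies>
    <!-- Stop the cookie check so tracing no longer depends on Glimpse.axd -->
    <ignoredTypes>
      <add type="Glimpse.Core.Policy.ControlCookiePolicy, Glimpse.Core" />
    </ignoredTypes>
  </runtimePolicies>
</glimpse>
```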

Related

Pingdom breaks IIS output caching when using varyByHeaders (Cookie)

I've been doing a lot of research on output caching lately and have been able to successfully implement output caching in IIS via web.config with either varyByQueryString or varyByHeaders.
However, then there's the issue of Pingdom's Performance & Real User Monitoring (or PRUM). They have a "fun" little beforeUnload routine that sets a PRUM_EPISODES cookie just as you navigate away from the page, so it can time your next page load. The value of this cookie is essentially a Unix timestamp, which changes every second.
As you can imagine, this completely breaks user-mode output caching, because now every request arrives with a different Cookie header.
So two questions:
My first inclination is to find a way to drop the PRUM_EPISODES cookie before it reaches the server, since it serves no purpose to the actual application (this is also my informal request for a ClientOnly flag in the next HTTP version). Is anyone familiar with a technique for dropping individual cookies before they reach IIS's output caching engine, or some other technique to leverage varyByHeaders="Cookie" while ignoring PRUM_EPISODES? I haven't found such a technique for web.config as of yet.
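One untested avenue: the IIS URL Rewrite module can rewrite the HTTP_COOKIE server variable on the way in, which would strip the cookie before most of the pipeline sees it. A sketch, assuming URL Rewrite is installed and HTTP_COOKIE is unlocked in allowedServerVariables (this may need to be permitted at the applicationHost level); whether the output-cache module sees the rewritten header depends on module ordering, so verify before relying on it:

```xml
<system.webServer>
  <rewrite>
    <allowedServerVariables>
      <add name="HTTP_COOKIE" />
    </allowedServerVariables>
    <rules>
      <rule name="StripPrumCookie">
        <match url=".*" />
        <conditions>
          <!-- Capture everything around the PRUM_EPISODES pair -->
          <add input="{HTTP_COOKIE}" pattern="(.*)PRUM_EPISODES=[^;]*;?\s*(.*)" />
        </conditions>
        <serverVariables>
          <set name="HTTP_COOKIE" value="{C:1}{C:2}" />
        </serverVariables>
        <action type="None" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```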
Do all monitoring systems manipulate cookies in this manner (changing on every page request) for their tracking mechanisms, and do they not realize that by doing so they break user-mode output caching?

How to change response header (cache) in CouchDB?

Do you know how to change the response header in CouchDB? Right now it sends Cache-Control: must-revalidate, and I want to change it to no-cache.
I do not see any way to configure CouchDB's cache header behavior in its configuration documentation for general (built-in) API calls. Since this is not a typical need, lack of configuration for this does not surprise me.
Likewise, last I tried even show and list functions (which do give custom developer-provided functions some control over headers) do not really leave the cache headers under developer control either.
However, if you are hosting your CouchDB instance behind a reverse proxy like nginx, you could probably override the headers at that level. Another option would be to add the usual "cache busting" hack of adding a random query parameter in the code accessing your server. This is sometimes necessary in the case of broken client cache implementations but is not typical.
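A sketch of the reverse-proxy approach with nginx (the upstream address and port are the CouchDB default; adjust to your setup):

```nginx
# Reverse proxy in front of CouchDB, rewriting the cache header.
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:5984;
        # Drop CouchDB's "Cache-Control: must-revalidate" from the response...
        proxy_hide_header Cache-Control;
        # ...and substitute the desired value.
        add_header Cache-Control "no-cache";
    }
}
```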
But taking a step back: why do you want to make responses no-cache instead of must-revalidate? I could see perhaps occasionally wanting to override in the other direction, letting clients cache documents for a little while without having to revalidate. Not letting clients cache at all seems a little curious to me, since the built-in CouchDB behavior using revalidated ETags should not yield any incorrect data unless the client is broken.

GWT security: web.xml filter vs overriding processPost() in RemoteServiceServlet

I have a GWT application that resides within a single web page, which I believe is fairly typical. I am in the process of securing it, and I need advice on choosing a proper approach. My ultimate intention is to check for the presence of an authenticated session on every GWT-RPC server call.
In the past, when dealing with servlet/JSP-based web applications, I used filter and filter-mapping definitions in web.xml. That worked like a charm, considering that such applications usually consist of many web pages and redirection to a login page goes right along with that. But given GWT's single-page nature, I feel that overriding RemoteServiceServlet's processPost() function may be a better approach. My intention would be to check for the presence of an existing session and throw an appropriate exception if needed. The client would then react accordingly (e.g. a login popup) by determining the course of action based on whatever exception is thrown back to it.
I am aware of other existing solutions such as Spring security, but I would really like to hear opinions on my idea. Thank you.
I don't think that you should check for an authenticated session yourself. Let the application container deal with that. Of course, in order to do that, you will need a login-config section and security constraints in your web.xml file.
A good way to secure specific parts of your application is to check, prior to the actual display of the screen, whether the current user is allowed to see it. From your remote servlet you can call getThreadLocalRequest().getUserPrincipal() to get the actual user (null if not authenticated) and getThreadLocalRequest().isUserInRole("admin") to perform the authorization check.
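A minimal sketch of the check; SessionGuard and NotLoggedInException are illustrative names (not from the original post), and the commented wiring assumes the standard RemoteServiceServlet API described above:

```java
// Illustrative helper; in a real GWT app you would call this from your
// RemoteServiceServlet subclass, passing in
// getThreadLocalRequest().getUserPrincipal().
public class SessionGuard {

    /** Thrown back to the GWT client so it can react, e.g. with a login popup. */
    public static class NotLoggedInException extends RuntimeException {
        public NotLoggedInException(String msg) {
            super(msg);
        }
    }

    /** Throws when no authenticated principal is present on the request. */
    public static void requireAuthenticated(Object principal) {
        if (principal == null) {
            throw new NotLoggedInException("No authenticated session");
        }
    }
}
```

The exception would be declared on the RPC method so the generated async callback's onFailure can inspect it and trigger the login flow.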
Hope this is helpful!

how do i keep the session permanent with servicestack

I am working with a remote iPad developer who is using a tool that he says does not allow him to set the "RememberMe=true" value when registering the user. Since we always want to have this value set anyway, I thought I could simply intercept the request on the server side and set it myself. I am using Basic Authentication and I had already overridden "BasicAuthProvider" so I have access to the "TryAuthenticate" and "Authenticate" methods. These methods both provide a parameter of IServiceBase which contains the original Request. I was thinking about modifying the DTO but it is null. So I looked at the cookie values and I could easily add a value for "ss-opt=perm" in there. But I'm not even sure "perm" is right.
My question is this: is this the best way to set the RememberMe flag to true on the server side? My partner says the library he is using is called AFNetworking, but that looks to be a dead end.
Marcus
EDIT: My partner found a way to set the "ss-opt" value with their tool, but this does not seem to be helping; he still experiences the problem after 6 hours. Some additional information: the first response he gets after waiting 6 hours has the "ss-pid" cookie value, but the "ss-id" and "ARRAffinity" cookies are missing from that first response. The subsequent responses have them. Weird.
I am going to switch to using AzureCache instead of MemCache to see if that helps. But I did not update the server in those 6 hours, so shouldn't the memory cache still have the session id values that correlate to the ss-pid value?
EDIT 2: I was under the false impression that the "cache" was where the system kept the permanent ss-pid values, and that all I had to do was register the cache. How do I keep the ss-pid values around between server updates?
Switching to AzureCache and having the client insert the ss-opt cookie seems to be working.

Remote activation/deactivation and protecting against out of business

I'm in charge of an app that uses the internet to transfer data between sites, and some customers are being awkward about paying, so we need a mechanism that allows us to cut off service to non-payers. I'd like to protect against their admins using firewalls to block our checks, but conversely I'd like to make some allowance for our company web site disappearing for some reason and not being accessible.
The scheme I'm imagining is:
the server makes a twice-daily check to a web page using a URL like:
http://www.ourcompany.com/check.php?myID=GUID&Code=MyCode
This then returns a response that contains either nothing of interest, or the GUID and a value.
GUID=0
That zero indicates that the server should stop operation. To make it work again, the server then checks every 5 minutes for the same info, until the returned value matches the expected transformation of the code it passed in.
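The polling logic described above can be sketched as plain code. Everything here is illustrative (class name, response format, and especially the toy transform, since the post does not specify the real transformation):

```java
import java.util.Locale;

// Sketch of the twice-daily check. The response body is assumed to be
// either empty ("nothing of interest") or a single "GUID=value" line.
public class LicenseCheck {

    /** The transformation server and client agree on; a toy stand-in here. */
    static String expectedTransform(String code) {
        return new StringBuilder(code.toLowerCase(Locale.ROOT)).reverse().toString();
    }

    /** Returns false (stop operating) only when the response carries GUID=0. */
    static boolean shouldKeepRunning(String response, String myGuid) {
        if (response == null || !response.startsWith(myGuid + "=")) {
            return true; // nothing of interest: keep running
        }
        return !response.substring(myGuid.length() + 1).trim().equals("0");
    }

    /** While disabled, poll until the value matches the expected transform. */
    static boolean reactivationGranted(String response, String myGuid, String sentCode) {
        if (response == null || !response.startsWith(myGuid + "=")) {
            return false;
        }
        String value = response.substring(myGuid.length() + 1).trim();
        return value.equals(expectedTransform(sentCode));
    }
}
```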
This scheme makes sense to me, but the real question is how to protect against blocking. Given that we know we must have internet access, how long should we continue to operate without being able to get a response from our web server? Is something like 14 days, after which we just shut off anyway, the best approach?
The solution I used in the end was pretty much as I suggested. Yes, it is defeatable using tools outlined here, but it is better than nothing.
The app checks daily to access a web site that contains a control file encrypted using public key encryption. It decrypts in memory, and if it finds its GUID, then it must match a code. To disable the operation, the code is set to 0 (zero) which will always fail. When disabled, it checks every two minutes to allow rapid restoration. There is also a manual mechanism to generate a code that will work for a week in case of server trouble.
The code will allow up to 14 days without connecting to the server before it takes this as a deliberate attempt to block it. After 10 days, it shows an error message which asks them to contact support.
This method is really easy to circumvent: just use a local DNS server to point www.ourcompany.com at the local machine, or use an HTTP proxy. Then the user can return whatever response they want to the program.
Assuming the user hasn't circumvented the check, how long you are to continue to operate without confirmation is a business decision and not a programming decision.
A user can use a tool such as OWASP WebScarab to change values on the fly and subvert your security model. You need something more difficult to defeat, such as requiring a secure channel, comparing public keys, and so on.
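One such hardening step is to digitally sign the control file, so a spoofed DNS or proxy response cannot forge it; the accepted answer's "encrypted using public key encryption" control file is in the same spirit. A minimal sketch using the JDK's Signature API (class and method names are illustrative, and real code would ship only the public key with the app):

```java
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Sketch: the company signs the control file with its private key;
// the deployed app verifies with the embedded public key.
public class ControlFileVerifier {

    /** Server side: sign the control file bytes. */
    static byte[] sign(byte[] data, PrivateKey key) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(key);
        s.update(data);
        return s.sign();
    }

    /** Client side: reject any control file whose signature does not verify. */
    static boolean verify(byte[] data, byte[] sig, PublicKey key) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(key);
        s.update(data);
        return s.verify(sig);
    }
}
```

This stops forged "keep running" (or forged "shut down") responses, though it cannot stop a determined admin from simply blocking the check, which is what the 14-day grace period is for.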