Content Expiration - IIS 6

If I set the content expiration for static files to something like 14 days and I decide to update some files later on, will IIS know to serve the updated files or will the client have to wait until the expiration date?
Or is it the other way around where the browser requests a new file if the modified date is different?
Sometimes I update a file on the server and I have to do a hard refresh (CTRL+F5) to see the difference. Currently I have it set to expire after 1 day.

The web browser, and any intermediate proxies, are allowed to cache the page until its expiration date. This means that IIS might not even be aware of the client viewing the page.

You want ETags
An ETag is an opaque identifier assigned by a web server to a specific version of a resource found at a URL. If the resource content at that URL ever changes, a new and different ETag is assigned. Used in this manner ETags are similar to fingerprints, and they can be quickly compared to determine if two versions of a resource are the same or not. [...]
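As an illustration of how that comparison works on the wire, here is a small sketch using Python's third-party requests library against a hypothetical URL; the ETag value itself is opaque and chosen by the server.

import requests

url = "http://example.com/static/logo.png"   # hypothetical resource

# First request: the server returns the full body plus an ETag header.
first = requests.get(url)
etag = first.headers.get("ETag")

# Later request: send the stored ETag back in If-None-Match.
# If the file is unchanged the server answers 304 Not Modified with no body;
# if it was updated the server answers 200 with the new content and a new ETag.
second = requests.get(url, headers={"If-None-Match": etag} if etag else {})
if second.status_code == 304:
    print("Cached copy is still valid")
else:
    print("Resource changed, new ETag:", second.headers.get("ETag"))

This is also why a plain refresh (F5) usually picks up a change: the browser revalidates and gets either a 304 or the new file, whereas within the freshness window normal navigation may not contact the server at all.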

Related

Amazon CloudFront: how to make a signed URL expire after it has been used once

I need to protect my videos from being downloaded with "Internet Download Manager" (IDM/IDMan).
I used
1. an RTMP stream
2. signed URLs
3. an expiration date on the signed URL (60 seconds)
4. the player (JW Player) set to autostart
I also need the signed URL to become invalid once it has been used.
With this approach, IDM would receive a URL that has already been used and would therefore be blocked.
Is there any way to configure CloudFront to accept a signed URL only once?
Or any other solution that prevents the videos from being downloaded and re-uploaded to other websites?
Please can you help?
Thanks in advance.
CloudFront does not support playing a URL only once, and it likely never will. The only way to do this would be for all of its edge servers to share that information, and they deliberately do not share state, which is what makes scaling much easier and performance much better.
Unfortunately, if you're looking for fine-grained control over how your videos are played, you're going to need more fine-grained code, which you can't run in CloudFront - you'll need to host the content directly on your own server.
Idea 1: Limit by count
You can implement the idea you describe: once the URL has been used, you no longer serve up that file.
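As a rough sketch only, here is what that could look like if you host the files behind your own small Flask app instead of CloudFront; the routes, the in-memory token store and VIDEO_DIR are all illustrative names, not an existing API.

import secrets
from flask import Flask, abort, send_from_directory

app = Flask(__name__)
VIDEO_DIR = "/srv/videos"   # hypothetical location of the video files
unused_tokens = {}          # token -> filename; use a shared store (e.g. Redis) in production

@app.route("/token/<filename>")
def issue_token(filename):
    # Called by your own page when the player is rendered.
    token = secrets.token_urlsafe(16)
    unused_tokens[token] = filename
    return {"url": f"/video/{token}"}

@app.route("/video/<token>")
def serve_once(token):
    # The first request consumes the token; any replay (e.g. by IDM) gets 403.
    filename = unused_tokens.pop(token, None)
    if filename is None:
        abort(403)
    return send_from_directory(VIDEO_DIR, filename)

Be aware that many players issue several range requests for a single playback, so in practice you may need to allow a short reuse window per token rather than strictly one request.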
Idea 2: Limit by referrer
You can look at the Referer header and, if the request comes from your website, allow the content to be downloaded; otherwise, reject it. Note: this can be spoofed, since a user can set the Referer header manually.
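A minimal sketch of that check, under the same assumption that the files are served from your own Flask app; ALLOWED_REFERRER and the route are placeholders.

from flask import Flask, abort, request, send_from_directory

app = Flask(__name__)
VIDEO_DIR = "/srv/videos"                        # hypothetical location of the video files
ALLOWED_REFERRER = "https://www.example.com/"    # your own site

@app.route("/video/<filename>")
def serve_if_referred(filename):
    referrer = request.headers.get("Referer", "")
    # Reject requests that do not claim to originate from your pages.
    # The Referer header can be spoofed, so treat this as a speed bump only.
    if not referrer.startswith(ALLOWED_REFERRER):
        abort(403)
    return send_from_directory(VIDEO_DIR, filename)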
Preventing a video from being downloaded and later re-uploaded is technically impossible. Once you let a user display the video, there is no way to stop them from recording those bits and replaying them later. There are partial measures, such as disabling right-click or using an obscure proprietary format, but I'm not familiar with DRM techniques.

Keep Alive and Multiple SSO Domino HTTP configuration

I have a problem in this specific scenario with my XPages application.
If my Domino HTTP server is configured with single-server session authentication, the Ext.lib Keep-Alive control works well and my session doesn't expire.
But if I use Domino HTTP configured with Multiple Servers SSO (LtpaToken), I can see in Firebug that the Ext.lib Keep-Alive control still works (I see the ping requests), yet I don't understand why my session still expires.
Does anyone have any suggestions?
Thank you.
P.S. My release is Domino 9 Social Edition on Linux 32-bit.
What kind of key did you use when you created the LTPA token?
When using WebSphere LTPA keys, a token is assigned and it will expire when the time specified in the field Expiration (minutes) elapses, no matter whether you are actively using your application or not.
When examining the documentation for products that use WebSphere (Sametime, Connections), I found that IBM suggests setting the expiration time to a long interval (such as 600 minutes) to minimize the risk of users being logged out in the middle of a working day. I admit that this does not sound like a good suggestion security-wise.
I assume it is the same when using Domino LTPA keys, with the added option of being able to specify Idle Session Timeout.
So, you can either increase the token expiration interval (depending on your requirements this could be an easy fix) or go with Stephan's suggestion. I don't know how to code his approach, but if I find a solution, I'll update this answer.
In a single-server setting the server tracks the validity of the cookie, so whenever you hit the server it is updated. In a multi-server environment you get a new cookie before expiry, so you need to process the incoming cookie and replace its predecessor. The easiest way is to use a regular page and an iframe.

S3, CloudFront and expiry date

I am using S3 to host a static website. The website is placed in an S3 bucket and distributed by CloudFront. It all works well, but we are facing a problem when we need to change specific files. If we change the index.html file in the S3 bucket, we do not get the latest file from CloudFront.
Should I be setting an expiry time on S3 for these static files, so that only after that time has passed will CloudFront look for a new version of the file and distribute it?
CloudFront uses the Cache-Control and Expires headers sent by the origin server to decide whether a resource should be stored in the cache and for how long it is considered fresh. If you don't control caching via response headers, CF considers each resource stale 24 hours after it was fetched from the origin. Optionally, you can configure a distribution to ignore cache control headers and use an expiry time that you specify for each resource.
When you update a file at the origin, CF will not attempt to refresh its copy until it expires. You can follow different strategies to have CF update cached copies.
1) The least efficient, and not recommended, way is to use invalidation. You can do it via the AWS console or the API.
2) Tell CF when to look for updated content by sending Expires headers. For example, if you have a strict policy for deploying new content to your website and you know that, say, you roll out a deployment almost every Thursday, you can send an Expires header with each resource from your origin, set to the next planned deployment date. (This will probably not work with S3 origins.)
3) The most efficient and recommended way is to use versioned URLs. A good practice is to include the last modified time of the resource in its access URI. With EC2 or other origins able to serve dynamic content this is fairly easy; with an S3 origin it's not that straightforward, if possible at all.
Since yours is an S3 origin, I'd therefore recommend invalidating the updated resources; a minimal sketch of the API call follows.
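Here is a minimal sketch of such an invalidation with boto3, assuming the call runs as part of your deployment step; the distribution ID and the path below are placeholders.

import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",   # placeholder: your distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/index.html"]},
        # CallerReference must be unique for each invalidation request.
        "CallerReference": str(time.time()),
    },
)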
It looks like you have to set the metadata on the S3 side:
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html
The best way I found to do this is to use BucketExplorer: go to "Batch Operation" > "Update Metadata" > "Add Metadata" and add "Cache-Control: max-age=604800, public" for a one-week cache period.
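If you prefer scripting it over BucketExplorer, the same metadata update can be sketched with boto3 by copying each object onto itself and replacing its headers; the bucket and key names below are placeholders.

import boto3

s3 = boto3.client("s3")
bucket = "my-static-site-bucket"   # placeholder
key = "index.html"                 # placeholder

s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    CacheControl="max-age=604800, public",   # one week
    ContentType="text/html",                 # re-state the type so it is not reset
    MetadataDirective="REPLACE",             # required to overwrite the existing headers
)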

OpenAM 10 iPlanetDirectoryPro cookie change

Several documents on the ForgeRock site mention changing the iPlanetDirectoryPro cookie name in OpenAM 10, but they never mention which file(s) to change it in. I've tried several, including AgentService.xml and AMAuth.xml, to no avail. Has anyone done this successfully?
You don't have to change it in files; the files you mentioned are 'OpenAM service descriptions', which are loaded into the configuration store when OpenAM is configured.
Later on you have to change the service attributes using either the console or ssoadm.
You can change the name of the SSO session tracking cookie by changing the value in 'server defaults' under 'servers and sites'.
If you have Agents running in normal SSO mode, be sure to set the value there as well.

How to force the user's browser cache to clear all the time?

We're working on a website. Our client wants to check the website daily, but they're facing a problem: whenever we make a change to the website, they have to clear their browser cache.
So I added the following header to my server configuration
Cache-Control: no-cache
As far as I can see, Firefox is receiving this header and I'm pretty sure it is obeying it.
My question is: is this "Cache-Control: no-cache" guaranteed to work, and does it work across all browsers (including IE)?
I find it's handy to use a "useless" version number in the requests. For example, instead of requesting script.js, request script.js?v=1.0
If you are generating your pages dynamically (PHP, etc) you can just keep the version number in a variable and only have to update it in one place whenever you update. If you want the content never to be cached, just use the output of time() as your version number.
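As a sketch of that idea in Python (the same approach works in PHP or any other server-side stack), a small hypothetical helper can append either a fixed release number or, when the content must never be cached, the current time:

import time

ASSET_VERSION = "1.0"   # bump this in one place on every deployment

def asset_url(path, never_cache=False):
    # asset_url("/js/script.js") -> "/js/script.js?v=1.0"
    # With never_cache=True the current timestamp is used, so every request
    # gets a unique URL and is never served from the cache.
    version = str(time.time()) if never_cache else ASSET_VERSION
    return f"{path}?v={version}"

The query string changes the URL as far as caches are concerned, so browsers fetch the new file without the client having to clear anything.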
EDIT: have you tried asking your client to change their browser caching settings? That way you can bypass the problem entirely.
