In IIS, is there a way to limit how many times a particular file can be served?

My webpage automatically creates a .pdf file and a link to go with it. I want the user to be able to download the file from that link, but only once (or maybe 5 times, or whatever I decide later).
Is there an easy way in IIS to limit the number of times files in a particular folder are served before requests are refused?

There is no IIS configuration setting that limits the number of times a file can be downloaded.
You can, however, implement this entirely in your application: add code that counts the downloads made by the currently logged-in user, and refuse further requests once the limit you set is reached.
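As a minimal sketch of that idea, assuming an ASP.NET Core application with authenticated users (the controller, route, PDF folder, and in-memory counter below are hypothetical; in a real application you would persist the count in a database):

using System.IO;
using System.Collections.Concurrent;
using Microsoft.AspNetCore.Mvc;

[ApiController]
public class DownloadController : ControllerBase
{
    // Limit per user per file; change to 5 (or whatever) later.
    private const int MaxDownloads = 1;

    // Hypothetical in-memory counter; use a database table in production.
    private static readonly ConcurrentDictionary<string, int> Counts = new();

    [HttpGet("download/{fileName}")]
    public IActionResult Download(string fileName)
    {
        // Key the count by user and file; assumes authentication is configured.
        string key = $"{User.Identity?.Name}:{fileName}";
        int count = Counts.AddOrUpdate(key, 1, (_, c) => c + 1);

        if (count > MaxDownloads)
            return StatusCode(403, "Download limit reached.");

        // Hypothetical folder where the generated PDFs are written.
        string path = Path.Combine(@"C:\inetpub\generated-pdfs", Path.GetFileName(fileName));
        if (!System.IO.File.Exists(path))
            return NotFound();

        return PhysicalFile(path, "application/pdf", fileName);
    }
}

A database-backed count also survives application restarts and works across multiple worker processes, which the static dictionary above does not.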

Related

For IIS, what is the best way to update static files while maintaining availability on a live website?

I've investigated this and found lots of related information, but nothing that answers my question.
A little background. A small set of files is served statically by IIS 10. These files usually need to be updated weekly, but never more than once an hour (unless someone manually runs an update utility for testing). The files are expected to be a couple of kilobytes in size, no larger than 10 KB. The update process can be run on the IIS server and will be written in PowerShell or C#.
My plan for updating files that are actively being served as static files by IIS is:
Copy the files to a temporary local location (on the same volume)
Attempt to move the files to the IIS static site location
The move may fail if the file is in use (by IIS). Implement a simple retry strategy for this.
It doesn't cause a problem if there is a delay in publishing these files. What I really want to avoid is IIS trying to access one of the files at just the wrong time, i.e. a race condition while my file replacement is in progress. I have no control over the HTTP client, which might be a program that isn't tolerant of the kind of error IIS could return in that situation, such as an HTTP status 404, "Not Found".
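A rough sketch of the copy-then-swap-with-retry plan above, assuming C# on .NET Core 3.0 or later (the paths, attempt count, and delay are placeholders, not anything IIS prescribes):

using System;
using System.IO;
using System.Threading;

class StaticFilePublisher
{
    // Copies a new version next to the live file, then swaps it in with a
    // short retry loop in case IIS briefly has the target open.
    static void PublishFile(string sourcePath, string sitePath)
    {
        // Stage the new content on the same volume as the site folder so the
        // final step is a rename rather than a lengthy copy.
        string tempPath = Path.Combine(Path.GetDirectoryName(sitePath)!, Path.GetRandomFileName());
        File.Copy(sourcePath, tempPath, overwrite: true);

        const int maxAttempts = 5;
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                // Rename over the live file (the overwrite flag requires .NET Core 3.0+).
                File.Move(tempPath, sitePath, overwrite: true);
                return;
            }
            catch (IOException) when (attempt < maxAttempts)
            {
                Thread.Sleep(TimeSpan.FromSeconds(1)); // simple fixed back-off before retrying
            }
        }
    }

    static void Main()
    {
        // Hypothetical paths for illustration only.
        PublishFile(@"C:\updates\data.json", @"C:\inetpub\wwwroot\static\data.json");
    }
}

Because the final step is a rename on the same volume, a client should see either the old file or the new one, not a partially written copy.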
I have a couple of random ideas:
HTTP GET the file from IIS before I replace it, with the intention of getting the file into IIS's cache, in the hope that this will improve the situation.
Just ignore this potential issue and hope for the best.
I can't be the only developer who's faced this. What's a good way to address this issue? (Or is it somehow not an issue at all and I'm just overthinking it?)
Thanks in advance for any help.

What solutions are there to back up millions of image files and sub-directories on a webserver efficiently?

I have a website that I host on a Linux VPS which has been growing over the years. One of its primary functions is to store images/photos, and these image files are typically around 20-40kB each. The way the site is organised at the moment is that all images are stored under a root folder 'photos', and beneath that root folder are many subfolders determined by the random filename. For example, an image with the file name abcdef1234.jpg would be stored in the folder photos/ab/cd/ef/. The advantage of this is that there are no directories with excessive numbers of images in them and accessing files is quick. However, the entire photos directory is huge and is set to grow.
I currently have almost half a million photos in tens of thousands of sub-folders, and whilst the system works fine, it is fairly cumbersome to back up, so I need advice on what I could do to make life easier for back-ups. At the moment I back up the entire photos directory each time by compressing the folder and downloading it. It takes a while and puts some strain on the server. I do this because every FTP client I have used takes ages to sift through all the files and find the most recent ones by date. Also, I would like to be able to restore the entire photo set quickly in the event of a catastrophic webserver failure, so even if I could back up the data incrementally, how cumbersome would it be to have to upload it all back stage by stage?
Does anyone have any suggestions, perhaps from experience? I am not a webserver administrator and my experience of Linux is very limited. I have also looked into CDNs and Amazon S3, but these would require a great deal of change to my site to make them work; perhaps I'll use something like that in the future.
Since you indicated that you run a VPS, I assume you have shell access which gives you substantially more flexibility (as opposed to a shared webhosting plan where you can only interact with a web frontend and an FTP client). I'm pretty sure that rsync is specifically designed to do what you need to do (sync large numbers of files between machines, and do so efficiently).
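For example, a first pass might look something like this (the paths and host are placeholders for your own):

rsync -az --delete /var/www/photos/ backupuser@backup-host:/backups/photos/

Because rsync only transfers files that are new or changed since the last run, repeat backups should take a small fraction of the time your current compress-and-download approach does, and --delete keeps the backup from accumulating files you have removed from the site.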
This gets into Superuser territory, so you might get more advice over on that forum.

NodeJS: How would one watch a large number of files/folders on the server side for updates?

I am working on a small NodeJS application that essentially serves as a browser-based desktop search for a LAN-based server that multiple users can query. The users on the LAN all have access to a shared folder on that server and are traditionally used to just placing files within that folder to share them with everyone, and I want to keep that process the same.
The first solution I came across was fs.watchFile, which has been touched on in other Stack Overflow questions. In the first question, user Ivo Wetzel noted that on a Linux system fs.watchFile uses inotify, but was of the opinion that fs.watchFile should not be used for large numbers of files/folders.
In another question about fs.watchFile, user tjameson first reiterated that on Linux inotify would be used by fs.watchFile and recommended just using a combination of node-inotify-plusplus and node-walk, but again stated this method should not be used for a large number of files. In a comment and response he suggested only watching the modified times of directories and then rescanning the relevant directory for file changes.
My biggest hurdle seems to be that even with tjameson's suggestion there is still a hard limit on the number of folders monitored (of which there are many, and growing). It would also have to be done recursively, because the directory tree is somewhat deep and can change at the lower branches, so I would have to monitor the following at every folder level (or, alternatively, monitor the modified time of the folders and then scan to find out what happened):
creation of file or subfolder
deletion of file or subfolder
move of file or subfolder
deletion of self
move of self
Assuming inotify has limits in line with what was said above, this alone seems like it may be too many watches when I have a significant number of nested subfolders. The really elegant way looks like it would involve kqueue, which I subsequently found as a topic of discussion on a better fs.watchFile in a Google group.
It seems clear to me that keeping a database of the relevant file and folder information is the appropriate course of action on the query side of things, but keeping that database synchronized with the actual state of the file system under the directories of concern will be the challenge.
So what does the community think? Is there a better or well known solution for attacking this problem that I am just unaware of? Is it best just to watch all directories of interest for a single change e.g. modified time and then scan to find out what happened? Is it better to watch all the relevant inotify alerts and modify the database appropriately? Is this not a problem which is solvable by a peasant like me?
Have a look at monit. I use it to monitor files for changes in my dev environment and restart my node processes when relevant project files change.
I recommend you take a look at the Dropbox API.
I implemented something similar with Ruby on the client side and Node.js on the server side.
The best approach is to keep hashes so you can check whether the files or folders have changed.

SharePoint 2010 Export-SPWeb: does it affect the site you run it on?

If I run Export-SPWeb against a SharePoint site, does it have an effect on the site apart from potentially slowing it down? Can people still read and edit the site?
Yes, people can still use (read/update) it, depending on which parameters you specify.
If you need to prevent access entirely, that is done with the GradualDelete parameter of Remove-SPSite rather than Export-SPWeb: the site collection is marked as deleted, which immediately prevents any further access to its content, and the data is then deleted gradually over time by a timer job instead of all at once, which reduces the impact on the performance of farm servers and SQL Server.
The NoFileCompression parameter lets you specify that no file compression is performed during the export process. Using this parameter can lower resource usage up to 30% during the export process. Using this parameter will result in a backup folder being created instead of a compressed file. If you use the NoFileCompression parameter in the Export-SPWeb command, you must also use it when you import the content by using the Import-SPWeb command.
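For reference, a matching export/import pair might look like this (the site URLs and export folder are placeholders):

Export-SPWeb -Identity http://intranet/sites/projects -Path \\fileserver\exports\projects -NoFileCompression
Import-SPWeb -Identity http://intranet/sites/projects-copy -Path \\fileserver\exports\projects -NoFileCompression

With NoFileCompression, the -Path argument points at a folder rather than a single .cmp file.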
More on MSDN.

Uploading large files in JSF

I want to upload a file that is >16GB. How can I do this in JSF?
When using HTTP, you'll face two limitations: one on the client side (the webbrowser) and one on the server side (the webserver). The average webbrowser (IE/FF/Chrome/etc.) has a limit of 2~4GB, depending on the make/version/platform. You cannot control this from the server side; the enduser has to change the browser settings themselves (and sometimes this isn't possible at all). The average webserver (Tomcat/JBoss/Glassfish/etc.) in turn has a limit of 2GB. You can configure this, but it still won't and can't remove the limitation on the webbrowser.
Your best bet is FTP. If you want to do this from a webpage, consider an applet which utilizes Apache Commons Net FTPClient; there are several ready-to-use open source/commercial ones, by the way.
You do, however, still need to take into account whether the disk file system on the FTP server side supports files that large. FAT32, for example, has a limit of 4GB per file; NTFS and several *nix file systems can go up to 16EB.
