I'm wondering if it's a good idea to use DFS to sync content across a web farm? Does anyone have any experience of this? We've used Robocopy in the past but found it a little patchy and clunky.
Essentially we want to avoid having to make ten changes to content each time one file changes (this happens a lot since our site is old and uses classic ASP).
From what I gather, DFS is usually meant for geographically separated locations and is used to make UNC shares appear simpler to users and easier to manage.
What I'd like to achieve with it is to only copy content changes to one of ten servers which will be the hub. I'd then configure the other nine servers as spokes using FRS.
Any thoughts on this methodology or suggestions for better setups would be much appreciated.
For performance reasons, don't point a web site to a UNC path. SMB file access is horribly inefficient and slow compared to pretty much any other file access method.
You can use DFS-R (via Windows 2003 R2) to enable replication between DFS-enabled shares, but definitely set up IIS to point to the share's local path, not the UNC path.
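On newer Windows Server versions the DFSR PowerShell module can script the same hub-and-spoke layout; here's a rough sketch, assuming made-up server names HUB01/SPOKE01 and a D:\wwwroot content path:

# Create a replication group and replicated folder, add the members and a hub-to-spoke connection
New-DfsReplicationGroup -GroupName "WebContent"
New-DfsReplicatedFolder -GroupName "WebContent" -FolderName "wwwroot"
Add-DfsrMember -GroupName "WebContent" -ComputerName "HUB01","SPOKE01"
Add-DfsrConnection -GroupName "WebContent" -SourceComputerName "HUB01" -DestinationComputerName "SPOKE01"
# Point each member at its local content path; the hub is the primary (authoritative) member
Set-DfsrMembership -GroupName "WebContent" -FolderName "wwwroot" -ComputerName "HUB01" -ContentPath "D:\wwwroot" -PrimaryMember $true
Set-DfsrMembership -GroupName "WebContent" -FolderName "wwwroot" -ComputerName "SPOKE01" -ContentPath "D:\wwwroot"

Repeat the connection and membership lines for the remaining spokes, and keep the IIS sites pointed at the local D:\wwwroot on each box.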
If you're using Win2003, make sure to install R2; DFS replication is much improved and no longer uses FRS. It will do what you want, even over a LAN.
Don't use FRS for this; it may get confused. Using DFS with another sync technique, such as Symantec Replication Exec, works fine. Make sure to create the correct site structure with IP ranges in Active Directory so that the correct servers are chosen by DFS.
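For completeness, on newer domain controllers that site and subnet structure can be scripted with the ActiveDirectory PowerShell module; a minimal sketch, with a made-up site name and subnet:

New-ADReplicationSite -Name "WebFarm-Site"
New-ADReplicationSubnet -Name "10.1.0.0/24" -Site "WebFarm-Site"

On Windows 2003 you'd do the same thing by hand in Active Directory Sites and Services.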
I tried that some years ago with FRS, when Windows 2003 was new (before SP1; things may have improved since then, but I'm not sure). FRS twice went completely nuts and deleted our files, not to mention the number of times it just clogged up and failed to recover by itself. FRS also only syncs files which are closed; files which are left open are not synced (when doing log file collection, for instance). FRS is fine in environments where you have a moderate number of relatively small files without too many changes going on on the server.
I very recently stopped using a UNC DFS path as the site root on a server; under heavy load the site would become unresponsive to requests. Pointing the site wwwroot to a local drive and restarting IIS quickly restored the site's speed. If you do go the DFS route, I'd recommend simply having it replicate to a local drive instead of using the UNC path as the wwwroot.
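For reference, on IIS 7 or later the repointing itself can be scripted with the WebAdministration module; a sketch, assuming the default site name and a hypothetical D:\wwwroot replica path:

Import-Module WebAdministration
# Point the site at the locally replicated copy instead of the UNC path
Set-ItemProperty "IIS:\Sites\Default Web Site" -Name physicalPath -Value "D:\wwwroot"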
Posting here as Server Fault doesn't seem to have the detailed Azure knowledge.
I have an Azure storage account with a file share. The file share is connected to an Azure VM through a mapped drive. An FTP server on the VM accepts a stream of files and stores them in the file share directly.
There are no other connections. Only I have Azure admin access; a limited number of support people have access to the VM.
Last week, for unknown reasons, 16 million files, which are nested in many sub-folders (by origin and date), moved instantly into an unrelated subfolder, three levels deep.
I'm baffled how this can happen. There is a clear, instant cut-off point when the files moved.
As a result, I'm seeing increased costs on LRS. I'm assuming because internally Azure storage is replicating the change at my expense.
I have attempted to copy the files back using a VM and AzCopy. This process crashed midway through, leaving me with a half-completed copy operation. This failed attempt took days, which makes me confident it wasn't the support guys dragging and dropping a folder by accident.
Questions:
Is it possible to instantly move so many files (and how)?
Is there a solid way I can move the files back, taking into account the half-copied files - I mean an Azure backend operation rather than writing an app / PowerShell / AzCopy?
Is there a cost-efficient way of doing this (I'm on the Transaction Optimized tier)?
Do I have a case here to get Microsoft to do something? We didn't move them... I assume something internally messed up.
Thanks
A tool that supports server-side copy (like AzCopy) can move the files quickly because only the metadata is updated. If you want to investigate the root cause, I recommend opening a support case: your best bet is to file a ticket with the Azure support team, who can help guide you on this on a best-effort basis.
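As a sketch of what a server-side copy back out of the subfolder could look like with AzCopy v10 (the account, share and folder names and SAS tokens below are placeholders), --overwrite=false skips anything your earlier half-completed run already copied:

azcopy copy "https://<account>.file.core.windows.net/<share>/<wrong-subfolder>?<SAS>" "https://<account>.file.core.windows.net/<share>?<SAS>" --recursive --overwrite=false

AzCopy performs this as a service-side copy, so the file contents don't travel through the machine running the command.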
We have several servers using shared IIS config stored on a network storage. After access to that storage is down for a few seconds (and then comes back), IIS isn't working until you do iisreset.
The problem seems to be that the local app pool config files become corrupted. To be more precise, the error given out is "Configuration file is not well-formed XML", and if you go to the app pool config, you see that instead of an actual config, it contains the following:
Now, trying to solve this we've come across the "Offline Files" feature and tried it for the shared applicationHost.config, but it wouldn't sync (saying another process is using the file, which is strange - I can easily change and save it).
The shared path starts with an IP (like \\1.2.3.4\...) - perhaps that's the issue (I can't figure out why it would be; just out of ideas at this point)?
Basically, I have two questions:
1) If the shared config is unavailable for a bit, how do I make IIS recover and not be left with corrupt files until an iisreset?
2) Any other ideas to prevent this situation altogether?
We did manage to get Offline Files to work - the problem was that the network drive is served over Samba and had to have oplocks on; otherwise it kept saying it can't sync because the file is in use by another process.
Now IIS does recover - in fact, it doesn't go down with the drive at all. However, since our websites are also on that drive, they are not available during a network outage (which is expected). The last strange thing is that it takes IIS about a minute to "see" them again after the drive is back online.
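For anyone hitting the same error, the relevant smb.conf settings on the file server look roughly like this (the share name and path are placeholders):

[iisshared]
   path = /srv/iisshared
   oplocks = yes
   level2 oplocks = yes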
I have a website that I host on a Linux VPS which has been growing over the years. One of its primary functions is to store images/photos and these image files are typically around 20-40kB each. The way the site is organised at the moment is that all images are stored in a root folder ‘photos’ and under that root folder are many subfolders determined by a random filename. For example, one image could have a file name abcdef1234.jpg and that would be stored in the folder photos/ab/cd/ef/. The advantage of this is that there are no directories with excessive numbers of images in them and accessing files is quick.

However, the entire photos directory is huge and is set to grow. I currently have almost half a million photos in tens of thousands of sub-folders and whilst the system works fine, it is fairly cumbersome to back up. I need advice on what I could do to make life easier for back-ups.

At the moment, I am backing up the entire photos directory each time and I do that by compressing the folder and downloading it. It takes a while and puts some strain on the server. I do this because every FTP client I use takes ages to sift through all the files and find the most recent ones by date. Also, I would like to be able to restore the entire photo set quickly in the event of a catastrophic webserver failure, so even if I could back up the data incrementally, how cumbersome would it be to have to upload each backup stage by stage?
Does anyone have any suggestions, perhaps from experience? I am not a webserver administrator and my experience of Linux is very limited. I have also looked into CDNs and Amazon S3, but these would require a great deal of change to my site in order to make them work - perhaps I’ll use something like that in the future.
Since you indicated that you run a VPS, I assume you have shell access which gives you substantially more flexibility (as opposed to a shared webhosting plan where you can only interact with a web frontend and an FTP client). I'm pretty sure that rsync is specifically designed to do what you need to do (sync large numbers of files between machines, and do so efficiently).
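As a minimal sketch of what that could look like from a nightly cron job (the paths, user and host names are placeholders):

rsync -az --partial --delete /var/www/photos/ backupuser@backup.example.com:/backups/photos/

-a preserves permissions and timestamps, -z compresses in transit, and --delete keeps the backup an exact mirror; rsync only transfers files that have changed since the last run, so subsequent backups are far cheaper than re-compressing the whole tree. A restore is the same command with source and destination swapped.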
This gets into Superuser territory, so you might get more advice over on that forum.
Hi, it's a question and it may be redundant, but I have a hunch there is a tool for this - or there should be, and if there isn't I might just make it - or maybe I am barking up the wrong tree, in which case correct my thinking.
My problem is this: I am looking for some way to migrate large virtual disk drives off a server once a week via an internet connection of only moderate speed, in a solution that must be able to be throttled for bandwidth because the internet connection is always in use.
I thought about it and the problem is familiar: large files that can be moved, that can be throttled, and that can easily survive disconnection/reconnection, etc. - the only solution I am familiar with that just does it perfectly is torrents.
Is there a way to automatically and strategically make torrents and automatically "send" them to a client's download list remotely? I am working with a Windows Hyper-V host but I use only Linux for the guests, and I could easily cook up a guest to do the copying, so consider it a Windows or Linux problem.
PS: the VHDs are "offline" copies of guest servers by the time I am moving them - consider them merely 20-30 GB dumb files.
PPS: I'd rather avoid spending money
Bittorrent is an excellent choice, as it handles both incremental updates and automatic resume after connection loss very well.
To create a .torrent file automatically, use the btmakemetainfo script found in the original bittorrent package, or one from the numerous rewrites (bittornado, ...) -- all that matters is that it's scriptable. You should take care to set the "disable DHT" flag in the .torrent file.
You will need to find a tracker that allows you to track files with arbitrary hashes (because you do not know these in advance); you can either use an existing open tracker, or set up your own, but you should take care to limit the client IP ranges appropriately.
This reduces the problem to transferring the .torrent files -- I usually use rsync via ssh from a cronjob for that.
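If digging out the original script is a pain, transmission-create (from the transmission-cli package) is another scriptable way to do the same thing; a sketch with a placeholder tracker URL and file name, where -p marks the torrent private (covering the disable-DHT point above):

transmission-create -p -t http://tracker.example.com/announce -o guest01-weekly.torrent /backups/guest01.vhd
rsync -e ssh guest01-weekly.torrent user@remotebox.example.com:/torrents/watch/

Most clients (Transmission, rTorrent, ...) can be pointed at a watch directory and will start downloading anything dropped into it, which takes care of the "send to a client download list" part.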
For point to point transfers, torrent is an expensive use of bandwidth. For 1:n transfers it is great as the distribution of load allows the client's upload bandwidth to be shared by other clients, so the bandwidth cost is amortised and everyone gains...
It sounds like you have only one client in which case I would look at a different solution...
wget allows for throttling and can resume transfers where it left off if the FTP/HTTP server supports resuming transfers... That is what I would use.
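For example, something along these lines, with a placeholder URL (-c resumes a partial download, --limit-rate caps the bandwidth):

wget -c --limit-rate=500k ftp://backuphost.example.com/vhds/guest01.vhd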
You can use rsync for that (http://linux.die.net/man/1/rsync). Search for the --partial option in man and that should do the trick. When a transfer is interrupted the unfinished result (file or directory) is kept. I am not 100% sure if it works with telnet/ssh transport when you send from local to a remote location (never checked that) but it should work with rsync daemon on the remote side.
You can also use that for sync in two local storage locations.
rsync --partial [-r for directories] source destination
edit: Just confirmed that it does work with ssh transport as well, so the caveat above no longer applies.
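Since throttling was also part of the requirement, note that rsync can cap its own bandwidth too; a sketch with placeholder paths (--bwlimit is in KB/s):

rsync --partial --bwlimit=500 -e ssh /backups/guest01.vhd backupuser@remotebox.example.com:/backups/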
I've inherited a website from an external company (who have gone bust) and I need to get it deployed to our servers (it's 3 web sites running together).
However, in testing, although the app runs correctly, performance is poor and I am pretty certain this is because the app writes files to the local disk. We currently only have a single disk in the server, but as it's virtual we can increase this to two fairly quickly.
Server is Windows 2008 running IIS7 and has two processors already.
Some of the files are 100 MB+, but there are also lots of small writes and log file writes as well.
My question is where to put which parts of the application?
Is it best to have the OS on one disk, Web sites and files/log on another?
or sites and OS on one and files on another?
Is there a "standard" point to start from?
If anyone could reply with something like this, but with an explanation so I understand WHY, that would be much appreciated!
e.g.
C: OS
C: WebSites
D: Files
D: Logs
Your background sounds like it's from Linux, because some people configure new servers taking the items you listed into account. We have a handful of IIS sites, but we mostly run Apache on Linux, so I'm taking a stab at this.
Where we have IIS, we also tend to have MS SQL Server. I would keep the Windows OS on a nice large partition and put everything IIS, including the root directory, on a second drive. IIS installs default to C:\, but I believe you can move the content to another drive. The names of the utilities and how to do this are best left to those who do it regularly.
In other words, I'd make a gross disk split OS/IIS, and then tune from there. However, make sure you have lots of disk space and can defragment.
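For what it's worth, a rough sketch of the mechanics of that split on IIS7 using appcmd (the site name and target paths are placeholders, not a recommendation of specific drive letters):

%windir%\system32\inetsrv\appcmd set vdir /vdir.name:"Default Web Site/" /physicalPath:"D:\Sites\MySite"
%windir%\system32\inetsrv\appcmd set site "Default Web Site" /logFile.directory:"D:\Logs"

The first line moves the site content to the second drive; the second changes where that site writes its IIS logs. The application's own file and log writes are configured inside the app itself, so those paths would need to be changed there.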