Disk configuration for server running IIS

I've inherited a website from an external company (who have gone bust) and I need to get it deployed to our servers (it's three web sites running together).
However, in testing, although the app runs correctly, performance is poor, and I am pretty certain this is because the app writes files to the local disk. We currently have only a single disk in the server, but as it's virtual we can increase this to two fairly quickly.
The server is Windows 2008 running IIS 7 and already has two processors.
Some of the files are 100 MB+, but there are also lots of small writes and log file writes as well.
My question is: where do I put which parts of the application?
Is it best to have the OS on one disk and the web sites and files/logs on another?
Or the sites and OS on one and the files on another?
Is there a "standard" point to start from?
If anyone could reply with something like this, but with an explanation so I understand WHY, that would be great!
e.g.
C: OS
C: WebSites
D: Files
D: Logs

Your background sounds like it's from Linux, because that's where people most often configure new servers with the kind of split you listed in mind. We have a handful of IIS sites; we mostly run Apache on Linux, so I'm taking a stab at this.
Where we have IIS, we also tend to have MS SQL Server. I would keep the Windows OS on a nice large partition and put everything IIS, including the root directory, on a second drive. IIS installs default to C:\, but I believe you can move it to another drive. The names of the utilities, and how to do this, are best left to those who do it regularly.
In other words, I'd make a gross disk split OS/IIS, and then tune from there. However, make sure you have lots of disk space and can defragment.
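To make that split concrete, here is a minimal sketch, assuming a hypothetical site name and D: paths, of how the site root and logs could be re-pointed at the second disk with appcmd.exe once the content has been copied over (for example with robocopy). The exact appcmd switches and names here are assumptions to verify against your own configuration, not a tested recipe:

    # Hedged sketch: re-point an IIS 7 site's content and log folders at the
    # second (D:) disk using appcmd.exe. The site name and target paths are
    # placeholders; back up applicationHost.config before changing anything.
    import subprocess

    APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"
    SITE = "Default Web Site"          # hypothetical site name
    NEW_ROOT = r"D:\WebSites\Site1"    # content already copied to the data disk
    NEW_LOGS = r"D:\Logs"              # IIS logs moved off the OS disk

    def run(*args):
        """Run appcmd and echo the command for auditing."""
        cmd = [APPCMD, *args]
        print(" ".join(cmd))
        subprocess.run(cmd, check=True)

    # Point the site's root virtual directory at the new physical path.
    run("set", "vdir", f"{SITE}/", f"/physicalPath:{NEW_ROOT}")

    # Move the site's W3C logs to the data disk as well.
    run("set", "site", SITE, f"/logFile.directory:{NEW_LOGS}")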

Related

RDweb: Cannot run legacy VB6 DLL more than once at the same time

We have a VB6 program installed on all of our clients' local C: drives, along with an associated VB6 DLL. The program was written in the 90s, before my time. It was not designed to run off a server or to allow multiple users to access the same EXE at the same time, hence why it's on everyone's C: drive. However, all running sessions of it refer to the same database on a separate SQL Server via ODBC. The database connectivity works fine.
OK, that's all history. Now everyone is working remotely (COVID-19)!
Today, our clients all remote into a virtual server via RD Web; we want them to avoid using our VPN. We have TWO virtual servers allocated to RDweb users, TS01 and TS02, and a license for up to 64 users. Every user is automatically allocated one of the two servers. If two people log in at the same time, one on TS01 and the other on TS02, everything is fine! It's when a third person logs in, is given either of the servers, and runs the program that it crashes, with this error:
The DLL is registered in both Computer\HKEY_CLASSES_ROOT\ and Computer\HKEY_LOCAL_MACHINE\SOFTWARE\, but not LOCAL_USER, which I think is necessary to make this a multi-user program within a server environment.
Converting the app is not an option, as we don't have a VB6 compiler. Do we need to wrap the DLL in "something"?
Any ideas on how to get this legacy program to run for multiple users are appreciated.
Thanks
Try installing/copying the VB program and related DLLs into each user's folder (for example, their home folder, with shortcuts pointing to these home directories). If the program runs, it should update the database in the same way. Sometimes the simplest workarounds are the best. If each session needs its own locked DLL working space, then give it that (you may have memory issues later).
Please see this
https://stackoverflow.com/a/345154/12011019
and
https://learn.microsoft.com/en-us/archive/msdn-magazine/2005/april/simplify-app-deployment-with-clickonce-and-registration-free-com
Some DLLs are not designed to be shared, and this behaviour cannot be modified without reprogramming. There are in-process and out-of-process DLLs, and there can be many other issues. If it's not working, it's not allowed by design.
https://support.microsoft.com/en-au/help/911359/a-client-application-may-intermittently-receive-an-error-message-when
Shared DLLs that are used system-wide do not have this limitation, as they are designed to be used by many applications.
Please try it and comment on the behaviour.
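As a rough illustration of the per-user copy suggestion above, a hedged Python sketch that stages a private copy of the EXE and DLL in each user's profile on a session host follows. Every name and path in it (LegacyApp.exe, Legacy.dll, the source share) is a hypothetical placeholder, and if the DLL is a registered COM component a private copy alone may not be enough without something like the registration-free COM approach from the second link.

    # Hedged sketch of the per-user copy workaround: stage a private copy of the
    # VB6 EXE and its DLL in each user's profile on the session host. Every name
    # and path here (LegacyApp.exe, Legacy.dll, the source share) is a
    # hypothetical placeholder.
    import shutil
    from pathlib import Path

    SOURCE = Path(r"\\fileserver\apps\LegacyApp")   # hypothetical master copy
    FILES = ["LegacyApp.exe", "Legacy.dll"]         # hypothetical file names
    USERS_ROOT = Path(r"C:\Users")
    SKIP = {"Public", "Default", "Default User", "All Users"}

    for profile in USERS_ROOT.iterdir():
        if not profile.is_dir() or profile.name in SKIP:
            continue
        target = profile / "LegacyApp"
        target.mkdir(exist_ok=True)
        for name in FILES:
            shutil.copy2(SOURCE / name, target / name)
        print(f"Staged {target}")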

Moving files from multiple Linux servers to a central Windows storage server

I have multiple Linux servers with limited storage space that create very big daily logs. I need to keep these logs, but I can't afford to keep them on my servers for very long before they fill up. The plan is to move them to a central Windows server that is mirrored.
I'm looking for suggestions on the best way to do this. What I've considered so far are rsync and writing a script in Python or something similar.
The ideal method of backup that I want is for the files to be copied from the Linux servers to the Windows server, then verified for size/integrity, and subsequently deleted from the Linux servers. Can rsync do that? If not, can anyone suggest a superior method?
You may want to look into using rsyslog on the Linux servers to send logs elsewhere. I don't believe you can configure it to delete logged lines with a verification step, and I'm not sure you'd want to either. Instead, you might be best off with an aggressive logrotate schedule plus rsyslog.
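For the rsync route mentioned in the question: if rsync is available on both ends (on the Windows side that usually means something like cwRsync, DeltaCopy, or Cygwin), its --remove-source-files option does roughly what's described, in that each file is transferred, checksum-verified by the receiver, and only then deleted from the sender. A minimal sketch with placeholder hosts and paths:

    # Hedged sketch: copy logs to the central server, let rsync verify each
    # transfer, and delete the source file only after a successful copy.
    # The source directory and destination are placeholders.
    import subprocess

    SOURCE = "/var/log/myapp/"                      # hypothetical log directory
    DEST = "backupuser@winserver:/archive/myapp/"   # hypothetical rsync/SSH target

    subprocess.run(
        [
            "rsync",
            "-av",                    # archive mode, verbose
            "--remove-source-files",  # delete each source file after a verified transfer
            SOURCE,
            DEST,
        ],
        check=True,
    )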

What solutions are there to backup millions of image files and sub-directories on a webserver efficiently?

I have a website that I host on a Linux VPS, and it has been growing over the years. One of its primary functions is to store images/photos, and these image files are typically around 20-40 kB each. The way the site is organised at the moment is that all images are stored under a root folder 'photos', split into many subfolders determined by the random filename. For example, an image named abcdef1234.jpg would be stored in the folder photos/ab/cd/ef/. The advantage of this is that there are no directories with excessive numbers of images in them, and accessing files is quick.

However, the entire photos directory is huge and is set to grow. I currently have almost half a million photos in tens of thousands of sub-folders, and whilst the system works fine, it is fairly cumbersome to back up. I need advice on what I could do to make life easier for back-ups. At the moment I am backing up the entire photos directory each time, by compressing the folder and downloading it. It takes a while and puts some strain on the server. I do this because every FTP client I use takes ages to sift through all the files and find the most recent ones by date. Also, I would like to be able to restore the entire photo set quickly in the event of a catastrophic webserver failure, so even if I could back up the data recursively, how cumbersome would it be to have to upload each part back stage by stage?

Does anyone have any suggestions, perhaps from experience? I am not a webserver administrator and my experience of Linux is very limited. I have also looked into CDNs and Amazon S3, but these would require a great deal of change to my site in order to make them work. Perhaps I'll use something like this in the future.
Since you indicated that you run a VPS, I assume you have shell access which gives you substantially more flexibility (as opposed to a shared webhosting plan where you can only interact with a web frontend and an FTP client). I'm pretty sure that rsync is specifically designed to do what you need to do (sync large numbers of files between machines, and do so efficiently).
This gets into Superuser territory, so you might get more advice over on that forum.
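Building on the rsync suggestion, here is a minimal sketch of an incremental, snapshot-style backup using --link-dest, so unchanged photos are hard-linked against the previous run instead of being re-copied; the paths, backup host, and "latest" symlink convention are assumptions for illustration:

    # Hedged sketch: daily snapshot of the photos tree with rsync. --link-dest
    # hard-links files that are unchanged since the previous snapshot, so each
    # run only transfers and stores new or modified photos. Paths, host, and
    # the "latest" symlink convention are placeholders.
    import datetime
    import subprocess

    SOURCE = "/var/www/site/photos/"            # hypothetical photo root
    DEST = "backup@backuphost:/backups/photos"  # hypothetical backup host
    today = datetime.date.today().isoformat()

    subprocess.run(
        [
            "rsync",
            "-a",                        # archive: recurse, keep times/permissions
            "--link-dest=../latest",     # relative to the destination directory
            SOURCE,
            f"{DEST}/{today}/",
        ],
        check=True,
    )
    # After a successful run, re-point "latest" at today's snapshot on the
    # backup host, e.g. ssh backup@backuphost "ln -sfn <today> /backups/photos/latest".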

File Server vs NAS for hosting media files

I have a web portal where my users can login and submit artworks (image, documents, etc.). This web portal is hosted in 2 load-balanced web servers.
Because of this load balancing, I'm thinking of using NAS to serve as a centralized media file storage for my web portal. I'm considering NAS because it's cheaper than a file server and it's easier to maintain.
Now the questions are:
File hosting - Is there any NAS device that can act as a file hosting server? Or do I need to create a virtual path in my web server to the NAS? This can be achieved easily if I use a file server: I can just bind a separate domain to it, something like media.mydomain.com, so all media files are served through that domain. I don't mind serving the media files through a virtual path from my web servers, something like mydomain.com/media. I would like to know whether a NAS can do either of the approaches above, and whether it's secure, easy to set up, etc.
Performance - This is more important because reads and writes are quite intensive. I have never used a NAS before. I'm thinking of getting two hard drives (2 TB, 15,000 RPM) configured for RAID 1. Would this be able to match the performance of a common file server? I know the answer to this question is relative, but I just want to see how a NAS can be used for file hosting, not just as a file-sharing device.
My web servers are running Windows Server 2008 R2 with IIS 7.5. I would appreciate it if anyone could also share best practices for integrating a NAS with Windows Server/IIS.
Thanks.
A NAS provides a shared location for information on a private network (at the very least, you shouldn't expose NAS protocols such as NFS and CIFS over the internet) and is not really designed as a web file host. That is not to say you can't configure a NAS as a web file host using IIS/Apache/nginx, but then you don't need your web servers. NAS setup is well documented for both Windows Server and most Unix/Linux distros, and both are relatively easy. A NAS is as secure as it is designed to be; you can use a variety of access control methods to secure one, depending on your implementation.
This really depends on your concurrent users and what kind of load you expect them to put on the system. For the most part, a 1 Gb LAN connection and a 15,000 RPM hard drive should give a NAS ample performance for a decent number of concurrent users, but I can't say for certain, because if a user sits there downloading hundreds of files at a time you can have issues. As with any web technology, wrap limits around user usage to prevent one user from bringing down your entire system. I'm not sure how you are defining a file server (a NAS is a file server), but if you think of a file server as a website that hosts files, a NAS will provide the same if not better performance, depending on where the device sits in relation to your web servers (and, again, on utilization). If you are worried about performance, you can always build a larger RAID array using RAID 5, RAID 6, or RAID 10, or use SSDs to increase storage performance. For the most part, the hardware constraints in any NAS are storage speed, network speed, RAM, and CPU. Again, this really depends on utilization, so test well, benchmark, and monitor performance.
Microsoft provides a tuning document for server 2008 r2 that is useful: http://msdn.microsoft.com/en-us/library/windows/hardware/gg463392.aspx
In my opinion, your architecture would be your two web servers referencing the NAS as a shared location, either through a virtual directory pointed at the NAS for your files or by handling the NAS location in code (code gives you a whole plethora of options around security and usage).
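For the virtual-directory route, a hedged sketch of wiring a UNC-backed /media directory into IIS 7.5 with appcmd is below; the share path, site name, and service account are placeholders, the switches should be double-checked against your IIS version, and the same commands would need to run on both load-balanced servers:

    # Hedged sketch: add a /media virtual directory backed by the NAS share on
    # IIS 7.5 using appcmd.exe. Share path, site name, and service account are
    # placeholders; do not hard-code real credentials in a script like this.
    import subprocess

    APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"
    SITE_APP = "Default Web Site/"       # hypothetical site/application
    UNC_PATH = r"\\nas01\media"          # hypothetical NAS share
    VDIR_USER = r"DOMAIN\svc_media"      # hypothetical account with read access
    VDIR_PASS = "********"               # supply securely at run time

    # Create the virtual directory pointing at the UNC path.
    subprocess.run(
        [APPCMD, "add", "vdir",
         f"/app.name:{SITE_APP}",
         "/path:/media",
         f"/physicalPath:{UNC_PATH}"],
        check=True,
    )

    # Attach pass-through credentials so IIS reaches the share as that account.
    subprocess.run(
        [APPCMD, "set", "vdir", f"{SITE_APP}media",
         f"/userName:{VDIR_USER}",
         f"/password:{VDIR_PASS}"],
        check=True,
    )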

Should DFS be used to sync wwwroot?

I'm wondering if it's a good idea to use DFS to sync content across a web farm? Does anyone have any experience of this? We've used Robocopy in the past but found it a little patchy and clunky.
Essentially we want to avoid having to make ten changes to content each time one file changes (this happens a lot since our site is old and uses classic ASP).
From what I gather, DFS is usually meant for geographically separated locations and is used to make UNC shares appear simpler to users and easier to manage.
What I'd like to achieve with it is to only copy content changes to one of ten servers which will be the hub. I'd then configure the other nine servers as spokes using FRS.
Any thoughts on this methodology or suggestions for better setups would be much appreciated.
For performance reasons, don't point a web site to a UNC path. SMB file access is horribly inefficient and slow compared to pretty much any other file access method.
You can use DFS-R (via Windows 2003 R2) to enable replication between DFS-enabled shares, but definitely set up IIS to point to the share's local path, not the UNC.
If you're using Windows 2003, make sure to install R2; DFS Replication is much improved and doesn't use FRS. It will do what you want, even over a LAN.
Don't use FRS for this; it may get confused. Using DFS and another sync technique, such as Symantec Replication Exec, works fine. Make sure to create the correct site structure with IP ranges in Active Directory so that the correct servers are chosen by DFS.
I tried that some years ago with FRS, when Windows 2003 was new (before SP1; things may have improved since then, but I'm not sure). FRS twice went completely nuts and deleted our files, not to mention the number of times it just clogged up and failed to recover itself. FRS also only syncs files that are closed; files that are left open are not synced (when doing log file collection, for instance). FRS is fine in environments where you have a moderate number of relatively small files and not too many changes going on on the server.
I have very recently disabled the UNC DFS as the site root on a server; under heavy load the site would become unresponsive to requests. Pointing the site wwwroot to a local drive and restarting IIS quickly restored the site speed. I have to recommend that if you go the DFS route, simply have it replicate to a local drive instead of using the UNC path as the wwwroot.
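Given the FRS experiences above, it can also be worth spot-checking a spoke against the hub once replication should have settled. A small, hedged sketch (the hub share and local path are placeholders) that flags files missing or different on the local replica:

    # Hedged sketch: compare a spoke's local replica against the hub's content
    # share and report files that are missing or differ. Both paths are
    # placeholders; run this on a spoke once replication should have settled.
    import hashlib
    from pathlib import Path

    HUB = Path(r"\\hub01\wwwroot$")     # hypothetical hub share
    LOCAL = Path(r"D:\wwwroot")         # local replicated copy that IIS points at

    def digest(path: Path) -> str:
        """MD5 of a file's contents (enough to spot replication drift)."""
        h = hashlib.md5()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    mismatches = []
    for hub_file in HUB.rglob("*"):
        if not hub_file.is_file():
            continue
        rel = hub_file.relative_to(HUB)
        local_file = LOCAL / rel
        if not local_file.exists() or digest(hub_file) != digest(local_file):
            mismatches.append(rel)

    print(f"{len(mismatches)} file(s) missing or different on this spoke")
    for rel in mismatches[:20]:
        print("  ", rel)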
