I am new to IIS and am currently doing a lab at my company.
I have 2 volumes, each storing the same HTML files that are used to serve the IIS site.
Is there a way to set my physical path to point to both of their directories, so that if one volume goes down, the other is still up and keeps the site alive?
If not, is there any other way for me to achieve this?
I'm afraid it is impossible to do failover between two different physical paths in IIS.
If you have multiple servers, you could build an ARR load balancer and specify the path you want to fail over. Then, if one server breaks down, ARR will mark it as unhealthy based on its health test, and requests will be routed to the other server with a different path.
https://learn.microsoft.com/en-us/iis/extensions/configuring-application-request-routing-arr/http-load-balancing-using-application-request-routing
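As a rough sketch of what that looks like in applicationHost.config (the farm name, server names and health-check page below are just examples), ARR keeps a server farm definition with a health test, plus a URL Rewrite rule that sends traffic to the farm:

<webFarms>
  <webFarm name="contentFarm" enabled="true">
    <server address="web01" enabled="true" />
    <server address="web02" enabled="true" />
    <applicationRequestRouting>
      <!-- Probe each server; a server that fails the test is marked unhealthy and skipped -->
      <healthCheck url="http://contentFarm/healthcheck.htm" interval="00:00:10" responseMatch="OK" />
      <loadBalancing algorithm="WeightedRoundRobin" />
    </applicationRequestRouting>
  </webFarm>
</webFarms>

<rewrite>
  <globalRules>
    <!-- Send all incoming requests to the farm defined above -->
    <rule name="RouteToContentFarm" patternSyntax="Wildcard" stopProcessing="true">
      <match url="*" />
      <action type="Rewrite" url="http://contentFarm/{R:0}" />
    </rule>
  </globalRules>
</rewrite>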
Related
We have several servers using a shared IIS config stored on network storage. After access to that storage goes down for a few seconds (and then comes back), IIS stops working until you do an iisreset.
The problem seems to be that the local app pool config files become corrupted. To be more precise, the error reported is "Configuration file is not well-formed XML", and if you open the app pool config, you see that instead of an actual config, it contains the following:
Now, trying to solve this, we've come across the "Offline Files" feature and tried it for the shared applicationHost.config, but it wouldn't sync (saying another process is using the file, which is strange - I can easily change and save it).
The shared path starts with an IP (like \\1.2.3.4\...) - perhaps that's the issue (I can't figure out why it would be; just out of ideas at this point)?
Basically, I have two questions:
1) If the shared config is unavailable for a bit, how do I make IIS recover and not be left with corrupt files until an iisreset?
2) Is there any other way to prevent this situation altogether?
We did manage to get Offline Files to work - the problem was that the network drive is served over Samba and had to have oplocks enabled; otherwise it kept saying it couldn't sync because the file was in use by another process.
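For anyone hitting the same thing, the relevant share settings in smb.conf ended up looking roughly like this (the share name and path here are made up - adjust to your setup):

[iis-config]
   path = /srv/iis-config
   read only = no
   # Offline Files would not sync until opportunistic locking was enabled
   oplocks = yes
   level2 oplocks = yes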
Now IIS does recover - actually, it doesn't go down with the drive at all. However, since our websites are also on that drive, they are not available during a network outage (which is expected); the last strange thing is that it takes IIS about a minute to "see" them again after the drive comes back online.
I want to set up 2 LAMP servers to load-balance web pages. I will only upload files to server A and want the files synced from server A to server B automatically. What is the best way to do it?
Don't try to keep two servers in sync. Store the files somewhere central on redundant storage, i.e. use RAID.
You can use high availability with shared-storage software like Openfiler, or you can write a script to replicate the data between the two servers, for example:
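A minimal sketch of the script approach is an rsync job pushed from server A by cron (the paths and hostname are examples; this assumes SSH key authentication from A to B):

# /etc/cron.d/webroot-sync on server A: push the web root to server B every minute
* * * * * root rsync -az --delete /var/www/html/ serverB:/var/www/html/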
I'm building a web app that will scale into a Linux cluster with Tomcat and nginx. There will be one nginx web server load-balancing multiple Tomcat app servers, and a database server behind them. All running on CentOS 6.
The app involves users uploading photos. I plan to keep all the images on the file system of the front nginx box and have pointers to them stored in the database. This way nginx can serve them full speed without involving the app servers.
The app resizes the image in the browser before uploading, so file sizes will not be too extreme.
What is the most efficient/reliable way of writing the images from the app servers to the nginx front-end server? I can think of several ways I could do it, but I suspect some kind of network file system would be best.
What are current best practices?
Assuming you do not use a CMS (Content Management System), you could use the following options:
If you have only one front-end web server, then the suggestion would be to store the files locally on the web server in a local Unix filesystem.
If you have multiple web servers, you could store the files on a SAN or NAS shared network device. This way you would not need to synchronize the files across the servers. Make sure that the shared resource is redundant; otherwise, if it goes down, your site will be down.
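To illustrate the single-front-end option on the asker's stack, here is a rough nginx sketch (the upstream addresses and the /images/ path are assumptions) that serves uploads straight from the local filesystem and proxies everything else to the Tomcat pool:

upstream tomcat_app {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    server_name example.com;

    # Uploaded photos are served directly off the local disk,
    # so these requests never touch the app servers.
    location /images/ {
        root /var/www/uploads;   # files live under /var/www/uploads/images/
        expires 30d;
    }

    # Everything else goes to the Tomcat app servers.
    location / {
        proxy_pass http://tomcat_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}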
I've inherited a website from an external company (who have gone bust) and I need to get it deployed to our servers (it's 3 web sites running together).
However, in testing, although the app runs correctly, performance is poor, and I am pretty certain this is because the app writes files to the local disk. We currently have only a single disk in the server, but as it's virtual we can increase this to two fairly quickly.
Server is Windows 2008 running IIS7 and has two processors already.
Some of the files are 100 MB+, but there are also lots of small writes and log file writes.
My question is where to put which parts of the application?
Is it best to have the OS on one disk, Web sites and files/log on another?
or sites and OS on one and files on another?
Is there a "standard" point to start from?
If anyone could reply with something like the layout below, but with an explanation so I understand WHY, that would be great!
e.g.
C: OS
C: WebSites
D: Files
D: Logs
Your background sounds like it's from Linux, because some people configure new servers taking the items you listed into account. We have a handful of IIS sites, but we mostly run Apache on Linux, so I'm taking a stab at this.
Where we have IIS, we also tend to have MS SQL Server. I would keep the Windows OS on a nice large partition and put everything IIS, including the root directory, on a second drive. IIS installs default to C:\, but I believe you can move it to another drive. The names of the utilities, and how to do this, are best left to those who do this regularly.
In other words, I'd make a gross disk split OS/IIS, and then tune from there. However, make sure you have lots of disk space and can defragment.
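As one concrete starting point, once the D: drive exists you can point the IIS log files at it (site content can be moved via IIS Manager); something along these lines should work for the logs (the drive letter and path are assumptions):

rem Send IIS logs for new sites to the D: drive (run from an elevated prompt)
%windir%\system32\inetsrv\appcmd.exe set config -section:system.applicationHost/sites /siteDefaults.logFile.directory:"D:\Logs" /commit:apphost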
Looking at implementing load balancing for Windows-hosted web sites on IIS, but I want to know the issues involved in doing so. The websites are ASP.NET, with a mix of custom authentication and ASP.NET authentication. For example:
Files: storing them on a networked share is the obvious choice, but I expect it will put a lot of demand on the file server, making the websites slow and possibly resulting in the "network BIOS command limit has been reached" issue, or slowing down other data traffic.
Session state: is it best to store it in a SQL database, to prevent issues when responses come from a different server? How do I do this? A web.config setting, I presume?
Server downtime: when a server goes down (crash, hardware failure etc.) or is updated, will the web sites still function?
DNS: as it is load balanced, I assume the sites will need two IP addresses? We are using Microsoft DNS.
Regarding the files, could they easily be synced between servers? There may be files up to 8 MB (occasionally bigger). DFS is an option that may be considered in the future, but are there other technologies (besides robocopy) that can be used?
Files: pointing to a network share solves some problems and introduces others, such as a single point of failure. Clustering the file share mitigates that a bit, but you still have a single copy of the data. I prefer a shared-nothing approach where the content is local to each webserver, eliminating the SPOF and removing the network from the equation when serving content. I'm currently using robocopy to do this, not DFS - I don't like the AD tie-in or the constraints on server placement in a zoned network infrastructure. Robocopy is not ideal, either.
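For reference, the kind of robocopy mirror job I mean looks roughly like this (the share name, paths and retry values are just examples), run from the source webserver on a schedule:

rem Mirror local content to the second webserver; /MIR also deletes orphaned files on the target
robocopy C:\inetpub\wwwroot \\WEB02\wwwroot$ /MIR /FFT /Z /R:2 /W:5 /NP /LOG:C:\logs\contentsync.log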
Session state - agree, ASPState SQL database. Make sure you set up a common machineKey in your web.config for all of your servers, though, while I'm thinking about this.
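Something along these lines, with identical keys on every server in the farm - the key values below are placeholders, not real keys, so generate your own:

<machineKey
    validationKey="PASTE-YOUR-GENERATED-VALIDATION-KEY-HERE"
    decryptionKey="PASTE-YOUR-GENERATED-DECRYPTION-KEY-HERE"
    validation="SHA1"
    decryption="AES" />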
Drop

<sessionState
    allowCustomSqlDatabase="true"
    cookieName="yourcookiename"
    mode="SQLServer"
    cookieless="false"
    sqlConnectionString="connstringname"
    sqlCommandTimeout="10"
    timeout="30" />
and
<connectionStrings>
<add name="connstringname" connectionString="Data Source=sqlservername;Initial Catalog=DTLAspState;Persist Security Info=True;User ID=userid;Password=password" providerName="System.Data.SqlClient"/>
</connectionStrings>
into web.config.
Lastly, run
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regsql.exe -S sqlservername -E -ssadd -sstype p
Server downtime - the website will still function when an individual server goes down. This depends on your load balancer, of course - make sure it's not using 'ping' as the sole metric to determine whether it should be sending requests to that webserver.
DNS - single entry. Shame that HTTP isn't SRV record aware.