Looking at implementing load balancing for Windows-hosted web sites on IIS, but I want to know the issues involved in doing so. The websites are ASP.NET, with a mix of custom authentication and ASP.NET authentication. For example:
Files: storing them on a networked share is the obvious choice, but I expect it will put a lot of demand on the file server, making the websites slow and possibly resulting in the "network BIOS command limit has been reached" issue, or slowing down other data traffic.
Session state: is it best to store it in a SQL database, to prevent issues when responses come from a different server? How is this done? A web.config setting, I presume?
Server downtime: when a server goes down (crash, hardware failure, etc.) or is updated, will the web sites still function?
DNS: as it is load balanced, I assume the sites will need two IP addresses? We are using Microsoft DNS.
Regarding the files, could they easily be synced between servers? Files may be up to 8 MB (occasionally bigger). DFS is an option that may be considered in the future, but are there other technologies (besides robocopy) that can be used?
Files: Pointing to a network share solves some problems and introduces others, such as a single point of failure. Clustering the fileshare mitigates that a bit, but you still have a single copy of the data. I prefer a shared-nothing approach where the content is local to each webserver, eliminating the SPOF and removing the network from the equation when serving content. I'm currently using robocopy to do this, not DFS - I don't like the AD tie-in or the constraints on server placement in a zoned network infrastructure. Robocopy is not ideal, either.
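As a rough sketch of that robocopy sync (the share, paths and log location here are hypothetical, not the actual setup), a scheduled task on each webserver can mirror the content from wherever you deploy it:
robocopy \\deploy01\wwwroot D:\inetpub\wwwroot /MIR /FFT /R:2 /W:5 /LOG+:D:\logs\content-sync.log
/MIR mirrors the source (including deletions), /R and /W keep retries short so a locked file doesn't stall the run, and the appended log gives you something to check when copies drift.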
Session state - agree, use the ASPState SQL database. Make sure you also set up a common machineKey in the web.config on all of your servers, though, while I'm thinking about it (see the sketch after the aspnet_regsql step below).
Drop
<sessionState
    allowCustomSqlDatabase="true"
    cookieName="yourcookiename"
    mode="SQLServer"
    cookieless="false"
    sqlConnectionString="connstringname"
    sqlCommandTimeout="10"
    timeout="30"/>
and
<connectionStrings>
<add name="connstringname" connectionString="Data Source=sqlservername;Initial Catalog=DTLAspState;Persist Security Info=True;User ID=userid;Password=password" providerName="System.Data.SqlClient"/>
</connectionStrings>
into web.config.
Lastly, run
C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regsql.exe -S sqlservername -E -ssadd -sstype p
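For the shared machineKey mentioned above, here's a minimal sketch (the key values are placeholders - generate your own pair and use the identical values on every server), which goes inside <system.web> in each web.config:
<machineKey
    validationKey="PASTE-GENERATED-VALIDATION-KEY"
    decryptionKey="PASTE-GENERATED-DECRYPTION-KEY"
    validation="SHA1"
    decryption="AES" />
Without a common key, forms authentication tickets and ViewState generated on one server can't be validated on another.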
Server downtime - the website will still function when an individual server goes down. This depends on your load balancer, of course - make sure it isn't using 'ping' as the sole metric for deciding whether it should send requests to a given webserver.
DNS - a single entry, pointing at the load-balanced address. Shame that HTTP isn't SRV-record aware.
I am new to IIS and am currently doing a lab for my company.
So I have 2 volumes, each storing the same HTML files that are used to serve the IIS website.
Is there a way to set my physical path to point to both directories, so that if one volume goes down, the other is still up and keeps the website alive?
If not, is there any way for me to achieve this?
I'm afraid it is impossible to do failover between two different physical paths in IIS.
If you have multiple servers, you could build an ARR load balancer and specify the paths you want to fail over between. Then if one server breaks down, ARR will mark it as unhealthy due to the health test, and requests will be routed to the other server with its own path.
https://learn.microsoft.com/en-us/iis/extensions/configuring-application-request-routing-arr/http-load-balancing-using-application-request-routing
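As a rough sketch of what ARR stores once the farm is set up (the server names and health-check URL below are made up, and attribute details can vary by ARR version), the farm lives in applicationHost.config something like this:
<webFarms>
  <webFarm name="myFarm" enabled="true">
    <server address="web01" enabled="true" />
    <server address="web02" enabled="true" />
    <applicationRequestRouting>
      <!-- ARR polls this URL; a server failing the test is taken out of rotation -->
      <healthCheck url="http://myFarm/healthcheck.htm" interval="00:00:10" responseMatch="OK" />
    </applicationRequestRouting>
  </webFarm>
</webFarms>
ARR pairs this with a URL Rewrite rule that routes incoming requests to the farm; the ARR UI in IIS Manager creates both pieces for you, as the linked walkthrough shows.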
In the Azure Management Portal, you can configure your website. As an example, you can change the PHP version your website is using. When you have edited a configuration option, you have to click “Save”.
So far, so good. But you also have the option to restart your site (by clicking “Restart” next to “Save”).
My question is, when should you restart your website? Are there some configuration changes that require a restart, and others that don't? I haven't found any hints in the user interface.
Are there other situations that require a restart? Say, the website has been running for a given time without a restart?
Also, what are the consequences of restarting a website? Does it affect cookies/sessions in any way (i.e. delete a user's shopping cart or log them out)? Are there any other consequences I should be aware of?
Generally speaking, you may want to restart your website because of application performance issues. For example, you may have a memory leak in your application, connections not getting closed, or other things that degrade the performance of the application over time. As you monitor your website and observe conditions like these, you may decide to restart it. Even better, you may even automate the task of restarting when these conditions occur. Anyway, these kinds of things are not unique to Azure Websites; you would take similar actions for a website running on-premises.
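If you do decide to automate the restart (for example from a monitoring job), the Azure CLI can do it in one line - the site and resource group names here are made up:
az webapp restart --name mysite --resource-group my-resource-group
The same restart operation is also available through the portal button and the management REST API, so use whichever fits your automation.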
As for configuration changes, if you make a change to your web.config file, this change is detected and your website is restarted automatically for you. Similarly, if you make configuration changes in the CONFIG page of your website in the Azure Management Portal, such as application settings, connection strings, etc., then Azure Websites will detect the change to your environment and automatically restart it.
Indeed, restarting a website will result in any session data kept in memory being lost for that instance. Additionally, if you have startup/initialization code that takes time to complete then that will have to be rerun. Again, this is not anything unique to Azure Websites though.
I have a slight problem; first, a bit of back story. Recently I've been trying to test out Univention, which is a Linux distribution with the goal of being able to replace Microsoft Active Directory.
I tested it locally and all went reasonably well after a few minor issues. I then decided to test it remotely, as the company wants to allow remote users to access this, so I used myhyve.com to host it. It has now been set up successfully and works reasonably well.
However:
My main problem is DNS-based: when trying to connect to the domain, the only way Windows will recognize it is by editing the network adapter and setting the IPv4 DNS server address to the IP address of the server hosting the Univention Active Directory replacement. Although this does allow everything to work, it's not ideal, and DNS lookups on the internet take considerably longer. I was wondering if anyone had any ideas, or has done something similar, encountered this problem before and knows a workaround. I want to avoid setting up a VPN if possible.
After initially registering the computer on the domain, I am able to remove the DNS server address and just use a couple of amendments to the HOSTS file to keep it running, but this still sometimes leads to issues connecting to the domain controller and is not ideal. Any ideas and suggestions would be gratefully received.
Michael
For the HOSTS entries, the most likely issue is that there are several service (SRV) records a computer in the domain needs. I'm not sure whether these can be provided via the HOSTS file or not, but you'll definitely have authentication issues if they are missing. To see the records your domain is using, issue the following command on the UCS system:
/usr/share/univention-samba4/scripts/check_essential_samba4_dns_records.sh
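To confirm from the Windows client that one of those records actually resolves against the UCS server, a quick check looks like this (the domain name and server IP are placeholders for your own):
nslookup -type=SRV _ldap._tcp.dc._msdcs.yourdomain.intranet 192.0.2.10
If the record only resolves when the UCS server is set as the client's DNS server, that would explain the intermittent trouble connecting to the domain controller with the HOSTS-file approach.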
For the slow resolution of DNS records, there are several points where you could start looking. My first test would be whether or not you are using a forwarder for external DNS requests, and whether the forwarder has decent speed. To check if you are using one, type:
ucr search dns/forwarder
If you get a valid IP for any of the UCR variables dns/forwarder1, dns/forwarder2 or dns/forwarder3, you are forwarding your DNS requests to a different server. If all of them are empty or not valid IPs, then your server is doing the resolution itself.
Not using a forwarder is often slow, as the DNS server's caching is optimized for AD operations, like round-robin load balancing. Likewise, a number of ISPs require you to use a forwarder to minimize DNS traffic. You can simply define a forwarder using ucr; I use Google's IPv4 resolver in this example:
ucr set dns/forwarder1='8.8.8.8'
The other scenario might be a slow forwarder. To check that, try to query the forwarder directly using the following command:
dig univention.com @$(ucr get dns/forwarder1)
If it takes long, then there is nothing the UCS server can do; you'll simply have to set a different forwarder with the ucr command above.
If neither of the above helps, the next step would be to check whether there are error messages from the named daemon in the syslog file. Normally these appear when someone has tried to manually remove software or when the firewall configuration has been changed.
Kevin
Sponsored post, as I work for Univention North America, Inc.
I have a web portal where my users can log in and submit artwork (images, documents, etc.). This web portal is hosted on 2 load-balanced web servers.
Because of this load balancing, I'm thinking of using a NAS to serve as centralized media file storage for my web portal. I'm considering a NAS because it's cheaper than a file server and easier to maintain.
Now the questions are:
File hosting - Is there any NAS device that can act as a file hosting server? Or do I need to create a virtual path on my web servers pointing to the NAS? This can be achieved easily with a file server: I can just bind a separate domain to it, something like media.mydomain.com, so all media files are served through that domain. I don't mind serving the media files through a virtual path from my web servers, something like mydomain.com/media. I would like to know if a NAS can support either of the approaches above, and whether it's secure, easy to set up, etc.
Performance - This is more important because reads and writes are quite intensive. I have never used a NAS before. I'm thinking of getting 2 hard drives (2 TB, 15,000 RPM) configured for RAID-1. Would this be able to match the performance of a common file server? I know the answer to this question is relative, but I just want to see how a NAS can be used for file hosting, not just as a file sharing device.
My web servers are running Windows Server 2008 R2 with IIS 7.5. I would appreciate it if anyone could also share best practices for integrating a NAS with Windows Server/IIS.
Thanks.
A NAS provides a shared location for information on a private network (at the very least, you shouldn't expose NAS technologies such as NFS and CIFS over the internet) and is not really designed as a web file host. That is not to say you can't configure a NAS as a web file host utilizing IIS/Apache/nginx, but then you don't need your web servers. NAS setup is well documented for both Windows Server and most Unix/Linux distros, and both are relatively easy. A NAS is as secure as it is designed to be; you can utilize a variety of access control methods to secure it depending on your implementation.
This really depends on your concurrent users and what kind of load you expect them to put on the system. For the most part, a 1 Gb LAN connection and a 15,000 RPM hard drive in a NAS should provide ample performance for a decent number of concurrent users, but I can't say for certain, because if a user sits there downloading hundreds of files at a time you can have issues. As with any web technology, wrap limits around user usage to prevent one user from bringing down your entire system. I'm not sure how you are defining a file server (a NAS is a file server); if you think of a file server as a website that hosts files, a NAS will provide the same if not better performance depending on where the device is in relation to your web servers (and, again, on utilization). If you are worried about performance, you can always build a larger RAID array using RAID 5, RAID 6 or RAID 10, or use SSDs to increase storage performance. For the most part, in any NAS the hardware constraints are usually storage speed, network speed, RAM and CPU. Again, this really depends on utilization, so test well, benchmark, and monitor performance.
Microsoft provides a tuning document for server 2008 r2 that is useful: http://msdn.microsoft.com/en-us/library/windows/hardware/gg463392.aspx
In my opinion, your architecture would be your 2 web servers referencing the NAS as a shared location, either using a virtual directory pointed at the NAS for your files or handling the NAS location in code (using code provides a whole plethora of options around security and usage).
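As a rough sketch of the virtual-directory route (the site name, virtual directory name and share are all hypothetical), from an elevated PowerShell prompt on each web server:
Import-Module WebAdministration
# create a /media virtual directory on the site, backed by the NAS share
New-WebVirtualDirectory -Site "Default Web Site" -Name "media" -PhysicalPath "\\nas01\media"
You'd still need to set the UNC credentials for that virtual directory ("Connect as..." in IIS Manager) or otherwise make sure the application pool identity can reach the share.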
I'm looking to launch a Linux EC2 instance.
Although I understand Linux quite well, my ability to secure/harden a Linux OS would undoubtedly leave me vulnerable to attack; e.g., there are others who know more about Linux security than I do.
I'm looking to just run Linux, Apache & PHP 5.
Are there any recommended Amazon AMIs that come pre-hardened, running Linux/Apache/PHP or something similar to this?
Any advice would be greatly appreciated.
Thank you.
Here is an older article regarding this (I haven't read it, but it's probably a good place to start): http://media.amazonwebservices.com/Whitepaper_Security_Best_Practices_2010.pdf
I would recommend a few best practices off the top of my head:
1) Move to VPC, and control inbound and outbound access.
2a) Disable password authentication in SSH & only allow SSH from known IPs (see the sshd_config sketch after this list)
2b) If you cannot limit SSH access via IP (due to roaming, etc.), allow password authentication and use Google Authenticator to provide multi-factor authentication.
3) Put an elastic load balancer in front of all public facing websites, and disable access to those servers except from the ELB
4) Create a central logging server, that holds your logs in a different location in case of attack.
5) Change all system passwords every 3 months
6) Employ an IDS; as a simple place to start, I would recommend Tripwire.
7) Check for updates regularly (you can employ a monitoring system like Nagios w/NRPE to do this on all your servers). If you're not a security professional you probably don't have time to be reading Bugtraq all day, so use the update services provided by your OS (on CentOS/RHEL that's yum).
8) Periodically (every quarter) do an external vulnerability assessment. You can learn and use Nessus yourself (for non-corporate use) or use a third party such as Qualys.
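For point 2a, a minimal sketch of the relevant /etc/ssh/sshd_config settings (the account name is just an example; reload sshd after editing):
# no direct root logins
PermitRootLogin no
# key-based authentication only
PasswordAuthentication no
# limit which accounts may log in over SSH (example account name)
AllowUsers youradminuser
Restricting the source IPs themselves is better done in the security group (or VPC network ACL) than in sshd.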
If you're concerned and in doubt, contract a security professional for an audit. This shouldn't be too cost-prohibitive and can give you some great insight.
Actually, you can always relaunch your server from a pre-configured AMI if something happens.
It can be done very easily with Auto Scaling, for example. Use SSH without a password (key-based authentication only). Adjust your Security Groups accordingly. Here's a good article on Securing Your EC2 Instance.
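As a small sketch of the passwordless-SSH part (the key path and key pair name are made up; assumes a reasonably recent AWS CLI):
# generate a local key pair
ssh-keygen -t rsa -b 4096 -f ~/.ssh/ec2-key
# import the public half as an EC2 key pair, then reference it when launching instances
aws ec2 import-key-pair --key-name my-ec2-key --public-key-material fileb://~/.ssh/ec2-key.pub
Instances launched with that key pair accept the private key instead of a password, which pairs nicely with disabling password authentication in sshd.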
You have to understand 2 things:
Tight security makes life hard for attackers as well as for you...
Security is an ongoing task.
Having your server secure at a specific point in time doesn't say anything about the future.
New exploits and patches are published every day, and a lot of "development" work can render security unstable.
Solution?
You might consider services like https://pagodabox.com/
where you get PHP resources without having to manage Linux, security and so on...
Edit:
Just to emphasize...
Running a production system, where you are responsible for the ongoing security of the site, forces you to do much more than just starting up with a secure instance!
Otherwise, your site will become much less secure as time passes by (and as more people learn about it).
As I see it (for a real production site), you have 2 options:
Get a security expert (in-house or freelance) who will check your site regularly and apply needed patches and so on.
Get a hosting service that will manage the security aspects for you.
I pointed to one service like that, where you can put your PHP code in and they will take care of everything else for you.
I would consider this type of service for every production site that doesn't have the ability to get real, periodic security checkups/fixes.
Security is a very complex field... do not underestimate the risks...
One of the things I like most about using Amazon is how quickly and easily I can restrict my attack surface. I've made a prioritized list here. Near the end it gets a bit advanced.
Launch in a VPC
Put your webserver behind a load balancer, ELB or ALB (terminate SSL there too)
Only allow web traffic from your load balancer
Create a restrictive security group (see the CLI sketch at the end of this answer). The only things allowed into your host should be incoming traffic from the load balancer and SSH from your IP (or your DHCP subnet if your ISP does not offer a static address)
Enable automatic security updates
yum-cron (amazon linux)
or unattended-upgrades (ubuntu)
Harden ssh
disallow root login and default amazon accounts
disallow password login in favor of ssh keys
Lock down your AWS root account with 2FA and a long password.
Create and use IAM credentials for day-to-day operations
If you have a data layer, deploy encrypted RDS and put it in a private subnet
Explore connecting to RDS with IAM credentials (no more db password saved in a conf file)
Check out YubiKey for 2FA on SSH.
Advanced: For larger or more important deployments you might consider using something like ThreatStack. They can warn you of AWS misconfiguration (an S3 bucket containing customer data open to the world?) and of security vulnerabilities in packages on your hosts. They also alert on signals of compromise and keep a command log, which is useful for investigating security incidents.
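For the restrictive security group, a minimal AWS CLI sketch (the group ID and office IP below are placeholders):
# allow SSH only from one known address
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr 203.0.113.25/32
The web port gets a similar rule whose source is the load balancer's security group rather than a CIDR range, so nothing can reach the webserver directly from the internet.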