Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I am thinking of moving a website from a VPS to Windows Azure Web Sites. After doing a load test, I accidentally took down my test website by going around 30 MB over the daily bandwidth limit.
This made me wonder what would happen if my website were suddenly hit by a DDoS attack. I'm pretty sure it would max out the daily and hourly limits in no time and, even worse, redirect all users to the Azure over-limit notification page.
Is there anything that can be done about that? I know the daily bandwidth limit will be harder to reach once I put all the images on a CDN, but I'm worried about what would happen if there's a spike or something.
Sorry for such a question with no head and tail. I hope you guys will understand.
Windows Azure has built-in load balancers that will stave off most (if not all) DoS-type attacks. The truth is, Microsoft is very hush-hush about the specifics of how its load balancers protect against malicious attacks (as it should be).
An added benefit to hosting your applications in the cloud is that you can take advantage of auto-scaling when you get heavy loads (malicious or otherwise) so your site won't go down.
You might want to check out the Security Best Practices For Developing Windows Azure Applications document for more information on this.
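If you do lean on auto-scaling, it can be configured in the portal or scripted. As a sketch only - the modern Azure CLI shown here postdates this answer, and all resource names are made-up placeholders:

```shell
# Placeholder resource names; assumes the Azure CLI ("az") is installed
# and you are logged in to a subscription.

# Create an autoscale setting on an App Service plan: 1 to 3 instances.
az monitor autoscale create \
  --resource-group MyResourceGroup \
  --resource MyAppServicePlan \
  --resource-type Microsoft.Web/serverfarms \
  --name MyAutoscaleSetting \
  --min-count 1 --max-count 3 --count 1

# Scale out by one instance when average CPU exceeds 70% over 5 minutes.
az monitor autoscale rule create \
  --resource-group MyResourceGroup \
  --autoscale-name MyAutoscaleSetting \
  --condition "CpuPercentage > 70 avg 5m" \
  --scale out 1
```

Note that autoscale caps the damage from a traffic flood to whatever `--max-count` you set; it does not itself distinguish legitimate spikes from malicious ones.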
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
How long does it take to spin up a new instance of a website on the Azure platform?
If I have 1 instance and I want to increase to, say, 3 - how long will it take for the other 2 instances to start serving requests?
Thanks
With Azure Websites it is nearly instant (a few seconds) - under the assumption that there is incoming traffic to activate those instances. Azure Websites already has a pool of machines standing by, so the only thing you notice is a cold start of your site on a new machine.
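For what it's worth, changing the instance count can also be scripted rather than clicked through the portal. A sketch with the current Azure CLI (which postdates this answer; the names are placeholders):

```shell
# Placeholder names; assumes the Azure CLI is installed and logged in.
# Scale the App Service plan (and all sites on it) from 1 to 3 instances.
az appservice plan update \
  --name MyPlan \
  --resource-group MyResourceGroup \
  --number-of-workers 3
```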
I just did this. It took just under 10 minutes. Although not instant, this seems reasonable for most scenarios, especially if you know that your site is about to be hit. What is important though, is that this answer is really only a guess. There are many variables that could affect this massively:
I was using a smallish .NET web service - a larger app could be slower to deploy?
My service runs on Windows Server 2012 - are other editions faster/slower?
Do MS sometimes have new VMs on standby for quick deployment?
I'm sure the Azure centres are busier/slower at certain times than others
If you have a large/slow set of startup scripts, this will all add time
I don't know answers to any of those but hopefully this gives you an idea.
EDIT: Just to be clear - I did this on Cloud Services, not "Web Sites", which I don't have any of! I'm never sure whether OPs know the difference - I called my Cloud Services "Web Sites" for ages!
Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I found some technical articles which mention that I need to have three separate servers for a SharePoint production environment:
- First one is for the Database,
- Second server for Application,
- and the third for the front-end.
But in my case I am planning to have only two servers: one for the database and the other for the application and front-end. Will it still be valid to have two servers, bearing in mind that my deployment is fairly small, with around 60 internal users and around 100 external users?
You can set it up this way. The difference will be in how many SharePoint Service Applications you start on each box.
In environments that have three machines you will see that there is one box dedicated to the web front end and another that runs the desired SharePoint Service Applications such as Search, Excel Services, PerformancePoint, etc. Since those applications are memory- and processor-intensive, it is best to keep them on a separate machine.
Your performance may vary based on the scale of hardware in your box and how many of those Services Applications you need to kick off.
Some Service Applications, such as Excel Services and PerformancePoint, can cause a lot of load and need to be finely tuned. I recommend looking into each one you plan on starting to determine whether it will put too much load on your machine.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have found the internet to be a massive time sink for me.
My efforts to block the websites that are utterly useless to me have been in vain, for the simple reason that, if I am bored enough, I will bypass the block.
All I can think of is to use the hosts file and a file monitor to ensure it has a loopback in place every time it is edited.
Note: I run Linux and Mac.
StayFocusd is a productivity extension for Google Chrome that helps you stay focused on work by restricting the amount of time you can spend on time-wasting websites. Once your allotted time has been used up, the sites you have blocked will be inaccessible for the rest of the day.
It is highly configurable, allowing you to block or allow entire sites, specific subdomains, specific paths, specific pages, even specific in-page content (videos, games, images, forms, etc).
You could block the websites on your router, assuming you have firmware that allows for it. You could set a long, not easily typed password on the router; then it would (hopefully) be so inconvenient that you wouldn't bother changing the settings when you're bored. On the other hand, you could just not go on these sites.
Try creating a crontab task which checks and updates the hosts file every few minutes. You can obfuscate the job and script to make it more time consuming to remove.
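A minimal sketch of that cron-plus-hosts-file idea (the site list is just an example, and HOSTS_FILE defaults to a demo file here - in real use you would set it to /etc/hosts and run the script as root):

```shell
#!/bin/sh
# Re-applies loopback entries for distracting sites if they were removed.
# HOSTS_FILE defaults to a demo file for illustration; point it at
# /etc/hosts for real use.
HOSTS_FILE="${HOSTS_FILE:-./hosts.demo}"
touch "$HOSTS_FILE"

for site in reddit.com news.ycombinator.com; do
    # Only append an entry if one is not already present.
    if ! grep -q "127\.0\.0\.1 $site" "$HOSTS_FILE"; then
        echo "127.0.0.1 $site" >> "$HOSTS_FILE"
    fi
done

# Install it as a cron job (every 5 minutes) via `crontab -e`, e.g.:
#   */5 * * * * /usr/local/bin/reblock.sh
```

Since the script only appends missing entries, running it repeatedly from cron is harmless; the inconvenience of undoing it every few minutes is the whole point.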
Check out: https://www.rescuetime.com/
Supposedly their product is designed for this purpose.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I have quite a few domains that I manage (100+) and I'm getting tired of GoDaddy's management. Whenever I need to make changes - shifting things around to DreamHost, Heroku, Google App Engine, or my own VPS and private servers - things eventually get hairy, and it's tiresome to have to go to multiple locations in order to manage things.
I was curious if there was a solid option for developers that need robust domain management. I don't really (and PLEASE correct me if I'm wrong) see an answer with DynDNS or EasyDNS options. Perhaps I'm overlooking something.
I'm really looking for a single console to rule them all (i.e., register wherever and set NS entries to the master service) and then be able to go into a domain and, by using a template, split everything out to where I want it to go. In other words, by setting up my own DNS templates I could, in one fell swoop, set up Google Apps subdomains, development DynDNS CNAMEs, AWS CDNs, etc.
Anyone aware of such a comprehensive solution?
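To illustrate the kind of template meant above, a reusable zone fragment might look something like this (all hostnames and targets are placeholders; the CloudFront name and the IP are documentation-style examples):

```
; Example DNS template - every name/target here is a placeholder
@        IN  MX    10 aspmx.l.google.com.          ; Google Apps mail
mail     IN  CNAME ghs.google.com.                 ; Google Apps subdomain
dev      IN  CNAME myhost.dyndns.org.              ; development DynDNS alias
cdn      IN  CNAME d111111abcdef8.cloudfront.net.  ; AWS CloudFront CDN
www      IN  A     203.0.113.10                    ; VPS web server
```

Applying a template like this to a freshly delegated domain is exactly the "one fell swoop" described in the question.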
I'm quite happy with DynDNS but I'm equally satisfied with Zerigo. Templates, AJAX interface, migration tools, an API...
Short of deploying your own infrastructure or piggybacking off something like Dynect, I'd hazard that Zerigo should do everything you want. The fact that it's recently been acquired by 8x8 suggests other people agree.
[I don't work for them if this sounds like a plug ;)]
Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I want to create a SharePoint Server setup that will allow applications to be highly available. Say we have a portal in SharePoint, and I want to make it available always. I know it has to do with WFEs (web front ends). Could someone point me to an article or a reference architecture for this?
Having multiple WFE (Web Front-ends) will make the web part of your SharePoint more reliable -- if one goes down, you can have your load-balancer stop sending requests to it. There is no way to ensure 100% uptime -- reliability is a combination of having redundancy (in hardware and services), monitoring, 24x7 staff to fix problems, etc.
Some things to look at:
Plan for Redundancy
http://technet.microsoft.com/en-us/library/cc263044.aspx
Plan for Availability
http://technet.microsoft.com/en-us/library/cc748832.aspx
There are third-party products that can help with fail-over, but I haven't used one to recommend.
See Lou's links. You can have redundant WFEs, query servers, and application servers as well as cluster your database.
Note that you cannot have a redundant index server unless you have two SSPs that basically index the same content. The query servers get the index replicated on them, so if the index server goes down you can still perform a query, the index will just not be updated until the index server comes back online. If you can't get it back online you will need to rebuild your index (full crawls).