I am looking for some sort of map showing the physical locations of Microsoft's Windows Azure data centers worldwide.
Can you help?
Here is a public Google Map of Azure datacenter locations - https://maps.google.com/maps/ms?msid=214511169319669615866.0004d04e018a4727767b8&msa=0&ll=-3.513421,-145.195312&spn=147.890481,316.054688
Microsoft does not disclose the exact locations of the data centres, for obvious reasons, although the internet does have some information you may have seen, such as http://matthew.sorvaag.net/2011/06/windows-azure-data-centre-locations/
Worth noting, though, that this refers only to the 'main' Windows/SQL Azure data centres; in addition there are many CDN nodes around the world in smaller data centres.
I am curious though - why do you ask?
The link below also gives the locations of the data centers:
http://azure.microsoft.com/en-in/regions/
The exact physical location of a data centre isn't usually relevant for users of applications. What's more important is the latency that they see when reaching the application.
But the most important thing is usually the speed of your own application.
For example, at my particular location in the UK I see somewhat better responses from the Northern Europe Azure site than the Western Europe site. This will be down to the particular route taken by packets from my PC through the local network and out to the point on the wider Internet where it peers with the Microsoft Azure systems.
If I'm dialled in through a VPN to an office in the US then I'll see better responses from a US-hosted Azure site.
However, compared to the ~60 millisecond ping time I see to the data centre, the ~200 millisecond response time from the SQL Azure queries on my site is something I can control, and it matters more.
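For instance, here is a minimal sketch of that kind of comparison in Python, using the requests library (the region endpoint URLs are placeholders for your own deployments, not real addresses):

import time
import requests

# Placeholder endpoints for the same app deployed in two regions (assumptions).
endpoints = {
    "North Europe": "https://myapp-northeurope.example.com/ping",
    "West Europe": "https://myapp-westeurope.example.com/ping",
}

for region, url in endpoints.items():
    start = time.perf_counter()
    try:
        requests.get(url, timeout=5)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{region}: {elapsed_ms:.0f} ms")
    except requests.RequestException as exc:
        print(f"{region}: request failed ({exc})")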
Better ways to make your Web application faster include:
Cache, cache, cache. Use CDN-hosted versions of libraries such as jQuery where possible (a small caching sketch follows this list).
Minify your scripts and CSS, and merge them if possible.
Only perform postbacks as a last resort. Use JavaScript / AJAX to load data into your application.
... all of which applies to Web applications whether they're on Azure or other hosts.
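As a small illustration of the caching point, here is a sketch assuming a Python/Flask application (the answers do not name a framework, so the framework, route and folder are assumptions); it serves assets with a long-lived Cache-Control header so browsers and CDNs can cache them:

from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route("/assets/<path:filename>")
def asset(filename):
    # Serve files from an assumed "assets" folder and mark them cacheable for a year;
    # fingerprint filenames (e.g. app.abc123.css) so new releases bust the cache.
    response = send_from_directory("assets", filename)
    response.headers["Cache-Control"] = "public, max-age=31536000"
    return response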
I am a freshman Computer Science student. I would appreciate it if some of you could help me understand the process of creating a web hosting service and explain how it works. Thank you. This is my first question; if I made mistakes, I look forward to constructive criticism.
Although this is a simple and straightforward question, the answer can be far from it. There are several different categories of web hosting service. Your typical "GoDaddy, HostGator, Bluehost" shared web hosting service is the most common one, but there are also providers like Amazon AWS or DigitalOcean that focus on Virtual Private Servers, as well as web hosting companies with a strong added value such as a website builder (like Squarespace, Weebly or Wix).
Web hosting (especially shared hosting) is often combined with a powerful control panel (cPanel and Plesk have the most users) which allows you to create additional addon domains, subdomains, email addresses, MySQL databases, FTP accounts and many other simple and complex features. It goes so far that cPanel combined with Fantastico offers automatic (a few clicks) WordPress and Joomla installation, among many other CMS systems. There is no need to manually upload files, create databases, etc.
If you are looking to start a shared web hosting business yourself, you have to go one level above. For example, above cPanel sits WHM. You can look at WHM as very powerful software that helps you monitor active processes on servers, create new web hosting resellers and new web hosting accounts, track CPU activity, and much more. Account creation, termination and suspension become easier and almost automated with additional systems like WHMCS.
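To give a feel for that automation, here is a rough sketch of creating an account through WHM's JSON API from Python. The endpoint, parameter names and token header are written from memory and should be verified against the cPanel/WHM documentation; the hostname, token and account details are placeholders.

import requests

WHM_HOST = "https://your-server.example.com:2087"   # placeholder server
API_TOKEN = "YOUR_WHM_API_TOKEN"                    # placeholder API token

# WHM API 1 "createacct" call (verify parameters against the WHM docs).
response = requests.get(
    f"{WHM_HOST}/json-api/createacct",
    headers={"Authorization": f"whm root:{API_TOKEN}"},
    params={"api.version": 1, "username": "newuser",
            "domain": "newuser-example.com", "plan": "basic"},
)
print(response.json())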
WHMCS is an entirely different system that is installed on a single domain on a server. Besides account creation and termination, WHMCS allows web hosting clients to open support tickets, register domain names, purchase and automatically provision web hosting accounts, and manage their account in general. In WHMCS you cannot create email addresses, subdomains or other cPanel functionality.
With web hosting, there are several products and services that go hand in hand. I've already mentioned website builders, Content Management Systems and domain names. But there are also SSL certificates, various analytics services, web shops and other products that are installed on millions of websites.
When you are managing a server that offers shared, dedicated or reseller hosting, you need to pay attention to load averages (CPU usage), downtime (aiming for 99.99% uptime), email blacklisting, hacks, phishing attempts, virus injections, holes in the various widgets installed on websites, and other threats that come your way on a daily basis.
This is just scratching the surface, but it's a step in the right direction towards understanding how a small web hosting company operates.
You want to become a web host?
First decide which hosting services you want to provide.
Research shared hosting, VPS hosting and reseller hosting.
Check which current technologies can be used to build these services.
Study how to set up and manage the day-to-day operations.
Web hosting is easy to set up, but you need to meet some requirements.
The first is hardware. You need proper storage devices with an automatic backup system, and enough RAM: if your system is slow, every client will experience a slow site, and in that case no amount of bandwidth will help.
The second is adequate ISP bandwidth, and upload speed matters more than download speed: when a request reaches your server, the pages and data have to be sent back out to the client, which requires high upload speed.
Also note that the ISP connection and the server have to be matched. If your ISP provides a 1 Gbps link but your system can only process 200 to 300 MB, the extra bandwidth is wasted. A rough capacity estimate is sketched below.
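As a back-of-the-envelope sketch (every number below is an illustrative assumption, not a figure from this answer), you can estimate how many clients can be served at full speed at once:

uplink_mbps = 1000         # assumed 1 Gbps ISP uplink
server_output_mbps = 300   # assumed rate the server can actually push out
per_client_mbps = 5        # assumed rate each downloading client needs

# The slower of the link and the server sets the real capacity.
effective_mbps = min(uplink_mbps, server_output_mbps)
print(f"Roughly {effective_mbps // per_client_mbps} clients served at full speed")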
Currently I am trying to learn the various services of Amazon Web Services and Microsoft Windows Azure, such as Amazon SNS, Amazon storage and Amazon search.
I have been wondering why cloud platforms are so much more popular nowadays than the old traditional approach: previously we stored our files (images, text, .doc, etc.) inside the web application project itself, but nowadays some web applications store their files on Amazon or Azure storage.
What are the benefits of storing files and folders on a cloud platform?
Next, why are Amazon or Azure search preferred, given that searching was done before they existed and they are not freely available?
And for push notifications, why use Amazon or Azure push notifications when we can easily send notifications with code that is available on the internet?
In general, I just want to know why web applications nowadays use cloud platforms (Azure or Amazon) so much, even though they are costly.
Can anybody explain this in some detail?
Among the many reasons, the most important and common ones I can think of are:
High Availability - When you manage your own services, there is always the operational challenge of making sure they do not go down, i.e. crash. That may mean downtime for your application or even data loss, depending on the situation. The cloud services you have mentioned offer reliable solutions that guarantee maximum uptime and data safety (by backup, for example). They often replicate your data across multiple servers, so that even if one of their servers goes down, you do not lose any data.
Ease of use - Cloud services make it very easy to use a specific service by providing detailed documentation and client libraries. The dashboard or console of many cloud services is user friendly and does not require an extensive technical background. You could deploy a Hadoop cluster on Google Compute Engine in less than a minute, for instance, and they offer many pre-built solutions you can take advantage of (see the short upload sketch after this list).
Auto-Scale - Many cloud services nowadays are built to scale automatically with increasing traffic, so you do not have to worry about application load.
Security - Cloud services are secure. They offer sophisticated security features with which you can protect your service from misuse.
Cost - Hosting your own services requires extensive resources: high-end servers, dedicated system administrators, good network connectivity and so on. Cloud services are quite cheap these days.
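As a small illustration of the client-library point, here is a sketch of storing and retrieving a file in Amazon S3 with the boto3 library (the bucket name and file paths are placeholders; Azure Blob Storage has an equivalent SDK):

import boto3

s3 = boto3.client("s3")  # credentials come from your AWS configuration

# Upload a local file; the service replicates it for durability.
s3.upload_file("report.docx", "my-example-bucket", "documents/report.docx")

# Download it again later from anywhere with the right permissions.
s3.download_file("my-example-bucket", "documents/report.docx", "report-copy.docx")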
Of course you could solve these problems yourself, but smaller organizations often prefer not to because of the operational overhead. It would take more time and resources to reach a stage where your solution is both reliable and functional. People would often rather work on the actual problem their application is trying to solve and abstract away most operational concerns, which cloud services readily handle.
p.s. These are some opinions from an early stage startup perspective.
I'm currently running a file hosting site as a side project and I'm using Azure Storage to actually store and serve the files. The big issue for me is that I would like to be able to support relatively large files as well, which are really expensive to serve.
According to the pricing details for Azure outbound data transfers, it'll cost me $0.087 per GB to serve files to the user. This is okay for things like images, but if the user stores something like a 1 GB video, it'll cost me around 9 cents per person who wants to download the file. Even if I try to monetize the service, I cannot see how I can reasonably sustain these costs if the service ever becomes popular.
Does anyone have any suggestions or alternatives to reducing outbound data transfer costs?
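To make that math concrete, here is a tiny sketch of the cost calculation (the rate is the one quoted above; the file size and download count are assumptions):

egress_rate_per_gb = 0.087   # $ per GB of outbound transfer, as quoted above
file_size_gb = 1.0           # assumed size of an uploaded video
downloads = 1000             # assumed number of downloads

cost = egress_rate_per_gb * file_size_gb * downloads
print(f"Serving that file {downloads} times costs about ${cost:.2f}")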
Edit: As I come across helpful ways to reduce my costs, I'll update the list below:
Use a free CDN provider like Cloudflare. Specifically for me, I only enabled the CDN for files served through Azure Storage, because enabling it for my whole site would impose a 100MB file size upload restriction. One thing to note is that Cloudflare doesn't cache everything, so even though I'm covered for images, I'm still out of luck for many other media types that users might upload.
Compress uploaded files so that not as much bandwidth is used on outbound transfers.
If you're using cloud storage but host your website on a dedicated server with generous bandwidth, you can implement some kind of local cache and serve content directly from it, with the storage provider as a fallback on a cache miss (sketched below). Unfortunately this isn't viable for me, since I also host my site on Azure and the outbound data transfer rates apply across their entire service stack.
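Here is a minimal sketch of that local-cache idea in Python; fetch_from_storage is a hypothetical stand-in for whatever storage SDK you use, and the cache directory is an assumption:

import os

CACHE_DIR = "/var/cache/filehost"   # assumed local cache directory

def fetch_from_storage(name):
    # Placeholder: download the blob from your storage provider and return its bytes.
    raise NotImplementedError

def get_file(name):
    # Serve from the local cache when possible; fall back to cloud storage on a miss.
    # In real code, validate `name` first to avoid path traversal.
    path = os.path.join(CACHE_DIR, name)
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()
    data = fetch_from_storage(name)
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)
    return data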
Are all of your assets available publicly, or do you have some kind of authentication in front of them? If they are available publicly, then a CDN might be an option here.
You can try caching your content on the client. For scenarios where you are accessing static content such as photos or videos, having a cache set up could keep you from having to go to the server each time you need data.
I need to measure how many concurrent users my current azure subscription will accept, to determine my cost per user. How can I do this?
This is quite a big area within capacity planning of a product/solution, but effectively you need to script a user scenario, say using a tool like JMeter (VS2012 Ultimate has a similar feature), then fire off lots of requests at your site and monitor the results.
Visual Studio can deploy your Azure project in a profiling mode, which is great for detecting bottlenecks in your code for optimisation. But if you just want to see how many requests per role it takes before something breaks, a tool like JMeter should work.
There are also lots of products out there on offer, like http://loader.io/ which is great for not worrying about bandwidth issues, scripting, etc... and it should just work.
If you do roll your own manual load-testing scripts, please be careful to avoid false negatives or false positives. By this I mean that if your internet connection is slow and you send out millions of requests, your own bandwidth may make the site appear VERY slow when in fact it's not your site at all.
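If you do want a quick home-grown smoke test before reaching for JMeter, here is a minimal sketch using Python's requests library and a thread pool (the URL and request counts are placeholders); it simply fires concurrent requests and reports response times:

import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://your-app.cloudapp.net/"   # placeholder endpoint
CONCURRENCY = 50
TOTAL_REQUESTS = 500

def timed_get(_):
    start = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    durations = sorted(pool.map(timed_get, range(TOTAL_REQUESTS)))

print(f"median: {durations[len(durations) // 2] * 1000:.0f} ms")
print(f"95th percentile: {durations[int(len(durations) * 0.95)] * 1000:.0f} ms")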
This has been answered numerous times. I suggest searching [Azure] 'load testing' and start reading. You'll need to decide between installing a tool to a virtual machine or Cloud Service (Visual Studio Test, JMeter, etc.) and subscribing to a service (LoadStorm)... For the latter, if you're focused on maximum app load, you'll probably want to use a service that runs within Azure, and make sure they have load generators in the same data center as your system-under-test.
Announced at TechEd 2013, the Team Foundation Test Service will be available in Preview on June 26 (coincident with the //build conference). This will certainly give you load testing from Azure-based load generators. Read this post for more details.
I like how Facebook releases features incrementally and not all at once to their entire user base. I get that this can be replicated with a bunch of if statements scattered throughout your code, but there has to be a better way. Perhaps that really is all they are doing, but it seems rather inelegant. Does anyone know if there is an industry-standard architecture that can incrementally release features to portions of a user base?
On that same note, I have a feeling that all of their employees see an entirely different, beta view of the site. So it seems they are able to mark certain portions of their website as beta and others as production, with some sort of access control list deciding who sees what? That seems like it would be slow.
Thanks!
Facebook has a lot of servers, so they can roll new features out to only some of them. They also have servers where they test new features before committing them to production.
A more elegant solution is if statements driven by feature flags, using a system like gargoyle (in Python).
Using a system like this you could do something like:
from gargoyle import gargoyle  # check the gargoyle docs for the exact signature
if gargoyle.is_active('my_new_feature', request):
    ...  # do some stuff
In a web interface you can then describe which users, requests, or any other key objects in your system should get the feature. In fact, via requests you could do things like direct X% of traffic to the new feature, and thus run A/B tests and gather analytics (a percentage-rollout sketch follows below).
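A percentage-based rollout can be as simple as hashing a stable user id into a bucket; this is a generic sketch of the idea, not gargoyle's actual implementation:

import hashlib

def in_rollout(user_id, feature_name, percent):
    # Hash the user id together with the feature name so each feature
    # gets a different, but stable, slice of the user base.
    digest = hashlib.sha256(f"{feature_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Example: show the new feed to 10% of users.
if in_rollout(user_id=12345, feature_name="new_feed", percent=10):
    ...  # render the new feature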
An approach to this is to have a tiered architecture where the authentication tier hands off to the product tier.
A user enters the product URL and that is resolved to direct them to a cluster of authentication servers. These servers handle authentication and then hand off the session to a cluster of product servers.
Using this approach you can:
Separate out your product servers in to 'zones' that run different versions of your application
Run logic on your authentication servers that decides which zone to route the session to
As an example, you could have Zone A running the latest production code and Zone B running beta code. At the point of login the authentication server sends every user with a username starting with a-m to Zone A and n-z to Zone B, so that roughly half the users are running on the beta product (as sketched below).
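In code, the routing decision on the authentication server could be as simple as this sketch (the zone hostnames and the a-m / n-z split are assumptions matching the example above):

def pick_zone(username):
    # Zone A runs the latest production code, Zone B runs the beta.
    first = username[0].lower()
    return "zone-a.example.com" if "a" <= first <= "m" else "zone-b.example.com"

# After a successful login, hand the session off to the chosen zone.
print(pick_zone("alice"))   # zone-a.example.com
print(pick_zone("nadia"))   # zone-b.example.com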
Depending on the information you have available at the point of login you could even do something more sophisticated than this. For example you could target a particular user demographic (e.g. age ranges, gender, location, etc).