How will data transfer be charged on my Azure project setup?

I have built a utility (a startup task) for end users' Azure applications.
That startup task posts some data from the web role application (hosted in any region, per the end user's choice) to a database (hosted in East Asia) through a REST API (also hosted in East Asia).
So if the end user has hosted their application in the same region (East Asia), then Azure charges nothing for data transfer, either to me or to the end user (correct me if I have understood this wrong).
In the other case, if the end user has hosted their application in another region, the end user is charged for transferring data to the REST API in a different region.
To reduce this cost, I have set up the REST API in all regions, and in the startup task I tell the user to set the path of the REST API specific to the region where they will deploy their application. In this case the end user pays nothing for data transfer, but my REST API (hosted in a different region) transfers data to the database (hosted in East Asia), so that transfer costs me money, as does hosting the REST API in every region to reduce the end user's cost.
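A minimal sketch of how such a startup task could select the region-matched endpoint (the region names and URLs are illustrative placeholders, not real endpoints):

```python
# Hypothetical sketch: map each deployment region to the co-located REST
# API endpoint so the end user's data transfer stays inside one datacenter.
# Region names and URLs are placeholders, not real endpoints.
REGIONAL_API_ENDPOINTS = {
    "East Asia": "https://api-eastasia.example.com",
    "Southeast Asia": "https://api-southeastasia.example.com",
    "West Europe": "https://api-westeurope.example.com",
    "East US": "https://api-eastus.example.com",
}

DEFAULT_ENDPOINT = "https://api-eastasia.example.com"  # where the DB lives

def endpoint_for_region(deployment_region: str) -> str:
    """Return the co-located API endpoint, falling back to East Asia."""
    return REGIONAL_API_ENDPOINTS.get(deployment_region, DEFAULT_ENDPOINT)
```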
In the case above, is there a way to reduce cost by using a shared database and removing the multiple REST APIs hosted in every region? Please also suggest a better solution, if one exists, to reduce cost for me as well as for the end user.

Interesting scenario: you basically have your end user deploying the app for themselves, in the region of their choice? What is the purpose of the database upload via the REST API: backup, or live data?
One thing I'm not understanding is the question about a shared database. From your description, there's already a shared database, as each user is pushing data through a REST API to a single database in East Asia. Is this not correct?
Based on the information you provided, where all data goes to one datacenter, I see 3 options, two of which you already pointed out:
1. The user hosts the app in any of the 8 datacenters; you host the REST API in one datacenter. The user gets the best local app performance between on-premises and the app, pays for egress to your API, and possibly sees latency when transferring data across datacenters.
2. The user hosts the app in any of the 8 datacenters; you host the REST API in all datacenters. The user's data transfer to the local REST API is free and very fast. However, you then transfer data to the East Asia database, which costs you egress bandwidth whenever the user is not hosted in East Asia. You also now carry the cost of many additional REST API instances.
3. The user hosts ONLY in East Asia, along with your REST API and database. All datacenter bandwidth is free and fast. The user may see latency connecting from on-premises to the app, depending on where they're located.
In each case, you have the user deploying their own hosted service, which then talks to yours. Is it possible to optimize user cost by having them all access the same hosted service, but with different credentials?
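To make that idea concrete, here is a rough sketch of what a single shared service with per-user credentials could look like (Flask and the hard-coded key store are illustrative assumptions, not the asker's actual service):

```python
# Rough sketch of "one shared hosted service, per-user credentials":
# a single REST endpoint scopes each request to a tenant by API key,
# instead of every user deploying their own hosted service.
# Flask and the hard-coded key store are illustrative assumptions.
from flask import Flask, request, abort

app = Flask(__name__)

API_KEYS = {"key-tenant-a": "tenant-a", "key-tenant-b": "tenant-b"}

@app.route("/data", methods=["POST"])
def ingest():
    tenant = API_KEYS.get(request.headers.get("X-Api-Key", ""))
    if tenant is None:
        abort(401)  # unknown credentials
    # Store request.get_json() under this tenant's partition in the shared DB.
    return {"tenant": tenant, "accepted": True}
```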
For your cost, hosting a REST API in every datacenter seems like a large expense, considering you need at least 2 instances running in each datacenter to obtain Windows Azure's uptime SLA. But... again, I don't exactly understand your app's scenario.

Based on my understanding, the scenario seems a bit strange. If you're providing a package that end users can host using their own Windows Azure accounts, why not let them use their own databases as well? Or do you mean that you're providing end users with the source code of your project as a sample? In any case, I think it is better to ask end users to provide their own database accounts. If you want a REST API for yourself, host it in your own Windows Azure service and do not distribute that package to your end users. In addition, for more information about billing, you can also contact our customer support at https://support.microsoft.com/oas/default.aspx?prid=14234&st=1&wfxredirect=1&sd=gn.
Best Regards,
Ming Xu.

I think in this case it is better to ask the customers to provide their own databases. If you need to use your own database, it is hard to save cost on your side. You can use multiple databases hosted in different data centers, but obviously that costs more. You can also use a single database, but then it is inevitable that some users will transfer data from another data center; you can, however, ask the users to cover that cost by increasing the price of your product. Still, it looks better to ask the users to provide their own databases (they can even choose on-premises databases if they like). This also helps increase security, since users' data would not be shared.
Best Regards,
Ming Xu.

Related

Scale-out scenarios with Azure SQL (geolocation)

I have my Azure SQL database located in West Europe and am considering having a database in the States as well. Deploying my website in the States was easy, but that website then queries the database in Europe, which introduces delays.
What do people do in these cases? Having separate databases for different users could work, I guess, but it fails if a user who is normally on one server gets routed to the other server: his data is then not in that database. Are there easy solutions for having the same data available on two Azure SQL servers, with Azure maintaining the data sync? And what about conflicts when syncing?
It really depends on your requirements and how you implement routing. You can design your distributed application in such a manner that user A, once authenticated, always goes to the US server, for instance, even if he/she is currently in Europe or Asia.
If you want to sync everything everywhere, there is a preview feature named "SQL Data Sync". It can sync data between multiple instances of SQL Server (including on-premises SQL Server installations) and is quite flexible in its configuration and syncing options. But again, it really depends on the application's requirements. If I were building a distributed system, I would not sync data across continents; I would design the app so that user-specific data lives in only one data centre. This, of course, is impossible if my user has access to a lot more data than just what relates to his/her profile.
The best option would be to keep user-specific data in the user's designated data centre, and sync only the data that must be available to all users at all locations.
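A minimal sketch of that routing idea (the lookup table, region names, and URLs are illustrative assumptions): once authenticated, a user is always sent to his or her designated data centre.

```python
# Hedged sketch: user-specific data lives in exactly one data centre,
# so every request for a user is routed to that user's home region.
# The lookup table, region names, and URLs are illustrative placeholders.
HOME_REGION = {"alice": "West Europe", "bob": "East US"}

REGION_URL = {
    "West Europe": "https://app-westeurope.example.com",
    "East US": "https://app-eastus.example.com",
}

def route_for_user(username: str) -> str:
    """Return the designated data centre URL for an authenticated user,
    regardless of where the user currently connects from."""
    return REGION_URL[HOME_REGION[username]]
```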

Setting up Azure to Sync Contacts in Custom Program, Tasks and Pricing

We have our own application that stores contacts in an SQL database. What all is involved in getting up and running in the cloud so that each user of the application can have his own, private list of contacts, which will be synced with both his computer and his phone?
I am trying to get a feeling for what Azure might cost in this regard, but I am finding more abstract talk than I am concrete scenarios.
Let's say there are 1,000 users, and each user has 1,000 contacts that he keeps in his contacts book. No user can see the contacts set up by any other user. Syncing should occur any time the user changes his contact information.
Thanks.
While the Windows Azure Cloud Platform is not intended to compete directly with consumer-oriented services such as Dropbox, it is certainly intended as a platform for building applications that do that. So your particular use case is a good one for Windows Azure: creating a service for keeping contacts in sync, scalable across many users, scalable in the amount of data it holds, and so forth.
Making your solution multi-tenant friendly (per the comment from @BrentDaCodeMonkey) is key to cost-efficiency. Your data needs are 1K users x 1K contacts/user = 1M contacts. If each contact is approximately 1 KB, then we are talking about roughly 1 GB of storage.
Checking out the pricing calculator, the at-rest storage cost is $9.99/month for a 1 GB Windows Azure SQL Database instance (then $13.99 if you go up to 2 GB, etc.; refer to the calculator for additional projections and current pricing).
Then you have data transmission (bandwidth) charges. Since the pricing calculator says "The first 5 GB of outbound data transfers per billing month are also free," you probably won't have any bandwidth costs with the current user count, assuming moderate smarts in the sync.
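A quick back-of-the-envelope check of those numbers (prices are the figures quoted above from the calculator; the 1 KB-per-contact size is the assumption already stated):

```python
# Back-of-the-envelope storage estimate using the figures quoted above.
users = 1_000
contacts_per_user = 1_000
bytes_per_contact = 1_024            # ~1 KB per contact (stated assumption)

total_gb = users * contacts_per_user * bytes_per_contact / 1_024**3
print(f"at-rest data: ~{total_gb:.2f} GB")   # ~0.95 GB, fits in 1 GB

sql_db_1gb_monthly_usd = 9.99        # quoted calculator price for 1 GB
print(f"storage cost: ${sql_db_1gb_monthly_usd}/month")
```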
This does not include the costs of your application. What is your application, and how does it run? Assuming there is a client-side component, that component (typically) cannot be trusted to hold the database connection. You therefore need a server-side component that acts as a gatekeeper for the database. (You also usually don't expose the database to all IP addresses, which is another motivation for channeling data through a server-side component.) This component will also cost money to operate; those costs are in the pricing calculator too, but if you choose a Windows Azure Web Site it could be free. An excellent approach might be the nifty ASP.NET Web API stack that was recently released: with Web API you can implement a clean REST API that your client application can access securely, and Windows Azure Web Sites can host Web API endpoints. Check out the "reserved instance" capability too.
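A minimal sketch of that gatekeeper pattern (the answer suggests ASP.NET Web API; Python/Flask is used here only to keep the example short, and the token store and empty query are placeholders):

```python
# Gatekeeper sketch: the client never holds a database connection; the
# server-side component authenticates the caller and scopes every query
# to that user's own contacts. Token store and query are placeholders.
from flask import Flask, request, jsonify, abort

app = Flask(__name__)

def user_from_token(token: str):
    # Placeholder: validate the token and return a user id, or None.
    return {"token-alice": "alice"}.get(token)

@app.route("/contacts", methods=["GET"])
def list_contacts():
    user = user_from_token(request.headers.get("Authorization", ""))
    if user is None:
        abort(401)
    # Real version: SELECT ... FROM contacts WHERE owner = :user, so that
    # no user can ever see another user's contacts.
    return jsonify(owner=user, contacts=[])
```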
I would start out with Windows Azure Web Sites, but as my service grew in complexity/sophistication, I would check out Windows Azure Cloud Services (a more advanced approach to building server-side components).

Data sovereignty vs Azure hosting options

I'm just learning about Azure so forgive me for my naivety. I work for a federal government that would be very hesitant to have their applications and data hosted in another country. Could a local company offer "Azure" services? i.e. could software developers in a government department build their applications and deploy them to the Azure cloud, ensuring that their data stays within the country? Or would they have to look at a non-Microsoft cloud provider?
Data and Compute will reside in the datacenter you specify. Blobs, Tables and Queues are also backed up automatically to a paired data center:
San Antonio <--> Chicago
Dublin <--> Amsterdam
Hong Kong <--> Singapore
You can opt-out of cross-datacenter data backup if data sovereignty becomes an issue. Once opted-out, data would only be in the specified data center, and you'd need to handle DR on your own (by possibly backing up data to on-premises storage).
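If you do opt out, one simple form of self-managed DR is the on-premises backup mentioned above. A rough sketch using the azure-storage-blob package (connection string and container name are placeholders):

```python
# Hedged sketch: copy every blob in a container down to local storage as
# a do-it-yourself DR backup after opting out of geo-replication.
# Connection string and container name are placeholders.
from pathlib import Path
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("app-data")

backup_root = Path("backup")
for blob in container.list_blobs():
    target = backup_root / blob.name
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(container.download_blob(blob.name).readall())
```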
Aside from those 6 datacenters, Fujitsu runs a Windows Azure data center in Japan. See this press release for more info.
Yes, when you create your Azure service you can specify what region (of the country) it runs in.
I'm not sure if you know this, but the Federal CIO (Vivek Kundra) is really pushing hard for Agencies to move to the cloud. You might want to check out Info.Apps.Gov for guidelines on the Federal Cloud initiative and resources for what you can and can't do.
To answer your immediate question: No. Only MS hosts Azure to my knowledge. I do know that Amazon is bending over backwards however to accommodate Government clients and you can control which datacenters are used on that service. MS appears to have a similar capability per the other answer to this question.
As far as I can tell, these are the only locations:
http://en.wikipedia.org/wiki/Azure_Services_Platform#Datacenters
If they're that concerned about data security, though, they should deal directly with Microsoft rather than buy Azure services the same way a client usually would. Microsoft may be able to arrange something depending on budget (but probably not).
Edit: What I'm basically saying is, Microsoft is not going to arbitrarily do special licensing. Meaning you either need a large enough budget to convince MS to build a data center in your country, or you need some other way of convincing MS to allow Azure services hosted in your country. Also, I hate to sound paranoid, but if you're worried about America seeing your data, you should probably avoid American companies.
If there isn't a Windows Azure Data Centre in the relevant country, but you still want to use Azure, you'll need to look at a hybrid cloud model where data remains resident in a private cloud. However, in-flight data can still present complications for some organisations and Azure may not be the right answer in all cases.
If you like, I can talk about it some more using Chat. The company I work for specialises in just these cases and has the only production Windows Azure data centre that isn't owned by Microsoft (and isn't in the US). Probably best not go into further specifics here, though, for fear of my answer looking like pure spam!

Doubts about Windows Azure Platform Introductory Special

I'm considering to join the Windows Azure Platform Introductory Special, but I'm a little bit afraid of losing money with it. I don't wanna develop any fancy large scale application, I want to join just to learn Azure and do my experiments, what should I be afraid of?
In the transference, it says: "Data Transfers (per region)", what does that mean?
Can I put limits in place to stop the app if it goes over this plan, in order to avoid getting charged?
Can it be "pre-pay" instead of "bill pay"?
Would it be enough for a blog?
Any experience so far?
Kind regards.
As ligget pointed out, Azure isn't cost-effective as a host for an application that can easily be deployed to a traditional shared hosting provider. Azure's target market is those who want dedicated resources without the need to micro-manage the infrastructure, and the capability to easily scale up/down based on demand.
That said, here's the answers to the questions you posted:
Data transfers are based on bandwidth in and out of the hosting data center. Bandwidth for communication occurring between components (SQL Azure, Windows Azure, Azure Storage, etc.) within the same datacenter is not billable.
Your usage is not currently capped when the free quotas are used up. However, you will receive warning emails when those items approach their usage thresholds.
There is the option to pay for your subscription using a PO, but the minimum threshold for most of these operations is $500/month, so as a hobbyist it's unlikely you'll want that route.
The introductory special does not provide enough resources for hosting a 24x7 personal blog. That level includes only 25 hours of compute resources, and every hour a single instance of your application is deployed counts against this, even if the application receives no traffic. Think of it like renting office space: you still pay rent on the office even if there are no customers in it.
All this said, there's still much to be learned with the introductory special. The Azure development tools allow you to work with Windows Azure and Azure Storage locally and get a feel for how they work. The introductory special then lets you deploy those solutions so you can see what works and what doesn't (not everything that works locally works hosted).
I would recommend you host your blog somewhere else; running it on Azure is a waste of resources and you'll find much cheaper options. A recently introduced extra-small instance would be a better choice in this case, but AFAIK it is charged separately as of now; e.g., even if you have an MSDN subscription, those extra-small instance hours do not count towards the free Azure hours that come with the subscription.
There is no pre-pay option that I know of, and it's not possible to stop the app automatically. It will keep running until the deployment is deleted (beware: even if suspended/stopped, the deployment continues to accrue charges). I believe you will be sent a notification shortly before reaching your free-hours threshold.
Be aware that when launching more than one instance, you are charged for every hour of every instance combined. This can happen, for example, when you have more than one role in your Azure project (1 web role + 1 worker role: a separate instance is started for each role).
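A small illustration of the instance-hour math (the 25 free hours are the introductory special quota quoted earlier):

```python
# Billing unit is the instance-hour: every deployed instance of every
# role accrues hours whether or not it receives any traffic.
roles = {"web": 1, "worker": 1}      # 1 web role + 1 worker role
hours_in_month = 24 * 31             # a full month, 744 hours

instance_hours = sum(roles.values()) * hours_in_month
free_hours = 25                      # introductory special compute quota
print(f"{instance_hours} instance-hours used vs {free_hours} free")  # 1488 vs 25
```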
Data transfer means your entire data transfer: blobs/table storage/queues (transfers between your hosted service and storage account inside the same data center are free) plus whatever data is transferred in/out of your hosted application, e.g. when somebody visits your pages. When you create storage accounts and hosted services in Azure, you specify a region that will host your account/app; hosting in Asia is slightly more expensive than in Europe/the U.S.
Your best bet would be to contact Microsoft with these questions.

Minimize downtime in Azure

We are experiencing a very serious unscheduled downtime of our Azure application today for what is now coming up to 9 hours. We reported to Azure support and the ops team is actively trying to fix the problem and I do not doubt that. We managed to get our application running on another "test" hosted service that we have and redirected our CNAME to point at the instance so our customers are happy, but the "main" hosted service is still unavailable.
My own "finger in the air" instinct is that the issue is network related within our data center (west europe), and indeed, later on in the day the service dash board has gone red for that region with a message to that effect. (Our application is showing as "Healthy" in the portal, but is unreachable via our cloudapp.net URL. Additionally threads within our application are logging sql connection exceptions into our storage account as it cannot contact the DB)
What is very strange, though, is that the "test" instance I referred to above is also in the same data centre and has no issues contacting the DB and it's external endpoint is fully available.
I would like to ask the community if there is anything that I could have done better to avoid this downtime? I obeyed the guidance with respect to having at least 2 roles instances per role, yet I still got burned. Should I move to a more reliable data centre? Should I deploy my application to multiple data centres? How would I manage the fact that my SQL-Azure DB is in the same datacentre?
Any constructive guidance would be appreciated - being a techie, I've never had a more frustrating day being able to do nothing to help fix the issue.
There was an outage in the European data center today with respect to SQL Azure. Some of our clients got hit and had to move to another data center.
If you are running mission critical applications that cannot be down, I would deploy the application into multiple regions. DNS resolution is obviously a weak link right now in Azure, but it can be worked around (if you only run a website, it can be done very simply using Response.Redirect or similar; see the sketch below).
Now, there is a data synchronization service from Microsoft that will sync multiple SQL Azure databases. Check here. This way, you can have mirror sites up in different regions and keep them in sync from the SQL Azure perspective.
It would also be a good idea to employ a third-party monitoring service that detects problems with your deployed instances externally. AzureWatch can notify you, or even deploy new nodes if you choose, when some of the instances turn "Unresponsive".
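A rough sketch of that Response.Redirect-style workaround (shown in Python rather than ASP.NET for brevity; the URLs and the /health path are illustrative assumptions):

```python
# Front-door sketch: probe the primary region and send users to the
# secondary when the primary looks down. URLs and the /health endpoint
# are placeholders, not real deployments.
import urllib.request

PRIMARY = "https://myapp-westeurope.example.com"
SECONDARY = "https://myapp-northeurope.example.com"

def healthy(base_url: str, timeout: float = 3.0) -> bool:
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=timeout) as r:
            return r.status == 200
    except OSError:
        return False

def redirect_target() -> str:
    """Where to send (redirect) incoming users right now."""
    return PRIMARY if healthy(PRIMARY) else SECONDARY
```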
Hope this helps
I can offer some guidance based on our experience:
Host your application in multiple data centers, complete with SQL Azure databases. You can connect each application to its data-center-specific SQL server. You can also cache any external assets (images/JS/CSS) on the data-center-specific Windows Azure machine or leverage Azure Blob Storage. Note: extra costs will be incurred.
Set up one-way SQL replication between your primary SQL Azure DB and the instance in the other data center. If you want to do bi-directional replication, take a look at the MSDN site for guidance.
Leverage Azure Traffic Manager to route traffic to the data center closest to the user. It has geo-detection capabilities, which will also improve the latency of your application: you can map http://myapp.com to the internal URL of your data center, and a user in Europe should automatically be directed to the European data center, and vice versa for the USA. Note: at the time of writing this post, there is no way to automatically detect and fail over to another data center. Manual steps are involved once a failure is detected, and failover is a complete set (i.e., you will fail over both the Windows Azure AND SQL Azure instances). If you want micro-level failover, then I suggest putting all your config in the service config file and encrypting the values, so you can edit the connection string to connect instance X to DB Y (see the sketch after this list).
You are all set now. I would create or install a local application to detect the availability of the site. A better solution would be to create a page that checks the availability of application-specific components (a diagnostic page or web service) and poll it from a local computer.
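A minimal sketch of that config-driven failover from step 3 (the JSON file format and key names are illustrative assumptions, standing in for the encrypted service config):

```python
# Hedged sketch: keep both connection strings in external config so
# "connect instance X to DB Y" is a config edit, not a redeploy.
# The JSON layout and key names are placeholders for the encrypted
# service configuration described above.
import json

def active_connection_string(path: str = "service.config.json") -> str:
    with open(path) as f:
        cfg = json.load(f)
    # Example file contents:
    # {"active": "primary",
    #  "primary": "Server=tcp:eu.example.net;...",
    #  "secondary": "Server=tcp:us.example.net;..."}
    return cfg[cfg["active"]]
```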
HTH
As you're deploying to Azure, you don't have much control over how SQL Server is set up; MS has already set it up so that it is highly available.
Having said that, it seems that MS has been having some issues with SQL Azure over the last few days. We've been told that it only affected "a small number of users". At one point the service dashboard showed 5 data centres affected by a problem. I had 3 databases in one of those data centres go down twice for about an hour each time, but one database in another affected data centre had no interruption.
If having a database connection is critical to your app, then the only way in the Azure environment to insure against problems that MS hasn't prepared for (this latest technical problem, earthquakes, meteor strikes) is to co-locate your SQL data in another data centre. At the moment, the most practical way to do this is to use the Sync Framework. There is an ability to copy SQL Azure databases, but this only works within a data centre. With your data located elsewhere, you could then point your app at the new database if the main one becomes unavailable.
While this looks good on paper, though, it may not have helped you with the latest problem, as that affected multiple data centres. If you'd just been making database copies on a regular basis, that might have been enough to get you through. Or not.
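For reference, the intra-datacenter copy mentioned above uses SQL Azure's CREATE DATABASE ... AS COPY OF syntax. A hedged sketch with pyodbc (server, database names, and credentials are placeholders):

```python
# Sketch: kick off an intra-datacenter SQL Azure database copy.
# Server name, database names, and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=master;"
    "UID=admin_user;PWD=example-password",
    autocommit=True)  # the copy statement cannot run inside a transaction

conn.execute("CREATE DATABASE mydb_copy AS COPY OF mydb")
# The copy runs asynchronously; progress can be monitored from master
# via the sys.dm_database_copies view.
conn.close()
```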
(I would have posted this answer on server fault, but I couldn't find the question)
This is not only a programming/architecture issue, so you may also want to ask the question on webmasters.stackexchange.com.
You need to find out the root cause before drawing any conclusions.
However, my guess is that one of two things was the problem:
The ISP connectivity differs between the test system and your production system: either they use different ISPs, or different lines from the same ISP. When I worked at a hosting company, we made sure that our IP connectivity went through at least two different ISPs who did not share fibre to our premises (and, where we could, they had different physical routes to the building; the homing ability of backhoes when there's a critical piece of fibre to dig up is well proven).
Your datacentre had an issue with some shared production infrastructure. This might be edge routers, firewalls, load balancers, intrusion detection systems, traffic shapers, etc., which are typically only installed on production systems. Defences here involve understanding the architecture and making sure the provider has a (tested!) DR plan for restoring SOME service when things go pear-shaped. The neatest hack I saw here was persuading an IPS (intrusion prevention system) that its own management servers were malicious, so it couldn't be reconfigured at all.
Just a thought - your DC doesn't host any of the Wikileaks mirrors, or Paypal/Mastercard/Amazon (who are getting DDOS'd by wikileaks supporters at the moment)?