Share data between users in metro application - azure

I would like to create a Metro application that allows a group of people to interact. One person would create data and serve as the owner, and multiple others would be invited in and be allowed to modify that data. I heard from Build talks that each Metro application will get per-user Azure storage, but will it be possible to share that data between multiple users? Does anyone have a link they could share where I could research this?

I think that you are confusing SkyDrive with Azure Blob Storage.
SkyDrive:
- Personal to a Live ID
- Not really meant as a base for collaborative work
Azure Blob Storage:
- You can have public files that anyone can view and update
- You can take a lease on a file so that only the lease holder can edit it
- Since you own the Azure account, you also control the content
You can learn the basics here.
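For illustration, here is a minimal sketch of taking a blob lease with the classic Microsoft.WindowsAzure.Storage SDK; the connection string, container, and blob names are placeholders, not anything from the question.

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class LeaseSketch
    {
        static void Main()
        {
            // Placeholder connection string and names.
            var account = CloudStorageAccount.Parse("<your-storage-connection-string>");
            var container = account.CreateCloudBlobClient().GetContainerReference("shared-data");
            container.CreateIfNotExists();

            CloudBlockBlob blob = container.GetBlockBlobReference("group-data.json");
            blob.UploadText("{ \"owner\": \"alice\" }");

            // Acquire a 60-second lease; writes that do not present the lease ID now fail.
            string leaseId = blob.AcquireLease(TimeSpan.FromSeconds(60), null);
            blob.UploadText("{ \"owner\": \"alice\", \"edited\": true }",
                accessCondition: AccessCondition.GenerateLeaseCondition(leaseId));
            blob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
        }
    }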

If you want to share private app data between users, the best way to do so is via a shared server of some sort. You could run a server (on Azure, Amazon EC2, or anywhere really) that exposes a RESTful web service which each client application connects to; the shared state then lives on that server.
This is better than trying to use SkyDrive or some other file-based system for storing shared data. With a file on SkyDrive and multiple users accessing it, you would run into concurrency issues as soon as more than one person tries to write to it.
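As a rough sketch of the client side (the service URL, route, and JSON payload here are made up for illustration), each client app would simply talk HTTP to that shared service:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class SharedStateClient
    {
        // Hypothetical base address; point this at whatever host runs your service.
        static readonly HttpClient Http =
            new HttpClient { BaseAddress = new Uri("https://yourapp.example.com/") };

        public static async Task SyncAsync()
        {
            // Read the group's shared data from the server.
            string json = await Http.GetStringAsync("api/groups/42/data");

            // Push a change; the server arbitrates concurrent writes from other members.
            var body = new StringContent("{ \"note\": \"updated by a member\" }",
                Encoding.UTF8, "application/json");
            HttpResponseMessage response = await Http.PostAsync("api/groups/42/data", body);
            response.EnsureSuccessStatusCode();
        }
    }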

You don't get Azure with Metro.
With Live you get free SkyDrive, which is personal cloud storage (around 10 GB). You can share files, but only by sending an email link; it is not file storage that would readily support a server-type application managing that sharing.
Azure is a cloud platform for file and data sharing. Azure is not free, but storage costs only $0.125/GB per month, so 10 GB is $1.25/month. Using SkyDrive as shared storage, you give up a lot of the developer and hosting tools that come with Azure to save $1.25/month.

It looks like there is a more formal definition of this in the updated help that is now available. They were referring to roaming application data. I found the following links that provide guidance:
http://msdn.microsoft.com/en-us/library/windows/apps/hh464917.aspx
http://msdn.microsoft.com/en-us/library/windows/apps/hh465094.aspx
The general idea is that a small amount of temporary application data is provided on a per-app, per-user basis. The actual size you get is not detailed, but the guidance is pretty clear: app settings only, no large data sets, and don't use it for instant synchronization. Given this guidance, my plan is not a good one and will change.
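For reference, this is roughly what the roaming application data API looks like in a Windows Store app; the setting name here is just an example:

    using Windows.Storage;

    static class RoamingDataSketch
    {
        // Per-app, per-user roaming data: fine for small settings,
        // not for large data sets or data shared between different users.
        public static void SaveLastGroup(string groupId)
        {
            ApplicationData.Current.RoamingSettings.Values["lastOpenedGroupId"] = groupId;
        }

        public static string LoadLastGroup()
        {
            object value;
            return ApplicationData.Current.RoamingSettings.Values.TryGetValue("lastOpenedGroupId", out value)
                ? (string)value
                : null;
        }
    }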

Related

How to store (and query) the MaxMind GeoIP2 database in Azure?

In an Azure Web App I need to efficiently query the MaxMind GeoIP2 City database (due to the volume of queries and the latency requirements, we cannot use MaxMind's REST API).
I'm wondering what the best approach is for storing the DB (binary MMDB format, accessed via the official .NET API) so that it's easy to update with minimal downtime (we are going to subscribe to monthly updates) and still cost effective with regard to Azure storage and transactions.
Apparently block blobs are the way to go, but I'm not sure about the monthly updates, and the GeoIP2 API loads the whole DB into memory (I don't know whether that is a problem for a Web App, whether I need a worker to keep it loaded, or whether I need something else); I also don't know yet how large the file is.
What's the most cost-effective solution that preserves low latency over a huge volume of queries?
According to the API docs, you must have the database available on a file system (the API doesn't know anything about Azure Storage and its REST API). So, regardless of where you permanently store it, you'll need to have it on a disk somewhere.
I have no idea how large the database footprint is, but Web Apps, Cloud Services (web/worker roles), and Virtual Machines (whether Linux or Windows) all have local disks, and you have read/write access to those disks. So you'd copy the database binary file (or CSV) to local disk from somewhere. Then, when you initialize the SDK, you'd create a DatabaseReader and point it at your locally downloaded copy of the database file.
You mentioned storing the database in blob storage. There's nothing stopping you from doing so and simply downloading a copy to local disk, and nothing stopping you from storing multiple versions in multiple blobs. Note: you may also take advantage of Azure File storage (an SMB share). Which you choose is up to you.
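Roughly, the flow looks like the sketch below, using the classic Microsoft.WindowsAzure.Storage SDK and the MaxMind.GeoIP2 package; the container, blob, and local path names are placeholders.

    using System;
    using System.IO;
    using MaxMind.GeoIP2;
    using MaxMind.GeoIP2.Responses;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class GeoIpSketch
    {
        static void Main()
        {
            // Download the current .mmdb from blob storage to the instance's local disk.
            var account = CloudStorageAccount.Parse("<your-storage-connection-string>");
            CloudBlockBlob blob = account.CreateCloudBlobClient()
                .GetContainerReference("geoip")
                .GetBlockBlobReference("GeoIP2-City.mmdb");

            string localPath = Path.Combine(Path.GetTempPath(), "GeoIP2-City.mmdb");
            blob.DownloadToFile(localPath, FileMode.Create);

            // Open the local copy with the official .NET API and look up an address.
            using (var reader = new DatabaseReader(localPath))
            {
                CityResponse city = reader.City("128.101.101.101");
                Console.WriteLine(city.City.Name);
            }
        }
    }

When a monthly update arrives, you could upload the new file as a new blob, download it next to the old one, and swap the DatabaseReader over to it to keep downtime minimal.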
As for the most cost-effective solution: you'll need to do the pricing workup yourself to see what works best. You'd also need to evaluate how much RAM is available for the VM/role instance/Web App size you choose. You mentioned Web Apps in your question: Web App instances scale from 0.5 GB to 14 GB of RAM, depending on the tier you choose (again, you'll need to evaluate this).

Azure Mobile Services Easy Tables - Am I On The Right Track?

I'm working on a simple mobile application in order to learn more about app development in general. I'm using Xamarin and C# to make a cross-platform app.
The end goal is to make a listing of users that are willing to be contacted to play golf. I want users to be able to enter their name and email address on one page, save the entries in a table using Azure SQL Database, and then display them in a list on another page in the app.
I've done some pretty extensive research on my own, but now I think it's time to get some real-life interaction to help guide me along. So here's my actual question...
It looks like the "Getting Started" tutorial here is close to what I want to do. But it seems like the database the app in the example uses is stored locally, whereas I want to create a table that all users will be able to access. Is following this walkthrough the right move for me? If not, what should I do instead?
Bear in mind that I'm committed to using Azure Mobile Services, so please refrain from answers suggesting I use a different platform.
Thanks guys!
If you use Azure Storage directly from the client app, then make sure you are not using Shared Key authentication. Otherwise, anyone could simply steal the credentials from the app and get full access to your blob account. To learn more, see Shared Access Signatures and the SO question Azure blob storage and security best practices.
From the official documentation:
Exposing either of your account keys opens your account to the possibility of malicious or negligent use. Shared access signatures provide a safe alternative that allows other clients to read, write, and delete data in your storage account according to the permissions you've granted, and without need for the account key.
For new projects, you should use Azure Mobile Apps instead of Azure Mobile Services. The newer service offers a number of additional features, and it is where all future investment will go.
For instance, there is now support for blob storage syncing along with regular offline data sync, and it uses SAS tokens to connect securely. Here's a tutorial for Xamarin.Forms: Connect to Azure Storage in your Xamarin.Forms app. It includes a sample that you can deploy to your own Azure subscription with one click.
For your specific question, you could modify the Todo sample (or look at the more full-featured Field Engineer sample) and add tables for Players and Games.
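As a sketch of what that could look like with the Azure Mobile Apps client SDK (the site URL and the Player shape are placeholders, not part of the official samples):

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.MobileServices;

    public class Player
    {
        public string Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
    }

    public class GolfService
    {
        static readonly MobileServiceClient Client =
            new MobileServiceClient("https://your-mobile-app.azurewebsites.net");

        readonly IMobileServiceTable<Player> players = Client.GetTable<Player>();

        // Save the name/email entered on the first page.
        public Task AddPlayerAsync(Player p) => players.InsertAsync(p);

        // Load everyone for the listing page.
        public Task<List<Player>> GetPlayersAsync() => players.ToListAsync();
    }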
There are a number of offerings on the Azure platform that will let you store your golf players. However, the page you linked to is for blob storage, and I would not recommend using that for this.
There is Azure Table storage, which is a NoSQL store on the Azure platform. It's highly scalable and schema-less, so very flexible. You can use the Azure SDK to read and write to it, or go with the REST API if that's what you prefer. Check out the tutorial here: https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-tables/
Then there is Azure SQL, which is SQL Server offered on the Azure platform. This is a traditional relational database store, but more scalable (since it's on the Azure platform). You can also use this solution, but it requires a bit of extra work, since you'll probably want to use an ORM like Entity Framework.
So, all in all, I would go for Azure Table storage. It's really easy to get started with and will do what you want to do.
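If you go that route, the code is short; here's a minimal sketch with the classic Azure Storage SDK (the table name and key choices are just examples):

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    public class PlayerEntity : TableEntity
    {
        public PlayerEntity() { }

        public PlayerEntity(string email)
        {
            PartitionKey = "players";   // a single partition is fine for a small listing
            RowKey = email;             // the email doubles as the unique row key
        }

        public string Name { get; set; }
    }

    class TableSketch
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse("<your-storage-connection-string>");
            CloudTable table = account.CreateCloudTableClient().GetTableReference("golfplayers");
            table.CreateIfNotExists();

            // Save a new player...
            table.Execute(TableOperation.Insert(new PlayerEntity("sam@example.com") { Name = "Sam" }));

            // ...and read everyone back for the listing page.
            var everyone = table.ExecuteQuery(new TableQuery<PlayerEntity>());
        }
    }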

Cloud Services - Two web roles sharing file system

I have a very special requirement which is:
Two web roles accessing a local shared file location.
I am aware of the "Local Storage" role settings, but those are only accessible within each role's own scope.
Does anyone know another option to accomplish this?
------- EDIT --------
As suggested I will explain more clearly what I'm trying to achieve here.
I'm implementing Only Office, which is a web editor for office files. Their product requires a file to be saved on the file system before it can be opened in the editor.
I don't want to mix their ASP.NET MVC open source project with my code, so that's why I want to deploy their website as a separate web role.
-------- END EDIT ------------
Thanks
In your question, you state that (my emphasis):
I'm implementing Only Office which is a web editor for office files. Their product requires to have a file saved on the file system to be opened on the editor.
If Only Office's requirement is to have temporary file storage that is used only while the document is being edited, you may be able to get away with this in a Cloud Service Web Role. This assumes your users wouldn't be too angry if the temporary working document was 'lost' during a role restart.
Web (and Worker) Roles are non-durable, and the Azure fabric controller might bounce them if it needs to patch the underlying host, or they might just crash due to a fault (which is usually why you deploy them in pairs - fault tolerance, etc.). If you save something to the file system on a Web Role, you are not guaranteed that it will be there if the role is bounced.
If, however, you need durability, you will need to implement something based around Azure Blob Storage, possibly using blob leases. However, I imagine that Only Office doesn't have an implementation for Azure....
Failing that, you could try running on Azure Web App Service, but I imagine you would have the same issue regarding backing storage and would need to implement something on Blob Storage.
So, finally, if you want complete control and something akin to running on-premises, take a look at using an IaaS Virtual Machine, where you have the whole file system to play with as you please.
==UPDATE==
Taking a look at the Only Office website, there is a SaaS offering Only Office SaaS Hosting which is probably cheaper to run for a year than the time taken for me to write this answer!
Failing that, if you look at the requirements for Only Office Document Server there is no way you're going to run that on a Web Role. Go Azure IaaS VM's.
You basically have two options here, both mentioned in the comments: you can use blob storage, or you can use an SMB share via Azure Files, which I believe is still in preview. We have used Azure Files to mount an SMB share on several Linux boxes. One thing we have noticed is that it is not particularly fast; it is also built on top of blob storage. Here is a link to Azure Files: https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-files/.
If you choose to use blob storage, you will need to consider concurrency:
https://azure.microsoft.com/en-us/blog/managing-concurrency-in-microsoft-azure-storage-2/
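For example, optimistic concurrency against a shared blob looks roughly like this with the classic storage SDK (the connection string and names are placeholders):

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class ConcurrencySketch
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse("<your-storage-connection-string>");
            CloudBlockBlob blob = account.CreateCloudBlobClient()
                .GetContainerReference("shared")
                .GetBlockBlobReference("document.json");

            // Read the current content and remember the ETag we saw.
            string current = blob.DownloadText();
            string etag = blob.Properties.ETag;

            try
            {
                // Only write if nobody else has changed the blob since we read it.
                blob.UploadText(current + "\n// edited",
                    accessCondition: AccessCondition.GenerateIfMatchCondition(etag));
            }
            catch (StorageException ex) when (ex.RequestInformation.HttpStatusCode == 412)
            {
                // 412 Precondition Failed: someone else wrote first; re-read and retry.
            }
        }
    }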
I would suggest using Azure File Services; you get a share-like path that both roles can use.
Take a look at this:
https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-files/
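A rough sketch with the storage SDK's file API (the share, file, and local paths are placeholders); both roles point at the same share, so a file written by one role is visible to the other:

    using System.IO;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.File;

    class FileShareSketch
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse("<your-storage-connection-string>");
            CloudFileShare share = account.CreateCloudFileClient().GetShareReference("onlyoffice-docs");
            share.CreateIfNotExists();

            CloudFile file = share.GetRootDirectoryReference().GetFileReference("working-copy.docx");

            // Role A uploads the working copy...
            using (var source = System.IO.File.OpenRead(@"D:\local\working-copy.docx"))
            {
                file.UploadFromStream(source);
            }

            // ...and Role B (the Only Office role) downloads it to its own local disk.
            file.DownloadToFile(@"D:\local\editor-copy.docx", FileMode.Create);
        }
    }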

About Windows Azure blob storage: the implementation of a project should not depend on the cloud platform

We plan to migrate our existing website to Windows Azure, and I have been told that we need to store files in blob storage.
My question is:
If we want to use blob storage, that means I need to rewrite the file storage function (we use the file system for now) to call the blob service API to store files. That feels strange to me: we are doing it just because we want to use Windows Azure, but what if in the future we want to use Amazon EC2 or another cloud platform? They might have their own way to store files, and then I may need to rewrite the file storage function again. In my opinion, the implementation of a project should not depend on the cloud platform (or cloud server). Can anybody correct me? Thanks!
I won't address the commentary about whether an app should have a dependency on a particular cloud environment (or specific ways to deal with that particular issue), as that's subjective and it's a nice debate to have somewhere else. What I will address is the actual storage in Azure, as your info is a bit out-of-date.
One reason to use blob storage directly (and possibly the reason you were told to use blob storage) is that it provides access from multiple instances of your app. Also, blob storage provides 500TB of storage per storage account, and it's triple-replicated within the deployed region (and optionally geo-replicated). With attached storage (either with local disk or blob-backed Azure Disk), the access is specific to a particular instance of your app. Shifting from file system access to blob storage access does require app modification.
If you choose not to modify your app's file I/O operations, then you can also consider the new Azure File Service, which provides SMB access to storage (backed by blob storage). Using File Service, your app would (hopefully) not need to be modified, although you might need to change your root path.
More information on Azure File Service may be found here.
Why does it seem strange? You need to store your files somewhere, and the cloud is as good a place as any IF it suits your needs. The obvious advantages are redundancy and geo-replication, and sharing files across multiple projects and servers; the list goes on. It's difficult to advise on whether it would be a good idea or not without hearing some specifics.
You could use Windows Azure Storage together with Amazon in the future if you wanted to (you'd just need to set up the access for it), obviously with slightly longer delays. Then again, that slight performance drop may turn out to be significant and you may end up rewriting it anyway.
Most importantly, swapping from one cloud provider to another is not trivial, depending on just how much you use it and how much data you've got in it, so I would strongly suggest looking closely at the advantages and disadvantages of each platform before throwing your lot in with either one, and then learning that platform thoroughly.
Personally, I went for Azure cloud services + storage even though it was slightly more expensive at the time, because I'm a Microsoft person (not that I didn't do my research). It was annoying in the early days when key features were missing, but it has really matured now and I like the pace at which it's improving.
It's cheap to test, why not try both and see which one suits you? A small price to pay when you have big decisions to make.
Disclaimer: I don't know the current state of Amazon web services.
Nice question. We are in the middle of migrating an old PHP/MySQL/local-share ERP application to Web Role/SQL Azure/Azure Storage. We faced the same problem and the same decision. Let me write down some thoughts on the issue:
Being able to just switch the storage provider is a nice option, but is it realistic? You can always build the abstraction, but have you planned how you would actually change storage providers - the migration/sync while in production? What argument would actually drive a transition to another storage provider? How many users and how much data do you have? Do you plan to shard or rebalance the storage in the future? How reliable must the system be during such a switch? When you switch, do you want to move all the data, or just shard it so that new data starts going to the other provider? Does the development cost of these (reliable) storage layers, plus the cost of developing reliable transitions (or bi-directional syncs), outweigh the price difference between any two storage providers?
Also, just switching the storage mechanism from Azure Blob to Amazon will incur a heavy latency penalty if your other services stay on Azure - when you create storage and services on Azure, you set affinity groups by region precisely to minimize network latency.
These are only a few of the questions to answer before doing all the weightlifting. We abstracted the file repository (blob) because we planned to move from a local NFS share to blobs transparently and gradually, and it answers our needs.
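To make the abstraction idea concrete, here is a hypothetical interface with one Azure blob implementation behind it; the names are made up, and swapping providers would mean writing another implementation of the same interface:

    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    // The application codes against IFileRepository; only the registered
    // implementation knows about Azure (or a local share, or S3, ...).
    public interface IFileRepository
    {
        Task SaveAsync(string name, Stream content);
        Task<Stream> OpenAsync(string name);
    }

    public class AzureBlobFileRepository : IFileRepository
    {
        readonly CloudBlobContainer container;

        public AzureBlobFileRepository(string connectionString, string containerName)
        {
            container = CloudStorageAccount.Parse(connectionString)
                .CreateCloudBlobClient()
                .GetContainerReference(containerName);
            container.CreateIfNotExists();
        }

        public Task SaveAsync(string name, Stream content) =>
            container.GetBlockBlobReference(name).UploadFromStreamAsync(content);

        public async Task<Stream> OpenAsync(string name)
        {
            var buffer = new MemoryStream();
            await container.GetBlockBlobReference(name).DownloadToStreamAsync(buffer);
            buffer.Position = 0;
            return buffer;
        }
    }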

Setting up Azure to Sync Contacts in Custom Program, Tasks and Pricing

We have our own application that stores contacts in a SQL database. What all is involved in getting up and running in the cloud so that each user of the application can have his own private list of contacts, synced with both his computer and his phone?
I am trying to get a feel for what Azure might cost in this regard, but I am finding more abstract talk than concrete scenarios.
Let's say there are 1,000 users, and each user has 1,000 contacts that he keeps in his contacts book. No user can see the contacts set up by any other user. Syncing should occur any time the user changes his contact information.
Thanks.
While the Windows Azure Cloud Platform is not intended to compete directly with consumer-oriented services such as Dropbox, it is certainly intended as a platform for building applications that do that. So your particular use case is a good one for Windows Azure: creating a service for keeping contacts in sync, scalable across many users, scalable in the amount of data it holds, and so forth.
Making your solution multi-tenant friendly (per the comment from @BrentDaCodeMonkey) is key to cost-efficiency. Your data needs are 1K users x 1K contacts/user = 1M contacts. If each contact is approximately 1 KB, then we are talking about roughly 1 GB of storage.
Checking the pricing calculator, the at-rest storage cost is $9.99/month for a 1 GB Windows Azure SQL Database instance (then $13.99 if you go up to 2 GB, etc. - refer to the calculator for additional projections and current pricing).
Then you have data transmission (bandwidth) charges. Since the pricing calculator says "The first 5 GB of outbound data transfers per billing month are also free", you probably won't have any bandwidth costs with the current user count, assuming moderate smarts in the sync.
This does not include the costs of your application. What is your application, and how does it run? Assuming there is a client-side component, that component (typically) cannot be trusted with the database connection, so you need a server-side component that acts as a gatekeeper for the database. (You also usually don't expose the database to all IP addresses - another reason to channel data through a server-side component.) This component also costs money to operate; those costs are in the pricing calculator too, but if you choose a Windows Azure Web Site it could be free. An excellent approach might be the nifty ASP.NET Web API stack that has recently been released: with Web API you can implement a nice REST API that your client application can access securely, and Windows Azure Web Sites can host Web API endpoints. Check out the "reserved instance" capability too.
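A minimal sketch of such a gatekeeper with ASP.NET Web API (the Contact shape is made up, the [Authorize] attribute assumes you have wired up authentication, and the in-memory dictionary stands in for the real SQL Database access):

    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Web.Http;

    public class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
    }

    [Authorize]
    public class ContactsController : ApiController
    {
        // In-memory store keyed by user name; a real service would hit SQL Database here.
        static readonly ConcurrentDictionary<string, List<Contact>> Store =
            new ConcurrentDictionary<string, List<Contact>>();

        // GET api/contacts - returns only the calling user's contacts.
        public IEnumerable<Contact> Get()
        {
            return Store.GetOrAdd(User.Identity.Name, _ => new List<Contact>());
        }

        // POST api/contacts - adds a contact for the calling user.
        public IHttpActionResult Post(Contact contact)
        {
            Store.GetOrAdd(User.Identity.Name, _ => new List<Contact>()).Add(contact);
            return Ok(contact);
        }
    }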
I would start out with Windows Azure Web Sites and then, as the service grows in complexity and sophistication, look at Windows Azure Cloud Services (a more advanced approach to building server-side components).
