How to store (and query) the MaxMind GeoIP2 database in Azure?

In an Azure Web App I need to efficiently query the MaxMind GeoIP2 City Database (due to the volume of queries and the latency requirements, we cannot use MaxMind's REST API).
I'm wondering what the best approach is for storing the database (binary MMDB format, accessed via the official .NET API) so that it's easy to update with minimal downtime (we are going to subscribe to monthly updates) while remaining cost-effective with regard to Azure storage and transactions.
Apparently block blobs are the way to go, but I'm not sure about the monthly updates, and the GeoIP2 API loads the whole database into memory (I don't know whether this would be a problem for the Web App, whether I need a worker role to keep it loaded, or whether I need something else); in fact, I don't yet know how large the file is.
What's the most cost-effective solution that preserves low latency under a huge query volume?

According to the API docs, you must have the database available in a file system (the API doesn't know anything about Azure storage and its related REST API). So, regardless of where you permanently store it, you'll need to have it on a disk somewhere.
I have no idea how large the database footprint is, but Web Apps, Cloud Services (web/worker roles) and Virtual Machines (whether Linux or Windows) all have local disks, and you have read/write access to these disks. So you'd need to copy the database file (binary or CSV) to local disk from somewhere. At that point, when you initialize the SDK, you'd create a DatabaseReader and point it to your locally-downloaded copy of the database file.
You mentioned storing the database in blob storage. There's nothing stopping you from doing so and simply downloading a copy to local disk. And there's nothing stopping you from storing multiple versions in multiple blobs. Note: You may also take advantage of Azure File storage (an SMB share). Which you choose is up to you.
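A minimal sketch of that flow, assuming the classic WindowsAzure.Storage SDK and the official MaxMind.GeoIP2 package (the connection string, container, and blob names are placeholders):

```csharp
using System;
using System.IO;
using MaxMind.GeoIP2;
using Microsoft.WindowsAzure.Storage;

class GeoIpLoader
{
    static void Main()
    {
        // Placeholder connection string, container, and blob names.
        var account = CloudStorageAccount.Parse("<storage-connection-string>");
        var blob = account.CreateCloudBlobClient()
                          .GetContainerReference("geoip")
                          .GetBlockBlobReference("GeoIP2-City.mmdb");

        // Copy the database to local disk; the MaxMind reader needs a file path.
        var localPath = Path.Combine(Path.GetTempPath(), "GeoIP2-City.mmdb");
        blob.DownloadToFile(localPath, FileMode.Create);

        // DatabaseReader memory-maps the file by default; create it once and reuse it.
        using (var reader = new DatabaseReader(localPath))
        {
            var response = reader.City("128.101.101.101");
            Console.WriteLine(response.City.Name);
        }
    }
}
```

For the monthly updates, one option is to download the new file to a different local path and swap in a fresh reader instance, so the old reader keeps serving queries while the new file downloads.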
As far as the most cost-effective solution goes: you'll need to do the pricing workup yourself to see what's most effective. You'd also need to evaluate how much RAM is available for the given size of VM, role instance, or Web App you choose. You mentioned Web Apps in your question: Web App instances scale from 0.5 GB to 14 GB of RAM, depending on the tier you choose (again, you'll need to evaluate this).

Related

Can I store and access any temporary storage on a cloud/container?

I am relatively new to the cloud, so please guide me through the complete process.
I have an application that will be hosted in containers in a cloud environment. I want some temporary storage on the container or in the cloud environment, and I want to access it from my web application (written in C#), meaning I will generate a file and keep it there. First of all, is this possible without costing me extra? Secondly, if it is possible, how can I access the area from C# code? And even if it costs me extra, will I have any access issues? Also, please let me know the limitations of that free space in terms of storage, accessibility and cost.
Using an App Service, you can store temporary files inside the %TMP% folder, which is mapped to %SYSTEMDRIVE%\local\Temp, at no extra cost.
Depending on your App Service plan you will have from 10 GB to 140 GB of free space to store files. Beware: your files will disappear if the App Service is restarted.
Refer to this link:
https://github.com/projectkudu/kudu/wiki/Understanding-the-Azure-App-Service-file-system
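A minimal sketch of using that temp area from C# (the file name is a placeholder); Path.GetTempPath() resolves to %TMP% inside an App Service:

```csharp
using System;
using System.IO;

class TempFileDemo
{
    static void Main()
    {
        // %TMP% maps to the instance-local temp area; contents do not survive a restart.
        var path = Path.Combine(Path.GetTempPath(), "scratch.csv");
        File.WriteAllText(path, "id,value\n1,42\n");
        Console.WriteLine(File.ReadAllText(path));
    }
}
```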

About Windows Azure blob storage: the implementation of a project should not depend on the cloud platform

We plan to migrate our existing website to Windows Azure, and I have been told that we need to store files in blob storage.
My question is:
If we want to use blob storage, that means I need to rewrite the file storage function (we use the file system for now) to call the blob service API to store files. That seems very strange to me, just because we want to use Windows Azure. What about the future, if we want to use Amazon EC2 or another cloud platform? They might have their own way to store files, and then maybe I'd need to rewrite the file storage function again. In my opinion, the implementation of a project should not depend on the cloud platform (or cloud server)! Can anybody correct me? Thanks!
I won't address the commentary about whether an app should have a dependency on a particular cloud environment (or specific ways to deal with that particular issue), as that's subjective and it's a nice debate to have somewhere else. What I will address is the actual storage in Azure, as your info is a bit out-of-date.
One reason to use blob storage directly (and possibly the reason you were told to use blob storage) is that it provides access from multiple instances of your app. Also, blob storage provides 500TB of storage per storage account, and it's triple-replicated within the deployed region (and optionally geo-replicated). With attached storage (either with local disk or blob-backed Azure Disk), the access is specific to a particular instance of your app. Shifting from file system access to blob storage access does require app modification.
If you choose not to modify your app's file I/O operations, then you can also consider the new Azure File Service, which provides SMB access to storage (backed by blob storage). Using File Service, your app would (hopefully) not need to be modified, although you might need to change your root path.
More information may be found in the Azure File Service documentation.
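As a rough illustration of that "change your root path" idea (the account, share, and paths below are all hypothetical), existing file I/O keeps working once the share is mounted over SMB:

```csharp
using System.IO;

class FileShareDemo
{
    static void Main()
    {
        // The share would be mounted over SMB beforehand, e.g.:
        //   net use Z: \\myaccount.file.core.windows.net\myshare /u:myaccount <account-key>
        // Account, share, and paths are placeholders.
        string root = @"Z:\uploads";   // was @"C:\site\uploads"
        File.WriteAllText(Path.Combine(root, "report.txt"), "same code, new root");
    }
}
```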
Why does it seem strange? You need to store your files somewhere, and the cloud is as good a place as any if it suits your needs. The obvious advantages are redundancy and geo-replication, sharing files across multiple projects and servers; the list goes on. It's difficult to advise on whether it would be a good idea or not without hearing some specifics.
You could use Windows Azure storage with Amazon in the future if you wanted to (you'd just need to set up the access for it), obviously with slightly longer delays. Then again, that slight performance drop may be significant, and you may end up rewriting it.
Most importantly, swapping from one cloud provider to another is not trivial, depending on just how much you use it or how much data you've got in it, so I would strongly suggest looking closely at the advantages and disadvantages of each platform before putting your lot in with either one, and then fully learning that platform.
Personally, I went for Azure cloud services + storage etc., even though it was slightly more expensive at the time, because I'm a Microsoft person (not that I didn't do my research). It was annoying in the early days when key features were missing, but it's really matured now, and I like the pace at which it's improving.
It's cheap to test, why not try both and see which one suits you? A small price to pay when you have big decisions to make.
Disclaimer: I don't know the current state of Amazon web services.
Nice question. We are in the middle of migrating an old PHP/MySQL/LocalShare ERP application to WebRole/SQL Azure/Azure Storage. We faced the same problem and decision. Let me write some thoughts about the issue:
It is a good option to be able to just switch the storage provider, but is it reasonable? You can always build the abstraction, but do you plan how to do the actual change of storage provider (migration/sync while in production)? Some questions to answer first:
- What argument exactly would drive the transition to another storage provider?
- How many users and how much data do you have?
- Do you plan to shard/rebalance the storage in the future?
- How reliable must the system be during the storage provider switch?
- When you switch, do you want to move the data completely, or just shard it so that new data starts using the different provider?
- Does the development cost of these (reliable) storage layers, plus the cost of developing reliable transitions (or bi-directional syncs), outweigh the price difference between any two storage providers?
Also, just switching the storage mechanism from Azure Blob to Amazon will incur a heavy latency penalty if your other services stay on Azure: when you create storage and services on Azure, you set affinity groups by region precisely to minimize network latency.
These are only a few of the questions to answer before doing all the heavy lifting. We have abstracted the file repository (blob) because we planned to move from a local NFS share to blob storage transparently and gradually, and it answers our needs.
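As a sketch of what such an abstraction can look like (the interface and class names are made up for illustration; the blob variant uses the classic WindowsAzure.Storage SDK):

```csharp
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// The app depends only on this interface, not on any cloud SDK.
public interface IFileRepository
{
    void Save(string name, Stream content);
    Stream Open(string name);
}

// Local/NFS implementation: what the app starts with.
public class LocalFileRepository : IFileRepository
{
    private readonly string _root;
    public LocalFileRepository(string root) { _root = root; }

    public void Save(string name, Stream content)
    {
        using (var file = File.Create(Path.Combine(_root, name)))
            content.CopyTo(file);
    }

    public Stream Open(string name)
    {
        return File.OpenRead(Path.Combine(_root, name));
    }
}

// Blob implementation: swapped in during the gradual migration.
public class BlobFileRepository : IFileRepository
{
    private readonly CloudBlobContainer _container;

    public BlobFileRepository(string connectionString, string containerName)
    {
        _container = CloudStorageAccount.Parse(connectionString)
                                        .CreateCloudBlobClient()
                                        .GetContainerReference(containerName);
    }

    public void Save(string name, Stream content)
    {
        _container.GetBlockBlobReference(name).UploadFromStream(content);
    }

    public Stream Open(string name)
    {
        return _container.GetBlockBlobReference(name).OpenRead();
    }
}
```

With this shape, the migration can route reads to one implementation and writes to both until the move is complete.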

Azure Traffic Manager for Cloud Services - What about storage access?

I have finally got the time to start looking at Azure. It looks good, and scaling is easy.
Azure SQL, Table Storage and Blob Storage should cover most of my needs: fast access to data, automatic replication, and failover to another datacenter.
Should the need arise for an app with fast global access, Traffic Manager is there, and one can route users for "Failover" or "Performance".
The "Performance" routing is very nice for Cloud Services and web roles / worker roles... BUT... what about access to data in SQL Azure, Table Storage or Blob Storage?
I have tried searching the web (for what to do about this need), but haven't found anything about Traffic Manager that mentions how to access data in such a scenario.
Have I missed anything?
Do people access the storage in the original data center (and if that fails use the Geo Replication feature)? Is that fast enough? Is internal traffic on the MS network free across datacenters?
This seems like such a simple ...
Take a look at the guidance from Microsoft: Replicating, Distributing, and Synchronizing Data. You could use the Service Bus to keep datacenters in sync. This can cover SQL databases, storage, search indexes like Solr or Elasticsearch, and so on. The advantage over solutions like SQL Data Sync is that it's technology-independent and can keep virtually all of your data in sync.
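For example, a change event could be published to a Service Bus topic with one subscription per datacenter; each subscriber applies the change to its local store. The topic name, connection string, and message shape below are all hypothetical:

```csharp
using Microsoft.ServiceBus.Messaging;

class SyncPublisher
{
    static void Main()
    {
        // Placeholder connection string and topic; each datacenter has its own subscription.
        var client = TopicClient.CreateFromConnectionString(
            "<service-bus-connection-string>", "data-sync");

        var message = new BrokeredMessage("{\"entity\":\"Customer\",\"id\":42,\"op\":\"update\"}");
        message.Properties["origin"] = "west-europe"; // lets subscribers skip changes they produced

        client.Send(message);
    }
}
```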
In this episode of Channel 9 they state that Traffic Manager is only for Cloud Services as of now (Jan 2014) but support is coming for Azure Web Sites and other services. I agree that you should be able to ask for a Blob using a single global URL and expect that the content will be served from the closest datacenter.
There isn't a one-click, easy-to-implement solution for this issue. The way you solve it will depend on where the data lives (i.e. SQL Azure, blob storage, etc.) and your access patterns.
Do you have a small number of data requests that are not on a performance critical path in your code? Consider just using the main datacenter.
Do you have a large number of read-only type of requests? Consider doing a replication of the data to another datacenter.
Do you do a large number of reads and only a few writes? Consider duplicating the data among all datacenters, where each write goes to all datacenters at the same time (incurring a performance penalty) and all reads go to the local datacenter (fast reads); see the sketch after this list.
Is your data in SQL Azure? Consider using SQL Data Sync to keep multiple datacenters in sync.
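A minimal sketch of that duplicate-everywhere write, assuming the classic WindowsAzure.Storage SDK (the connection strings, container, and blob names are placeholders):

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;

static class MultiDcWriter
{
    // Write the payload to the same blob in every datacenter in parallel;
    // reads then go to whichever datacenter is closest.
    public static Task WriteEverywhereAsync(string[] connectionStrings,
                                            string blobName, byte[] payload)
    {
        var writes = connectionStrings.Select(cs =>
            CloudStorageAccount.Parse(cs)
                               .CreateCloudBlobClient()
                               .GetContainerReference("shared-data")
                               .GetBlockBlobReference(blobName)
                               .UploadFromByteArrayAsync(payload, 0, payload.Length));

        // The write is only as fast as the slowest datacenter (the perf penalty above).
        return Task.WhenAll(writes);
    }
}
```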

Share data between users in a Metro application

I would like to create a Metro application that allows a group of people to interact. One person would create data and serve as the owner, and multiple others would be invited in and allowed to modify that data. I heard from Build talks that each Metro application will get per-user Azure storage, but will it be possible to share that data between multiple users? Does anyone have a link they could share where I could research this?
I think that you are confusing SkyDrive with Azure Blob Storage.
SkyDrive:
- Personal to a Live ID
- Not really meant as a base for collaborative work
Azure Blob Storage:
- You can have public files that anyone can view and update
- You can take a lease on a file so that only certain people can edit it
- Since you own the Azure account, you also control the content
You can learn the basics in the Azure storage documentation.
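A rough sketch of the lease mechanism mentioned above, using the classic WindowsAzure.Storage SDK (the connection string, container, and blob names are placeholders):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;

class LeaseDemo
{
    static void Main()
    {
        var blob = CloudStorageAccount.Parse("<storage-connection-string>")
                                      .CreateCloudBlobClient()
                                      .GetContainerReference("shared")
                                      .GetBlockBlobReference("document.json");

        // A 30-second exclusive write lock; only the lease holder can modify the blob.
        string leaseId = blob.AcquireLease(TimeSpan.FromSeconds(30), proposedLeaseId: null);

        blob.UploadText("updated content",
                        accessCondition: AccessCondition.GenerateLeaseCondition(leaseId));

        blob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
    }
}
```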
If you want to share private app data between users, the best way to do so would be via a shared server of some sort. You should have a server (running on Azure, Amazon EC2, or anything really) that exposes a RESTful web service which each application connects to. The shared state then lives on that server.
This is better than trying to use SkyDrive or some file-based system for storing shared data. With a file on SkyDrive and multiple users trying to access it, you would run into concurrency issues when more than one person tries to write to it.
You don't get Azure with Metro.
With Live you get free SkyDrive, which is personal cloud storage, around 10 GB. You can share files, but that works by sending an email link; it is not file storage that would readily support a server-type application managing that sharing.
Azure is a cloud platform for file and data sharing. Azure is not free, but storage costs only $0.125/GB per month, so 10 GB = $1.25/month. By using SkyDrive as shared storage, you give up a lot of the developer and hosting tools that come with Azure to save $1.25/month.
It looks like there is a more formal definition of this with the updated help now available. They were referring to roaming application data. I found the following links that provide guidance:
http://msdn.microsoft.com/en-us/library/windows/apps/hh464917.aspx
http://msdn.microsoft.com/en-us/library/windows/apps/hh465094.aspx
The general guidance is that a small amount of temporary application data is provided on a per-app, per-user basis. The actual size you get is not detailed, but the guidance is pretty clear: app settings only, no large data sets, and don't use it for instant synchronization. Given this guidance, my plan is not a good one and will change.

Pulling data asynchronously from third-party web service on Windows Azure Platform

I want to pull large amounts of data, frequently, from different third-party API web services and store it in a staging area (this is what I want to decide right now), from where it will then be moved piece by piece, as required, into my application's database.
I wanted to know: can I use the Azure platform to achieve this? How well suited is the Azure platform to this task?
What if the amount of data to be pulled is large and the pull frequency is high, e.g. half-hourly or hourly for 2,000 different users?
I assume that if this is possible at all, then bandwidth, data storage, server capacity and so on will not be things for me to worry about, but for Microsoft. And obviously, I should be able to access the data whenever I need it.
If I had to implement this on Windows Server, I know I would use a Windows service. But I don't know how it can be done on the Windows Azure platform, if it is possible at all.
As Rinat stated, you can use Lokad's solution. If you choose to do it yourself, you can run a timed task in your worker role: maybe spawn a thread that sleeps, waking every 30 minutes to perform its task. It can then reach out to the web services in question (or maybe one thread per web service?) and fetch data. You can store it temporarily in Azure Table Storage, which is a fraction of the cost of SQL Azure ($0.15 per GB), and then easily read it out of Table Storage on demand and transfer it to SQL Azure.
Assuming your services, storage and SQL Azure are in the same data center (set by choosing the affinity group appropriately), you'd only pay for bandwidth when pulling data from the web services; there'd be no bandwidth charges to retrieve from Table Storage or insert into SQL Azure.
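A minimal sketch of that timed loop in a Cloud Services worker role; the fetch-and-stage body is a placeholder for the web-service pull and Table Storage write:

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            FetchAndStage();                        // pull from the web services, stage the data
            Thread.Sleep(TimeSpan.FromMinutes(30)); // wake every half hour
        }
    }

    private void FetchAndStage()
    {
        // Placeholder: call each third-party web service (possibly one thread
        // per service) and write the results to Azure Table Storage, to be
        // transferred to SQL Azure on demand.
    }
}
```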
In Windows Azure, a Worker Role is usually what hosts this kind of cloud processing. To accomplish your tasks you'll either need to implement this messaging/scheduling infrastructure yourself or use something like the Lokad.Cloud or Lokad.CQRS open-source projects for Azure.
We use Lokad.Cloud for distributed BI processing of hundreds of thousands of series, and Lokad.CQRS allows us to reliably retrieve and synchronize millions of products on schedule.
There are samples, docs and community in both projects to get you started.
