We have written two mobile apps and a web back end. The mobile apps are written in Xamarin, the back end in C# on Azure.
There is shared data between all three apps. Some of it is simple keyword tables, but some tables will change often, e.g. a mobile user moving around and making updates to a table; those updates need to go back to the web app and then possibly out to the other apps.
We currently use SQLite on the mobile apps and follow an offline-first approach: when the user changes a table we write to SQLite on the device and then sync to the server. If the user has no connectivity, a background process eventually syncs the data up to the server when possible.
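In rough terms the write path looks something like this (a simplified sketch, not our real code; the table, class and queue names are illustrative):

```csharp
// Simplified sketch of the current custom offline-first flow, using sqlite-net
// (SQLiteAsyncConnection) on the device. Names are illustrative; tables are
// assumed to have been created elsewhere.
using System;
using System.Threading.Tasks;
using SQLite;

public class UserLocation { [PrimaryKey] public Guid Id { get; set; } public double Lat { get; set; } public double Lng { get; set; } }
public class PendingChange { public string TableName { get; set; } public Guid RowId { get; set; } public DateTime ChangedAtUtc { get; set; } }

public class LocalStore
{
    private readonly SQLiteAsyncConnection _db;

    public LocalStore(string dbPath) => _db = new SQLiteAsyncConnection(dbPath);

    public async Task SaveLocationAsync(UserLocation location)
    {
        // 1. Always write locally first so the app keeps working offline.
        await _db.InsertOrReplaceAsync(location);

        // 2. Record the change in a local pending-changes table.
        await _db.InsertAsync(new PendingChange
        {
            TableName = nameof(UserLocation),
            RowId = location.Id,
            ChangedAtUtc = DateTime.UtcNow
        });

        // 3. A separate background job drains PendingChange rows to the
        //    web API whenever connectivity is available.
    }
}
```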
All this is custom code right now, and I am a little hesitant to continue on this path. We are in testing with about 4 users, but the expectation is to grow to thousands or tens of thousands of users in 6 to 18 months.
I think our approach might not scale, and I would prefer to switch to an offline-first framework instead of continuing to roll our own.
Given our environment, I think using the Azure Mobile SDK would be the obvious path to follow.
In general, would you choose an offline-first framework if you expect your app to grow? In particular, does anyone have experience with the Azure Mobile SDK?
Note that your question will likely be closed because you're asking for an opinion/recommendation, but anyway...
From the Azure Mobile Apps Github repo:
Please note that the product team is not currently investing in any new feature work for Azure Mobile Apps.
Also to my knowledge, Microsoft has not announced any new SDK or upgrade path.
With that in mind, one option is to keep your custom code and augment it with code you extract from the SDK, or vice versa.
Assuming that your mobile app calls a web service, which then performs any necessary writes, you could load test a copy of your production environment to see if things fall over and at what point. I'm not a huge fan of premature optimization.
Assuming things do fall over, you could introduce a shock absorber between your web service endpoint and the database using a Service Bus Queue.
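As a sketch of that shock absorber with the current Azure.Messaging.ServiceBus SDK (the queue name and payload shape below are my assumptions, not anything from your code): the endpoint enqueues the change and returns immediately, and a worker drains the queue at a rate the database can sustain.

```csharp
// Sketch: the web endpoint enqueues the incoming change instead of writing
// to the database directly. "device-sync" and the JSON payload are placeholders.
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public class SyncIngest
{
    private readonly ServiceBusSender _sender;

    public SyncIngest(ServiceBusClient client) =>
        _sender = client.CreateSender("device-sync");

    public Task EnqueueChangeAsync(string changeJson) =>
        // Returns as soon as the queue has accepted the message; a separate
        // worker (e.g. a WebJob or Function) performs the actual DB writes.
        _sender.SendMessageAsync(new ServiceBusMessage(changeJson));
}
```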
I have a VPS in my country that hosts and serves my Node.js REST API.
However, soon I will need to open it up to other countries.
That means that remote clients will be calling my services.
Since they are far away, the connection will probably be slow.
How can I avoid this? Maybe I need more servers located in their countries too, but then how can the data be shared across one DB?
I am not looking for a full tutorial on how to do this (though that would be nice to have); I am looking for information about the methodology.
What do you recommend: keep buying servers in remote countries and share the data between them somehow, or use a cloud service like Firebase? And how do cloud services work in the first place?
Without going into too much detail on each item, here are some key points I think you should focus your learning on to solve your problem.
For data storage, look into Firestore (not the JSON Realtime Database), as Firestore is globally scalable.
For your REST endpoints I would use Google Cloud Functions, but without knowing the nature of your application it's hard to say whether that's suitable. The key to reaching global scale is having cacheable endpoints; then you are leveraging Google's global CDN, which is much faster than hitting the origin server. Note: the Firebase Cloud Functions infrastructure WILL face cold-start issues, which may or may not be a problem for you.
Cache invalidation is a little lacking, so you can lean on longer max-age cache settings, but use cache busting and/or the stale-while-revalidate header to help with this.
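As an illustration of those headers (sketched here in C# with the Functions Framework for .NET; the endpoint name and payload are placeholders, and the same Cache-Control value works if you set it from an Express or Node.js function instead):

```csharp
// Sketch of a cacheable endpoint: the response declares itself cacheable so
// the CDN can serve it at the edge instead of hitting the origin every time.
using System.Threading.Tasks;
using Google.Cloud.Functions.Framework;
using Microsoft.AspNetCore.Http;

public class KeywordsFunction : IHttpFunction
{
    public async Task HandleAsync(HttpContext context)
    {
        // public                 -> lets the CDN (not just the browser) cache it
        // max-age=300            -> edge/browser may reuse it for 5 minutes
        // stale-while-revalidate -> keep serving the stale copy for up to 60s
        //                           while a fresh one is fetched in the background
        context.Response.Headers["Cache-Control"] =
            "public, max-age=300, stale-while-revalidate=60";

        await context.Response.WriteAsync("{\"keywords\":[]}");
    }
}
```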
There is some great info here https://www.youtube.com/watch?v=dbV-293m1dQ that covers some of what I have mentioned in more detail.
I'm working on an integration project where a third party will call our web service in Azure. For performance reasons I would like to store the data from 2 tables (more than 1000 records) in the AppFabric cache.
Could anyone please suggest if this is the right design pattern?
Depending on how much data this is (you don't mention how wide the tables are), you have a couple of options:
You could certainly store it in the Azure cache, though this will cost you.
You might also want to consider storing the data in the HTTP runtime cache, which is free but not distributed.
Your choice would largely depend on the size of the data, how often it changes, and what the effect is if someone receives slightly out-of-date data.
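As a rough sketch of the second option (the type and method names below are placeholders, not an existing API of yours):

```csharp
// Sketch: keep a small lookup table in the in-process HTTP runtime cache.
// Free and fast, but every web role instance holds its own copy.
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class LookupCache
{
    private const string CacheKey = "KeywordTable";

    public static List<KeywordRow> GetKeywords(Func<List<KeywordRow>> loadFromDb)
    {
        if (HttpRuntime.Cache[CacheKey] is List<KeywordRow> cached)
            return cached;

        var rows = loadFromDb();   // your existing data access

        // Absolute 10-minute expiry bounds how stale the data can get.
        HttpRuntime.Cache.Insert(
            CacheKey, rows, null,
            DateTime.UtcNow.AddMinutes(10),
            Cache.NoSlidingExpiration);

        return rows;
    }
}

public class KeywordRow
{
    public int Id { get; set; }
    public string Value { get; set; }
}
```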
I believe that the MVC Mini Profiler is a bit of a godsend.
I have incorporated it in a new MVC project which is targeting the Azure platform.
My question is: how do I handle profiling across server (role instance) boundaries?
Is this even possible?
I don't understand why you would need to profile these apps any differently. You want to profile how your app behaves on the production server - go ahead and do it.
A single request will still be executed on a single instance, and you'll get the data from that same instance. If you want to profile services located on a different physical tier as well, that would require a different approach, involving communication through internal endpoints, which I'm sure the Mini Profiler doesn't support out of the box. However, the modification shouldn't be that complicated.
That said, if you did want to profile physically separated tiers, I would go about it differently: profile each tier independently, because that's how I would go about optimizing it. If you wrap the call to your other tier in a profiler step, you can see where the problem lies and still be able to solve it.
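For example, wrapping the cross-tier call in a profiler step looks roughly like this (the client, URL and step name are placeholders; the namespace is MvcMiniProfiler or StackExchange.Profiling depending on your version):

```csharp
// Sketch: time the outbound call to the other tier as a single profiler step.
using System.Net.Http;
using System.Threading.Tasks;
using StackExchange.Profiling;

public class InventoryClient
{
    private readonly HttpClient _http = new HttpClient();

    public async Task<string> GetStockAsync()
    {
        using (MiniProfiler.Current.Step("Call inventory tier"))
        {
            // Everything inside this block appears as one timed step in the
            // profiler output for this request; the other tier is then
            // profiled separately with its own profiler instance.
            return await _http.GetStringAsync("http://inventory-tier/api/stock");
        }
    }
}
```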
By default the mvc-mini-profiler stores and delivers its results using HttpRuntime.Cache. This is going to cause some problems in a multi-instance environment.
If you are using multiple instances, then some ways you might be able to make this work are:
to change the HttpRuntime.Cache storage to an AppFabric cache implementation (or some memcached implementation)
to use an alternative storage strategy for your profiling results (I believe the code includes SqlServerStorage as an example; see the sketch below)
Obviously, whichever strategy you choose will require more time/resources than just the single instance implementation.
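For the second of those options, switching away from HttpRuntime.Cache is a small startup change, roughly like this (the connection string is a placeholder, and the exact namespaces and settings API vary a little between mvc-mini-profiler versions):

```csharp
// Sketch: persist profiler results to SQL Server so any instance can read them.
using System.Web;
using StackExchange.Profiling;
using StackExchange.Profiling.Storage;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        // The table-creation script ships with the profiler source.
        MiniProfiler.Settings.Storage =
            new SqlServerStorage("Server=myserver;Database=Profiling;Trusted_Connection=True;");
    }
}
```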
I am designing a system that's going to have 10 million+ users, each with a photo of about 1-2 MB.
We are going to deploy both the database and the web app on Microsoft Azure.
I am wondering how I should store the photos. There are currently two options:
1. Store all photos using SQL Server FILESTREAM
2. Use a file server
I have no experience with BLOB data at this scale using FILESTREAM.
Can anybody give me any suggestions? The pros and cons?
Any input from anyone with Microsoft Azure experience storing large numbers of photos would be really appreciated!
Thx
Ryan.
I vote for neither. Use Windows Azure Blob storage. Simple REST API, $0.15/GB/month. You can even serve the images directly from there, if you make them public (like <img src="http://myaccount.blob.core.windows.net/container/image.jpg" />), meaning you don't have to funnel them through your web app.
A database is almost always a horrible choice for large-scale binary storage. A database is best for relational data; instead, store references in your database to the actual storage location. There are a few factors you should consider:
Cost - SQL Azure costs quite a lot per GB of storage and has small storage limits (50 GB per database), both of which make it a poor choice for binary data. Windows Azure Blob storage is vastly cheaper for serving up binary objects (it has a somewhat more complicated pricing system, but is still vastly cheaper per GB).
Throughput - SQL Azure has pretty good throughput, as it can scale well; however, Windows Azure Blob storage has even greater throughput, as it can scale to any number of nodes.
Content Delivery Network - A feature not available to SQL Azure (though a complex, custom wrapper could be created), but one that can be set up within minutes to piggy-back on your Windows Azure Blob storage and provide practically limitless bandwidth to your end users, so you never have to worry about your binary objects becoming a bottleneck. CDN costs are similar to those of Blob storage; you can find all of that here: http://www.microsoft.com/windowsazure/pricing/#windows
In other words, there is no reason not to go with Blob storage. It is simple to use, cost-effective, and will scale to whatever you need.
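To make that concrete, here is a minimal sketch with the current Azure.Storage.Blobs SDK (the container and account names are placeholders; the original advice predates this SDK, but the approach is the same: upload once, then hand out the blob URL directly):

```csharp
// Sketch: upload a photo to a public container and return a URL that can go
// straight into an <img src="..."> tag. Names are placeholders.
using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public class PhotoStore
{
    private readonly BlobContainerClient _container;

    public PhotoStore(string connectionString)
    {
        _container = new BlobContainerClient(connectionString, "photos");
        // Blob-level public read access lets browsers fetch the images directly,
        // so they never pass through your web app.
        _container.CreateIfNotExists(PublicAccessType.Blob);
    }

    public async Task<Uri> SaveAsync(string fileName, Stream photo)
    {
        BlobClient blob = _container.GetBlobClient(fileName);
        await blob.UploadAsync(photo, overwrite: true);
        return blob.Uri;   // e.g. https://myaccount.blob.core.windows.net/photos/<fileName>
    }
}
```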
I can't speak to anything Azure-related, but for my money the biggest advantage of using FILESTREAM is that the data gets backed up as part of the normal SQL Server backup process. The size of the data you are talking about also suggests that FILESTREAM may be a good choice.
I've worked on an SCM system with an RDBMS back end, and one of our big decisions was whether to store the file deltas on the file system or inside the DB itself. Because it had to be cross-RDBMS we cooked up a generic, non-FILESTREAM way of doing it, but the ability to do a single-shot backup sold us.
FILESTREAM is a horrible option for storing images. I'm surprised MS ever promoted it.
We're currently using it for the images on our website, mainly user-generated images and any CMS-related content that admins create. The decision to use FILESTREAM was made before I started. The biggest issue is serving the images up: you had better have a CDN sitting in front, otherwise plan on your system coming to a screeching halt. Of course, most sites have a CDN, but you don't want to be at the mercy of that service going down, since your system will then get overloaded. The amount of stress put on your SQL Server is the main problem here.
In terms of ease of backup, your tradeoff is that your DB is MUCH, MUCH larger, so the backup takes longer (potentially much longer), and the system runs slower during the backup. Not to mention that moving backups around takes longer (e.g. restoring prod data in a dev environment or on local machines for dev purposes). Don't use this as a deciding factor.
Most cloud services have automatic redundancy for any files you store on their system (e.g. AWS's S3 and Azure's Blob storage). If you're on premises, just make sure you use a shared location for the images and make sure that location is backed up. I think the best option is to set things up so that each image (and other UGC file types too) has an entry in your DB with a path to that file. Going one step further, separate the root path into a config setting and only store the remaining path with the entry. For example, the root path in config might be a base URL, a shared drive or virtual directory, or a blank entry; then your entry might be "/files/images/image.jpg". This way, if you move your file store, you can just update the root config. I would also suggest creating a FileStoreProvider interface (singleton) that can be used for managing (saving, deleting, updating) these files. That way, if you switch between AWS, Azure, or on-premises storage, you can just create a new provider.
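A minimal sketch of that provider idea (all names and members here are illustrative):

```csharp
// Sketch: the DB row stores only a relative path like "/files/images/image.jpg";
// the root comes from config, and moving between AWS, Azure or on-prem storage
// means writing a new provider rather than touching the data.
using System;
using System.IO;
using System.Threading.Tasks;

public interface IFileStoreProvider
{
    Task SaveAsync(string relativePath, Stream content);
    Task DeleteAsync(string relativePath);
    string GetUrl(string relativePath);   // root path from config + relative path
}

public class BlobFileStoreProvider : IFileStoreProvider
{
    private readonly string _rootUrl;     // e.g. read from a config setting

    public BlobFileStoreProvider(string rootUrl) => _rootUrl = rootUrl;

    public string GetUrl(string relativePath) =>
        _rootUrl.TrimEnd('/') + "/" + relativePath.TrimStart('/');

    public Task SaveAsync(string relativePath, Stream content) =>
        throw new NotImplementedException("wrap your blob/S3/file-share upload here");

    public Task DeleteAsync(string relativePath) =>
        throw new NotImplementedException();
}
```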
I have a client-server DB; I manage many files (doc, txt, pdf, ...) and all of them go into a FILESTREAM BLOB. Customers have 50+ MB DBs. If you can do the same in Azure, go for it. Having everything in the DB is a wonderful thing. It is considered good practice for Postgres and MySQL as well.