I have a Xamarin Forms application, and I have to get initial remote data (with images, maybe as URLs) and save that data as a cache in my app. Every time the application starts, the data has to be refreshed, and if it cannot be, the cached data should be used.
So far, I have already looked at Easy Tables, but it seems that its focus is on saving user data in the cloud, and I don't want to do that.
I only want to get the initial data for the application, cache it, and refresh it every time the app starts.
I didn't find a scenario with Easy Tables where the app administrator loads the initial data (maybe via REST calls) and the app then only consumes that data without modifying it.
Could you give me some advice on how to do this using Azure?
Thanks!
So far, I have already looked at Easy Tables, but it seems that its focus is on saving user data in the cloud, and I don't want to do that.
Easy Tables works with a Node.js backend: you just add the table and your backend is created for you automatically. With Offline Data Sync, you can create and modify data in your local store (e.g. SQLite) while your app is offline, and when it is back online you can push local changes to your server or pull changes from your server into your local store. This may be an approach for you: just pull the data from the server and only read data from your local store.
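To make that concrete, here is a minimal sketch of a read-only offline-sync setup with the Azure Mobile Apps client SDK; the Item model and the backend URL are placeholders you would replace with your own:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.SQLiteStore;
using Microsoft.WindowsAzure.MobileServices.Sync;

// Placeholder model; replace with your own data shape.
public class Item
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string ImageUrl { get; set; }
}

public class CachedDataService
{
    // Hypothetical backend URL; use your own Mobile App endpoint.
    private readonly MobileServiceClient client =
        new MobileServiceClient("https://yourapp.azurewebsites.net");

    private IMobileServiceSyncTable<Item> itemTable;

    public async Task InitializeAsync()
    {
        var store = new MobileServiceSQLiteStore("localcache.db");
        store.DefineTable<Item>();
        await client.SyncContext.InitializeAsync(store);
        itemTable = client.GetSyncTable<Item>();
    }

    // Call on every app start: try to refresh, fall back to the cache when offline.
    public async Task RefreshAsync()
    {
        try
        {
            await itemTable.PullAsync("allItems", itemTable.CreateQuery());
        }
        catch (Exception)
        {
            // Offline or server unreachable: the local SQLite cache is used as-is.
        }
    }

    // Reads always come from the local store.
    public Task<List<Item>> GetItemsAsync() => itemTable.ToListAsync();
}
```

Because the app never pushes local changes, the local store stays a pure read-only cache of the server data.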
I have a Xamarin Forms application, and I have to get initial remote data (with images, maybe as URLs) and save that data as a cache in my app.
I didn't find a scenario with Easy Tables where the app administrator loads the initial data (maybe via REST calls) and the app then only consumes that data without modifying it.
Per my understanding, if your initial data is mostly images and settings, without any sensitive data, you could just leverage Azure Blob storage (image URLs or settings within a *.json file) or Azure Table storage, and use the related client SDK to retrieve the data and store it in your local SQLite database or in files.
I would prefer Blob storage, where you can control access (anonymous access or delegated access permissions) to your blob data. For more details, you could refer to Managing security for blobs.
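For instance, if the container allows anonymous read access, a plain HTTP GET is all the client needs; the account, container, and blob names below are hypothetical, and this is just a sketch of the cache-with-fallback idea:

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

public static class InitialDataLoader
{
    private static readonly HttpClient http = new HttpClient();

    public static async Task<string> LoadAsync(string cachePath)
    {
        try
        {
            // Anonymous read of a blob in a public container (hypothetical names).
            var json = await http.GetStringAsync(
                "https://youraccount.blob.core.windows.net/appdata/initial-data.json");
            File.WriteAllText(cachePath, json); // refresh the local cache
            return json;
        }
        catch (Exception)
        {
            // Offline or the download failed: fall back to the cached copy, if any.
            return File.Exists(cachePath) ? File.ReadAllText(cachePath) : null;
        }
    }
}
```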
You absolutely can do that with a Sync Table.
https://learn.microsoft.com/en-us/azure/app-service-mobile/app-service-mobile-xamarin-forms-get-started-offline-data
Just do a PullAsync in the splash screen to retrieve the values. You don't need to make use of the POST methods, and you can even remove them (or return errors) in your Azure TableController.
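On the .NET backend, a read-only controller could look something like this sketch; the Item model is a placeholder and MobileServiceContext is the Entity Framework context from the standard Azure Mobile Apps server template:

```csharp
using System.Linq;
using System.Web.Http.Controllers;
using Microsoft.Azure.Mobile.Server;

// Placeholder model inheriting the standard Mobile Apps system columns.
public class Item : EntityData
{
    public string Name { get; set; }
    public string ImageUrl { get; set; }
}

public class ItemController : TableController<Item>
{
    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);
        // MobileServiceContext comes from the standard server project template.
        var context = new MobileServiceContext();
        DomainManager = new EntityDomainManager<Item>(context, Request);
    }

    // GET tables/Item — this is what PullAsync consumes on the client.
    public IQueryable<Item> GetAllItems() => Query();

    // No Post/Patch/Delete methods: clients can read but never modify the data.
}
```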
I am working on a project where we have to store some audio/video files in Azure Blob Storage, and after a file is uploaded we need to calculate a price based on the length of the file in minutes. We have an Angular frontend, and the idea was to upload the file directly from the frontend, get the response from Azure with the file stats, then call a backend API to put that data in the database.
What I am wondering is: what are the chances of the data being manipulated between getting the file stats back from Azure and calling our backend API? Is there any chance the length could be modified before it is sent to our API?
One possible solution would be to make use of Azure Event Grid with its Blob storage integration. Whenever a blob is uploaded, an event is raised automatically, which you can consume in an Azure Function to save the data in your database.
There's a possibility that a user might re-upload the same file with a different size. If that happens, you will get another event (apart from the original event raised when the blob was first created). How you handle updates is entirely up to you.
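A minimal sketch of such a function, assuming the classic Microsoft.Azure.EventGrid SDK; SaveToDatabase is a hypothetical persistence call, and note that the event reports the size in bytes, so the duration in minutes would still have to be computed server-side:

```csharp
using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

public static class BlobCreatedFunction
{
    [FunctionName("OnBlobCreated")]
    public static void Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
    {
        if (eventGridEvent.EventType == "Microsoft.Storage.BlobCreated")
        {
            var data = ((JObject)eventGridEvent.Data).ToObject<StorageBlobCreatedEventData>();

            // Url and ContentLength come from Azure itself, not from the browser,
            // so the client has no opportunity to tamper with them.
            log.LogInformation($"Blob {data.Url} created, {data.ContentLength} bytes");

            // SaveToDatabase(data.Url, data.ContentLength); // hypothetical persistence call
        }
    }
}
```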
I'm currently developing a web application using Flask. The app takes information from a SQL database to generate a .pdf that the final user can download from the browser.
My problem is that some hours after I deploy the app using Azure App Service, make some changes to the SQL database, and generate some .pdf files, the app automatically resets to its original state. I therefore lose all the changes to the SQL database and the generated documents, as if the app had some kind of ephemeral storage.
Is there any way in which I can store these files without losing them after a while?
In this case, what Azure service or solution would you recommend for storing these files?
The amount of data generated once the app is in use should be pretty small; the SQL database will be updated once a month, with a couple of .pdf files generated.
I'm in the process of deciding which technology to use for a project. The project will store large amounts of documents (Word, PDF, etc.) and I'm trying to figure out the best way of storing them. So far, I've come up with:
Standard hosting, and using the file system
Standard hosting, use Full Text Search and store documents in SQL Server
Use Azure Blobs.
Under no circumstances can the documents be publicly visible; only certain, authorised people should be able to view them. Can anyone point me in the direction of how to secure the documents so that you can't just point a browser at one and view it?
Windows Azure blobs are a great place to store a lot of documents. By default, blobs can only be retrieved by someone who has the access key for the account. (The headers on API calls have to be signed with that key, so there's a cryptographic guarantee that unauthorized third-parties can't access them.)
I assume you'll host a front-end of some sort that gives access to authorized users, and this front-end will act as a proxy to the files themselves.
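A minimal sketch of such a proxy, assuming ASP.NET MVC and the classic WindowsAzure.Storage SDK; the container name and connection string setting are placeholders:

```csharp
using System.IO;
using System.Web.Mvc;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

[Authorize] // only authenticated users ever reach the blob content
public class DocumentsController : Controller
{
    public ActionResult Download(string name)
    {
        // The account key stays on the server; the browser never sees a blob URL.
        var account = CloudStorageAccount.Parse(
            System.Configuration.ConfigurationManager.AppSettings["StorageConnectionString"]);
        var container = account.CreateCloudBlobClient().GetContainerReference("documents");
        var blob = container.GetBlockBlobReference(name);

        var stream = new MemoryStream();
        blob.DownloadToStream(stream);
        stream.Position = 0;
        return File(stream, "application/octet-stream", name);
    }
}
```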
Don't store your documents in a web server directory. It's as simple as that. Why go through all the effort of configuring a web server when you don't want the files on the web in the first place?
I need to store multiple files that users upload, and then provide these users with the ability to access their files via HTTP. There are two key considerations:
- Storage (which is my primary concern here)
- Security (which we can leave aside for now)
The question is:
What is the most cost-efficient and performant way of storing all these files and giving access to them later? I believe the answer is:
- Store the files in an Azure Storage account, and keep a key that references them in a SQL Azure database.
Am I correct on this?
Is blob storage flat, or can I create something like folders inside it to better organize my files?
The idea of using SQL Azure to store metadata for your blobs is a pretty common scenario, which allows you to take advantage of SQL for searching, and blobs for storage.
Blobs are organized by container. So you'd have something like:
http://mystorage.blob.core.windows.net/mycontainer/myfile.doc
You can also simulate a hierarchy by using a delimiter in the blob name (e.g. myfolder/myfile.doc), but in reality there's just a container plus blobs.
If you keep the container or blob private, the user will either have to go through your web front end (or web service), or you'll have to provide them with a special, time-limited URL with a Shared Access Signature appended.
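A sketch of generating such a time-limited SAS URL with the classic WindowsAzure.Storage SDK:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class SasHelper
{
    // Returns a URL that grants read access to one blob for a limited time.
    public static string GetReadUrl(string connectionString, string containerName, string blobName)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference(containerName);
        var blob = container.GetBlockBlobReference(blobName);

        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15) // link expires after 15 minutes
        };

        // The SAS token is simply appended to the blob URI as a query string.
        return blob.Uri + blob.GetSharedAccessSignature(policy);
    }
}
```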
I would recommend taking a look at the BlobShare sample, a simple file-sharing application that demonstrates the storage services of the Windows Azure Platform together with the authentication and authorization capabilities of Access Control Service (ACS). The full sample code is located at the following link:
http://blobshare.codeplex.com/
You can use this sample code immediately, just by adding your Windows Azure account credentials. The best thing about this sample is that it provides blob access directly through Access Control Service. You can also modify the code to add SAS support, as well as blob download from public containers. Once you have it working and understand the concept, you can tweak it to work the way you want.
I have an application deployed on Windows Azure. The application has a report feature, which works as follows:
1. The application generates the report as a PDF file and saves it in a certain folder in the application.
2. A PDF viewer in the application takes the URL of the file and displays it.
As you know, in Windows Azure I will have several VMs handled through a load balancer, so I cannot ensure that the request in step 2 will go to the same VM as in step 1, and this causes a problem for me.
Any help is greatly appreciated.
I know that I can use blob storage, but that is not the problem.
The problem is that after creating the file on a certain VM, I give the PDF viewer the URL of the PDF file, such as "http://..../file.pdf". This generates a new request that I cannot control, and I cannot know which VM will serve it, so even if I saved the file in the blob it would not solve my problem.
As in any farm environment, you have to consider saving files in storage that is common to all machines in the farm. In Windows Azure, such common storage is Windows Azure Blob Storage.
You have to make some changes to your application so that it saves the files to blob storage. If these are public files, then you just mark the blob container as public and provide the PDF viewer with the full URL of the file in the blob.
If your PDF files are private, you have to mark your container as private. The second step is to generate a Shared Access Signature URL for the PDF and provide that URL to the PDF viewer.
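Put together, the change could look like this sketch, assuming the classic WindowsAzure.Storage SDK; the container name is a placeholder:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ReportPublisher
{
    // Upload the generated PDF and return the URL to hand to the PDF viewer.
    public static string Publish(byte[] pdfBytes, string fileName, string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("reports");
        container.CreateIfNotExists();

        var blob = container.GetBlockBlobReference(fileName);
        blob.Properties.ContentType = "application/pdf";
        blob.UploadFromByteArray(pdfBytes, 0, pdfBytes.Length);

        // Works directly for a public container; for a private one, append
        // a Shared Access Signature to this URL instead.
        return blob.Uri.ToString();
    }
}
```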
Furthermore, while developing you can explore your Azure storage using any of the (free and not-so-free) tools available for Windows Azure Storage. Here are some:
Azure Storage Explorer
Azure Cloud Storage Studio
There are a lot of samples showing how to upload files to Azure Storage. Just search with your favorite search engine, or check out these resources:
http://msdn.microsoft.com/en-us/library/windowsazure/ee772820.aspx
http://blogs.msdn.com/b/windowsazurestorage/archive/2010/04/11/using-windows-azure-page-blobs-and-how-to-efficiently-upload-and-download-page-blobs.aspx
http://wely-lau.net/2012/01/10/uploading-file-securely-to-windows-azure-blob-storage-with-shared-access-signature-via-rest-api/
The Windows Azure Training Kit has a great lab named "Exploring Windows Azure Storage".
Hope this helps!
UPDATE (following the question update):
The problem is that after creating the file on a certain VM, I give the PDF viewer the URL of the PDF file, such as "http://..../file.pdf". This generates a new request that I cannot control, and I cannot know which VM will serve it, so even if I saved the file in the blob it would not solve my problem.
Try changing your logic a bit, and follow my instructions: when your VM creates the PDF, upload the file to a blob, then give the PDF viewer the full blob URL of your PDF file. The request will then go not to any VM but straight to blob storage, and the full blob URL will be something like http://youraccount.blob.core.windows.net/public_files/file.pdf
Or am I missing something? As I understand it, your process flow is as follows:
1. The user makes a special request which causes PDF file generation.
2. The file is generated on the server.
3. The full URL of the file is sent back to the client so that a client PDF viewer can render it.
If this is the flow, then with the suggested changes it will look like the following:
1. The user makes a special request which causes PDF file generation.
2. The file is generated on the server.
3. The file is uploaded to blob storage.
4. The full URL of the file in the blob is returned to the client, so that it can be rendered on the client.
What is not clear? Or what is different in your process flow? I do exactly the same for on-the-fly report generation and it works quite well. The only difference is that my app is Silverlight-based and I force a file download instead of displaying it inline.
An alternative approach is not to persist the file at all.
Rather, generate it in memory, set the content type of the response to "application/pdf", and return the binary content of the report. This is particularly easy if you're using ASP.NET MVC, but you can use an HttpHandler instead. It is a technique I regularly use in similar circumstances (though lately with Excel reports rather than PDF).
The usefulness of this approach does depend on how you're generating the PDF, how big it is and what the load is on your application.
But if the report is to be served just once, persisting it just so that the browser can make another request to retrieve it is wasteful (and you have to provide the persistence mechanism).
If the same file is to be served multiple times and it is resource-intensive to create, then it makes sense to persist it.
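If you do go the non-persisted route, a minimal ASP.NET MVC sketch might look like this; GenerateReportPdf stands in for whatever PDF library you use:

```csharp
using System.Web.Mvc;

public class ReportController : Controller
{
    public ActionResult Report(int id)
    {
        // Build the PDF entirely in memory; nothing touches the VM's disk.
        byte[] pdfBytes = GenerateReportPdf(id);

        // FileContentResult sets the Content-Type header and writes the bytes.
        return File(pdfBytes, "application/pdf", "report.pdf");
    }

    // Hypothetical: plug in your PDF library of choice here.
    private byte[] GenerateReportPdf(int id)
    {
        throw new System.NotImplementedException();
    }
}
```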
You want to save your PDF to centralized, persisted storage, and a VM's hard drive is neither centralized nor persisted. Azure Blob Storage is likely the simplest and best solution: it is dirt cheap to store and access, and the API for storing and retrieving files is very simple.
There are two things you could consider.
Windows Azure Blob + Queue Storage
Blob storage is a cost-effective way of storing binary data and sharing it between instances. You would most likely use a worker role to create the report, which would store the report in blob storage and drop a completed message on the queue.
Your web role instances could monitor the queue, looking for reports that are ready to be displayed.
It would be similar to the concept used in the Windows Azure Guest Book app.
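A minimal sketch of that hand-off, assuming the classic WindowsAzure.Storage SDK; the queue name is a placeholder:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class ReportQueue
{
    // Worker role side: announce that a finished report is waiting in blob storage.
    public static void NotifyReportReady(CloudStorageAccount account, string blobName)
    {
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("reports-ready");
        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage(blobName));
    }

    // Web role side: poll for a completed report; returns null if none is ready.
    public static string TryGetReadyReport(CloudStorageAccount account)
    {
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("reports-ready");
        CloudQueueMessage message = queue.GetMessage();
        if (message == null) return null;

        queue.DeleteMessage(message); // remove it so it isn't processed twice
        return message.AsString;      // the blob name of the finished report
    }
}
```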
Windows Azure Caching Service
Similarly (and much more expensively), you could share the binary using the Caching Service. This gives you a common layer between your VMs in which to store things; however, you won't be able to provide a URL to the PDF, so you'd have to download the binary and either use an HttpHandler or change the content type of the response.
This would be much harder to implement, very expensive to run, and is not guaranteed to work in your scenario. I'd still suggest blobs over any other means.
Another option would be to implement a sticky session handler of your own. Take a look at:
http://dunnry.com/blog/2010/10/14/StickyHTTPSessionRoutingInWindowsAzure.aspx