I am using the Azure Storage service for my application.
I need to store organisation data such as images, documents and videos.
In my application, users from 50 organisations upload their data.
We have the following concerns:
1) Each company should be limited to 10 GB of space. If their user data exceeds 10 GB, they should get no further storage access.
Is it feasible?
2) What is the best architecture we can design? For example, I have a container for each of 5 organisations, with year folders like 2018/2017 and then sub-folders inside each year like image/doc/videos.
So I will have 5 containers, then year folders with 3 sub-folders each.
The hierarchy will look like:
organisation (container) -> year (folder) -> three sub-folders (image/doc/videos)
Is it then possible to restrict/grant access at the year (folder) level?
Please suggest.
I have abstracted three questions from your revised question (I think):
Can we limit a storage account container to a specific size (like 10 GB)? (Or is there an approach we can implement to achieve this?)
No, there is no ability to set a quota on a given container in a storage account. To achieve this you will need to implement the size-check business logic in an API that fronts the storage account and containers.
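As a rough illustration (a minimal sketch using the classic Microsoft.WindowsAzure.Storage .NET SDK; the 10 GB quota, the container-per-organisation layout and the method names are assumptions, not an official quota feature), the fronting API could total the blob sizes in an organisation's container before accepting an upload:

```csharp
using System;
using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class QuotaCheck
{
    const long QuotaBytes = 10L * 1024 * 1024 * 1024; // assumed 10 GB limit per organisation

    static bool CanUpload(string connectionString, string containerName, long incomingBytes)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference(containerName);

        // Sum the size of every blob in the container (flat listing walks the "folders" too).
        long used = container.ListBlobs(useFlatBlobListing: true)
                             .OfType<CloudBlockBlob>()
                             .Sum(b => b.Properties.Length);

        return used + incomingBytes <= QuotaBytes;
    }
}
```

For large containers you would probably cache or persist the running total rather than listing every blob on each upload.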
What architecture is suggested given the data provided? (organization, year, type, etc.)
The suggested container structure will work, as long as you implement an API on top of it to control access and enforce business rules and storage placement.
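One point worth illustrating (a hedged sketch with the same classic .NET SDK; the blob path is made up): blob storage has no real folders, so the year/type hierarchy is simply encoded as prefixes in each blob's name.

```csharp
using Microsoft.WindowsAzure.Storage.Blob;

class FolderLayout
{
    // "Folders" in blob storage are just prefixes in the blob name,
    // so year (2018) and type (images) live inside the name itself.
    static void UploadIntoHierarchy(CloudBlobContainer orgContainer, string localPath)
    {
        var blob = orgContainer.GetBlockBlobReference("2018/images/photo1.jpg"); // assumed path
        blob.UploadFromFile(localPath);
    }
}
```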
Can you restrict/grant access to containers in the Azure storage account (aka folders)?
You have to use SAS tokens if you want to secure containers in an Azure storage account. Broadly speaking, with your requirements you need to look at implementing a middle-man service, such as an API (built with Functions / API Management / Logic Apps), that implements your storage routing, business logic and security rules.
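For example (again a sketch with the classic .NET SDK; the permissions and the one-hour expiry are assumptions), such a service could hand out a short-lived, container-scoped SAS instead of ever exposing the account key:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class SasIssuer
{
    // Returns a URL the client can use to upload into its organisation's container.
    static string GetUploadSasUri(string connectionString, string containerName)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference(containerName);

        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.List,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1) // assumed short lifetime
        };

        return container.Uri + container.GetSharedAccessSignature(policy);
    }
}
```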
Related
I want to build an image sharing platform for customers to use. This platform will take an image provided by a user, create copies of it at multiple resolutions, and store them ready to be shared or downloaded. How can I achieve this using Azure in a cost-effective way?
I'm thinking of using Azure Functions (for the API calls), Storage blobs, Event Grid and Cosmos DB for this.
To keep the costs low, keep it simple:
Store data in Blob storage; the price varies based on redundancy, speed of access and location.
Use Azure Functions for processing images; the Consumption plan gives 1M free executions per month (see the sketch after this list).
Use Azure App Service to host the website for uploading images; there is a free tier.
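As a rough sketch of the processing step (assuming the in-process C# Functions model and the SixLabors.ImageSharp package; the container names and the 300 px width are illustrative), a blob-triggered function can write a resized copy whenever an image lands in the uploads container:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

public static class ResizeImage
{
    [FunctionName("ResizeImage")]
    public static void Run(
        [BlobTrigger("uploads/{name}")] Stream input,                 // original image uploaded by the user
        [Blob("thumbnails/{name}", FileAccess.Write)] Stream output,  // resized copy written back to storage
        string name)
    {
        using (var image = Image.Load(input))
        {
            image.Mutate(x => x.Resize(300, 0)); // 300 px wide; height 0 keeps the aspect ratio
            image.Save(output, new SixLabors.ImageSharp.Formats.Jpeg.JpegEncoder());
        }
    }
}
```

You would add one output binding (or one function) per target resolution.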
SUMMARY: Can I use an Azure Storage Plan as additional storage that is available to an Azure Web App?
DETAILS: We have a very simple Web App using a low amount of CPU and a large amount of storage (all it does is list files and let you download them). I need at least 50GB of storage for video and audio files for this app, and getting that amount of storage on a plan pushes me up to CPU and other resources that I don't need, with the consequence of a much higher price.
My plan (hope) is that I could create a Web App and remap the root directory of the Web App to a Storage Plan that is 50GB. There are two challenges that I have spent the last day researching, but at this point I have not been able to find an answer.
I have created the Web App, and I see that /wwwroot is on the D: drive of the Web App. I have also created a storage plan with a shareable area of 50GB.
So... Can anyone give me some insight into the following:
How do I tell the Web App to use the 50GB of storage which is now available to it?
How can I map a drive letter to the area in the storage plan?
How do I tell the Web App that it should use the 50GB area on the storage plan as the root drive instead of using D:/Webroot?
Many Thanks!
Refer to this feedback link on a similar request: https://feedback.azure.com/forums/169385-web-apps/suggestions/13536996-the-ability-to-store-iis-logs-in-azure-file-storag - it has been denied.
By default, on Azure Web Apps, all files are stored in the file system alongside the application, including media files. You may wish to know about the main types of files (https://github.com/projectkudu/kudu/wiki/Azure-runtime-environment) that are dealt with on an Azure Web App (persisted files, temporary files and machine-level read-only files).
Refer to the article File structure on Azure (https://github.com/projectkudu/kudu/wiki/File-structure-on-azure) to understand the sets of files and directories on an Azure Web App, and check the directories that are likely to grow, such as LogFiles, site/repository, site/deployments (for deployment slots) and your directory for uploaded files.
To verify, you can go to your SCM site's debug console (https://{sitename}.scm.azurewebsites.net/DebugConsole) and query the free space on d:\local. The available disk space depends on the App Service plan you're using: 1 GB for Free, 10 GB for Basic, 50 GB for Standard and 250 GB for Premium. Refer to the pricing page for more details on these limits: https://azure.microsoft.com/en-us/pricing/details/app-service/.
If it fits your requirements, you may use an ASE: the Azure App Service Environment is an App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale.
https://learn.microsoft.com/en-us/azure/app-service/environment/app-service-web-how-to-create-a-web-app-in-an-ase
According to Microsoft Azure Support:
"... since the Product Group confirmed that it is not possible to mount
additional storage to the web app, you can integrate Azure storage
with the Azure SDK or rest API. But you can't mount the drive and use
it as storage.
Another option that you have would be to replicate the scenario on a
Virtual Machine where you can choose its capabilities (Number of
cores, RAM, and Storage Memory)."
So there you have it. It appears that Web Apps are fairly fixed configurations, which means that when you scale up a Web App you get more CPU resources AND more disk storage. It's a packaged deal, most likely designed for ease of deployment, and there appears to be nothing you can do about that.
The best alternative, it seems, is to spin up a VM with your chosen OS and then add additional disk storage as needed. It's a "do-it-yourself" approach, but it is the best solution that seems to be available.
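If a VM feels like overkill, the support team's hint about using the SDK can also work: keep the Web App on a small plan and stream the large media files from Blob Storage rather than local disk. A minimal sketch with the classic .NET SDK (the container name is an assumption):

```csharp
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class MediaStore
{
    // Streams a video/audio file from Blob Storage instead of D:\home\site\wwwroot.
    static Stream OpenMedia(string connectionString, string blobName)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("media"); // assumed container
        return container.GetBlockBlobReference(blobName).OpenRead();
    }
}
```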
When exploring Azure storage I've noticed that access to a storage container is done through a shared key. There is concern where I work that if a developer is using this key for an application they're building and then leaves the company, they could still log in to the storage account and delete anything they want. The workaround for this would be to regenerate the secondary key for the account, but then we'd have to change the keys in every application that uses them.
Is it best practice to have an entire storage account per application per environment (dev, test, staging, production)?
Is it possible to secure the storage account behind a virtual network?
Should we use signatures on our containers on a per application basis?
Has anybody had similar experiences and found a good pattern for dealing with this?
I have a slightly different scenario (external applications), but the problem is the same: data access security.
I use Shared Access Signatures (SAS) to grant access to a container.
In your scenario you can create a stored access policy per application on a container and generate a SAS from that policy with a long expiration time; you can revoke it at any point by removing the stored access policy from the container. So you can revoke the current SAS and generate a new one when your developer leaves. You can't generate a single SAS for multiple containers, so if your application uses multiple containers you will have to generate multiple SAS tokens.
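A minimal sketch with the classic Microsoft.WindowsAzure.Storage SDK (the policy name, permissions and one-year expiry are assumptions):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class PerAppSas
{
    static string CreateAppSas(CloudBlobContainer container, string appPolicyName)
    {
        // Register (or replace) a stored access policy on the container for this application.
        var permissions = container.GetPermissions();
        permissions.SharedAccessPolicies[appPolicyName] = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.Write,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddYears(1) // long-lived, revocable via the policy
        };
        container.SetPermissions(permissions);

        // Generate a SAS tied to the stored policy; removing the policy revokes every SAS issued from it.
        return container.GetSharedAccessSignature(new SharedAccessBlobPolicy(), appPolicyName);
    }

    static void RevokeAppSas(CloudBlobContainer container, string appPolicyName)
    {
        var permissions = container.GetPermissions();
        permissions.SharedAccessPolicies.Remove(appPolicyName);
        container.SetPermissions(permissions);
    }
}
```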
Usage, from the developer's perspective, stays the same:
You can use the SAS token to create a CloudStorageAccount or CloudBlobClient, so it's almost like a regular access key.
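For instance (a small sketch; the account name parameter is simply whatever your storage account is called):

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;

class SasUsage
{
    // Builds a blob client from a SAS token, no account key required.
    static CloudBlobClient ClientFromSas(string sasToken, string accountName)
    {
        var credentials = new StorageCredentials(sasToken);
        var account = new CloudStorageAccount(credentials, accountName, endpointSuffix: null, useHttps: true);
        return account.CreateCloudBlobClient();
    }
}
```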
Longer term, I would probably think about creating one internal service (an internal API) responsible for generating SAS tokens and renewing them. This way you have a completely automated system, with the access keys disclosed only to this main service. You can then restrict access to this service with a virtual network, certificates, authentication, etc. And if something goes wrong (the developer who wrote that service leaves :-) ) you can regenerate the access keys and change them, but this time only in one place.
A few things:
A storage account per application (and/or environment) is a good strategy, but you have to be aware of the limit: a maximum of 100 storage accounts per subscription.
There is no option to limit access to a storage account with a virtual network.
You can have a maximum of 5 stored access policies on a single container.
I won't get into subjective / opinion answers, but from an objective perspective: if a developer has a storage account key, then they have full access to the storage account. And if they left the company and kept a copy of the key? The only way to lock them out is to regenerate the key.
You might assume that separating apps with different storage accounts helps. However, keep this in mind: if a developer had access to the subscription, they had access to the keys for every single storage account in that subscription.
When thinking about key regeneration, think about the total surface area of apps that have knowledge of the key itself. If storage manipulation is solely a server-side operation, the impact of changing a key is minimal (a small app change in each deployment, along with updating any storage browsing tools you use). If you embedded the key in a desktop/mobile application for direct storage access, you have the bigger problem of pushing out updated clients, but then you already have a security problem anyway.
We are developing a "multi-tenant application" (MTA) on Azure. In addition, we develop "single-tenant applications" (STAs) for customers that utilise MTA data via a REST API endpoint, i.e. the STA can be hosted anywhere.
A specific STA uploads and stores video files. Security for these video files is important, and 1 video x 1 concurrent user is the most likely consumption use case. It is not clear at this stage whether the user will consume the content by streaming or by download.
QUESTIONS
Using an Azure MEDIA SERVICES account/keys it's easy to upload, store and download media content. What are the benefits of using MEDIA SERVICES over a standard Azure STORAGE ACCOUNT? I understand MEDIA SERVICES uses a STORAGE ACCOUNT.
Does isolating an STA into a new Azure subscription make sense, to isolate video-related costs categorically? The itemised bill contains 6000+ rows, and it is difficult to extract the relevant data for an STA each month. In theory an STA customer could in future take control of this account management and its costs.
Is there a maximum number of CONTAINERS that can be added to a STORAGE ACCOUNT?
Should the CONTAINER be of type PRIVATE to secure the content but still allow access for the STA?
Thank you
Scott,
Media Services is good if you're looking to accept incoming video and process it to serve in other formats, or to leverage streaming media playback. Serving video directly out of an Azure Blob Storage account is possible, but it will not provide smooth streaming or transcoding (without streaming playback, users on high-latency connections may see the video stop and start).
I would advise against putting each STA into its own subscription. While it would give you a degree of control over charging usage back to the STA customer, it would be a big overhead to manage. Your best bet would be to use an appropriate storage account / container setup that allows you to track calls some other way and provide estimated costs. Don't forget that Azure is always changing, and future features may give you the ability to tag and track costs inside a subscription more effectively.
There is no limit on the number of containers in a storage account. The limits are 50 storage accounts per subscription and a maximum of 500 TB of storage per account. Storage and subscription limits are documented here: http://azure.microsoft.com/en-us/documentation/articles/azure-subscription-service-limits/#storagelimits
You can use Shared Access Signatures to control access to Blobs in Azure Blob Storage. See here for how to create and use them: http://msdn.microsoft.com/en-us/library/azure/jj721951.aspx and here for guidance on setting permissions on Blob Storage containers: http://msdn.microsoft.com/en-us/library/azure/ee393343.aspx
HTH
Simon.
I will try to answer the first question:
Using an Azure MEDIA SERVICES account/keys it's easy to upload, store and download media content. What are the benefits of using MEDIA SERVICES over a standard Azure STORAGE ACCOUNT? I understand MEDIA SERVICES uses a STORAGE ACCOUNT.
Answer: The Azure Media Services origin server is essentially IIS Media Services in the cloud. All video content is stored in Azure Blob storage, and there is a mapping between the media service and the storage account. There are many advantages to using a media server rather than downloading directly from storage: (1) the media server has the intelligence to forward the right data fragment (right bitrate, timestamp) to your client efficiently; (2) the origin server dynamically packages multi-bitrate MP4s from the storage account into multiple streaming formats (HLS, Smooth Streaming and MPEG-DASH), which can be played on various devices and platforms, so you save on the cost of encoding your video into multiple formats; (3) the origin server supports live streaming.
I think this question comes down to why we invented the media server. I have a blog post that explains how video streaming works, for your reference: http://mingfeiy.com/adaptive-streaming-video-streaming.
I would like to create a Metro application that allows a group of people to interact. One person would create data and serve as the owner, and multiple others would be invited in and allowed to modify that data. I heard in the Build talks that each Metro application will get per-user Azure storage, but will it be possible to share that data between multiple users? Does anyone have a link they could share where I could research this?
I think that you are confusing SkyDrive with Azure Blob Storage.
SkyDrive
Personal to a Live ID
Not really meant as a base for collaborative work
Azure Blob Storage
You can have public files that anyone can view and update
You can take a lease on a file so that only certain people can edit it (see the sketch below)
Since you own the Azure account you also control the content
You can learn the basics here
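For the lease point above, a minimal sketch with the classic .NET SDK (the blob name and the 60-second lease duration are assumptions):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class LeaseExample
{
    static void UpdateWithLease(CloudBlobContainer container)
    {
        var blob = container.GetBlockBlobReference("shared-data.json"); // assumed blob name

        // Acquire an exclusive lease; other writers are rejected until it is released or expires.
        string leaseId = blob.AcquireLease(TimeSpan.FromSeconds(60), proposedLeaseId: null);
        try
        {
            blob.UploadText("{ \"updated\": true }",
                accessCondition: AccessCondition.GenerateLeaseCondition(leaseId));
        }
        finally
        {
            blob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
        }
    }
}
```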
If you want to share private app data between users, the best way to do so would be via a shared server of some sort. You should have a server (running on Azure, Amazon EC2, or anything really) that exposes a RESTful web service which each application connects to. The shared state then lives on that server.
This is better than trying to use SkyDrive or some file-based system for storing shared data. With a file on SkyDrive and multiple users trying to access it, you would run into concurrency issues when more than one person tries to write to it.
You don't get Azure with Metro.
With Live you get free SkyDrive, which is personal cloud storage (around 10 GB). You can share files, but only by sending an email link. It is not file storage that would readily support a server-type application managing that sharing.
Azure is a cloud platform for file and data sharing. Azure is not free, but storage costs only $0.125 / GB per month, so 10 GB = $1.25 / month. By using SkyDrive as shared storage you are giving up a lot of the developer and hosting tools that come with Azure to save $1.25 / month.
It looks like there is a more formal definition of this with the updated help now available. They were referring to roaming application data. I found the following links that provide guidance:
http://msdn.microsoft.com/en-us/library/windows/apps/hh464917.aspx
http://msdn.microsoft.com/en-us/library/windows/apps/hh465094.aspx
The general idea is that a small amount of temporary application data is provided on a per-app, per-user basis. The actual size you get is not detailed, but the guidance is pretty clear: app settings only, no large data sets, and don't use it for instant synchronization. Given this guidance, my plan is not a good one and will change.