Azure App Service: App Service Logs blob retention not working

[Screenshot: App Service Logs settings]
I have configured my Azure App Service's App Service Logs settings as shown in the image attached above.
As expected, the logs are stored in Azure Blob Storage, but the log files are not deleted even after the retention period has elapsed.
Any solution would be helpful.

APPROACH-1:
Based on this MS Doc, it is possible with an Azure Blob Storage lifecycle management policy, which lets you:
Transition blobs from cool to hot immediately when they are accessed, to optimize for performance.
Transition blobs, blob versions, and blob snapshots to a cooler storage tier if these objects have not been accessed or modified for a period of time, to optimize for cost. In this scenario, the lifecycle management policy can move objects from hot to cool, from hot to archive, or from cool to archive.
Delete blobs, blob versions, and blob snapshots at the end of their lifecycles.
Define rules to be run once per day at the storage account level.
Apply rules to containers or to a subset of blobs, using name prefixes or blob index tags as filters.
A sketch of a delete-after-N-days rule follows.
APPROACH-2: We can use an Azure Logic App to delete files older than X days from Azure Blob Storage; a scripted alternative is sketched below.
For more information, please refer to this Microsoft documentation: Blob rehydration from the archive tier
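If a Logic App is not an option, the same cleanup can be scripted against the Blob SDK and run on any schedule (WebJob, cron, etc.). A minimal sketch, assuming a connection string in the `AZURE_STORAGE_CONNECTION_STRING` environment variable and a hypothetical container name `logs`:

```python
import os
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerClient  # pip install azure-storage-blob

RETENTION_DAYS = 7  # assumption: set this to your retention period

container = ContainerClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    container_name="logs",  # hypothetical container name
)

cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)

for blob in container.list_blobs():
    # last_modified is a timezone-aware datetime set by the service.
    if blob.last_modified < cutoff:
        container.delete_blob(blob.name)
        print(f"deleted {blob.name}")
```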

Related

Azure function storage file isn't deleted with function

When I create a new Azure Function, the specified storage account also gets logs and files such as host locks. On the Consumption plan, the storage account uses a File Share to store the whole function app by default.
When I delete my Azure Function, nothing is deleted in the storage account.
[Screenshot: storage account contents after the function app was deleted]
Is that correct for the Consumption plan?
Should I delete it manually?
On either a Consumption plan or an App Service plan, a function app requires a general-purpose Azure Storage account, which supports Azure Blob, Queue, Files, and Table storage. This is because Functions relies on Azure Storage for operations such as managing triggers and logging function executions, and some kinds of storage accounts do not support queues and tables.
The function app and its storage account are part of a resource group; if you don't delete the whole resource group, you have to delete each item separately (see the sketch below).
Reference:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-scale
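To illustrate the "delete the whole resource group" route, a minimal sketch using the Azure management SDK (assumes `pip install azure-identity azure-mgmt-resource` and that you are authenticated, e.g. via `az login`; all names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Deleting the resource group removes the function app AND its storage
# account (and everything else in the group) in one operation.
poller = client.resource_groups.begin_delete("my-function-rg")  # hypothetical name
poller.wait()
```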

Azure Storage and WebApps Relation

I have two web apps on separate plans, each with multiple large (P3) instances, and it says I get 250 GB of storage on P3.
I also have Azure Storage to store photos.
I want to know how Azure Storage relates to the web app plans; that is, if I reduce the web app to S3, where it's only 50 GB, how will that affect storage?
Also, do I get 50 GB for each instance or for the entire plan?
Thank you
Azure App Service plans represent the collection of physical resources used to host your apps.
App Service plans define:
Region (West US, East US, etc.)
Scale count (one, two, three instances, etc.)
Instance size (Small, Medium, Large)
SKU (Free, Shared, Basic, Standard, Premium)
If you scale down your App Service plan to S3, yes, you will get 50 GB of storage.
This storage holds all of your resources: deployment files, logs, etc.
You can only store data/files up to the storage available for the pricing tier you choose. To increase the storage, you can scale up your pricing tier.
Also, note that increasing/decreasing instances just increases/decreases the number of VM instances that run your app. All the instances share one pool of storage; you do not get individual storage per instance.
Before scaling based on instance count, consider that scaling is affected by the pricing tier in addition to the instance count. Different pricing tiers have different numbers of cores and amounts of memory, so they deliver better performance for the same number of instances (this is scaling up or scaling down).
For more details, you may refer to the Azure App Service plans in-depth overview and App Service pricing.
Hope this answers your questions.
App Service storage is completely different from Azure Storage (blobs/tables/queues).
App Service Storage
For a given tier size (e.g. S1), you get a specific amount of durable storage, shared across all instances of your web app. So, if you get 50GB for a given tier, and you have 5 instances, all 5 instances share that 50GB storage (and all see and use the same directories/files).
All files in your Web App's allocated storage are manipulated via standard file I/O operations.
App Service Storage is durable (meaning there's no single disk to fail, and you won't lose any info stored), until you delete your web app. Then all resources (including the allocated storage, in this example 50GB) are removed.
Azure Storage
Azure Storage, such as blobs, is managed completely independently of web apps. You must access each item in storage (a table, a queue, a blob / container) via REST or a language-specific SDK. A single blob can be as large as 4.75TB, far larger than the largest App Service plan's storage limit.
Unlike App Service / Web App storage, you cannot work with a blob via normal file I/O operations; as mentioned, you need to work via API/SDK. If, say, you needed to perform an operation on a blob (e.g. opening/manipulating a zip file), you would typically copy that blob down to working storage in your Web App instance (or VM, etc.), manipulate the file there, then upload the updated file back to blob storage, as sketched below.
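A minimal sketch of that copy-down/modify/upload-back pattern (container and blob names are hypothetical; assumes `azure-storage-blob` and a connection string in the environment):

```python
import os
import zipfile

from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    container_name="archives",  # hypothetical container
    blob_name="data.zip",       # hypothetical blob
)

# 1. Copy the blob down to local working storage (the Web App's file system).
local_path = os.path.join(os.environ.get("TEMP", "/tmp"), "data.zip")
with open(local_path, "wb") as f:
    blob.download_blob().readinto(f)

# 2. Manipulate the local file with normal file I/O (e.g., inspect the zip).
with zipfile.ZipFile(local_path) as z:
    print(z.namelist())

# 3. Upload the (possibly modified) file back to blob storage.
with open(local_path, "rb") as f:
    blob.upload_blob(f, overwrite=True)
```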
Azure Storage is durable (triple-replicated within a region), but has additional options for replication to secondary regions, and even further, allowing for read-only access to the secondary region. Azure Storage also supports additional features such as snapshots, public access to private blobs (through Shared Access Policies & Signatures), and global caching via CDN. Azure Storage will remain in place even if you delete your Web App.
Note: There is also Azure File Storage (backed by Azure Storage), which provides a 5TB file share and acts similarly to the file share provided by Web Apps. However, you cannot mount an Azure File Storage share with a Web App, though you can access it via API/SDK, as sketched below.
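Accessing an Azure File Storage share from code goes through the SDK rather than file I/O; a minimal sketch, assuming `pip install azure-storage-file-share` and hypothetical share/file names:

```python
import os

from azure.storage.fileshare import ShareFileClient

file_client = ShareFileClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    share_name="myshare",         # hypothetical share
    file_path="docs/readme.txt",  # hypothetical file
)

# Download the file's contents via the REST-backed SDK, not local file I/O.
data = file_client.download_file().readall()
print(data.decode("utf-8"))
```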

Azure Billing when Blobs are not present in storage

As we are shutting down our Azure service, last month we stopped and deleted the VMs and attached disks, so cloud services now displays no instances, and this in turn deleted the blobs inside the storage account. Do we get billed for the subscription this month? Any idea?
For an Azure storage account, your total cost depends on how much you store, the volume and type of storage transactions and outbound data transfers, and which data redundancy option you choose.
If you delete all the blobs in your storage account, you don't pay for that storage.
For more information about Azure Storage pricing, please refer to this link.

Delete blob in Azure after a certain time

Is it possible to make a blob auto-delete after a certain time?
I need to delete my blobs a few hours after they were uploaded to Azure; I don't need to store them for more than 10 days.
Not at this time, unfortunately. Using WebJobs or something similar, this could be accomplished on top of Azure Storage, but there is nothing offered by the platform itself.
Since March 2019, this is possible with Lifecycle management support in Azure Blob Storage. See https://stackoverflow.com/a/57305518/347805
Azure Blob storage lifecycle management offers a rich, rule-based policy for GPv2 and Blob storage accounts. Use the policy to transition your data to the appropriate access tiers, or to expire it at the end of the data's lifecycle.
The lifecycle management policy lets you:
Transition blobs to a cooler storage tier (hot to cool, hot to archive, or cool to archive) to optimize for performance and cost
Delete blobs at the end of their lifecycles
Define rules to be run once per day at the storage account level
Apply rules to containers or a subset of blobs (using prefixes as filters)
In short, it is NOT POSSIBLE to make a blob auto-delete after a certain time via any setting/configuration on the blob itself in Azure at this time.
You will need to rely on other services such as Azure WebJobs or Azure Automation to automate such a task; one possible shape of that automation is sketched below.
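For the WebJob/Automation route, any scheduler that can run a script works. As one hedged example, a timer-triggered Azure Function (Python v2 programming model) could host cleanup logic like the list-and-delete loop sketched under APPROACH-2 above; the schedule and function name here are assumptions:

```python
import azure.functions as func

app = func.FunctionApp()

# NCRONTAB schedule: run once an hour, at minute 0.
@app.schedule(schedule="0 0 * * * *", arg_name="timer", run_on_startup=False)
def purge_old_blobs(timer: func.TimerRequest) -> None:
    # Call cleanup logic here, e.g. list the container's blobs and delete
    # those whose last_modified timestamp is older than the retention window.
    pass
```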

How to achieve incremental deployment of Blob Storage files to different environments of Windows Azure storage?

We are new to Windows Azure and are developing a web application. At the beginning of the project, we deployed the complete code to different environments, which published the complete code and uploaded blob objects to Azure Storage, since we linked Sitefinity to hold its blob objects in Azure Storage. Now that we are in the middle of development, we only need to upload newly created blob files, which can be quite few in number (1 or 2, or maybe a handful). I would like to know the best process to sync these blob files to the different Azure Storage environments (one per cloud service). Ideally, we would like to update the staging cloud service and staging storage first, test there, and once no bugs are found, update the UAT and production storage accounts with the changed or new blob objects.
Please help.
You can use Azure Storage Explorer to manually upload/download blobs from storage accounts very easily. For one or two blobs this is an easy solution; otherwise, you will need to write a tool that connects to blob storage via an API and does the copying for you, as sketched below.
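A minimal sketch of such a copy tool, doing a server-side copy from a staging account to a production account (all account names, keys, and blob names are placeholders):

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import (
    BlobSasPermissions,
    BlobServiceClient,
    generate_blob_sas,
)

# Placeholder credentials/names for the two environments.
SRC_ACCOUNT, SRC_KEY = "stagingstorage", "<staging-account-key>"
DST_CONN_STR = "<production-connection-string>"
CONTAINER = "sitefinity-assets"  # hypothetical container

dst = BlobServiceClient.from_connection_string(DST_CONN_STR)

def promote(blob_name: str) -> None:
    """Server-side copy one blob from staging to production."""
    # A short-lived read SAS lets the destination account pull the source blob.
    sas = generate_blob_sas(
        account_name=SRC_ACCOUNT,
        container_name=CONTAINER,
        blob_name=blob_name,
        account_key=SRC_KEY,
        permission=BlobSasPermissions(read=True),
        expiry=datetime.now(timezone.utc) + timedelta(hours=1),
    )
    src_url = f"https://{SRC_ACCOUNT}.blob.core.windows.net/{CONTAINER}/{blob_name}?{sas}"
    dst.get_blob_client(CONTAINER, blob_name).start_copy_from_url(src_url)

promote("images/new-banner.png")  # hypothetical changed blob
```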
