Azure Redis Cache: get individual key statistics

I have an Azure Redis Cache configured through APIM, and I need to generate the following reports:
How many hits a specific key has received.
The data size of a specific key.
The last accessed time of a specific key.
If I can get these key details, will the reports still be available after a key is deleted?
Or, if getting these details from Redis is not possible, can I store them somewhere in a database?
Can anyone advise on this, please?
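For reference, Redis itself exposes some per-key metadata that covers parts of this. Below is a minimal redis-py sketch; the host, access key, and key name are placeholders, and Azure Cache for Redis uses TLS on port 6380:

    import redis

    r = redis.Redis(host="mycache.redis.cache.windows.net", port=6380,
                    password="<access-key>", ssl=True)

    key = "some:key"
    print(r.memory_usage(key))        # approximate size of the key in bytes
    print(r.object("idletime", key))  # seconds since last access (not tracked under LFU)

    # Per-key hit counts exist only as an approximate access frequency, and
    # only when maxmemory-policy is set to one of the LFU policies:
    # print(r.object("freq", key))

Note that this metadata is lost as soon as a key is deleted, so for historical reports you would need to sample these values on a schedule and persist them to a database, as you suggest.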


AWS QLDB - GDPR support

I am planning for AWS QLDB for audit data.
Does QLDB support GDPR? Is there any performance impact to this?
There are some fields encrypted with a custom encryption key before they are stored in QLDB. I might change the key down the line if it gets compromised, or as part of a key rotation policy. So I may need to read all the old records, decrypt them with the old key, re-encrypt them with the new key, and update them again. Is this possible with QLDB?
How do I do multi-tenancy with QLDB? For example, I have multiple apps writing audit data, and I would like a virtual separation for each app in the same cluster.
Thank you for the question; it touches on some of the concepts that are at the heart of QLDB.
Does QLDB support GDPR? Is there any performance impact to this?
The QLDB developer guide page on data protection may help provide more information about the AWS shared responsibility model. It may also be helpful to read this AWS blog post about the shared responsibility model and GDPR.
We are currently working on a feature that will allow customers to remove the customer data payload from QLDB revisions. Many customers have asked for this feature in order to accommodate the GDPR "right to be forgotten" requirement. Please be aware that this is not a claim of "compliance", as that is something you would need to evaluate independently. We do not anticipate this impacting read/write performance. If you're interested to know more, please reach out to AWS support and they'll connect you with our team to tell you more about it.
There are some fields encrypted with a custom encryption key before they are stored in QLDB. I might change the key down the line if it gets compromised, or as part of a key rotation policy. So I may need to read all the old records, decrypt them with the old key, re-encrypt them with the new key, and update them again. Is this possible with QLDB?
Reading all the old records is possible in QLDB through a few different methods — querying revision history, exporting journal data, or streaming journal data.
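As an illustration of the first method, here is a minimal sketch with the Python QLDB driver (pyqldb); the ledger and table names are hypothetical:

    from pyqldb.driver.qldb_driver import QldbDriver

    driver = QldbDriver(ledger_name="audit-ledger")  # hypothetical ledger

    def read_all_revisions(txn):
        # history() returns every revision of every document in the table,
        # including the old versions that would need re-encryption.
        cursor = txn.execute_statement("SELECT * FROM history(audit_records)")
        return list(cursor)

    revisions = driver.execute_lambda(read_all_revisions)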
However, it is worth noting that QLDB does provide encryption at rest via KMS. You can leverage KMS for key rotations or key expiry as well, and you'll be able to access the old data with the new key via KMS's key hierarchy. KMS allows you to rotate keys without the need to re-encrypt all your data.
How do I do multi-tenancy with QLDB? For example, I have multiple apps writing audit data, and I would like a virtual separation for each app in the same cluster.
There are a few potential ways to go about this, which ultimately may depend on your use case(s). Within a single ledger you could leverage attributes in each document to differentiate between tenants, as sketched below. You could leverage multiple ledgers in QLDB in a single account within the default quota. It may also be the case that you want even more separation and may consider creating multiple accounts and leveraging something like AWS Control Tower.
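A minimal sketch of the single-ledger option, again with pyqldb and hypothetical table and field names; each document carries a tenant identifier that queries filter on:

    from pyqldb.driver.qldb_driver import QldbDriver

    driver = QldbDriver(ledger_name="audit-ledger")  # hypothetical ledger

    def insert_event(txn, tenant_id, action):
        # Every document is tagged with its tenant, giving a virtual
        # separation between apps within one ledger.
        txn.execute_statement("INSERT INTO audit_events ?",
                              {"tenant_id": tenant_id, "action": action})

    def events_for_tenant(txn, tenant_id):
        cursor = txn.execute_statement(
            "SELECT * FROM audit_events WHERE tenant_id = ?", tenant_id)
        return list(cursor)

    driver.execute_lambda(lambda txn: insert_event(txn, "app-a", "login"))
    rows = driver.execute_lambda(lambda txn: events_for_tenant(txn, "app-a"))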
All that said, the best approach could depend very heavily on your use-case(s), as well as other AWS products that you’re using. You may want to reach out to AWS support on this as well to potentially connect with the relevant Solutions Architect who could consult on approaches, given your specific use-case(s).

Can I cache a connection secret in an ADF pipeline instead of hitting AKV from every activity?

Currently I am using a parameterized linked service to connect to AKV and then retrieve the connection secret. But with many pipelines and activities, we are facing throttling issues on the AKV side. We want to limit the number of hits to AKV and cache/store the retrieved secret somewhere within the ADF pipeline, but I do not see any option to do so. Please advise.
Absolutely you can. I recently worked with a massive retail customer who hit a similar issue, so they used a Java cache class that cached the relevant data from AKV (a payment token, in their case) for one hour; at the end of the hour it pulled the data in from AKV again. The cache time was configurable; they were not bound to one hour.
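The customer's implementation was in Java, but the pattern itself is simple. A minimal sketch in Python (the vault URL and TTL are placeholders) that only calls AKV when the cached entry has expired:

    import time
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    client = SecretClient(vault_url="https://my-vault.vault.azure.net",
                          credential=DefaultAzureCredential())

    _cache = {}                # secret name -> (value, expiry timestamp)
    CACHE_TTL_SECONDS = 3600   # configurable, as in the case above

    def get_secret_cached(name):
        value, expiry = _cache.get(name, (None, 0.0))
        if time.time() < expiry:
            return value                    # served from cache, no AKV call
        secret = client.get_secret(name)    # one AKV hit per TTL window
        _cache[name] = (secret.value, time.time() + CACHE_TTL_SECONDS)
        return secret.value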

Azure Table Storage for housing Application Configuration

I am exploring Azure functionality and am wondering if Azure Table Storage can be an easy way to hold application configuration for an entire environment. It would be easy to see and change (adding list values, etc.). Can someone please guide me on whether this is a good idea? I would expect this table to hold no more than 2000 rows if all our applications were moved over to Azure.
Partition Key --> Project Name + Component Name (Azure Function/Logic App)
Row Key --> Parameter Key
Value column --> Parameter Value
For securing passwords/keys, I can use Azure Key Vault.
There are different ways of storing application configurations:
Key Vault (as you stated) for sensitive information, e.g. tokens, keys, connection strings. It can be standardized and extended to any type of resource for ease of storing and retrieving these.
Application Settings, found under each App Service. This approach assumes you have an App Service for each of your apps.
Release pipeline variables, such as in Azure DevOps Services (AzDo). AzDo has variables that can be global to the release pipeline or specific to each stage.
I am exploring Azure functionality and am wondering if Azure Table Storage can be an easy way for holding application configuration for an entire environment. It would be easy to see and change (adding list values etc.). Can someone please guide me on whether this is a good idea?
Considering Azure Tables is a key/value pair store, it is certainly a good idea to store application configuration values there. The only thing I would recommend is that you incorporate some kind of caching layer between your application and Table Storage so that you don't end up making calls to Table Storage every time you need to fetch a setting.
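A minimal sketch of that design using the azure-data-tables client with a small in-process cache (the connection string, table name, and entity names are placeholders):

    import time
    from azure.data.tables import TableClient

    table = TableClient.from_connection_string("<connection-string>", "AppConfig")

    _cache = {}  # (PartitionKey, RowKey) -> (value, expiry timestamp)

    def get_setting(project_component, parameter, ttl=300):
        cache_key = (project_component, parameter)
        value, expiry = _cache.get(cache_key, (None, 0.0))
        if time.time() < expiry:
            return value
        entity = table.get_entity(partition_key=project_component,
                                  row_key=parameter)  # point query: fastest path
        _cache[cache_key] = (entity["Value"], time.time() + ttl)
        return entity["Value"]

    # e.g. get_setting("Billing-FunctionApp", "MaxRetries")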
I would expect this table to hold no more than 2000 rows if all our applications were moved over to Azure.
Considering the number of entities is going to be less than 2000, querying them should not be a concern, and your proposed design looks good. For best performance, please ensure that you include both PartitionKey and RowKey when querying. At the very least, include PartitionKey in your query.
Please see this for more details: https://learn.microsoft.com/en-us/azure/cosmos-db/table-storage-design-guide.
For securing passwords/keys, I can use Azure Key Vault.
That's the way to go for storing sensitive data in Azure.
Have you looked at the App Configuration service?
There are client libraries in .NET, Java, TypeScript and Python to interact with the service that you can leverage in your application.
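For example, a minimal sketch with the Python client (azure-appconfiguration); the connection string and key name are placeholders:

    from azure.appconfiguration import AzureAppConfigurationClient

    client = AzureAppConfigurationClient.from_connection_string("<connection-string>")

    # Fetch a single setting; the returned ConfigurationSetting exposes
    # .key and .value.
    setting = client.get_configuration_setting(key="Billing:MaxRetries")
    print(setting.value)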

How can I cache data from Azure Key Vault?

I want to use Azure Key Vault for my PaaS application. Is there any way to cache the data instead of making calls to Key Vault every time I need to retrieve a key?
Here is a code sample to cache and proxy secrets, keys, and certificates from Azure Key Vault.
Links:
https://learn.microsoft.com/en-us/samples/azure/azure-sdk-for-net/azure-key-vault-proxy/ or
https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/keyvault/samples/keyvaultproxy/src
It is a fairly clean approach.
Yes, any of the standard caching mechanisms will still work.
On first request, your app will look in the cache first and won't find the value, so it will call Key Vault for it. You then store the value in the cache so that the next time your application needs it, it is retrieved from the cache.
You could cache in memory or, ideally, in something out of process, such as Redis.
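A minimal cache-aside sketch along those lines, using Redis as the out-of-process cache (the vault URL, Redis host, and TTL are placeholders; note that anything cached outside Key Vault should itself be protected):

    import redis
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    kv = SecretClient(vault_url="https://my-vault.vault.azure.net",
                      credential=DefaultAzureCredential())
    cache = redis.Redis(host="localhost", port=6379)

    def get_secret(name, ttl=3600):
        cached = cache.get(name)
        if cached is not None:
            return cached.decode()           # cache hit: no Key Vault call
        value = kv.get_secret(name).value    # cache miss: fetch from Key Vault
        cache.setex(name, ttl, value)        # store with an expiry for next time
        return value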

Azure Notification Hubs shared access key expiry

From the Azure Portal, on the Configure Tab for a notification hub I am able to generate a primary key and secondary key. I understand these are required to gain programmatic access to the Azure API - allowing my client app to create registrations and send messages.
Could anyone please explain:
Why are there two keys (primary and secondary)?
Do the keys generated from this UI expire and if so how long do they live before expiry?
They don't expire. The reason there are two is that it's recommended you regenerate the keys periodically for security reasons. For example, suppose your application is using the primary key today. If you regenerated the primary key, your application would be broken until you could update it, resulting in downtime. Instead, you can first change your application to use the secondary key, with little or no downtime. Then, after your application has been updated, you can regenerate the primary key. Next month (or on whatever schedule you like), you can repeat the process, switching back to the primary key and regenerating the secondary key.
This is not unique to Notification Hubs. You will see primary and secondary keys in other services such as Storage and Media Services. The idea is the same.
