Minimum permission to view live data - Azure Kubernetes

I have enabled Kubernetes RBAC authorization in all my Azure Kubernetes Service (AKS) clusters. Now I need to grant permissions for viewing live data in the Containers tab.
How can I do it? What is the minimum permission needed?
Thanks

As far as I understand from my investigation, if you want to do it using Azure Built-in roles, you need the following three roles at the very least:
Reader role assignment scoped to the AKS cluster to be able to discover the cluster
Azure Kubernetes Service Cluster User Role role assignment scoped to the AKS cluster as mentioned in the note atop the page in the docs. This is needed to allow access to Microsoft.ContainerService/managedClusters/listClusterUserCredential/action API call. This API call lists the cluster user credentials.
Log Analytics Contributor role assignment scoped to the Log Analytics workspace associated with the AKS cluster. This is needed to execute an Analytics query for data, i.e., to perform a /workspaces/{workspaceId}/query API call.
This should let one pull up the live data for containers. If you're not comfortable with this approach, you might also create a Custom Role allowing only those exact actions.
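For reference, here's a minimal Azure CLI sketch of those three assignments (the subscription, resource group, cluster, and workspace names are all placeholders):

# Reader and Cluster User on the AKS cluster itself
az role assignment create --assignee <user-object-id> --role "Reader" --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>"
az role assignment create --assignee <user-object-id> --role "Azure Kubernetes Service Cluster User Role" --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>"
# Log Analytics Contributor on the associated workspace
az role assignment create --assignee <user-object-id> --role "Log Analytics Contributor" --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"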
Hope this helps!

Related

Databricks best practice to manage resources belonging to a deleted user

Currently I have a problem with my Databricks workspace: when a user was deleted, it caused some issues:
Applications or scripts that use the tokens generated by the user will no longer be able to access the Databricks API
Jobs owned by the user will fail
Clusters owned by the user will stop
Queries or dashboards created by the user and shared using the Run as Owner credential will have to be assigned to a new owner to prevent sharing from failing
How can I keep these resources operating normally?
The main solution for this is to use service principals instead of user identities for all production clusters/jobs/SQL queries/dashboards/...
The Databricks documentation has a dedicated article about the use of service principals.
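As a rough sketch, assuming an Azure Databricks workspace: a service principal can be registered through the SCIM API (the workspace URL, token, and application ID below are placeholders), after which production jobs can be reassigned to it so they no longer depend on any individual user:

# register an Azure AD service principal in the Databricks workspace
curl -X POST https://<databricks-instance>/api/2.0/preview/scim/v2/ServicePrincipals \
  -H "Authorization: Bearer <personal-access-token>" \
  -H "Content-Type: application/scim+json" \
  -d '{"schemas": ["urn:ietf:params:scim:schemas:core:2.0:ServicePrincipal"], "applicationId": "<azure-ad-application-id>", "displayName": "prod-jobs-sp"}'

Jobs, clusters, and queries owned by the service principal keep running regardless of which human users come and go.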

Forward Azure Metrics for very specific resources to DataDog

This question may sound a little odd, but here goes: a customer of ours would like access to certain metrics of their environment of our product, which we host on Azure for them. It's a pretty complicated deployment, but in the end it consists of an Application Gateway, some virtual machines, and a dedicated Azure SQL database.
The customer now wants select metrics from this deployment forwarded to their own DataDog subscription, e.g. VM CPU metrics, database statistics, and the like. DataDog obviously supports all this information (which is good), but by default it would slurp in information from all resources within our subscription (which is not OK).
Is there a way to define, at a fine-grained level, which data is forwarded to DataDog, e.g. which resources and also which types of metrics to forward for each resource? What are my options here? Is it enough to create a service principal with limited read rights, or can I configure this somewhere else? I am unfortunately not familiar with DataDog.
The main thing that must be prevented is the customer gaining access, via the metrics forwarding, to other metrics in our subscription - we need to control the exact scope of the metrics.
The pretty straightforward solution to this is to create a service principal via the command line, and then assign the Monitoring Reader role to this service principal for exactly the resources you need. This even works down to the level of specific databases, for example.
Kicker: this is not possible to do at such granularity from the UI, but the az command line accepts assigning the Monitoring Reader permission at a deep resource ID level, even though the UI for this is not there. By finding the resource ID in the UI, and then using that resource ID from the command line, it's possible to achieve exactly this behaviour.
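A minimal sketch with the az CLI, assuming placeholder names and a single Azure SQL database as the scope:

# create the service principal (the name is hypothetical)
az ad sp create-for-rbac --name "datadog-metrics-reader"
# grant Monitoring Reader on exactly one database, nothing else
az role assignment create --assignee <appId-from-previous-output> --role "Monitoring Reader" --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Sql/servers/<server-name>/databases/<db-name>"

You then hand this service principal's credentials to the DataDog Azure integration; it can only read metrics from the resources covered by its role assignments.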

Azure created DefaultResourceGroup-EAU resource group

Today I noticed that Azure created a resource group named
"DefaultResourceGroup-EAU" - I don't know when - and two items are placed in this group.
I am not using any Azure Container Registry service or AKS. Should I remove this group, since it is being charged on my invoice? I only have Azure Web Apps, Azure SQL databases, and one VM. Will deleting it impact the services mentioned above?
Certainly not in terms of how those services function, but monitoring might be impaired if you delete them.
Those resources look like they were created alongside an AKS cluster. That doesn't mean they are only being used for that, but it is highly likely.
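Before deleting anything, it's worth listing what the group actually contains, e.g.:

az resource list --resource-group DefaultResourceGroup-EAU --output table

If it only holds a Log Analytics workspace (and perhaps a monitoring solution) left behind by a deleted cluster, removing the group stops the associated charges, but it also deletes any logs stored in that workspace.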

Azure Sentinel RBAC - Best Practice

We are in the process of implementing Sentinel with several data sources. What is the best way to do the RBAC?
You would just create the dashboard from Azure Sentinel and assign RBAC roles to it the same way you would with any other resource in Azure.
The quickstart guide covers it:
To create a new dashboard from scratch, select Dashboards and then +New dashboard.
Select the subscription the dashboard is created in and give it a descriptive name. Each dashboard is an Azure resource like any other, and you can assign it roles (RBAC) to define and limit who can access it.
To enable it to show up in your dashboards to pin visualizations to, you have to share it. Click Share and then Manage users.
Use the Check access and Role assignments as you would for any other Azure resource. For more information, see Share Azure dashboards by using RBAC.
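If you prefer the CLI, a shared dashboard is a Microsoft.Portal/dashboards resource, so a sketch of the assignment might look like this (all names are placeholders):

az role assignment create --assignee <analyst-object-id> --role "Reader" --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Portal/dashboards/<dashboard-name>"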
Let me know if this helps.
There are two parts to this. The first is using RBAC to secure your Azure Sentinel. This document has more information on that:
https://learn.microsoft.com/en-us/azure/sentinel/roles
The second part is to secure the logs within your Log Analytics workspace. This controls what information the users who have access to your Azure Sentinel can see. So if you want only certain Security Analysts to see your O365 logs, you can control that through table-level RBAC on the logs.
https://techcommunity.microsoft.com/t5/Azure-Sentinel/Table-Level-RBAC-In-Azure-Sentinel/ba-p/965043
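For example, a sketch of granting an analyst group read-only access at the workspace scope (names are placeholders; the role has since been renamed from Azure Sentinel Reader to Microsoft Sentinel Reader):

az role assignment create --assignee <analyst-group-object-id> --role "Microsoft Sentinel Reader" --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"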

Changing Permissions of Google Container Engine Cluster

I have been able to successfully create a Google Container Cluster in the Developers Console and have deployed my app to it. This all starts up fine; however, I find that I can't connect to Cloud SQL, I get:
"Error: Handshake inactivity timeout"
After a bit of digging, I hadn't had any trouble connecting to the Database from App Engine or my local machine so I thought this was a little strange. It was then I noticed the cluster permissions...
When I select my cluster I see the following:
Permissions
User info: Disabled
Compute: Read Write
Storage: Read Only
Task queue: Disabled
BigQuery: Disabled
Cloud SQL: Disabled
Cloud Datastore: Disabled
Cloud Logging: Write Only
Cloud Platform: Disabled
I was really hoping to use both Cloud Storage and Cloud SQL in my Container Engine nodes. I have allowed access to each of these APIs in my project settings, and my Cloud SQL instance is accepting connections from any IP (I've been running Node in a Managed VM on App Engine previously), so my thinking is that Google is explicitly disabling these APIs.
So my two-part question is:
Is there any way that I can modify these permissions?
Is there any good reason why these APIs are disabled? (I assume there must be)
Any help much appreciated!
With Node Pools, you can sort of add scopes to a running cluster by creating a new node pool with the scopes you want (and then deleting the old one):
gcloud container node-pools create np1 --cluster $CLUSTER --scopes $SCOPES
gcloud container node-pools delete default-pool --cluster $CLUSTER
The permissions are defined by the service accounts attached to your node VMs during cluster creation (service accounts can't be changed after a VM is instantiated, so this is the only time you can pick the permissions).
If you use the cloud console, click the "More" link on the create cluster page and you will see a list of permissions that you can add to the nodes in your cluster (all defaulting to off). Toggle any on that you'd like and you should see the appropriate permissions after your cluster is created.
If you use the command line to create your cluster, pass the --scopes flag to gcloud container clusters create to set the appropriate service account scopes on your node VMs.
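For example, a sketch that creates a cluster (the name is hypothetical) with Cloud SQL and read/write Storage scopes:

gcloud container clusters create my-cluster --scopes=https://www.googleapis.com/auth/sqlservice.admin,https://www.googleapis.com/auth/devstorage.read_write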
Hmm, I've found a couple of things that may be of interest:
Permissions belong to a service account (the so-called Compute Engine default service account, which looks like 12345566788-compute@developer.gserviceaccount.com).
Any VM works under this service account by default, and its permissions do not let us access Cloud SQL, buckets, and so on. But...
But you can change this behavior by using another service account with the right permissions. Just create it manually and set only the needed permissions. Switch it out using gcloud auth activate-service-account --key-file=/new-service-account-cred.json
That's it.
For Cloud SQL there's the possibility to connect from containers by specifying a proxy, as explained here: https://cloud.google.com/sql/docs/postgres/connect-container-engine
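The proxy typically runs as a sidecar container next to your app, which then connects to 127.0.0.1 instead of the instance's public IP. The sidecar essentially executes something like this (the instance connection name and credential path are placeholders):

# app connects to 127.0.0.1:5432; the proxy handles auth and encryption
./cloud_sql_proxy -instances=<project>:<region>:<instance>=tcp:5432 -credential_file=/secrets/cloudsql/credentials.json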
