I want to create a GCP service account that will have full access to a specific bucket (say my-bucket).
I can approximate this by granting my service account the roles/storage.objectAdmin role at the project level as follows:
gcloud iam service-accounts create my-sa \
--description="My Service Account" \
--display-name="my-sa"
and
gcloud projects add-iam-policy-binding my-project \
--member="serviceAccount:my-sa@my-project.iam.gserviceaccount.com" \
--role="roles/storage.objectAdmin"
but unless I am wrong, this gives very broad permissions to all the project's buckets.
Is there a way to provide full access to a specific bucket (and only this one) to a service account?
You can use gsutil iam ch:
gsutil iam ch \
serviceAccount:my-sa@my-project.iam.gserviceaccount.com:objectAdmin \
gs://my-bucket
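Depending on your gcloud version, there is also an equivalent gcloud command that edits the bucket's own IAM policy (a sketch; verify that your release includes gcloud storage buckets add-iam-policy-binding):
gcloud storage buckets add-iam-policy-binding gs://my-bucket \
--member="serviceAccount:my-sa@my-project.iam.gserviceaccount.com" \
--role="roles/storage.objectAdmin"
Because the binding lives on the bucket's policy rather than the project's, it applies only to my-bucket.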
You can also get the current IAM policy for your bucket:
gsutil iam get gs://my-bucket > iam.json
modify it to add the bindings you want, and then set the IAM policy with the updated file:
gsutil iam set iam.json gs://my-bucket
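For example, the binding you add to iam.json might look like this (a sketch; the role and member are the ones from the question, and the etag value comes from the get call):
{
  "bindings": [
    {
      "members": [
        "serviceAccount:my-sa@my-project.iam.gserviceaccount.com"
      ],
      "role": "roles/storage.objectAdmin"
    }
  ],
  "etag": "..."
}
Leave the etag returned by gsutil iam get as-is, so the update is applied against the policy version you read.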
I am following this tutorial (and the previous parts): https://learn.microsoft.com/en-us/training/modules/connect-an-app-to-azure-storage/9-initialize-the-storage-account-model?pivots=javascript to connect an application to the Azure Storage account.
At step 8, I verify the creation of the container by running the given Azure CLI command, replacing the placeholder with my storage account name:
az storage container list \
--account-name <name>
I get the following output:
There are no credentials provided in your command and environment, we will query for account key for your storage account.
It is recommended to provide --connection-string, --account-key or --sas-token in your command as credentials.
You also can add `--auth-mode login` in your command to use Azure Active Directory (Azure AD) for authorization if your login account is assigned required RBAC roles.
For more information about RBAC roles in storage, visit https://learn.microsoft.com/azure/storage/common/storage-auth-aad-rbac-cli.
In addition, setting the corresponding environment variables can avoid inputting credentials in your command. Please use --help to get more information about environment variable usage.
[]
I am not sure whether the [] at the end of the above output means the container is listed or the list is empty.
Comments and suggestions are welcome. Thanks!
The message you are getting is due to an auth issue.
There are three solutions. The first is to run the following command before running az storage container list:
az login
The other way is to use the --auth-mode option of az storage container list; this is suggested in the warning message itself. Command:
az storage container list --account-name <name> --auth-mode login
This will prompt you for login credentials; once they are provided, the containers in the account are listed.
Lastly, you can use the same --auth-mode option but with the account key, passed via --account-key:
az storage container list --account-name <name> --auth-mode key --account-key <key>
You can get your key from the portal under Access keys.
The output of the command is the same; here I have two containers named photos and test.
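If you prefer to stay in the CLI, you can retrieve the account key there as well (a sketch; the resource group name is an assumption):
az storage account keys list \
--account-name <name> \
--resource-group <resource-group> \
--query "[0].value" \
--output tsv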
I tried to reproduce this in my environment and got the same error:
There are no credentials provided in your command and environment,
we will query for account key for your storage account. It is
recommended to provide --connection-string, --account-key or
--sas-token in your command as credentials.
You also can add --auth-mode login in your command to use Azure
Active Directory (Azure AD) for authorization if your login account is
assigned required RBAC roles. For more information about RBAC roles in
storage, visit
https://learn.microsoft.com/azure/storage/common/storage-auth-aad-rbac-cli.
In addition, setting the corresponding environment variables can avoid
inputting credentials in your command. Please use --help to get more
information about environment variable usage. []
The above output shows that you didn't create any containers in your storage account; the [] at the end means the list is empty.
I created one container and added some files.
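For reference, a container can also be created from the CLI instead of the portal (a sketch; the names are placeholders):
az storage container create \
--account-name <name> \
--name <container> \
--auth-mode login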
I tried the same command again, and this time the container was listed successfully.
If you need to suppress the warning, you can add the --only-show-errors option to the command.
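For example, combined with the login auth mode:
az storage container list \
--account-name <name> \
--auth-mode login \
--only-show-errors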
Reference:
az storage container | Microsoft Learn
Is there a way to enable version-level immutability support and create a time-based retention policy at the Azure storage account level with Terraform? All I am trying to do is enable Access control under Data protection and set a time-based immutability policy. It seems simple, but I cannot find how to do that in TF.
From the documentation here, it appears to be a newly added feature?
It might take some time for this to be packaged into the azurerm provider. You can open a PR on their codebase or create a feature request to include it in azurerm_storage_blob.
In the meantime, you could use a local-exec provisioner on the above resource to execute Azure CLI commands.
resource "azurerm_storage_blob" "immutable-blob" {
  # ...

  provisioner "local-exec" {
    # A plain quoted string cannot span lines in HCL, so use a heredoc
    # for the multi-line command.
    command = <<-EOT
      az storage blob immutability-policy set \
        --expiry-time 2022-05-20T08:00:00Z \
        --policy-mode Unlocked \
        --container <container> \
        --name <blob-version> \
        --account-name <storage-account> \
        --auth-mode login
    EOT
  }
}
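To verify the policy after terraform apply, you could inspect the blob's properties (a sketch reusing the values from the command above; the exact property path for the immutability policy may vary by CLI version):
az storage blob show \
--container-name <container> \
--name <blob-version> \
--account-name <storage-account> \
--auth-mode login \
--query "properties.immutabilityPolicy"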
I am working on a shared AKS cluster, where multiple teams work on the same cluster and each team has its own ACR.
I want to find a way to allow each ACR to be pulled from its team's namespace only.
Currently, the only way I have thought of is an expensive one: using the ACR Premium tier to enable the scope-map feature and creating a token for authentication in the pull secret.
Alternatively, if someone knows how to pull an image using a service principal with the AcrPull role, please tell me. Thank you.
I found a solution that does not require changing the ACR pricing tier, using only a service principal to access the target ACR.
Solution
Create a service principal and assign it the AcrPull role, as sketched below.
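A minimal sketch of creating the service principal scoped to the registry (the registry and service principal names are placeholders):
ACR_ID=$(az acr show --name <container-registry-name> --query id --output tsv)
az ad sp create-for-rbac \
--name <service-principal-name> \
--role AcrPull \
--scopes $ACR_ID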
After that, create a Kubernetes secret in your namespace and reference it from your pods via imagePullSecrets:
kubectl create secret docker-registry <secret-name> \
--namespace <namespace> \
--docker-server=<container-registry-name>.azurecr.io \
--docker-username=<service-principal-ID> \
--docker-password=<service-principal-password>
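To avoid adding imagePullSecrets to every pod spec, you could also attach the secret to the namespace's default service account (a sketch):
kubectl patch serviceaccount default \
--namespace <namespace> \
-p '{"imagePullSecrets": [{"name": "<secret-name>"}]}'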
There are some directories in my StorageV2 (general purpose v2) account.
My role is Contributor. I want to give a user read/write access to the directory named '2000' and all its subdirectories. How can I do this? Is it possible with PowerShell?
It would be really nice if you could help.
You can do this with the Azure CLI, which can be run in PowerShell. You should be able to do this as a Contributor with the az role assignment create command.
Keep in mind that permissions granted with this command apply at the container scope. Data Lake Storage could be used to give folder-level permissions; Blob storage doesn't actually have folders, it is just displayed that way to make navigation easier.
az role assignment create \
--role "Storage Blob Data Contributor" \
--assignee <email> \
--scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>"
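The scope is the storage account's resource ID followed by the container path, so you can also build it from the account ID (a sketch; the resource group name is an assumption):
ACCOUNT_ID=$(az storage account show \
--name <storage-account> \
--resource-group <resource-group> \
--query id --output tsv)
az role assignment create \
--role "Storage Blob Data Contributor" \
--assignee <email> \
--scope "$ACCOUNT_ID/blobServices/default/containers/<container>"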
Hello everyone, I am trying to create an AKS cluster within my CI, but I am struggling with the required privileges.
I log in with a service principal, create a resource group, and then try to create the AKS cluster:
- az login --service-principal -u ${AZ_PRINC_USER} -p ${AZ_PRINC_PASSWORD} --tenant ${AZ_PRINC_TENANT}
- az group create --name ${AZ_RESOURCE_GROUP} --location ${AZ_RESOURCE_LOCATION}
- az aks create --name ${AZ_AKS_TEST_CLUSTER} --resource-group ${AZ_RESOURCE_GROUP} --node-count ${AZ_AKS_TEST_NODECOUNT} --generate-ssh-keys
However, the execution fails with an error:
ERROR: Directory permission is needed for the current user to register the application. For how to configure, please refer 'https://learn.microsoft.com/azure/azure-resource-manager/resource-group-create-service-principal-portal'. Original error: Insufficient privileges to complete the operation.
Any ideas what privileges are necessary? It seems there should be a Directory permission... but I am not really able to find and assign it.
You need to grant it permission to create applications in Azure AD (if you do not pre-create those). You also need to give it permission to create AKS clusters (Microsoft.ContainerService/managedClusters/write), and permission to assign roles (Microsoft.Authorization/roleAssignments/write) if you want to deploy into an existing subnet. This would be the bare minimum derived logically, but I have never tried it; you might not be able to do this with just these permissions.
You would probably also need some Microsoft.Network permissions.
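As a less granular but pragmatic starting point, an administrator could grant the service principal Contributor on the target resource group before az aks create runs; that should cover the resource-side permissions (a sketch; the variables follow the question, and the Azure AD application permission still has to be handled separately):
az role assignment create \
--assignee ${AZ_PRINC_USER} \
--role "Contributor" \
--scope "/subscriptions/<subscription-id>/resourceGroups/${AZ_RESOURCE_GROUP}"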