Unable to get storage account key with Azure Fluent - azure

I created a storage account with the Azure Fluent SDK. After creating it, I wanted to get the name and key to build the connection string I can use to access the storage account. The problem is that the 'Key' property is a Guid, not the key shown in the Azure Portal.
This is how I create the storage account.
IStorageAccount storage = azure.StorageAccounts.Define(storageAccountName)
.WithRegion(Region.USEast)
.WithNewResourceGroup(rgName)
.Create();
How can I get the proper Key to build the connection string?

You should be able to do it with the code below; this documentation also shows the use of Fluent, but only covers the auth methods:
// Get a storage account
var storage = azure.StorageAccounts.GetByResourceGroup("myResourceGroup", "myStorageAccount");
// Extract the keys
var storageKeys = storage.GetKeys();
// Build the connection string
string storageConnectionString = "DefaultEndpointsProtocol=https;"
+ "AccountName=" + storage.Name
+ ";AccountKey=" + storageKeys[0].Value
+ ";EndpointSuffix=core.windows.net";
// Connect
var account = CloudStorageAccount.Parse(storageConnectionString);
// Do things with the account here...
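For instance, continuing from the account variable above, a minimal sketch (assuming the classic WindowsAzure.Storage / Microsoft.Azure.Storage client package and a hypothetical container name) of using the parsed account:
// Create a blob client from the parsed account and make sure a container exists.
var blobClient = account.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("my-container"); // hypothetical name
container.CreateIfNotExists();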

Related

How to connect to Azure Table Storage using RBAC

I want to connect to Azure Table Storage using RBAC. I have done the role assignment in the portal, but I could not find a way to connect to Azure Table Storage from .NET code. I could find a lot of documentation on how to connect with BlobClient:
static void CreateBlobContainer(string accountName, string containerName)
{
// Construct the blob container endpoint from the arguments.
string containerEndpoint = string.Format("https://{0}.blob.core.windows.net/{1}",
accountName,
containerName);
// Get a token credential and create a service client object for the blob container.
BlobContainerClient containerClient = new BlobContainerClient(new Uri(containerEndpoint),
new DefaultAzureCredential());
// Create the container if it does not exist.
containerClient.CreateIfNotExists();
}
But I could not find similar documentation for Azure Table access.
Has anyone done this before?
The patterns for authorizing with Azure Active Directory and other token sources for the current generation of the Azure SDK are based on credentials from the Azure.Identity package.
The rough equivalent to the snippet you shared would look like the following for Tables:
// Construct a new TableClient using a TokenCredential.
var client = new TableClient(
new Uri(storageUri),
tableName,
new DefaultAzureCredential());
// Create the table if it doesn't already exist to verify we've successfully authenticated.
await client.CreateIfNotExistsAsync();
More information can be found in the Azure Tables authorization sample and the Azure.Identity library overview.
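Once authenticated, a minimal usage sketch (assuming the Azure.Data.Tables package, a hypothetical account endpoint, table name, and entity, and that the identity holds a data role such as Storage Table Data Contributor) to verify access could look like this:
using Azure.Data.Tables;
using Azure.Identity;
// Hypothetical table endpoint and table name.
var client = new TableClient(
    new Uri("https://myaccount.table.core.windows.net"),
    "mytable",
    new DefaultAzureCredential());
// Insert a simple entity to confirm the role assignment grants data access.
var entity = new TableEntity("partition1", "row1")
{
    { "Product", "Marker Set" }
};
await client.AddEntityAsync(entity);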

Copy Blob in Azure Blob Storage using Java v12 SDK

My application is in a Kubernetes cluster and I'm using the Java v12 SDK to interact with Blob Storage. To authorize against Blob Storage, I'm using Managed Identities.
My application needs to copy blobs within one container. I haven't found any particular recommendations or examples of how the SDK should be used to do the copy.
I figured out that the following approach works when I'm working with the emulator:
copyBlobClient.copyFromUrl(sourceBlobClient.getBlobUrl());
However, when this gets executed in the cluster I get the following error
<Error>
<Code>CannotVerifyCopySource</Code>
<Message>The specified resource does not exist. RequestId: __ Time: __ </Message>
</Error>
The message says "resource does not exist", but the blob is clearly there. My container has private access, though.
Now when I change the public access level to "Blob (anonymous read access for blobs only)", everything works as expected. However, public access is not acceptable to me.
Main question - what is the right way to implement blob copy using the Java v12 SDK?
What could I have missed or misconfigured in my situation?
And the last is the error message itself. There is a part which says "CannotVerifyCopySource", which kind of helps you understand that there is something wrong with access, but the message part is clearly misleading. Shouldn't it be more explicit about the error?
If you want to use the Azure Java SDK to copy blobs with Azure MSI, please refer to the following details.
Copy blobs between storage accounts
If you copy blobs between storage accounts with Azure MSI, we should do the following:
Assign the Storage Blob Data Reader role to the MSI on the source container.
Assign the Storage Blob Data Contributor role to the MSI on the destination container. Besides, when we copy a blob, we need write permissions to write content to it.
Generate a SAS token for the source blob. If the source blob is public, we can use the source blob URL directly, without a SAS token.
For example
try {
BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
.endpoint("https://<>.blob.core.windows.net/" )
.credential(new DefaultAzureCredentialBuilder().build())
.buildClient();
// get User Delegation Key
OffsetDateTime delegationKeyStartTime = OffsetDateTime.now();
OffsetDateTime delegationKeyExpiryTime = OffsetDateTime.now().plusDays(7);
UserDelegationKey key = blobServiceClient.getUserDelegationKey(delegationKeyStartTime, delegationKeyExpiryTime);
BlobContainerClient sourceContainerClient = blobServiceClient.getBlobContainerClient("test");
BlobClient sourceBlob = sourceContainerClient.getBlobClient("test.mp3");
// generate sas token
OffsetDateTime expiryTime = OffsetDateTime.now().plusDays(1);
BlobSasPermission permission = new BlobSasPermission().setReadPermission(true);
BlobServiceSasSignatureValues myValues = new BlobServiceSasSignatureValues(expiryTime, permission)
.setStartTime(OffsetDateTime.now());
String sas = sourceBlob.generateUserDelegationSas(myValues, key);
// copy
BlobServiceClient desServiceClient = new BlobServiceClientBuilder()
.endpoint("https://<>.blob.core.windows.net/" )
.credential(new DefaultAzureCredentialBuilder().build())
.buildClient();
BlobContainerClient desContainerClient = desServiceClient.getBlobContainerClient("test");
String res = desContainerClient.getBlobClient("test.mp3")
.copyFromUrl(sourceBlob.getBlobUrl()+"?"+sas);
System.out.println(res);
} catch (Exception e) {
e.printStackTrace();
}
Copy in the same account
If you copy blobs within the same storage account with Azure MSI, I suggest you assign the Storage Blob Data Contributor role to the MSI on the storage account. Then we can do the copy with the copyFromUrl method.
For example
a. Assign Storage Blob Data Contributor to the MSI at the account level
b. code
try {
BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
.endpoint("https://<>.blob.core.windows.net/" )
.credential(new DefaultAzureCredentialBuilder().build())
.buildClient();
BlobContainerClient sourceContainerClient = blobServiceClient.getBlobContainerClient("test");
BlobClient sourceBlob = sourceContainerClient.getBlobClient("test.mp3");
BlobContainerClient desContainerClient = blobServiceClient.getBlobContainerClient("output");
String res = desContainerClient.getBlobClient("test.mp3")
.copyFromUrl(sourceBlob.getBlobUrl());
System.out.println(res);
} catch (Exception e) {
e.printStackTrace();
}
For more details, please refer to here and here
I had the same issue using the Java SDK for Azure. I solved it by copying the blob using the URL plus a SAS token. Actually, the resource you're accessing through the URL won't appear as available if you don't have the right access to it. Here is the code I used to solve the problem:
BlobClient sourceBlobClient = blobServiceClient
.getBlobContainerClient(currentBucketName)
.getBlobClient(sourceKey);
// initializing the copy blob client
BlobClient copyBlobClient = blobServiceClient
.getBlobContainerClient(newBucketName)
.getBlobClient(newKey);
// Creating the SAS Token to get the permission to copy the source blob
OffsetDateTime expiryTime = OffsetDateTime.now().plusDays(1);
BlobSasPermission permission = new BlobSasPermission().setReadPermission(true);
BlobServiceSasSignatureValues values = new BlobServiceSasSignatureValues(expiryTime, permission)
.setStartTime(OffsetDateTime.now());
String sasToken = sourceBlobClient.generateSas(values);
// Making the copy using the source blob URL + the SAS token
var res = copyBlobClient.copyFromUrl(sourceBlobClient.getBlobUrl() + "?" + sasToken);
Perhaps another way is to use the streaming API to download and upload the data. In our company, we are not allowed to generate SAS tokens on our storage account for security reasons, so we use the following to copy from an append blob to a block blob (overwriting):
BlobAsyncClient src;
BlobAsyncClient dest;
//...
AppendBlobAsyncClient srcAppend = src.getAppendBlobAsyncClient();
Flux<ByteBuffer> streamData = srcAppend.downloadStream();
Mono<BlockBlobItem> uploaded = dest.upload(streamData, new ParallelTransferOptions(), true);
This returns a Mono<BlockBlobItem> and you need to subscribe to it to start the process. If used in a non-reactive context, perhaps the easiest way is to block().
Note that this will only copy the data; additional work is needed if you also need to copy the metadata and tags. For tags, there is BlobAsyncClientBase.getTags(). For metadata, there is BlobAsyncClientBase.getProperties(). You can get the tags and metadata from the source and apply the same to the destination.

Azure Blob Storage "Authorization Permission Mismatch" error for get request with AD token

I am building an Angular 6 application that will be able to perform CRUD operations on Azure Blob Storage. However, I'm using Postman to test requests before implementing them inside the app, copy-pasting the token that I get from Angular for that resource.
When trying to read a file that I have inside the storage for test purposes, I'm getting: <Code>AuthorizationPermissionMismatch</Code>
<Message>This request is not authorized to perform this operation using this permission.</Message>
All in production environment (although developing)
Token acquired specifically for storage resource via Oauth
Postman has the token strategy as "bearer "
Application has "Azure Storage" delegated permissions granted.
Both the app and the account I'm acquiring the token with are added as "Owners" in Azure Access Control (IAM)
My IP is added to CORS settings on the blob storage.
StorageV2 (general purpose v2) - Standard - Hot
x-ms-version header used is: 2018-03-28 because that's the latest I could find and I just created the storage account.
I found it's not enough for the app and account to be added as owners. I would go into your storage account > IAM > Add role assignment, and add the special permissions for this type of request:
Storage Blob Data Contributor
Storage Queue Data Contributor
Make sure to use Storage Blob Data Contributor and NOT Storage Account Contributor; the latter is only for managing the actual storage account, not the data in it.
I've just solved this by changing the resource requested in the GetAccessTokenAsync method from "https://storage.azure.com" to the URL of my storage blob, as in this snippet:
public async Task<StorageCredentials> CreateStorageCredentialsAsync()
{
var provider = new AzureServiceTokenProvider();
var token = await provider.GetAccessTokenAsync(AzureStorageContainerUrl);
var tokenCredential = new TokenCredential(token);
var storageCredentials = new StorageCredentials(tokenCredential);
return storageCredentials;
}
where AzureStorageContainerUrl is set to https://xxxxxxxxx.blob.core.windows.net/
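For completeness, a minimal sketch (assuming the Microsoft.Azure.Storage.Blob package and hypothetical container and blob names, inside an async method) of using the returned credentials:
// Build a blob client from the token-based credentials and read a blob.
var credentials = await CreateStorageCredentialsAsync();
var blobClient = new CloudBlobClient(new Uri(AzureStorageContainerUrl), credentials);
var container = blobClient.GetContainerReference("my-container"); // hypothetical
var blob = container.GetBlockBlobReference("my-file.txt"); // hypothetical
string contents = await blob.DownloadTextAsync();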
Be aware that if you want to apply "STORAGE BLOB DATA XXXX" role at the subscription level it will not work if your subscription has Azure DataBricks namespaces:
If your subscription includes an Azure DataBricks namespace, roles assigned at the subscription scope will be blocked from granting access to blob and queue data.
Source: https://learn.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-portal#determine-resource-scope
I used the following to connect to blob storage using Azure AD:
This code uses SDK V11, since V12 still has issues with multiple AD accounts.
See this issue
https://github.com/Azure/azure-sdk-for-net/issues/8658
For further reading on V12 and V11 SDK
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-dotnet-legacy
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-dotnet
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Azure.Storage.Auth;
using Microsoft.Azure.Storage.Blob;
using Microsoft.Azure.Storage.Queue;
[Fact]
public async Task TestStreamToContainer()
{
try
{
var accountName = "YourStorageAccountName";
var containerName = "YourContainerName";
var blobName = "File1";
var provider = new AzureServiceTokenProvider();
var token = await provider.GetAccessTokenAsync($"https://{accountName}.blob.core.windows.net");
var tokenCredential = new TokenCredential(token);
var storageCredentials = new StorageCredentials(tokenCredential);
string containerEndpoint = $"https://{accountName}.blob.core.windows.net";
var blobClient = new CloudBlobClient(new Uri(containerEndpoint), storageCredentials);
var containerClient = blobClient.GetContainerReference(containerName);
var cloudBlob = containerClient.GetBlockBlobReference(blobName);
string blobContents = "This is a block blob contents.";
byte[] byteArray = Encoding.ASCII.GetBytes(blobContents);
using (MemoryStream stream = new MemoryStream(byteArray))
{
await cloudBlob.UploadFromStreamAsync(stream);
}
}
catch (Exception e)
{
Console.WriteLine(e.Message);
Console.ReadLine();
throw;
}
}

Azure blob returns 403 forbidden from Azure function running on portal

I've read several posts regarding similar queries, like this one, but I keep getting 403.
Initially I wrote the code in Visual Studio - an Azure Function accessing a storage blob - and everything runs fine. But when I deploy the very same function, it throws 403! I tried the suggested fixes, moving to x64 etc. and removing additional files, but nothing works.
Please note - I have verified several times - the access key is correct and valid.
So, I did all the following
(1) - I wrote a simple Azure function on the portal itself (to rule out deployment quirks), and voila, same 403!
var storageConnection = "DefaultEndpointsProtocol=https;AccountName=[name];AccountKey=[key1];EndpointSuffix=core.windows.net";
var cloudStorageAccount = CloudStorageAccount.Parse(storageConnection);
var blobClient = cloudStorageAccount.CreateCloudBlobClient();
var sourceContainer = blobClient.GetContainerReference("landing");
CloudBlockBlob blob = sourceContainer.GetBlockBlobReference("a.xlsx");
using (var inputStream = new MemoryStream())
{
log.Info($"Current DateTime: {DateTime.Now}");
log.Info("Starting download of blob...");
blob.DownloadToStream(inputStream); // <--- 403 thrown here!!
log.Info("Download Complete!");
}
(2) - I verified the date and time by logging it, and it's UTC on the function server
(3) - I used an account SAS key, generated on the portal, but it still gives 403. I had waited for over 30 seconds after the SAS key generation, to ensure that the SAS key propagates.
var sasUri = "https://[storageAccount].blob.core.windows.net/?sv=2017-11-09&ss=b&srt=sco&sp=rwdlac&se=2019-07-31T13:08:46Z&st=2018-09-01T03:08:46Z&spr=https&sig=Hm6pA7bNEe8zjqVelis2y842rY%2BGZg5CV4KLn288rCg%3D";
StorageCredentials accountSAS = new StorageCredentials(sasUri);
var cloudStorageAccount = new CloudStorageAccount(accountSAS, "[storageAccount]", endpointSuffix: null, useHttps: true);
// rest of the code same as (1)
(4) - I generated the SAS key on the fly in code, but again 403.
static string GetContainerSasUri(CloudBlobContainer container)
{
//Set the expiry time and permissions for the container.
//In this case no start time is specified, so the shared access signature becomes valid immediately.
SharedAccessBlobPolicy sasConstraints = new SharedAccessBlobPolicy();
sasConstraints.SharedAccessStartTime = DateTimeOffset.UtcNow.AddMinutes(-5);
sasConstraints.SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(25);
sasConstraints.Permissions = SharedAccessBlobPermissions.Write | SharedAccessBlobPermissions.Add | SharedAccessBlobPermissions.Create;
//Generate the shared access signature on the container, setting the constraints directly on the signature.
string sasContainerToken = container.GetSharedAccessSignature(sasConstraints);
//Return the URI string for the container, including the SAS token.
return container.Uri + sasContainerToken + "&comp=list&restype=container";
}
and used the above as
var sourceContainer = blobClient.GetContainerReference("landing");
var sasKey = GetContainerSasUri(sourceContainer);
var container = new CloudBlobContainer(new Uri(sasKey));
CloudBlockBlob blob = container.GetBlockBlobReference("a.xlsx");
I completely fail to understand why the code works flawlessly when running from Visual Studio, accessing the storage (not the emulator) in the cloud, but fails when the same code is either deployed or run explicitly on the portal.
What am I missing here?
Since you have excluded many possible causes, the only way I can reproduce your problem is to configure a firewall on the storage account.
Locally the code works because you may have added your local IP to the whitelist, while this step was omitted for the Function. On the portal, go to Resource Explorer under Platform features, search for outboundIpAddresses, and add those (usually four) IPs to the storage account whitelist.
If you have added the Function IPs but still get the 403 error, check the locations of the storage account and the Function app. If they live in the same region (like both in Central US), the two communicate internally without going through the outboundIpAddresses. The workaround I can offer is to create the storage account in a different region if the firewall is necessary in your plan. Otherwise, just allow all networks to the storage account.

Is there a limit on the number of Azure Blob Storage SAS keys generated per hour?

According to this article detailing the limits of Azure Storage, there is a limit on the number of Azure Resource Manager requests that can be made.
Additionally, this article details limits of the ARM API. A post here claims they ran into issues running a list operation after making too many requests.
My question is, is there a limit on number of SAS Keys generated per hour for blob storage? Is creating a SAS Key an ARM event?
For example, if I'm using the Python Azure Storage SDK and I attempt to create 160,000 SAS keys in one hour for various blobs (files in containers in storage accounts), will I be throttled or stopped?
My application depends on these keys for allowing microservices access to protected data, but I cannot scale this application if I cannot create a large number of SAS keys in a short period of time.
Creating a SAS token does not interact with the Azure API at all.
From Using shared access signatures:
The SAS token is a string you generate on the client side. A SAS token you generate with the storage client library, for example, is not tracked by Azure Storage in any way. You can create an unlimited number of SAS tokens on the client side.
I couldn't find code for building a Storage SAS token, but the principle is similar to the following (from here):
private static string createToken(string resourceUri, string keyName, string key)
{
TimeSpan sinceEpoch = DateTime.UtcNow - new DateTime(1970, 1, 1);
var week = 60 * 60 * 24 * 7;
var expiry = Convert.ToString((int)sinceEpoch.TotalSeconds + week);
string stringToSign = HttpUtility.UrlEncode(resourceUri) + "\n" + expiry;
HMACSHA256 hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key));
var signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
var sasToken = String.Format(CultureInfo.InvariantCulture, "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}", HttpUtility.UrlEncode(resourceUri), HttpUtility.UrlEncode(signature), expiry, keyName);
return sasToken;
}
Basically, a SAS token is a hash computed from the storage credentials, locked down to provide a subset of services and permissions. You can create as many as you require without any interaction with the Azure API.
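As a concrete illustration for blob storage, here is a minimal sketch (assuming the classic Microsoft.Azure.Storage packages and hypothetical connection string, container, and blob names) of generating a service SAS entirely on the client, using the same GetSharedAccessSignature approach shown in the previous question:
using System;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;
// Parse the account from its connection string (hypothetical placeholder).
var account = CloudStorageAccount.Parse("<storage-connection-string>");
var blobClient = account.CreateCloudBlobClient();
var blob = blobClient.GetContainerReference("my-container") // hypothetical
    .GetBlockBlobReference("my-file.txt"); // hypothetical
// Define read-only access for one hour; the signature is computed locally
// with the account key, so no call is made to the Azure API.
var policy = new SharedAccessBlobPolicy
{
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1),
    Permissions = SharedAccessBlobPermissions.Read
};
string sasToken = blob.GetSharedAccessSignature(policy);
string sasUrl = blob.Uri + sasToken;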
