Creating a container under a private storage account in Azure

I have a storage account created for an AKS cluster, which is configured with a private endpoint. Public access is denied on it.
There is a client service installed in the same network as the cluster, which is trying to create a container within this storage account.
Here is the code snippet:
c, err := azblob.NewSharedKeyCredential(accountName, accountKey)
if err != nil {
    return azblob.ContainerURL{}, err
}
p := azblob.NewPipeline(c, azblob.PipelineOptions{
    Telemetry: azblob.TelemetryOptions{
        Value: "test-me",
    },
})
u, err := url.Parse(fmt.Sprintf(blobFormatString, accountName))
if err != nil {
    return azblob.ContainerURL{}, err
}
service := azblob.NewServiceURL(*u, p)
container := service.NewContainerURL(containerName)
c, err := GetContainerURL(a.Log, ctx, a.SubscriptionID, a.ClientID, a.ClientSecret, a.TenantID, a.StorageAccount, accountKey, a.ResourceGroup, a.Bucket)
if err != nil {
    return err
}
_, err = c.GetProperties(ctx, azblob.LeaseAccessConditions{})
if err != nil {
    if strings.Contains(err.Error(), "ContainerNotFound") {
        _, err = c.Create(
            ctx,
            azblob.Metadata{},
            azblob.PublicAccessContainer)
        if err != nil {
            return err
        }
    }
}
When executed, this code throws an error like:
Details: \n Code: PublicAccessNotPermitted\n PUT https://storageaccountname.blob.core.windows.net/containername?restype=container&timeout=61\n Authorization: REDACTED
RESPONSE Status: 409 Public access is not permitted on this storage account
Shouldn't the container creation succeed, since the client is already on the cluster's network? What is it that I am doing wrong?
Many thanks!!

• The error code you are getting clearly states that 'public access is not permitted on your storage account', i.e., either the private endpoint connection configured on your storage account is not set up properly, or the account has not been secured by configuring the storage firewall to block all connections on the public endpoint of the storage service.
• Thus, I would suggest increasing the security of the virtual network (VNet) by blocking the exfiltration of data from the VNet, and connecting securely to the storage account from any on-premises networks that reach the VNet over VPN or ExpressRoute with private peering.
• Also, please ensure that the IP address assigned to the private endpoint comes from the address range of the VNet and is excluded from any restrictions in the network security group, the AKS ingress controller, or Azure Firewall.
• Finally, ensure that the storage accounts behind the private endpoints are not general-purpose v1 accounts, as private endpoints are not permitted for those. Also, configure the storage firewall for the storage account as described in the documentation link below:
https://learn.microsoft.com/en-us/azure/storage/common/storage-network-security?tabs=azure-portal#change-the-default-network-access-rule
To learn more about configuring private endpoints for storage accounts, refer to the documentation link below:
https://learn.microsoft.com/en-us/azure/storage/common/storage-private-endpoints

Related

Accessing AKS cluster configuration (specifically OIDC issuer URI) from within the cluster

I am writing a job in golang that sets up an Azure user assigned managed identity using the Azure Go SDK, specifically github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/msi/armmsi, a Kubernetes service account using the go-client for Kubernetes k8s.io/client-go/kubernetes and then using the Azure SDK to set up a Federated Identity Credential on the new user assigned managed identity.
In order to set up a federated credential on Azure, I need to get the OIDC issuer URI.
I know how to get it using the Azure CLI and could simply paste the string, but I expect this code to run on any cluster, where the issuer will differ. I'd rather get the issuer from a config file on the cluster itself or through the Azure SDK, but I am not sure how to do that. Any help...
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
....some code here to set up azure clients and Kubernetes client...
//creating a new managed identity using azure client
msi, err := azureMsiClient.CreateOrUpdate(ctx, "new-umid-resource-group", "new-umid", armmsi.Identity{Location: strPtr(`East US`)}, &armmsi.UserAssignedIdentitiesClientCreateOrUpdateOptions{})
if err != nil {
fmt.Printf("could not create a managed identity: %s\n", err.Error())
return err
}
....
//creating the new service account
annotations := make(map[string]string)
annotations["azure.workload.identity/client-id"] = *msi.Properties.ClientID
annotations["azure.workload.identity/tenant-id"] = "<myazuretenantid>"
labels := make(map[string]string)
labels["azure.workload.identity/use"] = "true"
mainSA := &v1.ServiceAccount{
    ObjectMeta: metav1.ObjectMeta{
        Name:        "myserviceaccount",
        Namespace:   "my-namespace",
        Annotations: annotations,
        Labels:      labels,
    },
}
_, err = kubeClientSet.CoreV1().ServiceAccounts("my-namespace").Create(ctx, mainSA, metav1.CreateOptions{})
if err != nil {
fmt.Println("could not create k8 service account")
return err
}
....
//creating the federated credential
aud := make([]*string, 1)
aud[0] = strPtr("api://AzureADTokenExchange")
beParms := armmsi.FederatedIdentityCredential{
Properties: &armmsi.FederatedIdentityCredentialProperties{
Audiences: aud,
Issuer: strPtr(<my cluster's OidcIssuerUri>),
Subject: strPtr(`system:serviceaccount:my-namespace:myserviceaccount`),
},
}
_, err := azureFedIdClient.CreateOrUpdate(ctx, "new-umid-resource-group","new-umid", "new-umid-fed-id", beParms, nil)
if err != nil {
fmt.Println("could not create a new federation between be auth service account and federated identity")
return err
}

Azure Function - Managed IDs to write to storage table - failing with 403 AuthorizationPermissionMismatch

I have an Azure Function application (HTTP trigger) that writes to a storage queue and table. Both fail when I try to change to managed identity. This post/question is about just the storage table part.
Here's the code that does the actual writing to the table:
GetStorageAccountConnectionData();
try
{
WorkspaceProvisioningRecord provisioningRecord = new PBIWorkspaceProvisioningRecord();
provisioningRecord.status = requestType;
provisioningRecord.requestId = requestId;
provisioningRecord.workspace = request;
#if DEBUG
Console.WriteLine(Environment.GetEnvironmentVariable("AZURE_TENANT_ID"));
Console.WriteLine(Environment.GetEnvironmentVariable("AZURE_CLIENT_ID"));
DefaultAzureCredentialOptions options = new DefaultAzureCredentialOptions()
{
Diagnostics =
{
LoggedHeaderNames = { "x-ms-request-id" },
LoggedQueryParameters = { "api-version" },
IsLoggingContentEnabled = true
},
ExcludeVisualStudioCodeCredential = true,
ExcludeAzureCliCredential = true,
ExcludeManagedIdentityCredential = true,
ExcludeAzurePowerShellCredential = true,
ExcludeSharedTokenCacheCredential = true,
ExcludeInteractiveBrowserCredential = true,
ExcludeVisualStudioCredential = true
};
#endif
DefaultAzureCredential credential = new DefaultAzureCredential();
Console.WriteLine(connection.storageTableUri);
Console.WriteLine(credential);
var serviceClient = new TableServiceClient(new Uri(connection.storageTableUri), credential);
var tableClient = serviceClient.GetTableClient(connection.tableName);
await tableClient.CreateIfNotExistsAsync();
var entity = new TableEntity();
entity.PartitionKey = provisioningRecord.status;
entity.RowKey = provisioningRecord.requestId;
entity["requestId"] = provisioningRecord.requestId.ToString();
entity["status"] = provisioningRecord.status.ToString();
entity["workspace"] = JsonConvert.SerializeObject(provisioningRecord.workspace);
//this is where I get the 403
await tableClient.UpsertEntityAsync(entity);
//other stuff...
catch(AuthenticationFailedException e)
{
Console.WriteLine($"Authentication Failed. {e.Message}");
WorkspaceResponse response = new PBIWorkspaceResponse();
response.requestId = null;
response.status = "failure";
return response;
}
catch (Exception ex)
{
Console.WriteLine($"whoops! Failed to create storage record:{ex.Message}");
WorkspaceResponse response = new WorkspaceResponse();
response.requestId = null;
response.status = "failure";
return response;
}
I have the client id/ client secret for this security principal defined in my local.settings.json as AZURE_TENANT_ID/AZURE_CLIENT_ID/AZURE_CLIENT_SECRET.
The code dies trying to do the upsert. And it never hits the AuthenticationFailedException - just the general exception.
The security principal defined in the AZURE* variables was used to create this entire application, including the storage account.
To manage data inside a storage account (like creating tables), you need to assign a different set of permissions. The Owner role is a control-plane role that enables you to manage storage accounts themselves, not the data inside them.
From this link:
Only roles explicitly defined for data access permit a security
principal to access blob data. Built-in roles such as Owner,
Contributor, and Storage Account Contributor permit a security
principal to manage a storage account, but do not provide access to
the blob data within that account via Azure AD.
Even though the text above is about blobs, the same applies to tables as well.
Please assign the Storage Table Data Contributor role to your managed identity, and you should no longer get this error.

How to specify x509 certificate for Azure SDK in Golang

I am trying to use the Azure SDK for Go to download files from a container to my device, using the connection string provided by Azure to connect. For context, this is running on a version of embedded Linux.
I have two questions. First, how do I pass a specific certificate for the Azure SDK to use when connecting? Currently, when I connect, I get this issue:
Get "https://transaction.blob.core.windows.net/transactions?comp=list&restype=container": x509: certificate signed by unknown authority
Or, failing that, how do I generate the correct certificate to put in /etc/ssl, which, as far as I understand, is where Go looks for certificates?
Second, which function from the Azure SDK for Go should I use to download a blob if my folder structure looks like /transaction/my-libs/images/1.0.0/libimage.bin, where transaction is my blob container?
func testConnection() {
    fmt.Println("TESTING CONNECTION")
    connStr := "..." // actual connection string hidden
    serviceClient, err := azblob.NewServiceClientFromConnectionString(connStr, nil)
    // crashes here <------------
    //ctx := context.Background()
    //container := serviceClient.NewContainerClient("transactions")
    //
    //_, err = container.Create(ctx, nil)
    //
    //blockBlob := container.NewBlockBlobClient("erebor-libraries")
    //_, err = blockBlob.Download(ctx, nil)

    // Open a buffer, reader, and then download!
    downloadedData := &bytes.Buffer{}
    reader := get.Body(RetryReaderOptions{}) // RetryReaderOptions has a lot of in-depth tuning abilities, but for the sake of simplicity, we'll omit those here.
    _, err = downloadedData.ReadFrom(reader)
    err = reader.Close()
    if data != downloadedData.String() {
        err := errors.New("downloaded data doesn't match uploaded data")
        if err != nil {
            return
        }
    }
    pager := container.ListBlobsFlat(nil)
    for pager.NextPage(ctx) {
        resp := pager.PageResponse()
        for _, v := range resp.ContainerListBlobFlatSegmentResult.Segment.BlobItems {
            fmt.Println(*v.Name)
        }
    }
}
• You can pass a specific certificate for the Azure SDK to use when connecting to other Azure resources by creating a service principal with a client certificate, configured through the following type:
type ClientCertificateConfig struct {
    ClientID            string
    CertificatePath     string
    CertificatePassword string
    TenantID            string
    AuxTenants          []string
    AADEndpoint         string
    Resource            string
}
For more information on creating the client certificate and its usage, please refer to the documentation link below:
https://pkg.go.dev/github.com/Azure/go-autorest/autorest/azure/auth#ClientCertificateConfig
• Also, even though your folder structure is ‘/transaction/my-libs/images/1.0.0/libimage.bin’, each blob URL is unique and includes the folder hierarchy, so when connecting to the storage account to download the blob, specify the full blob path within the container (‘my-libs/images/1.0.0/libimage.bin’).
Please refer to the sample code below for downloading blobs through the Azure SDK for Go:
https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#example-package
https://pkg.go.dev/github.com/Azure/azure-storage-blob-go/azblob#pkg-examples
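On embedded Linux, the "x509: certificate signed by unknown authority" error usually means the device image is missing the CA bundle that signs the storage endpoint's TLS chain, rather than the SDK needing its own certificate. A stdlib-only sketch (assuming you can ship a PEM bundle with the device; the file path in the comment is hypothetical) of building an http.Client that trusts that bundle; most Azure SDK clients accept a custom HTTP client through their options:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"fmt"
	"net/http"
)

// clientWithCA returns an *http.Client whose TLS config trusts exactly the
// CAs in caPEM, so caPEM must contain the root that signs the storage
// endpoint's certificate chain.
func clientWithCA(caPEM []byte) (*http.Client, error) {
	pool := x509.NewCertPool()
	if ok := pool.AppendCertsFromPEM(caPEM); !ok {
		return nil, errors.New("no valid CA certificates found in PEM data")
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}, nil
}

func main() {
	// Hypothetical bundle path for the device; on desktop Linux distros the
	// bundle is typically /etc/ssl/certs/ca-certificates.crt, e.g.:
	// caPEM, _ := os.ReadFile("/etc/ssl/certs/ca-certificates.crt")
	_, err := clientWithCA([]byte("not a certificate"))
	fmt.Println(err) // no valid CA certificates found in PEM data
}
```

Alternatively, installing the distro's ca-certificates bundle into /etc/ssl/certs on the device lets Go's default verification succeed without any code changes.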

Go client example to fetch storage account keys

How can we get the Azure storage account key from storage account name and other parameters ?
I tried to build the storage account client, but it requires the storage account name and key. I want to fetch the storage account key programmatically using the storage account name and other parameters, i.e. the Go equivalent of the Azure CLI command below.
az storage account keys list --resource-group --account-name
Can you please give some pointers on how I can fetch this using Go sample code?
Thank you
To get keys for a storage account, you will need to make use of the Azure SDK for Go, specifically the `armstorage` package.
Here's the code sample for listing account keys:
func ExampleStorageAccountsClient_ListKeys() {
cred, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
log.Fatalf("failed to obtain a credential: %v", err)
}
client := armstorage.NewStorageAccountsClient(arm.NewDefaultConnection(cred, nil), "<subscription ID>")
resp, err := client.ListKeys(context.Background(), "<resource group name>", "<storage account name>", nil)
if err != nil {
log.Fatalf("failed to list keys: %v", err)
}
for _, k := range resp.StorageAccountListKeysResult.Keys {
log.Printf("account key: %v", *k.KeyName)
}
}
This sample and more code examples are available here: https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/storage/armstorage/example_storageaccounts_test.go.
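As a side note (an illustrative addition, not part of the original answer): ListKeys rejects malformed account names, and storage account names must be 3 to 24 characters long, using lowercase letters and digits only. A small stdlib-only validator you could run before making the call:

```go
package main

import (
	"fmt"
	"regexp"
)

// accountNameRe encodes the Azure storage account naming rules:
// 3-24 characters, lowercase letters and digits only.
var accountNameRe = regexp.MustCompile(`^[a-z0-9]{3,24}$`)

// validAccountName reports whether name is a well-formed storage account name.
func validAccountName(name string) bool {
	return accountNameRe.MatchString(name)
}

func main() {
	fmt.Println(validAccountName("mystorageacct1")) // true
	fmt.Println(validAccountName("My-Storage"))     // false: uppercase and hyphen
}
```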

Calling CosmosDB server from Azure Cloud Function

I am working on an Azure Cloud Function (runs on node js) that should return a collection of documents from my Azure Cosmos DB for MongoDB API account. It all works fine when I build and start the function locally, but fails when I deploy it to Azure. This is the error: MongoNetworkError: failed to connect to server [++++.mongo.cosmos.azure.com:++++] on first connect ...
I am new to CosmosDB and Azure Cloud Functions, so I am struggling to find the problem. I looked at the Firewall and virtual networks settings in the portal and tried out different variations of the connection string.
As it seems to work locally, I assume it could be a configuration setting in the portal. Can someone help me out?
1. Set up the connection
I used the primary connection string provided by the portal.
import * as mongoClient from 'mongodb';
import { cosmosConnectionStrings } from './credentials';
import { Context } from '@azure/functions';
// The MongoDB Node.js 3.0 driver requires encoding special characters in the Cosmos DB password.
const config = {
url: cosmosConnectionStrings.primary_connection_string_v1,
dbName: "****"
};
export async function createConnection(context: Context): Promise<any> {
let db: mongoClient.Db;
let connection: any;
try {
connection = await mongoClient.connect(config.url, {
useNewUrlParser: true,
ssl: true
});
context.log('Do we have a connection? ', connection.isConnected());
if (connection.isConnected()) {
db = connection.db(config.dbName);
context.log('Connected to: ', db.databaseName);
}
} catch (error) {
context.log(error);
context.log('Something went wrong');
}
return {
connection,
db
};
}
2. The main function
The main function executes the query and returns the collection.
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
context.log('Get all projects function processed a request.');
try {
const { db, connection } = await createConnection(context);
if (db) {
const projects = db.collection('projects')
const res = await projects.find({})
const body = await res.toArray()
context.log('Response projects: ', body);
connection.close()
context.res = {
status: 200,
body
}
} else {
context.res = {
status: 400,
body: 'Could not connect to database'
};
}
} catch (error) {
context.log(error);
context.res = {
status: 400,
body: 'Internal server error'
};
}
};
I had another look at the firewall and private network settings and read the official documentation on configuring an IP firewall. By default, the current IP address of your local machine is added to the IP whitelist. That's why the function worked locally.
Based on the documentation I tried all the options described below. They all worked for me. However, it still remains unclear why I had to manually perform an action to make it work. I am also not sure which option is best.
Set Allow access from to All networks
All networks (including the internet) can access the database (obviously not advised)
Add the inbound and outbound IP addresses of the cloud function project to the whitelist. This could be challenging if the IP addresses change over time; if you are on the consumption plan, this will probably happen.
Check the Accept connections from within public Azure datacenters option in the Exceptions section
If you access your Azure Cosmos DB account from services that don’t
provide a static IP (for example, Azure Stream Analytics and Azure
Functions), you can still use the IP firewall to limit access. You can
enable access from other sources within the Azure by selecting the
Accept connections from within Azure datacenters option.
This option configures the firewall to allow all requests from Azure, including requests from the subscriptions of other customers deployed in Azure. The list of IPs allowed by this option is wide, so it limits the effectiveness of a firewall policy. Use this option only if your requests don’t originate from static IPs or subnets in virtual networks. Choosing this option automatically allows access from the Azure portal because the Azure portal is deployed in Azure.
