We are using Terraform to generate an endpoint and set it for our service. We can get the Document DB connection string:
AccountEndpoint=https://mygraphaccount.documents.azure.com:443/
My question is how to get the Gremlin endpoint:
GremlinEndpoint: wss://mygraphaccount.gremlin.cosmos.azure.com:443/
In the Terraform documentation:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_account
id - The CosmosDB Account ID.
endpoint - The endpoint used to connect to the CosmosDB account.
read_endpoints - A list of read endpoints available for this CosmosDB account.
write_endpoints - A list of write endpoints available for this CosmosDB account.
primary_key - The Primary key for the CosmosDB Account.
secondary_key - The Secondary key for the CosmosDB Account.
primary_readonly_key - The Primary read-only Key for the CosmosDB Account.
secondary_readonly_key - The Secondary read-only key for the CosmosDB Account.
connection_strings - A list of connection strings available for this CosmosDB account.
None of these looks like the Gremlin endpoint.
I had a similar issue with MongoDB and solved it with a custom string interpolation (as mentioned in the comments of the question).
output "gremlin_url" {
value = "wss://${azurerm_cosmosdb_account.example.name}.gremlin.cosmos.azure.com:443/"
}
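For reference, a minimal sketch of consuming that output from Python with the gremlinpython driver (the database name testdb, graph name testgraph, and the key value are placeholder assumptions, not from the question):

# Minimal sketch: connect to the Gremlin endpoint constructed above.
# "testdb", "testgraph" and the primary key are placeholder assumptions.
from gremlin_python.driver import client, serializer

gremlin_client = client.Client(
    "wss://mygraphaccount.gremlin.cosmos.azure.com:443/",  # the gremlin_url output
    "g",
    username="/dbs/testdb/colls/testgraph",
    password="<cosmosdb-primary-key>",
    message_serializer=serializer.GraphSONSerializersV2d0(),
)

# Run a trivial traversal to verify the endpoint works, then clean up.
print(gremlin_client.submit("g.V().count()").all().result())
gremlin_client.close()

Cosmos DB's Gremlin endpoint expects the GraphSON v2 serializer, hence the explicit message_serializer.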
For the sake of safety I want to use the geo-replicated / secondary Blob Storage container as a data source for an AzureML datastore. So I do the following:
New Datastore
Enter name + Azure Blob Storage + Enter manually
For the URL I paste the "Secondary Blob Service Endpoint" value from "Storage account endpoints" and add the container name at the end, e.g. https://somedata-secondary.blob.core.windows.net/container-name
Select subscription ID
I select the resource group in which somedata is hosted
I add the account key taken from the "Access keys" section; I also tried with a SAS token
After finalizing, the new datastore seems to appear in the list, but it is impossible to Browse (preview); it throws the error "Invalid host".
What is the correct way of doing this?
Is it possible at all to access this geo-replication / secondary Blob Storage as datastore?
Please check the below points:
First, check whether the shared access signature (SAS) token is outdated or expired.
Please note that both the primary and geo-secondary are required to have the same service tier, and it is strongly recommended that the geo-secondary is configured with the same backup storage redundancy and compute size as the primary.
Note: You can only access your storage account by its primary name. In the event of failover, that name will be mapped to the alternate datacenter.
There are two disadvantages of GRS redundancy:
Replication between regions is asynchronous and so data is propagated with a small delay
The second region cannot be accessed or read until the storage account fails over
Active geo-replication - Azure SQL Database | Microsoft Docs
The replicated endpoint will be https://account-secondary.blob.core.windows.net. Note that this DNS entry won't even be registered unless read-access geo-redundant storage (RA-GRS) is enabled.
The access keys for your storage account are the same for both the primary and secondary endpoints. You can use the same primary (or secondary) access key for the secondary too.
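To sanity-check this outside AzureML, here is a minimal Python sketch with azure-storage-blob that lists blobs through the secondary endpoint using the same account key (somedata-secondary and container-name are the placeholders from the question; RA-GRS must be enabled):

# Minimal sketch: read through the RA-GRS secondary blob endpoint.
# The credential is the same account key used for the primary endpoint.
from azure.storage.blob import BlobServiceClient

secondary = BlobServiceClient(
    account_url="https://somedata-secondary.blob.core.windows.net",
    credential="<same-account-key-as-primary>",
)

container = secondary.get_container_client("container-name")
for blob in container.list_blobs():
    print(blob.name)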
I am trying to upload an XML file to an Azure FTP server, using this code:
https://www.c-sharpcorner.com/article/upload-and-download-files-from-blob-storage-using-c-sharp/
I am having difficulty finding storageAccount_connectionString. I have access to the server in the Azure portal, but I cannot find this connection string.
//Copy the storage account connection string from Azure portal
storageAccount_connectionString = "your Azure storage account connection string here";
The connection strings are found on the storage account's Access keys blade, as Gaurav Mantri already said (Storage Account >> Access keys >> Show keys).
If you are using the primary access key, you can use the connection string in box 1; if you are using the secondary access key, you can use the connection string in box 2.
You can refer to the Microsoft documentation below for more information:
View and Manage Keys
Configure Connection Strings
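The article linked in the question uses C#; as a rough Python equivalent with the azure-storage-blob package (the container and file names are placeholder assumptions), once you have copied the connection string from the Access keys blade:

# Minimal sketch: upload an XML file using the storage account connection string.
# Container and blob names are placeholder assumptions.
from azure.storage.blob import BlobServiceClient

connection_string = "<storage account connection string from the Access keys blade>"
service = BlobServiceClient.from_connection_string(connection_string)

blob = service.get_blob_client(container="mycontainer", blob="data.xml")
with open("data.xml", "rb") as f:
    blob.upload_blob(f, overwrite=True)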
I have data stored in Azure Table Storage and want to secure it such that only my API (a function app) can read and write data.
What is best practice and how can I do this? I thought setting --default-action on the network rules to Deny for the Storage, plus adding a --bypass Logging Metrics AzureServices would shut down access but enable my Azure services, but this did not work.
I then looked at creating a Managed Service Identity (MSI) for the function app and adding RBAC to the Storage Account, but this did not work either. It doesn't look like MSIs are supported for Table Storage: Access Azure Table Storage with Azure MSI.
Am I missing or misunderstanding something? How do I secure the data in the tables in the Storage account, and is this even possible?
As the link you provided says, Azure Table Storage does not support Azure MSI; it only supports Shared Key (storage account key) and shared access signature (SAS) authorization.
You must use Shared Key authorization to authorize a request made against the Table service if your service is using the REST API to make the request.
To encode the signature string for a request against the Table service made using the REST API, use the following format:
StringToSign = VERB + "\n" +
Content-MD5 + "\n" +
Content-Type + "\n" +
Date + "\n" +
CanonicalizedResource;
You can use Shared Key Lite authorization to authorize a request made against any version of the Table service.
StringToSign = Date + "\n" +
CanonicalizedResource;
For more details, you could refer to this article.
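For illustration only, a rough Python sketch of a Shared Key Lite query against the Table service REST API (the account name, key, and table name are placeholder assumptions; see the linked article for the authoritative format):

# Rough sketch: Shared Key Lite authorization for a Query Entities request.
# Account name, key, and table name are placeholder assumptions.
import base64, datetime, hashlib, hmac
import requests

account = "mystorageaccount"
account_key = "<base64 account key from the Access keys blade>"
table = "mytable"

date = datetime.datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")
canonicalized_resource = f"/{account}/{table}"

# Shared Key Lite for the Table service: Date + "\n" + CanonicalizedResource
string_to_sign = f"{date}\n{canonicalized_resource}"
signature = base64.b64encode(
    hmac.new(base64.b64decode(account_key), string_to_sign.encode("utf-8"), hashlib.sha256).digest()
).decode()

response = requests.get(
    f"https://{account}.table.core.windows.net/{table}",
    headers={
        "x-ms-date": date,
        "x-ms-version": "2019-02-02",
        "Accept": "application/json;odata=nometadata",
        "Authorization": f"SharedKeyLite {account}:{signature}",
    },
)
print(response.status_code, response.text)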
For securing Azure Table Storage data you can do the below network configuration:
Use selected networks instead of public network access. This configuration is available under "Firewalls and virtual networks" on the storage account.
A second step you can take is to either move the data to Azure Key Vault or use an encryption key stored in Azure Key Vault to encrypt the required fields of Azure Table Storage. With the encryption-key approach you won't face Azure Key Vault's throttling limits - https://learn.microsoft.com/en-us/azure/key-vault/general/service-limits#secrets-managed-storage-account-keys-and-vault-transactions
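A rough sketch of that second approach in Python (the vault URL, secret name, table name, and connection string are all placeholder assumptions): fetch an encryption key from Key Vault and encrypt a sensitive field before writing the entity:

# Sketch: encrypt a sensitive field with a key held in Azure Key Vault
# before writing the entity to Table Storage. All names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.data.tables import TableClient
from cryptography.fernet import Fernet

credential = DefaultAzureCredential()
secrets = SecretClient(vault_url="https://myvault.vault.azure.net", credential=credential)
fernet_key = secrets.get_secret("table-encryption-key").value  # a Fernet key stored as a secret

table = TableClient.from_connection_string("<storage connection string>", table_name="customers")
table.create_entity({
    "PartitionKey": "tenant1",
    "RowKey": "user42",
    "Email": Fernet(fernet_key).encrypt(b"user@example.com").decode(),
})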
I have created a PowerShell script to update a key vault with the new blob storage key every time the key rotates, but the problem I have is how to update the apps with the new blob storage key.
I have used Set-AzKeyVaultAccessPolicy to give the apps access to the key vault secret which contains the latest storage key. I have a logic app which uses blob storage, but when the key is rotated the blob storage connection within the logic app shows an error. This is the error I encounter:
{
  "status": 403,
  "message": "Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\r\nclientRequestId: 7c8edbdf-e7c9-4658-8370-54102589213e",
  "source": "azureblob-uks.azconn-uks-01.p.azurewebsites.net"
}
Is there a way for the logic app to get the latest key from the key vault?
Logic Apps does support a connector to connect your logic app to Azure Key Vault to retrieve the keys: https://learn.microsoft.com/en-us/connectors/keyvault/
My suggestion is to write a specific script which discards the rotated keys and then retrieves the new ones as soon as they are available in Azure Key Vault.
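For example, a small Python sketch of such a script (the vault URL and secret name are placeholder assumptions); calling get_secret without a version always returns the latest, i.e. the most recently rotated, key:

# Sketch: fetch the latest version of the rotated storage key from Key Vault.
# Vault URL and secret name are placeholder assumptions.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://myvault.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Without an explicit version this returns the current (latest) secret value.
storage_key = client.get_secret("blob-storage-key").value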
I'm setting up Cosmos DB with a partition key as a Stream Analytics job output, and the connection test fails with the following error:
Error connecting to Cosmos DB Database: Invalid or no matching collections found with collection pattern 'containername/{partition}'. Collections must exist with case-sensitive pattern in increasing numeric order starting with 0.
NOTE: I'm using Cosmos DB with the SQL API, and the configuration is done through portal.azure.com.
I have confirmed I can manually insert documents into the DocumentDB through the portal Data Explorer. Those inserts succeed and the partition key value is correctly identified.
I set up the Cosmos container like this:
Database Id: testdb
Container id: containername
Partition key: /partitionkey
Throughput: 1000
I set up the Stream Analytics output like this:
Output Alias: test-output-db
Subscription: My-Subscription-Name
Account id: MyAccountId
Database -> Use Existing: testdb
Collection name pattern: containername/{partition}
Partition Key: partitionkey
Document id:
When testing the output connection I get a failure and the error listed above.
I received a response from Microsoft support that specifying the partition via the "{partition}" token pattern is no longer supported by Azure Stream Analytics. Furthermore, writing to multiple containers from ASA in general has been deprecated. Now, if ASA outputs to a Cosmos DB container with a partition key configured, Cosmos DB should automatically take care of that on its side.
after discussion with our ASA developer/product group team, the collection pattern such as MyCollection{partition} or MyCollection/{partition} is no longer supported. Writing to multiple fixed containers is being deprecated and it is not the recommended approach for scaling out the Stream Analytics job [...] In summary, you can define the collection name simply as "apitraffic". You don't need to specify any partition key as we detect it automatically from Cosmos DB.