Create DialogFlow Agent using SDK - dialogflow-es

I want to create an agent using the Dialogflow SDK. I need to create and destroy agents dynamically.
I have tried using the Java SDK, without success.
I tried generating the AgentsClient with the Java SDK v2.
First I make the connection to Storage:
com.google.cloud.storage.Storage storage = StorageOptions.newBuilder()
        .setCredentials(ServiceAccountCredentials.fromStream(new FileInputStream("key.json")))
        .build()
        .getService();
After that I tried this:
AgentsClient client = AgentsClient.create();
But it does not work the way I understood it. The requirement is to create a new agent using the SDK, not to get an existing one.
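What I am trying to achieve, as far as I can tell from the v2 API surface, is something like the sketch below. This is only a sketch under my assumptions: setAgent appears to be the call that creates the agent when the project has none (and updates it otherwise), deleteAgent destroys it, and projects/my-project-id is a placeholder. Note that a Dialogflow ES project holds at most one agent, so creating agents dynamically implies one GCP project per agent.
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.dialogflow.v2.Agent;
import com.google.cloud.dialogflow.v2.AgentsClient;
import com.google.cloud.dialogflow.v2.AgentsSettings;
import com.google.cloud.dialogflow.v2.DeleteAgentRequest;
import java.io.FileInputStream;

// Build the client with explicit service-account credentials.
AgentsSettings settings = AgentsSettings.newBuilder()
        .setCredentialsProvider(FixedCredentialsProvider.create(
                ServiceAccountCredentials.fromStream(new FileInputStream("key.json"))))
        .build();
try (AgentsClient client = AgentsClient.create(settings)) {
    // setAgent creates the agent if the project has none, or updates it otherwise.
    Agent agent = Agent.newBuilder()
            .setParent("projects/my-project-id") // placeholder project ID
            .setDisplayName("my-dynamic-agent")
            .setDefaultLanguageCode("en")
            .setTimeZone("America/New_York")
            .build();
    Agent created = client.setAgent(agent);

    // ...and deleteAgent destroys it again.
    client.deleteAgent(DeleteAgentRequest.newBuilder()
            .setParent("projects/my-project-id")
            .build());
}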

Related

How to create an Azure Batch pool based on a custom VM image using the Java SDK

I want to use a custom Ubuntu VM image that I created for my batch job. I can create a new pool by selecting the custom image in the Azure portal itself, but I wanted to write a build script to do the same using the Azure Batch Java SDK. This is what I was able to come up with:
// Find the node agent SKU matching the OS the custom image was built from
List<NodeAgentSku> skus = client.accountOperations().listNodeAgentSkus().findAll({ it.osType() == OSType.LINUX })
String skuId = null
// Double quotes so Groovy interpolates the variables (single quotes would not)
ImageReference imageRef = new ImageReference().withVirtualMachineImageId("/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP_NAME/providers/Microsoft.Compute/images/$CUSTOM_VM_IMAGE_NAME")
for (NodeAgentSku sku : skus) {
    for (ImageReference imgRef : sku.verifiedImageReferences()) {
        if (imgRef.publisher().equalsIgnoreCase(osPublisher) && imgRef.offer().equalsIgnoreCase(osOffer) && imgRef.sku() == '18.04-LTS') {
            skuId = sku.id()
            break
        }
    }
}
VirtualMachineConfiguration configuration = new VirtualMachineConfiguration()
configuration.withNodeAgentSKUId(skuId).withImageReference(imageRef)
client.poolOperations().createPool(poolId, poolVMSize, configuration, poolVMCount)
But I am getting this exception:
Caused by: com.microsoft.azure.batch.protocol.models.BatchErrorException: Status code 403, {
  "odata.metadata": "https://analyticsbatch.eastus.batch.azure.com/$metadata#Microsoft.Azure.Batch.Protocol.Entities.Container.errors/#Element",
  "code": "AuthenticationFailed",
  "message": {
    "lang": "en-US",
    "value": "Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:bf9bf7fd-2ef5-497b-867c-858d081137e6\nTime:2019-04-17T23:08:17.7144177Z"
  },
  "values": [
    {
      "key": "AuthenticationErrorDetail",
      "value": "The specified type of authentication SharedKey is not allowed when external resources of type Compute are linked."
    }
  ]
}
I definitely think the way I am getting the skuId is wrong. Since client.accountOperations().listNodeAgentSkus() does not list the custom image, I just thought of deriving the skuId from the Ubuntu version I had used to create the custom image.
So what is the correct way to create a pool using a custom VM image for an Azure Batch account with the Java SDK?
You must use Azure Active Directory credentials in order to create a pool with a custom image. It is in the prerequisites section of the Batch Custom Image doc.
This is a frequently asked question:
Custom Image under AzureBatch ImageReference class not working
Azure Batch Pool: How do I use a custom VM Image via Python?
As the error shows, you need to authenticate to Azure first, and then you can create the pool with a custom image as you want.
First, you need an Azure Batch account; you can create it in the Azure portal or with the Azure CLI. You can also create the Batch account through Java; see Manage the Azure Batch Account through Java.
Then you also need to authenticate to your Batch account. There are two ways:
Use the account name, key, and URL to create a BatchSharedKeyCredentials instance for authentication with the Azure Batch service. The BatchClient class is the simplest entry point for creating and interacting with Azure Batch objects.
BatchSharedKeyCredentials cred = new BatchSharedKeyCredentials(batchUri, batchAccount, batchKey);
BatchClient client = BatchClient.open(cred);
The other way is to use AAD (Azure Active Directory) authentication to create the client. See this document for details.
BatchApplicationTokenCredentials cred = new BatchApplicationTokenCredentials(batchEndpoint, clientId, applicationSecret, applicationDomain, null, null);
BatchClient client = BatchClient.open(cred);
Then you can create the pool with the custom image as you want, just like this:
System.out.println("Creating a pool using a custom image.");
VirtualMachineConfiguration configuration = new VirtualMachineConfiguration();
configuration.withNodeAgentSKUId(skuId).withImageReference(imageRef);
client.poolOperations().createPool(poolId, poolVMSize, configuration, poolVMCount);
System.out.println("Created a Pool: " + poolId);
For more details, see Azure Batch Libraries for Java.
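For completeness, here is a minimal end-to-end sketch tying these pieces together. The AAD and resource values are placeholders, and it assumes the custom image was built from Ubuntu 18.04, whose node agent SKU ID is "batch.node.ubuntu 18.04"; substitute your own values.
import com.microsoft.azure.batch.BatchClient;
import com.microsoft.azure.batch.auth.BatchApplicationTokenCredentials;
import com.microsoft.azure.batch.protocol.models.ImageReference;
import com.microsoft.azure.batch.protocol.models.VirtualMachineConfiguration;

// Placeholder values -- substitute your own AAD app registration and resources.
String batchEndpoint = "https://analyticsbatch.eastus.batch.azure.com";
String clientId = "<aad-app-client-id>";
String applicationSecret = "<aad-app-secret>";
String applicationDomain = "<aad-tenant-id>";

// AAD credentials: SharedKey auth is rejected for custom-image pools.
BatchApplicationTokenCredentials cred = new BatchApplicationTokenCredentials(
        batchEndpoint, clientId, applicationSecret, applicationDomain, null, null);
BatchClient client = BatchClient.open(cred);

// Reference the custom image by its full ARM resource ID.
ImageReference imageRef = new ImageReference().withVirtualMachineImageId(
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
        + "/providers/Microsoft.Compute/images/<custom-image-name>");

// The node agent SKU must match the OS the image was built from.
VirtualMachineConfiguration configuration = new VirtualMachineConfiguration()
        .withNodeAgentSKUId("batch.node.ubuntu 18.04")
        .withImageReference(imageRef);

client.poolOperations().createPool("myPoolId", "STANDARD_A1", configuration, 2);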

Using PowerShell, how do I create an API for an API Management service with a version in the path segment?

I can create an API using PowerShell as follows:
New-AzureRmApiManagementApi -Context $azContext -ApiId $apiId -Name $apiName -ServiceUrl "https://myapp-dev-apims.azure-api.net/${subDomainName}" -Protocols @("https") -Path $subDomainName
However, this cmdlet does not create a version. It appears I need to use
New-AzureRmApiManagementApiVersionSet
However, it is not well documented how to do this when adding a version using a path segment such as myApi.com/cart/v1.
When creating the version within the portal, it says "Versioning creates a new API. This new API is linked to your existing API through a versioning scheme. Choose a versioning scheme and choose a version number for your API:"
Do I need to create a new API using New-AzureRmApiManagementApi a second time? This is confusing.
The workaround is to use the New-AzureRmApiManagementApi cmdlet to create the API initially, then go into the portal to manually create the version. But it would obviously be nice if creating both the API and its version were repeatable in a script.
Using PowerShell alone, how do I create both an API and its version in one script? Help is appreciated. Thank you.
When creating the version within the portal, it says "Versioning creates a new API. This new API is linked to your existing API through a versioning scheme. Choose a versioning scheme and choose a version number for your API:"
That is correct: when you add a version in the portal, it creates a new API; it is only in the UI that it appears nested under the original API. You can check this clearly in the Resource Explorer, where the versioned API has an "apiVersion": "xx" property. After you add a version, a new API appears under apis, and a version set is automatically created under api-version-sets.
In my test, the New-AzureRmApiManagementApiVersionSet command only creates an entry under api-version-sets and creates nothing under apis, so you cannot get what you want with it alone.
Also, when I added a version in the portal and used Fiddler to capture the request, it essentially called the same REST API as creating a new API.
Some workarounds for you to consider:
1. As you mentioned, create the API and then add the version in the portal manually.
2. Try New-AzureRmResource to create the API version.
3. Use PowerShell's Invoke-RestMethod to call the REST API directly.
To create a versioned API you first need to create a version set; you found the PowerShell cmdlet for that. However, looking at New-AzureRmApiManagementApi, it seems you cannot provide a version set ID as a parameter, which is what is needed to link the version set to the API.
With PowerShell alone I don't think what you're trying to achieve is possible, but you could consider using ARM templates instead.
These templates can be kicked off from PowerShell and do provide the option to create an entire versioned API in one script.
For inspiration you could take a look at this blog post:
https://blog.eldert.net/api-management-ci-cd-using-arm-templates-linked-template/
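For example, here is a hedged sketch of the two ARM resources such a template would declare: a version set using the "Segment" (path) scheme, and an API linked to it. The resource names, the cart/v1 values, and the api-version 2019-01-01 are placeholders you may need to adjust; property names follow the Microsoft.ApiManagement ARM schema.
{
  "type": "Microsoft.ApiManagement/service/apiVersionSets",
  "apiVersion": "2019-01-01",
  "name": "[concat(parameters('apimName'), '/cart-version-set')]",
  "properties": {
    "displayName": "Cart",
    "versioningScheme": "Segment"
  }
},
{
  "type": "Microsoft.ApiManagement/service/apis",
  "apiVersion": "2019-01-01",
  "name": "[concat(parameters('apimName'), '/cart-v1')]",
  "dependsOn": [
    "[resourceId('Microsoft.ApiManagement/service/apiVersionSets', parameters('apimName'), 'cart-version-set')]"
  ],
  "properties": {
    "displayName": "Cart",
    "path": "cart",
    "protocols": [ "https" ],
    "serviceUrl": "https://myapp-dev-apims.azure-api.net/cart",
    "apiVersion": "v1",
    "apiVersionSetId": "[resourceId('Microsoft.ApiManagement/service/apiVersionSets', parameters('apimName'), 'cart-version-set')]"
  }
}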

How can I check if the AWS SDK was provided with credentials?

There are many ways to provide the AWS SDK with credentials to perform operations.
I want to make sure one of those methods was successful in setting up the client before I try my operation on our continuous deployment system.
How can I check if AWS SDK was able to find credentials?
You can access them via the config.credentials property on the main client. All AWS service libraries included in the SDK have a config property.
Class: AWS.Config
The main configuration class used by all service objects to set the region, credentials, and other options for requests.
By default, credentials and region settings are left unconfigured. This should be configured by the application before using any AWS service APIs.
// Using S3
var s3 = new AWS.S3();
s3.config.getCredentials(function (err) { // forces resolution of the provider chain
  if (err) console.log('No credentials found:', err);
  else console.log(s3.config.credentials);
});
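If you are on the AWS SDK for Java rather than JavaScript, a sketch of the equivalent check looks like this (class names are from SDK for Java v1; the default provider chain throws when no credentials can be located):
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;

// The default chain checks env vars, system properties, profile files and
// instance metadata; getCredentials() throws if nothing usable is found.
try {
    AWSCredentials creds = new DefaultAWSCredentialsProviderChain().getCredentials();
    System.out.println("Credentials found for access key " + creds.getAWSAccessKeyId());
} catch (SdkClientException e) {
    System.err.println("No AWS credentials located: " + e.getMessage());
}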

Cannot connect to DocumentDb directly after having deployed the DocumentDb account

I have an ARM template that I use to deploy a DocumentDB account as well as other Azure resources to a resource group. I want my ARM template to set up a Stream Analytics job that uses the DocumentDB as output. In order to do this, the DocumentDB account created by the ARM template needs to have a database and a collection set up as well. I cannot find a way to do this from an ARM template, so I have written a PowerShell cmdlet to create the database and collection for me.
The Stream Analytics job cannot be created by the first ARM template since it depends on having the database and collection created first. Instead I have to divide the deployment into two ARM templates, the first setting up the DocDB account and the second setting up the SA job.
The problem is that I cannot create a database in the DocDB account directly after having deployed the account via the ARM template. I get an exception with the following message: "The remote name could not be resolved: 'test.documents.azure.com'" when I try to execute the CreateDatabaseAsync method with the DocDbEndpoint and AuthKey I get back from the ARM template deployment.
Are there any timing issues after deploying Azure resources using an ARM template before you can access them programmatically? This does not seem to be a problem with other Azure resources created this way.
Any help on this matter is highly appreciated, as is advice on good practices for working with ARM templates with DocumentDB and Stream Analytics jobs.
Update 2016-03-23
Code for setting up the connection to DocumentDB to create the database:
Uri endpointUri = new Uri(documentDbEndPoint);
DocumentClient client = new DocumentClient(endpointUri, authKey);
var db = await client.CreateDatabaseAsync(new Database { Id = databaseId });
return db;
Where the documentDbEndPoint is in the form of: https://name.documents.azure.com:443/ and name is the name of my DocDB account just created by the ARM template deployment.
I have the code in a library which I can call either from a console application or from a PowerShell script by loading the library with:
Add-Type -Path <path to library dll file>
Whether I use the PowerShell script or the console application, I get the same error if I try to create a database right after creating the DocDB account via the ARM template. If I wait an hour or so, both the PowerShell script and the console application work and can create a database in the account.
It seems there is some kind of timing issue with Azure setting up DNS records for the newly created DocDB account before it can be accessed through the DocDB API.
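One defensive workaround for such a timing issue is to wait until the account's host name actually resolves before opening the client. A minimal Java sketch of the idea, with a hypothetical waitForDns helper (the same approach applies to the C# code above):
import java.net.InetAddress;
import java.net.UnknownHostException;

// Hypothetical helper: poll until the new account's DNS name resolves, so the
// client does not fail with "The remote name could not be resolved".
static void waitForDns(String host, long timeoutMillis) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (System.currentTimeMillis() < deadline) {
        try {
            InetAddress.getByName(host); // succeeds once the record has propagated
            return;
        } catch (UnknownHostException e) {
            Thread.sleep(10_000); // not resolvable yet -- retry every 10 seconds
        }
    }
    throw new IllegalStateException("DNS for " + host + " did not resolve in time");
}

// Usage: waitForDns("name.documents.azure.com", 30 * 60 * 1000L);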
Update 2 2016-03-23
I just tried creating a DocDB account directly from the portal; when the account is created this way instead of from an ARM template, my PowerShell script and console application can create a database in it immediately.
This timing issue has since been fixed, and you should now be able to use the account right after the ARM template deployment.

Connecting to Rackspace Cloud using ASP.net

I am trying to connect to Rackspace Cloud using ASP.NET.
I've downloaded the Rackspace.CloudFiles assembly from NuGet, and I am trying to connect to the server:
UserCredentials userCred = new UserCredentials("username", "api_key");
Connection connection = new Connection(userCred);
var containers = connection.GetContainers();
This works, but it always connects to just one storage location. In the Rackspace control panel, I have containers in other locations as well.
Is there a way to specify the location when I connect to Rackspace?
You may want to get the entire OpenStack .NET SDK via NuGet; it allows you to connect to "the cloud" and then select containers based on region (or all regions, of course).
Such as this:
// Get a list of containers
CloudFilesProvider cfp = new CloudFilesProvider(_cloudIdentity);
IEnumerable<ContainerCDN> listOfContainers = cfp.ListCDNContainers(region: "DFW");
If you do decide to use the OpenStack .NET SDK, please don't hesitate to ask questions; I'm here to help.
-- Don Schenck, OpenStack .NET Developer Advocate, Rackspace
