How to create Azure resources using the REST API?

I want to create resources like Cosmos DB, Azure Kubernetes Service (AKS), etc.
I went through the following document:
https://learn.microsoft.com/en-us/rest/api/resources/resources/create-or-update
I see that the request URL has parameters like:
https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{parentResourcePath}/{resourceType}/{resourceName}?api-version=2021-04-01
Where can I find the values for fields like resourceProviderNamespace, parentResourcePath, and resourceType for each resource (Cosmos DB, AKS, etc.)?
And where can I find the properties each resource expects, like location, backup, etc.?

As suggested by Gaurav Mantri, you can refer to the Azure FarmBeats control plane and data plane operations.
Thank you, AnuragSharma-MSFT and MarcoPapst-5675. Posting your suggestion as an answer to help community members.
You can refer to the following template for a Cosmos DB role assignment:
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "roleDefinitionId": {
            "type": "string",
            "metadata": {
                "description": "Name of the Role Definition"
            }
        },
        "roleAssignmentName": {
            "type": "string",
            "metadata": {
                "description": "Name of the Assignment"
            }
        },
        "scope": {
            "type": "string",
            "metadata": {
                "description": "Scope of the Role Assignment"
            }
        },
        "principalId": {
            "type": "string",
            "metadata": {
                "description": "Object ID of the AAD identity. Must be a GUID."
            }
        }
    },
    "variables": {},
    "resources": [
        {
            "name": "[concat(parameters('roleAssignmentName'), '/', guid(parameters('scope')))]",
            "type": "Microsoft.DocumentDB/databaseAccounts/sqlRoleAssignments",
            "apiVersion": "2021-04-15",
            "properties": {
                "roleDefinitionId": "[parameters('roleDefinitionId')]",
                "principalId": "[parameters('principalId')]",
                "scope": "[parameters('scope')]"
            }
        }
    ]
}
You can refer to Create a CosmosDB Role Assignment using an ARM Template, Azure REST API get resource parentResourcePath parameter, and Resources - Get By Id.
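To make the URL parameters concrete, here is a hedged sketch (the api-versions are illustrative; check each resource provider's REST API reference for current values). For top-level resources, parentResourcePath is empty: a Cosmos DB account lives under the Microsoft.DocumentDB namespace with resource type databaseAccounts, and an AKS cluster under Microsoft.ContainerService with resource type managedClusters:

    PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}?api-version=2021-04-15
    PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{clusterName}?api-version=2022-04-01

The request-body properties each resource expects (location, backup policy, etc.) are documented per resource type in that provider's REST API reference.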

First of all you need the client ID, client secret, and tenant ID of your Azure account. Make sure you have granted the required permissions to access and modify Azure resources via the API. Hint: use the RBAC commands in Azure Cloud Shell.
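For example (a hedged sketch; the app name and role are placeholders), you can create a service principal with the Contributor role from Cloud Shell:

    az ad sp create-for-rbac --name "my-rest-api-app" --role Contributor --scopes /subscriptions/{subscriptionId}

The command prints the appId (client ID), password (client secret), and tenant values used by the code below.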
// Get an access token for the Azure management API endpoints.
// Requires Newtonsoft.Json (for JsonConvert) and System.Configuration.
public static string GetAzureAccessToken()
{
    string AccessToken = "";
    var tenantId = System.Configuration.ConfigurationManager.AppSettings["AzureTenantID"];
    var clientId = System.Configuration.ConfigurationManager.AppSettings["AzureClientID"];
    var secret = System.Configuration.ConfigurationManager.AppSettings["AzureSecret"];
    var resourceUrl = "https://management.azure.com/";
    var requestUrl = $"https://login.microsoftonline.com/{tenantId}/oauth2/token";
    // In a real-world application, please use a typed HttpClient from ASP.NET Core DI.
    var httpClient = new System.Net.Http.HttpClient();
    var dict = new Dictionary<string, string>
    {
        { "grant_type", "client_credentials" },
        { "client_id", clientId },
        { "client_secret", secret },
        { "resource", resourceUrl }
    };
    var requestBody = new System.Net.Http.FormUrlEncodedContent(dict);
    var response = httpClient.PostAsync(requestUrl, requestBody).Result;
    if (response != null)
    {
        response.EnsureSuccessStatusCode();
        string responseContent = response.Content.ReadAsStringAsync().Result;
        if (!string.IsNullOrEmpty(responseContent))
        {
            var TokenResponse = JsonConvert.DeserializeObject<AzureTokenResponseModel>(responseContent);
            if (TokenResponse != null)
            {
                AccessToken += TokenResponse?.access_token;
            }
        }
    }
    return AccessToken;
}
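The snippet assumes an AzureTokenResponseModel type for the token response; a minimal sketch matching the standard OAuth2 token fields might look like this (only access_token is used above):

    public class AzureTokenResponseModel
    {
        public string token_type { get; set; }
        public string expires_in { get; set; }
        public string access_token { get; set; }
    }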
// Create an Azure resource group by calling the API (uses RestSharp).
var clientResourceGroup = new RestClient($"https://management.azure.com/subscriptions/{SubscriptionID}/resourcegroups/{ResourceGroupName}?api-version=2022-05-01");
clientResourceGroup.Timeout = -1;
var requestResourceGroup = new RestRequest(Method.PUT);
requestResourceGroup.AddHeader("Authorization", "Bearer " + GetAzureAccessToken());
requestResourceGroup.AddHeader("Content-Type", "application/json");
// The resource group name comes from the URL; location is the only required body field.
var bodyResourceGroup = "{\"location\":\"" + AzLocation + "\"}";
requestResourceGroup.AddParameter("application/json", bodyResourceGroup, ParameterType.RequestBody);
IRestResponse responseResourceGroup = clientResourceGroup.Execute(requestResourceGroup);
Post down here if you are still facing difficulties. You can create other Azure resources like storage accounts, Functions, App Service, etc. in the same way; see the sketch below.
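For example, a hedged sketch of creating a storage account with the same pattern (the api-version, SKU, and names are illustrative; check the Microsoft.Storage REST reference):

    // PUT the storage account definition; the body carries sku, kind, and location.
    var clientStorage = new RestClient($"https://management.azure.com/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.Storage/storageAccounts/{StorageAccountName}?api-version=2022-09-01");
    var requestStorage = new RestRequest(Method.PUT);
    requestStorage.AddHeader("Authorization", "Bearer " + GetAzureAccessToken());
    requestStorage.AddHeader("Content-Type", "application/json");
    var bodyStorage = "{\"sku\":{\"name\":\"Standard_LRS\"},\"kind\":\"StorageV2\",\"location\":\"" + AzLocation + "\"}";
    requestStorage.AddParameter("application/json", bodyStorage, ParameterType.RequestBody);
    IRestResponse responseStorage = clientStorage.Execute(requestStorage);
    // Creation is asynchronous; poll the same URL with GET until provisioningState is "Succeeded".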

Related

How to flatten a nested JSON structure using Azure Data Factory

I want to flatten my JSON with a nested array object.
For example, my current JSON from Cosmos DB is:
[
    {
        "id": "",
        "name": "",
        "type": "",
        "Data": [
            {
                "id": "",
                "name": "aaa",
                "value": "100"
            },
            {
                "id": "",
                "name": "bbb",
                "value": "200"
            }
        ]
    }
]
I want to transform it to:
[
    {
        "id": "",
        "name": "",
        "type": "",
        "aaa": "100",
        "bbb": "200"
    }
]
Basically, I want to use values of "Data.name" as key and "Data.value" as value in root structure.
You can achieve this using the Parse JSON and Compose connectors in Logic Apps. After reproducing the scenario, the following flow worked for me (screenshots omitted here).
I initialized variables to retrieve Data.name and Data.value, looped over the Data array to collect each name/value pair, and finally built the whole flattened JSON using the Compose connector. The result matches the desired output above.
Hi, I'm Wayne Wang from the Microsoft for Founders Hub team!
I wrote this script as a .NET 5 function app (isolated worker) with the System.Text.Json 6.0.4 package:
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Text.Json.Nodes;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Logging;

namespace FunctionApp2
{
    public static class Function1
    {
        [Function("Function1")]
        public static HttpResponseData Run([HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequestData req,
            FunctionContext executionContext)
        {
            var logger = executionContext.GetLogger("Function1");
            logger.LogInformation("C# HTTP trigger function processed a request.");
            var stringInput = @"[
              {
                ""id"": """",
                ""name"": """",
                ""type"": """",
                ""Data"": [
                  {
                    ""id"": """",
                    ""name"": ""aaa"",
                    ""value"": ""100""
                  },
                  {
                    ""id"": """",
                    ""name"": ""bbb"",
                    ""value"": ""200""
                  }
                ]
              }
            ]"; // or you can get it from the POST body
            var jr = JsonNode.Parse(stringInput);
            var jcol = jr.AsArray().Select(arrayItem =>
            {
                var obj = arrayItem.AsObject();
                var rval = new JsonObject();
                // Copy the scalar root properties as-is.
                CopyValue(obj, rval, "id");
                CopyValue(obj, rval, "name");
                CopyValue(obj, rval, "type");
                // Promote each Data item's name/value pair to a root property.
                if (obj.TryGetPropertyValue("Data", out var pnode))
                {
                    var dataArray = pnode.AsArray();
                    foreach (var itemDataObject in dataArray.Select(x => x.AsObject()))
                    {
                        if (itemDataObject.TryGetPropertyValue("name", out var namep))
                        {
                            if (itemDataObject.TryGetPropertyValue("value", out var valuep))
                            {
                                rval.Add(namep.GetValue<string>(), valuep.GetValue<string>());
                            }
                        }
                    }
                }
                return rval;
            });
            var newjr = new JsonArray(jcol.ToArray());
            var response = req.CreateResponse(HttpStatusCode.OK);
            response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
            response.WriteString(newjr.ToJsonString());
            return response;
        }

        private static void CopyValue(JsonObject from, JsonObject to, string propName)
        {
            if (from.TryGetPropertyValue(propName, out var pnode))
            {
                to.Add(propName, pnode.GetValue<string>());
            }
        }
    }
}
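Invoking the function with the sample input returns the flattened array from the question:

    [{"id":"","name":"","type":"","aaa":"100","bbb":"200"}]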

Getting an error response for the update API

I am new to Swagger, and I created a small demo project in Node.js to see how Swagger really works. I created 5 APIs, of which 4 are working perfectly, but with the PUT API I am getting an error. However, when I try it in Postman it works. Please look at the code below.
export let updateUser = async (req: Request, resp: Response) => {
    try {
        const use = await User.findById(req.params.id);
        use.name = req.body.name;
        // use.email = req.body.email;
        const a1 = await use.save();
        resp.json("successfully updated");
    } catch (err) {
        resp.send('Error');
    }
}
This is the route in app.ts that calls the above method:
//put-request
app.put('/user/update/:id',controller.updateUser);
This is the Swagger JSON of the PUT API:
"/user/update/{id}": {
"put": {
"tags": [
"Update-Api"
],
"summary": "Update-user",
"description": "To updatre the particular user",
"operationId": "updateUser",
"consumes": ["application/json"],
"parameters":[
{
"name":"Id",
"in":"path",
"description":"enter the id of the user",
"required":true,
"type":"string"
},
{
"name":"body",
"in":"body",
"description":"Enter the update value",
"required":true,
"$schema": {
"type": "#/definations/User"
}
}
],
"responses": {
"400": {
"description": "Invalid user supplied"
},
"404": {
"description": "User not found"
}
}
}
}
If you paste your API definition into https://editor.swagger.io, it will flag two syntax errors in the PUT operation. Make sure to fix these errors before testing your API.
In the path parameter definition, change "name": "Id" to "name": "id" (lowercase) to match the letter case of the parameter in the path template.
In the body parameter, change $schema to schema.
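Applied to the definition above, the corrected parameters block would look like this (a sketch; it assumes the intended Swagger 2.0 reference is a $ref to #/definitions/User):

    "parameters": [
        {
            "name": "id",
            "in": "path",
            "description": "enter the id of the user",
            "required": true,
            "type": "string"
        },
        {
            "name": "body",
            "in": "body",
            "description": "Enter the update value",
            "required": true,
            "schema": {
                "$ref": "#/definitions/User"
            }
        }
    ]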

Azure AD - No application roles claims in id_token in /authorize login but there are in /token login

I have an application in Azure AD that I would like to log in to. I navigate to an address like this (ending with /authorize):
https://login.microsoftonline.com/{tenant}.onmicrosoft.com/oauth2/v2.0/authorize?client_id={clientId}&redirect_uri={redirectUrl}&response_mode=fragment&response_type=id_token&scope=openid&nonce=dummy&state=12345
But after logging in, I get no application roles in the id_token.
Strangely enough, when I call almost the same address from code (ending with /token):
var values = new Dictionary<string, string>
{
    { "username", email },
    { "password", password },
    { "grant_type", "password" },
    { "scope", "openid" },
    { "client_id", clientId },
    { "client_secret", secret }
};
var content = new FormUrlEncodedContent(values);
var response = await client.PostAsync($"https://login.microsoftonline.com/{tenant}.onmicrosoft.com/oauth2/v2.0/token", content);
it returns the application roles in the id_token.
What is going on? What am I missing?
I just tried both the /authorize and /token endpoints, and the id token always contains:
"roles": [
    "Writer"
],
The id token I got from the /authorize endpoint showed the same claim.
Please make sure you use the same user to log in. Here are the main steps:
1. Add the role in the manifest:
"appRoles": [
{
"allowedMemberTypes": [
"User"
],
"displayName": "Writer",
"id": "d1c2ade8-98f8-45fd-aa4a-6d06b947c66f",
"isEnabled": true,
"description": "Writers Have the ability to create tasks.",
"value": "Writer"
}
]
2. Assign the user to the role; a sketch of doing this programmatically follows.
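Assignment can be done in the portal under Enterprise applications > Users and groups, or via Microsoft Graph. A hedged sketch (the object IDs are placeholders; appRoleId is the role id from the manifest above):

    POST https://graph.microsoft.com/v1.0/users/{userObjectId}/appRoleAssignments
    Content-Type: application/json

    {
        "principalId": "{userObjectId}",
        "resourceId": "{servicePrincipalObjectId}",
        "appRoleId": "d1c2ade8-98f8-45fd-aa4a-6d06b947c66f"
    }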

is it possible to customize the events that a blob within a storage account fires on blob creation?

Is it possible to change the default event that is fired on BlobCreated?
Storage accounts can fire events when blobs are created or deleted.
If you add a new event subscription, you can choose between three schemas: Event Grid Schema, Cloud Events Schema, and Custom Input Schema.
I'd like to be able to use the Custom Input Schema; however, there's no documentation on how to use it.
How do we customize the custom input schema?
The default schema looks something like this:
{
    "topic": "/subscriptions/xxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.Storage/storageAccounts/mystoraccount",
    "subject": "/blobServices/default/containers/xmlinput/blobs/myj.json",
    "eventType": "Microsoft.Storage.BlobCreated",
    "eventTime": "2019-05-20T18:58:28.7390111Z",
    "id": "xxxxxxxxxxxxxxxx",
    "data": {
        "api": "PutBlockList",
        "clientRequestId": "xxxxxxxxxxxxxxxx",
        "requestId": "xxxxxxxxxxxxxxxx",
        "eTag": "0x8D6DD55254EBE75",
        "contentType": "application/json",
        "contentLength": 874636,
        "blobType": "BlockBlob",
        "url": "https://mystoraccount.blob.core.windows.net/xmlinput/myj.json",
        "sequencer": "00000000000000000000000000005FAC0000000000614963",
        "storageDiagnostics": {
            "batchId": "xxxxxxxxxxxxxxxx"
        }
    },
    "dataVersion": "",
    "metadataVersion": "1"
}
I'd like to ONLY return the file name; in this case it is a substring of the subject: myj.json.
How do we customize the event that's being fired?
Desired result:
{
    "filename": "myj.json"
}
Azure Event Grid supports a CustomInputSchema only for custom and event-domain topics. In other words, events from the AEG built-in event sources can be delivered only with the EventGridSchema (the default) or the CloudEventV01Schema.
For your solution, when your consumer needs to subscribe to AEG events with a custom schema, you need to chain the events to a custom topic that has a CustomInputSchema.
A serverless Azure Function or API Management can be used as the integrator for this topic chaining; in my test, an EventGridTrigger function was used.
The integrator is responsible for firing the AEG custom topic endpoint with the custom schema.
The following code snippet shows an example of the EventGridTrigger integrator:
#r "Newtonsoft.Json"
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
static HttpClient client = new HttpClient() { BaseAddress = new Uri (Environment.GetEnvironmentVariable("CustomTopicEndpointEventGrid")) };
public static async Task Run(JObject eventGridEvent, ILogger log)
{
log.LogInformation(eventGridEvent.ToString());
string url = $"{eventGridEvent["data"]?["url"]?.Value<string>()}";
if(!string.IsNullOrEmpty(url))
{
// Fire event
var response = await client.PostAsJsonAsync("", new[] { new { filename = url.Substring(url.LastIndexOf('/') + 1) } });
log.LogInformation(response.ToString());
}
await Task.CompletedTask;
}
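With this in place, the integrator posts a payload in the desired shape to the custom topic, for example:

    [{ "filename": "myj.json" }]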
Note that the CustomInputSchema is still in preview, so to create a custom topic with a custom input schema, follow the docs; the REST API can also be used.
The following is my example of the payload for creating a custom topic with a CustomInputSchema using the REST API:
{
    "location": "westus",
    "tags": {
        "tag1": "abcd",
        "tag2": "ABCD"
    },
    "properties": {
        "inputSchema": "CustomEventSchema",
        "inputSchemaMapping": {
            "properties": {
                "id": {
                    "sourceField": null
                },
                "topic": {
                    "sourceField": null
                },
                "eventTime": {
                    "sourceField": null
                },
                "eventType": {
                    "sourceField": "myEventType",
                    "defaultValue": "BlobCreated"
                },
                "subject": {
                    "sourceField": "mySubject",
                    "defaultValue": "/containers/xmlinput/blobs"
                },
                "dataVersion": {
                    "sourceField": null,
                    "defaultValue": "1.0"
                }
            },
            "inputSchemaMappingType": "Json"
        }
    }
}
Once you have a custom topic with a CustomInputSchema, the delivery schema follows the input schema. If a subscription on this custom topic delivers events with the EventGridSchema, the above mapping is applied to the delivered event.
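Under this mapping, a hedged illustration of how the integrator's payload might arrive at an EventGridSchema subscriber (the generated fields are filled in by Event Grid, and unmapped properties land in data):

    {
        "id": "<generated>",
        "topic": "<custom topic resource id>",
        "subject": "/containers/xmlinput/blobs",
        "eventType": "BlobCreated",
        "eventTime": "<generated>",
        "dataVersion": "1.0",
        "metadataVersion": "1",
        "data": { "filename": "myj.json" }
    }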

Using CloudFormation to deploy a Lambda, including a parameter that the function will have access to

We have an API that will be used to provision certain resources in AWS using CloudFormation. This includes a Lambda function that will send events to S3, with the bucket being configurable. The thing is, we will know the bucket name when we provision the Lambda, not within the Lambda code itself.
As far as I can tell, there is no way to inject the S3 bucket name at provisioning time in the CloudFormation template itself. Is that true?
The only solution I can see is to generate the function code on the fly and embed it into the CloudFormation template. That would prevent us from using any npm dependencies along with the function code. Is there a better option?
So, I realized I had never updated this question with my eventual solution. I ended up embedding a proxy Lambda function into the CloudFormation template, which enabled me to inject template parameters.
Example:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Creates a function to relay messages from a Kinesis instance to S3",
    "Parameters": {
        "S3Bucket": {
            "Type": "String",
            "Description": "The name of the S3 bucket where the data will be stored"
        },
        "S3Key": {
            "Type": "String",
            "Description": "The key of the directory where the data will be stored"
        }
    },
    "Resources": {
        "mainLambda": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "Handler": "index.handler",
                "Description": "Writes events to S3",
                "Role": { "Ref": "LambdaRoleARN" },
                "Runtime": "nodejs4.3",
                "Code": {
                    "S3Bucket": "streams-resources",
                    "S3Key": "astro-bass/${GIT_COMMIT}/lambda/astro-bass.zip"
                }
            }
        },
        "lambdaProxy": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "Handler": "index.handler",
                "Runtime": "nodejs",
                "Code": {
                    "ZipFile": { "Fn::Join": ["", [
                        "var AWS = require('aws-sdk');",
                        "var lambda = new AWS.Lambda();",
                        "exports.handler = function(event, context) {",
                        "event.bundledParams = ['",
                        { "Ref": "S3Bucket" },
                        "','",
                        { "Ref": "S3Key" },
                        "'];",
                        "lambda.invoke({",
                        "FunctionName: '",
                        { "Ref": "mainLambda" },
                        "',",
                        "Payload: JSON.stringify(event, null, 2),",
                        "InvocationType: 'Event'",
                        "}, function(err, data) {",
                        "if(err) {",
                        "context.fail(err);",
                        "}",
                        "context.done();",
                        "});",
                        "};"
                    ]]}
                }
            }
        },
        ...
    },
    ...
}
The proxy function has the parameters (S3 bucket/key) injected into its code, and it invokes the main Lambda with a modified event object. It's a little unorthodox, but it struck me as much cleaner than the other available solutions, such as parsing stack names. It has worked well thus far.
Note that this solution currently only works with the legacy Node.js environment. Not an issue now, but worrisome in terms of the longevity of this solution.
UPDATE:
We ran into limitations with the previous solution and had to devise yet another one. We ended up with an off-label use of the description field to embed configuration values. Here is our Lambda:
'use strict';
var aws = require('aws-sdk');
var lambda = new aws.Lambda({ apiVersion: '2014-11-11' });

// Resolve the function's own configuration once per container, not per invocation.
let promise = lambda.getFunctionConfiguration({ FunctionName: process.env['AWS_LAMBDA_FUNCTION_NAME'] }).promise();

exports.handler = async function getTheConfig(event, context, cb) {
    try {
        let data = await promise;
        cb(null, JSON.parse(data.Description).bucket);
    } catch (e) {
        cb(e);
    }
};
Then, in the description field, you can embed a simple JSON snippet like so:
{
    "bucket": "bucket-name"
}
Moreover, because the promise is created outside the handler, the getFunctionConfiguration request occurs only when the container is spawned, not on each individual Lambda execution.
Not quite the cleanest solution, but the most functional one we've found; a sketch of wiring the description up from the template follows.
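For completeness, a hedged sketch of how the Description property might be populated from a template parameter (assuming an S3Bucket parameter; Fn::Sub substitutes it into the embedded JSON):

    "Description": { "Fn::Sub": "{\"bucket\":\"${S3Bucket}\"}" }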
There is no way of passing parameters to a Lambda function besides the event itself at the moment.
If you are creating a Lambda function with CloudFormation, you could use the following workaround:
Derive the CloudFormation stack name from the Lambda function name.
Use the CloudFormation stack name to access resources or parameters of the stack when executing the Lambda function.
I would suggest doing it like this.
First, create an index.js file and add this code:
var AWS = require('aws-sdk');
const s3 = new AWS.S3();
const https = require('https');

exports.handler = (event, context, callback) => {
    const options = {
        hostname: process.env.ApiUrl,
        port: 443,
        path: '/todos',
        method: 'GET'
    };
    const req = https.request(options, (res) => {
        console.log('statusCode:', res.statusCode);
        console.log('headers:', res.headers);
        res.on('data', (d) => {
            process.stdout.write(d);
        });
    });
    req.on('error', (e) => {
        console.error(e);
    });
    req.end();
};
Zip the index.js file and upload it to an S3 bucket in the same region as your Lambda function.
Then use this CloudFormation template; make sure you specify the correct bucket name.
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "ApiWorkflow",
    "Metadata": {},
    "Parameters": {
        "ApiUrl": {
            "Description": "Specify the api url",
            "Type": "String",
            "Default": "jsonplaceholder.typicode.com"
        }
    },
    "Mappings": {},
    "Conditions": {},
    "Resources": {
        "lambdaVodFunction": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "Code": {
                    "S3Bucket": "lamdba-exec-tests",
                    "S3Key": "index.js.zip"
                },
                "Handler": "index.handler",
                "Role": "arn:aws:iam::000000000:role/BasicLambdaExecRole",
                "Runtime": "nodejs10.x",
                "FunctionName": "ApiWorkflow",
                "MemorySize": 128,
                "Timeout": 5,
                "Description": "Texting Lambda",
                "Environment": {
                    "Variables": {
                        "ApiUrl": { "Ref": "ApiUrl" },
                        "Test2": "Hello World"
                    }
                }
            }
        }
    },
    "Outputs": {
        "ApiUrl": {
            "Description": "Set api url",
            "Value": { "Ref": "ApiUrl" }
        }
    }
}
You can see the environment variables in the template; you can access them in your Node.js Lambda function like this:
process.env.ApiUrl
