ARM Deployment Error: The request content was invalid and could not be deserialized: 'Cannot deserialize the current JSON array' - azure

I have gone through similar previous posts and was not able to find any solution for my situation, so I am asking again. Please consider.
I am trying to deploy an Azure Policy using ARM templates. So, I have created:
1- Policy Definition File
2- Policy Parameter File
3- PowerShell script, run with both the policy and parameter files as input.
But when I try to deploy, I am getting the error shown in the attached screenshot. The "policyParameters" are being passed as an Object type; the problem seems to reside there. It would be great if you could look at the attached screenshot and advise.
Also, the PowerShell script output shows the expected values, I think, but reports "ProvisioningState : Failed".
Thanks,
[Attached screenshots: policy file, error output, parameter file, JSON part 1, JSON part 2]

You have to create a variable for policyParameters:
"variables": {
"policyParameters": {
"policyDefinitionId": {
"defaultValue": "[parameters('policyDefinitionId')]",
"type": "String"
},
...
This variable has to be passed to your parameters:
"parameters": "[variables('policyParameters')]",
You can find a sample here:
Configure Azure Diagnostic Settings with Azure Policies
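For illustration, here is a minimal sketch of where that variable could then be consumed, assuming the policy is deployed as a Microsoft.Authorization/policyDefinitions resource as in the linked sample (the policyName and policyRule parameters are placeholders, not taken from your files):
"resources": [
    {
        "type": "Microsoft.Authorization/policyDefinitions",
        "apiVersion": "2019-09-01",
        "name": "[parameters('policyName')]",
        "properties": {
            "policyType": "Custom",
            "mode": "All",
            // the whole parameters object comes from the variable shown above
            "parameters": "[variables('policyParameters')]",
            // placeholder: your actual policy rule from the definition file
            "policyRule": "[parameters('policyRule')]"
        }
    }
]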

Related

Azure environment variable not readable in NLog config

I deployed a WebJob onto Azure (under the home\site\wwwroot\App_Data\jobs\triggered directory). This application contains NLog logging, which is configured in the appsettings and uses an environment variable for the logfile path:
"NLog": {
"throwConfigExceptions": true,
"targets": {
"logfile": {
"type": "File",
"fileName": "${environment:variable=DEPLOYMENT_SOURCE}\\LogFiles\\timer-${shortdate}.log",
"layout": "${message} "
},
The DEPLOYMENT_SOURCE environment variable contains a valid path when displayed in Kudu:
echo %DEPLOYMENT_SOURCE%
C:\home
But NLog does not seem to be able to resolve that environment variable. When enabling trace logging I receive the following error message:
Debug Creating file appender: C:\LogFiles\timer-2020-11-13.log
Trace Opening C:\LogFiles\timer-2020-11-13.log with allowFileSharedWriting=False
Error FileTarget(Name=logfile): Failed write to file 'C:\LogFiles\timer-2020-11-13.log'. Exception: System.UnauthorizedAccessException: Access to the path 'C:\LogFiles\timer-2020-11-13.log' is denied.
So it seems like DEPLOYMENT_SOURCE is simply an empty string.
When testing this locally, though, with a valid Windows environment variable like %TEMP%, everything works fine.
What has to be done to access Azure environment variables in .NET Core apps / NLog config?
I solved this issue.
The problem is that triggered WebJobs on Azure do NOT have the same environment variables available as the Kudu console.
So while Kudu displays environment variables like DEPLOYMENT_SOURCE, this variable is not available to WebJobs.
But there are other environment variables (in this case "HOME", as Rolf already mentioned in the comments) that also point to C:\home on Azure (D:\home in the past).
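For reference, a minimal sketch of the same target switched over to HOME (keeping the LogFiles subfolder from the original config; adjust the path to your layout):
"NLog": {
    "throwConfigExceptions": true,
    "targets": {
        "logfile": {
            "type": "File",
            // HOME points to C:\home on current App Service workers (D:\home historically)
            "fileName": "${environment:variable=HOME}\\LogFiles\\timer-${shortdate}.log",
            "layout": "${message} "
        }
    }
}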

Error "BadRequest" when calling Azure Function in ADF

I am creating an extensive Data Factory workflow that will create and fill a data warehouse for multiple customers automatically; however, I'm running into an error. I am going to post the questions first, since the remaining info is a bit long. Keep in mind I'm new to Data Factory and JSON coding.
Questions & comments
How do I correctly pass the parameter through to an Execute Pipeline activity?
How do I add said parameter to an Azure Function activity?
The issue may lie with correctly passing the parameter through, or it may lie in picking it up - I can't seem to determine which. If you spot an error with the current setup, don't hesitate to let me know - all help is appreciated.
The Error
{
    "errorCode": "BadRequest",
    "message": "Operation on target FetchEntries failed: Call to provided Azure function '' failed with status-'BadRequest' and message - '{\"Message\":\"Please pass 'customerId' on the query string or in the request body\"}'.",
    "failureType": "UserError",
    "target": "ExecuteFullLoad"
}
The Setup:
The whole setup starts with a function call to get new customers from an online economic platform. It then writes them to a SQL table, from which they are processed and loaded into the final table, after which a new pipeline is executed. This process works perfectly. From there the following pipeline is executed:
As you can see, it all works well until the ForEach loop tries to execute another pipeline that contains an Azure Function, which calls a .NET scripted function that fills said warehouse (complex, I know). This Azure Function needs a customerId to retrieve tokens and load the data into the warehouse. I'm trying to pass those customer IDs from the InternalCustomerID lookup through the ForEach into the pipeline and into the function. The ForEach actually works, but fails "because an inner activity failed".
The Execute Pipeline activity contains the following settings, where I'm trying to pass through the parameter that comes from the ForEach loop. This part of the process also works, since it executes twice (as it should in this test phase):
I don't know whether it fails to pass the parameter through or fails at adding it to the body of the Azure Function.
The child pipeline (FullLoad) contains the following parameters. I'm not sure if I should set a default value to be overwritten or how that actually works. The guides I've looked at on the internet haven't had a default value.
Finally, there are the settings for the Azure Function activity. I'm not sure what I need to write in order to correctly capture the parameter and/or what to fill in - whether it's the header or the body, given the error message. I know a POST cannot be executed without a body.
If I run this specific function by hand (using the Function App part of portal.azure.com) it works fine, using the following settings:
I looked through all of your detailed question and I think the key to the issue is the format of the Azure Function request body.
I'm afraid yours is incorrect. Please see my steps below, based on your description:
Workflow:
Inside the ForEach activity, there is only one Azure Function activity:
The preview data of the Lookup activity:
Then the configuration of the ForEach activity: @activity('Lookup1').output.value
The configuration of the Azure Function activity: @json(concat('{"name":"',item().name,'"}'))
From the Azure Function, I simply echo back the input data. Sample output is below:
Tips: I saw that your setup executes the Azure Function in another pipeline via an Execute Pipeline activity (I don't know why you have to follow such steps), but I think it doesn't matter, because you only need to focus on the Body format: if the acceptable format is JSON, you could use @json(....); if the acceptable format is String, you could use @concat(....). Besides, you could check the sample from the ADF UI portal which uses pipeline().parameters.
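As a sketch of how that could look for your case (the function name FullLoad, the linked service name, and the child-pipeline parameter name customerId are assumptions based on your error message, not taken from your screenshots):
{
    "name": "FetchEntries",
    "type": "AzureFunctionActivity",
    "description": "Sketch only - function name, linked service and parameter name are assumptions",
    "typeProperties": {
        "functionName": "FullLoad",
        "method": "POST",
        "body": {
            "value": "@json(concat('{\"customerId\":\"', pipeline().parameters.customerId, '\"}'))",
            "type": "Expression"
        }
    },
    "linkedServiceName": {
        "referenceName": "AzureFunctionLinkedService",
        "type": "LinkedServiceReference"
    }
}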

ARM template functions not working with default values

I am deploying a bunch of resources from an ARM template. I am trying to make the resource names unique by using "[uniqueString(subscription().subscriptionId)]". I have the templates hosted in GitHub and I am trying to deploy using the Deploy to Azure button, but the site just shows the plain string with the function and not the value. Any ideas would be appreciated.
By the way, here's my code:
"parameters": {
"functionAppName": {
"type": "string",
"metadata": {
"description": "Name of the function app"
},
"defaultValue": "[concat('asfnapp',uniqueString(resourceGroup().id))]"
}
}
I have set up the rest of the parameters in the same way.
Edit : Added repository URL - GITHUB
Ok, I thought you were referring to one of the templates in the QuickStart repo - they all (by default) go through this UX: https://ms.portal.azure.com/#create/Microsoft.Template
It looks like you're not using that UX - and I suspect that what you're using does not handle expressions in parameter default values (it just assumes they are strings). So there's nothing you can do to fix that (your template is fine).
A workaround would be to use this:
https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fhariharan618%2Ftest%2Fmaster%2Fazuredeploy.json

Does chrome.storage.managed policy work for self-hosted extensions?

I am trying to configure an extension using the method described at Manifest for storage areas. I'm pretty sure I have everything set up correctly, but I am not seeing the policy value in chrome://policy (it is shown as Not set) and, obviously, there is no policy seen from
chrome.storage.managed.get(null,(d)=>{console.log(d)});
I've checked my schema and the config file I uploaded in the Admin Panel against https://www.jsonschemavalidator.net and they seem to match. It's very simple; schema.json is
{
    "type": "object",
    "properties": {
        "PolicyTest": {
            "type": "string"
        }
    }
}
and in the JSON file:
{
    "PolicyTest": "test"
}
Before I spend a bunch of time debugging this, I thought I would quickly ask: could this be because the extension I am configuring is not hosted on the Chrome Web Store? I host it myself using the method described at Autoupdating.
Other than that, I'm not really sure why this isn't working - the device running Chrome is Linux, although I have also checked on a managed Chromebook, and I've checked things like making sure I've selected the right OU, refreshing the policy, and so on.
OK, I figured it out by referring to the Chromium documentation. To answer the question in my title: yes, it works fine for extensions not hosted on the Web Store; I just had the wrong format for my options.
Basically, my JSON file didn't actually match the required format. For the schema I posted in my question, you actually need this:
{
    "PolicyTest": {
        "Value": "test"
    }
}
Basically, each of your properties needs to be an object with a Value field. The core sentence that I missed is: "The txt file should contain a valid JSON object, mapping a policy name to an object describing the policy."
Mea culpa, I should have read the documentation more carefully. It's quite frustrating that the official Chrome extension developer documentation doesn't have a simple example of a schema and a matching config file, since there is no UI feedback if your format is wrong.
I note also that there was some plan to publish a tool to build a template from a schema. I guess that never happened; it would be useful too.
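To make the mapping concrete, here is a slightly larger example of a schema and a matching policy file that follows that rule (the extra DebugMode property is purely illustrative, not part of the original setup):
schema.json:
{
    "type": "object",
    "properties": {
        "PolicyTest": { "type": "string" },
        "DebugMode": { "type": "boolean" }
    }
}
policy file uploaded in the Admin Panel:
{
    "PolicyTest": { "Value": "test" },
    "DebugMode": { "Value": true }
}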

Issues deploying dscExtension to Azure VMSS

I've been having some issues deploying a dscExtension to an Azure virtual machine scale set (VMSS) using a deployment template.
Here's how I've added it to my template:
{
    "name": "dscExtension",
    "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "2.9",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "ModulesUrl": "[concat(parameters('_artifactsLocation'), '/', 'MyDscPackage.zip', parameters('_artifactsLocationSasToken'))]",
            "ConfigurationFunction": "CmvmProcessor.ps1\\CmvmProcessor",
            "Properties": [
                {
                    "Name": "ServiceCredentials",
                    "Value": {
                        "UserName": "parameters('administratorLogin')",
                        "Password": "parameters('administratorLoginPassword')"
                    },
                    "TypeName": "System.Management.Automation.PSCredential"
                }
            ]
        }
    }
}
The VMSS itself is successfully deploying, but when I browse the InstanceView of the individual VMs, the dscExtension shows the failed status with an error message.
The problems I'm having are as follows:
The ARM deployment does not try to update the dscExtension upon redeploy. I am used to MSDeploy web app extensions, where the artifacts are updated and the code is redeployed on each new deployment. I do not know how to force it to update the dscExtension with new binaries. In fact, it only seems to give an error on the first deploy of the VMSS; after that it won't even try again.
The error I'm getting is for old code that doesn't exist anymore.
I had a bug previously in a custom DSC PowerShell script where I tried to use the -replace operator, which is supposed to create a $Matches variable, but it was saying $Matches didn't exist.
In any case, I've since refactored the code, deleted the entire resource group, and redeployed. The dscExtension is still giving the same error. I've verified that the blob storage account where my DSC .zip is located no longer contains the code capable of producing this error message. Azure must be caching the dscExtension somewhere. I can't get it to use the new blob .zip that I upload before each deployment.
Any insight into the DSC Extension and how to force it to update on deploy?
It sounds like you may be running into multiple things here, so let's try the simple one first. In order to get a VM extension to run on a subsequent deployment, you have to "seed" it (and you're right, this is different from the rest of AzureRM). Take a look at this template:
https://github.com/bmoore-msft/AzureRM-Samples/blob/master/VMDSCInstallFile/azuredeploy.json
There is a property on the DSC extension called:
"forceUpdateTag" : "changeThisToEnsureScriptRuns-maxlength=50",
The property value must be different if you ever want the extension to run again. So for example, if you wanted it to run every time you'd seed it with a random number or a guid. You could also use version numbers if you wanted to version it somehow. The point is, if the value in the template is the same as the one you're passing in, the extension won't run again.
That sample uses a VM, but the VMSS syntax should be the same. That property also applies to other extensions (e.g. custom script).
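As a sketch, applied to the extension from your template it might look like this (the dscUpdateTag parameter is a placeholder; you could default it to "[newGuid()]" or pass in a build number so every deployment gets a new value):
{
    "name": "dscExtension",
    "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "2.9",
        "autoUpgradeMinorVersion": true,
        // placeholder parameter - change the value whenever the extension should run again
        "forceUpdateTag": "[parameters('dscUpdateTag')]",
        "settings": {
            "ModulesUrl": "[concat(parameters('_artifactsLocation'), '/', 'MyDscPackage.zip', parameters('_artifactsLocationSasToken'))]",
            "ConfigurationFunction": "CmvmProcessor.ps1\\CmvmProcessor"
        }
    }
}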
The part that seems odd is that you said you deleted the entire RG and couldn't get it to accept the new package... That sounds bad (i.e. like a bug). If the above doesn't fix it, we may need to dig deeper into the template and script. LMK...
