I have a custom DSC module which is class based. During the initial sync process, the target machine tries to generate a MOF in C:\Windows\System32\dsc, which results in an error - this causes the initial sync to report as failed, even though all the individual configuration resource tasks show as succeeded. The tasks based on the resource whose MOF was not generated report as succeeded, but in fact have not executed at all.
This is the error:
{
"JobId": "4deeaf52-aa56-11e6-a940-000d3ad04eaa",
"OperationType": "Initial",
"ReportFormatVersion": "2.0",
"ConfigurationVersion": "2.0.0",
"StartTime": "2016-11-14T21:37:14.2770000+11:00",
"Errors": [{
"ErrorSource": "DSCPowershellResource",
"Locale": "en-US",
"Errors": {
"Exception": {
"Message": "Could not find the generate schema file dsc\tBlobSync.1.4.tAzureStorageFileSync.schema.mof.",
"Data": {
},
"InnerException": null,
"TargetSite": null,
"StackTrace": null,
"HelpLink": null,
"Source": null,
"HResult": -2146233079
},
"TargetObject": null,
"CategoryInfo": {
"Category": 6,
"Activity": "",
"Reason": "InvalidOperationException",
"TargetName": "",
"TargetType": ""
},
"FullyQualifiedErrorId": "ProviderSchemaNotFound",
"ErrorDetails": null,
"InvocationInfo": null,
"ScriptStackTrace": null,
"PipelineIterationInfo": []
},
"ErrorCode": "6",
"ErrorMessage": "Could not find the generate schema file dsc\tBlobSync.1.4.tAzureStorageFileSync.schema.mof.",
"ResourceId": "[tAzureStorageFileSync]CDrive"
}],
"StatusData": [],
"AdditionalData": [{
"Key": "OSVersion",
"Value": {
"VersionString": "MicrosoftWindowsNT10.0.14393.0",
"ServicePack": "",
"Platform": "Win32NT"
}
},
{
"Key": "PSVersion",
"Value": {
"CLRVersion": "4.0.30319.42000",
"PSVersion": "5.1.14393.206",
"BuildVersion": "10.0.14393.206"
}
}]
}
I have tried manually generating the MOF and including it in the module, but that didn't help (or perhaps I did it wrong). Even though this is a class-based resource, I added the MOF with the name of the class in a \DSCResources\<className>\<className>.schema.mof file. I note that the one generated in the C:\Windows\System32\dsc folder includes the version number, which mine does not. Perhaps that's the problem.
After the failed initial sync, the subsequent consistency check does pass, and the MOF is created at the location mentioned in the error message.
The class itself contains a function that calls Import-Module Azure.Storage. That module is installed on the machine by a different DSC resource, and has been installed by the time the consistency check runs, but (obviously) not at the point the initial sync starts. The resource that installs the module is marked as a dependency of the class-based resource in the configuration, but I think MOF generation must happen when the modules are deployed, which is logically before the initial sync has run.
At least that's what I think is happening.
I would be grateful if anyone could advise me on what can be done in this instance, and whether my assumptions (above) are correct. I can't seem to get any additional errors or telemetry from the MOF compilation process itself to see why it is failing.
Posting an answer as nobody really had anything to contribute here and I appear to have solved it on my own. I believe the issue is a matter of timing. The DSC dependent modules are delivered from the pull server and compiled before any of them are executed. The dependency of my class module on Azure.Storage meant that the .psm1 file couldn't be compiled (since the module didn't exist on the machine yet - it would be delivered via a DSC resource at a later time).
Perhaps there is some mechanism that accounts for these dependencies in PS-based modules, or there is some leniency applied that isn't the case for class-based resources. That's still not clear.
After some experimentation, I have begun creating and shipping the MOF files alongside the .psm1 and .psd1 files, rather than in the DSCResources... child folder as outlined in my question, and this appears to have resolved the issue.
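For reference, a sketch of the module layout that ended up working for me (the module and class names are taken from the error above; the versioned folder reflects how the pull server deploys the module and is an assumption, so adjust for your own module):
tBlobSync\
    1.4\
        tBlobSync.psd1
        tBlobSync.psm1
        tAzureStorageFileSync.schema.mof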
Hopefully this helps someone.
We get an error when deploying our Logic App with Azure DevOps. I can't explain why this error suddenly started occurring.
Has anyone seen this error message before?
InvalidRequestContent:
Request content contains one or more instances of unsupported reference property names ($id, $ref, $values) creating ambiguity in paths 'properties.definition.actions.Parse_JSON.inputs.schema.properties.caseId.$ref,properties.definition.actions.Parse_JSON.inputs.schema.properties.integrationId.$ref'.
Please remove the use of reference property names and try again.
Our Logic App contains the following Parse JSON code. Apparently the reference "#/definitions/nonEmptyString" is used twice.
"caseId": {
"$ref": "#/definitions/nonEmptyString",
"type": "string"
},
I reproduced the issue on my end and got the expected results.
The issue is with $ref, which is not supported by Azure Logic Apps, as the error message indicates.
I created a logic app and used the sample Parse JSON schema from your question:
{
"caseId": {
"$ref": "#/definitions/nonEmptyString",
"type": "string"
}
}
Using $ref, I got the same error as shown below:
Failed to save logic app parselp. Request content contains one or more instances of unsupported reference property names ($id, $ref, $values) creating ambiguity in paths 'properties.definition.actions.Parse_JSON.inputs.schema.caseId.$ref'. Please remove the use of reference property names and try again.
Then I removed the $ and used ref instead in the Parse JSON schema; the logic app saved successfully without the error, and the workflow ran successfully.
I have fixed the problem by changing the following code
"definitions":{
"nonEmptyString":{
"minLength":1,
"type":"string"
}
},
"properties":{
"caseId":{
"$ref":"#/definitions/nonEmptyString",
"type":"string"
}
to this code
"properties":{
"caseId":{
"minLength":1,
"type":"string"
}
Maybe the problem was simply that my old solution defined "type": "string" twice. But I have not tested that yet.
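For completeness, the full schema after inlining the definition would look roughly like this (a sketch only; untested, as noted above):
{
    "type": "object",
    "properties": {
        "caseId": {
            "minLength": 1,
            "type": "string"
        }
    }
}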
I'm trying to perform a Patient level bulk export:
"Endpoint - All Patients Export a detailed set of FHIR resources of diverse resource types pertaining to all patients. [fhir base]/Patient/$export
"
I have a FHIR server running on a Smile CDR instance, loaded with some basic data generated with the Synthea tool. I've generated 11 patients plus the related resources.
Resources loaded in the database are:
"AllergyIntolerance", "Bundle", "CarePlan", "CareTeam", "Claim", "Condition", "Coverage", "DiagnosticReport","DocumentReference", "Encounter", "ExplanationOfBenefit", "ImagingStudy", "Immunization", "Location", "Medication","MedicationAdministration", "MedicationRequest", "Observation", "Organization","Patient", "Practitioner","PractitionerRole","Procedure", "Provenance", "ServiceRequest"
When I request a resource export (Patient, Practitioner, Organization) the bulk export works:
http://localhost:8000/$export?_type=Organization
{
"resourceType": "Organization",
"id": "1633",
"meta": {
"versionId": "1",
"lastUpdated": "2021-11-12T20:42:45.627+00:00",
"source": "#HJck1YaOzVjNjBTA",
"profile": [
"http://hl7.org/fhir/us/core/StructureDefinition/us-core-organization"
]
},....
}
Now, the patient-level export generates a status job with no results at all. First I launch the bulk job with:
http://localhost:8000/Patient/$export
and then I ask for the job status with the provided URL:
http://localhost:8000/$export-poll-status?_jobId=4aaadbc9-fbe8-44e1-b631-9335fc1c2712
And the response is always the same, with no results at all (I can see in the logs that the job is completed).
{
"transactionTime": "2021-12-01T19:37:46.341+00:00",
"request": "/Patient/$export?_outputFormat=application%2Ffhir%2Bndjson"
}
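For reference, this is roughly how I am driving the kick-off/poll flow described in the FHIR Bulk Data spec (a TypeScript sketch; the base URL is my local instance and authentication is omitted for brevity):
// Kick off a patient-level bulk export and poll the status URL.
const base = 'http://localhost:8000';

async function bulkExportPatients(): Promise<void> {
  // Kick-off request: the Bulk Data spec requires Prefer: respond-async.
  const kickoff = await fetch(`${base}/Patient/$export`, {
    headers: {
      'Accept': 'application/fhir+json',
      'Prefer': 'respond-async',
    },
  });
  // The polling URL comes back in the Content-Location header (HTTP 202).
  const pollUrl = kickoff.headers.get('Content-Location');
  if (!pollUrl) throw new Error(`No Content-Location header (status ${kickoff.status})`);

  // Poll until the server returns 200 with the completion manifest.
  while (true) {
    const poll = await fetch(pollUrl);
    if (poll.status === 200) {
      const manifest = await poll.json();
      // A successful export lists the NDJSON files in manifest.output;
      // in my case that array is missing entirely.
      console.log(manifest.output ?? 'no output entries');
      return;
    }
    if (poll.status !== 202) throw new Error(`Export failed: ${poll.status}`);
    await new Promise((r) => setTimeout(r, 5000)); // wait before re-polling
  }
}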
By reading the documentation, I think the issue is related to the bulk export permissions. In FHIR_OP_INITIATE_BULK_DATA_EXPORT, I've configured "Patient" as the permission, but no matter what value I put there, the behavior is the same (I mean, the resource export works, but not the patient-level export).
I would like to understand what I should configure on the FHIR_OP_INITIATE_BULK_DATA_EXPORT permission and on the other ones (FHIR_OP_INITIATE_BULK_DATA_EXPORT_GROUP, FHIR_OP_INITIATE_BULK_DATA_EXPORT_PATIENT, FHIR_OP_INITIATE_BULK_DATA_EXPORT_SYSTEM) to allow a user to download everything, like a super user.
Is it possible to perform custom validation on two parameters and ensure they are equal?
I want to have something like password and password_confirm that must be equal before deploying any of the resources.
Yeah, you can hack something like that: create a resource that is designed to fail, make all the other resources depend on it, and put this condition on the failing resource:
"condition": "[not(equals(parameters('password'), parameters('password_confirm')))]"
That way, if the passwords are not equal, the fake resource starts deploying and blows up (make sure you code it to blow up), and nothing else gets deployed.
Now that I think of it, instead of creating an extra resource, just put a condition on all of the resources in the template:
"condition": "[equals(parameters('password'), parameters('password_confirm'))]"
That way they will only get deployed if the passwords match, and you won't get a failure.
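For illustration, a minimal sketch of a conditioned resource (the storage account type, name parameter, and API version are placeholders, not from the question):
{
    "condition": "[equals(parameters('password'), parameters('password_confirm'))]",
    "type": "Microsoft.Storage/storageAccounts",
    "apiVersion": "2021-04-01",
    "name": "[parameters('storageAccountName')]",
    "location": "[resourceGroup().location]",
    "sku": { "name": "Standard_LRS" },
    "kind": "StorageV2"
}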
Another option would be to add a parameter to do the validation... this is simpler but not as robust because the user could override the defaultValue of the parameter:
"validatePasswords": {
"type": "bool",
"allowedValues": [
true
],
"defaultValue": "[equals(parameters('password'), parameters('password_confirm'))]",
"metadata": {
"description": "Check to see if the 2 passwords match."
}
},
Putting a condition on each resource will work (and is harder to fool), but the deployment may succeed even though nothing is deployed.
I have created a virtual assistant using the Microsoft Virtual Assistant template. When testing in the emulator, whatever message I send, I get a 'something went wrong' reply.
I am new to the entire Bot Framework ecosystem and it is becoming very difficult to proceed.
In the log, what I can see is:
[11:26:32] Emulator listening on http://localhost:65233
[11:26:32] ngrok not configured (only needed when connecting to remotely hosted bots)
[11:26:32] Connecting to bots hosted remotely
[11:26:32] Edit ngrok settings
[11:26:32] POST 201 directline.startConversation
[11:26:39] <- message application/vnd.microsoft.card.adaptive
[11:26:39] POST 200 conversations.replyToActivity
[11:26:54] -> message hi
[11:26:55] <- trace The given key 'en' was not present in the dictiona...
[11:26:55] POST 200 conversations.replyToActivity
[11:26:55] <- trace at System.Collections.Generic.Dictionary`2.get_...
[11:26:55] POST 200 conversations.replyToActivity
[11:26:55] <- message Sorry, it looks like something went wrong.
[11:26:55] POST 200 conversations.replyToActivity
[11:26:55] POST 200 directline.postActivity
[11:27:48] -> message hello
[11:27:48] <- trace The given key 'en' was not present in the dictiona...
[11:27:48] POST 200 conversations.replyToActivity
[11:27:48] <- trace at System.Collections.Generic.Dictionary`2.get_...
[11:27:48] POST 200 conversations.replyToActivity
[11:27:48] <- message Sorry, it looks like something went wrong.
[11:27:48] POST 200 conversations.replyToActivity
[11:27:48] POST 200 directline.postActivity
From what I understood, the key 'en' is not present in a dictionary, and I am not sure what that means. I checked in the Responses folder and could not see an en file; not sure if that is the issue.
My emulator screenshot is attached:
Any help would be useful.
I believe the issue you are experiencing is a problem on the following lines inside MainDialog.cs:
var locale = CultureInfo.CurrentUICulture.TwoLetterISOLanguageName;
var cognitiveModels = _services.CognitiveModelSets[locale];
This tries to use the locale (retrieved from the current thread as per this documentation) as the key to access the cognitive models in your cognitivemodels.json file.
Inside your cognitivemodels.json file it should look like:
{
"cognitiveModels": {
// This line below here is what could be missing/incorrect in yours
"en": {
"dispatchModel": {
"type": "dispatch",
"region": "westus",
...
},
"knowledgebases": [
{
"id": "chitchat",
"name": "chitchat",
...
},
{
"id": "faq",
"name": "faq",
...
}
],
"languageModels": [
{
"id": "general",
"name": "msag-test-va-boten_general",
"region": "westus",
...
}
]
}
},
"defaultLocale": "en-us"
}
The en key inside the cognitiveModels object is what the code is trying to use to retrieve your cognitive models; thus, if the locale pulled out in the code doesn't match the locale keys in your cognitivemodels.json, you will get the dictionary key error.
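If it isn't obvious which side is wrong, you can make the failure explicit by guarding the lookup (a sketch only; it assumes the same _services field used in MainDialog.cs above):
var locale = CultureInfo.CurrentUICulture.TwoLetterISOLanguageName;
if (!_services.CognitiveModelSets.TryGetValue(locale, out var cognitiveModels))
{
    // Assumption: CognitiveModelSets is a Dictionary keyed by locale string.
    // This surfaces the mismatched locale instead of a bare KeyNotFoundException.
    throw new InvalidOperationException(
        $"No cognitive model set configured for locale '{locale}'. " +
        "Check the locale keys in cognitivemodels.json.");
}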
EDIT
The issue the OP had was a failed deploy. The steps we took were to:
Check the deploy_log.txt inside the Deployment folder for errors. In this case it was empty - not a good sign.
Check the deploy_cognitive_models_log.txt inside the Deployment folder for errors. There was an error present: Error: Cannot find module 'C:\Users\dip_chatterjee\AppData\Roaming\npm\node_modules\botdispatch\bin\dispatch.js'.
To fix this error we reinstalled all of the required npm packages as per step 5 of this guide, then ran the deploy script as per this guide.
Given the URL https://github.com/foo/bar, I want to be able to get all the repos for foo. If foo is a user, I need to call api.repos.getForUser({username: 'foo'}) and if an org, I'll need to call api.repos.getForOrg({org: 'foo'}).
My problem is: how can I tell if "foo" is an org or a user?
Right now, I "solve" it in the costly way of trying to get an org called "foo", if I got it, I try to get its repos; if I end up getting an exception (I use promises) and if the code of the exception is "404", I assume "foo" is a user, and try to get user repos.
This is obviously inefficient, and has the side effect of adding calls that may trigger rate limit.
Is there an easier way to know whether "foo" is a user or an org?
As we all know, handling exceptions is costly. So instead of trying to get an org and handling the 404, you could look at the type property of the response from https://api.github.com/users/<username> to determine whether the "user" is a user or an organization, and then proceed accordingly.
For example, a call to my GitHub user API, https://api.github.com/users/raghav710, returns
{
"login": "raghav710",
...
"type": "User",
...
}
And a call to an organization like https://api.github.com/users/Microsoft returns
{
"login": "Microsoft",
...
"type": "Organization",
...
}
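Putting that together, a minimal sketch (plain fetch against the public REST API, unauthenticated, so rate limits apply; api stands in for your existing client from the question):
// "api" is the GitHub client from the question (e.g. node-github).
declare const api: any;

async function getAllRepos(name: string) {
  // One lightweight call to classify the account.
  const res = await fetch(`https://api.github.com/users/${name}`);
  if (!res.ok) throw new Error(`GitHub returned ${res.status} for ${name}`);
  const { type } = await res.json(); // "User" or "Organization"
  return type === 'Organization'
    ? api.repos.getForOrg({ org: name })
    : api.repos.getForUser({ username: name });
}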
Update: Doing it in a single call
I understand that you are already trying to access a URL https://github.com/<user or organization>/<repo name> and therein trying to get all the repos of that user or organization.
A suggestion is, instead of trying to do a GET on the above link, you could do a GET on https://api.github.com/repos/<user or organization>/<repo name>
For example, doing a GET on https://api.github.com/repos/Microsoft/vscode gives
{
"id": 41881900,
"name": "vscode",
"full_name": "Microsoft/vscode",
"owner": {
"login": "Microsoft",
"id": 6154722,
...
"type": "Organization",
},
"private": false,
"html_url": "https://github.com/Microsoft/vscode",
...
}
As you can see, the familiar type field is available under the owner key. You can use this to decide your next call.
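In code, that single-call check might look like this (again a sketch using plain fetch):
async function ownerType(owner: string, repo: string): Promise<'User' | 'Organization'> {
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`);
  if (!res.ok) throw new Error(`GitHub returned ${res.status}`);
  const body = await res.json();
  return body.owner.type; // same "type" field, no extra classification call
}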
So even if there were a dedicated API call to differentiate, you would still need that call to determine whether it's a user or not.
Considering that most repositories belong to users, why don't you reverse the checks? Try to retrieve the user's repositories first, and if that fails, get the organization's repositories.
In addition, I'd cache as much information as possible. Don't retrieve everything in real time while users use your tool; many things won't change that often, if they can be changed at all.