Virtual Assistant throwing 'Sorry, it looks like something went wrong' - Azure

I have created a virtual assistant using the Microsoft Virtual Assistant template. When testing in the emulator, whatever message I send, I get a 'Sorry, it looks like something went wrong' reply.
I am new to the entire Bot Framework ecosystem and it is becoming very difficult to proceed.
What I can see in the log is:
[11:26:32] Emulator listening on http://localhost:65233
[11:26:32] ngrok not configured (only needed when connecting to remotely hosted bots)
[11:26:32] Connecting to bots hosted remotely
[11:26:32] Edit ngrok settings
[11:26:32] POST 201 directline.startConversation
[11:26:39] <- message application/vnd.microsoft.card.adaptive
[11:26:39] POST 200 conversations.replyToActivity
[11:26:54] -> message hi
[11:26:55] <- trace The given key 'en' was not present in the dictiona...
[11:26:55] POST 200 conversations.replyToActivity
[11:26:55] <- trace at System.Collections.Generic.Dictionary`2.get_...
[11:26:55] POST 200 conversations.replyToActivity
[11:26:55] <- message Sorry, it looks like something went wrong.
[11:26:55] POST 200 conversations.replyToActivity
[11:26:55] POST 200 directline.postActivity
[11:27:48] -> message hello
[11:27:48] <- trace The given key 'en' was not present in the dictiona...
[11:27:48] POST 200 conversations.replyToActivity
[11:27:48] <- trace at System.Collections.Generic.Dictionary`2.get_...
[11:27:48] POST 200 conversations.replyToActivity
[11:27:48] <- message Sorry, it looks like something went wrong.
[11:27:48] POST 200 conversations.replyToActivity
[11:27:48] POST 200 directline.postActivity
From what I understood, the key 'en' is not present in some dictionary, but I am not sure what that means. I checked in the Responses folder and could not see an en file; I am not sure if that is the issue.
My emulator screenshot is attached:
Any help would be useful.

I believe the issue you are experiencing is a problem on the following lines inside MainDialog.cs:
var locale = CultureInfo.CurrentUICulture.TwoLetterISOLanguageName;
var cognitiveModels = _services.CognitiveModelSets[locale];
This tries to use the locale (retrieved from the current thread as per this documentation) as the key to access the cognitive models in your cognitivemodels.json file.
Inside your cognitivemodels.json file it should look like:
{
  "cognitiveModels": {
    // This key below is what could be missing/incorrect in yours
    "en": {
      "dispatchModel": {
        "type": "dispatch",
        "region": "westus",
        ...
      },
      "knowledgebases": [
        {
          "id": "chitchat",
          "name": "chitchat",
          ...
        },
        {
          "id": "faq",
          "name": "faq",
          ...
        }
      ],
      "languageModels": [
        {
          "id": "general",
          "name": "msag-test-va-boten_general",
          "region": "westus",
          ...
        }
      ]
    }
  },
  "defaultLocale": "en-us"
}
The en key inside the cognitiveModels object is what the code is trying to use to retrieve your cognitive models, so if the locale pulled out in the code doesn't match the locale keys in your cognitivemodels.json, you will get the dictionary key error.
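To make the failure mode concrete, here is a minimal sketch in Python (the actual template code is the C# above; the function and variable names here are hypothetical, purely for illustration) of a lookup that falls back to a default locale instead of throwing:

```python
# Illustrative sketch only: the real code is C# in MainDialog.cs, and
# get_models / cognitive_model_sets are hypothetical names.

def get_models(model_sets, locale, default_locale="en"):
    """Return the cognitive model set for `locale`, falling back to
    `default_locale` instead of raising KeyError (the Python analogue
    of 'The given key ... was not present in the dictionary')."""
    if locale in model_sets:
        return model_sets[locale]
    return model_sets[default_locale]

cognitive_model_sets = {
    # This key must match the locale pulled from the current thread.
    "en": {"dispatchModel": {"type": "dispatch", "region": "westus"}},
}
```

The equivalent defensive pattern in the C# code would be a TryGetValue-style check before indexing, but the simplest fix remains making the keys in cognitivemodels.json match the locale the code produces.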
EDIT
The OP's issue was a failed deployment. The steps we took were to:
1. Check the deploy_log.txt inside the Deployment folder for errors. In this case it was empty - not a good sign.
2. Check the deploy_cognitive_models_log.txt inside the Deployment folder for errors. There was an error present: Error: Cannot find module 'C:\Users\dip_chatterjee\AppData\Roaming\npm\node_modules\botdispatch\bin\dispatch.js'.
To fix this error we reinstalled all of the required npm packages as per step 5 of this guide, then ran the deploy script as per this guide.

Related

Azure DevOps Deployment shows InvalidRequestContent: Request content contains one or more instances of unsupported reference property names

We get an error when deploying our Logic App with Azure DevOps.
I can't explain why this error suddenly occurs.
Has anyone seen this error message before?
InvalidRequestContent:
Request content contains one or more instances of unsupported reference property names ($id, $ref, $values) creating ambiguity in paths 'properties.definition.actions.Parse_JSON.inputs.schema.properties.caseId.$ref,properties.definition.actions.Parse_JSON.inputs.schema.properties.integrationId.$ref'.
Please remove the use of reference property names and try again.
Our Logic App contains the following JSON Parse code. Apparently the reference "#/definitions/nonEmptyString" is defined twice.
"caseId": {
  "$ref": "#/definitions/nonEmptyString",
  "type": "string"
},
I reproduced the issue on my end and got the expected results.
The issue is with $ref, which is not supported by Azure Logic Apps, as mentioned in the error you got.
I created a Logic App as shown below, with the sample JSON Parse code taken as per your requirement:
{
  "caseId": {
    "$ref": "#/definitions/nonEmptyString",
    "type": "string"
  }
}
Using $ref, I got the same error, as shown below:
Failed to save logic app parselp. Request content contains one or more instances of unsupported reference property names ($id, $ref, $values) creating ambiguity in paths 'properties.definition.actions.Parse_JSON.inputs.schema.caseId.$ref'. Please remove the use of reference property names and try again.
Then I removed the $ and used ref in the Parse JSON action as shown; the Logic App saved successfully without that error and the workflow ran successfully.
I have fixed the problem by changing the following code:
"definitions": {
  "nonEmptyString": {
    "minLength": 1,
    "type": "string"
  }
},
"properties": {
  "caseId": {
    "$ref": "#/definitions/nonEmptyString",
    "type": "string"
  }
to this code:
"properties": {
  "caseId": {
    "minLength": 1,
    "type": "string"
  }
Maybe the problem was simply that my old solution defined "type": "string" twice. But I have not tested that yet.
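If a schema has many such references, the manual inlining above can be automated. Here is a small Python sketch (a hypothetical helper, not a Logic Apps feature) that replaces local `#/definitions/...` references with the keys of the referenced definition and drops the definitions block:

```python
import copy

def inline_local_refs(schema):
    """Replace local '#/definitions/...' $ref entries with the referenced
    definition's keys, since Logic Apps rejects $id/$ref/$values in
    Parse JSON schemas."""
    definitions = schema.get("definitions", {})

    def resolve(node):
        if isinstance(node, dict):
            ref = node.pop("$ref", None)
            if ref and ref.startswith("#/definitions/"):
                target = definitions[ref.split("/")[-1]]
                # Inline the definition; keys already on the node win.
                for key, value in target.items():
                    node.setdefault(key, copy.deepcopy(value))
            for value in node.values():
                resolve(value)
        elif isinstance(node, list):
            for item in node:
                resolve(item)

    result = copy.deepcopy(schema)
    resolve(result)
    result.pop("definitions", None)
    return result
```

Running this over the original schema produces the inlined form shown above, with "minLength" copied into each property that referenced nonEmptyString.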

Azure Data Factory not interpreting well an array global parameter

We have an Azure Data Factory using global parameters. It's working fine in our Dev environment, but when we try to deploy it to the QA environment using an Azure DevOps pipeline, it seems it's not understanding the only global parameter with type = array, even though all of the other parameters are good.
This is the guide we're using to build the CI/CD pipelines.
We have something similar to this in the Global Parameters JSON file:
{
"FilesToProcess": {
"type": "array",
"value": [
"VALUE01",
"VALUE02",
"VALUE03",
"VALUE04",
"VALUE05",
"VALUE06",
"VALUE07",
"VALUE08",
"VALUE09",
"VALUE10",
"VALUE11",
"VALUE12",
"VALUE13",
"VALUE14",
"VALUE15",
"VALUE16",
"VALUE17",
"VALUE18",
"VALUE19",
"VALUE20",
"VALUE21",
"VALUE22",
"VALUE23",
"VALUE24",
"VALUE25",
"VALUE26",
"VALUE27"
]
},
"EmailLogicAppUrl": {
"type": "string",
"value": "URL"
}
}
All of the parameters are deployed fine, except for the array one, and we're getting this:
We have debugged the PS script that updates the global parameters, and it seems to understand the array correctly, so it has to be something else.
Any help will be highly appreciated.
Thanks!
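As a quick sanity check while debugging issues like this, the declared type of each global parameter can be compared against its actual JSON value. A small Python sketch (a hypothetical helper; the type names are assumed to be the ones used in the global parameters file above) that flags mismatches:

```python
# Map global-parameter type names to the Python types json.load produces.
# These type names are assumptions based on the file shown above.
TYPE_MAP = {
    "string": str,
    "int": int,
    "float": float,
    "bool": bool,
    "array": list,
    "object": dict,
}

def check_global_parameters(params):
    """Return the names of parameters whose declared 'type' does not
    match the Python type of their 'value'."""
    mismatches = []
    for name, entry in params.items():
        expected = TYPE_MAP.get(entry.get("type"))
        if expected is None or not isinstance(entry.get("value"), expected):
            mismatches.append(name)
    return mismatches
```

If the file itself validates cleanly, the problem is more likely in how the deployment pipeline or PowerShell script serializes the array when pushing to the target factory.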

Azure Logic Apps - Moving Email Message with Move Message action

I'm fairly new to Logic Apps and I have an app that I'm trying to get to move an email to a subfolder of the Inbox in a shared mailbox. I'm trying to generate the path based on the date, and I cannot for the life of me get it to work. I don't know if my path syntax is wrong or what.
The subfolder structure is basically
- Inbox
  - 2018
    - Jan
    - Feb
    - Mar
    - Etc
And I'm trying to generate the path based off the year and the month using the Expressions part of a field. I've got an expression that generates the path for me:
concat('Inbox\',formatDateTime(convertFromUtc(utcNow(),'Mountain Standard Time'),'yyyy'),'\',formatDateTime(convertFromUtc(utcNow(),'Mountain Standard Time'),'MMM'))
When the logic app runs this generates the correct path string of Inbox\2018\Jan but when the Move Email action runs it always escapes the backslash and then says it can't find the folder Inbox\\2018\\Jan.
So I either have this format wrong, I can't put the email in a subfolder or there's another way to do this.
I tried using the folder picker to pick one of the month subfolders and then peeked at the code and it uses some base64 encoded string for the path. I've pasted below what the peeked code shows
{
  "inputs": {
    "host": {
      "connection": {
        "name": "@parameters('$connections')['office365']['connectionId']"
      }
    },
    "method": "post",
    "path": "/Mail/Move/@{encodeURIComponent(triggerBody()?['Id'])}",
    "queries": {
      "folderPath": "Id::AAMkADRmOTgyMDI1LThkODYtNDMwYy1iYThiLTIzODQwN2Y1OGMzYQAuAAAAAAA6K3dJssnITb8NwkAsBOo7AQBaJ9ZTcg-MSoOEUUjjUdOAAAAD0nvYAAA="
    },
    "authentication": "@parameters('$authentication')"
  },
  "metadata": {
    "Id::AAMkADRmOTgyMDI1LThkODYtNDMwYy1iYThiLTIzODQwN2Y1OGMzYQAuAAAAAAA6K3dJssnITb8NwkAsBOo7AQBaJ9ZTcg-MSoOEUUjjUdOAAAAD0nvYAAA=": "Jan"
  }
}
Does anyone know how I would be able to move an email to a subfolder without using the folder picker?
Edit: Since posting I've also tried using the following strings that also do not work
Inbox/2018/Jan
Inbox:/2018/Jan
/Inbox/2018/Jan
You can't really have the path in terms of a hierarchical folder structure in this particular Logic App action.
If you look at the documentation for Office 365 Mail REST operations at
https://msdn.microsoft.com/office/office365/api/mail-rest-operations#MoveCopyMessages
you will notice that to move messages what you actually need is a folder ID. Also, if you select a folder directly in the Logic App designer and then look at the code view, you will see an ID. It looks something like:
"method": "post",
"path": "/Mail/Move/@{encodeURIComponent(triggerBody()?['Id'])}",
"queries": {
  "folderPath": "Id::AAMkADZmZDQ5OWNhLTU3NzQtNDRlZC1iMDRlLTg5NTA1NGM3NWJlZgAuAAAAAAAhZj7Qt8LySYhKvlgbXRNVAQBT8bGPBJK8Qqoy01hgwH4rAAEJysaQAAA="
}
},
"metadata": {
  "Id::AAMkADZmZDQ5OWNhLTU3NzQtNDRlZC1iMDRlLTg5NTA1NGM3NWJlZgAuAAAAAAAhZj7Qt8LySYhKvlgbXRNVAQBT8bGPBJK8Qqoy01hgwH4rAAEJysaQAAA=": "Jan"
},
The folder ID is unique to every folder. One easy way to find the folder ID for a folder is to use
https://developer.microsoft.com/en-us/graph/graph-explorer#
and, after signing in, post
https://graph.microsoft.com/beta/me/mailFolders/Inbox/childFolders
as the query, which will give you the child folders of the Inbox. The values will look something like the following for every folder:
"value": [
  {
    "id": "AAMkADZmZDQ5OWNhLTU3NzQtNDRlZC1iMDRlLTg5NTA1NGM3NWJlZgAuAAAAAAAhZj7Qt8LySYhKvlgbXRNVAQBT8bGPBJK8Qqoy01hgwH4rAAEJysWPAAA=",
    "displayName": "AZCommunity",
    "parentFolderId": "AAMkADZmZDQ5OWNhLTU3NzQtNDRlZC1iMDRlLTg5NTA1NGM3NWJlZgAuAAAAAAAhZj7Qt8LySYhKvlgbXRNVAQDX8XL9o4tkR5vF5sEdh44eAIYnQnhhAAA=",
    "childFolderCount": 0,
    "unreadItemCount": 5,
    "totalItemCount": 169,
    "wellKnownName": null
  },
For what you are trying to do, you will have to do additional work to map the folder names to folder IDs and then assign the ID. I would suggest using an Azure Function to do this easily.
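The name-to-ID mapping suggested above can be sketched as follows. This assumes you have already fetched the childFolders JSON from Graph (the HTTP call itself is omitted); the function names are hypothetical, and the 'Id::' prefix matches the folderPath value shown in the code view earlier:

```python
def build_folder_map(graph_response):
    """Build a displayName -> id lookup from a Graph childFolders
    response, so a readable segment like 'Jan' can be translated to
    the folder ID the Move Email action needs."""
    return {
        folder["displayName"]: folder["id"]
        for folder in graph_response.get("value", [])
    }

def folder_path_query(folder_id):
    # The Office 365 connector's folderPath query prefixes the raw
    # folder ID with 'Id::', as seen in the peeked code above.
    return "Id::" + folder_id
```

An Azure Function could run this mapping once per request (or cache it) and return the folderPath value for the Move Email action to consume.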

Azure DSC error on initial sync generating mof

I have a custom DSC module which is class based. During the initial sync process the target machine tries to generate a MOF in C:\Windows\System32\dsc, which results in an error - this causes the initial sync to report as failed, even though all the individual configuration resource tasks show as succeeded. The ones that are based on the resource whose MOF was not generated report as succeeded, but in fact have not executed at all.
This is the error:
{
  "JobId": "4deeaf52-aa56-11e6-a940-000d3ad04eaa",
  "OperationType": "Initial",
  "ReportFormatVersion": "2.0",
  "ConfigurationVersion": "2.0.0",
  "StartTime": "2016-11-14T21:37:14.2770000+11:00",
  "Errors": [{
    "ErrorSource": "DSCPowershellResource",
    "Locale": "en-US",
    "Errors": {
      "Exception": {
        "Message": "Could not find the generate schema file dsc\tBlobSync.1.4.tAzureStorageFileSync.schema.mof.",
        "Data": {},
        "InnerException": null,
        "TargetSite": null,
        "StackTrace": null,
        "HelpLink": null,
        "Source": null,
        "HResult": -2146233079
      },
      "TargetObject": null,
      "CategoryInfo": {
        "Category": 6,
        "Activity": "",
        "Reason": "InvalidOperationException",
        "TargetName": "",
        "TargetType": ""
      },
      "FullyQualifiedErrorId": "ProviderSchemaNotFound",
      "ErrorDetails": null,
      "InvocationInfo": null,
      "ScriptStackTrace": null,
      "PipelineIterationInfo": []
    },
    "ErrorCode": "6",
    "ErrorMessage": "Could not find the generate schema file dsc\tBlobSync.1.4.tAzureStorageFileSync.schema.mof.",
    "ResourceId": "[tAzureStorageFileSync]CDrive"
  }],
  "StatusData": [],
  "AdditionalData": [{
    "Key": "OSVersion",
    "Value": {
      "VersionString": "MicrosoftWindowsNT10.0.14393.0",
      "ServicePack": "",
      "Platform": "Win32NT"
    }
  },
  {
    "Key": "PSVersion",
    "Value": {
      "CLRVersion": "4.0.30319.42000",
      "PSVersion": "5.1.14393.206",
      "BuildVersion": "10.0.14393.206"
    }
  }]
}
I have tried manually generating the MOF and including it in the module, but that didn't help (or perhaps I did it wrong). Even though this is a class-based resource, I added the MOF with the name of the class in a \DSCResources\<className>\<className>.schema.mof file. I note that the one generated in the C:\Windows\System32\dsc folder includes the version number, which mine does not. Perhaps that's the problem.
After the failed initial sync, the subsequent consistency check does pass, and the MOF is created at the location mentioned in the error message.
The class itself contains a function that calls Import-Module Azure.Storage which is installed on the machine by a different DSC resource, and has been installed at the point of the consistency check, but (obviously) not at the point the initial sync starts. The resource that installs the module is marked as a dependency of the class-resource in the configuration, but I think MOF generation must happen at the point the modules are deployed which is logically before the initial sync has run.
At least that's what I think is happening.
Would be grateful if anyone could instruct me on what can be done in this instance, and whether my assumptions (above) are correct? I can't seem to get any additional errors or telemetry from the MOF compilation process itself to see why the MOF compilation is failing.
#Mods I'm really not clear on what basis this would be downvoted - I don't think asking a question nobody can answer is really grounds for "punishment".
Posting an answer as nobody really had anything to contribute here and I appear to have solved it on my own. I believe the issue is a matter of timing. The DSC dependent modules are delivered from the pull server and compiled before any of them are executed. The dependency of my class module on Azure.Storage meant that the .psm1 file couldn't be compiled, since the module didn't exist on the machine yet - it would be delivered via a DSC resource at a later time.
Perhaps there is some mechanism that accounts for these dependencies in PS-based modules, or some leniency is applied that isn't the case for class-based resources. That's still not clear.
After some experimentation I have begun creating and shipping the MOF files alongside the .psm1 and .psd1 files, rather than in the \DSCResources\<className> child folder as outlined in my question, and this appears to have resolved the issue.
Hopefully this helps someone and doesn't attract more downvotes.

XPages Extlib REST update (HTTP PUT) on calendar event is returning error code (cserror : 1028)

I am trying to update a calendar event in the Domino mail calendar by using the REST data service "calendar" from the latest XPages Extension Library release, "ExtensionLibraryOpenNTF-901v00_13.20150611-0803".
Has anybody done this with a successful return?
Unfortunately I haven't had any success trying to update a calendar event. I was successful getting the list of events, creating events, and deleting an event, but updating an event seems to be somehow special. The PDF documentation for the calendar service is quite short on this point. My Domino server is accepting all protocols, including PUT. I am using the JSON format for my REST calls. I tried the update with iCal as well, as described in the documentation, but I get the same error.
I am using the Firefox REST plugin to check out the service before implementing it.
I'm using PUT, with content type "text/calendar" as well as "application/json".
My URL:
http://sitlap55.xyzgmbh.de:8080/mail/padmin.nsf/api/calendar/events/4D750E2B8159D254C1257E9C0066D48D
My body looks like this, which is the easiest event type, a reminder (but I tried it with meeting and appointment as well):
{"events":[{"UID":"4D750E2B8159D254C1257E9C0066D48D","summary":"Ein Reminder update","start":{"date":"2015-08-13","time":"13:00:00","utc":true}}]}
This is how the event is returned by a GET:
{
  "href": "/mail/padmin.nsf/api/calendar/events/4D750E2B8159D254C1257E9C0066D48D-Lotus_Notes_Generated",
  "id": "4D750E2B8159D254C1257E9C0066D48D-Lotus_Notes_Generated",
  "summary": "Ein Reminder",
  "start": {
    "date": "2015-08-12",
    "time": "07:00:00",
    "utc": true
  },
  "class": "public",
  "transparency": "transparent",
  "sequence": 0,
  "x-lotus-summarydataonly": {
    "data": "TRUE"
  },
  "x-lotus-appttype": {
    "data": "4"
  }
}
This is the error I get:
{
  "code": 400,
  "text": "Bad Request",
  "cserror": 1028,
  "message": "Error updating event",
  "type": "text",
  "data": "com.ibm.domino.calendar.store.StoreException: Error updating event\r\n\tat com.ibm.domino.calendar.dbstore.NotesCalendarStore.updateEvent(NotesCalendarStore.java:229)\r\n\tat ... 65 more\r\n"
}
For the attributes in the body I tried a lot of different things: using the id, no id, a UID as in the calendar service documentation, ...
What am I doing wrong here?
The solution:
Using the PUT method, the URL which worked looks like this:
http://sitlap55.xyzgmbh.de:8080/mail/padmin.nsf/api/calendar/events/4D750E2B8159D254C1257E9C0066D48D-Lotus_Notes_Generated
the BODY looks like this:
{"events":[{"id":"4D750E2B8159D254C1257E9C0066D48D-Lotus_Notes_Generated","summary":"Some Reminder update #6","start":{"date":"2015-08-13","time":"10:00:00","utc":true}}]}
What I figured out is that the "id" attribute is required! A bit strange, because it is already in the URL.
I just checked against the documentation for Domino Access Services (DAS) 9.0.1 - and the example they have there actually works.
I tried a few things before that, e.g. whether I could PATCH (change individual fields) or PUT (change ALL fields) just specifying the non-system fields. None of these worked. But taking the response from creating (or getting) the event, putting it into a PUT request, and adjusting e.g. the start time works fine.
Looking at your example, I think the issue is similar, as you do not include the end time in the request. But even so, you seem to have to include the entire record as it is returned from the service - and please note that the URL must end in the ENTIRE id (i.e. including "...-Lotus_Notes_Generated") :-)
/John
Edit:
It seems you don't need to add all fields... But be aware of the side effects of not specifying fields... You need to test it for yourself!
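Putting the working solution together as a sketch in Python (the requests library is assumed for the actual call; the base URL and ids are the ones from the question, and build_update_request is a hypothetical helper). The key point is that the full event id, including the suffix, must appear both at the end of the URL and as the 'id' attribute in the body:

```python
import json

# Base URL from the question; adjust for your own server and mail file.
BASE = "http://sitlap55.xyzgmbh.de:8080/mail/padmin.nsf/api/calendar/events"

def build_update_request(event_id, summary, start):
    """Build the URL and JSON body for a PUT update. The FULL event id
    (including the -Lotus_Notes_Generated suffix) must appear both at
    the end of the URL and as the body's 'id' attribute."""
    url = BASE + "/" + event_id
    body = json.dumps({"events": [{"id": event_id,
                                   "summary": summary,
                                   "start": start}]})
    return url, body

# Sending it would then look like this (requests assumed installed):
#   import requests
#   url, body = build_update_request(...)
#   requests.put(url, data=body,
#                headers={"Content-Type": "application/json"})
```

As noted in the edit above, fields you omit may have side effects, so the safest pattern is to start from the full record returned by GET and adjust only the fields you want to change.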
