I am trying to understand the provisioning process for Azure Digital Twins, and I am reading this doc: https://learn.microsoft.com/en-us/azure/digital-twins/tutorial-facilities-setup
However, I cannot follow this section; in particular, I cannot understand the response to "dotnet run GetOntologies".
Can anyone help me understand what those values are, and how they relate to "models are available"?
In Azure Digital Twins, the Ontology entity contains the set of all types and subtypes that can be used in your application. In your example, the "Required" and "Default" ontologies are enabled (this is the default). If you use the REST API to see what the "Default" ontology contains, you get the following:
{
  "id": 2,
  "name": "Default",
  "loaded": true,
  "types": [
    {
      "id": 17,
      "category": "SensorDataType",
      "name": "Humidity",
      "disabled": false,
      "logicalOrder": 0
    },
    {
      "id": 18,
      "category": "SensorDataType",
      "name": "Temperature",
      "disabled": false,
      "logicalOrder": 0
    },
    {
      "id": 19,
      "category": "SensorDataSubtype",
      "name": "RoomHumidity",
      "disabled": false,
      "logicalOrder": 0,
      "friendlyName": "Room Humidity"
    },
    // etc etc
As you can see in the example above, the ontology contains basic definitions of the sensor, space, and data types used in smart-building scenarios. The BACnet and Advanced ontologies just add different, more specific types. When you set an ontology to enabled, you can start using those types/subtypes. You can inspect them through the REST API with:
https://your-url.your-region.azuresmartspaces.net/management/api/v1.0/ontologies/3?includes=Types
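If it helps, here is a minimal sketch of that call in TypeScript, assuming you already have an AAD bearer token for the management API (the token variable and the host are placeholders, not part of the tutorial):

// Minimal sketch: list all ontologies with their types.
const base = "https://your-url.your-region.azuresmartspaces.net/management/api/v1.0";
const token = process.env.TWINS_TOKEN; // assumed environment variable with a bearer token

const res = await fetch(`${base}/ontologies?includes=Types`, {
  headers: { Authorization: `Bearer ${token}` },
});
const ontologies: Array<{ id: number; name: string; loaded: boolean }> = await res.json();
// Ontologies with loaded=true are enabled; their types/subtypes are the
// "models available" to the spaces, devices, and sensors in your instance.
console.log(ontologies);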
I'm trying to create ADOS build definitions programmatically. I found a similar question with an answer here: How to create Build Definitions through VSTS REST API
In the answer's example, the steps property is empty. I included some steps (taken from the JSON of another build definition retrieved with the same API), but the created build definition has no steps.
I dug into the .NET API browser and found that there is a BuildProcess class with a Process property that should take a DesignerProcess for TFVC pipelines (since YAML is only supported for Git repos). DesignerProcess has a Phase property that is read-only, which may be the reason it's not creating my steps.
However, I still need to find a way to create a build's steps programmatically.
If you don't know what to put in the steps property, you can grab the request body from the browser's developer tools while saving a classic UI pipeline.
Here are the detailed steps:
1. Create a classic UI pipeline with the steps you want in ADOS. (Don't save it in this step.)
2. If you are using Edge, press F12 to open the developer tools, then choose 'Network'.
3. Click Save and you will find a request called 'definitions'.
4. Click it; the request body is at the bottom of the page. You will find the step-related information in the process and processParameters properties.
If you are using a different browser, there might be slight differences in steps 2, 3, and 4.
Then you can edit it and add the script to your REST API request body.
Here is a simple example of a request body that includes a Command Line task.
"process": {
"phases": [
{
"condition": "succeeded()",
"dependencies": [],
"jobAuthorizationScope": 1,
"jobCancelTimeoutInMinutes": 0,
"jobTimeoutInMinutes": 0,
"name": "Agent job 1",
"refName": "Job_1",
"steps": [
{
"displayName": "Command Line Script",
"refName": null,
"enabled": true,
"continueOnError": false,
"timeoutInMinutes": 0,
"alwaysRun": false,
"condition": "succeeded()",
"inputs": {
"script": "echo Hello world\n",
"workingDirectory": "",
"failOnStderr": "false"
},
"overrideInputs": {},
"environment": {},
"task": {
"id": "d9bafed4-0b18-4f58-968d-86655b4d2ce9",
"definitionType": "task",
"versionSpec": "2.*"
}
}
],
"target": {
"type": 1,
"demands": [],
"executionOptions": {
"type": 0
}
},
"variables": {}
}
],
"type": 1,
"target": {
"agentSpecification": {
"metadataDocument": "https://mmsprodea1.vstsmms.visualstudio.com/_apis/mms/images/VS2017/metadata",
"identifier": "vs2017-win2016",
"url": "https://mmsprodea1.vstsmms.visualstudio.com/_apis/mms/images/VS2017"
}
},
"resources": {}
}
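Once you have the captured body, a minimal sketch of submitting it through the Build Definitions - Create REST API might look like this (organization, project, definition name, and PAT are placeholders, and the body is abbreviated to the parts shown above):

// Sketch: create a build definition from the captured request body.
const org = "your-org";                  // placeholder
const project = "your-project";          // placeholder
const pat = process.env.AZDO_PAT ?? "";  // personal access token (assumed)

const definition = {
  name: "my-classic-build",                                  // placeholder
  repository: { /* repository block captured from the saved request */ },
  process: { /* the "process" object shown above */ },
};

const res = await fetch(
  `https://dev.azure.com/${org}/${project}/_apis/build/definitions?api-version=6.0`,
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Node's Buffer builds the basic-auth header from the PAT.
      Authorization: "Basic " + Buffer.from(`:${pat}`).toString("base64"),
    },
    body: JSON.stringify(definition),
  }
);
console.log(res.status); // expect 200 with the created definition in the response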
What's more, creating YAML pipelines through the REST API is not currently supported. See this question for detailed information.
I'm having a problem loading $schema in SPFx within my new web part for SharePoint. The web part works on workbench.aspx, but my whole manifest is not being processed, so I can't set preconfiguredEntries, and that's a big problem for me.
The error is:
Problems loading reference 'https://developer.microsoft.com/json-schemas/spfx/client-side-manifest-base.schema.json': Request vscode/content failed unexpectedly without providing any details.(768)
Any ideas on this issue, please?
{
  "$schema": "https://developer.microsoft.com/json-schemas/spfx/client-side-web-part-manifest.schema.json",
  "id": "56dab116-67ba-453f-883d-b7a11690e965",
  "alias": "ReadListWebPart",
  "supportedHosts": ["SharePointWebPart"],
  "componentType": "Webpart",
  "version": "1.0",
  "manifestVersion": "2",
  "requiresCustomScript": false,
  "preconfiguredEntries": [{
    "groupId": "5c03119e-3074-46fd-976b-c60198311f70",
    "group": { "default": "Other" },
    "title": { "default": "read-list" },
    "description": { "default": "popis web party" },
    "officeFabricIconFontName": "Page",
    "properties": {
      "vedouci_velke_foto": true,
      "asistenti_pod_vedoucim": false,
      "nazev_web_party": "To jsme my"
    }
  }]
}
I checked the manifest.json; mine is the same as yours, and I get the same warning.
Then I tested accessing "https://developer.microsoft.com/json-schemas/spfx/client-side-web-part-manifest.schema.json" locally; no problem, it can still be accessed.
After this, I tested outputting the preconfigured properties in a React SPFx web part (in Props.ts, the WebPart.ts, and the .tsx component) and was still able to output the properties:
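The shape of that test, roughly (a sketch only; the property names mirror the manifest above and the markup is illustrative):

import { BaseClientSideWebPart } from '@microsoft/sp-webpart-base';

// Mirrors the "properties" bag in preconfiguredEntries above.
export interface IReadListWebPartProps {
  vedouci_velke_foto: boolean;
  asistenti_pod_vedoucim: boolean;
  nazev_web_party: string;
}

export default class ReadListWebPart extends BaseClientSideWebPart<IReadListWebPartProps> {
  public render(): void {
    // this.properties carries the preconfigured values at runtime.
    this.domElement.innerHTML = `<div>${this.properties.nazev_web_party}</div>`;
  }
}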
In conclusion, you can just ignore this issue; the web part is still able to read preconfiguredEntries.
I am pulling recommendations from the Azure Advisor REST API and am not able to retrieve the extendedProperties values.
Specifically, I am looking for savings data from Recommendations of the Cost category.
In the following video at 58 seconds there is an example of the expected response.
https://www.youtube.com/watch?v=hAxrdmOAB8s
Are there specific permissions necessary to give my account in order to pull the data, or is the API not capable of supplying the values?
I am able to see the data in the portal, but the extendedProperties property is always empty.
I'm supposing you're trying the Recommendations - List API.
Essentially, extended properties expose additional information about a recommendation from Azure Advisor.
AFAIK, they need not be present for every recommendation, and you shouldn't need additional privileges to list them. It could just be that the types of recommendations you are receiving do not have any to list.
Here is a sample response that I received that has a mix of both:
[
  {
    "properties": {
      "category": "Cost",
      "impact": "Medium",
      "impactedField": "Microsoft.Network/publicIPAddresses",
      "impactedValue": "foo",
      "lastUpdated": "2020-03-20T14:10:24.6928024Z",
      "recommendationTypeId": "1b4dd958-c202-47af-af97-99bfc98376a5",
      "shortDescription": {
        "problem": "Delete Public IP address not associated to a running Azure resource",
        "solution": "Delete Public IP address not associated to a running Azure resource"
      },
      "extendedProperties": {}
    },
    "id": "xxx",
    "type": "Microsoft.Advisor/recommendations",
    "name": "xxx"
  },
  {
    "properties": {
      "category": "Cost",
      "impact": "Medium",
      "impactedField": "Microsoft.Sql/servers/databases",
      "impactedValue": "bar",
      "lastUpdated": "2020-03-20T13:27:35.8394386Z",
      "recommendationTypeId": "b83241d3-47ba-4603-8d5a-a1b3331e74f4",
      "shortDescription": {
        "problem": "Right-size underutilized SQL Databases",
        "solution": "Right-size underutilized SQL Databases"
      },
      "extendedProperties": {
        "ServerName": "fooserver",
        "DatabaseName": "fooDB",
        "IsInReplication": "1",
        "ResourceGroup": "xyz",
        "DatabaseSize": "6",
        "Region": "East US 2",
        "ObservationPeriodStartDate": "03/04/2020 00:00:00",
        "ObservationPeriodEndDate": "03/19/2020 00:00:00",
        "Recommended_DTU": "10",
        "Recommended_SKU": "S0",
        "HasRecommendation": "true"
      }
    }
  }
]
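If you want to reproduce this, a minimal sketch of calling the Recommendations - List API with a Cost filter (the subscription id and token are placeholders, and the api-version is an assumption that may need adjusting):

const subscriptionId = "<subscription-id>"; // placeholder
const token = process.env.ARM_TOKEN;        // AAD bearer token for ARM (assumed)

const url =
  `https://management.azure.com/subscriptions/${subscriptionId}` +
  "/providers/Microsoft.Advisor/recommendations" +
  "?api-version=2020-01-01&$filter=" +
  encodeURIComponent("Category eq 'Cost'");

const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
const { value } = await res.json();
for (const rec of value) {
  // extendedProperties may legitimately be empty, as in the first item above.
  console.log(rec.name, rec.properties.extendedProperties);
}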
I was trying out phonetic search using Azure Search without much luck. My objective is to work out an index configuration that can handle typos and accommodate phonetic search for end users.
With the below configuration and sample data, I was trying to search for intentionally misspelled words like 'softvare' or 'alek'. I got results for 'alek', thanks to the phonetic analyzer, but didn't get any results for 'softvare'.
It looks like phonetic search alone will not do the trick for this requirement.
The only option I found was to use a synonym map. The major pitfall is that I'm unable to use the phonetic/custom analyzer along with synonyms :(
What are the various strategies that you would recommend for taking care of typos?
Search queries used:
?api-version=2017-11-11&search=alec
?api-version=2017-11-11&search=softvare
Here is the index configuration:
"name": "phonetichotels",
"fields": [
  {"name": "hotelId", "type": "Edm.String", "key": true, "searchable": false},
  {"name": "baseRate", "type": "Edm.Double"},
  {"name": "description", "type": "Edm.String", "filterable": false, "sortable": false, "facetable": false, "analyzer": "my_standard"},
  {"name": "hotelName", "type": "Edm.String", "analyzer": "my_standard"},
  {"name": "category", "type": "Edm.String", "analyzer": "my_standard"},
  {"name": "tags", "type": "Collection(Edm.String)", "analyzer": "my_standard"},
  {"name": "parkingIncluded", "type": "Edm.Boolean"},
  {"name": "smokingAllowed", "type": "Edm.Boolean"},
  {"name": "lastRenovationDate", "type": "Edm.DateTimeOffset"},
  {"name": "rating", "type": "Edm.Int32"},
  {"name": "location", "type": "Edm.GeographyPoint"}
],
The analyzer (part of the index creation):
"analyzers": [
  {
    "name": "my_standard",
    "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
    "tokenizer": "standard_v2",
    "tokenFilters": [ "lowercase", "asciifolding", "phonetic" ]
  }
]
Analyze API input and output for 'software':
{
  "analyzer": "my_standard",
  "text": "software"
}
{
  "@odata.context": "https://ctsazuresearchpoc.search.windows.net/$metadata#Microsoft.Azure.Search.V2017_11_11.AnalyzeResult",
  "tokens": [
    {
      "token": "SFTW",
      "startOffset": 0,
      "endOffset": 8,
      "position": 0
    }
  ]
}
Analyze API input and output for 'softvare':
{
  "analyzer": "my_standard",
  "text": "softvare"
}
{
  "@odata.context": "https://ctsazuresearchpoc.search.windows.net/$metadata#Microsoft.Azure.Search.V2017_11_11.AnalyzeResult",
  "tokens": [
    {
      "token": "SFTF",
      "startOffset": 0,
      "endOffset": 8,
      "position": 0
    }
  ]
}
Sample data that I loaded:
{
  "@search.action": "upload",
  "hotelId": "5",
  "baseRate": 199.0,
  "description": "Best hotel in town for software people",
  "hotelName": "Fancy Stay",
  "category": "Luxury",
  "tags": ["pool", "view", "wifi", "concierge"],
  "parkingIncluded": false,
  "smokingAllowed": false,
  "lastRenovationDate": "2010-06-27T00:00:00Z",
  "rating": 5,
  "location": { "type": "Point", "coordinates": [-122.131577, 47.678581] }
},
{
  "@search.action": "upload",
  "hotelId": "6",
  "baseRate": 79.99,
  "description": "Cheapest hotel in town ",
  "hotelName": " Alec Baldwin Motel",
  "category": "Budget",
  "tags": ["motel", "budget"],
  "parkingIncluded": true,
  "smokingAllowed": true,
  "lastRenovationDate": "1982-04-28T00:00:00Z",
  "rating": 1,
  "location": { "type": "Point", "coordinates": [-122.131577, 49.678581] }
},
With the right configuration, I should have gotten results even with the misspelled words.
I work on Azure Search. Before I suggest approaches to handle misspelled words, it would be helpful to look at your custom analyzer ("my_standard") configuration; it might tell us why it's not able to handle the case of 'softvare'. As a DIY check, you can use the Analyze API to see the tokens created by your custom analyzer; to actually match the docs, they would have to include the token produced for 'software'.
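For reference, a minimal sketch of such an Analyze API call (the service name and key are placeholders; the endpoint shape matches the request/response pair shown in the question):

const service = "your-search-service";                // placeholder
const adminKey = process.env.SEARCH_ADMIN_KEY ?? "";  // Analyze requires an admin key

const res = await fetch(
  `https://${service}.search.windows.net/indexes/phonetichotels/analyze?api-version=2017-11-11`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json", "api-key": adminKey },
    body: JSON.stringify({ analyzer: "my_standard", text: "softvare" }),
  }
);
console.log((await res.json()).tokens); // e.g. [{ token: "SFTF", ... }]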
Now then, here are a few ways that can be used independently or in conjunction to handle misspelled words. The best approach varies depending on the use case, and I strongly suggest you experiment with these to figure out the best one for your scenario.
1. You are already familiar with phonetic filters, which are a common approach to handling similarly pronounced terms. If you haven't already, try different encoders for the filter to evaluate which configuration gives you the best results (see the sketch after this list). Check out the list of encoders here.
2. Use fuzzy queries, supported as part of the Lucene query syntax in Azure Search, which return terms that are near the original query term based on a distance metric. The limitation is that it works on a single term. Check the docs for more details. A sample query would look like: search=softvare~1. You can also use term boosting to give the original term a higher boost in cases where the original term is also a valid term.
3. You also alluded to synonyms, which can likewise be used to query with misspelled terms. This approach gives you the most control over the process of handling typos, but it also requires you to have prior knowledge of the different typos for your terms. You can use these docs if you want to experiment with synonyms.
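For the first option, here is a sketch of swapping in a named phonetic filter with an explicit encoder, expressed as a TypeScript object for consistency with the other snippets here (the filter name "my_phonetic" is illustrative; the filter type and encoder values come from the custom analyzer documentation):

// Index-definition fragment: a custom analyzer using a configurable
// phonetic token filter instead of the default "phonetic" filter.
const indexFragment = {
  analyzers: [
    {
      name: "my_standard",
      "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
      tokenizer: "standard_v2",
      tokenFilters: ["lowercase", "asciifolding", "my_phonetic"],
    },
  ],
  tokenFilters: [
    {
      name: "my_phonetic", // illustrative name
      "@odata.type": "#Microsoft.Azure.Search.PhoneticTokenFilter",
      encoder: "doubleMetaphone", // try e.g. "metaphone", "soundex", "beiderMorse"
    },
  ],
};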
As you could read in my post, my objective was to handle typos.
The only easy option is to use the built-in Lucene functionality: fuzzy search. I have yet to check the response times, as queryType has to be set to 'full' to use fuzzy search. Otherwise, the results were satisfactory.
Example:
search=softvare~&queryType=full
will return all documents containing 'software'.
For further reading, please go through the documentation.
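A minimal sketch of issuing that query over the REST API (the service name and query key are placeholders):

const service = "your-search-service";               // placeholder
const queryKey = process.env.SEARCH_QUERY_KEY ?? ""; // query api-key (assumed)

const url =
  `https://${service}.search.windows.net/indexes/phonetichotels/docs` +
  "?api-version=2017-11-11&queryType=full&search=" +
  encodeURIComponent("softvare~");

const res = await fetch(url, { headers: { "api-key": queryKey } });
const { value } = await res.json();
console.log(value.map((doc: { description: string }) => doc.description));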
I'm attempting to utilize Contentful on a current project of mine and I'm trying to understand how to filter my query results based on a field in a linked object.
My top level object contains a Link defined as such:
"name": "Service_Description",
"fields": [
{
"name": "Header",
"id": "header",
"type": "Link",
"linkType": "Entry",
"required": true,
"validations": [
{
"linkContentType": [
"offerGeneral"
]
}
],
"localized": false,
"disabled": false,
"omitted": false
},
This "header" field links to another content type that has this definition:
"fields": [
{
"name": "General",
"id": "general",
"type": "Link",
"linkType": "Entry",
"required": true,
"validations": [
{
"linkContentType": [
"genericGeneral"
]
}
],
"localized": false,
"disabled": false,
"omitted": false
},
which then links to the lowest level:
"fields": [{
"name": "TagList",
"id": "tagList",
"type": "Array",
"items": {
"type": "Link",
"linkType": "Entry",
"validations": [
{
"linkContentType": [
"tag"
]
}
]
},
"validations": []
}
where tagList is an array of tags this piece of content may have.
I want to be able to run a query from the top-level object that says: get me X number of these "Service_Description" entries that contain a tag from a supplied list of tags.
In Postman, I've been running this:
https://cdn.contentful.com/spaces/{SPACE_ID}/entries?access_token={ACCESS_TOKEN}&content_type=serviceDescription&include=3
I'm trying to add a filter, something like this:
fields.header.fields.general.fields.tagList.sys.id%5Bin%5D={TAG_SYS_ID}
This is clearly incorrect, but I've been struggling with how to walk this relationship to achieve my goal. Perusing the documentation, it seems to have something to do with includes, but I'm unsure how to fix the problem.
Any direction on how to achieve my goal, or on whether this is even possible, would be appreciated.
This is now possible, something I believe was solved for in the API based on requests for this functionality. You can see the thread here.
The gist of it is that you have to query on the entries that have linked entries, and then include the content type of those linked entries in the query, like so:
contentfulClient.getEntries({
  'content_type': 'location',
  'fields.market.fields.marketName': 'New York',
  'fields.market.sys.contentType.sys.id': 'marketRegion'
})
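Applied to the types in the question, a one-level version might look like the sketch below. Note that only one level of link traversal is shown, and "someTextField" is a hypothetical field on offerGeneral, since only link fields appear in the definitions above:

import { createClient } from 'contentful';

const client = createClient({
  space: '<SPACE_ID>',           // placeholder
  accessToken: '<ACCESS_TOKEN>'  // placeholder
});

client.getEntries({
  content_type: 'serviceDescription',
  // Filter on a (hypothetical) field of the linked "header" entry...
  'fields.header.fields.someTextField': 'some value',
  // ...and identify the linked entry's content type, as required:
  'fields.header.sys.contentType.sys.id': 'offerGeneral'
});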
Unfortunately, what you are requesting is not currently possible in Contentful.
We were facing a very similar issue with nested/referenced content types, and support said it wasn't possible.
We ended up writing a very complicated system that allowed us to do what you want: essentially, doing a full-text search for the referenced content and then querying all of the parent entries. We then matched the relationships by iterating over the parents.
Sorry it couldn't be easier. Hopefully the devs work on something that improves this complication. We have brought it to their attention.
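For what it's worth, a rough sketch of that workaround (the content type ids, field names, and tag values are assumptions):

import { createClient } from 'contentful';

const client = createClient({
  space: '<SPACE_ID>',           // placeholder
  accessToken: '<ACCESS_TOKEN>'  // placeholder
});

// 1. Find the low-level entries first (a "name" field on "tag" is assumed).
const tags = await client.getEntries({
  content_type: 'tag',
  'fields.name[in]': 'foo,bar'   // the supplied list of tags (example values)
});

// 2. Walk back up: for each tag, fetch the entries that link to it, then
//    keep iterating over parents until you reach the serviceDescription level.
for (const tag of tags.items) {
  const parents = await client.getEntries({ links_to_entry: tag.sys.id });
  console.log(parents.items.map((p) => p.sys.contentType.sys.id));
}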