Azure CDN rules engine: UrlFileExtension condition with a JSON file

I am trying to create a rules engine rule on a CDN endpoint, but using a JSON file. (The result shown in the image was achieved manually in the portal; now I want to automate it.)
So far I have this:
"deliveryPolicy": {
"description": "Rewrite and Redirect",
"rules": [
{
"name" : "UrlFileExtension",
"order": 2,
"conditions": [
{
"name": "UrlFileExtension",
"parameters": {
"#odata.type": "#Microsoft.Azure.Cdn.Models.UrlFileExtensionMatchConditionParameters",
"Extension": 0,
"operator": "LessThanOrEqual",
"matchValues": [0]
}
}
],
"actions": [
{
"name": "UrlRewrite",
"parameters": {
"#odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleUrlRewriteActionParameters",
"sourcePattern": "/",
"destination": "/index.html",
"preserveUnmatchedPath": false
}
}
]
},
The action works just fine, but I can't get the UrlFileExtension condition to work; it does not recognize the odata.type either.
Any hint or suggestion on how to fix the condition?

You might want to try this odata.type for the condition:
"#odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleUrlFileExtensionMatchConditionParameters",
instead of
"#odata.type": "#Microsoft.Azure.Cdn.Models.UrlFileExtensionMatchConditionParameters",
https://learn.microsoft.com/en-us/python/api/azure-mgmt-cdn/azure.mgmt.cdn.models.urlfileextensionmatchconditionparameters?view=azure-python
(I'm aware the documentation is for Python; I could not find anything better, but it could be your solution.)
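To illustrate, here is a sketch of what the whole condition block could look like with that type, assuming (as your snippet suggests) that you want to match extensionless URLs by comparing the extension length against 0. Note that matchValues are strings and the stray "Extension" property is dropped:
"conditions": [
  {
    "name": "UrlFileExtension",
    "parameters": {
      "#odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleUrlFileExtensionMatchConditionParameters",
      "operator": "LessThanOrEqual",
      "negateCondition": false,
      "matchValues": [ "0" ],
      "transforms": []
    }
  }
]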

How to get multiple folder paths in an Azure Logic App

I have a scenario: I want to build an Azure Logic App where I have to get documents from various folders in SharePoint, process them, and send an email notification. My confusion is: how can I give multiple input folder paths?
I'm going to make an assumption about your architecture in my answer: that you want to process multiple files in different sites within the same SharePoint tenant, not across tenants.
To achieve what you're asking for, I created a Parse JSON action which takes in the following structure (as an example, obviously the structure is the key point here, not the data) ...
Scenario 1 - Specific Files
[
  {
    "SiteName": "ExampleSolution",
    "FileName": "/Shared Documents/General/Book.xlsx"
  },
  {
    "SiteName": "TestSite",
    "FileName": "/Shared Documents/Test Folder/Document.docx"
  }
]
You'll need to authenticate to the SharePoint tenant with an appropriate user.
Then, in a For Each action, loop through each item and retrieve the contents of each document using the Get file content using path action.
Site Address = concat('https://yourtenant.sharepoint.com/sites/', items('For_each')?['SiteName'])
File Path = File Name (from Dynamic Content)
It will then retrieve the contents dynamically using those expressions.
File 1 (Excel Document)
File 2 (Word Document)
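If it helps to see where those expressions end up in code view, here's a rough sketch of the Scenario 1 loop. The connector-specific inputs are placeholders/approximations (the designer generates the real path, with the Site Address concat expression embedded in it, when you configure the Get file content using path action); the key bit is how items('For_each') feeds the expressions:
"For_each": {
  "type": "Foreach",
  "foreach": "@body('Parse_JSON')",
  "runAfter": {
    "Parse_JSON": [ "Succeeded" ]
  },
  "actions": {
    "Get_file_content_using_path": {
      "type": "ApiConnection",
      "runAfter": {},
      "inputs": {
        "host": {
          "connection": {
            "name": "@parameters('$connections')['sharepointonline']['connectionId']"
          }
        },
        "method": "get",
        "path": "<generated by the designer from the Site Address expression>",
        "queries": {
          "path": "@items('For_each')?['FileName']"
        }
      }
    }
  }
}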
Scenario 2 - All Files
If you want to do it for all files, just change it up slightly ...
[
  {
    "FolderName": "/Shared Documents/General",
    "SiteName": "ExampleSolution"
  },
  {
    "FolderName": "/Shared Documents/Test Folder",
    "SiteName": "TestSite"
  }
]
Site Address = concat('https://yourtenant.sharepoint.com/sites/', items('For_each')?['SiteName'])
File Identifier = Folder Name (from Dynamic Content)
Output - Folder 1
[
{
"Id": "%252fShared%2bDocuments%252fGeneral%252fBook.xlsx",
"Name": "Book.xlsx",
"DisplayName": "Book.xlsx",
"Path": "/Shared Documents/General/Book.xlsx",
"LastModified": "2021-12-24T02:56:14Z",
"Size": 15330,
"MediaType": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"IsFolder": false,
"ETag": "\"{23948609-0DA0-43E0-994C-2703FEEC8567},7\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9uLnNoYXJlcG9pbnQuY29tL3NpdGVzL0V4YW1wbGVTb2x1dGlvbg==,id=JTI1MmZTaGFyZWQlMmJEb2N1bWVudHMlMjUyZkdlbmVyYWwlMjUyZkJvb2sueGxzeA==",
"LastModifiedBy": null
},
{
"Id": "%252fShared%2bDocuments%252fGeneral%252fTest%2bDocument.docx",
"Name": "Test Document.docx",
"DisplayName": "Test Document.docx",
"Path": "/Shared Documents/General/Test Document.docx",
"LastModified": "2021-12-30T11:49:28Z",
"Size": 17959,
"MediaType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"IsFolder": false,
"ETag": "\"{7A3C7133-02FC-4A63-9A58-E11A815AB351},8\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9u etc",
"LastModifiedBy": null
},
{
"Id": "%252fShared%2bDocuments%252fGeneral%252fHierarchy.xlsx",
"Name": "Hierarchy.xlsx",
"DisplayName": "Hierarchy.xlsx",
"Path": "/Shared Documents/General/Hierarchy.xlsx",
"LastModified": "2022-01-07T02:49:38Z",
"Size": 41719,
"MediaType": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"IsFolder": false,
"ETag": "\"{C919454C-48AB-4897-AD8C-E3F873B52E50},72\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9uL etc",
"LastModifiedBy": null
}
]
Output - Folder 2
[
{
"Id": "%252fShared%2bDocuments%252fTest%2bFolder%252fTest.xlsx",
"Name": "Test.xlsx",
"DisplayName": "Test.xlsx",
"Path": "/Shared Documents/Test Folder/Test.xlsx",
"LastModified": "2022-01-09T11:08:31Z",
"Size": 17014,
"MediaType": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"IsFolder": false,
"ETag": "\"{CCF71CE7-89E7-4F89-B5CB-0F078E22C951},163\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG9u etc",
"LastModifiedBy": null
},
{
"Id": "%252fShared%2bDocuments%252fTest%2bFolder%252fDocument.docx",
"Name": "Document.docx",
"DisplayName": "Document.docx",
"Path": "/Shared Documents/Test Folder/Document.docx",
"LastModified": "2022-01-09T11:08:16Z",
"Size": 17293,
"MediaType": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"IsFolder": false,
"ETag": "\"{317C5767-04EC-4264-A58B-27A3FA8E4DF3},3\"",
"FileLocator": "dataset=aHR0cHM6Ly9icmFka2RpeG etc",
"LastModifiedBy": null
}
]
From here, just process each file individually using one of the files actions like in the first scenario above.
Note: You'll need to work through subfolders and recursion yourself; there doesn't appear to be a way to do that easily.
You've provided very little information, but this should be enough for you to adapt it accordingly.
Also, I strongly recommend you use a means other than a hardcoded JSON document in the action itself. There are far better ways to house that information which wouldn't require updating the action every time you want to add or delete a file.
The concept of the loop and the expressions is the most important part to grasp, as they will give you what you want.

Apollo GraphQL Deprecation Warnings

I am getting lots of deprecation warnings and I am not sure why. Could anyone please guide me?
I have created a boilerplate for Apollo GraphQL with the Koa Node.js framework. I am planning to use GraphQL subscriptions in my next project; however, these warnings or errors are confusing me. This is my repo: https://github.com/sumitbhavra/graphql-koa.git
{
"name": "include",
"description": "Directs the executor to include this field or fragment only when the `if` argument is true.",
"locations": [
"FIELD",
"FRAGMENT_SPREAD",
"INLINE_FRAGMENT"
],
"args": [
{
"name": "if",
"description": "Included when true.",
"type": {
"kind": "NON_NULL",
"name": null,
"ofType": {
"kind": "SCALAR",
"name": "Boolean",
"ofType": null
}
},
"defaultValue": null
}
]
},
{
"name": "deprecated",
"description": "Marks an element of a GraphQL schema as no longer supported.",
"locations": [
"FIELD_DEFINITION",
"ENUM_VALUE"
],
"args": [
{
"name": "reason",
"description": "Explains why this element was deprecated, usually also including a suggestion for how to access supported similar data. Formatted using the Markdown syntax, as specified by [CommonMark](https://commonmark.org/).",
"type": {
"kind": "SCALAR",
"name": "String",
"ofType": null
},
"defaultValue": "\"No longer supported\""
}
]
}
That is not a deprecation warning. That is an introspection result that includes information about the @deprecated directive, which can be used inside your type definitions to mark individual fields or enum values as deprecated.
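For example (an illustrative schema, not taken from your repo), it's applied in SDL like this:
type Product {
  id: ID!
  name: String!
  # Deprecated field: introspection and tooling surface the reason below.
  title: String @deprecated(reason: "Use `name` instead.")
}

enum ProductStatus {
  ACTIVE
  RETIRED @deprecated(reason: "No longer supported")
}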

How to map a distinct custom skill output to the index

I'm trying to add a custom skill to the skillset and map it into the index. Here it is in detail.
I'm using Azure Named Entity Recognition in my skillset as:
{
  "#odata.type": "#Microsoft.Skills.Text.MergeSkill",
  "description": "Merge text content with image tags",
  "insertPreTag": " ",
  "context": "/document",
  "inputs": [
    {
      "name": "text",
      "source": "/document/fullTextAndCaptions"
    },
    {
      "name": "itemsToInsert",
      "source": "/document/normalized_images/*/Tags/*/name"
    }
  ],
  "outputs": [
    {
      "name": "mergedText",
      "targetName": "finalText"
    }
  ]
}
and in the indexer as
{
  "sourceFieldName": "/document/finalText/pages/*/entities/*/value",
  "targetFieldName": "entities"
},
{
  "sourceFieldName": "/document/finalText/pages/*/locations/*",
  "targetFieldName": "locations"
},
and it works 100%. Now I want to add the Distinct custom skill from https://github.com/Azure-Samples/azure-search-power-skills/tree/master/Text/Distinct
I published the function, and when I test it manually it works as expected.
However, it's not working in the skillset. I want it to take the locations, filter them, and output only the distinct values into their own field in the search index.
I'm having a really hard time configuring the skillset and indexer to get this to work.
Any help, please?
You'll need to add the distinct custom skill like this, assuming you want to dedupe over the whole document:
{
  "skills": [
    ...
    {
      "#odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
      "description": "Distinct skill",
      "uri": "<https://distinct-skill>",
      "context": "/document",
      "inputs": [
        {
          "name": "locations",
          "source": "/document/finalText/pages/*/locations/*"
        }
      ],
      "outputs": [
        {
          "name": "distinct",
          "targetName": "distinctLocations"
        }
      ]
    }
    ...
  ]
}
and an output field mapping to put it into the index.
{
  "sourceFieldName": "/document/distinctLocations",
  "targetFieldName": "distinctLocations"
}
See https://learn.microsoft.com/en-us/azure/search/cognitive-search-custom-skill-interface#consuming-custom-skills-from-skillset for adding a custom skill.
The skill inputs for the custom skill must be configured to point to the data you want to deduplicate. In this case you don't really need to modify the code; all you have to do is provide an input with name 'words' and source '/document/finalText/pages/*/locations/*'.
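You'll also need a field in the index for that output mapping to land in. Assuming you want plain strings, something along these lines (adjust the attributes to your needs):
{
  "name": "distinctLocations",
  "type": "Collection(Edm.String)",
  "searchable": true,
  "filterable": true,
  "facetable": true,
  "retrievable": true
}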

How to query by array of objects in Contentful

I have a content type entry in Contentful that has fields like this:
"fields": {
"title": "How It Works",
"slug": "how-it-works",
"countries": [
{
"sys": {
"type": "Link",
"linkType": "Entry",
"id": "3S5dbLRGjS2k8QSWqsKK86"
}
},
{
"sys": {
"type": "Link",
"linkType": "Entry",
"id": "wHfipcJS6WUSaKae0uOw8"
}
}
],
"content": [
{
"sys": {
"type": "Link",
"linkType": "Entry",
"id": "72R0oUMi3uUGMEa80kkSSA"
}
}
]
}
I'd like to run a query that would only return entries if they contain a particular country.
I played around with this query:
https://cdn.contentful.com/spaces/aoeuaoeuao/entries?content_type=contentPage&fields.countries=3S5dbLRGjS2k8QSWqsKK86
However, I get this error:
The equals operator cannot be used on fields.countries.en-AU because it has type Object.
I'm playing around with Postman, but will be using the .NET API.
Is it possible to search for entries and filter on arrays that contain objects?
I'm still learning the API, so I'm guessing it should be pretty straightforward.
Update:
I looked at the request the Contentful Web CMS makes, as this functionality is possible there. They use query params like this:
filters.0.key=fields.countries.sys.id&filters.0.val=3S5dbLRGjS2k8QSWqsKK86
However, this did not work in the delivery API, and might only be an internal query format.
Figured this out. I used the following URL:
https://cdn.contentful.com/spaces/aoeuaoeua/entries?content_type=contentPage&fields.countries.sys.id=wHfipcJS6WUSaKae0uOw8
Note the query parameter fields.countries.sys.id
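So, as a general pattern for reference (link) array fields, you query the linked entry's id through the field; content_type is required when filtering on fields. The space ID, token, content type ID, field ID, and entry ID below are placeholders:
https://cdn.contentful.com/spaces/<space_id>/entries?access_token=<access_token>&content_type=<content_type_id>&fields.<field_id>.sys.id=<linked_entry_id>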

Error when creating Custom Analyzer in API version 2016-09-01

I am currently working with custom analyzers in Azure Search. I previously had a lot of success with the preview version of the Azure Search API, "2015-02-28-Preview", which introduced the feature. I'm now trying to migrate my custom analyzers to API version "2016-09-01", which according to this article (https://learn.microsoft.com/en-us/azure/search/search-api-migration) includes custom analyzer support. My analyzers are configured as follows:
"analyzers": [
{
"name": "phonetic_area_analyzer",
"#odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
"tokenizer": "area_standard",
"tokenFilters": [ "lowercase", "asciifolding", "areas_phonetc" ]
},
{
"name": "partial_area_analyzer",
"#odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
"tokenizer": "area_standard",
"tokenFilters": [ "lowercase", "area_token_edge" ]
},
{
"name": "startsWith_area_analyzer",
"#odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
"tokenizer": "area_keyword",
"tokenFilters": [ "lowercase", "asciifolding", "area_edge" ]
}
],
"charFilters": [],
"tokenizers": [
{
"name":"area_standard",
"#odata.type":"#Microsoft.Azure.Search.StandardTokenizer"
},
{
"name":"area_keyword",
"#odata.type":"#Microsoft.Azure.Search.KeywordTokenizer"
}
],
"tokenFilters": [
{
"name": "area_edge",
"#odata.type": "#Microsoft.Azure.Search.EdgeNGramTokenFilter",
"minGram": 2,
"maxGram": 50
},
{
"name": "area_token_edge",
"#odata.type": "#Microsoft.Azure.Search.EdgeNGramTokenFilter",
"minGram": 2,
"maxGram": 20
},
{
"name": "areas_phonetc",
"#odata.type": "#Microsoft.Azure.Search.PhoneticTokenFilter",
"encoder": "doubleMetaphone"
}
]
This configuration works when using version "2015-02-28-Preview" but when I try version "2016-09-01" I get the following error as a response:
{
  "error": {
    "code": "",
    "message": "The request is invalid. Details: index : The tokenizer of type 'standard' is not supported in the API version '2016-09-01'.\r\n"
  }
}
Is there a problem with my configuration, or does version "2016-09-01" only allow for a limited subset of custom analyzer features? If this is the case, could someone please point me in the direction of some documentation detailing which features are supported?
Sorry, there was a delay in the process that updates the documentation. Here is my pull request that has the changes we introduced in 2016-09-01: https://github.com/Azure/azure-docs-rest-apis/pull/218 (request access here https://azure.github.io/)
In your example, change KeywordTokenizer to KeywordTokenizerV2, same for the StandardTokenizer and the EdgeNGramTokenFilter.
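In other words, the tokenizers and tokenFilters sections would become something like this (the analyzer definitions keep referencing the same names, so they don't need to change):
"tokenizers": [
  {
    "name": "area_standard",
    "#odata.type": "#Microsoft.Azure.Search.StandardTokenizerV2"
  },
  {
    "name": "area_keyword",
    "#odata.type": "#Microsoft.Azure.Search.KeywordTokenizerV2"
  }
],
"tokenFilters": [
  {
    "name": "area_edge",
    "#odata.type": "#Microsoft.Azure.Search.EdgeNGramTokenFilterV2",
    "minGram": 2,
    "maxGram": 50
  },
  {
    "name": "area_token_edge",
    "#odata.type": "#Microsoft.Azure.Search.EdgeNGramTokenFilterV2",
    "minGram": 2,
    "maxGram": 20
  },
  {
    "name": "areas_phonetc",
    "#odata.type": "#Microsoft.Azure.Search.PhoneticTokenFilter",
    "encoder": "doubleMetaphone"
  }
]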
Update:
The new version of the documentation is online: https://learn.microsoft.com/en-us/rest/api/searchservice/custom-analyzers-in-azure-search
