Matching GitHub and GHE with WebExtensions Match Patterns - google-chrome-extension

I'm trying to write an extension that only runs on *://github.com/notifications or *://github.*.com/notifications (to cover GitHub Enterprise URLs). Unfortunately, the second pattern is one of the invalid match patterns shown in both the Firefox docs and the Chrome docs.
This presumably means I'd have to use *://*/notifications instead and then filter inside the extension, which seems like a pain, as it forces a much wider scope than I actually need.
So my question is: is there an easy way to match these URLs that I'm missing? And is there a reason this kind of match is disallowed?
App is here in case that helps.

Okay, I fixed it with the include_globs option, as suggested in the comments:
"content_scripts": [
{
"matches": [
"*://*/*"
],
"include_globs": [
"*://*github*notifications*"
],
"js": [
"browser-polyfill.js",
"button.js"
]
}
]
It still seems odd that you're forced to declare the broader match and then narrow it down with include_globs, but maybe this makes it explicit that the script can, in principle, run on any domain.
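If you want a belt-and-braces check on top of include_globs, here is a minimal sketch of a runtime guard at the top of the content script (the host/path test below is only an illustration, not the actual logic of the linked app):

  // button.js (sketch): only proceed on github.com or a GitHub Enterprise
  // host such as github.example.com, and only on the notifications page.
  const { hostname, pathname } = window.location;
  const isGitHubHost = hostname === "github.com" || /(^|\.)github\./.test(hostname);

  if (isGitHubHost && pathname.startsWith("/notifications")) {
    // ... set up the notification buttons here ...
  }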

Related

Azure Data Factory Pipelines API: Cannot apply filters

The issue might be trivial, but I cannot work out what I am doing wrong. I am trying to check whether there are any "in progress" runs of a specific pipeline within my data factory. The call below gives me a full list of all runs in my ADF (correct).
However, the moment I add either a filter on a pipeline name or a filter on status, the results are empty, even though there are valid runs that should be returned.
Assuming we are talking about the same API, the documentation says that lastUpdatedAfter and lastUpdatedBefore are required fields in the body.
Take note of the format of these dates in the example Microsoft provides:
{
  "lastUpdatedAfter": "2018-06-16T00:36:44.3345758Z",
  "lastUpdatedBefore": "2018-06-16T00:49:48.3686473Z",
  "filters": [
    {
      "operand": "PipelineName",
      "operator": "Equals",
      "values": [
        "examplePipeline"
      ]
    }
  ]
}
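For reference, here is a rough sketch of the same call with both the required date window and filters on pipeline name and status. The subscription, resource group, factory name and access token are placeholders, and it assumes the queryPipelineRuns endpoint with api-version 2018-06-01:

  // Sketch: query ADF pipeline runs, filtered by pipeline name and "InProgress" status.
  // <subscription-id>, <resource-group>, <factory-name> and accessToken are placeholders.
  async function queryInProgressRuns(accessToken: string): Promise<void> {
    const url =
      "https://management.azure.com/subscriptions/<subscription-id>" +
      "/resourceGroups/<resource-group>/providers/Microsoft.DataFactory" +
      "/factories/<factory-name>/queryPipelineRuns?api-version=2018-06-01";

    const body = {
      lastUpdatedAfter: "2018-06-16T00:36:44.3345758Z",  // required
      lastUpdatedBefore: "2018-06-16T00:49:48.3686473Z", // required
      filters: [
        { operand: "PipelineName", operator: "Equals", values: ["examplePipeline"] },
        { operand: "Status", operator: "Equals", values: ["InProgress"] },
      ],
    };

    const response = await fetch(url, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${accessToken}`, // token assumed to be acquired elsewhere
      },
      body: JSON.stringify(body),
    });
    console.log(await response.json());
  }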

Unable to search for more than 20 chars in Azure Search

We are currently running into an issue while expanding our Azure Search features.
When we have the following string indexed in Azure Search:
AEDE190EACWWG4VGLDE02000UHKPT
And we search for that complete string, we are not able to find it.
However, when we only use 20 chars, we are able to find it.
So with the string below, we are able to find it:
AEDE190EACWWG4VGLDE
However, when we add just one more character it disappears again. And this is not only within our implementation; it also happens in Azure itself when entering the string in the query field.
The field is set up as:
Retrievable
Filterable
Searchable
Anyone know how to solve this issue?
I tested your scenario now, and it works fine. I cannot reproduce the problem you have. You don't specify which analyzer you use, so I'm going to assume you use the standard analyzer.
Here is how I tested.
I create a new index with two fields, Id and Ordcode.
I upload two records via Postman:
"value": [
{
"#search.action": "mergeOrUpload",
"Id": "1",
"Ordcode" : "AEDE190EACWWG4VGLDE02000UHKPT"
},
{
"#search.action": "mergeOrUpload",
"Id": "2",
"Ordcode": "ABC123"
}]
I search for the string AEDE190EACWWG4VGLDE02000UHKPT using searchMode=all and queryType=full. The response is as expected:
{
  "@odata.context": "https://<search-service>.search.windows.net/indexes('dg-test-65143696')/$metadata#docs(*)",
  "@odata.count": 1,
  "value": [
    {
      "@search.score": 0.2876821,
      "Id": "1",
      "Ordcode": "AEDE190EACWWG4VGLDE02000UHKPT"
    }
  ]
}
I also tried to reproduce this via the Search Explorer in the Azure Portal, even with the simple query syntax and searchMode=any (the default).
search=AEDE190EACWWG4VGLDE02000UHKPT&$count=true&$select=Id,Ordcode
There is a limit on the length of the tokens produced (depending on the analyzer you use), but it is not 20 unless you have defined a shorter maximum token length.
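In case it helps to reproduce outside Postman, here is the same test as a rough code sketch. The service name, index name, admin key and api-version are placeholders, and it assumes the index already exists with Id as the key and Ordcode as a searchable string field:

  // Sketch: upload the two test documents, then search for the full 29-character string.
  const service = "https://<search-service>.search.windows.net";
  const index = "<index-name>";
  const apiVersion = "2020-06-30"; // placeholder api-version
  const headers = { "Content-Type": "application/json", "api-key": "<admin-key>" };

  async function runTest(): Promise<void> {
    // 1. Upload the two records (same body as the Postman request above).
    await fetch(`${service}/indexes/${index}/docs/index?api-version=${apiVersion}`, {
      method: "POST",
      headers,
      body: JSON.stringify({
        value: [
          { "@search.action": "mergeOrUpload", Id: "1", Ordcode: "AEDE190EACWWG4VGLDE02000UHKPT" },
          { "@search.action": "mergeOrUpload", Id: "2", Ordcode: "ABC123" },
        ],
      }),
    });

    // 2. Search for the full string with searchMode=all and queryType=full.
    const response = await fetch(`${service}/indexes/${index}/docs/search?api-version=${apiVersion}`, {
      method: "POST",
      headers,
      body: JSON.stringify({
        search: "AEDE190EACWWG4VGLDE02000UHKPT",
        searchMode: "all",
        queryType: "full",
        count: true,
        select: "Id,Ordcode",
      }),
    });
    console.log(await response.json()); // expect a single hit: Id "1"
  }

  runTest().catch(console.error);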

Azure Spell not detecting spelling mistakes

I've written up a quick proof-of-concept console app to test out the functionality of the AzureSpell Cognitive Services product; however, it often fails to detect obvious spelling mistakes.
Having experimented with recommendations through other SO answers, I've had limited success.
Even using the demo located at https://azure.microsoft.com/en-us/services/cognitive-services/spell-check/ produces no results.
For example, consider the following piece of text: "Currently growing my compny which is a UK based Online compny with clients across the world. Working since 2001 to help indivduals."
This produces no results. I've looked at regional settings, PROOF vs SPELL, and character counts, all to no avail.
Has anyone had any success with this service, or, even better, does the above text snippet produce results for you?
Spell mode is working for me with your sample. The JSON result is:
{
  "_type": "SpellCheck",
  "flaggedTokens": [
    {
      "offset": 21,
      "token": "compny",
      "type": "UnknownToken",
      "suggestions": [
        {
          "suggestion": "company",
          "score": 0.9264452620075305
        }
      ]
    },
    {
      "offset": 55,
      "token": "compny",
      "type": "UnknownToken",
      "suggestions": [
        {
          "suggestion": "company",
          "score": 0.8740149238635179
        }
      ]
    },
    {
      "offset": 120,
      "token": "indivduals",
      "type": "UnknownToken",
      "suggestions": [
        {
          "suggestion": "individuals",
          "score": 0.753968656686115
        }
      ]
    }
  ]
}
OK, so after a fair amount of trial and error I've had some success, which has solved some issues and created others.
I've not been able to get a reliable result from Spell mode, but I have with Proof mode. However, after adding a fairly short piece of text, it would again report no results. Inspecting the API call shows the text is URL-encoded in the POST body; removing the "%0D" and "%0A" line-feed characters allows me to proof long texts successfully.
That would be fine, except that, being UK based, lots of correct spellings are now flagged as incorrect, because Proof mode is only available for the US market. So I've still been unable to get a functioning Spell result (it only works for very short pieces of text).
I understand the documentation states up to 130 characters for GET but 10,000 characters for POST, and my typical POSTs are around 1,000 characters. Possibly a ticket with MS, unless anyone has any ideas?
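For what it's worth, a minimal sketch of the newline-stripping workaround described above, assuming the Bing Spell Check v7 endpoint (the subscription key is a placeholder and mkt=en-GB is just an example market):

  // Sketch: strip CR/LF (%0D / %0A) before form-encoding the text and POSTing it.
  // The subscription key is a placeholder; mode and mkt are regular query parameters.
  async function spellCheck(text: string): Promise<unknown> {
    const cleaned = text.replace(/[\r\n]+/g, " "); // remove the line-break characters
    const params = new URLSearchParams({ mode: "spell", mkt: "en-GB" });
    const response = await fetch(
      `https://api.cognitive.microsoft.com/bing/v7.0/spellcheck?${params}`,
      {
        method: "POST",
        headers: {
          "Ocp-Apim-Subscription-Key": "<subscription-key>",
          "Content-Type": "application/x-www-form-urlencoded",
        },
        body: new URLSearchParams({ text: cleaned }).toString(),
      }
    );
    return response.json();
  }

  spellCheck("Currently growing my compny which is a UK based Online compny with clients across the world.")
    .then((result) => console.log(JSON.stringify(result, null, 2)));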

How to include multiple TLDs in permissions of manifest.json?

What is the best way to include the following URLs (multiple TLDs) in the manifest.json of a Chrome extension? I want to include:
https://www.corporate.com/
https://www.corporate.de/
https://www.corporate.co.uk/
But the following form gives an error:
"permissions": [
"https://www.corporate.*/",
]
because the match pattern rules (https://developer.chrome.com/apps/match_patterns) indicate that the host part can contain a * only at the beginning.
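Since the wildcard is only allowed at the start of the host, the usual workaround is simply to list each TLD explicitly. A sketch with these hosts (the trailing /* is there so the pattern covers every path on each host):

  "permissions": [
    "https://www.corporate.com/*",
    "https://www.corporate.de/*",
    "https://www.corporate.co.uk/*"
  ]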

Unable to full text search in Solr

I have some data in Solr. I want to search for documents whose name is Chinmay Sahu. As you can see below, I get 3 results in the output, but I expected 1 instead of 3, because the name is being matched only partially.
I want an exact search, so that only documents whose name is Chinmay Sahu are returned.
Output:
"docs": [
{
"id": "741fde46a654879949473b2cdc577913",
"content_id": "1277",
"name": "Chinmay Sahu",
"_version_": 1596995745829879800
},
{
"id": "4e98d680efaab3afe051f3ddc00dc5f2",
"content_id": "1825",
"name": "Chinmay Panda",
"_version_": 1596995745829879800
}
{
"id": "741fde46a654879949473b2cdc577913",
"content_id": "1259",
"name": "Sasmita Sahu",
"_version_": 1596995745829879800
}
]
Query:
name:Chinmay Sahu
Expected:
"docs": [
  {
    "id": "741fde46a654879949473b2cdc577913",
    "content_id": "1277",
    "name": "Chinmay Sahu",
    "_version_": 1596995745829879800
  }
]
Please help
Try doing this:
name:"Chinmay Sahu"
You need to do a phrase query to match the exact name.
I am guessing that in your case the name field uses the standard tokenizer, which splits tokens on whitespace. So at index time all 3 docs will contain a token "chinmay".
When you search using
name:Chinmay Sahu
Solr interprets it like the query below, because if no field name is specified before a token, Solr automatically searches for it in the default field. (However, the default field was removed in Solr 7.3, so it depends on which version of Solr you are using.)
name:chinmay AND default_field:sahu
So, since all three docs have chinmay as a token in the index, the query matches all 3 docs.
Now, I don't know what your default field is. Can you post your Solr schema? That way we can explain why you are seeing those 3 docs.
Since root545 has already explained that field:foo bar searches for foo in field and bar in the default search field, I'll suggest that you probably don't want to concern yourself with the exact Lucene syntax at all. The edismax query parser is well suited for separating the typed search string from which fields are searched and whether all tokens must match.
The query in that case would be just Chinmay Sahu, while you'd set q.op=AND (all terms must match), defType=edismax (use the edismax query parser) and qf=name (search the name field):
q=Chinmay Sahu&q.op=AND&defType=edismax&qf=name
You can also tune the various phrase parameters to make sure that names with the tokens in the exact same sequence are boosted higher than those that have them in the opposite sequence (e.g. Sahu Chinmay).
If this is a programmatic search where no user is actually typing in the suggestion, using a phrase search as suggested is the way to go (name:"Chinmay Sahu").
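Put together as an actual request, the edismax approach might look something like this rough sketch (the Solr host and collection name are placeholders):

  // Sketch: the edismax query above, sent to the select handler.
  const params = new URLSearchParams({
    q: "Chinmay Sahu",
    "q.op": "AND",
    defType: "edismax",
    qf: "name",
    wt: "json",
  });

  fetch(`http://localhost:8983/solr/<collection>/select?${params}`)
    .then((r) => r.json())
    .then((data) => console.log(data.response.docs)); // expect only the "Chinmay Sahu" doc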
I would suggest using a query like
name:(Chinmay Sahu)
and making sure the default operator is AND, either in the settings or in the query string (q.op=AND).
With that approach you can use the user's input more easily, since you don't need to parse it much.
