Activate chrome extension manifest v3 for link checker

There are several link checkers in the Web Store that use manifest v2. Since manifest v2 doesn't require declaring any domain names or rules, those link checkers work fine. In manifest v3, the matches have to be declared in the manifest file.
When I use the fetch API in the service worker, the request is ignored if the URL doesn't match the patterns declared in the manifest.
How do we design the manifest so that we can hit any URL from the service worker using the fetch API?
I tried adding patterns, and they work for the URLs they match. But in a link checker I can't list them all, since the target can be any link.
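In MV3, cross-origin fetch from the extension's service worker is gated by host_permissions, not by content-script matches. A minimal manifest sketch for a checker that may request any URL (the name and filename are placeholders):

    {
      "manifest_version": 3,
      "name": "Link Checker",
      "version": "1.0.0",
      "background": { "service_worker": "background.js" },
      "host_permissions": ["<all_urls>"]
    }

With "<all_urls>" (or a broad pattern like "*://*/*") in host_permissions, fetch from the service worker is no longer restricted to pre-listed domains; expect Web Store review to ask why such broad access is needed.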

Related

API Management Developer Portal CI/CD

I'm looking to set up a CI/CD flow for the content in the developer portal, so we can make changes in our dev environment and promote them to higher environments. From what I found, this appears to have information on the suggested way to perform this:
https://learn.microsoft.com/en-us/azure/api-management/automate-portal-deployments
which links to this repo:
https://github.com/Azure/api-management-developer-portal
As far as I can tell, the majority of the code in that repository is for the developer portal itself, in case you want to self-host. I'm not interested in self-hosting; I'm only interested in the scripts contained there, as they will allow me to extract and publish the content between environments. Is there a repository with ONLY the code required for the scripts to run (specifically v3)? I'd prefer to avoid manually going through and deleting non-script files if possible, as I don't really know or understand what they all are.
If such a repository existed, it would enable my ideal scenario: fork that repository, run the "capture" script, then check the extracted developer portal content into the new repository.
Well, why don't you just copy the scripts.v3 folder and use it? As you noticed, you don't need the rest of the files if you're not running the self-hosted version, so you can simply copy and paste them. Those scripts are nothing more than a client for Azure REST API endpoints, written in node.js, and they can run completely independently of the rest of the repository.
If you don't like node.js, you can even write your own scripts to deploy the developer portal - in the language of your choice.
The developer portal contains Content Types, which contain Content Items. One extra thing is media (fonts, images, etc.), which is stored in the APIM blob storage. Those two things determine how the developer portal looks.
So all you need to do is:
Grab all content items from one instance (using the Azure REST API) and put them into the other APIM instance - a sketch of this step follows the list
Connect to the APIM blob storage, grab all media blobs, and put them into the other APIM's blob storage. You can get a SAS URL for the blob storage using the Azure REST API as well.
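A minimal sketch of that first step, assuming Node 18+ (for global fetch) and an ARM bearer token already in hand. The contentTypes/contentItems routes are what the scripts.v3 files call, but treat the exact api-version as an assumption and check the scripts for the value they pin:

    // migrate-content-items.mjs — sketch only; run as an ES module on Node 18+.
    // SRC/DST are full ARM resource IDs of the two APIM instances, e.g.
    // /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.ApiManagement/service/<name>
    const TOKEN = process.env.ARM_TOKEN;
    const SRC = process.env.SRC_APIM_ID;
    const DST = process.env.DST_APIM_ID;
    const API_VERSION = "2021-08-01"; // assumption — use whatever scripts.v3 pins

    const arm = (path, init = {}) =>
      fetch(`https://management.azure.com${path}?api-version=${API_VERSION}`, {
        ...init,
        headers: {
          Authorization: `Bearer ${TOKEN}`,
          "Content-Type": "application/json",
          ...init.headers,
        },
      });

    // Walk every content type, then copy each of its content items across.
    // (Pagination via nextLink is omitted to keep the sketch short.)
    const { value: types } = await (await arm(`${SRC}/contentTypes`)).json();
    for (const type of types) {
      const { value: items } = await (await arm(`${type.id}/contentItems`)).json();
      for (const item of items) {
        // An ARM id embeds the service path, so rewriting SRC -> DST retargets it.
        await arm(item.id.replace(SRC, DST), {
          method: "PUT",
          body: JSON.stringify({ properties: item.properties }),
        });
      }
    }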
And if you examine them carefully, that is exactly what those scripts do:
capture.js - takes all files from a given APIM instance and puts them into your local folder
generate.js - takes the files from your local folder and puts them into the APIM instance of your choice
migrate.js - just a combination of the previous two: it takes the files from one instance and puts them into another
cleanup.js - the same thing as the reset button in the developer portal; it brings back the default state
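For reference, a capture-then-generate round trip looks roughly like this; the flag names below are assumptions based on the repository's README, so check each script's usage output before relying on them:

    # Assumed flags — verify against the scripts.v3 README / usage output.
    node ./capture  --subscriptionId "<sub>" --resourceGroupName "<rg>" --serviceName "<dev-apim>"  --folder "./snapshot"
    node ./generate --subscriptionId "<sub>" --resourceGroupName "<rg>" --serviceName "<test-apim>" --folder "./snapshot"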

Securing assets on Microsoft Azure CDN (Verizon Premium) using token authentication

I am attempting to secure files in one folder on Verizon Premium CDN using token authentication. I have followed the docs here.
I have successfully installed the PHP module that encrypts requests and it is working correctly in my application, tested against the encrypt/decrypt tool in the Azure Portal.
I have set up a rule in the rules engine on the CDN endpoint, but when I access the files directly on the CDN endpoint they are not locked or secured and do not require any token key to load them. I am not sure if I have missed something, or if my Rules Engine rule is wrong. My custom rule in the rules engine is set up with this logic:
If 'URL Path Directory' 'matches' '/assets/v1/' then 'enable' 'Token Authentication', 'ignore case', relative to 'root' (the other option rather than root would be 'origin').
Have I got the path wrong? Am I missing some regular expression detail?
The only similar questions here on SO haven't successfully dealt with this problem and have just talked about privacy options on blobs/containers, etc. I originally tried the same setup against Azure Storage, using a container and blobs, but those didn't get secured in any way either. I have now set it up to serve the files from the web application site instead. I'm quite happy to set it up whichever way works best.

Configuring Access-Control-Allow-Headers in Azure Functions

I have an API built through Azure Functions that works when called through JavaScript in all browsers except for Safari. From another question, it appears I need to allow a header of "Origin" in the CORS configuration. The only configuration I see in the Azure portal is allowing origins. How do I configure allowed headers?
If it matters, this is developed and published through VS2017 Azure function tools.
How do I configure allowed headers?
It seems we can also do that in the Azure portal, in the function app's CORS settings.
To allow all origins, use "*" and remove all other origins from the list. Slashes are not allowed as part of the domain or after the TLD.
Allowed headers are not configurable within Azure Functions, as it allows all headers.
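One thing worth noting for the VS2017 development side: the portal CORS settings apply to the deployed app, while local runs read CORS from local.settings.json. A minimal sketch (the Values entries are typical placeholders):

    {
      "IsEncrypted": false,
      "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true"
      },
      "Host": {
        "LocalHttpPort": 7071,
        "CORS": "*"
      }
    }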

Azure Web Apps : How to access specific instance directly via URL?

We have deployed our Sitecore CMS to Azure Web Apps and are having some indexing issues or similar, i.e. the updated changes are reflected for some users but not for all.
We have scale-out turned up to 2 instances.
I would like to troubleshoot by accessing instances 1 and 2 directly via URL, to make sure both instances have their index built 100%.
How do I access each Azure Web App instance directly via URL?
Thanks.
The first step is to get the list of instance names. There is an Azure API for it, which you can easily invoke using Resource Explorer (https://resources.azure.com/). Use these steps:
In Resource Explorer, find your Web App (in the tree or using search box)
Under the app, click on Instances, which gives you an array of instances. Each instance has a long name like 622e6b27f9077701f23789e5e512844d22a7dfdd29261bc226f65cd000e2d94a
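If you'd rather script that lookup than click through Resource Explorer, the same instances route can be called directly. A Node 18+ sketch; the api-version is an assumption, and the site path is a placeholder:

    // list-instances.mjs — run as an ES module; ARM_TOKEN is a management-plane bearer token.
    const TOKEN = process.env.ARM_TOKEN;
    const SITE =
      "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<app>";

    const res = await fetch(
      `https://management.azure.com${SITE}/instances?api-version=2022-03-01`,
      { headers: { Authorization: `Bearer ${TOKEN}` } },
    );
    const { value } = await res.json();
    console.log(value.map((i) => i.name)); // the long instance identifiers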
Once you have the instance names, you can add a cookie in your requests to aim at a specific instance by setting the ARRAffinity cookie to that value. e.g.
ARRAffinity=622e6b27f9077701f23789e5e512844d22a7dfdd29261bc226f65cd000e2d94a
You can do it using a tool like curl. Or I like to use the EditThisCookie Chrome extension, which lets you set it from the browser.
In fact, you'll find that after hitting the page normally from the browser, you'll already get an ARRAffinity, as it's used for session stickiness. But the Chrome extension lets you change it and aim at other instances.
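For example, a minimal Node sketch that pins a request to one instance ("yourapp" is a placeholder; the instance ID comes from the Resource Explorer steps above):

    // probe-instance.mjs — run as an ES module on Node 18+.
    const instance =
      "622e6b27f9077701f23789e5e512844d22a7dfdd29261bc226f65cd000e2d94a";

    const res = await fetch("https://yourapp.azurewebsites.net/", {
      headers: { Cookie: `ARRAffinity=${instance}` }, // pins the request to that instance
    });
    console.log(res.status);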
See also related blog post: http://blog.amitapple.com/post/2014/03/access-specific-instance/

Exclude Google Analytics data from outside a Chrome extension

I created a Google Analytics account for my Chrome extension. I use a fake website URL in the parameters because GA doesn't accept a protocol like chrome-extension://...
As GA isn't linked to a specific domain that I own, it doesn't block data from outside. Is there a solution for this? Can GA use chrome-extension:// or my extension ID?
Thanks.
One of the solutions is to create a real web page on your host, for example http://example.com/analytics.html, and insert the Google Analytics script into that page.
Then inject this page as an iframe into the websites you need, using content scripts. This will trigger Google Analytics without problems.
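A minimal sketch of that content script (the analytics.html URL is the hypothetical page from above; the script must also be declared under content_scripts in the manifest):

    // content-script.js — injects the hidden analytics iframe into matched pages.
    const frame = document.createElement("iframe");
    frame.src = "http://example.com/analytics.html"; // the page hosting the GA snippet
    frame.style.display = "none"; // invisible; exists only to fire the GA hit
    document.body.appendChild(frame);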
