Does the upgrade-insecure-requests CSP header upgrade form actions?

Having trouble finding an answer to this. If I set the CSP upgrade-insecure-requests header on a page, will it upgrade form actions? The MDN docs on the topic say "non-navigational insecure resource requests" are upgraded, but it's not clear whether form actions count.
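For concreteness, the setup in question looks like this (example.com is a placeholder; the directive is shown here as a meta tag, though it can equally be sent as a response header):

```html
<!-- Equivalent response header: Content-Security-Policy: upgrade-insecure-requests -->
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">

<!-- The open question: is this insecure action rewritten to https:// on submit? -->
<form action="http://example.com/submit" method="post">
  <input type="submit" value="Send">
</form>
```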


Is there a GitLab REST endpoint/GraphQL solution to fetch changes requested on merge requests?

I am using the endpoint /api/v4/groups/:group_id/merge_requests to fetch merge requests in a group.
However, what I need is the number of times changes were requested on a particular merge request, and the content of those requests if possible.
While working with GitHub, fetching the count of review comments is possible with repos/{orgName}/{repoName}/pulls/{pullNumber}, which returns a field
{"review_comments": 0} giving the number of review comments on the pull request.
Is the same possible with the available GitLab REST endpoints?
There is a thread in the GitLab forum on this that could help.
https://forum.gitlab.com/t/gitlab-rest-api-endpoint-for-changes-requested-on-merge-requests/77164/4
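One hedged workaround, assuming your GitLab version records a system note on the merge request when a reviewer requests changes (that is an assumption about note wording, not documented behavior): fetch the MR's notes via the documented notes endpoint and count the matching system notes. The instance URL and token below are placeholders.

```python
import json
import urllib.request

GITLAB = "https://gitlab.example.com"  # placeholder instance URL
TOKEN = "glpat-..."                    # placeholder personal access token

def notes_url(project_id, mr_iid):
    # Documented endpoint: GET /projects/:id/merge_requests/:merge_request_iid/notes
    return f"{GITLAB}/api/v4/projects/{project_id}/merge_requests/{mr_iid}/notes?per_page=100"

def count_change_requests(project_id, mr_iid):
    """Count system notes whose body mentions 'requested changes'.

    Assumption: the GitLab version in use emits such a system note when
    a reviewer requests changes; adjust the matched text to your version."""
    req = urllib.request.Request(
        notes_url(project_id, mr_iid),
        headers={"PRIVATE-TOKEN": TOKEN},
    )
    with urllib.request.urlopen(req) as resp:
        notes = json.load(resp)
    return sum(
        1 for n in notes
        if n.get("system") and "requested changes" in n.get("body", "")
    )
```

Note that this uses project-level merge request IIDs rather than the group endpoint, since notes are fetched per merge request.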

What does atlOrigin query parameter mean in Jira URL?

When I open a Jira issue link from a third party or copy from a clipboard I always find the URL looks like this:
https://mycompany.atlassian.net/browse/companyAlias-issueNumber?atlOrigin=longCharacters
I am curious: what does atlOrigin mean, and why do they use it?
There is a small explanation here https://developer.atlassian.com/developer-guide/client-identification/
Origin ID for links to Atlassian UI
Identify HTML links into our product user interface with a unique atlOrigin query string parameter added to the URL. This is similar to the UTM parameter used for click tracking in marketing web sites. In this context, uniqueness is guaranteed by Atlassian issuing the origin ID. For example, if linking into Jira’s Issue page:
https://devpartisan.atlassian.net/browse/TIS-11?atlOrigin=abc123
We generate a unique value for each integration so if you have more than 1 integration, please make sure we have properly associated each integration to a different origin value.
We do not recommend using this parameter in REST APIs or OAuth 2.0 flows, where this parameter might not be properly ignored.
Unfortunately, that result is very hard to surface via Google Search 😕
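Since atlOrigin is purely a tracking parameter, it is safe to strip before sharing or storing a link. A small sketch using Python's standard library:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_atl_origin(url):
    """Remove the atlOrigin tracking parameter, keeping all other query params."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != "atlOrigin"]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(strip_atl_origin("https://mycompany.atlassian.net/browse/ABC-123?atlOrigin=abc123"))
# → https://mycompany.atlassian.net/browse/ABC-123
```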

SharePoint Graph API - There is no update page endpoint - looking for suggestions on best way to do this

I am currently working with the SharePoint Graph API.
This is the scenario I need to achieve:
Somehow call the page by name, not ID:
if a page called my_page_name.aspx exists:
    update it
else:
    create the page
As far as I can work out, there is no update page endpoint in the documentation here:
https://learn.microsoft.com/en-us/graph/api/resources/search-api-overview?view=graph-rest-beta
The only thing I can think of to do is the following:
Call the page by name
delete it
create a new page with the same name with the new content I want to update
Any help on how to achieve this would be great.
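The delete-and-recreate workaround above could be sketched like this against the beta pages endpoint. The site ID, token, and page payload are placeholders, and matching by name is done client-side since I'm not aware of a documented lookup-by-name; being a beta API, all of this is subject to change.

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/beta"    # pages API is beta only
SITE_ID = "contoso.sharepoint.com,guid,guid"  # placeholder site ID
TOKEN = "eyJ..."                              # placeholder bearer token

def _request(method, url, body=None):
    # Minimal Graph call helper; DELETE returns 204 with an empty body.
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(url, data=data, method=method, headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp) if resp.status != 204 else None

def match_page(pages, name):
    """Pick the id of the page whose 'name' matches (client-side filtering)."""
    return next((p["id"] for p in pages if p.get("name") == name), None)

def replace_page(name, new_page_body):
    """Delete-and-recreate workaround, since there is no update endpoint."""
    pages = _request("GET", f"{GRAPH}/sites/{SITE_ID}/pages")["value"]
    page_id = match_page(pages, name)
    if page_id is not None:
        _request("DELETE", f"{GRAPH}/sites/{SITE_ID}/pages/{page_id}")
    return _request("POST", f"{GRAPH}/sites/{SITE_ID}/pages", new_page_body)
```

Be aware that delete-and-recreate loses anything not reproduced in the new payload (version history, comments, the old page ID), so it is not a true in-place update.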
Yes, as of now we don't have any 'Update Page' endpoint, and there is already a uservoice for this here. Please upvote that uservoice so the feature can be developed in future. Also remember that since the pages endpoint is in beta, it is subject to change and not recommended for production use.

Acumatica - Opportunities Create Quote button - Contract-Based REST API

Good day
I have created an endpoint to use the Create Quote action on the Opportunities page. When I send the POST to the action, I get a 202 response but the Quote is never created. I believe it is because I am only "clicking" Create New Quote and not the Create button (or Create and Review) on the dialog box?
Is this a limitation of the REST API? In the wiki they do talk about the pop-up panels with the SOAP connection:
https://help-2020r1.acumatica.com/Help?ScreenId=ShowWiki&pageid=0ff94cd6-a46e-4cf7-8b97-51b79a6b3257
The 202 response only means that the action was accepted and is in progress.
In order to see the actual result you will need to perform a GET on the URL in the Location header of the response
You can find more information about this here:
https://help-2020r1.acumatica.com/Help?ScreenId=ShowWiki&pageid=91bf9106-062a-47a8-be1f-b48517a54324
Check the Response and Status Code sections
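Following the answer above, a polling sketch might look like this. The instance URL and auth cookie are placeholders; the 202-while-in-progress / 204-on-completion behavior is taken from the linked help page.

```python
import time
import urllib.request

BASE = "https://acumatica.example.com"  # placeholder instance URL

def absolute_location(location):
    # The Location header may be relative; resolve it against the instance URL.
    return location if location.startswith("http") else BASE + location

def wait_for_action(location, cookie, timeout=60):
    """Poll the URL from the 202 response's Location header.

    GET on that URL returns 202 while the action is still in progress
    and 204 once it has completed."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        req = urllib.request.Request(
            absolute_location(location), headers={"Cookie": cookie}
        )
        with urllib.request.urlopen(req) as resp:
            if resp.status == 204:
                return True  # action completed; the quote should now exist
        time.sleep(2)
    return False  # still in progress when the timeout expired
```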

How long does Google take to crawl a new page, and can we influence Google's crawler?

I want to submit my site to Google. How much time does it take to crawl a new post on the website?
Also, is there a way to feed this post to Google crawler as soon as a post is created?
Google has three modes of entering a website into its results - discover, crawl, index.
In order to 'discover' your site, it must be made aware of its existence - normally through back-links. If your site is brand new you can use the submit URL form - but this isn't really a trusted method. You're better off signing up for a Google Webmaster Tools account and submitting your site. An additional step is to submit an XML sitemap of your site. If you are publishing to your site in a blogging/posting way - you can always consider PubSubHubbub.
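For reference, a minimal XML sitemap is just a list of URLs with optional metadata (example.com and the date below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/my-new-post/</loc>
    <lastmod>2012-01-22</lastmod>
  </url>
</urlset>
```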
From there on, crawl frequency is normally based on site popularity (as measured by ye olde PageRank). Depth of crawl (crawl-budget) is also determined by PR.
There are a couple ways to help "feed" the Google Crawler a URL.
The first way is to submit a URL here: www.google.com/webmasters/tools/submit-url/
The second way is to go to your Google Webmaster Tools, click "Fetch as GoogleBot",
and then input the URL you want to add:
(screenshot of the "Fetch as Googlebot" form: http://i.stack.imgur.com/Q3Iva.png)
The URL will then appear similar to this:
http://example.site | Web | Success | URL submitted to index | 1/22/12 2:51 AM
As for how long it takes for a page to appear on Google, there are many factors that play into this.
If the owners of the site use Google Webmasters Tools, the following setting is available:
(screenshot of the crawl-rate setting: http://i.stack.imgur.com/RqvOi.png)
For a fast crawl you should submit your XML sitemap in Google Webmaster Tools and manually request crawling and indexing of your page URLs through the webmaster fetch feature.
I used this crawl-and-index method myself, and it gave me the best results.
This is a great resource that really breaks down all the factors that affect crawl budget and how to optimize your website to increase it. Cleaning up your broken links and removing outdated content, for example, can work wonders: https://prerender.io/crawl-budget-seo/
I acknowledged the error in my response by adding a comment to the original question a long time ago. Now I am updating this post to keep future readers from being misguided as I was. Please see the notes from other users below; they are correct: Google does not make use of the revisit-after meta tag. I am keeping the original response text here so that anyone looking for a similar answer will find it along with this note confirming that this meta tag IS NOT VALID. Hope this helps someone.
You may use HTML meta tag as follows:
<meta name="revisit-after" content="1 day">
Adjust the time period as necessary. There is no guarantee that robots will return within the given time frame, but this is how you tell robots how often a given page is likely to change.
The Revisit Meta Tag is used to tell search engines when to come back next.
