get url from new google site - google-sites

I created a site with the new Google Sites. Now I need to get the calling URL from within the site via HTML. When using window.location.href or document.URL, it returns something other than that URL (e.g. https://[1234]-sites-embeds.googleusercontent.com/s/embeds/code/inner-frame-minified.html?jsh=[] )
Is there any way to get the originally called URL, e.g. https://sites.google.com/view/[name]/home?
Thanks!

Although the new Google Sites has come a long way since it launched in November 2016, it is still very limited in terms of embedding and frames.
The only option is to use Classic Google Sites, as it allows you to use window.location.href through XML gadgets; XML gadgets are no longer available in the new Sites.
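For what it's worth, here is what the behaviour described in the question looks like from inside the embed; a small JavaScript sketch (the frame URL in the comments is illustrative):

// Inside a New Google Sites embed the snippet runs in a sandboxed iframe,
// so both of these return the inner frame's URL, not the site's public URL:
console.log(window.location.href); // https://...-sites-embeds.googleusercontent.com/s/embeds/...
console.log(document.URL);         // same inner-frame URL
// document.referrer may hint at the parent page, but it is not guaranteed
// (it can be empty or stripped by referrer policies):
console.log(document.referrer);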

Related

Using Microsoft Graph API to Query Certain Sharepoint URIs with .aspx Extensions

I have a question about SharePoint combined with the Graph API. I'm trying to do a GET request against a SharePoint site, but it doesn't return anything when the URL has a .aspx extension. For example, if I do 'GET https://graph.microsoft.com/v1.0/sites/hostname.sharepoint.com:/sites/blablabla/UK' this populates a response fine, but if I do 'GET https://graph.microsoft.com/v1.0/sites/hostname.sharepoint.com:/sites/blablabla/UKDTAppKZ/something.aspx' then I get a 404 error suggesting this site doesn't exist... Could I get some clarification on how to use Graph GET queries with SharePoint URLs, specifically .aspx extensions?
In your first URL you're accessing the Site object for the /sites/blablabla/UK subsite, so you should get back a valid Site object (assuming the URL is correct), as you indicated. To access the files in that site you need to access the Drives (Document Libraries) and then get the children or the specific item you're looking for. So the URL would look something like:
https://graph.microsoft.com/v1.0/sites/hostname.sharepoint.com:/sites/blablabla/UK:/drive/root/children
Path support isn't always consistent right now so wherever possible I like to use IDs if I know them.
So with an ID it would be:
https://graph.microsoft.com/v1.0/sites/HOSTNAME.sharepoint.com,SITECOLLECTIONGUID,SITEGUID/drives/DRIVEID/root/children
OR
https://graph.microsoft.com/v1.0/sites/HOSTNAME.sharepoint.com,SITECOLLECTIONGUID,SITEGUID/drives/DRIVEID/items/ITEMID
For Pages specifically though I would take a look at the Beta Pages API we added recently. If you want to do any operations (like publishing) to the page you'll want that API instead of the basic drive API.
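As a rough illustration, the ID-based call could be made with fetch like this (a sketch; ACCESS_TOKEN and the capitalized IDs are placeholders, and it assumes you already have a valid OAuth token for Graph):

// Placeholder token; in practice you obtain this via an OAuth flow.
const ACCESS_TOKEN = '...';

const url = 'https://graph.microsoft.com/v1.0/sites/' +
  'HOSTNAME.sharepoint.com,SITECOLLECTIONGUID,SITEGUID' +
  '/drives/DRIVEID/root/children';

fetch(url, { headers: { Authorization: 'Bearer ' + ACCESS_TOKEN } })
  .then(res => res.json())
  .then(data => console.log(data.value)); // value is an array of DriveItems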

How do I capture the URL after an external website login in ReactJS?

I want to retrieve the URL after opening an external website pop-up in my ReactJS/NodeJS application. Basically, in my application I have a button that redirects the page to the Microsoft Online login page. What I want is the URL of the page after the user logs into Microsoft Online.
Is there any way that's possible? If so, what are my options?
If you navigate to another webpage, your React application is no longer being served to your browser and can't do anything. You would need to have a script running on the Microsoft website, either by writing it into the source code (which I doubt you can do) or by some other method such as a browser extension.
There is no way to track other systems beyond the methods #izb mentioned, unless those systems already provide one.
Many systems provide information from their servers via push/ping callbacks.
With one payment system, for example, I redirect the request, the customer pays, and they redirect back to the page I configured beforehand in their panel, such as a success or failure page.
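If the external site eventually redirects the pop-up back to a URL on your own origin (as in the payment flow above), the opener page can read that final URL. A rough sketch, assuming a hypothetical /auth/callback path on your domain as the redirect target:

// Open the external login in a pop-up (the URL is a placeholder).
const popup = window.open('https://login.microsoftonline.com/...', 'login', 'width=500,height=600');

// While the pop-up is on a foreign origin, touching its location throws a
// cross-origin error; once it redirects back to our origin, it is readable.
const timer = setInterval(() => {
  try {
    const href = popup.location.href;
    if (href.includes('/auth/callback')) {
      console.log('Final URL after login:', href);
      clearInterval(timer);
      popup.close();
    }
  } catch (e) {
    // Still on the external origin; keep polling.
  }
}, 500);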

Specifying which Google account to use when executing a Google Apps Script webapp request

I have a browser extension which is scraping the threadId from the URL when the user is reading an email in Gmail, and is using this threadId to fetch circumstantial data using the Google Apps Script API.
The extension does not, however, know which of possibly several Google accounts is reading this message; it knows only the URL of my Apps Script web app and the threadId. So when it executes the fetch, the web app will interpret the request as coming from the default user session, which in some cases is wrong and will thus result in null when executing GmailApp.getThreadById(e.parameter.threadId).
So what I am wondering is whether it is possible to specify what Google account to use when querying the webapp. Are there any possibilities other than asking the user to log off all other accounts and set the current one as default?
Unfortunately Google Apps Script does not have good support for multiple logins. See this page for more information.
You can add an authuser parameter to the requests you make to your Google Apps Script web app.
The authuser param's values are zero-based indexes for all the Google accounts logged in to the current browser session.
As for extracting which index value you need to send, you can scrape the current page for profiles.google.com links that carry the authuser param, extract your value from them, and send it with your requests.
The link might look like this:
https://profiles.google.com/ ... authuser=0
Specifically for Gmail, the URL also contains the current authuser index. For example:
https://mail.google.com/mail/u/1/#inbox
The URL above contains the authuser value 1 (after /u/ and before the #inbox fragment).
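Putting that together, a sketch of what the extension could do from a Gmail tab (DEPLOYMENT_ID and THREAD_ID are placeholders; the thread id would come from the scraping you already do):

// Derive the authuser index from the Gmail URL, e.g. /mail/u/1/ -> '1'.
const match = window.location.pathname.match(/\/mail\/u\/(\d+)\//);
const authuser = match ? match[1] : '0';

// Placeholder; normally scraped from the Gmail URL.
const threadId = 'THREAD_ID';

// Forward both to the Apps Script web app.
const url = 'https://script.google.com/macros/s/DEPLOYMENT_ID/exec' +
  '?threadId=' + encodeURIComponent(threadId) +
  '&authuser=' + authuser;
fetch(url).then(res => res.text()).then(console.log);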
I know this is very complex and looks more like a hack. But I think it surely is a workaround until Google provides a better way of specifying the context for your Apps Script requests.
I hope this might be helpful.
Thanks.

How long does Google take to crawl a new page, and can we influence Google's crawler?

I want to submit my site to Google. How much time does it take to crawl a new post on the website?
Also, is there a way to feed a post to the Google crawler as soon as it is created?
Google has three modes of entering a website into its results - discover, crawl, index.
In order to 'discover' your site, it must be made aware of its existence - normally through back-links. If your site is brand new, you can use the submit URL form - but this isn't really a trusted method. You're better off signing up for a Google Webmaster Tools account and submitting your site there. An additional step is to submit an XML sitemap of your site. If you are publishing to your site in a blogging/posting way, you can always consider PubSubHubbub.
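For example, once you have a sitemap you can ping Google whenever it changes; a hedged example (example.com is a placeholder, and the ping is a hint to the crawler, not a guarantee):
http://www.google.com/ping?sitemap=http://example.com/sitemap.xml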
From there on, crawl frequency is normally based on site popularity (as measured by ye olde PageRank). Depth of crawl (crawl-budget) is also determined by PR.
There are a couple ways to help "feed" the Google Crawler a URL.
The first way is to submit a URL here: www.google.com/webmasters/tools/submit-url/
The second way is to go to your Google Webmaster Tools and click "Fetch as Googlebot",
then input the URL you want to add.
(screenshot: the Fetch as Googlebot form in Webmaster Tools)
The URL will then appear similar to this:
http://example.site   Web   Success   URL submitted to index   1/22/12 2:51 AM
As for how long it takes for a new post to appear on Google, many factors play into this.
If the owners of the site use Google Webmasters Tools, the following setting is available:
(screenshot: the relevant setting in Google Webmaster Tools)
For a fast crawl you should submit your XML sitemap in Google Webmaster Tools and manually submit your pages' URLs for crawling and indexing through the Webmaster Tools fetch feature.
I have used this crawl-and-index method myself, and this practice gave me the best results.
This is a great resource that really breaks down all the factors that affect a crawl budget and how to optimize your website to increase it. Cleaning up your broken links and removing outdated content, for example, can work wonders. https://prerender.io/crawl-budget-seo/ 
I acknowledged the error in my response by adding a comment to the original question a long time ago. Now I am updating this post in the interest of keeping future readers from being misguided as I was. Please see the notes from other users below; they are correct: Google does not make use of the revisit-after meta tag. I am keeping the original response text here so that anyone looking for a similar answer will find it, along with this note confirming that this meta tag IS NOT VALID! Hope this helps someone.
You may use HTML meta tag as follows:
<meta name="revisit-after" content="1 day">
Adjust the time period as necessary. There is no guarantee that robots will return within the given time frame, but this is how you tell robots how often a given page is likely to change.
The revisit-after meta tag is used to tell search engines when to come back next.

How do search engines recognize search boxes on websites?

I've noticed that a lot of the time when I search for something on Google, Google automatically uses the search function of relevant websites and returns the results of the website's search as if it were just another URL.
How do I let Google and other search engines know what the search box on my own website is, and does OpenSearch have anything to do with it?
Do you maybe mean the site search function via the Google Chrome omnibar?
To get there, you just need to have the following on the root page of your domain:
a form with method type GET
an input type text element
a submit button
If users go directly to your root page and search for something there, Google learns of this form and adds it to the search engines accessible via the omnibar (the Google Chrome address bar); see the example form below.
Did you mean this?
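A minimal example of such a form (a sketch; the /search action and the q parameter name are placeholders for whatever your site actually uses):

<form method="GET" action="/search">
  <input type="text" name="q">
  <input type="submit" value="Search">
</form>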
Google doesn't use anyone's search forms - it just finds links to search results. You need to:
Use GET for your search parameters to make this possible
Create links to common/useful search results pages (an example link is sketched below)
Make sure Google finds those links
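For instance, an ordinary crawlable link pointing at one of those results pages (the URL pattern is illustrative):

<a href="/search?q=popular+topic">Search results for "popular topic"</a>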
Google makes it look like just another URL because that is exactly what it is.
Most of the time, though, Google will do a better job than your search engine, so actually doing this could lower the quality of results from your site...
I don't think it does. It's impossible to spider sites in real time.
It's just an SEO technique some sites use to improve their ranking by spamming Google with fake results. They feed the Googlebot an endless stream of links to bogus pages:
http://en.wikipedia.org/wiki/Spamdexing
