Google Docs viewer URL parameters - google-docs

Is there any sort of documentation on exactly what parameters you can put in the URL of the Google viewer?
Originally, I thought it was just url, embedded, and chrome, but I've recently come across other odd ones like a, pagenumber, and a few others for authentication etc.
Any clues?

One I know is "chrome":
If you've got https://docs.google.com/viewer?........&chrome=true
then you see a fairly heavy UI version of that doc, whereas with chrome=false you get a compact version.
But indeed, I'd like a complete list myself!

I know this question is very old and perhaps you already solved your issue, but for anyone on the internet who might be looking for an answer...
I have been looking for this recently, following a guide I found on GitHub Gist
https://gist.github.com/tzmartin/1cf85dc3d975f94cfddc04bc0dd399be
More specifically, the option to embed a certain page of a PDF using
<iframe src="https://docs.google.com/viewer?srcid=[put your file id here]&pid=explorer&efh=false&a=v&chrome=false&embedded=true" width="580px" height="480px"></iframe>
The best I could find was this article (which I suppose has been around for a long time):
https://weekly-geekly.github.io/articles/111647/index.html
HOWEVER, I tried modifying the attributes and the result was simply a redirect to
https://drive.google.com/file/d/[ID]/edit
https://drive.google.com/file/d/[ID]/preview or
https://drive.google.com/file/d/[ID]/view
AS OF MAY 2020, THIS SOLUTION PROBABLY DOESN'T WORK

I'm also on a quest to discover some of the parameters of the viewer.
the "chrome" parameter doesn't seem to do anything, though. Is this
supposed to be the same as embedded=true?
Parameters I know of:
url= (obviously)
embedded= (obviously)
hl= set language of UI (tooltips)
#:0.page.1 = jump to page 2 (the first page is numbered 0) - this is unreliable and often requires a refresh after the first load, defeating the purpose.
That said, when I use the Google Docs viewer on my site, "fit page to
screen" is the default view without any parameters. So maybe I'm
misunderstanding your question.
Source: For convenience, this is a full quote of the sole answer (from user k3david) to the crosspost of this question that @Doc posted to the Google support forum in 2011.

You can pass q=whatever to send a search query to the viewer.
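Pulling the parameters reported above together: below is a minimal Python sketch that assembles a viewer embed URL. None of these parameters are officially documented, so treat their behavior (and continued existence) as unverified.

from urllib.parse import urlencode

# Parameters collected from the answers above; all undocumented,
# so any of them may be ignored or break without notice.
params = {
    "url": "https://example.com/some.pdf",  # document to display (example URL)
    "embedded": "true",                     # compact, embeddable UI
    "chrome": "false",                      # reportedly toggles the heavy UI
    "hl": "en",                             # UI/tooltip language
    "q": "search term",                     # search query passed to the viewer
}
viewer_url = "https://docs.google.com/viewer?" + urlencode(params)
# The unreliable page-jump fragment mentioned above (e.g. "#:0.page.1")
# would be appended to viewer_url, not urlencoded.
print(viewer_url)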

Related

Simplest suggestion box (like Google's suggestions when you use search) in PHP (without jQuery)

I can't find a simple solution for suggestions in PHP/JavaScript (no jQuery or the like), so I'd appreciate some advice.
It is simple: when people search my site, I want them to see suggestions. I have made everything for 'feeding' that box, but I can't code or find a simple PHP/JavaScript-only solution, so please give me a useful link or some code.
Also, I forgot to say that I found XMLHttpRequest and made an implementation which works great, but since I have never used XMLHttpRequest before, I'm not sure whether it will work on all platforms and browsers. I tested in a few (IE, Firefox, Chrome, Safari) and it works on Windows, but will that XMLHttpRequest solution also work elsewhere (on mobile platforms, for example)?
Follow the link below for a good autocomplete script that doesn't use any JavaScript framework like jQuery or Prototype. This might be the thing you are looking for.
http://dhtmlx.com/docs/products/dhtmlxCombo/index.shtml
UPDATE
Below are some extra links where you can find similar autocomplete or auto-search solutions without jQuery:
http://www.brandspankingnew.net/specials/ajax_autosuggest/ajax_autosuggest_autocomplete.html
http://www.codeproject.com/Articles/8020/Auto-complete-Control
http://mattberseth.com/blog/2007/12/creating_a_google_suggest_styl.html
http://www.devthought.com/2008/01/12/textboxlist-meets-autocompletion/
http://wick.sourceforge.net/
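As for the XMLHttpRequest worry above: it is safe to rely on, since every major desktop and mobile browser has shipped it natively for years (only ancient IE versions needed the ActiveXObject fallback). The server-side 'feeding' endpoint can stay tiny, too. Below is a minimal sketch in Python rather than PHP (hypothetical word list and port), just to illustrate the protocol: read the q parameter, return matching suggestions as JSON.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Stand-in for your real suggestion data.
WORDS = ["search", "searchlight", "seashell", "suggest", "suggestion"]

class SuggestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET /suggest?q=pre -> JSON list of words starting with "pre"
        query = parse_qs(urlparse(self.path).query).get("q", [""])[0].lower()
        matches = [w for w in WORDS if w.startswith(query)] if query else []
        body = json.dumps(matches).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), SuggestHandler).serve_forever()

On the client, a plain XMLHttpRequest GET against that endpoint as the user types is all the suggestion box needs.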

Problems testing SharePoint with Selenium (timeouts, repeated auth and missed links)

I have some serious problems testing a SharePoint site with Selenium/Bromine. As I didn't find an answer via various searches, I hope someone here can point me in the right direction.
I am constantly getting timeouts opening the main page, but the server is definitely fast enough to answer the request and is at 90% idle. Nevertheless, I just get logs like these:
open http://username:passwd@10.13.110.54/default.aspx | Timed out after 90000ms
Test terminated The selenium server did not return OK
The auth popup appears at irregular intervals (every 5 to 10 clicks), although every open command uses http://username:passwd@10.13.110.54/ as a prefix.
Clicking on elements is sometimes not registered; the logs show a successful
isElementPresent link=myLink
click link=myLink
but the browser doesn't react. These are mainly in-page links which open a new folder or an editing box.
I'm not sure whether I should have posted this as three separate questions, but I didn't want to spam.
Hope someone can help me, as I have these problems now for nearly 3 weeks.
Thanks in advance
Thomas
For your question number 2: Okay, this is a really late reply. I stumbled on this page looking for the answer myself. Given that I have solved it in the meantime, I figured I'd post my answer for other people stumbling onto this page.
General solution:
You need to create or use a profile that lets Firefox automatically forward your credentials to the SharePoint website. You can create the profile manually and call it each time; see https://applicationtestingtips.wordpress.com/2009/12/21/seleniumrc-handle-windows-authentication-firefox/ for instructions.
Programmer solution: (works in python, should work similarly in Java)
Or you can create a new profile on the fly each time. I did that based on the information in the previously mentioned website. I use python for calling selenium, but this should be rather similar in whatever language you use to call selenium:
from selenium import webdriver

# have all your sharepoint hosts here in a comma-separated list
sharepointHosts = 'sharepoint1.mycompany.com,sharepoint2.mycompany.com'
ffProfile = webdriver.FirefoxProfile()  # a fresh profile, created on the fly
ffProfile.set_preference('network.automatic-ntlm-auth.trusted-uris', sharepointHosts)
ffProfile.set_preference('network.negotiate-auth.delegation-uris', sharepointHosts)
ffProfile.set_preference('network.negotiate-auth.trusted-uris', sharepointHosts)
driver = webdriver.Firefox(firefox_profile=ffProfile)

How do I move old content down in the search engine rankings?

There is some precedent for search-engine-ranking-related questions on StackOverflow, so please don't close this question. It's programming-related to the extent that HTML META tags can be called "programming".
Here's the problem:
We make FogBugz, the software project planning and bug tracking suite.
Either we did a great job with our old documentation or a crummy job with our new documentation, but for most of the popular searches on FogBugz terms, documentation for our old versions comes up.
Here's an example. For context, our current FogBugz version is FogBugz 7. The top two results for that search are for FogBugz 5, which is positively ancient.
As best I can tell, there are several options for getting these results out of the top slots, but each has problems:
A NOINDEX tag, but what happens if someone is actually searching for help on an old version?
Finding the incoming links to the old documentation and placing a NOFOLLOW on them to deprive the old docs of PageRank. Problem here is that it's really fiddly to find the links to the content, rather than changing the content itself.
The unavailable_after tag, which is just a time-delayed NOINDEX, with the same problem of removal rather than demotion.
I just want these old documentation versions to stop competing with our current versions, without being completely unavailable.
An approach I used in the past (3 years ago)
Change the URL of your old documentation, and change your own links to point to the new URL, e.g. abc.com/docs/fogbugz/v5/xyz becomes abc.com/docs/fogbugz/ancient/v5/xyz
Using the old URLs, implement a 301 redirect to your new v7 content, e.g. a request to abc.com/docs/fogbugz/v5/GettingStarted.html is redirected to abc.com/docs/fogbugz/v7/GettingStarted.html
In this way, existing links from external sites will take browsers to the latest documentation, and inform indexing robots that the page has moved.
Google will find the new links to your old documentation by indexing your site, but there will be no external links, thus reducing page rank.
Google will also find the new links to your new documentation, and as more sites link to it, its page rank will increase and so take priority.
This worked for me on a small scale (100 or so pages) site, and visitor attempts to view the old content rapidly dropped off.
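To illustrate step 2, here is a minimal sketch of the 301 redirect using Python's standard library; the abc.com paths above are hypothetical, and in practice this would usually be a web-server rewrite rule rather than application code.

from http.server import BaseHTTPRequestHandler, HTTPServer

class DocsRedirectHandler(BaseHTTPRequestHandler):
    OLD_PREFIX = "/docs/fogbugz/v5/"  # hypothetical old docs path
    NEW_PREFIX = "/docs/fogbugz/v7/"  # hypothetical current docs path

    def do_GET(self):
        if self.path.startswith(self.OLD_PREFIX):
            # 301 tells browsers and crawlers the page moved permanently.
            self.send_response(301)
            self.send_header("Location",
                             self.NEW_PREFIX + self.path[len(self.OLD_PREFIX):])
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

HTTPServer(("", 8080), DocsRedirectHandler).serve_forever()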
If a user does land on a v5 page, how about the MSDN approach of explicitly stating the version that the page describes, and providing links to the equivalent topic in the v6 and v7 docs?
I would suggest that external links to older versions get redirected to the latest version - with some sort of note that if you really needed version 5 the link is here.
I think a lot of the problem comes from the fact that search engines rank a page highly when many people link to it. Unless you can get everyone who links to your old documentation to link to your new documentation instead, the older documents will stay rated artificially high.
To overcome this, you might need to change the way you handle documentation pages. One good way would be to always show the newest information on a particular topic, and only by clicking a link on the page do you get to the older versions. Optimally, this would be the same page, with a different parameter stating which version you want documentation for.
What about trying the MSDN approach? You assign a version tag to your pages. When this page is displayed, its version number is displayed as well. Users will be able to see immediately that this information is deprecated.
You may need to write some stubs for new version pages like "This problem has been resolved in the current version" so that the users don't have to think you didn't do anything in 5 years. Some writing work, some interlinking but it's doable for a limited number of problematic pages.

Develop a website using Visualforce

I'm trying to develop a website using Visualforce with Apex, but I haven't been able to.
Please help me with documents and websites about them; I'm unable to find docs regarding Visualforce.
The best thing you can do to get started is check out:
An Introduction to Force.com Sites
Visualforce documentation: http://www.salesforce.com/us/developer/docs/pages/index.htm
Based on the comment to the question (doesn't really answer the original question):
Let's say your Salesforce environment is at https://na5.salesforce.com, your Visualforce page is called "myPage", and you want to display data from "myCustomObject__c" on it. Let's also say that there exists at least 1 record in the table of myCustomObjects and its ID is "0067000000AH3ME" (you can see the ID of the object in the URL when you view it, like https://na5.salesforce.com/0067000000AH3ME).
If your visualforce page looks similar to this:
<apex:page standardController="myCustomObject__c">
    <apex:pageBlock title="Hi Javatechi">
        <p>You're viewing the {!myCustomObject__c.Name} record.</p>
    </apex:pageBlock>
    <apex:detail relatedList="false" />
</apex:page>
Then this page should display something meaningful when you visit it via this url:
https://na5.salesforce.com/apex/myPage/?id=0067000000AH3ME
If you can make this example work for you, all the rest, like <apex:inputField> and <apex:commandButton>, should start to work too.
Run screaming from Salesforce now, before it is too late. It only gets worse.

How to get a description of a URL

I have a list of URLs and am trying to collect their "descriptions." By description I mean what comes up if you Google the link. For example, Googling http://stackoverflow.com shows the description as
A language-independent collaboratively
edited question and answer site for
programmers. Questions and answers
displayed by user votes and tags.
This is the data I'm trying to accumulate for the URLs I have.
I tried parsing the URLs' meta descriptions; however, most of them lack one (yet Google and other search engines manage to get a description somehow).
Any ideas? Should I just "google" each link and scrape the data? I have a feeling Google wouldn't like this...
Thanks guys.
Different search engines have different algorithms for extracting a description from a page if/when it lacks the description meta tag. Some ignore the tag even if it's there.
If you want the description Google has, the most accurate way to get it would be to scrape it. Otherwise, you could write your own or look around on the web for code that does it.
These are called snippets.
Google use proprietary (and possibly patented) methods to garner this information, so there is no simple answer.
As you suggest, they will use meta-description information if it is there. (How to set the meta-information to help Google.)
They will also honour requests from the page authors to NOT include snippets. (How to prevent Google from displaying snippets) You should probably respect this too (as well as robots.txt, of course.)
You may have some luck with existing auto-summary packages, such as OTS.
You may want to check AboutUs.org (i.e. http://www.aboutus.org/StackOverflow.com).
But, there's little chance that the site will have an aboutus page and not have a meta description.
Some info that might explain how google does this:
Webmasters/Site owners Help
Adding a URL to google
I am not familiar with Google APIs, but perhaps there is an official way to get such information.
Interesting. Some sources are better than others: for "audiotuts.com", Google has a worse description than AboutUs.com.
Google:
Nov 18th in General by Joel Falconer · Recently, an AUDIOTUTS reader asked me about creative process. While this is a topic that can't be made into a ...
AboutUs.com:
AUDIOTUTS is a blog/tutorial site for musicians, producers and audio junkies! It is the sister site of the popular PSDTUTS, VECTORTUTS and NETTUTS.
I hate problems like these... they should be trivial but they aren't!
If you can assume English content, you can first look for the meta description and, if that doesn't work, look for the first two or three sentence-like word sequences.
A product I worked on looked for the first P or DIV that contained more than one sequence of more than n "words" delimited by periods. It would use the first two or three sentence-like sequences, up to x total words, as a summary paragraph. It wasn't 100% accurate, but good enough for the average case; the word thresholds were adjusted a few times to eliminate things like navigation elements.
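A rough Python sketch of that heuristic, under the assumptions just described (the thresholds and the "sentence-like" test are illustrative guesses, not the product's actual rules):

import re
from html.parser import HTMLParser

class SnippetExtractor(HTMLParser):
    # Collects the meta description plus the text of each <p>/<div>.
    def __init__(self):
        super().__init__()
        self.meta_description = None
        self.blocks = []
        self._depth = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "description":
            self.meta_description = attrs.get("content")
        elif tag in ("p", "div"):
            self._depth += 1
            self.blocks.append("")

    def handle_endtag(self, tag):
        if tag in ("p", "div") and self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if self._depth:
            self.blocks[-1] += data

def describe(html, min_words=4, max_sentences=3):
    # Prefer the meta description; otherwise return the first block with
    # more than one sentence-like sequence of at least min_words words.
    parser = SnippetExtractor()
    parser.feed(html)
    if parser.meta_description:
        return parser.meta_description.strip()
    for block in parser.blocks:
        sentences = [s.strip() for s in re.split(r"(?<=\.)\s+", block)
                     if len(s.split()) >= min_words and s.strip().endswith(".")]
        if len(sentences) > 1:
            return " ".join(sentences[:max_sentences])
    return None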
