Google Document export via API - google-docs

After using Zend_GData to retrieve a document list feed, I can use the content URLs in the form:
http://docs.google.com/document/edit?id=<docid>&hl=en
but the source URLs in the form
http://docs.google.com/feeds/download/documents/Export?docId=<docid>&exportFormat=html
are returning 404 errors. That URL should return the document's content in the requested format.
This problem is mentioned, without resolution, on a Google API forum. As noted in that forum post, it only seems to affect new documents: my code retrieves old documents perfectly, but new ones return 404s.
Has something changed in the way Google references new documents or in the way permissions are assigned?
The code I'm using is essentially the same as the code on this page, but the issue does not seem to be specific to PHP/Zend_Gdata.

This appears to be a bug in Google's code; a fix is pending:
http://code.google.com/p/gdata-issues/issues/detail?id=2023

Apparently, the (current) correct download URL for a document is
https://docs.google.com/feeds/download/documents/export/Export?id=<docid>&format=<format>
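For reference, a minimal sketch of fetching a document's HTML export from the corrected endpoint. The document ID and token are hypothetical placeholders, the Authorization header assumes a classic GData ClientLogin-style token, and Python's requests stands in for the Zend_Gdata HTTP client:

```python
# Minimal sketch: export a document as HTML via the corrected URL.
# DOC_ID and AUTH_TOKEN are hypothetical placeholders; the GoogleLogin
# header assumes a classic GData ClientLogin token for the Docs service.
import requests

DOC_ID = "your-doc-id"
AUTH_TOKEN = "your-gdata-token"

resp = requests.get(
    "https://docs.google.com/feeds/download/documents/export/Export",
    params={"id": DOC_ID, "format": "html"},
    headers={"Authorization": f"GoogleLogin auth={AUTH_TOKEN}"},
)
resp.raise_for_status()  # the old Export?docId=... form 404s here for new docs
html = resp.text
```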

Related

How to update link to documents in Sharepoint

Recently, we updated the URL for our SharePoint site. I have added AAMs to redirect any users trying to access the site through the old URL to the new one. So far, everything has been working fine. While browsing a document library, though, I noticed that when I click the ellipsis on a document's name to copy the link associated with the document, the URL appears as http://NewSiteURl/http://OldSiteURL/...
Not every document in the library presents links like this, but it shows up in random documents throughout. How can I fix this so that no link to a document within the library contains the old site URL?

Using Microsoft Graph API to Query Certain Sharepoint URIs with .aspx Extensions

I have a question about SharePoint combined with the Graph API. I'm trying to do a GET request against a SharePoint site, but it doesn't return anything when the URL has an .aspx extension. For example, 'GET https://graph.microsoft.com/v1.0/sites/hostname.sharepoint.com:/sites/blablabla/UK' returns a response fine, but 'GET https://graph.microsoft.com/v1.0/sites/hostname.sharepoint.com:/sites/blablabla/UKDTAppKZ/something.aspx' gives me a 404 error suggesting the site doesn't exist... Could I get some clarification on how to use Graph GET queries with SharePoint URLs, specifically ones with .aspx extensions?
In your first URL you're accessing the Site object for the /sites/blablabla/UK subsite, so you should get back a valid Site object (assuming the URL is correct), as you indicated. To access the files in that site you need to go through its Drives (the Document Libraries) and then get the children or the specific item you're looking for.
Path support isn't always consistent right now, so wherever possible I like to use IDs if I know them.
With an ID, the URL would be:
https://graph.microsoft.com/v1.0/sites/HOSTNAME.sharepoint.com,SITECOLLECTIONGUID,SITEGUID/drives/DRIVEID/root/children
OR
https://graph.microsoft.com/v1.0/sites/HOSTNAME.sharepoint.com,SITECOLLECTIONGUID,SITEGUID/drives/DRIVEID/items/ITEMID
For Pages specifically, though, I would take a look at the beta Pages API we added recently. If you want to perform any operations on a page (like publishing), you'll want that API instead of the basic drive API.
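To make the shape of the ID-based calls above concrete, here is a minimal sketch in Python, assuming you have already acquired an OAuth access token with the appropriate Sites/Files read scopes; all IDs are hypothetical placeholders:

```python
# Minimal sketch: list the children of a drive's root via Microsoft Graph.
# TOKEN, SITE_ID, and DRIVE_ID are hypothetical placeholders.
import requests

TOKEN = "your-oauth-bearer-token"  # assumed already acquired
SITE_ID = "HOSTNAME.sharepoint.com,SITECOLLECTIONGUID,SITEGUID"
DRIVE_ID = "DRIVEID"

base = f"https://graph.microsoft.com/v1.0/sites/{SITE_ID}/drives/{DRIVE_ID}"
resp = requests.get(f"{base}/root/children",
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

# Note the IDs: they can be fed back into /items/ITEMID lookups later.
for item in resp.json().get("value", []):
    print(item["id"], item["name"])
```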

Where can I find actual specs for Imgur api?

I'm trying to implement the Imgur API, but I'm facing a problem: the data models presented at https://api.imgur.com/models have wrong optional/non-optional attributes.
I know the Imgur API has new docs at https://apidocs.imgur.com, but the links to the Response Models point to the old docs.
For example: Image model
It seems that the title field is non-null, but it can equal null in an actual response. Is such an object invalid? Should I reject it?
When I follow the link above, I get a notification to go to the new documentation site at https://apidocs.imgur.com.
Are you perhaps viewing the old site?
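Whichever docs version is current, one practical answer to "should I reject the object?" is to parse defensively and treat title as optional. A minimal sketch, assuming the field names from the Image model linked above:

```python
# Minimal sketch: tolerate a null/missing title instead of rejecting the
# whole image object. Field names follow the Image model referenced above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImgurImage:
    id: str
    link: str
    title: Optional[str]  # documented as non-null, but null in practice

def parse_image(payload: dict) -> ImgurImage:
    return ImgurImage(
        id=payload["id"],
        link=payload["link"],
        title=payload.get("title"),  # None rather than a KeyError
    )
```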

Kimono Desktop's payload url and index fields missing

With Kimono Web, the crawled payload always contained url and index fields in every source URL's JSON. With the desktop version, these fields are missing, and my product depends entirely on them.
I've been browsing the source code of Kimono Desktop but couldn't find the relevant part.
The index field is explained here: https://help.kimonolabs.com/hc/en-us/articles/203349674-Add-a-unique-index-to-each-result-object-
Can anyone help me with this?
Thanks
I've had the same issue. I found this workaround for the missing url field with the desktop application: http://mudd.com/blog/how-to-extract-vdp-data-from-your-website/
Also, in case you used the crawl scheduling feature with the Kimono web app, I found that if I edit my APIs and save them again it lets me choose a crawl frequency. I just discovered this so I'm crossing my fingers and waiting to see if it's really going to work.
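Until there's an official fix, the missing fields can also be re-added in post-processing. A minimal sketch, assuming the desktop payload keeps the web app's shape of named collections under a results key (that shape is an assumption; adjust to whatever the desktop app actually emits):

```python
# Minimal sketch: re-add index (and url) to each result object.
# The results -> collections -> rows payload shape is an assumption.
import json

def add_index_and_url(payload: dict, source_url: str) -> dict:
    for rows in payload.get("results", {}).values():
        for i, row in enumerate(rows, start=1):
            row.setdefault("index", i)          # unique per-collection index
            row.setdefault("url", source_url)   # URL the row was crawled from
    return payload

with open("payload.json") as f:
    fixed = add_index_and_url(json.load(f), "http://example.com/source-page")
```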

Retrieving files from blog media entries

The tool I'm building needs to pull data from IBM Connections Ideation Blogs, so I use the Connections API with basic authentication to read blog entries. This goes well until a description contains images. When I ask the API for the blog's media resources, it does not show any entries from the /BLOGS_UPLOADED_IMAGES location, the one containing images uploaded through the blog's rich-text editor. The user in my API call is the same user who created the blog entries and uploaded the pictures.
However, the API call DOES return images I publish using the API and a POST request to the blog's media entry collection. This is where the next problem appears. The Atom entries for those images contain various links, one of them with rel="enclosure", for which the API documentation (link) tells me to "Use the web address in the href attribute to obtain the binary content of the file". However, my calls to this address are always answered with a 404 response code.
Another URL in the Atom entry (this time on the content element) is described by the same documentation (see link above) as: "Provides access to the document's media. The following operation is supported: GET: Use the web address to obtain the media." When I call this URL, as always with basic authentication credentials attached, the response contains the HTML of the Connections login form, so API authentication does not seem to be supported on this URL. This only happens for non-public communities, which of course require authentication; if the picture is publicly available, everything works fine.
Am I missing something? Is there another way to retrieve the actual image from a blog's media entry through the API? Are manually uploaded pictures never included in the media entries result, or is this a bug?
It now magically works using the rel="enclosure" link from the Atom entry. I may have gotten something wrong with authentication before (although I honestly can't tell what I'm doing differently now).
The remaining problem: pictures uploaded through the rich-text editor into the /BLOGS_UPLOADED_IMAGES folder still do not appear in the blog's media feed.
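For anyone hitting the same 404 or login-form responses, a minimal sketch of the call that eventually worked here: a plain GET against the rel="enclosure" href with basic authentication attached. The URL and credentials are hypothetical placeholders.

```python
# Minimal sketch: fetch the binary behind the rel="enclosure" link.
# ENCLOSURE_HREF and the credentials are hypothetical placeholders.
import requests

ENCLOSURE_HREF = "https://connections.example.com/blogs/media/entry-href"
resp = requests.get(ENCLOSURE_HREF, auth=("blog-user", "password"))
resp.raise_for_status()  # a 404 here reproduces the original problem
if "text/html" in resp.headers.get("Content-Type", ""):
    raise RuntimeError("Got the Connections login form instead of the image")

with open("image.jpg", "wb") as f:
    f.write(resp.content)
```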
