In the Socrata Catalog Search API (beta), the "link" property seems to be incomplete for some "page" resource types.
For example in:
http://api.us.socrata.com/api/catalog/v1?&domains=data.cityofnewyork.us&only=pages
many of the resources have an incomplete "link" property, for example:
link: "https://data.cityofnewyork.us/view/"
Most of these pages also seem to exist under a different "id" on the public website.
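For reference, a minimal sketch (Python, assuming the catalog response keeps its usual results / resource / link structure) that flags the truncated links:

```python
import requests

# Query the catalog for "page" results on the data.cityofnewyork.us domain
resp = requests.get(
    "http://api.us.socrata.com/api/catalog/v1",
    params={"domains": "data.cityofnewyork.us", "only": "pages"},
)
resp.raise_for_status()

for result in resp.json().get("results", []):
    link = result.get("link", "")
    # A complete link should have a resource id after "/view/"; links that
    # end right there are the incomplete ones described above.
    if link.endswith("/view/"):
        print(result.get("resource", {}).get("id"), link)
```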
Thanks for pointing that out! I've filed a bug with our engineering team to fix the bad links.
While creating a knowledge base in Dialogflow from a URL, I get an "Error" message, even though I can see the FAQs at this URL when I open it in a browser. If feasible, please suggest how I can find the exact reason for this error, as Dialogflow doesn't give any other relevant details.
The URL I am configuring the knowledge base with is:
https://www.owens.edu/faq/early-alert/
The full error message is the following:
"Failed to crawl https://www.owens.edu/faq/early-alert. Please verify that your URL is publicly accessible and is hosted on a site that can be indexed by Google Search."
I tested the FAQ page you shared and, using Chrome's Developer Tools, I was able to see that error message. I suggest you take a look at the "Supported content" documentation for knowledge bases in Dialogflow. There, you can find the following statement:
Files from public URLs must have been crawled by the Google search indexer, so that they exist in the search index. You can check this with the Google Search Console. Note that the indexer does not keep your content fresh. You must explicitly update your knowledge document when the source content changes.
Therefore, make sure to meet all the requirements listed there.
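As a quick sanity check alongside Google Search Console, here is a minimal sketch (Python; these checks only approximate what Google's indexer requires and don't prove the page is actually in the search index) to verify the page is publicly reachable and not blocked from crawling:

```python
import urllib.robotparser
import requests

url = "https://www.owens.edu/faq/early-alert/"

# 1. Publicly accessible: a plain GET with no cookies or auth should return 200.
resp = requests.get(url, timeout=10)
print("HTTP status:", resp.status_code)

# 2. Crawlable: Googlebot must not be blocked by robots.txt,
#    and the page must not carry a noindex directive.
rp = urllib.robotparser.RobotFileParser("https://www.owens.edu/robots.txt")
rp.read()
print("Googlebot allowed by robots.txt:", rp.can_fetch("Googlebot", url))
print("noindex in page source:", "noindex" in resp.text.lower())
print("X-Robots-Tag header:", resp.headers.get("X-Robots-Tag"))
```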
The mediaItems:search endpoint is okay, but I really need a keyword search, just like the actual website has (e.g. "football").
I've tried using the Google Custom Search API and pointing it to photos.google.com, but that is unable to get past the login screen even though I'm authenticated.
Anyone else have any workarounds for a keyword search?
Keyword search is currently not supported in the Google Photos Library API.
There's a feature request on the issue tracker that you can star to draw attention to it and be notified of updates: https://issuetracker.google.com/110300471
At the moment you can search the library by what's in the photo ("content categories"), dates, media types and archived state. More information about what's currently supported is in the developer documentation: https://developers.google.com/photos/library/guides/apply-filters
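For example, a minimal sketch (Python, assuming you already have an OAuth access token with a photoslibrary scope) that approximates a "football" search with the SPORT content category:

```python
import requests

ACCESS_TOKEN = "ya29...."  # obtained from your OAuth 2.0 flow (placeholder)

body = {
    "pageSize": 50,
    "filters": {
        # No free-text keywords are available: the closest match for
        # "football" is the SPORT content category.
        "contentFilter": {"includedContentCategories": ["SPORT"]},
        # dateFilter, mediaTypeFilter, featureFilter and includeArchivedMedia
        # are the other filters described in the apply-filters guide above.
    },
}

resp = requests.post(
    "https://photoslibrary.googleapis.com/v1/mediaItems:search",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=body,
)
resp.raise_for_status()

for item in resp.json().get("mediaItems", []):
    print(item["filename"], item["productUrl"])
```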
We are using SCA's default feature to share product details on social platforms.
SCA's and the Open Graph protocol's documentation say that getting the product title, description, image, and URL requires meta tags on the HTML page.
We tried configuring all the required meta tags, and they do appear under the <head> element (to see this, open the browser console and check the Elements tab under the head tag), but we are not able to see the image and description on the social platform after sharing the product URL. It only shows the product URL.
If you view the source of the product details page, you will not see any of the configured meta tags (og tags) there, and we thought this could be the reason. Since the product details page is served by the Shopping SSP, we tried adding hard-coded meta tags in the shopping.ssp file itself, and that works for us.
But the question here is that we need the actual product image, description, and title in shopping.ssp; how can we get those in the shopping.ssp file?
Or is there any way to get the meta tags working with the default feature?
How can I add meta tags in the shopping.ssp file, or how can I get item details in the shopping.ssp file?
Which social platform are you trying to use? Facebook has a resource to troubleshoot the implementation at https://developers.facebook.com/.
One thing we discovered is that the URL for the image has to match exactly: because we were using a folder titled "item images", our images were not showing up due to the space between the two words.
There's also more information at the developer's site:
https://developers.suitecommerce.com/
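One way to see what a scraper actually receives: social crawlers read the raw HTML returned by the server, not the DOM after SCA's JavaScript has run, so tags injected client-side won't be picked up. A minimal sketch (Python; the product URL is a placeholder) that fetches the page the way a scraper would and lists the og: tags present in the source:

```python
import re
import requests

url = "https://www.example.com/product-detail-page"  # placeholder product URL

# Fetch with a scraper-like user agent; no JavaScript is executed here,
# just like Facebook's crawler.
html = requests.get(url, headers={"User-Agent": "facebookexternalhit/1.1"}).text

# Crude scan of the raw source for og: meta tags (property before content)
og_tags = re.findall(
    r'<meta[^>]+property=["\'](og:[^"\']+)["\'][^>]+content=["\']([^"\']*)["\']',
    html,
    flags=re.IGNORECASE,
)

if og_tags:
    for prop, content in og_tags:
        print(f"{prop}: {content}")
else:
    print("No og: tags in the server-rendered source; "
          "the scraper will only see the bare URL.")
```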
I was wondering whether the following two queries are actually the same, or at least supposed to be the same:
GET https://graph.microsoft.com/beta/sharepoint/sites/{spsite-id},{spweb-id}/drives
and
GET https://graph.microsoft.com/beta/sharepoint/sites:/MYPATH:/drives
I would like to access a document library item in a SharePoint site through the relative path.
Please note that both endpoints below are equivalent for getting all document libraries (drives) in a site, according to the current beta Microsoft Graph documentation. The former comes in handy when you don't know the site id yet but do know the relative site URL.
https://graph.microsoft.com/beta/sites/[domain.sharepoint.com]:/[relative-url]:/drives
https://graph.microsoft.com/beta/sites/[site-id]/drives
(e.g. site id: "cie493742.sharepoint.com,4af352a7-a53b-43d9-b0a3-da372b392ea0,52c490f3-3354-40b9-a3c9-fefb08cb5c88" )
Now, to get a document library item:
Get the document library id from the list of drives:
https://graph.microsoft.com/beta/sites/[site-id]/drives
Get the item id from the list of items:
https://graph.microsoft.com/beta/sites/[site-id]/drives/[drive-id]/items
Final API call:
https://graph.microsoft.com/beta/sites/[site-id]/drives/[drive-id]/items/[item-id]
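Putting the steps together, a minimal sketch (Python; the token, hostname, and site path are placeholders, and the item listing here uses the drive root's children):

```python
import requests

GRAPH = "https://graph.microsoft.com/beta"
headers = {"Authorization": "Bearer <GRAPH_TOKEN>"}  # placeholder token

# Resolve the site id from the host name and server-relative path
site = requests.get(
    f"{GRAPH}/sites/contoso.sharepoint.com:/sites/MySite", headers=headers
).json()
site_id = site["id"]

# 1. Document libraries (drives) in the site
drives = requests.get(f"{GRAPH}/sites/{site_id}/drives", headers=headers).json()["value"]
drive_id = drives[0]["id"]  # in practice, pick the library you need by name

# 2. Items in that library (listing the root folder's children here)
items = requests.get(
    f"{GRAPH}/sites/{site_id}/drives/{drive_id}/root/children", headers=headers
).json()["value"]
item_id = items[0]["id"]

# 3. Final call: the single document-library item
item = requests.get(
    f"{GRAPH}/sites/{site_id}/drives/{drive_id}/items/{item_id}", headers=headers
).json()
print(item["name"], item["webUrl"])
```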
You could try experimenting with the Graph API from here.
With Kimono Web, the crawled payload always contained "url" and "index" fields in every source URL's JSON. But with the desktop app, these fields are missing, and my product totally depends on them.
I've been browsing the source code of Kimono Desktop, but I couldn't manage to find that part.
The index field is explained here: https://help.kimonolabs.com/hc/en-us/articles/203349674-Add-a-unique-index-to-each-result-object-
Can anyone help me with this?
Thanks
I've had the same issue. I found this workaround for the missing url field with the desktop application: http://mudd.com/blog/how-to-extract-vdp-data-from-your-website/
Also, if you used the crawl scheduling feature with the Kimono web app: I found that if I edit my APIs and save them again, it lets me choose a crawl frequency. I just discovered this, so I'm crossing my fingers and waiting to see if it really works.
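If you need a stopgap until the fields come back, here is a minimal sketch (Python; the endpoint, payload shape, and field names are assumptions based on the web app's old format) that re-adds an index and url to each result object after fetching:

```python
import json
import requests

api_url = "http://localhost:5000/kimonoapis/YOUR_API_ID"  # placeholder desktop endpoint
source_url = "http://example.com/page-you-crawled"        # placeholder source URL

payload = requests.get(api_url).json()

# Number each result object and record which source URL it came from,
# mirroring the "index" and "url" fields the web app used to include.
for collection in payload.get("results", {}).values():
    for i, result in enumerate(collection):
        result.setdefault("index", i)
        result.setdefault("url", source_url)

print(json.dumps(payload, indent=2))
```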