While creating a knowledge base in Dialogflow from a URL, I am getting the message "Error". However, I am able to see the FAQs that are on this URL when I open it in a browser. For reference, please find a screenshot below. If feasible, please suggest how I can find the exact reason for this error, as Dialogflow doesn't give any other relevant error.
The URL with which I am configuring the knowledge base is:
https://www.owens.edu/faq/early-alert/
The full error message is the following:
"Failed to crawl https://www.owens.edu/faq/early-alert. Please verify that your URL is publicly accessible and is hosted on a site that can be indexed by Google Search."
I tested the FAQ page you shared and, using Chrome's Developer Tools, I was able to see that error message. I suggest you take a look at the "Supported content" documentation for knowledge bases in Dialogflow. There you can see the following statement:
Files from public URLs must have been crawled by the Google search indexer, so that they exist in the search index. You can check this with the Google Search Console. Note that the indexer does not keep your content fresh. You must explicitly update your knowledge document when the source content changes.
Therefore, make sure to meet all the requirements listed there.
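If you want a quick way to sanity-check those prerequisites yourself, the sketch below might help. It is only a rough, hypothetical check and not part of Dialogflow; the Googlebot user-agent string and the plain "noindex" substring test are illustrative assumptions.

    import urllib.robotparser
    import urllib.request

    page_url = "https://www.owens.edu/faq/early-alert/"

    # 1. Would a Google crawler be allowed to fetch this page per robots.txt?
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://www.owens.edu/robots.txt")
    rp.read()
    print("Allowed by robots.txt:", rp.can_fetch("Googlebot", page_url))

    # 2. Is the page publicly reachable, and does it mention a "noindex" directive?
    req = urllib.request.Request(page_url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        html = resp.read().decode("utf-8", errors="replace")
        print("HTTP status:", resp.status)
        print("Mentions noindex:", "noindex" in html.lower())

If both checks look fine, the remaining requirement is that the page actually appears in Google's search index, which you can confirm in the Google Search Console as the documentation says.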
I have this weird problem when I try to use a simple default flow template to save email attachments to the company's main SharePoint site: company.sharepoint.com (not a subsite).
I got started by taking all the defaults of this flow; however, once I get to the point of providing the site address and document library path, I get the error highlighted in red.
Where I get confused is that when I create a subsite like company.sharepoint.com/sites/testsite and enter the subsite address, the folder path automatically populates with the folder structure for me to pick where I want to save the attachment.
I have given full owner permission to this test account with the same results, so permission is not the problem.
My question is: could it be that I'm using the wrong flow to save to a main SharePoint site, or is this something that's not allowed?
You could check the connector and recreate the connection to SharePoint.
In many cases, a 403 error code appears when a flow fails because of an authentication error. If you have this type of error, you can usually fix it by updating the connection, so please make sure you have updated the connection.
You could refer to this article.
Just in case anyone has a similar problem: the account with which you are creating the Power Automate flow must be a site collection administrator on the root SharePoint site.
I have created an agent in Dialogflow and I want to use that agent in Appian BPM. For the integration, I have used a Google service account JSON file,
but I cannot find the base URL; I don't know which URL I have to use.
I went through this link https://dialogflow.com/docs but didn't find anything in it. I also tried https://dialogflow.googleapis.com/v2beta1/{session=projects//agent/sessions/}:detectIntent, but I didn't understand how to use it.
Can someone please explain clearly with a proper example?
For reference please check this link:
https://photos.app.goo.gl/agNkDFgQkxomJZBb9
As per https://cloud.google.com/dialogflow/docs/quick/api#detect_intent, it's https://dialogflow.googleapis.com/v2/
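For example, a minimal sketch of calling detectIntent against that base URL with a service account JSON file could look like the following. The project ID, session ID and file name are placeholders, and the google-auth and requests packages are assumed to be available; this is only an illustration, not Appian-specific code.

    import requests
    from google.oauth2 import service_account
    from google.auth.transport.requests import Request

    # Placeholders: use your own service account file, project ID and session ID.
    credentials = service_account.Credentials.from_service_account_file(
        "service-account.json",
        scopes=["https://www.googleapis.com/auth/cloud-platform"],
    )
    credentials.refresh(Request())  # obtain an OAuth2 access token

    project_id = "my-project-id"
    session_id = "test-session-1"
    url = (
        "https://dialogflow.googleapis.com/v2/"
        f"projects/{project_id}/agent/sessions/{session_id}:detectIntent"
    )

    payload = {"queryInput": {"text": {"text": "Hello", "languageCode": "en"}}}
    response = requests.post(
        url,
        headers={"Authorization": f"Bearer {credentials.token}"},
        json=payload,
    )
    print(response.json())

From Appian, the equivalent would be an HTTP integration that POSTs the same JSON body to that URL with the bearer token in the Authorization header.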
By the way, the Google Photos link you shared doesn't work for me.
Reply from: https://github.com/google/physical-web/issues/595
For example, I am transmitting http://www.starbucks.com as the URL.
My phone looks for Physical Web pages, detects www.starbucks.com, and shows it to me in the Physical Web list in my Chrome.
As a user, this is how it will appear to me presently:
Now this does not convey much information to me.
The text "Order while you wait" has been taken from the metadata description of the page (as far as I know), and the title "Starbucks" has been taken from the title tag.
Now, say I can custom-define these parameters, for example like this:
Here, I have custom-defined the text of the same Starbucks URL that my phone's Physical Web scan found.
This adds relevancy to the URL: a user gets a clear message. Also, it allows stores to convey an effective contextual message.
Is this possible when you use ReactJS and JSX? Because you only have one HTML file, it always shows the default title that is in that HTML; even if you change it with document.title = "other title", the notification shows the original title and not the new one.
The text shown in the Physical Web notification is strictly given by the target website and you can influence it only there.
Chrome is actually not analyzing the target website. It's a Google server (the Physical Web Service) that analyzes it, and this server provides the information to Chrome. You seem to need to change the title instantly and often, so be careful about caching of already-resolved pages on the server.
The website analysis does not execute any JavaScript; it takes only what is written directly in the HTML, so the trick with document.title won't work.
But there is a different way to get the notifications: look at Google Nearby Notifications. In summary, it works based on Eddystone-UID: you register your UID with the service and configure it to redirect to the target website, and in that configuration you can specify the title and description. Look at the mentioned page for the details.
With Kimono Web, the crawled payload always contained a url and an index field in every source URL JSON. But with the desktop version these fields are missing, and my product totally depends on them.
I'm browsing the source code of Kimono Desktop, but I couldn't manage to find that part.
The index field is explained here: https://help.kimonolabs.com/hc/en-us/articles/203349674-Add-a-unique-index-to-each-result-object-
Can anyone help me with it?
Thanks
I've had the same issue. I found this workaround for the missing url field with the desktop application: http://mudd.com/blog/how-to-extract-vdp-data-from-your-website/
Also, in case you used the crawl scheduling feature with the Kimono web app, I found that if I edit my APIs and save them again, it lets me choose a crawl frequency. I just discovered this, so I'm crossing my fingers and waiting to see if it's really going to work.
The tool I'm building needs to pull data from IBM Connections Ideation Blogs. I therefore use the Connections API with basic authentication to read blog entries. This goes well until the description contains images. When I ask the API to provide the media resources for the blog, it does not show any entries from the /BLOGS_UPLOADED_IMAGES location, the one containing images uploaded through the blog's rich-text editor. The user I use in my API call is the same user who created the blog entries and uploaded the pictures.
However, the API call DOES contain images I publish using the API and a POST request to the blog's media entry collection. This is where the next problem appears. Those Atom entries for images contain various links, one of them with ref="enclosure", for which the API documentation (link) tells me to "Use the web address in the href attribute to obtain the binary content of the file". However, my calls to this address are always answered with a 404 response code.
Another URL in the Atom entry (this time from a different element) is described by the same documentation (see link above) as: "Provides access the document's media. The following operation is supported: GET: Use the web address to obtain the media." When I make a call to this URL, as always with basic authentication credentials attached, the response contains the HTML of the Connections login form, so API authentication does not seem to be supported on this URL. This is only the case for non-public communities, which require authentication; of course, if the picture is publicly available, everything works just fine.
Am I missing something? Is there another way to retrieve the actual image from a blog's media entry through the API? Are manually uploaded pictures never contained in the media entries result, or is this a bug?
It now magically works using the link with ref="enclosure" from the Atom entry. I might have gotten something wrong with authentication, I guess (although I don't actually see what I'm doing differently now compared to before).
Problem remaining: Pictures uploaded through the rich-text editor in the folder /BLOGS_UPLOADED_IMAGES do not appear in the media feed of the blog.
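For anyone trying to reproduce the part that does work, a rough sketch of reading the media feed with basic authentication and following the enclosure link is below. The hostname, blog handle, feed path and credentials are placeholders for your Connections deployment, and the sketch assumes the enclosure link carries the standard Atom rel attribute.

    import requests
    import xml.etree.ElementTree as ET

    # Placeholders: adjust host, blog handle and credentials for your deployment.
    BASE = "https://connections.example.com"
    MEDIA_FEED = f"{BASE}/blogs/your-blog-handle/api/media"
    AUTH = ("api.user", "secret")
    ATOM = "{http://www.w3.org/2005/Atom}"

    # Fetch the blog's media entry collection with basic authentication.
    feed = requests.get(MEDIA_FEED, auth=AUTH)
    feed.raise_for_status()
    root = ET.fromstring(feed.content)

    # For each media entry, follow the enclosure link to download the binary content.
    for entry in root.findall(f"{ATOM}entry"):
        title = entry.findtext(f"{ATOM}title") or "media.bin"
        for link in entry.findall(f"{ATOM}link"):
            if link.get("rel") == "enclosure":
                binary = requests.get(link.get("href"), auth=AUTH)
                binary.raise_for_status()
                with open(title, "wb") as f:
                    f.write(binary.content)

This only covers images posted to the media collection; as noted above, images uploaded through the rich-text editor (/BLOGS_UPLOADED_IMAGES) still do not show up in that feed.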