Dialogflow extracting parameters strangely - dialogflow-es

I have a really strange problem with Dialogflow. Please see the following pictures:
When I try the sentence "Let's set up a new FMR under fmr test with rents, studio at 1000, one bedroom at 1200, two bedroom rented at 1400, three bedroom priced at 1600 and four bedroom at 1800.", it only picks up some of the "number" parameters.
Could you please help me figure out why this happens? Is there currently a limit on the number of Dialogflow parameters?

I believe there is also a 255 character/symbol limit that might be causing the issue.
I just replicated your issue. However, I get the proper result after removing "at" and a few other random short words.

Maybe you should add more "User Says" samples to train the engine further. Also, try annotating entities in your training phrases; the results should be better.
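As a hedged sketch of the "remove short filler words" workaround mentioned above (the filler list here is a guess for this example, not official Dialogflow guidance), you could preprocess utterances before sending them to the agent:

```python
import re

# Hypothetical filler phrases that seemed to interfere with parameter
# extraction in the sentence above; adjust for your own utterances.
FILLER_WORDS = {"at", "rented at", "priced at"}

def strip_fillers(utterance: str) -> str:
    """Remove filler phrases before sending the utterance to Dialogflow."""
    cleaned = utterance
    # Remove multi-word fillers first so "rented at" doesn't leave "rented".
    for filler in sorted(FILLER_WORDS, key=len, reverse=True):
        cleaned = re.sub(r"\b" + re.escape(filler) + r"\b", "", cleaned)
    # Collapse the double spaces left behind by the removals.
    return re.sub(r"\s{2,}", " ", cleaned).strip()

print(strip_fillers("studio at 1000, one bedroom at 1200, two bedroom rented at 1400"))
# studio 1000, one bedroom 1200, two bedroom 1400
```

This only papers over the symptom; annotating more training phrases is still the better long-term fix.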

Related

Using GPT2 to find commonalities in text records

I have a dataset with many incidents, and most of the data is in free-text form: one row per incident and a text field describing what happened. I trained a GPT-2 model on the free text and then tried prompts such as
"The person got burned because", hoping to find the most common causes of burns.
The causes may be written in many different ways, so I thought capturing the meaning of each might work.
The prompts work but produce some funny made-up reasons, so I don't think this approach is working well for what I want to do.
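A non-generative alternative sketch (assuming each incident record is a plain string and causes tend to follow the word "because"): extract the cause clause directly and count duplicates, instead of asking GPT-2 to invent them:

```python
import re
from collections import Counter

def common_causes(records, keyword="because"):
    """Collect the clause following `keyword` in each record and count them."""
    causes = Counter()
    for text in records:
        # Grab everything after the keyword up to the next period/semicolon.
        match = re.search(rf"{keyword}\s+(.+?)(?:[.;]|$)", text, re.IGNORECASE)
        if match:
            causes[match.group(1).strip().lower()] += 1
    return causes.most_common()

# Illustrative records, not real data.
records = [
    "The person got burned because the kettle tipped over.",
    "Burn injury because the kettle tipped over; first aid given.",
    "The person got burned because a hot pan was left unattended.",
]
print(common_causes(records))
# [('the kettle tipped over', 2), ('a hot pan was left unattended', 1)]
```

Exact string counting will miss paraphrases, but it never hallucinates; clustering the extracted clauses with sentence embeddings would be a natural next step if paraphrases matter.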

Any way to get past the minimum of 20 tokens for text classification - Google NLP API

Is there any way to get past the minimum token requirement for Google's NLP API text classification method? I'm trying to input a short, simple sentence such as "I can't wait for the presidential debates", but this returns an error saying:
Invalid text content: too few tokens (words) to process.
Is there any way to get around this? I've tried padding the input with random words until it reached 20 tokens, but that often messes up the labels and confidence. If there is any way around this, such as setting an option or adding something, that would be awesome! If there is no workaround, let me know if you know of another pre-trained text classification model that would work for me!
Also, I can't create the categories and labels I want myself. There would just be too many needed for what I'm doing, which is why the predefined categories in the NLP API are great. I just need to get rid of that 20-token requirement.
As clarified in the official Content Classification documentation:
Important: You must supply a text block (document) with at least twenty tokens (words) to the classifyText method.
Considering that, and checking for possible alternatives, it seems that, unfortunately, there isn't a way to work around this. You will indeed need to supply at least 20 words.
For this reason, searching around, I found this one here and this other one - the latter in Chinese, but it might still help you :) - of pre-trained models for text classification that I believe might help you.
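As a small guard (a sketch; a whitespace split only approximates the API's server-side tokenizer), you can check the word count locally before calling classifyText, so short inputs fail fast instead of round-tripping to the API:

```python
def has_enough_tokens(text: str, minimum: int = 20) -> bool:
    """Rough client-side check against the classifyText 20-token minimum.

    A whitespace split only approximates the API's tokenizer, so treat
    this as a guard, not a guarantee.
    """
    return len(text.split()) >= minimum

short = "I can't wait for the presidential debates"
print(has_enough_tokens(short))  # False: only 7 whitespace-separated tokens
```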
Anyway, feel free to raise a Feature Request in Google's Issue Tracker, for them to check about the possibility of removing this limitation.
Let me know if the information helped you!

Trouble recognizing one word intents

I'm using wit to recognize different intents in a retail context. Some of them trigger (successfully) FAQ answers, other initiate a business logic.
Surprisingly, I'm having a lot of trouble with the most basic conversational intents, like answering a hi or hello. Especially when they come as a single word (it doesn't get hi or hello, but it successfully returns the correct intent for hi buddy or hey dude). Obviously there's a high chance that the first thing a user says is just a simple hello. Has any of you found the same issue? Any guidance on that?
This is actually the first time I've experienced this issue, and I haven't heard about it elsewhere. Could it be related to the growing number of intents (now 15+)? I'm using trait as the search strategy.
Greetings intent
Thank you very much for your help,
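One hedged workaround (a sketch, not a wit.ai feature): short-circuit bare greetings with a local keyword check before calling wit, since a single word gives the trait classifier very little signal:

```python
# Hypothetical fallback: catch bare greetings locally before calling wit.ai.
GREETINGS = {"hi", "hello", "hey", "howdy"}

def resolve_intent(utterance: str, wit_client=None):
    """Return a greeting intent for bare greetings; otherwise defer to wit."""
    if utterance.strip().lower().strip("!.?") in GREETINGS:
        return "greetings"
    # In the real bot, wit_client.message(utterance) would be called here.
    return None

print(resolve_intent("Hi!"))  # greetings
```

Longer utterances still flow through wit unchanged, so this only affects the one-word case that currently fails.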

wit.ai is not training new examples / training status stays "clean"

Yesterday I added a bunch of new training examples to my wit.ai project, but the training status got stuck somehow. The status always stays "clean" (green icon) when I add new examples – it seems the training process can no longer be triggered. That's pretty annoying, because none of the new examples work.
Can anybody help? Am I doing something wrong? (If someone at wit.ai reads this: Project name is ts_bot_dev_1).
According to the Wit.ai Hackers FB Group, there were some issues with Wit's training services over the past couple of days. It should be fixed by now.

Microsoft Translator charges me 1000 characters for a 20 character translation

Can someone explain why this is happening? I'm using their TranslationContainer sample they've provided.
This strikes me as wrong; if I'm translating lots of small pieces of text, I'm going to be charged per translation rather than against my quota of characters.
Can someone explain what's happening here?
As of today, this is how the Translator service usage is measured. The default transaction length is 1,000 characters.
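Given that 1,000-character minimum transaction, one mitigation sketch (purely illustrative; check first whether the API's array-input form fits your case) is to pack many short strings into one request up to the limit, then split the translated result on the separator:

```python
def batch_texts(texts, limit=1000, sep="\n"):
    """Group short strings into batches whose joined length stays <= limit."""
    batches, current, length = [], [], 0
    for text in texts:
        extra = len(text) + (len(sep) if current else 0)
        if current and length + extra > limit:
            batches.append(sep.join(current))
            current, length = [], 0
            extra = len(text)
        current.append(text)
        length += extra
    if current:
        batches.append(sep.join(current))
    return batches

snippets = ["hello world"] * 200   # 200 short strings, 11 chars each
batches = batch_texts(snippets)
print(len(batches), max(len(b) for b in batches))
# 3 995  -> three transactions instead of 200
```

The separator must survive translation for the split to work, so a newline is safer than punctuation; it's still a workaround, not a supported billing feature.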
EDIT 3/18/2012: I reached out to the team and heard back: You are getting the proper number of characters per month, per subscription details, but the counter display isn't reliable at the moment. A fix is coming. I don't have an exact date for the fix.
EDIT 5/17/2012: Looks like this has been fixed. There's also a blog post about real-time updates to remaining characters per month, as well as low-balance notifications:
