wit.ai is not training new examples / training status stays "clean" - nlp

Yesterday I added a bunch of new training examples to my wit.ai project, but the training status got stuck somehow. The status always stays "clean" (green icon) when I add new examples – it seems that the training process can't be triggered anymore. That's pretty annoying, because none of the new examples work.
Can anybody help? Am I doing something wrong? (If someone at wit.ai reads this: the project name is ts_bot_dev_1.)

According to the Wit.ai Hackers FB Group, there were some issues with training services in Wit over the past couple of days. It should be fixed by now.
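If you want to confirm whether retraining has actually picked up your new examples (rather than trusting the dashboard icon), you can query the app directly through the /message endpoint. A minimal sketch in Python using requests; the token, API version date, and utterances below are placeholders, not values from the project mentioned above:

```python
import requests

# Placeholder credentials and version date; substitute your own server access token.
WIT_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"
API_VERSION = "20240101"

def check_utterance(text):
    """Send an utterance to /message and print the intents Wit detects for it."""
    resp = requests.get(
        "https://api.wit.ai/message",
        params={"v": API_VERSION, "q": text},
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # The response carries an "intents" list with name/confidence pairs.
    print(text, "->", [(i["name"], round(i["confidence"], 2)) for i in data.get("intents", [])])

# One utterance that already worked and one newly added example (both illustrative).
check_utterance("hi buddy")
check_utterance("where is my order")
```

If newly added utterances still resolve to nothing (or to the wrong intent) well after you saved them, that points to stalled training rather than to the examples themselves.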

Related

Using GPT2 to find commonalities in text records

I have a dataset with many incidents, and most of the data is in free-text form: one row per incident and a text field describing what happened. I tried to fine-tune a GPT-2 model on the free text and then try prompts such as
"The person got burned because", wanting to find the most common causes of burns.
The causes may be written in many different ways, so I thought capturing the meaning of each one might work.
The prompts work, but they give some funny made-up reasons, so I don't think it's working well for what I want to do.
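For reference, the prompting step described above would look roughly like this with the Hugging Face transformers library; the checkpoint path is a placeholder for a GPT-2 model already fine-tuned on the incident text, and the sampling settings are only illustrative:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# "./incident-gpt2" is a hypothetical path to a fine-tuned checkpoint;
# use "gpt2" to try the base model instead.
model_name = "./incident-gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

prompt = "The person got burned because"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample several continuations; with sampling turned on the model will happily
# invent causes, which matches the "funny made up reasons" described above.
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    top_p=0.9,
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,
)
for sequence in outputs:
    print(tokenizer.decode(sequence, skip_special_tokens=True))
```

Free-form generation like this reflects what the model finds plausible, not what the records actually say, which is consistent with the behavior described in the question.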

How do I force learning in the new UI when utterances are shown as Not Learned?

I want there to be a way to force these two items to become learned, or at least to reveal to me why they are Not Learned. But I can't figure out what, if anything, I am supposed to do.
I have run into this a couple of times where I was "sure" my training was correct, yet I got the "Not Learned" status.
I suggest you check whether it runs properly using the "Aligned NL" feature in the V2 training view when you choose Run in the Simulator.
If that works, the fix I'm using might work for you.
I found that if I add a vocab to work with my enum, then my training gets learned
When training utterances aren't being learned, odds are that there are conflicting examples that Bixby cannot reconcile.
In this case, I would recommend first attempting to teach Bixby a few "follow" utterances until you can confirm Bixby is able to learn that. Then, attempt a few "unfollow" utterances. If the previously learned "follow" utterances become unlearned, the "follow" and "unfollow" utterances are conflicting.
I would also recommend that you look through the training documentation as it explains the nuance of training Bixby.

Dialogflow training tab has suddenly stopped displaying user phrases

The Training tab, which has always shown user phrases, is suddenly empty. Support has not responded. Logging out and back in, clearing the cache, and exporting/reimporting the agent have done nothing to solve it. Someone else already asked this question here, but I can't upvote or comment on that one, and if I star it and they have already moved on because it's fixed for them, that's not much help.
Dialogflow "Training" menu is empty always
Has anyone else experienced this and resolved it? We have been on the standard V2 edition for about the last year. The tab just started being empty last week. We can see questions coming in on the History and Analytics tabs, but the Training tab remains empty. We average 10k questions a week.
There was nothing we could do on our side to fix this. Support finally got us a solution but did not provide any details about what had caused the issue. I asked for more info, but all I got was this:
Hi Vanessa,
Thanks for reaching out to Dialogflow Support.
There was some internal issue which was resolved.
We will conduct an internal investigation on the issue and make appropriate
improvements to our systems to help prevent or minimize future recurrence.
Please accept our apologies for the inconvenience.

Trouble recognizing one-word intents

I'm using wit to recognize different intents in a retail context. Some of them (successfully) trigger FAQ answers; others initiate business logic.
Surprisingly, I'm having a lot of trouble with the most basic conversational intents, like answering a hi or hello, especially when they come as a single word (it doesn't get hi or hello, but it successfully returns the correct intent for hi buddy or hey dude). Obviously there's a high chance that the first thing a user says is just a simple hello. Has anyone found the same issue? Any guidance on that?
This is actually the first time I've experienced this issue, and I haven't heard about it before. Could it be related to the increasing number of intents created (now 15+)? I'm using trait as the search strategy.
[Screenshot: Greetings intent]
Thank you very much for your help,
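A common workaround for short greetings is to add the single-word forms themselves as training examples, either in the Wit console or programmatically. Below is a rough sketch against Wit's /utterances training endpoint; the greetings intent name, the token, and the exact payload fields are assumptions made for illustration, so check the current Wit HTTP API reference before relying on them:

```python
import requests

# Placeholder token and intent name; adjust to your app.
WIT_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"
API_VERSION = "20240101"

# Single-word greeting examples, annotated with the intent they should resolve to.
utterances = [
    {"text": "hi", "intent": "greetings", "entities": [], "traits": []},
    {"text": "hello", "intent": "greetings", "entities": [], "traits": []},
    {"text": "hey", "intent": "greetings", "entities": [], "traits": []},
]

resp = requests.post(
    "https://api.wit.ai/utterances",
    params={"v": API_VERSION},
    headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    json=utterances,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```

After the app retrains, single-word messages can be re-tested through the /message endpoint to see whether they now resolve to the intended intent.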

Why isn't Stanford Topic Modeling Toolbox producing lda-output directory?

I tried to run this code from GitHub (following the 1-2-3 steps), which identifies 30 topics in Sarah Palin's 14,500 emails. The topics discovered by the author are here. However, the Stanford Topic Modeling Toolbox is not producing the lda-output directory for me. It produced lda-86a58136-30-2b1a90a6, but the summary.txt in that folder only shows the initial assignment of topics, not the final one. Any idea how to produce the lda-output directory with the final summary of discovered topics? Thanks in advance!
Have you tried the instructions posted here?
Note that the original investigator trained the model on Sarah Palin's emails and then used that trained model to analyze the same emails. While I am not an LDA expert, this typically smacks of "finding what you have".
In most disciplines, training would be done over a known set of items that experts had already classified according to some discriminant. That is, the training would consist of feeding in a set of data with known, likely topics from other sources, and then the LDA library would be used to determine distance from the topics in the "learned" database.
In any event, good luck.
In the event you encounter a specific issue, please post the error and the steps you took to arrive at it. Few people will invest the time to try to reproduce an issue (a typical prerequisite for correcting it) without directions, or without even the ability to tell whether the issue they run into is similar to yours.
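If the toolbox keeps stopping at the intermediate directory, one way to sanity-check the overall pipeline is to fit the same kind of LDA model with another library and compare the resulting topics. Here is a rough sketch using gensim instead of the Stanford toolbox; the sample documents, the naive tokenization, and the 30-topic setting are illustrative stand-ins, not the original author's configuration:

```python
from gensim import corpora
from gensim.models import LdaModel

# `documents` stands in for the email bodies; in the original setup these
# would be loaded from the Palin email CSV.
documents = [
    "energy policy meeting in juneau next week",
    "press release about the gas pipeline project",
    "schedule for the public records request",
]

# Naive whitespace tokenization; the Stanford toolbox applies more preprocessing.
texts = [doc.lower().split() for doc in documents]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# 30 topics to mirror the setup described in the question.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=30, passes=10, random_state=0)

# Final topic-word assignments, analogous to what summary.txt should contain
# once a run completes.
for topic_id, words in lda.print_topics(num_topics=5, num_words=8):
    print(topic_id, words)
```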

Resources