The 2019-08-30 pre-release has a lot of different datasets, but doesn't seem to have one dedicated to the 'See also' section.
I know that the articles inside the 'See also' section are technically pagelinks, but I have no idea how (or whether) I can distinguish see-also pagelinks from the rest in the pagelink dataset (which is called wikilinks).
Would I be able to get this information from the existing datasets, or is there already a dataset for it that I'm not aware of?
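For reference, the links I mean are the ones listed under the '== See also ==' heading in an article's wikitext. If there is no dedicated dataset, I imagine pulling them out myself would look roughly like this (a rough regex sketch over raw wikitext; the heading name and the [[...]] link syntax are the only assumptions, and it ignores edge cases like templates inside the section):

import re

def see_also_links(wikitext):
    # Grab everything between '== See also ==' and the next level-2 heading (or end of text).
    match = re.search(r"==\s*See also\s*==(.*?)(?:\n==[^=]|\Z)", wikitext, re.S | re.I)
    if not match:
        return []
    section = match.group(1)
    # A wikilink looks like [[Target]] or [[Target|display text]]; keep only the target.
    return [link.split("|")[0].strip() for link in re.findall(r"\[\[([^\]]+)\]\]", section)]

text = "Intro...\n== See also ==\n* [[Data mining]]\n* [[Machine learning|ML]]\n== References ==\n"
print(see_also_links(text))  # ['Data mining', 'Machine learning']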
I'm new to NLP. I am looking for recommendations for an annotation tool to create a labeled NER dataset from raw text.
In detail:
I'm trying to create a labeled dataset for specific types of entities in order to develop my own NER project (rule-based at first).
I assumed there would be some friendly frameworks that allow creating tagging projects, tagging text data, building a labeled dataset, and even sharing projects so several people can work on the same one, but I'm struggling to find any (I admit "friendly" and "intuitive" are subjective, yet this is my experience).
So far I've tried several frameworks:
I tried LightTag. It makes the tagging itself fast and easy (i.e., marking the words and giving them labels), but the overall process of creating a useful dataset is not as intuitive as I expected (i.e., uploading the text files, splitting them into different tagging objects, saving the tags, etc.).
I've installed and tried LabelStudio and found it less mature than LightTag (I don't mean to judge here :)).
I've also read about spaCy's Prodigy, which is a paid annotation tool. I would consider purchasing it, but their website only offers a live demo of the tagging phase, and I can't assess whether their product is superior to the other two above.
Even on Stack Overflow, the latest question I found on this matter is over five years old.
Do you have any recommendation for a tool to create a labeled NER dataset from raw text?
⚠️ Disclaimer
I am the author of Acharya. I will limit my answer to the points raised in the question.
Based on your question, Acharya would help you create a project, upload your raw text data, and annotate it to create a labeled dataset.
It lets you mark records individually as train or test within the dataset and gives data-centric reports to help identify and fix annotation/labeling errors.
It allows you to add different algorithms (bring your own algorithm) to the project and train the model regularly. Once trained, it can give annotation suggestions from the trained models on untagged data to make the labeling process faster.
If you want to train in a different setup, it allows you to export the labeled dataset in multiple supported formats.
Currently, it does not support sharing of projects.
Acharya community edition is in alpha release.
GitHub page: https://github.com/astutic/Acharya
Website: https://acharya.astutic.com/
Doccano is another open-source annotation tool that you can check out https://github.com/doccano/doccano
I have used both DOCCANO (https://github.com/doccano/doccano) and BRAT (https://brat.nlplab.org/).
I find the latter very good, and it supports more functions. Both are free to use.
I fine-tuned GPT-J and GPT-Neo models on my texts and am trying to generate new text. But very often the generated sentences are very long (sometimes 300 characters each), although the sentences in the dataset are of normal length (usually 50-100 characters). I have tried a lot of things and adjusted the temperature and top_k, but half of the results still come out as long phrases, and I need shorter ones.
What can I try?
Here are examples of overly long generated results:
The support system that they have built has allowed us as users who
are not code programmers or IT administrators some ability to create
our own custom solutions without needing much programming experience
ourselves from scratch!
All it requires are documents about your inventory process but
I've found them helpful as they make sure you do everything right for
maximum efficiency because their knowledge base keeps reminding me
there's new ways i can be doing some things wrong since upgrading my
license so even though its good at finding errors with documentation
like an auditor may bring up later downline someone else might benefit
if those files dont exist anymore after one year when upgrades renews
automatically!
With all GPT models you can specify the "max_length" parameter during generation. This caps the total number of tokens (prompt plus generated text); generation stops earlier if the model emits its end-of-sequence token. You could also play with num_return_sequences and use a helper function to choose the shortest sequence.
Example:
output = model.generate(input_ids, do_sample=True, top_k=50, max_length=100, top_p=0.95, num_return_sequences=1)
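And a minimal sketch of the helper-function idea (the checkpoint name and prompt are placeholders; substitute your fine-tuned GPT-J/GPT-Neo model; assumes the Hugging Face transformers generate API):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

def generate_shortest(prompt, n=5, max_length=100):
    # Sample n candidate continuations and return the shortest one.
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    outputs = model.generate(
        input_ids,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        max_length=max_length,              # cap on prompt + generated tokens
        num_return_sequences=n,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Strip the prompt tokens before decoding, then pick the shortest text.
    texts = [tokenizer.decode(o[input_ids.shape[1]:], skip_special_tokens=True) for o in outputs]
    return min(texts, key=len)

print(generate_shortest("The support system"))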
These large language models are trained on massive amounts of data, and fine-tuning them can take patience as they learn to adapt to what you're feeding them. Try different things: adjust your training data format, try different samples, use a pre-prompt during generation to guide the model, etc. A model like GPT-J does a mind-numbingly large amount of calculations just to spit out a single word, so it is hard to predict what exactly is causing it to say one thing over another.
I have 20,000 messages (combination of email and live chat) between my customer and my support staff. I also have a knowledge base for my product.
Oftentimes, the questions customers ask are quite simple, and my support staff simply point them to the right knowledge base article.
What I would like to do, in order to save my support staff time, is to show my staff a list of articles that may likely be relevant based on the initial user's support request. This way they can just copy and paste the link to the help article instead of loading up the knowledge base and searching for the article manually.
I'm wondering what solutions I should investigate.
My current line of thinking is to run analysis on existing data and use a text classification approach:
For each message, see if there is a response with a link to a how-to article
If yes, extract key phrases (Microsoft Cognitive Services)
TF-IDF?
Treat each how-to article as a 'classification' associated with sets of key phrases
Use some supervised machine learning (support vector machines, maybe) to predict which 'classification', aka how-to article, the key phrases extracted from a new support ticket point to (rough sketch after this list)
Feed new responses back into the set to make the system smarter.
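Roughly, the kind of pipeline I have in mind, sketched with scikit-learn (the messages and article IDs are made up, and for simplicity the raw message text is used directly instead of extracted key phrases):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Historical messages that were answered with a link to a how-to article (made-up examples).
messages = [
    "How do I reset my password?",
    "I forgot my login password, can you help?",
    "Where can I download my invoice for last month?",
    "I need a copy of my billing invoice",
]
# The knowledge-base article each message was answered with (the 'classification' label).
articles = ["kb/reset-password", "kb/reset-password", "kb/invoices", "kb/invoices"]

# TF-IDF features plus a linear SVM, one class per how-to article.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), stop_words="english"), LinearSVC())
model.fit(messages, articles)

# For a new support ticket, suggest the most likely article.
print(model.predict(["hi, I can't remember my password"]))  # expected: the reset-password article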
Not sure if I'm overcomplicating things. Any advice on how this is done would be appreciated.
PS: the naive approach of just dumping 'key phrases' into the search query of our knowledge base yielded poor results, since the content of a help article is often quite different from how a person phrases their question in an email or live chat.
A simple classifier along the lines of a "spam" classifier might work, except that each FAQ would be a feature, as opposed to a single-feature classifier of spam/not-spam.
Most spam classifiers start off with a dictionary of words/phrases. You already have a start on this with your naive approach. However, unlike your approach, a spam classifier does much more than a text search. Essentially, in a spam classifier, each word in the customer's email is given a weight, and the sum of the weights indicates whether the message is spam or not-spam. Now, extend this to as many features as there are FAQs. That is, features like: FAQ1 or not-FAQ1, FAQ2 or not-FAQ2, etc.
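A rough sketch of that idea (placeholder emails and FAQ names; scikit-learn's one-vs-rest wrapper gives each FAQ its own yes/no decision and its own set of word weights, much like a per-target spam filter):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# Placeholder training data: each email labeled with the FAQ it was answered with.
emails = [
    "how do I reset my password",
    "password reset link not working",
    "where is my invoice",
    "need last month's invoice",
    "just wanted to say thanks",      # no FAQ applies
]
labels = ["FAQ1", "FAQ1", "FAQ2", "FAQ2", "none"]

# One weighted-word model per FAQ: FAQ1 or not-FAQ1, FAQ2 or not-FAQ2, etc.
clf = make_pipeline(CountVectorizer(), OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(emails, labels)

# Per-FAQ scores let you answer "none of the FAQs" when nothing scores high enough.
for faq, p in zip(clf.classes_, clf.predict_proba(["my password won't reset"])[0]):
    print(faq, round(p, 2))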
Since your support people can easily identify which of the FAQs an email requires, a supervised learning algorithm would be appropriate. To reduce the impact of any misclassification errors, consider having the application present a support person with the customer's email followed by the computer-generated response; all the support person would have to do is approve the response or modify it. Modifying a response should result in a new entry in the training set.
Support Vector Machines are one method to implement machine learning. However, you are probably suggesting this solution way too early in the process: first identify the problem and get a simple method working as well as possible before using more sophisticated methods. After all, if a multi-feature spam classifier works, why invest more time and money in something else that also works?
Finally, depending on your system, this is something I would like to work on.
Is there a way to use secondary user actions derived from the user click stream to improve recommendations when using Spark Mllib ALS?
I have gone through the explicit and implicit feedback based example mentioned here : https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html that uses the same ratings RDD for the train() and trainImplicit() methods.
Does this mean I need to call trainImplicit() on the same model object with an RDD of (user, item, action) for each secondary user action? Or should I train multiple models, retrieve recommendations based on each action, and then combine them linearly?
For additional context, the crux of the question is whether Spark ALS can model secondary actions the way Mahout's Spark Item Similarity job does. Any pointers would help.
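To make the second option concrete, here is roughly what I have in mind with the pyspark.mllib API (toy data, and the per-action weights in the linear mix are arbitrary placeholders; this is just to illustrate the question):

from pyspark import SparkContext
from pyspark.mllib.recommendation import ALS, Rating

sc = SparkContext(appName="multi-action-als-sketch")

# Toy (user, item, weight) data, one RDD per action type.
buys = sc.parallelize([Rating(1, 10, 1.0), Rating(2, 20, 1.0)])
views = sc.parallelize([Rating(1, 20, 1.0), Rating(2, 30, 1.0), Rating(1, 30, 1.0)])

# Second option: one implicit-feedback model per action.
buy_model = ALS.trainImplicit(buys, rank=10, iterations=10, alpha=40.0)
view_model = ALS.trainImplicit(views, rank=10, iterations=10, alpha=40.0)

# Combine the two sets of recommendations linearly (weights are arbitrary here).
def combined_recs(user, n=10, w_buy=1.0, w_view=0.3):
    scores = {}
    for w, model in ((w_buy, buy_model), (w_view, view_model)):
        for rec in model.recommendProducts(user, n):
            scores[rec.product] = scores.get(rec.product, 0.0) + w * rec.rating
    return sorted(scores.items(), key=lambda kv: -kv[1])[:n]

print(combined_recs(1))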
Disclaimer: I work with Mahout's Spark Item Similarity.
ALS does not work well for multiple actions in general. First, an illustration. The way multiple actions are consumed in ALS is to weight one above the other, for instance buy = 5, view = 3. ALS was designed in the days when ratings seemed important and predicting them was the question. We now know that ranking is more important. In any case, ALS uses predicted ratings/weights to rank results. This means that a view really tells ALS nothing, because what does a rating of 3 mean? Like? Dislike? ALS tries to get around this by adding a regularization parameter, which helps in deciding whether a 3 is a like or not.
But the problem is more fundamental than that, it is one of user intent. When a user views a product (using the above ecom type example) how much "buy" intent is involved? From my own experience there may be none or there may be a lot. The product was new, or had a flashy image or other clickbait. Or I'm shopping and look at 10 things before buying. I once tested this with a large ecom dataset and found no combination of regularization parameter (used with ALS trainImplicit) and action weights that would beat the offline precision of "buy" events used alone.
So if you are using ALS, check your results before assuming that combining different events will help. Using two models with ALS doesn't solve the problem either, because from buy events you are recommending that a person buy something, while from views (or a secondary dataset) you are recommending that a person view something. The fundamental problem of intent is not solved. A linear combination of recs still mixes the intents and may very well lead to decreased quality.
What Mahout's Spark Item Similarity does is correlate views with buys. More precisely, it correlates a primary action, one where you are clear about user intent, with other actions or information about the user. It builds a correlation matrix that in effect scrubs out the views that did not correlate with buys. We can then use that data. This is a very powerful idea because now almost any user attribute or action (virtually the entire clickstream) may be used in making recs, since the correlation is always tested. Often there is little correlation, but that's OK; it can then be dropped from the calculation as an optimization, since it would add very little to the recs.
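To make the cross-occurrence idea concrete, a toy numpy illustration (made-up matrices; the real Mahout job also applies a log-likelihood ratio test so that only significant co-occurrences are kept):

import numpy as np

# Rows = users, columns = items; 1 means the user did that action on that item (toy data).
buys = np.array([
    [1, 0, 0],   # user 0 bought item 0
    [0, 1, 0],   # user 1 bought item 1
    [1, 1, 0],   # user 2 bought items 0 and 1
])
views = np.array([
    [1, 1, 1],   # user 0 viewed everything
    [0, 1, 1],
    [1, 1, 0],
])

# Cross-occurrence: for each (bought item i, viewed item j) pair, count the users who did both.
# This is what ties the secondary action (views) back to the primary one (buys).
cross = buys.T @ views
print(cross)  # views that never co-occur with buys contribute nothing to the recs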
BTW, if you find integrating Mahout's Spark Item Similarity daunting compared to using MLlib ALS, I'm about to donate an end-to-end implementation as a template for Prediction.io, all of which is Apache-licensed open source.
I'm building a network business model in Excel. A similar model is that of Gawker Media.
In my model I have a number of properties that have some overlap in audience. Each property attracts users, which in turn affords cross-promotional opportunities. In the case of Gawker, they have a series of blogs, and their audience will likely read several of the blogs in the network.
If Gawker launches a new blog, they're able to direct traffic to it from their blog network.
Creating a model for a single blog is fairly simple - although the initial assumptions are harder. The next step is to model the network effect.
Excel provides a scenario manager that allows me to vary the key assumptions in the basic model. This is almost perfect: I can model the launch of 10 properties, each with different launch assumptions, and see the summary.
Where I need help is figuring out how I can vary the initial number of users for the launch of each property. In other words, once the network is established, it's possible to drive people to any new property launched on the network.
I don't believe the scenario manager will do what I need.
So I'm wondering: is it possible to use the model worksheet as a UDF? The UDF would need to spit out the monthly revenue and unique users given a number of input assumptions.
I would then be able to create my own summary sheet for the 10 properties and, using the total uniques for each property, get a summary for the network. This network summary would be used to determine how many people could be driven to the launch of a new property.
In effect, the only difference from the scenario manager is that I need one of my input variables (initial users) to be programmatically generated as a function of the number of people in the network at the time of launch.
I'm hoping it's possible to achieve something along these lines in Excel. I could drop down and build the whole model in Java, but then it's much harder to share with business colleagues!
Thanks - Matt.
You could try a Data Table.
It only allows you to analyse the effect of varying 2 input parameters, but you can create several data tables, and each parameter can take hundreds of different values.
It's little known, but efficient, and it has been available since Excel 3.0.
There is a product that I have researched but never used - search for calc4web. It takes a sheet of formulas and generates code (C++) that can be compiled into an XLL add-in. Then you can call a function that does what your sheet does. But of course then you have an XLL to distribute, and a build step every time you change your logic, which defeats much of the point of using a spreadsheet.
In my case, I wound up writing some very simple VBA code to vary my sheet "inputs" using the scenario manager, and capture my "outputs". This works if you have a batch of inputs that you can just point your macro at and step through.
EDIT:
See here for a VBA-only example of doing this:
using a sheet in an excel user defined function