Conversion of IOB to spaCy JSON taking a lot of time (IOB has 1 million lines) - nlp

I just want a little guidance. There are 3 IOB files: dev, test & train.
Dev has 1 million lines.
Test has 4 million lines.
Train has 30 million lines.
I am currently converting only the dev file, because I wasn't sure whether there were any errors in it (the IOB format is correct).
The conversion has been running for over 3 hours now. Any idea whether this file will ever work, or should I use something else?
I am fine-tuning a BERT model using spaCy in Google Colab with the runtime hardware set to GPU, and for reference I have followed this article:
https://towardsdatascience.com/how-to-fine-tune-bert-transformer-with-spacy-3-6a90bfe57647
I have followed the exact steps of the article.
I am not familiar with the NLP domain, nor do I have profound knowledge of pipelining. Can someone please help with this? It's really important.
Below I have attached an image showing the elapsed time and the statement executed for the conversion.
Image showing time elapsed and command executed
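In the meantime, one workaround I am considering is to split the big IOB file into smaller chunks and convert each chunk separately, so progress is visible and a problematic chunk is easy to spot. Below is a rough sketch; the file names are placeholders and the convert flags follow the article, so check python -m spacy convert --help for your spaCy version.

# split_iob.py - rough sketch: split a huge IOB file into chunks so each chunk
# can be converted (and timed) separately. Assumes one sentence per line, which
# is what spaCy's "iob" converter expects; if your file is one token per line
# (CoNLL-style), split on blank lines instead. File names are placeholders.
from pathlib import Path

SRC = Path("dev.iob")          # the big input file (placeholder name)
OUT = Path("chunks")
OUT.mkdir(exist_ok=True)
CHUNK = 50_000                 # lines (sentences) per chunk

buf, part = [], 0
with SRC.open(encoding="utf8") as f:
    for line in f:
        buf.append(line)
        if len(buf) >= CHUNK:
            (OUT / f"dev_{part:03d}.iob").write_text("".join(buf), encoding="utf8")
            buf, part = [], part + 1
if buf:
    (OUT / f"dev_{part:03d}.iob").write_text("".join(buf), encoding="utf8")

# Then convert one chunk at a time, e.g.:
#   python -m spacy convert chunks/dev_000.iob ./corpus -c iob -n 10
# and see how long a single chunk takes before committing to the full file.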

Related

Stanford CoreNLP Train custom NER model

I was running some tests training custom models with CRF. Since I don't have a proper training file, I would like to put together a list of maybe 5 tags and only 10 words myself to start with, and the plan is to keep improving the model with more incoming data in the future. But the results I get contain plenty of false positives (it tags many words which have nothing to do with the original ones in the training file). I imagine this is because the models created are probabilistic and take into consideration more than just separate words.
Let's say I want to train CoreNLP to detect a small list of words without caring about the context. Are there some special settings for that? If not, is there a way to calculate how much data is needed to get an accurate model?
After some tests and research I found that a really good option for my case is RegexNER, which works in a deterministic way and can also be combined with NER. So far I have tried it with a small set of rules and it does the job pretty well. The next step is to determine how scalable and usable it is in a high-traffic stress scenario (the one I'm interested in) and to compare it with other Python-based solutions.
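For anyone who lands here later, this is roughly what the RegexNER setup looks like when driven from Python through the stanza wrapper. It is only a sketch: it assumes CoreNLP is downloaded and CORENLP_HOME points at it, and rules.tsv is a placeholder mapping file with one "token pattern<TAB>TAG" entry per line (e.g. "Barack Obama<TAB>PERSON").

# Sketch only: deterministic tagging with CoreNLP's regexner annotator via stanza.
# "rules.tsv" is a hypothetical mapping file; adjust memory/timeout to your setup.
from stanza.server import CoreNLPClient

with CoreNLPClient(
        annotators=["tokenize", "ssplit", "pos", "lemma", "ner", "regexner"],
        properties={"regexner.mapping": "rules.tsv"},
        timeout=30000,
        memory="4G") as client:
    ann = client.annotate("Barack Obama visited the factory.")
    for sentence in ann.sentence:
        for token in sentence.token:
            print(token.word, token.ner)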

CTC + BLSTM Architecture Stalls/Hangs before 1st epoch

I am working on code for online handwriting recognition.
It works with the CTC loss function and Word Beam Search (custom implementation: githubharald).
TF Version: 1.14.0
Following are the parameters used:
batch_size: 128
total_epoches: 300
hidden_unit_size: 128
num_layers: 2
input_dims: 10 (number of input Features)
num_classes: 80 (CTC output logits)
save_freq: 5
learning_rate: 0.001
decay_rate: 0.99
momentum: 0.9
max_length: 1940.0 (BLSTM with variable length time stamps)
label_pad: 63
The problem I'm facing is that after changing the decoder from the CTC greedy decoder to Word Beam Search, my code stalls after a particular step. It does not show the output of the first epoch and has been stuck there for about 5-6 hours now.
The step it is stuck after: tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10
I am using an Nvidia DGX-2 for training (name: Tesla V100-SXM3-32GB).
Here is the paper describing word beam search, maybe it contains some useful information for you (I'm the author of the paper).
I would look at your task as two separate parts:
(1) optical model, i.e. train a model that is as good as possible at reading text just by "looking" at it
(2) language model, i.e. use a large enough text corpus and a fast enough mode of the decoder
To select the best model for part (1), using best path (greedy) decoding for validation is good enough.
If the best path contains wrong characters, chances are high that beam search also has no chance to recover (even when using language models).
Now to part (2). Regarding the runtime of word beam search: you are using "NGramsForecast" mode, which is the slowest of all modes. It has running time O(W*log(W)), with W being the number of words in the dictionary. "NGrams" has O(log(W)).
If you look into the paper and go to Table 1, you see that the runtime gets much worse when using the forecast modes ("NGramsForecast" or "NGramsForecastAndSample"), while character error rate may or may not get better (e.g. "Words" mode has 90ms runtime, while "NGramsForecast" has over 16s for the IAM dataset).
For practical use cases, I suggest the following (a short sketch follows the list):
if you have a dictionary (that means, a list of unique words), then use "Words" mode
if you have a large text corpus containing enough sentences in the target language, then use "NGrams" mode
don't use the forecast modes; instead use "Words" or "NGrams" mode and increase the beam width if you need a better character error rate
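If it helps, here is an illustrative sketch of how the mode is selected with the standalone Python package from the repository; treat the exact constructor arguments as an assumption and check the README of the version you install.

# Sketch only: choosing a decoding mode with the word-beam-search package
# (pip install word-beam-search, from the CTCWordBeamSearch repository).
import numpy as np
from word_beam_search import WordBeamSearch

corpus = "a ba"        # "Words" mode only needs a whitespace-separated word list
chars = "ab "          # characters the optical model can output (blank excluded)
word_chars = "ab"      # characters that may occur inside words

# "Words": dictionary only, fastest LM-constrained mode.
# "NGrams": needs a corpus of real sentences, O(log(W)) per beam extension.
# "NGramsForecast*": O(W*log(W)) - this is where the huge slowdown comes from.
wbs = WordBeamSearch(25, "Words", 0.0,
                     corpus.encode("utf8"),
                     chars.encode("utf8"),
                     word_chars.encode("utf8"))

# mat: softmax output of the BLSTM, shape (T, B, C) with the CTC blank as the
# last of the C classes; random values here only to keep the sketch runnable.
T, B, C = 5, 1, len(chars) + 1
mat = np.random.rand(T, B, C).astype(np.float32)
mat /= mat.sum(axis=2, keepdims=True)

labels = wbs.compute(mat)                       # one label sequence per batch item
print(["".join(chars[i] for i in seq) for seq in labels])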

Small Data training in CMU Sphinx

I have installed sphinxbase, sphinxtrain and pocketsphinx on Linux (Ubuntu). Now I am trying to train data with the speech corpus, transcriptions, dictionary etc. obtained from VOXFORGE. (The data in my etc and wav folders comes from VOXFORGE.)
As I am new, I just want to train on a little data and get some results with a few lines of transcripts and a few wav files, let's say 10 wav files and the 10 transcript lines corresponding to them, like the person in this video is doing.
But when I run sphinxtrain, I get an error:
Estimated Total Hours Training: 0.07021431623931
This is a small amount of data, no comment at this time
If I set CFG_CD_TRAIN = no, I don't know what that means.
What changes do I need to make to get rid of this error?
PS: I cannot add more data, because I want to see some results first to better understand the whole scenario.
Not enough data for the training, we can only train CI models
You need at least 30 minutes of audio data to train CI models. Alternatively, you can set CFG_CD_TRAIN to "no".
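For reference, that switch lives in the generated training configuration; a rough sketch of the relevant line is below (the exact variable name may differ between sphinxtrain versions, so check your own etc/sphinx_train.cfg):

# etc/sphinx_train.cfg (Perl-syntax config written by the sphinxtrain setup step)
$CFG_CD_TRAIN = 'no';   # skip context-dependent training; only CI models are trained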

applying word2vec on small text files

I'm totally new to word2vec, so please bear with me. I have a set of text files, each containing a set of tweets (between 1000 and 3000). I have chosen a common keyword ("kw1") and I want to find semantically relevant terms for "kw1" using word2vec. For example, if the keyword is "apple" I would expect to see related terms such as "ipad", "os", "mac"... based on the input file. So this set of related terms for "kw1" would be different for each input file, as word2vec would be trained on individual files (e.g., for 5 input files, run word2vec 5 times, once per file).
My goal is to find sets of related terms for each input file given the common keyword ("kw1"), which would be used for some other purposes.
My questions/doubts are:
Does it make sense to use word2vec for a task like this? Is it technically right to use it, considering the small size of an input file?
I have downloaded the code from code.google.com: https://code.google.com/p/word2vec/ and have just given it a dry run as follows:
time ./word2vec -train $file -output vectors.bin -cbow 1 -size 200 -window 10 -negative 25 -hs 1 -sample 1e-3 -threads 12 -binary 1 -iter 50
./distance vectors.bin
From my results I see that I'm getting many noisy terms (stopwords) when I use the 'distance' tool to get terms related to "kw1". So I removed stopwords and other noisy terms such as user mentions. But I haven't seen anywhere that word2vec requires cleaned input data?
How do you choose the right parameters? I see that the results (from running the distance tool) vary greatly when I change parameters such as '-window' and '-iter'. Which technique should I use to find the correct values for the parameters? (Manual trial and error is not possible for me, as I'll be scaling up the dataset.)
First Question:
Yes, for almost any task that I can imagine word2vec being applied to, you are going to have to clean the data - especially if you are interested in semantics (not syntax), which is the usual reason to run word2vec. Also, it is not just about removing stopwords, although that is a good first step. Typically you are going to want a tokenizer and sentence segmenter as well; I think if you look at the documentation for deeplearning4j (which has a word2vec implementation) it shows using these tools. This is important since you probably don't care about the relationship between "apple" and the number "5", "apple" and "'s", etc.
For more discussion on preprocessing for word2vec see https://groups.google.com/forum/#!topic/word2vec-toolkit/TI-TQC-b53w
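As a concrete illustration (not the original C tool, but gensim's Python re-implementation, which exposes the same hyperparameters), a minimal sketch of cleaning one tweet file and training a per-file model could look like the following; the file name and the crude regex-based cleaning are placeholders, and parameter names follow gensim 4.x (older versions use size= instead of vector_size=).

# Sketch: per-file cleaning + training with gensim's Word2Vec.
import re
from gensim.models import Word2Vec
from gensim.parsing.preprocessing import STOPWORDS

def clean(line):
    line = re.sub(r"@\w+|https?://\S+", " ", line.lower())    # drop mentions and URLs
    tokens = re.findall(r"[a-z']+", line)                     # crude tokenizer
    return [t for t in tokens if t not in STOPWORDS and len(t) > 2]

with open("tweets.txt", encoding="utf8") as f:                # one tweet per line (placeholder)
    sentences = [clean(line) for line in f if line.strip()]

model = Word2Vec(sentences, vector_size=200, window=10, negative=25,
                 hs=1, sample=1e-3, epochs=50, min_count=2, workers=4)
print(model.wv.most_similar("apple", topn=10))                # terms related to the keyword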
Second Question:
There is no automatic tuning available for word2vec AFAIK, since that implies the author of the implementation knows what you plan to do with it. Typically, the default values are the "best" values for whatever task(s) the implementer was working on. Sorry, word2vec isn't a turn-key solution. You will need to understand the parameters and adjust them to fit your task accordingly.

Sentiment analysis with NLTK python for sentences using sample data or webservice?

I am embarking upon an NLP project for sentiment analysis.
I have successfully installed NLTK for Python (it seems like a great piece of software for this). However, I am having trouble understanding how it can be used to accomplish my task.
Here is my task:
I start with one long piece of data (let's say several hundred tweets on the subject of the UK election from their webservice)
I would like to break this up into sentences (or chunks no longer than 100 or so characters) (I guess I can just do this in Python?? - a rough sketch is below)
Then I would search through all the sentences for specific instances within each sentence, e.g. "David Cameron"
Then I would like to check for positive/negative sentiment in each sentence and count them accordingly
NB: I am not really worried too much about accuracy, because my data sets are large, and I'm also not too worried about sarcasm.
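For steps 2 and 3 I am thinking of something roughly like this (just a sketch: it assumes the tweets are already in a Python list and that NLTK's punkt data is installed via nltk.download('punkt')):

# Sketch of the splitting and keyword-filtering steps.
import nltk

tweets = ["David Cameron spoke today. The crowd was unimpressed.",
          "Loving the weather in London!"]                         # placeholder data

sentences = [s for t in tweets for s in nltk.sent_tokenize(t)]     # step 2: split
mentions = [s for s in sentences if "david cameron" in s.lower()]  # step 3: filter
print(mentions)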
Here are the troubles I am having:
All the data sets I can find, e.g. the movie review corpus that comes with NLTK, aren't in webservice format. It looks like they have already had some processing done. As far as I can see, that processing (by Stanford) was done with WEKA. Is it not possible for NLTK to do all this on its own? Here all the data sets have already been organised into positive/negative, e.g. the polarity dataset http://www.cs.cornell.edu/People/pabo/movie-review-data/ How is this done? (To organise the sentences by sentiment, is it definitely WEKA, or something else?)
I am not sure I understand why WEKA and NLTK would be used together. It seems like they do much the same thing. If I'm processing the data with WEKA first to find sentiment, why would I need NLTK? Is it possible to explain why this might be necessary?
I have found a few scripts that get somewhat close to this task, but all of them use the same pre-processed data. Is it not possible to process this data myself to find sentiment in sentences, rather than using the data samples given in the link?
Any help is much appreciated and will save me much hair!
Cheers Ke
The movie review data has already been marked by humans as being positive or negative (the person who made the review gave the movie a rating which is used to determine polarity). These gold standard labels allow you to train a classifier, which you could then use for other movie reviews. You could train a classifier in NLTK with that data, but applying the results to election tweets might be less accurate than randomly guessing positive or negative. Alternatively, you can go through and label a few thousand tweets yourself as positive or negative and use this as your training set.
For a description of using Naive Bayes for sentiment analysis with NLTK: http://streamhacker.com/2010/05/10/text-classification-sentiment-analysis-naive-bayes-classifier/
Then in that code, instead of using the movie corpus, use your own data to calculate word counts (in the word_feats method).
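A minimal sketch of that approach with the movie review corpus (roughly the bag-of-words Naive Bayes setup described in the linked post; swap in your own labelled tweets where the movie reviews are used, and note it needs nltk.download('movie_reviews')):

# Sketch: bag-of-words Naive Bayes sentiment classifier with NLTK.
from nltk.corpus import movie_reviews
from nltk.classify import NaiveBayesClassifier

def word_feats(words):
    return {w.lower(): True for w in words}

neg_feats = [(word_feats(movie_reviews.words(fileids=[f])), 'neg')
             for f in movie_reviews.fileids('neg')]
pos_feats = [(word_feats(movie_reviews.words(fileids=[f])), 'pos')
             for f in movie_reviews.fileids('pos')]

classifier = NaiveBayesClassifier.train(neg_feats + pos_feats)
print(classifier.classify(word_feats("David Cameron gave a surprisingly good speech".split())))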
Why don't you use WSD? Use a disambiguation tool to find word senses, and map polarity to the senses instead of the words. In this case you will get somewhat more accurate results compared to word-level polarity.
