Orange3 txt files don't seem to work with logistic regression - text

I just started working with Orange3 for text analytics. When I create a corpus from .txt files, the workflow runs fine: I can generate a word cloud and view the corpus in the Corpus Viewer. But if I try to run logistic regression on the bag of words, it always throws the error "Data has no target variable" and the widget won't run.
The preloaded corpus that does work in Orange3 is a .tab file, and when I try saving all the .txt documents into a spreadsheet (one document per row), Orange still won't recognize the files.
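One way to get a target variable into the corpus is to write the documents into Orange's tab-delimited (.tab) format yourself, with a class column marked in the header. This is only a minimal sketch: the column names text and category and the file-to-label mapping are placeholders, and it assumes the usual three-row .tab header (names, types, flags).
documents = {'doc1.txt': 'positive', 'doc2.txt': 'negative'}  # hypothetical file -> label mapping

with open('corpus.tab', 'w', encoding='utf-8') as out:
    # Orange .tab header rows: names, then types, then flags
    out.write('text\tcategory\n')
    out.write('string\tdiscrete\n')
    out.write('meta\tclass\n')   # text is a meta attribute, category is the target
    for path, label in documents.items():
        with open(path, encoding='utf-8') as doc:
            text = doc.read().replace('\t', ' ').replace('\n', ' ')
        out.write(f'{text}\t{label}\n')
Loading corpus.tab with the Corpus widget should then give Bag of Words and Logistic Regression a target to train on.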

Related

How do I load EMNIST-letters dataset with fastai?

I've been following the fastai course on machine learning. I got up to lesson four and thought I'd use what I've learned to create a model that predicts handwritten letters. The code they used to load their training dataset is as follows:
pets1 = DataBlock(blocks=(ImageBlock, CategoryBlock),
                  get_items=get_image_files,
                  splitter=RandomSplitter(seed=42),
                  get_y=using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'))
pets1.summary(path/"images")
This works when you have image files, but the dataset files I have are:
emnist-letters-train-images-idx3-ubyte
emnist-letters-train-labels-idx1-ubyte
emnist-letters-test-images-idx3-ubyte
emnist-letters-test-labels-idx1-ubyte
I could extract all the images from those files, but is there a way to load the ubyte files directly into my program? The files have the same format as the MNIST digits dataset.
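Since the files share the MNIST idx format, one option is to parse them directly with numpy. A minimal sketch (not from the fastai course) that assumes the files are already decompressed; it just reads the big-endian int32 header fields followed by the raw bytes. Note that EMNIST images are stored transposed relative to the usual MNIST orientation, so you may need to transpose each image afterwards.
import numpy as np

def load_idx_images(path):
    # idx3-ubyte layout: magic, count, rows, cols (big-endian int32), then one byte per pixel
    with open(path, 'rb') as f:
        data = f.read()
    count, rows, cols = np.frombuffer(data, dtype='>i4', count=3, offset=4)
    return np.frombuffer(data, dtype=np.uint8, offset=16).reshape(count, rows, cols)

def load_idx_labels(path):
    # idx1-ubyte layout: magic, count (big-endian int32), then one byte per label
    with open(path, 'rb') as f:
        data = f.read()
    return np.frombuffer(data, dtype=np.uint8, offset=8)

train_images = load_idx_images('emnist-letters-train-images-idx3-ubyte')
train_labels = load_idx_labels('emnist-letters-train-labels-idx1-ubyte')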

How to prepare custom training data for LayoutLM

I want to train LayoutLM with the Hugging Face transformers library, but I need help creating the training data for LayoutLM from my PDF documents.
Multi-page document classification can be done effectively with sequence classifiers. Here is one strategy (a rough code sketch for the first two steps follows the list):
Convert your PDF pages into images and make a directory for each category.
Iterate through all the images and create a CSV with the image path and label.
Then define your important features and encode the dataset.
Save it to disk.
Load it back when you need it using load_from_disk and a dataloader.
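A minimal sketch of the first two steps, assuming the pdf2image package (which needs Poppler installed) and a hypothetical folder layout of pdfs/<category>/<file>.pdf; all paths and file names are placeholders.
import csv
from pathlib import Path
from pdf2image import convert_from_path

pdf_root = Path('pdfs')      # pdfs/<category>/<file>.pdf
image_root = Path('images')  # one sub-directory per category is created here

rows = []
for pdf_path in pdf_root.glob('*/*.pdf'):
    category = pdf_path.parent.name
    out_dir = image_root / category
    out_dir.mkdir(parents=True, exist_ok=True)
    # step 1: render every page of the PDF as an image
    for page_no, page in enumerate(convert_from_path(str(pdf_path))):
        image_path = out_dir / f'{pdf_path.stem}_{page_no}.png'
        page.save(image_path)
        rows.append((str(image_path), category))

# step 2: one CSV with image path and label
with open('train.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['image_path', 'label'])
    writer.writerows(rows)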

How to store Word vector Embeddings?

I am using BERT word embeddings for a sentence classification task with 3 labels, coding in Google Colab. Since I have to run the embedding step every time I restart the kernel, is there any way to save these word embeddings once they are generated? Generating them takes a lot of time.
The code I am using to generate the BERT word embeddings is:
[get_features(text) for text in text_list]
Here, get_features is a function which returns the word embedding for each entry in my list text_list.
I read that converting the embeddings into numpy arrays and then using np.save can do it, but I don't actually know how to code it.
You can save your embeddings data to a numpy file by following these steps:
import numpy as np

all_embeddings = here_is_your_function_return_all_data()  # whatever produces your embeddings
all_embeddings = np.array(all_embeddings)                 # convert to a numpy array
np.save('embeddings.npy', all_embeddings)                 # write it to disk
If you're saving it in Google Colab, you can then download the file to your local computer. Whenever you need it again, just upload it and load it:
all_embeddings = np.load('embeddings.npy')
That's it.
By the way, you can also save the file directly to Google Drive by mounting Drive in Colab.
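A short sketch of that, assuming the default /content/drive/MyDrive mount point; adjust the path to wherever you want the file to live.
import numpy as np
from google.colab import drive

drive.mount('/content/drive')   # authorise access to your Google Drive

# save straight into Drive so the file survives kernel restarts
np.save('/content/drive/MyDrive/embeddings.npy', all_embeddings)

# later, in a fresh session, load it back
all_embeddings = np.load('/content/drive/MyDrive/embeddings.npy')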

Training Stanford POS tagger using multiple text files

I have a corpus of about 20,000 text files and I want to train the tagger on these files. Which is better: grouping the text files into one text file (I don't know whether it will affect tagging accuracy or not), or listing all of the files in the props file?
I don't think it matters. The code should just load all of the data either way; splitting it into multiple files is just for convenience. You can also specify different input formats for different files, but this will not affect the final model.
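For reference, the tagger's props file accepts several training files in a single trainFile entry, separated by semicolons. A rough properties-file sketch: the paths and model name are placeholders, and the exact keys are worth checking against the example props files shipped with the tagger.
model = my-model.tagger
arch = generic
trainFile = /data/tagged-01.txt;/data/tagged-02.txt;/data/tagged-03.txt
tagSeparator = _
encoding = UTF-8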

Text Classification using Naive Bayes

Do guide me along if I am not posting in the right section.
I have some text files for my training data, which are unformatted Word documents. They all contain only ASCII characters.
I would like to train a model on the text files using data mining methods.
The text files do have about 300 words in each file on average.
Are there any software that are recommended for me to start on it?
My initial idea is to use all the words in one of the files as training data and the remaining files as test data, in order to perform cross-fold validation.
However, I have tools such as Weka, but they do not seem to satisfy my needs, since converting to CSV files does not seem feasible in my case because the text files are separate.
I am trying to perform cross-validation in such a way that all the words in the training data are treated as features.
You need to use Weka's StringToWordVector filter to convert your text files into ARFF files. After that you can use Weka's classification algorithms. Watch the following video to learn the basics.
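If a Python route is acceptable instead of Weka, the same idea (bag-of-words features plus Naive Bayes, evaluated with cross-validation) takes only a few lines in scikit-learn. This is just an alternative sketch; the folder name training_texts and the one-sub-directory-per-class layout are assumptions about how the files are organised.
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# load_files expects training_texts/<class_name>/*.txt
data = load_files('training_texts', encoding='ascii')

# every word becomes a feature via the bag-of-words vectoriser
model = make_pipeline(CountVectorizer(), MultinomialNB())

# 10-fold cross-validation over the documents
scores = cross_val_score(model, data.data, data.target, cv=10)
print(scores.mean())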
