How to map audio to target text transcription - python-3.x

I am new to deep learning. I am building a basic end-to-end speech recognizer using the TensorFlow API, an LSTM model, and the CTC loss function. I have extracted MFCC features from my audio. I don't really know how to map my audio files to their transcriptions; I know CTC is used for this purpose and I understand how it works, but I don't know the code to implement it.
Here is my code to extract the features:
import os
import numpy as np
import glob
import scipy.io.wavfile as wav
from python_speech_features import mfcc, logfbank
# Read the input audio file
for f in glob.glob('Downloads/DataVoices/Training/**/*.wav', recursive=True):
    (rate, sig) = wav.read(f)
    sig = sig.astype(np.float64)
    # Take the first 10,000 samples for analysis
    # sig = sig[:10000]
    mfcc_feat = mfcc(sig, rate, winlen=0.025, winstep=0.01,
                     numcep=13, nfilt=26, nfft=512, lowfreq=0, highfreq=None,
                     preemph=0.97, ceplifter=22, appendEnergy=True)
    fbank_feat = logfbank(sig, rate)
    acoustic_features = np.concatenate((mfcc_feat, fbank_feat), axis=1)  # time_stamp x n_features
    print(acoustic_features)
I have also made a training list.txt file where I have provided transcriptions along with the audio paths, like:
this is example/001/001.wav
this is example/001/001(1).wav
where 001 is the folder and 001.wav and 001(1).wav are two wave files of one utterance.

I am posting this as a contrived example, assuming it will give you an idea of how a CSV file and the filenames inside the CSV can be read. You could modify this to suit your needs.
Let's say I have the CSV file below. The first column is your transcript; the file path points to your audio file (in my case it is just a text file with random text).
Script1,D:/PycharmProjects/TensorFlow/script1.txt
Script2,D:/PycharmProjects/TensorFlow/script2.txt
This is the code I use to test it. Please remember this is an example.
import tensorflow as tf

batch_size = 1
record_defaults = [['Test'], ['D:/PycharmProjects/TensorFlow/script1.txt']]

def readbatch(data_queue):
    reader = tf.TextLineReader()
    _, rows = reader.read_up_to(data_queue, batch_size)
    transcript, wav_filename = tf.decode_csv(rows, record_defaults, field_delim=",")
    audioreader = tf.WholeFileReader()
    print(wav_filename)
    _, audio = audioreader.read(tf.train.string_input_producer(wav_filename))
    return [audio, transcript]

data_queue = tf.train.string_input_producer(['D:\\PycharmProjects\\TensorFlow\\script.csv'], shuffle=False)
batch_data = readbatch(data_queue)
batch_values = tf.train.batch(batch_data, shapes=[tf.TensorShape(()), tf.TensorShape(batch_size,)], batch_size=batch_size, enqueue_many=False)
init = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init)
    sess.run(tf.initialize_local_variables())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    try:
        step = 0
        while not coord.should_stop():
            step += 1
            feat = sess.run([batch_values])
            audio = feat[0][0]
            print(audio)
            script = feat[0][1]
            print(script)
    except tf.errors.OutOfRangeError:
        print(' training for 1 epoch, %d steps' % step)
    finally:
        coord.request_stop()
        coord.join(threads)
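To connect this back to the original question about mapping transcriptions to CTC: below is a minimal, hypothetical sketch (TF1-style, like the queue code above) of how transcript strings could be encoded as integer labels and passed to tf.nn.ctc_loss. The character set, placeholder shapes, and the helper names encode_transcript and to_sparse are assumptions, not part of the original post.
import tensorflow as tf

# Assumed character set; the CTC blank label is the extra class added below.
vocab = " abcdefghijklmnopqrstuvwxyz"
char_to_id = {c: i for i, c in enumerate(vocab)}
num_classes = len(vocab) + 1  # +1 for the CTC blank

def encode_transcript(text):
    # e.g. "this is example" -> list of integer character IDs
    return [char_to_id[c] for c in text.lower() if c in char_to_id]

def to_sparse(label_batch):
    # tf.nn.ctc_loss expects the labels as a tf.SparseTensor
    indices, values = [], []
    for b, seq in enumerate(label_batch):
        for t, v in enumerate(seq):
            indices.append([b, t])
            values.append(v)
    dense_shape = [len(label_batch), max(len(s) for s in label_batch)]
    return tf.SparseTensor(indices=indices, values=values, dense_shape=dense_shape)

# logits would come from your LSTM + output projection: [max_time, batch_size, num_classes]
logits = tf.placeholder(tf.float32, [None, 1, num_classes])
# seq_len holds the number of acoustic frames (MFCC windows) per utterance
seq_len = tf.placeholder(tf.int32, [1])

labels = to_sparse([encode_transcript("this is example")])
loss = tf.reduce_mean(tf.nn.ctc_loss(labels=labels, inputs=logits, sequence_length=seq_len))
In a real pipeline the label batch would be built from the transcriptions in training list.txt, and the logits and sequence lengths would come from the batched MFCC features.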

Related

parallelising tf.data.Dataset.from_generator with TF2.1

There are already two posts about this topic, but they have not been updated for the recent TF 2.1 release...
In brief, I've got a lot of tif images to read and parse with a specific pipeline.
import functools
import tensorflow as tf
import numpy as np
files = # a list of str
labels = # a list of int
n_unique_label = len(np.unique(labels))
gen = functools.partial(generator, file_list=files, label_list=labels, param1=x1, param2=x2)
dataset = tf.data.Dataset.from_generator(gen, output_types=(tf.float32, tf.int32))
dataset = dataset.map(lambda b, c: (b, tf.one_hot(c, depth=n_unique_label)))
This processing works well. Nevertheless, I need to parallelize the file-parsing part, so I tried the following solution:
files = # a list of str
files = tf.data.Dataset.from_tensor_slices(files)

def wrapper(file_path):
    parser = functools.partial(tif_parser, param1=x1, param2=x2)
    return tf.py_function(parser, inp=[file_path], Tout=[tf.float32])

dataset = files.map(wrapper, num_parallel_calls=2)
The difference is that here I parse one file at a time with the parser function. However, it does not work:
File "loader.py", line 643, in tif_parser
image = numpy.array(Image.open(file_path)).astype(float)
File "python3.7/site-packages/PIL/Image.py", line 2815, in open
fp = io.BytesIO(fp.read())
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'read'
[[{{node EagerPyFunc}}]] [Op:IteratorGetNextSync]
As far as I understand, the tif_parser function does not receive a string but an (unevaluated) tensor. For now, this function is fairly simple:
from PIL import Image
import numpy

def tif_parser(file_path, param1=1, param2=2):
    image = numpy.array(Image.open(file_path)).astype(float)
    image /= 255.0
    return image
Here is how I proceeded:
dataset = tf.data.Dataset.from_tensor_slices((files, labels))

def wrapper(file_path, label):
    import functools
    parser = functools.partial(tif_parser, param1=x1, param2=x2)
    return tf.data.Dataset.from_generator(parser, (tf.float32, tf.int32), args=(file_path, label))

dataset = dataset.interleave(wrapper, cycle_length=tf.data.experimental.AUTOTUNE)
# The labels are converted to 1-hot vectors; this could be integrated into tif_parser
dataset = dataset.map(lambda i, l: (i, tf.one_hot(l, depth=unique_label_count)))
dataset = dataset.shuffle(buffer_size=file_count, reshuffle_each_iteration=True)
dataset = dataset.batch(batch_size=batch_size, drop_remainder=False)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)
Concretely, I generate a dataset every time the parser is called. The parser is run cycle_length times at each call, meaning that cycle_length images are read at once. This suits my specific case, because I cannot load all the images into memory. I am unsure whether prefetch is used correctly here.

librosa.util.exceptions.ParameterError: Invalid shape for monophonic audio: ndim=2, shape=(172972, 2)

Please, somebody help me solve this.
I was following this tutorial:
https://data-flair.training/blogs/python-mini-project-speech-emotion-recognition/
and used their dataset, which they took from the RAVDESS dataset and downsampled. I can train using this data easily. But when I use the original data from here:
https://zenodo.org/record/1188976
(just "Audio_Speech_Actors_01-24.zip") and try to train the model, it gives me the error below:
Traceback (most recent call last):
File "C:/Users/raj.pandey/Desktop/speech-emotion-recognition/main.py", line 64, in <module>
x_train, x_test, y_train, y_test = load_data(test_size=0.20)
File "C:/Users/raj.pandey/Desktop/speech-emotion-recognition/main.py", line 57, in load_data
feature = extract_feature(file, mfcc=True, chroma=True, mel=True)
File "C:/Users/raj.pandey/Desktop/speech-emotion-recognition/main.py", line 32, in extract_feature
stft = np.abs(librosa.stft(X))
File "C:\Users\raj.pandey\Desktop\speech-emotion-recognition\lib\site-packages\librosa\core\spectrum.py", line 215, in stft
util.valid_audio(y)
File "C:\Users\raj.pandey\Desktop\speech-emotion-recognition\lib\site-packages\librosa\util\utils.py", line 268, in valid_audio
'ndim={:d}, shape={}'.format(y.ndim, y.shape))
librosa.util.exceptions.ParameterError: Invalid shape for monophonic audio: ndim=2, shape=(172972, 2)
The dataset provided by the tutorial trains fine, and it is the same dataset except that they lowered the sample rate. Why isn't it running on the original one?
Does it have anything to do with this line in the code:
X = sound_file.read(dtype="float32")
Out of curiosity, I also tried to predict from a .mp3 file, and it resulted in an error. Then I converted that .mp3 file to wav and tried again, but it still gives the error in the title.
How can I solve this error and make it train on the original data? If it trains on the original data, then I think it can predict on the .mp3-to-wav converted file.
Below is the code that I am using:
import librosa
import soundfile
import os
import glob
import pickle
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score
# DataFlair - Emotions in the RAVDESS dataset
emotions = {
    '01': 'neutral',
    '02': 'calm',
    '03': 'happy',
    '04': 'sad',
    '05': 'angry',
    '06': 'fearful',
    '07': 'disgust',
    '08': 'surprised'
}
# DataFlair - Emotions to observe
observed_emotions = ['calm', 'happy', 'fearful', 'disgust']
# DataFlair - Extract features (mfcc, chroma, mel) from a sound file
def extract_feature(file_name, mfcc, chroma, mel):
    with soundfile.SoundFile(file_name) as sound_file:
        X = sound_file.read(dtype="float32")
        sample_rate = sound_file.samplerate
        if chroma:
            stft = np.abs(librosa.stft(X))
        result = np.array([])
        if mfcc:
            mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=40).T, axis=0)
            result = np.hstack((result, mfccs))
        if chroma:
            chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=sample_rate).T, axis=0)
            result = np.hstack((result, chroma))
        if mel:
            mel = np.mean(librosa.feature.melspectrogram(X, sr=sample_rate).T, axis=0)
            result = np.hstack((result, mel))
    return result
# DataFlair - Load the data and extract features for each sound file
def load_data(test_size=0.2):
    x, y = [], []
    for file in glob.glob("C:\\Users\\raj.pandey\\Desktop\\speech-emotion-recognition\\Dataset\\Actor_*\\*.wav"):
        # for file in glob.glob("C:\\Users\\raj.pandey\\Desktop\\speech-emotion-recognition\\Dataset\\newactor\\*.wav"):
        file_name = os.path.basename(file)
        emotion = emotions[file_name.split("-")[2]]
        if emotion not in observed_emotions:
            continue
        feature = extract_feature(file, mfcc=True, chroma=True, mel=True)
        x.append(feature)
        y.append(emotion)
    return train_test_split(np.array(x), y, test_size=test_size, random_state=9)
# DataFlair - Split the dataset
x_train, x_test, y_train, y_test = load_data(test_size=0.20)
# DataFlair - Get the shape of the training and testing datasets
# print((x_train.shape[0], x_test.shape[0]))
# DataFlair - Get the number of features extracted
# print(f'Features extracted: {x_train.shape[1]}')
# DataFlair - Initialize the Multi Layer Perceptron Classifier
model = MLPClassifier(alpha=0.01, batch_size=256, epsilon=1e-08, hidden_layer_sizes=(300,), learning_rate='adaptive',
max_iter=500)
# DataFlair - Train the model
model.fit(x_train, y_train)
# print(model.fit(x_train, y_train))
# DataFlair - Predict for the test set
y_pred = model.predict(x_test)
# print("This is y_pred: ", y_pred)
# DataFlair - Calculate the accuracy of our model
accuracy = accuracy_score(y_true=y_test, y_pred=y_pred)
# DataFlair - Print the accuracy
# print("Accuracy: {:.2f}%".format(accuracy * 100))
# Predicting random files
tar_file = "C:\\Users\\raj.pandey\\Desktop\\speech-emotion-recognition\\Dataset\\newactor\\pls-hold-while-try.wav"
new_feature = extract_feature(tar_file, mfcc=True, chroma=True, mel=True)
data = []
data.append(new_feature)
data = np.array(data)
z_pred = model.predict(data)
print("This is output: ", z_pred)
The dataset the tutorial provided for training was this: https://drive.google.com/file/d/1wWsrN2Ep7x6lWqOXfr4rpKGYrJhWc8z7/view
The original dataset (which isn't working with the program) you can get from here: https://zenodo.org/record/1188976 (the Audio_Speech_Actors one).
When predicting on random files, if you pass in any .wav file with speech in it, it results in an error. And if you use a text-to-speech converter, get the .wav, and pass it here, it will always say "fearful". I have tried converting an .mp3 to .wav to get it to work, but no, still an error.
Has anyone figured out how I can get it working?
I've just run into the same problem. For anyone reading this who prefers not to delete the stereo files, it is possible to convert them to mono using the command-line tool ffmpeg:
ffmpeg -i stereo_file_name.wav -ac 1 mono_file_name.wav
Link to ffmpeg
Related Stack Overflow Post
from pydub import AudioSegment

# (inside load_data's loop over the wav files)
file_name = os.path.basename(file)
# converting stereo audio to mono
sound = AudioSegment.from_wav(file)
sound = sound.set_channels(1)
sound.export(file, format="wav")
emotion = emotions[file_name.split("-")[2]]
if emotion not in observed_emotions:
    continue
feature = extract_feature(file, mfcc=True, chroma=True, mel=True)
x.append(feature)
y.append(emotion)
return train_test_split(np.array(x), y, test_size=test_size, random_state=9)
This is because the audio file has two audio channels; it must have only one.
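As an alternative to re-exporting the files (a sketch, not from the original answers), the stereo signal could also be downmixed to mono at load time inside extract_feature; averaging the two channels is an assumption about how you want to combine them:
X = sound_file.read(dtype="float32")
if X.ndim > 1:
    # stereo: average the channels into a single mono signal before any librosa calls
    X = np.mean(X, axis=1)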
I'm working on the same dataset and got the same error as well. What I did was convert the audio to mono using an online converter, https://convertio.co/, following the directions on this website: https://videoconvert.minitool.com/video-converter/stereo-to-mono.html (point 1, Convertio).
array(['fearful'], dtype='<U7')
The above line is my output too; it predicts it as fearful, which might be because of the accuracy (mine is 73.96%, but it varies).
"The dataset provided by the tutorial trains fine, and it is the same dataset except that they lowered the sample rate. Why isn't it running on the original one?"
Even though people have already answered this question: the author or authors of that tutorial didn't mention that the dataset posted on their Google Drive has all audio tracks in mono, while the original one contains some audio tracks that are in stereo.
As Arryan Sinha already showed, just use the pydub package and the job is done.
Other than this, I would suggest not paying too much attention to that tutorial, because the classifier's results are, most of the time, around 50% accuracy, which is not great. To effectively verify whether the classifier is good, try printing a confusion matrix (see the sketch below); that helps you see whether a classifier is good or not.
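A minimal sketch of that check, reusing the y_test, y_pred, and observed_emotions variables from the question's code (the label ordering is an assumption):
from sklearn.metrics import confusion_matrix

# rows are true labels, columns are predicted labels
print(confusion_matrix(y_test, y_pred, labels=observed_emotions))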
Some of the audio files are stereo (and the code needs mono); those are causing the break. Removing those files from the dataset eliminates this error. In load_data(), add a line: print(file). It will tell you which files are breaking the code; then just remove them.
def load_data(test_size=0.2):
    x, y = [], []
    for file in glob.glob("\Actor_*\\*.wav"):
        file_name = os.path.basename(file)
        print(file)
        emotion = emotions[file_name.split("-")[2]]
        ...
I found 4 files that are causing this:
Actor_01/03-01-02-01-01-02-01.wav
Actor_05/03-01-02-01-02-02-05.wav
Actor_20/03-01-03-01-02-01-20.wav
Actor_20/03-01-06-01-01-02-20.wav
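Instead of eyeballing the printout, the stereo files could also be listed programmatically; a sketch (not from the original answers) using the soundfile package already imported in the question, with the question's own glob pattern:
import glob
import soundfile

for file in glob.glob("C:\\Users\\raj.pandey\\Desktop\\speech-emotion-recognition\\Dataset\\Actor_*\\*.wav"):
    # soundfile.info reports the channel count without reading the whole file
    if soundfile.info(file).channels > 1:
        print(file)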

MGCA technique for speech features extraction shows this error (IndexError: list index out of range)

While executing this program for speech feature extraction from a wav file, I got a problem in the code; the error says IndexError: list index out of range.
File "C:/Users/KALEEM/PycharmProjects/Speech_Processing/2-Speech_Signal_Processing_and_Classification-master/feature_extraction_techniques/mgca.py", line 77, in <module>
    mel_Generalized()
File "C:/Users/KALEEM/PycharmProjects/Speech_Processing/2-Speech_Signal_Processing_and_Classification-master/feature_extraction_techniques/mgca.py", line 74, in mel_Generalized
    mgca_feature_extraction(wav)
File "C:/Users/KALEEM/PycharmProjects/Speech_Processing/2-Speech_Signal_Processing_and_Classification-master/feature_extraction_techniques/mgca.py", line 66, in mgca_feature_extraction
    writeFeatures(mgca_features, wav)
File "C:/Users/KALEEM/PycharmProjects/Speech_Processing/2-Speech_Signal_Processing_and_Classification-master/feature_extraction_techniques/mgca.py", line 46, in writeFeatures
    wav = makeFormat(wav)
File "C:/Users/KALEEM/PycharmProjects/Speech_Processing/2-Speech_Signal_Processing_and_Classification-master/feature_extraction_techniques/mgca.py", line 53, in makeFormat
    wav = wav.split('/')[1].split('-')[1]
IndexError: list index out of range
Process finished with exit code 1
#!/usr/bin/python
from pysptk import *
from scipy import hamming
import numpy.matlib
import scipy
import scipy.io.wavfile as wav
import numpy as np
import wave
from python_speech_features.sigproc import *
from math import *
from six.moves import input as raw_input
def readWavFile(wav):
    # given a path from the keyboard to read a .wav file
    # wav = raw_input('Give me the path of the .wav file you want to read: ')
    inputWav = 'C:/Users/KALEEM/PycharmProjects/Speech_Processing/2-Speech_Signal_Processing_and_Classification-master/feature_extraction_techniques' + wav
    return inputWav

# reading the .wav file (signal file) and extracting the information we need
def initialize(inputWav):
    rate, signal = wav.read(readWavFile(inputWav))  # returns a wave_read object, rate: sampling frequency
    sig = wave.open(readWavFile(inputWav))
    # signal is the numpy 2D array with the data of the .wav file
    # len(signal) is the number of samples
    sampwidth = sig.getsampwidth()
    print('The sample rate of the audio is: ', rate)
    print('Sampwidth: ', sampwidth)
    return signal, rate
# implementation of the low-pass filter
def lowPassFilter(signal, coeff=0.97):
    return np.append(signal[0], signal[1:] - coeff * signal[:-1])  # y[n] = x[n] - a*x[n-1], a = 0.97, a>0 for low-pass filters

def preEmphasis(wav):
    # taking the signal
    signal, rate = initialize(wav)
    # Pre-emphasis stage
    preEmphasis = 0.97
    emphasizedSignal = lowPassFilter(signal)
    Time = np.linspace(0, len(signal) / rate, num=len(signal))
    EmphasizedTime = np.linspace(0, len(emphasizedSignal) / rate, num=len(emphasizedSignal))
    return emphasizedSignal, signal, rate
def writeFeatures(mgca_features, wav):
    # write in a txt file the output vectors of every sample
    f = open('mel_generalized_features.txt', 'a')  # sample ID
    # f = open('mfcc_featuresLR.txt','a')  # only to initiate the input for the ROC curve
    wav = makeFormat(wav)
    np.savetxt(f, mgca_features, newline=",")
    f.write(wav)
    f.write('\n')

def makeFormat(wav):
    # if I want to keep only the gender (male, female)
    wav = wav.split('/')[1].split('-')[1]
    # only to make the format for Logistic Regression
    if wav == 'Female':
        wav = '1'
    else:
        wav = '0'
    return wav
def mgca_feature_extraction(wav):
    # I pre-emphasized the signal with a low pass filter
    emphasizedSignal, signal, rate = preEmphasis(wav)
    # and now I have the signal windowed
    emphasizedSignal *= np.hamming(len(emphasizedSignal))
    # mel-generalized cepstral analysis via pysptk
    mgca_features = mgcep(emphasizedSignal, order=12)
    writeFeatures(mgca_features, wav)

def mel_Generalized():
    folder = raw_input('Give the name of the folder that you want to read data: ')
    amount = raw_input('Give the number of samples in the specific folder: ')
    for x in range(1, int(amount) + 1):
        wav = '/' + folder + '/' + str(x) + '.wav'
        print(wav)
        mgca_feature_extraction(wav)

# def main():
mel_Generalized()
# main()
The problem is most likely due to unexpected input, which would be difficult for us to test.
More specifically, in the code below:
def makeFormat(wav):
    # if i want to keep only the gender (male,female)
    wav = wav.split('/')[1].split('-')[1]
    # only to make the format for Logistic Regression
    if (wav == 'Female'):
        wav = '1'
    else:
        wav = '0'
    return wav
I would assume that wav is a str-like object (or at least something that supports .split()). The result of split() is a list. If that list has 0 or 1 elements, trying to access its second element (using [1]) raises the IndexError: list index out of range you are getting.
In your case, wav does not contain enough '/' characters (at least 1), enough '-' characters (also at least 1), or both; see the illustration below.
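A minimal illustration of the failure, using a path of the form built in mel_Generalized() (the folder name here is hypothetical):
wav = '/MyFolder/1.wav'
parts = wav.split('/')           # ['', 'MyFolder', '1.wav']
print(parts[1])                  # 'MyFolder'
print(parts[1].split('-'))       # ['MyFolder'] -> only one element
print(parts[1].split('-')[1])    # IndexError: list index out of range
For the second split to succeed, the folder name would have to contain a '-' with the gender token (e.g. 'Female') after it.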

Keras: difference in flow from directory and own input

I noticed a performance drop of around 10% in accuracy between what Keras reports as output and what I measure when I test it myself. So I reproduced this; see the small code snippet below. I generate the input in two ways: inputs is generated by the Keras ImageDataGenerator (no augmentations) and inputs2 is produced without the ImageDataGenerator.
import numpy as np
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
import os
import pdb
def preprocess(img):
    img = image.array_to_img(img)
    width, height = img.size
    # Crop 48x48px
    desired_width, desired_height = 48, 48
    if width < 48:
        desired_width = width
    start_x = np.maximum(0, int((width-desired_width)/2))
    img = img.crop((start_x, np.maximum(0, height-desired_height), start_x+desired_width, height))
    img = img.resize((48, 48))
    img = image.img_to_array(img)
    return img / 255.
datagen = ImageDataGenerator(
    featurewise_center=False,
    featurewise_std_normalization=False,
    preprocessing_function=preprocess)

generator = datagen.flow_from_directory(
    'numbers_train',
    target_size=(48,48),
    batch_size=1024,  # Only 405 images in directory, so batch always the same
    classes=['02'],
    shuffle=False,
    class_mode='sparse')
inputs, targets = next(generator)
folder = 'numbers_train/02'
files = os.listdir(folder)
files = list(map(lambda x: os.path.join(folder, x), files))
images = []
for f in files:
    img = image.load_img(f)
    images.append(preprocess(img))
inputs2 = np.asarray(images)
print(np.mean(inputs))
print(np.mean(inputs2))
This gives two different values, whereas I expect inputs and inputs2 to be the same.
0.403158
0.41354
This causes a difference in accuracy of around 10%. What is happening here?
Edit: It seems to be related to the resizing of the images. If I remove the img.resize in preprocess and instead resize in the for loop before preprocessing, the means are the same. But what I want is for the resizing to be done after the cropping.
Edit 2: So the ImageDataGenerator first does the resizing to (48,48) and then calls the preprocess function. I want it the other way around. Does someone know a trick to do this?
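One possible workaround (a sketch, not from the original post; datagen2, labels2, and generator2 are hypothetical names) is to skip flow_from_directory and feed the already-cropped-and-resized arrays from inputs2 into a plain ImageDataGenerator via flow, so no resize happens before the custom preprocessing:
# Build the arrays exactly as in inputs2 above (crop first, then resize inside preprocess),
# then let a generator without a preprocessing_function handle only batching.
labels2 = np.full(len(inputs2), 0, dtype=np.int64)   # single class '02' -> sparse label 0
datagen2 = ImageDataGenerator()                       # no resizing, no extra preprocessing
generator2 = datagen2.flow(inputs2, labels2, batch_size=1024, shuffle=False)
inputs_check, targets_check = next(generator2)
print(np.mean(inputs_check))                          # should now match np.mean(inputs2)
This keeps the crop-then-resize order from preprocess, at the cost of loading the images yourself as in the second code path.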

How to predict Label of an email using a trained NB Classifier in sklearn?

I have created a Gaussian Naive Bayes classifier on an email (spam/not spam) dataset and was able to run it successfully. I vectorized the data, divided it into train and test sets, and then calculated the accuracy, using the features that are available in the sklearn Gaussian Naive Bayes classifier.
Now I want to be able to use this classifier to predict "labels" for new emails, i.e. whether they are spam or not.
For example, say I have an email. I want to feed it to my classifier and get a prediction of whether it is spam or not. How can I achieve this? Please help.
Code for the classifier file:
#!/usr/bin/python
import sys
from time import time
import logging
# Display progress logs on stdout
logging.basicConfig(level = logging.DEBUG, format = '%(asctime)s %(message)s')
sys.path.append("../DatasetProcessing/")
from vectorize_split_dataset import preprocess
### features_train and features_test are the features for the training and testing datasets, respectively
### labels_train and labels_test are the corresponding item labels
features_train, features_test, labels_train, labels_test = preprocess()
#########################################################
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
t0 = time()
clf.fit(features_train, labels_train)
pred = clf.predict(features_test)
print("training time:", round(time() - t0, 3), "s")
print(clf.score(features_test, labels_test))
## Printing metrics for training and testing
print("No. of Testing Features:" + str(len(features_test)))
print("No. of Testing Features Label:" + str(len(labels_test)))
print("No. of Training Features:" + str(len(features_train)))
print("No. of Training Features Label:" + str(len(labels_train)))
print("No. of Predicted Features:" + str(len(pred)))
## Calculating Classifier Performance
from sklearn.metrics import classification_report
y_true = labels_test
y_pred = pred
labels = ['0', '1']
target_names = ['class 0', 'class 1']
print(classification_report(y_true, y_pred, target_names = target_names, labels = labels))
# How to predict label of a new text
new_text = "You won a lottery at UK lottery commission. Reply to claim it"
Code for Vectorization
#!/usr/bin/python
import os
import pickle
import numpy
numpy.random.seed(42)
path = os.path.dirname(os.path.abspath(__file__))
### The words (features) and label_data (labels), already largely processed.
### These files should have been created beforehand.
feature_data_file = path + "./createdDataset/dataSet.pkl"
label_data_file = path + "./createdDataset/dataLabel.pkl"
feature_data = pickle.load(open(feature_data_file, "rb"))
label_data = pickle.load(open(label_data_file, "rb"))
### test_size is the percentage of events assigned to the test set (the remainder go into training)
### feature matrices changed to dense representations for compatibility with classifier functions in versions 0.15.2 and earlier
from sklearn import cross_validation
features_train, features_test, labels_train, labels_test = cross_validation.train_test_split(feature_data, label_data, test_size = 0.1, random_state = 42)
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(sublinear_tf = True, max_df = 0.5, stop_words = 'english')
features_train = vectorizer.fit_transform(features_train)
features_test = vectorizer.transform(features_test)#.toarray()
## feature selection to reduce dimensionality
from sklearn.feature_selection import SelectPercentile, f_classif
selector = SelectPercentile(f_classif, percentile = 5)
selector.fit(features_train, labels_train)
features_train_transformed_reduced = selector.transform(features_train).toarray()
features_test_transformed_reduced = selector.transform(features_test).toarray()
features_train = features_train_transformed_reduced
features_test = features_test_transformed_reduced
def preprocess():
    return features_train, features_test, labels_train, labels_test
Code for dataset generation
#!/usr/bin/python
import os
import pickle
import re
import sys
# sys.path.append("../tools/")
""
"
Starter code to process the texts of accuate and inaccurate category to extract
the features and get the documents ready for classification.
The list of all the texts from accurate category are in the accurate_files list
likewise for texts of inaccurate category are in (inaccurate_files)
The data is stored in lists and packed away in pickle files at the end.
"
""
accurate_files = open("./rawDatasetLocation/accurateFiles.txt", "r")
inaccurate_files = open("./rawDatasetLocation/inaccurateFiles.txt", "r")
label_data = []
feature_data = []
### temp_counter is a way to speed up the development -- there are
### thousands of lines of accurate and inaccurate text, so running over all of them
### can take a long time
### temp_counter helps you only look at the first 200 lines in the list so you
### can iterate your modifications quicker
temp_counter = 0
for name, from_text in [("accurate", accurate_files), ("inaccurate", inaccurate_files)]:
    for path in from_text:
        ### only look at the first 200 texts when developing
        ### once everything is working, remove this line to run over the full dataset
        temp_counter = 1
        if temp_counter < 200:
            path = os.path.join('..', path[:-1])
            print(path)
            text = open(path, "r")
            line = text.readline()
            while line:
                ### use a function parseOutText to extract the text from the opened text
                # stem_text = parseOutText(text)
                stem_text = text.readline().strip()
                print(stem_text)
                ### use str.replace() to remove any instances of the words
                # stem_text = stem_text.replace("germani", "")
                ### append the text to feature_data
                feature_data.append(stem_text)
                ### append a 0 to label_data if the text is from Sara, and 1 if the text is from Chris
                if (name == "accurate"):
                    label_data.append("0")
                elif (name == "inaccurate"):
                    label_data.append("1")
                line = text.readline()
            text.close()
print("texts processed")
accurate_files.close()
inaccurate_files.close()
pickle.dump(feature_data, open("./createdDataset/dataSet.pkl", "wb"))
pickle.dump(label_data, open("./createdDataset/dataLabel.pkl", "wb"))
Also, I want to know whether I can incrementally train the classifier, i.e. retrain an already-created model with newer data to refine it over time.
I would be really glad if someone could help me out with this. I am really stuck at this point.
You are already using your model to predict labels of emails in your test set. This is what pred = clf.predict(features_test) does. If you want to see these labels, do print(pred).
But perhaps you want to know how you can predict labels for emails that you discover in the future and that are not currently in your test set? If so, you can think of your new email(s) as a new test set. As with your previous test set, you will need to run several key processing steps on the data:
1) The first thing you need to do is to generate features for your new email data. The feature generation step is not included in your code above, but will need to occur.
2) You are using a Tfidf vectorizer, which converts a collection of documents to a matrix of Tfidf features based upon term frequency and inverse document frequency. You need to put your new email test feature data through the vectorizer that you fit on your training data.
3) Then your new email test feature data will need to go through dimensionality reduction using the same selector that you fit on your training data.
4) Finally, run predict on your new test data. Use print(pred) if you want to view the new label(s). See the sketch after this list.
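A minimal sketch of steps 2-4, assuming the fitted vectorizer, selector, and clf objects from your two files are accessible in one place (they currently live in separate modules, so some refactoring or re-fitting would be needed); the new email text is the hypothetical one from your own code:
# Hypothetical new email; the raw text plays the role of a one-document test set.
new_text = ["You won a lottery at UK lottery commission. Reply to claim it"]

new_features = vectorizer.transform(new_text)                # step 2: same fitted TfidfVectorizer
new_features = selector.transform(new_features).toarray()    # step 3: same fitted SelectPercentile
print(clf.predict(new_features))                              # step 4: predicted label, e.g. ['0'] or ['1']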
To respond to your final question about iteratively re-training your model: yes, you definitely can do this. It's just a matter of selecting a frequency, producing a script that expands your dataset with incoming data, and then re-running all the steps from there: pre-processing, Tfidf vectorization, dimensionality reduction, fitting, and prediction.
