Accumulating results with different data loaders - PyTorch

I want to start training with a small part of the dataset, say 5,000 images, and then add 2,000 more images in each cycle.
Say there are 50,000 training images in total.
Cycle 1: train with only 5,000 images (45,000 images remain).
Cycles 2 to 9: in each cycle, train with 2,000 new images and accumulate the results with the previous learning (43,000 images remain after cycle 2, and the pool shrinks every cycle).
How can I carry the results of the first training into training cycles 2 through 9?
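A minimal sketch of one way to do this (my assumptions, not from the post): keep a single model and optimizer alive across cycles and grow the set of visible indices with a `Subset`, so each cycle's training continues from the previous cycle's weights. `full_dataset` and the linear model below are random stand-ins for the real 50,000-image dataset and network.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, Subset, TensorDataset

# Hypothetical stand-ins for the real data and network.
full_dataset = TensorDataset(torch.randn(50_000, 32),
                             torch.randint(0, 10, (50_000,)))
model = nn.Linear(32, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

indices = torch.randperm(len(full_dataset)).tolist()
seen = 5_000                                  # cycle 1 trains on 5,000 images

for cycle in range(1, 10):                    # cycles 1..9
    loader = DataLoader(Subset(full_dataset, indices[:seen]),
                        batch_size=64, shuffle=True)
    for x, y in loader:                       # one epoch per cycle, for brevity
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    seen += 2_000                             # reveal 2,000 more images next cycle
```

Because the same `model` object is reused, the weights learned in cycle 1 are the starting point for cycle 2, which is the "accumulation" being asked about; nothing extra has to be merged.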

Related

Assistance with Excel Formatting/Formula/Equations

I have been building an automated trading robot, which I put together using a virtual GUI to integrate it.
I want to be able to analyze the data using Excel. The problem is that I don't know how to organize the data in Excel to surface the most vital information, so I thought there wouldn't be a better place to post this than Stack Overflow. I have been looking for the "right" YouTube video that would describe what I am looking for.
1.) Count of the maximum losing trade sequence (i.e., if the program's longest losing streak is three trades in a row, that streak length is the count).
2.) Count of the maximum winning trade sequence (same as above, but for wins).
3.) Count of "page" losing trade sequences (every 10 trades equals 1 page; if 6 of the 10 trades are losses, that is 1 loss page, and if the next ten trades contain the same number of losses or more, that makes 2 loss pages in the sequence).
4.) Count of "page" winning trade sequences (every 10 trades equals 1 page; same as the loss-page count, just for wins).
5.) Count of "page" draw trade sequences (every 10 trades equals 1 page; a draw page has 5 winning and 5 losing trades out of the 10).
6.) Maximum win pages in a row (how many pages were won in a row, based on the #4 calculation).
7.) Maximum loss pages in a row (how many pages were lost in a row, based on the #3 calculation).
8.) Average of winning trades (the average number of wins in a row).
9.) Average of losing trades (the average number of losses in a row).
Examples below correspond numerically to the list above:
1.) H:27-H:22 is the maximum losing trade sequence = 6
2.) H:1-H:5 is the maximum winning sequence = 5
3.) (Rounding) the total of losing pages = 2 (locations H:39-H:30 and H:29-H:20)
4.) (Rounding) the total of winning pages = 2 (locations H:50-H:41 and H:10-H:1)
5.) (Rounding) the total of drawn pages = 1 (location H:60-H:69)
6.) There were no win pages in a row
7.) There were two loss pages in a row (H:39-H:20)
8.) Take the count of wins / the count of winning streaks; i.e., if we had 10 winning streaks among 100 wins, the average would be 1 in every 10
9.) The same calculation as #8, but for losses
Any help would be greatly appreciated, heck, even a YouTube video. I have been working on this program for a while; I learned the hard way that I need better data management. Thanks in advance!
"Excel Sheet"

LSTM Future Prediction

Why does the LSTM prediction give seemingly random numbers, and how do I make a decision from them?
The scenario: I want to predict future power cuts in households. I have the past 3 months of hourly energy meter readings. I trained an LSTM model, but it gives seemingly random numbers for the next 24 hours.
How do I conclude in which hours the power will be on or off?
Sample output: [screenshot omitted]
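One common way to turn a continuous forecast into an on/off decision (my suggestion, not from the post) is thresholding the predicted consumption; the threshold below is a placeholder that would need tuning against real outage labels. Framing the task as binary classification instead (a sigmoid output trained directly on on/off labels) is usually the cleaner alternative.

```python
import numpy as np

# Placeholder for the LSTM's 24-hour forecast (e.g., predicted kWh per hour).
preds = np.random.rand(24)

# Hypothetical decision rule: hours whose predicted consumption falls below
# the threshold are flagged as likely power-off hours.
threshold = 0.1
for hour, value in enumerate(preds):
    print(f"hour {hour:02d}: {'ON' if value >= threshold else 'OFF'}")
```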

How to train LSTM model on multiple time series data each having different timesteps?

How can I train an LSTM model on multiple time series where each series has a different number of timesteps/lags?
Use case: I have the daily transactions (withdrawals) of 100 ATMs for the last 5 years, and I need to forecast the next withdrawals from the timesteps of each ATM.
How do I use an embedding (a latent representation for each ATM ID) if I pass the ATM ID as a feature and train on all the data at once?
How do I pass all of the time series to the LSTM at once, and how will the LSTM distinguish one series from another?
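A hedged Keras sketch of the embedding idea (the window size and embedding dimension are my assumptions): learn a latent vector per ATM ID, repeat it across the window's timesteps, and concatenate it with the withdrawal series, so one LSTM can train on all ATMs at once while still telling them apart.

```python
from tensorflow import keras
from tensorflow.keras import layers

n_atms, window = 100, 30       # assumed: windows of 30 daily withdrawals

series_in = keras.Input(shape=(window, 1), name="withdrawals")
atm_in = keras.Input(shape=(1,), name="atm_id")

# Latent vector per ATM, repeated along the time axis so it can be
# concatenated with the series at every timestep.
emb = layers.Embedding(input_dim=n_atms, output_dim=8)(atm_in)  # (batch, 1, 8)
emb = layers.Reshape((8,))(emb)                                 # (batch, 8)
emb = layers.RepeatVector(window)(emb)                          # (batch, 30, 8)

x = layers.Concatenate(axis=-1)([series_in, emb])               # (batch, 30, 9)
x = layers.LSTM(32)(x)
out = layers.Dense(1)(x)       # next-day withdrawal forecast

model = keras.Model([series_in, atm_in], out)
model.compile(optimizer="adam", loss="mse")
```

Series of different lengths can then be handled by padding the windows and adding a `layers.Masking` layer, or by bucketing series of similar length into separate batches.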

How to normalise a list of more than 25 million records using MinMaxScaler's fit_transform

I have a list of more than 25 million records (a 1-D array), and I want to normalise the values to the range 0 to 5.
I'm using scikit-learn's MinMaxScaler for this. It works fine up to around 20M records, but as the size grows it takes a huge amount of time.
Any suggestions on how to do this in an optimised way?
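For a 1-D array, min-max scaling is plain elementwise arithmetic, so a vectorized NumPy expression (a sketch with random stand-in data) sidesteps the scaler object entirely; `MinMaxScaler(feature_range=(0, 5))` also works but requires reshaping the data to 2-D.

```python
import numpy as np

# Stand-in for the real 25M-record list; float32 halves memory vs. float64.
values = np.random.rand(25_000_000).astype(np.float32)

# Min-max scale to [0, 5] in one vectorized pass.
vmin, vmax = values.min(), values.max()
scaled = (values - vmin) * (5.0 / (vmax - vmin))
```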

Train LSTM model on Keras with several multivariate time series

I have a time series dataset of customer behavior. For each month, I have one row per customer that includes a set of features (for example, the amount of spending, the number of visits, etc.) and a target value (a binary value: does the customer buy product "A" or not?).
My problem: I want to train an LSTM model to predict the target value for the next month (does the customer buy product "A" next month?). Since I have multiple time series (one per customer), I have more than one sample per timestamp (for example, for January 2010 I have more than 1000 samples, and so on). How do I train the model? Do I go epoch by epoch and, for each epoch, fit the model one by one on all customers? Is there another side to this I'm missing?
Dataset features:
Number of customers: 1500;
Length of time series: 120;
Number of features per customer: 80 (before adding time-shifted features);
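A minimal sketch using the post's shapes (the data is a random stand-in): treat each customer as one sample of shape `(timesteps, features)`, so the batch axis already separates the 1500 series and a single `fit` call trains on all of them; there is no need to loop over customers manually.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Shapes follow the post; the data here is a random stand-in.
n_customers, timesteps, n_features = 1500, 120, 80
X = np.random.rand(n_customers, timesteps, n_features).astype("float32")
y = np.random.randint(0, 2, size=(n_customers, 1))  # buys product "A" next month?

model = keras.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# fit shuffles and batches across the customer axis automatically: each batch
# mixes different customers, and every epoch sees all 1500 series once.
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```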
