Out of memory error because of giant input data - python-3.x

I am using RandomForestClassifier in Python to predict whether each pixel of the input image is inside a cell or outside it, as a pre-processing stage to improve the image. The problem is that the training set is 8.36 GB and the test set is 8.29 GB, so whenever I run my program I get an out-of-memory error. Will extending the memory not work? Is there a way to read the CSV files which contain the data in more than one step, freeing the memory after each step?

Hopefully you are using pandas to process this CSV file, as it would be nearly impossible in native Python. As for your memory problem, here is a good article explaining how to process large CSV files by chunking the data in pandas:
http://pythondata.com/working-large-csv-files-python/
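For reference, a minimal sketch of that chunked approach might look like this (the file name and chunk size are placeholders to adapt to your data and available RAM):

import pandas as pd

# Hypothetical file name; tune chunksize to whatever fits comfortably in RAM.
reader = pd.read_csv("training_data.csv", chunksize=100_000)

n_rows = 0
for chunk in reader:
    # Each chunk is an ordinary DataFrame of at most 100k rows; do your
    # per-chunk work here (feature extraction, incremental fitting,
    # writing intermediate results to disk, ...). The chunk can be freed
    # once the loop moves on.
    n_rows += len(chunk)

print(f"processed {n_rows} rows without loading the whole file at once")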

Related

Read .PART files

I am building a model which uses large datasets in .csv files (~50 GB). My machine runs Windows 10 with 16 GB of RAM.
Since I don't have enough RAM to load the whole dataset, I used Dask to read the file and split it into smaller data sets. That worked just fine and I was able to save the pieces to files. However, when I read those files back, every cell in the preview only shows ...
I have tried
!pip install dask
import dask.dataframe as dd
cat = dd.read_csv(paths.data + "cat.csv/*")
cat.head(5)
but it simply kept loading, even though the data was kept to a minimum.
Can anyone please help me? Thank you.
The ... symbol is expected, since the data is not loaded in memory. There is a detailed tutorial on dask dataframes here: https://tutorial.dask.org/04_dataframe.html
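To illustrate the lazy behaviour (the path below is a placeholder matching the one-file-per-partition layout above), nothing is read until you explicitly materialise something:

import dask.dataframe as dd

# Hypothetical path; one CSV file per partition.
cat = dd.read_csv("cat.csv/*")

# No data has been read yet; cat only holds the task graph, which is why
# the notebook shows "..." placeholders instead of values.
print(cat.npartitions)

# head() reads just the first partition and returns a small pandas DataFrame.
print(cat.head(5))

# len() walks every partition, so it can take a while on a large dataset,
# but it never holds the whole dataset in memory at once.
print(len(cat))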

Python Memory Error (After Appending DataFrame)

I want to know more about this, as it is new to me.
I am trying to query InfluxDB with Python to fetch data in 5-minute intervals. I used a simple for-loop to get my data in small chunks and appended the chunks to an initially empty dataframe, one after another, inside the loop. This worked out pretty smoothly and I can see my output. But when I try to perform mathematical operations on this large dataframe, it gives me the memory error stated below:
"Memory Error : Unable to allocate 6.95GiB for an array with shape (993407736) and datatype int64"
My system has 8.00 GB of RAM and a 64-bit OS on an x64-based processor.
Could it be that my system cannot support this?
Is there an alternative way to append small dataframes into another dataframe without these memory issues? I am new to working with data in Python, and I will need to work with these large chunks of data for maybe a year.
Even though your system has 8 GB of memory, some of it is used by the OS and the other applications running on your system, so it cannot allocate 6.95 GiB for this program alone. If you are building an ML model and trying to run it on huge data, consider one of the options below:
Use GPU machines offered by a cloud provider.
Process the data in small chunks (if it is not ML), as sketched below.
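As a rough, hypothetical sketch of the second option (the column name, dtypes, and placeholder operation are just examples): int64 values cost 8 bytes each, so a narrower dtype or chunk-wise processing can avoid one single huge allocation:

import numpy as np
import pandas as pd

# Toy stand-in for the large dataframe built from the InfluxDB chunks.
df = pd.DataFrame({"value": np.zeros(1_000_000, dtype=np.int64)})

# If the values fit, int32 halves the memory needed; int16 would quarter it.
df["value"] = df["value"].astype(np.int32)

# Apply the mathematical operation one slice at a time instead of
# materialising one giant intermediate array.
chunk_size = 100_000
partial_results = []
for start in range(0, len(df), chunk_size):
    part = df.iloc[start:start + chunk_size]
    partial_results.append(part["value"].mean())  # placeholder operation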

Processing large grids with Dask results in MemoryError

I have three GeoTIFFs, each roughly 500 MB in size, on AWS S3, which I am trying to process on an EMR cluster using Dask, but I get a MemoryError after processing the first TIFF.
After reading a GeoTIFF using xarray.open_rasterio(), I convert the grid values to boolean and then multiply the array by a floating-point value. This workflow has executed successfully on three GeoTIFFs of 50 MB each. Additionally, I have tried chunking when reading with xarray, but obtained the same results.
Is there a size limitation with Dask or another possible issue I could be running into?
Dask itself does not artificially impose any size limitations. It is just a normal Python process. I recommend thinking about normal Python or hardware issues. My first guess would be that you're using very small VMs, but that's just a guess. Good luck!
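One normal-Python thing worth checking is that the chunks you request are actually small relative to worker memory. A hedged sketch, reusing the xarray.open_rasterio() call from the question (the path and chunk sizes are hypothetical):

import xarray as xr

# Explicit, modest chunks keep each Dask task's working set small.
da = xr.open_rasterio("s3://my-bucket/grid.tif", chunks={"x": 2048, "y": 2048})

# The boolean conversion and float multiply from the question stay lazy...
result = (da > 0) * 0.5

# ...and a reduction materialises only a small value, chunk by chunk.
print(result.mean().compute())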

TextLMDataBunch Memory issue Language Model Fastai

I have a dataset with 45 million rows of data, and three GPUs with 6 GB of RAM each. I am trying to train a language model on the data.
For that, I am trying to load the data as a fastai data bunch, but this step always fails because of a memory issue.
data_lm = TextLMDataBunch.from_df('./', train_df=df_trn,
                                  valid_df=df_val, bs=10)
How do I handle this issue?
When you use this function, your DataFrame is loaded into memory. Since you have a very big DataFrame, this causes your memory error. Fastai handles tokenization with a chunksize, so you should still be able to tokenize your text.
Here are two things you should try:
Add a chunksize argument (the default value is 10k) to your TextLMDataBunch.from_df, so that the tokenization process needs less memory; see the sketch after these suggestions.
If this is not enough, I would suggest not loading your whole DataFrame into memory. Unfortunately, even if you use TextLMDataBunch.from_folder, it just loads the full DataFrame and passes it to TextLMDataBunch.from_df, so you might have to create your own DataBunch constructor. Feel free to comment if you need help with that.
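For the first suggestion, a hedged sketch of what that call could look like (the chunksize value is just an example to tune; df_trn and df_val are the DataFrames from the question):

from fastai.text import TextLMDataBunch

# Same call as in the question, but with an explicit smaller chunksize so
# the tokenizer processes the DataFrame in smaller pieces (default 10,000).
data_lm = TextLMDataBunch.from_df('./', train_df=df_trn,
                                  valid_df=df_val, bs=10,
                                  chunksize=5000)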

Create recordio file for linear regression

I'm using AWS SageMaker to run linear regression on a CSV dataset. I have run some tests, and with my sample dataset, which is 10% of the full dataset, the CSV file ends up at 1.5 GB in size.
Now I want to run the full dataset, but I'm facing issues with the 15 GB file. When I compress the file with gzip, it ends up at only 20 MB. However, SageMaker only supports gzip on "Protobuf-Recordio" files. I know I can make RecordIO files with im2rec, but that seems to be intended for image files for image classification. I'm also not sure how to generate the protobuf file.
To make things even worse(?) :) I'm generating the dataset in Node.
I would be very grateful for some pointers in the right direction on how to do this.
This link has useful information if you are willing to use a Python script to transform your data: https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-preprocess-data-transform.html
The actual conversion code in the SDK is here: https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/amazon/common.py
Basically, you could load your CSV data into an NDArray (in batches, so that you can write to multiple files) and then use that module to convert it to RecordIO-protobuf. You should be able to write the buffer containing the RecordIO-protobuf data to a file, as sketched below.
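A rough sketch of that conversion, using write_numpy_to_dense_tensor from the module linked above (the arrays here are random placeholders standing in for one batch of your CSV):

import numpy as np
import sagemaker.amazon.common as smac

# One hypothetical batch of rows: feature matrix plus label vector.
features = np.random.rand(1000, 10).astype("float32")
labels = np.random.rand(1000).astype("float32")

# Serialise the batch to RecordIO-protobuf and write it to a part file.
with open("train-part-0001.pbr", "wb") as f:
    smac.write_numpy_to_dense_tensor(f, features, labels)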
Thanks
