Linux memory allocation in RStudio / RStudio Server on Linux

I am trying to do clustering with CLARA using RStudio on Linux, and I have a very large dataset.
However, it seems there is not enough memory for the whole dataset.
## Estimating the number of clusters ----
fviz_nbclust(df, clara, method = "silhouette", k.max = 15)
It showed me this:
Error: cannot allocate vector of size 339.8 GB
So I tried all of the following and it still didn't work. memory.limit() is also Windows-only (I still gave it a try, though).
# devtools::install_github("krlmlr/ulimit")
# gc()
# memory.limit(9999999999)
#
#
# install.packages("devtools", dependencies = TRUE)
# devtools::install_github("krlmlr/ulimit")
# ulimit::memory_limit(2000)
#
# devtools::install_github("jeroen/unix")
#
#
# if(.Platform$OS.type == "windows") withAutoprint({
# memory.size()
# memory.size(TRUE)
# memory.limit()
# })
# memory.limit(size=56000)
# memory.size(max = FALSE)
Can somebody help me with this?
Any help would be appreciated!

The error simply means that R cannot allocate a 339.8 GB vector in your RAM. Do you have 360 GB of RAM?
If not, you will have to run the function on a random subset of your dataset, e.g. one drawn with dplyr::slice_sample().

Related

How to Fully Utilize CPU cores for skopt.forest_minimize

So I have the following code for running skopt.forest_minimize(), but the biggest challenge I am facing right now is that it is taking upwards of days to finish running even just 2 iterations.
SPACE = [skopt.space.Integer(4, max_neighbour, name='n_neighbors', prior='log-uniform'),
         skopt.space.Integer(6, 10, name='nr_cubes', prior='uniform'),
         skopt.space.Categorical(overlap_cat, name='overlap_perc')]

@skopt.utils.use_named_args(SPACE)
def objective(**params):
    score, scomp = tune_clustering(X_cont=X_cont, df=df, pl_brewer=pl_brewer, **params)
    if score == 0:
        print('saving new scomp')
        with open(scomp_file, 'w') as filehandle:
            json.dump(scomp, filehandle, default=json_default)
    return score

results = skopt.forest_minimize(objective, SPACE, n_calls=1, n_initial_points=1, callback=[scoring])
Is it possible to optimize this code so that it computes faster? I noticed that it was barely using my CPU; the highest utilization I saw was about 30% (it's a 9th-gen i7 with 8 cores).
Also, a question while I'm at it: is it possible to use a GPU for these computations? I have a 3050 that I can use.
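One thing worth checking (not part of the original post, and only a sketch): forest_minimize accepts an n_jobs argument, which is handed to the surrogate random-forest model so that fitting it can use several cores; it does not parallelize calls to your objective, so if tune_clustering dominates the runtime the gain will be limited. A minimal, self-contained example with a dummy objective:
import skopt

# Dummy search space and objective, just to show where n_jobs goes.
SPACE = [skopt.space.Integer(4, 50, name='n_neighbors', prior='log-uniform'),
         skopt.space.Integer(6, 10, name='nr_cubes', prior='uniform')]

@skopt.utils.use_named_args(SPACE)
def objective(**params):
    # Stand-in for an expensive scoring function such as tune_clustering().
    return float(params['n_neighbors'] + params['nr_cubes'])

results = skopt.forest_minimize(
    objective,
    SPACE,
    n_calls=10,
    n_initial_points=5,
    n_jobs=-1,  # let the surrogate-forest fitting use all available cores
)
print(results.x, results.fun)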

Submitted metrics not showing up on Prometheus endpoint

I have code that looks like this; it is supposed to collect some custom metrics and expose them over Prometheus.
def collect_metrics():
    registry = prometheus_client.CollectorRegistry()
    label_names = ['parent', 'namespace', 'team', 'name', 'status']
    sib = Gauge(f'disk_sizeInBytes', 'Gets the size of the disk in bytes.', label_names, registry=registry)
    msib = Gauge(f'disk_maxSizeInMegabytes', 'Gets or sets the maximum size of the disk in megabytes, which is the size of memory allocated for the disk.', label_names, registry=registry)
    ...
    sib.labels(parent=parent_name, namespace=namespace_name, team=team, name=disk_name, status=disk_status).set(disk_list[dp]["sizeInBytes"])
    msib.labels(parent=parent_name, namespace=namespace_name, team=team, name=disk_name, status=disk_status).set(disk_list[dp]["maxSizeInMegabytes"])
    print(f'{datetime.datetime.now()} | disk_name: {disk_name} | sib: {disk_list[dp]["sizeInBytes"]} | msib: {disk_list[dp]["maxSizeInMegabytes"]}')
    ...

if __name__ == '__main__':
    ...
    start_http_server(8005)
    collect_metrics()
The code runs without any errors; however, I don't see my metrics at the endpoint http://localhost:8005/, though I do see some default metrics such as:
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 403.0
python_gc_objects_collected_total{generation="1"} 0.0
python_gc_objects_collected_total{generation="2"} 0.0
# HELP python_gc_objects_uncollectable_total Uncollectable object found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 39.0
python_gc_collections_total{generation="1"} 3.0
python_gc_collections_total{generation="2"} 0.0
# HELP python_info Python platform information
# TYPE python_info gauge
python_info{implementation="CPython",major="3",minor="10",patchlevel="4",version="3.10.4"} 1.0
Can someone point out what the issue is here?
A couple of things:
Remove registry = prometheus_client.CollectorRegistry()
Remove registry=registry from the Gauge declarations
Add a loop to keep the process running.
(start_http_server() serves the default global registry, so metrics registered on a separate CollectorRegistry are never exposed unless that registry is passed to the server.)
import datetime
import re
import time
from prometheus_client import CollectorRegistry, Gauge
from prometheus_client import start_http_server

def collect_metrics():
    label_names = ['parent', 'namespace', 'team', 'name', 'status']
    sib = Gauge(
        'disk_sizeInBytes',
        'Gets the size of the disk in bytes.',
        label_names,
    )
    msib = Gauge(
        'disk_maxSizeInMegabytes',
        'Gets or sets the maximum size of the disk in megabytes, which is the size of memory allocated for the disk.',
        label_names,
    )
    sib.labels(
        parent="parent_name",
        namespace="namespace_name",
        team="team",
        name="disk_name",
        status="disk_status",
    ).set(10.0)
    msib.labels(
        parent="parent_name",
        namespace="namespace_name",
        team="team",
        name="disk_name",
        status="disk_status",
    ).set(5.0)

if __name__ == '__main__':
    ...
    start_http_server(8005)
    collect_metrics()
    while True:
        time.sleep(5)
The endpoint then shows:
# HELP disk_sizeInBytes Gets the size of the disk in bytes.
# TYPE disk_sizeInBytes gauge
disk_sizeInBytes{name="disk_name",namespace="namespace_name",parent="parent_name",status="disk_status",team="team"} 10.0
# HELP disk_maxSizeInMegabytes Gets or sets the maximum size of the disk in megabytes, which is the size of memory allocated for the disk.
# TYPE disk_maxSizeInMegabytes gauge
disk_maxSizeInMegabytes{name="disk_name",namespace="namespace_name",parent="parent_name",status="disk_status",team="team"} 5.0
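An alternative to dropping the custom registry entirely (my suggestion, not part of the answer above): prometheus_client's start_http_server() also accepts a registry argument, so you can keep a dedicated CollectorRegistry and serve exactly the metrics registered on it. A minimal sketch:
import time
from prometheus_client import CollectorRegistry, Gauge, start_http_server

registry = CollectorRegistry()

sib = Gauge(
    'disk_sizeInBytes',
    'Gets the size of the disk in bytes.',
    ['parent', 'namespace', 'team', 'name', 'status'],
    registry=registry,  # register the metric on the dedicated registry
)

if __name__ == '__main__':
    # Serve only the metrics registered on `registry`.
    start_http_server(8005, registry=registry)
    sib.labels(
        parent="parent_name",
        namespace="namespace_name",
        team="team",
        name="disk_name",
        status="disk_status",
    ).set(10.0)
    while True:
        time.sleep(5)
With a dedicated registry the python_gc_* and python_info defaults are not exposed, because those collectors are attached to the global default registry.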

RuntimeError on running ALBERT for obtaining encoding vectors from text

I'm trying to get feature vectors from the encoder model using pre-trained ALBERT v2 weights. I have an NVIDIA 1650 Ti GPU (4 GB) and sufficient RAM (8 GB), but for some reason I'm getting a RuntimeError saying:
RuntimeError: [enforce fail at …\c10\core\CPUAllocator.cpp:75] data.
DefaultCPUAllocator: not enough memory: you tried to allocate
491520000 bytes. Buy new RAM!
I'm really new to PyTorch and deep learning in general. Can anyone please tell me what is wrong?
My entire code:
encoded_test_data = tokenized_test_values['input_ids']
encoded_test_masks = tokenized_test_values['attention_mask']
encoded_train_data = torch.from_numpy(encoded_train_data).to(device)
encoded_masks = torch.from_numpy(encoded_masks).to(device)
encoded_test_data = torch.from_numpy(encoded_test_data).to(device)
encoded_test_masks = torch.from_numpy(encoded_test_masks).to(device)
config = EncoderDecoderConfig.from_encoder_decoder_configs(BertConfig(), BertConfig())
EnD_model = EncoderDecoderModel.from_pretrained('albert-base-v2', config=config)
feature_extractor = EnD_model.get_encoder()
feature_vector = feature_extractor.forward(input_ids=encoded_train_data, attention_mask=encoded_masks)
feature_test_vector = feature_extractor.forward(input_ids=encoded_test_data, attention_mask=encoded_test_masks)
Also, 491520000 bytes is about 490 MB, which should not be a problem.
I tried reducing the number of training examples and also the length of the maximum padded input. The OOM error still exists even though the required space is now 153 MB, which should easily be manageable.
I have also maxed out the heap limit of the PyCharm IDE to 2048 MB. I really don't know what to do now…
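No answer is recorded here, but for reference, a common way to keep inference memory down is to disable autograd and run the encoder in small batches. A minimal sketch (my addition, not from the post; it assumes a Hugging Face-style encoder whose output exposes last_hidden_state, and the batch size of 8 is an arbitrary example):
import torch

def encode_in_batches(encoder, input_ids, attention_mask, batch_size=8):
    # Run the encoder without autograd so activations are freed immediately,
    # and in small batches so only one chunk is resident at a time.
    chunks = []
    with torch.no_grad():
        for start in range(0, input_ids.size(0), batch_size):
            out = encoder(input_ids=input_ids[start:start + batch_size],
                          attention_mask=attention_mask[start:start + batch_size])
            chunks.append(out.last_hidden_state.cpu())  # move each chunk off the GPU
    return torch.cat(chunks, dim=0)

# Hypothetical usage with the tensors from the question:
# feature_vector = encode_in_batches(feature_extractor, encoded_train_data, encoded_masks)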

Feature extraction in loop seems to cause memory leak in pytorch

I have spent considerable time trying to debug some PyTorch code, and I have created a minimal example of it to help better understand what the issue might be.
I have removed all portions of the code that are unrelated to the issue, so the remaining code won't make much sense from a functional standpoint, but it still exhibits the error I'm facing.
The overall task I'm working on runs in a loop: every pass of the loop computes the embedding of the image and adds it to a variable that stores it. It is effectively aggregating (not concatenating), so the size remains the same. I don't expect the number of iterations to overflow the datatype, and I don't see that happening here or in my code.
I have added several metrics to track the size of the tensors I'm working with, to make sure they are not growing in memory footprint.
I'm checking the overall GPU memory usage to verify the issue leading to the final RuntimeError: CUDA out of memory.
My environment is as follows:
- python 3.6.2
- Pytorch 1.4.0
- Cudatoolkit 10.0
- Driver version 410.78
- GPU: Nvidia GeForce GT 1030 (2GB VRAM)
(though I've replicated this experiment with the same result on a Titan RTX with 24GB,
same pytorch version and cuda toolkit and driver, it only goes out of memory further in the loop).
Complete code below. I have marked 2 lines as culprits, as deleting them removes the issue, though obviously I need to find a way to execute them without having memory issues. Any help would be much appreciated! You may try with any image named "source_image.bmp" to replicate the issue.
import torch
from PIL import Image
import torchvision
from torchvision import transforms
from pynvml import nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo, nvmlInit
import sys
import os
os.environ["CUDA_VISIBLE_DEVICES"]='0' # this is necessary on my system to allow the environment to recognize my nvidia GPU for some reason
os.environ['CUDA_LAUNCH_BLOCKING'] = '1' # to debug by having all CUDA functions executed in place
torch.set_default_tensor_type('torch.cuda.FloatTensor')
# Preprocess image
tfms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),])
img = tfms(Image.open('source_image.bmp')).unsqueeze(0).cuda()
model = torchvision.models.resnet50(pretrained=True).cuda()
model.eval() # we put the model in evaluation mode, to prevent storage of gradient which might accumulate
nvmlInit()
h = nvmlDeviceGetHandleByIndex(0)
info = nvmlDeviceGetMemoryInfo(h)
print(f'Total available memory : {info.total / 1000000000}')
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])
orig_embedding = feature_extractor(img)
embedding_depth = 2048
mem0 = 0
embedding = torch.zeros(2048, img.shape[2], img.shape[3]) #, dtype=torch.float)
patch_size=[4,4]
patch_stride=[2,2]
patch_value=0.0
# Here, we iterate over the patch placement, defined at the top left location
for row in range(img.shape[2]-1):
    for col in range(img.shape[3]-1):
        print("######################################################")
        ######################################################
        # Isolated line, culprit 1 of the GPU memory leak
        ######################################################
        patched_embedding = feature_extractor(img)
        delta_embedding = (patched_embedding - orig_embedding).view(-1, 1, 1)
        ######################################################
        # Isolated line, culprit 2 of the GPU memory leak
        ######################################################
        embedding[:, row:row+1, col:col+1] = torch.add(embedding[:, row:row+1, col:col+1], delta_embedding)
        print("img size:\t\t", img.element_size() * img.nelement())
        print("patched_embedding size:\t", patched_embedding.element_size() * patched_embedding.nelement())
        print("delta_embedding size:\t", delta_embedding.element_size() * delta_embedding.nelement())
        print("Embedding size:\t\t", embedding.element_size() * embedding.nelement())
        del patched_embedding, delta_embedding
        torch.cuda.empty_cache()
        info = nvmlDeviceGetMemoryInfo(h)
        print("\nMem usage increase:\t", info.used / 1000000000 - mem0)
        mem0 = info.used / 1000000000
        print(f'Free:\t\t\t {(info.total - info.used) / 1000000000}')
print("Done.")
Add this to your code as soon as you load the model:
for param in model.parameters():
    param.requires_grad = False
From https://pytorch.org/docs/stable/notes/autograd.html#excluding-subgraphs-from-backward
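For context, a minimal sketch of where that goes in the posted code. The torch.no_grad() block is an additional common safeguard for inference, not part of the answer above, and the sketch assumes a CUDA device like the original code:
import torch
import torchvision

model = torchvision.models.resnet50(pretrained=True).cuda()
model.eval()
for param in model.parameters():
    param.requires_grad = False  # freeze the weights right after loading, as suggested above

feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])

img = torch.zeros(1, 3, 224, 224).cuda()  # dummy input standing in for the preprocessed image
with torch.no_grad():  # no autograd graph is kept alive across loop iterations
    patched_embedding = feature_extractor(img)
print(patched_embedding.shape)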

MySQL Seconds_Behind_Master very high

Hi, we have MySQL master-slave replication; the master is MySQL 5.6 and the slave is MySQL 5.7. Seconds_Behind_Master is 245,000. How do I make it catch up faster? Right now it is taking more than 6 hours to catch up 100,000 seconds.
My slave has 128 GB of RAM. Below is my my.cnf:
[mysqld]
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
innodb_buffer_pool_size = 110G
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
# These are commonly set, remove the # and set as required.
basedir = /usr/local/mysql
datadir = /disk1/mysqldata
port = 3306
#server_id = 3
socket = /var/run/mysqld/mysqld.sock
user=mysql
log_error = /var/log/mysql/error.log
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
join_buffer_size = 256M
sort_buffer_size = 128M
read_rnd_buffer_size = 2M
#copied from old config
#key_buffer = 16M
max_allowed_packet = 256M
thread_stack = 192K
thread_cache_size = 8
query_cache_limit = 1M
#disabling query_cache_size and type, for replication purpose, need to enable it when going live
query_cache_size = 0
#query_cache_size = 64M
#query_cache_type = 1
query_cache_type = OFF
#GroupBy
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
#sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
enforce-gtid-consistency
gtid-mode = ON
log_slave_updates=0
slave_transaction_retries = 100
#replication related changes
server-id = 2
relay-log = /disk1/mysqllog/mysql-relay-bin.log
log_bin = /disk1/mysqllog/binlog/mysql-bin.log
binlog_do_db = brandmanagement
#replicate_wild_do_table=brandmanagement.%
replicate-wild-ignore-table=brandmanagement.t\_gnip\_data\_recent
replicate-wild-ignore-table=brandmanagement.t\_gnip\_data
replicate-wild-ignore-table=brandmanagement.t\_fb\_rt\_data
replicate-wild-ignore-table=brandmanagement.t\_keyword\_tweets
replicate-wild-ignore-table=brandmanagement.t\_gnip\_data\_old
replicate-wild-ignore-table=brandmanagement.t\_gnip\_data\_new
binlog_format=row
report-host=10.125.133.220
report-port=3306
#sync-master-info=1
read-only=1
net_read_timeout = 7200
net_write_timeout = 7200
innodb_flush_log_at_trx_commit = 2
sync_binlog=0
sync_relay_log_info=0
max_relay_log_size=268435456
There are lots of possible solutions, but I'll go with the simplest one. Have you got enough network bandwidth to send all the changes over the network? You're using the "row" binlog format, which may be good in the case of random, unindexed updates. But if you're changing a lot of data through indexed statements only, then the "mixed" binlog format may be better.
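If you try the binlog suggestion, note that binlog_format is a master-side setting; in the master's my.cnf it would look roughly like this (a sketch only; changing the format on a live replication setup needs care):
# On the master, not in the slave config shown above
binlog_format = MIXED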
