I need to know how to react when I want to save data with Core Data and the device is running out of storage capacity.
How do I know that I do not have much storage left?
How do I know whether I have enough space to store what I want to store?
How do I handle errors if saving fails, and which errors can occur?
I have used Core Data quite a lot, but mainly for read-only data or for storing small quantities of data.
Any suggestion on a good document or tutorial on the subject would be welcome.
I do not need any general introduction on Core Data though.
To check free disk space:
NSError *error = nil;
NSDictionary *fileAttribs = [[NSFileManager defaultManager] attributesOfFileSystemForPath:@"/" error:&error];
unsigned long long freeSpace = [[fileAttribs objectForKey:NSFileSystemFreeSize] unsignedLongLongValue];
NSLog(@"free space: %dGB", (int)(freeSpace / 1073741824)); // change the divisor to get KB, MB, GB, TB, etc.
And the Cocoa domain error codes for file read/write are:
NSFileNoSuchFileError = 4
NSFileLockingError = 255
NSFileReadUnknownError = 256
NSFileReadNoPermissionError = 257
NSFileReadInvalidFileNameError = 258
NSFileReadCorruptFileError = 259
NSFileReadNoSuchFileError = 260
NSFileReadInapplicableStringEncodingError = 261
NSFileReadUnsupportedSchemeError = 262
NSFileReadTooLargeError = 263
NSFileReadUnknownStringEncodingError = 264
NSFileWriteUnknownError = 512
NSFileWriteNoPermissionError = 513
NSFileWriteInvalidFileNameError = 514
NSFileWriteFileExistsError = 516
NSFileWriteInapplicableStringEncodingError = 517
NSFileWriteUnsupportedSchemeError = 518
NSFileWriteOutOfSpaceError = 640
NSFileWriteVolumeReadOnlyError = 642
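A save that fails because the disk is full will typically surface as an NSError in NSCocoaErrorDomain with code NSFileWriteOutOfSpaceError. A minimal sketch of checking for that case (assuming a managed object context named context; adapt to your stack):
NSError *saveError = nil;
if (![context save:&saveError]) {
    if ([saveError.domain isEqualToString:NSCocoaErrorDomain] &&
        saveError.code == NSFileWriteOutOfSpaceError) {
        // The persistent store could not be written because the volume is full;
        // free up space or warn the user before retrying the save.
    } else {
        NSLog(@"Core Data save failed: %@", saveError);
    }
}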
Related
So I have the following code for running skopt.forest_minimize(), but the biggest challenge I am facing right now is that it takes upwards of days to finish even just 2 iterations.
import json
import skopt

SPACE = [skopt.space.Integer(4, max_neighbour, name='n_neighbors', prior='log-uniform'),
         skopt.space.Integer(6, 10, name='nr_cubes', prior='uniform'),
         skopt.space.Categorical(overlap_cat, name='overlap_perc')]

@skopt.utils.use_named_args(SPACE)
def objective(**params):
    score, scomp = tune_clustering(X_cont=X_cont, df=df, pl_brewer=pl_brewer, **params)
    if score == 0:
        print('saving new scomp')
        with open(scomp_file, 'w') as filehandle:
            json.dump(scomp, filehandle, default=json_default)
    return score

results = skopt.forest_minimize(objective, SPACE, n_calls=1, n_initial_points=1, callback=[scoring])
Is it possible to optimize the code above so that it computes faster? I noticed that it was barely making use of my CPU; peak CPU utilization is about 30% (it's a 9th-gen i7 with 8 cores).
Also, a question while I'm at it: is it possible to utilize a GPU for these computational tasks? I have a 3050 that I can use.
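For what it's worth, skopt.forest_minimize accepts an n_jobs argument that is forwarded to the underlying forest surrogate, so fitting the surrogate can use several cores; it does not parallelize the objective function itself, which is usually the dominant cost here. A minimal sketch reusing the objective, SPACE, and scoring names from above (the n_calls and n_initial_points values are arbitrary assumptions):

results = skopt.forest_minimize(
    objective,
    SPACE,
    n_calls=10,           # the surrogate only starts guiding the search after the initial points
    n_initial_points=5,
    n_jobs=-1,            # let the forest fit use all available CPU cores
    callback=[scoring],
)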
I'm working with multiple well-formed XML files whose sizes range from 100 MB to 4 GB. My goal is to read them as strings and then import them as ElementTree objects using the .fromstring() method from the xml.etree.ElementTree module.
However, as the process goes on and the string size increases, two exceptions occurred, both related to memory restrictions:
xml.etree.ElementTree.ParseError: out of memory: line 1, column 0
OverflowError: size does not fit in an int
It looks like the .fromstring() method enforces a size limit on the input string, around 1 GB?
To debug this, I wrote a short script using a for loop:
xmlFiles_list = [path1, path2, ...]

for fp in xmlFiles_list:
    xml_fo = open(fp, mode='r', encoding="utf-8")
    xml_asStr = xml_fo.read()
    xml_fo.close()
    print(len(xml_asStr.encode("utf-8")) / 10**9)  # display string size in GB
    try:
        etree = cElementTree.fromstring(xml_asStr)
        print(".fromstring() success!\n")
    except Exception as e:
        print(f"Error :{type(e)} {str(e)}\n")
        continue
The output is as follows:
0.895206753
.fromstring() success!
1.220224531
Error :<class 'xml.etree.ElementTree.ParseError'> out of memory: line 1, column 0
1.328233473
Error :<class 'xml.etree.ElementTree.ParseError'> out of memory: line 1, column 0
2.567867904
Error :<class 'OverflowError'> size does not fit in an int
4.080672538
Error :<class 'OverflowError'> size does not fit in an int
I found multiple workarounds to avoid this issue: the .parse() method, or the lxml module for better performance. I just hope someone could shed some light on this:
Is there a specific string size limit in the xml.etree.ElementTree module and its .fromstring() method?
Why do I end up with two different exceptions as the string size increases? Are they related to the same memory-allocation restriction?
Python version/system: 3.9 (64-bit)
RAM: 32 GB
I hope my question is clear enough; I'm new to Stack Overflow.
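Regarding the workarounds mentioned above, one way to sidestep loading the whole document as a single string is incremental parsing with xml.etree.ElementTree.iterparse. A minimal sketch (the 'record' tag name is a placeholder for whatever element you actually need):

import xml.etree.ElementTree as ET

def stream_elements(path, tag="record"):
    # iterparse reads the file incrementally instead of materializing one huge string
    for event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == tag:
            yield elem
            elem.clear()  # drop already-processed elements to keep memory flat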
I'm trying to get feature vectors from the encoder model using pre-trained ALBERT v2 weights. I have an NVIDIA 1650 Ti GPU (4 GB) and sufficient RAM (8 GB), but for some reason I'm getting a runtime error saying:
RuntimeError: [enforce fail at …\c10\core\CPUAllocator.cpp:75] data.
DefaultCPUAllocator: not enough memory: you tried to allocate
491520000 bytes. Buy new RAM!
I'm really new to PyTorch and deep learning in general. Can anyone please tell me what is wrong?
My entire code:
encoded_test_data = tokenized_test_values['input_ids']
encoded_test_masks = tokenized_test_values['attention_mask']

encoded_train_data = torch.from_numpy(encoded_train_data).to(device)
encoded_masks = torch.from_numpy(encoded_masks).to(device)
encoded_test_data = torch.from_numpy(encoded_test_data).to(device)
encoded_test_masks = torch.from_numpy(encoded_test_masks).to(device)

config = EncoderDecoderConfig.from_encoder_decoder_configs(BertConfig(), BertConfig())
EnD_model = EncoderDecoderModel.from_pretrained('albert-base-v2', config=config)
feature_extractor = EnD_model.get_encoder()
feature_vector = feature_extractor.forward(input_ids=encoded_train_data, attention_mask=encoded_masks)
feature_test_vector = feature_extractor.forward(input_ids=encoded_test_data, attention_mask=encoded_test_masks)
Also, 491520000 bytes is about 490 MB, which should not be a problem.
I tried reducing the number of training examples and also the length of the maximum padded input. The OOM error still exists even though the required space is now 153 MB, which should easily be manageable.
I have also maxed out the RAM limit of PyCharm's heap to 2048 MB. I really don't know what to do now…
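For what it's worth, when only extracting features (no training), a common way to keep memory down is to disable gradient tracking and run the encoder over the inputs in chunks. A minimal sketch reusing the names from the code above, assuming a recent transformers version where the encoder output exposes last_hidden_state (the chunk size of 32 is an arbitrary assumption):

import torch

feature_chunks = []
with torch.no_grad():  # no autograd buffers are kept, which cuts memory use substantially
    for start in range(0, encoded_train_data.size(0), 32):
        batch_ids = encoded_train_data[start:start + 32]
        batch_mask = encoded_masks[start:start + 32]
        out = feature_extractor(input_ids=batch_ids, attention_mask=batch_mask)
        feature_chunks.append(out.last_hidden_state.cpu())  # move results off the GPU as you go

feature_vector = torch.cat(feature_chunks, dim=0)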
I want the program to return destination_index, but it is not returning it. It shows nothing on the console.
destinations = ['Paris, France', 'Shanghai, China', 'Los Angeles, USA', 'São Paulo, Brazil', 'Cairo, Egypt']
test_traveler = ['Erin Wilkes', 'Shanghai, China', ['historical site', 'art']]

def get_destination_index(destination):
    for destination_index in range(len(destinations)):
        if destination == destinations[destination_index]:
            return destination_index

get_destination_index('Cairo, Egypt')
You can try adding a print statement:
destinations = ['Paris, France', 'Shanghai, China', 'Los Angeles, USA', 'São Paulo, Brazil', 'Cairo, Egypt']
test_traveler = ['Erin Wilkes', 'Shanghai, China', ['historical site', 'art']]

def get_destination_index(destination):
    for destination_index in range(len(destinations)):
        if destination == destinations[destination_index]:
            return destination_index

print(get_destination_index('Cairo, Egypt'))
And the output will be
4
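As an aside, the same lookup can be written without manual index arithmetic by using enumerate; a small sketch of that variant:

def get_destination_index(destination):
    # enumerate yields (index, value) pairs, so range(len(...)) indexing is unnecessary
    for destination_index, name in enumerate(destinations):
        if destination == name:
            return destination_index

print(get_destination_index('Cairo, Egypt'))  # 4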
I'm running Jetty on my web server. My current JVM settings are -Xmx4g -Xms2g; however, Jetty uses a lot more memory and I don't know where the extra memory goes.
Jetty uses 4.547 GB of memory in total.
The heap usage report shows heap memory usage at about 2.5 GB:
Heap Usage:
New Generation (Eden + 1 Survivor Space):
capacity = 483196928 (460.8125MB)
used = 277626712 (264.76546478271484MB)
free = 205570216 (196.04703521728516MB)
57.45622455612963% used
Eden Space:
capacity = 429522944 (409.625MB)
used = 251267840 (239.627685546875MB)
free = 178255104 (169.997314453125MB)
58.4992824038755% used
From Space:
capacity = 53673984 (51.1875MB)
used = 26358872 (25.137779235839844MB)
free = 27315112 (26.049720764160156MB)
49.109214624351345% used
To Space:
capacity = 53673984 (51.1875MB)
used = 0 (0.0MB)
free = 53673984 (51.1875MB)
0.0% used
concurrent mark-sweep generation:
capacity = 2166849536 (2066.46875MB)
used = 1317710872 (1256.6670150756836MB)
free = 849138664 (809.8017349243164MB)
60.81229222922842% used
That still leaves about 2 GB unaccounted for, so I used Native Memory Tracking, which shows:
Total: reserved=5986478KB, committed=3259678KB
- Java Heap (reserved=4194304KB, committed=2640352KB)
(mmap: reserved=4194304KB, committed=2640352KB)
- Class (reserved=1159154KB, committed=122778KB)
(classes #18260)
(malloc=4082KB #62204)
(mmap: reserved=1155072KB, committed=118696KB)
- Thread (reserved=145568KB, committed=145568KB)
(thread #141)
(stack: reserved=143920KB, committed=143920KB)
(malloc=461KB #707)
(arena=1187KB #280)
- Code (reserved=275048KB, committed=143620KB)
(malloc=25448KB #30875)
(mmap: reserved=249600KB, committed=118172KB)
- GC (reserved=25836KB, committed=20792KB)
(malloc=11492KB #1615)
(mmap: reserved=14344KB, committed=9300KB)
- Compiler (reserved=583KB, committed=583KB)
(malloc=453KB #769)
(arena=131KB #3)
- Internal (reserved=76399KB, committed=76399KB)
(malloc=76367KB #25878)
(mmap: reserved=32KB, committed=32KB)
- Symbol (reserved=21603KB, committed=21603KB)
(malloc=17791KB #201952)
(arena=3812KB #1)
- Native Memory Tracking (reserved=5096KB, committed=5096KB)
(malloc=22KB #261)
(tracking overhead=5074KB)
- Arena Chunk (reserved=190KB, committed=190KB)
(malloc=190KB)
- Unknown (reserved=82696KB, committed=82696KB)
(mmap: reserved=82696KB, committed=82696KB)
This still doesn't explain where the memory goes. Can someone shed some light on how to locate the missing memory?
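For reference, a Native Memory Tracking summary like the one above is normally produced by starting the JVM with -XX:NativeMemoryTracking=summary and then querying the running process (the PID below is a placeholder):

jcmd <pid> VM.native_memory summary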