I want to extract the VGG features of a set of images and keep them in memory in a dictionary. The dictionary ends up holding 8091 tensors, each of shape (1, 4096), but my machine crashes with an out-of-memory error about 6% of the way through. Does anybody have a clue why this is happening and how to prevent it?
In fact, the issue seems to be triggered by the call to VGG rather than by the stored tensors themselves, since storing the VGG classification output is enough to trigger the error.
Below is the simplest code I've found that reproduces the error. First, a helper function is defined:
import torch, torchvision
from tqdm import tqdm

vgg = torchvision.models.vgg16(weights='DEFAULT')

def try_and_crash(gen_data):
    store_out = {}
    for i in tqdm(range(8091)):
        my_output = gen_data(torch.randn(1, 3, 224, 224))
        store_out[i] = my_output
    return store_out
Calling it to quickly produce a large tensor doesn't cause a fuss:
just_fine = try_and_crash(lambda x: torch.randn(1, 4096))
but calling it with vgg crashes the machine:
will_crash = try_and_crash(vgg)
The problem is that each tensor stored in store_out[i] also keeps a reference to the computation graph that produced it (so that gradients could be backpropagated later), and therefore ends up being much larger than a plain 1x4096 tensor.
Running the code under torch.no_grad(), or equivalently with torch.set_grad_enabled(False), solves the issue. We can test this by slightly changing the helper function:
def try_and_crash_grad(gen_data, grad_enabled):
    store_out = {}
    for i in tqdm(range(8091)):
        with torch.set_grad_enabled(grad_enabled):
            my_output = gen_data(torch.randn(1, 3, 224, 224))
            store_out[i] = my_output
    return store_out
Now the following works:
works_fine = try_and_crash_grad(vgg, False)
while the following throws an out-of-memory error:
crashes = try_and_crash_grad(vgg, True)
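If you do need gradients elsewhere, an alternative sketch (my own addition, not part of the answer above; the helper name is made up) is to store detached copies, so the graph built in each iteration can be freed before the next one:
def try_and_store_detached(gen_data):
    store_out = {}
    for i in tqdm(range(8091)):
        # detach() returns a tensor without a grad_fn, so the graph of this
        # iteration is no longer referenced and can be garbage-collected
        store_out[i] = gen_data(torch.randn(1, 3, 224, 224)).detach()
    return store_out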
Right now, if I want to create a tensor on the GPU, I have to do it manually. For context, I'm sure that GPU support is available, since
print(torch.backends.mps.is_available())  # True if MPS can be used on this machine
print(torch.backends.mps.is_built())      # True if the current PyTorch installation was built with MPS activated
both return True.
I've been doing this every time:
device = torch.device("mps")
a = torch.randn((), device=device)
Is there a way to specify, for a Jupyter notebook, that all my tensors should be created on the GPU?
The convenient way
There is no convenient way to set the default device to MPS as of 2022-12-22, per discussion on this issue.
The inconvenient way
You can accomplish the objective of 'I don't want to specify device= for tensor constructors, just use MPS' by intercepting calls to tensor constructors:
class MPSMode(torch.overrides.TorchFunctionMode):
    def __init__(self):
        # incomplete list; see link above for the full list
        self.constructors = {getattr(torch, x) for x in
                             "empty ones arange eye full fill linspace rand randn randint randperm range zeros tensor as_tensor".split()}

    def __torch_function__(self, func, types, args=(), kwargs=None):
        if kwargs is None:
            kwargs = {}
        if func in self.constructors:
            if 'device' not in kwargs:
                kwargs['device'] = 'mps'
        return func(*args, **kwargs)

# sensible usage
with MPSMode():
    print(torch.empty(1).device)  # prints mps:0

# sneaky usage
MPSMode().__enter__()
print(torch.empty(1).device)  # prints mps:0
The recommended way
I would lean towards just putting your device in a config at the top of your notebook and using it explicitly:
class Conf: dev = torch.device("mps")
# ...
a = torch.randn(1, device=Conf.dev)
This requires you to type device=Conf.dev throughout the code, but it makes it easy to switch your code to a different device, and there is no implicit global state to worry about.
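A small variant of that config (my own addition, not from the answer above) falls back to the CPU when MPS is unavailable, so the same notebook also runs on machines without an MPS device:
class Conf:
    # use MPS when available, otherwise fall back to the CPU
    dev = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

a = torch.randn(1, device=Conf.dev)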
As of 2023-01-20, with the PyTorch 2.0 nightly builds, you can set the default device to MPS using:
torch.set_default_device("mps")
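A minimal usage sketch (assuming a PyTorch 2.0 nightly or later with MPS available):
import torch

torch.set_default_device("mps")
print(torch.randn(1).device)  # expected to print mps:0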
I am currently developing some code that deals with large multidimensional arrays. Of course, Python gets very slow if you perform these computations serially, so I looked into parallelizing the code, and one of the possible solutions I found is the multiprocessing library.
What I have come up with so far is to first divide the big array into smaller chunks and then perform some operation on each of those chunks in parallel, using a Pool of workers from multiprocessing. For that to be efficient, and based on this answer, I believe I should use a shared-memory array defined as a global variable, to avoid copying it every time a process from the pool is called.
Here is a minimal example of what I'm trying to do, to illustrate the issue:
import numpy as np
from functools import partial
import multiprocessing as mp
import ctypes

class Trials:
    # Perform computation along first dimension of shared array, representing the chunks
    def Compute(i, shared_array):
        shared_array[i] = shared_array[i] + 2

    # The function you actually call
    def DoSomething(self):
        # Initializer function for Pool, should define the global variable shared_array
        # I have also tried putting this function outside DoSomething, as a part of the class,
        # with the same results
        def initialize(base, State):
            global shared_array
            shared_array = np.ctypeslib.as_array(base.get_obj()).reshape(125, 100, 100) + State

        base = mp.Array(ctypes.c_float, 125 * 100 * 100)  # Create base array
        state = np.random.rand(125, 100, 100)             # Create seed

        # Initialize pool of workers and perform calculations
        with mp.Pool(processes=10,
                     initializer=initialize,
                     initargs=(base, state,)) as pool:
            run = partial(self.Compute,
                          shared_array=shared_array)  # Here the error says that shared_array is not defined
            pool.map(run, np.arange(125))
            pool.close()
            pool.join()
        print(shared_array)

if __name__ == '__main__':
    Trials = Trials()
    Trials.DoSomething()
The trouble I am encountering is that when I define the partial function, I get the following error:
NameError: name 'shared_array' is not defined
From what I understand, this means that I cannot access the global variable shared_array. I'm sure the initialize function is executing, since putting a print statement inside it produces output in the terminal.
What am I doing wrong, and is there any way to solve this issue?
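For reference, here is a minimal sketch of the usual shared-memory pattern (the names init_worker and compute are mine, and this is not necessarily the fix the asker adopted): the worker function reads the module-level global set by the pool initializer, which only exists inside the worker processes, instead of receiving the array via partial in the parent process, where that global is never defined.
import ctypes
import multiprocessing as mp
import numpy as np

def init_worker(base):
    # Runs once per worker process and exposes the shared buffer as a global there
    global shared_array
    shared_array = np.ctypeslib.as_array(base.get_obj()).reshape(125, 100, 100)

def compute(i):
    # Uses the worker-local global directly; nothing is copied per call
    shared_array[i] = shared_array[i] + 2

if __name__ == '__main__':
    base = mp.Array(ctypes.c_float, 125 * 100 * 100)
    with mp.Pool(processes=4, initializer=init_worker, initargs=(base,)) as pool:
        pool.map(compute, range(125))
    result = np.ctypeslib.as_array(base.get_obj()).reshape(125, 100, 100)
    print(result[0, 0, 0])  # expected 2.0 after one pass over the chunks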
I'm trying to restart an optimisation in pymoo.
I have a problem defined as:
class myOptProblem(Problem):
    """my body goes here"""

algorithm = NSGA2(pop_size=24)

problem = myOptProblem(opt_obj=dp_ptr,
                       nvars=7,
                       nobj=4,
                       nconstr=0,
                       lb=0.3 * np.ones(7),
                       ub=0.7 * np.ones(7),
                       parallelization=('threads', cpu_count(),))

res = minimize(problem,
               algorithm,
               ('n_gen', 100),
               seed=1,
               verbose=True)
During the optimisation I write the design vectors and results to a .csv file. An example of design_vectors.csv is:
5.000000000000000000e+00, 4.079711567060104183e-01, 6.583544872784267143e-01, 4.712364759485179189e-01, 6.859360188593541796e-01, 5.653765991273791425e-01, 5.486782880836487131e-01, 5.275405748345924906e-01,
7.000000000000000000e+00, 5.211287914743063521e-01, 6.368123569438421949e-01, 3.496693260479644128e-01, 4.116734716044557763e-01, 5.343037085833151068e-01, 6.878382993278697732e-01, 5.244120877022839800e-01,
9.000000000000000000e+00, 5.425317846613321171e-01, 5.275405748345924906e-01, 4.269449637288642574e-01, 6.954464617649794844e-01, 5.318980876983187001e-01, 4.520564690494201510e-01, 5.203792876471586837e-01,
1.100000000000000000e+01, 4.579502451694219545e-01, 6.853050113762846340e-01, 3.695822666721857441e-01, 3.505318077758549089e-01, 3.540316632186925050e-01, 5.022648662707586142e-01, 3.086099221096791911e-01,
3.000000000000000000e+00, 4.121775968257620493e-01, 6.157117313805953174e-01, 3.412904026310568106e-01, 4.791574104703620329e-01, 6.634382012372381787e-01, 4.174456593494717538e-01, 4.151101354345394512e-01,
The results.csv is:
5.000000000000000000e+00, 1.000000000000000000e+05, 1.000000000000000000e+05, 1.000000000000000000e+05, 1.000000000000000000e+05,
7.000000000000000000e+00, 1.041682833582066703e+00, 3.481167125962069189e-03, -5.235115318709097909e-02, 4.634480813876099177e-03,
9.000000000000000000e+00, 1.067730307802263967e+00, 2.194702810002167534e-02, -3.195892023664552717e-01, 1.841232582360878426e-03,
1.100000000000000000e+01, 8.986880344052742275e-01, 2.969022150977750681e-03, -4.346692726475211849e-02, 4.995468429444801205e-03,
3.000000000000000000e+00, 9.638770499257821589e-01, 1.859596479928402393e-02, -2.723230073142696162e-01, 1.600910928983005632e-03,
The first column is the index of the design vector - because I evaluate asynchronously across threads, I record the indices explicitly.
I see that it should be possible to restart the optimisation via the sampling parameter of pymoo.algorithms.nsga2.NSGA2, but I couldn't find a working example. The documentation for both population and individuals is also not clear. So how can I restart a simulation with the previous results?
Yes, you can initialize the algorithm object with a population instead of having it sampled randomly.
I have written a small tutorial for biased initialization:
https://pymoo.org/customization/initialization.html
Because in your case the data already exist, in a CSV or in-memory file, you might want to create a dummy problem (I have called it Constant in my example) to set the attributes in the Population object (in the population, X, F, G, CV and feasible need to be set). Another way would be setting the attributes directly...
The biased initialization with a dummy problem is shown below. If you already use pymoo to store the CSV files, you can also just np.save the Population object directly and load it; then none of the intermediate steps are necessary.
I am planning to improve the checkpoint implementation in the future, so if you have more feedback or use cases that are not possible yet, please let me know.
import numpy as np

from pymoo.algorithms.nsga2 import NSGA2
from pymoo.algorithms.so_genetic_algorithm import GA
from pymoo.factory import get_problem, G1, Problem
from pymoo.model.evaluator import Evaluator
from pymoo.model.population import Population
from pymoo.optimize import minimize

class YourProblem(Problem):

    def __init__(self, n_var=10):
        super().__init__(n_var=n_var, n_obj=1, n_constr=0, xl=-0, xu=1, type_var=np.double)

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = np.sum(np.square(x - 0.5), axis=1)

problem = YourProblem()

# create initial data and set it on the population object - for you, this comes from your files
N = 300
X = np.random.random((N, problem.n_var))
F = np.random.random((N, problem.n_obj))
G = np.random.random((N, problem.n_constr))

class Constant(YourProblem):

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = F
        out["G"] = G

pop = Population().new("X", X)
Evaluator().eval(Constant(), pop)

algorithm = GA(pop_size=100, sampling=pop)

minimize(problem,
         algorithm,
         ('n_gen', 10),
         seed=1,
         verbose=True)
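To connect this to the question, a hedged sketch of loading the asker's CSV files into the X and F arrays used above (file names and column layout are taken from the question; the trailing comma in each row produces an extra empty column, hence the slicing):
raw_X = np.genfromtxt("design_vectors.csv", delimiter=",")
raw_F = np.genfromtxt("results.csv", delimiter=",")

X = raw_X[:, 1:8]   # 7 design variables (column 0 is the run index)
F = raw_F[:, 1:5]   # 4 objective values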
I am working with the following code, but I get an error
import numpy as np
import pymc3 as pm
import theano.tensor as tt

with pm.Model() as model:
    alpha = 1.0 / count_data.mean()  # Recall count_data is the
                                     # variable that holds our txt counts
    lambda_1 = pm.Exponential("lambda_1", alpha)
    lambda_2 = pm.Exponential("lambda_2", alpha)
    tau = pm.DiscreteUniform("tau", lower=0, upper=n_count_data - 1)

with model:
    idx = np.arange(n_count_data)  # Index
    lambda_ = pm.math.switch(tau > idx, lambda_1, lambda_2)

with model:
    observation = pm.Poisson("obs", lambda_, observed=count_data)

with model:
    step = pm.Metropolis()
    trace = pm.sample(10000, tune=5000, step=step)
But I get the error:
ValueError: must use protocol 4 or greater to copy this object; since __getnewargs_ex__ returned keyword arguments.
I am on Windows 10 with Python 3.5.6, PyMC3 3.5, and IPython 6.5.0. Any help is deeply appreciated. Thanks in advance.
It sounds like this exception is being thrown by the joblib library, which uses pickle to send the model to different processes. The easiest fix is to use only a single core, by changing the last line to
trace = pm.sample(10000, tune=5000, step=step, cores=1, chains=4)
It will be hard to diagnose the problem with joblib without more details. Creating a fresh conda environment might help.
The workaround suggested by colcarroll did not work for me. The behavior you are seeing is related to PR #3140 of PyMC3, which you may want to track there. The solution and/or workaround may depend on how you are running Theano (with or without GPU support).