It seems that memory is allocated for a variable only after its initializer has been run in a session.
For some reason, I have to use some temporary variables. For example, there isn't enough memory to run a big batch in one go, so I need to store the intermediate gradients in extra variables. When I no longer need these variables, I want to return them to the uninitialized state (the state in which no memory is allocated for them), because otherwise there is no memory left for other computations.
The method I use now is to initialize the variables in a session, then close that session and create a new one. Before closing, I have to fetch the values of the persistent variables into Python ndarrays (CPU memory) and re-initialize those variables in the new session, which is verbose.
So I'd like to know: is there any way to free a variable in TensorFlow?
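For reference, here is a minimal sketch of the verbose workaround described above, assuming the TF 1.x graph/session API (the variable names are just examples):

    import tensorflow as tf  # TF 1.x style API (tf.compat.v1 in TF 2.x)

    persistent = tf.Variable(tf.zeros([1000]), name="persistent")
    temp_grads = tf.Variable(tf.zeros([1000]), name="temp_grads")  # temporary

    ph = tf.placeholder(tf.float32, shape=persistent.shape)
    restore_op = persistent.assign(ph)  # running an assign also initializes

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # ... accumulate gradients into temp_grads, update persistent ...
        saved = sess.run(persistent)  # fetch the value into a NumPy array (CPU)

    # Closing the session frees the memory of ALL variables, temporaries included.

    with tf.Session() as sess:
        sess.run(restore_op, feed_dict={ph: saved})  # re-initialize persistent
        # ... continue with other computations ...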
Related
In OpenModelica, are all variables of a model saved in the ring buffer only when the delay operator is used, or does this happen automatically whether we use delay or not? And if so, can we access the ring buffer from an external C function?
Since Modelica models can fail at any point in time, all variables are saved in some sort of backup; for the C runtime this means some buffers in the so-called thread data. In some cases the simulation runtime is then able to revert the last step and try again, slightly differently.
For example, an assert throws an error because some variable became negative when it wasn't allowed to, so the solver tries again with a smaller step size.
This backup happens independently of the presence of the delay operator; it is always done.
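To illustrate, a hypothetical Python sketch of that revert-and-retry idea (this is not OpenModelica's actual C runtime code, just the scheme described above):

    def advance_with_retry(step, state, h, min_h=1e-12):
        """Try one integration step; on failure, revert to the backup
        and retry with a smaller step size."""
        backup = dict(state)            # backup of all variables before the step
        while h > min_h:
            try:
                step(state, h)          # attempt one step of size h
                return h                # success
            except AssertionError:      # e.g. a variable became negative
                state.clear()
                state.update(backup)    # revert to the backup
                h /= 2                  # try again with a smaller step
        raise RuntimeError("step size underflow")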
For the delay operator a different data structure is used, which is the RINGBUFFER you are probably referring to. It is only allocated if there are delay operators in the Modelica model.
There are no API functions provided to access this internal data of an OpenModelica simulation, so accessing the ring buffer would only be possible if you wrote such a function yourself, which is of course possible.
The question would be what you are trying to accomplish in the first place.
I have a program that increments a number in a variable. The variable is currently kept in memory, but I realize that Heroku cycles the worker randomly, which means my in-memory variable would be lost. So I intend to turn it into an environment variable, but I don't know how to use it in my code.
The variable in my Python code is: cum_num = 98.
How would this work if I kept cum_num exactly the same, but as a Heroku environment variable?
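Assuming you set the value with the Heroku CLI, e.g. heroku config:set CUM_NUM=98 (the name CUM_NUM is just an example), reading it in Python would look like this:

    import os

    # Config vars arrive as strings, so convert before doing arithmetic.
    cum_num = float(os.environ.get("CUM_NUM", "98"))

One caveat: config vars are effectively read-only from inside the dyno, so incrementing cum_num in your process will not persist it; writing it back would require something like the Heroku Platform API or an external store.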
Does it only protect against asynchronous updates, or does it also cause other accesses to the variable to wait for the update? I'm using the same model for training and inference at the same time and want to make sure that inference always sees a consistent model.
Passing use_locking=True when creating a TensorFlow optimizer, or a variable assignment op, causes a lock to be acquired around the relevant updates to the variable. Other optimizers/assignments on the same variable also created with use_locking=True will be serialized.
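For illustration, a minimal TF 1.x-style sketch (the names here are just examples):

    import tensorflow as tf  # TF 1.x style API (tf.compat.v1 in TF 2.x)

    v = tf.Variable(0.0)

    # Both writers request the lock, so their updates to v are serialized.
    opt = tf.train.GradientDescentOptimizer(learning_rate=0.1, use_locking=True)
    train_op = opt.minimize(v * v)  # toy loss, just to create an update op
    assign_op = v.assign(42.0, use_locking=True)

    # Note: plain reads such as sess.run(v) do not take the lock (see the
    # caveats below).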
However, there are two caveats that you should bear in mind when using this option:
Reads of the variables are not performed under the lock, so it is possible to observe intermediate states and partially applied updates. Serializing reads requires additional coordination, such as that provided by tf.train.SyncReplicasOptimizer.
Writes (optimizers/assignments) to the same variable with use_locking=False are still possible, and will not acquire the lock. The programmer is responsible for ensuring that these writes do not occur.
I am reading about threads in OS concepts and came across "thread-local storage (TLS)". What I understood is that TLS is similar to static or global data, but is unique to an individual thread. It's a bit confusing: what exactly is unique here?
Why can't we just pass the data as parameters to the runner function (i.e., the thread's actual code)?
Static and global data are shared across all the threads: if you modify a global/static variable, the change is visible to all threads. Unlike a global/shared variable, if you create a variable in TLS, every thread has its own copy of the variable, i.e. changes to the variable are local to each thread. While a global variable is accessed through the ds segment, a TLS variable is accessed through the gs/fs segment (on x86). A good way to learn about this is to look at the disassembly generated by the compiler.
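The same idea exists at higher levels too. For example, a small Python sketch using threading.local (purely illustrative):

    import threading

    tls = threading.local()   # each thread sees its own attributes on this object
    shared = []               # ordinary global: visible to every thread

    def worker(n):
        tls.value = n                    # private to this thread
        shared.append(tls.value)         # writes here are seen by all threads
        print(threading.current_thread().name, "has tls.value =", tls.value)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Each thread prints its own n, even though tls is a single global object.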
Let's suppose you are working in Ada. In your Ada program you define a task (thread) that includes a [static] variable that can only be accessed by that task. You now create multiple instances of your task, so you need a copy of that [static] variable for each task.
That's where your implementation could use Thread Local Storage. In other words, it is a static area of memory that gets copied for each thread in a program.
As an alternative to TLS, a thread could allocate such storage at the top of the stack.
We need thread-local storage to create libraries with thread-safe functions: thanks to thread-local storage, each call to a function has its own copy of the same global data, so it's safe. I'd also like to point out that the implementation is similar to the copy-on-write technique.
In a normal function with global data, the contents of that data can be updated by multiple threads, making it unreliable. With thread-local storage, you can think of it as
"global becomes local when multiple accesses happen."
I know it's generally considered best practice in JavaScript to always assign a new function to a variable, even if it's not used. But in relation to garbage collection in Node.js, is V8 able to GC functions that are not assigned to variables, or does it make no difference?
As long as all references to the function (anonymous or assigned to a variable) have been destroyed (in this case, by deallocation of the containing function), V8 should garbage-collect it.