Suppose we have program1, inside which we create an object.
Now we have one object.
Can a second program capture this object without instantiating it again?
Assume that program1 is still running and the object is alive.
We are actually using Python Celery, which is always running, and the object I created inside the code stays alive. My aim is to capture that live object from another program.
Is this possible?
Short answer: kind of.
Long answer: you can't ever get "live objects" in a distributed environment unless you are using something like CORBA. Instead, any time you use Celery, you are serializing objects onto and deserializing objects from a broker. Similarly, you can serialize return values into the results backend. So you can have Celery serialize Python objects by using the pickle serializer. But under no circumstances will you get a live object; what you get is your own copy of the object that Celery was working on.
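For example, here is a minimal sketch of configuring Celery's pickle serializer (the broker/backend URLs and task name are placeholders) so that task arguments and results round-trip as Python objects; note that the worker still only ever sees its own deserialized copy:

from celery import Celery

# broker/backend URLs are placeholders for whatever you actually use
app = Celery('tasks',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')

# let Celery pickle arbitrary Python objects across the broker
app.conf.update(
    task_serializer='pickle',
    result_serializer='pickle',
    accept_content=['pickle'],
)

@app.task
def work_on(obj):
    # 'obj' here is a copy reconstructed from the pickled bytes,
    # not the live object from the producing program
    return obj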
I am currently experimenting with building a debugging interface for a MicroPython implementation running on a NIOS2 processor. It is a very rudimentary way of doing it, only sending and receiving register write or read commands to/from the device over ethernet.
However, I had the bright idea that it would be nice if, with a decorator, I could mark a whole function to be executed on the device, by sending the entire source code to the device and then running it with exec().
This works just fine.
But I ran into a problem with function arguments. If an argument is an already-created object, how do I get this data to the device? Pickle does not exist in the version of MicroPython I'm stuck using.
I've found that sending the __dict__ information is the way to go. I am stuck, though, on how to create an object in MicroPython without running its __init__() method.
That is, the object must not be initialized twice, since that would corrupt it.
Is this possible to do?
Regards
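For what it's worth, the usual CPython trick for this is to allocate the instance with __new__ (which skips __init__) and then restore the captured __dict__. Whether this works on a given MicroPython port depends on it supporting __new__ and setattr-style attribute restoration, so treat it as a sketch:

def rebuild(cls, state):
    # allocate an instance without running cls.__init__
    obj = cls.__new__(cls)
    # restore the attribute values captured from the original object
    for name, value in state.items():
        setattr(obj, name, value)
    return obj

# usage: state = original.__dict__, sent over ethernet
# copy = rebuild(MyClass, state)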
I have a service which holds a JavaScript object, let's say obj. Initially, when the server starts, the object is empty.
I receive pub/sub messages. Each message has two attributes: the type of the message and the data. Based on the type of message, I modify the object.
For example, if I receive a message msg-0 of type start with some data abc-0, I add the data to my main object, and obj becomes {abc-0}.
Similarly, if I receive another message msg-1 of type start with some data abc-1, I add the data, and obj becomes {abc-0, abc-1}.
So imagine I receive eleven such messages; then obj should look like {abc-0, abc-1, abc-2, abc-3, abc-4, abc-5, abc-6, abc-7, abc-8, abc-9, abc-10}.
If I run just a single copy of this program, everything works fine.
But when I run 3 copies of this program, what I end up with is different. It creates 3 different objects, and each object holds {abc-0, abc-1, abc-2, abc-3}, {abc-4, abc-5, abc-6} and {abc-7, abc-8, abc-9, abc-10} respectively.
I'm running 3 pods on Kubernetes, and this is where the problem became evident. When I run just one pod, the problem goes away.
What could I be doing wrong? Is this some sort of common error?
The issue here is that separate NodeJS instances do not share memory; objects declared in one are completely separate from objects in the others. To get this working correctly, the processes need to communicate with each other. I'm not entirely familiar with Kubernetes, but I'm assuming a pod is the equivalent of a process on a Linux system. Please correct me if I'm wrong, but if that is the case, then it is the source of the issue.
I would approach this by requiring a timestamp on each chunk of incoming data and keeping a complete copy of the object in each process. When a new chunk comes in, place it in the correct position based on its timestamp, alert all other processes of the update, and optionally share the new chunk or the complete object.
If the different processes (pods) run synchronously (one after another), then you'll need to hand the next process the object's state at that point in time, so it can continue building the object as time goes on.
TL;DR: This issue comes from NodeJS processes not sharing memory. To fix it, if the processes run concurrently, keep a running copy of the object in all instances and alert the others whenever new information is received. If they run sequentially, inform the next process of the object's state when it is instantiated.
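To make the timestamp idea concrete, here is a minimal sketch (in Python for brevity; the names are illustrative and the Node version is analogous):

import bisect

class ReplicatedObject:
    # each process keeps its own complete, timestamp-ordered copy
    def __init__(self):
        self.chunks = []  # sorted list of (timestamp, data) pairs

    def add_chunk(self, timestamp, data):
        # place the chunk in the correct position based on its timestamp;
        # after this, broadcast (timestamp, data) to the other processes
        bisect.insort(self.chunks, (timestamp, data))

    def receive_broadcast(self, timestamp, data):
        # apply a chunk announced by another process, skipping duplicates
        if (timestamp, data) not in self.chunks:
            bisect.insort(self.chunks, (timestamp, data))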
Hope I understood the question correctly, let me know if not.
I need to be able to grab objects from Core Data and keep them in a mutable array in memory, to avoid constant fetching and slow UI/UX. The problem is that I grab the objects on other threads. I also write to these objects at times on other threads. Because of this I can't just save the NSManagedObjects in an array and call something like myManagedObjectContext.performBlock or myObject.managedObjectContext.performBlock, since you are not supposed to pass MOCs between threads.
I was thinking of using a custom object to copy the data I need from the Core Data objects into. This feels a little stupid, since I already made a Model/NSManagedObject class for the entities, and since the custom object would be mutable it still would not be thread-safe. Does this mean I would have to use something like a serial queue for object manipulation across threads, so that any time I want to read/write/delete an object I have to push that work onto my serialQueue?
This all seems really nasty, so I am wondering: are there any common design patterns for this problem or something similar? Is there a better way of doing this?
I doubt you need custom objects between Core Data and your UI. There is a better answer:
Your UI should read from the managed objects that are associated with the main thread (which it sounds like you are doing).
When you make changes on another thread those changes will update the objects that are on your main thread. That is what Core Data is designed to do.
You just need to listen to those changes and have your UI react to them.
There are several ways to do this:
NSFetchedResultsController. Kind of like your mutable array, but it has a delegate that it notifies when objects change. Highly recommended.
Listen for KVO changes on the property you are displaying in your UI. Whenever the property changes you get a KVO notification and can react to it. More code, but also more narrowly focused.
Listen for NSManagedObjectContextDidSaveNotification events via NSNotificationCenter and react to the notification. The objects that changed will be in the userInfo of the notification.
Of the three, using an NSFetchedResultsController is usually the right answer. With that in place, you just change what you need to change on other threads and save the context, and you are done. The UI will update itself.
One pattern is to pass along only the object IDs (NSManagedObjectID objects are immutable and thus thread-safe) and fetch by those IDs on the main thread. This way every NSManagedObject will belong to the appropriate thread.
Alternatively, you can use mergeChangesFromContextDidSaveNotification, which will update the objects on the main thread with the changes made on the secondary thread. You'd still need to fetch for new objects, though.
The "caveat" is that you need to save the secondary context in order to get your hands on such a notification. Also, any newly created but not yet saved objects on the main thread will be lost after applying the merge; however, this might not pose a problem if your main thread only consumes Core Data objects.
In my Play application I have several Jobs, and I have a singleton class.
What I would like is for each Job to store data in the singleton class, and to be able to retrieve from the singleton class, via yet another class, the data that corresponds to the currently executing Job.
In other words I would like to have something like this:
-Job 1 stores "Job1Data" in the singleton class
-Job 2 stores "Job2Data" in the singleton class
-Another class asks the singleton class for the data of the currently executing Job (in the current thread, I guess) and uses it
To implement this, I assumed each Job is run on a different thread. So the data each Job stores in the singleton class goes into a Map that maps the current thread ID to the data.
However, I'm not sure this is the way I should do it, because it may not be thread-safe (although Hashtable is said to be thread-safe), and maybe a new thread is created each time a Job is executed, which would make my Map grow a lot and never clear itself.
I thought of another way to do what I want. Maybe I could use the ThreadLocal class in my singleton, to be sure it's thread-safe and that I store thread-specific data. However, I don't know if it will work well if a new thread is used each time a Job executes. Furthermore, I read somewhere that ThreadLocal creates memory leaks if the data is not removed, and the problem is that I don't know when I can remove the data.
So, does anybody have a solution for my issue? I would like to be sure that data stored during a Job's execution goes into a global class and can be accessed by another class (with access to the data of the correct Job, and thus the correct thread, I guess).
Thank you for your help
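For what it's worth, the ThreadLocal idea described above looks roughly like this, sketched in Python's threading.local (which mirrors Java's ThreadLocal); clearing the value when the Job finishes is what avoids the leak:

import threading

class JobDataStore:
    # the singleton; the threading.local instance is shared,
    # but each thread sees only its own 'data' attribute
    _local = threading.local()

    def put(self, data):
        self._local.data = data

    def get(self):
        return getattr(self._local, 'data', None)

    def clear(self):
        # call at the end of each Job to avoid leaking per-thread data
        try:
            del self._local.data
        except AttributeError:
            pass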
This seems like an easy task; I just don't know where to start with OmniThreadLibrary:
I create a Task that does some processing in the background. The results are stored in fields of the task class and are continuously filled with new values.
Now the main thread wants to read these fields and display their values from time to time.
Therefore it needs to access these fields and make sure that they are not being written to at those moments (Synchronize).
How can this be done with OmniThreadLibrary?
There's no direct support for owner/thread data sharing in the OTL, because all my multithreaded experience tells me that it is always a bad thing to do. (Agreed, sometimes it is the only solution, but it's still a bad thing.)
You should go with mghie's second suggestion: create an (optionally interface-based) object and pass this object (or its interface) to the thread. Something like this:
// owner: create the shared object and hand it to the task
sharedData := TSharedData.Create;
task := CreateTask(worker).SetParameter('shared', sharedData).Run;

// worker: retrieve the shared object from the task parameters
sharedData := Task.Param['shared'].AsObject as TSharedData;
Another way to solve the problem would be to send a 'please send update' message to the task whenever the user presses the UpdateNow button. The task would then respond with an object containing the current state. However, if the task performs a lengthy uninterruptible calculation, this solution is not really appropriate, and the shared-state approach works better.
Check out OTL test 23, which implements a background file search. The SetParameter() method is used to set the search properties, and the Comm channel is used to transfer results back to the main thread. The communication is already thread-safe; you need not implement any further synchronization.
Edit:
If you don't want a push model but a pull model, then you can of course use standard synchronization tools: an object with a critical section that is used in all accessors to protect the data from concurrent access. This object could be the task object itself, or a third object that is created by the GUI thread and passed to the task by (again) calling SetParameter(). If you use an interface pointer rather than an object, you get extra safety: the order of destruction no longer matters, because the object holding the data is destroyed only when the last reference to the interface it implements is released.
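The pull-model pattern itself is not OTL-specific; here is a minimal sketch in Python, with a lock playing the role of the critical section (a Delphi version would wrap the same accessors in a TCriticalSection):

import threading

class SharedData:
    # all reads and writes go through lock-protected accessors
    def __init__(self):
        self._lock = threading.Lock()
        self._progress = 0

    def set_progress(self, value):
        # the background task writes under the lock
        with self._lock:
            self._progress = value

    def get_progress(self):
        # the GUI thread reads under the same lock
        with self._lock:
            return self._progress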