Why is multiprocessing's Lock not blocking the object's use by other processes? - python-3.x

The following code models a shop that has five items in stock and three customers, each demanding one item.
import multiprocessing as mp

class Shop:
    def __init__(self, stock=5):
        self.stock = stock

    def get_item(self, l, x):
        l.acquire()
        if self.stock >= x:
            self.stock -= x
            print(f"{self.stock} = remaining")
        l.release()

if __name__ == "__main__":
    l = mp.Lock()
    obj = Shop()
    p1 = mp.Process(target=obj.get_item, args=(l, 1))
    p2 = mp.Process(target=obj.get_item, args=(l, 1))
    p3 = mp.Process(target=obj.get_item, args=(l, 1))
    p1.start()
    p2.start()
    p3.start()
    p1.join()
    p2.join()
    p3.join()
    print("Final: ", obj.stock)
The output that I got is as follows
4 = remaining
4 = remaining
4 = remaining
Final: 5
However, since I'm using a Lock, I was expecting it to be
4 = remaining
3 = remaining
2 = remaining
Final: 2
Question: How can I achieve the above output with just Locks (and no interprocess communication, i.e. without Pipe/Queue)?

The reason this code is not working as you expect is that multiprocessing does not share its state with child processes. This means that each of the processes you start, p1, p2 and p3, gets a copy of the object of class Shop. It is NOT the same object. There are two ways you can fix this: share only the instance attribute stock with the processes, or share the whole object itself. The second way is probably better for your larger use case if the Shop object holds other data that needs to be shared between the processes too.
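To see that copy in action, here is a minimal sketch of mine (not from the original answer, with a hypothetical Counter class): a mutation made in a child process never reaches the parent's object.

```python
import multiprocessing as mp

class Counter:
    def __init__(self):
        self.n = 0

def bump(c):
    c.n += 1  # mutates the child's private copy only
    print("child sees", c.n)

if __name__ == "__main__":
    c = Counter()
    p = mp.Process(target=bump, args=(c,))
    p.start()
    p.join()
    print("parent sees", c.n)  # still 0: the parent's object was never touched
```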
Method 1:
To share the value of only the stock instance variable, you can use multiprocessing.Value. Shared integers are created and accessed like this:
shared_int = multiprocessing.Value('i', 5)
print(f'Value is {shared_int.value}') # 5
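Side note (my addition, not part of the original answer): a Value already carries its own lock, so for a simple counter you could even skip the explicit Lock and use get_lock() instead:

```python
import multiprocessing

stock = multiprocessing.Value('i', 5)

# get_lock() returns the lock bundled with every Value,
# making the read-modify-write below atomic across processes.
with stock.get_lock():
    stock.value -= 1

print(stock.value)  # 4
```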
Adapting to your use case, the code will then become:
import multiprocessing

class Shop:
    def __init__(self, stock=5):
        self.stock = multiprocessing.Value('i', stock)

    def get_item(self, l, x):
        l.acquire()
        if self.stock.value >= x:
            self.stock.value -= x
            print(f"{self.stock.value} = remaining")
        l.release()

if __name__ == "__main__":
    l = multiprocessing.Lock()
    obj = Shop()
    p1 = multiprocessing.Process(target=obj.get_item, args=(l, 1))
    p2 = multiprocessing.Process(target=obj.get_item, args=(l, 1))
    p3 = multiprocessing.Process(target=obj.get_item, args=(l, 1))
    p1.start()
    p2.start()
    p3.start()
    p1.join()
    p2.join()
    p3.join()
    print("Final: ", obj.stock.value)
Output
4 = remaining
3 = remaining
2 = remaining
Final: 2
Method 2
Sharing the whole complex object is a more involved process. I recently answered a similar question in detail about sharing complex objects (like the object of class Shop in this case), which also covers the reasoning behind the code provided below. I recommend giving it a read, since it explains the logic in greater detail. The only major difference for this use case is that you will want to use multiprocess, a fork of multiprocessing, instead of multiprocessing itself. This library works identically to the built-in multiprocessing, except that it offers the better pickling support we will need.
Basically, you will want to use a multiprocessing manager to share the state, and a suitable proxy to access it. The ObjProxy provided in the code below is one such proxy, which shares the namespace as well as the instance methods (apart from protected/private attributes). Once you have these, you just need to create the objects of class Shop using the manager and the proxy. This is done through the newly added create method of class Shop. It is a class constructor, and all objects of Shop should be created using this method only, rather than by calling the constructor directly. Full code:
from multiprocess import Manager, Process
from multiprocess.managers import NamespaceProxy, BaseManager
import types

class ObjProxy(NamespaceProxy):
    """Returns a proxy instance for any user-defined data type. The proxy instance will have the namespace and
    functions of the data type (except private/protected callables/attributes). Furthermore, the proxy will be
    picklable and its state can be shared among different processes."""

    def __getattr__(self, name):
        result = super().__getattr__(name)
        if isinstance(result, types.MethodType):
            def wrapper(*args, **kwargs):
                return self._callmethod(name, args, kwargs)
            return wrapper
        return result

class Shop:
    def __init__(self, stock=5):
        self.stock = stock

    @classmethod
    def create(cls, *args, **kwargs):
        # Register class
        class_str = cls.__name__
        BaseManager.register(class_str, cls, ObjProxy, exposed=tuple(dir(cls)))

        # Start a manager process
        manager = BaseManager()
        manager.start()

        # Create and return this proxy instance. Using this proxy allows sharing of state between processes.
        inst = eval("manager.{}(*args, **kwargs)".format(class_str))
        return inst

    def get_item(self, l, x):
        with l:
            if self.stock >= x:
                self.stock -= x
                print(f"{self.stock} = remaining")

    def k(self, l, n):
        pass

if __name__ == "__main__":
    manager = Manager()
    l = manager.Lock()
    obj = Shop.create()
    p1 = Process(target=obj.get_item, args=(l, 1))
    p2 = Process(target=obj.get_item, args=(l, 1))
    p3 = Process(target=obj.get_item, args=(l, 1))
    p1.start()
    p2.start()
    p3.start()
    p1.join()
    p2.join()
    p3.join()
    print("Final: ", obj.stock)
Output
4 = remaining
3 = remaining
2 = remaining
Final: 2
Note: an explanation for these two lines:
manager = Manager()
l = manager.Lock()
The reason we didn't need to create a manager (and subsequently a proxy) for the lock in your original example is outlined here. The code above does not work without a proxy because we are no longer creating the worker processes in the main process, and the lock does not exist in their memory space (creating a manager for our complex object to share its state spawned its own server process).
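As a quick illustration of that point (a sketch of mine, not from the original answer): a plain multiprocessing.Lock refuses to be pickled, which is exactly what sending it to a manager-hosted object would require, while a manager's Lock hands out a picklable proxy:

```python
import multiprocessing as mp
import pickle

if __name__ == "__main__":
    plain = mp.Lock()
    try:
        pickle.dumps(plain)
    except RuntimeError as e:
        # "Lock objects should only be shared between processes through inheritance"
        print("plain Lock:", e)

    mgr = mp.Manager()
    proxy_lock = mgr.Lock()   # a proxy to a lock living in the manager's server process
    pickle.dumps(proxy_lock)  # pickles fine, so it can travel to any worker
    print("manager Lock proxy pickles fine")
```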

Related

Multiprocessing obtaining array

I want to get the result_1 and result_2 arrays with the following code:
import multiprocessing as mp
import numpy as np

result_1 = []
result_2 = []

a = np.random.rand(10, 10)
b = np.random.rand(7, 7)

def inv_1(x):
    result_1.append(np.linalg.inv(x))

def inv_2(y):
    result_2.append(np.linalg.inv(y))

if __name__ == "__main__":
    p1 = mp.Process(target=inv_1, args=(a,))
    p2 = mp.Process(target=inv_2, args=(b,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print(result_1, result_2)
However, when I run the code, I get the following output:
[] []
How can I solve this problem?
Unlike threads, you can't share arbitrary variables between processes. To do what you're trying to do, you can create shared lists using a multiprocessing.Manager object, e.g.:
import multiprocessing as mp
import numpy as np

a = np.random.rand(10, 10)
b = np.random.rand(7, 7)

def inv_1(x, target):
    target.append(np.linalg.inv(x))

def inv_2(y, target):
    target.append(np.linalg.inv(y))

if __name__ == "__main__":
    mgr = mp.Manager()
    result_1 = mgr.list()
    result_2 = mgr.list()
    p1 = mp.Process(target=inv_1, args=(a, result_1))
    p2 = mp.Process(target=inv_2, args=(b, result_2))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print('RESULT 1:', result_1)
    print('RESULT 2:', result_2)
This does what you're trying to do, although it's not clear to me why you're doing it this way: both result_1 and result_2 hold only a single value each (you're just appending one item to an empty list), so it's not clear why you need lists in the first place.
More broadly, you might want to redesign your code so that it doesn't rely on shared variables. A common solution is to use a queue to pass data from your workers back to the main process.
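For completeness, a queue-based version might look like this sketch (my rewrite, not the asker's code): each worker puts a labeled result on a multiprocessing.Queue and the main process collects them.

```python
import multiprocessing as mp
import numpy as np

def invert(name, x, q):
    # Ship the result back through the queue instead of a shared list
    q.put((name, np.linalg.inv(x)))

if __name__ == "__main__":
    q = mp.Queue()
    a = np.random.rand(10, 10)
    b = np.random.rand(7, 7)
    p1 = mp.Process(target=invert, args=("result_1", a, q))
    p2 = mp.Process(target=invert, args=("result_2", b, q))
    p1.start(); p2.start()
    # Drain the queue before join(): joining a process that still has
    # queued data can deadlock if the pipe buffer is full.
    results = dict(q.get() for _ in range(2))
    p1.join(); p2.join()
    print(results["result_1"].shape, results["result_2"].shape)
```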

How to create the attribute of a class object instance on multiprocessing in python?

I am trying to create attributes of an instance in parallel to learn more about multiprocessing.
My objective is to avoid creating the attributes sequentially, assuming that they are independent of each other. I read that multiprocessing creates its own memory space and that it is possible to establish a connection between the processes.
I think that this connection can help me share the same object among the workers, but I did not find any post showing a way to implement this. If I try to create the attributes in parallel, I'm not able to access them in the main process when the workers conclude. Can someone help me with that? What do I need to do?
Below I provide an MRE of what I'm trying to achieve using the MPIRE package. I hope it illustrates my question.
from mpire import WorkerPool
import os

class B:
    def __init__(self):
        pass

class A:
    def __init__(self):
        self.model = B()

    def do_something(self, var):
        if var == 'var1':
            self.model.var1 = var
        elif var == 'var2':
            self.model.var2 = var
        else:
            print('other var.')

    def do_something2(self, model, var):
        if var == 'var1':
            model.var1 = var
            print(f"Worker {os.getpid()} is processing do_something2({var})")
        elif var == 'var2':
            model.var2 = var
            print(f"Worker {os.getpid()} is processing do_something2({var})")
        else:
            print(f"Worker {os.getpid()} is processing do_something2({var})")

    def init_var(self):
        self.do_something('var1')
        self.do_something('test')
        print(self.model.var1)
        print(vars(self.model).keys())

        # Trying to create the attributes in parallel
        print('')
        self.model = B()
        self.__sets_list = ['var1', 'var2', 'var3']
        with WorkerPool(n_jobs=3, start_method='fork') as pool:
            model = self.model
            pool.set_shared_objects(model)
            pool.map(self.do_something2, self.__sets_list)
        print(self.model.var1)
        print(vars(self.model).keys())

def main():  # this main will be in another file that calls different classes
    obj = A()
    obj.init_var()

if __name__ == '__main__':
    main = main()
It generates the following output:
python src/test_change_object.py
other var.
var1
dict_keys(['var1'])
Worker 20040 is processing do_something2(var1)
Worker 20041 is processing do_something2(var2)
Worker 20042 is processing do_something2(var3)
Traceback (most recent call last):
  File "/mnt/c/git/bioactives/src/test_change_object.py", line 59, in <module>
    main = main()
  File "/mnt/c/git/bioactives/src/test_change_object.py", line 55, in main
    obj.init_var()
  File "/mnt/c/git/bioactives/src/test_change_object.py", line 49, in init_var
    print(self.model.var1)
AttributeError: 'B' object has no attribute 'var1'
I appreciate any help. Tkx
Would a solution without mpire work? You could achieve what you are after, i.e. sharing the state of complex objects, by using multiprocessing primitives.
TL;DR
This code works:
import os
from multiprocessing import Pool
from multiprocessing.managers import BaseManager, NamespaceProxy
import types

class ObjProxy(NamespaceProxy):
    """Returns a proxy instance for any user-defined data type. The proxy instance will have the namespace and
    functions of the data type (except private/protected callables/attributes). Furthermore, the proxy will be
    picklable and its state can be shared among different processes."""

    def __getattr__(self, name):
        result = super().__getattr__(name)
        if isinstance(result, types.MethodType):
            def wrapper(*args, **kwargs):
                return self._callmethod(name, args, kwargs)
            return wrapper
        return result

class B:
    def __init__(self):
        pass

    @classmethod
    def create(cls, *args, **kwargs):
        # Register class
        class_str = cls.__name__
        BaseManager.register(class_str, cls, ObjProxy, exposed=tuple(dir(cls)))

        # Start a manager process
        manager = BaseManager()
        manager.start()

        # Create and return this proxy instance. Using this proxy allows sharing of state between processes.
        inst = eval("manager.{}(*args, **kwargs)".format(class_str))
        return inst

class A:
    def __init__(self):
        self.model = B.create()

    def do_something(self, var):
        if var == 'var1':
            self.model.var1 = var
        elif var == 'var2':
            self.model.var2 = var
        else:
            print('other var.')

    def do_something2(self, model, var):
        if var == 'var1':
            model.var1 = var
            print(f"Worker {os.getpid()} is processing do_something2({var})")
        elif var == 'var2':
            model.var2 = var
            print(f"Worker {os.getpid()} is processing do_something2({var})")
        else:
            print(f"Worker {os.getpid()} is processing do_something2({var})")

    def init_var(self):
        self.do_something('var1')
        self.do_something('test')
        print(self.model.var1)
        print(vars(self.model).keys())

        # Trying to create the attributes in parallel
        print('')
        self.model = B.create()
        self.__sets_list = [(self.model, 'var1'), (self.model, 'var2'), (self.model, 'var3')]
        with Pool(3) as pool:
            # model = self.model
            # pool.set_shared_objects(model)
            pool.starmap(self.do_something2, self.__sets_list)
        print(self.model.var1)
        print(vars(self.model).keys())

def main():  # this main will be in another file that calls different classes
    obj = A()
    obj.init_var()

if __name__ == '__main__':
    main = main()
Longer, detailed explanation
Here is what I think is happening. Even though you are setting self.model as a shared object among your workers, the fact that you alter it within the workers forces a copy to be made (i.e., the shared objects are not writable). From the documentation for shared objects in mpire:
For the start method fork these shared objects are treated as copy-on-write, which means they are only copied once changes are made to them. Otherwise they share the same memory address
This suggests that shared objects with the fork start method are only useful when you will only be reading from the objects. The documentation also provides such a use case:
This is convenient if you want to let workers access a large dataset that wouldn’t fit in memory when copied multiple times.
Take this with a grain of salt though, since again, I have not used mpire. Hopefully someone with more experience with the library can provide further clarifications.
Anyway, moving on, you can achieve this using multiprocessing managers. Managers allow you to share complex objects (an object of class B in this context) between processes and workers. You can also use them to share nested dictionaries, lists, etc. They do this by spawning a server process, where the shared object is actually stored, allowing other processes to access the object through proxies (more on this later) and pickling/unpickling any arguments and return values passed to and from the server process. As a side note, relying on pickling/unpickling also imposes restrictions. For example, in our context, it means that any function arguments and instance variables you create for class B must be picklable.
Coming back: I mentioned that we can access the server process through proxies. Proxies are basically just wrappers which mimic the properties and functions of the original object. Most utilize specific dunder methods like __setattr__ and __getattr__; an example is given below (from here):
class Proxy(object):
    def __init__(self, original):
        self.original = original

    def __getattr__(self, attr):
        return getattr(self.original, attr)

class MyObj(object):
    def bar(self):
        print('bar')

obj = MyObj()
proxy = Proxy(obj)
proxy.bar()  # 'bar'
obj.bar()    # 'bar'
A huge plus of using proxies is that they are picklable, which is important when dealing with shared objects. Under the hood, the manager creates a proxy for you whenever you create a shared object through it. However, this default proxy (called AutoProxy) does not share the namespace of the object. That will not work for us, since we are using class B's namespace and want it to be shared as well. Therefore, we create our own proxy by inheriting from another, undocumented proxy provided by multiprocessing: NamespaceProxy. As the name suggests, this one does share the namespace, but conversely does not share instance methods. This is why we build our own proxy, which is the best of both worlds:
from multiprocessing.managers import NamespaceProxy
import types

class ObjProxy(NamespaceProxy):
    """Returns a proxy instance for any user-defined data type. The proxy instance will have the namespace and
    functions of the data type (except private/protected callables/attributes). Furthermore, the proxy will be
    picklable and its state can be shared among different processes."""

    def __getattr__(self, name):
        result = super().__getattr__(name)
        if isinstance(result, types.MethodType):
            def wrapper(*args, **kwargs):
                return self._callmethod(name, args, kwargs)
            return wrapper
        return result
More info on why this works. Keep in mind that these proxies do not share private or protected attributes/functions (check this question).
After we have achieved this, the rest is just some boilerplate-ish code which uses this proxy by default to create shareable complex objects for particular data types. In our context, this means that the code for class B becomes:
from multiprocessing import Pool
from multiprocessing.managers import BaseManager

class B:
    def __init__(self):
        pass

    @classmethod
    def create(cls, *args, **kwargs):
        # Register class
        class_str = cls.__name__
        BaseManager.register(class_str, cls, ObjProxy, exposed=tuple(dir(cls)))

        # Start a manager process
        manager = BaseManager()
        manager.start()

        # Create and return this proxy instance. Using this proxy allows sharing of state between processes.
        inst = eval("manager.{}(*args, **kwargs)".format(class_str))
        return inst
In the above code, the create function is a general class constructor which automatically uses our new proxy and manager to share the object. It can be used for any class, not only B. The only thing left is to switch from the mpire pool to a multiprocessing pool in init_var. Note how we use B.create() instead of simply B() to create objects of class B!
def init_var(self):
    self.do_something('var1')
    self.do_something('test')
    print(self.model.var1)
    print(vars(self.model).keys())

    # Trying to create the attributes in parallel
    print('')
    self.model = B.create()
    self.__sets_list = [(self.model, 'var1'), (self.model, 'var2'), (self.model, 'var3')]
    with Pool(3) as pool:
        # model = self.model
        # pool.set_shared_objects(model)
        pool.starmap(self.do_something2, self.__sets_list)
    print(self.model.var1)
    print(vars(self.model).keys())
Note: I have only tested this on Windows, where multiprocessing uses the "spawn" method rather than "fork" to start processes. More information here.
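If you want to check or pin down the start method yourself (my addition; handy when reproducing behaviour across Windows and Linux), multiprocessing exposes it through contexts:

```python
import multiprocessing as mp

# The available start methods differ per platform; "spawn" (the Windows
# default) exists everywhere and can be selected explicitly via a context.
print(mp.get_all_start_methods())  # e.g. ['fork', 'spawn', 'forkserver'] on Linux
ctx = mp.get_context("spawn")      # objects created from ctx will use "spawn"
print(ctx.get_start_method())      # 'spawn'
```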

Why serial code is faster than concurrent.futures in this case?

I am using the following code to process some pictures for my ML project, and I would like to parallelize it.
import multiprocessing as mp
import concurrent.futures

def track_ids(seq):
    '''The func is so big I can not put it here'''
    ood = {}
    for i in seq:
        # I load around 500 images and process them
        ood[i] = some Value
    return ood

seqs = []
for seq in range(1, 10):  # len(seqs)+1):
    seq = txt + str(seq)
    seqs.append(seq)
    # serial call of the function
    track_ids(seq)

# parallel call of the function
with concurrent.futures.ProcessPoolExecutor(max_workers=mp.cpu_count()) as ex:
    ood_id = ex.map(track_ids, seqs)
If I run the code serially it takes 3.0 minutes, but in parallel with concurrent.futures it takes 3.5 minutes.
Can someone please explain why that is, and present a way to solve the problem?
By the way, I have 12 cores.
Thanks
Here's a brief example of how one might go about profiling multiprocessing code vs serial execution:
import multiprocessing as mp
from cProfile import Profile
from pstats import Stats
import concurrent.futures

def track_ids(seq):
    '''The func is so big I can not put it here'''
    ood = {}
    for i in seq:
        # I load around 500 images and process them
        ood[i] = some Value
    return ood

def profile_seq():
    p = Profile()  # one and only profiler instance
    p.enable()
    seqs = []
    for seq in range(1, 10):  # len(seqs)+1):
        seq = txt + str(seq)
        seqs.append(seq)
        # serial call of the function
        track_ids(seq)
    p.disable()
    return Stats(p), seqs

def track_ids_pr(seq):
    p = Profile()  # profile the child tasks
    p.enable()
    retval = track_ids(seq)
    p.disable()
    return (Stats(p, stream="dummy"), retval)

def profile_parallel():
    p = Profile()  # profile stuff in the main process
    p.enable()
    with concurrent.futures.ProcessPoolExecutor(max_workers=mp.cpu_count()) as ex:
        retvals = ex.map(track_ids_pr, seqs)  # `seqs` and `txt` come from the question's code
    p.disable()
    s = Stats(p)
    out = []
    for ret in retvals:
        s.add(ret[0])
        out.append(ret[1])
    return s, out

if __name__ == "__main__":
    stat, retval = profile_parallel()
    stat.print_stats()
EDIT: Unfortunately I found out that pstats.Stats objects cannot be used directly with multiprocessing.Queue because they are not picklable (which is needed for concurrent.futures to work). Evidently a Stats object normally stores a reference to a file for the purpose of writing statistics to that file, and if none is given, it grabs a reference to sys.stdout by default. We don't actually need that reference until we want to print the statistics, so we can give it a temporary picklable value to prevent the pickle error and restore an appropriate value later. The following example should be copy-paste-able and run just fine, unlike the pseudocode-ish example above.
from multiprocessing import Queue, Process
from cProfile import Profile
from pstats import Stats
import sys

def isprime(x):
    for d in range(2, int(x**.5) + 1):  # + 1 so perfect squares are tested against their root
        if x % d == 0:
            return False
    return True

def foo(retq):
    p = Profile()
    p.enable()
    primes = []
    max_n = 2**20
    for n in range(3, max_n):
        if isprime(n):
            primes.append(n)
    p.disable()
    retq.put(Stats(p, stream="dummy"))  # Dirty hack: set `stream` to something picklable, then override later

if __name__ == "__main__":
    q = Queue()
    p1 = Process(target=foo, args=(q,))
    p1.start()
    p2 = Process(target=foo, args=(q,))
    p2.start()
    s1 = q.get()
    s1.stream = sys.stdout  # restore original file
    s2 = q.get()
    # s2.stream  # if we are just adding this `Stats` object to another, the `stream` gets thrown away anyway
    s1.add(s2)  # add up the stats from both child processes
    s1.print_stats()  # s1.stream gets used here, but not before. If you provide a file instead of sys.stdout, it will write to that file.
    p1.join()
    p2.join()
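The stream trick can be seen in isolation, without spawning any processes (a sketch of mine, not from the original answer): pickling fails with the default sys.stdout stream and succeeds once a placeholder string is substituted.

```python
from cProfile import Profile
from pstats import Stats
import pickle
import sys

p = Profile()
p.enable()
sum(range(1000))  # something trivial to profile
p.disable()

try:
    pickle.dumps(Stats(p))       # default stream is sys.stdout: not picklable
except TypeError as e:
    print("default stream:", e)

s = Stats(p, stream="dummy")     # any picklable placeholder will do
restored = pickle.loads(pickle.dumps(s))
restored.stream = sys.stdout     # put a real file back before printing
print("round-trip OK")
```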

python apply_async does not call method

I have a method which needs to process a large database, which would take hours/days to dig through.
The arguments are stored in a (long) list, of which at most X should be processed in one batch. The method does not need to return anything, yet I return "True" for "fun"...
The function works perfectly when I iterate through it linearly (generating/appending the results in other tables not shown here), yet I am unable to get apply_async or map_async to work (it worked before in other projects).
Any hint about what I might be doing wrong would be appreciated, thanks in advance!
See code below:
import multiprocessing as mp

class mainClass:
    ...  # loads of stuff

def main():
    multiprocess = True
    batchSize = 35
    mC = mainClass()
    while True:
        # the tasks are stored in a dictionary; I refer to them by their keys, which I turn into a list here for iteration
        toCheck = [key for key, value in mC.lCheckSet.items()]
        if multiprocess == False:
            # this version works perfectly fine
            for i in toCheck[:batchSize]:
                mC.check(i)
        else:
            # the async version does not, either with apply_async...
            with mp.Pool(processes=8) as pool:
                temp = [pool.apply_async(mC.check, args=(toCheck[n],)) for n in range(len(toCheck[:batchSize]))]
                results = [t.get() for t in temp]
            # ...or as map_async
            pool = mp.Pool(processes=8)
            temp = pool.map_async(mC.check, toCheck[:batchSize])
            pool.close()
            pool.join()

if __name__ == "__main__":
    main()
The "smell" here is that you are instantiating your mainClass in the main process, just once, and then trying to call a method on it in the different processes. Note that when you pass mC.check to your process pool, it is a method already bound to the instance created in the main process.
I'd guess that is where your problem lies. Although that could possibly work (and it does), I made this simplified version and it works as intended:
import multiprocessing as mp
import random, time

class MainClass:
    def __init__(self):
        self.value = 1

    def check(self, arg):
        time.sleep(random.uniform(0.01, 0.3))
        print(id(self), self.value, arg)

def main():
    mc = MainClass()
    with mp.Pool(processes=4) as pool:
        temp = [pool.apply_async(mc.check, (i,)) for i in range(8)]
        results = [t.get() for t in temp]

main()
(Have you tried just adding some prints to make sure the method is not running at all?)
So, the problem likely lies in some complex state in your mainClass that does not make it to the parallel processes in a good way. A possible workaround is to instantiate your main class inside each process. That is easy to do, since multiprocessing lets you get the current_process, and you can use this object as a namespace to keep data in the worker process across different calls to apply_async.
So, create a new check function like the one below, and instead of instantiating your main class in the main process, instantiate it inside each process in the pool:
import multiprocessing as mp
import random, time

def check(arg):
    process = mp.current_process()
    if not hasattr(process, "main_class"):
        process.main_class = MainClass()
    process.main_class.check(arg)

class MainClass:
    def __init__(self):
        self.value = random.randrange(100)

    def check(self, arg):
        time.sleep(random.uniform(0.01, 0.3))
        print(id(self), self.value, arg)

def main():
    with mp.Pool(processes=2) as pool:
        temp = [pool.apply_async(check, (i,)) for i in range(8)]
        results = [t.get() for t in temp]

main()
I got to this question with the same problem: my apply_async calls were not being executed at all. In my case, the reason was that the number of arguments in the apply_async call differed from the number in the function declaration.
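That silent failure mode is easy to reproduce: an exception raised in the worker (here a TypeError from the wrong argument count) is stored in the AsyncResult and only surfaces when you call .get(). A sketch of mine, with a hypothetical takes_two function:

```python
import multiprocessing as mp

def takes_two(a, b):
    return a + b

if __name__ == "__main__":
    with mp.Pool(2) as pool:
        res = pool.apply_async(takes_two, (1,))  # arity mismatch: nothing blows up yet
        try:
            res.get(timeout=10)                  # the worker's TypeError surfaces here
        except TypeError as e:
            print("surfaced:", e)
```

This is why calling .get() on each result (or passing an error_callback) is worth doing even when you don't need the return values.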

QRunnable in multiple cores

I am learning about QRunnable and I have the following code:
from PyQt5.QtCore import QThreadPool, QRunnable

class SomeObjectToDoComplicatedStuff(QRunnable):
    def __init__(self, name):
        QRunnable.__init__(self)
        self.name = name

    def run(self):
        print('running', self.name)
        a = 10
        b = 30
        c = 0
        for i in range(5000000):
            c += a**b
        print('done', self.name)

pool = QThreadPool.globalInstance()
pool.setMaxThreadCount(10)

batch_size = 100
workers = [None] * batch_size
for i in range(batch_size):
    worker = SomeObjectToDoComplicatedStuff('object ' + str(i))
    workers[i] = worker
    pool.start(worker)
print('All queued')
pool.waitForDone()

# processing the results back
for i in range(batch_size):
    print(workers[i].name, ' - examining again.')
I can see that the tasks are indeed being alternated, but everything happens on a single core.
How can I make this code run on all the processor cores?
PS: This code is just a simplification of a super-complicated number-crunching application I am making. In it, I want to run Monte Carlo simulations in several threads, where the worker itself is a complex optimization problem.
I have tried the Python multiprocessing module, but it doesn't handle scipy too well.
Not sure how much use this will be, but a multiprocessing version of your example script would be something like this:
from multiprocessing import Pool

class Worker(object):
    def __init__(self, name):
        self.name = name

    def run(self):
        print('running', self.name)
        a = 10
        b = 30
        c = 0
        for i in range(5000000):
            c += a**b
        print('done', self.name)
        return self.name, c

def caller(worker):
    return worker.run()

def run():
    pool = Pool()
    batch_size = 10
    workers = (Worker('object%d' % i) for i in range(batch_size))
    result = pool.map(caller, workers)
    for item in result:
        print('%s = %s' % item)

if __name__ == '__main__':
    run()
How can I make this code run using all the processor cores?
Using PyQt (QRunnable/QThread and the like), I think it's almost impossible, because the Python bindings (not the underlying C++ library) are subject to the GIL.
The easiest solution would be to use multiprocessing, but since you have problems using it together with scipy, you should look for a non-standard library.
I suggest you take a look at ipyparallel; AFAIK they're developed under the same umbrella, so they're likely to work seamlessly.
