How to share readonly VkDeviceMemory for VkImages across VkInstances?

In my Vulkan app, I use some VkImages to render something. The app may be started multiple times at the same time, so there may be multiple instances of the program, each with its own VkInstance.
What I want to do is share the readonly VkImages across VkInstances, so that I can greatly reduce the GPU memory footprint and improve cache performance. I want to know the right way to do it.
Since a VkImage is backed by VkDeviceMemory, and Vulkan only supports sharing VkDeviceMemory via the external memory extensions, I think I could share the memory objects as follows:
First, I start a central program S as a "memory pool" that manages the shareable memory objects.
When a VkImage is used by app instance A, A asks S for a memory object, passing the VkImage's custom ID.
In S, if the ID is requested for the first time, S allocates the VkDeviceMemory and exports a shareable handle back to A.
In A, a VkDeviceMemory is created by importing the handle and bound to the VkImage object.
Next, A initializes the content of the VkImage (which has the RGBA8 format, by the way) using a compute shader. Then a CPU-side sync is performed, making sure the memory is initialized.
After some time, another app instance B also requests the VkImage with the same ID. S then returns the previously created shareable handle to B, while telling B that the VkImage has already been initialized.
In B, a VkImage is created and the VkDeviceMemory object is imported, but the VkImage's initialization is skipped.
In both A and B, the VkImages are used as normal textures to render something.
Note that when rendering, neither A nor B performs any synchronization operations on the shared memory.
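The allocate-or-reuse bookkeeping in S can be sketched like this. This is a Python stand-in: the handle here is a placeholder integer, whereas the real handle would come from the external-memory export path (e.g. a POSIX fd or Win32 handle), which is omitted.

```python
import itertools

class MemoryPool:
    """Sketch of the central service S. The 'handle' is a placeholder
    integer; the real app would export an OS handle for the allocated
    VkDeviceMemory via the external memory extensions."""

    def __init__(self):
        self._next = itertools.count(1)
        self._by_id = {}  # image ID -> exported handle

    def request(self, image_id):
        """Return (handle, needs_init). needs_init is True only for the
        first requester, which must initialize the image contents."""
        if image_id not in self._by_id:
            self._by_id[image_id] = next(self._next)   # allocate + export
            return self._by_id[image_id], True
        return self._by_id[image_id], False

pool = MemoryPool()
h_a, init_a = pool.request("grass_albedo")   # process A: must initialize
h_b, init_b = pool.request("grass_albedo")   # process B: skips init
```

Both processes receive the same handle, and only the first is told to run the init compute pass, matching the protocol described above.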
When I test the app on several NVIDIA cards, everything works and no visible rendering errors are observed.
However, it goes wrong when I test the app on AMD cards. A's rendering seems fine, but in B the content of the image is incorrectly arranged. The image appears to be divided into blocks that are each rotated separately:
[Screenshot: Process A (renders correctly)]
[Screenshot: Process B (blocks scrambled)]
So here are my questions:
Why does this happen? Is it due to the lack of synchronization between A, B and S? If so, how exactly do I make it right?
Why are there no visual errors at all on NVIDIA cards?

Related

vulkan barriers and multi-threading

I want to share my thoughts about how to keep memory barriers in sync in multi-threaded rendering. Please let me know if my thoughts about Vulkan memory barriers are wrong, or if my current plan makes any sense. I don't have anyone at work to discuss this with, so I'll ask here for help.
For resources in Vulkan, when I set memory barriers for them between draw calls, I need to set both srcAccessMask and dstAccessMask. This is simple for single-threaded rendering, but for multi-threaded rendering it gets complicated. dstAccessMask is not a problem, since we always know what the resource is about to be used for. But for srcAccessMask, when one command buffer tries to read the current access mask of some resource, other command buffers might be changing it to something else. So my current thoughts on solving it are:
Each resource keeps its own state. I only update that state right before submitting command buffers to the queue, as I'll describe below. Each command buffer maintains a tracking record of how each resource's state changed inside it. This way, within a single command buffer the access state of each resource is clear; the only problem is the beginning state of each resource for each command buffer.
When submitting multiple command buffers, since the order of the command buffers is now fixed, I check the tracking record of each resource across all command buffers, update the resource's state based on its end state in each command buffer, and use that to correct the beginning state of the same resource in each command buffer's tracking record.
Then I either insert a new command buffer with an extra memory barrier to transition the resource to the correct state for the first command buffer, or insert memory barriers into the previous command buffer for the rest. When all this is done, I can finally submit the command buffers together as a batch.
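The fix-up pass described in these steps can be sketched as follows. This is an illustrative model, not Vulkan API: access states are plain strings, and each command buffer's tracking record is reduced to a (first use, last use) pair per resource.

```python
def plan_transitions(initial_states, command_buffers):
    """Walk the command buffers in submission order, carrying each
    resource's current state forward. Whenever a buffer's first use of a
    resource disagrees with the carried state, record the barrier that
    must be injected before that buffer. Returns one barrier list per
    command buffer: (resource, srcAccess, dstAccess) tuples."""
    state = dict(initial_states)          # resource -> current access state
    barriers = []
    for cb in command_buffers:
        needed = []
        for res, (first, last) in cb.items():
            src = state.get(res)
            if src is not None and src != first:
                needed.append((res, src, first))   # transition src -> first
            state[res] = last                      # end state after this CB
        barriers.append(needed)
    return barriers

# Two command buffers touching the same texture: upload, then sample.
cbs = [{"tex": ("TRANSFER_WRITE", "TRANSFER_WRITE")},
       {"tex": ("SHADER_READ", "SHADER_READ")}]
plans = plan_transitions({}, cbs)
```

Here the first buffer needs no fix-up, while the second needs one injected TRANSFER_WRITE-to-SHADER_READ barrier, exactly the correction the batching step computes.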
Do these make sense to you? Are there better solutions to solve it? Or do we even need to solve the "synchronization" issue of access state for each resource?
Thank you for your time
What you're talking about only makes sense in a world where none of these rendering operations have even the slightest idea what's going on elsewhere. Where the consumer of an image has no idea how the data in the image got there. Which probably means that it doesn't really know what that image means conceptually.
Vulkan is a low-level API. The idea is that you can connect the high-level concepts of your rendering system directly to Vulkan. So at a high level, you know that resource X has meaning Y and in this frame will have its data generated from operation Z. Not because of something stored in resource X but because it is resource X; that's what resource X is for. So both the operation generating it and the operation consuming it know what's going on and how it got there.
For example, if you're doing deferred rendering and SSAO, then your SSAO renderpass knows that the texture containing the depth buffer had its values generated by rendering. The depth buffer doesn't need something stored in it to say that; that's simply the nature of your rendering. It's hard-coded to work that way.
Most of your resource dependencies are (or ought to be) that way.
If you're doing some render-to-texture operation via the framebuffer, then the consumer probably doesn't even need to know about the dependency. You can just set an appropriate external dependency for the renderpass and the subpass that generates it. And you probably know why you did the render-to-texture op, and you probably know where it's going. If you're doing RTT for reflection, you know that the destination will be some kind of shader stage texture fetch. And if you don't know how it's going to be used, then you can just be safe and set all of the destination stage bits.
What you're talking about makes some degree of sense if you're dealing with streamed objects, where objects are popping into and out of memory with some regularity. But even then, that's not really a property of each individual resource.
When you load a streamed chunk, you upload its data by generating command buffer(s) and submitting them. And here's where we have an implementation-specific divergence. Your best bet for performance is to execute these CBs on a queue dedicated for transfer operations. But since Vulkan doesn't guarantee all implementations have those, you need to be able to deliver those transfer CBs to the main rendering queue.
So you need a way to communicate to rendering threads when they can expect to start being able to use the resources. But even that doesn't need to be on a per-resource basis; they can be told "stuff from block X is available", and then they can start using it.
Furthermore, that implementation divergence becomes important. See, if it's done on another queue, a barrier isn't the right synchronization primitive. Your rendering CBs now have to have their submitted batches wait on a semaphore. And that semaphore should handle all of the synchronization needs of the memory (ie: the destination bits being everything). So in the implementation where the transfer CBs are executed on the same queue as your rendering CBs, you may as well save yourself some trouble and issue a single barrier at the end of the transfer CB that makes all of the given resources available to all stages.
So as previously stated, this kind of automated system is only useful if you have no real control over the structure of rendering. This would principally be true if you're writing some kind of middleware, where the higher-level code defines the structure of rendering. However, if that's the case, Vulkan probably isn't the right tool for that job.

OpenGL loading resource in separated thread

I am new to multithreaded OpenGL. I don't want to use a shared context in a separate thread. I found that we can map memory to do asynchronous resource loading.
However, I need to tell glBufferData or glTexImage2D to reserve the exact memory size for me. For BMP, we have that information in the header. But to know the number of vertices in an .obj file, we need to iterate through the whole file... How do commercial game engines do it? Do they design their own formats?
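A custom format with a fixed-size header is indeed the usual answer to this question. As a hedged sketch (the field layout is entirely made up for illustration), putting the counts up front lets the loader size its GPU buffers before reading any vertex data:

```python
import struct

# Hypothetical custom mesh container: a fixed-size header at the front of
# the file stores the counts, so the loader can reserve the exact buffer
# size (the role of glBufferData's size argument) before reading the body.
MAGIC = b"MESH"
HEADER = struct.Struct("<4sII")          # magic, vertex count, index count

def pack_mesh(vertices, indices):
    """vertices: flat list of floats (x, y, z per vertex)."""
    header = HEADER.pack(MAGIC, len(vertices) // 3, len(indices))
    body = struct.pack(f"<{len(vertices)}f", *vertices)
    body += struct.pack(f"<{len(indices)}I", *indices)
    return header + body

def read_counts(blob):
    """Read only the header; no need to scan the whole file."""
    magic, nverts, nidx = HEADER.unpack_from(blob)
    assert magic == MAGIC
    return nverts, nidx

blob = pack_mesh([0.0] * 9, [0, 1, 2])   # one triangle
counts = read_counts(blob)
```

Engines typically bake .obj and friends into a format like this offline, so runtime loading never has to parse text.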
A way to load resources asynchronously that worked for me is to push the inevitable read-from-file operations into dedicated threads. It can easily be combined with lazy resource management like so:
Set up your scene but don't immediately load your resources from file. Instead, flag them as uninitialized and store the file paths for later use.
Start rendering your scene.
As soon as you encounter a resource flagged as uninitialized, start a thread that asynchronously loads the data from file into a CPU buffer, and store meta info (e.g. width and height for textures, number of vertices for meshes, etc.) as well. For now, replace the requested resource with some default, or simply "do nothing" (assuming your renderer can handle "empty" resources).
In subsequent rendering steps and requests for the resource, check if the associated loading thread has finished and, if so, create the necessary GPU buffers, upload the data and flag the resource as initialized. The stored metadata helps you to figure out the necessary size of the resulting buffers. You can now use the resource as intended.
Handling resources this way avoids sharing OpenGL contexts between multiple threads since you only handle the (presumably very heavy) CPU-bound load-from-file operations asynchronously. Of course, you will have to cope with mutual exclusion to safely check whether or not a loading thread has finished. Furthermore, you might consider defining and maintaining an upload budget for limiting the amount of data transferred from CPU to GPU per frame in order to avoid framerate drops.
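The scheme above can be sketched like so. This is a minimal stand-in: load_file represents the real disk I/O, and the GPU upload in bind() is only marked by a comment, since it must happen on the thread that owns the GL context.

```python
import threading

class LazyTexture:
    """Sketch of the lazy scheme above: the file read runs on a worker
    thread; the (hypothetical) GPU upload runs on the render thread once
    the data is ready. load_file is a stand-in for real disk I/O."""

    def __init__(self, path, load_file):
        self.path = path
        self.data = None          # CPU-side buffer filled by the worker
        self.uploaded = False
        self._done = threading.Event()
        threading.Thread(target=self._load, args=(load_file,),
                         daemon=True).start()

    def _load(self, load_file):
        self.data = load_file(self.path)   # heavy work, off the render thread
        self._done.set()

    def bind(self):
        """Called from the render loop; returns False while still loading."""
        if not self.uploaded and self._done.is_set():
            # real code would call glTexImage2D(...) with self.data here,
            # sized using the metadata the loader stored
            self.uploaded = True
        return self.uploaded

tex = LazyTexture("grass.png", lambda p: b"pixels")
tex._done.wait(1.0)   # the render loop would simply poll bind() each frame
ready = tex.bind()
```

Note that only the CPU-bound load is threaded; all GL calls stay on the render thread, which is what makes shared contexts unnecessary.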

Designing concurrency in a Python program

I'm designing a large-scale project, and I think I see a way I could drastically improve performance by taking advantage of multiple cores. However, I have zero experience with multiprocessing, and I'm a little concerned that my ideas might not be good ones.
Idea
The program is a video game that procedurally generates massive amounts of content. Since there's far too much to generate all at once, the program instead tries to generate what it needs as or slightly before it needs it, and expends a large amount of effort trying to predict what it will need in the near future and how near that future is. The entire program, therefore, is built around a task scheduler, which gets passed function objects with bits of metadata attached to help determine what order they should be processed in and calls them in that order.
Motivation
It seems to me like it ought to be easy to make these functions execute concurrently in their own processes. But looking at the documentation for the multiprocessing module makes me reconsider: there doesn't seem to be any simple way to share large data structures between processes. I can't help but imagine this is intentional.
Questions
So I suppose the fundamental questions I need to know the answers to are thus:
Is there any practical way to allow multiple processes to access the same list/dict/etc. for both reading and writing at the same time? Can I just launch multiple instances of my star generator, give them access to the dict that holds all the stars, and have new objects appear to simply pop into existence in the dict from the perspective of other processes (that is, I wouldn't have to explicitly grab a star from the process that made it; I'd just pull it out of the dict as if the main thread had put it there itself)?
If not, is there any practical way to allow multiple threads to read the same data structure at the same time, but feed their resultant data back to a main thread to be rolled into that same data structure safely?
Would this design work even if I ensured that no two concurrent functions tried to access the same data structure at the same time, either for reading or for writing?
Can data structures be inherently shared between processes at all, or do I always explicitly have to send data from one process to another as I would with processes communicating over a TCP stream? I know there are objects that abstract away that sort of thing, but I'm asking if it can be done away with entirely; have the object each thread is looking at actually be the same block of memory.
How flexible are the objects that the modules provide to abstract away the communication between processes? Can I use them as a drop-in replacement for data structures used in existing code and not notice any differences? If I do such a thing, would it cause an unmanageable amount of overhead?
Sorry for my naivete, but I don't have a formal computer science education (at least, not yet) and I've never worked with concurrent systems before. Is the idea I'm trying to implement here even remotely practical, or would any solution that allows me to transparently execute arbitrary functions concurrently cause so much overhead that I'd be better off doing everything in one thread?
Example
For maximum clarity, here's an example of how I imagine the system would work:
The UI module has been instructed by the player to move the view over to a certain area of space. It informs the content management module of this, and asks it to make sure that all of the stars the player can currently click on are fully generated and ready to be clicked on.
The content management module checks and sees that a couple of the stars the UI is saying the player could potentially try to interact with have not, in fact, had the details that would show upon click generated yet. It produces a number of Task objects containing the methods of those stars that, when called, will generate the necessary data. It also adds some metadata to these task objects, assuming (possibly based on further information collected from the UI module) that it will be 0.1 seconds before the player tries to click anything, and that stars whose icons are closest to the cursor have the greatest chance of being clicked on and should therefore be requested for a time slightly sooner than the stars further from the cursor. It then adds these objects to the scheduler queue.
The scheduler quickly sorts its queue by how soon each task needs to be done, then pops the first task object off the queue, makes a new process from the function it contains, and then thinks no more about that process, instead just popping another task off the queue and stuffing it into a process too, then the next one, then the next one...
Meanwhile, the new process executes, stores the data it generates on the star object it is a method of, and terminates when it gets to the return statement.
The UI then registers that the player has indeed clicked on a star now, and looks up the data it needs to display on the star object whose representative sprite has been clicked. If the data is there, it displays it; if it isn't, the UI displays a message asking the player to wait and continues repeatedly trying to access the necessary attributes of the star object until it succeeds.
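The deadline-ordered scheduler at the heart of this example can be sketched with a heap. The names and the two-task scenario are illustrative; a real version would hand each popped task to a worker process instead of calling it inline.

```python
import heapq
import itertools

class Scheduler:
    """Sketch of the task queue described above: tasks with the nearest
    deadline run first. The counter breaks ties so heapq never tries to
    compare the (uncomparable) task callables."""

    def __init__(self):
        self._heap = []
        self._tie = itertools.count()

    def add(self, deadline, task):
        heapq.heappush(self._heap, (deadline, next(self._tie), task))

    def run_next(self):
        deadline, _, task = heapq.heappop(self._heap)
        return task()   # real version: hand off to a worker process

sched = Scheduler()
order = []
# the star near the cursor gets the sooner deadline, as in the example
sched.add(0.5, lambda: order.append("far star"))
sched.add(0.1, lambda: order.append("star near cursor"))
sched.run_next()
sched.run_next()
```

After both calls, order shows the cursor-adjacent star was generated first, matching the prioritization the content manager applies.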
Even though your problem seems very complicated, there is a very easy solution. You can hide away all the complicated machinery of sharing your objects across processes using a proxy.
The basic idea is that you create a manager that owns all the objects that should be shared across processes. The manager creates its own process, where it waits for other processes to instruct it to change an object. But enough said; it looks like this:
import multiprocessing as m
manager = m.Manager()
starsdict = manager.dict()
process = m.Process(target=yourfunction, args=(starsdict,))
process.start()  # note: run() would execute yourfunction in *this* process
The object stored in starsdict is not a real dict. Instead, it forwards every change and request you make to its manager. This is called a "proxy"; it has almost exactly the same API as the object it mimics. These proxies are picklable, so you can pass them as arguments to functions in new processes (as shown above) or send them through queues.
You can read more about this in the documentation.
I don't know how proxies behave if two processes access them simultaneously. Since they're made for parallelism, I'd guess they are safe, though I've also heard they're not. It would be best to test this yourself or look it up in the documentation.

Using Google map objects within a web worker?

The situation:
Too much stuff is running in the main thread of a page that builds a Google map with overlays representing ZIP territories (drawn from US census data) and, at the client's request, groups those territories into discrete groups. While there is no major issue on desktops, mobile devices (iPad) decide that the thread is taking too long (a maximum of 6 seconds after the data returns) and therefore assume it must have crashed.
Solution: offload the looping function that gathers the points for each shape from each row to a web worker, which can work as fast or slow as resources allow on a mobile device. (Three for loops: the 1st to select a row, the 2nd to select a column, the 3rd for each point within the column. Execution time: 3-6 seconds total for 2000+ rows with numerous points.)
The catch: for this to be properly efficient, the points must be assembled into a shape (polygon) within the web worker. HOWEVER, since it is a google.maps.Polygon object made up of google.maps.LatLng objects, the web worker needs some knowledge of what those items are. Web workers cannot use window or the DOM, so the worker must import the script, and the intent was to pass back just the object as a JSON-encoded item. The code fails on any reference to Google objects, even with importScripts(), because those items rely on the window object.
Further complications: Google's API is technically proprietary. The web app code that this is for is bound by NDA so pointed questions could be asked but not a copy/paste of all code.
The solution/any vague ideas:???
TL;DR: I need to access google.maps.LatLng and create new instances of it (at minimum) within a web worker. The web worker should either return objects ready to be dropped into a google.maps.Polygon, or return a google.maps.Polygon itself. How do I reference the Google Maps API if I cannot use the default method of importing scripts, due to the dependence on the window object?
UPDATE: Since writing this, I've managed to offload the majority of the grunt work from the main thread to the web worker, allowing it to parse the data asynchronously and assign it to a custom-made LatLng object.
The catch now is getting the returned values to run the function in the proper context, to see if the custom LatLng is sufficient for google.maps.Polygon to work its magic.
Excerpt from the file that calls the web worker and listens for its response (Coffeescript)
shapeWorker.onmessage = (event) ->
  console.log "--------------------TESTING---------------"
  data = JSON.parse(event.data)
  console.log data
  generateShapes(data.poly, data.center, data.zipNum)
For some reason, it's trying to evaluate generateShapes in the context of the web worker rather than in the context of the class it's defined in.
Once again, it was a complication of too many things going on at once. The scope was restricted by the use of -> rather than =>, which widens the scope to include the parent class's functions.
Apparently the issue resided with the version of iOS this web app needed to run on, and a bug that set the storage arbitrarily low (a tenth of its previous size). With some shrinking of the data and a fix for the iOS version in question, I was able to get it running without web workers. One day I may come back to it with web workers to increase efficiency.

How can I load a texture in separate thread in cocos2d-x?

I face the need to use multi-threading to load an additional texture on the fly in order to reduce the memory footprint.
The example case: I have 10 types of enemy to use in a single level, but the enemies come out type by type. "Type by type" means one type of enemy comes out, the player kills all of its instances, and then it's time to call in another type. The process goes on like this until all types have come out, and the level is complete.
You can see it's better not to load all the enemies' textures at once at startup (each is pretty big, 2048x2048, with lots of animation frames inside, which I need to create at creation time for each type of enemy). So I turned to multi-threading to load an additional texture when I need it. But I know that cocos2d-x is not thread-safe. I planned to use the CCSpriteFrameCache class to load a texture from a .plist + .png file, re-create the animation there, and finally create a CCSprite from it to represent a new enemy instance. If I don't use multi-threading, I may suffer a lag spike when loading such a large texture.
So how can I load a texture in a separate thread in cocos2d-x, following my goal above? Any idea that avoids thread-safety issues but still accomplishes my goal is also appreciated.
Note: I'm developing on iOS platform.
I found that async loading of images already exists inside cocos2d-x.
You can build the cocos2d-x test project and look at "Texture2DTest", then tap the left arrow to see what async loading looks like.
I have taken a look inside the code.
You can use the addImageAsync method of CCTextureCache to load an additional texture on the fly without interfering with or slowing down other parts, such as the currently running animation.
In fact, addImageAsync of CCTextureCache will load the CCTexture2D object for you and return it to your callback method. It's then your task to make use of it.
Please note that CCSpriteFrameCache uses CCTextureCache to load frames, so this applies to it as well (in my case, loading a spritesheet of frames to use in animation creation). Unfortunately, no async method is provided for the CCSpriteFrameCache class. You have to load the texture object manually via CCTextureCache, then pass it to:
void CCSpriteFrameCache::addSpriteFramesWithFile(const char *pszPlist, CCTexture2D *pobTexture)
There are two files in the test project you can look at:
Texture2dTest.cpp
TextureCacheTest.cpp
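The general pattern behind addImageAsync (decode on a worker thread, deliver the finished object to a callback on the main thread) can be sketched generically. This is not the cocos2d-x API; the queue-based hand-off is the important part, because it keeps the callback, and thus all engine calls, on the thread that owns the renderer.

```python
import queue
import threading

class TextureCache:
    """Generic sketch of the addImageAsync pattern (not the cocos2d-x
    API): a worker thread does the slow load, and the finished object is
    handed to a callback on the main/render thread via a queue."""

    def __init__(self):
        self._ready = queue.Queue()

    def add_image_async(self, path, callback,
                        load=lambda p: ("texture", p)):
        """load is a stand-in for the real decode work."""
        def worker():
            tex = load(path)                    # slow decode, off-thread
            self._ready.put((tex, callback))    # defer callback delivery
        threading.Thread(target=worker, daemon=True).start()

    def pump(self):
        """Call once per frame on the main thread to run pending callbacks."""
        while True:
            try:
                tex, callback = self._ready.get_nowait()
            except queue.Empty:
                return
            callback(tex)

cache = TextureCache()
loaded = []
cache.add_image_async("enemy.plist.png", loaded.append)
while not loaded:        # the render loop would call pump() each frame
    cache.pump()
```

In cocos2d-x terms, pump() corresponds to the engine dispatching your callback on the main thread after the worker finishes, after which you would feed the texture to addSpriteFramesWithFile as described above.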
