"The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing."
I get this error when I try to use a SpriteFont to draw my FPS on the screen; it happens on the line where I call spriteBatch.End().
My effect doesn't even use texture coordinates.
But I have found the root of the problem, just not how to fix it.
I have a separate thread that builds the geometry (an LOD algorithm) and somehow this seems to be why I have the problem.
If I make it non-threaded and just update my mesh per frame I don't get an error.
And also if I keep it multithreaded but don't try to draw text on the screen it works fine.
I just can't do both.
To make it even stranger, it actually compiles and runs for a little while, but it always crashes eventually.
I put a Thread.Sleep in the method that builds the mesh so that it runs less often, and I noticed that the longer this thread sleeps (and the less often it runs), the longer the program survives on average before crashing.
If it sleeps for 1000 ms it runs for maybe a minute; if it sleeps for 10 ms it doesn't even show one frame before crashing. This makes me believe the crash happens when a certain line of code on the mesh-building thread executes at the same time as the text is being drawn on the screen.
It seems like maybe I have to lock something when drawing the text, but I have no clue what.
Any ideas?
My information comes from the presentation "Understanding XNA Framework Performance" from GDC 2008. It says:
GraphicsDevice is somewhat thread-safe
  Cannot render from more than one thread at a time
  Can create resources and SetData while another thread renders
ContentManager is not thread-safe
  OK to have multiple instances, but only one per thread
My guess is that you're breaking one of these rules somewhere, or modifying a buffer that is being used to render without the appropriate locking.
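To illustrate that last point, here is a minimal sketch of the usual fix: the worker builds into a back buffer and the two threads swap under a lock, so the render thread never reads a half-written mesh. (The question's code is C#/XNA; this C++-flavored sketch only shows the locking pattern, and buildLodMesh/draw are hypothetical stand-ins for your mesh builder and your draw call.)

    #include <mutex>
    #include <utility>
    #include <vector>

    struct Vertex { float x, y, z; };

    std::vector<Vertex> buildLodMesh();          // hypothetical: the slow LOD build
    void draw(const std::vector<Vertex>& mesh);  // hypothetical: copies into the vertex buffer and renders

    std::mutex meshMutex;
    std::vector<Vertex> frontMesh;               // only ever read by the render thread
    std::vector<Vertex> backMesh;                // only ever written by the LOD thread

    void lodThreadLoop() {
        for (;;) {
            backMesh = buildLodMesh();           // slow work, no lock held
            std::lock_guard<std::mutex> lock(meshMutex);
            std::swap(frontMesh, backMesh);      // cheap swap under the lock
        }
    }

    void renderFrame() {
        std::lock_guard<std::mutex> lock(meshMutex); // render never sees a partial mesh
        draw(frontMesh);
    }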
I am using Vulkan (ash) to render a scene.
The rendering algorithm first does a raytracing pass (using the traditional pipeline, not the NV extensions) and then renders a few frames normally.
On Nvidia it renders fine, on AMD I am experiencing a flicker.
The draw structure is:
Raytrace
Render 6 meshes with 6 draw calls
If I render that in immediate mode I get flickering; if I do so in FIFO mode I don't. In immediate mode, if I put the thread to sleep between the raytracing call and the mesh calls, I get flickering no matter how long the sleep is; even with a one-second sleep the flickering occurs. The pattern of the flicker is: four calls rendered normally, then two calls where the RT image is the only thing rendered, as if it had been issued after the 6 regular mesh calls, even though it's issued before them.
All of this suggests a synchronization bug. The issue is I have no idea what I forgot to do.
Each draw call draws to a swapchain image, and each has this structure (sketched in code after the list):
Set up uniforms, descriptor sets, pipelines...
Create a fence
Submit draw command to queue signaling fence
Wait for fence
Wait for device
Delete fence
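In code, that per-draw sequence looks roughly like this; device, queue, and cmd stand in for handles created elsewhere (C API shown, but the ash bindings mirror these calls one-to-one):

    #include <vulkan/vulkan.h>

    void submitAndWait(VkDevice device, VkQueue queue, VkCommandBuffer cmd) {
        VkFenceCreateInfo fenceInfo = { VK_STRUCTURE_TYPE_FENCE_CREATE_INFO };
        VkFence fence;
        vkCreateFence(device, &fenceInfo, NULL, &fence);

        VkSubmitInfo submit = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
        submit.commandBufferCount = 1;
        submit.pCommandBuffers = &cmd;
        vkQueueSubmit(queue, 1, &submit, fence);     // signal the fence when the GPU finishes

        vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
        vkDeviceWaitIdle(device);                    // the extra "wait for device" step
        vkDestroyFence(device, fence, NULL);
    }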
I am currently doing that at the end of every single one of the 7 calls. What did I forget to sync? What barrier am I missing?
I am using the passless rendering extension, so I don't have any render passes. I might need an image barrier on the swapchain as a consequence, but I don't know whether I do, or what it would need to be.
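For reference, this is the shape of a typical swapchain barrier that hands the image over for presentation; whether and where one is needed here is exactly my open question, and cmd/swapchainImage are assumed to come from my existing setup:

    #include <vulkan/vulkan.h>

    void presentBarrier(VkCommandBuffer cmd, VkImage swapchainImage) {
        VkImageMemoryBarrier barrier = { VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER };
        barrier.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
        barrier.dstAccessMask = 0;
        barrier.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
        barrier.newLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;
        barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
        barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
        barrier.image = swapchainImage;
        barrier.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

        // All color writes must finish before the image is presented.
        vkCmdPipelineBarrier(cmd,
            VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
            VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT,
            0, 0, NULL, 0, NULL, 1, &barrier);
    }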
I switched from OpenGL to Vulkan to take advantage of its multi-threading improvements.
In OpenGL, I was able to load objects into the scene dynamically (buffers, textures, etc.) while rendering, using a waiting system. I loaded all the app-side data in a thread; then, when it was ready, just before rendering a frame on the main thread, I sent everything to video memory. That worked fine.
With Vulkan, I know I can call some functions from multiple threads without provoking the well-known segfault from OpenGL. But this doesn't work with vkQueueSubmit(). I know; I tried the naive way. It seems logical to me that you can't hit a queue from multiple threads.
I came up with some ideas, but I don't know which ones are good or bad.
First, I could go the OpenGL way: prepare everything I can on the CPU/app side, then, just before rendering a frame, submit the buffers (with the transfer queue) to video memory. But I feel there is no real improvement over the OpenGL way...
Second, I could try to use the synchronization mechanisms to send buffers from one thread and render from another. But I keep reading that there are a lot of ways to slow everything down with irrelevant locks or by using semaphores and fences incorrectly.
So my question is basically: which path should I pick to solve this problem? How can I load a buffer dynamically from another thread while the main thread is rendering, without hurting performance too much? How can Vulkan help?
If you want to stream resources for immediate use (i.e. the main render cannot proceed without them), then you're pretty much going to either block the main thread, or have it spin doing something visually interesting (e.g. an animated loading screen) while the resources load.
If you want to stream resources while the app is doing real rendering, then the main trick is to load resources asynchronously in the background and only switch to using them on the main thread once they are already loaded. If the main thread ever ends up actually blocked on a semaphore, you've probably already started dropping frames, so your "engine" design needs to ensure that never happens. A lot of games use simple low-detail proxy objects as stand-ins while the high-detail version loads in the background.
None of this is particularly related to the graphics API - both GL and Vulkan need the same macro-scale behavior. Vulkan API features don't particularly help because the major bottlenecks which cause problems here are storage/network/CPU which have nothing to do with the graphics part of the problem.
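A minimal sketch of that "load in the background, flip to it when ready" idea, with a proxy as the stand-in; the names are illustrative, not from any particular engine or API:

    #include <atomic>

    struct Mesh;                              // whatever your renderer draws
    Mesh* loadFromDisk();                     // hypothetical: slow storage/decode work

    struct StreamedAsset {
        Mesh* proxy = nullptr;                // low-detail stand-in, always resident
        Mesh* full  = nullptr;                // filled in by the loader thread
        std::atomic<bool> ready{false};       // set last, after `full` is complete
    };

    void loaderThread(StreamedAsset& a) {     // runs in the background
        a.full = loadFromDisk();
        a.ready.store(true, std::memory_order_release);
    }

    // Called on the render thread every frame; it never blocks on the loader.
    Mesh* meshToDraw(StreamedAsset& a) {
        return a.ready.load(std::memory_order_acquire) ? a.full : a.proxy;
    }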
I decided to trust threads!
At first it seems to work, though I get a lot of:
[MESSAGE:Validation Error: [ UNASSIGNED-Threading-MultipleThreads ] Object 0: handle = 0x56414228bad8, type = VK_OBJECT_TYPE_QUEUE; | MessageID = 0x141cb623 | THREADING ERROR : vkQueueSubmit(): object of type VkQueue is simultaneously used in thread 0x7f6b977fe640 and thread 0x7f6bc2bcb740]
But it works!
So the basic idea is to have a thread loading objects while the engine is drawing. This thread creates the UBO for the object's location; then, when the geometry is loaded from RAM, it creates the VBO and IBO (I've left materials with images/UBOs on hold for now), then creates the graphics pipeline (with layout, descriptor set layout, and shaders compiled on the fly with glslang) (the next idea is to reuse pipelines for similar needs), and finally sets a flag to say the object is ready to use. On the other side, my main thread renders and picks up new objects as they become ready.
I think it works because I have a forgiving video card (a GTX 1070) with multiple queues set up: one for graphics and another one for transfer.
I'm pretty sure this will crash or go poorly on a GPU with a single queue, and that is probably why the validation layers give me these messages.
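For what it's worth, the validation message above is not about how many queues the GPU has: access to a single VkQueue must be externally synchronized, so two threads may never call vkQueueSubmit on the same queue at the same time. A mutex per shared queue is the usual fix; a minimal sketch in the C API (ash mirrors it):

    #include <mutex>
    #include <vulkan/vulkan.h>

    std::mutex queueMutex;  // one mutex per VkQueue that is shared across threads

    VkResult submitLocked(VkQueue queue, uint32_t count,
                          const VkSubmitInfo* submits, VkFence fence) {
        std::lock_guard<std::mutex> lock(queueMutex);
        return vkQueueSubmit(queue, count, submits, fence);
    }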
I'm writing an app that embeds a bunch of matplotlib figures into a PyQt GUI. The updating of these figures can take up to a few seconds, so I would like to introduce a waiting indicator to display while the plots are being drawn. I've moved all the data processing code into its own thread, but it seems the actual plotting calls are often making up the majority of processing time.
I have written a waiting indicator that uses a QTimer instance to trigger repaints (paintEvent) on the widget. This works just fine when all the intensive processing can be pushed into another thread. The problem is that the calls that build the matplotlib plots cannot be moved off the main thread, due to the way Qt is designed, and so they block the updating of the waiting indicator, rendering it kind of useless.
I've introduced some calls to QCoreApplication.processEvents() after the updating of each figure, which improves the performance a little. I've also toyed with the idea of monkeypatching a bunch of methods of matplotlib.axes.Axes to include calls to QCoreApplication.processEvents(), but I can see that getting messy. Is this the best I can do? Is there any way to interrupt the main thread at regular intervals and force it to process new events?
It should also help a great deal to do the actual drawing into an off-screen image in a worker thread (paint into a QImage there, since Qt only supports painting on a QPixmap from the GUI thread) and then convert it to a QPixmap. Drawing that pixmap with QPainter's drawPixmap() method is very fast, and you only need to recreate it when really needed (e.g. after zooming). In the meantime you just reuse the already-drawn pixmap. The actual paintEvents using drawPixmap() will cost close to nothing and your GUI will be completely responsive.
Littering the code with processEvents() calls is not only ugly but can cause very nasty and very hard-to-debug malfunctions. For example, it might cause premature deletion of objects which are still in use but were scheduled for deletion using deleteLater().
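A sketch of such a widget in Qt's C++ API (PyQt mirrors it almost name-for-name); the worker renders into a QImage and hands it over via a queued signal/slot connection, so the conversion and the repaint both happen on the GUI thread:

    #include <QImage>
    #include <QPainter>
    #include <QPixmap>
    #include <QWidget>

    class PlotWidget : public QWidget {
    public:
        // Connect the worker's "image finished" signal here with a queued
        // connection so this always runs on the GUI thread.
        void setImage(const QImage& img) {
            m_cache = QPixmap::fromImage(img);   // convert on the GUI thread
            update();                            // schedule a cheap repaint
        }
    protected:
        void paintEvent(QPaintEvent*) override {
            QPainter p(this);
            p.drawPixmap(0, 0, m_cache);         // near-zero cost; GUI stays responsive
        }
    private:
        QPixmap m_cache;
    };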
This answer might also be of use: Python - matplotlib - PyQT: plot to QPixmap
I haven't used matplotlib yet, but if it uses QWidgets directly and cannot be used without them, it won't be as easy as described above. In that case you could do the drawing in another process, started by your GUI, that uses matplotlib as in the link above and stores the pixmap to disk; your GUI then loads it whenever a new pixmap is ready. QFileSystemWatcher might help here to avoid polling.
I am using LibGDX to make a game. I want to simultaneously load/unload assets on the fly as needed. However, waiting for assets to load in the main thread causes lag. In order to remedy this, I've created a background thread that monitors which assets need to be loaded (textures, sounds, etc.) and loads/unloads them appropriately.
Unfortunately, I get the following error when calling AssetManager.update() from that thread.
com.badlogic.gdx.utils.GdxRuntimeException: java.lang.RuntimeException: No OpenGL context found in the current thread.
I've tried running the background thread's code in the main thread at startup and just dealing with the first few screens, and everything works fine. I can also change the algorithm to load everything into memory from the start on the same thread, and that works as well. However, neither approach works in the background thread.
When I run this on Android with OpenGL ES 2.0 (which is flexible in odd ways) instead of on Windows, everything runs fine, and I can even get the pixel dimensions of the images - but the textures render black.
My searches have told me that this is an issue of the OpenGL context being bound to a single thread, but not much else. This explains why everything works when I shove it in the main thread, and not when I put it in a different one. How do I fix this context problem?
First things first, you should not access the OpenGL context outside of the rendering thread.
I assume you have looked at these already, but just to make sure, read up on the AssetManager wiki article, which talks a bit about how to use the AssetManager for asynchronous management of assets. In addition to the wiki article, check out the AssetManagerTest to better understand how to use it. The asset manager test is probably your best bet for learning how to load assets dynamically.
If you are loading a ton of stuff, you may want to look into creating a loading bar to load anything large upfront. It might work to check assets and such from another thread (and set a flag to call update), but at the end of the day you will need to call update() on the rendering thread.
Keeping in mind that update() has to run on the rendering thread anyway, I don't see why you would want another thread to check conditions and set a flag. There is probably more overhead in using another thread and synchronizing the update() call than in just doing it all on the rendering thread. Also, the update() method only pauses for a couple of milliseconds at a time as it incrementally loads files. Typically, you would simply call load() for your asset, then check isLoaded() on it. If it isn't loaded, you call update() once per frame until isLoaded() returns true. Once it returns true, you can call get() to retrieve whatever asset you were loading. This can all be done on the main rendering thread without the app lagging while it loads (see the sketch below).
If you really want your other thread to call update(), you need to create a Runnable object and call postRunnable(), as described in the wiki article on multi-threading with libGDX. However, this defeats the whole point of using other threads, because anything you run with postRunnable() executes synchronously on the rendering thread.
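To make the pattern concrete, here is a sketch of that per-frame polling loop. It is written in C++-style pseudocode to match the other sketches in this thread; LibGDX itself is Java, but load/update/isLoaded/get below are the real AssetManager method names, and the types are stand-ins:

    struct Texture;                            // stand-in for the LibGDX Texture class

    struct AssetManager {                      // shape of LibGDX's Java AssetManager
        void     load(const char* name);       // queue an asset; returns immediately
        bool     update();                     // a few ms of loading; true when all done
        bool     isLoaded(const char* name);
        Texture* get(const char* name);
    };

    void render(AssetManager& assets) {        // runs on the rendering thread
        assets.update();                       // incremental loading, no long stall
        if (assets.isLoaded("enemy.png")) {
            Texture* t = assets.get("enemy.png");
            // ... draw with t
        } else {
            // ... draw a placeholder or loading bar
        }
    }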
I faced the need to use multi-threading to load an additional texture on-the-fly in order to reduce the memory footprint.
The example case is that I have 10 types of enemy to use in a single level, but the enemies come out type by type. "Type by type" means one type of enemy comes out and the player kills all of its instances, then it's time to call in another type. This goes on until all types have come out, and then the level is complete.
You can see it's better not to load every enemy's texture at once at the start (each is pretty big, 2048x2048, with lots of animation frames inside, which I need to create at creation time for each enemy type). So I turned to multi-threading to load an additional texture when I need it. But I know that cocos2d-x is not thread-safe. I planned to use the CCSpriteFrameCache class to load a texture from a .plist + .png file, then re-create the animations there, and finally create a CCSprite from it to represent a new enemy-type instance. If I don't use multi-threading, I might suffer a lag spike from loading such a large texture.
So how can I load a texture on a separate thread in cocos2d-x, given the goal above? Any idea that avoids the thread-safety issues but still accomplishes my goal is also appreciated.
Note: I'm developing on the iOS platform.
I found that async loading of images is already built into cocos2d-x.
You can build the cocos2d-x test project and look at "Texture2DTest", then tap the left arrow to see what async loading looks like.
I have taken a look inside the code.
You can use the addImageAsync method of CCTextureCache to load an additional texture on the fly without interfering with or slowing down other parts, such as the animation that is currently running.
In fact, addImageAsync of CCTextureCache will load the CCTexture2D object for you and hand it back to your callback method. It's then up to you to make use of it.
Please note that CCSpriteFrameCache uses CCTextureCache to load frames, so this applies equally to my case of loading a spritesheet of frames for building animations. Unfortunately, no async variant of the method is provided for the CCSpriteFrameCache class. You have to load the texture object manually via CCTextureCache, then plug it into
void CCSpriteFrameCache::addSpriteFramesWithFile(const char *pszPlist, CCTexture2D *pobTexture)
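Something along these lines (a sketch against the cocos2d-x 2.x API; the EnemyLoader class and the file names are hypothetical):

    // Kick off the async load; the callback fires on the main thread,
    // so CCSpriteFrameCache calls are safe inside it.
    void EnemyLoader::loadEnemyType() {
        CCTextureCache::sharedTextureCache()->addImageAsync(
            "enemy_type.png", this, callfuncO_selector(EnemyLoader::onTextureLoaded));
    }

    void EnemyLoader::onTextureLoaded(CCObject* obj) {
        CCTexture2D* texture = (CCTexture2D*)obj;
        CCSpriteFrameCache::sharedSpriteFrameCache()
            ->addSpriteFramesWithFile("enemy_type.plist", texture);
        // Frames are registered now; build the animation and CCSprite as usual.
    }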
There are 2 files in the test project you can take a look at:
Texture2dTest.cpp
TextureCacheTest.cpp