I am working on legacy code that relies on GDI (not GDI+) for drawing. GDI has no provision for variable-width pens, so you need to create/destroy a new pen every time the width changes.
Is there any hack that allows bypassing that creation process and directly modifying the width of an existing pen?
I did not find a way to alter a pen. So the solution I adopted is to keep a permanent pen of width 1 (the most often used), and temporarily create a custom one when another width is requested. This at least allows me to mix two different widths without repeated creations/deletions.
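For illustration, here is a minimal sketch of that workaround, assuming an HDC obtained elsewhere (e.g. from BeginPaint). The helper name DrawLineWithWidth and the fixed black color are my own, not from the original code:

```cpp
#include <windows.h>

static HPEN g_thinPen = NULL;  // the permanent width-1 pen, created once

// Hypothetical helper: draws a line, reusing the permanent pen for
// width 1 and creating/destroying a temporary pen for any other width.
void DrawLineWithWidth(HDC hdc, int x0, int y0, int x1, int y1, int width)
{
    if (g_thinPen == NULL)
        g_thinPen = CreatePen(PS_SOLID, 1, RGB(0, 0, 0));

    HPEN pen = (width == 1)
        ? g_thinPen                                  // reuse, no creation
        : CreatePen(PS_SOLID, width, RGB(0, 0, 0));  // temporary pen

    HPEN old = (HPEN)SelectObject(hdc, pen);
    MoveToEx(hdc, x0, y0, NULL);
    LineTo(hdc, x1, y1);
    SelectObject(hdc, old);        // deselect before deleting

    if (pen != g_thinPen)
        DeleteObject(pen);         // only temporary pens are destroyed
}
```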
I'm currently implementing a basic deferred renderer with multithreading in Vulkan. Since my G-buffer should have the same resolution as the final image, I want to do it in a single render pass with multiple subpasses, according to this presentation, on slide 44 (page 138). It says:
vkBeginCommandBuffer
vkCmdBeginRenderPass
vkCmdExecuteCommands
vkCmdNextSubpass
vkCmdExecuteCommands
vkCmdEndRenderPass
vkEndCommandBuffer
I get that in the first subpass, you iterate the scene graph and record one secondary command buffer for each entity/mesh. What I don't get is how you are supposed to do the shading pass with secondary command buffers. Do you somehow split the screen into parts and render each part in a separate thread, or do you just record one secondary command buffer for the entire second subpass?
As you said, you may need to multithread your command buffer recording for the G-buffer-building subpass. For the shading pass, however, it depends on how you are doing things; in my view, you do not need to multithread your shading subpass. You must, however, take into consideration that you can have a "by region" dependency between the two subpasses.
So I encourage you to proceed this way.
Before beginning your render pass, use a compute shader to splat all your lights onto the screen (this gives you a kind of array of "quads").
By splatting I mean the following: you have a point light (for example), and the idea is to compute the quad in screen space affected by the light. That gives you 4 vertices (representing a quad) that you put into an SSBO, and you can then use it as a vertex buffer in the shading subpass.
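For what it's worth, here is a rough CPU-side sketch of that splatting idea (the answer does it in a compute shader). The approximate projection and all names are my own assumptions:

```cpp
#include <algorithm>

struct Vec2 { float x, y; };
struct Quad { Vec2 v[4]; };

// Approximate screen-space quad for a point light at view-space position
// (vx, vy, vz) with radius r, given the focal lengths fx, fy taken from
// the projection matrix. A real implementation would also clip lights
// that cross the near plane.
Quad LightScreenQuad(float vx, float vy, float vz, float r,
                     float fx, float fy)
{
    float z  = std::max(vz, 0.001f);  // keep the division safe
    float cx = fx * vx / z;           // projected light center
    float cy = fy * vy / z;
    float ex = fx * r / z;            // projected extents
    float ey = fy * r / z;
    return Quad{{{cx - ex, cy - ey}, {cx + ex, cy - ey},
                 {cx + ex, cy + ey}, {cx - ex, cy + ey}}};
}
```

These four vertices per light are what you would write into the SSBO.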
Now you begin the render pass.
Multithread the scene-graph recording if needed, and do your vkCmdExecuteCommands().
vkCmdNextSubpass.
Use the "array of quads" you created from the earlier compute shader (do not forget a VK_SUBPASS_EXTERNAL dependency; see the sketch after these steps).
vkCmdNextSubpass, and so on.
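As a sketch, the two dependencies mentioned above might look like this (the subpass indices, stages, and access masks are assumptions to be adapted to your actual usage):

```cpp
#include <vulkan/vulkan.h>

// 1) VK_SUBPASS_EXTERNAL: the compute shader must finish writing the quad
//    SSBO before the shading subpass consumes it as a vertex buffer.
VkSubpassDependency computeToShading{};
computeToShading.srcSubpass    = VK_SUBPASS_EXTERNAL;
computeToShading.dstSubpass    = 1;  // assumed index of the shading subpass
computeToShading.srcStageMask  = VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT;
computeToShading.dstStageMask  = VK_PIPELINE_STAGE_VERTEX_INPUT_BIT;
computeToShading.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;
computeToShading.dstAccessMask = VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT;

// 2) The "by region" dependency between the G-buffer subpass and the
//    shading subpass that reads the G-buffer as input attachments.
VkSubpassDependency gbufferToShading{};
gbufferToShading.srcSubpass      = 0;
gbufferToShading.dstSubpass      = 1;
gbufferToShading.srcStageMask    = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
gbufferToShading.dstStageMask    = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
gbufferToShading.srcAccessMask   = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
gbufferToShading.dstAccessMask   = VK_ACCESS_INPUT_ATTACHMENT_READ_BIT;
gbufferToShading.dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
```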
However, you said
you iterate the scene graph and record one secondary commandbuffer for each entity/mesh.
I am not sure I really understand what you meant, but if you intend to have one secondary command buffer per mesh, I really advise you to change your approach: you should use batching. Let's say you have 64,000 different meshes to draw. You could, for example, create 64 command buffers (dispatched on 4 threads), where each command buffer draws 1,000 meshes. (The numbers are arbitrary, so profile your application; a sketch follows below.)
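A hypothetical sketch of that batching scheme, with 64 pre-allocated secondary command buffers recorded on 4 threads. Mesh, BeginSecondary, RecordMesh, and EndSecondary are stand-ins for your own code:

```cpp
#include <vulkan/vulkan.h>
#include <algorithm>
#include <thread>
#include <vector>

struct Mesh { /* per-mesh draw data (hypothetical) */ };
void BeginSecondary(VkCommandBuffer cb);            // wraps vkBeginCommandBuffer
void RecordMesh(VkCommandBuffer cb, const Mesh& m); // binds and draws one mesh
void EndSecondary(VkCommandBuffer cb);              // wraps vkEndCommandBuffer

void RecordBatches(const std::vector<Mesh>& meshes,
                   std::vector<VkCommandBuffer>& cmdBufs, // e.g. 64 secondaries
                   unsigned threadCount)                  // e.g. 4
{
    const size_t perBatch =
        (meshes.size() + cmdBufs.size() - 1) / cmdBufs.size();
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < threadCount; ++t) {
        workers.emplace_back([&, t] {
            // Each thread records every threadCount-th batch.
            for (size_t b = t; b < cmdBufs.size(); b += threadCount) {
                size_t first = b * perBatch;
                size_t last  = std::min(first + perBatch, meshes.size());
                BeginSecondary(cmdBufs[b]);
                for (size_t i = first; i < last; ++i)
                    RecordMesh(cmdBufs[b], meshes[i]);
                EndSecondary(cmdBufs[b]);
            }
        });
    }
    for (auto& w : workers) w.join();
}
```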
So, to answer your question for the shading subpass: I would not use secondary command buffers, or only very few (one per kind of light: punctual, directional).
What I don't get is how you are supposed to do the shading pass with secondary command buffers.
The shading pass (presumably the second subpass) would likely take the G-buffers created by the first subpass as input attachments. Then it would draw a screen-sized quad, using data from the G-buffers plus a set of lights (or whatever your deferred shader tries to defer).
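In render-pass-setup terms, that could look roughly like this; the attachment indices and the choice of two G-buffer attachments are assumptions:

```cpp
#include <vulkan/vulkan.h>

// The shading subpass reads the G-buffer written by subpass 0 as input
// attachments and writes the lit result to the color attachment.
VkAttachmentReference gbufferAsInput[2] = {
    {0, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL},  // e.g. albedo
    {1, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL},  // e.g. normals
};
VkAttachmentReference litColor =
    {2, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};

VkSubpassDescription shadingPass{};
shadingPass.pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
shadingPass.inputAttachmentCount = 2;
shadingPass.pInputAttachments    = gbufferAsInput;
shadingPass.colorAttachmentCount = 1;
shadingPass.pColorAttachments    = &litColor;
```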
The presentation you linked hints at this structure starting at page 13 (marked "Page 107").
The first step would be to make it work; use e.g. this SW example. Then the next step of optimizing it into a single render pass should be easier.
I'm working on a Windows app in order to learn how to make them in general, and one issue I keep having is that when I test it, the controls only take up a portion of the screen because they are sized to fit a smaller screen. How can I make them fit all screens? If I need to provide screenshots to illustrate this point, I can. Using Forms this was accomplished via docking, but these apps don't seem to have that same capability.
I assume that by "windows app" you mean a Windows Runtime app, probably in Xaml.
You can get dock-like behavior by using the VerticalAlignment and HorizontalAlignment properties on your FrameworkElement (including Controls). This allows forcing the control to the left, right, top, bottom, or stretching to fill the area it is in.
Combine this with flexible layout controls such as Grids. A top level Grid will fill the screen and can contain rows and columns with relative sizes. This allows the page layout to shrink or grow to cover a fairly wide range of sizes with a single layout.
For larger changes (such as switching between portrait and landscape aspect ratios, or to support a skinny snapped window) you can use VisualStates to either move the controls or to switch between different sets of controls. If the controls are data bound then either set will work automatically with the underlying data.
MSDN has some good documentation on these concepts: Guidelines for supporting multiple screen sizes and Quickstart: Designing apps for different window sizes.
Apologies if there is a thread for this already, I couldn't find one that I could get my teeth into.
Anyway, I'm new to WPF and want to create a custom control that will be a sort of graphic control. The graphic will always consist of a circle containing a matrix of several squares (from several hundred to several thousand, actually). The squares need to respond to mouse-click and mouse-over events (and ideally be navigable/selectable via keyboard). Each square will represent an object I've coded.
In the past I've used a grid control to display the coloured squares (with the VCL in C++Builder), but I would like to make a graphical version. (Actually, another question I'd like to ask is: is there a WPF grid control where I can set the colours of individual cells?)
The question is, where to start? Do I start with a canvas and draw on it? Do I derive from an existing object? I'm just a little lacking on ideas on implementation so any pointers or advice you can offer will be greatly received.
BBz
First off, I would suggest getting a decent handle on WPF and how it approaches the problem set. It is vastly different from previous .NET desktop technologies such as WinForms. Once you have a decent understanding of the separation of logic from UI and of how WPF approaches the problem, you can dive in and begin making the right decisions based upon what you encounter.
The problem you mention can be solved in multiple ways. In regards to your question about making use of a Grid: that could be done, as it is a layout type, and it is vastly superior to the Canvas in terms of arranging your visual structure. The defined rows/columns are nothing more than containers which can hold varying UI objects. Therefore pushing a Rectangle into the Grid and coloring it as desired would give you the effect you are looking for. This Rectangle could then become a custom control on which you could define varying properties, as well as specific triggers for mouse-overs, etc.
At a higher level you will want to encapsulate this logic as a UserControl which will also hold your custom control. Perhaps the UserControl contains the Grid which will make use of your custom control.
Hopefully this gives you some ideas around how to get started, however getting a better understanding of WPF will help you immensely in achieving your goal.
I was wondering how the Nike website makes the change you can see when selecting a color or a sole. At first I thought they were only using images, and when the user picked a color you just replaced that part; but when I selected a different sole I noticed it didn't change like an image would, it looked a bit more as if it was being rendered. Does anybody happen to know how this is done? Or where I can get further info about making this effect? :)
It's hard to know for sure, but my guess would be that they're using a rendering service similar to that provided by Adobe's Scene7.
It's a product that is used to colorize/customize a base product image based on user choices.
If you're interested in using the service, I'd suggest signing up for their weekly webinar. I attended one a while back and was very impressed with their offering. They showed the Converse site (which had functionality almost identical to the Nike site) as a demo.
A lot of these tools are built out in Flash using a variety of techniques:
1) You can use Flash's BitmapData object to directly shift the hues of the pixels in your item. This is probably the simplest technique but often limits you to simple color transformations.
2) You can pre-render transparent PNGs (or photos, I guess) containing the various textures you would want to show on your object (for instance patterns or textures) and have them dynamically added to your stage at runtime. This, I think, offers the highest fidelity, but means you need all of your items rendered upfront.
3) You can create 3D COLLADA files and load them via a library like Papervision3D, then dynamically change the texture at runtime. This is the most memory-intensive technique and tends to result in far worse fidelity, but in exchange you get a full 3D object that you can view in space.
I'm sure there are other techniques but those are the top 3 I can think of. I hope that helps!
I need to render some CPU-generated images in Direct3D 9, and I'm not sure of the best way to get the texture data onto the graphics card, as there seem to be a number of approaches.
My usage path goes along the following lines each frame:
Render a bunch of stuff with the textures
Update a few parts of the texture (which may have been used by the previous renders)
Render some more stuff with the texture
Update another part of the texture
and so on
I've thought of a couple of ways to do this; however, I'm not sure which one to go with. I considered benchmarking each method, but I have no way to know whether any results I get are representative of hardware in general, or only of my hardware.
Which pool is best for a texture for this task?
What's the best way to update this texture?
1. Call LockRect and UnlockRect for each region I need to update.
2. Call LockRect and UnlockRect for the entire texture.
3. Call LockRect and UnlockRect for the entire texture with D3DLOCK_DISCARD and copy in a bitmap from RAM.
4. Create a completely new texture each time I need to "update" it.
5. Use 1, 2 or 3 to update a surface in D3DPOOL_SYSTEMMEM, then use UpdateSurface to update level 0 of my texture from this surface.
6. Same as 5, but specify a RECT covering the entire area I need.
7. Same as 5, but make multiple calls, one for each region I updated.
8. Probably yet another way to do this I haven't thought of yet...
It should be noted that the areas I'm updating are usually fairly small compared to the size of the entire texture; e.g. the texture may be 1024*1024 and I might want to update 5 or so 64*64 regions of it.
If you need to update multiple areas, you should lock the whole texture with the D3DLOCK_NO_DIRTY_UPDATE flag, then for each area call AddDirtyRect before unlocking; a sketch follows below.
This all depends on the size of the texture, etc., of course; for a small texture it may be more efficient to copy the whole thing from RAM.
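A minimal sketch of that pattern, assuming a D3DPOOL_MANAGED texture (AddDirtyRect applies to managed resources) and with CopyPixels standing in for your own CPU-side copy:

```cpp
#include <d3d9.h>
#include <vector>

void CopyPixels(void* bits, INT pitch, const RECT& region); // hypothetical

void UpdateRegions(IDirect3DTexture9* tex, const std::vector<RECT>& regions)
{
    D3DLOCKED_RECT lr;
    // Lock the whole level, but stop D3D from marking it all dirty.
    if (FAILED(tex->LockRect(0, &lr, NULL, D3DLOCK_NO_DIRTY_UPDATE)))
        return;

    for (const RECT& r : regions)
    {
        CopyPixels(lr.pBits, lr.Pitch, r);  // write just this region
        tex->AddDirtyRect(&r);              // only these rects get uploaded
    }
    tex->UnlockRect(0);
}
```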
Use D3DPOOL_DEFAULT and D3DUSAGE_DYNAMIC, and call LockRect and UnlockRect for each region you need to update --> this is the fastest!
Benchmark will follow...