Return value for MeasureOverride, if constraint.Width is Infinity and all available space should be used?

I am writing a custom control which inherits from FrameworkElement. I use DrawingVisual to render to the screen; therefore, I have to calculate the optimal size of the visuals I draw myself. If HorizontalAlignment is set to Stretch, I would like to use all available space. But WPF doesn't tell me how much space is really available :-(
In MeasureOverride, the constraint.Width is Infinity. I need to return a desired size which is less than Infinity, otherwise I get the exception "InvalidOperationException: Layout measurement override of element 'xyz' should not return PositiveInfinity as its DesiredSize, even if Infinity is passed in as available size."
I tried to return 0, hoping that Arrange would then pass all the available space, since MeasureOverride said infinite width was available. But ArrangeOverride reports that arrangeBounds.Width is 0.
I tried to return an arbitrary width (well, actually I returned the monitor width from MeasureOverride), but then ArrangeOverride tells me I can use the complete monitor width, which is, of course, not true :-(
So which width do I return from MeasureOverride to get the maximum space really available?
Update 23.1.23
My control is in a library and has no idea which container is holding it. I inherit from that control to make many different controls, for example a line graph that can contain millions of measurements. So if there is unlimited space, the graph gets millions of pixels wide. If the space is limited, let's say to 1000 pixels, then the graph displays in each pixel the average of thousands of measurements.

After using WPF for nearly 20 years now, I think the proper solution is to raise an exception when the available width is Infinity and my control cannot deal with that. The exception forces the control to be put into a container which gives limited width to its children.
For example, a ScrollViewer gives unlimited space to its children. But since my line graph, or any other control I design that cannot deal with unlimited space, would get unreasonably big, it is better to immediately alert the developer using the control that it needs to be hosted in a container like a Grid, where the Grid tells the child the exact size available. Of course, a Grid might also give unlimited space when the column width is set to Auto, but in that case my control should raise an exception, so that the column width can be changed to a fixed number of pixels or *.
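A minimal sketch of such a guard, assuming a control that wants to fill whatever width it is given (the class name, message text and height handling are illustrative, not the actual library code):

using System;
using System.Windows;

public class WidthLimitedControl : FrameworkElement // illustrative name
{
    protected override Size MeasureOverride(Size availableSize)
    {
        if (double.IsPositiveInfinity(availableSize.Width))
        {
            // Unlimited width, e.g. inside a ScrollViewer or an Auto-sized Grid column:
            // fail fast so the developer hosts the control in a width-constraining container.
            throw new InvalidOperationException(
                "This control cannot handle unlimited width. Host it in a container " +
                "that constrains its children, e.g. a Grid column with a pixel or * width.");
        }

        // Otherwise claim all the width that is really available.
        double height = double.IsPositiveInfinity(availableSize.Height)
            ? 0 // a real control would measure its content height here
            : availableSize.Height;
        return new Size(availableSize.Width, height);
    }
}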

Related

Why do GBuffers need to be created for each frame in D3D12?

I have experience with D3D11 and want to learn D3D12. I am reading the official D3D12 multithreading example and don't understand why the shadow map (generated in the first pass as a DSV, consumed in the second pass as an SRV) is created for each frame (actually only 2 copies, as the FrameResource is reused every 2 frames).
The code that creates the shadow map resource is here, in the FrameResource class, instances of which are created here.
There is actually another resource that is created for each frame: the constant buffer. I kind of understand the constant buffer: because it is written by the CPU (D3D11 dynamic usage) and needs to remain unchanged until the GPU finishes using it, there need to be 2 copies. However, I don't understand why the shadow map needs the same treatment, because it is only modified by the GPU (D3D11 default usage), and there are fence commands to separate reading and writing of that texture anyway. As long as the GPU respects the fence, a single texture should be enough for it to work correctly. Where am I wrong?
Thanks in advance.
EDIT
According to the comment below, the "fence" I mentioned above should more accurately be called "resource barrier".
The key issue is that you don't want to stall the GPU for best performance. Double-buffering is a minimal requirement, but typically triple-buffering is better for smoothing out frame-to-frame rendering spikes, etc.
FWIW, the default behavior of DXGI Present is to stall only after you have submitted THREE frames of work, not two.
Of course, there's a trade-off between triple-buffering and input responsiveness, but if you are maintaining 60 Hz or better, then it's likely not noticeable.
With all that said, you typically don't need to double-buffer depth/stencil buffers for rendering, although if you wanted the initial write of the depth buffer to overlap with the read of the previous frame's depth-buffer passes, then you would want distinct buffers per frame for performance and correctness.
The 'writes' are all complete before the 'reads' in DX12 because of the injection of the 'Resource Barrier' into the command-list:
void FrameResource::SwapBarriers()
{
    // Transition the shadow map from writeable to readable.
    m_commandLists[CommandListMid]->ResourceBarrier(1,
        &CD3DX12_RESOURCE_BARRIER::Transition(m_shadowTexture.Get(),
            D3D12_RESOURCE_STATE_DEPTH_WRITE,
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE));
}

void FrameResource::Finish()
{
    // Transition the shadow map back from readable to writeable.
    m_commandLists[CommandListPost]->ResourceBarrier(1,
        &CD3DX12_RESOURCE_BARRIER::Transition(m_shadowTexture.Get(),
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE,
            D3D12_RESOURCE_STATE_DEPTH_WRITE));
}
Note that this sample is a port/rewrite of the older legacy DirectX SDK sample MultithreadedRendering11, so it may be just an artifact of convenience to have two shadow buffers instead of just one.

Python script skyrockets size of pagefile.sys

I wrote a Python script that sometimes crashes with a memory allocation error. I noticed that the pagefile.sys of my Windows 10 64-bit system skyrockets while this script runs and exceeds the free memory.
My current solution is to run the script in steps, so that every time the script runs through, the pagefile empties.
I would like the script to run through all at once, though.
Moving the pagefile to another drive is not an option, unfortunately, because I only have this one drive and moving the pagefile to an external drive does not seem to work.
During my research, I found out about the gc module, but it is not working:
import gc
and after every iteration I use
gc.collect()
Am I using it wrong, or is there another (Python-based!) option?
[Edit:]
The script is very basic: it only iterates over image files (using Pillow), reads the width, height and resolution (DPI) of each image, and calculates the dimensions in cm.
If height > width, the image is rotated 90° counterclockwise.
The images are meant to be enlarged or shrunk to A3 size (42 x 29.7 cm), so I use the width/height ratio to check whether I can enlarge the width to 42 cm while the height stays below 29.7 cm; if the height would exceed 29.7 cm, I enlarge the height to 29.7 cm instead.
For the moment, I still do the actual enlargement/shrinking in Photoshop. Based on whether it is a width or a height enlargement, the file is moved to the corresponding folder.
Anyway, the memory explosion happens in the iteration that only reads the file dimensions.
For that I use (imgOri is the path of the current image file):

from PIL import Image

with Image.open(imgOri) as pic:
    widthPX = pic.size[0]
    heightPX = pic.size[1]
    resolution = pic.info["dpi"][0]  # dots per inch
    widthCM = float(widthPX) / resolution * 2.54  # 1 inch = 2.54 cm
    heightCM = float(heightPX) / resolution * 2.54
I also calculate whether the shrinking would be too strong; if so, the image is divided in half and re-evaluated.
Even though it is unnecessary, I still added pic.close() inside the with block, because I thought Python might be keeping the image files open, but that didn't help.
Once the iteration finishes, pagefile.sys goes back to its original size, so whenever the error occurs, I take some files out and process them in batches.

Phaser bars that grow independently and restart

I've tried a few things, but nothing solid enough to post. I'm trying to create a few bars (think mana and HP) that all grow at the same time but at different rates. Then, when one has reached its full size, they would all stop and wait for some sort of signal. After that, the full bar would reset back to zero and start again, with all the other bars resuming their progress.
Even if I don't get code, the concept is enough.
Since you said you're okay with a concept and didn't include your current code, I'll provide a concept answer. :)
One option:
1. Create variables to store the current and maximum HP and mana (currentHp, maximumHp, currentMana, maximumMana).
2. In your state's update, check whether the current values of both HP and mana are less than their maximums. If they are, move on to step 3; otherwise, check whether your signal has been triggered. This could be as simple as checking whether a variable has been set.
3. Since neither the HP nor the mana value is at its maximum, increase the current HP and mana values by whatever increments are needed.
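If it helps, here is a minimal sketch of that update logic. The concept is framework-agnostic, so it is shown in C# with illustrative names and rates; in Phaser, Update would be your state's update function:

using System;

class ProgressBars
{
    // Illustrative values; a real game would configure these.
    double currentHp = 0, maximumHp = 100;
    double currentMana = 0, maximumMana = 60;
    bool signalTriggered = false;

    // Called once per frame.
    public void Update(double hpPerTick, double manaPerTick)
    {
        if (currentHp < maximumHp && currentMana < maximumMana)
        {
            // Step 3: both bars still growing, each at its own rate.
            currentHp = Math.Min(currentHp + hpPerTick, maximumHp);
            currentMana = Math.Min(currentMana + manaPerTick, maximumMana);
        }
        else if (signalTriggered)
        {
            // On the signal: reset only the bar that filled up;
            // the other keeps its progress and resumes growing.
            if (currentHp >= maximumHp) currentHp = 0;
            if (currentMana >= maximumMana) currentMana = 0;
            signalTriggered = false;
        }
        // Otherwise one bar is full and we simply wait for the signal.
    }
}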

Explain the difference in how MRTG measures incoming data

Everyone knows that MRTG needs at least one value to be passed on its input.
In its per-target options, MRTG has 'gauge', 'absolute' and a default (no option) behaviour for what to do with incoming data, i.e. how to count it.
Let's look at an elementary yet popular example:
We pass cumulative data from network interface statistics of 'how many packets were received by the interface'.
We take it from '/proc/net/dev' or look at the 'ifconfig' output for a certain network interface. The number of received bytes increases every time; it's cumulative.
So as I imagine it, there could be two possible types of statistics:
1. How fast the value changes over a time interval. In other words: activity.
2. A simple, as-is growing graph that just plots every new value each minute (or any other time interval).
The first graph will be jumpy (activity). The second will just keep growing.
I have read rrdtool's and MRTG's docs twice and still can't understand which of the options mentioned above counts what.
I suppose (I am not sure) that 'gauge' draws values as-is, without any differentiation calculations (good for measuring how much memory or CPU is used every 5 minutes), while the default and 'absolute' behaviours try to calculate the rate between neighbouring measurements. But what's the difference between the last two?
Can you explain in a simple manner which behaviour stands behind which of the three possible options?
Thanks in advance.
MRTG assumes that everything is being measured as a rate (even if it isn't a rate).
Type 'gauge' assumes that you have already calculated the rate; thus, the provided value is stored as-is (after Data Normalisation). This is appropriate for things like CPU usage.
Type 'absolute' assumes the value passed is the count since the last update. Thus, the value is divided by the number of seconds since the last update to get a rate in thingies per second. This is rarely used, and only for certain unusual data sources that reset their value on being read - eg, a script that counts the number of lines in a log file, then truncates the log file.
Type 'counter' (the default) assumes the value passed is a constantly growing count, which possibly wraps around at 32 or 64 bits. The difference between the value and its previous value is divided by the number of seconds since the last update to get a rate in thingies per second. If it sees the value decrease, it will assume a counter wraparound at 32 or 64 bits. This is appropriate for something like network traffic counters, which is why it is the default behaviour (MRTG was originally written for network traffic graphs).
Type 'derive' is like 'counter', but will allow the counter to decrease (resulting in a negative rate). This is not possible directly in MRTG but you can manually create the necessary RRD if you want.
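A rough sketch of the arithmetic behind the three main types (illustrative C# only; real MRTG/RRDtool additionally performs the Data Normalisation described below and more careful wrap detection):

using System;

static class MrtgMath // illustrative helper, not MRTG's actual code
{
    // value: the reading MRTG just received; seconds: time since the last update.
    public static double ToStoredRate(string type, double value, double previousValue, double seconds)
    {
        switch (type)
        {
            case "gauge":    // already a rate: store as-is
                return value;
            case "absolute": // count since the last read: divide by elapsed time
                return value / seconds;
            case "counter":  // ever-growing count: difference, then divide
                double delta = value - previousValue;
                if (delta < 0)
                    delta += 4294967296.0; // value decreased: assume a 32-bit wraparound
                return delta / seconds;
            default:
                throw new ArgumentException("unknown type: " + type);
        }
    }
}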
All types subsequently perform Data Normalisation to adjust the timestamp to a multiple of the Interval. This will be more noticeable for Gauge types where the value is small than for counter types where the value is large.
For information on this, see Alex van der Bogaerdt's excellent tutorial

XNA Xbox frame drops when GC kicks in

I'm developing an app (XNA game) for the Xbox, which is a pretty simple app. The start page contains tiles with moving GIF images. Those GIFs are actually all PNG images, which get loaded once by every tile and put in an array. Then, using a defined delay, these images are played (using a counter which increases every time the delay passes).
This all works well, however, I noticed some small lag every x seconds in the movement of the GIF images. I then started to add some benchmarking stuff:
http://gyazo.com/f5fe0da3ff81bd45c0c52d963feb91d8
As you can see, the FPS is pretty low for such a simple program (this is in debug; when running the app on the Xbox itself, I get an average of 62 fps).
2 important settings:
Graphics.SynchronizeWithVerticalRetrace = false;
IsFixedTimeStep = false;
Changing IsFixedTimeStep to true increases the lag. The settings tile has wheels which rotate, and you can see the wheels jump back a little every x seconds. The same goes for SynchronizeWithVerticalRetrace: enabling it also increases the lag.
I noticed a connection between the lag and the moments the garbage collector kicks in: every time it runs, there is a lag...
Don't mind the MAX HMU (heap memory usage), as it takes the value from the start; the average is more realistic.
Here is another screenshot from the performance monitor; however, I don't understand much of this tool, as it's my first time using it... Hope it helps:
http://gyazo.com/f70a3d400657ac61e6e9f2caaaf17587
After a little research I found the culprit.
I have custom components that all derive from GameComponent and get added to the Components list of the main Game class.
This was one of two major problems: it caused everything to be updated, even things that didn't need an update. (The Draw method was the only one that kept the page state in mind and only drew when needed.)
I fixed this by using different "screens" (or pages, as I called them), which are now the only components that derive from GameComponent.
Then I only update the page which is active, and the custom components on that page also get updated. Problem fixed.
The second big problem is the following:
I made a class which helps me position stuff on the screen, relatively that is, with percentages: parent containers, aligns & v-aligns, etc.
That class had properties for sizes & vectors, but instead of saving the calculated value in a backing field, I recalculated it every time I accessed a property. Calculating complex stuff like that follows references (to parent & child containers, for example), which made it very hard for the CLR, because it had a lot of work to do.
I have now rebuilt the whole positioning class into a fully optimized one, with flags that trigger recalculation only when necessary, and instead of drops of 20 fps, I now get an average of 170+ fps!
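A minimal sketch of that caching pattern (the names and the parent fallback are illustrative, not the actual class):

class PositionedElement
{
    PositionedElement parent;   // null for the root container
    float widthPercent = 1f;    // relative width, e.g. 0.5f = 50% of the parent
    float cachedWidth;          // last calculated value
    bool widthDirty = true;     // recalculate only when this is set

    public float Width
    {
        get
        {
            if (widthDirty)
            {
                // Walk the parent chain only when something actually changed,
                // instead of on every property access.
                float parentWidth = parent != null ? parent.Width : 1280f; // screen width fallback
                cachedWidth = parentWidth * widthPercent;
                widthDirty = false;
            }
            return cachedWidth;
        }
    }

    public void SetWidthPercent(float percent)
    {
        widthPercent = percent;
        widthDirty = true; // invalidate; a real class would also invalidate child containers
    }
}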
