Python script skyrockets size of pagefile.sys - python-3.x

I wrote a Python script that sometimes crashes with a Memory Allocation Error. I noticed that while the script runs, the pagefile.sys of my Windows 10 64-bit system skyrockets and exceeds the free memory.
My current solution is to run the script in steps, so that every time the script runs through, the pagefile empties.
I would like the script to run through all at once, though.
Moving the pagefile to another drive is not an option, unfortunately, because I only have this one drive and moving the pagefile to an external drive does not seem to work.
During my research I came across the gc module, but it does not help:
import gc
and after every iteration I use
gc.collect()
Am I using it wrong or is there another (python-based!) option?
[Edit:]
The script is very basic: it iterates over image files (using Pillow), checks each image's width, height and resolution, and calculates the dimensions in cm.
If height > width, the image is rotated 90° counterclockwise.
The images are meant to be enlarged or shrunk to A3 size (42 x 29.7 cm), so I use the width/height ratio to check whether I can enlarge the width to 42 cm while the height stays below 29.7 cm; if the height would end up above 29.7 cm, I enlarge the height to 29.7 cm instead.
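For example (numbers purely for illustration): a 6000 x 4000 px image at 300 dpi measures 50.8 x 33.9 cm; scaling the width down to 42 cm scales the height to roughly 28 cm, which stays below 29.7 cm, so it counts as a width fit.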
For the moment, I still do the actual enlargement/shrinking in Photoshop. Depending on whether it is a width or a height enlargement, the file is moved to the folder for that type.
Anyway, the memory explosion happens in the loop that only reads the image dimensions.
For that I use
with Image.open(imgOri) as pic:
    widthPX = pic.size[0]
    heightPX = pic.size[1]
    resolution = pic.info["dpi"][0]
    widthCM = float(widthPX) / resolution * 2.54
    heightCM = float(heightPX) / resolution * 2.54
I also calculate whether the shrinking would be too strong; in that case, the image is divided in half and re-evaluated.
Even though it should be unnecessary, I also added pic.close to the with Image.open() statement, because I thought Python might be keeping the image files open, but that didn't help.
Once the iteration has finished, pagefile.sys goes back to its original size, so whenever the error occurs, I take some files out and process them in batches.

Related

AVAssetWriter getting raw bytes makes corrupt videos on device (works on sim)

So my goal is to append CVPixelBuffers to my AVAssetWriter / AVAssetWriterInputPixelBufferAdaptor at very high speed. My previous solution used CGContextDrawImage, but it is very slow (0.1 s per draw). The reason seems to be color matching and conversion, but that's another question, I think.
My current solution is trying to read the bytes of the image directly to skip the draw call. I do this:
CGImageRef cgImageRef = [image CGImage];
CGImageRetain(cgImageRef);
CVPixelBufferRef pixelBuffer = NULL;
CGDataProviderRef dataProvider = CGImageGetDataProvider(cgImageRef);
CGDataProviderRetain(dataProvider);
CFDataRef da = CGDataProviderCopyData(dataProvider);
CVPixelBufferCreateWithBytes(NULL,
                             CGImageGetWidth(cgImageRef),
                             CGImageGetHeight(cgImageRef),
                             kCVPixelFormatType_32BGRA,
                             (void*)CFDataGetBytePtr(da),
                             CGImageGetBytesPerRow(cgImageRef),
                             NULL,
                             0,
                             NULL,
                             &pixelBuffer);
[writerAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentTime];
-- releases here --
This works fine on my simulator and inside an app. But when I run the code inside the SpringBoard process, the output comes out as shown in the images below. Running it outside the sandbox is a requirement; it is meant for jailbroken devices.
I have tried playing around with, e.g., different pixel formats, but it mostly comes out as differently corrupted images.
The proper image/video file looks fine:
But this is what I get in the broken state:
Answering my own question, as I think I have the answer(s). The resolution difference was a simple code error: I was not using the device bounds for the latter ones.
As for the color issues: in short, the CGImages I got when running outside of the sandbox used more bytes per pixel (8 bytes), while the images I got when running inside the sandbox used 4 bytes. So basically I was writing the wrong data into the buffer.
So, instead of simply slapping all of the bytes from the larger image into the smaller buffer, I loop through the pixel buffer row by row and byte by byte, picking the RGBA values for each pixel. I essentially had to skip every other byte of the source image to get the right data into the right place within the buffer.
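A minimal C-style sketch of that copy loop, assuming a source with 8 bytes per pixel (16 bits per channel) and a 4-bytes-per-pixel BGRA destination; the pointers, strides and channel order here are illustrative, not the original code:
#include <stdint.h>
#include <stddef.h>

/* Copy an 8-bytes-per-pixel source into a 4-bytes-per-pixel destination by
   keeping one byte of each 16-bit channel (which byte depends on the
   source's endianness). */
static void copy_wide_to_bgra(const uint8_t *src, size_t srcBytesPerRow,
                              uint8_t *dst, size_t dstBytesPerRow,
                              size_t width, size_t height)
{
    for (size_t y = 0; y < height; y++) {
        const uint8_t *s = src + y * srcBytesPerRow;
        uint8_t *d = dst + y * dstBytesPerRow;
        for (size_t x = 0; x < width; x++) {
            /* Skip every other byte of the source pixel. */
            d[x * 4 + 0] = s[x * 8 + 0];
            d[x * 4 + 1] = s[x * 8 + 2];
            d[x * 4 + 2] = s[x * 8 + 4];
            d[x * 4 + 3] = s[x * 8 + 6];
        }
    }
}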

cairo surface flushes only 1 fps

I have constructed a cairo (v1.12.16) image surface with:
surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32, size.width, size.height);
and, aiming for 60 fps, cleared it, drew stuff and flushed it with:
cairo_surface_flush(surface);
then, got the resulting canvas with:
unsigned char * data = cairo_image_surface_get_data(surface);
but the resulting data variable was only modified (approximately) every second, not 60 times a second. I got the same (unexpected) result even when using cairo's quartz backend... Are there any flush/refresh rate settings in cairo that I am not (yet) aware of?
Edit: I am just trying to draw some filled (random and/or calculated) rectangles; I tested 100 to 10K rects per frame. All related code runs in the same (display?) thread. I am not caching the 'data' variable. I even modified one corner of it to flicker, and I could see the flicker at 60 fps (for 100 rects) and 2-3 fps (for 10K rects), meaning the 'data' variable returned is not refreshed!? In a different project using cairo's quartz backend, I got the same 1 fps result!??
Edit2: The culprit turned out to be the time() function; when used in srand(time(NULL)), it was producing the same random numbers within the same second, so I used srand(std::clock()) instead. Thanks for the quick reply (and it still answers my question!)..
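A standalone C sketch of that pitfall (not the asker's code): two reseeds with time(NULL) within the same second reuse the same seed, while clock() has a much finer resolution:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand((unsigned)time(NULL));
    printf("frame 1: %d %d\n", rand(), rand());

    srand((unsigned)time(NULL));                /* same second -> same seed   */
    printf("frame 2: %d %d\n", rand(), rand()); /* repeats the same numbers   */

    srand((unsigned)clock());                   /* finer-grained seed         */
    printf("frame 3: %d %d\n", rand(), rand());
    return 0;
}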
No, there are no such flush/refresh rate settings. Cairo draws everything you tell it to draw and then simply returns control.
I have two ideas:
Either cairo is drawing fast enough and something else is slowing things down (e.g. you are copying the result of the drawing somewhere). You should measure the time that elapses between when you begin drawing and your call to cairo_surface_flush(); see the timing sketch after these two ideas.
You are drawing something really, really complex and cairo really does need a second to render this (However, I have no idea how one could accidentally cause such a complex rendering).
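A minimal sketch of that measurement (assuming a POSIX clock; the actual rectangle drawing is elided):
#include <stdio.h>
#include <time.h>
#include <cairo.h>

void measure_frame(cairo_surface_t *surface)
{
    cairo_t *cr = cairo_create(surface);
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* ... clear the surface and draw the rectangles here ... */

    cairo_surface_flush(surface);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1000.0 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("draw + flush took %.3f ms\n", ms);

    cairo_destroy(cr);
}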

Return value for MeasureOverride, if constraint.Width is Infinity and all available space should be used?

I am writing a custom control which inherits from FrameworkElement. I use a DrawingVisual to render to the screen, so I have to calculate the optimal size of the Visuals I draw myself. If HorizontalAlignment is set to Stretch, I would like to use all available space. Only WPF doesn't tell me how much space is really available :-(
In MeasureOverride, the constraint.Width is Infinity. I need to return a desired size which is less than Infinity, otherwise I get the exception "InvalidOperationException: Layout measurement override of element 'xyz' should not return PositiveInfinity as its DesiredSize, even if Infinity is passed in as available size."
I tried to return 0 in the hope that Arrange would pass all the available space this time, since in MeasureOverride it said Infinity width is available. But ArrangeOverride says the arrangeBounds.Width is 0.
I tried to return an arbitrary width (well, actually I returned the monitor width from MeasureOverride), but then ArrangeOverride tells me I can use the complete monitor, which is, of course, not true :-(
So which width do I return from MeasureOverride to get the maximum space that is really available?
Update 23.1.23
My control is in a library and has no idea what container is holding it. I inherit from that control to make many different controls, for example a line graph that can contain millions of measurements. So if there is unlimited space, the graph gets millions of pixels wide. If the space is limited, let's say to 1000 pixels, then the graph displays in one pixel the average of thousands of measurements.
After using WPF for nearly 20 years now, I think the proper solution is to raise an exception when the available width is Infinity and my control cannot deal with that. The exception demands that the control is put into a container which gives limited width to its children.
For example, a ScrollViewer gives unlimited space to its children. But since my line graph, or any other control I design that cannot deal with unlimited space, gets unreasonably big, it is better to alert the developer using the control immediately that it needs to be hosted in a container like a Grid, where the Grid tells the child the exact size available. Of course, the Grid, too, might give unlimited space when the column width is set to Auto, but in that case my control should raise an exception, so that the column width can be changed to a number of pixels or *.

Xna Xbox framedrops when GC kicks in

I'm developing an app (an XNA game) for the Xbox, which is a pretty simple app. The start page contains tiles with moving GIF images. Those GIF images are actually all PNG images, which get loaded once by each tile and put in an array. Then, using a defined delay, these images are played back (using a counter which increases every time the delay passes).
This all works well, however, I noticed some small lag every x seconds in the movement of the GIF images. I then started to add some benchmarking stuff:
http://gyazo.com/f5fe0da3ff81bd45c0c52d963feb91d8
As you can see, the FPS is pretty low for such a simple program (this is in debug; when running the app on the Xbox itself, I get an average of 62 fps).
2 important settings:
Graphics.SynchronizeWithVerticalRetrace = false;
IsFixedTimeStep = false;
Changing IsFixedTimeStep to true increases the lag. The settings tile has wheels which rotate, and you can see the wheels go back a little every x seconds. The same goes for SynchronizeWithVerticalRetrace: enabling it also increases the lag.
I noticed a connection between the lag and the moments the garbage collector kicks in: every time it kicks in, there is a lag...
Don't mind the MAX HMU (heap memory usage), as it reflects the amount at the start; the average is more realistic.
Here is another screenshot from the performance monitor; however, I don't understand much of this tool, as it's the first time I'm using it... Hope it helps:
http://gyazo.com/f70a3d400657ac61e6e9f2caaaf17587
After a little research I found the culprit.
I have custom components that all derive from GameComponent and get added to the Components list of the main Game class.
This was one of the two major problems: it caused everything to be updated, even things that didn't need an update. (The Draw method was the only one that kept the page state in mind and only drew when needed.)
I fixed this by using different "screens" (or pages, as I called them), which are now the only components that derive from GameComponent.
Then I only update the page which is active, and the custom components on that page also get updated. Problem fixed.
The second big problem is the following:
I made a class which helps me position stuff on the screen, relatively that is, with percentages and things like that: parent containers, aligns & v-aligns, etc.
That class had properties for sizes & vectors, but instead of saving the calculated value in a backing field, I recalculated it every time I accessed a property. Calculating complex stuff like that uses references (to parent & child containers, for example), which made it very hard for the CLR, because it had a lot of work to do.
I have now rebuilt the whole positioning class into a fully functional, optimized class with flags for recalculating only when necessary, and instead of drops of 20 fps, I now get an average of 170+ fps!
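The caching pattern, shown as a language-neutral sketch in C (the original class is C#/XNA; the names and fields here are illustrative): compute a value only when an input has changed, otherwise return the cached result.
#include <stdbool.h>

typedef struct LayoutBox {
    struct LayoutBox *parent;  /* inputs used by the expensive calculation  */
    float percentWidth;        /* width as a fraction of the parent's width */
    float cachedWidth;         /* last computed absolute width              */
    bool dirty;                /* set whenever an input changes             */
} LayoutBox;

float GetWidth(LayoutBox *box);   /* forward declaration */

static float ComputeWidth(LayoutBox *box)
{
    /* Stand-in for the expensive walk over parent/child containers. */
    float parentWidth = box->parent ? GetWidth(box->parent) : 1280.0f;
    return parentWidth * box->percentWidth;
}

float GetWidth(LayoutBox *box)
{
    if (box->dirty) {              /* recalculate only when flagged */
        box->cachedWidth = ComputeWidth(box);
        box->dirty = false;
    }
    return box->cachedWidth;       /* otherwise reuse the cached value */
}

void SetPercentWidth(LayoutBox *box, float p)
{
    box->percentWidth = p;
    box->dirty = true;  /* invalidate now, recompute lazily on next access */
}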

How to estimate size of a Jpeg file before saving it with a certain quality factor?

I've got a 24-bit bitmap, and I am writing an application in C++ with MFC.
I am using libjpeg to encode the bitmap into a 24-bit JPEG file.
The bitmap's width is M and its height is N.
How can I estimate the JPEG file size before saving it with a certain quality factor (0-100)?
Is it possible to do this?
For example: I want to implement a slider which represents saving the current bitmap with a certain quality factor. A label beside it shows the approximate file size the bitmap would have when encoded with that quality factor. When the user moves the slider, he gets an approximate preview of the file size of the to-be-saved JPEG file.
In libjpeg, you can write a custom destination manager that doesn't actually call fwrite, but just counts the number of bytes written.
Start with the stdio destination manager in jdatadst.c, and have a look at the documentation in libjpeg.doc.
Your init_destination and term_destination methods will be very minimal (just alloc/dealloc), and your empty_output_buffer method will do the actual counting. Once you have completed the JPEG writing, you'll have to read the count value out of your custom structure. Make sure you do this before term_destination is called.
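A rough sketch of such a counting destination manager, modeled on jdatadst.c (the setup of the jpeg_compress_struct, quality and scanlines is assumed to exist elsewhere; the names here are illustrative):
#include <stdio.h>
#include <stddef.h>
#include "jpeglib.h"

#define COUNT_BUF_SIZE 4096

typedef struct {
    struct jpeg_destination_mgr pub;  /* public fields required by libjpeg */
    JOCTET buffer[COUNT_BUF_SIZE];    /* scratch buffer, never written out */
    size_t total_bytes;               /* running byte count                */
} counting_dest_mgr;

static void count_init_destination(j_compress_ptr cinfo)
{
    counting_dest_mgr *dest = (counting_dest_mgr *)cinfo->dest;
    dest->total_bytes = 0;
    dest->pub.next_output_byte = dest->buffer;
    dest->pub.free_in_buffer = COUNT_BUF_SIZE;
}

static boolean count_empty_output_buffer(j_compress_ptr cinfo)
{
    counting_dest_mgr *dest = (counting_dest_mgr *)cinfo->dest;
    dest->total_bytes += COUNT_BUF_SIZE;       /* a full buffer was "written" */
    dest->pub.next_output_byte = dest->buffer;
    dest->pub.free_in_buffer = COUNT_BUF_SIZE;
    return TRUE;
}

static void count_term_destination(j_compress_ptr cinfo)
{
    counting_dest_mgr *dest = (counting_dest_mgr *)cinfo->dest;
    dest->total_bytes += COUNT_BUF_SIZE - dest->pub.free_in_buffer; /* tail */
}

/* Attach the counting destination to an existing compress object. */
void jpeg_counting_dest(j_compress_ptr cinfo, counting_dest_mgr *dest)
{
    dest->pub.init_destination = count_init_destination;
    dest->pub.empty_output_buffer = count_empty_output_buffer;
    dest->pub.term_destination = count_term_destination;
    cinfo->dest = &dest->pub;
}
In this sketch the remaining partial buffer is folded in by term_destination, so dest->total_bytes holds the estimated size once jpeg_finish_compress() returns.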
It also depends on the compression you are using and, more specifically, on how many bits per color pixel you end up with.
The quality factor won't help you here, as a quality factor of 100 can range (in most cases) from 6 bits per color pixel to ~10 bits per color pixel, maybe even more (not sure).
So once you know that, it's really straightforward from there..
If you know the subsampling factor, this can be estimated. That information comes from the start-of-frame marker.
The bit depth is in the same marker, right before the width and height.
If you let
int subSampleFactorH = 2, subSampleFactorV = 1;
Then
int totalImageBytes = (Image.Width / subSampleFactorH) * (Image.Height / subSampleFactorV);
You can then optionally add more bytes to account for container data as well.
int totalBytes = totalImageBytes + someConstantOverhead;
