Is there a way to dynamically determine the vhdSize flag? - azure

I am using the MSIX manager tool to convert a *.msix (an application installer) to a *.vhdx so that it can be mounted in an Azure virtual machine. One of the flags the tool requires is -vhdSize, which is specified in megabytes. This has proven problematic because I have to guess what the size should be based on the MSIX, and I have run into numerous creation errors because the vhdSize I chose was too small.
I could set it to an arbitrarily high value to get around these failures, but that is not ideal. Guessing the correct size is an imprecise science and a chore to do repeatedly.
Is there a way to have the tool set the vhdSize dynamically, or am I stuck guessing a value that is large enough to accommodate the files but not so large that it wastes disk space? Or is there a better way to create a *.vhdx file?
https://techcommunity.microsoft.com/t5/windows-virtual-desktop/simplify-msix-image-creation-with-the-msixmgr-tool/m-p/2118585

The MSIX Hero app can select a size for you: it automatically checks how big the uncompressed files are, adds an extra buffer for safety (currently double the original size), and rounds the result up to the next 10 MB. Reference: https://msixhero.net/documentation/creating-vhd-for-msix-app-attach/
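If you want to approximate that heuristic yourself before calling msixmgr, a rough shell sketch is below. It assumes the .msix is a ZIP container that unzip can list; the package name and the 2x buffer are placeholders, not anything the tool itself provides.

# estimate a -vhdSize value (in MB) from the uncompressed size of the package
msix=MyApp.msix
bytes=$(unzip -l "$msix" | awk 'END {print $1}')        # total uncompressed bytes from unzip's summary line
vhdsize=$(( ((bytes / 1048576) * 2 / 10 + 1) * 10 ))    # double it, then round up to the next 10 MB
echo "suggested -vhdSize: ${vhdsize}"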

Related

About managing file system space

Space Issues in a filesystem on Linux
Let's call it FILESYSTEM1.
Normally, FILESYSTEM1 is only about 40-50% used.
Then clients run some reports or queries, these produce massive files of about 4-5 GB in size, and this instantly fills up FILESYSTEM1.
We have some cleanup scripts in place, but they never catch this because it happens in a matter of minutes, while the cleanup scripts usually clean data that is more than 5-7 days old.
Another set of scripts is also in place; these report when free space in a filesystem drops below a certain threshold.
We thought of possible solutions to detect and act on this proactively:
Increase the FILESYSTEM1 file system to double its size.
Set the threshold in the alert scripts for this filesystem to alert when it is 50% full.
This will hopefully give us enough time to catch the problem and act before clients report issues due to FILESYSTEM1 being full.
Even though this solution works, it does not seem to be the best way to deal with the situation.
Any suggestions / comments / solutions are welcome.
Thanks.
It sounds like what you've found is that simple threshold-based monitoring doesn't work well for the usage patterns you're dealing with. I'd suggest something that pairs high-frequency sampling (say, once a minute) with a monitoring tool that can do some kind of regression on your data to predict when space will run out.
In addition to knowing when you've already run out of space, you also need to know whether you're about to run out of space. Several tools can do this, or you can write your own. One existing tool is Zabbix, which has predictive trigger functions that can be used to alert when file system usage seems likely to cross a threshold within a certain period of time. This may be useful in reacting to rapid changes that, left unchecked, would fill the file system.
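If you would rather roll something yourself before adopting a full monitoring stack, a minimal sketch of the idea follows: sample the filesystem once a minute (e.g. from cron) and extrapolate linearly. The mount point, log path, sample count and 30-minute window are all assumptions to adjust.

#!/bin/sh
# sample usage once a minute and warn if a linear extrapolation of the last
# few samples says the filesystem will be full within 30 minutes
FS=/data
LOG=/var/tmp/fs1_usage.log

echo "$(date +%s) $(df -P "$FS" | awk 'NR==2 {print $3, $4}')" >> "$LOG"   # epoch, used KB, free KB

tail -n 10 "$LOG" | awk -v fs="$FS" '
  { t[NR] = $1; used[NR] = $2; free[NR] = $3; n = NR }
  END {
    if (n < 2) exit
    rate = (used[n] - used[1]) / (t[n] - t[1])        # growth in KB per second over the recent samples
    if (rate > 0 && free[n] / rate < 1800)            # projected to fill in under 30 minutes
      printf "WARNING: %s is filling fast, roughly %.0f minutes left\n", fs, free[n] / rate / 60
  }'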

Compare filesystem space usage over time

Is there a good, graphical way to represent disk usage changes in a linux/unix filesystem over time?
Let me elaborate: there are several good ways to represent disk usage in a filesystem. I'm not interested in summary statistics such as total space used (as given by du(1)), but in more advanced interactive/visualization tools such as ncdu, gdmap, filelight or baobab, which can give me an idea of where the space is being used.
From a technical perspective, I think the best approach is squarified treemaps (as available in gdmap), since they make better use of the available visual space. The circular approach used by filelight, for instance, cannot represent huge hierarchies efficiently, and it's unclear how a human is supposed to account for the increasing area of the outer rings. It looks nice, but that's about it.
Treemaps are perfect for a snapshot of current disk usage in the filesystem, but I'd like something similar to see how disk usage has been evolving over time.
My current solution is very simple: I dump the filesystem usage state with "ncdu -o" at different points in time, and then compare the dumps side by side using two ncdu instances. It's very inefficient, but it does the job. I'd like something more visual, though.
All the relevant information can be dumped using:
find [dir] -printf "%P\t%s\n"
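(As an aside, two such dumps taken at different times can at least be reduced to per-path size deltas with standard tools; a rough sketch, with placeholder snapshot names:)

sort snapshot_old.tsv > old.tsv
sort snapshot_new.tsv > new.tsv
join -t "$(printf '\t')" -a1 -a2 -e 0 -o 0,1.2,2.2 old.tsv new.tsv |
  awk -F '\t' '{ d = $3 - $2; if (d != 0) printf "%+d\t%s\n", d, $1 }' |
  sort -n | tail -20          # the 20 paths that grew the most between the snapshots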
I did a crappy hack to load the output of that find command into gdmap, so I can use two gdmap instances instead. Still not optimal, though, since a treemap always fits the total allocated space into the same rectangle; you cannot really tell whether the same area corresponds to more or less space, and if two big directories grow proportionally, the visualization does not change at all.
I need something better than that. Obviously, I cannot plot the cumulative directory sizes in a simple line plot, as I would have too many directories.
I'd like something similar to a treemap, where maybe the color of each square represents size increase/decrease using some colormap. However, since a treemap shows individual files rather than directories, it's not obvious how to color-map a directory whose allocated space has been growing or shrinking because of new or removed files.
What kind of visualization techniques could be used to see the evolution of allocated space over time, which take the whole underlying tree into account?
To elaborate even more: in a squarified treemap the whole allocated space is proportionally divided by file size, and each directory logically clusters the allocated space within it. As such, we don't "see" directories; we see the proportional space taken by their contents.
How could we extend and/or improve the visualization in order to see how the allocated space has moved to different areas of the treemap?
You can use Cacti for this.
You need to install an SNMP daemon on your machine, install Cacti (free software) locally or on any other PC, and then monitor your Linux machine.
Example Cacti graph: http://blog.securactive.net/wp-content/uploads/2012/12/cacti_performance_vision1.png
You can monitor network interfaces, space on any partition, and lots of other parameters of your Linux OS.
apt-get install cacti
vim /etc/snmp/snmpd.conf
add this around line 42:
view systemonly included .1.3.6.1
close the file and restart the snmpd daemon
go to the Cacti config and try to discover your Linux machine.
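To verify that snmpd is actually exposing the storage data Cacti will graph, something like the following can be used (the service command and the public community string are assumptions; match them to your snmpd.conf):

sudo systemctl restart snmpd                               # or: service snmpd restart
snmpwalk -v2c -c public localhost 1.3.6.1.2.1.25.2.3.1     # HOST-RESOURCES hrStorage table (partition usage)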

Bash PATH length restrictions

Inside a particular quarantine is all of the stuff one needs to run an application (bin, share, lib, etc.). Ideally, the quarantine has no leaks, which means it's not relying on any code outside of itself on the system. A quarantine can be defined as a set of executables (and some environment settings needed to make them run).
I think it will be beneficial to separate the built packages enough such that upgrading to a newer version of the quarantine won't require rebuilding the whole thing. I'll be able to update just a few packages, and then the new quarantine can use some of old parts and some of the new parts.
One issue I'm wondering about is the environment variables I'll be setting up to use a particular quarantine.
Is there a hard limit on how big PATH can be? (either in number of characters, or in the number of directories it contains) Does path length affect performance?
There's a hard limit. It's something like 32MB.
Yes, you can easily get it long enough to affect performance. The number of entries is the primary factor, followed by the number of / characters (though this should not show itself unless the path depth exceeds some outrageous number like 30).
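If you want to get a feel for the lookup cost on your own system, a rough experiment along these lines can be run (the entry count and directory names are arbitrary):

# time command lookup with an artificially long PATH prepended
longpath=$(printf '/tmp/nonexistent%d:' $(seq 1 5000))
time env PATH="${longpath}${PATH}" bash -c 'hash -r; type ls > /dev/null'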

JavaME - LWUIT images eat up all the memory

I'm writing a MIDlet using LWUIT, and images seem to eat up incredible amounts of memory. All the images I use are PNGs and are packed inside the JAR file. I load them using the standard Image.createImage(URL) method. The application has a number of forms and each has a couple of labels and buttons, but I am fairly certain that only the active form is kept in memory (I know it isn't very trustworthy, but Runtime.freeMemory() seems to confirm this).
The application worked well at 240x320 resolution, but moving it to 480x640 and using appropriately larger images for the UI started causing out of memory errors. What the application does, among other things, is download remote images. It seems to work fine until it gets to this point: after downloading a couple of PNGs and returning to the main menu, the out of memory error appears. Naturally, I looked into how much memory the main menu uses and it was pretty shocking. It's just two labels with images and four buttons. Each button has three images used for style.setIcon, setPressedIcon and setRolloverIcon. The images range in size from 15 to 25 KB, yet after removing two of the three images used for every button (8 images in total), Runtime.freeMemory() showed a stunning 1 MB decrease in memory usage.
The way I see it, I either have a whole lot of memory leaks (which I don't think I do, but memory leaks aren't exactly known to be easily tracked down), I am doing something terribly wrong with image handling or there's really no problem involved and I just need to scale down.
If anyone has any insight to offer, I would greatly appreciate it.
Mobile devices are usually very low on memory, so you have to use some tricks to conserve it.
We had the same problem in a project of ours and we solved it like this.
For downloaded images:
Make a cache where you put your images. If you need an image, check whether it is in the cache map; if it isn't, download it and put it there; if it is, use it. If memory is full, remove the oldest image in the cache map and try again.
For other resource images:
Keep them in memory only for as long as you can see them; if you can't see them, break the reference and the GC will do the cleanup for you.
Hope this helps.
There are a few things that might be happening here:
You might have seen the memory used before garbage collection, which doesn't correspond to the actual memory used by your app.
Some third party code you are running might be pooling some internal data structures to minimize allocation. While pooling is a viable strategy, it can sometimes look like a leak. In that case, check whether there is an API to 'close' or 'dispose' of the objects you don't need.
Finally, you might really have a leak. In this case you need to get more details on what's going on in the emulator VM (though keep in mind that it is not necessarily the same as the phone VM).
Make sure that your emulator uses JRE 1.6 as the backing JVM. If you need it to use the runtime libraries from an earlier JDK, use -Xbootclasspath:<path-to-rt.jar>.
Then, once your application gets into the state you want to examine, run %JAVA_HOME%\bin\jmap -dump:format=b,file=heap.bin <pid> (if you don't know the id of your process, use jps).
Now you've got a dump of the JVM heap. You can analyze it with jhat (comes with the JDK, a bit difficult to use) or some third party profilers (my preference is YourKit - it's commercial, but they have time-limited eval licenses)
I had a similar problem with LWUIT at Java DTV. Did you try flushing the images when you don't need them anymore (getAWTImage().flush())?
Use EncodedImage and resource files when possible (resource files use EncodedImage by default; read the javadoc for details). The other comments are also correct that you need to actually observe the amount of memory; even high-RAM Android/iOS devices run out of memory pretty fast with multiple images.
Avoid scaling, which effectively eliminates the EncodedImage.
Have you considered that loading the same image from the JAR many times may be creating many separate image objects (with identical contents) instead of reusing one instance per image? This is my first guess.

How might one go about implementing a disk fragmenter?

I have a few ideas I would like to try out in the disk defragmentation arena. I came to the conclusion that, as a precursor to the implementation, it would be useful to be able to put a disk into a fragmented state. This seems to me to be a state that is more difficult to achieve than a defragmented one. I would assume that the commercial defragmenter companies have probably solved this issue.
So, my question:
How might one go about implementing a fragmenter? What makes sense in the context that it would be used, to test a defragmenter?
Maybe instead of fragmenting an actual disk, you should test your defragmentation algorithm on a simulated/mock disk. Only once you're satisfied that the algorithm itself works as specified would you do the testing on actual disks using the actual disk API.
You could even take snapshots of actual fragmented disks (yours or of someone you know) and use this data as a mock model for testing.
How you can best fragment depends on the file system.
In general, concurrently open a large number of files. Opening a file will create a new directory entry but won't cause a block to be written for that file. But now go through each file in turn, writing one block. This typically will cause the next free block to be consumed, which will lead to all your files being fragmented with regard to each other.
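A rough shell sketch of that interleaving idea (the file count, block size and scratch directory are arbitrary; conv=fsync forces each block to disk so delayed allocation doesn't quietly undo the interleaving):

# grow many files one block at a time, round-robin, so their blocks interleave on disk
cd /mnt/testfs/fragdir                        # scratch directory on the filesystem under test
for i in $(seq 1 200); do : > "f$i"; done     # create all the (empty) files first
for pass in $(seq 1 50); do
  for i in $(seq 1 200); do
    dd if=/dev/urandom of="f$i" bs=4K count=1 oflag=append conv=notrunc,fsync 2>/dev/null
  done
done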
Fragmenting existing files is another matter. Basically, do the same, but do it on copies of the existing files, then delete the originals and rename the copies.
I may be oversimplifying here, but if you artificially fragment the disk, won't any tests you run only be valid for the fragmentation created by your fragmenter rather than for any real-world fragmentation? You may end up optimising for assumptions in the fragmenter tool that don't represent real-world occurrences.
Wouldn't it be easier and more accurate to take some disk images of fragmented disks? Do you have any friends or colleagues who trust you not to do anything anti-social with their data?
Fragmentation is a mathematical problem in the sense that you are trying to maximize the distance the head of the hard drive travels while performing a specific operation. So in order to effectively fragment something, you need to define that specific operation first.
