There are two pages where Inno Setup shows the required disk space: wpSelectDir and wpSelectComponents. On the wpSelectComponents page everything is shown correctly, but not on the wpSelectDir page.
What is the difference between DiskSpaceLabel and ComponentsDiskSpaceLabel? Aren't those the same?
I understand that the ComponentsDiskSpaceLabel shows the sum of all the components checked. What does DiskSpaceLabel show then?
The DiskSpaceLabel displays the minimum space needed for the application. It includes only the files that are installed unconditionally (those that do not belong to any components, tasks, etc.).
The ComponentsDiskSpaceLabel adds the files belonging to the currently selected components to that calculation.
Both calculations reflect the ExtraDiskSpaceRequired directive.
Note that Check parameters are not considered in the calculation. Neither is the DestDir parameter (so temporary files are counted too). The dontcopy flag is not considered either (as you have reported).
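As an illustration, here is a minimal [Files] section (the file and component names are made up) showing which entries each label counts:

```ini
[Files]
; Counted by DiskSpaceLabel (wpSelectDir) and by ComponentsDiskSpaceLabel:
; installed unconditionally (no Components or Tasks parameter).
Source: "Core.exe"; DestDir: "{app}"
; Added to ComponentsDiskSpaceLabel (wpSelectComponents) only while the
; "help" component is checked; never counted by DiskSpaceLabel.
Source: "Help.chm"; DestDir: "{app}"; Components: help
```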
I know this is bad, but I'm asked to adapt to a given interface, which manually performs layout transitions of render targets before submit/present calls.
So while I would usually specify, when creating the corresponding render pass, the initialLayout and finalLayout of the VkAttachmentDescription for such a render target as VK_IMAGE_LAYOUT_UNDEFINED and VK_IMAGE_LAYOUT_PRESENT_SRC_KHR respectively, and the layout of the corresponding VkAttachmentReference as VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL, I have no idea how I should set these fields now.
It seems there is no way to tell the API to perform a no-op, i.e. to preserve the current layout of the given attachment.
(BTW, is there an analogue of D3D12's D3D12_RESOURCE_STATES::D3D12_RESOURCE_STATE_UNORDERED_ACCESS?)
A "no-op" doesn't make sense. You must control the layout, and you are required to know the layout of every image you are using for any given purpose at the time you use it for that purpose. If you are working in code where someone else determines the layout, then that code needs to tell you what layout those images are in when the render pass begins (which only matters if you need the images' existing data preserved, which is rather rare) and/or what layout they need to be in after the render pass.
If you're working with an API or code structure that does not give you this information, then that is what you need to change.
I was reading GM's wiki to determine the difference between #downloadURL and #updateURL (which I couldn't). But what confused me even more is that both are discouraged:
It is unusual to specify this value. Most scripts should omit it.
I'm surprised by that, as these keys are the main way for scripts to auto-update, and I don't see why they shouldn't be used.
The wiki itself is pretty sparse and doesn't point to other reliable sources, so I have to ask here. I would also appreciate more detailed info on these keys.
Use of those keys is discouraged mainly by Greasemonkey's lead developer. Most others, including the Tampermonkey team feel no need for such a warning.
Also note that those directives are not always required for auto-updates to work.
Some reasons why he would say that it was unusual and that "most" scripts should omit it:
In almost all cases it is not needed; see how updates work and how those directives work, below.
Adding and using those directives is just one more thing the script writer must check and maintain. Why make work if it is not needed?
The update implementation and those directives have been buggy and, perhaps, not well implemented in Greasemonkey.
Tampermonkey, and other engines, implement updates, and those directives, in a slightly different manner. This means that code that works on Tampermonkey may fail on Greasemonkey.
Note that the wiki entry in question was made by Greasemonkey's lead developer (Arantius) himself, so it wasn't just wiki noise.
How updates work:
Script updates are conducted in 4 phases:
The enabled phase and/or "forced" updates.
The check phase.
The download phase.
The parse and install phase.
For this question, we are only concerned with the check and download phases. We stipulate that updates are enabled and that the updated script was valid and installed correctly.
When updating scripts, Greasemonkey (and Tampermonkey) download files twice:
The first download, controlled by the script's updateURL value, is just to check the file's #version (if any) and date -- to see if an update is available.
The second download, controlled by the script's downloadURL value, is the actual download of the new script to install.
This download will only occur if the server file has a higher #version number than the local file and/or if the server file has a later date than the local file. (Beware that there are critical differences here between script engines.)
See "Why you might use #downloadURL and #updateURL", below, for reasons why 2 file downloads are used.
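The check phase described above can be sketched in code. This is only an illustration of the idea, not either engine's actual implementation; the metadata format handling and the version comparison are simplified assumptions:

```javascript
// Pull the #version directive out of a (downloaded) metadata block.
function parseVersion(metaText) {
  const match = metaText.match(/\/\/\s*@version\s+(\S+)/);
  return match ? match[1] : null;
}

// The "check" phase: decide, using only the small metadata download,
// whether the big "download" phase is needed. Returns true when the
// server file advertises a strictly higher #version.
function updateAvailable(localMeta, serverMeta) {
  const local = parseVersion(localMeta);
  const server = parseVersion(serverMeta);
  if (local === null || server === null) return false; // no version info
  const a = local.split('.').map(Number);
  const b = server.split('.').map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] || 0, y = b[i] || 0;
    if (x !== y) return y > x; // first differing component decides
  }
  return false; // identical versions: skip the large download
}
```

Note that a dotted comparison like this treats 1.10 as newer than 1.2, which a naive string comparison would get wrong.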
How #downloadURL and #updateURL work:
#downloadURL merely overrides the default internal "download URL" location.
#updateURL merely overrides the default internal "update URL" (or check) location.
In most cases, there is no need to do this. See, below.
When you install a userscript, Greasemonkey automatically records the install location. No meta directive is needed.
By default, this is where Greasemonkey will both check for updates and download any updates.
But, if #downloadURL is specified, then Greasemonkey will both check and download from the specified location rather than the stored location.
But, if #updateURL is specified, then Greasemonkey will check (not download) from the "update" location given.
So: #updateURL overrides both #downloadURL and the default location for checking operations only.
While: #downloadURL overrides the default location for both checking and downloading (unless #updateURL is present).
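The override rules above boil down to a small precedence table, sketched here as code. Again, this is an illustration of the described behavior, not Greasemonkey's source; `installUrl` stands for the location the engine recorded at install time:

```javascript
// Resolve which URL is used for the update *check* and which for the
// actual *download*, given the script's optional directives.
function resolveUpdateUrls(meta, installUrl) {
  return {
    // #updateURL wins for checking; otherwise #downloadURL; otherwise
    // the recorded install location.
    check: meta.updateURL || meta.downloadURL || installUrl,
    // The actual download never uses #updateURL.
    download: meta.downloadURL || installUrl,
  };
}
```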
Why you might use #downloadURL and #updateURL:
First, there are 2 downloads and potentially 2 different locations mainly for speed and bandwidth reasons.
Consider a scenario where a very large userscript has thousands of users:
Those users' browsers would constantly hammer the host server checking to see if an update was available. Most of the time, one wouldn't be and the large file would be downloaded over and over again unnecessarily.
This got to be a problem for sites like the now defunct userscripts.org.
Thus a system was developed whereby a separate file was created just to hold the version (and date) information. So the server would now have veryLarge.user.js and veryLarge.meta.js.
veryLarge.meta.js would be updated (by the developer) every time the userscript was and would just contain the Metadata Block from veryLarge.user.js.
So the thousands of browsers would just repeatedly download the much smaller veryLarge.meta.js -- saving everybody time and saving the server bandwidth.
Nowadays, both Greasemonkey and Tampermonkey will automatically look for a *.meta.js file, so there is normally no need to specify one separately.
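For example, a hypothetical veryLarge.meta.js would contain nothing but the Metadata Block copied from veryLarge.user.js (all names and URLs here are made up):

```javascript
// ==UserScript==
// @name         Very Large Script
// @version      2.1.4
// @updateURL    https://example.com/scripts/veryLarge.meta.js
// @downloadURL  https://example.com/scripts/veryLarge.user.js
// ==/UserScript==
```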
So, why explicitly specify #downloadURL and/or #updateURL? Some possible reasons:
Your script can be installed multiple ways or from multiple sources (cut and paste, locally copied file, secondary server, etc.) and you only want to maintain one "master" version.
You want to track how many initial and/or upgrade downloads your script has.
#downloadURL is also a handy "self documenting" way of recording/conveying where the user got the script from.
You want the *.meta.js file on a different server than the userscript for some reason.
Possibly http versus https issues (need to dig into this some day).
You are a bad guy and you want the script to update a malicious version at some future date from a server that you control -- that is not where the script was installed from.
Some differences between Greasemonkey and Tampermonkey:
(Warning: I haven't verified all of this in a while. Subject to change anyway as Tampermonkey is constantly improving (and Greasemonkey changes a lot too).)
Tampermonkey requires a #version directive on both the current and newer file. This is how Tampermonkey determines if an update is available.
Greasemonkey will also use this method, so always include #version in scripts you might want to auto-update.
However, Greasemonkey also requires that the update file be newer. And if no version is present, Greasemonkey will just compare the dates only. Note that this has caused problems in Greasemonkey in the past and also foolishly assumes that many different machines are accurately synched with the correct date and time.
Greasemonkey will only update from https:// schemes by default, but can optionally be set to allow http:// and ftp:// schemes.
Neither engine ever allows updates from file:// schemes.
I'm writing an Inno Setup installer, which can be installed multiple times and installs third-party products as well. The third-party products are needed by every installation, so when uninstalling my app, I would run a Check function on an [UninstallRun] entry to see whether any more instances of my app are still on the system. If so, then I would not remove the third-party products (only the uninstallation of the last instance of my app would uninstall them...).
The problem is that the [UninstallRun] Check function seems to be executed and evaluated during installation, not uninstallation (as the uninstall data is built during install). This means that if I uninstall the first-installed instance of my app, it removes the third-party products (because at the time that first instance was installed, there were no other instances of my app).
Is there a way to tell Inno Setup to execute the Check function of the [UninstallRun] section at uninstall time, and only then?
If not, any ideas how to achieve the required behaviour?
There is not.
However, you can write some [Code] that will execute at uninstall time. Typically something along these lines:
procedure CurUninstallStepChanged(CurUninstallStep: TUninstallStep);
begin
  if CurUninstallStep = usUninstall then begin
    if ShouldUninstallComponentX() then begin
      UninstallComponentX();
    end;
  end;
end;
You will need to fill in the ComponentX functions yourself, of course, and you will want to add error checking etc. as appropriate.
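For illustration only, here is one possible shape for those functions, assuming each install of your app increments a registry counter and each uninstall decrements it. The registry key, value name, and uninstaller path are all assumptions, not something Inno Setup provides:

```pascal
// Hedged sketch: treat the third-party product as removable only when
// this is the last registered instance of the app.
function ShouldUninstallComponentX(): Boolean;
var
  InstanceCount: Cardinal;
begin
  if RegQueryDWordValue(HKLM, 'Software\MyCompany\MyApp',
       'InstanceCount', InstanceCount) then
    Result := (InstanceCount <= 1)
  else
    Result := True; // no record found: assume we are the last instance
end;

procedure UninstallComponentX();
var
  ResultCode: Integer;
begin
  // Run the third-party product's own uninstaller (path is an assumption).
  Exec(ExpandConstant('{pf}\ThirdParty\uninstall.exe'), '/S', '',
    SW_HIDE, ewWaitUntilTerminated, ResultCode);
end;
```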
You still need to take care when auto-uninstalling third-party products, though. Unless there is something unique to your applications about the way they were installed (e.g. a custom instance name, in the case of a database server), you might inadvertently uninstall the product while it is still in use by some other application, or while you still need it yourself.
If it's something big enough to be a "product", then it ought to have its own entry in Programs and Features -- and in that case it may be best to leave the decision to the user: either never remove it automatically, or ask the user whether they really want to remove it at the point you think it's safe to do so (i.e. when the last copy of your apps is removed).
Smaller shared library components typically adopt a different approach; instead of running a full install/uninstall program you would include the libraries directly in [Files] and use the sharedfile flag to track when they're safe to remove. (This relies on all applications doing the same thing, of course -- but this works even for applications that do not use Inno as their installer.)
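A sketch of that pattern (the file name and destination are made up):

```ini
[Files]
; The sharedfile flag makes Inno Setup reference-count the file (via the
; SharedDLLs registry count): each install increments the count, and the
; file is only deleted at uninstall when no other application still
; holds a reference.
Source: "SharedLib.dll"; DestDir: "{cf}\MyCompany"; Flags: sharedfile
```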
Ideally, if the product is intended to be used as a shared component it should have documentation on how to properly determine when no other applications require it.
I've got a form that gets filled out in stages so I wanted to direct users to a secondary edit form part way through the process. Is this possible?
Hide and reveal using jQuery on editform.aspx would be my initial choice. I've done this type of work for a very well-known bank and it worked very well: a single form with different sections to fill in, dependent on the answers provided (and the user's group membership).
If you actually want to maintain two lists, and hence two forms, and redirect between the two, I would look into changing the "source" query-string parameter on the edit form so that on completion of the form you get directed to an alternate location. I haven't tried it, but it would be a sensible place to start looking.
I'm trying to create a block that shows a random image from a pool of 20 in a dedicated folder inside /files/. The first step, I guess, is creating a view that outputs a block. But as far as I know it's only possible to display CCK fields in such a block, not to make it read from a folder on the server?
If not, what's the best way to go about this?
Finally, I'd like to show this block only on pages that belong to a certain taxonomy term. In the admin settings for the block I can enter PHP that should return TRUE on pages where the block is to be shown. I'm just wondering: are taxonomy terms available there?
The best way is to make a small module for this.
The module will publish a block, and you can position that block wherever you wish on your pages. In the module's code you put the statements that pick the image and return a link to it.
One caveat: if you are using caching, you will need to do some extra work, because the cache will prevent the random behavior. You can either disable caching for the block or force a cache flush before displaying it.
Here is the guide to do this: http://drupal.org/developing/modules
And here is specifically the task you need, the creation of blocks: http://drupal.org/node/206758
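To make the shape of that module concrete, here is a rough Drupal 6-style hook_block() sketch. The module name, folder name, and file-extension filter are all assumptions; treat this as a starting point, not a drop-in implementation:

```php
<?php
// Hypothetical module "randomimage": one block showing a random image
// from a dedicated folder under the site's files directory.
function randomimage_block($op = 'list', $delta = 0, $edit = array()) {
  if ($op == 'list') {
    return array(
      0 => array(
        'info' => t('Random image'),
        // Disable block caching so a new image is picked per page load.
        'cache' => BLOCK_NO_CACHE,
      ),
    );
  }
  elseif ($op == 'view' && $delta == 0) {
    // Scan the pool folder (assumed to be files/random_pool).
    $dir = file_directory_path() . '/random_pool';
    $images = file_scan_directory($dir, '.*\.(jpg|jpeg|png|gif)$');
    if ($images) {
      $image = $images[array_rand($images)];
      return array(
        'subject' => '',
        'content' => theme('image', $image->filename),
      );
    }
  }
}
```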
Take a look at http://code.google.com/p/fpss-drupal/
This is a Drupal module for the popular FrontPageSlideShow component for Joomla. It has a few themes, but is easily customisable.