I'm writing an Inno Setup installer that can be installed multiple times and that installs third-party products as well. The third-party products are needed by every installation, so when uninstalling my app I would run a check function on an [UninstallRun] entry to see whether any other instances of my app are still on the system. If so, I would not remove the third-party products (only the uninstallation of the last instance of my app would uninstall these...).
The problem is that the check function of an [UninstallRun] entry seems to be executed and evaluated during installation, not uninstallation (as the uninstall data is built during install). This means that if I uninstall the first-installed instance of my app, it removes the third-party products (because at the time the first instance was installed there were no other instances of my app).
Is there a way to tell Inno Setup to execute the check function of the [UninstallRun] section during uninstall, and only then?
If not, are there any ideas on how to achieve the required behaviour?
There is not.
However, you can write some [Code] that will execute at uninstall time. Typically something along these lines:
procedure CurUninstallStepChanged(CurUninstallStep: TUninstallStep);
begin
  { usUninstall fires just before the uninstallation process begins. }
  if CurUninstallStep = usUninstall then
  begin
    if ShouldUninstallComponentX() then
    begin
      UninstallComponentX();
    end;
  end;
end;
You will need to fill in the ComponentX functions yourself, of course, and you will want to add error checking etc. as appropriate.
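For instance, if each installed instance of your app recorded itself under a registry key (a hypothetical scheme; adapt it to however your installer actually tracks its instances), the check could look something like this:

function ShouldUninstallComponentX(): Boolean;
var
  Names: TArrayOfString;
begin
  { Hypothetical layout: every installed instance writes a subkey under
    this key. Only remove the shared products when this is the last one. }
  Result := True;
  if RegGetSubkeyNames(HKEY_LOCAL_MACHINE,
    'Software\MyCompany\MyApp\Instances', Names) then
    Result := GetArrayLength(Names) <= 1;
end;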
You still need to take care when doing auto-uninstalls of third-party products, though. Unless there's something unique to your applications about the way they were installed (e.g. a custom instance name in the case of a database server), you might still inadvertently uninstall the product while it's still in use by some other application, or while you still needed it yourself.
If it's something big enough to be a "product" then it ought to have its own entry in Programs and Features -- and if that's the case then it may be best to leave it to the user to decide when to remove it: either never automatically remove it yourself, or ask the user whether they really want to remove it at the time you think it's safe to uninstall (i.e. when the last copy of your apps is removed).
Smaller shared library components typically take a different approach: instead of running a full install/uninstall program, you include the libraries directly in [Files] and use the sharedfile flag to track when they're safe to remove; see the sketch below. (This relies on all applications doing the same thing, of course -- but it works even for applications that do not use Inno as their installer.)
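A minimal illustration of the sharedfile approach (the file name and destination are placeholders):

[Files]
; Windows keeps a reference count for shared files: the sharedfile flag
; increments it on install and decrements it on uninstall, and the file
; is only deleted when the count drops to zero.
Source: "MyLib.dll"; DestDir: "{cf}\MyCompany"; Flags: sharedfile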
Ideally, if the product is intended to be used as a shared component it should have documentation on how to properly determine when no other applications require it.
I have a custom service module (myproject-service & myproject-api).
With Liferay 7.2 and previous versions, when I changed my database model (for example, adding a new column to a table in service.xml), I used an UpgradeProcess and an UpgradeStepRegistrator, and incremented the Liferay-Require-SchemaVersion.
Since version 7.3, the auto-upgrade has been moved to a property that defaults to false. In development this value is true and everything works fine, but in production my custom service no longer upgrades at server start.
Is there a way to make this system work automatically again? I've seen that the upgrade now has to be done manually in the Gogo shell with the upgrade:execute command.
You are probably looking for this portal property:
upgrade.database.auto.run=false
Set this to true to execute the upgrade process when the portal starts and modules are activated.
You still need to build the UpgradeProcess, as described in:
https://help.liferay.com/hc/en-us/articles/360018162851-Creating-Data-Upgrade-Processes-for-Modules-
Technically, you could activate the same property on production systems. However, this is neither safe nor performant: the solution for table updates is generic and (as far as I know) will
export your table's data,
DROP TABLE,
CREATE TABLE (with the new structure)
populate the table with the previously saved data.
Now, apart from this being horribly slow for large amounts of data, there are some other shortcomings:
if you have renamed a column, or
added a non-nullable column,
this would fail to do the work as you expected (even in development).
Further:
If this process is interrupted at any time, you might lose all of your data.
In many cases a simple ALTER TABLE xxx ADD COLUMN yyy would be sufficient, and it is quick, safe and easy to do in SQL. That's where your UpgradeProcess kicks in. You wouldn't want to write one for every little bit in development (hence the property), but you certainly don't want to DROP TABLE with important data in production, and wait for who-knows-how-long, when there was just a trivial change.
From that point of view: you want to write a custom UpgradeProcess, even if you don't know yet that you do. And there's even a great starting point that takes away the repetitive, low-level work.
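A minimal sketch of such an UpgradeProcess and its registrator (the class, table, column and version strings are illustrative, not taken from your project; the two classes go into separate files):

import com.liferay.portal.kernel.upgrade.UpgradeProcess;
import com.liferay.portal.upgrade.registry.UpgradeStepRegistrator;

import org.osgi.service.component.annotations.Component;

// Performs one targeted schema change instead of the generic
// export / DROP TABLE / CREATE TABLE / re-populate cycle.
public class AddStatusColumnUpgradeProcess extends UpgradeProcess {

    @Override
    protected void doUpgrade() throws Exception {
        runSQL("alter table MyProject_MyEntity add status INTEGER");
    }

}

@Component(service = UpgradeStepRegistrator.class)
public class MyProjectUpgradeStepRegistrator implements UpgradeStepRegistrator {

    @Override
    public void register(Registry registry) {
        // Runs when the recorded schema version is 1.0.0 and the deployed
        // Liferay-Require-SchemaVersion is 1.1.0.
        registry.register("1.0.0", "1.1.0",
            new AddStatusColumnUpgradeProcess());
    }

}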
There are two pages where Inno Setup shows the required disk space: wpSelectDir and wpSelectComponents. On the wpSelectComponents page everything is shown correctly, but not on the wpSelectDir page.
What is the difference between DiskSpaceLabel and ComponentsDiskSpaceLabel? Aren't those the same?
I understand that the ComponentsDiskSpaceLabel shows the sum of all the components checked. What does DiskSpaceLabel show then?
The DiskSpaceLabel displays the minimal space needed for the application. It includes only the files that are installed unconditionally (those that do not belong to any components, tasks, etc.).
The ComponentsDiskSpaceLabel adds the files belonging to the selected components to the size calculation.
Both calculations reflect the ExtraDiskSpaceRequired directive.
Note that Check parameters are not considered in the calculation, nor is the DestDir parameter (so temporary files are included too). The dontcopy flag is not considered either (as you have reported).
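To illustrate with an assumed minimal script (names are placeholders):

[Components]
Name: "help"; Description: "Help files"

[Files]
; Counted by DiskSpaceLabel on wpSelectDir (installed unconditionally):
Source: "MyApp.exe"; DestDir: "{app}"
; Added to ComponentsDiskSpaceLabel on wpSelectComponents only while
; the "help" component is selected:
Source: "Help.chm"; DestDir: "{app}"; Components: help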
Background:
I am trying to automate an installer that will be distributed to a bunch of different computers. Some of these already have an MS redistributable file; some of them don't. The ones without this file have this among their window control identifiers:
child_window(class_name="SysHeader32")
The reason this is important is that this adds an extra step to the installation that needs a button pressed. Is there a way to write an if statement similar to:
if main_dlg.child_window(class_name="SysHeader32") exists:
    click install
    proceed normally
else:
    proceed normally
How would I implement this?
I have it working without the extra step, but if this extra step is present, the install fails.
There is a method .exists(timeout=5) which returns True/False instead of raising an exception the way other methods do. Of course, a try-except block is also possible, but .exists() reads better as logic than as error handling.
By the way, the else branch is not needed. Just proceed normally after the conditional code has run (or not).
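A minimal sketch (the button's title and class are assumptions; check your dialog's actual control identifiers):

# Hypothetical names; adjust to your installer's dialog.
if main_dlg.child_window(class_name="SysHeader32").exists(timeout=5):
    # The extra step is present: press its install button first.
    main_dlg.child_window(title="Install", class_name="Button").click_input()
# Proceed normally in either case; no else branch is needed.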
I was reading Greasemonkey's wiki to determine the difference between #downloadURL and #updateURL (which I couldn't). But what confused me even more is that both are discouraged:
It is unusual to specify this value. Most scripts should omit it.
I'm surprised by that as it's the only way for scripts to auto-update and I don't see why these keys shouldn't be used.
The wiki itself is pretty lacking and doesn't point to any other sources, so I have to ask here. I would also appreciate more detailed info on these keys.
Use of those keys is discouraged mainly by Greasemonkey's lead developer. Most others, including the Tampermonkey team, feel no need for such a warning.
Also note that those directives are not always required for auto-updates to work.
Some reasons why he would say that it was unusual and that "most" scripts should omit it:
In almost all cases it is not needed; see how updates work and how those directives work, below.
Adding and using those directives is just one more thing that the script writer must check and maintain. Why make work if it is not needed?
The update implementation and those directives have been buggy and, perhaps, not well implemented in Greasemonkey.
Tampermonkey, and other engines, implement updates, and those directives, in a slightly different manner. This means that code that works on Tampermonkey may fail on Greasemonkey.
Note that that wiki entry was made by Greasemonkey's lead developer (Arantius) himself, so it wasn't just wiki noise.
How updates work:
Script updates are conducted in 4 phases:
The enabled phase and/or "forced" updates.
The check phase.
The download phase.
The parse and install phase.
For this question, we are only concerned with the check and download phases. We stipulate that updates are enabled and that the updated script was valid and installed correctly.
When updating scripts, Greasemonkey (and Tampermonkey) download files twice:
The first download, controlled by the script's updateURL value, is just to check the file's #version (if any) and date -- to see if an update is available.
The second download, controlled by the script's downloadURL value, is the actual download of the new script to install.
This download will only occur if the server file has a higher #version number than the local file and/or if the server file has a later date than the local file. (Beware that there are critical differences here between script engines.)
See "Why you might use #downloadURL and #updateURL", below, for reasons why 2 file downloads are used.
How #downloadURL and #updateURL work:
#downloadURL merely overrides the default internal "download URL" location.
#updateURL merely overrides the default internal "update URL" (or check) location.
In most cases there is no need to do this; see below.
When you install a userscript, Greasemonkey automatically records the install location. No meta directive is needed.
By default, this is where Greasemonkey will both check for updates and download any updates.
But, if #downloadURL is specified, then Greasemonkey will both check and download from the specified location rather than the stored location.
But, if #updateURL is specified, then Greasemonkey will check (not download) from the "update" location given.
So: #updateURL overrides both #downloadURL and the default location for checking operations only.
While: #downloadURL overrides the default location for both checking and downloading (unless #updateURL is present).
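For example (the URLs are illustrative; note that in the metadata block itself the keys are written with an @ prefix):

// ==UserScript==
// @name         My Script
// @version      1.2.3
// @updateURL    https://example.com/myScript.meta.js
// @downloadURL  https://example.com/myScript.user.js
// ==/UserScript==

With this, the engine checks myScript.meta.js for a newer #version and only downloads myScript.user.js when one is found.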
Why you might use #downloadURL and #updateURL:
First, there are 2 downloads and potentially 2 different locations mainly for speed and bandwidth reasons.
Consider a scenario where a very large userscript has thousands of users:
Those users' browsers would constantly hammer the host server checking to see if an update was available. Most of the time one wouldn't be, and yet the large file would be downloaded over and over again unnecessarily.
This got to be a problem for sites like the now defunct userscripts.org.
Thus a system developed whereby a separate file was created to hold just the version (and date) information. So the server would now have veryLarge.user.js and veryLarge.meta.js.
veryLarge.meta.js would be updated (by the developer) every time the userscript was, and would just contain the Metadata Block from veryLarge.user.js.
So the thousands of browsers would just repeatedly download the much smaller veryLarge.meta.js -- saving everybody time and saving the server bandwidth.
Nowadays, both Greasemonkey and Tampermonkey will automatically look for a *.meta.js file, so there is normally no need to specify one separately.
So, why explicitly specify #downloadURL and/or #updateURL? Some possible reasons:
Your script can be installed multiple ways or from multiple sources (cut and paste, locally copied file, secondary server, etc.) and you only want to maintain one "master" version.
You want to track how many initial and/or upgrade downloads your script has.
#downloadURL is also a handy "self documenting" way of recording/conveying where the user got the script from.
You want the *.meta.js file on a different server than the userscript for some reason.
Possibly http versus https issues (need to dig into this some day).
You are a bad guy and you want the script to update to a malicious version at some future date from a server that you control -- one that is not where the script was installed from.
Some differences between Greasemonkey and Tampermonkey:
(Warning: I haven't verified all of this in a while. Subject to change anyway as Tampermonkey is constantly improving (and Greasemonkey changes a lot too).)
Tampermonkey requires a #version directive on both the current and newer file. This is how Tampermonkey determines if an update is available.
Greasemonkey will also use this method, so always include #version in scripts you might want to auto-update.
However, Greasemonkey also requires that the update file be newer; and if no version is present, Greasemonkey will compare the dates only. Note that this has caused problems in Greasemonkey in the past, and it also foolishly assumes that many different machines are accurately synched with the correct date and time.
Greasemonkey will only update from https:// schemes by default, but can optionally be set to allow http:// and ftp:// schemes.
Neither engine allows updates from file:// schemes.
We have a complex scenario which requires a timer job to run after content deployment to a SP 2010 site collection. The timer job automatically deactivates/reactivates a branding feature which is responsible for setting the master page for the site collection, among other things.
We have had several feature upgrades along the way, and neglected to call .Upgrade() on the feature in that specific site collection. So all of the updated CSS, master page, page layouts etc. are out of date on that SC.
The strange part is that when I checked the version number of that feature in this SC, it shows as the latest version. The custom upgrade action clearly didn't run and update the files, because nobody called .Upgrade().
One of my colleagues suggested that the deactivate/reactivate process done by the timer job would update the version number, meaning that I can no longer call Upgrade()!
Is that true? Does a deactivate/reactivate cycle for a feature automatically update the feature version number?
Is there an easy way to fix this mess? Some way to decrement the version number programmatically, then call Upgrade()?
On 1: No. Feature deactivation/activation does not trigger an upgrade. See this article by Chris O'Brien: http://www.sharepointnutsandbolts.com/2010/06/feature-upgrade-part-1-fundamentals.html
Feature upgrade does NOT happen automatically (including when the Feature is deactivated/reactivated)! The only way to upgrade a Feature is to call SPFeature.Upgrade(), typically in conjunction with one of the QueryFeatures() methods. My tool which I'll go on to talk about is a custom application page which helps you with this part -- note there is no STSADM command, PowerShell cmdlet or user interface to do this out-of-the-box.
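For reference, the SPFeature.Upgrade() / QueryFeatures() combination mentioned above looks roughly like this (a sketch for SharePoint 2010 full-trust code; the feature GUID is a placeholder):

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

class UpgradePendingFeatures
{
    static void Main()
    {
        // Find all activated instances of the feature that still need
        // upgrading (needsUpgrade: true) and run their upgrade actions.
        SPWebService contentService = SPWebService.ContentService;

        foreach (SPFeature feature in contentService.QueryFeatures(
            new Guid("00000000-0000-0000-0000-000000000000"), true))
        {
            feature.Upgrade(false);
        }
    }
}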
Is your timer job cycling the feature activation with force? Then, yes, it is triggering the feature upgrade/update; the documentation for SPFeature.Activate notes this for the force parameter.
Why the feature version is incremented, I'm not sure. When you have a feature, install a new feature version, and deactivate/activate it, the feature version stays the same unless you run an Upgrade; see also this related question stating the same: https://sharepoint.stackexchange.com/questions/41476/feature-upgrading-question
I'm guessing your timer job is using force? Otherwise I'm not quite sure what is happening.
On 2: I don't know whether it is possible to decrease the version number, but the safest way would be to create a new version including a grand "clean-up" feature receiver which sets everything right, i.e. one that checks which steps of the feature upgrade have already happened (e.g. new list created, new content type added) and which haven't, and then executes only the steps that have not run yet. For the latter part you can fortunately reuse the existing code, so you would only need to write the "clean-up" or checking code.
After some testing I found that simply deactivating and reactivating the feature will increment the version number and completely screw up your upgrade! I even watched the update come through in the content database: as soon as you deactivate/reactivate the updated feature, the new version number pops into the content DB. Of course the upgrade doesn't actually run; it just increments the version number.
This means that if you then call .Upgrade() it won't work because SharePoint thinks it's already been upgraded!!
To fix this I updated the row in the content database to set the feature version back to 0.0.0.0 for that particular web, and then I could run .Upgrade() just fine... but that's not exactly a supported solution. If anyone else has a better idea, drop a reply.