Setting Version of InstallShield 2009 InstallScript project from IsCmdBld.exe

I am very new to InstallShield and have inherited an InstallScript project. I have mostly figured out my way around and fixed most of the problems. However, I wish to build this project automatically on our build server with each build of our product. I have this working fine. For some reason, though, I cannot get the version number to increase.
I am using the command:
IsCmdBld.exe
-P <.ism location>
-L <some_path_variable>=<some_value>
-L <some_path_variable2>=<some_value2>
This works.
However, adding -y 1.2.3, -y "1.2.3", -z Version=1.2.3, -z Version="1.2.3", -z "Version=1.2.3", -z ProductVersion=1.2.3, -z ProductVersion="1.2.3", or -z "ProductVersion=1.2.3" does not work.
When I say that it doesn't work, I mean that using the resulting installer does not attempt to do an upgrade like it would if I manually increased the Version string in the Product Properties table from inside InstallShield.
Is there something I am missing? I know I am not providing much to go on, just hoping someone has come across this problem before. Also, using the -c COMP switch does not work.
Any thoughts appreciated.

I believe IsCmdBld only supports passing ProductVersion properties for MSI projects, but not for InstallScript projects. I believe you need to do something like this prior to calling IsCmdBld:
set project = CreateObject("ISWiAuto15.ISWiProject")
project.OpenProject "C:\test.ism"
project.ProductVersion = "2.0.0"
project.SaveProject   ' save the change, otherwise it is lost when the project is closed
project.CloseProject
set project = nothing
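For example, you could save that script as SetVersion.vbs (the filename is illustrative) and run it from your build step right before IsCmdBld:
cscript //nologo SetVersion.vbs
IsCmdBld.exe -P <.ism location> -L <some_path_variable>=<some_value>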
Alternatively, you can save your project in XML format and use an XPath query / XmlPoke to update the ProductVersion in the Property table. The syntax is a little scary because of the DTD, but it can be done.

This is an old question, but I was finally able to figure out how to make it work from the command line, so I thought I would share it. I created a Path Variable (VersionNumber in the example below) in the project and set the product version to that path variable in the "General Information" section.
Then you can set it at the command line using the -l flag.
ISCmdBld.exe -p project.ism -l VersionNumber=1.1.0
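On a build server you would typically feed this path variable from your build system, for example (BUILD_VERSION here is a placeholder environment variable, not something InstallShield defines):
ISCmdBld.exe -p project.ism -l VersionNumber=%BUILD_VERSION%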

Related

How to install Gnatcoll Postgres on Linux Centos 7

I have installed gprbuild, xmlada, and gnatcoll. I am now attempting to install gnatcoll_postgres, which I have downloaded from here: https://github.com/AdaCore/gnatcoll-db/
Within the Postgres folder is a Makefile, which I execute like so...
[parallels@localhost postgres]$ ls
gnatcoll_postgres.gpr gnatcoll-sql-postgres-gnade.ads
gnatcoll-sql-postgres.adb gnatcoll-sql-ranges.adb
gnatcoll-sql-postgres.ads gnatcoll-sql-ranges.ads
gnatcoll-sql-postgres-builder.adb Makefile
gnatcoll-sql-postgres-builder.ads postgres_support.c
gnatcoll-sql-postgres-gnade.adb README.md
[parallels@localhost postgres]$ make Makefile
which: no gnatls in (/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/var/lib/snapd/snap/bin:/home/parallels/.local/bin:/home/parallels/bin)
make: Nothing to be done for `Makefile'.
[parallels@localhost postgres]$
Would anybody please be able to tell me what this means...
which: no gnatls in (/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/var/lib/snapd/snap/bin:/home/parallels/.local/bin:/home/parallels/bin)
make: Nothing to be done for `Makefile'.
Any help would be greatly appreciated.
Please see the xmlada and gnatcoll in my project below; does this look like it's installed correctly? I'm presuming this isn't correct...
Thanks,
Lloyd
It means that your GNAT installation binaries aren’t on your PATH.
The README.txt from the adacore.com site says, in part,
To start using the tools in command-line mode, you will need to add
{install_prefix}/bin
to your PATH environment variable. Alternatively, you can simply launch
{install_prefix}/bin/gps
and GPS will automatically add itself to the PATH - it will also find the
cross compiler, if you have installed everything in the default locations.
Note that GPS will add this at the end of the PATH, meaning that it will find first any other GNAT installations that you have in your PATH.
I strongly suspect that you’ve been doing the latter, so that GPS adds itself (actually, of course, its own location) to the PATH, so that when it launches the compiler it finds the correct one.
When you run make from the terminal, the compiler isn’t on the PATH, so neither are gnatls, gprconfig, gprbuild and the rest of the GNAT tools.
What you need to do is to take the first choice from the README, and add /home/parallels/opt/GNAT/2019/bin to (the front of) your default PATH. How you do that depends on your shell.
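For bash, for example, that means adding a line like the following to ~/.bash_profile (or ~/.bashrc) and then opening a new shell; adjust the path if your GNAT is installed somewhere else:
# put the GNAT 2019 tools at the front of PATH (install location assumed from the project settings shown above)
export PATH=/home/parallels/opt/GNAT/2019/bin:$PATH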
You will find xmlada and gnatcoll already installed there.

How to specify different feedback for different platforms if AC_CHECK_HEADER fails in autoconf/configure.ac?

I have a check for a header file in configure.ac in the source root
AC_CHECK_HEADER(log4c.h,
[],
[AC_MSG_ERROR([Couldn't find or include log4c.h])])
and I'd like to give different feedback on different platforms to reflect the most straightforward way of providing the header on each:
on Debian it should error with the message Couldn't find or include log4c.h. Install log4c using 'sudo apt-get install liblog4c-dev'
on OpenSUSE it should error with ... Install log4c using 'sudo yum install log4c-devel' (didn't research the package name, but you catch my drift)
on other systems (where I'm too lazy to research the package name) it should error with ... Install log4c by fetching ftp://.../log4c.tar.gz and installing with './configure && make && make install' in the source root
I have:
checked the AM_CONDITIONAL macro, but I don't get how to use it in configure.ac rather than in Makefile.am (as described in autoconf/automake: conditional compilation based on presence of library?)
found the tip to run esyscmd in stackoverflow.com/questions/4627900/m4-executing-a-shell-command, but adding esyscmd (/bin/echo abc) to configure.ac doesn't print anything when I run autoreconf --install --verbose --force.
Both answers describing the usage of conditional macros (without the shell commands for the mentioned OSes) and links to predefined macros (like AC_CHECK_HEADER_DEBIAN, AC_CHECK_HEADER_SUSE, etc.) are appreciated.
The following configure.ac doesn't work:
AC_INIT([cndrvcups-common], [2.90], [krichter722@aol.de])
AC_CONFIG_MACRO_DIR([m4])
AM_INIT_AUTOMAKE([foreign -Wall subdir-objects])
AC_PROG_CC
AM_PROG_AR
AM_PROG_CC_C_O
AC_MSG_NOTICE([Hello, world.])
AC_INCLUDES_DEFAULT
AC_CHECK_HEADER(check.h,
[],
[
AS_IF (test "$(lsb_release -cs)" = "vivid", [echo aaaaaa], [echo bbbbbb])
])
LT_INIT # needs to be after AM_PROGS_AR
AC_CONFIG_HEADERS([config.h])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
because ./configure fails with
checking check.h usability... no
checking check.h presence... no
checking for check.h... no
./configure: line 4433: syntax error near unexpected token `;'
./configure: line 4433: ` if ; then :'
There's also ./configure: line 4427: #include: command not found which happens no matter whether AC_CHECK_HEADER is specified.
Your configure.ac is almost OK. The only problem is the space between AS_IF and the parenthesis. No whitespace is allowed between a macro name and the opening parenthesis in m4 scripts. This is the correct syntax:
AC_CHECK_HEADER(check.h,
[],
[
AS_IF(test "$(lsb_release -cs)" = "vivid", [echo aaaaaa], [echo bbbbbb])
])
If you are looking for a way to detect different distros, look for example at the configure.ac of cgmanager.
Update
I noticed one more problem in your configure.ac.
The AC_INCLUDES_DEFAULT macro expands to a set of default includes and can't be used here. It is also not needed: it is used by default in your AC_CHECK_HEADER macro, since you omit the last parameter.
This is the cause of the "line 4427: #include: command not found" error you mention.
Update to your comment
First of all, running a system command like lsb_release is not in itself portable. You should first check for its presence, for example with AC_CHECK_PROG.
Regarding the syntax, I would first capture the output of the command using backticks: result=`lsb_release -cs` and later test the resulting output: test "x$result" = "xvivid". The x is needed to avoid problems with an empty value in some shells.
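Putting those two points together, a rough (untested) sketch might look like the following; the lsb_release handling and the error messages are illustrative assumptions, and it belongs in the action-if-not-found argument of AC_CHECK_HEADER:
dnl check that lsb_release exists before calling it
AC_CHECK_PROG([LSB_RELEASE], [lsb_release], [yes], [no])
AS_IF([test "x$LSB_RELEASE" = "xyes"],
      [distro_codename=`lsb_release -cs`],
      [distro_codename=unknown])
AS_IF([test "x$distro_codename" = "xvivid"],
      [AC_MSG_ERROR([Couldn't find check.h. Install it with your distribution's package manager.])],
      [AC_MSG_ERROR([Couldn't find check.h. Install it from source.])])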
Lastly, I have doubts whether the configure script is the proper place for all these distro-specific messages. You may consider placing them in the README file instead.
Avoid those system-specific messages.
Print one message which allows people to figure out what package to install on their respective system, but avoid naming system-specific package names and system-specific installation tools.
You will never be able to add messages for all systems, so it is better to go the part of the way which you know and let your users go the rest of the way, because they know their systems better than you can.
The proper way would be to write a software package, outside of but called from your configure, which, given a header filename, foo.pc filename, library name, etc., figures out how to install that on the respective system. Then let system-specific maintainers fix that package, call it from configure if it is installed, and issue a generic error message otherwise.
A portable shell script local to your software package might do the same job to some extent. You still have to maintain all the system specific parts for all possible systems, though.
Hmm... now that I am thinking about that, the idea appears not that bad. I might add such a script to some of the projects I maintain and see how it turns out in practical use.
I would still try to keep most of that logic outside configure, though.
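For what it's worth, a minimal sketch of such a helper script (script name, distro IDs, and package names are illustrative assumptions, not a maintained tool) could look like this:
#!/bin/sh
# suggest-log4c.sh -- print a distro-appropriate hint for installing log4c (illustrative only)
id=unknown
if [ -r /etc/os-release ]; then
    . /etc/os-release
    id=$ID
fi
case "$id" in
    debian|ubuntu)  echo "Install log4c with: sudo apt-get install liblog4c-dev" ;;
    opensuse*|sles) echo "Install log4c with: sudo zypper install log4c-devel" ;;
    *)              echo "Install log4c from source: ./configure && make && make install" ;;
esac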

I cannot compile Z3 using Visual C++ & gcc

I'm a starter with Z3, so my question may be too basic.
But if you could give me some information about my question, I would be very happy.
I searched the previous questions on this site.
But I couldn't get detailed enough information (maybe because my question is too basic..)
[using Visual C++]
1) First of all, I downloaded "z3 4.3.0 for Windows" from the CodePlex site.
But this package doesn't include the example file (test_capi.c).
So I got "z3-89c1785b73225a1b363c0e485f854613121b70a7.zip" for the example file.
(I cannot remember where I got it... :( )
I succeeded in compiling the Python example following the CodePlex site guide.
But I cannot compile test_capi.c using Visual C++.
I also added "test_capi.c" to the "z3 4.3.0 for Windows" folder, but I still cannot compile it.
Lastly, I just tried using "test_capi.vcxproj" from "z3-src-4.1.1", and that succeeded.
I cannot understand this.
If I want to test "my file", what files are needed in "z3 4.3.0 for Windows"?
Or
Do I have to use only "z3 4.1.1" for Visual C++ and add "my file" at some location in "z3 4.1.1"? (Are all files of Z3 4.1.1 needed?? And what is that location?)
I read in some other comment that "Z3 4.3.0" is simplified.
I understood this comment to mean that I can use only "z3 4.3.0" and test successfully.
But as I said, I cannot compile.
Please give me some information..
[using gcc in Ubuntu]
First of all, I downloaded "z3-4.3.2.07d56bdc705c-x86-ubuntu-12.04.zip" from the CodePlex site.
I tried the git command to get the source code, but I cannot find the source code.
(I also don't know the reason..)
Anyway... "z3-4.3.2.07d56bdc705c-x86-ubuntu-12.04.zip" doesn't have any example files; only the bin and include folders exist.
So I also used "z3 4.1.1", but I cannot compile using the command below.
gcc -fopenmp -o test_capi -I ../../Include -L ../../lib test_capi.c -lz3-gmp
The error is "cannot find -lz3-gmp".
In some comment I found "use sudo install", but I don't know how I can install libz3.
(Of course plain "sudo install" doesn't work, and "sudo apt-get install z3" also doesn't work...)
Could you explain in detail how to compile "test_capi.c" using gcc?
I'm confused by the many kinds of guides, but I couldn't get the basic information I need.
Thank you in advance and I hope to get some information... even if my question is too basic..
First, you should use only one version of the source code. Version 4.1.1 is very old and newer versions do not come with test_capi.vcxproj anymore; instead, everything is done via the Makefile. For the very latest version please use the unstable branch (e.g., by selecting unstable here and then clicking Download).
The examples can be compiled by calling nmake examples (on Windows) or make examples (on Linux) in the build directory. The makefile has a target called _ex_c_example which shows how to call the compiler for the C example. The various variables that this target uses are defined in build/config.mk. Note that these variables are set to different values on Windows and Linux (this file is produced by python scripts/mk_make.py).
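For reference, with one of the pre-built binary packages (like the Ubuntu zip mentioned in the question, which only ships bin and include) and test_capi.c copied from the source tree, the compile step is typically something along these lines; the exact paths are assumptions:
# rough sketch: build the C example against an unpacked binary package
gcc -fopenmp -o test_capi -I include test_capi.c -L bin -lz3
# make sure the loader can find libz3 at run time
LD_LIBRARY_PATH=bin ./test_capi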
The git command on many Linux distributions is not compatible with the codeplex git server (for a fix see here), but of course this is not necessary if you download the source code from the webpage directly.

define NDK_ROOT in cocos2d-x multiplatform game environment

I have just started working with cocos2d-x for Android and I am following the wonderful tutorial at http://www.raywenderlich.com/33750/cocos2d-x-tutorial-for-ios-and-android-getting-started . Now, I have successfully run my first Hello World demo project by following this link. I also set the environment parameters:
NDK_ROOT_LOCAL="/MY ANDROID NDK PATH/"
ANDROID_SDK_ROOT_LOCAL="/MY ANDROID SDK PATH/"
I followed the tutorial exactly as given, but I am still facing a problem when running my project a second time: I have to export NDK_ROOT from the terminal every time I run my project, which is really tiring and doesn't work for my further implementation. When I run the project it says "please define NDK_ROOT", even though I have already defined it.
Second thing:
I also manually defined these variables in my bash profile (create-android-project.sh) this way:
NDK_ROOT_LOCAL = "/MY ANDROID NDK PATH/"
ANDROID_SDK_ROOT_LOCAL = "/MY ANDROID SDK PATH/"
What am I missing in this setup?
To make those variables permanent (so every terminal shell you open hereafter has them), use your favorite text editor to update your bash profile (I chose vi to keep it in the terminal):
NOTE: the use of "~" in a path is just shorthand for your user directory. In your case it appears to be synonymous with saying "~" = "/Users/alex"
vi ~/.bash_profile
add the following lines and save (update these names and paths to match your actual environment, I am assuming everything is in the root of your user directory here):
export NDK_ROOT_LOCAL=~/android-ndk-r10b
export ANDROID_SDK_ROOT_LOCAL=~/sdk
Use source to run the profile in the current terminal session or just open a new terminal
source ~/.bash_profile
You can test to see if the variables are defined here (use whatever you named them)
echo $NDK_ROOT_LOCAL
echo $ANDROID_SDK_ROOT_LOCAL
[EDIT: noted that paths need to be tuned to your environment]
This way I can define my NDK_ROOT:
export NDK_ROOT=/Users/alex/android-ndk-r8b
If you are using Mac OS X, please consider adding the NDK_ROOT variable to the environment file. Linux reads it directly when an instance of bash is initiated, but on Mac you need to add it in a bit more detail. Try adding it.

qmake -query internal settings in Linux - where are they?

I am building a Linux system with cross-compiler using ptxdist. It allows me to configure Qt4 for installation and it builds and installs qt-everywhere-opensource-src-4.6.3 Ok. However, the qmake internal settings are screwed up and I don't know how to fix them.
When I run qmake -query I get:
me#ubuntu:~$ qmake -query
QT_INSTALL_PREFIX:/
QT_INSTALL_DATA:/
QT_INSTALL_DOCS://doc
QT_INSTALL_HEADERS://include
QT_INSTALL_LIBS://lib
QT_INSTALL_BINS://bin
QT_INSTALL_PLUGINS://plugins
QT_INSTALL_TRANSLATIONS://translations
QT_INSTALL_CONFIGURATION:/etc/xdg
QT_INSTALL_EXAMPLES://examples
QT_INSTALL_DEMOS://demos
QMAKE_MKSPECS://mkspecs
QMAKE_VERSION:2.01a
QT_VERSION:4.6.3
Through some research, it looks like this can be fixed by simply rebuilding Qt, but that isn't fixing the problem. I dug into the build output a bit and it looks like the ./configure command for the Qt build has "-prefix /usr", so I don't know why this isn't being fixed.
I would like to fix these internal values manually if possible because the Qt build takes hours. Does anyone know how to do this?
At configure time these paths are hardcoded in 'src/corelib/global/qconfig.cpp', and end up hardcoded into qmake when it is built. They are also hardcoded into many other files, like all the .la and .pc files, not to mention the Makefile install rules.
The only way to fix this is to figure out why configure keeps screwing up the prefix. configure is a big shell script, so it's easy to see where $QT_INSTALL_PREFIX is assigned from the '-prefix' argument, and then the different checks that are done on it (like running it through 'config.tests/unix/makeabs'). Try putting print statements before/after $QT_INSTALL_PREFIX is changed, and you should be able to find out where the path gets screwed up.
Also, you don't have to wait for the full build to complete to tell if the prefix was set correctly. After configure runs, take a look in 'src/corelib/global/qconfig.cpp' and see what 'qt_configure_prefix_path_str' is set to.
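For example, this one-liner (a minimal check; the path is the Qt 4.x source tree location mentioned above) shows the prefix that got baked in:
grep qt_configure_prefix_path_str src/corelib/global/qconfig.cpp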
You can manually set these properties using
qmake -set VARIABLE VALUE
They are stored using QSettings, the Qt built-in persistent application settings.
See Configuring qmake's Environment.
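Per this answer, the usage would look something like the sketch below; whether a built-in property such as QT_INSTALL_PREFIX can actually be overridden this way may depend on your Qt version, so treat it as an assumption to verify:
qmake -set QT_INSTALL_PREFIX /usr
qmake -query QT_INSTALL_PREFIX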
Configure scripts can be fuzzy about slashes. Are you sure that the build prefix is /usr and not /usr/?
