WiX IIS version check in uninstall condition fails

I have a custom action as shown below.
During uninstall, the condition that checks for IIS_MAJOR_VERSION="#7" AND IIS_MINOR_VERSION="#5" seems to fail, although during install this condition evaluates to true.
I did check in the uninstall log that IIS_MAJOR_VERSION="#7" and IIS_MINOR_VERSION="#5" are set. Does anyone know what I did wrong?
<Property Id="IIS_MAJOR_VERSION">
    <RegistrySearch Id="CheckIISVersion"
                    Root="HKLM"
                    Key="SOFTWARE\Microsoft\InetStp"
                    Name="MajorVersion"
                    Type="raw" />
</Property>
<Property Id="IIS_MINOR_VERSION">
    <RegistrySearch Id="CheckIISMinorVersion"
                    Root="HKLM"
                    Key="SOFTWARE\Microsoft\InetStp"
                    Name="MinorVersion"
                    Type="raw" />
</Property>

<InstallExecuteSequence>
    <Custom Action="DropDBUSerIIS75" Before="InstallFinalize">Installed AND NOT UPGRADINGPRODUCTCODE AND IIS_MAJOR_VERSION="#7" AND IIS_MINOR_VERSION="#5"</Custom>
</InstallExecuteSequence>

I am not sure why the code above goes wrong, but as a precaution use the following registry key to read the IIS version, because the values under SOFTWARE\Microsoft\InetStp persist even if IIS is uninstalled.
<Property Id="IIS_MAJOR_VERSION">
    <RegistrySearch Id="CheckIISVersion"
                    Root="HKLM"
                    Key="SYSTEM\CurrentControlSet\services\W3SVC\Parameters"
                    Name="MajorVersion"
                    Type="raw" />
</Property>
<Property Id="IIS_MINOR_VERSION">
    <RegistrySearch Id="CheckIISMinorVersion"
                    Root="HKLM"
                    Key="SYSTEM\CurrentControlSet\services\W3SVC\Parameters"
                    Name="MinorVersion"
                    Type="raw" />
</Property>


How to use @RefreshScope in a spring-integration project configured as XML config

I am using Spring Integration to FTP files to a remote server, and I am using XML-based config. I would like to use Spring Cloud Config so I can move all the properties files to Git and use @RefreshScope to refresh the properties. What's the best way to achieve this in Spring Integration, which only has XML?
I have the below code :
<bean id="inDefaultSftpSessionFactory"
class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<property name="host" value="${sftp.host}" />
<property name="port" value="${sftp.port}" />
<property name="user"
value="${sftp.username}" />
<property name="password"
value="${sftp.password}" />
<property name="allowUnknownKeys" value="true" />
</bean>
Try scope="refresh" and <aop:scoped-proxy/> for that bean definition.
Something like this:
<bean id="inDefaultSftpSessionFactory"
class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory"
scope="refresh">
<property name="host" value="${sftp.host}" />
<property name="port" value="${sftp.port}" />
<property name="user" value="${sftp.username}" />
<property name="password" value="${sftp.password}" />
<property name="allowUnknownKeys" value="true" />
<aop:scoped-proxy/>
</bean>
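Note that <aop:scoped-proxy/> requires the aop namespace to be declared on the root <beans> element (and spring-aop on the classpath), and the refresh scope itself is registered by Spring Cloud Context once the application is bootstrapped against the Config Server. A minimal sketch of the namespace declarations, if they are not already present:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/aop
           http://www.springframework.org/schema/aop/spring-aop.xsd">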

No bean named 'mystoreBrandCategoryCodeValueProvider' available (hybris)

Caused by: java.util.concurrent.ExecutionException:
de.hybris.platform.solrfacetsearch.indexer.exceptions.IndexerRuntimeException:
de.hybris.platform.solrfacetsearch.indexer.exceptions.IndexerException:
Failed to index item with PK 8796431187969: No bean named
'mystoreBrandCategoryCodeValueProvider' available
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_171]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[?:1.8.0_171]
at de.hybris.platform.solrfacetsearch.indexer.strategies.impl.DefaultIndexerStrategy.runWorkers(DefaultIndexerStrategy.java:141)
~[solrfacetsearchserver.jar:?]
I get this error when I try to go to localhost for mystore.
My steps:
I created a b2b store from b2c as described in the hybris help.
It is working well, because I can visit the powertools website.
I copied all the impexes from powertools to mystore, under mystoreinitialdata/import.
Then I went to backoffice/wcms and saw my store as a URL.
I could also see my catalogs on the catalogs tab: product, catalog and classification, just like powertools.
What I want is, with the powertools impexes copied to mystore, to see the powertools items under mystore.
But it gives the error which I posted at the beginning.
I only copied the impexes.
For example, mystore/solr.impex has
;$solrIndexedType; color ;string;;;Refine;Alpha; 4000;true;;mystoreVariantCategoryCodeValueProvider;categoryFacetDisplayNameProvider;defaultTopValuesProvider
which I copied from powertools. But powertools has
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context.xsd">
<context:annotation-config/>
<alias alias="b2bAcceleratorCoreSystemSetup" name="powertoolsStoreSystemSetup" />
<bean id="powertoolsStoreSystemSetup" class="de.hybris.platform.powertoolsstore.setup.PowertoolsStoreSystemSetup" parent="abstractCoreSystemSetup">
<property name="powertoolsCoreDataImportService" ref="powertoolsCoreDataImportService"/>
<property name="powertoolsSampleDataImportService" ref="powertoolsSampleDataImportService"/>
</bean>
<bean id="powertoolsSampleDataImportService" class="de.hybris.platform.powertoolsstore.services.dataimport.impl.PowertoolsSampleDataImportService"
parent="sampleDataImportService">
</bean>
<bean id="powertoolsCoreDataImportService" class="de.hybris.platform.powertoolsstore.services.dataimport.impl.PowertoolsCoreDataImportService"
parent="coreDataImportService">
</bean>
<!-- Solr field value providers TEMPORARY FOR NOW SO DO NOT NEED TO DEPEND ON yb2bacceleratorcore -->
<bean id="powertoolsCategoryCodeValueProvider" parent="abstractCategoryCodeValueProvider">
<property name="categorySource" ref="powertoolsCategorySource"/>
</bean>
<bean id="powertoolsBrandCategoryCodeValueProvider" parent="abstractCategoryCodeValueProvider">
<property name="categorySource" ref="powertoolsBrandCategorySource"/>
</bean>
<bean id="powertoolsVariantCategoryCodeValueProvider" parent="abstractCategoryCodeValueProvider">
<property name="categorySource" ref="powertoolsVariantCategorySource"/>
</bean>
<bean id="powertoolsCategoryNameValueProvider" parent="abstractCategoryNameValueProvider">
<property name="categorySource" ref="powertoolsCategorySource"/>
</bean>
<bean id="powertoolsBrandCategoryNameValueProvider" parent="abstractCategoryNameValueProvider">
<property name="categorySource" ref="powertoolsBrandCategorySource"/>
</bean>
<bean id="powertoolsCategorySource" parent="variantCategorySource">
<property name="rootCategory" value="1"/> <!-- '1' is the root icecat category -->
</bean>
<bean id="powertoolsVariantCategorySource" parent="variantCategorySource"/>
<bean id="powertoolsBrandCategorySource" parent="defaultCategorySource">
<property name="rootCategory" value="brands"/> <!-- 'brands' is the root of the brands hierarchy -->
</bean>
<!-- Solr field value providers TEMPORARY FOR NOW SO DO NOT NEED TO DEPEND ON yb2bacceleratorcore -->
</beans>
This is in the powertools spring XML.
There is no folder like mystorestore, because the directory is powertoolsstore in
<bean id="powertoolsSampleDataImportService" class="de.hybris.platform.powertoolsstore.services.dataimport.impl.PowertoolsSampleDataImportService"
parent="sampleDataImportService">
and for
class="de.hybris.platform.powertoolsstore.setup.PowertoolsStoreSystemSetup"
mystore only has
mystore/initialdata/setup/InitialDataSystemSetup.java
and for
<bean id="powertoolsSampleDataImportService" class="de.hybris.platform.powertoolsstore.services.dataimport.impl.PowertoolsSampleDataImportService"
parent="sampleDataImportService">
mystore does not have a services package.
What should I do? I want to see localhost with items, so I thought the best way is to copy from powertools?
Your Solr indexer cron job is searching for the bean 'mystoreBrandCategoryCodeValueProvider', so this bean should be defined in your Spring file; remove it from the impex if it is not used.
Possible solutions:
1. Update solr.impex: remove this bean if you are not using it, then import the impex via hAC or update the system, making sure your impex is imported during the system update.
Check your solrIndexedType; if some old field is using this bean, remove it (via hMC).
2. Add this bean to your Spring file if you are using it.
Hopefully you have copied all the impexes correctly.
Make sure you:
Copy the impexes to the right folder path
/mystoreinitialdata/resources/mystoreinitialdata/import/sampledata/productCatalogs/mystoreProductCatalog/products-media.impex
Replace every powertools reference with mystore
Point $siteResource to the correct path
$siteResource=jar:com.mystore.initialdata.constants.MystoreInitialDataConstants&/mystoreinitialdata/import/sampledata/productCatalogs/$productCatalog
Correct the InitialDataSystemSetup class
Like
public static final String MYSTORE = "mystore";

@SystemSetup(type = Type.PROJECT, process = Process.ALL)
public void createProjectData(final SystemSetupContext context)
{
    final List<ImportData> importData = new ArrayList<ImportData>();

    final ImportData mystoreImportData = new ImportData();
    mystoreImportData.setProductCatalogName(MYSTORE);
    mystoreImportData.setContentCatalogNames(Arrays.asList(MYSTORE));
    mystoreImportData.setStoreNames(Arrays.asList(MYSTORE));
    importData.add(mystoreImportData);

    /* uncomment the lines below to test mystoreinitialdata */
    getCoreDataImportService().execute(this, context, importData);
    getEventService().publishEvent(new CoreDataImportedEvent(context, importData));
    getSampleDataImportService().execute(this, context, importData);
    getEventService().publishEvent(new SampleDataImportedEvent(context, importData));
}
Correct/add the beans you have used in your impex in your *core-spring.xml.
Like
<bean id="yAcceleratorInitialDataSystemSetup"
class="com.store.initialdata.setup.InitialDataSystemSetup"
parent="abstractCoreSystemSetup">
<property name="coreDataImportService" ref="coreDataImportService"/>
<property name="sampleDataImportService" ref="sampleDataImportService"/>
</bean>
<!-- Solr ValueProvider -->
<bean id="mystoreCategorySource" parent="variantCategorySource">
<property name="rootCategory" value="1" /> <!-- '1' is the root icecat category -->
</bean>
<bean id="mystoreVariantCategorySource" parent="variantCategorySource" />
<bean id="mystoreBrandCategorySource" parent="defaultCategorySource">
<property name="rootCategory" value="brands" /> <!-- 'brands' is the root of the brands hierarchy -->
</bean>
<bean id="mystoreCategoryCodeValueProvider" parent="abstractCategoryCodeValueProvider">
<property name="categorySource" ref="mystoreCategorySource" />
</bean>
<bean id="mystoreBrandCategoryCodeValueProvider" parent="abstractCategoryCodeValueProvider">
<property name="categorySource" ref="mystoreBrandCategorySource" />
</bean>
<bean id="mystoreVariantCategoryCodeValueProvider" parent="abstractCategoryCodeValueProvider">
<property name="categorySource" ref="mystoreVariantCategorySource" />
</bean>
<bean id="mystoreCategoryNameValueProvider" parent="abstractCategoryNameValueProvider">
<property name="categorySource" ref="mystoreCategorySource" />
</bean>
<bean id="mystoreBrandCategoryNameValueProvider" parent="abstractCategoryNameValueProvider">
<property name="categorySource" ref="mystoreBrandCategorySource" />
</bean>
Finally, update the running system:
hac > Platform > Update
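If you prefer the command line, a system update can typically also be triggered from the platform directory (a sketch, assuming a standard hybris installation where the ant environment has been set up):

. ./setantenv.sh
ant updatesystem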

Integrating Apache Cassandra with Apache Ignite

I'm trying to integrate Apache Ignite with Apache Cassandra (3.11.2), as I want to use Ignite to cache the data present in my already existing Cassandra database.
After going through the online resources, I've done the following till now:
Downloaded Apache Ignite.
Copied all the folders present in "libs/optional/" to "libs/" (I don't know which ones are required for Cassandra).
Created 3 XML files in the config folder, i.e. "cassandra-config.xml", "connection-settings.xml" and "persistance-settings.xml". Currently I'm using the same node (172.16.129.68) for both Cassandra and Ignite.
cassandra-config.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<!-- Cassandra connection settings -->
<import resource="connection-settings.xml" />
<!-- Persistence settings for 'cache1' -->
<bean id="cache1_persistence_settings" class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
<constructor-arg type="org.springframework.core.io.Resource" value="file:/home/cass/apache_ignite/apache-ignite-fabric-2.4.0-bin/config/persistance-settings.xml" />
</bean>
<!-- Ignite configuration -->
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<list>
<!-- Configuring persistence for "cache1" cache -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="cache1"/>
<property name="readThrough" value="true"/>
<property name="writeThrough" value="true"/>
<property name="writeBehindEnabled" value="true"/>
<property name="cacheStoreFactory">
<bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
<property name="dataSourceBean" value="cassandraAdminDataSource"/>
<property name="persistenceSettingsBean" value="cache1_persistence_settings"/>
</bean>
</property>
</bean>
</list>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead os static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>172.16.129.68:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
connection-settings.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="loadBalancingPolicy" class="com.datastax.driver.core.policies.TokenAwarePolicy">
<constructor-arg type="com.datastax.driver.core.policies.LoadBalancingPolicy">
<bean class="com.datastax.driver.core.policies.RoundRobinPolicy"/>
</constructor-arg>
</bean>
<bean id="cassandraAdminDataSource" class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
<property name="port" value="9042"/>
<property name="contactPoints" value="172.16.129.68"/>
<property name="readConsistency" value="ONE"/>
<property name="writeConsistency" value="ONE"/>
<property name="loadBalancingPolicy" ref="loadBalancingPolicy"/>
</bean>
</beans>
persistance-settings.xml
<persistence keyspace="test" table="epc_table">
<keyPersistence class="java.lang.String" strategy="PRIMITIVE" column="imsi"/>
<valuePersistence strategy="BLOB"/>
</persistence>
I run the following command from the bin folder to start Ignite:
ignite.sh ../config/cassandra-config.xml
Now I want to take a look at the Cassandra table via sqlline. I've tried the following:
./sqlline.sh -u jdbc:cassandra://172.16.129.68:9042/test //(test is the name of the keyspace)
I get the following output:
No known driver to handle "jdbc:cassandra://172.16.129.68:9042/test". Searching for known drivers...
java.lang.NullPointerException
sqlline version 1.3.0
0: jdbc:cassandra://172.16.129.68:9042/test>
I've also tried:
./sqlline.sh -u jdbc:ignite:thin://172.16.129.68
but when I use "!tables", I'm not able to see any tables.
What exactly is missing? How can I access/modify the tables present in Cassandra using sqlline?
Operating System: RHEL 6.5
Apache Ignite is a key-value store, and by default there are no tables created that you are able to view with the JDBC connector. CacheStore is a way to integrate Ignite with an external DB or any other storage, and it loads data as key-value pairs.
In your config you told Ignite that you want to store and load entries in/from Cassandra, but Ignite still doesn't know the structure of the entries (by the way, Ignite really doesn't care what objects are put into it).
To be able to list tables and run queries on them, you need to create tables. For that you need to have the ignite-indexing module in the libs/ directory and configure QueryEntity (or indexed types, if you have annotated POJOs). Here is an example of such a configuration:
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="mycache"/>
<!-- Configure query entities -->
<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity">
<property name="keyType" value="java.lang.Long"/>
<property name="valueType" value="org.apache.ignite.examples.Person"/>
<property name="fields">
<map>
<entry key="id" value="java.lang.Long"/>
<entry key="orgId" value="java.lang.Long"/>
<entry key="firstName" value="java.lang.String"/>
<entry key="lastName" value="java.lang.String"/>
<entry key="resume" value="java.lang.String"/>
<entry key="salary" value="java.lang.Double"/>
</map>
</property>
<property name="indexes">
<list>
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="id"/>
</bean>
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="orgId"/>
</bean>
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="salary"/>
</bean>
</list>
</property>
</bean>
</list>
</property>
</bean>
If you configure that, you'll be able to list and query those tables over SQLLine. (Please note that you cannot query data that has not been loaded into Ignite. To load it, you may use IgniteCache.get() with the readThrough option enabled, or IgniteCache.loadCache() to load everything from the Cassandra table.)
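As a concrete illustration, here is a minimal Java sketch of a read-through lookup against that setup; the cache name "cache1" comes from your configuration, and the key value is only a placeholder:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class ReadThroughExample {
    public static void main(String[] args) {
        // Start a node with the same Spring configuration shown above.
        Ignite ignite = Ignition.start("config/cassandra-config.xml");

        // "cache1" is the cache declared in cassandra-config.xml.
        IgniteCache<String, Object> cache = ignite.cache("cache1");

        // With readThrough=true, a miss in Ignite falls through to Cassandra
        // via the CassandraCacheStore. "some-imsi" is only a placeholder key.
        Object row = cache.get("some-imsi");
        System.out.println(row);
    }
}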
To query Cassandra itself over JDBC, you need a JDBC driver for it; try, for example, DBSchema.

Need help starting the Ignite cache in a 2-node cluster

I am a novice in Ignite and am trying to use it to set up an in-memory cache. I did some basic configuration and got Ignite-based pluggable persistence working on a single node. Now I am planning to test the performance on a 2-node cluster, and I am setting up the Ignite configuration as below:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="countryCacheStoreFactory" class="javax.cache.configuration.FactoryBuilder" factory-method="factoryOf">
<constructor-arg><value>com.xyz.exploreignite.cache.CustomCacheStore</value></constructor-arg>
</bean>
<bean id="stateCacheStoreFactory" class="javax.cache.configuration.FactoryBuilder" factory-method="factoryOf">
<constructor-arg><value>com.xyz.exploreignite.cache.CustomStateCacheStore</value></constructor-arg>
</bean>
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="peerClassLoadingEnabled" value="false"/>
<property name="clientMode" value="true"/>
<property name="gridName" value="clusterGrid"/>
<property name="cacheConfiguration">
<list>
<!-- Partitioned cache example configuration (Atomic mode). -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="atomicityMode" value="ATOMIC"/>
<property name="backups" value="1"/>
<property name="name" value="customCountryCache"/>
<property name="readThrough" value="true"/>
<property name="writeThrough" value="true"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="writeBehindEnabled" value="true"/>
<property name="copyOnRead" value="true"/>
<property name="memoryMode" value="OFFHEAP_TIERED"/>
<property name="atomicWriteOrderMode" value="PRIMARY"/>
<property name="indexedTypes" >
<list>
<value>java.lang.Integer</value>
<value>com.xyz.exploreignite.pojo.Country</value>
</list>
</property>
<!-- Cache store. -->
<property name="cacheStoreFactory" ref="countryCacheStoreFactory"/>
</bean>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="atomicityMode" value="ATOMIC"/>
<property name="backups" value="1"/>
<property name="name" value="customStateCache"/>
<property name="readThrough" value="true"/>
<property name="writeThrough" value="true"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="writeBehindEnabled" value="true"/>
<property name="copyOnRead" value="true"/>
<property name="memoryMode" value="OFFHEAP_TIERED"/>
<property name="atomicWriteOrderMode" value="PRIMARY"/>
<property name="indexedTypes" >
<list>
<value>java.lang.Integer</value>
<value>com.xyz.exploreignite.pojo.State</value>
</list>
</property>
<!-- Cache store. -->
<property name="cacheStoreFactory" ref="stateCacheStoreFactory"/>
</bean>
</list>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead os static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">-->
<property name="addresses">
<list>
<value>127.0.0.1:47500..47509</value>
<value>172.26.49.1:47500..47509</value>
<!-- In distributed environment, replace with actual host IP address. -->
<value>172.26.49.2:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
Now, when starting Ignite with "bin/ignite.sh " on both nodes, it says it failed to connect to a server. When running a plain "bin/ignite.sh" in parallel with the above, each Ignite instance starts in standalone mode with only one client. I need both of them to use the shared instance. Please suggest possible issues in my deployment/execution.
Try to remove the <value>127.0.0.1:47500..47509</value> line from the discovery configuration. It doesn't make much sense for a distributed cluster.
That configuration has <property name="clientMode" value="true"/>. You need to have at least one node set to <property name="clientMode" value="false"/>, or else your clients won't have any server nodes to connect to.
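For instance, the node that should act as the server could use the same file with only that flag changed (a sketch; everything else stays as in your configuration):

<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Server node: holds the cache data that client nodes connect to. -->
    <property name="clientMode" value="false"/>
    <!-- ... remaining properties unchanged ... -->
</bean>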

When I try to start Hazelcast Management Center, it fails to start

I am trying to get Hazelcast 3.2.4 Management Center to start up in tc Server 3.2.4 (Tomcat 7). Even though it seems to start without errors in the logs, I can't access the page.
In Tomcat.log I can see
Line 203004: INFO: Deploying web application archive /vc2cmmkb019231n/app/pm13/process-1.3-build317/instances/vm1/webapps/mancenter-3.2.4.war
Line 203016: INFO: notifyApplicationLifecycle(localhost|mancenter-3.2.4)[START]
Line 203018: WARNING: handleStartEvent(localhost|mancenter-3.2.4)[START] failed (ConnectException) to send ping: No current registered listener
Line 204206: INFO: notifyApplicationLifecycle(localhost|mancenter-3.2.4)[STOP]
This indicates that mancenter has started and then stopped, but when I try to access the web page:
http://my-host-name:8080/mancenter-3.2.4/
it doesn't load up.
I am using the following hazelcast spring configuration:
<bean id="hcMonitorInstance" class="com.hazelcast.core.Hazelcast" destroy-method="shutdown" factory-method="newHazelcastInstance">
<constructor-arg>
<bean class="com.hazelcast.config.Config">
<property name="instanceName" value="hcMonitorInstanceConfig"/>
<property name="groupConfig">
<bean class="com.hazelcast.config.GroupConfig">
<property name="name" value="${px-monitor-monitor.com.hazelcast.config.GroupConfig.name}"/>
<property name="password" value="${px-monitor-monitor.com.com.hazelcast.config.GroupConfig.password}"/>
</bean>
</property>
<property name="networkConfig">
<bean class="com.hazelcast.config.NetworkConfig">
<property name="join" ref="join"/>
<property name="port" value="${px-monitor-monitor.hazelcastInstanceConfig.port}"/>
<property name="portAutoIncrement" value="true"/> <!--THIS is FALSE in CIWS-->
<property name="interfaces">
<bean class="com.hazelcast.config.InterfacesConfig">
<property name="interfaces">
<list>
<value>*</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
<property name="managementCenterConfig">
<bean class="com.hazelcast.config.ManagementCenterConfig">
<property name="enabled" value="${hz.management.center.enabled}"/>
<property name="url" value="${hz.management.center.url}"/>
</bean>
</property>
</bean>
</constructor-arg>
</bean>
Any ideas on why it won't start up? I have also looked in catalina.out for errors, but none show up. I've also tried hitting http://my-host:8080/mancenter/, but that doesn't work either. I can see the web app expanded in the Tomcat webapps folder, and it looks correct.
Run java -version. If it is 1.8.0_91, then that's the culprit. Here is what you do:
Point to an older version of Java (1.6 or 1.7 are fine) to run mancenter, with something similar to java -jar mancenter-3.6.2.war 8200 mancenter; I ran it on port 8200 so as not to conflict with the existing app running on 8080.
That should bring up mancenter; create the admin user.
Now you should be able to go back to your 'normal' Java version (most likely 1.8.0_91).
It appears the problem occurs with Java 1.8.0_91 when no user has been created yet. Let me know if this worked.
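For example (a sketch; the JDK path is only a placeholder for wherever an older Java is installed, and the war file name matches the version deployed in the question):

java -version
/path/to/jdk1.7.0/bin/java -jar mancenter-3.2.4.war 8200 mancenter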
