Caused by: java.util.concurrent.ExecutionException:
de.hybris.platform.solrfacetsearch.indexer.exceptions.IndexerRuntimeException:
de.hybris.platform.solrfacetsearch.indexer.exceptions.IndexerException:
Failed to index item with PK 8796431187969: No bean named
'mystoreBrandCategoryCodeValueProvider' available
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_171]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[?:1.8.0_171]
at de.hybris.platform.solrfacetsearch.indexer.strategies.impl.DefaultIndexerStrategy.runWorkers(DefaultIndexerStrategy.java:141)
~[solrfacetsearchserver.jar:?]
I get this error when I try to go to localhost for mystore.
My steps:
I created the B2B store from the B2C accelerator as described in the Hybris help.
It works, because I can visit the powertools website.
I copied all the ImpEx files from powertools to mystore, under mystoreinitialdata/import.
Then I went to Backoffice/WCMS and saw my store's URL.
I could also see my catalogs on the Catalogs tab: product, catalog and classification, just like powertools.
What I want: with the powertools ImpEx files copied to mystore, I want to see the powertools items under mystore.
But it gives the error I posted at the beginning.
I only copied the ImpEx files.
For example
mystore/solr.impex
has
;$solrIndexedType; color ;string;;;Refine;Alpha; 4000;true;;mystoreVariantCategoryCodeValueProvider;categoryFacetDisplayNameProvider;defaultTopValuesProvider
which I copied from powertools. But powertools has
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context.xsd">
<context:annotation-config/>
<alias alias="b2bAcceleratorCoreSystemSetup" name="powertoolsStoreSystemSetup" />
<bean id="powertoolsStoreSystemSetup" class="de.hybris.platform.powertoolsstore.setup.PowertoolsStoreSystemSetup" parent="abstractCoreSystemSetup">
<property name="powertoolsCoreDataImportService" ref="powertoolsCoreDataImportService"/>
<property name="powertoolsSampleDataImportService" ref="powertoolsSampleDataImportService"/>
</bean>
<bean id="powertoolsSampleDataImportService" class="de.hybris.platform.powertoolsstore.services.dataimport.impl.PowertoolsSampleDataImportService"
parent="sampleDataImportService">
</bean>
<bean id="powertoolsCoreDataImportService" class="de.hybris.platform.powertoolsstore.services.dataimport.impl.PowertoolsCoreDataImportService"
parent="coreDataImportService">
</bean>
<!-- Solr field value providers TEMPORARY FOR NOW SO DO NOT NEED TO DEPEND ON yb2bacceleratorcore -->
<bean id="powertoolsCategoryCodeValueProvider" parent="abstractCategoryCodeValueProvider">
<property name="categorySource" ref="powertoolsCategorySource"/>
</bean>
<bean id="powertoolsBrandCategoryCodeValueProvider" parent="abstractCategoryCodeValueProvider">
<property name="categorySource" ref="powertoolsBrandCategorySource"/>
</bean>
<bean id="powertoolsVariantCategoryCodeValueProvider" parent="abstractCategoryCodeValueProvider">
<property name="categorySource" ref="powertoolsVariantCategorySource"/>
</bean>
<bean id="powertoolsCategoryNameValueProvider" parent="abstractCategoryNameValueProvider">
<property name="categorySource" ref="powertoolsCategorySource"/>
</bean>
<bean id="powertoolsBrandCategoryNameValueProvider" parent="abstractCategoryNameValueProvider">
<property name="categorySource" ref="powertoolsBrandCategorySource"/>
</bean>
<bean id="powertoolsCategorySource" parent="variantCategorySource">
<property name="rootCategory" value="1"/> <!-- '1' is the root icecat category -->
</bean>
<bean id="powertoolsVariantCategorySource" parent="variantCategorySource"/>
<bean id="powertoolsBrandCategorySource" parent="defaultCategorySource">
<property name="rootCategory" value="brands"/> <!-- 'brands' is the root of the brands hierarchy -->
</bean>
<!-- Solr field value providers TEMPORARY FOR NOW SO DO NOT NEED TO DEPEND ON yb2bacceleratorcore -->
</beans>
This is in the powertools spring XML file.
There is no mystorestore folder, because the directory is powertoolsstore in
<bean id="powertoolsSampleDataImportService" class="de.hybris.platform.powertoolsstore.services.dataimport.impl.PowertoolsSampleDataImportService"
parent="sampleDataImportService">
and for
class="de.hybris.platform.powertoolsstore.setup.PowertoolsStoreSystemSetup"
mystore only has
mystore/initialdata/setup/InitialDataSystemSetup.java
and for
<bean id="powertoolsSampleDataImportService" class="de.hybris.platform.powertoolsstore.services.dataimport.impl.PowertoolsSampleDataImportService"
parent="sampleDataImportService">
mystore does not have a services folder.
What should I do? I want to see localhost with items, so I thought the best way was to copy from powertools.
Your Solr indexer cron job is looking for the bean 'mystoreBrandCategoryCodeValueProvider', so this bean must be defined in your Spring file; remove it from the configuration if it is not used.
Possible solutions:
1. Update solr.impex: remove this bean if you are not using it, then import the ImpEx via HAC, or run a system update and make sure your ImpEx is imported during the update. Also check your solrIndexedType: if some old field is still using this bean, remove it (via HMC).
2. Add this bean to your Spring file if you are using it.
I hope you have copied all the ImpEx files correctly.
Make sure to:
Copy each ImpEx file to the right folder path, e.g.
/mystoreinitialdata/resources/mystoreinitialdata/import/sampledata/productCatalogs/mystoreProductCatalog/products-media.impex
Update every powertools word reference to mystore.
Point siteResource to the correct path:
$siteResource=jar:com.mystore.initialdata.constants.MystoreInitialDataConstants&/mystoreinitialdata/import/sampledata/productCatalogs/$productCatalog
Correct the InitialDataSystemSetup class, like:
public static final String MYSTORE = "mystore";
@SystemSetup(type = Type.PROJECT, process = Process.ALL)
public void createProjectData(final SystemSetupContext context)
{
final List<ImportData> importData = new ArrayList<ImportData>();
final ImportData mystoreImportData = new ImportData();
mystoreImportData.setProductCatalogName(MYSTORE);
mystoreImportData.setContentCatalogNames(Arrays.asList(MYSTORE));
mystoreImportData.setStoreNames(Arrays.asList(MYSTORE));
importData.add(mystoreImportData);
/* these lines are commented out in the generated template; uncomment them to import mystoreinitialdata */
getCoreDataImportService().execute(this, context, importData);
getEventService().publishEvent(new CoreDataImportedEvent(context, importData));
getSampleDataImportService().execute(this, context, importData);
getEventService().publishEvent(new SampleDataImportedEvent(context, importData));
}
Correct/add the beans in your *core-spring.xml that you have used in your ImpEx, like:
<bean id="yAcceleratorInitialDataSystemSetup"
class="com.store.initialdata.setup.InitialDataSystemSetup"
parent="abstractCoreSystemSetup">
<property name="coreDataImportService" ref="coreDataImportService"/>
<property name="sampleDataImportService" ref="sampleDataImportService"/>
</bean>
<!-- Solr ValueProvider -->
<bean id="mystoreCategorySource" parent="variantCategorySource">
<property name="rootCategory" value="1" /> <!-- '1' is the root icecat category -->
</bean>
<bean id="mystoreVariantCategorySource" parent="variantCategorySource" />
<bean id="mystoreBrandCategorySource" parent="defaultCategorySource">
<property name="rootCategory" value="brands" /> <!-- 'brands' is the root of the brands hierarchy -->
</bean>
<bean id="mystoreCategoryCodeValueProvider" parent="abstractCategoryCodeValueProvider">
<property name="categorySource" ref="mystoreCategorySource" />
</bean>
<bean id="mystoreBrandCategoryCodeValueProvider" parent="abstractCategoryCodeValueProvider">
<property name="categorySource" ref="mystoreBrandCategorySource" />
</bean>
<bean id="mystoreVariantCategoryCodeValueProvider" parent="abstractCategoryCodeValueProvider">
<property name="categorySource" ref="mystoreVariantCategorySource" />
</bean>
<bean id="mystoreCategoryNameValueProvider" parent="abstractCategoryNameValueProvider">
<property name="categorySource" ref="mystoreCategorySource" />
</bean>
<bean id="mystoreBrandCategoryNameValueProvider" parent="abstractCategoryNameValueProvider">
<property name="categorySource" ref="mystoreBrandCategorySource" />
</bean>
Finally, update the running system:
hac > Platform > Update
Related
I'm trying to integrate Apache Ignite with Apache Cassandra (3.11.2), as I want to use Ignite to cache the data present in my already existing Cassandra database.
After going through the online resources, I've done the following till now:
Downloaded Apache Ignite.
Copied all the folders present in "libs/optional/" to "libs/" (I don't know which ones are required for Cassandra).
Created 3 XML files in the config folder, i.e. "cassandra-config.xml", "connection-settings.xml" and "persistance-settings.xml". Currently I'm using the same node (172.16.129.68) for both Cassandra and Ignite.
cassandra-config.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<!-- Cassandra connection settings -->
<import resource="connection-settings.xml" />
<!-- Persistence settings for 'cache1' -->
<bean id="cache1_persistence_settings" class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
<constructor-arg type="org.springframework.core.io.Resource" value="file:/home/cass/apache_ignite/apache-ignite-fabric-2.4.0-bin/config/persistance-settings.xml" />
</bean>
<!-- Ignite configuration -->
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="cacheConfiguration">
<list>
<!-- Configuring persistence for "cache1" cache -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="cache1"/>
<property name="readThrough" value="true"/>
<property name="writeThrough" value="true"/>
<property name="writeBehindEnabled" value="true"/>
<property name="cacheStoreFactory">
<bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
<property name="dataSourceBean" value="cassandraAdminDataSource"/>
<property name="persistenceSettingsBean" value="cache1_persistence_settings"/>
</bean>
</property>
</bean>
</list>
</property>
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead os static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>172.16.129.68:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
connection-settings.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="loadBalancingPolicy" class="com.datastax.driver.core.policies.TokenAwarePolicy">
<constructor-arg type="com.datastax.driver.core.policies.LoadBalancingPolicy">
<bean class="com.datastax.driver.core.policies.RoundRobinPolicy"/>
</constructor-arg>
</bean>
<bean id="cassandraAdminDataSource" class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
<property name="port" value="9042"/>
<property name="contactPoints" value="172.16.129.68"/>
<property name="readConsistency" value="ONE"/>
<property name="writeConsistency" value="ONE"/>
<property name="loadBalancingPolicy" ref="loadBalancingPolicy"/>
</bean>
</beans>
persistance-settings.xml
<persistence keyspace="test" table="epc_table">
<keyPersistence class="java.lang.String" strategy="PRIMITIVE" column="imsi"/>
<valuePersistence strategy="BLOB"/>
</persistence>
I run the following command from the bin folder to start Ignite:
ignite.sh ../config/cassandra-config.xml
Now, I want to take a look at the Cassandra table via sqlline. I've tried the following:
./sqlline.sh -u jdbc:cassandra://172.16.129.68:9042/test //(test is the name of the keyspace)
I get the following output:
No known driver to handle "jdbc:cassandra://172.16.129.68:9042/test". Searching for known drivers...
java.lang.NullPointerException
sqlline version 1.3.0
0: jdbc:cassandra://172.16.129.68:9042/test>
I've also tried:
./sqlline.sh -u jdbc:ignite:thin://172.16.129.68
but when I use "!tables", I'm not able to see any tables.
What exactly is missing? How do I access/modify the tables present in Cassandra using sqlline?
Operating System: RHEL 6.5
Apache Ignite is a key-value database, and there are no tables created by default that you can view with the JDBC connector. A CacheStore is a way to integrate Ignite with an external DB or any other storage, and it loads data as key-value pairs.
In your config you told Ignite that you want to store and load entries in/from Cassandra, but Ignite still doesn't know the entries' structure (by the way, Ignite really doesn't care what objects are put into it).
To be able to list tables and run queries on them, you need to create tables. For that you need to have ignite-indexing in the libs directory and set a QueryEntity, or indexed types if you have annotated POJOs. Here is an example of such a configuration:
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="mycache"/>
<!-- Configure query entities -->
<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity">
<property name="keyType" value="java.lang.Long"/>
<property name="valueType" value="org.apache.ignite.examples.Person"/>
<property name="fields">
<map>
<entry key="id" value="java.lang.Long"/>
<entry key="orgId" value="java.lang.Long"/>
<entry key="firstName" value="java.lang.String"/>
<entry key="lastName" value="java.lang.String"/>
<entry key="resume" value="java.lang.String"/>
<entry key="salary" value="java.lang.Double"/>
</map>
</property>
<property name="indexes">
<list>
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="id"/>
</bean>
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="orgId"/>
</bean>
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="salary"/>
</bean>
</list>
</property>
</bean>
</list>
</property>
</bean>
If you configure that, you'll be able to list and query those tables over sqlline. (Please note that you cannot query data that has not been loaded into Ignite. To load it, use IgniteCache.get() with the readThrough option enabled, or IgniteCache.loadCache() to load everything from the Cassandra table.)
To query Cassandra itself with JDBC, you need a JDBC driver for it; try, for example, DBSchema.
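For completeness, here is a minimal Java sketch of that load-then-query flow; it assumes the "cache1" configuration above plus a QueryEntity like the one shown (the Person value type and the SQL statement are illustrative, not from your config):
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class LoadAndQuery {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("config/cassandra-config.xml")) {
            IgniteCache<String, Object> cache = ignite.cache("cache1");
            // Pull all rows from the Cassandra table into the cache through the CacheStore.
            cache.loadCache(null);
            // SQL only works for caches that declare a QueryEntity or indexed types.
            cache.query(new SqlFieldsQuery("select count(*) from Person"))
                 .getAll()
                 .forEach(row -> System.out.println(row));
        }
    }
}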
I am using the S3 module to poll files from S3. It downloads the file to the local system and starts processing it. I am running this on a 3-node cluster with a module count of 1. Now let's assume the file has been downloaded from S3 to the local system and XD is processing it. If the XD node goes down, it will have processed only half the messages; when the server comes up, it will start processing the file again, so I will get duplicate messages. I am trying to switch to the idempotent pattern with a message store so I can change the module count to 3, but the duplicate message issue will still be there.
<int:poller fixed-delay="${fixedDelay}" default="true">
<int:advice-chain>
<ref bean="pollAdvise"/>
</int:advice-chain>
</int:poller>
<bean id="pollAdvise" class="org.springframework.integration.scheduling.PollSkipAdvice">
<constructor-arg ref="healthCheckStrategy"/>
</bean>
<bean id="healthCheckStrategy" class="ServiceHealthCheckPollSkipStrategy">
<property name="url" value="${url}"/>
<property name="doHealthCheck" value="${doHealthCheck}"/>
</bean>
<bean id="credentials" class="org.springframework.integration.aws.core.BasicAWSCredentials">
<property name="accessKey" value="${accessKey}"/>
<property name="secretKey" value="${secretKey}"/>
</bean>
<bean id="clientConfiguration" class="com.amazonaws.ClientConfiguration">
<property name="proxyHost" value="${proxyHost}"/>
<property name="proxyPort" value="${proxyPort}"/>
<property name="preemptiveBasicProxyAuth" value="false"/>
</bean>
<bean id="s3Operations" class="org.springframework.integration.aws.s3.core.CustomC1AmazonS3Operations">
<constructor-arg index="0" ref="credentials"/>
<constructor-arg index="1" ref="clientConfiguration"/>
<property name="awsEndpoint" value="s3.amazonaws.com"/>
<property name="temporaryDirectory" value="${temporaryDirectory}"/>
<property name="awsSecurityKey" value=""/>
</bean>
<!-- aws-endpoint="https://s3.amazonaws.com" -->
<int-aws:s3-inbound-channel-adapter aws-endpoint="s3.amazonaws.com"
bucket="${bucket}"
s3-operations="s3Operations"
credentials-ref="credentials"
file-name-wildcard="${fileNameWildcard}"
remote-directory="${remoteDirectory}"
channel="splitChannel"
local-directory="${localDirectory}"
accept-sub-folders="false"
delete-source-files="true"
archive-bucket="${archiveBucket}"
archive-directory="${archiveDirectory}">
</int-aws:s3-inbound-channel-adapter>
<int-file:splitter input-channel="splitChannel" output-channel="output" markers="false" charset="UTF-8">
<int-file:request-handler-advice-chain>
<bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
<property name="onSuccessExpression" value="payload.delete()"/>
</bean>
</int-file:request-handler-advice-chain>
</int-file:splitter>
<int:idempotent-receiver id="expressionInterceptor" endpoint="output"
metadata-store="redisMessageStore"
discard-channel="nullChannel"
throw-exception-on-rejection="false"
key-expression="payload"/>
<bean id="redisMessageStore" class="o.s.i.redis.store.RedisChannelMessageStore">
<constructor-arg ref="redisConnectionFactory"/>
</bean>
<bean id="redisConnectionFactory"
class="o.s.data.redis.connection.jedis.JedisConnectionFactory">
<property name="port" value="7379" />
</bean>
<int:channel id="output"/>
Update 2
This configuration worked for me. Thanks for your help.
<int:idempotent-receiver id="s3Interceptor" endpoint="s3splitter"
metadata-store="redisMessageStore"
discard-channel="nullChannel"
throw-exception-on-rejection="false"
key-expression="payload.name"/>
<bean id="redisMessageStore" class="org.springframework.integration.redis.metadata.RedisMetadataStore">
<constructor-arg ref="redisConnectionFactory"/>
</bean>
<bean id="redisConnectionFactory"
class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory">
<property name="port" value="6379" />
</bean>
<int:bridge id="batchBridge" input-channel="bridge" output-channel="output">
</int:bridge>
<int:idempotent-receiver id="splitterInterceptor" endpoint="batchBridge"
metadata-store="redisMessageStore"
discard-channel="nullChannel"
throw-exception-on-rejection="false"
key-expression="payload"/>
<int:channel id="output"/>
I had a few doubts I wanted to clarify, to check whether I am doing this right:
1) As you can see, I have an ExpressionEvaluatingRequestHandlerAdvice to delete the file. Will the file get deleted after I read the file into Redis, or after the last record is read?
2) I explored Redis using a desktop manager and I see I have a MetaData as the main key. Both (file and payload) metadata store keys and values go to the same table; is this fine, or should they be different metadata stores?
3) Can I use a hash of the payload instead of the payload as the key? Is there something like payload.hash?
Looks like it is a continuation of Multiple message processed, but unfortunately we don't see the <idempotent-receiver> configuration in your case.
According to your comment, it looks like you continue to use the SimpleMetadataStore, or you clean the shared one (Redis/Mongo) very often.
You should share more info on where to dig. Some logs and a DEBUG investigation would be good, too.
UPDATE
The Idempotent Receiver is exactly for an endpoint. In your config it is applied to a MessageChannel instead. That's why it doesn't do any proper work: the MessageChannel is simply ignored by the IdempotentReceiverInterceptor.
You should add an id to your <int-file:splitter> and use that id in the endpoint attribute. Not sure it would be a good idea to use the File as a key for idempotency; the name sounds better.
UPDATE 2
If a node goes down, and let's assume a file has been downloaded to the XD node (a file with a million records may be GBs in size), and I have processed half the records when the node crashes: when the server comes up, won't we process the same records again?
OK, I finally got your point! You already have an issue with the lines split from the file.
Also, I'd use an Idempotent Receiver for the <splitter> as well, to avoid duplicate files from S3.
To fix your use case, you should place one more endpoint, a <bridge>, between the <splitter> and the output channel, to skip duplicate lines with the Idempotent Receiver.
I am working on a Spring Batch project; this is the configuration of my reader.
<bean id="personneReaderCSV" class="org.springframework.batch.item.file.FlatFileItemReader" >
<property name="resource" value="input/personne.txt" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="lineTokenizer">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value=","/>
<property name="names" value="id,nom,prenom,civilite" />
</bean>
</property>
<property name="fieldSetMapper">
<bean class="org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper">
<property name="targetType" value="ma.ensa.Personne" />
</bean>
</property>
</bean>
</property>
</bean>
Now I want to use a file upload with JSF to choose the file I want to read data from. How can I make the value of the resource property dynamic?
Help, and thanks.
You may want to parameterize the input filename by setting a property in the job's JobParameters and then injecting it into your reader bean in this way:
<bean id="personneReaderCSV" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
<property name="resource" value="#{jobParameters[filename]}" />
<!-- ... -->
</bean>
Please note that you should also add scope="step" to your bean.
This is called late binding and is illustrated in §5.4 of the official Spring Batch documentation.
Spring Boot update:
@Bean
@StepScope
public FlatFileItemReader<CarInfo> itemReader(@Value("#{jobParameters['fileName']}") String fileLocation) {
FlatFileItemReader<CarInfo> flatFileItemReader = new FlatFileItemReader<>();
flatFileItemReader.setResource(new FileSystemResource(fileLocation));
flatFileItemReader.setName("CSV-Reader");
flatFileItemReader.setLinesToSkip(1);
flatFileItemReader.setLineMapper(lineMapper());
return flatFileItemReader;
}
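The lineMapper() referenced above is not shown; here is a minimal sketch of what it could look like (the CarInfo column names are assumptions):
import org.springframework.batch.item.file.LineMapper;
import org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper;
import org.springframework.batch.item.file.mapping.DefaultLineMapper;
import org.springframework.batch.item.file.transform.DelimitedLineTokenizer;

private LineMapper<CarInfo> lineMapper() {
    DefaultLineMapper<CarInfo> mapper = new DefaultLineMapper<>();
    DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
    tokenizer.setNames("make", "model", "year"); // hypothetical CSV columns matching CarInfo fields
    BeanWrapperFieldSetMapper<CarInfo> fieldSetMapper = new BeanWrapperFieldSetMapper<>();
    fieldSetMapper.setTargetType(CarInfo.class);
    mapper.setLineTokenizer(tokenizer);
    mapper.setFieldSetMapper(fieldSetMapper);
    return mapper;
}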
You can either save the MultipartFile into the resources folder and expose it as a Resource, or find a way to pass it as a resource object via the job parameter list, and then have it injected as shown above.
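To tie it together, a hedged sketch of launching the job with that parameter; the jobLauncher/importJob wiring and the run.id trick are assumptions, not part of the original answer:
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;

public void runImport(JobLauncher jobLauncher, Job importJob, String uploadedFilePath) throws Exception {
    JobParameters params = new JobParametersBuilder()
            .addString("fileName", uploadedFilePath) // resolved by #{jobParameters['fileName']} above
            .addLong("run.id", System.currentTimeMillis()) // keeps each run's parameter set unique
            .toJobParameters();
    jobLauncher.run(importJob, params);
}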
I have a file dropped at the FTP location which should be picked up by the ftp-inbound-adapter. This file is saved to a local directory, which is in turn polled by the Spring file-inbound-adapter. The fileNameGenerator bean is used in the file-inbound-adapter and decides the destination dynamically. I have also posted another question about the file in the local directory not being deleted. This is the problem I am facing.
This is my entire configuration:
<util:properties id="someid" location="classpath:config/config.properties"/>
<mvc:annotation-driven />
<context:component-scan base-package="com.dms" />
<bean
class="org.springframework.web.servlet.view.InternalResourceViewResolver">
<property name="prefix">
<value>/WEB-INF/</value>
</property>
<property name="suffix">
<value>.jsp</value>
</property>
</bean>
<context:property-placeholder location="classpath:config/jdbc.properties,classpath:config/config.properties,classpath:config/ftp.properties"/>
<bean id="dataSource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource"
>
<property name="driverClassName" value="${jdbc.driverClassName}"/>
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
</bean>
<bean id="sessionFactory"
class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
<property name="dataSource">
<ref bean="dataSource" />
</property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">${hibernate.dialect}</prop>
<prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
<prop key="hibernate.format_sql">${hibernate.format_sql}</prop>
<prop key="hibernate.generate_statistics">${hibernate.generate_statistics}</prop>
</props>
</property>
<property name="packagesToScan">
<list>
<value>com.dms.entity</value>
</list>
</property>
</bean>
<tx:annotation-driven />
<bean id="transactionManager"
class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<bean id="multipartResolver"
class="org.springframework.web.multipart.commons.CommonsMultipartResolver">
<!-- setting maximum upload size -->
<property name="maxUploadSize" value="10485760" />
</bean>
<!-- scheduler to pickup temp folder files to permanent location -->
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
<property name="triggers">
<list>
<ref bean="simpleTrigger" />
</list>
</property>
</bean>
<bean id="dmsFilesDetectionJob" class="com.dms.scheduler.job.DMSFilesDetectionJob">
</bean>
<bean id="dmsFilesDetectionJobDetail" class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
<property name="targetObject" ref="dmsFilesDetectionJob" />
<property name="targetMethod" value="pollTempFolder" />
<property name="concurrent" value="false" />
</bean>
<bean id="simpleTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
<property name="jobDetail" ref="dmsFilesDetectionJobDetail" />
<!-- <property name="cronExpression" value="1 * * * * ?" /> -->
<property name="cronExpression" value="0 0/1 * * * ?" />
</bean>
<bean id="fileNameGenerator" class="com.dms.util.FileNameGenerator"/>
<int-file:inbound-channel-adapter id="filesIn" directory="file:${paths.root}" channel="abc" filter="compositeFilter" >
<int:poller id="poller" fixed-delay="5000" />
</int-file:inbound-channel-adapter>
<int:channel id="abc"/>
<bean id="compositeFilter" class="org.springframework.integration.file.filters.CompositeFileListFilter">
<constructor-arg>
<list>
<!-- Ensures that the file is whole before processing it -->
<bean class="com.dms.util.CustomFileFilter"/>
<!-- Ensures files are picked up only once from the directory -->
<bean class="org.springframework.integration.file.filters.AcceptOnceFileListFilter" />
</list>
</constructor-arg>
</bean>
<int-file:outbound-channel-adapter channel="abc" id="filesOut"
directory-expression="#outPathBean.getPath()"
delete-source-files="true" filename-generator="fileNameGenerator" />
<bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter">
<property name="messageConverters">
<list>
<ref bean="jsonMessageConverter"/>
</list>
</property>
</bean>
<bean id="jsonMessageConverter" class="org.springframework.http.converter.json.MappingJackson2HttpMessageConverter">
<!-- <property name="prefixJson" value="false"/> -->
<!-- <property name="objectMapper">
<bean class="com.dms.util.HibernateAwareObjectMapper" />
</property> -->
<property name="supportedMediaTypes" value="application/json"/>
</bean>
<bean id="ftpClientFactory"
class="org.springframework.integration.ftp.session.DefaultFtpSessionFactory">
<property name="host" value="${ftp.ip}"/>
<property name="port" value="${ftp.port}"/>
<property name="username" value="${ftp.username}"/>
<property name="password" value="${ftp.password}"/>
<property name="clientMode" value="0"/>
<property name="fileType" value="2"/>
<property name="bufferSize" value="100000"/>
</bean>
<int-ftp:outbound-channel-adapter id="ftpOutbound"
channel="ftpChannel"
session-factory="ftpClientFactory"
charset="UTF-8"
remote-file-separator="/"
auto-create-directory="true"
remote-directory="."
use-temporary-file-name="true"
auto-startup="true"
/>
<int-ftp:inbound-channel-adapter id="ftpInbound"
channel="ftpChannel"
session-factory="ftpClientFactory"
charset="UTF-8"
local-directory="file:${paths.root}"
delete-remote-files="true"
temporary-file-suffix=".writing"
remote-directory="."
filename-pattern="${file.char}*${file.char}"
preserve-timestamp="true"
auto-startup="true">
<int:poller fixed-rate="1000"/>
</int-ftp:inbound-channel-adapter>
<int:channel id="ftpChannel" />
This is the error I am getting
18:02:34.655 E|LoggingHandler |org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'org.springframework.web.context.WebApplicationContext:/DMS/DMS-dispatcher.ftpChannel'.
This exception does not appear every time.
As you can see, I have added auto-startup="true" and used unique ids for both the channels and the adapters. Please let me know what is wrong here!
Thanks
I just had to deal with this, but with a file inbound-channel-adapter. The issue is intermittent, and only at startup. I think adapters with pollers can start pulling in messages before Spring Integration has fully initialized.
My fix is to disable the adapter at startup. The details of the adapter are not so important, other than that it has an id and is set not to autostart:
<!--
Read files from an "inbox" directory, placing them on an "inbox" channel...
-->
<int-file:inbound-channel-adapter id="inboxScanner"
directory="$import{inbox}"
auto-create-directory="true"
channel="fileInbox"
prevent-duplicates="false"
auto-startup="false">
<int:poller fixed-rate="$import{inbox.scan.rate.seconds}"
time-unit="SECONDS"
max-messages-per-poll="$import{inbox.max.imports.per.scan}"/>
</int-file:inbound-channel-adapter>
I then tap into Spring's application lifecycle events, and once the application context is finished being created (or refreshed), I tell the adapter to start:
<!--
Only start the scanner after the application has finished initializing...
-->
<int-event:inbound-channel-adapter event-types="org.springframework.context.event.ContextRefreshedEvent"
channel="contextRefreshEvents"/>
<int:publish-subscribe-channel id="contextRefreshEvents"/>
<int:outbound-channel-adapter channel="contextRefreshEvents"
expression="#inboxScanner.start()" />
The "event" components are from spring-integration-event.
First of all, your integration flow isn't clear. But I see that your issue is with this:
<int-ftp:inbound-channel-adapter id="ftpInbound" channel="ftpChannel"
There is really no one subscribed to that ftpChannel SubscribableChannel.
So that adapter starts its work and sends a message to the channel, but... Dispatcher has no subscribers.
Try to fix that issue and figure out how to go ahead.
EDIT
Not sure what you found in my answer so bad that it forced you to downvote it. Anyway.
There was a phase problem before, but as you see it has been fixed in version 4.1.
So, to get an immediate fix right now, you should set this on the inbound channel adapter:
phase="0x7fffffff" <!-- Integer.MAX_VALUE -->
By default the phase is 0, so the inbound channel adapter may start before the outbound channel adapter.
Or just upgrade to the latest Spring Integration!
I am using OpenDS for authentication in my application. I am able to authenticate the user successfully, but not able to get the user's roles.
The following is the configuration in the XML file:
<bean id="secondLdapProvider" class="org.springframework.security.ldap.authentication.LdapAuthenticationProvider">
<constructor-arg>
<bean class="org.springframework.security.ldap.authentication.BindAuthenticator">
<constructor-arg ref="contextSource" />
<property name="userSearch">
<bean id="userSearch" class="org.springframework.security.ldap.search.FilterBasedLdapUserSearch">
<constructor-arg index="0" value="ou=people"/>
<constructor-arg index="1" value="(uid={0})"/>
<constructor-arg index="2" ref="contextSource" />
</bean>
</property>
</bean>
</constructor-arg>
<constructor-arg>
<bean class="org.springframework.security.ldap.userdetails.DefaultLdapAuthoritiesPopulator">
<constructor-arg ref="contextSource" />
<constructor-arg value="ou=groups" />
<property name="groupSearchFilter" value="(member={0})"/>
<property name="rolePrefix" value="ROLE_"/>
<property name="searchSubtree" value="true"/>
<property name="convertToUpperCase" value="true"/>
</bean>
</constructor-arg>
</bean>
Please help me to get the roles.
Collection<? extends GrantedAuthority> roles = SecurityContextHolder.getContext().getAuthentication().getAuthorities();
That will return the roles ("authorities") as found by the DefaultLdapAuthoritiesPopulator.
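For example, a hedged usage sketch that dumps whatever was populated (the printed names depend entirely on your group entries):
import org.springframework.security.core.Authentication;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.context.SecurityContextHolder;

Authentication auth = SecurityContextHolder.getContext().getAuthentication();
for (GrantedAuthority authority : auth.getAuthorities()) {
    // With rolePrefix "ROLE_" and convertToUpperCase=true, a group "managers" prints as "ROLE_MANAGERS".
    System.out.println(authority.getAuthority());
}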
The search filter is "(member={0})" in the "groups" OU, i.e. roles are retrieved by searching for entries under "ou=groups" whose "member" attribute matches the user's DN. In your example LDIF in the comment below, it looks like you use "uniqueMember" instead of "member" as your group membership attribute, so adjust the groupSearchFilter accordingly.
If you read the documentation carefully (http://static.springsource.org/spring-security/site/docs/3.1.x/apidocs/org/springframework/security/ldap/userdetails/DefaultLdapAuthoritiesPopulator.html) you'll see examples of LDIF and how the different attributes map in the populator.