cluster event bus consumer not getting the message - hazelcast

I have two services (an HTTP server and a consumer) running on two different servers.
I am using Hazelcast for clustering.
Here is my code:
public class SampleRestService extends StartVerticle {
    @Override
    public void start() {
        Router router = Router.router(vertx);
        router.route().handler(BodyHandler.create());
        router.post("/subscribe").handler(this::handleSubscription);
        vertx.createHttpServer().requestHandler(router::accept).listen(9001);
    }

    public void handleSubscription(RoutingContext routingContext) {
        System.out.println("sending to event bus");
        vertx.eventBus().send("myserviceadd", routingContext.request().path(), ar -> {
            if (ar.succeeded()) {
                //req.response().setStatusCode(200).write(result.result().body()).end();
                System.out.println("received response from event bus");
                routingContext.response().setStatusCode(200).end("myhttp-response" + ar.result().body() + " ** " + routingContext.request().path());
            } else if (ar.failed()) {
                System.out.println(" response from event bus is failed");
                ar.cause().printStackTrace();
                routingContext.response().setStatusCode(500).end("failed to subscribe");
            }
        });
    }
}
Consumer code:
public class SampleRestComsumer extends SampleConsumerParent {
    @Override
    public void start() {
        vertx.eventBus().consumer("myserviceadd", message -> {
            System.out.println("Received message: " + message.body());
            message.reply(new JsonObject().put("responseCode", "OK").put("message", "This is your response to your event"));
        });
    }
}
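Neither snippet shows how the two verticles are actually deployed; for reference, here is a minimal sketch of starting one of them on a clustered Vert.x instance (assuming Vert.x 3.x with vertx-hazelcast on the classpath — adapt it to whatever bootstrap StartVerticle and SampleConsumerParent really use):
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class ConsumerLauncher {
    public static void main(String[] args) {
        // With vertx-hazelcast on the classpath, cluster.xml is picked up automatically.
        VertxOptions options = new VertxOptions();
        Vertx.clusteredVertx(options, res -> {
            if (res.succeeded()) {
                // The HTTP service would do the same with SampleRestService.
                res.result().deployVerticle(new SampleRestComsumer());
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}
The consumer has to run on a clustered instance like this for its handler registration to land in the shared __vertx.subs map referenced in the cluster.xml below.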
Here is my cluster.xml
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config
hazelcast-config-3.2.xsd"
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<properties>
<property name="hazelcast.wait.seconds.before.join">0</property>
<property name="hazelcast.logging.type">jdk</property>
</properties>
<group>
<name>dev-test</name>
<password>dev-pass</password>
</group>
<management-center enabled="false">http://localhost:8080/mancenter</management-center>
<network>
<port auto-increment="false" port-count="10000">5701</port>
<outbound-ports>
<!--
Allowed port range when connecting to other nodes.
0 or * means use system provided port.
-->
<ports>0</ports>
</outbound-ports>
<join>
<multicast enabled="false">
<!--<multicast-group>224.2.2.3</multicast-group>-->
<!--<multicast-port>54327</multicast-port>-->
</multicast>
<tcp-ip enabled="true">
<!-- <interface>127.0.0.1</interface> -->
<interface>10.27.92.45</interface>
<interface>10.27.92.47</interface>
</tcp-ip>
<aws enabled="false">
</aws>
</join>
<interfaces enabled="true">
<interface>10.27.92.*</interface>
</interfaces>
</network>
<partition-group enabled="false"/>
<executor-service name="default">
<pool-size>16</pool-size>
<!--Queue capacity. 0 means Integer.MAX_VALUE.-->
<queue-capacity>0</queue-capacity>
</executor-service>
<map name="__vertx.subs">
<!--
Number of backups. If 1 is set as the backup-count for example,
then all entries of the map will be copied to another JVM for
fail-safety. 0 means no backup.
-->
<backup-count>1</backup-count>
<time-to-live-seconds>0</time-to-live-seconds>
<max-idle-seconds>0</max-idle-seconds>
<!--
Valid values are:
NONE (no eviction),
LRU (Least Recently Used),
LFU (Least Frequently Used).
NONE is the default.
-->
<eviction-policy>NONE</eviction-policy>
<!--
Maximum size of the map. When max size is reached,
map is evicted based on the policy defined.
Any integer between 0 and Integer.MAX_VALUE. 0 means
Integer.MAX_VALUE. Default is 0.
-->
<max-size policy="PER_NODE">0</max-size>
<!--
When max. size is reached, specified percentage of
the map will be evicted. Any integer between 0 and 100.
If 25 is set for example, 25% of the entries will
get evicted.
-->
<eviction-percentage>25</eviction-percentage>
<merge-policy>
com.hazelcast.map.merge.LatestUpdateMapMergePolicy
</merge-policy>
</map>
<!-- Used internally in Vert.x to implement async locks -->
<semaphore name="__vertx.*">
<initial-permits>1</initial-permits>
</semaphore>
</hazelcast>
When I run both services, they get added to the cluster and both are up, but I get the error below on the server:
response from event bus is failed
(TIMEOUT,-1) Timed out after waiting 30000(ms) for a reply. address:
7dfec6d2-3eaf-4006-90b1-6c2eaddb28bd
at io.vertx.core.eventbus.impl.HandlerRegistration.sendAsyncResultFailure(HandlerRegistration.java:118)
at io.vertx.core.eventbus.impl.HandlerRegistration.lambda$new$0(HandlerRegistration.java:65)
Please let me know what I am doing wrong.

Related

unable to connect Hazelcast IMDG locally from .NET Hazelcast client

I have downloaded Hazelcast locally, but when I try to connect to it from the .NET client it does not get connected.
The exception below is shown in the Hazelcast console:
com.hazelcast.internal.nio.tcp.TcpIpConnection
WARNING: [10.253.200.102]:5701 [dev] [4.0.1] Connection[id=7, /10.253.200.102:5701->/10.253.200.102:56020, qualifier=null, endpoint=null, alive=false, connectionType=NONE] closed. Reason: Exception in Connection[id=7, /10.253.200.102:5701->/10.253.200.102:56020, qualifier=null, endpoint=null, alive=true, connectionType=NONE], thread=hz.epic_northcutt.IO.thread-in-0
java.lang.IllegalStateException: Unknown protocol: CB2
at com.hazelcast.internal.nio.tcp.UnifiedProtocolDecoder.onRead(UnifiedProtocolDecoder.java:116)
at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:137)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:382)
at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:367)
at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:293)
at com.hazelcast.internal.networking.nio.NioThread.run(NioThread.java:248)
**Default Hazelcast Config file:**
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.hazelcast.com/schema/config
http://www.hazelcast.com/schema/config/hazelcast-config-4.0.xsd">
<cluster-name>dev</cluster-name>
<network>
<port auto-increment="true" port-count="100">5701</port>
<outbound-ports>
<!--
Allowed port range when connecting to other nodes.
0 or * means use system provided port.
-->
<ports>0</ports>
</outbound-ports>
<join>
<multicast enabled="true">
<multicast-group>224.2.2.3</multicast-group>
<multicast-port>54327</multicast-port>
</multicast>
<tcp-ip enabled="false">
<interface>127.0.0.1</interface>
<member-list>
<member>127.0.0.1</member>
</member-list>
</tcp-ip>
<aws enabled="false">
<access-key>my-access-key</access-key>
<secret-key>my-secret-key</secret-key>
<!--optional, default is us-east-1 -->
<region>us-west-1</region>
<!--optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
<host-header>ec2.amazonaws.com</host-header>
<!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
<security-group-name>hazelcast-sg</security-group-name>
<tag-key>type</tag-key>
<tag-value>hz-nodes</tag-value>
</aws>
<gcp enabled="false">
<zones>us-east1-b,us-east1-c</zones>
</gcp>
<azure enabled="false">
<client-id>CLIENT_ID</client-id>
<client-secret>CLIENT_SECRET</client-secret>
<tenant-id>TENANT_ID</tenant-id>
<subscription-id>SUB_ID</subscription-id>
<cluster-id>HZLCAST001</cluster-id>
<group-name>RESOURCE-GROUP-NAME</group-name>
</azure>
<kubernetes enabled="false">
<namespace>MY-KUBERNETES-NAMESPACE</namespace>
<service-name>MY-SERVICE-NAME</service-name>
<service-label-name>MY-SERVICE-LABEL-NAME</service-label-name>
<service-label-value>MY-SERVICE-LABEL-VALUE</service-label-value>
</kubernetes>
<eureka enabled="false">
<self-registration>true</self-registration>
<namespace>hazelcast</namespace>
</eureka>
<discovery-strategies>
</discovery-strategies>
</join>
<interfaces enabled="false">
<interface>10.10.1.*</interface>
</interfaces>
<ssl enabled="false"/>
<socket-interceptor enabled="false"/>
<symmetric-encryption enabled="false">
<!--
encryption algorithm such as
DES/ECB/PKCS5Padding,
PBEWithMD5AndDES,
AES/CBC/PKCS5Padding,
Blowfish,
DESede
-->
<algorithm>PBEWithMD5AndDES</algorithm>
<!-- salt value to use when generating the secret key -->
<salt>thesalt</salt>
<!-- pass phrase to use when generating the secret key -->
<password>thepass</password>
<!-- iteration count to use when generating the secret key -->
<iteration-count>19</iteration-count>
</symmetric-encryption>
<failure-detector>
<icmp enabled="false"/>
</failure-detector>
</network>
<partition-group enabled="false"/>
<executor-service name="default">
<pool-size>16</pool-size>
<!--Queue capacity. 0 means Integer.MAX_VALUE.-->
<queue-capacity>0</queue-capacity>
</executor-service>
<security>
<client-block-unmapped-actions>true</client-block-unmapped-actions>
</security>
<queue name="default">
<!--
Maximum size of the queue. When a JVM's local queue size reaches the maximum,
all put/offer operations will get blocked until the queue size
of the JVM goes down below the maximum.
Any integer between 0 and Integer.MAX_VALUE. 0 means
Integer.MAX_VALUE. Default is 0.
-->
<max-size>0</max-size>
<!--
Number of backups. If 1 is set as the backup-count for example,
then all entries of the map will be copied to another JVM for
fail-safety. 0 means no backup.
-->
<backup-count>1</backup-count>
<!--
Number of async backups. 0 means no backup.
-->
<async-backup-count>0</async-backup-count>
<empty-queue-ttl>-1</empty-queue-ttl>
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
</queue>
<map name="default">
<!--
Data type that will be used for storing recordMap.
Possible values:
BINARY (default): keys and values will be stored as binary data
OBJECT : values will be stored in their object forms
NATIVE : values will be stored in non-heap region of JVM
-->
<in-memory-format>BINARY</in-memory-format>
<!--
Metadata creation policy for this map. Hazelcast may process objects of supported types ahead of time to
create additional metadata about them. This metadata then is used to make querying and indexing faster.
Metadata creation may decrease put throughput.
Valid values are:
CREATE_ON_UPDATE (default): Objects of supported types are pre-processed when they are created and updated.
OFF: No metadata is created.
-->
<metadata-policy>CREATE_ON_UPDATE</metadata-policy>
<!--
Number of backups. If 1 is set as the backup-count for example,
then all entries of the map will be copied to another JVM for
fail-safety. 0 means no backup.
-->
<backup-count>1</backup-count>
<!--
Number of async backups. 0 means no backup.
-->
<async-backup-count>0</async-backup-count>
<!--
Maximum number of seconds for each entry to stay in the map. Entries that are
older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
will get automatically evicted from the map.
Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0
-->
<time-to-live-seconds>0</time-to-live-seconds>
<!--
Maximum number of seconds for each entry to stay idle in the map. Entries that are
idle(not touched) for more than <max-idle-seconds> will get
automatically evicted from the map. Entry is touched if get, put or containsKey is called.
Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
-->
<max-idle-seconds>0</max-idle-seconds>
<eviction eviction-policy="NONE" max-size-policy="PER_NODE" size="0"/>
<!--
While recovering from split-brain (network partitioning),
map entries in the small cluster will merge into the bigger cluster
based on the policy set here. When an entry merge into the
cluster, there might an existing entry with the same key already.
Values of these entries might be different for that same key.
Which value should be set for the key? Conflict is resolved by
the policy set here. Default policy is PutIfAbsentMapMergePolicy
There are built-in merge policies such as
com.hazelcast.spi.merge.PassThroughMergePolicy; entry will be overwritten if merging entry exists for the key.
com.hazelcast.spi.merge.PutIfAbsentMergePolicy ; entry will be added if the merging entry doesn't exist in the cluster.
com.hazelcast.spi.merge.HigherHitsMergePolicy ; entry with the higher hits wins.
com.hazelcast.spi.merge.LatestUpdateMergePolicy ; entry with the latest update wins.
-->
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
<!--
Control caching of de-serialized values. Caching makes query evaluation faster, but it cost memory.
Possible Values:
NEVER: Never cache deserialized object
INDEX-ONLY: Caches values only when they are inserted into an index.
ALWAYS: Always cache deserialized values.
-->
<cache-deserialized-values>INDEX-ONLY</cache-deserialized-values>
</map>
<multimap name="default">
<backup-count>1</backup-count>
<value-collection-type>SET</value-collection-type>
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
</multimap>
<replicatedmap name="default">
<in-memory-format>OBJECT</in-memory-format>
<async-fillup>true</async-fillup>
<statistics-enabled>true</statistics-enabled>
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
</replicatedmap>
<list name="default">
<backup-count>1</backup-count>
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
</list>
<set name="default">
<backup-count>1</backup-count>
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
</set>
<reliable-topic name="default">
<read-batch-size>10</read-batch-size>
<topic-overload-policy>BLOCK</topic-overload-policy>
<statistics-enabled>true</statistics-enabled>
</reliable-topic>
<ringbuffer name="default">
<capacity>10000</capacity>
<backup-count>1</backup-count>
<async-backup-count>0</async-backup-count>
<time-to-live-seconds>0</time-to-live-seconds>
<in-memory-format>BINARY</in-memory-format>
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
</ringbuffer>
<flake-id-generator name="default">
<prefetch-count>100</prefetch-count>
<prefetch-validity-millis>600000</prefetch-validity-millis>
<epoch-start>1514764800000</epoch-start>
<node-id-offset>0</node-id-offset>
<bits-sequence>6</bits-sequence>
<bits-node-id>16</bits-node-id>
<allowed-future-millis>15000</allowed-future-millis>
<statistics-enabled>true</statistics-enabled>
</flake-id-generator>
<serialization>
<portable-version>0</portable-version>
</serialization>
<lite-member enabled="false"/>
<cardinality-estimator name="default">
<backup-count>1</backup-count>
<async-backup-count>0</async-backup-count>
<merge-policy batch-size="100">HyperLogLogMergePolicy</merge-policy>
</cardinality-estimator>
<scheduled-executor-service name="default">
<capacity>100</capacity>
<durability>1</durability>
<pool-size>16</pool-size>
<merge-policy batch-size="100">com.hazelcast.spi.merge.PutIfAbsentMergePolicy</merge-policy>
</scheduled-executor-service>
<crdt-replication>
<replication-period-millis>1000</replication-period-millis>
<max-concurrent-replication-targets>1</max-concurrent-replication-targets>
</crdt-replication>
<pn-counter name="default">
<replica-count>2147483647</replica-count>
<statistics-enabled>true</statistics-enabled>
</pn-counter>
<cp-subsystem>
<cp-member-count>0</cp-member-count>
<group-size>0</group-size>
<session-time-to-live-seconds>300</session-time-to-live-seconds>
<session-heartbeat-interval-seconds>5</session-heartbeat-interval-seconds>
<missing-cp-member-auto-removal-seconds>14400</missing-cp-member-auto-removal-seconds>
<fail-on-indeterminate-operation-state>false</fail-on-indeterminate-operation-state>
<raft-algorithm>
<leader-election-timeout-in-millis>2000</leader-election-timeout-in-millis>
<leader-heartbeat-period-in-millis>5000</leader-heartbeat-period-in-millis>
<max-missed-leader-heartbeat-count>5</max-missed-leader-heartbeat-count>
<append-request-max-entry-count>100</append-request-max-entry-count>
<commit-index-advance-count-to-snapshot>10000</commit-index-advance-count-to-snapshot>
<uncommitted-entry-count-to-reject-new-appends>100</uncommitted-entry-count-to-reject-new-appends>
<append-request-backoff-timeout-in-millis>100</append-request-backoff-timeout-in-millis>
</raft-algorithm>
</cp-subsystem>
<metrics enabled="true">
<management-center enabled="true">
<retention-seconds>5</retention-seconds>
</management-center>
<jmx enabled="true"/>
<collection-frequency-seconds>5</collection-frequency-seconds>
</metrics>
</hazelcast>
**Client Code:**
using Hazelcast.Client;
using Hazelcast.Config;
using Hazelcast.Core;
using System;
namespace HazelCastClientApp1
{
public class HazelcastClientFactory
{
static IHazelcastInstance client;
public static IHazelcastInstance GetClient()
{
if (client == null)
{
InitializeClient();
}
return client;
}
private static void InitializeClient()
{
var cfg = new ClientConfig();
var hazelcastUrl = "127.0.0.1:5701";
cfg.GetNetworkConfig().AddAddress(hazelcastUrl);
client = HazelcastClient.NewHazelcastClient(cfg);
Console.WriteLine("Local address : {0}", client.GetLocalEndpoint().GetSocketAddress());
Console.ReadKey();
}
}
}
The [4.0.1] and "Unknown protocol: CB2" mentions in the error message indicate that you are trying to connect a v3 client to a v4 server. The v3 client does not support v4 servers, so please make sure you are using a compatible client and server.
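For illustration, a minimal sketch of a 4.x client configured against the default 4.0.1 member above (shown with the Java client, since that is what the rest of this page uses; the Hazelcast .NET 4.x client follows the same pattern, and the class name here is only an example):
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

public class ClientV4Example {
    public static void main(String[] args) {
        ClientConfig config = new ClientConfig();
        // Hazelcast 4.x replaces <group><name> with <cluster-name>; "dev" matches the default member config above.
        config.setClusterName("dev");
        config.getNetworkConfig().addAddress("127.0.0.1:5701");

        HazelcastInstance client = HazelcastClient.newHazelcastClient(config);
        System.out.println("Connected members: " + client.getCluster().getMembers());
        client.shutdown();
    }
}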

log4j2 RollingFileAppender old file gets removed after 7 rollovers

I use the following log4j2 RollingFile appender in my webapp.
<Appenders>
<RollingFile name="logFile"
fileName="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log" immediateFlush="true"
filePattern="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log.%d{yyyy_MM_dd.HH_mm_ss}.%i">
<PatternLayout pattern="%d{yyyyMMdd-HHmmss.SSS}|%X{username}|%-5p|%t| %-100m (%c{1})%n"/>
<Policies>
<OnStartupTriggeringPolicy/>
</Policies>
</RollingFile>
</Appenders>
With filePattern="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log.%d{yyyy_MM_dd.HH_mm_ss}.%i", when the log is rolled over the old file gets renamed to a filename with an index number (specified with %i), so all old files should get renamed and preserved.
I roll the log over programmatically with the following code.
org.apache.logging.log4j.Logger logManagerLogger = LogManager.getLogger();
Map<String, org.apache.logging.log4j.core.Appender> appenders = ((org.apache.logging.log4j.core.Logger) logManagerLogger).getAppenders();
appenders.forEach((appenderName, appender) -> {
    if (appender instanceof RollingFileAppender) {
        LOGGER.info("Switching log for appender " + appenderName);
        ((RollingFileAppender) appender).getManager().rollover();
    }
});
But after 7 rollovers, the existing file gets removed (not renamed according to the specified filePattern) and the log is continued in a new file.
What could be the issue here?
Set a DefaultRolloverStrategy (the default max is 7). Your config will then be:
<Appenders>
<RollingFile name="logFile"
fileName="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log" immediateFlush="true"
filePattern="${env:SYSTEM_LOGS}/${env:LOG_FILE_NAME}.log.%d{yyyy_MM_dd.HH_mm_ss}.%i">
<PatternLayout pattern="%d{yyyyMMdd-HHmmss.SSS}|%X{username}|%-5p|%t| %-100m (%c{1})%n"/>
<Policies>
<OnStartupTriggeringPolicy/>
</Policies>
<DefaultRolloverStrategy max="100"/>
</RollingFile>
</Appenders>
Now it will keep up to 100 rolled-over log files.
If you want an unlimited number of rolled files: according to the Log4j2 documentation, from release 2.8 this can be done by setting the fileIndex attribute to nomax. For example:
<DefaultRolloverStrategy fileIndex="nomax" />

connection not getting established between hazelcast client and hazelcast server

The client and server configs for Hazelcast v3.6 are copied below. I can run the server (listening on 127.0.0.1:5706).
I get the following error on the Hazelcast client side:
[warn] c.h.c.c.n.ClientConnection - Connection [/127.0.0.1:5701] lost. Reason: java.lang.NullPointerException[null]
[warn] c.h.c.s.i.ClusterListenerSupport - Unable to get alive cluster connection, try in 2986 ms later, attempt 1 of 2.
hazelcast-client.xml
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast-client xsi:schemaLocation="http://www.hazelcast.com/schema/client-config hazelcast-client-config-3.6.xsd"
xmlns="http://www.hazelcast.com/schema/client-config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<group>
<name>dev</name>
<password>dev-pass</password>
</group>
<properties>
<property name="hazelcast.client.shuffle.member.list">true</property>
<property name="hazelcast.client.heartbeat.timeout">60000</property>
<property name="hazelcast.client.heartbeat.interval">5000</property>
<property name="hazelcast.client.event.thread.count">5</property>
<property name="hazelcast.client.event.queue.capacity">1000000</property>
<property name="hazelcast.client.invocation.timeout.seconds">120</property>
</properties>
<network>
<cluster-members>
<address>127.0.0.1:5701</address>
<!-- <address>0.0.0.0</address> -->
</cluster-members>
<smart-routing>true</smart-routing>
<redo-operation>true</redo-operation>
<connection-timeout>60000</connection-timeout>
<connection-attempt-period>3000</connection-attempt-period>
<connection-attempt-limit>2</connection-attempt-limit>
<socket-options>
<tcp-no-delay>false</tcp-no-delay>
<keep-alive>true</keep-alive>
<reuse-address>true</reuse-address>
<linger-seconds>3</linger-seconds>
<timeout>-1</timeout>
<buffer-size>32</buffer-size>
</socket-options>
<socket-interceptor enabled="false">
<class-name>com.hazelcast.examples.MySocketInterceptor</class-name>
<properties>
<property name="foo">bar</property>
</properties>
</socket-interceptor>
<ssl enabled="false">
<factory-class-name>com.hazelcast.examples.MySslFactory</factory-class-name>
</ssl>
<aws enabled="false" connection-timeout-seconds="11">
<inside-aws>true</inside-aws>
<access-key>TEST_ACCESS_KEY</access-key>
<secret-key>TEST_SECRET_KEY</secret-key>
<region>us-east-1</region>
<host-header>ec2.amazonaws.com</host-header>
<security-group-name>hazelcast-sg</security-group-name>
<tag-key>type</tag-key>
<tag-value>hz-nodes</tag-value>
</aws>
</network>
<executor-pool-size>40</executor-pool-size> <!-- reduce the pool size after profiling -->
<security>
<credentials>com.hazelcast.security.UsernamePasswordCredentials</credentials>
</security>
<listeners>
<!--<listener>com.hazelcast.examples.MembershipListener</listener>
<listener>com.hazelcast.examples.InstanceListener</listener>
<listener>com.hazelcast.examples.MigrationListener</listener>
-->
</listeners>
<!-- change to kryo -->
<!-- <serialization>
<portable-version>3</portable-version>
<use-native-byte-order>true</use-native-byte-order>
<byte-order>BIG_ENDIAN</byte-order>
<enable-compression>false</enable-compression>
<enable-shared-object>true</enable-shared-object>
<allow-unsafe>false</allow-unsafe>
<data-serializable-factories>
<data-serializable-factory factory-id="1">com.hazelcast.examples.DataSerializableFactory
</data-serializable-factory>
</data-serializable-factories>
<portable-factories>
<portable-factory factory-id="2">com.hazelcast.examples.PortableFactory</portable-factory>
</portable-factories>
<serializers>
<global-serializer>com.hazelcast.examples.GlobalSerializerFactory</global-serializer>
<serializer type-class="com.hazelcast.examples.DummyType"
class-name="com.hazelcast.examples.SerializerFactory"/>
</serializers>
<check-class-def-errors>true</check-class-def-errors>
</serialization>
-->
<native-memory enabled="false" allocator-type="POOLED">
<size unit="MEGABYTES" value="128" />
<min-block-size>1</min-block-size>
<page-size>1</page-size>
<metadata-space-percentage>40.5</metadata-space-percentage>
</native-memory>
<!--
<proxy-factories>
<proxy-factory class-name="com.hazelcast.examples.ProxyXYZ1" service="sampleService1"/>
<proxy-factory class-name="com.hazelcast.examples.ProxyXYZ2" service="sampleService1"/>
<proxy-factory class-name="com.hazelcast.examples.ProxyXYZ3" service="sampleService3"/>
</proxy-factories>
-->
<load-balancer type="random"/>
<!--
Beware that near-cache eviction configuration is different for NATIVE in-memory format.
Proper eviction configuration example for NATIVE in-memory format :
`<eviction max-size-policy="USED_NATIVE_MEMORY_SIZE" eviction-policy="LFU" size="60"/>`
-->
<!-- <near-cache name="default">
<max-size>2000</max-size>
<time-to-live-seconds>90</time-to-live-seconds>
<max-idle-seconds>100</max-idle-seconds>
<eviction-policy>LFU</eviction-policy>
<invalidate-on-change>true</invalidate-on-change>
<in-memory-format>OBJECT</in-memory-format>
<local-update-policy>INVALIDATE</local-update-policy>
</near-cache>
-->
<!--
<query-caches>
<query-cache name="query-cache-name" mapName="map-name">
<predicate type="class-name">com.hazelcast.examples.ExamplePredicate</predicate>
<entry-listeners>
<entry-listener include-value="true" local="false">com.hazelcast.examples.EntryListener</entry-listener>
</entry-listeners>
<include-value>true</include-value>
<batch-size>1</batch-size>
<buffer-size>16</buffer-size>
<delay-seconds>0</delay-seconds>
<in-memory-format>BINARY</in-memory-format>
<coalesce>false</coalesce>
<populate>true</populate>
<eviction eviction-policy="LRU" max-size-policy="ENTRY_COUNT" size="10000"/>
<indexes>
<index ordered="false">name</index>
</indexes>
</query-cache>
</query-caches>
-->
</hazelcast-client>
Hazelcast server
Here is the console message on the server and the server config file:
Console message:
INFO: [127.0.0.1]:5701 [dev] [3.6] Established socket connection between /127.0.1.1:5701 and /127.0.0.1:47301
Mar 10, 2016 12:01:48 PM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [127.0.0.1]:5701 [dev] [3.6] Connection [/127.0.0.1:47301] lost. Reason: java.io.EOFException[Remote socket closed!]
hazelcast.xml (server)
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.6.xsd"
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<group>
<name>dev</name>
<password>dev-pass</password>
</group>
<network>
<port auto-increment="true" port-count="100">5701</port>
<outbound-ports>
<ports>0-5900</ports>
</outbound-ports>
<join>
<multicast enabled="false">
<!--<multicast-group>224.2.2.3</multicast-group>
<multicast-port>54327</multicast-port>-->
</multicast>
<tcp-ip enabled="true">
<member>127.0.0.1</member>
</tcp-ip>
</join>
<interfaces enabled="true">
<interface>127.0.0.1</interface>
</interfaces>
<ssl enabled="false" />
<socket-interceptor enabled="false" />
<symmetric-encryption enabled="false">
<algorithm>PBEWithMD5AndDES</algorithm>
<!-- salt value to use when generating the secret key -->
<salt>thesalt</salt>
<!-- pass phrase to use when generating the secret key -->
<password>thepass</password>
<!-- iteration count to use when generating the secret key -->
<iteration-count>19</iteration-count>
</symmetric-encryption>
</network>
<partition-group enabled="false"/>
<executor-service name="default">
<pool-size>16</pool-size>
<!--Queue capacity. 0 means Integer.MAX_VALUE.-->
<queue-capacity>0</queue-capacity>
</executor-service>
<map name="userMap">
<async-backup-count>1</async-backup-count>
<near-cache>
<max-size>5000</max-size>
<invalidate-on-change>true</invalidate-on-change>
</near-cache>
<map-store enabled="false">
<class-name></class-name>
<write-delay-seconds>0</write-delay-seconds>
</map-store>
</map>
</hazelcast>
Client code
ClientConfig clientConfig = new XmlClientConfigBuilder().build(); //the xml file is being loaded
HazelcastInstance hazelcastClient = HazelcastClient.newHazelcastClient(clientConfig);
I do not have a firewall running on my computer. Any thoughts on what I may have misconfigured?
Update:
I am able to connect when I specify the IP address programmatically, so I am assuming the issue is either with my client config or with how I am reading it:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().addAddress("127.0.0.1");
HazelcastInstance hcastClient = HazelcastClient.newHazelcastClient(clientConfig);
The issue was caused by the following section in the client config XML file:
<security>
<credentials>com.hazelcast.security.UsernamePasswordCredentials</credentials>
</security>
Once this was commented out, the client was able to connect to the server. I will update once I gather more information regarding its usage.
It only seems to be available in the Enterprise edition, not the community one, but ideally it should have either let the connection be established or generated a meaningful error message.

spring integration-sftp inbound adapter with polling facility at server startup

I am trying to SFTP a file with Spring Integration, using a Maven web project.
I need a polling facility. If I start SftpInbound.java manually, the polling works; I need the polling to start at server startup.
The content of the Java file and configuration:
SftpInbound.java
package com.myproj.integration.bsy.sftp;
import java.io.File;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.integration.endpoint.SourcePollingChannelAdapter;
import org.springframework.integration.file.remote.RemoteFileTemplate;
import org.springframework.integration.file.remote.session.CachingSessionFactory;
import org.springframework.integration.file.remote.session.SessionFactory;
import org.springframework.messaging.Message;
import org.springframework.messaging.PollableChannel;
import org.springframework.scheduling.annotation.Scheduled;
import com.myproj.integration.bsy.sftp.*;
import com.jcraft.jsch.ChannelSftp.LsEntry;
public class SftpInboundReceive {
@Scheduled(fixedRate=5000)
public void inboundSftpPoll(){
ConfigurableApplicationContext context =
new ClassPathXmlApplicationContext("/META-INF/spring/integration/sftp/SftpInboundReceive-context.xml", this.getClass());
RemoteFileTemplate<LsEntry> template = null;
String file1 = "a.txt";
String file2 = "b.txt";
String file3 = "c.bar";
new File("local-dir", file1).delete();
new File("local-dir", file2).delete();
try {
PollableChannel localFileChannel = context.getBean("receiveChannel", PollableChannel.class);
@SuppressWarnings("unchecked")
SessionFactory<LsEntry> sessionFactory = context.getBean(CachingSessionFactory.class);
template = new RemoteFileTemplate<LsEntry>(sessionFactory);
System.out.println("here 1" +template);
SourcePollingChannelAdapter adapter = context.getBean("sftpInbondAdapter",SourcePollingChannelAdapter.class);
adapter.start();
Message<?> received = localFileChannel.receive();
System.out.println("Received first file message 1: " + received);
received = localFileChannel.receive();
System.out.println("Received second file message: " + received);
received = localFileChannel.receive(1000);
System.out.println("Third file was received as expected" +received);
}catch(Exception e){
e.printStackTrace();
}
finally {
SftpUtils.cleanUp(template, file1, file2, file3);
//context.close();
}
}
public static void main(String args[])
{
SftpInboundReceive oInboundReceiveSample = new SftpInboundReceive();
oInboundReceiveSample.inboundSftpPoll();
}
}
The xml file SftpInboundReceive-context.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:int="http://www.springframework.org/schema/integration"
xmlns:int-sftp="http://www.springframework.org/schema/integration/sftp"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.1.xsd
http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration-4.1.xsd
http://www.springframework.org/schema/integration/sftp http://www.springframework.org/schema/integration/sftp/spring-integration-sftp-4.1.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context-4.1.xsd">
<!-- <import resource="SftpSampleCommon.xml"/> -->
<context:property-placeholder order="1"
location="classpath:/sftpuser.properties" ignore-unresolvable="true"/>
<bean id="sftpSessionFactory"
class="org.springframework.integration.file.remote.session.CachingSessionFactory">
<constructor-arg ref="defaultSftpSessionFactory" />
</bean>
<!-- host=xxx.xx.128.143 port=22 username=xxxuser passphrase= private.keyfile=classpath:META-INF/keys/sftp_rsa -->
<bean id="defaultSftpSessionFactory"
class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<property name="host" value="${sftp.host}" />
<property name="port" value="${sftp.port}" />
<property name="user" value="${sftp.username}" />
<property name="privateKey" value="${private.keyfile}" />
<property name="privateKeyPassphrase" value="${passphrase}" />
</bean>
<!-- username & password from property file... tested <bean id="defaultSftpSessionFactory"
class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<property name="host" value="${sftp.host}"/> <property name="port" value="${sftp.port}"/>
<property name="user" value="${sftp.username}"/> <property name="password"
value="${sftp.password}"/> </bean> -->
<!-- hardcoded, username & password... tested <bean id="defaultSftpSessionFactory"
class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
<property name="host" value="xxx.xx.128.143"/> <property name="port" value="22"/>
<property name="user" value="xxxuser"/> <property name="password" value="xxxuser#123"/>
</bean> -->
<!-- Inbound channel adapter for SFTP call . with poll facility -->
<int-sftp:inbound-channel-adapter id="sftpInbondAdapter"
auto-startup="true" channel="receiveChannel" session-factory="sftpSessionFactory"
local-directory="file:/target/foo" remote-directory="${sftp.inboundremotedir}"
auto-create-local-directory="true" delete-remote-files="false"
filename-pattern="*.txt">
<int:poller fixed-rate="100000" max-messages-per-poll="1" />
</int-sftp:inbound-channel-adapter>
<int:channel id="receiveChannel">
<int:queue />
</int:channel>
</beans>
Stack trace
p-bio-8080-exec-3][org.springframework.security.web.FilterChainProxy] / at position 5 of 12 in additional filter chain; firing Filter: 'DefaultLoginPageGeneratingFilter'
16:02:58.368 DEBUG [http-bio-8080-exec-3][org.springframework.security.web.FilterChainProxy] / at position 6 of 12 in additional filter chain; firing Filter: 'BasicAuthenticationFilter'
16:02:58.368 DEBUG [http-bio-8080-exec-3][org.springframework.security.web.FilterChainProxy] / at position 7 of 12 in additional filter chain; firing Filter: 'RequestCacheAwareFilter'
16:02:58.368 DEBUG [http-bio-8080-exec-3][org.springframework.security.web.FilterChainProxy] / at position 8 of 12 in additional filter chain; firing Filter: 'SecurityContextHolderAwareRequestFilter'
16:02:58.370 DEBUG [http-bio-8080-exec-3][org.springframework.security.web.FilterChainProxy] / at position 9 of 12 in additional filter chain; firing Filter: 'AnonymousAuthenticationFilter'
16:02:58.371 DEBUG [http-bio-8080-exec-3][org.springframework.security.web.authentication.AnonymousAuthenticationFilter] Populated SecurityContextHolder with anonymous token: 'org.springframework.security.authentication.AnonymousAuthenticationToken#9055e4a6: Principal: anonymousUser; Credentials: [PROTECTED]; Authenticated: true; Details: org.springframework.security.web.authentication.WebAuthenticationDetails#957e: RemoteIpAddress: 127.0.0.1; SessionId: null; Granted Authorities: ROLE_ANONYMOUS'
16:02:58.371 DEBUG [http-bio-8080-exec-3][org.springframework.security.web.FilterChainProxy] / at position 10 of 12 in additional filter chain; firing Filter: 'SessionManagementFilter'
16:02:58.371 DEBUG [http-bio-8080-exec-3][org.springframework.security.web.FilterChainProxy] / at position 11 of 12 in additional filter chain; firing Filter: 'ExceptionTranslationFilter'
16:02:58.371 DEBUG [http-bio-8080-exec-3][org.springframework.security.web.FilterChainProxy] / at position 12 of 12 in additional filter chain; firing Filter: 'FilterSecurityInterceptor'
16:02:58.371 DEBUG [http-bio-8080-exec-3][org.springframework.security.web.util.matcher.AntPathRequestMatcher] Checking match of request : '/'; against '/services/employee/*'
16:02:58.371 DEBUG [http-bio-8080-exec-3][org.springframework.security.web.access.intercept.FilterSecurityInterceptor] Public object - authentication not attempted
16:02:58.371 TRACE [http-bio-8080-exec-3][org.springframework.web.context.support.XmlWebApplicationContext] Publishing event in Root WebApplicationContext: org.springframework.security.access.event.PublicInvocationEvent[source=FilterInvocation: URL: /]
16:02:58.372 DEBUG [http-bio-8080-exec-3][org.springframework.beans.factory.support.DefaultListableBeanFactory] Returning cached instance of singleton bean 'org.springframework.integration.internalMessagingAnnotationPostProcessor'
16:02:58.372 DEBUG [http-bio-8080-exec-3][org.springframework.security.web.FilterChainProxy] / reached end of additional filter chain; proceeding with original chain
16:02:58.375 TRACE [http-bio-8080-exec-3][org.springframework.web.servlet.DispatcherServlet] Bound request context to thread: SecurityContextHolderAwareRequestWrapper[ org.springframework.security.web.context.HttpSessionSecurityContextRepository$Servlet3SaveToSessionRequestWrapper#1b0b34e]
16:02:58.376 DEBUG [http-bio-8080-exec-3][org.springframework.web.servlet.DispatcherServlet] DispatcherServlet with name 'Information Exchange Gateway Integration' processing GET request for [/DummyDataIntg/]
16:02:58.376 TRACE [http-bio-8080-exec-3][org.springframework.web.servlet.DispatcherServlet] Testing handler map [org.springframework.integration.http.inbound.IntegrationRequestMappingHandlerMapping#cdca7] in DispatcherServlet with name 'Information Exchange Gateway Integration'
16:02:58.378 WARN [http-bio-8080-exec-3][org.springframework.web.servlet.PageNotFound] No mapping found for HTTP request with URI [/DummyDataIntg/] in DispatcherServlet with name 'Information Exchange Gateway Integration'
16:02:58.378 DEBUG [http-bio-8080-exec-3][org.springframework.security.web.context.HttpSessionSecurityContextRepository] SecurityContext is empty or contents are anonymous - context will not be stored in HttpSession.
16:02:58.378 TRACE [http-bio-8080-exec-3][org.springframework.web.servlet.DispatcherServlet] Cleared thread-bound request context: SecurityContextHolderAwareRequestWrapper[ org.springframework.security.web.context.HttpSessionSecurityContextRepository$Servlet3SaveToSessionRequestWrapper#1b0b34e]
16:02:58.378 DEBUG [http-bio-8080-exec-3][org.springframework.web.servlet.DispatcherServlet] Successfully completed request
16:02:58.378 TRACE [http-bio-8080-exec-3][org.springframework.web.context.support.XmlWebApplicationContext] Publishing event in WebApplicationContext for namespace 'Information Exchange Gateway Integration-servlet': ServletRequestHandledEvent: url=[/DummyDataIntg/]; client=[127.0.0.1]; method=[GET]; servlet=[Information Exchange Gateway Integration]; session=[null]; user=[null]; time=[6ms]; status=[OK]
16:02:58.378 TRACE [http-bio-8080-exec-3][org.springframework.web.context.support.XmlWebApplicationContext] Publishing event in Root WebApplicationContext: ServletRequestHandledEvent: url=[/DummyDataIntg/]; client=[127.0.0.1]; method=[GET]; servlet=[Information Exchange Gateway Integration]; session=[null]; user=[null]; time=[6ms]; status=[OK]
16:02:58.378 DEBUG [http-bio-8080-exec-3][org.springframework.beans.factory.support.DefaultListableBeanFactory] Returning cached instance of singleton bean 'org.springframework.integration.internalMessagingAnnotationPostProcessor'
16:02:58.378 DEBUG [http-bio-8080-exec-3][org.springframework.security.web.access.ExceptionTranslationFilter] Chain processed normally
16:02:58.378 DEBUG [http-bio-8080-exec-3][org.springframework.security.web.context.SecurityContextPersistenceFilter] SecurityContextHolder now cleared, as request processing completed
16:04:37.403 INFO [task-scheduler-4][org.springframework.integration.file.FileReadingMessageSource] Created message: [GenericMessage [payload=\target\foo\b.txt, headers={timestamp=1420540477403, id=f8c32928-411b-99b7-f4a0-0dd1b119fc44}]]
16:04:37.403 ERROR [task-scheduler-4][org.springframework.integration.handler.LoggingHandler] \target\foo\b.txt
16:06:17.403 INFO [task-scheduler-9][org.springframework.integration.file.FileReadingMessageSource] Created message: [GenericMessage [payload=\target\foo\brjb.txt, headers={timestamp=1420540577403, id=061634bf-0562-962b-e583-53a302cdb0d4}]]
16:06:17.403 ERROR [task-scheduler-9][org.springframework.integration.handler.LoggingHandler] \target\foo\brjb.txt
16:07:57.403 INFO [task-scheduler-10][org.springframework.integration.file.FileReadingMessageSource] Created message: [GenericMessage [payload=\target\foo\d.txt, headers={timestamp=1420540677403, id=abea1188-fc5a-15b0-c40c-73ea686a88c0}]]
16:07:57.403 ERROR [task-scheduler-10][org.springframework.integration.handler.LoggingHandler] \target\foo\d.txt
16:09:37.403 INFO [task-scheduler-4][org.springframework.integration.file.FileReadingMessageSource] Created message: [GenericMessage [payload=\target\foo\g.txt, headers={timestamp=1420540777403, id=461a8e48-6ebf-5d1c-ab3f-ce28e54e00b8}]]
16:09:37.403 ERROR [task-scheduler-4][org.springframework.integration.handler.LoggingHandler] \target\foo\g.txt
16:11:17.403 INFO [task-scheduler-3][org.springframework.integration.file.FileReadingMessageSource] Created message: [GenericMessage [payload=\target\foo\h.txt, headers={timestamp=1420540877403, id=4cbeeceb-5949-0e1c-1492-45ec17172480}]]
16:11:17.403 ERROR [task-scheduler-3][org.springframework.integration.handler.LoggingHandler] \target\foo\h.txt
16:12:57.403 INFO [task-scheduler-9][org.springframework.integration.file.FileReadingMessageSource] Created message: [GenericMessage [payload=\target\foo\p.txt, headers={timestamp=1420540977403, id=3767b4bb-4de4-3178-f693-5ac0bd94a766}]]
16:12:57.403 ERROR [task-scheduler-9][org.springframework.integration.handler.LoggingHandler] \target\foo\p.txt
16:14:37.403 INFO [task-scheduler-6][org.springframework.integration.file.FileReadingMessageSource] Created message: [GenericMessage [payload=\target\foo\wiki.txt, headers={timestamp=1420541077403, id=bd3d0082-9788-fc31-1596-ab7a79186c17}]]
16:14:37.403 ERROR [task-scheduler-6][org.springframework.integration.handler.LoggingHandler] \target\foo\wiki.txt
Error log in the Java file:
20:25:52.807 ERROR [task-scheduler-1][org.springframework.integration.handler.LoggingHandler] org.springframework.messaging.MessagingException: Problem occurred while synchronizing remote to local directory; nested exception is org.springframework.messaging.MessagingException: Failed to execute on session; nested exception is org.springframework.core.NestedIOException: Failed to list files; nested exception is 2: No such file
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:209)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:167)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:57)
at org.springframework.integration.endpoint.AbstractMessageSource.receive(AbstractMessageSource.java:64)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:124)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:192)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.access$000(AbstractPollingEndpoint.java:55)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:149)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:146)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller$1.run(AbstractPollingEndpoint.java:298)
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:52)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:49)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:292)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.springframework.messaging.MessagingException: Failed to execute on session; nested exception is org.springframework.core.NestedIOException: Failed to list files; nested exception is 2: No such file
at org.springframework.integration.file.remote.RemoteFileTemplate.execute(RemoteFileTemplate.java:343)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:167)
... 22 more
Caused by: org.springframework.core.NestedIOException: Failed to list files; nested exception is 2: No such file
at org.springframework.integration.sftp.session.SftpSession.list(SftpSession.java:103)
at org.springframework.integration.sftp.session.SftpSession.list(SftpSession.java:50)
at org.springframework.integration.file.remote.session.CachingSessionFactory$CachedSession.list(CachingSessionFactory.java:205)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer$1.doInSession(AbstractInboundFileSynchronizer.java:171)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer$1.doInSession(AbstractInboundFileSynchronizer.java:167)
at org.springframework.integration.file.remote.RemoteFileTemplate.execute(RemoteFileTemplate.java:334)
... 23 more
Caused by: 2: No such file
at com.jcraft.jsch.ChannelSftp.throwStatusError(ChannelSftp.java:2846)
at com.jcraft.jsch.ChannelSftp._stat(ChannelSftp.java:2198)
at com.jcraft.jsch.ChannelSftp._stat(ChannelSftp.java:2215)
at com.jcraft.jsch.ChannelSftp.ls(ChannelSftp.java:1565)
at com.jcraft.jsch.ChannelSftp.ls(ChannelSftp.java:1526)
at org.springframework.integration.sftp.session.SftpSession.list(SftpSession.java:91)
... 28 more
You need to take a look at how to start a Spring context from a web application, using web.xml or a WebApplicationInitializer in a Servlet 3 environment. In this case the SftpInboundReceive-context.xml can be part of the common ApplicationContext, and the polling facility (<int-sftp:inbound-channel-adapter>) will start automatically on application startup, which happens at server start, when the container sees the web context of your application.
Please read the Spring Framework docs for more: http://projects.spring.io/spring-framework/
Spring Integration is just an EIP extension and follows the same configuration and lifecycle rules.
I see that you are just using the SFTP sample from Spring Integration. You can find samples there for Tomcat and for Spring Boot as well.
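For example, a minimal sketch of a Servlet 3 WebApplicationInitializer that loads the context file from the question at container startup (the class name here is made up; only the context location is taken from the question):
import javax.servlet.ServletContext;
import javax.servlet.ServletException;

import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.XmlWebApplicationContext;

public class SftpWebAppInitializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        // Build the root context from the same XML used in the question, so the
        // <int-sftp:inbound-channel-adapter> starts polling as soon as the server starts.
        XmlWebApplicationContext rootContext = new XmlWebApplicationContext();
        rootContext.setConfigLocation("classpath:/META-INF/spring/integration/sftp/SftpInboundReceive-context.xml");

        // Register it with the servlet container; no main() method or @Scheduled polling loop is needed.
        servletContext.addListener(new ContextLoaderListener(rootContext));
    }
}
The web.xml equivalent is a contextConfigLocation context-param pointing at the same file plus the org.springframework.web.context.ContextLoaderListener listener.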

how to configure log4j in a web application using jboss 7.1.1?

The steps to configure log4j are:
Step 1.
Create the file: jboss-deployment-structure.xml
<jboss-deployment-structure>
<deployment>
<exclusions>
<module name="org.apache.log4j" slot="main"/>
<module name="org.apache.commons.logging"/>
</exclusions>
</deployment>
</jboss-deployment-structure>
Step 2.
Create the servlet: Log4jInitServlet.java
import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletConfig;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.commons.logging.LogFactory;
import org.apache.commons.logging.Log;
public class Log4JInitServlet extends HttpServlet{
/**
*
*/
private static final long serialVersionUID = -3677208571865966932L;
private static final Log log=LogFactory.getLog(Log4JInitServlet.class);
public Log4JInitServlet(){
}
protected void doGet(HttpServletRequest request
,HttpServletResponse response) throws ServletException,IOException{
PrintWriter out = response.getWriter();
out.write("<h1>LogTester Application Version Guide Erasmo Marciano 1.0</h1>");
out.write("<p>Loading this page generates multiple log events for the it.deinformatica.marciano.logtest category.</p>");
out.write("<p>Click on F5 reload this web-page.</p>");
out.write("<p>You wii find level log:debug|fatal|error|trace|info|warn</p>");
out.close();
for (int i = 1; i <= 20; i++) {
log.debug("This is DEBUG message. Event number " + i);
log.fatal("This is FATAL message. Event number " + i);
log.info("This is INFO message. Event number " + i);
log.error("This is ERROR message. Event number " + i);
log.trace("This is TRACE message. Event number " + i);
log.warn("This is WARN message. Event number " + i);
}
}
protected void doPost(HttpServletRequest request,
HttpServletResponse response) throws ServletException, IOException {
// TODO Auto-generated method stub
}
}
Step 3.
Create the file log4j.properties
### set log levels - for more verbose logging change 'info' to 'debug' ###
log4j.rootLogger=info, stdout
### direct log messages to stdout ###
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
What happens is that it only shows INFO messages and no DEBUG. What am I doing wrong,
or what should I do to display DEBUG messages with log4j?
Please help if anyone has had a similar problem and solved it.
I have also faced this problem with JBoss EAP 6 and have resolved it. My working code is as follows:
1. WEB-INF/jboss-deployment-structure.xml file
<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure>
<deployment>
<exclusions>
<!-- first exclude -->
<module name="javaee.api" />
<module name="org.apache.log4j"/>
<module name="org.slf4j"/>
</exclusions>
<dependencies>
<!-- then include filtered -->
<module name="org.apache.log4j" />
</dependencies>
<exclude-subsystems> <subsystem name="jpa" /> </exclude-subsystems>
</deployment>
</jboss-deployment-structure>
2. resources/log4j.properties file
# Root logger option
log4j.rootLogger=INFO, stdout, INF, DBG, ERR
#---------------------------------------------
# Redirect log messages to a log file
#---------------------------------------------
# Output to Tomcat home
logs.dir=${jboss.home}/standalone/log/
logs.fmt.dly=.yyyy-MM-dd
logs.fmt.date=yyyy-MM-dd HH:mm:ss
# Direct log messages to stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
# DEBUG Logs
log4j.appender.DBG.Threshold=DEBUG
log4j.appender.DBG.filter=org.apache.log4j.varia.LevelRangeFilter
#log4j.appender.DBG.filter.LevelMin=DEBUG
log4j.appender.DBG.filter.LevelMax=DEBUG
log4j.appender.DBG.filter.AcceptOnMatch=True
log4j.appender.DBG=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DBG.File=${jboss.server.log.dir}/app-debug-log.log
log4j.appender.DBG.DatePattern=${logs.fmt.dly}
log4j.appender.DBG.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.DBG.layout.ConversionPattern=%d{${logs.fmt.date}} %-5p [%c{1}:%L] - %m%n
# INFO Logs
log4j.appender.INF=org.apache.log4j.DailyRollingFileAppender
log4j.appender.INF.File=${jboss.server.log.dir}/app-info-log.log
log4j.appender.INF.DatePattern=${logs.fmt.dly}
log4j.appender.INF.Threshold=INFO
#log4j.appender.DBG.filter.LevelMin=INFO
log4j.appender.DBG.filter.LevelMax=INFO
log4j.appender.INF.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.INF.layout.ConversionPattern=%d{${logs.fmt.date}} %-5p [%c{1}:%L] - %m%n
# ERROR Logs
log4j.appender.ERR=org.apache.log4j.DailyRollingFileAppender
log4j.appender.ERR.File=${jboss.server.log.dir}/app-err-log.log
log4j.appender.ERR.DatePattern=${logs.fmt.dly}
log4j.appender.ERR.Threshold=ERROR
#log4j.appender.DBG.filter.LevelMin=ERROR
#log4j.appender.DBG.filter.LevelMax=ERROR
log4j.appender.ERR.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.ERR.layout.ConversionPattern=%d{${logs.fmt.date}} %-5p [%c{1}:%L] - %m%n
Try to also exclude JBoss logging, and slf4j if you use it.
Remember the xmlns in your XML, and put the file in the WEB-INF folder of your webapp:
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.1">
<deployment>
<exclusions>
<module name="org.apache.log4j" />
<module name="org.slf4j" />
<module name="org.apache.commons.logging"/>
<module name="org.log4j"/>
<module name="org.jboss.logging"/>
</exclusions>
</deployment>
</jboss-deployment-structure>
