I'm new to the Fabric world and I'm testing the Java SDK against Fabric 1.4.
I've already set up a smart contract through Visual Studio Code and the Fabric local environment.
Now I want to interact with it by calling a smart contract endpoint from Java.
private Optional<FileAsset> sendTransaction(HFClient client, String functionToCall, String... args) {
    Channel channel = client.getChannel(channelName);
    TransactionProposalRequest tpr = TransactionProposalRequest.newInstance(client.getUserContext());
    ChaincodeID fileChainCCId = ChaincodeID
            .newBuilder()
            .setName(chaincodeName)
            .setVersion(chaincodeVersion)
            .build();
    tpr.setChaincodeID(fileChainCCId);
    tpr.setChaincodeLanguage(Type.NODE);
    tpr.setFcn(functionToCall).setArgs(args);
    tpr.setProposalWaitTime(6_000);
    log.info("Calling function: {} with args: {}", functionToCall, args);

    Map<String, byte[]> tm2 = new HashMap<>();
    tm2.put("HyperLedgerFabric", "TransactionProposalRequest:JavaSDK".getBytes(UTF_8)); // Just some extra junk in the transient map
    tm2.put("method", "TransactionProposalRequest".getBytes(UTF_8)); // ditto
    tm2.put("result", ":)".getBytes(UTF_8)); // This should be returned in the payload; see chaincode why.
    tm2.put(EXPECTED_EVENT_NAME, EXPECTED_EVENT_DATA); // This should trigger an event; see chaincode why.
    try {
        tpr.setTransientMap(tm2);
    } catch (InvalidArgumentException e) {
        log.error("Error setting transient map. Err: {}", e.getMessage());
        return Optional.empty();
    }

    Collection<ProposalResponse> responses = null;
    try {
        responses = channel.sendTransactionProposal(tpr, channel.getPeers());
    } catch (ProposalException | InvalidArgumentException e) {
        log.error("Error sending transaction proposal: {}. Err: {}", tpr, e.getMessage());
        return Optional.empty();
    }

    List<ProposalResponse> invalid = responses.stream().filter(r -> r.isInvalid()).collect(Collectors.toList());
    if (!invalid.isEmpty()) {
        invalid.forEach(response -> log.error(response.getMessage()));
        return Optional.empty();
    }

    List<ProposalResponse> resps = responses.stream()
            .filter(r -> r.getStatus().equals(ProposalResponse.Status.SUCCESS))
            .collect(Collectors.toList());
    resps.forEach(resp -> log.info("Successful transaction proposal response Txid: {} from peer {}",
            resp.getTransactionID(), resp.getPeer().getName()));

    try {
        Collection<Set<ProposalResponse>> proposalConsistencySets = SDKUtils.getProposalConsistencySets(responses);
        if (proposalConsistencySets.size() != 1) {
            log.error("Expected only one set of consistent proposal responses but got {}",
                    proposalConsistencySets.size());
            return Optional.empty();
        }
    } catch (InvalidArgumentException e) {
        log.error("Error generating proposal consistency sets. Err: {}", e.getMessage());
        return Optional.empty();
    }

    // If all responses are fine, we can proceed with sending the transaction to an orderer.
    CompletableFuture<TransactionEvent> sentTransaction =
            channel.sendTransaction(resps, channel.getOrderers(), client.getUserContext());

    // For simplicity, we just request the result with a timeout and verify it for validity.
    BlockEvent.TransactionEvent event = null;
    try {
        event = sentTransaction.get(60, TimeUnit.SECONDS);
    } catch (InterruptedException | ExecutionException | TimeoutException e) {
        log.error("Error sending transaction to orderers. Err: {}", e.getMessage());
        return Optional.empty();
    }

    if (event.isValid()) {
        log.info("Transaction tx: " + event.getTransactionID() + " is completed.");
    } else {
        log.error("Transaction tx: " + event.getTransactionID() + " is invalid.");
    }
    return Optional.empty();
}
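For context, the method above is invoked roughly like this; the function name and argument are placeholders for the actual chaincode endpoint, not taken from my real code:

Optional<FileAsset> result = sendTransaction(client, "createFileAsset", fileAssetJson);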
On the line

CompletableFuture<TransactionEvent> sentTransaction = channel.sendTransaction(resps, channel.getOrderers(), client.getUserContext());

I get the following error:
UNIMPLEMENTED: unknown service orderer.AtomicBroadcast
2019-05-08 18:49:00.761 ERROR 15748 --- [ault-executor-0] o.hyperledger.fabric.sdk.OrdererClient : OrdererClient{id: 7, channel: mychannel, name: peer0.org1.example.com, url: grpc://localhost:17051} managed channel isTerminated: false, isShutdown: false, state: READY
2019-05-08 18:49:00.768 ERROR 15748 --- [ault-executor-0] o.hyperledger.fabric.sdk.OrdererClient : Received error org.hyperledger.fabric.sdk.OrdererClient$1#326e7643 UNIMPLEMENTED: unknown service orderer.AtomicBroadcast
io.grpc.StatusRuntimeException: UNIMPLEMENTED: unknown service orderer.AtomicBroadcast
at io.grpc.Status.asRuntimeException(Status.java:530) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:434) [grpc-stub-1.17.1.jar:1.17.1]
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39) [grpc-core-1.17.1.jar:1.17.1]
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23) [grpc-core-1.17.1.jar:1.17.1]
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40) [grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:694) [grpc-core-1.17.1.jar:1.17.1]
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39) [grpc-core-1.17.1.jar:1.17.1]
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23) [grpc-core-1.17.1.jar:1.17.1]
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40) [grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:397) [grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459) [grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63) [grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:546) [grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:467) [grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:584) [grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) [grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123) [grpc-core-1.17.1.jar:1.17.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_152]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_152]
2019-05-08 18:49:00.769 ERROR 15748 --- [ main] o.hyperledger.fabric.sdk.OrdererClient : OrdererClient{id: 7, channel: mychannel, name: peer0.org1.example.com, url: grpc://localhost:17051} grpc status Code:unknown service orderer.AtomicBroadcast, Description UNIMPLEMENTED,
2019-05-08 18:49:00.770 ERROR 15748 --- [ main] o.hyperledger.fabric.sdk.OrdererClient : OrdererClient{id: 7, channel: mychannel, name: peer0.org1.example.com, url: grpc://localhost:17051}sendTransaction error Channel mychannel, send transaction failed on orderer OrdererClient{id: 7, channel: mychannel, name: peer0.org1.example.com, url: grpc://localhost:17051}. Reason: UNIMPLEMENTED: unknown service orderer.AtomicBroadcast
org.hyperledger.fabric.sdk.exception.TransactionException: Channel mychannel, send transaction failed on orderer OrdererClient{id: 7, channel: mychannel, name: peer0.org1.example.com, url: grpc://localhost:17051}. Reason: UNIMPLEMENTED: unknown service orderer.AtomicBroadcast
at org.hyperledger.fabric.sdk.OrdererClient.sendTransaction(OrdererClient.java:236) ~[fabric-sdk-java-1.4.1.jar:na]
at org.hyperledger.fabric.sdk.Orderer.sendTransaction(Orderer.java:161) [fabric-sdk-java-1.4.1.jar:na]
at org.hyperledger.fabric.sdk.Channel.sendTransaction(Channel.java:4971) [fabric-sdk-java-1.4.1.jar:na]
at org.hyperledger.fabric.sdk.Channel.sendTransaction(Channel.java:4504) [fabric-sdk-java-1.4.1.jar:na]
at it.blockchain.fabric.service.impl.FileChainSmartContractServiceImpl.sendTransaction(FileChainSmartContractServiceImpl.java:205) [classes/:na]
at it.blockchain.fabric.service.impl.FileChainSmartContractServiceImpl.addFileAsset(FileChainSmartContractServiceImpl.java:65) [classes/:na]
at it.blockchain.fabric.clr.DemoCLR.run(DemoCLR.java:40) [classes/:na]
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:813) [spring-boot-2.1.4.RELEASE.jar:2.1.4.RELEASE]
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:797) [spring-boot-2.1.4.RELEASE.jar:2.1.4.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:324) [spring-boot-2.1.4.RELEASE.jar:2.1.4.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260) [spring-boot-2.1.4.RELEASE.jar:2.1.4.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248) [spring-boot-2.1.4.RELEASE.jar:2.1.4.RELEASE]
at it.blockchain.fabric.FilechainFabricDemoApplication.main(FilechainFabricDemoApplication.java:10) [classes/:na]
Caused by: io.grpc.StatusRuntimeException: UNIMPLEMENTED: unknown service orderer.AtomicBroadcast
at io.grpc.Status.asRuntimeException(Status.java:530) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:434) ~[grpc-stub-1.17.1.jar:1.17.1]
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:694) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:397) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:546) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:467) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:584) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) ~[grpc-core-1.17.1.jar:1.17.1]
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123) ~[grpc-core-1.17.1.jar:1.17.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_152]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_152]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_152]
2019-05-08 18:49:00.771 ERROR 15748 --- [ main] org.hyperledger.fabric.sdk.Channel : Channel mychannel unsuccessful sendTransaction to orderer peer0.org1.example.com (grpc://localhost:17051)
Any suggestions on how to resolve this?
You are targeting a peer (peer0.org1.example.com) for AtomicBroadcast, but you need to target an orderer. It looks like your network configuration is wrong: the orderer entry in your channel configuration points at a peer's address (grpc://localhost:17051).
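For reference, a minimal sketch of wiring a real orderer into the channel with the Java SDK; the orderer name and port below are assumptions (the defaults used by the Fabric samples), so substitute the values from your own network's connection profile:

// Exception handling elided for brevity.
Channel channel = client.newChannel(channelName);
channel.addPeer(client.newPeer("peer0.org1.example.com", "grpc://localhost:17051"));
// The orderer must point at the ordering service endpoint, not at a peer.
channel.addOrderer(client.newOrderer("orderer.example.com", "grpc://localhost:7050"));
channel.initialize();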
I'm running the Cassandra ReadWriteTest microbenchmark with IntelliJ IDEA on CentOS 7.
package org.apache.cassandra.test.microbench;

@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 2, timeUnit = TimeUnit.SECONDS)
@Fork(value = 1)
@Threads(1)
@State(Scope.Benchmark)
public class ReadWriteTest extends CQLTester
{
    static String keyspace;
    String table;
    String writeStatement;
    String readStatement;
    long numRows = 0;
    ColumnFamilyStore cfs;

    @Setup(Level.Trial)
    public void setup() throws Throwable
    {
        CQLTester.setUpClass();
        keyspace = createKeyspace("CREATE KEYSPACE %s with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 } and durable_writes = false");
        table = createTable(keyspace, "CREATE TABLE %s ( userid bigint, picid bigint, commentid bigint, PRIMARY KEY(userid, picid))");
        execute("use "+keyspace+";");
        writeStatement = "INSERT INTO "+table+"(userid,picid,commentid)VALUES(?,?,?)";
        readStatement = "SELECT * from "+table+" limit 100";
        cfs = Keyspace.open(keyspace).getColumnFamilyStore(table);
        cfs.disableAutoCompaction();

        //Warm up
        System.err.println("Writing 50k");
        for (long i = 0; i < 5000; i++)
            execute(writeStatement, i, i, i );
    }

    @TearDown(Level.Trial)
    public void teardown() throws IOException, ExecutionException, InterruptedException
    {
        CQLTester.cleanup();
    }

    @Benchmark
    public Object write() throws Throwable
    {
        numRows++;
        return execute(writeStatement, numRows, numRows, numRows );
    }

    @Benchmark
    public Object read() throws Throwable
    {
        return execute(readStatement);
    }

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(ReadWriteTest.class.getSimpleName())
                .build();
        new Runner(opt).run();
    }
}
It seems to run successfully at the start, then "java.lang.IndexOutOfBoundsException: Index: 0, Size: 0" occurs, like this:
01:37:25.884 [org.apache.cassandra.test.microbench.ReadWriteTest.read-jmh-worker-1] DEBUG org.apache.cassandra.db.ReadCommand - --in queryMemtableAndDiskInternal, partitionKey token:-2532556674411782010, partitionKey:DecoratedKey(-2532556674411782010, 6b657973706163655f30)
01:37:25.888 [org.apache.cassandra.test.microbench.ReadWriteTest.read-jmh-worker-1] INFO o.a.cassandra.db.ColumnFamilyStore - Initializing keyspace_0.globalReplicaTable
01:37:25.889 [org.apache.cassandra.test.microbench.ReadWriteTest.read-jmh-worker-1] DEBUG o.a.cassandra.db.DiskBoundaryManager - Refreshing disk boundary cache for keyspace_0.globalReplicaTable
01:37:25.890 [org.apache.cassandra.test.microbench.ReadWriteTest.read-jmh-worker-1] DEBUG o.a.cassandra.db.DiskBoundaryManager - Got local ranges [] (ringVersion = 0)
01:37:25.890 [org.apache.cassandra.test.microbench.ReadWriteTest.read-jmh-worker-1] DEBUG o.a.cassandra.db.DiskBoundaryManager - Updating boundaries from null to DiskBoundaries{directories=[DataDirectory{location=/home/cjx/Downloads/depart0/depart/data/data}], positions=null, ringVersion=0, directoriesVersion=0} for keyspace_0.globalReplicaTable
01:37:25.890 [org.apache.cassandra.test.microbench.ReadWriteTest.read-jmh-worker-1] DEBUG org.apache.cassandra.config.Schema - Adding org.apache.cassandra.config.CFMetaData#44380385[cfId=bfe68a70-569b-11ed-974a-eb765e7c415c,ksName=keyspace_0,cfName=globalReplicaTable,flags=[COMPOUND],params=TableParams{comment=, read_repair_chance=0.0, dclocal_read_repair_chance=0.1, bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, default_time_to_live=0, memtable_flush_period_in_ms=0, min_index_interval=128, max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 'ALL', 'rows_per_partition' : 'NONE'}, compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy, options={max_threshold=32, min_threshold=4}}, compression=org.apache.cassandra.schema.CompressionParams#920394f7, extensions={}, cdc=false},comparator=comparator(org.apache.cassandra.db.marshal.LongType),partitionColumns=[[] | [commentid]],partitionKeyColumns=[userid],clusteringColumns=[picid],keyValidator=org.apache.cassandra.db.marshal.LongType,columnMetadata=[userid, picid, commentid],droppedColumns={},triggers=[],indexes=[]] to cfIdMap
Writing 50k
<failure>
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:659)
at java.util.ArrayList.get(ArrayList.java:435)
at org.apache.cassandra.locator.TokenMetadata.firstToken(TokenMetadata.java:1079)
at org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:107)
at org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:4066)
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:619)
at org.apache.cassandra.db.Mutation.apply(Mutation.java:227)
at org.apache.cassandra.db.Mutation.apply(Mutation.java:232)
at org.apache.cassandra.db.Mutation.apply(Mutation.java:241)
at org.apache.cassandra.cql3.statements.ModificationStatement.executeInternalWithoutCondition(ModificationStatement.java:587)
at org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:581)
at org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:315)
at org.apache.cassandra.cql3.CQLTester.executeFormattedQuery(CQLTester.java:792)
at org.apache.cassandra.cql3.CQLTester.execute(CQLTester.java:780)
at org.apache.cassandra.test.microbench.ReadWriteTest.setup(ReadWriteTest.java:94)
at org.apache.cassandra.test.microbench.generated.ReadWriteTest_read_jmhTest._jmh_tryInit_f_readwritetest0_G(ReadWriteTest_read_jmhTest.java:438)
at org.apache.cassandra.test.microbench.generated.ReadWriteTest_read_jmhTest.read_Throughput(ReadWriteTest_read_jmhTest.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:453)
at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:437)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
I then ran it with ant microbench -Dbenchmark.name=ReadWriteTest, which leads to the same error.
Any and all help would be appreciated.
Thanks.
Of course I ran the microbenchmarks with Cassandra running; I also get an error when I shut Cassandra down:
INFO [StorageServiceShutdownHook] 2022-10-28 00:58:28,491 Server.java:176 - Stop listening for CQL clients
INFO [StorageServiceShutdownHook] 2022-10-28 00:58:28,491 Gossiper.java:1551 - Announcing shutdown
INFO [StorageServiceShutdownHook] 2022-10-28 00:58:28,492 StorageService.java:2454 - Node /192.168.199.135 state jump to shutdown
INFO [StorageServiceShutdownHook] 2022-10-28 00:58:30,495 MessagingService.java:981 - Waiting for messaging service to quiesce
INFO [ACCEPT-/192.168.199.135] 2022-10-28 00:58:30,495 MessagingService.java:1336 - MessagingService has terminated the accept() thread
WARN [MemtableFlushWriter:3] 2022-10-28 00:58:30,498 NativeLibrary.java:304 - open(/home/cjx/Downloads/depart0/depart/data/data/system_auth/roles-5bc52802de2535edaeab188eecebb090, O_RDONLY) failed, errno (2).
ERROR [MemtableFlushWriter:3] 2022-10-28 00:58:30,500 LogTransaction.java:272 - Transaction log [md_txn_flush_4ff62d10-5696-11ed-80e2-25f08a3965a3.log in /home/cjx/Downloads/depart0/depart/data/data/system_auth/roles-5bc52802de2535edaeab188eecebb090] indicates txn was not completed, trying to abort it now
ERROR [MemtableFlushWriter:3] 2022-10-28 00:58:30,505 LogTransaction.java:275 - Failed to abort transaction log [md_txn_flush_4ff62d10-5696-11ed-80e2-25f08a3965a3.log in /home/cjx/Downloads/depart0/depart/data/data/system_auth/roles-5bc52802de2535edaeab188eecebb090]
java.lang.RuntimeException: java.nio.file.NoSuchFileException: /home/cjx/Downloads/depart0/depart/data/data/system_auth/roles-5bc52802de2535edaeab188eecebb090/md_txn_flush_4ff62d10-5696-11ed-80e2-25f08a3965a3.log
at org.apache.cassandra.io.util.FileUtils.write(FileUtils.java:590) ~[main/:na]
at org.apache.cassandra.io.util.FileUtils.appendAndSync(FileUtils.java:571) ~[main/:na]
at org.apache.cassandra.db.lifecycle.LogReplica.append(LogReplica.java:85) ~[main/:na]
at org.apache.cassandra.db.lifecycle.LogReplicaSet.lambda$null$5(LogReplicaSet.java:210) ~[main/:na]
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:113) ~[main/:na]
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:103) ~[main/:na]
at org.apache.cassandra.db.lifecycle.LogReplicaSet.append(LogReplicaSet.java:210) ~[main/:na]
at org.apache.cassandra.db.lifecycle.LogFile.addRecord(LogFile.java:338) ~[main/:na]
at org.apache.cassandra.db.lifecycle.LogFile.abort(LogFile.java:255) ~[main/:na]
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:113) ~[main/:na]
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:103) ~[main/:na]
at org.apache.cassandra.utils.Throwables.perform(Throwables.java:98) ~[main/:na]
at org.apache.cassandra.db.lifecycle.LogTransaction$TransactionTidier.run(LogTransaction.java:273) [main/:na]
at org.apache.cassandra.db.lifecycle.LogTransaction$TransactionTidier.tidy(LogTransaction.java:257) [main/:na]
at org.apache.cassandra.utils.concurrent.Ref$GlobalState.release(Ref.java:322) [main/:na]
at org.apache.cassandra.utils.concurrent.Ref$State.ensureReleased(Ref.java:200) [main/:na]
at org.apache.cassandra.utils.concurrent.Ref.ensureReleased(Ref.java:120) [main/:na]
at org.apache.cassandra.db.lifecycle.LogTransaction.complete(LogTransaction.java:392) [main/:na]
at org.apache.cassandra.db.lifecycle.LogTransaction.doAbort(LogTransaction.java:409) [main/:na]
at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144) [main/:na]
at org.apache.cassandra.db.lifecycle.LifecycleTransaction.doAbort(LifecycleTransaction.java:244) [main/:na]
at org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144) [main/:na]
at org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1234) [main/:na]
at org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1180) [main/:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_332]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_332]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [main/:na]
at java.lang.Thread.run(Thread.java:750) ~[na:1.8.0_332]
Caused by: java.nio.file.NoSuchFileException: /home/cjx/Downloads/depart0/depart/data/data/system_auth/roles-5bc52802de2535edaeab188eecebb090/md_txn_flush_4ff62d10-5696-11ed-80e2-25f08a3965a3.log
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) ~[na:1.8.0_332]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[na:1.8.0_332]
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[na:1.8.0_332]
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214) ~[na:1.8.0_332]
at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434) ~[na:1.8.0_332]
at java.nio.file.Files.newOutputStream(Files.java:216) ~[na:1.8.0_332]
at java.nio.file.Files.write(Files.java:3351) ~[na:1.8.0_332]
at org.apache.cassandra.io.util.FileUtils.write(FileUtils.java:583) ~[main/:na]
... 27 common frames omitted
It's so weird, and I can't figure out how to fix it.
I encountered a problem when connecting to a NAS shared directory using spring-integration-smb.
I was able to connect to another NAS shared directory, but for the pre-prod NAS I hit this problem.
Also, the shared-server administrator confirmed that both directories have the same configuration.
You will find the stack trace below.
07 mars 2022;14:49:50.702 [scheduling-1] WARN jcifs.smb.SmbTransportImpl - Disconnecting transport while still in use Transport12[NAS03/XXXXXXXX:445,state=5,signingEnforced=false,usage=1]: [SmbSession[credentials=XXXXXXXXXX,targetHost=nas03,targetDomain=null,uid=0,connectionState=2,usage=1]]
07 mars 2022;14:49:50.702 [scheduling-1] WARN jcifs.smb.SmbSessionImpl - Logging off session while still in use SmbSession[credentials=XXXXXXXXX,targetHost=nas03,targetDomain=null,uid=0,connectionState=3,usage=1]:[SmbTree[share=PPD,service=null,tid=4,inDfs=false,inDomainDfs=false,connectionState=0,usage=2]]
07 mars 2022;14:49:50.737 [scheduling-1] ERROR o.s.i.handler.LoggingHandler - org.springframework.messaging.MessagingException: Problem occurred while synchronizing '' to local directory; nested exception is org.springframework.messaging.MessagingException: Failure occurred while copying '/test.csv' from the remote to the local directory; nested exception is org.springframework.core.NestedIOException: Failed to read resource [/test.csv].; nested exception is jcifs.smb.SmbException: The parameter is incorrect.
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:348)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:267)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:69)
at org.springframework.integration.endpoint.AbstractFetchLimitingMessageSource.doReceive(AbstractFetchLimitingMessageSource.java:47)
at org.springframework.integration.endpoint.AbstractMessageSource.receive(AbstractMessageSource.java:142)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:212)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:444)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.pollForMessage(AbstractPollingEndpoint.java:413)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.lambda$createPoller$4(AbstractPollingEndpoint.java:348)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.lambda$execute$0(ErrorHandlingTaskExecutor.java:57)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:55)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.lambda$createPoller$5(AbstractPollingEndpoint.java:341)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:95)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.springframework.messaging.MessagingException: Failure occurred while copying '/BE1_2_MOUVEMENTS_Valorisation_20211231_20220218_164451.csv' from the remote to the local directory; nested exception is org.springframework.core.NestedIOException: Failed to read resource [/BE1_2_MOUVEMENTS_Valorisation_20211231_20220218_164451.csv].; nested exception is jcifs.smb.SmbException: The parameter is incorrect.
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyRemoteContentToLocalFile(AbstractInboundFileSynchronizer.java:551)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyFileToLocalDirectory(AbstractInboundFileSynchronizer.java:488)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyIfNotNull(AbstractInboundFileSynchronizer.java:403)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.transferFilesFromRemoteToLocal(AbstractInboundFileSynchronizer.java:386)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.lambda$synchronizeToLocalDirectory$0(AbstractInboundFileSynchronizer.java:342)
at org.springframework.integration.file.remote.RemoteFileTemplate.execute(RemoteFileTemplate.java:452)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:341)
... 21 more
Caused by: org.springframework.core.NestedIOException: Failed to read resource [/BE1_2_MOUVEMENTS_Valorisation_20211231_20220218_164451.csv].; nested exception is jcifs.smb.SmbException: The parameter is incorrect.
at org.springframework.integration.smb.session.SmbSession.read(SmbSession.java:188)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.copyRemoteContentToLocalFile(AbstractInboundFileSynchronizer.java:545)
... 27 more
Caused by: jcifs.smb.SmbException: The parameter is incorrect.
at jcifs.smb.SmbTransportImpl.checkStatus2(SmbTransportImpl.java:1467)
at jcifs.smb.SmbTransportImpl.checkStatus(SmbTransportImpl.java:1578)
at jcifs.smb.SmbTransportImpl.sendrecv(SmbTransportImpl.java:1027)
at jcifs.smb.SmbTransportImpl.send(SmbTransportImpl.java:1549)
at jcifs.smb.SmbSessionImpl.send(SmbSessionImpl.java:409)
at jcifs.smb.SmbTreeImpl.send(SmbTreeImpl.java:472)
at jcifs.smb.SmbTreeConnection.send0(SmbTreeConnection.java:404)
at jcifs.smb.SmbTreeConnection.send(SmbTreeConnection.java:318)
at jcifs.smb.SmbTreeConnection.send(SmbTreeConnection.java:298)
at jcifs.smb.SmbTreeHandleImpl.send(SmbTreeHandleImpl.java:130)
at jcifs.smb.SmbTreeHandleImpl.send(SmbTreeHandleImpl.java:117)
at jcifs.smb.SmbFile.withOpen(SmbFile.java:1775)
at jcifs.smb.SmbFile.withOpen(SmbFile.java:1744)
at jcifs.smb.SmbFile.queryPath(SmbFile.java:793)
at jcifs.smb.SmbFile.exists(SmbFile.java:879)
at jcifs.smb.SmbFile.isFile(SmbFile.java:1102)
at org.springframework.integration.smb.session.SmbSession.read(SmbSession.java:182)
... 28 more
Here is my code:
@Bean
public SmbSessionFactory smbSessionFactory() {
    VaultResponse vaultResponse = vaultTemplate
            .opsForKeyValue(vaultPath, VaultKeyValueOperationsSupport.KeyValueBackend.KV_2)
            .get(vaultSecretsPath.toLowerCase());
    SmbSessionFactory smbSession = new SmbSessionFactory();
    smbSession.setHost(properties.getNasHost());
    smbSession.setPort(properties.getNasPort());
    smbSession.setDomain(properties.getNasDomain());
    if (vaultResponse != null) {
        Map<String, Object> data = vaultResponse.getData();
        smbSession.setUsername(data != null && data.get("nasUsername") != null ? (String) data.get("nasUsername") : "");
        smbSession.setPassword(data != null && data.get("nasPassword") != null ? (String) data.get("nasPassword") : "");
    }
    smbSession.setShareAndDir(properties.getNasShareAndDir());
    smbSession.setReplaceFile(true);
    smbSession.setSmbMinVersion(DialectVersion.SMB1);
    smbSession.setSmbMaxVersion(DialectVersion.SMB311);
    return smbSession;
}
Thank you in advance.
I am using Hazelcast 3.8.3. Occasionally I get an OperationTimeoutException (GetOperation). I have 4 nodes in the same Hazelcast cluster.
The application throws this error:
...
Caused by: java.util.concurrent.ExecutionException: com.hazelcast.core.OperationTimeoutException: GetOperation invocation failed to complete due to operation-heartbeat-timeout. Current time: 2020-04-23 19:17:37.414. Start time: 2020-04-23 19:14:16.456. Total elapsed time: 200958 ms. Last operation heartbeat: never. Last operation heartbeat from member: 2020-04-23 19:16:59.335. Invocation{op=com.hazelcast.map.impl.operation.GetOperation{serviceName='hz:impl:mapService', identityHash=300649867, partitionId=30, replicaIndex=0, callId=-10707, invocationTime=1587665497325 (2020-04-23 19:11:37.325), waitTimeout=-1, callTimeout=180000, name=ENRICHMENT-CACHE-DEFAULT}, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeoutMillis=180000, firstInvocationTimeMs=1587665656456, firstInvocationTime='2020-04-23 19:14:16.456', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 01:00:00.000', target=[10.195.40.212]:5701, pendingResponse={VOID}, backupsAcksExpected=0, backupsAcksReceived=0, connection=Connection[id=56, /10.195.40.210:5702->/10.195.40.212:60231, endpoint=[10.195.40.212]:5701, alive=true, type=MEMBER]}
at java.util.concurrent.FutureTask.report(Unknown Source)
at java.util.concurrent.FutureTask.get(Unknown Source)
at com.experian.eda.ea.core.enrichmentpack.MultiThreadEnrichmentPackManager.evaluateAllPreEnrichmentRulesWithSizePopulator(MultiThreadEnrichmentPackManager.java:123)
... 90 common frames omitted
Caused by: com.hazelcast.core.OperationTimeoutException: GetOperation invocation failed to complete due to operation-heartbeat-timeout. Current time: 2020-04-23 19:17:37.414. Start time: 2020-04-23 19:14:16.456. Total elapsed time: 200958 ms. Last operation heartbeat: never. Last operation heartbeat from member: 2020-04-23 19:16:59.335. Invocation{op=com.hazelcast.map.impl.operation.GetOperation{serviceName='hz:impl:mapService', identityHash=300649867, partitionId=30, replicaIndex=0, callId=-10707, invocationTime=1587665497325 (2020-04-23 19:11:37.325), waitTimeout=-1, callTimeout=180000, name=ENRICHMENT-CACHE-DEFAULT}, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeoutMillis=180000, firstInvocationTimeMs=1587665656456, firstInvocationTime='2020-04-23 19:14:16.456', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 01:00:00.000', target=[10.195.40.212]:5701, pendingResponse={VOID}, backupsAcksExpected=0, backupsAcksReceived=0, connection=Connection[id=56, /10.195.40.210:5702->/10.195.40.212:60231, endpoint=[10.195.40.212]:5701, alive=true, type=MEMBER]}
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.newOperationTimeoutException(InvocationFuture.java:151)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolve(InvocationFuture.java:101)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveAndThrowIfException(InvocationFuture.java:75)
at com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:155)
at com.hazelcast.map.impl.proxy.MapProxySupport.invokeOperation(MapProxySupport.java:373)
at com.hazelcast.map.impl.proxy.MapProxySupport.getInternal(MapProxySupport.java:304)
at com.hazelcast.map.impl.proxy.MapProxyImpl.get(MapProxyImpl.java:114)
Here are the settings I use:
<map name="ENRICHMENT-CACHE-*">
    <in-memory-format>BINARY</in-memory-format>
    <backup-count>0</backup-count>
    <max-idle-seconds>600</max-idle-seconds>
    <eviction-policy>LRU</eviction-policy>
    <max-size policy="PER_NODE">5000</max-size>
    <eviction-percentage>25</eviction-percentage>
</map>

<properties>
    <property name="hazelcast.logging.type">slf4j</property>
    <property name="hazelcast.client.invocation.timeout.seconds">360</property>
    <property name="hazelcast.operation.call.timeout.millis">180000</property>
</properties>
I see the Hazelcast code is written this way:
@Override
public final V get() throws InterruptedException, ExecutionException {
    Object response = registerWaiter(Thread.currentThread(), null);
    if (response != VOID) {
        // no registration was done since a value is available.
        return resolveAndThrowIfException(response);
    }
    boolean interrupted = false;
    try {
        for (;;) {
            park();
            if (isDone()) {
                return resolveAndThrowIfException(state);
            } else if (Thread.interrupted()) {
                interrupted = true;
                onInterruptDetected();
            }
        }
    } finally {
        restoreInterrupt(interrupted);
    }
}
The current thread is parked, waiting to be unparked or interrupted. Could the heartbeat timeout exception have interrupted the thread and woken it in an erroneous state?
If so, would incrementing hazelcast.max.no.heartbeat.seconds help?
If that is not the case, do you have any idea why the error occurred and how to solve it?
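(For reference, a hedged example of how that property would be set, alongside the properties block shown above; the value 300 is an arbitrary placeholder, not a recommendation:

<property name="hazelcast.max.no.heartbeat.seconds">300</property>
)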
Thank you.
Regards,
Khairul
I have an application that monitors an FTP folder for a specific CSV file, foo.csv. Once the file is located, the application pulls it to my local machine and generates a new output format, bar.csv; it then sends the new file, finalBEY.csv, back to the FTP folder and erases it from local.
Now I have introduced a process using publishSubscribeChannel: the first subscriber transforms the file into a job-launch message and hands it to a jobLaunchingGateway, which reads finalBEY.csv with Spring Batch and prints it to the console. This is not working, because finalBEY.csv is deleted from the local folder after being sent back to the FTP. I am using .channel("nullChannel") after the jobLaunchingGateway in the first subscriber, which is supposed to hold things up until the batch job replies before moving on to the second subscriber, which sends the file to the FTP and removes it from local. That does not seem to be the case: the file is removed from local first, so the batch job cannot find finalBEY.csv and throws the error I am pasting below, together with the code.
If I remove the advice from the second subscriber, it works fine, as the file is no longer deleted from local.
Can you please assist with this matter?
public IntegrationFlow localToFtpFlow(Branch myBranch) {
    return IntegrationFlows.from(Files.inboundAdapter(new File(myBranch.getBranchCode()))
                    .filter(new ChainFileListFilter<File>()
                            .addFilter(new RegexPatternFileListFilter("final" + myBranch.getBranchCode() + ".csv"))
                            .addFilter(new FileSystemPersistentAcceptOnceFileListFilter(metadataStore(dataSource), "foo"))), //FileSystemPersistentAcceptOnceFileListFilter
            e -> e.poller(Pollers.fixedDelay(10_000)))
            .enrichHeaders(h -> h.headerExpression("file_originalFile",
                    "new java.io.File('" + myBranch.getBranchCode() + "/FEFOexport" + myBranch.getBranchCode() + ".csv')", true))
            .transform(p -> {
                LOG.info("Sending file " + p + " to FTP branch " + myBranch.getBranchCode());
                return p;
            })
            .log()
            .transform(m -> {
                this.defaultSessionFactoryLocator.addSessionFactory(myBranch.getBranchCode(), createNewFtpSessionFactory(myBranch));
                LOG.info("Adding factory to delegation");
                return m;
            })
            .publishSubscribeChannel(s -> s
                    .subscribe(f -> f.transform(fileMessageToJobRequest())
                            .handle(jobLaunchingGateway())
                            .channel("nullChannel"))
                    .subscribe(h -> h.handle(Ftp.outboundAdapter(createNewFtpSessionFactory(myBranch), FileExistsMode.REPLACE)
                                    .useTemporaryFileName(true)
                                    .autoCreateDirectory(false)
                                    .remoteDirectory(myBranch.getFolderPath()),
                            e -> e.advice(expressionAdvice()))))
            .get();
}

@Bean
public FileMessageToJobRequest fileMessageToJobRequest() {
    FileMessageToJobRequest fileMessageToJobRequest = new FileMessageToJobRequest();
    fileMessageToJobRequest.setFileParameterName("file_path");
    fileMessageToJobRequest.setJob(orderJob);
    return fileMessageToJobRequest;
}

@Bean
public JobLaunchingGateway jobLaunchingGateway() {
    SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
    simpleJobLauncher.setJobRepository(jobRepository);
    simpleJobLauncher.setTaskExecutor(new SyncTaskExecutor());
    JobLaunchingGateway jobLaunchingGateway = new JobLaunchingGateway(simpleJobLauncher);
    return jobLaunchingGateway;
}

/**
 * Creating the advice for routing the payload of the outbound message on different expressions (success, failure)
 * @return Advice
 */
@Bean
public Advice expressionAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setSuccessChannelName("success.input");
    advice.setOnSuccessExpressionString("payload.delete() + ' was successful'");
    //advice.setOnSuccessExpressionString("inputMessage.headers['file_originalFile'].renameTo(new java.io.File(payload.absolutePath + '.success.to.send'))");
    //advice.setFailureChannelName("failure.input");
    advice.setOnFailureExpressionString("payload + ' was bad, with reason: ' + #exception.cause.message");
    advice.setTrapException(true);
    return advice;
}
Here is the error. As you can see, the first lines show that the file is transferred to FTP and then the batch job is initiated, whereas the first subscriber should run the batch first...
INFO 10452 --- [ask-scheduler-2] o.s.integration.ftp.session.FtpSession : File has been successfully renamed from: /ftp/erbranch/EDMS/FEFO/finalBEY.csv.writing to /ftp/erbranch/EDMS/FEFO/finalBEY.csv
Caused by: java.lang.IllegalStateException: Input resource must exist (reader is in 'strict' mode): file [C:\Java Programs\spring4ftpappftp\BEY\finalBEY.csv]
at org.springframework.batch.item.file.FlatFileItemReader.doOpen(FlatFileItemReader.java:251) ~[spring-batch-infrastructure-4.0.1.RELEASE.jar:4.0.1.RELEASE]
at org.springframework.batch.item.support.AbstractItemCountingItemStreamItemReader.open(AbstractItemCountingItemStreamItemReader.java:146) ~[spring-batch-infrastructure-4.0.1.RELEASE.jar:4.0.1.RELEASE]
... 123 common frames omitted
Debug log:
2019-07-15 10:43:02.838 INFO 4280 --- [ask-scheduler-2] o.s.integration.ftp.session.FtpSession : File has been successfully transferred to: /ftp/erbranch/EDMS/FEFO/finalBEY.csv.writing
2019-07-15 10:43:02.845 INFO 4280 --- [ask-scheduler-2] o.s.integration.ftp.session.FtpSession : File has been successfully renamed from: /ftp/erbranch/EDMS/FEFO/finalBEY.csv.writing to /ftp/erbranch/EDMS/FEFO/finalBEY.csv
2019-07-15 10:43:02.845 DEBUG 4280 --- [ask-scheduler-2] o.s.b.f.s.DefaultListableBeanFactory : Returning cached instance of singleton bean 'integrationEvaluationContext'
2019-07-15 10:43:02.848 DEBUG 4280 --- [ask-scheduler-2] o.s.b.f.s.DefaultListableBeanFactory : Returning cached instance of singleton bean 'success.input'
2019-07-15 10:43:02.849 DEBUG 4280 --- [ask-scheduler-2] o.s.integration.channel.DirectChannel : preSend on channel 'success.input', message: AdviceMessage [payload=true was successful, headers={id=eca55e1d-918e-3334-afce-66f8ab650748, timestamp=1563176582848}, inputMessage=GenericMessage [payload=BEY\finalBEY.csv, headers={file_originalFile=BEY\FEFOexportBEY.csv, id=a2f029b0-2609-1a11-67ef-4f56c7dd0752, file_name=finalBEY.csv, file_relativePath=finalBEY.csv, timestamp=1563176582787}]]
2019-07-15 10:43:02.849 DEBUG 4280 --- [ask-scheduler-2] o.s.i.t.MessageTransformingHandler : success.org.springframework.integration.transformer.MessageTransformingHandler#0 received message: AdviceMessage [payload=true was successful, headers={id=eca55e1d-918e-3334-afce-66f8ab650748, timestamp=1563176582848},
inputMessage=GenericMessage [payload=BEY\finalBEY.csv, headers={file_originalFile=BEY\FEFOexportBEY.csv, id=a2f029b0-2609-1a11-67ef-4f56c7dd0752, file_name=finalBEY.csv, file_relativePath=finalBEY.csv, timestamp=1563176582787}]]
2019-07-15 10:43:02.951 DEBUG 4280 --- [ask-scheduler-2] o.s.b.i.launch.JobLaunchingGateway : jobLaunchingGateway received message: GenericMessage [payload=JobLaunchRequest: orderJob, parameters={file_path=C:\Java Programs\spring4ftpappftp\BEY\finalBEY.csv, dummy=1563176582946}, headers={file_originalFile=BEY\FEFOexportBEY.csv, id=c98ad6cb-cced-c911-1b93-9d054baeb9d0, file_name=finalBEY.csv, file_relativePath=finalBEY.csv, timestamp=1563176582951}]
2019-07-16 08:35:29.442 INFO 10208 --- [nio-8081-exec-3] o.s.i.channel.PublishSubscribeChannel : Channel 'application.FTPOutp' has 1 subscriber(s).
2019-07-16 08:35:29.442 INFO 10208 --- [nio-8081-exec-3] o.s.i.endpoint.EventDrivenConsumer : started 1o.org.springframework.integration.config.ConsumerEndpointFactoryBean#3
2019-07-16 08:35:29.442 INFO 10208 --- [nio-8081-exec-3] o.s.i.endpoint.EventDrivenConsumer : Adding {message-handler} as a subscriber to the '1o.subFlow#1.channel#0' channel
2019-07-16 08:35:29.442 INFO 10208 --- [nio-8081-exec-3] o.s.integration.channel.DirectChannel : Channel 'application.1o.subFlow#1.channel#0' has 1 subscriber(s).
2019-07-16 08:35:29.442 INFO 10208 --- [nio-8081-exec-3] o.s.i.endpoint.EventDrivenConsumer : started 1o.subFlow#1.org.springframework.integration.config.ConsumerEndpointFactoryBean#1
2019-07-16 08:35:29.442 INFO 10208 --- [nio-8081-exec-3] o.s.i.endpoint.EventDrivenConsumer : Adding {bridge} as a subscriber to the 'FTPOutp' channel
2019-07-16 08:35:29.442 INFO 10208 --- [nio-8081-exec-3] o.s.i.channel.PublishSubscribeChannel : Channel 'application.FTPOutp' has 2 subscriber(s).
Since you are using a SyncTaskExecutor, the batch job should run on the calling thread and then be followed by the FTP adapter.
Use DEBUG logging and follow the message flow to see why that's not happening.
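A minimal way to turn that on in a Spring Boot application, assuming the usual application.properties is in place:

logging.level.org.springframework.integration=DEBUG

With that set, each channel logs its preSend/postSend activity, so you can trace the exact order in which the two subscribers receive the message.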
I have the following code:
StringSerializer ss = StringSerializer.get();
String cf = "TEST";
CassandraHostConfigurator conf = new CassandraHostConfigurator("localhost:9160");
conf.setCassandraThriftSocketTimeout(40000);
conf.setExhaustedPolicy(ExhaustedPolicy.WHEN_EXHAUSTED_BLOCK);
conf.setRetryDownedHostsDelayInSeconds(5);
conf.setRetryDownedHostsQueueSize(128);
conf.setRetryDownedHosts(true);
conf.setLoadBalancingPolicy(new LeastActiveBalancingPolicy());
String key = Long.toString(System.currentTimeMillis());
Cluster cluster = HFactory.getOrCreateCluster("TestCluster", conf);
Keyspace keyspace = HFactory.createKeyspace("TestCluster", cluster);
Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());
int count = 0;
while (!"q".equals(new Scanner(System.in).next())) {
    try {
        mutator.insert(key, cf, HFactory.createColumn("column_" + count, "v_" + count, ss, ss));
        count++;
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I can write some values using it, but when I restart Cassandra, it fails. Here is the log:
[15:11:07] INFO [CassandraHostRetryService ] Downed Host Retry service started with queue size 128 and retry delay 5s
[15:11:07] INFO [JmxMonitor ] Registering JMX me.prettyprint.cassandra.service_ASG:ServiceType=hector,MonitorType=hector
[15:11:17] ERROR [HThriftClient ] Could not flush transport (to be expected if the pool is shutting down) in close for client: CassandraClient
org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156)
at me.prettyprint.cassandra.connection.client.HThriftClient.close(HThriftClient.java:98)
at me.prettyprint.cassandra.connection.client.HThriftClient.close(HThriftClient.java:26)
at me.prettyprint.cassandra.connection.HConnectionManager.closeClient(HConnectionManager.java:308)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:257)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
at me.prettyprint.cassandra.model.MutatorImpl.insert(MutatorImpl.java:69)
at com.app.App.main(App.java:40)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
... 9 more
[15:11:17] ERROR [HConnectionManager ] MARK HOST AS DOWN TRIGGERED for host localhost(127.0.0.1):9160
[15:11:17] ERROR [HConnectionManager ] Pool state on shutdown: :{localhost(127.0.0.1):9160}; IsActive?: true; Active: 1; Blocked: 0; Idle: 15; NumBeforeExhausted: 49
[15:11:17] INFO [ConcurrentHClientPool ] Shutdown triggered on :{localhost(127.0.0.1):9160}
[15:11:17] INFO [ConcurrentHClientPool ] Shutdown complete on :{localhost(127.0.0.1):9160}
[15:11:17] INFO [CassandraHostRetryService ] Host detected as down was added to retry queue: localhost(127.0.0.1):9160
[15:11:17] WARN [HConnectionManager ] Could not fullfill request on this host CassandraClient
[15:11:17] WARN [HConnectionManager ] Exception:
me.prettyprint.hector.api.exceptions.HectorTransportException: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at me.prettyprint.cassandra.connection.client.HThriftClient.getCassandra(HThriftClient.java:82)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:236)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:97)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
at me.prettyprint.cassandra.model.MutatorImpl.insert(MutatorImpl.java:69)
at com.app.App.main(App.java:40)
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:157)
at org.apache.cassandra.thrift.Cassandra$Client.send_set_keyspace(Cassandra.java:466)
at org.apache.cassandra.thrift.Cassandra$Client.set_keyspace(Cassandra.java:455)
at me.prettyprint.cassandra.connection.client.HThriftClient.getCassandra(HThriftClient.java:78)
... 5 more
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)
... 9 more
[15:11:17] INFO [HConnectionManager ] Client CassandraClient released to inactive or dead pool. Closing.
[15:11:17] INFO [HConnectionManager ] Client CassandraClient released to inactive or dead pool. Closing.
[15:11:17] INFO [HConnectionManager ] Added host localhost(127.0.0.1):9160 to pool
You have set:

conf.setRetryDownedHostsDelayInSeconds(5);

Try waiting more than 5 seconds after the restart.
Also, you may need to upgrade.
What size have you set for thrift_max_message_length_in_mb?
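Separately, a hedged sketch of riding out that retry window on the client side by wrapping the insert in a bounded retry loop; the attempt count and sleep are arbitrary values, not Hector recommendations:

int attempts = 0;
while (true) {
    try {
        mutator.insert(key, cf, HFactory.createColumn("column_" + count, "v_" + count, ss, ss));
        count++;
        break;
    } catch (HectorException e) {
        if (++attempts >= 5) {
            throw e; // give up after a few tries
        }
        // Sleep longer than the 5s downed-host retry delay; assumes the
        // enclosing method declares throws InterruptedException.
        Thread.sleep(6_000);
    }
}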
Kind regards.