How to resolve "error sending: timeout expired while executing transaction" in Hyperledger Fabric? - node.js

I'm trying to upload bulk data. I'm splitting the records into batches of 100 and invoking the chaincode for each batch. The first 100 transactions execute fine, but after that I'm getting the error below.
DLT Error { Error: failed to execute transaction 53842934bed9ad4b1f604bdc253f2e06f5677383c0c91cc20f313fa40a85ebf8: error sending: timeout expired while executing transaction
at self._endorserClient.processProposal (/home/user/Project/node_modules/fabric-client/lib/Peer.js:140:36)
at Object.onReceiveStatus (/home/user/Project/node_modules/fabric-client/node_modules/grpc/src/client_interceptors.js:1207:9)
at InterceptingListener._callNext (/home/user/Project/node_modules/fabric-client/node_modules/grpc/src/client_interceptors.js:568:42)
at InterceptingListener.onReceiveStatus (/home/user/Project/node_modules/fabric-client/node_modules/grpc/src/client_interceptors.js:618:8)
at callback (/home/user/Project/node_modules/fabric-client/node_modules/grpc/src/client_interceptors.js:845:24)
status: 500,
payload: <Buffer >,
peer:
{ url: 'grpcs://ip:7051',
name: 'ip:7051',
options:
{ 'grpc.max_receive_message_length': -1,
'grpc.max_send_message_length': -1,
'grpc.keepalive_time_ms': 120000,
'grpc.http2.min_time_between_pings_ms': 120000,
'grpc.keepalive_timeout_ms': 20000,
'grpc.http2.max_pings_without_data': 0,
'grpc.keepalive_permit_without_calls': 1,
'grpc.ssl_target_name_override': 'peer0.tata.com',
'grpc.default_authority': 'peer0.tata.com' } },
isProposalResponse: true }
I have tried reducing the batch size from 100 to 50 records at a time, and I also increased the timeout in my invoke file:
// Give up waiting for the transaction commit event after 70 seconds
let handle = setTimeout(() => {
  event_hub.unregisterTxEvent(transaction_id_string);
  event_hub.disconnect();
  resolve({event_status : 'TIMEOUT'});
}, 70000);
But I'm still facing the same issue. Can anybody please help me fix it?
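One thing worth noting: the setTimeout above only bounds how long the client waits for the commit event, while the stack trace points at Peer.js processProposal, i.e. the endorsement call, which is governed by the fabric-client request timeout. A minimal sketch of raising that timeout (fabric-client v1.x), assuming `channel` and `request` are the same objects your invoke file already builds:

// Sketch only; `channel` and `request` are assumed to be the channel object
// and proposal request used in the existing invoke code.
const Client = require('fabric-client');

// Raise the SDK-wide gRPC request timeout used for proposals (value in ms).
Client.setConfigSetting('request-timeout', 300000);

// sendTransactionProposal also accepts an explicit per-call timeout in ms.
const results = await channel.sendTransactionProposal(request, 300000);

If the chaincode itself is slow on larger batches, the peer-side chaincode execution timeout (CORE_CHAINCODE_EXECUTETIMEOUT in the peer's environment) may also need raising; reducing the batch size further remains the other lever.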

Related

Trying to fetch spot price using the Uniswap SDK, but the transaction is throwing error LOK

const quotedAmountOut = await quoterContract.callStatic.quoteExactInputSingle(
  immutables.token0,
  immutables.token1,
  immutables.fee,
  amountIn,
  0
)
I created two dummy ERC-20 tokens, created a pool for them using the uniswapV3Factory createPool() method, and obtained the pool address. But when I try to fetch the spot price for those tokens using the above script, it throws the following error:
Error: call revert exception; VM Exception while processing transaction: reverted with reason string "LOK" [ See: https://links.ethers.org/v5-errors-CALL_EXCEPTION ] (method="quoteExactInputSingle(address,address,uint24,uint256,uint160)", data="0x08c379a0000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000034c4f4b0000000000000000000000000000000000000000000000000000000000", errorArgs=["LOK"], errorName="Error", errorSignature="Error(string)", reason="LOK", code=CALL_EXCEPTION, version=abi/5.7.0)
at Logger.makeError (/Users/apple/Desktop/solidity/deploy/node_modules/@ethersproject/contracts/lib/index.js:20:58)
at processTicksAndRejections (node:internal/process/task_queues:96:5) {
reason: 'LOK',
code: 'CALL_EXCEPTION',
method: 'quoteExactInputSingle(address,address,uint24,uint256,uint160)',
data: '0x08c379a0000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000034c4f4b0000000000000000000000000000000000000000000000000000000000',
errorArgs: [ 'LOK' ],
errorName: 'Error',
errorSignature: 'Error(string)',
address: '0xb27308f9F90D607463bb33eA1BeBb41C27CE5AB6',
args: [
'<Token-1-Address>',
'<Token-2-Address>',
500,
BigNumber { _hex: '0x0de0b6b3a7640000', _isBigNumber: true },
0
],
transaction: {
data: '0xf7729d4300000000000000000000000008a2e53a8ddd2dd1d895c18928fc63778d97a55a0000000000000000000000006d7a02e23505a74143199abb5fb07e6ea20c6d6300000000000000000000000000000000000000000000000000000000000001f40000000000000000000000000000000000000000000000000de0b6b3a76400000000000000000000000000000000000000000000000000000000000000000000',
to: '0xb27308f9F90D607463bb33eA1BeBb41C27CE5AB6'
}
}
I found the issue: it is the token addresses you provided. For example, if token0 is WETH9, you must provide the exact WETH9 token address, which you can find on etherscan.io.
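To rule that out before calling the quoter, you can read token0()/token1()/fee() straight from the pool that createPool() returned and compare them with what you pass to quoteExactInputSingle. A minimal ethers.js (v5) sketch, assuming `provider` is your ethers provider and `poolAddress` is the address you obtained from createPool() (run inside an async function):

// Read the pool's actual token addresses and fee tier.
const { ethers } = require('ethers');

const poolAbi = [
  'function token0() view returns (address)',
  'function token1() view returns (address)',
  'function fee() view returns (uint24)',
];

const pool = new ethers.Contract(poolAddress, poolAbi, provider);
const [token0, token1, fee] = await Promise.all([pool.token0(), pool.token1(), pool.fee()]);

// These must match immutables.token0 / immutables.token1 / immutables.fee exactly;
// note that Uniswap V3 sorts token0 < token1 by address.
console.log({ token0, token1, fee });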

Failed to flush task queue within 120 seconds on Azure Automated ML

I'm running an Automated ML job on Azure, but it fails, and this is the error the child jobs are giving me.
Starting the automl_batch_driver setup...
Set enable_streaming flag to False
Batch Run Id in the real script: AutoML_11f87010-f7b0-4819-aecb-ad6b31f2469a_worker_11
2022-06-07 17:46:03,026817 - INFO - Beginning batch driver wrapper.
2022-06-07 17:46:03.297 - INFO - Successfully got the cache data store, caching enabled.
2022-06-07 17:46:03.297 - INFO - Took 0.13631367683410645 seconds to retrieve cache data store
2022-06-07 17:46:03.306 - INFO - No files are available for the cache store locally, downloading files from the Run.
2022-06-07 17:48:04.348 - ERROR - Type: {'code': 'ResourceExhausted', 'inner_error': {'code': 'Timeout'}}
Class: AzureMLException
Message: AzureMLException:
Message: Failed to flush task queue within 120 seconds
InnerException None
ErrorResponse
{
"error": {
"code": "UserError",
"message": "Failed to flush task queue within 120 seconds",
"inner_error": {
"code": "ResourceExhausted",
"inner_error": {
"code": "Timeout"
}
}
}
}

How can I debug a "Build failed: Too many concurrent builds" error when only one function is being deployed via Google Cloud Functions?

I'm currently trying to deploy a function via the console. I have added variables, package specs, and service account credentials.
When I hit deploy, the status showed "in build" with the spinning wheel for about ten minutes before coming back with a build-failed icon.
When I went to the logs, I saw the following:
status: {
code: 8
message: "Build failed: Too many concurrent builds, please stagger your deployments."
}
with severity: ERROR under resource.
There are several other cloud functions that are already deployed and active; they were deployed some time ago and are not currently being redeployed.
I have attempted to redeploy the function in question but that resulted in a timeout after 60 seconds.
Full logs below:
{
protoPayload: {
#type: "type.googleapis.com/google.cloud.audit.AuditLog"
status: {
code: 8
message: "Build failed: Too many concurrent builds, please stagger your deployments."
}
authenticationInfo: {
principalEmail: "user@user"
}
serviceName: "cloudfunctions.googleapis.com"
methodName: "google.cloud.functions.v1.CloudFunctionsService.CreateFunction"
resourceName: "projects/resource_name"
}
insertId: "-n11hqacqvq"
resource: {
type: "cloud_function"
labels: {3}
}
timestamp: "2021-02-18T22:16:56.681559Z"
severity: "ERROR"
logName: "projects/.../logs/cloudaudit.googleapis.com%2Factivity"
operation: {
id: "operations/..."
producer: "cloudfunctions.googleapis.com"
last: true
}
receiveTimestamp: "2021-02-18T22:16:56.858611526Z"
}

Google Cloud PubSub/Datastore Error 13 & 14: "GOAWAY received" and "TCP Read/Write Fail"

Sorry for the long title. Having some issues randomly pop up (every handful of hours, but not on a regular schedule, could be anywhere from 3 hours to 8) when streaming data from Cloud PubSub into Cloud Datastore using Cloud Functions.
The source is a Node.js 6 script that receives an HTTP POST with info, writes it to a Pub/Sub topic, and then writes the topic's messages to Cloud Datastore.
It is a modified version of this:
https://github.com/CiscoSE/serverless-cmx
Errors:
This first one happens sometimes with TCP Write instead of Read, but it's the same error.
ERROR: { Error: 14 UNAVAILABLE: TCP Read failed
at Object.exports.createStatusError (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/common.js:87:15)
at Object.onReceiveStatus (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:1188:28)
at InterceptingListener._callNext (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:564:42)
at InterceptingListener.onReceiveStatus (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:614:8)
at callback (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:841:24)
code: 14,
metadata: Metadata { _internal_repr: {} },
details: 'TCP Read failed' }
And:
ERROR: { Error: 13 INTERNAL: GOAWAY received
at Object.exports.createStatusError (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/common.js:87:15)
at Object.onReceiveStatus (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:1188:28)
at InterceptingListener._callNext (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:564:42)
at InterceptingListener.onReceiveStatus (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:614:8)
at callback (/user_code/node_modules/@google-cloud/datastore/node_modules/grpc/src/client_interceptors.js:841:24)
code: 13,
metadata: Metadata { _internal_repr: {} },
details: 'GOAWAY received' }
It looks like there is a similar error for other services and the workaround is just to retry.
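Since codes 13 (INTERNAL) and 14 (UNAVAILABLE) are transient gRPC failures, a simple retry with backoff around the Datastore write is usually enough. A hedged sketch (written with async/await, so it assumes a Node 8+ runtime rather than the Node.js 6 mentioned above; `datastore` and `entity` stand in for your own client and payload):

// Retry the Datastore write on transient gRPC errors (13 INTERNAL / 14 UNAVAILABLE).
async function saveWithRetry(datastore, entity, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await datastore.save(entity);
    } catch (err) {
      const transient = err.code === 13 || err.code === 14;
      if (!transient || attempt === maxAttempts) throw err;
      // Exponential backoff: 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** (attempt - 1)));
    }
  }
}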

Regarding the error that occurred in my test case in a nodeload.js script

I have designed an app using Elasticsearch, and I am trying to write a test case for it using nodeload.js. The problem is that when I increase the number of users, I get the warning "WARN: Error during HTTP request: Error: ECONNREFUSED, Could not contact DNS servers". I am unable to rectify the problem, so please help me solve this error.
var nl = require('nodeload');

nl.run({
  name: "test",
  host: 'localhost',
  port: 9200,
  //path: '/my_river/page/_search?q=sweden',
  numUsers: 2000, // increased the number of users
  timeLimit: 180,
  targetRps: 500,
  stats: [
    'result-codes',
    { name: 'latency', percentiles: [0.9, 0.99] },
    'concurrency',
    'rps',
    'uniques',
    { name: 'http-errors', successCodes: [200, 404], log: 'http-errors.log' }
  ]
});
