1. Launch the demo app from https://github.com/microsoft/FluidExamples
2. Use "inMemory": false in config.json:
   "db": {
     "inMemory": false,
     "path": "/var/tmp/db"
   }
3. Launch the Tinylicious server
When I try to create a new document, I get an error:
error: Error writing checkpoint to MongoDB: {} {"messageMetaData": {"documentId": "1610173896601", "tenantId": "tinylicious"}, "label": "winston", "timestamp": "2021-01-09T06:32:34.968Z"}
What am I doing wrong?
Thanks!
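For reference, the db block above sits at the top level of Tinylicious's config.json; here is a minimal sketch of the whole file (other default settings, such as the logger section, omitted):

{
  "db": {
    "inMemory": false,
    "path": "/var/tmp/db"
  }
}

One thing worth checking (an educated guess, not a confirmed fix) is whether /var/tmp/db exists and is writable by the process running the server, since it is the checkpoint write that fails.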
I have connected Eclipse Hono with Eclipse Ditto using the Connectivity API. When I set it up, this works fine. However, after some time the forwarding connection fails. When I retrieve the metrics, I get the following response:
{
  "?": {
    "?": {
      "type": "connectivity.responses:aggregatedResponse",
      "status": 200,
      "connectionId": "<connectionId>",
      "responsesType": "connectivity.responses:retrieveConnectionMetrics",
      "responses": {
        "connectivity-7cc7b5dc4c-6nn59": {
          "type": "connectivity.responses:retrieveConnectionMetrics",
          "status": 200,
          "connectionId": "<connectionId>",
          "connectionMetrics": {
            "connectionStatus": "open",
            "connectionStatusDetails": "Connected at 2019-03-19T08:28:53.211Z",
            "inConnectionStatusSince": "2019-03-19T08:28:53.211Z",
            "clientState": "CONNECTED",
            "sourcesMetrics": [],
            "targetsMetrics": [
              {
                "addressMetrics": {
                  "gw/{{ thing:namespace }}/{{ thing:id }}": {
                    "status": "failed",
                    "statusDetails": "Producer closed at 2019-03-19T21:00:16.466Z",
                    "messageCount": 2048,
                    "lastMessageAt": "2019-03-19T21:00:05.361Z"
                  }
                },
                "publishedMessages": 4070
              }
            ]
          }
        }
      }
    }
  }
}
I've been checking the logs around the time mentioned, but I'm not finding any errors. The logs I'm posting here are the last one before and the first one after the mentioned timestamp (2019-03-19T21:00:16.466Z).
2019-03-19 21:00:11,771 DEBUG [ID:AMQP_NO_PREFIX:TelemetrySenderImpl-42872] o.e.d.s.c.m.a.AmqpPublisherActor akka://ditto-cluster/system/sharding/connection/7/tenant_aloxy_consumer-aloxy-forward/pa/$a/c1/amqpPublisherActor3
- Message JmsTextMessage { org.apache.qpid.jms.provider.amqp.message.AmqpJmsTextMessageFacade#9bc051af } sent successfully.
2019-03-19 21:01:11,733 DEBUG [ID:AMQP_NO_PREFIX:TelemetrySenderImpl-42872] o.e.d.s.c.m.a.AmqpClientActor akka://ditto-cluster/system/sharding/connection/1/tenant_aloxy_consumer-aloxy/pa/$a/c1 - Inbound message: JmsInboundMessageDispatch { sequence = 38885, messageId = TelemetrySenderImpl-42873, consumerId = ID:a4925b59-1bb4-4cd8-9151-96ad422c36df:1:1:1 }
Although the log levels for all Ditto services are set to debug, I'm not getting any useful logging.
Do any of you have any idea how I can get the logging needed to investigate this problem or, even better, any idea what the problem might be and how to fix it?
When I delete the connection and recreate it, everything works as expected again. Maybe Ditto could do this under the hood automatically?
UPDATE
When retrieving the connection via the API, I get the following response (including the failoverEnabled property, which is set to true). This also indicates that the connection uses AMQP 1.0. The broker used is EnMasse.
{
  "?": {
    "?": {
      "type": "connectivity.responses:retrieveConnection",
      "status": 200,
      "connection": {
        "id": "<connectionId>",
        "name": null,
        "connectionType": "amqp-10",
        "connectionStatus": "open",
        "uri": "amqp://<consumer>:<password>@<amqp-host>:5672",
        "sources": [],
        "targets": [
          {
            "address": "gw/{{ thing:namespace }}/{{ thing:id }}",
            "topics": [
              "_/_/things/twin/events?filter=exists(features/alp)"
            ],
            "authorizationContext": [
              "<auth-context>"
            ]
          }
        ],
        "clientCount": 1,
        "failoverEnabled": true,
        "validateCertificates": true,
        "processorPoolSize": 5,
        "tags": []
      }
    }
  }
}
Eclipse Ditto does an automatic failover if configured to do so (see https://www.eclipse.org/ditto/basic-connections.html, the "failoverEnabled" property in the model).
It could, however, be that this has been improved since release 0.8.0, which you are using.
The Ditto team is currently working towards a 0.9.0-M1 release which will contain improved reconnection behavior.
Does the connection to Eclipse Hono automatically reconnect?
You described that the "forwarding connection" fails from time to time. Which technology (broker, etc.) serves as the endpoint for that gw/{{ thing:namespace }}/{{ thing:id }} address?
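If deleting and recreating the connection fixes it, a lighter-weight workaround may be to close and reopen the connection in place. Here is a sketch using Ditto's DevOps piggyback endpoint, assuming the Ditto 0.8.x API where connections are managed by POSTing piggyback commands to /devops/piggyback/connectivity:

{
  "targetActorSelection": "/system/sharding/connection",
  "headers": {},
  "piggybackCommand": {
    "type": "connectivity.commands:closeConnection",
    "connectionId": "<connectionId>"
  }
}

Sending the same body again with the type "connectivity.commands:openConnection" should then re-establish the producer without deleting the connection.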
Getting the following error while starting Explorer:
<<<<<<<<<<<<<<<<<<<<<<<<<< Explorer Error >>>>>>>>>>>>>>>>>>>>>
TypeError: Cannot read property 'size' of undefined
at Platform.initialize (/home/kp/Desktop/blockchain-explorer/app/platform/fabric/Platform.js:52:45)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
Received kill signal, shutting down gracefully
Closed out connections
Using Fabric v1.2 and Explorer v0.3.6. I have replaced grpcs with grpc, but that is not helping much. I'm not sure where to pass the 'size' property in the config file.
{
  "network-configs": {
    "network-1": {
      "version": "1.0",
      "clients": {
        "client-1": {
          "tlsEnable": true,
          "organization": "Org1MSP",
          "channel": "mychannel",
          "credentialStore": {
            "path": "./tmp/credentialStore_Org1/credential",
            "cryptoStore": {
              "path": "./tmp/credentialStore_Org1/crypto"
            }
          }
        }
      },
      "channels": {
        "mychannel": {
          "peers": {
            "peer0.org1.example.com": {}
          },
          "connection": {
            "timeout": {
              "peer": {
                "endorser": "6000",
                "eventHub": "6000",
                "eventReg": "6000"
              }
            }
          }
        }
      },
      "organizations": {
        "Org1MSP": {
          "mspid": "Org1MSP",
          "fullpath": false,
          "adminPrivateKey": {
            "path": "/home/kp/Desktop/bct/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore/"
          },
          "signedCert": {
            "path": "/home/kp/Desktop/bct/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/signcerts/"
          }
        },
        "Org2MSP": {
          "mspid": "Org2MSP",
          "adminPrivateKey": {
            "path": "/home/kp/Desktop/bct/fabric-samples/first-network/crypto-config/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/keystore/"
          }
        },
        "OrdererMSP": {
          "mspid": "OrdererMSP",
          "adminPrivateKey": {
            "path": "/home/kp/Desktop/bct/fabric-samples/first-network/crypto-config/ordererOrganizations/example.com/users/Admin@example.com/msp/keystore/"
          }
        }
      },
      "peers": {
        "peer0.org1.example.com": {
          "tlsCACerts": {
            "path": "/home/kp/Desktop/bct/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt"
          },
          "url": "grpc://localhost:7051",
          "eventUrl": "grpc://localhost:7053",
          "grpcOptions": {
            "ssl-target-name-override": "peer0.org1.example.com"
          }
        },
        "peer1.org1.example.com": {
          "tlsCACerts": {
            "path": "/home/kp/Desktop/bct/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt"
          },
          "url": "grpc://localhost:8051",
          "eventUrl": "grpc://localhost:8053",
          "grpcOptions": {
            "ssl-target-name-override": "peer1.org1.example.com"
          }
        },
        "peer0.org2.example.com": {
          "tlsCACerts": {
            "path": "/home/kp/Desktop/bct/fabric-samples/first-network/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt"
          },
          "url": "grpc://localhost:9051",
          "eventUrl": "grpc://localhost:9053",
          "grpcOptions": {
            "ssl-target-name-override": "peer0.org2.example.com"
          }
        },
        "peer1.org2.example.com": {
          "tlsCACerts": {
            "path": "/home/kp/Desktop/bct/fabric-samples/first-network/crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt"
          },
          "url": "grpc://localhost:10051",
          "eventUrl": "grpc://localhost:10053",
          "grpcOptions": {
            "ssl-target-name-override": "peer1.org2.example.com"
          }
        }
      },
      "orderers": {
        "orderer.example.com": {
          "url": "grpc://localhost:7050"
        }
      }
    },
    "network-2": {}
  },
  "configtxgenToolPath": "/home/kp/Desktop/bct/fabric-samples/bin/",
  "license": "Apache-2.0"
}
TypeError: Cannot read property 'size' of undefined
at Platform.initialize (/home/kp/Desktop/blockchain-explorer/app/platform/fabric/Platform.js:52:45)
According to Platform.js, this means it failed to load your config.json for some reason. You need to review it from the following viewpoints:
- Check logs/app/app.log
- Difference from the original config.json
And I don't think it's related to this problem, but you need to set 'tlsEnable' in config.json to 'false' when disabling TLS.
{
  "network-configs": {
    "network-1": {
      "clients": {
        "client-1": {
          "tlsEnable": false,
                       ^^^^^
Do you have any solution for this problem?
I deployed Hyperledger Explorer step by step following
https://github.com/hyperledger/blockchain-explorer and I got the same error as you.
I am using Fabric v1.2 and Composer v0.20.0.
I got the same error; on my side the "network-configs" tag was missing.
In the console log, just before this error, I got a message saying '******* Initialization started for hyperledger fabric platform ******', undefined.
If you check the code in Platform.js, you can see that the undefined variable corresponds to network-configs.
I created a simple config.json file in the original folder like the following:
{
  "network-configs": {
    "first-network": {
      "name": "firstnetwork",
      "profile": "./connection-profile/first-network.json",
      "enableAuthentication": false
    }
  },
  "license": "Apache-2.0"
}
Then I created a full profile doc starting with
{
  "name": "first-network",
  "version": "1.0.0",
I am not sure if splitting the file is really necessary, but in any case this procedure fixed the issue on my side.
I'm using Data Factory with blob storage.
I sometimes get the error below, intermittently; it can occur on different pipelines/data sources. However, I always get the same error regardless of which task fails: 400 The specified block list is invalid.
Copy activity encountered a user error at Sink side: ErrorCode=UserErrorBlobUploadFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error occurred when trying to upload blob 'https://blob.core.windows.net/', detailed message: The remote server returned an error: (400) Bad Request.,Source=,''Type=Microsoft.WindowsAzure.Storage.StorageException,Message=The remote server returned an error: (400) Bad Request.,Source=Microsoft.WindowsAzure.Storage,StorageExtendedMessage=The specified block list is invalid.
Type=System.Net.WebException,Message=The remote server returned an error: (400) Bad Request.,Source=Microsoft.WindowsAzure.Storage
This seems to be most common when more than one task is running at a time writing data to the storage account. Is there anything I can do to make this process more reliable? Is it possible something has been misconfigured? It's causing slices to fail in Data Factory, so I'd really love to know what I should be investigating.
A sample pipeline that has suffered from this issue:
{
  "$schema": "http://datafactories.schema.management.azure.com/schemas/2015-09-01/Microsoft.DataFactory.Pipeline.json",
  "name": "Pipeline",
  "properties": {
    "description": "Pipeline to copy Processed CSV from Data Lake to blob storage",
    "activities": [
      {
        "type": "Copy",
        "typeProperties": {
          "source": {
            "type": "AzureDataLakeStoreSource"
          },
          "sink": {
            "type": "BlobSink",
            "writeBatchSize": 0,
            "writeBatchTimeout": "00:00:00"
          }
        },
        "inputs": [ { "name": "DataLake" } ],
        "outputs": [ { "name": "Blob" } ],
        "policy": {
          "concurrency": 10,
          "executionPriorityOrder": "OldestFirst",
          "retry": 0,
          "timeout": "01:00:00"
        },
        "scheduler": {
          "frequency": "Hour",
          "interval": 1
        },
        "name": "CopyActivity"
      }
    ],
    "start": "2016-02-28",
    "end": "2016-02-29",
    "isPaused": false,
    "pipelineMode": "Scheduled"
  }
}
I'm only using LRS standard storage, but I still wouldn't expect it to intermittently throw errors.
EDIT: adding the linked service JSON.
{
  "$schema": "http://datafactories.schema.management.azure.com/schemas/2015-09-01/Microsoft.DataFactory.LinkedService.json",
  "name": "Ls-Staging-Storage",
  "properties": {
    "type": "AzureStorage",
    "typeProperties": {
      "connectionString": "DefaultEndpointsProtocol=https;AccountName=;AccountKey="
    }
  }
}
Such errors are mostly caused by race conditions, e.g. multiple concurrent activity runs writing to the same blob file.
Could you check your pipeline settings to see whether that is the case? If so, please avoid such a configuration.
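If concurrent runs are indeed writing to the same blob, one way to remove the collision is to give each slice its own output path. Here is a sketch of the output dataset using slice-based partitioning; it follows the ADF v1 dataset schema, and the folder path and partition name are illustrative, not taken from the original pipeline:

{
  "name": "Blob",
  "properties": {
    "type": "AzureBlob",
    "linkedServiceName": "Ls-Staging-Storage",
    "typeProperties": {
      "folderPath": "processed/{Slice}",
      "partitionedBy": [
        { "name": "Slice", "value": { "type": "DateTime", "date": "SliceStart", "format": "yyyyMMddHH" } }
      ]
    },
    "availability": { "frequency": "Hour", "interval": 1 }
  }
}

Alternatively, lowering the activity's concurrency from 10 to 1 removes the parallel writers entirely, at the cost of throughput.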
I am not getting output, and I get an error:
------Exception-------
Class: Kitchen::ActionFailed
Message: 1 actions failed.
JSON file (cookbook/test/integration/nodes):
{
  "id": "hive server",
  "chef_type": "node",
  "environment": "dev",
  "json_class": "Chef::Node",
  "run_list": [],
  "automatic": {
    "hostname": "test.net",
    "fqdn": "127.0.0.1",
    "name": "test.net",
    "ipaddress": "127.0.0.1",
    "node_zone": "green",
    "roles": []
  },
  "attributes": {
    "hiveserver": "true"
  }
}
Recipe:
hiveNodes = search(:node, "hiveserver:true AND environment:node.environment AND node_color:node["node_color"])
# hiveserverList = ""
# hiveNodes.each |hnode| do
# hiveserverList += hnode
#end
#file '/tmp/test.txt' do
# content '#{hiveserverList}'
#end
I think you mean to be using "hiveserver:true AND chef_environment:#{node.chef_environment} AND node_color:#{node["node_color"]}" as your search string. The #{} syntax is how you embed a Ruby expression value into a string. Also, for reasons of complex backwards compatibility, the environment on a node is called chef_environment.
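Putting that together, the recipe might look like the sketch below. Joining the results into a comma-separated list of FQDNs is an illustrative assumption; the original commented-out code appends whole node objects, which would not produce a readable file:

# Find hive server nodes in the same environment and colour zone as this node.
hive_nodes = search(:node, "hiveserver:true AND chef_environment:#{node.chef_environment} AND node_color:#{node['node_color']}")

# Build a comma-separated list of their FQDNs (pick whichever attribute you need).
hiveserver_list = hive_nodes.map { |hnode| hnode['fqdn'] }.join(',')

file '/tmp/test.txt' do
  content hiveserver_list
end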
I'm trying to run Sync Gateway from the terminal, but I don't understand how it works; the response I get doesn't help. See the config below:
{
  "log": ["HTTP+"],
  "databases": {
    "grocery-sync": {
      "server": "http://localhost:8091",
      "bucket": "grocery-sync",
      "users": {
        "GUEST": { "disabled": false, "admin_channels": ["*"] }
      }
    }
  }
}
But I'm getting the response below, and I can't understand what exactly I need to do for automatic replication.
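For context, Sync Gateway is normally started by passing the config file path on the command line; a minimal sketch, assuming the JSON above is saved as sync-gateway-config.json:

$ sync_gateway sync-gateway-config.json

Replication is then initiated from the Couchbase Lite client side against Sync Gateway's public REST port (4984 by default), not against the Couchbase Server address in the config.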