Bluetooth disconnects just after connecting on Ubuntu 20.04

I'm trying to build my own photo booth using an instant photo printer that connects over Bluetooth and uses dye-sublimation printing. To do that, the BT connection needs to be driven from Python or Node.js code; basically, I'm planning to just use system commands.
So far I have tested a Polaroid Hi-Print and a Kodak P300R, but neither of them even stays connected: specifically, the connection is terminated by the remote user (device) just after it is established. (Officially, they only support mobile platforms like Android and iOS.)
Now I'm not even sure this is possible. Could you help me get through it?
Here are the btmon logs of a connection attempt using "hcitool cc [btaddr]":
# RAW Open: hcitool (privileged) version 2.22 {0x0002} 3407.844746
# RAW Close: hcitool {0x0002} 3407.844761
# RAW Open: hcitool (privileged) version 2.22 {0x0002} [hci0] 3407.844771
< HCI Command: Create Connection (0x01|0x0005) plen 13 #196 [hci0] 3407.844787
Address: 00:15:83:41:DB:94 (IVT corporation)
Packet type: 0xcc18
DM1 may be used
DH1 may be used
DM3 may be used
DH3 may be used
DM5 may be used
DH5 may be used
Page scan repetition mode: R2 (0x02)
Page scan mode: Mandatory (0x00)
Clock offset: 0x0000
Role switch: Allow slave (0x01)
> HCI Event: Command Status (0x0f) plen 4 #197 [hci0] 3407.982359
Create Connection (0x01|0x0005) ncmd 2
Status: Success (0x00)
> HCI Event: Role Change (0x12) plen 8 #198 [hci0] 3408.627344
Status: Success (0x00)
Address: 00:15:83:41:DB:94 (IVT corporation)
Role: Slave (0x01)
> HCI Event: Connect Complete (0x03) plen 11 #199 [hci0] 3408.633340
Status: Success (0x00)
Handle: 3
Address: 00:15:83:41:DB:94 (IVT corporation)
Link type: ACL (0x01)
Encryption: Disabled (0x00)
# RAW Close: hcitool {0x0002} [hci0] 3408.633412
< HCI Command: Read Remote Supp.. (0x01|0x001b) plen 2 #200 [hci0] 3408.633427
Handle: 3
> HCI Event: Command Status (0x0f) plen 4 #201 [hci0] 3408.637320
Read Remote Supported Features (0x01|0x001b) ncmd 2
Status: Success (0x00)
> HCI Event: Max Slots Change (0x1b) plen 3 #202 [hci0] 3408.638316
Handle: 3
Max slots: 5
> HCI Event: Max Slots Change (0x1b) plen 3 #203 [hci0] 3408.644343
Handle: 3
Max slots: 5
> HCI Event: Read Remote Supported Fe.. (0x0b) plen 11 #204 [hci0] 3408.646313
Status: Success (0x00)
Handle: 3
Features: 0xff 0xff 0xc9 0xfa 0x83 0xa7 0x79 0x87
3 slot packets
5 slot packets
Encryption
Slot offset
Timing accuracy
Role switch
Hold mode
Sniff mode
Park state
Power control requests
Channel quality driven data rate (CQDDR)
SCO link
HV2 packets
HV3 packets
u-law log synchronous data
A-law log synchronous data
CVSD synchronous data
Transparent synchronous data
Flow control lag (most significant bit)
Broadcast Encryption
Enhanced Data Rate ACL 2 Mbps mode
Enhanced inquiry scan
Interlaced inquiry scan
Interlaced page scan
RSSI with inquiry results
Extended SCO link (EV3 packets)
EV4 packets
EV5 packets
3-slot Enhanced Data Rate ACL packets
5-slot Enhanced Data Rate ACL packets
Sniff subrating
Pause encryption
Enhanced Data Rate eSCO 2 Mbps mode
3-slot Enhanced Data Rate eSCO packets
Extended Inquiry Response
Secure Simple Pairing
Encapsulated PDU
Erroneous Data Reporting
Non-flushable Packet Boundary Flag
Link Supervision Timeout Changed Event
Inquiry TX Power Level
Enhanced Power Control
Extended features
< HCI Command: Read Remote Exte.. (0x01|0x001c) plen 3 #205 [hci0] 3408.646327
Handle: 3
Page: 1
> HCI Event: Command Status (0x0f) plen 4 #206 [hci0] 3408.647314
Read Remote Extended Features (0x01|0x001c) ncmd 2
Status: Success (0x00)
> HCI Event: Read Remote Extended Fea.. (0x23) plen 13 #207 [hci0] 3408.678320
Status: Success (0x00)
Handle: 3
Page: 1/1
Features: 0x01 0x00 0x00 0x00 0x00 0x00 0x00 0x00
Secure Simple Pairing (Host Support)
< HCI Command: Remote Name Req.. (0x01|0x0019) plen 10 #208 [hci0] 3408.678378
Address: 00:15:83:41:DB:94 (IVT corporation)
Page scan repetition mode: R2 (0x02)
Page scan mode: Mandatory (0x00)
Clock offset: 0x0000
< ACL Data TX: Handle 3 flags 0x00 dlen 10 #209 [hci0] 3408.678386
L2CAP: Information Request (0x0a) ident 1 len 2
Type: Extended features supported (0x0002)
> HCI Event: Command Status (0x0f) plen 4 #210 [hci0] 3408.680339
Remote Name Request (0x01|0x0019) ncmd 2
Status: Success (0x00)
> HCI Event: Number of Completed Packets (0x13) plen 5 #211 [hci0] 3408.706320
Num handles: 1
Handle: 3
Count: 1
> ACL Data RX: Handle 3 flags 0x02 dlen 16 #212 [hci0] 3408.708440
L2CAP: Information Response (0x0b) ident 1 len 8
Type: Extended features supported (0x0002)
Result: Success (0x0000)
Features: 0x00000080
Fixed Channels
< ACL Data TX: Handle 3 flags 0x00 dlen 10 #213 [hci0] 3408.708483
L2CAP: Information Request (0x0a) ident 2 len 2
Type: Fixed channels supported (0x0003)
> HCI Event: Number of Completed Packets (0x13) plen 5 #214 [hci0] 3408.712315
Num handles: 1
Handle: 3
Count: 1
> ACL Data RX: Handle 3 flags 0x02 dlen 20 #215 [hci0] 3408.714439
L2CAP: Information Response (0x0b) ident 2 len 12
Type: Fixed channels supported (0x0003)
Result: Success (0x0000)
Channels: 0x0000000000000002
L2CAP Signaling (BR/EDR)
> HCI Event: Remote Name Req Complete (0x07) plen 255 #216 [hci0] 3408.733311
Status: Success (0x00)
Address: 00:15:83:41:DB:94 (IVT corporation)
Name: Hi-Print 2×3 - DB94
# MGMT Event: Device Connected (0x000b) plen 35 {0x0003} [hci0] 3408.733351
BR/EDR Address: 00:15:83:41:DB:94 (IVT corporation)
Flags: 0x00000000
Data length: 22
Name (complete): Hi-Print 2×3 - DB94
# MGMT Event: Device Connected (0x000b) plen 35 {0x0001} [hci0] 3408.733351
BR/EDR Address: 00:15:83:41:DB:94 (IVT corporation)
Flags: 0x00000000
Data length: 22
Name (complete): Hi-Print 2×3 - DB94
< HCI Command: Disconnect (0x01|0x0006) plen 3 #217 [hci0] 3410.692257
Handle: 3
Reason: Remote User Terminated Connection (0x13)
> HCI Event: Command Status (0x0f) plen 4 #218 [hci0] 3410.693261
Disconnect (0x01|0x0006) ncmd 2
Status: Success (0x00)
> HCI Event: Disconnect Complete (0x05) plen 4 #219 [hci0] 3410.790256
Status: Success (0x00)
Handle: 3
Reason: Connection Terminated By Local Host (0x16)
# MGMT Event: Device Disconnected (0x000c) plen 8 {0x0003} [hci0] 3410.790295
BR/EDR Address: 00:15:83:41:DB:94 (IVT corporation)
Reason: Connection terminated by local host (0x02)
# MGMT Event: Device Disconnected (0x000c) plen 8 {0x0001} [hci0] 3410.790295
BR/EDR Address: 00:15:83:41:DB:94 (IVT corporation)
Reason: Connection terminated by local host (0x02)

Remove the device (previously paired) from the device list.
Then do a fresh connection; it will work.
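If you want to drive that from a script (the question mentions Python or Node.js shelling out to system commands), a minimal sketch using bluetoothctl from BlueZ on Ubuntu 20.04 might look like the following; the address is the printer from the btmon log above, and the one-shot command form assumes a reasonably recent bluetoothctl (otherwise run the same commands in its interactive shell):

# remove the stale pairing, rediscover the printer, then pair/trust/connect again
bluetoothctl remove 00:15:83:41:DB:94
bluetoothctl --timeout 15 scan on
bluetoothctl pair 00:15:83:41:DB:94
bluetoothctl trust 00:15:83:41:DB:94
bluetoothctl connect 00:15:83:41:DB:94

From Python these would simply be subprocess calls wrapping the same commands.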

Related

Error creating consenter: failed to restore persisted raft data: failed to create or read WAL: failed to open WAL: fileutil: file already locked

Orderers are failing when trying to create a channel.
Setup:
Three orderers (also tried with 5 orderers)
Kubernetes
Raft consensus
Fabric 1.4.3 and 1.4.1
It works perfectly with Docker Swarm.
Below is the error log from one of the orderers:
2019-09-04 13:02:11.488 UTC [orderer.consensus.etcdraft] HandleChain -> INFO 079 EvictionSuspicion not set, defaulting to 10m0s
2019-09-04 13:02:11.489 UTC [orderer.consensus.etcdraft] createOrReadWAL -> INFO 07a Found WAL data at path '/var/hyperledger/production/orderer/etcdraft/wal/nath41channel', replaying it channel=nath41channel node=2
2019-09-04 13:02:11.489 UTC [orderer.commmon.multichannel] newChainSupport -> PANI 07b [channel: nath41channel] Error creating consenter: failed to restore persisted raft data: failed to create or read WAL: failed to open WAL: fileutil: file already locked
panic: [channel: nath41channel] Error creating consenter: failed to restore persisted raft data: failed to create or read WAL: failed to open WAL: fileutil: file already locked
goroutine 86 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc00018bce0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x515
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc000134280, 0x4, 0x1040dcc, 0x2a, 0xc000721548, 0x2, 0x2, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc000134280, 0x1040dcc, 0x2a, 0xc000721548, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc000134288, 0x1040dcc, 0x2a, 0xc000721548, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
github.com/hyperledger/fabric/orderer/common/multichannel.newChainSupport(0xc000170000, 0xc00028f5e0, 0xc0004ef260, 0x1145580, 0x1b8f970, 0xc0004ed3a0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/chainsupport.go:74 +0x710
github.com/hyperledger/fabric/orderer/common/multichannel.(*Registrar).newChain(0xc000170000, 0xc0008dc870)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/registrar.go:327 +0x1df
github.com/hyperledger/fabric/orderer/common/multichannel.(*BlockWriter).WriteConfigBlock(0xc0005a8000, 0xc000483940, 0xc0008e3360, 0xb, 0xb)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/blockwriter.go:118 +0x2f3
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).writeConfigBlock(0xc0001f8f00, 0xc000483940, 0x7)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:1266 +0x1b4
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).writeBlock(0xc0001f8f00, 0xc000483940, 0x7)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:839 +0x18f
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).apply(0xc0001f8f00, 0xc0004ea240, 0x3, 0x4)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:1030 +0x250
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).serveRequest(0xc0001f8f00)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:748 +0x954
created by github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).Start
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:336 +0x1e0
2019-09-04 19:12:24.951 UTC [orderer.commmon.multichannel] commitBlock -> PANI 03c [channel: rak25syschannel] Could not append block: unexpected Previous block hash. Expected PreviousHash = [99f567ec6a4f92583076be9d414c47f990559a0f5f24bd0273ba13bbfefd60f8], PreviousHash referred in the latest block= [d1507d8cf004d1dd7cd7940eb3c0c314fd82dcafd1e6edf784df3893cc938a64]
panic: [channel: rak25syschannel] Could not append block: unexpected Previous block hash. Expected PreviousHash = [99f567ec6a4f92583076be9d414c47f990559a0f5f24bd0273ba13bbfefd60f8], PreviousHash referred in the latest block= [d1507d8cf004d1dd7cd7940eb3c0c314fd82dcafd1e6edf784df3893cc938a64]
Complete log of one of the orderers: http://ideone.com/TidhFt
I have got the solution!
It's because I am doing automation: the three orderers each generated their own genesis block using the supporting tools.
Even though the configuration is identical, we shouldn't do that, because with multiple genesis blocks the network starts forking; that's the way the Hyperledger Fabric protocol is designed.
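In other words: generate the genesis block exactly once and hand the same file to every orderer, rather than letting each orderer's bootstrap job run the tooling itself. A hedged sketch of that workflow (the profile and channel ID below are examples; use the ones from your own configtx.yaml):

# run this once, on one machine only
configtxgen -profile SampleMultiNodeEtcdRaft -channelID system-channel -outputBlock ./channel-artifacts/genesis.block
# then copy the identical genesis.block to every orderer (e.g. a shared volume or kubectl cp) before starting the pods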

hyperledger-fabric fabcar example : Container ... is not running

Problem
I'm following this tutorial in the official documentation:
https://hyperledger-fabric.readthedocs.io/en/release-1.4/write_first_app.html
but I am stuck at 'Launch the network' (https://hyperledger-fabric.readthedocs.io/en/release-1.4/write_first_app.html#launch-the-network).
./startFabric.sh javascript returns the following error message:
Error response from daemon: Container 8d4a67101bafc10453ab0a6c7d4afda63edc686ca157f8279ed1ebd11145b25a is not running
Environment
Below is my environment:
OS: Ubuntu 18.04.1 LTS
DOCKER: Docker version 18.06.1-ce, build e68fc7a
DOCKER-COMPOSE: docker-compose version 1.18.0, build 8dd22a9
GO: go version go1.12.4 linux/amd64
NPM: 3.5.2
NODE: v8.10.0
Python 2.7.15rc1
/etc/profile:
...
export PATH=$PATH:/usr/local/go/bin
export PATH=/home/sw/fabric/fabric-samples/bin:$PATH
export GOPATH=$HOME/go
(PATH and GOPATH are set; my $HOME/go directory is empty.)
My Ubuntu user is also a member of the sudo and docker groups:
$ groups
... ... ... sudo ... ... docker
(I hid the rest)
Below is the output when I try to start Fabric:
$ ./startFabric.sh javascript
# don't rewrite paths for Windows Git Bash users
export MSYS_NO_PATHCONV=1
docker-compose -f docker-compose.yml down
Removing peer0.org1.example.com ... done
Removing couchdb ... done
Removing ca.example.com ... done
Removing orderer.example.com ... done
Removing network net_basic
docker-compose -f docker-compose.yml up -d ca.example.com orderer.example.com peer0.org1.example.com couchdb
Creating couchdb ... done
Creating peer0.org1.example.com ... done
Creating orderer.example.com ...
Creating couchdb ...
Creating peer0.org1.example.com ...
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90b396af1160 hyperledger/fabric-peer "peer node start" 1 second ago Up Less than a second 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp peer0.org1.example.com
a0038f19943f hyperledger/fabric-ca "sh -c 'fabric-ca-se…" 5 seconds ago Up 2 seconds 0.0.0.0:7054->7054/tcp ca.example.com
77a56465104c hyperledger/fabric-couchdb "tini -- /docker-ent…" 5 seconds ago Up 1 second 4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp couchdb
7ed9d7dbf17f hyperledger/fabric-orderer "orderer" 5 seconds ago Up 3 seconds 0.0.0.0:7050->7050/tcp orderer.example.com
a4397f663fdd tensorflow/tensorflow "/run_jupyter.sh --a…" 6 months ago Exited (0) 6 months ago jolly_vaughan
# wait for Hyperledger Fabric to start
# incase of errors when running later commands, issue export FABRIC_START_TIMEOUT=<larger number>
export FABRIC_START_TIMEOUT=10
#echo ${FABRIC_START_TIMEOUT}
sleep ${FABRIC_START_TIMEOUT}
# Create the channel
docker exec -e "CORE_PEER_LOCALMSPID=Org1MSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin#org1.example.com/msp" peer0.org1.example.com peer channel create -o orderer.example.com:7050 -c mychannel -f /etc/hyperledger/configtx/channel.tx
Error response from daemon: Container 90b396af1160b6c7e3a35ec41806b428c299f598208dc77c4194ee1fa76a351a is not running
It seemed like the hyperledger/fabric-peer image was not starting, so I checked the Docker log:
$ docker logs 90b396
2019-04-24 05:31:04.584 UTC [nodeCmd] serve -> INFO 001 Starting peer:
Version: 1.4.1
Commit SHA: 87074a7
Go version: go1.11.5
OS/Arch: linux/amd64
Chaincode:
Base Image Version: 0.4.15
Base Docker Namespace: hyperledger
Base Docker Label: org.hyperledger.fabric
Docker Namespace: hyperledger
2019-04-24 05:31:04.585 UTC [ledgermgmt] initialize -> INFO 002 Initializing ledger mgmt
2019-04-24 05:31:04.585 UTC [kvledger] NewProvider -> INFO 003 Initializing ledger provider
2019-04-24 05:31:04.873 UTC [kvledger] NewProvider -> INFO 004 ledger provider Initialized
2019-04-24 05:31:05.002 UTC [couchdb] handleRequest -> WARN 005 Retrying couchdb request in 125ms. Attempt:1 Error:Get http://couchdb:5984/: dial tcp 172.18.0.3:5984: connect: connection refused
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0x63 pc=0x7f12f457d259]
runtime stack:
runtime.throw(0x1272c18, 0x2a)
/opt/go/src/runtime/panic.go:608 +0x72
runtime.sigpanic()
/opt/go/src/runtime/signal_unix.go:374 +0x2f2
goroutine 91 [syscall]:
runtime.cgocall(0xe455e0, 0xc0001a9e00, 0x29)
/opt/go/src/runtime/cgocall.go:128 +0x5e fp=0xc0001a9dc8 sp=0xc0001a9d90 pc=0x4039ee
net._C2func_getaddrinfo(0xc0004580c0, 0x0, 0xc0001d2240, 0xc00079e140, 0x0, 0x0, 0x0)
_cgo_gotypes.go:91 +0x55 fp=0xc0001a9e00 sp=0xc0001a9dc8 pc=0x616c85
net.cgoLookupIPCNAME.func1(0xc0004580c0, 0x0, 0xc0001d2240, 0xc00079e140, 0x8, 0x8, 0xc0007b0370)
/opt/go/src/net/cgo_unix.go:149 +0x131 fp=0xc0001a9e48 sp=0xc0001a9e00 pc=0x61c3b1
net.cgoLookupIPCNAME(0xc0004580b0, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/opt/go/src/net/cgo_unix.go:149 +0x153 fp=0xc0001a9f38 sp=0xc0001a9e48 pc=0x618243
net.cgoIPLookup(0xc0005144e0, 0xc0004580b0, 0x7)
/opt/go/src/net/cgo_unix.go:201 +0x4d fp=0xc0001a9fc8 sp=0xc0001a9f38 pc=0x6188fd
runtime.goexit()
/opt/go/src/runtime/asm_amd64.s:1333 +0x1 fp=0xc0001a9fd0 sp=0xc0001a9fc8 pc=0x45de51
created by net.cgoLookupIP
/opt/go/src/net/cgo_unix.go:211 +0xad
goroutine 1 [select]:
net/http.(*Transport).getConn(0xc0004c9680, 0xc0001d2120, 0x0, 0xc000674000, 0x4, 0xc0004580b0, 0xc, 0x0, 0x0, 0x20)
/opt/go/src/net/http/transport.go:1004 +0x58e
net/http.(*Transport).roundTrip(0xc0004c9680, 0xc000798200, 0xc0001d20f0, 0xc000458098, 0xc0004580a0)
/opt/go/src/net/http/transport.go:451 +0x690
net/http.(*Transport).RoundTrip(0xc0004c9680, 0xc000798200, 0xc0004c9680, 0xbf281b0f07aea3df, 0x851175c92)
/opt/go/src/net/http/roundtrip.go:17 +0x35
net/http.send(0xc000798000, 0x139e6e0, 0xc0004c9680, 0xbf281b0f07aea3df, 0x851175c92, 0x1fa1740, 0xc00079e110, 0xbf281b0f07aea3df, 0xc0004aab48, 0x1)
/opt/go/src/net/http/client.go:250 +0x14b
net/http.(*Client).send(0xc00066f560, 0xc000798000, 0xbf281b0f07aea3df, 0x851175c92, 0x1fa1740, 0xc00079e110, 0x0, 0x1, 0x0)
/opt/go/src/net/http/client.go:174 +0xfa
net/http.(*Client).do(0xc00066f560, 0xc000798000, 0x0, 0x0, 0x0)
/opt/go/src/net/http/client.go:641 +0x2a8
net/http.(*Client).Do(0xc00066f560, 0xc000798000, 0x10, 0xc0004aae40, 0x1)
/opt/go/src/net/http/client.go:509 +0x35
github.com/hyperledger/fabric/core/ledger/util/couchdb.(*CouchInstance).handleRequest(0xc00067f740, 0x13b7a20, 0xc000046090, 0x123cf31, 0x3, 0x0, 0x0, 0x124c88f, 0x11, 0xc000128a80, ...)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/util/couchdb/couchdb.go:1752 +0x64e
github.com/hyperledger/fabric/core/ledger/util/couchdb.(*CouchInstance).VerifyCouchConfig(0xc00067f740, 0x0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/util/couchdb/couchdb.go:410 +0x345
github.com/hyperledger/fabric/core/ledger/util/couchdb.CreateCouchInstance(0xc0000440af, 0xc, 0x0, 0x0, 0x0, 0x0, 0x3, 0xc, 0x826299e00, 0xc000042000, ...)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/util/couchdb/couchdbutil.go:58 +0x29e
github.com/hyperledger/fabric/core/ledger/kvledger/txmgmt/statedb/statecouchdb.NewVersionedDBProvider(0x13b0260, 0x1fc5e60, 0xb972cb, 0x10d80c0, 0xc000670018)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/kvledger/txmgmt/statedb/statecouchdb/statecouchdb.go:46 +0xe4
github.com/hyperledger/fabric/core/ledger/kvledger/txmgmt/privacyenabledstate.NewCommonStorageDBProvider(0x13a2ce0, 0xc000670018, 0x13b0260, 0x1fc5e60, 0x139cac0, 0xc0007b4c00, 0x2, 0x4, 0x0, 0xc000128800)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/kvledger/txmgmt/privacyenabledstate/common_storage_db.go:48 +0x48
github.com/hyperledger/fabric/core/ledger/kvledger.(*Provider).Initialize(0xc000128800, 0xc00062fda0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/kvledger/kv_ledger_provider.go:88 +0x25e
github.com/hyperledger/fabric/core/ledger/ledgermgmt.initialize(0xc00046ed70)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/ledgermgmt/ledger_mgmt.go:73 +0x4b4
github.com/hyperledger/fabric/core/ledger/ledgermgmt.Initialize.func1()
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/ledgermgmt/ledger_mgmt.go:53 +0x2a
sync.(*Once).Do(0x1fc5f38, 0xc0004794e0)
/opt/go/src/sync/once.go:44 +0xb3
github.com/hyperledger/fabric/core/ledger/ledgermgmt.Initialize(0xc00046ed70)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/ledgermgmt/ledger_mgmt.go:52 +0x55
github.com/hyperledger/fabric/peer/node.serve(0x1fc5e60, 0x0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/peer/node/start.go:176 +0x5bd
github.com/hyperledger/fabric/peer/node.glob..func1(0x1eb3b00, 0x1fc5e60, 0x0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/peer/node/start.go:121 +0x9c
github.com/hyperledger/fabric/vendor/github.com/spf13/cobra.(*Command).execute(0x1eb3b00, 0x1fc5e60, 0x0, 0x0, 0x1eb3b00, 0x1fc5e60)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:762 +0x473
github.com/hyperledger/fabric/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x1eb4220, 0x8, 0x0, 0x1eb33e0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:852 +0x2fd
github.com/hyperledger/fabric/vendor/github.com/spf13/cobra.(*Command).Execute(0x1eb4220, 0xc0004a7f40, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:800 +0x2b
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/peer/main.go:53 +0x2f7
goroutine 8 [syscall]:
os/signal.signal_recv(0x0)
/opt/go/src/runtime/sigqueue.go:139 +0x9c
os/signal.loop()
/opt/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
/opt/go/src/os/signal/signal_unix.go:29 +0x41
goroutine 21 [IO wait]:
internal/poll.runtime_pollWait(0x7f12f5994f00, 0x72, 0x0)
/opt/go/src/runtime/netpoll.go:173 +0x66
internal/poll.(*pollDesc).wait(0xc00045c198, 0x72, 0xc000082000, 0x0, 0x0)
/opt/go/src/internal/poll/fd_poll_runtime.go:85 +0x9a
internal/poll.(*pollDesc).waitRead(0xc00045c198, 0xffffffffffffff00, 0x0, 0x0)
/opt/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Accept(0xc00045c180, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/opt/go/src/internal/poll/fd_unix.go:384 +0x1a0
net.(*netFD).accept(0xc00045c180, 0x7f12fa405000, 0x0, 0xc000058eb0)
/opt/go/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc00079e690, 0xc000058eb8, 0x40d1d8, 0x30)
/opt/go/src/net/tcpsock_posix.go:139 +0x2e
net.(*TCPListener).Accept(0xc00079e690, 0x1174aa0, 0xc0001d60c0, 0x1074180, 0x1ea5270)
/opt/go/src/net/tcpsock.go:260 +0x47
net/http.(*Server).Serve(0xc000665a00, 0x13b6a20, 0xc00079e690, 0x0, 0x0)
/opt/go/src/net/http/server.go:2826 +0x22f
created by github.com/hyperledger/fabric/core/operations.(*System).Start
/opt/gopath/src/github.com/hyperledger/fabric/core/operations/system.go:121 +0x1a3
goroutine 22 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).drain(0xc0001f8a80)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:206 +0x12a
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.NewBufferPool
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:237 +0x177
goroutine 10 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).compactionError(0xc0000b31e0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:90 +0xd3
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:142 +0x40c
goroutine 11 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mpoolDrain(0xc0000b31e0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_state.go:101 +0xe7
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:143 +0x42e
goroutine 12 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).tCompaction(0xc0000b31e0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:834 +0x331
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:149 +0x58c
goroutine 13 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mCompaction(0xc0000b31e0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:762 +0x12e
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:150 +0x5ae
goroutine 14 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).drain(0xc0001f82a0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:206 +0x12a
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.NewBufferPool
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:237 +0x177
goroutine 15 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).compactionError(0xc0000b3380)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:90 +0xd3
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:142 +0x40c
goroutine 16 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mpoolDrain(0xc0000b3380)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_state.go:101 +0xe7
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:143 +0x42e
goroutine 66 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).tCompaction(0xc0000b3380)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:834 +0x331
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:149 +0x58c
goroutine 67 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mCompaction(0xc0000b3380)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:762 +0x12e
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:150 +0x5ae
goroutine 68 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).drain(0xc0007a40e0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:206 +0x12a
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.NewBufferPool
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:237 +0x177
goroutine 69 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).compactionError(0xc0000b3520)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:90 +0xd3
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:142 +0x40c
goroutine 70 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mpoolDrain(0xc0000b3520)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_state.go:101 +0xe7
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:143 +0x42e
goroutine 71 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).tCompaction(0xc0000b3520)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:834 +0x331
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:149 +0x58c
goroutine 72 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mCompaction(0xc0000b3520)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:762 +0x12e
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:150 +0x5ae
goroutine 73 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).drain(0xc0007a42a0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:206 +0x12a
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.NewBufferPool
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:237 +0x177
goroutine 74 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).compactionError(0xc0000b36c0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:90 +0xd3
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:142 +0x40c
goroutine 75 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mpoolDrain(0xc0000b36c0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_state.go:101 +0xe7
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:143 +0x42e
goroutine 76 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).tCompaction(0xc0000b36c0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:834 +0x331
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:149 +0x58c
goroutine 77 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mCompaction(0xc0000b36c0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:762 +0x12e
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:150 +0x5ae
goroutine 78 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).drain(0xc0007a4460)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:206 +0x12a
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.NewBufferPool
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:237 +0x177
goroutine 79 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).compactionError(0xc0000b3860)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:90 +0xd3
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:142 +0x40c
goroutine 80 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mpoolDrain(0xc0000b3860)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_state.go:101 +0xe7
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:143 +0x42e
goroutine 81 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).tCompaction(0xc0000b3860)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:834 +0x331
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:149 +0x58c
goroutine 82 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mCompaction(0xc0000b3860)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:762 +0x12e
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:150 +0x5ae
goroutine 83 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).drain(0xc0007a4620)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:206 +0x12a
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.NewBufferPool
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:237 +0x177
goroutine 36 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).compactionError(0xc0000b3a00)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:90 +0xd3
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:142 +0x40c
goroutine 37 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mpoolDrain(0xc0000b3a00)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_state.go:101 +0xe7
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:143 +0x42e
goroutine 38 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).tCompaction(0xc0000b3a00)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:834 +0x331
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:149 +0x58c
goroutine 39 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mCompaction(0xc0000b3a00)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:762 +0x12e
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:150 +0x5ae
goroutine 89 [select]:
net.(*Resolver).LookupIPAddr(0x1fa0d00, 0x13b7a20, 0xc000046090, 0xc0004580b0, 0x7, 0xc0004580b8, 0x4, 0x1760, 0x0, 0x0)
/opt/go/src/net/lookup.go:227 +0x55f
net.(*Resolver).internetAddrList(0x1fa0d00, 0x13b7a20, 0xc000046090, 0x123d330, 0x3, 0xc0004580b0, 0xc, 0x0, 0x0, 0x0, ...)
/opt/go/src/net/ipsock.go:279 +0x614
net.(*Resolver).resolveAddrList(0x1fa0d00, 0x13b7a20, 0xc000046090, 0x123da6a, 0x4, 0x123d330, 0x3, 0xc0004580b0, 0xc, 0x0, ...)
/opt/go/src/net/dial.go:202 +0x4fb
net.(*Dialer).DialContext(0x1fa18c0, 0x13b7a20, 0xc000046090, 0x123d330, 0x3, 0xc0004580b0, 0xc, 0x0, 0x0, 0x0, ...)
/opt/go/src/net/dial.go:384 +0x201
net/http.(*Transport).dial(0xc0004c9680, 0x13b7a20, 0xc000046090, 0x123d330, 0x3, 0xc0004580b0, 0xc, 0xc00062e700, 0xc0005b9db8, 0xc0005b9c00, ...)
/opt/go/src/net/http/transport.go:925 +0x17f
net/http.(*Transport).dialConn(0xc0004c9680, 0x13b7a20, 0xc000046090, 0x0, 0xc000674000, 0x4, 0xc0004580b0, 0xc, 0x0, 0x0, ...)
/opt/go/src/net/http/transport.go:1240 +0x313
net/http.(*Transport).getConn.func4(0xc0004c9680, 0x13b7a20, 0xc000046090, 0xc0001d2150, 0xc0004684e0)
/opt/go/src/net/http/transport.go:999 +0x6e
created by net/http.(*Transport).getConn
/opt/go/src/net/http/transport.go:998 +0x3d7
goroutine 90 [select]:
net.cgoLookupIP(0x13b79e0, 0xc00049e140, 0xc0004580b0, 0x7, 0x0, 0xc000797bc0, 0x1069d40, 0xc000520030, 0x1010720, 0xc0007b1350)
/opt/go/src/net/cgo_unix.go:212 +0x17b
net.(*Resolver).lookupIP(0x1fa0d00, 0x13b79e0, 0xc00049e140, 0xc0004580b0, 0x7, 0x0, 0xc000462d80, 0xc0006779c0, 0xc000797fa0, 0x0)
/opt/go/src/net/lookup_unix.go:95 +0x166
net.(*Resolver).lookupIP-fm(0x13b79e0, 0xc00049e140, 0xc0004580b0, 0x7, 0x42be22, 0xc000000008, 0xc0006779c0, 0xc0007b0370, 0xc0001a9ea0)
/opt/go/src/net/lookup.go:207 +0x56
net.glob..func1(0x13b79e0, 0xc00049e140, 0xc000796350, 0xc0004580b0, 0x7, 0xc000796a70, 0x1069d40, 0xc00019d740, 0x1069d40, 0xc0007a3560)
/opt/go/src/net/hook.go:19 +0x52
net.(*Resolver).LookupIPAddr.func1(0x0, 0x0, 0x0, 0x0)
/opt/go/src/net/lookup.go:221 +0xd8
internal/singleflight.(*Group).doCall(0x1fa0d10, 0xc000012230, 0xc0004580b0, 0x7, 0xc0001d21e0)
/opt/go/src/internal/singleflight/singleflight.go:95 +0x2e
created by internal/singleflight.(*Group).DoChan
/opt/go/src/internal/singleflight/singleflight.go:88 +0x2a0
goroutine 88 [select]:
net/http.setRequestCancel.func3(0x0, 0xc0001d20f0, 0xc0000121e0, 0xc000458098, 0xc000468480)
/opt/go/src/net/http/client.go:321 +0xcf
created by net/http.setRequestCancel
/opt/go/src/net/http/client.go:320 +0x24e
It looks like the line below is the main cause:
[signal SIGSEGV: segmentation violation code=0x1 addr=0x63 pc=0x7f12f457d259]
I have tried
- deleting and reinstalling
- shutting down the pre-existing network (https://hyperledger-fabric.readthedocs.io/en/release-1.4/write_first_app.html#set-up-the-blockchain-network)
but the problem remained the same.
So that's all I've got.
Can someone please let me know what the problem is and how to fix it?
You can try setting the following environment variable in the peer: GODEBUG=netdns=go
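One way to apply that, sketched here under the assumption that the peer is launched from the basic-network docker-compose.yml of fabric-samples, is to add the variable to the peer's environment section and recreate the container:

  peer0.org1.example.com:
    environment:
      # force Go's built-in DNS resolver instead of the cgo getaddrinfo path that is crashing
      - GODEBUG=netdns=go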

How to configure and run Reaper to repair Cassandra on Linux (CentOS environment)

I'm trying to install and run Reaper 1.4 on my CentOS VM. I followed the same installation steps as in the given video (https://www.youtube.com/watch?v=0dub29BgwPI), but still have no success in getting Reaper started. Can anyone please help me with a proper/complete document? I have already read and followed
http://cassandra-reaper.io/docs/download/
Below are my cassandra-reaper.yaml settings:
segmentCountPerNode: 16
repairParallelism: DATACENTER_AWARE
repairIntensity: 0.9
scheduleDaysBetween: 7
repairRunThreadCount: 15
hangingRepairTimeoutMins: 30
storageType: cassandra
enableCrossOrigin: true
incrementalRepair: false
blacklistTwcsTables: false
enableDynamicSeedList: true
repairManagerSchedulingIntervalSeconds: 10
activateQueryLogger: false
jmxConnectionTimeoutInSeconds: 5
useAddressTranslator: false
# purgeRecordsAfterInDays: 30
# numberOfRunsToKeepPerUnit: 10
jmxPorts:
  #127.0.0.1: 7100
  #10.X.X.X: 7199
  #127.0.0.2: 7200
  #127.0.0.3: 7300
  #127.0.0.4: 7400
  #127.0.0.5: 7500
  #127.0.0.6: 7600
  #127.0.0.7: 7700
  #127.0.0.8: 7800
jmxAuth:
  username: *****
  password: *****
server:
  type: default
  applicationConnectors:
    - type: http
      port: 8080
      bindHost: 0.0.0.0
  adminConnectors:
    - type: http
      port: 8081
      bindHost: 0.0.0.0
  requestLog:
    appenders: []
cassandra:
  clusterName: "dc1"
  contactPoints: ["10.X.X.1","10.X.X.2","10.X.X.3","10.X.X.4","10.X.X.5"]
  #contactPoints: ["127.0.0.1"]
  keyspace: "reaper_db"
  loadBalancingPolicy:
    type: tokenAware
    shuffleReplicas: true
    subPolicy:
      type: dcAwareRoundRobin
      localDC:
      usedHostsPerRemoteDC: 0
      allowRemoteDCsForLocalConsistencyLevel: false
  authProvider:
    type: plainText
    username: cass
    password: cass
  ssl:
    type: jdk
autoScheduling:
  enabled: false
  initialDelayPeriod: PT15S
  periodBetweenPolls: PT10M
  timeBeforeFirstSchedule: PT5M
  scheduleSpreadPeriod: PT6H
  excludedKeyspaces:
    - keyspace1
    - keyspace2
accessControl:
  sessionTimeout: PT10M
  shiro:
    iniConfigs: ["classpath:shiro.ini"]
Log from /var/log/cassandra-reaper/reaper.log:
INFO [main] i.c.ReaperApplication - initializing runner thread pool with 15 threads
INFO [main] i.c.ReaperApplication - initializing storage of type: cassandra
INFO [main] c.d.d.core - DataStax Java driver 3.5.0 for Apache Cassandra
INFO [main] c.d.d.c.GuavaCompatibility - Detected Guava >= 19 in the classpath, using modern compatibility layer
INFO [main] c.d.d.c.ClockFactory - Using native clock to generate timestamps.
INFO [main] c.d.d.c.NettyUtil - Found Netty's native epoll transport in the classpath, using it
INFO [main] o.a.s.c.ReflectionBuilder - An instance with name 'authc' already exists. Redefining this object as a new instance of type org.apache.shiro.web.filter.authc.PassThruAuthenticationFilter
Log from /var/log/cassandra-reaper.err:
at org.yaml.snakeyaml.scanner.ScannerImpl.fetchMoreTokens(ScannerImpl.java:415)
at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:226)
at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingValue.produce(ParserImpl.java:586)
at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:158)
at org.yaml.snakeyaml.parser.ParserImpl.getEvent(ParserImpl.java:168)
at com.fasterxml.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:347)
... 11 more
ls: cannot access server/target/cassandra-reaper-*.jar: No such file or directory
io.dropwizard.configuration.ConfigurationParsingException: /etc/cassandra-reaper/cassandra-reaper.yaml has an error:
* Malformed YAML at line: 27, column: 11; while scanning for the next token; found character '\t' that cannot start any token; in 'reader', line 27, column 1:
clusterName: "dc1"
^
at [Source: (ByteArrayInputStream); line: 26, column: 10]
at io.dropwizard.configuration.ConfigurationParsingException$Builder.build(ConfigurationParsingException.java:279)
at io.dropwizard.configuration.BaseConfigurationFactory.build(BaseConfigurationFactory.java:96)
at io.dropwizard.cli.ConfiguredCommand.parseConfiguration(ConfiguredCommand.java:126)
at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:74)
at io.dropwizard.cli.Cli.run(Cli.java:78)
at io.dropwizard.Application.run(Application.java:93)
at io.cassandrareaper.ReaperApplication.main(ReaperApplication.java:99)
Caused by: com.fasterxml.jackson.dataformat.yaml.snakeyaml.error.MarkedYAMLException: while scanning for the next token; found character '\t' that cannot start any token; in 'reader', line 27, column 1:
clusterName: "dc1"
^
Malformed YAML at line: 27, column: 11; while scanning for the next token; found character '\t' that cannot start any token; in 'reader', line 27, column 1:
clusterName: "dc1"
You need to remove any tab characters in your YAML file and replace them with spaces (4 spaces, for example).
See the answer here for why this is a common problem when editing YAML files:
a YAML file cannot use tabs for indentation.
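A quick way to locate and fix the offending tabs on the CentOS box (this assumes GNU grep and sed; the sed call keeps a .bak backup of the original file):

# show every line that contains a tab character
grep -nP '\t' /etc/cassandra-reaper/cassandra-reaper.yaml
# replace each tab with four spaces, keeping the original as .bak
sed -i.bak 's/\t/    /g' /etc/cassandra-reaper/cassandra-reaper.yaml

After that, restart Reaper and check /var/log/cassandra-reaper.err again for any remaining parse errors.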

How to monitor the throughput of Heron Cluster

I need to get the throughput of a Heron cluster for some reason, but there is no such metric in the Heron UI. Do you have any ideas about how to monitor the throughput of a Heron cluster? Thanks.
The result of running heron-explorer is as follows:
yitian@heron01:~$ heron-explorer metrics aurora/yitian/devel SentenceWordCountTopology
[2018-08-03 21:02:09 +0000] [INFO]: Using tracker URL: http://127.0.0.1:8888
'spout' metrics:
container id jvm-uptime-secs jvm-process-cpu-load jvm-memory-used-mb emit-count ack-count fail-count
------------------- ----------------- ---------------------- -------------------- ------------ ----------- ------------
container_3_spout_6 2053 0.253257 146 1.13288e+07 1.13278e+07 0
container_4_spout_7 2091 0.150625 137.5 1.1624e+07 1.16228e+07 231
'count' metrics:
container id jvm-uptime-secs jvm-process-cpu-load jvm-memory-used-mb emit-count execute-count ack-count fail-count
-------------------- ----------------- ---------------------- -------------------- ------------ --------------- ----------- ------------
container_6_count_12 2092 0.184742 155.167 0 4.6026e+07 4.6026e+07 0
container_5_count_9 2091 0.387867 146 0 4.60069e+07 4.60069e+07 0
container_6_count_11 2092 0.184488 157.833 0 4.58158e+07 4.58158e+07 0
container_4_count_8 2091 0.443688 129.833 0 4.58722e+07 4.58722e+07 0
container_5_count_10 2091 0.382577 118.5 0 4.60091e+07 4.60091e+07 0
'split' metrics:
container id jvm-uptime-secs jvm-process-cpu-load jvm-memory-used-mb emit-count execute-count ack-count fail-count
------------------- ----------------- ---------------------- -------------------- ------------ --------------- ----------- ------------
container_1_split_2 2091 0.143034 75.3333 4.59453e+07 4.59453e+06 4.59453e+06 0
container_3_split_5 2042 1.12248 79.1667 4.64862e+07 4.64862e+06 4.64862e+06 0
container_2_split_3 2150 0.139837 83.6667 4.59443e+07 4.59443e+06 4.59443e+06 0
container_1_split_1 2091 0.145702 104.167 4.59454e+07 4.59454e+06 4.59454e+06 0
container_2_split_4 2150 0.138453 106.333 4.59443e+07 4.59443e+06 4.59443e+06 0
[2018-08-03 21:02:09 +0000] [INFO]: Elapsed time: 0.031s.
You can use the execute-count of your sink component to measure the output of your topology. If each of your components has a 1:1 input:output ratio then this will be your throughput.
However, if you are windowing tuples into batches or splitting tuples (like separating sentences into individual words) then things get a little more complicated. You can get the input into your topology by looking at the emit-count of your spout components. You could then compare this with your bolt execute-counts to create your own throughput metric.
An easy way to get programmatic access to these metrics is via the Heron Tracker REST API. You can use your chosen language's HTTP library (like Requests for Python) to query the last 3 hours of data for a running topology. If you require more than 3 hours of data (the maximum stored by the topology TMaster) you will need to use one of the other metrics sinks to send metrics to an external database. Heron currently provides sinks for saving to local files, Graphite or Prometheus. InfluxDB support is in the works.
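For example, a query against the Tracker shown in the output above (http://127.0.0.1:8888) could look roughly like the following; the endpoint and parameter names follow the Tracker REST API, but treat the exact values (component, metric name, interval) as an illustration to adapt to your own topology:

# execute-count of the 'count' bolt over the last 3 hours (10800 seconds)
curl "http://127.0.0.1:8888/topologies/metrics?cluster=aurora&environ=devel&topology=SentenceWordCountTopology&component=count&metricname=__execute-count/default&interval=10800"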

Ceph OSD always 'down' in Ubuntu 14.04.1

I am trying to install and deploy a Ceph cluster. As I don't have enough physical servers, I created 4 VMs on my OpenStack using the official Ubuntu 14.04 image. I want to deploy a cluster with 1 monitor node and 3 OSD nodes running Ceph version 0.80.7-0ubuntu0.14.04.1. I followed the steps from the manual deployment document and successfully installed the monitor node. However, after installing the OSD nodes, it seems that the OSD daemons are running but do not correctly report to the monitor node. The OSD tree always shows them as down when I run the command ceph --cluster cephcluster1 osd tree.
The following are the commands and corresponding results that may be related to my problem.
root@monitor:/home/ubuntu# ceph --cluster cephcluster1 osd tree
# id weight type name up/down reweight
-1 3 root default
-2 1 host osd1
0 1 osd.0 down 1
-3 1 host osd2
1 1 osd.1 down 1
-4 1 host osd3
2 1 osd.2 down 1
root@monitor:/home/ubuntu# ceph --cluster cephcluster1 -s
cluster fd78cbf8-8c64-4b12-9cfa-0e75bc6c8d98
health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean; 3/3 in osds are down
monmap e1: 1 mons at {monitor=172.26.111.4:6789/0}, election epoch 1, quorum 0 monitor
osdmap e21: 3 osds: 0 up, 3 in
pgmap v22: 192 pgs, 3 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
192 creating
The configuration file /etc/ceph/cephcluster1.conf on all nodes:
[global]
fsid = fd78cbf8-8c64-4b12-9cfa-0e75bc6c8d98
mon initial members = monitor
mon host = 172.26.111.4
public network = 10.5.0.0/16
cluster network = 172.26.111.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
filestore xattr use omap = true
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
[osd]
osd journal size = 1024
[osd.0]
osd host = osd1
[osd.1]
osd host = osd2
[osd.2]
osd host = osd3
Logs when I start one of the osd daemons through start ceph-osd cluster=cephcluster1 id=x where x is the OSD ID:
/var/log/ceph/cephcluster1-osd.0.log on the OSD node #1:
2015-02-11 09:59:56.626899 7f5409d74800 0 ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3), process ceph-osd, pid 11230
2015-02-11 09:59:56.646218 7f5409d74800 0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: FIEMAP ioctl is supported and appears to work
2015-02-11 09:59:56.646372 7f5409d74800 0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2015-02-11 09:59:56.658227 7f5409d74800 0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2015-02-11 09:59:56.679515 7f5409d74800 0 filestore(/var/lib/ceph/osd/cephcluster1-0) limited size xattrs
2015-02-11 09:59:56.699721 7f5409d74800 0 filestore(/var/lib/ceph/osd/cephcluster1-0) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2015-02-11 09:59:56.700107 7f5409d74800 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2015-02-11 09:59:56.700454 7f5409d74800 1 journal _open /var/lib/ceph/osd/cephcluster1-0/journal fd 20: 1073741824 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-02-11 09:59:56.704025 7f5409d74800 1 journal _open /var/lib/ceph/osd/cephcluster1-0/journal fd 20: 1073741824 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-02-11 09:59:56.704884 7f5409d74800 1 journal close /var/lib/ceph/osd/cephcluster1-0/journal
2015-02-11 09:59:56.725281 7f5409d74800 0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: FIEMAP ioctl is supported and appears to work
2015-02-11 09:59:56.725397 7f5409d74800 0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2015-02-11 09:59:56.736445 7f5409d74800 0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2015-02-11 09:59:56.756912 7f5409d74800 0 filestore(/var/lib/ceph/osd/cephcluster1-0) limited size xattrs
2015-02-11 09:59:56.776471 7f5409d74800 0 filestore(/var/lib/ceph/osd/cephcluster1-0) mount: WRITEAHEAD journal mode explicitly enabled in conf
2015-02-11 09:59:56.776748 7f5409d74800 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2015-02-11 09:59:56.776848 7f5409d74800 1 journal _open /var/lib/ceph/osd/cephcluster1-0/journal fd 21: 1073741824 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-02-11 09:59:56.777069 7f5409d74800 1 journal _open /var/lib/ceph/osd/cephcluster1-0/journal fd 21: 1073741824 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-02-11 09:59:56.783019 7f5409d74800 0 <cls> cls/hello/cls_hello.cc:271: loading cls_hello
2015-02-11 09:59:56.783584 7f5409d74800 0 osd.0 11 crush map has features 1107558400, adjusting msgr requires for clients
2015-02-11 09:59:56.783645 7f5409d74800 0 osd.0 11 crush map has features 1107558400 was 8705, adjusting msgr requires for mons
2015-02-11 09:59:56.783687 7f5409d74800 0 osd.0 11 crush map has features 1107558400, adjusting msgr requires for osds
2015-02-11 09:59:56.783750 7f5409d74800 0 osd.0 11 load_pgs
2015-02-11 09:59:56.783831 7f5409d74800 0 osd.0 11 load_pgs opened 0 pgs
2015-02-11 09:59:56.792167 7f53f9b57700 0 osd.0 11 ignoring osdmap until we have initialized
2015-02-11 09:59:56.792334 7f53f9b57700 0 osd.0 11 ignoring osdmap until we have initialized
2015-02-11 09:59:56.792838 7f5409d74800 0 osd.0 11 done with init, starting boot process
/var/log/ceph/ceph-mon.monitor.log on the monitor node:
2015-02-11 09:59:56.593494 7f24cc41d700 0 mon.monitor@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=osd1", "root=default"], "id": 0, "weight": 0.05} v 0) v1
2015-02-11 09:59:56.593955 7f24cc41d700 0 mon.monitor@0(leader).osd e21 create-or-move crush item name 'osd.0' initial_weight 0.05 at location {host=osd1,root=default}
Any suggestion is appreciated. Many thanks!
