hyperledger-fabric fabcar example: Container ... is not running

Problem
I'm following this tutorial from the official documentation:
https://hyperledger-fabric.readthedocs.io/en/release-1.4/write_first_app.html
but I'm stuck at 'Launch the network' (https://hyperledger-fabric.readthedocs.io/en/release-1.4/write_first_app.html#launch-the-network).
./startFabric.sh javascript returns the following error message:
Error response from daemon: Container 8d4a67101bafc10453ab0a6c7d4afda63edc686ca157f8279ed1ebd11145b25a is not running
Environment
Below is my environment:
OS: Ubuntu 18.04.1 LTS
DOCKER: Docker version 18.06.1-ce, build e68fc7a
DOCKER-COMPOSE: docker-compose version 1.18.0, build 8dd22a9
GO: go version go1.12.4 linux/amd64
NPM: 3.5.2
NODE: v8.10.0
Python 2.7.15rc1
/etc/profile:
...
export PATH=$PATH:/usr/local/go/bin
export PATH=/home/sw/fabric/fabric-samples/bin:$PATH
export GOPATH=$HOME/go
(PATH and GOPATH variables set)
(my $HOME/go directory is empty)
My Ubuntu user is also a member of the sudo group and the docker group:
$ groups
... ... ... sudo ... ... docker
(I hid the rest)
Below is the output when I try to start Fabric:
$ ./startFabric.sh javascript
# don't rewrite paths for Windows Git Bash users
export MSYS_NO_PATHCONV=1
docker-compose -f docker-compose.yml down
Removing peer0.org1.example.com ... done
Removing couchdb ... done
Removing ca.example.com ... done
Removing orderer.example.com ... done
Removing network net_basic
docker-compose -f docker-compose.yml up -d ca.example.com orderer.example.com peer0.org1.example.com couchdb
Creating couchdb ... done
Creating peer0.org1.example.com ... done
Creating orderer.example.com ...
Creating couchdb ...
Creating peer0.org1.example.com ...
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90b396af1160 hyperledger/fabric-peer "peer node start" 1 second ago Up Less than a second 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp peer0.org1.example.com
a0038f19943f hyperledger/fabric-ca "sh -c 'fabric-ca-se…" 5 seconds ago Up 2 seconds 0.0.0.0:7054->7054/tcp ca.example.com
77a56465104c hyperledger/fabric-couchdb "tini -- /docker-ent…" 5 seconds ago Up 1 second 4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp couchdb
7ed9d7dbf17f hyperledger/fabric-orderer "orderer" 5 seconds ago Up 3 seconds 0.0.0.0:7050->7050/tcp orderer.example.com
a4397f663fdd tensorflow/tensorflow "/run_jupyter.sh --a…" 6 months ago Exited (0) 6 months ago jolly_vaughan
# wait for Hyperledger Fabric to start
# incase of errors when running later commands, issue export FABRIC_START_TIMEOUT=<larger number>
export FABRIC_START_TIMEOUT=10
#echo ${FABRIC_START_TIMEOUT}
sleep ${FABRIC_START_TIMEOUT}
# Create the channel
docker exec -e "CORE_PEER_LOCALMSPID=Org1MSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org1.example.com/msp" peer0.org1.example.com peer channel create -o orderer.example.com:7050 -c mychannel -f /etc/hyperledger/configtx/channel.tx
Error response from daemon: Container 90b396af1160b6c7e3a35ec41806b428c299f598208dc77c4194ee1fa76a351a is not running
It seemed like the hyperledger/fabric-peer image was not starting, so I checked the Docker logs:
$ docker logs 90b396
2019-04-24 05:31:04.584 UTC [nodeCmd] serve -> INFO 001 Starting peer:
Version: 1.4.1
Commit SHA: 87074a7
Go version: go1.11.5
OS/Arch: linux/amd64
Chaincode:
Base Image Version: 0.4.15
Base Docker Namespace: hyperledger
Base Docker Label: org.hyperledger.fabric
Docker Namespace: hyperledger
2019-04-24 05:31:04.585 UTC [ledgermgmt] initialize -> INFO 002 Initializing ledger mgmt
2019-04-24 05:31:04.585 UTC [kvledger] NewProvider -> INFO 003 Initializing ledger provider
2019-04-24 05:31:04.873 UTC [kvledger] NewProvider -> INFO 004 ledger provider Initialized
2019-04-24 05:31:05.002 UTC [couchdb] handleRequest -> WARN 005 Retrying couchdb request in 125ms. Attempt:1 Error:Get http://couchdb:5984/: dial tcp 172.18.0.3:5984: connect: connection refused
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0x63 pc=0x7f12f457d259]
runtime stack:
runtime.throw(0x1272c18, 0x2a)
/opt/go/src/runtime/panic.go:608 +0x72
runtime.sigpanic()
/opt/go/src/runtime/signal_unix.go:374 +0x2f2
goroutine 91 [syscall]:
runtime.cgocall(0xe455e0, 0xc0001a9e00, 0x29)
/opt/go/src/runtime/cgocall.go:128 +0x5e fp=0xc0001a9dc8 sp=0xc0001a9d90 pc=0x4039ee
net._C2func_getaddrinfo(0xc0004580c0, 0x0, 0xc0001d2240, 0xc00079e140, 0x0, 0x0, 0x0)
_cgo_gotypes.go:91 +0x55 fp=0xc0001a9e00 sp=0xc0001a9dc8 pc=0x616c85
net.cgoLookupIPCNAME.func1(0xc0004580c0, 0x0, 0xc0001d2240, 0xc00079e140, 0x8, 0x8, 0xc0007b0370)
/opt/go/src/net/cgo_unix.go:149 +0x131 fp=0xc0001a9e48 sp=0xc0001a9e00 pc=0x61c3b1
net.cgoLookupIPCNAME(0xc0004580b0, 0x7, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/opt/go/src/net/cgo_unix.go:149 +0x153 fp=0xc0001a9f38 sp=0xc0001a9e48 pc=0x618243
net.cgoIPLookup(0xc0005144e0, 0xc0004580b0, 0x7)
/opt/go/src/net/cgo_unix.go:201 +0x4d fp=0xc0001a9fc8 sp=0xc0001a9f38 pc=0x6188fd
runtime.goexit()
/opt/go/src/runtime/asm_amd64.s:1333 +0x1 fp=0xc0001a9fd0 sp=0xc0001a9fc8 pc=0x45de51
created by net.cgoLookupIP
/opt/go/src/net/cgo_unix.go:211 +0xad
goroutine 1 [select]:
net/http.(*Transport).getConn(0xc0004c9680, 0xc0001d2120, 0x0, 0xc000674000, 0x4, 0xc0004580b0, 0xc, 0x0, 0x0, 0x20)
/opt/go/src/net/http/transport.go:1004 +0x58e
net/http.(*Transport).roundTrip(0xc0004c9680, 0xc000798200, 0xc0001d20f0, 0xc000458098, 0xc0004580a0)
/opt/go/src/net/http/transport.go:451 +0x690
net/http.(*Transport).RoundTrip(0xc0004c9680, 0xc000798200, 0xc0004c9680, 0xbf281b0f07aea3df, 0x851175c92)
/opt/go/src/net/http/roundtrip.go:17 +0x35
net/http.send(0xc000798000, 0x139e6e0, 0xc0004c9680, 0xbf281b0f07aea3df, 0x851175c92, 0x1fa1740, 0xc00079e110, 0xbf281b0f07aea3df, 0xc0004aab48, 0x1)
/opt/go/src/net/http/client.go:250 +0x14b
net/http.(*Client).send(0xc00066f560, 0xc000798000, 0xbf281b0f07aea3df, 0x851175c92, 0x1fa1740, 0xc00079e110, 0x0, 0x1, 0x0)
/opt/go/src/net/http/client.go:174 +0xfa
net/http.(*Client).do(0xc00066f560, 0xc000798000, 0x0, 0x0, 0x0)
/opt/go/src/net/http/client.go:641 +0x2a8
net/http.(*Client).Do(0xc00066f560, 0xc000798000, 0x10, 0xc0004aae40, 0x1)
/opt/go/src/net/http/client.go:509 +0x35
github.com/hyperledger/fabric/core/ledger/util/couchdb.(*CouchInstance).handleRequest(0xc00067f740, 0x13b7a20, 0xc000046090, 0x123cf31, 0x3, 0x0, 0x0, 0x124c88f, 0x11, 0xc000128a80, ...)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/util/couchdb/couchdb.go:1752 +0x64e
github.com/hyperledger/fabric/core/ledger/util/couchdb.(*CouchInstance).VerifyCouchConfig(0xc00067f740, 0x0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/util/couchdb/couchdb.go:410 +0x345
github.com/hyperledger/fabric/core/ledger/util/couchdb.CreateCouchInstance(0xc0000440af, 0xc, 0x0, 0x0, 0x0, 0x0, 0x3, 0xc, 0x826299e00, 0xc000042000, ...)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/util/couchdb/couchdbutil.go:58 +0x29e
github.com/hyperledger/fabric/core/ledger/kvledger/txmgmt/statedb/statecouchdb.NewVersionedDBProvider(0x13b0260, 0x1fc5e60, 0xb972cb, 0x10d80c0, 0xc000670018)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/kvledger/txmgmt/statedb/statecouchdb/statecouchdb.go:46 +0xe4
github.com/hyperledger/fabric/core/ledger/kvledger/txmgmt/privacyenabledstate.NewCommonStorageDBProvider(0x13a2ce0, 0xc000670018, 0x13b0260, 0x1fc5e60, 0x139cac0, 0xc0007b4c00, 0x2, 0x4, 0x0, 0xc000128800)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/kvledger/txmgmt/privacyenabledstate/common_storage_db.go:48 +0x48
github.com/hyperledger/fabric/core/ledger/kvledger.(*Provider).Initialize(0xc000128800, 0xc00062fda0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/kvledger/kv_ledger_provider.go:88 +0x25e
github.com/hyperledger/fabric/core/ledger/ledgermgmt.initialize(0xc00046ed70)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/ledgermgmt/ledger_mgmt.go:73 +0x4b4
github.com/hyperledger/fabric/core/ledger/ledgermgmt.Initialize.func1()
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/ledgermgmt/ledger_mgmt.go:53 +0x2a
sync.(*Once).Do(0x1fc5f38, 0xc0004794e0)
/opt/go/src/sync/once.go:44 +0xb3
github.com/hyperledger/fabric/core/ledger/ledgermgmt.Initialize(0xc00046ed70)
/opt/gopath/src/github.com/hyperledger/fabric/core/ledger/ledgermgmt/ledger_mgmt.go:52 +0x55
github.com/hyperledger/fabric/peer/node.serve(0x1fc5e60, 0x0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/peer/node/start.go:176 +0x5bd
github.com/hyperledger/fabric/peer/node.glob..func1(0x1eb3b00, 0x1fc5e60, 0x0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/peer/node/start.go:121 +0x9c
github.com/hyperledger/fabric/vendor/github.com/spf13/cobra.(*Command).execute(0x1eb3b00, 0x1fc5e60, 0x0, 0x0, 0x1eb3b00, 0x1fc5e60)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:762 +0x473
github.com/hyperledger/fabric/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x1eb4220, 0x8, 0x0, 0x1eb33e0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:852 +0x2fd
github.com/hyperledger/fabric/vendor/github.com/spf13/cobra.(*Command).Execute(0x1eb4220, 0xc0004a7f40, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:800 +0x2b
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/peer/main.go:53 +0x2f7
goroutine 8 [syscall]:
os/signal.signal_recv(0x0)
/opt/go/src/runtime/sigqueue.go:139 +0x9c
os/signal.loop()
/opt/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
/opt/go/src/os/signal/signal_unix.go:29 +0x41
goroutine 21 [IO wait]:
internal/poll.runtime_pollWait(0x7f12f5994f00, 0x72, 0x0)
/opt/go/src/runtime/netpoll.go:173 +0x66
internal/poll.(*pollDesc).wait(0xc00045c198, 0x72, 0xc000082000, 0x0, 0x0)
/opt/go/src/internal/poll/fd_poll_runtime.go:85 +0x9a
internal/poll.(*pollDesc).waitRead(0xc00045c198, 0xffffffffffffff00, 0x0, 0x0)
/opt/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Accept(0xc00045c180, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/opt/go/src/internal/poll/fd_unix.go:384 +0x1a0
net.(*netFD).accept(0xc00045c180, 0x7f12fa405000, 0x0, 0xc000058eb0)
/opt/go/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc00079e690, 0xc000058eb8, 0x40d1d8, 0x30)
/opt/go/src/net/tcpsock_posix.go:139 +0x2e
net.(*TCPListener).Accept(0xc00079e690, 0x1174aa0, 0xc0001d60c0, 0x1074180, 0x1ea5270)
/opt/go/src/net/tcpsock.go:260 +0x47
net/http.(*Server).Serve(0xc000665a00, 0x13b6a20, 0xc00079e690, 0x0, 0x0)
/opt/go/src/net/http/server.go:2826 +0x22f
created by github.com/hyperledger/fabric/core/operations.(*System).Start
/opt/gopath/src/github.com/hyperledger/fabric/core/operations/system.go:121 +0x1a3
goroutine 22 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).drain(0xc0001f8a80)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:206 +0x12a
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.NewBufferPool
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:237 +0x177
goroutine 10 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).compactionError(0xc0000b31e0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:90 +0xd3
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:142 +0x40c
goroutine 11 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mpoolDrain(0xc0000b31e0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_state.go:101 +0xe7
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:143 +0x42e
goroutine 12 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).tCompaction(0xc0000b31e0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:834 +0x331
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:149 +0x58c
goroutine 13 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mCompaction(0xc0000b31e0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:762 +0x12e
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:150 +0x5ae
goroutine 14 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).drain(0xc0001f82a0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:206 +0x12a
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.NewBufferPool
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:237 +0x177
goroutine 15 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).compactionError(0xc0000b3380)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:90 +0xd3
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:142 +0x40c
goroutine 16 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mpoolDrain(0xc0000b3380)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_state.go:101 +0xe7
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:143 +0x42e
goroutine 66 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).tCompaction(0xc0000b3380)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:834 +0x331
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:149 +0x58c
goroutine 67 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mCompaction(0xc0000b3380)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:762 +0x12e
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:150 +0x5ae
goroutine 68 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).drain(0xc0007a40e0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:206 +0x12a
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.NewBufferPool
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:237 +0x177
goroutine 69 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).compactionError(0xc0000b3520)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:90 +0xd3
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:142 +0x40c
goroutine 70 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mpoolDrain(0xc0000b3520)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_state.go:101 +0xe7
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:143 +0x42e
goroutine 71 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).tCompaction(0xc0000b3520)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:834 +0x331
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:149 +0x58c
goroutine 72 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mCompaction(0xc0000b3520)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:762 +0x12e
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:150 +0x5ae
goroutine 73 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).drain(0xc0007a42a0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:206 +0x12a
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.NewBufferPool
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:237 +0x177
goroutine 74 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).compactionError(0xc0000b36c0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:90 +0xd3
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:142 +0x40c
goroutine 75 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mpoolDrain(0xc0000b36c0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_state.go:101 +0xe7
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:143 +0x42e
goroutine 76 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).tCompaction(0xc0000b36c0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:834 +0x331
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:149 +0x58c
goroutine 77 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mCompaction(0xc0000b36c0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:762 +0x12e
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:150 +0x5ae
goroutine 78 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).drain(0xc0007a4460)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:206 +0x12a
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.NewBufferPool
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:237 +0x177
goroutine 79 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).compactionError(0xc0000b3860)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:90 +0xd3
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:142 +0x40c
goroutine 80 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mpoolDrain(0xc0000b3860)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_state.go:101 +0xe7
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:143 +0x42e
goroutine 81 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).tCompaction(0xc0000b3860)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:834 +0x331
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:149 +0x58c
goroutine 82 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mCompaction(0xc0000b3860)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:762 +0x12e
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:150 +0x5ae
goroutine 83 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).drain(0xc0007a4620)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:206 +0x12a
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.NewBufferPool
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:237 +0x177
goroutine 36 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).compactionError(0xc0000b3a00)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:90 +0xd3
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:142 +0x40c
goroutine 37 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mpoolDrain(0xc0000b3a00)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_state.go:101 +0xe7
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:143 +0x42e
goroutine 38 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).tCompaction(0xc0000b3a00)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:834 +0x331
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:149 +0x58c
goroutine 39 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.(*DB).mCompaction(0xc0000b3a00)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db_compaction.go:762 +0x12e
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.openDB
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:150 +0x5ae
goroutine 89 [select]:
net.(*Resolver).LookupIPAddr(0x1fa0d00, 0x13b7a20, 0xc000046090, 0xc0004580b0, 0x7, 0xc0004580b8, 0x4, 0x1760, 0x0, 0x0)
/opt/go/src/net/lookup.go:227 +0x55f
net.(*Resolver).internetAddrList(0x1fa0d00, 0x13b7a20, 0xc000046090, 0x123d330, 0x3, 0xc0004580b0, 0xc, 0x0, 0x0, 0x0, ...)
/opt/go/src/net/ipsock.go:279 +0x614
net.(*Resolver).resolveAddrList(0x1fa0d00, 0x13b7a20, 0xc000046090, 0x123da6a, 0x4, 0x123d330, 0x3, 0xc0004580b0, 0xc, 0x0, ...)
/opt/go/src/net/dial.go:202 +0x4fb
net.(*Dialer).DialContext(0x1fa18c0, 0x13b7a20, 0xc000046090, 0x123d330, 0x3, 0xc0004580b0, 0xc, 0x0, 0x0, 0x0, ...)
/opt/go/src/net/dial.go:384 +0x201
net/http.(*Transport).dial(0xc0004c9680, 0x13b7a20, 0xc000046090, 0x123d330, 0x3, 0xc0004580b0, 0xc, 0xc00062e700, 0xc0005b9db8, 0xc0005b9c00, ...)
/opt/go/src/net/http/transport.go:925 +0x17f
net/http.(*Transport).dialConn(0xc0004c9680, 0x13b7a20, 0xc000046090, 0x0, 0xc000674000, 0x4, 0xc0004580b0, 0xc, 0x0, 0x0, ...)
/opt/go/src/net/http/transport.go:1240 +0x313
net/http.(*Transport).getConn.func4(0xc0004c9680, 0x13b7a20, 0xc000046090, 0xc0001d2150, 0xc0004684e0)
/opt/go/src/net/http/transport.go:999 +0x6e
created by net/http.(*Transport).getConn
/opt/go/src/net/http/transport.go:998 +0x3d7
goroutine 90 [select]:
net.cgoLookupIP(0x13b79e0, 0xc00049e140, 0xc0004580b0, 0x7, 0x0, 0xc000797bc0, 0x1069d40, 0xc000520030, 0x1010720, 0xc0007b1350)
/opt/go/src/net/cgo_unix.go:212 +0x17b
net.(*Resolver).lookupIP(0x1fa0d00, 0x13b79e0, 0xc00049e140, 0xc0004580b0, 0x7, 0x0, 0xc000462d80, 0xc0006779c0, 0xc000797fa0, 0x0)
/opt/go/src/net/lookup_unix.go:95 +0x166
net.(*Resolver).lookupIP-fm(0x13b79e0, 0xc00049e140, 0xc0004580b0, 0x7, 0x42be22, 0xc000000008, 0xc0006779c0, 0xc0007b0370, 0xc0001a9ea0)
/opt/go/src/net/lookup.go:207 +0x56
net.glob..func1(0x13b79e0, 0xc00049e140, 0xc000796350, 0xc0004580b0, 0x7, 0xc000796a70, 0x1069d40, 0xc00019d740, 0x1069d40, 0xc0007a3560)
/opt/go/src/net/hook.go:19 +0x52
net.(*Resolver).LookupIPAddr.func1(0x0, 0x0, 0x0, 0x0)
/opt/go/src/net/lookup.go:221 +0xd8
internal/singleflight.(*Group).doCall(0x1fa0d10, 0xc000012230, 0xc0004580b0, 0x7, 0xc0001d21e0)
/opt/go/src/internal/singleflight/singleflight.go:95 +0x2e
created by internal/singleflight.(*Group).DoChan
/opt/go/src/internal/singleflight/singleflight.go:88 +0x2a0
goroutine 88 [select]:
net/http.setRequestCancel.func3(0x0, 0xc0001d20f0, 0xc0000121e0, 0xc000458098, 0xc000468480)
/opt/go/src/net/http/client.go:321 +0xcf
created by net/http.setRequestCancel
/opt/go/src/net/http/client.go:320 +0x24e
It looks like the line below is the main cause:
[signal SIGSEGV: segmentation violation code=0x1 addr=0x63 pc=0x7f12f457d259]
I have tried:
- deleting and reinstalling everything
- shutting down the pre-existing network (https://hyperledger-fabric.readthedocs.io/en/release-1.4/write_first_app.html#set-up-the-blockchain-network)
but the problem remained the same.
That is everything I've got. Can someone please let me know what the problem is and how to fix it?

You can try setting the following environment variable on the peer: GODEBUG=netdns=go
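The stack trace above crashes inside the cgo DNS resolver (net._C2func_getaddrinfo), and GODEBUG=netdns=go tells the Go runtime to use its pure-Go resolver instead. A minimal sketch of where the variable could go, assuming the basic-network docker-compose.yml used by the tutorial (service name taken from the output above; the existing environment entries are unchanged):

peer0.org1.example.com:
  environment:
    # force the pure-Go DNS resolver to avoid the cgo getaddrinfo crash
    - GODEBUG=netdns=go
    # ... keep the existing CORE_PEER_* variables as they are

After editing, re-run ./startFabric.sh javascript so docker-compose recreates the peer container with the new environment.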

Related

Hyperledger Fabric | Orderer PODs keeps restarting | Error: Was the raft log corrupted, truncated, or lost?

I'm running a Hyperledger Fabric network in an Azure Kubernetes cluster. I'm using a single Azure Files volume (1000 GB) as my persistent volume.
However, my Orderer POD keeps restarting over and over again.
The Orderer POD is logging the following error:
2022-02-13 04:40:22.342 UTC 0080 PANI [orderer.consensus.etcdraft] commitTo -> tocommit(8) is out of range [lastIndex(5)]. Was the raft log corrupted, truncated, or lost? channel=system-channel node=3
panic: tocommit(8) is out of range [lastIndex(5)]. Was the raft log corrupted, truncated, or lost?
Following are the detailed logs from the Orderer POD:
2022-02-13 04:40:22.342 UTC 007f INFO [orderer.consensus.etcdraft] becomeFollower -> 3 became follower at term 2 channel=system-channel node=3
2022-02-13 04:40:22.342 UTC 0080 PANI [orderer.consensus.etcdraft] commitTo -> tocommit(8) is out of range [lastIndex(5)]. Was the raft log corrupted, truncated, or lost? channel=system-channel node=3
panic: tocommit(8) is out of range [lastIndex(5)]. Was the raft log corrupted, truncated, or lost?
go.uber.org/zap.(*SugaredLogger).log(0xc000332e20, 0xf0000000000004, 0x10b467e, 0x5d, 0xc000533b00, 0x2, 0x2, 0x0, 0x0, 0x0)
/go/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
go.uber.org/zap.(*SugaredLogger).Panicf(...)
/go/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc000332e28, 0x10b467e, 0x5d, 0xc000533b00, 0x2, 0x2)
/go/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x7c
go.etcd.io/etcd/raft.(*raftLog).commitTo(0xc0001a4310, 0x8)
/go/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/log.go:203 +0x135
go.etcd.io/etcd/raft.(*raft).handleHeartbeat(0xc000aaab40, 0x8, 0x3, 0x1, 0x2, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:1324 +0x54
go.etcd.io/etcd/raft.stepFollower(0xc000aaab40, 0x8, 0x3, 0x1, 0x2, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:1269 +0x439
go.etcd.io/etcd/raft.(*raft).Step(0xc000aaab40, 0x8, 0x3, 0x1, 0x2, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:971 +0x1235
go.etcd.io/etcd/raft.(*node).run(0xc00007e660, 0xc000aaab40)
/go/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/node.go:357 +0xd78
created by go.etcd.io/etcd/raft.StartNode
/go/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/node.go:233 +0x409
2022-02-13 04:45:28.330 UTC 0001 WARN [localconfig] completeInitialization -> General.GenesisFile should be replaced by General.BootstrapFile
2022-02-13 04:45:28.331 UTC 0002 INFO [localconfig] completeInitialization -> Kafka.Version unset, setting to 0.10.2.0
2022-02-13 04:45:28.331 UTC 0003 INFO [orderer.common.server] prettyPrintStruct -> Orderer config values:
General.ListenAddress = "0.0.0.0"
General.ListenPort = 7050
General.TLS.Enabled = true
General.TLS.PrivateKey = "/organizations/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/server.key"
General.TLS.Certificate = "/organizations/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/server.crt"
General.TLS.RootCAs = [/organizations/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/ca.crt]
General.TLS.ClientAuthRequired = false
General.TLS.ClientRootCAs = []
General.TLS.TLSHandshakeTimeShift = 0s
General.Cluster.ListenAddress = ""
General.Cluster.ListenPort = 0
General.Cluster.ServerCertificate = ""
General.Cluster.ServerPrivateKey = ""
General.Cluster.ClientCertificate = "/organizations/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/server.crt"
General.Cluster.ClientPrivateKey = "/organizations/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/server.key"
General.Cluster.RootCAs = []
General.Cluster.DialTimeout = 5s
General.Cluster.RPCTimeout = 7s
General.Cluster.ReplicationBufferSize = 20971520
General.Cluster.ReplicationPullTimeout = 5s
General.Cluster.ReplicationRetryTimeout = 5s
General.Cluster.ReplicationBackgroundRefreshInterval = 5m0s
General.Cluster.ReplicationMaxRetries = 12
General.Cluster.SendBufferSize = 10
General.Cluster.CertExpirationWarningThreshold = 168h0m0s
General.Cluster.TLSHandshakeTimeShift = 0s
General.Keepalive.ServerMinInterval = 1m0s
General.Keepalive.ServerInterval = 2h0m0s
General.Keepalive.ServerTimeout = 20s
General.ConnectionTimeout = 0s
General.GenesisMethod = "file"
General.GenesisFile = "/system-genesis-block/genesis.block"
Kafka.Retry.LongTotal = 12h0m0s
Kafka.Retry.NetworkTimeouts.DialTimeout = 10s
Kafka.Retry.NetworkTimeouts.ReadTimeout = 10s
Kafka.Retry.NetworkTimeouts.WriteTimeout = 10s
Kafka.Retry.Metadata.RetryMax = 3
Kafka.Retry.Metadata.RetryBackoff = 250ms
Kafka.Retry.Producer.RetryMax = 3
Kafka.Retry.Producer.RetryBackoff = 100ms
Kafka.Retry.Consumer.RetryBackoff = 2s
Kafka.Verbose = false
Kafka.Version = 0.10.2.0
Kafka.TLS.Enabled = false
Kafka.TLS.PrivateKey = ""
Kafka.TLS.Certificate = ""
Kafka.TLS.RootCAs = []
Kafka.TLS.ClientAuthRequired = false
Kafka.TLS.ClientRootCAs = []
Kafka.TLS.TLSHandshakeTimeShift = 0s
Kafka.SASLPlain.Enabled = false
Kafka.SASLPlain.User = ""
Kafka.SASLPlain.Password = ""
Kafka.Topic.ReplicationFactor = 3
Debug.BroadcastTraceDir = ""
Debug.DeliverTraceDir = ""
Consensus = map[SnapDir:/var/hyperledger/production/orderer/etcdraft/snapshot WALDir:/var/hyperledger/production/orderer/etcdraft/wal]
Operations.ListenAddress = "127.0.0.1:8443"
Operations.TLS.Enabled = false
Operations.TLS.PrivateKey = ""
Operations.TLS.Certificate = ""
Operations.TLS.RootCAs = []
Operations.TLS.ClientAuthRequired = false
Operations.TLS.ClientRootCAs = []
Operations.TLS.TLSHandshakeTimeShift = 0s
Metrics.Provider = "disabled"
Metrics.Statsd.Network = "udp"
Metrics.Statsd.Address = "127.0.0.1:8125"
Metrics.Statsd.WriteInterval = 30s
Metrics.Statsd.Prefix = ""
ChannelParticipation.Enabled = false
ChannelParticipation.MaxRequestBodySize = 1048576
Admin.ListenAddress = "127.0.0.1:9443"
Admin.TLS.Enabled = false
Admin.TLS.PrivateKey = ""
Admin.TLS.Certificate = ""
Admin.TLS.RootCAs = []
Admin.TLS.ClientAuthRequired = true
Admin.TLS.ClientRootCAs = []
Admin.TLS.TLSHandshakeTimeShift = 0s
2022-02-13 04:45:28.773 UTC 0004 INFO [orderer.common.server] initializeServerConfig -> Starting orderer with TLS enabled
2022-02-13 04:45:28.822 UTC 0005 INFO [blkstorage] NewProvider -> Creating new file ledger directory at /var/hyperledger/production/orderer/chains
2022-02-13 04:45:28.870 UTC 0006 INFO [orderer.common.server] Main -> Bootstrapping the system channel
2022-02-13 04:45:28.880 UTC 0007 INFO [blkstorage] newBlockfileMgr -> Getting block information from block storage
2022-02-13 04:45:28.920 UTC 0008 INFO [orderer.common.server] initializeBootstrapChannel -> Initialized the system channel 'system-channel' from bootstrap block
2022-02-13 04:45:28.923 UTC 0009 INFO [orderer.common.server] extractSystemChannel -> Found system channel config block, number: 0
2022-02-13 04:45:28.923 UTC 000a INFO [orderer.common.server] selectClusterBootBlock -> Cluster boot block is bootstrap (genesis) block; Blocks Header.Number system-channel=0, bootstrap=0
2022-02-13 04:45:28.926 UTC 000b INFO [orderer.common.server] Main -> Starting with system channel: system-channel, consensus type: etcdraft
2022-02-13 04:45:28.926 UTC 000c INFO [orderer.common.server] Main -> Setting up cluster
2022-02-13 04:45:28.926 UTC 000d INFO [orderer.common.server] reuseListener -> Cluster listener is not configured, defaulting to use the general listener on port 7050
2022-02-13 04:45:28.959 UTC 000e INFO [orderer.common.cluster] loadVerifier -> Loaded verifier for channel system-channel from config block at index 0
2022-02-13 04:45:28.960 UTC 000f INFO [certmonitor] trackCertExpiration -> The enrollment certificate will expire on 2023-02-10 09:12:00 +0000 UTC
2022-02-13 04:45:28.960 UTC 0010 INFO [certmonitor] trackCertExpiration -> The server TLS certificate will expire on 2023-02-10 09:13:00 +0000 UTC
2022-02-13 04:45:28.960 UTC 0011 INFO [certmonitor] trackCertExpiration -> The client TLS certificate will expire on 2023-02-10 09:13:00 +0000 UTC
2022-02-13 04:45:28.966 UTC 0012 INFO [orderer.consensus.etcdraft] HandleChain -> EvictionSuspicion not set, defaulting to 10m0s
2022-02-13 04:45:28.966 UTC 0013 INFO [orderer.consensus.etcdraft] HandleChain -> With system channel: after eviction InactiveChainRegistry.TrackChain will be called
2022-02-13 04:45:28.966 UTC 0014 INFO [orderer.consensus.etcdraft] createOrReadWAL -> No WAL data found, creating new WAL at path '/var/hyperledger/production/orderer/etcdraft/wal/system-channel' channel=system-channel node=3
2022-02-13 04:45:28.983 UTC 0015 INFO [orderer.commmon.multichannel] initSystemChannel -> Starting system channel 'system-channel' with genesis block hash ac270210ec2258b99948adc06552a14d49463c3457933b1e24a151502c6487e5 and orderer type etcdraft
2022-02-13 04:45:28.983 UTC 0016 INFO [orderer.consensus.etcdraft] Start -> Starting Raft node channel=system-channel node=3
2022-02-13 04:45:28.983 UTC 0017 INFO [orderer.common.cluster] Configure -> Entering, channel: system-channel, nodes: [ID: 1,
Are there any config/env vars we can set in configtx.yaml or orderer-deployment.yaml to avoid this problem?
Are there any timeouts etc. we can increase or set in configtx.yaml or orderer-deployment.yaml to avoid it?
One more observation: when we delete one or two Orderer PODs manually and new Orderer PODs spin up in their place automatically, they also keep restarting with the same error.
It turns out my WAL logs directory was deleted. For anyone landing on this question: please set the following ENV variables (if not already set) on your Orderer deployments:
- name: ORDERER_CONSENSUS_SNAPDIR
value: /var/hyperledger/production/orderer/etcdraft/snapshot
- name: ORDERER_CONSENSUS_WALDIR
value: /var/hyperledger/production/orderer/etcdraft/wal
If you already have the above ENV variables set on your Orderer deployments, then check whether those directories are empty for the restarting Orderers. If they are empty, it means your WAL logs were deleted; the Orderer can't recover if the WAL logs are not available.
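A quick way to verify this from outside the container, sketched under the assumption that kubectl can reach the POD and the paths match the env variables above (the POD name is a placeholder):

# list the Raft WAL and snapshot directories inside the orderer POD
kubectl exec <orderer-pod-name> -- ls -la /var/hyperledger/production/orderer/etcdraft/wal
kubectl exec <orderer-pod-name> -- ls -la /var/hyperledger/production/orderer/etcdraft/snapshot

If the per-channel WAL directory exists but is empty for a channel that previously had data, the Raft state is gone and the Orderer will panic exactly as shown above.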

Bluetooth disconnects just after connecting on Ubuntu 20.04

I'm trying to build my own photobooth using an instant photo printer that connects over Bluetooth and prints via dye sublimation. To do that, the BT connection needs to be implemented in Python or Node.js code; basically, I'm planning to just utilize system commands.
So far, however, I have tested a Polaroid Hi-Print and a Kodak P300R, and neither of them even stays connected. Specifically, the connection is terminated by the remote user (device) just after connecting. (Actually, they only officially support mobile platforms like Android and iOS.)
Now I'm not even sure this is possible. Could you help me get through it?
Here are the btmon logs of a connection attempt using "hcitool cc [btaddr]":
# RAW Open: hcitool (privileged) version 2.22 {0x0002} 3407.844746
# RAW Close: hcitool {0x0002} 3407.844761
# RAW Open: hcitool (privileged) version 2.22 {0x0002} [hci0] 3407.844771
< HCI Command: Create Connection (0x01|0x0005) plen 13 #196 [hci0] 3407.844787
Address: 00:15:83:41:DB:94 (IVT corporation)
Packet type: 0xcc18
DM1 may be used
DH1 may be used
DM3 may be used
DH3 may be used
DM5 may be used
DH5 may be used
Page scan repetition mode: R2 (0x02)
Page scan mode: Mandatory (0x00)
Clock offset: 0x0000
Role switch: Allow slave (0x01)
> HCI Event: Command Status (0x0f) plen 4 #197 [hci0] 3407.982359
Create Connection (0x01|0x0005) ncmd 2
Status: Success (0x00)
> HCI Event: Role Change (0x12) plen 8 #198 [hci0] 3408.627344
Status: Success (0x00)
Address: 00:15:83:41:DB:94 (IVT corporation)
Role: Slave (0x01)
> HCI Event: Connect Complete (0x03) plen 11 #199 [hci0] 3408.633340
Status: Success (0x00)
Handle: 3
Address: 00:15:83:41:DB:94 (IVT corporation)
Link type: ACL (0x01)
Encryption: Disabled (0x00)
# RAW Close: hcitool {0x0002} [hci0] 3408.633412
< HCI Command: Read Remote Supp.. (0x01|0x001b) plen 2 #200 [hci0] 3408.633427
Handle: 3
> HCI Event: Command Status (0x0f) plen 4 #201 [hci0] 3408.637320
Read Remote Supported Features (0x01|0x001b) ncmd 2
Status: Success (0x00)
> HCI Event: Max Slots Change (0x1b) plen 3 #202 [hci0] 3408.638316
Handle: 3
Max slots: 5
> HCI Event: Max Slots Change (0x1b) plen 3 #203 [hci0] 3408.644343
Handle: 3
Max slots: 5
> HCI Event: Read Remote Supported Fe.. (0x0b) plen 11 #204 [hci0] 3408.646313
Status: Success (0x00)
Handle: 3
Features: 0xff 0xff 0xc9 0xfa 0x83 0xa7 0x79 0x87
3 slot packets
5 slot packets
Encryption
Slot offset
Timing accuracy
Role switch
Hold mode
Sniff mode
Park state
Power control requests
Channel quality driven data rate (CQDDR)
SCO link
HV2 packets
HV3 packets
u-law log synchronous data
A-law log synchronous data
CVSD synchronous data
Transparent synchronous data
Flow control lag (most significant bit)
Broadcast Encryption
Enhanced Data Rate ACL 2 Mbps mode
Enhanced inquiry scan
Interlaced inquiry scan
Interlaced page scan
RSSI with inquiry results
Extended SCO link (EV3 packets)
EV4 packets
EV5 packets
3-slot Enhanced Data Rate ACL packets
5-slot Enhanced Data Rate ACL packets
Sniff subrating
Pause encryption
Enhanced Data Rate eSCO 2 Mbps mode
3-slot Enhanced Data Rate eSCO packets
Extended Inquiry Response
Secure Simple Pairing
Encapsulated PDU
Erroneous Data Reporting
Non-flushable Packet Boundary Flag
Link Supervision Timeout Changed Event
Inquiry TX Power Level
Enhanced Power Control
Extended features
< HCI Command: Read Remote Exte.. (0x01|0x001c) plen 3 #205 [hci0] 3408.646327
Handle: 3
Page: 1
> HCI Event: Command Status (0x0f) plen 4 #206 [hci0] 3408.647314
Read Remote Extended Features (0x01|0x001c) ncmd 2
Status: Success (0x00)
> HCI Event: Read Remote Extended Fea.. (0x23) plen 13 #207 [hci0] 3408.678320
Status: Success (0x00)
Handle: 3
Page: 1/1
Features: 0x01 0x00 0x00 0x00 0x00 0x00 0x00 0x00
Secure Simple Pairing (Host Support)
< HCI Command: Remote Name Req.. (0x01|0x0019) plen 10 #208 [hci0] 3408.678378
Address: 00:15:83:41:DB:94 (IVT corporation)
Page scan repetition mode: R2 (0x02)
Page scan mode: Mandatory (0x00)
Clock offset: 0x0000
< ACL Data TX: Handle 3 flags 0x00 dlen 10 #209 [hci0] 3408.678386
L2CAP: Information Request (0x0a) ident 1 len 2
Type: Extended features supported (0x0002)
> HCI Event: Command Status (0x0f) plen 4 #210 [hci0] 3408.680339
Remote Name Request (0x01|0x0019) ncmd 2
Status: Success (0x00)
> HCI Event: Number of Completed Packets (0x13) plen 5 #211 [hci0] 3408.706320
Num handles: 1
Handle: 3
Count: 1
> ACL Data RX: Handle 3 flags 0x02 dlen 16 #212 [hci0] 3408.708440
L2CAP: Information Response (0x0b) ident 1 len 8
Type: Extended features supported (0x0002)
Result: Success (0x0000)
Features: 0x00000080
Fixed Channels
< ACL Data TX: Handle 3 flags 0x00 dlen 10 #213 [hci0] 3408.708483
L2CAP: Information Request (0x0a) ident 2 len 2
Type: Fixed channels supported (0x0003)
> HCI Event: Number of Completed Packets (0x13) plen 5 #214 [hci0] 3408.712315
Num handles: 1
Handle: 3
Count: 1
> ACL Data RX: Handle 3 flags 0x02 dlen 20 #215 [hci0] 3408.714439
L2CAP: Information Response (0x0b) ident 2 len 12
Type: Fixed channels supported (0x0003)
Result: Success (0x0000)
Channels: 0x0000000000000002
L2CAP Signaling (BR/EDR)
> HCI Event: Remote Name Req Complete (0x07) plen 255 #216 [hci0] 3408.733311
Status: Success (0x00)
Address: 00:15:83:41:DB:94 (IVT corporation)
Name: Hi-Print 2×3 - DB94
# MGMT Event: Device Connected (0x000b) plen 35 {0x0003} [hci0] 3408.733351
BR/EDR Address: 00:15:83:41:DB:94 (IVT corporation)
Flags: 0x00000000
Data length: 22
Name (complete): Hi-Print 2×3 - DB94
# MGMT Event: Device Connected (0x000b) plen 35 {0x0001} [hci0] 3408.733351
BR/EDR Address: 00:15:83:41:DB:94 (IVT corporation)
Flags: 0x00000000
Data length: 22
Name (complete): Hi-Print 2×3 - DB94
< HCI Command: Disconnect (0x01|0x0006) plen 3 #217 [hci0] 3410.692257
Handle: 3
Reason: Remote User Terminated Connection (0x13)
> HCI Event: Command Status (0x0f) plen 4 #218 [hci0] 3410.693261
Disconnect (0x01|0x0006) ncmd 2
Status: Success (0x00)
> HCI Event: Disconnect Complete (0x05) plen 4 #219 [hci0] 3410.790256
Status: Success (0x00)
Handle: 3
Reason: Connection Terminated By Local Host (0x16)
# MGMT Event: Device Disconnected (0x000c) plen 8 {0x0003} [hci0] 3410.790295
BR/EDR Address: 00:15:83:41:DB:94 (IVT corporation)
Reason: Connection terminated by local host (0x02)
# MGMT Event: Device Disconnected (0x000c) plen 8 {0x0001} [hci0] 3410.790295
BR/EDR Address: 00:15:83:41:DB:94 (IVT corporation)
Reason: Connection terminated by local host (0x02)
Remove the device (previously paired) so it disappears from the list.
Then make a fresh connection and it will work, as in the sketch below.
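A minimal sketch of that sequence with bluetoothctl (the address is taken from the btmon log above; the interactive prompts are illustrative):

$ bluetoothctl
[bluetooth]# remove 00:15:83:41:DB:94
[bluetooth]# scan on
[bluetooth]# pair 00:15:83:41:DB:94
[bluetooth]# connect 00:15:83:41:DB:94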

Error creating consenter: failed to restore persisted raft data: failed to create or read WAL: failed to open WAL: fileutil: file already locked

Orderers are failing when trying to create a channel.
Setup:
Three Orderers | Tried with 5 orderers also
kubernetes
Raft Consensus
1.4.3 & 1.4.1
It's working perfectly with Docker Swarm.
Below is the error log from one of the orderers:
2019-09-04 13:02:11.488 UTC [orderer.consensus.etcdraft] HandleChain -> INFO 079 EvictionSuspicion not set, defaulting to 10m0s
2019-09-04 13:02:11.489 UTC [orderer.consensus.etcdraft] createOrReadWAL -> INFO 07a Found WAL data at path '/var/hyperledger/production/orderer/etcdraft/wal/nath41channel', replaying it channel=nath41channel node=2
2019-09-04 13:02:11.489 UTC [orderer.commmon.multichannel] newChainSupport -> PANI 07b [channel: nath41channel] Error creating consenter: failed to restore persisted raft data: failed to create or read WAL: failed to open WAL: fileutil: file already locked
panic: [channel: nath41channel] Error creating consenter: failed to restore persisted raft data: failed to create or read WAL: failed to open WAL: fileutil: file already locked
goroutine 86 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc00018bce0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x515
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc000134280, 0x4, 0x1040dcc, 0x2a, 0xc000721548, 0x2, 0x2, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc000134280, 0x1040dcc, 0x2a, 0xc000721548, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc000134288, 0x1040dcc, 0x2a, 0xc000721548, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
github.com/hyperledger/fabric/orderer/common/multichannel.newChainSupport(0xc000170000, 0xc00028f5e0, 0xc0004ef260, 0x1145580, 0x1b8f970, 0xc0004ed3a0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/chainsupport.go:74 +0x710
github.com/hyperledger/fabric/orderer/common/multichannel.(*Registrar).newChain(0xc000170000, 0xc0008dc870)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/registrar.go:327 +0x1df
github.com/hyperledger/fabric/orderer/common/multichannel.(*BlockWriter).WriteConfigBlock(0xc0005a8000, 0xc000483940, 0xc0008e3360, 0xb, 0xb)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/blockwriter.go:118 +0x2f3
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).writeConfigBlock(0xc0001f8f00, 0xc000483940, 0x7)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:1266 +0x1b4
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).writeBlock(0xc0001f8f00, 0xc000483940, 0x7)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:839 +0x18f
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).apply(0xc0001f8f00, 0xc0004ea240, 0x3, 0x4)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:1030 +0x250
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).serveRequest(0xc0001f8f00)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:748 +0x954
created by github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*Chain).Start
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:336 +0x1e0
2019-09-04 19:12:24.951 UTC [orderer.commmon.multichannel] commitBlock -> PANI 03c [channel: rak25syschannel] Could not append block: unexpected Previous block hash. Expected PreviousHash = [99f567ec6a4f92583076be9d414c47f990559a0f5f24bd0273ba13bbfefd60f8], PreviousHash referred in the latest block= [d1507d8cf004d1dd7cd7940eb3c0c314fd82dcafd1e6edf784df3893cc938a64]
panic: [channel: rak25syschannel] Could not append block: unexpected Previous block hash. Expected PreviousHash = [99f567ec6a4f92583076be9d414c47f990559a0f5f24bd0273ba13bbfefd60f8], PreviousHash referred in the latest block= [d1507d8cf004d1dd7cd7940eb3c0c314fd82dcafd1e6edf784df3893cc938a64]
Complete log of one of the orderers: http://ideone.com/TidhFt
I have found the solution!
It's because I was doing automation: the three orderers each generated their own genesis block using the supporting tools.
Even though the configuration is identical, we shouldn't do this: with multiple genesis blocks the chain starts forking, because that's the way the Hyperledger Fabric protocol is designed. Generate the genesis block once and share it, as sketched below.
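A hedged sketch of that fix: run configtxgen exactly once and distribute the same block to every orderer (the profile and channel names here are placeholders, not taken from the question):

# run once, on one machine only
configtxgen -profile <YourOrdererGenesisProfile> -channelID <system-channel-name> -outputBlock ./genesis.block
# then mount or copy this single genesis.block into all orderer containers/pods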

Torque Job Stuck with mpirun

I ran the following PBS script to run a job with my newly configured Torque:
#!/bin/sh
#PBS -N asyn
#PBS -q batch
#PBS -l nodes=2:ppn=2
#PBS -l walltime=120:00:00
cd $PBS_O_WORKDIR
cat $PBS_NODEFILE>nodes
mpirun -np 4 gmx_mpi mdrun -deffnm asyn_10ns
It gets stuck with an R status and the time remains 00:00:00. When I check tracejob, it gives me the following details:
[root@headnode ~]# tracejob 11
/var/spool/torque/mom_logs/20170825: No such file or directory
Job: 11.headnode
08/25/2017 13:49:31.230 S enqueuing into batch, state 1 hop 1
08/25/2017 13:49:31.360 S Job Modified at request of root@headnode
08/25/2017 13:49:31.373 L Job Run
08/25/2017 13:49:31.361 S Job Run at request of root@headnode
08/25/2017 13:49:31.374 S Not sending email: User does not want mail of this type.
08/25/2017 13:49:31 A queue=batch
08/25/2017 13:49:31 A user=souparno group=souparno jobname=asyn queue=batch ctime=1503649171 qtime=1503649171 etime=1503649171
start=1503649171 owner=souparno@headnode exec_host=headnode3/0-1+headnode2/0-1 Resource_List.nodes=2:ppn=2
Resource_List.walltime=120:00:00 Resource_List.nodect=2 Resource_List.neednodes=2:ppn=2
The sched_log is giving me the following details:
08/25/2017 13:49:31.373;64; pbs_sched.25166;Job;11.headnode;Job Run
The server_log gives the following output:
08/25/2017 13:49:31.230;256;PBS_Server.25216;Job;11.headnode;enqueuing into batch, state 1 hop 1
08/25/2017 13:49:31.230;08;PBS_Server.25216;Job;perform_commit_work;job_id: 11.headnode
08/25/2017 13:49:31.230;02;PBS_Server.25216;node;close_conn;Closing connection 8 and calling its accompanying function on close
08/25/2017 13:49:31.360;08;PBS_Server.25134;Job;11.headnode;Job Modified at request of root@headnode
08/25/2017 13:49:31.361;08;PBS_Server.25134;Job;11.headnode;Job Run at request of root@headnode
08/25/2017 13:49:31.374;13;PBS_Server.25134;Job;11.headnode;Not sending email: User does not want mail of this type.
08/25/2017 13:50:59.137;02;PBS_Server.25119;Svr;PBS_Server;Torque Server Version = 6.1.1.1, loglevel = 0
What could be the possible reason the job is stuck? As for the file "nodes": it is not created either.
pbsnodes -a gives the following output:
[root@headnode ~]# pbsnodes -a
headnode2
state = free
power_state = Running
np = 22
ntype = cluster
jobs = 0-1/18.headnode
status = opsys=linux,uname=Linux headnode2 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64,sessions=2406 3695 3699 3701 3731 3733 3757,nsessions=7,nusers=2,idletime=2901,totmem=82448372kb,availmem=80025348kb,physmem=49401852kb,ncpus=24,loadave=23.00,gres=,netload=1677000736,state=free,varattr= ,cpuclock=OnDemand:2301MHz,macaddr=34:40:b5:e5:4a:fa,version=6.1.1.1,rectime=1503919171,jobs=18.headnode
mom_service_port = 15002
mom_manager_port = 15003
headnode3
state = free
power_state = Running
np = 22
ntype = cluster
jobs = 0-1/18.headnode
status = opsys=linux,uname=Linux headnode3 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64,sessions=3570 3574 3576 3602 3604 3628 3803 32545,nsessions=8,nusers=3,idletime=882,totmem=98996200kb,availmem=97047600kb,physmem=65949680kb,ncpus=24,loadave=16.00,gres=,netload=1740623635,state=free,varattr= ,cpuclock=OnDemand:2301MHz,macaddr=34:40:b5:e5:43:52,version=6.1.1.1,rectime=1503919176,jobs=18.headnode
mom_service_port = 15002
mom_manager_port = 15003
headnode4
state = free
power_state = Running
np = 22
ntype = cluster
status = opsys=linux,uname=Linux headnode4 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64,sessions=3592 4057 27119,nsessions=3,nusers=1,idletime=73567,totmem=98991080kb,availmem=96941208kb,physmem=65944560kb,ncpus=24,loadave=23.99,gres=,netload=727722516,state=free,varattr= ,cpuclock=OnDemand:2200MHz,macaddr=34:40:b5:e5:49:8a,version=6.1.1.1,rectime=1503919177,jobs=
mom_service_port = 15002
mom_manager_port = 15003
headnode5
state = free
power_state = Running
np = 22
ntype = cluster
status = opsys=linux,uname=Linux headnode5 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64,sessions=17666,nsessions=1,nusers=1,idletime=2897,totmem=74170352kb,availmem=71840968kb,physmem=49397752kb,ncpus=24,loadave=23.04,gres=,netload=5756452931,state=free,varattr= ,cpuclock=OnDemand:2200MHz,macaddr=34:40:b5:e5:4a:a2,version=6.1.1.1,rectime=1503919174,jobs=
mom_service_port = 15002
mom_manager_port = 15003
headnode6
state = free
power_state = Running
np = 22
ntype = cluster
status = opsys=linux,uname=Linux headnode6 2.6.32-358.el6.x86_64 #1 SMP Tue Jan 29 11:47:41 EST 2013 x86_64,sessions=3678 24197,nsessions=2,nusers=1,idletime=70315,totmem=98991080kb,availmem=97279540kb,physmem=65944560kb,ncpus=24,loadave=16.00,gres=,netload=711846161,state=free,varattr= ,cpuclock=OnDemand:2200MHz,macaddr=34:40:b5:e5:44:52,version=6.1.1.1,rectime=1503919171,jobs=
mom_service_port = 15002
mom_manager_port = 15003
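For a job that sits in R with 00:00:00 walltime, one hedged next step is to query the pbs_mom daemons on the allocated nodes directly, since the MOM logs live on the compute nodes rather than on the server (which is why tracejob reported mom_logs/20170825: No such file or directory above). A sketch, assuming standard Torque paths and that momctl is installed:

# from the server, ask each allocated MOM for its view of the job
momctl -d 3 -h headnode3
momctl -d 3 -h headnode2
# or, on a compute node, follow its local MOM log for today
tail -f /var/spool/torque/mom_logs/$(date +%Y%m%d)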

Ceph OSD always 'down' in Ubuntu 14.04.1

I am trying to install and deploy a Ceph cluster. As I don't have enough physical servers, I created 4 VMs on my OpenStack using the official Ubuntu 14.04 image. I want to deploy a cluster with 1 monitor node and 3 OSD nodes running ceph version 0.80.7-0ubuntu0.14.04.1. I followed the steps from the manual deployment document and successfully installed the monitor node. However, after installing the OSD nodes, it seems that the OSD daemons are running but not correctly reporting to the monitor node. The osd tree always shows the OSDs as down when I run ceph --cluster cephcluster1 osd tree.
Following are the commands and corresponding results that may be related to my problem.
root@monitor:/home/ubuntu# ceph --cluster cephcluster1 osd tree
# id weight type name up/down reweight
-1 3 root default
-2 1 host osd1
0 1 osd.0 down 1
-3 1 host osd2
1 1 osd.1 down 1
-4 1 host osd3
2 1 osd.2 down 1
root@monitor:/home/ubuntu# ceph --cluster cephcluster1 -s
cluster fd78cbf8-8c64-4b12-9cfa-0e75bc6c8d98
health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean; 3/3 in osds are down
monmap e1: 1 mons at {monitor=172.26.111.4:6789/0}, election epoch 1, quorum 0 monitor
osdmap e21: 3 osds: 0 up, 3 in
pgmap v22: 192 pgs, 3 pools, 0 bytes data, 0 objects
0 kB used, 0 kB / 0 kB avail
192 creating
The configuration file /etc/ceph/cephcluster1.conf on all nodes:
[global]
fsid = fd78cbf8-8c64-4b12-9cfa-0e75bc6c8d98
mon initial members = monitor
mon host = 172.26.111.4
public network = 10.5.0.0/16
cluster network = 172.26.111.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
filestore xattr use omap = true
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
[osd]
osd journal size = 1024
[osd.0]
osd host = osd1
[osd.1]
osd host = osd2
[osd.2]
osd host = osd3
Logs when I start one of the osd daemons through start ceph-osd cluster=cephcluster1 id=x where x is the OSD ID:
/var/log/ceph/cephcluster1-osd.0.log on the OSD node #1:
2015-02-11 09:59:56.626899 7f5409d74800 0 ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3), process ceph-osd, pid 11230
2015-02-11 09:59:56.646218 7f5409d74800 0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: FIEMAP ioctl is supported and appears to work
2015-02-11 09:59:56.646372 7f5409d74800 0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2015-02-11 09:59:56.658227 7f5409d74800 0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2015-02-11 09:59:56.679515 7f5409d74800 0 filestore(/var/lib/ceph/osd/cephcluster1-0) limited size xattrs
2015-02-11 09:59:56.699721 7f5409d74800 0 filestore(/var/lib/ceph/osd/cephcluster1-0) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2015-02-11 09:59:56.700107 7f5409d74800 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2015-02-11 09:59:56.700454 7f5409d74800 1 journal _open /var/lib/ceph/osd/cephcluster1-0/journal fd 20: 1073741824 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-02-11 09:59:56.704025 7f5409d74800 1 journal _open /var/lib/ceph/osd/cephcluster1-0/journal fd 20: 1073741824 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-02-11 09:59:56.704884 7f5409d74800 1 journal close /var/lib/ceph/osd/cephcluster1-0/journal
2015-02-11 09:59:56.725281 7f5409d74800 0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: FIEMAP ioctl is supported and appears to work
2015-02-11 09:59:56.725397 7f5409d74800 0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2015-02-11 09:59:56.736445 7f5409d74800 0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2015-02-11 09:59:56.756912 7f5409d74800 0 filestore(/var/lib/ceph/osd/cephcluster1-0) limited size xattrs
2015-02-11 09:59:56.776471 7f5409d74800 0 filestore(/var/lib/ceph/osd/cephcluster1-0) mount: WRITEAHEAD journal mode explicitly enabled in conf
2015-02-11 09:59:56.776748 7f5409d74800 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2015-02-11 09:59:56.776848 7f5409d74800 1 journal _open /var/lib/ceph/osd/cephcluster1-0/journal fd 21: 1073741824 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-02-11 09:59:56.777069 7f5409d74800 1 journal _open /var/lib/ceph/osd/cephcluster1-0/journal fd 21: 1073741824 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-02-11 09:59:56.783019 7f5409d74800 0 <cls> cls/hello/cls_hello.cc:271: loading cls_hello
2015-02-11 09:59:56.783584 7f5409d74800 0 osd.0 11 crush map has features 1107558400, adjusting msgr requires for clients
2015-02-11 09:59:56.783645 7f5409d74800 0 osd.0 11 crush map has features 1107558400 was 8705, adjusting msgr requires for mons
2015-02-11 09:59:56.783687 7f5409d74800 0 osd.0 11 crush map has features 1107558400, adjusting msgr requires for osds
2015-02-11 09:59:56.783750 7f5409d74800 0 osd.0 11 load_pgs
2015-02-11 09:59:56.783831 7f5409d74800 0 osd.0 11 load_pgs opened 0 pgs
2015-02-11 09:59:56.792167 7f53f9b57700 0 osd.0 11 ignoring osdmap until we have initialized
2015-02-11 09:59:56.792334 7f53f9b57700 0 osd.0 11 ignoring osdmap until we have initialized
2015-02-11 09:59:56.792838 7f5409d74800 0 osd.0 11 done with init, starting boot process
/var/log/ceph/ceph-mon.monitor.log on the monitor node:
2015-02-11 09:59:56.593494 7f24cc41d700 0 mon.monitor@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=osd1", "root=default"], "id": 0, "weight": 0.05} v 0) v1
2015-02-11 09:59:56.593955 7f24cc41d700 0 mon.monitor@0(leader).osd e21 create-or-move crush item name 'osd.0' initial_weight 0.05 at location {host=osd1,root=default}
Any suggestion is appreciated. Many thanks!
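One detail worth checking, offered as an observation rather than a confirmed cause: in the configuration above, mon host = 172.26.111.4 falls inside the cluster network (172.26.111.0/24) rather than the public network (10.5.0.0/16), and OSDs report to the monitor over the public network. A quick sketch to see which addresses the cluster has recorded for each OSD and whether the monitor is reachable from an OSD node:

# on the monitor: show the public/cluster addresses registered for each OSD
ceph --cluster cephcluster1 osd dump | grep "^osd"
# on an OSD node: confirm the monitor port is reachable
nc -zv 172.26.111.4 6789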
