I am trying to use a Raspberry Pi to host Leanote. When I try to run the leanote binary, it keeps giving me this "no reachable servers" error. What could be the possible reason?
Panic: no reachable servers
goroutine 1 [running]:
github.com/leanote/leanote/app/db.Init(0x111a4ab0, 0x21, 0x1103c46a, 0x7)
/Users/life/Documents/Go/package_base/src/github.com/leanote/leanote/app/db/Mgo.go:104 +0x500
github.com/leanote/leanote/app.init.1.func27()
/Users/life/Documents/Go/package_base/src/github.com/leanote/leanote/app/init.go:413 +0x2c
github.com/revel/revel.runStartupHooks()
/Users/life/Documents/Go/package_base/src/github.com/revel/revel/server.go:135 +0x70
github.com/revel/revel.Run(0x1f90)
/Users/life/Documents/Go/package_base/src/github.com/revel/revel/server.go:92 +0x20c
main.main()
/Users/life/leanote2/app/tmp/main.go:2294 +0x4f3c4
goroutine 9 [sleep]:
time.Sleep(0x1dcd6500, 0x0)
/Users/life/app/go1.5.1/src/runtime/time.go:59 +0x104
gopkg.in/mgo%2ev2.(*mongoCluster).syncServersLoop(0x110c50e0)
/Users/life/Documents/Go/package_base/src/gopkg.in/mgo.v2/cluster.go:383 +0x410
created by gopkg.in/mgo%2ev2.newCluster
/Users/life/Documents/Go/package_base/src/gopkg.in/mgo.v2/cluster.go:76 +0x1c4
goroutine 49 [sleep]:
time.Sleep(0x2a05f200, 0x1)
/Users/life/app/go1.5.1/src/runtime/time.go:59 +0x104
gopkg.in/mgo%2ev2.(*mongoServer).pinger(0x111de0a0, 0x1)
/Users/life/Documents/Go/package_base/src/gopkg.in/mgo.v2/server.go:297 +0x180
created by gopkg.in/mgo%2ev2.newServer
/Users/life/Documents/Go/package_base/src/gopkg.in/mgo.v2/server.go:90 +0x140
goroutine 52 [sleep]:
time.Sleep(0x2a05f200, 0x1)
/Users/life/app/go1.5.1/src/runtime/time.go:59 +0x104
gopkg.in/mgo%2ev2.(*mongoServer).pinger(0x10f0c460, 0x1)
/Users/life/Documents/Go/package_base/src/gopkg.in/mgo.v2/server.go:297 +0x180
created by gopkg.in/mgo%2ev2.newServer
/Users/life/Documents/Go/package_base/src/gopkg.in/mgo.v2/server.go:90 +0x140
Sorry, I am new to this XD
Most likely your MongoDB server is not running. Try checking ps and netstat to find out whether mongod is running and listening on an IP and port.
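For example, a minimal check on the Pi might look like this (assuming mongod is installed as a system service and uses the default port 27017; the service and log file names may differ on your distribution):
# is a mongod process running?
ps aux | grep [m]ongod
# is anything listening on the default MongoDB port?
netstat -tlnp | grep 27017    # or: ss -tlnp | grep 27017
# if not, try starting it and watch the log
sudo service mongod start
tail -f /var/log/mongodb/mongod.log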
I just updated Docker Desktop for Windows, and when I run docker compose up --build app to refresh a remote image after changes to files, I get the following error. Every other compose command seems to work fine.
Docker compose version: v2.12.2
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x187d8c6]
goroutine 12 [running]:
github.com/docker/compose/v2/pkg/compose.(*composeService).getDrivers(0xc000399f80, {0x20a2448, 0xc0005d8db0})
github.com/docker/compose/v2/pkg/compose/build_buildkit.go:95 +0xc6
github.com/docker/compose/v2/pkg/compose.(*composeService).doBuildBuildkit(0xc000399f80, {0x20a2448, 0xc0005d8db0}, 0x0?, {0x1d918cd, 0x4})
github.com/docker/compose/v2/pkg/compose/build_buildkit.go:47 +0x87
github.com/docker/compose/v2/pkg/compose.(*composeService).doBuild(0xc000399f80, {0x20a2448, 0xc0005d8db0}, 0xc0005a55e0?, 0xc0004b1ad0, {0x1d918cd, 0x4})
github.com/docker/compose/v2/pkg/compose/build.go:228 +0xc5
github.com/docker/compose/v2/pkg/compose.(*composeService).ensureImagesExists(0x0?, {0x20a2448, 0xc0005d8db0}, 0xc0005a55e0, 0x0)
github.com/docker/compose/v2/pkg/compose/build.go:134 +0x14d
github.com/docker/compose/v2/pkg/compose.(*composeService).create(0xc000399f80?, {0x20a2448, 0xc0005d8db0}, 0xc0005a55e0, {{0xc000178a80, 0x1, 0x2}, 0x0, 0x0, {0x1d97342, ...}, ...})
github.com/docker/compose/v2/pkg/compose/create.go:67 +0x173
github.com/docker/compose/v2/pkg/compose.(*composeService).Up.func1({0x20a2448, 0xc0005d8db0})
github.com/docker/compose/v2/pkg/compose/up.go:36 +0xaa
github.com/docker/compose/v2/pkg/progress.Run.func1({0x20a2448?, 0xc0005d8db0?})
github.com/docker/compose/v2/pkg/progress/writer.go:61 +0x27
github.com/docker/compose/v2/pkg/progress.RunWithStatus.func2()
github.com/docker/compose/v2/pkg/progress/writer.go:82 +0x87
golang.org/x/sync/errgroup.(*Group).Go.func1()
golang.org/x/sync@v0.0.0-20220819030929-7fc1605a5dde/errgroup/errgroup.go:75 +0x64
created by golang.org/x/sync/errgroup.(*Group).Go
golang.org/x/sync@v0.0.0-20220819030929-7fc1605a5dde/errgroup/errgroup.go:72 +0xa5
You have two ways to fix the issue:
run docker compose down (or docker-compose down) and then retry the build,
or uninstall and reinstall Docker.
There is also another way:
stop Docker with "sudo service docker stop",
remove Docker's state directory with "sudo rm -rf /var/lib/docker" (note that this deletes all local images, containers, and volumes),
then start Docker again with "sudo service docker start".
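As a sketch, the full sequence from the second option would look roughly like this on a Linux host. Since the question is about Docker Desktop for Windows, where the closest equivalent is restarting or resetting Docker Desktop itself, the service name and path below are assumptions for a Linux setup:
# non-destructive first step
docker compose down
# full reset of the Docker state directory (destroys all local images, containers, and volumes)
sudo service docker stop
sudo rm -rf /var/lib/docker
sudo service docker start
# then rebuild
docker compose up --build app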
I have set up my core.yaml to use the CouchDB instance on the same node as the peer. I am not using Docker but running the peer command directly on the node.
When running peer node start for the first time, I get the following error:
2020-09-03 17:35:35.619 UTC [kvledger] recoverUnderConstructionLedger -> DEBU 083 Recovering under construction ledger
2020-09-03 17:35:35.619 UTC [kvledger] recoverUnderConstructionLedger -> DEBU 084 No under construction ledger found. Quitting recovery
panic: Error in instantiating ledger provider: sync /vagrant/runtime/peer0/ledger/ledgersData/snapshots: invalid argument
error while synching dir:/vagrant/runtime/peer0/ledger/ledgersData/snapshots
github.com/hyperledger/fabric/core/ledger/kvledger.syncDir
/__w/1/go/src/github.com/hyperledger/fabric/core/ledger/kvledger/snapshot.go:186
github.com/hyperledger/fabric/core/ledger/kvledger.(*Provider).initSnapshotDir
/__w/1/go/src/github.com/hyperledger/fabric/core/ledger/kvledger/kv_ledger_provider.go:266
github.com/hyperledger/fabric/core/ledger/kvledger.NewProvider
/__w/1/go/src/github.com/hyperledger/fabric/core/ledger/kvledger/kv_ledger_provider.go:132
github.com/hyperledger/fabric/core/ledger/ledgermgmt.NewLedgerMgr
/__w/1/go/src/github.com/hyperledger/fabric/core/ledger/ledgermgmt/ledger_mgmt.go:65
github.com/hyperledger/fabric/internal/peer/node.serve
/__w/1/go/src/github.com/hyperledger/fabric/internal/peer/node/start.go:426
github.com/hyperledger/fabric/internal/peer/node.glob..func6
/__w/1/go/src/github.com/hyperledger/fabric/internal/peer/node/start.go:127
github.com/spf13/cobra.(*Command).execute
/__w/1/go/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:762
github.com/spf13/cobra.(*Command).ExecuteC
/__w/1/go/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:852
github.com/spf13/cobra.(*Command).Execute
/__w/1/go/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:800
main.main
/__w/1/go/src/github.com/hyperledger/fabric/cmd/peer/main.go:54
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1373
goroutine 1 [running]:
github.com/hyperledger/fabric/core/ledger/ledgermgmt.NewLedgerMgr(0xc000367968, 0x1b0d960)
/__w/1/go/src/github.com/hyperledger/fabric/core/ledger/ledgermgmt/ledger_mgmt.go:79 +0x782
github.com/hyperledger/fabric/internal/peer/node.serve(0x24a9520, 0x0, 0x0, 0x0, 0x0)
/__w/1/go/src/github.com/hyperledger/fabric/internal/peer/node/start.go:426 +0x1f62
github.com/hyperledger/fabric/internal/peer/node.glob..func6(0x2377120, 0x24a9520, 0x0, 0x0, 0x0, 0x0)
/__w/1/go/src/github.com/hyperledger/fabric/internal/peer/node/start.go:127 +0x9c
github.com/spf13/cobra.(*Command).execute(0x2377120, 0x24a9520, 0x0, 0x0, 0x2377120, 0x24a9520)
/__w/1/go/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:762 +0x453
github.com/spf13/cobra.(*Command).ExecuteC(0x2377840, 0xc0005a5f50, 0x1, 0x1)
/__w/1/go/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:852 +0x2ea
github.com/spf13/cobra.(*Command).Execute(...)
/__w/1/go/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:800
main.main()
/__w/1/go/src/github.com/hyperledger/fabric/cmd/peer/main.go:54 +0x45b
I searched around, but it seems there is not much reference to this particular error.
What am I doing wrong?
Found the solution.
It is the usual Vagrant problem with synced directories on the host (e.g.
https://medium.com/@dtinth/isolating-node-modules-in-vagrant-9e646067b36).
I just had to bind-mount /vagrant/runtime to $HOME/runtime, and the node started just fine.
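For reference, a minimal sketch of that workaround, assuming the goal is to make writes under /vagrant/runtime land on the VM's native filesystem rather than the VirtualBox shared folder (the mount direction and the exact paths are my assumptions based on the paths above):
# a directory on the VM's native filesystem
mkdir -p $HOME/runtime
# bind-mount it over the path the peer is configured to use
sudo mount --bind $HOME/runtime /vagrant/runtime
# then start the peer again
peer node start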
I followed all the steps and the prerequisites in the documentation, but I get stuck at this command: ./startFabric.sh javascript
https://hyperledger-fabric.readthedocs.io/en/release-1.4/write_first_app.html
Can anyone help please?
I tried using an older version, 1.1, and it worked, but I am curious why this one does not work.
I tried creating a channel-artifacts folder with the genesis.block file inside it, but it did not help.
Below is the command-line output when I run the command:
$ ./startFabric.sh javascript
Stopping for channel 'mychannel' with CLI timeout of '10' seconds and CLI delay of '3' seconds
proceeding ...
WARNING: The BYFN_CA1_PRIVATE_KEY variable is not set. Defaulting to a blank string.
WARNING: The BYFN_CA2_PRIVATE_KEY variable is not set. Defaulting to a blank string.
Removing network net_byfn
WARNING: Network net_byfn not found.
Removing volume net_orderer.example.com
WARNING: Volume net_orderer.example.com not found.
Removing volume net_peer0.org1.example.com
WARNING: Volume net_peer0.org1.example.com not found.
Removing volume net_peer1.org1.example.com
WARNING: Volume net_peer1.org1.example.com not found.
Removing volume net_peer0.org2.example.com
WARNING: Volume net_peer0.org2.example.com not found.
Removing volume net_peer1.org2.example.com
WARNING: Volume net_peer1.org2.example.com not found.
Removing volume net_orderer2.example.com
WARNING: Volume net_orderer2.example.com not found.
Removing volume net_orderer3.example.com
WARNING: Volume net_orderer3.example.com not found.
Removing volume net_orderer4.example.com
WARNING: Volume net_orderer4.example.com not found.
Removing volume net_orderer5.example.com
WARNING: Volume net_orderer5.example.com not found.
Removing volume net_peer0.org3.example.com
WARNING: Volume net_peer0.org3.example.com not found.
Removing volume net_peer1.org3.example.com
WARNING: Volume net_peer1.org3.example.com not found.
---- No containers available for deletion ----
---- No images available for deletion ----
Starting for channel 'mychannel' with CLI timeout of '10' seconds and CLI delay of '3' seconds and using database 'couchdb'
proceeding ...
LOCAL_VERSION=1.4.3
DOCKER_IMAGE_VERSION=1.4.3
/c/Users/Marina/Test/fabric-samples/bin/cryptogen
##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
+ cryptogen generate --config=./crypto-config.yaml
org1.example.com
org2.example.com
+ res=0
+ set +x
Generate CCP files for Org1 and Org2
/c/Users/Marina/Test/fabric-samples/bin/configtxgen
##########################################################
######### Generating Orderer Genesis block ##############
##########################################################
CONSENSUS_TYPE=solo
+ '[' solo == solo ']'
+ configtxgen -profile TwoOrgsOrdererGenesis -channelID byfn-sys-channel -outputBlock ./channel-artifacts/genesis.block
2019-09-28 01:46:41.269 EET [common.tools.configtxgen] main -> INFO 001 Loading configuration
2019-09-28 01:46:41.271 EET [common.tools.configtxgen.localconfig] Load -> PANI 002 Error reading configuration: Unsupported Config Type ""
2019-09-28 01:46:41.276 EET [common.tools.configtxgen] func1 -> PANI 003 Error reading configuration: Unsupported Config Type ""
panic: Error reading configuration: Unsupported Config Type "" [recovered]
panic: Error reading configuration: Unsupported Config Type ""
goroutine 1 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0000e9ce0, 0x0, 0x0, 0x0)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x51c
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc000006190, 0xc000095804, 0xc00002c900, 0x38, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xfd
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc000006190, 0xc00002c900, 0x38, 0x0, 0x0, 0x0)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x80
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panic(0xc000006198, 0xc000095908, 0x1, 0x1)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:73 +0x7c
main.main.func1()
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/common/tools/configtxgen/main.go:260 +0x1b0
panic(0xa3c820, 0xc000063f70)
/opt/go/go1.11.5.linux.amd64/src/runtime/panic.go:513 +0x1c7
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0000e9ce0, 0x0, 0x0, 0x0)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x51c
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc000006170, 0xc000095c04, 0xc00002c800, 0x38, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xfd
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc000006170, 0xc00002c800, 0x38, 0x0, 0x0, 0x0)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x80
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panic(0xc000006178, 0xc000095d88, 0x2, 0x2)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:73 +0x7c
github.com/hyperledger/fabric/common/tools/configtxgen/localconfig.Load(0xc00006c0c0, 0x15, 0x0, 0x0, 0x0, 0xc000422380)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/common/tools/configtxgen/localconfig/config.go:276 +0x426
main.main()
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/common/tools/configtxgen/main.go:271 +0xce7
+ res=2
+ set +x
Failed to generate orderer genesis block...
Try defining an environment variable called FABRIC_CFG_PATH and setting it to the folder that contains configtx.yaml (you should have such a folder somewhere in that sample).
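For example, something along these lines before re-running the script (the exact folder is an assumption; in the 1.4 fabric-samples layout the BYFN configtx.yaml normally sits under first-network):
export FABRIC_CFG_PATH=/c/Users/Marina/Test/fabric-samples/first-network
./startFabric.sh javascript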
Most of the time my rsh cycle works fine, and I get the following logs from rshd:
Aug 19 04:36:34 shmm500 authpriv.info in.rshd[21343]: connect from 172.17.0.40 (172.17.0.40)
Aug 19 04:36:34 shmm500 auth.info rshd[21344]: root@172.17.0.40 as root: cmd='echo 481'
But in some error cases the rsh still succeeds, only with a delay of several seconds; see the timestamps below:
Aug 19 04:12:24 shmm500 authpriv.info in.rshd[17968]: connect from 172.17.0.40 (172.17.0.40)
Aug 19 04:12:27 shmm500 auth.info rshd[17972]: root@172.17.0.40 as root: cmd='echo 18'
I also found that in most normal cases the PID increases by 1, while in most error cases it increases by 4 (see the PIDs in the logs above), so it seems rshd forks some processes. Can you offer any explanation for why rshd takes these several seconds and why the PID increases?
Our rsh is the old rsh, not ssh. I'm not sure, but the rsh seems to come from netkit. This is an embedded board running BusyBox, so there is no strace/pstack.
On the client side I just run 'rsh 172.17.0.8 pwd'; no hostname is used.
Answering my own question:
This issue was caused by frame loss. Either the SYN or the SYN+ACK of the 3-way handshake was dropped at a low rate for some reason; in any case, the client side did not receive the SYN+ACK within the 3-second timeout (which is hardcoded in the Linux kernel), so connect() resent the SYN, and that usually succeeded on the second try.
From the application's point of view this shows up as a 3-second delay, or even 6 seconds if the second try fails as well.
Other relevant information:
The first log is from tcpd (a.k.a. TCP Wrapper):
Aug 19 04:36:34 shmm500 authpriv.info in.rshd[21343]: connect from 172.17.0.40 (172.17.0.40)
The second log is from rshd in netkit 0.17:
Aug 19 04:36:34 shmm500 auth.info rshd[21344]: root@172.17.0.40 as root: cmd='echo 481'
rsh needs two TCP connections: the first goes from the rsh client to rshd, and the second goes from rshd back to the rsh client (it carries stderr), which means rshd is the TCP client for that connection. My issue was frame loss on this second connection.
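If you can run tcpdump somewhere on the path, capturing just the SYNs between the two hosts makes both connections and any retransmissions visible; this is only a sketch and the interface name is an assumption:
# every SYN between the server (172.17.0.8) and the client (172.17.0.40);
# a lost handshake shows up as the same SYN repeated about 3 seconds later
tcpdump -n -i eth0 'tcp[tcpflags] & tcp-syn != 0 and host 172.17.0.8 and host 172.17.0.40'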
I have a PPTP server running, and I can connect to it from Linux. When I try from Windows 7 (tested on two machines), it fails. Here is the syslog for such a connection:
pptpd[540]: CTRL: Client 109.xxx.158.201 control connection started
pptpd[540]: CTRL: Starting call (launching pppd, opening GRE)
pppd[541]: Plugin radius.so loaded.
pppd[541]: RADIUS plugin initialized.
pppd[541]: Plugin radattr.so loaded.
pppd[541]: RADATTR plugin initialized.
pppd[541]: pppd 2.4.5 started by root, uid 0
pppd[541]: Using interface ppp0
pppd[541]: Connect: ppp0 <--> /dev/pts/1
pptpd[540]: GRE: Bad checksum from pppd.
pppd[541]: LCP: timeout sending Config-Requests
pppd[541]: Connection terminated.
pppd[541]: Modem hangup
pppd[541]: Exit.
pptpd[540]: GRE: read(fd=6,buffer=6075a0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
pptpd[540]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
pptpd[540]: CTRL: Reaping child PPP[541]
pptpd[540]: CTRL: Client 109.xxx.158.201 control connection finished
I played with the MTU, ranging it from 900 to 1500, with no success. My pptpd options:
name pptpd
refuse-pap
refuse-chap
refuse-mschap
require-mschap-v2
require-mppe-128
proxyarp
nodefaultroute
lock
nobsdcomp
ms-dns 10.10.0.1
noipx
mtu 1404
mru 1404
Remember: the Linux client connects, so the necessary ports and protocols are already open.
tcpdump -i eth0 port 1723 or proto 47 shows the following gist:
https://gist.github.com/ciokan/5595640
where 109.xxx.158.201 is me, the client.
There are no firewalls on the client; everything is disabled. I'm not a network admin and I can't understand jack from that tcpdump. Help! :)