Kafka - TimeoutError: Request timed out after 30000ms - node.js

The Kafka connection times out after 30000 ms; it shows this error:
{ TimeoutError: Request timed out after 30000ms
at new TimeoutError (/app/node_modules/kafka-node/lib/errors/TimeoutError.js:6:9)
at Timeout.timeoutId._createTimeout [as _onTimeout] (/app/node_modules/kafka-node/lib/kafkaClient.js:1007:14)
at listOnTimeout (internal/timers.js:535:17)
at processTimers (internal/timers.js:479:7) message: 'Request timed out after 30000ms' }
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient broker is now ready
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient kafka-node-client updated internal metadata
Kafka Producer is connected and ready.
----->data PRODUCT_REF_TOKEN { hash:
'0x964f714829cece2c5f57d5c8d677c251eff82f7fba4b5ba27b4bd650da79a954',
success: 'true' }
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient compressing messages if needed
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient kafka-node-client createBroker 127.0.0.1:9092
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient missing apiSupport waiting until broker is ready...
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient waitUntilReady [BrokerWrapper 127.0.0.1:9092 (connected: true) (ready: false) (idle: false) (needAuthentication: false) (authenticated: false)]
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient kafka-node-client socket closed 127.0.0.1:9092 (hadError: true)
Tue, 22 Oct 2019 10:10:25 GMT kafka-node:KafkaClient kafka-node-client reconnecting to 127.0.0.1:9092
Tue, 22 Oct 2019 10:10:25 GMT kafka-node:KafkaClient kafka-node-client createBroker 127.0.0.1:9092
Tue, 22 Oct 2019 10:10:25 GMT kafka-node:KafkaClient kafka-node-client socket closed 127.0.0.1:9092 (hadError: true)
Tue, 22 Oct 2019 10:10:26 GMT kafka-node:KafkaClient kafka-node-client reconnecting to 127.0.0.1:9092
Tue, 22 Oct 2019 10:10:26 GMT kafka-node:KafkaClient kafka-node-client createBroker 127.0.0.1:9092
Here is my docker-compose.yml for the Kafka setup. Please let me know if any settings or properties need to change.
version: "3.5"
services:
api:
image: opschain-sapi
restart: always
command: ["yarn", "start"]
ports:
- ${API_PORT}:80
env_file:
- ./truffle/contracts.env
- ./.env
external_links:
- ganachecli-private
- ganachecli-public
networks:
- opschain_network
graphql-api:
build:
context: ./graphql-api
dockerfile: Dockerfile
command: npm run dev
ports:
- 9007:80
depends_on:
- mongodb
- graphql-api-watch
- api
volumes:
- ./graphql-api/dist:/app/dist:delegated
- ./graphql-api/src:/app/src:delegated
environment:
VIRTUAL_HOST: api.blockchain.docker
PORT: 80
OFFCHAIN_DB_URL: mongodb://root:password#mongodb:27017
OFFCHAIN_DB_NAME: opschain-wallet
OFFCHAIN_DB_USER_COLLECTION: user
JWT_PASSWORD: 'supersecret'
JWT_TOKEN_EXPIRE_TIME: 86400000
BLOCKCHAIN_API: api
networks:
- opschain_network
graphql-api-watch:
build:
context: ./graphql-api
dockerfile: Dockerfile
command: npm run watch
volumes:
- ./graphql-api/src:/app/src:delegated
- ./graphql-api/dist:/app/dist:delegated
networks:
- opschain_network
mongodb:
image: mongo:latest
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: password
MONGO_INITDB_DATABASE: opschain-wallet
logging:
options:
max-size: 100m
networks:
- opschain_network
ui:
build:
context: ./ui
dockerfile: Dockerfile
ports:
- 9000:3000
volumes:
- ./ui/public:/app/public:delegated
- ./ui/src:/app/src:delegated
depends_on:
- graphql-api
networks:
- opschain_network
environment:
VIRTUAL_HOST: tmna.csc.docker
REACT_APP_API_BASE_URL: http://localhost:8080
logging:
options:
max-size: 10m
test:
build: ./test
volumes:
- ./test/postman:/app/postman:delegated
networks:
- opschain_network
zoo1:
image: zookeeper:3.4.9
hostname: zoo1
ports:
- 2181:2181
environment:
ZOO_MY_ID: 1
ZOO_PORT: 2181
ZOO_SERVERS: server.1=zoo1:2888:3888
volumes:
- ./pub-sub/zk-single-kafka-single/zoo1/data:/data
- ./pub-sub/zk-single-kafka-single/zoo1/datalog:/datalog
networks:
- opschain_network
kafka1:
image: confluentinc/cp-kafka:5.3.1
hostname: kafka1
ports:
- 9092:9092
environment:
KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
KAFKA_BROKER_ID: 1
KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
# KAFKA_ADVERTISED_HOST_NAME: localhost
# KAFKA_ZOOKEEPER_CONNECT: zoo1:2181
KAFKA_CREATE_TOPICS: "cat:1:1"
volumes:
- ./pub-sub/zk-single-kafka-single/kafka1/data:/var/lib/kafka/data
depends_on:
- zoo1
- api
networks:
- opschain_network
networks:
opschain_network:
external: true
In the compose file above I have exposed Kafka on port 9092 and ZooKeeper on port 2181. I am not sure exactly what the issue is.
const kafka = require('kafka-node');
const config = require('./configUtils');

function sendMessage({ topic, message }) {
  let Producer = kafka.Producer,
    client = new kafka.KafkaClient({ kafkaHost: config.kafka.host, autoConnect: true }),
    producer = new Producer(client);

  producer.on('ready', () => {
    console.log('Kafka Producer is connected and ready.');
    console.log('----->data', topic, message);
    producer.send(
      [
        {
          topic,
          messages: [JSON.stringify(message)],
        }
      ],
      function(_err, data) {
        console.log('--err', _err);
        console.log('------->message sent from kafka', data);
      }
    );
  });

  producer.on('error', error => {
    console.error(error);
  });
}

module.exports = sendMessage;
This is the producer file: it connects to the Kafka client and, once ready, produces the message.
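For reference, a minimal usage sketch of this module; the filename sendMessage.js is assumed, and the topic cat matches the KAFKA_CREATE_TOPICS entry in the compose file above:
const sendMessage = require('./sendMessage'); // hypothetical filename for the module above
sendMessage({ topic: 'cat', message: { hash: '0x...', success: 'true' } });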

I ran into a similar issue using the landoop/fast-data-dev image with docker-compose. I solved it by setting the ADV_HOST environment variable to the name of the Kafka service (e.g. kafka1), and then pointing the kafkaHost option at that service name (e.g. kafka1:9092).
For your Kafka image, the corresponding environment variable appears to be "KAFKA_ADVERTISED_HOST_NAME".
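With the compose file above, a minimal sketch of that fix would look like the following, assuming the producer runs in a container attached to opschain_network; it targets the internal listener kafka1:19092 instead of the 127.0.0.1:9092 address the external listener advertises:
const kafka = require('kafka-node');

// Point kafka-node at an address that resolves from inside the Docker network.
// 127.0.0.1 inside the app container is the app container itself, not the broker.
const client = new kafka.KafkaClient({ kafkaHost: 'kafka1:19092' });
const producer = new kafka.Producer(client);

producer.on('ready', () => console.log('connected via the advertised internal listener'));
producer.on('error', err => console.error(err));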

Related

NodeJS converting Docker Redis hostname to localhost

It seems the Redis container hostname is being converted to localhost by NodeJS.
Here are my files:
.env
REDIS_HOST=redis-eventsystem
REDIS_PORT=6379
REDIS_SECRET=secret
index.ts
// there are things above this
let Redis = require('redis');

let client : any = Redis.createClient({
  host: process.env.REDIS_HOST,
  port: process.env.REDIS_PORT,
  legacyMode: true
});

client.on('error', (err : Error) : void => {
  console.error(`Redis connection error: ${err}`);
});

client.on('connect', () : void => {
  console.info(`Redis connection success.`);
});

client.connect();
// there are things below this
docker-compose.yml
version: '3.8'
services:
  eventsystem:
    image: eventsystem
    restart: always
    depends_on:
      - "redis-eventsystem"
    ports:
      - "80:3000"
    networks:
      - eventsystem
  redis-eventsystem:
    image: redis
    command: ["redis-server", "--bind", "redis-eventsystem", "--port", "6379", "--protected-mode", "no"]
    restart: always
    networks:
      - eventsystem
networks:
  eventsystem:
    driver: bridge
docker log
2022-11-21 17:50:41 eventsystem-redis-eventsystem-1 | 1:C 21 Nov 2022 20:50:41.106 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2022-11-21 17:50:41 eventsystem-redis-eventsystem-1 | 1:C 21 Nov 2022 20:50:41.106 # Redis version=7.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
2022-11-21 17:50:41 eventsystem-redis-eventsystem-1 | 1:C 21 Nov 2022 20:50:41.106 # Configuration loaded
2022-11-21 17:50:41 eventsystem-redis-eventsystem-1 | 1:M 21 Nov 2022 20:50:41.106 * monotonic clock: POSIX clock_gettime
2022-11-21 17:50:41 eventsystem-redis-eventsystem-1 | 1:M 21 Nov 2022 20:50:41.108 * Running mode=standalone, port=6379.
2022-11-21 17:50:41 eventsystem-redis-eventsystem-1 | 1:M 21 Nov 2022 20:50:41.108 # Server initialized
2022-11-21 17:50:41 eventsystem-redis-eventsystem-1 | 1:M 21 Nov 2022 20:50:41.108 * Ready to accept connections
2022-11-21 17:50:41 eventsystem-eventsystem-1 |
2022-11-21 17:50:41 eventsystem-eventsystem-1 | > eventsystem#1.0.0 start
2022-11-21 17:50:41 eventsystem-eventsystem-1 | > node index.js serve
2022-11-21 17:50:41 eventsystem-eventsystem-1 |
2022-11-21 17:50:42 eventsystem-eventsystem-1 | Application is listening at http://localhost:3000
2022-11-21 17:50:42 eventsystem-eventsystem-1 | Mon Nov 21 2022 20:50:42 GMT+0000 (Coordinated Universal Time) - Redis connection error: Error: connect ECONNREFUSED 127.0.0.1:6379
As you can see, the connection is refused for 127.0.0.1, even though the application is configured to use the hostname of the container that runs the Redis server. I can't think of anything that may be causing this problem.
So, to answer my own question: the problem was the options passed to createClient in my code.
It turns out the host and port need to be passed inside a socket object within the createClient options.
So, instead of passing host and port at the top level of the options object as usual, you must do the following:
let client : any = Redis.createClient({
  // in node-redis v4 the v3-style top-level host/port options are ignored,
  // so the connection details must go inside `socket`
  socket: {
    host: process.env.REDIS_HOST,
    port: Number(process.env.REDIS_PORT)
  },
  legacyMode: true
});
Hope this helps someone else besides me.
Cheers!
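This also explains the original symptom: node-redis v4 silently ignores the v3-style top-level host/port options and falls back to its default of 127.0.0.1:6379, which is exactly the address in the ECONNREFUSED log line. An equivalent sketch using the url option instead of socket, assuming the same .env values:
let client : any = Redis.createClient({
  // the url form is interchangeable with socket: { host, port }
  url: `redis://${process.env.REDIS_HOST}:${process.env.REDIS_PORT}`,
  legacyMode: true
});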

Getting "Rejecting deliver request for IP:port because of consenter" error with raft

I'm running a multi-org setup in the cloud consisting of 2 orgs, 4 peers (2 per org), and 3 ordering nodes. All the peer nodes and the orderer0 node run on DigitalOcean droplets; the orderer2 and orderer3 nodes run on AWS and GCP respectively. The ordering service uses Raft, and orderer2 was elected leader. To create the channel and install/instantiate/query the chaincode, I execute scripts.sh (from the scripts directory in byfn) on the peer. The script completed successfully: the channel was created (using the orderer0 node) and joined by all the peers, and the chaincode install/instantiate/query all succeeded. But when I checked the orderer0 logs I found the errors below.
2019-11-15 13:33:08.814 UTC [common.deliver] deliverBlocks -> WARN 04a [channel: mychannel] Rejecting deliver request for 139.59.7.201:59304 because of consenter error
2019-11-15 13:33:08.815 UTC [comm.grpc.server] 1 -> INFO 04b streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=139.59.7.201:59304 grpc.code=OK grpc.call_duration=201.373401ms
After a few seconds
2019-11-15 13:33:09.654 UTC [orderer.consensus.etcdraft] run -> INFO 058 raft.node: 1 elected leader 2 at term 2 channel=mychannel node=1
2019-11-15 13:33:09.657 UTC [orderer.consensus.etcdraft] serveRequest -> INFO 059 Raft leader changed: 0 -> 2 channel=mychannel node=1
2019-11-15 13:33:09.865 UTC [common.deliver] Handle -> WARN 05a Error reading from 139.59.7.201:59314: rpc error: code = Canceled desc = context canceled
docker-compose-orderer.yaml
version: '2'
networks:
  byfn:
services:
  orderer.example.com:
    container_name: orderer.example.com
    image: hyperledger/fabric-orderer:1.4.3
    restart: always
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_byfn
      - ORDERER_HOST=orderer.example.com
      - ORDERER_GENERAL_LOGLEVEL=info
      - FABRIC_LOGGING_SPEC=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      - ORDERER_GENERAL_GENESISPROFILE=OrdererOrg
      - CONFIGTX_ORDERER_ADDRESSES=[127.0.0.1:7050]
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      #- ORDERER_KAFKA_TOPIC_REPLICATIONFACTOR=1
      #- ORDERER_KAFKA_VERBOSE=true
      - ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_CHAINCODE_LOGGING_SHIM=DEBUG
      - ORDERER_TLS_CLIENTROOTCAS_FILES=/var/hyperledger/users/Admin@example.com/tls/ca.crt
      - ORDERER_TLS_CLIENTCERT_FILE=/var/hyperledger/users/Admin@example.com/tls/client.crt
      - ORDERER_TLS_CLIENTKEY_FILE=/var/hyperledger/users/Admin@example.com/tls/client.key
      - GODEBUG=netdns=go
    extra_hosts:
      - "peer0.org1.example.com:139.59.13.3"
      - "peer1.org1.example.com:139.59.13.119"
      - "peer0.org2.example.com:139.59.7.201"
      - "peer1.org2.example.com:139.59.24.225"
      - "orderer2.example.com:3.14.67.48"
      - "orderer3.example.com:34.69.118.13"
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/:/var/hyperledger/configs
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
      - ./crypto-config/ordererOrganizations/example.com/users:/var/hyperledger/users
      #- orderer.example.com:/var/hyperledger/production/orderer
    ports:
      - 7050:7050
    networks:
      - byfn
docker-compose-orderer2.yaml
version: '2'
networks:
  byfn:
services:
  orderer2.example.com:
    container_name: orderer2.example.com
    image: hyperledger/fabric-orderer:1.4.3
    restart: always
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_byfn
      - ORDERER_HOST=orderer2.example.com
      - ORDERER_GENERAL_LOGLEVEL=info
      - FABRIC_LOGGING_SPEC=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      - ORDERER_GENERAL_GENESISPROFILE=OrdererOrg
      - CONFIGTX_ORDERER_ADDRESSES=[127.0.0.1:7050]
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      #- ORDERER_KAFKA_TOPIC_REPLICATIONFACTOR=1
      #- ORDERER_KAFKA_VERBOSE=true
      - ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_CHAINCODE_LOGGING_SHIM=DEBUG
      - ORDERER_TLS_CLIENTROOTCAS_FILES=/var/hyperledger/users/Admin@example.com/tls/ca.crt
      - ORDERER_TLS_CLIENTCERT_FILE=/var/hyperledger/users/Admin@example.com/tls/client.crt
      - ORDERER_TLS_CLIENTKEY_FILE=/var/hyperledger/users/Admin@example.com/tls/client.key
      - GODEBUG=netdns=go
    extra_hosts:
      - "peer0.org1.example.com:139.59.13.3"
      - "peer1.org1.example.com:139.59.13.119"
      - "peer0.org2.example.com:139.59.7.201"
      - "peer1.org2.example.com:139.59.24.225"
      - "orderer.example.com:139.59.1.164"
      - "orderer2.example.com:3.14.67.48"
      - "orderer3.example.com:34.69.118.13"
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/:/var/hyperledger/configs
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/:/var/hyperledger/orderer/tls
      - ./crypto-config/ordererOrganizations/example.com/users:/var/hyperledger/users
      #- orderer.example.com:/var/hyperledger/production/orderer
    ports:
      - 7050:7050
    networks:
      - byfn
docker-compose-orderer3.yaml
version: '2'
networks:
  byfn:
services:
  orderer3.example.com:
    container_name: orderer3.example.com
    image: hyperledger/fabric-orderer:1.4.3
    restart: always
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_byfn
      - ORDERER_HOST=orderer3.example.com
      - ORDERER_GENERAL_LOGLEVEL=info
      - FABRIC_LOGGING_SPEC=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      - ORDERER_GENERAL_GENESISPROFILE=OrdererOrg
      - CONFIGTX_ORDERER_ADDRESSES=[127.0.0.1:7050]
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      #- ORDERER_KAFKA_TOPIC_REPLICATIONFACTOR=1
      #- ORDERER_KAFKA_VERBOSE=true
      - ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_CHAINCODE_LOGGING_SHIM=DEBUG
      - ORDERER_TLS_CLIENTROOTCAS_FILES=/var/hyperledger/users/Admin@example.com/tls/ca.crt
      - ORDERER_TLS_CLIENTCERT_FILE=/var/hyperledger/users/Admin@example.com/tls/client.crt
      - ORDERER_TLS_CLIENTKEY_FILE=/var/hyperledger/users/Admin@example.com/tls/client.key
      - GODEBUG=netdns=go
    extra_hosts:
      - "peer0.org1.example.com:139.59.13.3"
      - "peer1.org1.example.com:139.59.13.119"
      - "peer0.org2.example.com:139.59.7.201"
      - "peer1.org2.example.com:139.59.24.225"
      - "orderer.example.com:139.59.1.164"
      - "orderer2.example.com:3.14.67.48"
      - "orderer3.example.com:34.69.118.13"
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./channel-artifacts/:/var/hyperledger/configs
      - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/msp:/var/hyperledger/orderer/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer3.example.com/tls/:/var/hyperledger/orderer/tls
      - ./crypto-config/ordererOrganizations/example.com/users:/var/hyperledger/users
      #- orderer.example.com:/var/hyperledger/production/orderer
    ports:
      - 7050:7050
    networks:
      - byfn
orderer0 node logs
https://justpaste.it/49a1n
Orderer2 node logs are huge, hence sharing the link
https://justpaste.it/6ro0v
orderer3 logs
https://justpaste.it/5e4j8
peer0org1 logs
https://justpaste.it/33rm5
peer1org1 logs
https://justpaste.it/1s2uz
peer0org2 logs
https://justpaste.it/6emlk
peer1org2 logs
https://justpaste.it/53fna

Login to docker registry located in Gitlab

I created a Docker registry and want to connect it with GitLab. I followed this documentation: https://docs.gitlab.com/ce/user/project/container_registry.html. After that I tried to log in to the registry with docker, but I received a 401 or Access denied. Do you know how to fix this?
docker login <url>
Username: gitlab-ci-token
Password:
https://<url>/v2/: unauthorized: HTTP Basic: Access denied
docker login <url>
Username: knikolov
Password:
https://<url>/v2/: unauthorized: HTTP Basic: Access denied
docker login <url>
Username: knikolov
Password:
Error response from daemon: login attempt to https://<url>/v2/ failed with status: 401 Unauthorized
production.log
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:42:51 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:42:54 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:42:57 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:00 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:03 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:06 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:09 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:12 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:15 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:18 +0000
Started GET "/jwt/auth?account=knikolov&client_id=docker&offline_token=true&service=container_registry" for 172.17.0.1 at 2017-06-22 14:43:19 +0000
Processing by JwtController#auth as HTML
Parameters: {"account"=>"knikolov", "client_id"=>"docker", "offline_token"=>"true", "service"=>"container_registry"}
Completed 200 OK in 191ms (Views: 0.5ms | ActiveRecord: 5.7ms)
Started GET "/admin/logs" for 172.17.0.1 at 2017-06-22 14:43:21 +0000
Processing by Admin::LogsController#show as HTML
From the registry log I received:
registry_1 | time="2017-06-25T17:34:31Z" level=warning msg="error authorizing context: authorization token required" go.version=go1.7.3 http.request.host=<url> http.request.id=e088c13e-aa4c-4701-af26-29e12874519b http.request.method=GET http.request.remoteaddr=37.59.24.105 http.request.uri="/v2/" http.request.useragent="docker/17.03.1-ce go/go1.7.5 git-commit/c6d412e kernel/4.4.0-81-generic os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.1-ce \\(linux\\))" instance.id=c8d463e0-cf04-48f5-8daa-d096b4e75494 version=v2.6.1
registry_1 | 172.17.0.1 - - [25/Jun/2017:17:34:31 +0000] "GET /v2/ HTTP/1.0" 401 87 "" "docker/17.03.1-ce go/go1.7.5 git-commit/c6d412e kernel/4.4.0-81-generic os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.1-ce \\(linux\\))"
registry_1 | time="2017-06-25T17:34:32Z" level=info msg="token from untrusted issuer: \"omnibus-gitlab-issuer\""
registry_1 | time="2017-06-25T17:34:32Z" level=warning msg="error authorizing context: invalid token" go.version=go1.7.3 http.request.host=<url> http.request.id=ff0d15e4-3198-4d69-910b-50bc27dd02f2 http.request.method=GET http.request.remoteaddr=37.59.24.105 http.request.uri="/v2/" http.request.useragent="docker/17.03.1-ce go/go1.7.5 git-commit/c6d412e kernel/4.4.0-81-generic os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.1-ce \\(linux\\))" instance.id=c8d463e0-cf04-48f5-8daa-d096b4e75494 version=v2.6.1
registry_1 | 172.17.0.1 - - [25/Jun/2017:17:34:32 +0000] "GET /v2/ HTTP/1.0" 401 87 "" "docker/17.03.1-ce go/go1.7.5 git-commit/c6d412e kernel/4.4.0-81-generic os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.1-ce \\(linux\\))"
This is my config for my registry:
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
auth:
  token:
    realm: https://<url>/jwt/auth
    service: container_registry
    issuer: gitlab-issuer
    rootcertbundle: /certs/registry.crt
docker-compose.yml
registry:
  restart: always
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  environment:
    - REGISTRY_STORAGE_DELETE_ENABLED=true
  volumes:
    - ./data:/var/lib/registry
    - ./certs:/certs
    - ./config.yml:/etc/docker/registry/config.yml
Gitlab docker-compose.yml
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: '<gitlab_url>'
  container_name: gitlab
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url '<gitlab_url>'
      gitlab_rails['gitlab_shell_ssh_port'] = 2224
      registry_external_url '<docker-registry_url>'
      gitlab_rails['smtp_enable'] = true
      gitlab_rails['smtp_address'] = "172.17.0.1"
      gitlab_rails['smtp_domain'] = "<smtp_domain>"
      gitlab_rails['gitlab_email_from'] = '<gitlab_email_from>'
      gitlab_rails['smtp_enable_starttls_auto'] = false
      gitlab_rails['registry_enabled'] = true
      registry_nginx['ssl_certificate'] = '/etc/gitlab/ssl/docker.registry.crt'
      registry_nginx['ssl_certificate_key'] = '/etc/gitlab/ssl/docker.registry.key'
      registry_nginx['proxy_set_headers'] = {
        "Host" => "<docker-registry_url>"
      }
      nginx['listen_port'] = 80
      nginx['listen_https'] = false
      nginx['proxy_set_headers'] = {
        "X-Forwarded-Proto" => "https",
        "X-Forwarded-Ssl" => "on"
      }
  ports:
    - '127.0.0.1:5432:80'
    - '2224:22'
  volumes:
    - '/home/gitlab/gitlab-ce/config:/etc/gitlab'
    - '/home/gitlab/gitlab-ce/logs:/var/log/gitlab'
    - '/home/gitlab/gitlab-ce/data:/var/opt/gitlab'
    - '/home/docker-registry/data:/var/opt/gitlab/gitlab-rails/shared/registry'
Make sure the .crt and .key files exist at the paths specified here in gitlab.rb; if not, make the changes and restart GitLab with sudo gitlab-ctl restart:
external_url 'https://myrepo.xyz.com'
nginx['redirect_http_to_https'] = true
registry_external_url 'https://registry.xyz.com'
registry_nginx['ssl_certificate'] = "/etc/gitlab/ssl/registry.xyz.com.crt"
registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/registry.xyz.com.key"
More details available at - Appychip
It seems like you are not using the same RSA key pair for your GitLab registry backend and your Docker registry setup.
Check your gitlab_rails['registry_key_path'] setting in gitlab.rb and consult this very detailed guide:
https://m42.sh/gitlab-registry.html (unfortunately offline; backup copy here: https://github.com/ipernet/gitlab-docs/blob/master/gitlab-registry.md)
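A sketch of the pairing to verify, with hypothetical paths: the key GitLab uses to sign registry tokens must correspond to the certificate the registry validates them against. Note also that the registry log above rejects a "token from untrusted issuer: omnibus-gitlab-issuer" while the registry config trusts issuer gitlab-issuer, so the issuer values must match as well:
# gitlab.rb (GitLab signs registry auth tokens with this key)
gitlab_rails['registry_key_path'] = '/etc/gitlab/ssl/registry-auth.key'  # hypothetical path
gitlab_rails['registry_issuer'] = 'omnibus-gitlab-issuer'
# registry config.yml (the registry verifies tokens with the matching certificate)
auth:
  token:
    issuer: omnibus-gitlab-issuer         # must equal registry_issuer above
    rootcertbundle: /certs/registry.crt   # must be the certificate for registry-auth.key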
Make sure that:
- the drive Docker uses is shared (if it is not, open the Docker settings and mark it as shared);
- the username matches;
- any domain name is removed from the username, if one is included.
Then try logging in again.

SignalR reconnect after an Azure Web App restart

I'm seeing strange reconnect behavior after restarting an Azure Web App that hosts my SignalR hub. When I restart, even if the application comes back up in less than the DisconnectTimeout (tested with 2 minutes), the client doesn't reconnect.
Am I doing something wrong?
Hub Code
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class PingHub : Hub
{
    public void Hello()
    {
        Clients.All.hello();
    }

    public override Task OnReconnected()
    {
        Trace.WriteLine("Reconnect");
        return base.OnReconnected();
    }

    public override Task OnConnected()
    {
        Trace.WriteLine("Connect");
        return base.OnConnected();
    }
}
Client Code
using System;
using Microsoft.AspNet.SignalR.Client;

var hubConnection = new HubConnection("http://url/");
hubConnection.TraceLevel = TraceLevels.All;
hubConnection.TraceWriter = Console.Out;

IHubProxy hubProxy = hubConnection.CreateHubProxy("PingHub");
hubProxy.On("hello", () => Console.WriteLine($"Hello {DateTime.Now.ToString()}"));

hubConnection.Reconnected += () =>
{
    Console.WriteLine("Reconnected");
};

hubConnection.Start().Wait();
Client Trace Logs
16:55:48.3999367 - null - ChangeState(Disconnected, Connecting)
16:55:48.8459354 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: GET http://gf-test-signalr.azurewebsites.net/signalr/connect?clientProtocol=1.4&transport=serverSentEvents&connectionData=[{"Name":"PingHub"}]&connectionToken=9Vs1ACQjDX%2BQmrcJ2XnoLCCJN%2FDtlJd%2BM0r5o8QvORX50ydXDkrAzeeVUgVIzNc3d7JcDvJ49KmxI3oVPQ%2Bt8IUMJe8HGFAJDasufD%2FFwxEr2l23l40q2dlKVADnFJA5
16:55:48.9604385 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: OnMessage(Data: initialized)
16:55:48.9609355 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: OnMessage(Data: {"C":"d-B53A1D13-E,0|F,0|G,1","S":1,"M":[]})
16:55:49.1059354 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - ChangeState(Connecting, Connected)
16:55:53.0300013 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: OnMessage(Data: {})
16:56:03.0655798 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: OnMessage(Data: {})
16:56:13.0791344 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: OnMessage(Data: {})
16:56:23.0965041 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: OnMessage(Data: {})
16:56:26.7919383 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - ChangeState(Connected, Reconnecting)
16:56:26.7939373 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: GET http://gf-test-signalr.azurewebsites.net/signalr/reconnect?clientProtocol=1.4&transport=serverSentEvents&connectionData=[{"Name":"PingHub"}]&connectionToken=9Vs1ACQjDX%2BQmrcJ2XnoLCCJN%2FDtlJd%2BM0r5o8QvORX50ydXDkrAzeeVUgVIzNc3d7JcDvJ49KmxI3oVPQ%2Bt8IUMJe8HGFAJDasufD%2FFwxEr2l23l40q2dlKVADnFJA5&messageId=d-B53A1D13-E%2C0%7CF%2C0%7CG%2C1
16:56:26.8962939 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - OnError(Microsoft.AspNet.SignalR.Client.HttpClientException: StatusCode: 503, ReasonPhrase: 'Service Unavailable', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:
{
Date: Tue, 15 Nov 2016 16:56:22 GMT
Set-Cookie: ARRAffinity=9fa33f4c59eaa0cb53ffc0472e2395fa67ff17a0f59613b57fb963b1519ab999;Path=/;Domain=gf-test-signalr.azurewebsites.net
Server: Microsoft-IIS/8.0
Content-Length: 326
Content-Type: text/html; charset=us-ascii
}
at Microsoft.AspNet.SignalR.Client.Http.DefaultHttpClient.<>c__DisplayClass5_0.<Get>b__1(HttpResponseMessage responseMessage)
at Microsoft.AspNet.SignalR.TaskAsyncHelper.<>c__DisplayClass31_0`2.<Then>b__0(Task`1 t)
at Microsoft.AspNet.SignalR.TaskAsyncHelper.TaskRunners`2.<>c__DisplayClass3_0.<RunTask>b__0(Task`1 t))
16:56:28.9148136 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: GET http://gf-test-signalr.azurewebsites.net/signalr/reconnect?clientProtocol=1.4&transport=serverSentEvents&connectionData=[{"Name":"PingHub"}]&connectionToken=9Vs1ACQjDX%2BQmrcJ2XnoLCCJN%2FDtlJd%2BM0r5o8QvORX50ydXDkrAzeeVUgVIzNc3d7JcDvJ49KmxI3oVPQ%2Bt8IUMJe8HGFAJDasufD%2FFwxEr2l23l40q2dlKVADnFJA5&messageId=d-B53A1D13-E%2C0%7CF%2C0%7CG%2C1
16:56:29.0051243 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - OnError(Microsoft.AspNet.SignalR.Client.HttpClientException: StatusCode: 503, ReasonPhrase: 'Service Unavailable', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:
{
Date: Tue, 15 Nov 2016 16:56:24 GMT
Server: Microsoft-IIS/8.0
Content-Length: 326
Content-Type: text/html; charset=us-ascii
}
at Microsoft.AspNet.SignalR.Client.Http.DefaultHttpClient.<>c__DisplayClass5_0.<Get>b__1(HttpResponseMessage responseMessage)
at Microsoft.AspNet.SignalR.TaskAsyncHelper.<>c__DisplayClass31_0`2.<Then>b__0(Task`1 t)
at Microsoft.AspNet.SignalR.TaskAsyncHelper.TaskRunners`2.<>c__DisplayClass3_0.<RunTask>b__0(Task`1 t))
16:56:31.0165736 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: GET http://gf-test-signalr.azurewebsites.net/signalr/reconnect?clientProtocol=1.4&transport=serverSentEvents&connectionData=[{"Name":"PingHub"}]&connectionToken=9Vs1ACQjDX%2BQmrcJ2XnoLCCJN%2FDtlJd%2BM0r5o8QvORX50ydXDkrAzeeVUgVIzNc3d7JcDvJ49KmxI3oVPQ%2Bt8IUMJe8HGFAJDasufD%2FFwxEr2l23l40q2dlKVADnFJA5&messageId=d-B53A1D13-E%2C0%7CF%2C0%7CG%2C1
16:56:56.7950186 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - OnError(System.TimeoutException: Couldn't reconnect within the configured timeout of 00:00:30, disconnecting.)
16:56:56.7959897 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - Disconnected
16:56:56.8103502 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - Transport.Dispose(6171c2d4-a9dd-4fa4-b710-0910af48132b)
16:56:56.8108527 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - Closed
16:56:56.7950186 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - OnError(System.TimeoutException: Couldn't reconnect within the configured timeout of 00:00:30, disconnecting.)
As far as I know, the default value of DisconnectTimeout is 30 seconds, and according to the logs the reconnect attempt is abandoned after about 30 seconds, so please check whether you set/changed the DisconnectTimeout setting in Application_Start:
GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);
Besides, if you want to continuously reconnect to the hub after a connection has been lost, you can call the Start method from the disconnected (Closed) event handler. For more detailed information, please refer to "How to continuously reconnect".
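For reference, the continuous-reconnect pattern from that article looks like the following with the SignalR 2.x JavaScript client; this is a sketch of the documented approach, not code from the question (the .NET client can do the equivalent from its Closed event):
$.connection.hub.disconnected(function () {
    setTimeout(function () {
        $.connection.hub.start();   // restart the connection once it has fully dropped
    }, 5000);                       // wait 5 seconds so the restarting server isn't hammered
});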

Issue sending metrics with statsd

I was using the following instructions to install and configure StatsD on a Graphite server:
https://www.digitalocean.com/community/tutorials/how-to-configure-statsd-to-collect-arbitrary-stats-for-graphite-on-ubuntu-14-04
Now that I have a server with StatsD running, I do not see the metrics being logged under /var/log/statsd/statsd.log when I test sending them from the command line. Here is what I see:
29 Oct 02:30:39 - server is up
29 Oct 02:47:49 - reading config file: /etc/statsd/localConfig.js
29 Oct 02:47:49 - server is up
29 Oct 14:16:45 - reading config file: /etc/statsd/localConfig.js
29 Oct 14:16:45 - server is up
29 Oct 15:36:47 - reading config file: /etc/statsd/localConfig.js
29 Oct 15:36:47 - DEBUG: Loading server: ./servers/udp
29 Oct 15:36:47 - server is up
29 Oct 15:36:47 - DEBUG: Loading backend: ./backends/graphite
29 Oct 15:36:47 - DEBUG: numStats: 3
The log stays at the last entry of 'numStats: 3', even though I keep entering different metrics at the command line.
Here are a sample of the metrics I entered:
echo "sample.gauge:14|g" | nc -u -w0 127.0.0.1 8125
echo "sample.gauge:10|g" | nc -u -w0 127.0.0.1 8125
echo "sample.count:1|c" | nc -u -w0 127.0.0.1 8125
echo "sample.set:50|s" | nc -u -w0 127.0.0.1 8125
Of interest, I see this under /var/log/statsd/stderr.log:
events.js:72
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE
at errnoException (net.js:901:11)
at Server._listen2 (net.js:1039:14)
at listen (net.js:1061:10)
at Server.listen (net.js:1135:5)
at /usr/share/statsd/stats.js:383:16
at null.<anonymous> (/usr/share/statsd/lib/config.js:40:5)
at EventEmitter.emit (events.js:95:17)
at /usr/share/statsd/lib/config.js:20:12
at fs.js:268:14
at Object.oncomplete (fs.js:107:15)
Here is what my localConfig.js file looks like:
{
  graphitePort: 2003,
  graphiteHost: "localhost",
  port: 8125,
  graphite: {
    legacyNamespace: false
  },
  debug: true,
  dumpMessages: true
}
Would anybody be able to shed some light as to where the problem lies?
Thanks!
There is a management interface available by default on port 8126: https://github.com/etsy/statsd/blob/master/docs/admin_interface.md
You likely have another service already listening on that port on the same system, which is what produces the EADDRINUSE error in stderr.log.
Try moving the management port:
// localConfig.js
{
  graphitePort: 2003,
  graphiteHost: "localhost",
  port: 8125,
  mgmt_port: 8127,
  graphite: {
    legacyNamespace: false
  },
  debug: true,
  dumpMessages: true
}
See https://github.com/etsy/statsd/blob/master/exampleConfig.js#L28
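If the EADDRINUSE error then goes away, a quick sanity check from the shell (assuming the same localhost setup as above; the health command comes from the admin interface documentation linked earlier):
echo "sample.count:1|c" | nc -u -w0 127.0.0.1 8125    # send a test metric to the UDP listener
echo "health" | nc 127.0.0.1 8127                     # query the relocated management port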
