I am trying to deploy my containerized application to Azure using the Docker Compose ACI context. For this I am following this tutorial: tutorial-docker-compose
It works fine when I use a simple Docker Compose definition with only three services. Below is the docker-compose.yaml that works:
version: "3"
services:
node1:
build:
context: server
image: ***.azurecr.io/node-server
command: ./server -config config/cloud-neighbour-1.yaml -socket-address 0.0.0.0 -socket-port 8881
ports:
- 8881
networks:
local:
ipv4_address: 172.28.1.101
deploy:
resources:
limits:
cpus: '0.50'
memory: 1G
reservations:
cpus: '0.50'
memory: 1G
node2:
build:
context: server
image: ***.azurecr.io/node-server
command: ./server -config config/cloud-neighbour-2.yaml -socket-address 0.0.0.0 -socket-port 8882
ports:
- 8882
networks:
local:
ipv4_address: 172.28.1.102
deploy:
resources:
limits:
cpus: '0.50'
memory: 1G
reservations:
cpus: '0.50'
memory: 1G
node3:
build:
context: server
image: ***.azurecr.io/node-server
command: ./server -config config/cloud-neighbour-3.yaml -socket-address 0.0.0.0 -socket-port 8883
ports:
- 8883
networks:
local:
ipv4_address: 172.28.1.103
deploy:
resources:
limits:
cpus: '0.50'
memory: 1G
reservations:
cpus: '0.50'
memory: 1G
networks:
local:
driver: bridge
ipam:
config:
- subnet: 172.28.1.0/24
gateway: 172.28.1.1
The problem is that when I increase the number of services, the containers get stuck in the Creating state and I get the following error.
$ docker compose --verbose -f .\nodes-123.docker-compose.yaml up
level=debug msg="Up on project with name \"lab01-blockchain\""
[+] Running 1/7
- Group lab01-blockchain Created 7.7s
- node1 Creating 900.0s
- node2 Creating 900.0s
- node3 Creating 900.0s
- node4 Creating 900.0s
- node5 Creating 900.0s
- node6 Creating 900.0s
Future#WaitForCompletion: context has been cancelled: StatusCode=200 -- Original Error: context deadline exceeded
The docker-compose.yaml with 6 services is the following:
version: "3"
services:
node1:
# build:
# context: server
image: ***.azurecr.io/node-server
command: ./server -config config/cloud-neighbour-1.yaml -socket-address 0.0.0.0 -socket-port 8881
ports:
- 8881
# networks:
# local:
# ipv4_address: 172.28.1.101
deploy:
resources:
limits:
cpus: '0.50'
memory: 1G
reservations:
cpus: '0.50'
memory: 1G
node2:
# build:
# context: server
image: ***.azurecr.io/node-server
command: ./server -config config/cloud-neighbour-2.yaml -socket-address 0.0.0.0 -socket-port 8882
ports:
- 8882
# networks:
# local:
# ipv4_address: 172.28.1.102
deploy:
resources:
limits:
cpus: '0.50'
memory: 1G
reservations:
cpus: '0.50'
memory: 1G
node3:
# build:
# context: server
image: ***.azurecr.io/node-server
command: ./server -config config/cloud-neighbour-3.yaml -socket-address 0.0.0.0 -socket-port 8883
ports:
- 8883
# networks:
# local:
# ipv4_address: 172.28.1.103
deploy:
resources:
limits:
cpus: '0.50'
memory: 1G
reservations:
cpus: '0.50'
memory: 1G
node4:
# build:
# context: server
image: ***.azurecr.io/node-server
command: ./server -config config/cloud-neighbour-3.yaml -socket-address 0.0.0.0 -socket-port 8884
ports:
- 8884
# networks:
# local:
# ipv4_address: 172.28.1.104
deploy:
resources:
limits:
cpus: '0.50'
memory: 1G
reservations:
cpus: '0.50'
memory: 1G
node5:
# build:
# context: server
image: ***.azurecr.io/node-server
command: ./server -config config/cloud-neighbour-3.yaml -socket-address 0.0.0.0 -socket-port 8885
ports:
- 8885
# networks:
# local:
# ipv4_address: 172.28.1.105
deploy:
resources:
limits:
cpus: '0.50'
memory: 1G
reservations:
cpus: '0.50'
memory: 1G
node6:
# build:
# context: server
image: ***.azurecr.io/node-server
command: ./server -config config/cloud-neighbour-3.yaml -socket-address 0.0.0.0 -socket-port 8886
ports:
- 8886
# networks:
# local:
# ipv4_address: 172.28.1.106
deploy:
resources:
limits:
cpus: '0.50'
memory: 1G
reservations:
cpus: '0.50'
memory: 1G
# client:
# image: ***.azurecr.io/client
# command: ./client
# networks:
# local:
# ipv4_address: 172.28.1.200
# deploy:
# resources:
# limits:
# cpus: '0.50'
# memory: 1G
# networks:
# local:
# driver: bridge
# ipam:
# config:
# - subnet: 172.28.1.0/24
# gateway: 172.28.1.1
I tried removing the network from the containers to see whether that helped, but it did not.
Please note that I am using deploy.resources.limits in my docker-compose.yaml file so as not to exceed the quota defined by Azure (4 CPUs, 16 GB RAM).
Note also that I used the --verbose flag with docker compose, but it did not seem to produce any additional logging.
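For reference, the container group that Compose creates can also be inspected directly with the Azure CLI; this sometimes surfaces a provisioning error that docker compose does not print (the resource group name below is a placeholder for my setup):
# list the container groups in the resource group backing the ACI context
az container list --resource-group <my-resource-group> --output table
# inspect the group created for the compose project; the provisioningState and
# per-container events in the output often explain why creation hangs
az container show --resource-group <my-resource-group> --name lab01-blockchain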
Regards
Related
I am trying to use NiFi with Docker but I am stuck on this error and cannot solve it.
I am configuring Metabase, MySQL, NiFi and NiFi Registry on the same network. The docker-compose.yml comes up normally, but when I try to connect NiFi to a bucket in NiFi Registry I keep getting this error:
error
My OS: Ubuntu 20.04.
I looked at the documentation, but it only covers a local installation and configuring the NiFi files.
I followed this video.
This is my docker-compose file:
version: "3"
services:
  db:
    image: mysql
    container_name: mysql_57
    environment:
      MYSQL_ROOT_PASSWORD: password
    restart: always
    ports:
      - "3307:3306"
  metabase:
    image: metabase/metabase
    container_name: metabase
    ports:
      - "3000:3000"
    volumes:
      - data:/metabase
  nifi:
    image: apache/nifi:latest
    container_name: nifi
    ports:
      - "8443:8443"
    cpus: 2
    mem_limit: 2G
    mem_reservation: 2G
    environment:
      - SINGLE_USER_CREDENTIALS_USERNAME=myusername
      - SINGLE_USER_CREDENTIALS_PASSWORD=mypassword
      - NIFI_SENSITIVE_PROPS_KEY=mykey
    volumes:
      - nifi_content_repository:/opt/nifi/nifi-current/content_repository
      - nifi_database_repository:/opt/nifi/nifi-current/database_repository
      - nifi_flowfile_repository:/opt/nifi/nifi-current/flowfile_repository
      - nifi_provenance_repository:/opt/nifi/nifi-current/provenance_repository
      - nifi_state:/opt/nifi/nifi-current/state
      - nifi_logs:/opt/nifi/nifi-current/logs
      - nifi_data:/opt/nifi/nifi-current/data
      - nifi_conf:/opt/nifi/nifi-current/conf
      - type: bind
        source: ./drivers
        target: /opt/nifi/nifi-current/drivers
    networks:
      - rede_projeto
  nifi-registry:
    image: apache/nifi-registry:latest
    container_name: nifi-registry
    ports:
      - "18080:18080"
    cpus: 1
    mem_limit: 1G
    mem_reservation: 1G
    networks:
      - rede_projeto
volumes:
  nifi_content_repository:
    driver: local
  nifi_database_repository:
    driver: local
  nifi_flowfile_repository:
    driver: local
  nifi_provenance_repository:
    driver: local
  nifi_conf:
    driver: local
  nifi_state:
    driver: local
  nifi_logs:
    driver: local
  nifi_data:
    driver: local
  nifi_drivers:
    driver: local
  data:
    driver: local
networks:
  rede_projeto:
    external: true
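For reference, both nifi and nifi-registry are attached to the external rede_projeto network, so the registry should be reachable by its service name. A quick connectivity check from inside the nifi container (assuming curl is available in the image; the /nifi-registry-api/buckets endpoint is the registry's REST API for listing buckets) would be:
# should return the registry buckets if the two containers can reach each other
docker exec -it nifi curl -v http://nifi-registry:18080/nifi-registry-api/buckets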
I am new to containers and to GKE. I used to run my Node server app with npm run debug, and I am trying to do the same on GKE using the shell of my container. When I log into the shell of the myapp container and do this, I get:
> api_server@0.0.0 start /usr/src/app
> node src/
events.js:167
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE :::8089
Normally I deal with this using something like killall -9 node, but when I do this it looks like I am kicked out of my shell and the container is restarted by Kubernetes. It seems node is already using the port or something:
netstat -tulpn | grep 8089
tcp 0 0 :::8089 :::* LISTEN 23/node
How can I start my server from the shell?
My config files:
Dockerfile:
FROM node:10-alpine
RUN apk add --update \
libc6-compat
WORKDIR /usr/src/app
COPY package*.json ./
COPY templates-mjml/ templates-mjml/
COPY public/ public/
COPY src/ src/
COPY data/ data/
COPY config/ config/
COPY migrations/ migrations/
ENV NODE_ENV 'development'
ENV PORT '8089'
RUN npm install --development
myapp.yaml:
apiVersion: v1
kind: Service
metadata:
name: myapp
labels:
app: myapp
spec:
ports:
- port: 8089
name: http
selector:
app: myapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
labels:
app: myapp
spec:
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: gcr.io/myproject-224713/firstapp:v4
ports:
- containerPort: 8089
env:
- name: POSTGRES_DB_HOST
value: 127.0.0.1:5432
- name: POSTGRES_DB_USER
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: POSTGRES_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy",
"-instances=myproject-224713:europe-west4:mydatabase=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
securityContext:
runAsUser: 2
allowPrivilegeEscalation: false
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
---
myrouter.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: myapp-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: myapp
spec:
hosts:
- "*"
gateways:
- myapp-gateway
http:
- match:
- uri:
prefix: /
route:
- destination:
host: myapp
weight: 100
websocketUpgrade: true
EDIT:
I got the following logs:
EDIT 2:
After adding a FeathersJS health service I get the following output for describe:
Name: myapp-95df4dcd6-lptnq
Namespace: default
Node: gke-standard-cluster-1-default-pool-59600833-pcj3/10.164.0.3
Start Time: Wed, 02 Jan 2019 22:08:33 +0100
Labels: app=myapp
pod-template-hash=518908782
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container myapp; cpu request for container cloudsql-proxy
sidecar.istio.io/status:
{"version":"3c9617ff82c9962a58890e4fa987c69ca62487fda71c23f3a2aad1d7bb46c748","initContainers":["istio-init"],"containers":["istio-proxy"]...
Status: Running
IP: 10.44.3.17
Controlled By: ReplicaSet/myapp-95df4dcd6
Init Containers:
istio-init:
Container ID: docker://768b2327c6cfa57b3d25a7029e52ce6a88dec6848e91dd7edcdf9074c91ff270
Image: gcr.io/gke-release/istio/proxy_init:1.0.2-gke.0
Image ID: docker-pullable://gcr.io/gke-release/istio/proxy_init@sha256:e30d47d2f269347a973523d0c5d7540dbf7f87d24aca2737ebc09dbe5be53134
Port: <none>
Host Port: <none>
Args:
-p
15001
-u
1337
-m
REDIRECT
-i
*
-x
-b
8089,
-d
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 02 Jan 2019 22:08:34 +0100
Finished: Wed, 02 Jan 2019 22:08:35 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts: <none>
Containers:
myapp:
Container ID: docker://5566a3e8242ec6755dc2f26872cfb024fab42d5f64aadc3db1258fcb834f8418
Image: gcr.io/myproject-224713/firstapp:v4
Image ID: docker-pullable://gcr.io/myproject-224713/firstapp@sha256:0cbd4fae0b32fa0da5a8e6eb56cb9b86767568d243d4e01b22d332d568717f41
Port: 8089/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 02 Jan 2019 22:09:19 +0100
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 02 Jan 2019 22:08:35 +0100
Finished: Wed, 02 Jan 2019 22:09:19 +0100
Ready: False
Restart Count: 1
Requests:
cpu: 100m
Liveness: http-get http://:8089/health delay=15s timeout=20s period=10s #success=1 #failure=3
Readiness: http-get http://:8089/health delay=5s timeout=5s period=10s #success=1 #failure=3
Environment:
POSTGRES_DB_HOST: 127.0.0.1:5432
POSTGRES_DB_USER: <set to the key 'username' in secret 'mysecret'> Optional: false
POSTGRES_DB_PASSWORD: <set to the key 'password' in secret 'mysecret'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9vtz5 (ro)
cloudsql-proxy:
Container ID: docker://414799a0699abe38c9759f82a77e1a3e06123714576d6d57390eeb07611f9a63
Image: gcr.io/cloudsql-docker/gce-proxy:1.11
Image ID: docker-pullable://gcr.io/cloudsql-docker/gce-proxy@sha256:5c690349ad8041e8b21eaa63cb078cf13188568e0bfac3b5a914da3483079e2b
Port: <none>
Host Port: <none>
Command:
/cloud_sql_proxy
-instances=myproject-224713:europe-west4:osm=tcp:5432
-credential_file=/secrets/cloudsql/credentials.json
State: Running
Started: Wed, 02 Jan 2019 22:08:36 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/secrets/cloudsql from cloudsql-instance-credentials (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9vtz5 (ro)
istio-proxy:
Container ID: docker://898bc95c6f8bde18814ef01ce499820d545d7ea2d8bf494b0308f06ab419041e
Image: gcr.io/gke-release/istio/proxyv2:1.0.2-gke.0
Image ID: docker-pullable://gcr.io/gke-release/istio/proxyv2@sha256:826ef4469e4f1d4cabd0dc846f9b7de6507b54f5f0d0171430fcd3fb6f5132dc
Port: <none>
Host Port: <none>
Args:
proxy
sidecar
--configPath
/etc/istio/proxy
--binaryPath
/usr/local/bin/envoy
--serviceCluster
myapp
--drainDuration
45s
--parentShutdownDuration
1m0s
--discoveryAddress
istio-pilot.istio-system:15007
--discoveryRefreshDelay
1s
--zipkinAddress
zipkin.istio-system:9411
--connectTimeout
10s
--statsdUdpAddress
istio-statsd-prom-bridge.istio-system:9125
--proxyAdminPort
15000
--controlPlaneAuthPolicy
NONE
State: Running
Started: Wed, 02 Jan 2019 22:08:36 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
POD_NAME: myapp-95df4dcd6-lptnq (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
ISTIO_META_POD_NAME: myapp-95df4dcd6-lptnq (v1:metadata.name)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
Mounts:
/etc/certs/ from istio-certs (ro)
/etc/istio/proxy from istio-envoy (rw)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
cloudsql-instance-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: cloudsql-instance-credentials
Optional: false
default-token-9vtz5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9vtz5
Optional: false
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
istio-certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.default
Optional: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 68s default-scheduler Successfully assigned myapp-95df4dcd6-lptnq to gke-standard-cluster-1-default-pool-59600833-pcj3
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "istio-envoy"
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "default-token-9vtz5"
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "cloudsql-instance-credentials"
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "istio-certs"
Normal Pulled 67s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/gke-release/istio/proxy_init:1.0.2-gke.0" already present on machine
Normal Created 67s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 67s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Normal Pulled 66s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/cloudsql-docker/gce-proxy:1.11" already present on machine
Normal Created 66s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 66s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Normal Created 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Normal Pulled 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/gke-release/istio/proxyv2:1.0.2-gke.0" already present on machine
Normal Created 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Warning Unhealthy 31s (x4 over 61s) kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Readiness probe failed: HTTP probe failed with statuscode: 404
Normal Pulled 22s (x2 over 66s) kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/myproject-224713/firstapp:v4" already present on machine
Warning Unhealthy 22s (x3 over 42s) kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Liveness probe failed: HTTP probe failed with statuscode: 404
Normal Killing 22s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Killing container with id docker://myapp:Container failed liveness probe.. Container will be killed and recreated.
This is just how Kubernetes works: as long as your pod has its processes running, it will remain 'up'. The moment you kill one of its processes, Kubernetes will restart the pod because it assumes it crashed or something went wrong.
If you really want to debug with npm run debug consider either:
Build an image whose Dockerfile ends with a CMD (or ENTRYPOINT) of npm run debug, then run it using a Deployment definition in Kubernetes (see the Dockerfile sketch after the YAML below).
Override the command of the myapp container in your Deployment definition with something like:
spec:
containers:
- name: myapp
image: gcr.io/myproject-224713/firstapp:v4
ports:
- containerPort: 8089
command: ["npm", "run", "debug" ]
env:
- name: POSTGRES_DB_HOST
value: 127.0.0.1:5432
- name: POSTGRES_DB_USER
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: POSTGRES_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
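For the first option, a minimal sketch of how the Dockerfile from the question could end (it assumes package.json defines a debug script):
FROM node:10-alpine
# ... same apk/COPY/ENV/RUN steps as in the Dockerfile shown in the question ...
RUN npm install --development
# run the app in debug mode by default instead of starting it from a shell session
CMD ["npm", "run", "debug"]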
Getting the following ERROR while creating a new channel using the Kafka Orderer.
Error: timeout waiting for channel creation
Here is the screenshot:
Following is my Docker Compose YAML file:
version: '2'
networks:
tranargy:
services:
zookeeper0:
container_name: zookeeper0
extends:
file: base.yaml
service: zookeeper
environment:
- ZOO_MY_ID=1
- ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
networks:
- tranargy
zookeeper1:
container_name: zookeeper1
extends:
file: base.yaml
service: zookeeper
environment:
- ZOO_MY_ID=2
- ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
networks:
- tranargy
zookeeper2:
container_name: zookeeper2
extends:
file: base.yaml
service: zookeeper
environment:
- ZOO_MY_ID=3
- ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
networks:
- tranargy
kafka0:
container_name: kafka0
extends:
file: base.yaml
service: kafka
environment:
- KAFKA_BROKER_ID=0
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
depends_on:
- zookeeper0
- zookeeper1
- zookeeper2
networks:
- tranargy
kafka1:
container_name: kafka1
extends:
file: base.yaml
service: kafka
environment:
- KAFKA_BROKER_ID=1
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
depends_on:
- zookeeper0
- zookeeper1
- zookeeper2
networks:
- tranargy
kafka2:
container_name: kafka2
extends:
file: base.yaml
service: kafka
environment:
- KAFKA_BROKER_ID=2
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
depends_on:
- zookeeper0
- zookeeper1
- zookeeper2
networks:
- tranargy
kafka3:
container_name: kafka3
extends:
file: base.yaml
service: kafka
environment:
- KAFKA_BROKER_ID=3
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
depends_on:
- zookeeper0
- zookeeper1
- zookeeper2
networks:
- tranargy
orderer.tranargy.com:
image: hyperledger/fabric-orderer:x86_64-1.0.0
container_name: orderer.tranargy.com
environment:
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
- ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
- ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
- ORDERER_KAFKA_VERBOSE=true
volumes:
- ./orderer/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ./crypto-config/ordererOrganizations/tranargy.com/orderers/orderer.tranargy.com/msp:/var/hyperledger/orderer/msp
- ./crypto-config/ordererOrganizations/tranargy.com/orderers/orderer.tranargy.com/tls:/var/hyperledger/orderer/tls
command: orderer
working_dir: /opt/gopath/src/github.com/hyperledger/fabric
ports:
- 7050:7050
depends_on:
- kafka0
- kafka1
- kafka2
- kafka3
networks:
- tranargy
peer0.org1.com:
container_name: peer0.org1.com
extends:
file: base.yaml
service: peer
environment:
- CORE_PEER_ID=peer0.org1.com
- CORE_PEER_ADDRESS=peer0.org1.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.com:7051
volumes:
- ./crypto-config/peerOrganizations/org1.com/peers/peer0.org1.com/msp:/etc/hyperledger/msp/peer
- ./crypto-config/peerOrganizations/org1.com/peers/peer0.org1.com/tls:/etc/hyperledger/tls
ports:
- 7051:7051
- 7053:7053
depends_on:
- orderer.tranargy.com
networks:
- tranargy
peer1.org1.com:
container_name: peer1.org1.com
extends:
file: base.yaml
service: peer
environment:
- CORE_PEER_ID=peer1.org1.com
- CORE_PEER_ADDRESS=peer1.org1.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.com:7051
volumes:
- ./crypto-config/peerOrganizations/org1.com/peers/peer1.org1.com/msp:/etc/hyperledger/msp/peer
- ./crypto-config/peerOrganizations/org1.com/peers/peer1.org1.com/tls:/etc/hyperledger/tls
ports:
- 8051:7051
- 8053:7053
depends_on:
- orderer.tranargy.com
networks:
- tranargy
peer0.org2.com:
container_name: peer0.org2.com
extends:
file: base.yaml
service: peer
environment:
- CORE_PEER_ID=peer0.org2.com
- CORE_PEER_ADDRESS=peer0.org2.com:7051
- CORE_PEER_LOCALMSPID=Org2MSP
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org2.com:7051
volumes:
- ./crypto-config/peerOrganizations/org2.com/peers/peer0.org2.com/msp:/etc/hyperledger/msp/peer
- ./crypto-config/peerOrganizations/org2.com/peers/peer0.org2.com/tls:/etc/hyperledger/tls
ports:
- 9051:7051
- 9053:7053
depends_on:
- orderer.tranargy.com
networks:
- tranargy
peer1.org2.com:
container_name: peer1.org2.com
extends:
file: base.yaml
service: peer
environment:
- CORE_PEER_ID=peer1.org2.com
- CORE_PEER_ADDRESS=peer1.org2.com:7051
- CORE_PEER_LOCALMSPID=Org2MSP
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org2.com:7051
volumes:
- ./crypto-config/peerOrganizations/org2.com/peers/peer1.org2.com/msp:/etc/hyperledger/msp/peer
- ./crypto-config/peerOrganizations/org2.com/peers/peer1.org2.com/tls:/etc/hyperledger/tls
ports:
- 10051:7051
- 10053:7053
depends_on:
- orderer.tranargy.com
networks:
- tranargy
cli.Org1:
extends:
file: base.yaml
service: cli
container_name: cli.Org1
environment:
- CORE_PEER_ID=cli.org1.com
- CORE_PEER_ADDRESS=peer0.org1.com:7051
- CORE_PEER_LOCALMSPID=Org1MSP
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org1.com/msp
volumes:
- ./crypto-config/peerOrganizations/org1.com:/etc/hyperledger/msp
- ./crypto-config/peerOrganizations/org2.com/peers/peer0.org2.com/tls:/etc/hyperledger/tls
depends_on:
- orderer.tranargy.com
- peer0.org1.com
networks:
- tranargy
cli.Org2:
extends:
file: base.yaml
service: cli
container_name: cli.Org2
environment:
- CORE_PEER_ID=cli.org2.com
- CORE_PEER_ADDRESS=peer0.org2.com:7051
- CORE_PEER_LOCALMSPID=Org2MSP
- CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org2.com/msp
volumes:
- ./crypto-config/peerOrganizations/org2.com:/etc/hyperledger/msp
- ./crypto-config/peerOrganizations/org2.com/peers/peer0.org2.com/tls:/etc/hyperledger/tls
depends_on:
- orderer.tranargy.com
- peer0.org2.com
networks:
- tranargy
couchdbOrg1:
container_name: couchdbOrg1
image: hyperledger/fabric-couchdb:x86_64-1.0.0
environment:
DB_URL: http://localhost:5984/
ports:
- "5984:5984"
networks:
- tranargy
And the base.yaml file is:
version: '2'
services:
zookeeper:
image: hyperledger/fabric-zookeeper
restart: always
ports:
- '2181'
- '2888'
- '3888'
kafka:
image: hyperledger/fabric-kafka
restart: always
environment:
- KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
ports:
- '9092'
peer:
image: hyperledger/fabric-peer:x86_64-1.0.0
environment:
- CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=tranargy_tranargy
- CORE_PEER_ADDRESSAUTODETECT=true
- CORE_LOGGING_LEVEL=DEBUG
- CORE_PEER_PROFILE_ENABLED=true
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer/
command: peer node start
volumes:
- /var/run/:/host/var/run/
cli:
tty: true
image: hyperledger/fabric-tools:x86_64-1.0.0
environment:
- GOPATH=/opt/gopath
working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
volumes:
- ./orderer/:/opt/gopath/src/github.com/hyperledger/fabric/peer/orderer
- ./chaincode:/opt/gopath/src/
- ./channels/:/opt/gopath/src/github.com/hyperledger/fabric/peer/channels
The setup runs fine when I test it on one system, but I get this error when I move it to another.
I am unable to find out why it is behaving like this. Can anyone please help?
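For completeness, the orderer and Kafka logs on the failing system can be inspected with the usual Docker commands, e.g.:
# orderer logs (ORDERER_KAFKA_VERBOSE=true is set above, so Kafka connection attempts appear here)
docker logs orderer.tranargy.com
# logs of one of the Kafka brokers
docker logs kafka0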
Since the Kafka orderer runs in a distributed environment, it takes a little time to create a channel. We can increase the creation timeout with the -t flag.
Example:
peer channel create -c 'channelsId' -f 'channelsGenesisBlock' -o 'orderersAddress' -t 60s
"s" has to be mentioned in while mentioning the seconds
We had to increase the timeout on the create channel command to 10 seconds for the Kafka configuration.
For example:
peer channel create -o orderer0:7050 -t 10s -c $CHANNEL_NAME ...
Kubernetes version --> 1.5.2
I am setting up DNS for Kubernetes services for the first time and I came across SkyDNS.
So, following the documentation, my skydns-svc.yaml file is:
apiVersion: v1
kind: Service
spec:
clusterIP: 10.100.0.100
ports:
- name: dns
port: 53
protocol: UDP
targetPort: 53
- name: dns-tcp
port: 53
protocol: TCP
targetPort: 53
selector:
k8s-app: kube-dns
sessionAffinity: None
type: ClusterIP
And my skydns-rc.yaml file is:
apiVersion: v1
kind: ReplicationController
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v18
template:
metadata:
creationTimestamp: null
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
version: v18
spec:
containers:
- args:
- --domain=kube.local
- --dns-port=10053
image: gcr.io/google_containers/kubedns-amd64:1.6
imagePullPolicy: IfNotPresent
name: kubedns
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
resources:
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
terminationMessagePath: /dev/termination-log
- args:
- --cache-size=1000
- --no-resolv
- --server=127.0.0.1#10053
image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3
imagePullPolicy: IfNotPresent
name: dnsmasq
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
- args:
- -cmd=nslookup kubernetes.default.svc.kube.local 127.0.0.1 >/dev/null &&
nslookup kubernetes.default.svc.kube.local 127.0.0.1:10053 >/dev/null
- -port=8080
- -quiet
image: gcr.io/google_containers/exechealthz-amd64:1.0
imagePullPolicy: IfNotPresent
name: healthz
ports:
- containerPort: 8080
protocol: TCP
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
Also, on my minions, I updated the /etc/systemd/system/multi-user.target.wants/kubelet.service file and added the following under the ExecStart section:
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_POD_INFRA_CONTAINER \
$KUBELET_ARGS \
--cluster-dns=10.100.0.100 \
--cluster-domain=kubernetes \
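Unit file changes only take effect after reloading systemd and restarting the kubelet, so after editing the file something like the following is needed:
systemctl daemon-reload
systemctl restart kubelet
# confirm the kubelet picked up the new flags
ps -ef | grep kubelet | grep cluster-dns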
Having done all of this and having successfully brought up the rc & svc:
[root@kubernetes-master DNS]# kubectl get po | grep dns
kube-dns-v18-hl8z6 3/3 Running 0 6s
[root@kubernetes-master DNS]# kubectl get svc | grep dns
kube-dns 10.100.0.100 <none> 53/UDP,53/TCP 20m
This is all that I have from a config standpoint. Now, in order to test my setup, I downloaded busybox and tested an nslookup:
[root@kubernetes-master DNS]# kubectl get svc | grep kubernetes
kubernetes 10.100.0.1 <none> 443/TCP
[root@kubernetes-master DNS]# kubectl exec busybox -- nslookup kubernetes
nslookup: can't resolve 'kubernetes'
Server: 10.100.0.100
Address 1: 10.100.0.100
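For reference, the same lookup can also be tried with the fully qualified name, since the kubedns container above is started with --domain=kube.local:
kubectl exec busybox -- nslookup kubernetes.default.svc.kube.local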
Is there something that I have missed ?
EDIT:
Going through the logs, I see something that might explain why this is not working:
kubectl logs $(kubectl get pods -l k8s-app=kube-dns -o name) -c kubedns
.
.
.
E1220 17:44:48.403976 1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: Get https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0: x509: failed to load system roots and no roots provided
E1220 17:44:48.487169 1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: Get https://10.100.0.1:443/api/v1/services?resourceVersion=0: x509: failed to load system roots and no roots provided
I1220 17:44:48.487716 1 dns.go:172] Ignoring error while waiting for service default/kubernetes: Get https://10.100.0.1:443/api/v1/namespaces/default/services/kubernetes: x509: failed to load system roots and no roots provided. Sleeping 1s before retrying.
E1220 17:44:49.410311 1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: Get https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0: x509: failed to load system roots and no roots provided
I1220 17:44:49.492338 1 dns.go:172] Ignoring error while waiting for service default/kubernetes: Get https://10.100.0.1:443/api/v1/namespaces/default/services/kubernetes: x509: failed to load system roots and no roots provided. Sleeping 1s before retrying.
E1220 17:44:49.493429 1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: Get https://10.100.0.1:443/api/v1/services?resourceVersion=0: x509: failed to load system roots and no roots provided
.
.
.
Looks like kubedns is unable to authenticate against the K8S master node. I even tried to make a manual call:
curl -k https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0
Unauthorized
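For reference, from inside a pod that has a service account mounted at the default path, the same call would normally be made with the pod's token and CA certificate, e.g.:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  https://10.100.0.1:443/api/v1/endpoints?resourceVersion=0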
Looks like the kube-dns pod is not able to authenticate with the Kubernetes API server. I don't see any secret or serviceaccount in the YAML file for the kube-dns pod.
I suggest doing the following:
Create a k8s secret using kubectl create secret for the kube-dns pod with the right certificate file ca.crt and token:
$ kubectl get secrets -n=kube-system | grep dns
kube-dns-token-66tfx kubernetes.io/service-account-token 3 1d
Create a k8s serviceaccount using kubectl create serviceaccount for the kube-dns pod:
$ kubectl get serviceaccounts -n=kube-system | grep dns
kube-dns 1 1d
Mount the secret at /var/run/secrets/kubernetes.io/serviceaccount inside the kube-dns container in the YAML file:
...
kind: Pod
...
spec:
...
containers:
...
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-dns-token-66tfx
readOnly: true
...
volumes:
- name: kube-dns-token-66tfx
secret:
defaultMode: 420
secretName: kube-dns-token-66tfx
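For reference, a minimal sketch of the first two steps (the secret name, namespace and file paths below are assumptions based on the listings above):
# service account for the kube-dns pod
kubectl create serviceaccount kube-dns -n kube-system
# secret holding the CA certificate and token the pod should present to the API server
# (a real service-account-token secret also carries a namespace key)
kubectl create secret generic kube-dns-token-66tfx \
  --from-file=ca.crt=/path/to/ca.crt \
  --from-file=token=/path/to/token \
  -n kube-system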
Here are the links about creating serviceaccounts for pods:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
https://kubernetes.io/docs/admin/service-accounts-admin/
I'm attempting to set up DNS support in Kubernetes 1.2 on CentOS 7. According to the documentation, there are two ways to do this. The first applies to a "supported kubernetes cluster setup" and involves setting environment variables:
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="10.0.0.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
I added these settings to /etc/kubernetes/config and rebooted, with no effect, so either I don't have a supported kubernetes cluster setup (what's that?), or there's something else required to set its environment.
The second approach requires more manual setup. It adds two flags to kubelets, which I set by updating /etc/kubernetes/kubelet to include:
KUBELET_ARGS="--cluster-dns=10.0.0.10 --cluster-domain=cluster.local"
and restarting the kubelet with systemctl restart kubelet. Then it's necessary to start a replication controller and a service. The doc page cited above provides a couple of template files for this that require some editing, both for local changes (my Kubernetes API server listens to the actual IP address of the hostname rather than 127.0.0.1, making it necessary to add a --kube-master-url setting) and to remove some Salt dependencies. When I do this, the replication controller starts four containers successfully, but the kube2sky container gets terminated about a minute after completing initialization:
[david@centos dns]$ kubectl --server="http://centos:8080" --namespace="kube-system" logs -f kube-dns-v11-t7nlb -c kube2sky
I0325 20:58:18.516905 1 kube2sky.go:462] Etcd server found: http://127.0.0.1:4001
I0325 20:58:19.518337 1 kube2sky.go:529] Using http://192.168.87.159:8080 for kubernetes master
I0325 20:58:19.518364 1 kube2sky.go:530] Using kubernetes API v1
I0325 20:58:19.518468 1 kube2sky.go:598] Waiting for service: default/kubernetes
I0325 20:58:19.533597 1 kube2sky.go:660] Successfully added DNS record for Kubernetes service.
F0325 20:59:25.698507 1 kube2sky.go:625] Received signal terminated
I've determined that the termination is done by the healthz container after reporting:
2016/03/25 21:00:35 Client ip 172.17.42.1:58939 requesting /healthz probe servicing cmd nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
2016/03/25 21:00:35 Healthz probe error: Result of last exec: nslookup: can't resolve 'kubernetes.default.svc.cluster.local', at 2016-03-25 21:00:35.608106622 +0000 UTC, error exit status 1
Aside from this, all other logs look normal. However, there is one anomaly: it was necessary to specify --validate=false when creating the replication controller, as the command otherwise gets the message:
error validating "skydns-rc.yaml": error validating data: [found invalid field successThreshold for v1.Probe, found invalid field failureThreshold for v1.Probe]; if you choose to ignore these errors, turn validation off with --validate=false
Could this be related? These arguments come directly from the Kubernetes documentation. If not, what's needed to get this running?
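For reference, the create commands were run with validation disabled, along the lines of:
kubectl --server="http://centos:8080" --namespace="kube-system" create -f skydns-rc.yaml --validate=false
kubectl --server="http://centos:8080" --namespace="kube-system" create -f skydns-svc.yaml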
Here is the skydns-rc.yaml I used:
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v11
namespace: kube-system
labels:
k8s-app: kube-dns
version: v11
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v11
template:
metadata:
labels:
k8s-app: kube-dns
version: v11
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: etcd
image: gcr.io/google_containers/etcd-amd64:2.2.1
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
memory: 500Mi
requests:
cpu: 100m
memory: 50Mi
command:
- /usr/local/bin/etcd
- -data-dir
- /var/etcd/data
- -listen-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -advertise-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -initial-cluster-token
- skydns-etcd
volumeMounts:
- name: etcd-storage
mountPath: /var/etcd/data
- name: kube2sky
image: gcr.io/google_containers/kube2sky:1.14
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
# Kube2sky watches all pods.
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only setup the /readiness HTTP server once that's available.
initialDelaySeconds: 30
timeoutSeconds: 5
args:
# command = "/kube2sky"
- --domain="cluster.local"
- --kube-master-url=http://192.168.87.159:8080
- name: skydns
image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/skydns"
- -machines=http://127.0.0.1:4001
- -addr=0.0.0.0:53
- -ns-rotate=false
- -domain="cluster.local"
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- name: healthz
image: gcr.io/google_containers/exechealthz:1.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
args:
- -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
- -port=8080
ports:
- containerPort: 8080
protocol: TCP
volumes:
- name: etcd-storage
emptyDir: {}
dnsPolicy: Default # Don't use cluster DNS.
and skydns-svc.yaml:
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: "10.0.0.10"
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
I just commented out the lines that contain the successThreshold and failureThreshold values in skydns-rc.yaml, then re-ran the kubectl commands.
kubectl create -f skydns-rc.yaml
kubectl create -f skydns-svc.yaml
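After commenting those two lines out, the kube2sky livenessProbe section in skydns-rc.yaml looks like this:
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 60
  timeoutSeconds: 5
  # successThreshold: 1
  # failureThreshold: 5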