I am in the process of deploying my .BNA file to Fabric. I have been testing and prototyping it on the Bluemix playground successfully; however, when I try to install the network application to Fabric I get the following error:
> Error: Error trying install business network.
> Error: No valid responses from any peers.
> Response from attempted peer comms was an error:
> Error: 14 UNAVAILABLE: Connect Failed
Command failed
**These are the steps I took**
1. Launch your Fabric network
> ./startFabric.sh
2. Create the peer admin card
> ./createPeerAdminCard.sh
3. Install the network application to Fabric
> composer network install -a dist/bna.bna -c PeerAdmin@hlfv1
**This step is where I get the error**
✖ Installing business network. This may take a minute...
Error: Error trying install business network. Error: No valid responses from any peers.
Response from attempted peer comms was an error: Error: 14 UNAVAILABLE: Connect Failed
Command failed
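For reference, a minimal way to check from the host that the peer's gRPC endpoint is actually reachable (a hedged sketch; the container name and ports are the ones from the connection profile shown further down in the update, adjust if yours differ):

# confirm the peer container is running and which host ports it publishes
docker ps --filter name=peer0.org1.example.com
docker port peer0.org1.example.com
# confirm something is listening on the endorser and event hub ports
nc -vz localhost 7051
nc -vz localhost 7053
# look for connection errors in the peer log
docker logs --tail 50 peer0.org1.example.com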
**Details of my env**
Node Version: v8.11.3
Docker version: 18.03
Composer version: v0.19.12
Docker PS:
[Docker PS Screen shot][1]
[1]: https://i.stack.imgur.com/HQGBf.png
Any help is really appreciated.
UPDATE
Connection.json for hlfv1
{
"name": "hlfv1",
"x-type": "hlfv1",
"x-commitTimeout": 300,
"version": "1.0.0",
"client": {
"organization": "Org1",
"connection": {
"timeout": {
"peer": {
"endorser": "300",
"eventHub": "300",
"eventReg": "300"
},
"orderer": "300"
}
}
},
"channels": {
"composerchannel": {
"orderers": [
"orderer.example.com"
],
"peers": {
"peer0.org1.example.com": {}
}
}
},
"organizations": {
"Org1": {
"mspid": "Org1MSP",
"peers": [
"peer0.org1.example.com"
],
"certificateAuthorities": [
"ca.org1.example.com"
]
}
},
"orderers": {
"orderer.example.com": {
"url": "grpc://localhost:7050"
}
},
"peers": {
"peer0.org1.example.com": {
"url": "grpc://localhost:7051",
"eventUrl": "grpc://localhost:7053"
}
},
"certificateAuthorities": {
"ca.org1.example.com": {
"url": "http://localhost:7054",
"caName": "ca.org1.example.com"
}
}
}
hlfv11 vs hlfv1
I noticed that when I look in the fabric-scripts directory there are two sets of scripts: hlfv11 and hlfv1.
(screenshot of the fabric tools directory)
When I run startFabric.sh, the output says that Fabric assumes it is "hlfv11" instead of "hlfv1".
(screenshot of the startFabric.sh output)
Any help would be appreciated.
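As far as I recall, the fabric-dev-servers scripts pick the script set from the FABRIC_VERSION environment variable, with hlfv11 targeting Fabric 1.1 and hlfv1 targeting Fabric 1.0; this is an assumption, so please verify against your copy. A quick check might look like this:

# list the available script sets
ls ~/fabric-dev-servers/fabric-scripts
# see which one is currently selected (when unset, the scripts usually default to hlfv11)
echo ${FABRIC_VERSION:-"(not set)"}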
Output of docker inspect peer0.org1.example.com:
[
{
"Id": "6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac",
"Created": "2018-07-20T22:49:51.238208735Z",
"Path": "peer",
"Args": [
"node",
"start"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 7506,
"ExitCode": 0,
"Error": "",
"StartedAt": "2018-07-20T22:49:51.543106588Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:b023f9be07714e495e6d41849d7e916434e85580754423ece145866468ad29a9",
"ResolvConfPath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/resolv.conf",
"HostnamePath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/hostname",
"HostsPath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/hosts",
"LogPath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac-json.log",
"Name": "/peer0.org1.example.com",
"RestartCount": 0,
"Driver": "aufs",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/var/run:/host/var/run:rw",
"/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/users:/etc/hyperledger/msp/users:rw",
"/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer:/etc/hyperledger/configtx:rw",
"/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/peer/msp:rw"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "composer_default",
"PortBindings": {
"7051/tcp": [
{
"HostIp": "",
"HostPort": "7051"
}
],
"7053/tcp": [
{
"HostIp": "",
"HostPort": "7053"
}
]
},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
},
"GraphDriver": {
"Data": null,
"Name": "aufs"
},
"Mounts": [
{
"Type": "bind",
"Source": "/var/run",
"Destination": "/host/var/run",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/users",
"Destination": "/etc/hyperledger/msp/users",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer",
"Destination": "/etc/hyperledger/configtx",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp",
"Destination": "/etc/hyperledger/peer/msp",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "6caa83b2a8a5",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"7051/tcp": {},
"7053/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"CORE_LOGGING_LEVEL=debug",
"CORE_CHAINCODE_LOGGING_LEVEL=DEBUG",
"CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock",
"CORE_PEER_ID=peer0.org1.example.com",
"CORE_PEER_ADDRESS=peer0.org1.example.com:7051",
"CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=composer_default",
"CORE_PEER_LOCALMSPID=Org1MSP",
"CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/msp",
"CORE_LEDGER_STATE_STATEDATABASE=CouchDB",
"CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"FABRIC_CFG_PATH=/etc/hyperledger/fabric"
],
"Cmd": [
"peer",
"node",
"start"
],
"Image": "hyperledger/fabric-peer:x86_64-1.1.0",
"Volumes": {
"/etc/hyperledger/configtx": {},
"/etc/hyperledger/msp/users": {},
"/etc/hyperledger/peer/msp": {},
"/host/var/run": {}
},
"WorkingDir": "/opt/gopath/src/github.com/hyperledger/fabric",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"com.docker.compose.config-hash": "d44983248579bb25822020f82382fba01b891c3338b2fe91bb17ac3936126c69",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "composer",
"com.docker.compose.service": "peer0.org1.example.com",
"com.docker.compose.version": "1.21.1",
"org.hyperledger.fabric.base.version": "0.4.6",
"org.hyperledger.fabric.version": "1.1.0"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "5645c1988100b53fa9a8c2d13adc40c43f3995cb808b3eda28771176033b26b4",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"7051/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "7051"
}
],
"7053/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "7053"
}
]
},
"SandboxKey": "/var/run/docker/netns/5645c1988100",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"composer_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"peer0.org1.example.com",
"6caa83b2a8a5"
],
"NetworkID": "d4f496b7b3aeae87d1b1461523bc8620ac34b54d9b3b9f8d31c6cfa7be4da024",
"EndpointID": "a19687702d04e166dc0291dc9ce1130caf5eccf484ece4fd988c13cc2660c8fb",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.5",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:13:00:05",
"DriverOpts": null
}
}
}
}
]
Fixed: I needed to reinstall Hyperledger Fabric, Composer, Node, npm, and Docker, and to run "unset ${!DOCKER*}"; there seemed to be a Docker environment issue.
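For what it's worth, unset ${!DOCKER*} clears every shell variable whose name starts with DOCKER (for example a stale DOCKER_HOST left over from docker-machine), which can otherwise point the CLI at the wrong Docker daemon. A quick sketch:

# show any DOCKER_* variables currently exported in the shell
env | grep '^DOCKER'
# ${!DOCKER*} expands to the names of all variables starting with DOCKER
unset ${!DOCKER*}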
This error is usually seen when the CLI cannot connect to the Fabric using the addresses specified in the PeerAdmin's connection.json file. Did you download the latest fabric-tools as shown here prior to this?
Sometimes if there is a proxy involved (on a corporate network), there can be some routing failures.
See the answer here, which may help you: Hyperledger composer network install
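If a corporate proxy is suspected, a quick way to see whether the shell (and therefore the Composer CLI) is picking one up, and to keep localhost traffic away from it, might look like the sketch below; whether the Node.js gRPC client honours these variables depends on your environment, so treat this as an assumption:

# show any proxy variables the CLI would inherit
env | grep -i proxy
# make sure traffic to the local Fabric endpoints is not routed through the proxy
export NO_PROXY=localhost,127.0.0.1
export no_proxy=localhost,127.0.0.1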
ERROR 14 means that Composer can't locate the peers. Your issue is here:
"peers": {
"peer0.org1.example.com": {}
}
You need to write something like:
"peers": {
"peer0.org1.example.com": {
"url": "grpc://localhost:7051",
"eventUrl": "grpc://localhost:7053"
}
}
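After editing the connection profile, the PeerAdmin card should be recreated so that it picks up the new profile (the card embeds a copy of the connection.json). A sketch of the retry, reusing the commands from the question (the card name PeerAdmin@hlfv1 is the one the question uses):

./createPeerAdminCard.sh
composer card list
composer network install -a dist/bna.bna -c PeerAdmin@hlfv1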
FIXED:
I uninstalled Docker, Node, and npm, reinstalled everything, and made sure to run unset ${!DOCKER*} when first installing Docker for macOS.
How can I host an application in ACS (DC/OS) on a port other than 80? Can I use a URL instead of a port number to access it?
{
"id": "/dockercloud-hello-world",
"cmd": null,
"cpus": 0.1,
"mem": 128,
"disk": 0,
"instances": 2,
"acceptedResourceRoles": [
"*"
],
"container": {
"type": "DOCKER",
"volumes": [],
"docker": {
"image": "dockercloud/hello-world",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 80,
"hostPort": 0,
"servicePort": 10000,
"protocol": "tcp",
"labels": {}
}
],
"privileged": false,
"parameters": [],
"forcePullImage": true
}
},
"portDefinitions": [
{
"port": 10000,
"protocol": "tcp",
"name": "default",
"labels": {}
}
]
}
The application is available on port 4170 according to Marathon, but I am unable to access it at the agent's FQDN:port.
Yes, it is possible.
First, you need to change the hostPort value to 4170 and acceptedResourceRoles to slave_public (a sketch of the resulting app definition follows below).
Then you need to open port 4170 on the agent node's NSG.
You also need to open the port on the agent node's load balancer:
1. Add a health probe.
2. Add a load-balancing rule.
For more information about this, please check this link.
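A minimal sketch of the updated app definition with those two changes applied; the port 4170 and the slave_public role come from this answer, while the DC/OS CLI invocation is an assumption (you can equally paste the JSON into the Marathon UI):

cat > hello-world-4170.json <<'EOF'
{
  "id": "/dockercloud-hello-world",
  "instances": 2,
  "cpus": 0.1,
  "mem": 128,
  "acceptedResourceRoles": ["slave_public"],
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "dockercloud/hello-world",
      "network": "BRIDGE",
      "forcePullImage": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 4170, "protocol": "tcp" }
      ]
    }
  }
}
EOF
# push the updated definition to Marathon
dcos marathon app update /dockercloud-hello-world < hello-world-4170.json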
I have deployed a hello-world application in Azure using DC/OS and the Marathon framework. I am trying to access it using fqdn:portnumber, where the port is the one at which the application is hosted, but I am unable to open the application.
Following is the JSON I have used:
{
"id": "/dockercloud-hello-world",
"cmd": null,
"cpus": 0.1,
"mem": 128,
"disk": 0,
"instances": 2,
"acceptedResourceRoles": [
"*"
],
"container": {
"type": "DOCKER",
"volumes": [],
"docker": {
"image": "dockercloud/hello-world",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 80,
"hostPort": 0,
"servicePort": 10000,
"protocol": "tcp",
"labels": {}
}
],
"privileged": false,
"parameters": [],
"forcePullImage": true
}
},
"healthChecks": [
{
"gracePeriodSeconds": 10,
"intervalSeconds": 2,
"timeoutSeconds": 10,
"maxConsecutiveFailures": 10,
"portIndex": 0,
"path": "/",
"protocol": "HTTP",
"ignoreHttp1xx": false
}
],
"portDefinitions": [
{
"port": 10000,
"protocol": "tcp",
"name": "default",
"labels": {}
}
]
}
I have added an NSG inbound rule for the master NSG resource.
I have added a NAT rule for the master LB resource, allowing the port as custom.
In your example, the host port is 0, so Azure will listen for your service on a random port. You need to open that port on the NSG and the load balancer.
I suggest you specify the port instead; you could check the following example:
{
"id": "/dockercloud-hello-world",
"cmd": null,
"cpus": 0.1,
"mem": 32,
"disk": 0,
"instances": 1,
"acceptedResourceRoles": [
"slave_public"
],
"container": {
"type": "DOCKER",
"volumes": [],
"docker": {
"image": "dockercloud/hello-world",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 80,
"hostPort": 80,
"protocol": "tcp",
"labels": {},
"name": "test80"
}
],
"privileged": false,
"parameters": [],
"forcePullImage": true
}
},
"healthChecks": [
{
"gracePeriodSeconds": 10,
"intervalSeconds": 2,
"timeoutSeconds": 10,
"maxConsecutiveFailures": 10,
"portIndex": 0,
"path": "/",
"protocol": "MESOS_HTTP",
"ignoreHttp1xx": false
}
],
"requirePorts": true
}
Note: You should set acceptedResourceRoles to slave_public. For more information about this, please check this link.
Along with the above-mentioned JSON, I needed to use the agent URL to access the application; that was what I was missing.
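Once the app is running with a fixed host port on a public agent and the NSG/LB rules are in place, a quick test against the agents' load balancer FQDN might look like this (the host below is a placeholder for your own agent LB address from the Azure portal):

# with hostPort 80 the default port is enough; for another hostPort append :PORT
curl -I http://<agent-lb-fqdn>/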
I've created a cluster with a master and 5 nodes, with flannel for the pod network, and that is working fine.
What is not working: after installing kube-dns (kubedns, dnsmasq, and sidecar), I can't get the new nameserver to be injected into the host's /etc/resolv.conf, and because of that I can't resolve any hostnames.
Everything else works fine; all kube-dns containers are running with no errors.
My kube-proxy ARGS
KUBE_PROXY_ARGS="--cluster-cidr=10.254.0.0/16"
My Kubelet configs
KUBELET_DNS="--cluster-dns=10.254.0.253"
KUBELET_DOMAIN="--cluster-domain=cluster.local"
Here are my configs for the DNS POD:
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-dns-4073989832-f7g5g",
"generateName": "kube-dns-4073989832-",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/pods/kube-dns-4073989832-f7g5g",
"uid": "6f76055c-5b1e-11e7-b0c5-0050568fc023",
"resourceVersion": "3974782",
"creationTimestamp": "2017-06-27T09:53:13Z",
"labels": {
"k8s-app": "kube-dns",
"pod-template-hash": "4073989832"
},
"annotations": {
"kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"kube-system\",\"name\":\"kube-dns-4073989832\",\"uid\":\"8afa7fce-5a9e-11e7-b714-0050568fc023\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"3974404\"}}\n",
"scheduler.alpha.kubernetes.io/critical-pod": ""
},
"ownerReferences": [
{
"apiVersion": "extensions/v1beta1",
"kind": "ReplicaSet",
"name": "kube-dns-4073989832",
"uid": "8afa7fce-5a9e-11e7-b714-0050568fc023",
"controller": true
}
]
},
"spec": {
"volumes": [
{
"name": "kube-dns-config",
"configMap": {
"name": "kube-dns",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "kubedns",
"image": "vvcelparti01:443/k8s-dns-kube-dns-amd64:1.14.2",
"args": [
"--domain=cluster.local",
"--dns-port=10053",
"--config-dir=/kube-dns-config",
"--kube-master-url=http://10.64.146.26:8080",
"--v=2"
],
"ports": [
{
"name": "dns-local",
"containerPort": 10053,
"protocol": "UDP"
},
{
"name": "dns-tcp-local",
"containerPort": 10053,
"protocol": "TCP"
},
{
"name": "metrics",
"containerPort": 10055,
"protocol": "TCP"
}
],
"env": [
{
"name": "PROMETHEUS_PORT",
"value": "10055"
}
],
"resources": {
"limits": {
"memory": "170Mi"
},
"requests": {
"cpu": "100m",
"memory": "70Mi"
}
},
"volumeMounts": [
{
"name": "kube-dns-config",
"mountPath": "/kube-dns-config"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthcheck/kubedns",
"port": 10054,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"timeoutSeconds": 5,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 5
},
"readinessProbe": {
"httpGet": {
"path": "/readiness",
"port": 8081,
"scheme": "HTTP"
},
"initialDelaySeconds": 3,
"timeoutSeconds": 5,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 3
},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "IfNotPresent"
},
{
"name": "dnsmasq",
"image": "vvcelparti01:443/k8s-dns-dnsmasq-nanny-amd64:1.14.2",
"args": [
"-v=2",
"-logtostderr",
"-configDir=/etc/k8s/dns/dnsmasq-nanny",
"-restartDnsmasq=true",
"--",
"-k",
"--cache-size=1000",
"--log-facility=-",
"--server=/cluster.local/127.0.0.1#10053",
"--server=/in-addr.arpa/127.0.0.1#10053",
"--server=/ip6.arpa/127.0.0.1#10053"
],
"ports": [
{
"name": "dns",
"containerPort": 53,
"protocol": "UDP"
},
{
"name": "dns-tcp",
"containerPort": 53,
"protocol": "TCP"
}
],
"resources": {
"requests": {
"cpu": "150m",
"memory": "20Mi"
}
},
"volumeMounts": [
{
"name": "kube-dns-config",
"mountPath": "/etc/k8s/dns/dnsmasq-nanny"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthcheck/dnsmasq",
"port": 10054,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"timeoutSeconds": 5,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 5
},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "IfNotPresent"
},
{
"name": "sidecar",
"image": "vvcelparti01:443/k8s-dns-sidecar-amd64:1.14.2",
"args": [
"--v=2",
"--logtostderr",
"--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A",
"--probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A"
],
"ports": [
{
"name": "metrics",
"containerPort": 10054,
"protocol": "TCP"
}
],
"resources": {
"requests": {
"cpu": "10m",
"memory": "20Mi"
}
},
"livenessProbe": {
"httpGet": {
"path": "/metrics",
"port": 10054,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"timeoutSeconds": 5,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 5
},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "Default",
"serviceAccountName": "kube-dns",
"serviceAccount": "kube-dns",
"nodeName": "gopher01",
"securityContext": {}
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2017-06-27T09:52:45Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2017-06-27T09:52:55Z"
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2017-06-27T09:53:13Z"
}
],
"hostIP": "10.64.146.24",
"podIP": "172.30.18.4",
"startTime": "2017-06-27T09:52:45Z",
"containerStatuses": [
{
"name": "dnsmasq",
"state": {
"running": {
"startedAt": "2017-06-27T09:52:47Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "vvcelparti01:443/k8s-dns-dnsmasq-nanny-amd64:1.14.2",
"imageID": "docker-pullable://vvcelparti01:443/k8s-dns-dnsmasq-nanny-amd64#sha256:5a9dda0fdf5bf548eb6a63260c3f5e6f5cdc3d0917279e38a435c00967c6c57c",
"containerID": "docker://682fa7e0ffb28f26aee97a8ac7fe564096ece3ef3d7fe14fd9ed6857526d2d2f"
},
{
"name": "kubedns",
"state": {
"running": {
"startedAt": "2017-06-27T09:52:47Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "vvcelparti01:443/k8s-dns-kube-dns-amd64:1.14.2",
"imageID": "docker-pullable://vvcelparti01:443/k8s-dns-kube-dns-amd64#sha256:c78ed83587e42e7fc21f07756364c568c5c0fe10289f4f7f19d03a97f15b7a60",
"containerID": "docker://20b729004655a43efd384f8dded1f97d898a3b54092e190aba3d2031e72da056"
},
{
"name": "sidecar",
"state": {
"running": {
"startedAt": "2017-06-27T09:52:47Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "vvcelparti01:443/k8s-dns-sidecar-amd64:1.14.2",
"imageID": "docker-pullable://vvcelparti01:443/k8s-dns-sidecar-amd64#sha256:8d8c0e03e5f91ae85be7402ac88f804c52431dac32491c7a2557fd462fd2695b",
"containerID": "docker://bbaec6e9d0aa933daaee7c33b6d64d0f37f1a57213fabd2aa1c686c61a356f7f"
}
]
}
}
Here is my troubleshooting session:
$ kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.254.0.253 <none> 53/UDP,53/TCP 24d
kubernetes-dashboard 10.254.170.86 <none> 80/TCP 29d
$ kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 172.30.18.4:53,172.30.18.4:53 24d
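As far as I know, the --cluster-dns value is written by kubelet into each pod's /etc/resolv.conf (for pods with dnsPolicy: ClusterFirst) rather than into the host's /etc/resolv.conf, so a quick way to verify resolution is from inside a throwaway pod; a sketch, assuming the busybox image is reachable from your nodes:

# the nameserver reported inside the pod should be 10.254.0.253 (the kube-dns ClusterIP above)
kubectl run dns-test -it --rm --restart=Never --image=busybox -- nslookup kubernetes.default
# inspect the resolv.conf kubelet wrote into an existing pod (replace <pod-name>)
kubectl exec <pod-name> -- cat /etc/resolv.conf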
I am trying to bring up an Ubuntu container in a pod in OpenShift. I have set up my local Docker registry and configured DNS accordingly. Starting the Ubuntu container with plain Docker works fine without any issues. When I deploy the pod, I can see that my Ubuntu image is pulled successfully, but it does not succeed in starting; it fails with a back-off pulling image error. Is this because my entrypoint does not have any background process running inside the container?
"openshift.io/container.ubuntu.image.entrypoint": "[\"top\"]",
Snapshot of the events
Deployment config:
{
"kind": "DeploymentConfig",
"apiVersion": "v1",
"metadata": {
"name": "ubuntu",
"namespace": "testproject",
"selfLink": "/oapi/v1/namespaces/testproject/deploymentconfigs/ubuntu",
"uid": "e7c7b9c6-4dbd-11e6-bd2b-0800277bbed5",
"resourceVersion": "4340",
"generation": 6,
"creationTimestamp": "2016-07-19T14:34:31Z",
"labels": {
"app": "ubuntu"
},
"annotations": {
"openshift.io/deployment.cancelled": "4",
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"strategy": {
"type": "Rolling",
"rollingParams": {
"updatePeriodSeconds": 1,
"intervalSeconds": 1,
"timeoutSeconds": 600,
"maxUnavailable": "25%",
"maxSurge": "25%"
},
"resources": {}
},
"triggers": [
{
"type": "ConfigChange"
},
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": true,
"containerNames": [
"ubuntu"
],
"from": {
"kind": "ImageStreamTag",
"namespace": "testproject",
"name": "ubuntu:latest"
},
"lastTriggeredImage": "ns1.myregistry.com:5000/ubuntu#sha256:6d9a2a1bacdcb2bd65e36b8f1f557e89abf0f5f987ba68104bcfc76103a08b86"
}
}
],
"replicas": 1,
"test": false,
"selector": {
"app": "ubuntu",
"deploymentconfig": "ubuntu"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "ubuntu",
"deploymentconfig": "ubuntu"
},
"annotations": {
"openshift.io/container.ubuntu.image.entrypoint": "[\"top\"]",
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"containers": [
{
"name": "ubuntu",
"image": "ns1.myregistry.com:5000/ubuntu#sha256:6d9a2a1bacdcb2bd65e36b8f1f557e89abf0f5f987ba68104bcfc76103a08b86",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {}
}
}
},
"status": {
"latestVersion": 5,
"details": {
"causes": [
{
"type": "ConfigChange"
}
]
},
"observedGeneration": 5
}
The problem was with the HTTP proxy. After solving that, the image pull was successful.
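For anyone hitting the same thing: if the nodes sit behind a corporate proxy, the Docker daemon on each node needs the proxy configured (with the internal registry excluded), or image pulls can fail and back off like this. A hedged sketch for a systemd-based node, with a placeholder proxy address and the registry host from the question:

# create a systemd drop-in for the Docker daemon on each node
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,ns1.myregistry.com"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker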