Unable to access helloworld app deployed using DC/OS Marathon in Azure

I have deployed a hello world application in Azure using DC/OS and the Marathon framework. I am trying to access it at fqdn:portnumber, where portnumber is the port at which the application is hosted, but I am unable to open the application.
The following is the JSON I have used:
{
"id": "/dockercloud-hello-world",
"cmd": null,
"cpus": 0.1,
"mem": 128,
"disk": 0,
"instances": 2,
"acceptedResourceRoles": [
"*"
],
"container": {
"type": "DOCKER",
"volumes": [],
"docker": {
"image": "dockercloud/hello-world",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 80,
"hostPort": 0,
"servicePort": 10000,
"protocol": "tcp",
"labels": {}
}
],
"privileged": false,
"parameters": [],
"forcePullImage": true
}
},
"healthChecks": [
{
"gracePeriodSeconds": 10,
"intervalSeconds": 2,
"timeoutSeconds": 10,
"maxConsecutiveFailures": 10,
"portIndex": 0,
"path": "/",
"protocol": "HTTP",
"ignoreHttp1xx": false
}
],
"portDefinitions": [
{
"port": 10000,
"protocol": "tcp",
"name": "default",
"labels": {}
}
]
}
I have added an NSG inbound rule to the master NSG resource.
I have added a NAT rule to the master load balancer resource, allowing the port as a custom port.

In your example, the host port is 0, so Marathon will assign your service a random host port. You need to open that port on the NSG and the load balancer.
I suggest you specify the port explicitly; you could check the following example:
{
"id": "/dockercloud-hello-world",
"cmd": null,
"cpus": 0.1,
"mem": 32,
"disk": 0,
"instances": 1,
"acceptedResourceRoles": [
"slave_public"
],
"container": {
"type": "DOCKER",
"volumes": [],
"docker": {
"image": "dockercloud/hello-world",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 80,
"hostPort": 80,
"protocol": "tcp",
"labels": {},
"name": "test80"
}
],
"privileged": false,
"parameters": [],
"forcePullImage": true
}
},
"healthChecks": [
{
"gracePeriodSeconds": 10,
"intervalSeconds": 2,
"timeoutSeconds": 10,
"maxConsecutiveFailures": 10,
"portIndex": 0,
"path": "/",
"protocol": "MESOS_HTTP",
"ignoreHttp1xx": false
}
],
"requirePorts": true
}
Note: you should set acceptedResourceRoles to slave_public. For more information about this, please check this link.
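Opening the port on the agent NSG can also be scripted. A minimal Azure CLI sketch, assuming hypothetical resource group and NSG names (use the names ACS generated for your cluster):
# All resource names below are placeholders
az network nsg rule create --resource-group myacs-rg --nsg-name myacs-agent-nsg \
  --name allow-http --priority 200 --access Allow --protocol Tcp \
  --destination-port-range 80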

Along with the above-mentioned JSON, I needed to use the agent URL to access the application; that was the part I was missing.
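For anyone else checking which host and port Marathon actually assigned, a hedged sketch using the DC/OS CLI (the app id comes from the JSON above; the task id is a placeholder):
# List running tasks for the app, then dump one task's JSON (includes the assigned host ports)
dcos marathon task list /dockercloud-hello-world
dcos marathon task show <task-id>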

Related

Application in ACS (DCOS) on any other port except 80

How can I host an application in ACS (DC/OS) on a port other than 80? And can I use some other URL instead of the port number to access it?
{
"id": "/dockercloud-hello-world",
"cmd": null,
"cpus": 0.1,
"mem": 128,
"disk": 0,
"instances": 2,
"acceptedResourceRoles": [
"*"
],
"container": {
"type": "DOCKER",
"volumes": [],
"docker": {
"image": "dockercloud/hello-world",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 80,
"hostPort": 0,
"servicePort": 10000,
"protocol": "tcp",
"labels": {}
}
],
"privileged": false,
"parameters": [],
"forcePullImage": true
}
},
"portDefinitions": [
{
"port": 10000,
"protocol": "tcp",
"name": "default",
"labels": {}
}
]
}
The application is available on port 4170 according to Marathon.
I am unable to access it at the agents' fqdn:portnumber.
Yes, it is possible.
First, you need to modify the hostPort value to 4170 and acceptedResourceRoles to slave_public.
Then you need to open port 4170 on the agent node NSG.
Then you also need to open the port on the agent node load balancer:
1. Add a health probe.
2. Add a load balancing rule.
For more information about this, please check this link. A CLI sketch of these two steps follows.
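A hedged Azure CLI sketch of the two load balancer steps, with placeholder resource group and load balancer names (use the ones ACS created for your agents):
# All resource names below are placeholders
az network lb probe create --resource-group myacs-rg --lb-name myacs-agent-lb \
  --name probe4170 --protocol tcp --port 4170
az network lb rule create --resource-group myacs-rg --lb-name myacs-agent-lb \
  --name rule4170 --protocol tcp --frontend-port 4170 --backend-port 4170 \
  --probe-name probe4170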

KubeDNS not injecting nameservers, Kubernetes 1.5.2 on RHEL 7

I've created a cluster with a master and 5 nodes, with flannel for the pod network, and that is working fine.
What is not working: after installing KubeDNS (kubedns, dnsmasq and sidecar), I can't get the new nameserver to be injected into the host's /etc/resolv.conf, and because of that I can't resolve any hostnames.
Everything else works fine; all KubeDNS containers are running with no errors.
My kube-proxy args:
KUBE_PROXY_ARGS="--cluster-cidr=10.254.0.0/16"
My kubelet configs:
KUBELET_DNS="--cluster-dns=10.254.0.253"
KUBELET_DOMAIN="--cluster-domain=cluster.local"
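One hedged note: kubelet injects the --cluster-dns address into each pod's /etc/resolv.conf; Kubernetes does not modify the host's /etc/resolv.conf. A quick check from a throwaway pod (its name here is a placeholder):
# Start a throwaway pod, inspect its resolver config, and try a lookup
kubectl run dns-test --image=busybox --restart=Never -- sleep 3600
kubectl exec dns-test -- cat /etc/resolv.conf    # expect: nameserver 10.254.0.253
kubectl exec dns-test -- nslookup kubernetes.default
kubectl delete pod dns-test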
Here is my config for the DNS pod:
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-dns-4073989832-f7g5g",
"generateName": "kube-dns-4073989832-",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/pods/kube-dns-4073989832-f7g5g",
"uid": "6f76055c-5b1e-11e7-b0c5-0050568fc023",
"resourceVersion": "3974782",
"creationTimestamp": "2017-06-27T09:53:13Z",
"labels": {
"k8s-app": "kube-dns",
"pod-template-hash": "4073989832"
},
"annotations": {
"kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"kube-system\",\"name\":\"kube-dns-4073989832\",\"uid\":\"8afa7fce-5a9e-11e7-b714-0050568fc023\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"3974404\"}}\n",
"scheduler.alpha.kubernetes.io/critical-pod": ""
},
"ownerReferences": [
{
"apiVersion": "extensions/v1beta1",
"kind": "ReplicaSet",
"name": "kube-dns-4073989832",
"uid": "8afa7fce-5a9e-11e7-b714-0050568fc023",
"controller": true
}
]
},
"spec": {
"volumes": [
{
"name": "kube-dns-config",
"configMap": {
"name": "kube-dns",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "kubedns",
"image": "vvcelparti01:443/k8s-dns-kube-dns-amd64:1.14.2",
"args": [
"--domain=cluster.local",
"--dns-port=10053",
"--config-dir=/kube-dns-config",
"--kube-master-url=http://10.64.146.26:8080",
"--v=2"
],
"ports": [
{
"name": "dns-local",
"containerPort": 10053,
"protocol": "UDP"
},
{
"name": "dns-tcp-local",
"containerPort": 10053,
"protocol": "TCP"
},
{
"name": "metrics",
"containerPort": 10055,
"protocol": "TCP"
}
],
"env": [
{
"name": "PROMETHEUS_PORT",
"value": "10055"
}
],
"resources": {
"limits": {
"memory": "170Mi"
},
"requests": {
"cpu": "100m",
"memory": "70Mi"
}
},
"volumeMounts": [
{
"name": "kube-dns-config",
"mountPath": "/kube-dns-config"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthcheck/kubedns",
"port": 10054,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"timeoutSeconds": 5,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 5
},
"readinessProbe": {
"httpGet": {
"path": "/readiness",
"port": 8081,
"scheme": "HTTP"
},
"initialDelaySeconds": 3,
"timeoutSeconds": 5,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 3
},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "IfNotPresent"
},
{
"name": "dnsmasq",
"image": "vvcelparti01:443/k8s-dns-dnsmasq-nanny-amd64:1.14.2",
"args": [
"-v=2",
"-logtostderr",
"-configDir=/etc/k8s/dns/dnsmasq-nanny",
"-restartDnsmasq=true",
"--",
"-k",
"--cache-size=1000",
"--log-facility=-",
"--server=/cluster.local/127.0.0.1#10053",
"--server=/in-addr.arpa/127.0.0.1#10053",
"--server=/ip6.arpa/127.0.0.1#10053"
],
"ports": [
{
"name": "dns",
"containerPort": 53,
"protocol": "UDP"
},
{
"name": "dns-tcp",
"containerPort": 53,
"protocol": "TCP"
}
],
"resources": {
"requests": {
"cpu": "150m",
"memory": "20Mi"
}
},
"volumeMounts": [
{
"name": "kube-dns-config",
"mountPath": "/etc/k8s/dns/dnsmasq-nanny"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthcheck/dnsmasq",
"port": 10054,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"timeoutSeconds": 5,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 5
},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "IfNotPresent"
},
{
"name": "sidecar",
"image": "vvcelparti01:443/k8s-dns-sidecar-amd64:1.14.2",
"args": [
"--v=2",
"--logtostderr",
"--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A",
"--probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A"
],
"ports": [
{
"name": "metrics",
"containerPort": 10054,
"protocol": "TCP"
}
],
"resources": {
"requests": {
"cpu": "10m",
"memory": "20Mi"
}
},
"livenessProbe": {
"httpGet": {
"path": "/metrics",
"port": 10054,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"timeoutSeconds": 5,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 5
},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "Default",
"serviceAccountName": "kube-dns",
"serviceAccount": "kube-dns",
"nodeName": "gopher01",
"securityContext": {}
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2017-06-27T09:52:45Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2017-06-27T09:52:55Z"
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2017-06-27T09:53:13Z"
}
],
"hostIP": "10.64.146.24",
"podIP": "172.30.18.4",
"startTime": "2017-06-27T09:52:45Z",
"containerStatuses": [
{
"name": "dnsmasq",
"state": {
"running": {
"startedAt": "2017-06-27T09:52:47Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "vvcelparti01:443/k8s-dns-dnsmasq-nanny-amd64:1.14.2",
"imageID": "docker-pullable://vvcelparti01:443/k8s-dns-dnsmasq-nanny-amd64#sha256:5a9dda0fdf5bf548eb6a63260c3f5e6f5cdc3d0917279e38a435c00967c6c57c",
"containerID": "docker://682fa7e0ffb28f26aee97a8ac7fe564096ece3ef3d7fe14fd9ed6857526d2d2f"
},
{
"name": "kubedns",
"state": {
"running": {
"startedAt": "2017-06-27T09:52:47Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "vvcelparti01:443/k8s-dns-kube-dns-amd64:1.14.2",
"imageID": "docker-pullable://vvcelparti01:443/k8s-dns-kube-dns-amd64#sha256:c78ed83587e42e7fc21f07756364c568c5c0fe10289f4f7f19d03a97f15b7a60",
"containerID": "docker://20b729004655a43efd384f8dded1f97d898a3b54092e190aba3d2031e72da056"
},
{
"name": "sidecar",
"state": {
"running": {
"startedAt": "2017-06-27T09:52:47Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "vvcelparti01:443/k8s-dns-sidecar-amd64:1.14.2",
"imageID": "docker-pullable://vvcelparti01:443/k8s-dns-sidecar-amd64#sha256:8d8c0e03e5f91ae85be7402ac88f804c52431dac32491c7a2557fd462fd2695b",
"containerID": "docker://bbaec6e9d0aa933daaee7c33b6d64d0f37f1a57213fabd2aa1c686c61a356f7f"
}
]
}
}
Here is my troubleshooting session:
$ kubectl get svc --namespace=kube-system
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               10.254.0.253    <none>        53/UDP,53/TCP   24d
kubernetes-dashboard   10.254.170.86   <none>        80/TCP          29d
$ kubectl get ep kube-dns --namespace=kube-system
NAME       ENDPOINTS                       AGE
kube-dns   172.30.18.4:53,172.30.18.4:53   24d

Can I loop over properties in ARM templates?

I have an ARM template where I set up a load balancer and I want to add a number of port openings by adding rules and probes to the LB.
This is the template I have so far:
{
"type": "Microsoft.Network/loadBalancers",
"name": "LB-front",
"apiVersion": "2016-03-30",
"location": "westeurope",
"tags": { },
"properties": {
"frontendIPConfigurations": [
{
"name": "LoadBalancerIPConfig",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddresses_lbipdev_0_name'))]"
}
}
}
],
"backendAddressPools": [
{
"name": "LoadBalancerBEAddressPool"
}
],
"loadBalancingRules": [
{
"name": "AppPortLBRule1",
"properties": {
"frontendIPConfiguration": {
"id": "[parameters('loadBalancers_LB_dev_id_6')]"
},
"frontendPort": 80,
"backendPort": 80,
"enableFloatingIP": false,
"idleTimeoutInMinutes": 5,
"protocol": "Tcp",
"loadDistribution": "Default",
"backendAddressPool": {
"id": "[parameters('loadBalancers_LB_dev_id_7')]"
},
"probe": {
"id": "[parameters('loadBalancers_LB_dev_id_8')]"
}
}
},
{
"name": "AppPortLBRule2",
"properties": {
"frontendIPConfiguration": {
"id": "[parameters('loadBalancers_LB_dev_id_9')]"
},
"frontendPort": 81,
"backendPort": 81,
"enableFloatingIP": false,
"idleTimeoutInMinutes": 5,
"protocol": "Tcp",
"loadDistribution": "Default",
"backendAddressPool": {
"id": "[parameters('loadBalancers_LB_dev_id_10')]"
},
"probe": {
"id": "[parameters('loadBalancers_LB_dev_id_11')]"
}
}
},
{
"name": "AppPortLBRule3",
"properties": {
"frontendIPConfiguration": {
"id": "[parameters('loadBalancers_LB_dev_id_12')]"
},
"frontendPort": 82,
"backendPort": 82,
"enableFloatingIP": false,
"idleTimeoutInMinutes": 5,
"protocol": "Tcp",
"loadDistribution": "Default",
"backendAddressPool": {
"id": "[parameters('loadBalancers_LB_dev_id_13')]"
},
"probe": {
"id": "[parameters('loadBalancers_LB_dev_id_14')]"
}
}
}
],
"probes": [
{
"name": "AppPortProbe1",
"properties": {
"protocol": "Tcp",
"port": 80,
"intervalInSeconds": 5,
"numberOfProbes": 2
}
},
{
"name": "AppPortProbe2",
"properties": {
"protocol": "Tcp",
"port": 81,
"intervalInSeconds": 5,
"numberOfProbes": 2
}
},
{
"name": "AppPortProbe3",
"properties": {
"protocol": "Tcp",
"port": 82,
"intervalInSeconds": 5,
"numberOfProbes": 2
}
}
],
"inboundNatRules": [],
"outboundNatRules": [],
"inboundNatPools": []
},
"resources": [],
"dependsOn": [
"[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPAddresses_lbipdev_1_name'))]"
]
},
(some details omitted)
What I would like is to have an array of the port numbers I want to create rules and probes for, and to loop over those instead of explicitly writing each rule and probe as a property of the resource.
Basically I would like a parameter or variable in my template like this:
"ports": [ 80, 81, 82, ...]
and to loop over it, similar to this: https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-multiple.
Indeed you can! copy does work with properties!
Create a parameter or variable like this (this example uses a parameter array):
"lbRules": {
"type": "array",
"defaultValue": [
{
"name": "httpPort",
"frontendPort": "80",
"backendPort": "80",
"protocol": "tcp"
},
{
"name": "customAppPort",
"frontendPort": "8080",
"backendPort": "8888",
"protocol": "tcp"
},
{
"name": "httpsPort",
"frontendPort": "443",
"backendPort": "443",
"protocol": "tcp"
}
]
}
Use this parameter in the load balancer resource with copy, like this; it will create as many probes and rules as you defined in your parameter array:
{
"apiVersion": "[variables('lbApiVersion')]",
"type": "Microsoft.Network/loadBalancers",
"name": "[parameters('myLoadBalancer')]",
"location": "[parameters('computeLocation')]",
"dependsOn": [
"[concat('Microsoft.Network/publicIPAddresses/',concat(parameters('lbIPName'),'-','0'))]"
],
"properties": {
"frontendIPConfigurations": [
{
"name": "LoadBalancerIPConfig",
"properties": {
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses',concat(parameters('lbIPName'),'-','0'))]"
}
}
}
],
"backendAddressPools": [
{
"name": "LoadBalancerBEAddressPool",
"properties": {}
}
],
"copy": [
{
"name": "probes",
"count": "[length(parameters('lbRules'))]",
"input": {
"name": "[concat(parameters('lbRules')[copyIndex('probes')].name,'Probe')]",
"properties": {
"intervalInSeconds": 5,
"numberOfProbes": 2,
"port": "[parameters('lbRules')[copyIndex('probes')].backendPort]",
"protocol": "[parameters('lbRules')[copyIndex('probes')].protocol]"
}
}
},
{
"name": "loadBalancingRules",
"count": "[length(parameters('lbRules'))]",
"input": {
"name": "[parameters('lbRules')[copyIndex('loadBalancingRules')].name]",
"properties": {
"frontendIPConfiguration": {
"id": "[concat(resourceId('Microsoft.Network/loadBalancers', parameters('myLoadBalancer')),'/frontendIPConfigurations/LoadBalancerIPConfig')]"
},
"frontendport": "[parameters('lbRules')[copyIndex('loadBalancingRules')].frontendport]",
"backendport": "[parameters('lbRules')[copyIndex('loadBalancingRules')].backendport]",
"enableFloatingIP": false,
"idleTimeoutInMinutes": "5",
"protocol": "[parameters('lbRules')[copyIndex('loadBalancingRules')].protocol]",
"backendAddressPool": {
"id": "[concat(resourceId('Microsoft.Network/loadBalancers', parameters('myLoadBalancer')),'/backendAddressPools/LoadBalancerBEAddressPool')]"
},
"probe": {
"id": "[concat(variables('lbID0'),'/probes/', parameters('lbRules')[copyIndex('loadBalancingRules')].name,'Probe')]"
}
}
}
}
],
"inboundNatPools": []
}
}
More info can be found here:
https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-multiple#property-iteration
https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/copy-properties
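Before deploying, the rules and probes the copy loop expands to can be checked with a hedged validation call (the resource group and template file names are placeholders):
az group deployment validate --resource-group myRG --template-file azuredeploy.json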
You can only apply the copy object to a top-level resource.
You cannot apply it to a property on a resource type, or to a child resource.
"resources": [
{
"type": "{provider-namespace-and-type}",
"name": "parentResource",
"copy": {
/* yes, copy can be applied here */
},
"properties": {
"exampleProperty": {
/* no, copy cannot be applied here */
}
},
"resources": [
{
"type": "{provider-type}",
"name": "childResource",
/* copy can be applied if resource is promoted to top level */
}
]
}
]
Source of Quotation: Deploy multiple instances of resources in Azure Resource Manager templates
You can loop over properties in an ARM template ONLY IF the copy object is applied to a top-level resource, which in your case is the "Microsoft.Network/loadBalancers", but that will also create multiple copies of the said resource.
If this is not what you want to achieve, I would recommend keeping your existing approach until ARM templates support the copy object on a property of a resource type in the future.
It is now possible to loop over properties or over child resources, as stated in https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-multiple#property-iteration or in
https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-multiple#create-multiple-instances-of-a-child-resource
You can declare a child resource extension (e.g. WebSite/Extension) as a top-level resource by following this format for the type:
{resource-provider-namespace}/{parent-resource-type}/{child-resource-type}.
For instance
Microsoft.Web/sites/siteextensions
You also have to reference the parent resource in the child resource's name with a concat. For instance:
"name": "[concat('mywebsite', '/', 'myextension', copyIndex())]"
What you want to achieve is possible with the take function.
You linked the proper documentation site yourself: go to the link you posted and check out the section "Create multiple instances when copy won't work".
In your case this would look like this:
"variables": {
"probeArray": [
{
"name": "AppPortProbe1",
"properties": {
"protocol": "Tcp",
"port": 80,
"intervalInSeconds": 5,
"numberOfProbes": 2
}
},
{
"name": "AppPortProbe2",
"properties": {
"protocol": "Tcp",
"port": 81,
"intervalInSeconds": 5,
"numberOfProbes": 2
}
},
{
"name": "AppPortProbe3",
"properties": {
"protocol": "Tcp",
"port": 82,
"intervalInSeconds": 5,
"numberOfProbes": 2
}
}
],
You then create a parameter specifying how many probes you want:
"parameters": {
...
"numProbes": {
"type": "int",
"maxValue": 3,
"metadata": {
"description": "This parameter allows you to select the number of probes you want"
}
}
Finally, you use take inside the resource:
"resources": [
...
{
"type": "Microsoft.Network/loadBalancers",
"properties": {
...
"probes": "[take(variables('probeArray'),parameters('numProbes'))]"
},
...
}
...
}
]
If you continue through the documentation, you will see that you can go even further and combine copy and take with linked templates.
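To use it, a hedged deployment command with the new parameter (resource group and template file names are placeholders):
# Deploys with only the first two probes from probeArray
az group deployment create --resource-group myRG --template-file azuredeploy.json --parameters numProbes=2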

Unable to start an Ubuntu container in OpenShift Origin

I am trying to bring up an Ubuntu container in a pod in OpenShift. I have set up my local Docker registry and configured DNS accordingly. Starting the Ubuntu container with plain Docker works fine without any issues. When I deploy the pod, I can see that my Ubuntu Docker image is pulled successfully, but the container does not start; it fails with a back-off pulling image error. Is this because my entrypoint does not keep any long-running process inside the container?
"openshift.io/container.ubuntu.image.entrypoint": "[\"top\"]",
Snapshot of the events (screenshot not included):
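The same events can also be pulled from the CLI; a hedged sketch (the namespace comes from the config below, the pod name is a placeholder):
oc get events -n testproject
oc describe pod <pod-name> -n testproject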
Deployment config:
{
"kind": "DeploymentConfig",
"apiVersion": "v1",
"metadata": {
"name": "ubuntu",
"namespace": "testproject",
"selfLink": "/oapi/v1/namespaces/testproject/deploymentconfigs/ubuntu",
"uid": "e7c7b9c6-4dbd-11e6-bd2b-0800277bbed5",
"resourceVersion": "4340",
"generation": 6,
"creationTimestamp": "2016-07-19T14:34:31Z",
"labels": {
"app": "ubuntu"
},
"annotations": {
"openshift.io/deployment.cancelled": "4",
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"strategy": {
"type": "Rolling",
"rollingParams": {
"updatePeriodSeconds": 1,
"intervalSeconds": 1,
"timeoutSeconds": 600,
"maxUnavailable": "25%",
"maxSurge": "25%"
},
"resources": {}
},
"triggers": [
{
"type": "ConfigChange"
},
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": true,
"containerNames": [
"ubuntu"
],
"from": {
"kind": "ImageStreamTag",
"namespace": "testproject",
"name": "ubuntu:latest"
},
"lastTriggeredImage": "ns1.myregistry.com:5000/ubuntu#sha256:6d9a2a1bacdcb2bd65e36b8f1f557e89abf0f5f987ba68104bcfc76103a08b86"
}
}
],
"replicas": 1,
"test": false,
"selector": {
"app": "ubuntu",
"deploymentconfig": "ubuntu"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "ubuntu",
"deploymentconfig": "ubuntu"
},
"annotations": {
"openshift.io/container.ubuntu.image.entrypoint": "[\"top\"]",
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"containers": [
{
"name": "ubuntu",
"image": "ns1.myregistry.com:5000/ubuntu#sha256:6d9a2a1bacdcb2bd65e36b8f1f557e89abf0f5f987ba68104bcfc76103a08b86",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {}
}
}
},
"status": {
"latestVersion": 5,
"details": {
"causes": [
{
"type": "ConfigChange"
}
]
},
"observedGeneration": 5
}
The problem was with the HTTP proxy. After solving that, the image pull was successful.
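For readers hitting the same back-off symptom behind a proxy, a hedged sketch of one common fix; the answer does not say which settings were actually changed, and all hosts and ports here are placeholders:
# Point the Docker daemon at a proxy via a systemd drop-in, excluding the internal registry
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,ns1.myregistry.com"
EOF
systemctl daemon-reload
systemctl restart docker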

Can't run docker container on Marathon with network HOST?

I am trying to run some Cassandra instances (Docker containers) on Marathon.
The following description works well:
{
"id": "cassandra",
"constraints": [["hostname", "CLUSTER", "docker-sl-vm"]],
"container": {
"type": "DOCKER",
"docker": {
"image": "cassandra:latest",
"network": "BRIDGE",
"portMappings": [ {"containerPort": 9042,"hostPort": 0,"servicePort": 9042,"protocol": "tcp"} ]
}
},
"env": {
"CASSANDRA_SEED_COUNT": "1"
},
"cpus": 0.5,
"mem": 512.0,
"instances": 1,
"backoffSeconds": 1,
"backoffFactor": 1.15,
"maxLaunchDelaySeconds": 3600
}
However, I was following a tutorial that uses the following description:
{
"id": "cassandra-seed",
"constraints": [
[
"hostname",
"UNIQUE"
]
],
"ports": [
7199,
7000,
7001,
9160,
9042
],
"requirePorts": true,
"container": {
"type": "DOCKER",
"docker": {
"image": "cassandra:latest",
"network": "HOST",
"privileged": true
}
},
"env": {
"CASSANDRA_SEED_COUNT": "1"
},
"cpus": 0.5,
"mem": 512,
"instances": 2,
"backoffSeconds": 1,
"backoffFactor": 1.15,
"maxLaunchDelaySeconds": 3600,
"healthChecks": [
{
"protocol": "TCP",
"gracePeriodSeconds": 30,
"intervalSeconds": 30,
"portIndex": 4,
"timeoutSeconds": 60,
"maxConsecutiveFailures": 30
}
],
"upgradeStrategy": {
"minimumHealthCapacity": 0.5,
"maximumOverCapacity": 0.2
}
}
PROBLEM
If I try to use the second Marathon description, it takes forever and never loads. It just gets stuck on deploying and does not give me any error in the DEBUG section.
PS: I am running the Mesos cluster in a VirtualBox Ubuntu Trusty guest.
UPDATE ========================================
I've erased the logs and tried to run it again. The log result is shown below:
Content of mesos-slave.docker-sl-vm.invalid-user.log.INFO.20151110-130520.2713
