Application in ACS (DCOS) on a port other than 80 - azure

How can I host an application in ACS (DCOS) on a port other than 80? And can I access it via a URL instead of a port number? This is the app definition I am using:
{
  "id": "/dockercloud-hello-world",
  "cmd": null,
  "cpus": 0.1,
  "mem": 128,
  "disk": 0,
  "instances": 2,
  "acceptedResourceRoles": [
    "*"
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "dockercloud/hello-world",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 0,
          "servicePort": 10000,
          "protocol": "tcp",
          "labels": {}
        }
      ],
      "privileged": false,
      "parameters": [],
      "forcePullImage": true
    }
  },
  "portDefinitions": [
    {
      "port": 10000,
      "protocol": "tcp",
      "name": "default",
      "labels": {}
    }
  ]
}
According to Marathon, the application is available on port 4170, but I am unable to access it at the agent's FQDN:port.

Yes, it is possible.
First, change the hostPort value to 4170 and set acceptedResourceRoles to slave_public; the changed fields are sketched below.
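A minimal sketch of those changes, assuming the rest of the app definition above stays the same (only the modified fields are shown):
{
  "acceptedResourceRoles": [
    "slave_public"
  ],
  "container": {
    "docker": {
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 4170,
          "servicePort": 10000,
          "protocol": "tcp"
        }
      ]
    }
  }
}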
Then open port 4170 in the agent node's NSG.
You also need to expose the port on the agent node's load balancer:
1. Add a health probe
2. Add a load-balancing rule
A rough sketch of both pieces follows. For more information about this, please check this link.
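A rough ARM-style sketch of a probe and load-balancing rule for port 4170, following the standard Microsoft.Network/loadBalancers schema; the probe/rule names and the variable references are illustrative, not taken from the actual ACS template:
"probes": [
  {
    "name": "AppPort4170Probe",
    "properties": {
      "intervalInSeconds": 5,
      "numberOfProbes": 2,
      "port": 4170,
      "protocol": "tcp"
    }
  }
],
"loadBalancingRules": [
  {
    "name": "AppPort4170LBRule",
    "properties": {
      "backendAddressPool": { "id": "[variables('lbPoolID0')]" },
      "frontendIPConfiguration": { "id": "[variables('lbIPConfig0')]" },
      "frontendPort": 4170,
      "backendPort": 4170,
      "protocol": "tcp",
      "idleTimeoutInMinutes": "5",
      "probe": { "id": "[concat(variables('lbID0'),'/probes/AppPort4170Probe')]" }
    }
  }
]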

Related

Must include AWSEBDockerrunVersion key in the Dockerrun.aws.json file

Trying to move my Docker Compose application to Elastic Beanstalk and having some issues.
I've been struggling with this for about a week now; I've come pretty far, but there are still some big issues. I converted my docker-compose.yml to a Dockerrun.aws.json using container-transform:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "entryPoint": [
        "/client/entrypoint.sh"
      ],
      "essential": true,
      "memory": 512,
      "image": "nodejs",
      "links": [
        "server_dans_backend:server_dans_backend"
      ],
      "name": "client_dans_backend",
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 3000
        }
      ]
    },
    {
      "environment": [
        {
          "name": "POSTGRES_DB",
          "value": "ABC"
        },
        {
          "name": "POSTGRES_USER",
          "value": "ABC"
        },
        {
          "name": "POSTGRES_PASSWORD",
          "value": "ABC"
        },
        {
          "name": "POSTGRES_HOST",
          "value": "ABC"
        }
      ],
      "essential": true,
      "image": "postgres:14-alpine",
      "memory": 512,
      "name": "db_dans_backend",
      "portMappings": [
        {
          "containerPort": 5432,
          "hostPort": 5432
        }
      ]
    },
    {
      "essential": true,
      "image": "nginx:alpine",
      "memory": 512,
      "links": [
        "server_dans_backend",
        "client_dans_backend"
      ],
      "name": "nginx_dans_backend",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ]
    },
    {
      "entryPoint": [
        "/app/server/entrypoint.sh"
      ],
      "essential": true,
      "image": "alpine:python",
      "memory": 512,
      "links": [
        "db_dans_backend:db_dans_backend"
      ],
      "name": "server_dans_backend",
      "portMappings": [
        {
          "containerPort": 8000,
          "hostPort": 8000
        }
      ]
    }
  ]
}
Pretty straightforward - Node (NextJS), Python (Django), Nginx and Postgres.
My problem is this: it doesn't work in prod, and whenever I try eb local run I get the following error:
ERROR: ValidationError - The AWSEBDockerrunVersion key in the Dockerrun.aws.json file is not valid or is not included.
Even weirder, when I actually eb deploy I get this:
Instance deployment: 'Dockerrun.aws.json' in your source bundle specifies an unsupported version. Elastic Beanstalk only supports version 1 for non compose app and version 3 for compose app. The deployment failed.
But there is no version 3 for this file format.
I'm not particularly sure why this is a problem, though, since the key is clearly included. I read it could be a problem if your EB platform isn't multi-container Docker, but I believe my platform is correct.
When I run eb platform show I get the following:
64bit Amazon Linux 2 v3.4.16 running Docker
which I believe is valid - the only other option would be the ECS+EB option which I don't believe works with eb local run anyway.
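For reference, my understanding is that a version-1 Dockerrun.aws.json only describes a single container, roughly like the sketch below (illustrative values, not my actual file), so I don't see how version 1 could cover a four-container setup like mine:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "nginx:alpine",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 80
    }
  ]
}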
Thank you in advance, been really struggling with this.

Add volumes and files to a container in Azure using Pulumi

I'm starting to use Pulumi for container deployment in the Azure cloud.
At the moment I am facing problems because I need to load some configuration files into a Traefik container, but I cannot find the correct way to do it. The idea is that Traefik works as a reverse proxy for the other containers in the group.
My problem is that no matter how I specify the creation of a volume and try to attach it to the container, when I go to the Azure dashboard the container does not show any attached volume.
import pulumi
import pulumi_azure_nextgen as azure

data_rg = azure.resources.latest.ResourceGroup(
    "data-rg",
    resource_group_name="data-rg",
    location="West Europe")

datahike_group = azure.containerinstance.latest.ContainerGroup(
    "data-group",
    location="West Europe",
    container_group_name="data-cg",
    resource_group_name=data_rg.name,
    containers=[
        {
            "name": "data",
            "image": "wordpress:latest",
            "resources": {
                "requests": {"cpu": 0.5, "memory_in_gb": 1.5}
            },
        },
        {
            "name": "proxy",
            "image": "traefik:latest",
            "resources": {
                "requests": {"cpu": 0.5, "memory_in_gb": 1.5}
            },
            "ports": [{
                "port": 80,
                "protocol": "TCP",
            }],
            "VolumeMount": [{
                "mount_path": "/etc/traefik/config_base.yml",
                "name": "traefik-volume",
            }],
            "environment_variables": [{
                "name": "TRAEFIK_CONFIG_FILE",
                "value": "file"
            }, {
                "name": "TRAEFIK_CONFIG_PATH",
                "value": "/etc/traefik/config_base.yml"
            }],
        },
    ],
    ip_address={
        "dnsNameLabel": "dnsnamelabel1",
        "ports": [{
            "port": 80,
            "protocol": "TCP",
        }],
        "type": "Public",
    },
    volumes=[
        {
            "emptyDir": {},
            "name": "datahike-volume",
        },
        {
            "name": "traefik-volume",
            "secret": {
                "secretKey1": "SecretValue1InBase64",
            },
        },
    ],
    os_type="Linux",
    tags={
        "environment": "testing",
    })
pulumi.export("data_ip", data_group.ip_address)
Does anyone know why it's failing?
In this case, the error was due to a typo: the property must be volumeMounts, not VolumeMount:
"volumeMounts": [{
    "mount_path": "/etc/traefik/config_base.yml",
    "name": "traefik-volume",
}],

SNAT Ports for Outgoing Connection from Azure (Stack)

I can't get a working outbound connection from my load-balanced VMs created in Azure Stack. I have scenario 2 of the documentation: "Public Load Balancer associated with a VM (no Instance Level Public IP address on the instance)". Only port 80 works as an outbound connection out of the box. I am behind an additional firewall, and I wonder whether I have to open any other specific ports to allow communication to the internet. Three questions:
Is the problem that the port I try to access from inside the VMs is translated to a different SNAT port by the load balancer?
The documentation says something about the number of SNAT ports used, but it does not say which SNAT ports are used. Which ports do I have to open in the outer firewall?
Why does port 80 work out of the box? By default I can access the web from within the VMs, which means it is possible to reach the public internet, yet I did not add any rule for port 80 myself.
In the inbound rules of the Azure network security group I found an explicit mention of the ephemeral ports 49152-65534. Unfortunately, opening these ports for outbound connections on our outer firewall didn't do the trick either. All VM-internal firewalls are open on all ports.
I created the cluster using the following template. The SKU should be the default one, "standard", since I did not specify anything else for the load balancer.
{
  "apiVersion": "[variables('lbApiVersion')]",
  "type": "Microsoft.Network/loadBalancers",
  "name": "[concat('LB','-', parameters('clusterName'),'-',variables('vmNodeType0Name'))]",
  "location": "[variables('location')]",
  "dependsOn": [
    "[concat('Microsoft.Network/publicIPAddresses/',concat(variables('lbIPName'),'-','0'))]"
  ],
  "properties": {
    "frontendIPConfigurations": [
      {
        "name": "LoadBalancerIPConfig",
        "properties": {
          "publicIPAddress": {
            "id": "[resourceId('Microsoft.Network/publicIPAddresses',concat(variables('lbIPName'),'-','0'))]"
          }
        }
      }
    ],
    "backendAddressPools": [
      {
        "name": "LoadBalancerBEAddressPool",
        "properties": {}
      }
    ],
    "loadBalancingRules": [
      {
        "name": "LBRule",
        "properties": {
          "backendAddressPool": {
            "id": "[variables('lbPoolID0')]"
          },
          "backendPort": "[variables('nt0fabricTcpGatewayPort')]",
          "enableFloatingIP": "false",
          "frontendIPConfiguration": {
            "id": "[variables('lbIPConfig0')]"
          },
          "frontendPort": "[variables('nt0fabricTcpGatewayPort')]",
          "idleTimeoutInMinutes": "5",
          "probe": {
            "id": "[variables('lbProbeID0')]"
          },
          "protocol": "tcp"
        }
      },
      {
        "name": "LBHttpRule",
        "properties": {
          "backendAddressPool": {
            "id": "[variables('lbPoolID0')]"
          },
          "backendPort": "[variables('nt0fabricHttpGatewayPort')]",
          "enableFloatingIP": "false",
          "frontendIPConfiguration": {
            "id": "[variables('lbIPConfig0')]"
          },
          "frontendPort": "[variables('nt0fabricHttpGatewayPort')]",
          "idleTimeoutInMinutes": "5",
          "probe": {
            "id": "[variables('lbHttpProbeID0')]"
          },
          "protocol": "tcp"
        }
      },
      {
        "name": "AppPortLBRule1",
        "properties": {
          "backendAddressPool": {
            "id": "[variables('lbPoolID0')]"
          },
          "backendPort": "[parameters('loadBalancedAppPort1')]",
          "enableFloatingIP": "false",
          "frontendIPConfiguration": {
            "id": "[variables('lbIPConfig0')]"
          },
          "frontendPort": "[parameters('loadBalancedAppPort1')]",
          "idleTimeoutInMinutes": "5",
          "probe": {
            "id": "[concat(variables('lbID0'),'/probes/AppPortProbe1')]"
          },
          "protocol": "tcp"
        }
      },
      {
        "name": "AppPortLBRule2",
        "properties": {
          "backendAddressPool": {
            "id": "[variables('lbPoolID0')]"
          },
          "backendPort": "[parameters('loadBalancedAppPort2')]",
          "enableFloatingIP": "false",
          "frontendIPConfiguration": {
            "id": "[variables('lbIPConfig0')]"
          },
          "frontendPort": "[parameters('loadBalancedAppPort2')]",
          "idleTimeoutInMinutes": "5",
          "probe": {
            "id": "[concat(variables('lbID0'),'/probes/AppPortProbe2')]"
          },
          "protocol": "tcp"
        }
      }
    ],
    "probes": [
      {
        "name": "FabricGatewayProbe",
        "properties": {
          "intervalInSeconds": 5,
          "numberOfProbes": 2,
          "port": "[variables('nt0fabricTcpGatewayPort')]",
          "protocol": "tcp"
        }
      },
      {
        "name": "FabricHttpGatewayProbe",
        "properties": {
          "intervalInSeconds": 5,
          "numberOfProbes": 2,
          "port": "[variables('nt0fabricHttpGatewayPort')]",
          "protocol": "tcp"
        }
      },
      {
        "name": "AppPortProbe1",
        "properties": {
          "intervalInSeconds": 5,
          "numberOfProbes": 2,
          "port": "[parameters('loadBalancedAppPort1')]",
          "protocol": "tcp"
        }
      },
      {
        "name": "AppPortProbe2",
        "properties": {
          "intervalInSeconds": 5,
          "numberOfProbes": 2,
          "port": "[parameters('loadBalancedAppPort2')]",
          "protocol": "tcp"
        }
      }
    ],
    "inboundNatPools": [
      {
        "name": "LoadBalancerBEAddressNatPool",
        "properties": {
          "backendPort": "3389",
          "frontendIPConfiguration": {
            "id": "[variables('lbIPConfig0')]"
          },
          "frontendPortRangeEnd": "4500",
          "frontendPortRangeStart": "3389",
          "protocol": "tcp"
        }
      }
    ]
  },
  "tags": {
    "resourceType": "Service Fabric",
    "clusterName": "[parameters('clusterName')]"
  }
},
To make it short: how do I get outbound connections from Azure VMs working?
For your issue, I will tell you everything I know; I hope it helps.
Is the problem that the port I try to access from inside the VMs is translated to a different SNAT port by the load balancer?
No. With the load balancer's NAT rules you can translate traffic from the Internet to a different port, or leave it unchanged, as you prefer: a NAT rule lets you reach port A inside the VM from the Internet via port B, and ports A and B can be the same or different.
The documentation says something about the number of SNAT ports used, but it does not say which SNAT ports are used. Which ports do I have to open in the outer firewall?
In my tests you can even use port 1 in a Load Balancer NAT rule, so I assume the documentation is only talking about how many ports can be used per IP configuration. I suggest you read that part of the documentation again carefully.
Why does port 80 work out of the box? By default I can access the web from within the VMs, which means it is possible to reach the public internet, yet I did not add any rule for port 80 myself.
For this, you should check a few things. First, whether a public IP is associated with your VM apart from the load balancer. Second, take a look in the Azure portal to see whether there are any other NAT rules, or list them with the CLI command az network lb inbound-nat-rule list.

unable to access helloworld App deployed using DCOS Marathon in Azure

I have deployed a hello-world application in Azure using DCOS and the Marathon framework. I am trying to access it at FQDN:port, where the application is hosted, but I am unable to open the application.
Following is the JSON I have used:
{
  "id": "/dockercloud-hello-world",
  "cmd": null,
  "cpus": 0.1,
  "mem": 128,
  "disk": 0,
  "instances": 2,
  "acceptedResourceRoles": [
    "*"
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "dockercloud/hello-world",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 0,
          "servicePort": 10000,
          "protocol": "tcp",
          "labels": {}
        }
      ],
      "privileged": false,
      "parameters": [],
      "forcePullImage": true
    }
  },
  "healthChecks": [
    {
      "gracePeriodSeconds": 10,
      "intervalSeconds": 2,
      "timeoutSeconds": 10,
      "maxConsecutiveFailures": 10,
      "portIndex": 0,
      "path": "/",
      "protocol": "HTTP",
      "ignoreHttp1xx": false
    }
  ],
  "portDefinitions": [
    {
      "port": 10000,
      "protocol": "tcp",
      "name": "default",
      "labels": {}
    }
  ]
}
I have added an NSG inbound rule to the master NSG resource.
I have added a NAT rule to the master LB resource, allowing the port as a custom rule.
In your example, the host port is 0, so Azure will listen for your service on a random port. You need to open that port on the NSG and the load balancer.
I suggest you specify the port explicitly; you can check the following example:
{
  "id": "/dockercloud-hello-world",
  "cmd": null,
  "cpus": 0.1,
  "mem": 32,
  "disk": 0,
  "instances": 1,
  "acceptedResourceRoles": [
    "slave_public"
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "dockercloud/hello-world",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80,
          "protocol": "tcp",
          "labels": {},
          "name": "test80"
        }
      ],
      "privileged": false,
      "parameters": [],
      "forcePullImage": true
    }
  },
  "healthChecks": [
    {
      "gracePeriodSeconds": 10,
      "intervalSeconds": 2,
      "timeoutSeconds": 10,
      "maxConsecutiveFailures": 10,
      "portIndex": 0,
      "path": "/",
      "protocol": "MESOS_HTTP",
      "ignoreHttp1xx": false
    }
  ],
  "requirePorts": true
}
Note: You should set acceptedResourceRoles to slave_public. For more information about this, please check this link.
Along with the above-mentioned JSON, I needed to use the agent URL (the agent's FQDN) to access the application. That was the part I was missing.

Can't run docker container on Marathon with network HOST?

I am trying to run some Cassandra instances (Docker containers) on Marathon.
The following description works well:
{
  "id": "cassandra",
  "constraints": [["hostname", "CLUSTER", "docker-sl-vm"]],
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "cassandra:latest",
      "network": "BRIDGE",
      "portMappings": [ {"containerPort": 9042,"hostPort": 0,"servicePort": 9042,"protocol": "tcp"} ]
    }
  },
  "env": {
    "CASSANDRA_SEED_COUNT": "1"
  },
  "cpus": 0.5,
  "mem": 512.0,
  "instances": 1,
  "backoffSeconds": 1,
  "backoffFactor": 1.15,
  "maxLaunchDelaySeconds": 3600
}
However, I was following a tutorial that uses the following description:
{
  "id": "cassandra-seed",
  "constraints": [
    [
      "hostname",
      "UNIQUE"
    ]
  ],
  "ports": [
    7199,
    7000,
    7001,
    9160,
    9042
  ],
  "requirePorts": true,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "cassandra:latest",
      "network": "HOST",
      "privileged": true
    }
  },
  "env": {
    "CASSANDRA_SEED_COUNT": "1"
  },
  "cpus": 0.5,
  "mem": 512,
  "instances": 2,
  "backoffSeconds": 1,
  "backoffFactor": 1.15,
  "maxLaunchDelaySeconds": 3600,
  "healthChecks": [
    {
      "protocol": "TCP",
      "gracePeriodSeconds": 30,
      "intervalSeconds": 30,
      "portIndex": 4,
      "timeoutSeconds": 60,
      "maxConsecutiveFailures": 30
    }
  ],
  "upgradeStrategy": {
    "minimumHealthCapacity": 0.5,
    "maximumOverCapacity": 0.2
  }
}
PROBLEM
If I try to use the second Marathon description, it takes forever and never finishes loading. It just gets stuck on deploying and does not give me any error in the DEBUG section.
PS: I am running the Mesos cluster in a VirtualBox Ubuntu Trusty guest.
UPDATE ========================================
I've erased the logs and tried to run it again. The log result is shown below:
Content of mesos-slave.docker-sl-vm.invalid-user.log.INFO.20151110-130520.2713
