I am using AWS ECS and have a container for my frontend (a Node app) and one for my backend (a MongoDB database).
The mongo container exposes port 27017, but I cannot figure out how to connect to it from my frontend container. If I try to connect to the db using 'mongodb://localhost:27017/db_name', I get an ECONNREFUSED error.
I have a service running for both of these task definitions with an ALB for the frontend. I don't have them in the same task definition because it doesn't seem optimal to have to scale them together.
I have tried multiple variations of the URL:
mongodb://0.0.0.0:27017/db_name
mongodb://localhost:27017/db_name
If I "curl" the mongo container from within the EC2 instance, I get an empty reply from server.
Database Task Definition:
{
"executionRoleArn": null,
"containerDefinitions": [
{
"dnsSearchDomains": null,
"logConfiguration": null,
"entryPoint": null,
"portMappings": [
{
"hostPort": 27017,
"protocol": "tcp",
"containerPort": 27017
}
],
"command": null,
"linuxParameters": null,
"cpu": 0,
"environment": [
{
"name": "MONGODB_ADMIN_PASS",
"value": <PASSWORD>
},
{
"name": "MONGODB_APPLICATION_DATABASE",
"value": <DB NAME>
},
{
"name": "MONGODB_APPLICATION_PASS",
"value": <PASSWORD>
},
{
"name": "MONGODB_APPLICATION_USER",
"value": <USERNAME>
}
],
"ulimits": null,
"dnsServers": null,
"mountPoints": [],
"workingDirectory": null,
"dockerSecurityOptions": null,
"memory": null,
"memoryReservation": 128,
"volumesFrom": [],
"image": "registry.hub.docker.com/library/mongo:latest",
"disableNetworking": null,
"healthCheck": null,
"essential": true,
"links": null,
"hostname": null,
"extraHosts": null,
"user": null,
"readonlyRootFilesystem": null,
"dockerLabels": null,
"privileged": null,
"name": "mongo"
}
],
"placementConstraints": [],
"memory": null,
"taskRoleArn": null,
"compatibilities": [
"EC2"
],
"taskDefinitionArn": "arn:aws:ecs:us-east-2:821819063141:task-definition/dappy_coin_database:2",
"family": "dappy_coin_database",
"requiresAttributes": [
{
"targetId": null,
"targetType": null,
"value": null,
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.21"
}
],
"requiresCompatibilities": null,
"networkMode": null,
"cpu": null,
"revision": 2,
"status": "ACTIVE",
"volumes": []
}
OLD:
You have to add, in the task definition for node (which I assume you have): links: ["mongo"]
Then you can reference the database as mongodb://mongo:...
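As a rough sketch, assuming both containers sit in the same task definition with the default bridge network mode (the MONGO_URL variable name and the node image are illustrative placeholders, not taken from your config), the Node container definition would look something like:
{
    "name": "node",
    "image": "<your node app image>",
    "essential": true,
    "memoryReservation": 256,
    "links": ["mongo"],
    "portMappings": [
        {
            "containerPort": 3000,
            "hostPort": 0,
            "protocol": "tcp"
        }
    ],
    "environment": [
        {
            "name": "MONGO_URL",
            "value": "mongodb://mongo:27017/db_name"
        }
    ]
}
The host name in the connection string matches the linked container's name ("mongo" in your task definition).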
NEW:
Just saw that you want them in separate task definitions. That's a lot of complexity and I want to dissuade you from this path, because you are facing options like: ELB, service discovery via DNS, ambassador container pattern (per this answer - which, if that is all you wanted, this question is a dupe). If you have to do it, see that answer, and weep.
Maybe you would consider deploying your Node app as a single-container Elastic Beanstalk app, and connecting it to MongoDB Atlas? That way you get load balancing, auto-scaling, monitoring, all built in, instead of needing to do it yourself.
Or, at least, you could use AWS Fargate. It is a launch type of ECS that handles more of the infrastructure and networking for you. To quote the docs,
links are not allowed as they are a property of the “bridge” network mode (and are now a legacy feature of Docker). Instead, containers share a network namespace and communicate with each other over the localhost interface. They can be referenced using the following:
localhost/127.0.0.1:<some_port_number>
Where in this case, some_port_number = 27017.
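As a minimal sketch of that setup (the family name, ports, and the MONGO_URL variable are illustrative, only the structure matters), a single Fargate task definition holding both containers could look roughly like:
{
    "family": "app_with_mongo",
    "networkMode": "awsvpc",
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "512",
    "memory": "1024",
    "containerDefinitions": [
        {
            "name": "node",
            "image": "<your node app image>",
            "essential": true,
            "portMappings": [
                { "containerPort": 3000, "protocol": "tcp" }
            ],
            "environment": [
                { "name": "MONGO_URL", "value": "mongodb://localhost:27017/db_name" }
            ]
        },
        {
            "name": "mongo",
            "image": "registry.hub.docker.com/library/mongo:latest",
            "essential": true,
            "portMappings": [
                { "containerPort": 27017, "protocol": "tcp" }
            ]
        }
    ]
}
Because both containers share the task's network namespace, the Node app reaches Mongo on localhost:27017.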
Related
Trying to move my Docker Compose application to Elastic Beanstalk and having some issues.
I've been struggling with this for about a week now; I've come pretty far but still have some big issues. I converted my docker-compose.yml to a Dockerrun.aws.json using container-transform:
{
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"entryPoint": [
"/client/entrypoint.sh"
],
"essential": true,
"memory": 512,
"image": "nodejs",
"links": [
"server_dans_backend:server_dans_backend"
],
"name": "client_dans_backend",
"portMappings": [
{
"containerPort": 3000,
"hostPort": 3000
}
]
},
{
"environment": [
{
"name": "POSTGRES_DB",
"value": "ABC"
},
{
"name": "POSTGRES_USER",
"value": "ABC"
},
{
"name": "POSTGRES_PASSWORD",
"value": "ABC"
},
{
"name": "POSTGRES_HOST",
"value": "ABC"
}
],
"essential": true,
"image": "postgres:14-alpine",
"memory": 512,
"name": "db_dans_backend",
"portMappings": [
{
"containerPort": 5432,
"hostPort": 5432
}
]
},
{
"essential": true,
"image": "nginx:alpine",
"memory": 512,
"links": [
"server_dans_backend",
"client_dans_backend"
],
"name": "nginx_dans_backend",
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
]
},
{
"entryPoint": [
"/app/server/entrypoint.sh"
],
"essential": true,
"image": "alpine:python",
"memory": 512,
"links": [
"db_dans_backend:db_dans_backend"
],
"name": "server_dans_backend",
"portMappings": [
{
"containerPort": 8000,
"hostPort": 8000
}
]
}
]
}
Pretty straightforward - Node (Next.js), Python (Django), Nginx and Postgres.
My problem is this: it doesn't work in prod, and whenever I try eb local run I get the following error:
ERROR: ValidationError - The AWSEBDockerrunVersion key in the Dockerrun.aws.json file is not valid or is not included.
Even weirder, when I actually eb deploy I get this:
Instance deployment: 'Dockerrun.aws.json' in your source bundle specifies an unsupported version. Elastic Beanstalk only supports version 1 for non compose app and version 3 for compose app. The deployment failed.
But there is no version 3 for this file format.
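For reference, a version 1 Dockerrun.aws.json (the "non compose" format the error refers to) describes only a single container, roughly like this, so it can't represent the four services above on its own:
{
    "AWSEBDockerrunVersion": "1",
    "Image": {
        "Name": "nodejs",
        "Update": "true"
    },
    "Ports": [
        {
            "ContainerPort": "3000"
        }
    ]
}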
I'm not particularly sure why this is a problem, though, since the key is clearly included. I read it could be a problem if your EB platform isn't the multicontainer Docker platform, but I believe my platform is correct.
When I run eb platform show I get the following:
64bit Amazon Linux 2 v3.4.16 running Docker
which I believe is valid - the only other option would be the ECS+EB option which I don't believe works with eb local run anyway.
Thank you in advance, been really struggling with this.
I am using ansible-collections/azure and I got this error: basically I need to define at least one system pool. But there is no example for it, and every variation I tried gives an error.
The 2020-04-01 API version is the one I used for this automation.
I followed these links:
https://learn.microsoft.com/en-us/azure/templates/microsoft.containerservice/managedclusters
https://github.com/ansible-collections/azure
If anybody can help me that would be great!
The full traceback is:
File "/tmp/ansible_azure.azcollection.azure_rm_aks_payload_6b2sjfcj/ansible_azure.azcollection.azure_rm_aks_payload.zip/ansible_collections/azure/azcollection/plugins/modules/azure_rm_aks.py", line 791, in create_update_aks
File "/usr/local/lib/python3.5/dist-packages/azure/mgmt/containerservice/v2020_04_01/operations/_managed_clusters_operations.py", line 670, in create_or_update
**operation_config
File "/usr/local/lib/python3.5/dist-packages/azure/mgmt/containerservice/v2020_04_01/operations/_managed_clusters_operations.py", line 621, in _create_or_update_initial
raise exp
[WARNING]: Azure API profile latest does not define an entry for ContainerServiceClient
fatal: [127.0.0.1]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"aad_profile": null,
"ad_user": null,
"addon": null,
"adfs_authority_url": null,
"agent_pool_profiles": [
{
"count": 2,
"dns_prefix": null,
"enable_auto_scaling": null,
"max_count": null,
"min_count": null,
"name": "default",
"os_disk_size_gb": null,
"os_type": null,
"ports": null,
"storage_profiles": null,
"type": "VirtualMachineScaleSets",
"vm_size": "Standard_D2_v2",
"vnet_subnet_id": null
}
],
"api_profile": "latest",
"append_tags": true,
"auth_source": null,
"cert_validation_mode": null,
"client_id": null,
"cloud_environment": "AzureCloud",
"dns_prefix": "myaks1",
"enable_rbac": true,
"kubernetes_version": "1.16.9",
"linux_profile": {
"admin_username": "azureuser",
"ssh_key": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
},
"location": "uksouth",
"name": "myaks",
"network_profile": null,
"node_resource_group": "nodetest",
"password": null,
"profile": null,
"resource_group": "mytest",
"secret": null,
"service_principal": {
"client_id": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"client_secret": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
},
"state": "present",
"subscription_id": null,
"tags": {
"Environment": "Production"
},
"tenant": null
}
},
"msg": "Error creating the AKS instance: Operation failed with status: 'Bad Request'. Details: Must define at least one system pool."
}
This version of the API requires an agentPoolMode to be provided.
AgentPoolMode represents the mode of an agent pool. Its possible values are:
- System (string)
- User (string)
https://learn.microsoft.com/en-us/rest/api/aks/agentpools/get
AKS requires a minimum of one system agent node pool.
It is fixed by a new pull request; I tested it locally.
https://github.com/ansible-collections/azure/pull/170
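With that change in place, a minimal sketch of the task (the module options mirror the module_args above; the SSH key and service principal values are placeholders, and this assumes a collection version that includes the pull request) would be something like:
- name: Create an AKS cluster with a system node pool
  azure.azcollection.azure_rm_aks:
    resource_group: mytest
    name: myaks
    location: uksouth
    dns_prefix: myaks1
    kubernetes_version: "1.16.9"
    agent_pool_profiles:
      - name: default
        count: 2
        vm_size: Standard_D2_v2
        type: VirtualMachineScaleSets
        mode: System          # marks this pool as a system pool
    linux_profile:
      admin_username: azureuser
      ssh_key: "{{ ssh_public_key }}"
    service_principal:
      client_id: "{{ sp_client_id }}"
      client_secret: "{{ sp_client_secret }}"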
I am using the REST API to gather some information from Azure DevOps. I want to get the full build results, including every stage, but that is not available in the documentation. The simple build API call only gives me limited data. Is there any way to collect stage-wise information, such as whether a stage was successful or the start and end time of each stage?
Will be grateful for the help.
You should first call this URL:
https://dev.azure.com/<YourOrg>/<Your-project>/_apis/build/builds/<buildid>?api-version=5.1
In the _links section you will find a timeline entry:
"_links": {
"self": {
"href": "https://dev.azure.com/thecodemanual/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2/_apis/build/Builds/460"
},
"web": {
"href": "https://dev.azure.com/thecodemanual/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2/_build/results?buildId=460"
},
"sourceVersionDisplayUri": {
"href": "https://dev.azure.com/thecodemanual/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2/_apis/build/builds/460/sources"
},
"timeline": {
"href": "https://dev.azure.com/thecodemanual/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2/_apis/build/builds/460/Timeline"
},
"badge": {
"href": "https://dev.azure.com/thecodemanual/4fa6b279-3db9-4cb0-aab8-e06c2ad550b2/_apis/build/status/30"
}
},
and there you will find what you are looking for:
{
"previousAttempts": [],
"id": "67c760f8-35f0-533f-1d24-8e8c3788c96d",
"parentId": null,
"type": "Stage",
"name": "A",
"startTime": "2020-04-24T08:42:37.2133333Z",
"finishTime": "2020-04-24T08:42:46.9933333Z",
"currentOperation": null,
"percentComplete": null,
"state": "completed",
"result": "succeeded",
"resultCode": null,
"changeId": 12,
"lastModified": "0001-01-01T00:00:00",
"workerName": null,
"order": 1,
"details": null,
"errorCount": 0,
"warningCount": 0,
"url": null,
"log": null,
"task": null,
"attempt": 1,
"identifier": "A"
},
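For completeness, a small sketch (assuming the requests library and a personal access token with build read scope; the organization, project and build id are placeholders) that pulls only the stage records out of the timeline endpoint linked above could look like:
import requests

org, project, build_id = "your-org", "your-project", 460
pat = "your-personal-access-token"

url = (f"https://dev.azure.com/{org}/{project}"
       f"/_apis/build/builds/{build_id}/timeline?api-version=5.1")

# Basic auth with an empty user name and the PAT as the password
resp = requests.get(url, auth=("", pat))
resp.raise_for_status()

# Keep only the records that represent stages
stages = [r for r in resp.json()["records"] if r["type"] == "Stage"]
for s in stages:
    print(s["name"], s["result"], s["startTime"], s["finishTime"])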
You can also refer to the API below; this REST API was captured from the browser's Network tab.
GET https://dev.azure.com/{org}/{pro}/_build/results?buildId={id}&__rt=fps&__ver=2
Stage results are represented by different numbers, e.g. 0 -> completed, 5 -> canceled, etc.
The disadvantage of this API is that the returned content cannot be read intuitively. In contrast, the workaround provided by Krzysztof Madej is more convenient and intuitive.
How can I host an application in ACS (DC/OS) on any port other than 80? Can I use some other URL instead of the port number to access it?
{
"id": "/dockercloud-hello-world",
"cmd": null,
"cpus": 0.1,
"mem": 128,
"disk": 0,
"instances": 2,
"acceptedResourceRoles": [
"*"
],
"container": {
"type": "DOCKER",
"volumes": [],
"docker": {
"image": "dockercloud/hello-world",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 80,
"hostPort": 0,
"servicePort": 10000,
"protocol": "tcp",
"labels": {}
}
],
"privileged": false,
"parameters": [],
"forcePullImage": true
}
},
"portDefinitions": [
{
"port": 10000,
"protocol": "tcp",
"name": "default",
"labels": {}
}
]
}
The application is available on port 4170 according to Marathon, but I am unable to access it at the agent's FQDN:port.
Yes, it is possible.
First, you need to modify the hostPort value to 4170 and acceptedResourceRoles to slave_public.
Then you need to open port 4170 on the agent node NSG.
You also need to open the port on the agent node load balancer:
1. Add health probes
2. Add load balancing rules
For more information about this, please check this link.
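For example, the relevant part of the app definition above would change to something like this (only the modified fields are shown; the rest stays as in your original JSON):
{
    "id": "/dockercloud-hello-world",
    "acceptedResourceRoles": [
        "slave_public"
    ],
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "dockercloud/hello-world",
            "network": "BRIDGE",
            "portMappings": [
                {
                    "containerPort": 80,
                    "hostPort": 4170,
                    "protocol": "tcp"
                }
            ]
        }
    }
}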
I'm struggling to keep my Node.js container running on ECS. It runs fine when I run it locally with Docker Compose, but on ECS it runs for 2-3 minutes and handles a few connections (2-3 health checks from the load balancer), then shuts down. And I can't work out why.
My Dockerfile -
FROM node:6.10
RUN npm install -g nodemon \
&& npm install forever-monitor \
winston \
express-winston
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
Then in my package.json -
{
...
"main": "forever.js",
"dependencies": {
"mongodb": "~2.0",
"abbajs": ">=0.1.4",
"express": ">=4.15.2"
}
...
}
In my docker-compose.yml I run with nodemon -
node:
...
command: nodemon
In my CloudWatch logs I can see everything start -
14:20:24 npm info lifecycle my_app#1.0.0~start: my_app#1.0.0
Then I see the health check requests (all with HTTP 200s), then a bit later it all wraps up -
14:23:00 npm info lifecycle mapov_reporting#1.0.0~poststart: mapov_reporting#1.0.0
14:23:00 npm info ok
I've tried wrapping my start.js script in forever-monitor, but that doesn't seem to be making any difference.
UPDATE
My ECS task definition -
{
"requiresAttributes": [
{
"value": null,
"name": "com.amazonaws.ecs.capability.ecr-auth",
"targetId": null,
"targetType": null
},
{
"value": null,
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs",
"targetId": null,
"targetType": null
},
{
"value": null,
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19",
"targetId": null,
"targetType": null
}
],
"taskDefinitionArn": "arn:aws:ecs:us-east-1:562155596068:task-definition/node:12",
"networkMode": "bridge",
"status": "ACTIVE",
"revision": 12,
"taskRoleArn": null,
"containerDefinitions": [
{
"volumesFrom": [],
"memory": 128,
"extraHosts": null,
"dnsServers": null,
"disableNetworking": null,
"dnsSearchDomains": null,
"portMappings": [
{
"hostPort": 0,
"containerPort": 3000,
"protocol": "tcp"
}
],
"hostname": null,
"essential": true,
"entryPoint": null,
"mountPoints": [],
"name": "node",
"ulimits": null,
"dockerSecurityOptions": null,
"environment": [
{
"name": "awslogs-group",
"value": "node_logs"
},
{
"name": "awslogs-region",
"value": "us-east-1"
},
{
"name": "NODE_ENV",
"value": "production"
}
],
"links": null,
"workingDirectory": null,
"readonlyRootFilesystem": null,
"image": "562155596068.dkr.ecr.us-east-1.amazonaws.com/node:06b5a3700df163c8563865c2f23947c2685edd7b",
"command": null,
"user": null,
"dockerLabels": null,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "node_logs",
"awslogs-region": "us-east-1"
}
},
"cpu": 1,
"privileged": null,
"memoryReservation": null
}
],
"placementConstraints": [],
"volumes": [],
"family": "node"
}
Tasks are all stopped with the status "Task failed ELB health checks in (target-group ...)". Health checks pass 2 or 3 times before they start failing, and there's no record of anything other than an HTTP 200 in the logs.
I was using an old version of the mongo driver (~2.0) and keeping connections to more than one database. When I upgraded the driver, the issue went away.
"dependencies": {
"mongodb": ">=2.2"
}
I can only assume that there was a bug in the driver.