Must include AWSEBDockerrunVersion key in the Dockerrun.aws.json file - node.js

I'm trying to move my Docker Compose application to Elastic Beanstalk and running into some issues.
I've been struggling with this for about a week now; I've come pretty far, but there are still some big issues. I converted my docker-compose.yml to a Dockerrun.aws.json using container-transform:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "entryPoint": [
        "/client/entrypoint.sh"
      ],
      "essential": true,
      "memory": 512,
      "image": "nodejs",
      "links": [
        "server_dans_backend:server_dans_backend"
      ],
      "name": "client_dans_backend",
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 3000
        }
      ]
    },
    {
      "environment": [
        {
          "name": "POSTGRES_DB",
          "value": "ABC"
        },
        {
          "name": "POSTGRES_USER",
          "value": "ABC"
        },
        {
          "name": "POSTGRES_PASSWORD",
          "value": "ABC"
        },
        {
          "name": "POSTGRES_HOST",
          "value": "ABC"
        }
      ],
      "essential": true,
      "image": "postgres:14-alpine",
      "memory": 512,
      "name": "db_dans_backend",
      "portMappings": [
        {
          "containerPort": 5432,
          "hostPort": 5432
        }
      ]
    },
    {
      "essential": true,
      "image": "nginx:alpine",
      "memory": 512,
      "links": [
        "server_dans_backend",
        "client_dans_backend"
      ],
      "name": "nginx_dans_backend",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ]
    },
    {
      "entryPoint": [
        "/app/server/entrypoint.sh"
      ],
      "essential": true,
      "image": "alpine:python",
      "memory": 512,
      "links": [
        "db_dans_backend:db_dans_backend"
      ],
      "name": "server_dans_backend",
      "portMappings": [
        {
          "containerPort": 8000,
          "hostPort": 8000
        }
      ]
    }
  ]
}
Pretty straightforward: Node (Next.js), Python (Django), Nginx, and Postgres.
My problem is this: it doesn't work in production, and whenever I try eb local run I get the following error:
ERROR: ValidationError - The AWSEBDockerrunVersion key in the Dockerrun.aws.json file is not valid or is not included.
Even weirder, when I actually run eb deploy I get this:
Instance deployment: 'Dockerrun.aws.json' in your source bundle specifies an unsupported version. Elastic Beanstalk only supports version 1 for non compose app and version 3 for compose app. The deployment failed.
But there is no version 3 for this file format.
I'm not sure why this is a problem, though, since the key is clearly included. I read it could be an issue if your EB platform isn't the multi-Docker one, but I believe my platform is correct.
When I run eb platform show I get the following:
64bit Amazon Linux 2 v3.4.16 running Docker
which I believe is valid; the only other option would be the ECS + EB option, which I don't believe works with eb local run anyway.
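For reference, the "version 1" that the error message mentions is the single-container schema, which can only describe one image. A rough sketch with made-up placeholder values (the image name and port here are illustrative only, not something from my project):
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "account-id.dkr.ecr.us-east-1.amazonaws.com/some-image:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ]
}
That format obviously can't express four linked containers, which is presumably why container-transform emitted version 2 in the first place.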
Thank you in advance; I've been really struggling with this.

Related

Add volumes and files to a container in Azure using Pulumi

I'm starting to use Pulumi for container deployments in the Azure cloud.
At the moment I'm facing problems because I need to load some configuration files into a Traefik container, but I can't find the correct way to do it. The idea is that Traefik works as a reverse proxy for the other containers in the group.
My problem is that no matter how I specify the creation of a volume and try to attach it to the container, when I go to the Azure dashboard it shows that the container does not have any volume attached.
import pulumi
import pulumi_azure_nextgen as azure

data_rg = azure.resources.latest.ResourceGroup(
    "data-rg",
    resource_group_name="data-rg",
    location="West Europe")

datahike_group = azure.containerinstance.latest.ContainerGroup(
    "data-group",
    location="West Europe",
    container_group_name="data-cg",
    resource_group_name=data_rg.name,
    containers=[
        {
            "name": "data",
            "image": "wordpress:latest",
            "resources": {
                "requests": {"cpu": 0.5, "memory_in_gb": 1.5}
            },
        },
        {
            "name": "proxy",
            "image": "traefik:latest",
            "resources": {
                "requests": {"cpu": 0.5, "memory_in_gb": 1.5}
            },
            "ports": [{
                "port": 80,
                "protocol": "TCP",
            }],
            "VolumeMount": [{
                "mount_path": "/etc/traefik/config_base.yml",
                "name": "traefik-volume",
            }],
            "environment_variables": [
                {
                    "name": "TRAEFIK_CONFIG_FILE",
                    "value": "file"
                },
                {
                    "name": "TRAEFIK_CONFIG_PATH",
                    "value": "/etc/traefik/config_base.yml"
                }
            ],
        },
    ],
    ip_address={
        "dnsNameLabel": "dnsnamelabel1",
        "ports": [{
            "port": 80,
            "protocol": "TCP",
        }],
        "type": "Public",
    },
    volumes=[
        {
            "emptyDir": {},
            "name": "datahike-volume",
        },
        {
            "name": "traefik-volume",
            "secret": {
                "secretKey1": "SecretValue1InBase64",
            },
        },
    ],
    os_type="Linux",
    tags={
        "environment": "testing",
    })

pulumi.export("data_ip", data_group.ip_address)
Does anyone know why it's failing?
In this case, the error was due to a typo:
"volumeMounts": [{
"mount_path": "/etc/traefik/config_base.yml",
"name": "traefik-volume",
}],

Problems with container definition in AWS to work with secrets

I have a container definition to populate tasks in a cluster, like the one below. I'm trying just two things; first of all, with a command I want to write a simple hello to my index.html:
[
  {
    "name": "cb-app",
    "image": "${app_image}",
    "cpu": ${fargate_cpu},
    "memory": ${fargate_memory},
    "networkMode": "awsvpc",
    "command": [
      "bin/sh -c \"echo 'hola222' > /usr/share/nginx/html/index.html\""
    ],
    "entryPoint": [
      "sh",
      "-c"
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/cb-app",
        "awslogs-region": "${aws_region}",
        "awslogs-stream-prefix": "ecs"
      }
    },
    "secrets": [
      {
        "name": "USERNAME2_VALUE",
        "valueFrom": "arn:aws:secretsmanager:xxxxx:xxxxxxx:secret:USERNAME2_VALUE-ipilBA"
      }
    ],
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      },
      {
        "containerPort": 22,
        "hostPort": 22
      }
    ]
  }
]
I have an ECR repository with a simple nginx-alpine image. If I work without the entryPoint and command, it works fine and shows the web server's default page, but when I add the entryPoint and command it doesn't work and I don't know why. I am using Fargate. Could you help me? Thanks a lot.
When I set the command and entryPoint, all tasks stop with an exitCode of 0.
This is what I have, and I don't get any kind of log.
As well, I get STOPPED (Essential container in task exited) on the tasks that were stopped.
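If I understand the mechanics correctly, with an entryPoint of sh -c the container's main process is just that one shell command, so once the echo finishes the shell exits with code 0 and ECS stops the task because the container is essential. A rough sketch of what I think the relevant fields would look like if the goal were to write the file and then keep nginx in the foreground (the chained nginx -g 'daemon off;' is a guess on my part, not something from my current definition):
{
  "name": "cb-app",
  "image": "${app_image}",
  "entryPoint": ["sh", "-c"],
  "command": [
    "echo 'hola222' > /usr/share/nginx/html/index.html && exec nginx -g 'daemon off;'"
  ]
}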

Application in ACS (DCOS) on any port other than 80

How can I host an application in ACS (DCOS) on any port other than 80? Can I use some other URL instead of a port number to access it?
{
  "id": "/dockercloud-hello-world",
  "cmd": null,
  "cpus": 0.1,
  "mem": 128,
  "disk": 0,
  "instances": 2,
  "acceptedResourceRoles": [
    "*"
  ],
  "container": {
    "type": "DOCKER",
    "volumes": [],
    "docker": {
      "image": "dockercloud/hello-world",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 0,
          "servicePort": 10000,
          "protocol": "tcp",
          "labels": {}
        }
      ],
      "privileged": false,
      "parameters": [],
      "forcePullImage": true
    }
  },
  "portDefinitions": [
    {
      "port": 10000,
      "protocol": "tcp",
      "name": "default",
      "labels": {}
    }
  ]
}
The application is available on port 4170 according to Marathon.
I am unable to access it from the agent's FQDN:port.
Yes, it is possible.
First, you need to modify the hostPort value to 4170 and acceptedResourceRoles to slave_public.
Then you need to open port 4170 in the agent node's NSG.
You also need to open the port on the agent node's load balancer:
1. Add health probes.
2. Add load balancing rules.
For more information about this, please check this link.
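Putting those changes together, the relevant parts of the app definition would look roughly like this (only the fields mentioned above are shown; everything else in the original definition stays the same):
{
  "acceptedResourceRoles": [
    "slave_public"
  ],
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "dockercloud/hello-world",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 4170,
          "servicePort": 10000,
          "protocol": "tcp"
        }
      ]
    }
  }
}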

How do I run a node container on AWS ECS without exiting

I'm struggling to keep my node.js container running on ECS. It runs fine when I run it locally with Docker Compose, but on ECS it runs for 2-3 minutes and handles a few connections (2-3 health checks from the load balancer), then shuts down, and I can't work out why.
My Dockerfile -
FROM node:6.10
RUN npm install -g nodemon \
    && npm install forever-monitor \
       winston \
       express-winston
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
Then in my package.json -
{
  ...
  "main": "forever.js",
  "dependencies": {
    "mongodb": "~2.0",
    "abbajs": ">=0.1.4",
    "express": ">=4.15.2"
  }
  ...
}
In my docker-compose.yml I run with nodemon -
node:
  ...
  command: nodemon
In my CloudWatch logs I can see everything start -
14:20:24 npm info lifecycle my_app#1.0.0~start: my_app#1.0.0
Then I see the health check requests (all with HTTP 200s), then a bit later it all wraps up -
14:23:00 npm info lifecycle mapov_reporting#1.0.0~poststart: mapov_reporting#1.0.0
14:23:00 npm info ok
I've tried wrapping my start.js script in forever-monitor, but that doesn't seem to be making any difference.
UPDATE
My ECS task definition -
{
  "requiresAttributes": [
    {
      "value": null,
      "name": "com.amazonaws.ecs.capability.ecr-auth",
      "targetId": null,
      "targetType": null
    },
    {
      "value": null,
      "name": "com.amazonaws.ecs.capability.logging-driver.awslogs",
      "targetId": null,
      "targetType": null
    },
    {
      "value": null,
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19",
      "targetId": null,
      "targetType": null
    }
  ],
  "taskDefinitionArn": "arn:aws:ecs:us-east-1:562155596068:task-definition/node:12",
  "networkMode": "bridge",
  "status": "ACTIVE",
  "revision": 12,
  "taskRoleArn": null,
  "containerDefinitions": [
    {
      "volumesFrom": [],
      "memory": 128,
      "extraHosts": null,
      "dnsServers": null,
      "disableNetworking": null,
      "dnsSearchDomains": null,
      "portMappings": [
        {
          "hostPort": 0,
          "containerPort": 3000,
          "protocol": "tcp"
        }
      ],
      "hostname": null,
      "essential": true,
      "entryPoint": null,
      "mountPoints": [],
      "name": "node",
      "ulimits": null,
      "dockerSecurityOptions": null,
      "environment": [
        {
          "name": "awslogs-group",
          "value": "node_logs"
        },
        {
          "name": "awslogs-region",
          "value": "us-east-1"
        },
        {
          "name": "NODE_ENV",
          "value": "production"
        }
      ],
      "links": null,
      "workingDirectory": null,
      "readonlyRootFilesystem": null,
      "image": "562155596068.dkr.ecr.us-east-1.amazonaws.com/node:06b5a3700df163c8563865c2f23947c2685edd7b",
      "command": null,
      "user": null,
      "dockerLabels": null,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "node_logs",
          "awslogs-region": "us-east-1"
        }
      },
      "cpu": 1,
      "privileged": null,
      "memoryReservation": null
    }
  ],
  "placementConstraints": [],
  "volumes": [],
  "family": "node"
}
Tasks are all stopped with the status Task failed ELB health checks in (target-group .... Health checks pass 2 or 3 times before they start failing, and there's no record of anything other than an HTTP 200 in the logs.
I was using an old version of the mongo driver (~2.0) and keeping connections to more than one DB. When I upgraded the driver, the issue went away.
"dependencies": {
"mongodb": ">=2.2"
}
I can only assume that there was a bug in the driver.

Can't run docker container on Marathon with network HOST?

I am trying to run some Cassandra instances (Docker containers) on Marathon.
The following description works well:
{
  "id": "cassandra",
  "constraints": [["hostname", "CLUSTER", "docker-sl-vm"]],
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "cassandra:latest",
      "network": "BRIDGE",
      "portMappings": [
        {"containerPort": 9042, "hostPort": 0, "servicePort": 9042, "protocol": "tcp"}
      ]
    }
  },
  "env": {
    "CASSANDRA_SEED_COUNT": "1"
  },
  "cpus": 0.5,
  "mem": 512.0,
  "instances": 1,
  "backoffSeconds": 1,
  "backoffFactor": 1.15,
  "maxLaunchDelaySeconds": 3600
}
However, I was following a tutorial that uses the following description:
{
  "id": "cassandra-seed",
  "constraints": [
    [
      "hostname",
      "UNIQUE"
    ]
  ],
  "ports": [
    7199,
    7000,
    7001,
    9160,
    9042
  ],
  "requirePorts": true,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "cassandra:latest",
      "network": "HOST",
      "privileged": true
    }
  },
  "env": {
    "CASSANDRA_SEED_COUNT": "1"
  },
  "cpus": 0.5,
  "mem": 512,
  "instances": 2,
  "backoffSeconds": 1,
  "backoffFactor": 1.15,
  "maxLaunchDelaySeconds": 3600,
  "healthChecks": [
    {
      "protocol": "TCP",
      "gracePeriodSeconds": 30,
      "intervalSeconds": 30,
      "portIndex": 4,
      "timeoutSeconds": 60,
      "maxConsecutiveFailures": 30
    }
  ],
  "upgradeStrategy": {
    "minimumHealthCapacity": 0.5,
    "maximumOverCapacity": 0.2
  }
}
PROBLEM
If I try to use the second Marathon description, it takes forever and never loads. It just gets stuck on deploying and does not give me any error in the DEBUG section.
P.S.: I am running the Mesos cluster in a VirtualBox Ubuntu Trusty guest.
UPDATE ========================================
I've erased the logs and tried to run it again. The log result is shown below:
Content of mesos-slave.docker-sl-vm.invalid-user.log.INFO.20151110-130520.2713
