I'm struggling to keep my Node.js container running on ECS. It runs fine when I run it locally with Docker Compose, but on ECS it runs for 2-3 minutes, handles a few connections (2-3 health checks from the load balancer), and then shuts down. I can't work out why.
My Dockerfile -
FROM node:6.10
RUN npm install -g nodemon \
&& npm install forever-monitor \
winston \
express-winston
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
Then in my package.json -
{
...
"main": "forever.js",
"dependencies": {
"mongodb": "~2.0",
"abbajs": ">=0.1.4",
"express": ">=4.15.2"
}
...
}
In my docker-compose.yml I run with nodemon -
node:
...
command: nodemon
In my cloudWatch logs I can see everything start -
14:20:24 npm info lifecycle my_app@1.0.0~start: my_app@1.0.0
Then I see the health check requests (all returning HTTP 200s), and a bit later it all wraps up -
14:23:00 npm info lifecycle mapov_reporting@1.0.0~poststart: mapov_reporting@1.0.0
14:23:00 npm info ok
I've tried wrapping my start.js script in forever-monitor, but that doesn't seem to make any difference.
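For reference, forever.js is a thin forever-monitor wrapper around start.js, roughly like this (a sketch; the option values here are illustrative rather than my exact file):
var forever = require('forever-monitor');

// Restart start.js if it exits, up to a limited number of times.
var child = new (forever.Monitor)('start.js', {
  max: 10,
  silent: false,
  args: []
});

child.on('exit', function () {
  console.log('start.js has exited after 10 restarts');
});

child.start();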
UPDATE
My ECS task definition -
{
"requiresAttributes": [
{
"value": null,
"name": "com.amazonaws.ecs.capability.ecr-auth",
"targetId": null,
"targetType": null
},
{
"value": null,
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs",
"targetId": null,
"targetType": null
},
{
"value": null,
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19",
"targetId": null,
"targetType": null
}
],
"taskDefinitionArn": "arn:aws:ecs:us-east-1:562155596068:task-definition/node:12",
"networkMode": "bridge",
"status": "ACTIVE",
"revision": 12,
"taskRoleArn": null,
"containerDefinitions": [
{
"volumesFrom": [],
"memory": 128,
"extraHosts": null,
"dnsServers": null,
"disableNetworking": null,
"dnsSearchDomains": null,
"portMappings": [
{
"hostPort": 0,
"containerPort": 3000,
"protocol": "tcp"
}
],
"hostname": null,
"essential": true,
"entryPoint": null,
"mountPoints": [],
"name": "node",
"ulimits": null,
"dockerSecurityOptions": null,
"environment": [
{
"name": "awslogs-group",
"value": "node_logs"
},
{
"name": "awslogs-region",
"value": "us-east-1"
},
{
"name": "NODE_ENV",
"value": "production"
}
],
"links": null,
"workingDirectory": null,
"readonlyRootFilesystem": null,
"image": "562155596068.dkr.ecr.us-east-1.amazonaws.com/node:06b5a3700df163c8563865c2f23947c2685edd7b",
"command": null,
"user": null,
"dockerLabels": null,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "node_logs",
"awslogs-region": "us-east-1"
}
},
"cpu": 1,
"privileged": null,
"memoryReservation": null
}
],
"placementConstraints": [],
"volumes": [],
"family": "node"
}
Tasks are all stopped with the status Task failed ELB health checks in (target-group .... Health checks pass 2 or 3 times before they start failing, and there's no record of anything other than an HTTP 200 in the logs.
I was using an old version of the mongo driver (~2.0) and keeping connections open to more than one database. When I upgraded the driver, the issue went away.
"dependencies": {
"mongodb": ">=2.2"
}
I can only assume that there was a bug in the driver.
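For reference, the connection code with the upgraded driver is along these lines (a sketch only; the URL and database names are placeholders rather than the real app code):
var MongoClient = require('mongodb').MongoClient;

// Placeholder URL; the real app reads its connection details from the environment.
var url = 'mongodb://my-mongo-host:27017/first_db';

MongoClient.connect(url, function (err, db) {
  if (err) {
    console.error('mongo connection failed', err);
    process.exit(1); // exit so ECS replaces the task instead of limping on
  }

  // Reuse the same underlying connection pool for the second database
  // rather than opening a separate connection per db.
  var otherDb = db.db('second_db');

  // ... hand db and otherDb to the rest of the app ...
});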
Related
I'm following the steps in this tutorial. I'm having trouble executing this CLI command:
az container create \
--name docks \
--resource-group MyResourceGroup \
--ip-address Public \
--image jenkins/inbound-agent:latest \
--os-type linux \
--ports 80 \
--command-line "jenkins-agent -url http://jenkinsServer:8080 secret agentName"
It gives the following output:
{
"containers": [
{
"command": [
"jenkins-agent",
"-url",
"http://jenkinsServer:8080",
"secret",
"agentName"
],
"environmentVariables": [],
"image": "jenkins/inbound-agent:latest",
"instanceView": {
"currentState": {
"detailStatus": "CrashLoopBackOff: Back-off restarting failed",
"exitCode": null,
"finishTime": null,
"startTime": null,
"state": "Waiting"
},
"events": [
{
"count": 1,
"firstTimestamp": "2022-09-07T16:57:57+00:00",
"lastTimestamp": "2022-09-07T16:57:57+00:00",
"message": "pulling image \"jenkins/inbound-agent#sha256:f495769bfc767bc77f6c2f8268a734dbac98249449f139f95fc434eb26c6489a\"",
"name": "Pulling",
"type": "Normal"
},
{
"count": 1,
"firstTimestamp": "2022-09-07T16:59:00+00:00",
"lastTimestamp": "2022-09-07T16:59:00+00:00",
"message": "Successfully pulled image \"jenkins/inbound-agent#sha256:f495769bfc767bc77f6c2f8268a734dbac98249449f139f95fc434eb26c6489a\"",
"name": "Pulled",
"type": "Normal"
},
{
"count": 2,
"firstTimestamp": "2022-09-07T16:59:57+00:00",
"lastTimestamp": "2022-09-07T17:00:18+00:00",
"message": "Started container",
"name": "Started",
"type": "Normal"
},
{
"count": 1,
"firstTimestamp": "2022-09-07T17:00:08+00:00",
"lastTimestamp": "2022-09-07T17:00:08+00:00",
"message": "Killing container with id XXXXXXXXXXXXXXXXXXXXXXX.",
"name": "Killing",
"type": "Normal"
}
],
"previousState": {
"detailStatus": "Error",
"exitCode": 255,
"finishTime": "2022-09-07T17:00:29.169000+00:00",
"startTime": "2022-09-07T17:00:18.785000+00:00",
"state": "Terminated"
},
"restartCount": 1
},
"livenessProbe": null,
"name": "docks",
"ports": [
{
"port": 80,
"protocol": "TCP"
}
],
"readinessProbe": null,
"resources": {
"limits": null,
"requests": {
"cpu": 1.0,
"gpu": null,
"memoryInGb": 1.5
}
},
"volumeMounts": null
}
],
"diagnostics": null,
"dnsConfig": null,
"encryptionProperties": null,
"id": "/subscriptions/azureSub/resourceGroups/MyResourceGroup/providers/Microsoft.ContainerInstance/containerGroups/docks",
"identity": null,
"imageRegistryCredentials": null,
"initContainers": [],
"instanceView": {
"events": [],
"state": "Running"
},
"ipAddress": {
"dnsNameLabel": null,
"fqdn": null,
"ip": "XX.XXX.XXX.XX",
"ports": [
{
"port": 80,
"protocol": "TCP"
}
],
"type": "Public"
},
"location": "westeurope",
"name": "docks",
"osType": "Linux",
"provisioningState": "Succeeded",
"resourceGroup": "MyResourceGroup",
"restartPolicy": "Always",
"sku": "Standard",
"subnetIds": null,
"tags": {},
"type": "Microsoft.ContainerInstance/containerGroups",
"volumes": null,
"zones": null
}
As you can see, it exits with error code 255, but I haven't found anything related to it yet.
I also tried to change the --command-line to:
java -jar agent.jar -jnlpUrl http://jenkinsServer:8080 secret agentName
But the same output appears.
The command does create the container, but it keeps restarting indefinitely (it starts and then fails).
The Jenkins server is in a Linux VM set up by following this tutorial.
How can I make a Jenkins agent for that VM run in a Docker container image using Azure?
When I tried to reproduce the issue, I noted that there are several ways to end up with a CrashLoopBackOff error.
1) ENVIRONMENT VARIABLES SETUP
CrashLoopBackOff will occur when the environment variables are set incorrectly.
Please check whether ENV_PATH is set correctly; type ENV in the Azure CLI or PowerShell to inspect it.
2) INSTALLING THE CORRECT SOFTWARE VERSION
Please check which Java version you have installed.
If it is JDK 8, update it to 11.0.16.1 and it should work:
apt-get update -y
apt-get install openjdk-11-jdk
After installing Java, I followed the steps in the MS docs and created the container instance successfully:
az container create \
--name docks \
--resource-group jenkins-get-started-rgz \
--ip-address Public \
--image jenkins/inbound-agent:latest \
--os-type linux \
--ports 80 \
--command-line "jenkins-agent -url http://jenkinsServer:8080 JENKINS_SECRET AGENT_NAME"
NOTE:
An exit code of 0 means the container instance was created successfully.
An exit code between 1 and 255 indicates an error.
I'm trying to move my Docker Compose application to Elastic Beanstalk and having some issues.
I've been struggling with this for about a week now; I've come pretty far but still have some big problems. I converted my docker-compose.yml to a Dockerrun.aws.json using container-transform:
{
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"entryPoint": [
"/client/entrypoint.sh"
],
"essential": true,
"memory": 512,
"image": "nodejs",
"links": [
"server_dans_backend:server_dans_backend"
],
"name": "client_dans_backend",
"portMappings": [
{
"containerPort": 3000,
"hostPort": 3000
}
]
},
{
"environment": [
{
"name": "POSTGRES_DB",
"value": "ABC"
},
{
"name": "POSTGRES_USER",
"value": "ABC"
},
{
"name": "POSTGRES_PASSWORD",
"value": "ABC"
},
{
"name": "POSTGRES_HOST",
"value": "ABC"
}
],
"essential": true,
"image": "postgres:14-alpine",
"memory": 512,
"name": "db_dans_backend",
"portMappings": [
{
"containerPort": 5432,
"hostPort": 5432
}
]
},
{
"essential": true,
"image": "nginx:alpine",
"memory": 512,
"links": [
"server_dans_backend",
"client_dans_backend"
],
"name": "nginx_dans_backend",
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
]
},
{
"entryPoint": [
"/app/server/entrypoint.sh"
],
"essential": true,
"image": "alpine:python",
"memory": 512,
"links": [
"db_dans_backend:db_dans_backend"
],
"name": "server_dans_backend",
"portMappings": [
{
"containerPort": 8000,
"hostPort": 8000
}
]
}
]
}
Pretty straightforward: Node (Next.js), Python (Django), Nginx, and Postgres.
My problem is this: it doesn't work in production, and whenever I try eb local run I get the following error:
ERROR: ValidationError - The AWSEBDockerrunVersion key in the Dockerrun.aws.json file is not valid or is not included.
Even weirder, when I actually run eb deploy I get this:
Instance deployment: 'Dockerrun.aws.json' in your source bundle specifies an unsupported version. Elastic Beanstalk only supports version 1 for non compose app and version 3 for compose app. The deployment failed.
But there is no version 3 of this file format.
I'm not sure why this is a problem, since the key is clearly included. I read it could be an issue if your EB platform isn't the multi-container Docker platform, but I believe my platform is correct.
When I run eb platform show I get the following:
64bit Amazon Linux 2 v3.4.16 running Docker
which I believe is valid; the only other option would be the ECS + EB option, which I don't believe works with eb local run anyway.
Thank you in advance; I've been really struggling with this.
I am in the process of deploying my .bna file to Fabric. I have been testing and prototyping it successfully on the Bluemix playground; however, when I try to install the business network to Fabric I get the following error.
> Error: Error trying install business network.
>Error: No valid responses from any peers.
>Response from attempted peer comms was an error:
>Error: 14 UNAVAILABLE: Connect Failed
Command failed
**These are the steps I took**
1. Launch your Fabric network
> ./startFabric.sh
2. Create the PeerAdmin card
> ./createPeerAdminCard.sh
3. Install the network application to Fabric
> composer network install -a dist/bna.bna -c PeerAdmin@hlfv1
**This step is where I get the error**
✖ Installing business network. This may take a minute...
Error: Error trying install business network. Error: No valid responses from any peers.
Response from attempted peer comms was an error: Error: 14 UNAVAILABLE: Connect Failed
Command failed
**Details of my env**
Node Version: v8.11.3
Docker version: 18.03
Composer version: v0.19.12
Docker PS:
[Docker PS Screen shot][1]
[1]: https://i.stack.imgur.com/HQGBf.png
Any help is really appreciated.
UPDATE
Connection.json for hlfv1
{
"name": "hlfv1",
"x-type": "hlfv1",
"x-commitTimeout": 300,
"version": "1.0.0",
"client": {
"organization": "Org1",
"connection": {
"timeout": {
"peer": {
"endorser": "300",
"eventHub": "300",
"eventReg": "300"
},
"orderer": "300"
}
}
},
"channels": {
"composerchannel": {
"orderers": [
"orderer.example.com"
],
"peers": {
"peer0.org1.example.com": {}
}
}
},
"organizations": {
"Org1": {
"mspid": "Org1MSP",
"peers": [
"peer0.org1.example.com"
],
"certificateAuthorities": [
"ca.org1.example.com"
]
}
},
"orderers": {
"orderer.example.com": {
"url": "grpc://localhost:7050"
}
},
"peers": {
"peer0.org1.example.com": {
"url": "grpc://localhost:7051",
"eventUrl": "grpc://localhost:7053"
}
},
"certificateAuthorities": {
"ca.org1.example.com": {
"url": "http://localhost:7054",
"caName": "ca.org1.example.com"
}
}
}
hlfv11 vs hlfv1
I noticed, when looking in the fabric-scripts, that there are two components: hlfv11 and hlfv1.
(Screenshot of the fabric-tools directory.)
When I run startFabric.sh I see a line saying that Fabric assumes it is "hlfv11" instead of hlfv1.
Any help would be appreciated.
docker inspect peer0.org1.example.com
[
{
"Id": "6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac",
"Created": "2018-07-20T22:49:51.238208735Z",
"Path": "peer",
"Args": [
"node",
"start"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 7506,
"ExitCode": 0,
"Error": "",
"StartedAt": "2018-07-20T22:49:51.543106588Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:b023f9be07714e495e6d41849d7e916434e85580754423ece145866468ad29a9",
"ResolvConfPath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/resolv.conf",
"HostnamePath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/hostname",
"HostsPath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/hosts",
"LogPath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac-json.log",
"Name": "/peer0.org1.example.com",
"RestartCount": 0,
"Driver": "aufs",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/var/run:/host/var/run:rw",
"/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/users:/etc/hyperledger/msp/users:rw",
"/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer:/etc/hyperledger/configtx:rw",
"/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/peer/msp:rw"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "composer_default",
"PortBindings": {
"7051/tcp": [
{
"HostIp": "",
"HostPort": "7051"
}
],
"7053/tcp": [
{
"HostIp": "",
"HostPort": "7053"
}
]
},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
},
"GraphDriver": {
"Data": null,
"Name": "aufs"
},
"Mounts": [
{
"Type": "bind",
"Source": "/var/run",
"Destination": "/host/var/run",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/users",
"Destination": "/etc/hyperledger/msp/users",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer",
"Destination": "/etc/hyperledger/configtx",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp",
"Destination": "/etc/hyperledger/peer/msp",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "6caa83b2a8a5",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"7051/tcp": {},
"7053/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"CORE_LOGGING_LEVEL=debug",
"CORE_CHAINCODE_LOGGING_LEVEL=DEBUG",
"CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock",
"CORE_PEER_ID=peer0.org1.example.com",
"CORE_PEER_ADDRESS=peer0.org1.example.com:7051",
"CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=composer_default",
"CORE_PEER_LOCALMSPID=Org1MSP",
"CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/msp",
"CORE_LEDGER_STATE_STATEDATABASE=CouchDB",
"CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"FABRIC_CFG_PATH=/etc/hyperledger/fabric"
],
"Cmd": [
"peer",
"node",
"start"
],
"Image": "hyperledger/fabric-peer:x86_64-1.1.0",
"Volumes": {
"/etc/hyperledger/configtx": {},
"/etc/hyperledger/msp/users": {},
"/etc/hyperledger/peer/msp": {},
"/host/var/run": {}
},
"WorkingDir": "/opt/gopath/src/github.com/hyperledger/fabric",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"com.docker.compose.config-hash": "d44983248579bb25822020f82382fba01b891c3338b2fe91bb17ac3936126c69",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "composer",
"com.docker.compose.service": "peer0.org1.example.com",
"com.docker.compose.version": "1.21.1",
"org.hyperledger.fabric.base.version": "0.4.6",
"org.hyperledger.fabric.version": "1.1.0"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "5645c1988100b53fa9a8c2d13adc40c43f3995cb808b3eda28771176033b26b4",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"7051/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "7051"
}
],
"7053/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "7053"
}
]
},
"SandboxKey": "/var/run/docker/netns/5645c1988100",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"composer_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"peer0.org1.example.com",
"6caa83b2a8a5"
],
"NetworkID": "d4f496b7b3aeae87d1b1461523bc8620ac34b54d9b3b9f8d31c6cfa7be4da024",
"EndpointID": "a19687702d04e166dc0291dc9ce1130caf5eccf484ece4fd988c13cc2660c8fb",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.5",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:13:00:05",
"DriverOpts": null
}
}
}
}
]
Fixed: I needed to reinstall Hyperledger Fabric, Composer, node, npm, and Docker, and to run "unset ${!DOCKER*}"; there seemed to be a Docker issue.
This error is usually seen when the CLI cannot connect to the Fabric using the addresses specified in the PeerAdmin's connection.json file. Did you download the latest fabric-tools as shown here prior to this?
Sometimes if there is a proxy involved (on a corporate network), there can be some routing failures.
See the answer here, which may help you: Hyperledger composer network install
Error 14 means that Composer can't locate the peers. Your issue is here:
"peers": {
"peer0.org1.example.com": {}
}
you need to write something like:
"peers": {
"peer0.org1.example.com": {
"url": "grpc://localhost:7051",
"eventUrl": "grpc://localhost:7053"
}
}
FIXED:
I uninstalled Docker, node, and npm, reinstalled everything, and made sure to run unset ${!DOCKER*} when first installing Docker for Mac.
I am using AWS ECS and have a container for my frontend (Node app) and for my backend (mongo database).
The mongo container is exposing port 27017, but I cannot figure out how to connect to it from my frontend container. If I try to connect to the db using 'mongodb://localhost:27017/db_name' I get an ECONNREFUSED error.
I have a service running for both of these task definitions with an ALB for the frontend. I don't have them in the same task definition because it doesn't seem optimal to have to scale them together.
I have tried multiple variations of the URL:
mongodb://0.0.0.0:27017/db_name
mongodb://localhost:27017/db_name
If I "curl" the mongo container from within the EC2 instance, I get an empty reply from server.
Database Task Definition:
{
"executionRoleArn": null,
"containerDefinitions": [
{
"dnsSearchDomains": null,
"logConfiguration": null,
"entryPoint": null,
"portMappings": [
{
"hostPort": 27017,
"protocol": "tcp",
"containerPort": 27017
}
],
"command": null,
"linuxParameters": null,
"cpu": 0,
"environment": [
{
"name": "MONGODB_ADMIN_PASS",
"value": <PASSWORD>
},
{
"name": "MONGODB_APPLICATION_DATABASE",
"value": <DB NAME>
},
{
"name": "MONGODB_APPLICATION_PASS",
"value": <PASSWORD>
},
{
"name": "MONGODB_APPLICATION_USER",
"value": <USERNAME>
}
],
"ulimits": null,
"dnsServers": null,
"mountPoints": [],
"workingDirectory": null,
"dockerSecurityOptions": null,
"memory": null,
"memoryReservation": 128,
"volumesFrom": [],
"image": "registry.hub.docker.com/library/mongo:latest",
"disableNetworking": null,
"healthCheck": null,
"essential": true,
"links": null,
"hostname": null,
"extraHosts": null,
"user": null,
"readonlyRootFilesystem": null,
"dockerLabels": null,
"privileged": null,
"name": "mongo"
}
],
"placementConstraints": [],
"memory": null,
"taskRoleArn": null,
"compatibilities": [
"EC2"
],
"taskDefinitionArn": "arn:aws:ecs:us-east-2:821819063141:task-definition/dappy_coin_database:2",
"family": "dappy_coin_database",
"requiresAttributes": [
{
"targetId": null,
"targetType": null,
"value": null,
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.21"
}
],
"requiresCompatibilities": null,
"networkMode": null,
"cpu": null,
"revision": 2,
"status": "ACTIVE",
"volumes": []
}
OLD:
You have to add links: ["mongo"] in the task definition for the Node container (which I assume you have).
Then you can reference mongodb://mongo:...
NEW:
I just saw that you want them in separate task definitions. That adds a lot of complexity, and I want to dissuade you from this path, because you are facing options like an ELB, service discovery via DNS, or the ambassador container pattern (per this answer; if that is all you wanted, this question is a dupe). If you have to do it, see that answer, and weep.
Maybe you would consider deploying your Node app as a single-container Elastic Beanstalk app, and connecting it to MongoDB Atlas? That way you get load balancing, auto-scaling, monitoring, all built in, instead of needing to do it yourself.
Or, at least, you could use AWS Fargate. It is a launch type of ECS that handles more of the infrastructure and networking for you. To quote the docs:
links are not allowed as they are a property of the “bridge” network mode (and are now a legacy feature of Docker). Instead, containers share a network namespace and communicate with each other over the localhost interface. They can be referenced using the following:
localhost/127.0.0.1:<some_port_number>
Where in this case, some_port_number = 27017.
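A rough sketch of the connection URL from the Node app under each of these options (the hostnames and the MONGODB_URI environment variable are illustrative assumptions, not something already in your task definitions):
var MongoClient = require('mongodb').MongoClient;

// Same task definition, bridge mode with links: the link alias is the hostname.
var linkedUrl = 'mongodb://mongo:27017/db_name';

// Same task definition on Fargate (awsvpc mode): containers share localhost.
var fargateUrl = 'mongodb://localhost:27017/db_name';

// Separate services: use a stable address instead, e.g. service discovery DNS
// or a managed MongoDB (Atlas) URI injected via the task definition environment.
var url = process.env.MONGODB_URI || fargateUrl;

MongoClient.connect(url, function (err, db) {
  if (err) throw err;
  // ... hand the db object to the rest of the app ...
});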
I have a problem with volume mounting in Docker. I simply want to save pictures and return them to the front end.
This is the Dockerfile:
FROM node:boron
WORKDIR /app
COPY . .
RUN npm install --production
RUN mkdir -p /app/public
VOLUME ["/app/public"]
CMD yum install imagemagick
# if we don't use this specific form, SIGINT/SIGTERM doesn't get forwarded
CMD node server.js
I'm deploying with skyliner.io.
Inspecting my image, I get:
[
{
"Id": "sha256:598085445f82a8324f41842a7ac4f93a55b009d93bfaf07e7ce7b8a4bc5918d9",
"RepoTags": [
"thurst-back-end:latest"
],
"RepoDigests": [],
"Parent": "",
"Comment": "",
"Created": "2017-01-09T16:05:50.958866532Z",
"Container": "85457fb45353305715ea72297187fd6b88a019aa369426428c536a6a80450206",
"ContainerConfig": {
"Hostname": "45f28166fed1",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NPM_CONFIG_LOGLEVEL=info",
"NODE_VERSION=6.9.4"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) CMD [\"/bin/sh\" \"-c\" \"node server.js\"]"
],
"ArgsEscaped": true,
"Image": "sha256:64249ddf0e9111ef191b1fb02d1af3ae2c7735f0509169a8e5fa6bc980a463ba",
"Volumes": {
"/app/public": {}
},
"WorkingDir": "/app",
"Entrypoint": null,
"OnBuild": [],
"Labels": {}
},
"DockerVersion": "1.11.2",
"Author": "",
"Config": {
"Hostname": "45f28166fed1",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NPM_CONFIG_LOGLEVEL=info",
"NODE_VERSION=6.9.4"
],
"Cmd": [
"/bin/sh",
"-c",
"node server.js"
],
"ArgsEscaped": true,
"Image": "sha256:64249ddf0e9111ef191b1fb02d1af3ae2c7735f0509169a8e5fa6bc980a463ba",
"Volumes": {
"/app/public": {}
},
"WorkingDir": "/app",
"Entrypoint": null,
"OnBuild": [],
"Labels": {}
},
"Architecture": "amd64",
"Os": "linux",
"Size": 700375224,
"VirtualSize": 700375224,
"GraphDriver": {
"Name": "overlay",
"Data": {
"RootDir": "/var/lib/docker/overlay/739c2f7ee799c2ec0e75beb02c24c084aa9545fa6f1680b6a65062bf5d6133e8/root"
}
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:b6ca02dfe5e62c58dacb1dec16eb42ed35761c15562485f9da9364bb7c90b9b3",
"sha256:60a0858edcd5aad240966e33389850e4328de4cfb5282977eddda56bffc7f95f",
"sha256:53c779688d06353f7ba4fd7ce1d43ce146ad0278ebead0feea1846383c730024",
"sha256:0a5e2b2ddeaa749d95730bad9be3e3a472ff6f80544da0082a99ba569df34ff3",
"sha256:fa18e5ffd316beb0c4c929ea1fff8d559a73a366f30f1004bb06af3e9f800696",
"sha256:604c78617f347c58e4ce0021f47928b7df3d799ea7c5e9367fa5a800e473dc06",
"sha256:6a73c39a0ab65b5e2da69b9013fc7f50c8bf5be27c0cf5fb3b642a247a8993ca",
"sha256:b7ce32b271bee3f3c614232448a4308cdfc4a2bf6f8db1436f51cb74ae5c15dc",
"sha256:a276062d9f56b85bf34797301d74b761970c3e6ce0ccd3525f4535e675a0974e",
"sha256:2f616e13f894a3a5c4dc33cbbcce345c51a704d56a70396cacdfb2e96e2ff9df",
"sha256:c6dfd7a877dba2837cc46e906cde9aa6e1cc5f89c9c65cefa81f130d59e2c7ac"
]
}
}
]
Next, the command I ran to understand the problem:
$ docker volume ls
DRIVER VOLUME NAME
local 2fe327f9a9d82d7ddad72e8d9dcda76e3212653e100c24453de9edbbf60fbe53
And also:
$ docker volume inspect 2fe327f9a9d82d7ddad72e8d9dcda76e3212653e100c24453de9edbbf60fbe53
[
{
"Name": "2fe327f9a9d82d7ddad72e8d9dcda76e3212653e100c24453de9edbbf60fbe53",
"Driver": "local",
"Mountpoint": "/var/lib/docker/volumes/2fe327f9a9d82d7ddad72e8d9dcda76e3212653e100c24453de9edbbf60fbe53/_data",
"Labels": null
}
]
When I run the project outside a container, everything works fine: files are saved to /public/images/:id/:id-user.jpg.
But when I run the project in Docker, the files end up in /var/lib/docker/overlay/0a2bdfae85072dce01e470eb71f1199ab23d90eb6f9e573d6a65e06d3d387cce/upper/app/public/images.
I'm not sure I understand it correctly, but could it be because your app writes to the path /public?
You say that when you run outside a container you get /public/images/..., but your volume is /app/public, which is a different path, so you end up writing into the container's filesystem rather than into the volume you declared.
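A minimal sketch of what I mean, assuming an Express-style server.js (the question doesn't show the app code, so the route and helper names here are hypothetical): build every file path from the app directory, so writes land under /app/public, which is the declared volume:
var path = require('path');
var express = require('express');

var app = express();

// __dirname is /app inside the container, so this resolves to /app/public,
// matching the VOLUME declared in the Dockerfile (not a root-level /public).
var PUBLIC_DIR = path.join(__dirname, 'public');
var IMAGES_DIR = path.join(PUBLIC_DIR, 'images');

// Serve saved pictures back to the front end.
app.use('/images', express.static(IMAGES_DIR));

// Hypothetical upload handler: always build the target path from IMAGES_DIR.
app.post('/images/:id', function (req, res) {
  var target = path.join(IMAGES_DIR, req.params.id, req.params.id + '-user.jpg');
  // ... write the uploaded image to `target` here ...
  res.json({ saved: target });
});

app.listen(3000);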