I am using SAM (Serverless Application Model) to locally test Lambda functions that connect to an Aurora RDS instance in the cloud.
Using the following command:
sam local invoke "lambda function name" --event event.json
The Lambda function executes, but when it comes to returning the SQL results, the output is null.
How can I configure the Docker container to communicate with the RDS instance?
As mentioned in the help for sam local invoke, you can connect your Docker container to an existing Docker network:
▶ sam local invoke --help
...
--docker-network TEXT Specifies the name or id of an existing
docker network to lambda docker containers
should connect to, along with the default
bridge network. If not specified, the Lambda
containers will only connect to the default
bridge docker network.
So, to list your Docker networks:
▶ docker network ls
NETWORK ID NAME DRIVER SCOPE
25a03c8453a6 bridge bridge local
00de89cf09d0 host host local
41597d91a389 none null local
Then, to connect your Lambda function's Docker container to the host network:
▶ sam local invoke "lambda function name" --event event.json \
--docker-network 00de89cf09d0
Note that you can also use the environment variable SAM_DOCKER_NETWORK:
▶ SAM_DOCKER_NETWORK=00de89cf09d0 sam local invoke "lambda function name" \
--event event.json
As mentioned in the SAM CLI documentation.
Assuming the host network can access the RDS instance, that should fix your problem.
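If you want to verify that assumption first, a quick reachability check from your host is enough; the endpoint below is a placeholder for your own Aurora cluster endpoint, and the port is 3306 for MySQL-compatible Aurora (5432 for PostgreSQL-compatible):
▶ nc -zv mycluster.cluster-abc123.us-east-1.rds.amazonaws.com 3306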
Pass --docker-network host to sam local invoke
sam runs your Lambda/API in a Docker container attached to the default bridge network, which has limited access. You can either create a custom Docker network that can reach your RDS, or use the 'host' network, which mimics your OS network.
When you install Docker, it creates a network named host; attaching a container to it gives that container access to every IP and port reachable from your OS.
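For example, reusing the invocation from the question:
▶ sam local invoke "lambda function name" --event event.json --docker-network host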
I followed this tutorial: https://learn.microsoft.com/en-us/azure/container-instances/tutorial-docker-compose
But when I set up my project, no default network is created.
Anybody know how to solve this?
The default network should be bridge, as can be seen with
$ docker network ls
Whenever you start a container without the --network flag, it is connected to the default bridge network. You can also pass the --network bridge flag explicitly in your docker run command.
Alternatively, you can connect a container to a network using
$ docker network connect <network> <container>
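To double-check which networks a container ended up attached to, you can inspect it; the container and image names below are only examples:
$ docker run -d --name web --network bridge nginx
$ docker inspect -f '{{json .NetworkSettings.Networks}}' web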
I have set up a GitLab CI build with the architecture illustrated below.
[architecture diagram omitted; source: gitlab.com]
The listener container is unable to communicate with the postgres container using the hostname 'postgres'; the hostname is not recognised. How can the listener container communicate with the postgres database instance?
The documentation recommends configuring a postgres instance as a service in .gitlab-ci.yml. CI jobs defined in .gitlab-ci.yml are able to connect to the postgres instance via the service name, 'postgres'.
The tusd, minio and listener containers are spawned within a docker-compose process, triggered inside the pytest CI job. The listener container writes information back to the postgres database.
Subsequently, I thought about using the IP address of the postgres service in place of the hostname. From within the pytest CI build job I have tried to determine the IP address of the postgres database using the following bash command sequence:
export DB_IP="$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' postgres)"
echo "DB IP ADDRESS IS $DB_IP"
However, postgres is not recognised as a container.
How do I determine the IP address of the postgres service? Alternatively can I use the IP address of the shared runner? How do I determine this?
Does anybody have any ideas?
Update 11/1/2019
Resolved by moving all services into the docker-compose file so that they can communicate with each other. This includes the postgres container etc. After some refactoring of the test environment initialisation, tests are now invoked via the docker-compose run command.
Now able to successfully run tests using the GitLab shared runner.
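As a rough sketch of what the CI job does now (the compose service names are taken from the description above, and the tests service is hypothetical):
$ docker-compose up -d postgres minio tusd listener
$ docker-compose run --rm tests pytest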
I am working on a POC using Hyperledger Composer v0.16.0 and the Node.js SDK. I have deployed my Hyperledger Fabric instance following this developer tutorial, and when I run my Node.js app locally via the node server.js command it works correctly; I can retrieve participants, assets, etc.
However, when I Dockerize my Node.js app and run its container, I am not able to reach the Hyperledger Fabric instance. So, how can I set the credentials so that my Node.js app can reach my Hyperledger Fabric instance (or another one) from inside the container?
My Dockerfile looks like this:
FROM node:8.9.1
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
I run my docker/node.js image with this command:
docker run --network composer_default -p 3000:3000 -d myuser/node-web-app
There are two pitfalls to watch out for when Dockerizing your app: 1. the location of the cards, and 2. the network address of the Fabric servers.
The Business Network Card(s) used by your app to connect to the Fabric: these cards live in a hidden folder under your default home folder, e.g. /home/thatcher/.composer on a Linux machine. You need to 'pass' these into the container or share them with a shared volume, as suggested by the previous answer. So, when running your container for the first time, try adding -v ~/.composer:/home/<user>/.composer to the command, where <user> is the name of the default user in your container. Be aware also that the folder on your Docker host machine must allow write access to the UID of the user inside the container.
When you have sorted out the sharing of the cards, you need to consider what connection information is in the card. It is quite likely that the Business Network Card you are using has localhost as the address of your Fabric servers; the port forwarding from your Docker host into the containers means that localhost is easy and works. However, inside your container localhost resolves to the container itself, so it will not see the Fabric. The --network composer_default argument on the Docker command puts your new container on the same Docker network as the Fabric containers, so your container could see the 'addresses' of the Fabric servers, e.g. orderer.example.com, but your card would then fail outside your container. The best way forward is to put the IP address of your Docker host machine into the connection.json file instead of localhost; then your card will work both inside and outside your container.
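Putting those two points together, a first attempt at the run command might look like this; the in-container path assumes the app runs as root inside the node image, so adjust it for your image's actual user:
$ docker run --network composer_default \
    -v ~/.composer:/root/.composer \
    -p 3000:3000 -d myuser/node-web-app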
So, credentials would be config info. The two ways to pass config info into a basic Docker container are:
environment variables (-e)
mounting a volume (-v) with config info
You can also have scripts, installed from the Dockerfile, that modify files and such. A sketch of both flags follows below.
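Here the variable name and the host config directory are made up purely for illustration; the image name, network and /usr/src/app path come from the question:
$ docker run --network composer_default \
    -e CARD_NAME=admin@my-network \
    -v $(pwd)/config:/usr/src/app/config \
    -p 3000:3000 -d myuser/node-web-app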
The docker logs may give clues as to the exact problem or set of problems.
docker logs mynode
You can also enter a running container and snoop around using the command
docker exec -it mynode bash
Is there a way I can reach my docker containers using names instead of ip addresses?
I've heard of pipework and I've seen some dns and hostname type options for docker, but I still am unable to piece everything together.
Thank you for your time.
I'm not sure if this is helpful, but this is what I've done so far:
installed docker container host using docker-machine and the vmwarevsphere driver
started up all the services with docker-compose
I can reach all of the services from any other machine on the network using IP and port
I've added a DNS alias entry to my private network DNS server and it matches the machine name that's used by docker-machine. But the machine always picks up a different IP address when it boots and connects to the network.
I'm just lost as to where to tackle this:
network DNS server
docker-machine hostname
docker container hostname
probably some combination of all of them
I'm probably looking for something similar to this question:
How can let docker use my network router to assign dhcp ip to containers easily instead of pipework?
Any general direction will be awesome...thanks again!
Docker 1.10 has a built-in DNS. If your containers are connected to the same user-defined network (create a network with docker network create my-network and run your container with --net my-network), they can reference each other using the container name. (See the Docker docs.)
Cool!
One caveat: if you are using Docker Compose, you know that it adds a prefix to your container names, i.e. <project name>_<service name>-#. This makes your container names somewhat harder to control, but it might be OK for your use case. You can override the Docker Compose naming behaviour by manually setting the container name in your compose template, but then you won't be able to scale with compose.
Create a new bridge network other than docker0, run your containers inside it and you can reference the containers inside that network by their names.
Docker daemon runs an embedded DNS server to provide automatic service discovery for containers connected to user-defined networks. Name resolution requests from the containers are handled first by the embedded DNS server.
Try this:
docker network create <network name>
docker run --net <network name> --name test busybox nc -l -p 7000
docker run --net <network name> busybox ping test
First, we create a new network. Then, we run a busybox container named test listening on port 7000 (just to keep it running). Finally, we ping the test container by its name and it should work.
EDIT 2018-02-17: Docker may eventually remove the links key from docker-compose, so they suggest using user-defined networks instead, as stated here => https://docs.docker.com/compose/compose-file/#links
Assuming you want to reach the mysql container from the web container of your docker-compose.yml file, such as:
web:
  build: .
  links:
    - mysqlservice
mysqlservice:
  image: mysql
You'll be pleased to know that Docker Compose already adds a mysqlservice domain name (in the web container's /etc/hosts) which points to the mysql container.
Instead of looking for the mysql container IP address, you can just use the mysqlservice domain name.
If you want to add custom domain names, it's also possible with the extra_hosts parameter.
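For reference, the plain docker run equivalent of extra_hosts is the --add-host flag; the hostname and address below are made up:
$ docker run --add-host db.example.local:192.168.1.50 myimage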
You might want to try out dnsdock. It looks straightforward and easy(!) to set up. Have a look at http://blog.brunopaz.net/easy-discover-your-docker-containers-with-dnsdock/ and https://github.com/tonistiigi/dnsdock .
If you want an out-of-the-box solution, you might want to check out, for example, Kontena. It comes with network overlay technology from Weave, and this technology is used to create virtual private LAN networks between services. Thanks to that, every service/container can be reached by service_name.kontena.local.
I replaced the --net parameter with the --network parameter and it runs as expected:
docker network create <network name>
docker run --network <network name> --name <container name> <other container options>
docker run --network <network name> --name <container name> <other container options>
If you are using Docker Compose, and your docker-compose.yml file has a top-level services: block (you are not using the obsolete "version 1" file format), then Compose does all of the required setup automatically. The names underneath services: can be directly used as host names.
version: '3.8'
services:
  database: # <-- "database" is a usable hostname
    image: postgres
  application: # <-- "application" is a usable hostname
    build: .
    environment:
      PGHOST: database # <-- use the "database" hostname
Networking in Compose in the Docker documentation describes this setup further.
These host names only work for connections between containers in the same Compose file. If you manually declare networks:, then the two containers must have some network in common, but the easiest setup is to not declare networks: at all. These connections will only use the "standard" port (for PostgreSQL, for example, always connect to port 5432); a ports: declaration is not required and is ignored if present.
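A quick way to confirm the name resolution, using the service names from the example above (this assumes the application image provides the getent utility, as most glibc-based images do):
$ docker-compose up -d
$ docker-compose exec application getent hosts database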
I'm trying to deploy my application using Docker and came across an issue: restarting a named container assigns it a different IP address. Maybe explaining what I am doing will better describe the issue:
Postgres runs inside a separate container named "postgres"
$ PG_ID=$(docker run --name postgres postgres/image)
My webapp container links to postgres container
$ APP_ID=$(docker run --link postgres:postgres webapp/image)
Linking the postgres container to the webapp container inserts into the webapp container a hosts file entry with the IP of the postgres container. This allows me to point to the postgres db within my webapp using postgres:5432 (I am using Django, btw). This all works well, except if for some reason postgres crashes.
Before I manually stop the postgres process to simulate a crash, I verify the IP of the postgres container:
$ docker inspect --format "{{.NetworkSettings.IPAddress}}" $PG_ID
172.17.0.73
Now, to simulate the crash, I stop the postgres container:
$ docker stop $PG_ID
If I now restart postgres with
$ docker start $PG_ID
the IP of the container changes:
$ docker inspect --format "{{.NetworkSettings.IPAddress}}" $PG_ID
172.17.0.74
Therefore the IP which points to the postgres container in the webapp container is no longer correct. I thought that by naming a container, Docker gives it a stable identity with specific configs so that you can reliably link between containers (both network and volumes). If the IP changes, this seems to defeat the purpose.
If I have to restart my webapp process each time postgres restarts, this does not seem any better than just using a single container to run both processes. Then I could use supervisor or something similar to keep both of them running and use localhost to link between processes.
I am still new to Docker so am I doing something wrong or is this a bug in docker?
2nd UPDATE: maybe you already discovered this, but as a workaround, I plan to map the service that shares the database to the host interface (e.g. with -p 5432:5432), and connect the webapps to the host IP (the IP of the docker0 interface: on my Ubuntu and CentOS machines, the IP is 172.17.42.1). If you restart the postgres container, the container's IP will change, but it will be accessible at 172.17.42.1:5432. The downside is that you are exposing that port to all the containers, and you lose the fine-grained mapping that --link gives you.
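A sketch of that workaround, reusing the image name from the question (the docker0 address may differ on your machine):
$ docker run --name postgres -p 5432:5432 postgres/image
# then point Django at the docker0 address instead of the link alias,
# e.g. HOST = '172.17.42.1' and PORT = 5432 in the database settings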
--- OLD UPDATES:
CORRECTION: Docker will map 'postgres' to the container's IP in the /etc/hosts file of the webapp container. So, in the webapp container, you can ping 'postgres', and it will be mapped to the IP.
1st UPDATE: I've seen that Docker generates and mounts /etc/hosts, /etc/resolv.conf, etc. so that they always have the correct information, but this does not apply when the linked container is restarted. So, I had (wrongly) assumed that Docker would update the hosts files.
-- ORIGINAL (wrong) response:
Add --hostname=postgres-db (you can use anything; I'm using something different from 'postgres' to avoid confusion with the container name):
$ docker run --name postgres --hostname postgres-db postgres/image
Docker will map 'postgres-db' to the container's IP (check the contents of /etc/hosts on the webapp container).
This will allow you to run 'ping postgres-db' from the webapp container. If the IP changes, Docker will update /etc/hosts for you.
In the Django app, use 'postgres-db' instead of the IP (or whatever you used for --hostname on the container running PostgreSQL).
Bye!
Horacio
According to https://docs.docker.com/engine/reference/commandline/run/, it should be possible to assign a static IP for your container -- at the time of container creation -- using the --ip option:
Example:
docker run -itd --ip 172.30.100.104 --name postgres postgres/image
...where 172.30.100.104 is a free IP address on a custom bridge/overlay network.
This should then retain the same IP address even if postgres container crashes/restarts.
Looks like this was released in Docker Engine v 1.10 or greater, therefore if you have a lower version, you have to upgrade first.
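Note that --ip is only honoured on a user-defined network, so you would first create one with an explicit subnet; the network name and addresses below are examples:
$ docker network create --subnet 172.30.100.0/24 mynet
$ docker run -itd --net mynet --ip 172.30.100.104 --name postgres postgres/image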
As of Docker 1.0 they implemented a stronger sense of linked containers. Now you can use the container instance name as if it were the host name.
Here is a link
I found a link that better describes your problem. And while that question was answered, I wonder whether this ambassador pattern might solve the problem... this assumes that the ambassador is more reliable than the services it links.