I'm trying to get the X-Ms-Client-Principal-Name value from the request header, but it no longer arrives after I changed the container settings from Single Container to Docker Compose.
Any idea why this is happening?
This is the docker-compose file that I'm using:
version: "3.8"
services:
web:
image: webimage:v1.0.0
ports:
- "8000:80"
redis:
image: redis:alpine
Both images are downloaded from the repository and launched correctly.
I'm using Docker Compose with the Azure Container Instances service. The docs/guides say I should be able to build custom images with that service using docker compose up -d, but the service forces me to include a pre-built image in my compose.yml. How can I deploy a web app from a Compose file so that Azure builds it too?
Here's what my desired compose file looks like. Note that I use a generic Redis image, but the DB and API both rely on separate Dockerfiles to be built. I'd like to use some Azure service (I think Container Instances is the only one that supports interacting with Compose files) to deploy with a single command, so that my local code is pushed to some build server (or pulled by the build server using git) to build any images my compose.yml needs. Is this possible?
version: "3.9"
services:
db:
build:
context: docker/db
dockerfile: db.Dockerfile
restart: always
ports:
- "3306:3306"
api:
build:
context: docker/api
dockerfile: api.Dockerfile
restart: always
environment:
- MYSQL_HOST=db
- MYSQL_HOST_REPLICA=db
- REDIS_HOSTNAME=redis
ports:
- "8000:8000"
depends_on:
- db
- redis
redis:
image: redis:alpine
restart: always
ports:
- "6379:6379"
I have an Azure App Service that uses a docker-compose.yml file, as it is a multi-container Docker app. Its docker-compose.yml file is given below:
version: '3.4'
services:
  multiapp:
    image: yogyogi/apps:i1
    build:
      context: .
      dockerfile: MultiApp/Dockerfile
  multiapi:
    image: yogyogi/apps:i2
    build:
      context: .
      dockerfile: MultiApi/Dockerfile
The app works perfectly with no issues; I can open it in the browser.
Now here starts the problem. I am trying to add an SSL certificate to this app. I want to use a volume and map it into the container, so I changed the docker-compose.yml file to:
version: '3.4'
services:
  multiapp:
    image: yogyogi/apps:i1
    build:
      context: .
      dockerfile: MultiApp/Dockerfile
    environment:
      - ASPNETCORE_Kestrel__Certificates__Default__Password=mypass123
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - SSL:/https
  multiapi:
    image: yogyogi/apps:i2
    build:
      context: .
      dockerfile: MultiApi/Dockerfile
Here only the following lines were added, to force the app to use aspnetapp.pfx as the SSL certificate. This certificate should be mounted into the /https folder of the container.
environment:
  - ASPNETCORE_Kestrel__Certificates__Default__Password=mypass123
  - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
volumes:
  - SSL:/https/aspnetapp.pfx:ro
Also note that SSL in the volumes section refers to an Azure file share. I have created an Azure Storage account (called 'myazurestorage1') and inside it created an Azure file share (called 'myfileshare1'). In this file share I created a folder by the name of ssl and uploaded my certificate into this folder.
Then, in the Azure App Service that runs the app with Docker Compose, I configured path mappings for this Azure file share so that it can be used with the app.
I also tried the following docker-compose.yml file, as given in the official docs, but to no avail:
version: '3.4'
services:
  multiapp:
    image: yogyogi/apps:i1
    build:
      context: .
      dockerfile: MultiApp/Dockerfile
    environment:
      - ASPNETCORE_Kestrel__Certificates__Default__Password=mypass123
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - ./mydata/ssl/aspnetapp.pfx:/https/aspnetapp.pfx:ro
  multiapi:
    image: yogyogi/apps:i2
    build:
      context: .
      dockerfile: MultiApi/Dockerfile
volumes:
  mydata:
    driver: azure_file
    driver_opts:
      share_name: myfileshare1
      storage_account_name: myazurestorage1
      storageAccountKey: j74O20KrxwX+vo3cv31boJPb+cpo/pWbSy72BdSDxp/d7hXVgEoR56FVA7B+L6D/CnmdpIqHOhiEKqbuttLZAw==
But this does not work; the app starts throwing errors. What is wrong and how do I solve it?
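A minimal sketch of the direction that typically works with App Service path mappings: mount the share under the name configured in the path mapping (SSL here) as a directory rather than a single file, and point Kestrel at the certificate inside it. The /https/ssl/... path assumes the certificate sits in the share's ssl folder; this is a sketch, not a verified configuration:
version: '3.4'
services:
  multiapp:
    image: yogyogi/apps:i1
    environment:
      - ASPNETCORE_Kestrel__Certificates__Default__Password=mypass123
      # the file lives inside the mounted directory; 'ssl' is the folder created in the share
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/ssl/aspnetapp.pfx
    volumes:
      # SSL = the name of the custom storage mount from the App Service path mappings;
      # mount it as a directory, not as a single file
      - SSL:/https
  multiapi:
    image: yogyogi/apps:i2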
Normally I'd have a Neo4j instance running in Docker; then in a script I access the driver like so:
self.driver = GraphDatabase.driver(uri="bolt://localhost:7687", auth=("username", "password"))
I'm now putting this script itself into a Docker container, but I now get the error message:
neo4j.exceptions.ServiceUnavailable: Failed to establish connection to IPv6Address(('::1', 7687, 0, 0)) (reason [Errno 99] Cannot assign requested address)
What URI (or other parameter) needs changing to access a Neo4j Docker instance from another Docker container?
Within my docker-compose.yml, I have:
version: '3'
services:
  neo4j:
    container_name: neo4j
    image: neo4j:3.5
    restart: always
    environment:
      - NEO4J_dbms_memory_pagecache_size=2G
      - dbms_connector_bolt_tls__level=OPTIONAL
      - NEO4J_dbms_memory_heap_max__size=3500M
      - NEO4J_AUTH=neo4j/start
    volumes:
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/import:/import
      - $HOME/neo4j/plugins:/plugins
    ports:
      - 7474:7474
      - 7687:7687
  appgui:
    container_name: appgui
    image: python:3.7.3-slim
    build:
      context: ./APPGUI/
    volumes:
      - ./APPGUI/:/usr/src/app/
    restart: always
    environment:
      PORT: 5000
      FLASK_DEBUG: 1
    ports:
      - 80:80
    depends_on:
      - neo4j
I also can't access my web app (http://localhost:5000).
Your service can't connect to the localhost Neo4j because it is running inside a Docker container, where localhost points to the container itself rather than to your local machine.
In this case, it is best to run both containers with docker-compose and set the depends_on option in the other container. Here is an example docker-compose.yml file from my project.
version: '3.7'
services:
  neo4j:
    image: neo4j:4.1.2
    restart: always
    hostname: neo4jngs
    container_name: neo4jngs
    ports:
      - 7474:7474
      - 7687:7687
  api:
    build:
      context: ./API
    hostname: api
    restart: always
    container_name: api
    ports:
      - 3000:3000
    depends_on:
      - neo4j
As you can see, the api container is a service that will connect to Neo4j. Now you can change the driver settings to:
self.driver = GraphDatabase.driver(uri="bolt://neo4j:7687", auth=("username", "password"))
And you are good to go.
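As a side note on the original compose file: appgui publishes port 80 while the app is configured for port 5000, which would also explain http://localhost:5000 being unreachable. A sketch of the relevant fragment, assuming the Flask app really listens on 5000 and reads the bolt URI from an environment variable (NEO4J_URI is an assumed name, not something the script reads today):
  appgui:
    build:
      context: ./APPGUI/
    environment:
      PORT: 5000
      # assumed variable; the script would use it instead of hard-coding localhost
      NEO4J_URI: bolt://neo4j:7687
    ports:
      - "5000:5000"   # publish the port the app actually listens on
    depends_on:
      - neo4j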
I solved it and it was actually a dumb mistake, but one that could happen to others I guess...
In the docker-compose.yml:
build: ./APP1/
needs to be in quotes, so:
build: './APP1/'
However, Tomaž Bratanič provided me with some helpful tips that led to the resolution.
This is my first time using Azure. I made a pipeline in DevOps that builds images from Dockerfiles and pushes them to my ACR. Then I created a multi-container web app using Docker Compose and connected my ACR to the web app. But the web app can't pull the images from the ACR, and I don't know what I am doing wrong.
I am getting errors when the multi-container web app tries to pull images from ACR.
Errors:
2019-11-28 19:27:46.755 INFO - Pulling image: [registry-name].azurecr.io/server
2019-11-28 19:27:46.921 ERROR - DockerApiException: Docker API responded with status code=NotFound, response={"message":"manifest for [registry-name].azurecr.io/server:latest not found: manifest unknown: manifest unknown"}
Docker-compose config:
version: "3"
services:
# SERVER CONTAINER
server:
image: [registry-name].azurecr.io/server
expose:
- 4000
ports:
- 4000:4000
command: node src/server.js
restart: always
# CLIENT CONTAINER
client:
image: [registry-name].azurecr.io/client
ports:
- "80:80"
- "443:443"
links:
- server
depends_on:
- server
restart: unless-stopped
First of all, the error shows that it cannot find the [registry-name].azurecr.io/server image with the latest tag. So if your image does exist, then the problem is the tag.
If you do not add a tag to your image, it will pull the latest tag by default.
But adding a specific tag to the image is better than no tag. If you want to get the latest image from the ACR, always add the latest tag when pushing the image, and do the same when you use that image.
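A minimal sketch of the Compose file with pinned tags, assuming the pipeline pushes an explicit tag (v1 is a placeholder):
version: "3"
services:
  server:
    # pin the tag the pipeline actually pushed
    image: [registry-name].azurecr.io/server:v1
    ports:
      - 4000:4000
    command: node src/server.js
    restart: always
  client:
    image: [registry-name].azurecr.io/client:v1
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - server
    restart: unless-stopped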
I have two containers. How can I access one container from a Python file in the other, which runs a Django web server?
Docker-compose.yml file
version: '2'
services:
  web:
    build: ./web/
    command: python3 manage.py runserver 0.0.0.0:8001
    volumes:
      - ./web:/code
    ports:
      - "8001:80"
    networks:
      - dock_net
    container_name: con_web
    depends_on:
      - "api"
    links:
      - api
  api:
    build: ./api/
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - ./api:/code
    ports:
      - "8000:80"
    networks:
      - dock_net
    container_name: con_api
networks:
  dock_net:
    driver: bridge
Python file:
I can get mail_string from the form:
# inside a Django view
import requests
from django.http import HttpResponse

mail = request.POST.get('mail_string', '')
url = 'http://con_api:8000/api/' + mail + '/?format=json'
resp = requests.get(url=url)
return HttpResponse(resp.text)
I can request the api container and get a value, but I don't know its IP address.
Updated Answer
In your Python file, you can use url = 'http://api:8000/'+mail+'/?format=json'. The service name api resolves on the Compose network, and runserver listens on port 8000 inside the api container (the 8000:80 port mapping only affects access from the host). This will enable you to reach the URL you are sending the request to.
Original Answer
If the two containers are independent, you can create a network, and once both containers are part of the same network you can access them using their hostnames, which you can specify with --hostname=HOSTNAME.
An easier way is to use a docker-compose file, which creates a network by default; all the services declared in the file are part of that network. That way you can reach the other services by their service name, e.g. http://container1/, when your docker-compose file looks like this:
version: '3'
services:
  container1:
    image: some_image
  container2:
    image: another_or_same_image
Now enter container2 with:
docker exec -it container2 /bin/bash
and run ping container1.
You will receive packets and can therefore reach the other container.
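Applied to the compose file from the question, a trimmed sketch: on the user-defined network the service names web and api already resolve, so links and container_name are not needed for name resolution, and the target port is the one runserver listens on inside the container (8000), not the published host port:
version: '2'
services:
  web:
    build: ./web/
    command: python3 manage.py runserver 0.0.0.0:8001
    depends_on:
      - api
  api:
    build: ./api/
    command: python3 manage.py runserver 0.0.0.0:8000
# from web, the API is then reachable at http://api:8000/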