I have found out from the Azure team that the memory limit for Edge modules, including edgeHub, can be controlled by specifying createOptions -> HostConfig -> Memory. How can I control the memory limit for the edgeAgent Docker container itself, given that edgeAgent is the component that creates the other module containers? Is this documented? Currently it shows as 1.88GB on a 2GB VM.
Below is an extract from docker stats
fd66aaa4dbe1 edgeAgent 0.01% 40.59MiB / 1.885GiB 2.10% 2.18MB / 1.13MB 55.6MB / 705kB 15
Setting the memory limit is not specific to the edgeHub module; it works the same way for any other Docker module used with IoT Edge. You can add the Memory setting to the HostConfig section in the createOptions of any module.
For the edgeAgent, the deployment setting would look like this:
"systemModules": {
"edgeAgent": {
"type": "docker",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-agent:1.0",
"createOptions": "{\"HostConfig\":{\"Memory\":536870912}}"
}
},
"edgeHub": {
...
}
}
With this change, create a new deployment.
For the changes to take effect on your machine, you have to remove the edgeAgent module so that a new one is created according to your changed deployment rules.
You can do so with the following command:
sudo docker rmi mcr.microsoft.com/azureiotedge-agent:1.0 -f
After that, restart the IoT Edge daemon with
sudo systemctl restart iotedge
After that the memory will be limited for the edgeAgent module.
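If you want to verify that the limit was applied (a quick check, assuming the recreated container keeps the default name edgeAgent; 536870912 bytes corresponds to 512 MB):
sudo docker inspect edgeAgent --format '{{.HostConfig.Memory}}'
# prints 536870912 once the limit is in place
sudo docker stats edgeAgent --no-stream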
Note:
If you want to limit memory on a Raspberry Pi, be aware that memory limit support is turned off by default. You can find a HowTo for enabling it here: https://blog.raveland.org/post/docker_raspian/
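On Raspbian this usually comes down to enabling the memory cgroup in the kernel boot parameters and rebooting (a rough sketch; see the linked post for the full steps):
# Add these options to the end of the single line in /boot/cmdline.txt:
#   cgroup_enable=memory cgroup_memory=1
sudo reboot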
I am learning Docker. When I run two MySQL containers with the -v option and both volume names are the same, only one of those two volumes is created on the host file system. Does the second one override the first, or does the system keep the first one? I don't see any command showing a volume name conflict. Here are my commands:
docker container run -d --name mysql_1 -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql_db:/var/lib/mysql mysql
docker container run -d --name mysql_2 -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql_db:/var/lib/mysql mysql
I checked with the command docker volume inspect [name], and it seems the second volume overrides the first one:
[
{
"CreatedAt": "2020-07-24T09:34:05Z",
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/mysql_db78/_data",
"Name": "mysql_db78",
"Options": null,
"Scope": "local"
}
]
The CreatedAt is the time when I typed the last docker container run -v ... command. But it is strange that Docker didn't notify me about a volume name conflict.
You're allowed to mount the same volume into different containers. Files read and written by one can be read and written by the other. There's no "conflict" here.
In both docker run commands you're telling Docker to mount a volume named mysql_db onto the path /var/lib/mysql. In the first command, Docker automatically creates the named volume, as though you had run docker volume create mysql_db, since it doesn't exist yet. Then the second docker run command reuses that same volume.
(Operationally, you can't have multiple MySQL servers running off the same data store, so you should see a startup-time error referring to a lock file in the mysql_2 container. At a design level, try to avoid file sharing and prefer cross-container API calls instead, since coordinating file sharing can be tricky and it doesn't scale well to more advanced environments like Kubernetes.)
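If you want to see this for yourself, a quick check (using the container names from the question) is to list the volumes and compare the mounts of both containers; they point at the same host path:
docker volume ls
docker inspect mysql_1 mysql_2 --format '{{.Name}}: {{range .Mounts}}{{.Name}} -> {{.Source}}{{end}}'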
When we run a container on a Compute Engine instance using COS, it writes its logs to JSON files. We are seeing this error:
"level=error msg="Failed to log msg \"\" for logger json-file: write /var/lib/docker/containers/[image]-json.log: no space left on device".
I was looking to change the logging settings for Docker and found this article on changing the logging driver settings:
https://docs.docker.com/config/containers/logging/json-file/
My problem is that I don't know how to set these parameters through the console or gcloud in order to set log-opts.
It seems that /var/lib/docker is on the / filesystem, and if this filesystem is running out of inodes, you will receive that message when you try to run a container and it tries to write its logs to JSON files. You can check this by running
df -i /var/lib/docker
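The same message can also appear when the filesystem simply runs out of blocks, so it is worth checking ordinary disk usage as well, along with what Docker itself is consuming:
df -h /var/lib/docker
docker system df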
You can configure your logging drivers to change the default values in /etc/docker/daemon.json.
This is a configuration example of the daemon.json file
cat /etc/docker/daemon.json
{
"live-restore": true,
"storage-driver": "overlay2"
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3",
"labels": "production_status",
"env": "os,customer"
}
}
Don't forget to restart the Docker daemon after changing the file:
systemctl restart docker.service
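To confirm the daemon picked up the new defaults after the restart, you can query it directly; note that the log-opts only apply to containers created after the change:
docker info --format '{{.LoggingDriver}}'
docker inspect <new-container> --format '{{.HostConfig.LogConfig}}'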
You can check the Docker documentation on configuring logging drivers for further information.
Please let me know the results.
I'm very new to Azure IoT Edge and I'm trying to deploy to my Raspberry Pi: Image Recognition with Azure IoT Edge and Cognitive Services.
But after Build & Push IoT Edge Solution and deploying it to a single device ID, I see neither of those 2 modules listed in docker ps -a or iotedge list.
When I check the edgeAgent logs, there is an error message; it seems edgeAgent gets an error while creating those modules (camera-capture and image-classifier-service).
I've tried:
1. Re-building it from a fresh folder package
2. Pulling the image manually from the Azure portal and running it manually with a script
I've been stuck on this for days.
In deployment.arm32v7.json, I define the image for those modules with the registered registry URL:
"modules": {
"camera-capture": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "zzzz.azurecr.io/camera-capture-opencv:1.1.12-arm32v7",
"createOptions": "{\"Env\":[\"Video=0\",\"azureSpeechServicesKey=2f57f2d9f1074faaa0e9484e1f1c08c1\",\"AiEndpoint=http://image-classifier-service:80/image\"],\"HostConfig\":{\"PortBindings\":{\"5678/tcp\":[{\"HostPort\":\"5678\"}]},\"Devices\":[{\"PathOnHost\":\"/dev/video0\",\"PathInContainer\":\"/dev/video0\",\"CgroupPermissions\":\"mrw\"},{\"PathOnHost\":\"/dev/snd\",\"PathInContainer\":\"/dev/snd\",\"CgroupPermissions\":\"mrw\"}]}}"
}
},
"image-classifier-service": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "zzzz.azurecr.io/image-classifier-service:1.1.5-arm32v7",
"createOptions": "{\"HostConfig\":{\"Binds\":[\"/home/pi/images:/images\"],\"PortBindings\":{\"8000/tcp\":[{\"HostPort\":\"80\"}],\"5679/tcp\":[{\"HostPort\":\"5679\"}]}}}"
}
}
}
Error message from the edgeAgent logs:
(Inner Exception #0) Microsoft.Azure.Devices.Edge.Agent.Edgelet.EdgeletCommunicationException- Message:Error calling Create module
image-classifier-service: Could not create module image-classifier-service
caused by: Could not pull image zzzzz.azurecr.io/image-classifier-service:1.1.5-arm32v7
caused by: Get https://zzzzz.azurecr.io/v2/image-classifier-service/manifests/1.1.5-arm32v7: unauthorized: authentication required
When trying to run the pulled image directly with a script:
sudo docker run --rm --name testName -it zzzz.azurecr.io/camera-capture-opencv:1.1.12-arm32v7
None
I get this error :
Camera Capture Azure IoT Edge Module. Press Ctrl-C to exit.
Error: Time:Fri May 24 10:01:09 2019 File:/usr/sdk/src/c/iothub_client/src/iothub_client_core_ll.c Func:retrieve_edge_environment_variabes Line:191 Environment IOTEDGE_AUTHSCHEME not set
Error: Time:Fri May 24 10:01:09 2019 File:/usr/sdk/src/c/iothub_client/src/iothub_client_core_ll.c Func:IoTHubClientCore_LL_CreateFromEnvironment Line:1572 retrieve_edge_environment_variabes failed
Error: Time:Fri May 24 10:01:09 2019 File:/usr/sdk/src/c/iothub_client/src/iothub_client_core.c Func:create_iothub_instance Line:941 Failure creating iothub handle
Unexpected error IoTHubClient.create_from_environment, IoTHubClientResult.ERROR from IoTHub
When you pulled the image directly with docker run, it pulled but then failed to run outside of the edge runtime, which is expected. But when the edge agent tried to pull it, it failed because it was not authorized. No credentials were supplied to the runtime, so it attempted to access the registry anonymously.
Make sure that you add your container registry credentials to the deployment so that edge runtime can pull images. The deployment should contain something like the following in the runtime settings:
"MyRegistry" :{
"username": "<username>",
"password": "<password>",
"address": "<registry-name>.azurecr.io"
}
As #silent pointed out in the comments, the documentation is here, including an example deployment that includes container registry credentials.
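One way to confirm the credentials themselves are valid is to log in and pull from the device using the same values you put in the deployment (a sketch, reusing the registry and tag from your manifest):
sudo docker login zzzz.azurecr.io -u <username> -p <password>
sudo docker pull zzzz.azurecr.io/image-classifier-service:1.1.5-arm32v7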
I'm trying to build a Docker image for centos:7 that restricts which system commands any user (including root) can execute inside the container. My intention is to build a Docker image with the security profile I need and then use that as my base image to build other application images, thereby inheriting the security profile from the base image. Is this doable? Am I missing something?
Here is a sample security profile I'm testing:
{
"defaultAction" : "SCMP_ACT_ALLOW",
"syscalls": [
{
"name": "mkdir",
"action": "SCMP_ACT_ERRNO"
},
{
"name": "chown",
"action":"SCMP_ACT_ERRNO"
}
]
}
When I run:
docker build -t test . --security-opt seccomp:policy.json
It throws an error :
Error response from daemon: The daemon on this platform does not support setting security options on build
Thoughts on how to get past this or other approaches I could use?
From Github...
"Docker engine does not support the parameter "--security-opt seccomp=" when executing command "docker build"
@cason you can supply a custom default profile to the daemon.
--seccomp-profile /path/to/profile.json
https://github.com/moby/moby/issues/34454#issuecomment-321135510
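A sketch of what that looks like, assuming the profile is saved at /etc/docker/policy.json: point the daemon at it in /etc/docker/daemon.json and restart, and it becomes the default profile for containers run on that host (the restriction lives in the daemon configuration rather than in the image):
cat /etc/docker/daemon.json
{
  "seccomp-profile": "/etc/docker/policy.json"
}
sudo systemctl restart docker.service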
I created an EBS volume, attached it, and mounted it on my Container Instance. In the task definition volumes, I set the volume Source Path to the mounted directory.
The container data is not being created in the mounted directory; all other directories outside the mounted EBS volume work properly.
The purpose is to keep the data outside the container and back it up with another volume.
Is there a way to use this attached volume with my container? Or is there a better way to work with volumes and backups?
EDIT: I tested this with a random Docker image, running it and specifying the volume, and I faced the same problem. I managed to make it work by restarting the Docker service, but I'm still looking for a solution that doesn't require restarting Docker.
Inspecting a container whose volume directory is on the mounted EBS volume:
"HostConfig": {
"Binds": [
"/mnt/data:/data"
],
...
"Mounts": [
{
"Source": "/mnt/data",
"Destination": "/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
the directory displays:
$ ls /mnt/data/
lost+found
Inspecting a container whose volume directory is not on the mounted EBS volume:
"HostConfig": {
"Binds": [
"/home/ec2-user/data:/data"
],
...
"Mounts": [
{
"Source": "/home/ec2-user/data",
"Destination": "/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
]
the directory displays:
$ ls /home/ec2-user/data
databases dbms
It sounds like what you potentially want to do is make use of AWS EC2 Launch Configurations. Using Launch Configurations, you can specify that EBS volumes be created and attached to your instance at launch. This happens prior to the Docker agent and subsequent tasks being started.
As part of your launch configuration, you'll want to also update the User data under Configure details with something along the lines of:
mkdir /data;
mkfs -t ext4 /dev/xvdb;
mount /dev/xvdb /data;
echo '/dev/xvdb /data ext4 defaults,nofail 0 2' >> /etc/fstab;
Then, so long as your container is set up to access /data on the host, everything will just work on the first go.
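For reference, the matching pieces of the task definition would look something like this (a sketch; the volume name is arbitrary and the rest of the container definition is omitted):
"volumes": [
  { "name": "data", "host": { "sourcePath": "/data" } }
],
"containerDefinitions": [
  {
    "mountPoints": [
      { "sourceVolume": "data", "containerPath": "/data" }
    ]
  }
]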
Bonus: If you're using ECS clusters, I presume you're already making use of Launch Configurations to get your instances joined to the cluster. If not, you can add new instances automatically as well, using something like:
#!/bin/bash
docker pull amazon/amazon-ecs-agent
docker run --name ecs-agent --detach=true --restart=on-failure:10 \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  --volume=/var/log/ecs/:/log \
  --volume=/var/lib/ecs/data:/data \
  --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
  --volume=/var/run/docker/execdriver/native:/var/lib/docker/execdriver/native:ro \
  --publish=127.0.0.1:51678:51678 \
  --env=ECS_LOGFILE=/log/ecs-agent.log \
  --env=ECS_AVAILABLE_LOGGING_DRIVERS=[\"json-file\",\"syslog\",\"gelf\"] \
  --env=ECS_LOGLEVEL=info \
  --env=ECS_DATADIR=/data \
  --env=ECS_CLUSTER=your-cluster-here \
  amazon/amazon-ecs-agent:latest
Specifically in that bit, you'll want to edit this part: --env=ECS_CLUSTER=your-cluster-here
Hope this helps.
The current documentation on Using Data Volumes in Tasks seems to address this problem:
Prior to the release of the Amazon ECS-optimized AMI version 2017.03.a, only file systems that were available when the Docker daemon was started are available to Docker containers. You can use the latest Amazon ECS-optimized AMI to avoid this limitation, or you can upgrade the docker package to the latest version and restart Docker.
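In practice, on an older ECS-optimized AMI that means updating the docker package and restarting the daemon and the ECS agent once the filesystem is mounted (a sketch; on the Amazon Linux 1 based AMIs the agent runs as an upstart job):
sudo yum update -y docker
sudo service docker restart
sudo start ecs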