What is the proper configuration to persist edgeAgent storage?

I am working on an Azure IoT Edge project. Currently I am going through the production readiness checklist. I followed the documentation to use storage on the host filesystem for the edgeAgent and edgeHub modules.
When I run sudo iotedge check, edgeHub is OK but edgeAgent raises a warning:
‼ production readiness: Edge Agent's storage directory is persisted on the host filesystem - Warning
The edgeAgent module is not configured to persist its /tmp/edgeAgent directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see https://aka.ms/iotedge-storage-host for best practices.
√ production readiness: Edge Hub's storage directory is persisted on the host filesystem - OK
Here is a snippet from the deployment template:
"systemModules": {
"edgeAgent": {
"type": "docker",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-agent:1.0",
"createOptions": {
"HostConfig": {
"Binds": [
"/home/pi/iotedge/edgeAgent/storage/:/iotedge/storage/"
]
}
}
},
"env": {
"storageFolder": {
"value": "/iotedge/storage/"
}
}
},
"edgeHub": {
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-hub:1.0",
"createOptions": {
"HostConfig": {
"Binds": [
"/home/pi/iotedge/edgeHub/storage:/iotedge/storage/"
],
"PortBindings": {
"5671/tcp": [
{
"HostPort": "5671"
}
],
"8883/tcp": [
{
"HostPort": "8883"
}
],
"443/tcp": [
{
"HostPort": "443"
}
]
}
}
}
},
"env": {
"storageFolder": {
"value": "/iotedge/storage/"
}
}
}
},

As of release 1.0.9, there's an issue where edgeAgent's configuration doesn't update unless its image tag is updated. Two options from your current state:
1) Use a specific tag in the image settings (always recommended), e.g. mcr.microsoft.com/azureiotedge-agent:1.0.9.
2) Delete the edgeAgent container on the device: docker rm -f edgeAgent. It will be restarted in under 30 seconds, and the new storageFolder env var will be picked up.
Run 'iotedge check' again after the container is updated, and this warning should go away.
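For example, option 2 boils down to the following on the device (a minimal sketch using only the commands above; the IoT Edge daemon recreates the container automatically):
# remove the running edgeAgent container; the daemon recreates it within ~30 seconds
docker rm -f edgeAgent
# confirm the container is back, then re-run the check
docker ps --filter name=edgeAgent
sudo iotedge check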

I have followed the same documentation and was able to avoid the production readiness checklist warnings on my Raspberry Pi 3.
1) I have configured the "Binds" as per the documentation ("Link module storage to device storage"):
"Binds":["/etc/iotedge/storage/:/iotedge/storage/"]
2) I have granted the user access to the HostStoragePath from the SSH terminal:
sudo chown 1000 /etc/iotedge/storage/
sudo chmod 700 /etc/iotedge/storage/
3) Restarted the Raspberry Pi 3 to make sure the access change takes effect.
Make sure you have the host storage folders available on your edge device.
Make sure to grant the user full access to those folders.
Try the deployment with your updated manifest; it should then work.
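Putting those steps together, a minimal sketch for the device (assuming the host path /etc/iotedge/storage/ used in the Binds above; UID 1000 is the module user inside the container):
# create the host storage folder referenced in the Binds
sudo mkdir -p /etc/iotedge/storage/
# grant the container user (UID 1000) access and lock down permissions
sudo chown 1000 /etc/iotedge/storage/
sudo chmod 700 /etc/iotedge/storage/
# reboot (or restart the iotedge service) so the change takes effect
sudo reboot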

Related

How to disable App Service authentication for a path?

I enabled identity federation V2 for an App Service that hosts a single-page app. This works fine, but now I need to disable it again for routes that start with /.well-known/ because that's where I store files that don't require authentication, e.g. apple-app-site-association.
In previous versions, I was able to upload an authorization.json file to my App Service to disable authentication for this path, but this no longer works:
{
  "routes": [
    {
      "path_prefix": "/",
      "policies": {
        "unauthenticated_action": "RedirectToLoginPage"
      }
    },
    {
      "path_prefix": "/.well-known/",
      "policies": {
        "unauthenticated_action": "AllowAnonymous"
      }
    }
  ]
}
I'm still unsure why the old way of configuring path exclusions stopped working, but I figured out how to do it with V2 configuration.
First migrate to file-based configuration as documented here: https://learn.microsoft.com/en-us/azure/app-service/configure-authentication-file-based#enabling-file-based-configuration
In short, copy all config from Microsoft.Web/sites/<siteName>/config/authsettingsV2 to a file in your wwwroot folder, e.g. wwwroot/auth.json. This file will be accessible over HTTP, so remove secrets from the configuration as documented. Set platform.configFilePath to auth.json and restart the App Service.
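For reference, one way to export the current authsettingsV2 payload so it can be copied into wwwroot/auth.json is az rest; the subscription, resource group, and site name below are placeholders, and the exact resource URL and api-version should be checked against the linked doc:
# dump the current V2 auth settings (reads typically go through the /list action)
az rest --method get \
  --url "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<siteName>/config/authsettingsv2/list?api-version=2020-06-01"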
Once you've confirmed that everything still works with file-based configuration, you can add path exclusions to the configuration file.
{
  "platform": {
    "enabled": true
  },
  "globalValidation": {
    ...
    "excludedPaths": [
      "/.well-known/apple-app-site-association",
      "/.well-known/assetlinks.json"
    ]
  },
  ...
}
Restart the app service one more time for changes to take effect.
If you're trying this as of 12/2022: it seems that "configFilePath" has not been working for quite some time (evidence).
If you change the settings directly in Azure Resource Explorer, it works.
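If Resource Explorer works but you'd prefer something scriptable, the same update can in principle be made with az rest against the authsettingsV2 resource (a hedged sketch; the placeholders, file name, and api-version are assumptions to verify):
# PUT the edited settings (including globalValidation.excludedPaths) back to ARM,
# which is effectively what Azure Resource Explorer does
az rest --method put \
  --url "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<siteName>/config/authsettingsv2?api-version=2020-06-01" \
  --body @authsettingsv2.json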

How do I access another Docker container (running a Node Express server) from an AWS SAM Local Docker container?

We are developing locally using SAM Local to invoke a Lambda in an API Gateway. SAM Local does this using a Docker container (set up as close to the Lambda Node runtime as possible). We want this Lambda to access some data in an API mocking service, in the shape of some Node Express servers running in another container (this could also just be run locally if needed). Both containers are in a user-created Docker bridge network, created as follows:
docker network create sam-demo
The API mocking service is run and added to the bridge network:
docker run --network sam-demo --name mock -d -P mock:latest
The Lambda is invoked in debug mode and added to the bridge network:
sam local start-api -t template.json -d 9229 --docker-network sam-demo
Inspecting the bridge network reveals that both the SAM Local Lambda container (wizardly_knuth) and the mock are there:
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.20.0.0/16",
"Gateway": "172.20.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"5ebfa4295a56e4df840676a2e214891543fd4e8cb271ed70ddd67946ab451119": {
"Name": "wizardly_knuth",
"EndpointID": "xxx",
"MacAddress": "02:42:ac:14:00:03",
"IPv4Address": "172.20.0.3/16",
"IPv6Address": ""
},
"d735c9aa840e4ce7180444cf168cd6b68451c9ca29ba87b7cb23edff11abea7b": {
"Name": "mock",
"EndpointID": "xxx",
"MacAddress": "02:42:ac:14:00:02",
"IPv4Address": "172.20.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
Now, what should the URL be for the Lambda to hit the mock? According to the Docker docs it should be the IPv4Address of the mock container, i.e. http://172.20.0.2, but I am not sure which port to use or how to find it.
I can exec into the mock and ping the SAM Local container successfully BUT I can't do the same from the SAM Local container as the shell doesn't have ping, curl, nc or anything installed.
I can't hit the Mock container directly from my machine as it is a Mac and I believe there are issues with doing so.
Any advice or next steps are greatly appreciated.
Much thanks,
Sam
UPDATE
In the end I gave up on this approach as I could not figure out what the URL for the Lambda should be to hit the mock within the Docker Bridge network.
The alternative approach was to just hit the mock Docker container directly from the Lambda using this URL (the mock container exposes port 3002):
http://docker.for.mac.localhost:3002/
Hope this helps somebody out. Please let me know if anyone solves the bridge network issue I originally posted about.
Thanks,
Sam
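For anyone still digging into the original bridge-network question: a possible way to find which port the mock container listens on, and a container-name URL worth trying from inside the network (a sketch; 3002 matches the port mentioned in the update):
# list the ports the mock container exposes / publishes
docker port mock
docker inspect --format '{{json .Config.ExposedPorts}}' mock
# on a user-defined bridge network, containers can usually resolve each other by name,
# so from the SAM container the mock should be reachable at e.g. http://mock:3002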

Getting error when deployed: An attempt was made to access a socket in a way forbidden by its access permissions

I have created an Azure Function in Go. The function works properly on my local machine. But when I deploy it to Azure, I am getting the exception below:
An attempt was made to access a socket in a way forbidden by its access permissions.
Inner exception method is: System.Net.Http.ConnectHelper+<ConnectAsync>d__1.MoveNext
Error log is here: https://github.com/mpurusottamc/azurefunc-go/blob/master/errorlog.json
The local.settings.json file has the following content:
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "AzureWebJobsStorage": "<proper storage link>"
  }
}
The host.json file references the Go executable as the worker:
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true
      }
    }
  },
  "httpWorker": {
    "description": {
      "defaultExecutablePath": "hello.exe"
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[1.*, 2.0.0)"
  }
}
My code is hosted on GitHub here: https://github.com/mpurusottamc/azurefunc-go
The deploy.sh file contains the deployment script.
I followed this reference article: https://itnext.io/write-azure-functions-in-any-language-with-the-http-worker-34d01f522bfd
Am I missing something?
I updated the Go version to 1.14 on my machine and added --force to the deployment script, and that resolved the issue for me.
go.mod was updated like this:
module hello
go 1.14
The deployment script was updated to include the --force argument:
func azure functionapp publish hellogoapp --force
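For reference, the full build-and-publish sequence might look roughly like this (a sketch; the GOOS/GOARCH values are an assumption based on the hello.exe worker referenced in host.json):
# build the custom-handler executable that host.json points at
GOOS=windows GOARCH=amd64 go build -o hello.exe
# publish, forcing the remote configuration to be overwritten
func azure functionapp publish hellogoapp --force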

Azure: Unable to connect to cluster (Aks-engine) using kubectl

SOLUTION
I had appended feature gates to kube-apiserver.yaml on the master node. This broke the apiserver, so kubectl couldn't connect to the cluster. After removing them, it worked fine.
PROBLEM
I deployed a Kubernetes cluster using aks-engine, but I'm getting this error: Unable to connect to the server: dial tcp 13.66.162.75:443: i/o timeout when I try to use kubectl. I'm able to access the master node via the serial console, but not through SSH (the same error occurs in that case).
$ KUBECONFIG=_output/kubeconfig/kubeconfig.westus2.json kubectl get node
Unable to connect to the server: dial tcp 13.66.162.75:443: i/o timeout
$ KUBECONFIG=_output/kubeconfig/kubeconfig.westus2.json kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Aks-Engine version - v0.28.1-linux-amd64
Kubernetes version - 1.10.12
Here is the kubeconfig.westus2.json file -
{
  "apiVersion": "v1",
  "clusters": [
    {
      "cluster": {
        "certificate-authority-data": "*****",
        "server": "https://masquerade-az.westus2.cloudapp.azure.com"
      },
      "name": "masquerade-az"
    }
  ],
  "contexts": [
    {
      "context": {
        "cluster": "masquerade-az",
        "user": "masquerade-az-admin"
      },
      "name": "masquerade-az"
    }
  ],
  "current-context": "masquerade-az",
  "kind": "Config",
  "users": [
    {
      "name": "masquerade-az-admin",
      "user": {"client-certificate-data": "****", "client-key-data": "*****"}
    }
  ]
}
This is the screenshot of the inbound ports.
This is the screenshot of the outbound ports.
As shared by the original poster, the solution is:
I had appended feature gates to kube-apiserver.yaml on the master node. This broke the apiserver, so kubectl couldn't connect to the cluster. After removing them, it worked fine.
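For anyone hitting the same symptom, the repair amounts to removing the offending flag from the apiserver's static pod manifest on the master node (a sketch; the manifest path is the usual aks-engine location and may differ on your cluster):
# on the master node, e.g. via the Azure serial console
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml   # remove the added --feature-gates entries
# the kubelet restarts the apiserver automatically when the manifest changes;
# then, from the workstation, kubectl should connect again
KUBECONFIG=_output/kubeconfig/kubeconfig.westus2.json kubectl get node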

Get host IP from Vagrant and use it with Ansible

I am playing around with Vagrant to set up some droplets and deploy my Node.js server with Ansible (I am using DigitalOcean). In some parts of my JS code I need to set the current IP into the script. The problem is that I can't set the IP manually, so I get a random IP from DO via Vagrant. How can I "get" this IP and make use of it in my Ansible script? Sure, I could just do a wget http://ipinfo.io/ip -qO - on the host itself or check it with ip, but I guess it should also work to get this info from Vagrant?
How can I "get" this IP and make use of it in my Ansible script?
Use ipify_facts:
- name: Get my public IP
  ipify_facts:
- debug: var=ipify_public_ip
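To see quickly what the module returns, it can also be run ad hoc (assuming the target host has outbound internet access to ipify):
ansible hostname -m ipify_facts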
From the documentation (specifically: http://docs.ansible.com/ansible/playbooks_variables.html#information-discovered-from-systems-facts), Ansible seems to have a pre-defined variable containing your networking information:
"ansible_eth0": {
"active": true,
"device": "eth0",
"ipv4": {
"address": "REDACTED",
"netmask": "255.255.255.0",
"network": "REDACTED"
},
"ipv6": [
{
"address": "REDACTED",
"prefix": "64",
"scope": "link"
}
],
"macaddress": "REDACTED",
"module": "e1000",
"mtu": 1500,
"type": "ether"
},
I can't remember which is the public interface on DigitalOcean, but you should be able to use {{ ansible_eth0.ipv4.address }} in your playbook.
Side note: you can use this command to list all "discovered" variables:
ansible hostname -m setup
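If the full setup output is too noisy, the facts can be filtered, e.g. down to the per-interface variables (the host name and pattern are placeholders):
ansible hostname -m setup -a 'filter=ansible_eth*'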
