How do I create an IoT Edge Module for an existing docker container from Azure Cognitive Services? - speech-to-text

I currently have public preview access for the Azure Cognitive Services for Speech-To-Text as a docker container. This allows the container to be run on an IoT Edge device, rather than accessing the cloud to perform this service. This public preview came with installation instructions that show I can download an existing docker image of one of the containers and run it from a CLI using "docker run".
But I don't want to have to run the docker container manually on my IoT Edge devices. I want it to be automatically deployed to my IoT Edge devices and automatically start running. In order to do this, I believe that it needs to exist as an IoT Edge Module. Is my understanding correct?
So my question is more of an instructional question. Do I need to create my own IoT Edge Module that utilizes this ACS docker container, or is there some other way to automatically deploy it to my IoT Edge Device and have it start running automatically?
I was unable to find any documentation or examples online of deploying an existing docker container to an IoT Edge device. Any guidance would be greatly appreciated!

OK, after much digging, I found a solution. Whatever you do, don't search online for "create iot module from docker container" or anything COMPLETELY MEANINGFUL like that. Instead, I had to search for something very specific to the Azure Cognitive Services EULA acceptance on the docker run command (i.e. I had to search for "iot edge module docker \"eula\""). Note the quotation marks around eula to ensure it is in the search result. I came across this article.
Using the article's guidance, I will repeat in detail what I did here in case the link ever goes stale.
1. In VS Code, create a new IoT Edge Solution.
2. In your solution, add a new IoT Edge Module.
   a. When prompted for the type of Module to create, select "Choose Existing Module (Enter Full URL)".
3. If you look inside your deployment.template.json file, you will now see a new "registryCredentials" element added to your edgeAgent details. Fill in the address, username, and password accordingly (a sketch of this element appears after these steps).
4. If you haven't done so yet, create your Cognitive Services resource online to obtain an Endpoint URL and an ApiKey. Take note of these values.
5. In the deployment.template.json file, under your new module's configuration settings, add the following:
"settings": {
"image": "containerpreview.azurecr.io/microsoft/cognitive-services-speech-to-text:latest",
"createOptions":
{
"Cmd": [
"Eula=accept",
"Billing={enter-your-EndpointURL}",
"ApiKey={enter-your-ApiKey}"
],
"HostConfig": {
"PortBindings": {
"5000/tcp": [
{
"HostPort": "5000"
}
]
}
}
}
This is equivalent to running "docker run" from the command line with parameters like these:
docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
containerpreview.azurecr.io/microsoft/cognitive-services-speech-to-text \
Eula=accept \
Billing={BILLING_ENDPOINT_URI} \
ApiKey={BILLING_KEY}
Now "Build and Push your IoT Edge Solution", followed by "Create Deployment for Single Device". On your target IoT Edge device, you should now see the module installed and running via CLI "iotedge list".
Update: 2020/05/01
After I submitted a request to Microsoft for better documentation, they updated their docs site to include information on how to map the docker command-line arguments onto the deployment.template.json file: https://learn.microsoft.com/en-us/azure/iot-edge/how-to-use-create-options

This is the essence from the link above:
Once you have the module working the way you want it (i.e. in a docker container), run docker inspect <container name>. This command outputs the module details in JSON format. Find the parameters that you configured, and copy the JSON.
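For example, to pull out just the HostConfig section of the inspect output as JSON (a quick sketch; "my-speech-container" is a placeholder for your own container name):
docker inspect --format '{{json .HostConfig}}' my-speech-container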

Related

Azure - App Service Docker Compose - is working_directory supported?

I have a Node app that I'm running in Azure App Service; however, the working_dir option in my docker-compose file doesn't seem to actually do anything. I've tested locally with the same image I pushed to the registry and it works, just not on Azure.
I just want to check: is it actually supported?
https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#docker-compose-options
Thanks,
The link you provide in the question lists the supported and unsupported options, and working_dir appears in neither list. The Note there also says:
Any other options not explicitly called out are ignored in Public Preview.
So the working_dir option is in fact ignored. Instead, I recommend you use the WORKDIR instruction in the Dockerfile.
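A minimal sketch of what that could look like for a Node app (the base image, port, and entry file are assumptions, not details from the question):
# hypothetical Dockerfile: set the working directory at image build time
FROM node:18-alpine
# WORKDIR replaces the unsupported working_dir compose option
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]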

Azure WebApp for Container WebSocket Not Working - SAFE / ElmishBridge

I have been working on a project built from the SAFE stack template and everything runs successfully when I build it to a docker container and run this locally.
Using Azure WebApp for Container, the container successfully attaches and deploys, and I am able to load the app from the URL as expected. [The Server is responding with the Client App]
The issue is that the WebSockets are not working once deployed, but they work properly from when I run the container locally.
I've looked through a lot in regards to all of this and tried a lot of different things, but am having no success. I could share more, but I was primarily seeing if anyone has encountered this.
I did run this:
az webapp config set --web-sockets-enabled true --name MyAppName --resource-group MyResourceGroup
as per something suggested from here: https://social.msdn.microsoft.com/Forums/azure/en-US/036f9c3d-16dc-4e52-b943-5eb1afed824f/enabling-websockets-on-a-web-app-for-containers-service
I can confirm that the WebSockets being enabled was set to false, by default, and that it required using the CloudShell to set it to true.
It is frustrating, because I am unable to get any information beyond the following:
WebSocket connection to 'wss://xxx.azurewebsites.net/socket/init' failed: Error during WebSocket handshake: Unexpected response code: 503
I don't want to overshare code details up front unless that would be helpful, because everything works when I run the container locally. It does feel oddly like something related to that Azure setting, or perhaps some kind of port-related Application Setting.
Further, this does not feel like an aspect of the SAFE template or Elmish.Bridge; anyone who has successfully deployed this combo on Azure using a Docker container may have direct insight into this problem. It seems like something wider than this particular usage, and related to container/WebSocket usage on Azure.
Any help is appreciated. Thanks.
It seems that WebSockets are not fully supported in Azure WebApp for Containers that are running Linux Containers:
https://github.com/aspnet/AspNetCore/issues/10370
Can you check which App Service plan you have?
TL;DR: you need at least a B1 App Service plan. The Free one will not work with Streamlit (or other apps using WebSockets).
After a couple of hours trying to find the answer to the same question, I found out what it was. I wanted to deploy a Streamlit app but was stuck at the same place after following the guidance. Opening the browser console (Ctrl+Shift+J) on the "Please wait" page showed that WebSockets were the issue. It appears WebSockets will not work on the Free Linux App Service plan, and after recreating the app on a B1 plan (as in the guidance), it worked.
Might be a duplicate of this issue.
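If it helps, a rough sketch for checking and bumping the plan tier with the Azure CLI (the plan and resource group names are placeholders):
# show the current App Service plan tier
az appservice plan show --name MyPlanName --resource-group MyResourceGroup --query sku
# scale the plan up to B1 so WebSockets can work
az appservice plan update --name MyPlanName --resource-group MyResourceGroup --sku B1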

Unable to log into Hyperledger cello operator dashboard

I just installed Hyperledger Cello following the instructions here. The operator dashboard opens up on port 8080, but when I try logging in with the credentials admin:pass, as suggested by tutorials like these, a spinner appears on the login button for a while and then the button becomes active again.
Are there any other credentials one can use to log into the operator dashboard? I can't log into the operator dashboard and I can't access the user dashboard. The user dashboard container is not running and there's nothing running on my port 8081 where the user dashboard should be. Please help.
I think you may be running the service in dev mode. In dev mode you must compile the JS files for the operator dashboard; otherwise the dashboard can't find the JS files it needs, so the page is empty after you log in. The latest code compiles the JS files automatically when you run in dev mode, for example:
MODE=dev make start
So please try again after you clone the latest code, and if you still have problems, you can comment more here.
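A quick sketch of that (assuming the upstream Hyperledger Cello repository):
# get the latest code and start the services in dev mode
git clone https://github.com/hyperledger/cello.git
cd cello
MODE=dev make start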
We have also released the v0.9.0-alpha version, and the newest version has some new features, including:
Support for a Kubernetes agent.
The ability to apply a Fabric 1.2 chain in the user dashboard and use it.
And this is the new tutorial; it includes:
How to set up the master service.
How to set up k8s and docker worker nodes.
How to create hosts of the k8s and docker types in the operator dashboard.
How to create a chain in the operator dashboard.
How to apply and use a Fabric chain (both v1.0 and v1.2) in the user dashboard.

How to run Azure Functions as IoT Edge Module on a Raspberry Pi?

I'm looking for someone who was able to follow the documentation from Microsoft (https://learn.microsoft.com/en-us/azure/iot-edge/tutorial-deploy-function) and Jon Gallant (https://blog.jongallant.com/2017/11/azure-iot-edge-raspberrypi/) with success.
After following the whole documentation to get a working Azure Function hosted in a docker container on a Raspberry Pi, the Function does not work and the edgeAgent log contains only an info message that the Function can't start. To find out what is happening, I set up the debug environment as described here: https://learn.microsoft.com/en-us/azure/iot-edge/how-to-vscode-debug-azure-function but nothing works. After hitting the final F5 (as described), I get an error that the docker container does not exist.
Yes, Dariusz is right.
So, I went to check the status of the Azure Functions runtime, and the good news is that they now have a version of the runtime for ARM: https://hub.docker.com/r/microsoft/azure-functions-runtime/tags/
So, we just need to update our binding and start publishing our image for ARM.
I opened this GitHub issue to track the status:
https://github.com/Azure/iot-edge/issues/485
UPDATE: We have released 1.0.0-preview022, which has an ARM version of Functions that works on Raspberry Pi! Let us know if you find any problems. We should be updating our VS Code template soon.
As of today there is no Azure Functions image released for ARM. If you look at the Docker Hub URL https://hub.docker.com/r/microsoft/azureiotedge-functions-binding/tags/ there are only a Windows Nano container and a Linux x64 container available.

Deployment of individual nodes in Node-RED

Node-RED has a very nice "single-click deployment" feature. Using this feature, Node-RED deploys all of its nodes and flows very quickly.
However, I would like to know: where are the flows and nodes we develop in the Node-RED editor deployed when we click the "Deploy" button? The reason I am asking is that I would like to deploy Node-RED's generated code (I guess it is Node.js) to remote devices (e.g., Android) automatically.
I know one possible solution is to run an MQTT publisher on the Android device and write an MQTT subscriber in Node-RED to get the event data. But the problem with this solution is the manual deployment (time-consuming, error-prone).
As Tiago has suggested in the answer section of this question, Node-RED generates JSON files in the userDir according to the nodes and flows we define in the Node-RED editor. Can we get Node.js files instead of JSON files? This would help us deploy device-specific code to each device without loading the Node-RED editor on each device.
You can copy your userDir to those remote devices and launch node-red there.
Your flows as well as your settings file are there:
# /opt/node/bin/node-red -help
Node-RED v0.12.1
Usage: node-red [-v] [-?] [--settings settings.js] [--userDir DIR] [flows.json]
Options:
-s, --settings FILE use specified settings file
-u, --userDir DIR use specified user directory
-v enable verbose output
-?, --help show usage
My userDir:
root@arm:~# find /root/.node-red/
/root/.node-red/
/root/.node-red/lib
/root/.node-red/lib/flows
/root/.node-red/settings.js
/root/.node-red/flows_arm.json
/root/.node-red/.config.json
/root/.node-red/.flows_arm.json.backup
BTW, your flow is just a JSON file; take a look at it and I'm sure you'll understand how it works :)
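A rough sketch of copying the userDir to a remote device and starting Node-RED against it (the host name, paths, and flows file name are assumptions):
# copy the local userDir to the remote device
scp -r ~/.node-red user@remote-device:/home/user/.node-red
# start Node-RED on the remote device with that userDir and flows file
ssh user@remote-device "node-red --userDir /home/user/.node-red flows_arm.json"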
