I have an ASP.NET 6 application using Dapr. It works in the local development environment, but when I deploy it to Azure Container Apps, it is unable to reach the Dapr sidecar.
The application is defined in Bicep, with the following settings relevant to Dapr:
"dapr": {
"enabled": true,
"appId": "<redaced>",
"appProtocol": "http",
"appPort": 5032
}
The following environment variables are set on the container:
"env": [
{
"name": "DAPR_HTTP_PORT",
"value": "3500"
},
{
"name": "DAPR_GRPC_PORT",
"value": "3500"
}
]
I use the Dapr SDK and make the following call: _daprClient.GetStateAsync<ICollection<string>>(STORE_NAME, KEY). This results in an exception with this message:
---> Grpc.Core.RpcException: Status(StatusCode="Internal", Detail="Error starting gRPC call. HttpRequestException: Connection refused (127.0.0.1:3500) SocketException: Connection refused", DebugException="System.Net.Http.HttpRequestException: Connection refused (127.0.0.1:3500).
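For reference, one way to check whether the sidecar is listening at all is to probe its health endpoint from inside the running container. This is only a sketch: it assumes the container image has a shell and curl available, and that az containerapp exec can be used for this app.
# Open a shell inside the running container (app name and resource group are placeholders)
az containerapp exec --name <app-name> --resource-group <resource-group> --command /bin/sh
# From inside the container, probe the sidecar's HTTP health endpoint on the documented port
curl -v http://127.0.0.1:3500/v1.0/healthz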
The documentation for Azure Container Apps states that 3500 is the default port for the Dapr sidecar. Any ideas why this does not work?
I am trying to deploy my function, but it keeps failing.
Currently the function is deployed on Node.js 10. I reworked the code and deployed it with Node version 12.
I am using the Firebase CLI.
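For reference, the runtime is declared in functions/package.json roughly like this (a sketch showing only the relevant field; the rest of the file is unchanged):
{
  "engines": {
    "node": "12"
  }
}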
The deploy failed and asked me to check the log in the Cloud console; this was the log:
{
"textPayload": "ERROR: build step 2 \"us.gcr.io/fn-img/buildpacks/nodejs10/builder:nodejs10_20210505_10_24_1_RC00\" failed: step exited with non-zero status: 1",
"insertId": "c3362881-be3d-4d52-965d-84793fde26ed-174",
"resource": {
"type": "build",
"labels": {
"project_id": "ecutter",
"build_id": "c3362881-be3d-4d52-965d-84793fde26ed",
"build_trigger_id": ""
}
},
"timestamp": "2021-05-24T19:49:39.640786230Z",
"severity": "INFO",
"labels": {
"build_step": "MAIN"
},
"logName": "projects/ecutter/logs/cloudbuild",
"receiveTimestamp": "2021-05-24T19:49:39.769241759Z"
}
What I tried
At first I deployed the function and it failed.
I thought it could be because of the Node version, so I tried to update the Node version of the deployed function from the Cloud console, but it failed.
I tried reinstalling Node versions and npm modules, but none of it helped.
What's going on?
We are developing locally using SAM Local to invoke a Lambda in an API Gateway. SAM Local does this using a Docker container (set up as close to the Lambda Node runtime as possible). We want this Lambda to access some data in an API mocking service in the shape of some Node Express servers running in another container (this could also just be run locally if needed). Both containers are in a user-created Docker bridge network created as follows:
docker network create sam-demo
The API mocking service is run and added to the bridge network:
docker run --network sam-demo --name mock -d -P mock:latest
The Lambda is invoked in debug mode and added to the bridge network:
sam local start-api -t template.json -d 9229 --docker-network sam-demo
Inspecting the bridge network reveals that both the SAM Local Lambda (wizardly_knuth) and the mock container are there:
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.20.0.0/16",
"Gateway": "172.20.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"5ebfa4295a56e4df840676a2e214891543fd4e8cb271ed70ddd67946ab451119": {
"Name": "wizardly_knuth",
"EndpointID": "xxx",
"MacAddress": "02:42:ac:14:00:03",
"IPv4Address": "172.20.0.3/16",
"IPv6Address": ""
},
"d735c9aa840e4ce7180444cf168cd6b68451c9ca29ba87b7cb23edff11abea7b": {
"Name": "mock",
"EndpointID": "xxx",
"MacAddress": "02:42:ac:14:00:02",
"IPv4Address": "172.20.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
Now, what should the URL be for the Lambda to hit the mock? According to the Docker docs it should be the IPv4Address of the mock container, i.e. http://172.20.0.2, but I am not sure which port to use or how to find it.
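For context, these are the checks I know of to see which ports the mock actually exposes, run from the host (a sketch):
# Show which container ports the mock exposes and where -P published them on the host
docker port mock
# Show the ports declared via EXPOSE in the mock image
docker inspect --format '{{json .Config.ExposedPorts}}' mock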
I can exec into the mock and ping the SAM Local container successfully, but I can't do the same from the SAM Local container, as its shell doesn't have ping, curl, nc or anything else installed.
I can't hit the Mock container directly from my machine as it is a Mac and I believe there are issues with doing so.
Any advice or next steps are greatly appreciated.
Much thanks,
Sam
UPDATE
In the end I gave up on this approach, as I could not figure out what the URL for the Lambda should be to hit the mock within the Docker bridge network.
The alternative approach was to just hit the mock Docker container directly from the Lambda using this URL (the mock container exposes port 3002):
http://docker.for.mac.localhost:3002/
Hope this helps somebody out. Please let me know if anyone solves the bridge network issue I originally posted about.
Thanks,
Sam
I am working on an Azure IoT Edge project. Currently I am going through the production readiness checklist. I followed the documentation to use storage on the host filesystem for the edgeAgent and edgeHub modules.
When I run sudo iotedge check, edgeHub is OK but edgeAgent raises a warning:
‼ production readiness: Edge Agent's storage directory is persisted on the host filesystem - Warning
The edgeAgent module is not configured to persist its /tmp/edgeAgent directory on the host filesystem.
Data might be lost if the module is deleted or updated.
Please see https://aka.ms/iotedge-storage-host for best practices.
√ production readiness: Edge Hub's storage directory is persisted on the host filesystem - OK
Here is a snippet from the deployment template:
"systemModules": {
"edgeAgent": {
"type": "docker",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-agent:1.0",
"createOptions": {
"HostConfig": {
"Binds": [
"/home/pi/iotedge/edgeAgent/storage/:/iotedge/storage/"
]
}
}
},
"env": {
"storageFolder": {
"value": "/iotedge/storage/"
}
}
},
"edgeHub": {
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-hub:1.0",
"createOptions": {
"HostConfig": {
"Binds": [
"/home/pi/iotedge/edgeHub/storage:/iotedge/storage/"
],
"PortBindings": {
"5671/tcp": [
{
"HostPort": "5671"
}
],
"8883/tcp": [
{
"HostPort": "8883"
}
],
"443/tcp": [
{
"HostPort": "443"
}
]
}
}
}
},
"env": {
"storageFolder": {
"value": "/iotedge/storage/"
}
}
}
},
As of release 1.0.9, there's an issue where edgeAgent's configuration doesn't update unless its image tag is updated. Two options from your current state:
Use a specific tag in the image settings (always recommended), e.g. mcr.microsoft.com/azureiotedge-agent:1.0.9 (see the snippet after these options).
Delete the edgeAgent container on the device: docker rm -f edgeAgent. It will get restarted in under 30 seconds, and the new storageFolder env var will be picked up.
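For example, option 1 against the template above is just pinning the image tag; everything else under edgeAgent (createOptions, env) stays as it is (a sketch):
"edgeAgent": {
  "type": "docker",
  "settings": {
    "image": "mcr.microsoft.com/azureiotedge-agent:1.0.9"
  }
}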
Run 'iotedge check' again after the container is updated, and this warning should go away.
I have followed the same documentation and was able to avoid the production readiness checklist warnings on my Raspberry Pi 3.
1) I have configured the "Binds" as per the documentation Link module storage to device storage
"Binds":["/etc/iotedge/storage/:/iotedge/storage/"]
2) I have provided the user access on the HostStoragePath, from SSH terminal.
sudo chown 1000 /etc/iotedge/storage/
sudo chmod 700 /etc/iotedge/storage/
3) Restarted the Raspberry Pi 3 to make sure the access change takes effect.
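To double-check that the ownership and permissions stuck after the reboot (a quick sketch):
# Should show the directory owned by UID 1000 with mode drwx------
ls -ld /etc/iotedge/storage/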
Make sure you have host storage folders available on your edge device.
Make sure to provide the user full access on those folders.
Try the deployment with your updated manifest; it should work.
I am currently having trouble debugging Azure Functions with Azure Functions Core Tools in VS Code.
I am using the npm package azure-functions-core-tools#2.
As I read in many resources, func host start / func start should always start the node process with --inspect=9229. As you can see, this is not the case in my setup:
[4/30/19 4:51:25 AM] Starting language worker process:node "/usr/lib/node_modules/azure-functions-core-tools/bin/workers/node/dist/src/nodejsWorker.js" --host 127.0.0.1 --port 50426 --workerId 3e909143-72a3-4779-99c7-376ab3aba92b --requestId 656a9413-e705-4db8-b09f-da44fb1f991d --grpcMaxMessageLength 134217728
[4/30/19 4:51:25 AM] node process with Id=92 started
[4/30/19 4:51:25 AM] Generating 1 job function(s)
[...]
[4/30/19 4:51:25 AM] Job host started
Hosting environment: Production
Also, all attempts at changing the hosting environment failed. I tried to add FUNCTIONS_CORETOOLS_ENVIRONMENT to my local configuration, which resulted in an error:
An item with the same key has already been added. Key: FUNCTIONS_CORETOOLS_ENVIRONMENT
I tried adding several environment variables in my launch and task settings, using --debug, and changing project settings. Nothing works.
I am currently running this on the Windows Subsystem for Linux (WSL), and apart from this it works really well.
Does anyone have a clue what I am doing wrong here?
I don't think debug is enabled by default. You will have to set the language worker arguments for this to work as documented.
In local.settings.json
To debug locally, add "languageWorkers:node:arguments": "--inspect=5858" under Values in your local.settings.json file and attach a debugger to port 5858.
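A minimal local.settings.json then looks roughly like this (a sketch; the storage and runtime values are placeholders for whatever you already have):
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "languageWorkers:node:arguments": "--inspect=5858"
  }
}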
With func CLI
You could set this by using the --language-worker argument
func host start --language-worker -- --inspect=5858
In VS Code
If you are developing using VS Code and the Azure Functions extension, the --inspect is added automatically as the corresponding environment variable is set in .vscode/tasks.json
{
"version": "2.0.0",
"tasks": [
{
"label": "runFunctionsHost",
"type": "shell",
"command": "func host start",
"isBackground": true,
"presentation": {
"reveal": "always"
},
"problemMatcher": "$func-watch",
"options": {
"env": {
"languageWorkers__node__arguments": "--inspect=5858"
}
},
"dependsOn": "installExtensions"
},
{
"label": "installExtensions",
"command": "func extensions install",
"type": "shell",
"presentation": {
"reveal": "always"
}
}
]
}
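To actually attach from VS Code, the matching configuration in .vscode/launch.json looks roughly like this (a sketch; the Azure Functions extension normally generates it, and the port must match the --inspect value above):
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to Node Functions",
      "type": "node",
      "request": "attach",
      "port": 5858,
      "preLaunchTask": "runFunctionsHost"
    }
  ]
}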
You could also set this environment variable directly instead of adding it to local.settings.json.
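For example, in the shell that starts the host (a sketch; the double-underscore form mirrors the env block in the tasks.json above):
# Set the worker arguments for the current shell session, then start the host
export languageWorkers__node__arguments="--inspect=5858"
func host start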
SOLUTION
I had appended feature gates to kube-apiserver.yaml on the master node. This broke the apiserver, so kubectl couldn't connect to the nodes. After removing them, it worked fine.
PROBLEM
I deployed a Kubernetes cluster using aks-engine, but I'm getting this error when I try to use kubectl: Unable to connect to the server: dial tcp 13.66.162.75:443: i/o timeout. I'm able to access the master node with the serial console but not through SSH (the same error occurs in that case).
$ KUBECONFIG=_output/kubeconfig/kubeconfig.westus2.json kubectl get node
Unable to connect to the server: dial tcp 13.66.162.75:443: i/o timeout
$ KUBECONFIG=_output/kubeconfig/kubeconfig.westus2.json kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Aks-Engine version - v0.28.1-linux-amd64
Kubernetes version - 1.10.12
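Since the master is reachable over the serial console, one check that can be run there is whether the apiserver itself is up (a sketch; it assumes Docker is the container runtime on the master, and the container ID is a placeholder):
# Is the kube-apiserver container running, or stuck in a restart loop?
sudo docker ps -a | grep kube-apiserver
# Check its logs for startup errors
sudo docker logs <apiserver-container-id>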
Here is the kubeconfig.westus2.json file -
{
"apiVersion": "v1",
"clusters": [
{
"cluster": {
"certificate-authority-data": "*****"
"server": "https://masquerade-az.westus2.cloudapp.azure.com"
},
"name": "masquerade-az"
}
],
"contexts": [
{
"context": {
"cluster": "masquerade-az",
"user": "masquerade-az-admin"
},
"name": "masquerade-az"
}
],
"current-context": "masquerade-az",
"kind": "Config",
"users": [
{
"name": "masquerade-az-admin",
"user": {"client-certificate-data":"****","client-key-data":"*****"}
}
]
}
This is the screenshot for inbound ports.
This is the screenshot for outbound ports.
As shared by the original poster, the solution is:
I appended feature-gates in kube-apiserver.yaml in master node. This broke the apiserver, so kubectl couldn't connect to the nodes. After removing them, it was working fine.