I cannot get a deployment of an Azure Function from a private repository, using the new GitHub package registry for Docker (https://github.com/features/packages), to work.
My linux_fx_version is:
'linux_fx_version': 'DOCKER|{}'.format(self.docker_image_id)
with docker_image_id having the value organisation/project-name/container-name:latest
For the other settings, I am using
{ "name": "DOCKER_REGISTRY_SERVER_PASSWORD", "value": self.docker_password },
{ "name": "DOCKER_REGISTRY_SERVER_USERNAME", "value": self.docker_username },
{ "name": "DOCKER_REGISTRY_SERVER_URL", "value": self.docker_url },
with the docker_url being https://docker.pkg.github.com/, and the password being a token with the read:packages scope
Things look good, and yet I get the following (I am not able to fetch any deployment logs as the runtime is unreachable).
Error:
Azure Functions Runtime is unreachable. Click here for details on storage configuration.
Solution found.
Use https://docker.pkg.github.com/ as the Docker URL,
and docker.pkg.github.com/<org>/<project-name>/<container-name>:<version> as the image in linux_fx_version (the registry host must be part of the image name).
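For reference, a minimal sketch of the working configuration in the same style as the snippets above (the org/project/container segments and credentials are placeholders):
# Sketch of the working settings; values in angle brackets are placeholders
docker_url = 'https://docker.pkg.github.com/'
docker_image_id = 'docker.pkg.github.com/<org>/<project-name>/<container-name>:<version>'

site_config = {
    'linux_fx_version': 'DOCKER|{}'.format(docker_image_id)
}
app_settings = [
    { 'name': 'DOCKER_REGISTRY_SERVER_URL', 'value': docker_url },
    { 'name': 'DOCKER_REGISTRY_SERVER_USERNAME', 'value': '<github-username>' },
    { 'name': 'DOCKER_REGISTRY_SERVER_PASSWORD', 'value': '<token-with-read:packages-scope>' },
]
Note that the registry host appears inside the image name in linux_fx_version, while DOCKER_REGISTRY_SERVER_URL keeps the https:// prefix.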
I'm running several Node.js microservices on Google Kubernetes Engine.
Sometimes these services crash, and according to Cloud Logging I can find detailed information in a log file. For example, the log message says
{
    "textPayload": "npm ERR! /root/.npm/_logs/2021-10-27T11_26_28_534Z-debug.log\n",
    "insertId": "zoqxk8wvkuofhslm",
    "resource": {
        "type": "k8s_container",
        "labels": {
            "pod_name": "client-depl-7f679c6b49-5d9tz",
            "container_name": "client",
            "namespace_name": "production",
            "cluster_name": "cluster-1",
            "location": "europe-west3-a",
            "project_id": "XXX"
        }
    },
    "timestamp": "2021-10-27T11:26:28.701252670Z",
    "severity": "ERROR",
    "labels": {
        "k8s-pod/app": "client",
        "k8s-pod/skaffold_dev/run-id": "b5518659-05d6-4c08-9b55-9d58fdd5807f",
        "k8s-pod/pod-template-hash": "7f679c6b49",
        "compute.googleapis.com/resource_name": "gke-cluster-1-pool-1-8bfc60b2-ag86",
        "k8s-pod/app_kubernetes_io/managed-by": "skaffold"
    },
    "logName": "projects/xxx-productive/logs/stderr",
    "receiveTimestamp": "xxx"
}
Where do I find these logs on Google Cloud Platform?
---------------- Edit 2021.10.28 ---------------------------
I should clarify that I am already using the logs explorer. This is what I see there:
The logs show 7 consecutive error entries about npm failing. The last two entries indicate that there is more information in a log file "/root/.npm/_logs/2021-10-27T11_26_28_534Z-debug.log".
Does this log file have more info about the failure, or is all the info in these 7 error log entries?
Thanks
kubectl logs <your_pod>
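If the container has already crashed and restarted, the logs you want are usually on the previous instance:
kubectl logs <your_pod> --previous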
You can use the GCP Logs Explorer.
Assuming you have already enabled Logging and Monitoring, you can view logs as follows:
a. Go to the Logs explorer in the Cloud Console.
b. Click Resource. Under ALL_RESOURCE_TYPES, select Kubernetes Container.
c. Under CLUSTER_NAME, select the name of your user cluster.
d. Under NAMESPACE_NAME, select default.
e. Click Add and then click Run Query.
f. Under Query results, you can see log entries from the monitoring-example Deployment. For example:
{
    "textPayload": "2020/11/14 01:24:24 Starting to listen on :9090\n",
    "insertId": "1oa4vhg3qfxidt",
    "resource": {
        "type": "k8s_container",
        "labels": {
            "pod_name": "monitoring-example-7685d96496-xqfsf",
            "cluster_name": ...,
            "namespace_name": "default",
            "project_id": ...,
            "location": "us-west1",
            "container_name": "prometheus-example-exporter"
        }
    },
    "timestamp": "2020-11-14T01:24:24.358600252Z",
    "labels": {
        "k8s-pod/pod-template-hash": "7685d96496",
        "k8s-pod/app": "monitoring-example"
    },
    "logName": "projects/.../logs/stdout",
    "receiveTimestamp": "2020-11-14T01:24:39.562864735Z"
}
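To narrow the results to the failing container, a query like the following can be pasted into the Logs Explorer query box (resource labels taken from the question's log entry):
resource.type="k8s_container"
resource.labels.cluster_name="cluster-1"
resource.labels.namespace_name="production"
resource.labels.container_name="client"
severity>=ERROR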
How about logging into the pod while it is alive:
kubectl exec -it your-pod -- sh
then waiting for it to crash and watching the crash file in real time, while the pod has not been restarted yet :)
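For example, once you have a shell in the pod, you can watch the npm debug logs directly (directory taken from the question's error message; assumed to exist in your container):
# inside the pod, once npm has written the debug log
ls /root/.npm/_logs/
cat /root/.npm/_logs/*-debug.log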
How to log in to a GCP Pod:
From the Google Cloud Platform main menu, go to Kubernetes Engine -> Workloads.
Click on the workload you're interested in.
Find the Managed Pods section and click on the Pod you want to access.
Click on KUBECTL -> Exec -> [name of workload/namespace].
A terminal should appear at the bottom of the browser page, giving you a shell inside the pod. You can look around for your log file from in here.
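That console terminal is equivalent to running kubectl exec yourself; with the pod and namespace from the question's log entry, that would be:
kubectl exec -it client-depl-7f679c6b49-5d9tz -n production -- sh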
I am learning about K8s and set up a release pipeline with a kubectl apply. I set up the AKS cluster via Terraform, and on the first run all seemed fine. Once I destroyed the cluster and reran the pipeline, I got issues which I believe are related to the kubeconfig file mentioned in the exception. I tried the cloud shell etc. to get to the file or reset it, but I wasn't successful. How can I get back to a clean state?
2020-12-09T09:08:51.7047177Z ##[section]Starting: kubectl apply
2020-12-09T09:08:51.7482440Z ==============================================================================
2020-12-09T09:08:51.7483217Z Task : Kubectl
2020-12-09T09:08:51.7483729Z Description : Deploy, configure, update a Kubernetes cluster in Azure Container Service by running kubectl commands
2020-12-09T09:08:51.7484058Z Version : 0.177.0
2020-12-09T09:08:51.7484996Z Author : Microsoft Corporation
2020-12-09T09:08:51.7485587Z Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/deploy/kubernetes
2020-12-09T09:08:51.7485955Z ==============================================================================
2020-12-09T09:08:52.7640528Z [command]C:\ProgramData\Chocolatey\bin\kubectl.exe --kubeconfig D:\a\_temp\kubectlTask\1607504932712\config apply -f D:\a\r1\a/medquality-cordapp/k8s
2020-12-09T09:08:54.1555570Z Unable to connect to the server: dial tcp: lookup mq-k8s-dfee38f6.hcp.switzerlandnorth.azmk8s.io: no such host
2020-12-09T09:08:54.1798118Z ##[error]The process 'C:\ProgramData\Chocolatey\bin\kubectl.exe' failed with exit code 1
2020-12-09T09:08:54.1853710Z ##[section]Finishing: kubectl apply
Update, workflow tasks of the release pipeline:
Initially I get the artifact (a clone of the repo containing the k8s YAMLs), then the stage does a kubectl apply.
"workflowTasks": [
{
"environment": {},
"taskId": "cbc316a2-586f-4def-be79-488a1f503564",
"version": "0.*",
"name": "kubectl apply",
"refName": "",
"enabled": true,
"alwaysRun": false,
"continueOnError": false,
"timeoutInMinutes": 0,
"definitionType": null,
"overrideInputs": {},
"condition": "succeeded()",
"inputs": {
"kubernetesServiceEndpoint": "82e5971b-9ac6-42c6-ac43-211d2f6b60e4",
"namespace": "",
"command": "apply",
"useConfigurationFile": "false",
"configuration": "",
"arguments": "-f $(System.DefaultWorkingDirectory)/medquality-cordapp/k8s",
"secretType": "dockerRegistry",
"secretArguments": "",
"containerRegistryType": "Azure Container Registry",
"dockerRegistryEndpoint": "",
"azureSubscriptionEndpoint": "",
"azureContainerRegistry": "",
"secretName": "",
"forceUpdate": "true",
"configMapName": "",
"forceUpdateConfigMap": "false",
"useConfigMapFile": "false",
"configMapFile": "",
"configMapArguments": "",
"versionOrLocation": "version",
"versionSpec": "1.7.0",
"checkLatest": "false",
"specifyLocation": "",
"cwd": "$(System.DefaultWorkingDirectory)",
"outputFormat": "json",
"kubectlOutput": ""
}
}
]
I can see you are using kubernetesServiceEndpoint as the service connection type in the Kubectl task.
Once I destroyed the cluster I reran the pipeline, I get issues....
If the cluster was destroyed, the kubernetesServiceEndpoint in Azure DevOps is still connected to the original cluster. The Kubectl task using that kubernetesServiceEndpoint keeps looking for the old cluster, and it fails with the above error since the old cluster no longer exists.
You can fix this issue by updating the kubernetesServiceEndpoint in Azure DevOps to point to the newly created cluster:
Go to Azure DevOps Project settings --> Service connections --> find your Kubernetes service connection --> click Edit to update the configuration.
But if your Kubernetes cluster gets destroyed and recreated frequently, I would suggest using Azure Resource Manager as the service connection type in the Kubectl task; see the sketch below.
By using azureSubscriptionEndpoint and specifying azureResourceGroup, as long as the cluster's name does not change, it does not matter how many times the cluster is recreated.
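For reference, a sketch of the equivalent YAML using the Kubernetes@1 task (input names follow that task's schema; the service connection, resource group and cluster name are placeholders):
- task: Kubernetes@1
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscriptionEndpoint: '<ARM service connection>'
    azureResourceGroup: '<resource-group>'
    kubernetesCluster: '<cluster-name>'
    command: 'apply'
    arguments: '-f $(System.DefaultWorkingDirectory)/medquality-cordapp/k8s'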
See the documentation on creating an Azure Resource Manager service connection.
When you destroy and reprovision an AKS cluster, the kube API URL and some other things change, but as you found out, nothing updates this automatically on your configured clients.
What I do to get access to new and reprovisioned AKS clusters is:
az aks get-credentials --subscription <sub> -g <rg> -n <aksname> -a --overwrite-existing
(-a fetches the admin credentials; --overwrite-existing replaces the stale context in your kubeconfig.)
After installing azure-functions-core-tools v3 and migrating a project to v3, PowerShell projects began to fail. I narrowed the issue down to Az modules loaded as dependencies no longer being recognized at runtime. Further testing revealed that the managed dependency setting in the function's host.json file is properly loading the Az modules: deleting the data/ManagedDependencies folder via Kudu and restarting the Function App restores the Az modules, so requirements.psd1 is working - PowerShell just cannot find the downloaded modules.
After reverting to v2 I see the same issue in v2. I was able to work around the issue temporarily by adding the required Az modules to the modules folder in the Azure Functions project. Note: dev and deploy are currently via VS Code.
How are Managed Dependencies referenced by PowerShell?
What are the next avenues to pursue to resolve the reference issues?
Host.json contents:
"version": "2.0",
"managedDependency": {
"enabled": true
}
}
requirements.psd1 contents:
# This file enables modules to be automatically managed by the Functions service.
# See https://aka.ms/functionsmanageddependency for additional information.
#
@{
    'Az' = '3.*'
}
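If you do not need the full Az rollup module, you can also pin individual modules to cut download and import time (a sketch; the module names and version ranges here are only examples):
@{
    'Az.Accounts' = '1.*'
    'Az.KeyVault' = '1.*'
}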
Function App Config:
[
    {
        "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
        "value": "32178670-77eb-40aa-afbc-ca17946f0350",
        "slotSetting": false
    },
    {
        "name": "AzureWebJobsStorage",
        "value": "DefaultEndpointsProtocol=https;AccountName=REDACTED;EndpointSuffix=core.windows.net",
        "slotSetting": false
    },
    {
        "name": "FUNCTIONS_EXTENSION_VERSION",
        "value": "~2",
        "slotSetting": false
    },
    {
        "name": "FUNCTIONS_WORKER_RUNTIME",
        "value": "powershell",
        "slotSetting": false
    },
    {
        "name": "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING",
        "value": "DefaultEndpointsProtocol=REDACTED;EndpointSuffix=core.windows.net",
        "slotSetting": false
    },
    {
        "name": "WEBSITE_CONTENTSHARE",
        "value": "rightrezmonitor7d0758",
        "slotSetting": false
    },
    {
        "name": "WEBSITE_NODE_DEFAULT_VERSION",
        "value": "~10",
        "slotSetting": false
    },
    {
        "name": "WEBSITE_RUN_FROM_PACKAGE",
        "value": "1",
        "slotSetting": false
    }
]
The data/ManagedDependencies/200103210646931.r directory in Kudu contains folders for Az and the Az.* modules.
Are you still seeing this issue? Does your function app have a dependency on .NET Core 2.2?
I have a PowerShell function app which uses the Get-AzKeyVaultSecret cmdlet to retrieve secrets from KeyVault. This function app was originally created to run on V2. However, I manually made the change to move it to V3, and everything continued to work as expected.
To answer your questions:
How are Managed Dependencies referenced by PowerShell?
A: The managed dependencies path, which points to the storage account, is appended to $env:PSModulePath on the first invocation.
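If you want to verify this yourself, here is a quick diagnostic you can drop into any function's run.ps1 (standard PowerShell, nothing Functions-specific):
# Show every module search path, one per line
$env:PSModulePath -split [IO.Path]::PathSeparator | ForEach-Object { Write-Host $_ }
# Show which Az versions are resolvable and from where
Get-Module -ListAvailable Az | ForEach-Object { Write-Host "$($_.Name) $($_.Version) $($_.ModuleBase)" }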
What are the next avenues to pursue to resolve the reference issues?
A: You could try force reinstalling the function app dependencies. To do so, go to the Portal and select your function app. Go to Overview and stop the function app. After that, select Platform features and go to Kudu.
Once in Kudu, go to the Debug console and select PowerShell.
From there, navigate to D:\home\data\ManagedDependencies and run Remove-Item * -Recurse -Force, e.g.,
cd D:\home\data\ManagedDependencies
Remove-Item * -Recurse -Force
Next, start the function app, and on the first function invocation, the dependencies will be downloaded and the path will be appended to $env:PSModulePath.
If you are still seeing issues after moving your app to V3, please open an issue at https://github.com/Azure/azure-functions-powershell-worker/issues, provide me with your function app name, and I will take a look.
Cheers,
Francisco
I'm very new to Azure IoT Edge and I'm trying to deploy to my Raspberry Pi: Image Recognition with Azure IoT Edge and Cognitive Services.
But after building & pushing the IoT Edge solution and deploying it to the single device ID, I see neither of the two modules listed in docker ps -a or iotedge list.
And when I check the edgeAgent logs, there's an error message; it seems edgeAgent hits an error while creating those modules (camera-capture and image-classifier-service).
I've tried :
1. Re-build it from fresh folder package
2. Pull the image manually from Azure Portal and run the image manually by script
I've been stuck on this for days.
In deployment.arm32v7.json, I define the image for those modules with the registered registry URL:
"modules": {
"camera-capture": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "zzzz.azurecr.io/camera-capture-opencv:1.1.12-arm32v7",
"createOptions": "{\"Env\":[\"Video=0\",\"azureSpeechServicesKey=2f57f2d9f1074faaa0e9484e1f1c08c1\",\"AiEndpoint=http://image-classifier-service:80/image\"],\"HostConfig\":{\"PortBindings\":{\"5678/tcp\":[{\"HostPort\":\"5678\"}]},\"Devices\":[{\"PathOnHost\":\"/dev/video0\",\"PathInContainer\":\"/dev/video0\",\"CgroupPermissions\":\"mrw\"},{\"PathOnHost\":\"/dev/snd\",\"PathInContainer\":\"/dev/snd\",\"CgroupPermissions\":\"mrw\"}]}}"
}
},
"image-classifier-service": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "zzzz.azurecr.io/image-classifier-service:1.1.5-arm32v7",
"createOptions": "{\"HostConfig\":{\"Binds\":[\"/home/pi/images:/images\"],\"PortBindings\":{\"8000/tcp\":[{\"HostPort\":\"80\"}],\"5679/tcp\":[{\"HostPort\":\"5679\"}]}}}"
}
Error message from the edgeAgent logs:
(Inner Exception #0) Microsoft.Azure.Devices.Edge.Agent.Edgelet.EdgeletCommunicationException- Message:Error calling Create module
image-classifier-service: Could not create module image-classifier-service
caused by: Could not pull image zzzzz.azurecr.io/image-classifier-service:1.1.5-arm32v7
caused by: Get https://zzzzz.azurecr.io/v2/image-classifier-service/manifests/1.1.5-arm32v7: unauthorized: authentication required
When trying to run the pulled image manually:
sudo docker run --rm --name testName -it zzzz.azurecr.io/camera-capture-opencv:1.1.12-arm32v7
I get this error:
Camera Capture Azure IoT Edge Module. Press Ctrl-C to exit.
Error: Time:Fri May 24 10:01:09 2019 File:/usr/sdk/src/c/iothub_client/src/iothub_client_core_ll.c Func:retrieve_edge_environment_variabes Line:191 Environment IOTEDGE_AUTHSCHEME not set
Error: Time:Fri May 24 10:01:09 2019 File:/usr/sdk/src/c/iothub_client/src/iothub_client_core_ll.c Func:IoTHubClientCore_LL_CreateFromEnvironment Line:1572 retrieve_edge_environment_variabes failed
Error: Time:Fri May 24 10:01:09 2019 File:/usr/sdk/src/c/iothub_client/src/iothub_client_core.c Func:create_iothub_instance Line:941 Failure creating iothub handle
Unexpected error IoTHubClient.create_from_environment, IoTHubClientResult.ERROR from IoTHub
When you pulled the image directly with docker run, it pulled but then failed to run outside of the edge runtime, which is expected. But when the edge agent tried to pull it, it failed because it was not authorized: no credentials were supplied to the runtime, so it attempted to access the registry anonymously.
Make sure that you add your container registry credentials to the deployment so that the edge runtime can pull images. The deployment should contain something like the following in the runtime settings:
"MyRegistry" :{
"username": "<username>",
"password": "<password>",
"address": "<registry-name>.azurecr.io"
}
As @silent pointed out in the comments, the documentation is here, including an example deployment that includes container registry credentials.
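For orientation, those credentials sit under the edgeAgent runtime settings in the deployment manifest; a trimmed sketch (all values are placeholders):
{
    "modulesContent": {
        "$edgeAgent": {
            "properties.desired": {
                "runtime": {
                    "type": "docker",
                    "settings": {
                        "registryCredentials": {
                            "MyRegistry": {
                                "username": "<username>",
                                "password": "<password>",
                                "address": "<registry-name>.azurecr.io"
                            }
                        }
                    }
                }
            }
        }
    }
}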
This example shows how to create a Web App that is linked to a GitHub repo via an Azure Resource Manager (ARM) template: https://github.com/Azure/azure-quickstart-templates/tree/master/201-web-app-github-deploy
Here's the snippet of the dependent resource:
{
    "apiVersion": "2015-04-01",
    "name": "web",
    "type": "sourcecontrols",
    "dependsOn": [
        "[resourceId('Microsoft.Web/Sites', parameters('siteName'))]"
    ],
    "properties": {
        "RepoUrl": "[parameters('repoURL')]",
        "branch": "[parameters('branch')]",
        "IsManualIntegration": true
    }
}
However, I want to create a website where I deploy from my own, local git repo. How can this be done with an ARM template?
UPDATE:
To clarify what I am looking to do: I want to create a Web App that has an attached Git repo. See this article: https://azure.microsoft.com/en-us/documentation/articles/web-sites-publish-source-control/ -- The step I am trying to automate is described in the section headed "Enable the web app repository" -- I don't want to have to do it through the Azure portal
I was able to find the right settings for the ARM template JSON by browsing: https://resources.azure.com
Here is the snippet...
"resources": [
{
"apiVersion": "2015-08-01",
"name": "web",
"type": "config",
"dependsOn": [
"[resourceId('Microsoft.Web/Sites/', variables('siteName'))]"
],
"properties": {
"phpVersion": " ",
"scmType": "LocalGit"
}
}
]
The solution was to add the "scmType" key with the value "LocalGit".
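Once the site is provisioned with scmType set to LocalGit, you can push to it; the Kudu Git endpoint follows this pattern (a sketch, assuming deployment credentials are already configured for the site):
git remote add azure https://<deployment-user>@<siteName>.scm.azurewebsites.net/<siteName>.git
git push azure master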
A Git repo does not, in itself, do anything other than save versions of your code and scripts.
However, if you hook it up to a build system such as Visual Studio Team Services (free = nice!), you can have the build system both compile your website and then execute your ARM template through release management, in order to provision a clean/fresh environment.