In this issue:
Azure Functions - Event Hub not triggering Functions
I described a problem with syncing event hub triggers, and I managed to find a solution by simply invoking the 'syncfunctiontriggers' action with the Azure CLI:
az resource invoke-action --resource-group <resourceGroupName> --action syncfunctiontriggers --name <functionAppName> --resource-type Microsoft.Web/sites
Unfortunately, this stopped working on the 5th of June: triggers are not fired when messages arrive in the event hub, even if I call the command above. I have to go to the portal and open the function apps to sync them again, but that is not a feasible solution for me.
I need an automated way of creating the whole resource group, containing the event hub, function apps and storage accounts - ideally with the Azure CLI.
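For reference, a minimal sketch of that kind of automation with the Azure CLI (the resource group name, location and template files are placeholders, and this assumes the event hub, function apps and storage are described in an ARM template):

az group create --name <resourceGroupName> --location <location>
az deployment group create --resource-group <resourceGroupName> --template-file template.json --parameters parameters.json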
Has anybody found a workaround for this, or is this problem already known to the Azure team?
In the meantime I have found a workaround that does not involve entering the Azure portal. Simply make a call to the trigger's function app page, e.g.:
curl -s https://<function-app-name>.azurewebsites.net > /dev/null
After that, when I run the E2E tests, the event hub triggers start to work. However, as with the previous workarounds I'd used, I don't know how long it will remain valid.
Related
I get the following error in a pipeline whose first activity is a lookup on a storage container to get the contents of a file. When I test the connections, linked service and datasets, or debug the pipeline, I do not receive any errors. However, when the pipeline is triggered by the storage event, it throws this error:
ErrorCode=AzureBlobCredentialMissing,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Please provide either connectionString or sasUri or serviceEndpoint to connect to Blob.,Source=Microsoft.DataTransfer.ClientLibrary,'
In your scenario the debug runs succeed but the triggered runs fail. This makes me assume that your dev changes have not been published, which is why the trigger run fails. In simple terms, the most recently published version of your linked service is different from your development version, which hasn't been published.
If you are using source control, then I would recommend following this tutorial for best practices - Automated publishing for continuous integration and delivery
If you are using CI/CD, then the issue might indeed be caused by the DevOps pipeline not overriding the linked service parameters. Try redeploying the resource by following the step below, and it should work as expected. (The linked service parameters have to be overridden in the Azure Resource Manager template deployment.)
For example, if you have a parameterized linked service, you will still have to add the corresponding parameter values to the overrideParameters section of the AzureResourceManagerTemplateDeployment task.
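For illustration only (the parameter name AzureBlobStorage_connectionString is hypothetical and depends on how your linked service is parameterized), the same kind of override looks like this when deploying the exported factory template with the Azure CLI; in the AzureResourceManagerTemplateDeployment task the identical values go into overrideParameters as '-factoryName ... -AzureBlobStorage_connectionString ...':

az deployment group create --resource-group <resourceGroupName> --template-file ARMTemplateForFactory.json --parameters factoryName=<dataFactoryName> AzureBlobStorage_connectionString="<connection-string>"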
We were experiencing timeouts the first time a function from a Function App was called, so we moved from a regular to a Premium service plan, since in theory you can always have a warmed instance ready to answer a call (based on this documentation).
The thing is, when trying to configure this functionality we cannot see the setting shown in the documentation's portal screenshots anywhere in our own portal.
The functions still time out the first time they are called, so we have not seen any difference in moving from the regular plan to the Premium one. Are we missing something?
I couldn't reproduce your problem - I can set the pre-warmed instances in my portal - but I have an idea for solving it.
Try using the Azure CLI in Azure Cloud Shell to configure the pre-warmed instances of your function app instead of the portal:
az resource update -g <resource_group> -n <function_app_name>/config/web --set properties.preWarmedInstanceCount=<desired_prewarmed_count> --resource-type Microsoft.Web/sites
Test whether it can be set. If it can't, have a look at the error; this may be a problem with the portal.
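To check whether the value was actually applied, you can read it back (a sketch using the same placeholders):

az resource show -g <resource_group> -n <function_app_name>/config/web --resource-type Microsoft.Web/sites --query properties.preWarmedInstanceCount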
If you have doubts, please let me know.
It turns out that you need to modify the Function App settings, whereas we were tweaking the Service Plan.
The pre-warmed instances setting is there, but it is not having the expected result; we are still facing the same timeouts we had with the regular Service Plan.
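One thing worth checking (a hedged sketch, names are placeholders): pre-warmed instances only help if the plan itself keeps workers around. The plan's baseline worker count is configured on the plan, separately from the app's preWarmedInstanceCount:

az functionapp plan update -g <resource_group> -n <premium_plan_name> --min-instances 1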
I've been trying to find a way to run a simple command against one of my existing Azure VMs using Azure Data Factory V2.
Options so far:
Custom Activity/Azure Batch won't let me add existing VMs to the pool
Azure Functions - I have not played with this, and I have not found any documentation on doing this with Azure Functions.
Azure Cloud Shell - I've tried this using the browser UI and it works; however, I cannot find a way of doing this via ADF V2.
The use case is the following:
There are a few tasks running locally (on an Azure VM) in Task Scheduler that I'd like to orchestrate using ADF, as everything else is in ADF. These tasks are usually Python applications that restore a SQL backup and/or purge some folders,
e.g. sqldb-restore -r myDatabase
where sqldb-restore is a command that is recognized locally after installing my local Python library. Unfortunately, the Python app needs to live locally on the VM.
Any suggestions? Thanks.
Thanks to #martin-esteban-zurita; his answer helped me get to what I needed, and this was a beautiful and fun experiment.
It is important to understand that Azure Automation is used for many things regarding resource orchestration in Azure (VMs, services, DevOps), and this automation can be done with PowerShell and/or Python.
In this particular case I did not need to modify/maintain/orchestrate any Azure resource; I needed to actually run a Bash/PowerShell command remotely on one of my existing VMs, where I have multiple PowerShell/Bash commands running recurrently in "Task Scheduler".
"Task Scheduler" was adding unnecessary overhead to my data pipelines because it was unable to talk to ADF.
In addition, Azure Automation natively runs PowerShell/Python commands only in the Azure cloud, which is very useful for orchestrating resources - turning Azure VMs on/off, adding/removing permissions from other Azure services, running maintenance or purge processes, etc. - but I was still unable to run commands locally on an existing VM. This is where the Hybrid Runbook Worker came into the picture: a hybrid worker group lets a runbook execute on your own machine instead of in Azure.
These are the steps to accomplish this use case.
1. Create an Azure Automation Account
2. Install the Windows Hybrid Worker on my existing VM. In my case it was tricky because my proxy was giving me some errors; I ended up downloading the NuGet package and installing it manually.
.\New-OnPremiseHybridWorker.ps1 -AutomationAccountName <NameofAutomationAccount> `
    -AAResourceGroupName <NameofResourceGroup> -OMSResourceGroupName <NameofOResourceGroup> `
    -HybridGroupName <NameofHRWGroup> -SubscriptionId <AzureSubscriptionId> `
    -WorkspaceName <NameOfLogAnalyticsWorkspace>
Keep in mind that in the above code you will need to supply your own parameter values; the only parameter that is not looked up but created is HybridGroupName, which defines the name of the hybrid worker group.
3. Create a PowerShell Runbook
[CmdletBinding()]
Param
([object]$WebhookData) # this parameter must be named WebhookData, otherwise the webhook does not work as expected

$VerbosePreference = 'continue'

#region Verify that the runbook was started from a webhook.
# If the runbook was called from a webhook, $WebhookData will not be null.
if ($WebhookData) {
    # Collect the properties of WebhookData.
    $WebhookName = $WebhookData.WebhookName
    # $WebhookHeaders = $WebhookData.RequestHeader
    $WebhookBody = $WebhookData.RequestBody

    # The request body arrives as JSON; convert it to an object. Note: renamed
    # from $Input, which is a reserved automatic variable in PowerShell.
    $WebhookInput = (ConvertFrom-Json -InputObject $WebhookBody)
    Write-Verbose -Message ('Runbook started from webhook {0}.' -f $WebhookName)
}
else {
    Write-Error -Message 'Runbook was not started from a webhook' -ErrorAction Stop
}
#endregion

# This is where I run the commands that were in Task Scheduler.

# This is extremely important for ADF: calling back tells the pipeline the
# runbook has finished, so the Webhook activity succeeds/fails instead of timing out.
$callBackUri = $WebhookInput.callBackUri
Invoke-WebRequest -Uri $callBackUri -Method POST -UseBasicParsing
4. Create a webhook for the runbook, configured to run on the hybrid worker group (and therefore on the VM)
5. Create a webhook activity in ADF where the above PowerShell runbook will be called via a POST method
Important note: when I created the webhook activity it was timing out after 10 minutes (the default). I then noticed in the Azure Automation account that I was actually getting input data (WEBHOOKDATA) containing a JSON structure with the following elements:
WebhookName
RequestBody (This one contains whatever you add in the Body plus a default element called callBackUri)
All I had to do was invoke the callBackUri from Azure Automation; this is why I added Invoke-WebRequest -Uri $callBackUri -Method POST to the PowerShell runbook code. With this, ADF succeeds/fails instead of timing out.
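To sanity-check the webhook outside of ADF, you can POST to it directly; a hedged example in which the webhook URL is a placeholder and a dummy callBackUri stands in for the one ADF normally generates:

curl -X POST "<webhook-url>" -H "Content-Type: application/json" -d '{"callBackUri": "https://example.com/callback"}'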
There are many other details that I struggled with when installing the hybrid worker in my VM but those are more specific to your environment/company.
This looks like a use case that is supported with Azure Automation, using a hybrid worker. Try reading here: https://learn.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker
You can call runbooks with webhooks in ADFv2, using the web activity.
Hope this helped!
I've been trying to create an Azure Function that moves blobs from one container to another. For this purpose, I created a function following this tutorial.
When I add a blob to the container, I can debug the rest locally. The problem starts when I try to deploy the function; I'm using CI/CD for the function deployment. After that, when I try to configure the event subscription, it fails.
I cannot edit the ENDPOINT DETAILS when creating the event subscription.
If I try to "Add event grid subscription", it gives a strange error message - Deployment has failed with the following error:
{"code":"Url validation","message":"The attempt to validate the
provided endpoint
https://.azurewebsites.net/admin/extensions/EventGridExtensionConfig
failed. For more details, visit https://aka.ms/esvalidation."}
All the tutorials I found are either irrelevant or outdated. Does anyone have a suggestion about what I should do, or what I am doing wrong?
Thanks in advance.
I have an Azure infrastructure:
2 HTTP Functions -> Event Hub -> 2 Functions -> Table Storage
(so two HTTP functions send messages to the event hub, and two functions are triggered by messages in the event hub, one of them saving the messages in table storage)
The infrastructure is automatically created every day from Azure ARM templates, using the Azure CLI. I haven't changed the logic in the last two months, but since the beginning of April I have noticed new, weird behaviour.
At the end of setup, E2E tests are executed automatically. They send some messages and, after some time, check whether the messages are in table storage.
And here is the problem: since the beginning of April these tests almost always fail! And I did not change anything in the function logic or in the template.json files for the infrastructure.
It looks like the functions that should be triggered by the Event Hub are not executed at all! I have already found a workaround: if I go to the Azure portal and run these functions manually (the "Run" button above the code editor), then they finally start to work!
Does anybody else encounter this problem?
Is there some way to automatically and directly run a non-HTTP-triggered function, e.g. via the Azure CLI or a REST interface?
It seems that this problem is already quite well known:
https://github.com/Azure/Azure-Functions/issues/210
I'm currently using the workaround from this issue, i.e. calling the Azure CLI action to synchronize function triggers after creating the infrastructure and zip-deploying the functions:
az resource invoke-action --resource-group <resourceGroupName> --action syncfunctiontriggers --name <functionAppName> --resource-type Microsoft.Web/sites
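As for running a non-HTTP-triggered function directly (the question above): the Functions runtime also exposes an admin endpoint for manually running a function, which is what the portal's "Run" button uses. A hedged sketch, where the app name, function name and host master key are placeholders:

curl -X POST "https://<functionAppName>.azurewebsites.net/admin/functions/<functionName>" -H "x-functions-key: <masterKey>" -H "Content-Type: application/json" -d '{ "input": "test" }'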