So I'm fairly new to working with Azure and there are some things I can't quite wrap my head around. One of them being the Azure Storage Account.
My WebJob keeps stopping with the following error: "Unhandled Exception: System.InvalidOperationException: The account credentials for '[account_name]' are incorrect." Understanding the error, however, is not the problem, at least that's what I think. The problem lies in understanding why I need an Azure Storage Account to overcome it.
Please read on as I try to take you through the steps taken thus far. Hopefully the real question will become clearer to you.
In my efforts to deploy a WebJob on Azure we have created the following resources so far:
App Service Plan
App Service
SQL server
SQL database
I'm using the following code snippet to prevent my web job from exiting:
JobHostConfiguration config = new JobHostConfiguration();
config.DashboardConnectionString = null;
new JobHost(config).RunAndBlock();
To my understanding from other sources, the dashboard connection string is optional but the AzureWebJobsStorage connection string is required.
I tried setting the required connection string in the portal using the format found here.
DefaultEndpointsProtocol=[http|https];AccountName=myAccountName;AccountKey=myAccountKey
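For completeness, the same value can also be supplied in code rather than through a portal app setting. This is only a minimal sketch assuming the WebJobs SDK 2.x JobHostConfiguration API; the connection string shown is a placeholder:

// Supply the storage connection string explicitly instead of relying on the
// AzureWebJobsStorage app setting (placeholder value shown).
JobHostConfiguration config = new JobHostConfiguration();
config.StorageConnectionString = "DefaultEndpointsProtocol=https;AccountName=myAccountName;AccountKey=myAccountKey";
config.DashboardConnectionString = null; // the dashboard stays optional
new JobHost(config).RunAndBlock();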
Looking further, I found this answer that clearly states where I would get the values needed, namely from an Azure Storage Account, which I am missing.
So now for the actual question: why do I need an Azure Storage Account when I seemingly have all the resources I need in place for the WebJob to run? What does it do? Is it a billing thing? I thought we had that covered by the App Service Plan. I've tried reading up on Azure Storage Accounts over here, but I need a bit more help understanding how it relates to everything.
From the docs:
An Azure storage account provides resources for storing queue and blob data in the cloud.
It's also used by the WebJobs SDK to store logging data for the dashboard.
Refer to the getting started guide and documentation for further information
The answer to your question is "no": it is not mandatory to use Azure Storage when you are trying to set up and run an Azure WebJob.
If you are using JobHost or JobHostConfiguration, however, then there is indeed a dependency on a storage account.
A sample code snippet that runs without either is given below.
class Program
{
    static void Main()
    {
        Functions.ExecuteTask();
    }
}

public class Functions
{
    [NoAutomaticTrigger]
    public static void ExecuteTask()
    {
        // Execute your task here
    }
}
The answer is no, you don't. You can have a WebJob run without being tied to an Azure Storage Account. Like Murray mentioned, your WebJob dashboard does use a storage account to log data but that's completely independent.
Related
Lately I've had trouble deploying a Function App via the Azure CLI. Last Tuesday I was still able to deploy a Function App that way.
This week, like any other day before that, I used the fairly common Azure Functions Core Tools command func azure functionapp publish. The version of Azure Functions Core Tools I am using is 3.0.3233.
Now I am getting this error every time:
Retry: 1 of 3
Error creating a Blob container reference. Please make sure your connection string in "AzureWebJobsStorage" is valid
Retry: 2 of 3
Error creating a Blob container reference. Please make sure your connection string in "AzureWebJobsStorage" is valid
Retry: 3 of 3
Error creating a Blob container reference. Please make sure your connection string in "AzureWebJobsStorage" is valid
I checked that the AzureWebJobsStorage setting has a correct value; I even connected to the storage account using that connection string in the Azure Storage Explorer app.
Just in case, I created a new Function App in another region and I still get the same error.
Has anyone else encountered this error? I suspect this is an error in the tool itself, maybe a faulty build?
I suspect that AzureWebJobsStorage is not present, or is invalid, in the App Settings section of the function app in the Azure portal.
Make sure that it is added there and that you are not deleting those settings through CLI/templates and recreating them without AzureWebJobsStorage.
To answer my own question: it seems that this was a transient error. Without changing any code, today I was able to redeploy my function app. Cheers.
If you don't have "Allow storage account key access" enabled on the storage account, you also get this error.
There could be other scenarios as well, but the error message does not say anything specific.
I am using a console application that runs on on-premises servers, triggered by a task scheduler. This console application performs various actions that need to be logged. It generates around 200 KB of logs per run, and it runs every hour.
Since the server is not accessible to us, I am planning to store the logs in Azure. I read about blob/table storage.
I would like to know what is the best strategy to store the logs in Azure.
Thank you.
Though you can write logging data to Azure Storage (both Blobs and Tables), it would actually make more sense to use Azure Application Insights for logging this data.
I recently did the same for a console application I built. I found it incredibly simple.
I created an App Insights resource in my Azure subscription and got the instrumentation key. I then installed the App Insights SDK and referenced the appropriate namespaces in my project.
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
This is how I initialized the telemetry client:
// Read the instrumentation key from app settings (AppSettingsReader comes from System.Configuration).
var appSettingsReader = new AppSettingsReader();
var appInsightsInstrumentationKey = (string)appSettingsReader.GetValue("AppInsights.InstrumentationKey", typeof(string));

// telemetryClient is (presumably) a TelemetryClient field reused for the lifetime of the application.
TelemetryConfiguration configuration = TelemetryConfiguration.CreateDefault();
configuration.InstrumentationKey = appInsightsInstrumentationKey;
telemetryClient = new TelemetryClient(configuration);
For logging trace data, I simply did the following:
TraceTelemetry telemetry = new TraceTelemetry(message, SeverityLevel.Verbose);
telemetryClient.TrackTrace(telemetry);
For logging error data, I simply did the following:
catch (Exception excep)
{
    // Track the exception; Application Insights captures the message and stack trace.
    ExceptionTelemetry exceptionTelemetry = new ExceptionTelemetry(excep);
    telemetryClient.TrackException(exceptionTelemetry);

    // Flush and give the channel time to send before the process exits.
    telemetryClient.Flush();
    Task.Delay(5000).Wait(); // Wait for 5 seconds before terminating the application
}
Just keep one thing in mind though: Make sure you wait for some time (5 seconds is good enough) to flush the data before terminating the application.
If you're still keen on writing logs to Azure Storage, depending on the logging library you're using you will find suitable adapters that will write directly into Azure Storage.
For example, there's an NLog target for Azure Tables: https://github.com/harouny/NLog.Extensions.AzureTableStorage (though this project is not actively maintained).
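If you do go the Table Storage route instead, writing one entity per log line is straightforward. The following is only a rough sketch using the classic Microsoft.WindowsAzure.Storage client library; the table name, partitioning scheme, and connection string handling are illustrative assumptions rather than a recommendation:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class LogEntry : TableEntity
{
    public LogEntry() { }

    public LogEntry(string message)
    {
        // Partition by day (illustrative) so queries for a given date stay in one partition.
        PartitionKey = DateTime.UtcNow.ToString("yyyy-MM-dd");
        RowKey = Guid.NewGuid().ToString();
        Message = message;
    }

    public string Message { get; set; }
}

public static class TableLogger
{
    public static void Write(string connectionString, string message)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudTable table = account.CreateCloudTableClient().GetTableReference("consoleapplogs");
        table.CreateIfNotExists();

        // One row per log message.
        table.Execute(TableOperation.Insert(new LogEntry(message)));
    }
}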
A seemingly simple ask, but I've had no luck finding the best way.
I need to log application events from a console app that will spin up inside a container, do some work, then die.
How can I log custom data from inside it?
I've tried Azure Monitor: I created a workspace and used the HTTP Data Collector API inside the app, but had no joy working out where the logs are being stored.
Is there a simple way to log to an Azure Storage account and then use Azure Monitor to manage the events?
I've been googling for hours, but a lot of posts are 8 years old and no longer relevant, and I cannot really find a simple use case in modern Azure.
Perhaps it's so simple I just cannot see it.
Any pointers or links greatly received!
thanks
Paul
Why not trace events using Application Insights custom events?
https://learn.microsoft.com/en-us/azure/azure-monitor/app/api-custom-events-metrics
With that you can trace events with any metadata and check them in the Azure Application Insights blade, or reach them via the Application Insights SDK or the API.
You just need to create an Application Insights instance and use its instrumentation key to do that.
SDK: https://github.com/Microsoft/ApplicationInsights-dotnet
API : https://dev.applicationinsights.io/reference
Code sample to write events:
TelemetryClient client = new TelemetryClient();
client.InstrumentationKey = "INSERT YOUR KEY";
client.TrackEvent("SomethingInterestingHappened");

Also you can send more than just a string value:

client.TrackEvent("PurchaseOrderSubmitted",
    new Dictionary<string, string>()
    {
        { "CouponCode", "JULY2015" }
    },
    new Dictionary<string, double>()
    {
        { "OrderTotal", 68.99 },
        { "ItemsOrdered", 5 }
    });
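Since the container in this scenario does some work and then dies, it is also worth flushing buffered telemetry before the process exits, as the earlier answer points out; a minimal sketch:

// Flush buffered telemetry and give the channel a moment to send before the container exits.
client.Flush();
Task.Delay(5000).Wait();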
I created a function like this:

public static Task HandleStorageQueueMessageAsync(
    [QueueTrigger("%QueueName%", Connection = "%ConnectionStringName%")] string body,
    TextWriter logger)
{
    if (logger == null)
    {
        throw new ArgumentNullException(nameof(logger));
    }

    logger.WriteLine(body);
    return Task.CompletedTask;
}
The queue name and the connection string name come from my configuration, which has an INameResolver to get the values (a sketch of such a resolver is shown below). The connection string itself I put from my secret store into the app config at app start. If the connection string is a normal storage connection string granting all permissions for the whole account, the method works as expected.
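This is only a minimal sketch of what that resolver might look like, assuming WebJobs SDK 2.x; the class name and the use of app settings are illustrative:

// Resolves %QueueName% and %ConnectionStringName% placeholders from app settings
// (ConfigurationManager comes from System.Configuration).
public class ConfigNameResolver : INameResolver
{
    public string Resolve(string name)
    {
        return ConfigurationManager.AppSettings[name];
    }
}

// Registered on the host configuration:
// config.NameResolver = new ConfigNameResolver();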
However, in my scenario I am getting an SAS from a partner team that only offers read access to a single queue. I created a storage connection string from that, which looks similar to:
QueueEndpoint=https://accountname.queue.core.windows.net;SharedAccessSignature=st=2017-09-24T07%3A29%3A00Z&se=2019-09-25T07%3A29%3A00Z&sp=r&sv=2018-03-28&sig=token
(I successfully connected using this connection string in Microsoft Azure Storage Explorer.)
The queue name used in the QueueTrigger attribute is also taken from the SAS.
However, now I am getting the following exception:
$exception {"Error indexing method 'Functions.HandleStorageQueueMessageAsync'"} Microsoft.Azure.WebJobs.Host.Indexers.FunctionIndexingException
InnerException {"No blob endpoint configured."} System.Exception {System.InvalidOperationException}
If you look at the connection string, you can see the exception is right. I did not configure the blob endpoint. However I also don't have access to it and neither do I want to use it. I'm using the storage account only for this QueueTrigger.
I am using Microsoft.Azure.WebJobs v2.2.0. Other dependencies prevent me from upgrading to a v3.x
What is the recommended way for consuming messages from a storage queue when only a SAS URI with read access to a single queue is available? If I am already on the right path, what do I need to do in order to get rid of the exception?
As you have seen, the v2 WebJobs SDK requires access to the blob endpoint as well. I'm afraid it's by design; supporting connection strings without full access, such as SAS, is an improvement that has been tracked but not implemented yet.
Here are the permissions required by the v2 SDK. It needs to get the Blob service properties (Blob, Service, Read) and to get queue metadata and process messages (Queue, Container & Object, Read & Process).
A queue trigger gets messages and deletes them after processing, so the SAS requires the Process permission. That means the SAS string you got is not authorized correctly even if the SDK didn't require blob access.
You could ask the partner team to generate a SAS connection string in the Azure portal with the minimum permissions above (a code sketch of such an account SAS follows below). If they can't provide blob access, the v3 SDK seems like an option to try. But there are some problems:
1. Other dependencies prevent updating, as you mentioned.
2. The v3 SDK is based on .NET Core, which means code changes can't be avoided.
3. The v3 SDK documentation and samples are still under construction right now.
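For illustration only, this is roughly how an account SAS with those minimum permissions could be generated using the classic Microsoft.WindowsAzure.Storage client; the account credentials and expiry are placeholders, and this is an assumption about how the partner team might produce the token:

// Build an account-level SAS covering Blob + Queue with Read and Process permissions.
CloudStorageAccount account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=accountname;AccountKey=accountkey");

SharedAccessAccountPolicy policy = new SharedAccessAccountPolicy
{
    Services = SharedAccessAccountServices.Blob | SharedAccessAccountServices.Queue,
    ResourceTypes = SharedAccessAccountResourceTypes.Service
                  | SharedAccessAccountResourceTypes.Container
                  | SharedAccessAccountResourceTypes.Object,
    Permissions = SharedAccessAccountPermissions.Read | SharedAccessAccountPermissions.ProcessMessages,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddYears(1)
};

string sasToken = account.GetSharedAccessSignature(policy);
// The token is then combined with the service endpoints to form the SAS connection string.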
I was having a load of issues getting a SAS token to work for a QueueTrigger.
Not having blob included was my problem. Thanks Jerry!
I'm trying to find a solution that I can use to perform virus scanning on files that have been uploaded to Azure blob storage. I wanted to know if it is possible to copy the file to local storage on a Worker Role instance, call Antimalware for Azure Cloud Services to perform the scan on that specific file, and then depending on whether the file is clean, process the file accordingly.
If the Worker Role cannot call the scan programmatically, is there a definitive way to check if a file has been scanned and whether it is clean or not once it has been copied to local storage (I don't know if the service does a real-time scan when new files are added, or only runs on a schedule)?
There isn't a direct API that we've found, but the anti-malware services conform to the standards used by Windows desktop virus checkers in that they implement the IAttachmentExecute COM API.
So we ended up implementing a file upload service that writes the uploaded file to a Quarantine local resource, then calling the IAttachmentExecute API. If the file is infected then, depending on the anti-malware service in use, it will either throw an exception, silently delete the file or mark it as inaccessible. So by attempting to read the first byte of the file, we can test if the file remains accessible.
// 'clientGuid' identifies the calling client; 'path' is the location of the uploaded
// file in the quarantine local resource mentioned above.
var clientGuid = Guid.NewGuid();

var type = Type.GetTypeFromCLSID(new Guid("4125DD96-E03A-4103-8F70-E0597D803B9C"));
var svc = (IAttachmentExecute)Activator.CreateInstance(type);
try
{
    svc.SetClientGuid(ref clientGuid);
    svc.SetLocalPath(path);
    // Save() triggers the registered anti-malware engine to scan the file.
    svc.Save();
}
finally
{
    svc.ClearClientState();
}

// If the file was flagged, it has been deleted or made inaccessible, so this read throws.
using (var fileStream = File.OpenRead(path))
{
    fileStream.ReadByte();
}

[Guid("73DB1241-1E85-4581-8E4F-A81E1D0F8C57")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface IAttachmentExecute
{
    void SetClientGuid(ref Guid guid);
    void SetLocalPath(string pszLocalPath);
    void Save();
    void ClearClientState();
}
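As a side note, here is a rough sketch of how the uploaded blob might be copied into that quarantine local resource on the worker role before the scan is invoked. It assumes the classic Microsoft.WindowsAzure.Storage and Microsoft.WindowsAzure.ServiceRuntime APIs; the container name, local resource name, and the storageConnectionString/blobName inputs are illustrative:

// using System.IO;
// using Microsoft.WindowsAzure.ServiceRuntime;
// using Microsoft.WindowsAzure.Storage;
// using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
CloudBlockBlob blob = account.CreateCloudBlobClient()
    .GetContainerReference("uploads")
    .GetBlockBlobReference(blobName);

// "Quarantine" is a local storage resource declared in the service definition.
LocalResource quarantine = RoleEnvironment.GetLocalResource("Quarantine");
string path = Path.Combine(quarantine.RootPath, blobName);

// Download the blob to local disk so the anti-malware engine can scan it.
blob.DownloadToFile(path, FileMode.Create);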
I think the best way for you to find out is simply to take an Azure VM (IaaS) and activate the Microsoft Antimalware extension. Then you can log into it and run all the necessary checks and tests against the service.
Later, you can apply all this to the Worker Role (there is a similar PaaS extension available for that, called PaaSAntimalware).
See the next excerpt from https://msdn.microsoft.com/en-us/library/azure/dn832621.aspx:
"In PaaS, the VM agent is called GuestAgent, and is always available on Web and Worker Role VMs. (For more information, see Azure Role Architecture.) The VM agent for Role VMs can now add extensions to the cloud service VMs in the same way that it does for persistent Virtual Machines.
The biggest difference between VM Extensions on role VMs and persistent VMs is that with role VMs, extensions are added to the cloud service first and then to the deployments within that cloud service.
Use the Get-AzureServiceAvailableExtension cmdlet to list all available role VM extensions."