I'm trying to use Azure Service Bus with .NET Core. Obviously at the moment, this kind of sucks. I have tried the following routes:
The official SDK: doesn't work with .NET Core
AMQP.Net Lite: no (decent) documentation, no management APIs for creating/listing topics, etc. The only Service Bus examples cover a small subset of functionality and assume you already have a topic, etc.
The community wrapper around AMQP.Net Lite which mirrors the Azure SDK (https://github.com/ppatierno/azuresblite): doesn't work with .NET Core
Then, I moved on to REST.
https://azure.microsoft.com/en-gb/documentation/articles/service-bus-brokered-tutorial-rest/ is a good start (although there's no RestSharp support for .NET Core either, and for some reason the official SDK doesn't seem to cover a REST client - no Swagger definition, no AutoRest client, etc.). That said, this crappy example concatenates strings into XML without encoding, and covers only a small subset of functionality.
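For reference, here is a minimal sketch of what posting to a topic over REST with plain HttpClient looks like - the namespace, topic and key values are placeholders, and the token follows the documented SharedAccessSignature scheme:

using System;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;

class SendViaRest
{
    static async Task Main()
    {
        // Placeholders: substitute your own namespace, topic and key.
        var uri = "https://mynamespace.servicebus.windows.net/mytopic/messages";
        var token = CreateSasToken(uri, "RootManageSharedAccessKey", "[privateKey]");

        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.TryAddWithoutValidation("Authorization", token);
            var response = await http.PostAsync(uri,
                new StringContent("{\"Hello\":\"World\"}", Encoding.UTF8, "application/json"));
            Console.WriteLine(response.StatusCode); // 201 Created on success
        }
    }

    // Standard Service Bus SAS: HMAC-SHA256 over "<url-encoded-uri>\n<expiry>".
    static string CreateSasToken(string resourceUri, string keyName, string key)
    {
        var expiry = DateTimeOffset.UtcNow.AddMinutes(20).ToUnixTimeSeconds();
        var stringToSign = Uri.EscapeDataString(resourceUri) + "\n" + expiry;
        string signature;
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
        {
            signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
        }
        return $"SharedAccessSignature sr={Uri.EscapeDataString(resourceUri)}" +
               $"&sig={Uri.EscapeDataString(signature)}&se={expiry}&skn={keyName}";
    }
}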
So I decided to look for REST documentation. There are two sections, "classic" REST and plain REST. The plain new REST API doesn't seem to support actually sending and receiving messages (...huh?). I'm loath to use an older technology labelled "classic" unless I can understand what it is - of course, the docs are no help here. It also uses XML and ATOM rather than JSON. I have no idea why.
Bonus: the sample linked to in the documentation for the REST API, e.g. from https://msdn.microsoft.com/en-US/library/azure/hh780786.aspx, no longer exists.
Are there any viable approaches anyone has managed to use to read/write messages to topics/from subscriptions with Azure Service Bus and .NET Core?
The .NET Core version of the Service Bus client was rolled out several days ago, although there is still no adequate support for an OnMessage implementation, which I think is the most important thing in Service Bus.
Receive message example for .NET Core: https://github.com/Azure/azure-service-bus-dotnet/tree/b6f0474429efdff5960cab7cf18031ba2cbbbf52/samples/ReceiveSample
GitHub project: https://github.com/Azure/azure-service-bus-dotnet
NuGet package: https://www.nuget.org/packages/Microsoft.Azure.Management.ServiceBus/
The support for Azure Service Bus in .NET Core is getting better and better. There is a dedicated NuGet package for it: Microsoft.Azure.ServiceBus. As of now (March 2018) it supports most of the scenarios that you might need, although there are some gaps, like:
receiving messages in batches
checking if topic / queue / subscription exists
creating new topic / queue / subscription from code
As for OnMessage support for receiving messages, there is a new method, RegisterMessageHandler, that does the same thing.
Here is a code sample showing how it can be used:
using System;
using System.Text;
using Microsoft.Azure.ServiceBus;
using Newtonsoft.Json;

public class MessageReceiver
{
    private const string ServiceBusConnectionString = "Endpoint=sb://bialecki.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=[privateKey]";

    public void Receive()
    {
        var subscriptionClient = new SubscriptionClient(ServiceBusConnectionString, "productRatingUpdates", "sampleSubscription");

        try
        {
            subscriptionClient.RegisterMessageHandler(
                async (message, token) =>
                {
                    // Deserialize the message body and handle it.
                    var messageJson = Encoding.UTF8.GetString(message.Body);
                    var updateMessage = JsonConvert.DeserializeObject<ProductRatingUpdateMessage>(messageJson);

                    Console.WriteLine($"Received message with productId: {updateMessage.ProductId}");

                    // Mark the message as completed so it is removed from the subscription.
                    await subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
                },
                new MessageHandlerOptions(async args => Console.WriteLine(args.Exception))
                {
                    MaxConcurrentCalls = 1,
                    AutoComplete = false
                });
        }
        catch (Exception e)
        {
            Console.WriteLine("Exception: " + e.Message);
        }
    }
}
For full information, have a look at my blog posts:
Sending messages in .NET Core: http://www.michalbialecki.com/2017/12/21/sending-a-azure-service-bus-message-in-asp-net-core/
Receiving messages in .NET Core: http://www.michalbialecki.com/2018/02/28/receiving-messages-azure-service-bus-net-core/
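For completeness, the sending side with the same Microsoft.Azure.ServiceBus package is short. Here is a minimal sketch, reusing the ProductRatingUpdateMessage type from the receiver above; the topic name and connection string are placeholders:

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Newtonsoft.Json;

public class MessageSender
{
    private const string ServiceBusConnectionString = "Endpoint=sb://[namespace].servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=[privateKey]";

    public async Task Send(ProductRatingUpdateMessage updateMessage)
    {
        var topicClient = new TopicClient(ServiceBusConnectionString, "productRatingUpdates");

        // Serialize the payload and wrap it in a Service Bus message.
        var json = JsonConvert.SerializeObject(updateMessage);
        var message = new Message(Encoding.UTF8.GetBytes(json));

        await topicClient.SendAsync(message);
        await topicClient.CloseAsync();
    }
}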
Unfortunately, as of the time of this writing, your only options are either to roll your own on top of Azure Storage, or to use an alternative third-party library such as Hangfire, which has a sort-of-queue in the form of SQL Server storage.
I'm looking to use the Change Feed Processor SDK to monitor for changes to an Azure Cosmos DB collection, however, I have not seen clear documentation about whether the host can be run as an Azure Web Job. Can it? And if yes, are there any known issues or limitations versus running it as a Console App?
There are a good number of blog posts about using the CFP SDK; however, most of them vaguely mention running the host on a VM, and none of them, nor any examples, cover running the host as an Azure Web Job.
Even if it's possible, a side question is: if such a host is deployed as a continuous Web Job and I set the Web Job's "Scale" setting to Multi Instance, what are the approaches or recommendations for making the extra instances run with a different instance name, which the CFP SDK requires?
According to my research, the Cosmos DB trigger can be implemented with the WebJobs SDK:
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;
using Microsoft.Extensions.Hosting;

static async Task Main()
{
    var builder = new HostBuilder();
    builder.ConfigureWebJobs(b =>
    {
        // Core storage services plus the Cosmos DB trigger extension.
        b.AddAzureStorageCoreServices();
        b.AddCosmosDB(a =>
        {
            a.ConnectionMode = ConnectionMode.Gateway;
            a.Protocol = Protocol.Https;
            a.LeaseOptions.LeasePrefix = "prefix1";
        });
    });

    var host = builder.Build();
    using (host)
    {
        await host.RunAsync();
    }
}
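The host above only starts the listener; the actual change handling lives in a triggered function. A minimal sketch of what that can look like - the database, collection, lease and connection-setting names here are assumptions, not values from the question:

using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class Functions
{
    // Called with batches of changed documents from the change feed.
    public static void ProcessChanges(
        [CosmosDBTrigger(
            databaseName: "mydb",
            collectionName: "mycollection",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> documents,
        ILogger logger)
    {
        foreach (var document in documents)
        {
            logger.LogInformation($"Changed document: {document.Id}");
        }
    }
}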
But it seems that only the NuGet package for the C# SDK can be used; there are no clues for other languages. So you could refer to Compare Functions and WebJobs to balance your needs and cost.
The Cosmos DB Trigger for Azure Functions is actually a WebJobs extension: https://github.com/Azure/azure-webjobs-sdk-extensions/tree/dev/src/WebJobs.Extensions.CosmosDB
And it uses the Change Feed Processor.
Functions run on top of the WebJobs technology. So to answer the question: yes, you can run the Change Feed Processor on WebJobs. Just make sure that:
Your App Service is set to Always On
If you plan to use multiple instances, make sure to set the InstanceName accordingly and not to a static/fixed value - probably something that identifies the WebJob instance, as in the sketch below.
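One way to derive a per-instance name is to read the identifier App Service injects into the environment. This is a sketch; the fallback to the machine name is an assumption for local runs:

using System;

// WEBSITE_INSTANCE_ID is set by App Service and differs for each scaled-out instance.
var instanceName = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID")
                   ?? Environment.MachineName;

// Pass instanceName as the host/InstanceName when building the Change Feed Processor.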
Our application consists of Angular 6 for the UI and .NET Core 2.0 for the back end. We are looking to implement tracing, and so far OpenTracing seems the most prominent option, but I can't seem to find any good help documentation for .NET Core 2.0 apps.
There are several components which work together and can fully satisfy your requirement.
The common OpenTracing library, consisting of an abstraction layer for spans, tracers, injectors and extractors, etc.
The official jaeger-client-csharp. The full list of clients, which implement the OpenTracing abstraction layer mentioned earlier, can be found here.
The final piece is the OpenTracing API for .NET, which is the glue between the OpenTracing library and the DiagnosticSource concept in .NET.
Actually, that final library has a sample which uses the Jaeger C# implementation of ITracer and configures it as the default GlobalTracer.
In your Startup.cs, you will end up with something like the following, from that sample (services is an IServiceCollection):
services.AddSingleton<ITracer>(serviceProvider =>
{
    string serviceName = Assembly.GetEntryAssembly().GetName().Name;
    ILoggerFactory loggerFactory = serviceProvider.GetRequiredService<ILoggerFactory>();
    ISampler sampler = new ConstSampler(sample: true);

    ITracer tracer = new Tracer.Builder(serviceName)
        .WithLoggerFactory(loggerFactory)
        .WithSampler(sampler)
        .Build();

    GlobalTracer.Register(tracer);
    return tracer;
});

// Prevent endless loops when OpenTracing is tracking HTTP requests to Jaeger.
// (_jaegerUri is the Uri of the Jaeger collector endpoint.)
services.Configure<HttpHandlerDiagnosticOptions>(options =>
{
    options.IgnorePatterns.Add(request => _jaegerUri.IsBaseOf(request.RequestUri));
});
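Once the ITracer is registered, you can constructor-inject it anywhere you want to create spans by hand. A small usage sketch; the controller, route and span names are illustrative, not part of the sample above:

using Microsoft.AspNetCore.Mvc;
using OpenTracing;

[Route("api/values")]
public class ValuesController : ControllerBase
{
    private readonly ITracer _tracer;

    public ValuesController(ITracer tracer) => _tracer = tracer;

    [HttpGet]
    public IActionResult Get()
    {
        // finishSpanOnDispose: true closes the span when the using scope ends.
        using (IScope scope = _tracer.BuildSpan("compute-values").StartActive(finishSpanOnDispose: true))
        {
            scope.Span.SetTag("example.tag", "value");
            return Ok(new[] { "value1", "value2" });
        }
    }
}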
I have an application built from a series of web servers and microservices, perhaps 12 in all. I would like to monitor and, importantly, map this suite of services in Applications Insights. Some of the services are built with Dot Net framework 4.6 and deployed as Windows services using OWIN to receive and respond to requests.
In order to get the instrumentation working with OWIN I'm using the ApplicationInsights.OwinExtensions package. I'm using a single instrumentation key across all my services.
When I look at my Applications Insights Application Map, it appears that all the services that I've instrumented are grouped into a single "application", with a few "links" to outside dependencies. I do not seem to be able to produce the "Composite Application Map" the existence of which is suggested here: https://learn.microsoft.com/en-us/azure/application-insights/app-insights-app-map.
I'm assuming that this is because I have not set a different "RoleName" for each of my services. Unfortunately, I cannot find any documentation that describes how to do so. My map looks as follows, but the big circle in the middle is actually several different microservices.
I do see that the OwinExtensions package offers the ability to customize some aspects of the telemetry reported but, without a deep knowledge of the internal structure of App Insights telemetry, I can't figure out whether it allows the RoleName to be set and, if so, how to accomplish this. Here's what I've tried so far:
appBuilder.UseApplicationInsights(
    new RequestTrackingConfiguration
    {
        GetAdditionalContextProperties =
            ctx => Task.FromResult(
                new[] { new KeyValuePair<string, string>("cloud_RoleName", ServiceConfiguration.SERVICE_NAME) }
                    .AsEnumerable())
    });
Can anyone tell me how, in this context, I can instruct App Insights to collect telemetry which will cause a Composite Application Map to be built?
The following is the overall doc about TelemetryInitializer, which is exactly what you need in order to set additional properties on the collected telemetry - in this case, setting the Cloud RoleName to enable the application map:
https://learn.microsoft.com/en-us/azure/application-insights/app-insights-api-filtering-sampling#add-properties-itelemetryinitializer
Your telemetry initializer code would be something along the following lines...
// The class name is arbitrary; it just needs to implement ITelemetryInitializer.
public class CloudRoleNameInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (string.IsNullOrEmpty(telemetry.Context.Cloud.RoleName))
        {
            // Set the role name to something that identifies this service.
            telemetry.Context.Cloud.RoleName = "RoleName";
        }
    }
}
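To make Application Insights pick the initializer up, register it at start-up. A sketch, assuming the classic (non-Core) TelemetryConfiguration.Active pattern that matches the OWIN hosting in the question; CloudRoleNameInitializer is the example class above:

using Microsoft.ApplicationInsights.Extensibility;

// Run once during service start-up, before the first telemetry item is sent.
TelemetryConfiguration.Active.TelemetryInitializers.Add(new CloudRoleNameInitializer());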
Please try this and see if this helps.
In my .NET Framework 4.6.1 Web API applications, I'm using the System.Diagnostics.Trace class's CorrelationManager property, along with NLog, to group log messages per request. Unfortunately, it seems that the CorrelationManager property no longer exists on System.Diagnostics.Trace in .NET Standard. I have two questions:
Is there a replacement concept somewhere in .NET Standard?
Does NLog natively support that replacement?
It's already supported since version 4.3.1 of NLog.Web. Use the ${aspnet-TraceIdentifier} layout renderer.
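For example, to stamp every log line with the request's TraceIdentifier, reference the renderer in a layout. A minimal sketch, configured programmatically here rather than via nlog.config; it assumes the NLog.Web package for ASP.NET Core is installed so the renderer is registered, and that your NLog version has LoggingConfiguration.AddRule:

using NLog;
using NLog.Config;
using NLog.Targets;

var config = new LoggingConfiguration();
var fileTarget = new FileTarget("file")
{
    FileName = "app.log",
    // ${aspnet-TraceIdentifier} resolves to HttpContext.TraceIdentifier for the current request.
    Layout = "${longdate}|${aspnet-TraceIdentifier}|${level}|${message}"
};
config.AddRule(LogLevel.Info, LogLevel.Fatal, fileTarget);
LogManager.Configuration = config;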
You can also use a custom one with custom logic, for example:
app.Use(next =>
{
    return async context =>
    {
        context.TraceIdentifier = Guid.NewGuid().ToString();
        await next(context);
    };
});
It appears that Microsoft.AspNetCore.Http.HttpContext.TraceIdentifier is what I am looking for. NLog does not currently support this.
We are designing an Azure website which will allow users to upload content (MP4, DOCX and other MS Office files) which can then be accessed.
We will encode some video content to provide several different quality formats before it is streamed (using Azure Media Services).
We need to add an intermediate step so we can scan uploaded files for potential virus risk. Is there functionality built into Azure (or a third party) which will allow us to call an API to scan content before processing it? We are ideally looking for an API rather than just a background service on a VM, so we can get feedback, potentially for use in a web or worker role.
I had a quick look at Symantec Endpoint and Windows Defender, but I'm not sure these offer an API.
I have successfully done this using the open-source ClamAV. You don't specify what languages you are using, but as it's Azure I'll assume .NET.
There is a .Net wrapper that should provide the API that you are looking for:
https://github.com/tekmaven/nClam
Here is some sample code (note: this is copied directly from the nClam GitHub repo page and reproduced here just to protect against link rot):
using System;
using System.Linq;
using nClam;

class Program
{
    static void Main(string[] args)
    {
        var clam = new ClamClient("localhost", 3310);
        var scanResult = clam.ScanFileOnServer("C:\\test.txt"); // any file you would like!

        switch (scanResult.Result)
        {
            case ClamScanResults.Clean:
                Console.WriteLine("The file is clean!");
                break;
            case ClamScanResults.VirusDetected:
                Console.WriteLine("Virus Found!");
                Console.WriteLine("Virus name: {0}", scanResult.InfectedFiles.First().VirusName);
                break;
            case ClamScanResults.Error:
                Console.WriteLine("Woah an error occurred! Error: {0}", scanResult.RawResult);
                break;
        }
    }
}
There are also APIs available for refreshing the virus definition database. All the necessary ClamAV files can be included in the deployment package and any configuration can be put into the service start-up code.
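In a web or worker role, the upload will usually be a stream rather than a file path on the ClamAV host, and nClam can also push the bytes to the daemon directly. A sketch; verify the method name against the nClam version you install:

using System.IO;
using System.Threading.Tasks;
using nClam;

public static class UploadScanner
{
    public static async Task<bool> IsCleanAsync(Stream uploadedFile)
    {
        var clam = new ClamClient("localhost", 3310);

        // Streams the content to clamd over TCP instead of scanning a server-side path.
        var result = await clam.SendAndScanFileAsync(uploadedFile);
        return result.Result == ClamScanResults.Clean;
    }
}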
ClamAV is a good idea, especially now that 0.99 is about to be released with YARA rule support - it will make it really easy for you to write custom rules and will let ClamAV use the tons of good YARA rules in the open today.
Another route, and a bit of shameless plugging, is to check out scanii.com. It's a SaaS for malware/virus detection and it integrates quite nicely with AWS and Azure.
There are a number of options to achieve this:
Firstly, you can use ClamAV as already mentioned. ClamAV doesn't always receive the best press for its virus databases, but as others have pointed out, it's easy to use and expandable.
You can also install a commercial scanner, such as AVG, Kaspersky, etc. Many of these come with a C API that you can talk to directly, although getting access to it can often be expensive from a licensing point of view.
Alternatively, you can call the scanner executable directly, using something like the following to capture the output:
var proc = new Process
{
    StartInfo = new ProcessStartInfo
    {
        FileName = "scanner.exe",
        Arguments = "arguments needed",
        UseShellExecute = false,
        RedirectStandardOutput = true,
        CreateNoWindow = true
    }
};

proc.Start();

// Read the scanner's output line by line so it can be parsed for the verdict.
while (!proc.StandardOutput.EndOfStream)
{
    string line = proc.StandardOutput.ReadLine();
}
You would then need to parse the output to get the result and use it within your application.
Finally, there are now some commercial APIs available to do this kind of thing, such as attachmentscanner (disclaimer: I'm related to this product) or scanii. These will provide you with an API and a more scalable option to scan specific files and receive the response from at least one virus-checking engine.
A new thing coming in Spring/Summer 2020: Advanced Threat Protection for Azure Storage will include malware reputation screening, which detects malware uploads using hash reputation analysis leveraging the power of Microsoft threat intelligence, including hashes for viruses, trojans, spyware and ransomware. Note: this cannot guarantee that every piece of malware will be detected using the hash reputation analysis technique.
https://techcommunity.microsoft.com/t5/Azure-Security-Center/Validating-ATP-for-Azure-Storage-Detections-in-Azure-Security/ba-p/1068131