Katana+OWIN Multithreaded Performance?

Where can I find info on how many requests per second a Katana-on-OWIN implementation (Azure-hosted) can support?
There are performance benchmarks all over the place for IIS but I can't seem to find comparable data anywhere.
I am concerned that if I do something like this in a vacuum
public async Task Invoke(IDictionary<string, object> environment)
{
    var response = environment["owin.ResponseBody"] as Stream;

    using (var writer = new StreamWriter(response))
    {
        if (_options.IncludeTimestamp)
        {
            await writer.WriteAsync(DateTime.Now.ToLongTimeString());
        }
        await writer.WriteAsync("Hello, " + _options.Name + "!");
    }
}
(taken from http://odetocode.com/blogs/scott/archive/2013/11/11/writing-owin-middleware.aspx) and compare it to a simple .aspx.cs page that writes "Hello world", I will not get an apples-to-apples performance metric.
The way IIS handles threading and pooling is well-documented. But I am not sure about how Katana-on-OWIN (self-hosted or under Azure) handles simultaneous requests and works "under load".
Thanks.

OWIN is merely an abstraction for running web applications on different web servers. Katana is one implementation. The most important performance numbers for requests/second are those for web servers, not OWIN or Katana.
OWIN performance comparisons would only make sense if you wanted to know how much overhead a framework adds to your web app; that can be tested using the Microsoft.Owin.Testing TestServer, in isolation from network latency. Here, you could compare the differences between Katana, Dyfrig, NancyFx, Web API, and others.
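For example, an in-memory harness along these lines can give a rough overhead number without any network or server in the way (a sketch only: it assumes your app exposes an OWIN Startup class, and the request count is arbitrary):

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.Owin.Testing;

class OverheadBenchmark
{
    static void Main()
    {
        RunAsync().GetAwaiter().GetResult();
    }

    static async Task RunAsync()
    {
        // Host the OWIN pipeline entirely in memory: no sockets, no IIS.
        using (var server = TestServer.Create<Startup>())
        {
            const int requests = 10000;
            var stopwatch = Stopwatch.StartNew();
            for (var i = 0; i < requests; i++)
            {
                await server.HttpClient.GetAsync("/");
            }
            stopwatch.Stop();
            Console.WriteLine("{0:F0} requests/sec of pure framework overhead",
                requests / stopwatch.Elapsed.TotalSeconds);
        }
    }
}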

Related

Writing all your functions in one cloud function

What if I put multiple functions inside a single cloud function, so that its instance stays alive as long as possible and I only have to deal with a cold start once?
Why is this a bad idea?
export const shop = functions.https.onCall(async (data, context) => {
  switch (data.type) {
    case "get_fruits":
      return await getFruits();
    case "place_order":
      return await placeOrder();
    case "add_to_cart":
      return await addToCart();
    default:
      return;
  }
});
It will work but, IMO, it's not a good thing to do. There are many principles and patterns in use today that your solution does not follow.
Microservices
One of them is the split into microservices. There is nothing wrong with building a monolith, but when I look at your example (get_fruits, place_order, add_to_cart), I see different roles and responsibilities. I love the separation of concerns: 1 service does 1 thing.
Routing
But maybe your service only does the routing, and calls functions that are deployed independently (in which case you do follow the microservice principle). If so, your service can become a bottleneck when there are many entry points and many queries.
In addition, there are services dedicated to routing: load balancers. They use the URL path of a request to reach the correct microservice that serves it.
Developer usage
Yes, a URL, not a field in the body of your message, is what routes the traffic. Today, developers are familiar with REST APIs. To get the fruits, they perform a GET request on the /fruits URL and they know they will get the fruits. If they want to add to the cart, they perform a POST request on the /cart URL and it works!
You use URLs, standard REST semantics, load balancers, and microservices.
You can imagine other benefits:
Each microservice can scale independently (you can have more get_fruits requests than place_order requests, so the services scale differently)
Security is easier to control (no authentication to read the catalog of fruits, but you have to be authenticated to place an order)
Evolution velocity can be decoupled between the services
...

SSL based webserver on Windows IoT

I am working on a project which involves gathering some sensor data and building a GUI on top of it, with control of the sensors. It has the following two basic requirements.
Should be a web-based solution (although it will only be used on a LAN, or even on the same PC)
It should be executable on both Windows IoT Core and a standard Windows PC (Windows 7 and above)
I have decided to use Embedded webserver for Windows IoT, which seems to be a good embedded server based on a PCL targeting .NET 4.5 and UWP, so I can run it in both environments. That is great! But the problem is that this web server doesn't support SSL. I have searched for other servers and have come up with Restup for UWP, which is also a good REST-based web server, but it doesn't support SSL either.
I need an expert opinion on whether there is any possibility of using SSL with these web servers. Could it be implemented using a library like OpenSSL? (Although I suspect that would be too complex and time-consuming to implement correctly.)
Edit
I would also like to know about ASP.NET Core on Windows 10 IoT Core, and whether I can build one application for both Windows editions. I found one example, but it is DNX-based, and I don't want to go that way, as DNX is deprecated.
Any help is highly appreciated.
Late answer, but .NET Core 2.0 looks promising with Kestrel. I successfully created a .NET Core 2.0 app on the Pi 3 this morning. Pretty nifty, and if you already have an Apache web server, you're almost done. I'm actually going to embed (might not be the right term) my .NET Core 2.0 web application into a UWP app, rather than create multiple unique apps for the touchscreens around the house.
.NET Core 2.0 is still in preview though.
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel?tabs=aspnetcore2x
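To address the SSL requirement directly: Kestrel in ASP.NET Core 2.0 can serve HTTPS itself. A minimal sketch (the certificate file, password, and Startup class here are placeholders, not something from your project):

using System.Net;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        WebHost.CreateDefaultBuilder(args)
            .UseKestrel(options =>
            {
                // Listen for HTTPS on port 443 with a PFX certificate.
                // "server.pfx" and "certPassword" are placeholders.
                options.Listen(IPAddress.Any, 443,
                    listenOptions => listenOptions.UseHttps("server.pfx", "certPassword"));
            })
            .UseStartup<Startup>()
            .Build()
            .Run();
    }
}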
I know this post is pretty old, but I have built the solution which you are asking about. I'm currently running .NET 5.0 on a Raspberry Pi. When you build the .NET Core web project, select the correct target framework and set the target runtime to win-arm. Copy the output to a directory on the Pi; you will then have to access the device using PowerShell to create a scheduled task to start the web project. Something like this:
schtasks /create /tn "Startup Web" /tr c:\startup.bat /sc onstart /ru SYSTEM
That runs a batch file which launches a PowerShell script containing the following commands (.\VradTrackerWeb.exe, the name of the web app, is on the second line of the file):
Set-Location C:\apps\vradWebServer\
.\VradTrackerWeb.exe
That starts the server. If you have any web apps or services posting to the web server, you will need an SSL cert. I used No-IP and Let's Encrypt for this. For Let's Encrypt to work, you will need an external-facing web server and have the domain name point to it. Run Let's Encrypt on the external server and then copy out the cert and place it in your web directory on the Pi.
I then have a UWP program that runs on the Pi; when it starts, it gets its local address and updates No-IP with it, so local devices communicating with the server will be routed correctly and see the SSL cert. Side note: my UWP app is the startup app on the device. The scheduled task is important because it allows you to run both your app and the web server. The following snippet is how I get the IP address and then update No-IP.
private string GetLocalIP()
{
    string localIP = "";

    // Connect a UDP socket toward a public address; nothing is actually sent,
    // but the OS selects the outbound interface, whose address we read back.
    using (Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, 0))
    {
        socket.Connect("8.8.8.8", 65530);
        IPEndPoint endPoint = socket.LocalEndPoint as IPEndPoint;
        localIP = endPoint.Address.ToString();
    }
    return localIP;
}//GetLocalIP

private async void UpdateIP()
{
    string localIP = "";
    string msg = "";
    var client = new HttpClient(new HttpClientHandler { Credentials = new NetworkCredential("YourUserName", "YourPassword") });
    try
    {
        localIP = GetLocalIP();
        string noipuri = "http://dynupdate.no-ip.com/nic/update?hostname=YourDomain.hopto.org&myip=" + localIP;
        using (var response = await client.GetAsync(noipuri))
        using (var content = response.Content)
        {
            msg = await content.ReadAsStringAsync();
        }

        // No-IP replies "good" on a successful update and "nochg"
        // when the address was already current.
        if (msg.Contains("good") == true || msg.Contains("nochg") == true)
        {
            SentDynamicIP = true;
            LastIPAddress = localIP;
        }
        else
        {
            SentDynamicIP = false;
        }
    }
    catch (Exception ex)
    {
        string x = ex.Message; // swallowed; consider logging instead
    }
    finally
    {
        client.Dispose();
    }
}//UpdateIP

Azure Service Bus - topics, messages - using .NET Core

I'm trying to use Azure Service Bus with .NET Core. Obviously at the moment, this kind of sucks. I have tried the following routes:
The official SDK: doesn't work with .NET Core
AMQP.Net Lite: no (decent) documentation, no management APIs for creating/listing topics, etc. The Service Bus examples only cover a small subset of functionality and require you to already have a topic, etc.
The community wrapper around AMQP.Net Lite which mirrors the Azure SDK (https://github.com/ppatierno/azuresblite): doesn't work with .NET Core
Then, I moved on to REST.
https://azure.microsoft.com/en-gb/documentation/articles/service-bus-brokered-tutorial-rest/ is a good start (although there is no RestSharp support for .NET Core either, and for some reason the official SDK doesn't seem to cover a REST client - no Swagger def, no AutoRest client, etc.). That said, this crappy example concatenates strings into XML without encoding, and covers only a small subset of functionality.
So I decided to look for REST documentation. There are two sections, "classic" REST and just REST. The plain-old new REST API, it seems, doesn't support actually sending and receiving messages (...huh?). I'm loath to use an older technology labelled "classic" unless I can understand what it is - of course, the docs are no help here. It also uses XML and Atom rather than JSON. I have no idea why.
Bonus: the sample linked to in the documentation for the REST API, e.g. from https://msdn.microsoft.com/en-US/library/azure/hh780786.aspx, no longer exists.
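For what it's worth, sending a message over raw REST boils down to something like this (my own sketch; the namespace, topic, and key values are placeholders):

using System;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;

class RawServiceBusSend
{
    // Placeholders: substitute your own namespace, topic, and SAS policy.
    const string Ns = "yournamespace";
    const string Topic = "yourtopic";
    const string KeyName = "RootManageSharedAccessKey";
    const string Key = "yourKey";

    static void Main()
    {
        SendAsync().GetAwaiter().GetResult();
    }

    static async Task SendAsync()
    {
        var uri = "https://" + Ns + ".servicebus.windows.net/" + Topic + "/messages";
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.TryAddWithoutValidation(
                "Authorization", CreateSasToken(uri, KeyName, Key));
            var response = await client.PostAsync(uri, new StringContent("Hello"));
            Console.WriteLine(response.StatusCode);
        }
    }

    // Service Bus SAS token: HMAC-SHA256 over "<url-encoded-uri>\n<expiry>".
    static string CreateSasToken(string resourceUri, string keyName, string key)
    {
        var expiry = DateTimeOffset.UtcNow.AddHours(1).ToUnixTimeSeconds();
        var stringToSign = Uri.EscapeDataString(resourceUri) + "\n" + expiry;
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
        {
            var sig = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
            return string.Format(
                "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
                Uri.EscapeDataString(resourceUri), Uri.EscapeDataString(sig), expiry, keyName);
        }
    }
}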
Are there any viable approaches anyone has managed to use to read/write messages to topics/from subscriptions with Azure Service Bus and .NET Core?
Although there's still no proper support for an OnMessage implementation, which I think is the most important thing in Service Bus, a .NET Core version of the Service Bus client was rolled out several days ago.
Receive message example for .NET Core: https://github.com/Azure/azure-service-bus-dotnet/tree/b6f0474429efdff5960cab7cf18031ba2cbbbf52/samples/ReceiveSample
GitHub project link: https://github.com/Azure/azure-service-bus-dotnet
And NuGet information: https://www.nuget.org/packages/Microsoft.Azure.Management.ServiceBus/
The support for Azure Service Bus in .NET Core is getting better and better. There is a dedicated NuGet package for it: Microsoft.Azure.ServiceBus. As of now (March 2018) it supports most of the scenarios that you might need, although there are some gaps, like:
receiving messages in batches
checking if topic / queue / subscription exists
creating new topic / queue / subscription from code
As for OnMessage support for receiving messages, there is a new method, RegisterMessageHandler, that does the same thing.
Here is a code sample showing how it can be used:
public class MessageReceiver
{
    private const string ServiceBusConnectionString = "Endpoint=sb://bialecki.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=[privateKey]";

    public void Receive()
    {
        var subscriptionClient = new SubscriptionClient(ServiceBusConnectionString, "productRatingUpdates", "sampleSubscription");
        try
        {
            subscriptionClient.RegisterMessageHandler(
                async (message, token) =>
                {
                    var messageJson = Encoding.UTF8.GetString(message.Body);
                    var updateMessage = JsonConvert.DeserializeObject<ProductRatingUpdateMessage>(messageJson);
                    Console.WriteLine($"Received message with productId: {updateMessage.ProductId}");
                    await subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
                },
                new MessageHandlerOptions(async args => Console.WriteLine(args.Exception))
                { MaxConcurrentCalls = 1, AutoComplete = false });
        }
        catch (Exception e)
        {
            Console.WriteLine("Exception: " + e.Message);
        }
    }
}
For full information have a look at my blog posts:
Sending messages in .Net Core: http://www.michalbialecki.com/2017/12/21/sending-a-azure-service-bus-message-in-asp-net-core/
Receiving messages in .Net Core: http://www.michalbialecki.com/2018/02/28/receiving-messages-azure-service-bus-net-core/
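For completeness, the sending side with the same package looks roughly like this (a sketch reusing the connection string and topic from the sample above; ProductRatingUpdateMessage is the same POCO):

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Newtonsoft.Json;

public class MessageSender
{
    private const string ServiceBusConnectionString = "Endpoint=sb://bialecki.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=[privateKey]";

    public async Task Send(ProductRatingUpdateMessage updateMessage)
    {
        var topicClient = new TopicClient(ServiceBusConnectionString, "productRatingUpdates");

        // Serialize the POCO to JSON and wrap it in a Service Bus message.
        var messageJson = JsonConvert.SerializeObject(updateMessage);
        await topicClient.SendAsync(new Message(Encoding.UTF8.GetBytes(messageJson)));

        await topicClient.CloseAsync();
    }
}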
Unfortunately, as of the time of this writing, your only options are either to roll your own queueing on top of Azure Storage, or to use an alternative third-party library such as Hangfire, which has a sort-of-queue in the form of SQL Server storage.

Virus Scanning Uploaded files from Azure Web/Worker Role

We are designing an Azure Website which will allow users to upload content (MP4, DOCX, and other MS Office files) which can then be accessed.
Some video content we will encode to provide several different quality formats before it is streamed (using Azure Media Services).
We need to add an intermediate step so we can scan uploaded files for potential virus risk. Is there functionality built into Azure (or third party) which will allow us to call an API to scan content before processing it? We are ideally looking for an API rather than just a background service on a VM, so we can get feedback for use in a web or worker role.
I had a quick look at Symantec Endpoint and Windows Defender, but I'm not sure these offer an API.
I have successfully done this using the open source ClamAV. You don't specify what languages you are using, but as it's Azure I'll assume .Net.
There is a .Net wrapper that should provide the API that you are looking for:
https://github.com/tekmaven/nClam
Here is some sample code (note: this is copied directly from the nClam GitHub repo page and reproduced here just to protect against link rot):
using System;
using System.Linq;
using nClam;

class Program
{
    static void Main(string[] args)
    {
        var clam = new ClamClient("localhost", 3310);
        var scanResult = clam.ScanFileOnServer("C:\\test.txt");  //any file you would like!

        switch (scanResult.Result)
        {
            case ClamScanResults.Clean:
                Console.WriteLine("The file is clean!");
                break;
            case ClamScanResults.VirusDetected:
                Console.WriteLine("Virus Found!");
                Console.WriteLine("Virus name: {0}", scanResult.InfectedFiles.First().VirusName);
                break;
            case ClamScanResults.Error:
                Console.WriteLine("Woah an error occured! Error: {0}", scanResult.RawResult);
                break;
        }
    }
}
There are also APIs available for refreshing the virus definition database. All the necessary ClamAV files can be included in the deployment package and any configuration can be put into the service start-up code.
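For example, the start-up code could verify that the clamd daemon is reachable before accepting any uploads (a sketch; PingAsync is the async ping exposed by the newer nClam versions):

using System;
using nClam;

class StartupCheck
{
    static void Main()
    {
        var clam = new ClamClient("localhost", 3310);

        // Refuse to accept uploads if the clamd daemon is unreachable.
        bool reachable = clam.PingAsync().GetAwaiter().GetResult();
        Console.WriteLine(reachable
            ? "ClamAV daemon is up; safe to accept uploads."
            : "ClamAV daemon unreachable; reject or quarantine uploads.");
    }
}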
ClamAV is a good idea, especially now that 0.99 is about to be released with YARA rule support - it will make it really easy for you to write custom rules and allow ClamAV to use the many good YARA rules in the open today.
Another route, and a bit of shameless plugging, is to check out scanii.com. It's a SaaS for malware/virus detection, and it integrates quite nicely with AWS and Azure.
There are a number of options to achieve this:
Firstly, you can use ClamAV as already mentioned. ClamAV doesn't always receive the best press for its virus databases, but as others have pointed out it's easy to use and is expandable.
You can also install a commercial scanner, such as AVG or Kaspersky. Many of these come with a C API that you can talk to directly, although getting access to this can often be expensive from a licensing point of view.
Alternatively, you can call the scanner's executable directly, using something like the following to capture the output:
var proc = new Process {
    StartInfo = new ProcessStartInfo {
        FileName = "scanner.exe",
        Arguments = "arguments needed",
        UseShellExecute = false,
        RedirectStandardOutput = true,
        CreateNoWindow = true
    }
};

proc.Start();

while (!proc.StandardOutput.EndOfStream) {
    string line = proc.StandardOutput.ReadLine();
    // parse each line of the scanner's output here
}
You would then need to parse the output to get the result and use it within your application.
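Many command-line scanners also report their verdict through the process exit code, which can be simpler than parsing text output (a sketch; the exit-code values below are illustrative and scanner-specific):

proc.WaitForExit();

// Hypothetical mapping - consult your scanner's documentation:
// 0 = clean, 1 = infected, anything else = scan error.
bool infected = proc.ExitCode == 1;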
Finally, there are now some commercial APIs available to do this kind of thing, such as attachmentscanner (disclaimer: I'm related to this product) or scanii. These will provide you with an API and a more scalable option to scan specific files and receive the response from at least one virus-checking engine.
New thing coming in Spring/Summer 2020: Advanced Threat Protection for Azure Storage includes malware reputation screening, which detects malware uploads using hash reputation analysis, leveraging the power of Microsoft Threat Intelligence, which includes hashes for viruses, trojans, spyware, and ransomware. Note: it cannot be guaranteed that every malware will be detected using the hash reputation analysis technique.
https://techcommunity.microsoft.com/t5/Azure-Security-Center/Validating-ATP-for-Azure-Storage-Detections-in-Azure-Security/ba-p/1068131

ServiceStack Service structure for predominantly read-only UI

I'm getting started with ServiceStack and I've got to say I'm very impressed with all it has under the bonnet and how easy it is to use!
I am developing a predominantly read-only application with it. There will likely be updates to the database 3 or 4 times a year but the rest of the time the solution will be displaying data on an electronic information board (large touch screen monitor).
The database structure is well normalised, with a few foreign-keyed tables, and with this in mind I think it may be best to separate the read-only API from the CRUD API. The CRUD API can be used to create and modify the relational data, with POCO classes matching the database tables. I would then ensure the read-only API flattens the relational data into a few POCOs spanning a few db tables, making the data easier to handle on the read-only UIs.
I'm just looking for ideas and advice really on whether this separation of concerns is wasted effort or if there is a better way of achieving what I need? Has anyone had similar thoughts / ideas?
Having developed a similar read-only application (a gazetteer, updated quarterly/yearly) using ServiceStack, we went with optimizing the API for reads, making use of the built-in caching:
// For cached responses this has to be an object
public object Any(CachedRequestDto request)
{
    string cacheKey = request.CacheKey;
    return this.RequestContext.ToOptimizedResultUsingCache(
        base.Cache, cacheKey, () =>
        {
            using (var service = this.ResolveService<RequestService>())
            {
                return service.Any(request.TranslateTo<RequestDto>()).TranslateTo<CachedResponseDto>();
            }
        });
}
Where CacheKey is just:
public string CacheKey
{
    get
    {
        return UrnId.Create<CachedRequestDto>(string.Format("{0}_{1}", this.Field1, this.Field2));
    }
}
We did start creating a CRUD / POCO service, but for speed went with bulk import tools such as SQL Server DTS/SSIS or console apps, which suffices for now; we will revisit this later if required.
Might want to consider something like CQRS.
https://gist.github.com/kellabyte/1964094 (or Google for CQRS Martin Fowler, can only post 2 links).
I also found the following article valuable recently when starting to implement additional search-type services: https://mathieu.fenniak.net/stop-designing-fragile-web-apis/
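To make the CQRS idea concrete in this read-mostly setting, the split might look something like this (a sketch with hypothetical DTO names, not taken from the links above):

// Write side: a command DTO that mutates the normalized tables,
// used only by the 3-or-4-times-a-year update path.
public class UpdateProductCommand
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// Read side: a flattened view spanning several joined tables,
// optimized for the information board and easy to cache.
public class ProductBoardView
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public string CategoryName { get; set; }
    public decimal Price { get; set; }
}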
