ServiceName is not changing properly (C# 4.0)

I need to be able to install the same service multiple times on a single machine. That part is working! But I also need the ServiceNames to be different, and that part is not.
Below is the code within my Installer.cs:
[RunInstaller(true)]
public partial class ProjectInstaller : System.Configuration.Install.Installer
{
    public ProjectInstaller()
    {
        InitializeComponent();
    }

    public override void Install(IDictionary stateSaver)
    {
        RetrieveServiceName();
        base.Install(stateSaver);
    }

    public override void Uninstall(IDictionary savedState)
    {
        RetrieveServiceName();
        base.Uninstall(savedState);
    }

    private void RetrieveServiceName()
    {
        var serviceName = Context.Parameters["servicename"];
        if (!string.IsNullOrEmpty(serviceName))
        {
            auditStreamServiceInstaller.ServiceName = serviceName;
            auditStreamServiceInstaller.DisplayName = serviceName;
        }
    }
}
and I use the following command to install the service:
C:\Windows\Microsoft.Net\Framework\v4.0.30319> installutil /servicename="AuditStream-NW" d:AuditStreamService.exe
Now if I look at the install log:
Affected parameters are:
logtoconsole =
logfile = C:\AuditStreams\NW\AuditStreamService.InstallLog
assemblypath = C:\AuditStreams\NW\AuditStreamService.exe
servicename = AuditStream-NW
This looks correct, but within the OnStart of my service I have a line that writes the ServiceName to my own log file, and it says the ServiceName is always AuditStreamService.
I was hoping to have that say AuditStream-NW in this case. Can anyone see what I've got wrong?
EXTRA:
The reason I want these names to be different is that each service also creates a MemoryMappedFile. Originally the name of that non-persistent MMF was always "AuditStream-" + HubName (which is determined from the config file), but an outside program will now monitor what the service is doing by reading the MMF, and aside from reading the service's config file the external application doesn't know the name of the MMF. My goal is to make all the names the same: ServiceName = MMF name = service DisplayName.
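To illustrate the naming goal, here is a minimal sketch of creating the shared MMF under the service name (the capacity value and the myServiceName variable are placeholders, not the actual values from my project):
using System.IO.MemoryMappedFiles;
// Name the non-persisted MMF after the service, so the external monitor
// only needs to know the ServiceName in order to find it.
string myServiceName = "AuditStream-NW";   // placeholder
long capacity = 64 * 1024;                 // placeholder size in bytes
var mmf = MemoryMappedFile.CreateNew(myServiceName, capacity);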

Ok, so it turns out that my installation process was fine; I simply cannot use this.ServiceName within OnStart(), because it will always return the generic default name and not the name that was chosen during installation. The following code is what I used to acquire my true name:
// myServiceName is declared elsewhere in the service class.
// Find the real installed name by matching this process id against Win32_Service via WMI.
int myPid = Process.GetCurrentProcess().Id;
var services = ServiceController.GetServices();
foreach (var service in services)
{
    ManagementObject wmiService = new ManagementObject("Win32_Service.Name='" + service.ServiceName + "'");
    wmiService.Get();
    if (Convert.ToInt32(wmiService["ProcessId"]) == myPid)
        myServiceName = service.ServiceName;
}
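For reference, a sketch of how that lookup can be wrapped and called from OnStart (the GetServiceNameForCurrentProcess helper is just my wrapper name; it assumes references to System.ServiceProcess and System.Management):
using System;
using System.Diagnostics;
using System.Management;
using System.ServiceProcess;

// Inside the ServiceBase-derived class:
private static string GetServiceNameForCurrentProcess()
{
    // Match this process id against Win32_Service.ProcessId via WMI.
    int myPid = Process.GetCurrentProcess().Id;
    foreach (var service in ServiceController.GetServices())
    {
        var wmiService = new ManagementObject("Win32_Service.Name='" + service.ServiceName + "'");
        wmiService.Get();
        if (Convert.ToInt32(wmiService["ProcessId"]) == myPid)
            return service.ServiceName;
    }
    return null; // not found, e.g. not running as a Windows service
}

protected override void OnStart(string[] args)
{
    // Use the installed name for logging and as the MMF name; fall back to the default.
    string myServiceName = GetServiceNameForCurrentProcess() ?? this.ServiceName;
}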

Related

AzurePageable breaks operation (C#)

I am having difficulties working with AzurePageable, and I run into them in several places.
What happens is: if I interact with an AzurePageable and something goes wrong, the thread just doesn't return.
For example, yesterday I had too many requests to Azure App Configuration, and the following piece of code would just hang:
// Get all the settings from Azure app configuration
private static Dictionary<string, string> GetAllXYZSettings(ConfigurationClient client)
{
    var settingsSelector = new SettingSelector() { KeyFilter = "xyz:*" };
    var settings = client.GetConfigurationSettings(settingsSelector);
    Dictionary<string, string> config = new();
    foreach (dynamic setting in settings)
    {
        string settingValue = (string)setting.Value;
        if (!string.IsNullOrEmpty(settingValue))
        {
            config.Add(setting.Key, settingValue);
        }
    }
    return config;
}
I tried several things and wrapped everything in a try/catch, but my thread would just not return.
How should I read the config so I can do some correct error handling?
The same behaviour is also observed when reading the Service Bus queues.
Wrapping it in a try/catch didn't work.
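A minimal sketch of one way to bound the call, assuming the Azure.Data.AppConfiguration client: pass a CancellationToken with a timeout to GetConfigurationSettings and catch the request/cancellation exceptions around the iteration (the 30-second timeout is an arbitrary placeholder):
using System;
using System.Collections.Generic;
using System.Threading;
using Azure;
using Azure.Data.AppConfiguration;

private static Dictionary<string, string> GetAllXYZSettings(ConfigurationClient client)
{
    var settingsSelector = new SettingSelector() { KeyFilter = "xyz:*" };
    var config = new Dictionary<string, string>();
    // Bound the enumeration so a throttled or failing service cannot hang the thread indefinitely.
    using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30)); // placeholder timeout
    try
    {
        foreach (ConfigurationSetting setting in client.GetConfigurationSettings(settingsSelector, cts.Token))
        {
            if (!string.IsNullOrEmpty(setting.Value))
                config.Add(setting.Key, setting.Value);
        }
    }
    catch (RequestFailedException ex)
    {
        // Thrown once the client's retries are exhausted, e.g. on 429 Too Many Requests.
        Console.WriteLine($"App Configuration request failed: {ex.Status} {ex.Message}");
    }
    catch (OperationCanceledException)
    {
        Console.WriteLine("App Configuration enumeration timed out.");
    }
    return config;
}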

How to distinguish traces from different instances of a .NET Core application in Application Insights

I work on a .NET Core 2.2 console application that uses Microsoft.Extensions.Logging and is configured to send logs to Azure Application Insights using Microsoft.ApplicationInsights.Extensibility by:
services.AddSingleton(x =>
    new TelemetryClient(
        new TelemetryConfiguration
        {
            InstrumentationKey = "xxxx"
        }));
...
var loggerFactory = serviceProvider.GetService<ILoggerFactory>();
loggerFactory.AddApplicationInsights(serviceProvider, logLevel);
It works OK: I can read logs in Application Insights. But the application can be started simultaneously in a few instances (in different Docker containers). How can I distinguish traces from different instances? I could use the source FileName, but I don't know how I should inject it.
I tried to use Scope:
var logger = loggerFactory.CreateLogger<Worker>();
logger.BeginScope(dto.FileName);
logger.LogInformation($"Start logging.");
It's interesting that my configuration is almost identical to the one in this example: https://github.com/MicrosoftDocs/azure-docs/issues/12673
But in my case I can't see the property "FileName" in Application Insights.
For a console project, if you want to use a custom ITelemetryInitializer, you should use this format: .TelemetryInitializers.Add(new CustomInitializer());
Official doc is here.
I tested it on my side, and it works. The role name can be set.
Sample code is below:
static void Main(string[] args)
{
    TelemetryConfiguration configuration = TelemetryConfiguration.CreateDefault();
    configuration.InstrumentationKey = "xxxxx";
    configuration.TelemetryInitializers.Add(new CustomInitializer());
    var client = new TelemetryClient(configuration);

    ServiceCollection services = new ServiceCollection();
    services.AddSingleton(x => client);
    var provider = services.BuildServiceProvider();

    var loggerFactory = new LoggerFactory();
    loggerFactory.AddApplicationInsights(provider, LogLevel.Information);
    var logger = loggerFactory.CreateLogger<Program>();
    logger.LogInformation("a test message 111...");

    Console.WriteLine("Hello World!");
    Console.ReadLine();
}
Check the role name in the Azure portal.
If you really have no way to distinguish them you can use a custom telemetry initializer like this:
public class CustomInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        telemetry.Context.Cloud.RoleName = Environment.MachineName;
    }
}
and/or you can add a custom property:
public class CustomInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (telemetry is ISupportProperties)
        {
            ((ISupportProperties)telemetry).Properties["MyIdentifier"] = Environment.MachineName;
        }
    }
}
In this example I used Environment.MachineName, but you can of course use something else if needed, like that work Id parameter of yours.
Then wire it up using:
services.AddSingleton<ITelemetryInitializer, CustomInitializer>();

Blob path name provider for WebJob trigger

I have the following test code placed inside a WebJob project. It is triggered after any blob is created (or changed) under "cBinary/test1/" in the storage account.
The code works.
public class Triggers
{
    public void OnBlobCreated(
        [BlobTrigger("cBinary/test1/{name}")] Stream blob,
        [Blob("cData/test3/{name}.txt")] out string output)
    {
        output = DateTime.Now.ToString();
    }
}
The question is: how do I get rid of the ugly hard-coded strings "cBinary/test1/" and "cData/test3/"?
Hard-coding is one problem, but I also need to create and maintain a couple of such strings (blob directories) that are built dynamically, depending on the supported types. What's more, I need this string value in a couple of places, and I don't want to duplicate it.
I would like them to be placed in some kind of configuration provider that builds the blob path string depending on some enum, for instance.
How to do it?
You can implement INameResolver to resolve QueueNames and BlobNames dynamically. You can add the logic to resolve the name there. Below is some sample code.
public class BlobNameResolver : INameResolver
{
    public string Resolve(string name)
    {
        if (name == "blobNameKey")
        {
            // Do whatever you want to do to get the dynamic name
            return "the name of the blob container";
        }
        // Fall back to the token itself for any other name
        return name;
    }
}
And then you need to hook it up in Program.cs
class Program
{
    // Please set the following connection strings in app.config for this WebJob to run:
    // AzureWebJobsDashboard and AzureWebJobsStorage
    static void Main()
    {
        // Configure JobHost
        var storageConnectionString = "your connection string";

        // Hook up the NameResolver
        var config = new JobHostConfiguration(storageConnectionString) { NameResolver = new BlobNameResolver() };
        config.Queues.BatchSize = 32;

        // Pass configuration to JobHost
        var host = new JobHost(config);

        // The following code ensures that the WebJob will be running continuously
        host.RunAndBlock();
    }
}
Finally in Functions.cs
public class Functions
{
    public async Task ProcessBlob([BlobTrigger("%blobNameKey%")] Stream blob)
    {
        // Do work here
    }
}
There's some more information here.
Hope this helps.

Self-hosting MVC6 app

I'm trying to get an MVC6 app to be self-hosted for testing. I can do in-memory testing using TestServer, but for testing integration of multiple web apps, one of which includes a middleware that I have no control over that connects to the other app, I need at least one of the apps to be accessible over TCP.
I have tried using WebApp.Start, but it works with an IAppBuilder rather than IApplicationBuilder, so I can't get it to work with my Startup.
Is there any way to get an MVC6 app to be self-hosted in an xUnit test, via OWIN or any other way?
UPDATE:
FWIW, based on Pinpoint's answer and some additional research, I was able to come up with the following base class that works in xUnit, at least when the tests are in the same project as the MVC project:
public class WebTestBase : IDisposable
{
    private IDisposable webHost;

    public WebTestBase()
    {
        var env = CallContextServiceLocator.Locator.ServiceProvider.GetRequiredService<IApplicationEnvironment>();
        var builder = new ConfigurationBuilder(env.ApplicationBasePath)
            .AddIniFile("hosting.ini");
        var config = builder.Build();

        webHost = new WebHostBuilder(CallContextServiceLocator.Locator.ServiceProvider, config)
            .UseEnvironment("Development")
            .UseServer("Microsoft.AspNet.Server.WebListener")
            .Build()
            .Start();
    }

    public void Dispose()
    {
        webHost.Dispose();
    }
}
Katana's WebApp static class has been replaced by WebHostBuilder, which offers a much more flexible approach: https://github.com/aspnet/Hosting/blob/dev/src/Microsoft.AspNet.Hosting/WebHostBuilder.cs.
You've probably already used this API without realizing it, as it's the component used by the hosting block when you register a new web command in your project.json (e.g. Microsoft.AspNet.Hosting server=Microsoft.AspNet.Server.WebListener server.urls=http://localhost:54540) and run it using dnx (e.g. dnx . web):
namespace Microsoft.AspNet.Hosting
{
    public class Program
    {
        private const string HostingIniFile = "Microsoft.AspNet.Hosting.ini";
        private const string ConfigFileKey = "config";

        private readonly IServiceProvider _serviceProvider;

        public Program(IServiceProvider serviceProvider)
        {
            _serviceProvider = serviceProvider;
        }

        public void Main(string[] args)
        {
            // Allow the location of the ini file to be specified via a --config command line arg
            var tempBuilder = new ConfigurationBuilder().AddCommandLine(args);
            var tempConfig = tempBuilder.Build();
            var configFilePath = tempConfig[ConfigFileKey] ?? HostingIniFile;

            var appBasePath = _serviceProvider.GetRequiredService<IApplicationEnvironment>().ApplicationBasePath;
            var builder = new ConfigurationBuilder(appBasePath);
            builder.AddIniFile(configFilePath, optional: true);
            builder.AddEnvironmentVariables();
            builder.AddCommandLine(args);
            var config = builder.Build();

            var host = new WebHostBuilder(_serviceProvider, config).Build();
            using (host.Start())
            {
                Console.WriteLine("Started");
                var appShutdownService = host.ApplicationServices.GetRequiredService<IApplicationShutdown>();
                Console.CancelKeyPress += (sender, eventArgs) =>
                {
                    appShutdownService.RequestShutdown();
                    // Don't terminate the process immediately, wait for the Main thread to exit gracefully.
                    eventArgs.Cancel = true;
                };
                appShutdownService.ShutdownRequested.WaitHandle.WaitOne();
            }
        }
    }
}
https://github.com/aspnet/Hosting/blob/dev/src/Microsoft.AspNet.Hosting/Program.cs
You can use Microsoft.AspNet.TestHost
See http://www.strathweb.com/2015/05/integration-testing-asp-net-5-asp-net-mvc-6-applications/ for details on use.
TestHost can work with your startup using a line like
TestServer dataServer = new TestServer(TestServer.CreateBuilder().UseStartup<WebData.Startup>());
where WebData is the name of the application. The application has to be referenced in the test harness.
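A brief usage sketch against that server, assuming the beta-era Microsoft.AspNet.TestHost API (the /api/values route is only a placeholder, and this would sit inside an async xUnit test method):
// Spin up the in-memory server for the WebData app and issue a request against it.
TestServer dataServer = new TestServer(TestServer.CreateBuilder().UseStartup<WebData.Startup>());
using (var client = dataServer.CreateClient())
{
    var response = await client.GetAsync("/api/values");      // placeholder route
    string body = await response.Content.ReadAsStringAsync(); // assert on the payload here
}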

ETW events in Azure diagnostics (SDK 2.5) are logged with incorrect / missing schema

I upgraded to Azure SDK 2.5 and switched to semantic logging with EventSources.
Logging works locally with a custom EventListener.
When deployed, logs are written to a storage table, but only the EventId, Pid, Tid, etc. are populated; the really interesting fields (Message, Task, Keyword, Opcode) are left blank.
The diagnostics infrastructure log is full of errors with regards to ETW, but I don't know what to make of them:
Failed to load backup EventSource manifest file C:\Resources\{13b7ec61-6424-d4d3-9972-a83e58d8d6bb}\directory\f71b19461fcf494d89d3717b3a13cadf.something.WorkerRole.DiagnosticStore\WAD0103\Configuration\EventSource_Manifest_fe06b63d-39aa-5419-0529-18c4dacf4f68_Ver_20.backup.xml;
EventSource events will be logged without a proper schema until provider sends the manifest packets
Load manifest file failed for C:\Resources\{13b7ec61-6424-d4d3-9972-a83e58d8d6bb}\directory\f71b19461fcf494d89d3717b3a13cadf.something.WorkerRole.DiagnosticStore\WAD0103\Configuration\EventSource_Manifest_fe06b63d-39aa-5419-0529-18c4dacf4f68_Ver_20.xml
Failed to manage manifest version for file C:\Resources\{13b7ec61-6424-d4d3-9972-a83e58d8d6bb}\directory\f71b19461fcf494d89d3717b3a13cadf.something.WorkerRole.DiagnosticStore\WAD0103\Configuration\EventSource_Manifest_fe06b63d-39aa-5419-0529-18c4dacf4f68_Pid_3436.xml
Failed to process EventSource manifest event GUID:fe06b63d-39aa-5419-0529-18c4dacf4f68, event id:0xFFFE
Change in the number of events lost since the last sample: EventsCaptured=2 EventsLogged=1 EventsLost=0
I do not use a manifest file and specify the EventSource via class / attribute name:
<EtwEventSourceProviderConfiguration scheduledTransferPeriod="PT3M" scheduledTransferLogLevelFilter="Information" provider="something.Core">
<DefaultEvents eventDestination="CoreEvents" />
</EtwEventSourceProviderConfiguration>
I must be missing something, but I do not know what.
The remaining diagnostic services all work (infrastructure logs, performance counter etc.).
The EventId that is being logged is the correct one, but all the important information of the log is missing, I suppose because of an incomplete configuration?
Edit: here is my EventSource code. I won't post the entire thing because it's quite large. I use another type that calls the EventSource methods and handles formatting of parameters (if the source is enabled at that level). Most method arguments are of type string; there are no objects or other complex types passed around (the other type handles that).
[EventSource(Name = "something.Core")]
public sealed class CoreEventSource : EventSource {
    private static readonly CoreEventSource SoleInstance = new CoreEventSource();

    static CoreEventSource() {}
    private CoreEventSource() {}

    public static CoreEventSource Instance {
        get { return SoleInstance; }
    }

    public static EventKeywords AllKeywords = (EventKeywords)(-1);

    public class Keywords {
        public const EventKeywords None = (EventKeywords)(1 << 1);
        public const EventKeywords Infrastructure = (EventKeywords)(1 << 2);
        [...]
    }

    public class Tasks {
        public const EventTask None = EventTask.None;
        // generic operations
        public const EventTask Create = (EventTask)11;
        public const EventTask Update = (EventTask)12;
        public const EventTask Delete = (EventTask)13;
        public const EventTask Get = (EventTask)14;
        public const EventTask Put = (EventTask)15;
        public const EventTask Remove = (EventTask)16;
        public const EventTask Process = (EventTask)17;
    }

    [Event(1, Message = "Initialization of {0} failed: {1}.", Level = EventLevel.Critical, Keywords = Keywords.Infrastructure)]
    public void CriticalInitializationFailure(string component, string details, string exception) {
        this.WriteEvent(1, component, details, exception);
    }

    [Event(2, Message = "[Role '{0}'] Startup: {1}", Level = EventLevel.Informational, Keywords = Keywords.Infrastructure)]
    public void RoleStartup(string roleName, string message) {
        this.WriteEvent(2, roleName, message);
    }

    [Event(3, Message = "[Role '{0}'] Stop failed: {1}.", Level = EventLevel.Error, Keywords = Keywords.Infrastructure)]
    public void RoleStopFailed(string roleName, string details, string exception) {
        this.WriteEvent(3, roleName, details, exception);
    }

    [Event(4, Message = "An unhandled exception occurred.", Level = EventLevel.Critical, Keywords = Keywords.Infrastructure)]
    public void UnhandledException(string exception) {
        this.WriteEvent(4, exception);
    }

    [Event(5, Message = "An unobserved exception occurred in a faulted task.", Level = EventLevel.Critical, Keywords = Keywords.Infrastructure)]
    public void UnobservedTaskException(string exception) {
        this.WriteEvent(5, exception);
    }

    [...]
}
Turns out there were quite a few problems with my EventSource. The first thing I'd recommend to anyone working with ETW is to use the Microsoft TraceEvent Library from NuGet, even if you use System.Diagnostics.Tracing, because it comes with a tool that will verify your EventSource code and notify you about problems.
I had to fix the following:
- EventSource names must not contain a period.
- Task/Opcode pairs must be unique within an EventSource.
- One must not declare a None field in a custom Keywords or Tasks enumeration.
Hope this is of some use to anyone who encounters a similar problem.
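To make those rules concrete, here is a hedged sketch of how declarations like the ones above could be adjusted; the renamed source "something-Core" and the extra keyword are only examples, not what the project actually uses:
[EventSource(Name = "something-Core")]   // no period in the source name
public sealed class CoreEventSource : EventSource {
    public class Keywords {
        // No "None" member; each keyword is a distinct power of two.
        public const EventKeywords Infrastructure = (EventKeywords)(1 << 1);
        public const EventKeywords Data = (EventKeywords)(1 << 2);   // example keyword
    }

    public class Tasks {
        // No "None" member; use each Task/Opcode pair at most once per EventSource.
        public const EventTask Create = (EventTask)11;
        public const EventTask Update = (EventTask)12;
    }

    [Event(1, Message = "Initialization of {0} failed: {1}.", Level = EventLevel.Critical, Keywords = Keywords.Infrastructure)]
    public void CriticalInitializationFailure(string component, string details, string exception) {
        this.WriteEvent(1, component, details, exception);
    }
}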
Another thing that should be taken care of (it is what fixed our case): EventSources should only have a Name or a Guid, not both.
In our case, having both caused:
- the EtwEventSourceProvider to not log anything, and
- the EtwEventManifestProvider to log the same way you outlined, with empty data points.
