Is there a way to get Ninject to log out what it is doing?
In particular I'd like to see when objects are being created. As I have a mix of transient and singleton objects, it'd be very useful during debugging to see how many instances of each are being created, so that I can fix object scopes where needed.
EDIT: N.B. I'm looking at Ninject 2 as found at http://github.com/ninject/ninject
If you look at the new web site you will see a list of extensions. 2.0 RTM is out and the extensions are being released one at a time, but you can still use them. The logging extension is here, and you can track instance counts with a static counter, incrementing it from a lambda passed to .OnActivation(...) during binding.
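A minimal sketch of that counting approach, assuming placeholder types IMyService/MyService (Ninject 2's fluent binding syntax exposes an OnActivation overload that takes a lambda):

using System.Threading;
using Ninject.Modules;

public static class ActivationCounter
{
    private static int count;

    // Thread-safe increment, called every time Ninject activates an instance.
    public static void Increment() { Interlocked.Increment(ref count); }

    public static int Count { get { return count; } }
}

public class CountingModule : NinjectModule
{
    public override void Load()
    {
        // Count activations so transient vs. singleton instance counts
        // can be compared while debugging object scopes.
        Bind<IMyService>().To<MyService>()
            .OnActivation(instance => ActivationCounter.Increment());
    }
}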
v1.x answer: Yes, via log adapters for, among others, log4net and NLog - see http://mhinze.com/logging-with-ninject/
[in response to comment] EDIT: v2.0 Beta answer: No:
From Ninject 2 Reaches Beta!
Things that were in Ninject 1.x that are not in Ninject 2:
Logging infrastructure: Cut because it wasn’t really useful anyway. Ninject doesn’t generate logging messages of its own anymore, but I’m looking into alternative sources of introspection.
...
I've found myself in an interesting position. I currently use the latest Unity container, I'm on ASP.NET Core 2.2, and I use Application Insights. As such, I have configured DI in my web app to use Unity instead of the out-of-the-box DI provider in Core, and I use the IWebHostBuilder.UseApplicationInsights extension to spin up AI for my app.
With all this in mind, I have a piece of code whose constructor takes in IHttpContextAccessor so I can access the HttpContext. It was working great. Then I had another small app in which I was trying to reuse the functionality, and the HttpContext coming from IHttpContextAccessor was null. After a bunch of guess-test-revise, I found that IWebHostBuilder.UseApplicationInsights seems to initialize the HttpContext property on IHttpContextAccessor. If I commented out that AI extension, I would get null; uncommented, it worked.
I've started to look through the AI code to figure out exactly what they're doing, but honestly, with all the dependencies and pipelines and all that, it's a pretty daunting task. I was hoping someone could point out where/how AI is doing this so my code doesn't NEED AI in order to work. Any help would be incredibly awesome.
Use AddHttpContextAccessor extension to add it to DI. HttpContextAccessor is not added by default due to performance impact.
services.AddHttpContextAccessor();
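A minimal sketch of the registration plus a constructor-injected consumer (CurrentRequestService is a hypothetical class, not part of any framework):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Explicit registration; IHttpContextAccessor is not added by default.
        services.AddHttpContextAccessor();
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvc();
    }
}

// Hypothetical consumer: take the accessor via constructor injection
// rather than resolving it manually from the container.
public class CurrentRequestService
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public CurrentRequestService(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public string GetRequestPath()
    {
        // HttpContext can still be null outside an active request, so guard it.
        return _httpContextAccessor.HttpContext?.Request.Path.ToString();
    }
}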
After some struggle, and hoping this post would enlighten me on the AI question, I found that I didn't need to replicate the AI mechanism, if there's even one at all.
Originally, I was accessing the IHttpContextAccessor via code in the view (Razor). I have an abstract factory pattern I was using to instantiate IHttpContextAccessor via Unity (this pattern came over from my .NET Framework days). Once I moved that code back to the controller and used proper .NET Core DI to get the dependency via the constructor, everything started working.
There must be something there I'm missing, but I have the code working so I'm happy. If someone could shed light on why one way works vs the other, I'd be happy to hear it.
When you enable Application Insights by calling .UseApplicationInsights(), it adds HttpContextAccessor. Many components in Application Insights require an IHttpContextAccessor injected into them, e.g. ClientIpHeaderTelemetryInitializer.
This is the exact line where this is occurring:
https://github.com/Microsoft/ApplicationInsights-aspnetcore/blob/develop/src/Microsoft.ApplicationInsights.AspNetCore/Extensions/ApplicationInsightsExtensions.cs#L137
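For context, the registration on that line boils down to something like this (paraphrased, not verbatim; check the linked source for the exact call):

services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();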
Short: Is there any way to add a custom ITelemetryInitializer to an AzureFunction application?
Long:
After successfully integrating our AzureFunction application with ApplicationInsights and thoroughly instrumenting our code, it became evident really quickly that we'd need to correlate the various Trace and Request telemetry together. To do this we wanted to reuse a custom property we write to all trace logs called a SessionId.
A quick search yielded this SO post and this Docs article. The problem is that these articles assume I have access to some startup event or to the ApplicationInsights.config file on the server. I could be wrong, but I don't believe I have access to either of these.
So my question is, how do I do this with AzureFunctions?
No, it's not possible to customize that. There is work in progress to allow that, but it's not there yet.
You can see more details on these Github issues
application insights integration - ITelemetryInitializer doesn't have any effects #1416
Unable to access TelemetryConfiguration in DefaultTelemetryClientFactory (App Insights) #1556
EDIT:
It's possible in Azure Functions v2.
There was a Stack Overflow question describing the problem here:
Custom Application Insight TelemetryInitializer using Azure Function v2
The problem was solved, and as of Microsoft.NET.Sdk.Functions 1.0.25 everything works fine; more details here:
https://github.com/Azure/azure-functions-host/issues/3731#issuecomment-465252591
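For reference, a minimal sketch of what that looks like in a v2 function app (SessionIdInitializer, MyFunctionApp, and the SessionId value are placeholders; the IWebJobsStartup registration pattern is the one described in the linked issue):

using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Hosting;
using Microsoft.Extensions.DependencyInjection;

[assembly: WebJobsStartup(typeof(MyFunctionApp.Startup))]

namespace MyFunctionApp
{
    // Hypothetical initializer that stamps every telemetry item with a SessionId.
    public class SessionIdInitializer : ITelemetryInitializer
    {
        public void Initialize(ITelemetry telemetry)
        {
            // Populate from your own session source; on older AI SDKs this
            // property was telemetry.Context.Properties instead.
            telemetry.Context.GlobalProperties["SessionId"] = "your-session-id";
        }
    }

    public class Startup : IWebJobsStartup
    {
        public void Configure(IWebJobsBuilder builder)
        {
            // The Functions host picks up ITelemetryInitializer registrations
            // from DI as of Microsoft.NET.Sdk.Functions 1.0.25.
            builder.Services.AddSingleton<ITelemetryInitializer, SessionIdInitializer>();
        }
    }
}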
One abhorrent solution I came up with is to reflect into the ILogger that gets passed to the function to get at the ILogger array it hides inside. Then find the ApplicationInsightsLogger ILogger and reflect harder to pull out the TelemetryClient it uses. Once you get this you can merely set the SessionId property on the client's Context.
This works "great" however not only do I now have reflection in my code I had to downgrade the version of ApplicationInsights package I was using to get the type casting to stick.
Looking forward to better support in the future.
I am trying to implement validation and in reading:
https://github.com/ServiceStack/ServiceStack/wiki/Validation
I see this method being used. It doesn't seem to be on the Funq container, what am I missing?
//This method scans the assembly for validators
container.RegisterValidators(typeof(UserValidator).Assembly);
If you think a method is missing in ServiceStack, it's most likely an extension method. RegisterValidators() is an extension method in the ServiceStack.ServiceInterface.Validation namespace.
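For example (a minimal sketch: UserValidator is from the wiki example, UserService is a placeholder, and the key part is the using directive that brings the extension method into scope on the Funq container):

using Funq;
using ServiceStack.ServiceInterface.Validation;
using ServiceStack.WebHost.Endpoints;

public class AppHost : AppHostBase
{
    public AppHost() : base("My Service", typeof(UserService).Assembly) { }

    public override void Configure(Container container)
    {
        // Enable the validation feature, then scan the assembly for validators.
        Plugins.Add(new ValidationFeature());
        container.RegisterValidators(typeof(UserValidator).Assembly);
    }
}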
You should consider using ReSharper, as it eliminates a whole class of issues with C# development, including locating methods, auto-including namespaces, auto-referencing of DLLs, etc.
Otherwise, if for some reason you want to continue without ReSharper, you can use the t keyboard shortcut in the ServiceStack GitHub repo, which helps find files, or use Ctrl+Shift+F to do a solution-wide text search in a local fork of ServiceStack.
I have a fairly stable server application version that's been deployed for nearly a year at dozens of customers.
One new customer recently setup the application and is getting the following error:
System.MethodAccessException: Attempt by security transparent method
[SomeMethod] to access security critical method [SomeOtherMethod]
failed.
Both SomeMethod and SomeOtherMethod are methods in assemblies that I wrote, that are built against .NET 4, and that are running inside a Windows Service. If it makes a difference, SomeOtherMethod does reference a type from a 3rd-party assembly (EntLib 4.1) built against .NET 2.0. Looking at the code for EntLib 4.1, I do see that they use both the SecurityTransparent and AllowPartiallyTrustedCallers (APTCA) attributes, but this has never caused issues at other clients.
These assemblies were upgraded from the .NET 2.0 CLR, but a long time ago. This exact code runs fine at other customers, and I'm not explicitly using the APTCA attribute, nor am I using the SecurityCritical attribute anywhere.
This leads me to the conclusion that it's a configuration issue, or perhaps a .NET Framework patch issue. Has there been a patch released for .NET that would cause this breaking change? Is there a configuration setting somewhere that enforces this type of check, which is off by default but that my customer may have enabled?
One last point. My service utilizes SSRS RDLCs to generate PDFs. Due to some changes in .NET 4, I must force the service to use the legacy security policy via the following config:
<runtime>
<NetFx40_LegacySecurityPolicy enabled="true" />
</runtime>
For more details on why I need to do this, see this Stack Overflow post: Very High Memory Usage in .NET 4.0
The important point is that I do this at all my other customers as well. Only this one customer is having issues.
Sigh, the patterns and practices employed by the Microsoft Patterns And Practices team that's responsible for the Enterprise libraries are pretty deplorable. Well, the exception is accurate, you cannot call a method that's decorated as "I'll definitely check security" from code that's decorated with "Meh, I won't check security so don't bother burning the cpu cycles to check it". Which scales about as well as exception specifications as used in Java. CAS is incredibly useful, but diagnosing the exceptions is a major headache and often involves code that you don't own and can't fix. Big reason it got deprecated in .NET 4.
Editorial done. Taking a pot-shot at the problem, you need to find out why CAS is being enforced here. The simplest explanation for that is that the service doesn't run in full trust. The simplest explanation for that is that the client didn't install the service on the local hard drive. Or is generally running code in don't-trust-it mode even on local assemblies, a very paranoid admin could well prefer that. That needs to be configured with Caspol.exe, a tool whose command line options are as mysterious as CAS. Pot-shooting at the non-trusted location explanation, your client needs to run Caspol as shown in this blog post. Or just simply deploy the service locally so the default "I trust thee" applies.
Editing in the real reason as discovered by the OP: beware of the alternate data stream that gets added to a file when it is downloaded from an untrusted Internet or network location. The file will get a stream named "Zone.Identifier" that keeps track of where it came from with the "ZoneId" value. It is that value that overrides the trust derived from the storage location. Usually putting it in the Internet zone. Use Explorer, right-click the file and click "Unblock" to remove that stream. After you're sure you can trust the file :)
I was facing a similar issue while running the WCF sample downloaded from http://www.idesign.net/ and using their ServiceModelEx library.
I commented out the line below in AssemblyInfo.cs in the ServiceModelEx project
//[assembly: AllowPartiallyTrustedCallers]
and it worked for me.
In case it helps others, I'll post my solution for this issue:
1) In AssemblyInfo.cs, I removed/commented out the [assembly: SecurityTransparent] line.
2) The class and method that do the actual job, in my case establishing a network connection, were marked as [SecuritySafeCritical]:
[SecuritySafeCritical]
public class NetworkConnection : IDisposable
{
[SecuritySafeCritical]
public NetworkConnection(string networkName, NetworkCredential credentials)
{
        // ...
}
}
3) The caller class and method were marked as [SecurityCritical]:
[SecurityCritical]
public class DBF_DAO : AbstractDAO
{
[SecurityCritical]
public bool DBF_EsAccesoExclusivo(string pTabla, ref ArrayList exepciones)
{
        // ...
using (new NetworkConnection(DBF_PATH, readCredentials))
{
            // ...
}
}
}
In my case the issue appeared after managing NuGet packages in the solution: some package overrode the System.Web.Mvc assembly version binding in the main web site project. I set it back to 4.0.0.0 (I had 5.0 installed). I hadn't noticed the change because MVC v4.0 was installed and accessible via the GAC.
I know that there are already some questions related to this topic but I couldn't find a real solution yet.
Currently I am developing applications with EE6, using JPA, CDI, JSF. I would like to take a more modular approach than packaging everything into a WAR or EAR and deploy the whole thing on an Application Server.
I am trying to design my applications as modularly as possible by separating each module into three Maven projects:
API - Contains the interfaces for (stateless) services
Model - Contains the JPA Entities for the specific module
Impl - Contains the implementation of the API, mostly CDI beans
The view logic of every module is currently bundled within one big web project, which is ugly. I have already thought of web fragments, but if I spread my bean classes and xhtml files across jar files, I would have to implement a hook so that the resources could be looked up by a parent web application. This kind of solution would at least enable me to have a fourth project per module containing all the view logic related to that module, which is a good start.
What I want is not only to have those four kinds of projects, but also for every project to be hot swappable. This led me to OSGi, which was really cool at first, until I realized that the EE6 technologies are not very well supported within an OSGi container.
JPA
Let's look at JPA first. There are some tutorials[1] around that explain how to make a JPA-enabled OSGi bundle, but none of them shows how to spread entities across different bundles (the model project of a module). I would want, for example, three different modules:
Core
User
Blog
The model project of the blog module has a (compile-time)dependency on the model project of user.
The model project of the user module has a (compile-time)dependency on the model project of core.
How can I make JPA work in such a scenario without having to create a persistence unit for each module's model project? I want one persistence unit that is aware of all entities available at runtime. The model projects containing the entities should of course be hot swappable. Maybe I will need a separate project for every client that imports all the needed entities and contains a persistence.xml with all the necessary configuration. Are there any Maven plugins available for building such projects, or other approaches to solve this issue?
CDI
CDI is very nice. I really love it and I don't want to miss it any more! I use CDI extensions like MyFaces CODI and DeltaSpike which are awesome!
I inject my (stateless) services into other services or into the view layer, which is just great. Since my services are stateless, it should not be a problem to use them as OSGi services, but what about CDI integration in OSGi? I found a Glassfish CDI extension[2] that would allow the injection of OSGi services into CDI beans, but I also want my OSGi services to be CDI beans. I am not totally sure how to achieve that; probably I would have to use the BeanManager to instantiate the implementations and then register every implementation for its interface in the ServiceRegistry within a BundleActivator. Is there any standard way of doing that? I would like to avoid any (compile-time) dependencies on the OSGi framework.
I would also like to use my services just like I use them right now, without changing anything(implementations not annotated and injection points not qualified).
There is a JBoss Weld extension/sub-project[3] that seems to target this issue, but it appears to be inactive; I can't find any best practices or how-tos.
How can I leave my implementation as it is but still be able to use OSGi? It would not be a big deal to add an annotation to the implementations, since every implementation is already annotated with a stereotype annotation, but I would still like to avoid that.
JSF
As mentioned before, I would like to be able to spread my view logic module-wise. As far as I know this is not really possible out of the box. Pax Web[4] should solve that somehow, but I am not familiar with it.
I would like to have a project "CoreWeb" in the module "core" that contains a Facelet template, let's call it "template.xhtml". A JSF page in a project called "BlogWeb" in the module "blog" should then be able to reference that template and apply a composition.
To be able to extend the view I would introduce a java interface "Extension" that can be implemented by a specific class of a module. A controller for a view would then inject all implementations of the extension. An extension would for example provide a list of subviews that will be included into a main view.
The described extension mechanism can be implemented easily, but the following requirements must be fulfilled:
When adding new OSGi Bundles to the application server, the set of available extensions might change, the extensions must be available for the controller of the view.
The subviews(from a separate bundle) which should be included into a main view should be accessible.
The concept of a single host but multiple slice applications of Spring Slices[5] is very interesting, but seems limited to Spring DM Server and the project also seems to be inactive.
Summary
After all the examples and behaviors I described, I hope you know what I would like to achieve: simply an EE6 app that is very dynamic and modularized.
What I'm looking for, in the end, is at least documentation on how to get everything running as I would expect, or even better, an already working solution!
[1] http://jaxenter.com/tutorial-using-jpa-in-an-osgi-environment-36661.html
[2] https://blogs.oracle.com/sivakumart/entry/typesafe_injection_of_dynamic_osgi
[3] http://www.slideshare.net/TrevorReznik/weldosgi-injecting-easiness-in-osgi
[4] http://team.ops4j.org/wiki//display/paxweb/Pax+Web
[5] https://jira.springsource.org/browse/SLICE
To answer some of your questions, using a single persistence unit but spreading your entities across multiple bundles is not recommended, but may occasionally work. However, if your entities are so closely related that they need to share a persistence unit, splitting them across modules may not make sense. Also, don't forget you can handle compile-time dependencies by separating the implementation and interface for each entity - interface and implementation need not be in the same bundle.
For dependency injection, you may like Blueprint.
Several implementations are available, and most application servers with enterprise OSGi support ship Blueprint out of the box. It uses XML to add metadata, so the classes themselves won't need any modification.