Hope you can help me out with something I've been struggling with for some time now. I'm using Arquillian for my integration testing and it works very nicely. Besides testing my persistence layer, I now want to isolate my REST services layer and test it as well, this time using Arquillian with the Arquillian REST extension (http://arquillian.org/modules/rest-extension/), which sounds perfect for black-box testing of my services layer, combined with Mockito.
My application consists of several layers, from top to bottom:
UI (JavaScript)
interaction layer (REST services using RESTEasy)
business layer (stateless EJBs)
persistence layer (DAOs)
Everything gets deployed in a Java EE 6 container and uses that technology.
To test the interaction layer, which holds the REST services, I'm using the Arquillian REST extension, which avoids the hassle of setting up an HTTP client in my code and invoking the REST call myself. Sounds great, doesn't it!?
My REST services have several CDI beans injected into them which represent the business layer. To avoid having to test all layers at once, I want the REST layer to be isolated using mocks, so I tried to use Mockito to mock the injected beans in the REST layer. Unfortunately, while the mocks work in my test, they are not injected into my REST services.
While trying to fix this I stumbled upon another Arquillian extension called AutoDiscover, but I have no clue what to do or configure to make the mocks work in my project.
Hope you guys can help me out.
In fact, when running black-box tests such as REST calls or Selenium-based tests, neither Mockito nor the AutoDiscover or even the Byteman extension will help you, because your tests are driven from outside the container and you can rely only on what was deployed.
One alternative is to deploy fake services. For example, if you have a REST endpoint like this:
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("rest")
public class MyEndpoint {

    @Inject
    RealService realService;

    @Path("rest-resource")
    @GET
    public Response methodUnderTest() {
        // delegates to the injected business-layer service
        realService.doSomething();
        return Response.ok().build();
    }
}
In your test @Deployment you can add a FakeService instead of the RealService, where the FakeService is either a CDI alternative or a bean specialization. I have written about Arquillian and mocks on my blog; you can read the post here.
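For illustration, here is a minimal sketch of the CDI-alternative variant, based on the names used above; the FakeService class, the package name in beans.xml and the deployment details are my own placeholders, not something prescribed by the extension:

import javax.enterprise.inject.Alternative;

// Hypothetical fake, packaged only with the test deployment.
@Alternative
public class FakeService extends RealService {
    @Override
    public void doSomething() {
        // return canned results instead of calling the real business layer
    }
}

and in the test class:

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.StringAsset;
import org.jboss.shrinkwrap.api.spec.WebArchive;

@Deployment(testable = false) // black-box test: the test runs as a client against the deployed archive
public static WebArchive createDeployment() {
    return ShrinkWrap.create(WebArchive.class, "rest-test.war")
            .addClasses(MyEndpoint.class, RealService.class, FakeService.class)
            // enabling the alternative makes CDI inject FakeService at the RealService injection point
            .addAsWebInfResource(new StringAsset(
                    "<beans><alternatives><class>com.example.FakeService</class></alternatives></beans>"),
                    "beans.xml");
}

If RealService sits behind an interface, the fake can simply implement that interface instead of extending the class, or you can annotate the fake with @Specializes and skip the beans.xml entry.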
I've found myself in an interesting position. I currently use the latest Unity container, I'm on ASP.NET Core 2.2, and I use Application Insights. As such, I have configured DI in my web app to use Unity instead of the out-of-the-box DI provider in Core, and I use the IWebHostBuilder.UseApplicationInsights extension to spin up AI for my app.
With all this in mind, I have a piece of code whose constructor takes in IHttpContextAccessor so I can access the HttpContext. It was working great. Then I had another small app in which I was trying to reuse the functionality, and the HttpContext coming from IHttpContextAccessor was null. After a bunch of guess, test, revise, I found that IWebHostBuilder.UseApplicationInsights seems to initialize the HttpContext property on IHttpContextAccessor. If I commented out that AI extension, I would get null; with it uncommented, it worked.
I've started to look through the AI code to figure out what exactly they're doing, but honestly, with all the dependencies and pipelines and all that, it's a pretty daunting task. I was hoping someone could point out where/how AI is doing this so my code doesn't NEED AI in order to work. All help would be incredibly awesome.
Use the AddHttpContextAccessor extension method to add it to DI. HttpContextAccessor is not registered by default due to its performance impact.
services.AddHttpContextAccessor();
After some struggle, and hoping this post would enlighten me on the AI question, I found that I didn't need to replicate the AI mechanism, if there even is one at all.
Originally, I was accessing the IHttpContextAccessor via code in the view (Razor). I had an abstract factory pattern I was using to instantiate IHttpContextAccessor via Unity (this pattern came over from my .NET Framework days). Once I moved that code back to the controller and used proper .NET Core DI to get the dependency via the constructor, everything started working.
There must be something there I'm missing, but I have the code working so I'm happy. If someone could shed light on why one way works vs the other, I'd be happy to hear it.
When you enable Application Insights by calling .UseApplicationInsights(), it adds HttpContextAccessor. Many components in Application Insights require an HttpContextAccessor to be injected, e.g. ClientIpHeaderTelemetryInitializer.
This is the exact line where this occurs:
https://github.com/Microsoft/ApplicationInsights-aspnetcore/blob/develop/src/Microsoft.ApplicationInsights.AspNetCore/Extensions/ApplicationInsightsExtensions.cs#L137
I am trying to develop a Prism 6 UWP application.
My current problem is that I want to register all objects used in the application with the Unity dependency injection container.
But many UI objects are created by the infrastructure (Activator.CreateInstance(type)) and there is no way to trigger their creation through the dependency injection container. I would be fine even with just registering the created objects with the DI container.
I referred to the GitHub sample application AdventureWorks.Shopper in the Prism samples. There I saw that views are created by the infrastructure, but some other objects are created by the dependency injection container.
Is there any way to get all the objects in the application and register them with the DI container, or to trigger all creation through the DI container?
Generally you shouldn't try to have your UI objects created by the container, because as you mention, the XAML parsing process (infrastructure as you call it) is responsible for doing that and there is no easy way to get in the loop to take over that construction process.
This is one of the reasons we added the ViewModelLocator to Prism - so that from the ViewModel down through all of its dependencies, you can wire up the SetDefaultViewModelFactory method to use the container to do the construction of all your ViewModels and their dependencies (and their dependencies' dependencies, etc.), as long as you use the ViewModelLocator to wire the View to the ViewModel.
If you are following the MVVM pattern well, then there should almost never be a need to construct the UI objects themselves through the container, because they should not be doing any logic in the code-behind that would depend on things injected by the container. But that is not to say you will never need to do that. For those situations, you can either get to the container through Application.Current, as suggested by S Vasudev, with some casting. Or, if you need to do that in a few places and don't want all that casting "noise" in the code, write a simple helper object with a static property that you can set in the OnInitializeAsync method of the App class and then easily access anywhere.
If you are doing that in more than a few places you should start to question your design. And yes statics (globals) are evil and should be avoided whenever possible. But if it is just a few places in the code behind of a few views, sometimes you need to be a pragmatic programmer who gets things done and not an MVVM purist who overcomplicates things just to avoid a few minor violations of the MVVM guidance.
One way we found: you can access the Unity container like this:
unityContainer = (UnityContainer)((Prism.Unity.Windows.PrismUnityApplication)Application.Current).Container;
In the constructor of objects which are created by Activator.CreateInstance, we can use the Unity container and register that instance with it.
That way, all objects get registered with the Unity container.
For example:
unityContainer.RegisterInstance(this);
A pretty common problem with any kind of integration test is getting the unit under test into a known state -- the state that sets up well for the test you want to perform. With a unit test, there's usually not much state, and the only issue is in potentially mocking out interactions with other classes.
On the other hand, when testing a whole app there's all sorts of potentially persistent state, and getting the app into a clean state, or trickier still, into a known state that isn't "clean", is hard to do without any access to the app itself.
The only suggestion I've found is to embed any necessary setup in the app, and use something like an environment variable to trigger setup. That is, of course, viable, but it's not ideal. I don't really want to embed test code and test data in my final application if I can avoid it.
And then there's mocking out interactions with remote services. Again you can embed code (or even a framework) to do that, and trigger it with an environment variable, but again I don't love the idea of embedding stubbing code into the final app.
Suggestions? I haven't been able to find much, which makes me wonder if no-one is using Xcode UI testing, or is only using it for incredibly simple apps that don't have these kinds of issues.
Unfortunately, the two suggestions you mentioned are the only ones that are possible with Xcode UI Testing in its current state.
There is, however, one thing you can do to mitigate the risk of embedding test code in your production app. With the help of a few compiler flags you can ensure the test-specific code is only compiled into simulator builds.
#if (arch(i386) || arch(x86_64)) && os(iOS)
class SeededHTTPClient: HTTPClientProtocol {
    // ...
}
#endif
I'm in the middle of building something to make this a little easier. I'll report back when it's ready for use.
Regarding setting up the state of the target app, there's a solution. Both the test runner app and your app can read and write to the simulator's /Library/Caches folder. Knowing that, you can bundle fixture data in your test bundle, copy it to /Library/Caches in setUp(), and pass a launch argument to your application telling it to use that fixture data.
This only requires minimal changes to your app. You only need to prepare it to handle this argument at startup and copy everything over to your app's container.
If you want to read more about this, or how you can do the same when running on the device, I've actually written a post on it.
Regarding isolating your UI tests from the network, I think the best solution is to embed a web server in your test bundle and have your app connect to it (again, you can use a launch argument to parameterize your app). You can use Embassy for that.
I know that there are already some questions related to this topic but I couldn't find a real solution yet.
Currently I am developing applications with EE6, using JPA, CDI and JSF. I would like to take a more modular approach than packaging everything into a WAR or EAR and deploying the whole thing on an application server.
I am trying to design my applications to be as modular as possible by separating each module into three Maven projects:
API - Contains the interfaces for (stateless) services
Model - Contains the JPA Entities for the specific module
Impl - Contains the implementation of the API, mostly CDI beans
The view logic of every module is currently bundled within one big web project, which is ugly. I already thought of web fragments, but if I spread my bean classes and XHTML files across JAR files, I would have to implement a hook so that the resources could be looked up by a parent web application. This kind of solution would at least enable me to have a fourth project per module that contains all the view logic related to that module, which is a good start.
What I want is not only to have those four kinds of projects, but also for every project to be hot-swappable. This led me to OSGi, which was at first really cool, until I realized that the EE6 technologies are not very well supported within an OSGi container.
JPA
Let's look at JPA first. There are some tutorials[1] around that explain how to make a JPA-enabled OSGi bundle, but none of these tutorials shows how to spread entities across different bundles (the model project of a module). I would want to have, for example, three different modules:
Core
User
Blog
The model project of the blog module has a (compile-time) dependency on the model project of the user module.
The model project of the user module has a (compile-time) dependency on the model project of the core module.
How can I make JPA work in such a scenario without having to create a persistence unit for each module's model project? I want one persistence unit that is aware of all entities available at runtime. The model projects containing the entities should of course be hot-swappable. Maybe I will need to make a separate project for every client that imports all the needed entities of the projects and contains a persistence.xml with all the necessary configuration. Are there any Maven plugins available for building such projects, or even other approaches to solve that issue?
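For example, such a client-specific aggregator project could consist of little more than a persistence.xml that pulls the entity JARs of the individual model bundles into one persistence unit; the unit name, data source and JAR names below are made up for the sake of illustration, and whether the jar-file references resolve this way at runtime depends on the JPA provider and the deployment layout:

<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="application-pu" transaction-type="JTA">
        <jta-data-source>jdbc/ApplicationDS</jta-data-source>
        <!-- entities contributed by the individual model bundles -->
        <jar-file>core-model.jar</jar-file>
        <jar-file>user-model.jar</jar-file>
        <jar-file>blog-model.jar</jar-file>
        <exclude-unlisted-classes>true</exclude-unlisted-classes>
    </persistence-unit>
</persistence>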
CDI
CDI is very nice. I really love it and don't want to do without it anymore! I use CDI extensions like MyFaces CODI and DeltaSpike, which are awesome!
I inject my (stateless) services into other services or into the view layer, which is just great. Since my services are stateless, it should not be a problem to use them as OSGi services, but what about CDI integration in OSGi? I found a GlassFish CDI extension[2] that would allow the injection of OSGi services into CDI beans, but I also want my OSGi services to be CDI beans. I am not totally sure how to achieve that; probably I would have to use the BeanManager to instantiate the implementations and then register every implementation for its interface in the ServiceRegistry within a BundleActivator (a rough sketch of that idea follows at the end of this section). Is there any standard way of doing that? I would like to avoid any (compile-time) dependencies on the OSGi framework.
I would also like to use my services just like I use them right now, without changing anything (implementations not annotated and injection points not qualified).
There is a JBoss Weld extension/subproject[3] that seems to target this issue, but it appears to be inactive; I can't find any best practices or how-tos.
How can I leave my implementation as it is but still be able to use OSGi? It would not be a big deal to add an annotation to the implementations, since every implementation is already annotated with a stereotype annotation, but I would still like to avoid that.
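For reference, the manual approach sketched above (looking up the implementation through the BeanManager and registering it in a BundleActivator) could look roughly like this; MyService is a placeholder interface, the JNDI lookup assumes the BeanManager is reachable from the bundle, and note that this variant does carry a compile-time dependency on the OSGi API:

import java.util.Set;
import javax.enterprise.context.spi.CreationalContext;
import javax.enterprise.inject.spi.Bean;
import javax.enterprise.inject.spi.BeanManager;
import javax.naming.InitialContext;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

public class CdiServiceActivator implements BundleActivator {

    private ServiceRegistration registration;

    @Override
    public void start(BundleContext context) throws Exception {
        // Standard EE6 way to obtain the BeanManager
        BeanManager bm = (BeanManager) new InitialContext().lookup("java:comp/BeanManager");

        // Resolve the CDI bean for the (placeholder) service interface and create an instance
        Set<Bean<?>> beans = bm.getBeans(MyService.class);
        Bean<?> bean = bm.resolve(beans);
        CreationalContext<?> cc = bm.createCreationalContext(bean);
        MyService service = (MyService) bm.getReference(bean, MyService.class, cc);

        // Publish the CDI bean in the OSGi service registry so other bundles can consume it
        registration = context.registerService(MyService.class.getName(), service, null);
    }

    @Override
    public void stop(BundleContext context) throws Exception {
        if (registration != null) {
            registration.unregister();
        }
    }
}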
JSF
As mentioned before, I would like to be able to spread my view logic module-wise. As far as I know this is not really possible out of the box. Pax Web[4] should solve that somehow, but I am not familiar with it.
I would like to have a project "CoreWeb" in the module "core" that contains a Facelet template, let's call it "template.xhtml". A JSF page in a project called "BlogWeb" in the module "blog" should then be able to reference that template and apply a composition.
To be able to extend the view I would introduce a Java interface "Extension" that can be implemented by a specific class of a module. A controller for a view would then inject all implementations of the extension. An extension would, for example, provide a list of subviews that will be included in a main view.
The described extension mechanism can be implemented easily, but the following requirements must be fulfilled:
When new OSGi bundles are added to the application server, the set of available extensions might change; the extensions must then be available to the controller of the view.
The subviews (from a separate bundle) that should be included in a main view must be accessible.
The concept of a single host with multiple slice applications in Spring Slices[5] is very interesting, but it seems limited to the Spring dm Server, and the project also seems to be inactive.
Summary
After all the examples and behaviors I described, I hope you know what I would like to achieve. It's simply an EE6 app that is very dynamic and modularized.
What I am looking for, in the end, is at least documentation on how to get everything running as I would expect, or even better, an already working solution!
[1] http://jaxenter.com/tutorial-using-jpa-in-an-osgi-environment-36661.html
[2] https://blogs.oracle.com/sivakumart/entry/typesafe_injection_of_dynamic_osgi
[3] http://www.slideshare.net/TrevorReznik/weldosgi-injecting-easiness-in-osgi
[4] http://team.ops4j.org/wiki//display/paxweb/Pax+Web
[5] https://jira.springsource.org/browse/SLICE
To answer some of your questions, using a single persistence unit but spreading your entities across multiple bundles is not recommended, but may occasionally work. However, if your entities are so closely related that they need to share a persistence unit, splitting them across modules may not make sense. Also, don't forget you can handle compile-time dependencies by separating the implementation and interface for each entity - interface and implementation need not be in the same bundle.
For dependency injection, you may like Blueprint.
Several implementations are available, and most application servers with enterprise OSGi support provide Blueprint out of the box. It uses XML to add metadata, so the classes themselves won't need any modification.
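As a minimal illustration of what that metadata looks like (the bean, package and interface names are placeholders), a bundle ships an OSGI-INF/blueprint/blueprint.xml along these lines:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <!-- instantiate the implementation and publish it under its interface -->
    <bean id="userService" class="com.example.user.impl.UserServiceImpl"/>
    <service ref="userService" interface="com.example.user.api.UserService"/>
    <!-- consume a service published by another bundle -->
    <reference id="coreService" interface="com.example.core.api.CoreService"/>
</blueprint>

The referenced services can then be wired into other Blueprint beans via <property> elements or constructor arguments, so the Java classes stay free of OSGi imports.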
I'm new to web dev and have the following questions.
I have a Web Site project. I have one DataContext class in the App_Code folder which contains methods for working with the database (the .dbml schema is also present there) and methods which do not directly touch the DB. I want to test both kinds of methods using NUnit.
As NUnit works with classes in a .dll or .exe, I understand that I will need to either convert my entire project to a Web Application, or move all of the code that I would like to test (i.e. the entire contents of App_Code) to a class library project and reference that class library from the web site project.
If I choose to move the methods to a separate DLL, the question is how do I test the methods there which work with the database:
Will I have to create a connection to the DB in a "setup" method before running each of those methods? Is it correct that there is no need to run the web application in this case?
Or do I need to run such tests while the web site is running and the connection is established? In that case, how do I set up the project and NUnit?
Or some other way?
Second, if a method depends on some settings in my .config file, for instance network credentials or an SMTP setup, what is the approach to testing such methods?
I will greatly appreciate any help!
The more concrete, the better.
Thanks.
Generally, you should be mocking your database rather than really connecting to it for your unit tests. This means that you provide fake data access class instances that return canned results. Typically you would use a mocking framework such as Moq or Rhino Mocks to do this kind of thing for you, but lots of people also just write their own throwaway classes to serve the same purpose. Your tests shouldn't be dependent on the configuration settings of the production website.
There are many reasons for doing this, but mainly it's to separate your tests from your actual database implementation. What you're describing will produce very brittle tests that require a lot of upkeep.
Remember, unit testing is about making sure small pieces of your code work. If you need to test that a complex operation works from the top down (i.e. everything works between the steps of a user clicking something, getting data from a database, and returning it and updating a UI), then this is called integration testing. If you need to do full integration testing, it is usually recommended that you have a duplicate of your production environment - and I mean exact duplicate, same hardware, software, everything - that you run your integration tests against.