I want to write a test case for the method below, but it calls the static Transport.send method.
How can I mock this call using MockitoJUnitRunner?
public void sendEmail(final String message, final String contentType) {
    final Session session = Session.getDefaultInstance(PROPERTIES, null);
    final Multipart mpart = new MimeMultipart();
    final MimeBodyPart body = new MimeBodyPart();
    try {
        body.setContent(message, contentType);
        mpart.addBodyPart(body);
        Transport.send(createMessage(session, mpart));
        LOGGER.info("Email Notification Sent Successfully");
    } catch (MessagingException | UnsupportedEncodingException e) {
        LOGGER.error("Was not able to send mail", e);
    }
}
So:
Transport.send(createMessage(session, mpart));
that static call means: you can't "control" it using Mockito. Plain and simple. If that call just "passes" in your unit test environment, well, then you can test it, but not verify that the call really took place. Worse, if that call throws some exception in the unit test setup, then heck, what could you do?
Options:
turn to PowerMock(ito) or JMockit, as they allow you to gain control
recommended: improve your design to be easy-to-test
That last idea comes in various flavors:
For example, you could create a minimal interface EmailSenderService that offers a void send(Message whatever) method. Next, you create one implementation of that interface that simply invokes that static method. The code that actually has to send the message then simply gets passed an instance of EmailSenderService. And within your unit tests, you can @Mock that interface and gain control over it (a minimal sketch follows below).
Alternatively, you simply deprecate that static method (maybe the whole class), and you design a new "real" EmailSenderService that doesn't rely on static methods.
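A minimal sketch of that first flavor, assuming JUnit 4 and Mockito 2.x; EmailNotifier stands in for whatever class currently owns sendEmail(...) and is purely illustrative:

```java
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.verify;

import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Transport;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

// EmailSenderService.java -- a thin seam around the static javax.mail call
interface EmailSenderService {
    void send(Message message) throws MessagingException;
}

// TransportEmailSenderService.java -- the only class that still touches the static method
class TransportEmailSenderService implements EmailSenderService {
    @Override
    public void send(Message message) throws MessagingException {
        Transport.send(message);
    }
}

// EmailNotifierTest.java -- EmailNotifier (hypothetical) receives the service via its
// constructor, so the test can replace it with a mock and verify the interaction.
@RunWith(MockitoJUnitRunner.class)
public class EmailNotifierTest {

    @Mock
    private EmailSenderService emailSenderService;

    @Test
    public void sendsTheMessage() throws Exception {
        EmailNotifier notifier = new EmailNotifier(emailSenderService);
        notifier.sendEmail("hello", "text/plain");
        verify(emailSenderService).send(any(Message.class));
    }
}
```

In production you simply pass in a TransportEmailSenderService; the class under test no longer knows that a static call is involved.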
Related
How can I unit test this code?
private ODataQueryResult buildAndExecuteQuery(String path, String entity,
String sapClient, String sapLanguage) {
ODataQuery query = ODataQueryBuilder
.withEntity(path, entity)
.withHeader("sap-client", sapClient, true)
.withHeader("sap-language", sapLanguage, true)
.withoutMetadata()
.build();
return query.execute();
}
More precisely: How can I verify that my code calls all the right functions, for example does not forget to call withoutMetadata or set the required headers?
Unfortunately, ODataQueryBuilder and ODataQuery are classes, not interfaces, which makes mocking tricky. ODataQueryBuilder is even final, which rules out mocking it entirely. Also, the chain starts with the static method withEntity, which cannot be mocked either.
Are there helpers that allow me to spy on behavior or mock data, similar to the MockUtil described in https://blogs.sap.com/2017/09/19/step-12-with-sap-s4hana-cloud-sdk-automated-testing/?
You are right, the structure of those classes makes them hard to mock, but as they are part of the "SAP Cloud Platform SDK for Service Development" we have no way to change them.
Other approaches might be:
If you want to stay with the unit test approach, you might want to have a look at https://github.com/powermock/powermock, which would allow you to mock final and static classes and methods. However, I have never used it personally, so I'm not sure how easy or comfortable it is to use.
If an integration test would also suit you, you could consider using http://wiremock.org/docs/getting-started/. With that you would be able to set up a mock server, prepare responses for defined requests, and thereby verify the content of any HTTP call made by your test (a small sketch follows below).
We use WireMock in the SAP Cloud SDK and also provide some integration into our SDK via the MockUtil contained in our testutil-core module.
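To illustrate the WireMock route, here is a minimal, hypothetical sketch (JUnit 4 rule, WireMock 2.x). The port, URL pattern and header values are examples only, and the production code would have to be pointed at http://localhost:8089, e.g. via a test destination:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

import com.github.tomakehurst.wiremock.junit.WireMockRule;
import org.junit.Rule;
import org.junit.Test;

public class ODataQueryIntegrationTest {

    // Starts a local mock server on port 8089 for each test.
    @Rule
    public WireMockRule wireMock = new WireMockRule(8089);

    @Test
    public void sendsSapHeaders() {
        // Prepare a canned response for whatever URL the SDK ends up building
        // (the pattern is an assumption about that URL).
        stubFor(get(urlPathMatching(".*business-partners.*"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"d\":{\"results\":[]}}")));

        // ... invoke the production code here, configured to talk to http://localhost:8089 ...

        // Verify that the HTTP call carried the expected headers
        // (values must match what the production call was configured with).
        verify(getRequestedFor(urlPathMatching(".*business-partners.*"))
                .withHeader("sap-client", equalTo("100"))
                .withHeader("sap-language", equalTo("fr")));
    }
}
```

The verify(...) call at the end is what actually answers the question of whether the sap-client and sap-language headers were set.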
I hope this helps a bit!
A partial solution is to simulate executing the ODataQuery against a mock HttpClient.
First, the original method needs to be taken apart into two independent parts, one for building the query, the other for executing it. This is good design anyway, so no big problem:
private ODataQuery buildQuery(String path, String entity,
String sapClient, String sapLanguage) {
return ODataQueryBuilder
.withEntity(path, entity)
.withHeader("sap-client", sapClient, true)
.withHeader("sap-language", sapLanguage, true)
.withoutMetadata()
.build();
}
private ODataResponse executeQuery(ODataQuery query) {
return query.execute();
}
The buildQuery method can now be tested as follows:
@Test
public void addsSapLanguageToHeader() throws ODataException, IOException {
ODataQuery query = cut.buildQuery("api/v2", "business-partners", "", "fr");
HttpUriRequest request = getRequest(query);
assertContainsHeader(request, "sap-language", "fr");
}
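The assertContainsHeader helper is not part of the original answer; a minimal version, assuming JUnit 4 asserts and Apache HttpClient 4.x types, could look like this:

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

import org.apache.http.Header;
import org.apache.http.client.methods.HttpUriRequest;

// Asserts that the captured request carries the expected header value.
private static void assertContainsHeader(HttpUriRequest request, String name, String expectedValue) {
    Header header = request.getFirstHeader(name);
    assertNotNull("missing header: " + name, header);
    assertEquals(expectedValue, header.getValue());
}
```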
The method getRequest produces a fake HttpClient that stubs all methods required to get query.execute(httpClient) to work. It stores the actual request and returns it for further inspection. Sample implementation with Mockito:
private HttpUriRequest getRequest(ODataQuery query) throws ODataException, IOException {
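// httpClient is assumed to be a class-level Mockito mock, e.g. @Mock private HttpClient httpClient;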
// stub methods to make code work
HttpResponse response = mock(HttpResponse.class);
when(httpClient.execute(any())).thenReturn(response);
StatusLine statusLine = mock(StatusLine.class);
when(response.getStatusLine()).thenReturn(statusLine);
HttpEntity entity = mock(HttpEntity.class);
when(response.getEntity()).thenReturn(entity);
InputStream inputStream = new ByteArrayInputStream("".getBytes(StandardCharsets.UTF_8));
when(entity.getContent()).thenReturn(inputStream);
Header[] headers = new Header[0];
when(response.getAllHeaders()).thenReturn(headers);
// simulate the execution of the query
query.execute(httpClient);
// grab the original argument from the mock for inspection
ArgumentCaptor<HttpUriRequest> captor = ArgumentCaptor.forClass(HttpUriRequest.class);
verify(httpClient).execute(captor.capture());
HttpUriRequest request = captor.getValue();
return request;
}
This solution is far from perfect, of course.
First, the sheer amount of code needed to make this work shows how fragile this test will be over time. Whenever the Cloud SDK decides to add a method or validation to the call sequence, this test will break. Note also that the test is invasive in that it exercises a private method, while the gold standard says we should test only public methods.
Second, the method executeQuery still cannot be tested. Execution paths also differ, because the test code uses the .execute(httpClient) variant to run the query, while the original code uses the .execute(destinationName) variant. The two happen to share code, but this may change over time.
I created a Spring Boot (1.4.2) REST application. One of the @RestController methods needs to invoke a 3rd party API REST operation (RestOp1) which returns, say, between 100 and 250 records. For each of those records returned by RestOp1, within the same method, another REST operation of the same 3rd party API (RestOp2) must be invoked. My first attempt involved using a controller-class-level ExecutorService based on a fixed thread pool of size 100, and a Callable returning a record corresponding to the response of RestOp2:
// Executor thread pool - declared and initialized at class level
ExecutorService executorService = Executors.newFixedThreadPool(100);
// Get records from RestOp1
ResponseEntity<RestOp1ResObj[]> restOp1ResObjList
= this.restTemplate.exchange(url1, HttpMethod.GET, httpEntity, RestOp1ResObj[].class);
RestOp1ResObj[] records = restOp1ResObjList.getBody();
// Instantiate a list of futures (to call RestOp2 for each record)
List<Future<RestOp2ResObj>> futureList = new ArrayList<>();
// Iterate through the array of records and call RestOp2 in a concurrent manner, using Callables.
for (int count = 0; count < records.length; count++) {
Future<RestOp2ResObj> future = this.executorService.submit(new Callable<RestOp2ResObj>() {
@Override
public RestOp2ResObj call() throws Exception {
// exchange() returns a ResponseEntity, so unwrap the body here
return restTemplate.exchange(url2, HttpMethod.GET, httpEntity, RestOp2ResObj.class).getBody();
}
});
futureList.add(future);
}
// Iterate list of futures and fetch response from RestOp2 for each
// record. Build a final response and send back to the client.
for (int count=0; count<futureList.size(); count++) {
RestOp2ResObj response = futureList.get(count).get();
// use above response to build a final response for all the records.
}
The performance of the above code is abysmal, to say the least. The response time for a RestOp1 call (invoked only once) is around 2.5 seconds and that for a RestOp2 call (invoked for each record) is about 1.5 seconds. But the code execution time is between 20 and 30 seconds, as opposed to an expected range of 5-6 seconds! Am I missing something fundamental here?
Is the service you are calling fast enough to handle that many requests per second?
There is an async version of RestTemplate available, called AsyncRestTemplate. Why are you not using that?
I would probably go like this:
AsyncRestTemplate asyncRestTemplate = new AsyncRestTemplate(new ConcurrentTaskExecutor(Executors.newFixedThreadPool(100)));
asyncRestTemplate.exchange("http://www.example.com/myurl", HttpMethod.GET, new HttpEntity<>("message"), String.class)
.addCallback(new ListenableFutureCallback<ResponseEntity<String>>() {
@Override
public void onSuccess(ResponseEntity<String> result) {
//TODO: Add real response handling
System.out.println(result);
}
@Override
public void onFailure(Throwable ex) {
//TODO: Add real logging solution
ex.printStackTrace();
}
});
Your question involves two parts:
making multiple API calls asynchronously
handling timeouts (fallback)
Both parts are related, as you have to handle the timeout of each call.
You may consider using Spring Cloud (based on Spring Boot) and some of its out-of-the-box solutions built on the Netflix OSS stack.
The first part (timeouts) could be handled by a Hystrix circuit breaker, for example on a Feign client.
The second part (multiple requests) is an architecture issue: using native Executors isn't a good idea, as it will not scale and carries a huge maintenance cost. You may rely on Spring's asynchronous methods (@Async) instead; you'll get better results and stay fully Spring-compliant (a minimal sketch follows below).
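A minimal sketch of that asynchronous-methods approach, assuming Spring Boot with @EnableAsync; the class name RestOp2Client is illustrative, while RestOp2ResObj, url2 and httpEntity come from the question:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Configuration
@EnableAsync
class AsyncConfig {
    // Pool used by @Async methods; size it to what the third-party API can actually handle.
    @Bean
    public Executor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(50);
        executor.setMaxPoolSize(100);
        executor.initialize();
        return executor;
    }
}

@Service
class RestOp2Client {
    private final RestTemplate restTemplate;

    RestOp2Client(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    // Each call runs on the task executor; RestOp2ResObj is the question's own response type.
    @Async
    public CompletableFuture<RestOp2ResObj> fetch(String url2, HttpEntity<?> httpEntity) {
        ResponseEntity<RestOp2ResObj> response =
                restTemplate.exchange(url2, HttpMethod.GET, httpEntity, RestOp2ResObj.class);
        return CompletableFuture.completedFuture(response.getBody());
    }
}
```

The controller then submits one fetch(...) call per record (through the injected bean, so the @Async proxy kicks in), collects the returned CompletableFutures, and joins them (for example via CompletableFuture.allOf(...)) to build the final response.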
Hope this will help.
I want to write a web frontend that "propagates" the HTTP authentication received from the browser to a JBoss AS 4.2.3 server that exposes numerous @Remote interfaces.
Consider the following trivial simulation of RMI call concurrency:
Properties user1 = new Properties();
user1.setProperty(Context.INITIAL_CONTEXT_FACTORY,
"org.jboss.security.jndi.JndiLoginInitialContextFactory");
user1.setProperty(Context.URL_PKG_PREFIXES, "org.jboss.naming");
user1.setProperty(Context.PROVIDER_URL, "127.0.0.1:1099");
user1.setProperty(Context.SECURITY_PRINCIPAL, "user1");
user1.setProperty(Context.SECURITY_CREDENTIALS, "pass1");
Properties user2 = new Properties();
user2.setProperty(Context.INITIAL_CONTEXT_FACTORY,
"org.jboss.security.jndi.JndiLoginInitialContextFactory");
user2.setProperty(Context.URL_PKG_PREFIXES, "org.jboss.naming");
user2.setProperty(Context.PROVIDER_URL, "127.0.0.1:1099");
user2.setProperty(Context.SECURITY_PRINCIPAL, "user2");
user2.setProperty(Context.SECURITY_CREDENTIALS, "pass2");
InitialContext ctx1 = new InitialContext(user1);
Mine bean1 = (Mine) ctx1.lookup("myear/MyBean/remote");
InitialContext ctx2 = new InitialContext(user2);
Mine bean2 = (Mine) ctx2.lookup("myear/MyBean/remote");
System.out.println(bean1.whoami());
System.out.println(bean2.whoami());
The call uses jbossall-client 4.2.3 and goes to a JBoss AS 4.2.3 server.
The .whoami() method simply echoes the logged-in username. As it turns out, this results in both calls saying they are made by "user2". Presumably, the underlying connection is shared and only authenticated using the last-seen properties bundle.
In short, this sucks. Some preliminary testing indicates that the same problem remains in JBoss AS 7 so no luck.
Is there any other RMI client implementation I can use or any parameter I can pass in the prop bundle to make the InitialContexts not share their login info? Alternatively, can someone point me to the code that needs to be hacked to make this possible?
UPDATE:
As per request:
public class Worker extends Thread {
private final String pass, user;
private int correct = 0;
public Worker(String user, String pass) { this.user = user; this.pass = pass; }
public void run() {
Properties props = new Properties();
props.setProperty(Context.INITIAL_CONTEXT_FACTORY,
"org.jboss.security.jndi.JndiLoginInitialContextFactory");
props.setProperty(Context.URL_PKG_PREFIXES, "org.jboss.naming");
props.setProperty(Context.PROVIDER_URL, "127.0.0.1:1099");
props.setProperty(Context.SECURITY_PRINCIPAL, this.user);
props.setProperty(Context.SECURITY_CREDENTIALS, this.pass);
try {
InitialContext ctx = new InitialContext(props);
for(int i = 0; i < 100; i++) {
Mine bean = (Mine) ctx.lookup("myear/MyBean/remote");
if(bean.whoami().equals(this.user)) this.correct++;
Thread.sleep(2); }
ctx.close();
} catch (Exception e) { throw new RuntimeException(e); }
System.out.println("Done [id="+this.getId()+", good="+this.correct+"]");
}
}
Running with two workers yields:
public static void main(String[] args) throws Exception {
new Worker("user1", "pass1").start();
new Worker("user2", "pass2").start();
}
Done [t=9, good=0]
Done [t=10, good=100]
Running with 5 threads yields:
public static void main(String[] args) throws Exception {
new Worker("user1", "pass1").start();
new Worker("user2", "pass2").start();
new Worker("user3", "pass3").start();
new Worker("user4", "pass4").start();
new Worker("user5", "pass5").start();
}
Caused by: javax.ejb.EJBAccessException: Authentication failure
at org.jboss.ejb3.security.Ejb3AuthenticationInterceptor.handleGeneralSecurityException(Ejb3AuthenticationInterceptor.java:68)
at org.jboss.aspects.security.AuthenticationInterceptor.invoke(AuthenticationInterceptor.java:70)
at org.jboss.ejb3.security.Ejb3AuthenticationInterceptor.invoke(Ejb3AuthenticationInterceptor.java:110)
at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:101)
at org.jboss.ejb3.ENCPropagationInterceptor.invoke(ENCPropagationInterceptor.java:46)
at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:101)
at org.jboss.ejb3.asynchronous.AsynchronousInterceptor.invoke(AsynchronousInterceptor.java:106)
at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:101)
at org.jboss.ejb3.stateless.StatelessContainer.dynamicInvoke(StatelessContainer.java:304)
at org.jboss.aop.Dispatcher.invoke(Dispatcher.java:106)
at org.jboss.aspects.remoting.AOPRemotingInvocationHandler.invoke(AOPRemotingInvocationHandler.java:82)
at org.jboss.remoting.ServerInvoker.invoke(ServerInvoker.java:809)
at org.jboss.remoting.transport.socket.ServerThread.processInvocation(ServerThread.java:608)
at org.jboss.remoting.transport.socket.ServerThread.dorun(ServerThread.java:406)
at org.jboss.remoting.transport.socket.ServerThread.run(ServerThread.java:173)
at org.jboss.remoting.MicroRemoteClientInvoker.invoke(MicroRemoteClientInvoker.java:163)
at org.jboss.remoting.Client.invoke(Client.java:1634)
at org.jboss.remoting.Client.invoke(Client.java:548)
at org.jboss.aspects.remoting.InvokeRemoteInterceptor.invoke(InvokeRemoteInterceptor.java:62)
at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:101)
at org.jboss.aspects.tx.ClientTxPropagationInterceptor.invoke(ClientTxPropagationInterceptor.java:67)
at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:101)
at org.jboss.aspects.security.SecurityClientInterceptor.invoke(SecurityClientInterceptor.java:53)
at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:101)
at org.jboss.ejb3.remoting.IsLocalInterceptor.invoke(IsLocalInterceptor.java:74)
at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:101)
at org.jboss.ejb3.stateless.StatelessRemoteProxy.invoke(StatelessRemoteProxy.java:107)
at $Proxy0.whoami(Unknown Source)
at net.windwards.Worker.run(TestRMIClient.java:31)
at org.jboss.aspects.remoting.InvokeRemoteInterceptor.invoke(InvokeRemoteInterceptor.java:74)
at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:101)
at org.jboss.aspects.tx.ClientTxPropagationInterceptor.invoke(ClientTxPropagationInterceptor.java:67)
at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:101)
at org.jboss.aspects.security.SecurityClientInterceptor.invoke(SecurityClientInterceptor.java:53)
at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:101)
at org.jboss.ejb3.remoting.IsLocalInterceptor.invoke(IsLocalInterceptor.java:74)
at org.jboss.aop.joinpoint.MethodInvocation.invokeNext(MethodInvocation.java:101)
at org.jboss.ejb3.stateless.StatelessRemoteProxy.invoke(StatelessRemoteProxy.java:107)
at $Proxy0.whoami(Unknown Source)
at net.windwards.Worker.run(TestRMIClient.java:31)
Making the initial connection takes about 100 ms, so I tried the following (sleeping 200 ms between worker starts so each connection is established before the next worker begins, while the call loops still overlap):
public static void main(String[] args) throws Exception {
new Worker("user1", "pass1").start();
Thread.sleep(200);
new Worker("user2", "pass2").start();
Thread.sleep(200);
new Worker("user3", "pass3").start();
Thread.sleep(200);
new Worker("user4", "pass4").start();
Thread.sleep(200);
new Worker("user5", "pass5").start();
}
Done [t=9, good=1]
Done [t=14, good=12]
Done [t=15, good=14]
Done [t=16, good=15]
Done [t=17, good=100]
From the docs for org.jboss.security.jndi.JndiLoginInitialContextFactory:
During the getInitialContext callback from the JNDI naming layer, the security context identity is populated with the username ... and the credentials ... There is no actual authentication of this information. It is merely made available to the jboss transport layer for incorporation into subsequent invocations
In this case, by the time you get to invoke your beans, user2 is the last principal set and so is the one available to be used by the JBoss transport layer.
However, from the JBoss 4 source, it looks like you can make the security context scoped to the thread context, in which case your threaded test should work; simply add this property:
userN.setProperty("jnp.multi-threaded", "true");
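In context, each Worker's properties bundle from the test above simply gains that one extra line (a sketch reusing the question's own setup):

```java
Properties props = new Properties();
props.setProperty(Context.INITIAL_CONTEXT_FACTORY,
    "org.jboss.security.jndi.JndiLoginInitialContextFactory");
props.setProperty(Context.URL_PKG_PREFIXES, "org.jboss.naming");
props.setProperty(Context.PROVIDER_URL, "127.0.0.1:1099");
props.setProperty(Context.SECURITY_PRINCIPAL, this.user);
props.setProperty(Context.SECURITY_CREDENTIALS, this.pass);
// scope the security context to the calling thread instead of sharing it process-wide
props.setProperty("jnp.multi-threaded", "true");
InitialContext ctx = new InitialContext(props);
```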
Another solution would be to use org.jboss.security.jndi.LoginInitialContextFactory instead of org.jboss.security.jndi.JndiLoginInitialContextFactory. Unlike JndiLoginInitialContextFactory, LoginInitialContextFactory will try to authenticate when the lookup is made, not when the EJB is invoked. You could give it a try, even though the docs recommend JndiLoginInitialContextFactory when it comes to EJB authorization on remote clients.
The basic problem here is that you haven't closed the first context before you use the second one in the same thread, so I doubt that this is a fair test. It would be more interesting to actually make the two concurrent, by running them both in separate threads.
When getInitialContext() is called from JNDI, the security layer populates a wrapper with the supplied credentials. These are never actually verified against any source; they are just a virtual representation of the credentials made available to JBoss for subsequent calls to the same entity.
In your case, user2 is the last one to be made available to JBoss.
Alternatively, you can also run multiple instances of JBoss on the same machine by using the ServiceBindingManager. This could help you keep track of all the RMI calls you make; the properties for the Connector object also work, because it is itself a JMX MBean.
You can also use a threaded model, which can give you additional security, by adding a property:
userN.setProperty("jnp.multi-threaded", "true");
And just as a suggestion, what I found online recommends JndiLoginInitialContextFactory for EJB authentication on remote clients.
Hope this helps!
I have a service in my ServiceStack API that handles image results by implementing IStreamWriter's WriteTo(stream). It works great.
To optimize processing I am adding support for the in-memory cache, with a TimeSpan to expire the results. My concern is related to IDisposable. Prior to the cache implementation I was using IDisposable to dispose the result object and its image after returning, but with the in-memory cache it cannot implement IDisposable, otherwise the data will be wiped before it is re-fetched from the cache.
The question is how, or where, to implement the disposal of the cached results. Will the cache dispose the items on expiration? If so, how do I make Dispose run only for calls from the cache manager, but not from the HTTP handler?
public class ImageResult : IDisposable, IStreamWriter, IHasOptions
{
private readonly Image image;
public void WriteTo(Stream responseStream)
{
image.Save(responseStream, imgFormat);
}
public void Dispose()
{
// if we dispose here, will be disposed after the first result is returned
// want the image to be disposed on cache expiration
//if (this.image != null)
// this.image.Dispose();
}
}
public class ImageService : AssetService
{
public object Get(ImageRequest request)
{
var cacheKey = ServiceStack.Common.UrnId.Create<ImageRequest>(request.id);
if (Cache.Get<ImageResult>(cacheKey) == null)
{
Cache.Set<ImageResult>(cacheKey, GetImage(request), TimeSpan.FromMinutes(1));
}
return Cache.Get<ImageResult>(cacheKey);
}
[...]
}
From a quick look at ServiceStack's InMemoryCache you can see there's no event or callback to hook into for cache entry expiration. Consider using System.Runtime.Caching.MemoryCache instead, which gives you similar caching capabilities and, specifically, lets you use a change monitor to get a callback on expiration and/or removal.
Another alternative: create your own from SS's cache source code to provide you with a callback.
Once you have a callback in place, you could call Dispose() from there. But as you said, you don't want the ImageResult itself to be disposable; instead, allow access to its Image property and dispose that from the expiration callback yourself. You could wrap a class around .NET's Image to allow for unit testing (avoiding the need for a real image object in tests).
EDIT: actually, see below (*); this would create a mess.
On another note, I would make a slight change to your Get() method. The last call to Cache.Get() is superfluous. Even though you're using an in-memory cache, you'd still want to minimize access to it, as it's potentially slower than it may seem (it needs locks to synchronize in-memory access from multiple threads).
var imageResult = Cache.Get<ImageResult>(cacheKey);
if (imageResult == null)
{
imageResult = GetImage(request);
Cache.Set<ImageResult>(cacheKey, imageResult, TimeSpan.FromMinutes(1));
}
return imageResult;
(*) Just realized you could have a request getting the ImageResult from the cache, and then, an instant later, before it writes anything to the target (response) stream, the entry expires and gets disposed. Nasty. Instead, let .NET handle this for you: rather than making ImageResult implement IDisposable, create a destructor (finalizer) in which you dispose the internal Image object. This will work with SS's in-memory cache:
~ImageResult()
{
image.Dispose();
}
My problem is...
...I have a DTO like this
[Route("/route/to/dto/{Id}", "GET")]
public class Foo : IReturn<Bar>
{
public string Id { get; set; }
}
and need to call the service that implements the method with this signature
public Bar Get(Foo)
from a request and/or response filter. I don't know what class implements it (don't want to need to know). What I need is something like the LocalServiceClient class in the example below:
var client = new LocalServiceClient();
Bar bar = client.Get(new Foo());
Does this LocalServiceClient thing exist? JsonServiceClient has a pretty similar interface, but using it would be inefficient (I need to call my own service; I shouldn't need an extra round-trip, even to localhost, just to do this).
I'm aware of ResolveService method from Service class, but it requires me to have a service instance and to know what class will handle the request.
I think this LocalServiceClient is possible because I have all the data that a remote client (e.g. JsonServiceClient) needs to call the service - request DTO, route, verb - but couldn't find how to do it. Actually, it should be easier to implement than JsonServiceClient.
JsonServiceClient would do it, but there must be a better way, using the same request context.
What I want to do (skip this if you're not curious about why I'm doing this)
Actually, my DTOs are like this:
[EmbedRequestedLinks]
[Route("/route/to/dto/{Id}", "GET")]
public class MyResponseDto
{
public string Id { get; set; }
public EmbeddableLink<AResponseDto> RelatedResource { get; set; }
public EmbeddableLink<AnotherResponteDto> AnotherRelatedResource { get; set; }
}
EmbedRequestedLinksAttribute is a request/response filter. This filter checks if there is a query argument named "embed" in the request. If so, the filter needs to "embed" the comma-separated related resources referenced by the argument into the response to this request. EmbeddableLink<T> instances can be obtained by using extension methods like these:
1) public static EmbeddableLink<T> ToEmbeddableLink<T>(this IReturn<T> requestDto)
2) public static EmbeddableLink<T> ToEmbeddableLink<T>(this T resource)
Assume a client places this request:
GET /route/to/dto/123456?embed=relatedResource HTTP/1.1
The service that will handle this request will return an instance of MyResponseDto with EmbeddableLinks created using signature (1). Then my response filter will see the embed query argument and will call the Get method of the appropriate service, replacing the RelatedResource with another instance of EmbeddableLink, this time created using extension method (2):
var client = new LocalServiceClient();
response.RelatedResource = client.Get(response.RelatedResource.RequestDto)
.ToEmbeddableLink();
The serialization routine of EmbeddableLink takes care of the rest.
In case an embeddable link is not included in the embed list, the serialization routine will call the extension method ToUrl (provided by ServiceStack), which takes a verb and converts a request DTO into a URL. In this example the client will get this response:
{
"id": "9asc09dcd80a98",
"relatedResource": { "id": "ioijo0909801", ... },
"anotherRelatedResource":
{
"$link": { "href": "/route/to/another/dto/1sdf89879s" }
}
}
I know the creators of ServiceStack think that polymorphic request/responses are a bad thing, but this case seems OK to me because I'm not creating services; instead, I'm extending the framework to help me create services the way I (and possibly other users of ServiceStack) need. I'm also creating other hypermedia extensions to ServiceStack. (I hope my boss allows me to publish these extensions on GitHub.)
If you really want to do this then look at the source code for ServiceStack, in particular the ServiceManager and ServiceController; these classes are responsible for registering and resolving services. You might even be able to use reflection to create services on the fly with the static EndpointHost.Metadata, like so:
var operation = EndpointHost.Metadata.Operations
.FirstOrDefault(x => x.RequestType == typeof(Person));
if (operation != null)
{
var svc = Activator.CreateInstance(operation.ServiceType);
var method = operation.ServiceType.GetMethod("Get");
var response = method.Invoke(svc, new[] { new Person() });
}
This kinda works, but you will get null reference exceptions if there is other code calling
var httpRequest = RequestContext.Get<IHttpRequest>();
But I would not suggest this.
Instead, I suggest you create your own business service classes that do all the CRUD operations (POST/PUT/GET etc.) and make the ServiceStack services thin wrappers over them. Then you can call your own services whenever you want without worrying about the HTTP request and ServiceStack, and only use the ServiceStack service when you are dealing with HTTP requests.
You can call the static AppHostBase.Resolve() method, as demonstrated here, calling a ServiceStack service from an MVC controller:
var helloService = AppHostBase.Resolve<HelloService>();
helloService.RequestContext = System.Web.HttpContext.Current.ToRequestContext();
var response = (HelloResponse)helloService.Any(new HelloRequest { Name = User.Identity.Name });
However, I would take @kampsj's approach of making your ServiceStack services a thin wrapper around your application service classes and only deal with HTTP/session-specific stuff in the ServiceStack service.