Using Locust.io for REST web service performance testing

I wanted to use Locust for performance testing of a Spring REST web service, where each service is secured by a token.
Has anyone tried to do the same by nesting task sets?
How can we maintain the same token for all the requests from a single user?
Is it possible to move to one task based on the response from another task?

I had a similar scenario. If you know what the token is in advance, you can do:
def on_start(self):
    """ on_start is called when a Locust starts, before any task is scheduled """
    self.access_token = "XYZ"  # method 1
    # self.login()             # <-- method 2
Otherwise you could call something like a login method that would authenticate your user, and then store the resulting token on self.
Since on_start happens before any tasks, I never had to worry about nesting task sets.
If you need things to happen in a certain order within tasks, you can just run something like:
@task(1)
def mytasks(self):
    self.get_service_1()
    self.get_service_2()
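For the login approach (method 2 above), a minimal sketch of such a login method could look like the following; the /login endpoint, the credentials and the "token" response field are assumptions about your service, not something from the original question:

def login(self):
    # Hypothetical endpoint and payload -- adjust to match your service.
    response = self.client.post("/login", json={"username": "user1", "password": "secret"})
    # Assumes the token comes back in a JSON field called "token".
    self.access_token = response.json()["token"]

def get_service_1(self):
    # Every request made by this simulated user reuses the token stored on self.
    self.client.get("/service1",
                    headers={"Authorization": "Bearer " + self.access_token})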

Related

Create reusable aiohttp.ClientSession for unittest

I have some integration tests that make a few web requests per test, using the aiohttp package.
In each integration test, I have the below line of code to create a persistent session variable:
async with aiohttp.ClientSession() as session:
    # ...make web call 1
    # ...make web call 2
This all works fine, but I'd like to move the 'async with aiohttp.ClientSession() as session' out of the test case and into the base class so that it becomes a variable that all test cases can access. Despite my best efforts, I can only get to a point where the first call in the test is made successfully, but then the session is closed, causing the second call to fail. Does anyone know how to set up the ClientSession(), perhaps in the setUp() declaration?
Ref: https://docs.aiohttp.org/en/latest/http_request_lifecycle.html
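One way to keep a single session open across all the calls in a test is to create it in asyncSetUp and close it in asyncTearDown. This is only a sketch, assuming Python 3.8+ and unittest's IsolatedAsyncioTestCase; the URLs are placeholders:

import unittest
import aiohttp

class BaseAsyncTestCase(unittest.IsolatedAsyncioTestCase):
    async def asyncSetUp(self):
        # Created once per test and kept open until asyncTearDown runs.
        self.session = aiohttp.ClientSession()

    async def asyncTearDown(self):
        # Close explicitly instead of relying on "async with" inside the test.
        await self.session.close()

class MyIntegrationTest(BaseAsyncTestCase):
    async def test_two_calls(self):
        # Both calls reuse the same open session from the base class.
        async with self.session.get("https://example.com/one") as resp1:
            self.assertEqual(resp1.status, 200)
        async with self.session.get("https://example.com/two") as resp2:
            self.assertEqual(resp2.status, 200)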

Passing OAuth token in multi-threaded application

We have a Spring Boot application based on the SAP Cloud SDK (3.32.0) and are using principal propagation to our on-prem SAP environment.
Our application also uses the Axon Framework (an event sourcing framework). This means our calls to our RestControllers are sent as commands to the aggregates, which in turn send out events on the event bus. Normally we pass the OAuth token by adding metadata to the event messages; this is handled by the Axon framework. Events are dispatched on different threads than the ones that process the commands.
However, we recently started using the Cloud SDK and generated OData V2 clients to send information to and retrieve it from our on-prem SAP instances. The SAP Cloud SDK tries to fetch the AuthToken from the ThreadContext; however, due to the async nature of the Axon framework, this does not work properly.
Is there a way to pass the correct token some other way and skip the default behaviour of the SDK? We do have the token needed for the user token exchange for principal propagation in the event metadata (which can be accessed by the event handler).
Any suggestions would be great!
Danny
You can conveniently propagate the thread context to new threads using the ThreadContextExecutor:
// Capture the current ThreadContext and run the operation with it, even on another thread.
ThreadContextExecutor executor = new ThreadContextExecutor();
Callable operationWithContext = () -> executor.execute(() -> operation());
invokeAsynchronously(operationWithContext);
Check out the documentation on the topic.
Is there a way to pass the correct token some other way and skip the default behaviour of the SDK?
In case the solution with ThreadContextExecutor is not working for you, we can look for a workaround: If you are looking for a way to pass an access token inside the child thread, then use the following code sample:
import com.auth0.jwt.JWT;
import com.auth0.jwt.interfaces.DecodedJWT;
import com.sap.cloud.sdk.cloudplatform.security.AuthToken;
import com.sap.cloud.sdk.cloudplatform.security.AuthTokenAccessor;

DecodedJWT jwt = JWT.decode("your-access-token");
AuthToken authToken = new AuthToken(jwt);
AuthTokenAccessor.executeWithAuthToken(authToken, () -> {
    // do things..
});
Please note: besides the current auth token, the Cloud SDK may also extract principal and tenant information from the passed JWT.
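Putting both pieces together for the Axon scenario, here is a hedged sketch of an event handler that reads the token from the event metadata and makes it visible to the generated OData V2 client. The event type, the "authToken" metadata key and the handler class are assumptions, and the OData call itself is left as a comment:

import com.auth0.jwt.JWT;
import com.sap.cloud.sdk.cloudplatform.security.AuthToken;
import com.sap.cloud.sdk.cloudplatform.security.AuthTokenAccessor;
import org.axonframework.eventhandling.EventHandler;
import org.axonframework.messaging.annotation.MetaDataValue;

class MyEvent { /* hypothetical event payload */ }

public class MyProjectionHandler {

    @EventHandler
    public void on(MyEvent event, @MetaDataValue("authToken") String rawToken) {
        // "authToken" must match the metadata key you set when dispatching the message.
        AuthToken authToken = new AuthToken(JWT.decode(rawToken));

        // Everything inside this block resolves the token via AuthTokenAccessor,
        // so the SDK can perform the user token exchange for principal propagation.
        AuthTokenAccessor.executeWithAuthToken(authToken, () -> {
            // call your generated OData V2 client here
        });
    }
}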

Is it alright to store user authentication token as a global variable (process.env) in a nodejs lambda function?

We have a BFF built with AWS Lambda (Node.js) and API Gateway that interfaces with an API that requires user authentication. The way we've built it is that we have a separate module/file for the API services. Something like this:
src
  handlers
    users.js      // with function getMe()
  apiServices
    usersApi.js   // with function getUser(id)
So what happens is that the getMe() function receives the event with the request headers containing the authentication token, but we need to use the auth token in getUser(id). I've thought of two options to do this:
1. Update getUser(id) to accept an authToken param.
2. Store the auth token in a global variable.
I prefer #2 because it requires fewer changes, but I'm worried that this might not be a good idea because there's no way of knowing for sure when a Lambda container will be reused (or if it will be reused at all): https://aws.amazon.com/blogs/compute/container-reuse-in-lambda
Has anyone tried the second approach before, or should I just go with #1? The thing with #1 is that we have a lot of files under apiServices with a lot of functions, so I would like to make as few changes as possible.
You can do it both ways, but be careful and double-check the switching of context between users, because a Lambda container persists for a short period of time and can be hit multiple times.
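As a minimal sketch of option 1 (passing the token down explicitly), which avoids any chance of a reused container serving one user's token to another; the module paths, URL and HTTP client are placeholders, not part of the original setup:

// apiServices/usersApi.js -- option 1: the token travels with each call
const axios = require('axios'); // any HTTP client works; axios is just an example

async function getUser(id, authToken) {
  // No process.env involved, so a reused container can never hand out a stale
  // token left over from a previous user's invocation.
  const response = await axios.get(`https://api.example.com/users/${id}`, {
    headers: { Authorization: authToken },
  });
  return response.data;
}

module.exports = { getUser };

// handlers/users.js
const { getUser } = require('../apiServices/usersApi');

exports.getMe = async (event) => {
  const authToken = event.headers.Authorization; // header casing may differ in your setup
  const user = await getUser(event.pathParameters.id, authToken);
  return { statusCode: 200, body: JSON.stringify(user) };
};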

Best practice for connections to CRM in web application

I'm sorry if this question is a bit too broad, but if this were a question about a normal ASP.NET MVC 5 OWIN-based application with a default connection to an MSSQL server, I would not have such a hard time; however, we use CRM as our database.
As I mentioned, I am working on an ASP.NET MVC 5 application and am having a hard time finding the best practice for creating, keeping open and closing a connection to Dynamics CRM 365.
I found many posts and blogs, but everyone pulls in a different direction.
Some say it's better to open a new connection in a using statement for every request so it can be closed right away (that sounds good, but requests could be slow because every request needs to open a new connection to CRM).
Some say it's better to make a singleton object at application scope, keep it open for the application lifetime and reuse it on every request.
Normally I would use OrganizationServiceProxy in a simple console app, but in this case I am not sure whether I should use OrganizationServiceProxy, CrmServiceClient or something else.
If anyone has had a similar problem, any hint would be great.
UPDATE:
@Nicknow
I downloaded the SDK for Dynamics 365 and am using these DLLs:
Microsoft.Xrm.Sdk.dll, Microsoft.Crm.Sdk.Proxy.dll, Microsoft.Xrm.Tooling.Connector.dll and Microsoft.IdentityModel.Clients.ActiveDirectory.dll.
You mention Microsoft.CrmSdk.XrmTooling.CoreAssembly 8.2.0.5. If I am correct, this NuGet package uses the official assemblies that I downloaded, or are there some modifications in this package?
About that proof test: if I got it right, no matter whether I use a using statement, implement Dispose() or just use a static class at application scope for the lifetime of the application, I will always get the same instance (if I use the default setting RequireNewInstance=false)?
For code simplicity, I usually create a static class (a singleton could be used too, but would usually be overkill) to return a CrmServiceClient object. That way my code is not littered with new CrmServiceClient calls should I want to change anything about how the connection is being made.
So it would be good practice to create a static class at application scope that lives for the application lifetime? That means every user that makes a request would use the same instance? Wouldn't that be a performance issue for that one connection?
All of your method calls will execute to completion or throw an exception thus even if the GC takes a while there is no open connection sitting out there eating up resources and/or blocking other activity.
This takes me back to the section where I always get the same instance of CrmServiceClient. I got the part about Xrm.Tooling handling the cached connection on the other side, but what happens on this side (the web application)?
Isn't the connection to CRM (i.e. CrmServiceClient) an unmanaged resource, and shouldn't I Dispose() it explicitly?
I found some examples with CrmServiceClient, and in pretty much all of them CrmServiceClient is cast to IOrganizationService using CrmServiceClient.OrganizationWebProxyClient or CrmServiceClient.OrganizationServiceProxy.
Why is that, and what are the benefits of that?
I have so many questions, but this is already a lot to ask. Is there any online documentation that you could point me to?
First, I'm assuming you are using the latest SDK DLLs from NuGet: Microsoft.CrmSdk.XrmTooling.CoreAssembly 8.2.0.5.
I never wrap the connection in a using statement, and I don't think I've ever seen an example where that is done. There are examples from the "old days", before we had the tooling library, where calls to create OrganizationServiceProxy were wrapped in a using statement, which caused a lot of inexperienced developers to release code with connection performance issues.
Luckily most of this has been fixed for us through the Xrm.Tooling library.
Create your connection object using CrmServiceClient:
CrmServiceClient crmSvc = new CrmServiceClient(@"...connection string goes here...");
Now if I create an OrganizationServiceContext (or an early-bound equivalent) object, I do wrap that in a using so that it is deterministically disposed when I've completed my unit of work.
using (var ctx = new OrganizationServiceContext(crmSvc))
{
    var accounts = from a in ctx.CreateQuery("account")
                   select a["name"];

    Console.WriteLine(accounts.ToList().Count());
}
The Xrm.Tooling library handles everything else for you as far as the connection channel and authentication. Unless you specify to create a new channel each time (by adding 'RequireNewInstance=true' to the connection string or setting useUniqueInstance to true when calling new CrmServiceClient), the library will reuse the existing authenticated channel.
I used the following code to do a quick proof test:
void Main()
{
    var sw = new Stopwatch();
    sw.Start();

    var crmSvc = GetCrmClient();
    Console.WriteLine($"Time to get Client # 1: {sw.ElapsedMilliseconds}");
    crmSvc.Execute(new WhoAmIRequest());
    Console.WriteLine($"Time to WhoAmI # 1: {sw.ElapsedMilliseconds}");

    var crmSvc2 = GetCrmClient();
    Console.WriteLine($"Time to get Client # 2: {sw.ElapsedMilliseconds}");
    crmSvc2.Execute(new WhoAmIRequest());
    Console.WriteLine($"Time to WhoAmI # 2: {sw.ElapsedMilliseconds}");
}

public CrmServiceClient GetCrmClient()
{
    return new CrmServiceClient("...connection string goes here...");
}
When I run this with RequireNewInstance=true I get the following console output:
Time to get Client # 1: 2216
Time to WhoAmI # 1: 2394
Time to get Client # 2: 4603
Time to WhoAmI # 2: 4780
Clearly it was taking about the same amount of time to create each connection.
Now, if I change it to RequireNewInstance=false (which is the default) I get the following:
Time to get Client # 1: 3761
Time to WhoAmI # 1: 3960
Time to get Client # 2: 3961
Time to WhoAmI # 2: 4145
Wow, that's a big difference. What is going on? On the second call the Xrm.Tooling library uses the existing service channel and authentication (which it cached.)
You can take this one step further and wrap your new CrmServiceClient calls in a using and you'll get the same behavior, because disposing of the returned instance does not destroy the cache.
So this will return times similar to above:
using (var crmSvc = GetCrmClient())
{
    Console.WriteLine($"Time to get Client # 1: {sw.ElapsedMilliseconds}");
    crmSvc.Execute(new WhoAmIRequest());
    Console.WriteLine($"Time to WhoAmI # 1: {sw.ElapsedMilliseconds}");
}

using (var crmSvc2 = GetCrmClient())
{
    Console.WriteLine($"Time to get Client # 2: {sw.ElapsedMilliseconds}");
    crmSvc2.Execute(new WhoAmIRequest());
    Console.WriteLine($"Time to WhoAmI # 2: {sw.ElapsedMilliseconds}");
}
For code simplicity, I usually create a static class (a singleton could be used too, but would usually be overkill) to return a CrmServiceClient object. That way my code is not littered with new CrmServiceClient calls should I want to change anything about how the connection is being made.
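As a sketch of such a static class (the class name, method name and connection string key are placeholders, not part of the original answer):

using Microsoft.Xrm.Tooling.Connector;

public static class CrmConnection
{
    // Xrm.Tooling caches the authenticated channel (RequireNewInstance=false by default),
    // so handing out clients from one place is cheap and keeps connection details in one spot.
    private static readonly string ConnectionString =
        System.Configuration.ConfigurationManager.ConnectionStrings["Crm"].ConnectionString;

    public static CrmServiceClient GetClient()
    {
        return new CrmServiceClient(ConnectionString);
    }
}

// Usage:
// var crmSvc = CrmConnection.GetClient();
// crmSvc.Execute(new WhoAmIRequest());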
To fundamentally answer the question about using, we don't need to use it because there is nothing to be released. All of your method calls will execute to completion or throw an exception thus even if the GC takes a while there is no open connection sitting out there eating up resources and/or blocking other activity.

Uploading photos using Grails Services

I would like to ask: what would be the most suitable scope for my photo upload service in Grails? I created this PhotoService in my Grails 2.3.4 web app; all it does is get the request.getFile("myfile") and perform the necessary steps to save it on the hard drive whenever a user wants to upload an image. To illustrate what it looks like, here is a skeleton of these classes.
class PhotoPageController {
    def photoService

    def upload() {
        ...
        photoService.upload(request.getFile("myfile"))
        ...
    }
}

class PhotoService {
    static scope = "request"

    def upload(def myFile) {
        ...
        // I do a bunch of tasks to save the photo
        ...
    }
}
The code above isn't the exact code; I just wanted to show the flow. But my question is:
Question:
I couldn't find exact definitions of the different Grails scopes; they have one-liner explanations, but I couldn't figure out whether request scope means one bean is injected for every request to the controller, or one for each request that comes to the upload action of the controller.
Thoughts:
Basically, since many users might upload at the same time, it's not a good idea to use singleton scope, so my options would be prototype or request, I guess. So which of them works well, and which one gets created only when the PhotoService is accessed?
I'm trying to minimize the number of services that are injected into the application context and stay around as long as the web app is alive; basically, I want the service instance to die or get garbage collected at some point during the web app's lifetime rather than hanging around in memory while there is no use for it. I was thinking about making it session scoped so that when the user's session is terminated the service is cleaned up too, but in some cases a user might never upload a photo and the service gets created for no reason.
P.S.: If I move "def photoService" inside upload(), does that mean it only gets injected when the request to upload is invoked? I assume that might throw an exception, because there would be a delay until Spring injects the service and in the meantime the reference to photoService would be null.
I figured out that singleton scope would be fine since I'm not maintaining state for each request/user. Only if the service is supposed to maintain state should we go ahead and use prototype or another suitable scope. Using prototype is safer if you think the singleton might cause unexpected behavior, but that is left to testing.
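A sketch of what that stateless, singleton-scoped service could look like; the target directory, the return value and the MultipartFile handling are placeholders rather than the original implementation:

import org.springframework.web.multipart.MultipartFile

class PhotoService {
    // Grails services are singletons by default; that is safe here because the
    // service keeps no per-request state -- everything lives in local variables.
    static scope = "singleton"

    String upload(MultipartFile myFile) {
        File target = new File("/data/photos/${myFile.originalFilename}") // placeholder path
        myFile.transferTo(target)
        return target.absolutePath
    }
}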
