SoapUI - force endpoint URL when using testrunner.bat - groovy

I'm trying to run a SoapUI test suite against two different endpoints, and I do this by triggering two testrunner commands and supplying two different "-e" argument values.
The problem is that each of my test cases uses two APIs: the API I am actually testing, which does need to use the endpoint passed in the -e argument, and another API that should remain static. (The 2nd API is a helper API which sets up the environment so the first API can work.) So using the -e argument breaks my tests, because it forces the 2nd API to the same endpoint as the first.
What I've tried so far is the following Groovy script to force the endpoint value for specific test steps; however, it's being ignored, or maybe the script runs before the endpoints get set, I'm not sure.
TestSuite setup script:
def testCases = testSuite.getTestCaseList()
for (testCase in testCases)
{
    def testSteps = testCase.getTestStepList()
    for (testStep in testSteps)
    {
        if (testStep.name == "my name")
        {
            testStep.setPropertyValue('endpoint', 'http://force.it');
        }
    }
}
What else can I do to overcome this issue without duplicating my tests?

You're right: it seems that the -e argument overrides all endpoints, including the ones you set in the setup script.
So I propose the following approach for your case. As you probably already know, SoapUI has properties at different levels (testSuite, testCase, project, global), and you can use these properties to share information between your tests.
The idea is to use these properties to set your endpoints and pass the property values in the testrunner command.
Set the endpoint for all the test requests which test your 1st API using a project-level property:
${#Project#endpointAPI1}
And for the 2nd API, set the endpoint URL as:
${#Project#endpointAPI2}
Note: if you don't want to set the endpoints one by one, you can use a Groovy script similar to the one you show in your question.
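For example, a TestSuite setup script along these lines could assign the property expansions; the step-name checks are placeholders for however you identify which steps call which API:
// Assign property expansions instead of hard-coded URLs.
// Single quotes keep '${...}' literal so SoapUI expands it at run time.
testSuite.testCaseList.each { testCase ->
    testCase.testStepList.each { testStep ->
        if (testStep.name.startsWith("API1")) {
            testStep.setPropertyValue('endpoint', '${#Project#endpointAPI1}')
        } else if (testStep.name.startsWith("API2")) {
            testStep.setPropertyValue('endpoint', '${#Project#endpointAPI2}')
        }
    }
}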
Once this is set, you can invoke testrunner for both cases using the properties. Properties passed with -P are added at project level:
First endpoint for API1:
-PendpointAPI1=http://one_endpointAPI1.com -PendpointAPI2=http://endpointAPI2.com
Second endpoint for API1:
-PendpointAPI1=http://second_endpointAPI1.com -PendpointAPI2=http://endpointAPI2.com
Note that I also use a property for the API2 endpoint; however, if it is static and does not change between the two runs, instead of using ${#Project#endpointAPI2} you can set the URL for this service directly and pass only the -PendpointAPI1 property.
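For context, a full invocation could look something like this (the project file name is a placeholder):
testrunner.bat -PendpointAPI1=http://one_endpointAPI1.com -PendpointAPI2=http://endpointAPI2.com MyProject.xml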
Hope it helps,

Related

Node typescript library environment specific configuration

I am new to Node and TypeScript. I am working on developing a Node library that reaches out to another REST API to get and post data. This library is consumed by any UI application to send and receive data from the API service. Now my question is: how do I maintain environment-specific configuration within the library? For example:
Consumer calls GET /user
The user endpoint on the consumer side calls a method in the library to get the data
But if the consumer is calling the user endpoint in the test environment, I want the library to hit the corresponding API URL:
for test http://api.test.userinformation.company.com/user
for beta http://api.beta.userinformation.company.com/user
As far as I understand, the library is just a reference and runs within the consumer application. The library can certainly get the environment from the consumer, but I do not want the consumer to have to specify the full URL to hit, since figuring that out should be the library's responsibility.
Note: the URL is not the only problem; I could solve that with an environment switch within the library, as sketched below, but I also have some client secrets that vary by environment, which I can neither store in the code nor check in to source control.
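For illustration, the URL side of that environment switch could be as small as a lookup table; the secrets are the part this cannot cover:
type Environment = 'test' | 'beta'

// URL per environment, using the hostnames from the example above
const USER_API_URLS: Record<Environment, string> = {
  test: 'http://api.test.userinformation.company.com/user',
  beta: 'http://api.beta.userinformation.company.com/user',
}

export function userApiUrl(env: Environment): string {
  return USER_API_URLS[env]
}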
Additional Information
(as per jfriend00's request in comments)
My library has a LibExecutionEngine class and one method in it, which is the entry point of the library:
export class LibExecutionEngine implements ExecutionEngine {
  constructor(
    private environment: Environments,
    private userLoader: UserLoader // loader used by GetUserInfo below
  ) {}

  async GetUserInfo(
    userId: string,
    userGroupVersion: string
  ): Promise<UserInfo> {
    return this.userLoader.loadUserInfo(userId, userGroupVersion)
  }
}

export interface ExecutionEngine {
  GetUserInfo(userId: string, userGroupVersion: string): Promise<UserInfo>
}
The consumer starts using the library by creating an instance of LibExecutionEngine and then calling GetUserInfo, for example. As you can see, the constructor for the class accepts an environment. Once I have the environment in the library, I need to somehow load the values for the keys API URL, APIClientId and APIClientSecret from within the constructor. I know of three ways to do this:
Option 1
I could do something like this._configLoader.SetConfigVariables(environment), where configLoader.ts is a class that loads the specific configuration values from files ({environment}.json). But this would mean maintaining the above-mentioned URL variables and the respective clientId and clientSecret in a JSON file, which I should not be checking in to source control.
Option 2
I could use the dotenv npm package and create one .env file where I define the three keys, with the values stored in the deployment configuration. That works perfectly for an independently deployable application, but this is a library and doesn't run by itself in any environment.
Option 3
Accept a configuration object from the consumer, which means the consumer of the library provides the URL, clientId, and clientSecret based on the environment. But why should the responsibility of maintaining the variables the library needs be put on the consumer?
Please suggest how best to implement this.
So, I think I got some clarity. Let's call my library L, the consuming app C1, and the API that the library calls out to for user info A. All are internal applications in our org and have an OAuth setup to be able to communicate; our infosec team provides the client ids and secrets to individual applications. So my clarity here is: C1 would request its own clientId and clientSecret to hit A's URL, and C1 would then pass the three config values to the library, which the library uses to communicate with A. The same applies for some C2 in the future.
Which would mean that L somehow needs to accept a full configuration object, with all required config values, from its consumers C1, C2, etc.
Yes, that sounds like the proper approach. The library is just some code doing what it's told. It's the client in this case that has to fetch the clientId and clientSecret from the infosec team, maintain them, and keep them safe, and the client also has the URL that goes with them. So the client passes all of this into your library, ideally just once per instance, and you keep it in your instance data for the duration of that instance.
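A minimal sketch of that shape, assuming a hypothetical UserApiClient class and illustrative environment variable names on the consumer side:
// The library accepts everything environment-specific via a config object.
export interface LibConfig {
  apiUrl: string
  apiClientId: string
  apiClientSecret: string
}

export class UserApiClient {
  constructor(private readonly config: LibConfig) {}

  async getUserInfo(userId: string): Promise<unknown> {
    // The clientId/clientSecret would normally drive an OAuth token
    // exchange before this call; that detail is elided here.
    const res = await fetch(`${this.config.apiUrl}/${userId}`)
    return res.json()
  }
}

// Consumer side (C1): values come from C1's own deployment configuration.
const client = new UserApiClient({
  apiUrl: process.env.USER_API_URL!,
  apiClientId: process.env.USER_API_CLIENT_ID!,
  apiClientSecret: process.env.USER_API_CLIENT_SECRET!,
})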

How can I set environment variables for dredd testing in a dredd.yml file?

I'm trying to run a number of API calls using Dredd and API Blueprint to test a site. I would like to run the tests on CircleCI, since there are Selenium tests running in the same place. Each transaction needs to be accompanied by two tokens, which are set as cookies in the headers. Ideally, these would be set in the dredd.yml file. When running on a local machine, if I replace ACCESS_TOKEN and REFRESH_TOKEN with the actual values, the test runs as expected.
circle.yml:
test:
  override:
    - dredd
dredd.yml headers
header: ['Cookie: access_token=ACCESS_TOKEN; refresh_token=REFRESH_TOKEN']
Where ACCESS_TOKEN and REFRESH_TOKEN should be replaced by the actual values set in CircleCI's environment variables. I have also tried: access_token=$[ACCESS_TOKEN], access_token=$["ACCESS_TOKEN"] and access_token=$ACCESS_TOKEN. None of these are being replaced in the headers for the first API call.
The header looks like: {"Content-Type":"application/json; charset=utf-8","User-Agent":"Dredd/1.4.0 (Darwin 14.5.0; x64)","Cookie":" access_token=$ACCESS_TOKEN; refresh_token=$REFRESH_TOKEN"}
I am new to YAML files, so I'm probably missing something basic, but I did search around for a while. The hooks file is written with Node.js, so I don't think the Ruby/Rails help will be useful here. If I am missing anything in the question, don't hesitate to let me know.
YAML is a data representation language, not a template language (or template processor, for that matter). While an individual program might support loading environment variables or additional parameters named in the configuration, the YAML parser (probably, unless it's a custom module) isn't what's injecting them. Skimming the Dredd docs, I don't see any references to environment variables or parameters, so it may be worth creating an issue on the project and starting a discussion with the developers to see whether this is supported.
I can think of a number of ways to solve your specific problem, but they all involve additional tools to render the YAML with your variables injected. Perhaps the easiest solution for your case is to set the environment variables in the CircleCI web configuration (NOT the version-controlled circle.yml). Then set up a pre-build step where the YAML configuration is generated. To do this, wrap the YAML in a Bash script, with the YAML document contained inside it as a here-doc.
#!/bin/bash
# ACCESS_TOKEN and REFRESH_TOKEN are injected by CircleCI
cat <<EOF > config.yml
---
header: ['Cookie: access_token=${ACCESS_TOKEN}; refresh_token=${REFRESH_TOKEN}']
EOF
Then run the rest of your job normally, perhaps deleting the configuration file or restoring it from version control before any artifacts are created to avoid the leakage of your credentials.
A better way to work with headers is to use hook files, setting the headers before each request. Since you are using Node.js, try reading the tokens from Node's environment variables via process.env:
var hooks = require('hooks');

hooks.beforeEach(function (transaction) {
  // ACCESS_TOKEN and REFRESH_TOKEN are read from the environment,
  // where CircleCI injects them
  transaction.request.headers.Cookie =
    'access_token=' + process.env.ACCESS_TOKEN +
    '; refresh_token=' + process.env.REFRESH_TOKEN;
});
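For Dredd to pick this up, the hook file has to be registered, e.g. in dredd.yml (the path is a placeholder):
hookfiles: ./hooks.js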

jenkins: setting root url via Groovy API

I'm trying to update Jenkins' root URL via the Groovy API, so I can script the deployment of a Jenkins master without manual input (aside: why is a tool as popular with the build/devops/automation community as Jenkins so resistant to automation?)
Based on this documentation, I believe I should be able to update the URL using the following script in the Script Console.
import jenkins.model.JenkinsLocationConfiguration
jlc = new jenkins.model.JenkinsLocationConfiguration()
jlc.setUrl("http://jenkins.my-org.com:8080/")
println(jlc.getUrl())
Briefly, this instantiates a JenkinsLocationConfiguration object; calls the setter setUrl with the desired value, http://jenkins.my-org.com:8080/; and prints out the new URL to confirm that it has changed.
The println statement prints what I expect it to, but following this, the value visible through the web interface at "Manage Jenkins" -> "Configure System" -> "Jenkins URL" has not updated as I expected.
I'm concerned that the value hasn't been updated properly by Jenkins, which might lead to problems when communicating with external APIs.
Is this a valid way to fix the Jenkins root URL? If not, what is? Otherwise, why isn't the change being reflected in the config page?
You are creating a new JenkinsLocationConfiguration object and updating that new one, not the existing one being used.
use
jlc = JenkinsLocationConfiguration.get()
// ...
jlc.save()
to get the one from the global Jenkins configuration, update it, and save the config descriptor back.
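Put together, the corrected script console version looks like:
import jenkins.model.JenkinsLocationConfiguration

// fetch the existing singleton instead of constructing a new instance
def jlc = JenkinsLocationConfiguration.get()
jlc.setUrl("http://jenkins.my-org.com:8080/")
jlc.save() // persist the change so it shows up under Configure System
println(jlc.getUrl())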
see : https://github.com/jenkinsci/jenkins/blob/master/core/src/main/java/jenkins/model/JenkinsLocationConfiguration.java

Unit testing claims based authorization with ThinkTecture ClaimsAuthorizeAttribute

We are controlling access to our application's resources and actions by using ThinkTecture's MVC ClaimsAuthorizeAttribute and would like to be able to include some unit test coverage using Moq.
Ideally, I'd like to write a test which requests a controller action decorated with:
[ClaimsAuthorize("operation_x", "resource_1")]
... so as to enter our AuthorizationManager's CheckAccess override method during execution of the test.
Our CheckAccess override simply gets the action and resource from the incoming AuthorizationContext ("operation_x" and "resource_1") and determines whether the Principal has the resource/action combination as a claim and returns true if a match is found.
The test would pass or fail based on the result of our CheckAccess override.
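For reference, a simplified sketch of such an override; the claim matching shown (resource as claim type, action as claim value) is illustrative:
using System.Linq;
using System.Security.Claims;

public class MyAuthorizationManager : ClaimsAuthorizationManager
{
    public override bool CheckAccess(AuthorizationContext context)
    {
        var action = context.Action.First().Value;
        var resource = context.Resource.First().Value;

        // pass only when the principal carries the resource/action pair
        return context.Principal.HasClaim(resource, action);
    }
}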
Most of the examples I've found online are about unit testing custom Authorize attributes, or about testing whether a controller action has been decorated with an AuthzAttribute. There don't seem to be many examples of testing ThinkTecture's ClaimsAuthorize attribute.
Is it even possible to achieve what I've described? If so, please advise!
Thanks
You may be looking to do more work than necessary - you don't need to test ThinkTecture's ClaimsAuthorizeAttribute, because ThinkTecture have already done that. You should write tests which test your own code - namely the outcome of the actions performed inside your override of CheckAccess; a sketch follows.
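For example, a test along these lines exercises CheckAccess directly, without involving the attribute (NUnit-style; the manager class name matches the sketch above):
using System.Security.Claims;
using NUnit.Framework;

[TestFixture]
public class MyAuthorizationManagerTests
{
    [Test]
    public void CheckAccess_ReturnsTrue_WhenPrincipalHasMatchingClaim()
    {
        // principal carrying "resource_1"/"operation_x" as a claim
        var principal = new ClaimsPrincipal(new ClaimsIdentity(
            new[] { new Claim("resource_1", "operation_x") }, "test"));

        var context = new AuthorizationContext(principal, "resource_1", "operation_x");

        var sut = new MyAuthorizationManager();

        Assert.IsTrue(sut.CheckAccess(context));
    }
}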
If you want to check whether the ThinkTecture attribute works as it should, you should look into setting up an integration test which causes the controller action in question to be invoked.

How can I unit test a request filter using the ReturnAuthRequired extension method?

In a previous version of ServiceStack I was able to write a request filter for authorization; this filter used res.ReturnAuthRequired() when I could not authorize the user. In the current version of ServiceStack, my unit tests now throw a null reference exception, because ReturnAuthRequired now calls httpRes.EndServiceStackRequest(false), which then calls EndpointHost.CompleteRequest(). How can I unit test this now that there is a reference to the EndpointHost global variable? Should I not use the extension method?
Yeah, the callbacks are required to support proper finalization of resources used in the request - e.g. they're required for Funq's new Request Scope support.
Anyway, I've added some null checks in this commit to make it friendlier for unit testing; they will be available from the next v3.96 release onwards.
