JMeter: Connect to PostgreSQL in JSR223 using Groovy and compare values from multiple DB tables with an API response - groovy

Sorry for the long post, but I really need some guidance here. I need to compare values from an API response with the values from multiple tables in the DB.
Currently, I am doing it as follows:
Use a JDBC Connection Configuration to connect to the Postgres DB and then JDBC Request samplers to execute queries. I use three samplers to query 3 different tables and store the data in variables (let's call them DBVariables). Please see this image of my current JMeter setup: https://i.stack.imgur.com/GZJyF.png
In a JSR223 Assertion, I have written code that takes data from the various DBVariables and compares it against the API response.
However, my issue is that the API response can contain an array of records, with nested arrays inside each element (please see the API response sample below), and these array elements can be sorted in any order. This is where I run into trouble.
I was wondering what the most efficient way would be to write this JSR223 Assertion so that it validates that all data elements returned by the API match what is in the DB.
I am very new to Groovy, but I think if I can query the DB inside the JSR223 Assertion (instead of using the JDBC Sampler), the comparison could be done by storing the API response in one map and the DB response in another, sorting both, and comparing the items.
My questions are:
How can I connect to PostgreSQL using Groovy and then execute query statements? I have not done that before and was hoping someone could provide sample code.
How can I store the API response and DB responses in maps, sort them, and compare them in Groovy?
The API response is of the following type:
{
  "data": {
    "response": {
      "employeeList": [
        {
          "employeeNumber": "11102",
          "addressList": [
            {
              "addrType": "Home",
              "street_1": "123 Any street"
            },
            {
              "addrType": "Alternate",
              "street_1": "123 Any street"
            }
          ],
          "departmentList": [
            {
              "deptName": "IT"
            },
            {
              "deptName": "Finance"
            },
            {
              "deptName": "IT"
            }
          ]
        },
        {
          "employeeNumber": "11103",
          "addressList": [
            {
              "addrType": "Home",
              "street_1": "123 Any street"
            },
            {
              "addrType": "Alternate",
              "street_1": "123 Any street"
            }
          ],
          "departmentList": [
            {
              "deptName": "IT"
            },
            {
              "deptName": "Finance"
            },
            {
              "deptName": "IT"
            }
          ]
        }
      ]
    }
  }
}

Have you seen the Working with a relational database chapter of the Groovy documentation? Alternatively, you can obtain a Connection instance from the JDBC Connection Configuration element like:
def connection = org.apache.jmeter.protocol.jdbc.config.DataSourceElement.getConnection('your-pool-name')
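For example, here is a minimal sketch of querying Postgres from a JSR223 element using that pooled connection (the pool name, table, and column below are assumptions, not values from your setup):

import groovy.sql.Sql
import org.apache.jmeter.protocol.jdbc.config.DataSourceElement

// Borrow a pooled connection from the JDBC Connection Configuration
def connection = DataSourceElement.getConnection('your-pool-name')
def sql = new Sql(connection)
try {
    // rows() returns a List of GroovyRowResult, i.e. one Map per row
    def employees = sql.rows('SELECT employee_number FROM employees')
    log.info("Fetched ${employees.size()} employees from the DB")
} finally {
    sql.close() // hands the connection back to the pool
}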
Regarding "sort": there is the DefaultGroovyMethods class, which provides a sort() method for any "sortable" entity. Regarding "compare": we don't know what the object from the database looks like, hence we cannot provide a comprehensive solution.
Maybe an easier option would be converting the response from the JDBC Sampler to JSON using JsonBuilder; once you have two JSON structures, you can use a library like JSONassert, which doesn't care about order or depth.
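A rough sketch of that idea, assuming the JSONassert jar (and its org.json dependency) has been dropped into JMeter's lib folder and that one of your JDBC Requests has its Result Variable Name set to employeeRows (both names are illustrative):

import groovy.json.JsonBuilder
import org.skyscreamer.jsonassert.JSONAssert
import org.skyscreamer.jsonassert.JSONCompareMode

// JMeter exposes a JDBC Result Variable as a List of Maps, one Map per row
def rows = vars.getObject('employeeRows')

// Reshape the rows into the same envelope the API uses
def dbJson = new JsonBuilder([data: [response: [employeeList: rows]]]).toString()
def apiJson = prev.getResponseDataAsString()

// LENIENT mode ignores array ordering, so differently sorted lists still match
JSONAssert.assertEquals(dbJson, apiJson, JSONCompareMode.LENIENT)

A failed comparison throws an AssertionError, which the JSR223 Assertion reports as a failure.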
You haven't asked, but if you're "very new to groovy", maybe it's worth extracting individual values from the API using the JSON Extractor, doing the same for the database with the JDBC elements, and comparing the individual JMeter Variables with a Response Assertion?

Related

NodeJS - Simplify/Resolve GraphQL query

I am currently writing a Lambda authorizer for an AWS AppSync API; however, the authorization depends on the target resource being accessed.
Every resource has its own ACL listing the users and conditions for allowing access to it.
Currently the best I could find would be to get the identity of the caller, look at all the ACLs, and authorize the call while denying access to all the other resources, which is not only highly inefficient but also extremely impractical, if not impossible.
The solution I originally came up with was to get the target resource, retrieve the ACL, and check if the user fits the specified criteria. The problem is that I am unable to reliably determine the target resource. What I get from AWS is a request like this:
{
  "authorizationToken": "ExampleAUTHtoken123123123",
  "requestContext": {
    "apiId": "aaaaaa123123123example123",
    "accountId": "111122223333",
    "requestId": "f4081827-1111-4444-5555-5cf4695f339f",
    "queryString": "mutation CreateEvent {...}\n\nquery MyQuery {...}\n",
    "operationName": "MyQuery",
    "variables": {}
  }
}
So I only have the query string and variables, leaving the actual parsing to me. I managed to convert it to an AST using graphql-js, but it's still extremely verbose and, most importantly, its structure varies greatly.
My first code to retrieve the target worked for the AppSync console queries, but not for the Amplify front end, for example. I also can't rely on something as simple as the variable name, as an attacker could quite easily craft a query with an arbitrary name, or even not use variables at all.
I thought about implementing this authorization logic within Lambda resolvers, which should be simpler in a way, but that would require me to use resolvers as authorizers, which doesn't seem ideal, and to implement the entire resolver logic when I just want the most trivial possible resolvers.
Ideally I'd like something like this:
/* Schema:
type Query {
  operationName(key: KEY!): responseType
} */

/* Query:
query arbitraryQueryName($var1: KEY!) {
  operationName(key: $var1) {
    field1
    field2
  }
} */

/* Variables:
{ "var1": "value1" } */

parsedQuery = {
  operation: "operationName",
  params: { "key": "value1" },
  fields: [ "field1", "field2" ]
};
Is there any way to resolve/simplify the queries from GraphQL to JSON/similar in a way that this information can be easily extracted?
Well, I couldn't find anything on it, so I made something myself.
On the off chance someone needs something similar, here's the gist with the code I used: https://gist.github.com/Iorpim/6544dad46060522dd0b17477871bc434
I didn't make it a proper full lib, as it's a very specific use case and likely a one-off, and I'm also not sure how reliable it is, but it solves my problem!
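For readers who just want the general idea, here is a minimal illustration of this kind of extraction with graphql-js (this is not the gist's code; it assumes a single operation with scalar arguments):

const { parse } = require('graphql');

// Reduce a query string plus its variables object to
// { operation, params, fields }, as described above
function simplifyQuery(queryString, variables) {
  const ast = parse(queryString);
  // Assumes one operation definition; multi-operation documents
  // (like the AppSync example above) need to match on operationName
  const op = ast.definitions.find(d => d.kind === 'OperationDefinition');
  const field = op.selectionSet.selections[0];

  const params = {};
  for (const arg of field.arguments) {
    params[arg.name.value] =
      arg.value.kind === 'Variable'
        ? variables[arg.value.name.value] // resolve $var1 -> "value1"
        : arg.value.value;                // inline literal
  }

  return {
    operation: field.name.value,
    params,
    fields: field.selectionSet.selections.map(f => f.name.value)
  };
}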

How do I know which fields are indexed in pouchdb if I use query() API?

I am new to PouchDB and I am reading the source code below:
db.query('product_index', {
  startkey: ["01234"],
  endkey: ["01234", {}],
  include_docs: false
});
This code executes for a long time. After reading some PouchDB documentation, it looks like PouchDB builds an index on the database the first time the query runs. But I don't understand which fields are indexed based on the above code.
In the code below I can see that it builds an index on the field foo. But how does the query() API decide what to index? What is the difference between query() and createIndex() from an indexing perspective?
db.createIndex({
  index: {
    fields: ['foo']
  }
})
Have you seen the PouchDB Guide's Bulk operations section, "Please use 'allDocs()'. Seriously."?
Far too many developers overlook this valuable API, because they misunderstand it. When a developer says "my PouchDB app is slow!", it is usually because they are using the slow query() API when they should be using the fast allDocs() API.
When designing your data structures, it's very important to bear this in mind. You should define your record id fields to optimize data accessibility through allDocs().
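For example, if the product code were the prefix of your _id values (an assumption about your data model), the lookup above could be served straight from the primary index that always exists, with no index build:

// allDocs() reads the built-in index on _id, so there is no first-run delay
db.allDocs({
  startkey: '01234',
  endkey: '01234\ufff0', // high Unicode sentinel turns this into a prefix match
  include_docs: true
}).then(function (result) {
  var docs = result.rows.map(function (row) { return row.doc; });
  console.log(docs);
});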

Case insensitive search in arrays for CosmosDB / DocumentDB

Let's say I have these documents in my CosmosDB (DocumentDB API, .NET SDK):
{
  // partition key of the collection
  "userId": "0000-0000-0000-0000",
  "emailAddresses": [
    "someaddress@somedomain.com",
    "Another.Address@someotherdomain.com"
  ]
  // some more fields
}
I now need to find out if I have a document for a given email address. However, I need the query to be case insensitive.
There are ways to do a case-insensitive search on a field (they do a full scan, however):
How to do a Case Insensitive search on Azure DocumentDb?
select * from json j where LOWER(j.name) = 'timbaktu'
e => e.Id.ToLower() == key.ToLower()
These do not work for arrays. Is there an alternative way? A user defined function looks like it could help.
I am mainly looking for a temporary low-effort solution to support the scenario (I have multiple collections like this). I probably need to switch to a data structure like this at some point:
{
  "userId": "0000-0000-0000-0000",
  // Option A
  "emailAddresses": [
    {
      "displayName": "someaddress@somedomain.com",
      "normalizedName": "someaddress@somedomain.com"
    },
    {
      "displayName": "Another.Address@someotherdomain.com",
      "normalizedName": "another.address@someotherdomain.com"
    }
  ],
  // Option B
  "emailAddressesNormalized": [
    "someaddress@somedomain.com",
    "another.address@someotherdomain.com"
  ]
}
Unfortunately, my production database already contains documents that would need to be updated to support the new structure.
My production collections contain only hundreds of these items, so I am even tempted to just get all items and do the comparison in memory on the client.
If performance matters, then you should consider one of the normalization solutions you proposed yourself in the question. Then you could index the normalized field and get results without doing a full scan.
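With Option A from your question, for instance, the lookup becomes a sketch like this (no LOWER() call, so the index on the normalized field can be used):

SELECT c FROM c
JOIN e IN c.emailAddresses
WHERE e.normalizedName = 'another.address@someotherdomain.com'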
If for some reason you really don't want to retouch the documents, then perhaps the feature you are missing is a simple JOIN.
An example query which will do a case-insensitive search within the array, with a scan:
SELECT c FROM c
JOIN email IN c.emailAddresses
WHERE LOWER(email) = LOWER('ANOTHER.ADDRESS@someotherdomain.com')
You can find more examples about joining in Getting started with SQL commands in Cosmos DB.
Note that the WHERE criteria in the given example cannot use an index, so consider using it only alongside another, more selective (indexed) criterion.

JSON object selector to describe the criteria for querying documents in Azure Cosmos/ Document DB

I am using a JavaScript Azure Function to bind to Cosmos DB (DocumentDB) and query documents within a collection. I would like the SELECT query to be formed based on a JSON object coming in the request body. IBM Cloudant provides a feature wherein you can pass a JSON object (a selector) to describe the criteria for selecting documents. How do I achieve the same in Azure?
The JSON selector looks like this:
{
  "selector": {
    "id": {
      "$gt": 0
    },
    "USERS": {
      "username": "Jack",
      "department": "HR"
    }
  }
}
The sql-from-mongo npm package provides conversion of expressions like these into Cosmos DB SQL. The module can easily be loaded in your Azure Functions, or it can even be loaded into a sproc with a little bit of manipulation.
Full disclosure: I'm the author of the npm package.

Marklogic QueryByExample in collection NodeJS

TLDR
Is there a way to limit queryByExample to a collection in NodeJS?
Problem faced
I have a complex query with some optional fields (i.e. sometimes some search fields will be omitted), so I need to create the query dynamically, e.g. in JSON. QueryByExample seems to be the right tool to use here, as it gives me the flexibility to pass a JSON object. However, my problem is that I would like to limit my search to a single collection or directory.
e.g. I was hoping for something like
searchJSON = {
  title: { $word: "test" },
  description: { $word: "desc" }
};

// query
db.documents.query(qb.where(
  qb.collection("collectionName"),
  qb.byExample(searchJSON)
)).result()...
In this case searchJSON could have been built dynamically; for example, title may sometimes be omitted from the search.
This doesn't work, because the query builder allows queryByExample only as the sole query. I'd instead like to build a dynamic search query that is limited to a collection or directory.
At present, I think you would have to express the query with QueryBuilder instead of Query By Example, using:
qb.and([
  qb.collection('collectionName'),
  qb.word('title', 'test'),
  qb.word('description', 'desc')
])
See http://docs.marklogic.com/jsdoc/queryBuilder.html#word
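Since some of your search fields are optional, the clause list can be assembled dynamically before being handed to qb.where; a sketch, reusing the searchJSON shape from the question:

// Start with the collection constraint, then add one clause per present field
var clauses = [qb.collection('collectionName')];
if (searchJSON.title) {
  clauses.push(qb.word('title', searchJSON.title.$word));
}
if (searchJSON.description) {
  clauses.push(qb.word('description', searchJSON.description.$word));
}

db.documents.query(
  qb.where(qb.and(clauses))
).result(function (documents) {
  documents.forEach(function (doc) { console.log(doc.uri); });
});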
That said, it should be possible for the Node.js API to relax that restriction based on the fixes in MarkLogic 9.0-2. Please file an issue at https://github.com/marklogic/node-client-api.
