Apply json patch to mongodb document - python-3.x

I want to implement HTTP PATCH using the Python Flask framework. As input, I would be receiving a JSON Patch like:
[
{ "op": "replace", "path": "/work/title", "value": "Senior Engineer" }
]
My database is MongoDB, and I want to apply the above patch directly to it. For example, below is the document stored in Mongo:
{
  "name": "ABC",
  "age": 25,
  "work": {
    "title": "Engineer",
    "company": "XYZ"
  }
}
After applying the patch, it should be:
{
  "name": "ABC",
  "age": 25,
  "work": {
    "title": "Senior Engineer",
    "company": "XYZ"
  }
}
Could you please help me find a way to implement this?
According to my research, I found a Python module, python-json-patch, which helps to apply a JSON Patch to a JSON object. So we would need to fetch the document from MongoDB, apply the patch using the above module, and then replace the document in MongoDB. Basically, this ends up behaving like a PUT rather than a PATCH.
One more approach I thought of is to parse the JSON Patch myself, construct the corresponding update document, and apply it to MongoDB using $set. But this approach is naive and not efficient.
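For illustration, here is a rough sketch of what I mean by that second approach (the database/collection names, the route, and the restriction to "replace"/"add" ops are just assumptions):
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
collection = MongoClient()["mydb"]["people"]  # assumed database/collection names

@app.route("/people/<name>", methods=["PATCH"])
def patch_person(name):
    # e.g. [{"op": "replace", "path": "/work/title", "value": "Senior Engineer"}]
    patch = request.get_json()
    set_fields = {}
    for op in patch:
        if op["op"] not in ("replace", "add"):
            return jsonify({"error": "unsupported op: %s" % op["op"]}), 400
        # "/work/title" -> "work.title" (array indices and escaped characters not handled)
        set_fields[op["path"].lstrip("/").replace("/", ".")] = op["value"]
    collection.update_one({"name": name}, {"$set": set_fields})
    return "", 204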
So, please suggest a good way to implement HTTP PATCH using JSON Patch, applying it directly to the MongoDB document.

Related

JMeter: Connect to PostgreSQL in JSR223 using Groovy and then compare values from multiple tables in DB with API response

Sorry for the long post, but I really need some guidance here. I need to compare values from an API response with the values from multiple tables in the DB.
Currently, I am doing it as follows:
Use a JDBC Connection Configuration to connect to the Postgres DB and then use the JDBC Sampler to execute queries. I use it three times to query 3 different tables. I store this data in variables (let's call them DBVariables). Please see this image for the current JMeter setup: https://i.stack.imgur.com/GZJyF.png
In a JSR223 Assertion, I have written code that takes data from the various DBVariables and compares it against the API response.
However, my issue is that the API response can have an array of records, each with nested arrays inside (please see the API response sample below), and these array elements can be sorted in any order. This is where I have issues.
I was wondering what would be the most efficient way of writing this JSR Assertion to validate all data elements returned by the API are the same as what is in the DB.
I am very new to Groovy, but I think if I can query the DB inside the JSR223 assertion (instead of using the JDBC Sampler), then the comparison can be done by storing the API response in one map and the DB response in another map, sorting both, and comparing the items.
My questions are:
How can I connect to PostgreSQL using Groovy and then execute query statements in it? I have not done that before and was hoping someone could provide sample code.
How can I store the API and DB responses in maps, sort them, and compare them in Groovy?
The API response is of the following type:
{
  "data": {
    "response": {
      "employeeList": [
        {
          "employeeNumber": "11102",
          "addressList": [
            {
              "addrType": "Home",
              "street_1": "123 Any street"
            },
            {
              "addrType": "Alternate",
              "street_1": "123 Any street"
            }
          ],
          "departmentList": [
            {
              "deptName": "IT"
            },
            {
              "deptName": "Finance"
            },
            {
              "deptName": "IT"
            }
          ]
        },
        {
          "employeeNumber": "11103",
          "addressList": [
            {
              "addrType": "Home",
              "street_1": "123 Any street"
            },
            {
              "addrType": "Alternate",
              "street_1": "123 Any street"
            }
          ],
          "departmentList": [
            {
              "deptName": "IT"
            },
            {
              "deptName": "Finance"
            },
            {
              "deptName": "IT"
            }
          ]
        }
      ]
    }
  }
}
Have you seen the Working with a relational database chapter of the Groovy documentation? Alternatively, you can obtain a Connection instance from the JDBC Connection Configuration element like:
def connection = org.apache.jmeter.protocol.jdbc.config.DataSourceElement.getConnection('your-pool-name')
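For the "connect and query" part, a minimal Groovy sketch using groovy.sql.Sql (the JDBC URL, credentials and query are placeholders, the PostgreSQL driver jar must be on JMeter's classpath, and log is the logger available inside JSR223 elements):
import groovy.sql.Sql

// Placeholder connection details - adjust to your environment
def sql = Sql.newInstance('jdbc:postgresql://localhost:5432/mydb', 'user', 'password', 'org.postgresql.Driver')
try {
    // Placeholder query - returns a List of GroovyRowResult (each row behaves like a Map)
    def rows = sql.rows('SELECT dept_name FROM departments WHERE employee_number = ?', ['11102'])
    rows.each { row -> log.info(row.dept_name.toString()) }
} finally {
    sql.close()
}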
With regards to "sort" There is DefaultGroovyMethods class which provides sort() function for any "sortable" entity. With regards to "compare" - we don't know how the object from the database looks like hence cannot provide a comprehensive solution.
Maybe an easier option would be converting the response from the JDBC Sampler to JSON using JsonBuilder; once you have two JSON structures, you can use a library like JSONassert, which doesn't care about order and depth.
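As a rough illustration of that idea in a JSR223 Assertion (rows is a hypothetical list of rows obtained from the database, and the JSONassert jar needs to be in JMeter's lib folder):
import groovy.json.JsonBuilder
import org.skyscreamer.jsonassert.JSONAssert
import org.skyscreamer.jsonassert.JSONCompareMode

// rows is hypothetical - e.g. the result of sql.rows(...) or data rebuilt from the JDBC variables
def dbJson = new JsonBuilder([data: [response: [employeeList: rows]]]).toString()
def apiJson = prev.getResponseDataAsString()   // the API response the assertion is attached to

// LENIENT mode ignores the order of array elements and extra fields in the actual JSON
JSONAssert.assertEquals(dbJson, apiJson, JSONCompareMode.LENIENT)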
You haven't asked, but if you're "very new to groovy", maybe it is worth extracting individual values from the API using the JSON Extractor, doing the same for the database with the JDBC elements, and comparing individual JMeter Variables using a Response Assertion?

How to add a file to the 'Evidence' field using Cucumber JSON Multipart in the jenkinsci/xray-connector-plugin

I'd like to add a file to the Evidence for a new test execution.
How do I upload a file to the Evidence input using Cucumber JSON Multipart?
Xray plugin repo: jenkinsci/xray-connector-plugin
Here is my multipart JSON:
{
  "fields": {
    "project": {
      "key": "${projectKey}"
    },
    "summary": "Test Execution for Cucumber results (Generated by job: ${BUILD_TAG})",
    "description": "Test Execution for Cucumber results (Generated by job: ${BUILD_TAG})",
    "issuetype": {
      "id": "12453"
    }
  },
  "xrayFields": {
    "testPlanKey": "${testPlanKey}",
    "environments": [
      "${env}"
    ]
  }
}
What should I add to this JSON to upload a file into the Evidence input?
That Cucumber JSON multipart endpoint (similarly to other endpoints) doesn't provide a way to specify that field explicitly.
The Xray JSON format (with either the standard or multipart endpoint) will allow you to specify that information; however, this format is not the most adequate one if you want to upload Cucumber-related results.
You have two options, since in this case I guess we can exclude using the Xray JSON format:
a) use Cucumber JSON (either the standard or multipart endpoint) and take advantage of the fact that Xray can process attachments (e.g. screenshots) that you embed in the Cucumber step implementations (please see the example in the Xray docs where it is briefly mentioned). You should look at Cucumber's API to see how that can be done.
As we speak, in Java, I think you have to do something like this in a hook:
scenario.attach(data, "image/png", "My screenshot");
Example here.
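For reference, a slightly fuller sketch of such a hook (assuming the cucumber-java 5+ API; takeScreenshot() is a hypothetical helper returning the image bytes):
import io.cucumber.java.After;
import io.cucumber.java.Scenario;

public class Hooks {

    @After
    public void attachEvidence(Scenario scenario) {
        if (scenario.isFailed()) {
            // takeScreenshot() is hypothetical - delegate to your WebDriver/device driver
            byte[] screenshot = takeScreenshot();
            scenario.attach(screenshot, "image/png", "My screenshot");
        }
    }

    private byte[] takeScreenshot() {
        return new byte[0]; // placeholder
    }
}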
b) you can use your current approach and then use the GraphQL API to update the evidence on the corresponding Test Run. This will be tricky to implement, as you'll need to know the Test Execution issue key, then its id, and then use the proper GraphQL request to update the evidence on a specific Test Run inside that Test Execution... this is not trivial though.

Mongoose, ExpressJs - exposing mongo documents

Every example of using Express and Mongoose looks like this:
const contentTypes = await ContentType.find().sort({createdAt: -1});
res.json(contentTypes);
But in this scenario we are returning the whole document via the REST API (even the Mongoose version field '__v'). I think it would be good practice to describe an interface like this:
export class ContentTypeEntry {
id: string;
name: string;
}
and convert the Mongoose type to this interface object and return this DTO. I'm just starting out in the Node.js ecosystem, so maybe returning ORM objects directly is normal in this ecosystem?
How are you dealing with Mongoose objects and REST endpoints?
I'm not entirely sure if I got the question right, but this is what my response object looks like:
// GET /api/products/1010
{
  "meta": {
    "type": "success",
    "code": 200,
    "message": "Retrieved product number 1010"
  },
  "data": {
    "id": 1010,
    "name": "Apple iPhone X",
    "description": "Yada yada",
    "price": 1000
  }
}
This just separates the metadata and the actual returned data, to make it easier for whoever is consuming the API to handle errors. I also modify the JSON object to return only the required data and omit things like the version field from the response.
I think this is a great question, even if it's a bit broad. There are lots of frameworks that build on top of Node/Express (for example, LoopBack) to act as the glue between your data layer and your HTTP layer (REST, API, whatever you want to call it), deciding what you want to actually exist at a given endpoint. Happy to share other thoughts here if you have more specific questions.
You could also stay fairly lean and override the toJSON method of your Mongoose object (see this for an example). This is probably in line with your example of having an additional class that your object will conform to before being delivered to the end user, but personally I'd prefer to keep that within my object definition (the Mongoose model file).
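As a small sketch of that toJSON approach (the schema and field names here are just an example):
const mongoose = require('mongoose');

const contentTypeSchema = new mongoose.Schema({ name: String }, {
  toJSON: {
    versionKey: false,             // drop __v from the serialized output
    transform: (doc, ret) => {
      ret.id = ret._id.toString(); // expose a plain string id instead of the ObjectId
      delete ret._id;
      return ret;
    }
  }
});

module.exports = mongoose.model('ContentType', contentTypeSchema);
With something like this in place, res.json(contentTypes) from the earlier example would emit only the fields you shaped in the transform.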
I suppose at the end of the day, it's a question of how much control you need, how big the project is going to be and what its future needs will be. For microservices, you may find that Express + Mongoose and a couple of specific strategies will solve your concerns.

JSON object selector to describe the criteria for querying documents in Azure Cosmos/ Document DB

I am using a JavaScript Azure Function to bind to Cosmos DB (Document DB) and query documents within a collection. I would like the SELECT query to be formed based on a JSON object coming in the request body. IBM Cloudant provides a feature wherein you can pass a JSON object (selector) to describe the criteria for selecting documents. How do I achieve the same in Azure?
The JSON selector looks like this:
{
  "selector": {
    "id": {
      "$gt": 0
    },
    "USERS": {
      "username": "Jack",
      "department": "HR"
    }
  }
}
The sql-from-mongo npm package provides conversion of expressions similar to these into CosmosDB SQL. The module can be easily loaded in your Azure Functions or it can even be loaded into a sproc with a little bit of manipulation.
Full disclosure: I'm the author of the npm package.

Return only _source from a search

Is it possible to only retrieve the _source document(s) when I execute a search query with the (official) nodejs-elasticsearch library? According to the documentation, there seems to be a way, sort of:
Use the /{index}/{type}/{id}/_source endpoint to get just the _source field of the document, without any additional content around it. For example:
curl -XGET 'http://localhost:9200/twitter/tweet/1/_source'
And the corresponding API call in the nodejs library is:
client.getSource([params, [callback]])
However, this method only seems to be able to retrieve documents on an ID basis. I need to issue a full search body (with filters and query_strings and whatnot), which this method doesn't support.
I'm running ES 1.4
You can use "fields" for this. See a simplified example below. Go ahead and customize your query as per your requirement:
{
  "fields": [
    "_source"
  ],
  "query": {
    "match_all": {}
  }
}
The values of the fields _index, _type, _id and _score will always be present in the response of the Search API.
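Since that metadata can't be removed from the response, another option is simply to run the search as usual and strip everything but _source on the client. A sketch with the legacy Node.js elasticsearch client (the host, index/type and query are placeholders):
const elasticsearch = require('elasticsearch');
const client = new elasticsearch.Client({ host: 'localhost:9200' });

client.search({
  index: 'twitter',
  type: 'tweet',
  body: {
    query: { match_all: {} } // replace with your filters / query_string
  }
}, (err, resp) => {
  if (err) { return console.error(err); }
  // Each hit still carries _index/_type/_id/_score; keep only the source documents
  const sources = resp.hits.hits.map(hit => hit._source);
  console.log(sources);
});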
