I've been trying to find some documentation for jsonpatch==1.16 on how to make PATCH paths case-insensitive. The problem is that:
PATCH /users/123
[
{"op": "add", "path": "/firstname", "value": "Spammer"}
]
This seems to mandate that the DB (MySQL / MariaDB) column is also exactly firstname, and not for example Firstname or FirstName. When I change the path in the JSON to /FirstName, which is what the DB column is called, the patch works just fine. But I'm not sure you are supposed to use CamelCase in the JSON in this case; it seems a bit non-standard.
How can I make jsonpatch at least case-insensitive? Or alternatively, is there some way to insert some mapping in the middle, for example like this:
def users_mapping(self, path):
    select = {
        "/firstname": "FirstName",
        "/lastname": "last_name",  # Just an example
    }
    return select.get(path, None)
Using Python 3.5, SQLAlchemy 1.1.13 and Flask-SQLAlchemy 2.2
Well, the answer is: yes, you can add a mapping. Here's my implementation with some annotations:
The endpoint handler (eg. PATCH /news/123):
def patch(self, news_id):
    """Change an existing News item partially using an instruction-based JSON,
    as defined by: https://tools.ietf.org/html/rfc6902
    """
    news_item = News.query.get_or_404(news_id)
    self.patch_item(news_item, request.get_json())
    db.session.commit()
    # asdict() comes from the dictalchemy method make_class_dictable(news)
    return make_response(jsonify(news_item.asdict()), 200)
The method it calls:
# news = the db.Model for News, from SQLAlchemy
# patchdata = the JSON from the request, like this:
# [{"op": "add", "path": "/title", "value": "Example"}]
def patch_item(self, news, patchdata, **kwargs):
    # Map the values to DB column names
    mapped_patchdata = []
    for p in patchdata:
        # Replace eg. /title with /Title
        p = self.patch_mapping(p)
        mapped_patchdata.append(p)
    # This follows the normal JsonPatch procedure
    data = news.asdict(exclude_pk=True, **kwargs)
    # The only difference is that I pass the mapped version of the list
    patch = JsonPatch(mapped_patchdata)
    data = patch.apply(data)
    news.fromdict(data)
And the mapping implementation:
def patch_mapping(self, patch):
    """This is used to map a patch "path" or "from" to a custom value.
    Useful for when the patch path/from is not the same as the DB column name.
    Eg.
    PATCH /news/123
    [{ "op": "move", "from": "/title", "path": "/author" }]
    If the News column is "Title", having "/title" would fail to patch
    because the case does not match. So the mapping converts this:
    { "op": "move", "from": "/title", "path": "/author" }
    To this:
    { "op": "move", "from": "/Title", "path": "/Author" }
    """
    # You can define arbitrary column names here.
    # As long as the mapped value matches the DB column, the patch will work.
    mapping = {
        "/title": "/Title",
        "/contents": "/Contents",
        "/author": "/Author"
    }
    mutable = deepcopy(patch)
    for prop in patch:
        if prop == "path" or prop == "from":
            # Fall back to the original value when there is no mapping for it,
            # so unmapped paths are not silently replaced with None
            mutable[prop] = mapping.get(patch[prop], patch[prop])
    return mutable
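For reference, the mapping step can be sketched on its own like this. This is a minimal, standalone version with a hypothetical mapping table (MAPPING and map_patch are illustrative names, not from the answer), no database or Flask involved:

```python
from copy import deepcopy

# Hypothetical path-to-column mapping, in the spirit of patch_mapping() above
MAPPING = {"/title": "/Title", "/author": "/Author"}

def map_patch(patchdata):
    """Rewrite the "path" and "from" members of each patch operation."""
    mapped = []
    for op in patchdata:
        op = deepcopy(op)
        for prop in ("path", "from"):
            if prop in op:
                # Fall back to the original value when no mapping exists
                op[prop] = MAPPING.get(op[prop], op[prop])
        mapped.append(op)
    return mapped

patch = [{"op": "move", "from": "/title", "path": "/author"}]
print(map_patch(patch))
# [{'op': 'move', 'from': '/Title', 'path': '/Author'}]
```

The mapped list can then be handed to `JsonPatch(...)` exactly as in `patch_item()` above.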
Currently, we have a PATCH endpoint validated with jsonschema, and now we want to limit the operations that can be used when applying the patch.
The patch endpoint looks like this:
def patch(self, name, version):
    ...
    schema = Schema.get(name, version)
    serialized_schema = schema.patch_serialize()
    ...
    data = request.get_json()
    patched_schema = apply_patch(serialized_schema, data)
    serialized_data, errors = update_schema_serializer.load(
        patched_schema, partial=True)
    ...
data example:
[
    { "op": "replace", "path": "/config/notifications/", "value": "foo" },
    { "op": "add", "path": "/config/", "value": {} },
    { "op": "remove", "path": "/config/notifications/" }
]
Here we want to allow only the add, remove and replace operations. Is there any way to apply a patch only for these operations and not for copy, move and test?
In your schema definition, define op as an enum with only the options that you want
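For instance, the restriction could look like the jsonschema-style fragment below; the plain-Python check is a hedged sketch of the same idea for validating by hand (PATCH_SCHEMA, ALLOWED_OPS and check_ops are hypothetical names for this example):

```python
# jsonschema-style fragment: "op" as an enum of the allowed operations
PATCH_SCHEMA = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "op": {"enum": ["add", "remove", "replace"]},
        },
        "required": ["op", "path"],
    },
}

# The same restriction enforced by hand, without a validator library
ALLOWED_OPS = {"add", "remove", "replace"}

def check_ops(patchdata):
    for op in patchdata:
        if op.get("op") not in ALLOWED_OPS:
            raise ValueError(f"operation {op.get('op')!r} is not allowed")

check_ops([{"op": "add", "path": "/config/", "value": {}}])  # passes silently
```

With a validator such as the jsonschema package, any document containing `copy`, `move` or `test` would then fail validation before the patch is applied.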
This is my code, where 'schema' and 'config' are heavily nested dictionaries. The function works as intended when run separately:
def find_config_difference(schema, config, key_name=None):
    for k in schema:
        try:
            if "default" in schema[k] and k in config:
                if schema[k]["default"] != config[k]:
                    out_of_date_default.append(f"key {k} in {key_name} has out of date default value.")
                    schema[k] = schema[k]["default"]
        except TypeError:
            pass
        finally:
            if k in config:
                if isinstance((schema[k] and config[k]), dict):
                    find_config_difference(schema[k], config[k], key_name=k)
            else:
                yield f"{k} in {key_name} is missing/extra in config"

output = list(find_config_difference(schema, config))
This is my pytest code. When run through pytest it does not call the function recursively, and the 'for' loop only goes through the outermost keys of 'schema':
import unittest
from unittest.mock import patch
from config_compare import *
import ast

config = ast.literal_eval(open('config.json', 'r').read())
schema = ast.literal_eval(open('config_schema.json', 'r').read())

class Test_config_compare(unittest.TestCase):
    def test_find_config_difference(self):
        missing = list(find_config_difference(schema, config, key_name=None))
        length = len(missing)
        print(length)
Here is a section of 'schema', which is similar to 'config'. The loop only iterates through the outermost keys ($schema, additionalProperties, definitions, button_content):
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "additionalProperties": False,
    "definitions": {
        "color_hex": {"pattern": "^#([A-Fa-f0-9]{6})$", "type": "string"},
        "palette": {
            "additionalProperties": False,
            "properties": {
                "background": {"$ref": "#/definitions/color_hex", "default": "#FFFFFF"},
                "primary": {
                    "rules": {"testTR": 12, "tier": 2},
                },
            },
        },
    },
    "button_content": {
        "additionalProperties": False,
        "properties": {
            "accessibilityLabel": {"type": "string"},
        },
        "required": ["accessibilityLabel", "value"],
        "type": "object",
    },
}
When I removed the yield it works fine. But I don't want to remove it and store the strings in a variable instead. Is there a way to work around this?
Your recursive function cannot possibly work. It's a generator function, but when you make the recursive call, you're not iterating over the returned generator object. You probably want to yield from the recursive call:
        finally:
            if k in config:
                if isinstance((schema[k] and config[k]), dict):
                    yield from find_config_difference(schema[k], config[k], key_name=k)
If you want to be selective about which values yielded by the recursion get yielded, you could write your own for loop iterating on the recursive result with whatever logic you want in it, yielding or not as you desire.
All that said, the fact that your function is both a generator that yields values, and it has side effects (such as updating out_of_date_default and some of the schema[k] values) seems like a dubious design. You should probably make your function only do one of those things (either modify things in place, or yield new values, not both).
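To see the difference in isolation, here is a minimal illustration (walk_keys is a made-up helper, not from the question): without yield from, the recursive call merely creates a generator object and discards it, so nothing from the nested levels is ever yielded.

```python
def walk_keys(d, prefix=""):
    """Yield a dotted path for every leaf value in a nested dict."""
    for k, v in d.items():
        path = f"{prefix}.{k}" if prefix else k
        if isinstance(v, dict):
            # A bare "walk_keys(v, path)" here would do nothing visible;
            # "yield from" re-yields everything the recursion produces.
            yield from walk_keys(v, path)
        else:
            yield path

print(list(walk_keys({"a": {"b": 1, "c": {"d": 2}}, "e": 3})))
# ['a.b', 'a.c.d', 'e']
```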
I am currently working on a Python program that queries the public GitHub API to get a GitHub user's email address. The response is a huge list containing a lot of dictionaries.
My code so far
import requests
import json

# username = ''
username = 'FamousBern'
base_url = 'https://api.github.com/users/{}/events/public'
url = base_url.format(username)

try:
    res = requests.get(url)
    r = json.loads(res.text)
    # print(r)  # List slicing
    print(type(r))  # List that has a lot of dictionaries
    for i in r:
        if 'payload' in i:
            print(i['payload'][6])
    # matches = []
    # for match in r:
    #     if 'author' in match:
    #         matches.append(match)
    # print(matches)
    # print(r[18:])
except Exception as e:
    print(e)

# data = res.json()
# print(data)
# print(type(data))
# email = data['author']
# print(email)
By manually accessing this URL in the Chrome browser I get the following:
[
    {
        "id": "15069094667",
        "type": "PushEvent",
        "actor": {
            "id": 32365949,
            "login": "FamousBern",
            "display_login": "FamousBern",
            "gravatar_id": "",
            "url": "https://api.github.com/users/FamousBern",
            "avatar_url": "https://avatars.githubusercontent.com/u/32365949?"
        },
        "repo": {
            "id": 332684394,
            "name": "FamousBern/FamousBern",
            "url": "https://api.github.com/repos/FamousBern/FamousBern"
        },
        "payload": {
            "push_id": 6475329882,
            "size": 1,
            "distinct_size": 1,
            "ref": "refs/heads/main",
            "head": "f9c165226201c19fd6a6acd34f4ecb7a151f74b3",
            "before": "8b1a9ac283ba41391fbf1168937e70c2c8590a79",
            "commits": [
                {
                    "sha": "f9c165226201c19fd6a6acd34f4ecb7a151f74b3",
                    "author": {
                        "email": "bernardberbell#gmail.com",
                        "name": "FamousBern"
                    },
                    "message": "Changed input functionality",
                    "distinct": true,
                    "url": "https://api.github.com/repos/FamousBern/FamousBern/commits/f9c165226201c19fd6a6acd34f4ecb7a151f74b3"
                }
            ]
        },
The JSON object is huge as well; I just sliced it. I am interested in getting the email address in the author dictionary.
You're attempting to index into a dict with i['payload'][6], which will raise an error.
My personal preferred way of checking for key membership in nested dicts is using the get method with a default of an empty dict.
import requests
import json

username = 'FamousBern'
base_url = 'https://api.github.com/users/{}/events/public'
url = base_url.format(username)

res = requests.get(url)
r = json.loads(res.text)

# for each dict in the list
for event in r:
    # using .get() means you can chain .get()s for nested dicts
    # and they won't fail even if the key doesn't exist
    commits = event.get('payload', dict()).get('commits', list())
    # also using .get() with an empty list default means
    # you can always iterate over commits
    for commit in commits:
        # email = commit.get('author', dict()).get('email', None)
        # is also an option if you're not sure if those keys will exist
        email = commit['author']['email']
        print(email)
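To see the chained .get() pattern in isolation, here is the same extraction run offline against a small hand-made event (the sample data below is invented for illustration, not taken from the API):

```python
# A trimmed-down stand-in for one event from the API response
sample_event = {
    "type": "PushEvent",
    "payload": {
        "commits": [
            {"author": {"email": "someone@example.com", "name": "someone"}}
        ]
    },
}

# Each .get() falls back to an empty container, so a missing key
# at any level just yields no commits instead of a KeyError
commits = sample_event.get("payload", {}).get("commits", [])
emails = [c.get("author", {}).get("email") for c in commits]
print(emails)  # ['someone@example.com']

# An event without a payload produces an empty list, not an error
print({"type": "WatchEvent"}.get("payload", {}).get("commits", []))  # []
```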
I have the strings below in a list:
'items.find({"repo": "lld-test-helm", "path": "customer-customer", "name": "customer-customer-0.29.4.tgz", "type": "file"})'
'items.find({"repo": "lld-test-docker", "path": "docker.io/ubuntu/18.05", "type": "file"})'
Can you please suggest how to manipulate and print them (using Python 3) in a human-friendly syntax to the pipeline console? For example:
repository: lld-test-helm
chart: customer-customer
version: 0.29.4
repository name: lld-test-docker
image: docker.io/ubuntu
tag: 18.05
You can use the built-in eval() to turn your string into an actual dict (for untrusted input, ast.literal_eval is the safer choice).
Of course you need to get rid of the items.find( part and the closing bracket ).
If the string always starts with items.find(, you can do it this way:
a = 'items.find({"repo": "lld-test-docker", "path": "docker.io/ubuntu/18.05", "type": "file"})'
a = a[11:-1]
or just use replace:
a = a.replace('items.find(', '')[:-1]
then use, as mentioned before, eval():
a = eval(a)
Now you can iterate through the dict:
for key in a:
    print(key, ' : ', a[key])
Example how to parse output to match one from your question:
b = {"repo": "lld-test-docker", "path": "docker.io/ubuntu/18.05", "type": "file"}

for item in b:
    if item == "repo":
        print('repository : ', b[item])
    if item == "path":
        if "ubuntu" in b[item]:
            separator = len('ubuntu') + b[item].find('ubuntu')
            print('image : ', b[item][:separator])
            print('tag : ', b[item][separator+1:])
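Putting the pieces together, here is a sketch that parses both example strings end to end. It uses ast.literal_eval rather than eval() since the braces only contain literals, and the helm-vs-docker branching on the repo suffix is an assumption made for this example (as are the parse_item and describe names):

```python
import ast

def parse_item(s):
    """Strip the items.find( ... ) wrapper and parse the dict literal.
    ast.literal_eval only accepts literals, so it is safer than eval()."""
    return ast.literal_eval(s[len("items.find("):-1])

def describe(item):
    """Render one parsed item in the human-friendly form from the question."""
    if item["repo"].endswith("-helm"):  # assumption: helm repos end in -helm
        path, name = item["path"], item["name"]
        # "customer-customer-0.29.4.tgz" -> "0.29.4"
        version = name[len(path) + 1:-len(".tgz")]
        return [f"repository: {item['repo']}",
                f"chart: {path}",
                f"version: {version}"]
    # otherwise treat the path as image/tag
    image, _, tag = item["path"].rpartition("/")
    return [f"repository name: {item['repo']}",
            f"image: {image}",
            f"tag: {tag}"]

items = [
    'items.find({"repo": "lld-test-helm", "path": "customer-customer", "name": "customer-customer-0.29.4.tgz", "type": "file"})',
    'items.find({"repo": "lld-test-docker", "path": "docker.io/ubuntu/18.05", "type": "file"})',
]
for s in items:
    print("\n".join(describe(parse_item(s))))
```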
I want to get data from this array of JSON objects:
[
    {
        "outgoing_relationships": "http://myserver:7474/db/data/node/4/relationships/out",
        "data": {
            "family": "3",
            "batch": "/var/www/utils/batches/d32740d8-b4ad-49c7-8ec8-0d54fcb7d239.resync",
            "name": "rahul",
            "command": "add",
            "type": "document"
        },
        "traverse": "http://myserver:7474/db/data/node/4/traverse/{returnType}",
        "all_typed_relationships": "http://myserver:7474/db/data/node/4/relationships/all/{-list|&|types}",
        "property": "http://myserver:7474/db/data/node/4/properties/{key}",
        "self": "http://myserver:7474/db/data/node/4",
        "properties": "http://myserver:7474/db/data/node/4/properties",
        "outgoing_typed_relationships": "http://myserver:7474/db/data/node/4/relationships/out/{-list|&|types}",
        "incoming_relationships": "http://myserver:7474/db/data/node/4/relationships/in",
        "extensions": {},
        "create_relationship": "http://myserver:7474/db/data/node/4/relationships",
        "paged_traverse": "http://myserver:7474/db/data/node/4/paged/traverse/{returnType}{?pageSize,leaseTime}",
        "all_relationships": "http://myserver:7474/db/data/node/4/relationships/all",
        "incoming_typed_relationships": "http://myserver:7474/db/data/node/4/relationships/in/{-list|&|types}"
    }
]
What I tried is:
def messages=[];
for ( i in families ) {
    messages?.add(i);
}
How can I get families.data.name into the messages array?
Here is what I tried:
def messages=[];
for ( i in families ) {
    def map = new groovy.json.JsonSlurper().parseText(i);
    def msg = map*.data.name;
    messages?.add(i);
}
return messages;
and get this error :
javax.script.ScriptException: groovy.lang.MissingMethodException: No signature of method: groovy.json.JsonSlurper.parseText() is applicable for argument types: (com.tinkerpop.blueprints.pgm.impls.neo4j.Neo4jVertex) values: [v[4]]\nPossible solutions: parseText(java.lang.String), parse(java.io.Reader)
Or use Groovy's native JSON parsing:
def families = new groovy.json.JsonSlurper().parseText( jsonAsString )
def messages = families*.data.name
Since you edited the question to give us the information we needed, you can try:
def messages=[];
families.each { i ->
    def map = new groovy.json.JsonSlurper().parseText( i.toString() )
    messages.addAll( map*.data.name )
}
messages
Though it should be said that the toString() method in com.tinkerpop.blueprints.pgm.impls.neo4j.Neo4jVertex makes no guarantee of producing valid JSON... You should probably be using the getProperty( name ) function of Neo4jVertex rather than relying on a side effect of toString().
What are you doing to generate the first bit of text? You state it is JSON but make no mention of how it is created.
Use JSON-lib.
GJson.enhanceClasses()
def families = json_string as JSONArray
def messages = families.collect {it.data.name}
If you are using Groovy 1.8, you don't need JSON-lib anymore as a JsonSlurper is included in the GDK.
import groovy.json.JsonSlurper
def families = new JsonSlurper().parseText(json_string)
def messages = families.collect { it.data.name }