My main intention is to dynamically choose the Employees collection while using PyMongo. I was able to do it for insert commands, but I am facing problems with the find command: no matter what I do, exec() always returns None. Yet if I copy the generated string and run it directly, the value gets assigned to the variable.
Can someone shed some light on why exec is unable to return a result set or assign the result set to a variable?
db.Employees.update_one(
    {"id": criteria},
    {
        "$set": {
            "name": name,
            "age": age,
            "country": country
        }
    }
)
from pymongo import MongoClient
import ast
client = MongoClient('localhost:27017')
db = client.TextClassifier
The insert works:
def mongo_insert_one(COLLECTION_NAME, JSON):
    QUERY = """db.%(COLLECTION_NAME)s.insert_one( %(JSON)s )""" % locals()
    exec(QUERY)
def mongo_retrive(COLLECTION_NAME, JSON):
    resultset = None
    query = """resultset = db.%(COLLECTION_NAME)s.find( %(JSON)s )""" % locals()
    exec(query)
    return resultset
print(mongo_retrive('hungry_intent', "{'Intent':'Hungry'}"))
Neither would this work:
resultset = exec("""db.%(COLLECTION_NAME)s.find( %(JSON)s )""" % locals())
And this one fails for an entirely different reason; it says: If you meant to call the 'locals' method on a 'Database' object it is failing because no such method exists.
resultset = db.locals()[COLLECTION_NAME].find()
PyMongo Database objects support bracket notation to access a named collection, and PyMongo's included bson module provides a much better JSON decoder than "eval":
from bson import json_util
COLLECTION_NAME = 'hungry_intent'
JSON = '{"Intent": "Hungry"}'  # json_util.loads needs real JSON, i.e. double-quoted strings
print(list(db[COLLECTION_NAME].find(json_util.loads(JSON))))
This will be faster and more reliable than your "eval" code, and also prevents the injection attack that your "eval" code is vulnerable to.
If you can avoid using JSON at all it could be preferable:
COLLECTION_NAME = 'hungry_intent'
QUERY = {'Intent':'Hungry'}
print(list(db[COLLECTION_NAME].find(QUERY)))
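As for why your exec() attempts kept yielding None: in Python 3, exec cannot rebind local variables inside a function, so the assignment in the executed string lands in a temporary namespace, and exec itself always returns None. If you really must use exec, pass an explicit namespace dict and read the result back out of it; a minimal sketch of that workaround:

namespace = {'db': db}
# The assignment happens inside the supplied namespace, not in the function's locals
exec("resultset = db.%s.find(%s)" % (COLLECTION_NAME, JSON), namespace)
resultset = namespace['resultset']

But the bracket-notation version above makes all of this unnecessary.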
I've been playing around with some "weirder" querying of DynamoDB with reserved words using the boto3.resource method, and I came across a pretty annoying issue that I haven't been able to resolve for quite some time (always the same error, sigh) and can't seem to find the answer to anywhere.
My code is the following:
import json
import logging

import boto3
from boto3.dynamodb.conditions import Key

logger = logging.getLogger()
logger.setLevel(logging.INFO)

TABLE_NAME = "some-table"


def getItems(record, table=None):
    if table is None:
        dynamodb = boto3.resource("dynamodb")
        table = dynamodb.Table(TABLE_NAME)
    record = str(record)
    get_item = table.query(
        KeyConditionExpression=Key("#pk").eq(":pk"),
        ExpressionAttributeNames={"#pk": "PK"},
        ExpressionAttributeValues={":pk": record},
    )
    logger.info(
        f"getItem parameters\n{json.dumps(get_item, indent=4, sort_keys=True, default=str)}"
    )
    return get_item


if __name__ == "__main__":
    record = 5532941
    getItems(record)
It's nothing fancy; as I mentioned, I'm just playing around, but I'm constantly getting the following error no matter what I try:
"An error occurred (ValidationException) when calling the Query operation: Value provided in ExpressionAttributeNames unused in expressions: keys: {#pk}"
As far as I understand, in order to "replace" reserved keys/values with something arbitrary, you put them into ExpressionAttributeNames and ExpressionAttributeValues, but I can't wrap my head around why it's telling me that this key is not used.
I should mention that a record whose primary key matches the value in the record var does exist in DynamoDB.
Any suggestions?
Thanks
If you're using Key, just provide the actual attribute name and value and don't be fancy with substitution. See this example:
https://github.com/aws-samples/aws-dynamodb-examples/blob/master/DynamoDB-SDK-Examples/python/WorkingWithQueries/query_equals.py
If you're writing an equality expression as a single string, then you need the substitution. See this example:
https://github.com/aws-samples/aws-dynamodb-examples/blob/master/DynamoDB-SDK-Examples/python/WorkingWithQueries/query-consistent-read.py
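To make the two options concrete, here is a sketch of both against the query from the question (table and key names taken from the question's code):

from boto3.dynamodb.conditions import Key

# Option 1: Key conditions take the real attribute name and value directly;
# no ExpressionAttributeNames/Values at all.
get_item = table.query(
    KeyConditionExpression=Key("PK").eq(record)
)

# Option 2: a plain string expression, where the #/: placeholders
# (and therefore the substitution maps) are required.
get_item = table.query(
    KeyConditionExpression="#pk = :pk",
    ExpressionAttributeNames={"#pk": "PK"},
    ExpressionAttributeValues={":pk": record},
)

The error in the question comes from mixing the two styles: Key("#pk").eq(":pk") treats "#pk" as a literal attribute name, so the "#pk" entry in ExpressionAttributeNames is never referenced in the generated expression.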
I need to filter web server requests and build a query for PyMongo; it's not so simple, since I need "and"/"or" functionality across multiple fields.
I have filtered the GET request, got the parameters, and built the string to be passed to find(), but it throws an error. I have identified the cause: I am forming a string to pass to the function, and since it is a string and not actually a dict, it throws an error. What is the right way of doing this?
Actually, I have to get something like: {$and:[{Title:{"$regex":"Hong Kong"}},{Url:{"$regex":"hong"}}]}{'_id':0, 'Body':0}
The GET request I am sending is: http://127.0.0.1:5000/getRequest?Title="Hong Kong protest"&Url="hong". The code below produces exactly the required string, but it throws an error because find() is not supposed to receive a string. Please help.
@app.route('/getRequest', methods=['GET'])
def request():
    global connection
    args = request.args
    if len(args) > 1:
        search_str = ""
        for key, val in args.items():
            search_str += '{' + key + ':{"$regex":' + str(val) + '}},'
        search_str = search_str[:-1]
        display_dict = {'id': 0, 'Body': 0}
        final_search_str = "{$and:[" + search_str + "]},{'_id':0, 'Body':0}"
        # return(final_search_str)
        # query_str = request.args.get('query_string')
        db = connection['test']
        collection = db['collect1']
        output = []
        for s in collection.find(final_search_str):
            output.append({'Title': s['Title'], 'Url': s['Url']})
It should be a dict that gets passed to the function. Is there a better way to do this complex query via PyMongo?
You can do this using the re module and the bson.regex.Regex class:
http://api.mongodb.com/python/current/api/bson/regex.html
import re
from bson.regex import Regex

query = {}
for key, val in args.items():
    pattern = re.compile(val)
    regex = Regex.from_native(pattern)
    query[key] = regex

for s in collection.find(query):
    output.append({'Title': s['Title'], 'Url': s['Url']})
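If you also need the $and combination and the projection from the question, both can be expressed as plain dicts and passed straight to find(); a minimal sketch using the field names from the question:

query = {'$and': [
    {'Title': {'$regex': 'Hong Kong'}},
    {'Url': {'$regex': 'hong'}},
]}
# Second argument to find() is the projection
projection = {'_id': 0, 'Body': 0}
for s in collection.find(query, projection):
    output.append({'Title': s['Title'], 'Url': s['Url']})

find() takes the filter as its first argument and the projection as its second, so no string building is needed at all.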
I want to encrypt the data in PostgreSQL. I am using the two methods below to insert the data, one using the ORM, the other without the ORM.
db = sql.create_engine(connection_string)
metadata = sql.schema.MetaData(bind=db, reflect=True)
inputStringtable = sql.Table('person_info', metadata, autoload=True)

######Using ORM########
class RowInputString(object):
    pass

orm.mapper(RowInputString, inputStringtable)

Sess = orm.sessionmaker(bind=db)
session = Sess()

inputTable = RowInputString()
inputTable.person_id = personId
inputTable.person_name = personName

session.add(inputTable)
session.commit()
################################

######Not using ORM######
def inserting_data(personId, personName):
    insertData = inputStringtable.insert().values(person_id=personId, person_name=personName)
    conn = db.connect()
    conn.execute(insertData)

inserting_data(personId, personName)
I came across the snippet below for encrypting the data as it is sent to the database:
INSERT INTO users(login, passwd)
VALUES('my_login', crypt('my_password', gen_salt('md5')));
I find it a little difficult to see how I can use this snippet in my code.
For general encryption, you can use the EncryptedType type from the SQLAlchemy-Utils package.
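A minimal sketch of how that could look for the person_info table from the question (the declarative model and secret_key are assumptions, not part of the original code):

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy_utils import EncryptedType

Base = declarative_base()
secret_key = 'change-me'  # hypothetical key; manage it outside the source

class PersonInfo(Base):
    __tablename__ = 'person_info'
    person_id = Column(Integer, primary_key=True)
    # Values are encrypted before they reach PostgreSQL and
    # decrypted transparently when loaded back.
    person_name = Column(EncryptedType(String, secret_key))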
For password hashing you can define a custom type in SQLAlchemy:
https://github.com/sqlalchemy/sqlalchemy/wiki/DatabaseCrypt
This uses bind_expression of the TypeDecorator API to map the passed-in column value to an expression involving built-in database functions (gen_salt and crypt).
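Condensed from the idea on that wiki page, a sketch of such a type, assuming the pgcrypto extension is installed in PostgreSQL:

from sqlalchemy import String, func
from sqlalchemy.types import TypeDecorator

class PasswordType(TypeDecorator):
    """Hashes bound values with pgcrypto's crypt() inside the database."""
    impl = String

    def bind_expression(self, bindvalue):
        # Rewrite the bind parameter so PostgreSQL evaluates
        # crypt(<value>, gen_salt('md5')) before storing it.
        return func.crypt(bindvalue, func.gen_salt('md5'))

With passwd declared as a PasswordType column, an insert such as inputStringtable.insert().values(passwd='my_password') renders essentially the crypt(...) snippet from the question.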
I'm rather new to the whole ORM topic, and I've already searched forums and docs.
The question is about a Flask application with SQLAlchemy as the ORM for PostgreSQL.
The __init__.py contains the following line:
db = SQLAlchemy()
The created object is referenced in the other files to access the DB.
There is a save function for the model:
def save(self):
    db.session.add(self)
    db.session.commit()
and also an update function:
def update(self):
    for var_name in self.__dict__.keys():
        if var_name not in ('_sa_instance_state', 'id', 'foreign_id'):
            # Workaround for JSON update problem
            flag_modified(self, var_name)
    db.session.merge(self)
    db.session.commit()
The problem occurs when I'm trying to save a new object. The save function writes it to the DB, and it's visible when querying the DB directly (psql, etc.), but a subsequent ORM query like:
model_list = db.session.query(MyModel).filter(MyModel.foreign_id == this_id).all()
gives an empty response.
A call to the update function does work as expected; the new data is visible when requested through the ORM.
I'm always using the same session object for example this:
<sqlalchemy.orm.scoping.scoped_session object at 0x7f0cff68fda0>
If the application is restarted, everything works fine until a new object is created and then requested with the ORM.
An inelegant workaround is using raw SQL like:
model_list = db.session.execute('SELECT * FROM models_table WHERE foreign_id = ' + str(this_id))
which gives a ResultProxy with latest data like this:
<sqlalchemy.engine.result.ResultProxy object at 0x7f0cf74d0390>
I think my problem is a misunderstanding of the session. Can anyone help me?
It turned out that the problem has nothing to do with the session, but with the filter() method:
# Necessary import for passing string input to the filter() function
from sqlalchemy import text

# Solution or workaround
model_list = db.session.query(MyModel).filter(text('foreign_id = ' + str(this_id))).all()
I could not figure out the problem with filter(MyModel.foreign_id == this_id), but that's another issue.
I think this way is better than executing raw SQL.
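If you stay with text(), it is also worth binding the value instead of concatenating it into the SQL string; a small variant of the workaround above (not from the original answer):

from sqlalchemy import text

model_list = (
    db.session.query(MyModel)
    # :fid is a bound parameter, so SQLAlchemy handles quoting/escaping
    .filter(text('foreign_id = :fid').bindparams(fid=this_id))
    .all()
)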
I've recently started using PyMongo as an interface with MongoDB. But I'm having some strange issues when deleting documents from a collection.
Here is an example:
from bson import ObjectId
from pymongo import MongoClient
# Open connection
client = MongoClient(mongo_html)
collection_post = client["MyCollection"].posts
# Delete procedure
_ids_to_delete = [ObjectId("xxxxxxx..."), ..., ObjectId("xxxxxxx...")]
n_to_delete = len(_ids_to_delete)
result = collection_post.delete_many({'_id': {'$in': _ids_to_delete}})
n_delete = result.deleted_count
if n_delete != n_to_delete:
    raise Exception("Well well well...")
Now, I know for a fact that all the documents in _ids_to_delete exist in the database. In fact, if I run the following when the exception is raised:
if n_delete != n_to_delete:
    for _id in _ids_to_delete:
        search_result = collection_post.find({'_id': _id})
It will still find documents that were supposed to be deleted. To get around this, I tried using delete_one() instead and looping, with similar results.
Am I missing something here? Does the fact that another process on an another computer is writing to the same collection at the same time have this effect?