custom pagination of limit and page in Django Rest Framework - python-3.x

I want to create custom pagination for this get_queryset.
get_queryset = Comments.objects.filter(language_post_id=post_in_lang_id,is_post_comment=True).order_by('-created_on')[offset:offset+limit]
I want to change the offset value whenever page_no changes. If someone requests page_no=1, the offset must be 0; for page_no=2, the offset should be 10, and so on. Every time page_no changes, the offset should be updated accordingly.
For example, with ?page_no=3:
get_queryset = Comments.objects.filter(language_post_id=post_in_lang_id,is_post_comment=True).order_by('-created_on')[offset:offset+limit] # [ 20 : 20 + 10 ]
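For reference, the mapping being described is simply offset = (page_no - 1) * limit; a minimal sketch of that arithmetic (the variable names are assumptions for illustration):
limit = 10
page_no = 3
offset = (page_no - 1) * limit  # 20, so the slice becomes [20:30]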

I guess you want to do that in a ListAPIView. If that's the case, you can do this very simply using PageNumberPagination. Just define the page size and the page_query_param you want, and the default paginate_queryset() method will take care of everything for you; you don't have to override it or calculate the offset yourself.
# pagination.py
from rest_framework.pagination import PageNumberPagination


class CustomPagination(PageNumberPagination):
    # Returns 10 elements per page; the page query param is named "page_no"
    page_size = 10
    page_query_param = 'page_no'
# views.py
from rest_framework.generics import ListAPIView
from my_app.pagination import CustomPagination


class MyListView(ListAPIView):
    pagination_class = CustomPagination
    serializer_class = CommentSerializer

    def get_queryset(self):
        post_in_lang_id = '1'  # Retrieve your post_in_lang_id here
        return Comments.objects.filter(language_post_id=post_in_lang_id, is_post_comment=True).order_by('-created_on')
You can also set it as the default paginator by defining DEFAULT_PAGINATION_CLASS in your settings file.
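For example, a minimal sketch of that setting, assuming the class above lives in my_app/pagination.py:
# settings.py
REST_FRAMEWORK = {
    'DEFAULT_PAGINATION_CLASS': 'my_app.pagination.CustomPagination',
}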
Here is a mock of what you would get as a result for the first page using this method:
{
    "count": 20,
    "previous": null,
    "next": "http://localhost:8000/api/comments/?page_no=2",
    "results": [  # List of the first 10 elements
        {
            "id": 1,
            [...]
        },
        [...]
        {
            "id": 10,
            [...]
        },
    ]
}


How do I get a value from a json dict that's not constant?

I'm trying to write an automation script that needs to get the values from the output below. The problem is that CValue does not have a constant length; it can contain anywhere from 1 to x entries. Is there a way I can store each value properly?
{
    'Output': {
        'Name': 'Sample',
        'Version': {
            'Errors': [],
            'VersionNumber': 2,
            'AValue': 'Hello',
            'BValue': ['val:val:BVal'],
            'CValue': [{
                'DValue': 'aaaaa-bbbbb-cccc',
                'Name': 'Sample_Name_1'
            }, {
                'DValue': 'aaaaa-bbbbb-ddddd',
                'Name': 'Sample_Name_2'
            }]
        }
    },
    'RequestId': 'eeeee-fffff-gggg'
}
Right now, I'm doing it inefficiently by storing each value separately. My code looks something like this:
def get_sample_values():
    test_get = command.sdk(xxxx)
    dset_1 = test_get['Output']['Version']['CValue'][0]['DValue']
    dset_2 = test_get['Output']['Version']['CValue'][1]['DValue']
    return dset_1, dset_2
It works, but it's limited to only two dsets. Can you please provide input on how I can do this more efficiently?
The use case is this: I need the DValues for another function that requires them. The format for that request is going to be something like:
Source = {
    'SourceReference': {
        'DataReference': [
            {
                'EValue': 'string, string, string',
                'FValue': DValue1
            },
            {
                'EValue': 'string, string, string',
                'FValue': DValue2
            }
        ]
    }
}
Use a list comprehension to build a list of the desired element from each of the CValue dicts, then return that list.
return [x['DValue'] for x in test_get['Output']['Version']['CValue']]
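If it helps, here is a minimal sketch of feeding that list into the request format from the question; build_source and the 'EValue' placeholder strings are assumptions, not part of the original code:
def build_source(d_values):
    # One DataReference entry per extracted DValue
    return {
        'SourceReference': {
            'DataReference': [
                {'EValue': 'string, string, string', 'FValue': d_value}
                for d_value in d_values
            ]
        }
    }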
Does this work for you?
# either
def get_sample_values():
    test_get = command.sdk(xxxx)
    return test_get['Output']['Version']['CValue']

# or a generator - maybe not that useful here, but possible
def get_sample_values():
    test_get = command.sdk(xxxx)
    yield from test_get['Output']['Version']['CValue']

# Then you can use it
for value in get_sample_values():
    print(value)

# or
print(values[3])

# for the generator
values = list(get_sample_values())
print(values[3])
For more information, see https://realpython.com/introduction-to-python-generators/

Updating multiple worksheets in google spreadsheet

I have some code that looks like this:
from __future__ import print_function
import datetime
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request

SCOPES = ['https://www.googleapis.com/auth/spreadsheets']

# creds is obtained from the OAuth flow (omitted here)
service = build('sheets', 'v4', credentials=creds)

spreadsheet = {
    'properties': {
        'title': 'Data Integrity Report Completed on {}'.format(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
    }
}
spreadsheet = service.spreadsheets().create(body=spreadsheet,
                                            fields='spreadsheetId').execute()
gsheet_id = spreadsheet.get('spreadsheetId')

response_date = service.spreadsheets().values().append(
    spreadsheetId=gsheet_id,
    valueInputOption='RAW',
    range='A1:Z100',
    body=dict(
        majorDimension='ROWS',
        values=miss_occ_df.T.reset_index().T.values.tolist())
).execute()
This code basically creates a Google spreadsheet and appends my dataframe to the first worksheet. What I want is a spreadsheet that has 3 worksheets. I also need to name each worksheet and upload a different dataframe to each one. How can I do this?
You want to achieve the following things.
Create new Spreadsheet.
You want 3 sheets in the created Spreadsheet.
You want to rename the sheets.
Put the values to each sheet.
You want to achieve them using google-api-python-client with Python.
If my understanding is correct, how about this modification? Your goal could in principle be achieved with a single API call, but that would require building the 2-dimensional arrays into the request body of that one call. So in this answer, I would like to propose a method that achieves your goal with 2 API calls. Please think of this as just one of several possible answers.
The flow of this method is as follows.
Flow:
Create a new Spreadsheet.
At the same time, create the 3 sheets (worksheets) with their names, using the create() method.
Put the values into the 3 sheets using the values().batchUpdate() method.
In your case, the values are written to the brand-new Spreadsheet, so they can all be put in with this one method.
Modified script:
Please replace everything below service = build('sheets', 'v4', credentials=creds) with the following, and set the variables for the sheet names and values.
# Please set worksheet names.
sheetNameForWorksheet1 = "sample1"
sheetNameForWorksheet2 = "sample2"
sheetNameForWorksheet3 = "sample3"

# Please set values for each worksheet. Values are 2 dimensional arrays.
valuesForWorksheet1 = miss_occ_df.T.reset_index().T.values.tolist()
valuesForWorksheet2 = miss_occ_df.T.reset_index().T.values.tolist()
valuesForWorksheet3 = miss_occ_df.T.reset_index().T.values.tolist()

spreadsheet = {
    'properties': {
        'title': 'Data Integrity Report Completed on {}'.format(datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
    },
    "sheets": [
        {
            "properties": {
                "title": sheetNameForWorksheet1
            }
        },
        {
            "properties": {
                "title": sheetNameForWorksheet2
            }
        },
        {
            "properties": {
                "title": sheetNameForWorksheet3
            }
        }
    ]
}
spreadsheet = service.spreadsheets().create(body=spreadsheet, fields='spreadsheetId').execute()
gsheet_id = spreadsheet.get('spreadsheetId')

batch_update_values_request_body = {
    "data": [
        {
            "values": valuesForWorksheet1,
            "range": sheetNameForWorksheet1,
        },
        {
            "values": valuesForWorksheet2,
            "range": sheetNameForWorksheet2,
        },
        {
            "values": valuesForWorksheet3,
            "range": sheetNameForWorksheet3,
        }
    ],
    "valueInputOption": "USER_ENTERED"
}
response = service.spreadsheets().values().batchUpdate(spreadsheetId=gsheet_id, body=batch_update_values_request_body).execute()
Note:
This modified script assumes that you are already able to put and get values in a Spreadsheet using the Sheets API.
References:
Method: spreadsheets.values.batchUpdate
Method: spreadsheets.create
First, please confirm whether my understanding of your question is correct. If I misunderstood and this is not the direction you want, I apologize.

load json into excel using office js web addin

If I have a JSON string which looks like this: [{"id":1,"name":"manish"},{"id":1,"name":"John"}]
Can I just use Office JS to simply load it into a table? I saw this: https://learn.microsoft.com/en-us/office/dev/add-ins/excel/excel-add-ins-tables#import-json-data-into-a-table
But then, when I add more columns to my JSON, I will have to make code changes, and the code is not generic enough. I could live with it but was wondering if there is a better way.
You can use Object.keys(obj).length to count the number of properties in one of your JSON objects; something like Object.keys(myObjects[0]).length.
Then get a Range object for the cell that will be the upper-left cell of the table; for example, getRange("A1").
Then use that Range object's getResizedRange method and pass 1 as the first parameter (rows) and pass the number of properties in your JSON object as the second parameter (columns).
Use the Range object that is returned by getResizedRange as the first parameter to the sheet.tables.add method.
Expanding on @RickKirkham's answer and the link in the OP (https://learn.microsoft.com/en-us/office/dev/add-ins/excel/excel-add-ins-tables#import-json-data-into-a-table), I built a "more generic" approach. My issue was that I have no control over the JSON response; I don't know how many rows, columns, etc. I first tried to just use A1 and expected it to expand as needed, but that didn't work.
I did finally get it to work as I wanted below:
// This runs inside the Excel.run(function (context) { ... }) batch as usual
var transactions = [
    {
        "DATE": "2017",
        "MERCHANT": "The Phone Company",
        "CATEGORY": "Communications",
        "AMOUNT": "120"
    },
    {
        "DATE": "2017",
        "MERCHANT": "Southridge Video",
        "CATEGORY": "Entertainment",
        "AMOUNT": "40"
    }
];

var sheet = context.workbook.worksheets.getItem("Sheet1");
// Size the header range to the number of keys in the first object
var rng = sheet.getRangeByIndexes(0, 0, 1, Object.keys(transactions[0]).length);
var expensesTable = sheet.tables.add(rng, true);
expensesTable.name = "ExpensesTable";
expensesTable.getHeaderRowRange().values = [Object.keys(transactions[0])];
for (let i = 0; i < transactions.length; i++) {
    expensesTable.rows.add(null, [Object.values(transactions[i])]);
}
if (Office.context.requirements.isSetSupported("ExcelApi", "1.2")) {
    sheet.getUsedRange().format.autofitColumns();
    sheet.getUsedRange().format.autofitRows();
}
sheet.activate();
PS: Very new to JS/Office JS atm.

Need to get the values of the objects (JSON) which have been created dynamically

How do I get the top-level object keys in Pentaho DI? I have already extracted the other elements like Category, SubCategory, and Section from the following example JSON file. However, I also need to capture the root object keys, i.e. x#chapter#e50de0196d77495d9b50fc05567b4a4b and x#e50de0196d77495d9b50fc05567b4a4b.
{
    "x#chapter#e50de0196d77495d9b50fc05567b4a4b": {
        "Category": "chapter",
        "SubCategory": [
            "x#4eb9072cf36f4d6fa1e98717e6bb54f7",
            "x#d85849fbde324690b6067f3b18c4258d",
            "x#3edff1a1864f41fe8b212df2bc96bf13"
        ],
        "Section": {
            "display_name": "Week 1 Section"
        }
    },
    "x#e50de0196d77495d9b50fc05567b4a4b": {
        "category": "course",
        "Subcategory": [
            "x#e50de0196d77495d9b50fc05567b4a4b"
        ],
        "Section": {
            "advanced_modules": [
                "google-document"
            ]
        }
    }
}
In the Fields tab of the Json Input step I have given the Names and Paths as: Category --> $..Category, Subcategory --> $..Subcategory, Section --> $..Section.
However, I am unable to get the root elements, which are crucial information for us, e.g. x#chapter#e50de0196d77495d9b50fc05567b4a4b and x#e50de0196d77495d9b50fc05567b4a4b.
I used the following code to get the values of the dynamic objects, but it didn't work:
var obj = JSON.parse(JBlock); // JBlock is the field that holds the entire string.
var keys = Object.name(obj);
JSONPath is not able to get the keys of a JSON structure. This is one of my main issues with JSONPath, and I wish Pentaho had included other JSON parsing engines.
This JavaScript, used in a Modified Java Script Value step, works for me. Add an output field for the key in the step's Fields editor, and then use a script like this:
var obj = JSON.parse(JBlock);
var keys = Object.keys(obj);
for (var i = 0; i < keys.length; i++) {
    var row = createRowCopy(getOutputRowMeta().size());
    var idx = getInputRowMeta().size();
    row[idx++] = keys[i];
    putRow(row);
}
trans_Status = SKIP_TRANSFORMATION;

Multiple key search in CouchDB

Given the following object structure:
{
key1: "...",
key2: "...",
data: "..."
}
Is there any way to get this object from CouchDB by querying both key1 and key2, without setting up two different views (one for each key)? Something like:
select * from ... where key1=123 or key2=123
Kind regards,
Artjom
edit:
Here is a better description of the problem:
The object described above is a serialized game state. A game has exactly one creator (key1) and one opponent (key2). For a given user, I would like to get all games in which they are involved, either as creator or as opponent.
Emit both keys (or only one if equal):
function(doc) {
    if (doc.hasOwnProperty('key1')) {
        emit(doc.key1, 1);
    }
    if (doc.hasOwnProperty('key2') && doc.key1 !== doc.key2) {
        emit(doc.key2, 1);
    }
}
Query with (properly url-encoded):
?include_docs=true&key=123
or with multiple values:
?include_docs=true&keys=[123,567,...]
UPDATE: updated to query multiple values with a single query.
You could create a CouchDB view which produces output such as:
["key1", 111],
["key1", 123],
["key2", 111],
["key2", 123],
etc.
It is very simple to write such a map function in JavaScript:
function(doc) {
    emit(["key1", doc["key1"]], null);
    emit(["key2", doc["key2"]], null);
}
When querying, you can query using multiple keys:
{"keys": [["key1", 123], ["key2", 123]]}
You can send that JSON as the data in a POST to the view. Or preferably use an API for your programming language. The results of this query will be each row in the view that matches either key. So, every document which matches on both key1 and key2 will return two rows in the view results.
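For instance, a minimal sketch of that POST using Python's requests library; the database, design document, and view names here ("games", "_design/games", "by_user") are assumptions:
import requests

resp = requests.post(
    "http://localhost:5984/games/_design/games/_view/by_user",
    params={"include_docs": "true"},
    json={"keys": [["key1", 123], ["key2", 123]]},
)
for row in resp.json()["rows"]:
    print(row["doc"])  # a doc that matches on both keys appears once per matching key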
I was also struggling with a similar question: how to do
"select * from ... where key1=123 or key2=123".
The following view would allow you to look up customer documents by the LastName or FirstName fields:
function(doc) {
    if (doc.Type == "customer") {
        emit(doc.LastName, {FirstName: doc.FirstName, Address: doc.Address});
        emit(doc.FirstName, {LastName: doc.LastName, Address: doc.Address});
    }
}
I am using this for a web service that queries all my docs and returns every doc that matches both the existence of a node and the query. In this example I am using the node 'detail' for the search. If you would like to search a different node, you need to specify it.
This is my first Stack Overflow post, so I hope I can help someone out :)
***Python Code
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
import httplib, json
from tornado.options import define, options

define("port", default=8000, help="run on the given port", type=int)


class MainHandler(tornado.web.RequestHandler):
    def get(self):
        db_host = 'YOUR_COUCHDB_SERVER'
        db_port = 5984
        db_name = 'YOUR_COUCHDB_DATABASE'
        node = self.get_argument('node', None)
        query = self.get_argument('query', None)
        cleared = None
        cleared = 1 if node else self.write('You have not supplied an object node.<br>')
        cleared = 2 if query else self.write('You have not supplied a query string.<br>')
        if cleared == 2:
            uri = ''.join(['/', db_name, '/', '_design/keysearch/_view/' + node + '/?startkey="' + query + '"&endkey="' + query + '\u9999"'])
            connection = httplib.HTTPConnection(db_host, db_port)
            headers = {"Accept": "application/json"}
            connection.request("GET", uri, None, headers)
            response = connection.getresponse()
            self.write(json.dumps(json.loads(response.read()), sort_keys=True, indent=4))


class Application(tornado.web.Application):
    def __init__(self):
        handlers = [
            (r"/", MainHandler)
        ]
        settings = dict(
            debug=True
        )
        tornado.web.Application.__init__(self, handlers, **settings)


def main():
    tornado.options.parse_command_line()
    http_server = tornado.httpserver.HTTPServer(Application())
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()


if __name__ == '__main__':
    main()
***CouchDB Design View
{
    "_id": "_design/keysearch",
    "language": "javascript",
    "views": {
        "detail": {
            "map": "function(doc) { var docs = doc['detail'].match(/[A-Za-z0-9]+/g); if(docs) { for(var each in docs) { emit(docs[each], doc); } } }"
        }
    }
}
