Is it possible to get all the history of recurring Google Tasks with the Google Tasks API? - python-3.x

I have a tasklist with repeating tasks inside. I managed to retrieve the tasks via the Google Tasks API, but only for the current day. From the documentation it seems impossible to retrieve the full history of recurring tasks. Am I missing something?
The idea is to create a script that keeps count of how consistently a person completes their daily tasks: keeping a streak, maybe making a chart of the completed/not-completed ratio, etc.
I used the Google quickstart code and modified it a bit, but no matter what I try, I can only get the recurring tasks from the current day. Any help?
# Call the Tasks API
results = service.tasks().list(tasklist='b09LaUd1MzUzY3RrOGlSUg', showCompleted=True,
                               showDeleted=False, showHidden=True).execute()
items = results.get('items', [])
if not items:
    print('No task lists found.')
    return
print('Task lists:')
for item in items:
    print(item)
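For reference, here is a sketch of how I could page through completed tasks in a date window using the completedMin/completedMax and pagination parameters of tasks().list (the 30-day window is a placeholder); it still only returns whatever the API stores, which is the crux of my question:
from datetime import datetime, timedelta, timezone

# Placeholder window: completed tasks from the last 30 days (RFC 3339 timestamps)
now = datetime.now(timezone.utc)
window_start = (now - timedelta(days=30)).isoformat()

request = service.tasks().list(
    tasklist='b09LaUd1MzUzY3RrOGlSUg',
    showCompleted=True,
    showHidden=True,
    completedMin=window_start,
    completedMax=now.isoformat(),
)
tasks = []
while request is not None:
    results = request.execute()
    tasks.extend(results.get('items', []))
    # list_next follows nextPageToken until there are no more pages
    request = service.tasks().list_next(request, results)

print('%d completed tasks in the window' % len(tasks))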

Related

How to handle replies with Pyrogram when using ForceReply?

I am building a Telegram bot where I am trying to get the user to fill in details about an event and store them in a dictionary, which is itself in a list.
However, I want it to work like a conversation. I want it to look like:
user: /create
bot-reply: What would you like to call it?
user-reply: Chris' birth day
bot-reply: When is it?
user-reply: 08/11/2021
bot-reply: Event Chris birth day on 08/11/2021 has been saved!
To achieve this I plan to use ForceReply, whose documentation states:
This can be extremely useful if you want to create user-friendly step-by-step interfaces without having to sacrifice privacy mode.
The problem is the documentation does not seem to explain how to handle responses.
Currently my code looks like this:
@app.on_message(filters.command('create'))
async def create_countdown(client, message):
    global countdowns
    countdown = {
        'countdown_id': str(uuid4())[:8],
        'countdown_owner_id': message.from_user.id,
        'countdown_owner_username': message.from_user.username,
    }
    try:
        await message.reply('What do you want to name the countdown?',
                            reply_markup=ForceReply()
                            )
    except FloodWait as e:
        await asyncio.sleep(e.x)
Looking through the forum I have found options like this:
python telegram bot ForceReply callback
which is exactly what I am looking for, but it uses a different library, python-telegram-bot, which provides a ConversationHandler. That does not seem to be part of Pyrogram.
How do I create user-friendly step-by-step interfaces with Pyrogram?
Pyrogram doesn't have a ConversationHandler.
You could use a dict with the user's ID as the key and the state they're in as the value; that dictionary of states then tells you where each user is in the conversation.
Dan (the Pyrogram creator):
A conversation-like feature is not available yet in the lib. One way to do that is to save states into a dictionary using user IDs as keys. Check the dictionary before taking action so that you know which step your users are in, and update it once they successfully complete a step.
https://t.me/pyrogramchat/213488
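A minimal sketch of that state-dictionary approach, assuming the bot only tracks one countdown per user at a time (the handler names and state labels are made up for illustration, and API credentials are assumed to be configured as in the asker's setup):
from uuid import uuid4

from pyrogram import Client, filters
from pyrogram.types import ForceReply

app = Client("countdown_bot")  # credentials assumed to be configured elsewhere

states = {}      # user_id -> current step in the conversation
countdowns = {}  # user_id -> countdown being built

@app.on_message(filters.command('create'))
async def create_countdown(client, message):
    user_id = message.from_user.id
    countdowns[user_id] = {
        'countdown_id': str(uuid4())[:8],
        'countdown_owner_id': user_id,
        'countdown_owner_username': message.from_user.username,
    }
    states[user_id] = 'awaiting_name'
    await message.reply('What do you want to name the countdown?',
                        reply_markup=ForceReply())

@app.on_message(filters.text & ~filters.command('create'))
async def handle_step(client, message):
    user_id = message.from_user.id
    step = states.get(user_id)
    if step == 'awaiting_name':
        countdowns[user_id]['name'] = message.text
        states[user_id] = 'awaiting_date'
        await message.reply('When is it?', reply_markup=ForceReply())
    elif step == 'awaiting_date':
        countdowns[user_id]['date'] = message.text
        del states[user_id]  # conversation finished
        c = countdowns[user_id]
        await message.reply(f"Event {c['name']} on {c['date']} has been saved!")

app.run()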

Group together celery results

TL;DR
I want to label results in the backend.
I have a flask/celery project and I'm new to celery.
A user sends in a batch of tasks for celery to work on.
Celery saves the results to a backend SQL database (table automatically created by Celery, named celery_taskmeta).
I want to let the user see the status of his batch, and request the results from the backend.
My problem is that all the results are in one table. What are my options to label this batch, so the user can differentiate the batches?
My ideas:
Can I add a label to each task, e.g. "Bob's batch no. 12", and then query celery_taskmeta for that?
Can I put each batch in a named backend table, i.e. ask Celery to save results to a table named task_12?
Trying with groups
I've tried the following code to group the results
job_group = group(api_get.delay(url) for url in urllist)
But I don't see any way to identify the group in the backend/results DB.
Trying with task name
In the backend I see an empty column header 'name' so I thought I could add an arbitrary string there:
@app.task(name="an amazing vegetable")
def api_get(url: str) -> tuple:
    ...
But then the celery worker throws an error when I run the task:
KeyError: 'an amazing vegetable'
[2020-12-18 12:07:22,713: ERROR/MainProcess] Received unregistered task of type 'an amazing vegetable'.
Probably the simplest solution is to use a group and its GroupResult to periodically poll the group state.
A1: As for the label question - yes, you can "label" your task by using the custom state feature.
A2: You can hack around it to put each batch of tasks in its own backend table, but I strongly advise against messing with it. If you really want to go this route, use a separate database for this particular purpose.
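A minimal sketch of the group approach, assuming api_get is a registered task and a result backend is configured (note the .s() signature form is used instead of .delay() so the tasks are not fired before the group is built):
from celery import group
from celery.result import GroupResult

# Build and launch the batch; .s() creates signatures instead of running the tasks
job_group = group(api_get.s(url) for url in urllist)
group_result = job_group.apply_async()

# Persist the group so it can be restored later by its id (effectively the batch label)
group_result.save()
batch_id = group_result.id

# Later, e.g. in a status endpoint, look the batch up again and poll it
restored = GroupResult.restore(batch_id)
print(f"{restored.completed_count()} of {len(restored.results)} tasks done")
if restored.ready():
    results = restored.get()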

Django-viewflow how to get the current user?

First day learning viewflow, I managed to get the tutorial to work, but I have a use case that I don't know how to implement.
What I want is that when a workflow is started, the task is automatically assigned to the workflow starter (the user). How do I go about referencing the current request object inside the workflow?
e.g.
start = flow.Start(CreateProcessView).Permission(auto_create=True).Next(this.fill_request)
fill_request = flow.View(UpdateProcessView).Assign(...)  # <- assign to the current user
.Assign(...) can be given a callable that takes a process activation and returns a user, e.g. .Assign(lambda act: User.objects.get(...)).
There are several callable shortcuts provided by Viewflow. this.[task_name].owner points to the user who completed that task, and activation.process.created_by points to the user who performed the .Start task.
fill_request = (
    flow.View(UpdateProcessView)
    .Assign(lambda act: act.process.created_by)
    # .Assign(this.start.owner)
)
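Putting the pieces together, a minimal sketch of a flow class under Viewflow 1.x conventions (the flow class name and the end node are made up for illustration):
from viewflow import flow
from viewflow.base import this, Flow
from viewflow.flow.views import CreateProcessView, UpdateProcessView

class RequestFlow(Flow):
    start = (
        flow.Start(CreateProcessView)
        .Permission(auto_create=True)
        .Next(this.fill_request)
    )

    # Assign the follow-up task to whoever started the process
    fill_request = (
        flow.View(UpdateProcessView)
        .Assign(lambda act: act.process.created_by)
        .Next(this.end)
    )

    end = flow.End()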

Right way to delete and then reindex ES documents

I have a python3 script that attempts to reindex certain documents in an existing ElasticSearch index. I can't update the documents because I'm changing from an autogenerated id to an explicitly assigned id.
I'm currently attempting to do this by deleting existing documents using delete_by_query and then indexing once the delete is complete:
self.elasticsearch.delete_by_query(
    index='%s_*' % base_index_name,
    doc_type='type_a',
    conflicts='proceed',
    wait_for_completion=True,
    refresh=True,
    body={}
)
However, the index is massive, and so the delete can take several hours to finish. I'm currently getting a ReadTimeoutError, which is causing the script to crash:
WARNING:elasticsearch:Connection <Urllib3HttpConnection: X> has failed for 2 times in a row, putting on 120 second timeout.
WARNING:elasticsearch:POST X:9200/base_index_name_*/type_a/_delete_by_query?conflicts=proceed&wait_for_completion=true&refresh=true [status:N/A request:140.117s]
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='X', port=9200): Read timed out. (read timeout=140)
Is my approach correct? If so, how can I make my script wait long enough for the delete_by_query to complete? There are two timeout parameters that can be passed to delete_by_query, search_timeout and timeout, but search_timeout defaults to no timeout (which I think is what I want) and timeout doesn't seem to do what I want. Is there some other parameter I can pass to delete_by_query to make it wait as long as it takes for the delete to finish? Or do I need to make my script wait some other way?
Or is there some better way to do this using the ElasticSearch API?
You should set wait_for_completion to False. In this case you'll get the task details and will be able to track the task's progress using the corresponding Tasks API: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html#docs-delete-by-query-task-api
To expand on Random's answer with code, for an ES/Python newbie like me:
ES = Elasticsearch(['http://localhost:9200'])
query = {'query': {'match_all': {}}}
response = ES.delete_by_query(index='index_name', doc_type='sample_doc',
                              wait_for_completion=False, body=query, ignore=[400, 404])
task_id = response['task']                     # delete_by_query returns {'task': '<node>:<id>'}
response_task = ES.tasks.get(task_id=task_id)  # check on the background task
is_completed = response_task["completed"]      # True once the task has finished
One can write a custom function that polls at some interval in a while loop to check whether the task has completed.
I have used Python 3.x and Elasticsearch 6.x.
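For illustration, a small polling helper along those lines (the function name poll_task is made up; it assumes the ES client and task_id from the snippet above):
import time

def poll_task(es, task_id, interval=30):
    """Block until the background delete-by-query task reports completion."""
    while True:
        status = es.tasks.get(task_id=task_id)
        if status["completed"]:
            return status
        time.sleep(interval)

# result = poll_task(ES, task_id)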
You can use the 'request_timeout' global param. This will override the connection's timeout settings, as mentioned here.
For example -
es.delete_by_query(index=<index_name>, body=<query>, request_timeout=300)
Or set it at the connection level, for example:
es = Elasticsearch(**get_es_connection_parms(), timeout=60)

python multiprocess requests on API

I need to get data from an API whose URL is based on different IDs.
url = "http://123456789/"
id = "jimmy"
I have a list of IDs; here is my code:
for id in ID:
    response = requests.get(url + id)
    info = response.json(encoding="utf-8")
    # save info
But I have 400,000 IDs and it will take too long to grab all the data,
so I want to use multiprocessing to finish this job:
cut the ID list into 10 or more smaller lists and run them at the same time.
How can I do that?
Please help, thanks!
You can use a multiprocessing Pool.
Cut the list of IDs into many smaller sub-lists, and have different processes handle different IDs.
https://docs.python.org/2/library/multiprocessing.html
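A minimal sketch of that approach with multiprocessing.Pool (the ID list and the number of worker processes are placeholders; since the work is network-bound, a thread pool such as multiprocessing.dummy.Pool would work the same way):
from multiprocessing import Pool

import requests

url = "http://123456789/"

def fetch(one_id):
    """Download and decode the JSON for a single ID."""
    response = requests.get(url + one_id)
    return one_id, response.json()

if __name__ == "__main__":
    ids = ["jimmy", "amy"]  # in practice, the full list of 400,000 IDs
    with Pool(processes=10) as pool:
        # imap_unordered yields results as soon as any worker finishes
        for one_id, info in pool.imap_unordered(fetch, ids, chunksize=100):
            pass  # save info here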
