I am trying to perform a mail merge with Google Docs and Sheets.
This is the link to my project: https://github.com/EyadZaeim/MDS
However, I keep encountering this error:

<HttpError 400 when requesting https://docs.googleapis.com/v1/documents/1ON3u_SSf33ow-AkzAf01yMl0RrgXIdUT:batchUpdate?fields=&alt=json returned "This operation is not supported for this document". Details: "This operation is not supported for this document">
File "C:\Users\BAU\Desktop\MDS\Merge.py", line 100, in merge_template
documentId=copy_id, fields='').execute()
File "C:\Users\BAU\Desktop\MDS\Merge.py", line 132, in <module>
i+1, merge_template(DOCS_FILE_ID, SOURCE, DRIVE)))

The error is raised on this line:

# send requests to Docs API to do actual merge
DOCS.documents().batchUpdate(body={'requests': reqs},
                             documentId=copy_id, fields='').execute()
return copy_id

What can I do? Thank you so much.
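For what it's worth, this particular 400 ("This operation is not supported for this document") is commonly seen when the target file is not a native Google Doc (for example a .docx that was uploaded to Drive but never converted), since documents.batchUpdate only works on files with the Google Docs MIME type. A diagnostic sketch, assuming DRIVE is a Drive v3 service built with googleapiclient and copy_id is the id returned by the copy step (the helper names here are made up):

```python
# Native Google Docs files have this MIME type; batchUpdate only works on them.
GOOGLE_DOC_MIME = 'application/vnd.google-apps.document'

def supports_batch_update(mime_type):
    """Return True if a file with this MIME type can be edited via batchUpdate."""
    return mime_type == GOOGLE_DOC_MIME

def check_copy(drive, copy_id):
    """Fetch the copied file's MIME type and fail early if it is not a Doc."""
    meta = drive.files().get(fileId=copy_id, fields='mimeType').execute()
    if not supports_batch_update(meta['mimeType']):
        raise ValueError(
            'File %s has MIME type %s, not a native Google Doc; '
            'batchUpdate will fail on it.' % (copy_id, meta['mimeType']))
```

Running check_copy right after the copy step would distinguish a wrong-file-type problem from a request-building problem.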
Desired Behaviour
Upload file using Microsoft Forms
Get file content
Create new file in new location
Delete original file
Actual Behaviour
I am getting an error at step 2:
"body": {
"status": 404,
"message": "File not found\r\nclientRequestId: yadda-yadda\r\nserviceRequestId: yadda-yadda"
}
What I've Tried
In the Power Automate flow, the trigger is:
When a new response is submitted
The next action is:
Get response details
The Raw Outputs of this last action is essentially:
"body": {
"responder": "me@domain.com",
"submitDate": "7/5/2021 3:17:56 AM",
"my-text-field-01": "text string here",
"my-file-upload-field-01": [{.....}],
"my-file-upload-field-02": [{.....}],
"my-text-field-02": "text string here"
}
The file upload fields have this schema:
{
"name": "My File Name_Uploader Name.docx",
"link": "https://my-tenant.sharepoint.com/sites/my-team-site/_layouts/15/Doc.aspx?sourcedoc=%7B0F1C3107-32C9-4CEF-B4BA-87E57C9DC514%7D&file=My%20File%20Name_Uploader%20Name.docx&action=default&mobileredirect=true",
"id": "01NSAULIQHGEOA7SJS55GLJOUH4V6J3RIU",
"type": null,
"size": 20400,
"referenceId": "01NSAULISZJG7M56NSV5AIDUQFHG3BOBCH",
"driveId": "letters-and-numbers-here",
"status": 1,
"uploadSessionUrl": null
}
Strangely, the id and referenceId values in this object do not correspond to the Document ID that is displayed when looking at the document's properties in the SharePoint document library.
Anyhow, I can target the uploaded file properties with these expressions in the flow:
json(body('Get_response_details')?['random-letters-and-numbers'])[0]['name']
json(body('Get_response_details')?['random-letters-and-numbers'])[0]['driveId']
json(body('Get_response_details')?['random-letters-and-numbers'])[0]['id']
The next action I want to take is Get file content.
It seems this can be done via the following actions:
SharePoint Connectors
Get file content
Get file content using path
OneDrive for Business Connectors
Get file content
Get file content using path
I'd like to use Get file content (as it seems more dynamic than having to pass through a hardcoded path).
Several posts suggest the value I pass through to this action as the File ID should be a concatenation of driveId and id, ie:
driveId.id
Sources:
Move, rename a file submitted in a Microsoft Form
Working with files from the Forms "File Upload" question type
However, when I try that concatenated value as the File ID, I get the error:
"body": {
"status": 404,
"message": "File not found\r\nclientRequestId: yadda-yadda\r\nserviceRequestId: yadda-yadda"
}
Question
What should I be passing into Get file content as the File Identifier?
Edit 1
After reading this, perhaps File Identifier actually refers to a 'file path', ie:
/Shared Documents/Apps/Microsoft Forms/My Form Name/Question/My File Name.docx
Ergh, I tried the path above as the File Identifier (by using the UI to manually select the file) and it works - not sure how I can create it dynamically as passing in a dynamic file name does not work:
/Shared Documents/Apps/Microsoft Forms/My Form Name/Question/#{variables('file_upload_wor_document_name')}
Edit 2
The last code snippet works as File Identifier when using SharePoint's Get file content using path connector.
Would still appreciate any clarification on all the different types of id that are referred to in SharePoint/Power Automate/MS Graph etc and why driveId.id was suggested as the value to use in some places.
I am finding that not having access to the relevant file id at different times is problematic, e.g. the Delete file action requires a File Identifier to delete the file uploaded to Microsoft Forms, and I don't have access to that from the Get response details response.
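On the id question: for comparison, Microsoft Graph addresses a drive item with the driveId and the item id as two separate path segments (GET /drives/{driveId}/items/{itemId}/content), rather than a "driveId.id" concatenation. A sketch with a made-up helper, using the placeholder ids from the upload schema above:

```python
def graph_content_url(drive_id, item_id):
    """Build the Microsoft Graph URL for downloading a drive item's content."""
    return ('https://graph.microsoft.com/v1.0/drives/%s/items/%s/content'
            % (drive_id, item_id))

# Placeholder values from the file upload schema shown in the question:
url = graph_content_url('letters-and-numbers-here',
                        '01NSAULIQHGEOA7SJS55GLJOUH4V6J3RIU')
```

This may explain why some posts suggest combining the two values: the Graph API needs both, but as separate identifiers, not as one dotted token.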
You may find what you need by first getting the file metadata. When working with files uploaded through forms I sometimes use the following steps:
Parse JSON for the question related to the uploaded file(s).
Get File Metadata (in my example, using path)
Now you have the details for doing what you want; in my example, I create a table in the uploaded XLSX file for other uses.
Example: Getting file metadata from MS/Forms Upload
This will probably sound stupid, but I have a Python script which is trying to refresh a Tableau extract using a workbook id on Server. I have all the code working just fine, and the extract even refreshes when I use the server.workbooks.refresh method, passing the workbook id in the call. I store the return value in a variable called results. The problem is that I want to pull the job id out of results, and everything I have tried to reference the id within it fails with an AttributeError: 'JobItem' object has no attribute error.
I have tried to reference the object as a string, as a tuple, as a dictionary, and as a list. But I cannot figure out what this object actually is so that I can reference the data within it, and I cannot find anywhere on the internet that explains what is returned.
results = server.workbooks.refresh(selected_workbook_id)
print(results)
print("\nThe data of workbook {0} is refreshed.".format(results.name))
Here is the error after the print statement:
<Job#fc62052d-e824-4594-8681-64dbb9a8216c RefreshExtract created_at(2019-11-06 22:18:21+00:00) started_at(None) completed_at(None) progress (None) finish_code(-1)>
https://wnuapesstablu01.dstcorp.net/api/3.4/auth
Traceback (most recent call last):
File "C:\Users\dt24358\Python36\Scripts\Tableau REST API Scripts\Refresh_Single_Extract_v2.py", line 134, in <module>
main()
File "C:\Users\dt24358\Python36\Scripts\Tableau REST API Scripts\Refresh_Single_Extract_v2.py", line 131, in main
print("\nThe data of workbook {0} is refreshed.".format(results.name))
AttributeError: 'JobItem' object has no attribute 'name'
To close this issue out: I realized that I needed to use the right API reference for the JobItem class. See https://tableau.github.io/server-client-python/docs/api-ref#jobs
Valid references are things like "id", "type", "created_at", "started_at". So, for those who didn't understand this like me, the reference is:
workbook = server.workbooks.get_by_id(selected_workbook_id)
results = server.workbooks.refresh(workbook.id)
print(results)
jobid = results.id
This will return the job id that the refresh task started on. You can then write a routine to poll the server looking to see when the extract job is finished.
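That polling routine can be sketched like this (a sketch, not a definitive implementation: it assumes the fetch call is server.jobs.get_by_id from tableauserverclient, and that a finished job has completed_at set, consistent with the repr shown in the question where completed_at is None while the job is still running):

```python
import time

def wait_for_job(get_job, job_id, interval=10, timeout=600):
    """Poll until the job reports a completed_at timestamp.

    `get_job` is the fetch function -- with tableauserverclient this would be
    `server.jobs.get_by_id`. Returns the finished JobItem-like object, or
    raises TimeoutError if the job does not finish within `timeout` seconds.
    """
    waited = 0
    while waited < timeout:
        job = get_job(job_id)
        if job.completed_at is not None:
            return job
        time.sleep(interval)
        waited += interval
    raise TimeoutError('Job %s did not finish within %ss' % (job_id, timeout))
```

Usage would be wait_for_job(server.jobs.get_by_id, jobid), optionally with a shorter interval for small extracts.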
Hope this helps someone... It was driving me crazy.
The Python code I have written generates some global and some BigQuery-related logs in the Stackdriver Logging window. Further, I am trying to create a metric manually and then send some alerts. I wanted to know whether we can create a single metric for both the global and the BigQuery logs on Stackdriver.
In Advanced filter query I tried:
resource.type="bigquery_resource" AND resource.type="global"
severity=ERROR
but it gives the error: "Invalid request: Request contains an invalid argument"
Then I tried:
resource.type="bigquery_resource", "global" AND
severity=ERROR
again it gives the error: "Invalid request: Unparseable filter: syntax error at line 1, column 33, token ','"
import logging
from google.cloud import logging as gcp_logging
from google.cloud import error_reporting

client = gcp_logging.Client()
client.setup_logging()
# Error Reporting client, used below to report the caught exception
error_client = error_reporting.Client()

try:
    if row_count_before.num_rows == row_count_after.num_rows:
        logging.error("Received empty query result")
    else:
        newly_added_rows = row_count_after.num_rows - row_count_before.num_rows
        logging.info("{} rows are found as query result".format(newly_added_rows))
except RuntimeError:
    logging.error("Exception occurred {}".format(error_client.report_exception()))
I am looking for an approach where I can have a single metric for multiple resource types. Thank you.
I think you want
resource.type="bigquery_resource" OR resource.type="global"
severity=ERROR
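If the grouping is a concern, it can also be written with explicit parentheses on one line. (Per the Logging query language documentation, newline-separated terms are joined with AND and OR binds more tightly than AND, so both forms should be equivalent; the parenthesized version just makes the intent unambiguous.)

```
(resource.type="bigquery_resource" OR resource.type="global") AND severity=ERROR
```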
I am building my first chat bot using Rasa NLU and Rasa Core in Python 3.6.7
Everything was working well until I added a few new utterances in the templates section of the domain file and wrote some stories to use them. Now none of the new templates works, and training the model produces the following error:
File "dialogue_management_model.py", line 46, in <module>
train_dialogue()
File "dialogue_management_model.py", line 31, in train_dialogue
augmentation_factor = 50)
File "/home/pprasai/anaconda3/envs/nluenv/lib/python3.6/site-packages/rasa_core/agent.py", line 268, in train
**kwargs)
File "/home/pprasai/anaconda3/envs/nluenv/lib/python3.6/site-packages/rasa_core/policies/ensemble.py", line 72, in train
policy.train(training_trackers, domain, **kwargs)
File "/home/pprasai/anaconda3/envs/nluenv/lib/python3.6/site-packages/rasa_core/policies/memoization.py", line 152, in train
self._add(trackers_as_states, trackers_as_actions, domain)
File "/home/pprasai/anaconda3/envs/nluenv/lib/python3.6/site-packages/rasa_core/policies/memoization.py", line 108, in _add
feature_item = domain.index_for_action(action)
File "/home/pprasai/anaconda3/envs/nluenv/lib/python3.6/site-packages/rasa_core/domain.py", line 151, in index_for_action
self._raise_action_not_found_exception(action_name)
File "/home/pprasai/anaconda3/envs/nluenv/lib/python3.6/site-packages/rasa_core/domain.py", line 159, in _raise_action_not_found_exception
"Available actions are: \n{}".format(action_name, actions))
Exception: Can not access action 'utter_ask_email_send', as that name is not a registered action for this domain. Available actions are:
- action_check_ao
- action_default_fallback
- action_listen
- action_restart
- action_restaurant
- action_send_mail
- utter_ask_budget
- utter_ask_cuisine
- utter_ask_howcanhelp
- utter_ask_location
- utter_default
- utter_goodbye
- utter_greet
- utter_unsupported_city
The new templates I created are not shown in this list.
Following is an excerpt of my templates:
templates:
utter_sending_email:
- "An email is being sent."
utter_ask_email_send:
- "Would you like me to send you an email with details?"
utter_ask_email_address:
- "Could you please tell me your email address?"
utter_invalid_email:
- "It seems you might have entered an invalid email. Would you like to try again?"
utter_greet:
- "hey there! How may i help you"
- "Hi, How can I help you!"
- "Hey, How is it going. How May I help you Today"
utter_goodbye:
- "goodbye :("
- "Bye-bye"
utter_default:
- "I could not process you last query. I am terribly sorry."
And here's how I use these in the stories file:
* greet
- utter_greet
* restaurant_search
- utter_ask_location
* restaurant_search{"location": "tokyo", "cuisine": "chinese"}
- slot{"location": "tokyo"}
- slot{"cuisine": "chinese"}
- utter_ask_budget
* restaurant_search{"budget": "economy"}
- slot{"budget": "ecnnomy"}
- action_search_restaurant
- utter_ask_email_send
* small_talk
- utter_sending_email
- utter_invalid_email
* deny
- utter_goodbye
I am using Rasa Core version 0.10.1. Can anyone kindly help me figure this out?
NOTE: Removing the stories that use the new templates resolves the error and the training runs successfully, so I think the problem must be in either the stories file or the domain file.
It was very stupid of me.
I did not list the new templates under the actions section of my domain.yml file. Adding utter_ask_email_send under the actions section solved the problem. Perhaps now I'll never forget.
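For reference, in Rasa Core 0.x every utter_ template used in a story has to be registered under actions: in domain.yml as well as under templates:. A sketch of the relevant section, using the action names from the error message and the new template names above:

```yaml
actions:
  - action_check_ao
  - action_send_mail
  - utter_greet
  - utter_goodbye
  - utter_default
  # each new template must also be listed here, not only under templates:
  - utter_ask_email_send
  - utter_ask_email_address
  - utter_invalid_email
  - utter_sending_email
```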
I am writing a Karate DSL test to test a web service endpoint. I have defined my url base in the karate-config.js file already, but when I try to use it in the Background section, I get the error below. Please help; my feature file is provided after the error.
Error: "required (...)+ loop did not match anything at input 'Scenario:'"
Feature: Test Data Management service endpoints that perform different operations with EPR
Background:
url dataManagementUrlBase
Scenario: Validate that the contractor's facility requirements are returned from EPR
Given path 'facilities'
And def inputpayload = read('classpath:dataManagementPayLoad.json')
And request inputpayload
When method post
Then status 200
And match $ == read('classpath:dataManagementExpectedJson.json')
You are missing a * before the url
Background:
* url dataManagementUrlBase