How to get contents of a single file uploaded via Microsoft Forms using 'File Identifier'? - sharepoint-online

Desired Behaviour
1. Upload file using Microsoft Forms
2. Get file content
3. Create new file in new location
4. Delete original file
Actual Behaviour
I am getting an error at step 2 (Get file content):
"body": {
"status": 404,
"message": "File not found\r\nclientRequestId: yadda-yadda\r\nserviceRequestId: yadda-yadda"
}
What I've Tried
In the Power Automate flow, the trigger is:
When a new response is submitted
The next action is:
Get response details
The Raw Outputs of this last action are essentially:
"body": {
"responder": "me#domain.com",
"submitDate": "7/5/2021 3:17:56 AM",
"my-text-field-01": "text string here",
"my-file-upload-field-01": [{.....}],
"my-file-upload-field-02": [{.....}],
"my-text-field-02": "text string here"
}
The file upload fields have this schema:
{
"name": "My File Name_Uploader Name.docx",
"link": "https://my-tenant.sharepoint.com/sites/my-team-site/_layouts/15/Doc.aspx?sourcedoc=%7B0F1C3107-32C9-4CEF-B4BA-87E57C9DC514%7D&file=My%20File%20Name_Uploader%20Name.docx&action=default&mobileredirect=true",
"id": "01NSAULIQHGEOA7SJS55GLJOUH4V6J3RIU",
"type": null,
"size": 20400,
"referenceId": "01NSAULISZJG7M56NSV5AIDUQFHG3BOBCH",
"driveId": "letters-and-numbers-here",
"status": 1,
"uploadSessionUrl": null
}
Strangely, the id and referenceId values in this object do not correspond to the Document ID that is displayed when looking at the document's properties in the SharePoint document library.
Anyhow, I can target the uploaded file properties with these expressions in the flow:
json(body('Get_response_details')?['random-letters-and-numbers'])[0]['name']
json(body('Get_response_details')?['random-letters-and-numbers'])[0]['driveId']
json(body('Get_response_details')?['random-letters-and-numbers'])[0]['id']
The next action I want to take is Get file content.
It seems this can be done via the following actions:
SharePoint Connectors
Get file content
Get file content using path
OneDrive for Business Connectors
Get file content
Get file content using path
I'd like to use Get file content (as it seems more dynamic than having to pass through a hardcoded path).
Several posts suggest the value I pass through to this action as the File ID should be a concatenation of driveId and id, ie:
driveId.id
Sources:
Move, rename a file submitted in a Microsoft Form
Working with files from the Forms "File Upload" question type
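As a sketch of what that suggestion seems to describe (the question key 'random-letters-and-numbers' is the placeholder used in the expressions above), the concatenated value might be built with something like:
concat(json(body('Get_response_details')?['random-letters-and-numbers'])[0]['driveId'], '.', json(body('Get_response_details')?['random-letters-and-numbers'])[0]['id'])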
However, when I try passing a value along those lines as the File Identifier, I get the error:
"body": {
"status": 404,
"message": "File not found\r\nclientRequestId: yadda-yadda\r\nserviceRequestId: yadda-yadda"
}
Question
What should I be passing into Get file content as the File Identifier?
Edit 1
After reading this, perhaps File Identifier actually refers to a 'file path', ie:
/Shared Documents/Apps/Microsoft Forms/My Form Name/Question/My File Name.docx
Ergh, I tried the path above as the File Identifier (by using the UI to manually select the file) and it works. I'm not sure how I can create it dynamically, as passing in a dynamic file name does not work:
/Shared Documents/Apps/Microsoft Forms/My Form Name/Question/@{variables('file_upload_wor_document_name')}
Edit 2
The last code snippet works as File Identifier when using SharePoint's Get file content using path connector.
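For what it's worth, a fully dynamic path could presumably be assembled with an expression along these lines (assuming the folder structure shown in Edit 1 and the 'name' value parsed from Get response details; the question key is again a placeholder):
concat('/Shared Documents/Apps/Microsoft Forms/My Form Name/Question/', json(body('Get_response_details')?['random-letters-and-numbers'])[0]['name'])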
Would still appreciate any clarification on all the different types of id that are referred to in SharePoint/Power Automate/MS Graph etc and why driveId.id was suggested as the value to use in some places.
I am finding that not having access to the relevant file id at different times is problematic. For example, the Delete file action requires a File Identifier to delete the file uploaded to Microsoft Forms, and I don't have access to that from the Get response details response.

You may find what you need by first getting the file metadata. When working with files uploaded through forms I sometimes use the following steps:
Parse JSON for the question related to the uploaded file(s).
Get File Metadata (in my example, using path)
Now you have the details for doing what you want; in my example, creating a table in the uploaded XLSX file for other uses.
Example: Getting file metadata from MS/Forms Upload
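As a rough sketch, the Parse JSON step above might use a schema like the following, derived from the file-upload field output shown in the question (the property types are assumptions):
{
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "name": { "type": "string" },
            "link": { "type": "string" },
            "id": { "type": "string" },
            "type": {},
            "size": { "type": "integer" },
            "referenceId": { "type": "string" },
            "driveId": { "type": "string" },
            "status": { "type": "integer" },
            "uploadSessionUrl": {}
        }
    }
}
The Content input would then be something like json(body('Get_response_details')?['random-letters-and-numbers']), matching the expressions used earlier in the question.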

Related

How can I obtain an attached file from a record using the C# REST web services

I'm trying to retrieve an attached file (or files) from a record in Acumatica. I'm using the following example:
https://help-2021r1.acumatica.com/Help?ScreenId=ShowWiki&pageid=b1bc82ee-ae6b-442a-a369-863d98f14630
I've attached a file to the demo inventory stock item "AACOMPUT01".
Most of the code runs as expected, but when it gets to the code line:
JArray jFiles = jItem.Value<JArray>("files");
it returns null for the jFiles JArray - as if there are no files attached.
Is there something wrong with this example - or something I need to add to get it to work?
I'm using 2021 R1 (21.107.0023), and the endpoint is default 20.200.001...
Thanks...
Execute a GET request on the StockItem endpoint with the $expand=files option:
http://localhost/Acumatica/entity/Default/20.200.001/StockItem/AACOMPUT01?$expand=files
This returns the files array:
"files": [
{
"id": "bdb9534c-6aa9-41fa-a65d-3119e32b0fe5",
"filename": "Stock Items (AACOMPUT01)\\AACOMPUT01.jpg",
"href": "/Acumatica/entity/Default/20.200.001/files/bdb9534c-6aa9-41fa-a65d-3119e32b0fe5"
}
Use the href value to issue a GET request, which returns the file content:
http://localhost/Acumatica/entity/Default/20.200.001/files/bdb9534c-6aa9-41fa-a65d-3119e32b0fe5
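In C#, a minimal sketch of that two-step flow might look like the following (the host URL and authentication handling are assumptions; the point is requesting the item with $expand=files and then following each href):
// using System; using System.Net.Http; using Newtonsoft.Json.Linq;
// Sketch only: assumes an HttpClient that is already authenticated against the instance.
var client = new HttpClient();
var host = "http://localhost";

// 1. Request the stock item with the files array expanded.
var itemJson = await client.GetStringAsync(
    host + "/Acumatica/entity/Default/20.200.001/StockItem/AACOMPUT01?$expand=files");
JArray jFiles = JObject.Parse(itemJson).Value<JArray>("files");

// 2. Follow each href to download the attached file content.
foreach (var jFile in jFiles)
{
    var href = jFile.Value<string>("href"); // e.g. /Acumatica/entity/Default/20.200.001/files/<guid>
    byte[] content = await client.GetByteArrayAsync(host + href);
    Console.WriteLine($"{jFile.Value<string>("filename")}: {content.Length} bytes");
}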

How to use Power Automate to handle excel file that gets refreshed/replaced

I have a SharePoint library into which I will upload an Excel file periodically.
I want to read the content of the Excel file and do some processing.
I have added the 'List rows present in a table' action and it works okay initially.
However, when I upload an updated Excel file, the flow fails with the below error:
No table was found with the name '{********-****-****-****-C30E********}'.
clientRequestId: ********-****-****-****-4fbe********
serviceRequestId: ********-****-****-****-c00c********
How can I get around this?
The number of rows will be different each time.
Is there any other way to read the content of the Excel file?
You may wish to use the approach below. You can build a string variable of your file name and call an HTTP request to SharePoint to evaluate it, find the file, and return the file ID.
I use this method so a user can provide the XLSX file name at runtime as an input to a manually triggered flow.
"Send an HTTP request to Sharepoint"
Site address : the Sharepoint Site
Method : Get
URI : _api/v2.0/drive/root:/My Sharepoint Folder if not at the root folder/My Sharepoint File.xlsx
Headers : accept application/json
Body : {
"type": "object",
"properties": {
"path": {
"type": "string"
},
"table": {
"type": "string"
}
}
}
Test if it failed; if so, do something else.
Then you can use the ID in your
"List rows present in a table"
Location: the SharePoint site
Document Library: Documents
File: @{body('Send_an_HTTP_request_to_SharePoint')?['id']}
Table: Table1
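As a rough sketch, the URI above could alternatively be assembled from a runtime file name with an expression like the following (the variable name file_name is hypothetical, and the folder is the one shown in the URI above):
concat('_api/v2.0/drive/root:/My SharePoint Folder/', variables('file_name'), '.xlsx')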

Azure Logic App - How to upload file to Azure Blob Storage from byte array

I am trying to create a Logic App that is triggered by an HTTP request whose payload is a JSON request. Inside this JSON, one field contains the file:
{
"EntityId": "45643",
"SharedGuid": "00000000-0000-0000-0000-000000000000",
"Document": {
"DocumentName": "MyFileName.pdf",
"File": "JVBERi0xLjMKJfv8/f4KMS.....lJUVPRg=="
}}
This "file" content is being generated by the customer application by using the following C# function: File.ReadAllBytes("local path here").
I managed to upload the byte array to Blob Storage, but the file is not valid once it is uploaded.
I tried different content types for the file in the JSON schema definition: string, binary, and application/octet-stream.
Any help will be appreciated.
Did you convert the byte array to a Base64 string in your HTTP request code, like the code below?
byte[] b = File.ReadAllBytes(@"filepath");
string s = Convert.ToBase64String(b);
According to the file content you provided, it seems you have already converted it to a Base64 string as above, so I provide the solution below:
For this requirement, you can just parse the response data as a string (no need to use "binary" in the schema) in your "Parse JSON" action and then use the base64ToBinary() function in the "Create blob" action.
The expression in "Blob content" is:
base64ToBinary(body('Parse_JSON')?['Document']?['File'])
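For completeness, the "Parse JSON" schema matching the request payload above might look like this (a sketch based on the sample body in the question):
{
    "type": "object",
    "properties": {
        "EntityId": { "type": "string" },
        "SharedGuid": { "type": "string" },
        "Document": {
            "type": "object",
            "properties": {
                "DocumentName": { "type": "string" },
                "File": { "type": "string" }
            }
        }
    }
}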
Hope it helps~
If you still have any problems, please feel free to let me know.

Azure : How to write path to get a file from a time series partitioned folder using the Azure logic apps

I am trying to retrieve a CSV file from Azure Blob Storage using Logic Apps.
I set the path (as seen in Azure Storage Explorer) in the Parameters, and in the Get blob content action I am using that parameter.
In the Parameters I have set the value as:
concat('Directory1/','Year=',string(int(substring(utcNow(),0,4))),'/Month=',string(int(substring(utcnow(),5,2))),'/Day=',string(int(substring(utcnow(),8,2))),'/myfile.csv')
So at run time this path should resolve to:
Directory1/Year=2019/Month=12/Day=30/myfile.csv
but during execution the action fails with the following error message:
{
"status": 400,
"message": "The specifed resource name contains invalid characters.\r\nclientRequestId: 1e2791be-8efd-413d-831e-7e2cd89278ba",
"error": {
"message": "The specifed resource name contains invalid characters."
},
"source": "azureblob-we.azconn-we-01.p.azurewebsites.net"
}
So my question is: how do I write the path to get data from the time-series partitioned folder?
The answer from Joy Wang was partially correct.
Parameters in Logic Apps treat values as plain strings and cannot evaluate functions such as concat().
The correct way to use the concat() function is in an expression.
And my solution to the problem is:
concat('container1/','Directory1/','Year=',string(int(substring(utcNow(),0,4))),'/Month=',string(int(substring(utcnow(),5,2))),'/Day=',string(int(substring(utcnow(),8,2))),'/myfile.csv')
You should not use that in the Parameters. When you put the line concat('Directory1/','Year=',string(int(substring(utcNow(),0,4))),'/Month=',string(int(substring(utcnow(),5,2))),'/Day=',string(int(substring(utcnow(),8,2))),'/myfile.csv') in the Parameters, its type is String, so it is treated as a literal string by the Logic App and the functions never take effect.
You also need to include the container name in the concat(). There is no need to use string(int()), because utcNow() and substring() both return strings.
To fix the issue, use the line below directly in the Blob option, my container name is container1.
concat('container1/','Directory1/','Year=',substring(utcNow(),0,4),'/Month=',substring(utcnow(),5,2),'/Day=',substring(utcnow(),8,2),'/myfile.csv')
Update:
As mentioned in @Stark's answer, if you want to drop the leading 0, you can convert the substring from string to int and then back to string.
concat('container1/','Directory1/','Year=',string(int(substring(utcNow(),0,4))),'/Month=',string(int(substring(utcnow(),5,2))),'/Day=',string(int(substring(utcnow(),8,2))),'/myfile.csv')
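To illustrate the difference, taking 5 July 2019 as the run date:
Directory1/Year=2019/Month=07/Day=05/myfile.csv (using substring() alone)
Directory1/Year=2019/Month=7/Day=5/myfile.csv (wrapping each part in string(int(...)))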

U-SQL: How to skip files from analysis based on content

I have a lot of files each containing a set of json objects like this:
{ "Id": "1", "Timestamp":"2017-07-20T10:43:21.8841599+02:00", "Session": { "Origin": "WebClient" }}
{ "Id": "2", "Timestamp":"2017-07-20T10:43:21.8841599+02:00", "Session": { "Origin": "WebClient" }}
{ "Id": "3", "Timestamp":"2017-07-20T10:43:21.8841599+02:00", "Session": { "Origin": "WebClient" }}
etc.
Each file contains information about a specific type of session. In this case they are sessions from a web app, but they could also be sessions from a desktop app. In that case the value for Origin is "DesktopClient" instead of "WebClient".
For analysis purposes say I am only interested in DesktopClient sessions.
All files representing a session are stored in Azure Blob Storage like this:
container/2017/07/20/00399076-2b88-4dbc-ba56-c7afeeb9ef77.json
container/2017/07/20/00399076-2b88-4dbc-ba56-c7afeeb9ef78.json
container/2017/07/20/00399076-2b88-4dbc-ba56-c7afeeb9ef79.json
Is it possible to skip files whose first line already makes it clear that they are not DesktopClient session files, as in my example? I think it would save a lot of query resources if files that I already know do not contain the right session type could be skipped, since they can be quite big.
At the moment my query reads the data like this:
@RawExtract = EXTRACT [RawString] string
FROM @"wasb://plancare-events-blobs@centrallogging/2017/07/20/{*}.json"
USING Extractors.Text(delimiter:'\b', quoting : false);
@ParsedJSONLines = SELECT Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple([RawString]) AS JSONLine
FROM @RawExtract;
...
Or should I create my own version of Extractors.Text, and if so, how should I do that?
To answer some questions that popped up in the comments to the question first:
At this point we do not provide access to the Blob Store meta data. That means that you need to express any meta data either as part of the data in the file or as part of the file name (or path).
Depending on the cost of extraction and the sizes of the files, you can either extract all the rows and then filter out the rows whose beginning does not fit your criteria. That will extract all files and all rows from all files, but does not need a custom extractor.
Alternatively, write a custom extractor that checks only the files that are appropriate (that may be useful if the first solution does not give you the performance you need and you can determine the conditions efficiently inside the extractor). Several example extractors can be found at http://usql.io in the example directory (including an example JSON extractor).
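For the first option, a sketch of the filtering step (assuming the Origin value appears verbatim in each line exactly as in the sample above) could look like:
@DesktopSessions =
    SELECT [RawString]
    FROM @RawExtract
    WHERE [RawString].Contains("\"Origin\": \"DesktopClient\"");

@ParsedJSONLines =
    SELECT Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple([RawString]) AS JSONLine
    FROM @DesktopSessions;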
