I'm trying to retrieve an attached file (or files) from a record in Acumatica. I'm using the following example:
https://help-2021r1.acumatica.com/Help?ScreenId=ShowWiki&pageid=b1bc82ee-ae6b-442a-a369-863d98f14630
I've attached a file to the demo inventory stock item "AACOMPUT01".
Most of the code runs as expected, but when it gets to the code line:
JArray jFiles = jItem.Value<JArray>("files");
it returns null for the jFiles JArray - as if there are no files attached.
Is there something wrong with this example - or something I need to add to get it to work?
I'm using 2021 R1 (21.107.0023), and the endpoint is default 20.200.001...
Thanks...
Execute a GET request on the StockItem endpoint with the $expand=files option:
http://localhost/Acumatica/entity/Default/20.200.001/StockItem/AACOMPUT01?$expand=files
This returns the files array:
"files": [
{
"id": "bdb9534c-6aa9-41fa-a65d-3119e32b0fe5",
"filename": "Stock Items (AACOMPUT01)\\AACOMPUT01.jpg",
"href": "/Acumatica/entity/Default/20.200.001/files/bdb9534c-6aa9-41fa-a65d-3119e32b0fe5"
}
Use the href value to issue a GET request that returns the file content:
http://localhost/Acumatica/entity/Default/20.200.001/files/bdb9534c-6aa9-41fa-a65d-3119e32b0fe5
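As an illustration outside of C#, the two-step flow above can be sketched in Python; the helper names and the parsing sample are illustrative, and a real call would additionally need an authenticated session:

```python
import json

# Base endpoint URL from the question; adjust host/instance as needed.
BASE = "http://localhost/Acumatica/entity/Default/20.200.001"

def stock_item_url(inventory_id):
    # Without $expand=files the response omits the "files" array,
    # which is why jItem.Value<JArray>("files") came back null.
    return f"{BASE}/StockItem/{inventory_id}?$expand=files"

def extract_file_hrefs(response_body):
    # Pull the href of each attachment out of the JSON response;
    # each href can then be fetched with a GET to download the file.
    item = json.loads(response_body)
    return [f["href"] for f in item.get("files", [])]

# Sample response fragment from the answer above:
sample = """{"files": [{"id": "bdb9534c-6aa9-41fa-a65d-3119e32b0fe5",
 "filename": "Stock Items (AACOMPUT01)\\\\AACOMPUT01.jpg",
 "href": "/Acumatica/entity/Default/20.200.001/files/bdb9534c-6aa9-41fa-a65d-3119e32b0fe5"}]}"""

print(stock_item_url("AACOMPUT01"))
print(extract_file_hrefs(sample))
```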
I have a SharePoint library in which I will upload an excel file periodically.
I want to read the content of the excel file, and do some processing.
I have added the 'List rows present in a table' action, and it works initially.
However, when I upload an updated Excel file, the flow fails with the error below:
No table was found with the name '{********-****-****-****-C30E********}'.
clientRequestId: ********-****-****-****-4fbe********
serviceRequestId: ********-****-****-****-c00c********
How can I get around this?
The number of rows will be different each time.
Is there any other way to read the content of the excel file?
You can build a string variable with your file name and send an HTTP request to SharePoint to find the file and return its file ID.
I use this method so a user can provide the XLSX file name at runtime as an input to a manually triggered flow.
"Send an HTTP request to Sharepoint"
Site address : the Sharepoint Site
Method : Get
URI : _api/v2.0/drive/root:/My Sharepoint Folder if not at the root folder/My Sharepoint File.xlsx
Headers : accept application/json
Body : {
"type": "object",
"properties": {
"path": {
"type": "string"
},
"table": {
"type": "string"
}
}
}
Test whether it failed, and handle that case separately.
Then you can use the ID in your "List rows present in a table" action:
Location : The Sharepoint Site
Document Library: Documents
File : #{body('Send_an_HTTP_request_to_SharePoint')?['id']}
Table : Table1
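The URI construction and the ID lookup can be sketched in Python; the folder name, file name, and response body below are illustrative stand-ins:

```python
import json

def drive_item_uri(folder, filename):
    # Relative URI for "Send an HTTP request to SharePoint";
    # pass an empty folder when the file sits at the drive root.
    path = f"{folder}/{filename}" if folder else filename
    return f"_api/v2.0/drive/root:/{path}"

def extract_file_id(response_body):
    # The response carries the drive item's "id", which
    # "List rows present in a table" accepts as the File value.
    return json.loads(response_body).get("id")

print(drive_item_uri("My Sharepoint Folder", "My Sharepoint File.xlsx"))
# Illustrative response body; a real call returns many more properties.
print(extract_file_id('{"id": "01ABCDEF123456"}'))
```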
I am trying to create a Logic App that is triggered by an HttpRequest that contains as payload a JSON request. Inside of this JSON, one field contains the file:
{
"EntityId": "45643",
"SharedGuid": "00000000-0000-0000-0000-000000000000",
"Document": {
"DocumentName": "MyFileName.pdf",
"File": "JVBERi0xLjMKJfv8/f4KMS.....lJUVPRg=="
}}
This "file" content is being generated by the customer application by using the following C# function: File.ReadAllBytes("local path here").
I managed to upload the byte array to blob storage. But the file is not valid once it is uploaded in the Blob Storage.
I tried different file contents for the file in the JSON schema definition as: string, binary, application/octet-stream.
Any help will be appreciated.
Did you convert the byte array to a Base64 string in your HTTP request code, like the code below?
byte[] b = File.ReadAllBytes(@"filepath");
string s = Convert.ToBase64String(b);
According to the file content you provided, it seems you have already converted it to a Base64 string as above, so here is the solution:
For this requirement, parse the response data as a string in your "Parse JSON" action (there is no need to use "binary" in the schema), and then use the base64ToBinary() function in the "Create blob" action.
The expression in "Blob content" is:
base64ToBinary(body('Parse_JSON')?['Document']?['File'])
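What that expression does can be checked outside the Logic App; this Python sketch mirrors the customer's C# encoding step and the base64ToBinary() decoding step (the sample bytes are a stand-in for a real PDF):

```python
import base64

# What the customer's C# code does: read raw bytes, then Base64-encode
# them (File.ReadAllBytes + Convert.ToBase64String).
raw = b"%PDF-1.3 stand-in for real PDF bytes"
encoded = base64.b64encode(raw).decode("ascii")

# What base64ToBinary() does in the "Create blob" action: decode the
# Base64 string back into the original bytes before writing the blob.
decoded = base64.b64decode(encoded)

assert decoded == raw  # the round trip preserves the file exactly
print(encoded[:12])
```

Uploading the Base64 string itself (without the decode step) is what produces an invalid file in Blob Storage.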
Hope it helps~
If you still have any problems, please feel free to let me know.
I am trying to retrieve a csv file from the Azure blob storage using the logic apps.
I set the Azure Storage path in the Parameters, and in the Get blob content action I use that parameter.
In the Parameters I have set the value as:
concat('Directory1/','Year=',string(int(substring(utcNow(),0,4))),'/Month=',string(int(substring(utcnow(),5,2))),'/Day=',string(int(substring(utcnow(),8,2))),'/myfile.csv')
So during the run time this path should form as:
Directory1/Year=2019/Month=12/Day=30/myfile.csv
but during execution the action fails with the following error message:
{
"status": 400,
"message": "The specifed resource name contains invalid characters.\r\nclientRequestId: 1e2791be-8efd-413d-831e-7e2cd89278ba",
"error": {
"message": "The specifed resource name contains invalid characters."
},
"source": "azureblob-we.azconn-we-01.p.azurewebsites.net"
}
So my question is: how do I write the path so that it reads data from the time-series-partitioned path?
Joy Wang's answer was partially correct.
Parameters in Logic Apps treat values as plain strings and will not evaluate functions such as concat().
The correct way to use the concat function is to use the expressions.
And my solution to the problem is:
concat('container1/','Directory1/','Year=',string(int(substring(utcNow(),0,4))),'/Month=',string(int(substring(utcnow(),5,2))),'/Day=',string(int(substring(utcnow(),8,2))),'/myfile.csv')
You should not use that in the Parameters. When you put the concat('Directory1/','Year=',...) expression from the question in a parameter, its type is String, so the Logic App treats it as a literal string and the functions never take effect.
And you need to include the container name in the concat(). Also, there is no need for string(int()), because utcNow() and substring() both return a String.
To fix the issue, use the line below directly in the Blob option, my container name is container1.
concat('container1/','Directory1/','Year=',substring(utcNow(),0,4),'/Month=',substring(utcnow(),5,2),'/Day=',substring(utcnow(),8,2),'/myfile.csv')
Update:
As mentioned in Stark's answer, if you want to drop the leading 0, you can convert it from string to int, then convert it back to string.
concat('container1/','Directory1/','Year=',string(int(substring(utcNow(),0,4))),'/Month=',string(int(substring(utcnow(),5,2))),'/Day=',string(int(substring(utcnow(),8,2))),'/myfile.csv')
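Both variants can be sanity-checked outside the Logic App; this Python sketch (an illustrative helper, with container1 assumed as in the answer) mirrors the concat() logic:

```python
from datetime import datetime

def blob_path(now, zero_padded=True):
    # Mirrors the Logic App concat() expression; zero_padded=False
    # applies the string(int(...)) trick, so Month=07 becomes Month=7.
    year, month, day = now.strftime("%Y"), now.strftime("%m"), now.strftime("%d")
    if not zero_padded:
        month, day = str(int(month)), str(int(day))
    return f"container1/Directory1/Year={year}/Month={month}/Day={day}/myfile.csv"

print(blob_path(datetime(2019, 12, 30)))
print(blob_path(datetime(2019, 7, 5), zero_padded=False))
```

Which variant is correct depends on how the blobs were partitioned when they were written: a writer that zero-pads the month and day needs the padded form.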
I have a lot of files each containing a set of json objects like this:
{ "Id": "1", "Timestamp":"2017-07-20T10:43:21.8841599+02:00", "Session": { "Origin": "WebClient" }}
{ "Id": "2", "Timestamp":"2017-07-20T10:43:21.8841599+02:00", "Session": { "Origin": "WebClient" }}
{ "Id": "3", "Timestamp":"2017-07-20T10:43:21.8841599+02:00", "Session": { "Origin": "WebClient" }}
etc.
Each file contains information about a specific type of session. In this case these are sessions from a web app, but they could also be sessions from a desktop app, in which case the value for Origin is "DesktopClient" instead of "WebClient".
For analysis purposes say I am only interested in DesktopClient sessions.
All files representing a session are stored in Azure Blob Storage like this:
container/2017/07/20/00399076-2b88-4dbc-ba56-c7afeeb9ef77.json
container/2017/07/20/00399076-2b88-4dbc-ba56-c7afeeb9ef78.json
container/2017/07/20/00399076-2b88-4dbc-ba56-c7afeeb9ef79.json
Is it possible to skip files whose first line already makes clear that they are not DesktopClient session files, as in my example? I think it would save a lot of query resources if files that I know do not contain the right session type could be skipped, since they can be quite big.
At the moment my query reads the data like this:
@RawExtract = EXTRACT [RawString] string
FROM @"wasb://plancare-events-blobs@centrallogging/2017/07/20/{*}.json"
USING Extractors.Text(delimiter:'\b', quoting : false);
@ParsedJSONLines = SELECT Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple([RawString]) AS JSONLine
FROM @RawExtract;
...
Or should I create my own version of Extractors.Text, and if so, how should I do that?
To answer some questions that popped up in the comments to the question first:
At this point we do not provide access to the Blob Store meta data. That means that you need to express any meta data either as part of the data in the file or as part of the file name (or path).
Depending on the cost of extraction and the sizes of the files, you can extract all the rows and then filter out the rows whose beginning does not fit your criteria. That extracts all rows from all files, but does not need a custom extractor.
Alternatively, write a custom extractor that checks for only the files that are appropriate (that may be useful if the first solution does not give you the performance and you can determine the conditions efficiently inside the extractors). Several example extractors can be found at http://usql.io in the example directory (including an example JSON extractor).
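The first approach (extract everything, then filter rows on their content) can be sketched outside U-SQL in Python; the helper name is illustrative, and the sample rows come from the question:

```python
import json

def desktop_sessions(lines):
    # Parse each JSON line and keep only DesktopClient sessions,
    # mirroring the row filter the answer suggests doing in U-SQL.
    for line in lines:
        obj = json.loads(line)
        if obj.get("Session", {}).get("Origin") == "DesktopClient":
            yield obj

sample = [
    '{ "Id": "1", "Timestamp":"2017-07-20T10:43:21.8841599+02:00", "Session": { "Origin": "WebClient" }}',
    '{ "Id": "2", "Timestamp":"2017-07-20T10:43:21.8841599+02:00", "Session": { "Origin": "DesktopClient" }}',
]
kept = list(desktop_sessions(sample))
print([s["Id"] for s in kept])
```

A custom extractor would instead apply this check to the first line of each file and stop reading the rest of the file when it does not match.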