How to get the contents of files in a SharePoint document library using PnP.js and pass them to a client application so users can view them

I need to get a list of files from a SharePoint document library and return the results to a client application with file metadata and the contents.
I can get the list of files using
const files = await sp.web.getFolderByServerRelativeUrl("/sites/mysite/mylib/docs").files.get();
How can I retrieve the contents of the files and send them to clients as well, so they can view the files along with the metadata?
It looks like I can get the content of a file using
const blob = await sp.web.getFileByServerRelativeUrl("/sites/dev/documents/file.avi").getBlob();
Since I already retrieve an array of file objects, is the file content already included in what the file object returns? If so, how do I access it?
I know that I can return the LinkingUrl to the client, but some of our users do not have access to the SharePoint site, so the link URL doesn't work for them. I have to return the contents to them directly.
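For context: the listing returned by files.get() contains metadata only; the content has to be fetched per file with a separate call such as getBlob() or getBuffer(). A minimal sketch of combining the two, assuming the PnP.js v2-style API shown above — the payload shape is an assumption, not part of PnP.js (base64-encoding keeps the content JSON-serializable for the client):

```javascript
// Pure helper: pair a file's listing metadata with its base64-encoded
// content into one object the client can render.
function toClientPayload(fileInfo, base64Content) {
  return {
    name: fileInfo.Name,
    serverRelativeUrl: fileInfo.ServerRelativeUrl,
    length: fileInfo.Length,
    content: base64Content, // the client decodes this to display the file
  };
}

// Server-side sketch (needs a configured PnP.js `sp` object to run):
async function getFilesWithContent(sp, folderUrl) {
  // The listing returns metadata only — content is not included.
  const files = await sp.web.getFolderByServerRelativeUrl(folderUrl).files.get();
  const payloads = [];
  for (const f of files) {
    // One extra request per file to fetch the actual bytes.
    const buf = await sp.web
      .getFileByServerRelativeUrl(f.ServerRelativeUrl)
      .getBuffer();
    payloads.push(toClientPayload(f, Buffer.from(buf).toString("base64")));
  }
  return payloads;
}
```

Note that fetching every file's bytes up front can be expensive for large libraries; fetching content lazily, one file at a time as the user opens it, is usually cheaper.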

Related

Give direct access of local files to Document AI

I know there is a way to call Document AI from a local Python environment. In that process one needs to upload the local file to a GCS bucket so that Document AI can access it from there. Is there any way to give Document AI direct access to local files (i.e., without uploading them to a GCS bucket) using Python? [Note that it's a mandatory requirement for me to run the Python code on my local system, not in GCP.]
Document AI cannot "open" files from your local filesystem by itself.
If you don't want to (or cannot) upload the documents to a bucket, you can send them inline as part of the REST API request. But in that case you cannot use batch processing: you must process the files one by one and wait for each response.
The relevant REST API documentation is here: https://cloud.google.com/document-ai/docs/reference/rest/v1/projects.locations.processors/process
The Python quickstart documentation has this sample code, which reads a file and sends it inline as part of the request:
from google.cloud import documentai_v1 as documentai

# project_id, location, processor_id and file_path are set elsewhere.
client = documentai.DocumentProcessorServiceClient()

# The full resource name of the processor, e.g.:
# projects/project-id/locations/location/processors/processor-id
# You must create new processors in the Cloud Console first
name = f"projects/{project_id}/locations/{location}/processors/{processor_id}"

# Read the file into memory
with open(file_path, "rb") as image:
    image_content = image.read()

# Send the file contents inline instead of referencing a GCS URI
document = {"content": image_content, "mime_type": "application/pdf"}

# Configure the process request
request = {"name": name, "raw_document": document}

result = client.process_document(request=request)

How to get multiple files using Get Blob Content and add them as attachments to an email in an Azure Logic App?

Hi, I am working in an Azure Logic App. I am trying to get multiple files from Azure Data Lake Gen2 and attach them to an email. As a first step I added an HTTP request and supplied the required information along with the file path. It works fine for one file, but I want to pass a folder path instead and attach all the files inside that folder to the email.
[Screenshots: Logic App flow diagram and the attachment configuration]
In the diagram above, the Get blob content step works fine for one file, but I am finding it difficult to attach multiple files to the email. Can someone help me figure out a solution? Any help would be appreciated. Thank you.
You can use the List blobs action to list all blobs in the folder you want.
Then define an array variable to hold the attachments.
Use a For each to loop over the blobs from the List blobs action. Within the For each, use Get blob content to get each blob's content, and then use Append to array variable to build the attachments array.
The expressions for Path, DisplayName and File Content are as follows:
Path : items('For_each')?['Path']
DisplayName : items('For_each')?['DisplayName']
File Content : body('Get_blob_content')
Finally, please fill in the attachment in the email:
========== Update ==========
If sending the email fails with a 400 response, use this expression in Append to array variable instead:
base64(body('Get_blob_content'))
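For reference, the item appended to the array variable typically has this shape — the Name/ContentBytes property names are those expected by the Office 365 Outlook "Send an email" connector's attachments array (verify against your connector's schema):

```json
{
  "Name": "@{items('For_each')?['DisplayName']}",
  "ContentBytes": "@{base64(body('Get_blob_content'))}"
}
```

The base64() call matters because the attachment content is transported as base64 text; passing the raw body is what triggers the 400 response mentioned above.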

Is there any rest end point where I can find if the id I have is a file or a folder for onedrive/sharepoint?

I have an id (fdxxxf8-6xxb-4xx0-8xx9-9xxx8) which I got from one of the API responses. Is there any way to figure out whether the id refers to a file or a folder?
I need to call a different API depending on whether it is a folder or a file.
You can use the OneDrive API like below: if the response data contains a file property, the id is of a file; if it contains a folder property, the id is of a folder.
/me/drive/items/{item-id}?$select=file,folder
Reference: Get a DriveItem resource
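A sketch of that check against the Microsoft Graph endpoint above — the fetchDriveItem wrapper and token handling are placeholders, not part of the Graph SDK:

```javascript
// Classify a Graph driveItem by which facet it carries: a driveItem has
// a `file` facet or a `folder` facet, never both.
function classifyDriveItem(item) {
  if (item.file) return "file";
  if (item.folder) return "folder";
  return "unknown"; // e.g. an error response, or another item type
}

// Hypothetical usage (requires a valid access token; not run here):
async function fetchDriveItem(itemId, accessToken) {
  const res = await fetch(
    `https://graph.microsoft.com/v1.0/me/drive/items/${itemId}?$select=file,folder`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  return classifyDriveItem(await res.json());
}
```

The $select=file,folder keeps the response small, since only the two facets are needed for the decision.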

How to iterate through all files in sftp folder in Microsoft Azure Logic App

Steps I already did using the SFTP connector (how can I access files while looping through List files in folder in an Azure Logic App):
I added a foreach loop
I added List files in folder
I passed Body as the parameter to the foreach loop
Then I added an action to create a new file with a new name for each file.
But I am not able to get the file name and content while iterating the SFTP folder using the foreach loop.
[Example image showing the Logic App design: iterating the SFTP folder and posting file content to an HTTP endpoint]
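The steps above can be wired up much like the blob example earlier in this page. Assuming the SFTP connector's List files in folder returns an array of file metadata objects, the expressions inside the For each would look something like this (property names should be verified against your connector's output):

```
File name:    items('For_each')?['Name']
File path:    items('For_each')?['Path']
File content: body('Get_file_content')
```

That is, add a Get file content (or Get file content using path) action inside the loop, pass it the current item's path, and use its body as the content of the newly created file.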

How to handle data import and export from a Windows Universal app

I am developing a Windows Universal app that collects results of races. It saves each race result in a sql-lite database in an application folder so the user can view previous results. I have further requirements, however, for saving and opening race results.
I need to be able to export the results of a race as a CSV file so that they can be opened by a third-party application that might be running on a separate machine on a different operating system.
I need to be able to export the results as an HTML file that can be uploaded/included in the user's own web site.
I need the user to be able to print the results (which I was thinking could just be done by printing the HTML file from a browser)
I would like the user to be able to choose to import the results of a race created by my own legacy application in my own format.
It seems, however, that a Windows Universal app is restricted to saving files to just a few specific folders, under specific circumstances, and only if we have requested the corresponding app capability. As a result I am getting access-denied errors both saving and reading files using the FileOpenPicker and FileSavePicker.
I think I probably need to view the export and import of results in a different way, but after a lot of searching I have not been able to come up with the right and recommended solution to this. So the question is how should I be handling the import and export of results? Should I be using the user's documents folder, or their OneDrive? Do I need to create a web application for my app so that the user can store results in the cloud and download them from there?
CSV and HTML are both text files with some encoding, so your question comes down to how to read and write text files with JS.
Here is an example of how to create an HTML page with FileSavePicker:
var savePicker = new Windows.Storage.Pickers.FileSavePicker();
savePicker.suggestedStartLocation = Windows.Storage.Pickers.PickerLocationId.documentsLibrary;
savePicker.fileTypeChoices.insert("Web page", [".html"]);
savePicker.suggestedFileName = "New page";
savePicker.pickSaveFileAsync().then(function (file) {
    if (file) {
        var _WriteThis = "<!DOCTYPE html>" + "\r\n";
        _WriteThis = _WriteThis + "<html><head><title>Web page title</title>" + "\r\n";
        // .....
        Windows.Storage.FileIO.writeTextAsync(file, _WriteThis, Windows.Storage.Streams.UnicodeEncoding.utf8);
    }
});
This example doesn't require any special capabilities in the manifest, and you can save the file anywhere on your PC's hard drive or a USB stick (except system folders).
You can save in CSV format the same way.
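Following the same pattern, the CSV text for the race results can be built first and then passed to writeTextAsync. A sketch — the result field names here are hypothetical, not from the question:

```javascript
// Build CSV text from an array of race-result objects.
// Note: fields containing commas, quotes, or newlines would need to be
// quoted per RFC 4180; this sketch assumes plain values.
function toCsv(results) {
  const header = "Position,Name,Time";
  const rows = results.map((r) => [r.position, r.name, r.time].join(","));
  return [header, ...rows].join("\r\n"); // CRLF line endings, as in the HTML example
}
```

The returned string is then written out exactly like the HTML above, just with a ".csv" entry in fileTypeChoices.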
