I have some Node.js code that reads a PDF file using the Google Drive API's drive.files.get() (it's a file sourced on one of our Google team/shared drives). I convert the returned stream to a single base64 string with something like this:
// read using drive.files.get() into 'pdfStream'
// ...
let pdf64 = '';
pdfStream.on('readable', function () {
    var chunk = pdfStream.read();
    if (chunk != null) {
        var chunk64 = chunk.toString('base64');
        pdf64 += chunk64;
    }
});
pdfStream.on('end', function () {
    // Do SendGrid email with pdf64 as attachment
});
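For reference, pdfStream comes from a call roughly like this (simplified sketch using the googleapis Node client; the fileId is a placeholder):

// inside an async function, with an authenticated 'drive' client from googleapis
const res = await drive.files.get(
    { fileId: 'YOUR_FILE_ID', alt: 'media' },
    { responseType: 'stream' }
);
const pdfStream = res.data; // readable stream of the file's bytes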
Note that the final goal is to send an email with the PDF. I haven't included much of the code because this all works great - as long as the email recipient is on our company's domain. For external email recipients, the PDF attachment is unviewable and cannot be downloaded - at least this is what the situation appears to be.
I didn't think that access restrictions and permissions would stay with data that is read directly using drive.files.get(). Is that a thing? I would suspect SendGrid, except that we send attachments in other areas of our code with no issue.
Thoughts anyone? Much appreciated!
~Bob
I was able to fix it. The permissions thing was a red herring - the PDF attachments were corrupt, and our company's email system was just more lenient about the corruption than others (like Gmail). The root cause: base64 encodes bytes in groups of 3, so converting each stream chunk separately adds '=' padding wherever a chunk's length isn't a multiple of 3, which garbles the decoded file. A quick illustration in the Node REPL:
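Buffer.from('abcd').toString('base64')  // 'YWJjZA==' - encoding the whole buffer at once
Buffer.from('ab').toString('base64') + Buffer.from('cd').toString('base64')  // 'YWI=Y2Q=' - padding in the middle, no longer valid data

So I refactored the above code to first collect the raw chunks into an array, and only convert to base64 once the stream has ended: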
// read using drive.files.get() into 'pdfStream'
// ...
let pdfChunks = [];
// Read through stream chunks and collect them
pdfStream.on('readable', function () {
    var chunk = pdfStream.read();
    if (chunk != null) {
        pdfChunks.push(chunk);
    }
});
// At the end of the stream, concatenate, convert to base64 and email the PDF
pdfStream.once('end', function () {
    let pdfBin = Buffer.concat(pdfChunks);
    let pdf64 = pdfBin.toString('base64');
    // ...
    // Do SendGrid email with pdf64 as attachment
});
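For completeness, the SendGrid step itself looks roughly like this with the @sendgrid/mail package (a sketch; the addresses, subject and file name are placeholders):

const sgMail = require('@sendgrid/mail');
sgMail.setApiKey(process.env.SENDGRID_API_KEY);

sgMail.send({
    to: 'recipient@example.com',
    from: 'sender@ourcompany.com',
    subject: 'Requested PDF',
    text: 'The PDF is attached.',
    attachments: [{
        content: pdf64,            // the base64 string built above
        filename: 'document.pdf',
        type: 'application/pdf',
        disposition: 'attachment'
    }]
}).then(function () {
    console.log('Email sent');
}).catch(function (err) {
    console.error(err);
});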
Cheers!
~Bob
I am using the Google Drive API to download an Excel file in my Flutter app. I want to store the downloaded file content in a File and then do some update operations using the excel Dart package. Below is the code that reads an xlsx file from a path location:
var file = "Path_to_pre_existing_Excel_File/excel_file.xlsx"; // here I want to store the response from the Drive API
var bytes = File(file).readAsBytesSync();
var excel = Excel.decodeBytes(bytes);

// Do some logic here
for (var table in excel.tables.keys) {
  print(table); // sheet name
  print(excel.tables[table].maxCols);
  print(excel.tables[table].maxRows);
  for (var row in excel.tables[table].rows) {
    print("$row");
  }
}

// then saving the excel file
// updating the excel sheet to Drive
updateToDrive(excel, fileId);
I have created all the required auth functions and Drive scopes, and my download function looks like this:
Future<void> downloadFile() async {
  String fileId = '1TOa4VKfZBHZe######WLA4M95nOWp';
  final response = await driveApi.files.get(
    fileId,
    downloadOptions: drive.DownloadOptions.fullMedia,
  );
  print(response);
}
This function executes correctly and gives a Media-type response, but I am not able to read this response so that I can store it in a file.
Any help would be truly appreciated, Thanks
I changed my download function to the one below. drive.files.get() returns a Future<Object>, so I type-cast the result to get a drive.Media?:
String fileId = "19jF3lOVW563LU6m########jXVLNQ7poXY1Z";
drive.Media? response = (await driveApi.files.get(
  fileId,
  downloadOptions: drive.DownloadOptions.fullMedia,
)) as drive.Media?;
Now response is a Media object, and we can listen to its stream to store the response in a file.
To do that, we first need to get the app directory using path_provider:
final String path = (await getApplicationSupportDirectory()).path;
final String fileName = '$path/Output.xlsx';
File file = File(fileName);
Now we want to write the response stream (a Stream<List<int>>) into our file object, which I found from this link:
List<int> dataStore = [];
// listen() returns a StreamSubscription, not a Future, so there is nothing to await here;
// the file is written in the onDone callback once the whole stream has arrived.
response!.stream.listen((data) {
  print("DataReceived: ${data.length}");
  dataStore.addAll(data);
}, onDone: () {
  print("Task Done");
  file.writeAsBytes(dataStore);
  OpenFile.open(file.path);
  print("File saved at ${file.path}");
}, onError: (error) {
  print("Some Error: $error");
});
Now we can make whatever changes we want through the excel package.
So, I currently have a Node.js project that uses Chilkat to read emails. It does most of what I'm looking for, returning the subject, headers, and sender address of each email. The only thing I can't get it to return is the body text; when I try, it returns either an empty or null result.
I know why the problem happens. It's because I am only fetching email header info here before trying to print the result:
var bundle = imap.FetchHeaders(messageSet);
if (imap.LastMethodSuccess == false) {
    console.log(imap.LastErrorText);
    return;
}

var i = 0;
while (i < bundle.MessageCount) {
    // email: Email
    var email = bundle.GetEmail(i);
    console.log(email.GetHeaderField("Date"));
    console.log(email.Subject);
    console.log(email.From);
    console.log(email.Body);
    console.log("--");
    i = i + 1;
}
So, I know the fix should happen here, but for the life of me I can't find what should go in place of the imap.FetchHeaders() part.
Use FetchBundle() in place of FetchHeaders().
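In the code above the change looks roughly like this (the rest of the loop stays the same):

// Fetch the complete messages (headers and body) instead of only the headers
var bundle = imap.FetchBundle(messageSet);
if (imap.LastMethodSuccess == false) {
    console.log(imap.LastErrorText);
    return;
}
// ...the existing while loop then works unchanged, and email.Body is populated.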
My Azure Web app calls html2pdfrocket with this code:
MemoryStream stream = new MemoryStream(result.Content.ReadAsByteArrayAsync().Result);
System.IO.File.WriteAllText(path, stream.ToString());
But I get back an invalid PDF of just a few bytes. I know the URL I pass to html2pdfrocket is valid because I can paste it into their Website to test it. Do I need to async/await or something else to get all the data before attempting to save it to a folder?
No need to use async/await; .Result blocks until the task completes, just like await would.
The real problem in your code is that stream.ToString() only converts the stream object itself to a string (its type name), not the content.
I suggest you use a byte[] array instead of a stream (I did test with a stream, but the saved .pdf file was empty even though the content length was correct).
Try using a byte[] array like below; it works on my side:
using (var client = new HttpClient())
{
    var content = new FormUrlEncodedContent(new[]
    {
        new KeyValuePair<string, string>("apikey", "xxxxx"),
        new KeyValuePair<string, string>("value", "the url")
    });

    var result = client.PostAsync("http://api.html2pdfrocket.com/pdf", content).Result;
    if (result.IsSuccessStatusCode)
    {
        // change the path as per your need
        System.IO.File.WriteAllBytes(@"d:\temp\0618.pdf", result.Content.ReadAsByteArrayAsync().Result);
    }
}
I am using the Forge data management API to access my A360 files and aim to translate them into the SVF format so that I can view them in my viewer. So far I have been able to reach the desired item using the ForgeDataManagement.ItemsApi, but I don't know what to do with the item to upload it to the bucket in my application.
From the documentation it seems like uploadObject is the way to go (https://github.com/Autodesk-Forge/forge.oss-js/blob/master/docs/ObjectsApi.md#uploadObject), but I don't know exactly how to make this function work.
var dmClient = ForgeDataManagement.ApiClient.instance;
var dmOAuth = dmClient.authentications['oauth2_access_code'];
dmOAuth.accessToken = tokenSession.getTokenInternal();

var itemsApi = new ForgeDataManagement.ItemsApi();
fileLocation = decodeURIComponent(fileLocation);
var params = fileLocation.split('/');
var projectId = params[params.length - 3];
var resourceId = params[params.length - 1];

itemsApi.getItemVersions(projectId, resourceId)
    .then(function (itemVersions) {
        if (itemVersions == null || itemVersions.data.length == 0) return;

        // Use the latest version of the item (file).
        var item = itemVersions.data[0];
        var contentLength = item.attributes.storageSize;
        var body = new ForgeOSS.InputStream();
        // var body = item; // Using the item directly does not seem to work.
        // var stream = fs.createReadStream(...) // Should I create a stream object like suggested in the documentation?
        objectsAPI.uploadObject(ossBucketKey, ossObjectName, contentLength, body, {}, function (err, data, response) {
            if (err) {
                console.error(err);
            } else {
                console.log('API called successfully. Returned data: ' + data);
                // To be continued...
            }
        });
    });
I hope someone can help me out!
My current data:
ossObjectName = "https://developer.api.autodesk.com/data/v1/projects/"myProject"/items/urn:"myFile".dwfx";
ossBucketKey = "some random string based on my username and id";
Regards,
torjuss
When using the Data Management API, you can either work with 2-legged OAuth (client_credentials) and access OSS buckets and objects, or with 3-legged OAuth (authorization_code) and access a user's Hubs, Projects, Folders, Items, and Revisions.
When using 3-legged OAuth you access someone's content on A360 or BIM360, and those files are automatically translated by the system, so you do not need to translate them again, nor transfer them to a 2-legged application bucket. The only thing you need to do is get the manifest of the Item (or of one of its versions) and use the URN to see it in the viewer.
Check out an example here: https://developer.autodesk.com/en/docs/data/v2/reference/http/projects-project_id-versions-version_id-GET/
You'll see something like:
Examples: Successful Retrieval of a Specific Version (200)
curl -X GET -H "Authorization: Bearer kEnG562yz5bhE9igXf2YTcZ2bu0z" "https://developer.api.autodesk.com/data/v1/projects/a.45637/items/urn%3Aadsk.wipprod%3Adm.lineage%3AhC6k4hndRWaeIVhIjvHu8w"
{
  "data": {
    "relationships": {
      "derivatives": {
        "meta": {
          "link": {
            "href": "/modelderivative/v2/designdata/dXJuOmFkc2sud2lwcWE6ZnMuZmlsZTp2Zi50X3hodWwwYVFkbWhhN2FBaVBuXzlnP3ZlcnNpb249MQ/manifest"
          }
        },
        ...
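Pulling the viewable URN out of that response could look roughly like this (a sketch in Node; 'version' stands for the parsed JSON above, and the split index assumes the href format shown):

var href = version.data.relationships.derivatives.meta.link.href;
// href: '/modelderivative/v2/designdata/<base64 urn>/manifest'
var urn = href.split('/')[4];
// 'urn:' + urn is the document id you hand to the Forge Viewer, and the href itself is the manifest endpoint.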
Now, to answer the other question about uploading, there is an example available here: https://github.com/Autodesk-Forge/forge.commandline-nodejs/blob/master/forge-cb.js#L295. I copied the relevant code here for everyone to see how to use it:
fs.stat(file, function (err, stats) {
    var size = stats.size;
    var readStream = fs.createReadStream(file);
    ossObjects.uploadObject(bucketKey, fileKey, size, readStream, {}, function (error, data, response) {
        // ...
    });
});
Just remember that ossObjects is 2-legged, whereas Items and Versions are 3-legged.
We figured out how to get things working after some support from Adam Nagy. To put it simply, we had to do everything with 3-legged OAuth, since all operations involve a document from an A360 account. This includes accessing and showing the file structure, translating a document to SVF, starting the viewer, and loading the document into the viewer.
Also, we were targeting the wrong id when trying to translate the document. This post shows how easily it can be done now, thanks to Adam for the info!
I am using the Bot Framework. I want to create an Excel file in the bot to display a table that is the output of a BI-type question like "what is the revenue of product x in the previous year?"
Is it possible to do so?
You can generate a CSV file with trivial code, or use a .NET library like SpreadsheetLight to generate a real Excel file.
Then you can add the newly generated Excel file as an attachment to your bot's message.
Services like Facebook are OK with your bot sending PDF or Excel files. In an IDialog you would use it like this:
private async Task SendAttachmentToUser(IDialogContext context, IAwaitable<IMessageActivity> argument)
{
    string incoming = await argument;

    // this is your method that generates the excel file and returns a link to it
    var excelLink = GenerateExcelFileBasedOnUserQuery(incoming.Text);

    IMessageActivity response = context.MakeMessage();
    response.Attachments = new List<Attachment>()
    {
        new Attachment()
        {
            ContentType = "application/vnd.ms-excel",
            ContentUrl = excelLink, // e.g. "http://yoursuperbot.azurewebsites.net/Files/2843578193485719fa011.xls"
            Name = "Your_Results.xls"
        }
    };
    response.Text = "Here are your results.";

    await context.PostAsync(response);
    context.Done(this);
}